Evaluation Format · Demonstration
Sample Evaluation: How MowerLab Tests Robotic Mowers
This page shows the exact format a published MowerLab evaluation will follow once hands-on testing begins. No real testing has been performed. All figures and results shown here are illustrative examples only.
Why This Page Exists
Purpose of This Page
MowerLab is establishing a structured, independent evaluation programme for robotic mowers. Before the first physical test is conducted, this page demonstrates precisely what a published evaluation will contain — the categories covered, the measurement approach, the output format, and how results connect to classification.
This page is intended for manufacturers, buyers, and press who want to understand the rigour of MowerLab's methodology before hands-on testing begins. Transparency in the format is itself part of the standard.
Hypothetical Test Setup
Test Overview
| Parameter | Value | Notes |
|---|---|---|
| Lawn size | 2,000 m² | Mixed open and enclosed zones |
| Terrain | Natural grass | Seasonal variation, mild surface irregularity |
| Slope conditions | 0°–30° | Measured at fixed gradient increments |
| Obstacles | 8 fixed, 2 dynamic | Posts, garden furniture, a child's bicycle |
Each mower undergoes a minimum of three complete mowing cycles across the full test area. Results are averaged across runs. Anomalous runs (caused by external interference or equipment fault) are excluded and documented.
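The averaging-and-exclusion rule above can be sketched as a small helper. This is illustrative only; the run-log structure and field names (`coverage_pct`, `anomalous`) are assumptions, not MowerLab's actual schema:

```python
from statistics import mean

def average_valid_runs(runs):
    """Average a metric across runs, excluding documented anomalies.

    Each run is a dict like {"coverage_pct": 91.5, "anomalous": False}.
    Anomalous runs (external interference, equipment fault) are dropped
    from the average but returned so they can be documented.
    """
    valid = [r for r in runs if not r.get("anomalous")]
    excluded = [r for r in runs if r.get("anomalous")]
    if len(valid) < 3:
        raise ValueError("fewer than 3 valid runs: protocol minimum not met")
    return mean(r["coverage_pct"] for r in valid), excluded

avg, excluded = average_valid_runs([
    {"coverage_pct": 91.0, "anomalous": False},
    {"coverage_pct": 93.0, "anomalous": False},
    {"coverage_pct": 92.0, "anomalous": False},
    {"coverage_pct": 40.0, "anomalous": True},  # equipment fault, documented
])
print(avg, len(excluded))  # 92.0 1
```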
What We Test
Evaluation Categories
Autonomous Operation
- **What is tested:** The mower's ability to complete a full mowing cycle without operator intervention — including docking, undocking, rain avoidance, and returning to missed zones.
- **How it is measured:** Coverage percentage is calculated via GPS track log. Interventions are logged manually. Run time and battery consumption are recorded per cycle.
- **Example result (demonstration only):** Completed 92% coverage across three runs with zero manual interventions. Average run time: 3 h 20 min. One docking failure logged on run 2, self-corrected.
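One plausible way to derive a coverage percentage from a GPS track log is to rasterise the lawn into grid cells and count the cells the track passes through. This is a sketch, not MowerLab's actual method; the cell size, local coordinate frame, and lawn model are illustrative assumptions:

```python
def coverage_pct(track, lawn_cells, cell_size=0.5):
    """Estimate coverage as the fraction of lawn grid cells visited.

    track      -- list of (x, y) GPS fixes in metres (local frame)
    lawn_cells -- set of (col, row) cells making up the mowable area
    cell_size  -- grid resolution in metres
    """
    visited = {
        (int(x // cell_size), int(y // cell_size)) for x, y in track
    }
    return 100.0 * len(visited & lawn_cells) / len(lawn_cells)

# Toy 2x2-cell lawn with a track that visits 3 of the 4 cells.
lawn = {(0, 0), (0, 1), (1, 0), (1, 1)}
track = [(0.1, 0.1), (0.1, 0.6), (0.6, 0.1)]
print(coverage_pct(track, lawn))  # 75.0
```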
Slope Performance
- **What is tested:** Traction, cut consistency, and safe recovery behaviour across increasing gradient levels from 0° to the manufacturer's rated maximum.
- **How it is measured:** Gradient measured with calibrated digital inclinometer. Mower is run up and across the slope. Cut quality and wheel slip events are logged at each gradient step.
- **Example result (demonstration only):** Maintained traction and consistent cut up to 20°. At 24° the mower failed to complete uphill traversal and returned to dock safely. Did not tip or stall.
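The per-gradient logging described above reduces naturally: the reported "max maintained traction" is the highest gradient step completed with no slip events. A sketch with assumed field names (`deg`, `slip_events`, `completed`):

```python
def max_maintained_gradient(steps):
    """Return the highest gradient (degrees) passed without slip or failure.

    steps -- list of dicts like {"deg": 20, "slip_events": 0,
             "completed": True}, one per fixed gradient increment,
             assumed sorted ascending.
    """
    maintained = 0
    for s in steps:
        if s["completed"] and s["slip_events"] == 0:
            maintained = s["deg"]
        else:
            break  # once traction is lost, higher steps are not credited
    return maintained

steps = [
    {"deg": 10, "slip_events": 0, "completed": True},
    {"deg": 15, "slip_events": 0, "completed": True},
    {"deg": 20, "slip_events": 0, "completed": True},
    {"deg": 24, "slip_events": 2, "completed": False},
]
print(max_maintained_gradient(steps))  # 20
```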
Navigation & Mapping
- **What is tested:** Accuracy of perimeter detection, zone boundary adherence, and path efficiency. Tested for both initial map creation and subsequent runs.
- **How it is measured:** Boundary deviation measured at 20 fixed reference points using survey tape. Path overlap ratio calculated from GPS logs. Map creation time recorded.
- **Example result (demonstration only):** Average boundary deviation: 12 cm. Path overlap ratio: 18% (acceptable; target is <25%). Initial map creation required 47 minutes for the full test area.
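Both navigation metrics are simple arithmetic over the logged measurements. A sketch under stated assumptions: the 20-point deviation list and the overlap definition (re-covered distance over total distance) are plausible interpretations, not MowerLab's published formulas:

```python
from statistics import mean

def boundary_deviation_avg(deviations_cm):
    """Mean absolute deviation across the fixed reference points."""
    assert len(deviations_cm) == 20, "protocol uses 20 reference points"
    return mean(abs(d) for d in deviations_cm)

def path_overlap_ratio(total_path_m, unique_path_m):
    """Percent of travelled distance that re-covers already-mown ground."""
    return 100.0 * (total_path_m - unique_path_m) / total_path_m

print(boundary_deviation_avg([12] * 20))      # 12
print(path_overlap_ratio(1000.0, 820.0))      # 18.0
```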
Obstacle Handling
- **What is tested:** Detection range, avoidance accuracy, and behaviour on contact with static and dynamic obstacles. Includes post-avoidance path recovery.
- **How it is measured:** Each obstacle is placed at a known distance. Distance at first detection is recorded. Contact events (including minor bumps) are counted and categorised.
- **Example result (demonstration only):** Detected all static obstacles at or before 35 cm. One minor contact event with the lowest post (10 cm diameter). Recovered path correctly within 4 seconds.
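The per-obstacle log described above could be summarised as follows; a minimal sketch, with assumed field names (`obstacle`, `detect_cm`, `contact`) and an assumed 35 cm reporting threshold taken from the demonstration figures:

```python
def summarise_obstacles(events, threshold_cm=35):
    """Summarise obstacle logs: worst detection distance, contact count,
    and whether every obstacle was detected at or beyond the threshold.

    events -- list of dicts like
              {"obstacle": "post-10cm", "detect_cm": 35, "contact": True}
    """
    min_detect = min(e["detect_cm"] for e in events)
    contacts = sum(1 for e in events if e["contact"])
    return min_detect, contacts, min_detect >= threshold_cm

events = [
    {"obstacle": "post-10cm", "detect_cm": 35, "contact": True},
    {"obstacle": "bicycle",   "detect_cm": 48, "contact": False},
]
min_d, contacts, all_early = summarise_obstacles(events)
print(min_d, contacts, all_early)  # 35 1 True
```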
Multi-Zone Capability
- **What is tested:** Whether the mower can manage distinct, non-contiguous zones as a single scheduled programme — including cross-zone transit routing.
- **How it is measured:** Two physically separated zones (connected by a 1.2 m passage) are mapped. Scheduled runs are observed for zone completion order and transit accuracy.
- **Example result (demonstration only):** Successfully managed two zones across five scheduled cycles. Transit accuracy through the 1.2 m passage: 100%. No zone confusion events.
Output Format
Example Results (Format Only)
| Category | Metric | Example Result | Pass / Flag |
|---|---|---|---|
| Autonomous Operation | Coverage (avg.) | 92% without intervention | Pass |
| Autonomous Operation | Docking failures | 1 (self-corrected) | Pass |
| Slope Performance | Max maintained traction | 20° | Pass |
| Slope Performance | Slope recovery at 24° | Failed traversal — safe dock return | Flagged |
| Navigation & Mapping | Boundary deviation (avg.) | 12 cm | Pass |
| Navigation & Mapping | Path overlap ratio | 18% | Pass |
| Obstacle Handling | Detection distance | ≥35 cm for all obstacles | Pass |
| Obstacle Handling | Contact events | 1 minor contact | Flagged |
| Multi-Zone | Transit accuracy | 100% (5/5 cycles) | Pass |
Published evaluations will include all raw run logs, environmental conditions at time of testing, and any deviation from the standard test protocol. A “Flagged” result does not automatically disqualify a classification tier — context and severity are both considered.
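The Pass/Flag column in a published table could be produced mechanically from per-metric rules before context and severity are weighed. A sketch; the threshold values mirror the demonstration figures above and are illustrative, not MowerLab's actual pass criteria:

```python
def flag(metric, value):
    """Map a metric to 'Pass' or 'Flagged' using illustrative rules."""
    rules = {
        "coverage_pct":     lambda v: v >= 85,   # avg. coverage
        "contact_events":   lambda v: v == 0,    # any contact is flagged
        "path_overlap_pct": lambda v: v < 25,    # stated target
        "boundary_dev_cm":  lambda v: v <= 15,
    }
    return "Pass" if rules[metric](value) else "Flagged"

print(flag("coverage_pct", 92))   # Pass
print(flag("contact_events", 1))  # Flagged
```

As the note above says, a "Flagged" row feeds into human review rather than automatic disqualification, so a mechanical pass is only the first step.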
From Results to Classification
How This Connects to Classification
Evaluation results feed directly into MowerLab's three-tier classification system. No single metric determines a tier — the classification reflects the overall capability profile across all categories.
Commercial Ready
- Coverage ≥ 95% across all runs
- Slope traction maintained at rated maximum
- Zero unresolved contact events
- Multi-zone transit accuracy ≥ 98%
- Boundary deviation ≤ 15 cm
Advanced Residential
- Coverage ≥ 85% with ≤ 1 intervention
- Slope traction maintained up to 20°
- Minor contact events self-corrected
- Multi-zone capability with occasional prompts
- Boundary deviation ≤ 25 cm
Entry-Level Residential
- Coverage ≥ 75% with any number of runs
- Slope performance limited to ≤ 15°
- Requires operator intervention to clear events
- Single-zone only or unreliable multi-zone
- Boundary deviation above 25 cm acceptable
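Because no single metric determines a tier, a classifier has to check the full profile against each tier's criteria, most demanding first. A sketch using the quantitative thresholds listed above; the metric field names are assumptions, and criteria that depend on context (rated-maximum slope, prompt frequency) are deliberately omitted:

```python
def classify(m):
    """Assign a tier from an overall metrics profile.

    m -- dict with keys: coverage_pct, interventions, slope_deg,
         unresolved_contacts, transit_pct, boundary_dev_cm
    Checks the most demanding tier first and falls through.
    """
    if (m["coverage_pct"] >= 95 and m["unresolved_contacts"] == 0
            and m["transit_pct"] >= 98 and m["boundary_dev_cm"] <= 15):
        return "Commercial Ready"
    if (m["coverage_pct"] >= 85 and m["interventions"] <= 1
            and m["slope_deg"] >= 20 and m["boundary_dev_cm"] <= 25):
        return "Advanced Residential"
    return "Entry-Level Residential"

# The demonstration figures from this page: 92% coverage keeps the
# example mower out of the top tier despite perfect transit accuracy.
profile = {"coverage_pct": 92, "interventions": 0, "slope_deg": 20,
           "unresolved_contacts": 0, "transit_pct": 100,
           "boundary_dev_cm": 12}
print(classify(profile))  # Advanced Residential
```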
Manufacturer Information
What Brands Can Expect
When a manufacturer submits a mower for evaluation, they enter a defined, transparent process. MowerLab does not accept payment for favourable results — participation fees, if any, cover only logistics and are not tied to outcomes.
Submission & intake
The manufacturer ships a production-equivalent unit. MowerLab logs receipt, firmware version, and any manufacturer documentation provided. No proprietary information is required.
Structured evaluation
The mower is evaluated against the published category framework across a minimum of three full test cycles. No special configuration is applied beyond factory defaults unless the manufacturer requests specific settings and provides written rationale.
Draft review
Manufacturers receive a draft of the evaluation before publication. They may submit factual corrections only — scoring outcomes and classification decisions are not subject to manufacturer approval.
Publication
The full evaluation — including raw run logs, environmental conditions, and any flagged results — is published on MowerLab. The classification badge is updated on the product page. All evaluation documents are versioned and timestamped.
Re-evaluation
If a manufacturer releases a firmware update that materially affects performance, they may request a re-evaluation under the same protocol. The previous evaluation remains publicly accessible with a note indicating a newer evaluation exists.
What Manufacturers Receive
- A published evaluation page linked from the product listing
- A classification badge reflecting hands-on performance
- The full run log dataset in CSV format
- A one-page evaluation summary suitable for use in marketing (subject to accurate representation)
- Permanent archival of the evaluation on MowerLab regardless of product lifecycle
Ready to Participate
Interested in having your mower evaluated?
Hands-on testing is being scheduled. Contact MowerLab to express interest or ask questions about the evaluation process.