MowerLab

Sample Evaluation: How MowerLab Tests Robotic Mowers

This page shows the exact format a published MowerLab evaluation will follow once hands-on testing begins. No real testing has been performed. All figures and results shown here are illustrative examples only.

Why This Page Exists

MowerLab is establishing a structured, independent evaluation programme for robotic mowers. Before the first physical test is conducted, this page demonstrates precisely what a published evaluation will contain — the categories covered, the measurement approach, the output format, and how results connect to classification.

Important: No mower has been tested by MowerLab at time of publication. Every score, percentage, or quoted result on this page is a constructed example used solely to illustrate the evaluation format.

This page is intended for manufacturers, buyers, and press who want to understand the rigour of MowerLab's methodology before hands-on testing begins. Transparency in the format is itself part of the standard.

Hypothetical Test Setup

The following describes a representative test environment. Actual test sites will conform to this template and be disclosed in each published evaluation.

| Parameter | Specification | Notes |
| --- | --- | --- |
| Lawn size | 2,000 m² | Mixed open and enclosed zones |
| Terrain | Natural grass | Seasonal variation, mild surface irregularity |
| Slope conditions | 0° – 30° | Measured at fixed gradient increments |
| Obstacles | 8 fixed, 2 dynamic | Posts, garden furniture, a child's bicycle |

Each mower undergoes a minimum of three complete mowing cycles across the full test area. Results are averaged across runs. Anomalous runs (caused by external interference or equipment fault) are excluded and documented.
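
The averaging rule above can be sketched as a small helper. Function and variable names here are illustrative, not part of the MowerLab protocol.

```python
from statistics import mean

def average_runs(runs, excluded_ids):
    """Average one metric across mowing cycles, excluding documented
    anomalies (external interference or equipment fault).

    `runs` maps run id -> measured value, e.g. coverage percentage.
    Excluded runs are returned so they can be documented alongside
    the averaged result, as the protocol requires.
    """
    valid = {rid: v for rid, v in runs.items() if rid not in excluded_ids}
    if len(valid) < 3:
        raise ValueError("protocol requires at least three valid runs")
    return mean(valid.values()), sorted(excluded_ids)

# Three valid coverage runs; run 4 voided by an equipment fault.
avg, excluded = average_runs({1: 91.0, 2: 93.0, 3: 92.0, 4: 40.0},
                             excluded_ids=[4])
```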

What We Test

All “example results” in this section are fabricated for illustration. They do not represent any specific product.

Autonomous Operation

What is tested
The mower's ability to complete a full mowing cycle without operator intervention — including docking, undocking, rain avoidance, and returning to missed zones.
How it is measured
Coverage percentage is calculated via GPS track log. Interventions are logged manually. Run time and battery consumption are recorded per cycle.
Example result (demonstration only)
Completed 92% coverage across three runs with zero manual interventions. Average run time: 3 h 20 min. One docking failure logged on run 2, self-corrected.
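
One way to turn a GPS track log into a coverage percentage is to rasterise the lawn into grid cells and count the cells the track touches. This is a sketch under assumed names and a assumed 0.5 m cell size; the published methodology will define the actual computation.

```python
def coverage_percent(track, lawn_cells, cell_size=0.5):
    """Estimate coverage from a GPS track log.

    `track` is a list of (x, y) positions in metres; `lawn_cells` is
    the set of (col, row) grid cells making up the mowable area at
    the given cell size. Coverage is the share of lawn cells the
    track passed through, as a percentage.
    """
    visited = {(int(x // cell_size), int(y // cell_size)) for x, y in track}
    return 100.0 * len(visited & lawn_cells) / len(lawn_cells)

# A 1 m x 1 m toy lawn as four 0.5 m cells; the track misses one cell.
lawn = {(0, 0), (1, 0), (0, 1), (1, 1)}
pct = coverage_percent([(0.1, 0.1), (0.6, 0.1), (0.1, 0.6)], lawn)
```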

Slope Performance

What is tested
Traction, cut consistency, and safe recovery behaviour across increasing gradient levels from 0° to the manufacturer's rated maximum.
How it is measured
Gradient measured with calibrated digital inclinometer. Mower is run up and across the slope. Cut quality and wheel slip events are logged at each gradient step.
Example result (demonstration only)
Maintained traction and consistent cut up to 20°. At 24° the mower failed to complete uphill traversal and returned to dock safely. Did not tip or stall.

Navigation & Mapping

What is tested
Accuracy of perimeter detection, zone boundary adherence, and path efficiency. Tested for both initial map creation and subsequent runs.
How it is measured
Boundary deviation measured at 20 fixed reference points using survey tape. Path overlap ratio calculated from GPS logs. Map creation time recorded.
Example result (demonstration only)
Average boundary deviation: 12 cm. Path overlap ratio: 18% (acceptable; target is <25%). Initial map creation required 47 minutes for the full test area.
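
The path overlap ratio can be computed from the same rasterised GPS track. The definition below (share of visited cells covered more than once) is one plausible reading; the exact formula will be fixed in the versioned methodology.

```python
from collections import Counter

def path_overlap_ratio(visited_cells):
    """Share of visited grid cells covered more than once, in percent.

    `visited_cells` is the flat list of cells a GPS track touched, in
    order; repeats indicate ground the mower re-covered.
    """
    counts = Counter(visited_cells)
    repeated = sum(1 for n in counts.values() if n > 1)
    return 100.0 * repeated / len(counts)

# Cell "a1" is touched on both the outbound and the return pass.
ratio = path_overlap_ratio(["a1", "b1", "c1", "a1"])
```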

Obstacle Handling

What is tested
Detection range, avoidance accuracy, and behaviour on contact with static and dynamic obstacles. Includes post-avoidance path recovery.
How it is measured
Each obstacle is placed at a known distance. Distance at first detection is recorded. Contact events (including minor bumps) are counted and categorised.
Example result (demonstration only)
Detected all static obstacles at or before 35 cm. One minor contact event with the lowest post (10 cm diameter). Recovered path correctly within 4 seconds.

Multi-Zone Capability

What is tested
Whether the mower can manage distinct, non-contiguous zones as a single scheduled programme — including cross-zone transit routing.
How it is measured
Two physically separated zones (connected by a 1.2 m passage) are mapped. Scheduled runs are observed for zone completion order and transit accuracy.
Example result (demonstration only)
Successfully managed two zones across five scheduled cycles. Transit accuracy through the 1.2 m passage: 100%. No zone confusion events.

Output Format

Example Format — Not Real Data. The table below shows how results will be presented in a published evaluation. All figures are fabricated for illustrative purposes.
| Category | Metric | Example Result | Pass / Flag |
| --- | --- | --- | --- |
| Autonomous Operation | Coverage (avg.) | 92% without intervention | Pass |
| Autonomous Operation | Docking failures | 1 (self-corrected) | Pass |
| Slope Performance | Max maintained traction | 20° | Pass |
| Slope Performance | Slope recovery at 24° | Failed traversal — safe dock return | Flagged |
| Navigation & Mapping | Boundary deviation (avg.) | 12 cm | Pass |
| Navigation & Mapping | Path overlap ratio | 18% | Pass |
| Obstacle Handling | Detection distance | ≥35 cm for all obstacles | Pass |
| Obstacle Handling | Contact events | 1 minor contact | Flagged |
| Multi-Zone | Transit accuracy | 100% (5/5 cycles) | Pass |

Published evaluations will include all raw run logs, environmental conditions at time of testing, and any deviation from the standard test protocol. A “Flagged” result does not automatically disqualify a classification tier — context and severity are both considered.
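
Raw run logs would most naturally ship as CSV (the same format manufacturers receive). The column names and sample values below are assumptions for illustration, not the final published schema.

```python
import csv
import io

# Hypothetical run-log schema -- column names are illustrative only.
fields = ["run_id", "timestamp", "category", "metric", "value", "flag"]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields)
writer.writeheader()
writer.writerow({
    "run_id": 2,
    "timestamp": "2025-05-01T09:00:00Z",
    "category": "Autonomous Operation",
    "metric": "docking_failure",
    "value": 1,
    "flag": "self-corrected",
})
```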

From Results to Classification

Evaluation results feed directly into MowerLab's three-tier classification system. No single metric determines a tier — the classification reflects the overall capability profile across all categories.

Commercial Ready

  • Coverage ≥ 95% across all runs
  • Slope traction maintained at rated maximum
  • Zero unresolved contact events
  • Multi-zone transit accuracy ≥ 98%
  • Boundary deviation ≤ 15 cm

Advanced Residential

  • Coverage ≥ 85% with ≤ 1 intervention
  • Slope traction maintained up to 20°
  • Minor contact events self-corrected
  • Multi-zone capability with occasional prompts
  • Boundary deviation ≤ 25 cm

Entry-Level Residential

  • Coverage ≥ 75% with any number of interventions
  • Slope performance limited to ≤ 15°
  • Requires operator intervention to clear events
  • Single-zone only or unreliable multi-zone
  • Boundary deviation above 25 cm acceptable

The thresholds above are indicative. Final published criteria will be refined after the first evaluation cohort and published as a versioned methodology document.
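
Read as hard rules, the indicative thresholds could be sketched as below. This deliberately oversimplifies: the real classification weighs the full capability profile plus the context and severity of flagged results, and the metric key names here are assumptions.

```python
def indicative_tier(m):
    """Map a metric profile to an indicative tier using the draft
    thresholds. `m` keys (illustrative): coverage (%), max_slope
    (degrees), slope_at_rated_max (bool), unresolved_contacts,
    transit_accuracy (%), boundary_dev_cm.
    """
    if (m["coverage"] >= 95 and m["slope_at_rated_max"]
            and m["unresolved_contacts"] == 0
            and m["transit_accuracy"] >= 98
            and m["boundary_dev_cm"] <= 15):
        return "Commercial Ready"
    if (m["coverage"] >= 85 and m["max_slope"] >= 20
            and m["boundary_dev_cm"] <= 25):
        return "Advanced Residential"
    return "Entry-Level Residential"

# The demonstration figures from this page would land mid-tier.
tier = indicative_tier({"coverage": 92, "max_slope": 20,
                        "slope_at_rated_max": False,
                        "unresolved_contacts": 0,
                        "transit_accuracy": 100,
                        "boundary_dev_cm": 12})
```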

What Brands Can Expect

When a manufacturer submits a mower for evaluation, they enter a defined, transparent process. MowerLab does not accept payment for favourable results — participation fees, if any, cover only logistics and are not tied to outcomes.

1. Submission & intake

The manufacturer ships a production-equivalent unit. MowerLab logs receipt, firmware version, and any manufacturer documentation provided. No proprietary information is required.

2. Structured evaluation

The mower is evaluated against the published category framework across a minimum of three full test cycles. No special configuration is applied beyond factory defaults unless the manufacturer requests specific settings and provides written rationale.

3. Draft review

Manufacturers receive a draft of the evaluation before publication. They may submit factual corrections only — scoring outcomes and classification decisions are not subject to manufacturer approval.

4. Publication

The full evaluation — including raw run logs, environmental conditions, and any flagged results — is published on MowerLab. The classification badge is updated on the product page. All evaluation documents are versioned and timestamped.

5. Re-evaluation

If a manufacturer releases a firmware update that materially affects performance, they may request a re-evaluation under the same protocol. The previous evaluation remains publicly accessible with a note indicating a newer evaluation exists.

What Manufacturers Receive

  • A published evaluation page linked from the product listing
  • A classification badge reflecting hands-on performance
  • The full run log dataset in CSV format
  • A one-page evaluation summary suitable for use in marketing (subject to accurate representation)
  • Permanent archival of the evaluation on MowerLab regardless of product lifecycle

Independence is non-negotiable. MowerLab's classification outcomes are editorially independent. Participation in the evaluation programme does not guarantee any particular classification, and brands have no ability to suppress or delay publication of results.

Interested in having your mower evaluated?

Hands-on testing is being scheduled. Contact MowerLab to express interest or ask questions about the evaluation process.