Spring Validation Demonstrations

Test 1: Row Navigation

Location: Carnegie Mellon University, B Floor

Equipment: Robot, 2 rows of artificial plants

Setup:

  • Place robot at the entrance of a row of artificial plants, facing into the row
  • The robot has a pre-generated map file

Test:

  1. The robot navigates to the end of the first row
  2. The robot turns into the next row
  3. The robot drives to the end of the second row
  4. Manually move the robot to the start location and restart the software; repeat 5 times
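
The sequence exercised in this test can be summarized as a small state machine: traverse the first row, turn at its end, traverse the second row, and stop. The sketch below is only an illustration of that sequence, not the actual navigation stack; the state names and tick-based interface are assumptions.

    from enum import Enum, auto

    class NavState(Enum):
        # States mirror the test steps: drive row 1, turn, drive row 2, stop.
        TRAVERSE_ROW_1 = auto()
        TURN_INTO_ROW_2 = auto()
        TRAVERSE_ROW_2 = auto()
        DONE = auto()

    def step(state, at_row_end, turn_complete):
        """Advance the row-navigation state machine by one tick.

        at_row_end:    True when the robot has reached the far end of the current row.
        turn_complete: True when the turn into the next row has finished.
        """
        if state is NavState.TRAVERSE_ROW_1 and at_row_end:
            return NavState.TURN_INTO_ROW_2
        if state is NavState.TURN_INTO_ROW_2 and turn_complete:
            return NavState.TRAVERSE_ROW_2
        if state is NavState.TRAVERSE_ROW_2 and at_row_end:
            return NavState.DONE
        return state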

Success Criteria:

  • Robot fits in the row (MN1)
  • Robot arrives and stops at the far end of rows 1 and 2
  • The robot does not crush or trample any artificial plants (MN5)
  • The robot successfully switches into the second row in at least 4 out of 5 trials (MR5)

Test 2: Localization

Location: Carnegie Mellon University

Equipment: Robot, pre-recorded validation rosbag (software), localization performance measurement node (software)

Setup:

  • Load pre-recorded ROS Bag file with ground truth (from RTK GPS) onto the robot

Test:

  1. Play back the ROS bag file and observe the divergence between the ground truth and the estimated position
  2. Observe the output of the localization validation node at the end of the run

Success Criteria:

  • The localizer places the robot in the correct row with 80% accuracy and within 24 inches of its true position along the row (MR4)
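
A minimal sketch of how the localization performance measurement node could score a run against the RTK GPS ground truth is given below. The thresholds come from MR4 (correct row 80% of the time, within 24 in along the row); the row spacing, field-frame convention, and function names are assumptions made for illustration.

    ALONG_ROW_TOL_M = 0.61      # 24 in expressed in meters (MR4)
    ROW_ACCURACY_TARGET = 0.80  # correct row at least 80% of the time (MR4)
    ROW_SPACING_M = 0.61        # assumed row spacing used to map a lateral offset to a row index

    def row_index(lateral_m):
        """Map a lateral offset in the field frame (meters) to an integer row index."""
        return round(lateral_m / ROW_SPACING_M)

    def evaluate(estimates, ground_truth):
        """Compare estimated poses against RTK GPS ground truth.

        estimates, ground_truth: lists of (along_row_m, lateral_m) pairs sampled at
        matching times. Returns the fraction of samples in the correct row and the
        fraction within the along-row tolerance.
        """
        n = len(ground_truth)
        correct_row = sum(row_index(e[1]) == row_index(g[1])
                          for e, g in zip(estimates, ground_truth))
        within_tol = sum(abs(e[0] - g[0]) <= ALONG_ROW_TOL_M
                         for e, g in zip(estimates, ground_truth))
        return correct_row / n, within_tol / n

    # The run passes MR4 if the first fraction is at least ROW_ACCURACY_TARGET and
    # the along-row error stays within ALONG_ROW_TOL_M.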

Test 3: Pest/Disease Perception Software Test

Location: Carnegie Mellon University

Equipment: Robot, pre-collected and labeled dataset

Test (Video Demo):

  1. The monitoring software runs on test images of one type of plant and predicts the severity* of holes and fungus for each image based on its leaf area, hole area, and fungus area

Success Criteria:

  • The software successfully identifies fungus and hole severity with greater than 50% micro precision and micro recall* (MR9, MR10)
  • The robot successfully processes data at a rate faster than one field per 24 hours (MR12)

*severity levels: mild, moderate, and alarming, represented as integers 1, 2, and 3
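
A minimal sketch of how the per-image areas could be binned into these severity levels is shown below; the area-fraction thresholds are illustrative assumptions, not the values tuned for the monitoring software.

    # Severity levels per the definition above: 1 = mild, 2 = moderate, 3 = alarming.
    # The thresholds on the affected-area fraction are assumed for illustration only.
    MODERATE_THRESHOLD = 0.05   # 5% of leaf area affected
    ALARMING_THRESHOLD = 0.20   # 20% of leaf area affected

    def severity(leaf_area, affected_area):
        """Bin an affected-area fraction (hole or fungus area over leaf area) into 1/2/3."""
        if leaf_area <= 0:
            raise ValueError("leaf_area must be positive")
        fraction = affected_area / leaf_area
        if fraction >= ALARMING_THRESHOLD:
            return 3
        if fraction >= MODERATE_THRESHOLD:
            return 2
        return 1

    # Example: a leaf with 12% of its area covered by holes would be rated moderate (2).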

*micro precision: (all TP) / (all TP + all FP)

*micro recall: (all TP) / (all TP + all FN)
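
The two micro-averaged metrics pool true positives, false positives, and false negatives across all severity classes before dividing. A minimal sketch of that computation, assuming per-class counts are already available:

    def micro_precision_recall(counts):
        """Compute micro precision and recall from per-class (TP, FP, FN) counts.

        counts: dict mapping severity class -> (tp, fp, fn).
        micro precision = sum(TP) / (sum(TP) + sum(FP))
        micro recall    = sum(TP) / (sum(TP) + sum(FN))
        """
        tp = sum(c[0] for c in counts.values())
        fp = sum(c[1] for c in counts.values())
        fn = sum(c[2] for c in counts.values())
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    # Hypothetical counts for the three severity classes:
    print(micro_precision_recall({1: (40, 10, 5), 2: (25, 8, 12), 3: (15, 4, 6)}))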

 

Discussion

The Spring Validation Demonstration (SVD) and the SVD Encore tested the robot’s ability to traverse the field, accurately localize itself, and evaluate the severity of pest and disease pressures.

The first test validated the system’s ability to fit in a 24 in row, autonomously switch rows 80% of the time, and avoid damaging plants during navigation. The system was expected to traverse a row, switch to the next one, and then traverse to the end of that row. The system passed the test during the SVD. However, while turning, the robot crushed the plant at the end of the row. This happened because the row detector could not detect the row until the robot was halfway into it, so the turn relied entirely on dead reckoning from visual odometry, which had drifted and led to the collision. In the SVD Encore, the robot unfortunately collided with plants while entering the second row on two occasions. This error also seemed to stem mostly from dead reckoning, as the robot attempted to enter the second row in the wrong position before actually seeing the row. The discrepancy in the location estimate between the two SVDs is unclear; possible sources are tweaks to the noise parameters in the motion model and a ZED sensor mount that came slightly loose.
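
To make the failure mode concrete: while the row detector cannot see the row, the pose estimate comes purely from integrating odometry, so any velocity or heading error carries straight into the turn. The sketch below, with all names and the row-aligned frame assumed purely for illustration, shows the kind of correction that only becomes possible once the detector fires.

    import math

    def dead_reckon(pose, v, yaw_rate, dt):
        """Integrate odometry in a row-aligned frame (x: along row, y: lateral
        offset from the row centerline, yaw: heading relative to the row).
        Errors in v and yaw_rate accumulate without bound."""
        x, y, yaw = pose
        return (x + v * math.cos(yaw) * dt,
                y + v * math.sin(yaw) * dt,
                yaw + yaw_rate * dt)

    def correct_with_row_detection(pose, measured_offset, measured_heading):
        """Once the row detector sees the row, replace the drifted lateral offset
        and heading with the detector's measurement; the along-row coordinate
        still relies on odometry alone."""
        x, _, _ = pose
        return (x, measured_offset, measured_heading)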

The second test validated the robot’s ability to localize itself in the correct row 80% of the time and within 24 in along the row. The system was provided with sensor data and ground truth in the form of RTK GPS data and was expected to localize within the error bounds. In the first SVD, the localizer was not able to satisfy the required error bounds. That run used a dataset with wheel odometry only, as we had not visited Rivendale Farms to collect a dataset since mounting the ZED sensor, which provides visual odometry. Our Rivendale test was rained out, so we instead collected a dataset at CMU with visual odometry. During the SVD Encore, the robot passed the test within the specified error bounds.

The third test validated the robot’s ability to identify disease and holes. The robot was provided with images of plants with disease and holes, and the trained neural network categorized each image into severity levels. The system was expected to categorize holes and fungus with 50% precision and recall, and it achieved roughly 70% precision and recall.