Fall Test Plan

Introduction 

This document presents the tests planned by Team G for fall 2022. These tests are designed to showcase the working of the robot’s various subsystems and to show that we meet the requirements (Appendix) set for the Fall Validation Demonstration (FVD). Each test description includes the objective, the equipment and elements involved, a detailed procedure, and validation criteria. The results of these tests will be reported during progress reviews.

Logistics 

Location

All tests will be conducted in the B-level basement of Newell Simon Hall (NSH). All tests will be conducted in the hallway of NSH outside of the cage as navigation, perception, and manipulation subsystems require open space and adequate light. 

Personnel

The team members in Team G are sufficient to conduct all the tests. All members are assigned to every test, as the tests will be conducted during group work sessions on Fridays. One team member will document the test and results, while another operates the robot from the operator’s computer. Members who have other commitments during this time will be excused.

Equipment 

For all tests, the robot and the operator’s computer will be used.  The test environment that will be used is an indoor farm setup that has been made using clothing racks.  The clothing racks support plant pots that hold fake tomato stems and branches made from green floral wire and manipulated to resemble real plants.  Each fake vine will hold up to two tomato clusters using a real organic section of a stem from a celery stick or other vegetation.  In some tests, the complete test environment is unnecessary and a small subset of it will be used instead as detailed in the test description.

Schedule 

Date | PR | Milestone(s) | Test(s) | Requirement(s)
9/29 | 08 | Train YOLOv5; final modular tool attachment; extend End-of-Arm Tool class; complete Pure-Pursuit controller | Test 1, Test 2 | M.P.1, M.P.5
10/13 | 09 | Include base alignment with manipulation; complete farm-row path planner with extensive testing; convert model to TensorRT framework | Test 5, Test 6 | M.P.8, M.P.9, M.N.1
11/3 | 10 | Integrate Pure-Pursuit controller and planner; integrate TensorRT engine with ROS | Test 3, Test 4 | M.P.6, M.P.7, D.P.2
11/17 | 11 | System integration and extensive testing | Test 7 | M.P.1-9, M.N.1-3
11/21 | FVD | System integration and extensive testing | Test 7 | M.P.1-9, M.N.1-3

Tests 

Test 1: Farm Layout Path Tracking

Objective
To test the integrated operation of the path-planning module and the control module while traversing the farm layout.
Equipment 
Laptop, Hello Robot, test environment
Elements 
Planner module, controller module
Personnel
Steven, Pallavi, Aditya, Bruce, Vaidehi
Location
Outside the Cage
Procedure
– Place the robot at the start of the first row.
– Launch the navigation roslaunch file to spin up the “stretch_driver”, “lidar_pot_detector”, “aruco_ros”, and “rp1_lidar” nodes.
– Launch the “row_planner”, “pure_pursuit_controller”, and “aruco_goal_processing” nodes.
– Stay alert and ready to terminate the “pure_pursuit_controller” node in case of unexpected behavior.
Validation
Ensure that the robot traverses the farm layout while maintaining a safe distance from the rows.
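The core computation of the “pure_pursuit_controller” node exercised in this test can be sketched as follows. This is a minimal illustration of the standard pure-pursuit law, not the node’s actual code; the function names and the flat (x, y) waypoint representation are assumptions.

```python
import math

def find_lookahead(path, robot_xy, lookahead_dist):
    """Return the first path point at least `lookahead_dist` away from the
    robot. `path` is an ordered list of (x, y) world-frame waypoints."""
    rx, ry = robot_xy
    for px, py in path:
        if math.hypot(px - rx, py - ry) >= lookahead_dist:
            return (px, py)
    return path[-1]  # fall back to the final waypoint near the row's end

def pure_pursuit_curvature(lookahead_point, lookahead_dist):
    """Curvature command toward a lookahead point expressed in the robot
    frame (x forward, y left): the pure-pursuit law kappa = 2*y / L^2."""
    _, y = lookahead_point
    return 2.0 * y / (lookahead_dist ** 2)
```

The curvature is then turned into linear and angular velocity commands by the controller; a point straight ahead yields zero curvature, and a point to the left yields a positive (leftward) turn.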

Test 2: Tomato Detection Model Validation using Test Data

Objective
To validate the performance of the tomato fruit detection model on the Hello Robot.
Equipment 
Hello Robot, laptop, test environment, compute hardware
Elements 
Tomato fruit detection model, camera feed, tomato fruit dataset, test environment fruits, compute hardware
Personnel
Aditya, Bruce
Location
Outside Cage
Procedure
– Tomato bunches are placed in the robot’s field of view.
– The robot is positioned at a fixed location and connected to the operator’s computer.
– The tomato fruit detection model and necessary scripts are launched.
– Detections from the model are visualized.
Validation
– Detection bounding boxes are present whenever the tomatoes are in the robot’s field of view.
– Detection bounding boxes frame the tomatoes accurately.
– The model should achieve at least 50% mAP on the test data during training.
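The bounding-box validation above can be made concrete with an IoU-based matching sketch, the usual basis for counting true and false positives when computing mAP. This is a generic illustration, not the team’s evaluation code; the function names and the 0.5 IoU threshold are assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_detections(preds, truths, thresh=0.5):
    """Greedily match predicted boxes to ground-truth boxes.
    Returns (true positives, false positives) at the given IoU threshold."""
    unmatched = list(truths)
    tp = fp = 0
    for p in preds:
        best = max(unmatched, key=lambda t: iou(p, t), default=None)
        if best is not None and iou(p, best) >= thresh:
            tp += 1
            unmatched.remove(best)  # each ground-truth box matches at most once
        else:
            fp += 1
    return tp, fp
```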

Test 3: Tomato Cluster Detection Algorithm Validation

Objective
To test the tomato cluster detection algorithm by measuring true positives and false positives.
Equipment 
Hello Robot, laptop, test environment
Elements 
Tomato detection model and clustering algorithm
Personnel
Aditya, Bruce
Location
Outside Cage
Procedure
– The robot is placed at the start location in the test environment.
– The tomato fruit detection model and necessary scripts are launched.
– Verify that the model is detecting tomatoes.
– The robot navigates autonomously around the environment.
– The robot detects and localizes the tomato clusters in the environment.
– Appropriate markers are visualized in RViz.
Validation
– The robot should detect at least 7 out of 10 tomato bunches in the environment.
– The robot should not have more than 3 false detections.
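A minimal sketch of how per-fruit detections can be grouped into clusters, in the spirit of the clustering algorithm under test. The single-linkage sweep and the 15 cm grouping radius are illustrative assumptions, not the actual algorithm.

```python
def cluster_detections(points, radius=0.15):
    """Group 3D fruit detections (x, y, z) into clusters: a point joins a
    cluster if it lies within `radius` of any member. Returns one centroid
    per cluster, which stands in for the tomato bunch location."""
    clusters = []
    for p in points:
        placed = False
        for c in clusters:
            if any(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 <= radius
                   for q in c):
                c.append(p)
                placed = True
                break
        if not placed:
            clusters.append([p])
    # Centroid of each cluster, component-wise
    return [tuple(sum(col) / len(c) for col in zip(*c)) for c in clusters]
```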

Test 4: Tomato Cluster Localization Accuracy Test

Objective
To test the accuracy of the detected tomato cluster locations in 3D space.
Equipment 
Hello Robot, laptop, tomato bunches
Elements 
Tomato detection model, clustering algorithm, and localization algorithm
Personnel
Aditya, Bruce
Location
Outside Cage
Procedure
– The robot is placed at the start location in the test environment.
– The tomato fruit detection model and necessary scripts are launched.
– Verify that the model is detecting tomatoes.
– The robot navigates autonomously around the environment.
– The robot detects and localizes the tomato clusters in the environment.
– Appropriate markers are visualized in RViz.
Validation
At each tomato cluster detection, stop the robot, measure the ground truth position of the tomato cluster with respect to the base_link frame, and compare it with the detected location. The error should be less than 5 cm.
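The validation step, comparing a detected cluster position against a ground-truth measurement in the base_link frame, can be sketched as below. The rotation and translation values in the example are illustrative; on the robot the camera-to-base_link transform would come from the TF tree.

```python
import math

def transform_point(point_cam, R, t):
    """Map a camera-frame point into base_link: p_base = R @ p_cam + t.
    R is a 3x3 rotation as nested row-major lists, t a 3-translation."""
    return tuple(sum(R[i][j] * point_cam[j] for j in range(3)) + t[i]
                 for i in range(3))

def localization_error(detected, ground_truth):
    """Euclidean distance between detected and measured cluster positions;
    the test requires this to be under 0.05 m."""
    return math.dist(detected, ground_truth)
```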

Test 5: End-Effector Position Control Test

Objective
To validate end-effector position control on the Hello Robot given a target point in the base_link frame.
Equipment 
Hello Robot, laptop, test environment
Elements 
Hello Robot Stretch RE1 firmware, Hello Robot Stretch RE1 hardware
Personnel
Pallavi, Vaidehi, Bruce, Aditya, Steven
Location
Outside the Cage
Procedure
– Set up the test environment with interest points whose ground truth has been measured in the base_link frame.
– Launch the “end_effector_control_test” script.
– Pass the locations of the interest sites to the script.
– Stay alert and ready to terminate the test script in case of unexpected behavior.
Validation
The end-effector reaches within ±5 cm of the requested position.
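For intuition, a highly simplified inverse-kinematics sketch for a Stretch-like robot: the lift handles z directly, base rotation aims the telescoping arm at the target, and extension covers the remaining radial distance. Wrist offsets and joint limits are ignored, and the `arm_offset` parameter is a hypothetical placeholder, so this is not the robot’s actual control code.

```python
import math

def stretch_ik(target, arm_offset=0.0):
    """Very simplified IK for a lift + telescoping-arm robot.
    `target` is (x, y, z) in base_link; `arm_offset` is an illustrative
    stand-in for the arm's mounting offset. Returns (base_yaw, lift,
    extension) joint targets."""
    x, y, z = target
    lift = z                              # prismatic lift covers height
    base_yaw = math.atan2(y, x)           # rotate base toward the target
    extension = math.hypot(x, y) - arm_offset  # arm covers radial distance
    return base_yaw, lift, extension
```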

Test 6: Tool Payload Capacity Validation and Cutting Performance

Objective
To validate the payload capacity of the tool and its cutting performance on stems.
Equipment 
Hello Robot, laptop, test environment
Elements 
Tool attachment, tool control module
Personnel
Pallavi, Vaidehi, Aditya, Bruce, Steven
Location
Outside the Cage
Procedure
– Set up the test environment with fake tomato clusters hanging on organic stems.
– Place the robot close to the clusters such that they are within the robot’s reach envelope.
– Launch the “end_effector_control_test” script and the “tool_actuation” script.
– Stay alert and ready to terminate the nodes in case of unexpected behavior.
Validation
The gripper tool should be able to sustain a payload of 0.5 kg, and the cutter should be able to cut the tomato cluster within two cutter passes.

Test 7: FVD Demo 

Objective
To demonstrate a robot that can navigate through a farm layout, correctly identify tomato cluster locations, stop at the target cluster locations, and accurately grip and harvest the tomato clusters throughout the entire farm.
Equipment 
Hello Robot, laptop, test environment
Elements 
Navigation modules (row path planner, Pure-Pursuit controller, lidar pot detection, and ArUco processing), perception modules (detection model, clustering algorithm, localization algorithm), and manipulation modules (3D position traversal, tool action)
Personnel
Aditya, Pallavi, Steven, Vaidehi, Bruce
Location
Outside the Cage
Procedure
– Place the robot at the fixed starting point on the farm.
– Ensure all fake tomato clusters have been placed at the correct spots on the farm.
– Launch the navigation, perception, and manipulation programs.
– Stay alert and ready to terminate the program in case of unexpected behavior.
Validation
The robot should traverse the farm layout without interfering with the farm rows, stop at the correct tomato cluster goal locations, and harvest the tomato clusters.

Spring Test Plan

Introduction 

These tests are designed to showcase the working of the robot’s various subsystems and to show that we meet the requirements set in fall 2021. Each test description includes the objective, the equipment and elements involved, a detailed procedure, and validation criteria.

Logistics 

Location

All tests will be conducted in the B-level basement of Newell Simon Hall (NSH). All tests will be conducted in the hallway of NSH outside of the cage as navigation, perception, and manipulation subsystems require open space and adequate light. 

Personnel

The team members in Team G are sufficient to conduct all the tests. Two members are assigned for every test. One team member will document the test and results, while another operates the robot from the operator’s computer. If needed, other members might join. 

Equipment 

For all tests, the robot and the operator’s computer will be used. For tests involving the navigation subsystem, additional test setup equipment is needed: six racks of tomato vines arranged in a two-row formation, with six tomato vines per rack. In addition, the manipulation subsystem requires the customized tools, i.e., the harvester and the pollinator, for conducting its tests.

Schedule 

The tests have been scheduled such that the results and performance of at least one test will be reported during each of the progress reviews.

Tests 

Test 1: UMBmark Test 

Objective
To gain a standardized accuracy measurement of our odometry system by performing a UMBmark test.
Equipment 
Laptop, Hello Robot, test environment, laser measurement device
Elements 
Odometry accuracy, onboard controls
Personnel
Steven, Pallavi
Location
Outside cage
Procedure
– Always stay alert and ready to terminate the node.
– Push the test node into the Hello Robot.
– Refer to the document for detailed images of the test.
– Use the laser measurement device to make initial and final ground truth measurements.
– Measure four points on the robot against a corner wall.
– Execute the test node in CW direction 5 times.
– Reposition the robot in the opposite direction and run the node in the CCW direction 5 times.
– Measure the error in the start and end configurations of the robot.
– Measure the accuracy value based on the UMBmark method.
Validation
Ensure that the odometry accuracy is within ±10 cm for the x and y positions.
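The final accuracy value follows the UMBmark method: the centroid (“center of gravity”) of the return-position errors is computed for the CW and CCW runs separately, and the larger centroid distance is the reported accuracy. A sketch of that computation; the error values in the usage example are hypothetical.

```python
import math

def umbmark_accuracy(cw_errors, ccw_errors):
    """UMBmark accuracy measure. Each argument is a list of (dx, dy)
    return-position errors from the 5 CW or 5 CCW square-path runs.
    Returns the larger of the two error-centroid distances."""
    def centroid_dist(errors):
        n = len(errors)
        cx = sum(e[0] for e in errors) / n
        cy = sum(e[1] for e in errors) / n
        return math.hypot(cx, cy)
    return max(centroid_dist(cw_errors), centroid_dist(ccw_errors))
```

Using centroids rather than individual errors isolates the systematic odometry error, which is what UMBmark is designed to measure.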

Test 2: First-row detection 

Objective
To test the first-row detection and tracking algorithm using lidar in the actual test environment with plant and pot occlusions, along with obstacle detection and handling capabilities.
Equipment 
Laptop, Hello Robot, test environment, obstacles
Elements 
LiDAR unit, onboard controls, row detection and tracking algorithm
Personnel
Steven, Pallavi
Location
Outside Cage
Procedure
– Always stay alert and ready to terminate the node.
– Push the navigation node into the Hello Robot ROS software stack.
– Initialize the robot at the start location of the rows with a single side row first.
– Run the node.
– Initialize the robot between two rows.
– Run the node.
– Terminate the node.
Validation
– The robot should maintain a fixed distance of 10 cm from the edge of the row.
– The robot should traverse the row through incremental trajectory waypoints at a speed of 0.2 m/s without coming to rest at intermediate points.
– The robot should identify obstacles in the path and stop when necessary.
– The robot should perform the same for both single-sided rows and double-sided rows.
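The 10 cm offset checked above can be estimated from the lidar returns by fitting a line to the row edge. A minimal least-squares sketch in the robot frame (x forward, y left); the actual algorithm must also handle the plant and pot occlusions this ignores, and the function names are assumptions.

```python
def fit_row_line(points):
    """Least-squares fit of y = m*x + c to 2D lidar returns from one row
    edge. Returns the slope m and intercept c."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    c = (sy - m * sx) / n
    return m, c

def lateral_offset(m, c):
    """Perpendicular distance from the robot origin to the fitted row line;
    the controller regulates this toward the 10 cm target."""
    return abs(c) / (1 + m ** 2) ** 0.5
```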

Test 3: End-of-row detection and row change maneuver

Objective
To test end-of-row detection and the row-change maneuver in both single-sided and double-sided row environments, as well as the row change internal to a single row.
Equipment 
Hello robot, laptop, test environment
Elements 
LiDAR, end-of-row detection algorithm, row change, control algorithm
Personnel
Steven, Pallavi
Location
Outside Cage
Procedure
– Always stay alert and ready to terminate the node.
– Initialize the robot at the start of a single-sided row.
– Start the navigation node and allow the robot to reach the end of the row.
– After the first row, terminate the node.
– Place the robot at the start of the double-sided row.
– Run the navigation node again.
– Allow the robot to reach the end of the row and perform the row transition.
– Terminate the node.
Validation
– The robot should maintain a fixed distance of 10 cm from the edge of the row.
– The robot should identify obstacles in the path and stop when necessary.
– The robot should perform the same for both single-sided rows and double-sided rows.
– The robot should perform the transition to a new row while maintaining a turning radius between 0.6 and 1 m.
– The robot should also be able to perform a robot-centric turn within a single row for the second sweep with the arm retracted.
– The robot should not be disturbed by the presence of other objects in the environment.

Test 4: Integration of Hello robot and external camera

Objective
To test the integration of the Hello Robot, compute hardware, and external camera.
Equipment 
Hello Robot, laptop, test environment, compute hardware, external camera
Elements 
Hello Robot camera functionality, integration of the Hello Robot and compute hardware, integration of the compute hardware and external camera, integration of the Hello Robot and external camera, power supply considerations
Personnel
Aditya, Pallavi
Location
Outside Cage
Procedure
– Launch script to communicate between Hello Robot and compute hardware.
– Launch script to test Hello Robot camera functionality with both Hello Robot and compute hardware.
– Launch script to test external camera functionality with both Hello Robot and compute hardware.
Validation
– Communication between the Hello Robot and compute hardware is established.
– Communication between cameras and the compute hardware is established.
– Communication between cameras and the Hello Robot is established.

Test 5: Tomato fruit detection

Objective
To validate the performance of the tomato fruit detection model on the Hello Robot.
Equipment 
Hello Robot, laptop, test environment, compute hardware
Elements 
Tomato fruit detection model, camera feed, tomato fruit dataset, test environment fruits, compute hardware
Personnel
Aditya, Pallavi
Location
Outside Cage
Procedure
– Tomato bunches are placed in the robot’s field of view.
– Robot is positioned at a fixed location and connected to the operator’s computer.
– Tomato fruit detection model and necessary scripts are launched.
– Detections from the model are visualized.
Validation
– Detection bounding boxes are present whenever the tomatoes are in the robot’s field of view.
– Detection bounding boxes frame the tomatoes approximately.

Test 6:  Tomato flower detection

Objective
To validate the performance of the tomato flower detection model on the Hello Robot.
Equipment 
Hello Robot, laptop, test environment, compute hardware
Elements 
Tomato flower detection model, camera feed, tomato flower dataset, compute hardware
Personnel
Aditya, Pallavi
Location
Outside cage
Procedure
– Tomato flowers are placed in the robot’s field of view.
– Robot is positioned at a fixed location and connected to the operator’s computer.
– Tomato flower detection model and necessary scripts are launched.
– Detections from the model are visualized.
Validation
– Detection bounding boxes are present whenever the flowers are in the robot’s field of view.
– Detection bounding boxes frame the flowers approximately.

Test 7: Localization of tomato fruit

Objective
To test that the tomato fruit detections from the model are being localized correctly in space.
Equipment 
Hello Robot, laptop, tomato bunches
Elements 
Object localization pipeline: depth image/point cloud generation and processing
Personnel
Aditya, Pallavi
Location
Outside cage
Procedure
– Tomato bunches are placed in the robot’s field of view.
– Robot is positioned at a fixed location and connected to the operator’s computer.
– Tomato fruit detection model and necessary scripts are launched.
– Verify that the model is detecting tomatoes.
– Point clouds and visual markers are visualized in RViz.
Validation
Visual markers corresponding to the bounding boxes are placed within 1 m of each tomato in the point cloud.
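The localization pipeline validated here back-projects a detection’s pixel through the depth image into a camera-frame point using the pinhole camera model. A sketch of that step; the intrinsic values in the example are illustrative, not the camera’s actual calibration.

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth in meters through a pinhole
    model with focal lengths fx, fy and principal point (cx, cy).
    Returns the (x, y, z) point in the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

A visual marker is then published at this point (transformed into a fixed frame) for each detected tomato, which is what the RViz check above inspects.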

Test 8: Localization of tomato flowers

Objective
To test that the tomato flower detections from the model are being localized correctly in space.
Equipment 
Hello Robot, laptop, tomato flowers
Elements 
Object localization pipeline, depth image/point cloud generation and processing
Personnel
Aditya, Pallavi
Location
Outside cage
Procedure
– Tomato flowers are placed in the robot’s field of view.
– Robot is positioned at a fixed location and connected to the operator’s computer.
– Tomato flower detection model and necessary scripts are launched.
– Verify that the model is detecting flowers. Point cloud and visual markers are visualized in RViz.
Validation
Visual markers corresponding to the bounding boxes are placed within 1 m of each flower in the point cloud.

Test 9: Manipulation to assumed point 

Objective
The robot arm should reach the assumed point of interaction (POI).
Equipment 
Hello robot, laptop, PS controller, customized tools: harvester and pollinator
Elements 
Manipulation script
Personnel
Vaidehi, Bruce
Location
Outside cage
Procedure
– Place robot at its initial position.
– Move the arm to specific XYZ coordinates viewed in the spatial (lift mast) frame without the tool attached.
– Move the arm to the desired position with the pollinator attached.
– Move the arm to the desired position with the harvester attached.
Validation
– Reach the desired position with an accuracy of ±20mm.
– Check reproducibility and repeatability.

Test 10: Harvesting of tomato bunch

Objective
The Hello Robot with the end effector attached should successfully harvest a tomato cluster.
Equipment 
Hello robot, laptop, PS controller, customized harvester
Elements 
Manipulation script, tomato peduncle contact
Personnel
Vaidehi, Bruce
Location
Outside cage
Procedure
– Place robot at its initial position.
– Move the arm to the specified desired position with the cutter and gripper in the open position.
– Move the gripper part of the harvesting tool to the closed position such that it is holding the stem.
– Move the cutter part of the harvesting tool to the closed position such that it cuts the stem.
– Retract and lower the arm to drop position.
– Open gripper to release tomatoes.
Validation
– The robot wrist should decrease its velocity as it approaches the tomato clusters.
– Robot should be stationary when the end effector contacts the tomato peduncle.
– Gripper should grip the target peduncle.
– Tool should cut the stem in one pass.
– Gripper should not drop the tomato cluster when the tool is in the closed position.

Test 11: Pollination of tomato bunch

Objective
The Hello Robot with the end effector attached should successfully pollinate tomato flowers.
Equipment 
Hello robot, laptop, PS controller, customized pollinator
Elements 
Manipulation script, flower contact, flower vibration, damage to flowers
Personnel
Vaidehi, Bruce
Location
Outside cage
Procedure
– Place robot at its initial position.
– Move the arm to the desired specified position with the pollinator attached.
– Start vibration.
– Finish vibration.
Validation
– The robot wrist should decrease its velocity as it approaches the flowers.
– Robot should be stationary when the end effector contacts the flowers.
– Vibration time should be less than 2 seconds.
– Petals of the flowers should not be damaged.

Test 12: SVD Demo 

Objective
To demonstrate navigation in the test setup, object detection and localization, and the integration of navigation and manipulation.
Equipment 
Hello robot, laptop, customized tools, test setup
Elements 
Row detection and tracking algorithm, object localization pipeline, depth image/point cloud generation and processing, flower contact, flower vibration, damage to flowers, tomato peduncle contact
Personnel
Aditya, Pallavi, Steven, Vaidehi, Bruce
Location
Outside Cage
Procedure
– Place the robot at the start of the test environment’s first row.
– Launch the reactive navigation package which will:
1. Detect and track rows using either lidar alone or in combination with visual feedback.
2. Command the robot to track the row with a safety distance of 10 cm from the side of the robot closest to the row.
3. Command a row change when the end of the row has been reached.
Validation
– The robot must track the rows while not colliding with plants and pots.
– The robot must successfully change rows when at the end of one row.
Procedure
– Place the robot at a fixed starting point.
– Remotely control the robot in the test environment such that tomatoes/flowers are visible in the camera’s FOV.
Validation
– The robot detects and localizes the tomatoes/flowers in the camera frame.
– Accurate bounding boxes are visible in the RGB images from the camera.
– Position markers are plotted in RViz for each detected tomato/flower, within approximately 1 m.
Procedure
– Place the robot at a fixed starting point.
– Navigate the robot to the pollination points.
– Perform the pollination test at the pollination points.
– Navigate to the harvest points.
– Perform the harvesting test at the harvest points.
Validation
– The robot should reach the assumed harvesting/pollination site.
– The robot wrist should decrease its velocity as it approaches the stem/flowers.
– Robot should be stationary when the end effector contacts the stem/flowers.
– Gripper should grip the target peduncle.
– Tool should cut the stem in one pass.
– The gripper should not drop the tomato cluster when the tool is in the closed position.
– Vibration time should be less than 2 seconds.
– Petals of the flowers should not be damaged.