FVD EXPERIMENTS
1. Validate end-to-end AutoValet system in Gazebo
- Location: Remote/CMU East Garage
- Equipment: Laptop, Gazebo
- Procedure:
  - The robot is teleoperated in simulation from start to end and the run is timed to serve as a baseline for comparison with autonomous operation
  - Launch all subsystems
  - The robot will traverse the parking lot, generating waypoints that lie within the lane and traversing to those waypoints
  - When an Aruco tag is detected, the system will calculate a corresponding parking goal pose based on the pose of the tag (a sketch of this computation follows this experiment)
  - The robot will perform the parking maneuver and the system will stop
- Validation Criteria:
  - The system runs to completion in less than twice the time taken to teleoperate the robot along the same path (M.P.1)
  - The final parking error should be less than 75 cm in translation and +/- 10 degrees in orientation (M.P.7)
  - Success Rate: 90% [Success if M.P.1 or M.P.7 met in 9 out of 10 runs]
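To make the tag-to-goal step above concrete, here is a minimal Python sketch of one way the parking goal pose could be derived from a detected Aruco tag. It assumes the tag pose is already available as a 2D pose (x, y, yaw) in the map frame and places the goal a fixed distance in front of the tag, facing it; `GOAL_OFFSET_M` and the function name are illustrative assumptions, not the project's actual parameters.

```python
import math

# Illustrative offset (metres): how far in front of the tag the robot should stop.
GOAL_OFFSET_M = 1.0  # assumption, not the project's tuned value

def parking_goal_from_tag(tag_x, tag_y, tag_yaw):
    """Place the parking goal GOAL_OFFSET_M in front of the tag and make the robot
    face the tag. All poses are 2D (x, y, yaw) in the map frame, yaw in radians."""
    goal_x = tag_x + GOAL_OFFSET_M * math.cos(tag_yaw)
    goal_y = tag_y + GOAL_OFFSET_M * math.sin(tag_yaw)
    # Face back toward the tag; normalize the result to (-pi, pi].
    goal_yaw = math.atan2(math.sin(tag_yaw + math.pi), math.cos(tag_yaw + math.pi))
    return goal_x, goal_y, goal_yaw

if __name__ == "__main__":
    # Tag at (5, 2) facing +y: the goal lands 1 m in front of it, facing back at it.
    print(parking_goal_from_tag(5.0, 2.0, math.pi / 2))
```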
2. Validate End-to-End AutoValet system on the Husky
- Location: CMU East Garage
- Equipment: Husky + Sensors, Laptop, Tape measure, Digital protractor, Seat belt, Stopwatch
- Procedure:
  - A seat belt roll is laid flat along the lane line over the entire run length of the test
  - An Aruco marker is placed upright at a spot after the turn, facing the lane
  - The robot starts 5 metres before a turn, in the center of the lane
  - We teleoperate the robot to the parking spot and record the time taken
  - We reset the scene and place the robot back at the start point
  - The full system is launched. The robot moves along the lane, detects the tag, and attempts to park
  - We mark the final pose of the robot footprint on the ground and record the final position and orientation (the error-computation sketch after this experiment shows how the parking error is evaluated)
- Validation Criteria:
  - The time taken for autonomous parking should be less than twice the time taken while parking with teleoperation (M.P.1)
  - The final parking error should be less than 75 cm in translation and +/- 10 degrees in orientation (M.P.7)
  - Success Rate: 50% [Success if M.P.1 or M.P.7 met in 2 out of 4 runs]
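The validation criteria above can be checked with a short script. The sketch below is a hedged illustration, assuming the final and target poses are measured on the ground as (x, y) positions plus a heading in degrees; the function names and the numbers in the usage example are hypothetical, not from the test plan.

```python
import math

def parking_error(final_xy, final_yaw_deg, target_xy, target_yaw_deg):
    """Translation error (m) and absolute heading error (deg) between the marked
    final footprint pose and the target parking pose (both measured on the ground)."""
    trans_err = math.hypot(final_xy[0] - target_xy[0], final_xy[1] - target_xy[1])
    # Wrap the heading difference into [-180, 180] before taking its magnitude.
    yaw_err = abs((final_yaw_deg - target_yaw_deg + 180.0) % 360.0 - 180.0)
    return trans_err, yaw_err

def run_passes(trans_err_m, yaw_err_deg, teleop_time_s, auto_time_s):
    """Evaluate the two criteria above: M.P.7 (75 cm / 10 deg) and M.P.1 (2x teleop time)."""
    meets_mp7 = trans_err_m < 0.75 and yaw_err_deg <= 10.0
    meets_mp1 = auto_time_s < 2.0 * teleop_time_s
    return meets_mp1, meets_mp7

if __name__ == "__main__":
    t_err, y_err = parking_error((0.30, 0.20), 4.0, (0.0, 0.0), 0.0)  # hypothetical measurements
    print(run_passes(t_err, y_err, teleop_time_s=60.0, auto_time_s=95.0))
```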
Fall Test Plan
| MILESTONE | DELIVERABLE | TEST METHOD |
| --- | --- | --- |
| Mid-September | Final hardware integration of SLAM and lane detection | Improve localization and train/validate detection network on parking lot data |
| Late-September | Navigation on hardware | Execute point-to-point navigation on the Husky |
| Mid-October | Exploration in simulation | Introduce the segmented lane information into the 2D costmap and generate waypoints that lie in this lane |
| Late-October | Exploration on Husky | Show exploration algorithm working on parking lot data |
| Mid-November | Parking maneuver | Robot will park itself in the target parking spot within 50 cm and 3 degrees of the target pose |
| Late-November | Integration and testing | Full system validation on hardware |
SVD EXPERIMENTS
1. Localization Error Testing of SLAM Subsystem
- Location: Gazebo Simulation (Live Demo)
- Equipment: Laptop
- Procedure:
  - The robot starts 1 m before the start of the turn in our simulated parking garage and the validation script is initiated
  - The user tele-operates the robot around the turn to a point 1 m after the end of the turn
  - The script samples the error in x, y, and yaw for every 0.3 m moved by the robot in Gazebo (a sketch of such a script follows this experiment)
  - A plot of instantaneous error over the sample points and the average error at the end of the run is reported
- Validation criteria: Mean translational error should be less than 50 cm and mean rotational error should be less than 5 degrees at discrete test positions (M.P.4)
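The validation script itself is not included in this plan; the sketch below illustrates one way such a script could sample and average the error, assuming time-aligned ground-truth poses from Gazebo and estimated poses from the SLAM output, each as (x, y, yaw) tuples with yaw in radians. The function names and sampling loop are illustrative.

```python
import math

def sample_errors(gt_poses, est_poses, sample_dist=0.3):
    """Sample the pose error every `sample_dist` metres of ground-truth travel.
    gt_poses / est_poses: time-aligned lists of (x, y, yaw) tuples."""
    errors, travelled, last_gt = [], 0.0, None
    for gt, est in zip(gt_poses, est_poses):
        if last_gt is not None:
            travelled += math.hypot(gt[0] - last_gt[0], gt[1] - last_gt[1])
        if last_gt is None or travelled >= sample_dist:
            trans_err = math.hypot(gt[0] - est[0], gt[1] - est[1])
            # Wrap the yaw difference before converting to degrees.
            yaw_err = abs(math.degrees(
                math.atan2(math.sin(gt[2] - est[2]), math.cos(gt[2] - est[2]))))
            errors.append((trans_err, yaw_err))
            travelled = 0.0
        last_gt = gt
    return errors

def mean_errors(errors):
    """Mean translational (m) and rotational (deg) error; compare against
    the 0.5 m / 5 deg thresholds of M.P.4."""
    n = len(errors)
    return sum(e[0] for e in errors) / n, sum(e[1] for e in errors) / n
```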
2. Mapping Error Test
- Location: Gazebo Simulation
- Equipment: Laptop
- Procedure:
  - Place 4 boxes (1 m x 1 m) in the parking lot world in Gazebo
  - Tele-operate the robot in a loop around the simulated environment to generate a map
  - Inspect the generated map and manually record the estimated poses of the box corners in the grid map (RViz)
  - Determine the map error by averaging over all discrete pose errors of the boxes (see the sketch after this experiment)
- Validation criteria: The mean error across all points should be within 20 cm (M.P.2)
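As a hedged illustration of the averaging step, the sketch below computes the mean corner error from manually recorded map corners and the known Gazebo box positions; the coordinates in the example are hypothetical.

```python
import math

def mapping_error(map_corners, gt_corners):
    """Mean Euclidean error (m) between matched (x, y) box-corner positions,
    one set read off the occupancy grid in RViz, one taken from the Gazebo world."""
    errs = [math.hypot(mx - gx, my - gy)
            for (mx, my), (gx, gy) in zip(map_corners, gt_corners)]
    return sum(errs) / len(errs)

# Hypothetical readings: the mean error must stay within 0.20 m (M.P.2).
measured = [(1.02, 0.98), (2.05, 1.01), (4.97, 3.10), (6.88, 3.02)]
truth = [(1.00, 1.00), (2.00, 1.00), (5.00, 3.00), (7.00, 3.00)]
print(mapping_error(measured, truth))
```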
3. Lane Detection Test
- Location: Gazebo simulation (Video Demo over the validation set of images)
- Equipment: Laptop
- Procedure:
  - The Gazebo world is modified so that the ground-truth lane is colored green. The robot is tele-operated over this “modified world” and the camera feed is recorded. The ground-truth lane segment is extracted via HSV separation for validation.
  - For testing, we extract the original natural image feed of the camera from the “modified images” and pass it through our network to get a prediction of the lane.
  - We compare the generated ground-truth mask with the network output to calculate the Intersection over Union (IOU) score for the segmentation result, as well as the mean IOU over all frames (see the sketch after this experiment).
- Validation criteria: Mean Intersection over Union (mIOU) score of the ego-lane over all frames in the validation set should be at least 60% (M.P.5)
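The IOU computation above can be sketched as follows, assuming OpenCV and NumPy are available; the HSV bounds used to isolate the green ground-truth lane are illustrative placeholders and would need tuning to the actual shade used in the modified world.

```python
import cv2
import numpy as np

def ground_truth_mask(bgr_frame):
    """Extract the green-painted ego-lane from a frame of the modified Gazebo world
    via HSV thresholding. The bounds below are illustrative, not tuned values."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, (40, 80, 80), (80, 255, 255)) > 0

def iou(pred_mask, gt_mask):
    """Intersection-over-Union of two boolean masks of the same shape."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union > 0 else 0.0

def mean_iou(pred_masks, gt_masks):
    """Per-frame IOU averaged over the validation set; compare against 0.60 (M.P.5)."""
    return sum(iou(p, g) for p, g in zip(pred_masks, gt_masks)) / len(pred_masks)
```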
Spring Milestones
| DATE | MILESTONE | DATE | MILESTONE |
| --- | --- | --- | --- |
| Jan 30 | Test Environment Set up | Mar 31 | SLAM & Detection subsystem tested on simulation |
| Feb 19 | Progress Review 1 | Apr 8 | Progress Review 4 |
| Feb 29 | SLAM on simulation with teleoperation | Apr 22 | Spring Validation Demo (SVD) |
| Mar 4 | Progress Review 2 | Apr 29 | SVD Encore |
| Mar 20 | Preliminary Design Review | May 4 | Critical Design Review (CDR) |
| Mar 25 | Progress Review 3 | May 7 | CDR Report |