Spring Test Plan
Equipment Required: Desktop system running CARLA and the S.T.A.R.S. data processing pipeline; steering wheel system such as the Logitech G920
Demonstration 1:
- Objective: To demonstrate the detection algorithm by extracting actor (vehicle) information from real-world and CARLA video streams and visualizing it.
- Procedure:
- Capture video streams from data recorded in CARLA or the real world.
- Run our detection model on the video stream, visualizing a bounding box and class label for each detected actor.
- Validation Criteria:
- Detection precision: at least 75% of the detected vehicles are actually vehicles.
- Detection recall: at least 75% of the ground-truth vehicles are detected by the algorithm.
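The precision and recall criteria above can be sketched as follows, assuming a matching step has already produced counts of true positives, false positives, and false negatives (the counts in the example are illustrative):

```python
# Hypothetical sketch: detection precision and recall from counts of
# true positives (TP), false positives (FP), and false negatives (FN)
# obtained by matching detections against ground-truth vehicles.
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    precision = tp / (tp + fp) if tp + fp else 0.0  # fraction of detections that are real
    recall = tp / (tp + fn) if tp + fn else 0.0     # fraction of real vehicles found
    return precision, recall

# Illustrative example: 80 correct detections, 15 false alarms, 20 misses.
p, r = precision_recall(tp=80, fp=15, fn=20)
passed = p >= 0.75 and r >= 0.75  # the 75% validation criteria
```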
Demonstration 2:
- Objective: To demonstrate the tracking algorithm by extracting actor (vehicle) information from video streams and visualizing it.
- Procedure:
- Capture video streams from data recorded in CARLA or the real world and run the detection model as in Demonstration 1.
- The detection output is passed to the tracking algorithm, which assigns an ID to each actor and tracks its location in a bird's-eye view.
- The visualization shows each detected actor with its class and its ID.
- Validation Criteria:
- MOTA (multi-object tracking accuracy) should be at least 40% and MOTP (multi-object tracking precision) should be at least 40%.
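The CLEAR MOT metrics used as validation criteria can be sketched as below; the per-frame counts (misses, false positives, ID switches, matched overlaps) are assumed to come from a detection-to-ground-truth matching step, and the example numbers are illustrative:

```python
# Hypothetical sketch of the CLEAR MOT metrics.
def mota(misses: int, false_positives: int, id_switches: int, num_gt: int) -> float:
    """MOTA = 1 - (FN + FP + IDSW) / total ground-truth objects."""
    return 1.0 - (misses + false_positives + id_switches) / num_gt

def motp(total_overlap: float, num_matches: int) -> float:
    """MOTP = mean bounding-box overlap (IoU) over all matched pairs."""
    return total_overlap / num_matches

# Illustrative example: 300 ground-truth objects across all frames,
# 90 misses, 60 false positives, 10 identity switches.
score = mota(misses=90, false_positives=60, id_switches=10, num_gt=300)
passed = score >= 0.40  # the 40% MOTA criterion
```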
Demonstration 3:
- Objective: To demonstrate the capability to control multiple modelled actors inside a simulator.
- Procedure:
- We spawn multiple actors inside CARLA, controlled by its default rule-based behavioral model.
- We measure CARLA's performance as we incrementally add actors controlled by our baseline behavioral model to the simulation environment.
- Validation Criteria:
- The simulation speed in CARLA is ≥10 frames per second with ≥3 actors (controlled by our baseline behavioral model) running simultaneously.
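The frame-rate criterion can be checked with a timing loop like the following sketch; `step` stands in for one simulator tick (in a real run it would call CARLA's `world.tick()` in synchronous mode), and here it is a stub so the sketch is self-contained:

```python
import time

# Hypothetical sketch: measure average simulation frame rate by timing
# a fixed number of simulator steps.
def measure_fps(step, num_frames: int = 100) -> float:
    start = time.perf_counter()
    for _ in range(num_frames):
        step()  # one simulation tick; a stub in this sketch
    elapsed = time.perf_counter() - start
    return num_frames / elapsed

# Stub standing in for the simulator; pass criterion is fps >= 10
# with >= 3 actors controlled by the baseline behavioral model.
fps = measure_fps(lambda: time.sleep(0.001), num_frames=50)
```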
Fall Test Plan
Equipment: Desktop system running CARLA and the S.T.A.R.S. pipeline; steering wheel system such as the Logitech G920; Data Capture Unit (DCU) designed and fabricated by S.T.A.R.S.
Demonstration 1:
- Objective: To demonstrate various data capturing methods and pre-processing pipeline
- Procedure:
- The user drives in the CARLA environment (running on the desktop system) using the steering wheel system. Video data from the user's perspective and the bird's-eye view is stored on the hard drive.
- The user mounts the DCU on a tripod at a traffic signal and captures multiple views of an intersection. The data shown will be pre-recorded.
- Video streams from the above data sources are then fed into a laptop, which runs the detection and tracking algorithms.
- Validation Criteria:
- Data Capture Unit, successfully mounted on the frame, collects data at ≥ 30 frames per second.
- Data Capture Unit provides a full (100%) view of the intersection.
- The system detects at least 75% of actors (cars and pedestrians) visible in the input video.
- MOTA (multi-object tracking accuracy) should be at least 40% and MOTP (multi-object tracking precision) should be at least 40%.
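Scoring the detection and tracking criteria requires matching detections to ground-truth actors; a common way to do this is intersection-over-union (IoU) between bounding boxes, sketched below (the 0.5 threshold is a common default, not a value specified in this plan):

```python
# Hypothetical sketch: IoU between two axis-aligned boxes given as
# (x1, y1, x2, y2), used to decide whether a detection matches a
# ground-truth actor when scoring the 75% detection criterion and
# the MOT metrics.
def iou(a, b) -> float:
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# A detection typically counts as a true positive when IoU >= 0.5.
matched = iou((0, 0, 2, 2), (1, 1, 3, 3)) >= 0.5
```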
Demonstration 2:
- Objective: To demonstrate the final traffic behavior model developed by S.T.A.R.S.
- Procedure:
- The scenario from Demonstration 1 is used. All actors are given their start pose and final goal. The S.T.A.R.S. model will be running as the ego vehicle.
- In another sub-demonstration, 3 to 5 S.T.A.R.S. models will run in a loop, each given its own goal.
- Validation Criteria:
- The mean squared error between the predicted trajectory from the behavioral model and the ground-truth trajectory in a given scenario will be at most x%.
- CARLA runs at ≥ 10 frames per second with ≥ 3 actors running simultaneously.
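The trajectory-error criterion can be sketched as a mean squared error over time-aligned waypoints; the plan does not specify the trajectory representation, so (x, y) waypoints sampled at the same timestamps are assumed here:

```python
# Hypothetical sketch: mean squared error between a predicted and a
# ground-truth trajectory, each a list of (x, y) waypoints sampled at
# the same timestamps.
def trajectory_mse(pred, truth) -> float:
    assert len(pred) == len(truth), "trajectories must be time-aligned"
    total = sum((px - tx) ** 2 + (py - ty) ** 2
                for (px, py), (tx, ty) in zip(pred, truth))
    return total / len(pred)

# Illustrative example: the second waypoint is off by 1 m in y.
mse = trajectory_mse([(0, 0), (1, 1)], [(0, 0), (1, 2)])
```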
Demonstration 3:
- Objective: To demonstrate that the system can be tuned for aggression.
- Procedure:
- Given a S.T.A.R.S. model, various parameters will be changed as part of aggression tuning. Multiple models will then be started with these configurations.
- Validation Criteria:
- The system allows the aggression parameters of the models to be tuned.
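One way to expose aggression tuning is a small parameter set that a behavioral model consumes; the actual parameters used by the S.T.A.R.S. model are not specified in this plan, so the names below are purely illustrative:

```python
from dataclasses import dataclass

# Hypothetical sketch of tunable aggression parameters; these fields
# are illustrative, not the actual S.T.A.R.S. parameter set.
@dataclass
class AggressionConfig:
    target_speed_mps: float = 8.0   # desired cruising speed
    min_headway_s: float = 2.0      # time gap kept to the lead vehicle
    yield_probability: float = 0.9  # chance of yielding at conflicts

def make_aggressive(cfg: AggressionConfig) -> AggressionConfig:
    # Higher speed, shorter headway, less yielding -> more aggressive.
    return AggressionConfig(cfg.target_speed_mps * 1.5,
                            cfg.min_headway_s * 0.5,
                            cfg.yield_probability * 0.5)

aggressive = make_aggressive(AggressionConfig())
```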