
Fall Validation Experiments

The Fall Validation Experiments comprise four sections, covering the physical mounting structure, stereo vision performance, sensor synchronization, and object detection. The team met the requirements in the FVE plan, fully achieving three of the experiments and partially achieving one. The detailed performance for the four experiments is summarized in the tables below.

Table 1. Performance of Fall Validation Experiment A

Subject: Sensor mounts
Goal: Show robustness of the sensor rack and mounting method (Achieved)
Actual performance: All sensors were rigidly attached to the testing vehicle. The relative position of the sensors changed by less than 5 mm in any direction after a 20-minute test drive.

Table 2. Performance of Fall Validation Experiment B

Subject: Stereo vision
Goal: The stereo vision system can work in adverse weather conditions and provide depth information with at least 80% accuracy, i.e., depth error below 20% (Achieved)
Actual performance: The stereo vision system provided depth information for objects with an accuracy above 80%.

Table 3. Performance of Fall Validation Experiment C

Subject: Synchronization
Goals: Show synchronization between the two cameras (Achieved); show synchronization between the stereo vision system and the radar (Needs improvement)
Actual performance: Both cameras were triggered at the same time, with less than 1 ms difference between them. The timestamps of the cameras and the radar differed by less than 0.1 s. Depth information of the same (static) objects was acquired from both the cameras and the radar.

Table 4. Performance of Fall Validation Experiment D

Subject: Object detection
Goal: The object detection algorithm can detect vehicles and pedestrians with an accuracy of 60% (Achieved)
Actual performance: Detection accuracy of 63% for vehicles and 44% for pedestrians; classification accuracy was above 97%.

Strong points:

  • Robustness of sensor mounts

Our hardware system is robust. After repeated outdoor driving tests in various road and weather conditions, the relative positions of the sensors stayed the same. Our current mounting solution provides a firm and reliable basis for all the remaining on-road perception tasks to be conducted next semester.

  • Object classification accuracy

The object classification accuracy is above 95%, which exceeds our expectations. This will help in developing a more accurate perception system.

  • Stereo vision accuracy

The stereo vision system can provide depth information for the objects of interest with an accuracy of around 88%. When combined with the radar, the depth accuracy may be further improved after multi-sensor calibration.
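As a rough illustration of how the depth accuracy figure can be evaluated, the sketch below converts disparity to metric depth (Z = f * B / d) and compares the estimate against a ground-truth distance. The focal length, baseline, and measurement values here are hypothetical placeholders, not the team's actual calibration parameters.

```python
# Minimal sketch: disparity-to-depth conversion and a simple accuracy metric.
# All numeric values are illustrative assumptions, not calibrated parameters.
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a disparity value (pixels) to metric depth (metres): Z = f*B/d."""
    return focal_px * baseline_m / disparity_px

def depth_accuracy(estimated_m, ground_truth_m):
    """Accuracy as 1 - relative error, e.g. 0.88 for a 12% depth error."""
    return 1.0 - abs(estimated_m - ground_truth_m) / ground_truth_m

if __name__ == "__main__":
    z = disparity_to_depth(disparity_px=48.0, focal_px=700.0, baseline_m=0.54)
    print(f"estimated depth: {z:.2f} m")                         # ~7.88 m
    print(f"accuracy vs 8 m ground truth: {depth_accuracy(z, 8.0):.1%}")
```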

Weak points & Refinements:

  • Noisy Radar data

The radar still produces some noisy returns even when the testing environment is an empty garage. This may cause the system to give incorrect estimates of objects and their positions. In the future, the team is considering an extended Kalman filter (EKF) to filter out the noisy points and extract useful information from the radar.
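A minimal sketch of the filtering idea is shown below, using a simple one-dimensional Kalman filter on noisy range readings from a static object. The actual system would use an EKF over the radar's full range/azimuth measurements; the noise parameters and simulated data here are made-up placeholders.

```python
# Minimal sketch of smoothing noisy radar range readings with a Kalman filter.
# A 1-D constant-position model only; the real system would use an EKF over
# the radar's full measurement model. All numbers are illustrative.
import numpy as np

def kalman_filter_1d(measurements, process_var=1e-4, meas_var=0.25):
    """Smooth a sequence of noisy scalar range measurements (metres)."""
    x, p = measurements[0], 1.0          # initial state estimate and covariance
    estimates = []
    for z in measurements:
        p += process_var                 # predict: object assumed static
        k = p / (p + meas_var)           # Kalman gain
        x += k * (z - x)                 # update with the new measurement
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_range = 12.0                                    # static object at 12 m
    noisy = true_range + rng.normal(0.0, 0.5, size=50)   # simulated radar noise
    smoothed = kalman_filter_1d(noisy)
    print(f"raw std:        {np.std(noisy):.3f} m")
    print(f"filtered error: {abs(smoothed[-1] - true_range):.3f} m")
```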

  • Stereo vision & object detection latency

Currently, without a high-performance GPU, it takes longer than estimated to build the stereo depth map and perform object detection and classification in real time. Since the system must run in real time for its intended autonomous-driving application, this latency could become a major problem. The team plans to add a high-performance GPU alongside the CPU to resolve it.

  • Synchronization between radar and stereo cameras

To use the radar and the stereo cameras together in the perception system, the two sensors must be properly synchronized so that they report correct information about the same objects at the same timestamp. The team has completed the synchronization between the cameras, but work on the synchronization between the radar and the stereo cameras remains and will start next semester.
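As a rough sketch of one possible software-level approach, the snippet below pairs each stereo frame with the radar message closest in time and rejects pairs whose timestamps differ by more than the 0.1 s gap observed in FVE C. The data structures and timestamp values are illustrative assumptions, not the team's actual interfaces.

```python
# Sketch of nearest-timestamp association between stereo frames and radar
# messages. Assumes each sensor reading carries a host timestamp in seconds
# and that both lists are in arrival (time) order.
import bisect

def match_by_timestamp(camera_stamps, radar_stamps, tolerance=0.1):
    """Pair each camera frame with the nearest radar message within tolerance."""
    pairs = []
    for ci, t in enumerate(camera_stamps):
        j = bisect.bisect_left(radar_stamps, t)
        # candidate radar messages just before and just after the camera frame
        candidates = [k for k in (j - 1, j) if 0 <= k < len(radar_stamps)]
        if not candidates:
            continue
        best = min(candidates, key=lambda k: abs(radar_stamps[k] - t))
        if abs(radar_stamps[best] - t) < tolerance:
            pairs.append((ci, best))
    return pairs

if __name__ == "__main__":
    cam = [0.00, 0.05, 0.10, 0.15]          # stereo frames at 20 Hz (example)
    radar = [0.02, 0.07, 0.13, 0.30]        # radar messages, slightly offset
    print(match_by_timestamp(cam, radar))   # matched (camera, radar) index pairs
```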