
Spring Validation Experiments

SVE Document

Experiment G (Spring Validation Experiment)

Objective: Demonstrate and verify the functionality of the fully integrated system.

Location: Major streets around campus with moderate traffic flow, in favorable weather conditions (daytime).

Elements: Integrated perception system with the stereo cameras, radar, power devices, and computer on the testing vehicle (Volvo S60).

Procedure:

  • Mount all sensors and place the power and computing devices properly on the testing vehicle.
  • Drive the car around campus for about 15 minutes.
  • Detect and classify pedestrians and vehicles on the road, displaying their relative positions and velocities in the customized GUI.
  • Continuously track pedestrians and vehicles on the road, each with its own label.

Verification criteria:

  • The ID, relative position, velocity, and class (pedestrian or vehicle) of each detected object should be clearly displayed in the GUI in real time (shown in Figure 1).
  • The absolute position (longitude and latitude) and velocity of the testing vehicle should also be clearly displayed in the GUI in real time.
  • The criteria for Experiments A, B, C, and D should all be met.

Evaluation

Although we successfully completed most of the tasks we had planned for the SVE (see Table 1), we did not meet all the criteria. In the SVE, we did not display the velocities of the detected objects. This was a simple oversight; the system was already capable of calculating and displaying velocities at the time. For the SVE Encore, we were more careful to present all the features of our perception system and made sure to demonstrate them.

Even though we have working implementations of a variety of advanced perception methods, many of them need further work and tuning to perform consistently. We need to adapt the algorithms and methods we currently use so that they are better suited to our system and applications.

Table 1. SVE performance checklist

Success criteria                                            SVE   SVE Encore
Tracking ID of each object displayed in GUI                 No    Yes
Relative position of each object displayed in GUI           Yes   Yes
Velocity of each object displayed in GUI                    No    Yes
Classification of each object displayed in GUI              Yes   Yes
Absolute position (lat. & long.) of host vehicle in GUI     Yes   Yes
Absolute velocity of host vehicle in GUI                    Yes   Yes
Accuracy of filtered depth value of objects > 70%           No    Yes
Accuracy of detection and classification by vision > 60%    Yes   Yes
Accuracy of detection by integrated system > 70%            No    Yes

Strong and weak points

Our team noted the following strengths and weaknesses of our perception system over the course of this project. The strong points are what we depend on for our system’s performance, whereas the weak points are issues we may be able to fix in the future.

Strong points:

  • Robust sensor mounts – After repeated outdoor driving tests in various road and weather conditions, the positions of the sensors have not shifted. This stable mounting provides a strong foundation for the on-road performance of our perception functions.
  • Object classification accuracy – The object classification accuracy is above 80% for detected objects, which exceeds our expectations.
  • Radar position estimation accuracy – Our radar provides depth information for objects of interest with an error rate below 5%. We therefore use sensor fusion to let the radar bolster the performance of our stereo-vision subsystem (a minimal fusion sketch follows this list).
  • Powerful computer – We selected high-end components for our project computer. As a result, it performs calculations quickly enough for us to perceive the environment in real time.
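
To illustrate the fusion idea in the radar bullet above, here is a minimal sketch of inverse-variance depth fusion for a single object matched between the two sensors. The function name, camera parameters, and noise figures are assumptions for illustration only; our actual fusion implementation is not shown in this report.

```python
# Hypothetical inverse-variance depth fusion for one matched object.
# All parameter values below are illustrative assumptions, not our calibration.

def fuse_depth(z_stereo, z_radar, f_px=1000.0, baseline_m=0.3,
               disp_err_px=0.5, radar_rel_err=0.02):
    """Fuse stereo and radar range estimates for one matched object (meters)."""
    # Stereo depth error grows quadratically with range: dZ = Z^2 * dd / (f * B)
    var_stereo = (z_stereo ** 2 * disp_err_px / (f_px * baseline_m)) ** 2
    # Radar range error modeled as a fixed fraction of range (assumption)
    var_radar = (radar_rel_err * z_radar) ** 2
    w_s = 1.0 / var_stereo
    w_r = 1.0 / var_radar
    return (w_s * z_stereo + w_r * z_radar) / (w_s + w_r)

print(fuse_depth(z_stereo=38.0, z_radar=40.1))  # ~39.9 m: radar dominates far away
```

Because stereo depth error grows quadratically with range while radar error stays roughly proportional to it, this weighting naturally shifts toward the radar for distant objects.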

Weak points:

  • Noisy tracking-level data from the radar – The radar returns very noisy data when we acquire tracking data over the CAN bus, even when the testing environment is an empty garage. For now, our own tracking and filtering method works well (a generic filtering sketch follows this list). In the future, however, we would like to make use of the radar’s automatically calculated tracking points, provided we can extract useful information from them.
  • Stereo-vision disparity map – Computing the stereo-vision disparity map takes longer than we would like. Right now, we run the SGBM algorithm on our computer’s CPU at a refresh rate of ~5 Hz. To improve real-time performance, we plan to run the algorithm on the computer’s high-performance GPU (see the sketch after this list).
  • Unimpressive stereo-vision range – In real-world testing, our stereo-vision subsystem performs poorly on objects farther than 40 meters away, which is especially problematic for identifying pedestrians. In the future, we could adjust the camera settings automatically based on the environment and conditions detected by our system, which could improve performance at medium range. (The depth-error calculation after this list shows why range degrades this way.)
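
The report does not detail the tracking and filtering method mentioned in the radar bullet above; as a stand-in, the sketch below shows a generic constant-velocity Kalman filter of the kind commonly used to smooth noisy range tracks. The update rate and all noise parameters are made-up values, not our tuned ones.

```python
import numpy as np

# Generic constant-velocity Kalman filter for one radar range track.
# All noise parameters are illustrative assumptions, not our tuned values.
dt = 0.05                                  # radar update period (20 Hz, assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition: [range, range-rate]
H = np.array([[1.0, 0.0]])                 # we only measure range
Q = np.diag([0.01, 0.1])                   # process noise (assumed)
R = np.array([[4.0]])                      # measurement noise, m^2 (assumed)

x = np.array([[30.0], [0.0]])              # initial state estimate
P = np.eye(2) * 10.0                       # initial covariance

def kf_step(x, P, z):
    """One predict/update cycle for a scalar range measurement z."""
    x = F @ x                               # predict state
    P = F @ P @ F.T + Q                     # predict covariance
    y = np.array([[z]]) - H @ x             # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y                           # update state
    P = (np.eye(2) - K @ H) @ P             # update covariance
    return x, P

for z in [29.8, 31.1, 28.9, 30.4]:          # noisy CAN-bus range readings
    x, P = kf_step(x, P, z)
print(x.ravel())                             # smoothed range and range-rate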
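For the disparity-map bullet, the CPU path we describe corresponds roughly to OpenCV’s SGBM matcher shown below; the parameter values and image paths are placeholders, not our configuration. The proposed GPU migration would swap in OpenCV’s CUDA stereo matchers, which are only available in CUDA-enabled builds.

```python
import cv2

# CPU SGBM as we run it today (~5 Hz); parameter values here are placeholders.
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,        # must be divisible by 16
    blockSize=5,
    P1=8 * 3 * 5 ** 2,         # smoothness penalties, per the OpenCV docs
    P2=32 * 3 * 5 ** 2,
    mode=cv2.STEREO_SGBM_MODE_SGBM_3WAY,  # faster multi-threaded CPU variant
)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)   # rectified pair (paths assumed)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
disparity = sgbm.compute(left, right).astype("float32") / 16.0  # fixed-point to pixels

# A CUDA-enabled OpenCV build exposes GPU matchers (e.g. cv2.cuda.createStereoBM),
# which is the kind of drop-in replacement we would evaluate for the speedup.
```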
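Finally, the ~40 m limit in the last bullet is consistent with how stereo depth uncertainty scales with range: since Z = f·B/d, a fixed disparity error Δd yields ΔZ ≈ Z²·Δd/(f·B). The quick calculation below uses assumed values for focal length and baseline rather than our measured calibration.

```python
# Depth resolution of a stereo rig: dZ ≈ Z^2 * dd / (f * B).
# f (pixels), B (meters), and dd (pixels) below are assumptions for illustration.
f_px, baseline_m, disp_err_px = 1400.0, 0.3, 0.5

for z in (10, 20, 40):
    dz = z ** 2 * disp_err_px / (f_px * baseline_m)
    print(f"range {z:>2} m -> depth uncertainty ~{dz:.2f} m")
# Uncertainty roughly quadruples from 20 m to 40 m, which matches the
# degradation we observe beyond ~40 m.
```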