
Summary

 

 

  • Project Description

Current self-driving cars, such as those operated by Google and Uber, have many limitations in their perception systems. As can be seen in Fig. 1 below, existing sensor racks are bulky, expensive, and hard to maintain. This is due to the large number of redundant sensors such systems use to avoid misreading the environment under varying driving conditions.

  

Fig. 1. Current sensor racks used by Google (left) and Uber (right) autonomous cars

 

Another issue with existing automotive perception systems is that they do not adequately fuse data from complementary sensors. Consequently, even an advanced autonomous vehicle can fail if all of its sensors are blinded by a single stimulus. Consider, for example, the Tesla Autopilot crash that occurred when the car's camera was overwhelmed by bright light reflecting off the white side of a crossing truck and its radar failed to compensate.

In comparison, Fig. 2 below shows the self-driving car developed by our sponsor, Delphi Automotive, one of the first autonomous vehicles to drive coast-to-coast across the US. In this case, the sensors were installed while preserving the vehicle's form and aesthetics. This was made possible by using fewer sensors (meaning less redundancy) through smarter programming and sensor fusion, a goal that all makers of autonomous vehicles seek to achieve.


Fig. 2. Delphi’s self-driving SUV has integrated sensors

  • Project Information

    In the previous section, we identified the needs of autonomous vehicle developers: an inexpensive, reliable, and minimalistic automotive perception system that is easy to integrate and test. Developers want intelligent sensor fusion both to reduce the total number of sensors used and to minimize the risk of misreading the environment. In this project, we combine the input from multiple sensors to create an improved perception system that can be installed in any car for autonomous driving purposes. Our minimalistic system uses fewer sensors, making it less expensive and easier to integrate compared to existing solutions.

    Stereo vision and radar are typically used for short-range and long-range perception, respectively. The result of our project is a standalone perception system that fuses these two sensor modalities so that it can perceive at both short and long range simultaneously. Using rviz in ROS, the user can view the detected vehicles and pedestrians in the driving environment, along with their positions and velocities, in real time. We found that the radar and vision subsystems complement each other well for object detection: the vision subsystem identifies pedestrians and vehicles accurately, while the radar subsystem determines object positions and velocities accurately. By unifying these subsystems through sensor fusion, we improved object detection accuracy by over 10% compared to vision alone. Additionally, the radar and vision subsystems have different failure cases, which makes the unified system more robust (sunlight does not affect the radar, for example). A simplified sketch of this fusion and visualization step is shown below.
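    The sketch below is a minimal, hypothetical illustration of this fusion and RViz visualization step, written with rospy. The topic name, the gating distance, and the dictionary layout of the radar tracks and vision detections are assumptions for illustration only, not the project's actual interfaces; the real system subscribes to live radar and vision detection topics rather than using hard-coded samples.

```python
# Hypothetical sketch: match radar tracks (accurate position/velocity) with
# vision detections (accurate class labels), then publish the fused objects
# as RViz markers. Data structures and topic names are placeholders.
import math
import rospy
from visualization_msgs.msg import Marker, MarkerArray


def fuse(radar_tracks, vision_detections, max_gate=2.0):
    """Nearest-neighbour association: each vision detection (x, y, label) is
    matched to the closest radar track (x, y, vx, vy) within max_gate metres."""
    fused = []
    for det in vision_detections:
        best, best_d = None, max_gate
        for track in radar_tracks:
            d = math.hypot(track['x'] - det['x'], track['y'] - det['y'])
            if d < best_d:
                best, best_d = track, d
        if best is not None:
            fused.append({'label': det['label'],            # class from vision
                          'x': best['x'], 'y': best['y'],   # position from radar
                          'vx': best['vx'], 'vy': best['vy']})
    return fused


def to_markers(fused, frame_id='base_link'):
    """Convert fused objects into an RViz MarkerArray (one cube per object)."""
    markers = MarkerArray()
    for i, obj in enumerate(fused):
        m = Marker()
        m.header.frame_id = frame_id
        m.header.stamp = rospy.Time.now()
        m.id = i
        m.type = Marker.CUBE
        m.action = Marker.ADD
        m.pose.position.x = obj['x']
        m.pose.position.y = obj['y']
        m.pose.orientation.w = 1.0
        m.scale.x = m.scale.y = m.scale.z = 1.0
        m.color.r, m.color.g, m.color.a = 0.2, 0.8, 1.0
        markers.markers.append(m)
    return markers


if __name__ == '__main__':
    rospy.init_node('fusion_demo')
    pub = rospy.Publisher('fused_objects', MarkerArray, queue_size=1)
    rospy.sleep(1.0)  # give subscribers (e.g. RViz) time to connect
    # In the real system these would come from the radar and vision subscribers.
    radar = [{'x': 12.0, 'y': 1.5, 'vx': -3.0, 'vy': 0.0}]
    vision = [{'x': 11.4, 'y': 1.8, 'label': 'vehicle'}]
    pub.publish(to_markers(fuse(radar, vision)))
```

    Nearest-neighbour gating is only the simplest possible association scheme; the key idea it illustrates is that each fused object takes its class label from the vision subsystem and its position and velocity from the radar subsystem.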

    Based on these results, we successfully met the aforementioned user needs with a custom standalone perception system that can function independently or in tandem with an existing system. Through the sensor fusion of stereo vision and radar, our system identifies objects in most driving conditions while remaining inexpensive, compact, and efficient relative to current solutions.

    In summary, the motivation for this project was to improve automotive perception for autonomous driving. To that end, we explored the use of complementary stereo-vision and radar sensor technologies to increase the accuracy and reliability of identifying objects in the driving environment.