Self-driving cars are widely seen as the future of the automobile, and a burning question remains: when will they arrive on our roads? Today, one of the biggest challenges for self-driving cars is dense traffic. It is also difficult to test these systems thoroughly.
Researchers at the University of Southern California and Arizona State University have published a new study that tackles a long-standing problem for self-driving car developers: testing the perception algorithms that allow a car to understand what it "sees." The new mathematical method developed by the team can identify bugs in the system before the car hits the road.
“Making perception algorithms robust is one of the foremost challenges for autonomous systems,” said the study’s lead author Anand Balakrishnan, a USC computer science Ph.D. student.
“Using this method, developers can narrow in on errors in the perception algorithms much faster and use this information to further train the system. The same way cars have to go through crash tests to ensure safety, this method offers a pre-emptive test to catch errors in autonomous systems.”
The paper, titled "Specifying and Evaluating Quality Metrics for Vision-based Perception Systems," was presented at the Design, Automation, and Test in Europe conference in Italy on Mar. 28.
#What are Self-Driving Cars?
Self-driving cars are autonomous vehicles capable of sensing the environment and road conditions and driving themselves, without requiring human intervention.
Autonomous cars combine a variety of sensors to perceive their surroundings, such as radar, Lidar, sonar, GPS, odometry and inertial measurement units. Advanced control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage.
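To make the idea of combining sensors concrete, here is a minimal sketch of one common fusion technique, inverse-variance weighting, which merges independent distance estimates (say, from radar and lidar) by trusting the less noisy sensor more. The sensor values and variances below are made up for illustration; production vehicles use far more sophisticated filters.

```python
def fuse_estimates(measurements):
    """Inverse-variance weighted fusion of independent estimates.

    measurements: list of (value, variance) pairs, e.g. one reading
    from radar and one from lidar for the same obstacle's distance.
    Returns the fused value and its (smaller) variance.
    """
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    fused = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return fused, 1.0 / total

# Hypothetical readings: radar says 20.5 m (noisy), lidar says 20.1 m (precise)
value, var = fuse_estimates([(20.5, 0.25), (20.1, 0.01)])
```

Because the lidar reading has much lower variance, the fused estimate lands close to 20.1 m, and the fused variance is smaller than either input's.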
#Errors and learnings
Typically, self-driving cars recognize road conditions via machine learning algorithms. A huge dataset of road images is used to train the system so that it can recognize objects on the road.
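As a toy illustration of this idea (real perception stacks train deep neural networks on millions of images), the sketch below uses a 1-nearest-neighbour classifier over hand-made feature vectors to stand in for "learning labels from a dataset": given labeled training examples, a new observation is assigned the label of its closest neighbour. All features and labels here are invented for the example.

```python
def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(training_set, features):
    """Return the label of the closest training example (1-NN)."""
    label, _ = min(
        ((lbl, euclidean(feat, features)) for feat, lbl in training_set),
        key=lambda pair: pair[1],
    )
    return label

# (feature vector, label) pairs -- the numbers are made up for illustration
training_set = [
    ((0.9, 0.1), "pedestrian"),
    ((0.2, 0.8), "vehicle"),
    ((0.5, 0.5), "cyclist"),
]

result = predict(training_set, (0.85, 0.2))  # nearest to the pedestrian example
```

The same principle scales up: more training data and richer features (here, a deep network's learned representation of an image) make the learned decision boundary more reliable, which is exactly why perception failures often trace back to gaps in the training set.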
But the system can go wrong. In the case of a fatal accident between a self-driving car and a pedestrian in Arizona last March, the software classified the pedestrian as a “false positive” and decided it didn’t need to stop.
“We thought, clearly there is some issue with the way this perception algorithm has been trained,” said study co-author Jyo Deshmukh, a USC computer science professor and former research and development engineer for Toyota specializing in autonomous vehicle safety.
“When a human being perceives a video, there are certain assumptions about persistence that we implicitly use: if we see a car within a video frame, we expect to see a car at a nearby location in the next video frame. This is one of several ‘sanity conditions’ that we want the perception algorithm to satisfy before deployment.”
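The persistence condition described above can be sketched as a simple check over per-frame detections. This is an illustrative version, not the formal logic from the paper: if an object is detected in one frame, a detection with the same label should appear within some distance threshold in the next frame, and any detection that vanishes is flagged. The threshold and detection format are assumptions for the example.

```python
def persistence_violations(frames, max_shift=25.0):
    """Flag detections that vanish between consecutive frames.

    frames: list of per-frame detections, each a list of (label, x, y)
    in image coordinates. max_shift is the assumed maximum plausible
    movement of an object between frames (in pixels).
    Returns (frame_index, detection) pairs that violate persistence.
    """
    violations = []
    for t in range(len(frames) - 1):
        for label, x, y in frames[t]:
            survives = any(
                lbl == label and (x - x2) ** 2 + (y - y2) ** 2 <= max_shift ** 2
                for lbl, x2, y2 in frames[t + 1]
            )
            if not survives:
                violations.append((t, (label, x, y)))
    return violations

# Invented detections: a car drifts slightly, then disappears entirely
frames = [
    [("car", 100.0, 40.0)],
    [("car", 110.0, 42.0)],  # small shift -- satisfies persistence
    [],                      # car vanished -- violation at frame 1
]
bad = persistence_violations(frames)
```

Running such a check over the raw video dataset pinpoints the exact frames where the perception output behaves implausibly, which is the kind of pre-deployment error localization the researchers describe.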
The team built a mathematical logic that tests the machine learning algorithm against raw video datasets of driving scenes.
The team’s method can identify anomalies or bugs in the perception algorithm before deployment on the road, and it allows developers to pinpoint specific problems.