Improving the Vision of Self-Driving Vehicles

Author:
Chinese Association of Automation

Date:
03/07/2020


There may be a better way for autonomous vehicles to learn how to drive themselves: by watching humans. With the help of an improved vision system, self-driving cars could learn simply by observing human operators complete the same task.

Researchers from Deakin University in Australia published their results in IEEE/CAA Journal of Automatica Sinica, a joint publication of the Institute of Electrical and Electronics Engineers (IEEE) and the Chinese Association of Automation.

The team implemented imitation learning, also called learning from demonstration. A human operator drives a vehicle outfitted with three cameras that observe the environment from the front and from each side of the car. The data is then processed through a neural network -- a computer system based on how the brain's neurons interact to process information -- that allows the vehicle to make decisions based on what it learned from watching the human make similar decisions.
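The paper itself does not include code, but the core of imitation learning in this setting, often called behavioral cloning, can be sketched briefly. The snippet below is written in PyTorch purely for illustration; the placeholder camera frames, steering labels, toy model and training settings are assumptions, not the study's actual setup.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder demonstration data: frames from the three cameras (stacked along
# the channel axis) recorded while a human drives, paired with the steering
# command the human applied at that moment. Shapes are illustrative only.
frames = torch.randn(256, 9, 66, 200)   # 3 cameras x 3 RGB channels each
steering = torch.randn(256, 1)          # human steering angle per frame

loader = DataLoader(TensorDataset(frames, steering), batch_size=32, shuffle=True)

# Any image-to-steering regressor can stand in here; a convolutional network
# (as described in the paper) would replace this toy linear model.
model = nn.Sequential(nn.Flatten(), nn.LazyLinear(1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Behavioral cloning: minimize the gap between the model's predicted action
# and the human's recorded action for the same camera input.
for epoch in range(5):
    for images, target in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), target)
        loss.backward()
        optimizer.step()
```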

"The expectation of this process is to generate a model solely from the images taken by the cameras," said paper author Saeid Nahavandi, Alfred Deakin Professor, pro vice-chancellor, chair of engineering and director for the Institute for Intelligent Systems Research and Innovation at Deakin University. "The generated model is then expected to drive the car autonomously."

The processing system is specifically a convolutional neural network, which is modeled on the brain's visual cortex. The network has an input layer, an output layer and any number of processing layers between them. The input layer translates visual information into dots, which are then continuously compared as more visual information comes in. By reducing the visual information this way, the network can quickly process changes in the environment: a shift of dots appearing ahead could indicate an obstacle in the road. This, combined with the knowledge gained from observing the human operator, means the algorithm knows that a sudden obstacle in the road should trigger the vehicle to come to a full stop to avoid an accident.
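As a rough sketch of that structure, a convolutional network for this task might look like the following (again in PyTorch; the layer sizes and input resolution are illustrative assumptions, not the configuration reported by the Deakin team):

```python
import torch
import torch.nn as nn

# Stacked convolutional "processing" layers condense the camera image into a
# compact feature representation; fully connected layers then map those
# features to a single steering output.
class DrivingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 1),   # steering command
        )

    def forward(self, x):
        return self.head(self.features(x))

# One forward pass on a dummy 66x200 RGB frame produces one steering value.
out = DrivingCNN()(torch.randn(1, 3, 66, 200))   # shape (1, 1)
```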

"Having a reliable and robust vision is a mandatory requirement in autonomous vehicles, and convolutional neural networks are one of the most successful deep neural networks for image processing applications," Nahavandi said.

He noted a couple of drawbacks, however. One is that, although imitation learning speeds up the training process and reduces the amount of training data required to produce a good model, convolutional neural networks still need a significant amount of training data to find an optimal configuration of layers and filters, the components that organize the data, and to produce a model capable of driving an autonomous vehicle.

"For example, we found that increasing the number of filters does not necessarily result in a better performance," Nahavandi said. "The optimal selection of parameters of the network and training procedure is still an open question that researchers are actively investigating worldwide." Next, the researchers plan to study more intelligent and efficient techniques, including genetic and evolutionary algorithms to obtain the optimum set of parameters to better produce a self-learning, self-driving vehicle.

Source: EurekAlert!, the online, global news service operated by AAAS, the science society: https://www.eurekalert.org/pub_releases/2020-03/caoa-itv030520.php
