Olivier Bockenbach, head of functional safety for KPIT Technologies’ autonomous driving department, highlights the importance of avoiding both over- and undergeneralization when training neural networks, ahead of his presentation at the Autonomous Vehicle Software & AI Symposium, May 21-23, Stuttgart, Germany.
Tell us more about your presentation.
In the past few years, the automotive industry has moved rapidly from ADAS – which merely assists the driver – to full-blown autonomous driving systems. These systems must also be able to dodge suddenly appearing obstacles, a task that demands advanced driving skills and a deep understanding of vehicle dynamics, which can normally only be acquired through intensive training. Making sensors (primarily radars and cameras) and actuators (steering, braking and powertrain control systems) work together successfully is the name of the game here.
The controllability of the situation largely depends on an understanding of the vehicle’s surroundings, as well as of the vehicle’s own behavior. The number of parameters that can influence the outcome of an evasive maneuver is so large that model-based approaches struggle to take them all into account, let alone prioritize them to map out an ideal trajectory.
Artificial intelligence techniques like deep learning aim to understand complex situations by learning from examples. Through proper training, deep learning-based systems learn to identify the features that are relevant in a given scenario and to ignore those that aren’t. Comprehensive training allows the deep learning system to understand similar situations and to propose an appropriate solution through inference. There are numerous publications demonstrating that the inference performance of such systems is steadily increasing, but we have yet to achieve a 100% success rate.
What are the challenges in perfecting AI?
The issue is not just that some of the inferences the neural network makes are wrong; the wrong ones are often also hard to detect, and finding them before they have dangerous knock-on effects requires a thorough investigation.
There tend to be two major reasons for mispredictions, both related to the way the neural network is trained. The first is incomplete training: in some cases, the network has only been trained on a subset of the cases it will encounter in the real world. When it is then asked to provide an output for a scenario it has never seen before, it is likely to fail to deal with it appropriately.
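One way to catch such training gaps before deployment is to measure how densely the training data covers the scenario space. The following is a minimal sketch of that idea; the scenario descriptors (ego speed, obstacle distance), bin edges and sample counts are illustrative assumptions, not part of KPIT’s actual pipeline.

```python
# Sketch: flag regions of the scenario space that the training data never covers.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training scenarios: columns = [ego speed (m/s), obstacle distance (m)].
train_scenarios = rng.uniform([0.0, 5.0], [30.0, 100.0], size=(5000, 2))

# Discretize each dimension into bins and count training samples per grid cell.
speed_edges = np.linspace(0.0, 40.0, 9)    # 8 speed bins up to 40 m/s
dist_edges = np.linspace(0.0, 150.0, 11)   # 10 distance bins up to 150 m
counts, _, _ = np.histogram2d(train_scenarios[:, 0], train_scenarios[:, 1],
                              bins=[speed_edges, dist_edges])

# Cells with no (or too few) samples mark scenarios the network has effectively
# never seen; real-world inputs landing there deserve extra caution.
MIN_SAMPLES = 10
for i, j in np.argwhere(counts < MIN_SAMPLES):
    print(f"Sparse coverage: speed {speed_edges[i]:.0f}-{speed_edges[i+1]:.0f} m/s, "
          f"distance {dist_edges[j]:.0f}-{dist_edges[j+1]:.0f} m "
          f"({int(counts[i, j])} samples)")
```

In this toy setup the training data never contains speeds above 30m/s or distances beyond 100m, so those cells are reported as uncovered; in practice the descriptors and grid would be derived from the operational design domain.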
The other major source of incorrect inferences lies in the neural network itself during training, rather than in the training data. Often called ‘overfitting’, or a lack of generalization, this is the problem of the network memorizing the samples it is trained on rather than learning the underlying function. When faced with a new situation, it will make an incorrect inference and potentially cause a dangerous situation.
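The classic symptom of overfitting is a model that fits its training samples almost perfectly yet performs poorly on held-out data. The sketch below reproduces this with a simple 1-D regression task; the target function, noise level and polynomial degrees are illustrative assumptions only.

```python
# Sketch: overfitting shown by comparing training error with held-out error.
import numpy as np

rng = np.random.default_rng(1)

def target(x):
    return np.sin(3 * x)  # hypothetical underlying function to be learned

x_train = rng.uniform(-1, 1, 20)
y_train = target(x_train) + rng.normal(0, 0.1, 20)  # noisy training samples
x_val = rng.uniform(-1, 1, 200)                     # held-out data, same range
y_val = target(x_val)

for degree in (4, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_mse = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, val MSE {val_mse:.4f}")
```

The high-degree polynomial threads the noisy training points closely, so its training error is very low, but it oscillates between them and its validation error grows much larger. Monitoring held-out error like this is the standard way to detect a model that is memorizing rather than generalizing.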
What can be done to prevent dangerous situations and mitigate their effects when they happen?
Mispredictions can be stopped through prevention during training and through active monitoring during inference. The training phase should guarantee complete, fine-grained coverage of the cases that occur in real life. At the same time, the training must avoid overfitting, so that the neural network generalizes appropriately. During operation, it is then important to verify that the inputs to the neural network fall within the scope of the training that is relevant to the application.
Consider the following example from autonomous driving: we need to verify that the speeds of surrounding vehicles inferred by the neural network fall within the boundaries defined during training. Another possibility is to perform spatial and temporal tracking on the results. The surrounding traffic is, after all, still governed by the laws of physics, so a neural network predicting that a vehicle changes direction with an acceleration of 5g is clearly making an incorrect prediction, and this needs to be corrected.
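A runtime monitor combining both checks might look like the sketch below: a range check against the training envelope, plus a physics check on the acceleration implied by consecutive inferences. The speed bounds, the acceleration limit and the Track structure are illustrative assumptions, not a description of KPIT’s implementation.

```python
# Sketch: runtime plausibility checks on a neural network's speed inferences.
from dataclasses import dataclass

G = 9.81                           # m/s^2
TRAIN_SPEED_RANGE = (0.0, 60.0)    # m/s; assumed envelope of the training data
MAX_PLAUSIBLE_ACCEL = 1.5 * G      # assumed bound; anything near 5 g is clearly wrong

@dataclass
class Track:
    """Last accepted state of one tracked vehicle."""
    speed: float       # m/s
    timestamp: float   # s

def validate_inference(track: Track, inferred_speed: float, t: float) -> bool:
    """Return True if the new inference passes both sanity checks."""
    # 1. Range check: the inference must fall inside the training envelope.
    lo, hi = TRAIN_SPEED_RANGE
    if not lo <= inferred_speed <= hi:
        return False

    # 2. Physics check: the acceleration implied since the last accepted
    #    state must stay below a plausible bound.
    dt = t - track.timestamp
    if dt > 0 and abs(inferred_speed - track.speed) / dt > MAX_PLAUSIBLE_ACCEL:
        return False

    # Accept the inference and update the track.
    track.speed, track.timestamp = inferred_speed, t
    return True

# Example: a jump from 20 m/s to 70 m/s in 0.1 s implies roughly 51 g and also
# leaves the training envelope, so it is rejected; a small update is accepted.
track = Track(speed=20.0, timestamp=0.0)
print(validate_inference(track, 70.0, 0.1))   # False
print(validate_inference(track, 21.0, 0.1))   # True
```

Rejected inferences would typically be replaced by the tracker’s own prediction for that time step rather than simply dropped, so that downstream planning always receives a physically consistent picture of the traffic.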
Olivier will present ‘Autonomous driving and AI: an approach to achieve functional safety’ at the Autonomous Vehicle Software & AI Symposium, which is held during Autonomous Vehicle Technology Expo in Stuttgart on May 21-23. For more information about the Expo and Conference, as well as full conference programs, head to the event website. Expo entry is free of charge, but rates apply for conference passes, which provide access to three symposiums: Test & Development, Software & AI and Interior Design & Technology.