It is widely believed that integrating artificial intelligence into AVs is necessary but highly challenging. The hope is that developers will not have to write code describing how a vehicle should respond in every possible situation, but that eventually AI systems will be able to use sensor data and algorithms to reach their own decisions about every driving maneuver.
However, there is a long way to go before AI performs in the same way as humans. “It is difficult to program an AV to cope with the huge number of situations it will encounter in the open world,” says Dr Lothar Baum, director of engineering for cognitive systems at Bosch. “Humans face the same problems, but do not address them solely based on explicit rules – the ingredients of classical software programs. Gut feeling cannot be engineered into a system.”
Learning curve
Kersten Heineke, automotive partner with McKinsey & Company, is also cautious about an AV’s ability to deal with every scenario that driving might throw up. “Eventually, AVs will cope,” he says. “However, the situations that a car will need to handle will be limited in the beginning by geofencing the car’s operations, excluding complex intersections, for example. Operations might initially be suspended at night or in bad weather. Vehicles will continuously learn to cope with new edge cases, so we will gradually get closer to a state where the vehicle is able to handle every situation.”
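In software terms, that staged rollout amounts to an operational-design-domain (ODD) check before the service runs. The sketch below is purely illustrative, assuming an invented bounding-box geofence and a simple list of disallowed conditions, not any operator’s actual rules:

```python
# Toy ODD check: geofence the service area and suspend operation at night
# or in bad weather. All boundaries and thresholds are invented examples.
from dataclasses import dataclass

@dataclass
class Conditions:
    lat: float
    lon: float
    hour: int          # local time, 0-23
    weather: str       # e.g. "clear", "rain", "snow"

# Assumed geofence: a simple bounding box around a hypothetical service area.
GEOFENCE = {"lat": (59.30, 59.40), "lon": (18.00, 18.15)}
BAD_WEATHER = {"rain", "snow", "fog"}

def within_odd(c: Conditions) -> bool:
    """Return True only when location, time of day and weather all pass."""
    in_fence = (GEOFENCE["lat"][0] <= c.lat <= GEOFENCE["lat"][1]
                and GEOFENCE["lon"][0] <= c.lon <= GEOFENCE["lon"][1])
    daytime = 6 <= c.hour <= 20
    return in_fence and daytime and c.weather not in BAD_WEATHER

print(within_odd(Conditions(59.33, 18.06, hour=14, weather="clear")))  # True
print(within_odd(Conditions(59.33, 18.06, hour=23, weather="clear")))  # False
```

As edge cases are mastered, widening the fence, the hours, or the weather list is how the operating envelope would gradually expand.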
Machine learning, in which programs can be trained to make deductions, is seen by many as the way to achieve a system whose reactions approach a human’s. For example, Volkswagen reports feeding its image recognition algorithm thousands of images so the system can learn to distinguish between road users. “Deep learning and AI are not fundamentally new,” says Baum of Bosch. “What’s changed is the complexity of the networks: we have more layers with much better performance, especially visual capability that’s greater than the human eye.”
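As a rough illustration of the kind of training loop behind such image recognition, the sketch below trains a small multi-layer classifier to label road users. This is not Volkswagen’s or Bosch’s actual system: the class list, network shape, and the random stand-in images are all assumptions made for demonstration.

```python
# Minimal sketch of training a deep image classifier on road-user labels.
# Everything here (classes, architecture, data) is illustrative only.
import torch
import torch.nn as nn

ROAD_USERS = ["pedestrian", "cyclist", "car", "truck"]  # assumed label set

class RoadUserNet(nn.Module):
    """A small deep network; more layers generally mean more capacity."""
    def __init__(self, num_classes: int = len(ROAD_USERS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = RoadUserNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for "thousands of labelled images": random tensors here.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, len(ROAD_USERS), (8,))

for _ in range(5):  # a few gradient steps, purely to show the training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

In practice the same loop would run over large curated datasets, which is where the “thousands of images” come in.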
While many AI developments focus on increasing processing power to boost deep learning abilities, others are focusing on more human-like qualities. For example, iSee, a startup spin-off from the Massachusetts Institute of Technology, is working on a solution founded on understanding how and why humans make decisions that affect how they drive. “Understanding how humans react when driving and being able to put that into AI for autonomous driving could provide an important missing link in development,” says Josh Tenenbaum, one of iSee’s founders.
Meanwhile, Nissan has been researching how to harness signals from the human driver’s brain to “help the vehicle’s autonomous and manual systems learn from the driver”.
Power and data
One challenge for any AI developer is providing the computing power needed to run complex algorithms and process vast amounts of sensor data. McKinsey’s Heineke reckons this must be addressed before AI can make big strides. “Onboard computing power requirements are massive, and this is both costly and energy-consuming, especially when we move into electric AVs, which will be demanded by cities,” he says.
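A back-of-envelope calculation shows why “massive” is the right word. The sensor counts and data rates below are illustrative assumptions, not figures from McKinsey or any manufacturer:

```python
# Rough arithmetic on raw sensor throughput for an assumed AV sensor suite.
CAMERAS = 8
FRAME_BYTES = 1920 * 1080 * 3      # one uncompressed 1080p RGB frame
FPS = 30
LIDAR_BYTES_PER_SEC = 70_000_000   # assumed ~70 MB/s for a spinning lidar

camera_rate = CAMERAS * FRAME_BYTES * FPS          # bytes per second
total_rate = camera_rate + LIDAR_BYTES_PER_SEC
print(f"camera stream: {camera_rate / 1e9:.2f} GB/s")   # ~1.49 GB/s
print(f"total sensor stream: {total_rate / 1e9:.2f} GB/s")  # ~1.56 GB/s
```

Roughly 1.5GB every second, before radar or any processing, sustained over hours of driving: that is the compute and energy budget an electric AV has to carry onboard.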
Heineke also believes that AVs should share their data to improve their effectiveness. “There will be a tremendous amount of data, most of which will only be transferred when the vehicle comes in for its daily charging stop,” he says. “Only data on new edge cases and data for selected use cases such as a specific traffic situation will be shared live.”
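A minimal sketch of the data-sharing policy Heineke describes might look like the following, with routine logs held back for the daily charging stop and only flagged edge cases uploaded live. The `Event` type, the flagged categories, and the novelty threshold are all invented for illustration:

```python
# Illustrative telemetry policy: batch routine data, share edge cases live.
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str          # e.g. "lane_keep", "hard_brake", "unknown_object"
    novelty: float     # 0.0 = routine, 1.0 = never seen before

@dataclass
class TelemetryPolicy:
    live_kinds: set = field(default_factory=lambda: {"unknown_object"})
    novelty_threshold: float = 0.8
    batch: list = field(default_factory=list)

    def record(self, event: Event) -> None:
        if event.kind in self.live_kinds or event.novelty >= self.novelty_threshold:
            self.upload_now(event)        # new edge case: share immediately
        else:
            self.batch.append(event)      # routine data: hold for charging stop

    def upload_now(self, event: Event) -> None:
        print(f"live upload: {event.kind} (novelty {event.novelty:.2f})")

    def flush_at_charger(self) -> None:
        print(f"bulk transfer of {len(self.batch)} routine events")
        self.batch.clear()

policy = TelemetryPolicy()
policy.record(Event("lane_keep", 0.1))          # held for the nightly batch
policy.record(Event("unknown_object", 0.95))    # shared live
policy.flush_at_charger()
```

The design choice is the one Heineke points to: most of the “tremendous amount of data” never needs a live connection at all.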
Stefan Myhrberg, head of innovation at Ericsson, also points out the role of V2X in reducing the data requirements of AI. “We created an ecosystem to support our current autonomous bus trial in Stockholm, Sweden,” he says. “We chose the area for its 5G connectivity, which allows the vehicle to work with the Internet of Things. To extend the route range beyond that, we’d need more sensors and that demands greater computing ability.”
One of the big challenges facing AI developers is the regulatory hurdles that must be cleared to test AVs. When Ericsson began its trial in early 2018, gaining permission was the hardest part. “At first, the authorities wouldn’t recognize the buses as vehicles because they don’t have steering wheels or rearview mirrors,” says Myhrberg. “Each bus has a human operator on board in case of any safety issues, but they are really there more to look after the 12 passengers.”
While the Swedish Transport Agency found a workable compromise, similar hurdles face most AV trials around the world. Then there is the question of how AI itself will be regulated: TÜV SÜD, for example, is partnering with DFKI to develop an open validation and certification platform for self-driving AI systems.
Dual neural networks
Do the experts believe AI could reduce the time it takes to develop new AVs by cutting the number of scenarios that need to be tested? Baum of Bosch advises caution: “AI can lessen the effort needed for testing AVs, but human input is still needed because AI cannot always give consistent results compared with standard test procedures,” he says. “Small changes detected by cameras can alter the AI’s reactions drastically or fool the neural network in a way that would not fool a human. It’s still not understood why artificial neural networks react to some things that humans filter out. This is why we use dual neural networks, where one part identifies and explains what the other part sees and reacts to. Also, we have to remember that many people will still want to drive, so humans are still vital for their knowledge and skills in testing and development. The processes will change, but humans will not be replaced.”
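One possible reading of that dual-network arrangement is a monitor network that cross-checks what the acting network sees and reacts to, with disagreement flagged for human review. The sketch below is an illustration of that general pattern under those assumptions, not Bosch’s actual design:

```python
# Illustrative dual-network cross-check: a second network monitors the first,
# and any disagreement or low confidence is deferred to a human.
import torch
import torch.nn as nn

def make_net(seed: int) -> nn.Sequential:
    """Two small classifiers with different random initialisations."""
    torch.manual_seed(seed)
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64),
                         nn.ReLU(), nn.Linear(64, 4))

actor = make_net(0)    # the network that "reacts"
monitor = make_net(1)  # the network that checks what the actor sees

def classify_with_check(x: torch.Tensor):
    a = actor(x).softmax(-1)
    m = monitor(x).softmax(-1)
    label = int(a.argmax())
    # Consistent only if both nets agree and the actor is reasonably sure.
    consistent = label == int(m.argmax()) and float(a.max()) > 0.6
    return label, consistent

x = torch.randn(1, 3, 32, 32)  # stand-in camera crop
label, ok = classify_with_check(x)
print(f"class {label}: {'consistent' if ok else 'flag for human review'}")
```

The point of the pattern matches Baum’s: a small perturbation that fools one network is unlikely to fool its independently trained counterpart in the same way, so disagreement becomes a useful alarm.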