In an exclusive interview ahead of this year’s ADAS & Autonomous Vehicle Technology Expo Europe, taking place in Stuttgart, Germany, on May 20, 21 and 22, Philip Koopman, associate professor at Carnegie Mellon University, discusses the role of a multi-constraint satisfaction framework for autonomous vehicle safety. He interrogates the traditional approach to safety that sees it as an engineering process to optimize for low net risk with low cost, and emphasizes achieving acceptability first before optimizing safety or profitability, ensuring the industry can stay ahead of problems.
An internationally recognized expert on autonomous vehicle (AV) safety, Koopman has worked in this area for almost 30 years. He has also worked extensively in more general embedded system design, software quality and safety across numerous transportation, industrial and defense application domains, including conventional automotive software and hardware systems. He originated the UL 4600 autonomous vehicle safety standard, and received the Industry Legend award at the 2024 Self-Driving Industry Awards.
What is your presentation about?
I will be discussing ‘Understanding self-driving vehicle safety’. Removing the human driver fundamentally changes what we actually mean by acceptable safety. A simplistic ‘safer than human driver’ positive risk balance approach must be augmented with additional considerations regarding risk transfer, negligent driving behavior, standards conformance, absence of unreasonable fine-grain risk, ethics and equity concerns.
Current standards frameworks and accompanying definitions are likely to be inadequate to ensure safety due to implicit assumptions that are violated when the human driver is removed. I will propose a framework that relates risk to acceptable safety in a way that is applicable to all autonomous systems.
How does removing the human driver fundamentally change what we mean by acceptable safety?
Current approaches to driving safety implicitly rely on a number of a driver’s human characteristics to work in practice. These include a combination of a desire to not hurt others, pressure to conform to social norms, fear of being hurt in a crash and fear of judicial consequences. These especially come into play when encountering novel, unstructured situations for which a scenario-based rulebook will struggle to provide clear answers as to what the right behavior is. Human drivers more or less try to do ‘the right thing’, where that right thing is highly dependent on the context of the situation and the additional context of social norms, prevailing law, and so on.
Additionally, human drivers have numerous non-driving safety-related obligations, such as equipment inspection/maintenance, responding to equipment failures, safeguarding passenger safety and post-crash scene management. Robotaxis or robotrucks without human drivers will have to cover all these topics to achieve socially acceptable safety, recognizing that even remote support staff will have practical limits to what they can do.
Why will positive risk balance not give socially acceptable safety?
Positive risk balance (PRB) is the starting point for safety, not the ending point. As a hypothetical example, what if the number of total traffic fatalities were cut in half? On a purely PRB basis, that is a win for safety. But, what if the fatalities showed biased outcomes, such as a dramatically higher number of pedestrians being killed than for non-automated driving? Or what if the majority of fatalities involved a computer driver breaking traffic rules such as running a red light before causing a fatality? Or what if there were specific, preventable clusters of harm, such as repeated instances of hitting emergency vehicles due to their flashing lights confusing computer vision algorithms? There would be tremendous pushback against such outcomes.
Imagine further that a manufacturer hypothetically said, “We are killing half as many people, so we have earned the right to ignore traffic laws.” We do not let crash-free human drivers break traffic laws because they have not (yet) had a fatal crash. Why should that be OK for computer drivers?
The point here is that there are aspects to safety beyond net harm (risk balance) that will matter on a practical basis for socially acceptable safety.
What examples are there of safety issues in the news beyond positive risk balance?
We have seen a number of examples in California that have caused significant public concern about safety. There have been numerous incidents of interfering with emergency responders, including blocking firehouse driveways. Despite industry promises that robotaxis would, in effect, not make stupid driving mistakes, we have seen crashes into a bus and a utility pole due to software defects that to ordinary folks look like stupid driving mistakes. We have seen numerous non-crash misbehaviors such as driving through road closure signage at construction sites. We have seen robotaxis dragging yellow emergency scene tape down the road, and driving into wet concrete. While it is true that human drivers also make mistakes, the robotaxi safety narrative has been that they will not make stupid driving mistakes, and every picture of such an incident degrades public trust in the technology. We also saw Cruise have to shut down operations after a pedestrian dragging mishap even though their data claimed dramatic positive risk balance advantages.
What are some specific additional considerations needed beyond positive risk balance?
Safety engineers tend to emphasize reducing net risk. This can work well when other aspects of risk are pushed off onto a human driver. But without a human driver, safety needs to encompass at least the following: acceptably low net risk; acceptably low risk transfer onto vulnerable populations; acceptably low rates of negligent driving behavior (driving decisions that would be considered reckless or unreasonable if a human driver were to make them); acceptably low rates of specific dangerous driving behavior of the type that might provoke a safety recall; and behavior in reasonable conformance with social norms and legal requirements for ethical and equitable outcomes.
What framework would you propose that relates risk to acceptable safety in a way that is applicable to all autonomous systems? How do we need to define acceptable safety for at-scale deployments?
The key insight here is that we traditionally regard safety as an engineering process to optimize for low net risk with affordable cost. Reduce net risk low enough, and you get positive risk balance. But other concerns resist incorporation into a single universal optimization formula. Instead, we should first consider this a multi-constraint satisfaction problem, and achieve acceptability in all areas, including: net risk, risk transfer, negligent behavior, specific dangerous behaviors, ethics and equity. Only if we meet all those constraints can we then optimize whatever objective we like, such as safety. Or profitability.
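The constraint-first approach Koopman describes can be sketched in code. The following is a minimal illustrative sketch, not an implementation from the presentation: the constraint names follow the list above, but all thresholds, field names and numbers are hypothetical placeholders. The key structural point is that acceptability is a set of pass/fail gates checked before any optimization, rather than terms folded into one optimization formula.

```python
# Hypothetical sketch of a multi-constraint satisfaction framework.
# Every acceptability constraint must pass before any objective
# (safety margin, profitability, ...) is optimized. All thresholds
# and field names are illustrative assumptions, not real criteria.

from dataclasses import dataclass

@dataclass
class DeploymentCandidate:
    net_risk: float            # e.g. fatalities per 100M miles
    risk_transfer: float       # excess risk shifted onto vulnerable road users
    negligent_rate: float      # negligent-driving events per million miles
    dangerous_behaviors: int   # open recall-class defect count
    equity_score: float        # 1.0 = fully equitable outcomes

# Each constraint is a pass/fail acceptability test, not a weighted
# term in a single optimization formula.
CONSTRAINTS = [
    ("net risk",            lambda c: c.net_risk <= 1.0),
    ("risk transfer",       lambda c: c.risk_transfer <= 0.1),
    ("negligent behavior",  lambda c: c.negligent_rate <= 0.5),
    ("dangerous behaviors", lambda c: c.dangerous_behaviors == 0),
    ("ethics and equity",   lambda c: c.equity_score >= 0.9),
]

def violations(candidate):
    """Return the list of violated constraints (empty means acceptable)."""
    return [name for name, ok in CONSTRAINTS if not ok(candidate)]

def select(candidates, objective):
    """Optimize the objective only over candidates meeting ALL constraints."""
    feasible = [c for c in candidates if not violations(c)]
    return max(feasible, key=objective) if feasible else None
```

Note the design choice: a candidate with excellent net risk but one open recall-class defect is simply infeasible, which mirrors the argument that positive risk balance does not earn the right to ignore other constraints.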
These concerns tend to be invisible for small-scale operations on public roads because rare events related to safety issues are, well … rare. But as we have seen due to increasing scale of operations in the San Francisco area, the fact that there is more to safety than PRB has become impossible to ignore.
Given your expertise in autonomous vehicles and background in embedded system design as well as safety standards, why did you choose to speak on this topic at ADAS & Autonomous Vehicle Technology Expo Europe 2025 and why is it important at this time?
The presentation will cover the why and the how of a multi-constraint satisfaction framework for autonomous vehicle safety. We are on the cusp of larger-scale deployment of autonomous vehicle technology. I think it is important to help people think about the wider implications of socially acceptable safety so the industry can stay ahead of problems instead of having to react to a stream of adverse news articles.
Hear from Koopman on May 20 during the ‘Challenges, innovations and outlook surrounding the development and safe deployment of ADAS and AV technologies’ conference session at ADAS & Autonomous Vehicle Technology Expo Europe.
Visit the website to find out more and to book a conference pass.