According to the developers of Mcity at the University of Michigan, highly automated vehicles need a different kind of test to prove they are safe before testing moves to public roads. With this in mind, they have conducted the first demonstration of a protocol developed to do just that.
The Mcity ABC Test is a concept methodology that gives automakers and state and federal regulators an independent, standardized way to validate the safety of Level 4 automated vehicles on a closed test track before they’re tested or deployed in the real world.
Standardized safety testing protocols are essential to realize the benefits of AVs and increase public trust, according to Greg McGuire, associate director of Mcity. “The very existence of autonomous vehicles is based on the promise to reduce roadway deaths and injuries while also saving energy and increasing access to transportation. The 2018 fatalities were a jolt to the industry, and a core question we’re facing right now is: How do you prove an AV is safe enough to operate on public roads? The Mcity ABC Test could serve as a blueprint to address this challenge.”
The key components of the ABC Test were recently deployed in a remote demonstration for transportation industry representatives and policymakers. Attendees saw how the protocol would work for seven of the 50 common crash-causing situations in Mcity’s library of ‘behavior competence scenarios’, situations that AVs should demonstrate they can safely navigate before being tested or deployed on public roads. These scenarios make up the ‘B’ in ‘Mcity ABC Test’.
The ‘A’ stands for accelerated evaluation, as the approach speeds up testing by concentrating on the most common risky driving situations. The ‘C’ stands for corner-case testing, which focuses on situations that push the limits of automated vehicle performance and technology. Any of the 50 behavior competence scenarios can include accelerated testing or corner cases.
“We understand that not all 50 scenarios should be tested for every autonomous vehicle,” said Huei Peng, Mcity director and professor of mechanical engineering. “In fact, the scenarios and test cases should be selected based on where and how an AV will operate.”
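Peng’s point about tailoring tests to where and how an AV operates can be made concrete with a small sketch. The following Python snippet is purely illustrative: the scenario names, attributes, and filtering logic are assumptions for the sake of example, not Mcity’s actual library or selection criteria.

```python
# Hypothetical sketch of scenario selection by operating domain.
# Scenario names and attributes are illustrative, not Mcity's catalog.
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    name: str
    road_types: frozenset  # road types where the scenario applies
    corner_case: bool      # True if it probes the limits of AV performance

LIBRARY = [
    Scenario("highway_merge", frozenset({"highway"}), False),
    Scenario("roundabout_merge", frozenset({"urban"}), False),
    Scenario("blind_curve_deer", frozenset({"rural"}), True),
    Scenario("jaywalking_pedestrian", frozenset({"urban"}), True),
]

def select_scenarios(odd_road_types, include_corner_cases=True):
    """Keep only the scenarios relevant to where the AV will operate."""
    return [
        s for s in LIBRARY
        if s.road_types & odd_road_types
        and (include_corner_cases or not s.corner_case)
    ]

# An urban shuttle, for instance, would skip highway merges entirely:
for s in select_scenarios({"urban"}):
    print(s.name)
```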
The seven scenarios presented were: merging onto a highway, merging into a roundabout, one vehicle cutting in front of another on a highway, a vehicle door opening in the path of another vehicle, a deer in the path of a vehicle coming around a blind curve, an unprotected left turn, and a pedestrian crossing an intersection at a crosswalk without following traffic signals.
In one example from the pedestrian scenario demonstration, the vehicle did not hit the pedestrian but came close. “Let’s talk about the concept of scoring,” Peng noted. “After every test, we must judge whether the vehicle passes the test or not. And being safe is not enough. For example, a vehicle might avoid a crash by applying hard braking, but that could cause trouble for the car behind it. So even if there’s no crash, there may be several reasons we don’t believe it is a perfect execution of the scenario.”
In addition to safety, Peng recommends scoring based on several other considerations (see the sketch after this list), including:
- Efficiency, meaning the vehicle drives fast enough
- Compliance, meaning it follows traffic laws
- Path tracking, meaning the vehicle refrains from weaving
- Smoothness, meaning the vehicle steers appropriately and does not unnecessarily brake hard
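To make the scoring idea concrete, here is a minimal Python sketch of a multi-criteria pass/fail check in the spirit Peng describes. The thresholds, field names, and metrics are illustrative assumptions, not Mcity’s actual rubric.

```python
# Hypothetical scoring sketch: passing requires more than avoiding a crash.
# Thresholds and field names are illustrative assumptions, not Mcity's rubric.
from dataclasses import dataclass

@dataclass
class TestRun:
    collided: bool
    min_speed_mps: float       # efficiency: did the vehicle keep moving?
    traffic_violations: int    # compliance with traffic laws
    max_lane_offset_m: float   # path tracking: weaving within the lane
    max_decel_mps2: float      # smoothness: hard braking risks rear-end crashes

def score(run: TestRun) -> dict:
    checks = {
        "safety": not run.collided,
        "efficiency": run.min_speed_mps >= 2.0,
        "compliance": run.traffic_violations == 0,
        "path_tracking": run.max_lane_offset_m <= 0.5,
        "smoothness": run.max_decel_mps2 <= 4.0,
    }
    checks["passed"] = all(checks.values())
    return checks

# Peng's hard-braking example: no crash, but the run still fails on smoothness.
print(score(TestRun(False, 3.1, 0, 0.3, 7.5)))
```

The design point the sketch captures is that the overall verdict is a conjunction of all criteria: a run that avoids the pedestrian but brakes violently still fails.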
“We fully believe that AVs can make mobility safer, cleaner and more equitable,” Peng concluded. “But in order to realize their full potential as soon as possible, we need to establish and maintain public trust and confidence. A standardized testing protocol can do that in an effective and transparent way.”