The trolley problem is a classic philosophical dilemma often used to illustrate the moral questions around how autonomous vehicles should be programmed to react in different situations. However, this thought experiment may be just the tip of the iceberg. Here Kostas Poulios, principal design and development engineer at steering systems specialist Pailton Engineering, which supplies custom-made steering parts and complete steering systems for commercial vehicles, buses and military vehicles, takes a closer look at the ethical dilemmas of fully autonomous vehicles.
In July 2020, Tesla CEO Elon Musk boldly announced that his company would have fully autonomous vehicles ready by the end of the year. In the UK, the government has pledged to have fully driverless cars on the road by 2021. To the untrained eye, it may seem like a driverless world is just around the corner.
Let’s be clear what we are talking about here. There are six levels of driving automation, from Level 0, with no automation at all, to Level 5. Level 1 covers driver assistance features such as cruise control. The Automated Lane Keeping Systems (ALKS) covered by the consultation the UK government announced this summer would be Level 3, where the driver must remain ready to take back control in an emergency. Level 5, where no human intervention is involved at all, is a fundamentally different ball game.
Aside from any technological or regulatory hurdles on the path to full autonomy, Level 5 raises profound ethical dilemmas. The trolley problem is one thought experiment that is inevitably raised in this context. A classic philosophical dilemma, the trolley problem asks respondents how they would act if a runaway vehicle were on course to harm a group of people.
In the first scenario, you do not intervene and allow events to run their course, and two or more people are harmed. In the second, you intervene to redirect the vehicle onto another path, where only one person is killed. Variations abound, but the basic structure of the ethical dilemma is roughly the same.
One problematic assumption underlying much of the discussion is the notion of a universal moral code. If we could just find the right answer, or at least agree on one, then we could program the AV to respond accordingly.
Unfortunately, as a major study published in the scientific journal Nature demonstrated, there is no universal moral code. In the Moral Machine experiment, researchers gathered tens of millions of responses from people in more than 200 countries and territories and, using variations of the trolley dilemma, showed that our moral judgements and intuitions are culturally contingent, not universal. Put simply, people in different parts of the world reached different moral conclusions.
Furthermore, these simplistic thought experiments might not be the most appropriate analogies when it comes to AVs. Unlike the trolley problem, the situations AVs must be programmed to react to are defined by high levels of uncertainty. Programmers cannot determine in advance what the right thing to do will be, because the answer will be context-specific.
AVs will rely on deep learning algorithms; rather than being handed the right answer to a given moral dilemma, they will learn how to respond through exposure to thousands of situations and scenarios. In an emergency, for example when an AV has to decide whether to swerve to avoid something in the road, it is therefore not making a single isolated decision but a series of sequential decisions, each one shaped by what it has just observed and done.
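To make that contrast concrete, here is a minimal, purely illustrative Python sketch, not drawn from any real AV software stack: it compares a one-shot rule (the trolley-problem framing) with a toy agent that re-evaluates noisy sensor readings at every timestep. All names, thresholds and the belief update are invented for illustration only.

```python
# Illustrative sketch only: contrasting a single hard-coded decision with a
# sequence of decisions made under uncertain sensor readings. Hypothetical
# names and thresholds; nothing here reflects real autonomous-vehicle code.

import random


def single_rule(obstacle_detected: bool) -> str:
    """One-shot decision: the trolley-problem framing with a known outcome."""
    return "swerve" if obstacle_detected else "continue"


def sequential_decisions(steps: int = 5, seed: int = 0) -> list:
    """A toy agent re-evaluating noisy sensor input at every timestep.

    Each reading is uncertain, so the belief (and the chosen action) changes
    step by step -- a chain of context-specific choices rather than one
    isolated dilemma.
    """
    rng = random.Random(seed)
    actions = []
    belief_obstacle = 0.2                # initial belief that something is in the road
    for _ in range(steps):
        reading = rng.random()           # stand-in for a noisy sensor measurement
        # Crude running-average update of the belief from the new evidence.
        belief_obstacle = 0.7 * belief_obstacle + 0.3 * reading
        if belief_obstacle > 0.6:
            actions.append("brake")
        elif belief_obstacle > 0.4:
            actions.append("slow down")
        else:
            actions.append("continue")
    return actions


if __name__ == "__main__":
    print(single_rule(obstacle_detected=True))   # one fixed answer
    print(sequential_decisions())                # a different action at each step
```

Even in this toy setting, there is no single moment at which "the" moral choice is made; the outcome emerges from many small, uncertain judgements in sequence, which is why a pre-programmed answer to one idealised dilemma does not translate neatly into AV behaviour.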
As an engineer, I spend my time designing and building bespoke steering parts for specialist vehicles. From electric buses to remote controlled military vehicles, I’ve dealt with enquiries for all types of vehicle. I’m ready for enquiries about autonomous vehicles, but I’m not expecting many any time soon. The world of fully autonomous vehicles is an exciting prospect, but profound ethical dilemmas remain. These dilemmas are important enough that they shouldn’t remain the exclusive domain of engineers and programmers.