There has been a great deal of research into ‘what makes a driver safe?’[1] However, with the rapid development of automated vehicles and artificial intelligence systems, the more pertinent question has become ‘what makes a self-driving car safe?’, and how do we go about answering it?

In everyday driving it is driver behaviour, rather than performance, that is often most important for safety[2]. Driver performance describes the ability of the driver to undertake various aspects of the driving task to a high standard of skill. Driver behaviour, on the other hand, describes the way a driver chooses to drive.

The main contributors to road crashes are all ‘behaviours’ rather than skills. When someone speeds, drives while distracted, drink-drives or chooses not to wear a seat-belt, they are making behavioural choices which, regardless of their driving skill, make collisions and injuries more likely.

What, then, will self-driving vehicles mean for the distinction between performance and behaviour? Will we need to consider the difference when cars drive themselves? The answer to this has to be yes. In fact, the narrative around automated vehicles is already making good use of the performance/behaviour distinction.

The justification for society needing self-driving vehicles rests on the assumption that they will remove many of the bad driving behaviours mentioned above; self-driving cars will not need to drink alcohol, become distracted, or choose higher speeds just for the thrill of it. Unencumbered by imperfect ‘human’ decision making, automated vehicles will be safer than human drivers; they will remove a good portion of the ‘95%’ of road collisions that are due in some part to ‘human error’[3], or so the argument goes.

Driving ‘performance’ is central to the urgency of perfecting self-driving technologies. Sensors are marketed on their ability to detect objects in a given location and it is claimed that artificial intelligence systems will anticipate the behaviour of other road users. The argument in general for self-driving vehicles points to the advantages of machines over humans in ‘speed of thought’.

The distinction between behaviour and performance is relevant when thinking about how self-driving vehicles will fail. Failures of performance are easy to imagine. Put simply, things could stop working; sensors can malfunction, Wi-Fi connections between vehicles might go down, and artificial intelligence may reach the limits of its ability. Failures like these will all lead to situations in which the performance of the vehicle is no longer high enough to maintain the required level of safety.

There will also be failures in behaviour. At least three types seem likely:

  1. Not behaving as expected: This has already been noted as a potential issue with existing self-driving vehicles[4]; for example, human drivers ‘not knowing how to respond’ to an automated vehicle that is ‘not behaving like everything else on the road’. ‘Behaviour as expected’ will certainly be an important consideration while the vehicles on our roads are a mix of self-driving and human-driven, because humans have expectations. It is even possible that artificial intelligence systems will begin to learn to expect certain behaviour from other vehicles, so behaving as expected will continue to be important.

  2. System performance vs system safety: The preliminary report from the NTSB[5] into the fatal collision of Uber’s self-driving car with a pedestrian (who was pushing a bicycle) in 2018 found that the car’s system detected the pedestrian approximately six seconds before impact, although it took some time to work out exactly what the object was. The system determined that emergency braking was required 1.3 seconds before impact, but Uber had disabled automated braking in this mode. On the face of it, this looks like a system performance failure. Note, however, that even while it was uncertain about the nature of the hazard, the vehicle continued on its path. That is arguably a behavioural decision. The system must operate under some level of uncertainty, and society will need to decide what levels of uncertainty are acceptable (a simple sketch of this kind of threshold decision follows after this list).

  3. Playing God: Much has been written about how self-driving cars will decide on a course of action when a collision is inevitable. The choices vehicles may face are often framed around the famous ethical ‘trolley’ dilemma[6], in which people are asked whether they would swerve to avoid killing five people if it meant killing one person instead. A thorough treatment of the ethics of self-driving vehicles is beyond the scope of this paper; needless to say, it requires significant further exploration.
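
To make the point about acceptable uncertainty concrete, below is a minimal, hypothetical sketch in Python. It does not describe Uber’s software or any real driving system; the detection fields and the threshold values are invented purely for illustration. The point is that the thresholds themselves are behavioural choices made by designers and regulators, however capable the sensors feeding them may be.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        object_class: str        # e.g. 'unknown', 'bicycle', 'pedestrian'
        confidence: float        # classifier confidence, 0.0 to 1.0
        time_to_impact_s: float  # estimated seconds until collision

    # Hypothetical policy values: choosing these numbers is a behavioural
    # decision about acceptable risk, not a question of sensor performance.
    BRAKE_CONFIDENCE_THRESHOLD = 0.6
    BRAKE_TIME_THRESHOLD_S = 1.5

    def should_emergency_brake(d: Detection) -> bool:
        """Apply the (hypothetical) policy to a single detection."""
        if d.time_to_impact_s > BRAKE_TIME_THRESHOLD_S:
            return False  # still time to keep observing rather than brake hard
        return d.confidence >= BRAKE_CONFIDENCE_THRESHOLD

    # An object detected late and classified with only moderate confidence:
    # under this policy the vehicle would continue on its path.
    print(should_emergency_brake(Detection('unknown', 0.5, 1.3)))  # False

Raising or lowering either threshold changes how the vehicle ‘behaves’ under uncertainty, and that is exactly the kind of choice society, rather than sensor engineers alone, will need to weigh.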

In summary

Self-driving vehicles will need to behave, as well as perform. It is certainly possible that they will bring about a large safety benefit overall. However, it also seems likely that we will need to consider matters other than sensor performance and the computational power of the artificial intelligence system doing the driving when attempting to answer the question ‘what makes a self-driving car safe?’


[1] Williams, A. F., & O'Neill, B. (1974). On-the-road driving records of licensed race drivers. Accident Analysis & Prevention, 6(3-4), 263-270.

[2] Evans, L. (2004). Traffic safety. Science Serving Society.

[3] Sabey, B. E., & Taylor, H. (1980). The known risks we run: the highway. In Societal risk assessment (pp. 43-70). Springer, Boston, MA.

[4] Schoettle, B., & Sivak, M. (2015). A preliminary analysis of real-world crashes involving self-driving vehicles. University of Michigan Transportation Research Institute.

[5] National Transportation Safety Board. (2018). Preliminary Report: Highway HWY18MH010. Washington, DC: NTSB.

[6] Thomson, J. J. (1984). The trolley problem. Yale Law Journal, 94, 1395-1415.
