Mitigating the SMIDSY cause of collisions

Victoria Laxton looks at intense classification training and how it could be used to improve driver training

Published on 16 January 2023

I’ve recently published an academic paper exploring a novel way of training lifeguards to spot people drowning in a crowded swimming pool. I believe the output from that research could be applied to driver training, to reduce the casualty rate from SMIDSY (“sorry mate I didn’t see you”) scenarios, as explained in my colleague Malcolm Palmer’s blog.

How do we visually detect events in our environment that help us plan our next move? Having an internal catalogue of possible events is one of the ways we navigate everyday life; it helps us predict what might happen next. But what happens if our internal visualisation of something does not marry up with reality? Key events and behavioural clues can be missed.

Take, for example, a swimmer in distress or drowning. When asked what a swimmer in trouble might look like, we typically say that they will be splashing, calling for help and waving their arms in the air, just like they do in the movies. We internalise this representation and add it to our catalogue. The stark reality, though, is that swimmers in trouble often face a silent battle: trying to breathe becomes the priority (Drowning doesn’t look like drowning), and they are simply unable to call for help or wave as onlookers might expect. If we are not looking for the right clues, we may miss the event entirely, which is why we see examples of swimmers drowning in a pool full of bathers who are oblivious to the tragedy unfolding in front of them.

How can we increase our ability to spot things from behavioural clues? This is where my PhD research comes into play. It has shown positive results for a novel intense classification training task, in which people watched three-second video clips of swimmers either playing in the water or showing very early signs of distress, and had to classify quickly whether the swimmer was drowning or not. After repeated exposure to this training tool, participants’ ability to detect drowning swimmers in the very early stages of distress increased in a post-training drowning prediction test (link1, link2).

What relevance does this have to transport safety, you may ask? Have you ever checked your mirrors before a manoeuvre and missed a cyclist, or pulled out of a junction and very nearly hit a motorbike? Have you ever found yourself saying ‘sorry mate I didn’t see you’ when driving? These are all referred to as ‘looked but failed to see’ errors, and they are a common cause of collisions and road traffic incidents. These errors are believed to be caused in part by internalised visualisations not matching reality: the driver does not have the appropriate catalogue of clues to make the right assessment and response.

This is where an adapted version of the intense classification task could be used to improve drivers’ ability to spot other road users in these potentially dangerous situations. For instance, we could use short clips of different types of approaching bikes, at different distances from the junction, asking drivers to distinguish quickly between a bike and a car. This could equip the driver with a catalogue to assist rapid classification, and may increase their ability to spot bikes when making manoeuvres at junctions. In an age when video footage of near-misses and crashes is abundant (and has already been used to build hazard perception training packages), this approach may hold promise.

How can this training be adapted for the future of mobility?


As mobility changes, drivers and pedestrians will need to become better at picking out visual clues about other road users’ intentions, building an internal catalogue for new mobility technology. This could mean eScooter riders navigating busy shared footpaths (and, vice versa, pedestrians anticipating eScooters); car drivers responding to autonomous vehicles at complex road layouts; or, in a more distant future, predicting our robot overlords’ intentions. The intense classification training could, for example, use short clips of eScooters (or autonomous cars, delivery robots and so on) and ask drivers and pedestrians whether something dangerous is about to happen. Repeated exposure to such training could improve our ability to extract clues from the visual scene about whether the next move is likely to trigger a hazard, allowing us to make better decisions when planning our own actions.

There are many ways the intense classification training could be adapted for road safety and transport. Most importantly, though, the training tool is a promising method for improving our ability to rapidly use clues to identify what might happen next, and ultimately to make our roads safer. Hopefully, in the future, after doing some intense classification, we will all be saying “Thankfully mate, I did see you”.

A blog by Victoria Laxton, Behavioural Researcher