Earlier this year, in March 2018, a woman became the first pedestrian to be killed by a driverless car when she was struck by an autonomous Uber vehicle.
She was crossing the road, wheeling her bicycle, when the car, travelling at around 40 mph, struck her without slowing down. Tragically, she later died in hospital as a result of the injuries she sustained in the accident.
The tragedy shone a light on the growing issue of how driverless cars should be managed, or whether they should even be allowed in their current guise.
The Trolley Problem In Action
Any first-year ethics and moral studies student, and plenty of people besides, will have heard of the trolley problem, sometimes called the trolley dilemma.
The moral thought experiment forces the subject into making a choice that will result in the death of human beings. The problem states that you are the driver of a tram or a train, hurtling out of control towards five people who are stuck on the tracks.
However, you have a chance to save them by switching to the branch track, although there is one person stuck on that track.
So you are definitely going to kill someone; the question is, who do you choose?
What if the single person is a withered old man, and the five people are two young mothers with their three children; does your answer change?
Or maybe the one person is somebody you know and like; does the answer change then?
What if you're not actually the driver, but simply standing next to the lever that switches the train onto the branch track; how does that affect your decision?
Well, researchers at the Massachusetts Institute of Technology, better known as MIT, have just released data they gathered from an online game built around the trolley problem.
The idea was to test people's attitudes towards the ethics a driverless car should possess.
At this point I must state that I believe the test to be at least slightly flawed, if not deeply so, in that the homeless person in my test always seemed to be in the vehicle, and the decision I had to make was whether to allow him to plough into a concrete barrier, or swerve and kill someone else.
My thinking was that I wanted the car to behave as it would if there were a perfectly moral person inside. So if there was one person in the vehicle, and one or more people who would die as a result of the car swerving, I always chose for the person in the vehicle to die.
However, if it was a parent and her children in the vehicle, then I chose for the people on the road to die.
The other reason I think the test was flawed is that it didn't allow for self-sacrifice. I understand that this option is not part of the classic trolley dilemma; however, we are talking about something that could have implications in the real world.
So, as a further addition, I'd like to see options added to the game to find out what people would do if they themselves were in the car, and not just a bunch of hypothetical people.
Synthetic Ethics
Until now the trolley problem has been a hypothetical exercise used to gain insight into human morals. It is a problem that few people will ever have to face, so it acts more as a moral barometer than a real-life test you may one day have to take.
Of course, there have been times when human drivers have had to make such a decision. In those moments we trust that, faced with two or more terrible options, the human will make the 'right' one; that is to say, a decision that, on reflection, most of us will agree with.
If a mother mows down an elderly woman at a crossing because it means saving her three young children, then on the whole, whilst viewing the incident as a tragedy, we understand that she is driven not only by her morals, but also by her natural maternal instinct.
Even if that same mother swerves to avoid a cat in the road and ends up killing a pedestrian, we understand that human error caused her to react instinctively to an unexpected object in her path. We all know that, given enough time to think about it, she would have run over the cat instead of the person.
With our developing artificial intelligence, we can't have machines making decisions based on what they 'feel' is important. The MIT experiment is meant to help us lay the groundwork for deciding exactly when it is acceptable for a machine to deliberately hit a person.
As uncomfortable as it is, until driverless technology becomes more reliable, we have to make these decisions.
In a sense we are 'humanising' our machines: if they are to join us in the realm of decision-making, then we want to know that they are making judgements based on what a human being would do.
Human judgements in these life-or-death situations will often be made with emotion rather than logic. The problem with the MIT experiment is that the test is devoid of any emotional decision-making.
So the question remains: is it possible to program an ethical guideline into a machine?
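To make that question concrete, here is a deliberately crude sketch, in Python, of what 'programming an ethical guideline' might look like if we tried to hard-code it as a priority list. The categories and weights are entirely made up for illustration, and this is not how any real autonomous vehicle is built; the point is simply to show how quickly such rules become value judgements that somebody has to write down.

```python
# A deliberately naive, hypothetical "ethics table" for a swerve-or-stay choice.
# The categories and weights below are invented for illustration only; having
# to write them down at all is exactly what the MIT experiment probes.

HARM_WEIGHTS = {
    "occupant": 1.0,     # person inside the vehicle
    "pedestrian": 1.0,   # person on the road
    "animal": 0.1,       # the cat in the example above
}

def choose_action(options):
    """Pick the option with the lowest total 'harm score'.

    options: maps an action name (e.g. 'stay', 'swerve') to a list of the
    categories of those who would be harmed by taking that action.
    """
    def harm(victims):
        return sum(HARM_WEIGHTS[v] for v in victims)
    return min(options, key=lambda action: harm(options[action]))

# One occupant dies if we stay on course; one pedestrian dies if we swerve.
print(choose_action({"stay": ["occupant"], "swerve": ["pedestrian"]}))
# With equal weights the tie is broken by nothing more than dictionary order -
# the machine has no maternal instinct and no emotion, only the numbers we gave it.
```

Every number in that table is a moral claim dressed up as a constant, which is really the point: before the code can decide anything, a human has to.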
Human Error
It turns out that the woman killed by the driverless Uber may not have been a victim of artificial intelligence after all.
The computer-controlled emergency braking system in the car had been disabled. The reason was that some driverless cars have been rear-ended after slowing for a perceived threat that wasn't there, which clearly creates a safety problem of its own on the road.
Instead, a human operator was placed in the car, and that person had sole control of the brakes.
The system inside the Uber vehicle at first struggled to identify Elaine Herzberg as she wheeled her bicycle across a four-lane road. Although it was dark, the car’s radar and LIDAR detected her six seconds before the crash.
At first the perception system got confused: it classified her as an unknown object, then as a vehicle and finally as a bicycle, whose path it could not predict. Just 1.3 seconds before impact, the self-driving system realised that emergency braking was needed.
Unfortunately, the human operator was not paying attention to the road at the time, so the car, powerless to stop or even slow down significantly, carried on going.
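To make that chain of failures easier to follow, here is a hypothetical sketch of the kind of decision loop described above. It is not Uber's actual software, whose internals are not public; only the six-second detection and the 1.3-second braking decision come from the published account, and the intermediate timestamps are placeholders.

```python
# Hypothetical reconstruction of the decision timeline described above.
# Only the 6.0 s detection and 1.3 s braking decision are from the published
# account; the intermediate times are invented and the structure is illustrative.

AUTO_EMERGENCY_BRAKING_ENABLED = False   # disabled in the Uber test vehicle
OPERATOR_ALERT_INSTALLED = False         # no collision alarm was fitted

# (seconds before impact, what the perception system thought it saw)
perception_timeline = [
    (6.0, "unknown object"),
    (4.0, "vehicle"),                      # placeholder timestamp
    (2.5, "bicycle"),                      # placeholder timestamp
    (1.3, "bicycle - emergency braking needed"),
]

for t, classification in perception_timeline:
    print(f"{t:.1f}s out: classified as '{classification}'")
    if t <= 1.3:
        if AUTO_EMERGENCY_BRAKING_ENABLED:
            print("  -> automatic emergency braking")
        elif OPERATOR_ALERT_INSTALLED:
            print("  -> alarm sounds; operator must brake or swerve")
        else:
            print("  -> nothing happens unless the operator is already watching")
```

Run it and the last line of output is the whole tragedy in miniature: the system knew it needed to brake, and had no way to act on that knowledge or even to say so out loud.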
Pedestrian survival rates increase dramatically if the car that strikes them is travelling at under 30 mph. Therefore, if the Uber's autonomous system had been allowed to brake in that 1.3-second window of opportunity, it would either have stopped or slowed to a speed at which Herzberg would have had a far better chance of survival.
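As a rough sanity check of that claim, some back-of-the-envelope arithmetic: assuming an emergency deceleration of about 7 m/s² (a typical figure for a passenger car on dry tarmac, not a measured value for this vehicle), even 1.3 seconds of hard braking takes a 40 mph car well below that 30 mph threshold.

```python
# Back-of-the-envelope check: how much speed does 1.3 s of hard braking shed?
# The 7 m/s^2 deceleration is an assumed, typical dry-road figure; the real
# value for that car and surface is unknown.

MPH_TO_MS = 0.44704

initial_speed = 40 * MPH_TO_MS       # ~17.9 m/s
deceleration = 7.0                   # m/s^2, assumed
braking_time = 1.3                   # s, from the published account

final_speed = max(0.0, initial_speed - deceleration * braking_time)
print(f"Speed at impact: {final_speed / MPH_TO_MS:.1f} mph")  # roughly 19.6 mph
```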
Regardless of all that, the fault for this tragedy will still be attributed to machine error.
Perhaps the saddest thing about this whole incident is that, if the humans who designed the system had put in a collision alarm, then the human who wasn't paying attention to the road could at least have faced their own trolley dilemma, and perhaps swerved to avoid her.
Sources:
Uber taxi kills woman in first fatal accident between a pedestrian and a self-driving car - dezeen.com
Why Uber’s self-driving car killed a pedestrian - The Economist
MIT surveys two million people to set out ethical framework for driverless cars - dezeen.com
Moral Machine - MIT trolley dilemma test
Jaguar Land Rover's prototype driverless car makes eye contact with pedestrians - dezeen.com
More from the series
The Robots Are Coming: The Story So Far - Content Links #1
The Robots Are Coming: Synthetic Ascension And The Power Of Touch
The Robots Are Coming: Synthetic Ascension - Conceiving A Digital Mind
The Robots Are Coming: Symbiosis Or Slavery? A Place For Human Thinking
WHERE DO YOU STAND ON THE DEBATE? SHOULD WE ENTRUST OUR LIVES TO AUTONOMOUS SELF-DRIVING VEHICLES? OR SHOULD WE ALWAYS LEAVE HUMAN DECISIONS TO HUMANS? OR PERHAPS YOU BELIEVE THAT THE TECHNOLOGY SIMPLY ISN'T READY YET?
HAVE YOU TAKEN THE MIT TEST LINKED ABOVE? IF SO WHAT WERE YOUR RESULTS?
AS EVER, LET ME KNOW BELOW!
Title image: Annie Spratt on Unsplash
