
Why do driverless cars crash?

Long predicted by science fiction visionaries including Philip K. Dick and Isaac Asimov, autonomous vehicles are finally becoming a reality.

Their arrival is only possible thanks to computers, which can crunch massive amounts of data quickly, and the internet, which allows that information to be transferred from data centre to moving car faster than human thought. 

Companies, from established carmakers like Mercedes and BMW to newcomers such as Google, Uber, Tesla and Amazon, are racing to mass-produce driverless vehicles.

However, their proponents remain sensitive about one question: how safe are driverless vehicles? Ever since the death of a man at the wheel of a semi-autonomous vehicle in 2016, there have been regular headlines about their risks.

Matters of life and death

The man who died was not in a truly self-driving vehicle but was using Autopilot software in his Tesla Model S, and some reports indicate he was watching a film and travelling too fast when he should have been in control of his vehicle. Even so, the incident caused significant concern.

Nonetheless, peer-reviewed research on tests of driverless vehicles in California in 2016 indicated that the majority of accidents involving them were caused not by technology failure but by drivers who did not understand how the vehicles might behave.

The study suggested that growing familiarity may be one reason accidents rise as autonomous vehicles clock up ever more miles: passengers are initially fearful of the risks involved but, as they become accustomed to software running the car, they seem not to react as swiftly as they should when something goes wrong.

An age of confusion?

Governments and companies are keen on autonomous vehicles: they offer potential solutions to urban transport problems at a time when populations are increasingly gravitating to city life, as well as new products, revenue streams and jobs for future generations of high earners.

But, at a national level as well as globally, there seems to have been little thought about the need for a joined-up set of ethical and moral standards for new technologies: standards that would set either a minimum or a “gold standard” for the levels of service, accountability and safety people should expect from the technologies and their developers.

The United States provides a good example of some of the potential regulatory hurdles that can be found within one nation. A 2016 study by the Rand Corporation revealed: “A number of states, including Nevada, Florida, Michigan, and California (as well as Washington DC), have passed varying legislation regulating the use of [autonomous vehicle] technology. Measures have also been proposed in a number of other states.”

“The disadvantage of this approach is that it may create a patchwork of conflicting regulatory requirements,” the report said. “It is also unclear whether such measures are necessary, given the absence of commercially available vehicles with this technology and the absence of reported problems to date with the use of this technology on public roads.”

On the plus side, the report noted that the proposals may begin the important conversations that need to take place between the government and the public with regard to this new technology. 

Ethical debates

The need for such conversations can be seen in the thought experiments used to weigh the hazards. These often involve variations of the “trolley problem”: if you witnessed an accident unfolding on public transport, whose life – or lives – would you sacrifice for the greater good?

For example, if by diverting a runaway bus and killing one person you would save the lives of five others, what should you do? Such discussions sometimes add complications: what if the five are prisoners heading to jail, while the one is a person with an incurable disease?

While such scenarios may be unpleasant, some argue they help to explore the ethical conundrums that implicitly lie behind new technologies. But they also expose the risks of programming biases and the dangers of allowing unregulated algorithms to have control over humans in life-or-death situations.
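
To make that risk concrete, here is a deliberately naive sketch, in Python and entirely hypothetical, of how a utilitarian collision policy might be coded. The function names and the weighting scheme are invented purely for illustration; the point is how easily a weight tied to personal features, of the kind Germany’s ethics rules (discussed below) forbid, can slip into the arithmetic.

```python
# Hypothetical sketch of a naive "utilitarian" collision policy.
# Nothing here reflects any real vehicle's software; the names and
# weights are invented purely to show how bias can be coded in.

from dataclasses import dataclass


@dataclass
class Outcome:
    people_harmed: int           # how many people a manoeuvre endangers
    feature_weight: float = 1.0  # a per-person "value" weight: the danger zone


def expected_harm(outcome: Outcome) -> float:
    """Score an outcome; lower is 'better' under naive utilitarianism."""
    return outcome.people_harmed * outcome.feature_weight


def choose_manoeuvre(outcomes: dict[str, Outcome]) -> str:
    """Pick the manoeuvre with the lowest scored harm."""
    return min(outcomes, key=lambda name: expected_harm(outcomes[name]))


# The classic framing: stay in lane (five harmed) or swerve (one harmed).
print(choose_manoeuvre({
    "stay_in_lane": Outcome(people_harmed=5),
    "swerve": Outcome(people_harmed=1),
}))  # prints "swerve"

# But as soon as feature_weight encodes age, health or any other personal
# attribute, the same arithmetic discriminates between individuals,
# which is exactly what Germany's ethics rules declare impermissible.
print(choose_manoeuvre({
    "stay_in_lane": Outcome(people_harmed=5, feature_weight=0.2),
    "swerve": Outcome(people_harmed=1, feature_weight=1.5),
}))  # now prints "stay_in_lane": five lives score below one
```

The sketch is not an argument for or against such policies; it simply shows why regulators insist that these choices cannot be left to unexamined code.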

Taking the lead

Germany has attempted to pull ahead in this debate by adopting the recommendations of an ethics committee set up to examine the risks inherent in designing programs that govern self-driving cars.

These include recognising that automated driving is an “ethical imperative” if the systems cause fewer accidents; that protecting people must take precedence over preventing damage to property; and that, in unavoidable accident situations, “any distinction between individuals based on personal features (age, gender, physical or mental constitution) is impermissible”.

Global cooperation on producing international standards for this and other technological advances – such as cloning, gene editing and the responsible use of social media – would help to set gold standards for regulators, defined by best-practice models supported by peer-reviewed research. But would companies and governments buy into them?

With states and companies alike keen to retain competitive advantages, and with companies likely to favour less burdensome regulatory environments, it may be difficult to build consensus on uniform guidelines for this rapidly evolving industry.