AI can solve problems – when will it tell us which ones need solving most?


Artificial intelligence still often seems like a far-off, science-fiction-like dream that will arrive at some uncertain point in the future.


Yet organizations and governments are already using AI in ways many of us do not fully appreciate or understand. Some of these uses may seem relatively benign, such as social media platforms that use facial recognition to connect you to friends, or shopping sites that analyse your past purchases to make recommendations.


But the same technology could also be used to illegally stalk individuals or to try to influence how we vote in democratic elections.


Increasingly, it seems, organizations and governments are adopting AI to perform important tasks. Beijing, for example, is experimenting with facial recognition on popular phone apps as one way of modernizing China’s national ID card scheme. Health groups around the world use AI to help better detect a range of ailments, from cancers to mental health problems.


The benefits of AI – particularly machine learning, which spots patterns across many different sources of data – are easy for corporate leaders and policy-makers to comprehend: AI systems can perform simple, repetitive and dull tasks more quickly than humans can; they free up our time and reduce costs; and they allow humans to work on more creative, challenging problems.
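
To make that concrete, here is a toy sketch of the pattern-spotting idea in plain Python: it recommends products by counting which items other shoppers bought alongside a customer’s past purchases. The purchase histories and the recommend helper are invented for illustration; real recommendation engines are far more sophisticated.

    from collections import Counter

    # Invented purchase histories; each set is one shopper's basket.
    histories = [
        {"kettle", "teapot", "mugs"},
        {"kettle", "teapot", "biscuits"},
        {"laptop", "mouse", "keyboard"},
        {"laptop", "mouse", "headset"},
    ]

    def recommend(past_purchases, histories, top_n=2):
        """Score items by how often they co-occur with this customer's purchases."""
        scores = Counter()
        for basket in histories:
            if basket & past_purchases:  # this shopper shares an item with us
                for item in basket - past_purchases:
                    scores[item] += 1
        return [item for item, _ in scores.most_common(top_n)]

    print(recommend({"kettle"}, histories))  # ['teapot', 'mugs'] or similar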


So will AI ever be able to tell us what the most pressing human problems are – and how to solve them? And has anyone even considered the risks to humanity if we were to hand it that job?


AI isn’t that smart…


Ever since the dawn of civilization, humanity has conjured up stories of how it could create a being in its own likeness with the ability to wipe out its makers. One such tale, Karel Čapek’s 1920 play Rossum’s Universal Robots, inspired by the legend of the Golem and by Frankenstein, is the grandparent of all modern stories about intelligent but malign robots.


In November 2017, physicist Stephen Hawking warned that the development of artificial intelligence needed to be better regulated: “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization,” he said at a conference in Lisbon. “It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many.”


But he added: “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”


While this apocalyptic warning is stark in its implications, AI is not yet capable of destroying the earth. Indeed, it is worth remembering that AI is not one thing, but a family of different technologies. These include rules-based machines, which follow simple processes or commands to achieve defined objectives (such as putting you through to the right person during a telephone banking call), and machine learning, in which systems add to their knowledge and learn to develop responses for themselves.
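
To make that distinction concrete, here is a toy Python sketch (the rules, phrases and team names are all invented) that routes banking calls twice: once with a fixed, hand-written keyword table, and once with word counts learned from a handful of labelled example calls.

    from collections import Counter

    # Rules-based: a hand-written keyword table that never changes.
    RULES = {"balance": "accounts team", "mortgage": "loans team", "stolen": "fraud team"}

    def route_by_rules(utterance):
        for keyword, team in RULES.items():
            if keyword in utterance.lower():
                return team
        return "general enquiries"

    # Machine learning, of the simplest kind: word counts per team,
    # learned from labelled examples, so coverage grows with the data.
    examples = [
        ("how much is in my account", "accounts team"),
        ("what is my current balance", "accounts team"),
        ("i want to borrow for a house", "loans team"),
        ("my card was taken yesterday", "fraud team"),
    ]
    word_counts = {}
    for text, team in examples:
        for word in text.split():
            word_counts.setdefault(team, Counter())[word] += 1

    def route_by_learning(utterance):
        def score(team):
            return sum(word_counts[team][w] for w in utterance.lower().split())
        return max(word_counts, key=score)

    print(route_by_rules("my card was taken"))     # no keyword matches -> 'general enquiries'
    print(route_by_learning("my card was taken"))  # closest to the fraud examples -> 'fraud team'

The contrast is the point: the rules-based router fails as soon as a caller uses a word nobody anticipated, while the learned router generalizes from whatever examples it has been given – and improves as more are added.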

The four types of AI


AI falls into four groups, says Arend Hintze, a professor at Michigan State University. These are:


  • Reactive machines, such as the chess computer Deep Blue. These are good at one thing, but have no memory and act only on simple representations of what is in front of them.


  • Limited-memory machines, such as those in self-driving cars. These can, for example, observe the speed and direction of other road users, which requires identifying specific objects and monitoring them from one moment to the next (a rough sketch of this follows the list). But they do not build up experience over time, as humans do.


  • The third type of AI, known as ‘theory of mind’, may be possible at a more advanced stage, Hintze argues: “This class will have to understand that people, creatures and objects in the world can have thoughts and emotions that affect [the machines’] own behavior.”

  • And fourth, machines that are “self-aware”, or “conscious”. This would build on the third type and create, as Hintze says: “Conscious beings [that] are aware of themselves, know about their internal states, and are able to predict feelings of others.” This is the one type of AI that Hawking and others fear could destroy humanity.
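
As a rough illustration of what ‘limited memory’ means in practice, the Python sketch below estimates another car’s speed and heading from just its two most recent sightings and keeps nothing else. The positions and timings are invented.

    import math

    # (time in seconds, (x, y) position in metres) -- invented observations.
    observations = [(0.0, (0.0, 0.0)), (1.0, (8.0, 6.0)), (2.0, (16.0, 12.0))]

    def estimate_motion(observations):
        """Estimate speed and heading from the two most recent sightings only."""
        (t0, (x0, y0)), (t1, (x1, y1)) = observations[-2:]
        dt = t1 - t0
        vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
        speed = math.hypot(vx, vy)                  # metres per second
        heading = math.degrees(math.atan2(vy, vx))  # 0 degrees = due east
        return speed, heading

    speed, heading = estimate_motion(observations)
    print(f"{speed:.1f} m/s at {heading:.0f} degrees")  # 10.0 m/s at 37 degrees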


Beware of the dangers


A 2016 White House report on the future of AI said the technology is unlikely to “exhibit broadly-applicable intelligence comparable to or exceeding that of humans” within the next 20 years.


But policy-makers in particular need to be aware of its pitfalls. For one thing, AI is not infallible, as deaths in self-driving car tests show (though human failings seem to play a part, especially when drivers come to trust the systems too much).


AI systems built on speech and image recognition have vulnerabilities that criminal or foreign-state hackers could exploit. They can also simply make mistakes: billboards can confuse autonomous vehicles, for instance.
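
The billboard example points to a well-studied weakness known as adversarial examples. Below is a minimal Python sketch of the classic ‘fast gradient sign’ demonstration; it assumes the PyTorch library is installed, and uses a toy untrained model and a random stand-in image rather than any real vision system. Each pixel is nudged a tiny, humanly invisible step in whichever direction most increases the classifier’s error.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
    image = torch.rand(1, 3, 32, 32, requires_grad=True)             # stand-in image
    label = torch.tensor([3])                                        # its supposed class

    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()  # gradient of the error with respect to every pixel

    epsilon = 0.03   # perturbation budget: far too small for a human to notice
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

    print("before:", model(image).argmax().item())
    print("after: ", model(adversarial).argmax().item())

On a trained classifier, a perturbation this small can routinely flip the prediction, which is why physical-world variants of the trick – stickers, road markings, billboards – worry autonomous-vehicle researchers.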


Rather than expecting AI to provide solutions to every problem, policy-makers and developers may be better off creating a common set of internationally applicable standards and rules for its safe development. While agreeing on these may be complex, such standards would offer a way for AI and humanity to have a safe and shared future.