Debate over artificial intelligence raises moral, legal and technical questions

September 28, 2015
We all know about the Skynet scenario. It was the background story to the popular sci-fi Terminator film series in which a fully autonomous artificial intelligence (AI) system used by the US military ended up launching a global thermonuclear war to eliminate humanity and take over the world.

“It saw all humans as a threat; not just the ones on the other side,” a character from the series helpfully explained. “It decided our fate in a microsecond: extermination.”

Early this month, a group of prominent scientists, entrepreneurs and investors in the field of artificial intelligence, including physicist Stephen Hawking, billionaire businessman Elon Musk and Nobel laureate physicist Frank Wilczek, published an open letter warning of the potential dangers of AI while also pointing to its benefits.

Given the calibre of the people involved, the letter has generated extensive media coverage, and even lively debate among the cognoscenti. Much of the debate naturally focuses on the more sensational warnings contained in the letter and in select passages from the position paper that accompanied it.

Can AI systems become an existential threat to the human race? Responses span the whole spectrum. Some pundits follow the position of the distinguished American philosopher John Searle, who has questioned whether the strong version of AI – that is, machines that can learn from experience and model behaviour like a human brain, except with incomparably greater computing power and therefore cognitive ability – is even possible at all.

Others, like science writer Andrew McAfee, argue that strong AI is possible, but that we are still too far from it to need to worry about it seriously.

But their arguments are rather abstract and remote.

So it’s the first half of the position paper that is most relevant to people today, and also the most interesting. It asks pertinent questions and discusses plausible but troublesome scenarios that already confront us.

It’s here that the group raises moral, legal and technical questions about current and soon-to-be-available “smart” systems – the semi-autonomous, quasi-intelligent machines already around us: driverless cars, drones, computerised trading systems for stocks, bonds and currencies, voice and face recognition software, automatic language translators, surgical robots and automated medical diagnoses. The list goes on.

The position paper provides useful programming rules and guidelines for AI or semi-AI systems:

  1. Verification: how to prove that a system satisfies the desired formal properties. (“Did I build the system right?”)
  2. Validity: how to ensure that a system that meets its formal requirements does not have unwanted behaviour and consequences. (“Did I build the right system?”)
  3. Security: how to prevent intentional manipulation by unauthorised parties.
  4. Control: how to enable meaningful human control over an AI system after it begins to operate. (“OK, I built the system wrong, can I fix it?”)

These nifty criteria are useful not just for AI developers but, more importantly, for users, consumers, regulators and informed citizens trying to judge the robustness and benefits of the semi-AI systems already with us.

Tired of labour disputes and unrest on the mainland, for example, Taiwan-based electronics giant Foxconn is investing billions to build fully automated factories. Procter & Gamble, the former owner of Pringles, the potato chip snack brand, replaced human workers years ago with supercomputer-controlled systems that insert the chips into their tubes so they stack properly and don’t crack during production.

“When and in what order should we expect various jobs to become automated?” the position paper asks, warning that full automation will mean high wages for those who master the technical complexity and unemployment for most others. “There is a difference between leisure and unemployment,” it says.

The US has a temporary ban on research into autonomous weapon systems that require minimal or no human supervision. But it could be argued that smart weapons, free of human biases and emotions, could be more “moral”, capable of cleaner strikes that minimise civilian casualties.

Four states in the US are ready to authorise driverless cars on the road. But if one crashes into or injures a person, who should be held responsible, and how would insurance policies work?

Sensors are being developed to help cars avoid hitting other cars and people. But what if a driverless car finds itself in a situation where, to avoid hitting a young family of four, it must run over an elderly couple? This calls for more than artificial intelligence: it demands artificial moral intelligence.

We should thank Hawking, Musk and their friends for asking us to ask these questions before we are in over our heads with autonomous machines.

Source: http://www.scmp.com/lifestyle/technology/article/1690723/debate-over-artificial-intelligence-raises-moral-legal-and
