Why we have to get smart about artificial intelligence

The risk is that, rather than expanding our horizons and our potential, the compromises we are already making in terms of access to our personal information could end up compromising our choices, and even our basic human rights.

Humans have long been fascinated by the concept of artificial intelligence, but it is only relatively recently that technology has advanced sufficiently to make it a reality. In 2014 we saw a computer pass the Turing test – its responses in a series of text conversations convinced 30% of human interrogators that it was human.

We see AI in everyday life. Examples include Apple’s Siri and Microsoft’s Cortana – ironically named after a character with corrupted artificial intelligence in the Halo video game series. We’re on the way to driverless cars with Tesla Motors’ latest dual-motor Model S, whose autopilot feature keeps the car safely in lane, obeys speed limits, avoids obstacles and parks itself in your garage. CEO Elon Musk told Bloomberg he expects Tesla to be “the first company to market with significant autonomous driving function in the vehicles”.

In the defence sector, South Korean forces have deployed Samsung SGR-1 armed sentry robots to patrol the border with North Korea and in Iraq and Afghanistan the iRobot Packbot Tactical Mobile Robot removes unexploded bombs and mines and collects forensic evidence.

Investors are taking AI seriously. The last quarter of 2014 saw a spate of Silicon Valley investment in AI start-ups involved in financial risk analysis, big data analysis, language and image recognition and automated report writing. They include financial market prediction engines such as Sentient, which simulates financial markets to work out how they will react to different scenarios, and a Goldman Sachs-backed virtual market assistant from US start-up Kensho, which can answer complex verbal financial questions. In healthcare, IBM’s Watson is helping cancer doctors use genomic data to personalise patients’ treatment plans.

Meanwhile, Facebook is using deep learning, a set of algorithms that attempt to model high-level abstractions in data, to work out whether two photos are of the same person. Last year, Google acquired machine learning start-up DeepMind, and IBM unveiled its SyNAPSE neuromorphic chip, whose silicon transistors are configured to replicate the neurons and synapses of the human brain.
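To make the photo-matching idea concrete, here is a minimal illustrative sketch in Python. It assumes a deep network has already reduced each photo to a numeric embedding vector (the kind of high-level abstraction described above) and simply compares the two vectors with cosine similarity; the embedding size, threshold and function names are invented for illustration and are not Facebook’s actual method.

```python
# Illustrative only: decides whether two photos show the same person by
# comparing hypothetical face embeddings (vectors a trained deep network
# might produce for each photo). Model, threshold and names are assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(embedding_a: np.ndarray, embedding_b: np.ndarray,
                threshold: float = 0.8) -> bool:
    """Treat two photos as the same person if their embeddings are close enough."""
    return cosine_similarity(embedding_a, embedding_b) >= threshold

# Toy 128-dimensional embeddings standing in for the network's output.
rng = np.random.default_rng(seed=0)
photo_a = rng.normal(size=128)
photo_b = photo_a + rng.normal(scale=0.1, size=128)  # a slightly different shot of the "same" face
print(same_person(photo_a, photo_b))  # True for these toy vectors
```

In practice the hard part is training the network that produces the embeddings; the comparison step shown here is the simple final stage.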

However, as interest and investment in the area explodes, many of the world’s leading thinkers and entrepreneurs are publicly expressing their concerns. Stephen Hawking says it could spell the end of the human race; Elon Musk says it’s more dangerous than nukes.

We have always considered AI with some trepidation. Isaac Asimov’s laws of robotics date back to 1942. Numerous AI films have highlighted potential dangers to humanity, particularly when machines with AI become “self-aware”. Recent examples include Her and Ex Machina. But that is fiction. Now we face the genuine possibility of “thinking” machines making decisions that affect people.

In practical terms, AI can eliminate much human error: it won’t get tired, get the maths wrong or do the same test twice. But although we can humanise technology to analyse and make decisions based on our needs, behaviours, preferences and reactions, we need to be careful about setting its goals – and be aware of its limitations. An obvious limitation is that hardware and software wear out and are superseded, but the big questions are around ethics. What are the legal implications? What rules should we be making to reduce the risks?

Many of the world’s AI experts have recently signed an open letter published by the MIT-affiliated Future of Life Institute, which states: “Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.” The accompanying paper sets out research priorities, which include establishing “meaningful human control over an AI system after it begins to operate”. Musk, one of the high-profile signatories, has donated $10m to “research aimed at keeping AI beneficial for humanity”.

The ethical implications highlighted by the paper include liability and law. For example, who is liable if a driverless car is involved in an accident? Should AI be covered by existing cyber law, or should it have specific rules? What rules should be made to control the deployment of autonomous weapons?

It is not just about making rules to govern intelligent machines – we also need to consider how we regulate the data they create and share. Rules attempting to control the flow of personal data have been high on the legislative and regulatory agenda for a long time, and were highlighted by the Snowden revelations.

Combining AI and the internet of things so that devices can automatically share personal data, including financial and health data, raises further privacy and security concerns. Gartner forecasts that by the end of this year there will be nearly 5 billion connected devices. Regulators and device manufacturers need to consider that connected devices provide extra opportunities for both legitimate organisations and hackers to access personal data.

Finally, the paper touches on professional ethics and the need for policies that enable us to enjoy the benefits of AI while minimising the dangers. Its answer is to develop robust AI through verification (did I build the system right?), validity (did I build the right system?), security and control – all of which remain key challenges.
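To illustrate the verification/validity distinction in miniature, here is a hedged, made-up Python sketch: a toy speed controller whose code can be verified against its specification even while the specification itself is the wrong one for the road it is used on. The numbers and names are assumptions for the example only, not anything taken from the paper.

```python
# Illustrative only: a toy speed controller showing the difference between
# verification (the code meets its specification) and validity (the
# specification is the right one). All numbers and names are made up.

SPEC_LIMIT_KMH = 50  # the limit written into the system's specification

def throttle(current_speed_kmh: float) -> float:
    """Return a throttle setting in [0, 1], backing off as speed nears the spec limit."""
    if current_speed_kmh >= SPEC_LIMIT_KMH:
        return 0.0
    return 1.0 - current_speed_kmh / SPEC_LIMIT_KMH

# Verification: did we build the system right? The controller never
# accelerates once the specified limit is reached.
assert all(throttle(v) == 0.0 for v in range(SPEC_LIMIT_KMH, 200))

# Validity: did we build the right system? If the actual road limit is
# 30 km/h, the software satisfies its spec yet still drives too fast,
# because the specification itself was wrong.
ACTUAL_ROAD_LIMIT_KMH = 30
print("spec valid for this road:", SPEC_LIMIT_KMH <= ACTUAL_ROAD_LIMIT_KMH)  # False
```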

AI clearly has the potential to transform the way we live and work, but it is important that we set appropriate limitations and controls. Otherwise the risk is that rather than expanding our horizons and our potential, the compromises that we are already making in terms of access to our personal information could end up compromising our choices, and even our basic human rights.

Source: http://www.theguardian.com/media-network/2015/jan/26/artificial-intelligence-developments-risks-law
