Microsoft’s CEO Satya Nadella: ‘We Need Algorithmic Accountability’


With machine learning being deployed across many sectors, these systems are increasingly left to make decisions on behalf of companies. Imagine you apply to upgrade your account at NatWest and are refused because you have had an overdraft on your account one too many times. If that message were delivered by a NatWest adviser, it would be easy to ask for an explanation of why the upgrade was refused. But what if the decision is made by a machine? Who is responsible then?

WIRED raises the same question in its article, using the example of a client refused insurance on the basis of a machine's output: “Nobody can answer, because nobody understands how these systems—neural networks modelled on the human brain—produce their results. Computer scientists “train” each one by feeding it data, and it gradually learns. But once a neural net is working well, it’s a black box. Ask its creator how it achieves a certain result and you’ll likely get a shrug”.

It appears to be exactly this opaque decision-making that makes people uneasy. In response, the EU passed a regulation this spring granting its citizens what University of Oxford researcher Bryce Goodman describes as an effective “right to an explanation” for decisions made by machine-learning systems.

Jan Albrecht, an EU legislator from Germany, believes that the main reason people are sceptical of or afraid of AI is the sense of not being in control. Albrecht sees it as essential that the public stays in control of this technology in order to accept it, which could also help tackle suspicions of bias.

Although it might appear impossible, research has shown that it is possible to learn how machine-learning systems arrive at their results. At the machine-learning company Clarifai, for example, founder Matt Zeiler blocked out portions of images and watched how the different layers inside the net responded during image recognition. This let him identify, for instance, which parts of the network were responsible for recognising faces; a simplified sketch of the occlusion idea follows below.
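To make the idea concrete, here is a minimal Python sketch of input-level occlusion analysis, the same basic principle Zeiler applied. This is not Clarifai's actual code: the classify() function is a hypothetical placeholder for any trained image classifier that returns a confidence score, and the patch size, stride and fill value are illustrative choices.

```python
import numpy as np

def classify(image: np.ndarray) -> float:
    """Hypothetical placeholder: the model's confidence that the image
    contains a face. Swap in a real network's forward pass here."""
    return float(image[24:40, 24:40].mean())  # toy stand-in only

def occlusion_map(image: np.ndarray, patch: int = 16,
                  stride: int = 8, fill: float = 0.5) -> np.ndarray:
    """Slide a grey patch across the image and record how much the
    model's confidence drops at each position. Large drops mark the
    regions the network actually relies on."""
    h, w = image.shape[:2]
    baseline = classify(image)
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # block this region
            heatmap[i, j] = baseline - classify(occluded)
    return heatmap

image = np.random.rand(64, 64)       # dummy greyscale image
print(occlusion_map(image).round(3))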

This kind of access is beneficial not only to consumers but to companies too, as insight into what is actually going on inside their artificial intelligence enables them to improve their products.

WIRED says: “If machine learning is powerful because it processes data in ways we can’t, it might seem like a waste of time to try to dissect it—and might even hamper its development. But the stakes for society are too high, and the challenge is frankly too fascinating. Human beings are creating a new breed of intelligence; it would be irresponsible not to try to understand it”.

This article was originally found at: https://www.wired.com/2016/10/understanding-artificial-intelligence-decisions/?mbid=social_twitter

For the latest news and conversations about AI in business, follow us on Twitter, join our community on LinkedIn and like us on Facebook.
