Adam Wenchel from Capital One addressed one of the most challenging questions to emerge with the development of artificial intelligence and machine learning: ethics. Presenting “Ethical Challenges in ML” at the AI Summit in New York on 1 December, Wenchel took the audience through how Capital One has prepared itself for potential ethical challenges, and how best to solve them.
The issue of ethics in machine learning is a recurring one, and numerous books have been dedicated to the topic, Wenchel begins, which shows there is certainly a need to address it seriously.
“With machine learning you are really disintermediating the human judgement, and the human’s role in the decision-making”, Wenchel says. “You are actually pulling them further and further out, and I think this is part of a shift that has been going on for a while”.
Wenchel explains that before machine learning came into the picture, we started with humans making decisions, which evolved into humans creating rules and simple models. He describes how we have moved from operating cars with the steering wheel, to introducing cruise control, to implementing autonomous vehicles.
“So how do we provide that same level of ethical intervention with those bots and in these systems where the outcome is so undoubted?”, Wenchel asks.
Looking at the example of banking, Wenchel says much of the focus is on making sure the industry is doing right by the public and its customers. “Fortunately for us, the US government does a lot of work to set standards for what it means to responsibly serve the public, and we work to make sure that we are in full alignment with that”.
“We are talking about building machine learning bots to make these types of decisions, and on the surface, it seems pretty easy”, Wenchel says. “You can’t discriminate based on race, colour, religion, national origin, sex, marital status, etc., so just don’t feed that data to the algorithm and you should be fine. It should be simple, right?”
However, it’s a little more complicated than that, he explains, introducing the issue of proxy data. “Proxy data can be great in some situations, but in other situations it can create problems. If your model starts sucking in all this data, all of a sudden it starts to generate credit decisions that look like they are learning proxy data for these protected classes, and that’s a problem”, Wenchel says.
“Your model is learning on its own to incorporate some of these protected classes directly into the credit positioning, even though you haven’t provided any of that data. How do you prevent that from happening?”
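The proxy effect Wenchel describes can be seen in a toy simulation. Everything here is hypothetical (the population, the ZIP-code feature, the 90% correlation) and is not Capital One's system; the point is only that a decision rule which never sees the protected attribute can still split on it through a correlated proxy.

```python
import random

random.seed(0)

# Hypothetical population: the protected attribute is never shown to the
# decision rule, but a ZIP-code group correlates with it 90% of the time.
population = []
for _ in range(10_000):
    protected = random.random() < 0.5
    zip_group = protected if random.random() < 0.9 else not protected
    population.append((protected, zip_group))

def approve(zip_group):
    # A "model" that only sees the proxy feature.
    return not zip_group

rate_a = (sum(approve(z) for p, z in population if p)
          / sum(1 for p, z in population if p))
rate_b = (sum(approve(z) for p, z in population if not p)
          / sum(1 for p, z in population if not p))
print(f"approval rate, protected class: {rate_a:.2f}")
print(f"approval rate, everyone else:   {rate_b:.2f}")
```

Despite never receiving the protected attribute, the rule approves the two groups at very different rates, which is exactly the kind of behaviour Wenchel says a monitoring process has to catch.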
Wenchel explains that Capital One’s solution is to come at the problem from two major angles: the technology perspective and the human perspective.
From a technology perspective they focus on automated model comparison and baselining, Wenchel says. “As models are evolving, learning, and re-training, and we are redeploying them, we understand how they deviate from simple rule-based systems or from previous versions of the model”, Wenchel explains.
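A minimal sketch of what such comparison and baselining might look like, assuming a fixed audit set and a disagreement threshold (the article describes the idea, not Capital One's actual implementation, and the models and numbers below are invented):

```python
# Sketch: flag a retrained model when its decisions drift too far from a
# baseline, e.g. a previous version or a simple rule-based system.

def disagreement_rate(old_model, new_model, audit_set):
    differing = sum(1 for x in audit_set if old_model(x) != new_model(x))
    return differing / len(audit_set)

def baseline(score):        # hypothetical rule-based system
    return score >= 600

def retrained(score):       # hypothetical retrained model whose boundary drifted
    return score >= 640

audit_set = list(range(500, 800, 5))   # fixed audit set of credit scores
rate = disagreement_rate(baseline, retrained, audit_set)

THRESHOLD = 0.10
if rate > THRESHOLD:
    print(f"ALERT: new model deviates from baseline on {rate:.0%} of audit cases")
```

Running the comparison on a fixed audit set after every retrain turns “how far has the model deviated?” into a single number that can be monitored and alerted on.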
“So we ended up here: we can actually set guardrails in our models, so if our Twitter bot starts tweeting hateful things and deviating its behaviour significantly, we can say that this is way out of the normal situation, you need to investigate this”, Wenchel explains, noting that this has guarded them from incidents such as Microsoft’s “Tay”.
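The guardrail idea can be sketched as a rolling check on a bot's output. All names, thresholds, and the flag list here are hypothetical; the talk names the concept, not an implementation.

```python
from collections import deque

BANNED = {"hateful", "slur"}            # hypothetical flag list

def is_flagged(message):
    return any(word in message.lower() for word in BANNED)

class Guardrail:
    """Trip an alarm when the rate of flagged outputs deviates from baseline."""

    def __init__(self, window=100, baseline_rate=0.01, max_deviation=0.05):
        self.recent = deque(maxlen=window)   # rolling window of flag results
        self.baseline_rate = baseline_rate
        self.max_deviation = max_deviation

    def check(self, message):
        """Record a message; return False (halt the bot) on significant deviation."""
        self.recent.append(is_flagged(message))
        rate = sum(self.recent) / len(self.recent)
        return rate - self.baseline_rate <= self.max_deviation

guard = Guardrail()
if not guard.check("have a nice day"):
    print("Bot halted for investigation")
```

The point of the design is that the bot is compared against its own normal behaviour: a handful of flagged messages inside the window pushes the rolling rate past the allowed deviation and halts the bot for human review, rather than letting it keep posting.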
“On the human side we have worked towards creating a very safe environment for researchers to raise concerns”, Wenchel says. “Just because your model is returning very good results and very good accuracy, make sure people feel comfortable saying: ‘Hey, we may be getting great accuracy on something, but I have concerns about how we are getting there’”, Wenchel says, emphasising that it is key to make people feel very comfortable doing this.
To summarise the presentation, Wenchel highlights three key takeaways: actively provide the system (human + machine) with ethical judgement, monitor closely for problematic emergent behaviours, and be prepared to react quickly when problems arise.