Following her attendance at the AI Summit in New York last week, Sara Castellanos of the Wall Street Journal reported on one of the day's many exclusive announcements: how Capital One, one of the largest credit-card lenders in the U.S., is pursuing 'Explainable AI' to guard itself against bias in its models.
Capital One’s vice president of data innovation, Adam Wenchel, revealed at the summit that the company is researching ways for machine learning algorithms to explain the rationale behind their answers, which could be very helpful in guarding the company against ethical and regulatory breaches.
“The company is employing in-house experts to study ‘explainable AI’, a nascent field of research aimed at creating computer programs that can translate, in natural language, how a machine-learning model comes to a logical decision, which is imperative as organizations explore ways to leverage AI but realize that it can be rife with bias”, Castellanos writes.
In an exclusive interview during the summit, Adam Wenchel told the Wall Street Journal: “We’re starting to do more and more with machine learning, and we want to make sure that as we roll it out more broadly, we’re doing that in the most responsible way possible”.
Castellanos writes: “At the AI Summit in New York last week, where AI experts at corporations convened to discuss practical implications of AI in the enterprise, Mr. Wenchel said explainable AI will be important in guarding against ethical and possible legal challenges in the financial sector”.
The full article can be read at: http://blogs.wsj.com/cio/2016/12/06/capital-one-pursues-explainable-ai-to-guard-against-bias-in-models/