Artificial Intelligence is quickly becoming ubiquitous in our personal and professional lives, in ways we can readily observe and in ways we cannot. Artificial Intelligence is used to influence life-changing decisions, such as whether you get hired for that dream job, who you will date, and whether you'll be approved for a loan on your first home. Yet we have little insight into how these critical decisions are made. As a result, there is increasing demand (and legislation) to ensure the influence of these technologies is understood.
What is it we seek when we ask for explainability in AI, as in the GDPR's Article 22? Explainable by whom, and to whom? Should our AI be explainable to another data scientist? An average adult? A child? It is conceivable that a data scientist's version of 'explainable' is indecipherable to most people. Perhaps what people seek is not explainability but understanding. Explainability is a top-down method of speaking at people from the expert's perspective, while understanding starts from how the listener interprets the information and adjusts the explanation to the user's needs. Anyone who has attempted to wade through the legal language of an end user license agreement has experienced this: EULAs are fully explainable, but not readily understandable. Still, the two go hand in hand: key to understandable AI is the clarification and presentation of explainable AI outcomes.
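To make the distinction concrete, consider a minimal sketch of the same loan-decision explanation rendered two ways: once as a data scientist might inspect it, and once in plain language. Everything here is hypothetical, invented purely for illustration; the model weights, feature names, and applicant values are assumptions, not a real scoring system.

```python
# A minimal sketch contrasting "explainable" vs. "understandable" output.
# The weights and applicant values below are hypothetical, chosen only
# to illustrate the two presentation styles.

# Hypothetical linear-model coefficients for a loan decision.
weights = {"debt_to_income": -2.1, "credit_history_yrs": 0.8, "num_late_payments": -1.5}
applicant = {"debt_to_income": 0.45, "credit_history_yrs": 3, "num_late_payments": 2}

# "Explainable" view: raw per-feature contributions, as an expert
# might read them. Accurate, but opaque to most people.
contributions = {f: weights[f] * applicant[f] for f in weights}
print("Explainable (expert view):")
for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {value:+.2f}")

# "Understandable" view: the same information, restated in the
# listener's terms and focused on what mattered most to the outcome.
worst = min(contributions, key=contributions.get)
print("\nUnderstandable (plain-language view):")
print(f"  The biggest factor working against this application was "
      f"'{worst.replace('_', ' ')}'. Improving it would most change the decision.")
```

Both views carry the same underlying facts; only the second adjusts the presentation to the listener, which is the work that turns an explainable outcome into an understandable one.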