
Explaining the “how and why” behind any product or activity has always been crucial. In the digital era, it has only become more prominent.
Facebook’s inability to explain the “how and why” of its data sharing, for example, led to a significant trust deficit with its users. The “explainability” revolution started a while ago, as evidenced by the huge popularity of Jupyter and Zeppelin notebooks, data lineage in reporting, data governance projects in the enterprise, and roles such as chief data officer.
The revolution is now picking up pace as the adoption of machine learning and AI goes mainstream. With open-source ML libraries and tons of code available online, a novice and a professional alike can create a model that does something as critical as predicting your illness. How do we differentiate between these models and trust their results?
Consider, for example, the healthcare recommendation engine on https://www.healthcare.com/.

Given a few basic inputs such as age and location, it recommends a healthcare plan personalized for you. Since no explanation is provided, seeing the top three recommendations come from the same provider raises doubts and questions. Is the recommendation engine, or the company, biased toward a specific provider? What were the criteria for the recommendation?
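As a sketch of what answering “what were the criteria?” could look like, the snippet below trains a toy plan-recommendation model and reports each input’s contribution to its score. The features, data, and choice of a logistic-regression model are all hypothetical, invented here for illustration; they are not how any real engine works.

```python
# A minimal sketch, with made-up plan features, of attaching a "why" to a
# recommendation. For a linear model, each feature's contribution to the
# score is simply coefficient * (scaled) feature value, which can be
# surfaced to the user alongside the recommendation itself.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["age", "monthly_budget", "dependents"]  # hypothetical inputs

# Hypothetical training data: applicant attributes -> whether plan A was a fit
X = np.array([[25, 200, 0], [40, 350, 2], [62, 500, 1],
              [33, 250, 0], [55, 450, 3], [29, 300, 1]], dtype=float)
y = np.array([0, 1, 1, 0, 1, 0])

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(applicant):
    """Return each feature's contribution to the recommendation score,
    sorted by magnitude so the strongest drivers come first."""
    z = scaler.transform([applicant])[0]
    contributions = model.coef_[0] * z  # positive pushes toward "recommend"
    return sorted(zip(feature_names, contributions),
                  key=lambda pair: abs(pair[1]), reverse=True)

for name, weight in explain([33, 250, 0]):
    print(f"{name}: {weight:+.2f}")
```

Showing per-feature contributions like these next to each recommendation would let a user see, for instance, whether budget or location, rather than a favored provider, drove the ranking.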
A black-box approach to AI is insensitive to the consumer, creates a lack of trust, and defeats the very purpose of leveraging AI to accelerate and improve the customer experience.
“Explainability” is the next big thing.
Visit https://www.ibm.com/cloud/ai-openscale to experience what it takes to provide explainability for your recommendations.