Wednesday, 25 July 2018

Explainable AI

One of the biggest criticisms of Machine Learning (ML) and Artificial Intelligence (AI) approaches has been that they are black boxes. However much media attention AI and ML attract, and however accurate their predictions, the difficulty of explaining model decisions has made many practical applications reluctant to deploy these models.

Catherine Helen O'Neil, the American mathematician and author of the blog mathbabe.org and several books on data science, calls algorithms "Weapons of Math Destruction" if they are widespread, secret and destructive (that is, if individuals are unfairly denied something they may deserve).

The thrust towards pulling back the covers on how ML/AI algorithms make decisions is being referred to in the industry as XAI (explainable AI) and FAT ML (fairness, accountability and transparency in Machine Learning), and this thrust seems to be gaining momentum, with some AI/ML platform vendors announcing products and features that address the issue of AI/ML interpretability.
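One simple, model-agnostic technique in this space is permutation importance: shuffle one feature at a time and measure how much the model's error grows, which indicates how much the prediction relies on that feature. The sketch below is purely illustrative (the synthetic data, the stand-in "model", and all names are this example's assumptions, not drawn from any particular vendor's product):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the target depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2 (an assumption for illustration).
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

def model(X):
    """Stand-in for a fitted black-box model: here just the true linear rule."""
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def permutation_importance(model, X, y, n_repeats=10, seed=1):
    """Mean increase in MSE when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = mse(y, model(X))
    importances = []
    for j in range(X.shape[1]):
        losses = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            losses.append(mse(y, model(Xp)))
        importances.append(float(np.mean(losses)) - baseline)
    return importances

imp = permutation_importance(model, X, y)
```

Features that the model truly relies on produce a large jump in error when shuffled, while an irrelevant feature leaves the error unchanged; ranking features this way is one way an explanation can be attached to an otherwise opaque model.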

These advancements augur well in light of the recently launched EU General Data Protection Regulation (GDPR), which is said to include a right-to-explanation clause. The capability to interpret decisions made by models will go a long way toward complying with the data protection regulations currently being crafted in other regions, including India.

Submitted by Prof. Hemalatha Chandrashekhar on 25-07-2018
