AI Explainability 360


This extensible open source toolkit helps you understand how machine learning models predict labels, offering a variety of explanation methods that span the AI application lifecycle. We invite you to use it and improve it.


Although it is ultimately the consumer who judges the quality of an explanation, the research community has proposed quantitative metrics as proxies for explainability.

About this site

AI Explainability 360 was created by IBM Research and donated by IBM to the Linux Foundation AI & Data.

Additional research sites that advance other aspects of Trusted AI include:

AI Fairness 360
AI Privacy 360
Adversarial Robustness 360
Uncertainty Quantification 360
AI FactSheets 360