AI Explainability 360
This extensible open source toolkit helps you understand how machine learning models predict labels, offering a variety of explanation methods that span the AI application lifecycle. We invite you to use it and improve it.
Not sure what to do first? Start here!
Read More
Learn more about explainability concepts, terminology, and tools before you begin.
Try a Web Demo
Step through the process of explaining models to consumers with different personas in an interactive web demo that shows a sample of capabilities available in this toolkit.
Watch Videos
Watch videos to learn more about the AI Explainability 360 toolkit.
Read a Paper
Read a paper describing how we designed the AI Explainability 360 toolkit.
Use Tutorials
Step through a set of in-depth examples that introduce developers to code that explains data and models in different industry and application domains.
Ask a Question
Join our AI Explainability 360 Slack Channel to ask questions, make comments, and tell stories about how you use the toolkit.
View Notebooks
Open a directory of Jupyter notebooks on GitHub that provide working examples of explainability on sample datasets. Then share your own notebooks!
Contribute
You can add new algorithms and metrics in GitHub. Share Jupyter notebooks showcasing how you have enabled explanations in your machine learning application.
Learn how to put this toolkit to work for your application or industry problem. Try these tutorials.
Credit Approval
See how to explain credit approval models using the FICO Explainable Machine Learning Challenge dataset.
Medical Expenditure
See how to create interpretable machine learning models in a care management scenario using Medical Expenditure Panel Survey data.
Dermoscopy
See how to explain dermoscopic image datasets used to train machine learning models that help physicians diagnose skin diseases.
Health and Nutrition Survey
See how to quickly understand the National Health and Nutrition Examination Survey datasets to hasten research in epidemiology and health policy.
Proactive Retention
See how to explain predictions of a model that recommends employees for retention actions from a synthesized human resources dataset.
These are eight state-of-the-art explainability algorithms that can add transparency throughout AI systems; a minimal usage sketch follows the list. Add more!
Boolean Decision Rules via Column Generation (Light Edition)
Directly learn accurate and interpretable ‘or’-of-‘and’ logical classification rules.
Generalized Linear Rule Models
Directly learn accurate and interpretable weighted combinations of ‘and’ rules for classification or regression.
ProfWeight
Improve the accuracy of a directly interpretable model such as a decision tree using the confidence profile of a neural network.
Teaching AI to Explain its Decisions
Predict both labels and explanations with a model whose training set contains features, labels, and explanations.
Contrastive Explanations Method
Generate justifications for neural network classifications by highlighting features that are minimally sufficient for the prediction (pertinent positives) and features whose absence is critical to it (pertinent negatives).
Contrastive Explanations Method with Monotonic Attribute Functions
Generate contrastive explanations for color images or images with rich structure.
Disentangled Inferred Prior VAE
Learn disentangled representations for interpreting unlabeled data.
ProtoDash
Select prototypical examples from a dataset.
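Each of these algorithms is exposed through an explainer class in the Python package. The sketch below is a minimal illustration rather than an official quickstart: it assumes the aix360 package layout of recent releases and borrows a scikit-learn toy dataset, fitting Boolean Decision Rules via Column Generation on a binary classification task and printing the learned rules.

```python
# Minimal sketch (assumptions: aix360 installed with its recent package layout,
# scikit-learn available for a toy binary classification dataset).
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from aix360.algorithms.rbm import FeatureBinarizer, BooleanRuleCG, BRCGExplainer

data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
X_train, X_test, y_train, y_test = train_test_split(X, data.target, random_state=0)

# The rule learners expect binarized (thresholded) features
fb = FeatureBinarizer(negations=True)
X_train_bin = fb.fit_transform(X_train)
X_test_bin = fb.transform(X_test)

# Fit the directly interpretable 'or'-of-'and' rule model and inspect the rules
explainer = BRCGExplainer(BooleanRuleCG())
explainer.fit(X_train_bin, y_train)

print(explainer.explain()['rules'])                    # human-readable rule clauses
accuracy = (explainer.predict(X_test_bin) == y_test).mean()
print(f"held-out accuracy: {accuracy:.3f}")
```

The other algorithms follow the same construct/fit/explain pattern through their own explainer classes (for example, GLRMExplainer, ProtodashExplainer, and CEMExplainer), so switching methods largely means switching the explainer you construct.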
Although it is ultimately the consumer who determines the quality of an explanation, the research community has proposed quantitative metrics as proxies for explainability.
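The toolkit includes two such proxy metrics, faithfulness and monotonicity, which score how well a per-feature importance vector tracks a classifier's behavior on a single instance. The sketch below is a hedged illustration: it assumes the aix360.metrics interface of recent releases and uses a random forest's built-in feature importances as a stand-in explanation.

```python
# Hedged sketch (assumption: aix360.metrics exposes faithfulness_metric and
# monotonicity_metric with a (model, x, coefs, base) signature).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from aix360.metrics import faithfulness_metric, monotonicity_metric

X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]                              # instance whose explanation is scored
coefs = clf.feature_importances_      # stand-in per-feature importance scores
base = np.zeros_like(x)               # baseline values used to "remove" features

print(faithfulness_metric(clf, x, coefs, base))   # how well importance tracks prediction change
print(monotonicity_metric(clf, x, coefs, base))   # whether adding features by importance raises confidence
```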
About this site
AI Explainability 360 was created by IBM Research and donated by IBM to the Linux Foundation AI & Data.
Additional research sites that advance other aspects of Trusted AI include:
AI Fairness 360
AI Privacy 360
Adversarial Robustness 360
Uncertainty Quantification 360
AI FactSheets 360