# lime

This project is about explaining what machine learning classifiers (or models) are doing. At the moment, we support explaining individual predictions for text classifiers or classifiers that act on tables (numpy arrays of numerical or categorical data) or images, with a package called lime (short for local interpretable model-agnostic explanations).
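As a quick illustration of what explaining an individual prediction looks like, here is a minimal sketch for a text classifier; the `model`, the `class_names`, and the example string are assumptions for illustration, not something prescribed by this project.

```python
# Minimal sketch: explain one text prediction with lime.
# Assumes `model` is any fitted classifier exposing predict_proba over raw
# strings (e.g. a scikit-learn Pipeline of TfidfVectorizer + LogisticRegression).
from lime.lime_text import LimeTextExplainer

class_names = ["atheism", "christian"]      # example label names (assumption)
explainer = LimeTextExplainer(class_names=class_names)

text = "God is love"                        # example instance to explain (assumption)
exp = explainer.explain_instance(
    text,                     # the raw string being explained
    model.predict_proba,      # function: list of strings -> class probabilities
    num_features=6,           # number of words to include in the explanation
)

# Each pair is (word, weight); positive weights push toward the explained class.
print(exp.as_list())
```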