Explainable AI for Trees: From Local Explanations to Global Understanding
Lundberg, Scott M. and Erion, Gabriel and Chen, Hugh and DeGrave, Alex and Prutkin, Jordan M. and Nair, Bala and Katz, Ronit and Himmelfarb, Jonathan and Bansal, Nisha and Lee, Su-In
- 2019 via Local Bibsonomy
Keywords: interpretable
Tree-based ML models are becoming increasingly popular, but the explanation space for these types of models is woefully lacking at the local level. Local explanations can give a clearer picture of specific use cases and help pinpoint exactly where the model may be lacking in accuracy.
**Idea**: We need a local explanation system for trees that is not based on the simple decision path, but rather weighs each feature against every other feature to gain better insight into the model's inner workings.
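That "weighing against every other feature" is the classic Shapley value from cooperative game theory, which is what SHAP builds on (the formula below is the standard SHAP formulation rather than the paper's exact notation): a feature's attribution is its marginal contribution to the model output, averaged over all subsets of the remaining features,

$$\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!}\,\big[f_x(S \cup \{i\}) - f_x(S)\big]$$

where $N$ is the set of all features and $f_x(S)$ is the model's expected output when only the features in $S$ are known.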
**Solution**: This paper outlines a new methodology that uses SHAP values (and Shapley interaction values for pairs of features) to produce better local explanations of a tree-based model. It also shows how to garner global-level explanations by combining many local explanations over a large sample space, and walks through existing local explanation methodologies, explaining why they are biased toward tree depth rather than actual feature importance.
The proposed explanation method, titled TreeExplainer, exposes ways to compute optimal local explanations, garner global understanding from those local explanations, and capture feature interactions within a tree-based model.
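As a rough sketch of what this looks like in practice, using the open-source `shap` Python package (which provides a TreeExplainer implementation); the dataset choice and model hyperparameters below are illustrative assumptions, not taken from the paper:

```python
# Sketch: local SHAP explanations for a tree ensemble, aggregated into a
# global view. Dataset and model settings are arbitrary for illustration.
import numpy as np
import xgboost
import shap

X, y = shap.datasets.adult()                      # example tabular dataset bundled with shap
model = xgboost.XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)

explainer = shap.TreeExplainer(model)

# Local explanation: one Shapley value per feature for each prediction.
shap_values = explainer.shap_values(X)            # shape: (n_samples, n_features)
print(dict(zip(X.columns, shap_values[0])))       # attribution for the first row

# Global understanding from many local explanations: mean |SHAP| per feature.
global_importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(X.columns, global_importance),
                        key=lambda t: -t[1])[:5]:
    print(f"{name}: {imp:.3f}")
```

Note how the global ranking here is just an aggregation of local attributions, which is exactly the local-to-global path the paper argues for (as opposed to gain-based importances read off the trees themselves).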
The method assigns Shapley interaction values to pairs of features, essentially ranking them so as to understand which features have a higher impact on the overall outcome and how they interact with one another.
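A hedged sketch of the pairwise part, again using the `shap` package with an illustrative dataset and model: each sample gets a features-by-features matrix whose diagonal holds main effects and whose off-diagonal entries hold pairwise interaction effects.

```python
# Sketch: Shapley interaction values for feature pairs. Dataset and model
# settings are illustrative assumptions, mirroring the earlier sketch.
import numpy as np
import xgboost
import shap

X, y = shap.datasets.adult()
model = xgboost.XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)
explainer = shap.TreeExplainer(model)

# One (n_features x n_features) matrix per sample; subsample because the
# computation is substantially more expensive than plain SHAP values.
interaction_values = explainer.shap_interaction_values(X[:500])

# Rank feature pairs by the average magnitude of their interaction effect.
mean_abs = np.abs(interaction_values).mean(axis=0)
np.fill_diagonal(mean_abs, 0)                     # drop main effects from the ranking
i, j = np.unravel_index(mean_abs.argmax(), mean_abs.shape)
print(f"Strongest interaction: {X.columns[i]} x {X.columns[j]} ({mean_abs[i, j]:.3f})")
```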