4.1. Partial dependence plots

Partial dependence plots (PDP) show the dependence between the target response [1] and a set of ‘target’ features, marginalizing over the values of all other features (the ‘complement’ features). Intuitively, we can interpret the partial dependence as the expected target response as a function of the ‘target’ features.
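Formally, write X_S for the target features, X_C for the complement features and f for the fitted model (this notation is ours, used here only for illustration). The partial dependence of f at a point x_S is an expectation over the complement features, which the 'brute' method described later approximates by an average over the data:

\[
\mathrm{pd}_{X_S}(x_S) = \mathbb{E}_{X_C}\big[f(x_S, X_C)\big] \approx \frac{1}{n}\sum_{i=1}^{n} f(x_S, x_C^{(i)}),
\]

where x_C^{(i)} denotes the values of the complement features in the i-th sample.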

Due to the limits of human perception, the size of the target feature set must be small (usually one or two); the target features are therefore usually chosen among the most important features.

The figure below shows four one-way and one two-way partial dependence plots for the California housing dataset, with a GradientBoostingRegressor:

[Figure: one-way and two-way partial dependence plots for the California housing dataset]

One-way PDPs tell us about the interaction between the target response and the target feature (e.g. linear, non-linear). The upper left plot in the above figure shows the effect of the median income in a district on the median house price; we can clearly see a linear relationship between them. Note that PDPs assume that the target features are independent from the complement features, and this assumption is often violated in practice.

PDPs with two target features show the interactions between the two features. For example, the two-variable PDP in the above figure shows the dependence of median house price on joint values of house age and average occupants per household. We can clearly see an interaction between the two features: for an average occupancy greater than two, the house price is nearly independent of the house age, whereas for values less than two there is a strong dependence on age.

The sklearn.inspection module provides a convenience function plot_partial_dependence to create one-way and two-way partial dependence plots. The example below creates a grid of partial dependence plots: two one-way PDPs for the features 0 and 1, and a two-way PDP between the two features:

>>> from sklearn.datasets import make_hastie_10_2
>>> from sklearn.ensemble import GradientBoostingClassifier
>>> from sklearn.inspection import plot_partial_dependence

>>> X, y = make_hastie_10_2(random_state=0)
>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
...     max_depth=1, random_state=0).fit(X, y)
>>> features = [0, 1, (0, 1)]
>>> plot_partial_dependence(clf, X, features)

You can access the newly created figure and Axes objects using plt.gcf() and plt.gca().
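For instance, a minimal sketch of adjusting the generated figure this way (it assumes matplotlib.pyplot is imported as plt and reuses clf, X and features from the example above; the title is purely illustrative):

>>> import matplotlib.pyplot as plt
>>> plot_partial_dependence(clf, X, features)
>>> fig = plt.gcf()                               # figure created by the call above
>>> fig.suptitle('Partial dependence plots')      # illustrative title
>>> fig.subplots_adjust(top=0.9)                  # leave room for the title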

For multi-class classification, you need to set the class label for which the PDPs should be created via the target argument:

>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> mc_clf = GradientBoostingClassifier(n_estimators=10,
...     max_depth=1).fit(iris.data, iris.target)
>>> features = [3, 2, (3, 2)]
>>> plot_partial_dependence(mc_clf, iris.data, features, target=0)

The same parameter target is used to specify the target in multi-output regression settings.
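As a hedged sketch of the multi-output case (the regressor, dataset and target index below are illustrative choices, not taken from the original text):

>>> from sklearn.datasets import make_regression
>>> from sklearn.ensemble import RandomForestRegressor

>>> # illustrative regression problem with two output targets
>>> X_mo, y_mo = make_regression(n_targets=2, random_state=0)
>>> regr = RandomForestRegressor(random_state=0).fit(X_mo, y_mo)
>>> plot_partial_dependence(regr, X_mo, [0, 1], target=1)  # PDPs for the second output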

If you need the raw values of the partial dependence function rather than the plots, you can use the sklearn.inspection.partial_dependence function:

>>> from sklearn.inspection import partial_dependence

>>> pdp, axes = partial_dependence(clf, X, [0])
>>> pdp
array([[ 2.466..., 2.466..., ...
>>> axes
[array([-1.624..., -1.592..., ...

The values at which the partial dependence should be evaluated are directly generated from X. For 2-way partial dependence, a 2D-grid of values is generated. The grid values returned by sklearn.inspection.partial_dependence (the second element, unpacked as axes above) are the actual values used in the grid for each target feature. They also correspond to the axes of the plots.
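As a small illustration (the parameter values are arbitrary), the resolution of this grid and the percentile range of X it spans can be controlled with the grid_resolution and percentiles parameters:

>>> # a grid of 20 points between the 5th and 95th percentiles of feature 0
>>> pdp, axes = partial_dependence(clf, X, [0], grid_resolution=20,
...                                percentiles=(0.05, 0.95))
>>> axes[0].shape
(20,)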

For each value of the ‘target’ features in the grid, the partial dependence function needs to marginalize the predictions of the estimator over all possible values of the ‘complement’ features. With the 'brute' method, this is done by replacing every value of the target features in X by those in the grid, and computing the average prediction.
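A minimal NumPy sketch of this 'brute' computation for a single target feature (feature 0, reusing clf and X from the first example; using the decision function as the target response is just one possible choice, see the footnote below):

>>> import numpy as np

>>> grid = np.linspace(X[:, 0].min(), X[:, 0].max(), num=50)  # illustrative grid
>>> averaged_predictions = []
>>> for value in grid:
...     X_eval = X.copy()
...     X_eval[:, 0] = value  # replace feature 0 by the grid value in every sample
...     averaged_predictions.append(clf.decision_function(X_eval).mean())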

In decision trees this can be evaluated efficiently without reference to the training data ('recursion' method). For each grid point a weighted tree traversal is performed: if a split node involves a ‘target’ feature, the corresponding left or right branch is followed; otherwise both branches are followed, each branch being weighted by the fraction of training samples that entered that branch. Finally, the partial dependence is given by a weighted average of all visited leaves. Note that with the 'recursion' method, X is only used to generate the grid, not to compute the averaged predictions. The averaged predictions will always be computed on the data with which the trees were trained.
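A hedged sketch of this weighted traversal for a single fitted tree (this is not scikit-learn's implementation; a gradient-boosted ensemble would combine this quantity over all its trees):

from sklearn.datasets import make_hastie_10_2
from sklearn.tree import DecisionTreeRegressor

# illustrative single regression tree
X, y = make_hastie_10_2(random_state=0)
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
t = tree.tree_

def tree_partial_dependence(node, target_feature, grid_value):
    """Weighted traversal for one grid value of a single target feature."""
    left, right = t.children_left[node], t.children_right[node]
    if left == -1:
        # leaf: contribute its predicted value
        return t.value[node][0, 0]
    if t.feature[node] == target_feature:
        # split on the target feature: follow the branch the grid value takes
        child = left if grid_value <= t.threshold[node] else right
        return tree_partial_dependence(child, target_feature, grid_value)
    # split on a complement feature: follow both branches, each weighted by
    # the fraction of training samples that entered it
    w_left = t.weighted_n_node_samples[left] / t.weighted_n_node_samples[node]
    w_right = t.weighted_n_node_samples[right] / t.weighted_n_node_samples[node]
    return (w_left * tree_partial_dependence(left, target_feature, grid_value)
            + w_right * tree_partial_dependence(right, target_feature, grid_value))

# partial dependence of the tree on feature 0 at the grid value 0.0
pd_at_zero = tree_partial_dependence(node=0, target_feature=0, grid_value=0.0)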

Footnotes

[1] For classification, the target response may be the probability of a class (the positive class for binary classification), or the decision function.

