1.11. Ensemble methods

The goal of ensemble methods is to combine the predictions of several base estimators built with a given learning algorithm in order to improve generalizability / robustness over a single estimator.

Two families of ensemble methods are usually distinguished:

  • In averaging methods, the driving principle is to build several estimators independently and then to average their predictions. On average, the combined estimator is usually better than any of the single base estimators because its variance is reduced.

Examples: Bagging methods, Forests of randomized trees, …

  • By contrast, in boosting methods, base estimators are built sequentially and one tries to reduce the bias of the combined estimator. The motivation is to combine several weak models to produce a powerful ensemble.

Examples: AdaBoost, Gradient Tree Boosting, …

1.11.1. Bagging meta-estimator

In ensemble algorithms, bagging methods form a class of algorithms which build several instances of a black-box estimator on random subsets of the original training set and then aggregate their individual predictions to form a final prediction. These methods are used as a way to reduce the variance of a base estimator (e.g., a decision tree), by introducing randomization into its construction procedure and then making an ensemble out of it. In many cases, bagging methods constitute a very simple way to improve with respect to a single model, without making it necessary to adapt the underlying base algorithm. As they provide a way to reduce overfitting, bagging methods work best with strong and complex models (e.g., fully developed decision trees), in contrast with boosting methods which usually work best with weak models (e.g., shallow decision trees).

Bagging methods come in many flavours but mostly differ from each other by the way they draw random subsets of the training set:

  • When random subsets of the dataset are drawn as random subsets of the samples, then this algorithm is known as Pasting [B1999].

  • When samples are drawn with replacement, then the method is known as Bagging [B1996].

  • When random subsets of the dataset are drawn as random subsets of the features, then the method is known as Random Subspaces [H1998].

  • Finally, when base estimators are built on subsets of both samples and features, then the method is known as Random Patches [LG2012].

In scikit-learn, bagging methods are offered as a unified BaggingClassifier meta-estimator (resp. BaggingRegressor), taking as input a user-specified base estimator along with parameters specifying the strategy to draw random subsets. In particular, max_samples and max_features control the size of the subsets (in terms of samples and features), while bootstrap and bootstrap_features control whether samples and features are drawn with or without replacement. When using a subset of the available samples the generalization accuracy can be estimated with the out-of-bag samples by setting oob_score=True. As an example, the snippet below illustrates how to instantiate a bagging ensemble of KNeighborsClassifier base estimators, each built on random subsets of 50% of the samples and 50% of the features.

>>> from sklearn.ensemble import BaggingClassifier
>>> from sklearn.neighbors import KNeighborsClassifier
>>> bagging = BaggingClassifier(KNeighborsClassifier(),
...                             max_samples=0.5, max_features=0.5)


References

  • [B1999] L. Breiman, "Pasting small votes for classification in large databases and on-line", Machine Learning, 36(1), 85-103, 1999.

  • [B1996] L. Breiman, "Bagging predictors", Machine Learning, 24(2), 123-140, 1996.

  • [H1998] T. Ho, "The random subspace method for constructing decision forests", Pattern Analysis and Machine Intelligence, 20(8), 832-844, 1998.

  • [LG2012] G. Louppe and P. Geurts, "Ensembles on Random Patches", Machine Learning and Knowledge Discovery in Databases, 346-361, 2012.

1.11.2. Forests of randomized trees

The sklearn.ensemble module includes two averaging algorithms based on randomized decision trees: the RandomForest algorithm and the Extra-Trees method. Both algorithms are perturb-and-combine techniques [B1998] specifically designed for trees. This means a diverse set of classifiers is created by introducing randomness in the classifier construction. The prediction of the ensemble is given as the averaged prediction of the individual classifiers.

As with other classifiers, forest classifiers have to be fitted with two arrays: a sparse or dense array X of size [n_samples, n_features] holding the training samples, and an array Y of size [n_samples] holding the target values (class labels) for the training samples:

>>> from sklearn.ensemble import RandomForestClassifier
>>> X = [[0, 0], [1, 1]]
>>> Y = [0, 1]
>>> clf = RandomForestClassifier(n_estimators=10)
>>> clf = clf.fit(X, Y)

Like decision trees, forests of trees also extend to multi-output problems (if Y is an array of size [n_samples, n_outputs]).

1.11.2.1. Random Forests

In random forests (see RandomForestClassifier and RandomForestRegressor classes), each tree in the ensemble is built from a sample drawn with replacement (i.e., a bootstrap sample) from the training set.

Furthermore, when splitting each node during the construction of a tree, the best split is found either from all input features or a random subset of size max_features. (See the parameter tuning guidelines for more details.)

The purpose of these two sources of randomness is to decrease the variance of the forest estimator. Indeed, individual decision trees typically exhibit high variance and tend to overfit. The injected randomness in forests yields decision trees with somewhat decoupled prediction errors. By taking an average of those predictions, some errors can cancel out. Random forests achieve a reduced variance by combining diverse trees, sometimes at the cost of a slight increase in bias. In practice the variance reduction is often significant, hence yielding an overall better model.

In contrast to the original publication [B2001], the scikit-learn implementation combines classifiers by averaging their probabilistic prediction, instead of letting each classifier vote for a single class.

1.11.2.2. Extremely Randomized Trees

In extremely randomized trees (see ExtraTreesClassifier and ExtraTreesRegressor classes), randomness goes one step further in the way splits are computed. As in random forests, a random subset of candidate features is used, but instead of looking for the most discriminative thresholds, thresholds are drawn at random for each candidate feature and the best of these randomly-generated thresholds is picked as the splitting rule. This usually allows the variance of the model to be reduced a bit more, at the expense of a slightly greater increase in bias:

>>> from sklearn.model_selection import cross_val_score
>>> from sklearn.datasets import make_blobs
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.ensemble import ExtraTreesClassifier
>>> from sklearn.tree import DecisionTreeClassifier

>>> X, y = make_blobs(n_samples=10000, n_features=10, centers=100,
...                   random_state=0)

>>> clf = DecisionTreeClassifier(max_depth=None, min_samples_split=2,
...                              random_state=0)
>>> scores = cross_val_score(clf, X, y, cv=5)
>>> scores.mean()
0.98...

>>> clf = RandomForestClassifier(n_estimators=10, max_depth=None,
...                              min_samples_split=2, random_state=0)
>>> scores = cross_val_score(clf, X, y, cv=5)
>>> scores.mean()
0.999...

>>> clf = ExtraTreesClassifier(n_estimators=10, max_depth=None,
...                            min_samples_split=2, random_state=0)
>>> scores = cross_val_score(clf, X, y, cv=5)
>>> scores.mean() > 0.999
True

../_images/sphx_glr_plot_forest_iris_0011.png

1.11.2.3. Parameters

The main parameters to adjust when using these methods are n_estimators and max_features. The former is the number of trees in the forest. The larger the better, but also the longer it will take to compute. In addition, note that results will stop getting significantly better beyond a critical number of trees. The latter is the size of the random subsets of features to consider when splitting a node. The lower the greater the reduction of variance, but also the greater the increase in bias. Empirical good default values are max_features=None (always considering all features instead of a random subset) for regression problems, and max_features="sqrt" (using a random subset of size sqrt(n_features)) for classification tasks (where n_features is the number of features in the data). Good results are often achieved when setting max_depth=None in combination with min_samples_split=2 (i.e., when fully developing the trees). Bear in mind though that these values are usually not optimal, and might result in models that consume a lot of RAM. The best parameter values should always be cross-validated. In addition, note that in random forests, bootstrap samples are used by default (bootstrap=True) while the default strategy for extra-trees is to use the whole dataset (bootstrap=False). When using bootstrap sampling the generalization accuracy can be estimated on the left out or out-of-bag samples. This can be enabled by setting oob_score=True.
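As a minimal sketch of these guidelines (the parameter values below are illustrative starting points, not tuned recommendations):

>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.ensemble import RandomForestRegressor
>>> # classification: random subsets of sqrt(n_features) features per split,
>>> # with an out-of-bag estimate of the generalization accuracy
>>> clf = RandomForestClassifier(n_estimators=100, max_features="sqrt",
...                              max_depth=None, min_samples_split=2,
...                              oob_score=True)
>>> # regression: consider all features at each split
>>> reg = RandomForestRegressor(n_estimators=100, max_features=None)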

Note

The size of the model with the default parameters is \(O(M \cdot N \cdot \log(N))\), where \(M\) is the number of trees and \(N\) is the number of samples. In order to reduce the size of the model, you can change these parameters: min_samples_split, max_leaf_nodes, max_depth and min_samples_leaf.

1.11.2.4. Parallelization

Finally, this module also features the parallel construction of the trees and the parallel computation of the predictions through the n_jobs parameter. If n_jobs=k then computations are partitioned into k jobs, and run on k cores of the machine. If n_jobs=-1 then all cores available on the machine are used. Note that because of inter-process communication overhead, the speedup might not be linear (i.e., using k jobs will unfortunately not be k times as fast). Significant speedup can still be achieved though when building a large number of trees, or when building a single tree requires a fair amount of time (e.g., on large datasets).
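For example, a minimal sketch (the number of trees is illustrative):

>>> from sklearn.ensemble import RandomForestClassifier
>>> # build the 500 trees in parallel on all available cores; once fitted,
>>> # prediction is parallelized over samples/trees as well
>>> clf = RandomForestClassifier(n_estimators=500, n_jobs=-1)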


References

  • [B2001] L. Breiman, "Random Forests", Machine Learning, 45(1), 5-32, 2001.

  • [B1998] L. Breiman, "Arcing Classifiers", Annals of Statistics, 1998.

  • P. Geurts, D. Ernst, and L. Wehenkel, "Extremely randomized trees", Machine Learning, 63(1), 3-42, 2006.

1.11.2.5. Feature importance evaluation

The relative rank (i.e. depth) of a feature used as a decision node in a tree can be used to assess the relative importance of that feature with respect to the predictability of the target variable. Features used at the top of the tree contribute to the final prediction decision of a larger fraction of the input samples. The expected fraction of the samples they contribute to can thus be used as an estimate of the relative importance of the features. In scikit-learn, the fraction of samples a feature contributes to is combined with the decrease in impurity from splitting them to create a normalized estimate of the predictive power of that feature.

By averaging the estimates of predictive ability over several randomized trees one can reduce the variance of such an estimate and use it for feature selection. This is known as the mean decrease in impurity, or MDI. Refer to [L2014] for more information on MDI and feature importance evaluation with Random Forests.

The following example shows a color-coded representation of the relative importances of each individual pixel for a face recognition task using an ExtraTreesClassifier model.

../_images/sphx_glr_plot_forest_importances_faces_0011.png

In practice those estimates are stored as an attribute named feature_importances_ on the fitted model. This is an array with shape (n_features,) whose values are positive and sum to 1.0. The higher the value, the more important the contribution of the matching feature to the prediction function.
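A minimal sketch of reading this attribute, here on the iris dataset (the dataset and number of trees are illustrative):

>>> from sklearn.datasets import load_iris
>>> from sklearn.ensemble import ExtraTreesClassifier
>>> X, y = load_iris(return_X_y=True)
>>> forest = ExtraTreesClassifier(n_estimators=250, random_state=0).fit(X, y)
>>> importances = forest.feature_importances_  # array of shape (n_features,), sums to 1.0
>>> importances.shape
(4,)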


References

  • [L2014] G. Louppe, "Understanding Random Forests: From Theory to Practice", PhD Thesis, U. of Liege, 2014.

1.11.2.6. Totally Random Trees Embedding

RandomTreesEmbedding implements an unsupervised transformation of the data. Using a forest of completely random trees, RandomTreesEmbedding encodes the data by the indices of the leaves a data point ends up in. This index is then encoded in a one-of-K manner, leading to a high dimensional, sparse binary coding. This coding can be computed very efficiently and can then be used as a basis for other learning tasks. The size and sparsity of the code can be influenced by choosing the number of trees and the maximum depth per tree. For each tree in the ensemble, the coding contains one entry of one. The size of the coding is at most n_estimators * 2 ** max_depth, the maximum number of leaves in the forest.

As neighboring data points are more likely to lie within the same leaf of a tree, the transformation performs an implicit, non-parametric density estimation.
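A minimal sketch of the transformation on toy data (the parameter values are illustrative):

>>> from sklearn.ensemble import RandomTreesEmbedding
>>> X = [[0, 0], [1, 1], [0, 1], [1, 0]]
>>> hasher = RandomTreesEmbedding(n_estimators=10, max_depth=3, random_state=0)
>>> X_transformed = hasher.fit_transform(X)  # sparse one-hot encoding of leaf indices
>>> X_transformed.shape[0]                   # one row per input sample
4
>>> X_transformed.nnz == 4 * 10              # exactly one active leaf per tree and sample
True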


See also

Manifold learning techniques can also be useful to derive non-linear representations of feature space; note that these approaches focus on dimensionality reduction as well.

1.11.3. AdaBoost

The module sklearn.ensemble includes the popular boosting algorithm AdaBoost, introduced in 1995 by Freund and Schapire [FS1995].

The core principle of AdaBoost is to fit a sequence of weak learners (i.e., models that are only slightly better than random guessing, such as small decision trees) on repeatedly modified versions of the data. The predictions from all of them are then combined through a weighted majority vote (or sum) to produce the final prediction. The data modifications at each so-called boosting iteration consist of applying weights \(w_1, w_2, \ldots, w_N\) to each of the training samples. Initially, those weights are all set to \(w_i = 1/N\), so that the first step simply trains a weak learner on the original data. For each successive iteration, the sample weights are individually modified and the learning algorithm is reapplied to the reweighted data. At a given step, those training examples that were incorrectly predicted by the boosted model induced at the previous step have their weights increased, whereas the weights are decreased for those that were predicted correctly. As iterations proceed, examples that are difficult to predict receive ever-increasing influence. Each subsequent weak learner is thereby forced to concentrate on the examples that are missed by the previous ones in the sequence [HTF].

../_images/sphx_glr_plot_adaboost_hastie_10_2_0011.png

AdaBoost can be used both for classification and regression problems: for multi-class classification, AdaBoostClassifier implements AdaBoost-SAMME and AdaBoost-SAMME.R [ZZRH2009]; for regression, AdaBoostRegressor implements AdaBoost.R2 [D1997].

1.11.3.1. Usage

The following example shows how to fit an AdaBoost classifier with 100 weak learners:

>>> from sklearn.model_selection import cross_val_score
>>> from sklearn.datasets import load_iris
>>> from sklearn.ensemble import AdaBoostClassifier

>>> X, y = load_iris(return_X_y=True)
>>> clf = AdaBoostClassifier(n_estimators=100)
>>> scores = cross_val_score(clf, X, y, cv=5)
>>> scores.mean()
0.9...

The number of weak learners is controlled by the parameter n_estimators. The learning_rate parameter controls the contribution of the weak learners in the final combination. By default, weak learners are decision stumps. Different weak learners can be specified through the base_estimator parameter. The main parameters to tune to obtain good results are n_estimators and the complexity of the base estimators (e.g., their depth max_depth or the minimum required number of samples to consider a split min_samples_split).
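For instance, a minimal sketch using slightly deeper trees than the default stumps (the values are illustrative):

>>> from sklearn.ensemble import AdaBoostClassifier
>>> from sklearn.tree import DecisionTreeClassifier
>>> # depth-2 trees as base estimators and a smaller learning rate
>>> clf = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=2),
...                          n_estimators=200, learning_rate=0.5)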


References

  • [FS1995] Y. Freund and R. Schapire, "A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting", 1997.

  • [ZZRH2009] J. Zhu, H. Zou, S. Rosset, T. Hastie, "Multi-class AdaBoost", 2009.

  • [D1997] H. Drucker, "Improving Regressors using Boosting Techniques", 1997.

  • [HTF] T. Hastie, R. Tibshirani and J. Friedman, "Elements of Statistical Learning Ed. 2", Springer, 2009.

1.11.4. Gradient Tree Boosting

Gradient Tree Boosting or Gradient Boosted Decision Trees (GBDT) is a generalization of boosting to arbitrary differentiable loss functions. GBDT is an accurate and effective off-the-shelf procedure that can be used for both regression and classification problems in a variety of areas including Web search ranking and ecology.

The module sklearn.ensemble provides methods for both classification and regression via gradient boosted decision trees.

Note

Scikit-learn 0.21 introduces two new experimental implementations of gradient boosting trees, namely HistGradientBoostingClassifier and HistGradientBoostingRegressor, inspired by LightGBM (see [LightGBM]).

These histogram-based estimators can be orders of magnitude faster than GradientBoostingClassifier and GradientBoostingRegressor when the number of samples is larger than tens of thousands of samples.

They also have built-in support for missing values, which avoids the need for an imputer.

These estimators are described in more detail below in Histogram-Based Gradient Boosting.

The following guide focuses on GradientBoostingClassifier and GradientBoostingRegressor, which might be preferred for small sample sizes since binning may lead to split points that are too approximate in this setting.

1.11.4.1. Classification

GradientBoostingClassifier supports both binary and multi-class classification. The following example shows how to fit a gradient boosting classifier with 100 decision stumps as weak learners:

>>> from sklearn.datasets import make_hastie_10_2
>>> from sklearn.ensemble import GradientBoostingClassifier

>>> X, y = make_hastie_10_2(random_state=0)
>>> X_train, X_test = X[:2000], X[2000:]
>>> y_train, y_test = y[:2000], y[2000:]

>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
...                                  max_depth=1, random_state=0).fit(X_train, y_train)
>>> clf.score(X_test, y_test)
0.913...

The number of weak learners (i.e. regression trees) is controlled by the parameter n_estimators; the size of each tree can be controlled either by setting the tree depth via max_depth or by setting the number of leaf nodes via max_leaf_nodes. The learning_rate is a hyper-parameter in the range (0.0, 1.0] that controls overfitting via shrinkage.

Note

Classification with more than 2 classes requires the induction of n_classes regression trees at each iteration, thus, the total number of induced trees equals n_classes * n_estimators. For datasets with a large number of classes we strongly recommend using HistGradientBoostingClassifier as an alternative to GradientBoostingClassifier.

1.11.4.2. Regression

GradientBoostingRegressor supports a number of different loss functions for regression which can be specified via the argument loss; the default loss function for regression is least squares ('ls').

>>> import numpy as np
>>> from sklearn.metrics import mean_squared_error
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.ensemble import GradientBoostingRegressor

>>> X, y = make_friedman1(n_samples=1200, random_state=0, noise=1.0)
>>> X_train, X_test = X[:200], X[200:]
>>> y_train, y_test = y[:200], y[200:]
>>> est = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1,
...                                 max_depth=1, random_state=0, loss='ls').fit(X_train, y_train)
>>> mean_squared_error(y_test, est.predict(X_test))
5.00...

The figure below shows the results of applying GradientBoostingRegressor with least squares loss and 500 base learners to the Boston house price dataset (sklearn.datasets.load_boston). The plot on the left shows the train and test error at each iteration. The train error at each iteration is stored in the train_score_ attribute of the gradient boosting model. The test error at each iteration can be obtained via the staged_predict method which returns a generator that yields the predictions at each stage. Plots like these can be used to determine the optimal number of trees (i.e. n_estimators) by early stopping. The plot on the right shows the feature importances which can be obtained via the feature_importances_ property.
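A minimal sketch of such a computation, reusing est, X_test and y_test from the regression example above:

>>> import numpy as np
>>> from sklearn.metrics import mean_squared_error
>>> # test error at each boosting stage, from the staged_predict generator
>>> test_score = np.array([mean_squared_error(y_test, y_pred)
...                        for y_pred in est.staged_predict(X_test)])
>>> train_score = est.train_score_   # training loss at each stage
>>> test_score.shape == train_score.shape
True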

../_images/sphx_glr_plot_gradient_boosting_regression_0011.png


1.11.4.3. Fitting additional weak-learners

Both GradientBoostingRegressor and GradientBoostingClassifier support warm_start=True which allows you to add more estimators to an already fitted model.

>>> _ = est.set_params(n_estimators=200, warm_start=True)  # set warm_start and new nr of trees
>>> _ = est.fit(X_train, y_train)  # fit additional 100 trees to est
>>> mean_squared_error(y_test, est.predict(X_test))
3.84...

1.11.4.4. Controlling the tree size

The size of the regression tree base learners defines the level of variable interactions that can be captured by the gradient boosting model. In general, a tree of depth h can capture interactions of order h. There are two ways in which the size of the individual regression trees can be controlled.

If you specify max_depth=h then complete binary trees of depth h will be grown. Such trees will have (at most) 2**h leaf nodes and 2**h - 1 split nodes.

Alternatively, you can control the tree size by specifying the number of leaf nodes via the parameter max_leaf_nodes. In this case, trees will be grown using best-first search where nodes with the highest improvement in impurity will be expanded first. A tree with max_leaf_nodes=k has k - 1 split nodes and thus can model interactions of up to order max_leaf_nodes - 1.

We found that max_leaf_nodes=k gives comparable results to max_depth=k-1 but is significantly faster to train at the expense of a slightly higher training error. The parameter max_leaf_nodes corresponds to the variable J in the chapter on gradient boosting in [F2001] and is related to the parameter interaction.depth in R's gbm package where max_leaf_nodes == interaction.depth + 1.
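A minimal sketch of the two options (the values are illustrative; by the rule of thumb above, max_leaf_nodes=4 is roughly comparable to max_depth=3):

>>> from sklearn.ensemble import GradientBoostingRegressor
>>> # depth-limited trees: at most 2**3 = 8 leaves per tree
>>> est_depth = GradientBoostingRegressor(max_depth=3)
>>> # best-first trees with at most 4 leaves
>>> est_leaves = GradientBoostingRegressor(max_leaf_nodes=4)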

1.11.4.5. Mathematical formulation

GBRT considers additive models of the following form:

\[F(x) = \sum_{m=1}^{M} \gamma_m h_m(x)\]

where \(h_m(x)\) are the basis functions which are usually called weak learners in the context of boosting. Gradient Tree Boosting uses decision trees of fixed size as weak learners. Decision trees have a number of abilities that make them valuable for boosting, namely the ability to handle data of mixed type and the ability to model complex functions.

Similar to other boosting algorithms, GBRT builds the additive model in a greedy fashion:

\[F_m(x) = F_{m-1}(x) + \gamma_m h_m(x)\]

where the newly added tree \(h_m\) tries to minimize the loss \(L\), given the previous ensemble \(F_{m-1}\):

\[h_m = \arg\min_{h} \sum_{i=1}^{n} L\bigl(y_i, F_{m-1}(x_i) + h(x_i)\bigr)\]

The initial model \(F_{0}\) is problem specific; for least-squares regression one usually chooses the mean of the target values.

Note

The initial model can also be specified via the init argument. The passed object has to implement fit and predict.

Gradient Boosting attempts to solve this minimization problem numerically via steepest descent: the steepest descent direction is the negative gradient of the loss function evaluated at the current model \(F_{m-1}\), which can be calculated for any differentiable loss function:

\[F_m(x) = F_{m-1}(x) - \gamma_m \sum_{i=1}^{n} \nabla_F L\bigl(y_i, F_{m-1}(x_i)\bigr)\]

where the step length \(\gamma_m\) is chosen using line search:

\[\gamma_m = \arg\min_{\gamma} \sum_{i=1}^{n} L\left(y_i, F_{m-1}(x_i) - \gamma \frac{\partial L\bigl(y_i, F_{m-1}(x_i)\bigr)}{\partial F_{m-1}(x_i)}\right)\]

The algorithms for regression and classification only differ in the concrete loss function used.

1.11.4.5.1. Loss Functions

The following loss functions are supported and can be specified using the parameter loss:

  • Regression

    • Least squares ('ls'): The natural choice for regression due to its superior computational properties. The initial model is given by the mean of the target values.

    • Least absolute deviation ('lad'): A robust loss function for regression. The initial model is given by the median of the target values.

    • Huber ('huber'): Another robust loss function that combines least squares and least absolute deviation; use alpha to control the sensitivity with regards to outliers (see [F2001] for more details).

    • Quantile ('quantile'): A loss function for quantile regression. Use 0 < alpha < 1 to specify the quantile. This loss function can be used to create prediction intervals (see Prediction Intervals for Gradient Boosting Regression, and the sketch after this list).

  • Classification

    • Binomial deviance ('deviance'): The negative binomial log-likelihood loss function for binary classification (provides probability estimates). The initial model is given by the log odds-ratio.

    • Multinomial deviance ('deviance'): The negative multinomial log-likelihood loss function for multi-class classification with n_classes mutually exclusive classes. It provides probability estimates. The initial model is given by the prior probability of each class. At each iteration n_classes regression trees have to be constructed which makes GBRT rather inefficient for data sets with a large number of classes.

    • Exponential loss ('exponential'): The same loss function as AdaBoostClassifier. Less robust to mislabeled examples than 'deviance'; can only be used for binary classification.
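As a minimal sketch of the quantile loss mentioned in the regression list above (the data, quantile levels and default tree parameters are illustrative), two models fitted at complementary quantiles give a rough prediction interval:

>>> from sklearn.datasets import make_friedman1
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> X, y = make_friedman1(n_samples=200, random_state=0, noise=1.0)
>>> # one model per bound of a 90% central prediction interval
>>> lower = GradientBoostingRegressor(loss='quantile', alpha=0.05).fit(X, y)
>>> upper = GradientBoostingRegressor(loss='quantile', alpha=0.95).fit(X, y)
>>> intervals = list(zip(lower.predict(X[:3]), upper.predict(X[:3])))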

1.11.4.6. Regularization

1.11.4.6.1. Shrinkage

[F2001] proposed a simple regularization strategy that scales the contribution of each weak learner by a factor \(\nu\):

\[F_m(x) = F_{m-1}(x) + \nu \gamma_m h_m(x)\]

The parameter \(\nu\) is also called the learning rate because it scales the step length of the gradient descent procedure; it can be set via the learning_rate parameter.

The parameter learning_rate strongly interacts with the parameter n_estimators, the number of weak learners to fit. Smaller values of learning_rate require larger numbers of weak learners to maintain a constant training error. Empirical evidence suggests that small values of learning_rate favor better test error. [HTF] recommend setting the learning rate to a small constant (e.g. learning_rate <= 0.1) and choosing n_estimators by early stopping. For a more detailed discussion of the interaction between learning_rate and n_estimators see [R2007].
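For instance, a minimal sketch of this trade-off (the values are illustrative):

>>> from sklearn.ensemble import GradientBoostingClassifier
>>> # a small learning rate usually needs more boosting stages to reach the
>>> # same training error, but often generalizes better
>>> clf = GradientBoostingClassifier(learning_rate=0.05, n_estimators=1000)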

1.11.4.6.2. Subsampling

[F1999] proposed stochastic gradient boosting, which combines gradient boosting with bootstrap averaging (bagging). At each iteration the base classifier is trained on a fraction subsample of the available training data. The subsample is drawn without replacement. A typical value of subsample is 0.5.

The figure below illustrates the effect of shrinkage and subsampling on the goodness-of-fit of the model. We can clearly see that shrinkage outperforms no-shrinkage. Subsampling with shrinkage can further increase the accuracy of the model. Subsampling without shrinkage, on the other hand, does poorly.

../_images/sphx_glr_plot_gradient_boosting_regularization_0011.png

Another strategy to reduce the variance is by subsampling the features, analogous to the random splits in RandomForestClassifier. The number of subsampled features can be controlled via the max_features parameter.

Note

Using a small max_features value can significantly decrease the runtime.

Stochastic gradient boosting makes it possible to compute out-of-bag estimates of the test deviance by computing the improvement in deviance on the examples that are not included in the bootstrap sample (i.e. the out-of-bag examples). The improvements are stored in the attribute oob_improvement_. oob_improvement_[i] holds the improvement in terms of the loss on the OOB samples if you add the i-th stage to the current predictions. Out-of-bag estimates can be used for model selection, for example to determine the optimal number of iterations. OOB estimates are usually very pessimistic thus we recommend using cross-validation instead and only using OOB if cross-validation is too time consuming.
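A minimal sketch of reading these OOB estimates (the data and parameter values are illustrative):

>>> import numpy as np
>>> from sklearn.datasets import make_hastie_10_2
>>> from sklearn.ensemble import GradientBoostingClassifier
>>> X, y = make_hastie_10_2(random_state=0)
>>> # subsample < 1.0 enables stochastic gradient boosting and OOB estimates
>>> clf = GradientBoostingClassifier(n_estimators=100, subsample=0.5,
...                                  random_state=0).fit(X, y)
>>> # stage with the best cumulative OOB improvement, as a rough guess of n_estimators
>>> oob_best_iter = int(np.argmax(np.cumsum(clf.oob_improvement_))) + 1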


1.11.4.7. Interpretation

Individual decision trees can be interpreted easily by simply visualizing the tree structure. Gradient boosting models, however, comprise hundreds of regression trees thus they cannot be easily interpreted by visual inspection of the individual trees. Fortunately, a number of techniques have been proposed to summarize and interpret gradient boosting models.

1.11.4.7.1. Feature importance

Often features do not contribute equally to predict the target response; in many situations the majority of the features are in fact irrelevant. When interpreting a model, the first question usually is: what are those important features and how do they contribute to predicting the target response?

Individual decision trees intrinsically perform feature selection by selecting appropriate split points. This information can be used to measure the importance of each feature; the basic idea is: the more often a feature is used in the split points of a tree the more important that feature is. This notion of importance can be extended to decision tree ensembles by simply averaging the feature importance of each tree (see Feature importance evaluation for more details).

The feature importance scores of a fit gradient boosting model can be accessed via the feature_importances_ property:

>>> from sklearn.datasets import make_hastie_10_2
>>> from sklearn.ensemble import GradientBoostingClassifier

>>> X, y = make_hastie_10_2(random_state=0)
>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
...                                  max_depth=1, random_state=0).fit(X, y)
>>> clf.feature_importances_
array([0.10..., 0.10..., 0.11..., ...


1.11.5. Histogram-Based Gradient Boosting

Scikit-learn 0.21 introduces two new experimental implementations of gradient boosting trees, namely HistGradientBoostingClassifier and HistGradientBoostingRegressor, inspired by LightGBM (see [LightGBM]).

These histogram-based estimators can be orders of magnitude faster than GradientBoostingClassifier and GradientBoostingRegressor when the number of samples is larger than tens of thousands of samples.

They also have built-in support for missing values, which avoids the need for an imputer.

These fast estimators first bin the input samples X into integer-valued bins (typically 256 bins) which tremendously reduces the number of splitting points to consider, and allows the algorithm to leverage integer-based data structures (histograms) instead of relying on sorted continuous values when building the trees. The API of these estimators is slightly different, and some of the features from GradientBoostingClassifier and GradientBoostingRegressor are not yet supported: in particular sample weights, and some loss functions.

These estimators are still experimental: their predictions and their API might change without any deprecation cycle. To use them, you need to explicitly import enable_hist_gradient_boosting:

>>> # explicitly require this experimental feature
>>> from sklearn.experimental import enable_hist_gradient_boosting  # noqa
>>> # now you can import normally from ensemble
>>> from sklearn.ensemble import HistGradientBoostingClassifier


1.11.5.1. Usage

Most of the parameters are unchanged from GradientBoostingClassifier and GradientBoostingRegressor. One exception is the max_iter parameter that replaces n_estimators, and controls the number of iterations of the boosting process:

>>> from sklearn.experimental import enable_hist_gradient_boosting
>>> from sklearn.ensemble import HistGradientBoostingClassifier
>>> from sklearn.datasets import make_hastie_10_2

>>> X, y = make_hastie_10_2(random_state=0)
>>> X_train, X_test = X[:2000], X[2000:]
>>> y_train, y_test = y[:2000], y[2000:]

>>> clf = HistGradientBoostingClassifier(max_iter=100).fit(X_train, y_train)
>>> clf.score(X_test, y_test)
0.8965

Available losses for regression are 'least_squares' and 'least_absolute_deviation', which is less sensitive to outliers. For classification, 'binary_crossentropy' is used for binary classification and 'categorical_crossentropy' is used for multiclass classification. By default the loss is 'auto' and will select the appropriate loss depending on y passed to fit.

The size of the trees can be controlled through the max_leaf_nodes, max_depth, and min_samples_leaf parameters.

The number of bins used to bin the data is controlled with the max_bins parameter. Using fewer bins acts as a form of regularization. It is generally recommended to use as many bins as possible, which is the default.

The l2_regularization parameter is a regularizer on the loss function and corresponds to \(\lambda\) in equation (2) of [XGBoost].

The early-stopping behaviour is controlled via the scoring, validation_fraction, n_iter_no_change, and tol parameters. It is possible to early-stop using an arbitrary scorer, or just the training or validation loss. By default, early-stopping is performed using the default scorer of the estimator on a validation set but it is also possible to perform early-stopping based on the loss value, which is significantly faster.
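A minimal sketch of enabling early stopping on the validation loss (the parameter values are illustrative):

>>> from sklearn.experimental import enable_hist_gradient_boosting  # noqa
>>> from sklearn.ensemble import HistGradientBoostingClassifier
>>> # stop when the validation loss has not improved by tol for 10 iterations
>>> clf = HistGradientBoostingClassifier(max_iter=1000, scoring='loss',
...                                      validation_fraction=0.1,
...                                      n_iter_no_change=10, tol=1e-7)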

1.11.5.2. Missing values support

HistGradientBoostingClassifier and HistGradientBoostingRegressor have built-in support for missing values (NaNs).

During training, the tree grower learns at each split point whether samples with missing values should go to the left or right child, based on the potential gain. When predicting, samples with missing values are assigned to the left or right child accordingly:

>>> from sklearn.experimental import enable_hist_gradient_boosting  # noqa
>>> from sklearn.ensemble import HistGradientBoostingClassifier
>>> import numpy as np

>>> X = np.array([0, 1, 2, np.nan]).reshape(-1, 1)
>>> y = [0, 0, 1, 1]

>>> gbdt = HistGradientBoostingClassifier(min_samples_leaf=1).fit(X, y)
>>> gbdt.predict(X)
array([0, 0, 1, 1])

When the missingness pattern is predictive, the splits can be done on whether the feature value is missing or not:

>>> X = np.array([0, np.nan, 1, 2, np.nan]).reshape(-1, 1)
>>> y = [0, 1, 0, 0, 1]
>>> gbdt = HistGradientBoostingClassifier(min_samples_leaf=1,
...                                       max_depth=2,
...                                       learning_rate=1,
...                                       max_iter=1).fit(X, y)
>>> gbdt.predict(X)
array([0, 1, 0, 0, 1])

If no missing values were encountered for a given feature during training, then samples with missing values are mapped to whichever child has the most samples.

1.11.5.3. Low-level parallelism

HistGradientBoostingClassifier and HistGradientBoostingRegressor have implementations that use OpenMP for parallelization through Cython. For more details on how to control the number of threads, please refer to our Parallelism notes.

The following parts are parallelized:

  • mapping samples from real values to integer-valued bins (finding the bin thresholds is however sequential)

  • building histograms is parallelized over features

  • finding the best split point at a node is parallelized over features

  • during fit, mapping samples into the left and right children is parallelized over samples

  • gradient and hessians computations are parallelized over samples

  • predicting is parallelized over samples

1.11.5.4. Why it’s faster

The bottleneck of a gradient boosting procedure is building the decision trees. Building a traditional decision tree (as in the other GBDTs GradientBoostingClassifier and GradientBoostingRegressor) requires sorting the samples at each node (for each feature). Sorting is needed so that the potential gain of a split point can be computed efficiently. Splitting a single node has thus a complexity of \(\mathcal{O}(n_\text{features} \times n \log(n))\) where \(n\) is the number of samples at the node.

HistGradientBoostingClassifier and HistGradientBoostingRegressor, in contrast, do not require sorting the feature values and instead use a data-structure called a histogram, where the samples are implicitly ordered. Building a histogram has a \(\mathcal{O}(n)\) complexity, so the node splitting procedure has a \(\mathcal{O}(n_\text{features} \times n)\) complexity, much smaller than the previous one. In addition, instead of considering \(n\) split points, we here consider only max_bins split points, which is much smaller.

In order to build histograms, the input data X needs to be binned into integer-valued bins. This binning procedure does require sorting the feature values, but it only happens once at the very beginning of the boosting process (not at each node, like in GradientBoostingClassifier and GradientBoostingRegressor).

Finally, many parts of the implementation of HistGradientBoostingClassifier and HistGradientBoostingRegressor are parallelized.

References

  • [F2001] J. Friedman, "Greedy Function Approximation: A Gradient Boosting Machine", The Annals of Statistics, 29(5), 2001.

  • [F1999] J. Friedman, "Stochastic Gradient Boosting", 1999.

  • [R2007] G. Ridgeway, "Generalized Boosted Models: A guide to the gbm package", 2007.

  • [XGBoost] T. Chen and C. Guestrin, "XGBoost: A Scalable Tree Boosting System", 2016.

  • [LightGBM] G. Ke et al., "LightGBM: A Highly Efficient Gradient Boosting Decision Tree", 2017.

1.11.6. Voting Classifier

The idea behind the VotingClassifier is to combine conceptually different machine learning classifiers and use a majority vote or the average predicted probabilities (soft vote) to predict the class labels. Such a classifier can be useful for a set of equally well performing models in order to balance out their individual weaknesses.

1.11.6.1. Majority Class Labels (Majority/Hard Voting)

In majority voting, the predicted class label for a particular sample is the class label that represents the majority (mode) of the class labels predicted by each individual classifier.

E.g., if the prediction for a given sample is

  • classifier 1 -> class 1

  • classifier 2 -> class 1

  • classifier 3 -> class 2

the VotingClassifier (with voting='hard') would classify the sample as "class 1" based on the majority class label.

In the cases of a tie, the VotingClassifier will select the class based on the ascending sort order. E.g., in the following scenario

  • classifier 1 -> class 2

  • classifier 2 -> class 1

the class label 1 will be assigned to the sample.

1.11.6.1.1. Usage

The following example shows how to fit the majority rule classifier:

>>> from sklearn import datasets
>>> from sklearn.model_selection import cross_val_score
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.naive_bayes import GaussianNB
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.ensemble import VotingClassifier

>>> iris = datasets.load_iris()
>>> X, y = iris.data[:, 1:3], iris.target

>>> clf1 = LogisticRegression(random_state=1)
>>> clf2 = RandomForestClassifier(n_estimators=50, random_state=1)
>>> clf3 = GaussianNB()

>>> eclf = VotingClassifier(
...     estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
...     voting='hard')

>>> for clf, label in zip([clf1, clf2, clf3, eclf], ['Logistic Regression', 'Random Forest', 'naive Bayes', 'Ensemble']):
...     scores = cross_val_score(clf, X, y, scoring='accuracy', cv=5)
...     print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label))
Accuracy: 0.95 (+/- 0.04) [Logistic Regression]
Accuracy: 0.94 (+/- 0.04) [Random Forest]
Accuracy: 0.91 (+/- 0.04) [naive Bayes]
Accuracy: 0.95 (+/- 0.04) [Ensemble]

1.11.6.2. Weighted Average Probabilities (Soft Voting)

In contrast to majority voting (hard voting), soft voting returns the class label as argmax of the sum of predicted probabilities.

Specific weights can be assigned to each classifier via the weights parameter. When weights are provided, the predicted class probabilities for each classifier are collected, multiplied by the classifier weight, and averaged. The final class label is then derived from the class label with the highest average probability.

To illustrate this with a simple example, let's assume we have 3 classifiers and a 3-class classification problem where we assign equal weights to all classifiers: w1=1, w2=1, w3=1.

The weighted average probabilities for a sample would then be calculated as follows:

classifier        | class 1  | class 2  | class 3
classifier 1      | w1 * 0.2 | w1 * 0.5 | w1 * 0.3
classifier 2      | w2 * 0.6 | w2 * 0.3 | w2 * 0.1
classifier 3      | w3 * 0.3 | w3 * 0.4 | w3 * 0.3
weighted average  | 0.37     | 0.40     | 0.23

Here, the predicted class label is 2, since it has the highest average probability.
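The same computation as a small NumPy sketch (the probabilities are the illustrative values from the table above):

>>> import numpy as np
>>> probas = np.array([[0.2, 0.5, 0.3],   # classifier 1
...                    [0.6, 0.3, 0.1],   # classifier 2
...                    [0.3, 0.4, 0.3]])  # classifier 3
>>> weights = [1, 1, 1]
>>> avg = np.average(probas, axis=0, weights=weights)  # [0.37, 0.4, 0.23]
>>> int(np.argmax(avg)) + 1   # class labels are 1, 2, 3
2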

The following example illustrates how the decision regions may change when a soft VotingClassifier is used based on a Support Vector Machine, a Decision Tree, and a K-nearest neighbor classifier:

>>> from sklearn import datasets
>>> from sklearn.tree import DecisionTreeClassifier
>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.svm import SVC
>>> from itertools import product
>>> from sklearn.ensemble import VotingClassifier

>>> # Loading some example data
>>> iris = datasets.load_iris()
>>> X = iris.data[:, [0, 2]]
>>> y = iris.target

>>> # Training classifiers
>>> clf1 = DecisionTreeClassifier(max_depth=4)
>>> clf2 = KNeighborsClassifier(n_neighbors=7)
>>> clf3 = SVC(kernel='rbf', probability=True)
>>> eclf = VotingClassifier(estimators=[('dt', clf1), ('knn', clf2), ('svc', clf3)],
...                         voting='soft', weights=[2, 1, 2])

>>> clf1 = clf1.fit(X, y)
>>> clf2 = clf2.fit(X, y)
>>> clf3 = clf3.fit(X, y)
>>> eclf = eclf.fit(X, y)

../_images/sphx_glr_plot_voting_decision_regions_0011.png

1.11.6.3. Using the VotingClassifier with GridSearchCV

The VotingClassifier can also be used together with GridSearchCV in order to tune the hyperparameters of the individual estimators:

>>> from sklearn.model_selection import GridSearchCV
>>> clf1 = LogisticRegression(random_state=1)
>>> clf2 = RandomForestClassifier(random_state=1)
>>> clf3 = GaussianNB()
>>> eclf = VotingClassifier(
...     estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
...     voting='soft'
... )

>>> params = {'lr__C': [1.0, 100.0], 'rf__n_estimators': [20, 200]}

>>> grid = GridSearchCV(estimator=eclf, param_grid=params, cv=5)
>>> grid = grid.fit(iris.data, iris.target)

1.11.6.3.1. Usage

In order to predict the class labels based on the predicted class probabilities (scikit-learn estimators in the VotingClassifier must support the predict_proba method), set voting='soft':

>>> eclf = VotingClassifier(
...     estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
...     voting='soft'
... )

Optionally, weights can be provided for the individual classifiers:

>>> eclf = VotingClassifier(
...     estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
...     voting='soft', weights=[2, 5, 1]
... )

1.11.7. Voting Regressor

The idea behind the VotingRegressor is to combine conceptually different machine learning regressors and return the average predicted values. Such a regressor can be useful for a set of equally well performing models in order to balance out their individual weaknesses.

1.11.7.1. Usage

The following example shows how to fit the VotingRegressor:

>>> from sklearn.datasets import load_boston
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> from sklearn.ensemble import RandomForestRegressor
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.ensemble import VotingRegressor

>>> # Loading some example data
>>> X, y = load_boston(return_X_y=True)

>>> # Training regressors
>>> reg1 = GradientBoostingRegressor(random_state=1, n_estimators=10)
>>> reg2 = RandomForestRegressor(random_state=1, n_estimators=10)
>>> reg3 = LinearRegression()
>>> ereg = VotingRegressor(estimators=[('gb', reg1), ('rf', reg2), ('lr', reg3)])
>>> ereg = ereg.fit(X, y)

../_images/sphx_glr_plot_voting_regressor_0011.png


1.11.8. Stacked generalization

Stacked generalization is a method for combining estimators to reduce their biases [W1992] [HTF]. More precisely, the predictions of each individual estimator are stacked together and used as input to a final estimator to compute the prediction. This final estimator is trained through cross-validation.

The StackingClassifier and StackingRegressor provide such strategies which can be applied to classification and regression problems.

The estimators parameter corresponds to the list of the estimators which are stacked together in parallel on the input data. It should be given as a list of names and estimators:

>>> from sklearn.linear_model import RidgeCV, LassoCV
>>> from sklearn.svm import SVR
>>> estimators = [('ridge', RidgeCV()),
...               ('lasso', LassoCV(random_state=42)),
...               ('svr', SVR(C=1, gamma=1e-6))]

The final_estimator will use the predictions of the estimators as input. It needs to be a classifier or a regressor when using StackingClassifier or StackingRegressor, respectively:

>>> from sklearn.ensemble import GradientBoostingRegressor
>>> from sklearn.ensemble import StackingRegressor
>>> reg = StackingRegressor(
...     estimators=estimators,
...     final_estimator=GradientBoostingRegressor(random_state=42))

To train the estimators and final_estimator, the fit method needs to be called on the training data:

>>> from sklearn.datasets import load_boston
>>> X, y = load_boston(return_X_y=True)
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = train_test_split(X, y,
...                                                     random_state=42)
>>> reg.fit(X_train, y_train)
StackingRegressor(...)

During training, the estimators are fitted on the whole training data X_train. They will be used when calling predict or predict_proba. To generalize and avoid over-fitting, the final_estimator is trained on out-of-sample predictions using sklearn.model_selection.cross_val_predict internally.

For StackingClassifier, note that the output of the estimators is controlled by the stack_method parameter, which is called for each estimator. This parameter is either a string naming an estimator method, or 'auto', which will automatically identify an available method, tested in the order of preference: predict_proba, decision_function and predict.

A StackingRegressor and StackingClassifier can be used as any other regressor or classifier, exposing predict, predict_proba, and decision_function methods, e.g.:

>>> y_pred = reg.predict(X_test)
>>> from sklearn.metrics import r2_score
>>> print('R2 score: {:.2f}'.format(r2_score(y_test, y_pred)))
R2 score: 0.81

Note that it is also possible to get the stacked outputs of the estimators using the transform method:

>>> reg.transform(X_test[:5])
array([[28.78..., 28.43..., 22.62...],
       [35.96..., 32.58..., 23.68...],
       [14.97..., 14.05..., 16.45...],
       [25.19..., 25.54..., 22.92...],
       [18.93..., 19.26..., 17.03...]])

In practice, a stacking predictor predicts as well as the best predictor of the base layer and sometimes even outperforms it by combining the different strengths of these predictors. However, training a stacking predictor is computationally expensive.

Note

For StackingClassifier, when using stack_method='predict_proba', the first column is dropped when the problem is a binary classification problem. Indeed, both probability columns predicted by each estimator are perfectly collinear.

Note

Multiple stacking layers can be achieved by assigning final_estimator to a StackingClassifier or StackingRegressor:

>>> final_layer = StackingRegressor(
...     estimators=[('rf', RandomForestRegressor(random_state=42)),
...                 ('gbrt', GradientBoostingRegressor(random_state=42))],
...     final_estimator=RidgeCV()
... )
>>> multi_layer_regressor = StackingRegressor(
...     estimators=[('ridge', RidgeCV()),
...                 ('lasso', LassoCV(random_state=42)),
...                 ('svr', SVR(C=1, gamma=1e-6, kernel='rbf'))],
...     final_estimator=final_layer
... )
>>> multi_layer_regressor.fit(X_train, y_train)
StackingRegressor(...)
>>> print('R2 score: {:.2f}'
...       .format(multi_layer_regressor.score(X_test, y_test)))
R2 score: 0.82

References

  • [W1992] Wolpert, David H., "Stacked generalization", Neural Networks, 5.2 (1992): 241-259.