3.5. Validation curves: plotting scores to evaluate models

Every estimator has its advantages and drawbacks. Its generalization error can be decomposed in terms of bias, variance and noise. The bias of an estimator is its average error for different training sets. The variance of an estimator indicates how sensitive it is to varying training sets. Noise is a property of the data.

In the following plot, we see a function f(x) = cos(3/2 πx) and some noisy samples from that function. We use three different estimators to fit the function: linear regression with polynomial features of degrees 1, 4 and 15. We see that the first estimator can at best provide only a poor fit to the samples and the true function because it is too simple (high bias), the second estimator approximates it almost perfectly, and the last estimator approximates the training data perfectly but does not fit the true function very well, i.e. it is very sensitive to varying training data (high variance).

[Figure: polynomial fits of degree 1, 4 and 15 to the noisy samples (sphx_glr_plot_underfitting_overfitting_001.png)]
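The experiment behind this figure can be sketched in a few lines; the noise level and sample count below are illustrative assumptions rather than the exact settings of the example:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
X = np.sort(rng.rand(30))[:, np.newaxis]
# noisy samples of the true function (noise scale is an assumption)
y = np.cos(1.5 * np.pi * X.ravel()) + rng.randn(30) * 0.1

for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X, y)
    # The training fit improves monotonically with the degree, but only
    # the intermediate model generalizes well to the true function.
    print(degree, model.score(X, y))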

Bias and variance are inherent properties of estimators and we usually have to select learning algorithms and hyperparameters so that both bias and variance are as low as possible (see Bias-variance dilemma). Another way to reduce the variance of a model is to use more training data. However, you should only collect more training data if the true function is too complex to be approximated by an estimator with a lower variance.

In the simple one-dimensional problem that we have seen in the example it is easy to see whether the estimator suffers from bias or variance. However, in high-dimensional spaces, models can become very difficult to visualize. For this reason, it is often helpful to use the tools described below.


3.5.1. Validation curve

To validate a model we need a scoring function (see Metrics and scoring: quantifying the quality of predictions), for example accuracy for classifiers. The proper way of choosing multiple hyperparameters of an estimator is of course grid search or similar methods (see Tuning the hyper-parameters of an estimator) that select the hyperparameter with the maximum score on a validation set or multiple validation sets. Note that if we optimize the hyperparameters based on a validation score, the validation score is biased and no longer a good estimate of the generalization. To get a proper estimate of the generalization we have to compute the score on another test set.
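As a minimal sketch of this workflow (the dataset, estimator and parameter grid here are arbitrary choices for illustration), one might combine grid search on the training data with a held-out test set like this:

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# tune C by cross-validated grid search on the training portion only
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5)
search.fit(X_train, y_train)

print(search.best_score_)            # validation score: optimistically biased
print(search.score(X_test, y_test))  # held-out test score: proper estimate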

However, it is sometimes helpful to plot the influence of a single hyperparameter on the training score and the validation score to find out whether the estimator is overfitting or underfitting for some hyperparameter values.

The function validation_curve can help in this case:


>>> import numpy as np
>>> from sklearn.model_selection import validation_curve
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import Ridge

>>> np.random.seed(0)
>>> X, y = load_iris(return_X_y=True)
>>> indices = np.arange(y.shape[0])
>>> np.random.shuffle(indices)
>>> X, y = X[indices], y[indices]

>>> train_scores, valid_scores = validation_curve(
...     Ridge(), X, y, param_name="alpha",
...     param_range=np.logspace(-7, 3, 3), cv=5)
>>> train_scores
array([[0.93..., 0.94..., 0.92..., 0.91..., 0.92...],
       [0.93..., 0.94..., 0.92..., 0.91..., 0.92...],
       [0.51..., 0.52..., 0.49..., 0.47..., 0.49...]])
>>> valid_scores
array([[0.90..., 0.84..., 0.94..., 0.96..., 0.93...],
       [0.90..., 0.84..., 0.94..., 0.96..., 0.93...],
       [0.46..., 0.25..., 0.50..., 0.49..., 0.52...]])

If the training score and the validation score are both low, the estimator is underfitting. If the training score is high and the validation score is low, the estimator is overfitting; otherwise it is working very well. A low training score and a high validation score is usually not possible. All three cases can be found in the plot below where we vary the parameter gamma of an SVM on the digits dataset.

[Figure: validation curve for an SVM on the digits dataset, varying gamma (sphx_glr_plot_validation_curve_001.png)]
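A plot like the one above could be produced along the following lines; the parameter range is an assumption chosen to cover both the underfitting and the overfitting regime:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.model_selection import validation_curve
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
param_range = np.logspace(-6, -1, 5)  # assumed range for gamma
train_scores, valid_scores = validation_curve(
    SVC(), X, y, param_name="gamma", param_range=param_range, cv=5)

# plot the mean score across the cross-validation folds
plt.semilogx(param_range, train_scores.mean(axis=1), label="training score")
plt.semilogx(param_range, valid_scores.mean(axis=1), label="validation score")
plt.xlabel("gamma")
plt.ylabel("score")
plt.legend()
plt.show()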

3.5.2. Learning curve

A learning curve shows the validation and training score of an estimator for varying numbers of training samples. It is a tool to find out how much we benefit from adding more training data and whether the estimator suffers more from a variance error or a bias error. Consider the following example where we plot the learning curve of a naive Bayes classifier and an SVM.

For the naive Bayes, both the validation score and the training score converge to a value that is quite low with increasing size of the training set. Thus, we will probably not benefit much from more training data.

In contrast, for small amounts of data, the training score of the SVM is much greater than the validation score. Adding more training samples will most likely increase generalization.

[Figure: learning curves of a naive Bayes classifier and an SVM (sphx_glr_plot_learning_curve_001.png)]

We can use the function learning_curve to generate the values that are required to plot such a learning curve (number of samples that have been used, the average scores on the training sets and the average scores on the validation sets):


>>> from sklearn.model_selection import learning_curve
>>> from sklearn.svm import SVC

>>> train_sizes, train_scores, valid_scores = learning_curve(
...     SVC(kernel='linear'), X, y, train_sizes=[50, 80, 110], cv=5)
>>> train_sizes
array([ 50,  80, 110])
>>> train_scores
array([[0.98..., 0.98 , 0.98..., 0.98..., 0.98...],
       [0.98..., 1.   , 0.98..., 0.98..., 0.98...],
       [0.98..., 1.   , 0.98..., 0.98..., 0.99...]])
>>> valid_scores
array([[1. , 0.93..., 1. , 1. , 0.96...],
       [1. , 0.96..., 1. , 1. , 0.96...],
       [1. , 0.96..., 1. , 1. , 0.96...]])
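A side-by-side comparison like the naive Bayes / SVM figure above might then be plotted along these lines; the train_sizes grid and the SVM's gamma value are illustrative assumptions:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
for estimator in (GaussianNB(), SVC(kernel="rbf", gamma=0.001)):
    sizes, train_scores, valid_scores = learning_curve(
        estimator, X, y, train_sizes=np.linspace(0.1, 1.0, 5), cv=5)
    name = type(estimator).__name__
    # mean score across folds for each training-set size
    plt.plot(sizes, train_scores.mean(axis=1), label=f"{name} training")
    plt.plot(sizes, valid_scores.mean(axis=1), label=f"{name} validation")
plt.xlabel("number of training samples")
plt.ylabel("score")
plt.legend()
plt.show()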