1.1. Linear Models

The following are a set of methods intended for regression in which the target value is expected to be a linear combination of the features. In mathematical notation, if \(\hat{y}\) is the predicted value,

\[\hat{y}(w, x) = w_0 + w_1 x_1 + \dots + w_p x_p\]

Across the module, we designate the vector \(w = (w_1, \dots, w_p)\) as coef_ and \(w_0\) as intercept_.

To perform classification with generalized linear models, see Logistic regression.

1.1.1. Ordinary Least Squares

LinearRegression fits a linear model with coefficients \(w = (w_1, \dots, w_p)\) to minimize the residual sum of squares between the observed targets in the dataset, and the targets predicted by the linear approximation. Mathematically it solves a problem of the form:

\[\min_{w} ||X w - y||_2^2\]

(Figure: an ordinary least squares fit.)

LinearRegression will take in its fit method arrays X, y and will store the coefficients \(w\) of the linear model in its coef_ member:

>>> from sklearn import linear_model
>>> reg = linear_model.LinearRegression()
>>> reg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
LinearRegression()
>>> reg.coef_
array([0.5, 0.5])

The coefficient estimates for Ordinary Least Squares rely on the independence of the features. When features are correlated and the columns of the design matrix \(X\) have an approximately linear dependence, the design matrix becomes close to singular and, as a result, the least-squares estimate becomes highly sensitive to random errors in the observed target, producing a large variance. This situation of multicollinearity can arise, for example, when data are collected without an experimental design.


1.1.1.1. Ordinary Least Squares Complexity

The least squares solution is computed using the singular value decomposition of X. If X is a matrix of shape (n_samples, n_features) this method has a cost of \(O(n_{\text{samples}} n_{\text{features}}^2)\), assuming that \(n_{\text{samples}} \geq n_{\text{features}}\).
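To make the role of the SVD concrete, here is a minimal sketch of solving a least-squares problem through the pseudo-inverse obtained from the SVD. This is an illustration with made-up toy data, not the library's actual code path (which goes through scipy.linalg.lstsq):

import numpy as np

# Sketch: solve min ||Xw - y||^2 via the SVD, as lstsq-style solvers do
# internally. Toy data chosen to match the LinearRegression example above.
X = np.array([[0., 0.], [1., 1.], [2., 2.]])
y = np.array([0., 1., 2.])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
s_inv = np.where(s > 1e-10, 1 / s, 0.0)  # invert only significant singular values
w = Vt.T @ (s_inv * (U.T @ y))           # minimum-norm least-squares solution
print(w)                                 # approximately [0.5, 0.5]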

1.1.2. Ridge regression and classification

1.1.2.1. Regression

Ridge regression addresses some of the problems of Ordinary Least Squares by imposing a penalty on the size of the coefficients. The ridge coefficients minimize a penalized residual sum of squares:

\[\min_{w} ||X w - y||_2^2 + \alpha ||w||_2^2\]

The complexity parameter \(\alpha \geq 0\) controls the amount of shrinkage: the larger the value of \(\alpha\), the greater the amount of shrinkage and thus the coefficients become more robust to collinearity.
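As a quick illustration of the shrinkage effect, a minimal sketch refitting the same toy problem used in the example below with increasing regularization:

from sklearn.linear_model import Ridge

# Sketch: the coefficients shrink toward zero as alpha grows.
for alpha in [0.01, 1.0, 100.0]:
    reg = Ridge(alpha=alpha).fit([[0, 0], [0, 0], [1, 1]], [0, .1, 1])
    print(alpha, reg.coef_)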

(Figure: ridge coefficients as a function of the regularization parameter.)

As with other linear models, Ridge will take in its fit method arrays X, y and will store the coefficients \(w\) of the linear model in its coef_ member:

>>> from sklearn import linear_model
>>> reg = linear_model.Ridge(alpha=.5)
>>> reg.fit([[0, 0], [0, 0], [1, 1]], [0, .1, 1])
Ridge(alpha=0.5)
>>> reg.coef_
array([0.34545455, 0.34545455])
>>> reg.intercept_
0.13636...

1.1.2.2. Classification

The Ridge regressor has a classifier variant: RidgeClassifier. This classifier first converts binary targets to {-1, 1} and then treats the problem as a regression task, optimizing the same objective as above. The predicted class corresponds to the sign of the regressor’s prediction. For multiclass classification, the problem is treated as multi-output regression, and the predicted class corresponds to the output with the highest value.
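A minimal usage sketch, with hypothetical toy data:

from sklearn.linear_model import RidgeClassifier

# Hypothetical toy data: binary labels are internally mapped to {-1, 1}
# and fitted with a ridge regression objective.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 1, 1]
clf = RidgeClassifier(alpha=1.0).fit(X, y)
clf.predict([[1, 1]])  # class given by the sign of the ridge prediction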

It might seem questionable to use a (penalized) Least Squares loss to fit a classification model instead of the more traditional logistic or hinge losses. However, in practice all those models can lead to similar cross-validation scores in terms of accuracy or precision/recall, while the penalized least squares loss used by the RidgeClassifier allows for a very different choice of the numerical solvers with distinct computational performance profiles.

The RidgeClassifier can be significantly faster than e.g. LogisticRegression with a high number of classes, because it is able to compute the projection matrix \((X^T X)^{-1} X^T\) only once.

This classifier is sometimes referred to as a Least Squares Support Vector Machine with a linear kernel.


1.1.2.3. Ridge Complexity

This method has the same order of complexity as Ordinary Least Squares.

1.1.2.4. Setting the regularization parameter: generalized Cross-Validation

RidgeCV implements ridge regression with built-in cross-validation of the alpha parameter. The object works in the same way as GridSearchCV except that it defaults to Generalized Cross-Validation (GCV), an efficient form of leave-one-out cross-validation:

>>> import numpy as np
>>> from sklearn import linear_model
>>> reg = linear_model.RidgeCV(alphas=np.logspace(-6, 6, 13))
>>> reg.fit([[0, 0], [0, 0], [1, 1]], [0, .1, 1])
RidgeCV(alphas=array([1.e-06, 1.e-05, 1.e-04, 1.e-03, 1.e-02, 1.e-01, 1.e+00, 1.e+01,
       1.e+02, 1.e+03, 1.e+04, 1.e+05, 1.e+06]))
>>> reg.alpha_
0.01

Specifying the value of the cv attribute will trigger the use of cross-validation with GridSearchCV, for example cv=10 for 10-fold cross-validation, rather than Generalized Cross-Validation.
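For example, a sketch on synthetic data (make_regression here is only for illustration):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV

X, y = make_regression(n_samples=100, n_features=5, noise=1.0, random_state=0)
# Passing cv switches from Generalized Cross-Validation to k-fold CV.
reg = RidgeCV(alphas=np.logspace(-6, 6, 13), cv=10).fit(X, y)
print(reg.alpha_)  # the alpha selected by 10-fold cross-validation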


1.1.3. Lasso

The Lasso is a linear model that estimates sparse coefficients. It is useful in some contexts due to its tendency to prefer solutions with fewer non-zero coefficients, effectively reducing the number of features upon which the given solution is dependent. For this reason Lasso and its variants are fundamental to the field of compressed sensing. Under certain conditions, it can recover the exact set of non-zero coefficients (see Compressive sensing: tomography reconstruction with L1 prior (Lasso)).

Mathematically, it consists of a linear model with an added regularization term. The objective function to minimize is:

\[\min_{w} \frac{1}{2 n_{\text{samples}}} ||X w - y||_2^2 + \alpha ||w||_1\]

The lasso estimate thus solves the minimization of the least-squares penalty with \(\alpha ||w||_1\) added, where \(\alpha\) is a constant and \(||w||_1\) is the \(\ell_1\)-norm of the coefficient vector.

The implementation in the class Lasso uses coordinate descent as the algorithm to fit the coefficients. See Least Angle Regression for another implementation:

>>> from sklearn import linear_model
>>> reg = linear_model.Lasso(alpha=0.1)
>>> reg.fit([[0, 0], [1, 1]], [0, 1])
Lasso(alpha=0.1)
>>> reg.predict([[1, 1]])
array([0.8])

The function lasso_path is useful for lower-level tasks, as it computes the coefficients along the full path of possible values.
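A minimal sketch of retrieving the path, with synthetic data for illustration:

from sklearn.datasets import make_regression
from sklearn.linear_model import lasso_path

X, y = make_regression(n_samples=50, n_features=10, noise=1.0, random_state=0)
# alphas: grid of regularization values; coefs: shape (n_features, n_alphas),
# the coefficients at each point of the path.
alphas, coefs, _ = lasso_path(X, y)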


Note

Feature selection with Lasso

As the Lasso regression yields sparse models, it can thus be used to perform feature selection, as detailed in L1-based feature selection. A sketch of this pattern with SelectFromModel is shown below.
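A minimal sketch, with synthetic data for illustration:

from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=50, n_features=20, n_informative=3,
                       random_state=0)
# Keep only the features to which the Lasso assigned non-zero weight.
selector = SelectFromModel(Lasso(alpha=1.0)).fit(X, y)
X_reduced = selector.transform(X)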

The following two references explain the iterations used in the coordinate descent solver of scikit-learn, as well as the duality gap computation used for convergence control.

References

  • “Regularization Path For Generalized Linear Models by Coordinate Descent”, Friedman, Hastie & Tibshirani, J Stat Softw, 2010 (Paper).

  • “An Interior-Point Method for Large-Scale L1-Regularized Least Squares”, S. J. Kim, K. Koh, M. Lustig, S. Boyd and D. Gorinevsky, IEEE Journal of Selected Topics in Signal Processing, 2007 (Paper).

1.1.3.1. Setting regularization parameter

The alpha parameter controls the degree of sparsity of the estimated coefficients.

1.1.3.1.1. Using cross-validation

scikit-learn exposes objects that set the Lasso alpha parameter by cross-validation: LassoCV and LassoLarsCV. LassoLarsCV is based on the Least Angle Regression algorithm explained below.

For high-dimensional datasets with many collinear features, LassoCV is most often preferable. However, LassoLarsCV has the advantage of exploring more relevant values of the alpha parameter, and if the number of samples is very small compared to the number of features, it is often faster than LassoCV.
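A minimal sketch of each, on synthetic data for illustration:

from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV, LassoLarsCV

X, y = make_regression(n_samples=100, n_features=10, noise=1.0, random_state=0)
reg = LassoCV(cv=5).fit(X, y)            # coordinate-descent-based
reg_lars = LassoLarsCV(cv=5).fit(X, y)   # LARS-based
print(reg.alpha_, reg_lars.alpha_)       # alphas selected by cross-validation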

(Figures: Lasso model selection by cross-validation.)

1.1.3.1.2. Information-criteria based model selection

Alternatively, the estimator LassoLarsIC proposes to use the Akaike information criterion (AIC) and the Bayes Information criterion (BIC). It is a computationally cheaper alternative to find the optimal value of alpha, as the regularization path is computed only once instead of k+1 times when using k-fold cross-validation. However, such criteria need a proper estimation of the degrees of freedom of the solution, are derived for large samples (asymptotic results) and assume the model is correct, i.e. that the data are actually generated by this model. They also tend to break when the problem is badly conditioned (more features than samples).
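A sketch of selecting alpha with each criterion, on synthetic data for illustration:

from sklearn.datasets import make_regression
from sklearn.linear_model import LassoLarsIC

X, y = make_regression(n_samples=100, n_features=10, noise=1.0, random_state=0)
reg_aic = LassoLarsIC(criterion='aic').fit(X, y)
reg_bic = LassoLarsIC(criterion='bic').fit(X, y)
print(reg_aic.alpha_, reg_bic.alpha_)  # the path is computed only once per fit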

(Figure: AIC/BIC model selection for the Lasso.)


1.1.3.1.3. Comparison with the regularization parameter of SVM

The equivalence between alpha and the regularization parameter of SVM, C, is given by alpha = 1 / C or alpha = 1 / (n_samples * C), depending on the estimator and the exact objective function optimized by the model.

1.1.4. Multi-task Lasso

The MultiTaskLasso is a linear model that estimates sparse coefficients for multiple regression problems jointly: y is a 2D array of shape (n_samples, n_tasks). The constraint is that the selected features are the same for all the regression problems, also called tasks.

The following figure compares the location of the non-zero entries in the coefficient matrix W obtained with a simple Lasso or a MultiTaskLasso. The Lasso estimates yield scattered non-zeros while the non-zeros of the MultiTaskLasso are full columns.

(Figures: sparsity pattern of the coefficient matrix W for Lasso vs. MultiTaskLasso; fitting a time-series model, imposing that any active feature be active at all times.)


Mathematically, it consists of a linear model trained with a mixed \(\ell_1 \ell_2\)-norm for regularization. The objective function to minimize is:

\[\min_{W} \frac{1}{2 n_{\text{samples}}} ||X W - Y||_{\text{Fro}}^2 + \alpha ||W||_{21}\]

where \(\text{Fro}\) indicates the Frobenius norm

\[||A||_{\text{Fro}} = \sqrt{\sum_{ij} a_{ij}^2}\]

and \(\ell_1 \ell_2\) reads

\[||A||_{21} = \sum_i \sqrt{\sum_j a_{ij}^2}.\]

The implementation in the class MultiTaskLasso uses coordinate descent as the algorithm to fit the coefficients.
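A minimal sketch on synthetic data with a shared sparsity pattern (the data construction is made up for illustration):

import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.RandomState(0)
X = rng.randn(20, 5)
W = np.zeros((5, 3))
W[:2, :] = rng.randn(2, 3)   # only the first 2 features are active, in all tasks
Y = X @ W                    # Y has shape (n_samples, n_tasks)

reg = MultiTaskLasso(alpha=0.1).fit(X, Y)
print(reg.coef_.shape)       # (n_tasks, n_features); inactive features are
                             # zeroed out jointly across all tasks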

1.1.5. Elastic-Net

ElasticNet is a linear regression model trained with both \(\ell_1\)- and \(\ell_2\)-norm regularization of the coefficients. This combination allows for learning a sparse model where few of the weights are non-zero like Lasso, while still maintaining the regularization properties of Ridge. We control the convex combination of \(\ell_1\) and \(\ell_2\) using the l1_ratio parameter.

Elastic-net is useful when there are multiple features which are correlated with one another. Lasso is likely to pick one of these at random, while elastic-net is likely to pick both.

A practical advantage of trading-off between Lasso and Ridge is that itallows Elastic-Net to inherit some of Ridge’s stability under rotation.

The objective function to minimize is in this case

\[\min_{w} \frac{1}{2 n_{\text{samples}}} ||X w - y||_2^2 + \alpha \rho ||w||_1 + \frac{\alpha (1 - \rho)}{2} ||w||_2^2\]

(Figure: Lasso and Elastic-Net coefficient paths computed by coordinate descent.)

The class ElasticNetCV can be used to set the parameters alpha (\(\alpha\)) and l1_ratio (\(\rho\)) by cross-validation.
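A minimal sketch, on synthetic data for illustration:

from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNetCV

X, y = make_regression(n_samples=100, n_features=10, noise=1.0, random_state=0)
# Cross-validate over both the overall strength alpha and the L1/L2 mix.
reg = ElasticNetCV(l1_ratio=[.1, .5, .9, 1], cv=5).fit(X, y)
print(reg.alpha_, reg.l1_ratio_)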


The following two references explain the iterations used in the coordinate descent solver of scikit-learn, as well as the duality gap computation used for convergence control.

References

  • “Regularization Path For Generalized Linear Models by Coordinate Descent”, Friedman, Hastie & Tibshirani, J Stat Softw, 2010 (Paper).

  • “An Interior-Point Method for Large-Scale L1-Regularized Least Squares”, S. J. Kim, K. Koh, M. Lustig, S. Boyd and D. Gorinevsky, IEEE Journal of Selected Topics in Signal Processing, 2007 (Paper).

1.1.6. Multi-task Elastic-Net

The MultiTaskElasticNet is an elastic-net model that estimates sparse coefficients for multiple regression problems jointly: Y is a 2D array of shape (n_samples, n_tasks). The constraint is that the selected features are the same for all the regression problems, also called tasks.

Mathematically, it consists of a linear model trained with a mixed \(\ell_1 \ell_2\)-norm and \(\ell_2\)-norm for regularization. The objective function to minimize is:

\[\min_{W} \frac{1}{2 n_{\text{samples}}} ||X W - Y||_{\text{Fro}}^2 + \alpha \rho ||W||_{21} + \frac{\alpha (1 - \rho)}{2} ||W||_{\text{Fro}}^2\]

The implementation in the class MultiTaskElasticNet uses coordinate descent as the algorithm to fit the coefficients.

The class MultiTaskElasticNetCV can be used to set the parameters alpha (\(\alpha\)) and l1_ratio (\(\rho\)) by cross-validation.

1.1.7. Least Angle Regression

Least-angle regression (LARS) is a regression algorithm for high-dimensional data, developed by Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani. LARS is similar to forward stepwise regression. At each step, it finds the feature most correlated with the target. When there are multiple features having equal correlation, instead of continuing along the same feature, it proceeds in a direction equiangular between the features.

The advantages of LARS are:

  • It is numerically efficient in contexts where the number of features is significantly greater than the number of samples.

  • It is computationally just as fast as forward selection and has the same order of complexity as ordinary least squares.

  • It produces a full piecewise linear solution path, which is useful in cross-validation or similar attempts to tune the model.

  • If two features are almost equally correlated with the target, then their coefficients should increase at approximately the same rate. The algorithm thus behaves as intuition would expect, and is also more stable.

  • It is easily modified to produce solutions for other estimators, like the Lasso.

The disadvantages of the LARS method include:

  • Because LARS is based upon an iterative refitting of the residuals, it would appear to be especially sensitive to the effects of noise. This problem is discussed in detail by Weisberg in the discussion section of the Efron et al. (2004) Annals of Statistics article.

The LARS model can be used via the estimator Lars, or its low-level implementations lars_path or lars_path_gram.
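A minimal usage sketch with toy data:

from sklearn import linear_model

# Stop the path as soon as one coefficient is active.
reg = linear_model.Lars(n_nonzero_coefs=1)
reg.fit([[-1, 1], [0, 0], [1, 1]], [-1.1111, 0, -1.1111])
print(reg.coef_)  # only one non-zero coefficient remains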

1.1.8. LARS Lasso

LassoLars is a lasso model implemented using the LARS algorithm, and unlike the implementation based on coordinate descent, this yields the exact solution, which is piecewise linear as a function of the norm of its coefficients.

(Figure: Lasso path computed with the LARS algorithm.)

>>> from sklearn import linear_model
>>> reg = linear_model.LassoLars(alpha=.1)
>>> reg.fit([[0, 0], [1, 1]], [0, 1])
LassoLars(alpha=0.1)
>>> reg.coef_
array([0.717157..., 0.        ])


The Lars algorithm provides the full path of the coefficients along the regularization parameter almost for free, thus a common operation is to retrieve the path with one of the functions lars_path or lars_path_gram.
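A sketch of retrieving the full path, on synthetic data for illustration:

from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path

X, y = make_regression(n_samples=50, n_features=10, noise=1.0, random_state=0)
# alphas: the regularization value at each kink of the piecewise-linear path;
# coefs: shape (n_features, n_alphas), the coefficients at those kinks.
alphas, active, coefs = lars_path(X, y, method='lasso')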

1.1.8.1. Mathematical formulation

The algorithm is similar to forward stepwise regression, but instead of including features at each step, the estimated coefficients are increased in a direction equiangular to each one’s correlations with the residual.

Instead of giving a vector result, the LARS solution consists of a curve denoting the solution for each value of the \(\ell_1\) norm of the parameter vector. The full coefficients path is stored in the array coef_path_, which has shape (n_features, max_features + 1). The first column is always zero.


1.1.9. Orthogonal Matching Pursuit (OMP)

OrthogonalMatchingPursuit and orthogonal_mp implement the OMP algorithm for approximating the fit of a linear model with constraints imposed on the number of non-zero coefficients (i.e. the \(\ell_0\) pseudo-norm).

Being a forward feature selection method like Least Angle Regression, orthogonal matching pursuit can approximate the optimum solution vector with a fixed number of non-zero elements:

\[\underset{w}{\operatorname{arg\,min\,}} ||y - X w||_2^2 \text{ subject to } ||w||_0 \leq n_{\text{nonzero\_coefs}}\]

Alternatively, orthogonal matching pursuit can target a specific error instead of a specific number of non-zero coefficients. This can be expressed as:

\[\underset{w}{\operatorname{arg\,min\,}} ||w||_0 \text{ subject to } ||y - X w||_2^2 \leq \text{tol}\]

OMP is based on a greedy algorithm that includes at each step the atom most highly correlated with the current residual. It is similar to the simpler matching pursuit (MP) method, but better in that at each iteration, the residual is recomputed using an orthogonal projection on the space of the previously chosen dictionary elements.
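A minimal sketch constraining the number of non-zero coefficients, on synthetic data for illustration:

from sklearn.datasets import make_regression
from sklearn.linear_model import OrthogonalMatchingPursuit

X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       random_state=0)
# Constrain the solution to at most 5 non-zero coefficients; a residual
# tolerance can be targeted instead via the tol parameter.
reg = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(X, y)
print((reg.coef_ != 0).sum())  # at most 5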



1.1.10. Bayesian Regression

Bayesian regression techniques can be used to include regularization parameters in the estimation procedure: the regularization parameter is not set in a hard sense but tuned to the data at hand.

This can be done by introducing uninformative priors over the hyperparameters of the model. The \(\ell_2\) regularization used in Ridge regression and classification is equivalent to finding a maximum a posteriori estimation under a Gaussian prior over the coefficients \(w\) with precision \(\lambda^{-1}\). Instead of setting lambda manually, it is possible to treat it as a random variable to be estimated from the data.

To obtain a fully probabilistic model, the output \(y\) is assumed to be Gaussian distributed around \(X w\):

\[p(y|X, w, \alpha) = \mathcal{N}(y|X w, \alpha)\]

where \(\alpha\) is again treated as a random variable that is to be estimated from the data.

The advantages of Bayesian Regression are:

  • It adapts to the data at hand.

  • It can be used to include regularization parameters in the estimation procedure.

The disadvantages of Bayesian regression include:

  • Inference of the model can be time consuming.

References

  • A good introduction to Bayesian methods is given in C. Bishop: Pattern Recognition and Machine Learning

  • The original algorithm is detailed in the book Bayesian Learning for Neural Networks by Radford M. Neal

1.1.10.1. Bayesian Ridge Regression

BayesianRidge estimates a probabilistic model of the regression problem as described above. The prior for the coefficient \(w\) is given by a spherical Gaussian:

\[p(w|\lambda) = \mathcal{N}(w|0, \lambda^{-1} \mathbf{I}_p)\]

The priors over \(\alpha\) and \(\lambda\) are chosen to be gamma distributions, the conjugate prior for the precision of the Gaussian. The resulting model is called Bayesian Ridge Regression, and is similar to the classical Ridge.

The parameters \(w\), \(\alpha\) and \(\lambda\) are estimated jointly during the fit of the model, the regularization parameters \(\alpha\) and \(\lambda\) being estimated by maximizing the log marginal likelihood. The scikit-learn implementation is based on the algorithm described in Appendix A of (Tipping, 2001), where the update of the parameters \(\alpha\) and \(\lambda\) is done as suggested in (MacKay, 1992). The initial value of the maximization procedure can be set with the hyperparameters alpha_init and lambda_init.

There are four more hyperparameters, \(\alpha_1\), \(\alpha_2\), \(\lambda_1\) and \(\lambda_2\), of the gamma prior distributions over \(\alpha\) and \(\lambda\). These are usually chosen to be non-informative. By default \(\alpha_1 = \alpha_2 = \lambda_1 = \lambda_2 = 10^{-6}\).

(Figure: Bayesian Ridge Regression on a synthetic dataset.)

Bayesian Ridge Regression is used for regression:

>>> from sklearn import linear_model
>>> X = [[0., 0.], [1., 1.], [2., 2.], [3., 3.]]
>>> Y = [0., 1., 2., 3.]
>>> reg = linear_model.BayesianRidge()
>>> reg.fit(X, Y)
BayesianRidge()

After being fitted, the model can then be used to predict new values:

>>> reg.predict([[1, 0.]])
array([0.50000013])

The coefficients \(w\) of the model can be accessed:

>>> reg.coef_
array([0.49999993, 0.49999993])

Due to the Bayesian framework, the weights found are slightly different to the ones found by Ordinary Least Squares. However, Bayesian Ridge Regression is more robust to ill-posed problems.



1.1.10.2. Automatic Relevance Determination - ARD

ARDRegression is very similar to Bayesian Ridge Regression, but can lead to sparser coefficients \(w\) [1] [2]. ARDRegression poses a different prior over \(w\), by dropping the assumption of the Gaussian being spherical.

Instead, the distribution over \(w\) is assumed to be an axis-parallel, elliptical Gaussian distribution.

This means each coefficient \(w_i\) is drawn from a Gaussian distribution, centered on zero and with a precision \(\lambda_i\):

\[p(w|\lambda) = \mathcal{N}(w|0, A^{-1})\]

with \(\text{diag}(A) = \lambda = \{\lambda_1, \dots, \lambda_p\}\).

In contrast to Bayesian Ridge Regression, each coordinate of \(w_i\) has its own standard deviation \(\lambda_i\). The prior over all \(\lambda_i\) is chosen to be the same gamma distribution given by hyperparameters \(\lambda_1\) and \(\lambda_2\).
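A minimal sketch, reusing the toy data from the Bayesian Ridge example above:

from sklearn.linear_model import ARDRegression

X = [[0., 0.], [1., 1.], [2., 2.], [3., 3.]]
y = [0., 1., 2., 3.]
reg = ARDRegression().fit(X, y)
print(reg.coef_)  # per-coefficient precisions can drive some weights to (near) zero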

(Figure: ARD regression on a synthetic dataset.)

ARD is also known in the literature as Sparse Bayesian Learning and Relevance Vector Machine [3] [4].



1.1.11. Logistic regression

Logistic regression, despite its name, is a linear model for classification rather than regression. Logistic regression is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function.

Logistic regression is implemented in LogisticRegression. This implementation can fit binary, One-vs-Rest, or multinomial logistic regression with optional \(\ell_1\), \(\ell_2\) or Elastic-Net regularization.

Note

Regularization is applied by default, which is common in machine learning but not in statistics. Another advantage of regularization is that it improves numerical stability. No regularization amounts to setting C to a very high value.

As an optimization problem, binary class \(\ell_2\) penalized logistic regression minimizes the following cost function:

\[\min_{w, c} \frac{1}{2} w^T w + C \sum_{i=1}^n \log(\exp(-y_i (X_i^T w + c)) + 1)\]

Similarly, \(\ell_1\) regularized logistic regression solves the following optimization problem:

\[\min_{w, c} \|w\|_1 + C \sum_{i=1}^n \log(\exp(-y_i (X_i^T w + c)) + 1)\]

Elastic-Net regularization is a combination of \(\ell_1\) and \(\ell_2\), and minimizes the following cost function:

\[\min_{w, c} \frac{1 - \rho}{2} w^T w + \rho \|w\|_1 + C \sum_{i=1}^n \log(\exp(-y_i (X_i^T w + c)) + 1)\]

where \(\rho\) controls the strength of \(\ell_1\) regularization vs. \(\ell_2\) regularization (it corresponds to the l1_ratio parameter).

Note that, in this notation, it’s assumed that the target \(y_i\) takes values in the set \(\{-1, 1\}\) at trial \(i\). We can also see that Elastic-Net is equivalent to \(\ell_1\) when \(\rho = 1\) and equivalent to \(\ell_2\) when \(\rho = 0\).

The solvers implemented in the class LogisticRegression are “liblinear”, “newton-cg”, “lbfgs”, “sag” and “saga”:

The solver “liblinear” uses a coordinate descent (CD) algorithm, and relies on the excellent C++ LIBLINEAR library, which is shipped with scikit-learn. However, the CD algorithm implemented in liblinear cannot learn a true multinomial (multiclass) model; instead, the optimization problem is decomposed in a “one-vs-rest” fashion so separate binary classifiers are trained for all classes. This happens under the hood, so LogisticRegression instances using this solver behave as multiclass classifiers. For \(\ell_1\) regularization, sklearn.svm.l1_min_c can be used to calculate the lower bound for C in order to get a non-“null” (all feature weights to zero) model.

The “lbfgs”, “sag” and “newton-cg” solvers only support \(\ell_2\) regularization or no regularization, and are found to converge faster for some high-dimensional data. Setting multi_class to “multinomial” with these solvers learns a true multinomial logistic regression model [5], which means that its probability estimates should be better calibrated than the default “one-vs-rest” setting.

The “sag” solver uses Stochastic Average Gradient descent [6]. It is faster than other solvers for large datasets, when both the number of samples and the number of features are large.

The “saga” solver [7] is a variant of “sag” that also supports the non-smooth penalty="l1". This is therefore the solver of choice for sparse multinomial logistic regression. It is also the only solver that supports penalty="elasticnet".
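A minimal sketch of Elastic-Net logistic regression with “saga”, on synthetic data for illustration:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
# "saga" is the only solver supporting penalty="elasticnet";
# l1_ratio plays the role of rho in the objective above.
clf = LogisticRegression(penalty='elasticnet', solver='saga',
                         l1_ratio=0.5, C=1.0, max_iter=10000)
clf.fit(X, y)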

The “lbfgs” solver is an optimization algorithm that approximates the Broyden–Fletcher–Goldfarb–Shanno algorithm [8], which belongs to the quasi-Newton methods. It is recommended for small datasets, but for larger datasets its performance suffers [9].

The following table summarizes the penalties supported by each solver:

Penalties                      ‘liblinear’  ‘lbfgs’  ‘newton-cg’  ‘sag’  ‘saga’
Multinomial + L2 penalty       no           yes      yes          yes    yes
OVR + L2 penalty               yes          yes      yes          yes    yes
Multinomial + L1 penalty       no           no       no           no     yes
OVR + L1 penalty               yes          no       no           no     yes
Elastic-Net                    no           no       no           no     yes
No penalty (‘none’)            no           yes      yes          yes    yes

Behaviors                      ‘liblinear’  ‘lbfgs’  ‘newton-cg’  ‘sag’  ‘saga’
Penalize the intercept (bad)   yes          no       no           no     no
Faster for large datasets      no           no       no           yes    yes
Robust to unscaled datasets    yes          yes      yes          no     no

The “lbfgs” solver is used by default for its robustness. For large datasets the “saga” solver is usually faster. For large datasets, you may also consider using SGDClassifier with ‘log’ loss, which might be even faster but requires more tuning.


Differences from liblinear:

There might be a difference in the scores obtained between LogisticRegression with solver=liblinear or LinearSVC and the external liblinear library directly, when fit_intercept=False and the fit coef_ (or) the data to be predicted are zeroes. This is because for the sample(s) with decision_function zero, LogisticRegression and LinearSVC predict the negative class, while liblinear predicts the positive class. Note that a model with fit_intercept=False and having many samples with decision_function zero is likely to be an underfit, bad model, and you are advised to set fit_intercept=True and increase the intercept_scaling.

Note

Feature selection with sparse logistic regression

A logistic regression with \(\ell_1\) penalty yields sparse models, and can thus be used to perform feature selection, as detailed in L1-based feature selection.

Note

P-value estimation

It is possible to obtain the p-values and confidence intervals for coefficients in the case of regression without penalization. The statsmodels package (https://pypi.org/project/statsmodels/) natively supports this. Within sklearn, one could use bootstrapping instead as well.

LogisticRegressionCV implements Logistic Regression with built-in cross-validation support, to find the optimal C and l1_ratio parameters according to the scoring attribute. The “newton-cg”, “sag”, “saga” and “lbfgs” solvers are found to be faster for high-dimensional dense data, due to warm-starting (see Glossary).
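A minimal usage sketch, on synthetic data for illustration:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
# Cs=10 searches 10 values of C on a log scale, here with 5-fold CV.
clf = LogisticRegressionCV(Cs=10, cv=5, solver='lbfgs').fit(X, y)
print(clf.C_)  # the selected regularization strength(s)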


1.1.12. Stochastic Gradient Descent - SGD

Stochastic gradient descent is a simple yet very efficient approach to fit linear models. It is particularly useful when the number of samples (and the number of features) is very large. The partial_fit method allows online/out-of-core learning.

The classes SGDClassifier and SGDRegressor provide functionality to fit linear models for classification and regression using different (convex) loss functions and different penalties. E.g., with loss="log", SGDClassifier fits a logistic regression model, while with loss="hinge" it fits a linear support vector machine (SVM).
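A minimal sketch with toy data; with loss="log" this is a logistic regression fit by SGD:

from sklearn.linear_model import SGDClassifier

X = [[0., 0.], [1., 1.]]
y = [0, 1]
clf = SGDClassifier(loss="log", max_iter=1000, tol=1e-3).fit(X, y)
# Online learning: keep updating the same model on new mini-batches.
clf.partial_fit(X, y)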


1.1.13. Perceptron

The Perceptron is another simple classification algorithm suitable for large-scale learning. By default:

  • It does not require a learning rate.

  • It is not regularized (penalized).

  • It updates its model only on mistakes.

The last characteristic implies that the Perceptron is slightly faster to train than SGD with the hinge loss and that the resulting models are sparser.

1.1.14. Passive Aggressive Algorithms

The passive-aggressive algorithms are a family of algorithms for large-scale learning. They are similar to the Perceptron in that they do not require a learning rate. However, contrary to the Perceptron, they include a regularization parameter C.

For classification, PassiveAggressiveClassifier can be used with loss='hinge' (PA-I) or loss='squared_hinge' (PA-II). For regression, PassiveAggressiveRegressor can be used with loss='epsilon_insensitive' (PA-I) or loss='squared_epsilon_insensitive' (PA-II).
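A minimal sketch of the classifier variant, with hypothetical toy data:

from sklearn.linear_model import PassiveAggressiveClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 1, 1]
# loss='hinge' gives PA-I; loss='squared_hinge' gives PA-II.
clf = PassiveAggressiveClassifier(C=1.0, loss='hinge', max_iter=1000,
                                  random_state=0).fit(X, y)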


1.1.15. Robustness regression: outliers and modeling errors

Robust regression aims to fit a regression model in the presence of corrupt data: either outliers, or errors in the model.

(Figure: comparison of robust estimators on corrupted data.)

1.1.15.1. Different scenario and useful concepts

There are different things to keep in mind when dealing with data corrupted by outliers:

  • Outliers in X or in y?

(Figures: outliers in the y direction vs. outliers in the X direction.)

  • Fraction of outliers versus amplitude of error

The number of outlying points matters, but also how much they are outliers.

(Figures: small outliers in y vs. large outliers in y.)

An important notion of robust fitting is that of breakdown point: the fraction of data that can be outlying for the fit to start missing the inlying data.

Note that in general, robust fitting in a high-dimensional setting (large n_features) is very hard. The robust models here will probably not work in these settings.

Trade-offs: which estimator?

Scikit-learn provides 3 robust regression estimators: RANSAC, Theil Sen and HuberRegressor.

  • HuberRegressor should be faster than RANSAC and Theil Sen unless the number of samples is very large, i.e. n_samples >> n_features. This is because RANSAC and Theil Sen fit on smaller subsets of the data. However, both Theil Sen and RANSAC are unlikely to be as robust as HuberRegressor for the default parameters.

  • RANSAC is faster than Theil Sen and scales much better with the number of samples.

  • RANSAC will deal better with large outliers in the y direction (most common situation).

  • Theil Sen will cope better with medium-size outliers in the X direction, but this property will disappear in high-dimensional settings.

When in doubt, use RANSAC.

1.1.15.2. RANSAC: RANdom SAmple Consensus

RANSAC (RANdom SAmple Consensus) fits a model from random subsets of inliers from the complete data set.

RANSAC is a non-deterministic algorithm producing only a reasonable result with a certain probability, which is dependent on the number of iterations (see the max_trials parameter). It is typically used for linear and non-linear regression problems and is especially popular in the field of photogrammetric computer vision.

The algorithm splits the complete input sample data into a set of inliers, which may be subject to noise, and outliers, which are e.g. caused by erroneous measurements or invalid hypotheses about the data. The resulting model is then estimated only from the determined inliers.

(Figure: RANSAC fit on data with outliers.)

1.1.15.2.1. Details of the algorithm

Each iteration performs the following steps:

  • Select min_samples random samples from the original data and check whether the set of data is valid (see is_data_valid).

  • Fit a model to the random subset (base_estimator.fit) and check whether the estimated model is valid (see is_model_valid).

  • Classify all data as inliers or outliers by calculating the residuals to the estimated model (base_estimator.predict(X) - y) - all data samples with absolute residuals smaller than the residual_threshold are considered as inliers.

  • Save the fitted model as the best model if the number of inlier samples is maximal. In case the current estimated model has the same number of inliers, it is only considered as the best model if it has a better score.

These steps are performed either a maximum number of times (max_trials) or until one of the special stop criteria is met (see stop_n_inliers and stop_score). The final model is estimated using all inlier samples (consensus set) of the previously determined best model.

The is_data_valid and is_model_valid functions allow identifying and rejecting degenerate combinations of random sub-samples. If the estimated model is not needed for identifying degenerate cases, is_data_valid should be used, as it is called prior to fitting the model, leading to better computational performance.
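A minimal sketch on synthetic data with injected outliers (the data construction is made up for illustration):

import numpy as np
from sklearn.linear_model import RANSACRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2 * X.ravel() + 1
y[:10] += 50                       # corrupt a few targets with large outliers

reg = RANSACRegressor(random_state=0).fit(X, y)
print(reg.estimator_.coef_)        # fitted on the consensus set only
print(reg.inlier_mask_)            # boolean mask of the detected inliers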



1.1.15.3. Theil-Sen estimator: generalized-median-based estimator

The TheilSenRegressor estimator uses a generalization of the median in multiple dimensions. It is thus robust to multivariate outliers. Note however that the robustness of the estimator decreases quickly with the dimensionality of the problem. It loses its robustness properties and becomes no better than ordinary least squares in high dimension.



1.1.15.3.1. Theoretical considerations

TheilSenRegressor is comparable to Ordinary Least Squares (OLS) in terms of asymptotic efficiency and as an unbiased estimator. In contrast to OLS, Theil-Sen is a non-parametric method, which means it makes no assumption about the underlying distribution of the data. Since Theil-Sen is a median-based estimator, it is more robust against corrupted data, aka outliers. In a univariate setting, Theil-Sen has a breakdown point of about 29.3% in case of a simple linear regression, which means that it can tolerate arbitrarily corrupted data of up to 29.3%.

(Figure: Theil-Sen regression compared to OLS.)

The implementation of TheilSenRegressor in scikit-learn follows a generalization to a multivariate linear regression model [10] using the spatial median, which is a generalization of the median to multiple dimensions [11].

In terms of time and space complexity, Theil-Sen scales according to

\[O\left(n_{\text{samples}}^{n_{\text{features}}}\right)\]

which makes it infeasible to be applied exhaustively to problems with a large number of samples and features. Therefore, the magnitude of a subpopulation can be chosen to limit the time and space complexity by considering only a random subset of all possible combinations.
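A minimal sketch, on synthetic data for illustration; max_subpopulation caps the number of candidate subsets considered:

import numpy as np
from sklearn.linear_model import TheilSenRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(100, 2))
y = X @ np.array([1.0, -2.0]) + rng.normal(size=100)

# max_subpopulation limits time/space by sampling candidate subsets.
reg = TheilSenRegressor(max_subpopulation=10000, random_state=0).fit(X, y)
print(reg.coef_)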



1.1.15.4. Huber Regression

The HuberRegressor is different from Ridge because it applies a linear loss to samples that are classified as outliers. A sample is classified as an inlier if the absolute error of that sample is smaller than a certain threshold. It differs from TheilSenRegressor and RANSACRegressor because it does not ignore the effect of the outliers but gives a lesser weight to them.

(Figure: HuberRegressor vs Ridge on data with outliers.)

The loss function that HuberRegressor minimizes is given by

\[\min_{w, \sigma} \sum_{i=1}^n \left(\sigma + H_{\epsilon}\left(\frac{X_i w - y_i}{\sigma}\right) \sigma\right) + \alpha ||w||_2^2\]

where

\[H_{\epsilon}(z) = \begin{cases} z^2, & \text{if } |z| < \epsilon, \\ 2\epsilon|z| - \epsilon^2, & \text{otherwise} \end{cases}\]

It is advised to set the parameter epsilon to 1.35 to achieve 95% statistical efficiency.
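A minimal sketch on data with a handful of large outliers (the data construction is made up for illustration):

import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 2 * X.ravel() + rng.normal(size=50)
y[:5] += 30                        # a few large outliers

reg = HuberRegressor(epsilon=1.35).fit(X, y)
print(reg.coef_)       # close to the true slope despite the outliers
print(reg.outliers_)   # samples that fell in the linear-loss regime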

1.1.15.5. Notes

The HuberRegressor differs from using SGDRegressor with loss set to huber in the following ways.

  • HuberRegressor is scaling invariant. Once epsilon is set, scaling X and y down or up by different values would produce the same robustness to outliers as before, as compared to SGDRegressor, where epsilon has to be set again when X and y are scaled.

  • HuberRegressor should be more efficient to use on data with a small number of samples, while SGDRegressor needs a number of passes on the training data to produce the same robustness.


References:

  • Peter J. Huber, Elvezio M. Ronchetti: Robust Statistics, Concomitant scale estimates, p. 172

Note that this estimator is different from the R implementation of Robust Regression (http://www.ats.ucla.edu/stat/r/dae/rreg.htm) because the R implementation does a weighted least squares implementation with weights given to each sample on the basis of how much the residual is greater than a certain threshold.

1.1.16. Polynomial regression: extending linear models with basis functions

One common pattern within machine learning is to use linear models trained on nonlinear functions of the data. This approach maintains the generally fast performance of linear methods, while allowing them to fit a much wider range of data.

For example, a simple linear regression can be extended by constructing polynomial features from the coefficients. In the standard linear regression case, you might have a model that looks like this for two-dimensional data:

\[\hat{y}(w, x) = w_0 + w_1 x_1 + w_2 x_2\]

If we want to fit a paraboloid to the data instead of a plane, we can combine the features in second-order polynomials, so that the model looks like this:

\[\hat{y}(w, x) = w_0 + w_1 x_1 + w_2 x_2 + w_3 x_1 x_2 + w_4 x_1^2 + w_5 x_2^2\]

The (sometimes surprising) observation is that this is still a linear model: to see this, imagine creating a new set of features

\[z = [x_1, x_2, x_1 x_2, x_1^2, x_2^2]\]

With this re-labeling of the data, our problem can be written

\[\hat{y}(w, z) = w_0 + w_1 z_1 + w_2 z_2 + w_3 z_3 + w_4 z_4 + w_5 z_5\]

We see that the resulting polynomial regression is in the same class of linear models we considered above (i.e. the model is linear in \(w\)) and can be solved by the same techniques. By considering linear fits within a higher-dimensional space built with these basis functions, the model has the flexibility to fit a much broader range of data.

Here is an example of applying this idea to one-dimensional data, using polynomial features of varying degrees:

(Figure: polynomial interpolation with features of varying degree.)

This figure is created using the PolynomialFeatures transformer, which transforms an input data matrix into a new data matrix of a given degree. It can be used as follows:

>>> from sklearn.preprocessing import PolynomialFeatures
>>> import numpy as np
>>> X = np.arange(6).reshape(3, 2)
>>> X
array([[0, 1],
       [2, 3],
       [4, 5]])
>>> poly = PolynomialFeatures(degree=2)
>>> poly.fit_transform(X)
array([[ 1.,  0.,  1.,  0.,  0.,  1.],
       [ 1.,  2.,  3.,  4.,  6.,  9.],
       [ 1.,  4.,  5., 16., 20., 25.]])

The features of X have been transformed from \([x_1, x_2]\) to \([1, x_1, x_2, x_1^2, x_1 x_2, x_2^2]\), and can now be used within any linear model.

This sort of preprocessing can be streamlined with the Pipeline tools. A single object representing a simple polynomial regression can be created and used as follows:

>>> from sklearn.preprocessing import PolynomialFeatures
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.pipeline import Pipeline
>>> import numpy as np
>>> model = Pipeline([('poly', PolynomialFeatures(degree=3)),
...                   ('linear', LinearRegression(fit_intercept=False))])
>>> # fit to an order-3 polynomial data
>>> x = np.arange(5)
>>> y = 3 - 2 * x + x ** 2 - x ** 3
>>> model = model.fit(x[:, np.newaxis], y)
>>> model.named_steps['linear'].coef_
array([ 3., -2.,  1., -1.])

The linear model trained on polynomial features is able to exactly recover the input polynomial coefficients.

In some cases it’s not necessary to include higher powers of any single feature, but only the so-called interaction features that multiply together at most \(d\) distinct features. These can be gotten from PolynomialFeatures with the setting interaction_only=True.

For example, when dealing with boolean features, \(x_i^n = x_i\) for all \(n\) and is therefore useless; but \(x_i x_j\) represents the conjunction of two booleans. This way, we can solve the XOR problem with a linear classifier:

>>> from sklearn.linear_model import Perceptron
>>> from sklearn.preprocessing import PolynomialFeatures
>>> import numpy as np
>>> X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
>>> y = X[:, 0] ^ X[:, 1]
>>> y
array([0, 1, 1, 0])
>>> X = PolynomialFeatures(interaction_only=True).fit_transform(X).astype(int)
>>> X
array([[1, 0, 0, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 0],
       [1, 1, 1, 1]])
>>> clf = Perceptron(fit_intercept=False, max_iter=10, tol=None,
...                  shuffle=False).fit(X, y)

And the classifier “predictions” are perfect:

>>> clf.predict(X)
array([0, 1, 1, 0])
>>> clf.score(X, y)
1.0