6.3. Preprocessing data

The sklearn.preprocessing package provides several common utility functions and transformer classes to change raw feature vectors into a representation that is more suitable for the downstream estimators.

In general, learning algorithms benefit from standardization of the data set. If some outliers are present in the set, robust scalers or transformers are more appropriate. The behaviors of the different scalers, transformers, and normalizers on a dataset containing marginal outliers are highlighted in Compare the effect of different scalers on data with outliers.

6.3.1. Standardization, or mean removal and variance scaling

Standardization of datasets is a common requirement for many machine learning estimators implemented in scikit-learn; they might behave badly if the individual features do not more or less look like standard normally distributed data: Gaussian with zero mean and unit variance.

In practice we often ignore the shape of the distribution and just transform the data to center it by removing the mean value of each feature, then scale it by dividing non-constant features by their standard deviation.

For instance, many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the l1 and l2 regularizers of linear models) assume that all features are centered around zero and have variance in the same order. If a feature has a variance that is orders of magnitude larger than others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expected.

The function scale provides a quick and easy way to perform this operation on a single array-like dataset:

  >>> from sklearn import preprocessing
  >>> import numpy as np
  >>> X_train = np.array([[ 1., -1.,  2.],
  ...                     [ 2.,  0.,  0.],
  ...                     [ 0.,  1., -1.]])
  >>> X_scaled = preprocessing.scale(X_train)

  >>> X_scaled
  array([[ 0.  ..., -1.22...,  1.33...],
         [ 1.22...,  0.  ..., -0.26...],
         [-1.22...,  1.22..., -1.06...]])

Scaled data has zero mean and unit variance:

  >>> X_scaled.mean(axis=0)
  array([0., 0., 0.])

  >>> X_scaled.std(axis=0)
  array([1., 1., 1.])

The preprocessing module further provides a utility class StandardScaler that implements the Transformer API to compute the mean and standard deviation on a training set so as to be able to later reapply the same transformation on the testing set. This class is hence suitable for use in the early steps of a sklearn.pipeline.Pipeline:

  >>> scaler = preprocessing.StandardScaler().fit(X_train)
  >>> scaler
  StandardScaler()

  >>> scaler.mean_
  array([1. ..., 0. ..., 0.33...])

  >>> scaler.scale_
  array([0.81..., 0.81..., 1.24...])

  >>> scaler.transform(X_train)
  array([[ 0.  ..., -1.22...,  1.33...],
         [ 1.22...,  0.  ..., -0.26...],
         [-1.22...,  1.22..., -1.06...]])

The scaler instance can then be used on new data to transform it the same way it did on the training set:

  >>> X_test = [[-1., 1., 0.]]
  >>> scaler.transform(X_test)
  array([[-2.44...,  1.22..., -0.26...]])

It is possible to disable either centering or scaling by passing with_mean=False or with_std=False to the constructor of StandardScaler.
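
As a minimal sketch, reusing the X_train array and the imports from the examples above, the two options can be used independently:

  # Scale to unit variance only; the column means are left unchanged
  X_scaled_only = preprocessing.scale(X_train, with_mean=False)

  # Center to zero mean only; the column variances are left unchanged
  centering_scaler = preprocessing.StandardScaler(with_std=False).fit(X_train)
  X_centered_only = centering_scaler.transform(X_train)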

6.3.1.1. Scaling features to a range

An alternative standardization is scaling features to lie between a given minimum and maximum value, often between zero and one, or so that the maximum absolute value of each feature is scaled to unit size. This can be achieved using MinMaxScaler or MaxAbsScaler, respectively.

The motivation for using this scaling includes robustness to very small standard deviations of features and preserving zero entries in sparse data.

Here is an example to scale a toy data matrix to the [0, 1] range:

  >>> X_train = np.array([[ 1., -1.,  2.],
  ...                     [ 2.,  0.,  0.],
  ...                     [ 0.,  1., -1.]])
  ...
  >>> min_max_scaler = preprocessing.MinMaxScaler()
  >>> X_train_minmax = min_max_scaler.fit_transform(X_train)
  >>> X_train_minmax
  array([[0.5       , 0.        , 1.        ],
         [1.        , 0.5       , 0.33333333],
         [0.        , 1.        , 0.        ]])

The same instance of the transformer can then be applied to some new test data unseen during the fit call: the same scaling and shifting operations will be applied to be consistent with the transformation performed on the train data:

  >>> X_test = np.array([[-3., -1.,  4.]])
  >>> X_test_minmax = min_max_scaler.transform(X_test)
  >>> X_test_minmax
  array([[-1.5       ,  0.        ,  1.66666667]])

It is possible to introspect the scaler attributes to find out about the exact nature of the transformation learned on the training data:

  >>> min_max_scaler.scale_
  array([0.5       , 0.5       , 0.33...])

  >>> min_max_scaler.min_
  array([0.        , 0.5       , 0.33...])

If MinMaxScaler is given an explicit feature_range=(min, max) the full formula is:

  X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

  X_scaled = X_std * (max - min) + min
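
As a small illustration (not part of the original example), the toy data above could be mapped to the [-1, 1] range instead of the default [0, 1] by passing an explicit feature_range:

  # Same toy data as above, rescaled to an explicit feature range
  min_max_scaler_sym = preprocessing.MinMaxScaler(feature_range=(-1, 1))
  X_train_sym = min_max_scaler_sym.fit_transform(X_train)
  # each column of X_train_sym now spans exactly [-1, 1]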

MaxAbsScaler works in a very similar fashion, but scales in a way that the training data lies within the range [-1, 1], by dividing each feature by its maximum absolute value. It is meant for data that is already centered at zero or sparse data.

Here is how to use the toy data from the previous example with this scaler:

  >>> X_train = np.array([[ 1., -1.,  2.],
  ...                     [ 2.,  0.,  0.],
  ...                     [ 0.,  1., -1.]])
  ...
  >>> max_abs_scaler = preprocessing.MaxAbsScaler()
  >>> X_train_maxabs = max_abs_scaler.fit_transform(X_train)
  >>> X_train_maxabs
  array([[ 0.5, -1. ,  1. ],
         [ 1. ,  0. ,  0. ],
         [ 0. ,  1. , -0.5]])
  >>> X_test = np.array([[ -3., -1.,  4.]])
  >>> X_test_maxabs = max_abs_scaler.transform(X_test)
  >>> X_test_maxabs
  array([[-1.5, -1. ,  2. ]])
  >>> max_abs_scaler.scale_
  array([2., 1., 2.])

As with scale, the module further provides convenience functions minmax_scale and maxabs_scale if you don't want to create an object.
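
For example, the rescaling from the previous examples can be obtained in a single call; a quick sketch reusing X_train from above:

  # Function equivalents of MinMaxScaler and MaxAbsScaler
  X_train_minmax = preprocessing.minmax_scale(X_train)
  X_train_maxabs = preprocessing.maxabs_scale(X_train)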

6.3.1.2. Scaling sparse data

Centering sparse data would destroy the sparseness structure in the data, and thus rarely is a sensible thing to do. However, it can make sense to scale sparse inputs, especially if features are on different scales.

MaxAbsScaler and maxabs_scale were specifically designed for scaling sparse data, and are the recommended way to go about this. However, scale and StandardScaler can accept scipy.sparse matrices as input, as long as with_mean=False is explicitly passed to the constructor. Otherwise a ValueError will be raised as silently centering would break the sparsity and would often crash the execution by allocating excessive amounts of memory unintentionally. RobustScaler cannot be fitted to sparse inputs, but you can use the transform method on sparse inputs.

Note that the scalers accept both Compressed Sparse Rows and Compressed Sparse Columns format (see scipy.sparse.csr_matrix and scipy.sparse.csc_matrix). Any other sparse input will be converted to the Compressed Sparse Rows representation. To avoid unnecessary memory copies, it is recommended to choose the CSR or CSC representation upstream.

Finally, if the centered data is expected to be small enough, explicitly converting the input to an array using the toarray method of sparse matrices is another option.
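
A brief sketch of scaling sparse input, assuming scipy is available:

  import scipy.sparse as sp

  # A small sparse matrix in CSR format
  X_sparse = sp.csr_matrix([[1., 0., 2.],
                            [0., 0., 3.],
                            [4., 5., 0.]])

  # MaxAbsScaler preserves sparsity since no centering is involved
  X_maxabs = preprocessing.MaxAbsScaler().fit_transform(X_sparse)

  # StandardScaler accepts sparse input only with with_mean=False
  X_scaled = preprocessing.StandardScaler(with_mean=False).fit_transform(X_sparse)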

6.3.1.3. Scaling data with outliers

If your data contains many outliers, scaling using the mean and variance of the data is likely to not work very well. In these cases, you can use robust_scale and RobustScaler as drop-in replacements instead. They use more robust estimates for the center and range of your data.
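
A minimal sketch, reusing X_train from above; the API mirrors StandardScaler:

  # RobustScaler centers with the median and scales with the interquartile range
  robust_scaler = preprocessing.RobustScaler().fit(X_train)
  X_train_robust = robust_scaler.transform(X_train)

  # or as a one-off function call
  X_train_robust = preprocessing.robust_scale(X_train)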

References:

Further discussion on the importance of centering and scaling data is available on this FAQ: Should I normalize/standardize/rescale the data?

Scaling vs Whitening

It is sometimes not enough to center and scale the features independently, since a downstream model can further make some assumption on the linear independence of the features.

To address this issue you can use sklearn.decomposition.PCA with whiten=True to further remove the linear correlation across features.
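
A quick sketch on the toy X_train above (n_components=2 is chosen here only to avoid the degenerate third component of this tiny dataset):

  from sklearn.decomposition import PCA

  # Whitening decorrelates the projected features and gives them unit variance
  whitener = PCA(n_components=2, whiten=True).fit(X_train)
  X_whitened = whitener.transform(X_train)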

Scaling a 1D array

All above functions (i.e. scale, minmax_scale, maxabs_scale, and robust_scale) accept 1D arrays, which can be useful in some specific cases.
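
For instance, a small sketch:

  # Scaling a 1D array directly, e.g. a single feature
  x = np.array([1., 5., 10.])
  x_minmax = preprocessing.minmax_scale(x)   # values mapped to [0, 1]
  x_maxabs = preprocessing.maxabs_scale(x)   # values divided by max(|x|)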

6.3.1.4. Centering kernel matrices

If you have a kernel matrix of a kernel \(K\) that computes a dot product in a feature space defined by a function \(\phi\), a KernelCenterer can transform the kernel matrix so that it contains inner products in the feature space defined by \(\phi\) followed by removal of the mean in that space.
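
A brief sketch of how this might be used, with a linear kernel computed via sklearn.metrics.pairwise.pairwise_kernels purely for illustration:

  from sklearn.metrics.pairwise import pairwise_kernels

  # Kernel matrix of the training data
  K = pairwise_kernels(X_train, metric='linear')

  # Center the kernel matrix: inner products of the implicitly centered features
  transformer = preprocessing.KernelCenterer().fit(K)
  K_centered = transformer.transform(K)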

6.3.2. Non-linear transformation

Two types of transformations are available: quantile transforms and power transforms. Both quantile and power transforms are based on monotonic transformations of the features and thus preserve the rank of the values along each feature.

Quantile transforms put all features into the same desired distribution based on the formula \(G^{-1}(F(X))\) where \(F\) is the cumulative distribution function of the feature and \(G^{-1}\) the quantile function of the desired output distribution \(G\). This formula is using the two following facts: (i) if \(X\) is a random variable with a continuous cumulative distribution function \(F\) then \(F(X)\) is uniformly distributed on \([0, 1]\); (ii) if \(U\) is a random variable with uniform distribution on \([0, 1]\) then \(G^{-1}(U)\) has distribution \(G\). By performing a rank transformation, a quantile transform smooths out unusual distributions and is less influenced by outliers than scaling methods. It does, however, distort correlations and distances within and across features.

Power transforms are a family of parametric transformations that aim to map data from any distribution to as close to a Gaussian distribution as possible.

6.3.2.1. Mapping to a Uniform distribution

QuantileTransformer and quantile_transform provide a non-parametric transformation to map the data to a uniform distribution with values between 0 and 1:

  >>> from sklearn.datasets import load_iris
  >>> from sklearn.model_selection import train_test_split
  >>> X, y = load_iris(return_X_y=True)
  >>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
  >>> quantile_transformer = preprocessing.QuantileTransformer(random_state=0)
  >>> X_train_trans = quantile_transformer.fit_transform(X_train)
  >>> X_test_trans = quantile_transformer.transform(X_test)
  >>> np.percentile(X_train[:, 0], [0, 25, 50, 75, 100])
  array([ 4.3,  5.1,  5.8,  6.5,  7.9])

This feature corresponds to the sepal length in cm. Once the quantile transformation is applied, those landmarks approach closely the percentiles previously defined:

  >>> np.percentile(X_train_trans[:, 0], [0, 25, 50, 75, 100])
  ...
  array([ 0.00... ,  0.24...,  0.49...,  0.73...,  0.99... ])

This can be confirmed on an independent testing set with similar remarks:

  >>> np.percentile(X_test[:, 0], [0, 25, 50, 75, 100])
  ...
  array([ 4.4  ,  5.125,  5.75 ,  6.175,  7.3  ])
  >>> np.percentile(X_test_trans[:, 0], [0, 25, 50, 75, 100])
  ...
  array([ 0.01...,  0.25...,  0.46...,  0.60... ,  0.94...])

6.3.2.2. Mapping to a Gaussian distribution

In many modeling scenarios, normality of the features in a dataset is desirable. Power transforms are a family of parametric, monotonic transformations that aim to map data from any distribution to as close to a Gaussian distribution as possible in order to stabilize variance and minimize skewness.

PowerTransformer currently provides two such power transformations, the Yeo-Johnson transform and the Box-Cox transform.

The Yeo-Johnson transform is given by:

\[
x_i^{(\lambda)} =
\begin{cases}
[(x_i + 1)^\lambda - 1] / \lambda & \text{if } \lambda \neq 0, x_i \geq 0, \\
\ln(x_i + 1) & \text{if } \lambda = 0, x_i \geq 0, \\
-[(-x_i + 1)^{2 - \lambda} - 1] / (2 - \lambda) & \text{if } \lambda \neq 2, x_i < 0, \\
-\ln(-x_i + 1) & \text{if } \lambda = 2, x_i < 0
\end{cases}
\]

while the Box-Cox transform is given by:

\[
x_i^{(\lambda)} =
\begin{cases}
\dfrac{x_i^\lambda - 1}{\lambda} & \text{if } \lambda \neq 0, \\
\ln(x_i) & \text{if } \lambda = 0
\end{cases}
\]

Box-Cox can only be applied to strictly positive data. In both methods, the transformation is parameterized by \(\lambda\), which is determined through maximum likelihood estimation. Here is an example of using Box-Cox to map samples drawn from a lognormal distribution to a normal distribution:

  >>> pt = preprocessing.PowerTransformer(method='box-cox', standardize=False)
  >>> X_lognormal = np.random.RandomState(616).lognormal(size=(3, 3))
  >>> X_lognormal
  array([[1.28..., 1.18..., 0.84...],
         [0.94..., 1.60..., 0.38...],
         [1.35..., 0.21..., 1.09...]])
  >>> pt.fit_transform(X_lognormal)
  array([[ 0.49...,  0.17..., -0.15...],
         [-0.05...,  0.58..., -0.57...],
         [ 0.69..., -0.84...,  0.10...]])

While the above example sets the standardize option to False, PowerTransformer will apply zero-mean, unit-variance normalization to the transformed output by default.
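
A quick sketch of this default behavior, reusing X_lognormal from above:

  # With standardize=True (the default) the transformed output is standardized
  pt_default = preprocessing.PowerTransformer(method='box-cox')
  X_trans = pt_default.fit_transform(X_lognormal)
  # X_trans.mean(axis=0) is close to 0 and X_trans.std(axis=0) close to 1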

Below are examples of Box-Cox and Yeo-Johnson applied to various probability distributions. Note that when applied to certain distributions, the power transforms achieve very Gaussian-like results, but with others, they are ineffective. This highlights the importance of visualizing the data before and after transformation.

[Figure: Box-Cox and Yeo-Johnson transforms applied to various probability distributions (map data to normal example).]

It is also possible to map data to a normal distribution using QuantileTransformer by setting output_distribution='normal'. Using the earlier example with the iris dataset:

  >>> quantile_transformer = preprocessing.QuantileTransformer(
  ...     output_distribution='normal', random_state=0)
  >>> X_trans = quantile_transformer.fit_transform(X)
  >>> quantile_transformer.quantiles_
  array([[4.3, 2. , 1. , 0.1],
         [4.4, 2.2, 1.1, 0.1],
         [4.4, 2.2, 1.2, 0.1],
         ...,
         [7.7, 4.1, 6.7, 2.5],
         [7.7, 4.2, 6.7, 2.5],
         [7.9, 4.4, 6.9, 2.5]])

Thus the median of the input becomes the mean of the output, centered at 0. The normal output is clipped so that the input's minimum and maximum (corresponding to the 1e-7 and 1 - 1e-7 quantiles respectively) do not become infinite under the transformation.

6.3.3. Normalization

Normalization is the process of scaling individual samples to have unit norm. This process can be useful if you plan to use a quadratic form such as the dot-product or any other kernel to quantify the similarity of any pair of samples.

This assumption is the basis of the Vector Space Model often used in text classification and clustering contexts.

The function normalize provides a quick and easy way to perform this operation on a single array-like dataset, either using the l1 or l2 norms:

  >>> X = [[ 1., -1.,  2.],
  ...      [ 2.,  0.,  0.],
  ...      [ 0.,  1., -1.]]
  >>> X_normalized = preprocessing.normalize(X, norm='l2')

  >>> X_normalized
  array([[ 0.40..., -0.40...,  0.81...],
         [ 1.  ...,  0.  ...,  0.  ...],
         [ 0.  ...,  0.70..., -0.70...]])

The preprocessing module further provides a utility class Normalizer that implements the same operation using the Transformer API (even though the fit method is useless in this case: the class is stateless as this operation treats samples independently).

This class is hence suitable for use in the early steps of a sklearn.pipeline.Pipeline:

  >>> normalizer = preprocessing.Normalizer().fit(X)  # fit does nothing
  >>> normalizer
  Normalizer()

The normalizer instance can then be used on sample vectors as any transformer:

  >>> normalizer.transform(X)
  array([[ 0.40..., -0.40...,  0.81...],
         [ 1.  ...,  0.  ...,  0.  ...],
         [ 0.  ...,  0.70..., -0.70...]])

  >>> normalizer.transform([[-1., 1., 0.]])
  array([[-0.70...,  0.70...,  0.  ...]])

Note: L2 normalization is also known as spatial sign preprocessing.

Sparse input

normalize and Normalizer accept both dense array-like and sparse matrices from scipy.sparse as input.

For sparse input the data is converted to the Compressed Sparse Rows representation (see scipy.sparse.csr_matrix) before being fed to efficient Cython routines. To avoid unnecessary memory copies, it is recommended to choose the CSR representation upstream.

6.3.4. Encoding categorical features

Often features are not given as continuous values but categorical. For example a person could have features ["male", "female"], ["from Europe", "from US", "from Asia"], ["uses Firefox", "uses Chrome", "uses Safari", "uses Internet Explorer"]. Such features can be efficiently coded as integers, for instance ["male", "from US", "uses Internet Explorer"] could be expressed as [0, 1, 3] while ["female", "from Asia", "uses Chrome"] would be [1, 2, 1].

To convert categorical features to such integer codes, we can use the OrdinalEncoder. This estimator transforms each categorical feature to one new feature of integers (0 to n_categories - 1):

  >>> enc = preprocessing.OrdinalEncoder()
  >>> X = [['male', 'from US', 'uses Safari'], ['female', 'from Europe', 'uses Firefox']]
  >>> enc.fit(X)
  OrdinalEncoder()
  >>> enc.transform([['female', 'from US', 'uses Safari']])
  array([[0., 1., 1.]])

Such integer representation can, however, not be used directly with all scikit-learn estimators, as these expect continuous input, and would interpret the categories as being ordered, which is often not desired (i.e. the set of browsers was ordered arbitrarily).

Another possibility to convert categorical features to features that can be used with scikit-learn estimators is to use a one-of-K, also known as one-hot or dummy encoding. This type of encoding can be obtained with the OneHotEncoder, which transforms each categorical feature with n_categories possible values into n_categories binary features, with one of them 1, and all others 0.

Continuing the example above:

  >>> enc = preprocessing.OneHotEncoder()
  >>> X = [['male', 'from US', 'uses Safari'], ['female', 'from Europe', 'uses Firefox']]
  >>> enc.fit(X)
  OneHotEncoder()
  >>> enc.transform([['female', 'from US', 'uses Safari'],
  ...                ['male', 'from Europe', 'uses Safari']]).toarray()
  array([[1., 0., 0., 1., 0., 1.],
         [0., 1., 1., 0., 0., 1.]])

By default, the values each feature can take are inferred automatically from the dataset and can be found in the categories_ attribute:

  >>> enc.categories_
  [array(['female', 'male'], dtype=object), array(['from Europe', 'from US'], dtype=object), array(['uses Firefox', 'uses Safari'], dtype=object)]

It is possible to specify this explicitly using the parameter categories. There are two genders, four possible continents and four web browsers in our dataset:

  >>> genders = ['female', 'male']
  >>> locations = ['from Africa', 'from Asia', 'from Europe', 'from US']
  >>> browsers = ['uses Chrome', 'uses Firefox', 'uses IE', 'uses Safari']
  >>> enc = preprocessing.OneHotEncoder(categories=[genders, locations, browsers])
  >>> # Note that there are missing categorical values for the 2nd and 3rd
  >>> # feature
  >>> X = [['male', 'from US', 'uses Safari'], ['female', 'from Europe', 'uses Firefox']]
  >>> enc.fit(X)
  OneHotEncoder(categories=[['female', 'male'],
                            ['from Africa', 'from Asia', 'from Europe',
                             'from US'],
                            ['uses Chrome', 'uses Firefox', 'uses IE',
                             'uses Safari']])
  >>> enc.transform([['female', 'from Asia', 'uses Chrome']]).toarray()
  array([[1., 0., 0., 1., 0., 0., 1., 0., 0., 0.]])

If there is a possibility that the training data might have missing categorical features, it can often be better to specify handle_unknown='ignore' instead of setting the categories manually as above. When handle_unknown='ignore' is specified and unknown categories are encountered during transform, no error will be raised but the resulting one-hot encoded columns for this feature will be all zeros (handle_unknown='ignore' is only supported for one-hot encoding):

  >>> enc = preprocessing.OneHotEncoder(handle_unknown='ignore')
  >>> X = [['male', 'from US', 'uses Safari'], ['female', 'from Europe', 'uses Firefox']]
  >>> enc.fit(X)
  OneHotEncoder(handle_unknown='ignore')
  >>> enc.transform([['female', 'from Asia', 'uses Chrome']]).toarray()
  array([[1., 0., 0., 0., 0., 0.]])

It is also possible to encode each column into n_categories - 1 columns instead of n_categories columns by using the drop parameter. This parameter allows the user to specify a category for each feature to be dropped. This is useful to avoid co-linearity in the input matrix in some classifiers. Such functionality is useful, for example, when using non-regularized regression (LinearRegression), since co-linearity would cause the covariance matrix to be non-invertible. When this parameter is not None, handle_unknown must be set to 'error':

  >>> X = [['male', 'from US', 'uses Safari'], ['female', 'from Europe', 'uses Firefox']]
  >>> drop_enc = preprocessing.OneHotEncoder(drop='first').fit(X)
  >>> drop_enc.categories_
  [array(['female', 'male'], dtype=object), array(['from Europe', 'from US'], dtype=object), array(['uses Firefox', 'uses Safari'], dtype=object)]
  >>> drop_enc.transform(X).toarray()
  array([[1., 1., 1.],
         [0., 0., 0.]])

See Loading features from dicts for categorical features that are represented as a dict, not as scalars.

6.3.5. Discretization

Discretization (otherwise known as quantization or binning) provides a way to partition continuous features into discrete values. Certain datasets with continuous features may benefit from discretization, because discretization can transform the dataset of continuous attributes to one with only nominal attributes.

One-hot encoded discretized features can make a model more expressive, while maintaining interpretability. For instance, pre-processing with a discretizer can introduce nonlinearity to linear models.
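
As an illustrative sketch (not taken from the original documentation), a pipeline combining a one-hot encoded discretizer with a linear model can fit a non-linear relationship; the data and parameters below are made up for the illustration:

  import numpy as np
  from sklearn import preprocessing
  from sklearn.linear_model import LinearRegression
  from sklearn.pipeline import make_pipeline

  rng = np.random.RandomState(0)
  X_demo = rng.uniform(-3, 3, size=(100, 1))
  y_demo = np.sin(X_demo).ravel()

  # The binned, one-hot encoded feature lets the linear model fit a
  # piecewise-constant approximation of the sine curve
  model = make_pipeline(
      preprocessing.KBinsDiscretizer(n_bins=10, encode='onehot'),
      LinearRegression(),
  )
  model.fit(X_demo, y_demo)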

6.3.5.1. K-bins discretization

KBinsDiscretizer discretizes features into k bins:

  >>> X = np.array([[ -3., 5., 15 ],
  ...               [  0., 6., 14 ],
  ...               [  6., 3., 11 ]])
  >>> est = preprocessing.KBinsDiscretizer(n_bins=[3, 2, 2], encode='ordinal').fit(X)

By default the output is one-hot encoded into a sparse matrix (see Encoding categorical features) and this can be configured with the encode parameter. For each feature, the bin edges are computed during fit and, together with the number of bins, they will define the intervals. Therefore, for the current example, these intervals are defined as:

  • feature 1: \(\{[-\infty, -1), [-1, 2), [2, \infty)\}\)

  • feature 2: \(\{[-\infty, 5), [5, \infty)\}\)

  • feature 3: \(\{[-\infty, 14), [14, \infty)\}\)

Based on these bin intervals, X is transformed as follows:

  >>> est.transform(X)
  array([[ 0., 1., 1.],
         [ 1., 1., 1.],
         [ 2., 0., 0.]])

The resulting dataset contains ordinal attributes which can be further used in a sklearn.pipeline.Pipeline.

Discretization is similar to constructing histograms for continuous data. However, histograms focus on counting features which fall into particular bins, whereas discretization focuses on assigning feature values to these bins.

KBinsDiscretizer implements different binning strategies, which can be selected with the strategy parameter. The 'uniform' strategy uses constant-width bins. The 'quantile' strategy uses the quantile values to have equally populated bins in each feature. The 'kmeans' strategy defines bins based on a k-means clustering procedure performed on each feature independently.
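
For instance, a small sketch comparing the bin edges learned on the first feature of the toy X above under two of these strategies:

  # Same data and number of bins, different binning strategies
  est_uniform = preprocessing.KBinsDiscretizer(
      n_bins=[3, 2, 2], encode='ordinal', strategy='uniform').fit(X)
  est_kmeans = preprocessing.KBinsDiscretizer(
      n_bins=[3, 2, 2], encode='ordinal', strategy='kmeans').fit(X)

  # bin_edges_ holds one array of edges per feature
  edges_uniform = est_uniform.bin_edges_[0]   # equally wide bins
  edges_kmeans = est_kmeans.bin_edges_[0]     # bins derived from 1D k-means centers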


6.3.5.2. Feature binarization

Feature binarization is the process of thresholding numerical features to get boolean values. This can be useful for downstream probabilistic estimators that assume the input data is distributed according to a multi-variate Bernoulli distribution. For instance, this is the case for the sklearn.neural_network.BernoulliRBM.

It is also common among the text processing community to use binary feature values (probably to simplify the probabilistic reasoning) even if normalized counts (a.k.a. term frequencies) or TF-IDF valued features often perform slightly better in practice.

As for the Normalizer, the utility class Binarizer is meant to be used in the early stages of sklearn.pipeline.Pipeline. The fit method does nothing as each sample is treated independently of others:

  >>> X = [[ 1., -1.,  2.],
  ...      [ 2.,  0.,  0.],
  ...      [ 0.,  1., -1.]]

  >>> binarizer = preprocessing.Binarizer().fit(X)  # fit does nothing
  >>> binarizer
  Binarizer()

  >>> binarizer.transform(X)
  array([[1., 0., 1.],
         [1., 0., 0.],
         [0., 1., 0.]])

It is possible to adjust the threshold of the binarizer:

  >>> binarizer = preprocessing.Binarizer(threshold=1.1)
  >>> binarizer.transform(X)
  array([[0., 0., 1.],
         [1., 0., 0.],
         [0., 0., 0.]])

As for the StandardScaler and Normalizer classes, the preprocessing module provides a companion function binarize to be used when the transformer API is not necessary.

Note that the Binarizer is similar to the KBinsDiscretizer when k = 2, and when the bin edge is at the value threshold.

Sparse input

binarize and Binarizer accept both dense array-like and sparse matrices from scipy.sparse as input.

For sparse input the data is converted to the Compressed Sparse Rows representation (see scipy.sparse.csr_matrix). To avoid unnecessary memory copies, it is recommended to choose the CSR representation upstream.

6.3.6. Imputation of missing values

Tools for imputing missing values are discussed at Imputation of missing values.

6.3.7. Generating polynomial features

Often it's useful to add complexity to the model by considering nonlinear features of the input data. A simple and common method is to use polynomial features, which can capture features' high-order and interaction terms. Polynomial features are implemented in PolynomialFeatures:

  >>> import numpy as np
  >>> from sklearn.preprocessing import PolynomialFeatures
  >>> X = np.arange(6).reshape(3, 2)
  >>> X
  array([[0, 1],
         [2, 3],
         [4, 5]])
  >>> poly = PolynomialFeatures(2)
  >>> poly.fit_transform(X)
  array([[ 1.,  0.,  1.,  0.,  0.,  1.],
         [ 1.,  2.,  3.,  4.,  6.,  9.],
         [ 1.,  4.,  5., 16., 20., 25.]])

The features of X have been transformed from \((X_1, X_2)\) to \((1, X_1, X_2, X_1^2, X_1 X_2, X_2^2)\).

In some cases, only interaction terms among features are required, and they can be obtained by setting interaction_only=True:

  >>> X = np.arange(9).reshape(3, 3)
  >>> X
  array([[0, 1, 2],
         [3, 4, 5],
         [6, 7, 8]])
  >>> poly = PolynomialFeatures(degree=3, interaction_only=True)
  >>> poly.fit_transform(X)
  array([[  1.,   0.,   1.,   2.,   0.,   0.,   2.,   0.],
         [  1.,   3.,   4.,   5.,  12.,  15.,  20.,  60.],
         [  1.,   6.,   7.,   8.,  42.,  48.,  56., 336.]])

The features of X have been transformed from \((X_1, X_2, X_3)\) to \((1, X_1, X_2, X_3, X_1 X_2, X_1 X_3, X_2 X_3, X_1 X_2 X_3)\).

Note that polynomial features are used implicitly in kernel methods (e.g., sklearn.svm.SVC, sklearn.decomposition.KernelPCA) when using polynomial Kernel functions.

See Polynomial interpolation for Ridge regression using created polynomial features.

6.3.8. Custom transformers

Often, you will want to convert an existing Python function into a transformer to assist in data cleaning or processing. You can implement a transformer from an arbitrary function with FunctionTransformer. For example, to build a transformer that applies a log transformation in a pipeline, do:

  >>> import numpy as np
  >>> from sklearn.preprocessing import FunctionTransformer
  >>> transformer = FunctionTransformer(np.log1p, validate=True)
  >>> X = np.array([[0, 1], [2, 3]])
  >>> transformer.transform(X)
  array([[0.        , 0.69314718],
         [1.09861229, 1.38629436]])

You can ensure that func and inverse_func are the inverse of each other by setting check_inverse=True and calling fit before transform. Please note that a warning is raised and can be turned into an error with a filterwarnings:

  >>> import warnings
  >>> warnings.filterwarnings("error", message=".*check_inverse*.",
  ...                         category=UserWarning, append=False)
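
As a small sketch of a transformer with an explicit inverse (reusing X and the imports from the example above): np.expm1 is the exact inverse of np.log1p, so the check_inverse validation during fit passes without a warning:

  # func and inverse_func form an exact pair, so no check_inverse warning is raised
  transformer = FunctionTransformer(func=np.log1p, inverse_func=np.expm1,
                                    validate=True, check_inverse=True)
  X_round_trip = transformer.inverse_transform(transformer.fit_transform(X))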

For a full code example that demonstrates using a FunctionTransformer to do custom feature selection, see Using FunctionTransformer to select columns.
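
In the meantime, a minimal sketch of the idea, using a hypothetical helper function that keeps only the first two columns:

  def select_first_two_columns(X):
      # hypothetical helper used purely for illustration
      return X[:, :2]

  column_selector = FunctionTransformer(select_first_two_columns, validate=True)
  X_subset = column_selector.fit_transform(np.arange(12).reshape(4, 3))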