2.1. Gaussian mixture models

sklearn.mixture is a package which enables one to learn Gaussian Mixture Models (diagonal, spherical, tied and full covariance matrices supported), sample them, and estimate them from data. Facilities to help determine the appropriate number of components are also provided.

Figure: Two-component Gaussian mixture model: data points, and equi-probability surfaces of the model (from the plot_gmm_pdf example).

A Gaussian mixture model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. One can think of mixture models as generalizing k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians.

Scikit-learn implements different classes to estimate Gaussian mixture models, corresponding to the different estimation strategies detailed below.

2.1.1. Gaussian Mixture

The GaussianMixture object implements the expectation-maximization (EM) algorithm for fitting mixture-of-Gaussians models. It can also draw confidence ellipsoids for multivariate models, and compute the Bayesian Information Criterion to assess the number of clusters in the data. A GaussianMixture.fit method is provided that learns a Gaussian Mixture Model from training data. Given test data, it can assign to each sample the Gaussian it most probably belongs to using the GaussianMixture.predict method.

The GaussianMixture comes with different options to constrain the covariance of the different classes estimated: spherical, diagonal, tied or full covariance.

Figure: Gaussian mixture models with spherical, diagonal, tied and full covariance matrices (from the plot_gmm_covariances example).
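
A minimal usage sketch of this API follows; the two-blob toy data is invented for illustration, everything else uses the public GaussianMixture interface:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Toy training data: two well-separated 2-D blobs.
    rng = np.random.default_rng(0)
    X = np.vstack([
        rng.normal([0, 0], 0.5, size=(200, 2)),
        rng.normal([4, 4], 1.0, size=(200, 2)),
    ])

    # covariance_type may be "spherical", "diag", "tied" or "full" (the default).
    gm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)

    labels = gm.predict(X)      # most probable Gaussian for each sample
    resp = gm.predict_proba(X)  # per-sample posterior probabilities
    print(gm.means_)            # estimated component means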


2.1.1.1. Pros and cons of class GaussianMixture

2.1.1.1.1. Pros

  • Speed: it is the fastest algorithm for learning mixture models.

  • Agnostic: as this algorithm maximizes only the likelihood, it will not bias the means towards zero, or bias the cluster sizes to have specific structures that might or might not apply.

2.1.1.1.2. Cons

  • Singularities: when one has insufficiently many points per mixture, estimating the covariance matrices becomes difficult, and the algorithm is known to diverge and find solutions with infinite likelihood unless one regularizes the covariances artificially.

  • Number of components: this algorithm will always use all the components it has access to, needing held-out data or information-theoretic criteria to decide how many components to use in the absence of external cues.

2.1.1.2. Selecting the number of components in a classical Gaussian Mixture Model

The BIC criterion can be used to select the number of components in a Gaussian Mixture in an efficient way. In theory, it recovers the true number of components only in the asymptotic regime (i.e. if much data is available and assuming that the data was actually generated i.i.d. from a mixture of Gaussian distributions). Note that using a Variational Bayesian Gaussian mixture avoids the specification of the number of components for a Gaussian mixture model.

Figure: Gaussian mixture model selection using the BIC criterion (from the plot_gmm_selection example).
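
As an illustration, one might fit candidate models with an increasing number of components and keep the one with the lowest BIC; the toy data and the candidate range below are assumptions made for this sketch:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Toy data drawn from two Gaussian clusters.
    rng = np.random.default_rng(42)
    X = np.vstack([
        rng.normal([-3, 0], 1.0, size=(300, 2)),
        rng.normal([3, 0], 1.0, size=(300, 2)),
    ])

    # Fit models with 1..6 components and keep the one with the lowest BIC.
    candidates = [GaussianMixture(n_components=k, random_state=0).fit(X)
                  for k in range(1, 7)]
    bics = [m.bic(X) for m in candidates]
    best = candidates[int(np.argmin(bics))]
    print("selected n_components:", best.n_components)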


2.1.1.3. Estimation algorithm: expectation-maximization

The main difficulty in learning Gaussian mixture models from unlabeled data is that one usually doesn't know which points came from which latent component (if one has access to this information, it gets very easy to fit a separate Gaussian distribution to each set of points). Expectation-maximization is a well-founded statistical algorithm that gets around this problem through an iterative process. First one assumes random components (randomly centered on data points, learned from k-means, or even just normally distributed around the origin) and computes for each point the probability of being generated by each component of the model. Then, one tweaks the parameters to maximize the likelihood of the data given those assignments. Repeating this process is guaranteed to always converge to a local optimum.
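
To make the two steps concrete, here is an illustrative NumPy-only sketch of EM for a one-dimensional, two-component mixture (this is not scikit-learn's implementation, which handles multivariate data, several covariance structures and numerical safeguards):

    import numpy as np

    rng = np.random.default_rng(0)
    # Toy 1-D data from two Gaussians (parameters chosen only for illustration).
    x = np.concatenate([rng.normal(-2.0, 0.8, 300), rng.normal(3.0, 1.2, 200)])

    # Initialisation: two centres picked from the data, unit variances, equal weights.
    means = rng.choice(x, size=2)
    stds = np.array([1.0, 1.0])
    weights = np.array([0.5, 0.5])

    def normal_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    for _ in range(100):
        # E-step: responsibility of each component for each point.
        dens = np.stack([w * normal_pdf(x, m, s)
                         for w, m, s in zip(weights, means, stds)])
        resp = dens / dens.sum(axis=0)

        # M-step: re-estimate weights, means and variances from the responsibilities.
        nk = resp.sum(axis=1)
        weights = nk / x.size
        means = (resp * x).sum(axis=1) / nk
        stds = np.sqrt((resp * (x - means[:, None]) ** 2).sum(axis=1) / nk)

    print(weights, means, stds)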

2.1.2. Variational Bayesian Gaussian Mixture

The BayesianGaussianMixture object implements a variant of the Gaussian mixture model with variational inference algorithms. The API is similar to the one defined by GaussianMixture.

2.1.2.1. Estimation algorithm: variational inference

Variational inference is an extension of expectation-maximization that maximizes a lower bound on model evidence (including priors) instead of data likelihood. The principle behind variational methods is the same as expectation-maximization (that is, both are iterative algorithms that alternate between finding the probabilities for each point to be generated by each mixture and fitting the mixture to these assigned points), but variational methods add regularization by integrating information from prior distributions. This avoids the singularities often found in expectation-maximization solutions but introduces some subtle biases to the model. Inference is often notably slower, but not usually so much as to render usage impractical.

Due to its Bayesian nature, the variational algorithm needs more hyperparameters than expectation-maximization, the most important of these being the concentration parameter weight_concentration_prior. Specifying a low value for the concentration prior will make the model put most of the weight on a few components and set the remaining components' weights very close to zero. High values of the concentration prior will allow a larger number of components to be active in the mixture.
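
A sketch of this behaviour, using the default Dirichlet process prior type, toy two-cluster data and an intentionally generous n_components (all assumptions made for the example):

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(0)
    X = np.vstack([
        rng.normal([0, 0], 0.7, size=(250, 2)),
        rng.normal([5, 5], 0.7, size=(250, 2)),
    ])

    for prior in (0.01, 1000.0):
        bgm = BayesianGaussianMixture(
            n_components=10,
            weight_concentration_prior=prior,
            max_iter=500,
            random_state=0,
        ).fit(X)
        # A low concentration prior pushes most mixture weights towards zero,
        # while a high one keeps more components active.
        print(prior, np.round(bgm.weights_, 3))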

The BayesianGaussianMixture class proposes two types of prior for the weights distribution: a finite mixture model with a Dirichlet distribution and an infinite mixture model with the Dirichlet Process. In practice the Dirichlet Process inference algorithm is approximated and uses a truncated distribution with a fixed maximum number of components (called the stick-breaking representation). The number of components actually used almost always depends on the data.

The next figure compares the results obtained for the different types of weight concentration prior (parameter weight_concentration_prior_type) for different values of weight_concentration_prior. Here, we can see that the value of the weight_concentration_prior parameter has a strong impact on the effective number of active components obtained. We can also notice that large values of the concentration prior lead to more uniform weights when the type of prior is 'dirichlet_distribution', while this is not necessarily the case for the 'dirichlet_process' type (used by default).

Figures: effect of the weight concentration prior type and value on the mixture weights (from the plot_bgmm and plot_dpgmm examples).

The examples below compare Gaussian mixture models with a fixed number of components to variational Gaussian mixture models with a Dirichlet process prior. Here, a classical Gaussian mixture is fitted with 5 components on a dataset composed of 2 clusters. We can see that the variational Gaussian mixture with a Dirichlet process prior is able to limit itself to only 2 components, whereas the Gaussian mixture fits the data with a fixed number of components that has to be set a priori by the user. In this case the user has selected n_components=5, which does not match the true generative distribution of this toy dataset. Note that with very few observations, the variational Gaussian mixture models with a Dirichlet process prior can take a conservative stand and fit only one component.

Figure: classical Gaussian mixture with a fixed number of components versus a variational Gaussian mixture with a Dirichlet process prior (from the plot_gmm example).
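
A rough reproduction of this comparison; the two-cluster toy data is an assumption, and the weights_ attribute shows how much each component is used:

    import numpy as np
    from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

    rng = np.random.default_rng(0)
    X = np.vstack([
        rng.normal([-4, 0], 1.0, size=(300, 2)),
        rng.normal([4, 0], 1.0, size=(300, 2)),
    ])

    gm = GaussianMixture(n_components=5, random_state=0).fit(X)
    bgm = BayesianGaussianMixture(
        n_components=5,
        weight_concentration_prior_type="dirichlet_process",
        max_iter=500,
        random_state=0,
    ).fit(X)

    # The classical mixture spreads its weight over all 5 components, while the
    # Dirichlet process prior typically drives most weights towards zero.
    print("GaussianMixture weights:        ", np.round(gm.weights_, 3))
    print("BayesianGaussianMixture weights:", np.round(bgm.weights_, 3))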

In the following figure, we fit a dataset not well-depicted by a Gaussian mixture. Adjusting the weight_concentration_prior parameter of the BayesianGaussianMixture controls the number of components used to fit this data. We also present, in the last two plots, a random sampling generated from the two resulting mixtures.

Figure: BayesianGaussianMixture fitted to a dataset not well-depicted by a Gaussian mixture, for different values of weight_concentration_prior (from the plot_gmm_sin example).
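
A random sampling such as the one in the figure's last panels can be obtained with the sample method; the noisy sine data below is only an assumed stand-in for the dataset of that example:

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    # Stand-in for a dataset that a single Gaussian depicts poorly.
    rng = np.random.default_rng(0)
    t = rng.uniform(0.0, 2.0 * np.pi, size=500)
    X = np.column_stack([t, np.sin(t) + 0.1 * rng.normal(size=t.size)])

    bgm = BayesianGaussianMixture(
        n_components=10, weight_concentration_prior=0.01,
        max_iter=500, random_state=0,
    ).fit(X)

    # sample() draws new observations from the fitted mixture, together with
    # the index of the component that generated each one.
    X_new, comp = bgm.sample(n_samples=200)
    print(X_new.shape, np.bincount(comp))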


2.1.2.2. Pros and cons of variational inference with BayesianGaussianMixture

2.1.2.2.1. Pros

  • Automatic selection: when weight_concentration_prior is small enough and n_components is larger than what is found necessary by the model, the Variational Bayesian mixture model has a natural tendency to set some mixture weight values close to zero. This makes it possible to let the model choose a suitable number of effective components automatically. Only an upper bound of this number needs to be provided. Note however that the “ideal” number of active components is very application specific and is typically ill-defined in a data exploration setting.

  • Less sensitivity to the number of parameters: unlike finite models, which will almost always use all components as much as they can, and hence will produce wildly different solutions for different numbers of components, the variational inference with a Dirichlet process prior (weight_concentration_prior_type='dirichlet_process') won't change much with changes to the parameters, leading to more stability and less tuning.

  • Regularization: due to the incorporation of prior information, variational solutions have fewer pathological special cases than expectation-maximization solutions.

2.1.2.2.2. Cons

  • Speed: the extra parametrization necessary for variational inference makes inference slower, although not by much.

  • Hyperparameters: this algorithm needs an extra hyperparameter that might need experimental tuning via cross-validation.

  • Bias: there are many implicit biases in the inference algorithms (and also in the Dirichlet process if used), and whenever there is a mismatch between these biases and the data it might be possible to fit better models using a finite mixture.

2.1.2.3. The Dirichlet Process

Here we describe variational inference algorithms on Dirichlet process mixtures. The Dirichlet process is a prior probability distribution on clusterings with an infinite, unbounded number of partitions. Variational techniques let us incorporate this prior structure on Gaussian mixture models at almost no penalty in inference time, compared to a finite Gaussian mixture model.

An important question is how the Dirichlet process can use an infinite, unbounded number of clusters and still be consistent. While a full explanation doesn't fit this manual, one can think of its stick-breaking process analogy to help understand it. The stick-breaking process is a generative story for the Dirichlet process. We start with a unit-length stick and in each step we break off a portion of the remaining stick. Each time, we associate the length of the piece of the stick to the proportion of points that fall into a group of the mixture. At the end, to represent the infinite mixture, we associate the last remaining piece of the stick to the proportion of points that don't fall into all the other groups. The length of each piece is a random variable with probability proportional to the concentration parameter. Smaller values of the concentration will divide the unit length into larger pieces of the stick (defining a more concentrated distribution). Larger concentration values will create smaller pieces of the stick (increasing the number of components with non-zero weights).
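
As a sketch, the truncated stick-breaking construction can be written in a few lines of NumPy; the function name and the truncation level below are invented for this illustration:

    import numpy as np

    def stick_breaking_weights(alpha, n_components, rng):
        # Each break removes a Beta(1, alpha)-distributed fraction of the
        # remaining stick; the last component receives whatever is left.
        betas = rng.beta(1.0, alpha, size=n_components - 1)
        remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)])
        return remaining * np.concatenate([betas, [1.0]])

    rng = np.random.default_rng(0)
    # A small concentration yields a few large pieces (few active components);
    # a large concentration yields many small pieces.
    print(np.round(stick_breaking_weights(0.1, 10, rng), 3))
    print(np.round(stick_breaking_weights(10.0, 10, rng), 3))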

Variational inference techniques for the Dirichlet process still work with a finite approximation to this infinite mixture model, but instead of having to specify a priori how many components one wants to use, one just specifies the concentration parameter and an upper bound on the number of mixture components (this upper bound, assuming it is higher than the “true” number of components, affects only algorithmic complexity, not the actual number of components used).
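
A quick way to check this in practice is to fit the same data with two different upper bounds and count the components that keep a non-negligible weight; the toy data, threshold and bounds below are assumptions made for this sketch:

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    # Three-cluster toy data.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.5, size=(200, 2))
                   for c in ([0, 0], [5, 0], [0, 5])])

    for upper_bound in (10, 40):
        bgm = BayesianGaussianMixture(
            n_components=upper_bound, weight_concentration_prior=0.01,
            max_iter=500, random_state=0,
        ).fit(X)
        # The effective number of components should be essentially unchanged;
        # only the fitting cost grows with the upper bound.
        active = int(np.sum(bgm.weights_ > 1e-2))
        print(upper_bound, "->", active, "active components")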