tensor.nlinalg – Linear Algebra Ops Using Numpy

Note

This module is not imported by default. You need to import it to use it.

API

  • class theano.tensor.nlinalg.AllocDiag[source]
  • Allocates a square matrix with the given vector as its diagonal.
  • class theano.tensor.nlinalg.Det[source]
  • Matrix determinant. Input should be a square matrix.
  • class theano.tensor.nlinalg.Eig[source]
  • Compute the eigenvalues and right eigenvectors of a square array.
  • class theano.tensor.nlinalg.Eigh(UPLO='L')[source]
  • Return the eigenvalues and eigenvectors of a Hermitian or symmetric matrix.

    • grad(inputs, g_outputs)[source]
    • The gradient function should return

\sum_n \left( W_n \frac{\partial\, w_n}{\partial a_{ij}} +
\sum_k V_{nk} \frac{\partial\, v_{nk}}{\partial a_{ij}} \right),

where [W, V] corresponds to g_outputs, a to inputs, and (w, v) = \mbox{eig}(a).

Analytic formulae for eigensystem gradients are well-known in perturbation theory:

\frac{\partial\, w_n}{\partial a_{ij}} = v_{in}\, v_{jn}

\frac{\partial\, v_{kn}}{\partial a_{ij}} =
\sum_{m \ne n} \frac{v_{km} v_{jn}}{w_n - w_m}
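
A minimal sketch of differentiating through this op, assuming the eigh() helper that this module exposes alongside the Eigh class (the concrete matrix below is only illustrative):

    import numpy as np
    import theano
    import theano.tensor as tt
    from theano.tensor import nlinalg

    A = tt.dmatrix('A')
    w, v = nlinalg.eigh(A)          # eigenvalues w and eigenvectors v of a symmetric A
    g = theano.grad(w.sum(), A)     # reverse-mode gradient through the eigendecomposition

    f = theano.function([A], [w, v, g])
    w_val, v_val, g_val = f(np.array([[2.0, 1.0],
                                      [1.0, 3.0]]))
    # Since sum(w) equals trace(A), g_val should be numerically close to the identity matrix.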

  • class theano.tensor.nlinalg.EighGrad(UPLO='L')[source]
  • Gradient of an eigensystem of a Hermitian matrix.

    • perform(node, inputs, outputs)[source]
    • Implements the “reverse-mode” gradient for the eigensystem of a square matrix.
  • class theano.tensor.nlinalg.MatrixInverse[source]
  • Computes the inverse of a matrix A.

Given a square matrix A, matrix_inverse returns a square matrix A_{inv} such that the dot products A \cdot A_{inv} and A_{inv} \cdot A equal the identity matrix I.

Notes

When possible, the call to this op will be optimized to a call of solve.
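
A minimal usage sketch, assuming the matrix_inverse() helper that this module exposes on top of this op (the concrete matrix is only illustrative):

    import numpy as np
    import theano
    import theano.tensor as tt
    from theano.tensor import nlinalg

    X = tt.dmatrix('X')
    f = theano.function([X], nlinalg.matrix_inverse(X))

    x = np.array([[4.0, 7.0],
                  [2.0, 6.0]])
    print(np.allclose(np.dot(x, f(x)), np.eye(2)))   # expected: True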

  • R_op(inputs, eval_points)[source]
  • The gradient function should return

\frac{\partial X^{-1}}{\partial X}V,

where V corresponds to g_outputs and X to inputs. Using the matrix cookbook, one can deduce that the relation corresponds to

X^{-1} \cdot V \cdot X^{-1}.

  • grad(inputs, g_outputs)[source]
  • The gradient function should return

V\frac{\partial X^{-1}}{\partial X},

where V corresponds to g_outputs and X to inputs. Using the matrix cookbook, one can deduce that the relation corresponds to

(X^{-1} \cdot V^{T} \cdot X^{-1})^T.

  • class theano.tensor.nlinalg.MatrixPinv[source]
  • Computes the pseudo-inverse of a matrix A.

The pseudo-inverse of a matrix A, denoted A^+, is defined as: “the matrix that ‘solves’ [the least-squares problem] Ax = b,” i.e., if \bar{x} is said solution, then A^+ is that matrix such that \bar{x} = A^+ b.

Note that Ax = AA^+ b, so AA^+ is close to the identity matrix. This method is not faster than matrix_inverse. Its strength comes from the fact that it works for non-square matrices. If you have a square matrix, though, matrix_inverse can be both more exact and faster to compute. Also, this op does not get optimized into a solve op.
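
A minimal sketch for a non-square matrix, assuming the pinv() helper that this module exposes on top of MatrixPinv (the data is only illustrative):

    import numpy as np
    import theano
    import theano.tensor as tt
    from theano.tensor import nlinalg

    X = tt.dmatrix('X')
    f = theano.function([X], nlinalg.pinv(X))

    x = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])                           # 3x2: no ordinary inverse exists
    x_pinv = f(x)
    print(np.allclose(np.dot(np.dot(x, x_pinv), x), x))  # A A+ A == A; expected: True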

  • L_op(inputs, outputs, g_outputs)[source]
  • The gradient function should return

V\frac{\partial X^+}{\partial X},

where V corresponds to g_outputs and X to inputs. According to Wikipedia, this corresponds to

(-X^+ V^T X^+ + X^+ X^{+T} V (I - X X^+) + (I - X^+ X) V X^{+T} X^+)^T.

  • class theano.tensor.nlinalg.QRFull(mode)[source]
  • Full QR Decomposition.

Computes the QR decomposition of a matrix. Factor the matrix a as qr, where q is orthonormal and r is upper-triangular.

  • class theano.tensor.nlinalg.QRIncomplete(mode)[source]
  • Incomplete QR Decomposition.

Computes the QR decomposition of a matrix. Factor the matrix a as qr and return a single matrix R.

  • class theano.tensor.nlinalg.SVD(full_matrices=True, compute_uv=True)[source]

Parameters:

  • full_matrices (bool, optional) – If True (default), u and v have the shapes (M, M) and (N, N), respectively. Otherwise, the shapes are (M, K) and (K, N), respectively, where K = min(M, N).
  • compute_uv (bool, optional) – Whether or not to compute u and v in addition to s. True by default.
  • class theano.tensor.nlinalg.TensorInv(ind=2)[source]
  • Class wrapper for the tensorinv() function; Theano utilization of numpy.linalg.tensorinv.
  • class theano.tensor.nlinalg.TensorSolve(axes=None)[source]
  • Class wrapper for the tensorsolve() function; Theano utilization of numpy.linalg.tensorsolve.
  • theano.tensor.nlinalg.diag(x)[source]
  • Numpy-compatibility method. If x is a matrix, return its diagonal. If x is a vector, return a matrix with it as its diagonal.

    • This method does not support the k argument that numpy supports.
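
A minimal sketch of both directions, vector to diagonal matrix and matrix to diagonal vector (inputs are only illustrative):

    import numpy as np
    import theano
    import theano.tensor as tt
    from theano.tensor import nlinalg

    v = tt.dvector('v')
    M = tt.dmatrix('M')

    to_matrix = theano.function([v], nlinalg.diag(v))   # vector -> matrix with v on its diagonal
    to_vector = theano.function([M], nlinalg.diag(M))   # matrix -> vector of its diagonal

    print(to_matrix(np.array([1.0, 2.0, 3.0])))
    print(to_vector(np.arange(9.0).reshape(3, 3)))      # [0. 4. 8.]
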
  • theano.tensor.nlinalg.matrix_dot(*args)[source]
  • Shorthand for the product of several dot operations.

Given N matrices A_0, A_1, .., A_N, matrix_dot will generate the matrix product between all in the given order, namely A_0 \cdot A_1 \cdot A_2 \cdot .. \cdot A_N.
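
A minimal sketch with three matrices (the values are only illustrative):

    import numpy as np
    import theano
    import theano.tensor as tt
    from theano.tensor import nlinalg

    A, B, C = tt.dmatrices('A', 'B', 'C')
    f = theano.function([A, B, C], nlinalg.matrix_dot(A, B, C))   # A . B . C

    a = np.eye(2)
    b = 2.0 * np.eye(2)
    c = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    print(f(a, b, c))                                             # 2 * c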

  • theano.tensor.nlinalg.matrix_power(M, n)[source]
  • Raise a square matrix to the (integer) power n.

Parameters:

  • M (Tensor variable) – The square matrix to raise to a power.
  • n (Python int) – The exponent.
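
A minimal sketch; note that the exponent is a plain Python int, not a symbolic variable (the matrix is only illustrative):

    import numpy as np
    import theano
    import theano.tensor as tt
    from theano.tensor import nlinalg

    M = tt.dmatrix('M')
    f = theano.function([M], nlinalg.matrix_power(M, 3))

    m = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    print(f(m))                      # [[1. 3.]
                                     #  [0. 1.]]
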
  • theano.tensor.nlinalg.qr(a, mode='reduced')[source]
  • Computes the QR decomposition of a matrix. Factor the matrix a as qr, where q is orthonormal and r is upper-triangular.

Parameters:

  • a (array_like, shape (M, N)) – Matrix to be factored.
  • mode ({'reduced', 'complete', 'r', 'raw'}, optional) – If K = min(M, N), then

    • ‘reduced’ : returns q, r with dimensions (M, K), (K, N)
    • ‘complete’ : returns q, r with dimensions (M, M), (M, N)
    • ‘r’ : returns r only, with dimensions (K, N)
    • ‘raw’ : returns h, tau with dimensions (N, M), (K,). Note that the array h returned in ‘raw’ mode is transposed for calling Fortran.

    The default mode is ‘reduced’.

Returns:

  • q (matrix of float or complex, optional) – A matrix with orthonormal columns. When mode = ‘complete’ the result is an orthogonal/unitary matrix depending on whether or not a is real/complex. The determinant may be either +/- 1 in that case.
  • r (matrix of float or complex, optional) – The upper-triangular matrix.
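
A minimal sketch of the default ‘reduced’ mode (the random input is only illustrative):

    import numpy as np
    import theano
    import theano.tensor as tt
    from theano.tensor import nlinalg

    A = tt.dmatrix('A')
    q, r = nlinalg.qr(A, mode='reduced')          # q: (M, K), r: (K, N), K = min(M, N)
    f = theano.function([A], [q, r])

    a = np.random.rand(5, 3)
    q_val, r_val = f(a)
    print(q_val.shape, r_val.shape)               # (5, 3) (3, 3)
    print(np.allclose(np.dot(q_val, r_val), a))   # expected: True
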
  • theano.tensor.nlinalg.svd(a, full_matrices=1, compute_uv=1)[source]
  • This function performs the SVD on CPU.

Parameters:

  • full_matrices (bool, optional) – If True (default), u and v have the shapes (M, M) and (N, N), respectively. Otherwise, the shapes are (M, K) and (K, N), respectively, where K = min(M, N).
  • compute_uv (bool, optional) – Whether or not to compute u and v in addition to s. True by default. Returns: U, V, D. Return type: matrices.
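
A minimal sketch. The op wraps numpy.linalg.svd, and this example assumes the three outputs follow numpy's (U, s, V^T) ordering; check that assumption against your installed version (the random input is only illustrative):

    import numpy as np
    import theano
    import theano.tensor as tt
    from theano.tensor import nlinalg

    A = tt.dmatrix('A')
    f = theano.function([A], nlinalg.svd(A, full_matrices=0))   # thin SVD

    a = np.random.rand(4, 3)
    u, s, vt = f(a)                     # assumed numpy-style ordering: U, singular values, V^T
    print(u.shape, s.shape, vt.shape)   # (4, 3) (3,) (3, 3) for the thin decomposition
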
  • theano.tensor.nlinalg.tensorinv(a, ind=2)[source]
  • Does not run on GPU; Theano utilization of numpy.linalg.tensorinv.

Compute the ‘inverse’ of an N-dimensional array. The result is an inverse for a relative to the tensordot operation tensordot(a, b, ind), i.e., up to floating-point accuracy, tensordot(tensorinv(a), a, ind) is the “identity” tensor for the tensordot operation.

Parameters:

  • a (array_like) – Tensor to ‘invert’. Its shape must be ‘square’, i.e., prod(a.shape[:ind]) == prod(a.shape[ind:]).
  • ind (int, optional) – Number of first indices that are involved in the inverse sum. Must be a positive integer; default is 2. Returns: b, a's tensordot inverse, with shape a.shape[ind:] + a.shape[:ind]. Return type: ndarray. Raises: LinAlgError – If a is singular or not ‘square’ (in the above sense).
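
A minimal sketch with ind=2; the random 4-D input is only illustrative, and it is ‘square’ because prod((2, 3)) == prod((3, 2)):

    import numpy as np
    import theano
    import theano.tensor as tt
    from theano.tensor import nlinalg

    a = tt.dtensor4('a')
    f = theano.function([a], nlinalg.tensorinv(a, ind=2))

    x = np.random.rand(2, 3, 3, 2)
    x_inv = f(x)                                           # shape (3, 2, 2, 3)
    ident = np.eye(6).reshape(3, 2, 3, 2)                  # the "identity" for this tensordot
    print(np.allclose(np.tensordot(x_inv, x, 2), ident))   # expected: True
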
  • theano.tensor.nlinalg.tensorsolve(a, b, axes=None)[source]
  • Theano utilization of numpy.linalg.tensorsolve. Does not run on GPU!

Solve the tensor equation a x = b for x. It is assumed that all indices of x are summed over in the product, together with the rightmost indices of a, as is done in, for example, tensordot(a, x, axes=len(b.shape)).

Parameters:

  • a (array_like) – Coefficient tensor, of shape b.shape + Q. Q, a tuple, equals the shape of that sub-tensor of a consisting of the appropriate number of its rightmost indices, and must be such that prod(Q) == prod(b.shape) (in which sense a is said to be ‘square’).
  • b (array_like) – Right-hand tensor, which can be of any shape.
  • axes (tuple of ints, optional) – Axes in a to reorder to the right, before inversion. If None (default), no reordering is done. Returns: x. Return type: ndarray, shape Q. Raises: LinAlgError – If a is singular or not ‘square’ (in the above sense).
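
A minimal sketch in which the solution x is itself a matrix; the random inputs are only illustrative, and a is ‘square’ because prod(Q) == prod(b.shape) with Q = (2, 3):

    import numpy as np
    import theano
    import theano.tensor as tt
    from theano.tensor import nlinalg

    a = tt.dtensor4('a')
    b = tt.dmatrix('b')
    f = theano.function([a, b], nlinalg.tensorsolve(a, b))

    a_val = np.random.rand(2, 3, 2, 3)                          # shape b.shape + Q, Q = (2, 3)
    b_val = np.random.rand(2, 3)
    x_val = f(a_val, b_val)                                     # shape Q = (2, 3)
    print(np.allclose(np.tensordot(a_val, x_val, 2), b_val))    # expected: True
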
  • theano.tensor.nlinalg.trace(X)[source]
  • Returns the sum of diagonal elements of matrix X.

Notes

Works on GPU since 0.6rc4.
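
A minimal usage sketch (the input is only illustrative):

    import numpy as np
    import theano
    import theano.tensor as tt
    from theano.tensor import nlinalg

    X = tt.dmatrix('X')
    f = theano.function([X], nlinalg.trace(X))

    print(f(np.arange(9.0).reshape(3, 3)))   # 0 + 4 + 8 = 12.0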