sparse – Symbolic Sparse Matrices

In the tutorial section, you can find a sparse tutorial.

The sparse submodule is not loaded when we import Theano. You must import theano.sparse to enable it.

The sparse module provides the same functionality as the tensor module. The difference lies under the covers because sparse matrices do not store data in a contiguous array. Note that there are no GPU implementations for sparse matrices in Theano. The sparse module has been used in:

  • NLP: Dense linear transformations of sparse vectors.
  • Audio: Filterbank in the Fourier domain.
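As a quick illustration of the paragraphs above, here is a minimal sketch of enabling the submodule and compiling a function on a symbolic sparse matrix (variable names are arbitrary; the constructors follow the sparse tutorial):

  import numpy as np
  import scipy.sparse as sp
  import theano
  import theano.sparse as sparse   # the submodule must be imported explicitly

  # Declare a symbolic sparse matrix in CSC format.
  x = sparse.csc_matrix(name='x', dtype='float64')

  # Convert it back to a dense tensor and compile a function.
  y = sparse.basic.dense_from_sparse(x)
  f = theano.function([x], y)

  # The compiled function accepts a scipy.sparse matrix as input.
  a = sp.csc_matrix(np.eye(3))
  print(f(a))   # a dense 3x3 identity matrix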

Compressed Sparse Format

This section explains how information is stored for the two sparse formats of SciPy supported by Theano. More formats can be used with SciPy, and some documentation about them may be found here.

Theano supports two compressed sparse formats: csc and csr, based respectively on columns and rows. They both have the same attributes: data, indices, indptr and shape.

  • The data attribute is a one-dimensional ndarray which contains all the non-zero elements of the sparse matrix.
  • The indices and indptr attributes are used to store the position of the data in the sparse matrix.
  • The shape attribute is exactly the same as the shape attribute of a dense (i.e. generic) matrix. It can be explicitly specified at the creation of a sparse matrix if it cannot be inferred from the first three attributes.

CSC Matrix

In the Compressed Sparse Column format, indices stands for indexes inside the column vectors of the matrix and indptr tells where the column starts in the data and in the indices attributes. indptr can be thought of as giving the slice which must be applied to the other attributes in order to get each column of the matrix. In other words, slice(indptr[i], indptr[i+1]) corresponds to the slice needed to find the i-th column of the matrix in the data and indices fields.

The following example builds a matrix and returns its columns. It prints the i-th column, i.e. a list of indices in the column and their corresponding values in the second list.

  >>> import numpy as np
  >>> import scipy.sparse as sp
  >>> data = np.asarray([7, 8, 9])
  >>> indices = np.asarray([0, 1, 2])
  >>> indptr = np.asarray([0, 2, 3, 3])
  >>> m = sp.csc_matrix((data, indices, indptr), shape=(3, 3))
  >>> m.toarray()
  array([[7, 0, 0],
         [8, 0, 0],
         [0, 9, 0]])
  >>> i = 0
  >>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
  (array([0, 1], dtype=int32), array([7, 8]))
  >>> i = 1
  >>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
  (array([2], dtype=int32), array([9]))
  >>> i = 2
  >>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
  (array([], dtype=int32), array([], dtype=int64))

CSR Matrix

In the Compressed Sparse Row format, indices stands for indexes inside the row vectors of the matrix and indptr tells where the row starts in the data and in the indices attributes. indptr can be thought of as giving the slice which must be applied to the other attributes in order to get each row of the matrix. In other words, slice(indptr[i], indptr[i+1]) corresponds to the slice needed to find the i-th row of the matrix in the data and indices fields.

The following example builds a matrix and returns its rows. It prints the i-th row, i.e. a list of indices in the row and their corresponding values in the second list.

  >>> import numpy as np
  >>> import scipy.sparse as sp
  >>> data = np.asarray([7, 8, 9])
  >>> indices = np.asarray([0, 1, 2])
  >>> indptr = np.asarray([0, 2, 3, 3])
  >>> m = sp.csr_matrix((data, indices, indptr), shape=(3, 3))
  >>> m.toarray()
  array([[7, 8, 0],
         [0, 0, 9],
         [0, 0, 0]])
  >>> i = 0
  >>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
  (array([0, 1], dtype=int32), array([7, 8]))
  >>> i = 1
  >>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
  (array([2], dtype=int32), array([9]))
  >>> i = 2
  >>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
  (array([], dtype=int32), array([], dtype=int64))

List of Implemented Operations

    • Moving from and to sparse
      • dense_from_sparse. Both grads are implemented. Structured by default.
      • csr_from_dense, csc_from_dense. The grad implemented is structured.
      • Theano SparseVariable objects have a method toarray() that is the same as dense_from_sparse.
    • Construction of Sparses and their Properties
      • CSM and CSC, CSR to construct a matrix. The grad implemented is regular.
      • csm_properties to get the properties of a sparse matrix. The grad implemented is regular.
      • csm_indices(x), csm_indptr(x), csm_data(x) and csm_shape(x) or x.shape.
      • sp_ones_like. The grad implemented is regular.
      • sp_zeros_like. The grad implemented is regular.
      • square_diagonal. The grad implemented is regular.
      • construct_sparse_from_list. The grad implemented is regular.
    • Cast
      • cast with bcast, wcast, icast, lcast, fcast, dcast, ccast, and zcast. The grad implemented is regular.
    • Transpose
      • transpose. The grad implemented is regular.
    • Basic Arithmetic
      • neg. The grad implemented is regular.
      • eq.
      • neq.
      • gt.
      • ge.
      • lt.
      • le.
      • add. The grad implemented is regular.
      • sub. The grad implemented is regular.
      • mul. The grad implemented is regular.
      • col_scale to multiply by a vector along the columns. The grad implemented is structured.
      • row_scale to multiply by a vector along the rows. The grad implemented is structured.
    • Monoid (element-wise operations with only one sparse input). They all have a structured grad.

      • structured_sigmoid
      • structured_exp
      • structured_log
      • structured_pow
      • structured_minimum
      • structured_maximum
      • structured_add
      • sin
      • arcsin
      • tan
      • arctan
      • sinh
      • arcsinh
      • tanh
      • arctanh
      • rad2deg
      • deg2rad
      • rint
      • ceil
      • floor
      • trunc
      • sgn
      • log1p
      • expm1
      • sqr
      • sqrt
    • Dot Product
      • dot.
        • One of the inputs must be sparse, the other sparse or dense.
        • The grad implemented is regular.
        • No C code for perform and no C code for grad.
        • Returns a dense for perform and a dense for grad.
      • structured_dot.
        • The first input is sparse, the second can be sparse or dense.
        • The grad implemented is structured.
        • C code for perform and grad.
        • It returns a sparse output if both inputs are sparse and a dense one if one of the inputs is dense.
        • Returns a sparse grad for sparse inputs and a dense grad for dense inputs.
      • true_dot.
        • The first input is sparse, the second can be sparse or dense.
        • The grad implemented is regular.
        • No C code for perform and no C code for grad.
        • Returns a sparse.
        • The gradient returns a sparse for sparse inputs and by default a dense for dense inputs. The parameter grad_preserves_dense can be set to False to return a sparse grad for dense inputs.
      • sampling_dot.
        • Both inputs must be dense.
        • The grad implemented is structured for p.
        • Sample of the dot and sample of the gradient.
        • C code for perform but not for grad.
        • Returns sparse for perform and grad.
      • usmm.
        • You shouldn’t insert this op yourself! An optimization transforms a dot to Usmm when possible.
        • This op is the equivalent of gemm for sparse dot.
        • There is no grad implemented for this op.
        • One of the inputs must be sparse, the other sparse or dense.
        • Returns a dense from perform.
    • Slice Operations
      • sparse_variable[N, N], returns a tensor scalar. There is no grad implemented for this operation.
      • sparse_variable[M:N, O:P], returns a sparse matrix. There is no grad implemented for this operation.
      • Sparse variables don’t support [M, N:O] and [M:N, O] as we don’t support sparse vectors and returning a sparse matrix would break the numpy interface. Use [M:M+1, N:O] and [M:N, O:O+1] instead (see the sketch after this list).
      • diag. The grad implemented is regular.
    • Concatenation
      • hstack. The grad implemented is regular.
      • vstack. The grad implemented is regular.
    • Probability. There is no grad implemented for these operations.

      • Poisson and poisson
      • Binomial and csc_fbinomial, csc_dbinomial, csr_fbinomial, csr_dbinomial
      • Multinomial and multinomial
    • Internal Representation. They all have a regular grad implemented.

      • ensure_sorted_indices.
      • remove0.
      • clean to re-sort indices and remove zeros.
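As referenced in the Slice Operations entry above, here is a minimal sketch of sparse slicing, including the [M:M+1, N:O] workaround for the unsupported [M, N:O] form (names are illustrative):

  import numpy as np
  import scipy.sparse as sp
  import theano
  import theano.sparse as sparse

  x = sparse.csr_matrix(name='x', dtype='float64')

  single = x[1, 2]       # a tensor scalar
  block = x[0:2, 1:3]    # a sparse matrix
  row = x[1:2, 0:3]      # one row kept as a 1xN sparse matrix ([1, 0:3] is not supported)

  f = theano.function([x], [single,
                            sparse.basic.dense_from_sparse(block),
                            sparse.basic.dense_from_sparse(row)])
  a = sp.csr_matrix(np.arange(9, dtype='float64').reshape(3, 3))
  print(f(a))            # element (1, 2), the 2x2 block, and the 1x3 row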

sparse – Sparse Op

Classes for handling sparse matrices.

To read about different sparse formats, see http://www-users.cs.umn.edu/~saad/software/SPARSKIT/paper.ps

  • class theano.sparse.basic.CSM(format, kmap=None)[source]
  • Indexing to specify what part of the data parameter should be used to construct the sparse matrix.
  • theano.sparse.basic.add(x, y)[source]
  • Add two matrices, at least one of which is sparse.

This method will provide the right op according to the inputs.

Parameters:

  • x – A matrix variable.
  • y – A matrix variable.

Returns: x + y

Return type: A sparse matrix

Notes

At least one of x and y must be a sparse matrix.

The grad will be structured only when one of the variables is a dense matrix.
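For illustration, a minimal sketch of add with one sparse and one dense operand (names are arbitrary; the function is accessed through theano.sparse.basic as documented here):

  import numpy as np
  import scipy.sparse as sp
  import theano
  import theano.tensor as T
  import theano.sparse as sparse

  x = sparse.csc_matrix(name='x', dtype='float64')   # sparse operand
  y = T.dmatrix('y')                                 # dense operand

  z = sparse.basic.add(x, y)   # picks the right op for the input types
  f = theano.function([x, y], z)

  a = sp.csc_matrix(np.eye(3))
  b = np.ones((3, 3))
  print(f(a, b))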

  • theano.sparse.basic.as_sparse(x, name=None)[source]
  • Wrapper around SparseVariable constructor to construct a Variable with a sparse matrix with the same dtype and format.

Parameters: x – A sparse matrix.

Returns: SparseVariable version of x.

Return type: object

  • theano.sparse.basic.as_sparse_or_tensor_variable(x, name=None)[source]
  • Same as as_sparse_variable but if we can’t make a sparse variable, we try to make a tensor variable.

Parameters: x – A sparse matrix.

Returns: SparseVariable or TensorVariable version of x.

  • theano.sparse.basic.as_sparse_variable(x, name=None)[source]
  • Wrapper around SparseVariable constructor to construct a Variable with a sparse matrix with the same dtype and format.

Parameters: x – A sparse matrix.

Returns: SparseVariable version of x.

Return type: object

  • theano.sparse.basic.cast(variable, dtype)[source]
  • Cast sparse variable to the desired dtype.

Parameters:

  • variable – Sparse matrix.
  • dtype – The dtype wanted.

Returns: Same as x but having dtype as dtype.

Notes

The grad implemented is regular, i.e. not structured.
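A short sketch of cast on a sparse variable (names are arbitrary):

  import numpy as np
  import scipy.sparse as sp
  import theano
  import theano.sparse as sparse

  x = sparse.csc_matrix(name='x', dtype='float64')
  y = sparse.basic.cast(x, 'float32')   # same sparsity pattern, new dtype

  f = theano.function([x], y)
  out = f(sp.csc_matrix(np.eye(2)))
  print(out.dtype)   # float32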

  • theano.sparse.basic.clean(x)[source]
  • Remove explicit zeros from a sparse matrix, and re-sort indices.

CSR column indices are not necessarily sorted. Likewise for CSC row indices. Use clean when sorted indices are required (e.g. when passing data to other libraries) and to ensure there are no zeros in the data.

Parameters: x – A sparse matrix.

Returns: The same as x with indices sorted and zeros removed.

Return type: A sparse matrix

Notes

The grad implemented is regular, i.e. not structured.
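A minimal sketch of clean removing an explicit zero that scipy keeps stored (names and the toy matrix are illustrative):

  import numpy as np
  import scipy.sparse as sp
  import theano
  import theano.sparse as sparse

  x = sparse.csr_matrix(name='x', dtype='float64')
  y = sparse.basic.clean(x)
  f = theano.function([x], y)

  # A CSR matrix holding an explicit zero at position (0, 2).
  a = sp.csr_matrix((np.asarray([1., 0., 2.]),
                     np.asarray([0, 2, 1]),
                     np.asarray([0, 2, 3])), shape=(2, 3))
  print(a.nnz)      # 3: the explicit zero is stored by scipy
  print(f(a).nnz)   # 2: clean has removed it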

  • theano.sparse.basic.col_scale(x, s)[source]
  • Scale each column of a sparse matrix by the corresponding element of a dense vector.

Parameters:

  • x – A sparse matrix.
  • s – A dense vector with length equal to the number of columns of x.

Returns: A sparse matrix in the same format as x, in which each column has been multiplied by the corresponding element of s.

Notes

The grad implemented is structured.
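A small sketch of col_scale (names are arbitrary):

  import numpy as np
  import scipy.sparse as sp
  import theano
  import theano.tensor as T
  import theano.sparse as sparse

  x = sparse.csc_matrix(name='x', dtype='float64')
  s = T.dvector('s')                       # one scale factor per column

  y = sparse.basic.col_scale(x, s)
  f = theano.function([x, s], y)

  a = sp.csc_matrix(np.asarray([[1., 0.], [0., 2.]]))
  print(f(a, np.asarray([10., 100.])).toarray())
  # [[  10.    0.]
  #  [   0.  200.]]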

  • theano.sparse.basic.csm_data(csm)[source]
  • Return the data field of the sparse variable.
  • theano.sparse.basic.csm_indices(csm)[source]
  • Return the indices field of the sparse variable.
  • theano.sparse.basic.csm_indptr(csm)[source]
  • Return the indptr field of the sparse variable.
  • theano.sparse.basic.csm_shape(csm)[source]
  • Return the shape field of the sparse variable.
  • theano.sparse.basic.dot(x, y)[source]
  • Operation for efficiently calculating the dot product when one or all operands are sparse. Supported formats are CSC and CSR. The output of the operation is dense.

Parameters:

  • x – Sparse or dense matrix variable.
  • y – Sparse or dense matrix variable.

Returns: The dot product x.y in a dense format.

Notes

The grad implemented is regular, i.e. not structured.

At least one of x or y must be a sparse matrix.

When the operation has the form dot(csr_matrix, dense), the gradient of this operation can be performed inplace by UsmmCscDense. This leads to significant speed-ups.
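A minimal sketch of dot with a sparse and a dense operand (names are arbitrary):

  import numpy as np
  import scipy.sparse as sp
  import theano
  import theano.tensor as T
  import theano.sparse as sparse

  x = sparse.csr_matrix(name='x', dtype='float64')   # sparse operand
  w = T.dmatrix('w')                                 # dense operand

  z = sparse.basic.dot(x, w)     # dense result
  f = theano.function([x, w], z)

  a = sp.csr_matrix(np.eye(3))
  b = np.arange(9.).reshape(3, 3)
  print(f(a, b))                 # an ordinary ndarray equal to a.dot(b)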

  • theano.sparse.basic.hstack(blocks, format=None, dtype=None)[source]
  • Stack sparse matrices horizontally (column wise).

This wraps the hstack method from scipy.

Parameters:

  • blocks – List of sparse arrays of compatible shape.
  • format – String representing the output format. Default is csc.
  • dtype – Output dtype.

Returns: The concatenation of the sparse arrays column-wise.

Return type: array

Notes

The number of rows of the sparse matrices must agree.

The grad implemented is regular, i.e. not structured.
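A small sketch of hstack on two symbolic sparse matrices (names are arbitrary):

  import numpy as np
  import scipy.sparse as sp
  import theano
  import theano.sparse as sparse

  x = sparse.csc_matrix(name='x', dtype='float64')
  y = sparse.csc_matrix(name='y', dtype='float64')

  z = sparse.basic.hstack([x, y], format='csc', dtype='float64')
  f = theano.function([x, y], z)

  a = sp.csc_matrix(np.eye(2))
  b = sp.csc_matrix(np.ones((2, 1)))
  print(f(a, b).shape)   # (2, 3): the blocks placed side by side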

  • theano.sparse.basic.mul(x, y)[source]
  • Multiply two matrices elementwise, at least one of which is sparse.

This method will provide the right op according to the inputs.

Parameters:

  • x – A matrix variable.
  • y – A matrix variable.

Returns: x * y

Return type: A sparse matrix

Notes

At least one of x and y must be a sparse matrix.

The grad is regular, i.e. not structured.

  • theano.sparse.basic.row_scale(x, s)[source]
  • Scale each row of a sparse matrix by the corresponding element of a dense vector.

Parameters:

  • x – A sparse matrix.
  • s – A dense vector with length equal to the number of rows of x.

Returns: A sparse matrix in the same format as x, in which each row has been multiplied by the corresponding element of s.

Return type: A sparse matrix

Notes

The grad implemented is structured.

  • theano.sparse.basic.sp_ones_like(x)[source]
  • Construct a sparse matrix of ones with the same sparsity pattern.

Parameters: x – Sparse matrix to take the sparsity pattern from.

Returns: The same as x with its data changed to ones.

Return type: A sparse matrix

  • theano.sparse.basic.sp_sum(x, axis=None, sparse_grad=False)[source]
  • Calculate the sum of a sparse matrix along the specified axis.

It performs a reduction along the specified axis. When axis is None, it is applied along all axes.

Parameters:

  • x – Sparse matrix.
  • axis – Axis along which the sum is applied. Integer or None.
  • sparse_grad (bool) – True to have a structured grad.

Returns: The sum of x in a dense format.

Return type: object

Notes

The grad implementation is controlled with the sparse_grad parameter. True will provide a structured grad and False will provide a regular grad. For both choices, the grad returns a sparse matrix having the same format as x.

This op does not return a sparse matrix, but a dense tensor matrix.
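A minimal sketch of sp_sum over all elements and along one axis (names are arbitrary):

  import numpy as np
  import scipy.sparse as sp
  import theano
  import theano.sparse as sparse

  x = sparse.csr_matrix(name='x', dtype='float64')

  total = sparse.basic.sp_sum(x)             # sum over all elements
  per_row = sparse.basic.sp_sum(x, axis=1)   # dense vector of row sums
  f = theano.function([x], [total, per_row])

  a = sp.csr_matrix(np.asarray([[1., 0., 2.], [0., 3., 0.]]))
  print(f(a))   # total is 6.0; the row sums are [3., 3.]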

  • theano.sparse.basic.sp_zeros_like(x)[source]
  • Construct a sparse matrix of zeros.

Parameters: x – Sparse matrix to take the shape from.

Returns: The same as x with zero entries for all elements.

Return type: A sparse matrix

  • theano.sparse.basic.structured_dot(x, y)[source]
  • Structured Dot is like dot, except that only the gradient wrt the non-zero elements of the sparse matrix a is calculated and propagated.

The output is presumed to be a dense matrix, and is represented by a TensorType instance.

Parameters:

  • a – A sparse matrix.
  • b – A sparse or dense matrix.

Returns: The dot product of a and b.

Return type: A sparse matrix

Notes

The grad implemented is structured.
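A minimal sketch of structured_dot with a sparse first operand and a dense second operand (names are arbitrary):

  import numpy as np
  import scipy.sparse as sp
  import theano
  import theano.tensor as T
  import theano.sparse as sparse

  x = sparse.csr_matrix(name='x', dtype='float64')   # sparse first operand
  w = T.dmatrix('w')                                 # dense second operand

  z = sparse.basic.structured_dot(x, w)
  f = theano.function([x, w], z)

  a = sp.csr_matrix(np.eye(3))
  b = np.arange(9.).reshape(3, 3)
  print(f(a, b))
  # Only the gradient wrt entries that are non-zero in `a` would be propagated.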

  • theano.sparse.basic.sub(x, y)[source]
  • Subtract two matrices, at least one of which is sparse.

This method will provide the right op according to the inputs.

Parameters:

  • x – A matrix variable.
  • y – A matrix variable.

Returns: x - y

Return type: A sparse matrix

Notes

At least one of x and y must be a sparse matrix.

The grad will be structured only when one of the variables is a dense matrix.

  • theano.sparse.basic.true_dot(x, y, grad_preserves_dense=True)[source]
  • Operation for efficiently calculating the dot product when one or all operands are sparse. Supported formats are CSC and CSR. The output of the operation is sparse.

Parameters:

  • x – Sparse matrix.
  • y – Sparse matrix or 2d tensor variable.
  • grad_preserves_dense (bool) – If True (default), makes the grad of dense inputs dense. Otherwise the grad is always sparse.

Returns: The dot product x.y in a sparse format.

Notes

The grad implemented is regular, i.e. not structured.
  • theano.sparse.basic.vstack(blocks, format=None, dtype=None)[source]
  • Stack sparse matrices vertically (row wise).

This wraps the vstack method from scipy.

Parameters:

  • blocks – List of sparse arrays of compatible shape.
  • format – String representing the output format. Default is csc.
  • dtype – Output dtype.

Returns: The concatenation of the sparse arrays row-wise.

Return type: array

Notes

The number of columns of the sparse matrices must agree.

The grad implemented is regular, i.e. not structured.

  • theano.sparse.tests.test_basic.sparse_random_inputs(format, shape, n=1, out_dtype=None, p=0.5, gap=None, explicit_zero=False, unsorted_indices=False)[source]
  • Return a tuple containing everything needed to perform a test.

If out_dtype is None, theano.config.floatX is used.

Parameters:

  • format – Sparse format.
  • shape – Shape of data.
  • n – Number of variables.
  • out_dtype – dtype of output.
  • p – Sparsity proportion.
  • gap – Tuple for the range of the random sample. When its length is 1, it is assumed to be the exclusive max; when gap = (a, b) it provides a sample from [a, b[. If None is used, it provides [0, 1] for float dtypes and [0, 50[ for integer dtypes.
  • explicit_zero – When True, we add explicit zeros in the returned sparse matrix.
  • unsorted_indices – When True, we make sure there are unsorted indices in the returned sparse matrix.

Returns: (variable, data) where both variable and data are lists.

Note: explicit_zero and unsorted_indices were added in Theano 0.6rc4.
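A short sketch of calling this test helper (it assumes the module path is theano.sparse.tests.test_basic, as given above; the arguments are illustrative):

  from theano.sparse.tests.test_basic import sparse_random_inputs

  # Two csr variables of shape (4, 5) with roughly 30% non-zero entries.
  variables, data = sparse_random_inputs('csr', (4, 5), n=2,
                                         out_dtype='float64', p=0.3)
  print(len(variables), len(data))      # two symbolic variables, two scipy matrices
  print(data[0].format, data[0].shape)  # 'csr', (4, 5)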