sparse.sandbox – Sparse Op Sandbox

API

Convolution-like operations with sparse matrix multiplication.

To read about different sparse formats, see http://www-users.cs.umn.edu/~saad/software/SPARSKIT/paper.ps.

@todo: Automatic methods for determining best sparse format?

  • class theano.sparse.sandbox.sp.ConvolutionIndices[source]
  • Build indices for a sparse CSC matrix that could implement A (convolve) B.
This generates a sparse matrix M which, when multiplied with an image, produces a stack of image patches. Convolution is then simply the dot product of (img x M) and the kernels.
  • static evaluate(inshp, kshp, strides=(1, 1), nkern=1, mode='valid', ws=True)[source]
  • Build a sparse matrix which can be used for performing either of the following operations:
      • convolution: in this case, the dot product of this matrix with the input images will generate a stack of image patches. Convolution is then a tensordot operation of the filters and the patch stack.
      • sparse local connections: in this case, the sparse matrix allows us to operate the weight matrix as if it were fully connected. The structured dot with the input image gives the output for the following layer.

Parameters:

  • ker_shape – shape of kernel to apply (smaller than image)
  • img_shape – shape of input images
  • mode – ‘valid’ generates output only when kernel and image overlap fully; ‘full’ is the convolution obtained by zero-padding the input
  • ws – must always be True
  • (dx, dy) – offset parameter. In the case of no weight sharing, gives the pixel offset between two receptive fields. With weight sharing, gives the offset between the top-left pixels of the generated patches

Return type:

tuple(indices, indptr, logical_shape, sp_type, out_img_shp)

Returns: the structure of a sparse matrix, and the logical dimensions of the image which will be the result of filtering.
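The following is a minimal sketch of calling evaluate directly. The image and kernel shapes are illustrative, and unpacking the result follows the return tuple documented above; treat that unpacking as an assumption rather than a guaranteed interface.

```python
# Hedged sketch: build the patch-extraction structure for a 28x28 image and a
# 5x5 kernel, then inspect the pieces. The unpacking below assumes the return
# tuple documented above: (indices, indptr, logical_shape, sp_type, out_img_shp).
from theano.sparse.sandbox.sp import ConvolutionIndices

inshp = (28, 28)   # input image shape (height, width)
kshp = (5, 5)      # kernel shape (height, width)

indices, indptr, logical_shape, sp_type, out_img_shp = \
    ConvolutionIndices.evaluate(inshp, kshp, (1, 1), nkern=1, mode='valid')

print(sp_type)         # sparse format of the patch-extraction matrix
print(logical_shape)   # logical shape of that sparse matrix
print(out_img_shp)     # shape of the image obtained after filtering
```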

  • theano.sparse.sandbox.sp.convolve(kerns, kshp, nkern, images, imgshp, step=(1, 1), bias=None, mode='valid', flatten=True)[source]
  • Convolution implementation by sparse matrix multiplication.

Note: For best speed, put the matrix which you expect to be smaller as the ‘kernel’ argument.

“images” is assumed to be a matrix of shape batch_size x img_size, where the second dimension represents each image in raster order

If flatten is “False”, the output feature map will have shape:

    batch_size x number of kernels x output_size

If flatten is “True”, the output feature map will have shape:

    batch_size x (number of kernels * output_size)

Note

IMPORTANT: note that this means that each feature map (the image generated by each kernel) is contiguous in memory. The memory layout will therefore be: [ <feature_map_0> <feature_map_1> ... <feature_map_n-1> ], where each <feature_map> is a “feature map” in raster order

kerns is a 2D tensor of shape nkern x N.prod(kshp)

Parameters:

  • kerns – 2D tensor containing kernels which are applied at every pixel
  • kshp – tuple containing actual dimensions of kernel (not symbolic)
  • nkern – number of kernels/filters to apply. nkern=1 will apply one common filter to all input pixels
  • images – tensor containing images on which to apply convolution
  • imgshp – tuple containing image dimensions
  • step – determines number of pixels between adjacent receptive fields (tuple containing dx, dy values)
  • mode – ‘full’ or ‘valid’; see CSM.evaluate function for details
  • sumdims – dimensions over which to sum for the tensordot operation. By default ((2,), (1,)) assumes kerns is an nkern x kernsize matrix and images is a batchsize x imgsize matrix containing flattened images in raster order
  • flatten – flatten the last 2 dimensions of the output. By default, instead of generating a batchsize x outsize x nkern tensor, will flatten to batchsize x outsize*nkern

Returns: out1, symbolic result; out2, logical shape of the output image (nkern, height, width)

TODO: test for 1D and think of how to do n-d convolutions
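Below is a small usage sketch of convolve. The shapes and random test data are illustrative only; the call follows the signature and shape conventions documented above.

```python
# Hedged sketch: convolve a batch of flattened images with a bank of kernels.
# Shapes follow the conventions above (images in raster order, kerns of shape
# nkern x prod(kshp)); the concrete sizes are illustrative.
import numpy
import theano
import theano.tensor as T
from theano.sparse.sandbox import sp

kshp = (5, 5)        # kernel height, width
imgshp = (28, 28)    # image height, width
nkern = 8
batch_size = 4

kerns = T.matrix('kerns')    # nkern x prod(kshp)
images = T.matrix('images')  # batch_size x prod(imgshp), raster order

output, out_shp = sp.convolve(kerns, kshp, nkern, images, imgshp,
                              mode='valid', flatten=True)
f = theano.function([kerns, images], output)

k_val = numpy.random.rand(nkern, numpy.prod(kshp)).astype(theano.config.floatX)
i_val = numpy.random.rand(batch_size, numpy.prod(imgshp)).astype(theano.config.floatX)

# With flatten=True the result is batch_size x (nkern * output_size).
print(f(k_val, i_val).shape)
print(out_shp)  # logical output shape (nkern, height, width)
```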

  • theano.sparse.sandbox.sp.max_pool(images, imgshp, maxpoolshp)[source]
  • Implements a max pooling layer

Takes as input a 2D tensor of shape batch_size x img_size and performs max pooling. Max pooling downsamples by taking the max value in a given area, here defined by maxpoolshp. Outputs a 2D tensor of shape batch_size x output_size.

Parameters:

  • images – 2D tensor containing images on which to apply max pooling. Assumed to be of shape batch_size x img_size
  • imgshp – tuple containing image dimensions
  • maxpoolshp – tuple containing shape of area to max pool over

Returns: out1, symbolic result (2D tensor); out2, logical shape of the output
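A corresponding sketch for max pooling, under the same flattened raster-order image convention; the pooling shape and sizes are illustrative.

```python
# Hedged sketch: 2x2 max pooling over a batch of flattened 28x28 images,
# using the signature shown above.
import numpy
import theano
import theano.tensor as T
from theano.sparse.sandbox import sp

imgshp = (28, 28)
maxpoolshp = (2, 2)
batch_size = 4

images = T.matrix('images')  # batch_size x prod(imgshp), raster order

pooled, out_shp = sp.max_pool(images, imgshp, maxpoolshp)
f = theano.function([images], pooled)

i_val = numpy.random.rand(batch_size, numpy.prod(imgshp)).astype(theano.config.floatX)
print(f(i_val).shape)  # batch_size x output_size
print(out_shp)         # logical shape of the pooled output
```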
  • class theano.sparse.sandbox.sp2.Binomial(format, dtype)[source]
  • Return a sparse matrix having random values from a binomial density with number of experiments n and probability of success p.

WARNING: This Op is NOT deterministic, as calling it twice with the same inputs will NOT give the same result. This is a violation of Theano’s contract for Ops.

Parameters:

  • n – Tensor scalar representing the number of experiments.
  • p – Tensor scalar representing the probability of success.
  • shape – Tensor vector for the output shape.

Returns: A sparse matrix of integers representing the number of successes.
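A minimal sketch of drawing binomial samples with this Op. Instantiating the Op with (format, dtype) follows the class signature above; applying the instance to (n, p, shape) is an assumption based on the parameter list.

```python
# Hedged sketch: a 5x5 CSR matrix of binomial counts. Applying the Op
# instance to (n, p, shape) is an assumption based on the parameter list above.
import numpy
import theano
import theano.tensor as T
from theano.sparse.sandbox.sp2 import Binomial

binomial = Binomial('csr', 'int64')

n = T.scalar('n', dtype='int64')          # number of experiments
p = T.scalar('p', dtype='float64')        # probability of success
shape = T.vector('shape', dtype='int64')  # output shape

out = binomial(n, p, shape)
f = theano.function([n, p, shape], out)

# Note: not deterministic; two calls with the same inputs give different draws.
print(f(10, 0.3, numpy.asarray([5, 5], dtype='int64')))
```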
  • class theano.sparse.sandbox.sp2.Multinomial[source]
  • Return a sparse matrix having random values from a multinomial density with number of experiments n and probability of success p.

WARNING: This Op is NOT deterministic, as calling it twice with the same inputs will NOT give the same result. This is a violation of Theano’s contract for Ops.

Parameters:

  • n – Tensor type vector or scalar representing the number of experiments for each row. If n is a scalar, it will be used for each row.
  • p – Sparse matrix of probabilities where each row is a probability vector representing the probability of success. N.B. each row must sum to one.

Returns: A sparse matrix of random integers from a multinomial density for each row.

Note: It will work only if p has the csr format.
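A sketch of drawing one multinomial per row. Applying the Op instance to (n, p) is an assumption based on the parameter list; p is supplied in csr format, as the note above requires, with each row summing to one.

```python
# Hedged sketch: multinomial draws with one probability vector per row of p.
# Applying the Op instance to (n, p) is an assumption based on the parameter
# list above.
import numpy
import scipy.sparse
import theano
import theano.sparse
import theano.tensor as T
from theano.sparse.sandbox.sp2 import Multinomial

multinomial = Multinomial()

n = T.scalar('n', dtype='int64')                    # experiments per row
p = theano.sparse.csr_matrix('p', dtype='float64')  # rows are probability vectors

out = multinomial(n, p)
f = theano.function([n, p], out)

# Two rows, each summing to one.
p_val = scipy.sparse.csr_matrix(numpy.asarray([[0.5, 0.5, 0.0],
                                               [0.1, 0.2, 0.7]]))
print(f(20, p_val))  # sparse matrix of counts, one multinomial draw per row
```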
  • class theano.sparse.sandbox.sp2.Poisson[source]
  • Return a sparse matrix having random values from a Poisson density with mean taken from the input.

WARNING: This Op is NOT deterministic, as calling it twice with the same inputs will NOT give the same result. This is a violation of Theano’s contract for Ops.

Parameters: x – Sparse matrix.

Returns: A sparse matrix of random integers drawn element-wise from a Poisson density with mean given by x.
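A minimal sketch of element-wise Poisson sampling from a sparse matrix of means. Applying the Op instance directly to x is an assumption based on the parameter list above.

```python
# Hedged sketch: element-wise Poisson draws whose means come from a sparse
# input matrix. Applying the Op instance directly to x is an assumption.
import numpy
import scipy.sparse
import theano
import theano.sparse
from theano.sparse.sandbox.sp2 import Poisson

poisson = Poisson()

x = theano.sparse.csr_matrix('x', dtype='float64')  # matrix of means
out = poisson(x)
f = theano.function([x], out)

x_val = scipy.sparse.csr_matrix(numpy.asarray([[0.0, 2.0, 0.0],
                                               [4.0, 0.0, 1.5]]))
print(f(x_val))  # sparse matrix of Poisson draws with means taken from x
```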