Writing an Op to work on an ndarray in C

This section walks through a non-trivial example Op that does something pretty weird and unrealistic, and that is hard to express with existing Ops. (Technically, we could use Scan to implement the Op we’re about to describe, but we ignore that possibility for the sake of example.)
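To make the recurrence concrete before diving into the Op machinery, here is a plain NumPy sketch of what fibby computes; the function name fibby_reference is only for illustration and is not used elsewhere.

    import numpy

    def fibby_reference(x):
        # Reference implementation of the recurrence the Op below computes:
        # the output starts as a copy of the input, and from index 2 onward
        # each entry is y[i-1] * y[i-2] + x[i].
        y = x.copy()
        for i in range(2, len(x)):
            y[i] = y[i - 1] * y[i - 2] + x[i]
        return y

    # For example, fibby_reference(numpy.ones(5)) gives array([1., 1., 2., 3., 7.])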

The following code works, but important error-checking has been omitted for clarity. For example, when you write C code that assumes memory is contiguous, you should check the strides and alignment.
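As a rough sketch of what such a check could look like (it is not part of the example below, and the variable name contiguity_guard is made up for illustration), one could prepend a guard like this to the string returned by c_code; %(x)s and %(fail)s are the placeholders Theano substitutes into the C code, as described later in this section.

    # Hypothetical guard for c_code; PyArray_ISCONTIGUOUS and PyArray_ISALIGNED
    # are NumPy C API macros that test the assumptions made by the C loop below.
    contiguity_guard = """
    if (!PyArray_ISCONTIGUOUS(%(x)s) || !PyArray_ISALIGNED(%(x)s))
    {
        PyErr_SetString(PyExc_ValueError,
                        "Fibby: input must be C-contiguous and aligned");
        %(fail)s;
    }
    """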

    import theano
    from theano import tensor


    class Fibby(theano.Op):
        """
        An arbitrarily generalized Fibonacci sequence.
        """
        __props__ = ()

        def make_node(self, x):
            x_ = tensor.as_tensor_variable(x)
            assert x_.ndim == 1
            return theano.Apply(self,
                                inputs=[x_],
                                outputs=[x_.type()])
            # using x_.type() is dangerous, it copies x's broadcasting behaviour

        def perform(self, node, inputs, output_storage):
            x, = inputs
            y = output_storage[0][0] = x.copy()
            for i in range(2, len(x)):
                y[i] = y[i - 1] * y[i - 2] + x[i]

        def c_code(self, node, name, inames, onames, sub):
            x, = inames
            y, = onames
            fail = sub['fail']
            return """
    Py_XDECREF(%(y)s);
    %(y)s = (PyArrayObject*)PyArray_FromArray(
                %(x)s, 0, NPY_ARRAY_ENSURECOPY);
    if (!%(y)s)
        %(fail)s;
    {   // New scope needed to make compilation work
        dtype_%(y)s * y = (dtype_%(y)s*)PyArray_DATA(%(y)s);
        dtype_%(x)s * x = (dtype_%(x)s*)PyArray_DATA(%(x)s);
        for (int i = 2; i < PyArray_DIMS(%(x)s)[0]; ++i)
            y[i] = y[i-1]*y[i-2] + x[i];
    }
    """ % locals()

        def c_code_cache_version(self):
            return (1,)


    fibby = Fibby()

In the first two lines of the C function, we make y point to a new array with the correct size for the output. This is essentially simulating the line y = x.copy(). The variables %(x)s and %(y)s are set up by the TensorType to be PyArrayObject pointers. TensorType also sets up dtype_%(x)s to be a typedef to the C type of x.

    Py_XDECREF(%(y)s);
    %(y)s = (PyArrayObject*)PyArray_FromArray(
                %(x)s, 0, NPY_ARRAY_ENSURECOPY);

The first line decrements the reference count of the data that y originally pointed to (Py_XDECREF is safe to call when the pointer is NULL). The second line allocates the new data, copies the contents of x into it because of the NPY_ARRAY_ENSURECOPY flag, and makes y point to it; passing 0 for the dtype argument keeps x's dtype.

In C code for a Theano Op, numpy arrays are represented as PyArrayObject C structs. This is part of the numpy/scipy C API, documented at http://docs.scipy.org/doc/numpy/reference/c-api.types-and-structures.html
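As a point of orientation (this fragment is illustrative and not taken from the example, and the name accessor_fragment is made up), these are the accessor macros most commonly applied to such PyArrayObject pointers from inside c_code; %(x)s is the placeholder Theano replaces with the input's C variable name.

    # Illustrative fragment, written as the kind of string c_code would return.
    accessor_fragment = """
    int nd = PyArray_NDIM(%(x)s);                    /* number of dimensions  */
    npy_intp len0 = PyArray_DIMS(%(x)s)[0];          /* length of dimension 0 */
    npy_intp stride0 = PyArray_STRIDES(%(x)s)[0];    /* byte stride of dim 0  */
    dtype_%(x)s *xp = (dtype_%(x)s *)PyArray_DATA(%(x)s);  /* raw data pointer */
    """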

TODO: NEEDS MORE EXPLANATION.

Writing an Optimization

fibby of a vector of zeros is another vector of zeros of the same size. Theano does not attempt to infer this from the code provided via Fibby.perform or Fibby.c_code. However, we can write an optimization that makes use of this observation. This sort of local substitution of special cases is common, and there is a stage of optimization (specialization) devoted to such optimizations. The following optimization (fibby_of_zero) tests whether the input is guaranteed to be all zero, and if so it returns the input itself as a replacement for the old output.

TODO: talk about OPTIMIZATION STAGES

    import numpy
    from theano.tensor.opt import get_scalar_constant_value, NotScalarConstantError

    # Remove any fibby(zeros(...))
    @theano.tensor.opt.register_specialize
    @theano.gof.local_optimizer([fibby])
    def fibby_of_zero(node):
        if node.op == fibby:
            x = node.inputs[0]
            try:
                if numpy.all(0 == get_scalar_constant_value(x)):
                    return [x]
            except NotScalarConstantError:
                pass

The register_specialize decorator is what activates our optimization, and tells Theano to use it in the specialization stage. The local_optimizer decorator builds a class instance around our global function. The [fibby] argument is a hint that our optimizer works on nodes whose .op attribute equals fibby. The function here (fibby_of_zero) expects an Apply instance as an argument for the parameter node. It tests this using the function get_scalar_constant_value, which determines whether a Variable (x) is guaranteed to be a constant, and if so, which constant.
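A quick sketch of that behaviour (not part of the original example): on a constant, get_scalar_constant_value returns the value; on a purely symbolic input it raises NotScalarConstantError.

    import theano.tensor as T
    from theano.tensor.opt import get_scalar_constant_value, NotScalarConstantError

    get_scalar_constant_value(T.constant(0.0))   # a constant: returns 0.0

    try:
        get_scalar_constant_value(T.dvector())   # symbolic input: not a constant
    except NotScalarConstantError:
        pass                                     # raised as expected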

Test the optimization

Here is some code to test that the optimization is applied only when needed.

    import numpy
    import theano
    import theano.tensor as T
    from theano import function
    from theano import tensor

    # Test it does not apply when not needed.
    x = T.dvector()
    f = function([x], fibby(x))

    # We call the function to make sure it runs.
    # If you run in DebugMode, it will compare the C and Python outputs.
    f(numpy.random.rand(5))
    topo = f.maker.fgraph.toposort()
    assert len(topo) == 1
    assert isinstance(topo[0].op, Fibby)

    # Test that the optimization gets applied.
    f_zero = function([], fibby(T.zeros([5])))

    # If you run in DebugMode, it will compare the output before
    # and after the optimization.
    f_zero()

    # Check that the optimization removes the Fibby Op.
    # For security, the Theano memory interface ensures that the output
    # of the function is always memory not aliased to the input.
    # That is why there is a DeepCopyOp op.
    topo = f_zero.maker.fgraph.toposort()
    assert len(topo) == 1
    assert isinstance(topo[0].op, theano.compile.ops.DeepCopyOp)
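As an optional extra check (a sketch, not part of the original test), theano.printing.debugprint can print the optimized graph of each compiled function, which makes it easy to confirm by eye that f still contains a Fibby node while f_zero contains only a DeepCopyOp.

    # Print the optimized graphs; f should show Fibby, f_zero only DeepCopyOp.
    theano.printing.debugprint(f)
    theano.printing.debugprint(f_zero)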