Derivatives in Theano

Computing Gradients

Now let’s use Theano for a slightly more sophisticated task: create a function which computes the derivative of some expression y with respect to its parameter x. To do this we will use the macro T.grad. For instance, we can compute the gradient of x^2 with respect to x. Note that: d(x^2)/dx = 2 \cdot x.

Here is the code to compute this gradient:

>>> import numpy
>>> import theano
>>> import theano.tensor as T
>>> from theano import pp
>>> x = T.dscalar('x')
>>> y = x ** 2
>>> gy = T.grad(y, x)
>>> pp(gy)  # print out the gradient prior to optimization
'((fill((x ** TensorConstant{2}), TensorConstant{1.0}) * TensorConstant{2}) * (x ** (TensorConstant{2} - TensorConstant{1})))'
>>> f = theano.function([x], gy)
>>> f(4)
array(8.0)
>>> numpy.allclose(f(94.2), 188.4)
True

In this example, we can see from pp(gy) that we are computing the correct symbolic gradient. fill((x ** 2), 1.0) means to make a matrix of the same shape as x ** 2 and fill it with 1.0.

Note

The optimizer simplifies the symbolic gradient expression. You can see this by digging inside the internal properties of the compiled function.

>>> pp(f.maker.fgraph.outputs[0])
'(2.0 * x)'

After optimization there is only one Apply node left in the graph, which doubles the input.

We can also compute the gradient of complex expressions such as the logistic function defined above. It turns out that the derivative of the logistic is: ds(x)/dx = s(x) \cdot (1 - s(x)).

[Figure: A plot of the gradient of the logistic function, with x on the x-axis and ds(x)/dx on the y-axis.]

>>> x = T.dmatrix('x')
>>> s = T.sum(1 / (1 + T.exp(-x)))
>>> gs = T.grad(s, x)
>>> dlogistic = theano.function([x], gs)
>>> dlogistic([[0, 1], [-1, -2]])
array([[ 0.25      ,  0.19661193],
       [ 0.19661193,  0.10499359]])
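As a quick cross-check of the formula ds(x)/dx = s(x)(1 - s(x)), here is a small pure-NumPy sketch (an illustrative aside, not part of the original tutorial code) that reproduces the values above without Theano:

```python
import numpy as np

def dlogistic_np(x):
    """Derivative of the logistic function: s(x) * (1 - s(x))."""
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

out = dlogistic_np(np.array([[0.0, 1.0], [-1.0, -2.0]]))
# Matches the Theano result:
# [[0.25, 0.19661193], [0.19661193, 0.10499359]]
```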

In general, for any scalar expression s, T.grad(s, w) provides the Theano expression for computing \frac{\partial s}{\partial w}. In this way Theano can be used for doing efficient symbolic differentiation (as the expression returned by T.grad will be optimized during compilation), even for functions with many inputs. (See automatic differentiation for a description of symbolic differentiation.)

Note

The second argument of T.grad can be a list, in which case the output is also a list. The order in both lists is important: element i of the output list is the gradient of the first argument of T.grad with respect to the i-th element of the list given as second argument. The first argument of T.grad has to be a scalar (a tensor of size 1). For more information on the semantics of the arguments of T.grad and details about the implementation, see this section of the library.

Additional information on the inner workings of differentiation may also be found in the more advanced tutorial Extending Theano.

Computing the Jacobian

In Theano’s parlance, the term Jacobian designates the tensor comprising the first partial derivatives of the output of a function with respect to its inputs. (This is a generalization of the so-called Jacobian matrix in Mathematics.) Theano implements the theano.gradient.jacobian() macro that does all that is needed to compute the Jacobian. The following text explains how to do it manually.

In order to manually compute the Jacobian of some function y with respect to some parameter x we need to use scan. What we do is to loop over the entries in y and compute the gradient of y[i] with respect to x.

Note

scan is a generic op in Theano that allows writing all kinds of recurrent equations in a symbolic manner. While creating symbolic loops (and optimizing them for performance) is a hard task, effort is ongoing to improve the performance of scan. We shall return to scan later in this tutorial.

>>> import theano
>>> import theano.tensor as T
>>> x = T.dvector('x')
>>> y = x ** 2
>>> J, updates = theano.scan(lambda i, y, x: T.grad(y[i], x), sequences=T.arange(y.shape[0]), non_sequences=[y, x])
>>> f = theano.function([x], J, updates=updates)
>>> f([4, 4])
array([[ 8.,  0.],
       [ 0.,  8.]])

What we do in this code is to generate a sequence of ints from 0 to y.shape[0] using T.arange. Then we loop through this sequence, and at each step, we compute the gradient of element y[i] with respect to x. scan automatically concatenates all these rows, generating a matrix which corresponds to the Jacobian.
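Since y[i] = x[i] ** 2 depends only on x[i], the resulting Jacobian is the diagonal matrix diag(2x). As an illustrative aside (pure NumPy, not part of the tutorial code), this can be verified with a simple finite-difference approximation:

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-6):
    """Finite-difference approximation of the Jacobian of f at x."""
    y0 = f(x)
    J = np.empty((y0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps            # perturb one input coordinate at a time
        J[:, j] = (f(xp) - y0) / eps
    return J

J = jacobian_fd(lambda x: x ** 2, np.array([4.0, 4.0]))
# Approximately [[8., 0.], [0., 8.]], matching the scan result above
```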

Note

There are some pitfalls to be aware of regarding T.grad. One of them is that you cannot re-write the above expression of the Jacobian as theano.scan(lambda y_i, x: T.grad(y_i, x), sequences=y, non_sequences=x), even though from the documentation of scan this seems possible. The reason is that y_i will not be a function of x anymore, while y[i] still is.

Computing the Hessian

In Theano, the term Hessian has the usual mathematical meaning: it is the matrix comprising the second order partial derivatives of a function with scalar output and vector input. Theano implements the theano.gradient.hessian() macro that does all that is needed to compute the Hessian. The following text explains how to do it manually.

You can compute the Hessian manually similarly to the Jacobian. The only difference is that now, instead of computing the Jacobian of some expression y, we compute the Jacobian of T.grad(cost, x), where cost is some scalar.

>>> x = T.dvector('x')
>>> y = x ** 2
>>> cost = y.sum()
>>> gy = T.grad(cost, x)
>>> H, updates = theano.scan(lambda i, gy, x: T.grad(gy[i], x), sequences=T.arange(gy.shape[0]), non_sequences=[gy, x])
>>> f = theano.function([x], H, updates=updates)
>>> f([4, 4])
array([[ 2.,  0.],
       [ 0.,  2.]])
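As a sanity check (a pure-NumPy aside, not part of the tutorial code): for cost = sum(x ** 2) the gradient is 2x, so differencing the gradient numerically should recover the constant Hessian 2 * I:

```python
import numpy as np

def hessian_fd(grad, x, eps=1e-6):
    """Finite-difference Hessian built from a gradient function."""
    g0 = grad(x)
    H = np.empty((x.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps            # perturb one coordinate of x
        H[:, j] = (grad(xp) - g0) / eps
    return H

H = hessian_fd(lambda x: 2.0 * x, np.array([4.0, 4.0]))  # analytic gradient of sum(x**2)
# Approximately [[2., 0.], [0., 2.]], matching the scan result above
```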

Jacobian times a Vector

Sometimes we can express the algorithm in terms of Jacobians times vectors, or vectors times Jacobians. Compared to evaluating the Jacobian and then doing the product, there are methods that compute the desired results while avoiding actual evaluation of the Jacobian. This can bring about significant performance gains. A description of one such algorithm can be found here:

  • Barak A. Pearlmutter, “Fast Exact Multiplication by the Hessian”, Neural Computation, 1994

While in principle we would want Theano to identify these patterns automatically for us, in practice, implementing such optimizations in a generic manner is extremely difficult. Therefore, we provide special functions dedicated to these tasks.

R-operator

The R operator is built to evaluate the product between a Jacobian and a vector, namely \frac{\partial f(x)}{\partial x} v. The formulation can be extended even for x being a matrix, or a tensor in general, in which case the Jacobian also becomes a tensor and the product becomes some kind of tensor product. Because in practice we end up needing to compute such expressions in terms of weight matrices, Theano supports this more generic form of the operation. In order to evaluate the R-operation of expression y, with respect to x, multiplying the Jacobian with v, you need to do something similar to this:

>>> W = T.dmatrix('W')
>>> V = T.dmatrix('V')
>>> x = T.dvector('x')
>>> y = T.dot(x, W)
>>> JV = T.Rop(y, W, V)
>>> f = theano.function([W, V, x], JV)
>>> f([[1, 1], [1, 1]], [[2, 2], [2, 2]], [0, 1])
array([ 2.,  2.])
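Since y = T.dot(x, W) is linear in W, the R-operation here reduces to the directional derivative of y as W moves in the direction V, which is simply x · V. A tiny NumPy check of the output above (an illustrative aside, not part of the tutorial code):

```python
import numpy as np

x = np.array([0.0, 1.0])
V = np.array([[2.0, 2.0], [2.0, 2.0]])
# d/dt [ x . (W + t*V) ] at t = 0  ==  x . V
Rop_result = x.dot(V)
# array([ 2.,  2.]), matching the Theano output above
```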

See the library documentation for the list of ops that implement Rop.

L-operator

In analogy to the R-operator, the L-operator computes a row vector times the Jacobian. The mathematical formula is v \frac{\partial f(x)}{\partial x}. The L-operator is also supported for generic tensors (not only for vectors). Similarly, it can be implemented as follows:

>>> W = T.dmatrix('W')
>>> v = T.dvector('v')
>>> x = T.dvector('x')
>>> y = T.dot(x, W)
>>> VJ = T.Lop(y, W, v)
>>> f = theano.function([v, x], VJ)
>>> f([2, 2], [0, 1])
array([[ 0.,  0.],
       [ 2.,  2.]])
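For the same linear map y = T.dot(x, W), each partial derivative dy_j/dW_ij equals x_i, so the L-operation v times the Jacobian works out to the outer product of x and v. A quick NumPy check of the output above (an illustrative aside, not part of the tutorial code):

```python
import numpy as np

x = np.array([0.0, 1.0])
v = np.array([2.0, 2.0])
# (v . dy/dW)_{ij} = v_j * x_i  ==  outer(x, v)
Lop_result = np.outer(x, v)
# array([[ 0.,  0.], [ 2.,  2.]]), matching the Theano output above
```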

Note

v, the point of evaluation, differs between the L-operator and the R-operator. For the L-operator, the point of evaluation needs to have the same shape as the output, whereas for the R-operator this point should have the same shape as the input parameter. Furthermore, the results of these two operations differ. The result of the L-operator is of the same shape as the input parameter, while the result of the R-operator has a shape similar to that of the output.

See the library documentation for the list of ops with R op support.

Hessian times a Vector

If you need to compute the Hessian times a vector, you can make use of the above-defined operators to do it more efficiently than actually computing the exact Hessian and then performing the product. Due to the symmetry of the Hessian matrix, you have two options that will give you the same result, though these options might exhibit different performance. Hence, we suggest profiling the methods before using either one of the two:

>>> x = T.dvector('x')
>>> v = T.dvector('v')
>>> y = T.sum(x ** 2)
>>> gy = T.grad(y, x)
>>> vH = T.grad(T.sum(gy * v), x)
>>> f = theano.function([x, v], vH)
>>> f([4, 4], [2, 2])
array([ 4.,  4.])

or, making use of the R-operator:

>>> x = T.dvector('x')
>>> v = T.dvector('v')
>>> y = T.sum(x ** 2)
>>> gy = T.grad(y, x)
>>> Hv = T.Rop(gy, x, v)
>>> f = theano.function([x, v], Hv)
>>> f([4, 4], [2, 2])
array([ 4.,  4.])
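Both variants agree with the analytic answer: for y = sum(x ** 2) the Hessian is 2 * I, so Hv = 2v. As a numerical aside (pure NumPy, not part of the tutorial code), the same product can be approximated by differencing the gradient g(x) = 2x along the direction v:

```python
import numpy as np

x = np.array([4.0, 4.0])
v = np.array([2.0, 2.0])
eps = 1e-6
g = lambda x: 2.0 * x                 # analytic gradient of sum(x**2)
Hv = (g(x + eps * v) - g(x)) / eps    # directional derivative of the gradient
# Approximately [ 4.,  4.], matching both Theano computations above
```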

Final Pointers

  • The grad function works symbolically: it receives and returns Theano variables.
  • grad can be compared to a macro since it can be applied repeatedly.
  • Only scalar costs can be directly handled by grad. Arrays are handled through repeated applications.
  • Built-in functions allow computing vector times Jacobian and vector times Hessian efficiently.
  • Work is in progress on the optimizations required to compute efficiently the full Jacobian and the Hessian matrix as well as the Jacobian times vector.