Frequently Asked Questions

How to update a subset of weights?

If you want to update only a subset of a weight matrix (such as some rows or some columns) that is used in the forward propagation of each iteration, then the cost function should be defined so that it depends only on the subset of weights used in that iteration.

For example, suppose you want to learn a lookup table, e.g. one used for word embeddings, where each row is a vector of weights representing the embedding that the model has learned for a word. In each iteration, the only rows that should get updated are those containing the embeddings used during the forward propagation. Here is how the Theano function should be written:

Defining a shared variable for the lookup table

    lookup_table = theano.shared(matrix_ndarray)

Getting a subset of the table (some rows or some columns) by passing an integer vector of indices corresponding to those rows or columns.

    subset = lookup_table[vector_of_indices]

From now on, use only ‘subset’. Do not call lookup_table[vector_of_indices] again, as this creates new variables and causes problems with grad.

Defining cost which depends only on subset and not the entire lookup_table

    cost = something that depends on subset
    g = theano.grad(cost, subset)
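
As a small illustration of the warning above (a hedged sketch, not part of the recipe): indexing the table a second time creates a new symbolic variable that does not appear in the graph of cost, so Theano cannot differentiate the cost with respect to it.

    # Pitfall sketch: 'subset_again' is a different symbolic variable
    # from 'subset', even though it indexes the same rows.
    subset_again = lookup_table[vector_of_indices]
    # theano.grad(cost, subset_again)  # typically raises a
    #                                  # DisconnectedInputError, because
    #                                  # subset_again is disconnected
    #                                  # from cost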

There are two ways to update the parameters: either use inc_subtensor or set_subtensor. It is recommended to use inc_subtensor. Some Theano optimizations convert between the two functions, but not in all cases.

    updates = theano.tensor.inc_subtensor(subset, -g * lr)

OR

    updates = theano.tensor.set_subtensor(subset, subset - g * lr)

Currently we only cover this simple case here, not the case where you use inc_subtensor or set_subtensor with other types of indexing.

Defining the theano function

    f = theano.function(..., updates=[(lookup_table, updates)])
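
Putting the pieces together, here is a minimal, self-contained sketch of the whole recipe. The vocabulary size, embedding dimension, learning rate and the toy cost are illustrative assumptions, not part of the recipe itself:

    import numpy
    import theano
    import theano.tensor as T

    vocab_size, emb_dim = 100, 8   # assumed toy dimensions
    lr = 0.1                       # assumed learning rate

    # Shared lookup table: one embedding row per word.
    lookup_table = theano.shared(
        numpy.random.randn(vocab_size, emb_dim).astype(theano.config.floatX))

    # Integer vector of the word indices used in this iteration.
    vector_of_indices = T.ivector('indices')

    # Take the subset once and reuse only this variable below.
    subset = lookup_table[vector_of_indices]

    # Toy cost that depends only on the subset.
    cost = T.sum(subset ** 2)

    # Gradient w.r.t. the subset, not the whole table.
    g = theano.grad(cost, subset)

    # Gradient-descent update of the selected rows only.
    updates = T.inc_subtensor(subset, -lr * g)

    f = theano.function([vector_of_indices], cost,
                        updates=[(lookup_table, updates)])

    print(f(numpy.asarray([0, 3, 7], dtype='int32')))

Only the selected rows of lookup_table change after each call to f; all other rows are left untouched.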

Note that you can compute the gradient of the cost function w.r.t. the entire lookup_table, and the gradient will have nonzero rows only for the rows that were selected during forward propagation. If you use gradient descent to update the parameters, there are no issues except for unnecessary computation, e.g. you will update the lookup table parameters with many zero-gradient rows.

However, if you want to use a different optimization method like rmsprop or Hessian-Free optimization, there will be issues. In rmsprop, you keep an exponentially decaying squared-gradient history, and you divide the current gradient by its square root to rescale the update step component-wise. If the gradient of the lookup table row corresponding to a rare word is very often zero, the squared-gradient history of that row decays towards zero, so the next nonzero gradient is rescaled by the square root of a near-zero value. With Hessian-Free optimization, you will get many zero rows and columns in the curvature matrix, and even one of them would make it non-invertible. In general, it is better to compute the gradient only w.r.t. those lookup table rows or columns which are actually used during the forward propagation.
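
To make the rmsprop issue concrete, here is a small numerical sketch; the decay rate rho, the constant eps, the loop length and the gradient value are illustrative assumptions. A row whose full-table gradient is zero in most iterations has its squared-gradient history decay geometrically towards zero, so the next nonzero gradient is rescaled by the square root of a near-zero history:

    import numpy

    rho = 0.9        # assumed decay rate of the squared-gradient history
    eps = 1e-6       # assumed small constant for numerical stability
    ms = 1.0         # squared-gradient history for the row of a rare word

    # Iterations in which the row is not selected, so its row in the
    # full-table gradient is exactly zero: the history decays.
    for _ in range(100):
        ms = rho * ms + (1 - rho) * 0.0 ** 2

    print(ms)                          # about 0.9**100, essentially zero

    # The next time the rare word appears, the rescaled step is larger
    # than for a frequently used row whose history tracks g**2.
    g = 0.5
    ms = rho * ms + (1 - rho) * g ** 2
    print(g / numpy.sqrt(ms + eps))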