Debugging Theano: FAQ and Troubleshooting

There are many kinds of bugs that might come up in a computer program. This page is structured as a FAQ. It provides recipes to tackle common problems, and introduces some of the tools that we use to find problems in our own Theano code, and even (it happens) in Theano's internals, in Using DebugMode.

Isolating the Problem/Testing Theano Compiler

You can run your Theano function in DebugMode. This tests the Theano optimizations and helps to find where NaN, inf and other problems come from.

Interpreting Error Messages

Even in its default configuration, Theano tries to display useful error messages. Consider the following faulty code.

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.vector()
    y = T.vector()
    z = x + x
    z = z + y
    f = theano.function([x, y], z)
    f(np.ones((2,)), np.ones((3,)))

Running the code above we see:

    Traceback (most recent call last):
    ...
    ValueError: Input dimension mis-match. (input[0].shape[0] = 3, input[1].shape[0] = 2)
    Apply node that caused the error: Elemwise{add,no_inplace}(<TensorType(float64, vector)>, <TensorType(float64, vector)>, <TensorType(float64, vector)>)
    Inputs types: [TensorType(float64, vector), TensorType(float64, vector), TensorType(float64, vector)]
    Inputs shapes: [(3,), (2,), (2,)]
    Inputs strides: [(8,), (8,), (8,)]
    Inputs scalar values: ['not scalar', 'not scalar', 'not scalar']

    HINT: Re-running with most Theano optimization disabled could give you a back-traces when this node was created. This can be done with by setting the Theano flags 'optimizer=fast_compile'. If that does not work, Theano optimization can be disabled with 'optimizer=None'.
    HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint of this apply node.

Arguably the most useful information is approximately half-way through the error message, where the kind of error is displayed along with its cause (ValueError: Input dimension mis-match. (input[0].shape[0] = 3, input[1].shape[0] = 2)). Below it, some other information is given, such as the apply node that caused the error, as well as the input types, shapes, strides and scalar values.

The two hints can also be helpful when debugging. Using the Theano flag optimizer=fast_compile or optimizer=None can often tell you the faulty line, while exception_verbosity=high will display a debugprint of the apply node. Using these hints, the end of the error message becomes:

    Backtrace when the node is created:
      File "test0.py", line 8, in <module>
        z = z + y

    Debugprint of the apply node:
    Elemwise{add,no_inplace} [id A] <TensorType(float64, vector)> ''
     |Elemwise{add,no_inplace} [id B] <TensorType(float64, vector)> ''
     | |<TensorType(float64, vector)> [id C] <TensorType(float64, vector)>
     | |<TensorType(float64, vector)> [id C] <TensorType(float64, vector)>
     |<TensorType(float64, vector)> [id D] <TensorType(float64, vector)>

We can here see that the error can be traced back to the line z = z + y. For this example, using optimizer=fast_compile worked. If it did not, you could set optimizer=None or use test values.
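A quick way to confirm which shapes collide is to replay the computation with plain NumPy arrays, since Theano's elementwise ops follow the same broadcasting rules. A sketch, independent of Theano:

```python
import numpy as np

x = np.ones((2,))
y = np.ones((3,))

z = x + x  # fine: shapes match
try:
    z = z + y  # the same (2,) vs (3,) mismatch Theano reported
except ValueError as e:
    print("NumPy raises:", e)
```

Here the regular Python traceback points at the exact line, which is the same information Theano can only recover via the optimizer=fast_compile backtrace.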

Using Test Values

As of v0.4.0, Theano has a new mechanism by which graphs are executed on-the-fly, before a theano.function is ever compiled. Since optimizations haven't been applied at this stage, it is easier for the user to locate the source of some bug. This functionality is enabled through the config flag theano.config.compute_test_value. Its use is best shown through the following example. Here, we use exception_verbosity=high and optimizer=fast_compile, which would not tell you the line at fault. optimizer=None would, and it could therefore be used instead of test values.

  1. import numpy
  2. import theano
  3. import theano.tensor as T
  4.  
  5. # compute_test_value is 'off' by default, meaning this feature is inactive
  6. theano.config.compute_test_value = 'off' # Use 'warn' to activate this feature
  7.  
  8. # configure shared variables
  9. W1val = numpy.random.rand(2, 10, 10).astype(theano.config.floatX)
  10. W1 = theano.shared(W1val, 'W1')
  11. W2val = numpy.random.rand(15, 20).astype(theano.config.floatX)
  12. W2 = theano.shared(W2val, 'W2')
  13.  
  14. # input which will be of shape (5,10)
  15. x = T.matrix('x')
  16. # provide Theano with a default test-value
  17. #x.tag.test_value = numpy.random.rand(5, 10)
  18.  
  19. # transform the shared variable in some way. Theano does not
  20. # know off hand that the matrix func_of_W1 has shape (20, 10)
  21. func_of_W1 = W1.dimshuffle(2, 0, 1).flatten(2).T
  22.  
  23. # source of error: dot product of 5x10 with 20x10
  24. h1 = T.dot(x, func_of_W1)
  25.  
  26. # do more stuff
  27. h2 = T.dot(h1, W2.T)
  28.  
  29. # compile and call the actual function
  30. f = theano.function([x], h2)
  31. f(numpy.random.rand(5, 10))

Running the above code generates the following error message:

    Traceback (most recent call last):
      File "test1.py", line 31, in <module>
        f(numpy.random.rand(5, 10))
      File "PATH_TO_THEANO/theano/compile/function_module.py", line 605, in __call__
        self.fn.thunks[self.fn.position_of_error])
      File "PATH_TO_THEANO/theano/compile/function_module.py", line 595, in __call__
        outputs = self.fn()
    ValueError: Shape mismatch: x has 10 cols (and 5 rows) but y has 20 rows (and 10 cols)
    Apply node that caused the error: Dot22(x, DimShuffle{1,0}.0)
    Inputs types: [TensorType(float64, matrix), TensorType(float64, matrix)]
    Inputs shapes: [(5, 10), (20, 10)]
    Inputs strides: [(80, 8), (8, 160)]
    Inputs scalar values: ['not scalar', 'not scalar']

    Debugprint of the apply node:
    Dot22 [id A] <TensorType(float64, matrix)> ''
     |x [id B] <TensorType(float64, matrix)>
     |DimShuffle{1,0} [id C] <TensorType(float64, matrix)> ''
       |Flatten{2} [id D] <TensorType(float64, matrix)> ''
         |DimShuffle{2,0,1} [id E] <TensorType(float64, 3D)> ''
           |W1 [id F] <TensorType(float64, 3D)>

    HINT: Re-running with most Theano optimization disabled could give you a back-traces when this node was created. This can be done with by setting the Theano flags 'optimizer=fast_compile'. If that does not work, Theano optimization can be disabled with 'optimizer=None'.

If the above is not informative enough, by instrumenting the code ever so slightly, we can get Theano to reveal the exact source of the error.

    # enable on-the-fly graph computations
    theano.config.compute_test_value = 'warn'

    ...

    # input which will be of shape (5, 10)
    x = T.matrix('x')
    # provide Theano with a default test-value
    x.tag.test_value = numpy.random.rand(5, 10)

In the above, we are tagging the symbolic matrix x with a special test value. This allows Theano to evaluate symbolic expressions on-the-fly (by calling the perform method of each op), as they are being defined. Sources of error can thus be identified with much more precision and much earlier in the compilation pipeline. For example, running the above code yields the following error message, which properly identifies line 24 as the culprit.

    Traceback (most recent call last):
      File "test2.py", line 24, in <module>
        h1 = T.dot(x, func_of_W1)
      File "PATH_TO_THEANO/theano/tensor/basic.py", line 4734, in dot
        return _dot(a, b)
      File "PATH_TO_THEANO/theano/gof/op.py", line 545, in __call__
        required = thunk()
      File "PATH_TO_THEANO/theano/gof/op.py", line 752, in rval
        r = p(n, [x[0] for x in i], o)
      File "PATH_TO_THEANO/theano/tensor/basic.py", line 4554, in perform
        z[0] = numpy.asarray(numpy.dot(x, y))
    ValueError: matrices are not aligned

The compute_test_value mechanism works as follows:

  • Theano constants and shared variables are used as is. No need to instrument them.
  • A Theano variable (i.e. dmatrix, vector, etc.) should be given a special test value through the attribute tag.test_value.
  • Theano automatically instruments intermediate results. As such, any quantity derived from x will be given a tag.test_value automatically.

compute_test_value can take the following values:

  • off: Default behavior. This debugging mechanism is inactive.
  • raise: Compute test values on the fly. Any variable for which a test value is required, but not provided by the user, is treated as an error. An exception is raised accordingly.
  • warn: Idem, but a warning is issued instead of an Exception.
  • ignore: Silently ignore the computation of intermediate test values, if a variable is missing a test value.
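The mechanism can be mimicked in plain NumPy: evaluate each step eagerly on a concrete sample, and the shape error surfaces at the offending line rather than at call time. The sketch below replays the earlier example using NumPy equivalents of dimshuffle and flatten (the array names are borrowed from that example, and x_test stands in for x.tag.test_value):

```python
import numpy as np

W1val = np.random.rand(2, 10, 10)
x_test = np.random.rand(5, 10)  # stand-in for x.tag.test_value

# NumPy equivalent of W1.dimshuffle(2, 0, 1).flatten(2).T -> shape (20, 10)
func_of_W1 = W1val.transpose(2, 0, 1).reshape(10, -1).T

try:
    h1 = np.dot(x_test, func_of_W1)  # (5, 10) dot (20, 10): fails right here
except ValueError as e:
    print("caught at the faulty line:", e)
```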

Note

This feature is currently incompatible with Scan and also with ops which do not implement a perform method.

“How do I Print an Intermediate Value in a Function?”

Theano provides a ‘Print’ op to do this.

    import numpy
    import theano

    x = theano.tensor.dvector('x')

    x_printed = theano.printing.Print('this is a very important value')(x)

    f = theano.function([x], x * 5)
    f_with_print = theano.function([x], x_printed * 5)

    # this runs the graph without any printing
    assert numpy.all(f([1, 2, 3]) == [5, 10, 15])

    # this runs the graph with the message, and value printed
    assert numpy.all(f_with_print([1, 2, 3]) == [5, 10, 15])

    this is a very important value __str__ = [ 1.  2.  3.]

Since Theano runs your program in a topological order, you won't have precise control over the order in which multiple Print() ops are evaluated. For a more precise inspection of what's being computed where, when, and how, see the discussion "How do I Step through a Compiled Function?".

Warning

Using this Print Theano Op can prevent some Theano optimizations from being applied. This can also happen with stability optimizations. So if you use Print and get NaNs, try removing the Print Ops to see whether they are the cause.

“How do I Print a Graph?” (before or after compilation)

Theano provides two functions (theano.pp() and theano.printing.debugprint()) to print a graph to the terminal before or after compilation. These two functions print expression graphs in different ways: pp() is more compact and math-like, while debugprint() is more verbose. Theano also provides theano.printing.pydotprint() that creates a png image of the function.

You can read about them in printing – Graph Printing and Symbolic Print Statement.

“The Function I Compiled is Too Slow, what’s up?”

First, make sure you're running in FAST_RUN mode. Even though FAST_RUN is the default mode, insist by passing mode='FAST_RUN' to theano.function (or theano.make) or by setting config.mode to FAST_RUN.

Second, try the Theano ProfileMode. This will tell you whichApply nodes, and which ops are eating up your CPU cycles.

Tips:

  • Use the flag floatX=float32 to require type float32 instead of float64; use the Theano constructors matrix(), vector(), … instead of dmatrix(), dvector(), …, since the former use the default type set by floatX while the latter always force float64.
  • Check in the profile mode that there is no Dot op in the post-compilation graph while you are multiplying two matrices of the same type. Dot should be optimized to dot22 when the inputs are matrices of the same type. This can still happen when using floatX=float32 if one of the inputs of the graph is of type float64.
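The float64 "contamination" mentioned above follows ordinary dtype-promotion rules, which you can check directly in NumPy:

```python
import numpy as np

a = np.ones(3, dtype='float32')
b = np.ones(3, dtype='float64')

# A single float64 input upgrades the result to float64, which is
# what keeps the graph off the float32 code path.
print((a + b).dtype)  # float64
```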

“Why does my GPU function seem to be slow?”

When you compile a Theano function, you may not get the speedup you expect over the CPU performance of the same code. This is often because some Ops are running on the CPU instead of the GPU. If that is the case, you can use assert_no_cpu_op to check whether there is a CPU Op in your computational graph. assert_no_cpu_op can take one of the following three options:

  • warn: Raise a warning
  • pdb: Stop with a pdb in the computational graph during the compilation
  • raise: Raise an error, if there is a CPU Op in the computational graph.

It is possible to use this mode by providing the flag in THEANO_FLAGS, such as: THEANO_FLAGS="float32,device=gpu,assert_no_cpu_op='raise'" python test.py

But note that this check will not catch all CPU Ops; it might miss some.

“How do I Step through a Compiled Function?”

You can use MonitorMode to inspect the inputs and outputs of each node being executed when the function is called. The code snippet below shows how to print all inputs and outputs:

    from __future__ import print_function
    import theano

    def inspect_inputs(i, node, fn):
        print(i, node, "input(s) value(s):", [input[0] for input in fn.inputs],
              end='')

    def inspect_outputs(i, node, fn):
        print(" output(s) value(s):", [output[0] for output in fn.outputs])

    x = theano.tensor.dscalar('x')
    f = theano.function([x], [5 * x],
                        mode=theano.compile.MonitorMode(
                            pre_func=inspect_inputs,
                            post_func=inspect_outputs))
    f(3)

    0 Elemwise{mul,no_inplace}(TensorConstant{5.0}, x) input(s) value(s): [array(5.0), array(3.0)] output(s) value(s): [array(15.0)]

When using these inspect_inputs and inspect_outputs functions with MonitorMode, you should see [potentially a lot of] printed output. Every Apply node will be printed out, along with its position in the graph, the arguments to the functions perform or c_code and the output it computed. Admittedly, this may be a huge amount of output to read through if you are using big tensors… but you can choose to add logic that would, for instance, print something out only if a certain kind of op were used, at a certain program position, or only if a particular value showed up in one of the inputs or outputs. A typical example is to detect when NaN values are added into computations, which can be achieved as follows:

    import numpy

    import theano

    # This is the current suggested detect_nan implementation to
    # show you how it works. That way, you can modify it for your
    # needs. If you want exactly this method, you can use
    # ``theano.compile.monitormode.detect_nan`` that will always
    # contain the current suggested version.

    def detect_nan(i, node, fn):
        for output in fn.outputs:
            if (not isinstance(output[0], numpy.random.RandomState) and
                    numpy.isnan(output[0]).any()):
                print('*** NaN detected ***')
                theano.printing.debugprint(node)
                print('Inputs : %s' % [input[0] for input in fn.inputs])
                print('Outputs: %s' % [output[0] for output in fn.outputs])
                break

    x = theano.tensor.dscalar('x')
    f = theano.function([x], [theano.tensor.log(x) * x],
                        mode=theano.compile.MonitorMode(
                            post_func=detect_nan))
    f(0)  # log(0) * 0 = -inf * 0 = NaN

    *** NaN detected ***
    Elemwise{Composite{(log(i0) * i0)}} [id A] ''
     |x [id B]
    Inputs : [array(0.0)]
    Outputs: [array(nan)]
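The arithmetic behind this example can be verified directly in NumPy (warnings silenced so the expression simply yields NaN):

```python
import numpy as np

with np.errstate(divide='ignore', invalid='ignore'):
    v = np.log(0.0) * 0.0  # -inf * 0

print(np.isnan(v))  # True
```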

To help understand what is happening in your graph, you can disable the local_elemwise_fusion and all inplace optimizations. The first is a speed optimization that merges elemwise operations together. This makes it harder to know which particular elemwise causes the problem. The second optimization makes some ops' outputs overwrite their inputs. So, if an op creates a bad output, you will not be able to see the input that was overwritten in the post_func function. To disable those optimizations (with a Theano version after 0.6rc3), define the MonitorMode like this:

    mode = theano.compile.MonitorMode(post_func=detect_nan).excluding(
        'local_elemwise_fusion', 'inplace')
    f = theano.function([x], [theano.tensor.log(x) * x],
                        mode=mode)

Note

The Theano flags optimizer_including, optimizer_excluding and optimizer_requiring aren't used by the MonitorMode; they are used only by the default mode. You can't use the default mode with MonitorMode, as you need to define what you monitor.

To be sure all inputs of the node are available during the call to post_func, you must also disable the garbage collector. Otherwise, the execution of the node can garbage collect its inputs that aren't needed anymore by the Theano function. This can be done with the Theano flag:

    allow_gc=False
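As with other Theano flags, this can be set in the environment for a single run; for example (my_script.py is a placeholder for your own script):

```shell
THEANO_FLAGS="allow_gc=False" python my_script.py
```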

How to Use pdb

In the majority of cases, you won't be executing from the interactive shell but from a set of Python scripts. In such cases, the use of the Python debugger can come in handy, especially as your models become more complex. Intermediate results don't necessarily have a clear name and you can get exceptions which are hard to decipher, due to the "compiled" nature of the functions.

Consider this example script (“ex.py”):

    import theano
    import numpy
    import theano.tensor as T

    a = T.dmatrix('a')
    b = T.dmatrix('b')

    f = theano.function([a, b], [a * b])

    # matrices chosen so dimensions are unsuitable for multiplication
    mat1 = numpy.arange(12).reshape((3, 4))
    mat2 = numpy.arange(25).reshape((5, 5))

    f(mat1, mat2)

This is actually so simple that debugging could be done easily, but it's for illustrative purposes. As the matrices can't be multiplied element-wise (unsuitable shapes), we get the following exception:

    File "ex.py", line 14, in <module>
      f(mat1, mat2)
    File "/u/username/Theano/theano/compile/function_module.py", line 451, in __call__
    File "/u/username/Theano/theano/gof/link.py", line 271, in streamline_default_f
    File "/u/username/Theano/theano/gof/link.py", line 267, in streamline_default_f
    File "/u/username/Theano/theano/gof/cc.py", line 1049, in execute
    ValueError: ('Input dimension mis-match. (input[0].shape[0] = 3, input[1].shape[0] = 5)', Elemwise{mul,no_inplace}(a, b), Elemwise{mul,no_inplace}(a, b))

The call stack contains some useful information to trace back the source of the error. There's the script where the compiled function was called – but if you're using (improperly parameterized) prebuilt modules, the error might originate from ops in these modules, not this script. The last line tells us about the op that caused the exception. In this case it's a "mul" involving variables with names "a" and "b". But suppose we instead had an intermediate result to which we hadn't given a name.

After learning a few things about the graph structure in Theano, we can use the Python debugger to explore the graph, and then we can get runtime information about the error. Matrix dimensions, especially, are useful to pinpoint the source of the error. In the printout, there are also 2 of the 4 dimensions of the matrices involved, but for the sake of example say we'd need the other dimensions to pinpoint the error. First, we re-launch with the debugger module and run the program with "c":

    python -m pdb ex.py
    > /u/username/experiments/doctmp1/ex.py(1)<module>()
    -> import theano
    (Pdb) c

Then we get back the above error printout, but the interpreter breaks in that state. Useful commands here are

  • “up” and “down” (to move up and down the call stack),
  • “l” (to print code around the line in the current stack position),
  • “p variable_name” (to print the string representation of ‘variable_name’),
  • “p dir(object_name)”, using the Python dir() function to print the list of an object’s members

Here, for example, I do "up", and a simple "l" shows me there's a local variable "node". This is the "node" from the computation graph, so by following the "node.inputs", "node.owner" and "node.outputs" links I can explore around the graph.

That graph is purely symbolic (no data, just symbols to manipulate it abstractly). To get information about the actual parameters, you explore the "thunk" objects, which bind the storage for the inputs (and outputs) with the function itself (a "thunk" is a concept related to closures). Here, to get the current node's first input's shape, you'd therefore do "p thunk.inputs[0][0].shape", which prints out "(3, 4)".
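The thunk idea itself can be illustrated in plain Python, independent of Theano's actual classes. A hypothetical sketch: storage cells are one-element lists, and the thunk is a no-argument closure bound to them:

```python
import numpy as np

# Hypothetical sketch, not Theano's real internals: each variable gets
# a one-element list as its storage cell, and the thunk reads inputs
# and writes outputs through those cells.
x_storage = [np.ones((3, 4))]
out_storage = [None]

def thunk():
    out_storage[0] = x_storage[0] * 5

thunk()
print(out_storage[0].shape)  # (3, 4), as "p thunk.inputs[0][0].shape" would show
```

Because the storage cells outlive any one call, inspecting `x_storage[0]` in pdb gives you the actual input array, which is exactly what the `thunk.inputs[0][0]` expression above exploits.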

Dumping a Function to help debug

If you are reading this, there is a high chance that you emailed our mailing list and we asked you to read this section. This section explains how to dump all the parameters passed to theano.function(). This is useful to help us reproduce a problem during compilation, and it doesn't require you to make a self-contained example.

For this to work, we need to be able to import the code for all Ops in the graph. So if you create your own Op, we will need its code. Otherwise, we won't be able to unpickle it. We already have all the Ops from Theano and Pylearn2.

    # Replace this line:
    theano.function(...)
    # with
    theano.function_dump(filename, ...)
    # Where filename is a string to a file that we will write to.

Then send us filename.

class theano.tests.breakpoint.PdbBreakpoint(name)

This is an identity-like op with the side effect of enforcing a conditional breakpoint, inside a theano function, based on a symbolic scalar condition.

Parameters: name (String) – name of the conditional breakpoint, to be printed when the breakpoint is activated.

Note: WARNING. At least one of the outputs of the op must be used, otherwise the op will be removed from the Theano graph due to its outputs being unused.

Note: WARNING. Employing the op inside a theano graph can prevent Theano from applying certain optimizations to improve performance, reduce memory consumption and/or reduce numerical instability.

Detailed explanation: As of 2014-12-01 the PdbBreakpoint op is not known by any optimization. Setting a PdbBreakpoint op in the middle of a pattern that is usually optimized out will block the optimization.

Example:

    import theano
    import theano.tensor as T
    from theano.tests.breakpoint import PdbBreakpoint

    input = T.fvector()
    target = T.fvector()

    # Mean squared error between input and target
    mse = (input - target) ** 2

    # Conditional breakpoint to be activated if the total MSE is higher
    # than 100. The breakpoint will monitor the inputs, targets as well
    # as the individual error values
    breakpointOp = PdbBreakpoint("MSE too high")
    condition = T.gt(mse.sum(), 100)
    mse, monitored_input, monitored_target = breakpointOp(condition, mse,
                                                          input, target)

    # Compile the theano function
    fct = theano.function([input, target], mse)

    # Use the function
    print(fct([10, 0], [10, 5]))  # Will NOT activate the breakpoint
    print(fct([0, 0], [10, 5]))   # Will activate the breakpoint