Using the GPU

For an introductory discussion of Graphics Processing Units (GPUs) and their use for intensive parallel computation purposes, see GPGPU.

One of Theano’s design goals is to specify computations at an abstract level, so that the internal function compiler has a lot of flexibility about how to carry out those computations. One of the ways we take advantage of this flexibility is in carrying out calculations on a graphics card.

Using the GPU in Theano is as simple as setting the device configuration flag to device=cuda. You can optionally target a specific GPU by appending its number, e.g. device=cuda2. It is also encouraged to set the floating-point precision to float32 when working on the GPU, as that is usually much faster. For example: THEANO_FLAGS='device=cuda,floatX=float32'. You can also set these options in the .theanorc file’s [global] section:

    [global]
    device = cuda
    floatX = float32

Note

  • If your computer has multiple GPUs and you use device=cuda, the driver selects which one to use (usually cuda0).
  • You can use the program nvidia-smi to change this policy.
  • By default, when device indicates a preference for GPU computations, Theano will fall back to the CPU if there is a problem with the GPU. You can use the flag force_device=True to instead raise an error when Theano cannot use the GPU, as in the example below.
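
For instance, to fail fast rather than silently fall back to the CPU, the flags can be combined on the command line (the script name here is just a placeholder):

    $ THEANO_FLAGS='device=cuda,floatX=float32,force_device=True' python my_script.py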

GpuArray Backend

If you have not done so already, you will need to install libgpuarray as well as at least one computing toolkit (CUDA or OpenCL). Detailed instructions to accomplish that are provided at libgpuarray.

To install Nvidia’s GPU-programming toolchain (CUDA) and configure Theano to use it, see the installation instructions for Linux, MacOS and Windows.

While all types of devices are supported when using OpenCL, for the remainder of this section whatever compute device you are using will be referred to as the GPU.

Note

The GpuArray backend uses config.gpuarray.preallocate for GPU memory allocation.
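
For example, a portion of the device’s memory can be reserved up front (a hedged illustration: to my understanding, a value between 0 and 1 is interpreted as a fraction of total GPU memory; the script name is a placeholder):

    $ THEANO_FLAGS='device=cuda,gpuarray.preallocate=0.8' python my_script.py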

Warning

The backend was designed to support OpenCL, however current support is incomplete. A lot of very useful ops still do not support it because they were ported from the old backend with minimal change.

Testing Theano with GPU

To see if your GPU is being used, cut and paste the following program into a file and run it.

Use the Theano flag device=cuda to require the use of the GPU. Use the flag device=cuda{0,1,…} to specify which GPU to use.

    from theano import function, config, shared, tensor
    import numpy
    import time

    vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
    iters = 1000

    rng = numpy.random.RandomState(22)
    x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
    f = function([], tensor.exp(x))
    print(f.maker.fgraph.toposort())
    t0 = time.time()
    for i in range(iters):
        r = f()
    t1 = time.time()
    print("Looping %d times took %f seconds" % (iters, t1 - t0))
    print("Result is %s" % (r,))
    if numpy.any([isinstance(x.op, tensor.Elemwise) and
                  ('Gpu' not in type(x.op).__name__)
                  for x in f.maker.fgraph.toposort()]):
        print('Used the cpu')
    else:
        print('Used the gpu')

The program just computes exp() of a bunch of random numbers. Note that we use the theano.shared() function to make sure that the input x is stored on the GPU.

    $ THEANO_FLAGS=device=cpu python gpu_tutorial1.py
    [Elemwise{exp,no_inplace}(<TensorType(float64, vector)>)]
    Looping 1000 times took 2.271284 seconds
    Result is [ 1.23178032 1.61879341 1.52278065 ..., 2.20771815 2.29967753
      1.62323285]
    Used the cpu

    $ THEANO_FLAGS=device=cuda0 python gpu_tutorial1.py
    Using cuDNN version 5105 on context None
    Mapped name None to device cuda0: GeForce GTX 750 Ti (0000:07:00.0)
    [GpuElemwise{exp,no_inplace}(<GpuArrayType<None>(float64, (False,))>), HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
    Looping 1000 times took 1.697514 seconds
    Result is [ 1.23178032 1.61879341 1.52278065 ..., 2.20771815 2.29967753
      1.62323285]
    Used the gpu

Returning a Handle to Device-Allocated Data

By default, functions that execute on the GPU still return a standard numpy ndarray. A transfer operation is inserted just before the results are returned to ensure a consistent interface with CPU code. This allows changing the device some code runs on by only replacing the value of the device flag, without touching the code.

If you don’t mind a loss of flexibility, you can ask Theano to return the GPU object directly. The following code is modified to do just that.

    from theano import function, config, shared, tensor
    import numpy
    import time

    vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
    iters = 1000

    rng = numpy.random.RandomState(22)
    x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
    f = function([], tensor.exp(x).transfer(None))
    print(f.maker.fgraph.toposort())
    t0 = time.time()
    for i in range(iters):
        r = f()
    t1 = time.time()
    print("Looping %d times took %f seconds" % (iters, t1 - t0))
    print("Result is %s" % (numpy.asarray(r),))
    if numpy.any([isinstance(x.op, tensor.Elemwise) and
                  ('Gpu' not in type(x.op).__name__)
                  for x in f.maker.fgraph.toposort()]):
        print('Used the cpu')
    else:
        print('Used the gpu')

Here tensor.exp(x).transfer(None) means “copy exp(x) to the GPU”, with None being the default GPU context when one is not explicitly given. For information on how to set GPU contexts, see Using multiple GPUs.

The output is

    $ THEANO_FLAGS=device=cuda0 python gpu_tutorial2.py
    Using cuDNN version 5105 on context None
    Mapped name None to device cuda0: GeForce GTX 750 Ti (0000:07:00.0)
    [GpuElemwise{exp,no_inplace}(<GpuArrayType<None>(float64, (False,))>)]
    Looping 1000 times took 0.040277 seconds
    Result is [ 1.23178032 1.61879341 1.52278065 ..., 2.20771815 2.29967753
      1.62323285]
    Used the gpu

While the time per call appears to be much lower than in the two previous invocations (and it should indeed be lower, since we avoid a transfer), the massive speedup we obtained is in part due to the asynchronous nature of execution on GPUs: the work is not completed yet, merely ‘launched’. We’ll talk about that later.

The object returned is a GpuArray from pygpu. It mostly acts as a numpy ndarray, with some exceptions due to its data being on the GPU. You can copy it to the host and convert it to a regular ndarray by using usual numpy casting such as numpy.asarray().
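
For example, building on the listing above (reusing its f and the numpy import), the returned GpuArray can be brought back to the host explicitly:

    r = f()                    # r is a pygpu GpuArray living on the GPU
    host_r = numpy.asarray(r)  # explicit copy back to a host-side numpy ndarray
    print(type(r), type(host_r))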

For even more speed, you can play with the borrow flag. See Borrowing when Constructing Function Objects.
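
As a rough sketch of what that can look like (reusing x from the listing above; see the linked section for the exact semantics and caveats of borrow):

    from theano import Out, function, tensor

    # borrow=True lets Theano hand back its internal output buffer instead of
    # copying it on each call; the returned value may be overwritten by the
    # next call to f, so use it before calling f again.
    f = function([], Out(tensor.exp(x).transfer(None), borrow=True))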

What Can be Accelerated on the GPU

The performance characteristics will of course vary from device to device, and also as we refine our implementation:

  • In general, matrix multiplication, convolution, and large element-wise operations can be accelerated a lot (5-50x) when arguments are large enough to keep 30 processors busy.
  • Indexing, dimension-shuffling and constant-time reshaping will be equally fast on GPU as on CPU.
  • Summation over rows/columns of tensors can be a little slower on the GPU than on the CPU.
  • Copying of large quantities of data to and from a device is relatively slow, and often cancels most of the advantage of one or two accelerated functions on that data. Getting GPU performance largely hinges on making data transfer to the device pay off.

The backend supports all regular Theano data types (float32, float64, int, …), however GPU support varies and some units can’t deal with double (float64) or small (less than 32 bits, like int16) data types. You will get an error at compile time or runtime if this is the case.

By default all inputs will get transferred to the GPU. You can prevent an input from getting transferred by setting its tag.target attribute to ‘cpu’.
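
For example (a minimal sketch; the variable name is arbitrary):

    import theano.tensor as T

    v = T.vector('v')
    v.tag.target = 'cpu'  # keep this input on the CPU even when device=cuda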

Complex support is untested and most likely completely broken.

Tips for Improving Performance on GPU

  • Consider adding floatX=float32 (or the type you are using) to your .theanorc file if you plan to do a lot of GPU work.
  • The GPU backend supports float64 variables, but they are still slower to compute than float32. The more float32, the better GPU performance you will get.
  • Prefer constructors like matrix, vector and scalar (which follow the type set in floatX) to dmatrix, dvector and dscalar. The latter enforce double precision (float64 on most machines), which slows down GPU computations on current hardware.
  • Minimize transfers to the GPU device by using shared variables to store frequently-accessed data (see shared()). When using the GPU, tensor shared variables are stored on the GPU by default, which eliminates transfer time for GPU ops using those variables.
  • If you aren’t happy with the performance you see, try running your script with the profile=True flag. This should print some timing information at program termination. Is time being used sensibly? If an op or Apply is taking more time than its share, and you know something about GPU programming, have a look at how it’s implemented in theano.gpuarray. Check the line similar to Spent Xs(X%) in cpu op, Xs(X%) in gpu op and Xs(X%) in transfer op. This can tell you if not enough of your graph is on the GPU or if there is too much memory transfer.
  • To investigate whether all the Ops in the computational graph are running on the GPU, you can debug or check your code by providing a value to the assert_no_cpu_op flag: warn for a warning, raise for raising an error, or pdb for putting a breakpoint in the computational graph if there is a CPU Op. Both flags are illustrated below.
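
For instance (the script name here is just a placeholder):

    $ THEANO_FLAGS='device=cuda,profile=True' python my_script.py
    $ THEANO_FLAGS='device=cuda,assert_no_cpu_op=raise' python my_script.py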

GPU Async Capabilities

By default, all operations on the GPU are run asynchronously. This means that they are only scheduled to run, and the function returns immediately. This is handled mostly transparently by the underlying libgpuarray.

A forced synchronization point is introduced when doing memory transfers between device and host.

It is possible to force synchronization for a particular GpuArray by calling its sync() method. This is useful to get accurate timings when doing benchmarks, as in the sketch below.
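
A minimal sketch, reusing f, iters and time from the earlier listing:

    t0 = time.time()
    for i in range(iters):
        r = f()
    r.sync()  # wait until the asynchronous GPU work behind r has finished
    t1 = time.time()
    print("Looping %d times took %f seconds" % (iters, t1 - t0))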

Changing the Value of Shared Variables

To change the value of a shared variable, e.g. to provide new data to process, use shared_variable.set_value(new_value). For a lot more detail about this, see Understanding Memory Aliasing for Speed and Correctness.
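
A minimal sketch, assuming w is the float32 shared vector (of length feats) from the exercise below:

    new_w = numpy.zeros(feats, dtype=theano.config.floatX)
    w.set_value(new_w)  # replaces the (GPU-resident) value in place; no recompilation needed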

Exercise

Consider again the logistic regression:

    import numpy
    import theano
    import theano.tensor as T
    rng = numpy.random

    N = 400
    feats = 784
    D = (rng.randn(N, feats).astype(theano.config.floatX),
         rng.randint(size=N, low=0, high=2).astype(theano.config.floatX))
    training_steps = 10000

    # Declare Theano symbolic variables
    x = T.matrix("x")
    y = T.vector("y")
    w = theano.shared(rng.randn(feats).astype(theano.config.floatX), name="w")
    b = theano.shared(numpy.asarray(0., dtype=theano.config.floatX), name="b")
    x.tag.test_value = D[0]
    y.tag.test_value = D[1]

    # Construct Theano expression graph
    p_1 = 1 / (1 + T.exp(-T.dot(x, w) - b))  # Probability of having a one
    prediction = p_1 > 0.5  # The prediction that is done: 0 or 1
    xent = -y * T.log(p_1) - (1 - y) * T.log(1 - p_1)  # Cross-entropy
    cost = xent.mean() + 0.01 * (w ** 2).sum()  # The cost to optimize
    gw, gb = T.grad(cost, [w, b])

    # Compile expressions to functions
    train = theano.function(
        inputs=[x, y],
        outputs=[prediction, xent],
        updates=[(w, w - 0.01 * gw), (b, b - 0.01 * gb)],
        name="train")
    predict = theano.function(inputs=[x], outputs=prediction,
                              name="predict")

    if any([x.op.__class__.__name__ in ['Gemv', 'CGemv', 'Gemm', 'CGemm'] for x in
            train.maker.fgraph.toposort()]):
        print('Used the cpu')
    elif any([x.op.__class__.__name__ in ['GpuGemm', 'GpuGemv'] for x in
              train.maker.fgraph.toposort()]):
        print('Used the gpu')
    else:
        print('ERROR, not able to tell if theano used the cpu or the gpu')
        print(train.maker.fgraph.toposort())

    for i in range(training_steps):
        pred, err = train(D[0], D[1])

    print("target values for D")
    print(D[1])

    print("prediction on D")
    print(predict(D[0]))

    print("floatX=", theano.config.floatX)
    print("device=", theano.config.device)

Modify and execute this example to run on GPU with floatX=float32 and time it using the command line time python file.py. (Of course, you may use some of your answer to the exercise in section Configuration Settings and Compiling Mode.)

Is there an increase in speed from CPU to GPU?

Where does it come from? (Use profile=True flag.)

What can be done to further increase the speed of the GPU version? Putyour ideas to test.

Solution


Software for Directly Programming a GPU

Leaving aside Theano, which is a meta-programmer, there are:

  • CUDA: GPU programming API by NVIDIA based on extension to C (CUDA C)

    • Vendor-specific
    • Numeric libraries (BLAS, RNG, FFT) are maturing.
  • OpenCL: multi-vendor version of CUDA

    • More general, standardized.
    • Fewer libraries, less widely adopted.
  • PyCUDA: Python bindings to the CUDA driver interface, allowing access to Nvidia’s CUDA parallel computation API from Python

    • Convenience:

      Makes it easy to do GPU meta-programming from within Python.

      Abstractions to compile low-level CUDA code from Python (pycuda.driver.SourceModule).

      GPU memory buffer (pycuda.gpuarray.GPUArray).

      Helpful documentation.

    • Completeness: binding to all of CUDA’s driver API.

    • Automatic error checking: all CUDA errors are automatically translated into Python exceptions.

    • Speed: PyCUDA’s base layer is written in C++.

    • Good memory management of GPU objects:

      Object cleanup is tied to the lifetime of the objects (RAII, ‘Resource Acquisition Is Initialization’), which makes it much easier to write correct, leak- and crash-free code.

      PyCUDA knows about dependencies (e.g. it won’t detach from a context before all memory allocated in it is also freed).

    (This is adapted from PyCUDA’s documentation and Andreas Kloeckner’s website on PyCUDA.)

  • PyOpenCL: PyCUDA for OpenCL

Learning to Program with PyCUDA

If you already enjoy a good proficiency with the C programming language, you may easily leverage your knowledge by learning, first, to program a GPU with the CUDA extension to C (CUDA C) and, second, to use PyCUDA to access the CUDA API with a Python wrapper.

The following resources will assist you in this learning process:

The following examples give a foretaste of programming a GPU with PyCUDA. Once you feel competent enough, you may try your hand at the corresponding exercises.

Example: PyCUDA

    # (from PyCUDA's documentation)
    import pycuda.autoinit
    import pycuda.driver as drv
    import numpy

    from pycuda.compiler import SourceModule
    mod = SourceModule("""
    __global__ void multiply_them(float *dest, float *a, float *b)
    {
      const int i = threadIdx.x;
      dest[i] = a[i] * b[i];
    }
    """)

    multiply_them = mod.get_function("multiply_them")

    a = numpy.random.randn(400).astype(numpy.float32)
    b = numpy.random.randn(400).astype(numpy.float32)

    dest = numpy.zeros_like(a)
    multiply_them(
        drv.Out(dest), drv.In(a), drv.In(b),
        block=(400, 1, 1), grid=(1, 1))

    assert numpy.allclose(dest, a * b)
    print(dest)

Exercise

Run the preceding example.

Modify and execute to work for a matrix of shape (20, 10).

Example: Theano + PyCUDA

    import numpy, theano
    import theano.misc.pycuda_init
    from pycuda.compiler import SourceModule
    import theano.sandbox.cuda as cuda


    class PyCUDADoubleOp(theano.Op):

        __props__ = ()

        def make_node(self, inp):
            inp = cuda.basic_ops.gpu_contiguous(
                cuda.basic_ops.as_cuda_ndarray_variable(inp))
            assert inp.dtype == "float32"
            return theano.Apply(self, [inp], [inp.type()])

        def make_thunk(self, node, storage_map, _, _2, impl):
            mod = SourceModule("""
        __global__ void my_fct(float * i0, float * o0, int size) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < size) {
                o0[i] = i0[i] * 2;
            }
        }""")
            pycuda_fct = mod.get_function("my_fct")
            inputs = [storage_map[v] for v in node.inputs]
            outputs = [storage_map[v] for v in node.outputs]

            def thunk():
                z = outputs[0]
                if z[0] is None or z[0].shape != inputs[0][0].shape:
                    z[0] = cuda.CudaNdarray.zeros(inputs[0][0].shape)
                grid = (int(numpy.ceil(inputs[0][0].size / 512.)), 1)
                pycuda_fct(inputs[0][0], z[0], numpy.intc(inputs[0][0].size),
                           block=(512, 1, 1), grid=grid)
            return thunk

Use this code to test it:

    >>> x = theano.tensor.fmatrix()
    >>> f = theano.function([x], PyCUDADoubleOp()(x))
    >>> xv = numpy.ones((4, 5), dtype="float32")
    >>> assert numpy.allclose(f(xv), xv * 2)
    >>> print(numpy.asarray(f(xv)))

Exercise

Run the preceding example.

Modify and execute to multiply two matrices: x * y.

Modify and execute to return two outputs: x + y and x - y.

(Notice that Theano’s current elemwise fusion optimization is only applicable to computations involving a single output. Hence, to gain efficiency over the basic solution that is asked for here, the two operations would have to be jointly optimized explicitly in the code.)

Modify and execute to support stride (i.e. to avoid constraining the input to be C-contiguous).

Note

  • See Other Implementations to learn how to handle random numbers on the GPU.
  • The mode FAST_COMPILE disables C code, so it also disables the GPU. You can use the Theano flag optimizer=fast_compile to speed up compilation and keep the GPU, as in the example below.
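
For instance (the script name here is just a placeholder):

    $ THEANO_FLAGS='device=cuda,optimizer=fast_compile' python my_script.py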