Torch Core


Basic pytorch functions used in the fastai library

from PIL import Image

Arrays and show

subplots[source]

subplots(nrows=1, ncols=1, figsize=None, imsize=3, suptitle=None, sharex=False, sharey=False, squeeze=True, subplot_kw=None, gridspec_kw=None, **kwargs)

show_image[source]

show_image(im, ax=None, figsize=None, title=None, ctx=None, cmap=None, norm=None, aspect=None, interpolation=None, alpha=None, vmin=None, vmax=None, origin=None, extent=None, filternorm=True, filterrad=4.0, resample=None, url=None, data=None, **kwargs)

Show a PIL or PyTorch image on ax.

show_image can show PIL images…

im = Image.open(TEST_IMAGE_BW)
ax = show_image(im, cmap="Greys")

[image output]

…and color images with HWC dim order…

im2 = np.array(Image.open(TEST_IMAGE))
ax = show_image(im2, figsize=(2,2))

[image output]

…and color images with standard CHW dim order…

im3 = torch.as_tensor(im2).permute(2,0,1)
ax = show_image(im3, figsize=(2,2))

[image output]

show_titled_image[source]

show_titled_image(o, ax=None, figsize=None, title=None, ctx=None, cmap=None, norm=None, aspect=None, interpolation=None, alpha=None, vmin=None, vmax=None, origin=None, extent=None, filternorm=True, filterrad=4.0, resample=None, url=None, data=None, **kwargs)

Call show_image, destructuring o to (img,title)

show_titled_image((im3,'A puppy'), figsize=(2,2))

[image output]

show_images[source]

show_images(ims, nrows=1, ncols=None, titles=None, figsize=None, imsize=3, suptitle=None, sharex=False, sharey=False, squeeze=True, subplot_kw=None, gridspec_kw=None)

Show all images ims as subplots with rows using titles. suptitle provides a way to create a figure title for all images. If you use suptitle, constrained_layout is used unless you set constrained_layout to False.

show_images((im,im3),titles=('number','puppy'),suptitle='Number Puppy', imsize=3)

[image output]

ArrayImage, ArrayImageBW and ArrayMask are subclasses of ndarray that know how to show themselves.

class ArrayBase[source]

ArrayBase() :: ndarray

An ndarray that can modify casting behavior

class ArrayImageBase[source]

ArrayImageBase() :: ArrayBase

Base class for arrays representing images

class ArrayImage[source]

ArrayImage() :: ArrayImageBase

An array representing an image

class ArrayImageBW[source]

ArrayImageBW() :: ArrayImage

An array representing an image

class ArrayMask[source]

ArrayMask() :: ArrayImageBase

An array representing an image mask

im = Image.open(TEST_IMAGE)
im_t = cast(im, ArrayImage)
test_eq(type(im_t), ArrayImage)
ax = im_t.show(figsize=(2,2))

[image output]

test_fig_exists(ax)

Basics

Tensor.__array_eq__[source]

Tensor.__array_eq__(b)

tensor[source]

tensor(x, *rest, dtype=None, device=None, requires_grad=False, pin_memory=False)

Like torch.as_tensor, but handles lists too, and you can pass multiple vector elements directly.

test_eq(tensor(torch.tensor([1,2,3])), torch.tensor([1,2,3]))
test_eq(tensor(array([1,2,3])), torch.tensor([1,2,3]))
test_eq(tensor(1,2,3), torch.tensor([1,2,3]))
test_eq_type(tensor(1.0), torch.tensor(1.0))

[`set_seed`](/torch_core.html#set_seed) is useful for reproducibility between runs. Keep in mind that certain classes, such as DataLoaders, have internal random number generators that are not affected by this function, so it must be run before such objects are created in order to guarantee reproducibility.

set_seed[source]

set_seed(s, reproducible=False)

Set random seed for random, torch, and numpy (where available)

Here is an example of how [`set_seed`](/torch_core.html#set_seed) can be used to reset the state of random number generators.

set_seed(2*33)
a1 = np.random.random()
a2 = torch.rand(())
a3 = random.random()
set_seed(2*33)
b1 = np.random.random()
b2 = torch.rand(())
b3 = random.random()
print("a's: {0:3.3f} {1:3.3f} {2:3.3f}".format(a1,a2,a3))
print("b's: {0:3.3f} {1:3.3f} {2:3.3f}".format(b1,b2,b3))

a's: 0.154 0.498 0.071
b's: 0.154 0.498 0.071
test_eq(a1,b1)
test_eq(a2,b2)
test_eq(a3,b3)

[`get_random_states`](/torch_core.html#get_random_states) and [`set_random_states`](/torch_core.html#set_random_states) are useful for storing a state so you can go back to it later.

get_random_states[source]

get_random_states()

Gets states for random, torch, and numpy random number generators

set_random_states[source]

set_random_states(random_state, numpy_state, torch_state, torch_cuda_state, torch_deterministic, torch_benchmark)

Set states for random, torch, and numpy random number generators

Below, notice that the old values and the rewound values are the same because we were able to return to the previous state.

old_states = get_random_states()
olds = (random.random(),np.random.random(),torch.rand(()))
news = (random.random(),np.random.random(),torch.rand(()))
set_random_states(**old_states)
rewinds = (random.random(),np.random.random(),torch.rand(()))
print('olds: {0:3.3f} {1:3.3f} {2:3.3f}'.format(*olds))
print('news: {0:3.3f} {1:3.3f} {2:3.3f}'.format(*news))
print('rewinds: {0:3.3f} {1:3.3f} {2:3.3f}'.format(*rewinds))

olds: 0.435 0.134 0.023
news: 0.246 0.363 0.227
rewinds: 0.435 0.134 0.023
test_ne(olds,news)
test_eq(olds,rewinds)

[`no_random`](/torch_core.html#no_random) combines the state rewinding of [`get_random_states`](/torch_core.html#get_random_states) and [`set_random_states`](/torch_core.html#set_random_states) with [`set_seed`](/torch_core.html#set_seed), giving a context manager that lets us control randomness in a portion of our code.

Note: Similar to torch.random.fork_rng, but also with numpy and random

no_random[source]

no_random(seed=42, reproducible=True)

Stores and retrieves state of random number generators. Sets random seed for random, torch, and numpy.

Here are some examples of how we can use [`no_random`](/torch_core.html#no_random) to control the randomness within a block of code.

states = get_random_states()
olds = (random.random(),np.random.random(),torch.rand(()))
set_random_states(**states) # rewinding above random calls
with no_random():
    new1 = (random.random(),np.random.random(),torch.rand(()))
with no_random():
    new2 = (random.random(),np.random.random(),torch.rand(()))
with no_random(seed=100):
    seeded1 = (random.random(),np.random.random(),torch.rand(()))
with no_random(seed=100):
    seeded2 = (random.random(),np.random.random(),torch.rand(()))
rewinds = (random.random(),np.random.random(),torch.rand(()))
print('olds: {0:3.3f} {1:3.3f} {2:3.3f}'.format(*olds))
print('new1: {0:3.3f} {1:3.3f} {2:3.3f}'.format(*new1))
print('new2: {0:3.3f} {1:3.3f} {2:3.3f}'.format(*new2))
print('seeded1: {0:3.3f} {1:3.3f} {2:3.3f}'.format(*seeded1))
print('seeded2: {0:3.3f} {1:3.3f} {2:3.3f}'.format(*seeded2))
print('rewinds: {0:3.3f} {1:3.3f} {2:3.3f}'.format(*rewinds))

olds: 0.246 0.363 0.227
new1: 0.639 0.375 0.882
new2: 0.639 0.375 0.882
seeded1: 0.146 0.543 0.112
seeded2: 0.146 0.543 0.112
rewinds: 0.246 0.363 0.227

Notice that olds and rewinds are also equal to each other. From this we can see that nothing in the with blocks updated the state outside of the block. Inside the block, the state is reset for the given seed, so for the same seed you should get the same random number generator results.

Note: It is important to remember that classes like DataLoader have internal random number generators, and [`no_random`](/torch_core.html#no_random) will have no effect on those.

test_ne(olds,new1)
test_eq(new1,new2)
test_ne(new1,seeded1)
test_eq(seeded1,seeded2)
test_eq(olds,rewinds)

unsqueeze[source]

unsqueeze(x, dim=-1, n=1)

Same as torch.unsqueeze but can add n dims

t = tensor([1])
t2 = unsqueeze(t, n=2)
test_eq(t2,t[:,None,None])

unsqueeze_[source]

unsqueeze_(x, dim=-1, n=1)

Same as torch.unsqueeze_ but can add n dims

t = tensor([1])
unsqueeze_(t, n=2)
test_eq(t, tensor([1]).view(1,1,1))

apply[source]

apply(func, x, *args, **kwargs)

Apply func recursively to x, passing on args
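
apply preserves the structure of nested lists, tuples, and dicts while mapping the leaves; a minimal sketch, using the test helpers seen throughout this page:

x = [tensor([1,2]), (tensor([3]), {'a': tensor([4])})]
doubled = apply(lambda t: t*2, x)       # same nesting, each leaf tensor doubled
test_eq(doubled[0], tensor([2,4]))
test_eq(doubled[1][1]['a'], tensor([8]))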

maybe_gather[source]

maybe_gather(x, axis=0)

Gather copies of x on axis (if training is distributed)

to_detach[source]

to_detach(b, cpu=True, gather=True)

Recursively detach lists of tensors in b; put them on the CPU if cpu=True.

gather only applies during distributed training and the result tensor will be the one gathered across processes if gather=True (as a result, the batch size will be multiplied by the number of processes).
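
Outside of distributed training, gather is a no-op, so to_detach just detaches recursively (and moves to the CPU by default). A minimal sketch:

x = tensor([1.,2.]).requires_grad_()
b = (x, {'y': x*2})
d = to_detach(b)
assert not d[0].requires_grad           # gradient history is dropped...
assert not d[1]['y'].requires_grad      # ...recursively, through tuples and dicts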

to_half[source]

to_half(b)

Recursively map lists of tensors in b to FP16.

to_float[source]

to_float(b)

Recursively map lists of int tensors in b to float.
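
A small dtype sketch, assuming (as the docstrings suggest) that only floating-point tensors are converted while integer tensors pass through unchanged:

b = (tensor([1.0]), tensor([1]))
h = to_half(b)
test_eq(h[0].dtype, torch.float16)
test_eq(h[1].dtype, torch.int64)        # integer tensor left alone
test_eq(to_float(h)[0].dtype, torch.float32)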

default_device[source]

default_device(use_cuda=-1)

Return or set default device; use_cuda: None - CUDA if available; True - error if not available; False - CPU

if torch.cuda.is_available():
    _td = torch.device(torch.cuda.current_device())
    test_eq(default_device(None), _td)
    test_eq(default_device(True), _td)
else:
    test_eq(default_device(False), torch.device('cpu'))
default_device(None);

to_device[source]

to_device(b, device=None)

Recursively put b on device.

t = to_device((3,(tensor(3),tensor(2))))
t1,(t2,t3) = t

if torch.cuda.is_available():
    test_eq_type(t,(3,(tensor(3).cuda(),tensor(2).cuda())))
    test_eq(t2.type(), "torch.cuda.LongTensor")
    test_eq(t3.type(), "torch.cuda.LongTensor")

to_cpu[source]

to_cpu(b)

Recursively map lists of tensors in b to the cpu.

t3 = to_cpu(t3)
test_eq(t3.type(), "torch.LongTensor")
test_eq(t3, 2)

to_np[source]

to_np(x)

Convert a tensor to a numpy array.

t3 = to_np(t3)
test_eq(type(t3), np.ndarray)
test_eq(t3, 2)

to_concat[source]

to_concat(xs, dim=0)

Concatenate the elements in xs (recursively if they are tuples/lists of tensors)

test_eq(to_concat([tensor([1,2]), tensor([3,4])]), tensor([1,2,3,4]))
test_eq(to_concat([tensor([[1,2]]), tensor([[3,4]])], dim=1), tensor([[1,2,3,4]]))
test_eq_type(to_concat([(tensor([1,2]), tensor([3,4])), (tensor([3,4]), tensor([5,6]))]), (tensor([1,2,3,4]), tensor([3,4,5,6])))
test_eq_type(to_concat([[tensor([1,2]), tensor([3,4])], [tensor([3,4]), tensor([5,6])]]), [tensor([1,2,3,4]), tensor([3,4,5,6])])
test_eq_type(to_concat([(tensor([1,2]),), (tensor([3,4]),)]), (tensor([1,2,3,4]),))
test_eq(to_concat([tensor([[1,2]]), tensor([[3,4], [5,6]])], dim=1), [tensor([1]),tensor([3, 5]),tensor([4, 6])])

test_eq(type(to_concat([dict(foo=tensor([1,2]), bar=tensor(3,4))])), dict)

Tensor subtypes

Tensor.set_meta[source]

Tensor.set_meta(x, as_copy=False)

Set all metadata in __dict__

Tensor.as_subclass[source]

Tensor.as_subclass(typ)

Cast to typ and include __dict__ and meta

Tensor.set_meta and Tensor.as_subclass work together to maintain __dict__ after casting.

class _T(Tensor): pass
t = tensor(1.).requires_grad_()
t.img_size = 1
t2 = t.as_subclass(_T)
test_eq(t.img_size, t2.img_size)
test_eq(t2.img_size, 1)
assert t2.requires_grad

class TensorBase[source]

TensorBase(x, **kwargs) :: Tensor

A Tensor which supports subclass pickling, and maintains metadata when casting or after methods

TensorBase hooks into __torch_function__ to ensure metadata is not lost. To see all functions being called, set debug.

a = TensorBase(1)
a.debug=True
1/(a+1)

<method 'add' of 'torch._C._TensorBase' objects> (<class '__main__.TensorBase'>,) (TensorBase(1), 1) None
<function Tensor.__rdiv__ at 0x000001BD5DC4DDC0> (<class '__main__.TensorBase'>,) (TensorBase(2), 1) {}
TensorBase(0.5000)
class _TImage(TensorBase): pass
class _TImage2(_TImage): pass
t1 = _TImage([1.])
t2 = _TImage2([1.])
t2+t1

_TImage2([2.])
class _T(TensorBase): pass
t = _T(range(5))
test_eq(t[0], 0)
test_eq_type(t+1, _T(range(1,6)))
test_eq(repr(t), '_T([0, 1, 2, 3, 4])')
test_eq_type(t[_T([False,False,True,True,True])], _T([2,3,4]))
test_eq_type(t[_T([2,3,4])], _T([2,3,4]))
test_eq(type(pickle.loads(pickle.dumps(t))), _T)
test_eq_type(t.new_ones(1), _T([1]))
test_eq_type(t.new_tensor([1.,2.]), _T([1,2]))
t = tensor([1,2,3])
m = TensorBase([False,True,True])
test_eq(t[m], tensor([2,3]))

t = tensor([[1,2,3],[1,2,3]])
m = cast(tensor([[False,True,True],
                 [False,True,True]]), TensorBase)
test_eq(t[m], tensor([2,3,2,3]))
t = tensor([[1,2,3],[1,2,3]])
t.img_size = 1
t2 = cast(t, TensorBase)
test_eq(t2.img_size, t.img_size)
x = retain_type(tensor([4,5,6]), t2)
test_eq(x.img_size, t.img_size)
t3 = TensorBase([[1,2,3],[1,2,3]], img_size=1)
test_eq(t3.img_size, t.img_size)
t4 = t2+1
t4.img_size = 2
test_eq(t2.img_size, 1)
test_eq(t4.img_size, 2)

class TensorImageBase[source]

TensorImageBase(x, **kwargs) :: TensorBase

A Tensor which supports subclass pickling, and maintains metadata when casting or after methods

class TensorImage[source]

TensorImage(x, **kwargs) :: TensorImageBase

A Tensor which supports subclass pickling, and maintains metadata when casting or after methods

class TensorImageBW[source]

TensorImageBW(x, **kwargs) :: TensorImage

A Tensor which supports subclass pickling, and maintains metadata when casting or after methods

class TensorMask[source]

TensorMask(x, **kwargs) :: TensorImageBase

A Tensor which supports subclass pickling, and maintains metadata when casting or after methods

im = Image.open(TEST_IMAGE)
im_t = cast(array(im), TensorImage)
test_eq(type(im_t), TensorImage)

im_t2 = cast(tensor(1), TensorMask)
test_eq(type(im_t2), TensorMask)
test_eq(im_t2, tensor(1))
ax = im_t.show(figsize=(2,2))
_ = (im_t == im_t2)

[image output]

test_fig_exists(ax)

Operations between TensorMask and TensorImageBase objects return the type of the TensorImageBase object:

a = TensorMask([1,2])
test_eq_type(TensorImage(1)+a, TensorImage([2,3]))
test_eq_type(1-a, TensorMask([0,-1]))

test_eq_type(to_concat([TensorImage([1,2]), TensorImage([3,4])]), TensorImage([1,2,3,4]))

class TensorFlowField[source]

TensorFlowField(x, **kwargs) :: TensorBase

A Tensor which supports subclass pickling, and maintains metadata when casting or after methods

t1 = TensorImage([1.]).view(1,1,1,1)
t2 = TensorFlowField([1.,1.]).view(1,1,1,2)
test_eq_type(F.grid_sample(t1, t2), TensorImage([[[[0.25]]]]))

class TensorCategory[source]

TensorCategory(x, **kwargs) :: TensorBase

A Tensor which supports subclass pickling, and maintains metadata when casting or after methods

class TensorMultiCategory[source]

TensorMultiCategory(x, **kwargs) :: TensorCategory

A Tensor which supports subclass pickling, and maintains metadata when casting or after methods

class TitledTensorScalar[source]

TitledTensorScalar(x, **kwargs) :: TensorBase

A tensor containing a scalar that has a show method

L.tensored[source]

L.tensored()

mapped(tensor)

There are shortcuts for torch.stack and torch.cat if your L contains tensors or something convertible. You can manually convert with tensored.

t = L(([1,2],[3,4]))
test_eq(t.tensored(), [tensor(1,2),tensor(3,4)])

L.stack[source]

L.stack(dim=0)

Same as torch.stack

test_eq(t.stack(), tensor([[1,2],[3,4]]))

L.cat[source]

L.cat(dim=0)

Same as torch.cat

test_eq(t.cat(), tensor([1,2,3,4]))

Chunks

concat[source]

concat(colls)

Concatenate all collections in colls

a,b,c = [1],[1,2],[1,1,2]
test_eq(concat(a,b), c)
test_eq_type(concat(tuple(a),tuple(b)), tuple(c))
test_eq_type(concat(array(a),array(b)), array(c))
test_eq_type(concat(tensor(a),tensor(b)), tensor(c))
test_eq_type(concat(TensorBase(a),TensorBase(b)), TensorBase(c))
test_eq_type(concat([1,1],1), [1,1,1])
test_eq_type(concat(1,1,1), L(1,1,1))
test_eq_type(concat(L(1,2),1), L(1,2,1))

class Chunks[source]

Chunks(chunks, lens=None)

Slice and int indexing into a list of lists

docs = L(list(string.ascii_lowercase[a:b]) for a,b in ((0,3),(3,7),(7,8),(8,16),(16,24),(24,26)))
b = Chunks(docs)
test_eq([b[o] for o in range(0,5)], ['a','b','c','d','e'])
test_eq([b[-o] for o in range(1,6)], ['z','y','x','w','v'])
test_eq(b[6:13], 'g,h,i,j,k,l,m'.split(','))
test_eq(b[20:77], 'u,v,w,x,y,z'.split(','))
test_eq(b[:5], 'a,b,c,d,e'.split(','))
test_eq(b[:2], 'a,b'.split(','))
t = torch.arange(26)
docs = L(t[a:b] for a,b in ((0,3),(3,7),(7,8),(8,16),(16,24),(24,26)))
b = Chunks(docs)
test_eq([b[o] for o in range(0,5)], range(0,5))
test_eq([b[-o] for o in range(1,6)], [25,24,23,22,21])
test_eq(b[6:13], torch.arange(6,13))
test_eq(b[20:77], torch.arange(20,26))
test_eq(b[:5], torch.arange(5))
test_eq(b[:2], torch.arange(2))
docs = L(TensorBase(t[a:b]) for a,b in ((0,3),(3,7),(7,8),(8,16),(16,24),(24,26)))
b = Chunks(docs)
test_eq_type(b[:2], TensorBase(range(2)))
test_eq_type(b[:5], TensorBase(range(5)))
test_eq_type(b[9:13], TensorBase(range(9,13)))

Simple types

show_title[source]

show_title(o, ax=None, ctx=None, label=None, color='black', **kwargs)

Set title of ax to o, or print o if ax is None

test_stdout(lambda: show_title("title"), "title")
# ensure that col names are unique when showing to a pandas series
assert show_title("title", ctx=pd.Series(dict(a=1)), label='a').equals(pd.Series(dict(a=1,a_='title')))

class ShowTitle[source]

ShowTitle()

Base class that adds a simple show

class TitledInt[source]

TitledInt() :: Int

An int with show

class TitledStr[source]

TitledStr() :: Str

An str with show

class TitledFloat[source]

TitledFloat(x=0) :: Float

A float with show

test_stdout(lambda: TitledStr('s').show(), 's')
test_stdout(lambda: TitledInt(1).show(), '1')

class TitledTuple[source]

TitledTuple(x=None, *rest) :: fastuple

A fastuple with show

TitledStr.truncate[source]

TitledStr.truncate(n)

Truncate self to n

Other functions

DataFrame.__init__[source]

DataFrame.__init__(data=None, index=None, columns=None, dtype=None, copy=False)

get_empty_df[source]

get_empty_df(n)

Return n empty rows of a dataframe

display_df[source]

display_df(df)

Display df in a notebook or defaults to print

get_first[source]

get_first(c)

Get the first element of c, even if c is a dataframe
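
For instance, a quick sketch of the expected behavior (using pandas, imported as pd elsewhere on this page):

df = pd.DataFrame({'a':[1,2],'b':[3,4]})
test_eq(get_first(df)['a'], 1)   # first row of the dataframe, as a Series
test_eq(get_first([5,6]), 5)     # plain collections just give their first element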

one_param[source]

one_param(m)

First parameter in m

item_find[source]

item_find(x, idx=0)

Recursively takes the idx-th element of x
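
A small illustration of the recursion (idx defaults to 0 at each level):

x = ([tensor([1,2]), tensor([3,4])], tensor([5,6]))
test_eq(item_find(x), tensor([1,2]))   # drills down: x[0], then x[0][0], which is not a collection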

find_device[source]

find_device(b)

Recursively search the device of b.

t2 = to_device(tensor(0))
dev = default_device()
test_eq(find_device(t2), dev)
test_eq(find_device([t2,t2]), dev)
test_eq(find_device({'a':t2,'b':t2}), dev)
test_eq(find_device({'a':[[t2],[t2]],'b':t2}), dev)

find_bs[source]

find_bs(b)

Recursively search the batch size of b.

x = torch.randn(4,5)
test_eq(find_bs(x), 4)
test_eq(find_bs([x, x]), 4)
test_eq(find_bs({'a':x,'b':x}), 4)
test_eq(find_bs({'a':[[x],[x]],'b':x}), 4)

np_func[source]

np_func(f)

Convert a function taking and returning numpy arrays to one taking and returning tensors

This decorator is particularly useful for using numpy functions as fastai metrics, for instance:

from sklearn.metrics import f1_score

@np_func
def f1(inp,targ): return f1_score(targ, inp)
a1,a2 = array([0,1,1]),array([1,0,1])
t = f1(tensor(a1),tensor(a2))
test_eq(f1_score(a1,a2), t)
assert isinstance(t,Tensor)

class Module[source]

Module() :: Module

Same as nn.Module, but no need for subclasses to call super().__init__

class _T(Module):
    def __init__(self): self.f = nn.Linear(1,1)
    def forward(self,x): return self.f(x)
t = _T()
t(tensor([1.]))

tensor([-0.0832], grad_fn=<AddBackward0>)

get_model[source]

get_model(model)

Return the model maybe wrapped inside model.
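
For example, a minimal sketch assuming the wrapper is something like nn.DataParallel (whose inner module lives in .module):

m = nn.Linear(2,3)
assert get_model(nn.DataParallel(m)) is m   # unwraps to the inner module
assert get_model(m) is m                    # plain models are returned unchanged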

one_hot[source]

one_hot(x, c)

One-hot encode x with c classes.

test_eq(one_hot([1,4], 5), tensor(0,1,0,0,1).byte())
test_eq(one_hot(torch.tensor([]), 5), tensor(0,0,0,0,0).byte())
test_eq(one_hot(2, 5), tensor(0,0,1,0,0).byte())

one_hot_decode[source]

one_hot_decode(x, vocab=None)

test_eq(one_hot_decode(tensor(0,1,0,0,1)), [1,4])
test_eq(one_hot_decode(tensor(0,0,0,0,0)), [])
test_eq(one_hot_decode(tensor(0,0,1,0,0)), [2])

params[source]

params(m)

Return all parameters of m

trainable_params[source]

trainable_params(m)

Return all trainable parameters of m

m = nn.Linear(4,5)
test_eq(trainable_params(m), [m.weight, m.bias])
m.weight.requires_grad_(False)
test_eq(trainable_params(m), [m.bias])

norm_bias_params[source]

norm_bias_params(m, with_bias=True)

Return all bias and BatchNorm parameters

for norm_func in [nn.BatchNorm1d, partial(nn.InstanceNorm1d, affine=True)]:
    model = nn.Sequential(nn.Linear(10,20), norm_func(20), nn.Conv1d(3,4, 3))
    test_eq(norm_bias_params(model), [model[0].bias, model[1].weight, model[1].bias, model[2].bias])
    model = nn.ModuleList([nn.Linear(10,20, bias=False), nn.Sequential(norm_func(20), nn.Conv1d(3,4,3))])
    test_eq(norm_bias_params(model), [model[1][0].weight, model[1][0].bias, model[1][1].bias])
    model = nn.ModuleList([nn.Linear(10,20), nn.Sequential(norm_func(20), nn.Conv1d(3,4,3))])
    test_eq(norm_bias_params(model, with_bias=False), [model[1][0].weight, model[1][0].bias])

batch_to_samples[source]

batch_to_samples(b, max_n=10)

‘Transposes’ a batch to (at most max_n) samples

t = tensor([1,2,3])
test_eq(batch_to_samples([t,t+1], max_n=2), ([1,2],[2,3]))
test_eq(batch_to_samples(tensor([1,2,3]), 10), [1, 2, 3])
test_eq(batch_to_samples([tensor([1,2,3]), tensor([4,5,6])], 10), [(1, 4), (2, 5), (3, 6)])
test_eq(batch_to_samples([tensor([1,2,3]), tensor([4,5,6])], 2), [(1, 4), (2, 5)])
test_eq(batch_to_samples([tensor([1,2,3]), [tensor([4,5,6]),tensor([7,8,9])]], 10),
        [(1, (4, 7)), (2, (5, 8)), (3, (6, 9))])
test_eq(batch_to_samples([tensor([1,2,3]), [tensor([4,5,6]),tensor([7,8,9])]], 2), [(1, (4, 7)), (2, (5, 8))])
t = fastuple(tensor([1,2,3]),TensorBase([2,3,4]))
test_eq_type(batch_to_samples(t)[0][1], TensorBase(2))
test_eq(batch_to_samples(t).map(type), [fastuple]*3)

Tensor.interp_1d[source]

Tensor.interp_1d(x:Tensor, xp, fp)

Same as np.interp

brks = tensor(0,1,2,4,8,64).float()
ys = tensor(range_of(brks)).float()
ys /= ys[-1].item()
pts = tensor(0.2,0.5,0.8,3,5,63)
preds = pts.interp_1d(brks, ys)
test_close(preds.numpy(), np.interp(pts.numpy(), brks.numpy(), ys.numpy()))
plt.scatter(brks,ys)
plt.scatter(pts,preds)
plt.legend(['breaks','preds']);

[plot output]

Tensor.pca[source]

Tensor.pca(x:Tensor, k=2)

Compute PCA of x with k dimensions.
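
There is no example for this in the source; as an illustrative sketch, assuming pca returns the projection of x onto the first k principal components:

x = torch.randn(100, 5)
proj = x.pca(k=2)
test_eq(proj.shape, (100, 2))   # one k-dimensional row per input row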

logit[source]

logit(x)

Logit of x, clamped to avoid inf.
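
A quick sketch of the clamping behavior (the exact clamping epsilon is an implementation detail):

x = tensor([0., 0.5, 1.])
y = logit(x)
assert torch.isfinite(y).all()   # 0 and 1 are clamped first, so no -inf/inf
test_close(y[1], 0.)             # logit(0.5) == 0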

num_distrib[source]

num_distrib()

Return the number of processes in distributed training (if applicable).

rank_distrib[source]

rank_distrib()

Return the distributed rank of this process (if applicable).

distrib_barrier[source]

distrib_barrier()

Place a synchronization barrier in distributed training

After calling this, ALL sub-processes in the pytorch process group must arrive here before proceeding.
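
A common pattern looks like the sketch below; it only does something interesting inside a launched distributed run, and prepare_data is a hypothetical stand-in for one-time setup work:

if rank_distrib() == 0:
    prepare_data()    # hypothetical: run expensive setup once, on the lead process only
distrib_barrier()     # all other processes wait here until rank 0 arrives too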

Path.save_array[source]

Path.save_array(p:Path, o, complib='lz4', lvl=3)

Save numpy array to a compressed pytables file, using compression level lvl

Compression lib can be any of: blosclz, lz4, lz4hc, snappy, zlib or zstd.
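
Assuming pytables is installed, a round trip might look like this (tmp_arr.h5 is just an illustrative path):

p = Path('tmp_arr.h5')
a = np.arange(10)
p.save_array(a, complib='zlib', lvl=5)
test_eq(p.load_array(), a)
p.unlink()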

Path.load_array[source]

Path.load_array(p:Path)

Load a numpy array from a compressed pytables file

base_doc[source]

base_doc(elt)

Print a base documentation of elt

doc[source]

doc(elt)

Try to use doc from nbdev and fall back to base_doc

nested_reorder[source]

nested_reorder(t, idxs)

Reorder all tensors in t using idxs

x = tensor([0,1,2,3,4,5])
idxs = tensor([2,5,1,0,3,4])
test_eq_type(nested_reorder(([x], x), idxs), ([idxs], idxs))
y = L(0,1,2,3,4,5)
z = L(i.item() for i in idxs)
test_eq_type(nested_reorder((y, x), idxs), (z,idxs))

Image helpers

make_cross_image[source]

make_cross_image(bw=True)

Create a tensor containing a cross image, either bw (True) or color

plt.imshow(make_cross_image(), cmap="Greys");

[image output]

plt.imshow(make_cross_image(False).permute(1,2,0));

[image output]

show_image_batch[source]

show_image_batch(b, show=show_titled_image, items=9, cols=3, figsize=None, **kwargs)

Display batch b in a grid of size items with cols width

show_image_batch(([Image.open(TEST_IMAGE_BW),Image.open(TEST_IMAGE)],['bw','color']), items=2)

[image output]

Model init

requires_grad[source]

requires_grad(m)

Check if the first parameter of m requires grad or not

tst = nn.Linear(4,5)
assert requires_grad(tst)
for p in tst.parameters(): p.requires_grad_(False)
assert not requires_grad(tst)

init_default[source]

init_default(m, func=kaiming_normal_)

Initialize m weights with func and set bias to 0.

tst = nn.Linear(4,5)
tst.weight.data.uniform_(-1,1)
tst.bias.data.uniform_(-1,1)
tst = init_default(tst, func = lambda x: x.data.fill_(1.))
test_eq(tst.weight, torch.ones(5,4))
test_eq(tst.bias, torch.zeros(5))

cond_init[source]

cond_init(m, func)

Apply init_default to m unless it’s a batchnorm module

tst = nn.Linear(4,5)
tst.weight.data.uniform_(-1,1)
tst.bias.data.uniform_(-1,1)
cond_init(tst, func = lambda x: x.data.fill_(1.))
test_eq(tst.weight, torch.ones(5,4))
test_eq(tst.bias, torch.zeros(5))
tst = nn.BatchNorm2d(5)
init = [tst.weight.clone(), tst.bias.clone()]
cond_init(tst, func = lambda x: x.data.fill_(1.))
test_eq(tst.weight, init[0])
test_eq(tst.bias, init[1])

apply_leaf[source]

apply_leaf(m, f)

Apply f to children of m.

tst = nn.Sequential(nn.Linear(4,5), nn.Sequential(nn.Linear(4,5), nn.Linear(4,5)))
apply_leaf(tst, partial(init_default, func=lambda x: x.data.fill_(1.)))
for l in [tst[0], *tst[1]]: test_eq(l.weight, torch.ones(5,4))
for l in [tst[0], *tst[1]]: test_eq(l.bias, torch.zeros(5))

apply_init[source]

apply_init(m, func=kaiming_normal_)

Initialize all non-batchnorm layers of m with func.

tst = nn.Sequential(nn.Linear(4,5), nn.Sequential(nn.Linear(4,5), nn.BatchNorm1d(5)))
init = [tst[1][1].weight.clone(), tst[1][1].bias.clone()]
apply_init(tst, func=lambda x: x.data.fill_(1.))
for l in [tst[0], tst[1][0]]: test_eq(l.weight, torch.ones(5,4))
for l in [tst[0], tst[1][0]]: test_eq(l.bias, torch.zeros(5))
test_eq(tst[1][1].weight, init[0])
test_eq(tst[1][1].bias, init[1])

autograd jit functions

script_use_ctx[source]

script_use_ctx(f)

Decorator: create jit script and pass everything in ctx.saved_variables to f, after *args

script_save_ctx[source]

script_save_ctx(static, *argidx)

Decorator: create jit script and save args with indices argidx using ctx.save_for_backward

script_fwd[source]

script_fwd(*argidx)

Decorator: create static jit script and save args with indices argidx using ctx.save_for_backward

script_bwd[source]

script_bwd(f)

Decorator: create static jit script and pass everything in ctx.saved_variables to f, after *args

grad_module[source]

grad_module()

Decorator: convert cls into an autograd function
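
None of these decorators come with examples in the source. Purely as an unverified sketch of how they might compose, based only on the docstrings above (the Swish activation and its gradient here are our own illustration, not fastai code):

class _SwishFn(torch.autograd.Function):
    @script_fwd(0)                 # JIT-script the forward; save args[0] (x) for backward
    def forward(x):
        return x * torch.sigmoid(x)

    @script_bwd                    # JIT-script the backward; saved tensors arrive after *args
    def backward(grad_out, x):
        sx = torch.sigmoid(x)
        return grad_out * (sx * (1 + x * (1 - sx)))

Swish = grad_module(_SwishFn)      # an nn.Module whose forward calls _SwishFn.apply
y = Swish()(tensor([1., 2.], requires_grad=True))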

