Tutorial - Training a model on Imagenette


A dive into the layered API of fastai in computer vision

The fastai library has a layered API, as summarized by this graph:

A layered API

If you are following this tutorial, you are probably already familiar with the applications; here we will see how they are powered by the high-level and mid-level APIs.

Imagenette is a subset of ImageNet with 10 very different classes. It’s great to quickly experiment before trying a fleshed-out technique on the full ImageNet dataset. We will show in this tutorial how to train a model on it, using the usual high-level APIs, then delving inside the fastai library to show you how to use the mid-level APIs we designed. This way you’ll be able to customize your own data collection or training as needed.

Assemble the data

We will look at several ways to get our data in DataLoaders: first we will use the ImageDataLoaders factory methods (application layer), then the data block API (high-level API), and lastly we will see how to do the same thing with the mid-level API.

Loading the data with a factory method

This is the most basic way of assembling the data that we have presented in all the beginner tutorials, so hopefully it should be familiar to you by now.

First, we import everything inside the vision application:

    from fastai.vision.all import *

Then we download the dataset and decompress it (if needed) and get its location:

    path = untar_data(URLs.IMAGENETTE_160)

We use ImageDataLoaders.from_folder to get everything (since our data is organized in an ImageNet-style format):

    dls = ImageDataLoaders.from_folder(path, valid='val',
        item_tfms=RandomResizedCrop(128, min_scale=0.35), batch_tfms=Normalize.from_stats(*imagenet_stats))

And we can have a look at our data:

    dls.show_batch()

Imagenette Tutorial - Figure 3

Loading the data with the data block API

And as we saw in previous tutorials, the get_image_files function helps get all the images in subfolders:

    fnames = get_image_files(path)

Let’s begin with an empty DataBlock.

    dblock = DataBlock()

By itself, a DataBlock is just a blueprint for how to assemble your data. It does not do anything until you pass it a source. You can then choose to convert that source into a Datasets or a DataLoaders by using the DataBlock.datasets or DataBlock.dataloaders method. Since we haven’t done anything to get our data ready for batches, the dataloaders method will fail here, but we can have a look at how it gets converted into a Datasets. This is where we pass the source of our data, here all of our filenames:

    dsets = dblock.datasets(fnames)
    dsets.train[0]

    (Path('/home/jhoward/.fastai/data/imagenette2-160/train/n03028079/n03028079_30979.JPEG'),
     Path('/home/jhoward/.fastai/data/imagenette2-160/train/n03028079/n03028079_30979.JPEG'))

By default, the data block API assumes we have an input and a target, which is why we see our filename repeated twice.

The first thing we can do is to use a get_items function to actually assemble our items inside the data block:

    dblock = DataBlock(get_items = get_image_files)

The difference is that you then pass as a source the folder with the images and not all the filenames:

    dsets = dblock.datasets(path)
    dsets.train[0]

    (Path('/home/jhoward/.fastai/data/imagenette2-160/train/n03417042/n03417042_10033.JPEG'),
     Path('/home/jhoward/.fastai/data/imagenette2-160/train/n03417042/n03417042_10033.JPEG'))

Our inputs are ready to be processed as images (since images can be built from filenames), but our target is not. We need to convert that filename to a class name. For this, fastai provides parent_label:

    parent_label(fnames[0])

    'n03417042'

This is not very readable, and since we can write any function we want here, let's convert those obscure labels to something we can read:

    lbl_dict = dict(
        n01440764='tench',
        n02102040='English springer',
        n02979186='cassette player',
        n03000684='chain saw',
        n03028079='church',
        n03394916='French horn',
        n03417042='garbage truck',
        n03425413='gas pump',
        n03445777='golf ball',
        n03888257='parachute'
    )

    def label_func(fname):
        return lbl_dict[parent_label(fname)]

We can then tell our data block to use it to label our target by passing it as get_y:

    dblock = DataBlock(get_items = get_image_files,
                       get_y     = label_func)
    dsets = dblock.datasets(path)
    dsets.train[0]

    (Path('/home/jhoward/.fastai/data/imagenette2-160/train/n02102040/n02102040_4955.JPEG'),
     'English springer')

Now that our inputs and targets are ready, we can specify types to tell the data block API that our inputs are images and our targets are categories. Types are represented by blocks in the data block API, here we use ImageBlock and CategoryBlock:

    dblock = DataBlock(blocks    = (ImageBlock, CategoryBlock),
                       get_items = get_image_files,
                       get_y     = label_func)
    dsets = dblock.datasets(path)
    dsets.train[0]

    (PILImage mode=RGB size=160x160, TensorCategory(2))

We can see how the DataBlock automatically added the transforms necessary to open the image, and how it converted the category name (here 'cassette player') to an index (with a special tensor type). To do this, it created a mapping from categories to indexes called the "vocab", which we can access this way:

    dsets.vocab

    ['English springer', 'French horn', 'cassette player', 'chain saw', 'church', 'garbage truck', 'gas pump', 'golf ball', 'parachute', 'tench']
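
Since a TensorCategory is just an index into that vocab, we can map it back by hand (a quick check; the exact category you get depends on which file happens to come first in your training set):

    x,y = dsets.train[0]
    dsets.vocab[int(y)]   # -> 'cassette player' for the TensorCategory(2) shown above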

Note that you can mix and match any block for inputs and targets, which is why the API is named the data block API. You can also have more than two blocks (if you have multiple inputs and/or targets); you would just need to pass n_inp to the DataBlock to tell the library how many inputs there are (the rest would be targets), and pass a list of functions to get_x and/or get_y (to explain how to process each item so it's ready for its type). See, for instance, the object detection example in the data block tutorial, or the hedged sketch just below.
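
As a minimal sketch (hypothetical, not part of this dataset's task), a DataBlock with one image input and two categorical targets might look like this; label_func is reused twice only to illustrate the list form of get_y:

    # n_inp=1 marks the first block as the input; the remaining blocks are targets,
    # and get_y then takes one labelling function per target block.
    dblock_multi = DataBlock(blocks    = (ImageBlock, CategoryBlock, CategoryBlock),
                             n_inp     = 1,
                             get_items = get_image_files,
                             get_y     = [label_func, label_func])
    dsets_multi = dblock_multi.datasets(path)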

The next step is to control how our validation set is created. We do this by passing a splitter to DataBlock. For instance, here is how we split by grandparent folder.

    dblock = DataBlock(blocks    = (ImageBlock, CategoryBlock),
                       get_items = get_image_files,
                       get_y     = label_func,
                       splitter  = GrandparentSplitter())
    dsets = dblock.datasets(path)
    dsets.train[0]

    (PILImage mode=RGB size=213x160, TensorCategory(5))

The last step is to specify item transforms and batch transforms (the same way as we do it in ImageDataLoaders factory methods):

    dblock = DataBlock(blocks     = (ImageBlock, CategoryBlock),
                       get_items  = get_image_files,
                       get_y      = label_func,
                       splitter   = GrandparentSplitter(),
                       item_tfms  = RandomResizedCrop(128, min_scale=0.35),
                       batch_tfms = Normalize.from_stats(*imagenet_stats))

With that resize, we are now able to batch items together and can finally call dataloaders to convert our DataBlock to a DataLoaders object:

    dls = dblock.dataloaders(path)
    dls.show_batch()

Imagenette Tutorial - Figure 4

Another way to compose several functions for get_y is to put them in a Pipeline:

    imagenette = DataBlock(blocks     = (ImageBlock, CategoryBlock),
                           get_items  = get_image_files,
                           get_y      = Pipeline([parent_label, lbl_dict.__getitem__]),
                           splitter   = GrandparentSplitter(valid_name='val'),
                           item_tfms  = RandomResizedCrop(128, min_scale=0.35),
                           batch_tfms = Normalize.from_stats(*imagenet_stats))

    dls = imagenette.dataloaders(path)
    dls.show_batch()

Imagenette Tutorial - Figure 5

To learn more about the data block API, check out the data block tutorial!

Loading the data with the mid-level API

Now let's see how we can load the data with the mid-level API: we will learn about Transforms and Datasets. The beginning is the same as before: we download our data and get all our filenames:

    source = untar_data(URLs.IMAGENETTE_160)
    fnames = get_image_files(source)

Every bit of transformation we apply to our raw items (here the filenames) is called a Transform in fastai. It’s basically a function with a bit of added functionality:

  • it can have different behavior depending on the type it receives (this is called type dispatch)
  • it will generally be applied on each element of a tuple

This way, when you have a Transform like resize, you can apply it on a tuple (image, label) and it will resize the image but not the categorical label (since there is no implementation of resize for categories). The exact same transform applied on a tuple (image, mask) will resize the image and the target, using bilinear interpolation on the image and nearest neighbor on the mask. This is how the library manages to always apply data augmentation transforms on every computer vision application (segmentation, point localization or object detection).
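
To make the type dispatch concrete, here is a tiny illustrative Transform (not from the tutorial): its encodes is only defined for ints, so when the transform is applied to a tuple, the int is changed and the string is left untouched:

    class AddOne(Transform):
        def encodes(self, x:int): return x+1   # type-dispatched: only applies to ints
        def decodes(self, x:int): return x-1   # undoes the change, e.g. for display

    tfm = AddOne()
    tfm((3, 'label'))   # -> (4, 'label'): the string element is not modified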

Additionally, a transform can have

  • a setup executed on the whole set (or the whole training set). This is how Categorize builds its vocabulary automatically.
  • a decodes that can undo what the transform does for showing purposes (for instance Categorize will convert back an index into a category).

We won't delve into those bits of the low-level API here, but you can check out the pets tutorial or the more advanced siamese tutorial for more information.

To open an image, we use the PILImage.create transform. It will open the image and make it of the fastai type PILImage:

    PILImage.create(fnames[0])

Imagenette Tutorial - Figure 6

In parallel, we have already seen how to get the label of our image, using parent_label and lbl_dict:

    lbl_dict[parent_label(fnames[0])]

    'garbage truck'

To make them proper categories that are mapped to an index before being fed to the model, we need to add the Categorize transform. If we want to apply it directly, we need to give it a vocab (so that it knows how to associate a string with an int). We already saw that we can compose several transforms by using a Pipeline:

    tfm = Pipeline([parent_label, lbl_dict.__getitem__, Categorize(vocab = lbl_dict.values())])
    tfm(fnames[0])

    TensorCategory(5)
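
Like any Transform, this Pipeline can also decode its result for display purposes; since only Categorize defines a decodes here, this simply maps the index back to its label (a quick check):

    tfm.decode(tfm(fnames[0]))   # -> 'garbage truck', the label we started from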

Now to build our Datasets object, we need to specify:

  • our raw items
  • the list of transforms that builds our inputs from the raw items
  • the list of transforms that builds our targets from the raw items
  • the split for training and validation

We have everything apart from the split right now, which we can build this way:

    splits = GrandparentSplitter(valid_name='val')(fnames)
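
The splitter simply returns two lists of indices, one for the training set and one for the validation set. A quick sanity check (assuming, as is the case for Imagenette, that every file sits under either train or val):

    train_idxs, valid_idxs = splits
    assert len(train_idxs) + len(valid_idxs) == len(fnames)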

We can then pass all of this information to Datasets.

    dsets = Datasets(fnames, [[PILImage.create], [parent_label, lbl_dict.__getitem__, Categorize]], splits=splits)

The main difference with what we had before is that we can just pass along Categorize without passing it the vocab: it will build it from the training data (which it knows from items and splits) during its setup phase. Let’s have a look at the first element:

    dsets[0]

    (PILImage mode=RGB size=213x160, TensorCategory(5))
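
As before, the vocab that Categorize built during its setup phase is available directly on the Datasets (output omitted; it is the same set of readable labels as earlier):

    dsets.vocab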

We can also use our Datasets object to display it:

    dsets.show(dsets[0]);

Imagenette Tutorial - Figure 7

Now if we want to build a DataLoaders from this object, we need to add a few transforms that will be applied at the item level. As we saw before, those transforms will be applied separately on the inputs and targets, using the appropriate implementation for each type (which may very well be to do nothing).

Here we need to:

  • resize our images
  • convert them to tensors
    item_tfms = [ToTensor, RandomResizedCrop(128, min_scale=0.35)]

Additionally we will need to apply a few transforms on the batch level, namely:

  • convert the int tensors from images to floats, and divide every pixel by 255
  • normalize using the imagenet statistics
    batch_tfms = [IntToFloatTensor, Normalize.from_stats(*imagenet_stats)]

Those two bits could be done per item as well, but it’s way more efficient to do it on a full batch.

Note that we have more transforms than in the data block API: there was no need to think of ToTensor or IntToFloatTensor there. This is because data blocks come with default item and batch transforms for the transforms you will always need with a given type.

When passing those transforms to the .dataloaders method, the corresponding arguments have a slightly different name: the item_tfms are passed to after_item (because they are applied after the item has been formed) and the batch_tfms are passed to after_batch (because they are applied after the batch has been formed).

    dls = dsets.dataloaders(after_item=item_tfms, after_batch=batch_tfms, bs=64, num_workers=8)

We can then use the traditional show_batch method:

    dls.show_batch()

Imagenette Tutorial - Figure 8

Training

We will start with the usual cnn_learner function we used in the vision tutorial, and then see how one can build a Learner object in fastai. After that, we will learn how to customize:

  • the loss function and how to write one that works fully with fastai,
  • the optimizer function and how to use PyTorch optimizers,
  • the training loop and how to write a basic Callback.

Building a Learner

The easiest way to build a Learner for image classification, as we have seen, is to use cnn_learner. We can specify that we don’t want a pretrained model by passing pretrained=False (here the goal is to train a model from scratch):

    learn = cnn_learner(dls, resnet34, metrics=accuracy, pretrained=False)

And we can fit our model as usual:

    learn.fit_one_cycle(5, 5e-3)

    epoch  train_loss  valid_loss  accuracy  time
    0      2.294455    3.559950    0.315159  00:15
    1      2.344131    5.808896    0.126624  00:14
    2      2.206059    1.943685    0.339873  00:16
    3      1.897209    1.697609    0.454013  00:17
    4      1.683472    1.494538    0.500637  00:18

That's a start. But since we are not using a pretrained model, why not use a different architecture? fastai comes with a version of the ResNet models that incorporates all the tricks from modern research. While there is no pretrained model using those at the time of writing this tutorial, we can certainly use them here. For this, we just need to use the Learner class. It takes our DataLoaders and a PyTorch model, at the minimum. Here we can use xresnet34, and since we have 10 classes, we specify n_out=10:

    learn = Learner(dls, xresnet34(n_out=10), metrics=accuracy)

We can find a good learning rate with the learning rate finder:

    learn.lr_find()

    SuggestedLRs(lr_min=0.0013182567432522773, lr_steep=0.0006918309954926372)

Imagenette Tutorial - Figure 9

Then fit our model:

    learn.fit_one_cycle(5, 1e-3)

    epoch  train_loss  valid_loss  accuracy  time
    0      1.652016    1.562369    0.467006  00:18
    1      1.202800    1.137172    0.631338  00:22
    2      0.943398    1.031166    0.669554  00:26
    3      0.791183    0.790054    0.752102  00:26
    4      0.698257    0.727250    0.771465  00:22

Wow this is a huge improvement! As we saw in all the application tutorials, we can then look at some results with:

    learn.show_results()

Imagenette Tutorial - Figure 10

Now let’s see how to customize each bit of the training.

Changing the loss function

The loss function you pass to a Learner is expected to take an output and target, then return the loss. It can be any regular PyTorch function and the training loop will work without any problem. What may cause problems is when you use fastai functions like Learner.get_preds, Learner.predict or Learner.show_results.
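
For instance, a plain PyTorch loss trains just fine (a minimal sketch; fastai's CrossEntropyLossFlat would normally be preferred, since it also provides the extras described below):

    # Training works with any regular PyTorch loss; only predict/show_results need more.
    learn = Learner(dls, xresnet34(n_out=10), loss_func=nn.CrossEntropyLoss(), metrics=accuracy)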

If you want Learner.get_preds to work with the argument with_loss=True (which is also used when you run ClassificationInterpretation.plot_top_losses, for instance), your loss function will need a reduction attribute (or argument) that you can set to "none" (this is standard for all PyTorch loss functions or classes). With a reduction of "none", the loss function does not return a single number (like a mean or sum) but something the same size as the target.

As for Learner.predict or Learner.show_results, they internally rely on two methods your loss function should have:

  • if you have a loss that combines activation and loss function (such as nn.CrossEntropyLoss), an activation function.
  • a decodes function that converts your predictions to the same format as your targets: for instance, in the case of nn.CrossEntropyLoss, the decodes function should take the argmax.

As an example, let’s look at how to implement a custom loss function doing label smoothing (this is already in fastai as LabelSmoothingCrossEntropy).

    class LabelSmoothingCE(Module):
        def __init__(self, eps=0.1, reduction='mean'): self.eps,self.reduction = eps,reduction

        def forward(self, output, target):
            c = output.size()[-1]
            log_preds = F.log_softmax(output, dim=-1)
            if self.reduction=='sum': loss = -log_preds.sum()
            else:
                loss = -log_preds.sum(dim=-1) #We divide by that size at the return line so sum and not mean
                if self.reduction=='mean': loss = loss.mean()
            return loss*self.eps/c + (1-self.eps) * F.nll_loss(log_preds, target.long(), reduction=self.reduction)

        def activation(self, out): return F.softmax(out, dim=-1)
        def decodes(self, out):    return out.argmax(dim=-1)

We won't comment on the forward pass, which just implements the loss itself. What is important is to notice how the reduction attribute affects how the final result is computed.

Then, since this loss function combines the activation (softmax) with the actual loss, we implement an activation method that takes the softmax of the output. This is what will make Learner.get_preds or Learner.predict return actual predictions (probabilities) instead of the raw model outputs.
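
A quick sanity check on random tensors (shapes assumed: a batch of 4 outputs over 10 classes) shows the activation turning raw outputs into probabilities that sum to one:

    out = torch.randn(4, 10)
    LabelSmoothingCE().activation(out).sum(dim=-1)   # ≈ tensor([1., 1., 1., 1.])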

Lastly, decodes changes the outputs of the model to put them in the same format as the targets (one int for each sample in the batch) by taking the argmax of the predictions. We can pass this loss function to Learner:

    learn = Learner(dls, xresnet34(n_out=10), loss_func=LabelSmoothingCE(), metrics=accuracy)
    learn.fit_one_cycle(5, 1e-3)

    epoch  train_loss  valid_loss  accuracy  time
    0      1.734130    1.663665    0.521529  00:18
    1      1.419407    1.358000    0.652994  00:19
    2      1.239973    1.292138    0.675669  00:19
    3      1.114046    1.093192    0.756688  00:19
    4      1.019760    1.061080    0.772229  00:19

It’s not training as well as before because label smoothing is a regularizing technique, so it needs more epochs to really kick in and give better results.

After training our model, we can indeed use predict and show_results and get proper results:

    learn.predict(fnames[0])

    ('garbage truck',
     tensor(5),
     tensor([1.5314e-03, 9.6116e-04, 2.7214e-03, 2.6757e-03, 6.4039e-04, 9.8842e-01,
             8.1883e-04, 7.5840e-04, 1.0780e-03, 3.9759e-04]))

    learn.show_results()

Imagenette Tutorial - Figure 11

Changing the optimizer

fastai uses its own class of Optimizer built with various callbacks to refactor common functionality and provide a unique naming of hyperparameters playing the same role (like momentum in SGD, which is the same as alpha in RMSProp and beta0 in Adam) which makes it easier to schedule them (such as in Learner.fit_one_cycle).

It implements all optimizers supported by PyTorch (and many more), so you should never need to use one coming from PyTorch. Check out the optimizer module to see all the optimizers natively available.

However, in some circumstances you might need to use an optimizer that is not in fastai (if, for instance, it's a new one only implemented in PyTorch). Before learning how to port the code to our internal Optimizer (check out the optimizer module to discover how), you can use the OptimWrapper class to wrap your PyTorch optimizer and train with it:

    @delegates(torch.optim.AdamW.__init__)
    def pytorch_adamw(param_groups, **kwargs):
        return OptimWrapper(torch.optim.AdamW([{'params': ps, **kwargs} for ps in param_groups]))

We write an optimizer function that expects param_groups, which is a list of lists of parameters. Then we pass those to the PyTorch optimizer we want to use.

We can use this function and pass it to the opt_func argument of Learner:

    learn = Learner(dls, xresnet18(), lr=1e-2, metrics=accuracy,
                    loss_func=LabelSmoothingCrossEntropy(),
                    opt_func=partial(pytorch_adamw, wd=0.01, eps=1e-3))

We can then use the usual learning rate finder:

    learn.lr_find()

    SuggestedLRs(lr_min=0.07585775852203369, lr_steep=0.00363078061491251)

Imagenette Tutorial - Figure 12

Or fit_one_cycle (and thanks to the wrapper, fastai will properly schedule the beta0 of AdamW).

    learn.fit_one_cycle(5, 5e-3)

    epoch  train_loss  valid_loss  accuracy  time
    0      2.661560    3.077346    0.332994  00:14
    1      2.172226    2.087496    0.622675  00:14
    2      1.913195    1.859730    0.695541  00:14
    3      1.736957    1.692221    0.773758  00:14
    4      1.631078    1.646656    0.788280  00:14

Changing the training loop with a Callback

The base training loop in fastai is the same as PyTorch’s:

    for xb,yb in dl:
        pred = model(xb)
        loss = loss_func(pred, yb)
        loss.backward()
        opt.step()
        opt.zero_grad()

where model, loss_func and opt are all attributes of our Learner. To easily allow you to add new behavior in that training loop without needing to rewrite it yourself (along with all the fastai pieces you might want like mixed precision, 1cycle schedule, distributed training…), you can customize what happens in the training loop by writing a callback.

Callbacks will be fully explained in an upcoming tutorial, but the basics are that:

  • a Callback can read every piece of a Learner, hence knowing everything happening in the training loop
  • a Callback can change any piece of the Learner, allowing it to alter the behavior of the training loop
  • a Callback can even raise special exceptions that will allow breaking points (skipping a step, a validation phase, an epoch or even cancelling training entirely)
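
As a minimal illustration of the first two points (a toy example, not part of fastai), here is a Callback that just reads the Learner's state during training:

    class CountBatches(Callback):
        run_valid = False                      # only run during training batches
        def before_fit(self):  self.n_batches = 0
        def after_batch(self): self.n_batches += 1
        def after_fit(self):   print(f"saw {self.n_batches} training batches over {self.n_epoch} epochs")

You would pass it to the Learner with cbs=CountBatches(), exactly as we do with Mixup below.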

Here we will write a simple Callback applying mixup to our training (the version we will write is specific to our problem, use fastai’s MixUp in other settings).

Mixup consists in changing the inputs by mixing two different inputs and making a linear combination of them:

    input = x1 * t + x2 * (1-t)

Where t is a random number between 0 and 1. Then, if the targets are one-hot encoded, we change the target to be

    target = y1 * t + y2 * (1-t)

In practice though, targets are not one-hot encoded in PyTorch, but it is equivalent to replace the part of the loss dealing with y1 and y2 by

    loss = loss_func(pred, y1) * t + loss_func(pred, y2) * (1-t)

because the loss function used is linear with respect to y.

We just need to use the version with reduction='none' of the loss to do this linear combination, then take the mean.
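
To make that concrete, here is a small illustration with plain PyTorch (random tensors, 10 classes, batch of 8): with reduction='none' we get one loss value per sample, so we can weight each sample by its own t before taking the mean:

    pred = torch.randn(8, 10)
    y1, y2 = torch.randint(0, 10, (8,)), torch.randint(0, 10, (8,))
    t = torch.rand(8)
    loss = (F.cross_entropy(pred, y1, reduction='none') * t
            + F.cross_entropy(pred, y2, reduction='none') * (1-t)).mean()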

Here is how we write mixup in a Callback:

    from torch.distributions.beta import Beta

    class Mixup(Callback):
        run_valid = False
        def __init__(self, alpha=0.4): self.distrib = Beta(tensor(alpha), tensor(alpha))

        def before_batch(self):
            self.t = self.distrib.sample((self.y.size(0),)).squeeze().to(self.x.device)
            shuffle = torch.randperm(self.y.size(0)).to(self.x.device)
            x1,self.y1 = self.x[shuffle],self.y[shuffle]
            self.learn.xb = (x1 * (1-self.t[:,None,None,None]) + self.x * self.t[:,None,None,None],)

        def after_loss(self):
            with NoneReduce(self.loss_func) as lf:
                loss = lf(self.pred,self.y1) * (1-self.t) + lf(self.pred,self.y) * self.t
            self.learn.loss = loss.mean()

We can see we write two events:

  • before_batch is executed just after drawing a batch and before the model is run on the input. We first draw our random numbers t, following a beta distribution (as advised in the paper), and get a shuffled version of the batch (instead of drawing a second version of the batch, we mix one batch with a shuffled version of itself). Then we set self.learn.xb to the new input, which will be the one fed to the model.
  • after_loss is executed just after the loss is computed and before the backward pass. We replace self.learn.loss by the correct value. NoneReduce is a context manager that temporarily sets the reduction attribute of a loss to ‘none’.

Also, we tell the Callback it should not run during the validation phase with run_valid=False.

To pass a Callback to a Learner, we use cbs=:

    learn = Learner(dls, xresnet18(), lr=1e-2, metrics=accuracy,
                    loss_func=LabelSmoothingCrossEntropy(), cbs=Mixup(),
                    opt_func=partial(pytorch_adamw, wd=0.01, eps=1e-3))

Then we can combine this new callback with the learning rate finder:

    learn.lr_find()

    SuggestedLRs(lr_min=0.06309573650360108, lr_steep=0.004365158267319202)

Imagenette Tutorial - Figure 13

And combine it with fit_one_cycle:

    learn.fit_one_cycle(5, 5e-3)

    epoch  train_loss  valid_loss  accuracy  time
    0      3.094243    3.560097    0.175796  00:15
    1      2.766956    2.633007    0.400000  00:15
    2      2.604495    2.454862    0.549809  00:15
    3      2.513580    2.335537    0.598726  00:15
    4      2.438728    2.277912    0.631338  00:15

Like label smoothing, this is a callback that provides more regularization, so you need to run more epochs before seeing any benefit. Also, our simple implementation does not have all the tricks of fastai's implementation, so make sure to check the official one in callback.mixup!

