From Data to DataLoaders

DataLoaders is a thin class that just stores whatever DataLoader objects you pass to it, and makes them available as train and valid. Although it’s a very simple class, it’s very important in fastai: it provides the data for your model. The key functionality in DataLoaders is provided with just these four lines of code (it has some other minor functionality we’ll skip over for now):

    class DataLoaders(GetAttr):
        def __init__(self, *loaders): self.loaders = loaders
        def __getitem__(self, i): return self.loaders[i]
        train,valid = add_props(lambda i,self: self[i])

jargon: DataLoaders: A fastai class that stores multiple DataLoader objects you pass to it, normally a train and a valid, although it’s possible to have as many as you like. The first two are made available as properties.
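To make those four lines concrete, here is a minimal usage sketch (train_dl and valid_dl are hypothetical names for two existing fastai DataLoader objects):

    dls = DataLoaders(train_dl, valid_dl)      # store the loaders in order
    dls.train is dls[0], dls.valid is dls[1]   # -> (True, True), thanks to add_props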

Later in the book you'll also learn about the Dataset and Datasets classes, which have the same relationship: a Datasets object stores multiple Dataset objects.

To turn our downloaded data into a DataLoaders object we need to tell fastai at least four things:

  • What kinds of data we are working with
  • How to get the list of items
  • How to label these items
  • How to create the validation set

So far we have seen a number of factory methods for particular combinations of these things, which are convenient when you have an application and data structure that happen to fit into those predefined methods. For when you don’t, fastai has an extremely flexible system called the data block API. With this API you can fully customize every stage of the creation of your DataLoaders. Here is what we need to create a DataLoaders for the dataset that we just downloaded:

In [ ]:

    bears = DataBlock(
        blocks=(ImageBlock, CategoryBlock),
        get_items=get_image_files,
        splitter=RandomSplitter(valid_pct=0.2, seed=42),
        get_y=parent_label,
        item_tfms=Resize(128))

Let’s look at each of these arguments in turn. First we provide a tuple where we specify what types we want for the independent and dependent variables:

    blocks=(ImageBlock, CategoryBlock)

The independent variable is the thing we are using to make predictions from, and the dependent variable is our target. In this case, our independent variables are images, and our dependent variables are the categories (type of bear) for each image. We will see many other types of block in the rest of this book.

For this DataLoaders our underlying items will be file paths. We have to tell fastai how to get a list of those files. The get_image_files function takes a path, and returns a list of all of the images in that path (recursively, by default):

    get_items=get_image_files
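
As a quick illustration (the filenames shown here are hypothetical), you can call the function yourself to see the kind of list it returns:

    fns = get_image_files(path)
    fns[:2]
    # e.g. [Path('bears/grizzly/00000001.jpg'), Path('bears/black/00000002.jpg')]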

Often, datasets that you download will already have a validation set defined. Sometimes this is done by placing the images for the training and validation sets into different folders. Sometimes it is done by providing a CSV file in which each filename is listed along with which dataset it should be in. There are many ways that this can be done, and fastai provides a very general approach that allows you to use one of its predefined classes for this, or to write your own. In this case, however, we simply want to split our training and validation sets randomly. We would also like to get the same training/validation split each time we run this notebook, so we fix the random seed. (Computers don't really know how to create truly random numbers; they simply generate lists of numbers that look random. If you provide the same starting point for such a list each time, called the seed, you will get the exact same list every time.)

    splitter=RandomSplitter(valid_pct=0.2, seed=42)
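
For intuition, here is a small sketch (not part of building bears): RandomSplitter returns a function that takes the items and returns a pair of index lists, one for each set:

    splitter = RandomSplitter(valid_pct=0.2, seed=42)
    train_idx, valid_idx = splitter(list(range(10)))
    len(train_idx), len(valid_idx)   # -> (8, 2), and identical on every run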

The independent variable is often referred to as x and the dependent variable is often referred to as y. Here, we are telling fastai what function to call to create the labels in our dataset:

    get_y=parent_label

parent_label is a function provided by fastai that simply gets the name of the folder a file is in. Because we put each of our bear images into folders based on the type of bear, this is going to give us the labels that we need.
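
For instance, on a hypothetical file path (Path is available via the usual fastai imports):

    parent_label(Path('bears/grizzly/00000001.jpg'))   # -> 'grizzly'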

Our images are all different sizes, and this is a problem for deep learning: we don’t feed the model one image at a time but several of them (what we call a mini-batch). To group them in a big array (usually called a tensor) that is going to go through our model, they all need to be of the same size. So, we need to add a transform which will resize these images to the same size. Item transforms are pieces of code that run on each individual item, whether it be an image, category, or so forth. fastai includes many predefined transforms; we use the Resize transform here:

    item_tfms=Resize(128)
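
As a sketch of what this does to a single item (fns[0] stands in for any image path from earlier; passing split_idx=0 makes the transform behave as it does on the training set):

    img = PILImage.create(fns[0])
    resized = Resize(128)(img, split_idx=0)
    resized.size   # -> (128, 128): every item now has the same size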

The DataBlock call above gave us a DataBlock object. This is like a template for creating a DataLoaders. We still need to tell fastai the actual source of our data. In this case, that is the path where the images can be found:

In [ ]:

    dls = bears.dataloaders(path)

A DataLoaders includes validation and training DataLoaders. A DataLoader is a class that provides batches of a few items at a time to the GPU. We'll be learning a lot more about this class in the next chapter. When you loop through a DataLoader, fastai will give you 64 items at a time (by default), all stacked up into a single tensor. We can take a look at a few of those items by calling the show_batch method on a DataLoader:

In [ ]:

    dls.valid.show_batch(max_n=4, nrows=1)

[Figure: dls.valid.show_batch output showing four sample bear images in one row]
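
You can also check the stacking directly. A quick sketch (the shapes assume the default batch size of 64, three color channels, and the 128-pixel Resize above):

    xb, yb = dls.train.one_batch()
    xb.shape, yb.shape   # -> (torch.Size([64, 3, 128, 128]), torch.Size([64]))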

By default Resize crops the images to fit a square shape of the size requested, using the full width or height. This can result in losing some important details. Alternatively, you can ask fastai to pad the images with zeros (black), or squish/stretch them:

In [ ]:

    bears = bears.new(item_tfms=Resize(128, ResizeMethod.Squish))
    dls = bears.dataloaders(path)
    dls.valid.show_batch(max_n=4, nrows=1)

[Figure: show_batch output with images squished to a square]

In [ ]:

    bears = bears.new(item_tfms=Resize(128, ResizeMethod.Pad, pad_mode='zeros'))
    dls = bears.dataloaders(path)
    dls.valid.show_batch(max_n=4, nrows=1)

[Figure: show_batch output with images padded with black borders]

All of these approaches seem somewhat wasteful or problematic. If we squish or stretch the images, they end up as unrealistic shapes, leading to a model that learns that things look different from how they actually are, which we would expect to result in lower accuracy. If we crop the images, then we remove some of the features that allow us to perform recognition. For instance, if we were trying to recognize breeds of dog or cat, we might end up cropping out a key part of the body or the face necessary to distinguish between similar breeds. If we pad the images, then we have a whole lot of empty space, which is just wasted computation for our model and results in a lower effective resolution for the part of the image we actually use.

Instead, what we normally do in practice is to randomly select part of the image, and crop to just that part. On each epoch (which is one complete pass through all of our images in the dataset) we randomly select a different part of each image. This means that our model can learn to focus on, and recognize, different features in our images. It also reflects how images work in the real world: different photos of the same thing may be framed in slightly different ways.

In fact, an entirely untrained neural network knows nothing whatsoever about how images behave. It doesn’t even recognize that when an object is rotated by one degree, it still is a picture of the same thing! So actually training the neural network with examples of images where the objects are in slightly different places and slightly different sizes helps it to understand the basic concept of what an object is, and how it can be represented in an image.

Here’s another example where we replace Resize with RandomResizedCrop, which is the transform that provides the behavior we just described. The most important parameter to pass in is min_scale, which determines how much of the image to select at minimum each time:

In [ ]:

    bears = bears.new(item_tfms=RandomResizedCrop(128, min_scale=0.3))
    dls = bears.dataloaders(path)
    dls.train.show_batch(max_n=4, nrows=1, unique=True)

[Figure: the same image shown four times with different random crops]

We used unique=True to have the same image repeated with different versions of this RandomResizedCrop transform. This is a specific example of a more general technique, called data augmentation.

Data Augmentation

Data augmentation refers to creating random variations of our input data, such that they appear different but do not actually change the meaning of the data. Examples of common data augmentation techniques for images are rotation, flipping, perspective warping, brightness changes, and contrast changes. For natural photo images such as the ones we are using here, a standard set of augmentations that we have found works pretty well is provided by the aug_transforms function. Because our images are now all the same size, we can apply these augmentations to an entire batch of them using the GPU, which will save a lot of time. To tell fastai we want to use these transforms on a batch, we use the batch_tfms parameter (note that we're not using RandomResizedCrop in this example, so you can see the differences more clearly; we're also using double the amount of augmentation compared to the default, for the same reason):

In [ ]:

    bears = bears.new(item_tfms=Resize(128), batch_tfms=aug_transforms(mult=2))
    dls = bears.dataloaders(path)
    dls.train.show_batch(max_n=8, nrows=2, unique=True)

[Figure: the same image shown eight times with different augmentations]
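
If you are curious exactly which transforms you just applied, you can inspect the list that aug_transforms returns (a sketch; the exact contents depend on the arguments and the fastai version):

    [type(t).__name__ for t in aug_transforms(mult=2)]
    # e.g. ['Flip', 'Brightness', ...] plus combined affine/lighting transforms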

Now that we have assembled our data in a format fit for model training, let’s actually train an image classifier using it.