fastai applications - quick start


fastai’s applications all use the same basic steps and code:

  • Create appropriate DataLoaders
  • Create a Learner
  • Call a fit method
  • Make predictions or view results.

In this quick start, we’ll show these steps for a wide range of different applications and datasets. As you’ll see, the code in each case is extremely similar, despite the very different models and data being used.

Computer vision classification

The code below does the following things:

  1. A dataset called the Oxford-IIIT Pet Dataset, containing 7,349 images of cats and dogs from 37 different breeds, will be downloaded from the fast.ai datasets collection to the GPU server you are using, and will then be extracted.
  2. A pretrained model, already trained on 1.3 million images using a competition-winning architecture, will be downloaded from the internet.
  3. The pretrained model will be fine-tuned using the latest advances in transfer learning, to create a model that is specially customized for recognizing dogs and cats.

The first two steps only need to be run once. If you run the code again, it will use the dataset and model that have already been downloaded, rather than downloading them again.
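For example, here is a minimal sketch of that caching behaviour: untar_data stores its downloads locally (by default under ~/.fastai/data) and simply reuses them on later calls.

    from fastai.vision.all import *

    # The first call downloads and extracts the dataset; later calls reuse the cached copy
    path = untar_data(URLs.PETS)
    path_again = untar_data(URLs.PETS)  # no second download
    assert path == path_again
    print(path)  # the local location of the extracted dataset

With that in mind, here is the full example: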

    from fastai.vision.all import *

    path = untar_data(URLs.PETS)/'images'
    # Cat images in this dataset have filenames starting with an uppercase letter
    def is_cat(x): return x[0].isupper()
    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2, seed=42,
        label_func=is_cat, item_tfms=Resize(224))
    learn = cnn_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)
    epoch  train_loss  valid_loss  error_rate  time
    0      0.173790    0.018827    0.005413    00:12

    epoch  train_loss  valid_loss  error_rate  time
    0      0.064295    0.013404    0.005413    00:14

You can do inference with your model with the predict method:

    img = PILImage.create('images/cat.jpg')
    img

Quick Start - Figure 2

    is_cat,_,probs = learn.predict(img)
    print(f"Is this a cat?: {is_cat}.")
    print(f"Probability it's a cat: {probs[1].item():.6f}")

    Is this a cat?: True.
    Probability it's a cat: 0.999722
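Beyond single predictions, you can also inspect the classifier's results as a whole. Here is a minimal sketch using fastai's ClassificationInterpretation class (the classification counterpart of the SegmentationInterpretation class used below), assuming the learn object trained above:

    # Summarize validation-set performance and show the worst-predicted images
    interp = ClassificationInterpretation.from_learner(learn)
    interp.plot_confusion_matrix()
    interp.plot_top_losses(4)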

Computer vision segmentation

Here is how we can train a segmentation model with fastai, using a subset of the Camvid dataset:

    from fastai.vision.all import *

    path = untar_data(URLs.CAMVID_TINY)
    dls = SegmentationDataLoaders.from_label_func(
        path, bs=8, fnames = get_image_files(path/"images"),
        # Each image's mask lives in labels/ with '_P' added before the file extension
        label_func = lambda o: path/'labels'/f'{o.stem}_P{o.suffix}',
        codes = np.loadtxt(path/'codes.txt', dtype=str)
    )
    learn = unet_learner(dls, resnet34)
    learn.fine_tune(8)
    epoch  train_loss  valid_loss  time
    0      2.882460    2.096923    00:03

    epoch  train_loss  valid_loss  time
    0      1.602270    1.543582    00:02
    1      1.417732    1.225782    00:02
    2      1.307454    1.071090    00:02
    3      1.170338    0.884501    00:02
    4      1.047036    0.799820    00:02
    5      0.947965    0.754801    00:02
    6      0.868178    0.728161    00:02
    7      0.804939    0.720942    00:02

We can visualize how well it achieved its task by asking the model to color-code each pixel of an image.

    learn.show_results(max_n=6, figsize=(7,8))

Quick Start - Figure 3

Or we can plot the k instances that contributed the most to the validation loss by using the SegmentationInterpretation class.

    interp = SegmentationInterpretation.from_learner(learn)
    interp.plot_top_losses(k=2)

Quick Start - Figure 4

Natural language processing

Here is all of the code necessary to train a model that can classify the sentiment of a movie review better than anything that existed in the world just five years ago:

    from fastai.text.all import *

    dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')
    learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
    learn.fine_tune(4, 1e-2)
    epoch  train_loss  valid_loss  accuracy  time
    0      0.594912    0.407416    0.823640  01:35

    epoch  train_loss  valid_loss  accuracy  time
    0      0.268259    0.316242    0.876000  03:03
    1      0.184861    0.246242    0.898080  03:10
    2      0.136392    0.220086    0.918200  03:16
    3      0.106423    0.191092    0.931360  03:15

Predictions are done with predict, as for computer vision:

    learn.predict("I really liked that movie!")

    ('pos', tensor(1), tensor([0.0041, 0.9959]))
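To classify many reviews at once rather than one at a time, a common fastai pattern is to build a test DataLoader and call get_preds. Here is a minimal sketch, assuming the learn object trained above (the review strings are just made-up examples):

    # Batch inference: wrap new texts in a test DataLoader and get class probabilities
    reviews = ["I really liked that movie!",
               "Dull, predictable, and far too long."]
    test_dl = learn.dls.test_dl(reviews)
    probs, _ = learn.get_preds(dl=test_dl)  # one (neg, pos) probability pair per review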

Tabular

Building models from plain tabular data is done using the same basic steps as the previous models. Here is the code necessary to train a model that will predict whether a person is a high-income earner, based on their socioeconomic background:

    from fastai.tabular.all import *

    path = untar_data(URLs.ADULT_SAMPLE)
    dls = TabularDataLoaders.from_csv(path/'adult.csv', path=path, y_names="salary",
        cat_names = ['workclass', 'education', 'marital-status', 'occupation',
                     'relationship', 'race'],
        cont_names = ['age', 'fnlwgt', 'education-num'],
        procs = [Categorify, FillMissing, Normalize])
    learn = tabular_learner(dls, metrics=accuracy)
    learn.fit_one_cycle(2)
    epoch  train_loss  valid_loss  accuracy  time
    0      0.372298    0.359698    0.829392  00:06
    1      0.357530    0.349440    0.837377  00:06
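Predictions work here as well: a tabular learner's predict method accepts a single row of a DataFrame. Here is a minimal sketch, assuming the path and learn objects from above and that pandas is available:

    import pandas as pd

    # Predict the salary class for the first row of the original CSV
    df = pd.read_csv(path/'adult.csv')
    row, clas, probs = learn.predict(df.iloc[0])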

Recommendation systems

Recommendation systems are very important, particularly in e-commerce. Companies like Amazon and Netflix try hard to recommend products or movies that users might like. Here’s how to train a model that will predict movies people might like, based on their previous viewing habits, using the MovieLens dataset:

    from fastai.collab import *

    path = untar_data(URLs.ML_SAMPLE)
    dls = CollabDataLoaders.from_csv(path/'ratings.csv')
    learn = collab_learner(dls, y_range=(0.5,5.5))
    learn.fine_tune(6)
    epoch  train_loss  valid_loss  time
    0      1.497551    1.435720    00:00

    epoch  train_loss  valid_loss  time
    0      1.332337    1.351769    00:00
    1      1.180177    1.046801    00:00
    2      0.913091    0.799319    00:00
    3      0.749806    0.731218    00:00
    4      0.686577    0.715372    00:00
    5      0.665683    0.713309    00:00

We can use the same show_results call we saw earlier to view a few examples of user and movie IDs, actual ratings, and predictions:

    learn.show_results()
        userId  movieId  rating  rating_pred
    0   5.0     3.0      2.0     3.985477
    1   1.0     62.0     4.0     3.629225
    2   91.0    81.0     1.0     3.476280
    3   48.0    26.0     2.0     4.043919
    4   75.0    54.0     3.0     4.023057
    5   42.0    22.0     3.0     3.509050
    6   40.0    59.0     4.0     3.686552
    7   63.0    77.0     3.0     2.862713
    8   32.0    61.0     4.0     4.356578
