Transfer Learning Tutorial

Author: Sasank Chilamkurthy

Translator: DrDavidS

Reviewer: DrDavidS

In this tutorial, you will learn how to train a network using transfer learning. You can read more about transfer learning in the cs231n notes.

Quoting those notes:

> In practice, very few people train an entire convolutional network from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pretrain a ConvNet on a very large dataset (e.g. ImageNet, which contains 1.2 million images with 1000 categories), and then use the ConvNet either as an initialization or as a fixed feature extractor for the task of interest.

These two major transfer learning scenarios look as follows (a short code sketch contrasting them appears right after the list):

  • Finetuning the convnet: instead of random initialization, we initialize the network with a pretrained network, like one trained on the imagenet 1000 dataset. The rest of the training looks as usual.
  • ConvNet as fixed feature extractor: here, we freeze the weights of the entire network except the final fully connected layer. This last fully connected layer is replaced with a new one with random weights, and only this layer is trained.
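In code, the difference between the two scenarios mostly comes down to which parameters stay trainable and which parameters the optimizer is given. A rough side-by-side sketch (assuming torchvision's resnet18 and a 2-class task; both setups are developed in full later in this tutorial):

    import torch.nn as nn
    import torch.optim as optim
    from torchvision import models

    # Scenario 1: finetuning -- start from pretrained weights and train everything.
    model_ft = models.resnet18(pretrained=True)
    model_ft.fc = nn.Linear(model_ft.fc.in_features, 2)   # new head for 2 classes
    optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

    # Scenario 2: fixed feature extractor -- freeze the backbone, train only the new head.
    model_fx = models.resnet18(pretrained=True)
    for param in model_fx.parameters():
        param.requires_grad = False
    model_fx.fc = nn.Linear(model_fx.fc.in_features, 2)   # new modules default to requires_grad=True
    optimizer_fx = optim.SGD(model_fx.fc.parameters(), lr=0.001, momentum=0.9)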
    # License: BSD
    # Author: Sasank Chilamkurthy
    from __future__ import print_function, division

    import torch
    import torch.nn as nn
    import torch.optim as optim
    from torch.optim import lr_scheduler
    import numpy as np
    import torchvision
    from torchvision import datasets, models, transforms
    import matplotlib.pyplot as plt
    import time
    import os
    import copy

    plt.ion()   # interactive mode

Load Data

We will use the torchvision and torch.utils.data packages for loading the data.

The problem we are going to solve today is to train a model to classify ants and bees. We have about 120 training images each for ants and bees, and 75 validation images for each class. Usually, this is a very small dataset to generalize on if training from scratch. Since we are using transfer learning, we should be able to generalize reasonably well.

This dataset is a very small subset of imagenet.

Note

Download the data here and extract it to the current directory.
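datasets.ImageFolder, used in the next block, expects one sub-folder per class under each split directory. A minimal sanity-check sketch, assuming the archive unpacks into data/hymenoptera_data with ants and bees class folders:

    import os

    data_dir = 'data/hymenoptera_data'
    # Expected layout after extraction (one folder per class, per split):
    #   data/hymenoptera_data/train/ants/*.jpg
    #   data/hymenoptera_data/train/bees/*.jpg
    #   data/hymenoptera_data/val/ants/*.jpg
    #   data/hymenoptera_data/val/bees/*.jpg
    for split in ['train', 'val']:
        for cls in sorted(os.listdir(os.path.join(data_dir, split))):
            n = len(os.listdir(os.path.join(data_dir, split, cls)))
            print('{}/{}: {} images'.format(split, cls, n))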

    # Data augmentation and normalization for training
    # Just normalization for validation
    data_transforms = {
        'train': transforms.Compose([
            transforms.RandomResizedCrop(224),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ]),
        'val': transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ]),
    }

    data_dir = 'data/hymenoptera_data'
    image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
                                              data_transforms[x])
                      for x in ['train', 'val']}
    dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
                                                  shuffle=True, num_workers=4)
                   for x in ['train', 'val']}
    dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
    class_names = image_datasets['train'].classes

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

Visualize a Few Images

Let's visualize a few training images so as to understand the data augmentations.

    def imshow(inp, title=None):
        """Imshow for Tensor."""
        inp = inp.numpy().transpose((1, 2, 0))
        mean = np.array([0.485, 0.456, 0.406])
        std = np.array([0.229, 0.224, 0.225])
        inp = std * inp + mean
        inp = np.clip(inp, 0, 1)
        plt.imshow(inp)
        if title is not None:
            plt.title(title)
        plt.pause(0.001)  # pause a bit so that plots are updated


    # Get a batch of training data
    inputs, classes = next(iter(dataloaders['train']))

    # Make a grid from batch
    out = torchvision.utils.make_grid(inputs)

    imshow(out, title=[class_names[x] for x in classes])

img/sphx_glr_transfer_learning_tutorial_001.png

Training the Model

Now, let's write a general function to train a model. Here, we will illustrate:

  • Scheduling the learning rate
  • Saving the best model

In the function below, the scheduler parameter is an LR scheduler object from torch.optim.lr_scheduler.
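For example, the StepLR scheduler used later in this tutorial multiplies the learning rate by gamma every step_size epochs. A minimal sketch of that behaviour on a throwaway optimizer (opt and sched are illustrative names, not part of the tutorial's training code):

    import torch
    import torch.optim as optim
    from torch.optim import lr_scheduler

    params = [torch.zeros(1, requires_grad=True)]   # dummy parameter
    opt = optim.SGD(params, lr=0.001)
    sched = lr_scheduler.StepLR(opt, step_size=7, gamma=0.1)

    for epoch in range(21):
        # ... one epoch of training would run here ...
        opt.step()        # step the optimizer before the scheduler
        sched.step()      # multiplies the lr by 0.1 once every 7 calls
        print(epoch, sched.get_last_lr())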

    def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
        since = time.time()

        best_model_wts = copy.deepcopy(model.state_dict())
        best_acc = 0.0

        for epoch in range(num_epochs):
            print('Epoch {}/{}'.format(epoch, num_epochs - 1))
            print('-' * 10)

            # Each epoch has a training and validation phase
            for phase in ['train', 'val']:
                if phase == 'train':
                    model.train()  # Set model to training mode
                else:
                    model.eval()   # Set model to evaluate mode

                running_loss = 0.0
                running_corrects = 0

                # Iterate over data.
                for inputs, labels in dataloaders[phase]:
                    inputs = inputs.to(device)
                    labels = labels.to(device)

                    # zero the parameter gradients
                    optimizer.zero_grad()

                    # forward
                    # track history if only in train
                    with torch.set_grad_enabled(phase == 'train'):
                        outputs = model(inputs)
                        _, preds = torch.max(outputs, 1)
                        loss = criterion(outputs, labels)

                        # backward + optimize only if in training phase
                        if phase == 'train':
                            loss.backward()
                            optimizer.step()

                    # statistics
                    running_loss += loss.item() * inputs.size(0)
                    running_corrects += torch.sum(preds == labels.data)
                if phase == 'train':
                    scheduler.step()

                epoch_loss = running_loss / dataset_sizes[phase]
                epoch_acc = running_corrects.double() / dataset_sizes[phase]

                print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                    phase, epoch_loss, epoch_acc))

                # deep copy the model
                if phase == 'val' and epoch_acc > best_acc:
                    best_acc = epoch_acc
                    best_model_wts = copy.deepcopy(model.state_dict())

            print()

        time_elapsed = time.time() - since
        print('Training complete in {:.0f}m {:.0f}s'.format(
            time_elapsed // 60, time_elapsed % 60))
        print('Best val Acc: {:4f}'.format(best_acc))

        # load best model weights
        model.load_state_dict(best_model_wts)
        return model

Visualizing the Model Predictions

A generic function to display predictions for a few images.

    def visualize_model(model, num_images=6):
        was_training = model.training
        model.eval()
        images_so_far = 0
        fig = plt.figure()

        with torch.no_grad():
            for i, (inputs, labels) in enumerate(dataloaders['val']):
                inputs = inputs.to(device)
                labels = labels.to(device)

                outputs = model(inputs)
                _, preds = torch.max(outputs, 1)

                for j in range(inputs.size()[0]):
                    images_so_far += 1
                    ax = plt.subplot(num_images//2, 2, images_so_far)
                    ax.axis('off')
                    ax.set_title('predicted: {}'.format(class_names[preds[j]]))
                    imshow(inputs.cpu().data[j])

                    if images_so_far == num_images:
                        model.train(mode=was_training)
                        return
            model.train(mode=was_training)

Finetuning the ConvNet

Load a pretrained model and reset the final fully connected layer.

    model_ft = models.resnet18(pretrained=True)
    num_ftrs = model_ft.fc.in_features
    # Here the size of each output sample is set to 2.
    # Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).
    model_ft.fc = nn.Linear(num_ftrs, 2)

    model_ft = model_ft.to(device)

    criterion = nn.CrossEntropyLoss()

    # Observe that all parameters are being optimized
    optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

    # Decay LR by a factor of 0.1 every 7 epochs
    exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)

Train and Evaluate

Training should take around 15-25 minutes on CPU. On GPU, it takes less than a minute.

    model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
                           num_epochs=25)

Out:

    Epoch 0/24
    ----------
    train Loss: 0.6751 Acc: 0.7049
    val Loss: 0.1834 Acc: 0.9346
    Epoch 1/24
    ----------
    train Loss: 0.5892 Acc: 0.7746
    val Loss: 1.0048 Acc: 0.6667
    Epoch 2/24
    ----------
    train Loss: 0.6568 Acc: 0.7459
    val Loss: 0.6047 Acc: 0.8366
    Epoch 3/24
    ----------
    train Loss: 0.4196 Acc: 0.8320
    val Loss: 0.4388 Acc: 0.8562
    Epoch 4/24
    ----------
    train Loss: 0.5883 Acc: 0.8033
    val Loss: 0.4013 Acc: 0.8889
    Epoch 5/24
    ----------
    train Loss: 0.6684 Acc: 0.7705
    val Loss: 0.2666 Acc: 0.9412
    Epoch 6/24
    ----------
    train Loss: 0.5308 Acc: 0.7787
    val Loss: 0.4803 Acc: 0.8693
    Epoch 7/24
    ----------
    train Loss: 0.3464 Acc: 0.8566
    val Loss: 0.2385 Acc: 0.8954
    Epoch 8/24
    ----------
    train Loss: 0.4586 Acc: 0.7910
    val Loss: 0.2064 Acc: 0.9020
    Epoch 9/24
    ----------
    train Loss: 0.3438 Acc: 0.8402
    val Loss: 0.2336 Acc: 0.9020
    Epoch 10/24
    ----------
    train Loss: 0.2405 Acc: 0.9016
    val Loss: 0.1866 Acc: 0.9346
    Epoch 11/24
    ----------
    train Loss: 0.2335 Acc: 0.8852
    val Loss: 0.2152 Acc: 0.9216
    Epoch 12/24
    ----------
    train Loss: 0.3441 Acc: 0.8402
    val Loss: 0.2298 Acc: 0.9020
    Epoch 13/24
    ----------
    train Loss: 0.2513 Acc: 0.9098
    val Loss: 0.2204 Acc: 0.9020
    Epoch 14/24
    ----------
    train Loss: 0.2745 Acc: 0.8934
    val Loss: 0.2439 Acc: 0.8889
    Epoch 15/24
    ----------
    train Loss: 0.2978 Acc: 0.8607
    val Loss: 0.2817 Acc: 0.8497
    Epoch 16/24
    ----------
    train Loss: 0.2560 Acc: 0.8975
    val Loss: 0.1933 Acc: 0.9281
    Epoch 17/24
    ----------
    train Loss: 0.2326 Acc: 0.9098
    val Loss: 0.2176 Acc: 0.9085
    Epoch 18/24
    ----------
    train Loss: 0.2274 Acc: 0.9016
    val Loss: 0.2084 Acc: 0.9346
    Epoch 19/24
    ----------
    train Loss: 0.3091 Acc: 0.8689
    val Loss: 0.2270 Acc: 0.9150
    Epoch 20/24
    ----------
    train Loss: 0.2540 Acc: 0.8975
    val Loss: 0.1957 Acc: 0.9216
    Epoch 21/24
    ----------
    train Loss: 0.3203 Acc: 0.8648
    val Loss: 0.1969 Acc: 0.9216
    Epoch 22/24
    ----------
    train Loss: 0.3048 Acc: 0.8443
    val Loss: 0.1981 Acc: 0.9346
    Epoch 23/24
    ----------
    train Loss: 0.2526 Acc: 0.9016
    val Loss: 0.2415 Acc: 0.8889
    Epoch 24/24
    ----------
    train Loss: 0.3041 Acc: 0.8689
    val Loss: 0.1894 Acc: 0.9346
    Training complete in 1m 7s
    Best val Acc: 0.941176

    visualize_model(model_ft)

img/sphx_glr_transfer_learning_tutorial_002.png

ConvNet as Fixed Feature Extractor

Here, we need to freeze the entire network except the final layer. We set requires_grad = False to freeze the parameters so that no gradients are computed for them in backward().

You can read more about this in the documentation here.

    model_conv = torchvision.models.resnet18(pretrained=True)
    for param in model_conv.parameters():
        param.requires_grad = False

    # Parameters of newly constructed modules have requires_grad=True by default
    num_ftrs = model_conv.fc.in_features
    model_conv.fc = nn.Linear(num_ftrs, 2)

    model_conv = model_conv.to(device)

    criterion = nn.CrossEntropyLoss()

    # Observe that only parameters of final layer are being optimized as
    # opposed to before.
    optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)

    # Decay LR by a factor of 0.1 every 7 epochs
    exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)
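As an optional sanity check on the freezing above, you can list which parameters still require gradients; with model_conv built as in the previous block, only the new fc layer's weight and bias should appear:

    # Only the replaced fc layer should remain trainable after freezing.
    trainable = [name for name, p in model_conv.named_parameters() if p.requires_grad]
    print(trainable)   # expected: ['fc.weight', 'fc.bias']

    n_total = sum(p.numel() for p in model_conv.parameters())
    n_train = sum(p.numel() for p in model_conv.parameters() if p.requires_grad)
    print('{} trainable parameters out of {}'.format(n_train, n_total))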

Train and Evaluate

On CPU this will take about half the time compared to the previous scenario. This is expected, since gradients don't need to be computed for most of the network. The forward pass does still need to be computed, however.

    model_conv = train_model(model_conv, criterion, optimizer_conv,
                             exp_lr_scheduler, num_epochs=25)

Out:

    Epoch 0/24
    ----------
    train Loss: 0.6073 Acc: 0.6598
    val Loss: 0.2511 Acc: 0.8954
    Epoch 1/24
    ----------
    train Loss: 0.5457 Acc: 0.7459
    val Loss: 0.5169 Acc: 0.7647
    Epoch 2/24
    ----------
    train Loss: 0.4023 Acc: 0.8320
    val Loss: 0.2361 Acc: 0.9150
    Epoch 3/24
    ----------
    train Loss: 0.5150 Acc: 0.7869
    val Loss: 0.5423 Acc: 0.8039
    Epoch 4/24
    ----------
    train Loss: 0.4142 Acc: 0.8115
    val Loss: 0.2257 Acc: 0.9216
    Epoch 5/24
    ----------
    train Loss: 0.6364 Acc: 0.7418
    val Loss: 0.3133 Acc: 0.8889
    Epoch 6/24
    ----------
    train Loss: 0.5543 Acc: 0.7664
    val Loss: 0.1959 Acc: 0.9412
    Epoch 7/24
    ----------
    train Loss: 0.3552 Acc: 0.8443
    val Loss: 0.2013 Acc: 0.9477
    Epoch 8/24
    ----------
    train Loss: 0.3538 Acc: 0.8525
    val Loss: 0.1825 Acc: 0.9542
    Epoch 9/24
    ----------
    train Loss: 0.3954 Acc: 0.8402
    val Loss: 0.1959 Acc: 0.9477
    Epoch 10/24
    ----------
    train Loss: 0.3615 Acc: 0.8443
    val Loss: 0.1779 Acc: 0.9542
    Epoch 11/24
    ----------
    train Loss: 0.3951 Acc: 0.8320
    val Loss: 0.1730 Acc: 0.9542
    Epoch 12/24
    ----------
    train Loss: 0.4111 Acc: 0.8156
    val Loss: 0.2573 Acc: 0.9150
    Epoch 13/24
    ----------
    train Loss: 0.3073 Acc: 0.8525
    val Loss: 0.1901 Acc: 0.9477
    Epoch 14/24
    ----------
    train Loss: 0.3288 Acc: 0.8279
    val Loss: 0.2114 Acc: 0.9346
    Epoch 15/24
    ----------
    train Loss: 0.3472 Acc: 0.8525
    val Loss: 0.1989 Acc: 0.9412
    Epoch 16/24
    ----------
    train Loss: 0.3309 Acc: 0.8689
    val Loss: 0.1757 Acc: 0.9412
    Epoch 17/24
    ----------
    train Loss: 0.3963 Acc: 0.8197
    val Loss: 0.1881 Acc: 0.9608
    Epoch 18/24
    ----------
    train Loss: 0.3332 Acc: 0.8484
    val Loss: 0.2175 Acc: 0.9412
    Epoch 19/24
    ----------
    train Loss: 0.3419 Acc: 0.8320
    val Loss: 0.1932 Acc: 0.9412
    Epoch 20/24
    ----------
    train Loss: 0.3471 Acc: 0.8689
    val Loss: 0.1851 Acc: 0.9477
    Epoch 21/24
    ----------
    train Loss: 0.2843 Acc: 0.8811
    val Loss: 0.1772 Acc: 0.9477
    Epoch 22/24
    ----------
    train Loss: 0.4024 Acc: 0.8402
    val Loss: 0.1818 Acc: 0.9542
    Epoch 23/24
    ----------
    train Loss: 0.2409 Acc: 0.8975
    val Loss: 0.2211 Acc: 0.9346
    Epoch 24/24
    ----------
    train Loss: 0.3838 Acc: 0.8238
    val Loss: 0.1918 Acc: 0.9412
    Training complete in 0m 34s
    Best val Acc: 0.960784

    visualize_model(model_conv)

    plt.ioff()
    plt.show()

img/sphx_glr_transfer_learning_tutorial_003.png

Total running time of the script: (1 minutes 53.655 seconds)
