Facial Keypoint Detection

Author: ssz95

Date: 2021.01

Abstract: This tutorial demonstrates how to implement facial keypoint detection with PaddlePaddle.

1. Introduction

In image processing, a keypoint is essentially a kind of feature: an abstract description of a fixed region or of spatial relationships, capturing the composition and context within a certain neighborhood. It is not merely a point or a position; it also encodes how that point relates to its surroundings. The goal of keypoint detection is to have a computer locate the coordinates of such points in an image. As a fundamental task in computer vision, keypoint detection is of critical importance to higher-level tasks such as recognition and classification.

Keypoint detection methods fall broadly into two types: one solves the problem by directly regressing the coordinates; the other models each keypoint as a heatmap and recovers its position by regressing the heatmap distribution through a per-pixel classification task. Both are means to the same end: finding where the points lie in the image. A short sketch contrasting the two label encodings follows.
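To make the two encodings concrete, here is a minimal NumPy sketch. It is an illustration added at this point, not code from the pipeline below; the helper names and the Gaussian sigma are our own choices.

import numpy as np

def keypoint_as_regression_target(x, y, img_size=96):
    # coordinate regression: the label is simply the normalized coordinate pair
    return np.array([x, y], dtype='float32') / img_size

def keypoint_as_heatmap(x, y, img_size=96, sigma=2.0):
    # heatmap encoding: a 2D Gaussian centered at the keypoint;
    # the network then regresses this map pixel by pixel
    xs, ys = np.meshgrid(np.arange(img_size), np.arange(img_size))
    return np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))

target_reg = keypoint_as_regression_target(66, 39)   # shape (2,)
target_map = keypoint_as_heatmap(66, 39)             # shape (96, 96)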

Facial keypoint detection is a successful application of these methods. This tutorial briefly shows how to implement it with the open-source PaddlePaddle framework, using the first approach: coordinate regression. We will use the Paddle 2.0 API, whose integrated training interface makes it easy to train and evaluate the model.

2. Environment Setup

This tutorial is written for Paddle 2.0. If your environment differs, please first install Paddle 2.0 following the official installation guide.

If you are on a CPU-only machine, install the CPU build of Paddle 2.0 and pass the corresponding device name to paddle.set_device().

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os

import paddle
from paddle.io import Dataset
from paddle.vision.transforms import transforms
from paddle.vision.models import resnet18
from paddle.nn import functional as F

print(paddle.__version__)
# device = paddle.set_device('cpu')
device = paddle.set_device('gpu')
2.0.0

3. Dataset

3.1 Dataset Download

This example uses the dataset from Kaggle's official Facial Keypoints Detection challenge: https://www.kaggle.com/c/facial-keypoints-detection

The official dataset packs the face images and annotations into csv files, which we read with pandas. The files are: training.csv, which contains the face images and keypoint coordinates for training; test.csv, which contains the face images for testing, without keypoint annotations; and IdLookupTable.csv, which maps each test-set keypoint to its feature name.

Each image is 96 by 96 pixels, and there are 15 keypoints to detect.
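Before defining the Dataset, it can help to peek at the raw csv with pandas. This is a small exploratory sketch added for illustration; it assumes the archives have already been unpacked to data/data60 (as done in the next cell) and that the columns follow the Kaggle schema:

df = pd.read_csv('./data/data60/training.csv')
print(df.shape)              # (n_samples, 31): 30 coordinate columns plus 'Image'
print(list(df.columns[:2]))  # e.g. ['left_eye_center_x', 'left_eye_center_y']
# each 'Image' cell is a space-separated string of 96*96 = 9216 gray values
print(len(df['Image'].iloc[0].split(' ')))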

!unzip -o ./test.zip -d data/data60
!unzip -o ./training.zip -d data/data60

unzip: cannot find or open ./test.zip, ./test.zip.zip or ./test.zip.ZIP.
unzip: cannot find or open ./training.zip, ./training.zip.zip or ./training.zip.ZIP.

3.2 Dataset Definition

PaddlePaddle's unified data loading scheme is Dataset (dataset definition) + DataLoader (multi-process data loading).

First we define the dataset. This mainly means implementing a new Dataset class that inherits from the parent class paddle.io.Dataset and implements its two abstract methods, __getitem__ and __len__. A minimal skeleton of that contract is shown next, followed by the full FaceDataset for this example.
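For reference, this is a minimal sketch of the contract; MyDataset and its fields are illustrative names, not part of this example:

from paddle.io import Dataset

class MyDataset(Dataset):
    # minimal contract: __getitem__ returns one (sample, label) pair,
    # __len__ returns the dataset size
    def __init__(self, samples, labels):
        super(MyDataset, self).__init__()
        self.samples = samples
        self.labels = labels

    def __getitem__(self, idx):
        return self.samples[idx], self.labels[idx]

    def __len__(self):
        return len(self.samples)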

Train_Dir = './data/data60/training.csv'
Test_Dir = './data/data60/test.csv'
lookid_dir = './data/data60/IdLookupTable.csv'

class ImgTransforms(object):
    """
    Image preprocessing helper: expands a grayscale image
    (96, 96) => (96, 96, 3) and converts the layout from HWC to CHW.
    """
    def __init__(self, fmt):
        self.format = fmt

    def __call__(self, img):
        if len(img.shape) == 2:
            img = np.expand_dims(img, axis=2)
        img = img.transpose(self.format)

        if img.shape[0] == 1:
            img = np.repeat(img, 3, axis=0)
        return img

class FaceDataset(Dataset):
    def __init__(self, data_path, mode='train', val_split=0.2):
        self.mode = mode
        assert self.mode in ['train', 'val', 'test'], \
            "mode should be 'train', 'val' or 'test', but got {}".format(self.mode)
        self.data_source = pd.read_csv(data_path)
        # Clean the data: many samples are only partially annotated.
        # Two strategies are possible:
        # 1) forward-fill missing keypoints from the previous sample
        # self.data_source.fillna(method='ffill', inplace=True)
        # 2) drop samples that contain unannotated keypoints
        self.data_source.dropna(how="any", inplace=True)
        self.data_label_all = self.data_source.drop('Image', axis=1)

        # split into training and validation sets
        if self.mode in ['train', 'val']:
            np.random.seed(43)
            data_len = len(self.data_source)
            # random split
            shuffled_indices = np.random.permutation(data_len)
            # sequential split
            # shuffled_indices = np.arange(data_len)
            self.shuffled_indices = shuffled_indices
            val_set_size = int(data_len * val_split)
            if self.mode == 'val':
                val_indices = shuffled_indices[:val_set_size]
                self.data_img = self.data_source.reindex().iloc[val_indices]
                self.data_label = self.data_label_all.reindex().iloc[val_indices]
            elif self.mode == 'train':
                train_indices = shuffled_indices[val_set_size:]
                self.data_img = self.data_source.reindex().iloc[train_indices]
                self.data_label = self.data_label_all.reindex().iloc[train_indices]
        elif self.mode == 'test':
            self.data_img = self.data_source
            self.data_label = self.data_label_all

        self.transforms = transforms.Compose([
            ImgTransforms((2, 0, 1))
        ])

    # return one sample and its label per iteration
    def __getitem__(self, idx):
        img = self.data_img['Image'].iloc[idx].split(' ')
        img = ['0' if x == '' else x for x in img]
        img = np.array(img, dtype='float32').reshape(96, 96)
        img = self.transforms(img)
        label = np.array(self.data_label.iloc[idx, :], dtype='float32') / 96
        return img, label

    # return the total number of samples
    def __len__(self):
        return len(self.data_img)

# training and validation datasets
train_dataset = FaceDataset(Train_Dir, mode='train')
val_dataset = FaceDataset(Train_Dir, mode='val')

# test dataset
test_dataset = FaceDataset(Test_Dir, mode='test')
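As a quick sanity check (an added sketch, not in the original notebook), the dataset can be wrapped in a paddle.io.DataLoader and one batch inspected; Model.fit below builds such a loader internally when given a Dataset:

# wrap the dataset and fetch a single batch to verify shapes
loader = paddle.io.DataLoader(train_dataset, batch_size=32, shuffle=True)
for imgs, labels in loader:
    print(imgs.shape)    # expected: [32, 3, 96, 96]
    print(labels.shape)  # expected: [32, 30]
    break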

3.3 Displaying Dataset Samples

With the Dataset implemented, let's check that it behaves as expected. Since a Dataset can be indexed and iterated, we read samples from it and display them with matplotlib. The keypoint coordinates were normalized in the dataset, so here we multiply them by the image size to restore the original scale, and draw the points onto the image with the scatter function.

def plot_sample(x, y, axis):
    img = x.reshape(96, 96)
    axis.imshow(img, cmap='gray')
    axis.scatter(y[0::2], y[1::2], marker='x', s=10, color='b')

fig = plt.figure(figsize=(10, 7))
fig.subplots_adjust(
    left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)

# show 16 random samples
for i in range(16):
    axis = fig.add_subplot(4, 4, i+1, xticks=[], yticks=[])
    idx = np.random.randint(train_dataset.__len__())
    # print(idx)
    img, label = train_dataset[idx]
    label = label * 96
    plot_sample(img[0], label, axis)
plt.show()

../../../_images/landmark_detection_9_0.png

4. Model Definition

Here we use the resnet18 network defined in paddle.vision.models. For the ImageNet classification task it maps images to 1000 classes, so we append fully connected layers after the backbone to map the 1000-dimensional output vector to 30 dimensions, corresponding to the x and y coordinates of the 15 keypoints.

class FaceNet(paddle.nn.Layer):
    def __init__(self, num_keypoints, pretrained=False):
        super(FaceNet, self).__init__()
        self.backbone = resnet18(pretrained)
        self.outLayer1 = paddle.nn.Sequential(
            paddle.nn.Linear(1000, 512),
            paddle.nn.ReLU(),
            paddle.nn.Dropout(0.1))
        self.outLayer2 = paddle.nn.Linear(512, num_keypoints*2)

    def forward(self, inputs):
        out = self.backbone(inputs)
        out = self.outLayer1(out)
        out = self.outLayer2(out)
        return out
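Before moving on, a quick shape check (an added sketch) confirms that the network maps a 96 x 96 input to 15*2 = 30 output values:

# run a random fake image through the network and inspect the output shape
net = FaceNet(num_keypoints=15)
fake_img = paddle.randn([1, 3, 96, 96], dtype='float32')
print(net(fake_img).shape)  # expected: [1, 30]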

4.1 Model Visualization

We call the summary API provided by PaddlePaddle to visualize the assembled model, which makes it easy to inspect and confirm the model structure and parameter information.

from paddle.static import InputSpec

num_keypoints = 15
model = paddle.Model(FaceNet(num_keypoints))
model.summary((1, 3, 96, 96))
-------------------------------------------------------------------------------
Layer (type) Input Shape Output Shape Param #
===============================================================================
Conv2D-1 [[1, 3, 96, 96]] [1, 64, 48, 48] 9,408
BatchNorm2D-1 [[1, 64, 48, 48]] [1, 64, 48, 48] 256
ReLU-1 [[1, 64, 48, 48]] [1, 64, 48, 48] 0
MaxPool2D-1 [[1, 64, 48, 48]] [1, 64, 24, 24] 0
Conv2D-2 [[1, 64, 24, 24]] [1, 64, 24, 24] 36,864
BatchNorm2D-2 [[1, 64, 24, 24]] [1, 64, 24, 24] 256
ReLU-2 [[1, 64, 24, 24]] [1, 64, 24, 24] 0
Conv2D-3 [[1, 64, 24, 24]] [1, 64, 24, 24] 36,864
BatchNorm2D-3 [[1, 64, 24, 24]] [1, 64, 24, 24] 256
BasicBlock-1 [[1, 64, 24, 24]] [1, 64, 24, 24] 0
Conv2D-4 [[1, 64, 24, 24]] [1, 64, 24, 24] 36,864
BatchNorm2D-4 [[1, 64, 24, 24]] [1, 64, 24, 24] 256
ReLU-3 [[1, 64, 24, 24]] [1, 64, 24, 24] 0
Conv2D-5 [[1, 64, 24, 24]] [1, 64, 24, 24] 36,864
BatchNorm2D-5 [[1, 64, 24, 24]] [1, 64, 24, 24] 256
BasicBlock-2 [[1, 64, 24, 24]] [1, 64, 24, 24] 0
Conv2D-7 [[1, 64, 24, 24]] [1, 128, 12, 12] 73,728
BatchNorm2D-7 [[1, 128, 12, 12]] [1, 128, 12, 12] 512
ReLU-4 [[1, 128, 12, 12]] [1, 128, 12, 12] 0
Conv2D-8 [[1, 128, 12, 12]] [1, 128, 12, 12] 147,456
BatchNorm2D-8 [[1, 128, 12, 12]] [1, 128, 12, 12] 512
Conv2D-6 [[1, 64, 24, 24]] [1, 128, 12, 12] 8,192
BatchNorm2D-6 [[1, 128, 12, 12]] [1, 128, 12, 12] 512
BasicBlock-3 [[1, 64, 24, 24]] [1, 128, 12, 12] 0
Conv2D-9 [[1, 128, 12, 12]] [1, 128, 12, 12] 147,456
BatchNorm2D-9 [[1, 128, 12, 12]] [1, 128, 12, 12] 512
ReLU-5 [[1, 128, 12, 12]] [1, 128, 12, 12] 0
Conv2D-10 [[1, 128, 12, 12]] [1, 128, 12, 12] 147,456
BatchNorm2D-10 [[1, 128, 12, 12]] [1, 128, 12, 12] 512
BasicBlock-4 [[1, 128, 12, 12]] [1, 128, 12, 12] 0
Conv2D-12 [[1, 128, 12, 12]] [1, 256, 6, 6] 294,912
BatchNorm2D-12 [[1, 256, 6, 6]] [1, 256, 6, 6] 1,024
ReLU-6 [[1, 256, 6, 6]] [1, 256, 6, 6] 0
Conv2D-13 [[1, 256, 6, 6]] [1, 256, 6, 6] 589,824
BatchNorm2D-13 [[1, 256, 6, 6]] [1, 256, 6, 6] 1,024
Conv2D-11 [[1, 128, 12, 12]] [1, 256, 6, 6] 32,768
BatchNorm2D-11 [[1, 256, 6, 6]] [1, 256, 6, 6] 1,024
BasicBlock-5 [[1, 128, 12, 12]] [1, 256, 6, 6] 0
Conv2D-14 [[1, 256, 6, 6]] [1, 256, 6, 6] 589,824
BatchNorm2D-14 [[1, 256, 6, 6]] [1, 256, 6, 6] 1,024
ReLU-7 [[1, 256, 6, 6]] [1, 256, 6, 6] 0
Conv2D-15 [[1, 256, 6, 6]] [1, 256, 6, 6] 589,824
BatchNorm2D-15 [[1, 256, 6, 6]] [1, 256, 6, 6] 1,024
BasicBlock-6 [[1, 256, 6, 6]] [1, 256, 6, 6] 0
Conv2D-17 [[1, 256, 6, 6]] [1, 512, 3, 3] 1,179,648
BatchNorm2D-17 [[1, 512, 3, 3]] [1, 512, 3, 3] 2,048
ReLU-8 [[1, 512, 3, 3]] [1, 512, 3, 3] 0
Conv2D-18 [[1, 512, 3, 3]] [1, 512, 3, 3] 2,359,296
BatchNorm2D-18 [[1, 512, 3, 3]] [1, 512, 3, 3] 2,048
Conv2D-16 [[1, 256, 6, 6]] [1, 512, 3, 3] 131,072
BatchNorm2D-16 [[1, 512, 3, 3]] [1, 512, 3, 3] 2,048
BasicBlock-7 [[1, 256, 6, 6]] [1, 512, 3, 3] 0
Conv2D-19 [[1, 512, 3, 3]] [1, 512, 3, 3] 2,359,296
BatchNorm2D-19 [[1, 512, 3, 3]] [1, 512, 3, 3] 2,048
ReLU-9 [[1, 512, 3, 3]] [1, 512, 3, 3] 0
Conv2D-20 [[1, 512, 3, 3]] [1, 512, 3, 3] 2,359,296
BatchNorm2D-20 [[1, 512, 3, 3]] [1, 512, 3, 3] 2,048
BasicBlock-8 [[1, 512, 3, 3]] [1, 512, 3, 3] 0
AdaptiveAvgPool2D-1 [[1, 512, 3, 3]] [1, 512, 1, 1] 0
Linear-1 [[1, 512]] [1, 1000] 513,000
ResNet-1 [[1, 3, 96, 96]] [1, 1000] 0
Linear-2 [[1, 1000]] [1, 512] 512,512
ReLU-10 [[1, 512]] [1, 512] 0
Dropout-1 [[1, 512]] [1, 512] 0
Linear-3 [[1, 512]] [1, 30] 15,390
===============================================================================
Total params: 12,227,014
Trainable params: 12,207,814
Non-trainable params: 19,200
-------------------------------------------------------------------------------
Input size (MB): 0.11
Forward/backward pass size (MB): 10.51
Params size (MB): 46.64
Estimated Total Size (MB): 57.26
-------------------------------------------------------------------------------
{'total_params': 12227014, 'trainable_params': 12207814}

5. Model Training

This task regresses coordinates, so we compute the loss with the mean squared error function paddle.nn.MSELoss(). In Paddle 2.0, loss functions under nn are wrapped as callable classes. Here we train directly with the paddle.Model high-level API: all we need to define is the dataset, the network model, and the loss function.

We create a Model instance from the network code, then use the prepare interface to configure the optimizer, loss function, and any evaluation metrics for training. Once this initial setup is done, we call the fit interface to start training, passing in the training dataset, the evaluation dataset, the number of epochs, and the batch size. A rough manual-loop equivalent of prepare/fit is sketched below for intuition.
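The following is a hedged sketch of roughly what prepare/fit do under the hood in dygraph mode, for illustration only; the actual training uses the high-level API in the next cell:

# illustrative manual loop (only one epoch shown here; fit below runs 60)
net = FaceNet(num_keypoints=15)
opt = paddle.optimizer.Adam(learning_rate=1e-3, parameters=net.parameters())
mse = paddle.nn.MSELoss()
loader = paddle.io.DataLoader(train_dataset, batch_size=256, shuffle=True)
for epoch in range(1):
    for imgs, labels in loader:
        preds = net(imgs)            # forward pass
        loss = mse(preds, labels)    # mean squared error on coordinates
        loss.backward()              # backpropagation
        opt.step()                   # parameter update
        opt.clear_grad()             # reset gradients for the next step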

model = paddle.Model(FaceNet(num_keypoints=15))
optim = paddle.optimizer.Adam(learning_rate=1e-3,
                              parameters=model.parameters())
model.prepare(optim, paddle.nn.MSELoss())
model.fit(train_dataset, val_dataset, epochs=60, batch_size=256)
The loss value printed in the log is the current step, and the metric is the average value of previous step.
Epoch 1/60
step 7/7 - loss: 0.1134 - 611ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 2/2 - loss: 6.2252 - 502ms/step
Eval samples: 428
Epoch 2/60
step 7/7 - loss: 0.0331 - 591ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 2/2 - loss: 0.4000 - 506ms/step
Eval samples: 428
Epoch 3/60
step 7/7 - loss: 0.0241 - 592ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 2/2 - loss: 0.0677 - 509ms/step
Eval samples: 428
Epoch 4/60
step 7/7 - loss: 0.0187 - 590ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 2/2 - loss: 0.0171 - 490ms/step
Eval samples: 428
Epoch 5/60
step 7/7 - loss: 0.0153 - 598ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 2/2 - loss: 0.0059 - 508ms/step
Eval samples: 428
...
Epoch 60/60
step 7/7 - loss: 0.0028 - 603ms/step
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 2/2 - loss: 0.0014 - 516ms/step
Eval samples: 428

6. Model Prediction

To better examine the predictions, we visualize the validation-set results against the annotated points, as well as the predictions on the unannotated test set.

6.1 Visualizing Validation Results

The red keypoints are the network's predictions; the green keypoints are the annotated ground truth.

result = model.predict(val_dataset, batch_size=1)

Predict begin...
step 428/428 [==============================] - 15ms/step
Predict samples: 428
def plot_sample(x, y, axis, gt=None):
    img = x.reshape(96, 96)
    axis.imshow(img, cmap='gray')
    axis.scatter(y[0::2], y[1::2], marker='x', s=10, color='r')
    # use None as the sentinel instead of [] to avoid an elementwise
    # comparison with a numpy array (which raises a DeprecationWarning)
    if gt is not None:
        axis.scatter(gt[0::2], gt[1::2], marker='x', s=10, color='lime')

fig = plt.figure(figsize=(10, 7))
fig.subplots_adjust(
    left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)

for i in range(16):
    axis = fig.add_subplot(4, 4, i+1, xticks=[], yticks=[])
    idx = np.random.randint(val_dataset.__len__())
    img, gt_label = val_dataset[idx]
    gt_label = gt_label * 96
    label_pred = result[0][idx].reshape(-1)
    label_pred = label_pred * 96
    plot_sample(img[0], label_pred, axis, gt_label)
plt.show()

../../../_images/landmark_detection_18_1.png

6.2 Visualizing Test Results

result = model.predict(test_dataset, batch_size=1)

Predict begin...
step 1142/1783 [==================>...........] - ETA: 9s - 15ms/step
fig = plt.figure(figsize=(10, 7))
fig.subplots_adjust(
    left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)

for i in range(16):
    axis = fig.add_subplot(4, 4, i+1, xticks=[], yticks=[])
    idx = np.random.randint(test_dataset.__len__())
    img, _ = test_dataset[idx]
    label_pred = result[0][idx].reshape(-1)
    label_pred = label_pred * 96
    plot_sample(img[0], label_pred, axis)
plt.show()

../../../_images/landmark_detection_21_0.png
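As a possible next step, the test-set predictions can be written into Kaggle's submission format with IdLookupTable.csv. The sketch below is an untested illustration; it assumes the table's standard columns (RowId, ImageId, FeatureName, Location), that ImageId is 1-based, and that the rows of test.csv are ordered by ImageId:

# hedged sketch: map each (ImageId, FeatureName) pair in the lookup table to
# the corresponding predicted coordinate; `result` holds the test predictions
# from model.predict above, and data_label's column order matches the label vector
lookup = pd.read_csv(lookid_dir)
preds = np.concatenate([r.reshape(-1, 30) for r in result[0]], axis=0) * 96
preds = np.clip(preds, 0, 96)  # keep coordinates inside the image
col_index = {name: i for i, name in enumerate(train_dataset.data_label.columns)}
lookup['Location'] = [
    preds[row.ImageId - 1][col_index[row.FeatureName]]  # assumes 1-based ImageId
    for row in lookup.itertuples()
]
lookup[['RowId', 'Location']].to_csv('submission.csv', index=False)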