Captcha Recognition with OCR

Author: GT_老张

Date: 2021.01

Abstract: This tutorial shows how to build a simple CRNN+CTC OCR model on a custom dataset with PaddlePaddle. The data is the OCR portion of CaptchaDataset, 9,453 images in total; the first 8,453 images serve as the training set and the last 1,000 as the test set.

For more complex scenarios, PaddleOCR is recommended for producing industrial-grade models; its models are lightweight and substantially more accurate.

PaddleOCR can also be used quickly through PaddleHub.

1. Environment Setup

This tutorial is written for Paddle 2.0. If your environment is a different version, please follow the official installation guide to install Paddle 2.0 first.

    import paddle
    print(paddle.__version__)

    2.0.0

2. Custom Dataset Reader

In everyday development we rarely receive data in a standard format; fortunately, a custom Reader lets us read the data however we need.

A well-designed Reader often brings better performance. One-off work such as reading the label file and building the image file list belongs in the __init__ method, so it is loaded into memory once when the Reader is instantiated instead of being re-read on every access. Per-sample operations such as image augmentation and normalization belong in __getitem__, whose memory can be released as soon as the sample is returned.

Note that if you cannot guarantee your data is perfectly clean, you can wrap the reading code in try...except to catch exceptions and report the offending sample. You can also adopt a fallback strategy so that training continues normally after a bad sample is encountered.
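As a hedged sketch of such a fallback strategy (TolerantReader, samples, and load are illustrative names, not part of this tutorial's code), __getitem__ can catch the exception, report the corrupt file, and return a neighboring sample so one bad image does not abort the epoch:

```python
class TolerantReader:
    """Minimal sketch of a fault-tolerant reader. `samples` and `load` are
    hypothetical stand-ins for the real file list and Pillow loading code."""

    def __init__(self, samples, load):
        self.samples = samples
        self.load = load

    def __getitem__(self, index):
        try:
            return self.load(self.samples[index])
        except Exception as e:
            # Report the bad sample, then fall back to its neighbor so the
            # epoch keeps going instead of aborting
            print(f"Skipping {self.samples[index]}: {e}")
            return self[(index + 1) % len(self.samples)]

    def __len__(self):
        return len(self.samples)


def load(path):
    # Hypothetical loader that fails on one corrupt file
    if path == "bad.jpg":
        raise ValueError("corrupt image")
    return path.upper()

reader = TolerantReader(["a.jpg", "bad.jpg", "c.jpg"], load)
print(reader[1])  # falls back to the next sample: C.JPG
```

The Reader below instead re-raises, which is the right default when you want a corrupt file to surface immediately.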

2.1 Data Preview

image1

Click here to download the dataset for this section. Once the download finishes, unzip it with !unzip OCR_Dataset.zip -d data/ or your favorite archive tool, then set DATA_PATH in the "Training Preparation" section below to the unzipped dataset path.

    # Unzip the dataset
    !unzip OCR_Dataset.zip -d data/
    import os
    import PIL.Image as Image
    import numpy as np
    from paddle.io import Dataset

    # Image configuration - channels, height, width
    IMAGE_SHAPE_C = 3
    IMAGE_SHAPE_H = 30
    IMAGE_SHAPE_W = 70
    # Maximum label length - every image contains exactly 4 characters, so 4 is enough
    LABEL_MAX_LEN = 4


    class Reader(Dataset):
        def __init__(self, data_path: str, is_val: bool = False):
            """
            Dataset Reader
            :param data_path: dataset path
            :param is_val: whether this is the validation set
            """
            super().__init__()
            self.data_path = data_path
            # Read the label dictionary
            with open(os.path.join(self.data_path, "label_dict.txt"), "r", encoding="utf-8") as f:
                self.info = eval(f.read())
            # Build the file name list
            self.img_paths = [img_name for img_name in self.info]
            # The last 1000 images form the validation set; when is_val is True, switch to them
            self.img_paths = self.img_paths[-1000:] if is_val else self.img_paths[:-1000]

        def __getitem__(self, index):
            # File name and full path of the index-th sample
            file_name = self.img_paths[index]
            file_path = os.path.join(self.data_path, file_name)
            # Catch exceptions - stop training when one occurs
            try:
                # Read the image with Pillow
                img = Image.open(file_path)
                # Convert to a NumPy array and divide by 255 to normalize
                img = np.array(img, dtype="float32").reshape((IMAGE_SHAPE_C, IMAGE_SHAPE_H, IMAGE_SHAPE_W)) / 255
            except Exception as e:
                raise Exception(file_name + "\tFailed to open; check the path and the image file's integrity. Error:\n" + str(e))
            # Read and process this image's label string
            label = self.info[file_name]
            label = list(label)
            # Convert the label to a NumPy array
            label = np.array(label, dtype="int32").reshape(LABEL_MAX_LEN)
            return img, label

        def __len__(self):
            # Number of images per epoch
            return len(self.img_paths)

3. Model Configuration

3.1 Define the Model Structure and Model Input

The model is a simple CRNN-CTC structure: a CHW-shaped image passes through CNN -> Flatten -> Linear -> RNN -> Linear and outputs the character probabilities for each position in the image. Because a CTC decoder can fail to align correctly when the number of elements varies or adjacent elements repeat, an extra class representing a "blank" separator is added to mitigate this.

CTC paper: Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks
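To make the role of the blank concrete, here is a small toy example (the function name collapse is illustrative; the blank index 10 matches this tutorial's 0~9 digits plus separator). A greedy CTC decoder drops blanks and merges adjacent duplicates, so repeated characters survive only when a blank sits between them:

```python
def collapse(frames, blank=10):
    """Collapse a per-frame label sequence the way a greedy CTC decoder does:
    drop blanks and merge adjacent duplicates."""
    out, prev = [], None
    for f in frames:
        if f != blank and f != prev:
            out.append(f)
        prev = f
    return out

# "7 7" with no blank between the 7s collapses to a single 7 ...
print(collapse([7, 7, 10, 3]))  # [7, 3]
# ... while a blank between them preserves both occurrences
print(collapse([7, 10, 7, 3]))  # [7, 7, 3]
```

The simple decoder in section 7 implements exactly this rule.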

image1

As for the network: the dataset used here is simple and the images are small, so a deep network is not appropriate. For larger images, consider a deeper network or an attention mechanism, or first locate the text with an object-detection model and then build the OCR model on the detected regions.

image2

PaddleOCR result example


    import paddle

    # Number of classes - the dataset contains the 10 digits 0~9 plus the separator, so this is an 11-class task
    CLASSIFY_NUM = 11

    # Define the input layer; -1 in dim 0 allows a flexible batch size at inference time
    input_define = paddle.static.InputSpec(shape=[-1, IMAGE_SHAPE_C, IMAGE_SHAPE_H, IMAGE_SHAPE_W],
                                           dtype="float32",
                                           name="img")


    # Define the network
    class Net(paddle.nn.Layer):
        def __init__(self, is_infer: bool = False):
            super().__init__()
            self.is_infer = is_infer
            # One 3x3 convolution + BatchNorm
            self.conv1 = paddle.nn.Conv2D(in_channels=IMAGE_SHAPE_C,
                                          out_channels=32,
                                          kernel_size=3)
            self.bn1 = paddle.nn.BatchNorm2D(32)
            # One 3x3 convolution with stride 2 for downsampling + BatchNorm
            self.conv2 = paddle.nn.Conv2D(in_channels=32,
                                          out_channels=64,
                                          kernel_size=3,
                                          stride=2)
            self.bn2 = paddle.nn.BatchNorm2D(64)
            # One 1x1 convolution to compress the channels; a fixed value slightly larger than
            # LABEL_MAX_LEN tends to work better, though LABEL_MAX_LEN itself also works
            self.conv3 = paddle.nn.Conv2D(in_channels=64,
                                          out_channels=LABEL_MAX_LEN + 4,
                                          kernel_size=1)
            # Fully connected layer to compress and extract features (optional)
            self.linear = paddle.nn.Linear(in_features=429,
                                           out_features=128)
            # RNN layer to better extract sequence features; a bidirectional LSTM outputs
            # 2 x hidden_size, and GRU or other RNN structures are worth trying
            self.lstm = paddle.nn.LSTM(input_size=128,
                                       hidden_size=64,
                                       direction="bidirectional")
            # Output layer, sized to the number of classes
            self.linear2 = paddle.nn.Linear(in_features=64 * 2,
                                            out_features=CLASSIFY_NUM)

        def forward(self, ipt):
            # Conv + ReLU + BN
            x = self.conv1(ipt)
            x = paddle.nn.functional.relu(x)
            x = self.bn1(x)
            # Conv + ReLU + BN
            x = self.conv2(x)
            x = paddle.nn.functional.relu(x)
            x = self.bn2(x)
            # Conv + ReLU
            x = self.conv3(x)
            x = paddle.nn.functional.relu(x)
            # Collapse the 3-D features to 2-D - reshape would also work here
            x = paddle.tensor.flatten(x, 2)
            # Linear + ReLU
            x = self.linear(x)
            x = paddle.nn.functional.relu(x)
            # Bidirectional LSTM - [0] is the combined output, [1][0] the forward result,
            # [1][1] the backward result; search 'LSTM' in the official docs for details
            x = self.lstm(x)[0]
            # Output layer - shape = (batch size, max label len, signal)
            x = self.linear2(x)
            # ctc_loss applies softmax internally during training, so inference mode needs
            # an explicit softmax to obtain label probabilities
            if self.is_infer:
                # Output layer - shape = (batch size, max label len, prob)
                x = paddle.nn.functional.softmax(x)
                # Convert to labels
                x = paddle.argmax(x, axis=-1)
            return x
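The magic number 429 in self.linear follows from the layer shapes. This small arithmetic check (assuming valid, unpadded convolutions, which is Conv2D's default) traces it; the helper conv_out is illustrative, not part of the tutorial's code:

```python
def conv_out(size, kernel, stride=1):
    # Output size of an unpadded ("valid") convolution
    return (size - kernel) // stride + 1

h, w = 30, 70                                # IMAGE_SHAPE_H, IMAGE_SHAPE_W
h, w = conv_out(h, 3), conv_out(w, 3)        # after conv1 (3x3):      28 x 68
h, w = conv_out(h, 3, 2), conv_out(w, 3, 2)  # after conv2 (3x3, s=2): 13 x 33
# conv3 is 1x1, so the spatial size stays 13 x 33 while the channel count
# becomes LABEL_MAX_LEN + 4 = 8. flatten(x, 2) then yields shape
# (batch, 8, 13 * 33) - a sequence of length 8 with 429 features per step.
print(h * w)  # 429
```

The sequence length of 8 is also why the CTC loss defined later fills input_lengths with LABEL_MAX_LEN + 4.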

4. Training Preparation

4.1 Define the Label Input and Hyperparameters

Supervised training requires a label definition; inference does not.

    # Dataset path
    DATA_PATH = "./data/OCR_Dataset"
    # Number of training epochs
    EPOCH = 10
    # Batch size
    BATCH_SIZE = 16

    label_define = paddle.static.InputSpec(shape=[-1, LABEL_MAX_LEN],
                                           dtype="int32",
                                           name="label")

4.2 Define the CTC Loss

Knowing how the CTC decoder behaves, we want the model to approximate that output form during training, so we define a CTC loss to compute the model's loss. No need to worry: the Paddle framework ships many built-in losses, so no hand-written implementation is required.

Documentation: CTCLoss

    class CTCLoss(paddle.nn.Layer):
        def __init__(self):
            """
            Define the CTC loss
            """
            super().__init__()

        def forward(self, ipt, label):
            input_lengths = paddle.full(shape=[BATCH_SIZE, 1], fill_value=LABEL_MAX_LEN + 4, dtype="int64")
            label_lengths = paddle.full(shape=[BATCH_SIZE, 1], fill_value=LABEL_MAX_LEN, dtype="int64")
            # Transpose the dims as required by the documentation
            ipt = paddle.tensor.transpose(ipt, [1, 0, 2])
            # Compute the loss
            loss = paddle.nn.functional.ctc_loss(ipt, label, input_lengths, label_lengths, blank=10)
            return loss

4.3 Instantiate the Model and Configure the Optimizer

    # Instantiate the model
    model = paddle.Model(Net(), inputs=input_define, labels=label_define)

    # Define the optimizer
    optimizer = paddle.optimizer.Adam(learning_rate=0.0001, parameters=model.parameters())
    # Prepare the model's run environment with this optimization strategy
    model.prepare(optimizer=optimizer,
                  loss=CTCLoss())

5. Start Training

    # Run training
    model.fit(train_data=Reader(DATA_PATH),
              eval_data=Reader(DATA_PATH, is_val=True),
              batch_size=BATCH_SIZE,
              epochs=EPOCH,
              save_dir="output/",
              save_freq=1,
              verbose=1)
    The loss value printed in the log is the current step, and the metric is the average value of previous step.
    Epoch 1/10
    step 529/529 [==============================] - loss: 0.1299 - 10ms/step
    save checkpoint at /home/aistudio/output/0
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 63/63 [==============================] - loss: 0.1584 - 6ms/step
    Eval samples: 1000
    Epoch 2/10
    step 529/529 [==============================] - loss: 0.0300 - 9ms/step
    save checkpoint at /home/aistudio/output/1
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 63/63 [==============================] - loss: 0.0663 - 6ms/step
    Eval samples: 1000
    Epoch 3/10
    step 529/529 [==============================] - loss: 0.2056 - 9ms/step
    save checkpoint at /home/aistudio/output/2
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 63/63 [==============================] - loss: 0.0392 - 6ms/step
    Eval samples: 1000
    Epoch 4/10
    step 529/529 [==============================] - loss: 0.0115 - 9ms/step
    save checkpoint at /home/aistudio/output/3
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 63/63 [==============================] - loss: 0.0281 - 6ms/step
    Eval samples: 1000
    Epoch 5/10
    step 529/529 [==============================] - loss: 0.0121 - 10ms/step
    save checkpoint at /home/aistudio/output/4
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 63/63 [==============================] - loss: 0.0251 - 6ms/step
    Eval samples: 1000
    Epoch 6/10
    step 529/529 [==============================] - loss: 0.0090 - 9ms/step
    save checkpoint at /home/aistudio/output/5
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 63/63 [==============================] - loss: 0.0170 - 6ms/step
    Eval samples: 1000
    Epoch 7/10
    step 529/529 [==============================] - loss: 0.0049 - 9ms/step
    save checkpoint at /home/aistudio/output/6
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 63/63 [==============================] - loss: 0.0149 - 6ms/step
    Eval samples: 1000
    Epoch 8/10
    step 529/529 [==============================] - loss: 0.0081 - 9ms/step
    save checkpoint at /home/aistudio/output/7
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 63/63 [==============================] - loss: 0.0113 - 6ms/step
    Eval samples: 1000
    Epoch 9/10
    step 529/529 [==============================] - loss: 0.0051 - 9ms/step
    save checkpoint at /home/aistudio/output/8
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 63/63 [==============================] - loss: 0.0221 - 6ms/step
    Eval samples: 1000
    Epoch 10/10
    step 529/529 [==============================] - loss: 0.0135 - 9ms/step
    save checkpoint at /home/aistudio/output/9
    Eval begin...
    The loss value printed in the log is the current batch, and the metric is the average value of previous step.
    step 63/63 [==============================] - loss: 0.0111 - 6ms/step
    Eval samples: 1000
    save checkpoint at /home/aistudio/output/final

6. Inference Preparation

6.1 Define an Inference Reader, Just Like the Training Reader

    # Similar to the training Reader, but without labels
    class InferReader(Dataset):
        def __init__(self, dir_path=None, img_path=None):
            """
            Dataset Reader (inference)
            :param dir_path: directory of images to predict (choose one)
            :param img_path: single image to predict (choose one)
            """
            super().__init__()
            if dir_path:
                # Collect the paths of all images in the directory
                self.img_names = [i for i in os.listdir(dir_path) if os.path.splitext(i)[1] == ".jpg"]
                self.img_paths = [os.path.join(dir_path, i) for i in self.img_names]
            elif img_path:
                self.img_names = [os.path.split(img_path)[1]]
                self.img_paths = [img_path]
            else:
                raise Exception("Please specify a directory or an image path to predict")

        def get_names(self):
            """
            File name order of the predictions
            """
            return self.img_names

        def __getitem__(self, index):
            # Image path
            file_path = self.img_paths[index]
            # Read the image with Pillow and convert it to NumPy format
            img = Image.open(file_path)
            img = np.array(img, dtype="float32").reshape((IMAGE_SHAPE_C, IMAGE_SHAPE_H, IMAGE_SHAPE_W)) / 255
            return img

        def __len__(self):
            return len(self.img_paths)

6.2 Parameter Settings

    # Directory to predict - you can pick 3 images from the test set and place them here
    INFER_DATA_PATH = "./sample_img"
    # Checkpoint path after training - final is the model produced at the end of training
    CHECKPOINT_PATH = "./output/final.pdparams"
    # Batch size
    BATCH_SIZE = 32

6.3 Show the Data to Be Predicted

    import matplotlib.pyplot as plt

    plt.figure(figsize=(10, 10))
    sample_idxs = np.random.choice(50000, size=25, replace=False)
    for img_id, img_name in enumerate(os.listdir(INFER_DATA_PATH)):
        plt.subplot(1, 3, img_id + 1)
        plt.xticks([])
        plt.yticks([])
        im = Image.open(os.path.join(INFER_DATA_PATH, img_name))
        plt.imshow(im, cmap=plt.cm.binary)
        plt.xlabel("Img name: " + img_name)
    plt.show()

../../../_images/image_ocr_26_0.png

7. Run Inference

The Paddle 2.0 CTC decoder APIs are still being migrated, so this section temporarily uses a simple hand-written decoder.

    # A minimal decoder
    def ctc_decode(text, blank=10):
        """
        Simple CTC decoder
        :param text: data to decode
        :param blank: index of the separator (blank)
        :return: decoded data
        """
        result = []
        cache_idx = -1
        for char in text:
            if char != blank and char != cache_idx:
                result.append(char)
            cache_idx = char
        return result


    # Instantiate the inference model
    model = paddle.Model(Net(is_infer=True), inputs=input_define)
    # Load the trained parameters
    model.load(CHECKPOINT_PATH)
    # Prepare the run environment
    model.prepare()
    # Load the inference Reader
    infer_reader = InferReader(INFER_DATA_PATH)
    img_names = infer_reader.get_names()
    results = model.predict(infer_reader, batch_size=BATCH_SIZE)
    index = 0
    for text_batch in results[0]:
        for prob in text_batch:
            out = ctc_decode(prob, blank=10)
            print(f"File: {img_names[index]}, prediction: {out}")
            index += 1
    Predict begin...
    step 1/1 [==============================] - 6ms/step
    Predict samples: 3
    File: 9450.jpg, prediction: [8, 2, 0, 5]
    File: 9452.jpg, prediction: [0, 3, 0, 0]
    File: 9451.jpg, prediction: [3, 4, 6, 3]