Applying the Gradient Accumulation Algorithm



Overview

This tutorial introduces the gradient accumulation training method, which is designed to solve the problem that some large networks cannot be trained with a large batch size because of insufficient memory.

In the traditional training method, the gradient obtained after each loss and gradient computation is used to update the parameters directly.

Unlike the traditional method, gradient accumulation introduces the concept of the mini-batch. The loss and gradients are first computed for each mini-batch, but the model parameters are not updated immediately; instead, the gradients are accumulated. After a specified number (N) of mini-batches, the accumulated gradient is used to update the network parameters. The accumulated gradients are then cleared before the next round of training, and the process repeats.

The ultimate goal is to achieve nearly the same effect as training directly with N * mini-batch samples at once.
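The idea can be seen in a minimal, framework-agnostic sketch. The snippet below uses a toy linear model whose MSE gradient is written out by hand; the names (w, lr, N, mini_batches) are purely illustrative and are not part of the MindSpore code that follows.

    import numpy as np

    rng = np.random.default_rng(0)
    w = np.zeros(3)        # model parameter
    lr, N = 0.1, 4         # learning rate, number of mini-batches to accumulate

    # fake data split into mini-batches: targets follow y = x @ w_true
    w_true = np.array([1.0, -2.0, 0.5])
    mini_batches = []
    for _ in range(8):
        x = rng.normal(size=(8, 3))
        mini_batches.append((x, x @ w_true))

    grad_sum = np.zeros_like(w)                   # accumulated gradient, starts at zero
    for k, (x, y) in enumerate(mini_batches):
        grad = 2 * x.T @ (x @ w - y) / len(x)     # MSE gradient for this mini-batch
        grad_sum += grad                          # accumulate instead of updating
        if (k + 1) % N == 0:                      # after every N mini-batches ...
            w -= lr * grad_sum                    # ... update once with the accumulated gradient
            grad_sum[:] = 0                       # and clear the accumulator

Whether the accumulated gradient is applied as a sum (as here and in the tutorial code below) or first divided by N is a convention choice that interacts with how the per-mini-batch loss is reduced.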

This tutorial is for GPU. You can download the main training sample code here: https://gitee.com/mindspore/docs/tree/r1.0/tutorials/tutorial_code/gradient_accumulation

Creating a Gradient Accumulation Model

The following uses MNIST as the sample dataset and a simple custom model to implement gradient accumulation.

Importing the Required Libraries

The following are the required common modules, MindSpore modules, and library files.

    import argparse
    import os
    from collections.abc import Iterable

    import mindspore.nn as nn
    from mindspore import ParameterTuple
    from mindspore import context
    from mindspore.nn import Cell
    import mindspore.ops as ops
    from mindspore.train.dataset_helper import DatasetHelper
    from mindspore.train.serialization import save_checkpoint

    from model_zoo.official.cv.lenet.src.dataset import create_dataset
    from model_zoo.official.cv.lenet.src.lenet import LeNet5

Loading the Dataset

Use the MnistDataset API provided by MindSpore's dataset module to load the MNIST dataset. This part of the code is imported from dataset.py in the lenet directory of model_zoo.
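For reference, the sketch below approximates what that dataset.py does; it is an assumption about the file's contents rather than a copy of it (the authoritative version is in model_zoo/official/cv/lenet/src/dataset.py). The pipeline reads MNIST, resizes and rescales the images, casts the labels to int32, and batches the result.

    import mindspore.dataset as ds
    import mindspore.dataset.vision.c_transforms as CV
    import mindspore.dataset.transforms.c_transforms as C
    import mindspore.common.dtype as mstype


    def create_dataset(data_path, batch_size=32, repeat_size=1):
        """Roughly: MNIST -> resize/rescale -> CHW -> shuffle/batch/repeat."""
        mnist_ds = ds.MnistDataset(data_path)

        image_ops = [CV.Resize((32, 32)),           # LeNet expects 32x32 inputs
                     CV.Rescale(1.0 / 255.0, 0.0),  # scale pixel values to [0, 1]
                     CV.HWC2CHW()]                  # channel-first layout
        mnist_ds = mnist_ds.map(operations=image_ops, input_columns="image")
        mnist_ds = mnist_ds.map(operations=C.TypeCast(mstype.int32), input_columns="label")

        mnist_ds = mnist_ds.shuffle(buffer_size=10000)
        mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
        mnist_ds = mnist_ds.repeat(repeat_size)
        return mnist_ds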

Defining the Network

LeNet is used as an example network here; other networks such as ResNet-50 or BERT can also be used. This part of the code is imported from lenet.py in the lenet directory of model_zoo.
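As a point of reference, a LeNet5 definition in MindSpore looks roughly like the sketch below; the exact weight initialization used by the imported lenet.py may differ, so treat this as an approximation rather than the file itself.

    import mindspore.nn as nn


    class LeNet5(nn.Cell):
        """Classic LeNet-5: two conv/pool stages followed by three dense layers."""

        def __init__(self, num_class=10, num_channel=1):
            super(LeNet5, self).__init__()
            self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')
            self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
            self.fc1 = nn.Dense(16 * 5 * 5, 120)
            self.fc2 = nn.Dense(120, 84)
            self.fc3 = nn.Dense(84, num_class)
            self.relu = nn.ReLU()
            self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
            self.flatten = nn.Flatten()

        def construct(self, x):
            x = self.max_pool2d(self.relu(self.conv1(x)))
            x = self.max_pool2d(self.relu(self.conv2(x)))
            x = self.flatten(x)
            x = self.relu(self.fc1(x))
            x = self.relu(self.fc2(x))
            x = self.fc3(x)
            return x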

Defining the Training Model

The training flow is split into three parts: forward and backward training, parameter update, and clearing of the accumulated gradients:

  • TrainForwardBackward computes the loss and gradients, and uses grad_sum to accumulate the gradients.

  • TrainOptim updates the parameters.

  • TrainClear resets the gradient accumulation variable grad_sum to zero.

    _sum_op = ops.MultitypeFuncGraph("grad_sum_op")
    _clear_op = ops.MultitypeFuncGraph("clear_op")


    @_sum_op.register("Tensor", "Tensor")
    def _cumulative_grad(grad_sum, grad):
        """Apply grad sum to cumulative gradient."""
        add = ops.AssignAdd()
        return add(grad_sum, grad)


    @_clear_op.register("Tensor", "Tensor")
    def _clear_grad_sum(grad_sum, zero):
        """Apply zero to clear grad_sum."""
        success = True
        success = ops.depend(success, ops.assign(grad_sum, zero))
        return success


    class TrainForwardBackward(Cell):
        """Run one forward/backward pass and add the gradients to grad_sum."""

        def __init__(self, network, optimizer, grad_sum, sens=1.0):
            super(TrainForwardBackward, self).__init__(auto_prefix=False)
            self.network = network
            self.network.set_grad()
            self.network.add_flags(defer_inline=True)
            self.weights = ParameterTuple(network.trainable_params())
            self.optimizer = optimizer
            self.grad_sum = grad_sum
            self.grad = ops.GradOperation(get_by_list=True, sens_param=True)
            self.sens = sens
            self.hyper_map = ops.HyperMap()

        def construct(self, *inputs):
            weights = self.weights
            loss = self.network(*inputs)
            sens = ops.Fill()(ops.DType()(loss), ops.Shape()(loss), self.sens)
            grads = self.grad(self.network, weights)(*inputs, sens)
            return ops.depend(loss, self.hyper_map(ops.partial(_sum_op), self.grad_sum, grads))


    class TrainOptim(Cell):
        """Apply the optimizer once, using the accumulated gradients."""

        def __init__(self, optimizer, grad_sum):
            super(TrainOptim, self).__init__(auto_prefix=False)
            self.optimizer = optimizer
            self.grad_sum = grad_sum

        def construct(self):
            return self.optimizer(self.grad_sum)


    class TrainClear(Cell):
        """Reset the accumulated gradients to zero."""

        def __init__(self, grad_sum, zeros):
            super(TrainClear, self).__init__(auto_prefix=False)
            self.grad_sum = grad_sum
            self.zeros = zeros
            self.hyper_map = ops.HyperMap()

        def construct(self):
            success = self.hyper_map(ops.partial(_clear_op), self.grad_sum, self.zeros)
            return success

Defining the Training Process

For each mini-batch, the loss and gradients are computed through forward and backward training, and mini_steps controls the number of accumulation steps before each parameter update. Once the accumulation count is reached, the parameters are updated and the accumulated gradient variables are cleared. For example, with a mini-batch size of 32 and mini_steps=4 (as used below), each parameter update is based on gradients accumulated over 128 samples.

    class GradientAccumulation:
        def __init__(self, network, loss_fn, optimizer):
            self._network = network
            self._loss_fn = loss_fn
            self._optimizer = optimizer

            params = self._optimizer.parameters
            self._grad_sum = params.clone(prefix="grad_sum", init='zeros')
            self._zeros = params.clone(prefix="zeros", init='zeros')
            self._train_forward_backward = self._build_train_forward_backward_network()
            self._train_optim = self._build_train_optim()
            self._train_clear = self._build_train_clear()

        @staticmethod
        def _transform_callbacks(callbacks):
            """Transform callback to a list."""
            if callbacks is None:
                return []
            if isinstance(callbacks, Iterable):
                return list(callbacks)
            return [callbacks]

        def _build_train_forward_backward_network(self):
            """Build forward and backward network"""
            network = self._network
            network = nn.WithLossCell(network, self._loss_fn)
            loss_scale = 1.0
            network = TrainForwardBackward(network, self._optimizer, self._grad_sum, loss_scale).set_train()
            return network

        def _build_train_optim(self):
            """Build optimizer network"""
            network = TrainOptim(self._optimizer, self._grad_sum).set_train()
            return network

        def _build_train_clear(self):
            """Build clear network"""
            network = TrainClear(self._grad_sum, self._zeros).set_train()
            return network

        def train_process(self, epoch, train_dataset, mini_steps=None):
            """
            Training process. The data would be passed to network directly.
            """
            dataset_helper = DatasetHelper(train_dataset, dataset_sink_mode=False, epoch_num=epoch)

            for i in range(epoch):
                step = 0
                for k, next_element in enumerate(dataset_helper):
                    loss = self._train_forward_backward(*next_element)
                    if (k + 1) % mini_steps == 0:
                        step += 1
                        print("epoch:", i + 1, "step:", step, "loss is ", loss)
                        self._train_optim()
                        self._train_clear()

                train_dataset.reset()

            save_checkpoint(self._train_forward_backward, "gradient_accumulation.ckpt")

Training and Saving the Model

Call the network, the optimizer, and the loss function, and then train the model through the custom train_process API of GradientAccumulation.

    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description='MindSpore Grad Cumulative Example')
        parser.add_argument('--device_target', type=str, default="GPU", choices=['GPU'],
                            help='device where the code will be implemented (default: GPU)')
        parser.add_argument('--data_path', type=str, default="./Data",
                            help='path where the dataset is saved')
        args = parser.parse_args()

        context.set_context(mode=context.GRAPH_MODE, device_target=args.device_target)
        ds_train = create_dataset(os.path.join(args.data_path, "train"), 32)

        network = LeNet5(10)
        net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
        net_opt = nn.Momentum(network.trainable_params(), 0.01, 0.9)
        model = GradientAccumulation(network, net_loss, net_opt)

        print("============== Starting Training ==============")
        model.train_process(10, ds_train, mini_steps=4)

Experiment Results

After 10 epochs of training, the accuracy on the test set is about 96.31%.

Executing Training

  1. Run the training code and check the result.

         python train.py --data_path=./MNIST_Data

     The output is as follows. You can see that the loss value gradually decreases as training proceeds:

         epoch: 1 step: 27 loss is 0.3660637
         epoch: 1 step: 28 loss is 0.25238192
         ...
         epoch: 3 step: 2 loss is 0.12296932
         epoch: 3 step: 3 loss is 0.15799297
         ...
         epoch: 10 step: 448 loss is 0.06443884
         epoch: 10 step: 449 loss is 0.0067842817

  2. Check the saved CheckPoint file.

     The CheckPoint file gradient_accumulation.ckpt, that is, the model file, is saved during training.

Validating the Model

Use eval.py in the lenet directory of model_zoo, together with the saved CheckPoint file, to load the validation dataset and perform validation.

    python eval.py --data_path=./MNIST_Data --ckpt_path=./gradient_accumulation.ckpt --device_target=GPU

The output is as follows. The accuracy on the validation dataset is about 96.31%, the same as the validation result obtained when the batch size is 32.

    ============== Starting Testing ==============
    ============== {'Accuracy': 0.9631730769230769} ==============
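If you want to see what this verification step involves, the sketch below approximates what eval.py does: restore the trained parameters from the checkpoint, wrap the network in a Model with an Accuracy metric, and evaluate on the test split. The paths and batch size are illustrative assumptions; the authoritative script lives in model_zoo/official/cv/lenet.

    import mindspore.nn as nn
    from mindspore import context
    from mindspore.nn.metrics import Accuracy
    from mindspore.train import Model
    from mindspore.train.serialization import load_checkpoint, load_param_into_net

    from model_zoo.official.cv.lenet.src.dataset import create_dataset
    from model_zoo.official.cv.lenet.src.lenet import LeNet5

    context.set_context(mode=context.GRAPH_MODE, device_target="GPU")

    # rebuild the network and load the parameters saved by train.py
    network = LeNet5(10)
    load_param_into_net(network, load_checkpoint("./gradient_accumulation.ckpt"))

    net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
    model = Model(network, net_loss, metrics={"Accuracy": Accuracy()})

    # evaluate on the MNIST test split (path is illustrative)
    ds_eval = create_dataset("./MNIST_Data/test", 32)
    print("============== Starting Testing ==============")
    print("==============", model.eval(ds_eval, dataset_sink_mode=False), "==============")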