StepDecay

class paddle.fluid.dygraph.StepDecay ( learning_rate, step_size, decay_rate=0.1 ) [source code]

This interface provides step-wise learning rate decay: every step_size epochs, the learning rate is decayed once by decay_rate.

The algorithm can be described as:

  learning_rate = 0.5
  step_size = 30
  decay_rate = 0.1

  learning_rate = 0.5      if epoch < 30
  learning_rate = 0.05     if 30 <= epoch < 60
  learning_rate = 0.005    if 60 <= epoch < 90
  ...
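Equivalently, the learning rate used at a given epoch can be written in closed form as learning_rate * decay_rate ** (epoch // step_size). The following is a minimal plain-Python sketch (not a Paddle API call) that reproduces the values listed above:

  # Sketch of the schedule itself, assuming the constants from the description above.
  learning_rate = 0.5
  step_size = 30
  decay_rate = 0.1

  def step_decay_lr(epoch):
      # The learning rate is multiplied by decay_rate once every step_size epochs.
      return learning_rate * decay_rate ** (epoch // step_size)

  print(step_decay_lr(0))    # 0.5
  print(step_decay_lr(30))   # 0.05
  print(step_decay_lr(60))   # 0.005 (up to floating-point rounding)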

Parameters:

  • learning_rate (float|int) - The initial learning rate. Can be a Python float or int.

  • step_size (int) - The interval (in epochs) between two consecutive decays of the learning rate.

  • decay_rate (float, optional) - The decay rate of the learning rate: new_lr = origin_lr * decay_rate. Its value should be less than 1.0. Default: 0.1.

Returns: None

Code example

  import paddle.fluid as fluid
  import numpy as np
  with fluid.dygraph.guard():
      x = np.random.uniform(-1, 1, [10, 10]).astype("float32")
      linear = fluid.dygraph.Linear(10, 10)
      input = fluid.dygraph.to_variable(x)
      scheduler = fluid.dygraph.StepDecay(0.5, step_size=3)
      adam = fluid.optimizer.Adam(learning_rate=scheduler, parameter_list=linear.parameters())
      for epoch in range(9):
          for batch_id in range(5):
              out = linear(input)
              loss = fluid.layers.reduce_mean(out)
              adam.minimize(loss)
          scheduler.epoch()
          print("epoch:{}, current lr is {}".format(epoch, adam.current_step_lr()))
          # epoch:0, current lr is 0.5
          # epoch:1, current lr is 0.5
          # epoch:2, current lr is 0.5
          # epoch:3, current lr is 0.05
          # epoch:4, current lr is 0.05
          # epoch:5, current lr is 0.05
          # epoch:6, current lr is 0.005
          # epoch:7, current lr is 0.005
          # epoch:8, current lr is 0.005

epoch ( epoch=None )

Adjusts the learning rate according to the current epoch number. The adjusted learning rate takes effect on the next call to optimizer.minimize.

Parameters:

  • epoch (int|float, optional) - The current epoch number. Default: None, in which case the epoch number is accumulated automatically.

Returns: None

Code example:

Please refer to the example code above.
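In addition, epoch() also accepts an explicit epoch number. Below is a minimal sketch whose surrounding setup mirrors the example above and is only for illustration; it shows how setting the counter directly affects the learning rate used by the next optimizer.minimize call:

  import paddle.fluid as fluid
  import numpy as np
  with fluid.dygraph.guard():
      linear = fluid.dygraph.Linear(10, 10)
      scheduler = fluid.dygraph.StepDecay(0.5, step_size=3)
      adam = fluid.optimizer.Adam(learning_rate=scheduler, parameter_list=linear.parameters())
      # Set the epoch counter explicitly instead of letting it accumulate:
      # 7 // 3 == 2 decays, so the learning rate becomes 0.5 * 0.1 ** 2.
      scheduler.epoch(7)
      x = fluid.dygraph.to_variable(np.random.uniform(-1, 1, [10, 10]).astype("float32"))
      loss = fluid.layers.reduce_mean(linear(x))
      adam.minimize(loss)  # the adjusted learning rate takes effect on this call
      print(adam.current_step_lr())  # roughly 0.005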