Optimizers


Define the general fastai optimizer and its variants

    add_docs(_BaseOptimizer,
             all_params="List of param_groups, parameters, and hypers",
             freeze_to="Freeze parameter groups up to `n`",
             freeze="Freeze up to last parameter group",
             set_freeze="Set `rg` for parameter group `n` only",
             unfreeze="Unfreeze the entire model",
             set_hypers="`set_hyper` for all `kwargs`",
             set_hyper="Set the value(s) in `v` for hyper-parameter `k`")

class Optimizer[source]

Optimizer(params, cbs, train_bn=True, **defaults) :: _BaseOptimizer

Base optimizer class for the fastai library, updating params with cbs

    add_docs(Optimizer,
             zero_grad="Standard PyTorch API: Zero all the grad attributes of the parameters",
             step="Standard PyTorch API: Update the stats and execute the steppers on all parameters that have a grad",
             state_dict="Return the state of the optimizer in a dictionary",
             load_state_dict="Load the content of `sd`",
             clear_state="Reset the state of the optimizer")

Initializing an Optimizer

params will be used to create the param_groups of the optimizer. If it's a collection (or a generator) of parameters, it will be an L containing one L with all the parameters. To define multiple parameter groups, params should be passed as a collection (or a generator) of Ls.

Note: In PyTorch, model.parameters() returns a generator with all the parameters, that you can directly pass to Optimizer.
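
For instance, here is a minimal sketch of building an Optimizer directly from a model's parameters (the toy nn.Linear and the use of the noop stepper are purely illustrative; noop is the do-nothing stepper used in the tests below):

    import torch.nn as nn

    model = nn.Linear(4, 5)                    # toy model, for illustration only
    opt = Optimizer(model.parameters(), noop)  # a single param_group holding all the parameters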

    opt = Optimizer([1,2,3], noop)
    test_eq(opt.param_lists, [[1,2,3]])
    opt = Optimizer(range(3), noop)
    test_eq(opt.param_lists, [[0,1,2]])
    opt = Optimizer([[1,2],[3]], noop)
    test_eq(opt.param_lists, [[1,2],[3]])
    opt = Optimizer(([o,o+1] for o in range(0,4,2)), noop)
    test_eq(opt.param_lists, [[0,1],[2,3]])

cbs is a list of functions that will be composed when applying the step. For instance, you can compose a function making the SGD step, with another one applying weight decay. Additionally, each cb can have a defaults attribute that contains hyper-parameters and their default value. Those are all gathered at initialization, and new values can be passed to override those defaults with the defaults kwargs. The steppers will be called by Optimizer.step (which is the standard PyTorch name), and gradients can be cleared with Optimizer.zero_grad (also a standard PyTorch name).

Once the defaults have all been pulled off, they are copied as many times as there are param_groups and stored in hypers. To apply different hyper-parameters to different groups (differential learning rates, or no weight decay for certain layers for instance), you will need to adjust those values after the init.
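
As a minimal sketch (values purely illustrative), those per-group values can be adjusted after init either through hypers directly or with set_hyper, which accepts one value per parameter group:

    opt = Optimizer([[1,2],[3]], noop, lr=1e-2)  # two parameter groups, same default lr
    opt.set_hyper('lr', [1e-3, 1e-2])            # one value per parameter group
    # or equivalently: opt.hypers[0]['lr'] = 1e-3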

    def tst_arg(p, lr=0, **kwargs): return p
    tst_arg.defaults = dict(lr=1e-2)
    def tst_arg2(p, lr2=0, **kwargs): return p
    tst_arg2.defaults = dict(lr2=1e-3)
    def tst_arg3(p, mom=0, **kwargs): return p
    tst_arg3.defaults = dict(mom=0.9)
    def tst_arg4(p, **kwargs): return p
    opt = Optimizer([1,2,3], [tst_arg,tst_arg2, tst_arg3])
    test_eq(opt.hypers, [{'lr2': 1e-3, 'mom': 0.9, 'lr': 1e-2}])
    opt = Optimizer([1,2,3], tst_arg, lr=0.1)
    test_eq(opt.hypers, [{'lr': 0.1}])
    opt = Optimizer([[1,2],[3]], tst_arg)
    test_eq(opt.hypers, [{'lr': 1e-2}, {'lr': 1e-2}])
    opt = Optimizer([[1,2],[3]], tst_arg, lr=0.1)
    test_eq(opt.hypers, [{'lr': 0.1}, {'lr': 0.1}])

If there are multiple parameter groups, you can pass a slice or a collection to set each hyper-parameter. A slice with a beginning and an end is converted to a collection of values spaced log-uniformly from that beginning to that end; a slice with only an end e is converted to a collection with as many values as there are parameter groups, all equal to e/10 except the last one, which is e.

Setting a hyper-parameter with a collection that has a different number of elements than the optimizer has parameter groups will raise an error.

    opt = Optimizer([[1,2],[3]], tst_arg, lr=[0.1,0.2])
    test_eq(opt.hypers, [{'lr': 0.1}, {'lr': 0.2}])
    opt = Optimizer([[1,2],[3],[4]], tst_arg, lr=slice(1e-2))
    test_eq(opt.hypers, [{'lr': 1e-3}, {'lr': 1e-3}, {'lr': 1e-2}])
    opt = Optimizer([[1,2],[3],[4]], tst_arg, lr=slice(1e-4,1e-2))
    test_eq(opt.hypers, [{'lr': 1e-4}, {'lr': 1e-3}, {'lr': 1e-2}])
    test_eq(opt.param_groups, [{'params': [1,2], 'lr': 1e-4}, {'params': [3], 'lr': 1e-3}, {'params': [4], 'lr': 1e-2}])
    test_fail(lambda: Optimizer([[1,2],[3],[4]], tst_arg, lr=np.array([0.1,0.2])))

Basic steppers

To be able to give examples of optimizer steps, we will need some steppers, like the following:

sgd_step[source]

sgd_step(p, lr, **kwargs)

    def tst_param(val, grad=None):
        "Create a tensor with `val` and a gradient of `grad` for testing"
        res = tensor([val]).float()
        res.grad = tensor([val/10 if grad is None else grad]).float()
        return res

    p = tst_param(1., 0.1)
    sgd_step(p, 1.)
    test_eq(p, tensor([0.9]))
    test_eq(p.grad, tensor([0.1]))

weight_decay[source]

weight_decay(p, lr, wd, do_wd=True, **kwargs)

Weight decay as decaying p with lr*wd

    p = tst_param(1., 0.1)
    weight_decay(p, 1., 0.1)
    test_eq(p, tensor([0.9]))
    test_eq(p.grad, tensor([0.1]))

l2_reg[source]

l2_reg(p, lr, wd, do_wd=True, **kwargs)

L2 regularization as adding wd*p to p.grad

    p = tst_param(1., 0.1)
    l2_reg(p, 1., 0.1)
    test_eq(p, tensor([1.]))
    test_eq(p.grad, tensor([0.2]))

Warning: Weight decay and L2 regularization are the same thing for basic SGD, but for more complex optimizers they are very different.
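
The sketch below (plain Python, not fastai code; all numbers are made up) shows why: once the gradient is rescaled adaptively, adding wd*p to the gradient (L2 regularization) is not the same as shrinking the weight directly (decoupled weight decay):

    # One adaptive-style step using a running average of squared gradients.
    p, grad, lr, wd, sqr_avg, eps = 1.0, 0.1, 0.1, 0.01, 0.04, 1e-8

    # Decoupled weight decay: decay the weight directly, then take the adaptive step.
    p_wd = p*(1 - lr*wd) - lr * grad / (sqr_avg**0.5 + eps)

    # L2 regularization: add wd*p to the gradient, so the decay term is rescaled too.
    p_l2 = p - lr * (grad + wd*p) / (sqr_avg**0.5 + eps)

    print(p_wd, p_l2)   # ~0.949 vs ~0.945: the two updates differ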

Making the step

Optimizer.step[source]

Optimizer.step()

This method will loop over all param groups, then over all parameters for which grad is not None, and call each stepper in turn, passing it the parameter p along with the hyper-parameters in the corresponding dict in hypers.

    r = L.range(4)
    def tst_params(): return r.map(tst_param)
    params = tst_params()
    opt = Optimizer(params, sgd_step, lr=0.1)
    opt.step()
    test_close([p.item() for p in params], r.map(mul(0.99)))

    params = tst_params()
    opt = Optimizer(params, [weight_decay, sgd_step], lr=0.1, wd=0.1)
    opt.step()
    test_close([p.item() for p in params], r.map(mul(0.98)))

    params = tst_params()
    opt = Optimizer(params, sgd_step, lr=0.1)
    params[-1].grad = None
    opt.step()
    test_close([p.item() for p in params], [0., 0.99, 1.98, 3.])

    params = tst_params()
    opt = Optimizer([params[:2], params[2:]], sgd_step, lr=0.1)
    opt.hypers[0]['lr'] = 0.01
    opt.step()
    test_close([p.item() for p in params], [0., 0.999, 1.98, 2.97])

Optimizer.zero_grad[source]

Optimizer.zero_grad()

    params = tst_params()
    opt = Optimizer(params, [weight_decay, sgd_step], lr=0.1, wd=0.1)
    opt.zero_grad()
    [test_eq(p.grad, tensor([0.])) for p in params];

Some of the Optimizer cbs can be functions updating the state associated with a parameter. That state can then be used by any stepper. The best example is a momentum calculation.

    def tst_stat(p, **kwargs):
        s = kwargs.get('sum', torch.zeros_like(p)) + p.data
        return {'sum': s}
    tst_stat.defaults = {'mom': 0.9}

    #Test Optimizer init
    opt = Optimizer([1,2,3], tst_stat)
    test_eq(opt.hypers, [{'mom': 0.9}])
    opt = Optimizer([1,2,3], tst_stat, mom=0.99)
    test_eq(opt.hypers, [{'mom': 0.99}])

    #Test stat
    x = torch.randn(4,5)
    state = tst_stat(x)
    assert 'sum' in state
    test_eq(x, state['sum'])
    state = tst_stat(x, **state)
    test_eq(state['sum'], 2*x)

Statistics

average_grad[source]

average_grad(p, mom, dampening=False, grad_avg=None, **kwargs)

Keeps track of the avg grads of p in state with mom.

dampening=False gives the classical formula for momentum in SGD:

    new_val = old_val * mom + grad

whereas dampening=True makes it an exponential moving average:

    new_val = old_val * mom + grad * (1-mom)

    p = tst_param([1,2,3], [4,5,6])
    state = {}
    state = average_grad(p, mom=0.9, **state)
    test_eq(state['grad_avg'], p.grad)
    state = average_grad(p, mom=0.9, **state)
    test_eq(state['grad_avg'], p.grad * 1.9)

    #Test dampening
    state = {}
    state = average_grad(p, mom=0.9, dampening=True, **state)
    test_eq(state['grad_avg'], 0.1*p.grad)
    state = average_grad(p, mom=0.9, dampening=True, **state)
    test_close(state['grad_avg'], (0.1*0.9+0.1)*p.grad)

average_sqr_grad[source]

average_sqr_grad(p, sqr_mom, dampening=True, sqr_avg=None, **kwargs)

Keeps track of the average of the squared grads of p in state with sqr_mom.

dampening=False gives the classical momentum-style accumulation of the squared gradients:

    new_val = old_val * sqr_mom + grad**2

whereas dampening=True makes it an exponential moving average:

    new_val = old_val * sqr_mom + (grad**2) * (1-sqr_mom)
    p = tst_param([1,2,3], [4,5,6])
    state = {}
    state = average_sqr_grad(p, sqr_mom=0.99, dampening=False, **state)
    test_eq(state['sqr_avg'], p.grad.pow(2))
    state = average_sqr_grad(p, sqr_mom=0.99, dampening=False, **state)
    test_eq(state['sqr_avg'], p.grad.pow(2) * 1.99)

    #Test dampening
    state = {}
    state = average_sqr_grad(p, sqr_mom=0.99, **state)
    test_close(state['sqr_avg'], 0.01*p.grad.pow(2))
    state = average_sqr_grad(p, sqr_mom=0.99, **state)
    test_close(state['sqr_avg'], (0.01*0.99+0.01)*p.grad.pow(2))

Freezing part of the model

Optimizer.freeze[source]

Optimizer.freeze()

Optimizer.freeze_to[source]

Optimizer.freeze_to(n)

Optimizer.unfreeze[source]

Optimizer.unfreeze()

    params = [tst_params(), tst_params(), tst_params()]
    opt = Optimizer(params, sgd_step, lr=0.1)
    opt.freeze_to(1)
    req_grad = Self.requires_grad()
    test_eq(L(params[0]).map(req_grad), [False]*4)
    for i in {1,2}: test_eq(L(params[i]).map(req_grad), [True]*4)

    #Unfreezing
    opt.unfreeze()
    for i in range(2): test_eq(L(params[i]).map(req_grad), [True]*4)

    #TODO: test warning
    # opt.freeze_to(3)

Parameters such as batchnorm weights/biases can be marked to always be in training mode: just set force_train=True in their state.

    params = [tst_params(), tst_params(), tst_params()]
    opt = Optimizer(params, sgd_step, lr=0.1)
    for p in L(params[1])[[1,3]]: opt.state[p] = {'force_train': True}
    opt.freeze()
    test_eq(L(params[0]).map(req_grad), [False]*4)
    test_eq(L(params[1]).map(req_grad), [False, True, False, True])
    test_eq(L(params[2]).map(req_grad), [True]*4)

Serializing

Optimizer.state_dict[source]

Optimizer.state_dict()

Optimizer.load_state_dict[source]

Optimizer.load_state_dict(sd)

    p = tst_param([1,2,3], [4,5,6])
    opt = Optimizer(p, average_grad)
    opt.step()
    test_eq(opt.state[p]['grad_avg'], tensor([[4., 5., 6.]]))
    sd = opt.state_dict()
    p1 = tst_param([10,20,30], [40,50,60])
    opt = Optimizer(p1, average_grad, mom=0.99)
    test_eq(opt.hypers[0]['mom'], 0.99)
    test_eq(opt.state, {})
    opt.load_state_dict(sd)
    test_eq(opt.hypers[0]['mom'], 0.9)
    test_eq(opt.state[p1]['grad_avg'], tensor([[4., 5., 6.]]))

Optimizer.clear_state[source]

Optimizer.clear_state()

    p = tst_param([1,2,3], [4,5,6])
    opt = Optimizer(p, average_grad)
    opt.state[p] = {'force_train': True}
    opt.step()
    test_eq(opt.state[p]['grad_avg'], tensor([[4., 5., 6.]]))
    opt.clear_state()
    test_eq(opt.state[p], {'force_train': True})

Optimizers

SGD with momentum

momentum_step[source]

momentum_step(p, lr, grad_avg, **kwargs)

Step for SGD with momentum with lr

SGD[source]

SGD(params, lr, mom=0.0, wd=0.0, decouple_wd=True)

An Optimizer for SGD with lr, mom and params

Optional weight decay of wd is applied, as true weight decay (decay the weights directly) if decouple_wd=True else as L2 regularization (add the decay to the gradients).

    params = tst_params()
    opt = SGD(params, lr=0.1)
    opt.step()
    test_close([p.item() for p in params], [i*0.99 for i in range(4)])
    opt.step()
    [p.item() for p in params]
    test_close([p.item() for p in params], [i*0.98 for i in range(4)])

    params = tst_params()
    opt = SGD(params, lr=0.1, mom=0.9)
    assert isinstance(opt, Optimizer)
    opt.step()
    test_close([p.item() for p in params], [i*0.99 for i in range(4)])
    opt.step()
    [p.item() for p in params]
    test_close([p.item() for p in params], [i*(1 - 0.1 * (0.1 + 0.1*1.9)) for i in range(4)])
    for i,p in enumerate(params): test_close(opt.state[p]['grad_avg'].item(), i*0.19)

Test weight decay; notice that L2 regularization differs from weight decay even for simple SGD with momentum.

    params = tst_params()
    #Weight decay
    opt = SGD(params, lr=0.1, mom=0.9, wd=0.1)
    opt.step()
    test_close([p.item() for p in params], [i*0.98 for i in range(4)])
    #L2 reg
    opt = SGD(params, lr=0.1, mom=0.9, wd=0.1, decouple_wd=False)
    opt.step()
    #TODO: fix cause this formula was wrong
    #test_close([p.item() for p in params], [i*0.97 for i in range(4)])

RMSProp

rms_prop_step[source]

rms_prop_step(p, lr, sqr_avg, eps, grad_avg=None, **kwargs)

Step for RMSProp with lr

RMSProp[source]

RMSProp(params, lr, sqr_mom=0.99, mom=0.0, wd=0.0, decouple_wd=True)

An Optimizer for RMSProp with lr, sqr_mom, mom and params

RMSProp was introduced by Geoffrey Hinton in his course. What is named sqr_mom here is the alpha in the course. Optional weight decay of wd is applied, as true weight decay (decay the weights directly) if decouple_wd=True else as L2 regularization (add the decay to the gradients).

    params = tst_param([1,2,3], [0.1,0.2,0.3])
    opt = RMSProp(params, lr=0.1)
    opt.step()
    test_close(params[0], tensor([0.,1.,2.]))
    opt.step()
    step = - 0.1 * 0.1 / (math.sqrt((0.01*0.99+0.01) * 0.1**2) + 1e-8)
    test_close(params[0], tensor([step, 1+step, 2+step]))

    params = tst_param([1,2,3], [0.1,0.2,0.3])
    opt = RMSProp(params, lr=0.1, mom=0.9)
    opt.step()
    test_close(params[0], tensor([0.,1.,2.]))
    opt.step()
    step = - 0.1 * (0.1 + 0.9*0.1) / (math.sqrt((0.01*0.99+0.01) * 0.1**2) + 1e-8)
    test_close(params[0], tensor([step, 1+step, 2+step]))

Adam

step_stat[source]

step_stat(p, step=0, **kwargs)

Register the number of steps done in state for p

    p = tst_param(1,0.1)
    state = {}
    state = step_stat(p, **state)
    test_eq(state['step'], 1)
    for _ in range(5): state = step_stat(p, **state)
    test_eq(state['step'], 6)

debias[source]

debias(mom, damp, step)

adam_step[source]

adam_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, **kwargs)

Step for Adam with lr on p

Adam[source]

Adam(params, lr, mom=0.9, sqr_mom=0.99, eps=1e-05, wd=0.01, decouple_wd=True)

An Optimizer for Adam with lr, mom, sqr_mom, eps and params

Adam was introduced by Diederik P. Kingma and Jimmy Ba in Adam: A Method for Stochastic Optimization. For consistency across optimizers, we renamed beta1 and beta2 in the paper to mom and sqr_mom. Note that our defaults also differ from the paper (0.99 for sqr_mom or beta2, 1e-5 for eps). In our experiments, those values seem to work better across a wide range of situations.

Optional weight decay of wd is applied, as true weight decay (decay the weights directly) if decouple_wd=True else as L2 regularization (add the decay to the gradients).

Note: Don't forget that eps is a hyper-parameter you can change. Some models won't train without a very high eps like 0.1 (intuitively, the higher eps is, the closer we are to plain SGD). The usual default of 1e-8 is often too extreme, in the sense that we don't get results as good as with SGD.
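
A minimal sketch of passing a larger eps (the toy model and chosen values are purely illustrative):

    import torch.nn as nn

    model = nn.Linear(4, 5)                            # toy model, for illustration only
    opt = Adam(model.parameters(), lr=1e-3, eps=1e-1)  # much larger eps than the 1e-5 default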

    params = tst_param([1,2,3], [0.1,0.2,0.3])
    opt = Adam(params, lr=0.1, wd=0)
    opt.step()
    step = -0.1 * 0.1 / (math.sqrt(0.1**2) + 1e-8)
    test_close(params[0], tensor([1+step, 2+step, 3+step]))
    opt.step()
    test_close(params[0], tensor([1+2*step, 2+2*step, 3+2*step]), eps=1e-3)

RAdam

RAdam (for rectified Adam) was introduced by Liu et al. in On the Variance of the Adaptive Learning Rate and Beyond to slightly modify the Adam optimizer to be more stable at the beginning of training (and thus not require a long warmup). They use an estimate of the variance of the moving average of the squared gradients (the term in the denominator of traditional Adam) and rescale this moving average by this term before performing the update.

This version also incorporates SAdam; set beta to enable this (definition same as in the paper).

radam_step[source]

radam_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, beta, **kwargs)

Step for RAdam with lr on p

RAdam[source]

RAdam(params, lr, mom=0.9, sqr_mom=0.99, eps=1e-05, wd=0.0, beta=0.0, decouple_wd=True)

An Optimizer for RAdam with lr, mom, sqr_mom, eps and params

This plots the effective correction applied to the Adam step over the first 500 iterations of RAdam. We can see how it goes from 0 to 1, mimicking the effect of a warm-up.

    beta = 0.99
    r_inf = 2/(1-beta) - 1
    rs = np.array([r_inf - 2*s*beta**s/(1-beta**s) for s in range(5,500)])
    v = np.sqrt(((rs-4) * (rs-2) * r_inf)/((r_inf-4)*(r_inf-2)*rs))
    plt.plot(v);

(Plot: the RAdam correction factor over the first 500 iterations, rising from 0 toward 1.)

    params = tst_param([1,2,3], [0.1,0.2,0.3])
    opt = RAdam(params, lr=0.1)
    #The r factor is lower than 5 during the first 5 steps so updates use the average of gradients (all the same)
    r_inf = 2/(1-0.99) - 1
    for i in range(5):
        r = r_inf - 2*(i+1)*0.99**(i+1)/(1-0.99**(i+1))
        assert r <= 5
        opt.step()
    p = tensor([0.95, 1.9, 2.85])
    test_close(params[0], p)
    #The r factor is greater than 5 for the sixth step so we update with RAdam
    r = r_inf - 2*6*0.99**6/(1-0.99**6)
    assert r > 5
    opt.step()
    v = math.sqrt(((r-4) * (r-2) * r_inf)/((r_inf-4)*(r_inf-2)*r))
    step = -0.1*0.1*v/(math.sqrt(0.1**2) + 1e-8)
    test_close(params[0], p+step)

QHAdam

QHAdam (for Quasi-Hyperbolic Adam) was introduced by Ma & Yarats in Quasi-Hyperbolic Momentum and Adam for Deep Learning as a “computationally cheap, intuitive to interpret, and simple to implement” optimizer. Additional code can be found in their qhoptim repo. QHAdam is based on QH-Momentum, which introduces the immediate discount factor nu, encapsulating plain SGD (nu = 0) and momentum (nu = 1). QH-Momentum is defined below, where g_t+1 is the update of the moment. An interpretation of QHM is as a nu-weighted average of the momentum update step and the plain SGD update step.

θ_{t+1} ← θ_t − lr · [(1 − nu) · ∇L_t(θ_t) + nu · g_{t+1}]
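
As a rough numeric sketch of this update (plain Python, not fastai code; the moment buffer is updated as an exponential moving average as in the paper), nu = 0 reduces to plain SGD while nu = 1 uses only the momentum buffer:

    def qhm_update(theta, grad, g, lr, mom, nu):
        g = mom*g + (1-mom)*grad                 # momentum buffer g_{t+1}
        theta = theta - lr*((1-nu)*grad + nu*g)  # nu-weighted mix of SGD and momentum steps
        return theta, g

    theta, g = 1.0, 0.0
    print(qhm_update(theta, 0.5, g, lr=0.1, mom=0.9, nu=0.0))  # plain SGD step: theta -> 0.95
    print(qhm_update(theta, 0.5, g, lr=0.1, mom=0.9, nu=1.0))  # momentum-only step: theta -> 0.995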

QHAdam takes the concept behind QHM above and applies it to Adam, replacing both of Adam’s moment estimators with quasi-hyperbolic terms.

The paper's suggested default parameters are mom = 0.999, sqr_mom = 0.999, nu_1 = 0.7 and nu_2 = 1.0. When training is not stable, setting nu_2 < 1 can improve stability by imposing a tighter step size bound. Note that QHAdam recovers Adam when nu_1 = nu_2 = 1.0. QHAdam recovers RMSProp (Hinton et al., 2012) when nu_1 = 0 and nu_2 = 1, and NAdam (Dozat, 2016) when nu_1 = mom and nu_2 = 1.

Optional weight decay of wd is applied, as true weight decay (decay the weights directly) if decouple_wd=True else as L2 regularization (add the decay to the gradients).

qhadam_step[source]

qhadam_step(p, lr, mom, sqr_mom, sqr_avg, nu_1, nu_2, step, grad_avg, eps, **kwargs)

QHAdam[source]

QHAdam(params, lr, mom=0.999, sqr_mom=0.999, nu_1=0.7, nu_2=1.0, eps=1e-08, wd=0.0, decouple_wd=True)

An Optimizer for QHAdam with lr, mom, sqr_mom, nu_1, nu_2, eps and params

    params = tst_param([1,2,3], [0.1,0.2,0.3])
    opt = QHAdam(params, lr=0.1)
    opt.step()
    step = -0.1 * (((1-0.7) * 0.1) + (0.7 * 0.1)) / (
        math.sqrt(((1-1.0) * 0.1**2) + (1.0 * 0.1**2)) + 1e-8)
    test_close(params[0], tensor([1+step, 2+step, 3+step]))
    opt.step()
    test_close(params[0], tensor([1+2*step, 2+2*step, 3+2*step]), eps=1e-3)

LARS/LARC

larc_layer_lr[source]

larc_layer_lr(p, lr, trust_coeff, wd, eps, clip=True, **kwargs)

Computes the local lr before weight decay is applied

larc_step[source]

larc_step(p, local_lr, grad_avg=None, **kwargs)

Step for LARC with local_lr on p

Larc[source]

Larc(params, lr, mom=0.9, clip=True, trust_coeff=0.02, eps=1e-08, wd=0.0, decouple_wd=True)

An Optimizer for LARC with lr, mom, trust_coeff, eps and params

The LARS optimizer was first introduced in Large Batch Training of Convolutional Networks, then refined in its LARC variant (original LARS is with clip=False). A learning rate is computed for each individual layer with a certain trust_coeff, then clipped so that it never exceeds lr.

Optional weight decay of wd is applied, as true weight decay (decay the weights directly) if decouple_wd=True else as L2 regularization (add the decay to the gradients).
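
As a rough sketch of the idea (following the formula from the LARC paper; fastai's larc_layer_lr may differ in details), the per-layer learning rate scales with the ratio of the parameter norm to the gradient norm and is then clipped at lr:

    import torch

    def local_lr_sketch(p, lr, trust_coeff=0.02, wd=0., eps=1e-8, clip=True):
        # layer-wise lr: ratio of weight norm to gradient norm, scaled by trust_coeff
        p_norm, g_norm = torch.norm(p.data), torch.norm(p.grad.data)
        local_lr = lr * trust_coeff * p_norm / (g_norm + wd*p_norm + eps)
        return min(local_lr, lr) if clip else local_lr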

    params = [tst_param([1,2,3], [0.1,0.2,0.3]), tst_param([1,2,3], [0.01,0.02,0.03])]
    opt = Larc(params, lr=0.1)
    opt.step()
    #First param local lr is 0.02 < lr so it's not clipped
    test_close(opt.state[params[0]]['local_lr'], 0.02)
    #Second param local lr is 0.2 > lr so it's clipped
    test_eq(opt.state[params[1]]['local_lr'], 0.1)
    test_close(params[0], tensor([0.998,1.996,2.994]))
    test_close(params[1], tensor([0.999,1.998,2.997]))

    params = [tst_param([1,2,3], [0.1,0.2,0.3]), tst_param([1,2,3], [0.01,0.02,0.03])]
    opt = Larc(params, lr=0.1, clip=False)
    opt.step()
    #No clipping
    test_close(opt.state[params[0]]['local_lr'], 0.02)
    test_close(opt.state[params[1]]['local_lr'], 0.2)
    test_close(params[0], tensor([0.998,1.996,2.994]))
    test_close(params[1], tensor([0.998,1.996,2.994]))

LAMB

lamb_step[source]

lamb_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, **kwargs)

Step for LAMB with lr on p

Lamb[source]

Lamb(params, lr, mom=0.9, sqr_mom=0.99, eps=1e-05, wd=0.0, decouple_wd=True)

An Optimizer for LAMB with lr, mom, sqr_mom, eps and params

LAMB was introduced in Large Batch Optimization for Deep Learning: Training BERT in 76 minutes. Intuitively, it's LARC applied to Adam. As in Adam, we renamed beta1 and beta2 in the paper to mom and sqr_mom. Note that our defaults also differ from the paper (0.99 for sqr_mom or beta2, 1e-5 for eps). In our experiments, those values seem to work better across a wide range of situations.

Optional weight decay of wd is applied, as true weight decay (decay the weights directly) if decouple_wd=True else as L2 regularization (add the decay to the gradients).

    params = tst_param([1,2,3], [0.1,0.2,0.3])
    opt = Lamb(params, lr=0.1)
    opt.step()
    test_close(params[0], tensor([0.7840,1.7840,2.7840]), eps=1e-3)

Lookahead

Lookahead was introduced by Zhang et al. in Lookahead Optimizer: k steps forward, 1 step back. It can be run on top of any optimizer and consists in having the final weights of the model be a moving average. In practice, we update our model using the internal optimizer but keep a copy of the old weights, and every k steps we replace the weights with a moving average of the fast weights (the ones updated by the inner optimizer) and the slow weights (the copy of the old weights). Those slow weights act like a stability mechanism.

class Lookahead[source]

Lookahead(opt, k=6, alpha=0.5) :: Optimizer

Wrap opt in a lookahead optimizer

    params = tst_param([1,2,3], [0.1,0.2,0.3])
    p,g = params[0].data.clone(),tensor([0.1,0.2,0.3])
    opt = Lookahead(SGD(params, lr=0.1))
    for k in range(5): opt.step()
    #first 5 steps are normal SGD steps
    test_close(params[0], p - 0.5*g)
    #Since k=6, sixth step is a moving average of the 6 SGD steps with the initial weight
    opt.step()
    test_close(params[0], p * 0.5 + (p-0.6*g) * 0.5)

ranger[source]

ranger(p, lr, mom=0.95, wd=0.01, eps=1e-06, sqr_mom=0.99, beta=0.0, decouple_wd=True)

Convenience method for Lookahead with RAdam

OptimWrapper

OptimWrapper provides simple functionality to use existing optimizers constructed with torch.optim.Optimizer.

detuplify_pg[source]

detuplify_pg(d)

    tst = {'lr': 1e-2, 'mom': 0.9, 'params':[0,1,2]}
    test_eq(detuplify_pg(tst), {'lr': 1e-2, 'mom': 0.9})
    tst = {'lr': 1e-2, 'betas': (0.9,0.999), 'params':[0,1,2]}
    test_eq(detuplify_pg(tst), {'lr': 1e-2, 'betas__0': 0.9, 'betas__1': 0.999})

set_item_pg[source]

set_item_pg(pg, k, v)

    tst = {'lr': 1e-2, 'mom': 0.9, 'params':[0,1,2]}
    test_eq(set_item_pg(tst, 'lr', 1e-3), {'lr': 1e-3, 'mom': 0.9, 'params':[0,1,2]})
    tst = {'lr': 1e-2, 'betas': (0.9,0.999), 'params':[0,1,2]}
    test_eq(set_item_pg(tst, 'betas__0', 0.95), {'lr': 1e-2, 'betas': (0.95,0.999), 'params':[0,1,2]})

class OptimWrapper[source]

OptimWrapper(params, opt, hp_map=None, convert_groups=True, **kwargs) :: _BaseOptimizer

A wrapper class for existing PyTorch optimizers

    sgd = SGD([tensor([1,2,3])], lr=1e-3, mom=0.9, wd=1e-2)
    tst_sgd = OptimWrapper([tensor([1,2,3])], torch.optim.SGD, lr=1e-3, momentum=0.9, weight_decay=1e-2)
    #Access to param_groups
    test_eq(tst_sgd.param_lists, sgd.param_lists)
    #Set param_groups
    tst_sgd.param_lists = [[tensor([4,5,6])]]
    test_eq(tst_sgd.opt.param_groups[0]['params'], [tensor(4,5,6)])
    #Access to hypers
    test_eq(tst_sgd.hypers, [{**sgd.hypers[0], 'dampening': 0., 'nesterov': False}])
    #Set hypers
    tst_sgd.set_hyper('mom', 0.95)
    test_eq(tst_sgd.opt.param_groups[0]['momentum'], 0.95)

    tst_sgd = OptimWrapper([{'params': [tensor([1,2,3])], 'lr': 1e-3},
                            {'params': [tensor([4,5,6])], 'lr': 1e-2}], torch.optim.SGD, momentum=0.9, weight_decay=1e-2)
    sgd = SGD([[tensor([1,2,3])], [tensor([4,5,6])]], lr=[1e-3, 1e-2], mom=0.9, wd=1e-2)
    #Access to param_groups
    test_eq(tst_sgd.param_lists, sgd.param_lists)
    #Set param_groups
    tst_sgd.param_lists = [[tensor([4,5,6])], [tensor([1,2,3])]]
    test_eq(tst_sgd.opt.param_groups[0]['params'], [tensor(4,5,6)])
    test_eq(tst_sgd.opt.param_groups[1]['params'], [tensor(1,2,3)])
    #Access to hypers
    test_eq(tst_sgd.hypers, [{**sgd.hypers[i], 'dampening': 0., 'nesterov': False} for i in range(2)])
    #Set hypers
    tst_sgd.set_hyper('mom', 0.95)
    test_eq([pg['momentum'] for pg in tst_sgd.opt.param_groups], [0.95,0.95])
    tst_sgd.set_hyper('lr', [1e-4,1e-3])
    test_eq([pg['lr'] for pg in tst_sgd.opt.param_groups], [1e-4,1e-3])
    def _mock_train(m, x, y, opt):
        m.train()
        for i in range(0, 100, 25):
            z = m(x[i:i+25])
            loss = F.mse_loss(z, y[i:i+25])
            loss.backward()
            opt.step()
            opt.zero_grad()

    m = nn.Linear(4,5)
    x = torch.randn(100, 3, 4)
    y = torch.randn(100, 3, 5)
    try:
        torch.save(m.state_dict(), 'tmp.pth')
        wgt,bias = m.weight.data.clone(),m.bias.data.clone()
        m.load_state_dict(torch.load('tmp.pth'))
        opt1 = OptimWrapper(m.parameters(), torch.optim.AdamW, betas=(0.9, 0.99), eps=1e-5, weight_decay=1e-2)
        _mock_train(m, x.clone(), y.clone(), opt1)
        wgt1,bias1 = m.weight.data.clone(),m.bias.data.clone()
        m.load_state_dict(torch.load('tmp.pth'))
        opt2 = Adam(m.parameters(), 1e-3, wd=1e-2)
        _mock_train(m, x.clone(), y.clone(), opt2)
        wgt2,bias2 = m.weight.data.clone(),m.bias.data.clone()
        test_close(wgt1,wgt2,eps=1e-3)
        test_close(bias1,bias2,eps=1e-3)
    finally: os.remove('tmp.pth')

    m = nn.Linear(4,5)
    x = torch.randn(100, 3, 4)
    y = torch.randn(100, 3, 5)
    try:
        torch.save(m.state_dict(), 'tmp.pth')
        wgt,bias = m.weight.data.clone(),m.bias.data.clone()
        m.load_state_dict(torch.load('tmp.pth'))
        opt1 = OptimWrapper(m.parameters(), torch.optim.Adam, betas=(0.9, 0.99), eps=1e-5, weight_decay=1e-2)
        _mock_train(m, x.clone(), y.clone(), opt1)
        wgt1,bias1 = m.weight.data.clone(),m.bias.data.clone()
        m.load_state_dict(torch.load('tmp.pth'))
        opt2 = Adam(m.parameters(), 1e-3, wd=1e-2, decouple_wd=False)
        _mock_train(m, x.clone(), y.clone(), opt2)
        wgt2,bias2 = m.weight.data.clone(),m.bias.data.clone()
        test_close(wgt1,wgt2,eps=1e-3)
        test_close(bias1,bias2,eps=1e-3)
    finally: os.remove('tmp.pth')

To use an existing PyTorch optimizer, you can define an optimizer function like this:

    opt_func = partial(OptimWrapper, opt=torch.optim.SGD)
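
The resulting opt_func can then be passed wherever fastai expects an optimizer function, for example to a Learner (a usage sketch; dls and model below are placeholders for your own data and model):

    learn = Learner(dls, model, opt_func=opt_func)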
