Tracking callbacks


Callbacks that make decisions depending on how a monitored metric or loss behaves.


class TerminateOnNaNCallback[source]

TerminateOnNaNCallback(after_create=None, before_fit=None, before_epoch=None, before_train=None, before_batch=None, after_pred=None, after_loss=None, before_backward=None, before_step=None, after_cancel_step=None, after_step=None, after_cancel_batch=None, after_batch=None, after_cancel_train=None, after_train=None, before_validate=None, after_cancel_validate=None, after_validate=None, after_cancel_epoch=None, after_epoch=None, after_cancel_fit=None, after_fit=None) :: Callback

A Callback that terminates training if loss is NaN.

```python
learn = synth_learner()
learn.fit(10, lr=100, cbs=TerminateOnNaNCallback())
```
| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 1914263325772146366332652801648230400.000000 | | 00:00 |
```python
assert len(learn.recorder.losses) < 10 * len(learn.dls.train)
for l in learn.recorder.losses:
    assert not torch.isinf(l) and not torch.isnan(l)
```
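The check the callback performs after each batch is tiny and can be sketched framework-free. This is an illustrative sketch, not fastai's implementation: the real callback inspects `self.loss` and raises a cancellation exception rather than returning a flag, and the names below are made up.

```python
import math

def should_terminate(loss: float) -> bool:
    # Stop training as soon as the loss is no longer a finite number
    return math.isnan(loss) or math.isinf(loss)

# A diverging run: the loss blows up, then becomes NaN
losses = [0.9, 1.2e3, float('inf'), float('nan')]
stopped_at = next(i for i, l in enumerate(losses) if should_terminate(l))
```

Terminating at the first non-finite loss is what keeps `learn.recorder.losses` free of `inf`/`nan` values, as the assertions above verify.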

class TrackerCallback[source]

TrackerCallback(monitor='valid_loss', comp=None, min_delta=0.0, reset_on_fit=True) :: Callback

A Callback that keeps track of the best value in monitor.

When implementing a Callback whose behavior depends on the best value of a metric or loss, subclass this Callback and use its best (the best value so far) and new_best (there was a new best value this epoch) attributes. If you want to maintain best over subsequent calls to fit (e.g., Learner.fit_one_cycle), set reset_on_fit = False.

comp is the comparison operator used to determine if a value is better than another (defaults to np.less if 'loss' is in the name passed in monitor, np.greater otherwise) and min_delta is an optional float requiring a new value to beat the current best (according to comp) by at least that amount.
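To make the comparison logic concrete, here is a minimal framework-free sketch of how best and new_best can be maintained. The class name and structure are illustrative, not fastai's implementation, and operator.lt/operator.gt stand in for np.less/np.greater:

```python
import operator

class Tracker:
    def __init__(self, comp=operator.lt, min_delta=0.0):
        # comp(new, threshold) is True when `new` beats the current best;
        # min_delta shifts the threshold to demand a minimum improvement
        self.comp, self.min_delta = comp, min_delta
        self.best = float('inf') if comp is operator.lt else -float('inf')

    def update(self, value):
        # For a loss (comp=lt), a new best requires value < best - min_delta
        if self.comp is operator.lt: threshold = self.best - self.min_delta
        else:                        threshold = self.best + self.min_delta
        self.new_best = self.comp(value, threshold)
        if self.new_best: self.best = value
        return self.new_best

t = Tracker(min_delta=0.1)
t.update(1.0)   # first value is always a new best
t.update(0.95)  # improves by only 0.05 < min_delta: not a new best
```

With min_delta=0.1 a new loss must undercut the previous best by at least 0.1 to count as an improvement; subclasses such as EarlyStoppingCallback below build on exactly this best/new_best bookkeeping.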

class EarlyStoppingCallback[source]

EarlyStoppingCallback(monitor='valid_loss', comp=None, min_delta=0.0, patience=1, reset_on_fit=True) :: TrackerCallback

A TrackerCallback that terminates training when monitored quantity stops improving.

comp is the comparison operator used to determine if a value is better than another (defaults to np.less if 'loss' is in the name passed in monitor, np.greater otherwise) and min_delta is an optional float requiring a new value to beat the current best (according to comp) by at least that amount. patience is the number of epochs you're willing to wait without improvement before training stops.

```python
learn = synth_learner(n_trn=2, metrics=F.mse_loss)
learn.fit(n_epoch=200, lr=1e-7, cbs=EarlyStoppingCallback(monitor='mse_loss', min_delta=0.1, patience=2))
```
| epoch | train_loss | valid_loss | mse_loss | time |
|---|---|---|---|---|
| 0 | 25.913376 | 28.702148 | 28.702148 | 00:00 |
| 1 | 25.952229 | 28.702074 | 28.702074 | 00:00 |
| 2 | 25.970026 | 28.701965 | 28.701965 | 00:00 |
No improvement since epoch 0: early stopping
```python
learn.validate()
```

(#2) [28.70196533203125,28.70196533203125]
```python
learn = synth_learner(n_trn=2)
learn.fit(n_epoch=200, lr=1e-7, cbs=EarlyStoppingCallback(monitor='valid_loss', min_delta=0.1, patience=2))
```
| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 15.580492 | 8.504006 | 00:00 |
| 1 | 15.592066 | 8.503983 | 00:00 |
| 2 | 15.603076 | 8.503948 | 00:00 |
No improvement since epoch 0: early stopping
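The patience counting in both runs above can be sketched framework-free: reset a wait counter on every new best, increment it otherwise, and stop once it reaches patience. This is an illustrative sketch under the assumption that the monitored quantity is a loss (lower is better); the real callback raises a cancellation exception instead of returning.

```python
def early_stopping_epochs(values, patience=2, min_delta=0.1):
    # Return how many epochs actually run before patience is exhausted
    best, wait = float('inf'), 0
    for epoch, v in enumerate(values):
        if v < best - min_delta:   # new best: reset the wait counter
            best, wait = v, 0
        else:
            wait += 1
            if wait >= patience:   # waited `patience` epochs with no improvement
                return epoch + 1   # stop after this epoch
    return len(values)

# Mirrors the first run above: epoch 0 sets the best,
# epochs 1-2 fail to beat it by min_delta=0.1, so training stops after 3 epochs
early_stopping_epochs([28.702148, 28.702074, 28.701965, 28.701900])
```

Note that the tiny improvements in the tables above never clear min_delta=0.1, which is why both runs stop after exactly patience + 1 epochs.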

class SaveModelCallback[source]

SaveModelCallback(monitor='valid_loss', comp=None, min_delta=0.0, fname='model', every_epoch=False, with_opt=False, reset_on_fit=True) :: TrackerCallback

A TrackerCallback that saves the model at its best during training and loads it at the end.

comp is the comparison operator used to determine if a value is better than another (defaults to np.less if 'loss' is in the name passed in monitor, np.greater otherwise) and min_delta is an optional float requiring a new value to beat the current best (according to comp) by at least that amount. The model will be saved in learn.path/learn.model_dir/fname.pth, either at every epoch if every_epoch=True, or at each improvement of the monitored quantity otherwise.

```python
learn = synth_learner(n_trn=2, path=Path.cwd()/'tmp')
learn.fit(n_epoch=2, cbs=SaveModelCallback())
assert (Path.cwd()/'tmp/models/model.pth').exists()
learn.fit(n_epoch=2, cbs=SaveModelCallback(every_epoch=True))
for i in range(2): assert (Path.cwd()/f'tmp/models/model_{i}.pth').exists()
shutil.rmtree(Path.cwd()/'tmp')
```
| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 10.488046 | 10.307009 | 00:00 |
| 1 | 10.410013 | 10.064041 | 00:00 |
Better model found at epoch 0 with valid_loss value: 10.307008743286133.
Better model found at epoch 1 with valid_loss value: 10.064041137695312.
| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 10.038021 | 9.718258 | 00:00 |
| 1 | 9.838678 | 9.300011 | 00:00 |


class ReduceLROnPlateau[source]

ReduceLROnPlateau(monitor='valid_loss', comp=None, min_delta=0.0, patience=1, factor=10.0, min_lr=0, reset_on_fit=True) :: TrackerCallback

A TrackerCallback that reduces learning rate when a metric has stopped improving.

```python
learn = synth_learner(n_trn=2)
learn.fit(n_epoch=4, lr=1e-7, cbs=ReduceLROnPlateau(monitor='valid_loss', min_delta=0.1, patience=2))
```
| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 11.299067 | 16.745235 | 00:00 |
| 1 | 11.289301 | 16.745203 | 00:00 |
| 2 | 11.276413 | 16.745152 | 00:00 |
| 3 | 11.267982 | 16.745146 | 00:00 |
Epoch 2: reducing lr to 1e-08
```python
learn = synth_learner(n_trn=2)
learn.fit(n_epoch=6, lr=5e-8, cbs=ReduceLROnPlateau(monitor='valid_loss', min_delta=0.1, patience=2, min_lr=1e-8))
```
| epoch | train_loss | valid_loss | time |
|---|---|---|---|
| 0 | 21.629301 | 15.617614 | 00:00 |
| 1 | 21.608873 | 15.617589 | 00:00 |
| 2 | 21.620173 | 15.617556 | 00:00 |
| 3 | 21.619131 | 15.617546 | 00:00 |
| 4 | 21.615915 | 15.617537 | 00:00 |
| 5 | 21.606327 | 15.617526 | 00:00 |
Epoch 2: reducing lr to 1e-08
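The reduction step itself can be sketched as dividing the current learning rate by factor while never going below min_lr. This is a sketch under the assumption that factor acts as a divisor, which the logs above suggest (lr=1e-7 with factor=10.0 becomes 1e-08, and lr=5e-8 is clipped to min_lr=1e-8 rather than reduced to 5e-9):

```python
def reduce_lr(lr, factor=10.0, min_lr=0.0):
    # Divide the learning rate by `factor`, but never go below `min_lr`
    return max(lr / factor, min_lr)

reduce_lr(1e-7)               # matches the first run's log
reduce_lr(5e-8, min_lr=1e-8)  # clipped to min_lr, matching the second run
```

The patience and min_delta bookkeeping is the same as in EarlyStoppingCallback; the only difference is that exhausting patience triggers a reduction instead of stopping training.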


©2021 fast.ai. All rights reserved.
Site last generated: Mar 31, 2021