Usage of optimizers

An optimizer is one of the two arguments required for compiling a Keras model:

  from keras import optimizers
  from keras.models import Sequential
  from keras.layers import Dense, Activation

  model = Sequential()
  model.add(Dense(64, kernel_initializer='uniform', input_shape=(10,)))
  model.add(Activation('softmax'))

  sgd = optimizers.SGD(learning_rate=0.01, decay=1e-6, momentum=0.9, nesterov=True)
  model.compile(loss='mean_squared_error', optimizer=sgd)

You can either instantiate an optimizer before passing it to model.compile(), as in the example above, or you can pass it by its name. In the latter case, the default parameters for the optimizer will be used.

  # pass optimizer by name: default parameters will be used
  model.compile(loss='mean_squared_error', optimizer='sgd')

Parameters common to all Keras optimizers

The parameters clipnorm and clipvalue can be used with all optimizers to control gradient clipping:

  from keras import optimizers

  # All parameter gradients will be clipped to
  # a maximum norm of 1.
  sgd = optimizers.SGD(learning_rate=0.01, clipnorm=1.)

  # All parameter gradients will be clipped to
  # a maximum value of 0.5 and
  # a minimum value of -0.5.
  sgd = optimizers.SGD(learning_rate=0.01, clipvalue=0.5)

SGD

  keras.optimizers.SGD(learning_rate=0.01, momentum=0.0, nesterov=False)

Stochastic gradient descent optimizer.

Includes support for momentum, learning rate decay, and Nesterov momentum.

Arguments

  • learning_rate: float >= 0. Learning rate.
  • momentum: float >= 0. Parameter that accelerates SGD in the relevant direction and dampens oscillations.
  • nesterov: boolean. Whether to apply Nesterov momentum.

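To make the role of these arguments concrete, here is a minimal NumPy sketch of a single SGD update with momentum and optional Nesterov momentum. It only illustrates the update rule, it is not Keras's actual implementation, and the helper name sgd_step is made up for this example.

  import numpy as np

  def sgd_step(w, grad, velocity, learning_rate=0.01, momentum=0.9, nesterov=False):
      # Velocity accumulates a decaying sum of past gradients.
      velocity = momentum * velocity - learning_rate * grad
      if nesterov:
          # Nesterov momentum steps from a "look-ahead" position along the velocity.
          w = w + momentum * velocity - learning_rate * grad
      else:
          w = w + velocity
      return w, velocity

  w, v = np.zeros(3), np.zeros(3)
  w, v = sgd_step(w, np.array([0.1, -0.2, 0.3]), v, nesterov=True)
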
RMSprop

  keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9)

RMSProp optimizer.

It is recommended to leave the parameters of this optimizer at their default values (except the learning rate, which can be freely tuned).

Arguments

  • learning_rate: float >= 0. Learning rate.
  • rho: float >= 0. Discounting factor for the moving average of squared gradients.

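For intuition about what rho controls, the following NumPy sketch shows the basic RMSprop rule: a decaying average of squared gradients rescales each step. The epsilon term and the helper name rmsprop_step are assumptions made for this illustration; this is not Keras's exact implementation.

  import numpy as np

  def rmsprop_step(w, grad, avg_sq_grad, learning_rate=0.001, rho=0.9, epsilon=1e-7):
      # Exponential moving average of squared gradients, weighted by rho.
      avg_sq_grad = rho * avg_sq_grad + (1.0 - rho) * grad ** 2
      # Scale the step by the root of that average (epsilon avoids division by zero).
      w = w - learning_rate * grad / (np.sqrt(avg_sq_grad) + epsilon)
      return w, avg_sq_grad
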
References

  • rmsprop: Divide the gradient by a running average of its recent magnitude (Tieleman & Hinton, 2012, Coursera course "Neural Networks for Machine Learning", lecture 6.5): http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf

Adagrad

  keras.optimizers.Adagrad(learning_rate=0.01)

Adagrad optimizer.

Adagrad is an optimizer with parameter-specific learning rates, which are adapted relative to how frequently a parameter gets updated during training. The more updates a parameter receives, the smaller the learning rate.

It is recommended to leave the parameters of this optimizer at their default values.

Arguments

  • learning_rate: float >= 0. Initial learning rate.

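The behaviour described above can be sketched in a few lines of NumPy: the accumulator of squared gradients only grows, so frequently updated parameters see an ever-smaller effective step. The epsilon term and the helper name adagrad_step are assumptions for this illustration, not Keras's exact code.

  import numpy as np

  def adagrad_step(w, grad, sq_grad_sum, learning_rate=0.01, epsilon=1e-7):
      # Accumulate the sum of all squared gradients seen so far (it never decays).
      sq_grad_sum = sq_grad_sum + grad ** 2
      # Parameters with a large accumulated history receive smaller updates.
      w = w - learning_rate * grad / (np.sqrt(sq_grad_sum) + epsilon)
      return w, sq_grad_sum
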
References

  • Adaptive Subgradient Methods for Online Learning and Stochastic Optimization (Duchi, Hazan & Singer, JMLR 2011): http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf

Adadelta

  keras.optimizers.Adadelta(learning_rate=1.0, rho=0.95)

Adadelta optimizer.

Adadelta is a more robust extension of Adagrad that adapts learning rates based on a moving window of gradient updates, instead of accumulating all past gradients. This way, Adadelta continues learning even when many updates have been done. Compared to Adagrad, in the original version of Adadelta you don't have to set an initial learning rate. In this version, the initial learning rate and decay factor can be set, as in most other Keras optimizers.

It is recommended to leave the parameters of this optimizer at their default values.

Arguments

  • learning_rate: float >= 0. Initial learning rate, defaults to 1. It is recommended to leave it at the default value.
  • rho: float >= 0. Adadelta decay factor, corresponding to fraction of gradient to keep at each time step.

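The moving-window idea can be illustrated with the NumPy sketch below, loosely following Zeiler (2012): both squared gradients and squared updates are tracked with decaying averages controlled by rho, so old history is gradually forgotten rather than accumulated forever. The epsilon term and the helper name adadelta_step are assumptions for this illustration, not Keras's exact code.

  import numpy as np

  def adadelta_step(w, grad, avg_sq_grad, avg_sq_update,
                    learning_rate=1.0, rho=0.95, epsilon=1e-7):
      # Decaying average of squared gradients (the "moving window").
      avg_sq_grad = rho * avg_sq_grad + (1.0 - rho) * grad ** 2
      # The step is scaled by the ratio of past update sizes to gradient sizes.
      update = -np.sqrt(avg_sq_update + epsilon) / np.sqrt(avg_sq_grad + epsilon) * grad
      # Decaying average of squared updates.
      avg_sq_update = rho * avg_sq_update + (1.0 - rho) * update ** 2
      # learning_rate defaults to 1.0, matching the signature above.
      w = w + learning_rate * update
      return w, avg_sq_grad, avg_sq_update
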
References

  • Adadelta - an adaptive learning rate method (Zeiler, 2012): https://arxiv.org/abs/1212.5701

Adam

  keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, amsgrad=False)

Adam optimizer.

Default parameters follow those provided in the original paper.

Arguments

  • learning_rate: float >= 0. Learning rate.
  • beta_1: float, 0 < beta < 1. Generally close to 1.
  • beta_2: float, 0 < beta < 1. Generally close to 1.
  • amsgrad: boolean. Whether to apply the AMSGrad variant of this algorithm from the paper "On the Convergence of Adam and Beyond".

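To show how the arguments interact, here is a minimal NumPy sketch of one Adam step, including the bias correction from the paper and the optional AMSGrad variant. The epsilon term, the explicit time step t, and the helper name adam_step are assumptions for this illustration; it is not Keras's exact implementation.

  import numpy as np

  def adam_step(w, grad, m, v, v_max, t, learning_rate=0.001,
                beta_1=0.9, beta_2=0.999, epsilon=1e-7, amsgrad=False):
      # Moving averages of the gradient and of its square.
      m = beta_1 * m + (1.0 - beta_1) * grad
      v = beta_2 * v + (1.0 - beta_2) * grad ** 2
      # Bias correction compensates for the zero initialization of m and v (t starts at 1).
      m_hat = m / (1.0 - beta_1 ** t)
      v_hat = v / (1.0 - beta_2 ** t)
      if amsgrad:
          # AMSGrad uses the running maximum of the second-moment estimate.
          v_max = np.maximum(v_max, v_hat)
          v_hat = v_max
      w = w - learning_rate * m_hat / (np.sqrt(v_hat) + epsilon)
      return w, m, v, v_max
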
References

  • Adam - A Method for Stochastic Optimization (Kingma & Ba, 2014): https://arxiv.org/abs/1412.6980
  • On the Convergence of Adam and Beyond (Reddi, Kale & Kumar, ICLR 2018): https://openreview.net/forum?id=ryQu7f-RZ

Adamax

  keras.optimizers.Adamax(learning_rate=0.002, beta_1=0.9, beta_2=0.999)

Adamax optimizer from Adam paper's Section 7.

It is a variant of Adam based on the infinity norm. Default parameters follow those provided in the paper.

Arguments

  • learning_rate: float >= 0. Learning rate.
  • beta_1: float, 0 < beta < 1. Generally close to 1.
  • beta_2: float, 0 < beta < 1. Generally close to 1.

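The sketch below shows how the infinity norm replaces Adam's average of squared gradients: Adamax tracks an exponentially weighted maximum of gradient magnitudes instead. The epsilon term, the explicit time step t, and the helper name adamax_step are assumptions for this illustration, not Keras's exact code.

  import numpy as np

  def adamax_step(w, grad, m, u, t, learning_rate=0.002,
                  beta_1=0.9, beta_2=0.999, epsilon=1e-7):
      # First-moment estimate, exactly as in Adam.
      m = beta_1 * m + (1.0 - beta_1) * grad
      # Exponentially weighted infinity norm of the gradients.
      u = np.maximum(beta_2 * u, np.abs(grad))
      # Only the first moment needs bias correction (t starts at 1).
      w = w - (learning_rate / (1.0 - beta_1 ** t)) * m / (u + epsilon)
      return w, m, u
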
References

  • Adam - A Method for Stochastic Optimization (Kingma & Ba, 2014), Section 7: https://arxiv.org/abs/1412.6980

Nadam

  keras.optimizers.Nadam(learning_rate=0.002, beta_1=0.9, beta_2=0.999)

Nesterov Adam optimizer.

Much like Adam is essentially RMSprop with momentum, Nadam is RMSprop with Nesterov momentum.

Default parameters follow those provided in the paper. It is recommended to leave the parameters of this optimizer at their default values.

Arguments

  • learning_rate: float >= 0. Learning rate.
  • beta_1: float, 0 < beta < 1. Generally close to 1.
  • beta_2: float, 0 < beta < 1. Generally close to 1.

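Since Nadam takes the same arguments as Adam (apart from amsgrad), swapping it in is a one-line change. The snippet below is a minimal usage sketch with a toy model built just for illustration.

  from keras import optimizers
  from keras.models import Sequential
  from keras.layers import Dense

  model = Sequential()
  model.add(Dense(1, input_shape=(10,)))

  # Nadam with the default parameters shown above, passed like any other optimizer.
  model.compile(loss='mean_squared_error',
                optimizer=optimizers.Nadam(learning_rate=0.002, beta_1=0.9, beta_2=0.999))
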
References

  • Incorporating Nesterov Momentum into Adam (Dozat, 2016): http://cs229.stanford.edu/proj2015/054_report.pdf
  • On the importance of initialization and momentum in deep learning (Sutskever et al., ICML 2013)