Usage of optimizers

An optimizer is one of the two arguments required for compiling a Keras model:

from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.optimizers import SGD

model = Sequential()
model.add(Dense(64, init='uniform', input_dim=10))
model.add(Activation('tanh'))
model.add(Dense(10, init='uniform'))
model.add(Activation('softmax'))

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)

You can either instantiate an optimizer before passing it to model.compile(), as in the above example, or you can pass it by its string name. In the latter case, the default parameters for the optimizer will be used.

# pass optimizer by name: default parameters will be used
model.compile(loss='mean_squared_error', optimizer='sgd')

Base class

keras.optimizers.Optimizer(**kwargs)

All optimizers that descend from this class support the following keyword argument:

  • clipnorm: float >= 0. Gradients will be clipped when their L2 norm exceeds this value.

Note: this is the base class for building optimizers, not an actual optimizer that can be used for training models.
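
For example, gradient norm clipping can be requested on any of the optimizers below. A minimal sketch, assuming model is the Sequential model defined above; the threshold of 1.0 is an arbitrary illustrative value:

from keras.optimizers import SGD

# Gradients will be rescaled whenever their L2 norm exceeds 1.0.
sgd = SGD(lr=0.01, clipnorm=1.0)
model.compile(loss='mean_squared_error', optimizer=sgd)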


SGD

keras.optimizers.SGD(lr=0.01, momentum=0., decay=0., nesterov=False)

Arguments:

  • lr: float >= 0. Learning rate.
  • momentum: float >= 0. Momentum factor applied to the parameter updates.
  • decay: float >= 0. Learning rate decay over each update.
  • nesterov: boolean. Whether to apply Nesterov momentum.
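
The arguments above combine into a fairly simple per-parameter update. A rough sketch of that update in plain Python, operating elementwise on a parameter p, its gradient g, and a velocity buffer v (an illustration only, not Keras's actual code):

def sgd_update(p, g, v, lr=0.01, momentum=0., decay=0., nesterov=False, iterations=0):
    # Decay shrinks the learning rate as the number of updates grows.
    lr_t = lr * (1.0 / (1.0 + decay * iterations))
    # Momentum keeps a velocity term that smooths successive gradients.
    v = momentum * v - lr_t * g
    if nesterov:
        # Nesterov momentum takes the gradient step from the looked-ahead position.
        p = p + momentum * v - lr_t * g
    else:
        p = p + v
    return p, v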

Adagrad

keras.optimizers.Adagrad(lr=0.01, epsilon=1e-6)

It is recommended to leave the parameters of this optimizer at their default values.

Arguments:

  • lr: float >= 0. Learning rate.
  • epsilon: float >= 0.
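
The idea behind Adagrad is to give each parameter its own effective learning rate, scaled down by the history of its squared gradients. A minimal NumPy sketch of one update, for illustration only (not the Keras implementation):

import numpy as np

def adagrad_update(p, g, accumulator, lr=0.01, epsilon=1e-6):
    # Accumulate the squared gradients seen so far.
    accumulator = accumulator + g ** 2
    # Parameters with a large gradient history receive smaller steps.
    p = p - lr * g / (np.sqrt(accumulator) + epsilon)
    return p, accumulator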

Adadelta

keras.optimizers.Adadelta(lr=1.0, rho=0.95, epsilon=1e-6)

It is recommended to leave the parameters of this optimizer at their default values.

Arguments:

  • lr: float >= 0. Learning rate. It is recommended to leave it at the default value.
  • rho: float >= 0.
  • epsilon: float >= 0. Fuzz factor.

For more info, see "ADADELTA: An Adaptive Learning Rate Method" by Matthew D. Zeiler.
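
Adadelta replaces the raw learning rate with a ratio of two running averages: the RMS of recent updates over the RMS of recent gradients. A minimal NumPy sketch of one update, for illustration only (not the Keras implementation):

import numpy as np

def adadelta_update(p, g, acc_grad, acc_delta, lr=1.0, rho=0.95, epsilon=1e-6):
    # Running average of squared gradients.
    acc_grad = rho * acc_grad + (1.0 - rho) * g ** 2
    # Step size adapts via the RMS of past updates over the RMS of past gradients.
    update = g * np.sqrt(acc_delta + epsilon) / np.sqrt(acc_grad + epsilon)
    p = p - lr * update
    # Running average of squared updates.
    acc_delta = rho * acc_delta + (1.0 - rho) * update ** 2
    return p, acc_grad, acc_delta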


RMSprop

keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-6)

It is recommended to leave the parameters of this optimizer at their default values.

Arguments:

  • lr: float >= 0. Learning rate.
  • rho: float >= 0.
  • epsilon: float >= 0. Fuzz factor.
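
RMSprop divides each gradient by a moving average of its recent magnitude. A minimal NumPy sketch of one update, for illustration only (not the Keras implementation):

import numpy as np

def rmsprop_update(p, g, acc, lr=0.001, rho=0.9, epsilon=1e-6):
    # Exponential moving average of squared gradients.
    acc = rho * acc + (1.0 - rho) * g ** 2
    p = p - lr * g / (np.sqrt(acc) + epsilon)
    return p, acc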

Adam

keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-8)

Adam optimizer, proposed by Kingma and Ba in "Adam: A Method for Stochastic Optimization". Default parameters follow those suggested in the paper.

Arguments:

  • lr: float >= 0. Learning rate.
  • beta_1, beta_2: floats, 0 < beta < 1. Generally close to 1.
  • epsilon: float >= 0. Fuzz factor.
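
Adam combines momentum-like first moment estimates with RMSprop-like second moment estimates, plus a bias correction for the early steps. A minimal NumPy sketch following the update rule described in the paper (for illustration only, not the Keras implementation):

import numpy as np

def adam_update(p, g, m, v, t, lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-8):
    # t is the 1-based index of this update step.
    m = beta_1 * m + (1.0 - beta_1) * g           # first moment (mean) estimate
    v = beta_2 * v + (1.0 - beta_2) * g ** 2      # second moment (uncentered variance) estimate
    m_hat = m / (1.0 - beta_1 ** t)               # bias-corrected estimates
    v_hat = v / (1.0 - beta_2 ** t)
    p = p - lr * m_hat / (np.sqrt(v_hat) + epsilon)
    return p, m, v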