Usage of optimizers

An optimizer is one of the two arguments required for compiling a Keras model:

from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import optimizers

model = Sequential()
model.add(Dense(64, kernel_initializer='uniform', input_shape=(10,)))
model.add(Activation('tanh'))
model.add(Activation('softmax'))

sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)

You can either instantiate an optimizer before passing it to model.compile(), as in the example above, or you can pass it by its name. In the latter case, the default parameters for the optimizer will be used.

# pass optimizer by name: default parameters will be used
model.compile(loss='mean_squared_error', optimizer='sgd')

Parameters common to all Keras optimizers

The parameters clipnorm and clipvalue can be used with all optimizers to control gradient clipping:

from keras import optimizers

# All parameter gradients will be clipped to
# a maximum norm of 1.
sgd = optimizers.SGD(lr=0.01, clipnorm=1.)

from keras import optimizers

# All parameter gradients will be clipped to
# a maximum value of 0.5 and
# a minimum value of -0.5.
sgd = optimizers.SGD(lr=0.01, clipvalue=0.5)

RMSprop

keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)

RMSProp optimizer.

It is recommended to leave the parameters of this optimizer at their default values (except the learning rate, which can be freely tuned).

This optimizer is usually a good choice for recurrent neural networks.

Arguments

  • lr: float >= 0. Learning rate.
  • rho: float >= 0. Decay factor for the moving average of squared gradients.
  • epsilon: float >= 0. Fuzz factor.
  • decay: float >= 0. Learning rate decay over each update.
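
For example, a minimal sketch of compiling a small recurrent model with RMSprop; the layer sizes, input shape, and learning rate below are illustrative, and only the learning rate is changed from the defaults:

from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras import optimizers

# Small sequence classifier: 100 timesteps of 8 features each.
model = Sequential()
model.add(LSTM(32, input_shape=(100, 8)))
model.add(Dense(10, activation='softmax'))

# Tune only the learning rate; rho, epsilon and decay keep their defaults.
rmsprop = optimizers.RMSprop(lr=0.0005)
model.compile(loss='categorical_crossentropy', optimizer=rmsprop)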

Adagrad

keras.optimizers.Adagrad(lr=0.01, epsilon=1e-08, decay=0.0)

Adagrad optimizer.

It is recommended to leave the parameters of this optimizer at their default values.

Arguments

  • lr: float >= 0. Learning rate.
  • epsilon: float >= 0. Fuzz factor.
  • decay: float >= 0. Learning rate decay over each update.

Adadelta

keras.optimizers.Adadelta(lr=1.0, rho=0.95, epsilon=1e-08, decay=0.0)

Adadelta optimizer.

It is recommended to leave the parameters of this optimizer at their default values.

Arguments

  • lr: float >= 0. Learning rate. It is recommended to leave it at the default value.
  • rho: float >= 0. Decay factor for the moving averages of squared gradients and squared updates.
  • epsilon: float >= 0. Fuzz factor.
  • decay: float >= 0. Learning rate decay over each update.

SGD

keras.optimizers.SGD(lr=0.01, momentum=0.0, decay=0.0, nesterov=False)

Stochastic gradient descent optimizer.

Includes support for momentum, learning rate decay, and Nesterov momentum.

Arguments

  • lr: float >= 0. Learning rate.
  • momentum: float >= 0. Momentum factor for the parameter updates.
  • decay: float >= 0. Learning rate decay over each update.
  • nesterov: boolean. Whether to apply Nesterov momentum.
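
For example, a brief sketch of SGD with momentum, Nesterov updates, and learning rate decay; the values are illustrative, and the schedule in the comment assumes the inverse-time decay commonly applied by Keras:

from keras import optimizers

# Momentum SGD with Nesterov updates and learning rate decay.
# Assuming inverse-time decay, the effective learning rate after t
# updates is roughly lr / (1 + decay * t), e.g. with decay=1e-3:
#   t = 0    -> 0.100
#   t = 1000 -> 0.050
#   t = 9000 -> 0.010
sgd = optimizers.SGD(lr=0.1, momentum=0.9, decay=1e-3, nesterov=True)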

Adam

keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)

Adam optimizer.

Default parameters follow those provided in the original paper.

Arguments

  • lr: float >= 0. Learning rate.
  • beta_1: float, 0 < beta < 1. Generally close to 1.
  • beta_2: float, 0 < beta < 1. Generally close to 1.
  • epsilon: float >= 0. Fuzz factor.
  • decay: float >= 0. Learning rate decay over each update.
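
As with any optimizer here, the clipnorm and clipvalue parameters described above can be combined with Adam; a minimal sketch with illustrative values:

from keras import optimizers

# Adam with a smaller learning rate and gradient norm clipping;
# beta_1, beta_2 and epsilon keep the paper defaults.
adam = optimizers.Adam(lr=0.0001, clipnorm=1.)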

Adamax

keras.optimizers.Adamax(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)

Adamax optimizer, described in Section 7 of the Adam paper.

It is a variant of Adam based on the infinity norm. Default parameters follow those provided in the paper.

Arguments

  • lr: float >= 0. Learning rate.
  • beta_1/beta_2: floats, 0 < beta < 1. Generally close to 1.
  • epsilon: float >= 0. Fuzz factor.
  • decay: float >= 0. Learning rate decay over each update.

Nadam

keras.optimizers.Nadam(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-08, schedule_decay=0.004)

Nesterov Adam optimizer.

Much like Adam is essentially RMSprop with momentum, Nadam is Adam with Nesterov momentum.

Default parameters follow those provided in the paper. It is recommended to leave the parameters of this optimizer at their default values.

Arguments

  • lr: float >= 0. Learning rate.
  • beta_1/beta_2: floats, 0 < beta < 1. Generally close to 1.
  • epsilon: float >= 0. Fuzz factor.
  • schedule_decay: float >= 0.

TFOptimizer

keras.optimizers.TFOptimizer(optimizer)

Wrapper class for native TensorFlow optimizers.
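
For example, a minimal sketch of wrapping a native TensorFlow optimizer, assuming the TensorFlow 1.x-style tf.train API that matches the Keras version documented here:

import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers

model = Sequential()
model.add(Dense(1, input_shape=(10,)))

# Wrap the native optimizer so it can be passed to model.compile().
tf_opt = tf.train.RMSPropOptimizer(learning_rate=0.001)
model.compile(loss='mean_squared_error', optimizer=optimizers.TFOptimizer(tf_opt))

Note that, depending on the Keras version, Keras-side settings such as clipnorm, clipvalue, or decay may not be applied to a wrapped TensorFlow optimizer.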