AdamOptimizer

class paddle.fluid.optimizer.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, regularization=None, name=None, lazy_mode=False)[source]

The Adam optimizer uses the optimization technique described at the end of Section 2 of the Adam paper. It dynamically adjusts the learning rate of each parameter using the 1st moment estimates and the 2nd moment estimates of the gradient.

The update rule for parameter param_out with gradient grad:

\[
\begin{aligned}
t &= t + 1 \\
moment\_1\_out &= \beta_1 * moment\_1 + (1 - \beta_1) * grad \\
moment\_2\_out &= \beta_2 * moment\_2 + (1 - \beta_2) * grad * grad \\
learning\_rate &= learning\_rate * \frac{\sqrt{1 - \beta_2^t}}{1 - \beta_1^t} \\
param\_out &= param - learning\_rate * \frac{moment\_1}{\sqrt{moment\_2} + \epsilon}
\end{aligned}
\]

Related paper: Adam: A Method for Stochastic Optimization
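
For intuition, the update rule above can be written as the following minimal NumPy sketch. It is illustrative only, independent of the fluid API, and all variable names and values are chosen just for this example:

import numpy as np

beta1, beta2, epsilon = 0.9, 0.999, 1e-8
learning_rate = 0.001

param = np.array([1.0, -2.0, 3.0])
grad = np.array([0.1, -0.2, 0.3])   # gradient of the loss w.r.t. param
moment_1 = np.zeros_like(param)     # 1st moment estimate
moment_2 = np.zeros_like(param)     # 2nd moment estimate
t = 0

# one Adam step, following the formulas above
t += 1
moment_1 = beta1 * moment_1 + (1 - beta1) * grad
moment_2 = beta2 * moment_2 + (1 - beta2) * grad * grad
lr_t = learning_rate * np.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)  # bias correction
param = param - lr_t * moment_1 / (np.sqrt(moment_2) + epsilon)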

Parameters
  • learning_rate (float|Variable, optional) – The learning rate used to update Parameter. It can be a float value or a Variable with a float type. The default value is 0.001.

  • beta1 (float, optional) – The exponential decay rate for the 1st moment estimates. The default value is 0.9.

  • beta2 (float, optional) – The exponential decay rate for the 2nd moment estimates. The default value is 0.999.

  • epsilon (float, optional) – A small float value for numerical stability. The default value is 1e-08.

  • regularization (WeightDecayRegularizer, optional) – A Regularizer, such as L2DecayRegularizer. The default value is None.

  • name (str, optional) – Normally there is no need for the user to set this property. For more information, please refer to Name. The default value is None.

  • lazy_mode (bool, optional) – The official Adam algorithm has two moving-average accumulators, which are updated at every step. In both dense mode and sparse mode, every element of the two moving averages is updated. If the parameter is very large, the update may be very slow. Lazy mode updates only the elements that have gradients in the current mini-batch, so it is much faster. However, this mode has different semantics from the original Adam algorithm and may lead to different results. The default value is False.

Examples

import paddle
import paddle.fluid as fluid

place = fluid.CPUPlace()
main = fluid.Program()
with fluid.program_guard(main):
    # build a simple linear regression network
    x = fluid.data(name='x', shape=[None, 13], dtype='float32')
    y = fluid.data(name='y', shape=[None, 1], dtype='float32')
    y_predict = fluid.layers.fc(input=x, size=1, act=None)
    cost = fluid.layers.square_error_cost(input=y_predict, label=y)
    avg_cost = fluid.layers.mean(cost)

    # append the Adam optimization operators to the program
    adam_optimizer = fluid.optimizer.AdamOptimizer(0.01)
    adam_optimizer.minimize(avg_cost)

    # train for one pass over the UCI housing dataset
    fetch_list = [avg_cost]
    train_reader = paddle.batch(
        paddle.dataset.uci_housing.train(), batch_size=1)
    feeder = fluid.DataFeeder(place=place, feed_list=[x, y])
    exe = fluid.Executor(place)
    exe.run(fluid.default_startup_program())
    for data in train_reader():
        exe.run(main, feed=feeder.feed(data), fetch_list=fetch_list)
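
A hedged sketch of constructing the optimizer with the optional arguments described above; the hyper-parameter values and the regularization_coeff keyword are illustrative assumptions, not recommendations:

import paddle.fluid as fluid

# Sketch only: arbitrary values chosen for illustration.
adam_optimizer = fluid.optimizer.AdamOptimizer(
    learning_rate=0.01,
    beta1=0.9,
    beta2=0.999,
    epsilon=1e-8,
    regularization=fluid.regularizer.L2DecayRegularizer(
        regularization_coeff=0.1),
    lazy_mode=True)  # only update elements that have gradients in the mini-batch
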
minimize(loss, startup_program=None, parameter_list=None, no_grad_set=None, grad_clip=None)

Add operations to minimize loss by updating parameter_list.

Parameters
  • loss (Variable) – A Variable containing the value to minimize.

  • startup_program (Program, optional) – Program for initializing the parameters in parameter_list. The default value is None, in which case default_startup_program will be used.

  • parameter_list (list, optional) – List of the names of the Variables to be updated to minimize loss. The default value is None, in which case all parameters will be updated.

  • no_grad_set (set, optional) – Set of Variable objects that don’t need to be updated. The default value is None.

  • grad_clip (GradClipBase, optional) – Gradient clipping strategy. Static graph mode does not need this argument; currently, it only supports gradient clipping in dygraph mode. In the future, this argument may be adjusted. The default value is None.

Returns

A tuple (optimize_ops, params_grads): a list of operators appended by minimize and a list of (param, grad) Variable pairs, where param is a Parameter and grad is the gradient value corresponding to that parameter.

Return type

tuple

Examples

Please refer to the example of the current Optimizer above.
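
As a minimal sketch of the return value, reusing avg_cost and adam_optimizer from the constructor example above, the returned tuple can be unpacked as follows (the print loop is only illustrative):

# Sketch only: avg_cost and adam_optimizer are taken from the example above.
optimize_ops, params_grads = adam_optimizer.minimize(avg_cost)
for param, grad in params_grads:
    print(param.name, grad.name)  # each pair is (Parameter, gradient Variable)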

set_dict(state_dict)

Load the optimizer state dict. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be changed.

Parameters

state_dict (dict) – Dict containing all the Variables needed by the optimizer

Returns

None

Examples

import paddle.fluid as fluid

with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding("emb", [10, 10])

    # save the parameter state dict of the layer
    state_dict = emb.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    adam = fluid.optimizer.Adam(
        learning_rate=fluid.layers.noam_decay(100, 10000))
    # save the optimizer state dict under the same file prefix
    state_dict = adam.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    # load both the parameter and the optimizer state dicts
    para_state_dict, opti_state_dict = fluid.load_dygraph("paddle_dy")

    adam.set_dict(opti_state_dict)
state_dict()

Get the state dict information from the optimizer. It contains all the Variables used by the optimizer. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be included in the state dict. If the optimizer has never been called (i.e. minimize has not been called), the state dict is empty.

Parameters

None

Returns

dict containing all the Variables used by the optimizer

Return type

dict

Examples

import paddle.fluid as fluid
adam = fluid.optimizer.Adam(0.001)
state_dict = adam.state_dict()  # empty until minimize has been called