LambOptimizer

class paddle.fluid.optimizer.LambOptimizer(learning_rate=0.001, lamb_weight_decay=0.01, beta1=0.9, beta2=0.999, epsilon=1e-06, regularization=None, exclude_from_weight_decay_fn=None, name=None)[source]

LAMB (Layer-wise Adaptive Moments optimizer for Batch training) Optimizer.

The LAMB optimizer is designed to scale up the training batch size without losing accuracy. It performs adaptive element-wise updating and accurate layer-wise correction. For more information, please refer to Large Batch Optimization for Deep Learning: Training BERT in 76 minutes .

Parameters are updated as follows:

\[
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1 - \beta_1) g_t\\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2) g_t^2\\
r_t &= \frac{m_t}{\sqrt{v_t} + \epsilon}\\
w_t &= w_{t-1} - \eta_t \frac{\left\| w_{t-1} \right\|}{\left\| r_t + \lambda w_{t-1} \right\|} \left( r_t + \lambda w_{t-1} \right)
\end{aligned}
\]

where \(m\) is the first moment, \(v\) the second moment, \(g_t\) the gradient at step \(t\), \(\eta\) the learning rate, and \(\lambda\) the LAMB weight decay rate.
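
To make the update rule concrete, here is a minimal NumPy sketch of a single LAMB step for one parameter tensor; the variable names, shapes, and hyperparameter values below are illustrative only and are not part of the Paddle API:

import numpy as np

# Illustrative hyperparameters (matching the constructor defaults below).
beta1, beta2, epsilon = 0.9, 0.999, 1e-06
lamb_weight_decay = 0.01
learning_rate = 0.001

# Toy state: one parameter tensor, its gradient, and the two moments.
w = np.random.rand(3, 4).astype('float32')
g = np.random.rand(3, 4).astype('float32')
m = np.zeros_like(w)
v = np.zeros_like(w)

# One LAMB update, following the formulas above.
m = beta1 * m + (1 - beta1) * g                  # first moment
v = beta2 * v + (1 - beta2) * g ** 2             # second moment
r = m / (np.sqrt(v) + epsilon)                   # element-wise Adam direction
update = r + lamb_weight_decay * w               # add LAMB weight decay
trust_ratio = np.linalg.norm(w) / np.linalg.norm(update)   # layer-wise correction
w = w - learning_rate * trust_ratio * update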

Parameters
  • learning_rate (float|Variable, optional) – the learning rate used to update parameters. Can be a float value or a Variable with data type float32. Default 0.001.

  • lamb_weight_decay (float, optional) – The LAMB weight decay rate. Default 0.01.

  • beta1 (float, optional) – The exponential decay rate for the 1st moment estimates. Default 0.9.

  • beta2 (float, optional) – The exponential decay rate for the 2nd moment estimates. Default 0.999.

  • epsilon (float, optional) – A small float value for numerical stability. Default 1e-6.

  • regularization (Regularizer|None) – A Regularizer, such as fluid.regularizer.L1DecayRegularizer. Default None.

  • exclude_from_weight_decay_fn (function|None) – Exclude a parameter from weight decay when exclude_from_weight_decay_fn(parameter) returns true. Default None.

  • name (str|None) – For detailed information, please refer to Name. Usually this does not need to be set; it is None by default.

Examples

import paddle.fluid as fluid

# Build a small network and take the mean of its output as the loss.
data = fluid.data(name='x', shape=[-1, 5], dtype='float32')
hidden = fluid.layers.fc(input=data, size=10)
cost = fluid.layers.mean(hidden)

# Exclude bias parameters (names ending in '.b_0') from weight decay.
def exclude_fn(param):
    return param.name.endswith('.b_0')

optimizer = fluid.optimizer.Lamb(learning_rate=0.002,
                                 exclude_from_weight_decay_fn=exclude_fn)
optimizer.minimize(cost)
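
For reference, the program built above can be run with an Executor; the following is a minimal sketch continuing the example, where the random input and batch size are illustrative:

import numpy as np

# Run on CPU and initialize the parameters defined above.
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

# Feed one batch of random data and fetch the loss value.
x = np.random.random(size=(8, 5)).astype('float32')
loss_val, = exe.run(fluid.default_main_program(),
                    feed={'x': x},
                    fetch_list=[cost])
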
minimize(loss, startup_program=None, parameter_list=None, no_grad_set=None, grad_clip=None)

Add operations to minimize loss by updating parameter_list.

Parameters
  • loss (Variable) – A Variable containing the value to minimize.

  • startup_program (Program, optional) – The Program for initializing parameters in parameter_list. The default value is None, in which case default_startup_program will be used.

  • parameter_list (list, optional) – List of Variable names to update to minimize loss. The default value is None, in which case all parameters will be updated.

  • no_grad_set (set, optional) – Set of Variable objects that don’t need to be updated. The default value is None.

  • grad_clip (GradClipBase, optional) – Gradient clipping strategy. Static graph mode does not need this argument; currently it only supports gradient clipping in dygraph mode. In the future, this argument may be adjusted. The default value is None.

Returns

tuple (optimize_ops, params_grads) – optimize_ops is a list of operators appended by minimize, and params_grads is a list of (param, grad) Variable pairs, where param is a Parameter and grad is the gradient value corresponding to that parameter.

Return type

tuple

Examples

Please refer to the example of the current Optimizer above.
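
As a supplement, the snippet below is a minimal sketch of the two values returned by minimize; the small network is illustrative and mirrors the class-level example above:

import paddle.fluid as fluid

data = fluid.data(name='x', shape=[-1, 5], dtype='float32')
cost = fluid.layers.mean(fluid.layers.fc(input=data, size=10))

optimizer = fluid.optimizer.Lamb(learning_rate=0.002)
optimize_ops, params_grads = optimizer.minimize(cost)

# optimize_ops: the operators appended to the program by minimize.
# params_grads: (parameter, gradient) Variable pairs.
for param, grad in params_grads:
    print(param.name, grad.name)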

set_dict(state_dict)

Load the optimizer state dict. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be changed.

Parameters

state_dict (dict) – A dict containing all the Variables needed by the optimizer.

Returns

None

Examples

import paddle.fluid as fluid

with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding("emb", [10, 10])

    # Save the layer parameters.
    state_dict = emb.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    # Save the optimizer state under the same prefix.
    adam = fluid.optimizer.Adam(learning_rate=fluid.layers.noam_decay(100, 10000))
    state_dict = adam.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    # Load both parts back and restore the optimizer state.
    para_state_dict, opti_state_dict = fluid.load_dygraph("paddle_dy")

    adam.set_dict(opti_state_dict)
state_dict()

Get state dict information from the optimizer. It contains all the Variables used by the optimizer. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be included in the state dict. If the optimizer has never been called (i.e., minimize has not been run), the state dict is empty.

Parameters

None

Returns

A dict containing all the Variables used by the optimizer.

Return type

state_dict(dict)

Examples

import paddle.fluid as fluid

adam = fluid.optimizer.Adam(0.001)
# Empty here, since minimize has not been called yet.
state_dict = adam.state_dict()