LambOptimizer¶
class paddle.fluid.optimizer.LambOptimizer(learning_rate=0.001, lamb_weight_decay=0.01, beta1=0.9, beta2=0.999, epsilon=1e-06, parameter_list=None, regularization=None, grad_clip=None, exclude_from_weight_decay_fn=None, name=None) [source]
-
LAMB (Layer-wise Adaptive Moments optimizer for Batching training) Optimizer.
The LAMB optimizer is designed to scale up the batch size of training without losing accuracy; it supports adaptive element-wise updating and accurate layer-wise correction. For more information, please refer to Large Batch Optimization for Deep Learning: Training BERT in 76 minutes.
The updating of parameters follows:
\[
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1 - \beta_1) g_t \\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2) g_t^2 \\
m_t &= \frac{m_t}{\beta_1^t} \\
v_t &= \frac{v_t}{\beta_2^t} \\
r_t &= \frac{m_t}{\sqrt{v_t} + \epsilon} \\
w_t &= w_{t-1} - \eta_t \frac{\left\| w_{t-1} \right\|}{\left\| r_t + \lambda w_{t-1} \right\|} (r_t + \lambda w_{t-1})
\end{aligned}
\]
where \(m\) is the 1st moment, \(v\) the 2nd moment, \(\eta\) the learning rate, and \(\lambda\) the LAMB weight decay rate.
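To make the update rule above concrete, here is a minimal NumPy sketch of a single LAMB step that transcribes these formulas literally (an illustration only, not Paddle's implementation; the function name lamb_step is made up for this example):

import numpy as np

def lamb_step(w, g, m, v, t, lr=0.001, beta1=0.9, beta2=0.999,
              epsilon=1e-06, lamb_weight_decay=0.01):
    # Moment estimates, as in the formulas above.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / beta1 ** t
    v_hat = v / beta2 ** t
    # Element-wise (Adam-style) update direction.
    r = m_hat / (np.sqrt(v_hat) + epsilon)
    update = r + lamb_weight_decay * w
    # Layer-wise correction: scale by ||w_{t-1}|| / ||r_t + lambda * w_{t-1}||.
    trust_ratio = np.linalg.norm(w) / np.linalg.norm(update)
    return w - lr * trust_ratio * update, m, v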
- Parameters
-
learning_rate (float|Variable, optional) – the learning rate used to update parameters. Can be a float value or a Variable with data type float32. Default 0.001.
lamb_weight_decay (float, optional) – The LAMB weight decay rate. Default 0.01.
beta1 (float, optional) – The exponential decay rate for the 1st moment estimates. Default 0.9.
beta2 (float, optional) – The exponential decay rate for the 2nd moment estimates. Default 0.999.
epsilon (float, optional) – A small float value for numerical stability. Default 1e-6.
parameter_list (Iterable, optional) – Iterable of Variable names to update to minimize loss. This parameter is required in dygraph mode. The default value is None in static graph mode, at which time all parameters will be updated.
regularization (WeightDecayRegularizer, optional) – The strategy of regularization. There are two methods: api_fluid_regularizer_L1Decay, api_fluid_regularizer_L2Decay. If a parameter has already set a regularizer using api_fluid_ParamAttr, the regularization setting here in the optimizer will be ignored for that parameter. Otherwise, the regularization setting here in the optimizer will take effect. Default None, meaning there is no regularization.
grad_clip (GradientClipBase, optional) – Gradient clipping strategy; it is an instance of some derived class of GradientClipBase. There are three clipping strategies: api_paddle_fluid_clip_ClipGradByGlobalNorm, api_paddle_fluid_clip_ClipGradByNorm, api_paddle_fluid_clip_ClipGradByValue. If you want better convergence, it is recommended to use api_paddle_fluid_clip_ClipGradByGlobalNorm. Default None, meaning there is no gradient clipping.
exclude_from_weight_decay_fn (function|None) – Exclude a parameter from weight decay when exclude_from_weight_decay_fn(parameter) returns True. Default None.
name (str|None) – For detailed information, please refer to Name. Usually name does not need to be set and is None by default.
Examples
import paddle
import paddle.fluid as fluid

paddle.enable_static()

data = paddle.static.data(name='x', shape=[-1, 5], dtype='float32')
hidden = paddle.static.nn.fc(x=data, size=10)
cost = paddle.mean(hidden)

def exclude_fn(param):
    return param.name.endswith('.b_0')

optimizer = fluid.optimizer.Lamb(learning_rate=0.002,
                                 exclude_from_weight_decay_fn=exclude_fn)
optimizer.minimize(cost)
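A variant of the example above (a sketch based on the parameter descriptions; the clip_norm value is illustrative) that also passes a gradient clipping strategy and an explicit LAMB weight decay rate:

import paddle
import paddle.fluid as fluid

paddle.enable_static()

data = paddle.static.data(name='x', shape=[-1, 5], dtype='float32')
hidden = paddle.static.nn.fc(x=data, size=10)
cost = paddle.mean(hidden)

# Clip gradients by global norm before the LAMB update.
clip = paddle.nn.ClipGradByGlobalNorm(clip_norm=1.0)
optimizer = fluid.optimizer.Lamb(learning_rate=0.002,
                                 lamb_weight_decay=0.01,
                                 grad_clip=clip)
optimizer.minimize(cost)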
-
append_regularization_ops(parameters_and_grads, regularization=None)¶
-
Create and add backward regularization Operators
Creates and adds backward regularization operators in the BlockDesc. This will add gradients of the regularizer function to the gradients of the parameters and return these modified gradients. This is the same as implementing weight decay in optimizers for regularization.
- Parameters
-
parameters_and_grads – A list of (parameters, gradients) pairs that need to be regularized.
regularization – A global regularizer. If a parameter has not set its own regularizer, this global regularizer will be applied to it.
- Returns
-
A list of (parameters, gradients) pairs with the regularized gradients
- Return type
-
list[(Variable, Variable)]
- Raises
-
Exception – Unknown regularization type
-
apply_gradients(params_grads)¶
-
Second part of minimize, appending optimization operators for given params_grads pairs.
- Parameters
-
params_grads (list) – list of (param, grad) pair to do optimization.
- Returns
-
A list of operators appended to the current program.
- Return type
-
list
Examples
import paddle.fluid as fluid

loss = network()
optimizer = fluid.optimizer.SGD(learning_rate=0.1)
params_grads = optimizer.backward(loss)
# you may append operations for params_grads here
# ...
optimizer.apply_gradients(params_grads)
-
apply_optimize(loss, startup_program, params_grads)¶
-
Second part of minimize, appending optimization operators for given params_grads pairs.
- Parameters
-
loss (Variable) – Loss variable to run optimizations.
startup_program (Program) – Startup program for initializing parameters in parameter_list.
params_grads (list) – List of (param, grad) pairs to do optimization.
- Returns
-
A list of operators appended to the current program.
- Return type
-
list
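apply_optimize is normally invoked internally by minimize; the following is a hedged static-graph sketch (the network and hyper-parameters are illustrative assumptions) of calling it directly with the pairs produced by backward:

import paddle
import paddle.fluid as fluid

paddle.enable_static()

x = paddle.static.data(name='x', shape=[-1, 13], dtype='float32')
loss = paddle.mean(paddle.static.nn.fc(x=x, size=1))

optimizer = fluid.optimizer.Lamb(learning_rate=0.002)
params_grads = optimizer.backward(loss)
# Append clipping/regularization and the optimizer update ops for these pairs.
optimizer.apply_optimize(loss,
                         startup_program=paddle.static.default_startup_program(),
                         params_grads=params_grads)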
-
backward(loss, startup_program=None, parameter_list=None, no_grad_set=None, callbacks=None)¶
-
The first part of minimize; performs auto-diff to append backward operations for the current program.
- Parameters
-
loss (Variable) – loss variable to run optimizations.
startup_program (Program, optional) – api_fluid_Program for initializing parameters in parameter_list. The default value is None, at this time api_fluid_default_startup_program will be used.
parameter_list (Iterable, optional) – Iterable of Variable or Variable.name to update to minimize loss. The default value is None, at this time all parameters will be updated.
no_grad_set (set, optional) – Set of Variable or Variable.name that don't need to be updated. The default value is None.
callbacks (list, optional) – list of callable objects to run when appending the backward operator for one parameter. The default value is None.
- Returns
-
list of (param, grad) variable pairs; param is the Parameter, grad is the gradient value corresponding to the parameter.
- Return type
-
list
Examples
See examples in apply_gradients.
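In addition, here is a hedged static-graph sketch (the network and the '.b_0' bias-name suffix are illustrative assumptions) showing no_grad_set used to keep bias parameters out of the update:

import paddle
import paddle.fluid as fluid

paddle.enable_static()

x = paddle.static.data(name='x', shape=[-1, 13], dtype='float32')
loss = paddle.mean(paddle.static.nn.fc(x=x, size=1))

optimizer = fluid.optimizer.Lamb(learning_rate=0.002)
# Exclude parameters whose names end with '.b_0' (biases) from the update.
all_params = fluid.default_main_program().global_block().all_parameters()
no_grad = set(p.name for p in all_params if p.name.endswith('.b_0'))
params_grads = optimizer.backward(loss, no_grad_set=no_grad)
optimizer.apply_gradients(params_grads)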
-
clear_gradients()¶
-
Clear the gradients of all optimized parameters for the model.
If this is not done, new gradients will accumulate on top of the previous gradients.
- Returns
-
None
Examples
import paddle.fluid as fluid
import paddle
import numpy as np

with fluid.dygraph.guard():
    value = np.arange(26).reshape(2, 13).astype("float32")
    a = fluid.dygraph.to_variable(value)
    linear = paddle.nn.Linear(13, 5)
    # This can be any optimizer supported by dygraph.
    adam = fluid.optimizer.Adam(learning_rate=0.01,
                                parameter_list=linear.parameters())
    out = linear(a)
    out.backward()
    adam.minimize(out)
    adam.clear_gradients()
-
current_step_lr()¶
-
- Api_attr
-
imperative
Get the learning rate of the current step. When LearningRateDecay is not used, the return value is always the same; otherwise, the learning rate of the current step is returned.
- Returns
-
The learning rate of the current step.
- Return type
-
float
Examples
import paddle.fluid as fluid
import numpy as np
import paddle

# example1: LearningRateDecay is not used, return value is all the same
with fluid.dygraph.guard():
    emb = paddle.nn.Embedding(10, 10)
    adam = fluid.optimizer.Adam(0.001, parameter_list=emb.parameters())
    lr = adam.current_step_lr()
    print(lr)  # 0.001

# example2: PiecewiseDecay is used, return the step learning rate
with fluid.dygraph.guard():
    inp = np.random.uniform(-0.1, 0.1, [10, 10]).astype("float32")
    linear = paddle.nn.Linear(10, 10)
    inp = fluid.dygraph.to_variable(inp)
    out = linear(inp)
    loss = paddle.mean(out)

    bd = [2, 4, 6, 8]
    value = [0.2, 0.4, 0.6, 0.8, 1.0]
    adam = fluid.optimizer.Adam(fluid.dygraph.PiecewiseDecay(bd, value, 0),
                                parameter_list=linear.parameters())

    # first step: learning rate is 0.2
    np.allclose(adam.current_step_lr(), 0.2, rtol=1e-06, atol=0.0)  # True

    # learning rate for different steps
    ret = [0.2, 0.2, 0.4, 0.4, 0.6, 0.6, 0.8, 0.8, 1.0, 1.0, 1.0, 1.0]
    for i in range(12):
        adam.minimize(loss)
        lr = adam.current_step_lr()
        np.allclose(lr, ret[i], rtol=1e-06, atol=0.0)  # True
-
minimize(loss, startup_program=None, parameter_list=None, no_grad_set=None)¶
-
Add operations to minimize loss by updating parameter_list.
- Parameters
-
loss (Variable) – A Variable containing the value to minimize.
startup_program (Program, optional) – api_fluid_Program for initializing parameters in parameter_list. The default value is None, at this time api_fluid_default_startup_program will be used.
parameter_list (Iterable, optional) – Iterable of Variable or Variable.name to update to minimize loss. The default value is None, at this time all parameters will be updated.
no_grad_set (set, optional) – Set of Variable or Variable.name that don't need to be updated. The default value is None.
- Returns
-
tuple (optimize_ops, params_grads): a list of operators appended by minimize and a list of (param, grad) variable pairs, where param is a Parameter and grad is the gradient value corresponding to the parameter. The returned tuple can be passed to fetch_list in Executor.run() to indicate program pruning; if so, the program will be pruned by feed and fetch_list before running. See details in Executor.
- Return type
-
tuple
Examples
Please refer to the example of current Optimizer.
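For convenience, a minimal hedged sketch (the network, batch size, and hyper-parameters are illustrative assumptions) of calling minimize directly and then running the program:

import numpy as np
import paddle
import paddle.fluid as fluid

paddle.enable_static()

x = paddle.static.data(name='x', shape=[-1, 5], dtype='float32')
loss = paddle.mean(paddle.static.nn.fc(x=x, size=1))

optimizer = fluid.optimizer.Lamb(learning_rate=0.002)
optimize_ops, params_grads = optimizer.minimize(loss)

exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())
feed_x = np.random.random((4, 5)).astype('float32')
loss_val, = exe.run(feed={'x': feed_x}, fetch_list=[loss])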
-
set_dict(state_dict)¶
-
Load the optimizer state dict. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be changed.
- Parameters
-
state_dict (dict) – Dict containing all the Variables needed by the optimizer
- Returns
-
None
Examples
import paddle

paddle.disable_static()

emb = paddle.nn.Embedding(10, 10)

state_dict = emb.state_dict()
paddle.save(state_dict, "paddle_dy.pdparams")

scheduler = paddle.optimizer.lr.NoamDecay(
    d_model=0.01, warmup_steps=100, verbose=True)
adam = paddle.optimizer.Adam(
    learning_rate=scheduler,
    parameters=emb.parameters())
state_dict = adam.state_dict()
paddle.save(state_dict, "paddle_dy.pdopt")

para_state_dict = paddle.load("paddle_dy.pdparams")
opti_state_dict = paddle.load("paddle_dy.pdopt")

# Restore the optimizer state from the loaded state dict.
adam.set_state_dict(opti_state_dict)
-
set_lr(value)¶
-
- Api_attr
-
imperative
Set the value of the learning rate manually in the optimizer. If the optimizer uses LearningRateDecay, this API cannot be invoked, because it would lead to a conflict.
- Parameters
-
value (float|Variable) – The value of the learning rate.
- Returns
-
None
Examples
import paddle
import paddle.fluid as fluid

with fluid.dygraph.guard():
    linear = paddle.nn.Linear(10, 10)
    adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())

    # set learning rate manually by python float value
    lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
    for i in range(5):
        adam.set_lr(lr_list[i])
        lr = adam.current_step_lr()
        print("current lr is {}".format(lr))
    # Print:
    #    current lr is 0.2
    #    current lr is 0.3
    #    current lr is 0.4
    #    current lr is 0.5
    #    current lr is 0.6

    # set learning rate manually by framework Variable
    lr_var = paddle.static.create_global_var(
        shape=[1], value=0.7, dtype='float32')
    adam.set_lr(lr_var)
    lr = adam.current_step_lr()
    print("current lr is {}".format(lr))
    # Print:
    #    current lr is 0.7
-
set_state_dict(state_dict)¶
-
Load the optimizer state dict. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be changed.
- Parameters
-
state_dict (dict) – Dict containing all the Variables needed by the optimizer
- Returns
-
None
Examples
import paddle

paddle.disable_static()

emb = paddle.nn.Embedding(10, 10)

state_dict = emb.state_dict()
paddle.save(state_dict, "paddle_dy.pdparams")

scheduler = paddle.optimizer.lr.NoamDecay(
    d_model=0.01, warmup_steps=100, verbose=True)
adam = paddle.optimizer.Adam(
    learning_rate=scheduler,
    parameters=emb.parameters())
state_dict = adam.state_dict()
paddle.save(state_dict, "paddle_dy.pdopt")

para_state_dict = paddle.load("paddle_dy.pdparams")
opti_state_dict = paddle.load("paddle_dy.pdopt")

# Restore the optimizer state from the loaded state dict.
adam.set_state_dict(opti_state_dict)
-
state_dict()¶
-
Get state dict information from the optimizer. It contains all the Variables used by the optimizer. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be included in the state dict. If the optimizer has never been called (i.e., minimize has not been invoked), the state_dict is empty.
- Returns
-
dict containing all the Variables used by the optimizer
- Return type
-
state_dict (dict)
Examples
import paddle.fluid as fluid
import paddle

with fluid.dygraph.guard():
    emb = paddle.nn.Embedding(10, 10)
    adam = fluid.optimizer.Adam(0.001, parameter_list=emb.parameters())
    state_dict = adam.state_dict()