The Adam optimizer is based on the optimization described at the end of Section 2 of the Adam paper. It can dynamically adjust the learning rate of each parameter using the 1st-moment and 2nd-moment estimates of the gradient.

The update rule for parameter param_out with gradient grad is:

\begin{align}\begin{aligned}
t & = t + 1\\
moment\_1\_out & = \beta_1 * moment\_1 + (1 - \beta_1) * grad\\
moment\_2\_out & = \beta_2 * moment\_2 + (1 - \beta_2) * grad * grad\\
learning\_rate & = learning\_rate * \frac{\sqrt{1 - \beta_2^t}}{1 - \beta_1^t}\\
param\_out & = param - learning\_rate * \frac{moment\_1\_out}{\sqrt{moment\_2\_out} + \epsilon}
\end{aligned}\end{align}
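For intuition, one Adam step for a single parameter can be sketched in NumPy as follows (a simplified illustration of the formulas above, not the actual kernel; variable names mirror the equations):

import numpy as np

def adam_step(param, grad, moment_1, moment_2, t,
              learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8):
    # One Adam update, following the formulas above.
    t = t + 1
    moment_1_out = beta1 * moment_1 + (1 - beta1) * grad
    moment_2_out = beta2 * moment_2 + (1 - beta2) * grad * grad
    lr = learning_rate * np.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)
    param_out = param - lr * moment_1_out / (np.sqrt(moment_2_out) + epsilon)
    return param_out, moment_1_out, moment_2_out, t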

Related paper: Adam: A Method for Stochastic Optimization

Parameters
• learning_rate (float|LRScheduler, optional) – The learning rate used to update Parameter. It can be a float value or a LRScheduler. The default value is 0.001.

• beta1 (float|Tensor, optional) – The exponential decay rate for the 1st moment estimates. It should be a float number or a Tensor with shape [1] and data type as float32. The default value is 0.9.

• beta2 (float|Tensor, optional) – The exponential decay rate for the 2nd moment estimates. It should be a float number or a Tensor with shape [1] and data type as float32. The default value is 0.999.

• epsilon (float, optional) – A small float value for numerical stability. The default value is 1e-08.

• parameters (list|tuple, optional) – List/Tuple of Tensor to update to minimize loss. This parameter is required in dygraph mode. The default value is None in static graph mode, in which case all parameters will be updated.

• weight_decay (float|WeightDecayRegularizer, optional) – The strategy of regularization. It can be a float value used as the coefficient of L2 regularization, or api_fluid_regularizer_L1Decay / api_fluid_regularizer_L2Decay. If a parameter has already set a regularizer using api_fluid_ParamAttr, the regularization setting here in the optimizer is ignored for that parameter; otherwise, the setting here takes effect. Default None, meaning there is no regularization.

• grad_clip (GradientClipBase, optional) – Gradient clipping strategy; it is an instance of some derived class of GradientClipBase. There are three clipping strategies (api_fluid_clip_GradientClipByGlobalNorm, api_fluid_clip_GradientClipByNorm, api_fluid_clip_GradientClipByValue). Default None, meaning there is no gradient clipping.

• lazy_mode (bool, optional) – The official Adam algorithm has two moving-average accumulators, which are updated at every step. Every element of the two moving averages is updated in both dense mode and sparse mode. If the parameter is very large, the update may be very slow. Lazy mode only updates the elements that have gradients in the current mini-batch, so it is much faster. However, this mode has different semantics from the original Adam algorithm and may lead to different results; see the sketch after this parameter list. The default value is False.

• multi_precision (bool, optional) – Whether to use multi-precision during weight updating. The default value is False.

• name (str, optional) – Normally there is no need for user to set this property. For more information, please refer to Name. The default value is None.
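A minimal sketch of where lazy_mode is typically useful, assuming a sparse embedding layer (the layer and sizes here are illustrative, not part of the API):

import paddle

# Sparse embedding: only the rows looked up in a mini-batch receive gradients,
# so lazy_mode avoids touching the full moment accumulators every step.
emb = paddle.nn.Embedding(10000, 64, sparse=True)
adam = paddle.optimizer.Adam(learning_rate=0.001,
                             parameters=emb.parameters(),
                             lazy_mode=True)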

Examples

import paddle

linear = paddle.nn.Linear(10, 10)
inp = paddle.rand([10, 10], dtype="float32")
out = linear(inp)
loss = paddle.mean(out)
adam = paddle.optimizer.Adam(learning_rate=0.1, parameters=linear.parameters())
loss.backward()
adam.step()
adam.clear_grad()

# Adam with beta1/beta2 as Tensor and weight_decay as float
beta1 = paddle.to_tensor([0.9], dtype="float32")
beta2 = paddle.to_tensor([0.99], dtype="float32")

out = linear(inp)
loss = paddle.mean(out)
adam = paddle.optimizer.Adam(learning_rate=0.1,
                             parameters=linear.parameters(),
                             beta1=beta1,
                             beta2=beta2,
                             weight_decay=0.01)
loss.backward()
adam.step()
adam.clear_grad()

step ( )

Execute the optimizer and update parameters once.

Returns

None

Examples

import paddle

a = paddle.rand([2, 13], dtype="float32")
linear = paddle.nn.Linear(13, 5)
# This can be any optimizer supported by dygraph.
adam = paddle.optimizer.Adam(learning_rate=0.01,
                             parameters=linear.parameters())
out = linear(a)
out.backward()
adam.step()
adam.clear_grad()


append_regularization_ops ( parameters_and_grads, regularization=None )

Create and add backward regularization operators.

Creates and adds backward regularization operators in the BlockDesc. This adds the gradients of the regularizer function to the gradients of the parameters and returns these modified gradients. This is the same as implementing weight decay in optimizers for regularization.

Parameters
• parameters_and_grads – A list of (parameters, gradients) pairs that need to be regularized.

• regularization – A global regularizer. If a parameter has not set its own regularizer, this global regularizer will be applied to it.

Returns

A list of (parameter, gradient) pairs in which each gradient has had the regularization gradient added.

Return type

list[(Variable, Variable)]

Raises

Exception – Unknown regularization type
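For intuition, the effect of an L2-style regularizer on a single (param, grad) pair is roughly the following (a minimal NumPy sketch, not the actual operators; coeff is an illustrative coefficient and the exact scaling depends on the regularizer definition):

import numpy as np

def add_l2_regularization_grad(param, grad, coeff=0.01):
    # L2 regularization adds a term proportional to the parameter to its gradient,
    # which has the same effect as weight decay in the optimizer update.
    return grad + coeff * param

param = np.random.randn(3, 4).astype("float32")
grad = np.random.randn(3, 4).astype("float32")
new_grad = add_l2_regularization_grad(param, grad, coeff=0.01)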

clear_grad ( )

Clear the gradients of all optimized parameters for model.

Returns

None

Examples

import paddle
import numpy as np

value = np.arange(26).reshape(2, 13).astype("float32")
a = paddle.to_tensor(value)
linear = paddle.nn.Linear(13, 5)
# This can be any optimizer supported by dygraph.
adam = paddle.optimizer.Adam(learning_rate=0.02,
                             parameters=linear.parameters())
out = linear(a)
out.backward()
adam.step()
adam.clear_grad()

get_lr ( )

Get the current learning rate of the optimizer. If an LRScheduler is not used, the return value is the same for every call. If an LRScheduler is used, the return value is the currently scheduled learning rate.

Returns

The current learning rate of optimizer.

Return type

float

Examples

# train on default dynamic graph mode
import paddle
import numpy as np
emb = paddle.nn.Embedding(10, 3)

## example1: LRScheduler is not used, the return value is always the same
adam = paddle.optimizer.Adam(0.01, parameters=emb.parameters())
for batch in range(10):
    input = paddle.randint(low=0, high=5, shape=[5])
    out = emb(input)
    out.backward()
    print("Learning rate of step{}: {}".format(batch, adam.get_lr())) # 0.01
    adam.step()
    adam.clear_grad()

## example2: StepDecay is used, return the scheduled learning rate
scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.5, step_size=2, gamma=0.1)
adam = paddle.optimizer.Adam(scheduler, parameters=emb.parameters())
for batch in range(10):
    input = paddle.randint(low=0, high=5, shape=[5])
    out = emb(input)
    out.backward()
    print("Learning rate of step{}: {}".format(batch, adam.get_lr())) # 0.5->0.05...
    adam.step()
    adam.clear_grad()
    scheduler.step()

# train on static graph mode
paddle.enable_static()
main_prog = paddle.static.Program()
start_prog = paddle.static.Program()
with paddle.static.program_guard(main_prog, start_prog):
    x = paddle.static.data(name='x', shape=[None, 10])
    z = paddle.static.nn.fc(x, 100)
    loss = paddle.mean(z)
    scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.5, step_size=2, gamma=0.1)
    adam = paddle.optimizer.Adam(learning_rate=scheduler)
    adam.minimize(loss)

exe = paddle.static.Executor()
exe.run(start_prog)
for batch in range(10):
    print("Learning rate of step{}: {}".format(batch, adam.get_lr()))  # 0.5->0.05->0.005...
    out = exe.run(main_prog, feed={'x': np.random.randn(3, 10).astype('float32')})
    scheduler.step()

minimize ( loss, startup_program=None, parameters=None, no_grad_set=None )

Add operations to minimize loss by updating parameters.

Parameters
• loss (Tensor) – A Tensor containing the value to minimize.

• startup_program (Program, optional) – api_fluid_Program for initializing parameters in parameters. The default value is None, at this time api_fluid_default_startup_program will be used.

• parameters (list, optional) – List of Tensor or Tensor.name to update to minimize loss. The default value is None, at this time all parameters will be updated.

• no_grad_set (set, optional) – Set of Tensor or Tensor.name that don’t need to be updated. The default value is None.

Returns

tuple (optimize_ops, params_grads), A list of operators appended by minimize and a list of (param, grad) tensor pairs, param is Parameter, grad is the gradient value corresponding to the parameter. In static graph mode, the returned tuple can be passed to fetch_list in Executor.run() to indicate program pruning. If so, the program will be pruned by feed and fetch_list before run, see details in Executor.

Return type

tuple

Examples

import paddle

linear = paddle.nn.Linear(10, 10)
input = paddle.uniform(shape=[10, 10], min=-0.1, max=0.1)
out = linear(input)
loss = paddle.mean(out)

adam = paddle.optimizer.Adam(learning_rate=0.1,
                             parameters=linear.parameters(),
                             weight_decay=0.01)
loss.backward()
adam.minimize(loss)
adam.clear_grad()

set_lr ( value )
This API is only available in imperative (dygraph) mode.

Set the value of the learning rate manually in the optimizer. If the optimizer uses an LRScheduler, this API cannot be invoked, because it would lead to a conflict.

Parameters

value (float) – The value of the learning rate.

Returns

None

Examples

import paddle

linear = paddle.nn.Linear(10, 10)
adam = paddle.optimizer.Adam(0.1, parameters=linear.parameters())

# set learning rate manually by python float value
lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
for i in range(5):
    adam.set_lr(lr_list[i])
    lr = adam.get_lr()
    print("current lr is {}".format(lr))
# Print:
#    current lr is 0.2
#    current lr is 0.3
#    current lr is 0.4
#    current lr is 0.5
#    current lr is 0.6

set_state_dict ( state_dict )

Load the optimizer state dict. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If an LRScheduler has been used, global_step will be changed as well.

Parameters

state_dict (dict) – A dict containing all the Tensors needed by the optimizer.

Returns

None

Examples

import paddle

emb = paddle.nn.Embedding(10, 10)
layer_state_dict = emb.state_dict()
paddle.save(layer_state_dict, "emb.pdparams")

scheduler = paddle.optimizer.lr.NoamDecay(
    d_model=0.01, warmup_steps=100, verbose=True)
adam = paddle.optimizer.Adam(
    learning_rate=scheduler,
    parameters=emb.parameters())
opt_state_dict = adam.state_dict()
paddle.save(opt_state_dict, "adam.pdopt")

opti_state_dict = paddle.load("adam.pdopt")
adam.set_state_dict(opti_state_dict)

state_dict ( )

Get the state dict information from the optimizer. It contains all the Tensors used by the optimizer. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If an LRScheduler has been used, global_step will be included in the state dict. If the optimizer has never been called (via the minimize function), the state_dict is empty.

Parameters

None

Returns

A dict containing all the Tensors used by the optimizer.

Return type

state_dict(dict)

Examples

import paddle
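# A minimal usage sketch; the Embedding layer here is illustrative.
emb = paddle.nn.Embedding(10, 10)
adam = paddle.optimizer.Adam(learning_rate=0.001, parameters=emb.parameters())
state_dict = adam.state_dict()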