AdamaxOptimizer

class paddle.fluid.optimizer.AdamaxOptimizer ( learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, parameter_list=None, regularization=None, grad_clip=None, name=None ) [source]

The Adamax optimizer is implemented based on the Adamax optimization described in Section 7 of the Adam paper. The Adamax algorithm is a variant of the Adam algorithm based on the infinity norm, which makes the learning rate update rule more stable and simpler.

The update rule for parameter param_out with gradient grad:

\[
\begin{aligned}
t &= t + 1 \\
moment\_out &= \beta_1 * moment + (1 - \beta_1) * grad \\
inf\_norm\_out &= \max(\beta_2 * inf\_norm + \epsilon, |grad|) \\
learning\_rate &= \frac{learning\_rate}{1 - \beta_1^t} \\
param\_out &= param - learning\_rate * \frac{moment\_out}{inf\_norm\_out}
\end{aligned}
\]

Related paper: Adam: A Method for Stochastic Optimization

The original paper does not include an epsilon attribute; it is added here for numerical stability to prevent division-by-zero errors.
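For illustration only, the following NumPy sketch (not part of the fluid API; the function and variable names are hypothetical) performs one update step following the equations above:

import numpy as np

def adamax_step(param, grad, moment, inf_norm, t,
                learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08):
    # One Adamax update following the rule above (illustrative sketch only).
    t = t + 1
    moment = beta1 * moment + (1 - beta1) * grad
    inf_norm = np.maximum(beta2 * inf_norm + epsilon, np.abs(grad))
    lr = learning_rate / (1 - beta1 ** t)
    param = param - lr * moment / inf_norm
    return param, moment, inf_norm, t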

Parameters
  • learning_rate (float|Variable, optional) – The learning rate used to update Parameter. It can be a float value or a Variable with a float type. The default value is 0.001.

  • beta1 (float, optional) – The exponential decay rate for the 1st moment estimates. The default value is 0.9.

  • beta2 (float, optional) – The exponential decay rate for the 2nd moment estimates. The default value is 0.999.

  • epsilon (float, optional) – A small float value for numerical stability. The default value is 1e-08.

  • parameter_list (Iterable, optional) – Iterable of Variable names to update to minimize loss. This parameter is required in dygraph mode. The default value is None in static graph mode, in which case all parameters will be updated.

  • regularization (WeightDecayRegularizer, optional) – The strategy of regularization. There are two methods: api_fluid_regularizer_L1Decay, api_fluid_regularizer_L2Decay. If a parameter has already set a regularizer using api_fluid_ParamAttr, the regularization setting here in the optimizer will be ignored for that parameter. Otherwise, the regularization setting here in the optimizer will take effect. Default None, meaning there is no regularization. (A minimal sketch of this interaction is shown after this parameter list.)

  • grad_clip (GradientClipBase, optional) – Gradient clipping strategy; it is an instance of some derived class of GradientClipBase . There are three clipping strategies ( api_fluid_clip_GradientClipByGlobalNorm , api_fluid_clip_GradientClipByNorm , api_fluid_clip_GradientClipByValue ). Default None, meaning there is no gradient clipping.

  • name (str, optional) – Normally there is no need for user to set this property. For more information, please refer to Name. The default value is None.
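The following minimal sketch (illustrative only; the layer sizes and regularization coefficients are arbitrary assumptions) shows how a per-parameter regularizer set through ParamAttr takes precedence over the optimizer-level regularization setting:

import paddle.fluid as fluid

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    data = fluid.data(name='X', shape=[None, 1], dtype='float32')
    # This fc layer defines its own regularizer via ParamAttr, so the
    # optimizer-level L2Decay below is ignored for this weight.
    hidden = fluid.layers.fc(
        input=data, size=10,
        param_attr=fluid.ParamAttr(
            regularizer=fluid.regularizer.L1Decay(1e-4)))
    loss = fluid.layers.mean(hidden)
    optimizer = fluid.optimizer.AdamaxOptimizer(
        learning_rate=0.001,
        regularization=fluid.regularizer.L2Decay(1e-4))
    optimizer.minimize(loss)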

Notes:

Currently, AdamaxOptimizer doesn’t support sparse parameter optimization.

Examples

import paddle.fluid as fluid
import numpy

# First create the Executor.
place = fluid.CPUPlace() # fluid.CUDAPlace(0)
exe = fluid.Executor(place)

train_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(train_program, startup_program):
    data = fluid.data(name='X', shape=[None, 1], dtype='float32')
    hidden = fluid.layers.fc(input=data, size=10)
    loss = fluid.layers.mean(hidden)
    adam = fluid.optimizer.AdamaxOptimizer(learning_rate=0.2)
    adam.minimize(loss)

# Run the startup program once and only once.
exe.run(startup_program)

x = numpy.random.random(size=(10, 1)).astype('float32')
outs = exe.run(program=train_program,
               feed={'X': x},
               fetch_list=[loss.name])
append_regularization_ops ( parameters_and_grads, regularization=None )

append_regularization_ops

Create and add backward regularization Operators

Creates and adds backward regularization operators in the BlockDesc. This will add gradients of the regularizer function to the gradients of the parameters and return these modified gradients. This is the same as implementing weight decay in optimizers for regularization.

Parameters
  • parameters_and_grads – A list of (parameters, gradients) pairs that need to be regularized.

  • regularization – A global regularizer. If a parameter has not set its own regularizer, this global regularizer will be applied to it.

Returns

list of (parameters, gradients) pair with the regularized gradient

Return type

list[(Variable, Variable)]

Raises

Exception – Unknown regularization type
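append_regularization_ops is normally invoked internally by apply_gradients/minimize. The sketch below is an assumption-laden illustration (it assumes the method is available on the optimizer instance as documented above, and the network is a made-up example) showing a manual invocation on the (param, grad) pairs produced by backward:

import paddle.fluid as fluid

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    data = fluid.data(name='X', shape=[None, 1], dtype='float32')
    loss = fluid.layers.mean(fluid.layers.fc(input=data, size=10))

    optimizer = fluid.optimizer.AdamaxOptimizer(learning_rate=0.001)
    params_grads = optimizer.backward(loss, startup_program=startup_prog)
    # Add the global L2 regularizer's gradients to parameters that do not
    # define their own regularizer, then apply the optimization operators.
    params_grads = optimizer.append_regularization_ops(
        params_grads, regularization=fluid.regularizer.L2Decay(1e-4))
    optimizer.apply_gradients(params_grads)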

apply_gradients ( params_grads )

apply_gradients

Second part of minimize, appending optimization operators for given params_grads pairs.

Parameters

params_grads (list) – list of (param, grad) pair to do optimization.

Returns

A list of operators appended to the current program.

Return type

list

Examples

import paddle.fluid as fluid
loss = network()
optimizer = fluid.optimizer.SGD(learning_rate=0.1)
params_grads = optimizer.backward(loss)
# you may append operations for params_grads here
# ...
optimizer.apply_gradients(params_grads)
apply_optimize ( loss, startup_program, params_grads )

apply_optimize

Second part of minimize, appending optimization operators for the given params_grads pairs.

Parameters
  • loss (Variable) – loss variable to run optimizations.

  • startup_program (Program) – startup_program for initializing parameters in parameter_list.

  • params_grads (list) – list of (param, grad) pairs to do optimization.

Returns

A list of operators appended to the current program.

Return type

list
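A hedged sketch (the surrounding network-building code is a made-up example) of decomposing minimize into backward followed by apply_optimize:

import paddle.fluid as fluid

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    data = fluid.data(name='X', shape=[None, 1], dtype='float32')
    loss = fluid.layers.mean(fluid.layers.fc(input=data, size=10))

    optimizer = fluid.optimizer.AdamaxOptimizer(learning_rate=0.001)
    # First part of minimize: auto-diff to build the backward pass.
    params_grads = optimizer.backward(loss, startup_program=startup_prog)
    # Second part of minimize: append the optimization operators.
    optimize_ops = optimizer.apply_optimize(
        loss, startup_program=startup_prog, params_grads=params_grads)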

backward ( loss, startup_program=None, parameter_list=None, no_grad_set=None, callbacks=None )

backward

The first part of minimize, performing auto-differentiation to append backward operations for the current program.

Parameters
  • loss (Variable) – loss variable to run optimizations.

  • startup_program (Program, optional) – api_fluid_Program for initializing parameters in parameter_list. The default value is None, at this time api_fluid_default_startup_program will be used.

  • parameter_list (Iterable, optional) – Iterable of Variable or Variable.name to update to minimize loss. The default value is None, at this time all parameters will be updated.

  • no_grad_set (set, optional) – Set of Variable or Variable.name that don’t need to be updated. The default value is None.

  • callbacks (list, optional) – list of callable objects to run when appending backward operator for one parameter. The default value is None.

Returns

list of (param, grad) variable pairs; param is the Parameter, grad is the gradient value corresponding to the parameter.

Return type

list

Examples

See examples in apply_gradients.

clear_gradients ( )

clear_gradients

Clear the gradients of all optimized parameters for the model.

If gradients are not cleared, new gradients will accumulate on top of the previous ones.

Returns

None

Examples

import paddle.fluid as fluid
import numpy as np

with fluid.dygraph.guard():
    value = np.arange(26).reshape(2, 13).astype("float32")
    a = fluid.dygraph.to_variable(value)
    linear = fluid.Linear(13, 5, dtype="float32")
    # This can be any optimizer supported by dygraph.
    adam = fluid.optimizer.Adam(learning_rate = 0.01,
                                parameter_list = linear.parameters())
    out = linear(a)
    out.backward()
    adam.minimize(out)
    adam.clear_gradients()
current_step_lr ( )

current_step_lr

Api_attr: imperative

Get the learning rate of the current step. When LearningRateDecay is not used, the returned value is the same for every step; otherwise, the learning rate of the current step is returned.

Returns

The learning rate of the current step.

Return type

float

Examples

import paddle.fluid as fluid
import numpy as np

# example1: LearningRateDecay is not used, return value is all the same
with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding([10, 10])
    adam = fluid.optimizer.Adam(0.001, parameter_list = emb.parameters())
    lr = adam.current_step_lr()
    print(lr) # 0.001

# example2: PiecewiseDecay is used, return the step learning rate
with fluid.dygraph.guard():
    inp = np.random.uniform(-0.1, 0.1, [10, 10]).astype("float32")
    linear = fluid.dygraph.nn.Linear(10, 10)
    inp = fluid.dygraph.to_variable(inp)
    out = linear(inp)
    loss = fluid.layers.reduce_mean(out)

    bd = [2, 4, 6, 8]
    value = [0.2, 0.4, 0.6, 0.8, 1.0]
    adam = fluid.optimizer.Adam(fluid.dygraph.PiecewiseDecay(bd, value, 0),
                                parameter_list=linear.parameters())

    # first step: learning rate is 0.2
    np.allclose(adam.current_step_lr(), 0.2, rtol=1e-06, atol=0.0) # True

    # learning rate for different steps
    ret = [0.2, 0.2, 0.4, 0.4, 0.6, 0.6, 0.8, 0.8, 1.0, 1.0, 1.0, 1.0]
    for i in range(12):
        adam.minimize(loss)
        lr = adam.current_step_lr()
        np.allclose(lr, ret[i], rtol=1e-06, atol=0.0) # True
minimize ( loss, startup_program=None, parameter_list=None, no_grad_set=None )

minimize

Add operations to minimize loss by updating parameter_list.

Parameters
  • loss (Variable) – A Variable containing the value to minimize.

  • startup_program (Program, optional) – api_fluid_Program for initializing parameters in parameter_list. The default value is None, at this time api_fluid_default_startup_program will be used.

  • parameter_list (Iterable, optional) – Iterable of Variable or Variable.name to update to minimize loss. The default value is None, at this time all parameters will be updated.

  • no_grad_set (set, optional) – Set of Variable or Variable.name that don’t need to be updated. The default value is None.

Returns

tuple (optimize_ops, params_grads): a list of operators appended by minimize and a list of (param, grad) variable pairs; param is the Parameter, grad is the gradient value corresponding to the parameter. The returned tuple can be passed to fetch_list in Executor.run() to indicate program pruning; if so, the program will be pruned by feed and fetch_list before running. See details in Executor.

Return type

tuple

Examples

Please refer to the example of current Optimizer.

set_dict ( state_dict )

set_dict

Load the optimizer state dict. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be changed as well.

Parameters

state_dict (dict) – Dict containing all the Variables needed by the optimizer

Returns

None

Examples

import paddle
import paddle.fluid as fluid

paddle.disable_static()

emb = paddle.nn.Embedding(10, 10)

state_dict = emb.state_dict()
fluid.save_dygraph(state_dict, "paddle_dy")

scheduler = paddle.optimizer.lr.NoamDecay(
    d_model=0.01, warmup_steps=100, verbose=True)
adam = paddle.optimizer.Adam(
    learning_rate=scheduler,
    parameters=emb.parameters())
state_dict = adam.state_dict()
fluid.save_dygraph(state_dict, "paddle_dy")

para_state_dict, opti_state_dict = fluid.load_dygraph("paddle_dy")
set_lr ( value )

set_lr

Api_attr: imperative

Set the value of the learning rate manually in the optimizer. If the optimizer uses LearningRateDecay, this API cannot be invoked, because it would lead to a conflict.

Parameters

value (float|Variable) – the value of learning rate

Returns

None

Examples

import paddle.fluid as fluid

with fluid.dygraph.guard():
    linear = fluid.dygraph.nn.Linear(10, 10)

    adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())

    # set learning rate manually by python float value
    lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
    for i in range(5):
        adam.set_lr(lr_list[i])
        lr = adam.current_step_lr()
        print("current lr is {}".format(lr))
    # Print:
    #    current lr is 0.2
    #    current lr is 0.3
    #    current lr is 0.4
    #    current lr is 0.5
    #    current lr is 0.6


    # set learning rate manually by framework Variable
    lr_var = fluid.layers.create_global_var(
        shape=[1], value=0.7, dtype='float32')
    adam.set_lr(lr_var)
    lr = adam.current_step_lr()
    print("current lr is {}".format(lr))
    # Print:
    #    current lr is 0.7
set_state_dict ( state_dict )

set_state_dict

Load the optimizer state dict. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be changed as well.

Parameters

state_dict (dict) – Dict containing all the Variables needed by the optimizer

Returns

None

Examples

import paddle
import paddle.fluid as fluid

paddle.disable_static()

emb = paddle.nn.Embedding(10, 10)

state_dict = emb.state_dict()
fluid.save_dygraph(state_dict, "paddle_dy")

scheduler = paddle.optimizer.lr.NoamDecay(
    d_model=0.01, warmup_steps=100, verbose=True)
adam = paddle.optimizer.Adam(
    learning_rate=scheduler,
    parameters=emb.parameters())
state_dict = adam.state_dict()
fluid.save_dygraph(state_dict, "paddle_dy")

para_state_dict, opti_state_dict = fluid.load_dygraph("paddle_dy")
state_dict ( )

state_dict

Get state dict information from the optimizer. It contains all the variables used by the optimizer. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be included in the state dict. If the optimizer has never been called (i.e., the minimize function has never been invoked), the state_dict is empty.

Parameters

None

Returns

dict containing all the variables used by the optimizer

Return type

state_dict(dict)

Examples

import paddle.fluid as fluid

with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding([10, 10])

    adam = fluid.optimizer.Adam(0.001, parameter_list=emb.parameters())
    state_dict = adam.state_dict()