Momentum

class paddle.optimizer.Momentum(learning_rate=0.001, momentum=0.9, parameters=None, use_nesterov=False, weight_decay=None, grad_clip=None, multi_precision=False, rescale_grad=1.0, name=None) [source]

Simple Momentum optimizer with velocity state

This optimizer has a flag for Nesterov Momentum.

The update equations are as follows:

\[
\begin{aligned}
& velocity = mu * velocity + gradient \\
& \text{if } use\_nesterov: \\
& \quad param = param - (gradient + mu * velocity) * learning\_rate \\
& \text{else:} \\
& \quad param = param - learning\_rate * velocity
\end{aligned}
\]
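For reference, the update rule above can be written out as a minimal NumPy sketch. This mirrors the equations only; it is not Paddle's internal implementation.

import numpy as np

def momentum_update(param, grad, velocity, lr=0.001, mu=0.9, use_nesterov=False):
    """One step of the momentum update described above (NumPy sketch)."""
    velocity = mu * velocity + grad
    if use_nesterov:
        param = param - (grad + mu * velocity) * lr
    else:
        param = param - lr * velocity
    return param, velocity

# Toy usage with a single scalar parameter.
param, velocity = np.array(1.0), np.array(0.0)
param, velocity = momentum_update(param, np.array(0.5), velocity, lr=0.1)
print(param, velocity)  # 0.95 0.5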
Parameters
  • learning_rate (float|Tensor|LearningRateDecay, optional) – The learning rate used to update Parameter. It can be a float value, a Tensor with a float type or a LearningRateDecay. The default value is 0.001.

  • momentum (float) – Momentum factor. The default value is 0.9.

  • parameters (list, optional) – List of Tensor to update to minimize loss. This parameter is required in dygraph mode. The default value is None in static mode, in which case all parameters will be updated.

  • use_nesterov (bool, optional) – Enables Nesterov momentum. The default value is False.

  • weight_decay (float|WeightDecayRegularizer, optional) – The strategy of regularization. It can be a float value as the coefficient of L2 regularization, or a regularizer such as api_fluid_regularizer_L1Decay or api_fluid_regularizer_L2Decay. If a parameter has already set a regularizer using api_fluid_ParamAttr, the regularization setting here in the optimizer will be ignored for that parameter; otherwise, the setting here will take effect. Default None, meaning there is no regularization.

  • grad_clip (GradientClipBase, optional) – Gradient clipping strategy; it is an instance of some derived class of GradientClipBase. There are three clipping strategies (api_fluid_clip_GradientClipByGlobalNorm, api_fluid_clip_GradientClipByNorm, api_fluid_clip_GradientClipByValue). Default None, meaning there is no gradient clipping. A short usage sketch follows the parameter list.
  • multi_precision (bool, optional) – Whether to use multi-precision during weight updating. Default is False.

  • rescale_grad (float, optional) – Multiply the gradient with rescale_grad before updating. Often chosen to be 1.0/batch_size.

  • name (str, optional) – The default value is None. Normally there is no need for users to set this property. For more information, please refer to Name.
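As a usage sketch for weight_decay and grad_clip: the class names below, paddle.regularizer.L2Decay and paddle.nn.ClipGradByGlobalNorm, are the Paddle 2.x counterparts of the fluid aliases referenced above; treat the exact spellings as an assumption if you are on a different release.

import paddle

linear = paddle.nn.Linear(10, 10)

# L2 regularization via a regularizer object instead of a float coefficient,
# plus global-norm gradient clipping. Both arguments may also be left as None.
momentum = paddle.optimizer.Momentum(
    learning_rate=0.1,
    parameters=linear.parameters(),
    weight_decay=paddle.regularizer.L2Decay(0.01),            # or simply 0.01
    grad_clip=paddle.nn.ClipGradByGlobalNorm(clip_norm=1.0),
    use_nesterov=True)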

Examples

import paddle
import numpy as np

paddle.disable_static()
inp = np.random.uniform(-0.1, 0.1, [10, 10]).astype("float32")
linear = paddle.nn.Linear(10, 10)
inp = paddle.to_tensor(inp)
out = linear(inp)
loss = paddle.mean(out)
momentum = paddle.optimizer.Momentum(learning_rate=0.1,
                                     parameters=linear.parameters(),
                                     weight_decay=0.01)
loss.backward()
momentum.step()
momentum.clear_grad()
clear_grad ( )

Clear the gradients of all optimized parameters of the model.

If this is not called, new gradients will accumulate on top of the previous gradients.

Returns

None

Examples

import numpy as np
import paddle

value = np.arange(26).reshape(2, 13).astype("float32")
a = paddle.to_tensor(value)
linear = paddle.nn.Linear(13, 5)
# This can be any optimizer supported by dygraph.
adam = paddle.optimizer.Adam(learning_rate = 0.01,
                            parameters = linear.parameters())
out = linear(a)
out.backward()
adam.step()
adam.clear_grad()
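To make the accumulation behavior concrete, here is a small dygraph sketch; the doubling and the post-clear value are assumptions about dygraph gradient accumulation, not taken from this page.

import paddle

linear = paddle.nn.Linear(4, 1)
x = paddle.ones([2, 4])
opt = paddle.optimizer.Momentum(learning_rate=0.1,
                                parameters=linear.parameters())

# Two backward passes without clear_grad() in between:
# the second gradient is added on top of the first one.
linear(x).sum().backward()
linear(x).sum().backward()
print(linear.weight.grad)   # roughly twice the single-pass gradient

opt.step()
opt.clear_grad()            # reset accumulated gradients before the next iteration
print(linear.weight.grad)   # cleared (zeros or None, depending on the release)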
get_lr ( )

Get the current learning rate of the optimizer. If an 'LRScheduler' is not used, the returned value is the same for every call. If an 'LRScheduler' is used, the returned value is the current scheduled learning rate.

Returns

The current learning rate of optimizer.

Return type

float

Examples

# train on default dynamic graph mode
import paddle
import numpy as np
emb = paddle.nn.Embedding(10, 3)

## example 1: LRScheduler is not used, the same value is returned for every step
adam = paddle.optimizer.Adam(0.01, parameters = emb.parameters())
for batch in range(10):
    input = paddle.randint(low=0, high=5, shape=[5])
    out = emb(input)
    out.backward()
    print("Learning rate of step{}: {}".format(batch, adam.get_lr())) # 0.01
    adam.step()

## example 2: StepDecay is used, the scheduled learning rate is returned
scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.5, step_size=2, gamma=0.1)
adam = paddle.optimizer.Adam(scheduler, parameters = emb.parameters())
for batch in range(10):
    input = paddle.randint(low=0, high=5, shape=[5])
    out = emb(input)
    out.backward()
    print("Learning rate of step{}: {}".format(batch, adam.get_lr())) # 0.5->0.05...
    adam.step()
    scheduler.step()

# train on static graph mode
paddle.enable_static()
main_prog = paddle.static.Program()
start_prog = paddle.static.Program()
with paddle.static.program_guard(main_prog, start_prog):
    x = paddle.static.data(name='x', shape=[None, 10])
    z = paddle.static.nn.fc(x, 100)
    loss = paddle.mean(z)
    scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.5, step_size=2, gamma=0.1)
    adam = paddle.optimizer.Adam(learning_rate=scheduler)
    adam.minimize(loss)

exe = paddle.static.Executor()
exe.run(start_prog)
for batch in range(10):
    print("Learning rate of step{}: {}", adam.get_lr())     # 0.5->0.05->0.005...
    out = exe.run(main_prog, feed={'x': np.random.randn(3, 10).astype('float32')})
    scheduler.step()
minimize ( loss, startup_program=None, parameters=None, no_grad_set=None )

Add operations to minimize loss by updating parameters.

Parameters
  • loss (Tensor) – A Tensor containing the value to minimize.

  • startup_program (Program, optional) – api_fluid_Program for initializing the parameters in parameters. The default value is None, in which case api_fluid_default_startup_program will be used.

  • parameters (list, optional) – List of Tensor or Tensor.name to update to minimize loss. The default value is None, in which case all parameters will be updated.

  • no_grad_set (set, optional) – Set of Tensor or Tensor.name that don’t need to be updated. The default value is None.

Returns

tuple (optimize_ops, params_grads): a list of operators appended by minimize and a list of (param, grad) tensor pairs, where param is the Parameter and grad is the gradient value corresponding to that parameter. In static graph mode, the returned tuple can be passed to fetch_list in Executor.run() to indicate program pruning; if so, the program will be pruned by feed and fetch_list before running. See Executor for details.

Return type

tuple

Examples

import paddle

linear = paddle.nn.Linear(10, 10)
input = paddle.uniform(shape=[10, 10], min=-0.1, max=0.1)
out = linear(input)
loss = paddle.mean(out)

adam = paddle.optimizer.Adam(learning_rate=0.1,
        parameters=linear.parameters(),
        weight_decay=0.01)
loss.backward()
adam.minimize(loss)
adam.clear_grad()
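For the static-graph behavior described under Returns, a hedged sketch follows; fetching the gradient variables from params_grads is an assumption drawn from the Returns note, so adjust it to your release.

import numpy as np
import paddle

paddle.enable_static()
main_prog = paddle.static.Program()
start_prog = paddle.static.Program()
with paddle.static.program_guard(main_prog, start_prog):
    x = paddle.static.data(name='x', shape=[None, 10], dtype='float32')
    z = paddle.static.nn.fc(x, 100)
    loss = paddle.mean(z)
    momentum = paddle.optimizer.Momentum(learning_rate=0.1)
    # In static graph mode minimize() appends the backward and update ops
    # and returns them together with the (param, grad) pairs.
    optimize_ops, params_grads = momentum.minimize(loss)

exe = paddle.static.Executor()
exe.run(start_prog)
# Fetching the gradient variables from params_grads; per the Returns note,
# the returned pairs can also be passed to fetch_list to drive program pruning.
fetched = exe.run(main_prog,
                  feed={'x': np.random.randn(3, 10).astype('float32')},
                  fetch_list=[g for _, g in params_grads])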
set_lr ( value )
Api_attr: imperative

Set the value of the learning rate manually in the optimizer. If the optimizer uses an LRScheduler, this API cannot be invoked, because it would conflict with the scheduler.

Parameters

value (float) – the value of the learning rate

Returns

None

Examples

import paddle
linear = paddle.nn.Linear(10, 10)

adam = paddle.optimizer.Adam(0.1, parameters=linear.parameters())

# set learning rate manually by python float value
lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
for i in range(5):
    adam.set_lr(lr_list[i])
    lr = adam.get_lr()
    print("current lr is {}".format(lr))
# Print:
#    current lr is 0.2
#    current lr is 0.3
#    current lr is 0.4
#    current lr is 0.5
#    current lr is 0.6
set_state_dict ( state_dict )

Load the optimizer state dict. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If an LRScheduler has been used, global_step will be changed as well.

Parameters

state_dict (dict) – A dict containing all the Tensors needed by the optimizer

Returns

None

Examples

import paddle

emb = paddle.nn.Embedding(10, 10)

layer_state_dict = emb.state_dict()
paddle.save(layer_state_dict, "emb.pdparams")

scheduler = paddle.optimizer.lr.NoamDecay(
    d_model=0.01, warmup_steps=100, verbose=True)
adam = paddle.optimizer.Adam(
    learning_rate=scheduler,
    parameters=emb.parameters())
opt_state_dict = adam.state_dict()
paddle.save(opt_state_dict, "adam.pdopt")

opti_state_dict = paddle.load("adam.pdopt")
adam.set_state_dict(opti_state_dict)
state_dict ( )

Get state dict information from the optimizer. It contains all the tensors used by the optimizer. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If an LRScheduler has been used, global_step will be included in the state dict. If the optimizer has never been called (i.e., minimize has not run), the state dict is empty.

Parameters

None

Returns

A dict containing all the Tensors used by the optimizer

Return type

state_dict(dict)

Examples

import paddle
emb = paddle.nn.Embedding(10, 10)

adam = paddle.optimizer.Adam(0.001, parameters=emb.parameters())
state_dict = adam.state_dict()
step ( )

Execute the optimizer and update parameters once.

Returns

None

Examples

import paddle
import numpy as np

value = np.arange(26).reshape(2, 13).astype("float32")
a = paddle.to_tensor(value)
linear = paddle.nn.Linear(13, 5)
# This can be any optimizer supported by dygraph.
adam = paddle.optimizer.Adam(learning_rate = 0.01,
                            parameters = linear.parameters())
out = linear(a)
out.backward()
adam.step()
adam.clear_grad()