DGCMomentumOptimizer

API attribute: declarative programming (static graph)

class paddle.fluid.optimizer.DGCMomentumOptimizer(learning_rate, momentum, rampup_begin_step, rampup_step=1, sparsity=[0.999], parameter_list=None, use_nesterov=False, num_trainers=None, regularization=None, grad_clip=None, name=None)[source]

DGC (Deep Gradient Compression) Momentum Optimizer. Original paper is https://arxiv.org/abs/1712.01887

DGC reduces the communication bandwidth by sending only the important gradients (sparse update): only gradients larger than a threshold are transmitted.

To avoid losing information, DGC accumulates the rest of the gradients locally.

Eventually, these gradients become large enough to be transmitted.

Thus, DGC sends the large gradients immediately but eventually sends all of the gradients over time.

To ensure no loss of accuracy, DGC employs momentum correction and local gradient clipping on top of the gradient sparsification to maintain model performance.

DGC also uses momentum factor masking and warmup training to overcome the staleness problem caused by reduced communication.

This optimizer does two things:

  1. Compress the gradient: select the top-k most important values from the gradient tensor and use only those for allreduce, which reduces network bandwidth (see the sketch below).

  2. Apply momentum to optimize the loss.
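
The core idea of step 1 can be sketched in plain NumPy. This is illustrative only: the function name dgc_compress is made up, and the real optimizer implements this with dedicated DGC operators plus momentum correction and local gradient clipping.

import numpy as np

def dgc_compress(grad, residual, sparsity):
    # Accumulate previously untransmitted gradients with the new gradient.
    acc = residual + grad
    # Keep only the top (1 - sparsity) fraction of elements by magnitude.
    k = max(1, int(acc.size * (1.0 - sparsity)))
    threshold = np.partition(np.abs(acc).ravel(), -k)[-k]
    mask = np.abs(acc) >= threshold
    sparse_grad = np.where(mask, acc, 0.0)   # transmitted part (goes into allreduce)
    new_residual = np.where(mask, 0.0, acc)  # accumulated locally for later steps
    return sparse_grad, new_residual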

Parameters
  • learning_rate (float|Variable) – The learning rate used to update parameters. It can be a float value or a Variable with one float value as a data element.

  • momentum (float) – Momentum factor.

  • rampup_begin_step (int) – The beginning step from which gradient compression is implemented.

  • rampup_step (int) – Number of steps used for the sparsity warm-up period. Default is 1. For example, if sparsity is [0.75, 0.9375, 0.984375, 0.996, 0.999] and rampup_step is 100, the optimizer uses 0.75 for steps 0~19, 0.9375 for steps 20~39, and so on; once the end of the sparsity list is reached, it uses 0.999 from then on (a sketch of this schedule follows the parameter list).

  • sparsity (list[float]) – Sparsity levels used to select the most important gradient elements; the fraction of elements transmitted is (1 - current sparsity). Default is [0.999]. For example, if sparsity is [0.99, 0.999], first the top 1% and later the top 0.1% of elements will be transmitted.

  • parameter_list (list, optional) – List of Variable names to update to minimize loss. This parameter is required in dygraph mode. The default value is None in static mode; in that case all parameters will be updated.

  • use_nesterov (bool) – Enables Nesterov momentum. True means use Nesterov. Default is False.

  • regularization (WeightDecayRegularizer, optional) – The regularization strategy. There are two methods: L1Decay and L2Decay. If a parameter has already set a regularizer using ParamAttr, the regularization set here in the optimizer is ignored for that parameter; otherwise, the setting here takes effect. Default is None, meaning there is no regularization.

  • grad_clip (GradientClipByNorm, optional) – Gradient clipping strategy. DGCMomentumOptimizer only supports GradientClipByNorm; any other type raises a TypeError. Default is None, meaning there is no gradient clipping.

  • name (str, optional) – This parameter is used by developers to print debugging information. For details, please refer to Name. Default is None.
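
The interaction of rampup_begin_step, rampup_step, and sparsity can be illustrated with a small helper. This is a sketch of the documented schedule only; current_sparsity is a made-up name, not part of the API.

def current_sparsity(step, rampup_begin_step, rampup_step, sparsity):
    # Before compression begins, nothing is sparsified.
    if step < rampup_begin_step:
        return 0.0
    # Each sparsity level is held for rampup_step / len(sparsity) steps.
    steps_per_level = max(1, rampup_step // len(sparsity))
    idx = (step - rampup_begin_step) // steps_per_level
    # After the warm-up, stay at the final (highest) sparsity.
    return sparsity[min(idx, len(sparsity) - 1)]

# With sparsity=[0.75, 0.9375, 0.984375, 0.996, 0.999] and rampup_step=100,
# warm-up steps 0~19 use 0.75, steps 20~39 use 0.9375, and so on.
print(current_sparsity(10, 0, 100, [0.75, 0.9375, 0.984375, 0.996, 0.999]))  # 0.75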

Examples

import paddle.fluid as fluid
optimizer = fluid.optimizer.DGCMomentumOptimizer(
            learning_rate=0.0001,
            momentum=0.9,
            rampup_step=1000,
            rampup_begin_step=1252,
            sparsity=[0.999, 0.999])
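
Attaching the optimizer to a loss follows the usual static-graph pattern. The sketch below is illustrative only: the toy network and shapes are made up, and DGC is intended for distributed multi-GPU training, so the distributed launch configuration is omitted.

import paddle.fluid as fluid

# Toy regression network (placeholder shapes for illustration).
x = fluid.data(name='x', shape=[None, 13], dtype='float32')
y = fluid.data(name='y', shape=[None, 1], dtype='float32')
pred = fluid.layers.fc(input=x, size=1)
loss = fluid.layers.mean(fluid.layers.square_error_cost(input=pred, label=y))

optimizer = fluid.optimizer.DGCMomentumOptimizer(
    learning_rate=0.001,
    momentum=0.9,
    rampup_begin_step=0)
optimizer.minimize(loss)
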
clear_gradients()

Clear the gradients of all optimized parameters of the model.

Returns

None

Examples

import paddle.fluid as fluid
import numpy as np

with fluid.dygraph.guard():
    value = np.arange(26).reshape(2, 13).astype("float32")
    a = fluid.dygraph.to_variable(value)
    linear = fluid.Linear(13, 5, dtype="float32")
    # This can be any optimizer supported by dygraph.
    adam = fluid.optimizer.Adam(learning_rate = 0.01,
                                parameter_list = linear.parameters())
    out = linear(a)
    out.backward()
    adam.minimize(out)
    adam.clear_gradients()
current_step_lr()

Note

This API is ONLY available in Dygraph mode

Get the learning rate of the current step. When LearningRateDecay is not used, the return value is the same at every step; otherwise, the learning rate of the current step is returned.

Returns

The learning rate of the current step.

Return type

float

Examples

import paddle.fluid as fluid
import numpy as np

# example1: LearningRateDecay is not used, return value is all the same
with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding([10, 10])
    adam = fluid.optimizer.Adam(0.001, parameter_list = emb.parameters())
    lr = adam.current_step_lr()
    print(lr) # 0.001

# example2: PiecewiseDecay is used, return the step learning rate
with fluid.dygraph.guard():
    inp = np.random.uniform(-0.1, 0.1, [10, 10]).astype("float32")
    linear = fluid.dygraph.nn.Linear(10, 10)
    inp = fluid.dygraph.to_variable(inp)
    out = linear(inp)
    loss = fluid.layers.reduce_mean(out)

    bd = [2, 4, 6, 8]
    value = [0.2, 0.4, 0.6, 0.8, 1.0]
    adam = fluid.optimizer.Adam(fluid.dygraph.PiecewiseDecay(bd, value, 0),
                                parameter_list=linear.parameters())

    # first step: learning rate is 0.2
    np.allclose(adam.current_step_lr(), 0.2, rtol=1e-06, atol=0.0) # True

    # learning rate for different steps
    ret = [0.2, 0.2, 0.4, 0.4, 0.6, 0.6, 0.8, 0.8, 1.0, 1.0, 1.0, 1.0]
    for i in range(12):
        adam.minimize(loss)
        lr = adam.current_step_lr()
        np.allclose(lr, ret[i], rtol=1e-06, atol=0.0) # True
minimize(loss, startup_program=None, parameter_list=None, no_grad_set=None)

Add operations to minimize loss by updating parameter_list.

Parameters
  • loss (Variable) – A Variable containing the value to minimize.

  • startup_program (Program, optional) – Program for initializing parameters in parameter_list. The default value is None, at this time default_startup_program will be used.

  • parameter_list (list, optional) – List of Variable or Variable.name to update to minimize loss. The default value is None, at this time all parameters will be updated.

  • no_grad_set (set, optional) – Set of Variable or Variable.name that don’t need to be updated. The default value is None.

Returns

tuple (optimize_ops, params_grads): a list of operators appended by minimize and a list of (param, grad) Variable pairs, where param is a Parameter and grad is the gradient value corresponding to that parameter. The returned tuple can be passed to fetch_list in Executor.run() to indicate program pruning; if so, the program will be pruned by feed and fetch_list before running. See Executor for details.

Return type

tuple

Examples

Please refer to the example of the current Optimizer above.
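
A minimal sketch of consuming the return value (assuming a static-graph program with a scalar loss already built, as in the constructor sketch above):

optimize_ops, params_grads = optimizer.minimize(loss)
# Each entry of params_grads pairs a Parameter with its gradient Variable.
for param, grad in params_grads:
    print(param.name, grad.name)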

set_dict(state_dict)

Load the optimizer state dict. For the Adam optimizer, the state dict contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be changed as well.

Parameters

state_dict (dict) – Dict containing all the Variables needed by the optimizer

Returns

None

Examples

import paddle.fluid as fluid

with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding([10, 10])

    state_dict = emb.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    adam = fluid.optimizer.Adam(learning_rate=fluid.layers.noam_decay(100, 10000),
                                parameter_list=emb.parameters())
    state_dict = adam.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    para_state_dict, opti_state_dict = fluid.load_dygraph("paddle_dy")

    adam.set_dict(opti_state_dict)
state_dict()

Get state dict information from the optimizer. It contains all the Variables used by the optimizer; for the Adam optimizer, the state dict contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be included in the state dict. If the optimizer has never been called (i.e. the minimize function has not been run), the state dict is empty.

Returns

Dict containing all the Variables used by the optimizer.

Return type

dict

Examples

import paddle.fluid as fluid

with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding([10, 10])

    adam = fluid.optimizer.Adam(0.001, parameter_list=emb.parameters())
    state_dict = adam.state_dict()