ModelAverage

class paddle.fluid.optimizer.ModelAverage(average_window_rate, min_average_window=10000, max_average_window=10000, regularization=None, name=None)[source]

The ModelAverage optimizer accumulates specific continuous historical Parameters during training. The range of accumulated history is controlled by the average_window_rate argument. The averaged Parameters are used for prediction, which usually improves prediction accuracy.

The average of the Parameters within the sliding window is accumulated and saved in a temporary variable. It can be applied to the current model's Parameters by calling the apply() method, and the current model's Parameters can be restored by calling the restore() method.

The window size for calculating the average is determined by average_window_rate, min_average_window, max_average_window and the current number of Parameter updates (num_updates).

When the accumulation count (num_accumulates) exceeds the window threshold (average_window), the accumulated Parameter temporary variable is reset to 0.0. The following pseudocode illustrates the role of these arguments:

if num_accumulates >= min_average_window and num_accumulates >= min(max_average_window, num_updates * average_window_rate):
    num_accumulates = 0

In the conditional statement above, num_accumulates is the current accumulation count, which can be thought of as the length of the accumulation window. The window length must be at least min_average_window, and cannot exceed either max_average_window or num_updates * average_window_rate, where num_updates is the current number of Parameter updates and average_window_rate is a coefficient used to compute the window length.
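
As a minimal worked sketch of this rule (reusing the argument values from the example below; the helper should_reset is illustrative only and is not part of the Paddle API):

average_window_rate = 0.15
min_average_window = 10000
max_average_window = 12500

def should_reset(num_accumulates, num_updates):
    # The window cap grows with the number of updates until it saturates at max_average_window.
    window = min(max_average_window, num_updates * average_window_rate)
    return num_accumulates >= min_average_window and num_accumulates >= window

print(should_reset(num_accumulates=10000, num_updates=20000))   # window = 3000.0 -> True
print(should_reset(num_accumulates=11000, num_updates=100000))  # window = 12500  -> False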

Parameters
  • average_window_rate (float) – The rate used to compute the window length relative to the number of Parameter updates.

  • min_average_window (int, optional) – The minimum size of the average window length. The default value is 10000.

  • max_average_window (int, optional) – The maximum size of the average window length. The default value is 10000.

  • regularization (WeightDecayRegularizer, optional) – A Regularizer, such as L2DecayRegularizer. The default value is None.

  • name (str, optional) – Normally there is no need for user to set this property. For more information, please refer to Name. The default value is None.

Examples

import paddle.fluid as fluid
import numpy

# First create the Executor.
place = fluid.CPUPlace()  # fluid.CUDAPlace(0)
exe = fluid.Executor(place)

train_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(train_program, startup_program):
    # build net
    data = fluid.data(name='X', shape=[None, 1], dtype='float32')
    hidden = fluid.layers.fc(input=data, size=10)
    loss = fluid.layers.mean(hidden)
    optimizer = fluid.optimizer.Momentum(learning_rate=0.2, momentum=0.1)
    optimizer.minimize(loss)

    # build ModelAverage optimizer
    model_average = fluid.optimizer.ModelAverage(0.15,
                                                 min_average_window=10000,
                                                 max_average_window=12500)

    exe.run(startup_program)
    for i in range(12500):
        x = numpy.random.random(size=(10, 1)).astype('float32')
        outs = exe.run(program=train_program,
                       feed={'X': x},
                       fetch_list=[loss.name])

    # apply ModelAverage
    with model_average.apply(exe):
        x = numpy.random.random(size=(10, 1)).astype('float32')
        exe.run(program=train_program,
                feed={'X': x},
                fetch_list=[loss.name])
apply(executor, need_restore=True)

Apply the accumulated average of the Parameters to the current model's parameters.

Parameters
  • executor (fluid.Executor) – The current network executor.

  • need_restore (bool) – Whether to restore the Parameters after the apply context exits. If set to True, the network's Parameters are restored to their original values when the context manager exits; if set to False, they are not restored. The default value is True.

Examples

import paddle.fluid as fluid
import numpy

# First create the Executor.
place = fluid.CPUPlace()  # fluid.CUDAPlace(0)
exe = fluid.Executor(place)

train_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(train_program, startup_program):
    # build net
    data = fluid.data(name='X', shape=[None, 1], dtype='float32')
    hidden = fluid.layers.fc(input=data, size=10)
    loss = fluid.layers.mean(hidden)
    optimizer = fluid.optimizer.Momentum(learning_rate=0.2, momentum=0.1)
    optimizer.minimize(loss)

    # build ModelAverage optimizer
    model_average = fluid.optimizer.ModelAverage(0.15,
                                                 min_average_window=10000,
                                                 max_average_window=12500)

    exe.run(startup_program)
    for i in range(12500):
        x = numpy.random.random(size=(10, 1)).astype('float32')
        outs = exe.run(program=train_program,
                       feed={'X': x},
                       fetch_list=[loss.name])

    # apply ModelAverage
    with model_average.apply(exe):
        x = numpy.random.random(size=(10, 1)).astype('float32')
        exe.run(program=train_program,
                feed={'X': x},
                fetch_list=[loss.name])
restore(executor)

Restore the Parameter values of the current model.

Parameters

executor (fluid.Executor) – The current network executor.

Examples

import paddle.fluid as fluid
import numpy

# First create the Executor.
place = fluid.CPUPlace()  # fluid.CUDAPlace(0)
exe = fluid.Executor(place)

train_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(train_program, startup_program):
    # build net
    data = fluid.data(name='X', shape=[None, 1], dtype='float32')
    hidden = fluid.layers.fc(input=data, size=10)
    loss = fluid.layers.mean(hidden)
    optimizer = fluid.optimizer.Momentum(learning_rate=0.2, momentum=0.1)
    optimizer.minimize(loss)

    # build ModelAverage optimizer
    model_average = fluid.optimizer.ModelAverage(0.15,
                                                 min_average_window=10000,
                                                 max_average_window=12500)

    exe.run(startup_program)
    for i in range(12500):
        x = numpy.random.random(size=(10, 1)).astype('float32')
        outs = exe.run(program=train_program,
                       feed={'X': x},
                       fetch_list=[loss.name])

    # apply ModelAverage
    with model_average.apply(exe, False):
        x = numpy.random.random(size=(10, 1)).astype('float32')
        exe.run(program=train_program,
                feed={'X': x},
                fetch_list=[loss.name])

    # restore Parameters
    model_average.restore(exe)
minimize(loss, startup_program=None, parameter_list=None, no_grad_set=None, grad_clip=None)

Add operations to minimize loss by updating parameter_list.

Parameters
  • loss (Variable) – A Variable containing the value to minimize.

  • startup_program (Program, optional) – Program for initializing parameters in parameter_list. The default value is None, at this time default_startup_program will be used.

  • parameter_list (list, optional) – List of Variable names to update to minimize loss. The default value is None, at this time all parameters will be updated.

  • no_grad_set (set, optional) – Set of Variable objects that don’t need to be updated. The default value is None.

  • grad_clip (GradClipBase, optional) – Gradient clipping strategy. Static graph mode does not need this argument; currently it only supports gradient clipping in dygraph mode. In the future, this argument may be adjusted. The default value is None.

Returns

A tuple (optimize_ops, params_grads): optimize_ops is the list of operators appended by minimize, and params_grads is a list of (param, grad) variable pairs, where param is a Parameter and grad is the gradient value corresponding to that Parameter.

Return type

tuple

Examples

Please refer to the example of current Optimizer.
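
For illustration, a minimal sketch of unpacking the return value in static-graph mode, reusing the Momentum setup from the examples above (the printed parameter and gradient names depend on the network and are not guaranteed):

import paddle.fluid as fluid

train_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(train_program, startup_program):
    data = fluid.data(name='X', shape=[None, 1], dtype='float32')
    hidden = fluid.layers.fc(input=data, size=10)
    loss = fluid.layers.mean(hidden)

    optimizer = fluid.optimizer.Momentum(learning_rate=0.2, momentum=0.1)
    optimize_ops, params_grads = optimizer.minimize(loss)

    # optimize_ops: operators appended to the program by minimize().
    # params_grads: list of (Parameter, gradient Variable) pairs.
    for param, grad in params_grads:
        print(param.name, grad.name)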

set_dict(state_dict)

Load the optimizer state dict. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be changed accordingly.

Parameters

state_dict (dict) – Dict containing all the Variables needed by the optimizer.

Returns

None

Examples

import paddle.fluid as fluid

with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding([10, 10])

    # Save the layer parameters (written as paddle_dy.pdparams).
    state_dict = emb.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    # Save the optimizer state under the same prefix (written as paddle_dy.pdopt).
    adam = fluid.optimizer.Adam(learning_rate=fluid.layers.noam_decay(100, 10000))
    state_dict = adam.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    # Load both dicts back and restore the optimizer state.
    para_state_dict, opti_state_dict = fluid.load_dygraph("paddle_dy")

    adam.set_dict(opti_state_dict)
state_dict()

Get state dict information from the optimizer. It contains all the Variables used by the optimizer. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be included in the state dict. If the optimizer's minimize function has never been called, the state dict is empty.

Parameters

None

Returns

A dict containing all the Variables used by the optimizer.

Return type

dict

Examples

import paddle.fluid as fluid
adam = fluid.optimizer.Adam(0.001)
state_dict = adam.state_dict()