ModelAverage

- api_attr: declarative programming (static graph)

- class paddle.fluid.optimizer.ModelAverage(average_window_rate, min_average_window=10000, max_average_window=10000, regularization=None, name=None)[source]

The ModelAverage optimizer accumulates specific continuous historical parameters during training. The accumulated historical range can be controlled by the passed average_window_rate argument. The averaged Parameter values are used for prediction, which usually improves prediction accuracy.

The average of the Parameter values is accumulated in a sliding window; the result is saved in a temporary variable, can be applied to the current model's Parameter values by calling the apply() method, and the current model's Parameter values can be restored by calling the restore() method.

The window size used to compute the average is determined by average_window_rate, min_average_window, max_average_window and the current number of Parameter updates (num_updates). When the accumulation count (num_accumulates) is greater than the window threshold (average_window), the accumulated Parameter temporary variable is set to 0.0. The following condition helps to understand the role of these arguments:

if num_accumulates >= min_average_window and num_accumulates >= min(max_average_window, num_updates * average_window_rate):
    num_accumulates = 0
In the conditional statement above, num_accumulates indicates the current accumulation count, which can be thought of as the length of the accumulation window. The window must be at least as long as the min_average_window argument, and cannot exceed the length specified by the max_average_window argument or num_updates * average_window_rate, where num_updates indicates the current number of Parameter updates and average_window_rate is a coefficient that determines the window length. A short numeric sketch of this rule is given after the parameter list below.
- Parameters
average_window_rate (float) – The ratio used to compute the window length relative to the number of Parameter updates.
min_average_window (int, optional) – The minimum size of the average window length. The default value is 10000.
max_average_window (int, optional) – The maximum size of the average window length. The default value is 10000.
regularization (WeightDecayRegularizer, optional) – The regularization strategy. There are two methods: L1Decay and L2Decay. If a parameter has already set a regularizer using ParamAttr, the regularization setting here in the optimizer is ignored for that parameter; otherwise, the setting here takes effect. Default is None, meaning there is no regularization. A brief sketch of this precedence is given after the class example below.
name (str, optional) – Normally there is no need for the user to set this property. For more information, please refer to Name. The default value is None.
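The window rule above can be checked with a short numeric sketch. This is plain Python only; num_updates and num_accumulates are internal optimizer counters, and the values below are illustrative, not taken from the API.

# Minimal numeric sketch of the window rule described above (no Paddle calls).
average_window_rate = 0.15
min_average_window = 10000
max_average_window = 12500

def window_full(num_accumulates, num_updates):
    """True when the accumulated window would be averaged and reset."""
    return (num_accumulates >= min_average_window and
            num_accumulates >= min(max_average_window,
                                   num_updates * average_window_rate))

print(window_full(5000, 40000))    # False: below min_average_window
print(window_full(11000, 40000))   # True: 11000 >= min(12500, 6000)
print(window_full(11000, 100000))  # False: 11000 < min(12500, 15000) = 12500
print(window_full(13000, 100000))  # True: window capped by max_average_window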
Examples
import paddle.fluid as fluid
import numpy

# First create the Executor.
place = fluid.CPUPlace()  # fluid.CUDAPlace(0)
exe = fluid.Executor(place)

train_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(train_program, startup_program):
    # build net
    data = fluid.data(name='X', shape=[None, 1], dtype='float32')
    hidden = fluid.layers.fc(input=data, size=10)
    loss = fluid.layers.mean(hidden)
    optimizer = fluid.optimizer.Momentum(learning_rate=0.2, momentum=0.1)
    optimizer.minimize(loss)

    # build ModelAverage optimizer
    model_average = fluid.optimizer.ModelAverage(0.15,
                                                 min_average_window=10000,
                                                 max_average_window=12500)

    exe.run(startup_program)
    for i in range(12500):
        x = numpy.random.random(size=(10, 1)).astype('float32')
        outs = exe.run(program=train_program,
                       feed={'X': x},
                       fetch_list=[loss.name])

    # apply ModelAverage
    with model_average.apply(exe):
        x = numpy.random.random(size=(10, 1)).astype('float32')
        exe.run(program=train_program,
                feed={'X': x},
                fetch_list=[loss.name])
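As noted in the regularization parameter above, a regularizer attached to a parameter through ParamAttr takes precedence over the optimizer-level regularization argument. The static-graph snippet below is an illustrative sketch of that precedence; the parameter name 'fc_w' and the use of Momentum (rather than ModelAverage itself) are assumptions made for brevity, not part of the official example above.

# Hedged sketch: per-parameter regularizer vs. optimizer-level regularization.
import paddle.fluid as fluid

main_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(main_program, startup_program):
    data = fluid.data(name='X', shape=[None, 1], dtype='float32')

    # This weight carries its own L1 regularizer via ParamAttr, so the
    # optimizer-level setting below is ignored for it.
    w_attr = fluid.ParamAttr(name='fc_w',
                             regularizer=fluid.regularizer.L1Decay(1e-4))
    hidden = fluid.layers.fc(input=data, size=10, param_attr=w_attr)
    loss = fluid.layers.mean(hidden)

    # Parameters without their own regularizer (e.g. the fc bias) pick up
    # this optimizer-level L2 regularization instead.
    optimizer = fluid.optimizer.Momentum(
        learning_rate=0.2, momentum=0.1,
        regularization=fluid.regularizer.L2Decay(1e-4))
    optimizer.minimize(loss)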
- apply(executor, need_restore=True)
Apply the average of the accumulated Parameter values to the parameters of the current model.
- Parameters
executor (fluid.Executor) – The current network executor.
need_restore (bool) – Restore flag; if set to True, the network parameters are restored to their previous (un-averaged) values when the apply() context exits, and if set to False, they are not restored. The default value is True.
Examples
import paddle.fluid as fluid
import numpy

# First create the Executor.
place = fluid.CPUPlace()  # fluid.CUDAPlace(0)
exe = fluid.Executor(place)

train_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(train_program, startup_program):
    # build net
    data = fluid.data(name='X', shape=[None, 1], dtype='float32')
    hidden = fluid.layers.fc(input=data, size=10)
    loss = fluid.layers.mean(hidden)
    optimizer = fluid.optimizer.Momentum(learning_rate=0.2, momentum=0.1)
    optimizer.minimize(loss)

    # build ModelAverage optimizer
    model_average = fluid.optimizer.ModelAverage(0.15,
                                                 min_average_window=10000,
                                                 max_average_window=12500)

    exe.run(startup_program)
    for i in range(12500):
        x = numpy.random.random(size=(10, 1)).astype('float32')
        outs = exe.run(program=train_program,
                       feed={'X': x},
                       fetch_list=[loss.name])

    # apply ModelAverage
    with model_average.apply(exe):
        x = numpy.random.random(size=(10, 1)).astype('float32')
        exe.run(program=train_program,
                feed={'X': x},
                fetch_list=[loss.name])
- restore(executor)
Restore the Parameter values of the current model.
- Parameters
executor (fluid.Executor) – The current network executor.
Examples
import paddle.fluid as fluid
import numpy

# First create the Executor.
place = fluid.CPUPlace()  # fluid.CUDAPlace(0)
exe = fluid.Executor(place)

train_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(train_program, startup_program):
    # build net
    data = fluid.data(name='X', shape=[None, 1], dtype='float32')
    hidden = fluid.layers.fc(input=data, size=10)
    loss = fluid.layers.mean(hidden)
    optimizer = fluid.optimizer.Momentum(learning_rate=0.2, momentum=0.1)
    optimizer.minimize(loss)

    # build ModelAverage optimizer
    model_average = fluid.optimizer.ModelAverage(0.15,
                                                 min_average_window=10000,
                                                 max_average_window=12500)

    exe.run(startup_program)
    for i in range(12500):
        x = numpy.random.random(size=(10, 1)).astype('float32')
        outs = exe.run(program=train_program,
                       feed={'X': x},
                       fetch_list=[loss.name])

    # apply ModelAverage
    with model_average.apply(exe, False):
        x = numpy.random.random(size=(10, 1)).astype('float32')
        exe.run(program=train_program,
                feed={'X': x},
                fetch_list=[loss.name])

    # restore Parameters
    model_average.restore(exe)
- clear_gradients()
Clear the gradients of all optimized parameters for the model.
- Returns
None
Examples
import paddle.fluid as fluid
import numpy as np

with fluid.dygraph.guard():
    value = np.arange(26).reshape(2, 13).astype("float32")
    a = fluid.dygraph.to_variable(value)
    linear = fluid.Linear(13, 5, dtype="float32")
    # This can be any optimizer supported by dygraph.
    adam = fluid.optimizer.Adam(learning_rate=0.01,
                                parameter_list=linear.parameters())
    out = linear(a)
    out.backward()
    adam.minimize(out)
    adam.clear_gradients()
- current_step_lr()
Note: This API is ONLY available in Dygraph mode.
Get the learning rate of the current step. When LearningRateDecay is not used, the return value is always the same; otherwise the learning rate of the current step is returned.
- Returns
The learning rate of the current step.
- Return type
float
Examples
import paddle.fluid as fluid
import numpy as np

# example1: LearningRateDecay is not used, return value is all the same
with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding([10, 10])
    adam = fluid.optimizer.Adam(0.001, parameter_list=emb.parameters())
    lr = adam.current_step_lr()
    print(lr)  # 0.001

# example2: PiecewiseDecay is used, return the step learning rate
with fluid.dygraph.guard():
    inp = np.random.uniform(-0.1, 0.1, [10, 10]).astype("float32")
    linear = fluid.dygraph.nn.Linear(10, 10)
    inp = fluid.dygraph.to_variable(inp)
    out = linear(inp)
    loss = fluid.layers.reduce_mean(out)

    bd = [2, 4, 6, 8]
    value = [0.2, 0.4, 0.6, 0.8, 1.0]
    adam = fluid.optimizer.Adam(fluid.dygraph.PiecewiseDecay(bd, value, 0),
                                parameter_list=linear.parameters())

    # first step: learning rate is 0.2
    np.allclose(adam.current_step_lr(), 0.2, rtol=1e-06, atol=0.0)  # True

    # learning rate for different steps
    ret = [0.2, 0.2, 0.4, 0.4, 0.6, 0.6, 0.8, 0.8, 1.0, 1.0, 1.0, 1.0]
    for i in range(12):
        adam.minimize(loss)
        lr = adam.current_step_lr()
        np.allclose(lr, ret[i], rtol=1e-06, atol=0.0)  # True
- minimize(loss, startup_program=None, parameter_list=None, no_grad_set=None)
Add operations to minimize loss by updating parameter_list.
- Parameters
loss (Variable) – A Variable containing the value to minimize.
startup_program (Program, optional) – Program for initializing the parameters in parameter_list. The default value is None, in which case default_startup_program will be used.
parameter_list (list, optional) – List of Variable or Variable.name to update in order to minimize loss. The default value is None, in which case all parameters will be updated.
no_grad_set (set, optional) – Set of Variable or Variable.name that don't need to be updated. The default value is None.
- Returns
tuple (optimize_ops, params_grads): a list of operators appended by minimize and a list of (param, grad) Variable pairs, where param is a Parameter and grad is the gradient value corresponding to that parameter. The returned tuple can be passed to fetch_list in Executor.run() to indicate program pruning; if so, the program will be pruned by feed and fetch_list before running. See details in Executor.
- Return type
tuple
Examples
Please refer to the example of the current Optimizer. A minimal generic sketch is also given below.
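The following is a minimal static-graph sketch of the generic minimize pattern referred to above. The Momentum optimizer and the toy network are illustrative stand-ins, not a ModelAverage-specific example.

# Hedged sketch of the generic Optimizer.minimize usage; names are illustrative.
import paddle.fluid as fluid
import numpy

main_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(main_program, startup_program):
    data = fluid.data(name='X', shape=[None, 1], dtype='float32')
    hidden = fluid.layers.fc(input=data, size=10)
    loss = fluid.layers.mean(hidden)

    optimizer = fluid.optimizer.Momentum(learning_rate=0.2, momentum=0.1)
    # minimize returns (optimize_ops, params_grads); the returned tuple can be
    # passed to fetch_list in Executor.run() to prune the program, as described above.
    optimize_ops, params_grads = optimizer.minimize(loss)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(startup_program)

x = numpy.random.random(size=(10, 1)).astype('float32')
exe.run(main_program, feed={'X': x}, fetch_list=[loss.name])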
- set_dict(state_dict)
Load the optimizer state dict. For the Adam optimizer, this contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be changed as well.
- Parameters
state_dict (dict) – Dict containing all the Variables needed by the optimizer.
- Returns
None
Examples
import paddle.fluid as fluid

with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding([10, 10])

    state_dict = emb.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    adam = fluid.optimizer.Adam(learning_rate=fluid.layers.noam_decay(100, 10000),
                                parameter_list=emb.parameters())
    state_dict = adam.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    para_state_dict, opti_state_dict = fluid.load_dygraph("paddle_dy")

    adam.set_dict(opti_state_dict)
- state_dict()
Get state dict information from the optimizer. It contains all the Variables used by the optimizer. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be included in the state dict. If the optimizer has never been called (i.e. the minimize function has not run), the state dict is empty.
- Parameters
None
- Returns
A dict containing all the Variables used by the optimizer.
- Return type
state_dict (dict)
Examples
import paddle.fluid as fluid

with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding([10, 10])
    adam = fluid.optimizer.Adam(0.001, parameter_list=emb.parameters())
    state_dict = adam.state_dict()