ModelAverage
- class paddle.incubate.ModelAverage(average_window_rate: float, parameters: Sequence[Tensor] | Sequence[_ParameterConfig] | None = None, min_average_window: int = 10000, max_average_window: int = 10000, name: str | None = None) [source]
- 
The ModelAverage optimizer accumulates specific continuous historical parameters during training. The accumulated historical range is controlled by the average_window_rate argument. The averaged Parameter values are used for prediction, which usually improves prediction accuracy.

The average of the Parameter values inside the sliding window is accumulated into a temporary variable. It can be applied to the current model's Parameter values by calling the apply() method, and the current model's Parameter values can be restored by calling the restore() method.

The window size used for averaging is determined by average_window_rate, min_average_window, max_average_window and the current number of Parameter updates (num_updates). When the accumulation count (num_accumulates) exceeds the window threshold (average_window), the accumulated Parameter temporary variable is set to 0.0. The following condition illustrates the role of these arguments:

if num_accumulates >= min_average_window and num_accumulates >= min(max_average_window, num_updates * average_window_rate):
    num_accumulates = 0

In the condition above, num_accumulates is the current accumulation count, which can be thought of as the length of the accumulation window. The window must be at least as long as min_average_window and cannot exceed either max_average_window or num_updates * average_window_rate, where num_updates is the current number of Parameter updates and average_window_rate is a coefficient that scales the window length. A small sketch of this window logic follows the parameter list below.

- Parameters
- 
    - average_window_rate (float) – The ratio used to compute the window length relative to the number of Parameter updates.
    - parameters (list, optional) – List of Tensor names to update to minimize loss. This parameter is required in dygraph mode. The default value is None in static graph mode, at which point all parameters will be updated.
    - min_average_window (int, optional) – The minimum size of the average window length. The default value is 10000.
    - max_average_window (int, optional) – The maximum size of the average window length. The default value is 10000.
    - name (str, optional) – Normally there is no need for the user to set this property. For more information, please refer to api_guide_Name. The default value is None.
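The following pure-Python sketch is illustrative only and is not part of the API; it re-uses the condition quoted above with arbitrary argument values to show when the accumulators would be reset:

>>> average_window_rate = 0.15
>>> min_average_window, max_average_window = 2, 10
>>> num_accumulates = 0
>>> for num_updates in range(1, 9):
...     num_accumulates += 1  # one accumulation per parameter update
...     window = min(max_average_window, num_updates * average_window_rate)
...     if num_accumulates >= min_average_window and num_accumulates >= window:
...         print("reset at update {} after accumulating {}".format(num_updates, num_accumulates))
...         num_accumulates = 0
reset at update 2 after accumulating 2
reset at update 4 after accumulating 2
reset at update 6 after accumulating 2
reset at update 8 after accumulating 2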
 
Examples

>>> import numpy as np
>>> import paddle
>>> import paddle.nn as nn
>>> import paddle.optimizer as opt

>>> BATCH_SIZE = 16
>>> BATCH_NUM = 4
>>> EPOCH_NUM = 4

>>> IMAGE_SIZE = 784
>>> CLASS_NUM = 10

>>> # define a random dataset
>>> class RandomDataset(paddle.io.Dataset):  # type: ignore[type-arg]
...     def __init__(self, num_samples):
...         self.num_samples = num_samples
...     def __getitem__(self, idx):
...         image = np.random.random([IMAGE_SIZE]).astype('float32')
...         label = np.random.randint(0, CLASS_NUM - 1, (1, )).astype('int64')
...         return image, label
...     def __len__(self):
...         return self.num_samples
...
>>> class LinearNet(nn.Layer):
...     def __init__(self):
...         super().__init__()
...         self._linear = nn.Linear(IMAGE_SIZE, CLASS_NUM)
...         self.bias = self._linear.bias
...
...     @paddle.jit.to_static
...     def forward(self, x):
...         return self._linear(x)
...
>>> def train(layer, loader, loss_fn, opt, model_average):
...     for epoch_id in range(EPOCH_NUM):
...         for batch_id, (image, label) in enumerate(loader()):
...             out = layer(image)
...             loss = loss_fn(out, label)
...             loss.backward()
...             opt.step()
...             model_average.step()
...             opt.clear_grad()
...             model_average.clear_grad()
...             print("Train Epoch {} batch {}: loss = {}, bias = {}".format(
...                 epoch_id, batch_id, np.mean(loss.numpy()), layer.bias.numpy()))
...
>>> def evaluate(layer, loader, loss_fn):
...     for batch_id, (image, label) in enumerate(loader()):
...         out = layer(image)
...         loss = loss_fn(out, label)
...         loss.backward()
...         print("Evaluate batch {}: loss = {}, bias = {}".format(
...             batch_id, np.mean(loss.numpy()), layer.bias.numpy()))
...
>>> # create network
>>> layer = LinearNet()
>>> loss_fn = nn.CrossEntropyLoss()
>>> optimizer = opt.Momentum(learning_rate=0.2, momentum=0.1, parameters=layer.parameters())
>>> model_average = paddle.incubate.ModelAverage(
...     0.15,
...     parameters=layer.parameters(),
...     min_average_window=2,
...     max_average_window=10
... )

>>> # create train data loader
>>> dataset = RandomDataset(BATCH_NUM * BATCH_SIZE)
>>> loader = paddle.io.DataLoader(dataset,
...     batch_size=BATCH_SIZE,
...     shuffle=True,
...     drop_last=True,
...     num_workers=2)

>>> # create eval data loader
>>> eval_loader = paddle.io.DataLoader(dataset,
...     batch_size=BATCH_SIZE,
...     shuffle=True,
...     drop_last=True,
...     num_workers=1
... )

>>> # train
>>> train(layer, loader, loss_fn, optimizer, model_average)

>>> print("\nEvaluate With ModelAverage")
>>> with model_average.apply(need_restore=False):
...     evaluate(layer, eval_loader, loss_fn)

>>> print("\nEvaluate With Restored Parameters")
>>> model_average.restore()
>>> evaluate(layer, eval_loader, loss_fn)
            
append_regularization_ops(parameters_and_grads: list[tuple[Tensor, Tensor]], regularization: WeightDecayRegularizer | None = None) → list[tuple[Tensor, Tensor]]
- 
Creates and adds backward regularization operators in the BlockDesc. This adds the gradients of the regularizer function to the gradients of the parameters and returns the modified gradients, which is the same as implementing weight decay in an optimizer for regularization. A hedged usage sketch follows the field list below.

- Parameters
- 
    - parameters_and_grads (list[tuple[Tensor, Tensor]]) – A list of (parameter, gradient) pairs that need to be regularized.
    - regularization (WeightDecayRegularizer|None, optional) – A global regularizer. If a parameter does not have its own regularizer set, this global regularizer is applied to it. The default value is None.
 
- Returns
    A list of (parameter, gradient) pairs with the regularized gradients.
- Return type
    list[tuple[Tensor, Tensor]]
- Raises
    Exception – Unknown regularization type.
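As referenced above, a minimal sketch of how this method might be used, patterned after the apply_gradients example further below; the L2Decay regularizer, its coefficient, and the dygraph setup are illustrative assumptions rather than part of this method's documented example:

>>> import paddle
>>> inp = paddle.uniform([10, 10], dtype="float32", min=-0.1, max=0.1)
>>> linear = paddle.nn.Linear(10, 10)
>>> loss = paddle.mean(linear(inp))
>>> optimizer = paddle.optimizer.Adam(learning_rate=0.1,
...                                   parameters=linear.parameters())
>>> params_grads = optimizer.backward(loss)
>>> # add an L2 penalty gradient to each (param, grad) pair; the coefficient is illustrative
>>> regularized_grads = optimizer.append_regularization_ops(
...     params_grads, paddle.regularizer.L2Decay(1e-4))
>>> optimizer.apply_gradients(regularized_grads)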
 
 - 
            
apply_gradients(params_grads: list[tuple[Tensor, Tensor]]) → list[Operator]
- 
The second part of minimize: appends optimization operators for the given params_grads pairs.

- Parameters
    params_grads (list[tuple[Tensor, Tensor]]) – A list of (param, grad) pairs to optimize.
- Returns
    A list of operators appended to the current program.
- Return type
    list
Examples

>>> import paddle
>>> inp = paddle.uniform([10, 10], dtype="float32", min=-0.1, max=0.1)
>>> linear = paddle.nn.Linear(10, 10)
>>> out = linear(inp)
>>> loss = paddle.mean(out)
>>> optimizer = paddle.optimizer.Adam(learning_rate=0.1,
...                                   parameters=linear.parameters())
>>> params_grads = optimizer.backward(loss)
>>> optimizer.apply_gradients(params_grads)
 - 
            
backward(loss: Tensor, startup_program: Program | None = None, parameters: list[Tensor] | list[str] | None = None, no_grad_set: set[Tensor] | set[str] | None = None, callbacks: list[Callable[..., None]] | None = None) → list[tuple[Tensor, Tensor]]
- 
The first part of minimize: performs auto-differentiation to append backward operations for the current program.

- Parameters
    - loss (Tensor) – The loss tensor to run optimizations on.
    - startup_program (Program|None, optional) – Program for initializing parameters in parameters. The default value is None, at which point default_startup_program will be used.
    - parameters (list[Tensor]|list[str]|None, optional) – List of Tensor or Tensor.name to update to minimize loss. The default value is None, at which point all parameters will be updated.
    - no_grad_set (set[Tensor]|set[str]|None, optional) – Set of Tensor or Tensor.name that don't need to be updated. The default value is None.
    - callbacks (list|None, optional) – List of callable objects to run when appending the backward operator for one parameter. The default value is None.
 
- Returns
    list[tuple[Tensor, Tensor]] – A list of (param, grad) tensor pairs, where param is a Parameter and grad is the gradient value corresponding to the parameter.
Examples

>>> import paddle
>>> x = paddle.arange(26, dtype="float32").reshape([2, 13])

>>> linear = paddle.nn.Linear(13, 5)
>>> # This can be any optimizer supported by dygraph.
>>> adam = paddle.optimizer.Adam(learning_rate=0.01,
...                              parameters=linear.parameters())
>>> out = linear(x)
>>> out.backward()
>>> adam.step()
>>> adam.clear_grad()
 - 
            
clear_grad(set_to_zero: bool = True) → None
- 
Clear the gradients of all optimized parameters of the model. If the gradients are not cleared, new gradients will accumulate on top of the previous ones. There are two ways to clear gradients: setting them to zero or deleting them. A small illustrative variant is shown after the example below.

- Parameters
    set_to_zero (bool, optional) – Whether to set the gradients to zero (True) or delete them (False). The default value is True.
- Returns
    None
Examples

>>> import paddle

>>> a = paddle.arange(26, dtype="float32").reshape([2, 13])
>>> linear = paddle.nn.Linear(13, 5)
>>> # This can be any optimizer supported by dygraph.
>>> adam = paddle.optimizer.Adam(learning_rate=0.01,
...                              parameters=linear.parameters())
>>> out = linear(a)
>>> out.backward()
>>> adam.step()
>>> adam.clear_grad()
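As mentioned above, a minimal variant (assuming the same setup as the example) that deletes the gradient storage instead of zero-filling it:

>>> adam.clear_grad(set_to_zero=False)  # release gradients rather than filling them with zeros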
 - 
            
get_lr() → float
- 
Get the current learning rate of the optimizer. If an LRScheduler is not used, the return value is the same at every call. If an LRScheduler is used, the return value is the currently scheduled learning rate.

- Returns
    float – The current learning rate of the optimizer.
Examples

>>> # train on default dynamic graph mode
>>> import paddle
>>> import numpy as np
>>> emb = paddle.nn.Embedding(10, 3)

>>> ## example1: LRScheduler is not used, the return value is the same at every step
>>> adam = paddle.optimizer.Adam(0.01, parameters=emb.parameters())
>>> for batch in range(10):
...     input = paddle.randint(low=0, high=5, shape=[5])
...     out = emb(input)
...     out.backward()
...     print("Learning rate of step{}: {}".format(batch, adam.get_lr()))  # 0.01
...     adam.step()
Learning rate of step0: 0.01
Learning rate of step1: 0.01
Learning rate of step2: 0.01
Learning rate of step3: 0.01
Learning rate of step4: 0.01
Learning rate of step5: 0.01
Learning rate of step6: 0.01
Learning rate of step7: 0.01
Learning rate of step8: 0.01
Learning rate of step9: 0.01

>>> ## example2: StepDecay is used, the scheduled learning rate is returned
>>> scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.5, step_size=2, gamma=0.1)
>>> adam = paddle.optimizer.Adam(scheduler, parameters=emb.parameters())
>>> for batch in range(10):
...     input = paddle.randint(low=0, high=5, shape=[5])
...     out = emb(input)
...     out.backward()
...     print("Learning rate of step{}: {}".format(batch, adam.get_lr()))  # 0.5->0.05...
...     adam.step()
...     scheduler.step()
Learning rate of step0: 0.5
Learning rate of step1: 0.5
Learning rate of step2: 0.05
Learning rate of step3: 0.05
Learning rate of step4: 0.005000000000000001
Learning rate of step5: 0.005000000000000001
Learning rate of step6: 0.0005000000000000001
Learning rate of step7: 0.0005000000000000001
Learning rate of step8: 5.000000000000001e-05
Learning rate of step9: 5.000000000000001e-05

>>> # train on static graph mode
>>> paddle.enable_static()
>>> main_prog = paddle.static.Program()
>>> start_prog = paddle.static.Program()
>>> with paddle.static.program_guard(main_prog, start_prog):
...     x = paddle.static.data(name='x', shape=[None, 10])
...     z = paddle.static.nn.fc(x, 100)
...     loss = paddle.mean(z)
...     scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.5, step_size=2, gamma=0.1)
...     adam = paddle.optimizer.Adam(learning_rate=scheduler)
...     adam.minimize(loss)

>>> exe = paddle.static.Executor()
>>> exe.run(start_prog)
>>> for batch in range(10):
...     print("Learning rate of step{}: {}".format(batch, adam.get_lr()))  # 0.5->0.05->0.005...
...     out = exe.run(main_prog, feed={'x': np.random.randn(3, 10).astype('float32')})
...     scheduler.step()
Learning rate of step0: 0.5
Learning rate of step1: 0.5
Learning rate of step2: 0.05
Learning rate of step3: 0.05
Learning rate of step4: 0.005000000000000001
Learning rate of step5: 0.005000000000000001
Learning rate of step6: 0.0005000000000000001
Learning rate of step7: 0.0005000000000000001
Learning rate of step8: 5.000000000000001e-05
Learning rate of step9: 5.000000000000001e-05
 - 
            
set_lr(value: float) → None
- 
- Api_attr
    imperative

Set the value of the learning rate manually in the optimizer. If the optimizer uses an LRScheduler, this API cannot be invoked, because it would conflict with the scheduler.

- Parameters
    value (float) – The value of the learning rate.
- Returns
    None
Examples

>>> import paddle
>>> linear = paddle.nn.Linear(10, 10)

>>> adam = paddle.optimizer.Adam(0.1, parameters=linear.parameters())

>>> # set learning rate manually by python float value
>>> lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
>>> for i in range(5):
...     adam.set_lr(lr_list[i])
...     lr = adam.get_lr()
...     print("current lr is {}".format(lr))
current lr is 0.2
current lr is 0.3
current lr is 0.4
current lr is 0.5
current lr is 0.6
 - 
            
set_lr_scheduler(scheduler: LRScheduler) → None
- 
- Api_attr
    imperative

Set the LRScheduler of the learning rate manually in the optimizer. If the optimizer already uses an LRScheduler, this API replaces it with the new one.

- Parameters
    scheduler (LRScheduler) – The LRScheduler of the learning rate.
- Returns
    None
Examples

>>> import paddle
>>> linear = paddle.nn.Linear(10, 10)

>>> adam = paddle.optimizer.Adam(0.1, parameters=linear.parameters())

>>> # set learning rate manually by class LRScheduler
>>> scheduler = paddle.optimizer.lr.MultiStepDecay(learning_rate=0.5, milestones=[2, 4, 6], gamma=0.8)
>>> adam.set_lr_scheduler(scheduler)
>>> lr = adam.get_lr()
>>> print("current lr is {}".format(lr))
current lr is 0.5

>>> # set learning rate manually by another LRScheduler
>>> scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.1, step_size=5, gamma=0.6)
>>> adam.set_lr_scheduler(scheduler)
>>> lr = adam.get_lr()
>>> print("current lr is {}".format(lr))
current lr is 0.1
 - 
            
set_state_dict(state_dict: dict[str, Tensor]) → None
- 
Load the optimizer state dict. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If an LRScheduler has been used, global_step will be changed.

- Parameters
    state_dict (dict) – A dict containing all the Tensor needed by the optimizer.
- Returns
    None
Examples

>>> import paddle

>>> emb = paddle.nn.Embedding(10, 10)

>>> layer_state_dict = emb.state_dict()
>>> paddle.save(layer_state_dict, "emb.pdparams")

>>> scheduler = paddle.optimizer.lr.NoamDecay(
...     d_model=100, warmup_steps=100, verbose=True)
>>> adam = paddle.optimizer.Adam(
...     learning_rate=scheduler,
...     parameters=emb.parameters())
>>> opt_state_dict = adam.state_dict()
>>> paddle.save(opt_state_dict, "adam.pdopt")

>>> opti_state_dict = paddle.load("adam.pdopt")
>>> adam.set_state_dict(opti_state_dict)
 - 
            
state_dict() → dict[str, Tensor]
- 
Get state dict information from the optimizer. It contains all the tensors used by the optimizer. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If an LRScheduler has been used, global_step will be included in the state dict. If the optimizer has never been called (via the minimize function), the state dict is empty.

- Returns
    dict[str, Tensor] – A dict containing all the Tensor used by the optimizer.
Examples

>>> import paddle
>>> emb = paddle.nn.Embedding(10, 10)

>>> adam = paddle.optimizer.Adam(0.001, parameters=emb.parameters())
>>> state_dict = adam.state_dict()
 - 
            
minimize(loss: Tensor, startup_program: Program | None = None, parameters: list[Tensor] | None = None, no_grad_set: set[Tensor] | set[str] | None = None) → None
- 
Add operations to minimize loss by updating parameters.

- Parameters
    - loss (Tensor) – A Tensor containing the value to minimize.
    - startup_program (Program, optional) – Program for initializing parameters in parameters. The default value is None, at which point default_startup_program will be used.
    - parameters (list, optional) – List of Tensor or Tensor.name to update to minimize loss. The default value is None, at which point all parameters will be updated.
    - no_grad_set (set, optional) – Set of Tensor or Tensor.name that don't need to be updated. The default value is None.
- Returns
    tuple (optimize_ops, params_grads) – A list of operators appended by minimize and a list of (param, grad) tensor pairs, where param is a Parameter and grad is the gradient value corresponding to the parameter. In static graph mode, the returned tuple can be passed to fetch_list in Executor.run() to indicate program pruning. If so, the program will be pruned by feed and fetch_list before running; see details in Executor. (A hedged static-graph sketch of this is shown after the example below.)
- Return type
    tuple
Examples

>>> import paddle
>>> inp = paddle.rand([1, 10], dtype="float32")
>>> linear = paddle.nn.Linear(10, 1)
>>> out = linear(inp)
>>> loss = paddle.mean(out)
>>> loss.backward()

>>> sgd = paddle.optimizer.SGD(learning_rate=0.1, parameters=linear.parameters())
>>> sgd.minimize(loss)

>>> modelaverage = paddle.incubate.ModelAverage(
...     0.15,
...     parameters=linear.parameters(),
...     min_average_window=2,
...     max_average_window=4
... )
>>> modelaverage.minimize(loss)
>>> sgd.clear_grad()
>>> modelaverage.clear_grad()
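As referenced in the Returns description, a minimal static-graph sketch of passing the result of minimize to fetch_list; it follows the static-graph pattern from the get_lr example above, and the network shape and feed data are illustrative assumptions:

>>> import paddle
>>> import numpy as np
>>> paddle.enable_static()
>>> main_prog = paddle.static.Program()
>>> start_prog = paddle.static.Program()
>>> with paddle.static.program_guard(main_prog, start_prog):
...     x = paddle.static.data(name='x', shape=[None, 10], dtype='float32')
...     loss = paddle.mean(paddle.static.nn.fc(x, 1))
...     sgd = paddle.optimizer.SGD(learning_rate=0.1)
...     optimize_ops, params_grads = sgd.minimize(loss)
>>> exe = paddle.static.Executor()
>>> exe.run(start_prog)
>>> # fetching the gradients returned by minimize prunes the program to what is needed
>>> fetched = exe.run(main_prog,
...                   feed={'x': np.random.randn(4, 10).astype('float32')},
...                   fetch_list=[loss] + [g for _, g in params_grads])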
 - 
            
step() → None
- 
Execute the optimizer and update the parameters once.

- Returns
    None
Examples

>>> import paddle
>>> inp = paddle.rand([1, 10], dtype="float32")
>>> linear = paddle.nn.Linear(10, 1)
>>> out = linear(inp)
>>> loss = paddle.mean(out)

>>> sgd = paddle.optimizer.SGD(learning_rate=0.1, parameters=linear.parameters())
>>> modelaverage = paddle.incubate.ModelAverage(
...     0.15,
...     parameters=linear.parameters(),
...     min_average_window=2,
...     max_average_window=4
... )
>>> loss.backward()
>>> sgd.step()
>>> modelaverage.step()
>>> sgd.clear_grad()
>>> modelaverage.clear_grad()
 - 
            
apply(executor: Executor | None = None, need_restore: bool = True) → Generator[None, None, None]
- 
Apply the average of the accumulated Parameter values to the parameters of the current model.

- Parameters
    - executor (Executor) – The network executor in static-graph mode. The default value is None in dygraph mode.
    - need_restore (bool) – Restore flag. If set to True, the network parameters will be restored to their original values afterwards; if set to False, they will not be restored. The default value is True.
 
Examples

>>> import paddle
>>> inp = paddle.rand([1, 10], dtype="float32")
>>> linear = paddle.nn.Linear(10, 1)
>>> out = linear(inp)
>>> loss = paddle.mean(out)
>>> loss.backward()

>>> sgd = paddle.optimizer.SGD(learning_rate=0.1, parameters=linear.parameters())

>>> modelaverage = paddle.incubate.ModelAverage(
...     0.15,
...     parameters=linear.parameters(),
...     min_average_window=2,
...     max_average_window=4
... )
>>> sgd.step()
>>> modelaverage.step()

>>> with modelaverage.apply():
...     for param in linear.parameters():
...         print(param)

>>> for param in linear.parameters():
...     print(param)
 - 
            
restore(executor: Executor | None = None) → None
- 
Restore the Parameter values of the current model.

- Parameters
    executor (Executor) – The network executor in static-graph mode. The default value is None in dygraph mode.
Examples

>>> import paddle
>>> inp = paddle.rand([1, 10], dtype="float32")
>>> linear = paddle.nn.Linear(10, 1)
>>> out = linear(inp)
>>> loss = paddle.mean(out)
>>> loss.backward()

>>> sgd = paddle.optimizer.SGD(learning_rate=0.1, parameters=linear.parameters())

>>> modelaverage = paddle.incubate.ModelAverage(
...     0.15,
...     parameters=linear.parameters(),
...     min_average_window=2,
...     max_average_window=4
... )
>>> sgd.step()
>>> modelaverage.step()

>>> with modelaverage.apply(need_restore=False):
...     for param in linear.parameters():
...         print(param)

>>> for param in linear.parameters():
...     print(param)

>>> modelaverage.restore()

>>> for param in linear.parameters():
...     print(param)
 
