RMSPropOptimizer
- class paddle.fluid.optimizer.RMSPropOptimizer(learning_rate, rho=0.95, epsilon=1e-06, momentum=0.0, centered=False, parameter_list=None, regularization=None, grad_clip=None, name=None) [source]
- 
         Root Mean Squared Propagation (RMSProp) is an unpublished, adaptive learning rate method, originally proposed on slide 29 of http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf . The original equations are:

\[\begin{aligned} r(w, t) &= \rho r(w, t-1) + (1 - \rho)(\nabla Q_{i}(w))^2 \\ w &= w - \frac{\eta}{\sqrt{r(w,t) + \epsilon}} \nabla Q_{i}(w) \end{aligned}\]

The first equation computes a moving average of the squared gradient for each weight; the gradient is then divided by \(\sqrt{r(w,t)}\). In some cases, adding a momentum term \(\beta\) is beneficial. In our implementation, Nesterov momentum is used:

\[\begin{aligned} r(w, t) &= \rho r(w, t-1) + (1 - \rho)(\nabla Q_{i}(w))^2 \\ v(w, t) &= \beta v(w, t-1) + \frac{\eta}{\sqrt{r(w,t) + \epsilon}} \nabla Q_{i}(w) \\ w &= w - v(w, t) \end{aligned}\]

If centered is True:

\[\begin{aligned} r(w, t) &= \rho r(w, t-1) + (1 - \rho)(\nabla Q_{i}(w))^2 \\ g(w, t) &= \rho g(w, t-1) + (1 - \rho)\nabla Q_{i}(w) \\ v(w, t) &= \beta v(w, t-1) + \frac{\eta}{\sqrt{r(w,t) - (g(w, t))^2 + \epsilon}} \nabla Q_{i}(w) \\ w &= w - v(w, t) \end{aligned}\]

where \(\rho\) is a hyperparameter with typical values such as 0.9 or 0.95, \(\beta\) is the momentum term, and \(\epsilon\) is a smoothing term that avoids division by zero, usually set somewhere in the range from 1e-4 to 1e-8. A NumPy sketch of these update rules is given after the examples below. - Parameters
- 
           - learning_rate (float) – Global learning rate. 
- rho (float) – \(\rho\) in the equation, default is 0.95. 
- epsilon (float) – \(\epsilon\) in the equation is a smoothing term to avoid division by zero, default is 1e-6. 
- momentum (float) – \(\beta\) in the equation is the momentum term, default is 0.0. 
- centered (bool) – If True, gradients are normalized by the estimated variance of the gradient; if False, by the uncentered second moment. Setting this to True may help with training, but is slightly more expensive in terms of computation and memory. Defaults to False. 
- parameter_list (Iterable, optional) – Iterable of Variable names to update to minimize loss. This parameter is required in dygraph mode. The default value is None in static mode, at this time all parameters will be updated.
- regularization (WeightDecayRegularizer, optional) – The strategy of regularization. There are two methods: api_fluid_regularizer_L1Decay, api_fluid_regularizer_L2Decay. If a parameter has already set a regularizer using api_fluid_ParamAttr, the regularization setting here in the optimizer will be ignored for that parameter; otherwise, the regularization setting here in the optimizer will take effect. Default None, meaning there is no regularization. 
- grad_clip (GradientClipBase, optional) – Gradient clipping strategy, an instance of some derived class of GradientClipBase. There are three clipping strategies ( api_fluid_clip_GradientClipByGlobalNorm , api_fluid_clip_GradientClipByNorm , api_fluid_clip_GradientClipByValue ). Default None, meaning there is no gradient clipping.
- name (str, optional) – This parameter is used by developers to print debugging information. For details, please refer to Name. Default is None. 
 
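For illustration, a minimal sketch of constructing the optimizer with the regularization and gradient clipping options described above (the coefficient and clip values are arbitrary choices, not recommendations):

    import paddle.fluid as fluid

    rms = fluid.optimizer.RMSPropOptimizer(
        learning_rate=0.1,
        rho=0.95,
        epsilon=1e-6,
        momentum=0.9,
        centered=True,
        regularization=fluid.regularizer.L2Decay(regularization_coeff=1e-4),
        grad_clip=fluid.clip.GradientClipByGlobalNorm(clip_norm=1.0))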
- Raises
- 
           ValueError – If learning_rate, rho, epsilon, momentum are None. 
 Examples

    import paddle
    import paddle.fluid as fluid
    import numpy as np

    place = fluid.CPUPlace()
    main = fluid.Program()
    with fluid.program_guard(main):
        x = fluid.layers.data(name='x', shape=[13], dtype='float32')
        y = fluid.layers.data(name='y', shape=[1], dtype='float32')
        y_predict = fluid.layers.fc(input=x, size=1, act=None)
        cost = fluid.layers.square_error_cost(input=y_predict, label=y)
        avg_cost = fluid.layers.mean(cost)

        rms_optimizer = fluid.optimizer.RMSProp(learning_rate=0.1)
        rms_optimizer.minimize(avg_cost)

        fetch_list = [avg_cost]
        train_reader = paddle.batch(
            paddle.dataset.uci_housing.train(), batch_size=1)
        feeder = fluid.DataFeeder(place=place, feed_list=[x, y])
        exe = fluid.Executor(place)
        exe.run(fluid.default_startup_program())
        for data in train_reader():
            exe.run(main, feed=feeder.feed(data), fetch_list=fetch_list)
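For reference, a minimal NumPy sketch of the update rules described above (standalone illustration of the math only, not part of the Paddle API; all names are illustrative):

    import numpy as np

    def rmsprop_step(w, grad, r, g, v, eta=0.1, rho=0.95, beta=0.0,
                     epsilon=1e-6, centered=False):
        """One RMSProp update following the equations above (illustrative)."""
        r = rho * r + (1.0 - rho) * grad ** 2     # moving average of squared gradient
        if centered:
            g = rho * g + (1.0 - rho) * grad      # moving average of the gradient
            denom = np.sqrt(r - g ** 2 + epsilon)
        else:
            denom = np.sqrt(r + epsilon)
        v = beta * v + eta * grad / denom         # momentum-style velocity
        w = w - v
        return w, r, g, v

    # toy usage
    w = np.array([1.0, -2.0])
    r = g = v = np.zeros_like(w)
    grad = np.array([0.5, -0.5])
    w, r, g, v = rmsprop_step(w, grad, r, g, v, centered=True)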
            
- append_regularization_ops(parameters_and_grads, regularization=None)
- 
           Creates and adds backward regularization operators in the BlockDesc. This will add gradients of the regularizer function to the gradients of the parameters and return these modified gradients. This is the same as implementing weight decay in optimizers for regularization. - Parameters
- 
             - parameters_and_grads – A list of (parameters, gradients) pairs that need to be regularized. 
- regularization – A global regularizer. If a parameter has not set its own regularizer, this one will be applied to it. 
 
- Returns
- 
              list of (parameters, gradients) pairs with the regularized gradients 
- Return type
- 
             list[(Variable, Variable)] 
- Raises
- 
             Exception – Unknown regularization type 
 
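A hedged sketch of how this method could be used, based solely on the signature documented above; network() is a hypothetical function that builds a loss variable, as in the apply_gradients example below:

    import paddle.fluid as fluid

    loss = network()  # hypothetical; builds a loss variable
    optimizer = fluid.optimizer.RMSPropOptimizer(learning_rate=0.1)
    params_grads = optimizer.backward(loss)
    # Add weight-decay gradients for each (param, grad) pair; pairs whose
    # parameter already carries its own regularizer keep that setting.
    params_grads = optimizer.append_regularization_ops(
        params_grads,
        regularization=fluid.regularizer.L2Decay(regularization_coeff=1e-4))
    optimizer.apply_gradients(params_grads)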
- apply_gradients(params_grads)
- 
           Second part of minimize, appending optimization operators for given params_grads pairs. - Parameters
- 
             params_grads (list) – list of (param, grad) pair to do optimization. 
- Returns
- 
             A list of operators appended to the current program. 
- Return type
- 
             list 
 Examples

    import paddle.fluid as fluid

    loss = network()
    optimizer = fluid.optimizer.SGD(learning_rate=0.1)
    params_grads = optimizer.backward(loss)
    # you may append operations for params_grads here
    # ...
    optimizer.apply_gradients(params_grads)
- apply_optimize(loss, startup_program, params_grads)
- 
           Second part of minimize, appending optimization operators for given params_grads pairs. - Parameters
- 
              - loss (Variable) – loss variable to run optimizations. 
- startup_program – startup_program for initializing parameters in parameter_list. 
- params_grads (list) – list of (param, grad) pair to do optimization. 
- Returns
- 
             A list of operators appended to the current program. 
- Return type
- 
             list 
 
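A hedged sketch based on the signature documented above; network() is again a hypothetical loss-building function:

    import paddle.fluid as fluid

    loss = network()  # hypothetical; builds a loss variable
    optimizer = fluid.optimizer.RMSPropOptimizer(learning_rate=0.1)
    # First part of minimize: collect (param, grad) pairs.
    params_grads = optimizer.backward(loss)
    # Second part of minimize: append optimization operators for those pairs.
    optimizer.apply_optimize(loss, startup_program=None, params_grads=params_grads)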
- backward(loss, startup_program=None, parameter_list=None, no_grad_set=None, callbacks=None)
- 
           The first part of minimize, doing auto-diff to append backward operations for the current program. - Parameters
- 
              - loss (Variable) – loss variable to run optimizations.
- startup_program (Program, optional) – api_fluid_Program for initializing parameters in parameter_list. The default value is None, at this time api_fluid_default_startup_program will be used.
- parameter_list (Iterable, optional) – Iterable of Variable or Variable.name to update to minimize loss. The default value is None, at this time all parameters will be updated.
- no_grad_set (set, optional) – Set of Variable or Variable.name that don't need to be updated. The default value is None.
- callbacks (list, optional) – list of callable objects to run when appending backward operator for one parameter. The default value is None. 
 
- Returns
- 
              list of (param, grad) variable pairs, param is Parameter, grad is the gradient value corresponding to the parameter. 
- Return type
- 
              list 
 Examples See examples in apply_gradients.
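In addition to the apply_gradients example, a hedged sketch of backward with its optional arguments spelled out (network() is a hypothetical loss-building function):

    import paddle.fluid as fluid

    loss = network()  # hypothetical; builds a loss variable
    optimizer = fluid.optimizer.RMSPropOptimizer(learning_rate=0.1)
    # Append backward operators and collect (param, grad) pairs; the None
    # values fall back to the defaults described in the parameter list above.
    params_grads = optimizer.backward(loss,
                                      startup_program=None,
                                      parameter_list=None,
                                      no_grad_set=None)
    optimizer.apply_gradients(params_grads)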
- clear_gradients()
- 
           Clear the gradients of all optimized parameters for the model. If this is not called, new gradients will be accumulated on top of the previous gradients. - Returns
- 
             None 
 Examples

    import paddle.fluid as fluid
    import numpy as np

    with fluid.dygraph.guard():
        value = np.arange(26).reshape(2, 13).astype("float32")
        a = fluid.dygraph.to_variable(value)
        linear = fluid.Linear(13, 5, dtype="float32")
        # This can be any optimizer supported by dygraph.
        adam = fluid.optimizer.Adam(learning_rate=0.01,
                                    parameter_list=linear.parameters())
        out = linear(a)
        out.backward()
        adam.minimize(out)
        adam.clear_gradients()
- current_step_lr()
- 
           - Api_attr
- 
             imperative 
 Get the learning rate of the current step. When LearningRateDecay is not used, the return value is always the same; otherwise, the step learning rate is returned. - Returns
- 
             The learning rate of the current step. 
- Return type
- 
             float 
 Examples

    import paddle.fluid as fluid
    import numpy as np

    # example1: LearningRateDecay is not used, return value is all the same
    with fluid.dygraph.guard():
        emb = fluid.dygraph.Embedding([10, 10])
        adam = fluid.optimizer.Adam(0.001, parameter_list=emb.parameters())
        lr = adam.current_step_lr()
        print(lr) # 0.001

    # example2: PiecewiseDecay is used, return the step learning rate
    with fluid.dygraph.guard():
        inp = np.random.uniform(-0.1, 0.1, [10, 10]).astype("float32")
        linear = fluid.dygraph.nn.Linear(10, 10)
        inp = fluid.dygraph.to_variable(inp)
        out = linear(inp)
        loss = fluid.layers.reduce_mean(out)

        bd = [2, 4, 6, 8]
        value = [0.2, 0.4, 0.6, 0.8, 1.0]
        adam = fluid.optimizer.Adam(fluid.dygraph.PiecewiseDecay(bd, value, 0),
                                    parameter_list=linear.parameters())

        # first step: learning rate is 0.2
        np.allclose(adam.current_step_lr(), 0.2, rtol=1e-06, atol=0.0) # True

        # learning rate for different steps
        ret = [0.2, 0.2, 0.4, 0.4, 0.6, 0.6, 0.8, 0.8, 1.0, 1.0, 1.0, 1.0]
        for i in range(12):
            adam.minimize(loss)
            lr = adam.current_step_lr()
            np.allclose(lr, ret[i], rtol=1e-06, atol=0.0) # True
- minimize(loss, startup_program=None, parameter_list=None, no_grad_set=None)
- 
           Add operations to minimize loss by updating parameter_list. - Parameters
- 
              - loss (Variable) – A Variable containing the value to minimize.
- startup_program (Program, optional) – api_fluid_Program for initializing parameters in parameter_list. The default value is None, at this time api_fluid_default_startup_program will be used.
- parameter_list (Iterable, optional) – Iterable of Variable or Variable.name to update to minimize loss. The default value is None, at this time all parameters will be updated.
- no_grad_set (set, optional) – Set of Variable or Variable.name that don't need to be updated. The default value is None.
 
- Returns
- 
              tuple (optimize_ops, params_grads), A list of operators appended by minimize and a list of (param, grad) variable pairs, param is Parameter, grad is the gradient value corresponding to the parameter. The returned tuple can be passed to fetch_list in Executor.run() to indicate program pruning. If so, the program will be pruned by feed and fetch_list before run, see details in Executor.
- Return type
- 
             tuple 
 Examples Please refer to the example of current Optimizer. 
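As a supplement to the optimizer example above, a hedged dygraph sketch of minimize, mirroring the pattern of the clear_gradients example:

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        value = np.arange(26).reshape(2, 13).astype("float32")
        a = fluid.dygraph.to_variable(value)
        linear = fluid.Linear(13, 5, dtype="float32")
        rms = fluid.optimizer.RMSPropOptimizer(
            learning_rate=0.01, parameter_list=linear.parameters())
        out = linear(a)
        loss = fluid.layers.reduce_mean(out)
        loss.backward()
        rms.minimize(loss)     # returns (optimize_ops, params_grads)
        rms.clear_gradients()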
- set_dict(state_dict)
- 
           Load the optimizer state dict. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be changed. - Parameters
- 
              state_dict (dict) – Dict containing all the Variables needed by the optimizer 
- Returns
- 
             None 
 Examples

    import paddle
    import paddle.fluid as fluid

    paddle.disable_static()
    emb = paddle.nn.Embedding(10, 10)

    state_dict = emb.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    scheduler = paddle.optimizer.lr.NoamDecay(
        d_model=0.01, warmup_steps=100, verbose=True)
    adam = paddle.optimizer.Adam(
        learning_rate=scheduler,
        parameters=emb.parameters())
    state_dict = adam.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    para_state_dict, opti_state_dict = fluid.load_dygraph("paddle_dy")
- set_lr(value)
- 
           - Api_attr
- 
             imperative 
 Set the value of the learning rate manually in the optimizer. If the optimizer uses LearningRateDecay, this API cannot be invoked, because it would lead to a conflict. - Parameters
- 
             value (float|Variable) – the value of learning rate 
- Returns
- 
             None 
 Examples

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        linear = fluid.dygraph.nn.Linear(10, 10)
        adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())

        # set learning rate manually by python float value
        lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
        for i in range(5):
            adam.set_lr(lr_list[i])
            lr = adam.current_step_lr()
            print("current lr is {}".format(lr))
        # Print:
        #    current lr is 0.2
        #    current lr is 0.3
        #    current lr is 0.4
        #    current lr is 0.5
        #    current lr is 0.6

        # set learning rate manually by framework Variable
        lr_var = fluid.layers.create_global_var(
            shape=[1], value=0.7, dtype='float32')
        adam.set_lr(lr_var)
        lr = adam.current_step_lr()
        print("current lr is {}".format(lr))
        # Print:
        #    current lr is 0.7
- set_state_dict(state_dict)
- 
           Load the optimizer state dict. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be changed. - Parameters
- 
              state_dict (dict) – Dict containing all the Variables needed by the optimizer 
- Returns
- 
             None 
 Examples

    import paddle
    import paddle.fluid as fluid

    paddle.disable_static()
    emb = paddle.nn.Embedding(10, 10)

    state_dict = emb.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    scheduler = paddle.optimizer.lr.NoamDecay(
        d_model=0.01, warmup_steps=100, verbose=True)
    adam = paddle.optimizer.Adam(
        learning_rate=scheduler,
        parameters=emb.parameters())
    state_dict = adam.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    para_state_dict, opti_state_dict = fluid.load_dygraph("paddle_dy")
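Assuming the variable names from the example above, the loaded optimizer state would then be restored with:

    # continuing the example above; variable names come from that example
    adam.set_state_dict(opti_state_dict)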
- state_dict()
- 
           Get state dict information from the optimizer. It contains all the Variables used by the optimizer. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be included in the state dict. If the optimizer has never been called (via the minimize function), the state_dict is empty. - Returns
- 
              dict containing all the Variables used by the optimizer 
- Return type
- 
              state_dict(dict) 
 Examples

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        emb = fluid.dygraph.Embedding([10, 10])
        adam = fluid.optimizer.Adam(0.001, parameter_list=emb.parameters())
        state_dict = adam.state_dict()
 
