MultiStepDecay
class paddle.fluid.dygraph.learning_rate_scheduler.MultiStepDecay(learning_rate, milestones, decay_rate=0.1) [source]

Api_attr: imperative
Decays the learning rate of the optimizer by decay_rate once epoch reaches one of the milestones. The algorithm can be described as the code below.

    learning_rate = 0.5
    milestones = [30, 50]
    decay_rate = 0.1
    if epoch < 30:
        learning_rate = 0.5
    elif epoch < 50:
        learning_rate = 0.05
    else:
        learning_rate = 0.005

Parameters
- 
- learning_rate (float|int) – The initial learning rate. It can be set to a Python float or int.
- milestones (tuple|list) – List or tuple of epoch boundaries. Must be increasing.
- decay_rate (float, optional) – The ratio by which the learning rate is reduced: new_lr = origin_lr * decay_rate. It should be less than 1.0. Default: 0.1. (See the closed-form sketch below.)
 
Returns

    None.
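For intuition, the schedule has a closed form: the current learning rate equals the initial rate times decay_rate raised to the number of milestones already passed. A minimal sketch in plain Python (not the Paddle implementation; multi_step_lr is a made-up name for illustration):

    import bisect

    def multi_step_lr(base_lr, milestones, decay_rate, epoch):
        # bisect_right counts the milestones that are <= epoch, i.e. how
        # many times the decay has already been applied.
        return base_lr * decay_rate ** bisect.bisect_right(milestones, epoch)

    print(multi_step_lr(0.5, [30, 50], 0.1, 29))  # 0.5
    print(multi_step_lr(0.5, [30, 50], 0.1, 30))  # 0.05
    print(multi_step_lr(0.5, [30, 50], 0.1, 50))  # 0.005 (up to float rounding)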
Examples

    import paddle.fluid as fluid
    import numpy as np

    with fluid.dygraph.guard():
        x = np.random.uniform(-1, 1, [10, 10]).astype("float32")
        linear = fluid.dygraph.Linear(10, 10)
        input = fluid.dygraph.to_variable(x)
        scheduler = fluid.dygraph.MultiStepDecay(0.5, milestones=[3, 5])
        adam = fluid.optimizer.Adam(learning_rate=scheduler, parameter_list=linear.parameters())
        for epoch in range(6):
            for batch_id in range(5):
                out = linear(input)
                loss = fluid.layers.reduce_mean(out)
                adam.minimize(loss)
            scheduler.epoch()
            print("epoch:{}, current lr is {}".format(epoch, adam.current_step_lr()))
            # epoch:0, current lr is 0.5
            # epoch:1, current lr is 0.5
            # epoch:2, current lr is 0.5
            # epoch:3, current lr is 0.05
            # epoch:4, current lr is 0.05
            # epoch:5, current lr is 0.005
            
create_lr_var(lr)

Convert lr from a float to a Variable.

Parameters

    lr – learning rate

Returns

    learning rate Variable
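The conversion is conceptually a one-liner; a standalone sketch of the same idea (an assumption about one equivalent way to build such a Variable, not the scheduler's internal code):

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        lr = 0.05
        # Wrap the scalar in a 1-element float32 Variable (assumed equivalent).
        lr_var = fluid.dygraph.to_variable(np.array([lr], dtype="float32"))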
 
epoch(epoch=None)

Compute the learning rate and update it when invoked.
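In the training loop above, epoch() is called with no argument to advance the internal epoch counter by one; the epoch parameter suggests a specific epoch number can also be supplied, e.g. when resuming training. A hedged sketch, assuming an explicit argument sets the counter rather than incrementing it:

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        scheduler = fluid.dygraph.MultiStepDecay(0.5, milestones=[3, 5])
        scheduler.epoch()   # no argument: advance the epoch counter by one
        scheduler.epoch(4)  # assumption: sets the epoch counter to 4, so the
                            # learning rate becomes 0.5 * 0.1 = 0.05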
set_dict(state_dict)

Loads the scheduler's state.
set_state_dict(state_dict)

Loads the scheduler's state.
state_dict()

Returns the state of the scheduler as a dict. It is a subset of self.__dict__.
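Together, state_dict and set_state_dict allow the schedule's progress to be checkpointed alongside the model. A minimal sketch (persisting the dict to disk is elided; its exact keys are internal to the scheduler):

    import paddle.fluid as fluid

    with fluid.dygraph.guard():
        scheduler = fluid.dygraph.MultiStepDecay(0.5, milestones=[3, 5])
        for _ in range(4):
            scheduler.epoch()

        # Capture the schedule's progress (e.g. the current epoch counter).
        state = scheduler.state_dict()

        # Later: restore a fresh scheduler to the same point.
        restored = fluid.dygraph.MultiStepDecay(0.5, milestones=[3, 5])
        restored.set_state_dict(state)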
 
