LinearLR
- class paddle.optimizer.lr.LinearLR(learning_rate, total_steps, start_factor=0.3333333333333333, end_factor=1.0, last_epoch=-1, verbose=False) [source]
Sets the learning rate according to a linear scheduler. The learning rate is first multiplied by start_factor and then increased linearly until it reaches the end learning rate (end_factor * learning_rate) after total_steps iterations.
- Parameters
learning_rate (float) – The initial learning rate. It is a python float number.
total_steps (int) – Number of iterations over which the learning rate reaches the end learning rate.
start_factor (float) – Start learning rate is defined by start_factor * learning_rate. Default: 1./3.
end_factor (float) – End learning rate is defined by end_factor * learning_rate. Default: 1.0.
last_epoch (int, optional) – The index of the last epoch. Can be set to restart training. Default: -1, meaning the initial learning rate.
verbose (bool, optional) – If True, prints a message to stdout for each update. Default: False.
 - Returns
LinearLR instance to schedule learning rate.
Examples
>>> # Example1: train on default dynamic graph mode
>>> import paddle
>>> import numpy as np
>>> # train on default dynamic graph mode
>>> linear = paddle.nn.Linear(10, 10)
>>> scheduler = paddle.optimizer.lr.LinearLR(learning_rate=0.5, total_steps=5, verbose=True)
>>> sgd = paddle.optimizer.SGD(learning_rate=scheduler, parameters=linear.parameters())
>>> for epoch in range(5):
...     for batch_id in range(20):
...         x = paddle.uniform([10, 10])
...         out = linear(x)
...         loss = paddle.mean(out)
...         loss.backward()
...         sgd.step()
...         sgd.clear_gradients()
...     scheduler.step()
>>> # Example2: train on static graph mode
>>> import paddle
>>> import numpy as np
>>> paddle.enable_static()
>>> main_prog = paddle.static.Program()
>>> start_prog = paddle.static.Program()
>>> with paddle.static.program_guard(main_prog, start_prog):
...     x = paddle.static.data(name='x', shape=[None, 4, 5])
...     y = paddle.static.data(name='y', shape=[None, 4, 5])
...     z = paddle.static.nn.fc(x, 100)
...     loss = paddle.mean(z)
...     scheduler = paddle.optimizer.lr.LinearLR(learning_rate=0.5,
...         total_steps=5, verbose=True)
...     sgd = paddle.optimizer.SGD(learning_rate=scheduler)
...     sgd.minimize(loss)
...
>>> exe = paddle.static.Executor()
>>> exe.run(start_prog)
>>> for epoch in range(5):
...     for batch_id in range(20):
...         out = exe.run(
...             main_prog,
...             feed={
...                 'x': np.random.randn(3, 4, 5).astype('float32'),
...                 'y': np.random.randn(3, 4, 5).astype('float32')
...             },
...             fetch_list=loss.name)
...     scheduler.step()
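To make the shape of the schedule itself easier to see, the following standalone sketch (not part of the original examples; it simply steps the scheduler without an optimizer and reads last_lr) prints how the rate moves from learning_rate * start_factor toward learning_rate * end_factor over total_steps:
>>> import paddle
>>> # standalone sketch: inspect the linear schedule without an optimizer
>>> scheduler = paddle.optimizer.lr.LinearLR(learning_rate=0.5, total_steps=5)
>>> for i in range(7):
...     print(i, scheduler.last_lr)   # grows linearly; stays at end_factor * learning_rate after total_steps
...     scheduler.step()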
- get_lr()
Subclasses that overload LRScheduler (the base class) should provide a custom implementation of get_lr(). Otherwise, a NotImplementedError exception will be thrown.
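As an illustration of this contract, here is a hedged sketch of a custom subclass (the class name and the halving rule are made up; it assumes the base class exposes the initial rate as base_lr) that only overrides get_lr():
>>> import paddle
>>> # hypothetical subclass: halve the initial learning rate every 10 epochs
>>> class StepHalfLR(paddle.optimizer.lr.LRScheduler):
...     def get_lr(self):
...         # base_lr is assumed to hold the learning_rate passed to the base class
...         return self.base_lr * (0.5 ** (self.last_epoch // 10))
>>> scheduler = StepHalfLR(learning_rate=0.1)
>>> print(scheduler.last_lr)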
- set_dict(state_dict)
Loads the scheduler's state.
- set_state_dict(state_dict)
Loads the scheduler's state.
- state_dict()
Returns the state of the scheduler as a dict. It is a subset of self.__dict__.
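A minimal sketch of how state_dict and set_state_dict pair up for checkpoint-and-resume (purely illustrative; the state is kept in memory rather than written to disk):
>>> import paddle
>>> # sketch: capture the scheduler state and restore it into a fresh scheduler
>>> scheduler = paddle.optimizer.lr.LinearLR(learning_rate=0.5, total_steps=5)
>>> for _ in range(3):
...     scheduler.step()
>>> state = scheduler.state_dict()        # by default contains last_epoch and last_lr
>>> resumed = paddle.optimizer.lr.LinearLR(learning_rate=0.5, total_steps=5)
>>> resumed.set_state_dict(state)         # continue from the saved epoch and learning rate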
- state_keys()
For subclasses that overload LRScheduler (the base class). By default, last_epoch and last_lr are saved via self.keys = ['last_epoch', 'last_lr']; last_epoch is the current epoch number and last_lr is the current learning rate. If you want to change the default behavior, provide a custom implementation of _state_keys() to redefine self.keys.
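For illustration, here is a hedged sketch of a subclass that redefines self.keys through _state_keys() so an extra, made-up counter attribute is included in the saved state:
>>> import paddle
>>> # hypothetical subclass that checkpoints an extra counter alongside the defaults
>>> class MyLR(paddle.optimizer.lr.LRScheduler):
...     def __init__(self, learning_rate, last_epoch=-1, verbose=False):
...         self.num_restarts = 0                     # extra state worth saving (made-up attribute)
...         super().__init__(learning_rate, last_epoch, verbose)
...     def get_lr(self):
...         return self.base_lr                       # constant rate, just for the sketch
...     def _state_keys(self):
...         self.keys = ['last_epoch', 'last_lr', 'num_restarts']
>>> scheduler = MyLR(learning_rate=0.1)
>>> print(scheduler.state_dict())                     # now also carries 'num_restarts'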
- step(epoch=None)
step should be called after optimizer.step. It will update the learning rate in the optimizer according to the current epoch. The new learning rate will take effect on the next optimizer.step.
- Parameters
epoch (int, optional) – Specify the current epoch. Default: None; auto-increments from last_epoch=-1.
 - Returns
None
 
Examples
>>> import paddle
>>> value = paddle.arange(26, dtype='float32')
>>> a = paddle.reshape(value, [2, 13])
>>> linear = paddle.nn.Linear(13, 5)
>>> adadelta = paddle.optimizer.Adadelta(learning_rate=0.0003, epsilon=1e-06, rho=0.95,
...     parameters=linear.parameters())
>>> out = linear(a)
>>> out.backward()
>>> adadelta.step()
>>> adadelta.clear_grad()
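Because the ordering described above (optimizer.step first, then scheduler.step) is easy to get wrong, here is an additional minimal sketch (not from the original docs; tensor shapes are arbitrary) that pairs a LinearLR scheduler with SGD:
>>> import paddle
>>> # sketch: scheduler.step() is called after optimizer.step()
>>> linear = paddle.nn.Linear(10, 10)
>>> scheduler = paddle.optimizer.lr.LinearLR(learning_rate=0.5, total_steps=5)
>>> sgd = paddle.optimizer.SGD(learning_rate=scheduler, parameters=linear.parameters())
>>> loss = paddle.mean(linear(paddle.uniform([10, 10])))
>>> loss.backward()
>>> sgd.step()           # applies the current learning rate
>>> sgd.clear_gradients()
>>> scheduler.step()     # new learning rate takes effect on the next sgd.step()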
 
 
