LRScheduler

class paddle.optimizer.lr.LRScheduler ( learning_rate=0.1, last_epoch=-1, verbose=False ) [source]

LRScheduler base class. It defines the common interface of a learning rate scheduler.

Users can import it with from paddle.optimizer.lr import LRScheduler ,

then subclass it and provide a custom implementation of get_lr() .

Otherwise, a NotImplementedError exception will be raised.

Parameters
  • learning_rate (float) – The initial learning rate. It is a python float number.

  • last_epoch (int, optional) – The index of the last epoch. It can be set to resume training. Default: -1, meaning the initial learning rate.

  • verbose (bool, optional) – If True, prints a message to stdout for each update. Default: False .

Returns

An LRScheduler instance to schedule the learning rate.

Examples

Here is an example of a simple StepDecay implementation.

import paddle
from paddle.optimizer.lr import LRScheduler

class StepDecay(LRScheduler):
    def __init__(self,
                 learning_rate,
                 step_size,
                 gamma=0.1,
                 last_epoch=-1,
                 verbose=False):
        if not isinstance(step_size, int):
            raise TypeError(
                "The type of 'step_size' must be 'int', but received %s." %
                type(step_size))
        if gamma >= 1.0:
            raise ValueError('gamma should be < 1.0.')

        self.step_size = step_size
        self.gamma = gamma
        super().__init__(learning_rate, last_epoch, verbose)

    def get_lr(self):
        # Decay the base learning rate by gamma every step_size epochs.
        i = self.last_epoch // self.step_size
        return self.base_lr * (self.gamma ** i)
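
A scheduler subclassed this way is typically passed to an optimizer as its learning_rate. The following sketch drives the StepDecay defined above; the linear layer and the random data are illustrative assumptions, not part of the API:

scheduler = StepDecay(learning_rate=0.5, step_size=5, verbose=True)
linear = paddle.nn.Linear(10, 10)                # illustrative model
sgd = paddle.optimizer.SGD(learning_rate=scheduler,
                           parameters=linear.parameters())

for epoch in range(20):
    for batch_id in range(5):
        x = paddle.uniform([10, 10])             # illustrative data
        out = linear(x)
        loss = paddle.mean(out)
        loss.backward()
        sgd.step()
        sgd.clear_grad()
    scheduler.step()   # update the learning rate once per epoch
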
step ( epoch=None )

step should be called after optimizer.step() . It updates the learning rate in the optimizer according to the current epoch . The new learning rate takes effect on the next optimizer.step() .

Parameters

epoch (int, optional) – specify the current epoch. Default: None, in which case the epoch is auto-incremented starting from last_epoch=-1.

Returns

None
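
When epoch is omitted, each call advances the scheduler by one epoch; an explicit epoch can be passed instead, for example when resuming training. A minimal sketch, assuming a scheduler built as in the example above:

scheduler.step()          # auto-increment: advances last_epoch by 1
scheduler.step(epoch=10)  # jump to a given epoch, e.g. when resuming from a checkpoint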

state_dict ( )

Returns the state of the scheduler as a dict.

It is a subset of self.__dict__ .
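
A minimal sketch of saving the returned state for later resumption, assuming paddle.save is used for persistence; the file name is an illustrative assumption:

state = scheduler.state_dict()            # e.g. {'last_epoch': ..., 'last_lr': ...}
paddle.save(state, 'scheduler.pdparams')  # file name is an assumption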

state_keys ( )

For subclasses that overload LRScheduler (the base class). By default, last_epoch and last_lr are saved through self.keys = ['last_epoch', 'last_lr'] .

last_epoch is the current epoch number, and last_lr is the current learning rate.

If you want to change the default behavior, you should provide a custom implementation of _state_keys() to redefine self.keys .
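
A minimal sketch of redefining self.keys, assuming a subclass that also wants its own step_size attribute checkpointed; the extra key is an illustrative choice:

from paddle.optimizer.lr import LRScheduler

class MyDecay(LRScheduler):
    def __init__(self, learning_rate, step_size, last_epoch=-1, verbose=False):
        self.step_size = step_size
        super().__init__(learning_rate, last_epoch, verbose)

    def get_lr(self):
        # Halve the base learning rate every step_size epochs.
        return self.base_lr * (0.5 ** (self.last_epoch // self.step_size))

    def _state_keys(self):
        # step_size is saved and restored along with the defaults.
        self.keys = ['last_epoch', 'last_lr', 'step_size']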

set_state_dict ( state_dict )

Loads the scheduler's state.

set_dict ( state_dict )

Loads the scheduler's state.
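
A minimal sketch of restoring a previously saved state, continuing the saving example above; the file name is an assumption:

state = paddle.load('scheduler.pdparams')
scheduler.set_state_dict(state)   # set_dict(state) behaves the same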

get_lr ( )

For subclasses that overload LRScheduler (the base class), users should provide a custom implementation of get_lr() .

Otherwise, a NotImplementedError exception will be raised.
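
Beyond the StepDecay example above, overriding get_lr() alone is enough for a subclass; this is a minimal sketch of a linear decay in which the 100-epoch decay span is an illustrative assumption:

from paddle.optimizer.lr import LRScheduler

class LinearDecay(LRScheduler):
    def get_lr(self):
        # Decay linearly from base_lr to 0 over 100 epochs, then hold at 0.
        factor = max(0.0, 1.0 - self.last_epoch / 100.0)
        return self.base_lr * factor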