NaturalExpDecay

class paddle.fluid.dygraph.NaturalExpDecay(learning_rate, decay_steps, decay_rate, staircase=False, begin=0, step=1, dtype='float32')[source]

Applies natural exponential decay to the initial learning rate.

The algorithm can be described as follows.

\[decayed\_learning\_rate = learning\_rate * e^{y}\]

If staircase is set to False, then:

\[y = - decay\_rate * \frac{global\_step}{decay\_steps}\]

If staircase is set to True, then:

\[y = - decay\_rate * \left\lfloor \frac{global\_step}{decay\_steps} \right\rfloor\]
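
For intuition, the decay can be reproduced in a few lines of plain Python. This is a minimal sketch of the two formulas above, not the fluid implementation; the helper name natural_exp_decay is illustrative only.

import math

def natural_exp_decay(learning_rate, global_step, decay_steps,
                      decay_rate, staircase=False):
    # y = -decay_rate * global_step / decay_steps, optionally floored
    ratio = global_step / decay_steps
    if staircase:
        ratio = math.floor(ratio)
    return learning_rate * math.exp(-decay_rate * ratio)

# With staircase=True the rate is piecewise constant within each decay cycle:
print(natural_exp_decay(0.1, 9999, 10000, 0.5, staircase=True))   # 0.1
print(natural_exp_decay(0.1, 20000, 10000, 0.5, staircase=True))  # 0.1 * e^(-1.0) ≈ 0.0368
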
Parameters
  • learning_rate (Variable|float) – The initial learning rate. If the type is Variable, it is a tensor with shape [1], and its data type can be float32 or float64. It can also be set to a Python int.

  • decay_steps (int) – The decay step size. It determines the decay cycle.

  • decay_rate (float) – The decay rate.

  • staircase (bool, optional) – If set to True, decay the learning rate at discrete intervals. The default value is False.

  • begin (int, optional) – The begin step, i.e. the initial value of global_step in the formulas above. The default value is 0.

  • step (int, optional) – The step size used to calculate the new global_step in the formulas above. The default value is 1.

  • dtype (str, optional) – The data type used to create the learning rate variable. The data type can be set as ‘float32’ or ‘float64’. The default value is ‘float32’.

Returns

None.

Examples

import paddle.fluid as fluid

base_lr = 0.1
with fluid.dygraph.guard():
    # Pass the decay object as the optimizer's learning rate; with
    # staircase=True the rate is multiplied by e^(-0.5) once every
    # 10000 steps instead of decaying smoothly.
    sgd_optimizer = fluid.optimizer.SGD(
        learning_rate=fluid.dygraph.NaturalExpDecay(
            learning_rate=base_lr,
            decay_steps=10000,
            decay_rate=0.5,
            staircase=True))
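
In dygraph mode the decay object keeps its own counter: it is initialized to begin and advanced by step each time the optimizer recomputes the learning rate (typically once per minimize call), so no global_step variable needs to be created by hand. Depending on the fluid release, dygraph optimizers may also expect a parameter_list argument.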