class paddle.fluid.dygraph.learning_rate_scheduler.LambdaDecay(learning_rate, lr_lambda)
Api_attr: imperative

Sets the learning rate of `optimizer` to the initial lr times a multiplicative factor computed by the function `lr_lambda` . `lr_lambda` is a function that takes the current `epoch` as input and returns the multiplicative factor.

The algorithm can be described as the code below.

```
learning_rate = 0.5        # init learning_rate
lr_lambda = lambda epoch: 0.95 ** epoch

learning_rate = 0.5        # epoch 0
learning_rate = 0.475      # epoch 1
learning_rate = 0.45125    # epoch 2
```
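The decay rule can be checked with plain Python, independent of Paddle; a minimal standalone sketch that reproduces the three rates above:

```
base_lr = 0.5
lr_lambda = lambda epoch: 0.95 ** epoch

# lr(epoch) = base_lr * lr_lambda(epoch)
for epoch in range(3):
    print(epoch, round(base_lr * lr_lambda(epoch), 5))
# 0 0.5
# 1 0.475
# 2 0.45125
```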
Parameters
• learning_rate (float|int) – The initial learning rate. It can be set to a Python float or int.

• lr_lambda (function) – A function that computes a multiplicative factor from an integer `epoch` ; the initial learning rate is then multiplied by this factor.

Returns

None.

Examples

```
import numpy as np
import paddle
import paddle.fluid as fluid

with fluid.dygraph.guard():
    x = np.random.uniform(-1, 1, [10, 10]).astype("float32")
    linear = paddle.nn.Linear(10, 10)
    input = fluid.dygraph.to_variable(x)
    scheduler = fluid.dygraph.LambdaDecay(0.5, lr_lambda=lambda x: 0.95 ** x)
    adam = fluid.optimizer.Adam(learning_rate=scheduler, parameter_list=linear.parameters())

    for epoch in range(6):
        for batch_id in range(5):
            out = linear(input)
            loss = fluid.layers.reduce_mean(out)
            loss.backward()
            adam.minimize(loss)
            linear.clear_gradients()
        scheduler.epoch()  # advance the scheduler once per epoch

        print("epoch:{}, current lr is {}".format(epoch, adam.current_step_lr()))
        # epoch:0, current lr is 0.5
        # epoch:1, current lr is 0.475
        # epoch:2, current lr is 0.45125
```
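Note that `scheduler.epoch()` is called once per epoch, after the inner batch loop, so the learning rate only changes between epochs, as the printed output shows.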
create_lr_var ( lr )

Convert `lr` from a Python float to a Variable.

Parameters

lr – learning rate

Returns

learning rate variable
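A brief illustrative call; in practice this helper is used internally by the scheduler rather than by user code (the variable name `lr_var` is hypothetical):

```
lr_var = scheduler.create_lr_var(0.5)  # a Variable holding the value 0.5
```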

epoch ( epoch=None )

Compute the learning rate and update it when invoked.
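A short sketch of the two calling conventions; the behavior of the explicit argument (jumping to a given epoch rather than incrementing the internal counter) is inferred from the signature and may vary by Paddle version:

```
scheduler.epoch()    # no argument: advance the internal epoch counter by one
scheduler.epoch(10)  # explicit argument: jump directly to epoch 10 (assumed behavior)
```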

set_dict ( state_dict )

Loads the scheduler's state.

set_state_dict ( state_dict )

Loads the scheduler's state.

state_dict ( )

Returns the state of the scheduler as a `dict`.

It is a subset of `self.__dict__` .
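A hedged sketch of checkpointing the scheduler across runs using the two methods above; the re-created scheduler must be constructed with the same arguments:

```
# Save the scheduler's internal state (e.g. its epoch counter).
state = scheduler.state_dict()

# ... later, rebuild an identical scheduler and restore the saved state.
new_scheduler = fluid.dygraph.LambdaDecay(0.5, lr_lambda=lambda e: 0.95 ** e)
new_scheduler.set_state_dict(state)
```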