L2Decay

class paddle.regularizer.L2Decay(coeff=0.0)

Implements L2 Weight Decay Regularization, which helps to prevent the model from over-fitting.

It can be set in ParamAttr or in an optimizer (such as Momentum). When set in ParamAttr, it takes effect only for the trainable parameters of that layer. When set in an optimizer, it takes effect for all trainable parameters. When set in both, ParamAttr has higher priority than the optimizer: for a trainable parameter, if a regularizer is defined in its ParamAttr, the regularizer in the optimizer is ignored; otherwise, the regularizer in the optimizer is used.

In the implementation, the loss term added by L2 Weight Decay Regularization is as follows:

\[loss = 0.5 * coeff * reduce\_sum(square(x))\]
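
For example, with coeff = 0.01 and x = [1.0, 2.0, 3.0], the penalty is 0.5 * 0.01 * (1 + 4 + 9) = 0.07. A minimal sketch of this arithmetic (the tensor values and coefficient are illustrative, not part of the API):

>>> import paddle
>>> x = paddle.to_tensor([1.0, 2.0, 3.0])
>>> coeff = 0.01
>>> # 0.5 * coeff * sum(x^2) = 0.5 * 0.01 * 14, approximately 0.07
>>> penalty = 0.5 * coeff * paddle.sum(paddle.square(x))
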
Parameters

coeff (float, optional) – regularization coefficient. Default: 0.0.

Examples

>>> # Example 1: set the regularizer in the optimizer
>>> import paddle
>>> from paddle.regularizer import L2Decay
>>> linear = paddle.nn.Linear(10, 10)
>>> inp = paddle.rand(shape=[10, 10], dtype="float32")
>>> out = linear(inp)
>>> loss = paddle.mean(out)
>>> momentum = paddle.optimizer.Momentum(
...     learning_rate=0.1,
...     parameters=linear.parameters(),
...     weight_decay=L2Decay(0.0001))
>>> loss.backward()
>>> momentum.step()
>>> momentum.clear_grad()
>>> # Example 2: set the regularizer in the parameter's ParamAttr
>>> # Set L2 regularization on the weight of my_conv2d.
>>> # A global regularizer set in the optimizer does not take effect on my_conv2d in this case.
>>> from paddle.nn import Conv2D
>>> from paddle import ParamAttr
>>> from paddle.regularizer import L2Decay

>>> my_conv2d = Conv2D(
...     in_channels=10,
...     out_channels=10,
...     kernel_size=1,
...     stride=1,
...     padding=0,
...     weight_attr=ParamAttr(regularizer=L2Decay(coeff=0.01)),
...     bias_attr=False)
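
A brief follow-up sketch of the priority rule described above: because my_conv2d's weight carries its own L2Decay in its ParamAttr, a regularizer passed to the optimizer via weight_decay is ignored for that weight and applies only to parameters without their own regularizer. The optimizer choice (SGD), input shape, and coefficients below are illustrative assumptions, not part of the original example.

>>> import paddle
>>> # The optimizer-level L2Decay(coeff=0.0001) is ignored for my_conv2d's
>>> # weight, because its ParamAttr already defines L2Decay(coeff=0.01).
>>> sgd = paddle.optimizer.SGD(
...     learning_rate=0.1,
...     parameters=my_conv2d.parameters(),
...     weight_decay=L2Decay(coeff=0.0001))
>>> x = paddle.rand(shape=[4, 10, 8, 8], dtype="float32")
>>> loss = paddle.mean(my_conv2d(x))
>>> loss.backward()
>>> sgd.step()
>>> sgd.clear_grad()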