# lrn¶

```python
paddle.fluid.layers.lrn(input, n=5, k=1.0, alpha=0.0001, beta=0.75, name=None, data_format='NCHW')
```

This operator implements the Local Response Normalization layer. It performs a form of "lateral inhibition" by normalizing over local input regions. For more information, please refer to *ImageNet Classification with Deep Convolutional Neural Networks*.

The formula is as follows:

$Output(i, x, y) = Input(i, x, y) / \left(k + \alpha \sum\limits^{\min(C-1, i + n/2)}_{j = \max(0, i - n/2)}(Input(j, x, y))^2\right)^{\beta}$

In the above equation:

• $n$ : The number of channels to sum over.

• $k$ : The offset (to avoid division by zero).

• $\alpha$ : The scaling parameter.

• $\beta$ : The exponent parameter.
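The formula can be checked against a plain NumPy sketch. This is an illustrative reference implementation, not the actual Paddle kernel; the helper name `lrn_ref` is hypothetical:

```python
import numpy as np

def lrn_ref(x, n=5, k=1.0, alpha=1e-4, beta=0.75):
    """Reference LRN over the channel axis of an NCHW tensor (illustrative only)."""
    N, C, H, W = x.shape
    out = np.empty_like(x)
    half = n // 2
    for i in range(C):
        # Sum of squares over the local window of channels around channel i,
        # clipped to [0, C-1] as in the formula above.
        lo, hi = max(0, i - half), min(C - 1, i + half)
        sq_sum = np.sum(x[:, lo:hi + 1] ** 2, axis=1)
        out[:, i] = x[:, i] / (k + alpha * sq_sum) ** beta
    return out

x = np.random.rand(2, 3, 4, 4).astype("float32")
y = lrn_ref(x)
print(y.shape)  # (2, 3, 4, 4)
```

With `alpha=0` the denominator reduces to `k ** beta`, so setting `k=1.0, alpha=0.0` returns the input unchanged, which is a quick sanity check on the window logic.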

Parameters
• input (Variable) – Input feature, a 4-D Tensor with the shape [N, C, H, W] or [N, H, W, C], where N is the batch size, C is the number of input channels, H is the height, and W is the width. The data type is float32. The rank of this tensor must be 4, otherwise a ValueError will be raised.

• n (int, optional) – The number of channels to sum over. Default: 5

• k (float, optional) – A positive offset. Default: 1.0

• alpha (float, optional) – The scaling parameter, positive. Default: 1e-4

• beta (float, optional) – The exponent, positive. Default: 0.75

• name (str, optional) – The default value is None. Normally there is no need for the user to set this property. For more information, please refer to Name.

• data_format (str, optional) – Specifies the data format of the input; the output uses the same format. An optional string from: "NCHW", "NHWC". The default is "NCHW". When it is "NCHW", the data is stored in the order [batch_size, input_channels, input_height, input_width]; when it is "NHWC", in the order [batch_size, input_height, input_width, input_channels].
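The two layouts differ only in where the channel axis lives. As a NumPy illustration (not Paddle code), an NCHW array converts to NHWC by moving the channel axis last:

```python
import numpy as np

# A 4-D feature map in NCHW layout: batch=1, channels=3, height=4, width=4
x_nchw = np.random.rand(1, 3, 4, 4).astype("float32")

# Reorder axes to NHWC: the channel axis moves to the last position
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))
print(x_nhwc.shape)  # (1, 4, 4, 3)
```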

Returns

A tensor Variable storing the transformation result, with the same shape and data type as the input.

Return type

Variable

Examples:

```python
import paddle.fluid as fluid

data = fluid.data(
    name="data", shape=[None, 3, 112, 112], dtype="float32")
lrn = fluid.layers.lrn(input=data)
print(lrn.shape)  # [-1, 3, 112, 112]
print(lrn.dtype)  # float32
```