# normalize

paddle.nn.functional.normalize(x, p=2, axis=1, epsilon=1e-12, name=None)

Normalize `x` along dimension `axis` using the $L_p$ norm. This layer computes

$$y = \frac{x}{\max\left(\lVert x \rVert_p, \epsilon\right)}$$
$$\lVert x \rVert_p = \left(\sum_i \lvert x_i \rvert^p\right)^{1/p}$$

where $\sum_i \lvert x_i \rvert^p$ is computed along dimension `axis`.
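The formula can be sketched directly in NumPy. This is an illustrative reimplementation, not the Paddle source; the function name `lp_normalize` is hypothetical, and note that the denominator is clamped at epsilon via a max, not epsilon-added:

```python
import numpy as np

def lp_normalize(x, p=2, axis=1, epsilon=1e-12):
    # L_p norm along `axis`, kept as a size-1 dim so it broadcasts
    norm = np.sum(np.abs(x) ** p, axis=axis, keepdims=True) ** (1.0 / p)
    # Clamp the denominator at epsilon to avoid division by zero
    return x / np.maximum(norm, epsilon)

x = np.arange(6, dtype="float32").reshape(2, 3)
print(lp_normalize(x))
```

Each row of the result has unit $L_p$ norm (unless the row's norm was below epsilon).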

Parameters
• x (Tensor) – The input tensor, which can be an N-D tensor with data type float32 or float64.

• p (float|int, optional) – The exponent value in the norm formulation. Default: 2.

• axis (int, optional) – The axis along which to apply normalization. If axis < 0, the dimension to normalize is x.ndim + axis; -1 means the last dimension. Default: 1.

• epsilon (float, optional) – Small float used as a lower bound for the denominator to avoid division by zero. Default: 1e-12.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the output has the same shape and data type as x.

Examples

>>> import paddle
>>> import paddle.nn.functional as F

>>> x = paddle.arange(6, dtype="float32").reshape([2,3])
>>> y = F.normalize(x)
>>> print(y)
Tensor(shape=[2, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
       [[0.        , 0.44721359, 0.89442718],
        [0.42426404, 0.56568539, 0.70710671]])

>>> y = F.normalize(x, p=1.5)
>>> print(y)
Tensor(shape=[2, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
       [[0.        , 0.40862012, 0.81724024],
        [0.35684016, 0.47578689, 0.59473360]])

>>> y = F.normalize(x, axis=0)
>>> print(y)
Tensor(shape=[2, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
       [[0.        , 0.24253564, 0.37139067],
        [1.        , 0.97014254, 0.92847669]])
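The epsilon parameter matters when a slice of the input is entirely zero: its norm is zero, and a plain division would produce NaN. A minimal NumPy sketch (illustrative, not the Paddle implementation) of the clamping behavior:

```python
import numpy as np

z = np.zeros((2, 3), dtype="float32")
# L2 norm of each row is 0; dividing by it directly would yield NaN
norm = np.sqrt(np.sum(z ** 2, axis=1, keepdims=True))
y = z / np.maximum(norm, 1e-12)  # denominator clamped at epsilon
print(y)  # all zeros, no NaN
```

With the clamp, an all-zero input normalizes to all zeros rather than NaN.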