# l2_normalize

paddle.fluid.layers.nn.l2_normalize(x, axis, epsilon=1e-12, name=None) [source]

This op normalizes x along dimension axis using the L2 norm. For a 1-D tensor (axis is fixed to 0), this layer computes

$y = \frac{x}{\sqrt{\sum{x^2} + \epsilon}}$

For x with more dimensions, this layer independently normalizes each 1-D slice along dimension axis.
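The semantics can be sketched in plain NumPy (the helper `l2_normalize_np` below is an illustrative reference, not the Paddle op itself): square the elements, sum along the chosen axis, take the square root with epsilon added, and divide.

```python
import numpy as np

def l2_normalize_np(x, axis, epsilon=1e-12):
    # Reference L2 normalization: divide each 1-D slice along `axis`
    # by its L2 norm (epsilon guards against division by zero).
    norm = np.sqrt(np.sum(np.square(x), axis=axis, keepdims=True) + epsilon)
    return x / norm

x = np.array([[3.0, 4.0],
              [6.0, 8.0]], dtype=np.float32)
y = l2_normalize_np(x, axis=1)
print(y)
# [[0.6 0.8]
#  [0.6 0.8]]
```

After normalization, each row (the 1-D slices along axis=1) has unit L2 norm.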

Parameters
• x (Variable|list) – The input tensor could be N-D tensor, and the input data type could be float32 or float64.

• axis (int) – The axis on which to apply normalization. If axis < 0, the dimension to normalize is rank(x) + axis; -1 means the last dimension.

• epsilon (float) – A small value added to avoid division by zero. The default value is 1e-12.

• name (str, optional) – The default value is None. Normally there is no need for the user to set this property. For more information, please refer to Name.
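To illustrate the negative-axis convention, the sketch below (again using an illustrative NumPy helper `l2_normalize_np`, not the Paddle op) checks that axis=-1 on a rank-3 array behaves the same as axis=rank(x) + axis = 2:

```python
import numpy as np

def l2_normalize_np(x, axis, epsilon=1e-12):
    # Reference L2 normalization along `axis`.
    norm = np.sqrt(np.sum(np.square(x), axis=axis, keepdims=True) + epsilon)
    return x / norm

x = np.random.rand(2, 3, 4).astype("float32")
# For rank-3 input, axis=-1 resolves to rank(x) + (-1) = 2, the last dimension.
same = np.allclose(l2_normalize_np(x, axis=-1), l2_normalize_np(x, axis=2))
print(same)
```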

Returns

The output has the same shape and data type as x.

Return type

Variable

Examples

```python
# declarative mode
import numpy as np
import paddle.fluid as fluid

input = fluid.data(name="input", shape=[2, 3])
output = fluid.layers.l2_normalize(x=input, axis=0)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

input_data = np.random.rand(2, 3).astype("float32")
print(input_data)
# [[0.5171216  0.12704141 0.56018186]
#  [0.93251234 0.5382788  0.81709313]]

output_data = exe.run(fluid.default_main_program(),
                      feed={"input": input_data},
                      fetch_list=[output],
                      return_numpy=True)
print(output_data)
# [array([[0.48496857, 0.22970329, 0.56545246],
#         [0.8745316 , 0.9732607 , 0.82478094]], dtype=float32)]

# imperative mode
import paddle.fluid.dygraph as dg

with dg.guard(place) as g:
    input = dg.to_variable(input_data)
    output = fluid.layers.l2_normalize(x=input, axis=-1)
    print(output.numpy())
    # [[0.66907585 0.16437206 0.7247892 ]
    #  [0.6899054  0.3982376  0.6045142 ]]
```