layer_norm

paddle.nn.functional.layer_norm(x, normalized_shape, weight=None, bias=None, epsilon=1e-05, name=None)

Applies Layer Normalization over the input x. Using nn.LayerNorm is recommended. For more information, please refer to LayerNorm.
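
As a quick sketch of how the two APIs relate (assuming default initialization, where nn.LayerNorm creates a weight of ones and a bias of zeros, so both calls compute the same result):

>>> import paddle
>>> x = paddle.rand((2, 3, 4))
>>> # Functional form: no learnable parameters unless passed explicitly.
>>> y_func = paddle.nn.functional.layer_norm(x, x.shape[1:])
>>> # Layer form: holds learnable weight and bias.
>>> layer = paddle.nn.LayerNorm(x.shape[1:])
>>> y_layer = layer(x)
>>> print(bool(paddle.allclose(y_func, y_layer)))
True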

Parameters
  • x (Tensor) – Input Tensor. Its data type should be bfloat16, float16, float32, float64.

  • normalized_shape (int|list|tuple) – Input shape from an expected input of size \([*, \text{normalized\_shape}[0], \text{normalized\_shape}[1], \ldots, \text{normalized\_shape}[-1]]\). If it is a single integer, this function will normalize over the last dimension, which is expected to be of that specific size.

  • weight (Tensor, optional) – The weight (scale) tensor of layer_norm. Default: None.

  • bias (Tensor, optional) – The bias (shift) tensor of layer_norm. Default: None.

  • epsilon (float, optional) – The small value added to the variance to prevent division by zero. Default: 1e-05.

  • name (str, optional) – Name for the operation, default is None. For more information, please refer to Name.
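
For example, a minimal sketch of passing an explicit scale and shift when normalizing over the last dimension only (weight and bias here are plain 1-D tensors of that size, introduced for illustration):

>>> import paddle
>>> x = paddle.rand((4, 8))
>>> # normalized_shape given as an int: normalize over the last dimension.
>>> weight = paddle.ones([8])   # scale applied after normalization
>>> bias = paddle.zeros([8])    # shift applied after normalization
>>> out = paddle.nn.functional.layer_norm(x, 8, weight=weight, bias=bias)
>>> print(out.shape)
[4, 8]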

Returns

Tensor, the normalized output with the same shape and data type as x.

Examples

>>> import paddle
>>> paddle.seed(2023)
>>> x = paddle.rand((2, 2, 2, 3))
>>> layer_norm_out = paddle.nn.functional.layer_norm(x, x.shape[1:])
>>> print(layer_norm_out)
Tensor(shape=[2, 2, 2, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
[[[[ 0.87799639, -0.32706568, -1.23529339],
   [ 1.01540327, -0.66222906, -0.72354043]],
  [[ 1.24183702,  0.45458138, -0.33506915],
   [ 0.41468468,  1.26852870, -1.98983312]]],
 [[[ 0.02837803,  1.27684665, -0.90110683],
   [-0.94709367, -0.15110941, -1.16546965]],
  [[-0.82010198,  0.11218392, -0.86506516],
   [ 1.09489357,  0.19107464,  2.14656854]]]])
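
Continuing the example above, a quick sanity check (a sketch): within each sample the normalized values should have roughly zero mean and, since layer normalization divides by the biased standard deviation, roughly unit biased standard deviation:

>>> flat = layer_norm_out.reshape((2, -1))   # one row per sample
>>> print(bool(paddle.allclose(flat.mean(axis=1), paddle.zeros([2]), atol=1e-5)))
True
>>> print(bool(paddle.allclose(flat.std(axis=1, unbiased=False), paddle.ones([2]), atol=1e-3)))
True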