layer_norm
- paddle.nn.functional.layer_norm(x, normalized_shape, weight=None, bias=None, epsilon=1e-05, name=None)
See paddle.nn.LayerNorm for more details.
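For reference, layer normalization computes the following (a standard formulation of the operation; see paddle.nn.LayerNorm for the authoritative description):

\[ y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \ast weight + bias \]

where the mean \(\mathrm{E}[x]\) and variance \(\mathrm{Var}[x]\) are computed over the trailing dimensions given by normalized_shape, and weight and bias are the optional affine parameters listed below.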
- Parameters
- x (Tensor) – Input Tensor. Its data type should be float32 or float64.
- normalized_shape (int|list|tuple) – Input shape from an expected input of size \([*, normalized_shape[0], normalized_shape[1], ..., normalized_shape[-1]]\). If it is a single integer, this function will normalize over the last dimension, which is expected to be of that specific size; a sketch of this case follows the example below.
- weight (Tensor, optional) – The weight tensor of layer_norm. Default: None.
- bias (Tensor, optional) – The bias tensor of layer_norm. Default: None.
- epsilon (float, optional) – The small value added to the variance to prevent division by zero. Default: 1e-05.
- name (str, optional) – Name for the operation. Default: None. For more information, please refer to Name.
 
- Returns
- 
           Tensor, the result of layer normalization, with the same shape and data type as x.
Examples

    import paddle

    x = paddle.rand((2, 2, 2, 3))
    layer_norm_out = paddle.nn.functional.layer_norm(x, x.shape[1:])
    print(layer_norm_out)
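As an additional illustrative sketch (the tensor shape and the explicit weight and bias values here are arbitrary choices for demonstration), normalized_shape can also be given as a single integer, in which case only the last dimension is normalized and weight and bias should match that dimension's size:

    import paddle

    x = paddle.rand((2, 3, 4))
    # Normalize only the last dimension (size 4); weight and bias are
    # illustrative affine parameters with the same shape as that dimension.
    weight = paddle.ones([4])
    bias = paddle.zeros([4])
    out = paddle.nn.functional.layer_norm(x, 4, weight=weight, bias=bias, epsilon=1e-05)
    print(out.shape)  # [2, 3, 4]: the output keeps the input shape

With a weight of ones and a bias of zeros, this should be equivalent to leaving weight and bias at their default of None.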
