batch_norm

paddle.nn.functional.batch_norm(x, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.9, epsilon=1e-05, data_format='NCHW', use_global_stats=None, name=None) [source]

Applies Batch Normalization as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
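
Concretely, the transformation is the standard batch-normalization formula (restated here for reference, not quoted from this page):

    y = \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} \cdot \gamma + \beta

where \gamma is weight, \beta is bias, and \mu, \sigma^2 are the per-channel mini-batch statistics in training mode, or running_mean and running_var in inference mode.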

nn.functional.batch_norm is the functional form used internally by nn.BatchNorm1D, nn.BatchNorm2D and nn.BatchNorm3D. Please prefer those layer APIs for BatchNorm.
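
A minimal sketch of the equivalent layer interface (the shapes here are arbitrary, chosen only for illustration):

>>> import paddle
>>> bn = paddle.nn.BatchNorm2D(num_features=1)  # owns weight/bias and the running statistics
>>> x = paddle.rand([2, 1, 2, 3])               # NCHW input with C == num_features
>>> y = bn(x)                                   # output has the same shape as x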

Parameters
  • x (Tensor) – input value. Its data type should be float32 or float64.

  • running_mean (Tensor) – running mean; updated in place when training is True.

  • running_var (Tensor) – running variance; updated in place when training is True.

  • weight (Tensor, optional) – The weight (scale, often written gamma) tensor of batch_norm. Default: None.

  • bias (Tensor, optional) – The bias (shift, often written beta) tensor of batch_norm. Default: None.

  • epsilon (float, optional) – The small value added to the variance to prevent division by zero. Default: 1e-5.

  • training (bool, optional) – True means training mode: statistics are computed from the current mini-batch and the running mean and variance are updated (see the sketch after this list). False means inference mode: the running statistics accumulated during training are used. Default: False.

  • momentum (float, optional) – The value used to update the running statistics, following running = momentum * running + (1 - momentum) * batch statistic. Default: 0.9.

  • data_format (str, optional) – Specify the input data format; one of “NC”, “NCL”, “NCHW”, “NCDHW”, “NLC”, “NHWC” or “NDHWC”, where N is the batch size, C is the number of channels, D is the depth of the feature, H is the height of the feature map, W is the width of the feature map, and L is the length of the feature map. Default: “NCHW”.

  • use_global_stats (bool|None, optional) – Whether to use global mean and variance. If set to False, use the statistics of the current mini-batch; if set to True, use the global (running) statistics; if set to None, use global statistics in the test phase and mini-batch statistics in the training phase. Default: None.

  • name (str, optional) – Name for the BatchNorm, default is None. For more information, please refer to Name.
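
A hedged sketch of how training=True updates the running statistics in place, assuming the momentum convention stated above (running = momentum * running + (1 - momentum) * batch statistic):

>>> import paddle
>>> x = paddle.arange(12, dtype="float32").reshape([2, 1, 2, 3])
>>> running_mean = paddle.zeros([1])
>>> running_var = paddle.ones([1])
>>> weight = paddle.ones([1])
>>> bias = paddle.zeros([1])
>>> _ = paddle.nn.functional.batch_norm(x, running_mean, running_var, weight, bias,
...                                     training=True, momentum=0.9)
>>> # running_mean is updated in place: 0.9 * 0.0 + 0.1 * mean(x) = 0.1 * 5.5 = 0.55
>>> # running_var is updated the same way; whether the batch variance used is biased
>>> # or unbiased is a framework convention, so its exact value is not asserted here.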

Returns

A Tensor with the same shape and data type as x, holding the batch-normalized output.

Examples

>>> import paddle

>>> x = paddle.arange(12, dtype="float32").reshape([2, 1, 2, 3])
>>> print(x)
Tensor(shape=[2, 1, 2, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
[[[[0. , 1. , 2. ],
   [3. , 4. , 5. ]]],
 [[[6. , 7. , 8. ],
   [9. , 10., 11.]]]])
>>> running_mean = paddle.to_tensor([0], dtype="float32")
>>> running_variance = paddle.to_tensor([1], dtype="float32")
>>> weight = paddle.to_tensor([2], dtype="float32")
>>> bias = paddle.to_tensor([1], dtype="float32")

>>> batch_norm_out = paddle.nn.functional.batch_norm(x, running_mean,
...                                             running_variance, weight, bias)
>>> print(batch_norm_out)
Tensor(shape=[2, 1, 2, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
[[[[1.         , 2.99998999 , 4.99997997 ],
   [6.99996948 , 8.99995995 , 10.99994946]]],
 [[[12.99993896, 14.99992943, 16.99991989],
   [18.99990845, 20.99989891, 22.99988937]]]])
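
As a sanity check, the inference-mode result can be reproduced by hand; a sketch assuming the formula above (with C == 1 a flat broadcast of the statistics suffices; in general they would be reshaped to [1, C, 1, 1]):

>>> manual = (x - running_mean) / (running_variance + 1e-5).sqrt() * weight + bias
>>> # paddle.allclose(batch_norm_out, manual) is expected to evaluate to True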