class paddle.fluid.initializer.XavierInitializer(uniform=True, fan_in=None, fan_out=None, seed=0) [source]

This class implements the Xavier weight initializer from the paper Understanding the difficulty of training deep feedforward neural networks by Xavier Glorot and Yoshua Bengio.

This initializer is designed to keep the scale of the gradients approximately the same in all layers. For a uniform distribution, values are drawn from the range [-x, x], where

\[x = \sqrt{\frac{6.0}{fan\_in + fan\_out}}\]

For a normal distribution, the mean is 0 and the standard deviation is

\[\sqrt{\frac{2.0}{fan\_in + fan\_out}}\]
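For example, for a layer with fan_in = 64 and fan_out = 10, the two formulas give the following bound and standard deviation (a plain-Python check of the math, independent of Paddle):

```python
import math

fan_in, fan_out = 64.0, 10.0

# Uniform case: values are drawn from [-x, x]
x = math.sqrt(6.0 / (fan_in + fan_out))        # approx 0.2848

# Normal case: mean 0, standard deviation sigma
sigma = math.sqrt(2.0 / (fan_in + fan_out))    # approx 0.1644
```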
  • uniform (bool, default True) – whether to use a uniform distribution. If False, a normal distribution is used.

  • fan_in (float, default None) – fan_in for Xavier initialization. If None, it is inferred from the variable.

  • fan_out (float, default None) – fan_out for Xavier initialization. If None, it is inferred from the variable.

  • seed (int, default 0) – random seed


It is recommended to set fan_in and fan_out to None for most cases.
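When fan_in and fan_out are left as None, they are inferred from the shape of the variable being initialized. For a 2-D weight matrix of shape [in_features, out_features], the inference amounts to the following (a simplified sketch with a hypothetical helper, not part of the Paddle API; Paddle's actual rule additionally handles convolution kernels by folding in the receptive-field size):

```python
def infer_fans(shape):
    # Simplified fan inference for a 2-D weight matrix [in, out].
    # Hypothetical helper for illustration only.
    fan_in, fan_out = shape[0], shape[1]
    return fan_in, fan_out

fan_in, fan_out = infer_fans([64, 10])
```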


import paddle.fluid as fluid
queries = fluid.data(name='x', shape=[None, 1], dtype='float32')
fc = fluid.layers.fc(
    input=queries, size=10,
    param_attr=fluid.initializer.Xavier(uniform=False))

forward(var, block=None)


Initialize the input tensor with Xavier initialization.

  • var (Tensor) – Tensor that needs to be initialized.

  • block (Block, optional) – The block in which initialization ops should be added. Used in static graph only, default None.


Returns: The initialization op.
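As a rough sketch of what the uniform branch computes (plain Python with a hypothetical helper, not Paddle's actual forward implementation), the tensor is filled with samples bounded by x:

```python
import math
import random

def xavier_uniform_fill(fan_in, fan_out, n, seed=0):
    # Sketch of the uniform Xavier rule: sample n values from [-x, x],
    # where x = sqrt(6 / (fan_in + fan_out)).
    # Hypothetical helper, not the Paddle operator itself.
    x = math.sqrt(6.0 / (fan_in + fan_out))
    rng = random.Random(seed)
    return [rng.uniform(-x, x) for _ in range(n)]

values = xavier_uniform_fill(64, 10, 640)
```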