XavierInitializer

class paddle.fluid.initializer.XavierInitializer(uniform=True, fan_in=None, fan_out=None, seed=0)[source]

This class implements the Xavier weight initializer from the paper Understanding the difficulty of training deep feedforward neural networks by Xavier Glorot and Yoshua Bengio.

This initializer is designed to keep the scale of the gradients approximately the same across all layers. In the case of a uniform distribution, the range is [-x, x], where

\[x = \sqrt{\frac{6.0}{fan\_in + fan\_out}}\]

In the case of a normal distribution, the mean is 0 and the standard deviation is

\[\sqrt{\frac{2.0}{fan\_in + fan\_out}}\]
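As a quick illustration (the layer shape below is made up, not from the source), the two scale formulas can be checked directly with Python's math module:

```python
import math

# Hypothetical layer shape: 100 inputs, 10 outputs.
fan_in, fan_out = 100.0, 10.0

# Uniform case: weights are drawn from the range [-x, x].
x = math.sqrt(6.0 / (fan_in + fan_out))

# Normal case: weights are drawn from N(0, std^2).
std = math.sqrt(2.0 / (fan_in + fan_out))
```

Note that a uniform distribution on [-x, x] has variance x²/3 = 2 / (fan_in + fan_out), so both settings target the same weight variance.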
Parameters
  • uniform (bool, default True) – whether to use a uniform distribution; if False, a normal distribution is used

  • fan_in (float, default None) – fan_in for Xavier initialization. If None, it is inferred from the variable.

  • fan_out (float, default None) – fan_out for Xavier initialization. If None, it is inferred from the variable.

  • seed (int, default 0) – random seed

Note

In most cases it is recommended to leave fan_in and fan_out as None, so that they are inferred from the variable's shape.

Examples

import paddle.fluid as fluid

# Input placeholder; the batch dimension is left unspecified.
queries = fluid.data(name='x', shape=[None, 1], dtype='float32')

# Initialize the fc layer's weights with Xavier initialization
# using the normal distribution (uniform=False).
fc = fluid.layers.fc(
    input=queries, size=10,
    param_attr=fluid.initializer.Xavier(uniform=False))
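The variance-preserving property can also be checked empirically without Paddle. The sketch below (standard library only; the layer shape and sample count are arbitrary assumptions) draws samples from the uniform Xavier range and compares the measured variance with the target 2 / (fan_in + fan_out):

```python
import math
import random
import statistics

random.seed(0)  # fixed seed for reproducibility

fan_in, fan_out = 100, 10  # hypothetical layer shape
target_var = 2.0 / (fan_in + fan_out)

# Uniform Xavier: sample from [-x, x] and measure the variance.
x = math.sqrt(6.0 / (fan_in + fan_out))
samples = [random.uniform(-x, x) for _ in range(200000)]
empirical_var = statistics.pvariance(samples)

print(round(target_var, 4))  # → 0.0182
print(round(empirical_var, 4))  # close to the target
```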