paddle.fluid.layers.nn.dropout(x, dropout_prob, is_test=None, seed=None, name=None, dropout_implementation='downgrade_in_infer') [source]

Computes dropout.

Drop or keep each element of x independently. Dropout is a regularization technique for reducing overfitting by preventing neuron co-adaptation during training. The dropout operator randomly sets the outputs of some units to zero (according to the given dropout probability), while the others remain unchanged.

At inference time, the dropout op can be removed from the program to make the program more efficient.

Parameters

  • x (Variable) – The input tensor variable. The data type is float16, float32, or float64.

  • dropout_prob (float) – Probability of setting units to zero.

  • is_test (bool) – A flag indicating whether this is the test phase. Default None: in dynamic graph mode, the global tracer mode is used; in static graph mode, it means False.

  • seed (int) – A Python integer used to create random seeds. If this parameter is set to None, a random seed is used. NOTE: if an integer seed is given, the same output units will always be dropped; DO NOT use a fixed seed in training. Default: None.

  • name (str|None) – A name for this layer (optional). If set to None, the layer will be named automatically.

  • dropout_implementation (str) – Either 'downgrade_in_infer' (the default) or 'upscale_in_train':


    1. downgrade_in_infer(default), downgrade the outcome at inference

      • train: out = input * mask

      • inference: out = input * (1.0 - dropout_prob)

      (mask is a tensor with the same shape as input; its values are 0 or 1, and the ratio of 0s is dropout_prob)

    2. upscale_in_train, upscale the outcome at training time

      • train: out = input * mask / ( 1.0 - dropout_prob )

      • inference: out = input

      (mask is a tensor with the same shape as input; its values are 0 or 1, and the ratio of 0s is dropout_prob)
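The formulas for the two modes above can be checked outside Paddle with a small reference sketch (plain NumPy, not Paddle's implementation; `dropout_reference` is a hypothetical helper name used here for illustration):

```python
import numpy as np

def dropout_reference(x, dropout_prob, training, implementation, rng):
    # Reference sketch of the two dropout_implementation modes (NumPy, not Paddle).
    if implementation == "downgrade_in_infer":
        if training:
            mask = (rng.uniform(size=x.shape) >= dropout_prob).astype(x.dtype)
            return x * mask                          # train: out = input * mask
        return x * (1.0 - dropout_prob)              # inference: downgrade the outcome
    elif implementation == "upscale_in_train":
        if training:
            mask = (rng.uniform(size=x.shape) >= dropout_prob).astype(x.dtype)
            return x * mask / (1.0 - dropout_prob)   # train: upscale the kept units
        return x                                     # inference: identity
    raise ValueError("unknown dropout_implementation: %s" % implementation)

rng = np.random.default_rng(0)
x = np.ones((4, 4), dtype="float32")
out_infer = dropout_reference(x, 0.5, False, "downgrade_in_infer", rng)
```

In both modes the expected value of the training-time output equals the inference-time output, which is what keeps training and inference consistent.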


Returns: A Variable holding a Tensor with the dropout result, which has the same shape and data type as x.


import paddle
import paddle.fluid as fluid

# Drop each element of the input with probability 0.5 during training.
x = fluid.data(name="data", shape=[None, 32, 32], dtype="float32")
dropped = fluid.layers.dropout(x, dropout_prob=0.5)
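The warning about fixed seeds can be illustrated outside Paddle: with a fixed seed, the same mask is drawn every time, so the same units are always dropped (a NumPy sketch; `make_mask` is a hypothetical helper, not a Paddle API):

```python
import numpy as np

def make_mask(shape, dropout_prob, seed):
    rng = np.random.default_rng(seed)  # fixed seed -> deterministic random stream
    return (rng.uniform(size=shape) >= dropout_prob).astype("float32")

m1 = make_mask((3, 3), 0.5, seed=7)
m2 = make_mask((3, 3), 0.5, seed=7)
# m1 and m2 are identical, so the same units would be dropped on every step --
# this is why a fixed seed should not be used in training.
```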