Dropout

class paddle.nn.Dropout(p=0.5, axis=None, mode='upscale_in_train', name=None) [source]

Dropout is a regularization technique that reduces overfitting by preventing neuron co-adaptation during training, as described in the paper: Improving neural networks by preventing co-adaptation of feature detectors. The dropout operator randomly sets the outputs of some units to zero and scales the remaining outputs according to the given dropout probability and mode.

See paddle.nn.functional.dropout for more details.

In dygraph mode, please use eval() to switch to evaluation mode, where dropout is disabled.

Parameters
  • p (float | int) – Probability of a unit's output being set to zero. Default: 0.5.

  • axis (int | list, optional) – The axis (or axes) along which the dropout mask is generated. Default: None.

  • mode (str, optional) –

    [‘upscale_in_train’(default) | ‘downscale_in_infer’]

    1. upscale_in_train (default): upscale the output at training time

      • train: out = input * mask / ( 1.0 - p )

      • inference: out = input

    2. downscale_in_infer: downscale the output at inference time

      • train: out = input * mask

      • inference: out = input * (1.0 - p)

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
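The scaling rules of the two modes above can be sketched in NumPy. This is an illustrative sketch, not Paddle's implementation; the array shapes, seed, and tolerance are assumptions for the demo. It shows that in both modes the expected training-time activation matches the inference-time activation.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed
p = 0.5
x = np.ones((1000, 100), dtype='float32')
# Bernoulli keep-mask: 1 with probability (1 - p), else 0
mask = (rng.random(x.shape) >= p).astype('float32')

# upscale_in_train: scale up at training time so inference is a no-op
train_upscale = x * mask / (1.0 - p)
infer_upscale = x

# downscale_in_infer: leave training output unscaled, shrink at inference
train_downscale = x * mask
infer_downscale = x * (1.0 - p)

# The training means approach the inference means as the sample grows.
print(train_upscale.mean(), infer_upscale.mean())
print(train_downscale.mean(), infer_downscale.mean())
```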

Shape:
  • input: N-D tensor.

  • output: N-D tensor, the same shape as input.

Examples

import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
m = paddle.nn.Dropout(p=0.5)

y_train = m(x)  # training phase: units are randomly zeroed and the rest upscaled
m.eval()        # switch the model to evaluation phase, where dropout is disabled
y_test = m(x)   # equals x under the default 'upscale_in_train' mode

print(x)
print(y_train)
print(y_test)
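The axis parameter can likewise be sketched in NumPy. This mirrors the broadcasting behavior described in paddle.nn.functional.dropout, but the mask logic, seed, and shapes here are illustrative assumptions, not Paddle's implementation. With axis=0 on a 2-D input, one mask value is generated per row and broadcast across the columns, so each row is kept or dropped as a whole.

```python
import numpy as np

rng = np.random.default_rng(1)  # illustrative seed
p = 0.5
x = np.ones((4, 3), dtype='float32')

# axis=0: the mask has shape [4, 1] and is broadcast across the
# remaining axis, so whole rows are kept or dropped together.
mask = (rng.random((x.shape[0], 1)) >= p).astype('float32')
y = x * mask / (1.0 - p)  # 'upscale_in_train' scaling

print(y)  # each row is either all zeros or all 2.0
```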
forward(input)

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters
  • *inputs (tuple) – unpacked tuple arguments

  • **kwargs (dict) – unpacked dict arguments

extra_repr()

Extra representation of this layer; you can override it to provide a custom representation for your own layer.