Dropout(p=0.5, seed=None, dropout_implementation='downgrade_in_infer', is_test=False)
This interface is used to construct a callable object of the Dropout class. For more details, refer to the code examples.
Drops or keeps each element of the input independently. Dropout is a regularization technique that reduces overfitting by preventing the co-adaptation of neurons during training. The dropout operator randomly sets the outputs of some units to zero (according to the given dropout probability), while the others remain unchanged.
At inference time, the dropout layer can be removed for efficiency.
p (float, optional) – Probability of setting units to zero. Default: 0.5
seed (int, optional) – A Python integer used to create the random seed. If this parameter is set to None, a random seed is used. NOTE: if an integer seed is given, the same output units will be dropped on every call. DO NOT use a fixed seed in training. Default: None.
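The warning about fixed seeds can be illustrated with a small NumPy sketch (NumPy stands in for the framework's random number generator here; the exact mask values differ in Paddle):

```python
import numpy as np

def make_mask(shape, p, seed=None):
    # With a fixed seed, the same units are dropped on every call,
    # which defeats the purpose of dropout during training.
    rng = np.random.RandomState(seed)
    return (rng.uniform(size=shape) >= p).astype('float32')

m1 = make_mask((4, 4), p=0.5, seed=123)
m2 = make_mask((4, 4), p=0.5, seed=123)
assert (m1 == m2).all()  # identical masks every call when the seed is fixed
```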
dropout_implementation (string, optional) – The way the dropped-out outputs are scaled. Two choices:
downgrade_in_infer (default): downgrade the outcome at inference time
train: out = input * mask
inference: out = input * (1.0 - p)
(mask is a tensor with the same shape as the input; its values are 0 or 1, and the ratio of zeros is p)
upscale_in_train: upscale the outcome at training time
train: out = input * mask / (1.0 - p)
inference: out = input
(mask is a tensor with the same shape as the input; its values are 0 or 1, and the ratio of zeros is p)
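The two scaling schemes above can be sketched in NumPy (a simplified illustration of the formulas, not Paddle's actual implementation). In both schemes the expected value of the training-time output matches the inference-time output:

```python
import numpy as np

np.random.seed(0)  # fixed seed so the sketch is reproducible
p = 0.5
x = np.ones((1000,), dtype='float32')
# mask: 0/1 tensor with the same shape as x; the ratio of zeros is p
mask = (np.random.uniform(size=x.shape) >= p).astype('float32')

# downgrade_in_infer: scale at inference
train_down = x * mask
infer_down = x * (1.0 - p)

# upscale_in_train: scale at training time
train_up = x * mask / (1.0 - p)
infer_up = x

# Expected training output matches the inference output in both schemes.
assert np.isclose(train_down.mean(), infer_down.mean(), atol=0.1)
assert np.isclose(train_up.mean(), infer_up.mean(), atol=0.1)
```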
is_test (bool, optional) – A flag indicating whether it is in the test phase. This flag only takes effect in static graph mode. In dygraph mode, please use eval(). Default: False.
import numpy as np
import paddle.fluid as fluid
from paddle.fluid.dygraph.base import to_variable

x = np.random.random(size=(3, 10, 3, 7)).astype('float32')
with fluid.dygraph.guard():
    x = to_variable(x)
    m = fluid.dygraph.Dropout(p=0.5)
    dropped_train = m(x)
    # switch to eval mode
    m.eval()
    dropped_eval = m(x)
Defines the computation performed at every call. Should be overridden by all subclasses.
*inputs (tuple) – unpacked tuple arguments
**kwargs (dict) – unpacked dict arguments
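The call-dispatches-to-forward pattern described above can be sketched in plain Python (an illustrative stand-in, not Paddle's actual Layer class):

```python
class Layer:
    # Calling the layer object forwards its arguments to forward().
    def __call__(self, *inputs, **kwargs):
        return self.forward(*inputs, **kwargs)

    def forward(self, *inputs, **kwargs):
        # Subclasses must override this to define the computation.
        raise NotImplementedError

class Scale(Layer):
    # Hypothetical subclass for illustration: multiplies its input by a factor.
    def __init__(self, factor):
        self.factor = factor

    def forward(self, x):
        return x * self.factor

assert Scale(3)(7) == 21  # calling the layer invokes forward()
```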