AlphaDropout

class paddle.nn.AlphaDropout(p=0.5, name=None)

Alpha Dropout is a type of Dropout that maintains the self-normalizing property: for an input with zero mean and unit standard deviation, the output of Alpha Dropout keeps the same mean and standard deviation. It pairs well with the SELU activation function because, instead of setting activations to zero, it randomly sets them to SELU's negative saturation value (see the sketch below).

For more information, please refer to: Self-Normalizing Neural Networks

In dygraph mode, please use eval() to switch to evaluation mode, where dropout is disabled.
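
A rough sketch of the computation (plain NumPy, not Paddle's internal implementation): dropped units are replaced with SELU's negative saturation value alpha_p = -scale * alpha, and the result is rescaled with an affine transform a * x + b whose coefficients follow the formulas in the paper above, so that a standard-normal input keeps zero mean and unit standard deviation.

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> p = 0.5                            # drop probability
>>> alpha_p = -1.7580993408473766      # SELU negative saturation: -scale * alpha
>>> a = ((1 - p) * (1 + p * alpha_p ** 2)) ** -0.5
>>> b = -a * p * alpha_p
>>> x = rng.standard_normal(100000)    # roughly zero mean, unit std
>>> keep = rng.random(x.shape) > p     # True where the unit is kept
>>> y = a * np.where(keep, x, alpha_p) + b
>>> bool(abs(y.mean()) < 0.05), bool(abs(y.std() - 1) < 0.05)
(True, True)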

Parameters
  • p (float | int) – Probability of a unit being dropped; dropped units are set to the SELU negative saturation value rather than zero. Default: 0.5

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Shape:
  • input: N-D tensor.

  • output: N-D tensor, the same shape as input.

Examples

>>> import paddle
>>> paddle.seed(2023)

>>> x = paddle.to_tensor([[-1, 1], [-1, 1]], dtype="float32")
>>> m = paddle.nn.AlphaDropout(p=0.5)
>>> y_train = m(x)
>>> print(y_train)
Tensor(shape=[2, 2], dtype=float32, place=Place(cpu), stop_gradient=True,
[[-0.10721093,  1.66559887],
 [-0.77919382,  1.66559887]])

>>> m.eval()  # switch the model to test phase
>>> y_test = m(x)
>>> print(y_test)
Tensor(shape=[2, 2], dtype=float32, place=Place(cpu), stop_gradient=True,
[[-1.,  1.],
 [-1.,  1.]])

forward(input)

Defines the computation performed at every call; it runs when the layer instance itself is called (see the snippet after the parameter list) and should be overridden by all subclasses.

Parameters
  • *inputs (tuple) – unpacked tuple arguments

  • **kwargs (dict) – unpacked dict arguments
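
Calling the layer instance, as in the examples above, runs forward together with any registered hooks, so for ordinary use m(x) and m.forward(x) behave the same. A minimal illustrative check (in evaluation mode, so the dropout is a no-op and the outputs are deterministic):

>>> import paddle
>>> m = paddle.nn.AlphaDropout(p=0.5)
>>> m.eval()   # disable dropout so both calls are deterministic
>>> x = paddle.to_tensor([[-1.0, 1.0]])
>>> paddle.allclose(m(x), m.forward(x)).item()
True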

extra_repr()

Extra representation of this layer. Override it in your own layer to customize what is shown in the layer's printed representation.
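
As an illustration, a made-up custom layer can override extra_repr so that its configuration appears when the layer is printed (the Scale class below is hypothetical, not part of Paddle):

>>> import paddle
>>> class Scale(paddle.nn.Layer):
...     def __init__(self, factor=2.0):
...         super().__init__()
...         self.factor = factor
...     def forward(self, x):
...         return x * self.factor
...     def extra_repr(self):
...         # this string is embedded when the layer is printed
...         return f'factor={self.factor}'
>>> 'factor=2.0' in str(Scale())
True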