RReLU
class paddle.nn.RReLU(lower=0.125, upper=0.3333333333333333, name=None)
RReLU activation layer.
Applies the randomized leaky rectified linear unit function to improve generalization performance, as described in the paper: Empirical Evaluation of Rectified Activations in Convolutional Network.
During training, the negative slope is randomly sampled for activation values, as described below:
$$\text{RReLU}(x) = \begin{cases} x, & \text{if } x \ge 0 \\ a \cdot x, & \text{otherwise} \end{cases}$$
where x is the input tensor and a is randomly sampled from a uniform distribution in the range (lower, upper).
In the test phase, the negative slope will take the average value of lower and upper:
$$\text{RReLU}(x) = \begin{cases} x, & \text{if } x \ge 0 \\ (\text{lower} + \text{upper}) \cdot 0.5 \cdot x, & \text{otherwise} \end{cases}$$
where x is the input tensor, and lower and upper are the bounds of the uniform distribution.
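For instance, the fixed test-phase slope can be observed by switching the layer to evaluation mode. The snippet below is a minimal sketch; it assumes the usual train()/eval() switching of paddle.nn.Layer, under which every negative input is scaled by (lower + upper) * 0.5:
import paddle

x = paddle.to_tensor([-1.0, 2.0, -3.0], dtype='float32')
rrelu = paddle.nn.RReLU(lower=0.1, upper=0.3)
rrelu.eval()  # test phase: assumes the slope becomes (0.1 + 0.3) * 0.5 = 0.2
out = rrelu(x)
print(out)
# Expected (under that assumption): [-0.2, 2.0, -0.6]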
- Parameters
lower (float, optional) – The lower bound of uniform distribution. Default: 1.0/8.0.
upper (float, optional) – The upper bound of uniform distribution. Default: 1.0/3.0.
name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
- Shape:
input: Tensor with any shape. Default dtype is float32.
output: Tensor with the same shape as input.
Examples
import paddle

input_tensor = paddle.to_tensor([[[[-2.0,  3.0, -4.0,  5.0],
                                   [ 3.0, -4.0,  5.0, -6.0],
                                   [-7.0, -8.0,  8.0,  9.0]],
                                  [[ 1.0, -2.0, -3.0,  4.0],
                                   [-5.0,  6.0,  7.0, -8.0],
                                   [ 6.0,  7.0,  8.0,  9.0]]]], dtype='float32')

rrelu_layer = paddle.nn.RReLU(0.1, 0.3)
out = rrelu_layer(input_tensor)
print(out)
# [[[[-0.20000899  3.         -0.88108218  5.        ]
#    [ 3.         -0.55175185  5.         -1.07761011]
#    [-1.06806871 -1.98962009  8.          9.        ]]
#   [[ 1.         -0.52382672 -0.65515128  4.        ]
#    [-1.37663394  6.          7.         -2.34657836]
#    [ 6.          7.          8.          9.        ]]]]
forward(x)
Defines the computation performed at every call. Should be overridden by all subclasses. (See the usage sketch after the parameter list below.)
- Parameters
*inputs (tuple) – Unpacked tuple arguments.
**kwargs (dict) – Unpacked dict arguments.
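As a usage note (a sketch rather than part of the RReLU-specific API), forward is not normally called directly; calling the layer instance dispatches to it:
import paddle

rrelu = paddle.nn.RReLU()          # default bounds: lower=0.125, upper=1/3
x = paddle.to_tensor([[-1.0, 2.0], [3.0, -4.0]], dtype='float32')
out = rrelu(x)                     # invokes forward(x) via the layer's __call__
print(out.shape)                   # [2, 2] -- same shape as the input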
extra_repr()
Extra representation of this layer; you can provide a custom implementation for your own layer.
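For illustration only, a custom layer can override extra_repr so that its configuration appears when the layer is printed. MyScale below is a hypothetical example, assuming the printed representation of a paddle.nn.Layer includes the string returned by extra_repr:
import paddle

class MyScale(paddle.nn.Layer):  # hypothetical layer, not part of the RReLU API
    def __init__(self, factor=2.0):
        super().__init__()
        self.factor = factor

    def forward(self, x):
        return x * self.factor

    def extra_repr(self):
        # The string returned here is assumed to show up in print(layer) output.
        return 'factor={}'.format(self.factor)

print(MyScale(3.0))  # e.g. MyScale(factor=3.0)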