rrelu

paddle.nn.functional.rrelu(x, lower=0.125, upper=0.3333333333333333, training=True, name=None)

rrelu activation.

Applies the randomized leaky rectified linear unit function to improve generalization performance, as described in the paper: Empirical Evaluation of Rectified Activations in Convolutional Network

During training, a negative slope is randomly sampled for each activation value, as described below:

\[rrelu(x)= \begin{cases} x, & \text{if } x \geq 0 \\ a \cdot x, & \text{otherwise} \end{cases}\]

where \(x\) is the input tensor and \(a\) is randomly sampled from a uniform distribution on the interval (\(lower\), \(upper\)).
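
A minimal sketch of this training-time rule, assuming only the formula above (the helper name rrelu_train_sketch is hypothetical; the built-in F.rrelu performs this sampling internally):

>>> import paddle
>>> def rrelu_train_sketch(x, lower=0.125, upper=1.0 / 3.0):
...     # Hypothetical helper: sample a slope `a` uniformly from (lower, upper)
...     # for every element, keep non-negative values, scale negative ones by `a`.
...     a = paddle.uniform(x.shape, dtype=x.dtype, min=lower, max=upper)
...     return paddle.where(x >= 0, x, a * x)
>>> out = rrelu_train_sketch(paddle.to_tensor([-2.0, 0.5, -3.0]))
>>> print(out.shape)
[3]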

In the test phase, the negative slope will take the average value of \(lower\) and \(upper\):

\[rrelu(x)= \begin{cases} x, & \text{if } x \geq 0 \\ \frac{lower + upper}{2} \cdot x, & \text{otherwise} \end{cases}\]

where \(x\) is the input tensor, and \(lower\) and \(upper\) are the bounds of the uniform distribution.
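
Because the test-phase slope is deterministic, rrelu with training=False coincides with leaky_relu using the averaged slope. A short check of this equivalence, following directly from the formula above:

>>> import paddle
>>> import paddle.nn.functional as F
>>> x = paddle.to_tensor([-2.0, -0.5, 0.0, 3.0])
>>> lower, upper = 0.1, 0.3
>>> out_eval = F.rrelu(x, lower, upper, training=False)
>>> # Reference: leaky_relu with negative_slope = (lower + upper) / 2
>>> out_ref = F.leaky_relu(x, negative_slope=(lower + upper) / 2)
>>> print(paddle.allclose(out_eval, out_ref).item())
True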

Parameters
  • x (Tensor) – The input Tensor with data type float16, float32 or float64.

  • lower (float, optional) – The lower bound of uniform distribution. Default: 0.125.

  • upper (float, optional) – The upper bound of uniform distribution. Default: 0.3333333333333333.

  • training (bool, optional) – Whether it is in training mode. If True, each negative slope is randomly sampled; otherwise the average of lower and upper is used. Default: True.

  • name (str, optional) – For details, please refer to Name. Generally, no setting is required. Default: None.

Returns

A Tensor with the same data type and shape as x.

Examples

>>> import paddle
>>> import paddle.nn.functional as F
>>> paddle.seed(1)
>>> input_tensor = paddle.to_tensor([[[[-2.0,  3.0, -4.0,  5.0],
...                                    [ 3.0, -4.0,  5.0, -6.0],
...                                    [-7.0, -8.0,  8.0,  9.0]],
...                                   [[ 1.0, -2.0, -3.0,  4.0],
...                                    [-5.0,  6.0,  7.0, -8.0],
...                                    [ 6.0,  7.0,  8.0,  9.0]]]], dtype='float32')
>>> out = F.rrelu(input_tensor, 0.1, 0.3)
>>> print(out)
Tensor(shape=[1, 2, 3, 4], dtype=float32, place=Place(cpu), stop_gradient=True,
[[[[-0.20715050,  3.        , -1.01193857,  5.        ],
   [ 3.        , -0.94084597,  5.        , -0.65544695],
   [-1.24268556, -2.34339547,  8.        ,  9.        ]],
  [[ 1.        , -0.44942653, -0.68969047,  4.        ],
   [-1.03736508,  6.        ,  7.        , -0.95799232],
   [ 6.        ,  7.        ,  8.        ,  9.        ]]]])
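
For module-style networks, the same activation is also available as a layer. A brief sketch, assuming the layer counterpart paddle.nn.RReLU with the same lower/upper arguments:

>>> import paddle
>>> paddle.seed(1)
>>> layer = paddle.nn.RReLU(lower=0.1, upper=0.3)
>>> x = paddle.to_tensor([[-1.0, 2.0], [3.0, -4.0]])
>>> out = layer(x)   # random negative slopes are applied in training mode
>>> print(out.shape)
[2, 2]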