prelu

paddle.static.nn.prelu(x, mode, param_attr=None, name=None) [source]

Warning: API “paddle.fluid.layers.nn.prelu” is deprecated since 2.0.0, and will be removed in future versions. Please use “paddle.static.nn.prelu” instead.

PReLU (Parametric ReLU) activation.

\[prelu(x) = \max(0, x) + \alpha * \min(0, x)\]

There are three modes for the activation, differing in how the learnable weight alpha is shared (see the sketch after this list):

all: all elements share a single alpha.
channel: elements in the same channel share one alpha.
element: each element has its own alpha; nothing is shared.
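
As a concrete illustration, the following NumPy sketch evaluates the formula with one alpha per sharing mode. The alpha shapes used here (a scalar for 'all', one value per channel for 'channel', one per element for 'element') are illustrative assumptions for an NCHW input, not necessarily the exact parameter shapes Paddle creates internally.

import numpy as np

def prelu_ref(x, alpha):
    # Reference formula: max(0, x) + alpha * min(0, x); alpha broadcasts over x.
    return np.maximum(0.0, x) + alpha * np.minimum(0.0, x)

x = np.random.randn(2, 3, 4, 4).astype("float32")      # NCHW input
alphas = {
    "all": np.float32(0.25),                            # one shared alpha
    "channel": np.full((1, 3, 1, 1), 0.25, "float32"),  # one alpha per channel
    "element": np.full((1, 3, 4, 4), 0.25, "float32"),  # one alpha per element
}
for mode, alpha in alphas.items():
    print(mode, prelu_ref(x, alpha).shape)              # (2, 3, 4, 4) in every mode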
Parameters:

x (Tensor): The input Tensor or LoDTensor with data type float32.
mode (str): The mode for weight sharing: 'all', 'channel', or 'element'.
param_attr (ParamAttr|None, optional): The parameter attribute for the learnable weight (alpha). It can be created by ParamAttr; None by default. For detailed information, please refer to api_fluid_ParamAttr.
name (str, optional): Name for the operation (optional, default is None). For more information, please refer to Name.

Returns:

Tensor: A tensor with the same shape and data type as x.

Examples:

import numpy as np
import paddle

paddle.enable_static()  # prelu is a static-graph API
x = paddle.static.data(name="x", shape=[3], dtype="float32")
param = paddle.ParamAttr(initializer=paddle.nn.initializer.Constant(0.2))
out = paddle.static.nn.prelu(x, 'all', param_attr=param)
exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(paddle.static.default_startup_program())
res, = exe.run(feed={"x": np.array([-1., 2., 3.], dtype="float32")}, fetch_list=[out])
print(res)  # [-0.2  2.   3. ]
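
If you are working in dynamic-graph (imperative) mode instead, the corresponding computation can be expressed with paddle.nn.functional.prelu, which takes alpha as an explicit weight tensor; a minimal sketch:

import paddle
import paddle.nn.functional as F

x = paddle.to_tensor([-1., 2., 3.])
w = paddle.to_tensor([0.2])  # a single shared alpha, as in mode='all'
out = F.prelu(x, w)
print(out.numpy())  # [-0.2  2.   3. ]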