prelu

paddle.static.nn.prelu(x, mode, param_attr=None, data_format='NCHW', name=None) [source]

PReLU (Parametric Rectified Linear Unit) activation.

\[\mathrm{prelu}(x) = \max(0, x) + \alpha \cdot \min(0, x)\]

There are three modes for the activation:

  • all: all elements share the same alpha.

  • channel: elements in the same channel share the same alpha.

  • element: each element has its own alpha; nothing is shared.
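The mode therefore determines how many learnable alphas exist. The numpy sketch below is only an illustration of the formula and the alpha shapes each mode implies for an NCHW input; it is not the operator's actual implementation:

>>> import numpy as np

>>> x = np.random.randn(2, 5, 10, 10).astype('float32')   # NCHW input
>>> # 'all': a single scalar alpha shared by every element
>>> alpha_all = np.float32(0.25)
>>> # 'channel': one alpha per channel, broadcast over H and W
>>> alpha_channel = np.full((1, 5, 1, 1), 0.25, dtype='float32')
>>> # 'element': one alpha per element of the feature map
>>> alpha_element = np.full((1, 5, 10, 10), 0.25, dtype='float32')

>>> # prelu(x) = max(0, x) + alpha * min(0, x), applied via broadcasting
>>> out = np.maximum(0, x) + alpha_channel * np.minimum(0, x)
>>> out.shape
(2, 5, 10, 10)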
Parameters
  • x (Tensor) – The input Tensor or LoDTensor with data type float32.

  • mode (str) – The mode for weight sharing. It must be one of 'all', 'channel' or 'element'.

  • param_attr (ParamAttr|None, optional) – The parameter attribute for the learnable weight (alpha); it can be created by ParamAttr. None by default. For detailed information, please refer to ParamAttr; a usage sketch follows this parameter list.

  • data_format (str, optional) – Data format that specifies the layout of input. It may be “NC”, “NCL”, “NCHW”, “NCDHW”, “NLC”, “NHWC” or “NDHWC”. Default: “NCHW”.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
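For instance, alpha can be given an explicit name and initial value through ParamAttr. The sketch below is one possible configuration, not a required one; the constant 0.25 is an assumed starting value:

>>> import paddle
>>> attr = paddle.ParamAttr(
...     name='alpha',
...     initializer=paddle.nn.initializer.Constant(value=0.25))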

Returns

A tensor with the same shape and data type as x.

Return type

Tensor

Examples

>>> import paddle
>>> paddle.enable_static()

>>> x = paddle.static.data(name="x", shape=[None, 5, 10, 10], dtype="float32")
>>> mode = 'channel'
>>> output = paddle.static.nn.prelu(
...     x, mode, param_attr=paddle.ParamAttr(name='alpha'))
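To actually obtain values, the static program can be run with an Executor. A minimal sketch, assuming CPU execution and random input data:

>>> import numpy as np
>>> exe = paddle.static.Executor(paddle.CPUPlace())
>>> exe.run(paddle.static.default_startup_program())  # initialize alpha
>>> x_np = np.random.rand(4, 5, 10, 10).astype('float32')
>>> out, = exe.run(feed={'x': x_np}, fetch_list=[output])
>>> out.shape  # same shape as the input
(4, 5, 10, 10)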