prelu

Note: This API is only available in [Static Graph] mode

paddle.fluid.layers.prelu(x, mode, param_attr=None, name=None)[source]

Equation:

\[y = \max(0, x) + \alpha \cdot \min(0, x)\]
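For example, with \(\alpha = 0.25\), an input value of \(-2\) gives \(\max(0, -2) + 0.25 \cdot \min(0, -2) = -0.5\), while a positive input such as \(3\) passes through unchanged.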

There are three modes for the activation (a NumPy sketch of the broadcasting follows this list):

all: all elements share the same alpha.
channel: elements in the same channel share the same alpha.
element: each element has its own alpha.
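The following NumPy sketch illustrates how each mode broadcasts alpha over a 4-D input of shape [N, C, H, W]. It is a reference computation under assumed alpha shapes, not the library's implementation:

import numpy as np

def prelu_ref(x, alpha):
    # Reference: y = max(0, x) + alpha * min(0, x); alpha broadcasts over x.
    return np.maximum(0, x) + alpha * np.minimum(0, x)

x = np.random.randn(2, 5, 10, 10).astype("float32")      # [N, C, H, W]

alpha_all = np.float32(0.25)                              # 'all': one shared alpha
alpha_channel = np.full((1, 5, 1, 1), 0.25, "float32")    # 'channel': one alpha per channel
alpha_element = np.full((1, 5, 10, 10), 0.25, "float32")  # 'element': one alpha per element

y = prelu_ref(x, alpha_channel)  # broadcasting applies one alpha per channel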
Parameters
  • x (Variable) – The input Tensor or LoDTensor with data type float32.

  • mode (str) – The mode for weight sharing: 'all', 'channel', or 'element'.

  • param_attr (ParamAttr|None) – The parameter attribute for the learnable weight (alpha); it can be created with ParamAttr (see the sketch after this list). Default: None. For detailed information, please refer to ParamAttr.

  • name (str|None) – Generally, this does not need to be set. Default: None. For detailed information, please refer to Name.
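For instance, alpha can be given an explicit initial value through param_attr. This is a sketch of one common configuration; the parameter name 'prelu_alpha' and the initial value 0.25 are illustrative choices, not requirements:

import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr

# Initialize alpha to the constant 0.25
alpha_attr = ParamAttr(
    name='prelu_alpha',
    initializer=fluid.initializer.Constant(0.25))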

Returns

output(Variable): The Tensor or LoDTensor with the same shape as the input. The data type is float32.

Return type

Variable

Examples

import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr

# Input with a dynamic batch dimension: [N, C, H, W]
x = fluid.data(name="x", shape=[None, 5, 10, 10], dtype="float32")
mode = 'channel'  # one learnable alpha per channel
output = fluid.layers.prelu(x, mode, param_attr=ParamAttr(name='alpha'))
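A minimal sketch of running the example above, assuming CPU execution and a random input (the feed data is illustrative):

import numpy as np

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())  # initialize the learnable alpha
data = np.random.rand(3, 5, 10, 10).astype("float32")
out, = exe.run(feed={"x": data}, fetch_list=[output])
print(out.shape)  # (3, 5, 10, 10), same shape as the input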