prelu
paddle.static.nn.prelu(x, mode, param_attr=None, data_format='NCHW', name=None)
prelu activation.
prelu(x) = max(0, x) + α * min(0, x)

There are three modes for the activation, illustrated in the sketch below:

- all: All elements share the same alpha.
- channel: Elements in the same channel share the same alpha.
- element: Each element has its own alpha; nothing is shared.
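As a minimal sketch of how the modes differ, the program below builds one prelu op per mode on the same input. It assumes a static-graph program (paddle.enable_static()); the input shape [2, 3, 4, 4] is an arbitrary example, not mandated by the API.

import paddle

paddle.enable_static()

# An arbitrary NCHW input: batch=2, channels=3, height=4, width=4.
x = paddle.static.data(name="x", shape=[2, 3, 4, 4], dtype="float32")

# 'all': a single learnable alpha shared by every element.
out_all = paddle.static.nn.prelu(x, 'all')

# 'channel': one learnable alpha per channel (3 here).
out_channel = paddle.static.nn.prelu(x, 'channel')

# 'element': one learnable alpha per input element (3*4*4 here).
out_element = paddle.static.nn.prelu(x, 'element')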
Parameters
x (Tensor) – The input Tensor or LoDTensor with data type float32.
mode (str) – The mode for weight sharing. It should be one of 'all', 'channel' or 'element'.
param_attr (ParamAttr|None, optional) – The parameter attribute for the learnable weight (alpha); it can be created by ParamAttr. None by default.
data_format (str, optional) – Data format that specifies the layout of input. It may be “NC”, “NCL”, “NCHW”, “NCDHW”, “NLC”, “NHWC” or “NDHWC”. Default: “NCHW”.
name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
Returns
Tensor. A tensor with the same shape and data type as x.
Examples
import paddle

x = paddle.to_tensor([-1., 2., 3.])
# Constant initializer gives alpha = 0.2, shared by every element ('all' mode).
param = paddle.ParamAttr(initializer=paddle.nn.initializer.Constant(0.2))
out = paddle.static.nn.prelu(x, 'all', param)
# out: [-0.2, 2., 3.]
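The following is a minimal end-to-end sketch of 'channel' mode, assuming a static-graph program built with paddle.static.Program and run by a paddle.static.Executor; the constant alpha of 0.25 and the input values are illustrative, not part of the API.

import numpy as np
import paddle

paddle.enable_static()

main_prog = paddle.static.Program()
startup_prog = paddle.static.Program()
with paddle.static.program_guard(main_prog, startup_prog):
    # NCHW input: batch=1, channels=2, height=2, width=2.
    x = paddle.static.data(name="x", shape=[1, 2, 2, 2], dtype="float32")
    # Initialize every per-channel alpha to 0.25 (illustrative value).
    param = paddle.ParamAttr(
        initializer=paddle.nn.initializer.Constant(0.25))
    out = paddle.static.nn.prelu(x, 'channel', param, data_format='NCHW')

exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(startup_prog)

# Negative entries are scaled by the per-channel alpha (0.25 here);
# non-negative entries pass through unchanged.
feed_x = np.array([[[[-4., 1.], [2., -8.]],
                    [[-1., 3.], [5., -2.]]]], dtype="float32")
result, = exe.run(main_prog, feed={"x": feed_x}, fetch_list=[out])
# result[0, 0] -> [[-1., 1.], [2., -2.]]
# result[0, 1] -> [[-0.25, 3.], [5., -0.5]]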