fused_linear_activation

paddle.incubate.nn.functional.fused_linear_activation(x, y, bias, trans_x=False, trans_y=False, activation=None) [source]

Fully-connected linear and activation transformation operator. This method requires CUDA version >= 11.6.
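Conceptually, the fused kernel computes the same result as the unfused sequence of operations described by the parameters below (a sketch of the documented semantics, with the optional transposes applied to \(x\) and \(y\) before the multiplication):

\(out = activation(x \cdot y + bias)\)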

Parameters
  • x (Tensor) – the input Tensor to be multiplied.

  • y (Tensor) – the weight Tensor to be multiplied. Its rank must be 2.

  • bias (Tensor) – the input bias Tensor. The bias is added to the matrix multiplication result.

  • trans_x (bool, optional) – Whether to transpose \(x\) before the multiplication. Default: False.

  • trans_y (bool, optional) – Whether to transpose \(y\) before the multiplication. Default: False.

  • activation (str, optional) – Activation function. Currently, the available activation functions are limited to “gelu” (Gaussian Error Linear Unit) and “relu” (Rectified Linear Unit). The activation is applied to the output of the bias add (see the Examples below). Default: None.

Returns

the output Tensor.

Return type

Tensor

Examples

>>> import paddle
>>> from paddle.incubate.nn.functional import fused_linear_activation

>>> # The fused kernel runs on CUDA, so place the tensors on the GPU.
>>> paddle.set_device('gpu')
>>> x = paddle.randn([3, 4])
>>> weight = paddle.randn([4, 5])
>>> bias = paddle.randn([5])
>>> out = fused_linear_activation(x, weight, bias)
>>> print(out.shape)
[3, 5]
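
A short usage sketch with the optional arguments described above. The activation names follow the parameter description, and the trans_y shape handling assumes standard matmul shape rules; exact numerical outputs are omitted since the inputs are random.

>>> # Apply a fused ReLU to the matmul-plus-bias result.
>>> out_relu = fused_linear_activation(x, weight, bias, activation='relu')
>>> print(out_relu.shape)
[3, 5]

>>> # With trans_y=True the weight is transposed before the multiplication,
>>> # so a [5, 4] weight yields the same [3, 5] output.
>>> weight_t = paddle.randn([5, 4])
>>> out_gelu = fused_linear_activation(x, weight_t, bias, trans_y=True, activation='gelu')
>>> print(out_gelu.shape)
[3, 5]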