# ops¶

## abs¶

paddle.fluid.layers.abs(x, name=None)

Abs Activation Operator.

$$out = |x|$$

Parameters
• x – Input of Abs operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of Abs operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.abs(data)


## acos¶

paddle.fluid.layers.acos(x, name=None)

Arccosine Activation Operator.

$$out = cos^{-1}(x)$$

Parameters

x – Input of acos operator

Returns

Output of acos operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.acos(data)


## asin¶

paddle.fluid.layers.asin(x, name=None)

Arcsine Activation Operator.

$$out = sin^{-1}(x)$$

Parameters

x – Input of asin operator

Returns

Output of asin operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.asin(data)


## atan¶

paddle.fluid.layers.atan(x, name=None)

Arctangent Activation Operator.

$$out = tan^{-1}(x)$$

Parameters

x – Input of atan operator

Returns

Output of atan operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.atan(data)


## ceil¶

paddle.fluid.layers.ceil(x, name=None)

Ceil Activation Operator.

$$out = \left \lceil x \right \rceil$$

Parameters
• x – Input of Ceil operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of Ceil operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.ceil(data)


## cos¶

paddle.fluid.layers.cos(x, name=None)

Cosine Activation Operator.

$$out = cos(x)$$

Parameters
• x – Input of Cos operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of Cos operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.cos(data)


## cumsum¶

paddle.fluid.layers.cumsum(x, axis=None, exclusive=None, reverse=None)

The cumulative sum of the elements along a given axis. By default, the first element of the result is the same as the first element of the input. If exclusive is true, the first element of the result is 0.

Parameters
• x – Input of cumsum operator

• axis (INT) – The dimension to accumulate along. -1 means the last dimension. [default -1].

• exclusive (BOOLEAN) – Whether to perform exclusive cumsum. [default false].

• reverse (BOOLEAN) – If true, the cumsum is performed in the reversed direction. [default false].

Returns

Output of cumsum operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.cumsum(data, axis=0)
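The effect of the exclusive and reverse flags can be illustrated with a pure-Python sketch on a 1-D list (a hypothetical helper for illustration, not part of the fluid API):

```python
def cumsum_1d(xs, exclusive=False, reverse=False):
    """Cumulative sum with the exclusive/reverse semantics described above."""
    seq = list(reversed(xs)) if reverse else list(xs)
    out, total = [], 0
    for v in seq:
        if exclusive:
            out.append(total)  # running sum *before* adding v, so out[0] == 0
            total += v
        else:
            total += v
            out.append(total)  # running sum including v, so out[0] == xs[0]
    return list(reversed(out)) if reverse else out

print(cumsum_1d([1, 2, 3]))                  # [1, 3, 6]
print(cumsum_1d([1, 2, 3], exclusive=True))  # [0, 1, 3]
print(cumsum_1d([1, 2, 3], reverse=True))    # [6, 5, 3]
```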


## exp¶

paddle.fluid.layers.exp(x, name=None)

Exp Activation Operator.

$$out = e^x$$

Parameters
• x – Input of Exp operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of Exp operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.exp(data)


## floor¶

paddle.fluid.layers.floor(x, name=None)

Floor Activation Operator.

$$out = \left \lfloor x \right \rfloor$$

Parameters
• x – Input of Floor operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of Floor operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.floor(data)


## hard_shrink¶

paddle.fluid.layers.hard_shrink(x, threshold=None)

HardShrink activation operator

$$out = \begin{cases} x, & \text{if } x > \lambda \\ x, & \text{if } x < -\lambda \\ 0, & \text{otherwise} \end{cases}$$

Parameters
• x – Input of HardShrink operator

• threshold (FLOAT) – The value of threshold for HardShrink. [default: 0.5]

Returns

Output of HardShrink operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[784])
result = fluid.layers.hard_shrink(x=data, threshold=0.3)
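Elementwise, the piecewise rule above keeps values outside the band [-threshold, threshold] and zeros values inside it; a scalar sketch (hypothetical, independent of fluid):

```python
def hard_shrink(x, threshold=0.5):
    # Keep x unchanged outside [-threshold, threshold]; zero it inside.
    return x if (x > threshold or x < -threshold) else 0.0

print(hard_shrink(0.3))   # 0.0 (inside the band)
print(hard_shrink(0.8))   # 0.8
print(hard_shrink(-0.8))  # -0.8
```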


## logsigmoid¶

paddle.fluid.layers.logsigmoid(x, name=None)

Logsigmoid Activation Operator

$$out = \log \frac{1}{1 + e^{-x}}$$

Parameters
• x – Input of LogSigmoid operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of LogSigmoid operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.logsigmoid(data)
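The formula above is usually evaluated in the equivalent form $-\log(1 + e^{-x})$ to avoid overflow for large negative x; a scalar sketch of this stable evaluation (hypothetical, independent of fluid):

```python
import math

def logsigmoid(x):
    # log(1/(1+e^{-x})) rewritten to stay finite for any x:
    # for x >= 0 use -log1p(e^{-x}); for x < 0 use x - log1p(e^{x}),
    # which avoids computing the overflowing e^{-x}.
    if x >= 0:
        return -math.log1p(math.exp(-x))
    return x - math.log1p(math.exp(x))

print(round(logsigmoid(0.0), 6))  # -0.693147, i.e. log(1/2)
```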


## reciprocal¶

paddle.fluid.layers.reciprocal(x, name=None)

Reciprocal Activation Operator.

$$out = \frac{1}{x}$$

Parameters
• x – Input of Reciprocal operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of Reciprocal operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.reciprocal(data)


## round¶

paddle.fluid.layers.round(x, name=None)

Round Activation Operator.

$$out = [x]$$

Parameters
• x – Input of Round operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of Round operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.round(data)


## rsqrt¶

paddle.fluid.layers.rsqrt(x, name=None)

Rsqrt Activation Operator.

Please make sure the input is positive; otherwise numerical errors may occur.

$$out = \frac{1}{\sqrt{x}}$$

Parameters
• x – Input of Rsqrt operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of Rsqrt operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.rsqrt(data)


## sigmoid¶

paddle.fluid.layers.sigmoid(x, name=None)

Sigmoid Activation Operator

$$out = \frac{1}{1 + e^{-x}}$$

Parameters
• x – Input of Sigmoid operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of Sigmoid operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.sigmoid(data)


## sin¶

paddle.fluid.layers.sin(x, name=None)

Sine Activation Operator.

$$out = sin(x)$$

Parameters
• x – Input of Sin operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of Sin operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.sin(data)


## softplus¶

paddle.fluid.layers.softplus(x, name=None)

Softplus Activation Operator.

$$out = \ln(1 + e^{x})$$

Parameters
• x – Input of Softplus operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of Softplus operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.softplus(data)


## softshrink¶

paddle.fluid.layers.softshrink(x, name=None)

Softshrink Activation Operator

$$out = \begin{cases} x - \lambda, & \text{if } x > \lambda \\ x + \lambda, & \text{if } x < -\lambda \\ 0, & \text{otherwise} \end{cases}$$

Parameters
• x – Input of Softshrink operator

• lambda (FLOAT) – The non-negative shrinkage offset λ.

Returns

Output of Softshrink operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.softshrink(data)


## softsign¶

paddle.fluid.layers.softsign(x, name=None)

Softsign Activation Operator.

$$out = \frac{x}{1 + |x|}$$

Parameters
• x – Input of Softsign operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of Softsign operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.softsign(data)


## sqrt¶

paddle.fluid.layers.sqrt(x, name=None)

Sqrt Activation Operator.

Please make sure the input is valid. If the input may take values slightly below zero due to numerical errors, add a small epsilon (1e-12) before taking the square root to avoid a negative argument.

$$out = \sqrt{x}$$

Parameters
• x – Input of Sqrt operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of Sqrt operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.sqrt(data)
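The epsilon guard suggested above can be sketched on a scalar (hypothetical host-side illustration, independent of fluid):

```python
import math

# A value that should be exactly zero may come out slightly negative
# after accumulated rounding; adding the suggested 1e-12 epsilon keeps
# the argument of sqrt non-negative.
eps = 1e-12
x = -1e-15            # slightly negative due to numerical error
y = math.sqrt(x + eps)
print(y >= 0.0)       # True
```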


## square¶

paddle.fluid.layers.square(x, name=None)

Square Activation Operator.

$$out = x^2$$

Parameters
• x – Input of Square operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of Square operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.square(data)


## tanh¶

paddle.fluid.layers.tanh(x, name=None)

Tanh Activation Operator.

$$out = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$

Parameters
• x – Input of Tanh operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of Tanh operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.tanh(data)


## tanh_shrink¶

paddle.fluid.layers.tanh_shrink(x, name=None)

TanhShrink Activation Operator.

$$out = x - \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$

Parameters
• x – Input of TanhShrink operator

• use_cudnn (BOOLEAN) – Whether to use the cuDNN kernel; takes effect only when cuDNN is installed. [default False].

Returns

Output of TanhShrink operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.tanh_shrink(data)


## thresholded_relu¶

paddle.fluid.layers.thresholded_relu(x, threshold=None)

ThresholdedRelu activation operator

$$out = \begin{cases} x, & \text{if } x > threshold \\ 0, & \text{otherwise} \end{cases}$$

Parameters
• x – Input of ThresholdedRelu operator

• threshold (FLOAT) – The threshold value of the activation. [default 1.0].

Returns

Output of ThresholdedRelu operator

Examples

import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[1])
result = fluid.layers.thresholded_relu(data, threshold=0.4)
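Elementwise, the formula above passes a value through only when it exceeds the threshold; a scalar sketch (hypothetical, independent of fluid):

```python
def thresholded_relu(x, threshold=1.0):
    # Pass x through only when it exceeds the threshold; otherwise zero.
    return x if x > threshold else 0.0

print(thresholded_relu(1.5))  # 1.5
print(thresholded_relu(0.5))  # 0.0
```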


## uniform_random¶

paddle.fluid.layers.uniform_random(shape, dtype='float32', min=-1.0, max=1.0, seed=0)

This operator initializes a variable with random values sampled from a uniform distribution. The results lie in the range [min, max].

Parameters
• shape (list) – The shape of output variable.

• dtype (np.dtype|core.VarDesc.VarType|str) – The type of data, such as float32, float64 etc. Default: float32.

• min (float) – Minimum value of uniform random. Default -1.0.

• max (float) – Maximum value of uniform random. Default 1.0.

• seed (int) – Random seed used for generating samples. 0 means use a seed generated by the system. Note that if seed is not 0, this operator will always generate the same random numbers every time. Default 0.

Examples

import paddle.fluid as fluid
result = fluid.layers.uniform_random(shape=[32, 784])
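The seed semantics described above (a fixed non-zero seed reproduces the same samples on every run) can be sketched host-side with Python's own generator (hypothetical illustration, not the fluid op itself):

```python
import random

# Two generators with the same non-zero seed produce identical draws,
# and every draw lies in the requested [min, max] range.
rng_a = random.Random(42)
rng_b = random.Random(42)
sample_a = [rng_a.uniform(-1.0, 1.0) for _ in range(4)]
sample_b = [rng_b.uniform(-1.0, 1.0) for _ in range(4)]
print(sample_a == sample_b)                     # True: same seed, same draws
print(all(-1.0 <= v <= 1.0 for v in sample_a))  # True
```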