# Variable

class paddle.static.Variable(block, type=VarType.LOD_TENSOR, name=None, shape=None, dtype=None, lod_level=None, capacity=None, persistable=None, error_clip=None, stop_gradient=False, is_data=False, need_check_feed=False, belong_to_optimizer=False, **kwargs) [source]
Notes:

The constructor of Variable should not be invoked directly.

In Static Graph Mode: Please use Block.create_var to create a static Variable, which has no data until it is fed.

In Dygraph Mode: Please use api_fluid_dygraph_to_variable to create a dygraph variable with real data

In Fluid, every input and output of an OP is a variable. In most cases, variables are used for holding different kinds of data or training labels. A variable belongs to a Block. Every variable has its own name, and two variables in different Blocks can have the same name.

There are many kinds of variables. Each kind of them has its own attributes and usages. Please refer to the framework.proto for details.

Most of a Variable's member variables can be set to None, meaning the value is not available or will be specified later.

Examples

In Static Graph Mode:

import paddle.fluid as fluid
cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
shape=[-1, 23, 48],
dtype='float32')


In Dygraph Mode:

import paddle.fluid as fluid
import numpy as np

with fluid.dygraph.guard():
    new_variable = fluid.dygraph.to_variable(np.arange(10))

detach ( )

Returns a new Variable, detached from the current graph. It shares data with the original Variable without a tensor copy. In addition, the detached Variable does not provide gradient propagation.

Returns

The detached Variable.

Return type

Variable, with the same dtype as the current Variable

Examples

import paddle

paddle.enable_static()

# create a static Variable
x = paddle.static.data(name='x', shape=[3, 2, 1])

# create a detached Variable
y = x.detach()

numpy ( )
Notes:

This API is ONLY available in Dygraph mode

Returns a numpy array that shows the value of the current Variable.

Returns

The numpy value of current Variable.

Return type

ndarray, with the same dtype as the current Variable

Examples

import paddle.fluid as fluid
from paddle.fluid.dygraph.base import to_variable
from paddle.fluid.dygraph import Linear
import numpy as np

data = np.random.uniform(-1, 1, [30, 10, 32]).astype('float32')
with fluid.dygraph.guard():
    linear = Linear(32, 64)
    data = to_variable(data)
    x = linear(data)
    print(x.numpy())

backward ( retain_graph=False )
Notes:

This API is ONLY available in Dygraph mode

Run backward on the current graph, starting from the current Tensor.

Parameters

retain_graph (bool, optional) – If False, the graph used to compute grads will be freed. If you would like to add more ops to the built graph after calling this method (backward), set the parameter retain_graph to True, and the grads will be retained. Thus, setting it to False is much more memory-efficient. Defaults to False.

Returns

None

Return type

NoneType

Examples

import numpy as np
import paddle
paddle.disable_static()

x = np.ones([2, 2], np.float32)
inputs = []
for _ in range(10):
    tmp = paddle.to_tensor(x)
    # If we do not set tmp's stop_gradient to False, no path to the loss
    # will have a gradient, since no tensor on the path requires one.
    tmp.stop_gradient = False
    inputs.append(tmp)
ret = paddle.add_n(inputs)
loss = paddle.sum(ret)
loss.backward()

gradient ( )
Notes:

This API is ONLY available in Dygraph mode

Get the Gradient of Current Variable

Returns

If the Variable's type is LoDTensor, return the numpy value of the gradient of the current Variable. If the Variable's type is SelectedRows, return a tuple of ndarrays: the first element is the numpy value of the gradient of the current Variable, and the second element is the numpy value of the rows of the current Variable.

Return type

ndarray or tuple of ndarray

Examples

import paddle.fluid as fluid
import numpy as np

# example1: return ndarray
x = np.ones([2, 2], np.float32)
with fluid.dygraph.guard():
    inputs2 = []
    for _ in range(10):
        tmp = fluid.dygraph.base.to_variable(x)
        tmp.stop_gradient = False
        inputs2.append(tmp)
    ret2 = fluid.layers.sums(inputs2)
    loss2 = fluid.layers.reduce_sum(ret2)
    loss2.backward()
    print(loss2.gradient())

# example2: return tuple of ndarray
with fluid.dygraph.guard():
    embedding = fluid.dygraph.Embedding(
        size=[20, 32],
        param_attr='emb.w',
        is_sparse=True)
    x_data = np.arange(12).reshape(4, 3).astype('int64')
    x_data = x_data.reshape((-1, 3, 1))
    x = fluid.dygraph.base.to_variable(x_data)
    out = embedding(x)
    out.backward()
    print(embedding.weight.gradient())

clear_gradient ( )
Notes:

1. This API is ONLY available in Dygraph mode

2. Use it only when the Variable has a gradient. Normally we use this for Parameters, since other temporary Variables will be deleted by Python's GC

Clear (set to 0) the gradient of the current Variable

Returns: None

Examples

import paddle.fluid as fluid
import numpy as np

x = np.ones([2, 2], np.float32)
with fluid.dygraph.guard():
    inputs2 = []
    for _ in range(10):
        tmp = fluid.dygraph.base.to_variable(x)
        tmp.stop_gradient = False
        inputs2.append(tmp)
    ret2 = fluid.layers.sums(inputs2)
    loss2 = fluid.layers.reduce_sum(ret2)
    loss2.backward()
    print(loss2.gradient())
    loss2.clear_gradient()
    print("After clear {}".format(loss2.gradient()))

to_string ( throw_on_error, with_details=False )

Get debug string.

Parameters
• throw_on_error (bool) – If True, raise an exception when self is not initialized.

• with_details (bool) – more details about variables and parameters (e.g. trainable, optimize_attr, …) will be printed when with_details is True. Default value is False.

Returns

The debug string.

Return type

str

Examples

import paddle.fluid as fluid
import paddle

paddle.enable_static()
cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
shape=[-1, 23, 48],
dtype='float32')
print(new_variable.to_string(True))
print("=============with detail===============")
print(new_variable.to_string(True, True))

element_size ( )

Returns the size in bytes of an element in the Tensor.

Examples

import paddle
paddle.enable_static()

x = paddle.static.data(name='x1', shape=[3, 2], dtype='bool')
x.element_size() # 1

x = paddle.static.data(name='x2', shape=[3, 2], dtype='int16')
x.element_size() # 2

x = paddle.static.data(name='x3', shape=[3, 2], dtype='float16')
x.element_size() # 2

x = paddle.static.data(name='x4', shape=[3, 2], dtype='float32')
x.element_size() # 4

x = paddle.static.data(name='x5', shape=[3, 2], dtype='float64')
x.element_size() # 8

property stop_gradient

Indicates whether gradient computation stops at the current Variable.

Notes: This property defaults to True in Dygraph mode, while a Parameter's default value is False. However, in Static Graph Mode, every Variable's default stop_gradient value is False.

Examples

import paddle.fluid as fluid
import numpy as np

with fluid.dygraph.guard():
    value0 = np.arange(26).reshape(2, 13).astype("float32")
    value1 = np.arange(6).reshape(2, 3).astype("float32")
    value2 = np.arange(10).reshape(2, 5).astype("float32")
    linear = fluid.Linear(13, 5, dtype="float32")
    linear2 = fluid.Linear(3, 3, dtype="float32")
    a = fluid.dygraph.to_variable(value0)
    b = fluid.dygraph.to_variable(value1)
    c = fluid.dygraph.to_variable(value2)
    out1 = linear(a)
    out2 = linear2(b)
    out1.stop_gradient = True
    out = fluid.layers.concat(input=[out1, out2, c], axis=1)
    out.backward()

    assert linear.weight.gradient() is None
    assert (out1.gradient() == 0).all()

property persistable

Indicates whether the current Variable should be kept alive long-term.

Notes: This property will be deprecated; it exists only to help users understand the concept.

1. All Variable’s persistable is False except Parameters.

2. In Dygraph mode, this property should not be changed

Examples

import paddle.fluid as fluid
cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
shape=[-1, 23, 48],
dtype='float32')
print("persistable of current Var is: {}".format(new_variable.persistable))

property is_parameter

Indicating if current Variable is a Parameter

Examples

import paddle
new_parameter = paddle.static.create_parameter(name="X",
shape=[10, 23, 48],
dtype='float32')
if new_parameter.is_parameter:
    print("Current var is a Parameter")
else:
    print("Current var is not a Parameter")

# Current var is a Parameter

property grad_name

Indicating name of the gradient Variable of current Variable.

Notes: This is a read-only property. It simply derives the name of the gradient Variable from a naming convention and does not guarantee that the gradient exists.

Examples




import paddle.fluid as fluid

x = fluid.data(name="x", shape=[-1, 23, 48], dtype='float32')
print(x.grad_name)  # output is "x@GRAD"

property name

Indicating name of current Variable

Notes: If two or more Variables share the same name in the same Block, they will share content in non-Dygraph mode. This is how Parameter sharing is achieved.

Examples

import paddle.fluid as fluid
cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
shape=[-1, 23, 48],
dtype='float32')
print("name of current Var is: {}".format(new_variable.name))

property shape

Indicating shape of current Variable

Notes: This is a read-only property

Examples

import paddle.fluid as fluid
cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
shape=[-1, 23, 48],
dtype='float32')
print("shape of current Var is: {}".format(new_variable.shape))

property dtype

Indicating data type of current Variable

Notes: This is a read-only property

Examples

import paddle.fluid as fluid
cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
shape=[-1, 23, 48],
dtype='float32')
print("Dtype of current Var is: {}".format(new_variable.dtype))

property lod_level

Indicating LoD info of the current Variable. Please refer to api_fluid_LoDTensor_en for the meaning of LoD.

Notes:

1. This is a read-only property

2. This property is not supported in Dygraph mode; its value is always 0 (int).

Examples

import paddle
import paddle.fluid as fluid

paddle.enable_static()
cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
shape=[-1, 23, 48],
dtype='float32')
print("LoD Level of current Var is: {}".format(new_variable.lod_level))

property type

Indicating Type of current Variable

Notes: This is a read-only property

Examples

import paddle.fluid as fluid
cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
shape=[-1, 23, 48],
dtype='float32')
print("Type of current Var is: {}".format(new_variable.type))

property T

Permutes the current Variable so that its dimensions are reversed.

If n is the number of dimensions of x, x.T is equivalent to x.transpose([n-1, n-2, ..., 0]).

Examples

import paddle
paddle.enable_static()

x = paddle.ones(shape=[2, 3, 5])
x_T = x.T

exe = paddle.static.Executor()
x_T_np = exe.run(paddle.static.default_main_program(), fetch_list=[x_T])[0]
print(x_T_np.shape)
# (5, 3, 2)

clone ( )

Returns a new static Variable, which is the clone of the original static Variable. It remains in the current graph, that is, the cloned Variable provides gradient propagation. Calling out = tensor.clone() is the same as out = assign(tensor).

Returns

The cloned Variable.

Return type

Variable

Examples

import paddle

paddle.enable_static()

# create a static Variable
x = paddle.static.data(name='x', shape=[3, 2, 1])
# create a cloned Variable
y = x.clone()

get_value ( scope=None )

Get the value of variable in given scope.

Parameters

scope (Scope, optional) – If scope is None, it will be set to global scope obtained through ‘paddle.static.global_scope()’. Otherwise, use scope. Default: None

Returns

the value in given scope.

Return type

Tensor

Examples

import paddle
import paddle.static as static
import numpy as np

paddle.enable_static()

x = static.data(name="x", shape=[10, 10], dtype='float32')

y = static.nn.fc(x, 10, name='fc')
place = paddle.CPUPlace()
exe = static.Executor(place)
prog = paddle.static.default_main_program()
exe.run(static.default_startup_program())
inputs = np.ones((10, 10), dtype='float32')
exe.run(prog, feed={'x': inputs}, fetch_list=[y, ])
path = 'temp/tensor_'
for var in prog.list_vars():
    if var.persistable:
        t = var.get_value()
        paddle.save(t, path+var.name+'.pdtensor')

for var in prog.list_vars():
    if var.persistable:
        t_load = paddle.load(path+var.name+'.pdtensor')
        var.set_value(t_load)

set_value ( value, scope=None )

Set the value to the tensor in given scope.

Parameters
• value (Tensor/ndarray) – The value to be set.

• scope (Scope, optional) – If scope is None, it will be set to global scope obtained through ‘paddle.static.global_scope()’. Otherwise, use scope. Default: None

Returns

None

Examples

import paddle
import paddle.static as static
import numpy as np

paddle.enable_static()

x = static.data(name="x", shape=[10, 10], dtype='float32')

y = static.nn.fc(x, 10, name='fc')
place = paddle.CPUPlace()
exe = static.Executor(place)
prog = paddle.static.default_main_program()
exe.run(static.default_startup_program())
inputs = np.ones((10, 10), dtype='float32')
exe.run(prog, feed={'x': inputs}, fetch_list=[y, ])
path = 'temp/tensor_'
for var in prog.list_vars():
    if var.persistable:
        t = var.get_value()
        paddle.save(t, path+var.name+'.pdtensor')

for var in prog.list_vars():
    if var.persistable:
        t_load = paddle.load(path+var.name+'.pdtensor')
        var.set_value(t_load)

size ( )

Returns the number of elements of the current Variable, as an int64 Variable with shape [1].

Returns

the number of elements for current Variable

Return type

Variable

Examples

import paddle

paddle.enable_static()

# create a static Variable
x = paddle.static.data(name='x', shape=[3, 2, 1])

# get the number of elements of the Variable
y = x.size()

property attr_names

Get the names of all attributes defined.
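
No example is given above; the following is a minimal sketch, assuming attr_names can be read directly from a static Variable (the exact attribute names recorded on the Variable's description are implementation-dependent):

import paddle

paddle.enable_static()

x = paddle.static.data(name='x', shape=[3, 2, 1])
# Print the names of the attributes defined on this Variable.
print(x.attr_names)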

property dist_attr

Get distributed attribute of this Variable.

abs ( name=None )

Abs Operator.

This operator is used to perform elementwise abs for input $X$. $$out = |x|$$

Parameters
• x (Tensor) – (Tensor), The input tensor of abs op.

• with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

(Tensor), The output tensor of abs op.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.abs(x)
print(out)
# [0.4 0.2 0.1 0.3]

acos ( name=None )

Arccosine Operator.

$$out = \cos^{-1}(x)$$

Parameters
• x (Tensor) – Input of acos operator

• with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of acos operator

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.acos(x)
print(out)
# [1.98231317 1.77215425 1.47062891 1.26610367]

acosh ( name=None )

Acosh Activation Operator.

$$out = acosh(x)$$

Parameters
• x (Tensor) – Input of Acosh operator, an N-D Tensor, with data type float32, float64 or float16.

• with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Acosh operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([1., 3., 4., 5.])
out = paddle.acosh(x)
print(out)
# [0.        , 1.76274729, 2.06343699, 2.29243159]

add ( y, name=None )

Elementwise Add Operator. Adds two tensors element-wise. The equation is:

$Out = X + Y$

where $X$ is a tensor of any dimension and $Y$ is a tensor whose dimensions must be less than or equal to the dimensions of $X$.

There are two cases for this operator:

1. The shape of $Y$ is the same with $X$.

2. The shape of $Y$ is a continuous subsequence of $X$.

For case 2:

1. Broadcast $Y$ to match the shape of $X$, where axis is the start dimension index for broadcasting $Y$ onto $X$.

2. If $axis$ is -1 (default), $axis$=rank($X$)−rank($Y$).

3. The trailing dimensions of size 1 for $Y$ will be ignored for the consideration of subsequence, such as shape($Y$) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0

Parameters
• x (Tensor) – Tensor or LoDTensor of any dimensions. Its dtype should be int32, int64, float32, float64.

• y (Tensor) – Tensor or LoDTensor of any dimensions. Its dtype should be int32, int64, float32, float64.

• name (string, optional) – For details, please refer to Name. Generally, no setting is required. Default: None.

Returns

N-D Tensor. A location into which the result is stored. Its dimensions equal those of x.

Examples

import paddle

x = paddle.to_tensor([2, 3, 4], 'float64')
y = paddle.to_tensor([1, 5, 2], 'float64')
z = paddle.add(x, y)
print(z)  # [3., 8., 6. ]
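
A small additional sketch of the broadcast case described above, where the shape of y is a trailing subsequence of the shape of x (shapes chosen for illustration):

import paddle

# x has shape [2, 3, 4, 5]; y has shape [4, 5], so y is broadcast
# over the first two dimensions of x before the element-wise add.
x = paddle.ones([2, 3, 4, 5], dtype='float32')
y = paddle.full([4, 5], 2.0, dtype='float32')
z = paddle.add(x, y)
print(z.shape)  # [2, 3, 4, 5]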

add_ ( y, name=None )

Inplace version of add API, the output Tensor will be inplaced with input x. Please refer to api_tensor_add.
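
No example is given above; a minimal dygraph sketch, assuming the usual in-place semantics of the trailing-underscore APIs:

import paddle

x = paddle.to_tensor([2., 3., 4.])
y = paddle.to_tensor([1., 5., 2.])
x.add_(y)   # x is modified in place
print(x)    # [3., 8., 6.]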

add_n ( name=None )

Sum one or more Tensor of the input.

For example:

Case 1:

Input:
input.shape = [2, 3]
input = [[1, 2, 3],
[4, 5, 6]]

Output:
output.shape = [2, 3]
output = [[1, 2, 3],
[4, 5, 6]]

Case 2:

Input:
First input:
input1.shape = [2, 3]
Input1 = [[1, 2, 3],
[4, 5, 6]]

The second input:
input2.shape = [2, 3]
input2 = [[7, 8, 9],
[10, 11, 12]]

Output:
output.shape = [2, 3]
output = [[8, 10, 12],
[14, 16, 18]]

Parameters
• inputs (Tensor|list[Tensor]|tuple[Tensor]) – A Tensor or a list/tuple of Tensors. The shape and data type of the list/tuple elements should be consistent. Input can be multi-dimensional Tensor, and data types can be: float32, float64, int32, int64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the sum of input $$inputs$$ , its shape and data types are consistent with $$inputs$$.

Examples

import paddle

input0 = paddle.to_tensor([[1, 2, 3], [4, 5, 6]], dtype='float32')
input1 = paddle.to_tensor([[7, 8, 9], [10, 11, 12]], dtype='float32')
output = paddle.add_n([input0, input1])
# [[8., 10., 12.],
#  [14., 16., 18.]]

addmm ( x, y, beta=1.0, alpha=1.0, name=None )

addmm

Perform matrix multiplication for input $x$ and $y$. $input$ is added to the final result. The equation is:

$Out = alpha * x * y + beta * input$

$Input$, $x$ and $y$ can carry the LoD (Level of Details) information, or not. But the output only shares the LoD information with input $input$.

Parameters
• input (Tensor) – The input Tensor to be added to the final result.

• x (Tensor) – The first input Tensor for matrix multiplication.

• y (Tensor) – The second input Tensor for matrix multiplication.

• beta (float, optional) – Coefficient of $input$, default is 1.

• alpha (float, optional) – Coefficient of $x*y$, default is 1.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The output Tensor of addmm.

Return type

Tensor

Examples

import paddle

x = paddle.ones([2,2])
y = paddle.ones([2,2])
input = paddle.ones([2,2])

out = paddle.addmm( input=input, x=x, y=y, beta=0.5, alpha=5.0 )

print(out)
# [[10.5 10.5]
# [10.5 10.5]]

all ( axis=None, keepdim=False, name=None )

Computes the logical and of tensor elements over the given dimension.

Parameters
• x (Tensor) – An N-D Tensor, the input data type should be bool.

• axis (int|list|tuple, optional) – The dimensions along which the logical and is computed. If None, the logical and is computed over all elements of x and a Tensor with a single element is returned; otherwise it must be in the range $$[-rank(x), rank(x))$$. If $$axis[i] < 0$$, the dimension to reduce is $$rank + axis[i]$$.

• keepdim (bool, optional) – Whether to reserve the reduced dimension in the output Tensor. The result Tensor will have one fewer dimension than the x unless keepdim is true, default value is False.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Results of the logical and on the specified axis of input Tensor x; its data type is bool.

Return type

Tensor

Examples

import paddle

# x is a bool Tensor with following elements:
#    [[True, False]
#     [True, True]]
x = paddle.to_tensor([[1, 0], [1, 1]], dtype='int32')
print(x)
x = paddle.cast(x, 'bool')

# out1 should be [False]
out1 = paddle.all(x)  # [False]
print(out1)

# out2 should be [True, False]
out2 = paddle.all(x, axis=0)  # [True, False]
print(out2)

# keepdim=False, out3 should be [False, True], out.shape should be (2,)
out3 = paddle.all(x, axis=-1)  # [False, True]
print(out3)

# keepdim=True, out4 should be [[False], [True]], out.shape should be (2,1)
out4 = paddle.all(x, axis=1, keepdim=True) # [[False], [True]]
print(out4)

allclose ( y, rtol=1e-05, atol=1e-08, equal_nan=False, name=None )

This operator checks whether all elements of $$x$$ and $$y$$ satisfy the condition:

$\left| x - y \right| \leq atol + rtol \times \left| y \right|$

elementwise, for all elements of $$x$$ and $$y$$. The behaviour of this operator is analogous to $$numpy.allclose$$, namely that it returns $$True$$ if two tensors are elementwise equal within a tolerance.

Parameters
• x (Tensor) – The input tensor, it’s data type should be float32, float64.

• y (Tensor) – The input tensor, it’s data type should be float32, float64.

• rtol (float, optional) – The relative tolerance. Default: $$1e-5$$ .

• atol (float, optional) – The absolute tolerance. Default: $$1e-8$$ .

• equal_nan (bool, optional) – If $$True$$ , then two $$NaNs$$ will be compared as equal. Default: $$False$$ .

• name (str, optional) – Name for the operation. For more information, please refer to Name. Default: None.

Returns

The output tensor; its data type is bool.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([10000., 1e-07])
y = paddle.to_tensor([10000.1, 1e-08])
result1 = paddle.allclose(x, y, rtol=1e-05, atol=1e-08,
equal_nan=False, name="ignore_nan")
# [False]

result2 = paddle.allclose(x, y, rtol=1e-05, atol=1e-08,
equal_nan=True, name="equal_nan")
# [False]

x = paddle.to_tensor([1.0, float('nan')])
y = paddle.to_tensor([1.0, float('nan')])
result1 = paddle.allclose(x, y, rtol=1e-05, atol=1e-08,
equal_nan=False, name="ignore_nan")
# [False]

result2 = paddle.allclose(x, y, rtol=1e-05, atol=1e-08,
equal_nan=True, name="equal_nan")
# [True]

amax ( axis=None, keepdim=False, name=None )

Computes the maximum of tensor elements over the given axis.

Note

The difference between max and amax is: If there are multiple maximum elements, amax evenly distributes gradient between these equal values, while max propagates gradient to all of them.

Parameters
• x (Tensor) – A tensor, the data type is float32, float64, int32, int64, the dimension is no more than 4.

• axis (int|list|tuple, optional) – The axis along which the maximum is computed. If None, compute the maximum over all elements of x and return a Tensor with a single element, otherwise must be in the range $$[-x.ndim, x.ndim)$$. If $$axis[i] < 0$$, the axis to reduce is $$x.ndim + axis[i]$$.

• keepdim (bool, optional) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the x unless keepdim is true, default value is False.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, results of maximum on the specified axis of the input tensor; its data type is the same as x.

Examples

import paddle
# data_x is a Tensor with shape [2, 4] with multiple maximum elements
# the axis is a int element

x = paddle.to_tensor([[0.1, 0.9, 0.9, 0.9],
[0.9, 0.9, 0.6, 0.7]],
dtype='float64', stop_gradient=False)
# There are 5 maximum elements:
# 1) amax evenly distributes gradient between these equal values,
#    thus the corresponding gradients are 1/5=0.2;
# 2) while max propagates gradient to all of them,
#    thus the corresponding gradient are 1.
result1 = paddle.amax(x)
result1.backward()
print(result1, x.grad)
#[0.9], [[0., 0.2, 0.2, 0.2], [0.2, 0.2, 0., 0.]]

x.clear_grad()
result1_max = paddle.max(x)
result1_max.backward()
print(result1_max, x.grad)
#[0.9], [[0., 1.0, 1.0, 1.0], [1.0, 1.0, 0., 0.]]

###############################

x.clear_grad()
result2 = paddle.amax(x, axis=0)
result2.backward()
print(result2, x.grad)
#[0.9, 0.9, 0.9, 0.9], [[0., 0.5, 1., 1.], [1., 0.5, 0., 0.]]

x.clear_grad()
result3 = paddle.amax(x, axis=-1)
result3.backward()
print(result3, x.grad)
#[0.9, 0.9], [[0., 0.3333, 0.3333, 0.3333], [0.5, 0.5, 0., 0.]]

x.clear_grad()
result4 = paddle.amax(x, axis=1, keepdim=True)
result4.backward()
print(result4, x.grad)
#[[0.9], [0.9]], [[0., 0.3333, 0.3333, 0.3333], [0.5, 0.5, 0., 0.]]

# data_y is a Tensor with shape [2, 2, 2]
# the axis is list
y = paddle.to_tensor([[[0.1, 0.9], [0.9, 0.9]],
[[0.9, 0.9], [0.6, 0.7]]],
dtype='float64', stop_gradient=False)
result5 = paddle.amax(y, axis=[1, 2])
result5.backward()
print(result5, y.grad)
#[0.9, 0.9], [[[0., 0.3333], [0.3333, 0.3333]], [[0.5, 0.5], [0., 1.]]]

y.clear_grad()
result6 = paddle.amax(y, axis=[0, 1])
result6.backward()
print(result6, y.grad)
#[0.9, 0.9], [[[0., 0.3333], [0.5, 0.3333]], [[0.5, 0.3333], [1., 1.]]]

amin ( axis=None, keepdim=False, name=None )

Computes the minimum of tensor elements over the given axis

Note

The difference between min and amin is: If there are multiple minimum elements, amin evenly distributes gradient between these equal values, while min propagates gradient to all of them.

Parameters
• x (Tensor) – A tensor, the data type is float32, float64, int32, int64, the dimension is no more than 4.

• axis (int|list|tuple, optional) – The axis along which the minimum is computed. If None, compute the minimum over all elements of x and return a Tensor with a single element, otherwise must be in the range $$[-x.ndim, x.ndim)$$. If $$axis[i] < 0$$, the axis to reduce is $$x.ndim + axis[i]$$.

• keepdim (bool, optional) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the x unless keepdim is true, default value is False.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, results of minimum on the specified axis of the input tensor; its data type is the same as that of the input Tensor.

Examples

import paddle
# data_x is a Tensor with shape [2, 4] with multiple minimum elements
# the axis is a int element

x = paddle.to_tensor([[0.2, 0.1, 0.1, 0.1],
[0.1, 0.1, 0.6, 0.7]],
dtype='float64', stop_gradient=False)
# There are 5 minimum elements:
# 1) amin evenly distributes gradient between these equal values,
#    thus the corresponding gradients are 1/5=0.2;
# 2) while min propagates gradient to all of them,
#    thus the corresponding gradient are 1.
result1 = paddle.amin(x)
result1.backward()
print(result1, x.grad)
#[0.1], [[0., 0.2, 0.2, 0.2], [0.2, 0.2, 0., 0.]]

x.clear_grad()
result1_min = paddle.min(x)
result1_min.backward()
print(result1_min, x.grad)
#[0.1], [[0., 1.0, 1.0, 1.0], [1.0, 1.0, 0., 0.]]

###############################

x.clear_grad()
result2 = paddle.amin(x, axis=0)
result2.backward()
print(result2, x.grad)
#[0.1, 0.1, 0.1, 0.1], [[0., 0.5, 1., 1.], [1., 0.5, 0., 0.]]

x.clear_grad()
result3 = paddle.amin(x, axis=-1)
result3.backward()
print(result3, x.grad)
#[0.1, 0.1], [[0., 0.3333, 0.3333, 0.3333], [0.5, 0.5, 0., 0.]]

x.clear_grad()
result4 = paddle.amin(x, axis=1, keepdim=True)
result4.backward()
print(result4, x.grad)
#[[0.1], [0.1]], [[0., 0.3333, 0.3333, 0.3333], [0.5, 0.5, 0., 0.]]

# data_y is a Tensor with shape [2, 2, 2]
# the axis is list
y = paddle.to_tensor([[[0.2, 0.1], [0.1, 0.1]],
[[0.1, 0.1], [0.6, 0.7]]],
dtype='float64', stop_gradient=False)
result5 = paddle.amin(y, axis=[1, 2])
result5.backward()
print(result5, y.grad)
#[0.1, 0.1], [[[0., 0.3333], [0.3333, 0.3333]], [[0.5, 0.5], [0., 1.]]]

y.clear_grad()
result6 = paddle.amin(y, axis=[0, 1])
result6.backward()
print(result6, y.grad)
#[0.1, 0.1], [[[0., 0.3333], [0.5, 0.3333]], [[0.5, 0.3333], [1., 1.]]]

angle ( name=None )

Element-wise angle of complex numbers. For non-negative real numbers, the angle is 0 while for negative real numbers, the angle is $$\pi$$.

Equation:
$angle(x)=arctan2(x.imag, x.real)$
Parameters
• x (Tensor) – An N-D Tensor, the data type is complex64, complex128, or float32, float64 .

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

An N-D Tensor of real data type with the same precision as that of x’s data type.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([-2, -1, 0, 1]).unsqueeze(-1).astype('float32')
y = paddle.to_tensor([-2, -1, 0, 1]).astype('float32')
z = x + 1j * y
print(z)
# Tensor(shape=[4, 4], dtype=complex64, place=Place(cpu), stop_gradient=True,
#        [[(-2-2j), (-2-1j), (-2+0j), (-2+1j)],
#         [(-1-2j), (-1-1j), (-1+0j), (-1+1j)],
#         [-2j    , -1j    ,  0j    ,  1j    ],
#         [ (1-2j),  (1-1j),  (1+0j),  (1+1j)]])

theta = paddle.angle(z)
print(theta)
# Tensor(shape=[4, 4], dtype=float32, place=Place(cpu), stop_gradient=True,
#        [[-2.35619450, -2.67794514,  3.14159274,  2.67794514],
#         [-2.03444386, -2.35619450,  3.14159274,  2.35619450],
#         [-1.57079637, -1.57079637,  0.        ,  1.57079637],
#         [-1.10714877, -0.78539819,  0.        ,  0.78539819]])

any ( axis=None, keepdim=False, name=None )

Computes the logical or of tensor elements over the given dimension, and returns the result.

Parameters
• x (Tensor) – An N-D Tensor, the input data type should be bool.

• axis (int|list|tuple, optional) – The dimensions along which the logical or is computed. If None, the logical or is computed over all elements of x and a Tensor with a single element is returned; otherwise it must be in the range $$[-rank(x), rank(x))$$. If $$axis[i] < 0$$, the dimension to reduce is $$rank + axis[i]$$.

• keepdim (bool, optional) – Whether to reserve the reduced dimension in the output Tensor. The result Tensor will have one fewer dimension than the x unless keepdim is true, default value is False.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Results of the logical or on the specified axis of input Tensor x; its data type is bool.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([[1, 0], [1, 1]], dtype='int32')
x = paddle.assign(x)
print(x)
x = paddle.cast(x, 'bool')
# x is a bool Tensor with following elements:
#    [[True, False]
#     [True, True]]

# out1 should be [True]
out1 = paddle.any(x)  # [True]
print(out1)

# out2 should be [True, True]
out2 = paddle.any(x, axis=0)  # [True, True]
print(out2)

# keepdim=False, out3 should be [True, True], out.shape should be (2,)
out3 = paddle.any(x, axis=-1)  # [True, True]
print(out3)

# keepdim=True, result should be [[True], [True]], out.shape should be (2,1)
out4 = paddle.any(x, axis=1, keepdim=True)  # [[True], [True]]
print(out4)

append ( var )
Notes:

The type of the variable must be LoDTensorArray.

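No example is given above; the following is a minimal static-graph sketch, assuming append() writes the given tensor to the end of a LOD_TENSOR_ARRAY variable (paddle.tensor.create_array is used here for illustration):

import paddle

paddle.enable_static()

# Create an empty tensor-array variable and append a tensor to it.
arr = paddle.tensor.create_array(dtype='float32')
x = paddle.full(shape=[1], fill_value=1.0, dtype='float32')
arr.append(x)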

argmax ( axis=None, keepdim=False, dtype='int64', name=None )

Computes the indices of the maximum elements of the input tensor along the provided axis.

Parameters
• x (Tensor) – An input N-D Tensor with type float32, float64, int16, int32, int64, uint8.

• axis (int, optional) – Axis to compute indices along. The effective range is [-R, R), where R is x.ndim. When axis < 0, it works the same way as axis + R. Default is None; in that case the input x is flattened into a 1-D tensor and the index of the maximum value is returned.

• keepdim (bool, optional) – Whether to keep the given axis in the output. If it is True, the dimensions will be the same as the input x, with size one in the axis. Otherwise the output has one fewer dimension than x, since the axis is squeezed. Default is False.

• dtype (str|np.dtype, optional) – Data type of the output tensor which can be int32, int64. The default value is int64 , and it will return the int64 indices.

• name (str, optional) – For details, please refer to Name. Generally, no setting is required. Default: None.

Returns

Tensor, return the tensor of int32 if set dtype is int32, otherwise return the tensor of int64.

Examples

import paddle

x = paddle.to_tensor([[5,8,9,5],
[0,0,1,7],
[6,9,2,4]])
out1 = paddle.argmax(x)
print(out1) # 2
out2 = paddle.argmax(x, axis=0)
print(out2)
# [2, 2, 0, 1]
out3 = paddle.argmax(x, axis=-1)
print(out3)
# [2, 3, 1]
out4 = paddle.argmax(x, axis=0, keepdim=True)
print(out4)
# [[2, 2, 0, 1]]

argmin ( axis=None, keepdim=False, dtype='int64', name=None )

Computes the indices of the minimum elements of the input tensor along the provided axis.

Parameters
• x (Tensor) – An input N-D Tensor with type float32, float64, int16, int32, int64, uint8.

• axis (int, optional) – Axis to compute indices along. The effective range is [-R, R), where R is x.ndim. When axis < 0, it works the same way as axis + R. Default is None; in that case the input x is flattened into a 1-D tensor and the index of the minimum value is returned.

• keepdim (bool, optional) – Whether to keep the given axis in the output. If it is True, the dimensions will be the same as the input x, with size one in the axis. Otherwise the output has one fewer dimension than x, since the axis is squeezed. Default is False.

• dtype (str, optional) – Data type of the output tensor which can be int32, int64. The default value is ‘int64’, and it will return the int64 indices.

• name (str, optional) – For details, please refer to Name. Generally, no setting is required. Default: None.

Returns

Tensor, return the tensor of int32 if set dtype is int32, otherwise return the tensor of int64.

Examples

import paddle

x =  paddle.to_tensor([[5,8,9,5],
[0,0,1,7],
[6,9,2,4]])
out1 = paddle.argmin(x)
print(out1) # 4
out2 = paddle.argmin(x, axis=0)
print(out2)
# [1, 1, 1, 2]
out3 = paddle.argmin(x, axis=-1)
print(out3)
# [0, 0, 2]
out4 = paddle.argmin(x, axis=0, keepdim=True)
print(out4)
# [[1, 1, 1, 2]]

argsort ( axis=-1, descending=False, name=None )

Sorts the input along the given axis, and returns the corresponding index tensor for the sorted output values. The default sort order is ascending; if you want descending order, set descending to True.

Parameters
• x (Tensor) – An input N-D Tensor with type float32, float64, int16, int32, int64, uint8.

• axis (int, optional) – Axis to compute indices along. The effective range is [-R, R), where R is Rank(x). when axis<0, it works the same way as axis+R. Default is -1.

• descending (bool, optional) – A flag; if set to True, the algorithm sorts in descending order, otherwise in ascending order. Default is False.

• name (str, optional) – For details, please refer to Name. Generally, no setting is required. Default: None.

Returns

sorted indices (with the same shape as x and with data type int64).

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([[[5,8,9,5],
[0,0,1,7],
[6,9,2,4]],
[[5,2,4,2],
[4,7,7,9],
[1,7,0,6]]],
dtype='float32')
out1 = paddle.argsort(x, axis=-1)
out2 = paddle.argsort(x, axis=0)
out3 = paddle.argsort(x, axis=1)

print(out1)
#[[[0 3 1 2]
#  [0 1 2 3]
#  [2 3 0 1]]
# [[1 3 2 0]
#  [0 1 2 3]
#  [2 0 3 1]]]

print(out2)
#[[[0 1 1 1]
#  [0 0 0 0]
#  [1 1 1 0]]
# [[1 0 0 0]
#  [1 1 1 1]
#  [0 0 0 1]]]

print(out3)
#[[[1 1 1 2]
#  [0 0 2 0]
#  [2 2 0 1]]
# [[2 0 2 0]
#  [1 1 0 2]
#  [0 2 1 1]]]

as_complex ( name=None )

Transform a real tensor to a complex tensor.

The data type of the input tensor is ‘float32’ or ‘float64’, and the data type of the returned tensor is ‘complex64’ or ‘complex128’, respectively.

The shape of the input tensor is (*, 2) (* means arbitrary shape), i.e. the size of the last axis should be 2, representing the real and imaginary parts of a complex number. The shape of the returned tensor is (*,).

Parameters
• x (Tensor) – The input tensor. Data type is ‘float32’ or ‘float64’.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The output. Data type is ‘complex64’ or ‘complex128’, with the same precision as the input.

Return type

Tensor

Examples

import paddle
x = paddle.arange(12, dtype=paddle.float32).reshape([2, 3, 2])
y = paddle.as_complex(x)
print(y)

# Tensor(shape=[2, 3], dtype=complex64, place=Place(gpu:0), stop_gradient=True,
#        [[1j      , (2+3j)  , (4+5j)  ],
#         [(6+7j)  , (8+9j)  , (10+11j)]])

as_real ( name=None )

Transform a complex tensor to a real tensor.

The data type of the input tensor is ‘complex64’ or ‘complex128’, and the data type of the returned tensor is ‘float32’ or ‘float64’, respectively.

When the shape of the input tensor is (*, ) (* means arbitrary shape), the shape of the output tensor is (*, 2), i.e. the shape of the output is the shape of the input appended by an extra 2.

Parameters
• x (Tensor) – The input tensor. Data type is ‘complex64’ or ‘complex128’.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The output. Data type is ‘float32’ or ‘float64’, with the same precision as the input.

Return type

Tensor

Examples

import paddle
x = paddle.arange(12, dtype=paddle.float32).reshape([2, 3, 2])
y = paddle.as_complex(x)
z = paddle.as_real(y)
print(z)

# Tensor(shape=[2, 3, 2], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [[[0. , 1. ],
#          [2. , 3. ],
#          [4. , 5. ]],

#         [[6. , 7. ],
#          [8. , 9. ],
#          [10., 11.]]])

asin ( name=None )

Arcsine Operator.

$$out = \sin^{-1}(x)$$

Parameters
• x (Tensor) – Input of asin operator, an N-D Tensor, with data type float32, float64 or float16.

• with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of asin operator

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.asin(x)
print(out)
# [-0.41151685 -0.20135792  0.10016742  0.30469265]

asinh ( name=None )

Asinh Activation Operator.

$$out = asinh(x)$$

Parameters
• x (Tensor) – Input of Asinh operator, an N-D Tensor, with data type float32, float64 or float16.

• with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Asinh operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.asinh(x)
print(out)
# [-0.39003533, -0.19869010,  0.09983408,  0.29567307]

astype ( dtype )
Notes:

The variable must be a api_fluid_Tensor

Cast a variable to a specified data type.

Parameters
• self (Variable) – The source variable

• dtype – The target data type

Returns

Variable with new dtype

Return type

Variable

Examples

In Static Graph Mode:

import paddle.fluid as fluid

startup_prog = fluid.Program()
main_prog = fluid.Program()
with fluid.program_guard(startup_prog, main_prog):
    original_variable = fluid.data(name="new_variable", shape=[2, 2], dtype='float32')
    new_variable = original_variable.astype('int64')
    print("new var's dtype is: {}".format(new_variable.dtype))


In Dygraph Mode:

import paddle.fluid as fluid
import numpy as np

x = np.ones([2, 2], np.float32)
with fluid.dygraph.guard():
    original_variable = fluid.dygraph.to_variable(x)
    print("original var's dtype is: {}, numpy dtype is {}".format(original_variable.dtype, original_variable.numpy().dtype))
    new_variable = original_variable.astype('int64')
    print("new var's dtype is: {}, numpy dtype is {}".format(new_variable.dtype, new_variable.numpy().dtype))

atan ( name=None )

Arctangent Operator.

$$out = \tan^{-1}(x)$$

Parameters
• x (Tensor) – Input of atan operator, an N-D Tensor, with data type float32, float64 or float16.

• with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of atan operator

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.atan(x)
print(out)
# [-0.38050638 -0.19739556  0.09966865  0.29145679]

atanh ( name=None )

Atanh Activation Operator.

$$out = atanh(x)$$

Parameters
• x (Tensor) – Input of Atanh operator, an N-D Tensor, with data type float32, float64 or float16.

• with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Atanh operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.atanh(x)
print(out)
# [-0.42364895, -0.20273256,  0.10033535,  0.30951962]

bincount ( weights=None, minlength=0, name=None )

Computes frequency of each value in the input tensor.

Parameters
• x (Tensor) – A Tensor with non-negative integer. Should be 1-D tensor.

• weights (Tensor, optional) – Weight for each value in the input tensor. Should have the same shape as input. Default is None.

• minlength (int, optional) – Minimum number of bins. Should be non-negative integer. Default is 0.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The tensor of frequency.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([1, 2, 1, 4, 5])
result1 = paddle.bincount(x)
print(result1) # [0, 2, 1, 0, 1, 1]

w = paddle.to_tensor([2.1, 0.4, 0.1, 0.5, 0.5])
result2 = paddle.bincount(x, weights=w)
print(result2) # [0., 2.19999981, 0.40000001, 0., 0.50000000, 0.50000000]

bitwise_and ( y, out=None, name=None )

It operates bitwise_and on Tensor X and Y .

$Out = X \& Y$

Note

paddle.bitwise_and supports broadcasting. If you want to know more about broadcasting, please refer to user_guide_broadcasting.

Parameters
• x (Tensor) – Input Tensor of bitwise_and . It is a N-D Tensor of bool, uint8, int8, int16, int32, int64

• y (Tensor) – Input Tensor of bitwise_and . It is a N-D Tensor of bool, uint8, int8, int16, int32, int64

• out (Tensor) – Result of bitwise_and . It is a N-D Tensor with the same data type of input Tensor

Returns

Result of bitwise_and . It is a N-D Tensor with the same data type of input Tensor

Return type

Tensor

Examples

import paddle
x = paddle.to_tensor([-5, -1, 1])
y = paddle.to_tensor([4,  2, -3])
res = paddle.bitwise_and(x, y)
print(res)  # [0, 2, 1]

bitwise_not ( out=None, name=None )

It operates bitwise_not on Tensor X .

$Out = \sim X$
Parameters
• x (Tensor) – Input Tensor of bitwise_not . It is a N-D Tensor of bool, uint8, int8, int16, int32, int64

• out (Tensor) – Result of bitwise_not . It is a N-D Tensor with the same data type of input Tensor

Returns

Result of bitwise_not . It is a N-D Tensor with the same data type of input Tensor

Return type

Tensor

Examples

import paddle
x = paddle.to_tensor([-5, -1, 1])
res = paddle.bitwise_not(x)
print(res) # [4, 0, -2]

bitwise_or ( y, out=None, name=None )

It operates bitwise_or on Tensor X and Y .

$Out = X | Y$

Note

paddle.bitwise_or supports broadcasting. If you want to know more about broadcasting, please refer to user_guide_broadcasting.

Parameters
• x (Tensor) – Input Tensor of bitwise_or . It is a N-D Tensor of bool, uint8, int8, int16, int32, int64

• y (Tensor) – Input Tensor of bitwise_or . It is a N-D Tensor of bool, uint8, int8, int16, int32, int64

• out (Tensor) – Result of bitwise_or . It is a N-D Tensor with the same data type of input Tensor

Returns

Result of bitwise_or . It is a N-D Tensor with the same data type of input Tensor

Return type

Tensor

Examples

import paddle
x = paddle.to_tensor([-5, -1, 1])
y = paddle.to_tensor([4,  2, -3])
res = paddle.bitwise_or(x, y)
print(res)  # [-1, -1, -3]

bitwise_xor ( y, out=None, name=None )

It operates bitwise_xor on Tensor X and Y .

$Out = X ^\wedge Y$

Note

paddle.bitwise_xor supports broadcasting. If you want to know more about broadcasting, please refer to user_guide_broadcasting.

Parameters
• x (Tensor) – Input Tensor of bitwise_xor . It is a N-D Tensor of bool, uint8, int8, int16, int32, int64

• y (Tensor) – Input Tensor of bitwise_xor . It is a N-D Tensor of bool, uint8, int8, int16, int32, int64

• out (Tensor) – Result of bitwise_xor . It is a N-D Tensor with the same data type of input Tensor

Returns

Result of bitwise_xor . It is a N-D Tensor with the same data type of input Tensor

Return type

Tensor

Examples

import paddle
x = paddle.to_tensor([-5, -1, 1])
y = paddle.to_tensor([4,  2, -3])
res = paddle.bitwise_xor(x, y)
print(res) # [-1, -3, -4]

bmm ( y, name=None )

Applies batched matrix multiplication to two tensors.

Both input tensors must be three-dimensional and share the same batch size.

If x is a (b, m, k) tensor and y is a (b, k, n) tensor, the output will be a (b, m, n) tensor.

Parameters
• x (Tensor) – The input Tensor.

• y (Tensor) – The input Tensor.

• name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.

Returns

The product Tensor.

Return type

Tensor

Examples

import paddle

# In imperative mode:
# size x: (2, 2, 3) and y: (2, 3, 2)
x = paddle.to_tensor([[[1.0, 1.0, 1.0],
[2.0, 2.0, 2.0]],
[[3.0, 3.0, 3.0],
[4.0, 4.0, 4.0]]])
y = paddle.to_tensor([[[1.0, 1.0],[2.0, 2.0],[3.0, 3.0]],
[[4.0, 4.0],[5.0, 5.0],[6.0, 6.0]]])
out = paddle.bmm(x, y)
# Tensor(shape=[2, 2, 2], dtype=float32, place=Place(cpu), stop_gradient=True,
#        [[[6. , 6. ],
#          [12., 12.]],

#         [[45., 45.],
#          [60., 60.]]])

broadcast_shape ( y_shape )

The function returns the resulting shape of broadcasting two tensors with shapes x_shape and y_shape; please refer to user_guide_broadcasting for more details.

Parameters
• x_shape (list[int]|tuple[int]) – A shape of tensor.

• y_shape (list[int]|tuple[int]) – A shape of tensor.

Returns

list[int], the result shape.

Examples

import paddle

shape = paddle.broadcast_shape([2, 1, 3], [1, 3, 1])
# [2, 3, 3]

# shape = paddle.broadcast_shape([2, 1, 3], [3, 3, 1])
# ValueError (terminated with error message).

broadcast_tensors ( name=None )

This OP broadcasts a list of tensors following broadcast semantics.

Note

If you want know more about broadcasting, please refer to Introduction to Tensor .

Parameters
• input (list|tuple) – input is a Tensor list or Tensor tuple with data type bool, float16, float32, float64, int32, int64. All the Tensors in input must have the same data type. Currently we only support tensors with rank no greater than 5.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The list of broadcasted tensors following the same order as input.

Return type

list(Tensor)

Examples

import paddle
x1 = paddle.rand([1, 2, 3, 4]).astype('float32')
x2 = paddle.rand([1, 2, 1, 4]).astype('float32')
x3 = paddle.rand([1, 1, 3, 1]).astype('float32')
out1, out2, out3 = paddle.broadcast_tensors(input=[x1, x2, x3])
# out1, out2, out3: tensors broadcasted from x1, x2, x3 with shape [1,2,3,4]

broadcast_to ( shape, name=None )

Broadcast the input tensor to a given shape.

Both the number of dimensions of x and the number of elements in shape should be less than or equal to 6. The dimension to broadcast to must have a value 1.

Parameters
• x (Tensor) – The input tensor, its data type is bool, float32, float64, int32 or int64.

• shape (list|tuple|Tensor) – The result shape after broadcasting. The data type is int32. If shape is a list or tuple, all its elements should be integers or 1-D Tensors with the data type int32. If shape is a Tensor, it should be an 1-D Tensor with the data type int32. The value -1 in shape means keeping the corresponding dimension unchanged.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

A Tensor with the given shape. The data type is the same as x.

Return type

N-D Tensor

Examples

import paddle

data = paddle.to_tensor([1, 2, 3], dtype='int32')
out = paddle.broadcast_to(data, shape=[2, 3])
print(out)
# [[1, 2, 3], [1, 2, 3]]

bucketize ( sorted_sequence, out_int32=False, right=False, name=None )

This API is used to find the indices at which the elements of the given x would be placed in the 1-D tensor sorted_sequence, based on its innermost dimension.

Parameters
• x (Tensor) – An input N-D tensor value with type int32, int64, float32, float64.

• sorted_sequence (Tensor) – An input 1-D tensor with type int32, int64, float32, float64. The value of the tensor monotonically increases in the innermost dimension.

• out_int32 (bool, optional) – Data type of the output tensor which can be int32, int64. The default value is False, and it indicates that the output data type is int64.

• right (bool, optional) – Find the upper or lower bounds of the sorted_sequence range in the innermost dimension based on the given x. If the value of the sorted_sequence is nan or inf, return the size of the innermost dimension. The default value is False and it shows the lower bounds.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

Tensor (with the same size as x); return a tensor of int32 if out_int32 is set to True, otherwise return a tensor of int64.

Examples

import paddle

sorted_sequence = paddle.to_tensor([2, 4, 8, 16], dtype='int32')
x = paddle.to_tensor([[0, 8, 4, 16], [-1, 2, 8, 4]], dtype='int32')
out1 = paddle.bucketize(x, sorted_sequence)
print(out1)
# Tensor(shape=[2, 4], dtype=int64, place=CPUPlace, stop_gradient=True,
#        [[0, 2, 1, 3],
#         [0, 0, 2, 1]])
out2 = paddle.bucketize(x, sorted_sequence, right=True)
print(out2)
# Tensor(shape=[2, 4], dtype=int64, place=CPUPlace, stop_gradient=True,
#        [[0, 3, 2, 4],
#         [0, 1, 3, 2]])
out3 = x.bucketize(sorted_sequence)
print(out3)
# Tensor(shape=[2, 4], dtype=int64, place=CPUPlace, stop_gradient=True,
#        [[0, 2, 1, 3],
#         [0, 0, 2, 1]])
out4 = x.bucketize(sorted_sequence, right=True)
print(out4)
# Tensor(shape=[2, 4], dtype=int64, place=CPUPlace, stop_gradient=True,
#        [[0, 3, 2, 4],
#         [0, 1, 3, 2]])

cast ( dtype )

This OP takes in the Tensor x with x.dtype and casts it to the output with dtype. It’s meaningless if the output dtype equals the input dtype, but it’s fine if you do so.

Parameters
• x (Tensor) – An input N-D Tensor with data type bool, float16, float32, float64, int32, int64, uint8.

• dtype (np.dtype|str) – Data type of the output: bool, float16, float32, float64, int8, int32, int64, uint8.

Returns

A Tensor with the same shape as input’s.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([2, 3, 4], 'float64')
y = paddle.cast(x, 'uint8')

ceil ( name=None )

Ceil Operator. Computes ceil of x element-wise.

$$out = \\lceil x \\rceil$$

Parameters
• x (Tensor) – Input of Ceil operator, an N-D Tensor, with data type float32, float64 or float16.

• with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Ceil operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.ceil(x)
print(out)
# [-0. -0.  1.  1.]

ceil_ ( name=None )

Inplace version of ceil API, the output Tensor will be inplaced with input x. Please refer to api_fluid_layers_ceil.
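
A minimal dygraph sketch, mirroring the ceil example above and assuming the usual in-place semantics:

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
x.ceil_()   # x is modified in place
print(x)
# [-0. -0.  1.  1.]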

cholesky ( upper=False, name=None )

Computes the Cholesky decomposition of one symmetric positive-definite matrix or batches of symmetric positive-definite matrices.

If upper is True, the decomposition has the form $$A = U^{T}U$$ , and the returned matrix $$U$$ is upper-triangular. Otherwise, the decomposition has the form $$A = LL^{T}$$ , and the returned matrix $$L$$ is lower-triangular.

Parameters
• x (Tensor) – The input tensor. Its shape should be [*, M, M], where * is zero or more batch dimensions, and matrices on the inner-most 2 dimensions all should be symmetric positive-definite. Its data type should be float32 or float64.

• upper (bool) – The flag indicating whether to return upper or lower triangular matrices. Default: False.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, A Tensor with same shape and data type as x. It represents triangular matrices generated by Cholesky decomposition.

Examples

import paddle

a = paddle.rand([3, 3], dtype="float32")
a_t = paddle.transpose(a, [1, 0])
x = paddle.matmul(a, a_t) + 1e-03

out = paddle.linalg.cholesky(x, upper=False)
print(out)

cholesky_solve ( y, upper=False, name=None )

Solves a linear system of equations A @ X = B, given A’s Cholesky factor matrix u and matrix B.

Inputs x and y are 2-D matrices or batches of 2-D matrices. If the inputs are batches, the outputs are also batches.

Parameters
• x (Tensor) – The input matrix which is upper or lower triangular Cholesky factor of square matrix A. Its shape should be [*, M, M], where * is zero or more batch dimensions. Its data type should be float32 or float64.

• y (Tensor) – Multiple right-hand sides of system of equations. Its shape should be [*, M, K], where * is zero or more batch dimensions. Its data type should be float32 or float64.

• upper (bool, optional) – whether to consider the Cholesky factor as a lower or upper triangular matrix. Default: False.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The solution of the system of equations. Its data type is the same as that of x.

Return type

Tensor

Examples

import paddle

u = paddle.to_tensor([[1, 1, 1],
[0, 2, 1],
[0, 0,-1]], dtype="float64")
b = paddle.to_tensor([[0], [-9], [5]], dtype="float64")
out = paddle.linalg.cholesky_solve(b, u, upper=True)

print(out)
# [-2.5, -7, 9.5]

chunk ( chunks, axis=0, name=None )

Split the input tensor into multiple sub-Tensors.

Parameters
• x (Tensor) – A N-D Tensor. The data type is bool, float16, float32, float64, int32 or int64.

• chunks (int) – The number of tensor to be split along the certain axis.

• axis (int|Tensor, optional) – The axis along which to split, it can be a scalar with type int or a Tensor with shape [1] and data type int32 or int64. If axis < 0, the axis to split along is $$rank(x) + axis$$. Default is 0.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

The list of segmented Tensors.

Return type

list(Tensor)

Examples

import paddle

x = paddle.rand([3, 9, 5])

out0, out1, out2 = paddle.chunk(x, chunks=3, axis=1)
# out0.shape [3, 3, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 3, 5]

# axis is negative, the real axis is (rank(x) + axis) which real
# value is 1.
out0, out1, out2 = paddle.chunk(x, chunks=3, axis=-2)
# out0.shape [3, 3, 5]
# out1.shape [3, 3, 5]
# out2.shape [3, 3, 5]

clip ( min=None, max=None, name=None )

This operator clips all elements in the input into the range [ min, max ] and returns the resulting tensor, computed as the following equation:

$Out = MIN(MAX(x, min), max)$
Parameters
• x (Tensor) – An N-D Tensor with data type float32, float64, int32 or int64.

• min (float|int|Tensor, optional) – The lower bound with type float , int or a Tensor with shape [1] and type int32, float32, float64.

• max (float|int|Tensor, optional) – The upper bound with type float, int or a Tensor with shape [1] and type int32, float32, float64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

A Tensor with the same data type and data shape as input.

Return type

Tensor

Examples

import paddle

x1 = paddle.to_tensor([[1.2, 3.5], [4.5, 6.4]], 'float32')
out1 = paddle.clip(x1, min=3.5, max=5.0)
out2 = paddle.clip(x1, min=2.5)
print(out1)
# [[3.5, 3.5]
# [4.5, 5.0]]
print(out2)
# [[2.5, 3.5]
#  [4.5, 6.4]]

clip_ ( min=None, max=None, name=None )

Inplace version of the clip API; the operation is performed directly on the input Tensor x. Please refer to api_tensor_clip.
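
A minimal usage sketch (dygraph mode assumed, where the inplace Tensor-method form is available):

import paddle

x = paddle.to_tensor([[1.2, 3.5], [4.5, 6.4]], 'float32')
x.clip_(min=3.5, max=5.0)   # x is modified in place
print(x)
# [[3.5, 3.5],
#  [4.5, 5.0]]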

concat ( axis=0, name=None )

Concatenates the input along the axis.

Parameters
• x (list|tuple) – x is a Tensor list or Tensor tuple which is with data type bool, float16, float32, float64, int32, int64, int8, uint8. All the Tensors in x must have same data type.

• axis (int|Tensor, optional) – Specify the axis to operate on the input Tensors. It’s a scalar with data type int or a Tensor with shape [1] and data type int32 or int64. The effective range is [-R, R), where R is Rank(x). When axis < 0, it works the same way as axis+R. Default is 0.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

A Tensor with the same data type as x.

Return type

Tensor

Examples

import paddle

x1 = paddle.to_tensor([[1, 2, 3],
[4, 5, 6]])
x2 = paddle.to_tensor([[11, 12, 13],
[14, 15, 16]])
x3 = paddle.to_tensor([[21, 22],
[23, 24]])
zero = paddle.full(shape=[1], dtype='int32', fill_value=0)
# When the axis is negative, the real axis is (axis + Rank(x))
# As follow, axis is -1, Rank(x) is 2, the real axis is 1
out1 = paddle.concat(x=[x1, x2, x3], axis=-1)
out2 = paddle.concat(x=[x1, x2], axis=0)
out3 = paddle.concat(x=[x1, x2], axis=zero)
# out1
# [[ 1  2  3 11 12 13 21 22]
#  [ 4  5  6 14 15 16 23 24]]
# out2 out3
# [[ 1  2  3]
#  [ 4  5  6]
#  [11 12 13]
#  [14 15 16]]

cond ( p=None, name=None )

Computes the condition number of a matrix or batches of matrices with respect to a matrix norm p.

Parameters
• x (Tensor) – The input tensor could be tensor of shape (*, m, n) where * is zero or more batch dimensions for p in (2, -2), or of shape (*, n, n) where every matrix is invertible for any supported p. And the input data type could be float32 or float64.

• p (float|string, optional) – Order of the norm. Supported values are fro, nuc, 1, -1, 2, -2, inf, -inf. Default value is None, meaning that the order of the norm is 2.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The condition number of the matrix (or batch of matrices); its data type is the same as that of the input Tensor x.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([[1., 0, -1], [0, 1, 0], [1, 0, 1]])

# compute the condition number when p is None
out = paddle.linalg.cond(x)
# Tensor(shape=[1], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [1.41421342])

# compute the condition number when the order of the norm is 'fro'
out_fro = paddle.linalg.cond(x, p='fro')
# Tensor(shape=[1], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [3.16227770])

# compute the condition number when the order of the norm is 'nuc'
out_nuc = paddle.linalg.cond(x, p='nuc')
# Tensor(shape=[1], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [9.24263859])

# compute the condition number when the order of the norm is 1
out_1 = paddle.linalg.cond(x, p=1)
# Tensor(shape=[1], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [2.])

# compute the condition number when the order of the norm is -1
out_minus_1 = paddle.linalg.cond(x, p=-1)
# Tensor(shape=[1], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [1.])

# compute the condition number when the order of the norm is 2
out_2 = paddle.linalg.cond(x, p=2)
# Tensor(shape=[1], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [1.41421342])

# compute the condition number when the order of the norm is -2
out_minus_2 = paddle.linalg.cond(x, p=-2)
# Tensor(shape=[1], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [0.70710683])

# compute the condition number when the order of the norm is inf
out_inf = paddle.linalg.cond(x, p=float("inf"))
# Tensor(shape=[1], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [2.])

# compute the condition number when the order of the norm is -inf
out_minus_inf = paddle.linalg.cond(x, p=-float("inf"))
# Tensor(shape=[1], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [1.])

a = paddle.randn([2, 4, 4])
# Tensor(shape=[2, 4, 4], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [[[-0.06784091, -0.07095790,  1.31792855, -0.58959651],
#          [ 0.20818676, -0.85640615, -0.89998871, -1.47439921],
#          [-0.49132481,  0.42250812, -0.77383220, -2.19794774],
#          [-0.33551720, -1.70003879, -1.09795380, -0.63737559]],

#         [[ 1.12026262, -0.16119350, -1.21157813,  2.74383283],
#          [-0.15999718,  0.18798758, -0.69392562,  1.35720372],
#          [-0.53013402, -2.26304483,  1.40843511, -1.02288902],
#          [ 0.69533503,  2.05261683, -0.02251151, -1.43127477]]])

a_cond_fro = paddle.linalg.cond(a, p='fro')
# Tensor(shape=[2], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [8.86691189 , 75.23817444])

b = paddle.randn([2, 3, 4])
# Tensor(shape=[2, 3, 4], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [[[-0.43754861,  1.80796063, -0.78729683, -1.82264030],
#          [-0.27670753,  0.06620564,  0.29072434, -0.31155765],
#          [ 0.34123746, -0.05444612,  0.05001324, -1.46877074]],

#         [[-0.64331555, -1.51103854, -1.26277697, -0.68024760],
#          [ 2.59375715, -1.06665540,  0.96575671, -0.73330832],
#          [-0.47064447, -0.23945692, -0.95150250, -1.07125998]]])
b_cond_2 = paddle.linalg.cond(b, p=2)
# Tensor(shape=[2], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [6.64228773, 3.89068866])

conj ( name=None )

This function computes the conjugate of the Tensor element-wise.

Parameters
• x (Tensor) – The input Tensor which hold the complex numbers. Optional data types are: complex64, complex128, float32, float64, int32 or int64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The conjugate of the input. The shape and data type are the same as the input's. If the elements of the tensor are of a real type such as float32, float64, int32 or int64, the output is the same as the input.

Return type

out (Tensor)

Examples

import paddle

data=paddle.to_tensor([[1+1j, 2+2j, 3+3j], [4+4j, 5+5j, 6+6j]])
#Tensor(shape=[2, 3], dtype=complex64, place=CUDAPlace(0), stop_gradient=True,
#       [[(1+1j), (2+2j), (3+3j)],
#        [(4+4j), (5+5j), (6+6j)]])

conj_data=paddle.conj(data)
#Tensor(shape=[2, 3], dtype=complex64, place=CUDAPlace(0), stop_gradient=True,
#       [[(1-1j), (2-2j), (3-3j)],
#        [(4-4j), (5-5j), (6-6j)]])

corrcoef ( rowvar=True, name=None )

A correlation coefficient matrix indicates the correlation of each pair of variables in the input matrix. For example, for N-dimensional samples X = [x1, x2, ... xN]^T, the correlation coefficient matrix element Rij is the correlation of xi and xj, and the diagonal element Rii is the correlation of xi with itself, which is always 1.

The relationship between the correlation coefficient matrix R and the covariance matrix C, is

$R_{ij} = \frac{ C_{ij} } { \sqrt{ C_{ii} * C_{jj} } }$

The values of R are between -1 and 1.

Parameters
• x (Tensor) – An N-D (N<=2) Tensor containing multiple variables and observations. By default, each row of x represents a variable. Also see rowvar below.

• rowvar (Bool, optional) – If rowvar is True (default), then each row represents a variable, with observations in the columns. Default: True.

• name (str, optional) – Name of the output. Default is None. It’s used to print debug info for developers. Details: Name.

Returns

The correlation coefficient matrix of the variables.

Examples

import paddle

xt = paddle.rand((3,4))
print(paddle.linalg.corrcoef(xt))

# Tensor(shape=[3, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
# [[ 1.        , -0.73702252,  0.66228950],
# [-0.73702258,  1.        , -0.77104872],
# [ 0.66228974, -0.77104825,  1.        ]])

cos ( name=None )

Cosine Operator. Computes cosine of x element-wise.

Input range is (-inf, inf) and output range is [-1,1].

$$out = cos(x)$$

Parameters
• x (Tensor) – Input of Cos operator, an N-D Tensor, with data type float32, float64 or float16.

• with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Cos operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.cos(x)
print(out)
# [0.92106099 0.98006658 0.99500417 0.95533649]

cosh ( name=None )

Cosh Activation Operator.

$$out = cosh(x)$$

Parameters
• x (Tensor) – Input of Cosh operator, an N-D Tensor, with data type float32, float64 or float16.

• with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Cosh operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.cosh(x)
print(out)
# [1.08107237 1.02006676 1.00500417 1.04533851]

count_nonzero ( axis=None, keepdim=False, name=None )

Counts the number of non-zero values in the tensor x along the specified axis.

Parameters
• x (Tensor) – An N-D Tensor, the data type is bool, float16, float32, float64, int32 or int64.

• axis (int|list|tuple, optional) – The dimensions along which the sum is performed. If None, sum all elements of x and return a Tensor with a single element, otherwise must be in the range $$[-rank(x), rank(x))$$. If $$axis[i] < 0$$, the dimension to reduce is $$rank + axis[i]$$.

• keepdim (bool, optional) – Whether to reserve the reduced dimension in the output Tensor. The result Tensor will have one fewer dimension than the x unless keepdim is true, default value is False.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Results of count operation on the specified axis of input Tensor x, it’s data type is ‘int64’.

Return type

Tensor

Examples

import paddle
# x is a 2-D Tensor:
x = paddle.to_tensor([[0., 1.1, 1.2], [0., 0., 1.3], [0., 0., 0.]])
out1 = paddle.count_nonzero(x)
# [3]
out2 = paddle.count_nonzero(x, axis=0)
# [0, 1, 2]
out3 = paddle.count_nonzero(x, axis=0, keepdim=True)
# [[0, 1, 2]]
out4 = paddle.count_nonzero(x, axis=1)
# [2, 1, 0]
out5 = paddle.count_nonzero(x, axis=1, keepdim=True)
#[[2],
# [1],
# [0]]

# y is a 3-D Tensor:
y = paddle.to_tensor([[[0., 1.1, 1.2], [0., 0., 1.3], [0., 0., 0.]],
[[0., 2.5, 2.6], [0., 0., 2.4], [2.1, 2.2, 2.3]]])
out6 = paddle.count_nonzero(y, axis=[1, 2])
# [3, 6]
out7 = paddle.count_nonzero(y, axis=[0, 1])
# [1, 3, 5]

cov ( rowvar=True, ddof=True, fweights=None, aweights=None, name=None )

Estimate the covariance matrix of the input variables, given data and weights.

A covariance matrix is a square matrix indicating the covariance of each pair of variables in the input matrix. For example, for N-dimensional samples X = [x1, x2, ... xN]^T, the covariance matrix element Cij is the covariance of xi and xj, and the diagonal element Cii is the variance of xi itself.

Parameters
• x (Tensor) – An N-D (N<=2) Tensor containing multiple variables and observations. By default, each row of x represents a variable. Also see rowvar below.

• rowvar (Bool, optional) – If rowvar is True (default), then each row represents a variable, with observations in the columns. Default: True

• ddof (Bool, optional) – If ddof=True, the unbiased estimate is returned; if ddof=False, the simple average is returned. Default: True

• fweights (Tensor, optional) – 1-D Tensor of integer frequency weights; The number of times each observation vector should be repeated. Default: None

• aweights (Tensor, optional) – 1-D Tensor of observation vector weights. Larger weights indicate that the corresponding observations are more important. Default: None

• name (str, optional) – Name of the output. Default is None. It’s used to print debug info for developers. Details: Name

Returns

The covariance matrix Tensor of the variables.

Return type

Tensor

Examples:

import paddle

xt = paddle.rand((3,4))
paddle.linalg.cov(xt)

'''
Tensor(shape=[3, 3], dtype=float64, place=CUDAPlace(0), stop_gradient=True,
[[0.07918842, 0.06127326, 0.01493049],
[0.06127326, 0.06166256, 0.00302668],
[0.01493049, 0.00302668, 0.01632146]])
'''

cpu ( )

Variable should not have cpu() and cuda() interfaces, but providing them greatly facilitates dynamic-to-static conversion (dy2static). This method therefore does nothing.

cross ( y, axis=9, name=None )

Computes the cross product between two tensors along an axis.

Inputs must have the same shape, and the size of the axis along which the cross product is computed must be 3. If axis is not given, it defaults to the first axis found with size 3.

Parameters
• x (Tensor) – The first input tensor.

• y (Tensor) – The second input tensor.

• axis (int, optional) – The axis along which to compute the cross product. The default value 9 indicates that the first axis found with size 3 is used.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor. A Tensor with same data type as x.

Examples

import paddle

x = paddle.to_tensor([[1.0, 1.0, 1.0],
[2.0, 2.0, 2.0],
[3.0, 3.0, 3.0]])
y = paddle.to_tensor([[1.0, 1.0, 1.0],
[1.0, 1.0, 1.0],
[1.0, 1.0, 1.0]])

z1 = paddle.cross(x, y)
# [[-1. -1. -1.]
#  [ 2.  2.  2.]
#  [-1. -1. -1.]]

z2 = paddle.cross(x, y, axis=1)
# [[0. 0. 0.]
#  [0. 0. 0.]
#  [0. 0. 0.]]

cuda ( )

Variable should not have cpu() and cuda() interfaces, but providing them greatly facilitates dynamic-to-static conversion (dy2static). This method therefore does nothing.

cumprod ( dim=None, dtype=None, name=None )

Compute the cumulative product of the input tensor x along a given dimension dim.

Note

The first element of the result is the same as the first element of the input.

Parameters
• x (Tensor) – the input tensor on which the cumulative product is computed.

• dim (int) – the dimension along which the input tensor will be accumulated. It needs to be in the range [-x.rank, x.rank), where x.rank is the number of dimensions of the input tensor x, and -1 means the last dimension.

• dtype (str, optional) – The data type of the output tensor, can be float32, float64, int32, int64, complex64, complex128. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. The default value is None.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the result of cumprod operator.

Examples

import paddle

data = paddle.arange(12)
data = paddle.reshape(data, (3, 4))
# [[ 0  1  2  3 ]
#  [ 4  5  6  7 ]
#  [ 8  9  10 11]]

y = paddle.cumprod(data, dim=0)
# [[ 0  1   2   3]
#  [ 0  5  12  21]
#  [ 0 45 120 231]]

y = paddle.cumprod(data, dim=-1)
# [[ 0   0   0    0]
#  [ 4  20 120  840]
#  [ 8  72 720 7920]]

y = paddle.cumprod(data, dim=1, dtype='float64')
# [[ 0.   0.   0.    0.]
#  [ 4.  20. 120.  840.]
#  [ 8.  72. 720. 7920.]]

print(y.dtype)
# paddle.float64

cumsum ( axis=None, dtype=None, name=None )

The cumulative sum of the elements along a given axis.

Note

The first element of the result is the same as the first element of the input.

Parameters
• x (Tensor) – The input tensor on which the cumulative sum is computed.

• axis (int, optional) – The dimension to accumulate along. -1 means the last dimension. The default (None) is to compute the cumsum over the flattened array.

• dtype (str, optional) – The data type of the output tensor, can be float32, float64, int32, int64. If specified, the input tensor is casted to dtype before the operation is performed. This is useful for preventing data type overflows. The default value is None.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the result of cumsum operator.

Examples

import paddle

data = paddle.arange(12)
data = paddle.reshape(data, (3, 4))

y = paddle.cumsum(data)
# [ 0  1  3  6 10 15 21 28 36 45 55 66]

y = paddle.cumsum(data, axis=0)
# [[ 0  1  2  3]
#  [ 4  6  8 10]
#  [12 15 18 21]]

y = paddle.cumsum(data, axis=-1)
# [[ 0  1  3  6]
#  [ 4  9 15 22]
#  [ 8 17 27 38]]

y = paddle.cumsum(data, dtype='float64')
print(y.dtype)
# paddle.float64

deg2rad ( name=None )
Convert each element of the input x from degrees to radians.
$deg2rad(x)=\pi * x / 180$
Parameters
• x (Tensor) – An N-D Tensor, the data type is float32, float64, int32, int64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

An N-D Tensor, the shape and data type is the same with input (The output data type is float32 when the input data type is int).

Return type

out (Tensor)

Examples

import paddle
x1 = paddle.to_tensor([180.0, -180.0, 360.0, -360.0, 90.0, -90.0])
result1 = paddle.deg2rad(x1)
print(result1)
# Tensor(shape=[6], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#         [3.14159274, -3.14159274,  6.28318548, -6.28318548,  1.57079637,
#           -1.57079637])

x2 = paddle.to_tensor(180)
result2 = paddle.deg2rad(x2)
print(result2)
# Tensor(shape=[1], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#         [3.14159274])

diagonal ( offset=0, axis1=0, axis2=1, name=None )

This OP computes the diagonals of the input tensor x.

If x is 2-D, the diagonal is returned. If x has more dimensions, diagonals are taken from the 2-D planes specified by axis1 and axis2. By default, the 2-D planes are formed by the first and second axes of the input tensor x.

The argument offset determines where diagonals are taken from input tensor x:

• If offset = 0, it is the main diagonal.

• If offset > 0, it is above the main diagonal.

• If offset < 0, it is below the main diagonal.

Parameters
• x (Tensor) – The input tensor x. Must be at least 2-dimensional. The input data type should be bool, int32, int64, float16, float32, float64.

• offset (int, optional) – Which diagonals in input tensor x will be taken. Default: 0 (main diagonals).

• axis1 (int, optional) – The first axis with respect to take diagonal. Default: 0.

• axis2 (int, optional) – The second axis with respect to take diagonal. Default: 1.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

a partial view of the input tensor along the two specified axes; the output data type is the same as the input data type.

Return type

Tensor

Examples

import paddle

x = paddle.rand([2,2,3],'float32')
print(x)
# Tensor(shape=[2, 2, 3], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#        [[[0.45661032, 0.03751532, 0.90191704],
#          [0.43760979, 0.86177313, 0.65221709]],

#         [[0.17020577, 0.00259554, 0.28954273],
#          [0.51795638, 0.27325270, 0.18117726]]])

out1 = paddle.diagonal(x)
print(out1)
#Tensor(shape=[3, 2], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#       [[0.45661032, 0.51795638],
#        [0.03751532, 0.27325270],
#        [0.90191704, 0.18117726]])

out2 = paddle.diagonal(x, offset=0, axis1=2, axis2=1)
print(out2)
#Tensor(shape=[2, 2], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#       [[0.45661032, 0.86177313],
#        [0.17020577, 0.27325270]])

out3 = paddle.diagonal(x, offset=1, axis1=0, axis2=1)
print(out3)
#Tensor(shape=[3, 1], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#       [[0.43760979],
#        [0.86177313],
#        [0.65221709]])

out4 = paddle.diagonal(x, offset=0, axis1=1, axis2=2)
print(out4)
#Tensor(shape=[2, 2], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#       [[0.45661032, 0.86177313],
#        [0.17020577, 0.27325270]])

diff ( n=1, axis=- 1, prepend=None, append=None, name=None )

Computes the n-th forward difference along the given axis. The first-order difference is computed by using the following formula:

$out[i] = x[i+1] - x[i]$

Higher-order differences are computed by applying paddle.diff() recursively; only n=1 is currently supported.

Parameters
• x (Tensor) – The input tensor to compute the forward difference on

• n (int, optional) – The number of times to recursively compute the difference. Only n=1 is supported. Default: 1

• axis (int, optional) – The axis to compute the difference along. Default:-1

• prepend (Tensor, optional) – The tensor to prepend to the input along axis before computing the difference. Its number of dimensions must match that of x, and its shape must match x's shape on every axis except axis.

• append (Tensor, optional) – The tensor to append to the input along axis before computing the difference. Its number of dimensions must match that of x, and its shape must match x's shape on every axis except axis.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The output tensor with same dtype with x.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([1, 4, 5, 2])
out = paddle.diff(x)
print(out)
# out:
# [3, 1, -3]

y = paddle.to_tensor([7, 9])
out = paddle.diff(x, append=y)
print(out)
# out:
# [3, 1, -3, 5, 2]

z = paddle.to_tensor([[1, 2, 3], [4, 5, 6]])
out = paddle.diff(z, axis=0)
print(out)
# out:
# [[3, 3, 3]]
out = paddle.diff(z, axis=1)
print(out)
# out:
# [[1, 1], [1, 1]]

digamma ( name=None )

Calculates the digamma of the given input tensor, element-wise.

$Out = \Psi(x) = \frac{ \Gamma^{'}(x) }{ \Gamma(x) }$
Parameters
• x (Tensor) – Input Tensor. Must be one of the following types: float32, float64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the digamma of the input Tensor, the shape and data type is the same with input.

Examples

import paddle

data = paddle.to_tensor([[1, 1.5], [0, -2.2]], dtype='float32')
res = paddle.digamma(data)
print(res)
# Tensor(shape=[2, 2], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#       [[-0.57721591,  0.03648996],
#        [ nan       ,  5.32286835]])

dist ( y, p=2, name=None )

This OP returns the p-norm of (x - y). It is not a norm in the strict sense, but is used as a measure of distance. The shapes of x and y must be broadcastable. The definition is as follows; for details, please refer to numpy's broadcasting:

• Each input has at least one dimension.

• Match the two input dimensions from back to front, the dimension sizes must either be equal, one of them is 1, or one of them does not exist.

Where, z = x - y, the shapes of x and y are broadcastable, then the shape of z can be obtained as follows:

1. If the number of dimensions of x and y are not equal, prepend 1 to the dimensions of the tensor with fewer dimensions.

For example, The shape of x is [8, 1, 6, 1], the shape of y is [7, 1, 5], prepend 1 to the dimension of y.

x (4-D Tensor): 8 x 1 x 6 x 1

y (4-D Tensor): 1 x 7 x 1 x 5

2. Determine the size of each dimension of the output z: choose the maximum value from the two input dimensions.

z (4-D Tensor): 8 x 7 x 6 x 5

If the number of dimensions of the two inputs are the same, the size of the output can be directly determined in step 2. When p takes different values, the norm formula is as follows:

When p = 0, defining $0^0=0$, the zero-norm of z is simply the number of non-zero elements of z.

$||z||_{0}=\lim_{p \rightarrow 0}\sum_{i=1}^{m}|z_i|^{p}$

When p = inf, the inf-norm of z is the maximum element of the absolute value of z.

$||z||_\infty=\max_i |z_i|$

When p = -inf, the negative-inf-norm of z is the minimum element of the absolute value of z.

$||z||_{-\infty}=\min_i |z_i|$

Otherwise, the p-norm of z follows the formula,

$||z||_{p}=(\sum_{i=1}^{m}|z_i|^p)^{\frac{1}{p}}$
Parameters
• x (Tensor) – 1-D to 6-D Tensor, its data type is float32 or float64.

• y (Tensor) – 1-D to 6-D Tensor, its data type is float32 or float64.

• p (float, optional) – The norm to be computed, its data type is float32 or float64. Default: 2.

Returns

Tensor that is the p-norm of (x - y).

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([[3, 3],[3, 3]], dtype="float32")
y = paddle.to_tensor([[3, 3],[3, 1]], dtype="float32")
out = paddle.dist(x, y, 0)
print(out) # out = [1.]

out = paddle.dist(x, y, 2)
print(out) # out = [2.]

out = paddle.dist(x, y, float("inf"))
print(out) # out = [2.]

out = paddle.dist(x, y, float("-inf"))
print(out) # out = [0.]

divide ( y, name=None )

Divide two tensors element-wise. The equation is:

$out = x / y$

Note

paddle.divide supports broadcasting. If you want to know more about broadcasting, please refer to user_guide_broadcasting .

Parameters
• x (Tensor) – the input tensor, it’s data type should be float32, float64, int32, int64.

• y (Tensor) – the input tensor, it’s data type should be float32, float64, int32, int64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. If x, y have different shapes and are “broadcastable”, the resulting tensor shape is the shape of x and y after broadcasting. If x, y have the same shape, its shape is the same as x and y.

Examples

import paddle

x = paddle.to_tensor([2, 3, 4], dtype='float64')
y = paddle.to_tensor([1, 5, 2], dtype='float64')
z = paddle.divide(x, y)
print(z)  # [2., 0.6, 2.]

dot ( y, name=None )

This operator calculates inner product for vectors.

Note

1-D and 2-D Tensors are supported. For 2-D inputs, the first dimension is the batch dimension, which means the vectors of multiple batches are dotted.

Parameters
• x (Tensor) – 1-D or 2-D Tensor. Its dtype should be float32, float64, int32, int64

• y (Tensor) – 1-D or 2-D Tensor. Its dtype should be float32, float64, int32, int64

• name (str, optional) – Name of the output. Default is None. It’s used to print debug info for developers. Details: Name

Returns

the calculated result Tensor.

Return type

Tensor

Examples:

import paddle

# 1-D Tensor * 1-D Tensor
x = paddle.to_tensor([1, 2, 3])
y = paddle.to_tensor([4, 5, 6])
z = paddle.dot(x, y)
print(z)  # [32]

# 2-D Tensor * 2-D Tensor
x = paddle.to_tensor([[1, 2, 3], [2, 4, 6]])
y = paddle.to_tensor([[4, 5, 6], [4, 5, 6]])
z = paddle.dot(x, y)
print(z)  # [[32], [64]]

eig ( name=None )

Performs the eigenvalue decomposition of a square matrix or a batch of square matrices.

Note

• If the matrix is a Hermitian or a real symmetric matrix, please use paddle.linalg.eigh instead, which is much faster.

• If only eigenvalues are needed, please use paddle.linalg.eigvals instead.

• If the matrix is of any shape, please use paddle.linalg.svd.

• This API is only supported on CPU device.

• The output datatype is always complex for both real and complex input.

Parameters
• x (Tensor) – A tensor with shape [*, N, N]. The data type of x should be one of float32, float64, complex64 or complex128.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

Eigenvalues (Tensor): a tensor with shape [*, N] containing the eigenvalues. Eigenvectors (Tensor): a tensor with shape [*, N, N] containing the eigenvectors.

Return type

Eigenvalues (Tensor), Eigenvectors (Tensor)

Examples

import paddle

paddle.device.set_device("cpu")

x = paddle.to_tensor([[1.6707249, 7.2249975, 6.5045543],
[9.956216,  8.749598,  6.066444 ],
[4.4251957, 1.7983172, 0.370647 ]])
w, v = paddle.linalg.eig(x)
print(v)
# Tensor(shape=[3, 3], dtype=complex128, place=CPUPlace, stop_gradient=False,
#       [[(-0.5061363550800655+0j) , (-0.7971760990842826+0j) ,
#         (0.18518077798279986+0j)],
#        [(-0.8308237755993192+0j) ,  (0.3463813401919749+0j) ,
#         (-0.6837005269141947+0j) ],
#        [(-0.23142567697893396+0j),  (0.4944999840400175+0j) ,
#         (0.7058765252952796+0j) ]])

print(w)
# Tensor(shape=[3], dtype=complex128, place=CPUPlace, stop_gradient=False,
#       [ (16.50471283351188+0j)  , (-5.5034820550763515+0j) ,
#         (-0.21026087843552282+0j)])

eigvals ( name=None )

Compute the eigenvalues of one or more general matrices.

Warning

The gradient kernel of this operator has not yet been developed. If you need back propagation through this operator, please replace it with paddle.linalg.eig.

Parameters
• x (Tensor) – A square matrix or a batch of square matrices whose eigenvalues will be computed. Its shape should be [*, M, M], where * is zero or more batch dimensions. Its data type should be float32, float64, complex64, or complex128.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, A tensor containing the unsorted eigenvalues which has the same batch dimensions with x. The eigenvalues are complex-valued even when x is real.

Examples

import paddle

paddle.set_device("cpu")
paddle.seed(1234)

x = paddle.rand(shape=[3, 3], dtype='float64')
# [[0.02773777, 0.93004224, 0.06911496],
#  [0.24831591, 0.45733623, 0.07717843],
#  [0.48016702, 0.14235102, 0.42620817]])

print(paddle.linalg.eigvals(x))
# [(-0.27078833542132674+0j), (0.29962280156230725+0j), (0.8824477020120244+0j)] #complex128

eigvalsh ( UPLO='L', name=None )

Computes the eigenvalues of a complex Hermitian (conjugate symmetric) or a real symmetric matrix.

Parameters
• x (Tensor) – A tensor with shape $$[*, M, M]$$. The data type of the input Tensor x should be one of float32, float64, complex64, complex128.

• UPLO (str, optional) – Whether to use the lower triangular part of the matrix ('L', default) or the upper triangular part ('U').

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The tensor eigenvalues in ascending order.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([[1, -2j], [2j, 5]])
out_value = paddle.eigvalsh(x, UPLO='L')
print(out_value)
# Tensor(shape=[2], dtype=float32, place=Place(cpu), stop_gradient=True,
#        [0.17157286, 5.82842731])

equal ( y, name=None )

This layer returns the truth value of $$x == y$$ elementwise.

Note

The output has no gradient.

Parameters
• x (Tensor) – Tensor, data type is bool, float32, float64, int32, int64.

• y (Tensor) – Tensor, data type is bool, float32, float64, int32, int64.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

output Tensor, it’s shape is the same as the input’s Tensor, and the data type is bool. The result of this op is stop_gradient.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([1, 2, 3])
y = paddle.to_tensor([1, 3, 2])
result1 = paddle.equal(x, y)
print(result1)  # result1 = [True False False]

equal_all ( y, name=None )

Returns the truth value of $$x == y$$. True if two inputs have the same elements, False otherwise.

Note

The output has no gradient.

Parameters
• x (Tensor) – Tensor, data type is bool, float32, float64, int32, int64.

• y (Tensor) – Tensor, data type is bool, float32, float64, int32, int64.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

output Tensor, data type is bool, value is [False] or [True].

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([1, 2, 3])
y = paddle.to_tensor([1, 2, 3])
z = paddle.to_tensor([1, 4, 3])
result1 = paddle.equal_all(x, y)
print(result1) # result1 = [True ]
result2 = paddle.equal_all(x, z)
print(result2) # result2 = [False ]

erf ( name=None )

Erf Operator. For more details, see the error function.

Equation:
$out = \frac{2}{\sqrt{\pi}} \int_{0}^{x}e^{- \eta^{2}}d\eta$
Parameters
• x (Tensor) – The input tensor, it’s data type should be float32, float64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The output of Erf; its dtype (float32 or float64) and shape are the same as the input's.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.erf(x)
print(out)
# [-0.42839236 -0.22270259  0.11246292  0.32862676]

erfinv ( name=None )

The inverse error function of x. Please refer to erf

$erfinv(erf(x)) = x.$
Parameters
• x (Tensor) – An N-D Tensor, the data type is float32, float64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

out (Tensor), an N-D Tensor, the shape and data type is the same with input.

Example

import paddle

x = paddle.to_tensor([0, 0.5, -1.], dtype="float32")
out = paddle.erfinv(x)
# out: [0, 0.4769, -inf]

erfinv_ ( name=None )

Inplace version of the erfinv API; the operation is performed directly on the input Tensor x. Please refer to api_tensor_erfinv.
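
A short sketch of the inplace call (dygraph mode assumed; values match the erfinv example above):

import paddle

x = paddle.to_tensor([0, 0.5, -1.], dtype="float32")
x.erfinv_()   # x is modified in place
# x: [0, 0.4769, -inf]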

exp ( name=None )

Exp Operator. Computes exp of x element-wise with a natural number $$e$$ as the base.

$$out = e^x$$

Parameters
• x (Tensor) – Input of Exp operator, an N-D Tensor, with data type float32, float64 or float16.

• with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Exp operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.exp(x)
print(out)
# [0.67032005 0.81873075 1.10517092 1.34985881]

exp_ ( name=None )

Inplace version of the exp API; the operation is performed directly on the input Tensor x. Please refer to api_fluid_layers_exp.
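
A short sketch of the inplace call (dygraph mode assumed; values match the exp example above):

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
x.exp_()   # x is modified in place
# x: [0.67032005, 0.81873075, 1.10517092, 1.34985881]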

expand ( shape, name=None )

Expand the input tensor to a given shape.

Both the number of dimensions of x and the number of elements in shape should be less than or equal to 6. And the number of dimensions of x should be less than the number of elements in shape. The dimension to expand must have a value 1.

Parameters
• x (Tensor) – The input Tensor, its data type is bool, float32, float64, int32 or int64.

• shape (list|tuple|Tensor) – The result shape after expanding. The data type is int32. If shape is a list or tuple, all its elements should be integers or 1-D Tensors with the data type int32. If shape is a Tensor, it should be an 1-D Tensor with the data type int32. The value -1 in shape means keeping the corresponding dimension unchanged.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

A Tensor with the given shape. The data type is the same as x.

Return type

N-D Tensor

Examples

import paddle

data = paddle.to_tensor([1, 2, 3], dtype='int32')
out = paddle.expand(data, shape=[2, 3])
print(out)
# [[1, 2, 3], [1, 2, 3]]

expand_as ( y, name=None )

Expand the input tensor x to the same shape as the input tensor y.

Both the number of dimensions of x and y must be less than or equal to 6, and the number of dimensions of y must be greater than or equal to that of x. The dimension to expand must have a value of 1.

Parameters
• x (Tensor) – The input tensor, its data type is bool, float32, float64, int32 or int64.

• y (Tensor) – The input tensor that gives the shape to expand to.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

A Tensor with the same shape as y. The data type is the same as x.

Return type

N-D Tensor

Examples

import paddle

data_x = paddle.to_tensor([1, 2, 3], 'int32')
data_y = paddle.to_tensor([[1, 2, 3], [4, 5, 6]], 'int32')
out = paddle.expand_as(data_x, data_y)
print(out)
# Tensor(shape=[2, 3], dtype=int32, place=Place(gpu:0), stop_gradient=True,
#        [[1, 2, 3],
#         [1, 2, 3]])

exponential_ ( lam=1.0, name=None )

This inplace OP fills the input Tensor x with random numbers drawn from an exponential distribution.

lam is $$\lambda$$ parameter of Exponential Distribution.

$f(x) = \lambda e^{-\lambda x}$
Parameters
• x (Tensor) – Input tensor. The data type should be float32, float64.

• lam (float, optional) – $$\lambda$$ parameter of the exponential distribution. Default: 1.0.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

Input Tensor x.

Return type

Tensor

Examples

import paddle
paddle.set_device('cpu')
paddle.seed(100)

x = paddle.empty([2,3])
x.exponential_()
# [[0.80643415, 0.23211166, 0.01169797],
#  [0.72520673, 0.45208144, 0.30234432]]

flatten ( start_axis=0, stop_axis=- 1, name=None )

Flattens a contiguous range of axes in a tensor according to start_axis and stop_axis.

Note

The output Tensor will share data with origin Tensor and doesn’t have a Tensor copy in dygraph mode. If you want to use the Tensor copy version, please use Tensor.clone like flatten_clone_x = x.flatten().clone().

For Example:

Case 1:

Given
X.shape = (3, 100, 100, 4)

and
start_axis = 1
end_axis = 2

We get:
Out.shape = (3, 100 * 100, 4)

Case 2:

Given
X.shape = (3, 100, 100, 4)

and
start_axis = 0
stop_axis = -1

We get:
Out.shape = (3 * 100 * 100 * 4, )

Parameters
• x (Tensor) – A tensor with number of dimensions >= axis. A tensor with data type float32, float64, int8, int32, int64, uint8.

• start_axis (int) – the start axis to flatten

• stop_axis (int) – the stop axis to flatten

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

A tensor with the contents of the input tensor, with the specified axes flattened between start_axis and stop_axis. Its data type is the same as the input x.

Return type

Tensor

Examples

import paddle

image_shape=(2, 3, 4, 4)

x = paddle.arange(end=image_shape[0] * image_shape[1] * image_shape[2] * image_shape[3])
img = paddle.reshape(x, image_shape)

out = paddle.flatten(img, start_axis=1, stop_axis=2)
# out shape is [2, 12, 4]

# out shares data with img in dygraph mode
img[0, 0, 0, 0] = -1
print(out[0, 0, 0]) # [-1]

flatten_ ( start_axis=0, stop_axis=- 1, name=None )

Inplace version of the flatten API; the operation is performed directly on the input Tensor x. Please refer to api_tensor_flatten.
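
A short sketch of the inplace call (dygraph mode assumed; shapes match the flatten example above):

import paddle

img = paddle.rand([2, 3, 4, 4])
img.flatten_(start_axis=1, stop_axis=2)   # img is reshaped in place
print(img.shape)
# [2, 12, 4]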

flip ( axis, name=None )

Reverse the order of an n-D tensor along the given axis (or axes) in axis.

Parameters
• x (Tensor) – A Tensor(or LoDTensor) with shape $$[N_1, N_2,..., N_k]$$ . The data type of the input Tensor x should be float32, float64, int32, int64, bool.

• axis (list|tuple|int) – The axis(axes) to flip on. Negative indices for indexing from the end are accepted.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor or LoDTensor calculated by flip layer. The data type is same with input x.

Return type

Tensor

Examples

import paddle

image_shape=(3, 2, 2)
img = paddle.arange(image_shape[0] * image_shape[1] * image_shape[2]).reshape(image_shape)
tmp = paddle.flip(img, [0,1])
print(tmp) # [[[10,11],[8, 9]], [[6, 7],[4, 5]], [[2, 3],[0, 1]]]

out = paddle.flip(tmp,-1)
print(out) # [[[11,10],[9, 8]], [[7, 6],[5, 4]], [[3, 2],[1, 0]]]

floor ( name=None )

Floor Activation Operator. Computes floor of x element-wise.

$$out = \lfloor x \rfloor$$

Parameters
• x (Tensor) – Input of Floor operator, an N-D Tensor, with data type float32, float64 or float16.

• with_quant_attr (BOOLEAN) – Whether the operator has attributes used by quantization.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Output of Floor operator, a Tensor with shape same as input.

Return type

out (Tensor)

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.floor(x)
print(out)
# [-1. -1.  0.  0.]

floor_ ( name=None )

Inplace version of the floor API; the operation is performed directly on the input Tensor x. Please refer to api_fluid_layers_floor.
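
A short sketch of the inplace call (dygraph mode assumed; values match the floor example above):

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
x.floor_()   # x is modified in place
# x: [-1., -1., 0., 0.]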

floor_divide ( y, name=None )

Floor divide two tensors element-wise. The equation is:

$out = x // y$

Note

paddle.floor_divide supports broadcasting. If you want to know more about broadcasting, please refer to user_guide_broadcasting .

Parameters
• x (Tensor) – the input tensor, it’s data type should be int32, int64.

• y (Tensor) – the input tensor, it’s data type should be int32, int64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. It’s dimension equals with $x$.

Examples

import paddle

x = paddle.to_tensor([2, 3, 8, 7])
y = paddle.to_tensor([1, 5, 3, 3])
z = paddle.floor_divide(x, y)
print(z)  # [2, 0, 2, 2]

floor_mod ( y, name=None )

Mod two tensors element-wise. The equation is:

$out = x \% y$

Note

paddle.remainder supports broadcasting. If you want to know more about broadcasting, please refer to user_guide_broadcasting .

Parameters
• x (Tensor) – the input tensor, it’s data type should be float16, float32, float64, int32, int64.

• y (Tensor) – the input tensor, it’s data type should be float16, float32, float64, int32, int64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. If x, y have different shapes and are “broadcastable”, the resulting tensor shape is the shape of x and y after broadcasting. If x, y have the same shape, its shape is the same as x and y.

Examples

import paddle

x = paddle.to_tensor([2, 3, 8, 7])
y = paddle.to_tensor([1, 5, 3, 3])
z = paddle.remainder(x, y)
print(z)  # [0, 3, 2, 1]

fmax ( y, name=None )

Compares the elements at corresponding positions of the two tensors and returns a new tensor containing the element-wise maximum. If one of the two elements is NaN, the other value is returned; if both are NaN, the first NaN value is returned. The equation is:

$out = fmax(x, y)$

Note

paddle.fmax supports broadcasting. If you want to know more about broadcasting, please refer to user_guide_broadcasting .

Parameters
• x (Tensor) – the input tensor, it’s data type should be float16, float32, float64, int32, int64.

• y (Tensor) – the input tensor, it’s data type should be float16, float32, float64, int32, int64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. If x, y have different shapes and are “broadcastable”, the resulting tensor shape is the shape of x and y after broadcasting. If x, y have the same shape, its shape is the same as x and y.

Examples

import paddle

x = paddle.to_tensor([[1, 2], [7, 8]])
y = paddle.to_tensor([[3, 4], [5, 6]])
res = paddle.fmax(x, y)
print(res)
# Tensor(shape=[2, 2], dtype=int64, place=Place(cpu), stop_gradient=True,
#        [[3, 4],
#         [7, 8]])

x = paddle.to_tensor([[1, 2, 3], [1, 2, 3]])
y = paddle.to_tensor([3, 0, 4])
res = paddle.fmax(x, y)
print(res)
# Tensor(shape=[2, 3], dtype=int64, place=Place(cpu), stop_gradient=True,
#        [[3, 2, 4],
#         [3, 2, 4]])

x = paddle.to_tensor([2, 3, 5], dtype='float32')
y = paddle.to_tensor([1, float("nan"), float("nan")], dtype='float32')
res = paddle.fmax(x, y)
print(res)
# Tensor(shape=[3], dtype=float32, place=Place(cpu), stop_gradient=True,
#        [2., 3., 5.])

x = paddle.to_tensor([5, 3, float("inf")], dtype='float32')
y = paddle.to_tensor([1, -float("inf"), 5], dtype='float32')
res = paddle.fmax(x, y)
print(res)
# Tensor(shape=[3], dtype=float32, place=Place(cpu), stop_gradient=True,
#        [5.  , 3.  , inf.])

fmin ( y, name=None )

Compares the elements at corresponding positions of the two tensors and returns a new tensor containing the element-wise minimum. If one of the two elements is NaN, the other value is returned; if both are NaN, the first NaN value is returned. The equation is:

$out = fmin(x, y)$

Note

paddle.fmin supports broadcasting. If you want to know more about broadcasting, please refer to user_guide_broadcasting .

Parameters
• x (Tensor) – the input tensor, it’s data type should be float16, float32, float64, int32, int64.

• y (Tensor) – the input tensor, it’s data type should be float16, float32, float64, int32, int64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. If x, y have different shapes and are “broadcastable”, the resulting tensor shape is the shape of x and y after broadcasting. If x, y have the same shape, its shape is the same as x and y.

Examples

import paddle

x = paddle.to_tensor([[1, 2], [7, 8]])
y = paddle.to_tensor([[3, 4], [5, 6]])
res = paddle.fmin(x, y)
print(res)
# Tensor(shape=[2, 2], dtype=int64, place=Place(cpu), stop_gradient=True,
#        [[1, 2],
#         [5, 6]])

x = paddle.to_tensor([[[1, 2, 3], [1, 2, 3]]])
y = paddle.to_tensor([3, 0, 4])
res = paddle.fmin(x, y)
print(res)
# Tensor(shape=[1, 2, 3], dtype=int64, place=Place(cpu), stop_gradient=True,
#        [[[1, 0, 3],
#          [1, 0, 3]]])

x = paddle.to_tensor([2, 3, 5], dtype='float32')
y = paddle.to_tensor([1, float("nan"), float("nan")], dtype='float32')
res = paddle.fmin(x, y)
print(res)
# Tensor(shape=[3], dtype=float32, place=Place(cpu), stop_gradient=True,
#        [1., 3., 5.])

x = paddle.to_tensor([5, 3, float("inf")], dtype='float64')
y = paddle.to_tensor([1, -float("inf"), 5], dtype='float64')
res = paddle.fmin(x, y)
print(res)
# Tensor(shape=[3], dtype=float64, place=Place(cpu), stop_gradient=True,
#        [ 1.  , -inf.,  5.  ])

frac ( name=None )

This API is used to return the fractional portion of each element in input.

Parameters
• x (Tensor) – The input tensor, which data type should be int32, int64, float32, float64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The output Tensor of frac.

Return type

Tensor

Examples

import paddle

input = paddle.to_tensor([[12.22000003, -1.02999997],
[-0.54999995, 0.66000003]])
output = paddle.frac(input)
print(output)
# Tensor(shape=[2, 2], dtype=float32, place=Place(cpu), stop_gradient=True,
#        [[ 0.22000003, -0.02999997],
#         [-0.54999995,  0.66000003]])

gather ( index, axis=None, name=None )

Output is obtained by gathering entries along axis of x indexed by index and concatenating them together.

Given:

x = [[1, 2],
[3, 4],
[5, 6]]

index = [1, 2]
axis=[0]

Then:

out = [[3, 4],
[5, 6]]

Parameters
• x (Tensor) – The source input tensor with rank>=1. Supported data type is int32, int64, float32, float64 and uint8 (only for CPU), float16 (only for GPU).

• index (Tensor) – The index input tensor with rank=1. Data type is int32 or int64.

• axis (Tensor|int, optional) – The axis of the input to gather along. It can be an int or a Tensor with data type int32 or int64. The default value is None; if None, the axis is 0.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

The output is a tensor with the same rank as x.

Return type

output (Tensor)

Examples

import paddle

input = paddle.to_tensor([[1,2],[3,4],[5,6]])
index = paddle.to_tensor([0,1])
output = paddle.gather(input, index, axis=0)
# expected output: [[1,2],[3,4]]

gather_nd ( index, name=None )

This function is a high-dimensional extension of gather and supports simultaneous indexing along multiple axes. index is a K-dimensional integer tensor, which is regarded as a (K-1)-dimensional tensor of indices into x, where each element defines a slice of x:

$output[(i_0, ..., i_{K-2})] = input[index[(i_0, ..., i_{K-2})]]$

Note that index.shape[-1] <= x.rank, and the output tensor has shape index.shape[:-1] + x.shape[index.shape[-1]:] .

Given:
x =  [[[ 0,  1,  2,  3],
[ 4,  5,  6,  7],
[ 8,  9, 10, 11]],
[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]]]
x.shape = (2, 3, 4)

* Case 1:
index = [[1]]

gather_nd(x, index)
= [x[1, :, :]]
= [[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]]

* Case 2:
index = [[0,2]]

gather_nd(x, index)
= [x[0, 2, :]]
= [8, 9, 10, 11]

* Case 3:
index = [[1, 2, 3]]

gather_nd(x, index)
= [x[1, 2, 3]]
= [23]

Parameters
• x (Tensor) – The input Tensor which it’s data type should be bool, float32, float64, int32, int64.

• index (Tensor) – The index input with rank > 1, index.shape[-1] <= input.rank. Its dtype should be int32, int64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

A tensor with the shape index.shape[:-1] + input.shape[index.shape[-1]:]

Return type

output (Tensor)

Examples

import paddle

x = paddle.to_tensor([[[1, 2], [3, 4], [5, 6]],
[[7, 8], [9, 10], [11, 12]]])
index = paddle.to_tensor([[0, 1]])

output = paddle.gather_nd(x, index) #[[3, 4]]

gcd ( y, name=None )

Computes the element-wise greatest common divisor (GCD) of input |x| and |y|. Both x and y must have integer types.

Note

gcd(0,0)=0, gcd(0, y)=|y|

If x.shape != y.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

Parameters
• x (Tensor) – An N-D Tensor, the data type is int32, int64.

• y (Tensor) – An N-D Tensor, the data type is int32, int64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

An N-D Tensor, the data type is the same with input.

Return type

out (Tensor)

Examples

import paddle

x1 = paddle.to_tensor(12)
x2 = paddle.to_tensor(20)
paddle.gcd(x1, x2)
# Tensor(shape=[1], dtype=int64, place=CUDAPlace(0), stop_gradient=True,
#        [4])

x3 = paddle.arange(6)
paddle.gcd(x3, x2)
# Tensor(shape=[6], dtype=int64, place=CUDAPlace(0), stop_gradient=True,
#        [20, 1 , 2 , 1 , 4 , 5])

x4 = paddle.to_tensor(0)
paddle.gcd(x4, x2)
# Tensor(shape=[1], dtype=int64, place=CUDAPlace(0), stop_gradient=True,
#        [20])

paddle.gcd(x4, x4)
# Tensor(shape=[1], dtype=int64, place=CUDAPlace(0), stop_gradient=True,
#        [0])

x5 = paddle.to_tensor(-20)
paddle.gcd(x1, x5)
# Tensor(shape=[1], dtype=int64, place=CUDAPlace(0), stop_gradient=True,
#        [4])

greater_equal ( y, name=None )

Returns the truth value of $$x >= y$$ elementwise, which is equivalent function to the overloaded operator >=.

Note

The output has no gradient.

Parameters
• x (Tensor) – First input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

• y (Tensor) – Second input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The output shape is same as input x. The output data type is bool.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([1, 2, 3])
y = paddle.to_tensor([1, 3, 2])
result1 = paddle.greater_equal(x, y)
print(result1)  # result1 = [True False True]

greater_than ( y, name=None )

Returns the truth value of $$x > y$$ elementwise, which is equivalent function to the overloaded operator >.

Note

The output has no gradient.

Parameters
• x (Tensor) – First input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

• y (Tensor) – Second input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The output shape is same as input x. The output data type is bool.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([1, 2, 3])
y = paddle.to_tensor([1, 3, 2])
result1 = paddle.greater_than(x, y)
print(result1)  # result1 = [False False True]

heaviside ( y, name=None )

Computes the Heaviside step function determined by corresponding element in y for each element in x. The equation is

$\begin{split}heaviside(x, y)= \left\{ \begin{array}{lcl} 0,& &\text{if} \ x < 0, \\ y,& &\text{if} \ x = 0, \\ 1,& &\text{if} \ x > 0. \end{array} \right.\end{split}$

Note

paddle.heaviside supports broadcasting. If you want to know more about broadcasting, please refer to user_guide_broadcasting.

Parameters
• x (Tensor) – The input tensor of Heaviside step function, it’s data type should be float16, float32, float64, int32 or int64.

• y (Tensor) – The tensor that determines a Heaviside step function, it’s data type should be float16, float32, float64, int32 or int64.

• name (str, optional) – Name for the operation (optional, default is None). Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. If x and y have different shapes and are broadcastable, the resulting tensor shape is the shape of x and y after broadcasting. If x, y have the same shape, its shape is the same as x and y.

Examples

import paddle
x = paddle.to_tensor([-0.5, 0, 0.5])
y = paddle.to_tensor([0.1])
paddle.heaviside(x, y)
#    [0.        , 0.10000000, 1.        ]
x = paddle.to_tensor([[-0.5, 0, 0.5], [-0.5, 0.5, 0]])
y = paddle.to_tensor([0.1, 0.2, 0.3])
paddle.heaviside(x, y)
#    [[0.        , 0.20000000, 1.        ],
#     [0.        , 1.        , 0.30000001]]

histogram ( bins=100, min=0, max=0, name=None )

Computes the histogram of a tensor. The elements are sorted into equal width bins between min and max. If min and max are both zero, the minimum and maximum values of the data are used.

Parameters
• input (Tensor) – A Tensor(or LoDTensor) with shape $$[N_1, N_2,..., N_k]$$ . The data type of the input Tensor should be float32, float64, int32, int64.

• bins (int, optional) – number of histogram bins.

• min (int, optional) – lower end of the range (inclusive).

• max (int, optional) – upper end of the range (inclusive).

• name (str, optional) – For details, please refer to Name. Generally, no setting is required. Default: None.

Returns

data type is int64, shape is (nbins,).

Return type

Tensor

Examples

import paddle

inputs = paddle.to_tensor([1, 2, 1])
result = paddle.histogram(inputs, bins=4, min=0, max=3)
print(result) # [0, 2, 1, 0]

imag ( name=None )

Returns a new tensor containing imaginary values of input tensor.

Parameters
• x (Tensor) – the input tensor, its data type could be complex64 or complex128.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .

Returns

a tensor containing imaginary values of the input tensor.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor(
[[1 + 6j, 2 + 5j, 3 + 4j], [4 + 3j, 5 + 2j, 6 + 1j]])
# Tensor(shape=[2, 3], dtype=complex64, place=CUDAPlace(0), stop_gradient=True,
#        [[(1+6j), (2+5j), (3+4j)],
#         [(4+3j), (5+2j), (6+1j)]])

imag_res = paddle.imag(x)
# Tensor(shape=[2, 3], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#        [[6., 5., 4.],
#         [3., 2., 1.]])

imag_t = x.imag()
# Tensor(shape=[2, 3], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#        [[6., 5., 4.],
#         [3., 2., 1.]])

increment ( value=1.0, name=None )

The API is usually used for control flow to increment the data of x by an amount value. Notice that the number of elements in x must be equal to 1.

Parameters
• x (Tensor) – A tensor that must always contain only one element, its data type supports float32, float64, int32 and int64.

• value (float, optional) – The amount to increment the data of x. Default: 1.0.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the elementwise-incremented tensor with the same shape and data type as x.

Examples

import paddle

data = paddle.zeros(shape=[1], dtype='float32')
counter = paddle.increment(data)
# [1.]

index_add ( index, axis, value, name=None )

Adds the elements of the value tensor to the input tensor along the given axis, selecting the positions in the order given in index.

Parameters
• x (Tensor) – The Destination Tensor. Supported data types are int32, int64, float16, float32, float64.

• index (Tensor) – The 1-D Tensor containing the indices to index. The data type of index must be int32 or int64.

• axis (int) – The dimension in which we index.

• value (Tensor) – The tensor used to add the elements along the target axis.

• name (str, optional) – For details, please refer to Name. Generally, no setting is required. Default: None.

Returns

The same dimensions and dtype as x.

Return type

Tensor

Examples

# required: gpu
import paddle

input_tensor = paddle.to_tensor(paddle.ones((3, 3)), dtype="float32")
index = paddle.to_tensor([0, 2], dtype="int32")
value = paddle.to_tensor([[1, 1, 1], [1, 1, 1]], dtype="float32")
outplace_res = paddle.index_add(input_tensor, index, 0, value)
print(outplace_res)
# Tensor(shape=[3, 3], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [[2., 2., 2.],
#         [1., 1., 1.],
#         [2., 2., 2.]])

index_add_ ( index, axis, value, name=None )

In-place version of the index_add API; the result is written directly into the input x. Please refer to api_paddle_tensor_index_add.

Examples

# required: gpu
import paddle

input_tensor = paddle.to_tensor(paddle.ones((3, 3)), dtype="float32")
index = paddle.to_tensor([0, 2], dtype="int32")
value = paddle.to_tensor([[1, 1], [1, 1], [1, 1]], dtype="float32")
inplace_res = paddle.index_add_(input_tensor, index, 1, value)
print(inplace_res)
# Tensor(shape=[3, 3], dtype=float32, place=Place(gpu:0), stop_gradient=True,
#        [[2., 1., 2.],
#         [2., 1., 2.],
#         [2., 1., 2.]])

index_sample ( index )

IndexSample Layer

IndexSample OP returns the elements of X at the locations specified by Index.

Given:

X = [[1, 2, 3, 4, 5],
[6, 7, 8, 9, 10]]

Index = [[0, 1, 3],
[0, 2, 4]]

Then:

Out = [[1, 2, 4],
[6, 8, 10]]

Parameters
• x (Tensor) – The source input tensor with 2-D shape. Supported data type is int32, int64, float32, float64.

• index (Tensor) – The index input tensor with 2-D shape, first dimension should be same with X. Data type is int32 or int64.

Returns

The output is a tensor with the same shape as index.

Return type

output (Tensor)

Examples

import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0, 4.0],
[5.0, 6.0, 7.0, 8.0],
[9.0, 10.0, 11.0, 12.0]], dtype='float32')
index = paddle.to_tensor([[0, 1, 2],
[1, 2, 3],
[0, 0, 0]], dtype='int32')
target = paddle.to_tensor([[100, 200, 300, 400],
[500, 600, 700, 800],
[900, 1000, 1100, 1200]], dtype='int32')
out_z1 = paddle.index_sample(x, index)
print(out_z1)
#[[1. 2. 3.]
# [6. 7. 8.]
# [9. 9. 9.]]

# Use the index of the maximum value by topk op
# get the value of the element of the corresponding index in other tensors
top_value, top_index = paddle.topk(x, k=2)
out_z2 = paddle.index_sample(target, top_index)
print(top_value)
#[[ 4.  3.]
# [ 8.  7.]
# [12. 11.]]

print(top_index)
#[[3 2]
# [3 2]
# [3 2]]

print(out_z2)
#[[ 400  300]
# [ 800  700]
# [1200 1100]]

index_select ( index, axis=0, name=None )

Returns a new tensor which indexes the input tensor along dimension axis using the entries in index which is a Tensor. The returned tensor has the same number of dimensions as the original x tensor. The axis-th dimension has the same size as the length of index; other dimensions have the same size as in the x tensor.

Parameters
• x (Tensor) – The input Tensor to be operated. The data of x can be one of float32, float64, int32, int64.

• index (Tensor) – The 1-D Tensor containing the indices to index. The data type of index must be int32 or int64.

• axis (int, optional) – The dimension in which we index. Default: if None, the axis is 0.

• name (str, optional) – For details, please refer to Name. Generally, no setting is required. Default: None.

Returns

A Tensor with same data type as x.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0, 4.0],
[5.0, 6.0, 7.0, 8.0],
[9.0, 10.0, 11.0, 12.0]])
index = paddle.to_tensor([0, 1, 1], dtype='int32')
out_z1 = paddle.index_select(x=x, index=index)
#[[1. 2. 3. 4.]
# [5. 6. 7. 8.]
# [5. 6. 7. 8.]]
out_z2 = paddle.index_select(x=x, index=index, axis=1)
#[[ 1.  2.  2.]
# [ 5.  6.  6.]
# [ 9. 10. 10.]]

inner ( y, name=None )

Inner product of two input Tensor.

Ordinary inner product for 1-D Tensors, in higher dimensions a sum product over the last axes.

Parameters
• x (Tensor) – An N-D Tensor or a Scalar Tensor. If it is not a scalar Tensor, its last dimension must match y’s.

• y (Tensor) – An N-D Tensor or a Scalar Tensor. If it is not a scalar Tensor, its last dimension must match x’s.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The inner-product Tensor, the output shape is x.shape[:-1] + y.shape[:-1].

Return type

Tensor

Examples

import paddle
x = paddle.arange(1, 7).reshape((2, 3)).astype('float32')
y = paddle.arange(1, 10).reshape((3, 3)).astype('float32')
out = paddle.inner(x, y)
print(out)
#        ([[14, 32, 50],
#         [32, 77, 122]])
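
For 1-D inputs the result reduces to the ordinary dot product. A minimal sketch (the value follows from 1*4 + 2*5 + 3*6):

import paddle

x = paddle.to_tensor([1., 2., 3.])
y = paddle.to_tensor([4., 5., 6.])
out = paddle.inner(x, y)
# out holds the scalar 32.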

inverse ( name=None )

Takes the inverse of the square matrix. A square matrix is a matrix with the same number of rows and columns. The input can be a square matrix (2-D Tensor) or batches of square matrices.

Parameters
• x (Tensor) – The input tensor. The last two dimensions should be equal. When the number of dimensions is greater than 2, it is treated as batches of square matrix. The data type can be float32 and float64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

A Tensor holding the inverse of x. The shape and data type are the same as those of x.

Return type

Tensor

Examples

import paddle

mat = paddle.to_tensor([[2, 0], [0, 2]], dtype='float32')
inv = paddle.inverse(mat)
print(inv) # [[0.5, 0], [0, 0.5]]

is_complex ( )

Return whether x is a tensor of complex data type (complex64 or complex128).

Parameters

x (Tensor) – The input tensor.

Returns

True if the data type of the input is complex data type, otherwise false.

Return type

bool

Examples

import paddle

x = paddle.to_tensor([1 + 2j, 3 + 4j])
print(paddle.is_complex(x))
# True

x = paddle.to_tensor([1.1, 1.2])
print(paddle.is_complex(x))
# False

x = paddle.to_tensor([1, 2, 3])
print(paddle.is_complex(x))
# False

is_empty ( name=None )

Test whether a Tensor is empty.

Parameters
• x (Tensor) – The Tensor to be tested.

• name (str, optional) – The default value is None . Normally users don’t have to set this parameter. For more information, please refer to Name .

Returns

A bool scalar Tensor. True if ‘x’ is an empty Tensor.

Return type

Tensor

Examples

import paddle

input = paddle.rand(shape=[4, 32, 32], dtype='float32')
res = paddle.is_empty(x=input)
print("res:", res)
# res: [False]  (the input tensor is not empty)

is_floating_point ( )

Returns whether the dtype of x is one of paddle.float64, paddle.float32, paddle.float16, and paddle.bfloat16.

Parameters

x (Tensor) – The input tensor.

Returns

True if the dtype of x is floating type, otherwise false.

Return type

bool

Examples

import paddle

x = paddle.arange(1., 5., dtype='float32')
y = paddle.arange(1, 5, dtype='int32')
print(paddle.is_floating_point(x))
# True
print(paddle.is_floating_point(y))
# False

is_integer ( )

Return whether x is a tensor of integral data type.

Parameters

x (Tensor) – The input tensor.

Returns

True if the data type of the input is integer data type, otherwise false.

Return type

bool

Examples

import paddle

x = paddle.to_tensor([1 + 2j, 3 + 4j])
print(paddle.is_integer(x))
# False

x = paddle.to_tensor([1.1, 1.2])
print(paddle.is_integer(x))
# False

x = paddle.to_tensor([1, 2, 3])
print(paddle.is_integer(x))
# True

is_tensor ( )

Tests whether input object is a paddle.Tensor.

Parameters

x (object) – Object to test.

Returns

A boolean value. True if x is a paddle.Tensor, otherwise False.

Examples

import paddle

input1 = paddle.rand(shape=[2, 3, 5], dtype='float32')
check = paddle.is_tensor(input1)
print(check)  #True

input3 = [1, 4]
check = paddle.is_tensor(input3)
print(check)  #False

isclose ( y, rtol=1e-05, atol=1e-08, equal_nan=False, name=None )

This operator checks if all $$x$$ and $$y$$ satisfy the condition:

$\left| x - y \right| \leq atol + rtol \times \left| y \right|$

elementwise, for all elements of $$x$$ and $$y$$. The behaviour of this operator is analogous to $$numpy.isclose$$, namely that it returns $$True$$ if two tensors are elementwise equal within a tolerance.

Parameters
• x (Tensor) – The input tensor, its data type should be float32, float64.

• y (Tensor) – The input tensor, its data type should be float32, float64.

• rtol (float, optional) – The relative tolerance. Default: $$1e-5$$ .

• atol (float, optional) – The absolute tolerance. Default: $$1e-8$$ .

• equal_nan (bool, optional) – If $$True$$ , then two $$NaNs$$ will be compared as equal. Default: $$False$$ .

• name (str, optional) – Name for the operation. For more information, please refer to Name. Default: None.

Returns

The output tensor, its data type is bool.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([10000., 1e-07])
y = paddle.to_tensor([10000.1, 1e-08])
result1 = paddle.isclose(x, y, rtol=1e-05, atol=1e-08,
equal_nan=False, name="ignore_nan")
# [True, False]
result2 = paddle.isclose(x, y, rtol=1e-05, atol=1e-08,
equal_nan=True, name="equal_nan")
# [True, False]

x = paddle.to_tensor([1.0, float('nan')])
y = paddle.to_tensor([1.0, float('nan')])
result1 = paddle.isclose(x, y, rtol=1e-05, atol=1e-08,
equal_nan=False, name="ignore_nan")
# [True, False]
result2 = paddle.isclose(x, y, rtol=1e-05, atol=1e-08,
equal_nan=True, name="equal_nan")
# [True, True]

isfinite ( name=None )

Return whether each element of the input tensor is a finite number or not.

Parameters
• x (Tensor) – The input tensor, its data type should be float16, float32, float64, int32, int64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the bool result showing whether each element of x is a finite number.

Examples

import paddle

x = paddle.to_tensor([float('-inf'), -2, 3.6, float('inf'), 0, float('-nan'), float('nan')])
out = paddle.isfinite(x)
print(out)  # [False  True  True False  True False False]

isinf ( name=None )

Return whether each element of the input tensor is +INF or -INF.

Parameters
• x (Tensor) – The input tensor, its data type should be float16, float32, float64, int32, int64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the bool result showing whether each element of x is +INF or -INF.

Examples

import paddle

x = paddle.to_tensor([float('-inf'), -2, 3.6, float('inf'), 0, float('-nan'), float('nan')])
out = paddle.isinf(x)
print(out)  # [ True False False  True False False False]

isnan ( name=None )

Return whether each element of the input tensor is NaN or not.

Parameters
• x (Tensor) – The input tensor, its data type should be float16, float32, float64, int32, int64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the bool result showing whether each element of x is NaN.

Examples

import paddle

x = paddle.to_tensor([float('-inf'), -2, 3.6, float('inf'), 0, float('-nan'), float('nan')])
out = paddle.isnan(x)
print(out)  # [False False False False False  True  True]

item ( )

In order to be compatible with the item interface introduced by the dynamic graph, this method does nothing but return self. It also checks that the Variable is a 1-D tensor.
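
A minimal sketch in static graph mode (the data name 'x' below is illustrative):

import paddle

paddle.enable_static()

x = paddle.static.data(name='x', shape=[1], dtype='float32')
y = x.item()  # in static graph mode this simply returns x itself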

kron ( y, name=None )

Kron Operator.

This operator computes the Kronecker product of two tensors, a composite tensor made of blocks of the second tensor scaled by the first.

This operator assumes that the ranks of the two tensors, $X$ and $Y$, are the same; if necessary, the tensor with the smaller rank is prepended with dimensions of size one. If the shape of $X$ is [$r_0$, $r_1$, …, $r_N$] and the shape of $Y$ is [$s_0$, $s_1$, …, $s_N$], then the shape of the output tensor is [$r_{0}s_{0}$, $r_{1}s_{1}$, …, $r_{N}s_{N}$]. The elements are products of elements from $X$ and $Y$.

The equation is: $$output[k_{0}, k_{1}, …, k_{N}] = X[i_{0}, i_{1}, …, i_{N}] * Y[j_{0}, j_{1}, …, j_{N}]$$

where $$k_{t} = i_{t} * s_{t} + j_{t}, t = 0, 1, …, N$$

Parameters
• x (Tensor) – the first operand of kron op, data type: float16, float32, float64, int32 or int64.

• y (Tensor) – the second operand of kron op, data type: float16, float32, float64, int32 or int64. Its data type should be the same as x.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The output of kron, data type: float16, float32, float64, int32 or int64. Its data type is the same as x’s.

Return type

Tensor

Examples

import paddle
x = paddle.to_tensor([[1, 2], [3, 4]], dtype='int64')
y = paddle.to_tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='int64')
out = paddle.kron(x, y)
print(out)
#        [[1, 2, 3, 2, 4, 6],
#         [ 4,  5,  6,  8, 10, 12],
#         [ 7,  8,  9, 14, 16, 18],
#         [ 3,  6,  9,  4,  8, 12],
#         [12, 15, 18, 16, 20, 24],
#         [21, 24, 27, 28, 32, 36]])

kthvalue ( k, axis=None, keepdim=False, name=None )

Finds the values and indices of the k-th smallest elements along the given axis.

Parameters
• x (Tensor) – A N-D Tensor with type float32, float64, int32, int64.

• k (int) – The k for the k-th smallest number to look for along the axis.

• axis (int, optional) – Axis to compute indices along. The effective range is [-R, R), where R is x.ndim. When axis < 0, it works the same way as axis + R. The default is None, in which case the axis is treated as -1.

• keepdim (bool, optional) – Whether to keep the given axis in the output. If it is True, the dimensions will be the same as the input x, with size one in the given axis. Otherwise the output has one fewer dimension than x, since the axis is squeezed. Default is False.

• name (str, optional) – For details, please refer to Name. Generally, no setting is required. Default: None.

Returns

tuple(Tensor), return the values and indices. The value data type is the same as the input x. The indices data type is int64.

Examples

import paddle

x = paddle.randn((2,3,2))
# Tensor(shape=[2, 3, 2], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
#       [[[ 0.22954939, -0.01296274],
#         [ 1.17135799, -0.34493217],
#         [-0.19550551, -0.17573971]],
#
#        [[ 0.15104349, -0.93965352],
#         [ 0.14745511,  0.98209465],
#         [ 0.10732264, -0.55859774]]])
y = paddle.kthvalue(x, 2, 1)
# (Tensor(shape=[2, 2], dtype=float32, place=CUDAPlace(0), stop_gradient=True,
# [[ 0.22954939, -0.17573971],
#  [ 0.14745511, -0.55859774]]), Tensor(shape=[2, 2], dtype=int64, place=CUDAPlace(0), stop_gradient=True,
#  [[0, 2],
#  [1, 2]]))

lcm ( y, name=None )

Computes the element-wise least common multiple (LCM) of input |x| and |y|. Both x and y must have integer types.

Note

lcm(0,0)=0, lcm(0, y)=0

If x.shape != y.shape, they must be broadcastable to a common shape (which becomes the shape of the output).

Parameters
• x (Tensor) – An N-D Tensor, the data type is int32, int64.

• y (Tensor) – An N-D Tensor, the data type is int32, int64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

An N-D Tensor, the data type is the same with input.

Return type

out (Tensor)

Examples

import paddle

x1 = paddle.to_tensor(12)
x2 = paddle.to_tensor(20)
paddle.lcm(x1, x2)
# Tensor(shape=[1], dtype=int64, place=CUDAPlace(0), stop_gradient=True,
#        [60])

x3 = paddle.arange(6)
paddle.lcm(x3, x2)
# Tensor(shape=[6], dtype=int64, place=CUDAPlace(0), stop_gradient=True,
#        [0, 20, 20, 60, 20, 20])

x4 = paddle.to_tensor(0)
paddle.lcm(x4, x2)
# Tensor(shape=[1], dtype=int64, place=CUDAPlace(0), stop_gradient=True,
#        [0])

paddle.lcm(x4, x4)
# Tensor(shape=[1], dtype=int64, place=CUDAPlace(0), stop_gradient=True,
#        [0])

x5 = paddle.to_tensor(-20)
paddle.lcm(x1, x5)
# Tensor(shape=[1], dtype=int64, place=CUDAPlace(0), stop_gradient=True,
#        [60])

lerp ( y, weight, name=None )

Does a linear interpolation between x and y based on weight.

Equation:
$lerp(x, y, weight) = x + weight * (y - x).$
Parameters
• x (Tensor) – An N-D Tensor with starting points, the data type is float32, float64.

• y (Tensor) – An N-D Tensor with ending points, the data type is float32, float64.

• weight (float|Tensor) – The weight for the interpolation formula. When weight is Tensor, the data type is float32, float64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

An N-D Tensor, the shape and data type is the same with input.

Return type

out (Tensor)

Example

import paddle

x = paddle.arange(1., 5., dtype='float32')
y = paddle.empty([4], dtype='float32')
y.fill_(10.)
out = paddle.lerp(x, y, 0.5)
# out: [5.5, 6., 6.5, 7.]

lerp_ ( y, weight, name=None )

In-place version of the lerp API; the result is written directly into the input x. Please refer to api_tensor_lerp.
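
A minimal sketch of the in-place variant, using the same inputs as the lerp example above:

import paddle

x = paddle.arange(1., 5., dtype='float32')
y = paddle.full([4], 10., dtype='float32')
x.lerp_(y, 0.5)
# x: [5.5, 6., 6.5, 7.]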

less_equal ( y, name=None )

Returns the truth value of $$x <= y$$ elementwise, which is equivalent to the overloaded operator <=.

Note

The output has no gradient.

Parameters
• x (Tensor) – First input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

• y (Tensor) – Second input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The output shape is the same as that of input x. The output data type is bool.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([1, 2, 3])
y = paddle.to_tensor([1, 3, 2])
result1 = paddle.less_equal(x, y)
print(result1)  # result1 = [True True False]

less_than ( y, name=None )

Returns the truth value of $$x < y$$ elementwise, which is equivalent to the overloaded operator <.

Note

The output has no gradient.

Parameters
• x (Tensor) – First input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

• y (Tensor) – Second input to compare which is N-D tensor. The input data type should be bool, float32, float64, int32, int64.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

The output shape is the same as that of input x. The output data type is bool.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([1, 2, 3])
y = paddle.to_tensor([1, 3, 2])
result1 = paddle.less_than(x, y)
print(result1)  # result1 = [False True False]

lgamma ( name=None )

Calculates the lgamma of the given input tensor, element-wise.

This operator performs elementwise lgamma for input $X$. $$out = log\Gamma(x)$$

Parameters
• x (Tensor) – Input Tensor. Must be one of the following types: float32, float64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the lgamma of the input Tensor, the shape and data type is the same with input.

Examples

import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = paddle.lgamma(x)
print(out)
# [1.31452441, 1.76149750, 2.25271273, 1.09579802]

log ( name=None )

Calculates the natural log of the given input Tensor, element-wise.

$Out = \ln(x)$
Parameters
• x (Tensor) – Input Tensor. Must be one of the following types: float32, float64.

• name (str|None) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name

Returns

The natural log of the input Tensor computed element-wise.

Return type

Tensor

Examples

import paddle

x = [[2,3,4], [7,8,9]]
x = paddle.to_tensor(x, dtype='float32')
res = paddle.log(x)
# [[0.693147, 1.09861, 1.38629], [1.94591, 2.07944, 2.19722]]

log10 ( name=None )

Calculates the log to the base 10 of the given input tensor, element-wise.

$Out = \log_{10}(x)$
Parameters
• x (Tensor) – Input tensor must be one of the following types: float32, float64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The log to the base 10 of the input Tensor computed element-wise.

Return type

Tensor

Examples

import paddle

# example 1: x is a 2-D Tensor
x_i = paddle.to_tensor([[1.0], [10.0]])
res = paddle.log10(x_i) # [[0.], [1.0]]

# example 2: x is float32
x_i = paddle.full(shape=[1], fill_value=10, dtype='float32')
res = paddle.log10(x_i)
print(res) # [1.0]

# example 3: x is float64
x_i = paddle.full(shape=[1], fill_value=10, dtype='float64')
res = paddle.log10(x_i)
print(res) # [1.0]

log1p ( name=None )

Calculates the natural log of one plus the given input tensor, element-wise.

$Out = \ln(x+1)$
Parameters
• x (Tensor) – Input Tensor. Must be one of the following types: float32, float64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the natural log of the input Tensor computed element-wise.

Examples

import paddle

data = paddle.to_tensor([[0], [1]], dtype='float32')
res = paddle.log1p(data)
# [[0.], [0.6931472]]

log2 ( name=None )

Calculates the log to the base 2 of the given input tensor, element-wise.

$Out = \log_{2}(x)$
Parameters
• x (Tensor) – Input tensor must be one of the following types: float32, float64.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The log to the base 2 of the input Tensor computed element-wise.

Return type

Tensor

Examples

import paddle

# example 1: x is a 2-D Tensor
x_i = paddle.to_tensor([[1.0], [2.0]])
res = paddle.log2(x_i) # [[0.], [1.0]]

# example 2: x is float32
x_i = paddle.full(shape=[1], fill_value=2, dtype='float32')
res = paddle.log2(x_i)
print(res) # [1.0]

# example 3: x is float64
x_i = paddle.full(shape=[1], fill_value=2, dtype='float64')
res = paddle.log2(x_i)
print(res) # [1.0]

logcumsumexp ( axis=None, dtype=None, name=None )

The logarithm of the cumulative summation of the exponentiation of the elements along a given axis.

For summation index j given by axis and other indices i, the result is

$logcumsumexp(x)_{ij} = \log \sum_{k=0}^{j} \exp(x_{ik})$

Note

The first element of the result is the same as the first element of the input.

Parameters
• x (Tensor) – The input tensor.

• axis (int, optional) – The dimension to do the operation along. -1 means the last dimension. The default (None) is to compute the logcumsumexp over the flattened array.

• dtype (str, optional) – The data type of the output tensor, can be float32, float64. If specified, the input tensor is cast to dtype before the operation is performed. This is useful for preventing data type overflows. The default value is None.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, the result of logcumsumexp operator.

Examples

import paddle

data = paddle.arange(12, dtype='float64')
data = paddle.reshape(data, (3, 4))

y = paddle.logcumsumexp(data)
# [ 0.         1.3132617  2.4076061  3.4401898  4.4519143  5.4561934
#   6.4577627  7.4583397  8.458551   9.45863   10.458658  11.458669 ]

y = paddle.logcumsumexp(data, axis=0)
# [[ 0.        1.        2.        3.      ]
#  [ 4.01815   5.01815   6.01815   7.01815 ]
#  [ 8.018479  9.018479 10.018479 11.018479]]

y = paddle.logcumsumexp(data, axis=-1)
# [[ 0.         1.3132617  2.4076061  3.4401898]
#  [ 4.         5.3132615  6.407606   7.44019  ]
#  [ 8.         9.313262  10.407606  11.440189 ]]

y = paddle.logcumsumexp(data, dtype='float64')
print(y.dtype)
# paddle.float64

logical_and ( y, out=None, name=None )

logical_and operator computes element-wise logical AND on x and y, and returns out. out is N-dim boolean Tensor. Each element of out is calculated by

$out = x \&\& y$

Note

paddle.logical_and supports broadcasting. If you want to know more about broadcasting, please refer to user_guide_broadcasting.

Parameters
• x (Tensor) – the input tensor, its data type should be one of bool, int8, int16, int32, int64, float32, float64.

• y (Tensor) – the input tensor, its data type should be one of bool, int8, int16, int32, int64, float32, float64.

• out (Tensor) – The Tensor that specifies the output of the operator, which can be any Tensor that has been created in the program. The default value is None, and a new Tensor will be created to save the output.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. Its dimensions are the same as those of x.

Examples

import paddle

x = paddle.to_tensor([True])
y = paddle.to_tensor([True, False, True, False])
res = paddle.logical_and(x, y)
print(res) # [True False True False]

logical_not ( out=None, name=None )

logical_not operator computes element-wise logical NOT on x, and returns out. out is N-dim boolean Variable. Each element of out is calculated by

$out = !x$
Parameters
• x (Tensor) – Operand of logical_not operator. Must be a Tensor of type bool, int8, int16, int32, int64, float32, or float64.

• out (Tensor) – The Tensor that specifies the output of the operator, which can be any Tensor that has been created in the program. The default value is None, and a new Tensor will be created to save the output.

• name (str|None) – The default value is None. Normally there is no need for users to set this property. For more information, please refer to Name.

Returns

N-D bool Tensor.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([True, False, True, False])
res = paddle.logical_not(x)
print(res) # [False  True False  True]

logical_or ( y, out=None, name=None )

logical_or operator computes element-wise logical OR on x and y, and returns out. out is N-dim boolean Tensor. Each element of out is calculated by

$out = x || y$

Note

paddle.logical_or supports broadcasting. If you want to know more about broadcasting, please refer to user_guide_broadcasting.

Parameters
• x (Tensor) – the input tensor, its data type should be one of bool, int8, int16, int32, int64, float32, float64.

• y (Tensor) – the input tensor, its data type should be one of bool, int8, int16, int32, int64, float32, float64.

• out (Tensor) – The Tensor that specifies the output of the operator, which can be any Tensor that has been created in the program. The default value is None, and a new Tensor will be created to save the output.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. Its dimensions are the same as those of x.

Examples

import paddle

x = paddle.to_tensor([True, False], dtype="bool").reshape([2, 1])
y = paddle.to_tensor([True, False, True, False], dtype="bool").reshape([2, 2])
res = paddle.logical_or(x, y)
print(res)
# Tensor(shape=[2, 2], dtype=bool, place=Place(cpu), stop_gradient=True,
#        [[True , True ],
#         [True , False]])

logical_xor ( y, out=None, name=None )

logical_xor operator computes element-wise logical XOR on x and y, and returns out. out is N-dim boolean Tensor. Each element of out is calculated by

$out = (x || y) \&\& !(x \&\& y)$

Note

paddle.logical_xor supports broadcasting. If you want to know more about broadcasting, please refer to user_guide_broadcasting.

Parameters
• x (Tensor) – the input tensor, its data type should be one of bool, int8, int16, int32, int64, float32, float64.

• y (Tensor) – the input tensor, its data type should be one of bool, int8, int16, int32, int64, float32, float64.

• out (Tensor) – The Tensor that specifies the output of the operator, which can be any Tensor that has been created in the program. The default value is None, and a new Tensor will be created to save the output.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

N-D Tensor. A location into which the result is stored. Its dimensions are the same as those of x.

Examples

import paddle

x = paddle.to_tensor([True, False], dtype="bool").reshape([2, 1])
y = paddle.to_tensor([True, False, True, False], dtype="bool").reshape([2, 2])
res = paddle.logical_xor(x, y)
print(res)
# Tensor(shape=[2, 2], dtype=bool, place=Place(cpu), stop_gradient=True,
#        [[False, True ],
#         [True , False]])

logit ( eps=None, name=None )

This function generates a new tensor with the logit of the elements of input x. x is clamped to [eps, 1-eps] when eps is not zero. When eps is zero and x < 0 or x > 1, the function yields NaN.

$logit(x) = \ln(\frac{x}{1 - x})$

where

$\begin{split}x_i= \left\{\begin{array}{rcl} x_i & &\text{if } eps == Default \\ eps & &\text{if } x_i < eps \\ x_i & &\text{if } eps <= x_i <= 1-eps \\ 1-eps & &\text{if } x_i > 1-eps \end{array}\right.\end{split}$
Parameters
• x (Tensor) – The input Tensor with data type float32, float64.

• eps (float, optional) – the epsilon for input clamp bound. Default is None.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

A Tensor with the same data type and shape as x .

Return type

out(Tensor)

Examples

import paddle

x = paddle.to_tensor([0.2635, 0.0106, 0.2780, 0.2097, 0.8095])
out1 = paddle.logit(x)
print(out1)
# [-1.0277, -4.5365, -0.9544, -1.3269,  1.4468]

logsumexp ( axis=None, keepdim=False, name=None )

Calculates the log of the sum of exponentials of x along axis.

$logsumexp(x) = \log \sum \exp(x)$
Parameters
• x (Tensor) – The input Tensor with data type float32 or float64, which have no more than 4 dimensions.

• axis (int|list|tuple, optional) – The axis along which to perform logsumexp calculations. axis should be int, list(int) or tuple(int). If axis is a list/tuple of dimension(s), logsumexp is calculated along all element(s) of axis . axis or element(s) of axis should be in range [-D, D), where D is the dimensions of x . If axis or element(s) of axis is less than 0, it works the same way as $$axis + D$$ . If axis is None, logsumexp is calculated along all elements of x. Default is None.

• keepdim (bool, optional) – Whether to reserve the reduced dimension(s) in the output Tensor. If keep_dim is True, the dimensions of the output Tensor is the same as x except in the reduced dimensions(it is of size 1 in this case). Otherwise, the shape of the output Tensor is squeezed in axis . Default is False.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

Tensor, results of logsumexp along axis of x, with the same data type as x.

Examples:

import paddle

x = paddle.to_tensor([[-1.5, 0., 2.], [3., 1.2, -2.4]])
out1 = paddle.logsumexp(x) # [3.4691226]
out2 = paddle.logsumexp(x, 1) # [2.15317821, 3.15684602]

lstsq ( y, rcond=None, driver=None, name=None )

Computes a solution to the least squares problem of a system of linear equations.

Parameters
• x (Tensor) – A tensor with shape (*, M, N) , the data type of the input Tensor x should be one of float32, float64.

• y (Tensor) – A tensor with shape (*, M, K) , the data type of the input Tensor y should be one of float32, float64.

• rcond (float, optional) – The default value is None. A floating-point number used to determine the effective rank of x. If rcond is None, it will be set to max(M, N) times the machine precision of x_dtype.

• driver (str, optional) – The default value is None. The name of the LAPACK method to be used. For CPU inputs the valid values are ‘gels’, ‘gelsy’, ‘gelsd’, ‘gelss’. For CUDA input, the only valid driver is ‘gels’. If driver is None, ‘gelsy’ is used for CPU inputs and ‘gels’ for CUDA inputs.

• name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.

Returns

A tuple of 4 Tensors which is (solution, residuals, rank, singular_values). solution is a tensor with shape (*, N, K), meaning the least squares solution. residuals is a tensor with shape (*, K), meaning the squared residuals of the solutions, which is computed when M > N and every matrix in x is full-rank, otherwise return an empty tensor. rank is a tensor with shape (*), meaning the ranks of the matrices in x, which is computed when driver in (‘gelsy’, ‘gelsd’, ‘gelss’), otherwise return an empty tensor. singular_values is a tensor with shape (*, min(M, N)), meaning singular values of the matrices in x, which is computed when driver in (‘gelsd’, ‘gelss’), otherwise return an empty tensor.

Return type

Tuple

Examples

import paddle

paddle.set_device("cpu")
x = paddle.to_tensor([[1, 3], [3, 2], [5, 6.]])
y = paddle.to_tensor([[3, 4, 6], [5, 3, 4], [1, 2, 1.]])
results = paddle.linalg.lstsq(x, y, driver="gelsd")
print(results[0])
# [[ 0.78350395, -0.22165027, -0.62371236],
# [-0.11340097,  0.78866047,  1.14948535]]
print(results[1])
# [19.81443405, 10.43814468, 30.56185532])
print(results[2])
# 2
print(results[3])
# [9.03455734, 1.54167950]

x = paddle.to_tensor([[10, 2, 3], [3, 10, 5], [5, 6, 12.]])
y = paddle.to_tensor([[4, 2, 9], [2, 0, 3], [2, 5, 3.]])
results = paddle.linalg.lstsq(x, y, driver="gels")
print(results[0])
# [[ 0.39386186,  0.10230173,  0.93606132],
# [ 0.10741687, -0.29028133,  0.11892585],
# [-0.05115091,  0.51918161, -0.19948854]]
print(results[1])
# []

lu ( pivot=True, get_infos=False, name=None )

Computes the LU factorization of an N-D (N >= 2) matrix x.

Returns the LU factorization (computed in place on x) and the pivots. The lower triangular matrix L and the upper triangular matrix U are combined into a single LU matrix.

Pivoting is done if pivot is set to True. The permutation matrix P can be reconstructed from the pivots:

ones = eye(rows)  # identity matrix with `rows` rows
for i in range(cols):
    swap(ones[i], ones[pivots[i]])
return ones
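
A runnable sketch of the same idea, as a hypothetical NumPy helper (paddle.linalg.lu returns LAPACK-style 1-based pivots, as the example output below suggests; depending on the convention, the result may be the transpose of the P returned by lu_unpack):

import numpy as np

def pivots_to_permutation(pivots, rows):
    # replay the row swaps recorded in pivots on an identity matrix
    P = np.eye(rows)
    for i, p in enumerate(pivots):
        j = p - 1  # convert 1-based pivot index to 0-based
        P[[i, j]] = P[[j, i]]  # swap rows i and pivots[i]
    return P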

Parameters
• X (Tensor) – the tensor to factor, of N dimensions (N >= 2).

• pivot (bool, optional) – controls whether pivoting is done. Default: True.

• get_infos (bool, optional) – if set to True, returns an info IntTensor. Default: False.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

factorization (Tensor), LU matrix, the factorization of input X.

pivots (IntTensor), the pivots of size (*(N-2), min(m, n)). pivots stores all the intermediate transpositions of rows; the final permutation perm can be reconstructed from it, as shown in the code above.

infos (IntTensor, optional), if get_infos is True, this is a tensor of size (∗(N-2)) where non-zero values indicate whether factorization for the matrix or each minibatch has succeeded or failed.

Examples

import paddle

x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]).astype('float64')
lu,p,info = paddle.linalg.lu(x, get_infos=True)

# >>> lu:
# Tensor(shape=[3, 2], dtype=float64, place=CUDAPlace(0), stop_gradient=True,
#    [[5.        , 6.        ],
#        [0.20000000, 0.80000000],
#        [0.60000000, 0.50000000]])
# >>> p
# Tensor(shape=[2], dtype=int32, place=CUDAPlace(0), stop_gradient=True,
#    [3, 3])
# >>> info
# Tensor(shape=[], dtype=int32, place=CUDAPlace(0), stop_gradient=True,
#    0)

P,L,U = paddle.linalg.lu_unpack(lu,p)

# >>> P
# (Tensor(shape=[3, 3], dtype=float64, place=CUDAPlace(0), stop_gradient=True,
# [[0., 1., 0.],
# [0., 0., 1.],
# [1., 0., 0.]]),
# >>> L
# Tensor(shape=[3, 2], dtype=float64, place=CUDAPlace(0), stop_gradient=True,
# [[1.        , 0.        ],
# [0.20000000, 1.        ],
# [0.60000000, 0.50000000]]),
# >>> U
# Tensor(shape=[2, 2], dtype=float64, place=CUDAPlace(0), stop_gradient=True,
# [[5.        , 6.        ],
# [0.        , 0.80000000]]))

# one can verify : X = P @ L @ U ;

lu_unpack ( y, unpack_ludata=True, unpack_pivots=True, name=None )

Unpack the LU factorization into separate tensors: the L and U matrices are unpacked from the combined LU matrix, and the permutation matrix P is unpacked from the pivots.

The permutation matrix P can be reconstructed from the pivots:

ones = eye(rows)  # identity matrix with `rows` rows
for i in range(cols):
    swap(ones[i], ones[pivots[i]])


Parameters
• x (Tensor) – The LU tensor obtained from paddle.linalg.lu, which combines L and U.

• y (Tensor) – The pivots obtained from paddle.linalg.lu.

• unpack_ludata (bool, optional) – whether to unpack L and U from x. Default: True.

• unpack_pivots (bool, optional) – whether to unpack the permutation matrix P from the pivots. Default: True.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

P (Tensor), Permutation matrix P of lu factorization.

L (Tensor), The lower triangular matrix tensor of lu factorization.

U (Tensor), The upper triangular matrix tensor of lu factorization.

Examples

import paddle

x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]).astype('float64')
lu,p,info = paddle.linalg.lu(x, get_infos=True)

# >>> lu:
# Tensor(shape=[3, 2], dtype=float64, place=CUDAPlace(0), stop_gradient=True,
#    [[5.        , 6.        ],
#        [0.20000000, 0.80000000],
#        [0.60000000, 0.50000000]])
# >>> p
# Tensor(shape=[2], dtype=int32, place=CUDAPlace(0), stop_gradient=True,
#    [3, 3])
# >>> info
# Tensor(shape=[], dtype=int32, place=CUDAPlace(0), stop_gradient=True,
#    0)

P,L,U = paddle.linalg.lu_unpack(lu,p)

# >>> P
# (Tensor(shape=[3, 3], dtype=float64, place=CUDAPlace(0), stop_gradient=True,
# [[0., 1., 0.],
# [0., 0., 1.],
# [1., 0., 0.]]),
# >>> L
# Tensor(shape=[3, 2], dtype=float64, place=CUDAPlace(0), stop_gradient=True,
# [[1.        , 0.        ],
# [0.20000000, 1.        ],
# [0.60000000, 0.50000000]]),
# >>> U
# Tensor(shape=[2, 2], dtype=float64, place=CUDAPlace(0), stop_gradient=True,
# [[5.        , 6.        ],
# [0.        , 0.80000000]]))

# one can verify : X = P @ L @ U ;

masked_select ( mask, name=None )

Returns a new 1-D tensor which indexes the input tensor according to the mask which is a tensor with data type of bool.

Parameters
• x (Tensor) – The input Tensor, the data type can be int32, int64, float32, float64.

• mask (Tensor) – The Tensor containing the binary mask to index with, its data type is bool.

• name (str, optional) – For details, please refer to Name. Generally, no setting is required. Default: None.

Returns

A 1-D Tensor which is the same data type as x.

Examples

import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0, 4.0],
[5.0, 6.0, 7.0, 8.0],
[9.0, 10.0, 11.0, 12.0]])
mask = paddle.to_tensor([[True, False, False, False],
[True, True, False, False],
[True, False, False, False]])
out = paddle.masked_select(x, mask)
#[1.0 5.0 6.0 9.0]

matmul ( y, transpose_x=False, transpose_y=False, name=None )

Applies matrix multiplication to two tensors. matmul follows the complete broadcast rules, and its behavior is consistent with np.matmul.

Currently, the input tensors can have any number of dimensions; matmul can be used to compute the dot product, matrix multiplication and batched matrix multiplication.

The actual behavior depends on the shapes of $$x$$, $$y$$ and the flag values of transpose_x, transpose_y. Specifically:

• If a transpose flag is specified, the last two dimensions of the tensor are transposed. If the tensor is 1-dimensional, the transpose is invalid; a 1-dimensional tensor of shape $$[D]$$ is treated as $$[1, D]$$ for $$x$$, whereas for $$y$$ it is the opposite: it is treated as $$[D, 1]$$.

The multiplication behavior depends on the dimensions of x and y. Specifically:

• If both tensors are 1-dimensional, the dot product result is obtained.

• If both tensors are 2-dimensional, the matrix-matrix product is obtained.

• If the x is 1-dimensional and the y is 2-dimensional, a 1 is prepended to its dimension in order to conduct the matrix multiply. After the matrix multiply, the prepended dimension is removed.

• If the x is 2-dimensional and y is 1-dimensional, the matrix-vector product is obtained.

• If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is obtained. If the first argument is 1-dimensional, a 1 is prepended to its dimension in order to conduct the batched matrix multiply and removed afterwards. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed afterwards. The non-matrix dimensions (all but the last two) are broadcast according to the broadcast rule. For example, if one input is a (j, 1, n, m) tensor and the other is a (k, m, p) tensor, out will be a (j, k, n, p) tensor.

Parameters
• x (Tensor) – The input tensor which is a Tensor.

• y (Tensor) – The input tensor which is a Tensor.

• transpose_x (bool) – Whether to transpose $$x$$ before multiplication.

• transpose_y (bool) – Whether to transpose $$y$$ before multiplication.

• name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.

Returns

The output Tensor.

Return type

Tensor

Examples

import paddle

# vector * vector
x = paddle.rand([10])
y = paddle.rand([10])
z = paddle.matmul(x, y)
print(z.shape)
# [1]

# matrix * vector
x = paddle.rand([10, 5])
y = paddle.rand([5])
z = paddle.matmul(x, y)
print(z.shape)
# [10]

# batched matrix * broadcasted vector
x = paddle.rand([10, 5, 2])
y = paddle.rand([2])
z = paddle.matmul(x, y)
print(z.shape)
# [10, 5]

# batched matrix * batched matrix
x = paddle.rand([10, 5, 2])
y = paddle.rand([10, 2, 5])
z = paddle.matmul(x, y)
print(z.shape)
# [10, 5, 5]

# batched matrix * broadcasted matrix
x = paddle.rand([10, 1, 5, 2])
y = paddle.rand([1, 3, 2, 5])
z = paddle.matmul(x, y)
print(z.shape)
# [10, 3, 5, 5]
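
A minimal sketch of the transpose flags (the transpose is applied to the last two dimensions before multiplying):

import paddle

# matrix * matrix with the second operand transposed
x = paddle.rand([10, 5])
y = paddle.rand([10, 5])
z = paddle.matmul(x, y, transpose_y=True)
print(z.shape)
# [10, 10]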

matrix_power ( n, name=None )

Computes the n-th power of a square matrix or a batch of square matrices.

Let $$X$$ be a square matrix or a batch of square matrices, $$n$$ be an exponent, the equation should be:

$Out = X ^ {n}$

Specifically,

• If n > 0, it returns the matrix or a batch of matrices raised to the power of n.

• If n = 0, it returns the identity matrix or a batch of identity matrices.

• If n < 0, it returns the inverse of each matrix (if invertible) raised to the power of abs(n).

Parameters
• x (Tensor) – A square matrix or a batch of square matrices to be raised to power n. Its shape should be [*, M, M], where * is zero or more batch dimensions. Its data type should be float32 or float64.

• n (int) – The exponent. It can be any positive, negative integer or zero.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

The n-th power of the matrix (or the batch of matrices) x. Its data type should be the same as that of x.

Return type

Tensor

Examples

import paddle

x = paddle.to_tensor([[1, 2, 3],
[1, 4, 9],
[1, 8, 27]], dtype='float64')
print(paddle.linalg.matrix_power(x, 2))
# [[6.  , 34. , 102.],
#  [14. , 90. , 282.],
#  [36. , 250., 804.]]

print(paddle.linalg.matrix_power(x, 0))
# [[1., 0., 0.],
#  [0., 1., 0.],
#  [0., 0., 1.]]

print(paddle.linalg.matrix_power(x, -2))
# [[ 12.91666667, -12.75000000,  2.83333333 ],
#  [-7.66666667 ,  8.         , -1.83333333 ],
#  [ 1.80555556 , -1.91666667 ,  0.44444444 ]]
max ( axis=None, keepdim=False, name=None )

Computes the maximum of tensor elements over the given axis.

Note

The difference between max and amax is: If there are multiple maximum elements, amax evenly distributes gradient between these equal values, while max propagates gradient to all of them.

Parameters