min

paddle.min(x, axis=None, keepdim=False, name=None)

Computes the minimum of tensor elements over the given axis.

Note

The difference between min and amin is: if there are multiple minimum elements, amin evenly distributes the gradient among these equal values, while min propagates the gradient to all of them.
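
For instance, a minimal sketch of this difference (the values here are illustrative and not taken from the examples below):

>>> import paddle
>>> # two tied minimum elements (0.1)
>>> x = paddle.to_tensor([0.1, 0.1, 0.5], dtype='float64', stop_gradient=False)
>>> paddle.min(x).backward()
>>> x.grad   # per the note above, min gives each tied element a full gradient: [1., 1., 0.]
>>> x.clear_grad()
>>> paddle.amin(x).backward()
>>> x.grad   # amin splits the gradient evenly among the ties: [0.50, 0.50, 0.]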

Parameters
  • x (Tensor) – The input Tensor; its data type can be float32, float64, int32, or int64.

  • axis (int|list|tuple, optional) – The axis along which the minimum is computed. If None, compute the minimum over all elements of x and return a Tensor with a single element; otherwise it must be in the range [-x.ndim, x.ndim). If axis[i] < 0, the axis to reduce is x.ndim + axis[i] (see the sketch after this list).

  • keepdim (bool, optional) – Whether to keep the reduced dimension in the output Tensor. The result Tensor will have one fewer dimension than x unless keepdim is True. Default is False.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
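
A small sketch (hypothetical input, not part of the examples below) of how negative entries in a list axis are normalized by the rule above, where each axis[i] < 0 maps to x.ndim + axis[i]:

>>> import paddle
>>> z = paddle.arange(8, dtype='float64').reshape([2, 2, 2])
>>> # for a 3-D tensor, axis=[-2, -1] is normalized to axis=[1, 2],
>>> # so both calls should reduce the last two dimensions and return [0., 4.]
>>> paddle.min(z, axis=[-2, -1])
>>> paddle.min(z, axis=[1, 2])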

Returns

Tensor, the minimum computed over the specified axis of the input tensor; its data type is the same as that of the input Tensor.

Examples

>>> import paddle

>>> # x is a Tensor with shape [2, 4]
>>> # the axis is an int
>>> x = paddle.to_tensor([[0.2, 0.3, 0.5, 0.9],
...                       [0.1, 0.2, 0.6, 0.7]],
...                       dtype='float64', stop_gradient=False)
>>> result1 = paddle.min(x)
>>> result1.backward()
>>> result1
Tensor(shape=[], dtype=float64, place=Place(cpu), stop_gradient=False,
0.10000000)
>>> x.grad
Tensor(shape=[2, 4], dtype=float64, place=Place(cpu), stop_gradient=False,
[[0., 0., 0., 0.],
 [1., 0., 0., 0.]])

>>> x.clear_grad()
>>> result2 = paddle.min(x, axis=0)
>>> result2.backward()
>>> result2
Tensor(shape=[4], dtype=float64, place=Place(cpu), stop_gradient=False,
[0.10000000, 0.20000000, 0.50000000, 0.70000000])
>>> x.grad
Tensor(shape=[2, 4], dtype=float64, place=Place(cpu), stop_gradient=False,
[[0., 0., 1., 0.],
 [1., 1., 0., 1.]])

>>> x.clear_grad()
>>> result3 = paddle.min(x, axis=-1)
>>> result3.backward()
>>> result3
Tensor(shape=[2], dtype=float64, place=Place(cpu), stop_gradient=False,
[0.20000000, 0.10000000])
>>> x.grad
Tensor(shape=[2, 4], dtype=float64, place=Place(cpu), stop_gradient=False,
[[1., 0., 0., 0.],
 [1., 0., 0., 0.]])

>>> x.clear_grad()
>>> result4 = paddle.min(x, axis=1, keepdim=True)
>>> result4.backward()
>>> result4
Tensor(shape=[2, 1], dtype=float64, place=Place(cpu), stop_gradient=False,
[[0.20000000],
 [0.10000000]])
>>> x.grad
Tensor(shape=[2, 4], dtype=float64, place=Place(cpu), stop_gradient=False,
[[1., 0., 0., 0.],
 [1., 0., 0., 0.]])

>>> # y is a Tensor with shape [2, 2, 2]
>>> # the axis is a list
>>> y = paddle.to_tensor([[[1.0, 2.0], [3.0, 4.0]],
...                       [[5.0, 6.0], [7.0, 8.0]]],
...                       dtype='float64', stop_gradient=False)
>>> result5 = paddle.min(y, axis=[1, 2])
>>> result5.backward()
>>> result5
Tensor(shape=[2], dtype=float64, place=Place(cpu), stop_gradient=False,
[1., 5.])
>>> y.grad
Tensor(shape=[2, 2, 2], dtype=float64, place=Place(cpu), stop_gradient=False,
[[[1., 0.],
  [0., 0.]],
 [[1., 0.],
  [0., 0.]]])

>>> y.clear_grad()
>>> result6 = paddle.min(y, axis=[0, 1])
>>> result6.backward()
>>> result6
Tensor(shape=[2], dtype=float64, place=Place(cpu), stop_gradient=False,
[1., 2.])
>>> y.grad
Tensor(shape=[2, 2, 2], dtype=float64, place=Place(cpu), stop_gradient=False,
[[[1., 1.],
  [0., 0.]],
 [[0., 0.],
  [0., 0.]]])