amax

paddle.amax(x, axis=None, keepdim=False, name=None) [source]

Computes the maximum of the Tensor elements along the specified dimension(s) and returns the result.

Note

When the input contains multiple maximum elements, max propagates the full gradient back to every position holding the maximum value, while amax distributes the gradient evenly among those positions.

Parameters

  • x (Tensor) - The input Tensor. Supported data types are float32, float64, int32 and int64; the number of dimensions must not exceed 4.

  • axis (int|list|tuple, optional) - The dimension(s) along which to compute the maximum. If None, the maximum over all elements is computed and a Tensor containing a single element is returned; otherwise each value must lie in the range \([-x.ndim, x.ndim)\). If \(axis[i] < 0\), the dimension to reduce is \(x.ndim + axis[i]\). Default is None.

  • keepdim (bool, optional) - Whether to keep the reduced dimension(s) in the output Tensor. If keepdim is False, the result Tensor will have fewer dimensions than the input Tensor. Default is False.

  • name (str, optional) - For details, please refer to Name. Generally, there is no need to set it. Default is None.

Returns

Tensor, the result of the maximum reduction along the specified axis, with the same data type as the input.

Code Example

>>> import paddle
>>> # x is a Tensor with shape [2, 4] that contains multiple maximum elements
>>> # the axis is an int

>>> x = paddle.to_tensor([[0.1, 0.9, 0.9, 0.9],
...                         [0.9, 0.9, 0.6, 0.7]],
...                         dtype='float64', stop_gradient=False)
>>> # There are 5 maximum elements:
>>> # 1) amax evenly distributes the gradient among these equal values,
>>> #    thus the corresponding gradients are 1/5=0.2;
>>> # 2) while max propagates gradient to all of them,
>>> #    thus the corresponding gradients are 1.
>>> result1 = paddle.amax(x)
>>> result1.backward()
>>> result1
Tensor(shape=[], dtype=float64, place=Place(cpu), stop_gradient=False,
0.90000000)
>>> x.grad
Tensor(shape=[2, 4], dtype=float64, place=Place(cpu), stop_gradient=False,
[[0.        , 0.20000000, 0.20000000, 0.20000000],
 [0.20000000, 0.20000000, 0.        , 0.        ]])

>>> x.clear_grad()
>>> result1_max = paddle.max(x)
>>> result1_max.backward()
>>> result1_max
Tensor(shape=[], dtype=float64, place=Place(cpu), stop_gradient=False,
0.90000000)
>>> x.grad
Tensor(shape=[2, 4], dtype=float64, place=Place(cpu), stop_gradient=False,
[[0., 1., 1., 1.],
 [1., 1., 0., 0.]])

>>> x.clear_grad()
>>> result2 = paddle.amax(x, axis=0)
>>> result2.backward()
>>> result2
Tensor(shape=[4], dtype=float64, place=Place(cpu), stop_gradient=False,
[0.90000000, 0.90000000, 0.90000000, 0.90000000])
>>> x.grad
Tensor(shape=[2, 4], dtype=float64, place=Place(cpu), stop_gradient=False,
[[0.        , 0.50000000, 1.        , 1.        ],
 [1.        , 0.50000000, 0.        , 0.        ]])

>>> x.clear_grad()
>>> result3 = paddle.amax(x, axis=-1)
>>> result3.backward()
>>> result3
Tensor(shape=[2], dtype=float64, place=Place(cpu), stop_gradient=False,
[0.90000000, 0.90000000])
>>> x.grad
Tensor(shape=[2, 4], dtype=float64, place=Place(cpu), stop_gradient=False,
[[0.        , 0.33333333, 0.33333333, 0.33333333],
 [0.50000000, 0.50000000, 0.        , 0.        ]])

>>> x.clear_grad()
>>> result4 = paddle.amax(x, axis=1, keepdim=True)
>>> result4.backward()
>>> result4
Tensor(shape=[2, 1], dtype=float64, place=Place(cpu), stop_gradient=False,
[[0.90000000],
 [0.90000000]])
>>> x.grad
Tensor(shape=[2, 4], dtype=float64, place=Place(cpu), stop_gradient=False,
[[0.        , 0.33333333, 0.33333333, 0.33333333],
 [0.50000000, 0.50000000, 0.        , 0.        ]])

>>> # y is a Tensor with shape [2, 2, 2]
>>> # the axis is a list
>>> y = paddle.to_tensor([[[0.1, 0.9], [0.9, 0.9]],
...                         [[0.9, 0.9], [0.6, 0.7]]],
...                         dtype='float64', stop_gradient=False)
>>> result5 = paddle.amax(y, axis=[1, 2])
>>> result5.backward()
>>> result5
Tensor(shape=[2], dtype=float64, place=Place(cpu), stop_gradient=False,
[0.90000000, 0.90000000])
>>> y.grad
Tensor(shape=[2, 2, 2], dtype=float64, place=Place(cpu), stop_gradient=False,
[[[0.        , 0.33333333],
  [0.33333333, 0.33333333]],
 [[0.50000000, 0.50000000],
  [0.        , 0.        ]]])

>>> y.clear_grad()
>>> result6 = paddle.amax(y, axis=[0, 1])
>>> result6.backward()
>>> result6
Tensor(shape=[2], dtype=float64, place=Place(cpu), stop_gradient=False,
[0.90000000, 0.90000000])
>>> y.grad
Tensor(shape=[2, 2, 2], dtype=float64, place=Place(cpu), stop_gradient=False,
[[[0.        , 0.33333333],
  [0.50000000, 0.33333333]],
 [[0.50000000, 0.33333333],
  [0.        , 0.        ]]])
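
>>> # The forward result of amax is the same as that of max; only the backward
>>> # behavior differs, as shown above. As a supplementary sketch of the shape
>>> # rules described under Parameters (the tensor z and the use of paddle.rand
>>> # are purely illustrative; the shapes noted in the comments follow from the
>>> # axis/keepdim semantics and are not captured doctest output):
>>> z = paddle.rand([2, 3, 4])
>>> paddle.amax(z).shape                        # axis=None reduces all elements -> 0-D Tensor, shape []
>>> paddle.amax(z, axis=1).shape                # the reduced dimension is removed -> [2, 4]
>>> paddle.amax(z, axis=1, keepdim=True).shape  # the reduced dimension is kept with size 1 -> [2, 1, 4]
>>> paddle.amax(z, axis=[0, 2]).shape           # axis may be a list/tuple of dimensions -> [3]
>>> paddle.amax(z, axis=-1).shape               # negative axis: -1 means z.ndim - 1 = 2 -> [2, 3]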

Tutorials that use this API