Softmax2D

class paddle.nn.Softmax2D(name=None) [source]

Softmax2D activation. Given a Tensor with shape (B, C, H, W) or (C, H, W), it applies Softmax along the channel dimension C independently at each spatial location (h_i, w_j), so the resulting values at each location (:, h_i, w_j) sum to one.

Shape:
  • Input: (B, C, H, W) or (C, H, W)

  • Output: (B, C, H, W) or (C, H, W) (same as input)

Returns

A Tensor of the same shape and dtype as the input, with values in the range [0, 1].
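Softmax2D behaves like a softmax over the channel axis. A minimal sketch of that equivalence, assuming paddle.nn.functional.softmax (axis=-3 indexes the channel dimension C for both supported input shapes):

>>> import paddle
>>> import paddle.nn.functional as F

>>> x = paddle.rand([1, 2, 3, 4])
>>> out = paddle.nn.Softmax2D()(x)
>>> ref = F.softmax(x, axis=-3)  # softmax over the channel dimension
>>> print(paddle.allclose(out, ref).item())
True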

Examples

>>> import paddle
>>> paddle.seed(100)

>>> x = paddle.rand([1, 2, 3, 4])
>>> m = paddle.nn.Softmax2D()
>>> out = m(x)
>>> print(out)
Tensor(shape=[1, 2, 3, 4], dtype=float32, place=Place(cpu), stop_gradient=True,
[[[[0.42608523, 0.32081410, 0.39483935, 0.55642301],
   [0.38131708, 0.45118359, 0.44891062, 0.46053308],
   [0.35746980, 0.60766530, 0.38638926, 0.70425135]],
  [[0.57391477, 0.67918587, 0.60516071, 0.44357699],
   [0.61868292, 0.54881644, 0.55108935, 0.53946698],
   [0.64253020, 0.39233473, 0.61361068, 0.29574865]]]])
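Continuing the example above, a quick check of the property that the values along the channel dimension sum to one at every spatial location (axis=1 is the channel dimension C for a (B, C, H, W) input):

>>> sums = paddle.sum(out, axis=1)  # sum over the channel dimension
>>> print(paddle.allclose(sums, paddle.ones_like(sums)).item())
True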
forward(x)

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters
  • x (Tensor) – The input Tensor with shape (B, C, H, W) or (C, H, W).
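A minimal sketch of overriding forward in a subclass, using a hypothetical ScaledSoftmax2D layer that composes Softmax2D; calling the layer instance dispatches to forward:

>>> import paddle

>>> class ScaledSoftmax2D(paddle.nn.Layer):  # hypothetical layer, for illustration
...     def __init__(self, scale=2.0):
...         super().__init__()
...         self.scale = scale
...         self.softmax = paddle.nn.Softmax2D()
...     def forward(self, x):
...         # the computation performed at every call, i.e. layer(x)
...         return self.softmax(x * self.scale)

>>> layer = ScaledSoftmax2D()
>>> y = layer(paddle.rand([1, 2, 3, 4]))  # invokes forward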

extra_repr()

Extra representation of this layer. You can implement a custom extra_repr for your own layer.
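A minimal sketch of a custom extra_repr, reusing the hypothetical ScaledSoftmax2D above; the returned string is included when the layer is printed:

>>> import paddle

>>> class ScaledSoftmax2D(paddle.nn.Layer):  # hypothetical layer, for illustration
...     def __init__(self, scale=2.0):
...         super().__init__()
...         self.scale = scale
...     def extra_repr(self):
...         # string shown inside the layer's printed representation
...         return f'scale={self.scale}'

>>> print(ScaledSoftmax2D())  # prints something like: ScaledSoftmax2D(scale=2.0)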