LogSoftmax

class paddle.nn.LogSoftmax(axis=-1, name=None)

This class implements the log_softmax activation layer. The calculation process is as follows:

\[\begin{split}\begin{array}{rcl} Out[i, j] &=& \log\big(\mathrm{softmax}(x)\big) \\ &=& \log\left(\frac{\exp(X[i, j])}{\sum_j \exp(X[i, j])}\right) \end{array}\end{split}\]
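
As a sanity check, the formula can be evaluated by hand. The sketch below (the variable names are illustrative, not part of the API) uses the algebraically equivalent, numerically stable rewrite \(x - \max(x) - \log\sum_j \exp(x - \max(x))\):

>>> import paddle
>>> x = paddle.to_tensor([[1.0, 2.0, 3.0]])
>>> # stable by-hand log_softmax along the last axis
>>> shifted = x - x.max(axis=-1, keepdim=True)
>>> manual = shifted - paddle.log(paddle.exp(shifted).sum(axis=-1, keepdim=True))
>>> print(paddle.allclose(manual, paddle.nn.LogSoftmax(axis=-1)(x)).item())
True
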
Parameters
  • axis (int, optional) – The axis along which to perform log_softmax calculations. It should be in the range [-D, D), where D is the number of dimensions of the input Tensor. If axis < 0, it works the same way as \(axis + D\) (see the sketch after this parameter list). Default is -1.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
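
For instance, on a 2-D input (so D = 2), axis=-2 resolves to axis 0. A minimal sketch (variable names are illustrative only):

>>> import paddle
>>> x = paddle.rand([2, 3])
>>> a = paddle.nn.LogSoftmax(axis=0)(x)
>>> b = paddle.nn.LogSoftmax(axis=-2)(x)
>>> print(paddle.allclose(a, b).item())
True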

Shape:
  • input: Tensor with any shape.

  • output: Tensor with the same shape as input.

Examples

>>> import paddle

>>> x = [[[-2.0,  3.0, -4.0,  5.0],
...       [ 3.0, -4.0,  5.0, -6.0],
...       [-7.0, -8.0,  8.0,  9.0]],
...      [[ 1.0, -2.0, -3.0,  4.0],
...       [-5.0,  6.0,  7.0, -8.0],
...       [ 6.0,  7.0,  8.0,  9.0]]]
>>> m = paddle.nn.LogSoftmax()
>>> x = paddle.to_tensor(x)
>>> out = m(x)
>>> print(out)
Tensor(shape=[2, 3, 4], dtype=float32, place=Place(cpu), stop_gradient=True,
[[[-7.12783957 , -2.12783957 , -9.12783909 , -0.12783945 ],
  [-2.12705135 , -9.12705135 , -0.12705141 , -11.12705135],
  [-16.31326103, -17.31326103, -1.31326187 , -0.31326184 ]],
 [[-3.05181193 , -6.05181217 , -7.05181217 , -0.05181199 ],
  [-12.31326675, -1.31326652 , -0.31326646 , -15.31326675],
  [-3.44018984 , -2.44018984 , -1.44018972 , -0.44018975 ]]])
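
The same computation is also available in functional form. Assuming paddle.nn.functional.log_softmax, the functional counterpart of this layer, the two forms agree:

>>> import paddle
>>> import paddle.nn.functional as F
>>> x = paddle.to_tensor([[1.0, 2.0, 3.0]])
>>> out_layer = paddle.nn.LogSoftmax(axis=-1)(x)
>>> out_func = F.log_softmax(x, axis=-1)
>>> print(paddle.allclose(out_layer, out_func).item())
True
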
forward(x)

Defines the computation performed at every call. For this layer, it applies log_softmax to the input along the configured axis.

Parameters
  • x (Tensor) – The input Tensor with any shape.
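
In practice, forward is rarely called by name; invoking the layer instance, as in the example above, dispatches to it:

>>> import paddle
>>> m = paddle.nn.LogSoftmax(axis=-1)
>>> x = paddle.to_tensor([[0.0, 1.0]])
>>> print(paddle.allclose(m(x), m.forward(x)).item())
True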

extra_repr()

Returns an extra string representation of this layer. Override it in your own Layer subclass to customize how the layer is printed.
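
To illustrate, a hypothetical custom layer (MyScale below is an assumption for this sketch, not part of paddle) can override extra_repr so that its configuration shows up when the layer is printed:

>>> import paddle
>>> class MyScale(paddle.nn.Layer):  # hypothetical example layer
...     def __init__(self, factor):
...         super().__init__()
...         self.factor = factor
...     def forward(self, x):
...         return x * self.factor
...     def extra_repr(self):
...         return f'factor={self.factor}'
...
>>> print(MyScale(2.0))
MyScale(factor=2.0)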