# log_softmax¶

paddle.nn.functional.log_softmax(x, axis=-1, dtype=None, name=None) [source]

This operator implements the log_softmax layer. The calculation process is as follows:

\begin{split}\begin{aligned} log\_softmax[i, j] &= \log(softmax(x)) \\ &= \log\left(\frac{\exp(x[i, j])}{\sum_j \exp(x[i, j])}\right) \end{aligned}\end{split}
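As a quick sanity check, the formula above can be reproduced directly with NumPy. This is an illustrative sketch of the math, not Paddle's actual kernel; it uses the standard numerically stable form, which subtracts the row maximum before exponentiating.

```python
import numpy as np

x = np.array([[-2.0, 3.0, -4.0, 5.0]])
# log(exp(x) / sum_j exp(x)) along the last axis, computed in the
# numerically stable form: (x - max) - log(sum(exp(x - max)))
shifted = x - x.max(axis=-1, keepdims=True)
log_softmax = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
# exponentiating recovers the softmax, so each row sums to 1
row_sums = np.exp(log_softmax).sum(axis=-1)
```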
Parameters
• x (Tensor) – The input Tensor with data type float32, float64.

• axis (int, optional) – The axis along which to perform log_softmax calculations. It should be in the range [-D, D), where D is the number of dimensions of x. If axis < 0, it works the same way as axis + D. Default is -1.

• dtype (str|np.dtype|core.VarDesc.VarType, optional) – The desired data type of the output tensor. If dtype is specified, x is casted to dtype before the operation is performed. This is useful for preventing data type overflows. Supported dtype: float32, float64. If dtype is None, the output Tensor has the same dtype as x. Default is None.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
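The negative-axis rule can be illustrated with a small NumPy reference implementation (an assumption-labeled sketch of the formula, not Paddle's implementation): for a 3-D input, axis=-1 selects the same axis as axis + D = 2.

```python
import numpy as np

def log_softmax(x, axis=-1):
    # Reference implementation of the formula above, for illustration only
    # (not Paddle's actual kernel); numerically stable via max-subtraction.
    shifted = x - x.max(axis=axis, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))

rng = np.random.default_rng(0)
x = rng.random((2, 3, 4))
D = x.ndim  # D = 3, so valid axes are in [-3, 3)
# axis=-1 behaves the same as axis + D = 2
same = np.allclose(log_softmax(x, axis=-1), log_softmax(x, axis=2))
```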

Returns

A Tensor with the same shape as x, with data type dtype if it is specified, otherwise the same data type as x.

Examples

```python
import paddle
import paddle.nn.functional as F

x = paddle.to_tensor([[[-2.0, 3.0, -4.0, 5.0],
                       [3.0, -4.0, 5.0, -6.0],
                       [-7.0, -8.0, 8.0, 9.0]],
                      [[1.0, -2.0, -3.0, 4.0],
                       [-5.0, 6.0, 7.0, -8.0],
                       [6.0, 7.0, 8.0, 9.0]]])

out1 = F.log_softmax(x)
out2 = F.log_softmax(x, dtype='float64')
# out1 has the same dtype as x (float32);
# out2 casts x to float64 before the operation
```