# softmax¶

paddle.sparse.nn.functional.softmax(x, axis=-1, name=None) [source]

Sparse softmax activation. The input x must be a SparseCooTensor or SparseCsrTensor.

Note

Only axis=-1 is supported for SparseCsrTensor, since CSR stores its data row by row and computing along the last axis reads each row contiguously.

From the point of view of a dense matrix, for each row $i$ and each column $j$ in the matrix, we have:

$$\text{softmax}_{ij} = \frac{\exp(x_{ij} - \max_j(x_{ij}))}{\sum_j \exp(x_{ij} - \max_j(x_{ij}))}$$
Parameters
• x (Tensor) – The input tensor. It can be SparseCooTensor/SparseCsrTensor. The data type can be float32 or float64.

• axis (int, optional) – The axis along which to perform softmax calculations. Only -1 is supported for SparseCsrTensor.

• name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Returns

A SparseCooTensor or SparseCsrTensor with the same layout as x.

Return type

Tensor

Examples

>>> import paddle

>>> x = paddle.to_tensor([[0.        , 0.95717543, 0.43864486, 0.        ],
...                       [0.84765935, 0.45680618, 0.39412445, 0.        ],
...                       [0.59444654, 0.        , 0.78364515, 0.        ]])

>>> csr = x.to_sparse_csr()
>>> print(csr)
crows=[0, 2, 5, 7],
cols=[1, 2, 0, 1, 2, 0, 2],
values=[0.95717543, 0.43864486, 0.84765935, 0.45680618, 0.39412445,
        0.59444654, 0.78364515]

>>> out = paddle.sparse.nn.functional.softmax(csr)
>>> print(out)
crows=[0, 2, 5, 7],
cols=[1, 2, 0, 1, 2, 0, 2],
values=[0.62680405, 0.37319586, 0.43255258, 0.29261294, 0.27483448,
        0.45284089, 0.54715902]
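For reference, the softmax values shown for the CSR output above can be reproduced with a plain NumPy sketch. This is an illustrative reimplementation, not Paddle's actual kernel: it assumes sparse softmax is taken over only the stored (non-zero) values of each row, so the sparsity pattern is preserved.

```python
import numpy as np

# CSR row-pointer and value arrays from the example above.
crows = np.array([0, 2, 5, 7])
values = np.array([0.95717543, 0.43864486, 0.84765935, 0.45680618,
                   0.39412445, 0.59444654, 0.78364515])

out = np.empty_like(values)
for i in range(len(crows) - 1):
    # Stored (non-zero) entries of row i; implicit zeros are excluded.
    row = values[crows[i]:crows[i + 1]]
    e = np.exp(row - row.max())          # subtract the row max for stability
    out[crows[i]:crows[i + 1]] = e / e.sum()

print(out)  # matches the `values` printed for `out` above
```

Note that each row of `out` sums to 1 over its stored entries only; the implicit zeros do not participate in the normalization.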

>>> coo = x.to_sparse_coo(sparse_dim=2)
>>> print(coo)