multi_label_soft_margin_loss

paddle.nn.functional.multi_label_soft_margin_loss(input, label, weight=None, reduction='mean', name=None) [source]

Calculate a multi-label soft margin loss (a one-versus-all loss based on binary cross entropy with logits) between input \(x\) (a 2D mini-batch Tensor) and label \(y\) (a 2D Tensor of the same shape). For each sample in the mini-batch:

\[\text{loss}(x, y) = - \frac{1}{C}\sum_{i}\left[ y[i] \cdot \log\left(\frac{1}{1 + e^{-x[i]}}\right) + (1 - y[i]) \cdot \log\left(\frac{e^{-x[i]}}{1 + e^{-x[i]}}\right) \right]\]

where \(C\) is the number of classes, \(i \in \left\{0, \; \cdots , \; C - 1\right\}\), and \(\frac{1}{1 + e^{-x[i]}}\) is the sigmoid of \(x[i]\). \(x\) and \(y\) must have the same shape.
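
Because \(\log\frac{1}{1+e^{-x}} = \log\sigma(x)\) and \(\log\frac{e^{-x}}{1+e^{-x}} = \log\sigma(-x)\), the formula can be evaluated directly with paddle.nn.functional.log_sigmoid. A minimal reference sketch of the per-sample loss (an illustration of the formula above, not the library's implementation; the helper name reference_loss is made up):

>>> import paddle
>>> import paddle.nn.functional as F
>>> def reference_loss(x, y):
...     # per-class term: -[y * log(sigmoid(x)) + (1 - y) * log(sigmoid(-x))]
...     per_class = -(y * F.log_sigmoid(x) + (1 - y) * F.log_sigmoid(-x))
...     return per_class.mean(axis=-1)  # mean over the C classes -> shape [N]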

Parameters
  • input (Tensor) – Input tensor, with data type float32 or float64. The shape is (N, C), where C is the number of classes; if the input has more than 2 dimensions, the shape is (N, C, D1, D2, ..., Dk), k >= 1.

  • label (Tensor) – Label tensor, with data type float32 or float64 and the same shape as input.

  • weight (Tensor, optional) – A manual rescaling weight given to each class. If given, it has to be a Tensor of size C with data type float32 or float64. Default: None. See the usage sketch after this list.

  • reduction (str, optional) – Indicates how to reduce the loss over the batch; the candidates are 'none' | 'mean' | 'sum'. If reduction is 'none', the unreduced loss is returned; if 'mean', the mean of the loss is returned; if 'sum', the summed loss is returned. Default: 'mean'.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
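
As a usage sketch for the weight parameter (the weight values below are arbitrary illustrations, not recommendations), a per-class weight rescales each class's contribution before the mean over classes is taken:

>>> import paddle
>>> import paddle.nn.functional as F
>>> logits = paddle.randn([4, 3])                     # N=4 samples, C=3 classes
>>> labels = (paddle.randn([4, 3]) > 0).astype('float32')
>>> class_weight = paddle.to_tensor([1.0, 2.0, 0.5])  # size C
>>> loss = F.multi_label_soft_margin_loss(logits, labels, weight=class_weight, reduction='mean')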

Shape:

  • input: N-D Tensor, the shape is [N, *], where N is the batch size and * means the number of classes. Available dtypes are float32, float64.

  • label: N-D Tensor, same shape as the input.

  • weight: Tensor of size C, one rescaling weight per class (see the weight parameter above).

  • output: If reduction is 'none', a Tensor of shape [N] holding one loss value per sample (the mean over classes is already taken, as in the example below); if 'mean' or 'sum', a scalar obtained by averaging or summing over all elements.
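
A quick shape check for the reductions (this sketch assumes a recent Paddle release in which fully reduced losses are returned as 0-D Tensors):

>>> import paddle
>>> import paddle.nn.functional as F
>>> x = paddle.randn([8, 5])  # N=8, C=5
>>> y = (paddle.randn([8, 5]) > 0).astype('float32')
>>> F.multi_label_soft_margin_loss(x, y, reduction='none').shape  # one loss per sample
[8]
>>> F.multi_label_soft_margin_loss(x, y, reduction='mean').shape  # scalar
[]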

Returns

Tensor, the tensor storing the multi_label_soft_margin_loss of input and label.

Examples

>>> import paddle
>>> import paddle.nn.functional as F
>>> input = paddle.to_tensor([[1, -2, 3], [0, -1, 2], [1, 0, 1]], dtype=paddle.float32)
>>> # label elements in {1., -1.}
>>> label = paddle.to_tensor([[-1, 1, -1], [1, 1, 1], [1, -1, 1]], dtype=paddle.float32)
>>> loss = F.multi_label_soft_margin_loss(input, label, reduction='none')
>>> print(loss)
Tensor(shape=[3], dtype=float32, place=Place(cpu), stop_gradient=True,
       [3.49625897, 0.71111226, 0.43989015])
>>> loss = F.multi_label_soft_margin_loss(input, label, reduction='mean')
>>> print(loss)
Tensor(shape=[], dtype=float32, place=Place(cpu), stop_gradient=True,
       1.54908717)
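
As a cross-check (continuing the session above; a sketch of the formula, not part of the API), evaluating the loss by hand reproduces the reduction='none' values:

>>> per_class = -(label * F.log_sigmoid(input) + (1 - label) * F.log_sigmoid(-input))
>>> print(per_class.mean(axis=-1))
Tensor(shape=[3], dtype=float32, place=Place(cpu), stop_gradient=True,
       [3.49625897, 0.71111226, 0.43989015])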