margin_ranking_loss¶
paddle.nn.functional.margin_ranking_loss(input, other, label, margin=0.0, reduction='mean', name=None) [source]
This op calculates the margin rank loss between the input, other, and label tensors, using the formula below.
\[margin\_rank\_loss = max(0, -label * (input - other) + margin)\]

If reduction is set to 'mean', the reduced mean loss is:

\[Out = MEAN(margin\_rank\_loss)\]

If reduction is set to 'sum', the reduced sum loss is:

\[Out = SUM(margin\_rank\_loss)\]

If reduction is set to 'none', the original margin_rank_loss is returned.
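For reference, the following is a minimal NumPy sketch of the formula above, not the Paddle implementation; the function name margin_rank_loss_ref and the use of NumPy are assumptions made here for illustration.

import numpy as np

def margin_rank_loss_ref(input, other, label, margin=0.0, reduction='mean'):
    # elementwise: max(0, -label * (input - other) + margin)
    loss = np.maximum(0.0, -label * (input - other) + margin)
    if reduction == 'mean':
        return loss.mean()
    if reduction == 'sum':
        return loss.sum()
    return loss  # 'none': return the unreduced loss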
Parameters
input (Tensor) – the first input tensor, its data type should be float32 or float64.
other (Tensor) – the second input tensor, its data type should be float32 or float64.
label (Tensor) – the label value corresponding to input, its data type should be float32 or float64.
margin (float, optional) – The margin value to add. Default is 0.0.
reduction (str, optional) – Indicates the reduction to apply to the loss; the candidates are 'none', 'mean', 'sum'. If reduction is 'none', the unreduced loss is returned; if reduction is 'mean', the reduced mean loss is returned; if reduction is 'sum', the reduced sum loss is returned. Default is 'mean'.
name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
Returns: Tensor. If reduction is 'mean' or 'sum', the output shape is \([1]\); otherwise the shape is the same as input. The output has the same dtype as the input tensor.

Examples
import paddle

input = paddle.to_tensor([[1, 2], [3, 4]], dtype='float32')
other = paddle.to_tensor([[2, 1], [2, 4]], dtype='float32')
label = paddle.to_tensor([[1, -1], [-1, -1]], dtype='float32')
loss = paddle.nn.functional.margin_ranking_loss(input, other, label)
print(loss)
# [0.75]
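For comparison, the per-element losses can be inspected with reduction='none'; the expected values in the comment below follow from the formula and the tensors above, and are stated as an assumption rather than captured output.

loss_none = paddle.nn.functional.margin_ranking_loss(input, other, label, reduction='none')
print(loss_none)
# expected per-element losses: [[1., 1.], [1., 0.]]; their mean is 0.75, matching the result above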
