HingeEmbeddingLoss
class paddle.nn.HingeEmbeddingLoss(margin=1.0, reduction='mean', name=None) [source]
Creates a callable object of HingeEmbeddingLoss that calculates the hinge embedding loss. The loss is measured between an input tensor \(x\) and a labels tensor \(y\) (containing 1 or -1), and is typically used for learning nonlinear embeddings or for semi-supervised learning.

The loss for the \(n\)-th sample in the mini-batch is

\[l_n = \begin{cases} x_n, & \text{if}\; y_n = 1,\\ \max\{0, \Delta - x_n\}, & \text{if}\; y_n = -1, \end{cases}\]

and the total loss is

\[\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum',} \end{cases}\]

where \(L = \{l_1, \dots, l_N\}^\top\) and \(\Delta\) is the margin (a minimal reference sketch of this computation follows the parameter list).

Parameters
- margin (float, optional) – The margin hyperparameter \(\Delta\) used in the loss. It determines how large the input has to be before it stops contributing to the loss: when the label is -1, only inputs smaller than margin incur a non-zero loss. Default: 1.0.
- reduction (str, optional) – Indicates how to reduce the loss over the batch; the candidates are 'none' | 'mean' | 'sum'. If reduction is 'none', the unreduced loss is returned; if reduction is 'mean', the mean of the loss is returned; if reduction is 'sum', the summed loss is returned. Default: 'mean'.
- name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name. 
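For readers who want to see how the formula above maps to code, here is a minimal reference sketch built only from public Paddle ops (paddle.where, paddle.maximum, paddle.zeros_like); the function name and its signature are illustrative, not part of the Paddle API.

    import paddle

    def hinge_embedding_loss_ref(input, label, margin=1.0, reduction='mean'):
        # elementwise loss: x_n where y_n == 1, max(0, margin - x_n) where y_n == -1
        zeros = paddle.zeros_like(input)
        loss = paddle.where(label == 1., input, paddle.maximum(zeros, margin - input))
        if reduction == 'mean':
            return paddle.mean(loss)
        if reduction == 'sum':
            return paddle.sum(loss)
        return loss  # reduction == 'none': return the unreduced elementwise loss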
 
Call Parameters:

- input (Tensor) – Input tensor, the data type is float32 or float64. Shape is (N, C), where C is the number of classes; if the input has more than 2 dimensions, the shape is (N, C, D1, D2, ..., Dk), k >= 1.
- label (Tensor) – Label tensor containing 1 or -1, the data type is float32 or float64. The shape of label is the same as the shape of input.

Shape:

- input: N-D Tensor, the shape is [N, *], where N is the batch size and * means any number of additional dimensions; available dtypes are float32 and float64. The sum operation operates over all the elements.
- label: N-D Tensor, same shape as the input.
- output: scalar. If reduction is 'none', the output has the same shape as the input.

Returns

Tensor, the tensor variable storing the hinge_embedding_loss of input and label.
Examples

    import paddle
    import paddle.nn as nn

    input = paddle.to_tensor([[1, -2, 3], [0, -1, 2], [1, 0, 1]], dtype=paddle.float32)
    # label elements in {1., -1.}
    label = paddle.to_tensor([[-1, 1, -1], [1, 1, 1], [1, -1, 1]], dtype=paddle.float32)

    hinge_embedding_loss = nn.HingeEmbeddingLoss(margin=1.0, reduction='none')
    loss = hinge_embedding_loss(input, label)
    print(loss)
    # Tensor([[0., -2., 0.],
    #         [0., -1., 2.],
    #         [1., 1., 1.]])

    hinge_embedding_loss = nn.HingeEmbeddingLoss(margin=1.0, reduction='mean')
    loss = hinge_embedding_loss(input, label)
    print(loss)
    # Tensor([0.22222222])
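For completeness, a short sketch continuing the same example with reduction='sum'; the expected value follows from summing the unreduced losses shown above (0 - 2 + 0 + 0 - 1 + 2 + 1 + 1 + 1 = 2).

    hinge_embedding_loss = nn.HingeEmbeddingLoss(margin=1.0, reduction='sum')
    loss = hinge_embedding_loss(input, label)
    print(loss)
    # Expected: Tensor([2.])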
            
forward(input, label)

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters

- *inputs (tuple) – unpacked tuple arguments
- **kwargs (dict) – unpacked dict arguments
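As a usage note (a minimal sketch, not part of the documented API text): calling the layer instance dispatches to forward, so the two calls below compute the same value; the tensors here are illustrative.

    import paddle
    import paddle.nn as nn

    input = paddle.to_tensor([[1., -2., 3.]])
    label = paddle.to_tensor([[-1., 1., -1.]])
    loss_fn = nn.HingeEmbeddingLoss(margin=1.0, reduction='mean')

    # calling the instance invokes forward under the hood
    out1 = loss_fn(input, label)
    out2 = loss_fn.forward(input, label)
    # out1 and out2 hold the same value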
 
 
 
