l1_loss
- paddle.nn.functional.l1_loss(input, label, reduction='mean', name=None)
Computes the L1 Loss of Tensor input and label as follows.

If reduction is set to 'none', the loss is:

\[Out = \lvert input - label \rvert\]

If reduction is set to 'mean', the loss is:

\[Out = MEAN(\lvert input - label \rvert)\]

If reduction is set to 'sum', the loss is:

\[Out = SUM(\lvert input - label \rvert)\]
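The three reductions can be reproduced with plain elementwise ops. A minimal sketch that checks the formulas above against the function itself (a consistency check, not the library's internal implementation):

```python
import paddle

input = paddle.to_tensor([[1.5, 0.8], [0.2, 1.3]])
label = paddle.to_tensor([[1.7, 1.0], [0.4, 0.5]])

# Out = |input - label|, i.e. the 'none' reduction
diff = paddle.abs(input - label)
print(paddle.allclose(diff, paddle.nn.functional.l1_loss(input, label, reduction='none')))

# Out = MEAN(|input - label|), i.e. the default 'mean' reduction
print(paddle.allclose(paddle.mean(diff), paddle.nn.functional.l1_loss(input, label)))

# Out = SUM(|input - label|), i.e. the 'sum' reduction
print(paddle.allclose(paddle.sum(diff), paddle.nn.functional.l1_loss(input, label, reduction='sum')))
```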
- Parameters
- input (Tensor) – The input tensor. The shape is [N, *], where N is the batch size and * means any number of additional dimensions. Its data type should be float32, float64, int32 or int64 (an integer example follows this parameter list).
- label (Tensor) – The label tensor. The shape is [N, *], the same shape as input. Its data type should be float32, float64, int32 or int64.
- reduction (str, optional) – Indicate the reduction to apply to the loss; the candidates are 'none' | 'mean' | 'sum'. If reduction is 'none', the unreduced loss is returned; if reduction is 'mean', the reduced mean loss is returned; if reduction is 'sum', the reduced sum loss is returned. Default is 'mean'.
- name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name. 
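As the dtype lists above note, integer tensors are accepted as well. A minimal sketch with int64 inputs (reduction='sum' keeps the arithmetic integral; the exact output dtype is an assumption, not stated on this page):

```python
import paddle

# int64 input and label, as permitted by the dtype lists above
x = paddle.to_tensor([[3, 5], [2, 8]], dtype='int64')
y = paddle.to_tensor([[1, 5], [4, 2]], dtype='int64')

# |x - y| = [[2, 0], [2, 6]], so the summed L1 loss is 10
loss = paddle.nn.functional.l1_loss(x, y, reduction='sum')
print(loss)  # expect a tensor containing 10
```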
 
- Returns
Tensor, the L1 Loss of Tensor input and label. If reduction is 'none', the shape of the output loss is \([N, *]\), the same as input. If reduction is 'mean' or 'sum', the shape of the output loss is [1].
Examples

```python
import paddle

input = paddle.to_tensor([[1.5, 0.8], [0.2, 1.3]])
label = paddle.to_tensor([[1.7, 1], [0.4, 0.5]])

l1_loss = paddle.nn.functional.l1_loss(input, label)
print(l1_loss.numpy())
# [0.35]

l1_loss = paddle.nn.functional.l1_loss(input, label, reduction='none')
print(l1_loss.numpy())
# [[0.20000005 0.19999999]
#  [0.2        0.79999995]]

l1_loss = paddle.nn.functional.l1_loss(input, label, reduction='sum')
print(l1_loss.numpy())
# [1.4]
```
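The same loss is also available in layer form as paddle.nn.L1Loss, which can be convenient inside a paddle.nn.Layer model. A minimal sketch of the equivalent call (same reduction semantics as the functional form):

```python
import paddle

input = paddle.to_tensor([[1.5, 0.8], [0.2, 1.3]])
label = paddle.to_tensor([[1.7, 1.0], [0.4, 0.5]])

# Layer counterpart of paddle.nn.functional.l1_loss
l1 = paddle.nn.L1Loss(reduction='mean')
print(l1(input, label).numpy())  # matches the 'mean' example above: [0.35]
```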
