KLDivLoss

class paddle.nn.KLDivLoss(reduction='mean') [source]

Generate a callable object of ‘KLDivLoss’ to calculate the Kullback-Leibler divergence loss between Input(X) and Input(Target). Note that Input(X) is expected to hold log-probabilities and Input(Target) to hold probabilities.

KL divergence loss is calculated as follows:

$$l(x, y) = y \cdot (\log(y) - x)$$

Here \(x\) is the input (log-probability) and \(y\) is the label (probability).
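As a quick sanity check, the elementwise loss produced by the 'none' reduction can be compared against this formula directly; the following is an illustrative sketch, not part of the reference itself:

>>> import paddle
>>> import paddle.nn.functional as F

>>> # build a valid (log-probability, probability) pair
>>> x = F.log_softmax(paddle.randn((5, 20)), axis=-1)
>>> y = F.softmax(paddle.randn((5, 20)), axis=-1)

>>> # elementwise layer output vs. the formula y * (log(y) - x)
>>> loss = paddle.nn.KLDivLoss(reduction='none')(x, y)
>>> manual = y * (paddle.log(y) - x)
>>> print(paddle.allclose(loss, manual).item())
True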

If reduction is 'none', the output loss has the same shape as the input, the loss at each point is calculated separately, and no reduction is applied.

If reduction is 'mean', the output loss has shape [], and the output is the average of all losses.

If reduction is 'sum', the output loss has shape [], and the output is the sum of all losses.

If reduction is 'batchmean', the output loss has shape [], and the output is the sum of all losses divided by the batch size N (the size of the first dimension).
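For concreteness, the four reduction modes relate to one another as sketched below; this is illustrative only, with the batch size N = 4:

>>> import paddle
>>> import paddle.nn.functional as F

>>> x = F.log_softmax(paddle.randn((4, 8)), axis=-1)
>>> y = F.softmax(paddle.randn((4, 8)), axis=-1)

>>> none_loss = paddle.nn.KLDivLoss(reduction='none')(x, y)  # shape [4, 8]
>>> sum_loss = paddle.nn.KLDivLoss(reduction='sum')(x, y)    # scalar, shape []
>>> mean_loss = paddle.nn.KLDivLoss(reduction='mean')(x, y)  # scalar, shape []
>>> bm_loss = paddle.nn.KLDivLoss(reduction='batchmean')(x, y)

>>> # 'sum' totals the elementwise losses; 'mean' divides by the
>>> # element count; 'batchmean' divides by the batch size N = 4
>>> print(paddle.allclose(sum_loss, none_loss.sum()).item())
True
>>> print(paddle.allclose(mean_loss, sum_loss / (4 * 8)).item())
True
>>> print(paddle.allclose(bm_loss, sum_loss / 4).item())
True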

Parameters

reduction (str, optional) – Indicates how to reduce the loss; the candidates are 'none' | 'batchmean' | 'mean' | 'sum'. If reduction is 'mean', the reduced mean loss is returned; if reduction is 'batchmean', the sum of the loss divided by the batch size is returned; if reduction is 'sum', the reduced sum loss is returned; if reduction is 'none', no reduction will be applied. Default is 'mean'.

Shape:

input (Tensor): (N, *), where * means any number of additional dimensions.

label (Tensor): (N, *), same shape as input.

output (Tensor): tensor with shape [] by default; if reduction is 'none', the shape is the same as the input.

Examples

>>> import paddle
>>> import paddle.nn as nn
>>> import paddle.nn.functional as F

>>> shape = (5, 20)
>>> # input should hold log-probabilities, label should hold probabilities
>>> x = F.log_softmax(paddle.randn(shape), axis=-1)
>>> target = F.softmax(paddle.randn(shape), axis=-1)

>>> # 'batchmean' reduction, loss shape will be []
>>> kldiv_criterion = nn.KLDivLoss(reduction='batchmean')
>>> pred_loss = kldiv_criterion(x, target)
>>> print(pred_loss.shape)
[]

>>> # 'mean' reduction, loss shape will be []
>>> kldiv_criterion = nn.KLDivLoss(reduction='mean')
>>> pred_loss = kldiv_criterion(x, target)
>>> print(pred_loss.shape)
[]

>>> # 'sum' reduction, loss shape will be []
>>> kldiv_criterion = nn.KLDivLoss(reduction='sum')
>>> pred_loss = kldiv_criterion(x, target)
>>> print(pred_loss.shape)
[]

>>> # 'none' reduction, loss shape is the same as the shape of x
>>> kldiv_criterion = nn.KLDivLoss(reduction='none')
>>> pred_loss = kldiv_criterion(x, target)
>>> print(pred_loss.shape)
[5, 20]
forward(input, label)

Defines the computation performed at every call.

Parameters
  • input (Tensor) – the input tensor, holding log-probabilities, with shape (N, *).

  • label (Tensor) – the label tensor, holding probabilities, with the same shape as input.
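Where constructing a layer object is unnecessary, the functional counterpart paddle.nn.functional.kl_div computes the same loss in a single call; a minimal sketch:

>>> import paddle
>>> import paddle.nn.functional as F

>>> x = F.log_softmax(paddle.randn((5, 20)), axis=-1)
>>> target = F.softmax(paddle.randn((5, 20)), axis=-1)
>>> loss = F.kl_div(x, target, reduction='batchmean')
>>> print(loss.shape)
[]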