KLDivLoss
class paddle.nn.KLDivLoss(reduction='mean') [source]
Generate a callable object of 'KLDivLoss' to calculate the Kullback-Leibler divergence loss between Input(X) and Input(Target). Note that Input(X) is the log-probability and Input(Target) is the probability.
KL divergence loss is calculated as follows:
$$l(x, y) = y \cdot (\log(y) - x)$$
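As a quick sanity check on the elementwise formula, the 'none'-reduction result can be reproduced with basic tensor ops. The snippet below is a minimal sketch: the tensor values are illustrative, and it assumes the functional form paddle.nn.functional.kl_div(input, label, reduction) matches the layer's behavior.

import paddle
import paddle.nn.functional as F

# target (y) holds probabilities, input (x) holds log-probabilities
p = paddle.to_tensor([[0.2, 0.3, 0.5]])
log_q = paddle.log(paddle.to_tensor([[0.4, 0.4, 0.2]]))

# elementwise l(x, y) = y * (log(y) - x)
manual = p * (paddle.log(p) - log_q)

# compare against the library computation with reduction='none'
auto = F.kl_div(log_q, p, reduction='none')
print(paddle.allclose(manual, auto))  # expected: True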
- Parameters
  - reduction (str) – Indicate how to reduce the loss; the candidates are 'none' | 'batchmean' | 'mean' | 'sum'. If reduction is 'mean', the reduced mean loss is returned; if reduction is 'batchmean', the sum of the loss divided by the batch size is returned; if reduction is 'sum', the reduced sum loss is returned; if reduction is 'none', no reduction will be applied. Default is 'mean'.
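To make the relationship between the reductions concrete, the following is a minimal sketch based only on the description above (the checks it prints, 'sum' being the total, 'batchmean' being that total divided by the batch size N, and 'mean' being the average over all elements, follow from this documentation rather than from any additional API guarantee):

import paddle
import paddle.nn as nn

shape = (5, 20)  # batch size N = 5
x = paddle.uniform(shape, min=-10, max=10).astype('float32')
target = paddle.uniform(shape, min=-10, max=10).astype('float32')

loss_none = nn.KLDivLoss(reduction='none')(x, target)
loss_sum = nn.KLDivLoss(reduction='sum')(x, target)
loss_batchmean = nn.KLDivLoss(reduction='batchmean')(x, target)
loss_mean = nn.KLDivLoss(reduction='mean')(x, target)

print(paddle.allclose(loss_sum, loss_none.sum()))                   # expected: True
print(paddle.allclose(loss_batchmean, loss_none.sum() / shape[0]))  # expected: True
print(paddle.allclose(loss_mean, loss_none.mean()))                 # expected: True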
- Shape:
  - input (Tensor): (N, *), where * means any number of additional dimensions.
  - label (Tensor): (N, *), same shape as input.
  - output (Tensor): tensor with shape [1] by default.
Examples
import paddle
import paddle.nn as nn

shape = (5, 20)
x = paddle.uniform(shape, min=-10, max=10).astype('float32')
target = paddle.uniform(shape, min=-10, max=10).astype('float32')

# 'batchmean' reduction, loss shape will be [1]
kldiv_criterion = nn.KLDivLoss(reduction='batchmean')
pred_loss = kldiv_criterion(x, target)  # shape=[1]

# 'mean' reduction, loss shape will be [1]
kldiv_criterion = nn.KLDivLoss(reduction='mean')
pred_loss = kldiv_criterion(x, target)  # shape=[1]

# 'sum' reduction, loss shape will be [1]
kldiv_criterion = nn.KLDivLoss(reduction='sum')
pred_loss = kldiv_criterion(x, target)  # shape=[1]

# 'none' reduction, loss shape is the same as X shape
kldiv_criterion = nn.KLDivLoss(reduction='none')
pred_loss = kldiv_criterion(x, target)  # shape=[5, 20]
forward(input, label)

Defines the computation performed at every call. Should be overridden by all subclasses.

- Parameters
  - *inputs (tuple) – unpacked tuple arguments
  - **kwargs (dict) – unpacked dict arguments
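forward is rarely called directly; invoking the layer instance routes through forward, so the two calls below are expected to produce the same result. A minimal sketch (the input tensors are illustrative):

import paddle
import paddle.nn as nn

x = paddle.uniform((5, 20), min=-10, max=10).astype('float32')
target = paddle.uniform((5, 20), min=-10, max=10).astype('float32')

kldiv_criterion = nn.KLDivLoss(reduction='none')

loss_via_call = kldiv_criterion(x, target)              # preferred: call the layer instance
loss_via_forward = kldiv_criterion.forward(x, target)   # direct forward invocation
print(paddle.allclose(loss_via_call, loss_via_forward))  # expected: True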