GaussianNLLLoss

class paddle.nn.GaussianNLLLoss(full=False, epsilon=1e-06, reduction='mean', name=None) [source]

Create a callable object of GaussianNLLLoss to calculate the Gaussian negative log likelihood loss.

This class creates a callable object that computes the Gaussian negative log likelihood loss among input, variance and label. The label is treated as a sample from a Gaussian distribution whose mean (input) and variance are predicted by a neural network; the class is therefore used to train a network that outputs both quantities, which means input and variance should be functions (the outputs of the neural network) of some inputs. A minimal training sketch is shown below.
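As a minimal sketch of this pattern (the two-head architecture and the softplus used to keep the predicted variance positive are illustrative choices, not requirements of this API):

>>> import paddle
>>> import paddle.nn as nn
>>> class GaussianRegressor(nn.Layer):
...     def __init__(self, in_dim=4, hidden=16):
...         super().__init__()
...         self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
...         self.mean_head = nn.Linear(hidden, 1)
...         self.var_head = nn.Linear(hidden, 1)
...     def forward(self, x):
...         h = self.backbone(x)
...         # softplus keeps the predicted variance strictly positive
...         return self.mean_head(h), nn.functional.softplus(self.var_head(h))
>>> model = GaussianRegressor()
>>> loss_fn = nn.GaussianNLLLoss()
>>> x = paddle.randn([8, 4])
>>> y = paddle.randn([8, 1])
>>> mean, variance = model(x)
>>> loss = loss_fn(mean, y, variance)
>>> loss.backward()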

For a label drawn from a Gaussian distribution whose mean (input) and variance are predicted by the neural network, the loss is calculated as follows:

\[\text{loss} = \frac{1}{2}\left(\log\left(\max\left(\text{variance}, \ \text{epsilon}\right)\right) + \frac{\left(\text{input} - \text{label}\right)^2} {\max\left(\text{variance}, \ \text{epsilon}\right)}\right) + \text{const.}\]

where epsilon is used for numerical stability. The constant term of the loss function is omitted unless full is True. If variance is not the same size as input (due to a homoscedastic assumption), it must either have a final dimension of 1 or have one fewer dimension (with all other sizes being the same) for correct broadcasting. The formula can be verified by hand, as in the sketch below.
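The following is a minimal sketch that reproduces the unreduced loss with basic tensor operations; it assumes the clamp is exactly \(\max(\text{variance}, \text{epsilon})\), as written above:

>>> import paddle
>>> import paddle.nn as nn
>>> paddle.seed(2023)
>>> input = paddle.randn([4])
>>> label = paddle.randn([4])
>>> variance = paddle.rand([4]) + 0.1
>>> loss = nn.GaussianNLLLoss(full=False, epsilon=1e-6, reduction='none')(input, label, variance)
>>> # reproduce 0.5 * (log(max(variance, epsilon)) + (input - label)^2 / max(variance, epsilon))
>>> clamped = paddle.clip(variance, min=1e-6)
>>> manual = 0.5 * (paddle.log(clamped) + (input - label) ** 2 / clamped)
>>> print(bool(paddle.allclose(manual, loss)))
True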

Parameters
  • full (bool, optional) – whether to include the constant term in the loss calculation. Default: False, meaning the constant term is omitted.

  • epsilon (float, optional) – value used to clamp variance (see note below) for numerical stability. Default: 1e-6.

  • reduction (str, optional) – specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction is applied; 'mean': the output is the average of all batch member losses; 'sum': the output is the sum of all batch member losses. Default: 'mean'. The sketch after this list illustrates each option.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
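A minimal sketch of the reduction options (note that 'mean' is simply the average of the per-element 'none' losses, and 'sum' their total):

>>> import paddle
>>> import paddle.nn as nn
>>> input = paddle.randn([5, 2])
>>> label = paddle.randn([5, 2])
>>> variance = paddle.ones([5, 2])
>>> loss_none = nn.GaussianNLLLoss(reduction='none')(input, label, variance)
>>> loss_mean = nn.GaussianNLLLoss(reduction='mean')(input, label, variance)
>>> print(loss_none.shape)  # per-element losses, same shape as input
[5, 2]
>>> # 'mean' averages the per-element losses into a scalar
>>> print(bool(paddle.allclose(loss_mean, loss_none.mean())))
True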

Shape:
  • Input(Tensor): \((N, *)\) or \((*)\) where \(*\) means any number of additional dimensions. Available dtype is float32, float64.

  • Label(Tensor): \((N, *)\) or \((*)\), same shape as the input, or same shape as the input but with one dimension equal to 1 (to allow for broadcasting). Available dtype is float32, float64.

  • Variance(Tensor): \((N, *)\) or \((*)\), same shape as the input, or same shape as the input but with one dimension equal to 1, or same shape as the input but with one fewer dimension (to allow for broadcasting); see the sketch after this list. Available dtype is float32, float64.

  • Output: scalar if reduction is 'mean' (default) or 'sum'. If reduction is 'none', then \((N, *)\), the same shape as the input.
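A minimal sketch of the accepted variance shapes, assuming the broadcasting rules stated above (all three calls succeed):

>>> import paddle
>>> import paddle.nn as nn
>>> loss_fn = nn.GaussianNLLLoss(reduction='mean')
>>> input = paddle.randn([5, 2])
>>> label = paddle.randn([5, 2])
>>> for shape in ([5, 2], [5, 1], [5]):
...     # same shape / final dimension of 1 / one fewer dimension
...     variance = paddle.ones(shape)
...     loss = loss_fn(input, label, variance)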

Returns

A callable object of GaussianNLLLoss.

Examples:
>>> import paddle
>>> import paddle.nn as nn
>>> paddle.seed(2023)

>>> input = paddle.randn([5, 2], dtype=paddle.float32)
>>> label = paddle.randn([5, 2], dtype=paddle.float32)
>>> variance = paddle.ones([5, 2], dtype=paddle.float32)

>>> gs_nll_loss = nn.GaussianNLLLoss(full=False, epsilon=1e-6, reduction='none')
>>> loss = gs_nll_loss(input, label, variance)
>>> print(loss)
Tensor(shape=[5, 2], dtype=float32, place=Place(cpu), stop_gradient=True,
[[0.21808575, 1.43013096],
 [1.05245590, 0.00394560],
 [1.20861185, 0.00000062],
 [0.56946373, 0.73300570],
 [0.37142906, 0.12038800]])

Note

The clamping of variance is ignored with respect to autograd, and so the gradients are unaffected by it.
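A minimal sketch of what this implies (the exact gradient value depends on the implementation, so only its presence is checked here): even when the raw variance falls below epsilon and is clamped in the forward pass, a gradient still reaches the variance tensor:

>>> import paddle
>>> import paddle.nn as nn
>>> variance = paddle.to_tensor([1e-8], stop_gradient=False)  # below epsilon=1e-6
>>> input = paddle.to_tensor([0.5], stop_gradient=False)
>>> label = paddle.to_tensor([0.0])
>>> loss = nn.GaussianNLLLoss(epsilon=1e-6)(input, label, variance)
>>> loss.backward()
>>> print(variance.grad is not None)  # the clamp does not block the gradient
True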

forward(input, label, variance)


Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters
  • input (Tensor) – the predicted mean of the Gaussian distribution, of shape \((N, *)\) or \((*)\).

  • label (Tensor) – the label tensor, treated as samples from the Gaussian distribution, with the same shape as input (or broadcastable to it; see Shape above).

  • variance (Tensor) – the tensor of positive variances, with the same shape as input or one of the broadcastable shapes described in Shape above.