sigmoid_cross_entropy_with_logits
paddle.fluid.layers.loss.sigmoid_cross_entropy_with_logits(x, label, ignore_index=-100, name=None, normalize=False)
SigmoidCrossEntropyWithLogits Operator.
This measures the element-wise probability error in classification tasks in which each class is independent. It can be thought of as predicting labels for a data point whose labels are not mutually exclusive. For example, a news article can be about politics, technology, or sports at the same time, or about none of them.
The logistic loss is given as follows:
\(loss = -Labels * \log(\sigma(X)) - (1 - Labels) * \log(1 - \sigma(X))\)
We know that \(\sigma(X) = \frac{1}{1 + \exp(-X)}\). By substituting this we get:
\(loss = X - X * Labels + \log(1 + \exp(-X))\)
For stability and to prevent overflow of \(\exp(-X)\) when X < 0, we reformulate the loss as follows:
\(loss = \max(X, 0) - X * Labels + \log(1 + \exp(-|X|))\)
Both the input X and Labels can carry the LoD (Level of Details) information. However, the output only shares the LoD with input X.
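To make the reformulation concrete, the following is a minimal NumPy sketch (illustrative only, not part of the Paddle API) comparing the direct form of the loss with the numerically stable one; the two agree, but only the stable form avoids overflowing exp(-X) for large negative X:

import numpy as np

def naive_loss(x, labels):
    # Direct form: x - x * labels + log(1 + exp(-x)); exp(-x) can overflow for very negative x.
    return x - x * labels + np.log1p(np.exp(-x))

def stable_loss(x, labels):
    # Reformulated: max(x, 0) - x * labels + log(1 + exp(-|x|)).
    return np.maximum(x, 0) - x * labels + np.log1p(np.exp(-np.abs(x)))

x = np.array([-3.0, -0.5, 0.0, 2.0])
labels = np.array([0.0, 1.0, 0.5, 1.0])
print(naive_loss(x, labels))   # [0.0486 0.9741 0.6931 0.1269]
print(stable_loss(x, labels))  # identical values, but safe for any magnitude of x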
Parameters
x (Tensor) – a 2-D tensor with shape N x D, where N is the batch size and D is the number of classes. This input is a tensor of logits computed by the previous operator. Logits are unscaled log probabilities given as log(p/(1-p)). The data type should be float32 or float64.
label (Tensor) – a 2-D tensor of the same type and shape as x. This input is a tensor of probabilistic labels for each logit.
ignore_index (int) – Specifies a target value that is ignored and does not contribute to the input gradient. Default is -100.
name (str|None) – The default value is None. Normally there is no need for the user to set this property. For more information, please refer to Name.
normalize (bool) – If True, divide the output by the number of targets != ignore_index; see the sketch after this list.
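The interplay of ignore_index and normalize can be illustrated with a short NumPy sketch; treating ignored positions as contributing zero loss is an assumption inferred from the descriptions above, not taken from the operator's kernel:

import numpy as np

def masked_loss(x, labels, ignore_index=-100, normalize=False):
    # Element-wise stable logistic loss, as in the formula above.
    loss = np.maximum(x, 0) - x * labels + np.log1p(np.exp(-np.abs(x)))
    # Assumed behaviour: targets equal to ignore_index contribute zero loss.
    mask = (labels != ignore_index)
    loss = loss * mask
    if normalize:
        # Divide by the number of targets != ignore_index.
        loss = loss / max(mask.sum(), 1)
    return loss

x = np.array([[1.0, -2.0], [0.5, 3.0]])
labels = np.array([[1.0, 0.0], [-100.0, 1.0]])  # one ignored target
print(masked_loss(x, labels, normalize=True))   # ignored entry is 0; the rest are divided by 3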
Returns
A 2-D Tensor (float by default) with shape N x D containing the element-wise logistic losses.
Return type
Tensor
Examples
import paddle

input = paddle.rand(shape=[10], dtype='float32')
label = paddle.rand(shape=[10], dtype='float32')
# Element-wise logistic loss; labels equal to ignore_index are skipped,
# and normalize=True divides by the number of non-ignored targets.
loss = paddle.fluid.layers.sigmoid_cross_entropy_with_logits(
    input, label, ignore_index=-1, normalize=True)
print(loss)
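Here ignore_index=-1 marks labels equal to -1 as ignored (none occur in this example, since paddle.rand draws from [0, 1)), and normalize=True divides the element-wise losses by the number of non-ignored targets.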