- paddle.nn.functional.binary_cross_entropy(input, label, weight=None, reduction='mean', name=None)
This op measures the binary_cross_entropy loss between the input predictions input and the target labels label. The binary_cross_entropy loss can be described as:
If weight is set, the loss is: \[Out = -1 * weight * (label * log(input) + (1 - label) * log(1 - input))\]
If weight is None, the loss is: \[Out = -1 * (label * log(input) + (1 - label) * log(1 - input))\]
If reduction is 'none', the interface will return the original loss Out.
If reduction is 'mean', the reduced mean loss is: \[Out = MEAN(Out)\]
If reduction is 'sum', the reduced sum loss is: \[Out = SUM(Out)\]
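The formulas above can be checked directly against the API. Below is a minimal sketch (not part of the original documentation) that computes the unweighted element-wise loss by hand and compares it with each reduction mode:

    import paddle

    input = paddle.to_tensor([0.5, 0.6, 0.7], 'float32')
    label = paddle.to_tensor([1.0, 0.0, 1.0], 'float32')

    # Element-wise loss, written out from the unweighted formula above.
    manual = -1 * (label * paddle.log(input) + (1 - label) * paddle.log(1 - input))

    # The same loss via the API under each reduction mode.
    none_out = paddle.nn.functional.binary_cross_entropy(input, label, reduction='none')
    mean_out = paddle.nn.functional.binary_cross_entropy(input, label, reduction='mean')
    sum_out = paddle.nn.functional.binary_cross_entropy(input, label, reduction='sum')

    print(none_out)                       # matches `manual` element-wise
    print(mean_out, paddle.mean(manual))  # both ~0.65537095
    print(sum_out, paddle.sum(manual))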
Note that the input predictions input should always be the output of sigmoid, and the target labels label should be numbers between 0 and 1.
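In practice this means raw model scores (logits) must be passed through sigmoid before calling this op. A minimal sketch, assuming some hypothetical logit values:

    import paddle

    logits = paddle.to_tensor([0.0, 0.4, 0.8], 'float32')  # hypothetical raw model outputs
    probs = paddle.nn.functional.sigmoid(logits)           # squash into (0, 1) for BCE
    label = paddle.to_tensor([1.0, 0.0, 1.0], 'float32')
    loss = paddle.nn.functional.binary_cross_entropy(probs, label)
    print(loss)

When starting from raw logits, paddle.nn.functional.binary_cross_entropy_with_logits fuses the sigmoid into the loss and is generally more numerically stable.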
- Parameters
input (Tensor) – The input predictions tensor. 2-D tensor with shape: [N, *], N is batch_size, * means number of additional dimensions. The input should always be the output of sigmoid. Available dtype is float32, float64.
label (Tensor) – The target labels tensor. 2-D tensor with the same shape as input. The target labels, whose values should be numbers between 0 and 1. Available dtype is float32, float64.
weight (Tensor, optional) – A manual rescaling weight given to the loss of each batch element. If given, it has to be a Tensor of size nbatch with data type float32 or float64. Default is None. See the weighted example after this list.
reduction (str, optional) – Indicate how to average the loss by batch_size; the candidates are 'none' | 'mean' | 'sum'. If reduction is 'none', the unreduced loss is returned; if reduction is 'mean', the reduced mean loss is returned; if reduction is 'sum', the summed loss is returned. Default is 'mean'.
name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.
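The weighted example referenced above: a minimal sketch using an arbitrary (hypothetical) per-element weight tensor. With the default reduction='mean', the result is the mean of the element-wise weighted losses:

    import paddle

    input = paddle.to_tensor([0.5, 0.6, 0.7], 'float32')
    label = paddle.to_tensor([1.0, 0.0, 1.0], 'float32')
    weight = paddle.to_tensor([1.0, 2.0, 0.5], 'float32')  # hypothetical rescaling weights
    output = paddle.nn.functional.binary_cross_entropy(input, label, weight=weight)
    print(output)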
- Returns
output (Tensor) – If reduction is 'none', the shape of the output is the same as input; otherwise the shape of the output is scalar.
- Return type
Tensor
- Examples
    import paddle

    input = paddle.to_tensor([0.5, 0.6, 0.7], 'float32')
    label = paddle.to_tensor([1.0, 0.0, 1.0], 'float32')
    output = paddle.nn.functional.binary_cross_entropy(input, label)
    print(output)  # [0.65537095]
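The same loss is also available in class form as paddle.nn.BCELoss, which takes the same weight and reduction arguments at construction time; a minimal equivalent sketch:

    import paddle

    bce = paddle.nn.BCELoss()  # defaults: weight=None, reduction='mean'
    input = paddle.to_tensor([0.5, 0.6, 0.7], 'float32')
    label = paddle.to_tensor([1.0, 0.0, 1.0], 'float32')
    print(bce(input, label))   # [0.65537095]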