# fluid.backward

## append_backward

Append the backward part to `main_program`.

A complete neural network training process is made up of forward and backward propagation. However, when configuring a network, you only need to specify its forward part. This function generates the backward part automatically from the forward part.

In most cases, users do not need to invoke this function manually; it is invoked automatically by the optimizer's `minimize` method.
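To illustrate what "generating the backward part from the forward part" means, here is a minimal sketch (not PaddlePaddle code, all names hypothetical) of reverse-mode autodiff: the forward pass records each operation's local gradients, and the backward pass is then derived mechanically by walking that record in reverse.

```python
# Toy reverse-mode autodiff: the forward pass records, per operation,
# the parent variables and the local gradients w.r.t. each parent.
class Var:
    def __init__(self, value, parents=()):
        self.value = value      # forward result
        self.grad = 0.0         # gradient accumulated by the backward pass
        self.parents = parents  # pairs of (parent Var, local gradient)

def mul(a, b):
    # forward: d(a*b)/da = b.value, d(a*b)/db = a.value
    return Var(a.value * b.value, parents=((a, b.value), (b, a.value)))

def add(a, b):
    # forward: d(a+b)/da = 1, d(a+b)/db = 1
    return Var(a.value + b.value, parents=((a, 1.0), (b, 1.0)))

def append_backward_demo(loss):
    # Walk the recorded graph in reverse, chaining local gradients --
    # the analogue of appending a backward part to the forward program.
    # (A real implementation would process nodes in topological order.)
    loss.grad = 1.0
    stack = [loss]
    while stack:
        node = stack.pop()
        for parent, local_grad in node.parents:
            parent.grad += node.grad * local_grad
            stack.append(parent)

x = Var(3.0)
w = Var(2.0)
loss = add(mul(x, w), w)   # loss = x*w + w
append_backward_demo(loss)
print(x.grad, w.grad)      # d(loss)/dx = w = 2.0, d(loss)/dw = x + 1 = 4.0
```

The backward pass is never written by hand: it is fully determined by the operations recorded during the forward pass, which is exactly why `append_backward` can be applied automatically.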

Examples

```python
import paddle.fluid as fluid

# network configuration code
# ...
avg_loss = fluid.layers.mean(loss)
param_grads = fluid.backward.append_backward(loss=avg_loss)
```