Back Propagation
The ability of a neural network to define a model depends on its optimization algorithm. Optimization is the process of repeatedly computing gradients and adjusting the learnable parameters. You can refer to Optimizer to learn more about the optimization algorithms in Fluid.
During network training, gradient computation is divided into two steps: forward computation and back propagation.
Forward computation propagates the states of the input units to the output units according to the network structure you build.
Back propagation computes the derivatives of composite functions by means of the chain rule: the gradients of the output units are propagated back to the input units, and the learnable parameters of the network are adjusted according to the computed gradients.
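For example (a generic chain-rule identity, not specific to Fluid), for a single layer with output y = f(Wx + b) and loss L, the gradients propagated back to the parameters and to the input are:

$$
\frac{\partial L}{\partial W} = \frac{\partial L}{\partial y}\,\frac{\partial y}{\partial W},
\qquad
\frac{\partial L}{\partial x} = \frac{\partial L}{\partial y}\,\frac{\partial y}{\partial x}
$$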
You can refer to back propagation algorithm for the detailed implementation process.
We do not recommend calling backpropagation-related APIs in Fluid directly, as they are very low-level APIs. Consider using the relevant APIs in Optimizer instead. When you use the optimizer APIs, Fluid automatically computes the complex back propagation for you.
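As a minimal sketch, assuming a simple fully connected regression network built with the Fluid layers API, the following shows how an Optimizer call such as `minimize` lets Fluid append the backward pass for you:

```python
import paddle.fluid as fluid

# A simple forward network: one fully connected layer with a squared-error loss.
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
pred = fluid.layers.fc(input=x, size=1)
loss = fluid.layers.square_error_cost(input=pred, label=y)
avg_loss = fluid.layers.mean(loss)

# minimize() appends both the back-propagation (gradient) operators and the
# parameter-update operators to the program, so no backward API is called directly.
sgd = fluid.optimizer.SGD(learning_rate=0.01)
sgd.minimize(avg_loss)
```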
If you want to implement it yourself, you can also use the callback in api_fluid_backward_append_backward to define a customized gradient form for an Operator. For more information, please refer to api_fluid_backward_append_backward.
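As a rough sketch, assuming the `callbacks` argument of `append_backward` accepts a list of callables invoked with a `(block, context)` signature (the exact parameter name and signature are assumptions based on the referenced API, not verified here):

```python
import paddle.fluid as fluid

def grad_callback(block, context):
    # Hypothetical callback body: append_backward would invoke it while gradient
    # operators are being added to `block`, which is where a customized gradient
    # form could be installed. The (block, context) signature is an assumption.
    pass

x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
pred = fluid.layers.fc(input=x, size=1)
avg_loss = fluid.layers.mean(fluid.layers.square_error_cost(input=pred, label=y))

# Manually append the backward (gradient) operators instead of using an optimizer.
params_grads = fluid.backward.append_backward(avg_loss, callbacks=[grad_callback])
```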