backward

paddle.Tensor.backward(self, grad_tensor=None, retain_graph=False)

Run backward of the current graph, which starts from the current Tensor.

The new gradient will accumulate on top of the previous gradient.

You can clear the gradient by calling Tensor.clear_grad().

Parameters
  • grad_tensor (Tensor, optional) – initial gradient values of the current Tensor. If grad_tensor is None, the initial gradient values of the current Tensor will be a Tensor filled with 1.0; if grad_tensor is not None, it must have the same length as the current Tensor. The default value is None.

  • retain_graph (bool, optional) – If False, the graph used to compute grads will be freed after backward runs. If you would like to add more ops to the built graph after calling this method (backward), set retain_graph to True; the graph will then be retained. Setting it to False is much more memory-efficient. Defaults to False. A sketch of this usage follows the examples below.

Returns

None

Return type

NoneType

Examples

import paddle

x = paddle.to_tensor(5., stop_gradient=False)
for i in range(5):
    y = paddle.pow(x, 4.0)
    y.backward()  # dy/dx = 4 * x**3 = 500; gradients accumulate across calls
    print("{}: {}".format(i, x.grad))
# 0: [500.]
# 1: [1000.]
# 2: [1500.]
# 3: [2000.]
# 4: [2500.]

x.clear_grad()  # reset the accumulated gradient to zero
print("{}".format(x.grad))
# 0.

grad_tensor = paddle.to_tensor(2.)  # initial gradient fed into backward
for i in range(5):
    y = paddle.pow(x, 4.0)
    y.backward(grad_tensor)  # 2 * 500 = 1000 accumulated per call
    print("{}: {}".format(i, x.grad))
    print("{}: {}".format(i, x.grad))
# 0: [1000.]
# 1: [2000.]
# 2: [3000.]
# 3: [4000.]
# 4: [5000.]
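
The loops above rebuild the graph on every iteration, so retain_graph is never needed there. The following is a minimal sketch of the retain_graph usage described in the Parameters section, assuming the documented behavior that the graph is freed after backward unless retained; the gradient values in the comments follow from dy/dx = 2x and dz/dx = 2x + 1 with accumulation.

import paddle

x = paddle.to_tensor(2., stop_gradient=False)
y = x * x
# Keep the graph from x to y alive so more ops can be built on it
# and backward can traverse it again.
y.backward(retain_graph=True)
# x.grad is now dy/dx = 2 * x = 4.

z = y + x     # add more ops on top of the retained graph
z.backward()  # dz/dx = 2 * x + 1 = 5, accumulated onto x.grad -> 9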