all_reduce

paddle.distributed.collective.all_reduce(tensor, op=ReduceOp.SUM, group=0)

Reduce a tensor across all ranks so that every rank receives the reduced result.

Parameters
  • tensor (Tensor) – The input Tensor. It also works as the output Tensor. Its data type should be float16, float32, float64, int32 or int64.

  • op (ReduceOp.SUM|ReduceOp.MAX|ReduceOp.MIN|ReduceOp.PROD) – Optional. The reduce operation to apply. Default: ReduceOp.SUM.

  • group (int) – Optional. The id of the process group to work on. Default: 0.

Returns

None.

Examples

# Run with two (or more) processes, e.g. via
# `python -m paddle.distributed.launch`.
import numpy as np
import paddle
from paddle.distributed import ReduceOp
from paddle.distributed import init_parallel_env

# Bind each process to its own GPU and set up the parallel environment.
paddle.set_device('gpu:%d' % paddle.distributed.ParallelEnv().dev_id)
init_parallel_env()

# Each rank starts with different data.
if paddle.distributed.ParallelEnv().local_rank == 0:
    np_data = np.array([[4, 5, 6], [4, 5, 6]])
else:
    np_data = np.array([[1, 2, 3], [1, 2, 3]])
data = paddle.to_tensor(np_data)

# Sum the tensors element-wise across all ranks; every rank
# receives the same reduced result in-place.
paddle.distributed.all_reduce(data, op=ReduceOp.SUM)
out = data.numpy()
# [[5, 7, 9], [5, 7, 9]]
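
To apply a reduction other than the default sum, pass a different ReduceOp. A minimal sketch, assuming the same two-process setup as the example above:

# Element-wise maximum across ranks instead of the default SUM.
import numpy as np
import paddle
from paddle.distributed import ReduceOp
from paddle.distributed import init_parallel_env

paddle.set_device('gpu:%d' % paddle.distributed.ParallelEnv().dev_id)
init_parallel_env()
if paddle.distributed.ParallelEnv().local_rank == 0:
    data = paddle.to_tensor(np.array([[4, 5, 6], [4, 5, 6]]))
else:
    data = paddle.to_tensor(np.array([[1, 2, 3], [1, 2, 3]]))

# Every rank ends up with the element-wise maximum:
# [[4, 5, 6], [4, 5, 6]]
paddle.distributed.all_reduce(data, op=ReduceOp.MAX)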