all_gather

paddle.distributed.all_gather(tensor_list, tensor, group=None, sync_op=True) [source]

Gathers tensors from every participating process and delivers the full result to all of them. As shown below, each process runs on one GPU, and each process's data is represented here by its rank within the group. After the all_gather operation, every GPU holds the data from all GPUs.

[Figure: all_gather — every rank contributes its tensor and receives the tensors from all ranks.]
Parameters
  • tensor_list (list) – A list of output Tensors. Every element in the list must be a Tensor whose data type should be float16, float32, float64, int32, int64, int8, uint8, bool, bfloat16, complex64 or complex128.

  • tensor (Tensor) – The Tensor to send. Its data type should be float16, float32, float64, int32, int64, int8, uint8, bool, bfloat16, complex64 or complex128.

  • group (Group, optional) – The group instance returned by new_group, or None for the global default group (see the sketch after this list).

  • sync_op (bool, optional) – Whether this operation is synchronous. The default value is True.
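
A minimal sketch of gathering within an explicit group rather than the global default; the two-rank group and the single-element tensors below are assumptions for illustration, and the call keeps the default synchronous behavior:

>>> import paddle
>>> import paddle.distributed as dist

>>> dist.init_parallel_env()
>>> # Illustrative two-rank group; adjust the ranks to match your actual launch setup.
>>> group = dist.new_group(ranks=[0, 1])
>>> tensor_list = []
>>> data = paddle.to_tensor([dist.get_rank()])
>>> dist.all_gather(tensor_list, data, group=group)
>>> print(tensor_list)
>>> # [[0], [1]] (2 GPUs)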

Returns

None.

Examples

>>> import paddle
>>> import paddle.distributed as dist

>>> dist.init_parallel_env()
>>> tensor_list = []
>>> if dist.get_rank() == 0:
...     data = paddle.to_tensor([[4, 5, 6], [4, 5, 6]])
... else:
...     data = paddle.to_tensor([[1, 2, 3], [1, 2, 3]])
>>> dist.all_gather(tensor_list, data)
>>> print(tensor_list)
>>> # [[[4, 5, 6], [4, 5, 6]], [[1, 2, 3], [1, 2, 3]]] (2 GPUs)
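
The gathered tensors appear in tensor_list in rank order, so a common follow-up is concatenating them into a single tensor; a minimal sketch continuing from the example above:

>>> # Stitch the per-rank pieces back together along the batch axis.
>>> full = paddle.concat(tensor_list, axis=0)
>>> print(full.shape)
>>> # [4, 3] (2 GPUs)

Multi-process scripts like these are typically started with the launch utility, e.g. python -m paddle.distributed.launch --gpus=0,1 train.py, where train.py is a placeholder script name.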