alltoall_single

paddle.distributed.communication.stream.alltoall_single(out_tensor, in_tensor, out_split_sizes=None, in_split_sizes=None, group=None, sync_op=True, use_calc_stream=False) [source]

Split the input tensor and scatter the split pieces to the output tensor across devices.
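Conceptually, each rank splits its in_tensor along dim 0 into one piece per rank, sends the i-th piece to rank i, and concatenates the pieces it receives, in rank order, into out_tensor. The following plain-Python sketch (hypothetical helper name, lists standing in for tensors; not part of the API) illustrates this data movement on a single process:

>>> def alltoall_single_reference(inputs, in_split_sizes=None):
...     # inputs[i] holds the rows owned by rank i; returns the output rows of every rank.
...     world_size = len(inputs)
...     if in_split_sizes is None:
...         # Even split: dim 0 must be divisible by the group size.
...         in_split_sizes = [[len(rows) // world_size] * world_size for rows in inputs]
...     chunks = []
...     for rank, rows in enumerate(inputs):
...         offsets = [sum(in_split_sizes[rank][:j]) for j in range(world_size + 1)]
...         chunks.append([rows[offsets[j]:offsets[j + 1]] for j in range(world_size)])
...     # Rank j's output is chunk j from rank 0, then chunk j from rank 1, and so on.
...     return [sum((chunks[i][j] for i in range(world_size)), []) for j in range(world_size)]
>>> print(alltoall_single_reference([[0, 1], [2, 3]]))
[[0, 2], [1, 3]]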

Parameters
  • out_tensor (Tensor) – The output tensor. Its data type should be the same as the input.

  • in_tensor (Tensor) – The input tensor. Its data type should be float16, float32, float64, int32, int64, int8, uint8 or bool.

  • out_split_sizes (List[int], optional) – Split sizes of out_tensor along dim 0. If not given, dim 0 of out_tensor must be divisible by the group size and out_tensor will be gathered evenly from all participants. Defaults to an empty list.

  • in_split_sizes (List[int], optional) – Split sizes of in_tensor along dim 0. If not given, dim 0 of in_tensor must be divisible by the group size and in_tensor will be scattered evenly to all participants. Defaults to an empty list.

  • group (Group, optional) – The group to communicate in. If none is given, the global group is used by default.

  • sync_op (bool, optional) – Indicate whether the communication is synchronous or not. If none is given, True is used by default.

  • use_calc_stream (bool, optional) – Indicate whether the communication is done on the calculation stream. If none is given, False is used by default. This option is intended for high-performance scenarios; do not turn it on unless you fully understand what it does.

Returns

Return a task object. If sync_op is set to False, call wait() on the returned task before reading out_tensor, as the examples below do.

Warning

This API only supports the dygraph mode now.

Examples

>>> import paddle
>>> import paddle.distributed as dist

>>> dist.init_parallel_env()
>>> local_rank = dist.get_rank()

>>> # case 1
>>> output = paddle.empty([2], dtype="int64")
>>> if local_rank == 0:
...     data = paddle.to_tensor([0, 1])
... else:
...     data = paddle.to_tensor([2, 3])
>>> task = dist.stream.alltoall_single(output, data, sync_op=False)
>>> task.wait()
>>> out = output.numpy()
>>> print(out)
>>> # [0, 2] (2 GPUs, out for rank 0)
>>> # [1, 3] (2 GPUs, out for rank 1)

>>> # case 2
>>> size = dist.get_world_size()
>>> output = paddle.empty([(local_rank + 1) * size, size], dtype='float32')
>>> if local_rank == 0:
...     data = paddle.to_tensor([[0., 0.], [0., 0.], [0., 0.]])
... else:
...     data = paddle.to_tensor([[1., 1.], [1., 1.], [1., 1.]])
>>> out_split_sizes = [local_rank + 1 for i in range(size)]
>>> in_split_sizes = [i + 1 for i in range(size)]
>>> task = dist.stream.alltoall_single(output,
...                                     data,
...                                     out_split_sizes,
...                                     in_split_sizes,
...                                     sync_op=False)
>>> task.wait()
>>> out = output.numpy()
>>> print(out)
>>> # [[0., 0.], [1., 1.]]                     (2 GPUs, out for rank 0)
>>> # [[0., 0.], [0., 0.], [1., 1.], [1., 1.]] (2 GPUs, out for rank 1)
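In case 2, every rank's in_split_sizes is [1, 2], so each rank sends one row to rank 0 and two rows to rank 1. Rank 0's out_split_sizes is [1, 1] and rank 1's is [2, 2], so rank 0 gathers one row from each rank while rank 1 gathers two, which is why their outputs have 2 and 4 rows respectively. In general, rank i's in_split_sizes[j] has to match rank j's out_split_sizes[i] for the exchange to be consistent.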