reshard

paddle.distributed.reshard(dist_tensor, mesh, placements) [source]

Reshard a distributed paddle.Tensor according to the given process mesh and placements.

Parameters
  • dist_tensor (Tensor) – The distributed tensor to be resharded.

  • mesh (paddle.distributed.ProcessMesh) – The ProcessMesh object describing the Cartesian topology of the processes to use.

  • placements (list[paddle.distributed.Placement]) – The placements describing how the tensor is placed on the ProcessMesh; each placement can be Shard, Replicate, or Partial (see the sketch after this list).
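
The three placement types differ in how each rank stores the data: Shard(dim) splits the tensor along the given tensor dimension across the mesh axis, Replicate keeps a full copy on every rank, and Partial means each rank holds an unreduced partial value. A minimal sketch of building such a list (the shard dimension chosen here is illustrative):

>>> import paddle.distributed as dist

>>> # Split the tensor's dim 0 across the mesh axis; alternatives would be
>>> # dist.Replicate() (full copy per rank) or dist.Partial() (partial values to be reduced).
>>> placements = [dist.Shard(0)]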

Returns

A distributed Tensor resharded with the given distributed attributes.

Return type

Tensor

Examples

>>> import paddle
>>> import paddle.distributed as dist

>>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])

>>> # dense tensor
>>> a = paddle.ones([10, 20])

>>> # distributed tensor
>>> d_tensor = dist.shard_tensor(a, mesh, [dist.Partial()])

>>> out_d_tensor = dist.reshard(d_tensor, mesh, [dist.Replicate()])

>>> print(out_d_tensor)
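
As a further sketch (the mesh, tensor shape, and a two-process launch are illustrative assumptions), reshard can also re-split a replicated tensor along a tensor dimension, leaving each rank with only its local slice:

>>> import paddle
>>> import paddle.distributed as dist

>>> mesh = dist.ProcessMesh([0, 1], dim_names=["x"])
>>> r_tensor = dist.shard_tensor(paddle.ones([10, 20]), mesh, [dist.Replicate()])

>>> # Re-split the replicated tensor along dim 0 across the "x" mesh axis;
>>> # each of the two ranks then holds a [5, 20] local shard of the [10, 20] global tensor.
>>> s_tensor = dist.reshard(r_tensor, mesh, [dist.Shard(0)])
>>> print(s_tensor)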