ipu_shard_guard

paddle.static.ipu_shard_guard ( index=None, stage=None ) [source]

Used to shard the graph across IPUs. Within the guard, each created Op is assigned the IPU it runs on (sharding) and the pipeline stage it executes in (pipelining).

Parameters
  • index (int, optional) – Specify which IPU the Ops created in this context are computed on (e.g. 0, 1, 2 or 3). The default value is None, which means the Ops run only on IPU 0.

  • stage (int, optional) – Specify the computation order of the sharded model (e.g. 0, 1, 2 or 3). Stages are computed in ascending order. The default value is None, which means there is no pipelined computation order and Ops run in graph order.

Note: ‘index’ may be set to a non-None value only when enable_manual_shard=True, and ‘stage’ may be set to a non-None value only when enable_pipelining=True; please refer to paddle.static.IpuStrategy for both options. An index may be matched with no stage or with one stage, while a stage may only be matched with a new index or with a duplicated index.

Examples

# required: ipu

import paddle
paddle.enable_static()
a = paddle.static.data(name='data', shape=[None, 1], dtype='int32')
with paddle.static.ipu_shard_guard(index=0, stage=0):
    b = a + 1  # runs on IPU 0 in pipeline stage 0
with paddle.static.ipu_shard_guard(index=1, stage=1):
    c = b + 1  # runs on IPU 1 in pipeline stage 1
with paddle.static.ipu_shard_guard(index=0, stage=2):
    d = c + 1  # runs on IPU 0 again, in a later stage (2)