set_ipu_shard

paddle.static.set_ipu_shard(call_func, index=-1, stage=-1) [source]

Shard the model with the given call function: every op created inside call_func is assigned to the given IPU index and pipeline stage.

Note

The index can be set to a value other than -1 only when enable_manual_shard=True, and the stage can be set to a value other than -1 only when enable_pipelining=True. Please refer to IpuStrategy for both options. An index may be paired with a stage or with no stage (None), while a stage may either reuse an existing index or introduce a new one.

Parameters
  • call_func (Layer|function) – The function or paddle.nn.Layer to be wrapped.

  • index (int, optional) – Specify which IPU the wrapped ops are computed on (e.g. 0, 1, 2, 3). The default value is -1, which means the ops run only on IPU 0.

  • stage (int, optional) – Specify the computation order of the sharded model (e.g. 0, 1, 2, 3). Stages are computed in ascending order. The default value is -1, which means no pipelining order is set and ops run in graph order.

Returns

The wrapped call function.

Examples

>>> import paddle
>>> paddle.device.set_device('ipu')
>>> paddle.enable_static()
>>> a = paddle.static.data(name='data', shape=[None, 1], dtype='float32')
>>> relu = paddle.nn.ReLU()
>>> relu = paddle.static.set_ipu_shard(relu, index=1, stage=1)
>>> relu(a)