paddle.static.set_ipu_shard(call_func, index=-1, stage=-1) [source]
Shard the model with the given call function. Every op created in the call function is assigned to the given IPU shard.
The index can be set to a value other than -1 only when enable_manual_shard=True; please refer to IpuStrategy. The stage can be set to a value other than -1 only when enable_pipelining=True; please refer to IpuStrategy. An index may be paired with no stage or with one stage, while a stage may be paired with either a new index or a previously used index.
call_func (Layer|function) – Specify the call function to be wrapped.
index (int, optional) – Specify which IPU the ops are computed on (e.g. 0, 1, 2 or 3). The default value is -1, which means the ops run only on IPU 0.
stage (int, optional) – Specify the computation order of the sharded model (e.g. 0, 1, 2 or 3). Stages are computed in ascending order. The default value is -1, which means no pipelining order is set and ops run in graph order.
Returns: The wrapped call function.
# required: ipu
import paddle

paddle.enable_static()
a = paddle.static.data(name='data', shape=[None, 1], dtype='float32')
relu = paddle.nn.ReLU()
relu = paddle.static.set_ipu_shard(relu, index=1, stage=1)
relu(a)
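To illustrate the wrapping pattern behind this API without IPU hardware, here is a minimal, library-free Python sketch. The helper name `set_shard` and the attribute names `ipu_index`/`ipu_stage` are hypothetical, not part of Paddle: the wrapper simply records the shard index and pipeline stage on the callable, analogous to how set_ipu_shard attaches them to every op the call function creates.

```python
import functools

def set_shard(call_func, index=-1, stage=-1):
    # Hypothetical stand-in for paddle.static.set_ipu_shard:
    # wrap the callable and record the device index and the
    # pipeline stage alongside it.
    @functools.wraps(call_func)
    def wrapper(*args, **kwargs):
        return call_func(*args, **kwargs)
    wrapper.ipu_index = index   # which IPU runs the ops; -1 = IPU 0 only
    wrapper.ipu_stage = stage   # pipeline order; -1 = plain graph order
    return wrapper

# Wrap a toy activation the same way the real API wraps a Layer.
relu = set_shard(lambda x: max(x, 0.0), index=1, stage=1)
print(relu(-2.0), relu.ipu_index, relu.ipu_stage)  # → 0.0 1 1
```

The real API additionally rewrites the op attributes inside the static graph; this sketch only shows the decorator-style usage pattern that the documented example follows.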