IpuStrategy

class paddle.static.IpuStrategy

Helps users precisely control graph building in paddle.static.IpuCompiledProgram.

Returns

The IpuStrategy instance.

Examples

# required: ipu

import paddle
import paddle.static as static

paddle.enable_static()

ipu_strategy = static.IpuStrategy()
set_graph_config(num_ipus=1, is_training=True, micro_batch_size=1, enable_manual_shard=False)

set_graph_config

Set graph configuration to the IpuStrategy instance.

Parameters
  • num_ipus (int, optional) – Number of IPU devices. Default 1, which means only one IPU is used.

  • is_training (bool, optional) – Whether to build a training graph (True) or an inference graph (False). Default True, which means training mode.

  • micro_batch_size (int, optional) – The micro batch size in the graph. Used to make the graph batch size fixed when the batch size in the graph is dynamic. Default 1, which means a dynamic batch size is set to 1.

  • enable_manual_shard (bool, optional) – Whether to enable graph sharding. Can only be set to True when num_ipus > 1. Default False, which means sharding is disabled.

Returns

None.

Examples

# required: ipu

import paddle
import paddle.static as static

paddle.enable_static()

ipu_strategy = static.IpuStrategy()
ipu_strategy.set_graph_config(num_ipus=1,
                              is_training=True,
                              micro_batch_size=1,
                              enable_manual_shard=False)
set_pipelining_config(enable_pipelining=False, batches_per_step=1, enable_gradient_accumulation=False, accumulation_factor=1)

set_pipelining_config

Set the pipelining configuration of the IpuStrategy instance. Used to optimize throughput.

Parameters
  • enable_pipelining (bool, optional) – Enable data pipelining between subgraphs. Can only be set to True when enable_manual_shard=True. Default False, which means pipelining is disabled.

  • batches_per_step (int, optional) – Set the number of batches per run in data pipelining mode. Can only be set greater than 1 when enable_pipelining=True. Default 1, which means no data pipelining.

  • enable_gradient_accumulation (bool, optional) – Accumulate gradients before updating the weights in training mode. Can only be set to True when enable_pipelining=True. Default False, which means no gradient accumulation.

  • accumulation_factor (int, optional) – Specify the number of micro-batches to accumulate before applying the weight update (varUpdate). Default 1, which means accumulation is disabled.

Returns

None.

Examples

# required: ipu

import paddle
import paddle.static as static

paddle.enable_static()

ipu_strategy = static.IpuStrategy()
ipu_strategy.set_pipelining_config(enable_pipelining=False,
                                   batches_per_step=1,
                                   enable_gradient_accumulation=False,
                                   accumulation_factor=1)
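The pipelining options form a dependency chain: sharding across multiple IPUs must be enabled before pipelining, and pipelining before multi-batch stepping or gradient accumulation. As a rough pure-Python illustration of the preconditions documented above (the check_pipelining_options helper is hypothetical, not part of the Paddle API), the rules can be sketched as:

```python
def check_pipelining_options(num_ipus=1, enable_manual_shard=False,
                             enable_pipelining=False, batches_per_step=1,
                             enable_gradient_accumulation=False):
    """Validate the documented IpuStrategy option constraints (illustrative only)."""
    if enable_manual_shard and num_ipus <= 1:
        raise ValueError("enable_manual_shard requires num_ipus > 1")
    if enable_pipelining and not enable_manual_shard:
        raise ValueError("enable_pipelining requires enable_manual_shard=True")
    if batches_per_step > 1 and not enable_pipelining:
        raise ValueError("batches_per_step > 1 requires enable_pipelining=True")
    if enable_gradient_accumulation and not enable_pipelining:
        raise ValueError("enable_gradient_accumulation requires enable_pipelining=True")

# A consistent configuration: two IPUs, sharded and pipelined.
check_pipelining_options(num_ipus=2, enable_manual_shard=True,
                         enable_pipelining=True, batches_per_step=4,
                         enable_gradient_accumulation=True)
```

Passing an inconsistent combination, such as enable_pipelining=True without enable_manual_shard=True, would be rejected by the real API for the same reason the sketch raises ValueError.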
set_precision_config(enable_fp16=False)

set_precision_config

Set the half-precision computation configuration of the IpuStrategy instance. Used to optimize performance.

Parameters

enable_fp16 (bool, optional) – Enable FLOAT16 mode, transforming FLOAT32 computation to FLOAT16. Default False, which means FLOAT16 mode is disabled.

Returns

None.

Examples

# required: ipu

import paddle
import paddle.static as static

paddle.enable_static()

ipu_strategy = static.IpuStrategy()
ipu_strategy.set_precision_config(enable_fp16=False)
add_custom_op(paddle_op, popart_op=None, domain='custom.ops', version=1)

add_custom_op

Add a mapping to run PopART custom ops on the IPU.

Parameters
  • paddle_op (str) – The name of the custom op in Paddle.

  • popart_op (str) – The name of the custom op in PopART.

  • domain (str) – The domain name of the custom op in PopART.

  • version (int) – The version of the custom op in PopART.

Returns

None.

Examples

# required: ipu

import paddle
import paddle.static as static

paddle.enable_static()

ipu_strategy = static.IpuStrategy()
ipu_strategy.add_custom_op('paddle_relu', 'popart_relu')
set_options(options)

set_options

Set options from a dict.

Parameters

options (dict) – A dict of option names and values.

Returns

None.

Examples

# required: ipu

import paddle
import paddle.static as static

paddle.enable_static()

ipu_strategy = static.IpuStrategy()
options = {'num_ipus':1, 'enable_fp16': True}
ipu_strategy.set_options(options)
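As the example above suggests, the keys accepted by set_options match the parameter names used by the individual set_*_config methods (for instance, num_ipus from set_graph_config and enable_fp16 from set_precision_config), so related settings can be assembled in one dict before applying them. A minimal pure-Python sketch (the paddle call itself is shown as a comment, since it needs IPU hardware):

```python
# Collect settings under the parameter names documented above.
graph_options = {'num_ipus': 2, 'is_training': True, 'enable_manual_shard': True}
precision_options = {'enable_fp16': True}

# Merge into a single dict suitable for ipu_strategy.set_options(options).
options = {**graph_options, **precision_options}

# On an IPU machine this would then be applied and read back with:
#   ipu_strategy.set_options(options)
#   num_ipus = ipu_strategy.get_option('num_ipus')
```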
get_option(option)

get_option

Get the value of an option.

Parameters

option (str) – The name of the option.

Returns

The value of the option.

Examples

# required: ipu

import paddle
import paddle.static as static

paddle.enable_static()

ipu_strategy = static.IpuStrategy()
num_ipus = ipu_strategy.get_option('num_ipus')
enable_pattern(pattern)

enable_pattern

Enable a PopART pattern to optimize the graph.

Parameters

pattern (str) – The name of the pattern.

Returns

None.

Examples

# required: ipu

import paddle
import paddle.static as static

paddle.enable_static()

ipu_strategy = static.IpuStrategy()
ipu_strategy.enable_pattern("ViewSimplifyPattern")
disable_pattern(pattern)

disable_pattern

Disable a PopART pattern.

Parameters

pattern (str) – The name of the pattern.

Returns

None.

Examples

# required: ipu

import paddle
import paddle.static as static

paddle.enable_static()

ipu_strategy = static.IpuStrategy()
ipu_strategy.disable_pattern("ViewSimplifyPattern")
property num_ipus

Get the number of IPU devices from the IpuStrategy instance.

property is_training

Get whether the graph is built for training (True) or inference (False) from the IpuStrategy instance.

property enable_pipelining

Get whether pipelining is enabled from the IpuStrategy instance.

property enable_fp16

Get whether FLOAT16 mode is enabled from the IpuStrategy instance.