Fleet

class paddle.distributed.fleet.Fleet [source]

Unified API for distributed training of PaddlePaddle. Please refer to https://github.com/PaddlePaddle/PaddleFleetX for details.

Returns

A Fleet instance

Return type

Fleet

Examples

>>> # Example1: for collective training
>>> import paddle
>>> paddle.enable_static()
>>> import paddle.distributed.fleet as fleet

>>> fleet.init(is_collective=True)

>>> strategy = fleet.DistributedStrategy()
>>> linear = paddle.nn.Linear(10, 10)
>>> optimizer = paddle.optimizer.SGD(learning_rate=0.001, parameters=linear.parameters())
>>> optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)

>>> # do distributed training
>>> # Example2: for parameter server training
>>> import paddle
>>> paddle.enable_static()
>>> import paddle.distributed.fleet as fleet
>>> strategy = fleet.DistributedStrategy()
>>> fleet.init(strategy=strategy)

>>> optimizer = paddle.optimizer.SGD(learning_rate=0.001)
>>> optimizer = fleet.distributed_optimizer(optimizer)

>>> if fleet.is_first_worker():
...     print("this is first worker")

>>> print("current node index: {}".format(fleet.worker_index()))
>>> print("total number of worker num: {}".format(fleet.worker_num()))

>>> if fleet.is_worker():
...     print("this is worker")
>>> print("worker endpoints: {}".format(fleet.worker_endpoints(to_string=True)))

>>> print("server num: {}".format(fleet.server_num()))
>>> print("server endpoints: {}".format(fleet.server_endpoints(to_string=True)))

>>> if fleet.is_server():
...     print("this is server")
>>> fleet.stop_worker()
init(role_maker=None, is_collective=False, strategy=None, log_level='INFO') [source]

init

Initialize role_maker in Fleet.

This function sets up the distributed architecture you want to run your code on.

Parameters
  • role_maker (RoleMakerBase, optional) – A RoleMakerBase containing the configuration of environment variables related to distributed training. If you do not initialize the role maker yourself, it will be automatically initialized to PaddleCloudRoleMaker. The default value is None.

  • is_collective (Boolean, optional) – A Boolean variable that determines whether the program runs in Collective mode or ParameterServer mode. True means Collective mode; False means ParameterServer mode. The default value is False.

  • strategy (DistributedStrategy, optional) – Extra properties for distributed training. For details, please refer to paddle.distributed.fleet.DistributedStrategy. Default: None.

  • log_level (Integer|String, optional) – An integer or string determining the logging level. Default is "INFO".

Returns

None

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()
>>> import paddle.distributed.fleet as fleet
>>> fleet.init(is_collective=True)
>>> import paddle.distributed.fleet as fleet
>>> role = fleet.PaddleCloudRoleMaker()
>>> fleet.init(role)
>>> import paddle.distributed.fleet as fleet
>>> strategy = fleet.DistributedStrategy()
>>> fleet.init(strategy=strategy)
>>> import paddle.distributed.fleet as fleet
>>> fleet.init(log_level="DEBUG")
collective_perf(comm_type, round=50, size_and_time={}) [source]

collective_perf

Run performance test for given communication type and compare the time cost with the threshold.

Parameters
  • comm_type (str) – Communication type for performance test. Currently support “allreduce”, “broadcast”, “reduce”, “allgather” and “reduce_scatter”.

  • round (int, optional) – Loop times for performance test. More loops will cost more time and provide more accurate result. Defaults to 50.

  • size_and_time (dict, optional) – Message sizes and time thresholds for the performance test. Each (size, threshold) pair invokes one performance check. Defaults to {}, which runs performance checks from 1MB to 1GB with no thresholds set.

Returns

None

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init(is_collective=True)
>>> # run two tests, one with 1MB (threshold 0.5s) and another with 1GB (threshold 1s)
>>> size_and_time = {1<<20: 0.5, 1<<30: 1}
>>> fleet.collective_perf("allreduce", round=50, size_and_time=size_and_time)
is_first_worker() [source]

is_first_worker

Check whether the node is the first instance of worker.

Returns

True if this is the first worker node, False if not.

Return type

bool

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()
>>> fleet.is_first_worker()
worker_index() [source]

worker_index

Get current worker index.

Returns

index of the current worker node

Return type

int

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()
>>> fleet.worker_index()
worker_num() [source]

worker_num

Get current total worker number.

Returns

number of workers

Return type

int

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()
>>> fleet.worker_num()
is_worker() [source]

is_worker

Check whether the node is an instance of worker.

Returns

True if this is a worker node, False if not.

Return type

bool

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()
>>> fleet.is_worker()
worker_endpoints(to_string=False) [source]

worker_endpoints

Get current worker endpoints, such as ["127.0.0.1:1001", "127.0.0.1:1002"].

Returns

worker endpoints

Return type

list/string

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()
>>> fleet.worker_endpoints()
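
If to_string is True, the endpoints are joined into a single string instead of a list (a small sketch of the parameter described above):

>>> fleet.worker_endpoints(to_string=True)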
server_num() [source]

server_num

Get current total server number.

Returns

server number

Return type

int

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()
>>> fleet.server_num()
server_index() [source]

server_index

Get current server index.

Returns

index of the current server node

Return type

int

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()
>>> fleet.server_index()
server_endpoints(to_string=False) [source]

server_endpoints

Get current server endpoints, such as ["127.0.0.1:1001", "127.0.0.1:1002"].

Returns

server endpoints

Return type

list/string

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()
>>> fleet.server_endpoints()
is_server() [source]

is_server

Check whether the node is an instance of server.

Returns

True if this is a server node, False if not.

Return type

bool

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()
>>> fleet.is_server()
barrier_worker() [source]

barrier_worker

Barrier across all workers.

Returns

None

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()
>>> fleet.barrier_worker()
all_reduce(input, mode='sum') [source]

all_reduce

All-reduce input across all workers. mode can be "sum", "mean" or "max"; the default is "sum".

Returns

all reduce result

Return type

list/int

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()
>>> res = fleet.all_reduce(5)
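
To reduce with a different mode, pass the mode argument described above; a small sketch taking the maximum across workers:

>>> res_max = fleet.all_reduce(5, mode="max")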
init_worker(scopes=None) [source]

init_worker

Initialize the Communicator for parameter server training.

Returns

None

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()

>>> # build net
>>> # fleet.distributed_optimizer(...)

>>> fleet.init_worker()
init_coordinator(scopes=None) [source]

init_coordinator

Initialize the coordinator node.
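
Examples

The original entry gives no example; a minimal sketch, assuming the job's role maker designates this node as the coordinator:

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()
>>> fleet.init_coordinator()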

get_fl_client() [source]

get_fl_client

Get the worker (training node) pointer.
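
Examples

The original entry gives no example; a minimal sketch, assuming a federated-learning job in which this node is a training worker:

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()
>>> client = fleet.get_fl_client()  # handle to this worker's FL client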

init_server(*args, **kwargs) [source]

init_server

Run the init_server executor to initialize the startup program. If args is not empty, load_persistables will be run for incremental training.

Returns

None

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()

>>> # build net
>>> # fleet.distributed_optimizer(...)

>>> fleet.init_server()
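
For incremental training, a saved-model directory can be passed so that load_persistables is run, as described above (a sketch; the path is hypothetical):

>>> # fleet.init_server("/your/saved/model/dir")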
load_model(path, mode) [source]

load_model

Load the fleet model from path.

Returns

None

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()

>>> # build net
>>> # fleet.distributed_optimizer(...)

>>> fleet.load_model("path", mode=0)
load_one_table(table_id, path, mode) [source]

load_one_table

Load one fleet table from path.

Returns

None

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()

>>> # build net
>>> # fleet.distributed_optimizer(...)

>>> fleet.load_one_table(0, "path", mode=0)
load_inference_model(path, mode) [source]

load_inference_model

Load the fleet inference model from path.

Returns

None

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()

>>> # build net
>>> # fleet.distributed_optimizer(...)

>>> fleet.load_inference_model("path", mode=1)
run_server() [source]

run_server

Run the parameter server (pserver) main program with the executor.

Returns

None

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()

>>> # build net
>>> # fleet.distributed_optimizer(...)

>>> if fleet.is_server():
...     fleet.init_server()
...     fleet.run_server()
stop_worker() [source]

stop_worker

Stop the Communicator and notify the parameter server that training is complete.

Returns

None

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()

>>> # build net
>>> # fleet.distributed_optimizer(...)

>>> fleet.stop_worker()
save_inference_model(executor, dirname, feeded_var_names, target_vars, main_program=None, export_for_deployment=True, mode=0) [source]

save_inference_model

Save the inference model for deployment.

Returns

None

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()

>>> # build net
>>> # fleet.distributed_optimizer(...)

>>> import paddle
>>> exe = paddle.static.Executor(paddle.CPUPlace())
>>> # feed names and fetch targets come from the net built above
>>> fleet.save_inference_model(exe, "dirname", ["feed_name"], [fetch_target])
save_persistables(executor, dirname, main_program=None, mode=0) [source]

save_persistables

Saves all persistable tensors from main_program to the folder dirname.

The dirname argument specifies the folder where the persistable tensors will be saved.

Parameters
  • executor (Executor) – The executor to run for saving persistable tensors. You can refer to Executor for more details.

  • dirname (str, optional) – The saving directory path. To save the parameters to memory instead, set it to None.

  • main_program (Program, optional) – The program whose persistable tensors will be saved. Default: None.

Returns

None

Examples

>>> import paddle
>>> paddle.enable_static()
>>> import paddle.distributed.fleet as fleet

>>> fleet.init()

>>> # build net
>>> # fleet.distributed_optimizer(...)

>>> exe = paddle.static.Executor(paddle.CPUPlace())
>>> fleet.save_persistables(exe, "dirname", paddle.static.default_main_program())
save_one_table(table_id, path, mode) [source]

save_one_table

Save one fleet table to path.

Returns

None

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()

>>> # build net
>>> # fleet.distributed_optimizer(...)

>>> fleet.save_one_table(0, "path", mode=0)
save_dense_params(executor, dirname, scope, program, var_names=None) [source]

save_dense_params

Save dense parameters to the folder dirname.

Returns

None

Examples

>>> import paddle
>>> import paddle.distributed.fleet as fleet
>>> fleet.init()
>>> place = paddle.CPUPlace()
>>> exe = paddle.static.Executor(place)

>>> # build net
>>> # fleet.distributed_optimizer(...)

>>> fleet.save_dense_params(exe, "path", scope=paddle.static.global_scope(), program=paddle.static.default_main_program())
set_date(table_id, day_id) [source]

set_date

Set the date for a GPUPS table.

Returns

None

Examples

>>> import paddle.distributed.fleet as fleet
>>> fleet.init()

>>> # build net
>>> # fleet.distributed_optimizer(...)

>>> fleet.set_date(0, "20250101")
distributed_optimizer(optimizer, strategy=None) [source]

distributed_optimizer

Optimizer for distributed training.

For distributed training, this method rebuilds a new instance of DistributedOptimizer, which has the basic Optimizer functionality plus special features for distributed training.

Parameters
  • optimizer (Optimizer) – The optimizer to be wrapped for distributed training.

  • strategy (DistributedStrategy, optional) – Extra properties for the distributed optimizer. It is recommended to set the DistributedStrategy in fleet.init(); the strategy argument here is kept for compatibility. If the strategy passed to fleet.distributed_optimizer() is not None, it overwrites the DistributedStrategy set in fleet.init() and takes effect in distributed training.

Returns

A Fleet instance.

Return type

Fleet

Examples

>>> import paddle
>>> import paddle.distributed.fleet as fleet
>>> fleet.init(is_collective=True)
>>> linear = paddle.nn.Linear(10, 10)
>>> strategy = fleet.DistributedStrategy()
>>> optimizer = paddle.optimizer.SGD(learning_rate=0.001, parameters=linear.parameters())
>>> optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)
get_loss_scaling()

get_loss_scaling

Return the real-time loss scaling factor.
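
Examples

A minimal sketch, assuming optimizer is the instance returned by fleet.distributed_optimizer() with AMP enabled in its strategy:

>>> loss_scaling = optimizer.get_loss_scaling()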

amp_init(place, scope=None, test_program=None, use_fp16_test=False)

amp_init

Initialize AMP training, e.g. cast FP32 parameters to FP16.

Parameters
  • place (CUDAPlace) – place is used to initialize fp16 parameters with fp32 values.

  • scope (Scope) – The scope is used to find fp32 parameters.

  • test_program (Program) – The program is used for testing.

  • use_fp16_test (bool) – Whether to use fp16 testing.

Examples

>>> import paddle
>>> import paddle.nn.functional as F
>>> paddle.enable_static()

>>> def run_example_code():
...     place = paddle.CUDAPlace(0)
...     exe = paddle.static.Executor(place)
...     data = paddle.static.data(name='X', shape=[None, 1, 28, 28], dtype='float32')
...     conv2d = paddle.static.nn.conv2d(input=data, num_filters=6, filter_size=3)
...     # 1) Use fp16_guard to control the range of fp16 kernels used.
...     with paddle.static.amp.fp16_guard():
...         bn = paddle.static.nn.batch_norm(input=conv2d, act="relu")
...         pool = F.max_pool2d(bn, kernel_size=2, stride=2)
...         hidden = paddle.static.nn.fc(pool, size=10)
...         loss = paddle.mean(hidden)
...     # 2) Create the optimizer and set `multi_precision` to True.
...     # Setting `multi_precision` to True can avoid the poor accuracy
...     # or the slow convergence in a way.
...     optimizer = paddle.optimizer.Momentum(learning_rate=0.01, multi_precision=True)
...     # 3) These ops in `custom_black_list` will keep in the float32 computation type.
...     amp_list = paddle.static.amp.CustomOpLists(
...         custom_black_list=['pool2d'])
...     # 4) The entry of Paddle AMP.
...     # Enable pure fp16 training by setting `use_pure_fp16` to True.
...     optimizer = paddle.static.amp.decorate(
...         optimizer,
...         amp_list,
...         init_loss_scaling=128.0,
...         use_dynamic_loss_scaling=True,
...         use_pure_fp16=True)
...     # If you don't use the default_startup_program(), you should pass
...     # your defined `startup_program` into `minimize`.
...     optimizer.minimize(loss)
...     exe.run(paddle.static.default_startup_program())
...     # 5) Use `amp_init` after FP32 parameters initialization(such as `exe.run(startup_program)`).
...     # If you want to perform the testing process, you should pass `test_program` into `amp_init`.
...     optimizer.amp_init(place, scope=paddle.static.global_scope())

>>> if paddle.is_compiled_with_cuda() and len(paddle.static.cuda_places()) > 0:
...     run_example_code()
qat_init(place, scope=None, test_program=None)

qat_init

Initialize QAT training, e.g. insert quantize/dequantize (QDQ) ops and scale variables.

Parameters
  • place (CUDAPlace) – place is used to initialize scale parameters.

  • scope (Scope) – The scope is used to find parameters and variables.

  • test_program (Program) – The program is used for testing.
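
Examples

The original entry gives no example; a minimal sketch by analogy with amp_init above, assuming optimizer is a fleet distributed optimizer with quantization enabled in its strategy and that the FP32 parameters have already been initialized (e.g. via exe.run(startup_program)):

>>> import paddle
>>> place = paddle.CUDAPlace(0)
>>> optimizer.qat_init(place, scope=paddle.static.global_scope())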

minimize(loss, startup_program=None, parameter_list=None, no_grad_set=None) [source]

minimize

Add distributed operations to minimize loss by updating parameter_list.

Parameters
  • loss (Tensor) – A Tensor containing the value to minimize.

  • startup_program (Program, optional) – Program for initializing parameters in parameter_list. The default value is None, at this time default_startup_program will be used.

  • parameter_list (Iterable, optional) – Iterable of Tensor or Tensor.name to update to minimize loss. The default value is None, at this time all parameters will be updated.

  • no_grad_set (set, optional) – Set of Tensor or Tensor.name that don’t need to be updated. The default value is None.

Returns

tuple (optimize_ops, params_grads): a list of operators appended by minimize and a list of (param, grad) tensor pairs, where param is a Parameter and grad is the gradient value corresponding to it. The returned tuple can be passed to fetch_list in Executor.run() to indicate program pruning; if so, the program will be pruned by feed and fetch_list before running. See Executor for details.

Return type

tuple

Examples

>>> import paddle
>>> paddle.enable_static()
>>> import paddle.distributed.fleet as fleet
>>> import paddle.nn.functional as F

>>> hid_dim = 10
>>> label_dim = 2
>>> input_x = paddle.static.data(name='x', shape=[None, 13], dtype='float32')
>>> input_y = paddle.static.data(name='y', shape=[None, 1], dtype='int64')
>>> fc_1 = paddle.static.nn.fc(x=input_x, size=hid_dim, activation='tanh')
>>> fc_2 = paddle.static.nn.fc(x=fc_1, size=hid_dim, activation='tanh')
>>> prediction = paddle.static.nn.fc(x=[fc_2], size=label_dim, activation='softmax')
>>> cost = F.cross_entropy(input=prediction, label=input_y)
>>> avg_cost = paddle.mean(x=cost)

>>> fleet.init(is_collective=True)
>>> strategy = fleet.DistributedStrategy()
>>> optimizer = paddle.optimizer.SGD(learning_rate=0.001)
>>> optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)
>>> optimize_ops, params_grads = optimizer.minimize(avg_cost)

>>> # For more examples, please refer to https://github.com/PaddlePaddle/PaddleFleetX