get_rank

paddle.distributed.get_rank(group=None) [source]

Returns the rank of the current trainer in the given group; ranks are consecutive integers in [0, world_size). If no group is given, the global group is used by default.

Parameters

group (Group, optional) – The communication group in which to query the rank of the current trainer. The global group is used if group is None.

Returns

(int) The rank of the current trainer in the given group. Returns -1 if the process is not part of the given group.

Warning

The group argument is only supported in dygraph mode.

Examples

>>> # Execute this script using distributed launch with a one-card configuration.
>>> import paddle
>>> import paddle.distributed as dist

>>> dist.init_parallel_env()
>>> print("The rank is %d" % dist.get_rank())
The rank is 0
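
For multi-card runs, get_rank can also be scoped to a subgroup. The snippet below is a minimal sketch rather than part of the official example; it assumes a two-card distributed launch and uses dist.new_group to build a group containing only global rank 0, so the process outside the group gets -1.

>>> # Minimal sketch (assumption): execute with a two-card distributed launch.
>>> import paddle
>>> import paddle.distributed as dist

>>> dist.init_parallel_env()
>>> # All processes must call new_group; this group holds only global rank 0.
>>> group = dist.new_group(ranks=[0])
>>> # Global rank 0 prints 0; global rank 1 prints -1 since it is not in the group.
>>> print("The group rank is %d" % dist.get_rank(group))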