class paddle.distributed.ParallelMode [source]

The parallel modes currently supported are:

- DATA_PARALLEL: distributes input data across different devices.
- TENSOR_PARALLEL: shards tensors in the network across different devices.
- PIPELINE_PARALLEL: places different layers of the network on different devices.
- SHARDING_PARALLEL: segments the model parameters, parameter gradients, and optimizer states corresponding to the parameters across devices.
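The mode names map to integer values, which is why the example below prints 0. A minimal sketch of that mapping as a plain Python IntEnum is shown here for illustration; DATA_PARALLEL = 0 matches the documented example, while the remaining values assume the modes are numbered in the order listed above (an assumption about Paddle's internal numbering, not a guarantee).

```python
from enum import IntEnum

class ParallelModeSketch(IntEnum):
    """Illustrative stand-in for paddle.distributed.ParallelMode.

    DATA_PARALLEL = 0 matches the documented example; the other
    values are assumed to follow the order listed above.
    """
    DATA_PARALLEL = 0      # replicate the model, split the input batch
    TENSOR_PARALLEL = 1    # shard individual tensors across devices
    PIPELINE_PARALLEL = 2  # place different layers on different devices
    SHARDING_PARALLEL = 3  # partition parameters/gradients/optimizer states

print(int(ParallelModeSketch.DATA_PARALLEL))  # 0
```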


import paddle

# ParallelMode is an enumeration of the supported parallel modes.
parallel_mode = paddle.distributed.ParallelMode
print(parallel_mode.DATA_PARALLEL)  # 0