DistributeTranspilerConfig
- api_attr: declarative programming (static graph)
class paddle.fluid.transpiler.DistributeTranspilerConfig [source]

A configuration class that provides support for transpiler distributed jobs. Some important parameters are explained as follows:
- slice_var_up (bool): Whether to slice a Tensor into blocks for the parameter servers. Default is True.
- split_method (PSDispatcher): The method used to dispatch parameters to the parameter servers. RoundRobin or HashName can be used, and the default is RoundRobin. Try to choose the method that best balances the load across parameter servers (a standalone sketch of round-robin dispatch appears after this list).
- min_block_size (int): The minimum number of elements in a split block. Default is 8192.
According to https://github.com/PaddlePaddle/Paddle/issues/8638#issuecomment-369912156, bandwidth is used efficiently when the data size is larger than 2 MB. If you want to change this value, make sure you have read the slice_variable function first; its definition can be found in https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/transpiler/distribute_transpiler.py (an approximate sketch of the slicing arithmetic also follows this list).
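To illustrate what split_method controls, the following standalone sketch mimics round-robin dispatch. It is not the Paddle RoundRobin class itself, and the endpoints and block names are hypothetical:

def round_robin_dispatch(block_names, endpoints):
    # Assign each parameter block to the next endpoint in turn,
    # wrapping around when the endpoint list is exhausted.
    return {name: endpoints[i % len(endpoints)]
            for i, name in enumerate(block_names)}

endpoints = ["192.168.0.1:6174", "192.168.0.2:6174"]  # hypothetical pservers
blocks = ["fc_w.block0", "fc_w.block1", "fc_b.block0"]
print(round_robin_dispatch(blocks, endpoints))
# {'fc_w.block0': '192.168.0.1:6174',
#  'fc_w.block1': '192.168.0.2:6174',
#  'fc_b.block0': '192.168.0.1:6174'}

HashName, by contrast, picks the endpoint from a hash of the block name, which keeps the block-to-server mapping stable across runs.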
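To make the role of min_block_size concrete, here is a small sketch that approximates the slicing arithmetic: a variable is split across at most as many blocks as there are parameter servers, but never into blocks smaller than min_block_size elements. The function approximate_slice is hypothetical; the authoritative logic is the slice_variable function linked above.

import math
from functools import reduce

def approximate_slice(var_shape, pserver_count, min_block_size=8192):
    # Total number of elements in the variable.
    numel = reduce(lambda x, y: x * y, var_shape)
    # Never create blocks smaller than min_block_size elements.
    max_splits = max(int(math.floor(numel / float(min_block_size))), 1)
    # Use at most one block per parameter server.
    split_count = min(pserver_count, max_splits)
    block_size = int(math.ceil(numel / float(split_count)))
    return split_count, block_size

# A 1024 x 1024 weight is large enough to be split across 4 pservers:
print(approximate_slice((1024, 1024), 4))  # (4, 262144)
# A small bias vector stays in a single block:
print(approximate_slice((512,), 4))        # (1, 512)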
Examples
from paddle.fluid.transpiler.ps_dispatcher import RoundRobin
import paddle.fluid as fluid

config = fluid.DistributeTranspilerConfig()
config.slice_var_up = True
config.split_method = RoundRobin
config.min_block_size = 81920
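In practice, the config is passed to a DistributeTranspiler, whose transpile call rewrites the program for the cluster. The sketch below assumes the standard fluid.DistributeTranspiler API; the endpoint addresses, trainer count, and trainer id are placeholder values:

import paddle.fluid as fluid

config = fluid.DistributeTranspilerConfig()
config.slice_var_up = True
config.min_block_size = 81920

# Hypothetical cluster: two parameter servers, two trainers.
pserver_endpoints = "192.168.0.1:6174,192.168.0.2:6174"
t = fluid.DistributeTranspiler(config=config)
t.transpile(trainer_id=0, pservers=pserver_endpoints, trainers=2)

After transpile, t.get_trainer_program() and t.get_pserver_program(endpoint) return the role-specific programs.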