save(state_dict, path, **configs) [source]

Save a state dict to the specified path, in both distributed and single-card environments.


Supports saving the state_dict of a Layer/Optimizer, a Tensor, nested structures containing Tensors, and a Program.


Since the save result is a single file, there is no need to distinguish multiple saved files by adding a suffix; the path argument is used directly as the saved file name rather than as a prefix. To unify the saved file name format, we recommend using the paddle standard suffixes: 1. for Layer.state_dict , use .pdparams ; 2. for Optimizer.state_dict , use .pdopt . For concrete examples, please refer to the API code examples.
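The recommended suffix convention above can be sketched as a small helper. This helper (`recommended_path`) is hypothetical and not part of the paddle API; it only illustrates the documented naming convention:

```python
# Hypothetical helper illustrating paddle's recommended suffix convention;
# not part of the paddle API itself.
RECOMMENDED_SUFFIXES = {
    "params": ".pdparams",  # for Layer.state_dict()
    "opt": ".pdopt",        # for Optimizer.state_dict()
}

def recommended_path(prefix, state_type):
    """Return a save path following the paddle standard suffix convention."""
    try:
        return prefix + RECOMMENDED_SUFFIXES[state_type]
    except KeyError:
        raise ValueError(f"unknown state_type: {state_type!r}")

print(recommended_path("path/to/save", "params"))  # path/to/save.pdparams
print(recommended_path("path/to/save", "opt"))     # path/to/save.pdopt
```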

  • state_dict (dict) – The state dict to be saved.

  • path (str|BytesIO) – The path or buffer to save the object to. If saving to the current directory, the input path string is used directly as the file name.

  • protocol (int, optional) – The protocol version of the pickle module; must be greater than 1 and less than 5. Default: 4.

  • **configs (dict, optional) –

    Optional keyword arguments. The following options are currently supported:

    1. use_binary_format(bool):

      Used when the saved object is a static graph variable. If True, a single static graph variable is saved in C++ binary format; otherwise it is saved in pickle format. Default: False.

    2. gather_to(int|list|tuple|None):

      Specifies which global rank(s) to gather to and save on. Default is None, which means distributed saving with no gathering to a single card.

    3. state_type(str):

      Value can be ‘params’ or ‘opt’, specifying whether to save parameters or optimizer state.

    4. max_grouped_size(str|int):

      Limits the maximum size (number of bits) of an object group transferred at a time. If str, the format must be num + ‘G/M/K’, for example 3G, 2K, 10M, etc. Default is 3G.


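How gather_to selects which ranks write a file can be sketched as follows. This logic (`ranks_that_save`) is a hypothetical illustration of the documented semantics, not paddle's internal code:

```python
def ranks_that_save(gather_to, world_size):
    """Return the set of global ranks that write a file, following the
    documented gather_to semantics (hypothetical sketch)."""
    if gather_to is None:
        # Distributed saving: no gathering, every rank writes its own shard.
        return set(range(world_size))
    if isinstance(gather_to, int):
        return {gather_to}          # gather to a single rank
    return set(gather_to)           # list/tuple of ranks

print(ranks_that_save(None, 4))    # all four ranks save
print(ranks_that_save(0, 4))       # only rank 0 saves
print(ranks_that_save([0, 1], 4))  # ranks 0 and 1 save
```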

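The num + ‘G/M/K’ string format accepted by max_grouped_size could be parsed like this. The parser is a sketch of the documented format only (assuming binary 1024-based units), not paddle's internal parser:

```python
# Assumption: binary (1024-based) units, as is common for size limits.
UNIT_MULTIPLIERS = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}

def parse_max_grouped_size(value):
    """Parse an int, or a string like '3G', '10M', '2K', into a plain number."""
    if isinstance(value, int):
        return value
    unit = value[-1].upper()
    if unit not in UNIT_MULTIPLIERS:
        raise ValueError(f"expected num + 'G/M/K', got {value!r}")
    return int(value[:-1]) * UNIT_MULTIPLIERS[unit]

print(parse_max_grouped_size("3G"))   # 3221225472
print(parse_max_grouped_size("10M"))  # 10485760
print(parse_max_grouped_size(4096))   # 4096
```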

>>> import paddle
>>> from paddle.distributed import fleet
>>> fleet.init(is_collective=True)

>>> model = build_model()
>>> optimizer = build_optimizer(model)

>>> dist_optimizer = fleet.distributed_optimizer(optimizer)
>>> dist_model = fleet.distributed_model(model)

>>> # gather params to rank 0 and then save
>>> save(model.state_dict(), path="path/to/save.pdparams", gather_to=[0], state_type="params")

>>> # save whole params on all ranks
>>> save(model.state_dict(), path="path/to/save.pdparams", gather_to=[0, 1], state_type="params")

>>> # save optimizer state dict on rank 0
>>> save(optimizer.state_dict(), path="path/to/save.pdopt", gather_to=0, state_type="opt")