paddle.load(path, **configs) [source]

Load an object that can be used in paddle from the specified path.


Currently, it only supports loading the state_dict of a Layer or an Optimizer.


In order to use the model parameters saved by paddle more efficiently, paddle.load supports loading the state_dict of a Layer from the results of other save APIs besides paddle.save, but the required path format differs:

  1. Loading from paddle.static.save or paddle.Model().save(training=True): path needs to be a complete file name, such as model.pdparams or model.pdopt.

  2. Loading from paddle.jit.save, paddle.static.save_inference_model, or paddle.Model().save(training=False): path needs to be a file prefix, such as model/mnist; paddle.load will then get information from mnist.pdmodel and mnist.pdiparams.

  3. Loading from the paddle 1.x APIs paddle.fluid.io.save_inference_model or paddle.fluid.io.save_params/save_persistables: path needs to be a directory, such as model, where model is a directory.


If you load a state_dict from the saved result of a static mode API such as paddle.static.save or paddle.static.save_inference_model, the structured variable names used in dynamic mode cannot be restored. You need to set the argument use_structured_name=False when calling Layer.set_state_dict later.

  • path (str) – The path to load the target object from. Generally, the path is the target file path. When loading a state_dict from the saved result of an API used to save an inference model, the path may be a file prefix or a directory.

  • **configs (dict, optional) – Other load configuration options for compatibility. We do not recommend using these configurations; they may be removed in the future. If not necessary, DO NOT use them. Default: None. The following options are currently supported: (1) model_filename (str): the inference model file name of the paddle 1.x save_inference_model save format; the default file name is __model__. (2) params_filename (str): the persistable variables file name of the paddle 1.x save_inference_model save format; there is no default file name, and variables are saved in separate files by default.


Returns

a target object that can be used in paddle

Return type

Object

import paddle

# Save and load the state_dict of a Layer.
emb = paddle.nn.Embedding(10, 10)
layer_state_dict = emb.state_dict()
paddle.save(layer_state_dict, "emb.pdparams")

# Save and load the state_dict of an Optimizer.
scheduler = paddle.optimizer.lr.NoamDecay(
    d_model=0.01, warmup_steps=100, verbose=True)
adam = paddle.optimizer.Adam(
    learning_rate=scheduler,
    parameters=emb.parameters())
opt_state_dict = adam.state_dict()
paddle.save(opt_state_dict, "adam.pdopt")

load_layer_state_dict = paddle.load("emb.pdparams")
load_opt_state_dict = paddle.load("adam.pdopt")