load_inference_model

paddle.static.load_inference_model(path_prefix, executor, **kwargs) [source]

Load an inference model from a given path. With this API, you can get the model structure (the inference Program) and the model parameters.

Parameters
  • path_prefix (str | None) – One of the following:

    • The directory path plus the model name without suffix, i.e. the same prefix that was used when saving the model.

    • None, when loading the model from memory.

  • executor (Executor) – The executor used to load the inference model. See Executor for more details.

  • kwargs –

    Supported keys include ‘model_filename’ and ‘params_filename’. Note that kwargs exists mainly for backward compatibility (see the sketch after this list).

    • model_filename (str): the name of the model file; specify it if the model was saved under a non-default name.

    • params_filename (str): the name of the parameters file; specify it if the parameters were saved under a non-default name.
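For example, a model whose files were saved under non-default names might be loaded by passing those names through kwargs. A minimal sketch; the path ./legacy_model and the file names custom_model and custom_params below are hypothetical:

>>> import paddle
>>> paddle.enable_static()
>>> exe = paddle.static.Executor(paddle.CPUPlace())
>>> # Hypothetical: the model files were saved under the non-default
>>> # names "custom_model" and "custom_params" rather than the default
>>> # "<path_prefix>.pdmodel" / "<path_prefix>.pdiparams" names.
>>> [program, feed_names, fetch_vars] = (
...     paddle.static.load_inference_model("./legacy_model", exe,
...         model_filename="custom_model",
...         params_filename="custom_params"))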

Returns

This API returns a list with three elements: (program, feed_target_names, fetch_targets). program is a Program (refer to Basic Concept) used for inference. feed_target_names is a list of str containing the names of the variables that must be fed with data in the inference program. fetch_targets is a list of Variable (refer to Basic Concept) containing the variables from which the inference results can be fetched.

Return type

list

Examples

>>> import paddle
>>> import numpy as np

>>> paddle.enable_static()

>>> # Build the model
>>> startup_prog = paddle.static.default_startup_program()
>>> main_prog = paddle.static.default_main_program()
>>> with paddle.static.program_guard(main_prog, startup_prog):
...     image = paddle.static.data(name="img", shape=[64, 784])
...     w = paddle.create_parameter(shape=[784, 200], dtype='float32')
...     b = paddle.create_parameter(shape=[200], dtype='float32')
...     hidden_w = paddle.matmul(x=image, y=w)
...     hidden_b = paddle.add(hidden_w, b)
>>> exe = paddle.static.Executor(paddle.CPUPlace())
>>> exe.run(startup_prog)

>>> # Save the inference model
>>> path_prefix = "./infer_model"
>>> paddle.static.save_inference_model(path_prefix, [image], [hidden_b], exe)

>>> [inference_program, feed_target_names, fetch_targets] = (
...     paddle.static.load_inference_model(path_prefix, exe))
>>> tensor_img = np.array(np.random.random((64, 784)), dtype=np.float32)
>>> results = exe.run(inference_program,
...               feed={feed_target_names[0]: tensor_img},
...               fetch_list=fetch_targets)

>>> # In this example, the inference program was saved in the file
>>> # "./infer_model.pdmodel" and the parameters were saved in the file
>>> # "./infer_model.pdiparams".
>>> # With the inference program, feed_target_names and
>>> # fetch_targets, we can use an executor to run the inference
>>> # program and get the inference result.
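As noted under path_prefix, setting it to None loads the model from memory. A minimal sketch continuing the example above, assuming the serialized bytes produced by paddle.static.serialize_program and paddle.static.serialize_persistables are accepted through the model_filename and params_filename kwargs:

>>> # Serialize the program and the parameters to bytes in memory...
>>> serialized_program = paddle.static.serialize_program([image], [hidden_b])
>>> serialized_params = paddle.static.serialize_persistables(
...     [image], [hidden_b], exe)
>>> # ...then load them back with path_prefix=None, passing the bytes
>>> # through the model_filename / params_filename kwargs (assumption:
>>> # these kwargs carry the serialized bytes in the in-memory case).
>>> [program, feed_names, fetch_vars] = (
...     paddle.static.load_inference_model(None, exe,
...         model_filename=serialized_program,
...         params_filename=serialized_params))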