load_inference_model

paddle.static.load_inference_model(path_prefix, executor, **kwargs)

Loads an inference model from the given file path, including both the model structure and the model parameters.

Parameters

  • path_prefix (str|None) – The directory where the model is stored plus the model name (without a file suffix). If None, the model is loaded from memory.

  • executor (Executor) – The executor that runs the model; see the Execution Engine guide for details.

  • kwargs – Supported keys include 'model_filename' and 'params_filename'. (Note: kwargs exists mainly for backward compatibility; see the sketch after this list.)

    • model_filename (str) – The custom name of the file that stores the model structure.

    • params_filename (str) – The custom name of the file that stores the model parameters.
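
Since kwargs is mainly for backward compatibility, the following is a minimal, hedged sketch of loading a model whose files carry custom names; the directory "./legacy_dir", both file names, and the pre-built Executor exe are hypothetical placeholders:

>>> # Hedged sketch: load a model whose structure and parameters were
>>> # saved under custom file names ("./legacy_dir" and both file names
>>> # are hypothetical; exe is an existing paddle.static.Executor).
>>> [program, feed_names, fetch_targets] = paddle.static.load_inference_model(
...     "./legacy_dir", exe,
...     model_filename="mymodel.pdmodel",
...     params_filename="myparams.pdiparams")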

Returns

This API returns a list of three elements: [program, feed_target_names, fetch_targets]. Their meanings are as follows:

  • program (Program) – An instance of the Program class (see Basic Concepts). Here it is used for inference, so it can be called an Inference Program.

  • feed_target_names (list) – A list of strings containing the names of all variables that must be fed to the Inference Program at inference time (i.e., the names of all input variables).

  • fetch_targets (list) – A list of Variables (see Basic Concepts) containing all output variables of the model; the inference results are obtained through these output variables.

Code Example

>>> import paddle
>>> import numpy as np

>>> paddle.enable_static()

>>> # Build the model
>>> startup_prog = paddle.static.default_startup_program()
>>> main_prog = paddle.static.default_main_program()
>>> with paddle.static.program_guard(main_prog, startup_prog):
...     image = paddle.static.data(name="img", shape=[64, 784])
...     w = paddle.create_parameter(shape=[784, 200], dtype='float32')
...     b = paddle.create_parameter(shape=[200], dtype='float32')
...     hidden_w = paddle.matmul(x=image, y=w)
...     hidden_b = paddle.add(hidden_w, b)
>>> exe = paddle.static.Executor(paddle.CPUPlace())
>>> exe.run(startup_prog)

>>> # Save the inference model
>>> path_prefix = "./infer_model"
>>> paddle.static.save_inference_model(path_prefix, [image], [hidden_b], exe)

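>>> # Load the inference model back and run inference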
>>> [inference_program, feed_target_names, fetch_targets] = (
...     paddle.static.load_inference_model(path_prefix, exe))
>>> tensor_img = np.array(np.random.random((64, 784)), dtype=np.float32)
>>> results = exe.run(inference_program,
...               feed={feed_target_names[0]: tensor_img},
...               fetch_list=fetch_targets)

>>> # In this example, the inference program was saved in the file
>>> # "./infer_model.pdmodel" and the parameters were saved in the file
>>> # "./infer_model.pdiparams".
>>> # With the inference program, feed_target_names and fetch_targets,
>>> # we can use an executor to run the inference program and obtain
>>> # the inference result.
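
Because path_prefix=None loads the model from memory, the following is a minimal, hedged sketch of that path. It assumes paddle.static.serialize_program and paddle.static.serialize_persistables are available to produce the in-memory bytes (verify both names against your Paddle version):

>>> # Hedged sketch: load the same model from memory rather than from disk.
>>> # The serialized bytes are passed through the backward-compatibility
>>> # kwargs instead of file names.
>>> program_bytes = paddle.static.serialize_program([image], [hidden_b])
>>> params_bytes = paddle.static.serialize_persistables([image], [hidden_b], exe)
>>> [program, feed_names, fetch_list] = paddle.static.load_inference_model(
...     None, exe,
...     model_filename=program_bytes,
...     params_filename=params_bytes)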