save_inference_model

paddle.static.save_inference_model(path_prefix, feed_vars, fetch_vars, executor, **kwargs) [source]

Save the current model and its parameters to the given path prefix. For example, given path_prefix = "PATH/modelname", after invoking save_inference_model(path_prefix, feed_vars, fetch_vars, executor), you will find two files named modelname.pdmodel and modelname.pdiparams under PATH, which store the model structure and the parameters respectively.
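For instance, with path_prefix = "PATH/modelname", the resulting layout on disk would be:

PATH/
├── modelname.pdmodel     # serialized inference program (model structure)
└── modelname.pdiparams   # serialized parameters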

Parameters
  • path_prefix (str) – Path prefix to save the model: a directory path joined with the model name, without any file suffix.

  • feed_vars (Tensor | list[Tensor]) – Variables needed by inference.

  • fetch_vars (Tensor | list[Tensor]) – Variables returned by inference.

  • executor (Executor) – The executor that saves the inference model. You can refer to Executor for more details.

  • kwargs –

    Supported keys include 'program', 'clip_extra', and 'legacy_format'. Note that kwargs exists mainly for backward compatibility; see the sketch after this list for an example.

    • program (Program): specify a program to save if you don't want to use the default main program.

    • clip_extra (bool): whether to clip extra information for every operator. Default: True.

    • legacy_format (bool): whether to save the inference model in the legacy format. Default: False.
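For example, a minimal sketch of passing kwargs, assuming path_prefix, feed/fetch variables, and an executor set up as in the Examples below:

>>> # Sketch: save an explicitly chosen program rather than relying on
>>> # the default main program being current.
>>> main_prog = paddle.static.default_main_program()
>>> paddle.static.save_inference_model(path_prefix, [image], [predict], exe,
...                                    program=main_prog, clip_extra=True)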

Returns

None

Examples

>>> import paddle

>>> paddle.enable_static()

>>> path_prefix = "./infer_model"

>>> # User-defined network; here, a softmax regression example.
>>> image = paddle.static.data(name='img', shape=[None, 28, 28], dtype='float32')
>>> label = paddle.static.data(name='label', shape=[None, 1], dtype='int64')
>>> predict = paddle.static.nn.fc(image, 10, activation='softmax')

>>> loss = paddle.nn.functional.cross_entropy(predict, label)

>>> exe = paddle.static.Executor(paddle.CPUPlace())
>>> exe.run(paddle.static.default_startup_program())

>>> # Feed data and run the training process (omitted here).

>>> # Save the inference model. Note we don't save label and loss in this example.
>>> paddle.static.save_inference_model(path_prefix, [image], [predict], exe)

>>> # In this example, save_inference_model will prune the default
>>> # main program according to the network's input node (img) and output node (predict).
>>> # The pruned inference program is saved to the file "./infer_model.pdmodel"
>>> # and the parameters are saved to the file "./infer_model.pdiparams".
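To check that the saved files round-trip, here is a minimal sketch of loading the model back with paddle.static.load_inference_model, reusing the exe and path_prefix from above (the random input is purely illustrative):

>>> import numpy as np

>>> # Load the pruned inference program together with its feed/fetch metadata.
>>> [inference_program, feed_target_names, fetch_targets] = (
...     paddle.static.load_inference_model(path_prefix, exe))

>>> # Run inference on a random batch shaped like the 'img' input.
>>> tensor_img = np.random.random((1, 28, 28)).astype("float32")
>>> results = exe.run(inference_program,
...                   feed={feed_target_names[0]: tensor_img},
...                   fetch_list=fetch_targets)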