save_inference_model

paddle.fluid.io.save_inference_model(dirname, feeded_var_names, target_vars, executor, main_program=None, model_filename=None, params_filename=None, export_for_deployment=True, program_only=False, clip_extra=True) [source]

Prune the given main_program to build a new program especially for inference, and then save it and all related parameters to the given dirname. If you just want to save the parameters of your trained model, please use paddle.fluid.io.save_params. You can refer to Save and Load a Model for more details.

Note

The dirname is used to specify the folder where the inference model structure and parameters are going to be saved. If you would like to save the parameters of the Program in separate files, set params_filename to None; if you would like to save all the parameters in a single file, use params_filename to specify the file name.
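For instance, a minimal sketch of the single-file variant (assuming exe and predict are already defined, as in the Examples section below; the file names are illustrative):

import paddle.fluid as fluid

# Save the pruned program as "./infer_model/model" and bundle all
# parameters into the single file "./infer_model/params".
fluid.io.save_inference_model(dirname="./infer_model",
                              feeded_var_names=['img'],
                              target_vars=[predict],
                              executor=exe,
                              model_filename='model',
                              params_filename='params')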

Parameters
  • dirname (str) – The directory path to save the inference model.

  • feeded_var_names (list[str]) – Names of the variables that need to be fed data during inference.

  • target_vars (list[Variable]) – The variables from which inference results can be fetched.

  • executor (Executor) – The executor that saves the inference model. You can refer to Executor for more details.

  • main_program (Program, optional) – The original program, which will be pruned to build the inference model. If it is set to None, the global default main program will be used. Default: None.

  • model_filename (str, optional) – The name of the file to save the inference program itself. If it is set to None, a default filename __model__ will be used.

  • params_filename (str, optional) – The name of a single file to save all related parameters. If it is set to None, parameters will be saved in separate files.

  • export_for_deployment (bool, optional) – If True, programs are modified to only support direct inference deployment. Otherwise, more information will be stored for flexible optimization and re-training. Currently, only True is supported. Default: True.

  • program_only (bool, optional) – If True, only the inference program will be saved, without the parameters of the Program (see the sketch after this list). Default: False.
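For example, a minimal sketch of saving only the topology via program_only (again assuming exe and predict are defined as in the Examples section below; the directory name is illustrative):

import paddle.fluid as fluid

# Save only the pruned inference program; no parameter files are written.
fluid.io.save_inference_model(dirname="./infer_model_topology",
                              feeded_var_names=['img'],
                              target_vars=[predict],
                              executor=exe,
                              program_only=True)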

Returns

list[str], the name list of the fetch variables.

Examples

import paddle
import paddle.fluid as fluid

paddle.enable_static()
path = "./infer_model"

# User defined network, here a softmax regression example
image = fluid.data(name='img', shape=[None, 28, 28], dtype='float32')
label = fluid.data(name='label', shape=[None, 1], dtype='int64')
feeder = fluid.DataFeeder(feed_list=[image, label], place=fluid.CPUPlace())
predict = fluid.layers.fc(input=image, size=10, act='softmax')

loss = fluid.layers.cross_entropy(input=predict, label=label)
avg_loss = paddle.mean(loss)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())

# Feed data (via feeder) and run the training loop (omitted here)

# Save inference model. Note we don't save label and loss in this example
fluid.io.save_inference_model(dirname=path,
                              feeded_var_names=['img'],
                              target_vars=[predict],
                              executor=exe)

# In this example, the save_inference_model call will prune the default
# main program according to the network's input node (img) and output
# node (predict). The pruned inference program is going to be saved in
# "./infer_model/__model__" and the parameters are going to be saved in
# separate files under the folder "./infer_model".