fluid.io

load_inference_model

paddle.fluid.io.load_inference_model(dirname, executor, model_filename=None, params_filename=None, pserver_endpoints=None)[source]

Load the inference model from a given directory. Through this API you can get the model structure (Inference Program) and the model parameters. If you just want to load the parameters of a pre-trained model, please use the load_params API instead. You can refer to Save and Load a Model for more details.

Parameters
  • dirname (str) – The given directory path.

  • executor (Executor) – The executor to run for loading inference model. See Executor for more details about it.

  • model_filename (str, optional) – The name of the file from which to load the inference program. If it is None, the default filename __model__ will be used. Default: None.

  • params_filename (str, optional) – The name of the file from which to load all parameters. It is only used when all parameters were saved in a single binary file. If parameters were saved in separate files, set it as None. Default: None.

  • pserver_endpoints (list, optional) – It is only needed for distributed inference. If a distributed lookup table was used during training, the table is also needed by the inference process. Its value is a list of pserver endpoints. Default: None.

Returns

This API returns a list with three elements: (program, feed_target_names, fetch_targets). program is a Program (refer to Basic Concept) used for inference. feed_target_names is a list of str containing the names of the variables that need to be fed data in the inference program. fetch_targets is a list of Variable (refer to Basic Concept) from which the inference results can be fetched.

Return type

list

Raises

ValueError – If dirname is not an existing directory.

Examples

import paddle.fluid as fluid
import numpy as np

# Build the model
main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    data = fluid.layers.data(name="img", shape=[64, 784], append_batch_size=False)
    w = fluid.layers.create_parameter(shape=[784, 200], dtype='float32')
    b = fluid.layers.create_parameter(shape=[200], dtype='float32')
    hidden_w = fluid.layers.matmul(x=data, y=w)
    hidden_b = fluid.layers.elementwise_add(hidden_w, b)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(startup_prog)

# Save the inference model
path = "./infer_model"
fluid.io.save_inference_model(dirname=path, feeded_var_names=['img'],
             target_vars=[hidden_b], executor=exe, main_program=main_prog)

# Demo one. There is no need to set the distributed lookup table, because
# the training did not use one.
[inference_program, feed_target_names, fetch_targets] = (
    fluid.io.load_inference_model(dirname=path, executor=exe))
tensor_img = np.array(np.random.random((64, 784)), dtype=np.float32)
results = exe.run(inference_program,
              feed={feed_target_names[0]: tensor_img},
              fetch_list=fetch_targets)

# Demo two. If the training used a distributed lookup table, the list of
# pserver endpoints must be supplied when loading the inference model.
# The following is just an example.
endpoints = ["127.0.0.1:2023","127.0.0.1:2024"]
[dist_inference_program, dist_feed_target_names, dist_fetch_targets] = (
    fluid.io.load_inference_model(dirname=path,
                                  executor=exe,
                                  pserver_endpoints=endpoints))

# In this example, the inference program was saved in the file
# "./infer_model/__model__" and parameters were saved in
# separate files under the directory "./infer_model".
# With the inference program, feed_target_names and
# fetch_targets, we can use an executor to run the inference
# program and obtain the inference results.
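
If the inference model had instead been saved with the program and all parameters packed into single named files, the same file names must be passed when loading. A minimal sketch, assuming the model was saved with model_filename='model' and params_filename='params' (both names are illustrative):

# A hedged sketch: load a model whose program and parameters were each
# saved to a single named file. 'model' and 'params' are illustrative
# names and must match the names used with save_inference_model.
[inference_program, feed_target_names, fetch_targets] = (
    fluid.io.load_inference_model(dirname=path,
                                  executor=exe,
                                  model_filename='model',
                                  params_filename='params'))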

load_params

paddle.fluid.io.load_params(executor, dirname, main_program=None, filename=None)[source]

This API filters out all parameters from the given main_program and then tries to load them from the directory dirname or the file filename.

Use the dirname to specify the directory where parameters were saved. If parameters were saved in separate files under the directory dirname, set filename as None; if all parameters were saved in a single file, use filename to specify the file name.

Note:

Some variables are not Parameters but are still necessary for training, such as the learning rate and the global step. So you cannot save and then resume training using save_params and load_params alone. Please use save_persistables and load_persistables instead.

If you want to load the pre-trained model structure and parameters for the inference, please use the load_inference_model API. You can refer to Save and Load a Model for more details.

Parameters
  • executor (Executor) – The executor used for loading parameters. See Executor for more details about it.

  • dirname (str) – The directory path.

  • main_program (Program, optional) – The program whose parameters will be loaded. If it is None, the default_main_program will be used automatically. See Basic Concept for more about Program. Default: None.

  • filename (str, optional) – The file which saved all parameters. If parameters were saved in separate files, set it to None. Default: None.

Returns

None

Examples

import paddle.fluid as fluid

exe = fluid.Executor(fluid.CPUPlace())
param_path = "./my_paddle_model"
prog = fluid.default_main_program()
fluid.io.load_params(executor=exe, dirname=param_path,
                     main_program=prog)
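
If all parameters had been saved into a single binary file via save_params(..., filename=...), the same filename must be passed when loading. A minimal sketch, assuming the parameters were saved to a single file named "params_file" (an illustrative name):

# A hedged sketch: load parameters that were saved in a single file.
# "params_file" is illustrative and must match the name used with
# fluid.io.save_params at save time.
fluid.io.load_params(executor=exe, dirname=param_path,
                     filename="params_file")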

load_persistables

paddle.fluid.io.load_persistables(executor, dirname, main_program=None, filename=None)[source]

This API filters out all variables with persistable==True from the given main_program and then tries to load these variables from the directory dirname or the file filename.

Use the dirname to specify the directory where persistable variables (refer to Save and Load a Model) were saved. If variables were saved in separate files, set filename as None; if all variables were saved in a single file, use filename to specify the file name.

Parameters
  • executor (Executor) – The executor used for loading persistable variables. See Executor for more details about it.

  • dirname (str) – The directory path.

  • main_program (Program, optional) – The program whose persistable variables will be loaded. If it is None, the default_main_program will be used automatically. See Basic Concept for more about Program. Default: None.

  • filename (str, optional) – The file which saved all persistable variables. If variables were saved in separate files, set it to None. Default: None.

Returns

None

Examples

import paddle.fluid as fluid

exe = fluid.Executor(fluid.CPUPlace())
param_path = "./my_paddle_model"
prog = fluid.default_main_program()
fluid.io.load_persistables(executor=exe, dirname=param_path,
                           main_program=prog)
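
As the note under load_params explains, resuming training requires the persistable variables rather than the parameters alone. A minimal save/load pairing sketch, assuming all persistables are kept in a single file named "checkpoint" (an illustrative name):

# A hedged sketch of the save/load pairing used to resume training.
# "checkpoint" is an illustrative filename.
fluid.io.save_persistables(executor=exe, dirname=param_path,
                           filename="checkpoint")
# ... later, before resuming training ...
fluid.io.load_persistables(executor=exe, dirname=param_path,
                           filename="checkpoint")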

load_vars

paddle.fluid.io.load_vars(executor, dirname, main_program=None, vars=None, predicate=None, filename=None)[source]

This API loads variables from files by executor.

There are two ways to specify the variables to be loaded: set the variables in a list and assign it to vars, or use the predicate function to select the variables for which predicate(variable) == True. The first way takes precedence.

The dirname is used to specify the folder from which to load variables. If variables were saved in separate files in the folder dirname, set filename as None. If all variables were saved in a single file, use filename to specify it.

Parameters
  • executor (Executor) – The executor to run for loading variables.

  • dirname (str) – The folder where to load the variables.

  • main_program (Program, optional) – The program whose variables will be loaded. If it is None, the default main program will be used automatically. Default: None

  • vars (list[Variable], optional) – The list that contains all variables to be loaded. Default: None

  • predicate (function, optional) – The function selects variables that make predicate(variable) == True. Default: None

  • filename (str, optional) – The file which saved all required variables. If variables were saved in separate files, set it to be None. Default: None

Returns

None

Raises

TypeError – If main_program is not an instance of Program nor None.

Examples

import paddle.fluid as fluid

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    data = fluid.layers.data(name="img", shape=[64, 784], append_batch_size=False)
    w = fluid.layers.create_parameter(shape=[784, 200], dtype='float32', name='fc_w')
    b = fluid.layers.create_parameter(shape=[200], dtype='float32', name='fc_b')
    hidden_w = fluid.layers.matmul(x=data, y=w)
    hidden_b = fluid.layers.elementwise_add(hidden_w, b)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(startup_prog)

# The first usage: using `vars` to specify the variables.
path = "./my_paddle_vars"
var_list = [w, b]
fluid.io.save_vars(executor=exe, dirname=path, vars=var_list,
                   filename="vars_file")
fluid.io.load_vars(executor=exe, dirname=path, vars=var_list,
                   filename="vars_file")
# w and b will be loaded; they are supposed to have been saved
# in the same file, named 'vars_file', under the path "./my_paddle_vars".

# The second usage: using the `predicate` function to select variables
param_path = "./my_paddle_model"
def name_has_fc(var):
    res = "fc" in var.name
    return res
fluid.io.save_vars(executor=exe, dirname=param_path, main_program=main_prog,
                  vars=None, predicate=name_has_fc)
fluid.io.load_vars(executor=exe, dirname=param_path, main_program=main_prog,
                   vars=None, predicate=name_has_fc)
# Load all variables in `main_program` whose names include "fc".
# All these variables are supposed to have been saved in separate files.

PyReader

class paddle.fluid.io.PyReader(feed_list=None, capacity=None, use_double_buffer=True, iterable=True, return_list=False)[source]

Create a reader object for data feeding in Python. Data would be prefetched by a Python thread and pushed into a queue asynchronously. Data in the queue is extracted automatically when Executor.run(…) is called.

Parameters
  • feed_list (list(Variable)|tuple(Variable)) – feed variable list. The variables should be created by fluid.layers.data().

  • capacity (int) – capacity of the queue maintained in PyReader, measured in batches. Set a larger capacity if your reader is fast.

  • use_double_buffer (bool) – whether to use double_buffer_reader. If use_double_buffer=True, PyReader would prefetch the next batch asynchronously, which speeds up data feeding but occupies a little more CPU or GPU memory, i.e., the memory of one extra batch of input data.

  • iterable (bool) – whether the created PyReader is iterable.

  • return_list (bool) – whether the return value on each device is presented as a list. It is only valid when iterable=True. If return_list=False, the return value on each device is a dict of str -> LoDTensor, where each key is the name of a fed variable. If return_list=True, the return value on each device is a list(LoDTensor). It is recommended to use return_list=False in static graph mode and return_list=True in dygraph mode.

Returns

The created reader object.

Return type

reader(Reader)

Examples

  1. If iterable = False, the created PyReader object is almost the same as fluid.layers.py_reader(). Operators would be inserted into the program. The user should call start() before each epoch and catch the fluid.core.EOFException thrown by Executor.run() when the epoch ends. Once the exception is caught, the user should call reset() to reset the reader manually.

import paddle
import paddle.fluid as fluid
import numpy as np

EPOCH_NUM = 3
ITER_NUM = 5
BATCH_SIZE = 3

def network(image, label):
    # User-defined network, here is an example of softmax regression.
    predict = fluid.layers.fc(input=image, size=10, act='softmax')
    return fluid.layers.cross_entropy(input=predict, label=label)

def reader_creator_random_image_and_label(height, width):
    def reader():
        for i in range(ITER_NUM):
            fake_image = np.random.uniform(low=0,
                                           high=255,
                                           size=[height, width])
            fake_label = np.ones([1])
            yield fake_image, fake_label
    return reader

image = fluid.data(name='image', shape=[None, 784, 784], dtype='float32')
label = fluid.data(name='label', shape=[None, 1], dtype='int64')

reader = fluid.io.PyReader(feed_list=[image, label],
                           capacity=4,
                           iterable=False)

user_defined_reader = reader_creator_random_image_and_label(784, 784)
reader.decorate_sample_list_generator(
    paddle.batch(user_defined_reader, batch_size=BATCH_SIZE))
loss = network(image, label)
executor = fluid.Executor(fluid.CPUPlace())
executor.run(fluid.default_startup_program())
for i in range(EPOCH_NUM):
    reader.start()
    while True:
        try:
            executor.run(feed=None)
        except fluid.core.EOFException:
            reader.reset()
            break
  2. If iterable=True, the created PyReader object is decoupled from the program. No operator would be inserted into the program. In this case, the created reader is a Python generator, which is iterable. The user should feed the data yielded by the PyReader object into Executor.run(feed=...).

import paddle
import paddle.fluid as fluid
import numpy as np

EPOCH_NUM = 3
ITER_NUM = 5
BATCH_SIZE = 10

def network(image, label):
    # User-defined network, here is an example of softmax regression.
    predict = fluid.layers.fc(input=image, size=10, act='softmax')
    return fluid.layers.cross_entropy(input=predict, label=label)

def reader_creator_random_image(height, width):
    def reader():
        for i in range(ITER_NUM):
            fake_image = np.random.uniform(low=0, high=255, size=[height, width])
            fake_label = np.ones([1])
            yield fake_image, fake_label
    return reader

image = fluid.data(name='image', shape=[None, 784, 784], dtype='float32')
label = fluid.data(name='label', shape=[None, 1], dtype='int64')
reader = fluid.io.PyReader(feed_list=[image, label], capacity=4, iterable=True, return_list=False)

user_defined_reader = reader_creator_random_image(784, 784)
reader.decorate_sample_list_generator(
    paddle.batch(user_defined_reader, batch_size=BATCH_SIZE),
    fluid.core.CPUPlace())

loss = network(image, label)
executor = fluid.Executor(fluid.CPUPlace())
executor.run(fluid.default_startup_program())

for _ in range(EPOCH_NUM):
    for data in reader():
        executor.run(feed=data, fetch_list=[loss])
  3. If return_list=True, the return values would be presented as a list instead of a dict. This is usually used in dygraph mode.

import paddle
import paddle.fluid as fluid
import numpy as np

ITER_NUM = 5
BATCH_SIZE = 10

def reader_creator_random_image(height, width):
    def reader():
        for i in range(ITER_NUM):
            fake_image = np.random.uniform(low=0, high=255, size=[height, width])
            fake_label = np.random.random_integers(low=0, high=9, size=[1])
            yield fake_image, fake_label
    return reader

place = fluid.CPUPlace()
with fluid.dygraph.guard(place):
    py_reader = fluid.io.PyReader(capacity=2, return_list=True)
    user_defined_reader = reader_creator_random_image(784, 784)
    py_reader.decorate_sample_list_generator(
        paddle.batch(user_defined_reader, batch_size=BATCH_SIZE),
        place)
    for image, label in py_reader():
        relu = fluid.layers.relu(image)
start()

Start the data feeding thread. This method can only be called when the reader object is not iterable.

Example

import paddle
import paddle.fluid as fluid
import numpy as np

BATCH_SIZE = 10

def generator():
    for i in range(5):
        yield np.random.uniform(low=0, high=255, size=[784, 784]),

image = fluid.data(name='image', shape=[None, 784, 784], dtype='float32')
reader = fluid.io.PyReader(feed_list=[image], capacity=4, iterable=False)
reader.decorate_sample_list_generator(
    paddle.batch(generator, batch_size=BATCH_SIZE))

executor = fluid.Executor(fluid.CPUPlace())
executor.run(fluid.default_startup_program())
for i in range(3):
    reader.start()
    while True:
        try:
            executor.run(feed=None)
        except fluid.core.EOFException:
            reader.reset()
            break
reset()

Reset the reader object after fluid.core.EOFException is raised. This method can only be called when the reader object is not iterable.

Example

import paddle
import paddle.fluid as fluid
import numpy as np

BATCH_SIZE = 10

def generator():
    for i in range(5):
        yield np.random.uniform(low=0, high=255, size=[784, 784]),

image = fluid.data(name='image', shape=[None, 784, 784], dtype='float32')
reader = fluid.io.PyReader(feed_list=[image], capacity=4, iterable=False)
reader.decorate_sample_list_generator(
    paddle.batch(generator, batch_size=BATCH_SIZE))

executor = fluid.Executor(fluid.CPUPlace())
executor.run(fluid.default_startup_program())
for i in range(3):
    reader.start()
    while True:
        try:
            executor.run(feed=None)
        except fluid.core.EOFException:
            reader.reset()
            break
decorate_sample_generator(sample_generator, batch_size, drop_last=True, places=None)

Set the data source of the PyReader object.

The provided sample_generator should be a Python generator, which yields list(numpy.ndarray)-typed data of each sample.

places must be set when the PyReader object is iterable.

If none of the inputs has LoD, this method is faster than decorate_sample_list_generator(paddle.batch(sample_generator, ...)).

Parameters
  • sample_generator (generator) – Python generator that yields list(numpy.ndarray)-typed sample data.

  • batch_size (int) – batch size. Must be larger than 0.

  • drop_last (bool) – Whether to drop the last batch when the number of remaining samples is less than batch_size.

  • places (None|list(CUDAPlace)|list(CPUPlace)) – place list. Must be provided when PyReader is iterable.

Example

import paddle.fluid as fluid
import numpy as np

EPOCH_NUM = 3
ITER_NUM = 15
BATCH_SIZE = 3

def network(image, label):
    # User-defined network, here is an example of softmax regression.
    predict = fluid.layers.fc(input=image, size=10, act='softmax')
    return fluid.layers.cross_entropy(input=predict, label=label)

def random_image_and_label_generator(height, width):
    def generator():
        for i in range(ITER_NUM):
            fake_image = np.random.uniform(low=0,
                                           high=255,
                                           size=[height, width])
            fake_label = np.array([1])
            yield fake_image, fake_label
    return generator

image = fluid.data(name='image', shape=[None, 784, 784], dtype='float32')
label = fluid.data(name='label', shape=[None, 1], dtype='int64')
reader = fluid.io.PyReader(feed_list=[image, label], capacity=4, iterable=True)

user_defined_generator = random_image_and_label_generator(784, 784)
reader.decorate_sample_generator(user_defined_generator,
                                 batch_size=BATCH_SIZE,
                                 places=[fluid.CPUPlace()])
loss = network(image, label)
executor = fluid.Executor(fluid.CPUPlace())
executor.run(fluid.default_startup_program())

for _ in range(EPOCH_NUM):
    for data in reader():
        executor.run(feed=data, fetch_list=[loss])
decorate_sample_list_generator(reader, places=None)

Set the data source of the PyReader object.

The provided reader should be a Python generator, which yields list(numpy.ndarray)-typed batched data.

places must be set when the PyReader object is iterable.

Parameters
  • reader (generator) – Python generator that yields list(numpy.ndarray)-typed batched data.

  • places (None|list(CUDAPlace)|list(CPUPlace)) – place list. Must be provided when PyReader is iterable.

Example

import paddle
import paddle.fluid as fluid
import numpy as np

EPOCH_NUM = 3
ITER_NUM = 15
BATCH_SIZE = 3

def network(image, label):
    # User-defined network, here is an example of softmax regression.
    predict = fluid.layers.fc(input=image, size=10, act='softmax')
    return fluid.layers.cross_entropy(input=predict, label=label)

def random_image_and_label_generator(height, width):
    def generator():
        for i in range(ITER_NUM):
            fake_image = np.random.uniform(low=0,
                                           high=255,
                                           size=[height, width])
            fake_label = np.ones([1])
            yield fake_image, fake_label
    return generator

image = fluid.data(name='image', shape=[None, 784, 784], dtype='float32')
label = fluid.data(name='label', shape=[None, 1], dtype='int64')
reader = fluid.io.PyReader(feed_list=[image, label], capacity=4, iterable=True)

user_defined_generator = random_image_and_label_generator(784, 784)
reader.decorate_sample_list_generator(
    paddle.batch(user_defined_generator, batch_size=BATCH_SIZE),
    fluid.core.CPUPlace())

loss = network(image, label)
executor = fluid.Executor(fluid.core.CPUPlace())
executor.run(fluid.default_startup_program())

for _ in range(EPOCH_NUM):
    for data in reader():
        executor.run(feed=data, fetch_list=[loss])
decorate_batch_generator(reader, places=None)

Set the data source of the PyReader object.

The provided reader should be a Python generator, which yields numpy.ndarray-typed or LoDTensor-typed batched data.

places must be set when the PyReader object is iterable.

Parameters
  • reader (generator) – Python generator that yields numpy.ndarray-typed or LoDTensor-typed batched data.

  • places (None|list(CUDAPlace)|list(CPUPlace)) – place list. Must be provided when PyReader is iterable.

Example

import paddle.fluid as fluid
import numpy as np

EPOCH_NUM = 3
ITER_NUM = 15
BATCH_SIZE = 3

def network(image, label):
    # User-defined network, here is an example of softmax regression.
    predict = fluid.layers.fc(input=image, size=10, act='softmax')
    return fluid.layers.cross_entropy(input=predict, label=label)

def random_image_and_label_generator(height, width):
    def generator():
        for i in range(ITER_NUM):
            batch_image = np.random.uniform(low=0,
                                            high=255,
                                            size=[BATCH_SIZE, height, width])
            batch_label = np.ones([BATCH_SIZE, 1])
            batch_image = batch_image.astype('float32')
            batch_label = batch_label.astype('int64')
            yield batch_image, batch_label
    return generator

image = fluid.data(name='image', shape=[None, 784, 784], dtype='float32')
label = fluid.data(name='label', shape=[None, 1], dtype='int64')
reader = fluid.io.PyReader(feed_list=[image, label], capacity=4, iterable=True)

user_defined_generator = random_image_and_label_generator(784, 784)
reader.decorate_batch_generator(user_defined_generator, fluid.CPUPlace())

loss = network(image, label)
executor = fluid.Executor(fluid.CPUPlace())
executor.run(fluid.default_startup_program())

for _ in range(EPOCH_NUM):
    for data in reader():
        executor.run(feed=data, fetch_list=[loss])
next()

Get the next item from the reader. This method should not be called by users directly; it exists to implement the Python 2.x iterator protocol inside the PaddlePaddle framework.

save_inference_model

paddle.fluid.io.save_inference_model(dirname, feeded_var_names, target_vars, executor, main_program=None, model_filename=None, params_filename=None, export_for_deployment=True, program_only=False)[source]

Prune the given main_program to build a new program specifically for inference, and then save it and all related parameters to the given dirname. If you just want to save the parameters of your trained model, please use save_params instead. You can refer to Save and Load a Model for more details.

Note

The dirname is used to specify the folder where the inference model structure and parameters are going to be saved. If you would like to save the parameters of the Program in separate files, set params_filename as None; if you would like to save all parameters of the Program in a single file, use params_filename to specify the file name.

Parameters
  • dirname (str) – The directory path to save the inference model.

  • feeded_var_names (list[str]) – Names of the variables that need to be fed data during inference.

  • target_vars (list[Variable]) – list of Variable. Variables from which we can get inference results.

  • executor (Executor) – The executor that saves the inference model. You can refer to Executor for more details.

  • main_program (Program, optional) – The original program, which will be pruned to build the inference model. If it is set to None, the global default main program will be used. Default: None.

  • model_filename (str, optional) – The name of the file to save the inference program itself. If it is set to None, the default filename __model__ will be used. Default: None.

  • params_filename (str, optional) – The name of the file to save all related parameters. If it is set to None, parameters will be saved in separate files. Default: None.

  • export_for_deployment (bool) – If True, programs are modified to only support direct inference deployment. Otherwise, more information will be stored for flexible optimization and re-training. Currently, only True is supported. Default: True.

  • program_only (bool, optional) – If True, only the inference program will be saved, and the parameters of the Program will not be saved. Default: False.

Returns

The name list of the fetch variables.

Return type

list

Raises
  • ValueError – If feeded_var_names is not a list of basestring, an exception is thrown.

  • ValueError – If target_vars is not a list of Variable, an exception is thrown.

Examples

import paddle.fluid as fluid

path = "./infer_model"

# User-defined network; here, a softmax regression example
image = fluid.data(name='img', shape=[None, 28, 28], dtype='float32')
label = fluid.data(name='label', shape=[None, 1], dtype='int64')
feeder = fluid.DataFeeder(feed_list=[image, label], place=fluid.CPUPlace())
predict = fluid.layers.fc(input=image, size=10, act='softmax')

loss = fluid.layers.cross_entropy(input=predict, label=label)
avg_loss = fluid.layers.mean(loss)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())

# Feed data and run the training process (omitted here)

# Save inference model. Note we don't save label and loss in this example
fluid.io.save_inference_model(dirname=path,
                              feeded_var_names=['img'],
                              target_vars=[predict],
                              executor=exe)

# In this example, save_inference_model will prune the default
# main program according to the network's input node (img) and output node (predict).
# The pruned inference program is going to be saved in the "./infer_model/__model__"
# and parameters are going to be saved in separate files under folder
# "./infer_model".

save_params

paddle.fluid.io.save_params(executor, dirname, main_program=None, filename=None)[source]

This operator saves all parameters from the main_program to the folder dirname or file filename. You can refer to Save and Load a Model for more details.

Use the dirname to specify the saving folder. If you would like to save parameters in separate files, set filename as None; if you would like to save all parameters in a single file, use filename to specify the file name.

Note

Some variables are not Parameters but are still necessary for training, such as the learning rate and the global step. So you cannot save and then resume training using save_params and load_params alone. Please use save_persistables and load_persistables instead.

If you want to save your model for the inference, please use the save_inference_model. You can refer to Save and Load a Model for more details.

Parameters
  • executor (Executor) – The executor used for saving parameters. You can refer to Executor for more details.

  • dirname (str) – The saving directory path.

  • main_program (Program, optional) – The program whose parameters will be saved. You can refer to Basic Concept for more details. If it is None, the default main program will be used. Default: None

  • filename (str, optional) – The file to save all parameters. If you prefer to save parameters in different files, set it to None. Default: None

Returns

None

Examples

import paddle.fluid as fluid

params_path = "./my_paddle_model"
image = fluid.data(name='img', shape=[None, 28, 28], dtype='float32')
label = fluid.data(name='label', shape=[None, 1], dtype='int64')
feeder = fluid.DataFeeder(feed_list=[image, label], place=fluid.CPUPlace())
predict = fluid.layers.fc(input=image, size=10, act='softmax')

loss = fluid.layers.cross_entropy(input=predict, label=label)
avg_loss = fluid.layers.mean(loss)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
fluid.io.save_params(executor=exe, dirname=params_path)
# The parameters, i.e., the weights and bias of the fc layer in the network,
# are going to be saved in separate files under the path "./my_paddle_model".

save_persistables

paddle.fluid.io.save_persistables(executor, dirname, main_program=None, filename=None)[source]

This operator saves all persistable variables from the main_program to the folder dirname or the file filename. You can refer to Save and Load a Model for more details.

The dirname is used to specify the folder where persistable variables are going to be saved. If you would like to save variables in separate files, set filename as None; if you would like to save all variables in a single file, use filename to specify the file name.

Parameters
  • executor (Executor) – The executor to run for saving persistable variables. You can refer to Executor for more details.

  • dirname (str) – The saving directory path.

  • main_program (Program, optional) – The program whose persistable variables will be saved. You can refer to Basic Concept for more details. If it is None, the default main program will be used. Default: None.

  • filename (str, optional) – The file to save all variables. If you prefer to save variables in different files, set it to None. Default: None.

Returns

None

Examples

import paddle.fluid as fluid

dir_path = "./my_paddle_model"
file_name = "persistables"
image = fluid.data(name='img', shape=[None, 28, 28], dtype='float32')
label = fluid.data(name='label', shape=[None, 1], dtype='int64')
feeder = fluid.DataFeeder(feed_list=[image, label], place=fluid.CPUPlace())

predict = fluid.layers.fc(input=image, size=10, act='softmax')
loss = fluid.layers.cross_entropy(input=predict, label=label)
avg_loss = fluid.layers.mean(loss)
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
fluid.io.save_persistables(executor=exe, dirname=dir_path, filename=file_name)
# The persistable variables, i.e., the weights and bias of the fc layer
# in the network, are going to be saved in the same file, named "persistables",
# under the path "./my_paddle_model".

save_vars

paddle.fluid.io.save_vars(executor, dirname, main_program=None, vars=None, predicate=None, filename=None)[source]

This API saves specific variables in the Program to files.

There are two ways to specify the variables to be saved: set the variables in a list and assign it to vars, or use the predicate function to select the variables for which predicate(variable) == True. The first way takes precedence.

The dirname is used to specify the folder where the variables will be saved. If you prefer to save variables in separate files in the dirname folder, do not set filename. If you prefer to save all variables in a single file, use filename to specify it.

Parameters
  • executor (Executor) – The executor to run for saving variables.

  • dirname (str) – The folder where to save variables.

  • main_program (Program, optional) – The program whose variables will be saved. If it is None, the default main program will be used automatically. Default: None

  • vars (list[Variable], optional) – The list contains all variables to be saved. Default: None

  • predicate (function, optional) – The function selects the variables that make predicate(variable) == True. Default: None

  • filename (str, optional) – If you prefer to save all variables in a single file, use filename to specify it. Otherwise, let filename be None. Default: None

Returns

None

Raises

TypeError – If main_program is not an instance of Program nor None.

Examples

import paddle.fluid as fluid

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    data = fluid.layers.data(name="img", shape=[64, 784], append_batch_size=False)
    w = fluid.layers.create_parameter(shape=[784, 200], dtype='float32', name='fc_w')
    b = fluid.layers.create_parameter(shape=[200], dtype='float32', name='fc_b')
    hidden_w = fluid.layers.matmul(x=data, y=w)
    hidden_b = fluid.layers.elementwise_add(hidden_w, b)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(startup_prog)

# The first usage: use `vars` to set the saved variables.
var_list = [w, b]
path = "./my_paddle_vars"
fluid.io.save_vars(executor=exe, dirname=path, vars=var_list,
                filename="vars_file")
# w and b will be saved in a single file named "vars_file".

# The second usage: use `predicate` to select the saved variable.
def name_has_fc(var):
    res = "fc" in var.name
    return res
param_path = "./my_paddle_model"
fluid.io.save_vars(executor=exe, dirname=param_path, main_program=main_prog,
                   vars=None, predicate=name_has_fc)
# All variables whose names contain "fc" are saved.