Executor

class paddle.fluid.executor.Executor(place)[source]

An Executor in Python. It supports single/multiple-GPU running and single/multiple-CPU running. The device must be specified when the Executor is constructed.

Parameters

place (fluid.CPUPlace()|fluid.CUDAPlace(n)) – The device on which the executor runs.

Returns

Executor

Examples

import paddle.fluid as fluid
import paddle.fluid.compiler as compiler
import numpy
import os

use_cuda = True
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
exe = fluid.Executor(place)

train_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(train_program, startup_program):
    data = fluid.data(name='X', shape=[None, 1], dtype='float32')
    hidden = fluid.layers.fc(input=data, size=10)
    loss = fluid.layers.mean(hidden)
    fluid.optimizer.SGD(learning_rate=0.01).minimize(loss)

# Run the startup program once and only once.
# No need to optimize/compile the startup program.
startup_program.random_seed = 1
exe.run(startup_program)

# Run the main program directly without compile.
x = numpy.random.random(size=(10, 1)).astype('float32')
loss_data, = exe.run(train_program,
                     feed={"X": x},
                     fetch_list=[loss.name])

# Or, compile the program and then run it. See `CompiledProgram`
# for more detail.
# NOTE: If you run the program on CPU, you need
# to set CPU_NUM; otherwise, fluid will use the
# number of logical cores as CPU_NUM. In that
# case, the batch size of the input must be
# greater than CPU_NUM, or the process will fail
# with an exception.
if not use_cuda:
    os.environ['CPU_NUM'] = str(2)

compiled_prog = compiler.CompiledProgram(
    train_program).with_data_parallel(
    loss_name=loss.name)
loss_data, = exe.run(compiled_prog,
                     feed={"X": x},
                     fetch_list=[loss.name])
close()

Close the executor. This interface is used for distributed training (PServer mode). The executor cannot be used after this interface is called, because calling it releases the resources associated with the current Trainer.

Returns

None

Examples

import paddle.fluid as fluid

cpu = fluid.CPUPlace()
exe = fluid.Executor(cpu)
# execute training or testing
exe.close()
run(program=None, feed=None, fetch_list=None, feed_var_name='feed', fetch_var_name='fetch', scope=None, return_numpy=True, use_program_cache=False)

Run the specified Program or CompiledProgram. Note that the executor executes all the operators in the Program or CompiledProgram without pruning any operators according to fetch_list. You can also specify the scope in which the Variables are stored during execution; if the scope is not set, the executor uses the global scope, i.e. fluid.global_scope().

Parameters
  • program (Program|CompiledProgram) – This parameter represents the Program or CompiledProgram to be executed. If this parameter is None, the program is set to fluid.default_main_program(). The default is None.

  • feed (list|dict) – This parameter represents the input variables of the model. For single-card training, feed is of dict type; for multi-card training, feed can be of dict or list type. If it is a dict, the data in feed is split and sent evenly to the multiple devices (CPU/GPU), so the number of samples in the current mini-batch must be greater than the number of places; if it is a list, the entries are copied directly to each device, so the length of the list must equal the number of places. The default is None.

  • fetch_list (list) – This parameter represents the variables that need to be returned after the model runs. The default is None.

  • feed_var_name (str) – This parameter represents the name of the input variable of the feed operator. The default is “feed”.

  • fetch_var_name (str) – This parameter represents the name of the output variable of the fetch operator. The default is “fetch”.

  • scope (Scope) – the scope used to run this program; you can switch it to a different scope. The default is fluid.global_scope()

  • return_numpy (bool) – This parameter indicates whether to convert the fetched variables (the variables specified in fetch_list) to numpy.ndarray. If it is False, the return value is a list of LoDTensor. The default is True.

  • use_program_cache (bool) – This parameter indicates whether the input Program is cached. If it is True, the model may run faster when the following hold: the input program is a fluid.Program, and the arguments of this interface (the program, the feed variable names, and the fetch_list variables) remain unchanged between runs. The default is False.
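As a rough illustration of why the program, feed variable names, and fetch_list must all stay fixed for use_program_cache to pay off, the behavior can be sketched as memoization keyed on exactly those three things. The helper below is hypothetical, not Paddle's actual implementation:

```python
# Hypothetical sketch (NOT Paddle's real caching code): a prepared
# execution context is memoized under a key built from the program,
# the feed variable names, and the fetch names, so changing any of
# the three forces a fresh, expensive preparation step.
_prepared = {}

def run_with_cache(program_id, feed_names, fetch_names, prepare, execute):
    key = (program_id, tuple(feed_names), tuple(fetch_names))
    if key not in _prepared:
        _prepared[key] = prepare()  # done once per distinct key
    return execute(_prepared[key])
```

With the same key, repeated calls reuse the prepared context; changing the fetch list triggers a second preparation.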

Returns

The fetched result list.

Return type

List

Notes

  1. For multi-card running with a dict-type feed parameter, the input data is sent evenly to the different cards. For example, when two GPUs run a model with 3 input samples, [0, 1, 2], GPU0 receives 1 sample, [0], and GPU1 receives 2 samples, [1, 2]. If the number of samples is less than the number of devices, the program throws an exception; therefore, make sure that the last batch of the data set contains more samples than the number of CPU cores or GPU cards, and discard that batch otherwise.

  2. If more than one CPU core or GPU card is available, the fetch results for the same variables (the variables in fetch_list) on the different devices are concatenated along dimension 0.
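The splitting and merging described in the two notes can be sketched in plain numpy. This is a hypothetical illustration of the documented behavior, not Paddle's internal code:

```python
import numpy as np

def split_feed(batch, num_devices):
    # Note 1: each device gets batch_size // num_devices samples;
    # the last device also takes the remainder, so 3 samples on
    # 2 devices become [0] and [1, 2].
    per_dev = len(batch) // num_devices
    if per_dev == 0:
        # Fewer samples than devices: the executor raises an
        # exception in this case as well.
        raise ValueError("batch size must be >= number of devices")
    pieces = [batch[i * per_dev:(i + 1) * per_dev]
              for i in range(num_devices - 1)]
    pieces.append(batch[(num_devices - 1) * per_dev:])
    return pieces

def merge_fetch(per_device_results):
    # Note 2: per-device results for the same fetched variable
    # are concatenated along dimension 0.
    return np.concatenate(per_device_results, axis=0)
```

For a batch of 3 samples on 2 devices, `split_feed` reproduces the [0] / [1, 2] split from Note 1, and `merge_fetch` reassembles the per-device fetch results.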

Examples

import paddle.fluid as fluid
import numpy

# First create the Executor.
place = fluid.CPUPlace() # fluid.CUDAPlace(0)
exe = fluid.Executor(place)

data = fluid.data(name='X', shape=[None, 1], dtype='float32')
hidden = fluid.layers.fc(input=data, size=10)
loss = fluid.layers.mean(hidden)
adam = fluid.optimizer.Adam()
adam.minimize(loss)

# Run the startup program once and only once.
exe.run(fluid.default_startup_program())

x = numpy.random.random(size=(10, 1)).astype('float32')
outs = exe.run(feed={'X': x},
               fetch_list=[loss.name])
infer_from_dataset(program=None, dataset=None, scope=None, thread=0, debug=False, fetch_list=None, fetch_info=None, print_period=100, fetch_handler=None)

Infer from a pre-defined Dataset. Dataset is defined in paddle.fluid.dataset. Given a program (either a Program or a CompiledProgram), infer_from_dataset will consume all data samples in the dataset. The scope can be given by the user; by default, the scope is global_scope(). The number of threads used is the minimum of the thread number set in the Dataset and the value of the thread argument of this interface. debug can be set so that the executor displays the run time of every operator and the throughput of the current inference task.

infer_from_dataset is almost identical to train_from_dataset, except that in distributed training, pushing gradients is disabled in infer_from_dataset. infer_from_dataset() can therefore be used easily for multi-threaded evaluation.
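The thread-number rule described above can be sketched with a small helper (hypothetical, not part of the fluid API):

```python
def effective_thread_num(dataset_thread_num, thread):
    # Mirrors the documented rule: thread == 0 means "use the
    # Dataset's thread number"; otherwise the minimum of the
    # Dataset's thread number and the thread argument is used.
    if thread == 0:
        return dataset_thread_num
    return min(dataset_thread_num, thread)
```

So a Dataset configured with 4 threads runs with 4 threads when thread=0, but with 2 threads when thread=2.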

Parameters
  • program (Program|CompiledProgram) – the program that needs to be run, if not provided, then default_main_program (not compiled) will be used.

  • dataset (paddle.fluid.Dataset) – dataset created outside this function, a user should provide a well-defined dataset before calling this function. Please check the document of Dataset if needed. default is None

  • scope (Scope) – the scope used to run this program, you can switch it to different scope for each run. default is global_scope

  • thread (int) – number of thread a user wants to run in this function. Default is 0, which means using thread num of dataset

  • debug (bool) – whether to run infer_from_dataset in debug mode, in which the executor displays operator run times and throughput; default is False

  • fetch_list (Variable List) – fetch variable list, each variable will be printed during training, default is None

  • fetch_info (String List) – print information for each variable, default is None

  • print_period (int) – the number of mini-batches for each print, default is 100

  • fetch_handler (FetchHandler) – a user-defined class for fetching output.

Returns

None

Examples

import paddle.fluid as fluid

place = fluid.CPUPlace() # you can set place = fluid.CUDAPlace(0) to use gpu
exe = fluid.Executor(place)
x = fluid.layers.data(name="x", shape=[10, 10], dtype="int64")
y = fluid.layers.data(name="y", shape=[1], dtype="int64", lod_level=1)
dataset = fluid.DatasetFactory().create_dataset()
dataset.set_use_var([x, y])
dataset.set_thread(1)
filelist = [] # you should set your own filelist, e.g. filelist = ["dataA.txt"]
dataset.set_filelist(filelist)
exe.run(fluid.default_startup_program())
exe.infer_from_dataset(program=fluid.default_main_program(),
                       dataset=dataset)
train_from_dataset(program=None, dataset=None, scope=None, thread=0, debug=False, fetch_list=None, fetch_info=None, print_period=100, fetch_handler=None)

Train from a pre-defined Dataset. Dataset is defined in paddle.fluid.dataset. Given a program (either a Program or a CompiledProgram), train_from_dataset will consume all data samples in the dataset. The scope can be given by the user; by default, the scope is global_scope(). The number of threads used is the minimum of the thread number set in the Dataset and the value of the thread argument of this interface. debug can be set so that the executor displays the run time of every operator and the throughput of the current training task.

Note: train_from_dataset will destroy all resources created within executor for each run.

Parameters
  • program (Program|CompiledProgram) – the program that needs to be run, if not provided, then default_main_program (not compiled) will be used.

  • dataset (paddle.fluid.Dataset) – dataset created outside this function, a user should provide a well-defined dataset before calling this function. Please check the document of Dataset if needed.

  • scope (Scope) – the scope used to run this program, you can switch it to different scope for each run. default is global_scope

  • thread (int) – number of thread a user wants to run in this function. Default is 0, which means using thread num of dataset

  • debug (bool) – whether to run train_from_dataset in debug mode, in which the executor displays operator run times and throughput; default is False

  • fetch_list (Variable List) – fetch variable list, each variable will be printed during training

  • fetch_info (String List) – print information for each variable; its length should equal that of fetch_list

  • print_period (int) – the number of mini-batches for each print, default is 100

  • fetch_handler (FetchHandler) – a user-defined class for fetching output.

Returns

None

Examples

import paddle.fluid as fluid

place = fluid.CPUPlace() # you can set place = fluid.CUDAPlace(0) to use gpu
exe = fluid.Executor(place)
x = fluid.layers.data(name="x", shape=[10, 10], dtype="int64")
y = fluid.layers.data(name="y", shape=[1], dtype="int64", lod_level=1)
dataset = fluid.DatasetFactory().create_dataset()
dataset.set_use_var([x, y])
dataset.set_thread(1)
filelist = [] # you should set your own filelist, e.g. filelist = ["dataA.txt"]
dataset.set_filelist(filelist)
exe.run(fluid.default_startup_program())
exe.train_from_dataset(program=fluid.default_main_program(),
                       dataset=dataset)