Program

class paddle.fluid.Program[source]

Create a Python Program. A Program contains at least one Block; when control flow ops such as conditional_block or While are included, it will contain nested blocks.

Please refer to framework.proto for details.
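
As an illustration of nested blocks, here is a minimal sketch (the loop bound and variable names are illustrative only) that builds a While loop on the default main program; the loop body lives in a nested Block:

import paddle.fluid as fluid

i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
cond = fluid.layers.less_than(x=i, y=limit)
while_op = fluid.layers.While(cond=cond)
with while_op.block():
    # Ops added here go into a nested Block of the main program.
    i = fluid.layers.increment(x=i, value=1, in_place=True)
    fluid.layers.less_than(x=i, y=limit, cond=cond)

print(fluid.default_main_program().num_blocks)  # 2: global Block + While body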

A set of Programs usually contains a startup program and a main program. The startup program holds initialization work, e.g., initializing Parameters, while the main program holds the network structure and variables used for training.

A set of Programs can be used for training or testing. The training program contains everything needed to build the training network, while the testing program prunes content that is irrelevant to testing, e.g., backward ops and their variables.

Notes:

We have default_startup_program and default_main_program by default; the pair shares parameters. The default_startup_program runs only once to initialize parameters, while the default_main_program runs for every mini-batch and updates the weights.
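
A minimal sketch of this run-once / run-per-batch split (the tiny fc network and the random feed data are illustrative, not part of the API):

import numpy as np
import paddle.fluid as fluid

# Build a tiny network on the default programs.
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.fc(input=x, size=1)

exe = fluid.Executor(fluid.CPUPlace())
# Run the startup program once to initialize parameters.
exe.run(fluid.default_startup_program())
# Run the main program for every mini-batch.
out, = exe.run(fluid.default_main_program(),
               feed={'x': np.random.random((4, 13)).astype('float32')},
               fetch_list=[y])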

Returns

An empty Program.

Return type

Program

Examples

import paddle.fluid as fluid

main_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(main_program=main_program, startup_program=startup_program):
    x = fluid.layers.data(name="x", shape=[-1, 784], dtype='float32')
    y = fluid.layers.data(name="y", shape=[-1, 1], dtype='int32')
    z = fluid.layers.fc(name="fc", input=x, size=10, act="relu")

print("main program is: {}".format(main_program))
print("start up program is: {}".format(startup_program))
to_string(throw_on_error, with_details=False)

Convert the Program to a debug string.

Parameters
  • throw_on_error (bool) – raise ValueError when any required field is not set.

  • with_details (bool) – True if more details about variables and parameters (e.g., trainable, optimize_attr) need to be printed.

Returns

The debug string describing the current Program.

Return type

str

Raises

ValueError – If any required field is not set and throw_on_error is True.

Examples

import paddle.fluid as fluid

prog = fluid.default_main_program()
prog_string = prog.to_string(throw_on_error=True, with_details=False)
print("program string without detial: {}".format(prog_string))
prog_string_with_detail = prog.to_string(throw_on_error=True, with_details=True)
print("program string with detial: {}".format(prog_string_with_detail))
clone(for_test=False)
Notes:

1. The Program.clone() method DOES NOT clone DataLoader.

2. We recommend using clone before Optimizer.minimize.

3. This API has no effect in Dygraph mode

Create a new Program containing only the forward content of the original one when for_test=True; create a new Program identical to the original one when for_test=False.

Some operators, e.g., batch_norm, behave differently between training and testing. They have an attribute, is_test, to control this behavior. This method sets their is_test attribute to True when for_test=True.

  • Set for_test to False when we want to clone the program for training.

  • Set for_test to True when we want to clone the program for testing. The backward and optimization parts of the program are pruned when you use clone after Optimizer.minimize, but we still recommend using clone before Optimizer.minimize.

For example:
test_program = fluid.default_main_program().clone(for_test=True)
# Here we use clone before Momentum; `loss` is assumed to have been
# built on the default main program beforehand.
optimizer = fluid.optimizer.Momentum(learning_rate=0.01, momentum=0.9)
optimizer.minimize(loss)
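
To verify that clone(for_test=True) flips the attribute, here is a minimal sketch (the one-layer batch_norm network is illustrative only):

import paddle.fluid as fluid

prog = fluid.Program()
with fluid.program_guard(prog):
    x = fluid.layers.data(name='x', shape=[32], dtype='float32')
    out = fluid.layers.batch_norm(input=x)

test_prog = prog.clone(for_test=True)
for op in test_prog.global_block().ops:
    if op.type == 'batch_norm':
        print(op.attr('is_test'))  # expected: True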
Parameters

for_test (bool) – True to change the is_test attribute of operators to True.

Returns

A new Program containing only the forward content of the original one when for_test=True; a new Program identical to the original one when for_test=False.

Return type

Program

Examples:

Notes: The order of ops and variables in the Program may differ after clone; this does not affect training or testing. The following examples use a simple helper, print_prog(program), that prints Program descs in a deterministic order so you can verify that you get the same print result after clone:

import paddle.fluid as fluid
import six


def print_prog(prog):
    for name, value in sorted(six.iteritems(prog.block(0).vars)):
        print(value)
    for op in prog.block(0).ops:
        print("op type is {}".format(op.type))
        print("op inputs are {}".format(op.input_arg_names))
        print("op outputs are {}".format(op.output_arg_names))
        for key, value in sorted(six.iteritems(op.all_attrs())):
            if key not in ['op_callstack', 'op_role_var']:
                print(" [ attrs: {}:   {} ]".format(key, value))
  1. To clone a test program, the sample code is:
    import paddle.fluid as fluid
    import six
    
    def print_prog(prog):
        for name, value in sorted(six.iteritems(prog.block(0).vars)):
            print(value)
        for op in prog.block(0).ops:
            print("op type is {}".format(op.type))
            print("op inputs are {}".format(op.input_arg_names))
            print("op outputs are {}".format(op.output_arg_names))
            for key, value in sorted(six.iteritems(op.all_attrs())):
                if key not in ['op_callstack', 'op_role_var']:
                    print(" [ attrs: {}:   {} ]".format(key, value))
    
    train_program = fluid.Program()
    startup_program = fluid.Program()
    
    # startup_program is used to do some parameter init work,
    # and main program is used to hold the network
    with fluid.program_guard(train_program, startup_program):
        with fluid.unique_name.guard():
            img = fluid.layers.data(name='image', shape=[784])
            hidden = fluid.layers.fc(input=img, size=200, act='relu')
            hidden = fluid.layers.dropout(hidden, dropout_prob=0.5)
            loss = fluid.layers.cross_entropy(
                input=fluid.layers.fc(hidden, size=10, act='softmax'),
                label=fluid.layers.data(name='label', shape=[1], dtype='int64'))
            avg_loss = fluid.layers.mean(loss)
            test_program = train_program.clone(for_test=False)
    print_prog(test_program)
    
    # Because parameters are shared between train and test, we must use the
    # startup program of train rather than the test startup program, which
    # contains nothing since it is a brand-new program.

    # In Paddle Fluid, weights are shared by using the same Variable name: in
    # the train and test programs all parameters have the same names, which is
    # what makes the two programs share parameters.
    
    with fluid.program_guard(train_program, startup_program):
        with fluid.unique_name.guard():
            sgd = fluid.optimizer.SGD(learning_rate=1e-3)
            sgd.minimize(avg_loss)
    
  2. The clone method can be avoided if you create the program for training and the program for testing separately:
    import paddle.fluid as fluid
    import six
    
    def print_prog(prog):
        for name, value in sorted(six.iteritems(prog.block(0).vars)):
            print(value)
        for op in prog.block(0).ops:
            print("op type is {}".format(op.type))
            print("op inputs are {}".format(op.input_arg_names))
            print("op outputs are {}".format(op.output_arg_names))
            for key, value in sorted(six.iteritems(op.all_attrs())):
                if key not in ['op_callstack', 'op_role_var']:
                    print(" [ attrs: {}:   {} ]".format(key, value))
    def network(is_test):
        img = fluid.layers.data(name='image', shape=[784])
        hidden = fluid.layers.fc(input=img, size=200, act='relu')
        hidden = fluid.layers.dropout(hidden, dropout_prob=0.5)
        loss = fluid.layers.cross_entropy(
            input=fluid.layers.fc(hidden, size=10, act='softmax'),
            label=fluid.layers.data(name='label', shape=[1], dtype='int64'))
        avg_loss = fluid.layers.mean(loss)
        return avg_loss
    
    
    train_program_2 = fluid.Program()
    startup_program_2 = fluid.Program()
    test_program_2 = fluid.Program()
    with fluid.program_guard(train_program_2, startup_program_2):
        with fluid.unique_name.guard():
            avg_loss = network(is_test=False)
            sgd = fluid.optimizer.SGD(learning_rate=1e-3)
            sgd.minimize(avg_loss)
    # the test startup program is not used.
    with fluid.program_guard(test_program_2, fluid.Program()):
        with fluid.unique_name.guard():
            loss = network(is_test=True)
    print(test_program_2)
    

The two code snippets above generate and print the same programs.

static parse_from_string(binary_str)
Notes:

1. All information about parameters will be lost after serialization

2. This API has no effect in Dygraph mode

Deserialize a Program from a protobuf binary string. This method is commonly used for saving and loading models.

Parameters

binary_str (str) – the binary protobuf string.

Returns

A deserialized Program.

Return type

Program

Examples

import paddle.fluid as fluid

startup_prog = fluid.Program()
main_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    x = fluid.layers.data(
        name='X', shape=[1000, 784], dtype='float32', append_batch_size=False)

    y = fluid.layers.data(
        name='Y', shape=[784, 100], dtype='float32', append_batch_size=False)

    z = fluid.layers.mul(x=x, y=y)

    binary_str = main_prog.desc.serialize_to_string()
    prog_restored = fluid.Program.parse_from_string(binary_str)

    print(main_prog)
    print(prog_restored)
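
As note 1 above says, parameter information is lost after serialization. A sketch illustrating this (assuming, as the note implies, that Parameter instances come back as plain Variables after parsing):

import paddle.fluid as fluid

prog = fluid.Program()
with fluid.program_guard(prog):
    x = fluid.layers.data(name='x', shape=[16], dtype='float32')
    y = fluid.layers.fc(input=x, size=4)  # creates Parameter variables

restored = fluid.Program.parse_from_string(prog.desc.serialize_to_string())
print(any(type(v).__name__ == 'Parameter' for v in prog.list_vars()))      # True
print(any(type(v).__name__ == 'Parameter' for v in restored.list_vars()))  # expected: False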
num_blocks

The number of Blocks in this Program.

Notes: This API has no effect in Dygraph mode

Returns

The number of Blocks in the current Program

Return type

int (platform-dependent size)

Examples

import paddle.fluid as fluid

prog = fluid.default_main_program()
num_blocks = prog.num_blocks
print(num_blocks)
random_seed

The default random seed for random operators in this Program. 0 means the seed is taken from a random device.

Notes: It must be set before any random operators are added.

Returns

Random seed in current Program

Return type

int64

Examples

import paddle.fluid as fluid

prog = fluid.default_main_program()
random_seed = prog.random_seed
x_var = fluid.layers.data(name="X", shape=[3,3], dtype="float32", append_batch_size=False)

# Here we need to set random seed before we use fluid.layers.dropout
print(random_seed)
prog.random_seed = 1
z_var = fluid.layers.dropout(x_var, 0.7)

print(prog.random_seed)
global_block()
Notes:

This API has no effect in Dygraph mode

Get the first Block of this Program.

Returns

The first Block of this Program.

Return type

Block

Examples

import paddle.fluid as fluid

prog = fluid.default_main_program()
gb_block = prog.global_block()
print(gb_block)
block(index)
Notes:

This API has no effect in Dygraph mode

Get the Block at the specified index in this Program.

Parameters

index (int) – The index of the Block to get.

Returns

The Block at the specified index

Return type

Block

Examples

import paddle.fluid as fluid

prog = fluid.default_main_program()
block_0 = prog.block(0)
print(block_0)
current_block()
Notes:

This API has no effect in Dygraph mode

Get the current Block. The current Block is the Block to which new operators are appended.

Returns

The current Block

Return type

Block

Examples

import paddle.fluid as fluid

prog = fluid.default_main_program()
current_blk = prog.current_block()
print(current_blk)
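
Inside a control-flow block, current_block() returns the nested Block rather than the global one. A sketch (the While loop is illustrative; Block.idx gives a Block's index):

import paddle.fluid as fluid

prog = fluid.Program()
with fluid.program_guard(prog):
    i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
    ten = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
    cond = fluid.layers.less_than(x=i, y=ten)
    while_op = fluid.layers.While(cond=cond)
    with while_op.block():
        # While the body is open, current_block() is the nested Block.
        print(prog.current_block().idx)  # expected: 1
        i = fluid.layers.increment(x=i, value=1, in_place=True)
        fluid.layers.less_than(x=i, y=ten, cond=cond)
    print(prog.current_block().idx)  # back in the global Block: 0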
list_vars()

Get all Variables from this Program. An iterable object is returned.

Returns

A generator that yields every Variable in this Program.

Return type

iterable of Variable

Examples

import paddle.fluid as fluid

prog = fluid.default_main_program()
img = fluid.layers.data(name='img', shape=[1,28,28], dtype='float32')
label = fluid.layers.data(name='label', shape=[128,1], dtype='int64')
for var in prog.list_vars():
    print(var)