TracedLayer

class paddle.fluid.dygraph.TracedLayer(program, parameters, feed_names, fetch_names)[source]

TracedLayer is used to convert a forward dygraph model to a static graph model. This is mainly used to save the dygraph model for online inference using C++. In addition, users can also run inference in Python using the converted static graph model, which usually performs better than the original dygraph model.

TracedLayer runs the static graph model using Executor and CompiledProgram internally. The static graph model shares parameters with the dygraph model.

TracedLayer objects should not be created via the constructor; use the static method TracedLayer.trace(layer, inputs) instead.

TracedLayer can only convert data-independent dygraph models into static graph models, i.e., the control flow of the dygraph model must not depend on the values or shapes of the input tensors.
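To see why data-dependent models cannot be traced, consider that a tracer records only the operations actually executed for the sample input and replays them later. The toy tracer below (an illustration only, not Paddle's implementation; all names are hypothetical) shows how a value-dependent branch gets baked in at trace time and then replayed incorrectly for other inputs:

```python
# Toy tracer (illustration only, NOT Paddle's implementation): it records
# the operations executed for the sample input and replays them verbatim.
class ToyTracer:
    def __init__(self):
        self.ops = []

    def trace(self, fn, x):
        self.ops = []
        # Run the model once on the sample input, recording each op.
        return fn(x, record=self.ops.append)

    def replay(self, x):
        # Replay the recorded ops; control flow is NOT re-evaluated.
        for op in self.ops:
            x = op(x)
        return x

def data_dependent_model(x, record):
    # The branch depends on the *value* of x, so only the branch taken
    # at trace time is recorded.
    if sum(x) > 0:
        op = lambda v: [2 * e for e in v]   # double each element
    else:
        op = lambda v: [-e for e in v]      # negate each element
    record(op)
    return op(x)

tracer = ToyTracer()
tracer.trace(data_dependent_model, [1.0, 2.0])  # positive sum: doubling is recorded
print(tracer.replay([1.0, 2.0]))    # [2.0, 4.0] - matches eager execution
print(tracer.replay([-1.0, -2.0]))  # [-2.0, -4.0] - WRONG: eager would negate
```

A data-independent model executes the same operations for every input, so the recorded trace is always faithful; that is the restriction TracedLayer imposes.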

static trace(layer, inputs)

This method is the only allowed way to create a TracedLayer object. It calls layer(*inputs) to run the dygraph model and converts it into a static graph model.

Parameters
  • layer (dygraph.Layer) – the layer object to be traced.

  • inputs (list(Variable)) – the input variables of the layer object.

Returns

A tuple of two items: the first is the output of layer(*inputs), and the second is the created TracedLayer object.

Return type

tuple

Examples

import paddle.fluid as fluid
from paddle.fluid.dygraph import Linear, to_variable, TracedLayer
import numpy as np

class ExampleLayer(fluid.dygraph.Layer):
    def __init__(self):
        super(ExampleLayer, self).__init__()
        self._fc = Linear(3, 10)

    def forward(self, input):
        return self._fc(input)

with fluid.dygraph.guard():
    layer = ExampleLayer()
    in_np = np.random.random([2, 3]).astype('float32')
    in_var = to_variable(in_np)
    out_dygraph, static_layer = TracedLayer.trace(layer, inputs=[in_var])

    # run the static graph model using Executor inside
    out_static_graph = static_layer([in_var])

    print(len(out_static_graph)) # 1
    print(out_static_graph[0].shape) # (2, 10)

    # save the static graph model for inference
    static_layer.save_inference_model(dirname='./saved_infer_model')

set_strategy(build_strategy=None, exec_strategy=None)

Set the strategies used when running the static graph model.

Parameters
  • build_strategy (BuildStrategy, optional) – build strategy of CompiledProgram inside TracedLayer. Default None.

  • exec_strategy (ExecutionStrategy, optional) – execution strategy of CompiledProgram inside TracedLayer. Default None.

Returns

None

Examples

import paddle.fluid as fluid
from paddle.fluid.dygraph import Linear, to_variable, TracedLayer
import numpy as np

class ExampleLayer(fluid.dygraph.Layer):
    def __init__(self):
        super(ExampleLayer, self).__init__()
        self._fc = Linear(3, 10)

    def forward(self, input):
        return self._fc(input)

with fluid.dygraph.guard():
    layer = ExampleLayer()
    in_np = np.random.random([2, 3]).astype('float32')
    in_var = to_variable(in_np)

    out_dygraph, static_layer = TracedLayer.trace(layer, inputs=[in_var])

    build_strategy = fluid.BuildStrategy()
    build_strategy.enable_inplace = True

    exec_strategy = fluid.ExecutionStrategy()
    exec_strategy.num_threads = 2

    static_layer.set_strategy(build_strategy=build_strategy, exec_strategy=exec_strategy)
    out_static_graph = static_layer([in_var])

save_inference_model(dirname, feed=None, fetch=None)

Save the TracedLayer as an inference model, which can then be loaded by the C++ inference APIs.

Parameters
  • dirname (str) – the directory to save the inference model.

  • feed (list[int], optional) – the indices of the input variables to feed into the saved inference model. If None, all input variables of the TracedLayer object are used as inputs of the saved inference model. Default None.

  • fetch (list[int], optional) – the indices of the output variables to fetch from the saved inference model. If None, all output variables of the TracedLayer object are used as outputs of the saved inference model. Default None.
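The feed and fetch lists select variables by position among the inputs and outputs recorded at trace time. A plain-Python sketch of that selection rule (hypothetical helper and names, not part of the Paddle API):

```python
# Illustration only: how index lists such as feed=[0] pick variables by
# position, and how None keeps everything (mirroring the defaults above).
def select_by_index(variables, indices=None):
    if indices is None:
        return list(variables)
    return [variables[i] for i in indices]

all_inputs = ['image', 'label']   # hypothetical traced input variables
all_outputs = ['logits', 'loss']  # hypothetical traced output variables

print(select_by_index(all_inputs, [0]))  # ['image']
print(select_by_index(all_outputs))      # ['logits', 'loss']
```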

Returns

None

Examples

import paddle.fluid as fluid
from paddle.fluid.dygraph import Linear, to_variable, TracedLayer
import numpy as np

class ExampleLayer(fluid.dygraph.Layer):
    def __init__(self):
        super(ExampleLayer, self).__init__()
        self._fc = Linear(3, 10)

    def forward(self, input):
        return self._fc(input)

save_dirname = './saved_infer_model'
in_np = np.random.random([2, 3]).astype('float32')

with fluid.dygraph.guard():
    layer = ExampleLayer()
    in_var = to_variable(in_np)
    out_dygraph, static_layer = TracedLayer.trace(layer, inputs=[in_var])
    static_layer.save_inference_model(save_dirname, feed=[0], fetch=[0])

place = fluid.CPUPlace()
exe = fluid.Executor(place)
program, feed_vars, fetch_vars = fluid.io.load_inference_model(save_dirname, exe)

fetch, = exe.run(program, feed={feed_vars[0]: in_np}, fetch_list=fetch_vars)
print(fetch.shape) # (2, 10)