py_func

paddle.fluid.layers.py_func(func, x, out, backward_func=None, skip_vars_in_backward_input=None)[source]

This API is used to register a customized OP with Fluid. The forward function of the registered OP is func and the backward function is backward_func. Paddle calls func at forward runtime and calls backward_func at backward runtime (if backward_func is not None). x is the input of func, whose type must be LoDTensor; out is the output of func, whose type can be either LoDTensor or NumPy array.

The inputs of the backward function backward_func are x, out, and the gradient of out. If some variables of out have no gradient, the corresponding input variables of backward_func are None. If some variables of x need no gradient, the user should return None for them in backward_func, as the sketch below shows.
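
The following is a minimal sketch of this convention for a hypothetical element-wise addition OP whose second input needs no gradient (the function and variable names here are illustrative, not part of the API):

import numpy as np

# Forward: y = x1 + x2, where x2 needs no gradient.
def add_forward(x1, x2):
    return np.array(x1) + np.array(x2)

# backward_func receives the forward inputs, the forward output, and the
# gradient of the output, in that order; it must return one gradient per
# forward input and use None for inputs that need no gradient.
def add_backward(x1, x2, y, dy):
    dx1 = np.array(dy)   # d(x1 + x2) / dx1 = 1, so dx1 equals dy
    dx2 = None           # no gradient is propagated to x2
    return dx1, dx2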

The data type and shape of out must be set correctly before this API is called; the data type and shape of the gradients of out and x are then inferred automatically.
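
Because out is not inferred, it has to be created explicitly before py_func is called. A minimal sketch of one common pattern, also used in the example below (the variable name, dtype, and shape are illustrative):

import paddle.fluid as fluid

# Create the output variable in the current block with an explicit
# dtype and shape; py_func fills it at forward runtime.
out_var = fluid.default_main_program().current_block().create_var(
    name='py_func_out', dtype='float32', shape=[-1, 200])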

This API can also be used to debug the neural network by setting func to a function that only prints its input variables (see debug_func in the example below).

Parameters
  • func (callable) – The forward function of the registered OP. When the network is running, the forward output out will be calculated according to this function and the forward input x.

  • x (Variable) – The input of the forward function func. Its type can be Variable | tuple[Variable] | list[Variable], in which Variable is LoDTensor.

  • out (Variable) – The output of the forward function func. Its type can be Variable | tuple[Variable] | list[Variable], in which Variable can be either LoDTensor or NumPy array. Since Paddle cannot automatically infer the shape and data type of out, out must be created in advance.

  • backward_func (callable, optional) – The backward function of the registered OP. Its default value is None, meaning there is no backward computation. If it is not None, backward_func is called to compute the gradient of x during the backward pass of the network.

  • skip_vars_in_backward_input (Variable, optional) – Used to limit the input variables of backward_func; it can be a single Variable, tuple[Variable], or list[Variable], and each variable in it must belong to either x or out. The default value is None, meaning no variables are removed from x and out. If it is not None, these variables will not be passed to backward_func (see the sketch below). This parameter is only useful when backward_func is not None.
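
For instance, when the forward input x is skipped (as in the tanh example below), backward_func receives only the forward output and its gradient. A minimal sketch of the two resulting signatures for a tanh OP (the function names are illustrative):

import numpy as np

# Without skipping anything, backward_func sees (x, out, dout).
def tanh_grad_full(x, y, dy):
    return np.array(dy) * (1.0 - np.square(np.array(y)))

# With skip_vars_in_backward_input=x, backward_func sees only (out, dout),
# which is sufficient here because d tanh(x) / dx = 1 - tanh(x) ** 2.
def tanh_grad_skip_x(y, dy):
    return np.array(dy) * (1.0 - np.square(np.array(y)))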

Returns

The output out of the forward function func.

Return type

Variable

Examples

import paddle.fluid as fluid
import numpy as np
import six

# Helper to create the pre-allocated output variable required by py_func
def create_tmp_var(name, dtype, shape):
    return fluid.default_main_program().current_block().create_var(
        name=name, dtype=dtype, shape=shape)

# Paddle already provides tanh as a C++ op; it is defined here in Python
# only as an example of how to use py_func
def tanh(x):
    return np.tanh(x)

# The forward input x is skipped via skip_vars_in_backward_input,
# so only the forward output y and its gradient dy are received
def tanh_grad(y, dy):
    return np.array(dy) * (1 - np.square(np.array(y)))

def debug_func(x):
    print(x)

def simple_net(img, label):
    hidden = img
    for idx in six.moves.range(4):
        hidden = fluid.layers.fc(hidden, size=200)
        new_hidden = create_tmp_var(name='hidden_{}'.format(idx),
            dtype=hidden.dtype, shape=hidden.shape)

        # User-defined forward and backward
        hidden = fluid.layers.py_func(func=tanh, x=hidden,
            out=new_hidden, backward_func=tanh_grad,
            skip_vars_in_backward_input=hidden)

        # User-defined debugging layer, which can print out variable details
        fluid.layers.py_func(func=debug_func, x=hidden, out=None)

    prediction = fluid.layers.fc(hidden, size=10, act='softmax')
    loss = fluid.layers.cross_entropy(input=prediction, label=label)
    return fluid.layers.mean(loss)
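
The snippet below is a sketch of how simple_net above could be wired into a runnable program and executed once; the data names, shapes, batch size, and optimizer are assumptions for illustration and are not part of the original example.

import numpy as np
import paddle.fluid as fluid

# Assumed input layout: flattened 28x28 images and int64 class labels.
img = fluid.layers.data(name='img', shape=[784], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
loss = simple_net(img, label)
fluid.optimizer.SGD(learning_rate=1e-3).minimize(loss)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

# One step with random data; debug_func prints the intermediate tensors.
feed = {'img': np.random.random((8, 784)).astype('float32'),
        'label': np.random.randint(0, 10, size=(8, 1)).astype('int64')}
loss_val, = exe.run(feed=feed, fetch_list=[loss])
print(loss_val)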