Layer

class paddle.fluid.dygraph.Layer(name_scope=None, dtype=VarType.FP32)[source]

Dynamic graph Layer based on object-oriented design (OOD); it holds the layer's parameters, the structure of the forward graph, and so on.

Parameters
  • name_scope (str, optional) – prefix name used by the layer to name its parameters. If the prefix is “my_layer”, a parameter name in MyLayer can be “my_layer_0.w_n”, where “w” is the parameter base name and “n” is a unique auto-generated suffix. If None, the prefix will be the snake-cased class name. Default: None.

  • dtype (str or core.VarDesc.VarType, optional) – data type of this Layer’s parameters. If set as a str, it can be “bool”, “float16”, “float32”, “float64”, “int8”, “int16”, “int32”, “int64”, “uint8” or “uint16”. Default: core.VarDesc.VarType.FP32

Returns

None
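
Examples

A minimal sketch of subclassing Layer (the MyLayer class and its layer sizes are illustrative, not part of the API):

import paddle.fluid as fluid

class MyLayer(fluid.dygraph.Layer):
    def __init__(self):
        super(MyLayer, self).__init__()
        # sublayers assigned as attributes are registered automatically
        self._linear = fluid.dygraph.Linear(13, 5)

    def forward(self, x):
        return self._linear(x)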

train()

Sets this Layer and all its sublayers to training mode. This only affects certain modules, such as Dropout and BatchNorm.

Returns

None

eval()

Sets this Layer and all its sublayers to evaluation mode. This only affects certain modules, such as Dropout and BatchNorm.

Returns

None
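
Examples

A minimal sketch of switching modes (the Linear layer is just a stand-in; mode switching only matters for layers such as Dropout and BatchNorm):

import paddle.fluid as fluid

with fluid.dygraph.guard():
    model = fluid.dygraph.Linear(13, 5)
    model.eval()   # switch to evaluation mode before inference
    model.train()  # switch back to training mode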

full_name()

Full name for this layer, composed of name_scope + “/” + MyLayer.__class__.__name__

Returns

full name of this layer.

Return type

str
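
Examples

A minimal sketch (the printed name is illustrative; the actual suffix is auto-generated):

import paddle.fluid as fluid

with fluid.dygraph.guard():
    linear = fluid.dygraph.Linear(13, 5)
    # prints a name such as "linear_0"
    print(linear.full_name())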

register_forward_post_hook(hook)

Register a forward post-hook for the Layer. The hook will be called after the forward function has been computed.

It should have the following form; the input and output of the hook are the input and output of the Layer, respectively. Users can use a forward post-hook to change the output of the Layer or to perform information statistics tasks on the Layer.

hook(Layer, input, output) -> None or modified output

Parameters

hook (function) – a function registered as a forward post-hook

Returns

a HookRemoveHelper object that can be used to remove the added hook by calling hook_remove_helper.remove() .

Return type

HookRemoveHelper

Examples

import paddle.fluid as fluid
import numpy as np

# the forward_post_hook changes the output of the layer: output = output * 2
def forward_post_hook(layer, input, output):
    # users can use layer, input and output for information statistics tasks

    # change the output
    return output * 2

with fluid.dygraph.guard():
    linear = fluid.dygraph.Linear(13, 5, dtype="float32")

    # register the hook
    forward_post_hook_handle = linear.register_forward_post_hook(forward_post_hook)

    value1 = np.arange(26).reshape(2, 13).astype("float32")
    in1 = fluid.dygraph.to_variable(value1)

    out0 = linear(in1)

    # remove the hook
    forward_post_hook_handle.remove()

    out1 = linear(in1)

    # the hook changed the linear's output to output * 2, so out0 equals out1 * 2.
    assert (out0.numpy() == (out1.numpy() * 2)).all()

register_forward_pre_hook(hook)

Register a forward pre-hook for the Layer. The hook will be called before the forward function is computed.

It should have the following form; the input of the hook is the input of the Layer. The hook can return either a tuple or a single modified value; a single returned value will be wrapped into a tuple (unless that value is already a tuple). Users can use a forward pre-hook to change the input of the Layer or to perform information statistics tasks on the Layer.

hook(Layer, input) -> None or modified input

Parameters

hook (function) – a function registered as a forward pre-hook

Returns

a HookRemoveHelper object that can be used to remove the added hook by calling hook_remove_helper.remove() .

Return type

HookRemoveHelper

Examples

import paddle.fluid as fluid
import numpy as np

# the forward_pre_hook changes the input of the layer: input = input * 2
def forward_pre_hook(layer, input):
    # users can use layer and input for information statistics tasks

    # change the input
    input_return = (input[0] * 2)
    return input_return

with fluid.dygraph.guard():
    linear = fluid.dygraph.Linear(13, 5, dtype="float32")

    # register the hook
    forward_pre_hook_handle = linear.register_forward_pre_hook(forward_pre_hook)

    value0 = np.arange(26).reshape(2, 13).astype("float32")
    in0 = fluid.dygraph.to_variable(value0)
    out0 = linear(in0)

    # remove the hook
    forward_pre_hook_handle.remove()

    value1 = value0 * 2
    in1 = fluid.dygraph.to_variable(value1)
    out1 = linear(in1)

    # the hook changed the linear's input to input * 2, so out0 equals out1.
    assert (out0.numpy() == out1.numpy()).all()

create_parameter(shape, attr=None, dtype='float32', is_bias=False, default_initializer=None)

Create parameters for this layer.

Parameters
  • shape (list) – Shape of the parameter.

  • attr (ParamAttr, optional) – Parameter attribute of weight. Please refer to ParamAttr. Default: None.

  • dtype (str or core.VarDesc.VarType, optional) – Data type of this parameter. If set as a str, it can be “bool”, “float16”, “float32”, “float64”, “int8”, “int16”, “int32”, “int64”, “uint8” or “uint16”. Default: “float32”.

  • is_bias (bool, optional) – whether this is a bias parameter. Default: False.

  • default_initializer (Initializer, optional) – the default initializer for this parameter. If set None, the default initializer will be XavierInitializer for non-bias parameters and ConstantInitializer for bias parameters. Default: None.

Returns

created parameter.

Return type

Variable
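
Examples

A minimal sketch of creating parameters inside a custom layer (MyLinear and its dimensions are illustrative, not part of the API):

import paddle.fluid as fluid

class MyLinear(fluid.dygraph.Layer):
    def __init__(self, in_dim, out_dim):
        super(MyLinear, self).__init__()
        # weight initialized by XavierInitializer by default
        self.weight = self.create_parameter(shape=[in_dim, out_dim], dtype="float32")
        # bias initialized by ConstantInitializer by default
        self.bias = self.create_parameter(shape=[out_dim], dtype="float32", is_bias=True)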

create_variable(name=None, persistable=None, dtype=None, type=VarType.LOD_TENSOR)

Create Variable for this layer.

Parameters
  • name (str, optional) – name of the variable. Please refer to Name . Default: None

  • persistable (bool, optional) – whether to set this variable as persistable. Default: None

  • dtype (str or core.VarDesc.VarType, optional) – data type of this variable. If set as a str, it can be “bool”, “float16”, “float32”, “float64”, “int8”, “int16”, “int32”, “int64”, “uint8” or “uint16”. If set None, it will default to core.VarDesc.VarType.FP32. Default: None

  • type (core.VarDesc.VarType, optional) – type of the variable. No need to set this parameter. Default: core.VarDesc.VarType.LOD_TENSOR

Returns

created Variable.

Return type

Variable
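
Examples

A minimal sketch of creating a non-parameter variable held by a layer (MyLayer and the variable name are illustrative, not part of the API):

import paddle.fluid as fluid

class MyLayer(fluid.dygraph.Layer):
    def __init__(self):
        super(MyLayer, self).__init__()
        # a persistable buffer-like variable, not trained by optimizers
        self._buffer = self.create_variable(name="my_buffer", persistable=True, dtype="float32")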

parameters(include_sublayers=True)

Returns a list of all Parameters from current layer and its sub-layers.

Parameters

include_sublayers (bool, optional) – Whether to include the parameters of sublayers. If True, the parameters from sublayers are returned as well. Default: True

Returns

a list of Parameters.

Return type

list of Variable
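
Examples

A minimal sketch of listing a layer's parameters (the Linear layer is a stand-in):

import paddle.fluid as fluid

with fluid.dygraph.guard():
    linear = fluid.dygraph.Linear(13, 5)
    # prints the weight and bias of the Linear layer
    for param in linear.parameters():
        print(param.name, param.shape)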

sublayers(include_sublayers=True)

Returns a list of sub layers.

Parameters

include_sublayers (bool, optional) – Whether to recurse into sublayers. If True, the sublayers of sublayers are returned as well. Default: True

Returns

a list of sub layers.

Return type

list of Layer
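
Examples

A minimal sketch, mirroring the named_parameters example below:

import paddle.fluid as fluid

with fluid.dygraph.guard():
    fc1 = fluid.dygraph.Linear(10, 3)
    fc2 = fluid.dygraph.Linear(3, 10, bias_attr=False)
    model = fluid.dygraph.Sequential(fc1, fc2)
    # prints the two Linear sublayers
    print(model.sublayers())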

named_parameters(prefix='', include_sublayers=True)

Returns an iterator over all parameters in the Layer, yielding tuples of name and parameter.

Parameters
  • prefix (str, optional) – Prefix to prepend to all parameter names. Default: ‘’.

  • include_sublayers (bool, optional) – Whether to include the parameters of sublayers. If True, the named parameters from sublayers are yielded as well. Default: True.

Yields

(string, Parameter) – Tuple of name and Parameter

Examples

import paddle.fluid as fluid

with fluid.dygraph.guard():
    fc1 = fluid.dygraph.Linear(10, 3)
    fc2 = fluid.dygraph.Linear(3, 10, bias_attr=False)
    model = fluid.dygraph.Sequential(fc1, fc2)
    for name, param in model.named_parameters():
        print(name, param)

named_sublayers(prefix='', include_sublayers=True, include_self=False, layers_set=None)

Returns an iterator over all sublayers in the Layer, yielding tuples of name and sublayer. A duplicate sublayer is yielded only once.

Parameters
  • prefix (str, optional) – Prefix to prepend to all sublayer names. Default: ‘’.

  • include_sublayers (bool, optional) – Whether to include the sublayers of sublayers. Default: True.

  • include_self (bool, optional) – Whether include the Layer itself. Default: False.

  • layers_set (set, optional) – A set used to record visited sublayers, so that duplicates are yielded only once. Default: None.

Yields

(string, Layer) – Tuple of name and Layer

Examples

import paddle.fluid as fluid

with fluid.dygraph.guard():
    fc1 = fluid.dygraph.Linear(10, 3)
    fc2 = fluid.dygraph.Linear(3, 10, bias_attr=False)
    model = fluid.dygraph.Sequential(fc1, fc2)
    for prefix, layer in model.named_sublayers():
        print(prefix, layer)

clear_gradients()

Clear the gradients of all parameters for this layer.

Returns

None

Examples

import paddle.fluid as fluid
import numpy as np

with fluid.dygraph.guard():
    value = np.arange(26).reshape(2, 13).astype("float32")
    a = fluid.dygraph.to_variable(value)
    linear = fluid.dygraph.Linear(13, 5, dtype="float32")
    adam = fluid.optimizer.Adam(learning_rate=0.01,
                                parameter_list=linear.parameters())
    out = linear(a)
    out.backward()
    adam.minimize(out)
    linear.clear_gradients()

forward(*inputs, **kwargs)

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters
  • *inputs (tuple) – unpacked tuple arguments

  • **kwargs (dict) – unpacked dict arguments
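
Examples

A minimal sketch of overriding forward (AddLayer is hypothetical; any positional and keyword arguments can be accepted):

import paddle.fluid as fluid
import numpy as np

class AddLayer(fluid.dygraph.Layer):
    # defines the computation performed at every call
    def forward(self, x, y, scale=1.0):
        return (x + y) * scale

with fluid.dygraph.guard():
    t = fluid.dygraph.to_variable(np.ones([2, 2], dtype="float32"))
    out = AddLayer()(t, t, scale=2.0)  # calling the layer runs forward(t, t, scale=2.0)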

add_sublayer(name, sublayer)

Adds a sub Layer instance.

The added sublayer can be accessed via self.name

Parameters
  • name (str) – name of this sublayer.

  • sublayer (Layer) – an instance of Layer.

Returns

the sublayer passed in.

Return type

Layer
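
Examples

A minimal sketch (MyModel is illustrative; the registered sublayer becomes accessible as self.fc):

import paddle.fluid as fluid

class MyModel(fluid.dygraph.Layer):
    def __init__(self):
        super(MyModel, self).__init__()
        # equivalent to assigning the Linear instance to self.fc
        self.add_sublayer("fc", fluid.dygraph.Linear(10, 3))

    def forward(self, x):
        return self.fc(x)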

add_parameter(name, parameter)

Adds a Parameter instance.

The added parameter can be accessed via self.name

Parameters
  • name (str) – name of this parameter.

  • parameter (Parameter) – an instance of Parameter.

Returns

the parameter passed in.

Return type

Parameter
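
Examples

A minimal sketch (MyLayer is illustrative; after registration the parameter is accessible as self.w):

import paddle.fluid as fluid

class MyLayer(fluid.dygraph.Layer):
    def __init__(self):
        super(MyLayer, self).__init__()
        w = self.create_parameter(shape=[13, 5], dtype="float32")
        # register the parameter under the name "w"
        self.add_parameter("w", w)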

state_dict(destination=None, include_sublayers=True, structured_name_prefix='')

Get all parameters of the current layer and its sublayers, and set all of them into a dict.

Parameters
  • destination (dict, optional) – If provided, all the parameters will be set into this dict. Default: None

  • include_sublayers (bool, optional) – If true, also include the parameters from sublayers. Default: True

Returns

a dict containing all the parameters

Return type

dict

Examples

import paddle.fluid as fluid
with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding([10, 10])

    state_dict = emb.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

set_dict(stat_dict, include_sublayers=True, use_structured_name=True)

Set parameters from stat_dict. All the parameters will be reset by the tensors in stat_dict.

Parameters
  • stat_dict (dict) – Dict containing all the parameters

  • include_sublayers (bool, optional) – If true, also include the parameters from sublayers. Default: True

  • use_structured_name (bool, optional) – If true, use structured name as key, otherwise, use parameter name as key. Default: True

Returns

None

Examples

import paddle.fluid as fluid
with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding([10, 10])

    state_dict = emb.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    para_state_dict, _ = fluid.load_dygraph("paddle_dy")

    emb.set_dict(para_state_dict)

load_dict(stat_dict, include_sublayers=True, use_structured_name=True)

Set parameters from stat_dict. All the parameters will be reset by the tensors in stat_dict.

This API will be deprecated. Please use set_dict instead.

Parameters
  • stat_dict (dict) – Dict containing all the parameters

  • include_sublayers (bool, optional) – If true, also include the parameters from sublayers. Default: True

  • use_structured_name (bool, optional) – If true, use structured name as key, otherwise, use parameter name as key. Default: True

Returns

None

Examples

import paddle.fluid as fluid
with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding([10, 10])

    state_dict = emb.state_dict()
    fluid.save_dygraph(state_dict, "paddle_dy")

    para_state_dict, _ = fluid.load_dygraph("paddle_dy")

    emb.load_dict(para_state_dict)