sequence_conv

api_attr

declarative programming (static graph)

paddle.fluid.layers.sequence_conv(input, num_filters, filter_size=3, filter_stride=1, padding=True, padding_start=None, bias_attr=None, param_attr=None, act=None, name=None)[source]

Note: this Op only accepts LoDTensor as input. If your input is a Tensor, please use the conv2d Op ( fluid.layers.conv2d ) instead.

This operator receives input sequences of variable length together with convolution configuration parameters (num_filters, filter_size) and applies the convolution operation. By default it pads all-zero data on both sides of each sequence to ensure that the output has the same length as the input. You can customize the padding behavior via the parameter padding_start .

Warning: the parameter padding takes no effect and will be deprecated in the future.

Here we will illustrate the details of the padding operation:
For a mini-batch of two variable-length sentences containing 3 and 1 time-steps respectively:
Assume the input (X) is a [4, N] float LoDTensor, and for the sake of simplicity, assume N=2.
input.data = [[1, 1],
              [2, 2],
              [3, 3],
              [4, 4]]

This is to say that input (X) has 4 words and the dimension of each word
representation is 2.

* Case 1:

    If padding_start is -1 and filter_size is 3.
    The length of padding data is calculated as follows:
    up_pad_len = max(0, -padding_start) = 1
    down_pad_len = max(0, filter_size + padding_start - 1) = 1

    The output of the input sequence after padding is:
    data_after_padding = [[0, 0, 1, 1, 2, 2],
                          [1, 1, 2, 2, 3, 3],
                          [2, 2, 3, 3, 0, 0],
                          [0, 0, 4, 4, 0, 0]]

    Each padded window is then multiplied by the filter weight to produce the final output.
    Assume num_filters = 3; the output values below are illustrative:
    output.data = [[ 0.3234, -0.2334,  0.7433],
                   [ 0.5646,  0.9464, -0.1223],
                   [-0.1343,  0.5653,  0.4555],
                   [ 0.9954, -0.1234, -0.1234]]
    output.shape = [4, 3]     # 3 = num_filters
    output.lod = [[0, 3, 4]]  # Remain the same
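The padding arithmetic above can be sketched in NumPy. This is an illustrative re-implementation, not Paddle's actual kernel, and the helper name pad_sequence_windows is hypothetical:

```python
import numpy as np

def pad_sequence_windows(x, lod, filter_size, padding_start):
    """Build the zero-padded context windows that sequence_conv convolves over.

    x: [M, K] LoDTensor data; lod: sequence offsets, e.g. [0, 3, 4].
    Returns an [M, filter_size * K] matrix, one context window per time-step.
    """
    _, K = x.shape
    up_pad = max(0, -padding_start)                       # zeros before each sequence
    down_pad = max(0, filter_size + padding_start - 1)    # zeros after each sequence
    rows = []
    for begin, end in zip(lod[:-1], lod[1:]):
        padded = np.concatenate([np.zeros((up_pad, K), dtype=x.dtype),
                                 x[begin:end],
                                 np.zeros((down_pad, K), dtype=x.dtype)])
        for t in range(end - begin):
            rows.append(padded[t:t + filter_size].reshape(-1))
    return np.stack(rows)

x = np.array([[1, 1], [2, 2], [3, 3], [4, 4]], dtype=np.float32)
windows = pad_sequence_windows(x, lod=[0, 3, 4], filter_size=3, padding_start=-1)
print(windows)  # rows match data_after_padding in Case 1
```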
Parameters
  • input (Variable) – LoDTensor with shape \((M, K)\), where M is the total time-step of mini-batch and K is hidden_size of input. Only lod_level of 1 is supported. The data type should be float32 or float64.

  • num_filters (int) – the number of filters.

  • filter_size (int) – the height of the filter. Specifying the filter width is not supported; the width is always hidden_size. Default: 3.

  • filter_stride (int) – stride of the filter. Currently only supports stride = 1.

  • padding (bool) – the parameter padding takes no effect and will be removed in the future. Currently, the input is always padded to ensure that the output has the same length as the input, regardless of whether padding is set to True or False, because an input sequence may be shorter than filter_size, in which case the convolution result could not be computed correctly. The padding data is not trainable and is not updated during training. Default: True.

  • padding_start (int) – the start index for padding the input sequence, which can be negative. A negative number means to pad |padding_start| time-steps of all-zero data at the beginning of each instance. A positive number means to skip the first padding_start time-steps of each instance, and \(filter\_size + padding\_start - 1\) time-steps of all-zero data will be padded at the end of the sequence to ensure that the output is the same length as the input. If set to None, \(\frac{filter\_size}{2}\) time-steps of data will be padded on both sides of the sequence. If set to 0, \(filter\_size - 1\) time-steps of data will be padded at the end of each input sequence. Default: None.

  • bias_attr (ParamAttr) – To specify the bias parameter property. Default: None, which means the default bias parameter property is used. See usage for details in ParamAttr .

  • param_attr (ParamAttr) – To specify the weight parameter property. Default: None, which means the default weight parameter property is used. See usage for details in ParamAttr .

  • act (str) – Activation to be applied to the output of this layer, such as tanh, softmax, sigmoid, relu. For more information, please refer to Activation Function . Default: None.

  • name (str, optional) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name .
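The mapping from padding_start to the amount of zero padding described above can be sketched as follows. This is a hedged approximation: the None case is assumed here to behave like padding_start = -(filter_size // 2), which matches the documented behavior for odd filter sizes.

```python
def pad_lengths(filter_size, padding_start=None):
    """Return (up_pad, down_pad): zero time-steps padded before and after.

    Assumption: None behaves like padding_start = -(filter_size // 2),
    i.e. filter_size // 2 steps on both sides for odd filter_size.
    """
    if padding_start is None:
        padding_start = -(filter_size // 2)
    up = max(0, -padding_start)
    down = max(0, filter_size + padding_start - 1)
    return up, down

print(pad_lengths(3))      # (1, 1): default, pad both sides
print(pad_lengths(3, 0))   # (0, 2): pad filter_size - 1 steps at the end
print(pad_lengths(3, -1))  # (1, 1): pad one step at the beginning
```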

Returns

LoDTensor with the same length as the input. The data type is float32 or float64, the same as the input.

Return type

Variable

Examples

import paddle.fluid as fluid

x = fluid.data(name='x', shape=[-1, 10], dtype='float32', lod_level=1)
x_conved = fluid.layers.sequence_conv(input=x, num_filters=2, filter_size=3, padding_start=-1)
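Conceptually, the convolution itself reduces to a single matrix multiplication of the padded context windows against a [filter_size * K, num_filters] weight matrix, which is why the output shape is [M, num_filters] with the lod unchanged. A minimal NumPy sketch, reusing the windows from Case 1 above (the weight values are random and purely illustrative, not what Paddle would initialize):

```python
import numpy as np

M, K, num_filters, filter_size = 4, 2, 3, 3

# Context windows as produced by the padding step (shape [M, filter_size * K]).
windows = np.array([[0, 0, 1, 1, 2, 2],
                    [1, 1, 2, 2, 3, 3],
                    [2, 2, 3, 3, 0, 0],
                    [0, 0, 4, 4, 0, 0]], dtype=np.float32)

rng = np.random.default_rng(0)
weight = rng.standard_normal((filter_size * K, num_filters)).astype(np.float32)

output = windows @ weight  # shape [M, num_filters]; lod remains [[0, 3, 4]]
print(output.shape)        # (4, 3)
```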