# Conv1DTranspose¶

class paddle.nn.Conv1DTranspose(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, dilation=1, weight_attr=None, bias_attr=None, data_format='NCL') [source]

This interface is used to construct a callable object of the Conv1DTranspose class. For more details, refer to the code examples. The 1-D transposed convolution layer calculates the output based on the input, filter, dilation, stride and padding. Input and output are in 'NCL' or 'NLC' format, where N is the batch size, C is the number of channels, and L is the length of the feature. For details of the transposed convolution layer, please refer to the following explanation and the references therein. If a bias attribute and an activation type are provided, the bias is added to the output of the convolution and the corresponding activation function is applied to the final result.

For each input $$X$$, the equation is:

$Out = \sigma (W \ast X + b)$

Where:

• $$X$$: Input value, a 3-D Tensor with ‘NCL’ format or ‘NLC’ format.

• $$W$$: Kernel value, a 3-D Tensor with ‘MCK’ format.

• $$\ast$$: Convolution operation.

• $$b$$: Bias value, a 2-D Tensor with shape [M, 1].

• $$\sigma$$: Activation function.

• $$Out$$: Output value, a 3-D Tensor with data format 'NCL' or 'NLC'; the shape of $$Out$$ and $$X$$ may be different.

Example

• Input:

Input shape: $$(N, C_{in}, L_{in})$$

Filter shape: $$(C_{in}, C_{out}, L_f)$$

• Output:

Output shape: $$(N, C_{out}, L_{out})$$

Where

$\begin{split}L^\prime_{out} &= (L_{in} - 1) * stride - 2 * padding + dilation * (L_f - 1) + 1 \\ L_{out} &\in [ L^\prime_{out}, L^\prime_{out} + stride ]\end{split}$

Note

The conv1d_transpose can be seen as the backward of conv1d. For conv1d, when stride > 1, conv1d maps multiple input shapes to the same output shape, so for conv1d_transpose, when stride > 1, one input shape maps to multiple possible output shapes. If output_size is None, $$L_{out} = L^\prime_{out}$$; otherwise, the $$L_{out}$$ of the output size must be between $$L^\prime_{out}$$ and $$L^\prime_{out} + stride$$.
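As a quick sanity check, the nominal output length can be computed directly from the formula above. This is a minimal sketch; the function name is illustrative, not part of the Paddle API:

```python
def conv1d_transpose_out_len(L_in, kernel_size, stride=1, padding=0,
                             dilation=1, output_padding=0):
    """Nominal output length of a 1-D transposed convolution."""
    # L'_out = (L_in - 1) * stride - 2 * padding + dilation * (L_f - 1) + 1
    L_prime = (L_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + 1
    # output_padding appends zeros to the tail; the result stays within
    # [L'_out, L'_out + stride)
    return L_prime + output_padding

# A length-4 input with kernel_size=2 and stride=1 gives length 5,
# matching the code example below.
print(conv1d_transpose_out_len(4, 2))  # 5
```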

Parameters
• in_channels (int) – The number of channels in the input image.

• out_channels (int) – The number of filters, which equals the number of channels of the output feature map.

• kernel_size (int|tuple|list) – The filter size. If kernel_size is a tuple/list, it must contain one integer, (kernel_size).

• stride (int|tuple|list, optional) – The stride size. It means the stride in transposed convolution. If stride is a tuple/list, it must contain one integer, (stride_size). Default: stride = 1.

• padding (int|tuple|list, optional) – The padding size. The padding added to both sides of the input. If padding is a tuple/list, it must contain one integer, (padding_size). Default: padding = 0.

• output_padding (int|list|tuple, optional) – The count of zeros to be added to the tail of each dimension. If it is a tuple/list, it must contain one integer. Default: 0.

• groups (int, optional) – The number of groups of the Conv1D transpose layer. Inspired by grouped convolution in Alex Krizhevsky's Deep CNN paper, in which when groups=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups = 1.

• dilation (int|tuple|list, optional) – The dilation size. It means the spacing between the kernel points. If dilation is a tuple/list, it must contain one integer, (dilation_size). Default: dilation = 1.

• weight_attr (ParamAttr, optional) – The parameter attribute for learnable parameters/weights of conv1d_transpose. If it is set to None or one attribute of ParamAttr, conv1d_transpose will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.

• bias_attr (ParamAttr|bool, optional) – The parameter attribute for the bias of conv1d_transpose. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, conv1d_transpose will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized to zero. Default: None.
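Grouping splits the channels of the weight tensor. The following pure-Python shape sketch assumes the (in_channels, out_channels/groups, kernel_size) weight layout that generalizes the groups=1 shape given under Shape below; the helper name is hypothetical, not part of the Paddle API:

```python
def conv1d_transpose_weight_shape(in_channels, out_channels, kernel_size, groups=1):
    # Both channel counts must be divisible by the number of groups.
    if in_channels % groups or out_channels % groups:
        raise ValueError("channel counts must be divisible by groups")
    # Each group connects in_channels // groups inputs to
    # out_channels // groups outputs; the weight is stored as
    # (in_channels, out_channels // groups, kernel_size).
    return (in_channels, out_channels // groups, kernel_size)

print(conv1d_transpose_weight_shape(2, 1, 2))            # (2, 1, 2)
print(conv1d_transpose_weight_shape(4, 6, 3, groups=2))  # (4, 3, 3)
```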

Attribute:

• weight (Parameter): the learnable weights of the filters of this layer.

• bias (Parameter or None): the learnable bias of this layer.

Shape:

• x(Tensor): 3-D tensor with shape (batch, in_channels, length) when data_format is “NCL” or shape (batch, length, in_channels) when data_format is “NLC”.

• weight(Tensor): 3-D tensor with shape (in_channels, out_channels, kernel_length).

• bias(Tensor): 1-D tensor with shape (out_channels).

• output_size (int|tuple|list, optional): The output feature length. If output_size is a tuple/list, it must contain one integer, (feature_length). If it is None, output_size is calculated from kernel_size, padding, output_padding and stride. If output_size and kernel_size are specified at the same time, they should satisfy the formula above. Default: None.

• output (Tensor): 3-D tensor with the same data format as input x; its length $$L_{out}$$ is given by the formula above.

Examples

import paddle

# shape: (1, 2, 4)
x = paddle.to_tensor([[[4, 0, 9, 7],
                       [8, 0, 9, 2]]], dtype="float32")
# shape: (2, 1, 2)
w = paddle.to_tensor([[[7, 0]],
                      [[4, 2]]], dtype="float32")

conv = paddle.nn.Conv1DTranspose(2, 1, 2)
conv.weight.set_value(w)
y = conv(x)
print(y)
# Tensor(shape=[1, 1, 5], dtype=float32, place=Place(gpu:0), stop_gradient=False,
#        [[[60., 16., 99., 75., 4. ]]])
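The printed values above can be reproduced by hand. Below is a minimal pure-Python sketch of a stride-1, single-group transposed 1-D convolution, illustrative only and not Paddle's implementation:

```python
def conv1d_transpose_ref(x, w, stride=1):
    """x: (in_channels, L_in) nested lists; w: (in_channels, out_channels, K)."""
    C_in, L_in = len(x), len(x[0])
    C_out, K = len(w[0]), len(w[0][0])
    L_out = (L_in - 1) * stride + K
    out = [[0.0] * L_out for _ in range(C_out)]
    # Each input element scatters a scaled copy of the kernel into the output.
    for c in range(C_in):
        for i in range(L_in):
            for m in range(C_out):
                for k in range(K):
                    out[m][i * stride + k] += x[c][i] * w[c][m][k]
    return out

x = [[4, 0, 9, 7], [8, 0, 9, 2]]
w = [[[7, 0]], [[4, 2]]]
print(conv1d_transpose_ref(x, w))  # [[60.0, 16.0, 99.0, 75.0, 4.0]]
```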

forward(x, output_size=None)

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters
• x (Tensor) – The input tensor, a 3-D Tensor with 'NCL' or 'NLC' data format.

• output_size (int|tuple|list, optional) – The target output feature length. If it is None, the output length is calculated from kernel_size, padding, output_padding and stride. Default: None.
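When output_size is passed to forward, it must lie in the range given by the formula above. A sketch of that check follows; the helper is hypothetical, not Paddle's code:

```python
def resolve_output_size(L_in, kernel_size, stride=1, padding=0,
                        dilation=1, output_size=None):
    # Nominal length from the transposed-convolution formula.
    L_prime = (L_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + 1
    if output_size is None:
        return L_prime
    # An explicit output_size must fall within [L', L' + stride].
    if not (L_prime <= output_size <= L_prime + stride):
        raise ValueError(
            f"output_size must be in [{L_prime}, {L_prime + stride}]")
    return output_size

print(resolve_output_size(4, 2))                           # 5
print(resolve_output_size(4, 2, stride=2, output_size=9))  # 9
```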