# Conv2DTranspose

class paddle.nn.Conv2DTranspose(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, dilation=1, groups=1, weight_attr=None, bias_attr=None, data_format='NCHW') [source]

This interface is used to construct a callable object of the Conv2DTranspose class. For more details, refer to the code examples. The 2-D convolution transpose layer calculates the output based on the input, filter, dilations, strides, and paddings. Input and output are in NCHW format, where N is the batch size, C is the number of feature maps, H is the height of the feature map, and W is the width of the feature map. The filter's shape is [C, M, H, W], where C is the number of input feature maps, M is the number of output feature maps, H is the height of the filter, and W is the width of the filter. If groups is greater than 1, C equals the number of input feature maps divided by groups. If a bias attribute and an activation type are provided, bias is added to the output of the convolution, and the corresponding activation function is applied to the final result. For details of the convolution transpose layer, please refer to the following explanation and the references on conv2dtranspose. For each input $$X$$, the equation is:

$Out = \sigma (W \ast X + b)$

Where:

• $$X$$: Input value, a Tensor with NCHW format.

• $$W$$: Filter value, a Tensor with shape [C, M, H, W].

• $$\ast$$: Convolution operation.

• $$b$$: Bias value, a 1-D Tensor with shape [M].

• $$\sigma$$: Activation function.

• $$Out$$: Output value, the shape of $$Out$$ and $$X$$ may be different.

Parameters
• in_channels (int) – The number of channels in the input image.

• out_channels (int) – The number of channels produced by the convolution.

• kernel_size (int|list|tuple) – The kernel size. If kernel_size is a list/tuple, it must contain two integers, (kernel_size_H, kernel_size_W). Otherwise, the kernel will be a square.

• stride (int|list|tuple, optional) – The stride size. If stride is a list/tuple, it must contain two integers, (stride_H, stride_W). Otherwise, stride_H = stride_W = stride. Default: 1.

• padding (int|list|tuple, optional) – The padding size. If padding is a list/tuple, it must contain two integers, (padding_H, padding_W). Otherwise, padding_H = padding_W = padding. Default: 0.

• output_padding (int|list|tuple, optional) – Additional size added to one side of each dimension in the output shape. Default: 0.

• dilation (int|list|tuple, optional) – The dilation size. If dilation is a list/tuple, it must contain two integers, (dilation_H, dilation_W). Otherwise, dilation_H = dilation_W = dilation. Default: 1.

• groups (int, optional) – The number of groups of the Conv2DTranspose layer. Inspired by grouped convolution in Alex Krizhevsky’s Deep CNN paper: when groups=2, the first half of the filters is connected only to the first half of the input channels, while the second half of the filters is connected only to the second half of the input channels. Default: 1.

• weight_attr (ParamAttr, optional) – The parameter attribute for the learnable weights (Parameter) of conv2d_transpose. If it is set to None or one attribute of ParamAttr, conv2d_transpose will create a ParamAttr as weight_attr. If the Initializer of weight_attr is not set, the parameter is initialized with Xavier. Default: None.

• bias_attr (ParamAttr|bool, optional) – The parameter attribute for the bias of conv2d_transpose. If it is set to False, no bias will be added to the output. If it is set to None or one attribute of ParamAttr, conv2d_transpose will create a ParamAttr as bias_attr. If the Initializer of bias_attr is not set, the bias is initialized to zero. Default: None.

• data_format (str, optional) – Data format that specifies the layout of input. It can be “NCHW” or “NHWC”. Default: “NCHW”.
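Taken together, these parameters determine the spatial size of the output. As a rough sketch (the helper name below is illustrative, not part of the paddle API), the output size along one spatial dimension can be computed as:

```python
def deconv_out_size(in_size, kernel_size, stride=1, padding=0,
                    output_padding=0, dilation=1):
    """Illustrative helper: output size of a 2-D transposed convolution
    along one spatial dimension (not part of the paddle API)."""
    return ((in_size - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

# An 8-pixel input with a 3x3 kernel and default settings grows to 10 pixels.
print(deconv_out_size(8, 3))            # 10
# stride=2 roughly doubles the spatial size; output_padding fine-tunes it.
print(deconv_out_size(8, 3, stride=2))  # 17
```

Note how output_padding enlarges only one side of each dimension, which is why it appears as a single additive term rather than being doubled like padding.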

Attribute:

weight (Parameter): the learnable weights of filters of this layer.

bias (Parameter or None): the learnable bias of this layer.

Shape:

• x: $$(N, C_{in}, H_{in}, W_{in})$$

• weight: $$(C_{in}, C_{out}, K_{h}, K_{w})$$

• bias: $$(C_{out})$$

• output: $$(N, C_{out}, H_{out}, W_{out})$$

Where

$$
\begin{aligned}
H^\prime_{out} &= (H_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (kernel\_size[0] - 1) + 1 \\
W^\prime_{out} &= (W_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (kernel\_size[1] - 1) + 1 \\
H_{out} &\in [ H^\prime_{out}, H^\prime_{out} + strides[0] ) \\
W_{out} &\in [ W^\prime_{out}, W^\prime_{out} + strides[1] )
\end{aligned}
$$
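Because a strided convolution maps several input sizes to the same output size, its transpose admits a whole range of valid output sizes, which is what the intervals above express. A small sketch (the helper name is hypothetical, not part of the paddle API):

```python
def valid_out_sizes(in_size, kernel_size, stride=1, padding=0, dilation=1):
    """All valid output sizes in [H'_out, H'_out + stride) for one spatial
    dimension (illustrative helper, not part of the paddle API)."""
    base = ((in_size - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + 1)
    return list(range(base, base + stride))

# With stride=1 the output size is unique; with stride=2, two sizes are valid.
print(valid_out_sizes(8, 3))            # [10]
print(valid_out_sizes(8, 3, stride=2))  # [17, 18]
```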

Examples

```python
import paddle
import paddle.nn as nn

x_var = paddle.uniform((2, 4, 8, 8), dtype='float32', min=-1., max=1.)

conv = nn.Conv2DTranspose(4, 6, (3, 3))
y_var = conv(x_var)
y_np = y_var.numpy()
print(y_np.shape)
# (2, 6, 10, 10)
```
forward(x, output_size=None)

Defines the computation performed at every call.

Parameters
• x (Tensor) – The input tensor, with NCHW or NHWC layout according to data_format.

• output_size (int|list|tuple, optional) – The target output size of the height and width dimensions. When stride > 1, it selects one size from the valid range $$[H^\prime_{out}, H^\prime_{out} + strides)$$ for each dimension. Default: None.