Pad1D

class paddle.nn.Pad1D(padding, mode='constant', value=0.0, data_format='NCL', name=None) [source]

This interface is used to construct a callable object of the Pad1D class, which pads the input tensor according to padding, mode and value. If mode is 'reflect', pad[0] and pad[1] must be no greater than width - 1.

Parameters
  • padding (Tensor|list[int]|int) – The padding size, with data type int. If it is an int, the same padding is used on both sides of the input. Otherwise, it must contain two elements in the form [pad_left, pad_right].

  • mode (str, optional) –

    One of four padding modes: 'constant', 'reflect', 'replicate', 'circular'. Default: 'constant'. The effect of each mode is illustrated in the Examples below.

    • 'constant' mode: pads the input tensor with a constant value.

    • 'reflect' mode: pads the input tensor with a reflection of the input boundaries.

    • 'replicate' mode: pads the input tensor by replicating the input boundary values.

    • 'circular' mode: pads the input tensor by wrapping the input around circularly.

  • value (float, optional) – The value to fill the padded areas when mode is 'constant'. Default: 0.0.

  • data_format (str, optional) – A string from: 'NCL', 'NLC'. Specifies the data format of the input data. Default: 'NCL'.

  • name (str, optional) – For details, please refer to Name. Generally, no setting is required. Default: None.

Returns

None

Examples

>>> import paddle
>>> import paddle.nn as nn

>>> input_shape = (1, 2, 3)
>>> pad = [1, 2]
>>> mode = "constant"
>>> data = paddle.arange(paddle.prod(paddle.to_tensor(input_shape)), dtype="float32").reshape(input_shape) + 1
>>> my_pad = nn.Pad1D(padding=pad, mode=mode)
>>> result = my_pad(data)
>>> print(result)
Tensor(shape=[1, 2, 6], dtype=float32, place=Place(cpu), stop_gradient=True,
[[[0., 1., 2., 3., 0., 0.],
  [0., 4., 5., 6., 0., 0.]]])
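
For comparison, here is a short sketch of the three non-constant modes applied to a small input (this continues the session above and is not part of the official sample; tolist() is used to keep the printed output compact):

>>> x = paddle.to_tensor([[[1., 2., 3.], [4., 5., 6.]]])
>>> # 'reflect' mirrors interior values across each boundary
>>> print(nn.Pad1D(padding=[1, 2], mode='reflect')(x).tolist())
[[[2.0, 1.0, 2.0, 3.0, 2.0, 1.0], [5.0, 4.0, 5.0, 6.0, 5.0, 4.0]]]
>>> # 'replicate' repeats the boundary values outward
>>> print(nn.Pad1D(padding=[1, 2], mode='replicate')(x).tolist())
[[[1.0, 1.0, 2.0, 3.0, 3.0, 3.0], [4.0, 4.0, 5.0, 6.0, 6.0, 6.0]]]
>>> # 'circular' wraps values around from the opposite end
>>> print(nn.Pad1D(padding=[1, 2], mode='circular')(x).tolist())
[[[3.0, 1.0, 2.0, 3.0, 1.0, 2.0], [6.0, 4.0, 5.0, 6.0, 4.0, 5.0]]]
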
forward(x)

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters
  • *inputs (tuple) – unpacked tuple arguments

  • **kwargs (dict) – unpacked dict arguments
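
As a minimal sketch (not from the official sample): calling the layer object is the idiomatic way to run forward, since Layer.__call__ dispatches to it along with any registered hooks:

>>> import paddle
>>> import paddle.nn as nn

>>> pad = nn.Pad1D(padding=1)
>>> x = paddle.ones([1, 1, 3])
>>> # pad(x) and pad.forward(x) compute the same padding here
>>> print(paddle.allclose(pad(x), pad.forward(x)).item())
True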

extra_repr()

Extra representation of this layer; you can override it to provide a custom representation for your own layer.
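
For example, the string returned by extra_repr appears inside the parentheses when the layer is printed. A minimal sketch (the exact fields shown may vary across Paddle versions):

>>> import paddle.nn as nn

>>> my_pad = nn.Pad1D(padding=[1, 2], mode='replicate')
>>> # prints something like: Pad1D(padding=[1, 2], mode=replicate, value=0.0, data_format=NCL)
>>> print(my_pad)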