Stub

class paddle.nn.quant.Stub( observer=None ) [source]

The stub is a placeholder that will be replaced by an observer before PTQ or QAT. It is hard to assign a quantization configuration to a functional API called in the forward of a layer. Instead, create a stub, add it to the layer's sublayers, and call the stub before the functional API in the forward. The observer held by the stub will observe or quantize the inputs of the functional API.

Parameters
  • observer (QuanterFactory) – The observer to be held by the stub. If observer is None, a global configuration will be used to create the observers.

Examples

>>> import paddle
>>> from paddle.nn.quant import Stub
>>> from paddle.quantization.quanters import FakeQuanterWithAbsMaxObserver
>>> from paddle.nn import Conv2D
>>> from paddle.quantization import QAT, QuantConfig

>>> quanter = FakeQuanterWithAbsMaxObserver(moving_rate=0.9)
>>> class Model(paddle.nn.Layer):
...     def __init__(self, num_classes=10):
...         super().__init__()
...         self.conv = Conv2D(3, 6, 3, stride=1, padding=1)
...         self.quant = Stub(quanter)
...
...     def forward(self, inputs):
...         out = self.conv(inputs)
...         out = self.quant(out)
...         return paddle.nn.functional.relu(out)

>>> model = Model()
>>> q_config = QuantConfig(activation=quanter, weight=quanter)
>>> qat = QAT(q_config)
>>> quant_model = qat.quantize(model)
>>> print(quant_model)
Model(
    (conv): QuantedConv2D(
        (weight_quanter): FakeQuanterWithAbsMaxObserverLayer()
        (activation_quanter): FakeQuanterWithAbsMaxObserverLayer()
    )
    (quant): QuanterStub(
        (_observer): FakeQuanterWithAbsMaxObserverLayer()
    )
)
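Conceptually, a stub acts as an identity until an observer is attached, after which it routes its input through that observer. The following plain-Python sketch illustrates this placeholder pattern; the names ObserverStub and MinMaxObserver are hypothetical and this is not paddle's actual implementation.

```python
class ObserverStub:
    """Identity layer that optionally routes its input through an observer."""

    def __init__(self, observer=None):
        self._observer = observer

    def __call__(self, x):
        # Before quantization, the stub acts as an identity; once an
        # observer is attached, it records (or quantizes) the input.
        if self._observer is not None:
            return self._observer(x)
        return x


class MinMaxObserver:
    """Toy observer that tracks the min/max of the values it sees."""

    def __init__(self):
        self.low = float("inf")
        self.high = float("-inf")

    def __call__(self, x):
        self.low = min(self.low, x)
        self.high = max(self.high, x)
        return x
```

With no observer, the stub returns its input unchanged; with an observer attached, the observer sees every input that flows through the stub, mirroring how the stub's observer watches the inputs of the functional API in the forward.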
forward ( input )

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters
  • input (Tensor) – the input to be observed or quantized