paddle.fluid.dygraph.amp.auto_cast.amp_guard(enable=True, custom_white_list=None, custom_black_list=None) [source]


Create a context which enables auto-mixed-precision (AMP) for operators executed in imperative mode. If enabled, the input data type (float32 or float16) of each operator is decided by the autocast algorithm for better performance.

Commonly, it is used together with AmpScaler to achieve auto-mixed-precision in imperative mode.

  • enable (bool, optional) – Enable auto-mixed-precision or not. Default is True.

  • custom_white_list (set|list, optional) – A custom white list of operator names. Operators in the white list are considered numerically safe in float16 and performance-critical, so they are computed in float16 under AMP. Default is None.

  • custom_black_list (set|list, optional) – A custom black list of operator names. Operators in the black list are considered numerically dangerous in float16, so they are kept in float32 under AMP. Default is None.


import numpy as np
import paddle.fluid as fluid

data = np.random.uniform(-1, 1, [10, 3, 32, 32]).astype('float32')
with fluid.dygraph.guard():
    conv2d = fluid.dygraph.Conv2D(3, 2, 3)
    data = fluid.dygraph.to_variable(data)
    # Inside amp_guard, white-list ops such as conv2d run in float16.
    with fluid.dygraph.amp_guard():
        conv = conv2d(data)
        print(conv.dtype) # FP16 (on a CUDA device; AMP has no effect on CPU)
    # With enable=False, AMP is turned off and ops run in float32.
    with fluid.dygraph.amp_guard(enable=False):
        conv = conv2d(data)
        print(conv.dtype) # FP32