auto_cast

paddle.amp.auto_cast(enable=True, custom_white_list=None, custom_black_list=None) [source]

Create a context which enables auto-mixed-precision (AMP) for operators executed in dynamic graph mode. If enabled, the input data type (float32 or float16) of each operator is decided by the autocast algorithm for better performance.

Commonly, it is used together with GradScaler to achieve auto-mixed precision in imperative mode; a training-loop sketch follows the examples below.

Parameters
  • enable (bool, optional) – Enable auto-mixed-precision or not. Default is True.

  • custom_white_list (set|list|tuple, optional) – The custom white_list. It’s the set of ops that support fp16 calculation and are considered numerically-safe and performance-critical. These ops will be converted to fp16.

  • custom_black_list (set|list|tuple, optional) – The custom black_list. It’s the set of ops that support fp16 calculation but are considered numerically dangerous, and whose reduced-precision effects may also propagate to downstream ops. These ops will not be converted to fp16.

Examples

import paddle
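
# Note: the FP16 dtypes printed below assume the example runs on a CUDA-capable GPU;
# without one, the fp16 path is not taken and the outputs remain float32.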

conv2d = paddle.nn.Conv2D(3, 2, 3, bias_attr=False)
data = paddle.rand([10, 3, 32, 32])

with paddle.amp.auto_cast():
    conv = conv2d(data)
    print(conv.dtype) # FP16

with paddle.amp.auto_cast(enable=False):
    conv = conv2d(data)
    print(conv.dtype) # FP32

with paddle.amp.auto_cast(custom_black_list={'conv2d'}):
    conv = conv2d(data)
    print(conv.dtype) # FP32

a = paddle.rand([2,3])
b = paddle.rand([2,3])
with paddle.amp.auto_cast(custom_white_list={'elementwise_add'}):
    c = a + b
    print(c.dtype) # FP16
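
The following is a minimal sketch of the GradScaler pattern mentioned above. The Linear model, the mse_loss objective, and the init_loss_scaling value are illustrative choices, and the float16 path again assumes a CUDA-capable device.

import paddle

model = paddle.nn.Linear(10, 10)
optimizer = paddle.optimizer.SGD(learning_rate=0.01, parameters=model.parameters())
scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

data = paddle.rand([4, 10])
label = paddle.rand([4, 10])

# The forward pass runs under auto_cast, so white-listed ops (e.g. matmul) use fp16.
with paddle.amp.auto_cast():
    output = model(data)
    loss = paddle.nn.functional.mse_loss(output, label)

scaled = scaler.scale(loss)         # scale the loss to avoid fp16 gradient underflow
scaled.backward()                   # backward on the scaled loss
scaler.minimize(optimizer, scaled)  # unscale gradients and apply the parameter update
optimizer.clear_grad()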
