auto_cast

paddle.amp.auto_cast(enable=True, custom_white_list=None, custom_black_list=None, level='O1', dtype='float16', use_promote=True)

Creates a context that enables auto mixed precision (AMP) for operators executed in dynamic graph mode. If enabled, the input data type (float32, float16, or bfloat16) of each operator is decided by the autocast algorithm for better performance.

Commonly, it is used together with paddle.amp.GradScaler and paddle.amp.decorate to achieve auto mixed precision in imperative mode.
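For context, a minimal training-step sketch under the default O1 level, combining auto_cast with GradScaler (the Linear model, SGD optimizer, and data shapes here are illustrative, and a GPU environment is assumed for float16 kernels):

>>> import paddle

>>> model = paddle.nn.Linear(4, 4)  # illustrative model
>>> optimizer = paddle.optimizer.SGD(learning_rate=0.01, parameters=model.parameters())
>>> scaler = paddle.amp.GradScaler(init_loss_scaling=1024)

>>> data = paddle.rand([2, 4])
>>> with paddle.amp.auto_cast():
...     loss = paddle.mean(model(data))
>>> scaled = scaler.scale(loss)   # scale the loss to avoid float16 gradient underflow
>>> scaled.backward()
>>> scaler.step(optimizer)        # unscale gradients and apply the optimizer step
>>> scaler.update()               # adjust the loss scaling factor for the next step
>>> optimizer.clear_grad()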

Parameters
  • enable (bool, optional) – Whether to enable auto mixed precision. Default is True.

  • custom_white_list (set|list|tuple, optional) – The custom white list. A default white list is already set, so usually there is no need to provide one. Ops in this list should be numerically safe and performance critical; they will be computed in float16/bfloat16.

  • custom_black_list (set|list|tuple, optional) – The custom black list. A default black list is already set, and you can extend it according to your model. Ops in this list are considered numerically dangerous, and their effects may also propagate to downstream ops; they will not be converted to float16/bfloat16.

  • level (str, optional) – Auto mixed precision level. Accepted values are “O1”, “O2” and “OD”. At the O1 level, operators in the white list use float16/bfloat16 inputs for their calculations, while operators in the black list use float32 inputs. At the O2 level, the model’s parameters are cast to float16/bfloat16 by paddle.amp.decorate; operators whose inputs are all float16/bfloat16 run in float16/bfloat16, and operators with any float32 input run in float32 (see the sketch after this list). At the OD level, operators in the default white list compute in float16/bfloat16, and all others compute in float32. Default is “O1”.

  • dtype (str, optional) – The low-precision data type to use, either ‘float16’ or ‘bfloat16’. Default is ‘float16’.

  • use_promote (bool, optional) – Whether to promote the compute dtype to float32 when an operator has any float32 input. Only supported when the AMP level is O2. Default is True.
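
As referenced in the level description above, a minimal sketch of O2 usage (the Linear model is illustrative): paddle.amp.decorate casts the parameters themselves to the low-precision dtype, and auto_cast then runs eligible ops in that dtype.

>>> import paddle

>>> model = paddle.nn.Linear(4, 4)                         # illustrative model
>>> model = paddle.amp.decorate(models=model, level='O2')  # cast parameters to float16
>>> data = paddle.rand([2, 4])
>>> with paddle.amp.auto_cast(level='O2'):
...     out = model(data)
...     print(out.dtype)
paddle.float16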

Examples

>>> import paddle

>>> conv2d = paddle.nn.Conv2D(3, 2, 3, bias_attr=False)
>>> data = paddle.rand([10, 3, 32, 32])

>>> with paddle.amp.auto_cast():
...     conv = conv2d(data)
...     print(conv.dtype)
paddle.float16

>>> with paddle.amp.auto_cast(enable=False):
...     conv = conv2d(data)
...     print(conv.dtype)
paddle.float32

>>> with paddle.amp.auto_cast(custom_black_list={'conv2d'}):
...     conv = conv2d(data)
...     print(conv.dtype)
paddle.float32

>>> a = paddle.rand([2, 3])
>>> b = paddle.rand([2, 3])
>>> with paddle.amp.auto_cast(custom_white_list={'elementwise_add'}):
...     c = a + b
...     print(c.dtype)
paddle.float16

>>> with paddle.amp.auto_cast(custom_white_list={'elementwise_add'}, level='O2'):
...     d = a + b
...     print(d.dtype)
paddle.float16
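The OD level described above works the same way; a minimal sketch reusing conv2d and data from above (conv2d is in the default white list, so the output dtype is expected to be float16):

>>> with paddle.amp.auto_cast(level='OD'):
...     conv = conv2d(data)
...     print(conv.dtype)
paddle.float16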
