autocast
paddle.autocast(device_type: str | None, dtype: _DTypeLiteral = 'float16', enabled: bool = True, cache_enabled: bool = True) → AbstractContextManager [source]
Create a context that enables auto-mixed-precision (AMP) for operators executed in dynamic graph mode. If enabled, the input data type (float32, float16 or bfloat16) of each operator is decided by the autocast algorithm for better performance.
Commonly, it is used together with GradScaler and the AMP decorator to achieve auto-mixed-precision in imperative mode.
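A minimal sketch (not part of this page's example set) of how auto_cast is typically combined with GradScaler in a dynamic graph training step; the Linear model, SGD optimizer and mean loss below are illustrative placeholders only:

>>> import paddle
>>> model = paddle.nn.Linear(32, 2)
>>> optimizer = paddle.optimizer.SGD(learning_rate=0.01, parameters=model.parameters())
>>> scaler = paddle.amp.GradScaler(init_loss_scaling=1024)
>>> data = paddle.rand([4, 32])
>>> with paddle.amp.auto_cast():
...     loss = model(data).mean()
>>> scaled = scaler.scale(loss)         # scale the loss to reduce float16 gradient underflow
>>> scaled.backward()                   # backward pass runs on the scaled loss
>>> scaler.minimize(optimizer, scaled)  # unscale gradients and apply the parameter update
>>> optimizer.clear_grad()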
Parameters
device_type (str, optional) – Device type. Because Paddle does not distinguish between devices, this parameter has no effect.
dtype (str, optional) – The data type to use inside the context, either ‘float16’ or ‘bfloat16’. Default is ‘float16’ (see the sketch after this parameter list).
enabled (bool, optional) – Whether to enable auto-mixed-precision. Default is True.
cache_enabled (bool, optional) – Whether to enable the cache. Default is True. This parameter is currently not used.
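The dtype argument switches the low-precision type used inside the context. As a small illustrative sketch (assuming the underlying hardware supports bfloat16), reusing the Conv2D layer from the examples below:

>>> import paddle
>>> conv2d = paddle.nn.Conv2D(3, 2, 3, bias_attr=False)
>>> data = paddle.rand([10, 3, 32, 32])
>>> with paddle.amp.auto_cast(dtype='bfloat16'):
...     conv = conv2d(data)
...     print(conv.dtype)  # expected: paddle.bfloat16 on hardware with bfloat16 support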
Examples
>>> import paddle
>>> conv2d = paddle.nn.Conv2D(3, 2, 3, bias_attr=False)
>>> data = paddle.rand([10, 3, 32, 32])

>>> with paddle.amp.auto_cast():
...     conv = conv2d(data)
...     print(conv.dtype)
paddle.float16

>>> with paddle.amp.auto_cast(enable=False):
...     conv = conv2d(data)
...     print(conv.dtype)
paddle.float32
