empty_cache
paddle.cuda.empty_cache() → None [source]

Release all unoccupied cached memory currently held by the caching allocator so that it can be used by other applications and becomes visible in nvidia-smi.

Returns

None
Examples
>>> import paddle
>>> # Create a tensor to allocate memory
>>> tensor = paddle.randn([1000, 1000], device='cuda')
>>> # Delete the tensor to free memory (but it may still be cached)
>>> del tensor
>>> # Release the cached memory
>>> paddle.cuda.empty_cache()
