CUDAPlace

class paddle.CUDAPlace

CUDAPlace is a descriptor of a device. It represents a GPU device on which a Tensor is allocated or will be allocated. Each CUDAPlace has a dev_id indicating the ID of the GPU card it represents, starting from 0. Memory on CUDAPlaces with different dev_id values is not directly accessible across devices. The ID here is the logical ID of a visible GPU card, not the physical ID of the card. You can choose which GPU devices are visible by setting the CUDA_VISIBLE_DEVICES environment variable; when the program starts, the visible devices are numbered from 0. If CUDA_VISIBLE_DEVICES is not set, all devices are visible by default and the logical IDs are the same as the physical IDs.
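
For example, if CUDA_VISIBLE_DEVICES exposes only physical GPUs 2 and 3, logical IDs 0 and 1 refer to those two cards. A minimal sketch of this mapping, assuming the environment variable is set before Paddle initializes CUDA:

>>> import os
>>> os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'  # expose physical GPUs 2 and 3 only
>>> import paddle
>>> place = paddle.CUDAPlace(0)                 # logical ID 0 maps to physical GPU 2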

Parameters

id (int) – Logical ID of the GPU device, starting from 0.

Examples

>>> import paddle
>>> place = paddle.CUDAPlace(0)  # descriptor for the GPU with logical ID 0
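
If Paddle is built with CUDA support, the resulting place can be passed wherever a device is expected, for example when creating a tensor with paddle.to_tensor (a brief sketch):

>>> import paddle
>>> place = paddle.CUDAPlace(0)
>>> x = paddle.to_tensor([1.0, 2.0, 3.0], place=place)  # allocate the tensor on GPU 0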