CUDAPlace

class paddle.fluid.CUDAPlace
Note:

For multi-card tasks, please use the FLAGS_selected_gpus environment variable to set the visible GPU devices. A later version will fix the problem with the CUDA_VISIBLE_DEVICES environment variable.
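
A minimal sketch of the note above, assuming FLAGS_selected_gpus is read from the environment when paddle.fluid initializes; in multi-card launches it is usually set per process by a launcher script rather than by hand:

import os

# Assumption: setting FLAGS_selected_gpus before paddle.fluid is imported
# selects GPUs 0 and 1 for this process.
os.environ['FLAGS_selected_gpus'] = '0,1'

import paddle.fluid as fluid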

CUDAPlace is a descriptor of a device. It represents a GPU device on which a Tensor or LoDTensor is allocated or will be allocated. Each CUDAPlace has a dev_id that indicates the graphics card ID represented by the current CUDAPlace, starting from 0. Memory on CUDAPlaces with different dev_id values is not mutually accessible. The numbering here refers to the logical ID of the visible graphics card, not the actual ID of the graphics card. You can set the visible GPU devices through the CUDA_VISIBLE_DEVICES environment variable; when the program starts, the visible GPU devices are numbered from 0. If CUDA_VISIBLE_DEVICES is not set, all devices are visible by default and the logical ID is the same as the actual ID (see the second sketch under Examples below).

Parameters

id (int) – Logical GPU device ID, starting from 0.

Examples

import paddle.fluid as fluid

# Create a place describing the GPU with logical device ID 0
gpu_place = fluid.CUDAPlace(0)
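
A minimal sketch of the logical-to-actual ID mapping described above, assuming a machine with at least four GPUs; CUDA_VISIBLE_DEVICES must be set before CUDA is initialized, here done before importing paddle.fluid:

import os

# Make only physical GPUs 2 and 3 visible; at startup they are renumbered
# as logical devices 0 and 1.
os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'

import paddle.fluid as fluid

# Logical device 0 now refers to physical GPU 2.
gpu_place = fluid.CUDAPlace(0)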