memory_allocated¶
- paddle.device.cuda.memory_allocated(device=None)
- 
         Return the current size of GPU memory that is allocated to tensors of the given device.

         Note: The size of GPU memory allocated to a tensor is 256-byte aligned in Paddle, so it may be larger than the memory the tensor actually needs. For instance, a float32 tensor with shape [1] on the GPU takes up 256 bytes of memory, even though storing a single float32 value requires only 4 bytes (see the sketch after the examples below).
- Parameters
- 
           device (paddle.CUDAPlace or int or str) – The device, the ID of the device, or the string name of the device, such as ‘gpu:x’. If device is None, the current device is used. Default: None.
- Returns
- 
           The current size of GPU memory allocated to tensors of the given device, in bytes.
- Return type
- 
           int 
 Examples

    # required: gpu
    import paddle

    memory_allocated_size = paddle.device.cuda.memory_allocated(paddle.CUDAPlace(0))
    memory_allocated_size = paddle.device.cuda.memory_allocated(0)
    memory_allocated_size = paddle.device.cuda.memory_allocated("gpu:0")
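 The alignment behavior described in the Note can be observed directly by measuring the allocation delta around a small tensor. The following is a minimal sketch, not part of the official examples; it assumes a GPU build of Paddle with at least one visible CUDA device, and uses paddle.set_device and paddle.ones only for illustration.

    # required: gpu
    import paddle

    paddle.set_device("gpu:0")  # route newly created tensors to the first GPU

    before = paddle.device.cuda.memory_allocated(0)
    x = paddle.ones([1], dtype="float32")  # the data itself is only 4 bytes
    after = paddle.device.cuda.memory_allocated(0)

    # Because allocations are 256-byte aligned, the delta is expected to be
    # 256 bytes (one alignment block), not 4.
    print(after - before)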
