paddle.static.device_guard(device=None) [source]

This API is only supported in static graph mode.

A context manager that specifies the device on which the OP will be placed.


Parameters

device (str|None) – Specify the device to use in the context. It should be 'cpu', 'gpu' or 'gpu:x', where x is the index of the GPU. When it is set to 'cpu' or 'gpu', all OPs created in the context will be placed on CPUPlace or CUDAPlace. When 'gpu' is set and the program runs on a single card, the device index will be the same as the device on which the executor runs. Default: None, in which case OPs in this context are automatically assigned devices.


Examples

import paddle

paddle.enable_static()

support_gpu = paddle.is_compiled_with_cuda()
place = paddle.CPUPlace()
if support_gpu:
    place = paddle.CUDAPlace(0)

# if GPU is supported, the three OPs below will be automatically assigned to CUDAPlace(0)
data1 = paddle.full(shape=[1, 3, 8, 8], fill_value=0.5, dtype='float32')
data2 = paddle.full(shape=[1, 3, 64], fill_value=0.5, dtype='float32')
shape = paddle.shape(data2)

with paddle.static.device_guard("cpu"):
    # Ops created here will be placed on CPUPlace
    shape = paddle.slice(shape, axes=[0], starts=[0], ends=[4])
with paddle.static.device_guard('gpu'):
    # if GPU is supported, OPs created here will be placed on CUDAPlace(0), otherwise on CPUPlace
    out = paddle.reshape(data1, shape=shape)

exe = paddle.static.Executor(place)
result = exe.run(fetch_list=[out])