device_guard

paddle.fluid.device_guard(device=None)
Notes:

This API only supports static graph mode.

A context manager that specifies the device on which OPs created within it will be placed.

Parameters

device (str|None) – Specify the device to use in the context. It should be 'cpu' or 'gpu'; when set to 'cpu' or 'gpu', all OPs created in the context will be placed on CPUPlace or CUDAPlace, respectively. When 'gpu' is set and the program runs on a single card, the device index will be the same as that of the device on which the executor runs. Default: None, in which case OPs in this context are assigned devices automatically.
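
Only the two device strings are accepted; the sketch below assumes that any other value raises a ValueError when the context is entered (an assumption about the value check, not part of the documented contract):

import paddle.fluid as fluid

with fluid.device_guard('cpu'):    # accepted: OPs placed on CPUPlace
    pass
with fluid.device_guard('gpu'):    # accepted: OPs placed on CUDAPlace on CUDA builds
    pass

try:
    # Hypothetical misuse: 'tpu' is not a supported device string.
    with fluid.device_guard('tpu'):
        pass
except ValueError as err:
    print(err)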

Examples

import paddle.fluid as fluid

support_gpu = fluid.is_compiled_with_cuda()
place = fluid.CPUPlace()
if support_gpu:
    place = fluid.CUDAPlace(0)

# if GPU is supported, the three OPs below will be automatically assigned to CUDAPlace(0)
data1 = fluid.layers.fill_constant(shape=[1, 3, 8, 8], value=0.5, dtype='float32')
data2 = fluid.layers.fill_constant(shape=[1, 3, 5, 5], value=0.5, dtype='float32')
shape = fluid.layers.shape(data2)

with fluid.device_guard("cpu"):
    # Ops created here will be placed on CPUPlace
    shape = fluid.layers.slice(shape, axes=[0], starts=[0], ends=[4])
with fluid.device_guard('gpu'):
    # if GPU is supported, OPs created here will be placed on CUDAPlace(0), otherwise on CPUPlace
    out = fluid.layers.crop_tensor(data1, shape=shape)

exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
result = exe.run(fetch_list=[out])
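
To double-check where each OP ended up, you can inspect the device recorded on the OPs of the built program. A minimal sketch, assuming device_guard annotates OPs with an 'op_device' attribute (the attribute name is an assumption, not part of the documented API):

import paddle.fluid as fluid

# Hypothetical inspection: print each OP in the default main program
# together with the device recorded for it (empty means auto-assigned).
for op in fluid.default_main_program().global_block().ops:
    device = op.attr('op_device') if op.has_attr('op_device') else ''
    print(op.type, '->', device or 'auto')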