Variable

class paddle.fluid.Variable(block, type=VarType.LOD_TENSOR, name=None, shape=None, dtype=None, lod_level=None, capacity=None, persistable=None, error_clip=None, stop_gradient=False, is_data=False, need_check_feed=False, belong_to_optimizer=False, **kwargs)[source]
Notes:

The constructor of Variable should not be invoked directly.

In Static Graph Mode: Please use Block.create_var to create a static Variable, which holds no data until it is fed.

In Dygraph Mode: Please use to_variable to create a dygraph Variable with real data.

In Fluid, every input and output of an OP is a Variable. In most cases, Variables are used to hold different kinds of data or training labels. A Variable belongs to a Block. Every Variable has its own name, and two Variables in different Blocks may have the same name.

There are many kinds of Variables. Each kind has its own attributes and usage. Please refer to framework.proto for details.

Most of a Variable's member variables can be set to None, meaning the value is not available or will be specified later.

Examples

In Static Graph Mode:

import paddle.fluid as fluid
cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
                                    shape=[-1, 23, 48],
                                    dtype='float32')


In Dygraph Mode:

import paddle.fluid as fluid
import numpy as np

with fluid.dygraph.guard():
    new_variable = fluid.dygraph.to_variable(np.arange(10))
detach()
Notes:

This API is ONLY available in Dygraph mode

Returns a new Variable, detached from the current graph.

Returns

The detached Variable.

Return type

Variable, with the same dtype as the current Variable

Examples

import paddle.fluid as fluid
from paddle.fluid.dygraph.base import to_variable
from paddle.fluid.dygraph import Linear
import numpy as np

data = np.random.uniform(-1, 1, [30, 10, 32]).astype('float32')
with fluid.dygraph.guard():
    linear = Linear(32, 64)
    data = to_variable(data)
    x = linear(data)
    y = x.detach()
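
A quick self-contained check of the result (a minimal sketch, assuming that detach marks its result with stop_gradient=True):

import paddle.fluid as fluid
import numpy as np

with fluid.dygraph.guard():
    x = fluid.dygraph.to_variable(np.ones([2, 2], dtype='float32'))
    y = x.detach()
    # The detached Variable carries the same data as the source but is
    # excluded from gradient computation (assumed: stop_gradient is True).
    print(y.stop_gradient)         # expected: True
    print((y.numpy() == 1).all())  # holds the same data as x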
numpy()
Notes:

This API is ONLY available in Dygraph mode

Returns a numpy array that shows the value of the current Variable

Returns

The numpy value of current Variable.

Return type

ndarray, with the same dtype as the current Variable

Examples

import paddle.fluid as fluid
from paddle.fluid.dygraph.base import to_variable
from paddle.fluid.dygraph import Linear
import numpy as np

data = np.random.uniform(-1, 1, [30, 10, 32]).astype('float32')
with fluid.dygraph.guard():
    linear = Linear(32, 64)
    data = to_variable(data)
    x = linear(data)
    print(x.numpy())
set_value(value)
Notes:

This API is ONLY available in Dygraph mode

Set a new value for this Variable.

Parameters

value (Variable|np.ndarray) – the new value.

Examples

import paddle.fluid as fluid
from paddle.fluid.dygraph.base import to_variable
from paddle.fluid.dygraph import Linear
import numpy as np

data = np.ones([3, 1024], dtype='float32')
with fluid.dygraph.guard():
    linear = fluid.dygraph.Linear(1024, 4)
    t = to_variable(data)
    linear(t)  # call with default weight
    custom_weight = np.random.randn(1024, 4).astype("float32")
    linear.weight.set_value(custom_weight)  # change existing weight
    out = linear(t)  # call with different weight
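
Per the signature above, value may also be another Variable; a minimal sketch (assuming the new value matches the existing shape):

import paddle.fluid as fluid
from paddle.fluid.dygraph.base import to_variable
import numpy as np

with fluid.dygraph.guard():
    t = to_variable(np.zeros([2, 2], dtype='float32'))
    v = to_variable(np.ones([2, 2], dtype='float32'))
    t.set_value(v)    # overwrite t's data with v's data
    print(t.numpy())  # [[1. 1.] [1. 1.]]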
backward(backward_strategy=None)
Notes:

This API is ONLY available in Dygraph mode

Run backward on the current graph, starting from the current Variable.

Parameters

backward_strategy (BackwardStrategy) – The strategy to use for the backward run

Returns

None

Return type

NoneType

Examples

import paddle.fluid as fluid
import numpy as np

x = np.ones([2, 2], np.float32)
with fluid.dygraph.guard():
    inputs2 = []
    for _ in range(10):
        tmp = fluid.dygraph.base.to_variable(x)
        # If we don't set tmp's stop_gradient to False, no path to the loss
        # will have a gradient, since nothing on the path requires one.
        tmp.stop_gradient = False
        inputs2.append(tmp)
    ret2 = fluid.layers.sums(inputs2)
    loss2 = fluid.layers.reduce_sum(ret2)
    backward_strategy = fluid.dygraph.BackwardStrategy()
    backward_strategy.sort_sum_gradient = True
    loss2.backward(backward_strategy)
gradient()
Notes:

This API is ONLY available in Dygraph mode

Get the gradient of the current Variable.

Returns

If the Variable's type is LoDTensor, returns the numpy value of the gradient of the current Variable. If the Variable's type is SelectedRows, returns a tuple of ndarrays: the first element is the numpy value of the gradient of the current Variable, and the second is the numpy value of its rows.

Return type

ndarray or tuple of ndarray

Examples

import paddle.fluid as fluid
import numpy as np

# example1: return ndarray
x = np.ones([2, 2], np.float32)
with fluid.dygraph.guard():
    inputs2 = []
    for _ in range(10):
        tmp = fluid.dygraph.base.to_variable(x)
        tmp.stop_gradient=False
        inputs2.append(tmp)
    ret2 = fluid.layers.sums(inputs2)
    loss2 = fluid.layers.reduce_sum(ret2)
    backward_strategy = fluid.dygraph.BackwardStrategy()
    backward_strategy.sort_sum_gradient = True
    loss2.backward(backward_strategy)
    print(loss2.gradient())

# example2: return tuple of ndarray
with fluid.dygraph.guard():
    embedding = fluid.dygraph.Embedding(
        size=[20, 32],
        param_attr='emb.w',
        is_sparse=True)
    x_data = np.arange(12).reshape(4, 3).astype('int64')
    x_data = x_data.reshape((-1, 3, 1))
    x = fluid.dygraph.base.to_variable(x_data)
    out = embedding(x)
    out.backward()
    print(embedding.weight.gradient())
clear_gradient()
Notes:

1. This API is ONLY available in Dygraph mode

2. Use it only when the Variable has a gradient. Normally we use this for Parameters, since other temporary Variables will be deleted by Python's GC (a training-loop sketch follows the example below).

Clear (set to 0) the gradient of the current Variable.

Returns: None

Examples

import paddle.fluid as fluid
import numpy as np

x = np.ones([2, 2], np.float32)
with fluid.dygraph.guard():
    inputs2 = []
    for _ in range(10):
        tmp = fluid.dygraph.base.to_variable(x)
        tmp.stop_gradient=False
        inputs2.append(tmp)
    ret2 = fluid.layers.sums(inputs2)
    loss2 = fluid.layers.reduce_sum(ret2)
    backward_strategy = fluid.dygraph.BackwardStrategy()
    backward_strategy.sort_sum_gradient = True
    loss2.backward(backward_strategy)
    print(loss2.gradient())
    loss2.clear_gradient()
    print("After clear {}".format(loss2.gradient()))
to_string(throw_on_error, with_details=False)

Get the debug string.

Parameters
  • throw_on_error (bool) – Whether to raise an exception when self is not initialized.

  • with_details (bool) – More details about Variables and Parameters (e.g. trainable, optimize_attr, …) will be printed when with_details is True. Default value is False.

Returns

The debug string.

Return type

str

Examples

import paddle.fluid as fluid

cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
                                    shape=[-1, 23, 48],
                                    dtype='float32')
print(new_variable.to_string(True))
print("=============with detail===============")
print(new_variable.to_string(True, True))
stop_gradient

Indicating whether gradient computation stops at the current Variable

Notes: In Dygraph mode, this property defaults to True for a Variable, while a Parameter's default value is False. In Static Graph Mode, however, the default stop_gradient value of all Variables is False.

Examples

import paddle.fluid as fluid
import numpy as np

with fluid.dygraph.guard():
    value0 = np.arange(26).reshape(2, 13).astype("float32")
    value1 = np.arange(6).reshape(2, 3).astype("float32")
    value2 = np.arange(10).reshape(2, 5).astype("float32")
    linear = fluid.dygraph.Linear(13, 5, dtype="float32")
    linear2 = fluid.dygraph.Linear(3, 3, dtype="float32")
    a = fluid.dygraph.to_variable(value0)
    b = fluid.dygraph.to_variable(value1)
    c = fluid.dygraph.to_variable(value2)
    out1 = linear(a)
    out2 = linear2(b)
    out1.stop_gradient = True
    out = fluid.layers.concat(input=[out1, out2, c], axis=1)
    out.backward()

    assert linear.weight.gradient() is None
    assert (out1.gradient() == 0).all()
persistable

Indicating whether the current Variable should be long-term alive

Notes: This property will be deprecated; it exists only to help users understand the concept.

1. persistable is False for all Variables except Parameters (see the sketch after the example below).

2. In Dygraph mode, this property should not be changed.

Examples

import paddle.fluid as fluid
cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
                                    shape=[-1, 23, 48],
                                    dtype='float32')
print("persistable of current Var is: {}".format(new_variable.persistable))
grad_name

Indicating name of the gradient Variable of current Variable.

Notes: This is a read-only property. It simply returns the name of the gradient Variable based on a naming convention, and does not guarantee that the gradient exists.

Examples

import paddle.fluid as fluid

x = fluid.data(name="x", shape=[-1, 23, 48], dtype='float32')
print(x.grad_name)  # output is "x@GRAD"

name

Indicating name of current Variable

Notes: If two or more Variables in the same Block share the same name, they will share content in non-Dygraph mode. This is how Parameter sharing is achieved (see the sketch after the example below).

Examples

import paddle.fluid as fluid
cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
                                    shape=[-1, 23, 48],
                                    dtype='float32')
print("name of current Var is: {}".format(new_variable.name))
shape

Indicating shape of current Variable

Notes: This is a read-only property

Examples

import paddle.fluid as fluid
cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
                                    shape=[-1, 23, 48],
                                    dtype='float32')
print("shape of current Var is: {}".format(new_variable.shape))
dtype

Indicating data type of current Variable

Notes: This is a read-only property

Examples

import paddle.fluid as fluid
cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
                                    shape=[-1, 23, 48],
                                    dtype='float32')
print("Dtype of current Var is: {}".format(new_variable.dtype))
lod_level

Indicating the LoD info of the current Variable. Please refer to api_fluid_LoDTensor_en for the meaning of LoD

Notes:

1. This is a read-only property

2. This property is not supported in Dygraph mode; its value will be 0 (int)

Examples

import paddle.fluid as fluid
cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
                                    shape=[-1, 23, 48],
                                    dtype='float32')
print("LoD Level of current Var is: {}".format(new_variable.lod_level))
type

Indicating Type of current Variable

Notes: This is a read-only property

Examples

import paddle.fluid as fluid
cur_program = fluid.Program()
cur_block = cur_program.current_block()
new_variable = cur_block.create_var(name="X",
                                    shape=[-1, 23, 48],
                                    dtype='float32')
print("Type of current Var is: {}".format(new_variable.type))
astype(dtype)
Notes:

The variable must be a Tensor

Cast a variable to a specified data type.

Parameters
  • self (Variable) – The source variable

  • dtype – The target data type

Returns

Variable with new dtype

Return type

Variable

Examples

In Static Graph Mode:

import paddle.fluid as fluid

startup_prog = fluid.Program()
main_prog = fluid.Program()
with fluid.program_guard(startup_prog, main_prog):
    original_variable = fluid.data(name="new_variable", shape=[2, 2], dtype='float32')
    new_variable = original_variable.astype('int64')
    print("new var's dtype is: {}".format(new_variable.dtype))

In Dygraph Mode:

import paddle.fluid as fluid
import numpy as np

x = np.ones([2, 2], np.float32)
with fluid.dygraph.guard():
    original_variable = fluid.dygraph.to_variable(x)
    print("original var's dtype is: {}, numpy dtype is {}".format(original_variable.dtype, original_variable.numpy().dtype))
    new_variable = original_variable.astype('int64')
    print("new var's dtype is: {}, numpy dtype is {}".format(new_variable.dtype, new_variable.numpy().dtype))