# control_flow¶

## array_length¶

`paddle.fluid.layers.array_length(array)` [source]

• array (LOD_TENSOR_ARRAY) - the input array whose length is to be computed

```import paddle.fluid as fluid
tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
arr = fluid.layers.array_write(tmp, i=i)
arr_len = fluid.layers.array_length(arr)
```
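Conceptually, a tensor array behaves like a Python list of tensors, and `array_length` returns how many entries it holds. A minimal plain-Python sketch of these semantics (a hypothetical helper, not the fluid op itself):

```python
# Plain-Python analogy: the LOD_TENSOR_ARRAY acts like a list of
# tensors, and array_length simply reports the number of entries.
def array_length(tensor_array):
    return len(tensor_array)

arr = [[0] * 10, [1] * 10, [2] * 10]  # an array holding three "tensors"
print(array_length(arr))  # 3
```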

`paddle.fluid.layers.array_read(array, i)` [source]

```Given:
array = [0.6, 0.1, 0.3, 0.1]
And:
i = 2
Then:
output = 0.3
```

• array (Variable|list) - the input tensor array that stores the data to read
• i (Variable|list) - the index of the data within the array

```import paddle.fluid as fluid
array = fluid.layers.create_array(dtype='float32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
item = fluid.layers.array_read(array, i)
```
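The Given/Then example above reduces to plain list indexing; a minimal sketch of the read semantics (hypothetical helper, not the fluid API):

```python
# Plain-Python analogy of array_read: return the entry stored at index i.
def array_read(tensor_array, i):
    return tensor_array[i]

array = [0.6, 0.1, 0.3, 0.1]
print(array_read(array, 2))  # 0.3
```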

## array_write¶

`paddle.fluid.layers.array_write(x, i, array=None)` [source]

• x (Variable|list) – the input tensor to be written into the array
• i (Variable|list) – the index in the output `LOD_TENSOR_ARRAY` at which the input tensor `x` is written
• array (Variable|list) – the output `LOD_TENSOR_ARRAY` that the input tensor `x` is written into. If None, a new `LOD_TENSOR_ARRAY` is created and returned

```import paddle.fluid as fluid
tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
arr = fluid.layers.array_write(tmp, i=i)
```
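The write semantics described above, including creation of a fresh array when `array=None`, can be sketched in plain Python (hypothetical helper, not the fluid API):

```python
# Plain-Python analogy of array_write: store x at index i, creating the
# array if needed and growing it so that index i exists.
def array_write(x, i, array=None):
    if array is None:
        array = []
    while len(array) <= i:
        array.append(None)  # pad until index i is addressable
    array[i] = x
    return array

arr = array_write([0] * 10, i=10)
print(len(arr))  # 11 entries: indices 0..10
```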

## create_array¶

`paddle.fluid.layers.create_array(dtype)` [source]

• dtype (str) — the data type of the elements stored in the lod_tensor_array, e.g. 'float32'.

```import paddle.fluid as fluid
data = fluid.layers.create_array(dtype='float32')
```

## DynamicRNN¶

class `paddle.fluid.layers.DynamicRNN(name=None)` [source]

A memory is used to cache segmented data. The initial value of a memory can be zero or another variable.

```import paddle.fluid as fluid

sentence = fluid.layers.data(name='sentence', shape=[1], dtype='int64', lod_level=1)
embedding = fluid.layers.embedding(input=sentence, size=[65536, 32], is_sparse=True)

drnn = fluid.layers.DynamicRNN()
with drnn.block():
    word = drnn.step_input(embedding)
    prev = drnn.memory(shape=[200])
    hidden = fluid.layers.fc(input=[word, prev], size=200, act='relu')
    drnn.update_memory(prev, hidden)  # set prev to hidden
    drnn.output(hidden)

# Get the RNN output and take the last timestep's encoded result
rnn_output = drnn()
last = fluid.layers.sequence_last_step(rnn_output)
```
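The step-input/memory/update pattern in the block above can be sketched as a plain-Python loop, with scalars standing in for the tensors (`run_rnn` is a hypothetical helper, not part of fluid):

```python
# Plain-Python sketch of the DynamicRNN pattern: at each timestep,
# combine the current input with the previous memory, then make the
# new hidden state the memory for the next step.
def run_rnn(steps, init_mem=0.0):
    mem, outputs = init_mem, []
    for x in steps:
        hidden = x + mem        # stand-in for the fc layer
        mem = hidden            # update_memory(prev, hidden)
        outputs.append(hidden)  # drnn.output(hidden)
    return outputs

print(run_rnn([1.0, 2.0, 3.0]))  # [1.0, 3.0, 6.0]
```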
`step_input(x, level=0)`

• x (Variable) - the input sequence, which carries LoD information
• level (int) - the LoD level used to split the steps; default 0

`static_input(x)`

• x (Variable) - the input sequence

```import paddle.fluid as fluid

sentence = fluid.layers.data(name='sentence', dtype='float32', shape=[32], lod_level=1)
encoder_proj = fluid.layers.data(name='encoder_proj', dtype='float32', shape=[32], lod_level=1)
decoder_boot = fluid.layers.data(name='boot', dtype='float32', shape=[10], lod_level=1)

drnn = fluid.layers.DynamicRNN()
with drnn.block():
    current_word = drnn.step_input(sentence)
    encoder_word = drnn.static_input(encoder_proj)
    hidden_mem = drnn.memory(init=decoder_boot, need_reorder=True)
    fc_1 = fluid.layers.fc(input=encoder_word, size=30, bias_attr=False)
    fc_2 = fluid.layers.fc(input=current_word, size=30, bias_attr=False)
    decoder_inputs = fc_1 + fc_2
    h, _, _ = fluid.layers.gru_unit(input=decoder_inputs, hidden=hidden_mem, size=30)
    drnn.update_memory(hidden_mem, h)
    out = fluid.layers.fc(input=h, size=10, bias_attr=True, act='softmax')
    drnn.output(out)

rnn_output = drnn()
```
`block()`

`memory(init=None, shape=None, value=0.0, need_reorder=False, dtype='float32')`

```import paddle.fluid as fluid

sentence = fluid.layers.data(name='sentence', shape=[32], dtype='float32', lod_level=1)
boot_memory = fluid.layers.data(name='boot', shape=[10], dtype='float32', lod_level=1)

drnn = fluid.layers.DynamicRNN()
with drnn.block():
    word = drnn.step_input(sentence)
    memory = drnn.memory(init=boot_memory, need_reorder=True)
    hidden = fluid.layers.fc(input=[word, memory], size=10, act='tanh')
    drnn.update_memory(ex_mem=memory, new_mem=hidden)
    drnn.output(hidden)

rnn_output = drnn()
```

```import paddle.fluid as fluid

sentence = fluid.layers.data(name='sentence', dtype='float32', shape=[32], lod_level=1)

drnn = fluid.layers.DynamicRNN()
with drnn.block():
    word = drnn.step_input(sentence)
    memory = drnn.memory(shape=[10], dtype='float32', value=0)
    hidden = fluid.layers.fc(input=[word, memory], size=10, act='tanh')
    drnn.update_memory(ex_mem=memory, new_mem=hidden)
    drnn.output(hidden)

rnn_output = drnn()
```

• init (Variable|None) – the variable used to initialize the memory
• shape (list|tuple) – the shape of the memory; the shape does not include batch_size
• value (float) – the initialization value
• need_reorder (bool) – set to True when the memory initialization depends on the input samples
• dtype (str|numpy.dtype) – the data type used to initialize the memory

`update_memory(ex_mem, new_mem)`

• ex_mem (memory Variable) - the memory variable
• new_mem (memory Variable) - the plain variable produced inside the RNN block

`output(*outputs)`

• *outputs - the output variables.

## equal¶

`paddle.fluid.layers.equal(x, y, cond=None)` [source]

This layer returns the truth value of \(x == y\), computed elementwise.

• x (Variable) - the first operand of equal
• y (Variable) - the second operand of equal
• cond (Variable|None) - optional output variable that stores the result of equal

```import paddle.fluid as fluid
label = fluid.layers.data(name="label", shape=[3,10,32,32], dtype="float32")
limit = fluid.layers.data(name="limit", shape=[3,10,32,32], dtype="float32")
out = fluid.layers.equal(x=label, y=limit)
```
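The elementwise truth-value computation can be sketched in plain Python, operating on flat lists (hypothetical helper, not the fluid op):

```python
# Plain-Python sketch of elementwise equal on two same-shape sequences:
# compare the entries pairwise and collect the boolean results.
def elementwise_equal(x, y):
    return [a == b for a, b in zip(x, y)]

print(elementwise_equal([1, 2, 3], [1, 0, 3]))  # [True, False, True]
```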

## greater_equal¶

`paddle.fluid.layers.greater_equal(x, y, cond=None)` [source]

• x (Variable) - the first operand of greater_equal
• y (Variable) - the second operand of greater_equal
• cond (Variable|None) - optional output variable that stores the result of greater_equal

```import paddle.fluid as fluid
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=5)
out = fluid.layers.greater_equal(x=label, y=limit)
```

## greater_than¶

`paddle.fluid.layers.greater_than(x, y, cond=None)` [source]

• x (Variable) - the first operand of greater_than
• y (Variable) - the second operand of greater_than
• cond (Variable|None) - optional output variable that stores the result of greater_than

```import paddle.fluid as fluid
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=5)
out = fluid.layers.greater_than(x=label, y=limit)
```

## IfElse¶

class `paddle.fluid.layers.IfElse(cond, name=None)` [source]

if-else control flow.

• cond (Variable) - the condition used for branching
• name (str, default None) - the name of this layer

```import paddle.fluid as fluid

image = fluid.layers.data(name="X", shape=[2, 5, 5], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
limit = fluid.layers.fill_constant_batch_size_like(
    input=label, dtype='int64', shape=[1], value=5.0)
cond = fluid.layers.less_than(x=label, y=limit)
ie = fluid.layers.IfElse(cond)
with ie.true_block():
    true_image = ie.input(image)
    hidden = fluid.layers.fc(input=true_image, size=100, act='tanh')
    prob = fluid.layers.fc(input=hidden, size=10, act='softmax')
    ie.output(prob)

with ie.false_block():
    false_image = ie.input(image)
    hidden = fluid.layers.fc(
        input=false_image, size=200, act='tanh')
    prob = fluid.layers.fc(input=hidden, size=10, act='softmax')
    ie.output(prob)
prob = ie()
```

## increment¶

`paddle.fluid.layers.increment(x, value=1.0, in_place=True)` [source]

The number of elements in `x` must be 1.

• x (Variable|list) – the tensor that holds the input value
• value (float) – the amount to add to the variable `x`
• in_place (bool) – whether to perform the operation on `x` itself; True updates `x` in place, False returns an incremented copy

```import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[1], dtype='float32',
                         append_batch_size=False)
data = fluid.layers.increment(x=data, value=3.0, in_place=True)
```

## is_empty¶

`paddle.fluid.layers.is_empty(x, cond=None)` [source]

• x (Variable) - the variable to test
• cond (Variable|None) - optional output parameter that stores the test result for the given x; default None

```import paddle.fluid as fluid
input = fluid.layers.data(name="input", shape=[4, 32, 32], dtype="float32")
res = fluid.layers.is_empty(x=input)
# or:
# fluid.layers.is_empty(x=input, cond=res)
```
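The test itself is just an emptiness check; a plain-Python sketch (hypothetical helper, not the fluid op):

```python
# Plain-Python sketch of is_empty: a "tensor" is empty when it holds
# no elements at all.
def is_empty(x):
    return len(x) == 0

print(is_empty([]))      # True
print(is_empty([1, 2]))  # False
```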

## less_equal¶

`paddle.fluid.layers.less_equal(x, y, cond=None)` [source]

• x (Variable) - the first operand of less_equal
• y (Variable) - the second operand of less_equal
• cond (Variable|None) - optional output variable that stores the result of less_equal

```import paddle.fluid as fluid
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=5)
out = fluid.layers.less_equal(x=label, y=limit)
```

## less_than¶

`paddle.fluid.layers.less_than(x, y, force_cpu=None, cond=None)` [source]

• x (Variable) – the left-hand operand of the `less_than` operation
• y (Variable) – the right-hand operand of the `less_than` operation
• force_cpu (bool) – if True, the output variable is forced into CPU memory; otherwise it is written on the current device. Default True
• cond (Variable|None) – optional variable to store the `less_than` result; if None, an output variable is created automatically

```import paddle.fluid as fluid
label = fluid.layers.data(name='y', shape=[1], dtype='int64')
limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=5)
cond = fluid.layers.less_than(x=label, y=limit)
```

## not_equal¶

`paddle.fluid.layers.not_equal(x, y, cond=None)` [source]

• x (Variable) - the first operand of not_equal
• y (Variable) - the second operand of not_equal
• cond (Variable|None) - optional output variable that stores the result of not_equal

```import paddle.fluid as fluid
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=5)
out = fluid.layers.not_equal(x=label, y=limit)
```

## Print¶

`paddle.fluid.layers.Print(input, first_n=-1, message=None, summarize=-1, print_tensor_name=True, print_tensor_type=True, print_tensor_shape=True, print_tensor_lod=True, print_phase='both')` [source]

The Print operator.

• input (Variable) - the tensor to print
• summarize (int) - the number of elements in the tensor to print; -1 prints all elements
• message (str) - a string message printed as a prefix
• first_n (int) - only log the first first_n invocations
• print_tensor_name (bool) - whether to print the tensor name
• print_tensor_type (bool) - whether to print the tensor type
• print_tensor_shape (bool) - whether to print the tensor shape
• print_tensor_lod (bool) - whether to print the tensor LoD
• print_phase (str) - the phase in which to print: `forward`, `backward`, or `both`. If set to `backward` or `both`, the gradients of the input tensor are also printed.

```import paddle.fluid as fluid

input = fluid.layers.data(name="input", shape=[4, 32, 32], dtype="float32")
input = fluid.layers.Print(input, message="The content of input layer:")
# value = some_layer(...)
# Print(value, summarize=10,
#     message="The content of some_layer: ")
```

## reorder_lod_tensor_by_rank¶

`paddle.fluid.layers.reorder_lod_tensor_by_rank(x, rank_table)` [source]

```For example:

The fourth sequence in X (the sequence at index 3, and likewise below) becomes the first one in the reordered batch, followed by the original first, third, and second sequences.

```

• x (Variable) - (LoDTensor) the LoD tensor to be reordered according to the given `RankTable`
• rank_table (Variable) - the rank table variable

```import paddle.fluid as fluid
data_desc = (['input', [9], 0], ['ref', [5], 1])
data = fluid.layers.data(name=data_desc[0][0], shape=data_desc[0][1])
rank_data = fluid.layers.data(name=data_desc[1][0], shape=data_desc[1][1])
table = fluid.layers.control_flow.lod_rank_table(rank_data)
new_data = fluid.layers.reorder_lod_tensor_by_rank(
    x=data, rank_table=table)
```

## StaticRNN¶

class `paddle.fluid.layers.StaticRNN(name=None)` [source]

StaticRNN can process a batch of sequence data, where every sample sequence must have the same length. A StaticRNN owns its own parameters, inputs, outputs, and memories. Note that the first dimension of the input denotes the sequence length, and all input sequences must have the same length. Each axis of the input and the output carries the same meaning.

```import paddle.fluid as fluid

vocab_size, hidden_size=10000, 200
x = fluid.layers.data(name="x", shape=[-1, 1, 1], dtype='int64')
x_emb = fluid.layers.embedding(
    input=x,
    size=[vocab_size, hidden_size],
    dtype='float32',
    is_sparse=False)
x_emb = fluid.layers.transpose(x_emb, perm=[1, 0, 2])

rnn = fluid.layers.StaticRNN()
with rnn.step():
    word = rnn.step_input(x_emb)
    prev = rnn.memory(shape=[-1, hidden_size], batch_ref=word)
    hidden = fluid.layers.fc(input=[word, prev], size=hidden_size, act='relu')
    rnn.update_memory(prev, hidden)  # set prev to hidden
    rnn.step_output(hidden)

result = rnn()
```

StaticRNN unrolls the sequence into time steps. The user defines how each time step is handled inside the with-block of step().

StaticRNN can mark multiple variables as its outputs. Use rnn() to obtain the output sequence.

`step()`

`memory(init=None, shape=None, batch_ref=None, init_value=0.0, init_batch_dim_idx=0, ref_batch_dim_idx=1)`

• init (Variable|None) - an initialized variable; if not set, shape and batch_ref must be provided. Default None
• shape (list|tuple) - the shape of the boot memory; note that it does not include batch_size. Default None
• batch_ref (Variable|None) - the batch reference variable. Default None
• init_value (float) - the initialization value of the boot memory. Default 0.0
• init_batch_dim_idx (int) - the batch_size axis of the init variable. Default 0
• ref_batch_dim_idx (int) - the batch_size axis of the batch_ref variable

```import paddle.fluid as fluid

vocab_size, hidden_size=10000, 200
x = fluid.layers.data(name="x", shape=[-1, 1, 1], dtype='int64')
x_emb = fluid.layers.embedding(
    input=x,
    size=[vocab_size, hidden_size],
    dtype='float32',
    is_sparse=False)
x_emb = fluid.layers.transpose(x_emb, perm=[1, 0, 2])

rnn = fluid.layers.StaticRNN()
with rnn.step():
    word = rnn.step_input(x_emb)
    prev = rnn.memory(shape=[-1, hidden_size], batch_ref=word)
    hidden = fluid.layers.fc(input=[word, prev], size=hidden_size, act='relu')
    rnn.update_memory(prev, hidden)
```
`step_input(x)`

• x (Variable) – the input sequence; the shape of x should be [seq_len, ...].

`step_output(o)`

• o (Variable) – the output sequence

`output(*outputs)`

• *outputs – the output variables

`update_memory(mem, var)`

• mem (Variable) – the memory variable
• var (Variable) – a plain variable produced inside the RNN block

## Switch¶

class `paddle.fluid.layers.Switch(name=None)` [source]

The Switch class implements a control flow very similar to if-elif-else. It can, for example, be used in a learning rate scheduler to adjust the learning rate.

```Semantically,
1. the switch control flow checks the cases one by one;
2. the condition of each case is a boolean held in a scalar variable;
3. it runs the branch after the first matching case; if no case matches but a default case exists, the statements after the default case are executed;
4. once a case matches, only the branch corresponding to that case is executed.
```

```import paddle.fluid as fluid

lr = fluid.layers.create_global_var(
    shape=[1],
    value=0.0,
    dtype='float32',
    persistable=True,
    name="learning_rate")
zero_var = fluid.layers.fill_constant(
    shape=[1], dtype='float32', value=0.0)
one_var = fluid.layers.fill_constant(
    shape=[1], dtype='float32', value=1.0)
two_var = fluid.layers.fill_constant(
    shape=[1], dtype='float32', value=2.0)

global_step = fluid.layers.autoincreased_step_counter(
    counter_name='@LR_DECAY_COUNTER@', begin=0, step=1)

with fluid.layers.control_flow.Switch() as switch:
    with switch.case(global_step == zero_var):
        fluid.layers.assign(input=one_var, output=lr)
    with switch.default():
        fluid.layers.assign(input=two_var, output=lr)
```
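The case-by-case semantics above map onto an ordinary if/else chain; a plain-Python sketch of the schedule in the example (`schedule_lr` is a hypothetical helper, with the same step values assumed):

```python
# Plain-Python analogy of the Switch example: the first matching case
# wins, otherwise the default branch runs.
def schedule_lr(global_step):
    if global_step == 0:   # switch.case(global_step == zero_var)
        return 1.0         # assign one_var to lr
    else:                  # switch.default()
        return 2.0         # assign two_var to lr

print(schedule_lr(0), schedule_lr(7))  # 1.0 2.0
```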

## While¶

class `paddle.fluid.layers.While(cond, is_test=False, name=None)` [source]

• cond (Variable) – the condition used for comparison
• is_test (bool) – whether the block is executed in the test phase
• name (str) - the name of this layer

```import paddle.fluid as fluid

i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=0)
d0 = fluid.layers.data("d0", shape=[10], dtype='float32')
data_array = fluid.layers.array_write(x=d0, i=i)
array_len = fluid.layers.fill_constant(shape=[1],dtype='int64', value=3)

cond = fluid.layers.less_than(x=i, y=array_len)
while_op = fluid.layers.While(cond=cond)
with while_op.block():
    i = fluid.layers.increment(x=i, value=1, in_place=True)

    fluid.layers.less_than(x=i, y=array_len, cond=cond)
```
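The graph above builds the same control flow as this plain-Python loop, with scalars standing in for the one-element tensors:

```python
# Plain-Python equivalent of the While example's control flow:
# increment i until the loop condition i < array_len turns false.
i, array_len = 0, 3
while i < array_len:
    i += 1  # fluid.layers.increment(x=i, value=1, in_place=True)
print(i)  # 3
```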