# Dynamic Graph

## 1. Environment Setup

```python
import paddle
import paddle.nn.functional as F
import numpy as np

print(paddle.__version__)
```
```
2.3.0
```

## 2. Basic Usage

```python
a = paddle.randn([4, 2])
b = paddle.arange(1, 3, dtype="float32")

print(a)
print(b)

# elementwise add with broadcasting: [4, 2] + [2] -> [4, 2]
c = a + b
print(c)

# matrix-vector product: [4, 2] x [2] -> [4]
d = paddle.matmul(a, b)
print(d)
```
```
W0509 14:36:44.526748   119 gpu_context.cc:278] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 10.1
W0509 14:36:44.531500   119 gpu_context.cc:306] device: 0, cuDNN Version: 7.6.

Tensor(shape=[4, 2], dtype=float32, place=Place(gpu:0), stop_gradient=True,
       [[ 0.02909037,  0.31686500],
        [ 1.21520174,  0.43905804],
        [ 0.29906181,  1.46106803],
        [ 0.16497211, -1.44989705]])
Tensor(shape=[2], dtype=float32, place=Place(gpu:0), stop_gradient=True,
       [1., 2.])
Tensor(shape=[4, 2], dtype=float32, place=Place(gpu:0), stop_gradient=True,
       [[1.02909040, 2.31686497],
        [2.21520185, 2.43905807],
        [1.29906178, 3.46106815],
        [1.16497207, 0.55010295]])
Tensor(shape=[4], dtype=float32, place=Place(gpu:0), stop_gradient=True,
       [ 0.66282034,  2.09331775,  3.22119784, -2.73482203])
```
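The elementwise addition above relies on broadcasting: the shape-`[2]` tensor `b` is stretched to match every row of the shape-`[4, 2]` tensor `a`, and `matmul` of `[4, 2]` with `[2]` contracts the last axis to give shape `[4]`. Paddle follows NumPy-style broadcasting, so the same shape behavior can be sketched with NumPy alone (a standalone check with made-up values, not part of the original example):

```python
import numpy as np

# Same shapes as the Paddle example: a is [4, 2], b is [2].
a = np.array([[0.0, 1.0],
              [2.0, 3.0],
              [4.0, 5.0],
              [6.0, 7.0]], dtype=np.float32)
b = np.arange(1, 3, dtype=np.float32)  # [1., 2.]

# Broadcasting: b is added to every row of a.
c = a + b

# Matrix-vector product contracts the last axis: [4, 2] @ [2] -> [4].
d = a @ b

print(c.shape, d.shape)  # (4, 2) (4,)
```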

## 3. Using Python Control Flow

```python
a = paddle.to_tensor(np.array([1, 2, 3]))
b = paddle.to_tensor(np.array([4, 5, 6]))

for i in range(10):
    # draw a random scalar each iteration; the branch is decided eagerly
    r = paddle.rand([1])
    if r > 0.5:
        c = paddle.pow(a, i) + b
        print("{} +> {}".format(i, c.numpy()))
    else:
        c = paddle.pow(a, i) - b
        print("{} -> {}".format(i, c.numpy()))
```
```
0 -> [-3 -4 -5]
1 -> [-3 -3 -3]
2 +> [ 5  9 15]
3 -> [-3  3 21]
4 +> [ 5 21 87]
5 +> [  5  37 249]
6 +> [  5  69 735]
7 +> [   5  133 2193]
8 +> [   5  261 6567]
9 +> [    5   517 19689]
```
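Because execution is eager, the condition `r > 0.5` is evaluated with a concrete value on every iteration, so each run of the loop can take a different sequence of branches. The same pattern can be sketched in plain NumPy with a fixed, hypothetical sequence of "coin flips" standing in for the per-iteration random draw:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Hypothetical pre-drawn values standing in for the random scalar r;
# in the real loop these come from the framework's RNG.
rs = [0.3, 0.7, 0.6]

results = []
for i, r in enumerate(rs):
    if r > 0.5:
        c = a ** i + b   # "+>" branch taken when r > 0.5
    else:
        c = a ** i - b   # "->" branch otherwise
    results.append(c)
    print(i, c)
```

With these draws the loop prints `[-3 -4 -5]`, `[5 7 9]`, `[5 9 15]`: the branch taken genuinely depends on runtime values, which a static graph cannot express with a plain Python `if`.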

## 4. Building More Flexible Networks: Control Flow

• Dynamic graphs can be used to build more flexible networks, for example selecting different branch sub-networks via control flow, or conveniently building networks with shared weights. The following concrete example defines a network in which the second linear transformation runs with only 0.5 probability.

• The sequence-to-sequence-with-attention machine translation example shows, in a more realistic setting, the flexibility that dynamic graphs bring to building RNN-style networks.

```python
class MyModel(paddle.nn.Layer):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.linear1 = paddle.nn.Linear(input_size, hidden_size)
        self.linear2 = paddle.nn.Linear(hidden_size, hidden_size)
        self.linear3 = paddle.nn.Linear(hidden_size, 1)

    def forward(self, inputs):
        x = self.linear1(inputs)
        x = F.relu(x)

        # the second linear layer runs with probability 0.5
        if paddle.rand([1]) > 0.5:
            x = self.linear2(x)
            x = F.relu(x)

        x = self.linear3(x)

        return x
```
```python
total_data, batch_size, input_size, hidden_size = 1000, 64, 128, 256

x_data = np.random.randn(total_data, input_size).astype(np.float32)
y_data = np.random.randn(total_data, 1).astype(np.float32)

model = MyModel(input_size, hidden_size)

loss_fn = paddle.nn.MSELoss(reduction="mean")
optimizer = paddle.optimizer.SGD(
    learning_rate=0.01, parameters=model.parameters()
)

for t in range(200 * (total_data // batch_size)):
    idx = np.random.choice(total_data, batch_size, replace=False)
    x = paddle.to_tensor(x_data[idx, :])
    y = paddle.to_tensor(y_data[idx, :])
    y_pred = model(x)

    loss = loss_fn(y_pred, y)
    if t % 200 == 0:
        print(t, loss.numpy())

    loss.backward()
    optimizer.step()
    optimizer.clear_grad()
```
```
0 [1.3522581]
200 [0.64742535]
400 [0.4166624]
600 [0.23887901]
800 [0.07141486]
1000 [0.12339798]
1200 [0.05505134]
1400 [0.03840963]
1600 [0.02036735]
1800 [0.01209428]
2000 [0.00706512]
2200 [0.00202894]
2400 [0.00118904]
2600 [0.0007184]
2800 [0.00157895]
```
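The numbers printed above are mean-squared-error losses. What `loss_fn` computes can be checked by hand with NumPy (a minimal sketch with made-up values, assuming the default `reduction='mean'`):

```python
import numpy as np

def mse(y_pred, y):
    # Mean of squared differences over every element,
    # matching MSELoss(reduction='mean').
    return np.mean((y_pred - y) ** 2)

y = np.array([[1.0], [2.0], [3.0]], dtype=np.float32)
y_pred = np.array([[1.5], [2.0], [2.0]], dtype=np.float32)

# squared errors 0.25, 0.0, 1.0 averaged over 3 elements
print(mse(y_pred, y))
```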

## 5. Building More Flexible Networks: Weight Sharing

• Dynamic graphs also make it easier to build networks with shared weights. The example below shows a simple AutoEncoder whose weights are shared between the encoder and the decoder.

• You can also refer to the image search example for a more realistic use of shared parameter weights.

```python
inputs = paddle.rand((256, 64))

linear = paddle.nn.Linear(64, 8, bias_attr=False)
loss_fn = paddle.nn.MSELoss()
optimizer = paddle.optimizer.Adam(
    learning_rate=0.01, parameters=linear.parameters()
)

for i in range(10):
    hidden = linear(inputs)
    # weight from input to hidden is shared with the linear mapping
    # from hidden to output
    outputs = paddle.matmul(hidden, linear.weight, transpose_y=True)
    loss = loss_fn(outputs, inputs)
    loss.backward()
    print("step: {}, loss: {}".format(i, loss.numpy()))
    optimizer.step()
    optimizer.clear_grad()
```
step: 0, loss: [0.33135626]
step: 1, loss: [0.28557813]
step: 2, loss: [0.2574758]
step: 3, loss: [0.23101914]
step: 4, loss: [0.20071073]
step: 5, loss: [0.168224]
step: 6, loss: [0.13872787]
step: 7, loss: [0.11772966]
step: 8, loss: [0.10650167]
step: 9, loss: [0.1003489]
```
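The tied-weight trick above reuses one parameter matrix in both directions: `linear.weight` encodes the input, and its transpose decodes the hidden representation. A NumPy sketch of the shapes involved, where `W` stands in for `linear.weight` (Paddle stores `Linear` weights as `[in_features, out_features]`):

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.random((256, 64)).astype(np.float32)

# W plays the role of linear.weight: shape [in_features, out_features].
W = rng.standard_normal((64, 8)).astype(np.float32)

hidden = inputs @ W     # encode: [256, 64] @ [64, 8] -> [256, 8]
outputs = hidden @ W.T  # decode with the SAME W: [256, 8] @ [8, 64] -> [256, 64]

print(hidden.shape, outputs.shape)  # (256, 8) (256, 64)
```

Because both the encode and decode steps use the same `W`, gradients from the reconstruction loss flow into a single parameter, halving the parameter count relative to an untied AutoEncoder.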