# Tensor¶

`Tensor` is the most fundamental data structure in Paddle. There are several ways to create a Tensor:

• Create a Tensor from pre-existing `data`: see to_tensor

• Create a Tensor with a given `shape`: see ones, zeros, full

• Create a Tensor with the same `shape` and `dtype` as another Tensor: see ones_like, zeros_like, full_like

## dtype¶

```
import paddle

x = paddle.to_tensor(1.0)
print("tensor's type is: {}".format(x.dtype))
```


## name¶

```
import paddle

print("Tensor name: ", paddle.to_tensor(1).name)
# Tensor name: generated_tensor_0
```

## ndim¶

```
import paddle

print("Tensor's number of dimensions: ", paddle.to_tensor([[1, 2], [3, 4]]).ndim)
# Tensor's number of dimensions: 2
```

## persistable¶

```
import paddle

print("Whether Tensor is persistable: ", paddle.to_tensor(1).persistable)
# Whether Tensor is persistable: False
```

## place¶

```
import paddle

cpu_tensor = paddle.to_tensor(1, place=paddle.CPUPlace())
print(cpu_tensor.place)    # CPUPlace
```

## shape¶

```
import paddle

print("Tensor's shape: ", paddle.to_tensor([[1, 2], [3, 4]]).shape)
# Tensor's shape: [2, 2]
```


## astype(dtype)¶

• dtype (str) - the target dtype; supported values are 'bool', 'float16', 'float32', 'float64', 'int8', 'int16', 'int32', 'int64' and 'uint8'.

```
import paddle

x = paddle.to_tensor(1.0)
print("original tensor's dtype is: {}".format(x.dtype))
print("new tensor's dtype is: {}".format(x.astype('float64').dtype))
```

## backward(retain_graph=False)¶

• retain_graph (bool, optional) - If False, the graph used to compute gradients is freed after backward() finishes. Set it to True if you need to add more operations after calling backward(), in which case the backward graph is kept. Leaving it as False saves memory. Default: False.

```
import paddle
import numpy as np

x = np.ones([2, 2], np.float32)
inputs = []
for _ in range(10):
    tmp = paddle.to_tensor(x)
    # if we don't set tmp's stop_gradient to False, then no path to loss
    # will have a gradient, since nothing on it requires one.
    tmp.stop_gradient = False
    inputs.append(tmp)
ret = paddle.add_n(inputs)
loss = paddle.sum(ret)
loss.backward()
```

## chunk(chunks, axis=0, name=None)¶

Split the Tensor into `chunks` sub-Tensors along dimension `axis`.

```
import paddle

x = paddle.rand([3, 9, 5])
out0, out1, out2 = x.chunk(3, axis=1)
print(out0.shape)  # [3, 3, 5]
```

## clone()¶

```
import paddle

x = paddle.to_tensor(1.0, stop_gradient=False)
clone_x = x.clone()
y = clone_x**2
y.backward()
print(x.grad)  # the gradient flows back to x through the clone

x = paddle.to_tensor(1.0)
clone_x = x.clone()
clone_x.stop_gradient = False
z = clone_x**3
z.backward()
print(clone_x.grad)
```

## cosh(name=None)¶

```
import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = x.cosh()
print(out)
# [1.08107237 1.02006674 1.00500417 1.04533851]
```

## cpu()¶

```
import paddle

x = paddle.to_tensor(1.0, place=paddle.CUDAPlace(0))
print(x.place)    # CUDAPlace(0)

y = x.cpu()
print(y.place)    # CPUPlace
```

## cuda(device_id=None, blocking=False)¶

• device_id (int, optional) - the device id of the target GPU. Defaults to None, which means the current Tensor's device id, or 0 if the current Tensor is not on a GPU.

• blocking (bool, optional) - If False and the current Tensor is in pinned memory, the host-to-device copy is asynchronous; otherwise the copy is synchronous. Default: False.

```
import paddle

x = paddle.to_tensor(1.0, place=paddle.CPUPlace())
print(x.place)    # CPUPlace

y = x.cuda()
print(y.place)    # CUDAPlace(0)
```

## detach()¶

```
import paddle
import numpy as np

data = np.random.uniform(-1, 1, [30, 10, 32]).astype('float32')
linear = paddle.nn.Linear(32, 64)
x = linear(paddle.to_tensor(data))
y = x.detach()
```

## dim()¶

```
import paddle

print("Tensor's number of dimensions: ", paddle.to_tensor([[1, 2], [3, 4]]).dim())
# Tensor's number of dimensions: 2
```

## gradient()¶

Same as `Tensor.grad`: returns the gradient of a Tensor, with data type numpy.ndarray.

```
import paddle

x = paddle.to_tensor(5., stop_gradient=False)
y = paddle.to_tensor(3., stop_gradient=False)
z = x * y
z.backward()
print("tensor's grad is: {}".format(x.gradient()))
```

## ndimension()¶

```
import paddle

print("Tensor's number of dimensions: ", paddle.to_tensor([[1, 2], [3, 4]]).ndimension())
# Tensor's number of dimensions: 2
```

## numpy()¶

```
import paddle
import numpy as np

data = np.random.uniform(-1, 1, [30, 10, 32]).astype('float32')
linear = paddle.nn.Linear(32, 64)
x = linear(paddle.to_tensor(data))
print(x.numpy())
```

## pin_memory()¶

```
import paddle

x = paddle.to_tensor(1.0, place=paddle.CUDAPlace(0))
print(x.place)      # CUDAPlace(0)

y = x.pin_memory()
print(y.place)      # CUDAPinnedPlace
```

## set_value(value)¶

• value (Tensor|np.ndarray) - the value to set, of type Tensor or numpy.ndarray.

```
import paddle
import numpy as np

linear = paddle.nn.Linear(1024, 4)
data = np.ones([3, 1024], dtype='float32')
input = paddle.to_tensor(data)
linear(input)  # call with default weight
custom_weight = np.random.randn(1024, 4).astype("float32")
linear.weight.set_value(custom_weight)  # change existing weight
out = linear(input)  # call with different weight
```

## sinh(name=None)¶

```
import paddle

x = paddle.to_tensor([-0.4, -0.2, 0.1, 0.3])
out = x.sinh()
print(out)
# [-0.41075233 -0.201336    0.10016675  0.30452029]
```