# reduce_all¶

`paddle.fluid.layers.``reduce_all`(input, dim=None, keep_dim=False, name=None)[source]

This OP computes the `logical and` of tensor elements over the given dimension, and outputs the result.

Parameters
• input (Variable) – The input variable which is a Tensor or LoDTensor, the input data type should be bool.

• dim (list|int, optional) – The dimension along which the logical and is computed. If `None`, compute the logical and over all elements of `input` and return a Tensor variable with a single element; otherwise each value must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank(input) + dim[i]\). The default value is None.

• keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. If `keep_dim` is False, the result tensor will have one fewer dimension than `input` for each reduced dimension. The default value is False.

• name (str|None) – A name for this layer (optional). If set to None, the layer will be named automatically. The default value is None.

Returns

The reduced tensor variable with the `logical and` computed over the given dimension(s).

Return type

Variable, the output data type is bool.

Examples

```python
import paddle.fluid as fluid
import numpy as np

# x is a bool Tensor variable with the following elements:
#    [[True, False],
#     [True, True]]
x = fluid.layers.assign(np.array([[1, 0], [1, 1]], dtype='int32'))
x = fluid.layers.cast(x, 'bool')

out = fluid.layers.reduce_all(x)  # False
out = fluid.layers.reduce_all(x, dim=0)  # [True, False]
out = fluid.layers.reduce_all(x, dim=-1)  # [False, True]
# keep_dim=False, x.shape=(2,2), out.shape=(2,)

out = fluid.layers.reduce_all(x, dim=1, keep_dim=True)  # [[False], [True]]
# keep_dim=True, x.shape=(2,2), out.shape=(2,1)
```
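For readers more familiar with NumPy, the reduction semantics above parallel `np.all`: `dim` corresponds to `axis` and `keep_dim` to `keepdims`. The sketch below is an analogy for checking expected values, not a use of the Paddle API:

```python
import numpy as np

x = np.array([[True, False],
              [True, True]])

# Reduce over all elements, like reduce_all(x).
print(bool(np.all(x)))                   # False

# Reduce along axis 0 (column-wise), like reduce_all(x, dim=0).
print(np.all(x, axis=0))                 # [ True False]

# Reduce along the last axis (row-wise), like reduce_all(x, dim=-1).
print(np.all(x, axis=-1))                # [False  True]

# Keep the reduced dimension, like reduce_all(x, dim=1, keep_dim=True).
print(np.all(x, axis=1, keepdims=True))  # [[False] [ True]], shape (2, 1)
```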