Fluid describes a neural network configuration in the form of an abstract syntax tree, similar to that of a programming language, and the user's description of the computation is written into a Program. In Fluid, the Program replaces the concept of a model in traditional frameworks. It can describe any complex model through three execution structures: sequential execution, conditional selection, and loop execution. Writing a Program is very close to writing an ordinary program; if you have programmed before, you will naturally apply that expertise here.
A model is a Fluid Program and can contain more than one Program. A Program consists of nested Blocks. The concept of a Block can be likened to a pair of braces in C++ or Java, or to an indentation block in Python. A Block is composed in one of three ways: sequential execution, conditional selection, or loop execution; together these constitute complex computational logic.

A Block contains descriptions of computations and of the objects they operate on. The description of a computation is called an Operator; the object of a computation (that is, the input and output of an Operator) is uniformly represented as a Tensor. In Fluid, a Tensor is represented by a LoD-Tensor with a LoD level of 0.
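The containment relationships just described (a Program holds nested Blocks; a Block holds Operators and the Tensors they read and write) can be sketched with a few toy Python classes. This is a hypothetical illustration of the structure only, not Fluid's actual implementation.

```python
# Toy sketch (NOT Fluid's real classes) of the Program/Block/Operator
# containment described above.

class Operator:
    """Describes one computation, e.g. a multiply or an add."""
    def __init__(self, op_type, inputs, outputs):
        self.op_type = op_type
        self.inputs = inputs      # names of input Tensors
        self.outputs = outputs    # names of output Tensors

class Block:
    """Analogous to a brace-delimited scope: local variables plus a
    sequence of Operators; may hold sub-Blocks for control flow."""
    def __init__(self):
        self.vars = {}        # local Tensor (variable) definitions
        self.ops = []         # Operators, executed in order
        self.sub_blocks = []  # nested Blocks (e.g. a loop body)

class Program:
    """The whole model description: one or more nested Blocks."""
    def __init__(self):
        self.blocks = [Block()]   # block 0 is the global Block

# Describe (not execute) a tiny computation: y = x * W + b.
prog = Program()
blk = prog.blocks[0]
for name in ("x", "W", "b", "tmp", "y"):
    blk.vars[name] = None  # placeholders standing in for Tensors
blk.ops.append(Operator("mul", ["x", "W"], ["tmp"]))
blk.ops.append(Operator("elementwise_add", ["tmp", "b"], ["y"]))

print([op.op_type for op in blk.ops])  # ['mul', 'elementwise_add']
```

Note that the Program only *describes* the computation as data; actually running it is the job of an executor, which is what separates this style from eager, line-by-line execution.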
Block represents the concept of variable scope found in high-level languages. In a programming language, a Block is a pair of braces that contains local variable definitions and a series of instructions or operators. Control flow structures such as if-else and for in programming languages have the following counterparts in deep learning:

    for, while loop       ->  RNN, WhileOp
    if-else, switch       ->  IfElseOp, SwitchOp
    sequential execution  ->  a series of layers
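The loop-to-layers correspondence above can be illustrated with a small plain-Python sketch (this is an analogy only, not Fluid code): applying one step repeatedly in a loop describes the same computation as writing the steps out as a sequence of "layers".

```python
# Plain-Python analogy: a loop form and an unrolled sequential form
# describe the same computation.

def step(x):
    # Stand-in for one layer's computation.
    return 2 * x + 1

# Loop form (the "for, while loop" side of the correspondence).
x = 1
for _ in range(3):
    x = step(x)

# Unrolled sequential form (the "a series of layers" side).
y = step(step(step(1)))

print(x, y)  # both forms compute 15
```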
As mentioned above, a Block in Fluid describes a set of Operators organized by sequential execution, conditional selection, or loop execution, together with the objects the Operators operate on: Tensors.
In Fluid, all operations on data are represented by Operators. In Python, Operators in Fluid are encapsulated into modules such as fluid.layers. This is because some common operations on a Tensor may consist of several more basic operations. For simplicity, the framework internally encapsulates some basic Operators, including the creation of the learnable parameters an Operator relies on, the initialization details of those parameters, and so on, in order to reduce the cost of further development.
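The encapsulation idea can be made concrete with a minimal plain-Python sketch (hypothetical names, no Fluid required): a "fully connected" operation is really a composition of more basic operations, a matrix-vector product plus a bias add, hidden behind a single high-level call.

```python
# Hypothetical sketch of high-level-op encapsulation: fc is built
# from two more basic operations, mul and elementwise_add.

def mul(W, x):
    # Basic op 1: matrix-vector product W @ x.
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def elementwise_add(a, b):
    # Basic op 2: element-wise addition a + b.
    return [u + v for u, v in zip(a, b)]

def fc(x, W, b):
    # The encapsulated high-level op: callers never see mul/add.
    return elementwise_add(mul(W, x), b)

W = [[1.0, 2.0],
     [3.0, 4.0]]
b = [0.5, -0.5]
print(fc([1.0, 1.0], W, b))  # [3.5, 6.5]
```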
For more information, refer to Fluid Design Idea.
A Variable can contain any type of value; in most cases it holds a LoD-Tensor.
All the learnable parameters in a model are kept in memory space in the form of Variables. In most cases, you do not need to create the learnable parameters in the network yourself: Fluid provides encapsulation for almost all common basic computing modules of a neural network. Taking the simplest fully connected model as an example, calling fluid.layers.fc directly creates the two learnable parameters of the fully connected layer, namely the connection weight (W) and the bias, without explicitly calling Variable-related interfaces to create them.
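As a hedged, Fluid-free sketch of what a call like fluid.layers.fc sets up behind the scenes: for an input of dimension D and an output size N, the connection weight W has shape [D, N] and the bias has shape [N]. The helper name below is hypothetical, purely for illustration.

```python
# Hypothetical helper illustrating the shapes of the two learnable
# parameters a fully connected layer creates automatically.

def fc_parameter_shapes(input_dim, size):
    """Return the parameter shapes an fc layer of the given size
    would need for an input of dimension input_dim."""
    return {"W": [input_dim, size], "b": [size]}

shapes = fc_parameter_shapes(input_dim=32, size=10)
print(shapes)  # {'W': [32, 10], 'b': [10]}
```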