Activation Function

The activation function introduces non-linearity into the neural network.

PaddlePaddle Fluid supports most of the commonly used activation functions, including:

api_fluid_layers_relu, api_fluid_layers_tanh, api_fluid_layers_sigmoid, api_fluid_layers_elu, api_fluid_layers_relu6, api_fluid_layers_pow, api_fluid_layers_stanh, api_fluid_layers_hard_sigmoid, api_fluid_layers_swish, api_fluid_layers_prelu, api_fluid_layers_brelu, api_fluid_layers_leaky_relu, api_fluid_layers_soft_relu, api_fluid_layers_thresholded_relu, api_fluid_layers_maxout, api_fluid_layers_logsigmoid, api_fluid_layers_hard_shrink, api_fluid_layers_softsign, api_fluid_layers_softplus, api_fluid_layers_tanh_shrink, api_fluid_layers_softshrink, api_fluid_layers_exp.
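
Each of these is exposed as a layer interface that takes the input tensor directly. Below is a minimal sketch, assuming a Fluid 1.x static-graph environment; the input name and shape are illustrative assumptions:

import paddle.fluid as fluid

# Input variable for the static graph (name and shape are illustrative).
data = fluid.layers.data(name="data", shape=[3, 32, 32], dtype="float32")

# Each activation is a one-argument layer interface under fluid.layers.
relu_out = fluid.layers.relu(data)
tanh_out = fluid.layers.tanh(data)
sigmoid_out = fluid.layers.sigmoid(data)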

Fluid provides two ways to use activation functions:

  • If a layer interface provides an act parameter (default None), we can specify the activation function applied to the layer's output through this parameter. This mode supports the common activation functions relu, tanh, sigmoid, and identity.

conv2d = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3, act="relu")

  • Fluid also provides a separate interface for each activation, which we can call explicitly; a runnable sketch combining both styles follows this list.

conv2d = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3)
relu1 = fluid.layers.relu(conv2d)
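
For completeness, the sketch below builds both styles in one small program and executes it. It assumes a Fluid 1.x static-graph environment; the variable names, input shape, and CPU place are illustrative assumptions, not part of the original example:

import numpy as np
import paddle.fluid as fluid

# Input variable for the static graph (name and shape are illustrative).
data = fluid.layers.data(name="data", shape=[3, 32, 32], dtype="float32")

# Style 1: fuse the activation into the layer via the act parameter.
conv_a = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3, act="relu")

# Style 2: build the layer without an activation, then call the activation layer explicitly.
conv_b = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3)
relu_b = fluid.layers.relu(conv_b)

# Run the program on randomly generated input.
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
x = np.random.random((1, 3, 32, 32)).astype("float32")
out_a, out_b = exe.run(fluid.default_main_program(),
                       feed={"data": x},
                       fetch_list=[conv_a, relu_b])
print(out_a.shape, out_b.shape)  # both (1, 2, 30, 30)

Both styles produce the same computation here; the act parameter is simply a shorthand that fuses the activation into the preceding layer, while the explicit interface is useful when the activation is not covered by act or needs to be applied on its own.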