Activation Function

The activation function introduces non-linearity into the neural network, allowing it to fit non-linear mappings between inputs and outputs.

PaddlePaddle Fluid supports most of the commonly used activation functions, including:

relu, tanh, sigmoid, elu, relu6, pow, stanh, hard_sigmoid, swish, prelu, brelu, leaky_relu, soft_relu, thresholded_relu, maxout, logsigmoid, hard_shrink, softsign, softplus, tanh_shrink, softshrink, exp.
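Most of these activations are simple elementwise formulas. As a point of reference (this is a plain NumPy sketch of the standard definitions, not the Fluid API), relu, sigmoid, tanh, and softplus can be written as:

```python
import numpy as np

def relu(x):
    # relu(x) = max(0, x)
    return np.maximum(0.0, x)

def sigmoid(x):
    # sigmoid(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

def softplus(x):
    # softplus(x) = log(1 + exp(x)), a smooth approximation of relu
    return np.log1p(np.exp(x))

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))       # [0. 0. 3.]
print(sigmoid(0.0))  # 0.5
print(np.tanh(x))    # tanh is available directly in NumPy
```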

Fluid provides two ways to apply an activation function:

  • If a layer interface provides an act parameter (default None), we can specify the layer's activation function through this parameter. This mode supports common activation functions such as relu, tanh, sigmoid, and identity.

conv2d = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3, act="relu")

  • Fluid also provides a separate interface for each activation function, which we can call explicitly:

conv2d = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3)
relu1 = fluid.layers.relu(conv2d)
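The two modes are equivalent: passing act="relu" simply fuses the activation into the layer, so the result matches applying fluid.layers.relu to the layer's un-activated output. A minimal NumPy sketch of that equivalence, using a plain matrix multiply as a stand-in for the convolution (an assumption for illustration, not the Fluid implementation):

```python
import numpy as np

np.random.seed(0)
data = np.random.randn(4, 3)    # stand-in for the layer input
weight = np.random.randn(3, 2)  # stand-in for the layer's learned weights

def relu(x):
    return np.maximum(0.0, x)

# Mode 1: activation fused into the layer (analogous to act="relu")
fused = relu(data @ weight)

# Mode 2: layer first, activation applied explicitly afterwards
linear = data @ weight
separate = relu(linear)

print(np.array_equal(fused, separate))  # True
```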