declarative programming (static graph)
inplace_abn(input, act=None, is_test=False, momentum=0.9, epsilon=1e-05, param_attr=None, bias_attr=None, data_layout='NCHW', name=None, moving_mean_name=None, moving_variance_name=None, do_model_average_for_mean_and_var=True, use_global_stats=False, act_alpha=1.0)
In-place Activation Batch Normalization Layer
This layer computes batch normalization and an activation using in-place memory, reusing the input buffer. For the batch normalization computation itself, see fluid.layers.batch_norm. For the in-place technique, see In-Place Activated BatchNorm for Memory-Optimized Training of DNNs.
inplace_abn currently supports only the activation types None, identity, leaky_relu, and elu, and only the data types float32 and float64.
If build_strategy.sync_batch_norm=True, the batch_norm ops in the network automatically use sync_batch_norm. is_test=True can only be used in test and inference programs; is_test CANNOT be set to True in a train program. If you want to use the global statistics of a pre-trained model in a train program, set use_global_stats=True instead.
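To make the computation concrete, below is a minimal NumPy reference sketch of what this layer computes in training mode for NCHW input: per-channel normalization, scale and shift, an activation, and the moving-statistics update given by the momentum formula in the parameter list. This is an illustrative assumption of standard batch-norm semantics, not Paddle's actual in-place kernel (which fuses these steps to reuse the input memory).

```python
import numpy as np

def abn_reference(x, scale, bias, moving_mean, moving_var,
                  momentum=0.9, epsilon=1e-5, act_alpha=1.0):
    # Hypothetical NumPy reference for training-mode NCHW behavior.
    # Normalize each channel over the (N, H, W) axes.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    y = (x - mean) / np.sqrt(var + epsilon)
    # Per-channel scale (gamma) and bias (beta).
    y = scale.reshape(1, -1, 1, 1) * y + bias.reshape(1, -1, 1, 1)
    # leaky_relu with negative slope act_alpha (act_alpha=1.0 is identity).
    y = np.where(y >= 0, y, act_alpha * y)
    # Moving-statistics update, matching the documented formula:
    # moving_mean = moving_mean * momentum + new_mean * (1 - momentum)
    new_mean = moving_mean * momentum + mean.ravel() * (1. - momentum)
    new_var = moving_var * momentum + var.ravel() * (1. - momentum)
    return y, new_mean, new_var
```

With scale initialized to ones and bias to zeros, the output of the normalization step has approximately zero mean and unit variance per channel, which is what the real layer produces before the activation.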
input (Variable) – The input variable, whose rank can be 2, 3, 4, or 5. The data type is float32 or float64.
act (string, Default None) – Activation type. Only None, identity, leaky_relu, and elu are supported.
is_test (bool, Default False) – A flag indicating whether it is in the test phase or not.
momentum (float|Variable, Default 0.9) – The value used for the moving_mean and moving_var computation. This should be a float number or a Variable with shape [1] and data type float32. The update formulas are: \(moving\_mean = moving\_mean * momentum + new\_mean * (1. - momentum)\), \(moving\_var = moving\_var * momentum + new\_var * (1. - momentum)\). Default is 0.9.
epsilon (float, Default 1e-05) – A value added to the denominator for numerical stability. Default is 1e-5.
param_attr (ParamAttr|None) – The parameter attribute for the scale parameter of inplace_abn. If it is None or an instance of ParamAttr, inplace_abn creates a ParamAttr as param_attr; the name of the scale parameter can be set in the ParamAttr. If the Initializer of param_attr is not set, the parameter is initialized with Xavier. Default: None.
bias_attr (ParamAttr|None) – The parameter attribute for the bias of inplace_abn. If it is None or an instance of ParamAttr, inplace_abn creates a ParamAttr as bias_attr; the name of the bias can be set in the ParamAttr. If the Initializer of bias_attr is not set, the bias is initialized to zero. Default: None.
data_layout (str, optional) – Specify the data format of the input, and the data format of the output will be consistent with that of the input. An optional string from: “NCHW”, “NHWC”. The default is “NCHW”. When it is “NCHW”, the data is stored in the order of: [batch_size, input_channels, input_height, input_width].
name (str|None) – For detailed information, please refer to Name. Usually name does not need to be set and is None by default.
moving_mean_name (str, Default None) – The name of the moving_mean which stores the global mean. If it is set to None, inplace_abn saves the global mean under a random name; otherwise, it saves the global mean under the given string.
moving_variance_name (str, Default None) – The name of the moving_variance which stores the global variance. If it is set to None, inplace_abn saves the global variance under a random name; otherwise, it saves the global variance under the given string.
do_model_average_for_mean_and_var (bool, Default True) – Whether the mean and variance parameters should be included in model averaging when model averaging is enabled.
use_global_stats (bool, Default False) – Whether to use the global mean and variance. In inference or test mode, setting use_global_stats to True or is_test to True is equivalent. In train mode, when use_global_stats is True, the global mean and variance are also used during training.
act_alpha (float, Default 1.0) – When the activation is one of ['elu', 'identity', 'leaky_relu'], in-place activated batch normalization is used, and the alpha parameter for the activation can be given by this parameter.
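The role of act_alpha in the supported activations can be sketched as follows. This is an assumed NumPy illustration of the standard leaky_relu and elu definitions with their alpha parameter, not Paddle's internal implementation:

```python
import numpy as np

def activation(y, act, act_alpha=1.0):
    # Sketch of the activation types inplace_abn supports and how
    # act_alpha parameterizes them (assumed standard definitions).
    if act is None or act == 'identity':
        return y
    if act == 'leaky_relu':
        # act_alpha is the negative-region slope.
        return np.where(y >= 0, y, act_alpha * y)
    if act == 'elu':
        # act_alpha scales the exponential negative region.
        return np.where(y >= 0, y, act_alpha * (np.exp(y) - 1.))
    raise ValueError('inplace_abn supports only None, identity, '
                     'leaky_relu, and elu')
```

Note that with act_alpha=1.0 (the default), leaky_relu degenerates to the identity, so a nonzero slope such as 0.2 must be passed explicitly, as in the example below.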
A Variable holding a Tensor that is the result of applying batch normalization and the activation to the input; it has the same shape and data type as the input.
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[3, 7, 3, 7], dtype='float32')
hidden1 = fluid.layers.fc(input=x, size=200, param_attr='fc1.w')
hidden2 = fluid.layers.inplace_abn(input=hidden1)
hidden3 = fluid.layers.inplace_abn(input=hidden2, act='leaky_relu', act_alpha=0.2)