ParamAttr

class paddle.fluid.ParamAttr(name=None, initializer=None, learning_rate=1.0, regularizer=None, trainable=True, do_model_average=True)[source]

Create an object to represent the attributes of a parameter. The attributes are: name, initializer, learning rate, regularizer, trainable, gradient clip, and model average.

Note

gradient_clip of ParamAttr HAS BEEN DEPRECATED since 2.0. It is recommended to set grad_clip in the optimizer to clip gradients instead. There are three clipping strategies: GradientClipByGlobalNorm , GradientClipByNorm , GradientClipByValue .
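For reference, a minimal sketch of the recommended replacement (assuming a fluid version whose optimizers accept the grad_clip argument, i.e. 1.7 or later; the clip_norm and learning_rate values are illustrative):

import paddle.fluid as fluid

# Clip gradients by their global norm via the optimizer, instead of the
# deprecated gradient_clip attribute of ParamAttr.
clip = fluid.clip.GradientClipByGlobalNorm(clip_norm=1.0)
sgd = fluid.optimizer.SGD(learning_rate=0.01, grad_clip=clip)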

Parameters
  • name (str, optional) – The parameter’s name. Default None, meaning that the name will be generated automatically.

  • initializer (Initializer, optional) – The method used to initialize this parameter. Default None, meaning that the weight parameter is initialized with the Xavier initializer and the bias parameter is initialized to 0.

  • learning_rate (float) – The parameter’s learning rate. The effective learning rate during optimization is the global learning rate multiplied by the parameter’s learning rate and by the factor of the learning rate scheduler. Default 1.0.

  • regularizer (WeightDecayRegularizer, optional) – Regularization strategy. There are two methods: L1Decay and L2Decay . If a regularizer is also set in the optimizer (such as SGDOptimizer ), the regularizer setting in the optimizer is ignored for this parameter; see the sketch after this parameter list. Default None, meaning there is no regularization.

  • trainable (bool) – Whether this parameter is trainable. Default True.

  • do_model_average (bool) – Whether model averaging should be applied to this parameter when model averaging is enabled. Default True, matching the constructor signature above.
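A minimal sketch of the regularizer precedence rule (the parameter names and coefficients are illustrative, not from the original docs): the L1Decay set on the parameter overrides the optimizer-level L2Decay for fc_w only.

import paddle.fluid as fluid

# Parameter-level regularizer: takes precedence for this parameter.
w_attr = fluid.ParamAttr(name="fc_w",
                         regularizer=fluid.regularizer.L1Decay(0.005))
x = fluid.data(name='x', shape=[None, 8], dtype='float32')
out = fluid.layers.fc(input=x, size=4, param_attr=w_attr)
loss = fluid.layers.reduce_mean(out)

# Optimizer-level regularizer: applied to all parameters that do not
# set their own regularizer in ParamAttr.
sgd = fluid.optimizer.SGD(learning_rate=0.01,
                          regularization=fluid.regularizer.L2Decay(0.01))
sgd.minimize(loss)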

Examples

import paddle.fluid as fluid

# Build a parameter attribute with a custom name, a scaled learning
# rate, and L2 weight decay.
w_param_attrs = fluid.ParamAttr(name="fc_weight",
                                learning_rate=0.5,
                                regularizer=fluid.regularizer.L2Decay(1.0),
                                trainable=True)
print(w_param_attrs.name)  # "fc_weight"

# Apply the attribute to the weight of a fully connected layer.
x = fluid.data(name='X', shape=[None, 1], dtype='float32')
y_predict = fluid.layers.fc(input=x, size=10, param_attr=w_param_attrs)
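
The initializer can be set explicitly in the same way. A minimal sketch (the parameter names and the constant value are illustrative): the weight keeps the default Xavier initialization while the bias is initialized to 0.1 instead of 0.

import paddle.fluid as fluid

# Initialize the weight with Xavier and the bias with a constant 0.1,
# instead of relying on the defaults described above.
w_attr = fluid.ParamAttr(name="fc_w2",
                         initializer=fluid.initializer.Xavier())
b_attr = fluid.ParamAttr(name="fc_b2",
                         initializer=fluid.initializer.Constant(value=0.1))
x2 = fluid.data(name='x2', shape=[None, 4], dtype='float32')
out = fluid.layers.fc(input=x2, size=2, param_attr=w_attr, bias_attr=b_attr)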