AdamOptimizer

class paddle.fluid.optimizer.AdamOptimizer(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, parameter_list=None, regularization=None, grad_clip=None, name=None, lazy_mode=False) [source]
The Adam optimizer uses the optimization technique described at the end of Section 2 of the Adam paper. It dynamically adjusts the learning rate of each parameter using the 1st and 2nd moment estimates of the gradient.
The parameter param_out is updated with gradient grad according to:

\[
\begin{aligned}
t & = t + 1\\
moment\_1\_out & = \beta_1 * moment\_1 + (1 - \beta_1) * grad\\
moment\_2\_out & = \beta_2 * moment\_2 + (1 - \beta_2) * grad * grad\\
learning\_rate & = learning\_rate * \frac{\sqrt{1 - \beta_2^t}}{1 - \beta_1^t}\\
param\_out & = param - learning\_rate * \frac{moment\_1}{\sqrt{moment\_2} + \epsilon}
\end{aligned}
\]

Related paper: Adam: A Method for Stochastic Optimization
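For intuition, here is a minimal NumPy sketch of a single update step for one parameter, mirroring the equations above. The function and variable names are illustrative; the real update runs as a fused operator inside the framework and uses the freshly updated moments.

import numpy as np

def adam_step(param, grad, moment_1, moment_2, t,
              learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08):
    # one Adam update, following the equations above
    t = t + 1
    moment_1_out = beta1 * moment_1 + (1 - beta1) * grad
    moment_2_out = beta2 * moment_2 + (1 - beta2) * grad * grad
    lr_t = learning_rate * np.sqrt(1 - beta2**t) / (1 - beta1**t)
    # the updated moments are used for the parameter update
    param_out = param - lr_t * moment_1_out / (np.sqrt(moment_2_out) + epsilon)
    return param_out, moment_1_out, moment_2_out, t

# example: a single step on a 3-element parameter with zero-initialized moments
param = np.array([0.1, -0.2, 0.3])
grad = np.array([0.01, 0.02, -0.03])
param, m1, m2, t = adam_step(param, grad, np.zeros(3), np.zeros(3), t=0)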
- Parameters
    learning_rate (float|Variable, optional) – The learning rate used to update Parameter. It can be a float value or a Variable with a float data type. The default value is 0.001.
    beta1 (float|Variable, optional) – The exponential decay rate for the 1st moment estimates. It should be a float number or a Variable with shape [1] and data type float32. The default value is 0.9.
    beta2 (float|Variable, optional) – The exponential decay rate for the 2nd moment estimates. It should be a float number or a Variable with shape [1] and data type float32. The default value is 0.999.
    epsilon (float, optional) – A small float value for numerical stability. The default value is 1e-08.
    parameter_list (Iterable, optional) – Iterable of Variable names to update in order to minimize loss. This parameter is required in dygraph mode. The default value is None in static mode, in which case all parameters are updated.
    regularization (WeightDecayRegularizer, optional) – The regularization strategy. There are two methods: api_fluid_regularizer_L1Decay and api_fluid_regularizer_L2Decay. If a parameter has already set a regularizer using api_fluid_ParamAttr, the regularization setting here in the optimizer is ignored for that parameter; otherwise, the regularization setting here takes effect. Default None, meaning there is no regularization.
    grad_clip (GradientClipBase, optional) – Gradient clipping strategy; an instance of a class derived from GradientClipBase. There are three clipping strategies: api_fluid_clip_GradientClipByGlobalNorm, api_fluid_clip_GradientClipByNorm, and api_fluid_clip_GradientClipByValue. Default None, meaning there is no gradient clipping.
    name (str, optional) – Normally there is no need for the user to set this property. For more information, please refer to Name. The default value is None.
    lazy_mode (bool, optional) – The official Adam algorithm has two moving-average accumulators, and every element of both accumulators is updated at every step, in both dense mode and sparse mode. If a parameter is very large, this update can be very slow. Lazy mode only updates the elements that have a gradient in the current mini-batch, which is much faster, but it has different semantics from the original Adam algorithm and may lead to different results. The default value is False. A sketch of a typical sparse-embedding use is shown below.
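As a rough illustration of when lazy_mode helps, the following sketch enables lazy updates for a sparse embedding lookup; the vocabulary size, names, and loss are illustrative assumptions, not part of the original documentation.

import paddle.fluid as fluid

main = fluid.Program()
startup = fluid.Program()
with fluid.program_guard(main, startup):
    # integer word ids; only the looked-up rows receive gradients
    words = fluid.data(name='words', shape=[None, 1], dtype='int64')
    emb = fluid.layers.embedding(input=words, size=[10000, 64], is_sparse=True)
    loss = fluid.layers.reduce_mean(emb)

    # lazy_mode=True updates only the embedding rows that received gradients
    # in the current mini-batch, instead of every row at every step
    adam = fluid.optimizer.AdamOptimizer(learning_rate=0.01, lazy_mode=True)
    adam.minimize(loss)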
Examples
import paddle
import paddle.fluid as fluid

place = fluid.CPUPlace()
main = fluid.Program()
with fluid.program_guard(main):
    x = fluid.data(name='x', shape=[None, 13], dtype='float32')
    y = fluid.data(name='y', shape=[None, 1], dtype='float32')
    y_predict = fluid.layers.fc(input=x, size=1, act=None)
    cost = fluid.layers.square_error_cost(input=y_predict, label=y)
    avg_cost = fluid.layers.mean(cost)

    adam_optimizer = fluid.optimizer.AdamOptimizer(0.01)
    adam_optimizer.minimize(avg_cost)

    fetch_list = [avg_cost]
    train_reader = paddle.batch(
        paddle.dataset.uci_housing.train(), batch_size=1)
    feeder = fluid.DataFeeder(place=place, feed_list=[x, y])
    exe = fluid.Executor(place)
    exe.run(fluid.default_startup_program())
    for data in train_reader():
        exe.run(main, feed=feeder.feed(data), fetch_list=fetch_list)
# Adam with beta1/beta2 as Variable
import paddle
import paddle.fluid as fluid
import paddle.fluid.layers.learning_rate_scheduler as lr_scheduler

place = fluid.CPUPlace()
main = fluid.Program()
with fluid.program_guard(main):
    x = fluid.data(name='x', shape=[None, 13], dtype='float32')
    y = fluid.data(name='y', shape=[None, 1], dtype='float32')
    y_predict = fluid.layers.fc(input=x, size=1, act=None)
    cost = fluid.layers.square_error_cost(input=y_predict, label=y)
    avg_cost = fluid.layers.mean(cost)

    # define beta decay variable
    def get_decayed_betas(beta1_init, beta2_init, decay_steps, decay_rate):
        global_step = lr_scheduler._decay_step_counter()

        beta1 = fluid.layers.create_global_var(
            shape=[1],
            value=float(beta1_init),
            dtype='float32',
            # set persistable for save checkpoints and resume
            persistable=True,
            name="beta1")
        beta2 = fluid.layers.create_global_var(
            shape=[1],
            value=float(beta2_init),
            dtype='float32',
            # set persistable for save checkpoints and resume
            persistable=True,
            name="beta2")
        div_res = global_step / decay_steps
        decayed_beta1 = beta1_init * (decay_rate**div_res)
        decayed_beta2 = beta2_init * (decay_rate**div_res)
        fluid.layers.assign(decayed_beta1, beta1)
        fluid.layers.assign(decayed_beta2, beta2)
        return beta1, beta2

    beta1, beta2 = get_decayed_betas(0.9, 0.99, 1e5, 0.9)
    adam_optimizer = fluid.optimizer.AdamOptimizer(
        learning_rate=0.01,
        beta1=beta1,
        beta2=beta2)
    adam_optimizer.minimize(avg_cost)

    fetch_list = [avg_cost]
    train_reader = paddle.batch(
        paddle.dataset.uci_housing.train(), batch_size=1)
    feeder = fluid.DataFeeder(place=place, feed_list=[x, y])
    exe = fluid.Executor(place)
    exe.run(fluid.default_startup_program())
    for data in train_reader():
        exe.run(main, feed=feeder.feed(data), fetch_list=fetch_list)
clear_gradients()
Clear the gradients of all optimized parameters of the model. If this is not called, new gradients will accumulate on top of the previous gradients.
- Returns
-
None
Examples
import paddle.fluid as fluid
import numpy as np

with fluid.dygraph.guard():
    value = np.arange(26).reshape(2, 13).astype("float32")
    a = fluid.dygraph.to_variable(value)
    linear = fluid.Linear(13, 5, dtype="float32")
    # This can be any optimizer supported by dygraph.
    adam = fluid.optimizer.Adam(learning_rate=0.01,
                                parameter_list=linear.parameters())
    out = linear(a)
    out.backward()
    adam.minimize(out)
    adam.clear_gradients()
current_step_lr()
- Api_attr
    imperative
Get the learning rate of the current step. When LearningRateDecay is not used, the return value is the same for every step; otherwise, the learning rate of the current step is returned.
- Returns
-
The learning rate of the current step.
- Return type
-
float
Examples
import paddle.fluid as fluid
import numpy as np

# example1: LearningRateDecay is not used, return value is all the same
with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding([10, 10])
    adam = fluid.optimizer.Adam(0.001, parameter_list=emb.parameters())
    lr = adam.current_step_lr()
    print(lr)  # 0.001

# example2: PiecewiseDecay is used, return the step learning rate
with fluid.dygraph.guard():
    inp = np.random.uniform(-0.1, 0.1, [10, 10]).astype("float32")
    linear = fluid.dygraph.nn.Linear(10, 10)
    inp = fluid.dygraph.to_variable(inp)
    out = linear(inp)
    loss = fluid.layers.reduce_mean(out)

    bd = [2, 4, 6, 8]
    value = [0.2, 0.4, 0.6, 0.8, 1.0]
    adam = fluid.optimizer.Adam(fluid.dygraph.PiecewiseDecay(bd, value, 0),
                                parameter_list=linear.parameters())

    # first step: learning rate is 0.2
    np.allclose(adam.current_step_lr(), 0.2, rtol=1e-06, atol=0.0)  # True

    # learning rate for different steps
    ret = [0.2, 0.2, 0.4, 0.4, 0.6, 0.6, 0.8, 0.8, 1.0, 1.0, 1.0, 1.0]
    for i in range(12):
        adam.minimize(loss)
        lr = adam.current_step_lr()
        np.allclose(lr, ret[i], rtol=1e-06, atol=0.0)  # True
minimize(loss, startup_program=None, parameter_list=None, no_grad_set=None)
Add operations to minimize loss by updating parameter_list.
- Parameters
    loss (Variable) – A Variable containing the value to minimize.
    startup_program (Program, optional) – api_fluid_Program for initializing parameters in parameter_list. The default value is None, in which case api_fluid_default_startup_program is used.
    parameter_list (Iterable, optional) – Iterable of Variable or Variable.name to update in order to minimize loss. The default value is None, in which case all parameters are updated.
    no_grad_set (set, optional) – Set of Variable or Variable.name that do not need to be updated. The default value is None.
- Returns
    tuple (optimize_ops, params_grads): a list of operators appended by minimize and a list of (param, grad) Variable pairs, where param is the Parameter and grad is the gradient value corresponding to the parameter. The returned tuple can be passed to fetch_list in Executor.run() to indicate program pruning; if so, the program will be pruned by feed and fetch_list before running. See details in Executor.
- Return type
    tuple
Examples
Please refer to the example of the current Optimizer; a minimal sketch is also given below.
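The following is a minimal static-graph sketch that reuses the fluid workflow from the class-level example above; the network, names, and feed data are illustrative. It shows the tuple returned by minimize.

import numpy as np
import paddle.fluid as fluid

place = fluid.CPUPlace()
main = fluid.Program()
startup = fluid.Program()
with fluid.program_guard(main, startup):
    x = fluid.data(name='x', shape=[None, 13], dtype='float32')
    y = fluid.data(name='y', shape=[None, 1], dtype='float32')
    y_predict = fluid.layers.fc(input=x, size=1, act=None)
    cost = fluid.layers.square_error_cost(input=y_predict, label=y)
    avg_cost = fluid.layers.mean(cost)

    adam = fluid.optimizer.AdamOptimizer(learning_rate=0.01)
    # minimize appends the backward and optimization operators to `main`
    # and returns them together with the (param, grad) pairs
    optimize_ops, params_grads = adam.minimize(avg_cost, startup_program=startup)

exe = fluid.Executor(place)
exe.run(startup)
feed = {'x': np.random.rand(4, 13).astype('float32'),
        'y': np.random.rand(4, 1).astype('float32')}
loss_val, = exe.run(main, feed=feed, fetch_list=[avg_cost])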
set_dict(state_dict)
Load the optimizer state dict. For the Adam optimizer, this contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be changed as well.
- Parameters
-
state_dict (dict) – Dict containing all the Variables needed by the optimizer
- Returns
-
None
Examples
import paddle
import paddle.fluid as fluid

paddle.disable_static()
emb = paddle.nn.Embedding(10, 10)

state_dict = emb.state_dict()
fluid.save_dygraph(state_dict, "paddle_dy")

scheduler = paddle.optimizer.lr.NoamDecay(
    d_model=0.01, warmup_steps=100, verbose=True)
adam = paddle.optimizer.Adam(
    learning_rate=scheduler,
    parameters=emb.parameters())
state_dict = adam.state_dict()
fluid.save_dygraph(state_dict, "paddle_dy")

para_state_dict, opti_state_dict = fluid.load_dygraph("paddle_dy")
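To complete the round trip sketched above, the loaded optimizer state would typically be restored with set_dict; this continuation is an assumption that reuses the names from the example, not part of the original snippet.

# restore the optimizer state that was just loaded
adam.set_dict(opti_state_dict)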
set_lr(value)
- Api_attr
    imperative
Set the value of the learning rate manually in the optimizer. If the optimizer uses LearningRateDecay, this API cannot be invoked, because it would lead to a conflict.
- Parameters
-
value (float|Variable) – the value of the learning rate
- Returns
-
None
Examples
import paddle.fluid as fluid

with fluid.dygraph.guard():
    linear = fluid.dygraph.nn.Linear(10, 10)
    adam = fluid.optimizer.Adam(0.1, parameter_list=linear.parameters())

    # set learning rate manually by python float value
    lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
    for i in range(5):
        adam.set_lr(lr_list[i])
        lr = adam.current_step_lr()
        print("current lr is {}".format(lr))
    # Print:
    #    current lr is 0.2
    #    current lr is 0.3
    #    current lr is 0.4
    #    current lr is 0.5
    #    current lr is 0.6

    # set learning rate manually by framework Variable
    lr_var = fluid.layers.create_global_var(
        shape=[1], value=0.7, dtype='float32')
    adam.set_lr(lr_var)
    lr = adam.current_step_lr()
    print("current lr is {}".format(lr))
    # Print:
    #    current lr is 0.7
set_state_dict(state_dict)
Load the optimizer state dict. For the Adam optimizer, this contains beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be changed as well.
- Parameters
-
state_dict (dict) – Dict containing all the Variables needed by the optimizer
- Returns
-
None
Examples
import paddle
import paddle.fluid as fluid

paddle.disable_static()
emb = paddle.nn.Embedding(10, 10)

state_dict = emb.state_dict()
fluid.save_dygraph(state_dict, "paddle_dy")

scheduler = paddle.optimizer.lr.NoamDecay(
    d_model=0.01, warmup_steps=100, verbose=True)
adam = paddle.optimizer.Adam(
    learning_rate=scheduler,
    parameters=emb.parameters())
state_dict = adam.state_dict()
fluid.save_dygraph(state_dict, "paddle_dy")

para_state_dict, opti_state_dict = fluid.load_dygraph("paddle_dy")
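As with set_dict, the loaded optimizer state would typically be restored afterwards; this continuation is an assumption that reuses the names from the example above.

# restore the optimizer state that was just loaded
adam.set_state_dict(opti_state_dict)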
state_dict()
Get state dict information from the optimizer. It contains all the Variables used by the optimizer. For the Adam optimizer, this includes beta1, beta2, momentum, etc. If LearningRateDecay has been used, global_step will be included in the state dict. If the optimizer has never been called (no minimize), the state dict is empty.
- Parameters
    None
- Returns
    dict containing all the Variables used by the optimizer
- Return type
    state_dict (dict)
Examples
import paddle.fluid as fluid

with fluid.dygraph.guard():
    emb = fluid.dygraph.Embedding([10, 10])
    adam = fluid.optimizer.Adam(0.001, parameter_list=emb.parameters())
    state_dict = adam.state_dict()