LBFGS

class paddle.optimizer.LBFGS(learning_rate=1.0, max_iter=20, max_eval=None, tolerance_grad=1e-07, tolerance_change=1e-09, history_size=100, line_search_fn=None, parameters=None, weight_decay=None, grad_clip=None, name=None) [source]

L-BFGS is a quasi-Newton method for solving an unconstrained optimization problem over a differentiable function. It is closely related to Newton's method for minimization. Consider the iterate update formula:

\[x_{k+1} = x_{k} - H_k \nabla f_k\]

If \(H_k\) is the inverse Hessian of \(f\) at \(x_k\), this is Newton's method. If \(H_k\) is a symmetric, positive-definite approximation of the inverse Hessian, it is a quasi-Newton method. In practice, the approximate inverse Hessian is built from gradients alone, using either the whole search history (BFGS) or only its most recent part (L-BFGS).
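
The approximation is built from the curvature pairs \(s_k = x_{k+1} - x_k\) and \(y_k = \nabla f_{k+1} - \nabla f_k\). With \(\rho_k = 1 / (y_k^{T} s_k)\), the standard BFGS inverse-Hessian update is:

\[H_{k+1} = (I - \rho_k s_k y_k^{T}) H_k (I - \rho_k y_k s_k^{T}) + \rho_k s_k s_k^{T}\]

L-BFGS never stores \(H_k\) explicitly: it keeps only the most recent history_size pairs \((s_k, y_k)\) and reconstructs the product \(H_k \nabla f_k\) from them via the two-loop recursion described in the reference below.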

Reference:

Jorge Nocedal, Stephen J. Wright, Numerical Optimization, Second Edition, 2006. pp. 179: Algorithm 7.5 (L-BFGS).

Parameters
  • learning_rate (float, optional) – learning rate. The default value is 1.0.

  • max_iter (int, optional) – maximal number of iterations per optimization step. The default value is 20.

  • max_eval (int, optional) – maximal number of function evaluations per optimization step. The default value is max_iter * 1.25.

  • tolerance_grad (float, optional) – termination tolerance on first-order optimality. The default value is 1e-07.

  • tolerance_change (float, optional) – termination tolerance on function value/parameter changes. The default value is 1e-9.

  • history_size (int, optional) – update history size. The default value is 100.

  • line_search_fn (string, optional) – either 'strong_wolfe' or None. The default value is None.

  • parameters (list|tuple, optional) – List/Tuple of Tensor to update to minimize loss. This parameter is required in dygraph mode. The default value is None.

  • weight_decay (float|WeightDecayRegularizer, optional) – The strategy of regularization. It can be a float value used as the coefficient of L2 regularization, or a regularizer instance such as L1Decay or L2Decay. If a parameter has already set a regularizer using ParamAttr, the regularization setting here in the optimizer is ignored for that parameter; otherwise, the setting here takes effect. Default None, meaning there is no regularization.

  • grad_clip (GradientClipBase, optional) – Gradient clipping strategy; it is an instance of some derived class of GradientClipBase. There are three clipping strategies (ClipGradByGlobalNorm, ClipGradByNorm, ClipGradByValue). Default None, meaning there is no gradient clipping. A configuration sketch using weight_decay and grad_clip follows this parameter list.

  • name (str, optional) – Normally there is no need for the user to set this property. For more information, please refer to Name. The default value is None.
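
A minimal configuration sketch showing weight_decay and grad_clip passed together with the other arguments; the weight_decay coefficient and clip_norm below are arbitrary illustrative values, not recommended settings.

>>> import paddle
>>> net = paddle.nn.Linear(10, 1)
>>> # Clip gradients by global norm and apply L2 weight decay (illustrative values).
>>> clip = paddle.nn.ClipGradByGlobalNorm(clip_norm=1.0)
>>> opt = paddle.optimizer.LBFGS(
...     learning_rate=1.0,
...     max_iter=20,
...     line_search_fn='strong_wolfe',
...     weight_decay=0.01,
...     grad_clip=clip,
...     parameters=net.parameters(),
... )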

Returns

the final loss returned by the closure.

Return type

loss (Tensor)

Examples

>>> import paddle
>>> import numpy as np

>>> paddle.disable_static()
>>> np.random.seed(0)
>>> np_w = np.random.rand(1).astype(np.float32)
>>> np_x = np.random.rand(1).astype(np.float32)

>>> inputs = [np.random.rand(1).astype(np.float32) for i in range(10)]
>>> # y = 2x
>>> targets = [2 * x for x in inputs]

>>> class Net(paddle.nn.Layer):
...     def __init__(self):
...         super().__init__()
...         w = paddle.to_tensor(np_w)
...         self.w = paddle.create_parameter(shape=w.shape, dtype=w.dtype, default_initializer=paddle.nn.initializer.Assign(w))
...
...     def forward(self, x):
...         return self.w * x
...
>>> net = Net()
>>> opt = paddle.optimizer.LBFGS(
...     learning_rate=1,
...     max_iter=1,
...     max_eval=None,
...     tolerance_grad=1e-07,
...     tolerance_change=1e-09,
...     history_size=100,
...     line_search_fn='strong_wolfe',
...     parameters=net.parameters(),
... )
>>> def train_step(inputs, targets):
...     def closure():
...         outputs = net(inputs)
...         loss = paddle.nn.functional.mse_loss(outputs, targets)
...         print('loss: ', loss.item())
...         opt.clear_grad()
...         loss.backward()
...         return loss
...     opt.step(closure)
...
>>> for input, target in zip(inputs, targets):
...     input = paddle.to_tensor(input)
...     target = paddle.to_tensor(target)
...     train_step(input, target)

state_dict()

state_dict

Returns the state of the optimizer as a dict.

Returns

state, a dict holding current optimization state. Its content differs between optimizer classes.

Examples

>>> import paddle

>>> paddle.disable_static()

>>> net = paddle.nn.Linear(10, 10)
>>> opt = paddle.optimizer.LBFGS(
...     learning_rate=1,
...     max_iter=1,
...     max_eval=None,
...     tolerance_grad=1e-07,
...     tolerance_change=1e-09,
...     history_size=100,
...     line_search_fn='strong_wolfe',
...     parameters=net.parameters(),
... )

>>> def train_step(inputs, targets):
...     def closure():
...         outputs = net(inputs)
...         loss = paddle.nn.functional.mse_loss(outputs, targets)
...         opt.clear_grad()
...         loss.backward()
...         return loss
...
...     return opt.step(closure)
...
>>> inputs = paddle.rand([10, 10], dtype="float32")
>>> targets = paddle.to_tensor([2 * x for x in inputs])

>>> n_iter = 0
>>> while n_iter < 20:
...     loss = train_step(inputs, targets)
...     n_iter = opt.state_dict()["state"]["func_evals"]
...     print("n_iter:", n_iter)
...

step(closure)

step

Performs a single optimization step.

Parameters

closure (callable) – A closure that reevaluates the model and returns the loss.

Examples

>>> import paddle

>>> paddle.disable_static()

>>> inputs = paddle.rand([10, 10], dtype="float32")
>>> targets = paddle.to_tensor([2 * x for x in inputs])

>>> net = paddle.nn.Linear(10, 10)
>>> opt = paddle.optimizer.LBFGS(
...     learning_rate=1,
...     max_iter=1,
...     max_eval=None,
...     tolerance_grad=1e-07,
...     tolerance_change=1e-09,
...     history_size=100,
...     line_search_fn='strong_wolfe',
...     parameters=net.parameters(),
... )

>>> def closure():
...     outputs = net(inputs)
...     loss = paddle.nn.functional.mse_loss(outputs, targets)
...     print("loss:", loss.item())
...     opt.clear_grad()
...     loss.backward()
...     return loss
...
>>> opt.step(closure)

append_regularization_ops(parameters_and_grads, regularization=None)

append_regularization_ops

Create and add backward regularization Operators

Creates and adds backward regularization operators in the BlockDesc. This will add gradients of the regularizer function to the gradients of the parameters and return these modified gradients. This is the same as implementing weight decay in optimizers for regularization.

Parameters
  • parameters_and_grads – A list of (parameters, gradients) pairs that need to be regularized.

  • regularization – A global regularizer. If a parameter does not have its own regularizer set, this global regularizer will be applied to it.

Returns

list of (parameters, gradients) pairs with the regularized gradients

Return type

list[(Variable, Variable)]

Raises

Exception – Unknown regularization type
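
For intuition, L2 weight decay adds a term proportional to the parameter value to its gradient. The following is a minimal eager-mode sketch of that effect only; it is not how this method is implemented, since the method inserts the corresponding operators into the program.

>>> import paddle
>>> coeff = 0.01  # L2 regularization coefficient (illustrative value)
>>> param = paddle.to_tensor([1.0, -2.0, 3.0])
>>> grad = paddle.to_tensor([0.1, 0.2, 0.3])
>>> # The regularized gradient adds coeff * param to the raw gradient.
>>> regularized_grad = grad + coeff * param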

clear_grad(set_to_zero=True)

clear_grad

Clear the gradients of all optimized parameters of the model.

Otherwise, new gradients will accumulate on top of the previous gradients.

There are two ways to clear gradients: set them to zero, or delete them.

Parameters

set_to_zero (bool, optional) – Whether to set the gradients to zero (True) or delete them (False). The default value is True.

Returns

None

Examples

>>> import paddle

>>> a = paddle.arange(26, dtype="float32").reshape([2, 13])
>>> linear = paddle.nn.Linear(13, 5)
>>> # This can be any optimizer supported by dygraph.
>>> adam = paddle.optimizer.Adam(learning_rate = 0.01,
...                             parameters = linear.parameters())
>>> out = linear(a)
>>> out.backward()
>>> adam.step()
>>> adam.clear_grad()
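
Continuing the example above, passing set_to_zero=False deletes the gradient tensors instead of filling them with zeros; a usage sketch based on the parameter described above:

>>> out = linear(a)
>>> out.backward()
>>> adam.step()
>>> adam.clear_grad(set_to_zero=False)  # delete gradients rather than zeroing them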

get_lr()

get_lr

Get the current learning rate of the optimizer. If an LRScheduler is not used, the return value is always the same; if an LRScheduler is used, the return value is the currently scheduled learning rate.

Returns

The current learning rate of optimizer.

Return type

float

Examples

>>> # train on default dynamic graph mode
>>> import paddle
>>> import numpy as np
>>> emb = paddle.nn.Embedding(10, 3)

>>> ## example1: LRScheduler is not used, the return value is always the same
>>> adam = paddle.optimizer.Adam(0.01, parameters = emb.parameters())
>>> for batch in range(10):
...     input = paddle.randint(low=0, high=5, shape=[5])
...     out = emb(input)
...     out.backward()
...     print("Learning rate of step{}: {}".format(batch, adam.get_lr())) # 0.01
...     adam.step()
Learning rate of step0: 0.01
Learning rate of step1: 0.01
Learning rate of step2: 0.01
Learning rate of step3: 0.01
Learning rate of step4: 0.01
Learning rate of step5: 0.01
Learning rate of step6: 0.01
Learning rate of step7: 0.01
Learning rate of step8: 0.01
Learning rate of step9: 0.01

>>> ## example2: StepDecay is used, return the scheduled learning rate
>>> scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.5, step_size=2, gamma=0.1)
>>> adam = paddle.optimizer.Adam(scheduler, parameters = emb.parameters())
>>> for batch in range(10):
...     input = paddle.randint(low=0, high=5, shape=[5])
...     out = emb(input)
...     out.backward()
...     print("Learning rate of step{}: {}".format(batch, adam.get_lr())) # 0.5->0.05...
...     adam.step()
...     scheduler.step()
Learning rate of step0: 0.5
Learning rate of step1: 0.5
Learning rate of step2: 0.05
Learning rate of step3: 0.05
Learning rate of step4: 0.005000000000000001
Learning rate of step5: 0.005000000000000001
Learning rate of step6: 0.0005000000000000001
Learning rate of step7: 0.0005000000000000001
Learning rate of step8: 5.000000000000001e-05
Learning rate of step9: 5.000000000000001e-05

>>> # train on static graph mode
>>> paddle.enable_static()
>>> main_prog = paddle.static.Program()
>>> start_prog = paddle.static.Program()
>>> with paddle.static.program_guard(main_prog, start_prog):
...     x = paddle.static.data(name='x', shape=[None, 10])
...     z = paddle.static.nn.fc(x, 100)
...     loss = paddle.mean(z)
...     scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.5, step_size=2, gamma=0.1)
...     adam = paddle.optimizer.Adam(learning_rate=scheduler)
...     adam.minimize(loss)

>>> exe = paddle.static.Executor()
>>> exe.run(start_prog)
>>> for batch in range(10):
...     print("Learning rate of step{}: {}".format(batch, adam.get_lr())) # 0.5->0.05->0.005...
...     out = exe.run(main_prog, feed={'x': np.random.randn(3, 10).astype('float32')})
...     scheduler.step()
Learning rate of step0: 0.5
Learning rate of step1: 0.5
Learning rate of step2: 0.05
Learning rate of step3: 0.05
Learning rate of step4: 0.005000000000000001
Learning rate of step5: 0.005000000000000001
Learning rate of step6: 0.0005000000000000001
Learning rate of step7: 0.0005000000000000001
Learning rate of step8: 5.000000000000001e-05
Learning rate of step9: 5.000000000000001e-05

minimize(loss, startup_program=None, parameters=None, no_grad_set=None)

minimize

Empty method. The LBFGS optimizer does not minimize loss this way; use step(closure) instead. Please refer to the 'Examples' of LBFGS above for usage.

set_lr(value)

set_lr

Api_attr

imperative

Set the value of the learning rate manually in the optimizer. If the optimizer uses an LRScheduler, this API cannot be invoked, because it would lead to a conflict.

Parameters

value (float) – the value of the learning rate

Returns

None

Examples

>>> import paddle
>>> linear = paddle.nn.Linear(10, 10)

>>> adam = paddle.optimizer.Adam(0.1, parameters=linear.parameters())

>>> # set learning rate manually by python float value
>>> lr_list = [0.2, 0.3, 0.4, 0.5, 0.6]
>>> for i in range(5):
...     adam.set_lr(lr_list[i])
...     lr = adam.get_lr()
...     print("current lr is {}".format(lr))
current lr is 0.2
current lr is 0.3
current lr is 0.4
current lr is 0.5
current lr is 0.6

set_lr_scheduler(scheduler)

set_lr_scheduler

Api_attr

imperative

Set the LRScheduler of the learning rate manually in the optimizer. If the optimizer already uses an LRScheduler, this API will replace it with the new one.

Parameters

scheduler (LRScheduler) – the LRScheduler of the learning rate

Returns

None

Examples

>>> import paddle
>>> linear = paddle.nn.Linear(10, 10)

>>> adam = paddle.optimizer.Adam(0.1, parameters=linear.parameters())

>>> # set learning rate manually by class LRScheduler
>>> scheduler = paddle.optimizer.lr.MultiStepDecay(learning_rate=0.5, milestones=[2,4,6], gamma=0.8)
>>> adam.set_lr_scheduler(scheduler)
>>> lr = adam.get_lr()
>>> print("current lr is {}".format(lr))
current lr is 0.5

>>> # set learning rate manually by another LRScheduler
>>> scheduler = paddle.optimizer.lr.StepDecay(learning_rate=0.1, step_size=5, gamma=0.6)
>>> adam.set_lr_scheduler(scheduler)
>>> lr = adam.get_lr()
>>> print("current lr is {}".format(lr))
current lr is 0.1

set_state_dict(state_dict)

set_state_dict

Load the optimizer state dict. For the Adam optimizer, it contains beta1, beta2, momentum, etc. If an LRScheduler has been used, global_step will be changed.

Parameters

state_dict (dict) – Dict containing all the Tensors needed by the optimizer

Returns

None

Examples

>>> import paddle

>>> emb = paddle.nn.Embedding(10, 10)

>>> layer_state_dict = emb.state_dict()
>>> paddle.save(layer_state_dict, "emb.pdparams")

>>> scheduler = paddle.optimizer.lr.NoamDecay(
...     d_model=0.01, warmup_steps=100, verbose=True)
>>> adam = paddle.optimizer.Adam(
...     learning_rate=scheduler,
...     parameters=emb.parameters())
>>> opt_state_dict = adam.state_dict()
>>> paddle.save(opt_state_dict, "adam.pdopt")

>>> opti_state_dict = paddle.load("adam.pdopt")
>>> adam.set_state_dict(opti_state_dict)