EarlyStopping

class paddle.callbacks.EarlyStopping(monitor='loss', mode='auto', patience=0, verbose=1, min_delta=0, baseline=None, save_best_model=True) [source]

Stop training when the monitored quantity has stopped improving during evaluation, by setting model.stop_training=True.

Parameters
  • monitor (str) – Quantity to be monitored. Default: ‘loss’.

  • mode (str|None) – One of ‘auto’, ‘min’ or ‘max’. In ‘min’ mode, training stops when the monitored quantity has stopped decreasing; in ‘max’ mode, when it has stopped increasing. In ‘auto’ mode, the mode is inferred from the name of the monitored quantity: if the name contains ‘acc’, the mode is set to ‘max’, otherwise to ‘min’ (see the sketch after this list). Default: ‘auto’.

  • patience (int) – Number of epochs with no improvement after which training will be stopped. Default: 0.

  • verbose (int) – The verbosity mode, either 0 or 1. With verbose=0, no early-stopping logs are printed; with verbose=1, they are. Default: 1.

  • min_delta (int|float) – The minimum change in the monitored quantity that counts as an improvement; a change smaller than min_delta is treated as no improvement. Default: 0.

  • baseline (int|float|None) – Baseline value for the monitored quantity. Training stops if the model shows no improvement over the baseline. Default: None.

  • save_best_model (bool) – Whether to save the best model encountered during training. Default: True.
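
How mode, min_delta, and patience interact can be summarized by the improvement test below. This is a minimal, hypothetical sketch of the bookkeeping rather than Paddle's actual implementation; the names improved, best, and wait are illustrative only.

>>> def improved(current, best, mode='min', min_delta=0):
...     # 'min' mode: lower is better (e.g. a loss), so an improvement
...     # means the value dropped by more than min_delta.
...     if mode == 'min':
...         return current < best - min_delta
...     # 'max' mode: higher is better (e.g. an accuracy).
...     return current > best + min_delta

>>> # Each evaluation either resets the wait counter (on improvement) or
>>> # uses up one epoch of patience; training stops once patience runs out.
>>> best, wait, patience = float('inf'), 0, 1
>>> for value in [0.9, 0.7, 0.72, 0.73]:  # monitored 'loss' per evaluation
...     if improved(value, best):
...         best, wait = value, 0
...     else:
...         wait += 1
...         if wait >= patience:
...             print('stop training')  # i.e. model.stop_training = True
...             break
stop training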

Examples

>>> import paddle
>>> from paddle import Model
>>> from paddle.static import InputSpec
>>> from paddle.vision.models import LeNet
>>> from paddle.vision.datasets import MNIST
>>> from paddle.metric import Accuracy
>>> from paddle.nn import CrossEntropyLoss
>>> import paddle.vision.transforms as T

>>> device = paddle.set_device('cpu')
>>> save_dir = './best_model_checkpoint'
>>> transform = T.Compose(
...     [T.Transpose(), T.Normalize([127.5], [127.5])])
>>> train_dataset = MNIST(mode='train', transform=transform)
>>> val_dataset = MNIST(mode='test', transform=transform)
>>> net = LeNet()
>>> optim = paddle.optimizer.Adam(
...     learning_rate=0.001, parameters=net.parameters())

>>> inputs = [InputSpec([None, 1, 28, 28], 'float32', 'x')]
>>> labels = [InputSpec([None, 1], 'int64', 'label')]

>>> model = Model(net, inputs=inputs, labels=labels)
>>> model.prepare(
...     optim,
...     loss=CrossEntropyLoss(reduction="sum"),
...     metrics=[Accuracy()])
>>> # Monitor the validation loss; stop after one evaluation without improvement.
>>> callbacks = paddle.callbacks.EarlyStopping(
...     'loss',
...     mode='min',
...     patience=1,
...     verbose=1,
...     min_delta=0,
...     baseline=None,
...     save_best_model=True)
>>> model.fit(train_dataset,
...           val_dataset,
...           batch_size=64,
...           log_freq=200,
...           save_freq=10,
...           save_dir=save_dir,
...           epochs=20,
...           callbacks=[callbacks])
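
With these settings, model.fit ends early once the validation loss has failed to improve by more than min_delta for more than patience evaluations, and, because save_best_model=True and a save_dir is passed to fit, the best weights seen so far are also saved under that directory.
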
on_train_begin(logs=None)


Called at the start of training.

Parameters

logs (dict|None) – A dict of logs, or None.
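
Conceptually, this hook resets the callback's internal bookkeeping so that each call to fit starts from a clean state. A minimal, hypothetical sketch (the attribute names are illustrative, not Paddle's internals):

>>> import paddle
>>> # Hypothetical sketch: reset the counters at the start of training.
>>> class EarlyStoppingSketch(paddle.callbacks.Callback):
...     def on_train_begin(self, logs=None):
...         self.wait = 0             # evaluations without improvement so far
...         self.best = float('inf')  # best monitored value ('min' mode)
...         self.stopped_epoch = 0    # epoch at which training stopped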

on_eval_end(logs=None)


Called at the end of evaluation.

Parameters

logs (dict|None) – A dict of logs, or None. paddle.Model passes a dict containing the ‘loss’, the metrics, and the ‘batch_size’ of the last batch of the validation dataset.
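
Since the evaluation logs carry the monitored quantity, a custom callback can read it directly. A minimal, hypothetical sketch (the class name and output format are illustrative; the key names follow the description above):

>>> import paddle
>>> # Hypothetical sketch: inspect the evaluation logs in a custom callback.
>>> class EvalLogPrinter(paddle.callbacks.Callback):
...     def on_eval_end(self, logs=None):
...         logs = logs or {}
...         print('eval loss:', logs.get('loss'),
...               '| last batch size:', logs.get('batch_size'))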