update_loss_scaling

paddle.fluid.contrib.mixed_precision.amp_nn.update_loss_scaling(x, found_inf, prev_loss_scaling, num_good_steps, num_bad_steps, incr_every_n_steps, decr_every_n_nan_or_inf, incr_ratio, decr_ratio, stop_update=False, name=None)

Update the loss scaling according to the overall gradients. If all gradients have been finite for incr_every_n_steps consecutive steps, the loss scaling is multiplied by incr_ratio. Otherwise, after decr_every_n_nan_or_inf accumulated steps in which some gradients are infinite or NaN, the loss scaling is multiplied by decr_ratio.

Parameters
  • x (list|tuple) – The input tensors of the update_loss_scaling operator.

  • found_inf (Variable) – A boolean variable indicating whether any gradient is infinite or NaN.

  • prev_loss_scaling (Variable) – Previous loss scaling.

  • num_good_steps (Variable) – A variable that accumulates the number of consecutive steps in which all gradients are finite.

  • num_bad_steps (Variable) – A variable that accumulates the number of steps in which some gradients are infinite or NaN.

  • incr_every_n_steps (int) – Increase the loss scaling every n consecutive steps with finite gradients.

  • decr_every_n_nan_or_inf (int) – Decrease the loss scaling every n accumulated steps with NaN or Inf gradients.

  • incr_ratio (float) – The multiplier to use when increasing the loss scaling.

  • decr_ratio (float) – The less-than-one multiplier to use when decreasing the loss scaling.
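
Examples

The following is a minimal sketch, in plain Python, of the dynamic loss-scaling rule described above. The function name update_loss_scaling_sketch and the floor of 1.0 applied when decreasing the scale are illustrative assumptions; the actual operator updates its input Variables in place inside a static-graph program rather than returning new values.

    def update_loss_scaling_sketch(found_inf, prev_loss_scaling,
                                   num_good_steps, num_bad_steps,
                                   incr_every_n_steps, decr_every_n_nan_or_inf,
                                   incr_ratio, decr_ratio):
        """Return (loss_scaling, num_good_steps, num_bad_steps) after one step."""
        if found_inf:
            # Some gradients are Inf/NaN: reset the good-step counter
            # and count a bad step.
            num_good_steps = 0
            num_bad_steps += 1
            if num_bad_steps == decr_every_n_nan_or_inf:
                # Enough bad steps accumulated: shrink the loss scaling.
                # The floor of 1.0 is an assumed safeguard against underflow.
                prev_loss_scaling = max(prev_loss_scaling * decr_ratio, 1.0)
                num_bad_steps = 0
        else:
            # All gradients are finite: reset the bad-step counter
            # and count a good step.
            num_bad_steps = 0
            num_good_steps += 1
            if num_good_steps == incr_every_n_steps:
                # Enough consecutive good steps: grow the loss scaling.
                prev_loss_scaling *= incr_ratio
                num_good_steps = 0
        return prev_loss_scaling, num_good_steps, num_bad_steps

For example, with incr_every_n_steps=1000 and incr_ratio=2.0, the 1000th consecutive finite step doubles the scaling and resets the good-step counter:

    scale, good, bad = update_loss_scaling_sketch(
        found_inf=False, prev_loss_scaling=32768.0,
        num_good_steps=999, num_bad_steps=0,
        incr_every_n_steps=1000, decr_every_n_nan_or_inf=2,
        incr_ratio=2.0, decr_ratio=0.5)
    # scale == 65536.0, good == 0, bad == 0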