Convergence problems

Introduction

The Newton-Raphson (NR) method is currently the most widely used method, but its convergence is not always guaranteed, particularly in 3D.

Solutions are proposed (in 2D and 3D) to improve convergence in certain particular cases.

Convergence conditions

The NR method converges rapidly when:

  • the function R(X) satisfies certain monotonicity conditions (1)
  • and the iterative process starts from an initial point close to the solution (2)

(1) Monotonicity conditions

The second derivative must be nonzero, i.e. the B(H) characteristic must have no inflection points.
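This condition can be checked numerically on a sampled characteristic. The sketch below uses a hypothetical saturation curve B(H) = Bsat·H/(H + H0) (an illustrative model, not Flux data) and verifies that its second derivative keeps one sign over the sampled interval, i.e. that no inflection point is present:

```python
import numpy as np

# Hypothetical B(H) saturation curve (illustrative, not Flux data):
# B(H) = Bsat * H / (H + H0) is strictly concave for H > 0.
Bsat, H0 = 2.0, 300.0                 # T, A/m (assumed values)
H = np.linspace(10.0, 5000.0, 400)    # sampled field strength, A/m
B = Bsat * H / (H + H0)

# Approximate d2B/dH2 with second central differences on the uniform grid.
h = H[1] - H[0]
d2B = (B[2:] - 2.0 * B[1:-1] + B[:-2]) / h**2

# No inflection point <=> the second derivative never changes sign.
print("second derivative stays negative:", bool(np.all(d2B < 0.0)))
```

A real B(H) curve measured with noise would need smoothing before such a check, since raw second differences amplify measurement noise.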

(2) Initial point

The Newton-Raphson method is very efficient when the iterative process starts from an initial point close to the solution. If the starting value is too far from the solution, the NR method can enter an infinite loop without producing an improved approximation.

A limit on the number of iterations is therefore always necessary.
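The behaviour described above can be sketched in one dimension. The function f, the tolerance, and the iteration cap below are assumptions for illustration, not values used by Flux; the cap is what prevents the infinite loop mentioned above:

```python
# Minimal 1-D Newton-Raphson sketch with an iteration cap (illustrative).
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for i in range(max_iter):
        step = f(x) / df(x)          # Newton correction
        x -= step
        if abs(step) < tol:
            return x, i + 1          # converged
    # The cap turns a potential infinite loop into a detectable failure.
    raise RuntimeError("NR did not converge within max_iter iterations")

# Example: f(x) = x^3 - 2x - 5, a classic NR test with a root near 2.0946.
f = lambda x: x**3 - 2.0 * x - 5.0
df = lambda x: 3.0 * x**2 - 2.0

root, iters = newton_raphson(f, df, x0=2.0)   # starting point close to the root
print(root, iters)
```

Started close to the root, the method converges in a handful of iterations; a poor starting point would instead exhaust `max_iter` and raise.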

Under-relaxation method (3D)

Numerical techniques known as “under-relaxation” can improve the convergence of the iterative process. An additional coefficient, called the under-relaxation coefficient, is introduced into the iterative update of the solution vector:

[Xi+1] = [Xi] + α[ΔXi]

where:

α is the under-relaxation coefficient, with values in the range ]0,1]

The under-relaxation technique ensures the convergence of the solving process for nonlinear problems when the magnetic scalar potential formulation is used. This is the automatic formulation in Flux 3D problems.