Controlling the behavior of the algorithm

There are many parameters that control the behavior of the Generalized Reduced Gradient algorithm. To access the parameters, click Config in the Optimization Properties dialog box.

Each parameter has a default value that is appropriate for most problems. You are not required to take any action to use the default parameter values; however, at times, it may be necessary to set one or more of the parameters to a new value to make Embed more efficient or to make it possible to solve a difficult problem.

Several of the numerical tolerances are based on the value of Error Tolerance. This value should not be greater than 1.0e-2; if it is, Embed uses 1.0e-2 in its place to set the other tolerances. The Error Tolerance should reflect the accuracy of the Embed simulation computations, which is based on the step size and other Embed settings.

The following table describes the tolerances and algorithmic options.

 

Parameter

Description

Default Value

doscale

Scaling.

0   No scaling.

1   The problem is scaled so that the maximum value of any row or column of the initial gradient array is less than or equal to 1.0.

0

epinit

Choosing a value for epinit different from epnewt has helped solve a few problems that were not solved otherwise. Suggested values are epinit = 1.0e-4, epnewt = 1.0e-6.

To run the problem with epnewt initially set fairly large and then tightened at the end of the optimization, assign epinit the initial tolerance and epnewt the final one.

Error Tolerance value

epnewt

The most critical tolerance is epnewt. Increasing it can sometimes speed convergence by requiring fewer Newton iterations, while decreasing it occasionally yields a more accurate solution or gets the iterations moving if the algorithm gets stuck. Values larger than 1.0e-2 should be treated cautiously, as should values smaller than 1.0e-6.

A constraint is assumed to be binding if it is within epnewt of one of its bounds. epnewt and epinit should be set together so that epinit ≤ epnewt.

Error Tolerance value

epskt

The convergence criteria require that the K-T (Kuhn-Tucker) factor is ≤ epskt.

0.01

epspiv

If, in constructing the basis inverse, the absolute value of a prospective pivot element is less than epspiv, the pivot is rejected and another pivot element is sought.

If the problem is degenerate and this is slowing computations, choosing a larger value for epspiv may help by allowing pivots on elements that were previously rejected. If convergence of the iterations is a problem, reducing epspiv and/or increasing epnewt may help.

Error Tolerance value

epstop

This specifies the convergence criteria. If the fractional change in the objective function is less than epstop for nstop consecutive iterations, and the K-T factor is ≤ epskt, Embed accepts the current point as optimal.

Embed also accepts the current point as optimal if the Kuhn-Tucker optimality conditions are satisfied to within epstop, that is, if the K-T factor is ≤ epstop.

Choosing a smaller value for epstop usually improves the accuracy of the final solution.

Error Tolerance*10
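As a simplified sketch (not Embed's internal code), the fractional-change part of this test, controlled by epstop together with nstop, can be written as:

```python
def fractional_change_converged(objective_history, epstop, nstop):
    """Return True if the fractional change in the objective has stayed
    below epstop for nstop consecutive iterations (simplified sketch)."""
    if len(objective_history) < nstop + 1:
        return False
    recent = objective_history[-(nstop + 1):]
    for prev, cur in zip(recent, recent[1:]):
        denom = max(abs(prev), 1e-30)  # guard against a zero objective
        if abs(cur - prev) / denom >= epstop:
            return False
    return True

# Objective nearly flat over the last three iterations: converged.
print(fractional_change_converged([10.0, 5.0, 4.999, 4.9989, 4.9988],
                                  epstop=1e-3, nstop=3))
```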

ipr

Print level for Embed report.

0   Print initial and final variable and function values.

1   Print initial and final variable and function values plus one summary line for each one-dimensional search.

Values of ipr > 1 and ≤ 6 are permitted, but they require knowledge of the internal workings of Embed and are not recommended for general use.

1

iquad

Method for initial estimates of basic variables for each one-dimensional search.

0     Tangent vectors and linear extrapolation

1     Quadratic extrapolation

Note that quadratic extrapolation (iquad = 1) can often speed computations by providing better initial values for the iterations. It is unnecessary if all constraints are linear.

0

itlim

If the Newton procedure takes itlim iterations without converging, the iterations are stopped and corrective action taken.

10

kderiv

Central differences (kderiv = 1) are more accurate than forward differences (kderiv = 0): central differences are exact for quadratic functions, while forward differences are exact only for linear functions. However, because central differences require two function evaluations per derivative while forward differences require only one, selecting kderiv = 1 may double your computing time.

For more information on forward differences and central differences, see Inaccurate numerical derivatives.

0
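The accuracy claim above (central differences are exact for quadratics, forward differences only for linear functions) can be checked with a short sketch; the helper names are illustrative:

```python
def forward_diff(f, x, h):
    """Forward difference: one extra function evaluation per derivative."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    """Central difference: two extra evaluations, exact for quadratics."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

quadratic = lambda x: 3.0 * x**2 + 2.0 * x + 1.0  # derivative: 6x + 2
h = 1.0e-2
print(forward_diff(quadratic, 1.0, h))  # ≈ 8.03 (true derivative 8; error is 3h)
print(central_diff(quadratic, 1.0, h))  # ≈ 8.0, exact up to rounding
```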

limeval

Limit on the number of simulation runs. Setting limeval = 0 permits an unlimited number of simulation runs.

Max Optimization Steps value

limser

If the number of completed one-dimensional searches exceeds limser, Embed terminates and returns inform = 3.

10000

maximize

The objective function is maximized if maximize = 1. The default is to minimize the objective function.

0

monitor

The report produced by Embed is written to VSMGRG2.TXT located in the directory with the current diagram.

Setting monitor = 1 instructs Embed to display the report while the optimization run is being performed. The Monitor option provides a convenient way to keep track of long optimization runs. The monitor displays the Embed report in a window with menu items that can be used to save the report in a file for future reference.

0

nstop

If the fractional change in the objective function is less than epstop for nstop consecutive iterations, Embed accepts the current point as optimal.

3

ph1eps

If ph1eps is nonzero, the phase 1 objective is augmented by a multiple of the true objective. The multiple is selected so that, at the initial point, the ratio of the true objective to the sum of the infeasibilities is ph1eps. Setting ph1eps = 0.0 produces the most efficient way to reach a feasible point (a point where all constraints are satisfied). Setting ph1eps > 0.0 causes Embed to reach feasibility without ignoring the objective function.

0.0
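A rough sketch of this augmentation is below. Embed's internal weighting is not documented here; choosing the weight so that the weighted true objective at the initial point equals ph1eps times the sum of infeasibilities is an assumption, as are the function names.

```python
def phase1_objective(true_obj, sum_infeas, x0, ph1eps):
    """Sketch of a ph1eps-augmented phase 1 objective (illustrative;
    Embed's actual weighting may differ).  The weight w is chosen so
    that, at the initial point x0, the weighted true objective equals
    ph1eps times the sum of infeasibilities."""
    if ph1eps == 0.0:
        return sum_infeas  # pure feasibility search, the default
    w = ph1eps * sum_infeas(x0) / max(abs(true_obj(x0)), 1e-30)
    return lambda x: sum_infeas(x) + w * true_obj(x)
```

With ph1eps = 0.0 the phase 1 search minimizes only the infeasibilities; a nonzero value nudges phase 1 toward points that are both feasible and good for the true objective.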

pstep

This is the step size used for estimating partial derivatives of functions with respect to the variables.

Error Tolerance