# Global Search Option

A common discussion that arises when an optimization problem is solved is whether or not the obtained optimum is a local or global optimum.

Local approximation-based methods (gradient-based optimization) are more susceptible to finding a local optimum, while global approximation methods (response surface methods) and exploratory techniques (genetic algorithms) are less so. In other words, these techniques improve the chances of finding a more global optimum. However, no algorithm can guarantee that the optimum found is in fact the global optimum.

An optimum can be guaranteed to be global only if the optimization problem is convex, which requires both the objective function and the feasible domain to be convex. Unfortunately, most engineering problems in practice cannot be shown to be convex, so a guaranteed global optimum remains elusive. Different algorithm types merely alter the chances of finding a more global optimum; they do not guarantee it. Keep in mind that algorithms which improve those chances come at a computational cost, and this cost is often significant.

Consider the non-convex function, `f(x)`, bounded by -a ≤ x ≤ b. Optimizing a design from starting point `A` will result in the optimum solution, `P`. Similarly, optimizing a design from starting point `B` will result in the same optimum solution, `P`. On the other hand, optimizing a design from starting point `C` will result in the optimum solution, `Q`. This shows that the multiple starting points approach cannot guarantee a global optimum (as with any other algorithm), but it does improve the chances of finding a more global optimum.
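The multi-start idea can be illustrated with a minimal sketch in plain Python (this is a conceptual illustration, not OptiStruct itself; the test function and step sizes are assumptions chosen so that two starts reach the same minimum while a third finds a better one, analogous to `A`, `B`, and `C` above):

```python
# Multi-start local optimization sketch: run a simple local optimizer
# from several starting points and keep the best converged design.

def f(x):
    # Non-convex function with two minima; the 0.3*x tilt makes the
    # minimum near x = -1 the better (more global) one.
    return (x**2 - 1.0)**2 + 0.3 * x

def grad(x):
    # Analytic derivative of f.
    return 4.0 * x * (x**2 - 1.0) + 0.3

def local_minimize(x0, lr=0.01, steps=2000):
    # Gradient descent: converges to the minimum of whichever basin
    # contains the starting point x0.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

starts = [0.1, 2.0, -2.0]              # analogous to starting points A, B, C
optima = [local_minimize(s) for s in starts]
best = min(optima, key=f)              # keep the best design over all starts

for s, x in zip(starts, optima):
    print(f"start {s:+.1f} -> x* = {x:+.4f}, f(x*) = {f(x):+.4f}")
print(f"best of all starts: x* = {best:+.4f}")
```

Here the first two starts converge to the same local minimum, while the third finds the better one; no single start could have guaranteed that outcome.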

The Global Search Option (GSO) in OptiStruct is supported for Size and Shape Optimization.

## Input

A global search optimization study is identified through the DGLOBAL entry in the I/O Options section of the input deck, and the parameters required to set up and run a GSO are defined on the DGLOBAL Bulk Data Entry.
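A minimal input-deck fragment might look like the following (the ID, the starting-point count, and the exact field layout are illustrative assumptions; consult the DGLOBAL Bulk Data Entry documentation for the actual format):

```
$ I/O Options section: select the global search study with ID 10
DGLOBAL = 10
$
BEGIN BULK
$ Bulk Data: GSO definition; the second field here is assumed to set
$ the number of starting points (NPOINT)
DGLOBAL,10,20
ENDDATA
```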

The Global Search Optimization can be run in parallel with the help of the Domain Decomposition Method (DDM). In this process, the starting points (NPOINT) are assigned in parallel to the DDM MPI processes (first level of parallelization). If enough cores are available on the machine, you can use `PARAM,DDMNGRPS,<ngrps>` to divide the MPI processes (`-np`) into groups. The starting points are again assigned sequentially, but now to the MPI process groups instead of individual MPI processes. Each MPI process group then solves a starting point via DDM geometric partitioning (second level of parallelization).
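As a sketch of the two-level setup (the model name, core count, and group count are illustrative assumptions):

```
$ In the input deck: divide the MPI processes into 2 groups
PARAM,DDMNGRPS,2
```

Launched with, for example, 8 MPI processes (`-np 8`), this forms 2 groups of 4 processes each: 2 starting points are solved concurrently (first level), and each group solves its starting point via DDM geometric partitioning across its 4 processes (second level).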

For additional information, refer to DDM Level 2 – Parallelization of Geometric Partitions in Domain Decomposition Method (DDM).

## Output

If multiple starting points defined via the DGLOBAL entry converge to multiple unique designs, then the output files corresponding to each unique design are retained in your working directory. If all starting points converge to a single design, then only one set of output files is retained, corresponding to this common design.