# The Optimization Problem Formulation

Optimization algorithms work to minimize (or maximize) an objective function subject to constraints on design variables and responses.

The optimization problem to be solved is expressed as follows:
Minimize: ${\psi }_{0}\left(x,b\right)$ (objective function)

Subject to: ${\psi }_{i}\left(x,b\right)\le 0$ (inequality constraints)

${\psi }_{j}\left(x,b\right)=0$ (equality constraints)

${b}_{L}\le b\le {b}_{U}$ (design limits)

The functions are assumed to have the form:

${\psi }_{k}\left(x,b\right)={\psi }_{k0}\left(x,b\right)+\underset{{T}_{0}}{\overset{{T}_{f}}{\int }}{L}_{k}\left(x,b\right)dt$

In this formulation:
• $b$ is an n-dimensional vector of real-valued design variables
• ${b}_{L}$ and ${b}_{U}$ are the lower and upper bounds, respectively, on the design variables
• $x$ is the set of states that the solver uses to represent the system
• ${\psi }_{k0}\left(x,b\right)$ is the value of the function carried over from a previous simulation; for the first simulation, it is always zero
• ${L}_{k}\left(x,b\right)$ is the integrand accumulated over the simulation interval from ${T}_{0}$ to ${T}_{f}$
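The integral form of ${\psi }_{k}$ can be sketched numerically. In the snippet below, the state trajectory `x(t)`, the integrand `L_k`, and the time interval are all hypothetical stand-ins; a real solver would obtain $x(t)$ by integrating the system equations.

```python
import numpy as np

def psi_k(b, L_k, psi_k0=0.0, T0=0.0, Tf=1.0, n_steps=200):
    """Approximate psi_k(x, b) = psi_k0 + integral of L_k(x, b) over [T0, Tf]
    using the trapezoidal rule on a fixed time grid."""
    t = np.linspace(T0, Tf, n_steps)
    # Placeholder state trajectory; a real solver would integrate the
    # system equations to obtain x(t).
    x = np.sin(t)
    f = L_k(x, b, t)
    # Trapezoidal rule: interval widths times average endpoint values.
    return psi_k0 + float(np.sum(np.diff(t) * (f[1:] + f[:-1]) / 2.0))

# Example: a quadratic tracking-style integrand (hypothetical).
value = psi_k(b=np.array([0.5]), L_k=lambda x, b, t: (x - b[0]) ** 2)
print(value)
```

Note that `psi_k0` is passed in from outside, mirroring the formulation: it is zero on the first simulation and carries the previous value afterward.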

The goal of the optimization effort is to minimize the objective function ${\psi }_{0}\left(x,b\right)$ while satisfying the constraints. The constraints are assumed to be inherently nonlinear and can be either inequality or equality constraints.

${b}_{L}$ and ${b}_{U}$ define the lower and upper bounds for the elements of $b$. The set of all allowable values of $b$ is known as the design space for the problem. A design point (or sample point) is a particular set of values within the design space.

A design point is said to be feasible if and only if it satisfies all the constraints. Correspondingly, a design point is said to be infeasible if it violates one or more of the constraints. Our aim, of course, is to find a feasible design. Sometimes, due to the presence of constraints, this may not be possible.
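A feasibility test reduces to checking the design limits and the constraint values at a point. The bounds and constraint function below are hypothetical examples used only to illustrate the check, not part of the formulation above.

```python
import numpy as np

# Hypothetical design limits b_L <= b <= b_U for a 2-variable design.
b_L = np.array([0.0, 0.0])
b_U = np.array([2.0, 2.0])

def inequality_constraints(b):
    # Feasible when every entry is <= 0 (e.g. b0 + b1 - 3 <= 0).
    return np.array([b[0] + b[1] - 3.0])

def is_feasible(b, tol=1e-9):
    """A design point is feasible iff it satisfies the design limits and
    every inequality constraint (within a small numerical tolerance)."""
    in_bounds = np.all(b >= b_L - tol) and np.all(b <= b_U + tol)
    return bool(in_bounds and np.all(inequality_constraints(b) <= tol))

print(is_feasible(np.array([1.0, 1.0])))  # within bounds, constraint satisfied
print(is_feasible(np.array([2.0, 2.0])))  # violates b0 + b1 - 3 <= 0
```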

Many different methods are available for solving the above-mentioned optimization problem. All of these iterate on the design $b$ in some manner to find a better solution. A generic algorithm that describes this process is:
• An initial value for the design variables $b$ is provided to the optimizer.
• The response quantities are computed by running a simulation.
• An algorithm, often a sensitivity-based method, is applied to generate a new $b$ that reduces the objective function, reduces the amount of infeasibility, or both.
• When a sensitivity-based method is used, the optimizer also needs to compute the sensitivity of the functions with respect to the design $b$. This means the optimizer requires the matrix of partial derivatives, $\left[\frac{\partial \psi \left(x,b\right)}{\partial b}\right]$.
• New designs $b$ are generated iteratively by the optimizer until it determines that a minimum has been found or the iteration limits have been exceeded.
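The generic loop above can be sketched with an off-the-shelf optimizer. Here `scipy.optimize.minimize` with the SLSQP method plays the role of the sensitivity-based algorithm, and a cheap analytic function stands in for running a simulation; the specific objective, constraint, and bounds are illustrative assumptions. With no analytic Jacobian supplied, SciPy estimates the partial derivatives $\partial \psi /\partial b$ by finite differences.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in "simulation": in practice, evaluating psi_0 and the
# constraints means running a simulation for the current design b.
def objective(b):
    return (b[0] - 1.0) ** 2 + (b[1] - 2.5) ** 2

def constraint(b):
    # SciPy's convention: inequality constraints are feasible when >= 0.
    return np.array([b[0] + b[1] - 1.0])

b0 = np.array([2.0, 0.0])            # initial design supplied to the optimizer
bounds = [(0.0, 3.0), (0.0, 3.0)]    # design limits b_L <= b <= b_U

# The optimizer iterates on b until it finds a minimum or hits its
# iteration limits.
result = minimize(objective, b0, method="SLSQP",
                  bounds=bounds,
                  constraints=[{"type": "ineq", "fun": constraint}])
print(result.x)
```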

In some instances, you may want to maximize the value of a certain objective. Without any loss of generality, you can convert this to a minimization problem by simply negating the objective you calculate. Thus, if you want to maximize a function $\chi \left(x,b\right)$, you can convert it to a minimization problem by defining the cost function as $\psi \left(x,b\right)=-\chi \left(x,b\right)$.
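A minimal sketch of this conversion, assuming a hypothetical one-dimensional function $\chi$ whose maximizer we want:

```python
from scipy.optimize import minimize_scalar

# Hypothetical function to maximize: chi(b) = -(b - 2)^2 + 5,
# whose maximum lies at b = 2.
def chi(b):
    return -(b - 2.0) ** 2 + 5.0

# Convert to minimization by defining the cost as psi = -chi.
result = minimize_scalar(lambda b: -chi(b), bounds=(0.0, 4.0), method="bounded")
print(result.x)   # the minimizer of -chi is the maximizer of chi
```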