Multi-objective optimization

Multi-objective optimization (or multi-objective programming),[1][2] also known as multi-criteria or multi-attribute optimization, is the process of simultaneously optimizing two or more conflicting objectives subject to certain constraints.

Multiobjective optimization problems can be found in various fields: product and process design, finance, aircraft design, the oil and gas industry, automobile design, or wherever optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Maximizing profit and minimizing the cost of a product; maximizing performance and minimizing fuel consumption of a vehicle; and minimizing weight while maximizing the strength of a particular component are examples of multi-objective optimization problems.

For nontrivial multiobjective problems, one cannot identify a single solution that simultaneously optimizes each objective. While searching for solutions, one reaches points such that, when attempting to improve an objective further, other objectives suffer as a result. A tentative solution is called non-dominated, Pareto optimal, or Pareto efficient if it cannot be eliminated from consideration by replacing it with another solution which improves an objective without worsening another one. Finding such non-dominated solutions, and quantifying the trade-offs in satisfying the different objectives, is the goal when setting up and solving a multiobjective optimization problem.

Introduction

In mathematical terms, the multiobjective problem can be written as:


\begin{align}
\min_{x}\quad & \left[ \mu_1(x), \mu_2(x), \dots, \mu_n(x) \right]^{T} \\
\text{s.t.}\quad & g(x) \le 0 \\
& h(x) = 0 \\
& x_l \le x \le x_u
\end{align}

where \mu_i is the i-th objective function, g and h are the inequality and equality constraints, respectively, and x is the vector of optimization or decision variables. The solution to the above problem is therefore not a unique point but a set of Pareto points, and this set may be infinite.

A design point in objective space \mu^* is termed Pareto optimal if there does not exist another feasible design objective vector \mu such that \mu_i \leq \mu_i^* for all i \in \{1, 2, \dots, n\}, and \mu_j < \mu_j^* for at least one index j \in \{1, 2, \dots, n\}.
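
This definition can be checked mechanically. The following sketch (Python; the candidate designs and their objective values are made up for illustration) applies the pairwise dominance test to filter the non-dominated points out of a finite set of candidates, assuming a pure minimization formulation as in the problem statement above.

    import numpy as np

    def dominates(mu_a, mu_b):
        """Return True if objective vector mu_a Pareto-dominates mu_b
        (all components <= and at least one strictly <), assuming minimization."""
        mu_a, mu_b = np.asarray(mu_a), np.asarray(mu_b)
        return np.all(mu_a <= mu_b) and np.any(mu_a < mu_b)

    def pareto_filter(points):
        """Keep only the non-dominated rows of an array of objective vectors."""
        points = np.asarray(points)
        keep = [i for i, p in enumerate(points)
                if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
        return points[keep]

    # Hypothetical candidate designs evaluated on two objectives to be minimized,
    # e.g. (cost, weight).
    candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
    print(pareto_filter(candidates))   # (3.0, 4.0) is dominated by (2.0, 3.0)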

Solution methods

There exist many methods for finding a solution to a multiobjective optimization problem, some of which are explained below:

Constructing a single aggregate objective function (AOF)

This is an intuitive approach to solving the multi-objective problem. The basic idea is to combine all of the objectives into a single objective function, called the AOF, such as the well-known weighted linear sum of the objectives. This objective function is optimized subject to technological constraints specifying how much of one objective must be sacrificed, from any given starting point, in order to gain a certain amount of another objective. These constraints frequently take the form f(x_1, x_2) \ge 0 for some function f, where x_1 and x_2 are the objectives (e.g., strength and lightness of a product).

Often the aggregate objective function is not linear in the objectives, but rather is non-linear, expressing increasing marginal dissatisfaction with greater incremental sacrifices in the value of either objective. Furthermore, sometimes the aggregate objective function is additively separable, so that it is expressed as a weighted average of a non-linear function of one objective and a non-linear function of another objective. Then the optimal solution obtained will depend on the relative values of the weights specified. For example, if one is trying to maximize the strength of a machine component and minimize the production cost, and if a higher weight is specified for the cost objective compared to the strength, the solution will be one that favors lower cost over higher strength.
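
As a concrete illustration, the following sketch (Python, assuming SciPy is available; the two objectives, the starting point and the weights are invented for the example) scalarizes a two-objective problem with a weighted linear sum and hands the resulting AOF to a standard single-objective solver. Different weight choices would generally return different Pareto points.

    import numpy as np
    from scipy.optimize import minimize

    # Two made-up conflicting objectives of a scalar design variable x:
    # f1 pushes x toward 0, f2 pushes x toward 2.
    f1 = lambda x: (x - 0.0) ** 2
    f2 = lambda x: (x - 2.0) ** 2

    def aof(x, w1=0.7, w2=0.3):
        """Weighted linear sum of the objectives (the aggregate objective function)."""
        return w1 * f1(x[0]) + w2 * f2(x[0])

    res = minimize(aof, x0=[1.0], bounds=[(-5.0, 5.0)])
    print(res.x)   # the compromise design favoured by the chosen weights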

The weighted sum method, like any method of selecting a single solution as preferable to all others, is essentially subjective, in that a decision maker needs to supply the weights. Moreover, this approach can be difficult to apply when the Pareto frontier is not globally convex: a weighted linear sum cannot reach solutions lying on non-convex portions of the frontier, no matter which weights are chosen.

The objective way of characterizing multi-objective problems, by identifying multiple Pareto optimal candidate solutions, requires a Pareto-compliant ranking method, favoring non-dominated solutions, as seen in current multi-objective evolutionary approaches such as NSGA-II [3] and SPEA2. Here, no weight is required and thus no a priori information on the decision-maker's preferences is needed.[4] However, to decide upon one of the Pareto-efficient options as the one to adopt requires information about the decision-maker's preferences. Thus the objective characterization of the problem is simply the first stage in a two-stage analysis, consisting of (1) identifying the non-dominated possibilities, and (2) choosing among them.
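
A minimal sketch of such a Pareto-compliant ranking (Python; this is not the exact fast non-dominated sort used inside NSGA-II, and the sample objective vectors are made up) assigns each candidate a rank by repeatedly peeling off the current non-dominated front:

    import numpy as np

    def dominates(a, b):
        """Minimization-sense Pareto dominance between two objective vectors."""
        a, b = np.asarray(a), np.asarray(b)
        return np.all(a <= b) and np.any(a < b)

    def pareto_ranks(points):
        """Rank 0 = non-dominated front, rank 1 = front after removing rank 0, etc."""
        points = [np.asarray(p) for p in points]
        remaining = set(range(len(points)))
        ranks, rank = {}, 0
        while remaining:
            front = {i for i in remaining
                     if not any(dominates(points[j], points[i])
                                for j in remaining if j != i)}
            for i in front:
                ranks[i] = rank
            remaining -= front
            rank += 1
        return ranks

    print(pareto_ranks([(1, 5), (2, 3), (3, 4), (4, 1)]))  # (3, 4) falls in rank 1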

The NBI, NC, SPO and DSD methods

The Normal Boundary Intersection (NBI)[5][6], Normal Constraint (NC)[7][8], Successive Pareto Optimization (SPO)[9], and Directed Search Domain (DSD)[10] methods solve the multi-objective optimization problem by constructing several AOFs. The solution of each AOF yields a single Pareto point, whether locally or globally Pareto optimal.

The NC and DSD methods suggest two different filtering procedures to remove points that are only locally Pareto optimal. The AOFs are constructed with the aim of obtaining evenly distributed Pareto points that give a good approximation of the real set of Pareto points.
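
The details of the NBI, NC, SPO and DSD constructions are beyond a short example, but the general pattern they share, solving a family of scalarized subproblems and collecting one Pareto point from each, can be sketched as follows (Python with SciPy; a simple weight sweep stands in for the normal-boundary or normal-constraint machinery, and the objectives are invented):

    import numpy as np
    from scipy.optimize import minimize

    # Made-up pair of conflicting objectives of a single design variable.
    f1 = lambda x: (x - 0.0) ** 2
    f2 = lambda x: (x - 2.0) ** 2

    pareto_points = []
    for w in np.linspace(0.05, 0.95, 10):          # one AOF per weight
        aof = lambda x, w=w: w * f1(x[0]) + (1 - w) * f2(x[0])
        res = minimize(aof, x0=[1.0], bounds=[(-5.0, 5.0)])
        pareto_points.append((f1(res.x[0]), f2(res.x[0])))

    for p in pareto_points:                        # an approximation of the Pareto front
        print(p)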

For problems with more than two objectives, the DSD, NC and SPO methods generate solutions that cover peripheral regions of the set of Pareto points which are known not to be represented by the solutions generated with the NBI method.

According to Erfani and Utyuzhnikov, the DSD method works more efficiently than its NC and NBI counterparts on some difficult test cases in the literature.[10]

Evolutionary algorithms

Evolutionary algorithms are popular approaches to solving multiobjective optimization problems. Currently most evolutionary optimizers apply Pareto-based ranking schemes. Genetic algorithms such as the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and the Strength Pareto Evolutionary Algorithm 2 (SPEA-2) have become standard approaches, although schemes based on particle swarm optimization and simulated annealing[11] are also significant. The main advantage of evolutionary algorithms, when applied to multi-objective optimization problems, is that they typically evolve whole sets of solutions, allowing an approximation of the entire Pareto front to be computed in a single run of the algorithm. Their main disadvantage is their comparatively low speed.
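
In practice one would usually rely on an existing implementation. A minimal sketch using the third-party pymoo library (not mentioned in this article; the benchmark problem name, population size and generation count are arbitrary choices for illustration) might look like this:

    # Assumes the pymoo package is installed (pip install pymoo).
    from pymoo.algorithms.moo.nsga2 import NSGA2
    from pymoo.problems import get_problem
    from pymoo.optimize import minimize

    problem = get_problem("zdt1")          # a standard two-objective benchmark
    algorithm = NSGA2(pop_size=100)        # Pareto-ranking evolutionary algorithm

    # A single run evolves a whole population, so the result approximates
    # the entire Pareto front rather than a single point.
    res = minimize(problem, algorithm, ("n_gen", 200), seed=1, verbose=False)
    print(res.F.shape)                     # objective values of the non-dominated set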

Other methods

  • Multiobjective Optimization using Evolutionary Algorithms (MOEA).[4][12][13]
  • PGEN (Pareto surface generation for convex multiobjective instances)[14]
  • IOSO (Indirect Optimization on the basis of Self-Organization)

Applications

Economics

In economics, the study of resource allocation under scarcity, many problems involve multiple objectives along with constraints on what combinations of those objectives are attainable.

For example, a consumer's demands for various goods are determined by the process of maximization of the utility derived from those goods, subject to a constraint based on how much income is available to spend on those goods and on the prices of those goods. This constraint allows more of one good to be purchased only at the sacrifice of consuming less of another good; therefore, the various objectives (more consumption of each good is preferred) are in conflict with each other according to this constraint. A common method for analyzing such a problem is to use a graph of indifference curves, representing preferences, and a budget constraint, representing the trade-offs that the consumer is faced with.
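
As a numerical counterpart to the graphical analysis, the sketch below (Python with SciPy; the Cobb-Douglas utility function, prices and income are assumptions made for the example, not part of the article) maximizes utility subject to a budget constraint with a general-purpose solver.

    import numpy as np
    from scipy.optimize import minimize

    prices = np.array([2.0, 5.0])     # made-up prices of two goods
    income = 100.0                    # made-up budget

    def neg_utility(q):
        """Negative Cobb-Douglas utility (the solver minimizes, so negate)."""
        return -(q[0] ** 0.6) * (q[1] ** 0.4)

    budget = {"type": "ineq", "fun": lambda q: income - prices @ q}  # spend <= income
    res = minimize(neg_utility, x0=[1.0, 1.0], bounds=[(1e-9, None)] * 2,
                   constraints=[budget])
    print(res.x)   # optimal bundle: roughly 60% / 40% of income on the two goods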

Another example involves the production possibilities frontier, which specifies what combinations of various types of goods can be produced by a society with certain amounts of various resources. The frontier specifies the trade-offs that the society is faced with — if the society is fully utilizing its resources, more of one good can be produced only at the expense of producing less of another good. A society must then use some process to choose among the possibilities on the frontier.

Macroeconomic policy-making is a context requiring multi-objective optimization. Typically a central bank must choose a stance for monetary policy that balances competing objectives — low inflation, low unemployment, low balance of trade deficit, etc. To do this, the central bank uses a model of the economy that quantitatively describes the various causal linkages in the economy; it simulates the model repeatedly under various possible stances of monetary policy, in order to obtain a menu of possible predicted outcomes for the various variables of interest. Then in principle it can use an aggregate objective function to rate the alternative sets of predicted outcomes, although in practice central banks use a non-quantitative, judgement-based, process for ranking the alternatives and making the policy choice.

Finance

In finance, a common problem is to choose a portfolio when there are two conflicting objectives — the desire to have the expected value of portfolio returns be as high as possible, and the desire to have risk, measured by the standard deviation of portfolio returns, be as low as possible. This problem is often represented by a graph in which the efficient frontier shows the best combinations of risk and expected return that are available, and in which indifference curves show the investor's preferences for various risk-expected return combinations. The problem of optimizing a function of the expected value (first moment) and the standard deviation (square root of the second moment) of portfolio return is called a two-moment decision model.
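
A rough numerical sketch of this trade-off (Python; the expected returns and covariance matrix are invented, and random sampling merely approximates the frontier rather than reproducing the two-moment model's analytics) samples portfolios and keeps the non-dominated risk/return pairs:

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up expected returns and covariance matrix for three assets.
    mu = np.array([0.05, 0.08, 0.12])
    cov = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.16]])

    # Sample random long-only portfolios (weights sum to 1).
    w = rng.dirichlet(np.ones(3), size=5000)
    ret = w @ mu                                          # expected portfolio return
    risk = np.sqrt(np.einsum("ij,jk,ik->i", w, cov, w))   # portfolio std. deviation

    # Keep portfolios for which no other sampled portfolio offers both
    # a higher expected return and a lower risk.
    efficient = [i for i in range(len(w))
                 if not np.any((ret > ret[i]) & (risk < risk[i]))]
    print(len(efficient), "sampled portfolios approximate the efficient frontier")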

Linear programming applications

In linear programming problems, a linear objective function is optimized subject to linear constraints, and typically multiple variables of concern appear in the objective function. A vast body of research has been devoted to methods of solving these problems. The efficient set, the set of combinations of values of the variables of interest such that no variable can be given a better value without hurting the value of another, is piecewise linear and not continuously differentiable. The problem is therefore not dealt with by first specifying all the points on the Pareto-efficient set; instead, solution procedures utilize the aggregate objective function right from the start.
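
A small sketch of this "aggregate objective from the start" approach on a hypothetical two-objective linear program (Python with SciPy; the objective coefficients, constraints and weight are invented for illustration):

    import numpy as np
    from scipy.optimize import linprog

    # Two linear objectives over the same feasible region (both to be maximized,
    # so they are negated for linprog, which minimizes): profit and market share.
    c1 = np.array([3.0, 2.0])   # profit coefficients
    c2 = np.array([1.0, 4.0])   # market-share coefficients

    A_ub = np.array([[1.0, 1.0],    # resource constraint: x1 + x2 <= 10
                     [2.0, 1.0]])   # labour constraint:   2*x1 + x2 <= 15
    b_ub = np.array([10.0, 15.0])

    w = 0.5                          # weight on the first objective
    c = -(w * c1 + (1 - w) * c2)     # aggregate objective (negated to maximize)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x)   # an efficient vertex for this particular weighting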

Many practical problems in operations research can be expressed as linear programming problems. Certain special cases of linear programming, such as network flow problems and multi-commodity flow problems are considered important enough to have generated much research on specialized algorithms for their solution. Linear programming is heavily used in microeconomics and company management, for dealing with such issues as planning, production, transportation, technology, and so forth.

Optimal control applications

In engineering and economics, many problems involve multiple objectives which are not describable as the-more-the-better or the-less-the-better; instead, there is an ideal target value for each objective, and the desire is to get as close as possible to the desired value of each objective. For example, one might want to adjust a rocket's fuel usage and orientation so that it arrives both at a specified place and at a specified time; or one might want to conduct open market operations so that both the inflation rate and the unemployment rate are as close as possible to their desired values.

Often such problems are subject to linear equality constraints that prevent all objectives from being simultaneously perfectly met, especially when the number of controllable variables is less than the number of objectives and when the presence of random shocks generates uncertainty. Commonly a multi-objective quadratic objective function is used, with the cost associated with an objective rising quadratically with the distance of the objective from its ideal value. Since these problems typically involve adjusting the controlled variables at various points in time and/or evaluating the objectives at various points in time, intertemporal optimization techniques are employed.
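
The following sketch (Python with SciPy; the linear responses of the two target variables to a single policy instrument, the target values and the weights are all invented) illustrates such a quadratic objective: with fewer instruments than targets, neither target can be hit exactly, and the weights determine the compromise.

    from scipy.optimize import minimize_scalar

    # Made-up linear responses of two target variables to one policy instrument r.
    inflation    = lambda r: 4.0 - 0.8 * r     # percent
    unemployment = lambda r: 3.0 + 0.5 * r     # percent

    targets = {"inflation": 2.0, "unemployment": 4.0}
    weights = {"inflation": 1.0, "unemployment": 1.0}

    def quadratic_loss(r):
        """Cost rises quadratically with each objective's distance from its ideal value."""
        return (weights["inflation"] * (inflation(r) - targets["inflation"]) ** 2
                + weights["unemployment"] * (unemployment(r) - targets["unemployment"]) ** 2)

    res = minimize_scalar(quadratic_loss, bounds=(0.0, 10.0), method="bounded")
    print(res.x, inflation(res.x), unemployment(res.x))  # neither target is met exactly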

See also

References

  1. ^ Steuer, R.E. (1986). Multiple Criteria Optimization: Theory, Computations, and Application. New York: John Wiley & Sons, Inc. ISBN 047188846X. 
  2. ^ Sawaragi, Y.; Nakayama, H. and Tanino, T. (1985). Theory of Multiobjective Optimization (vol. 176 of Mathematics in Science and Engineering). Orlando, FL: Academic Press Inc. ISBN 0126203709. 
  3. ^ Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. (2002). "A fast and elitist multi-objective genetic algorithm: NSGA-II". IEEE Transactions on Evolutionary Computation 6 (2): 182–197. http://dx.doi.org/10.1109/4235.996017. 
  4. ^ a b Deb, K. (2001). Multi-Objective Optimization using Evolutionary Algorithms. John Wiley & Sons. ISBN 978-0471873396. 
  5. ^ Das, I.; Dennis, J. E. (1998). "Normal-Boundary Intersection: A New Method for Generating the Pareto Surface in Nonlinear Multicriteria Optimization Problems". SIAM Journal on Optimization 8: 631–657. 
  6. ^ "Normal-Boundary Intersection: An Alternate Method For Generating Pareto Optimal Points In Multicriteria Optimization Problems" (pdf). http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19970005647_1997005080.pdf. 
  7. ^ Messac, A.; Ismail-Yahaya, A.; Mattson, C.A. (2003). "The normalized normal constraint method for generating the Pareto frontier". Structural and multidisciplinary optimization 25 (2): 86–98. 
  8. ^ Messac, A.; Mattson, C. A. (2004). "Normal constraint method with guarantee of even representation of complete Pareto frontier". AIAA journal 42 (10): 2101–2111. 
  9. ^ Mueller-Gritschneder, Daniel; Graeb, Helmut; Schlichtmann, Ulf (2009). "A Successive Approach to Compute the Bounded Pareto Front of Practical Multiobjective Optimization Problems". SIAM Journal on Optimization 20 (2): 915–934. 
  10. ^ a b Erfani, Tohid; Utyuzhnikov, Sergei V. (2011). "Directed Search Domain: A Method for Even Generation of Pareto Frontier in Multiobjective Optimization" (pdf). Journal of Engineering Optimization 43 (5): 1–18. http://personalpages.manchester.ac.uk/postgrad/tohid.erfani/TohidErfaniSUtyuzhnikov.pdf. Retrieved October 17, 2011. 
  11. ^ Suman, B.; Kumar, P. (2006). "A survey of simulated annealing as a tool for single and multiobjective optimization". Journal of the Operational Research Society 57 (10): 1143–1160. http://dx.doi.org/10.1057/palgrave.jors.2602068. 
  12. ^ Coello Coello, C. A.; Lamont, G. B.; Van Veldhuizen, D. A. (2007). Evolutionary Algorithms for Solving Multi-Objective Problems (2 ed.). Springer. ISBN 978-0-387-33254-3. 
  13. ^ Das, S.; Panigrahi, B. K. (2008). "Multi-objective Evolutionary Algorithms". In Rabuñal, J. R.; Dorado, J.; Pazos, A. (eds.), Encyclopedia of Artificial Intelligence. 3. Idea Group Publishing. pp. 1145–1151. 
  14. ^ Craft, D.; Halabi, T.; Shih, H.; Bortfeld, T. (2006). "Approximating convex Pareto surfaces in multiobjective radiotherapy planning". Medical Physics 33 (9): 3399–3407. 
