Constraint optimization

In constraint satisfaction, constrained optimization (also called constraint optimization) seeks a solution that maximizes or minimizes a cost function.

Definition

A constraint optimization problem can be defined as a regular constraint satisfaction problem in which constraints are weighted and the goal is to find a solution maximizing the weight of satisfied constraints.

Alternatively, a constraint optimization problem can be defined as a regular constraint satisfaction problem augmented with a number of "local" cost functions. The aim of constraint optimization is then to find a solution to the problem whose cost, evaluated as the sum of the cost functions, is maximized or minimized. The regular constraints are called hard constraints, while the cost functions are called soft constraints. These names illustrate that hard constraints must necessarily be satisfied, while soft constraints only express a preference for some solutions (those of high or low cost) over others (those of lower or higher cost).
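
For illustration, a toy constraint optimization problem over finite domains can be written down and solved by brute force as in the sketch below. The variables, domains, hard constraint, and cost functions are invented for the example, and exhaustive enumeration is used only to make the definition concrete, not as a practical solution method.

    from itertools import product

    # Hypothetical toy problem: two variables x and y with domain {0, 1, 2}.
    domains = {"x": [0, 1, 2], "y": [0, 1, 2]}

    # Hard constraint: must hold in every admissible solution.
    def hard(assign):
        return assign["x"] != assign["y"]

    # Soft constraints: each returns a cost; the objective is their sum.
    def soft1(assign):
        return assign["x"]                       # prefer large x

    def soft2(assign):
        return 2 if assign["y"] == 1 else 0      # prefer y == 1

    # Brute force: keep the admissible assignment of maximal total cost.
    best, best_cost = None, float("-inf")
    for values in product(*domains.values()):
        assign = dict(zip(domains, values))
        if not hard(assign):
            continue
        cost = soft1(assign) + soft2(assign)
        if cost > best_cost:
            best, best_cost = assign, cost

    print(best, best_cost)                       # {'x': 2, 'y': 1} with cost 4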

A general constrained optimization problem may be written as follows:


\begin{array}{rcll}
\max &~& f(\mathbf{x}) & \\
\mathrm{subject~to} &~& g_i(\mathbf{x}) = c_i & \mathrm{for~} i=1,\ldots,n \quad \text{(equality constraints)} \\
 &~& h_j(\mathbf{x}) \le d_j & \mathrm{for~} j=1,\ldots,m \quad \text{(inequality constraints)}
\end{array}

Here \mathbf{x} is a vector in an n-dimensional space, f(\mathbf{x}) is a scalar-valued objective function, and g_i(\mathbf{x}) = c_i \mathrm{~for~} i=1,\ldots,n and h_j(\mathbf{x}) \le d_j \mathrm{~for~} j=1,\ldots,m are the equality and inequality constraints that a solution must satisfy.
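
For continuous problems of this form, general-purpose numerical solvers can be applied. The following is a minimal sketch using SciPy's minimize routine; the objective and constraint values are made-up illustrative choices, and the maximization is expressed by minimizing the negated objective (SciPy expects inequality constraints in the form fun(x) >= 0).

    from scipy.optimize import minimize

    # Hypothetical instance of the general form above:
    #   maximize    f(x) = -(x_1 - 1)^2 - (x_2 - 2)^2
    #   subject to  g(x): x_1 + x_2 = 2     (equality constraint)
    #               h(x): x_1 <= 1.5        (inequality constraint)
    result = minimize(
        lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2,              # minimize -f(x)
        x0=[0.0, 0.0],
        constraints=[
            {"type": "eq",   "fun": lambda x: x[0] + x[1] - 2},   # g(x) = c
            {"type": "ineq", "fun": lambda x: 1.5 - x[0]},        # h(x) <= d
        ],
    )
    print(result.x)                                               # roughly [0.5, 1.5]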

Solution methods

Branch and bound

Constraint optimization can be solved by branch-and-bound algorithms. These are backtracking algorithms that store the cost of the best solution found during execution and use it to avoid part of the search. More precisely, whenever the algorithm encounters a partial solution that cannot be extended to a solution of better cost than the stored best cost, it backtracks instead of trying to extend that partial solution.

Assuming that cost is to be maximized, the efficiency of these algorithms depends on how the cost obtainable from extending a partial solution is estimated. Indeed, whenever the algorithm can backtrack from a partial solution, part of the search is skipped. The lower the estimated cost, the better the algorithm, as a lower estimate is more likely to fall below the best cost of a solution found so far.

On the other hand, this estimated cost cannot be lower than the cost that can effectively be obtained by extending the partial solution, as otherwise the algorithm could backtrack even though a solution better than the best found so far exists. As a result, the algorithm requires an upper bound on the cost obtainable from extending a partial solution, and this upper bound should be as small as possible.
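
A generic sketch of this scheme for maximization over finite domains is given below. The interface is hypothetical: hard_ok checks the hard constraints on the variables assigned so far, cost evaluates a complete assignment, and upper_bound over-estimates the best cost reachable from a partial assignment (possible choices for it are discussed in the following subsections).

    def branch_and_bound(variables, domains, hard_ok, upper_bound, cost,
                         assignment=None, best=(None, float("-inf"))):
        # Depth-first search that stores the best complete solution found so far
        # and backtracks whenever the upper bound on any extension of the current
        # partial assignment cannot beat it.
        if assignment is None:
            assignment = {}
        if len(assignment) == len(variables):            # complete assignment
            c = cost(assignment)
            return (dict(assignment), c) if c > best[1] else best
        var = variables[len(assignment)]                 # next unassigned variable
        for value in domains[var]:
            assignment[var] = value
            if hard_ok(assignment) and upper_bound(assignment) > best[1]:
                best = branch_and_bound(variables, domains, hard_ok,
                                        upper_bound, cost, assignment, best)
            del assignment[var]
        return best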

First-choice bounding functions

One way of evaluating this upper bound for a partial solution is to consider each soft constraint separately. For each soft constraint, the maximal possible value over any assignment to the unassigned variables is assumed. The sum of these values is an upper bound because the soft constraints cannot assume a higher value. It is not necessarily exact, however, because the maximal values of the soft constraints may derive from different assignments: one soft constraint may be maximal for x = a while another constraint is maximal for x = b.
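
Under a hypothetical representation of soft constraints as (scope, function) pairs over finite domains, such a bound could be computed as follows; current_cost is assumed to already contain the cost of the soft constraints whose variables are all assigned.

    from itertools import product

    def first_choice_bound(assignment, unassigned, domains, soft_constraints,
                           current_cost):
        # Upper bound: maximize each soft constraint independently over the
        # unassigned variables in its scope, then add up the maxima.
        bound = current_cost
        for scope, fn in soft_constraints:
            free = [v for v in scope if v in unassigned]
            if not free:
                continue                       # fully assigned, already counted
            bound += max(
                fn({**assignment, **dict(zip(free, values))})
                for values in product(*(domains[v] for v in free))
            )
        return bound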

Russian doll search

This method runs a branch-and-bound algorithm on n problems, where n is the number of variables. Each such problem is the subproblem obtained by dropping a sequence of variables x_1,\ldots,x_i from the original problem, along with the constraints containing them. Once the problem on the variables x_{i+1},\ldots,x_n has been solved, its optimal cost can be used as an upper bound while solving the other problems.

In particular, the cost estimate of a solution having x_{i+1},\ldots,x_n as unassigned variables is added to the cost that derives from the evaluated variables. In effect, this corresponds to ignoring the evaluated variables and solving the problem on the unassigned ones, except that the latter problem has already been solved. More precisely, the cost of soft constraints containing both assigned and unassigned variables is estimated as above (or using any other method); the cost of soft constraints containing only unassigned variables is instead estimated using the optimal solution of the corresponding subproblem, which is already known at this point.
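
The outer loop of Russian doll search might be sketched as follows. The interface is hypothetical: solve stands for a branch-and-bound routine that receives the optimal costs of the already solved subproblems (suffix_opt) and uses them, together with estimates for the mixed constraints, as part of its upper bound.

    def russian_doll_search(variables, domains, soft_constraints, solve):
        # Solve the nested subproblems from the smallest (last variable only)
        # up to the full problem; suffix_opt[i] holds the optimal cost of the
        # problem restricted to variables[i:], i.e. x_{i+1}, ..., x_n.
        n = len(variables)
        suffix_opt = {n: 0}                      # the empty problem costs nothing
        for i in range(n - 1, -1, -1):
            sub_vars = variables[i:]
            sub_constraints = [
                (scope, fn) for scope, fn in soft_constraints
                if all(v in sub_vars for v in scope)
            ]
            suffix_opt[i] = solve(sub_vars, domains, sub_constraints, suffix_opt)
        return suffix_opt[0]                     # optimal cost of the full problem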

Bucket elimination

The bucket elimination algorithm can be adapted to constraint optimization. A given variable can indeed be removed from the problem by replacing all soft constraints containing it with a single new soft constraint. The cost of this new constraint is computed, for each assignment of the remaining variables, by taking the maximum over the values of the removed variable. Formally, if x is the variable to be removed, C_1,\ldots,C_n are the soft constraints containing it, and y_1,\ldots,y_m are their variables except x, the new soft constraint is defined by:

C(y_1=a_1,\ldots,y_m=a_m) = \max_a \sum_i C_i(x=a,y_1=a_1,\ldots,y_m=a_m)

Bucket elimination works with an (arbitrary) ordering of the variables. Every variable is associated with a bucket of constraints; the bucket of a variable contains all constraints in which that variable is the highest in the ordering. Bucket elimination proceeds from the last variable to the first. For each variable, all constraints of its bucket are replaced as above in order to remove the variable. The resulting constraint is then placed in the appropriate bucket.
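
A sketch of the elimination step, again under the hypothetical (scope, function) representation of soft constraints, is shown below. It builds the new constraint defined by the formula above and returns the updated constraint set; maintaining the buckets themselves is omitted.

    def eliminate_variable(x, domain_x, constraints):
        # Replace every soft constraint mentioning x by one new constraint over
        # the remaining variables of their scopes: for each assignment of those
        # variables, take the maximum over the values of x of the sum of the
        # replaced constraints.
        involving = [(s, f) for s, f in constraints if x in s]
        rest = [(s, f) for s, f in constraints if x not in s]
        ys = sorted({v for s, _ in involving for v in s if v != x})

        def new_fn(assign):
            return max(
                sum(f({**assign, x: a}) for _, f in involving)
                for a in domain_x
            )

        return rest + [(tuple(ys), new_fn)]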


