Control-Lyapunov function

In control theory, a control-Lyapunov function[1] is a generalization of the notion of a Lyapunov function V(x) used in stability analysis. The ordinary Lyapunov function is used to test whether a dynamical system is stable (more restrictively, asymptotically stable): that is, whether the system, starting in a state x \ne 0 in some domain D, will remain in D, or, for asymptotic stability, will eventually return to x = 0. The control-Lyapunov function is used to test whether a system is feedback stabilizable, that is, whether for any state x there exists a control u(x,t) such that the system can be brought to the zero state by applying the control u.

More formally, suppose we are given a dynamical system


\dot{x}(t)=f(x(t))+g(x(t))\, u(t),

where the state x(t) and the control u(t) are vectors.

Definition. A control-Lyapunov function is a continuous function V(x) that is positive-definite (that is, V(x) is positive except at x = 0, where it is zero), proper (that is, V(x)\to \infty as \|x\|\to \infty), and such that

\forall x \ne 0, \ \exists u \qquad \dot{V}(x,u) := \nabla V(x)\cdot\bigl(f(x)+g(x)\,u\bigr) < 0.

The last condition is the key condition; in words it says that for each state x we can find a control u that will reduce the "energy" V. Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy to zero, that is to bring the system to a stop. This is made rigorous by the following result:
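The decrease condition can be checked concretely on a simple system. The sketch below is a hypothetical scalar example (not from this article): for \dot{x}=x^3+u with candidate V(x)=x^2/2, it exhibits, at each sampled state, a control that makes \dot{V} negative.

```python
# Hypothetical scalar example (an assumption, not from the article):
# x' = x^3 + u, with Lyapunov candidate V(x) = x^2 / 2,
# so Vdot(x, u) = V'(x) * (x^3 + u) = x * (x^3 + u).
def Vdot(x, u):
    return x * (x**3 + u)

# For each sampled state x != 0 we can exhibit a control making Vdot < 0:
# the choice u = -x^3 - x gives Vdot = -x^2 < 0.
for x in [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]:
    u = -x**3 - x
    assert Vdot(x, u) < 0
print("decrease condition satisfied at all sampled states")
```

Note that the open-loop system \dot{x}=x^3 is unstable, yet the candidate still qualifies as a control-Lyapunov function because a suitable u exists at every state.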

Artstein's theorem. The dynamical system has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback u(x).

It may not be easy to find a control-Lyapunov function for a given system, but if one can be found, through some ingenuity and luck, then the feedback stabilization problem simplifies considerably: it reduces to the static non-linear programming problem

u^*(x) = \arg\min_u \nabla V(x) \cdot \bigl( f(x)+g(x)\,u \bigr)

for each state x.
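The minimization need not be carried out by brute force: for control-affine systems with scalar input, Sontag's universal formula gives one explicit stabilizing choice. A minimal sketch (the example system \dot{x}=x+u and the candidate V(x)=x^2/2 are illustrative assumptions, not from the text):

```python
import math

def sontag_feedback(a, b):
    """Sontag's universal formula for x' = f(x) + g(x)u with scalar input u,
    where a = gradV(x).f(x) and b = gradV(x).g(x).  Whenever the CLF
    condition holds, the returned u makes Vdot = a + b*u negative."""
    if b == 0.0:
        return 0.0
    return -(a + math.sqrt(a**2 + b**4)) / b

# Illustrative system: x' = x + u with V(x) = x^2/2, so a = x*x and b = x.
x = 1.0
a, b = x * x, x
u = sontag_feedback(a, b)
print(a + b * u)  # Vdot = -sqrt(a^2 + b^4), strictly negative here
```

By construction the closed-loop derivative is \dot{V} = a + bu = -\sqrt{a^2+b^4}, so the formula selects a control achieving strict decrease wherever one exists.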

The theory and application of control-Lyapunov functions were developed by Z. Artstein and E. D. Sontag in the 1980s and 1990s.

Example

Here is a characteristic example of applying a Lyapunov candidate function to a control problem.

Consider the non-linear mass-spring-damper system with a hardening spring and position-dependent mass, described by


m(1+q^2)\ddot{q}+b\dot{q}+K_0q+K_1q^3=u

Now, given the desired state q_d and the actual state q, with error e = q_d - q, define a function r as


r=\dot{e}+\alpha e

A Control-Lyapunov candidate is then


V=\frac{1}{2}r^2

which is positive definite in r (it is positive except where r = \dot{e}+\alpha e = 0).

Now taking the time derivative of V


\dot{V}=r\dot{r}

\dot{V}=(\dot{e}+\alpha e)(\ddot{e}+\alpha \dot{e})

The goal is to get the time derivative to be


\dot{V}=-\kappa V

which makes V decay exponentially; since V is globally positive definite in r, this holds globally.

Hence we want the rightmost bracketed factor of \dot{V},


(\ddot{e}+\alpha \dot{e})=(\ddot{q}_d-\ddot{q}+\alpha \dot{e})

to fulfill the requirement


(\ddot{q}_d-\ddot{q}+\alpha \dot{e}) = -\frac{\kappa}{2}(\dot{e}+\alpha e)

which upon substitution of the dynamics, \ddot{q}, gives


(\ddot{q}_d-\frac{u-K_0q-K_1q^3-b\dot{q}}{m(1+q^2)}+\alpha \dot{e}) = -\frac{\kappa}{2}(\dot{e}+\alpha e)

Solving for u yields the control law


u= m(1+q^2)(\ddot{q}_d + \alpha \dot{e}+\frac{\kappa}{2}r )+K_0q+K_1q^3+b\dot{q}

with \kappa and \alpha, both greater than zero, as tunable parameters.

This control law guarantees global exponential stability, since substituting it into the time derivative yields, as expected,


\dot{V}=-\kappa V

which is a linear first-order differential equation with solution

V = V(0)e^{-\kappa t}

Hence the error and error rate, recalling that V=\frac{1}{2}(\dot{e}+\alpha e)^2, decay exponentially to zero.
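The claimed decay can be checked by simulating the closed loop. The sketch below uses illustrative parameter values (the numbers for m, b, K_0, K_1, \alpha, \kappa are assumptions, not from the text), a constant setpoint q_d = 0 (so \dot{q}_d = \ddot{q}_d = 0), and forward-Euler integration, then compares V at time T against V(0)e^{-\kappa T}:

```python
import math

# Illustrative parameter values (assumptions, not from the text)
m, b, K0, K1 = 1.0, 0.5, 2.0, 1.0
alpha, kappa = 1.0, 4.0
qd = 0.0                          # constant setpoint, so qd' = qd'' = 0

def control(q, qdot):
    """u = m(1+q^2)(qdd_d + alpha*edot + (kappa/2)*r) + K0*q + K1*q^3 + b*qdot"""
    e, edot = qd - q, -qdot
    r = edot + alpha * e
    return (m * (1 + q**2) * (alpha * edot + 0.5 * kappa * r)
            + K0 * q + K1 * q**3 + b * qdot)

def step(q, qdot, dt):
    """One forward-Euler step of m(1+q^2)q'' + b q' + K0 q + K1 q^3 = u."""
    u = control(q, qdot)
    qddot = (u - K0 * q - K1 * q**3 - b * qdot) / (m * (1 + q**2))
    return q + dt * qdot, qdot + dt * qddot

q, qdot, dt, T = 1.0, 0.0, 1e-4, 2.0
V0 = 0.5 * (-qdot + alpha * (qd - q))**2
for _ in range(round(T / dt)):
    q, qdot = step(q, qdot, dt)
V = 0.5 * (-qdot + alpha * (qd - q))**2
print(V, V0 * math.exp(-kappa * T))  # nearly identical: V decays like e^{-kappa*t}
```

With \kappa = 4 and T = 2, V drops by a factor of e^{-8}, matching the predicted rate to within the discretization error of the Euler steps.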

To tune for a particular response, substitute back into the solution derived for V and solve for e. This is left as an exercise for the reader, but the first few steps of the solution are:


r\dot{r}=-\frac{\kappa}{2}r^2

\dot{r}=-\frac{\kappa}{2}r

r=r(0)e^{-\frac{\kappa}{2} t}

\dot{e}+\alpha e= (\dot{e}(0)+\alpha e(0))e^{-\frac{\kappa}{2} t}

which can then be solved using any linear differential equation methods.
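For instance, treating r(0)e^{-\frac{\kappa}{2}t} as a forcing term and applying an integrating factor (assuming \alpha \ne \kappa/2) gives the explicit solution

e(t)= e(0)e^{-\alpha t}+\frac{r(0)}{\alpha-\frac{\kappa}{2}}\left(e^{-\frac{\kappa}{2}t}-e^{-\alpha t}\right)

so the error decays at the slower of the two rates \alpha and \kappa/2.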

Notes

  1. Freeman (46)



Wikimedia Foundation. 2010.
