$\DeclareMathOperator*{\argmax}{argmax}$ $\DeclareMathOperator{\defeq}{\stackrel{def}{=}}$ $\DeclareMathOperator{\Tr}{Tr}$ $\DeclareMathOperator{\rank}{rank}$ $\DeclareMathOperator{\sign}{sign}$


Terminology

Some key words:
  • Autonomous = time-invariant, e.g. $\dot{x} = f(x)$
  • Non-autonomous = time-variant, e.g. $\dot{x} = f(x, t)$

Stability Definitions

Stability of Linear, Time-Invariant (LTI) Systems

Consider an autonomous linear system $\dot{x} = A x$

Linear Stability Analysis

Consider a first-order differential equation $$ \dot{x} \defeq \frac{dx}{dt} = f(x)$$ Such a system has a fixed point wherever $f(x) = 0$; for instance, the linear system $\dot{x} = A x$ has a fixed point at $x = 0$. We might want to understand whether, and how quickly, a point near a fixed point flows towards or away from it. Concretely, let $x$ be the phase space of some system and let $x^*$ be a fixed point of the system. We can define a small perturbation away from the fixed point as $\eta(t) \defeq x(t) - x^*$. Note that this is equivalent to defining a time-varying variable $x(t) = x^* + \eta(t)$. We'd like to understand whether this small perturbation $\eta(t)$ grows or decays over time, and to answer such a question, we can construct a differential equation describing the behavior of $\eta(t)$.

$\begin{align} \frac{d}{dt}\eta(t) &= \frac{d}{dt} (x(t) - x^*)\\ &= \frac{d}{dt} x(t)\\ \end{align}$

since $x^*$ is constant.

Using the definition $x(t) = x^* + \eta(t)$, we can construct a Taylor series expansion of $f$ about $x^*$:

$\begin{align} \frac{d}{dt} x(t) &= f(x^* + \eta (t))\\ &= f(x^*) + \eta(t) f'(x^*) + O(\eta^2)\\ \end{align}$

Since $x^*$ is a fixed point, $f(x^*) = 0$, and dropping the $O(\eta^2)$ term:

$\begin{align} \frac{d}{dt}\eta(t) \approx \eta(t) f'(x^*) \end{align}$

This is a linear equation in $\eta$ and is therefore called the "linearization about $x^*$". If $f'(x^*)$ is zero, the $O(\eta^2)$ term that we previously ignored becomes relevant. The linearization tells us that if $f'(x^*)$ is negative, the perturbation will shrink, and conversely, if $f'(x^*)$ is positive, the perturbation will grow. The magnitude $|f'(x^*)|$ tells us how quickly the perturbation grows or shrinks, and its reciprocal $\frac{1}{|f'(x^*)|}$ is called the characteristic time scale, which expresses how much time is required to modify $x$ by a fixed amount around $x^*$.
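To make this concrete, here is a minimal numerical sketch; the logistic $f(x) = x(1-x)$ and all constants are illustrative assumptions, not part of the notes above. We perturb each fixed point slightly and compare the simulated perturbation against the linearization's prediction $\eta(0) e^{f'(x^*) t}$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (assumed) system: logistic f(x) = x(1 - x), with fixed points
# x* = 0 where f'(0) = +1 (unstable) and x* = 1 where f'(1) = -1 (stable).
f = lambda t, x: x * (1.0 - x)

for x_star, f_prime in [(0.0, 1.0), (1.0, -1.0)]:
    eta0 = 1e-3  # small perturbation away from the fixed point
    sol = solve_ivp(f, (0.0, 2.0), [x_star + eta0])
    eta_T = sol.y[0, -1] - x_star
    # Linearization predicts eta(t) ~ eta(0) * exp(f'(x*) * t)
    print(f"x* = {x_star}: eta(2) = {eta_T:.2e}, "
          f"linear prediction = {eta0 * np.exp(f_prime * 2.0):.2e}")
```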

Stability of Non-Linear Systems

For non-linear systems, multiple notions of stability exist. The basic object is an equilibrium point $x_{eq}$, defined as a point in the phase space where all derivatives equal 0. However, equilibrium points aren't the only focus; we also care about the behavior of the system in general. Consequently, we have a sequence of definitions of stability.

We say that an equilibrium point $x_{eq}$ is stable if a point that starts within a small ball (defined by radius $r$) around the equilibrium point is bounded in how far it can subsequently move away from that point. Intuitively, this means that the system is guaranteed to remain "close" to the equilibrium point if it starts close to the point. Formally, we write: $$ \forall R > 0, \exists \, 0 < r < R \text{ s.t. } ||x(0) - x_{eq}|| < r \Rightarrow \forall t \geq 0, \, ||x(t) - x_{eq}|| < R$$

We say that an equilibrium point is asymptotically stable if it is stable and if starting near the point guarantees that the system will converge back to the equilibrium point. Formally, we write: $$\exists r_0 > 0 \text{ s.t. } ||x(0) - x_{eq}|| < r_0 \Rightarrow \lim_{t \to \infty}||x(t) - x_{eq} || = 0 $$

We say that an equilibrium point is globally asymptotically stable if that initial radius $r_0 = \infty$. In this case, we say that the system (not just the point!) is globally asymptotically stable, because there can only be one equilibrium point: if there were another, then starting at the second equilibrium point would not converge to the first, contradicting the claim that the first point is globally asymptotically stable.

We say that an equilibrium point is exponentially stable if starting near the equilibrium point ensures that (potentially after some elapsed time) the system converges exponentially fast to the equilibrium point. Formally, we write: $$\exists \delta > 0, \exists \alpha \geq 1, \exists \lambda > 0 \text{ s.t. } ||x(0) - x_{eq}|| < \delta \Rightarrow \forall t\geq 0, ||x(t) - x_{eq} || \leq \alpha || x(0) - x_{eq} || e^{- \lambda t}$$ One cool point is that if we write $\alpha$ as $e^{\lambda \tau}$ for some $\tau \geq 0$, the bound $\alpha || x(0) - x_{eq} || e^{- \lambda t}$ can be rewritten as $ || x(0) - x_{eq} || e^{- \lambda (t - \tau)}$. This shows that $\alpha$ can be viewed as a time delay: the state converges exponentially fast starting from time $\tau$. We say that an equilibrium point is globally exponentially stable if $\delta = \infty$. As with global asymptotic stability, we can then describe the entire system (not just the point) as globally exponentially stable, since only one equilibrium point can exist.

Note that exponential stability implies stability. To see this, suppose that a point $x_{eq}$ is exponentially stable. For any bounding radius $R$, we must find a corresponding $r < R$ such that starting within $r$ keeps the state within $R$. Define $r = \min \{\frac{R}{\alpha}, \delta \}$ and assume that $||x(0) - x_{eq}|| < r$. Then for all $t \geq 0$: \begin{align*} ||x(t) - x_{eq}|| &\leq \alpha ||x(0) - x_{eq} ||e^{-\lambda t} & \text{Defn. of exponential stability and } ||x(0) - x_{eq}|| < \delta\\ &\leq \alpha ||x(0) - x_{eq} || & e^{-\lambda t} \leq 1 \text{ for } t \geq 0\\ &< \alpha r & \text{By assumption, } ||x(0) - x_{eq}|| < r\\ &\leq \alpha \frac{R}{\alpha} & \text{Holds regardless of whether } r = \frac{R}{\alpha} \text{ or } r = \delta\\ &= R \end{align*} Thus we conclude that exponential stability implies stability.

Characterizing Stability via Linearization

Linearization around a fixed point is the first method for studying the type of stability the fixed point has. The idea is to Taylor-series expand about the equilibrium point to first order and disregard the higher order terms (H.O.T.). Suppose our system is $\dot{x} = f(x)$. Then linearizing about a fixed point $x_{eq} = 0$ (if the fixed point isn't at $0$, perform a coordinate transform) yields $$f(x) = f(0) + \frac{\partial f}{\partial x}\Big|_{x=0} x + \text{ H.O.T.}$$ Since $f(0) = 0$ because $0$ is a fixed point, $$\dot{x} = f(x) = \frac{\partial f}{\partial x}\Big|_{x=0} x + \text{ H.O.T.}$$ If we drop the higher order terms, we call the simplified system the linearization or linear approximation of the system. $$ \dot{x} = \frac{\partial f}{\partial x}\Big|_{x=0} x$$ The matrix $A \defeq \frac{\partial f}{\partial x}\Big|_{x=0}$ has a name: the Jacobian. This leads us to Lyapunov's Linearization Method, which states:

Theorem: Let $\dot{x} = Ax$ be the linearization of some dynamical system.

  • If all eigenvalues of the Jacobian are in the left half of the complex plane (i.e. the linearized system is strictly stable), then the equilibrium point in the original dynamical system is asymptotically stable.
  • If at least one eigenvalue of the Jacobian is in the right half of the complex plane (i.e. the linearized system is unstable), then the equilibrium point in the original dynamical system is unstable.
  • If all eigenvalues of the Jacobian are in the closed left half of the complex plane but at least one lies on the imaginary axis (i.e. the linearized system is marginally stable), then no conclusion can be drawn about the equilibrium point in the original dynamical system.

Proof: TODO

To understand the third possible outcome of the theorem, compare the systems $\dot{v} + |v|v = 0$ and $\dot{v} - |v|v = 0$. Both systems' linearizations are $\dot{v}=0$, but the first system is asymptotically stable around $v=0$ while the other system is unstable around $v=0$. The reason is that the higher order terms we disregarded when linearizing the system contained the information necessary to make this determination. For another simple example, consider $\dot{x} = ax + x^5$ with corresponding linearization $\dot{x} = ax$. Clearly, if $a < 0$, the system converges, and if $a > 0$, the system diverges, but we can't say anything for $a = 0$.
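In practice, Lyapunov's linearization method is easy to apply numerically: estimate the Jacobian at an equilibrium and inspect the eigenvalues' real parts. Below is a minimal sketch; the damped pendulum, the damping coefficient, and the finite-difference `jacobian` helper are all illustrative assumptions.

```python
import numpy as np

def jacobian(f, x_eq, eps=1e-6):
    """Finite-difference Jacobian of f at x_eq (illustrative helper)."""
    n = len(x_eq)
    J = np.zeros((n, n))
    fx = f(x_eq)
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (f(x_eq + dx) - fx) / eps
    return J

# Assumed example: damped pendulum theta'' + 0.5 theta' + sin(theta) = 0,
# written as x1' = x2, x2' = -0.5 x2 - sin(x1).
f = lambda x: np.array([x[1], -0.5 * x[1] - np.sin(x[0])])

for x_eq in (np.array([0.0, 0.0]), np.array([np.pi, 0.0])):
    eigs = np.linalg.eigvals(jacobian(f, x_eq))
    if np.all(eigs.real < 0):
        verdict = "asymptotically stable"
    elif np.any(eigs.real > 0):
        verdict = "unstable"
    else:
        verdict = "marginal: linearization is inconclusive"
    print(x_eq, np.round(eigs, 3), verdict)
```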

Characterizing Stability via Lyapunov Functions (Lyapunov's Direct Method)

Linearization is a useful technique in the local vicinity of a fixed point, but it isn't sufficient: we'd like to be able to describe the behavior of the system in general, even far away from fixed points, and without having to integrate the differential equation. This led to Lyapunov's Direct Method, which is inspired by the idea that a system that continually dissipates energy must eventually settle down to a fixed point. Professor Slotine used the example of a pendulum: if $\theta$ is the angle away from the pendulum hanging straight down, both $\theta = 0$ and $\theta = \pi$ are fixed points. What differentiates them is that the pendulum hanging straight down has minimal energy, making it stable, whereas the upright pendulum has potential energy; if the upright pendulum is perturbed, and if the pendulum has friction, it will eventually settle down to the bottom equilibrium. This motivates the idea of Lyapunov functions: energy-like functions that can be used to describe the behavior of the system more broadly than just around fixed points. The idea is to find a scalar function for the system which is non-negative and zero only at equilibrium; then, depending on the time derivative of the Lyapunov function, we can start drawing conclusions about the system's behavior.

We'll first need some definitions. We say that a scalar function $V(x)$ is locally positive definite if within some ball $B_R$, $V(x) \geq 0$ with $V(x) = 0 \Leftrightarrow x=0$. If instead $V(x) \geq 0$ but $V(x) = 0$ at some other $x \neq 0$, we say that the function is locally positive semi-definite. If $R = \infty$, then we say the function is globally positive definite or globally positive semi-definite, respectively. (Globally) negative definite and negative semi-definite functions are defined similarly. We say that a scalar function $V(x)$ is a Lyapunov function if (a) $V(x)$ is locally positive definite, (b) $V(x)$ has continuous partial derivatives, and (c) its time derivative along trajectories, $\dot{V}(x)$, is negative semi-definite. If we can define a Lyapunov function for a system around its equilibrium points, we can start to make claims about the stability of the equilibria.

Lypanunov's Direct Method Theorem: If a system has a Lyapunov function $V(x)$ defined around equilibrium point $x=0$ (if equilibrium is not at 0, perform a coordinate transform), then the equilibrium point is stable. If the Lyapunov function's time derivative $\dot{V}(x)$ is negative definite, then the equilibrium point is asymptotically stable. In table form:

| $V(x)$ | $\dot{V}(x)$ | Conclusion |
| --- | --- | --- |
| Locally Positive Definite | Locally Negative Semi-Definite | Stable |
| Locally Positive Definite | Locally Negative Definite | Asymptotically Stable |
| Globally Positive Definite, Radially Unbounded | Globally Negative Definite | Globally Asymptotically Stable |

(Local Stability) Proof: Consider the first case, i.e. $\dot{V}(x)$ is negative semi-definite. We want to show that $x=0$ is stable, which means showing that $\forall R > 0, \exists r < R$ such that $||x(0)|| < r \Rightarrow \forall t \geq 0, ||x(t)|| < R$. Consider the minimum of $V(x)$ on the sphere $\{x : ||x|| = R\}$; this minimum is strictly positive because $V(x)$ is positive definite ($V(x) = 0$ only at $x = 0$, and $V(x)$ is positive elsewhere). Since $V(x)$ is continuous and $V(0) = 0$, a ball $B_r$ around $x=0$ must exist such that the maximum of $V(x)$ on $B_r$ is less than the minimum of $V(x)$ on the sphere of radius $R$. Then, provided a point starts within $B_r$, since $V(x)$ cannot increase along trajectories, the point cannot escape $B_R$: doing so would require reaching a larger value of $V(x)$ than it started with. The physical intuition is that the minimum of $V(x)$ on the sphere of radius $R$ determines an energy threshold. We then find a region entirely below this energy threshold (guaranteed to exist because $V(x)$ is continuous and $V(0) = 0$), and since energy can't increase, points within this interior region are trapped below the energy threshold.

(Local Asymptotic Stability) Proof: Consider the second case, i.e. $\dot{V}(x)$ is now negative definite, not merely negative semi-definite. By the same reasoning as above, there exists a ball $B_r$ around $x = 0$ such that starting in $B_r$ guarantees the state never escapes $B_R$. We want to show that there also exists some $r_0>0$ such that $||x(0)|| < r_0 \Rightarrow ||x(t)|| \rightarrow 0$ as $t \rightarrow \infty$. We proceed in a few clean steps. (1) $V$ is lower bounded and $\dot{V}$ is negative definite, meaning $V \rightarrow L$ for some limit $L \geq 0$. If $L = 0$, then the system converges to the fixed point and the point is thus asymptotically stable. Using proof by contradiction, we'll show that $L$ cannot be anything except 0. Assume for purposes of contradiction that $L \neq 0$. Since $V$ is $0$ only at $0$ and $V$ is continuous, there exists some distance $d_L > 0$ such that $||x(t)|| > d_L$ for all $t$; otherwise $V$ could drop below $L$. (2) Since $\dot{V}$ is negative definite and continuous, it attains a strictly negative maximum on the region $\{x : d_L \leq ||x|| \leq R\}$; define $-\eta \defeq \max \{ \dot{V}(x) : d_L \leq ||x|| \leq R \} < 0$. Simply put, $\dot{V}$ cannot approach zero because $\dot{V}$ is $0$ only at $x=0$, and $V$'s limit $L \neq 0$ prevents the system from approaching $x=0$. But then, by the fundamental theorem of calculus, $V(t) = V(0) + \int_0^t \dot{V}(x(s)) \, ds \leq V(0) - \eta t$. This means that $V(t)$ must decrease at least linearly in time, which is impossible since $V(t)$ is lower bounded! Thus we find our contradiction and conclude that any non-zero limit $L$ is impossible.

However, both of these statements are local. We'd like to understand the global stability of the equilibrium point(s). What additional conditions are required? In order to assert global asymptotic stability, we additionally require that $V(x)$ be radially unbounded, meaning $V(x) \rightarrow \infty$ as $||x|| \rightarrow \infty$. This condition guarantees that the sub-level sets $\{x : V(x) \leq l\}$ are bounded, preventing trajectories from escaping towards infinity while $V$ decreases.

Example: Consider $\dot{x} + c(x) = 0$, where $c(x)$ is continuous and $x c(x) > 0$ for all $x \neq 0$. In English, this means that $c(x)$ has the same sign as $x$ and vanishes only at $x = 0$. Choose $V(x) = x^2 \Rightarrow \dot{V}(x) = 2 x \dot{x} = - 2 x c(x) < 0$ for $x \neq 0$. Since $V(x)$ is positive definite and radially unbounded and $\dot{V}(x)$ is negative definite, the equilibrium point $x=0$ is globally asymptotically stable. We couldn't have obtained this insight from linearizing around the fixed point because we don't know what $c(x)$ is, but with Lyapunov's direct method, we are able to conclude that the origin is globally asymptotically stable.
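A quick numerical illustration of this example; the particular choice $c(x) = x^3$ is an assumption made purely for simulation. Along any trajectory, $V(x) = x^2$ should decrease monotonically.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical choice (an assumption): c(x) = x^3 satisfies x c(x) > 0 for x != 0
c = lambda x: x**3

sol = solve_ivp(lambda t, x: -c(x), (0.0, 50.0), [2.0],
                t_eval=np.linspace(0.0, 50.0, 6))
V = sol.y[0] ** 2  # Lyapunov candidate V(x) = x^2
print("V along the trajectory:", np.round(V, 4))
print("V monotonically decreasing:", bool(np.all(np.diff(V) <= 0)))
```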

Properties: One property of Lyapunov functions is that many can exist for a single system. Suppose $V$ is a Lyapunov function. Then for any $\rho > 0$ and $\alpha > 1$, so is $$V_1(x) = \rho V(x)^{\alpha}$$ Suppose that $\phi(\cdot)$ is a scalar, differentiable, strictly monotonically increasing function of its scalar argument. Supposing again that $V(x)$ is a Lyapunov function, so is: $$V_2(x) = \phi(V(x)) - \phi(0) $$ To see this requires showing three facts (a symbolic spot-check follows the list):

  1. $V_2(x)$ is positive definite (PD). $V$ is PD, which means that $V(x) = 0 \Leftrightarrow x = 0$ and thus $V_2(0) = \phi(V(0)) - \phi(0) = \phi(0) - \phi(0) = 0$. To show that $\forall x \neq 0, \, V_2(x) > 0$: because $V$ is PD, $\forall x \neq 0, \, V(x) > 0$, and because $\phi$ is strictly monotonically increasing, $\forall x \neq 0, \, \phi(V(x)) - \phi(0) > 0$. Thus we conclude that $V_2$ is positive definite.
  2. $V_2(x)$ has continuous partial derivatives. We are told that $\phi$ is differentiable, and by virtue of $V$ being a Lyapunov function, $V$ has continuous partial derivatives. By the chain rule, $\frac{\partial V_2(x)}{\partial x_i} = \frac{d \phi(V(x))}{d V} \frac{\partial V(x)}{\partial x_i}$, and we conclude that $V_2(x)$ has continuous partial derivatives.
  3. $\dot{V}_2(x)$ is negative semi-definite (NSD). Because $\phi$ is monotonically increasing, $\frac{d \phi (V(x))}{d V} \geq 0$, and because $V$ is a Lyapunov function, $\dot{V}(x) \leq 0$. By the chain rule, $\dot{V}_2(x) = \frac{d}{dt} \phi(V(x)) = \frac{d \phi (V(x))}{d V} \dot{V}(x)$. $\dot{V}_2(x)$ is the product of a non-negative term and a non-positive term, meaning $\dot{V}_2(x) \leq 0$, i.e. NSD.
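These three facts can be spot-checked symbolically for a concrete instance. In the sketch below, the dynamics $\dot{x} = -x$, the Lyapunov function $V = x^2$, and the map $\phi = \exp$ are all illustrative assumptions:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = -x                       # assumed dynamics: x' = -x
V = x**2                     # known Lyapunov function for this system
phi = sp.exp                 # differentiable, strictly increasing scalar map
V2 = phi(V) - phi(0)         # candidate V_2 = phi(V(x)) - phi(0)

V2_dot = sp.diff(V2, x) * f  # chain rule: dV_2/dt = (dV_2/dx) * x'
print(sp.simplify(V2))       # exp(x**2) - 1: non-negative, zero only at x = 0
print(sp.simplify(V2_dot))   # -2*x**2*exp(x**2): non-positive everywhere
```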

Problems: Although succinct and elegant, Lyapunov's direct method has some shortcomings. First, it doesn't tell us how to find the Lyapunov function $V$. Second, infinitely many such functions can exist for one system, and many of them may lack interpretable meaning. The above example uses a strong prior that the energy of a physical system should equal kinetic plus potential energy, but this isn't generally applicable to all systems. Third, Lyapunov's direct method's requirement that $\dot{V}$ be negative definite (for asymptotic stability) is unnecessarily strict; as we'll see with La Salle's Invariant Set Theorems in the next section, we can relax this requirement and still make statements concerning the stability of the system's fixed point(s).

Characterizing Stability via Invariant Sets (La Salle's Method)

One shortcoming of demonstrating stability using Lyapunov's direct method is that if $\dot{V}(x)$ is only negative semi-definite, we can't say anything about the asymptotic behavior of the system. But this feels counter-intuitive. For instance, consider a swinging pendulum with damping: the dissipation vanishes whenever the velocity $\dot{\theta} = 0$, i.e. at the top of each swing, so $\dot{V} = 0$ there even though the pendulum is not at equilibrium. Yet we would expect the system to asymptote towards $\theta = 0$ provided it doesn't start exactly at $\theta = \pi$. This is a shortcoming we can rectify using the notion of invariant sets.

An invariant set $R$ is a subset of the state space such that if a point starts in the set $R$, it never leaves the set $R$. Some examples of invariant sets include (a) the set of a single equilibrium point, (b) the set of all equilibrium points, (c) the entire phase space, (d) the trajectory of a point $x(t)$. Invariant sets will help us rectify the shortcoming of Lyapunov's Direct Method since it will allow us to differentiate the bottom of the pendulum $(\theta = 0, \dot{\theta} = 0)$ from the top of the pendulum swings.

La Salle's (Local) Invariant Set Theorem: Consider an autonomous system $\dot{x} = f(x)$ and let $V(x)$ be a scalar function with continuous first partial derivatives (note that we do not assume $V(x)$ to be positive definite). Assume that for some $l > 0$, the region $\Omega_l = \{x | V(x) < l \}$ is bounded and $\forall x \in \Omega_l, \, \dot{V}(x) \leq 0$. Let $R$ be the set of all points within $\Omega_l$ where $\dot{V}(x) = 0$ and let $M$ be the largest invariant set in $R$, i.e. the union of all invariant sets in $R$. Then every point $x(t)$ originating in $\Omega_l$ tends to $M$ as $t \rightarrow \infty$.

La Salle's (Global) Invariant Set Theorem: As with Lyapunov's Direct Method, there is also a global equivalent of the local theorem. Consider an autonomous system $\dot{x} = f(x)$ and let $V(x)$ be a scalar function with continuous first partial derivatives. Define $R = \{x | \dot{V}(x) = 0 \}$ and let $M$ be the largest invariant set in $R$. If $V(x)$ is radially unbounded i.e. $||x|| \rightarrow \infty \Rightarrow V(x) \rightarrow \infty$ and if $\dot{V} \leq 0$ over the whole state space, then all states globally asymptotically converge to M as $t \rightarrow \infty$.

Example: Consider $\ddot{x} + \dot{x}^3 + x^5 = x^4 \sin^2(x)$. Note that the system has a fixed point at $(x=0, \dot{x} = 0)$. Define $$V(x, \dot x) = \frac{1}{2} \dot x ^2 + \int_0^x \big(u^5 - u^4 \sin^2 (u)\big) \, du$$ Then $$\dot{V} = \dot{x}\big(\ddot{x} + x^5 - x^4 \sin^2(x)\big) = - \dot{x}^4 \leq 0 $$ The set where $\dot{V} = 0$ is $\{\dot{x} = 0\}$, and the largest invariant set within it is the origin: if $\dot{x} \equiv 0$, then $\ddot{x} = 0$, forcing $x^4(x - \sin^2(x)) = 0$, whose only solution is $x = 0$. Since $V$ is radially unbounded and $\dot{V} \leq 0$ everywhere, the Global Invariant Set Theorem lets us conclude that the origin is globally asymptotically stable. Notably, we can make this statement without needing $V$ to be positive definite.

Example: One nice example that makes use of both the local and global invariant set theorems is the following. Consider the system: \begin{align*} \dot{x}_1 &= x_2 - x_1^7 (x_1^4 + 2 x_2^2 - 10)\\ \dot{x}_2 &= -x_1^3 - 3 x_2^5(x_1^4 + 2 x_2^2 - 10) \end{align*} Note that there are two invariant sets: the origin $(x_1 = 0, x_2 = 0)$ and the limit cycle $x_1^4 + 2 x_2^2 = 10$. To see the latter, note that: $$\frac{d}{dt}(x_1^4 + 2 x_2^2 - 10) = -(4 x_1^{10} + 12 x_2^6)(x_1^4 + 2 x_2^2 - 10) $$ so the quantity $x_1^4 + 2 x_2^2 - 10$ remains $0$ if it starts at $0$. We'll consider a Lyapunov-like function that measures the distance from a point to the limit cycle: $$V(x_1, x_2) = (x_1^4 + 2 x_2^2 - 10)^2 $$ Taking the time derivative, we see that $$\dot{V} = - 8(x_1^{10} + 3 x_2 ^6)(x_1^4 + 2 x_2^2 - 10)^2 $$ Thus $\dot{V}$ is negative except when $x_1^{10} + 3 x_2 ^6 = 0$ or $x_1^4 + 2 x_2^2 - 10 = 0$, where $\dot{V} = 0$. By the Global Invariant Set Theorem, we conclude that all states globally asymptotically converge to either the limit cycle or the origin. But we can actually go further and make one additional statement about the system: we can determine where points enclosed between the origin and the limit cycle flow, using the Local Invariant Set Theorem. Define $l=100$ and $\Omega_l = \{ x | V(x) < l\}$. This region excludes the origin, since the origin sits exactly at $V(x) = 100$. Then by the Local Invariant Set Theorem, all points between the limit cycle and the origin flow to the limit cycle. The origin is therefore unstable.
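A simulation bears this out (a minimal sketch; the initial conditions and tolerances are arbitrary choices): trajectories starting both inside and outside the limit cycle drive $V$ to zero, i.e. they converge to the cycle.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    x1, x2 = x
    g = x1**4 + 2 * x2**2 - 10           # zero exactly on the limit cycle
    return [x2 - x1**7 * g, -x1**3 - 3 * x2**5 * g]

V = lambda x1, x2: (x1**4 + 2 * x2**2 - 10) ** 2

# One initial condition inside the limit cycle, one outside
for x0 in ([1.0, 1.0], [2.0, 2.0]):
    sol = solve_ivp(f, (0.0, 20.0), x0, rtol=1e-8, atol=1e-10)
    print(x0, "-> V at t=20:", V(sol.y[0, -1], sol.y[1, -1]))  # ~0: on the cycle
```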

Characterizing Stability via Barbalat's Lemma

In Lyapunov's direct method and La Salle's invariant set theorems, we considered an autonomous system $\dot{x} = f(x)$. Now, suppose we consider a non-autonomous system $\dot{x} = f(x, t)$. Can we characterize the stability of fixed points as we did previously? The answer is yes, but not so straightforwardly. Previously, we relied on the fact that the system was autonomous: La Salle's theorems lean on the invariance of trajectories under time shifts, which fails for non-autonomous systems, and for a time-varying $V(x, t)$ we can no longer argue via the largest invariant set where $\dot{V} = 0$. Barbalat's Lemma offers a way forward.

Barbalat's Lemma

Suppose that $V(x, t) \geq 0$ (lower bounded), $\frac{d}{dt} V(x, t) \leq 0$, and $\frac{d^2}{dt^2}V(x,t)$ is bounded (so that $\dot{V}$ is uniformly continuous in time). Then $\dot{V}(x, t) \rightarrow 0$ as $t \rightarrow \infty$.
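A classic textbook illustration of Barbalat's lemma (sketched here with assumed signals, not taken from the notes above) is the error/adaptation pair $\dot{e} = -e + \theta w(t)$, $\dot{\theta} = -e w(t)$ with bounded $w$: taking $V = e^2 + \theta^2$ gives $\dot{V} = -2e^2 \leq 0$, which is only negative semi-definite, yet Barbalat's lemma yields $e \rightarrow 0$ while $\theta$ need not vanish.

```python
import numpy as np
from scipy.integrate import solve_ivp

w = lambda t: np.sin(t)  # bounded input signal (an assumption)

def f(t, s):
    e, th = s
    return [-e + th * w(t), -e * w(t)]

sol = solve_ivp(f, (0.0, 100.0), [1.0, 2.0], t_eval=[0.0, 25.0, 50.0, 100.0])
print("e: ", np.round(sol.y[0], 4))   # V = e^2 + th^2 has Vdot = -2 e^2 <= 0,
print("th:", np.round(sol.y[1], 4))   # yet e -> 0 by Barbalat; th does not vanish
```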

Constructing Lyapunov Functions

Since both Lyapunov's Direct Method and La Salle's Invariant Set Theorems require cleverly constructing functions with particular properties, it's worth taking a moment to consider common approaches for defining Lyapunov functions.

Energy = Kinetic + Potential

Since almost any real system dissipates energy, a useful starting point is defining the Lyapunov function as the sum of the system's kinetic energy plus potential energy. For example, consider $$\ddot{x} + b(\dot{x}) + c(x) = 0$$ where $b(\dot{x})$ has the same sign as $\dot{x}$ and $c(x)$ has the same sign as $x$. One immediate consequence is that a fixed point exists at $(x=0, \dot{x}=0)$. This is also the only fixed point: a fixed point requires $\dot x = 0$ and $\ddot x = 0$, which means $c(x) = 0$ must hold, and since $c(x)$ is zero only at $x=0$, $(x=0, \dot{x}=0)$ must be the only fixed point. Now, we define $$V(x, \dot{x}) = \frac{1}{2} \dot{x}^2 + \int_0^x c(u) \, du $$ which, if we view the system as a non-linear mass-spring with damping, can be seen as the kinetic energy of the mass plus the energy stored in the spring. If we consider the time derivative of $V$: $$ \dot{V} = \dot{x} \ddot{x} + c(x) \dot{x} = \dot{x}(-b(\dot{x}) - c(x)) + c(x) \dot{x} = -\dot{x} b(\dot{x})$$ Since $b$ shares the same sign as $\dot{x}$, $\dot{V}$ is negative semi-definite. Thus, by the local invariant set theorem, the fixed point $(x=0, \dot{x}=0)$ is locally asymptotically stable. If $\int_0^x c(u) \, du$ is radially unbounded, then $V$ is radially unbounded and the fixed point becomes globally asymptotically stable. This is a result we could not have achieved using Lyapunov's direct method alone.
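As a numerical check of this construction (the specific $b(\dot{x}) = \dot{x}^3$ and $c(x) = x + x^3$ are assumptions chosen to satisfy the sign conditions), the energy $V$ should be non-increasing along trajectories:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical choices (assumptions): b(v) = v^3 and c(x) = x + x^3,
# each sharing the sign of its argument.
b = lambda v: v**3
c = lambda x: x + x**3

def f(t, s):
    x, v = s
    return [v, -b(v) - c(x)]

# Energy Lyapunov function: V = v^2/2 + int_0^x c(u) du = v^2/2 + x^2/2 + x^4/4
V = lambda x, v: 0.5 * v**2 + 0.5 * x**2 + 0.25 * x**4

sol = solve_ivp(f, (0.0, 30.0), [1.5, 0.0], t_eval=np.linspace(0.0, 30.0, 7))
print(np.round(V(sol.y[0], sol.y[1]), 5))  # non-increasing, tending to 0
```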

Linear Dynamical Systems

1-Dimensional

2-Dimensional

Consider the linear, first order dynamical system $\dot{x} = A x $ with $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \in \mathbb{R}^{2 \times 2}$. If we consider the characteristic polynomial of $A$, we see that $A$ has two eigenvalues $\lambda_1, \lambda_2$ which depend on the trace of $A$, $\Tr(A) = \lambda_1 + \lambda_2 = a_{11} + a_{22}$, and the determinant of $A$, $\det(A) = \lambda_1 \lambda_2 = a_{11} a_{22} - a_{12} a_{21}$: \begin{align*} \det(A - \lambda I) &= \lambda^2 - (a_{11} + a_{22})\lambda + (a_{11} a_{22} - a_{12}a_{21})\\ &= \lambda^2 - \Tr(A) \lambda + \det(A) \end{align*} Solving for the roots of this 2nd degree polynomial, we see that the eigenvalues are: $$\lambda = \frac{\Tr(A) \pm \sqrt{\Tr(A)^2 - 4 \det(A)}}{2}$$ Depending on the possible values of $\lambda_1$ and $\lambda_2$, we can classify the behavior of the fixed point $x_{eq} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$; a code sketch of this classification follows the table.

| Class Name | Conditions | Intuition |
| --- | --- | --- |
| Stable Spiral | $\Tr(A) < 0$ and $\Tr(A)^2 - 4 \det(A) < 0$ | Both eigenvalues have negative real components and non-zero imaginary components. The system will flow towards the fixed point, with rotation caused by the imaginary components. |
| Unstable Spiral | $\Tr(A) > 0$ and $\Tr(A)^2 - 4 \det(A) < 0$ | Both eigenvalues have positive real components and non-zero imaginary components. The system will flow away from the fixed point, with rotation caused by the imaginary components. |
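The table's conditions translate directly into code. This sketch classifies the origin from $\Tr(A)$ and $\det(A)$; the node/saddle/center labels beyond the two rows above follow the standard trace-determinant classification, and boundary cases (e.g. $\det(A) = 0$ or a repeated eigenvalue) are deliberately glossed over.

```python
import numpy as np

def classify_fixed_point(A):
    """Classify the origin of x' = Ax via trace and determinant (sketch)."""
    tr, det = np.trace(A), np.linalg.det(A)
    disc = tr**2 - 4 * det
    if det < 0:
        return "saddle"
    if disc < 0:
        if tr < 0:
            return "stable spiral"
        return "unstable spiral" if tr > 0 else "center"
    return "stable node" if tr < 0 else "unstable node"

print(classify_fixed_point(np.array([[-1.0, -2.0], [2.0, -1.0]])))  # stable spiral
print(classify_fixed_point(np.array([[1.0, -2.0], [2.0, 1.0]])))    # unstable spiral
```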

Lyapunov Analysis of Linear, Time-Invariant Systems

Consider a linear, time-invariant system $\dot{x} = A x $ and a candidate Lyapunov function involving a time-invariant matrix $P$. $$V(x) = x^T P x $$ Because $P$ is constant with respect to time, the function's time derivative is: $$ \dot{V}(x) = \dot{x}^T P x + x^T P \dot{x} = x^T (A^T P + P A ) x $$ For a linear system, the origin is always a fixed point (and the only one when $A$ is invertible). We would like to use Lyapunov analysis to determine whether the origin is stable or unstable. If we choose $P$ to be a symmetric positive definite matrix, we know that the Lyapunov function will be positive definite. If we can also choose $P$ such that $-Q \defeq A^T P + P A$ is negative definite, then by Lyapunov's direct method, we can conclude that the origin is strictly stable (i.e. all eigenvalues' real components are $<0$). This leads to the following theorem.

Theorem: A LTI system $\dot{x} = A x$ is strictly stable (all of $A$'s eigenvalues' real components $<0$ or equivalently, $A$ is Hurwitz) if and only if for any symmetric positive definite matrix $Q$, the unique solution $P$ to $-Q = A^T P + PA$ is also symmetric positive definite.

Proof: First, suppose $A$ is Hurwitz and choose any symmetric positive definite $Q$. Because $A$ is Hurwitz, $\lim_{t \to \infty} \exp(A t) = 0$. Using this property and the Fundamental Theorem of Calculus, we can write $-Q$ as: $$-Q = \int_0^{\infty} \frac{d}{du} \left[ \exp(A^T u) Q \exp(Au) \right] du $$ Taking the derivative inside the integral and applying the product rule, we see: $$-Q = A^T \int_0^{\infty} \exp(A^T u) Q \exp(Au) du + \int_0^{\infty} \exp(A^T u) Q \exp(Au) du \, A$$ Define $$P \defeq \int_0^{\infty} \exp(A^T u) Q \exp(Au) du$$ Note that $P$ is symmetric positive definite because $Q$ is (and $\exp(Au)$ is nonsingular). Then $$ -Q = A^T P + P A$$

The other direction is also straightforward. Suppose that for a symmetric positive definite $Q$, the solution $P$ to $-Q = A^T P + PA$ is symmetric positive definite. Define $V = x^T P x \Rightarrow \dot{V} = -x^T Q x$, which is negative definite. By Lyapunov's Direct Method, we conclude that the origin is strictly stable.
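This theorem doubles as a computational stability test: pick $Q = I$, solve the Lyapunov equation, and check whether $P$ is positive definite. A sketch using SciPy (the matrix $A$ is an assumed example): `scipy.linalg.solve_continuous_lyapunov(a, q)` solves $a X + X a^T = q$, so passing $a = A^T$ and $q = -Q$ yields our equation $A^T P + P A = -Q$.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # assumed example; eigenvalues -1, -2
Q = np.eye(2)

# scipy solves a @ X + X @ a^T = q, so a = A^T, q = -Q gives A^T P + P A = -Q
P = solve_continuous_lyapunov(A.T, -Q)

print("P =\n", P)
print("P positive definite:",
      bool(np.all(np.linalg.eigvalsh(P) > 0)))  # True iff A is Hurwitz
```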

Lyapunov Analysis of Linear, Time-Varying Systems

For a linear, time-invariant system $\dot{x} = Ax$, the eigenvalues of $A$ tell us a tremendous amount. Can we draw similar insights from the eigenvalues of a time-varying matrix $A(t)$? The answer is trickier. Consider the matrix $A(t) = \begin{bmatrix} -1 & e^{2t} \\ 0 & -1 \end{bmatrix}$, which at every instant has eigenvalue $-1$ with multiplicity $2$. Solving component-wise, we see that while $x_2$ converges, $x_1$ diverges: \begin{align*} \dot{x}_2 = -x_2 &\Rightarrow x_2(t) = x_2(0) e^{-t}\\ \dot{x}_1 = -x_1 + e^{2t} x_2(0) e^{-t} = -x_1 + x_2(0) e^{t} &\Rightarrow x_1(t) = x_1(0) e^{-t} + \frac{x_2(0)}{2} \big(e^{t} - e^{-t}\big) \end{align*}

Why does such behavior emerge? TODO

However, all is not lost. As long as the symmetric part of $A(t)$ is uniformly negative definite, we can ensure the origin is exponentially stable. Suppose all eigenvalues of $\frac{1}{2}(A(t)^T + A(t))$ are at most $-\lambda$ for some positive $\lambda$; then $x \rightarrow 0$ with rate $\lambda$. To show this, pick $V = x^T x$: \begin{align*} V &\defeq x^T x\\ \dot{V} &= x^T (A(t)^T + A(t)) x\\ &\leq -2 \lambda x^T x\\ &= -2 \lambda V \end{align*} Since $V \geq 0$ and $V(t) \leq V(0) e^{-2 \lambda t}$, we see that the system converges exponentially quickly to the origin.
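A short simulation confirms both halves of this section (a sketch; the tolerances are arbitrary): the frozen eigenvalues of the earlier $A(t)$ are both $-1$, yet the state diverges, and consistently, the symmetric part of $A(t)$ fails the negative-definiteness criterion for large $t$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    A = np.array([[-1.0, np.exp(2 * t)], [0.0, -1.0]])  # frozen eigenvalues: -1, -1
    return A @ x

sol = solve_ivp(f, (0.0, 5.0), [1.0, 1.0], rtol=1e-9)
print("x(5) =", sol.y[:, -1])  # x1 diverges despite both eigenvalues being -1

# The symmetric part (1/2)(A + A^T) has eigenvalues -1 +/- (1/2) e^{2t}, which
# are not uniformly below some -lambda < 0, so the exponential-stability
# criterion above does not apply to this system.
```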

Linear Control

Trajectory Tracking

The following sections concern trajectory tracking, i.e. controlling a system to ensure it follows a pre-specified trajectory, denoted $x_d$. When tracking and controlling a system, there can be many unknown quantities, such as the dynamics, the parameters, and the accuracy of measurement. Much like Bayesian statisticians distinguish between unmodeled (epistemic) uncertainty and modeled (aleatoric) uncertainty, control theorists make an analogous distinction, using the terms unstructured uncertainty (or unmodeled dynamics) for the first and structured uncertainty (or parametric uncertainty) for the second.

Robust Control

Consider a potentially non-linear, potentially non-autonomous system that relates the $n$-th time derivative of the state $x$ to the $0$th through $(n-1)$th derivatives, along with a controller $u(x,t)$ that can influence the system. $$x^{(n)} = f(x, \dot{x}, \ddot{x}, ..., x^{(n-1)}, t) + b(x, \dot{x}, \ddot{x}, ..., x^{(n-1)} , t) u(x, t) $$ For brevity, we define the state vector $\mathbf{x} = \{x, \dot{x}, \ddot{x}, ..., x^{(n-1)} \}$. Our system then becomes $$x^{(n)} = f(\mathbf{x}, t) + b(\mathbf{x}, t) u $$ The functions $f, b$ might be complicated, non-linear monstrosities involving parameters that we are unaware of. We would like to answer whether it is possible to track a desired trajectory $x_d$ even in the face of these unknowns. The answer is yes, with the insight that if we can bound the possible ranges of $f$ and $b$, then we can choose $u$ to overpower both and ensure that $x \rightarrow x_d$.
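As a cartoon of this "overpower the bound" idea (everything here, from the first-order plant to the gains, is an illustrative assumption in the spirit of sliding-mode control, not the notes' derivation): if $\dot{x} = f(x, t) + u$ with $|f| \leq F$, then the switching control $u = \dot{x}_d - k \, \sign(x - x_d)$ with $k > F$ drives the tracking error $e = x - x_d$ to zero, since $\frac{d}{dt} \frac{1}{2} e^2 = e f - k |e| \leq (F - k)|e| < 0$.

```python
import numpy as np

# Everything here is an illustrative assumption: a first-order plant
# x' = f(x, t) + u with f unknown to the controller, known bound |f| <= F = 2.
f = lambda x, t: 2.0 * np.sin(x + t)   # "unknown" dynamics
F = 2.0
k = 3.0                                # switching gain, chosen so k > F

x_d = lambda t: np.sin(t)              # desired trajectory
x_d_dot = lambda t: np.cos(t)

dt, T, x = 1e-3, 10.0, 2.0             # crude Euler simulation
for step in range(int(T / dt)):
    t = step * dt
    e = x - x_d(t)                           # tracking error
    u = x_d_dot(t) - k * np.sign(e)          # overpower the bounded unknown f
    x += dt * (f(x, t) + u)

print("final |x - x_d|:", abs(x - x_d(T)))   # small (chattering-level) error
```

The discontinuous $\sign$ term is what lets a bounded gain dominate an unknown $f$; the residual error scales with the integration step because the control chatters around $e = 0$.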

Adaptive Control