
Commit 8bc97ba

committed
Final touches
1 parent 2868a9a commit 8bc97ba

6 files changed

Lines changed: 94 additions & 2139 deletions

BlockA/SysReg_crashcourse_TimeDomain_BlockA.py

Lines changed: 0 additions & 983 deletions
This file was deleted.

BlockA/SysReg_crashcourse_TimeDomain_BlockA_01_Basics.py

Lines changed: 17 additions & 8 deletions
@@ -13,11 +13,13 @@
 # ## Systems and differential equations
 #         *The basics of all things control*
 #
-# *Systems are all and all is a system.* So you can decribe dynamic processses with physics, then you get equations with time derivatives. So for Linear-Time-Independent (LTI) systems, the differential equation with output $y(t)$ and input $u(t)$ is
+# *Systems are all and all is a system.* So you can describe dynamic processes with physics; then you get equations with time derivatives. For Linear Time-Invariant (LTI) systems, the differential
+# equation with output $y(t)$ and input $u(t)$ is
 # $$ \frac{d^n}{dt^n}y + a_1\frac{d^{n-1}}{dt^{n-1}}y + \cdots + a_n y = \frac{d^m}{dt^m}u + b_1\frac{d^{m-1}}{dt^{m-1}}u + \cdots + b_m u.$$
 # Here, $n$ is called the order of the system.
 #
-# We want to be able to solve Initial Value Problems (IVPs), because that's cool and it shows us what the system will do from some initial state. This solution is built in two steps, the homogenous solution and satisfying the initial conditions.
+# We want to be able to solve Initial Value Problems (IVPs), because that's cool and it shows us what the system will do from some initial state. This solution is built in two steps: the homogeneous
+# solution and satisfying the initial conditions.
 #
 # The homogeneous solution is obtained by equating the left-hand side to zero. For this, we'll steal a little ahead from the Laplace transform, where
 # $$ \frac{d^n}{dt^n} \overset{\mathcal{L}}{\rightarrow} s^n. $$
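The characteristic-polynomial step can be checked numerically; a minimal sketch assuming NumPy, with made-up third-order coefficients for illustration:

```python
import numpy as np

# Hypothetical third-order system: y''' + 3 y'' + 3 y' + y = 0.
# Replacing d^k/dt^k by s^k gives the characteristic polynomial
# s^3 + 3 s^2 + 3 s + 1 = (s + 1)^3.
coeffs = [1, 3, 3, 1]      # [1, a_1, a_2, a_3]
lam = np.roots(coeffs)     # the lambda_k of the homogeneous solution

# y(t) = sum_k c_k exp(lambda_k t); here all roots cluster at -1,
# so every homogeneous trajectory decays.
```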
@@ -31,12 +33,15 @@
 # The solutions to these types of ODEs are sums of exponentials of the form $c_k e^{\lambda_k t}$, where the $c_k$ are determined through the initial conditions. Therefore, the complete solution is
 # $$ y(t) = \sum_{k=1}^n c_k e^{\lambda_k t} .$$
 #
-# If all $\mathfrak{R}(\lambda_k) < 0$ then $y(t)$ goes to 0 and the system is stable. The Routh-Hurwitz criterion gives the stability requirements up to the third degree. Stability is guaranteed for systems of order
+# If all $\mathfrak{Re}(\lambda_k) < 0$ then $y(t)$ goes to 0 and the system is stable. The Routh-Hurwitz criterion gives the stability requirements up to the third degree. Stability is guaranteed for
+# systems of order
 # 1. if $a_1 > 0$.
 # 2. if $a_1, a_2 > 0$.
 # 3. if $a_1, a_2, a_3 > 0$ and $a_1 a_2 > a_3$.
 #
-# Real-valued $\lambda_k$ yield an exponential trajectory $e^{\lambda_k t}$. Complex-values come in conjugate pairs so they yield an exponential trajectory multiplied by a cosine. Because $\lambda_k, \lambda_{k+1} = \epsilon \pm j\omega$, the trajectory of this pair becomes $e^{(\epsilon \pm j\omega)t} = e^{\epsilon t} (e^{j\omega t} + e^{-j\omega t}) = e^{\epsilon t} (2\cos(\omega t))$. Play around with the code block here to get a feel for it!
+# Real-valued $\lambda_k$ yield an exponential trajectory $e^{\lambda_k t}$. Complex eigenvalues come in conjugate pairs, so they yield an exponential trajectory multiplied by a cosine. Because $\lambda_k,
+# \lambda_{k+1} = \epsilon \pm j\omega$, the trajectory of this pair becomes $e^{(\epsilon + j\omega)t} + e^{(\epsilon - j\omega)t} = e^{\epsilon t} (e^{j\omega t} + e^{-j\omega t}) = 2 e^{\epsilon t} \cos(\omega t)$. Play around
+# with the code block here to get a feel for it!
 #

 # %%
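The conjugate-pair identity above can also be verified stand-alone; a small sketch assuming NumPy, with arbitrary illustrative values for $\epsilon$ and $\omega$:

```python
import numpy as np

# Arbitrary illustrative pair: lambda = eps +/- j*omega.
eps, omega = -0.5, 2.0 * np.pi
t = np.linspace(0.0, 5.0, 501)

# Sum of the two complex exponentials of the conjugate pair...
pair_sum = np.exp((eps + 1j * omega) * t) + np.exp((eps - 1j * omega) * t)

# ...equals the real, decaying oscillation 2 e^{eps t} cos(omega t):
cosine_form = 2.0 * np.exp(eps * t) * np.cos(omega * t)
```

The imaginary parts cancel exactly, which is why a conjugate pair always produces a real trajectory.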
@@ -67,10 +72,13 @@
 # ## State space representation
 # &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*A blessing from above*
 #
-# Cool we know systems now, but it's a bit of an ugly system representation to be honest. Luckily we live in a universe where the state space representation exists and we can recast any system as a vector first order ODE and an output equation. This representation is of the form $$ \dot x = f(x, u)$$ $$ y = h(x, u),$$
-# where $x$ is the state of the system, $u$ the input, and $y$ the output. The state dynamics are described with $f: \mathbb{R}^n\times\mathbb{R}^p\rightarrow\mathbb{R}^n$ and the output measurements with $h: \mathbb{R}^n\times\mathbb{R}^p\rightarrow\mathbb{R}^q$.
+# Cool, we know systems now, but it's a bit of an ugly system representation to be honest. Luckily we live in a universe where the state space representation exists and we can recast any system as a
+# first-order vector ODE and an output equation. This representation is of the form $$ \dot x = f(x, u)$$ $$ y = h(x, u),$$
+# where $x$ is the state of the system, $u$ the input, and $y$ the output. The state dynamics are described with $f: \mathbb{R}^n\times\mathbb{R}^p\rightarrow\mathbb{R}^n$ and the output measurements with
+# $h: \mathbb{R}^n\times\mathbb{R}^p\rightarrow\mathbb{R}^q$.
 #
-# What black magic do we perform to get these first order ODEs? Well, suppose you have a second order ODE in $v$, $\ddot v = \dot v + v$, then this is equivalent to the first order ODE $$ \begin{bmatrix}\dot v\\\ddot v\end{bmatrix} = \begin{bmatrix}0&1\\1&1\end{bmatrix}\begin{bmatrix}\dot v\\ v\end{bmatrix}.$$
+# What black magic do we perform to get these first-order ODEs? Well, suppose you have a second-order ODE in $v$, $\ddot v = \dot v + v$; then this is equivalent to the first-order ODE $$ \begin{bmatrix}
+# \dot v\\\ddot v\end{bmatrix} = \begin{bmatrix}0&1\\1&1\end{bmatrix}\begin{bmatrix} v\\ \dot v\end{bmatrix}.$$
 #
 # Last but not least of the amazing aspects of the state space representation: there are many nice numerical integrators to simulate them. Think forward Euler or Runge-Kutta.
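The second-order example, rewritten with state $x = (v, \dot v)$, can be simulated with the forward Euler scheme just mentioned; a minimal sketch assuming NumPy (step size and horizon are arbitrary choices):

```python
import numpy as np

# State x = (v, v_dot) turns v_ddot = v_dot + v into x_dot = A x.
A = np.array([[0.0, 1.0],
              [1.0, 1.0]])

def euler(x0, dt=1e-3, steps=1000):
    """Forward Euler: x[k+1] = x[k] + dt * (A @ x[k])."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (A @ x)
    return x

x_T = euler([1.0, 0.0])  # simulate v(0) = 1, v_dot(0) = 0 up to t = 1 s
```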

@@ -88,7 +96,8 @@
 # ## Equilibrium points
 # &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*Forever and unchanging*
 #
-# Often, systems will have equilibria, meaning there are states where system will remain over time. These state-input point pairs are denoted as $(x_e, u_e)$ or $(\bar x, \bar u)$. I'll use the latter since this is more traditional in control and we have to appease our control elders (JW). So equilibria remain unchanging, so the derivative of the state is zero. In mathematicians' language
+# Often, systems will have equilibria, meaning there are states where the system will remain over time. These state-input pairs are denoted as $(x_e, u_e)$ or $(\bar x, \bar u)$. I'll use the latter
+# since this is more traditional in control and we have to appease our control elders. Equilibria remain unchanging, so the derivative of the state is zero. In mathematicians' language
 # $$ \dot{\bar x} = f(\bar x, \bar u) = 0. $$
 # Finding the equilibrium states as a function of the equilibrium inputs is as simple as solving this equation.
 #
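As a sketch of solving $f(\bar x, \bar u) = 0$, take a hypothetical pendulum-like system (illustrative only, not from the course files); assuming NumPy:

```python
import numpy as np

# Hypothetical pendulum-like dynamics:
#   x1_dot = x2
#   x2_dot = -sin(x1) - 0.1 * x2 + u
def f(x, u):
    x1, x2 = x
    return np.array([x2, -np.sin(x1) - 0.1 * x2 + u])

# f(x_bar, u_bar) = 0 forces x2_bar = 0 and sin(x1_bar) = u_bar,
# so for the constant input u_bar = 0.5 one equilibrium is:
u_bar = 0.5
x_bar = np.array([np.arcsin(u_bar), 0.0])
```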

BlockA/SysReg_crashcourse_TimeDomain_BlockA_02_StateSpace.py

Lines changed: 14 additions & 8 deletions
@@ -38,7 +38,7 @@
 # &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*Most control engineers aren't as stable as their systems*
 #
 # If you know epsilon-delta proofs, I'm sorry for your loss. There is an epsilon-delta definition of stability in the slides, have fun. For normal people, stability means that a certain system trajectory
-# stays bounded under bounded disturbances. Even simpler: a stable system always returns to certain points. The stability of certain equilibria is easy to assess, since the linearisation is a good
+# stays bounded under bounded inputs. Even simpler: a stable system always returns to certain points. The stability of certain equilibria is easy to assess: since the linearisation is a good
 # approximation locally at the equilibrium point, the stability of the linearised system indicates the stability of the equilibrium point too.
 #
 # ### Stability of linear systems
@@ -47,7 +47,7 @@
 # 1. The characteristic polynomial is equivalent to $\det(sI-A)$ for complex $s$.
 # 2. The roots of the characteristic polynomial are equivalent to the eigenvalues of $A$.
 #
-# That second point sounds familiar! We already saw stability related to the roots of the characteristic polynomial before. So, the eigenvalues, $\Lambda$, of $A$ reveal the stability of the system.
+# That second point sounds familiar! We already saw stability related to the roots of the characteristic polynomial before. So, the eigenvalues of $A$, $\Lambda$, reveal the stability of the system.
 # Surprisingly though, there are three types of stability, not two:
 # 1. Unstable $\leftarrow \exists\,\lambda\in\Lambda: \mathfrak{Re}(\lambda)>0$.
 # 2. Neutrally stable $\leftarrow \mathfrak{Re}(\lambda)\leq 0, \forall\lambda\in\Lambda$, with at most one eigenvalue at 0 or one conjugate pair with real part 0.
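The eigenvalue test can be sketched as a small NumPy check (simplified: it ignores the multiplicity caveat in item 2, and the name for the third, strictly stable type is assumed here):

```python
import numpy as np

def stability_type(A, tol=1e-9):
    """Classify an LTI system x_dot = A x by the real parts of eig(A).
    Simplified sketch: repeated eigenvalues on the imaginary axis would
    need a closer look at the Jordan structure."""
    lam = np.linalg.eigvals(A)
    if np.any(lam.real > tol):
        return "unstable"
    if np.all(lam.real < -tol):
        return "asymptotically stable"
    return "neutrally stable"

osc = np.array([[0.0, 1.0], [-1.0, 0.0]])     # pure oscillator, eigenvalues +/- j
damped = np.array([[-1.0, 0.0], [0.0, -2.0]]) # both eigenvalues in the left half-plane
saddle = np.array([[1.0, 0.0], [0.0, -1.0]])  # one eigenvalue in the right half-plane
```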
@@ -139,14 +139,20 @@
 # and $$ y = Cx + Du \rightarrow y = \underbrace{CT^{-1}}_{\tilde C}z + Du = \tilde C z + Du.$$
 # **Important: note that the input/output behaviour remains unchanged under state transformations. This is only a system-internal operation.**
 #
-# For $A$ with unique eigenvalues, the system is called diagonalisable, because taking the inverse transformation, $T^{-1}$, to be the horizontally stacked eigenvectors of $A$ results in a diagonal
+# For systems with unique eigenvalues, the system is called diagonalisable, because taking the inverse transformation, $T^{-1}$, to be the horizontally stacked eigenvectors of $A$ results in a diagonal
 # $\tilde A$. Sometimes stuff is named nicely and descriptively.

 # %% [markdown]
 # <div style="text-align:center;background-color:tomato;">End of lecture 4</div>

 # %% [markdown]
 # ## Reachability
+# Now there is a nice closed-form expression of system trajectories given any input! By convolution of the input, we can express the state and output trajectory of any LTI system as
+# $$ x(t) = e^{At}x_0 + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau$$
+# and
+# $$ y(t) = Cx(t) + Du(t) = Ce^{At}x_0 + \int_0^t Ce^{A(t-\tau)}Bu(\tau)\,d\tau + Du(t).$$
+# Here, the first term is the effect of the initial condition, the second term is the effect of the input, and the third term is the direct feedthrough.
+#
 # This output trajectory expression answers some interesting questions too: what states are we able to control the system to? This is called the reachability of the system! Remember we have equilibria
 # $(\bar x, \bar u) \leftarrow A\bar x + B\bar u = 0$? Well then if $A$ is invertible this means that $\bar x = -A^{-1}B\bar u$. So, if $A^{-1}B$ is full rank, we can attain any steady state we'd desire!
 #
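The $\bar x = -A^{-1}B\bar u$ steady state can be cross-checked against the convolution solution numerically; a sketch with a hypothetical stable $(A, B)$ pair, assuming NumPy:

```python
import numpy as np

# Hypothetical stable pair (A, B); the eigenvalues of A are -1 and -2.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
u_bar = 1.0

# Predicted steady state from x_bar = -A^{-1} B u_bar:
x_bar = -np.linalg.inv(A) @ B * u_bar

# Cross-check: integrate x_dot = A x + B u with forward Euler; the
# e^{At} x0 term of the convolution solution dies out, so x -> x_bar.
x = np.zeros((2, 1))
dt = 1e-3
for _ in range(20000):          # roughly 20 seconds of simulated time
    x = x + dt * (A @ x + B * u_bar)
```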
@@ -228,7 +234,7 @@
     for zeta, axIdx2 in zip(Zeta, range(len(Zeta))):
         Pq = cm.ss(1/(s**2 + 2*zeta*omega0*s + omega0**2), dt=0)
         response = cm.forced_response(Pq, T=T, U=np.ones_like(T))
-        ax[axIdx1, axIdx2].plot(response.time, response.outputs)
+        ax[axIdx1, axIdx2].plot(response.time, response.outputs, 'k')

 [ax[0,p].set(title=f"$\zeta={Zeta[p]}$") for p in range(len(Zeta))]
 [ax[p,0].set(ylabel=f"$\omega_0={Omega0[p]}$") for p in range(len(Omega0))]
@@ -264,14 +270,14 @@
 response_dom_2d = cm.forced_response(P_dom_2d, T=T_dom, U=np.ones_like(T_dom))

 fig, ax = plt.subplots(1, 2)
-ax[0].plot(P_dom.poles().real, P_dom.poles().imag, 'x', color='tab:blue', label="Poles")
-ax[0].plot(P_dom_2d.poles().real, P_dom_2d.poles().imag, '+', color='tab:orange', label="Dominant poles")
+ax[0].plot(P_dom.poles().real, P_dom.poles().imag, 'kx', label="Poles")
+ax[0].plot(P_dom_2d.poles().real, P_dom_2d.poles().imag, 'k+', label="Dominant poles")
 ax[0].set(title="Pole map", xlabel="Re($\lambda$)", ylabel="Im($\lambda$)")
 ax[0].legend()

-l0 = ax[1].plot(response_dom.time, response_dom.outputs, color="tab:blue", label="Original sys.")
-l1 = ax[1].twinx().plot(response_dom_2d.time, response_dom_2d.outputs, '--', color="tab:orange", label="2nd order sys.")
+l0 = ax[1].plot(response_dom.time, response_dom.outputs, 'k', label="Original sys.")
+l1 = ax[1].twinx().plot(response_dom_2d.time, response_dom_2d.outputs, 'k--', label="2nd order sys.")
 ax[1].legend(handles=[l0[0], l1[0]])
 ax[1].set(title="Step response (ignore scaling)", xlabel="t/s")
 display(fig)
