|
38 | 38 | # *Most control engineers aren't as stable as their systems* |
39 | 39 | # |
40 | 40 | # If you know epsilon-delta proofs, I'm sorry for your loss. There is an epsilon-delta definition of stability in the slides, have fun. For normal people, stability means that a certain system trajectory
41 | | -# stays bounded under bounded disturbances. Even simpler: a stable system always returns to certain points. The stability of certain equilibria is easy to assess, since the linearisation is a good |
| 41 | +# stays bounded under bounded inputs. Even simpler: a stable system always returns to certain points. The stability of certain equilibria is easy to assess, since the linearisation is a good |
42 | 42 | # approximation locally at the equilibrium point; the stability of the linearised system then indicates the stability of the equilibrium point too.
43 | 43 | # |
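As a sketch of this idea (the damped pendulum below is a toy example of my own choosing, not necessarily the lecture's): linearise the dynamics at each equilibrium and inspect the eigenvalues of the Jacobian.

```python
import numpy as np

# Toy example (my assumption, not from the slides): a damped pendulum
#   x1' = x2,  x2' = -sin(x1) - c*x2,
# with equilibria at (x1, x2) = (0, 0) (hanging) and (pi, 0) (upright).
def linearise(x1_eq, c=0.1):
    # Jacobian of the pendulum dynamics evaluated at (x1_eq, 0)
    return np.array([[0.0, 1.0],
                     [-np.cos(x1_eq), -c]])

A_down = linearise(0.0)     # hanging equilibrium
A_up = linearise(np.pi)     # upright equilibrium

print(np.linalg.eigvals(A_down).real.max())  # negative: asymptotically stable
print(np.linalg.eigvals(A_up).real.max())    # positive: unstable
```

As expected, the hanging equilibrium is stable and the upright one is not, and we learned this from the linearisation alone.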
44 | 44 | # ### Stability of linear systems |
|
47 | 47 | # 1. The characteristic polynomial is equivalent to det($sI-A$) for complex $s$. |
48 | 48 | # 2. The roots of the characteristic polynomial are equivalent to the eigenvalues of $A$. |
49 | 49 | # |
50 | | -# That second point sounds familiar! We already saw stability related to the roots of the characteristic polynomial before. So, the eigenvalues, $\Lambda$, of $A$ reveal the stability of the system. |
| 50 | +# That second point sounds familiar! We already saw stability related to the roots of the characteristic polynomial before. So, the eigenvalues of $A$, $\Lambda$, reveal the stability of the system. |
51 | 51 | # Surprisingly though, there are three types of stability, not two: |
52 | 52 | # 1. Unstable $\leftarrow \exists\lambda\in\Lambda: \mathfrak{R}(\lambda)>0$.
53 | 53 | # 2. Neutrally stable $\leftarrow \mathfrak{R}(\lambda)\leq 0, \forall\lambda\in\Lambda$ with at most one eigenvalue at 0 or one conjugate pair with real part 0.
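A quick way to check these cases numerically (a sketch of my own; it looks only at the real parts of the eigenvalues and ignores the repeated-eigenvalue caveat for neutral stability):

```python
import numpy as np

def classify(A, tol=1e-9):
    # Classify stability from the real parts of the eigenvalues of A.
    # Simplification (my assumption): any eigenvalue on the imaginary axis
    # counts as "neutrally stable", without checking multiplicities.
    re = np.linalg.eigvals(A).real
    if (re > tol).any():
        return "unstable"
    if (np.abs(re) <= tol).any():
        return "neutrally stable"
    return "asymptotically stable"

print(classify(np.array([[1.0]])))                        # unstable
print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))      # harmonic oscillator: neutrally stable
print(classify(np.array([[-1.0, 0.0], [0.0, -2.0]])))     # asymptotically stable
```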
|
139 | 139 | # and $$ y = Cx + Du \rightarrow y = \underbrace{CT^{-1}}_{\tilde C}z + Du = \tilde C z + Du.$$ |
140 | 140 | # **Important: note that the input/output behaviour remains unchanged under state transformations. This is only a system-internal operation.** |
141 | 141 | # |
142 | | -# For $A$ with unique eigenvalues, the system is called diagonalisable, because taking the inverse transformation, $T^{-1}$, to be the horizontally stacked eigenvectors of $A$ results in a diagonal |
| 142 | +# For systems with distinct eigenvalues, the system is called diagonalisable, because taking the inverse transformation, $T^{-1}$, to be the horizontally stacked eigenvectors of $A$ results in a diagonal
143 | 143 | # $\tilde A$. Sometimes stuff is named nice and descriptive. |
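A quick numerical check of this (with a toy $A$ of my own choosing): `numpy` returns eigenvectors as the columns of a matrix, which is exactly the $T^{-1}$ described above.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # toy system matrix, eigenvalues -1 and -2
lam, V = np.linalg.eig(A)                  # columns of V are the eigenvectors of A

Tinv = V                     # T^{-1}: eigenvectors stacked as columns
T = np.linalg.inv(Tinv)
A_tilde = T @ A @ Tinv       # similarity transform for z = T x

print(np.round(A_tilde, 10))  # diagonal, with the eigenvalues on the diagonal
```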
144 | 144 |
|
145 | 145 | # %% [markdown] |
146 | 146 | # <div style="text-align:center;background-color:tomato;">End of lecture 4</div> |
147 | 147 |
|
148 | 148 | # %% [markdown] |
149 | 149 | # ## Reachability |
| 150 | +# Now there is a nice closed-form expression for system trajectories given any input! By convolving the input, we can express the state and output trajectories of any LTI system as
| 151 | +# $$ x(t) = e^{At}x_0 + \int_0^t e^{A(t-\tau)}Bu(\tau)d\tau$$ |
| 152 | +# and |
| 153 | +# $$ y(t) = Cx(t) + Du(t) = Ce^{At}x_0 + \int_0^t Ce^{A(t-\tau)}Bu(\tau)d\tau + Du(t).$$ |
| 154 | +# Here, the first term is the effect of the initial condition, the second term is the convolved effect of the input, and the third term is the direct feedthrough.
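This variation-of-constants formula can be sanity-checked numerically (the toy system and step input below are my own choices; for a step input and invertible $A$ the convolution integral also collapses to the closed form $A^{-1}(e^{At}-I)Bu$):

```python
import numpy as np
from scipy.linalg import expm

# Toy system (my assumption, not from the lecture), step input u(t) = 1:
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
t, u = 2.0, 1.0

# Approximate the convolution integral with the trapezoid rule over tau in [0, t]:
taus = np.linspace(0.0, t, 2001)
dtau = taus[1] - taus[0]
vals = [expm(A * (t - tau)) @ B * u for tau in taus]
integral = (sum(vals) - 0.5 * (vals[0] + vals[-1])) * dtau
x = expm(A * t) @ x0 + integral

# Closed form for a step input with invertible A:
x_closed = expm(A * t) @ x0 + np.linalg.inv(A) @ (expm(A * t) - np.eye(2)) @ B * u
print(np.allclose(x, x_closed, atol=1e-4))  # True
```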
| 155 | +# |
150 | 156 | # This state trajectory expression answers some interesting questions too: to which states can we steer the system? This is called the reachability of the system! Remember we have equilibria
151 | 157 | # $(\bar x, \bar u) \leftarrow A\bar x + B\bar u = 0$? If $A$ is invertible, this means that $\bar x = -A^{-1}B\bar u$. So, if $A^{-1}B$ has full row rank, we can attain any steady state we desire!
152 | 158 | # |
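A small numerical illustration (toy system of my own choosing) of the steady-state relation $\bar x = -A^{-1}B\bar u$:

```python
import numpy as np

# Toy system (my assumption): which steady state does a constant input hold?
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])

# Equilibria satisfy A x_bar + B u_bar = 0, so x_bar = -inv(A) @ B * u_bar.
u_bar = 4.0
x_bar = -np.linalg.inv(A) @ B * u_bar

print(x_bar.ravel())            # the steady state held by u_bar
print(A @ x_bar + B * u_bar)    # ~ [0, 0]: indeed an equilibrium
```

Note that with a single input, $-A^{-1}B$ is a single column, so the attainable steady states form only a line in state space; it is full (row) rank of $A^{-1}B$ that would let us hold any steady state.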
|
228 | 234 | for zeta, axIdx2 in zip(Zeta, range(len(Zeta))): |
229 | 235 | Pq = cm.ss(1/(s**2 + 2*zeta*omega0*s + omega0**2), dt=0) |
230 | 236 | response = cm.forced_response(Pq, T=T, U=np.ones_like(T)) |
231 | | - ax[axIdx1, axIdx2].plot(response.time, response.outputs) |
| 237 | + ax[axIdx1, axIdx2].plot(response.time, response.outputs, 'k') |
232 | 238 |
|
233 | 239 | [ax[0,p].set(title=rf"$\zeta={Zeta[p]}$") for p in range(len(Zeta))]
234 | 240 | [ax[p,0].set(ylabel=rf"$\omega_0={Omega0[p]}$") for p in range(len(Omega0))]
|
264 | 270 | response_dom_2d = cm.forced_response(P_dom_2d, T=T_dom, U=np.ones_like(T_dom)) |
265 | 271 |
|
266 | 272 | fig, ax = plt.subplots(1, 2) |
267 | | -ax[0].plot(P_dom.poles().real, P_dom.poles().imag, 'x', color='tab:blue', label="Poles") |
268 | | -ax[0].plot(P_dom_2d.poles().real, P_dom_2d.poles().imag, '+', color='tab:orange', label="Dominant poles") |
| 273 | +ax[0].plot(P_dom.poles().real, P_dom.poles().imag, 'kx', label="Poles") |
| 274 | +ax[0].plot(P_dom_2d.poles().real, P_dom_2d.poles().imag, 'k+', label="Dominant poles") |
269 | 275 | ax[0].set(title="Pole map", xlabel=r"Re($\lambda$)", ylabel=r"Im($\lambda$)")
270 | 276 | ax[0].legend() |
271 | 277 |
|
272 | 278 |
|
273 | | -l0 = ax[1].plot(response_dom.time, response_dom.outputs, color="tab:blue", label="Original sys.") |
274 | | -l1 = ax[1].twinx().plot(response_dom_2d.time, response_dom_2d.outputs, '--', color="tab:orange", label="2nd order sys.") |
| 279 | +l0 = ax[1].plot(response_dom.time, response_dom.outputs, 'k', label="Original sys.") |
| 280 | +l1 = ax[1].twinx().plot(response_dom_2d.time, response_dom_2d.outputs, 'k--', label="2nd order sys.") |
275 | 281 | ax[1].legend(handles = [l0[0], l1[0]]) |
276 | 282 | ax[1].set(title="Step response (ignore scaling)", xlabel="t/s") |
277 | 283 | display(fig) |