Abstract

The control of complex systems is often challenging due to high-dimensional nonlinear models, unmodeled phenomena, and parameter uncertainty. The increasing ubiquity of sensors measuring such systems and the growth of computational resources have led to an interest in purely data-driven control methods, particularly those using the Koopman operator. In this paper, we elucidate the construction of a linear predictor based on a sequence of time realizations of observables, drawn from a data archive of different trajectories, combined with subspace identification methods for linear systems. This approach is free of any predefined set of basis functions and instead depends on the time realizations of these basis functions. The prediction and control are demonstrated with examples. The basis functions can be constructed using time-delayed coordinates of the outputs, enabling the application to purely data-driven systems. The paper thus shows the link between Koopman operator-based control methods and classical subspace identification methods. The approach in this paper can be extended to adaptive online learning and control.

1 Introduction

Controlling complex autonomous systems, such as swimming robots, off-road vehicles, wind turbines, transportation networks, and power grids, poses significant challenges due to high-dimensional nonlinear models, unmodeled phenomena, and parameter uncertainties. The increasing ubiquity of sensors has led to an abundance of data from such systems, which, combined with improved computational resources, has led to an interest in purely data-driven methods for modeling, analysis, and control of such systems [1–3]. These data-driven approaches have increasingly been based on linear operators associated with nonlinear dynamical systems. The Perron–Frobenius operator propagates densities in the state space of a dynamical system, and the Koopman operator propagates observables along the flow in the state space of a dynamical system, see for example Ref. [4]. By considering the evolution of functions of the states rather than the states themselves, these operator-based methods transform a nonlinear system into an infinite-dimensional linear system, which can be approximated by a finite-dimensional linear system, as in Refs. [5,6]. Additionally, many recent efforts have extended these results from autonomous dynamical systems to input–output systems, where the operator approximation must account for the effect of a control input to predict the output variables. For example, extended dynamic mode decomposition (EDMD) coupled with model predictive control (MPC), as in Ref. [7], or with LPV models, as in Ref. [8], and variations thereof have become popular. More recently, ideas from subspace identification (SSID) for linear systems, such as those in Refs. [9,10], have been used in Refs. [11,12] to develop a method based on identifying the significant modes in the evolution of an observable along a single trajectory and then using Gaussian processes (GP) as a mapping to this identified lifted state. The current paper builds on these ideas of subspace identification, clarifies and extends them to include observation data from multiple trajectories, and further shows that basis functions for the observables can be defined using only data from output variables.

The main contributions of this paper are as follows. We demonstrate the relation between a Koopman operator associated with a nonlinear system and subspace identification methods for linear systems. We identify the relation between the approximate Koopman operator calculated using techniques like EDMD and the one calculated using subspace methods. In the approach using subspace methods, the approximate Koopman operator propagates time realizations of observables. Output data from a single trajectory do not (usually) sample the state-space dynamics adequately, since it is difficult to ensure that a nonlinear system is sufficiently excited, an assumption that underlies subspace identification with a single data record, see for example Ref. [13]. We show that using multiple data records improves the accuracy of prediction. Multiple data records have been shown to be useful even in subspace identification for linear systems [14], since a sufficiently long single data record may not be available in practice. More importantly, the observables can be projected onto any set of basis functions, such as Gaussian processes. Usually, the basis functions are functions of the state; in purely data-driven systems, such functions cannot be constructed, since explicit knowledge of the states is not available, nor are different observables realized or sampled. This paper shows that the basis functions can be constructed using time-delayed coordinates of the outputs, enabling the application to purely data-driven systems.

The rest of the paper is organized as follows. In Sec. 2, we describe the approximations of the different “versions” of the Koopman operator and, in particular, the approximation that propagates time realizations of the observables. In Sec. 3, we formulate the subspace identification problem for the linearized dynamics using multiple data records. In Sec. 4, we review GPs as basis functions (of the state variable) for observables, and in Sec. 4.2, we reformulate the GPs as basis functions of time-delayed observables. In Sec. 5, the prediction is obtained using GP-SSID, and numerical results are demonstrated in Sec. 6. Section 7 concludes the paper.

2 Formulation for Koopman Operator Based Control

Consider the autonomous dynamical system defined on a manifold M,

dx/dt = f(x),  x ∈ M    (1)

with f : M → T_xM, and suppose the bounded function g : M → R is an observable. The flow map of the dynamical system, F^t : M → M, propagates the state as x(t + t_0) = F^t(x(t_0)). The Koopman operator K^t, acting on the space of observables, propagates the observable as [K^t g](x(t_0)) = g(F^t(x(t_0))). For the discrete-time dynamical system

x_{t+1} = F(x_t)    (2)

where x_t ∈ M and F : M → M, the action of the Koopman operator on an observable function g : M → R is given as follows:

[Kg](x_t) = g(F(x_t)) = g(x_{t+1})    (3)
In practice, the action of the infinite-dimensional Koopman operator is approximated by its action on a finite-dimensional space called the lifted space. This has frequently been done by projection onto a finite set of basis functions called the dictionary [5,7], which can either be defined a priori or learned as part of the system identification procedure, or through the use of neural networks as in Refs. [15,16]. Here, we take a different approach and treat the sequence of measurements y_t ∈ R^p as evaluations of an observable function of the state. This observable function is treated as a linear combination of a finite set of basis functions Ψ(x_t), which would be propagated by an approximation K_Ψ of the Koopman operator,

g(x_t) = c^T Ψ(x_t),  Ψ(x_{t+1}) ≈ K_Ψ Ψ(x_t)    (4)
as shown in Fig. 1. Here, c is a vector of projection coefficients for the projection of g onto the function space spanned by the dictionary functions Ψ. In order to obtain an appropriate set of dictionary functions Ψ, we first obtain a lifted state z_t ∈ R^r and a corresponding operator K, which propagates it forward in time, by applying ideas of subspace identification to temporal sequences of data without first approximating the operator K_Ψ or specifying the basis functions Ψ. These lifted states z_t are then regarded as evaluations, at a particular x_t, of a realization of a random variable in a function space. This notion of a random variable in a function space allows us to apply Gaussian process regression (see Ref. [17]), which can be viewed as obtaining an a posteriori distribution in a function space, conditioned on the given data. In other words, we model the lifted states as a Gaussian process

z_i(x) ∼ GP(m_i(x), k_i(x, x′)),  i = 1, …, r    (5)
Further, if we only have access to an output measurement yt, which can be regarded as a function of the state xt, the same lifting procedure can be applied to yt, as this can be interpreted as indirectly lifting the state xt. The measurements yt can then be retrieved as a projection by the operator C from the lifted space of realized observables. This matrix C is also obtained through the subspace identification procedure, but in the Koopman interpretation, its rows contain the projection coefficients c from Eq. (4) corresponding to the particular observable functions associated with the output measurements yt. In this interpretation, a Koopman operator can be constructed that propagates the observables as functions of time without first specifying or learning a dictionary of basis functions.
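As a concrete illustration of this lifting viewpoint, consider a toy discrete-time system that admits an exact finite-dimensional lift. The example below is an illustrative addition (not from the paper); the system, dictionary, and coefficients are all assumed for demonstration.

```python
import numpy as np

lam, mu = 0.9, 0.5

def F(x):
    """Nonlinear map x -> F(x); the dictionary below spans a Koopman-invariant subspace."""
    return np.array([lam * x[0], mu * x[1] + (lam**2 - mu) * x[0]**2])

def Psi(x):
    """Dictionary Psi(x) = [x1, x2, x1^2]."""
    return np.array([x[0], x[1], x[0]**2])

# On this dictionary the restriction K_Psi of the Koopman operator is exact and linear
K_psi = np.array([[lam, 0.0, 0.0],
                  [0.0, mu,  lam**2 - mu],
                  [0.0, 0.0, lam**2]])

x = np.array([1.5, -0.3])
print(np.allclose(Psi(F(x)), K_psi @ Psi(x)))  # True: the diagram of Fig. 1 commutes
```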
Fig. 1: Commutative diagram showing the propagation of the observable g(x) by the operator K, the propagation of the projected observables Ψ(x) by the projected operator K_Ψ, and their time realization z_t by the operator K
Fig. 2: (a) Training data set for the model with multiple data records and (b) training data for the single trajectory case. Points are the training data for the GP and the lines show the full trajectories.
The formulation can be extended to a control system on realizations of observables z_t, with a control input u_t ∈ R^m at time t:

z_{t+1} = K z_t + B u_t    (6a)

y_t = C z_t + D u_t    (6b)

3 Koopman Operator Via Observables From Multiple Data Records

We begin by collecting a dataset of input–output data from N trajectories of n + 1 time-steps each, D_i = {u_t^i, y_t^i}_{t=0}^{n} for i = 1, …, N. For each trajectory of n + 1 measured output and input samples, and with l time-delayed coordinates, we define the block-Hankel matrices

Y_l^i = [ y_0^i      y_1^i    ⋯  y_{n−l}^i
          y_1^i      y_2^i    ⋯  y_{n−l+1}^i
          ⋮          ⋮            ⋮
          y_{l−1}^i  y_l^i    ⋯  y_{n−1}^i ]

with U_l^i defined analogously from the inputs u_t^i.
In order to identify the system using the multiple trajectory records, these Hankel matrices are collected to form mosaic-Hankel matrices, see Ref. [18], as shown below:

Y_l = [Y_l^1  Y_l^2  ⋯  Y_l^N],  U_l = [U_l^1  U_l^2  ⋯  U_l^N]    (7)

where Y_l ∈ R^{lp×(n−l)N} and U_l ∈ R^{lm×(n−l)N}.
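For concreteness, the following Python/NumPy sketch shows one way to assemble the block-Hankel and mosaic-Hankel matrices above. It is an illustrative implementation, not the authors' code; the function names and the exact column-count convention are assumptions.

```python
import numpy as np

def block_hankel(W, l):
    """Block-Hankel matrix with l block rows from a signal W of shape (dim, T);
    column k stacks W[:, k], ..., W[:, k+l-1]."""
    dim, T = W.shape
    cols = T - l  # number of l-step windows used here (off-by-one conventions vary)
    H = np.empty((l * dim, cols))
    for k in range(cols):
        H[:, k] = W[:, k:k + l].T.reshape(-1)
    return H

def mosaic_hankel(trajs, l):
    """Column-wise concatenation of per-trajectory Hankel matrices, Eq. (7)."""
    return np.hstack([block_hankel(W, l) for W in trajs])

# Usage with hypothetical data: ys and us are lists of N arrays of shape
# (p, n + 1) and (m, n + 1), respectively.
# Yl = mosaic_hankel(ys, l); Ul = mosaic_hankel(us, l)
```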

3.1 System Matrix Calculation Using SSID.

Assuming that we have collected the trajectory data and organized it into a mosaic-Hankel matrix as mentioned above, we proceed by computing the lifted states Z_0^lift and the system matrices K, C, B, and D using an SSID algorithm for multiple trajectories. We outline the procedure below, following closely the presentation of Ref. [14]. The system (6) can be concatenated and represented in the form

Y_l = Γ_l Z_0^lift + H_l U_l    (8)

where

Z_0^lift = [z_0^1 ⋯ z_{n−l}^1  z_0^2 ⋯ z_{n−l}^2  ⋯  z_0^N ⋯ z_{n−l}^N]

Γ_l ∈ R^{lp×r} is the extended observability matrix, H_l ∈ R^{lp×lm} is the block-Toeplitz matrix, and Z_0^lift ∈ R^{r×(n−l)N} is the matrix of lifted states at the initial time of each column trajectory of the mosaic-Hankel matrix. The subspace system identification algorithm proceeds by projecting Eq. (8) onto the orthogonal complement of the row space of U_l. The projection operator for this is given by Π_{U_l} = I − U_l^T (U_l U_l^T)^† U_l, and this projection yields Y_l Π_{U_l} = Γ_l Z_0^lift Π_{U_l}. With this, we extract the column space of Γ_l by taking the singular value decomposition of Y_l Π_{U_l}
Y_l Π_{U_l} = [Q_1  Q_2] [Σ_1  0; 0  Σ_2] [V_1  V_2]^T    (9)

where Σ_1 contains the r largest singular values, and so we consider an approximation of Γ_l to be Γ̂_r = Q_1 Σ_1^{1/2}. By selecting the r largest singular values, we are determining the order r of the lifted dynamics. Here, the first p rows of Γ̂_r give us the C matrix
C = Γ̂_r(1:p, :)    (10)
and with this, the K matrix can be found by solving the following least squares problem:
K = Γ̂_r(1:(l−1)p, :)^† Γ̂_r(p+1:lp, :)    (11)
where the colon operator is used as in the familiar MATLAB syntax and (·)^† is the Moore–Penrose pseudoinverse.
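The projection and SVD steps above can be summarized in a short Python/NumPy sketch. This is an illustrative rendering of Eqs. (9)–(11) under the stated definitions, with hypothetical function and variable names, not the authors' implementation.

```python
import numpy as np

def ssid_CK(Yl, Ul, p, r):
    """Estimate C and K from mosaic-Hankel data via Eqs. (9)-(11)."""
    # Project Yl onto the orthogonal complement of the row space of Ul
    Pi = np.eye(Ul.shape[1]) - Ul.T @ np.linalg.pinv(Ul @ Ul.T) @ Ul
    Y_proj = Yl @ Pi

    # Rank-r truncated SVD; Gamma_hat = Q1 Sigma1^(1/2) spans col(Gamma_l)
    Q, s, _ = np.linalg.svd(Y_proj, full_matrices=False)
    Gamma_hat = Q[:, :r] * np.sqrt(s[:r])

    C = Gamma_hat[:p, :]                                      # Eq. (10)
    K = np.linalg.pinv(Gamma_hat[:-p, :]) @ Gamma_hat[p:, :]  # Eq. (11)
    return C, K, Gamma_hat
```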
We again form a least squares problem to find the matrices B, D, and Z_0^lift, using the equation below, where i = 1, 2, …, N and t = 0, 1, 2, …, l:

y_t^i = C K^t z_0^i + Σ_{j=0}^{t−1} C K^{t−1−j} B u_j^i + D u_t^i    (12)
We first vectorize and collect all outputs in one vector and write it in the form

𝒴 = Φ θ,  θ = [vec(B)^T  vec(D)^T  vec(Z_0^lift)^T]^T    (13)

where 𝒴 stacks the outputs y_t^i and Φ is the corresponding regressor matrix, since Eq. (12) is linear in B, D, and z_0^i.
With this, the matrices B, D, and the initial lifted state Z_0^lift are given by the least squares solution

θ̂ = Φ^† 𝒴    (14)
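A sketch of this step for a single trajectory is given below in Python/NumPy; it exploits the identity C K^s B u = (u^T ⊗ C K^s) vec(B) to build the regressor of Eq. (13). In the paper, the least squares problem is solved jointly over all N trajectories with shared B and D; the single-trajectory simplification and all names here are illustrative assumptions.

```python
import numpy as np

def estimate_B_D_z0(C, K, u, y, l):
    """Least-squares estimate of B, D, and z0 from one trajectory, Eqs. (12)-(14).

    u: (m, l+1) inputs and y: (p, l+1) outputs; the model is
    y_t = C K^t z0 + sum_{j<t} C K^{t-1-j} B u_j + D u_t, linear in (B, D, z0).
    """
    p, r = C.shape
    m = u.shape[0]
    rows = []
    for t in range(l + 1):
        Phi_B = np.zeros((p, m * r))
        for j in range(t):  # coefficient of vec(B)
            Phi_B += np.kron(u[:, j], C @ np.linalg.matrix_power(K, t - 1 - j))
        Phi_D = np.kron(u[:, t], np.eye(p))        # coefficient of vec(D)
        Phi_z = C @ np.linalg.matrix_power(K, t)   # coefficient of z0
        rows.append(np.hstack([Phi_B, Phi_D, Phi_z]))
    Phi = np.vstack(rows)                           # regressor of Eq. (13)
    Y = y.T.reshape(-1)                             # stacked outputs y_0, ..., y_l
    theta = np.linalg.lstsq(Phi, Y, rcond=None)[0]  # Eq. (14)
    B = theta[:m * r].reshape(r, m, order="F")
    D = theta[m * r:m * (r + p)].reshape(p, m, order="F")
    z0 = theta[m * (r + p):]
    return B, D, z0
```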

4 Gaussian Processes as Basis Functions

The SSID procedure outlined in the previous section for identifying the lifted state z and system matrices provides a method for obtaining the system matrices from input–output data collected from the system. However, it is also necessary to construct a mapping from the original states, xt (or partial observation yt) to the lifted state. For this, we employ Gaussian process regression, based on the idea discussed in Sec. 2 that the lifted states zt obtained in subspace identification can be viewed as evaluations at a particular xt of a dictionary function, which is itself a realization of a random variable in a function space. In Sec. 4.1, we give the procedure when measurements of the full state observable are available and in Sec. 4.2, we consider the case where only a partial observation is available.

4.1 Gaussian Processes as Basis Functions of Full State Observable.

The purpose of the Gaussian process model here is to give the mapping z_t = Ψ(x_t) from the original system states x ∈ R^d to the lifted states z ∈ R^r. For this, we use the data gathered above as Z_0^lift and the corresponding states X = [x_0^1, …, x_{n−l}^1, …, x_0^N, …, x_{n−l}^N], obtained from the first p rows of the mosaic-Hankel matrix Y_l, for the case of a full-state observable, i.e., y_t = x_t. These data are then used to model each lifted state as a Gaussian process

z_i(x) ∼ GP(m_i(x), k_i(x, x′))    (15)
where i = 1, 2, …, r. The posterior mean and covariance are calculated as
μ_{z|D}(x_*) = K_{x_*X}(K_{XX} + σ_n² I)^{−1} Z    (16)

k_{z|D}(x_*, x_*′) = k(x_*, x_*′) − K_{x_*X}(K_{XX} + σ_n² I)^{−1} K_{Xx_*′}    (17)
where the kernel matrices K_{XX}, K_{x_*X}, and K_{Xx_*′} are found by evaluating the kernel function at the training inputs X and test inputs x_*, and Z collects the training targets. In particular, we use the automatic relevance determination (ARD) squared-exponential kernel. The kernel parameters can be optimized through log-likelihood maximization. Here, this is done using the MATLAB toolbox for Gaussian process regression to fit each of the r GPs independently.
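An equivalent fit is easy to sketch in Python with scikit-learn, which also optimizes the kernel hyperparameters by marginal log-likelihood maximization inside fit(); the paper uses the MATLAB toolbox, so this is only an illustrative stand-in, and the function names are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

def fit_lifting_gps(X, Z):
    """Fit one GP per lifted coordinate z_i = Psi_i(x), Eq. (15).

    X: (n_train, d) states; Z: (n_train, r) lifted-state realizations.
    A length-scale vector makes the RBF kernel an ARD squared-exponential kernel.
    """
    d, r = X.shape[1], Z.shape[1]
    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(d)) \
        + WhiteKernel(noise_level=1e-4)
    return [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Z[:, i])
            for i in range(r)]

def lift(gps, x):
    """Posterior mean and standard deviation of the lifted state at x, Eqs. (16)-(17)."""
    x = np.atleast_2d(x)
    preds = [gp.predict(x, return_std=True) for gp in gps]
    return np.array([m[0] for m, _ in preds]), np.array([s[0] for _, s in preds])
```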

4.2 Gaussian Processes as Basis Functions of Time-Delayed Observables.

In system identification theory [9], time-delayed coordinates of the measured outputs are often an effective way to estimate the state of a system even if the full state is not directly measured. Therefore, when the output is not the full state, the Gaussian process model gives the mapping from the output and the time-delayed coordinates of the outputs and inputs to the lifted states, z_t = Ψ(x̂_t), with

x̂_t = [y_t^T, y_{t−1}^T, …, y_{t−l}^T, u_{t−1}^T, …, u_{t−l}^T]^T    (18)

These time-delayed observables X̂, with the corresponding realizations of the lifted state Z_0^lift, are used to model the lifted state as a Gaussian process

z_i(x̂) ∼ GP(m_i(x̂), k_i(x̂, x̂′)),  i = 1, …, r    (19)
The posterior mean and covariance are calculated using Eqs. (16) and (17).
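Constructing these delayed features from raw data is mechanical; a Python sketch is given below. The exact ordering and number of delays inside x̂_t is an assumption here (the example in Sec. 6.3 uses five delayed inputs and outputs).

```python
import numpy as np

def delay_embed(y, u, l):
    """Stack x_hat_t = [y_t, y_{t-1}, ..., y_{t-l}, u_{t-1}, ..., u_{t-l}]
    for t = l, ..., T-1 from outputs y (p, T) and inputs u (m, T), cf. Eq. (18)."""
    p, T = y.shape
    feats = []
    for t in range(l, T):
        ys = y[:, t - l:t + 1][:, ::-1].T.reshape(-1)  # y_t, y_{t-1}, ..., y_{t-l}
        us = u[:, t - l:t][:, ::-1].T.reshape(-1)      # u_{t-1}, ..., u_{t-l}
        feats.append(np.concatenate([ys, us]))
    return np.array(feats)                             # shape (T - l, (l+1)p + l*m)
```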

5 Prediction Using GP-SSID

For predicting the evolution of the system from a given initial condition x_0, the corresponding initial lifted state is represented as a Gaussian random variable, found by evaluating the GPs at x_0,

z_0 ∼ N(μ_{z|D}(x_0), k_{z|D}(x_0, x_0))    (20)
The mean and covariance of the initial lifted state are found by evaluating the posterior mean and covariance for each lifting function, using Eqs. (16) and (17).
z̄_0 = μ_{z|D}(x_0)    (21a)

P_0 = k_{z|D}(x_0, x_0)    (21b)
Then, denote the mean and covariance of the lifted state by z̄ = μ_{z|D} and P = k_{z|D}, respectively. This mean and covariance are propagated forward in time using the system matrices as

z̄_{t+1} = K z̄_t + B u_t    (22a)

P_{t+1} = K P_t K^T    (22b)
and the estimated mean of the output, ŷ_t, along with its covariance Q_t at each step can be obtained as

ŷ_t = C z̄_t + D u_t    (23a)

Q_t = C P_t C^T    (23b)
where t is the time index.
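These recursions are straightforward to implement; the following Python sketch propagates the lifted mean and covariance and maps them to outputs, directly mirroring Eqs. (22) and (23) (names are illustrative).

```python
import numpy as np

def predict_outputs(K, B, C, D, z0_mean, z0_cov, U):
    """Propagate Eqs. (22a)-(22b) and read out Eqs. (23a)-(23b).

    U: (m, T) input sequence (use zeros for the uncontrolled case).
    Returns output means (T, p) and output covariances (T, p, p).
    """
    z, P = z0_mean.copy(), z0_cov.copy()
    y_means, y_covs = [], []
    for t in range(U.shape[1]):
        y_means.append(C @ z + D @ U[:, t])  # Eq. (23a)
        y_covs.append(C @ P @ C.T)           # Eq. (23b)
        z = K @ z + B @ U[:, t]              # Eq. (22a)
        P = K @ P @ K.T                      # Eq. (22b)
    return np.array(y_means), np.array(y_covs)
```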

6 Results

6.1 Linear Predictor Without Control.

We apply the proposed method to obtain the Koopman operator for the nonlinear Duffing oscillator using multiple and single data records. Then, we compare the open-loop prediction of each linear model against the true trajectory simulated using the nonlinear model
(24)
The training data set is collected by discretizing the nonlinear system (24) using a fourth-order Runge–Kutta method with a time-step of 0.01 s. For the first linear model (multiple data records), we sample 200 trajectories, each randomly initialized with x ∈ R² in the range [−3, 3], and simulate each trajectory for 150 time-steps, as shown in Fig. 2. We then construct the mosaic-Hankel matrix and follow the procedure discussed in Sec. 3 to construct a higher-dimensional linear system. For the second model, we sample a single trajectory 8000 time-steps long to construct the Koopman operator. From both models, we select 200 data points of the system states and the corresponding time realizations of the basis functions (Z^lift), which we use to train the Gaussian processes. For the multiple trajectory model, we select r = 10, while for the single trajectory model, we select r = 20. For predicting the forward dynamics, we first use the respective trained Gaussian processes to lift the initial states to the lifted space and then propagate forward using Eqs. (22) and (23).
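A data-generation sketch of this kind is shown below in Python. The Duffing coefficients of Eq. (24) are not restated here, so the duffing() right-hand side uses placeholder values that are assumptions for illustration only; the RK4 step, trajectory count, length, and initialization range follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def duffing(x):
    # Placeholder unforced Duffing dynamics: these coefficients are assumed,
    # not taken from Eq. (24).
    return np.array([x[1], -0.5 * x[1] + x[0] - x[0] ** 3])

def rk4_step(f, x, dt):
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# 200 trajectories of 150 steps each, initial states uniform in [-3, 3]^2
dt, n_steps = 0.01, 150
trajs = []
for _ in range(200):
    x = rng.uniform(-3.0, 3.0, size=2)
    traj = [x]
    for _ in range(n_steps):
        x = rk4_step(duffing, x, dt)
        traj.append(x)
    trajs.append(np.array(traj).T)  # each trajectory stored as (2, n_steps + 1)
```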

To compare the prediction of the proposed GP-SSID algorithm to the traditional EDMD algorithm, see for instance Ref. [5], we use the same data from the multiple trajectory data set to construct the Koopman operator using the EDMD method with 25 radial basis functions plus the states themselves, which gives a total of r = 27 basis functions. We then compare predictions using 500 randomly sampled initial points in the state space, predicting a 2-second trajectory from each initial point. The average root-mean-squared error (RMSE) for the SSID model trained with multiple trajectories is 0.0728, while for the single trajectory model it is 1.2093, and for the K-EDMD method it is 0.5424. Figures 3(a)–3(c) show one prediction example with an initial state at [−1.1, −1]. Figure 3(d) illustrates the increase in RMSE as the prediction time increases.

Fig. 3: Comparison of the prediction of the multiple trajectory model (solid line), single trajectory model (dash-dotted line), and Koopman EDMD model (dotted line) with the solution obtained by direct integration of the nonlinear system (24) (dash-squared line) for the initial state [−1.1, −1]

6.2 Model Predictive Control With Koopman Operator.

In this section, we consider a Duffing oscillator with a control input
(25)
For comparison purposes, we identify two linear models, one based on multiple data records and the other on a single trajectory. Next, we use a linear MPC technique to control the nonlinear system (25) and compare the two linear models in terms of the performance index of the MPC and convergence to the desired output.

We gather the training data sets shown in Fig. 4. For the first model, we use multiple data records by simulating 100 randomly sampled initial states forward for 500 time-steps with random control inputs u ∈ [−3, 3]. For the second model, we propagate a single initial condition for 8000 time-steps with a random control input of a similar range. Subsequently, we construct mosaic-Hankel matrices from the collected data, identify the linear system matrices, and train the basis functions (Gaussian processes) for both models, as discussed in Secs. 3 and 4. The size of the lifted state is r = 35 for both models.

Fig. 4: Training data set for the Duffing oscillator (25). Star points are the training data for the GPs, and the circular points show the full trajectories: (a) multiple data records and (b) single trajectory.
Linear MPC techniques are utilized with the identified linear models to guide the system’s states toward a desired state/trajectory. MPC optimizes the performance of dynamic systems by solving a sequence of optimization problems, determining the best control inputs that minimize the deviation between the desired and actual outputs while ensuring compliance with imposed constraints. We can formulate the MPC as a quadratic optimization problem with constraints as
min over u_{t_i}, …, u_{t_i+N_p−1} of  (y_{t_i+N_p} − y_r)^T Q_{N_p} (y_{t_i+N_p} − y_r) + Σ_{j=t_i}^{t_i+N_p−1} [(y_j − y_r)^T Q_j (y_j − y_r) + u_j^T R_j u_j]
subject to  z_{j+1} = K z_j + B u_j,  y_j = C z_j + D u_j,  u_j ∈ [u_min, u_max]    (26)
where Q_j = 5, Q_{N_p} = 10, and R_j = 10^{−3} are the positive semi-definite weight matrices for the outputs and inputs, t_i is the current time, N_p = 100 is the prediction horizon, and y_r is the reference trajectory that is to be tracked. In this example, we want to drive a given initial state x_0 to a desired state x_des = [2, 0]. At the start of the MPC, the Gaussian processes are used to lift the initial state x_0. Then, the MPC predicts over N_p time-steps, and one control step is implemented, which updates the initial state. Each of these feedback updates is passed again through the Gaussian processes, and the process continues. Figure 5 shows the results for the model that uses single trajectory data, and Fig. 6 shows the results for the model using multiple data records. From both figures, we can observe that the multiple data record model performs better and tracks the desired reference trajectory, as shown in Fig. 6.
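Because the lifted model is linear and the weights here are scalar, the tracking QP condenses to a box-constrained least-squares problem in the stacked input sequence. The Python sketch below illustrates one receding-horizon solve; it is an assumed rendering of Eq. (26), not the authors' implementation, and the stage cost is placed on y_j for j = 0, …, N_p − 1 with the terminal weight on the last block.

```python
import numpy as np
from scipy.optimize import lsq_linear

def koopman_mpc_step(K, B, C, D, z0, y_ref, Np, q, qN, r_w, u_max):
    """One condensed MPC solve for the lifted linear model, cf. Eq. (26).

    Predicted outputs are affine in the inputs: y = G z0 + H u, with
    y_j = C K^j z0 + sum_{i<j} C K^{j-1-i} B u_i + D u_j.
    Returns the first input of the optimal sequence (receding horizon).
    """
    p, r = C.shape
    m = B.shape[1]
    Kpow = [np.eye(r)]
    for _ in range(Np):
        Kpow.append(K @ Kpow[-1])
    G = np.vstack([C @ Kpow[j] for j in range(Np)])
    H = np.zeros((Np * p, Np * m))
    for j in range(Np):
        H[j * p:(j + 1) * p, j * m:(j + 1) * m] = D
        for i in range(j):
            H[j * p:(j + 1) * p, i * m:(i + 1) * m] = C @ Kpow[j - 1 - i] @ B
    # Stage weights q on the first Np - 1 output blocks, terminal weight qN
    w = np.sqrt(np.concatenate([np.full((Np - 1) * p, q), np.full(p, qN)]))
    A = np.vstack([w[:, None] * H, np.sqrt(r_w) * np.eye(Np * m)])
    b = np.concatenate([w * (np.tile(y_ref, Np) - G @ z0), np.zeros(Np * m)])
    u = lsq_linear(A, b, bounds=(-u_max, u_max)).x
    return u[:m]  # apply only the first input, then re-lift the updated state
```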
Fig. 5: MPC results for the linear model constructed using single trajectory data. The initial state is x_0 = [−0.7160, −0.9789] and the target state is [2, 0].
Fig. 6: Comparison of the MPC results from the linear models constructed using GP-SSID with multiple data records (solid line) and K-EDMD (dotted line) with model-based nonlinear MPC solutions (dash-dotted line). The initial state is x_0 = [−0.7160, −0.9789] and the target state is [2, 0].
Fig. 7: Comparison of MPC results with control horizon N_c = 50 obtained from the linear models constructed using GP-SSID with multiple data records (solid line) and K-EDMD (dotted line). The initial state is x_0 = [−0.7160, −0.9789] and the target state is [2, 0].

To assess the effectiveness of our proposed methodology against established data-driven techniques for the presented optimal control problem, we use the K-EDMD algorithm as outlined in Ref. [5]. We employ this algorithm to construct a linear system using the same multiple trajectory dataset, with 36 radial basis functions alongside the states themselves, yielding a Koopman operator with a total of r = 38 basis functions. Subsequently, we apply an MPC formulation, as detailed in Sec. 6.2, tailored to the linear system derived through the K-EDMD algorithm. The results of our MPC solutions are illustrated in Fig. 6. Notably, our proposed methodology exhibits faster convergence compared to the K-EDMD approach. Furthermore, the terminal error is larger in the case of the K-EDMD algorithm.

While the GP-SSID based Koopman linear MPC is well-suited for data-driven systems, we also compare this approach with a purely model-based nonlinear model predictive control approach with a similar cost function. We observe that the model-based controller exhibits faster convergence and achieves a lower cost when compared to the data-driven Koopman operator-based linear controllers. However, the computational time required for linear MPC based on both the GP-SSID and K-EDMD approaches is lower in comparison to the computational demands of the nonlinear model-based controller for the same time-step δt = 0.01 s. As an illustration of the difference in computational time, for the example shown in Fig. 6, GP-SSID and K-EDMD methods take about 101 s and 105 s, respectively, and the nonlinear MPC takes 1108 s.

The GP-SSID method consistently outperforms the K-EDMD algorithm in terms of prediction accuracy, as illustrated in Fig. 3. While the MPC implementations of both approaches yield similar solutions (as shown in Fig. 6), this similarity in performance is a result of using a one-step control horizon, which continually updates the initial conditions. Nonetheless, it is essential to note that the one-step control horizon entails a higher computational cost and may make online implementation infeasible. Therefore, we evaluate the performance of both algorithms with a larger control horizon of N_c = 50. For the example considered in Fig. 7, when N_c is increased to 50, the computational time for both the GP-SSID and K-EDMD approaches is about 1.7 s, although the terminal error with the K-EDMD approach grows compared to the GP-SSID approach.

6.3 Model Predictive Control With Koopman Operator for Partially Observed State.

Consider the Duffing oscillator with control in Eq. (25), which is partially observed such that
y_t = x_{2,t}    (27)
Collecting data and identifying the Koopman operator using subspace identification remain similar to the previous example in Sec. 6.2. We sample 100 trajectories with random initial states and propagate them for 400 time-steps with random control inputs. To train the basis functions (Gaussian processes), we use the output y along with five time-delayed inputs and outputs, together with the corresponding time realizations of the lifted function Z^lift ∈ R^25. We formulate an MPC problem similar to Eq. (26), as discussed in the previous section, to control the output x_2 to track a reference signal. Here, the weight matrices used are Q_j = 100 and R_j = 0.001. The results in Fig. 8 show that this method can track a reference trajectory for a partially observed system.
Fig. 8: MPC result for tracking the observable x_2 to 0

7 Conclusion

In this paper, we have explored connections between the classical system identification technique of subspace identification and recently developed methods for system identification based on constructing a finite-dimensional approximation of the Koopman operator. In particular, we have shown that these methods can be directly related by considering the latent state identified through SSID to be an evaluation of a set of dictionary functions at a particular state of the underlying nonlinear system. With this formulation, we have shown that for nonlinear systems, it is necessary to consider multiple data records, as a single trajectory does not sufficiently explore the state space in most cases. This is indicated by improved prediction and control performance. We also show that the same formulation can be applied in cases where only partial state measurements are available.

Footnote

1. Paper presented at the 2023 Modeling, Estimation, and Control Conference (MECC 2023), Lake Tahoe, NV, Oct. 2–5. Paper No. MECC2023-159.

Acknowledgment

This work was supported by Grant No. 2021612 from the National Science Foundation and Grant No. 13204704 from the Office of Naval Research.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

References

1. Rowley, C. W., Mezić, I., Bagheri, S., Schlatter, P., and Henningson, D. S., 2009, “Spectral Analysis of Nonlinear Flows,” J. Fluid Mech., 641, pp. 115–127.
2. Susuki, Y., and Mezić, I., 2014, “Nonlinear Koopman Modes and Power System Stability Assessment Without Models,” IEEE Trans. Power Syst., 29(2), pp. 899–907.
3. Hou, Z.-S., and Wang, Z., 2013, “From Model-Based Control to Data-Driven Control: Survey, Classification and Perspective,” Inf. Sci., 235, pp. 3–35.
4. Lasota, A., and Mackey, M. C., 1994, Chaos, Fractals, and Noise: Stochastic Aspects of Dynamics, Springer, New York.
5. Williams, M. O., Kevrekidis, I. G., and Rowley, C. W., 2015, “A Data-Driven Approximation of the Koopman Operator: Extending Dynamic Mode Decomposition,” J. Nonlinear Sci., 25(6), pp. 1307–1346.
6. Klus, S., Koltai, P., and Schütte, C., 2016, “On the Numerical Approximation of the Perron–Frobenius and Koopman Operator,” J. Comput. Dyn., 3(1), pp. 51–79.
7. Korda, M., and Mezić, I., 2018, “Linear Predictors for Nonlinear Dynamical Systems: Koopman Operator Meets Model Predictive Control,” Automatica, 93, pp. 149–160.
8. Williams, M. O., Hemati, M. S., Dawson, S. T., Kevrekidis, I. G., and Rowley, C. W., 2016, “Extending Data-Driven Koopman Analysis to Actuated Systems,” IFAC-PapersOnLine, 49(18), pp. 704–709.
9. Ljung, L., 1998, System Identification, Birkhäuser, Boston, MA.
10. Van Overschee, P., and De Moor, B., 1996, Subspace Identification for Linear Systems, Springer, New York.
11. Lian, Y., and Jones, C. N., 2019, “Learning Feature Maps of the Koopman Operator: A Subspace Viewpoint,” 2019 IEEE 58th Conference on Decision and Control (CDC), Nice, France, Dec. 11–13, pp. 860–866.
12. Lian, Y., and Jones, C. N., 2020, “On Gaussian Process Based Koopman Operators,” IFAC-PapersOnLine, 53(2), pp. 449–455.
13. Willems, J. C., Rapisarda, P., Markovsky, I., and De Moor, B. L., 2005, “A Note on Persistency of Excitation,” Syst. Control Lett., 54(4), pp. 325–329.
14. Holcomb, C. M., and Bitmead, R. R., 2017, “Subspace Identification With Multiple Data Records: Unlocking the Archive,” preprint arXiv:1704.02635.
15. Li, Q., Dietrich, F., Bollt, E. M., and Kevrekidis, I. G., 2017, “Extended Dynamic Mode Decomposition With Dictionary Learning: A Data-Driven Adaptive Spectral Decomposition of the Koopman Operator,” Chaos, 27(10), p. 103111.
16. Lusch, B., Kutz, J. N., and Brunton, S. L., 2018, “Deep Learning for Universal Linear Embeddings of Nonlinear Dynamics,” Nat. Commun., 9(1), p. 4950.
17. Rasmussen, C. E., and Williams, C. K. I., 2006, Gaussian Processes for Machine Learning, Adaptive Computation and Machine Learning, MIT Press, Cambridge, MA.
18. van Waarde, H. J., De Persis, C., Camlibel, M. K., and Tesi, P., 2020, “Willems’ Fundamental Lemma for State-Space Systems and Its Extension to Multiple Datasets,” IEEE Control Syst. Lett., 4(3), pp. 602–607.