Towards explainable data-driven predictive control with regularizations
Manuel Klädtke and Moritz Schulze Darup
Abstract
Data-driven predictive control (DPC), using linear combinations of recorded trajectory data, has recently emerged as a popular alternative to traditional model predictive control (MPC). Without an explicitly enforced prediction model, the effects of commonly used regularization terms – and the resulting predictions – can be opaque. This opacity may lead to practical challenges, such as reliance on empirical tuning of regularization parameters based on closed-loop performance, and potentially misleading heuristic interpretations of norm-based regularizations. However, by examining the structure of the underlying optimal control problem (OCP), more precise and insightful interpretations of regularization effects can be derived. In this paper, we demonstrate how to analyze the predictive behavior of DPC through implicit predictors and the trajectory-specific effects of quadratic regularization. We further extend these results to cover typical DPC modifications, including DPC for affine systems, offset regularizations, slack variables, and terminal constraints. Additionally, we provide a simple but general result on (recursive) feasibility in DPC. This work aims to enhance the explainability and reliability of DPC by providing a deeper understanding of these regularization mechanisms.
Zusammenfassung
Data-driven predictive control (DPC), which, in contrast to classical model predictive control (MPC), generates predicted trajectories as linear combinations of recorded data trajectories, has established itself as a popular alternative. Due to the absence of an explicit prediction model, the effects of the regularizations used and their impact on the prediction behavior are often opaque. This can lead to practical challenges such as empirical tuning of regularization parameters and potentially misleading interpretations of norm-based regularizations. In this work, we analyze the structure of the optimal control problem (OCP) underlying DPC and thereby gain insight into the implicit prediction behavior and the trajectory-specific effects of quadratic regularization. We extend these results to affine DPC schemes, offset regularizations, slack variables, and terminal constraints. This contribution aims to improve the explainability of DPC through a deeper understanding of the regularization mechanisms.
1 Introduction
Data-driven predictive control (DPC, [1], [2], [3]), utilizing linear combinations of recorded trajectory data to make predictions rather than relying on an explicit system model, has recently emerged as a popular alternative to classical model predictive control (MPC, [4]). This control paradigm exemplifies a direct data-driven control scheme, contrasting with indirect (model-based) methods, as visualized in Figure 1. DPC is theoretically grounded in a result from behavioral system theory [5] known as Willems' Fundamental Lemma. Furthermore, DPC yields an exact system representation and equivalence to MPC if the underlying system is a deterministic linear time-invariant (LTI) system [2], [6]. While this exactness extends to certain classes of deterministic nonlinear systems [6], [7], it is typically lost in realistic cases involving noise, disturbances, and general nonlinearities. To address this, regularization terms are often added to the DPC objective function [2], initially motivated by their relation to distributional robustness [8]. Since then, regularized DPC schemes have shown promising performance and have received further theoretical justification via robustness and stability results [1], [9], [10], [11]. However, the heuristic nature of regularizations often obscures their direct impact on the synthesis of predicted system trajectories from data, and since DPC operates without an explicitly enforced prediction model, the precise influence of regularizations on the resulting predictions can be challenging to discern. Therefore, our aim is to provide a deeper analysis that clarifies the interaction of control objective, constraints, and regularizations in DPC. Specifically, we propose two tools for this analysis, namely the trajectory-specific effect of regularizations (see Definition 1) and implicit predictors (see Definition 2), first introduced in [12]. The former reformulates regularization costs, translating their effect from auxiliary variables to the actual predicted system variables. The latter yields a model-based perspective on DPC by generating prediction mappings that align with DPC's actual predictions (see Figure 1). We demonstrate the use of these analysis tools by summarizing previous results from [12], [13], [14] and extending them towards common modifications in DPC.

The paper is organized as follows. First, in Section 2, we summarize fundamentals on direct data-driven predictions and regularized DPC and discuss a numerical example, which is used for visualization throughout the paper. In Section 3, we further motivate our approach and explain its underlying assumptions. Section 4 presents the trajectory-specific effect of regularization and summarizes related findings from [12], [13], [14]. In Section 5, we introduce implicit predictors and demonstrate their use by summarizing results from [12], [13], [14]. Section 6 expands these results to common DPC modifications, including affine systems, offset regularization, slack variables, and (terminal) equality constraints, and provides a general feasibility result. Finally, we conclude our work in Section 7 and preview future applications for our proposed analysis tools.

Figure 1: Direct data-driven control schemes aim to design control directly from data. This is in contrast to the indirect (i.e., model-based) data-driven control design paradigm. Here, we aim for an indirect viewpoint (highlighted in blue) on the predictions made by direct schemes via implicit predictors.
2 Preliminaries and running example
2.1 Fundamentals of direct data-driven predictions
Instead of utilizing a discrete-time state-space model with input
Here, the dimensions of the data matrix
and assuming L is greater than the lag ι of the system, the image
Note that the lag ι is an integer invariant [5] of the system, i.e., invariant with respect to the considered representation. In the context of the state-space representation (2), it is also known as its observability index, which is defined as the smallest integer ι for which
is satisfied. The generalized persistency of excitation condition (3) provides the theoretical foundation for the linear combinations (1), since
Representing system trajectories in this way is also known as an image representation of the system, as opposed to, e.g., a state-space representation (2). A popular sufficient condition for data to satisfy (3) is known as Willems' Fundamental Lemma [5], which has become synonymous with using image representations. We note that, contrary to the Fundamental Lemma, the generalized persistency of excitation condition (3) is both sufficient and necessary, and it requires neither controllability nor a Hankel structure for the data matrix. To include the current initial condition of the system as a starting point for predicted trajectories, the generated I/O sequence is typically partitioned into a past section (u_p, y_p) and a future section (u_f, y_f) with N_p and N_f time steps, respectively, yielding
The past section of a predicted trajectory is then forced to match the I/O data ξ recorded in the most recent N_p time steps during closed-loop operation, i.e., the constraints
force any predicted trajectory to start with the most recently witnessed behavior of the system. Note that ξ is a (non-minimal) state of the LTI system (2) if N_p is chosen greater than or equal to its lag ι. From now on, we omit the "future" subscript from u_f, y_f, U_f, and Y_f, since their "past" counterparts are already incorporated in W and ξ, eliminating any risk of confusion. For more concise notation, we also define
Additionally, with a slight abuse of notation, we redefine
Remark 1.
Although we have introduced the data-driven predictions in an I/O setting, they can be straightforwardly modified to a state-space setting [16]. To this end, consider
leading to a data matrix
Notably, in the ideal deterministic LTI setting with condition (3), the data matrix always has the rank deficiency
such that the image representation (4) implies a unique (and exact) linear predictor mapping

Visualization of the data matrix
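To make the construction of such data matrices concrete, the following minimal numpy sketch builds block-Hankel matrices from a recorded I/O trajectory, partitions them into past and future blocks, and checks the full row-rank property referenced as Assumption 1 in Section 2.2 below. The horizon lengths, signal dimensions, and the randomly generated placeholder data are purely illustrative assumptions and are not taken from the running example.

```python
import numpy as np

def block_hankel(w, L):
    """Depth-L block-Hankel matrix of a recorded signal w with shape (T, q):
    column j stacks the window w(j), ..., w(j+L-1) into a vector of length q*L."""
    T, q = w.shape
    cols = T - L + 1
    H = np.empty((q * L, cols))
    for j in range(cols):
        H[:, j] = w[j:j + L, :].reshape(-1)
    return H

# hypothetical dimensions and placeholder random data, for illustration only
Np, Nf = 2, 6                        # past/future horizon split, depth L = Np + Nf
rng = np.random.default_rng(0)
u_d = rng.standard_normal((50, 1))   # recorded inputs  (m = 1)
y_d = rng.standard_normal((50, 1))   # recorded outputs (p = 1)
m, p = u_d.shape[1], y_d.shape[1]

Hu, Hy = block_hankel(u_d, Np + Nf), block_hankel(y_d, Np + Nf)
Up, Uf = Hu[:m * Np, :], Hu[m * Np:, :]
Yp, Yf = Hy[:p * Np, :], Hy[p * Np:, :]

W = np.vstack([Up, Yp])              # rows fixing the initial condition xi
D = np.vstack([W, Uf, Yf])           # stacked data matrix used in the DPC constraints
assert np.linalg.matrix_rank(D) == D.shape[0]  # full row rank, cf. Assumption 1
```

Note that, as discussed above, the Hankel structure itself is not required by the generalized persistency of excitation condition; it is merely one common way of arranging the recorded data.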
2.2 Regularized DPC
The optimal control problem (OCP) that is solved for DPC in every time step can be stated as
with control objective J(ξ, u, y), regularization h(a), and input-output constraints
Assumption 1.
The data matrix
Note that full row-rank of
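As an illustration of how an OCP of the form (6) can be set up in practice, the following sketch assembles a regularized DPC problem with a quadratic output-tracking objective, the quadratic regularization λ‖a‖², and simple input bounds. The specific objective weights and the box constraint are illustrative assumptions rather than the exact formulation analyzed in this paper.

```python
import cvxpy as cp

def regularized_dpc_step(W, U, Y, xi, y_ref, lam, u_max=None):
    """Solve one regularized DPC problem of the form (6): a quadratic tracking
    objective plus lam*||a||^2, subject to the image-representation constraint.
    The small input penalty and the input box are illustrative assumptions."""
    a = cp.Variable(W.shape[1])
    u = cp.Variable(U.shape[0])
    y = cp.Variable(Y.shape[0])
    objective = (cp.sum_squares(y - y_ref) + 1e-2 * cp.sum_squares(u)
                 + lam * cp.sum_squares(a))
    constraints = [W @ a == xi, U @ a == u, Y @ a == y]   # cf. (6b)
    if u_max is not None:
        constraints += [cp.abs(u) <= u_max]               # cf. (6c)
    cp.Problem(cp.Minimize(objective), constraints).solve()
    return u.value, y.value, a.value
```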
2.3 Running numerical example
We emphasize that this work does not propose a new DPC scheme; rather, it provides tools for analyzing the structure of the OCPs in existing schemes. Therefore, and in light of the following discussion in Section 3 regarding point (i), we do not consider extensive closed-loop simulations to be the best demonstration of our results. Such simulations can be found in the works we reference when analyzing the respective schemes. Instead, we use a low-dimensional system as a running example to visualize the structure of the data matrix
which was generated by drawing an initial state
where the two additional samples are generated in the same way as before. These extra samples are needed so that the data matrix
3 Motivation and setting
Looking at recent DPC literature, we have observed the following two trends:
We acknowledge that both observations are grounded in intuitive and practically valid heuristics. However, (i) only offers a quantitative indicator of control performance being improved or diminished by regularizations. It may obscure the underlying qualitative interactions between constraints, control objective, and regularization that led to this effect. Some of the already established theoretical investigations into these qualitative effects include the interpretation of DPC with regularization as a convex relaxation of other (indirect) schemes [20] or in the context of distributional robustness [8], and our work aims to build on these.
Regarding (ii), such interpretations may be unspecific in the context of predictive control and might even be misleading. For example, it is claimed in [22] for regularization via
The motivation of our work is to provide a structured analysis that explains the interactions between constraints, control objectives, and regularization in DPC. Although the current results focus on analyzing existing schemes rather than proposing new ones, they offer deeper insights into these interactions and reveal potential pitfalls. This creates a foundation for future improvements in DPC schemes. Furthermore, our analysis is intentionally agnostic to the specific class of the data-generating system. Rather than focusing on a particular system type, our setting is tailored to the structure of data matrices generated by them. This often reduces to Assumption 1, which is typically (almost surely) satisfied for both non-deterministic LTI and non-deterministic nonlinear systems. Conversely, in practical cases, Assumption 1 typically only fails if too little data are used (i.e.,
Our analysis relies on two key conceptual tools. One is given by the novel concept of implicit predictors (depicted in Figure 1 and specified in Definition 2 further below), which aims to describe the predictive behavior of OCPs that may not have an explicitly enforced prediction model. The introduction of implicit predictors in [12] has also led to a parametric characterization of regularization costs in terms of their trajectory-specific effects (formalized in Definition 1). Since then, similar analyses have appeared in [13], [14], [19], [23], offering useful generalizations and intuitive interpretations of the underlying structures. Due to its independent value beyond implicit predictors, we treat this as a separate, second tool.
We first introduce the trajectory-specific effect of regularizations in Section 4, since it facilitates the following introduction of implicit predictors in Section 5. As an exemplary demonstration of these tools, we summarize results from [12], [13], [14] for their use in DPC with quadratic regularization in Sections 4.1, 4.2, and 5.1. We then extend these results in Section 6 by analyzing the effects of common modifications to the DPC problem. These analyses cover the extension to affine DPC in Section 6.1, the inclusion of an offset in the regularization in Section 6.2, the inclusion of slack variables in Section 6.3, and the inclusion of additional (terminal) equality constraints in Section 6.4. While not technically a modification to DPC, we also give two short but very general results on (recursive) feasibility in DPC in Section 6.5.
4 Trajectory-specific effect of regularization
In the ideal deterministic LTI setting without regularization, the variable a is only used in (6b) as an expression for the image representation introduced in (4), i.e.,
However, adding a regularization h(a) introduces another meaning to a, which is not based on the image representation and the underlying behavioral system theory but on heuristics. Intuitively, h(a) adds a price tag to every a, which is also transferred to the trajectory tuple (ξ, u, y) generated by
Definition 1.
We call the solution h*(ξ, u, y) to the optimization problem
the trajectory-specific effect of the regularization h(a) given the data
Importantly, note that (ξ, u, y) appear as parameters in (1) and not as optimization variables. Therefore, additional constraints such as (6c) are irrelevant. That is, h*(ξ, u, y) is valid for all (ξ, u, y) satisfying (7), and therefore also for those which additionally need to satisfy (6c). The relevance of h*(ξ, u, y) to the DPC problem comes from the fact that (1) naturally appears as an inner optimization problem in (6). That is, (6) is equivalent to
Note that we have deliberately replaced (6b) via (7) to highlight the fact that a can be fully eliminated, since it is just an auxiliary variable, after all. That is, the image representation (7) acts just as before, but the additional heuristic costs introduced by h(a) (with which we started this section) are now fully explained by their trajectory-specific effect h*(ξ, u, y). This allows for much more intuitive interpretations, which we demonstrate by summarizing results from [12], [13], [14] on the trajectory-specific effect of quadratic regularization
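For a quick numerical check of such results, the trajectory-specific effect from Definition 1 can also be evaluated directly. For a quadratic regularization of the form h(a) = λ‖a‖₂², the inner minimization over a subject to the equality constraint is attained by the minimum-norm solution, which reduces the evaluation to a pseudoinverse computation. The following sketch assumes this quadratic form and the stacked notation D = [Wᵀ Uᵀ Yᵀ]ᵀ, w = (ξ, u, y).

```python
import numpy as np

def trajectory_specific_effect(D, w, lam):
    """Evaluate h*(xi, u, y) = min_a { lam * ||a||_2^2 : D a = w } numerically.
    For this quadratic regularizer the minimizer is the minimum-norm solution
    a = pinv(D) @ w, assuming w lies in the image of D (which holds for every
    trajectory triple under Assumption 1)."""
    a_min_norm = np.linalg.pinv(D) @ w
    return lam * float(a_min_norm @ a_min_norm)
```

Comparing the value returned by such a routine with closed-form expressions like (10) is a simple way to validate an implementation.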
4.1 Trajectory-specific effect of quadratic regularization
It was first shown in [12], with additional details in [13, Prop. 1], that, under Assumption 1, the trajectory-specific effect of quadratic regularization
Here,
via the least-squares solutions
where ‖·‖_F denotes the Frobenius norm. Note that
where
are the residual matrices associated with the least-squares problems (13), and the inverses exist under Assumption 1. Therefore,
When discussing the role of these cost terms, first note that the last cost term (10c) is irrelevant to the OCP, since ξ is a parameter determined in closed-loop operation and not an optimization variable. Regarding the usefulness of (10a), we believe that y being pushed towards the least-squares (multistep) predictor
Finally, we want to highlight the discrepancy between (10a) and (10b) in terms of proper tuning for λ. The output-related cost term (10a) needs a large weight λ because, without the previously discussed rank deficiency (5), it is the sole factor keeping output predictions y from being greedily and unrealistically (i.e., without considering the data in
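The least-squares (multistep) predictor referenced above can be fitted directly from the data columns. The following numpy sketch shows an unweighted version, mapping (ξ, u) to a predicted output sequence, together with its residual matrix; the exact weighting and the companion least-squares problem appearing in (13) are not reproduced here.

```python
import numpy as np

def least_squares_multistep_predictor(W, U, Y):
    """Fit Phi = argmin_Phi || Y - Phi @ [W; U] ||_F via the pseudoinverse and
    return it together with the residual matrix (the part of the output data
    not explained by the predictor). An unweighted sketch of the least-squares
    fit referenced in Section 4.1."""
    WU = np.vstack([W, U])
    Phi = Y @ np.linalg.pinv(WU)      # predicted outputs: y_hat = Phi @ [xi; u]
    residual = Y - Phi @ WU
    return Phi, residual
```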
4.2 Isolating trajectory-specific effects via projections or γ-DDPC
In [20], the orthogonal projection matrices
were introduced to yield a regularization
Since
This separation explains the observations in [20, Fig. 2]. There, the performance of
In γ -DDPC, the constraint (6b) and variable a are replaced via LQ decomposition as
where the diagonal blocks L_ii for i ∈ {1, 2, 3} are non-singular (under Assumption 1) and the matrices Q_i have orthonormal rows. Furthermore, Q_4 and γ_4 are typically omitted, since they do not affect the generated trajectory. The idea behind γ-DDPC can be summarized as re-parameterizing the OCP with a lower-dimensional variable γ and decoupling the matching of the initial condition ξ, since
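Numerically, such an LQ factorization of the stacked data matrix can be obtained from a QR factorization of its transpose. The following sketch partitions the factors according to the row blocks W, U, and Y; it is an illustration of the re-parameterization idea rather than a full γ-DDPC implementation.

```python
import numpy as np

def lq_blocks(W, U, Y):
    """LQ factorization [W; U; Y] = L @ Q with L block-lower-triangular and Q
    having orthonormal rows, computed via QR of the transpose. Returns the
    blocks L11, L21, L22, L31, L32, L33 and Q1, Q2, Q3; the omitted directions
    (Q4, gamma4) do not affect the generated trajectory."""
    D = np.vstack([W, U, Y])
    Qt, Rt = np.linalg.qr(D.T, mode="reduced")   # D.T = Qt @ Rt
    L, Q = Rt.T, Qt.T                            # hence D = L @ Q
    n1, n2 = W.shape[0], U.shape[0]
    L11 = L[:n1, :n1]
    L21, L22 = L[n1:n1 + n2, :n1], L[n1:n1 + n2, n1:n1 + n2]
    L31, L32, L33 = L[n1 + n2:, :n1], L[n1 + n2:, n1:n1 + n2], L[n1 + n2:, n1 + n2:]
    Q1, Q2, Q3 = Q[:n1, :], Q[n1:n1 + n2, :], Q[n1 + n2:, :]
    return (L11, L21, L22, L31, L32, L33), (Q1, Q2, Q3)

# With gamma_i = Q_i @ a, the initial condition only involves gamma_1 via
# W @ a = L11 @ gamma_1 = xi, which decouples its matching from gamma_2 and gamma_3.
```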
Corollary 1.
([13]) Under Assumption 1, regularization of the γ -variables can be equivalently expressed by the trajectory-specific effect
We note that the difference in notation w.r.t. Definition 1 only comes from the fact that (γ_1, γ_2, γ_3) are uniquely determined by (ξ, u, y), and therefore
As a guideline for practitioners, we advocate to use a mixed regularization
to retain a quantitatively similar effect, since
Remark 2.
Technically, the data matrix
5 Implicit predictors in regularized DPC
Although the absence of the rank deficiency (5) allows for non-unique output predictions, and Assumption 1 even allows for any triple (ξ, u, y) to be generated from
Definition 2.
([12]) We call
Hence, an implicit predictor
In contrast to (1), we now treat (ξ, u) as parameters and optimize over (a, y). Hence, additional set constraints
In the presence of the rank deficiency (5),
However, in these two settings, the predictor
In the absence of the rank-deficiency (5), i.e., in a realistic (non-deterministic) setting with more data columns than in (17), the additional degrees of freedom lead to output predictions deviating from
5.1 An implicit predictor for DPC with quadratic regularization
In the following, we assume that the control objective is a quadratic output-tracking formulation
with reference y_ref, positive semidefinite weighting matrix
Furthermore, since the trajectory-specific cost of
Note that the involved inverse exists because

Implicit predictor, optimal parametric solutions, and least-squares mappings for the DPC problem discussed in Section 2.3 (a–d). The optimal parametric DPC solutions (x_0, u*(x_0), x*(x_0)) for the different regularizations
Implicit predictor
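Since an implicit predictor according to Definition 2 is characterized by partially minimizing the OCP over (a, y) for fixed (ξ, u), it can also be evaluated numerically and compared against a candidate closed-form expression such as (19). The following sketch does this for the quadratic tracking objective with regularization λ‖a‖²; the omission of output weighting and of constraints is an illustrative simplification.

```python
import cvxpy as cp

def implicit_prediction(W, U, Y, xi, u, y_ref, lam):
    """Numerically evaluate the implicit predictor from Definition 2: fix
    (xi, u), minimize the tracking objective plus lam*||a||^2 over (a, y)
    subject to the image representation, and return the minimizing y."""
    a = cp.Variable(W.shape[1])
    y = cp.Variable(Y.shape[0])
    objective = cp.sum_squares(y - y_ref) + lam * cp.sum_squares(a)
    constraints = [W @ a == xi, U @ a == u, Y @ a == y]
    cp.Problem(cp.Minimize(objective), constraints).solve()
    return y.value
```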
6 Effects of modifications in DPC
While the previous section introduced the tools, namely trajectory-specific effect of regularization and implicit predictors, and exemplified them via results from [12], [13], [14], we now extend these results towards some common modifications in DPC. We analyze the extension to affine DPC in Section 6.1, the inclusion of an offset in the regularization in Section 6.2, the inclusion of slack variables in Section 6.3, and the inclusion of additional (terminal) equality constraints in Section 6.4. Finally, we also give two brief results on (recursive) feasibility for regularized DPC in Section 6.5.
6.1 DPC for affine systems
While standard (linear) DPC can yield exact predictions for deterministic LTI systems (see the discussion in Section 2.1), exact extensions to particular classes of nonlinear systems have been proposed, e.g., in [6], [7], [22]. Among these, we want to briefly discuss the case of affine time-invariant (ATI) systems
proposed in [22], which is also used for other nonlinear systems with continuously updated trajectory data in order to approximate a local (affine) linearization of the nonlinear system for predictions as in [10], [22]. Similarly to how trajectories of LTI systems (2) can be generated by linear combinations, trajectories of ATI systems (20) can be generated by affine combinations of trajectory data. That is, in addition to (1), generated trajectories must also satisfy
Intuitively, this condition can be explained by noting that the effect of e, r is present exactly once in each data trajectory and, accordingly, should be present exactly once in the generated trajectory. Assuming exact data generated by an ATI system, the affine hull
However, in the presence of noise and (other) nonlinearities, the same discussions as in Section 2.1 apply, i.e., unique and exact predictions are no longer possible. Accordingly, our analysis is not confined to affine DPC applied to data from ATI systems (20), but rather extends to affine DPC with data generated by any system, including the nonlinear tracking case in [10]. To understand the features of affine DPC in the presence of such realistic data, we extend our results from the linear DPC case. As discussed in [19, Sec. II] and, in particular, [19, Rem. 4], many analysis results for linear DPC also apply to nonlinear systems that are linear in known (nonlinear) transformations of the state, input, and output. This also applies to the affine DPC scheme at hand, where we can simply consider
for the transformed state-input data. Instead of the linear least-squares estimates
with
and the corresponding residual matrices
The analysis of affine DPC with regularizations in terms of their trajectory-specific effect and implicit predictors then follows accordingly. In the following, we present the case of (projection-based) quadratic regularization.
Proposition 1.
For affine DPC, the trajectory-specific effect of
with weighting matrices
Proof.
The proof follows analogously to the linear DPC case.□
The interpretation of these cost terms also directly follows from the discussion below (10). Importantly, note that instead of the least-squares estimates for a linear predictor/controller
Finally, the analysis of predictive behavior via implicit predictors follows accordingly.
Proposition 2.
Consider affine DPC with quadratic output-tracking objective (18), (projection-based) quadratic regularization
is an implicit predictor for this problem.
Proof.
The proof follows analogously to the linear DPC case from the trajectory-specific effect of regularization in (23).□
Again, the interpretation of this predictor follows from the linear DPC case discussed below (19).
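To illustrate the affine counterpart of the least-squares estimates used in this section, one common construction appends a row of ones to the regressor so that the fit includes a constant offset, mirroring the affine-combination requirement that the coefficients sum to one. The following sketch rests on that assumption; the exact estimates and weighting used in Proposition 1 may differ.

```python
import numpy as np

def affine_least_squares_predictor(W, U, Y):
    """Affine analogue of the least-squares multistep predictor: fit
    Y ~ Phi @ [W; U] + phi0 by appending a row of ones to the regressor.
    Returns the linear part, the constant offset, and the residual matrix."""
    WU1 = np.vstack([W, U, np.ones((1, W.shape[1]))])
    Theta = Y @ np.linalg.pinv(WU1)
    Phi, phi0 = Theta[:, :-1], Theta[:, -1]
    residual = Y - Theta @ WU1
    return Phi, phi0, residual
```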
6.2 Regularization with offset
Some DPC schemes, in which the tracking of a non-zero equilibrium is desired, modify the regularization by including an offset, i.e.,
and similarly with Δz, Δu, Δy, Δξ.
Theorem 1.
Consider the DPC problem (6) with quadratic offset-regularization
with
Proof.
For a given tuple
Except for the translation into Δ-coordinates, this expression is equivalent to the one obtained in [12, Sec. III.A], and the same block-matrix decomposition steps of
Hence, an offset by
While the third cost term is always irrelevant, we can see that the first two terms also may have no additional effect (compared to the usual quadratic regularization) if the trajectory
respectively. Similarly to (15), the effect of the cost terms (25b) and (25c) can be eliminated by considering a projection-based quadratic regularization. On that note, we briefly remark that both
yield the same effect, since they only differ in a constant term unrelated to a. Furthermore, using the first term of the alternative cost expression (26), we can also state an implicit predictor as follows.
Proposition 3.
Consider the DPC scheme (6) with quadratic output-tracking objective (18), (projection-based) quadratic offset-regularization
with
is an implicit predictor for this problem.
Proof.
The proof follows analogously to the linear DPC case from the trajectory-specific effect of regularization in (26).□
We want to highlight that
6.3 Effect of slack variables in DPC
A common modification to DPC, first introduced in [2], is the inclusion of a slack variable as follows
Here, we briefly decompose the notation from ξ, W back to u_p, y_p, U_p, Y_p to show exactly which part the slack variable σ is acting on. The use of slack variables is common to avoid infeasibility of the initial condition Wa = ξ in situations where W does not have full row-rank. That is, while in the deterministic LTI case
for which we have
and sum up the regularization terms
Using this re-parameterization, the slack variables can be interpreted as adding artificial trajectory data columns in the new augmented data matrix
Regarding the tuning of λ_σ with respect to λ, how prominently the artificial trajectories are used in the resulting DPC predictions mainly depends on the ratio
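The interpretation of slack variables as artificial data columns can be made explicit by assembling the augmented data matrix directly. The sketch below assumes, as in the scheme of [2], that the slack acts on the past-output block y_p; the exact placement and weighting used in a particular scheme may differ.

```python
import numpy as np

def augmented_data_matrix(Up, Yp, U, Y):
    """Augment the stacked data matrix with an identity block next to Y_p, so
    that the extended coefficient vector [a; sigma] can absorb any mismatch in
    the past outputs. The artificial columns correspond to the slack variable;
    a and sigma are then regularized separately, e.g. lam*||a||^2 + lam_s*||sigma||^2."""
    n_yp = Yp.shape[0]
    pad = lambda M: np.hstack([M, np.zeros((M.shape[0], n_yp))])
    return np.vstack([pad(Up),
                      np.hstack([Yp, np.eye(n_yp)]),
                      pad(U),
                      pad(Y)])
```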
6.4 Predictive behavior with (terminal) equality constraints
In order to provide closed-loop stability guarantees, classical MPC typically makes use of terminal ingredients (see, e.g., [32]). Similarly, practical stability for some DPC schemes like [1], [9], [10] has been proven by employing (among other modifications) terminal equality constraints for the last steps of the predicted I/O sequence (see [11] for a tutorial). While we view terminal constraints as the main use case of our following analysis, the results naturally expand to other kinds of equality constraints. When analyzing the effect of additional terminal equality constraints in the DPC scheme (6), let us first briefly recall that the trajectory-specific effect of regularization introduced in Section 4 is universally unaffected, since it applies to any chosen triple (ξ, u, y) satisfying (8b), and thus also to the ones satisfying additional (terminal equality) constraints. Regarding the characterization of predictive behavior via implicit predictors, similar considerations apply for the effect of additional constraints on the input sequence u, as already discussed below (16).
However, the predictive behavior of DPC is significantly influenced by any kind of output constraint and often contradicts the unconstrained behavior, as observed in [12, Sec. III.C]. To simplify the upcoming notation, we assume that the terminal output constraints require the predicted output y to match the reference y_ref over the final n (or any other number of) steps of the prediction. This assumption is made without loss of generality; if the original reference y_ref does not naturally satisfy this condition, we can simply define a modified reference
which aligns with the setup in [1]. Given these assumptions, the following theorem precisely characterizes the effect of (terminal) output equality constraints on the predictive behavior of DPC with (projection-based) quadratic regularization.
Theorem 2.
Consider the DPC problem (6) with
Furthermore, consider the block partitioning
with
with the weights
is an implicit predictor for this problem.
Proof.
The proof strategy lies in characterizing the effect of hard terminal constraints as soft constraints with costs tending to infinity. Note that this is an exact characterization that works due to Assumption 1 allowing for any triple (ξ, u, y) in (6b) and would be invalid for an indirect (i.e., model-based) scheme, where the set of feasible (ξ, u, y) is limited by a prediction model enforced as a hard constraint. The result thus follows by considering
and note that
follows analogously. From the structure and positive definiteness of
Computing the limit for this expression yields
where we used a well-known block-matrix inversion formula in the third step. The computation of Λ_reg follows analogously.□
As expected, the last n steps of the predicted
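The limit argument underlying Theorem 2 can also be checked numerically by comparing the prediction under the hard terminal equality constraint with a soft-constrained version whose penalty weight is chosen large. The following sketch does this for the simplified setting of the earlier examples (no output weighting, no further constraints); the penalty placement is an illustrative assumption.

```python
import cvxpy as cp
import numpy as np

def compare_hard_and_soft_terminal(W, U, Y, xi, y_ref, lam, n_term, rho=1e8):
    """Compare the optimal output prediction under a hard terminal equality
    constraint with a soft-constraint version of weight rho, illustrating the
    soft-constraint limit used in the proof of Theorem 2 (numerical sketch)."""
    idx = Y.shape[0] - n_term           # index where the terminal block starts

    def solve(hard):
        a, u, y = cp.Variable(W.shape[1]), cp.Variable(U.shape[0]), cp.Variable(Y.shape[0])
        cost = cp.sum_squares(y - y_ref) + lam * cp.sum_squares(a)
        cons = [W @ a == xi, U @ a == u, Y @ a == y]
        if hard:
            cons += [y[idx:] == y_ref[idx:]]
        else:
            cost += rho * cp.sum_squares(y[idx:] - y_ref[idx:])
        cp.Problem(cp.Minimize(cost), cons).solve()
        return y.value

    return np.max(np.abs(solve(True) - solve(False)))   # small for large rho
```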
6.5 (Recursive) feasibility in DPC
Recursive feasibility of the closed loop is an important concept in stabilizing predictive control schemes, and much focus is placed on either guaranteeing it a priori or certifying it for a given controller [33]. Correspondingly, practical stability results for DPC schemes also include results on recursive feasibility (see [1, Sec. IV.D], [11, Prop. IV.1], [9, Thm. 14]). Our work emphasizes the analysis of DPC schemes through the structure of the underlying OCP, without making assumptions about the class of systems generating the data. While we should not expect in-depth closed-loop analysis results without making such assumptions, our approach still reveals broad results on (recursive) feasibility based on the OCP structure, which seem to be currently overlooked in the literature. Although the DPC scheme (6) does not include constraints
Proposition 4.
Consider the DPC scheme (6) with non-empty constraint sets
Proof.
Under Assumption 1, there exists an a satisfying (6b) for any triple (ξ, u, y). Hence, for any
The following is a simple consequence of this result.
Proposition 5.
Consider the DPC scheme (6) with non-empty constraint sets
Proof.
Simply consider
Although straightforward and already briefly discussed in [12, Sec. III.C], these results have important implications. The DPC problem (6) is typically (i.e., under Assumption 1) always feasible. This feasibility is by design of the OCP itself, rather than coming from the closed-loop control or system dynamics. For the schemes in [1], [11], the use of slack variables on the output variables y_p and y ensures that Assumption 1 holds for the extended data matrix
7 Conclusions and outlook
This work discussed the use of trajectory-specific effects of regularizations (see Definition 1) and implicit predictors (see Definition 2) as analysis tools to improve explainability in regularized DPC. The former concretizes the effects of any regularization h(a) by eliminating auxiliary variables and reformulating an equivalent cost h*(ξ, u, y), which is specific to the trajectory variables (ξ, u, y), instead. The latter is a predictor mapping
Although this work primarily focused on DPC with (projection-based) quadratic regularizations, we emphasize the broadness of our proposed analysis tools, which are (in principle) applicable to any choice of regularization h(a). Therefore, similar analyses for more general quadratic regularization
About the authors

Manuel Klädtke received a B.Sc. and an M.Sc. in Electrical Engineering from Paderborn University in 2019 and 2021, respectively. He is currently pursuing a Ph.D. in the Control and Cyberphysical Systems Group at TU Dortmund University. His primary research focus is on data-driven predictive control schemes.

Moritz Schulze Darup received a Diploma degree in Mechanical Engineering, a B.Sc. in Physics, and a Ph.D. in Control Engineering from Ruhr-Universität Bochum in 2008, 2010, and 2014, respectively. He became Assistant Professor and leader of an Emmy Noether group for encrypted control at Paderborn University in 2019. Since 2020, he has been Full Professor for Control and Cyberphysical Systems at TU Dortmund University. His research interests include secure, predictive, and data-driven control.
- Research ethics: Not applicable.
- Informed consent: Not applicable.
- Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
- Use of Large Language Models, AI and Machine Learning Tools: None declared.
- Conflict of interest: The authors state no conflict of interest.
- Research funding: None declared.
- Data availability: Not applicable.
References
[1] J. Berberich, J. Köhler, M. Müller, and F. Allgöwer, "Data-driven model predictive control with stability and robustness guarantees," IEEE Trans. Autom. Control, vol. 66, no. 4, pp. 1702–1717, 2021. https://doi.org/10.1109/TAC.2020.3000182.
[2] J. Coulson, J. Lygeros, and F. Dörfler, "Data-enabled predictive control: in the shallows of the DeePC," in 18th European Control Conference, 2019, pp. 307–312. https://doi.org/10.23919/ECC.2019.8795639.
[3] H. Yang and S. Li, "A new method of direct data-driven predictive controller design," in 2013 9th Asian Control Conference, 2013, pp. 1–6. https://doi.org/10.1109/ASCC.2013.6606233.
[4] J. B. Rawlings, D. Q. Mayne, and M. M. Diehl, Model Predictive Control: Theory, Computation, and Design, 2nd ed., Santa Barbara, California, Nob Hill Publishing, 2017.
[5] J. C. Willems, P. Rapisarda, I. Markovsky, and B. L. M. De Moor, "A note on persistency of excitation," Syst. Control Lett., vol. 54, no. 4, pp. 325–329, 2005. https://doi.org/10.1016/j.sysconle.2004.09.003.
[6] J. Berberich and F. Allgöwer, "A trajectory-based framework for data-driven system analysis and control," in 2020 European Control Conference, 2020, pp. 1365–1370. https://doi.org/10.23919/ECC51009.2020.9143608.
[7] M. Alsalti, J. Berberich, V. G. Lopez, F. Allgöwer, and M. A. Müller, "Data-based system analysis and control of flat nonlinear systems," in 2021 60th IEEE Conference on Decision and Control, 2021, pp. 1484–1489. https://doi.org/10.1109/CDC45484.2021.9683327.
[8] J. Coulson, J. Lygeros, and F. Dörfler, "Regularized and distributionally robust data-enabled predictive control," in 2019 IEEE 58th Conference on Decision and Control, 2019, pp. 2696–2701. https://doi.org/10.1109/CDC40024.2019.9028943.
[9] M. Alsalti, M. Barkey, V. G. Lopez, and M. A. Müller, "Robust and efficient data-driven predictive control," digital preprint, arXiv:2409.18867, 2024.
[10] J. Berberich, J. Köhler, M. Müller, and F. Allgöwer, "Linear tracking MPC for nonlinear systems—part II: the data-driven case," IEEE Trans. Autom. Control, vol. 67, no. 9, pp. 4406–4421, 2022. https://doi.org/10.1109/TAC.2022.3166851.
[11] J. Berberich, J. Köhler, M. A. Müller, and F. Allgöwer, "Stability in data-driven MPC: an inherent robustness perspective," in 2022 IEEE 61st Conference on Decision and Control, 2022, pp. 1105–1110. https://doi.org/10.1109/CDC51059.2022.9993361.
[12] M. Klädtke and M. Schulze Darup, "Implicit predictors in regularized data-driven predictive control," IEEE Control Syst. Lett., vol. 7, pp. 2479–2484, 2023. https://doi.org/10.1109/LCSYS.2023.3285104.
[13] M. Klädtke and M. Schulze Darup, "Towards a unifying framework for data-driven predictive control with quadratic regularization," digital preprint, arXiv:2404.02721, 2024.
[14] M. Klädtke, M. Schulze Darup, and D. E. Quevedo, "Extending direct data-driven predictive control towards systems with finite control sets," in 2024 European Control Conference, 2024, pp. 3345–3350. https://doi.org/10.23919/ECC64448.2024.10590896.
[15] I. Markovsky and F. Dörfler, "Identifiability in the behavioral setting," IEEE Trans. Autom. Control, vol. 68, no. 3, pp. 1667–1677, 2023. https://doi.org/10.1109/TAC.2022.3209954.
[16] C. De Persis and P. Tesi, "Formulas for data-driven control: stabilization, optimality, and robustness," IEEE Trans. Autom. Control, vol. 65, no. 3, pp. 909–924, 2020. https://doi.org/10.1109/TAC.2019.2959924.
[17] I. Markovsky, "Structured low-rank approximation and its applications," Automatica, vol. 44, no. 4, pp. 891–909, 2008. https://doi.org/10.1016/j.automatica.2007.09.011.
[18] V. Breschi, A. Chiuso, and S. Formentin, "Data-driven predictive control in a stochastic setting: a unified framework," Automatica, vol. 152, 2023, Art. no. 110961. https://doi.org/10.1016/j.automatica.2023.110961.
[19] P. Mattsson, F. Bonassi, V. Breschi, and T. B. Schön, "On the equivalence of direct and indirect data-driven predictive control approaches," IEEE Control Syst. Lett., vol. 8, pp. 796–801, 2024. https://doi.org/10.1109/LCSYS.2024.3403473.
[20] F. Dörfler, J. Coulson, and I. Markovsky, "Bridging direct and indirect data-driven control formulations via regularizations and relaxations," IEEE Trans. Autom. Control, vol. 68, no. 2, pp. 883–897, 2023. https://doi.org/10.1109/TAC.2022.3148374.
[21] L. Huang, J. Zhen, J. Lygeros, and F. Dörfler, "Quadratic regularization of data-enabled predictive control: theory and application to power converter experiments," IFAC-PapersOnLine, vol. 54, no. 7, pp. 192–197, 2021. https://doi.org/10.1016/j.ifacol.2021.08.357.
[22] J. Berberich, J. Köhler, M. A. Müller, and F. Allgöwer, "Data-driven model predictive control: closed-loop guarantees and experimental results," at – Automatisierungstechnik, vol. 69, no. 7, pp. 608–618, 2021. https://doi.org/10.1515/auto-2021-0024.
[23] V. Breschi, A. Chiuso, M. Fabris, and S. Formentin, "On the impact of regularization in data-driven predictive control," in 2023 62nd IEEE Conference on Decision and Control, 2023, pp. 3061–3066. https://doi.org/10.1109/CDC49753.2023.10383820.
[24] W. Favoreel, B. De Moor, and M. Gevers, "SPC: subspace predictive control," IFAC Proc. Vol., vol. 32, no. 2, pp. 4004–4009, 1999. https://doi.org/10.1016/S1474-6670(17)56683-5.
[25] M. Sader, Y. Wang, D. Huang, C. Shang, and B. Huang, "Causality-informed data-driven predictive control," digital preprint, arXiv:2311.09545, 2023.
[26] V. Breschi, M. Fabris, S. Formentin, and A. Chiuso, "Uncertainty-aware data-driven predictive control in a stochastic setting," IFAC-PapersOnLine, vol. 56, no. 2, pp. 10083–10088, 2023. https://doi.org/10.1016/j.ifacol.2023.10.878.
[27] A. Chiuso, M. Fabris, V. Breschi, and S. Formentin, "Harnessing uncertainty for a separation principle in direct data-driven predictive control," digital preprint, arXiv:2312.14788, 2024.
[28] F. Fiedler and S. Lucia, "On the relationship between data-enabled predictive control and subspace predictive control," in 2021 European Control Conference, 2021, pp. 222–229. https://doi.org/10.23919/ECC54610.2021.9654975.
[29] A. Padoan, F. Dörfler, and J. Lygeros, "Data-driven representations of conical, convex, and affine behaviors," in 2023 62nd IEEE Conference on Decision and Control, 2023, pp. 596–601. https://doi.org/10.1109/CDC49753.2023.10383687.
[30] E. Elokda, J. Coulson, P. N. Beuchat, J. Lygeros, and F. Dörfler, "Data-enabled predictive control for quadcopters," Int. J. Robust Nonlinear Control, vol. 31, no. 18, pp. 8916–8936, 2021. https://doi.org/10.1002/rnc.5686.
[31] M. Yin, A. Iannelli, and R. S. Smith, "Maximum likelihood estimation in data-driven modeling and control," IEEE Trans. Autom. Control, vol. 68, no. 1, pp. 317–328, 2023. https://doi.org/10.1109/TAC.2021.3137788.
[32] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert, "Constrained model predictive control: stability and optimality," Automatica, vol. 36, no. 6, pp. 789–814, 2000. https://doi.org/10.1016/S0005-1098(99)00214-9.
[33] J. Löfberg, "Oops! I cannot do it again: testing for recursive feasibility in MPC," Automatica, vol. 48, no. 3, pp. 550–555, 2012. https://doi.org/10.1016/j.automatica.2011.12.003.
[34] M. Lazar and P. C. N. Verheijen, "Generalized data-driven predictive control: merging subspace and Hankel predictors," Mathematics, vol. 11, no. 9, p. 2216, 2023. https://doi.org/10.3390/math11092216.
© 2025 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.