Reduced Order Models for Eigenvalue Problems

Joost Rommes
Mathematical Institute, Utrecht University, P.O. Box 80.010, 3508 TA Utrecht, The Netherlands
http://www.math.uu.nl/people/rommes, [email protected]

Summary. Two main approaches are known for the reduced order modeling of linear time-invariant systems: Krylov subspace based and SVD based approximation methods. Krylov subspace based methods have large scale applicability, but do not have a global error bound. SVD based methods do have a global error bound, but require full space matrix computations and hence have limited large scale applicability. In this paper, features and shortcomings of both types of methods will be addressed. Furthermore, ideas for improvements will be discussed and the possible application of Jacobi-Davidson style methods such as JDQR and JDQZ to model reduction will be explored.

Key words: reduced order models, eigenvalue problems

1 Introduction

Dynamical systems and control systems arise from, for instance, partial differential equations and electrical circuits. Simulation and controller design for large scale systems can become extremely expensive in storage requirements and computations. A way to reduce these costs is to use reduced order models (ROMs), which preserve key characteristics of the original system but are far smaller in dimension. This paper summarizes the two main approaches for reduced order modeling, Krylov subspace based and SVD based approximation methods, together with their features and shortcomings. Furthermore, ideas for improvements will be discussed and the possible application of Jacobi-Davidson style methods such as JDQR and JDQZ to model reduction will be explored. In Section 2, the reduced order modeling problem is stated. In Section 3, existing ROM methods are described. Section 4 explores the application of Jacobi-Davidson style methods to ROM problems and concludes.

2 Reduced Order Modeling Problem

In this paper linear time-invariant (LTI) systems are considered:

    C dx(t)/dt = G x(t) + B u(t)
    y(t) = L x(t) + M u(t)                                            (1)

where x(t) ∈ R^N is the state vector, u(t) ∈ R^m is the input function and y(t) ∈ R^p the output. The matrices C, G ∈ R^{N×N} are system matrices. The matrices B ∈ R^{N×m}, L ∈ R^{p×N} and M ∈ R^{p×m} are distribution matrices. The number of state variables (the order of the system) is denoted by N. The numbers of input and output variables are denoted by m and p, respectively. The problem is to find an approximating system, the reduced order model:

    C̃ dx̃(t)/dt = G̃ x̃(t) + B̃ u(t)
    ỹ(t) = L̃ x̃(t) + M̃ u(t)                                           (2)

with x̃(t) ∈ R^n, u(t) ∈ R^m, ỹ(t) ∈ R^p, C̃, G̃ ∈ R^{n×n}, B̃ ∈ R^{n×m}, L̃ ∈ R^{p×n} and M̃ ∈ R^{p×m}. Note that the number of inputs and outputs is the same as for the original system, and that the input itself is not changed. The ROM should satisfy the following requirements (see also [1]):

• The order of the system is reduced strongly: n ≪ N.
• The approximation error ||y − ỹ|| in an appropriate norm must be small.
• Important properties such as passivity and stability are preserved.
• The procedure for computing the ROM must be computationally efficient and numerically stable, and ideally has an automatic convergence test.

Physical realizability of the ROM may be an additional requirement.
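As a concrete (hypothetical) illustration of the relation between (1) and (2): a ROM of the form (2) is commonly obtained by projecting the system matrices onto an n-dimensional subspace with orthonormal basis V. The sketch below uses arbitrary random data and an arbitrary basis V purely to show the shapes involved; the methods of the next section differ only in how the subspace spanned by V is chosen.

```python
import numpy as np

# Hypothetical toy data: a SISO system (1) of order N = 100,
# reduced to order n = 10 by projection.
rng = np.random.default_rng(0)
N, n = 100, 10

C = np.eye(N)
G = -5.0 * np.eye(N) + rng.standard_normal((N, N))   # system matrices
B = rng.standard_normal((N, 1))                      # single input (m = 1)
L = rng.standard_normal((1, N))                      # single output (p = 1)

# Arbitrary orthonormal basis of a 10-dimensional subspace, V^T V = I.
V, _ = np.linalg.qr(rng.standard_normal((N, n)))

# Reduced matrices of (2), obtained by projection.
C_r, G_r, B_r, L_r = V.T @ C @ V, V.T @ G @ V, V.T @ B, L @ V
assert C_r.shape == (n, n) and B_r.shape == (n, 1) and L_r.shape == (1, n)
```

The number of inputs and outputs (here m = p = 1) is unchanged by the projection, as required above.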

3 Reduced Order Modeling Methods

Krylov subspace based and SVD based approximation methods are briefly described in this section. In this section and the following, SISO (m = p = 1) systems are considered. For an extended overview, see [1].

Krylov subspace based methods

The transfer function of the linear dynamical system (1) is defined as the Laplace transform of the impulse response (for simplicity M = 0):

    H(s) = L(sC − G)^{-1} B = L(I − (s − s₀)A)^{-1} R,

with A = −(s₀C − G)^{-1} C and R = (s₀C − G)^{-1} B. Note that it is assumed that sC − G is a regular pencil and that s₀ is chosen such that s₀C − G is nonsingular. A series expansion of H around s₀ is

    H(s) = Σ_{i=0}^{∞} m_i (s − s₀)^i,    m_i = L A^i R.
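To make the moment definition concrete: with A = −(s₀C − G)^{-1}C and R = (s₀C − G)^{-1}B, each moment m_i = L A^i R can be evaluated using a single LU factorization of s₀C − G, reused for all solves. The helper below is my own illustrative sketch (not from the paper); forming moments explicitly like this is exactly the numerically unstable scheme discussed next, so it is only meant to clarify the definitions.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def moments(C, G, B, L, s0, k):
    """First k moments m_i = L A^i R of H(s) = L (sC - G)^{-1} B around s0,
    with A = -(s0*C - G)^{-1} C and R = (s0*C - G)^{-1} B. One LU
    factorization of s0*C - G is reused for every solve."""
    lu = lu_factor(s0 * C - G)
    v = lu_solve(lu, B)               # v = R
    m = []
    for _ in range(k):
        m.append((L @ v).item())      # m_i = L A^i R
        v = -lu_solve(lu, C @ v)      # v <- A v
    return m

# Example: H(s) = 1/(s+1) + 1/(s+2), whose Taylor coefficients at
# s0 = 0 are 1.5, -1.25, 1.125.
C, G = np.eye(2), np.diag([-1.0, -2.0])
B, L = np.ones((2, 1)), np.ones((1, 2))
print(moments(C, G, B, L, s0=0.0, k=3))   # → [1.5, -1.25, 1.125]
```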

Krylov subspace based methods construct a ROM by matching moments m_i:

    H̃(s) = Σ_{i=0}^{∞} m̃_i (s − s₀)^i,    m̃_i = L̃ Ã^i R̃,

with m_i = m̃_i, i = 0, …, n, for appropriate n ≪ N. Explicit computation of the moments m_i = L A^i R is numerically unstable: within a few iterations the vector A^i R will converge to the eigenvector corresponding to the dominant eigenvalue. Krylov subspace methods deal with this difficulty by constructing an orthonormal basis for the Krylov subspace

    K_n(A, R) = span{R, AR, …, A^{n−1}R}.

Let the columns of V_n ∈ R^{N×n} form an orthonormal basis for K_n(A, R), i.e. V_nᵀ V_n = I and span(V_n) = K_n(A, R). The ROM is now constructed by the state transformation x → V_n x̃:

    (V_nᵀ C V_n) dx̃(t)/dt = (V_nᵀ G V_n) x̃(t) + (V_nᵀ B) u(t)
    ỹ(t) = (L V_n) x̃(t)

Two well-known methods based on Krylov subspaces are Padé via Lanczos (PVL) [3] and PRIMA [7]. PVL exploits the connection between the bi-Lanczos process and Padé approximation. After n iterations of the process, 2n moments are matched, while no storage of iteration vectors is needed. However, the process may suffer from breakdowns and does not guarantee preservation of stability and passivity (see [2] for some remedies). PRIMA constructs an orthonormal basis for the Krylov subspace K_n(A, R) using Arnoldi iterations. After n iterations of the PRIMA process, n moments are matched. Storage and orthogonalization of the iteration vectors are needed, but the procedure is numerically stable and preserves stability and passivity. Note that each iteration of PVL requires two matrix-vector products (one with A and one with Aᵀ), against one for PRIMA. Furthermore, PVL does not provide the reduced system matrices.

SVD based methods

Let C = I, M = 0 and G be stable in (1). The singular values of the Hankel operator

    H : u(t) ↦ ∫_{−∞}^{0} L e^{G(t−τ)} B u(τ) dτ,    t ≥ 0,

are equal to the square roots of the eigenvalues of a product of two symmetric positive-definite matrices P, Q ∈ R^{N×N} (the gramians) [5]. Hence, the Hankel singular values σ_i^H = λ_i^{1/2}(PQ), i = 1, …, N, form a discrete set. These Hankel


singular values play a role for dynamical systems similar to the role singular values play for matrices. A reduced order model is constructed by truncating the original system in such a way that the n largest Hankel singular values are preserved. In order to do this, first the matrices P and Q must be solved from two Lyapunov equations:

    G P + P Gᵀ + B Bᵀ = 0,    Gᵀ Q + Q G + Lᵀ L = 0.                  (3)

Then, in order to truncate, a balancing transformation that diagonalizes both P and Q must be computed. This state transformation is computed using the standard SVD. SVD based methods have the disadvantage that dense matrix computations are required to solve the two Lyapunov equations, so that large scale application is not attractive. However, stability and passivity are preserved by the SVD based methods and, moreover, there is a global error bound [5]:

    ||H(s) − H_n(s)||_{L∞} = sup_ω σ_max(H(iω) − H_n(iω)) ≤ 2 Σ_{i=n+1}^{N} σ_i^H.

For more details about the balancing transformation and the Hankel operator, the reader is referred to [5].
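The steps above, solving the Lyapunov equations (3), balancing, and truncating, can be sketched with the standard square-root variant of balanced truncation and SciPy's Lyapunov solver. This is one common realization, not necessarily the construction of [5], and the function name is mine; the returned singular values allow the global error bound to be checked.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(G, B, L, n):
    """Square-root balanced truncation sketch for dx/dt = G x + B u,
    y = L x (C = I, M = 0, G stable). Returns reduced matrices and all
    Hankel singular values sigma, so ||H - H_n|| <= 2*sum(sigma[n:])
    can be evaluated."""
    P = solve_continuous_lyapunov(G, -B @ B.T)     # G P + P G^T + B B^T = 0
    Q = solve_continuous_lyapunov(G.T, -L.T @ L)   # G^T Q + Q G + L^T L = 0
    Sc = cholesky(P, lower=True)                   # P = Sc Sc^T
    So = cholesky(Q, lower=True)                   # Q = So So^T
    Z, sigma, Yt = svd(So.T @ Sc)                  # sigma: Hankel singular values
    D = np.diag(sigma[:n] ** -0.5)
    W = So @ Z[:, :n] @ D                          # left projection (W^T V = I)
    V = Sc @ Yt.T[:, :n] @ D                       # right projection
    return W.T @ G @ V, W.T @ B, L @ V, sigma

# Example: reduce a stable diagonal system from order 3 to order 2 and
# check the error at s = 0 against the bound 2 * sigma_3.
G = np.diag([-1.0, -2.0, -50.0])
B, L = np.ones((3, 1)), np.ones((1, 3))
Gr, Br, Lr, sigma = balanced_truncation(G, B, L, 2)
err0 = abs((L @ np.linalg.solve(-G, B)).item()
           - (Lr @ np.linalg.solve(-Gr, Br)).item())
assert err0 <= 2 * sigma[2] + 1e-10
```

Note that the dense Lyapunov solves and SVD are exactly the O(N³) computations that limit large scale applicability.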

4 New Research Directions

A method that combines the large scale applicability and stability of Krylov subspace methods with the global error bound and stability/passivity preservation of the SVD based methods may be fruitful. A first attempt at such a method, approximate balanced reduction, has been made in [9], but has not yet yielded a robust and efficient method.

One can also consider Jacobi-Davidson methods [8]. The Jacobi-Davidson method is an efficient method for computing eigenvalue and eigenvector approximations near a specific target, provided a good preconditioner is available for the correction equation. This correction equation,

    (I − u_i u_i*)(A − θ_i I)(I − u_i u_i*) t = −(A u_i − θ_i u_i),   (4)

has to be solved to modest accuracy in every iteration of the Jacobi-Davidson process. Note that the Ritz pair (θ_i, u_i) changes every iteration. The search space of the Jacobi-Davidson process is extended orthogonally with t. Now suppose that the transfer function is of the form H(s) = L(sI − A)^{-1}R. If the transfer function is computed exactly, (sI − A) has to be inverted for a range of values of s. For large systems this is not feasible; therefore reduced order models are needed. With s replaced by θ_i, the operator (sI − A) is equal to −(A − θ_i I). This suggests that the search space built during the Jacobi-Davidson process may contain useful information. The idea is to start


Jacobi-Davidson processes for several targets and to combine the relevant parts of the search spaces into a new space. This new space can be used to construct a reduced order model, similar to the Krylov subspace based methods. Because the transfer function is evaluated for values s = iω, the dominant eigenvalues are the eigenvalues closest to the imaginary axis. The actual contribution of an eigenvalue to the transfer function also depends on the component of R in the direction of the corresponding eigenvector. This information has to be taken into account in the Jacobi-Davidson process. An advantage of the Jacobi-Davidson approach is that matrix inversions are avoided: PRIMA and PVL need an LU decomposition of a large sparse matrix, while the JDQZ method [4] for generalized eigenproblems works with both operators directly. The JDQZ method computes a partial generalized Schur form for generalized eigenproblems Gx = λCx. Similar to the original Jacobi-Davidson method, the JDQZ method may be used for transfer functions of the form H(s) = L(sC − G)^{-1}R, where C is possibly singular. Because of the singularity of C, there may be eigenvalues at infinity, which are not of interest. Harmonic Petrov values can be used to avoid these eigenvalues. Another possibility is the use of purification techniques [6]. Another open issue is the construction of good preconditioners, which are of vital importance to the convergence of the Jacobi-Davidson process.
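The basic loop around the correction equation (4) can be sketched as follows. This is a minimal dense sketch of my own for symmetric A: the correction equation is solved here by a dense least-squares solve purely for illustration, where a practical implementation would use a preconditioned iterative solver and solve only to modest accuracy; the search space V is the quantity that the model reduction idea above would reuse.

```python
import numpy as np

def jacobi_davidson(A, target, tol=1e-10, maxit=50):
    """Minimal Jacobi-Davidson sketch for symmetric A: each step selects
    the Ritz pair nearest the target, solves the correction equation (4),
    and expands the search space V orthogonally with t."""
    n = A.shape[0]
    V = np.random.default_rng(1).standard_normal((n, 1))
    V /= np.linalg.norm(V)
    for _ in range(maxit):
        w, S = np.linalg.eigh(V.T @ A @ V)    # projected eigenproblem
        j = np.argmin(np.abs(w - target))     # Ritz value nearest the target
        theta, u = w[j], V @ S[:, j]
        r = A @ u - theta * u                 # residual of the Ritz pair
        if np.linalg.norm(r) < tol:
            break
        # correction equation: (I - u u^T)(A - theta I)(I - u u^T) t = -r
        P = np.eye(n) - np.outer(u, u)
        t = np.linalg.lstsq(P @ (A - theta * np.eye(n)) @ P, -r, rcond=None)[0]
        for _ in range(2):                    # orthogonalize t against V
            t -= V @ (V.T @ t)
        if np.linalg.norm(t) < 1e-14:
            break
        V = np.hstack([V, (t / np.linalg.norm(t)).reshape(-1, 1)])
    return theta, u

# Example: compute an eigenpair of a diagonal matrix near the target 3.3.
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
theta, u = jacobi_davidson(A, target=3.3)
```

For the generalized pencil (G, C) with singular C, as handled by JDQZ, the projectors and the targeting of Ritz (Petrov) values change, but the structure of the iteration is the same.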

References

1. A.C. Antoulas and D.C. Sorensen. Approximation of large-scale dynamical systems: an overview. Int. J. Appl. Math. Comput. Sci., 11(5):1093–1121, 2001.
2. Z. Bai and R.W. Freund. A partial Padé-via-Lanczos method for reduced-order modeling. Num. Anal. Manu. 99-3-20, Bell Laboratories, December 2000.
3. P. Feldmann and R.W. Freund. Efficient linear circuit analysis by Padé approximation via the Lanczos process. IEEE Trans. CAD, 14:639–649, 1995.
4. D.R. Fokkema, G.L.G. Sleijpen, and H.A. van der Vorst. Jacobi-Davidson style QR and QZ algorithms for the reduction of matrix pencils. SIAM J. Sci. Comput., 20(1):94–125, 1998.
5. K. Glover. All optimal Hankel-norm approximations of linear multivariable systems and their L∞-error bounds. Int. J. Control, 39:1115–1193, 1984.
6. K. Meerbergen and A. Spence. Implicitly restarted Arnoldi with purification for the shift-invert transformation. Math. Comp., 66(218):667–689, 1997.
7. A. Odabasioglu and M. Celik. PRIMA: Passive Reduced-order Interconnect Macromodeling Algorithm. IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems, 17(8):645–654, 1998.
8. G.L.G. Sleijpen and H.A. van der Vorst. A Jacobi-Davidson iteration method for linear eigenvalue problems. SIAM J. Matrix Anal. Appl., 17(2):401–425, 1996.
9. D.C. Sorensen and A.C. Antoulas. The Sylvester equation and approximate balanced reduction. Lin. Alg. Appl., 351–352:671–700, 2002.
