Applied Numerical Analysis Seminar

Meeting Information

We typically meet on Mondays from 12:30-1:30pm in McBryde 455 (Commons Room).

Fall 2024

Sep 16
Reetish Padhi
Virginia Tech

Quadrature-based balanced truncation (Quad-BT) is a recently developed reduced-order modelling technique for linear (and bilinear) time-invariant systems. Quad-BT is non-intrusive, i.e., it does not require access to the original system's realization (matrices) or state-space observations. It computes reduced models purely from data-based quantities, i.e., samples of the time-domain kernels and their derivatives. In this talk, I will present the extension of Quad-BT to quadratic bilinear systems.







Petar Mlinarić
Virginia Tech

Cultivating meat from animal cells is a relatively recent technology that promises to reduce the impact of conventional meat production. In this talk, I will present challenges in bringing cultivated meat production to the commercial scale and suggest how numerical methods could address some of them. In particular, we will discuss optimizing the culture media composition, bioreactor design, and optimal control for cell proliferation.







Sep 23
Mirjeta Pasha
Virginia Tech

Large-scale dynamic inverse problems are typically ill-posed and suffer from the complexity of the model constraints and the large dimensionality of the parameters. A common approach to overcome ill-posedness is regularization, which aims to add constraints on the desired parameters in both the spatial and temporal dimensions. In this work, we propose an efficient method that incorporates a model for the temporal dimension by estimating the motion of the objects alongside solving the regularized problems. In particular, we consider the optical flow model as part of the regularization, simultaneously estimating the motion and providing an approximation of the desired image. To overcome the high computational cost of massive-scale problems, we combine our approach with a generalized Krylov subspace method that efficiently solves the problem on relatively small subspaces. The effectiveness of the proposed approach is illustrated through numerical experiments arising in dynamic computerized tomography and image deblurring applications.







Sep 30
Rudi Smith
Virginia Tech

An important problem in numerical linear algebra is to calculate the trace of an unknown matrix with access to a matrix-vector multiplication oracle, in both randomized and deterministic settings. In this talk, we will review some important stochastic trace estimators, discuss improvements based on low-rank and statistical methods, and survey some deterministic methods from compressive sensing and banded matrix recovery.
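
For context, the classical Hutchinson estimator is the prototype of the stochastic trace estimators such a survey would cover. Below is a minimal, hypothetical Python sketch (names and defaults are illustrative, not from the talk) that estimates \(\mathrm{tr}(A)\) using only the matrix-vector oracle:

```python
import numpy as np

def hutchinson_trace(matvec, n, num_samples=100, rng=None):
    """Estimate tr(A) using only matrix-vector products with A.

    matvec: callable implementing v -> A @ v (the oracle)
    n: dimension of A
    """
    rng = np.random.default_rng(rng)
    est = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        est += z @ matvec(z)                 # unbiased sample of tr(A)
    return est / num_samples
```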







Steffen Werner
Virginia Tech

Stabilizing dynamical systems in science and engineering poses a significant challenge, especially in scenarios where only limited data are available. The quality and quantity of information within the data related to the task at hand are crucial factors to consider. This is particularly true when dealing with unstable dynamics, as the presence of instabilities typically leads to redundant or destructive system behavior, making it difficult to collect large amounts of informative data. In our new approach, we address the issue of generating small yet informative data sets for the stabilization of dynamical systems. The key lies in the adaptive construction of suitable input signals for the data generation via intermediate low-dimensional controllers that stabilize the system dynamics over limited subspaces. Numerical experiments with chemical reactors and fluid dynamics behind obstacles demonstrate that the proposed approach stabilizes systems reliably after observing about ten data samples, even though the state dimension is orders of magnitude higher.







Oct 7
Graduate Students
Virginia Tech

Yichen Guo (Argonne National Laboratory)
Amit Rotem (Lawrence Livermore National Laboratory)
Linus Balicki (ASML, San Diego)
Hayden Ringer







Oct 14
Aimee Maurais
MIT

In modern-day data science and machine learning, the prevailing paradigm for describing information and data is that of probability distributions. For instance, in Bayesian inference we use a prior probability distribution to encode our knowledge about an object (e.g., the weather) and transform that prior using data (e.g., noisy weather measurements) to obtain a posterior distribution reflecting our updated knowledge. In generative modeling, we view a collection of data — for example, images, audio, or molecular configurations — as samples drawn from an unknown underlying probability distribution. In both cases, sampling from the probability distribution at hand is a fundamental task, relied upon for uncertainty quantification in the Bayesian setting and for the creation of new data in the generative setting. In this talk we will discuss the dynamic transport approach to sampling, in which samples from a tractable reference distribution are transformed into samples from a desired target distribution by applying appropriate dynamics, such as an SDE or ODE. We will explore the flexibility of the dynamic transport framework and focus in particular on one dynamic transport algorithm, Kernel Fisher-Rao Flow (KFRFlow). This algorithm, realized as an interacting particle system, corresponds to a mean-field ODE that transports samples along the geometric mixture of the two densities and has connections to Fisher-Rao gradient flows and optimal transport. On a practical level, KFRFlow only requires access to an unnormalized target-to-reference density ratio, is gradient-free, and empirically outperforms comparable gradient-free dynamic transport algorithms. We will conclude with an outlook on the challenges faced by KFRFlow and similar algorithms and discuss design and implementation choices that show promise in addressing them.
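
For reference, the geometric mixture mentioned above is commonly written as follows (standard notation; the talk's precise conventions may differ). The path interpolates the reference density \(\pi_0\) at \(t = 0\) and the target \(\pi_1\) at \(t = 1\), and KFRFlow-type methods transport samples along it:

\[
\pi_t(x) \;\propto\; \pi_0(x)^{\,1-t}\, \pi_1(x)^{\,t}, \qquad t \in [0, 1].
\]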







Oct 21
Johann Rudi
Virginia Tech

We target large-scale computational methods and parallel algorithms centered around a challenging application: global-scale and high-resolution mantle convection. We estimate viscosity parameters of models for mantle convection and plate tectonics from surface observations, which results in an optimization problem (i.e., the outer loop). The forward problem is governed by highly nonlinear, heterogeneous, and incompressible Stokes equations (i.e., the inner loop). Solving the forward problem is a major challenge at extreme computing scales. An outer loop for inference and uncertainty quantification adds another dimension of solver challenges. The computational methods for the forward problem rely heavily on advanced numerical methods, parallel algorithms, and high-performance implementations tailored to the nonlinear and highly variable viscosity. The methods we developed include adaptive mesh refinement, inexact Newton-Krylov, block preconditioners for saddle-point linear systems, and multigrid. We achieve scalable solutions of the outer-loop optimization by deriving adjoint problems to efficiently compute derivatives, and we employ these in a BFGS Newton's method. We further quantify uncertainties of parameters by approximating the Bayesian posterior at the maximum a posteriori point using a Gauss-Newton Hessian approximation. This approach allows, for the first time, the inference of viscosity parameters of global Earth mantle models with 1-km resolution (reference: https://doi.org/10.1073/pnas.2318706121).







Chris Beattie
Virginia Tech

As observation systems acquire greater capacity and become more sophisticated, regression problems making use of these data must account for the possibility of significant coupling and correlations among observations. This frequently leads to ill-conditioned weighted regression problems for which standard orthogonalization approaches become problematic and may fail. I will describe an application setting from next-generation wide-swath satellite altimetry where such problems are central, and then discuss more broadly extensions of an approach introduced by Paige that allow computationally effective resolution of these problems, even in extreme cases where the observation covariance (and possibly its inverse) is not only ill-conditioned but possibly singular (!).







Oct 28
Max Heldman
Virginia Tech

We present a dual-mesh-based quadrature rule for diagonalizing reaction operators arising from finite element discretizations of reaction-diffusion equations with discontinuous reaction terms. The method can be viewed as a generalization of mass lumping; in particular, it preserves quasi-optimal convergence rates for the discrete solution.







Jorge Reyes
Virginia Tech

For convection-dominated flows, standard Reduced Order Models (ROMs) typically require numerical stabilization or closure models to avoid spurious oscillations and to account for the effects of discarded modes. This talk focuses on a specific type of ROM closure and stabilization inspired by Large Eddy Simulations (LES). A central component of these ROMs, known as LES-ROMs, is spatial filtering, which follows the same principle used in classical full-order LES models. This approach ensures consistency between the LES-ROMs and the methods that generated the data used for training. We prove key results, such as stability and convergence, supported by several numerical simulations that demonstrate agreement with theoretical predictions. Finally, we perform parameter scaling for the tunable stabilization parameter, aiming either to calculate an acceptable value a priori or to develop more efficient algorithms for this purpose.







Oct 31 (9am, NEC 2420)
Samuli Siltanen
U Helsinki

Electrical Impedance Tomography (EIT) is a nonlinear PDE-based imaging modality in which a patient is probed with harmless electric currents and the resulting surface voltages are measured. EIT image reconstruction is an ill-posed inverse problem, meaning it is very sensitive to noise in the data and to modelling errors. However, one can use complex geometric optics (CGO) solutions and a nonlinear Fourier transform to do robust medical imaging; this is the so-called regularized D-bar method. A connection between EIT and X-ray tomography was found in [Greenleaf et al. 2018] using microlocal analysis. The Fourier transform applied to the spectral parameter of CGO solutions produces virtual X-ray projections, enabling a novel filtered back-projection-type nonlinear reconstruction algorithm for EIT. It is remarkable how this new approach decomposes the EIT image reconstruction process into several steps, where all ill-posedness is confined to two linear steps. Therefore, we can separate the nonlinearity and the ill-posedness of the fundamental EIT problem. Furthermore, the new decomposition enables targeted machine learning approaches, as only one or two (mathematically well-structured) steps in the imaging chain are solved using neural networks.







Nov 4
Till Peters
TU Braunschweig

Increasingly large and complex dynamical systems play a major role in mathematical modeling today. In mechanics, electrical circuits, or flow dynamics, we can often represent these passive systems as port-Hamiltonian (pH) systems. The pH formulation is a very general structure based on a Hamiltonian function associated with the total stored energy. Furthermore, the formulation allows us to include dissipation, interconnection, and control terms in the possibly implicit differential equations of the system. For high-dimensional pH systems, the task of structure-preserving model order reduction arises. In the linear case, there are many methods for this. A first extension to the nonlinear case is given by so-called bilinear pH systems. These systems and their model order reduction will be discussed in this talk.
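
For context, one common way to write a linear pH system (standard notation, not necessarily the formulation used in the talk) is

\[
\dot{x}(t) = (J - R)\, Q x(t) + B u(t), \qquad y(t) = B^{\mathsf{T}} Q x(t),
\]

with skew-symmetric interconnection \(J = -J^{\mathsf{T}}\), dissipation \(R = R^{\mathsf{T}} \succeq 0\), and quadratic Hamiltonian \(\mathcal{H}(x) = \frac{1}{2} x^{\mathsf{T}} Q x\), \(Q = Q^{\mathsf{T}} \succ 0\); passivity follows from the energy balance \(\frac{d}{dt}\mathcal{H}(x(t)) \le u(t)^{\mathsf{T}} y(t)\).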







Eric de Sturler
Virginia Tech

The high-quality reconstruction of images with sharp edges requires inversion with edge-preserving regularization operators. A common approach is to use a general \(\ell_q\)-norm on the gradient of the image. Such an approach is often implemented using an iteration with weighted \(\ell_2\)-norms of the gradient terms, with the weights determined by the current solution estimate. While hybrid Krylov methods, which can pick regularization parameters dynamically, are suitable for this problem, they would need to build a new Krylov space for every update of the regularization term. This problem can be addressed with so-called generalized Krylov subspace methods, which can be combined with a majorization-minimization approach to enforce convergence to the original minimization functional, based on a (smoothed) \(\ell_q\)-norm regularization. However, for large-scale and/or streaming applications this leads to several complications. If the solution requires many iterations, data comes in continually, or the data set is very large, the approach leads to overwhelming memory requirements and computational costs. We can mitigate this problem by periodically compressing the search space to a lower-dimensional subspace. Unfortunately, minimizing only over low-dimensional subspaces renders previous convergence proofs for these methods inapplicable. However, this can be fixed by a slight modification of the algorithm. We will demonstrate the resulting algorithm on a few model problems. This is joint work with Mirjeta Pasha and Misha Kilmer.
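
For context, the weighted-\(\ell_2\) iteration mentioned above is usually derived from a majorization of the smoothed \(\ell_q\) penalty; a standard form (my notation, assuming \(0 < q \le 2\) and smoothing parameter \(\epsilon > 0\)) is

\[
\left( t^2 + \epsilon^2 \right)^{q/2} \;\le\; \frac{q}{2} \left( t_k^2 + \epsilon^2 \right)^{q/2 - 1} t^2 + c_k,
\]

so that at iteration \(k\) each gradient entry \(t = (Lx)_i\) is penalized by a weighted \(\ell_2\) term whose weight \(w_i = \frac{q}{2}\big((Lx^{(k)})_i^2 + \epsilon^2\big)^{q/2-1}\) is computed from the current estimate \(x^{(k)}\).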







Nov 5 (9:30am, DDS 366)
Hitay Ozbay
Bilkent U

The solution of the mixed sensitivity minimization problem is reviewed for distributed parameter systems (DPS) with rational weights. The \(\mathcal{H}_\infty\)-optimal controller's internal structure is examined, and its finite-dimensional and infinite-dimensional parts are identified. Typically, one approximates the infinite-dimensional parts of the optimal controller to obtain a finite-dimensional controller. We discuss the effect of such approximations on robust stability and on the performance level. We also review the strong stabilization problem for time-delay systems: interpolation methods and a small-gain approach are presented. A new family of robust controllers is given for unstable DPS, and related open problems are discussed.







Nov 11 (DDS 366)
Jiuhua Hu
Virginia Tech

Many applied problems feature coupled spatial and temporal heterogeneities that complicate homogenization or upscaling. Non-local multi-continua methods have been proposed in various papers to address complex spatial heterogeneities. We extend this approach to a parabolic problem with time-dependent heterogeneous coefficients, developing a space-time non-local multi-continua method. This method efficiently constructs multiscale basis functions through local energy minimization within oversampled space-time regions. These functions exhibit exponential decay outside their local domains and offer computational advantages over classical methods, providing a systematic and flexible way to achieve accurate solutions while reducing computational cost.







Mike Ackermann
Virginia Tech

Data-driven reduced order modeling is a tool to construct models of physical phenomena when access to governing equations or a state-space model is not possible, but input-output data are abundant. One popular method for constructing data-driven models is the Adaptive Antoulas-Anderson (AAA) algorithm, which blends interpolation and least-squares techniques. While AAA can produce models that fit the given data to high accuracy, the model produced by AAA does not preserve underlying differential structures, which are important for the interpretation of the models. A common structure in mechanical systems is an explicit dependence on the second time derivative. Based on our previous work on structured barycentric forms, we present a structure-preserving second-order AAA algorithm that is capable of learning highly accurate second-order surrogate models from frequency response measurements.
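
For context, AAA builds rational approximants in barycentric form; in standard (unstructured) notation,

\[
r(s) \;=\; \left. \sum_{k=1}^{m} \frac{w_k h_k}{s - \lambda_k} \right/ \sum_{k=1}^{m} \frac{w_k}{s - \lambda_k},
\]

which interpolates the data \(r(\lambda_k) = h_k\) whenever \(w_k \neq 0\); AAA selects the support points \(\lambda_k\) greedily and fits the weights \(w_k\) via a linearized least-squares problem. The structured barycentric forms mentioned in the abstract modify this ansatz so that the learned model carries second-order structure.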







Nov 18
Hayden Ringer
Virginia Tech

The inverse Dirichlet Laplace eigenvalue problem, put simply, is: "Can a planar shape be determined by its Dirichlet eigenvalues?" This problem is known to be ill-posed. In this talk, we present advances in our ongoing project to develop computational and Bayesian-statistical techniques for recovering unknown shapes, based on their "spectral data." In particular, we will demonstrate the effectiveness of these approaches on small benchmark problems. We will discuss: (1) improvements we have made to our forward eigensolver, (2) a parameterization scheme which "quotients-out" the continuous symmetries of the eigenproblem, (3) an engineered loss/likelihood function for evaluating "spectral closeness", and (4) the construction of a regularization term/prior distribution which enforces non-degeneracy of the parameterized family of shapes.







Yingda Cheng
Virginia Tech

The Lindblad master equation is the fundamental model for open quantum systems. A defining feature of the Lindblad equation is that its solution map is completely positive and trace preserving (CPTP). We present a high-order accurate, low-rank numerical method that preserves the CPTP property. This is joint work with Daniel Appelo (VT).
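
For reference, the Lindblad equation in its standard (GKSL) form reads

\[
\frac{d\rho}{dt} \;=\; -\mathrm{i}\,[H, \rho] \;+\; \sum_k \left( L_k \rho L_k^{\dagger} - \tfrac{1}{2} \left\{ L_k^{\dagger} L_k,\, \rho \right\} \right),
\]

where \(\rho\) is the density matrix, \(H\) the Hamiltonian, and \(L_k\) the jump operators; the CPTP property means the evolution maps density matrices to density matrices.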







Dec 2
Andrew Christlieb
MSU

In this talk we review the challenges and multiscale modeling associated with inertial confinement fusion energy. I will give a brief overview of the Center for Hierarchical and Robust Modeling of Non-Equilibrium Transport (CHaRMNET), a DoE MMICC center founded on this class of problems. The fundamental issue is that the system covers 8 orders of magnitude in space and 12 orders of magnitude in time. Further complicating things, it is a driven system in a non-equilibrium state for the majority of its evolution. As such, simplifying assumptions that would enable reduced models, such as rad-hydro or Navier-Stokes equations coupled with simplified nonlinear transport models, do not capture known effects coming from kinetic phenomena. After this overview, I will talk about blended computing, a process of developing structure-preserving, self-consistent machine learning surrogates to pair with traditional numerical algorithms as a way of capturing scales. I will review work in the field. Following this, I will discuss a simplified problem: capturing kinetic phenomena in fluid models of the BGK equations in 1D. I will present the models that motivate our “fluid model” closure, which leads to our work on developing structure-preserving neural networks, touch on training, and then talk about results and next steps. I will end by touching on other projects my group is involved with.







Dec 5 (10:00am, MCB 455)
Youngsoo Choi
Lawrence Livermore National Lab

Many computational models for physical systems have been developed to expedite scientific discovery, which would otherwise be impeded by the lengthy nature of traditional, non-computational experimentation (e.g., observation, problem identification, hypothesis formulation, experimentation, data collection, analysis, conclusion, and theory development). However, as these physical systems grow more complex, the computational models themselves can become prohibitively time-consuming. To address this challenge, we introduce a framework called Latent Space Dynamics Identification (LaSDI), which transforms complex, high-dimensional computational domains into reduced, succinct coordinate systems—a sophisticated change of variables that preserves essential dynamics. LaSDI offers significant potential for extension to other innovative, data-driven algorithms. It is an interpretable, data-driven framework composed of three core steps: compression, dynamics identification, and prediction. In the compression phase, high-dimensional data is reduced to a more manageable form, facilitating the construction of an interpretable model. The dynamics identification phase then derives a model, typically expressed as parameterized differential equations, that accurately captures the behavior of the reduced latent space. Finally, in the prediction phase, these differential equations are solved within the reduced space for new parameter sets, with the resulting solutions projected back into the full space. One of the key advantages of LaSDI is its computational efficiency, as the prediction phase operates entirely in the reduced space, bypassing the need for the full-order model. The LaSDI framework supports various identification methods, including fixed forms like dynamic mode decomposition and thermodynamics-based LaSDI, regression methods such as sparse identification of nonlinear dynamics (SINDy) and weak SINDy, as well as physics-driven approaches like projection-based reduced order models. The LaSDI family has demonstrated substantial success in accelerating various physics problems, achieving up to 1000x speed-ups in fields such as kinetic plasma simulations, pore collapse phenomena, and computational fluid dynamics.
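
To make the three LaSDI steps concrete, here is a minimal, hypothetical Python sketch (all names are illustrative; it assumes a linear latent model identified by least squares, whereas LaSDI supports richer parameterized forms such as SINDy libraries). Note that the prediction step never touches the full-order model, which is the source of the reported speed-ups.

```python
import numpy as np

def lasdi_sketch(X, r, dt):
    """X: snapshot matrix with columns x(t_0), ..., x(t_M) at spacing dt."""
    # 1. Compression: project snapshots onto r leading POD modes.
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    V = U[:, :r]                      # reduced basis
    Z = V.T @ X                       # latent trajectories, shape (r, M+1)

    # 2. Dynamics identification: fit dz/dt ~ A z by least squares,
    #    using finite differences for the latent time derivatives.
    dZ = (Z[:, 1:] - Z[:, :-1]) / dt
    Zmid = 0.5 * (Z[:, 1:] + Z[:, :-1])
    A = dZ @ np.linalg.pinv(Zmid)

    # 3. Prediction: integrate the latent ODE, then lift back.
    def predict(z0, steps):
        zs = [z0]
        for _ in range(steps):
            zs.append(zs[-1] + dt * (A @ zs[-1]))  # forward Euler
        return V @ np.array(zs).T     # full-space reconstruction
    return V, A, predict
```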







Dec 9
Jenifer De Jager
Virginia Tech

Koopman-theoretic methods, which convert an arbitrary nonlinear system on a finite-dimensional state space into a linear system on an infinite-dimensional observable space, are showing empirical promise in data-driven dynamical model order reduction, although the theory is not as well understood as for system-theoretic methods. In this talk, we will discuss the boundedness of the Koopman operator and go over some of its implications.







Ian Moore
Virginia Tech

In the field of reduced order modeling, filtering methods have received significant recent attention as a means of regularizing poorly performing models. However, filtering by itself can be susceptible to over-smoothing the generated solutions. We propose new reduced order models that integrate methods of approximate deconvolution with the goal of balancing accuracy and smoothing properties. The models are tested with particular application to fluid flows simulated by the Navier-Stokes equations.







Dec 10 (2:00pm, DDS 366)
Jonathan Lindbloom
Dartmouth College

Reconstructing high-quality signals and images with sharp edges is crucial in many applications. Hybrid projection methods offer a competitive class of algorithms for this task. This talk focuses on such methods based on generalized Krylov subspaces (GKS), known for their efficiency in solving large-scale linear inverse problems with 2-norm ("smoothing") regularization and requiring only a low-dimensional subspace for the solution. However, adapting GKS methods to handle sparsity-promoting regularization penalties incurs high computational costs due to the high-dimensional subspaces needed to capture sparse solutions. In this talk, we present a new preconditioned GKS method that addresses this issue and apply the method to both deterministic and Bayesian inverse problems. We also make some comparisons with other hybrid projection methods. This is joint work with Mirjeta Pasha (Virginia Tech), Jan Glaubitz (Linköping University), and Youssef Marzouk (MIT).







Spring 2024

Feb 02
Tom Werner
TU Braunschweig

The eigenvector-dependent nonlinear eigenvalue problem (NEPv), also known as the nonlinear eigenvector problem, is a special type of eigenvalue problem where we seek \(k\) eigenpairs of a Hermitian matrix function \(H : \mathbb{C}^{n,k} \to \mathbb{C}^{n,n}\) that depends nonlinearly on the eigenvectors themselves. That is, we have to find \(V \in \mathbb{C}^{n,k}\) with orthonormal columns and \(\Lambda \in \mathbb{C}^{k,k}\) such that \(H(V) V = V \Lambda\). NEPv arise in a variety of applications, most notably in Kohn-Sham density functional theory and data science applications such as robust linear discriminant analysis. This talk is concerned with solving NEPv by viewing it as a set of nonlinear matrix equations and using an inexact Newton method on the matrix level. In this setting, Newton's method is applied using the Fréchet derivative, exploiting the structure of the problem via a global GMRES approach to solve the Newton-update equation efficiently.







Mark Embree
Virginia Tech

Recalling contributions of Nick Higham, including his most cited paper.







Feb 09
Shixu Meng
Virginia Tech

Inverse problems play an important role in various applications associated with target identification, non-destructive testing, and parameter estimation. Of particular concern is the inverse scattering problem for inhomogeneous media, where the goal is to estimate the unknown inhomogeneity from measurement data. Due to its ill-posed nature, our aim is to alleviate the ill-posedness through data processing and by computing the unknown in a suitable low-dimensional space. In the Born or linearized case, this space consists of the disk prolate spheroidal wave functions, computed efficiently via a Sturm-Liouville problem. We show how to image the unknown from noisy and potentially large-scale measurement data. This technique further motivates some new understanding of sampling-type imaging methods in connection with nonlinear regression learning. In conclusion, we discuss several future research directions.







Feb 16
Agnieszka Międlar
Virginia Tech

Edge computing is a newly emerging technology in which networked, autonomous devices work collaboratively toward a common goal through the synergy of humans and machines, in applications such as sensing platforms for radioactive source detection, fully decentralized smart grids, or unmanned autonomous vehicle (UAV) systems. It requires asynchronous, flexible, and resilient algorithms for decentralized processing, communication, and prediction in heterogeneous environments. While the benefits of developing autonomous capabilities are manifold, these systems are subject to device failures, partial data losses, and security attacks, and hence require multidimensional resilience solutions. In this talk, we will present our recently proposed resilience and robustness improvements for solving linear systems of equations with the asynchronous Jacobi (ASJ) method. In order to restore the convergence of the original ASJ method in the presence of data corruption or communication delay, we derive rejection and update criteria inspired by ADMM (Alternating Direction Method of Multipliers) and average consensus push-sum. Joint work with Zachary R. Atkins, Alyson L. Fox, Colin V. Ponce, and Christopher J. Vogl. Funded by LLNL LDRD projects 21-FS-007 and 22-ERD-045. Prepared by LLNL under Contract DE-AC52-07NA27344. LLNL-ABS-831426.
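
For readers unfamiliar with the baseline method, here is a minimal synchronous Jacobi sketch in Python (hypothetical and serial, for illustration only; the ASJ method of the talk lets each agent update its own unknowns with possibly stale neighbor values, which a serial loop does not model):

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, maxit=1000):
    """Plain Jacobi iteration for Ax = b with dense A."""
    D = np.diag(A)                    # diagonal part
    R = A - np.diag(D)                # off-diagonal remainder
    x = x0.copy()
    for _ in range(maxit):
        x_new = (b - R @ x) / D       # componentwise Jacobi update
        if np.linalg.norm(x_new - x) < tol * np.linalg.norm(x_new):
            return x_new
        x = x_new
    return x
```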







Yingda Cheng
Virginia Tech

In recent years, the Dynamic Low-Rank Approximation (DLRA) has received much attention for computing time-dependent low-rank solutions of matrix/tensor differential equations. The method is based on the orthogonal projection of the original equation onto the tangent space of the low-rank manifold. While this method enjoys many good properties, such as stability and simplicity of implementation, it suffers from the tangent-projection error (a modeling error), which cannot be eliminated by mesh refinement. In this talk, I will show a simple approach to remove this error and enhance the convergence properties. The idea is to combine DLRA with step-truncation schemes. We propose an adaptive strategy that can accelerate the computation for moderately stiff problems. This is recent joint work with Daniel Appelö.
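
For context, in standard notation (not specific to the talk), DLRA replaces the full matrix ODE \(\dot{A}(t) = F(A(t))\) with an evolution constrained to the manifold \(\mathcal{M}_r\) of rank-\(r\) matrices,

\[
\dot{Y}(t) \;=\; P_{T_{Y(t)} \mathcal{M}_r}\, F(Y(t)),
\]

where \(P_{T_Y \mathcal{M}_r}\) is the orthogonal projection onto the tangent space at \(Y\); the modeling error mentioned above is the discarded component \((I - P_{T_Y \mathcal{M}_r}) F(Y)\).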







Mar 01
Max Heldman
Virginia Tech

We discuss the extension of the Rhea code, which simulates the instantaneous coupling of mantle convection to plate tectonics, to the dynamic coupling of the slow, creeping motion of large-scale plate tectonics with the episodic slip events associated with great earthquakes. In the quasi-static regime, this involves solving a viscoelastic-plastic constitutive relation for the time evolution of the stress tensor, constrained by the incompressible Stokes equations. After time discretization, we derive a visco-plastic problem with an effective viscosity and strain rate to be solved on each time step using Rhea's scalable solvers.







Steffen Werner
Virginia Tech

For accurate modeling of real-world phenomena, high-dimensional nonlinear dynamical systems are indispensable. Many physical properties are thereby encoded in the internal differential structure of these systems. Examples of differential structures are second-order time derivatives in mechanical systems or time-delay terms. When using such models in computational settings such as numerical simulations, the high-dimensional nature of the models, represented by large numbers of differential equations describing the dynamics, becomes the main computational bottleneck. A remedy to this problem is model order reduction, which is concerned with the construction of cheap-to-evaluate surrogate models that are described by a significantly smaller number of differential equations while accurately approximating the input-to-output behavior of the original high-dimensional system. It has been shown that many nonlinear phenomena can be equivalently modeled using quadratic-bilinear systems. Therefore, many established model order reduction approaches, such as transfer function interpolation and balanced truncation, have been extended to the class of quadratic-bilinear systems in recent years. However, in most cases, these approaches cannot be applied to systems with internal differential structures. In this work, we present a structure-preserving interpolation framework for the multivariate transfer functions of quadratic-bilinear systems. This new approach allows the simulation-free construction of reduced-order models for quadratic-bilinear systems with internal structures, and it is able to preserve differential structures in the reduced-order model.







Mar 15 (DDS 366)
Ning Wan
Virginia Tech

Anderson Acceleration (AA) is a well-established method for speeding up or encouraging the convergence of fixed-point iterations. It has been successfully used in a variety of applications, in particular within the Self-Consistent Field (SCF) iteration method for quantum chemistry and physics computations. In recent years, the Conjugate Residual with OPtimal trial vectors (CROP) algorithm was introduced and shown to perform better than classical Anderson Acceleration while requiring less storage. In this talk, we will delve into the intricate connections between the classical Anderson Acceleration method and the CROP algorithm. Our objectives include a comprehensive study of their convergence properties, an explanation of the underlying relationships, and numerical examples substantiating our findings. Through this exploration, we contribute insights that can enhance the understanding and application of mixing acceleration methods in practical computations, as well as the development of new and more efficient acceleration schemes.
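
For context, here is a minimal, hypothetical Python sketch of the classical (windowed) Anderson Acceleration for a fixed-point map \(g\) (illustrative only; it does not model CROP or the truncation variants discussed in the talk):

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, maxit=100):
    """Windowed AA for the fixed-point problem x = g(x)."""
    x = x0.copy()
    X, F = [], []                          # histories of iterates and residuals
    for _ in range(maxit):
        fx = g(x) - x                      # residual of the fixed-point map
        if np.linalg.norm(fx) < tol:
            return x
        X.append(x.copy()); F.append(fx.copy())
        X, F = X[-(m + 1):], F[-(m + 1):]  # keep a window of m + 1 entries
        if len(F) > 1:
            dF = np.array([F[i + 1] - F[i] for i in range(len(F) - 1)]).T
            dX = np.array([X[i + 1] - X[i] for i in range(len(X) - 1)]).T
            gamma, *_ = np.linalg.lstsq(dF, fx, rcond=None)
            x = x + fx - (dX + dF) @ gamma  # AA mixing update
        else:
            x = x + fx                      # plain fixed-point step
    return x
```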







Paul Cazeaux
Virginia Tech

The Tensor-Train (TT) format stands out as a highly compact, low-rank representation for high-dimensional tensors, with numerous applications including the computation of many-body ground states with limited entanglement in spin models of condensed matter physics or quantum chemistry. The efficiency of this format hinges on the rounding operation, a process that trims the internal ranks of tensors to ensure feasible computational times and memory usage. In this talk, I will present a randomized algorithm specifically designed for the TT rounding task, building upon and expanding existing randomized low-rank matrix approximation methodologies. This algorithm not only preserves the TT format's integrity but also provides a computational advantage, particularly evident when rounding a sum of TT-tensors or the result of a matrix-vector product—a frequent bottleneck in numerous applications, in particular when using Krylov linear or eigenvalue solvers. I will further discuss potential applications of the randomized algorithms, in particular to a problem from quantum chemistry where certain symmetries are encoded as additional structure in the tensor-train format.







Mar 22
Linus Balicki
Virginia Tech

Rational approximation is a widely used tool for data-driven modeling of linear dynamical systems. In recent years the adaptive Antoulas-Anderson (AAA) algorithm has established itself as a successful method for computing rational approximations from a set of sampled data. The p-AAA algorithm extends the original AAA framework to multivariate rational functions which allows for effectively capturing the dynamics of parameter-dependent systems. As with many approaches for computing multivariate approximations, the computational cost of p-AAA increases significantly when considering functions of many variables. In order to overcome this drawback we suggest imposing a low-rank structure on the approximations computed by p-AAA. We discuss implications for the algorithm and investigate in which cases our proposed approach is particularly effective.







Yichen Li
Virginia Tech

When a water droplet freezes on a cold plate, a pointy tip forms as the result of volume expansion. In this talk, we will introduce a quasi-compressible phase-field model that deals with this non-isothermal three-phase system involving water, ice, and air. The water-ice phase transition and the water-air fluid interface are handled by the Allen-Cahn and the Cahn-Hilliard equations, respectively. The non-negative entropy production is ensured by the design of the governing equations, which include the energy equation, the Navier-Stokes equations, and the two phase-field equations. The open-source deal.ii package is then used to solve these equations using finite-element techniques. The Gibbs-Thomson and Clausius-Clapeyron equations, which demonstrate how the melting temperature depends on interface curvature and pressure, respectively, are reproduced in our model. Furthermore, the built-in quasi-compressibility accurately accounts for the volume change due to the water-ice density contrast during the phase transition. With proper parameters, our simulation captures the pointy tip of the frozen droplet with good agreement with the experiment. In the end, we will discuss the effects of the water-ice-air tri-junction conditions and other parameters on the final shape of the frozen droplet.







Mar 29
Hamza Adjerid
Virginia Tech

We study optimal feedback control for a class of differential algebraic equations (DAEs) with quadratic nonlinearity. Systems of this specific form arise in the discretization of control problems associated with the Navier-Stokes equations, but here we focus our attention on finite-dimensional systems of this form. Our approach is to use the strangeness framework for DAEs to appropriately recast the control problem as a system of differential equations with polynomial drift terms. At this stage, it is appropriate to develop standard approximations to the feedback control laws. We describe this process and note the differences between our implementation and other approaches that rely on projections (which can destroy sparsity in the original DAEs). To simplify the exposition, we describe the optimal control for cases where the control does not appear in the algebraic constraints, though we describe how this can be generalized.







Amit Rotem
Virginia Tech

Due to the indefinite nature of the Helmholtz equation, solutions of this equation are difficult to approximate. Classical algorithms for definite elliptic equations (such as Krylov iteration, domain decomposition, multigrid, etc.) are less effective and may fail to converge entirely. The WaveHoltz algorithm solves the Helmholtz equation by filtering the wave equation in the time domain. The resulting fixed-point iteration can be accelerated via Krylov space methods, and the linear system is positive definite for problems with energy-conserving boundary conditions. In this presentation, we show that for any stable discretization of the wave equation, the WaveHoltz iteration is guaranteed to converge to an approximate solution of the corresponding frequency-domain problem.
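
For context, the time-domain filter at the heart of WaveHoltz can be written as follows (in the notation of the published WaveHoltz papers, as I recall it; sign and scaling conventions may differ from the talk). Given an iterate \(v\), one evolves the wave equation, forced at frequency \(\omega\), over one period with initial data \(v\), and averages:

\[
\Pi v \;=\; \frac{2}{T} \int_0^T \left( \cos(\omega t) - \frac{1}{4} \right) w(t; v)\, dt, \qquad T = \frac{2\pi}{\omega},
\]

where fixed points of \(\Pi\) correspond to Helmholtz solutions at that frequency.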







Apr 05 (DDS 220)
Yichen Guo
Virginia Tech

Two classes of preconditioners have proven effective for high-order finite element discretizations: \(p\)-multigrid with a Chebyshev-accelerated Jacobi smoother, and the low-order-refined preconditioner. However, both require a significant number of iterations when applied to highly deformed meshes. The Sparse Approximate Inverse (SAI) preconditioner has been shown to effectively accelerate the convergence of iterative methods, and its construction is inherently parallel. In this work, we apply the SAI as a smoother within the \(p\)-multigrid preconditioner. Given the dense element matrices and the costly matrix-vector operations of high-order finite element discretizations, we use a matrix-free method for the construction. We focus on three aspects: choosing an appropriate sparsity pattern for the SAI and implementing a thresholding strategy to reduce the number of unknowns; coloring nodes to partition them into several sets and solving numerous least-squares problems simultaneously using the matrix-free operator; and applying multiple precisions to decrease the cost of the matvec operations. We evaluate the performance of the smoother on highly deformed meshes using GPUs. Our numerical results demonstrate the effectiveness of this approach.
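
For context, a minimal (dense-algebra, non-matrix-free) Python sketch of the basic SAI construction follows; everything here is illustrative and does not reflect the matrix-free, GPU implementation of the talk. Each column \(m_j\) of the approximate inverse minimizes \(\lVert A m_j - e_j \rVert_2\) over a prescribed sparsity pattern, so the columns can be computed independently (hence in parallel):

```python
import numpy as np
import scipy.sparse as sp

def sai(A, pattern):
    """Columnwise SAI: minimize ||A m_j - e_j||_2 over the nonzeros of
    pattern[:, j]. A and pattern are scipy.sparse matrices."""
    A = sp.csc_matrix(A)
    pattern = sp.csc_matrix(pattern)
    n = A.shape[0]
    M = sp.lil_matrix((n, n))
    for j in range(n):
        J = pattern[:, j].nonzero()[0]        # allowed nonzero rows of m_j
        I = np.unique(sp.find(A[:, J])[0])    # rows reached by columns J
        Aloc = A[I, :][:, J].toarray()        # small local least-squares matrix
        e = (I == j).astype(float)            # restriction of e_j to rows I
        m, *_ = np.linalg.lstsq(Aloc, e, rcond=None)
        for idx, row in enumerate(J):
            M[row, j] = m[idx]
    return M.tocsc()
```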







Eric de Sturler
Virginia Tech

The iterative linear solver GMRES, the generalized minimum residual method, is among the most popular solvers for general linear systems. Unfortunately, its reliance on full orthogonalization of the search space makes it expensive. For \(m\) iterations the cost is \(O(N m^2)\), where \(N\) is the (usually very large) linear system size. Therefore, GMRES is often 'restarted' every \(m\) iterations, where \(m\) is kept modest. Unfortunately, this may lead to relatively slow convergence. Recent randomization techniques, called sketching, offer the potential to solve the least-squares problem in GMRES much more efficiently, reducing the cost to \(O(N m \log m)\) for \(m\) iterations, but may suffer from numerical instability due to the generation of the search space basis without orthogonalization or with selective orthogonalization. We further combine sketching with subspace recycling, reusing selected subspaces from previous linear systems or restarts to accelerate convergence.
This is joint work with Fei Xue (Clemson University).
The talk will focus on the underlying mathematical ideas.
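
For context, the core "sketch-and-solve" idea can be illustrated with a few lines of hypothetical Python (names and the dense Gaussian sketch are illustrative; fast subspace embeddings are what give the \(O(N m \log m)\) cost quoted above):

```python
import numpy as np

def sketched_lsq(AV, b, s, rng=None):
    """Solve min_y ||S (b - AV y)|| for a random sketch S with s << N rows.

    AV: N-by-m matrix whose columns span A times the Krylov basis.
    """
    rng = np.random.default_rng(rng)
    N = AV.shape[0]
    S = rng.standard_normal((s, N)) / np.sqrt(s)   # random subspace embedding
    y, *_ = np.linalg.lstsq(S @ AV, S @ b, rcond=None)
    return y                                       # approximate GMRES coefficients
```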







Apr 12
Ionuț Farcaș
Virginia Tech

Recent advances in algorithms, computer architectures, and high-performance computing enable high-fidelity simulations of complex real-world phenomena with high accuracy. However, making quantitative predictions over long time horizons or performing many-query tasks such as design optimization and uncertainty quantification remains computationally challenging even on large supercomputers. In this presentation, I will discuss how structure-exploiting data-driven methods can be used to accelerate or enable these tasks in large-scale simulations. I will start with the inner loop and show how Operator Inference can be used to construct structure-preserving reduced models of large-scale systems from data. I will also briefly discuss a recent development which integrates domain decomposition into the data-driven reduced modeling procedure. The capabilities of this method will be demonstrated in a realistic rotating detonation rocket engine simulation with 75 million degrees of freedom and a sparse training data set. I will then move to outer loops and show how structure-exploiting sparse grid approximations can be used to quantify uncertainty and perform sensitivity analysis in complex applications with computationally expensive forward models. I will demonstrate the effectiveness of this framework in a turbulent transport simulation in the edge of fusion reactors with more than 264 million degrees of freedom and eight uncertain inputs. I will also briefly show how this method can be used to construct accurate surrogate models for quantities of interest. Lastly, I will discuss context-aware multi-fidelity Monte Carlo sampling for efficient uncertainty propagation. This method optimally balances the costs of training low-fidelity models with the costs of Monte Carlo sampling by taking into account the context in which the learned low-fidelity models will be used, namely for variance reduction in Monte Carlo estimation. Time permitting, I will demonstrate the capabilities of the context-aware algorithm in a plasma micro-turbulence simulation with 12 uncertain inputs.
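
For context, a minimal, hypothetical Operator Inference sketch follows (illustrative names; it learns a quadratic reduced model \(\dot{z} = A z + H (z \otimes z)\) from state and state-derivative snapshots by plain linear least squares, without the structure-preserving and domain-decomposition refinements mentioned in the abstract):

```python
import numpy as np

def operator_inference(X, dX, r):
    """X, dX: full-order state and state-derivative snapshot matrices."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    V = U[:, :r]                                    # POD basis
    Z, dZ = V.T @ X, V.T @ dX                       # latent states and derivatives
    # Data matrix with linear and quadratic (Kronecker) features per snapshot.
    Q = np.vstack([Z, np.einsum('ik,jk->ijk', Z, Z).reshape(r * r, -1)])
    O, *_ = np.linalg.lstsq(Q.T, dZ.T, rcond=None)  # fit dZ ~ [A H] Q
    A, H = O.T[:, :r], O.T[:, r:]
    return V, A, H
```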







Apr 19
Cankat Tilki
Virginia Tech

Dynamic Mode Decomposition (DMD) is an extensively used method in data-driven modeling of unforced dynamical systems. The analytical connection between DMD and Koopman theory has paved the way for the Extended DMD (EDMD) algorithm. In simplest terms, EDMD constructs an approximation of the Koopman operator on a finite-dimensional subspace. This subspace is predetermined, mostly by designating the observables (measurement functions). In practice, choosing good observables is a big challenge. To overcome this problem, one can employ strategies from the signal processing domain and use wavelets as candidates for these observables, as done in Wavelet-based DMD (WDMD). In this talk, we will show that the EDMD algorithm with wavelets coincides with a specific case of WDMD, up to an error introduced by the approximate wavelet transform, thus establishing the analytical connection to Koopman theory. Moreover, we will investigate some extensions of the WDMD method for noisy data and prove convergence rates depending on the DWT decomposition level, i.e., the number of wavelets used in EDMD.
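
For context, here is a minimal, hypothetical EDMD sketch in Python (illustrative names; the wavelet dictionary of the talk would play the role of psi):

```python
import numpy as np

def edmd(X, Y, psi):
    """X, Y: (n, M) snapshot pairs with y_k = F(x_k); psi maps an (n, M)
    array to a (d, M) array of dictionary (observable) evaluations."""
    PX, PY = psi(X), psi(Y)
    M = X.shape[1]
    G = PX @ PX.T / M              # Gram matrix of the dictionary
    A = PX @ PY.T / M              # cross matrix with advanced snapshots
    K = np.linalg.pinv(G) @ A      # matrix approximation of the Koopman operator
    return np.linalg.eigvals(K), K # Koopman eigenvalue estimates and K

# Example dictionary: monomials up to degree 2 of a scalar state.
# psi = lambda X: np.vstack([np.ones_like(X[0]), X[0], X[0] ** 2])
```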







Haroun Meghaichi
Virginia Tech

We present a high order immersed finite element (IFE) method for solving the elliptic interface problem with interface-independent meshes. The IFE functions developed here satisfy the interface conditions exactly and they have optimal approximation capabilities. The construction of this novel IFE space relies on a nonlinear transformation based on the Frenet-Serret frame of the interface to locally map it into a line segment, and this feature makes the process of constructing the IFE functions cost-effective and robust for any degree. This new class of immersed finite element functions is locally conforming with the usual weak form of the interface problem so that they can be employed in the standard interior penalty discontinuous Galerkin scheme without additional penalties on the interface. Mathematical analysis and numerical examples are provided to showcase the convergence properties of the method under \(h\) and \(p\) refinements.







Apr 26
Matthias Voigt
UniDistance Suisse

Nonlinear eigenvalue problems arise in a large variety of applications such as the stability analysis of delay differential equations, the treatment of differential equations with nonlinear boundary conditions, or the modal analysis of vibroacoustic problems. In this talk, we will discuss numerical methods for nonlinear eigenvalue problems that are described by matrices of large dimension. Two techniques will be presented. The first one considers the eigenvalue problem in a Schur complement form and is based on projecting the large matrices in an interpolatory framework in order to obtain a reduced nonlinear eigenvalue problem that can be solved more efficiently. Based on the eigenpair residuals, new interpolation points and corresponding projection matrices can be computed. In this way, we obtain an iterative method for computing a few eigenvalues close to a desired target point. The second method we consider is the NLFEAST algorithm that is designed to compute all eigenvalues that are enclosed by a given contour. We will adapt this method to the particular application case of a nonlinear eigenvalue problem originating from a modal analysis in vibro-acoustics and we will illustrate that this method is feasible for efficiently computing approximations to the resonance frequencies in a frequency band of interest. This is joint work with Rifqi Aziz and Emre Mengi (Koc University, Istanbul) as well as Suhaib Koji Baydoun, Benedikt Goderbauer, Christopher Jelich, and Steffen Marburg (TU Munich).







Fall 2023

Sep 01
Justin Krometis
Virginia Tech

Recent years have seen an explosion in the application of artificial intelligence and machine learning (AI/ML) to practical problems from computer vision to game playing to algorithm design. This growth has been mirrored and, in many ways, been enabled by the development and maturity of publicly-available software packages such as PyTorch and TensorFlow that make model building, training, and testing easier than ever. While these packages provide tremendous power and flexibility to users, and greatly facilitate learning and deploying AI/ML techniques, they and the models they provide are extremely complicated and as a result can present a number of subtle but serious pitfalls. This talk will present three examples from the presenter's recent experience where obscure settings or bugs in these packages dramatically changed model behavior or performance: one from classic deep learning regression, one from image classification, and one from reinforcement learning. I will also show an example from (non-AI/ML) parameter estimation where a similar issue buried deep in a codebase seriously undermined the accuracy of the results. These examples illustrate the importance of thinking carefully about the results that a model is producing and carefully checking each step in its development before trusting its output.







Sean Reiter
Virginia Tech

In this work, we investigate the \(\mathcal{H}_2\) model reduction problem for linear dynamical systems with quadratic outputs (LQO). Our primary contributions are twofold: First, we derive Wilson first-order optimality conditions for \(\mathcal{H}_2\) model reduction. We show that these constraints are equivalent to rational interpolation of a univariate and multivariate transfer function that characterize the input-output map of the underlying model; these describe the linear and quadratic parts of the output, respectively. Secondly, we adapt the iterative rational Krylov algorithm as an efficacious approach for computing reduced-order LQO models that satisfy the necessary optimality conditions upon convergence. Numerical experiments validate the effectiveness of the proposed framework.







Sep 08
Steffen W. R. Werner
Virginia Tech

Data-driven reduced-order modeling is an essential tool in constructing high-fidelity compact models to approximate physical phenomena when explicit models, such as state-space formulations with access to internal variables, are not available yet abundant input/output data are. When deriving models from theoretical principles, the resulting models typically contain differential structures that lead to certain system properties. A common example is dynamical systems with second-order time derivatives, which arise in the modeling of mechanical or electro-mechanical processes. In this case, data are often available in the frequency domain, where the systems' input-to-output behavior is described by rational functions rather than differential equations. Classical frequency-domain approaches like the Loewner framework, vector fitting, and AAA are available and can be used to learn unstructured (first-order) models from data. However, these models in their original formulation do not reflect the structure-inherited properties. The key element in the derivation of frequency-domain approaches is the barycentric form of rational functions. In this work, we present structured extensions of the barycentric form for the case of mechanical systems. Building on these structured barycentric forms, we develop new algorithms for learning mechanical phenomena in the frequency domain while enforcing the mechanical system structure in the model description.







Sep 15
Daniel Appelö
Virginia Tech

In this talk we will: 1. introduce the most basic concepts in quantum computing; 2. briefly highlight examples of quantum algorithms where the skills of a numerical analyst can be of use; 3. describe one type of quantum computing hardware (a transmon) and how it is modeled; and 4. take you on a comprehensive journey through a real-world example involving characterization, control, and experimental validation, showcasing our experiences with a qutrit device within the Lawrence Livermore QUDIT testbed.







Sep 22
Yingda Cheng
Virginia Tech

In this talk, I will first introduce kinetic models, a class of models that provides statistical descriptions of particle systems. I will discuss a few prominent applications of such models, ranging from rarefied gases and radiation transport in nuclear engineering to Vlasov dynamics in plasmas. We will discuss properties of kinetic models, including the underlying structure of the solutions and the multiscale behavior that bridges such models to fluid (macroscopic) equations. Kinetic models are prominent examples of high-dimensional partial differential equations (PDEs), which motivates the second part of the talk. I will discuss several approaches for solving high-dimensional PDEs, focusing on our work on sparse-grid discontinuous Galerkin schemes built upon multiwavelet basis functions. I will also outline some potential future directions and research opportunities.







Sep 29
Art Pelling
TU Berlin

Modelling the complex dynamics of acoustical systems constructively can be challenging due to imprecise knowledge of material properties and domain geometry. Thus, measurements are often unavoidable in practice, and, in turn, high-fidelity measurement data is abundantly available in the field. Recent advances in the mathematical community of model order reduction, specifically data-driven methods, can help to obtain efficient and meaningful models from the high-dimensional data. So far, reduced order modelling methods are not widely used in acoustics. At the other end of the spectrum, in mathematics, new methods are oftentimes validated only by means of synthetic data and toy models. Consequently, a noticeable gap between theory and practice remains. Our aims are twofold: firstly, clearing obstructions that arise when employing recent data-driven reduced order modelling methods in acoustical engineering practice; secondly, providing measurement data of real-world systems that can be used as benchmarks by the mathematicians who develop new data-driven methods. In this talk, we showcase some 'roadblocks' that arise when applying the so-called Eigensystem Realization Algorithm by means of real examples. Amongst others, we consider head-related transfer functions that are used for auralization in virtual reality applications.







Johann Rudi
Virginia Tech

Machine learning algorithms have successfully been able to approximate nonlinear maps under weak assumptions on the structure and properties of the maps. This talk presents deep neural networks to solve inverse problems, where we seek to estimate parameters of certain physical models. We employ the neural networks to approximate inverse or reconstruction maps for model parameter estimation from observational data. The models we target exhibit computational challenges in an inference setting, namely, having a highly nonlinear and nonconvex data-misfit term and permitting only weakly informative priors on parameters. These challenges typically cause traditional optimization to fail and alternative algorithms to exhibit large computational costs; deep learning therefore becomes attractive. We quantify the prediction errors of model parameters obtained from the neural networks and investigate the effects of network architectures with and without the presence of noise in observational data. Furthermore, this talk investigates new techniques for quantifying uncertainties in the predictions.







Oct 13
Ning Wan
Virginia Tech

Anderson Acceleration is a method that speeds up the convergence of fixed-point iterations; it is widely used within the Self-Consistent Field (SCF) iteration method in quantum chemistry/physics computations. In this talk, we aim to explore Anderson Acceleration under perturbations and various truncation approaches. By studying the effects of perturbations and different truncation methods, we can gain insights into the behavior and performance of Anderson Acceleration in practical applications.







Petar Mlinarić
Virginia Tech

Mechanical structures can break catastrophically when external forces introduce resonances. One way to address this is by adding dampers to suppress the effect of disturbances. This leads to the question of how to find optimal damper viscosities and positions for a given mechanical structure. There is significant prior work on finite-dimensional second-order systems, but the problem remains challenging because of the discrete nature of the damper positions. In this talk, we investigate the influence of a single damper on an infinite-dimensional system, in particular the wave equation.







Oct 20
Dan Folescu
Virginia Tech

Contour integral methods solve nonlinear eigenvalue problems by performing, as the name suggests, contour integration around a domain in the complex plane. As formulated in the system-theoretic framework of Brennan, Embree, and Gugercin, exact contour integration yields the external (or "black-box") description of a linear time-invariant dynamical system. In practice, inexact numerical quadrature is used to construct the underlying system moments, presenting stability concerns when solving the subsequent realization problem: given a system's external description, construct an internal (or state-space) description of the system. A seminal approach to the realization problem was developed by Ho and Kalman in 1966; it has since been extended to data-driven approaches based on the Loewner framework, among others. Concerning stability, Oymak and Ozay have recently revisited the Ho-Kalman approach, providing upper bounds on the norm of the difference between state-space coefficient matrices realized from exact and noisy external system descriptions, respectively. In this talk, we will first give a brief overview of the relevant systems theory, along with a description of the Ho-Kalman realization algorithm. Then, we will present an upper bound from the recent work of Oymak and Ozay. Finally, we will discuss additional approaches to bounding the normed difference of state matrix realizations via perturbation theory. Numerical examples will be provided.
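
For context, a minimal, hypothetical Python sketch of the Ho-Kalman idea for SISO data follows (illustrative only; it realizes \((A, B, C)\) of order \(r\) from exact Markov parameters via an SVD of the block Hankel matrix, with none of the noise analysis discussed in the talk):

```python
import numpy as np

def ho_kalman(h, s, r):
    """h: sequence of scalar Markov parameters h[0], ..., h[2*s - 1],
    with h[k] = C A^k B; returns a realization (A, B, C) of order r."""
    H = np.array([[h[i + j] for j in range(s)] for i in range(s)])       # Hankel
    Hs = np.array([[h[i + j + 1] for j in range(s)] for i in range(s)])  # shifted Hankel
    U, sig, Vt = np.linalg.svd(H)
    U, sig, Vt = U[:, :r], sig[:r], Vt[:r, :]    # rank-r truncation
    O = U * np.sqrt(sig)                         # observability factor
    R = np.sqrt(sig)[:, None] * Vt               # controllability factor
    A = np.linalg.pinv(O) @ Hs @ np.linalg.pinv(R)
    return A, R[:, :1], O[:1, :]                 # A, B (first column), C (first row)
```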







Mike Ackermann
Virginia Tech

Model reduction via transfer function interpolation is frequently used to produce high-fidelity reduced-order models of large-scale dynamical systems. These interpolatory models can be produced via projection of the full-order model matrices or constructed directly from input-output data. The data-driven approach is desirable since it does not require knowledge of a realization of the full-order model, only a way to evaluate values and derivatives of its associated transfer function. However, while theoretically equivalent to the explicit projection-based formulation, the data-driven approach is plagued by numerical issues and sensitivity to the interpolation point selection. We will present our work thus far in analyzing the Hermite Loewner framework, which produces reduced-order models that interpolate the full-order transfer function and its derivative at specified points in the complex plane. We first examine the case where the poles of the full-order model are known, and then examine the connection to \(\mathcal{H}_2\)-optimal model reduction. We conclude by presenting an algorithm to produce locally \(\mathcal{H}_2\)-optimal reduced-order models from data that is robust to the initial interpolation point selection.
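
For context, in standard notation (not specific to this talk), the Loewner framework builds, from left data \((\mu_i, H(\mu_i))\) and right data \((\lambda_j, H(\lambda_j))\), the Loewner and shifted Loewner matrices

\[
\mathbb{L}_{ij} = \frac{H(\mu_i) - H(\lambda_j)}{\mu_i - \lambda_j},
\qquad
\mathbb{L}^{\sigma}_{ij} = \frac{\mu_i H(\mu_i) - \lambda_j H(\lambda_j)}{\mu_i - \lambda_j};
\]

in the Hermite (confluent) case \(\mu_i = \lambda_i\), the diagonal entries become derivative data, \(\mathbb{L}_{ii} = H'(\lambda_i)\) and \(\mathbb{L}^{\sigma}_{ii} = \left[ s H(s) \right]' \big|_{s=\lambda_i}\).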







Oct 27
Eric de Sturler
Virginia Tech

For difficult problems, in our context, problems on highly deformed meshes, the (preconditioned) Conjugate Gradient (PCG) method may converge slowly, because the linear system is very poorly conditioned, and expensive preconditioners are needed. To reduce the runtime cost, we can accelerate PCG further by 'recycling' the search spaces from previous linear system solves to increase the rate of convergence. We will discuss how to select and reuse approximate invariant subspaces from previous search spaces to accelerate the convergence of PCG. However, on the GPU, recycling is memory-wise expensive, as we store a basis for the approximate invariant subspace and typically also for its image under the matrix, as well as a window of CG search direction vectors to periodically update/improve the recycle space. As fast memory is quite limited on the GPU, we reduce the memory requirements by storing all these vectors in single precision and we avoid the image of the recycle space at the cost of an extra matrix-vector product. However, as the problem is ill-conditioned, we may need to solve to double precision accuracy. This presentation brings together an interesting blend of CG and high performance computing related topics, demonstrating how well-known algorithms need to be adapted for high performance architectures. We discuss a convergence result for CG when 'recycling' an approximate invariant subspace in a sequence of linear systems and demonstrate that the convergence is not sensitive to small perturbations (such as storing the basis vectors for the space in single precision). We also discuss how to solve to double precision accuracy while using single precision intermediate results and vectors stored in single precision without iterative refinement using the theory for inaccurate matrix-vector products in iterative linear solvers and how to control that accuracy.







Max Heldman
Virginia Tech

In this talk we discuss asymptotic methods for studying interacting reaction-drift-diffusion particle systems as the number of particles goes to infinity. First, we look at mean field models for reaction-diffusion systems and compare the result to standard mass action reaction-diffusion partial differential equations (PDEs). Then, we present a stochastic PDE model which captures the fluctuations of the particle concentrations about their deterministic mean-field limit. Finally, we show preliminary work on a drift-diffusion model.







Nov 03
Heike Faßbender
TU Braunschweig

Finding the unique stabilizing solution \(X = X^H\) of a large-scale continuous-time algebraic Riccati equation (CARE) \(0 = R(X) := A^H X + X A + C^H C - X B B^H X\) with a large, sparse \(n \times n\) matrix \(A\), an \(n \times m\) matrix \(B\) and a \(p \times n\) matrix \(C\) is of interest in a number of applications. Here, \(B\) and \(C\) are assumed to have full column and row rank, respectively, with \(m, p \ll n\). The unique stabilizing solution \(X = X^H\) is positive semidefinite and makes the closed-loop matrix \(A - B B^H X\) stable. Even though \(A\) is large and sparse, the solution \(X\) will still be a dense matrix in general. But our assumptions on \(B\) and \(C\) often imply that the sought-after solution \(X\) will have a low numerical rank (that is, its rank is \(\ll n\)). This allows for the construction of iterative methods that approximate \(X\) with a series of low rank matrices stored in low-rank factored form. To be precise, we focus on Hermitian low-rank approximations \(X_j\) to \(X\) of the form \(X_j = Z_j Y_j Z_j^H\), where \(Z_j\) is an \(n \times k_j\) matrix with only few columns and \(Y_j\) is a small square \(k_j \times k_j\) Hermitian matrix. There are several methods which produce such a low-rank approximation. Our approach is based on a block rational Arnoldi decomposition and an associated block rational Krylov subspace spanned by \(A^H\) and \(C^H\). The approximations \(X_j\) as well as \(\lVert R(X_j) \rVert_F\) can be computed fast and efficiently. In particular, our approach gives a whole new family of algorithmic descriptions of the same approximation sequence \(X_j\) to the Riccati solution as four other algorithms for CARE previously known in the literature.
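
A small dense experiment (not the speaker's rational Krylov method) can illustrate the low numerical rank claim, using SciPy's dense CARE solver on a randomly generated stable system with m = p = 2:

    import numpy as np
    import scipy.linalg as la

    # Solve 0 = A^T X + X A + C^T C - X B B^T X densely, then inspect the
    # eigenvalue decay of X. Sizes and scalings are illustrative choices.
    rng = np.random.default_rng(1)
    n, m, p = 400, 2, 2
    A = -2.0 * np.eye(n) + 0.05 * rng.standard_normal((n, n))  # stable A
    B = rng.standard_normal((n, m))
    C = rng.standard_normal((p, n))

    X = la.solve_continuous_are(A, B, C.T @ C, np.eye(m))
    eigs = np.sort(np.linalg.eigvalsh(X))[::-1]
    num_rank = int(np.sum(eigs > 1e-10 * eigs[0]))
    print(f"numerical rank of X: {num_rank} out of n = {n}")  # typically << n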







Nov 10 (DDS 366)
Volker Mehrmann
TU Berlin

For linear evolution equations (in continuous-time and discrete-time) we revisit and extend the concepts of hypocoercivity and hypocontractivity and give a detailed analysis of the relations of these concepts to (asymptotic) stability, as well as (semi-)dissipativity and (semi-)contractivity, respectively. On the basis of these results, the short-time decay behavior of the norm of the fundamental solution matrix for linear continuous-time and discrete-time systems is characterized by an integer called hypocoercivity index or hypocontractivity index, respectively. The results extend to operators in Hilbert spaces and can be applied to the analysis of anisotropic flows.







Dec 01
Sam Bender
Virginia Tech

Time-periodic systems are ubiquitous both naturally and as engineered systems. This is often a consequence of periodic forcing due to rotation (e.g., the Earth's rotation generates both tidal gravity forces and diurnal temperature gradients that cyclically drive atmospheric and ocean flows; gyroscopic forces can generate significant vibration and noise in vehicles). More broadly, periodic phenomena can occur through the emergence of a dynamic balance between inertial and various restoring forces. For example, a structure exposed to an otherwise steady wind or current flow can experience large oscillations caused by vortex shedding or flutter. This can be destructive (the Tacoma Narrows bridge failure is a famous example); but there can be positive effects as well (e.g., high-efficiency windmills may take advantage of these effects). Linear time-periodic (LTP) systems play a fundamental role in the analysis, simulation, and control of such phenomena even when the underlying models reflect fundamentally nonlinear dynamics, since by their character the periodic phenomena of interest emerge as components of a "center manifold" and must themselves be stable at least when subjected to small perturbations. Were this not the case, say for a natural system, oscillatory phenomena would not generally be observed, while for an engineered system, they would not generally be desired. Beyond this, computational strategies for extracting periodic solutions of nonlinear systems necessitate repeated solution of linear periodic systems, and this leads (naturally) to the question of effective model order reduction for such systems.







Hayden Ringer
Virginia Tech

The frequencies of a vibrating planar membrane correspond to the Dirichlet Laplacian eigenvalues on the planar domain. In 1966, Mark Kac posed the famous question "Can One Hear the Shape of a Drum?". In other words: is the shape of a planar domain uniquely determined by its Dirichlet Laplacian spectrum? In 1992, Gordon, Webb, and Wolpert constructed a pair of non-isometric domains with identical Laplacian eigenvalues, answering Kac's question in the negative. In this talk, we present a perspective on the inverse eigenvalue problem as an ill-posed inverse problem, and present some preliminary work in applying Bayesian inference techniques to this question. In particular, we will present the perturbation theory for Dirichlet Laplacian eigenvalues, as well as our modifications to the state-of-the-art algorithm for computing Dirichlet Laplacian eigenvalues: Trefethen and Betcke's improved Method of Particular Solutions.
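
As background for the forward problem, here is a minimal sketch of computing the smallest Dirichlet Laplacian eigenvalues with finite differences on the unit square (the talk targets general domains via the Method of Particular Solutions; the grid size and solver settings below are illustrative):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    N = 100                            # interior grid points per direction
    h = 1.0 / (N + 1)
    T = sp.diags([-np.ones(N - 1), 2 * np.ones(N), -np.ones(N - 1)],
                 [-1, 0, 1]) / h**2
    I = sp.identity(N)
    L = sp.kron(I, T) + sp.kron(T, I)  # 2D five-point Laplacian, Dirichlet BCs

    # Smallest eigenvalues via shift-invert; exact values are pi^2 (m^2 + n^2).
    vals = eigsh(L.tocsc(), k=4, sigma=0, return_eigenvectors=False)
    print(np.sort(vals) / np.pi**2)    # approximately [2, 5, 5, 8]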







Spring 2023

Feb 10
Max Heldman
Virginia Tech

Particle-based stochastic reaction-diffusion (PBSRD) systems are used in computational biology to model the dynamics of particles which react, diffuse, and interact in space. In this talk, we describe a numerical method for mesoscale simulations of PBSRD systems based on the convergent reaction-diffusion master equation (CRDME) framework. The CRDME generates approximate samples from the probability density for the particle locations by discretizing the forward Kolmogorov equation for the density in space using a quadrature-modified finite element method, and then solving the resulting continuous-time Markov chain using a variant of Gillespie's stochastic simulation algorithm. Our work extends previous versions of the CRDME by adding pair potentials, which can be used to model, e.g., electrostatic forces and soft-core repulsion. The new method exhibits optimal convergence rates for the spatial discretization and exactly preserves important physical properties such as detailed balance.
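
To fix ideas about the final sampling step, here is a minimal Gillespie-style simulation for a well-mixed reversible dimerization (a non-spatial stand-in for the CRDME's continuous-time Markov chain; the rate constants and counts are arbitrary illustrative values):

    import numpy as np

    # Gillespie SSA for A + A <-> B in a well-mixed volume.
    rng = np.random.default_rng(0)
    k_on, k_off = 1e-3, 0.1
    a, b, t, t_end = 100, 0, 0.0, 50.0
    while t < t_end:
        rates = np.array([k_on * a * (a - 1) / 2,   # A + A -> B
                          k_off * b])               # B -> A + A
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)           # time to next reaction
        if rng.random() < rates[0] / total:         # choose which one fires
            a -= 2; b += 1
        else:
            a += 2; b -= 1
    print(t, a, b)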







Feb 24
Michal Outrata
Virginia Tech

In 1655, the Oxford professor John Wallis published his book Arithmetica Infinitorum - The Arithmetic of Infinitesimals - introducing the term continued fraction, which later served as the tool of choice in approximation theory for nearly 300 years, until the notation now bearing the name of Henri Padé became the one commonly used in the field. Around the time when Padé introduced his work - namely in 1870 - Hermann Schwarz introduced an iterative process for solving the Laplace problem, nowadays called the alternating Schwarz method. It was introduced as a theoretical tool but has since evolved into a broad family of associated methods - the Schwarz methods. One of the important aspects of these methods is choosing the interface conditions through which the method propagates the subdomain solution information, and one common class of these interface conditions is called absorbing boundary conditions (ABCs). In this talk we marry these two areas - continued fractions and ABCs - and show for a model problem that the truncation of an unbounded domain by an artificial Dirichlet boundary condition placed far away from the domain of interest is equivalent to a specific absorbing boundary condition at the boundary of the domain of interest. We prove that the absorbing boundary condition thus obtained is a spectral Padé approximation about infinity of the transparent boundary condition. We also propose two additional ABCs motivated by this result and numerically show their efficiency for our test problem.







Mar 17
Agnieszka Międlar
Virginia Tech

Randomized numerical linear algebra (NLA) methods have recently gained popularity because of their easy implementation, computational efficiency, and numerical robustness. We propose a randomized version of the well-established FEAST eigenvalue algorithm that enables computing the eigenvalues of the Hermitian matrix pencil \((A, B)\) located in the given real interval \(I \subset [\lambda_{\min}, \lambda_{\max}]\). In this talk, we present the analysis of this randomized variant of the subspace iteration method using a rational filter and propose several modifications of the original FEAST algorithm. First, we derive new structural as well as probabilistic error analyses of the accuracy of the approximate eigenpairs and subspaces obtained using the randomized FEAST algorithm, i.e., bounds for the canonical angles between the exact and the approximate eigenspaces corresponding to the eigenvalues contained in the interval, and for the accuracy of the eigenvalues and the corresponding eigenvectors. Since this part of the analysis is independent of the particular distribution of the initial subspace, we call it structural. In the case of a starting guess given by a Gaussian random matrix, we provide more informative, probabilistic error bounds. Our new algorithm allows us to improve the accuracy of orthogonalization when \(B\) is ill-conditioned, to apply the rational filter efficiently by using the MPGMRES-Sh Krylov subspace method to accelerate the solution of the shifted linear systems, and to estimate the eigenvalue counts in a given interval. Finally, we will illustrate numerically the effectiveness of the presented error bounds and proposed algorithmic modifications. This is joint work with E. de Sturler (VT) and A. K. Saibaba (NC State).
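
The following hedged sketch shows the core of one FEAST-style filtered subspace iteration with a Gaussian random start (the identity mass matrix, dense solves, and simple circle quadrature are illustrative simplifications; the talk's algorithm adds the error bounds, careful B-orthogonalization, and MPGMRES-Sh solves):

    import numpy as np

    rng = np.random.default_rng(0)
    n, k, N = 200, 8, 16                   # size, block size, quadrature nodes
    A = np.diag(np.arange(1.0, n + 1))     # Hermitian pencil (A, B), B = I
    B = np.eye(n)
    c, r = 4.5, 4.0                        # circle enclosing eigenvalues 1..8

    Y = rng.standard_normal((n, k))        # Gaussian random initial block
    Z = np.zeros((n, k))
    for j in range(N):
        th = 2 * np.pi * (j + 0.5) / N
        z = c + r * np.exp(1j * th)
        # (1/2 pi i) contour integral of (zB - A)^{-1} B Y, trapezoid rule:
        Z += np.real((r / N) * np.exp(1j * th) * np.linalg.solve(z * B - A, B @ Y))

    Q, _ = np.linalg.qr(Z)                 # Rayleigh-Ritz on filtered block
    print(np.sort(np.linalg.eigvalsh(Q.T @ A @ Q)))  # should approximate 1..8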







Mark Embree
Virginia Tech

The numerical range (or field of values) of a matrix is a closed, bounded, convex set in the complex plane that contains the eigenvalues. When analyzing various algorithms involving nonsymmetric matrices, the numerical range provides a convenient proxy for the eigenvalues. Unfortunately, the set is often inconveniently large. In this talk we will describe two attempts to refine our understanding of the numerical range: inverse numerical range problems, and a kind of nonsymmetric interlacing theorem due to Russell Carden. We will describe the implications of this interlacing for projection-based model reduction: If we perform Galerkin projection of a stable model, how many unstable modes can the reduced order model have?
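
As a reference point, a standard way (often attributed to C. R. Johnson) to trace the boundary of the numerical range numerically is sketched below, with illustrative parameters:

    import numpy as np

    # For each rotation angle, the extreme eigenvector of the Hermitian part
    # of e^{i theta} A gives a boundary point x* A x of W(A).
    def numerical_range_boundary(A, num_angles=360):
        pts = []
        for theta in np.linspace(0, 2 * np.pi, num_angles, endpoint=False):
            R = np.exp(1j * theta) * A
            H = (R + R.conj().T) / 2              # Hermitian part
            w, V = np.linalg.eigh(H)
            x = V[:, -1]                          # eigvec of largest eigenvalue
            pts.append(x.conj() @ A @ x)          # boundary point of W(A)
        return np.array(pts)

    A = np.array([[1.0, 2.0], [0.0, -1.0]], dtype=complex)
    pts = numerical_range_boundary(A)
    print(pts.real.max(), pts.imag.max())  # extent of W(A) along the axes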







Mar 24
Sean Reiter
Virginia Tech

In this work we present extensions of the Quadrature-based Balanced Truncation (Quad-BT) framework of Gosea, Gugercin, and Beattie to other types of balancing. Quad-BT is a "non-intrusive" reformulation of balanced truncation, a classical projection-based model order reduction technique for linear systems. Quad-BT builds accurate reduced-order models entirely from system response data (e.g., transfer function measurements) without the need to reference an explicit state-space realization of the underlying system. We extend this data-driven framework to include other types of balancing, namely, balanced stochastic truncation, positive-real balancing, bounded-real balancing, and frequency-weighted balanced truncation. This is accomplished by sampling certain spectral factors associated with the system of interest. We verify this approach with several numerical examples.







Petar Mlinarić
Virginia Tech

There are various approaches to \(\mathcal{H}_2\)-optimal reduced-order modeling of (unstructured) linear time-invariant systems, such as the iterative rational Krylov algorithm that is based on interpolation. Interpolatory model order reduction was also extended to structured linear systems. We are interested in extending \(\mathcal{H}_2\)-optimal reduced-order modeling to structured linear systems and investigating whether it necessitates interpolation. Interpolatory \(\mathcal{H}_2\)-optimality conditions have been established for certain second-order systems and port-Hamiltonian systems, as well as special time-delay systems. Furthermore, for a special class of parametric linear time-invariant systems, interpolatory \(\mathcal{H}_2 \otimes \mathcal{L}_2\)-optimality conditions are known. In this talk, we will present some generalizations of these results using the work on \(\mathcal{L}_2\)-optimal reduced-order modeling and demonstrate them on a few numerical examples.







Mar 31
Yichen Guo
Virginia Tech

The discretization of partial differential equations using finite element methods leads to large-scale linear systems. Researchers often employ iterative techniques to solve these systems, which gives rise to two distinct errors: the algebraic error induced by the linear solver, and the discretization error determined by the finite element approximation. While the algebraic error converges to zero as iterations progress, the discretization error remains constant. Consequently, determining the optimal point to terminate the iterations, in order to maintain overall precision and prevent over-solving, becomes a critical question. In this presentation, we will describe a stopping criterion based on the residual for the Poisson equation. First, we will review error estimation in the conjugate gradient method, followed by an exploration of polynomial-degree robust a posteriori error estimators for high-order finite element methods. Subsequently, we will introduce an alternative stopping criterion based on the linear residual and two associated terms, and also consider the Poisson equation with a heterogeneous coefficient. Finally, we will demonstrate the robustness and efficiency of this criterion by applying it to several benchmark problems and comparing its performance with other criteria.
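
The balancing idea can be sketched crudely as follows, using the residual norm as a stand-in for the algebraic error estimate (the talk's criterion is sharper; the quantities eta_disc and gamma below are illustrative placeholders):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    # Stop once the (crude, residual-based) algebraic error estimate falls
    # below a fraction of an assumed discretization error eta_disc = O(h^2),
    # instead of solving to machine precision.
    N = 100
    h = 1.0 / (N + 1)
    A = sp.diags([-np.ones(N - 1), 2 * np.ones(N), -np.ones(N - 1)],
                 [-1, 0, 1]) / h**2          # 1D Poisson stiffness matrix
    b = np.ones(N)

    eta_disc = h**2                  # assumed discretization error estimate
    gamma = 0.1                      # safety factor in the stopping test
    iters = []
    x, info = cg(A, b, rtol=gamma * eta_disc / np.linalg.norm(b), atol=0.0,
                 callback=lambda xk: iters.append(1))
    print("flag:", info, "| iterations:", len(iters),
          "| final residual:", np.linalg.norm(b - A @ x))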







Ashlyn McDonald
Virginia Tech

Inverse problems arise in a wide variety of applications including biomedicine, environmental sciences, astronomy, and more, but computing reliable solutions to these problems requires the inclusion of prior knowledge, in a process that is often referred to as regularization. Most regularization techniques require suitable choices of regularization parameters, so I will describe new approaches that use deep neural networks (DNNs) to obtain regularization parameters. We can train multiple networks to approximate mappings from observation data to individual regularization parameters, and once the networks are trained, regularization parameters for newly obtained data can be computed by efficient forward propagation of the DNNs. The network-obtained regularization parameters can be computed more efficiently and may even lead to more accurate solutions compared to existing regularization parameter selection methods.
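
A toy version of the training pipeline might look as follows; the forward operator, the brute-force "optimal" parameter labels, and the scikit-learn network are all our illustrative stand-ins, not the speaker's setup:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n = 64
    A = np.tril(np.ones((n, n))) / n            # toy forward operator
    U, s, Vt = np.linalg.svd(A)

    def tikhonov(b, lam):
        # Filtered SVD solution of min ||Ax - b||^2 + lam^2 ||x||^2.
        return Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ b))

    lams = np.logspace(-3, 0, 30)
    X_train, y_train = [], []
    for _ in range(500):
        x_true = np.cumsum(rng.standard_normal(n)) / np.sqrt(n)
        b = A @ x_true + 0.01 * rng.standard_normal(n)
        errs = [np.linalg.norm(tikhonov(b, l) - x_true) for l in lams]
        X_train.append(b)                        # data -> best lambda (log10)
        y_train.append(np.log10(lams[int(np.argmin(errs))]))

    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    net.fit(np.array(X_train), np.array(y_train))
    # For new data, a regularization parameter is one forward pass away:
    b_new = A @ np.ones(n) + 0.01 * rng.standard_normal(n)
    print(10 ** net.predict(b_new[None, :])[0])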







Apr 07
Jonathan Baker
Virginia Tech

It is well known that a linear time-invariant system is (state) controllable if and only if the associated controllability matrix (CM) is full rank. To find the CM rank, it may be sufficient to examine a submatrix. This is especially true for multi-input systems, for which the controllability matrix may have many more columns than rows; the rank of the entire CM may be the same as that of the submatrix formed by the first few blocks. The number of blocks needed to reach the maximum rank may be called the grade of the system. I will briefly explain how several areas benefit from understanding how the system grade is affected by the properties of the system and control matrices. I will present some widely known results, new results, and examples.
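
Under one natural reading of the definition, the grade can be computed by growing the block Krylov matrix until its rank stagnates or becomes full, as in this sketch (the function name and stopping logic are our assumptions):

    import numpy as np

    def controllability_grade(A, B, tol=1e-10):
        # Smallest k with rank [B, AB, ..., A^{k-1}B] maximal.
        n = A.shape[0]
        blocks, block, prev_rank = [], B.copy(), 0
        for k in range(1, n + 1):
            blocks.append(block)
            r = np.linalg.matrix_rank(np.hstack(blocks), tol=tol)
            if r == prev_rank:    # rank stagnated: the new block adds nothing
                return k - 1, r
            if r == n:            # full rank reached
                return k, r
            prev_rank = r
            block = A @ block
        return n, prev_rank

    # With m = 4 inputs and random data, far fewer than n blocks suffice.
    rng = np.random.default_rng(0)
    n, m = 12, 4
    A, B = rng.standard_normal((n, n)), rng.standard_normal((n, m))
    print(controllability_grade(A, B))   # typically (3, 12), since 3 * 4 = 12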







Apr 14
Jonas Nicodemus
U Stuttgart

State-of-the-art structure-preserving model order reduction (MOR) methods for port-Hamiltonian (pH) systems focus only on approximating the input-output dynamics by (approximately) minimizing classical system norms. Nevertheless, a pH system consists of two objects: the input-output dynamics and an energy function that is typically called the Hamiltonian. If we thus measure the approximation quality only with respect to the input-output dynamics, then the approximation of the Hamiltonian is not reflected at all. In this talk, we take a first step towards the dual-objective minimization problem of the optimal approximation of the input-output dynamics and the Hamiltonian by noticing that in the pH reduced-order model, we can modify the Hamiltonian without changing the system dynamics. Thus, for a given pH reduced-order model, we can search for the reduced Hamiltonian that best approximates the full Hamiltonian. Moreover, we propose a numerical algorithm to solve the minimization problem and demonstrate its applicability with numerical examples. This talk describes joint work with Paul Schwerdtner (TU Berlin) and Benjamin Unger (U Stuttgart).







Linus Balicki
Virginia Tech

Balanced truncation is a popular model order reduction technique that reduces the order of a system based on its state variables' contribution to the controllability and observability energy. For linear time-invariant systems, this approach has been thoroughly investigated and scalable computational frameworks exist. In this talk, I will focus on systems that are linear in the state equation but whose output is a polynomial of the state variables. The underlying energy functions of such systems turn out to be polynomials as well and the coefficients of these polynomials can explicitly be computed by solving large-scale tensor-structured linear systems. I will talk about the computational aspects of solving these tensor-structured systems and point out possible ways to use these tensor-based energy function representations in the context of model order reduction.







Apr 19 (10:15-11:15am)
Alice Cortinovis
Stanford U

In this talk we consider the computation of the action of a matrix function \(f(A)\), such as the matrix exponential or the matrix square root, on a vector \(b\). For a general matrix \(A\), this can be done by computing the compression of \(A\) onto a suitable Krylov subspace. Such compression is usually computed by forming an orthonormal basis of the Krylov subspace using the Arnoldi method. In this talk, we propose to compute (non-orthonormal) bases in a faster way and to use a fast randomized algorithm for least-squares problems to compute the compression of \(A\) onto the Krylov subspace. We present some numerical examples which show that our algorithms can be faster than the standard Arnoldi method while achieving comparable accuracy.
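
For context, the standard (non-randomized) Arnoldi approximation that the talk seeks to accelerate can be sketched as follows, here for f = exp:

    import numpy as np
    from scipy.linalg import expm

    # f(A) b is approximated by ||b|| V_m f(H_m) e_1, where V_m is an
    # orthonormal Krylov basis and H_m the compression of A onto the subspace.
    def arnoldi_fAb(A, b, m, f=expm):
        n = len(b)
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        beta = np.linalg.norm(b)
        V[:, 0] = b / beta
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):              # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:             # lucky breakdown
                m = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        return beta * V[:, :m] @ f(H[:m, :m])[:, 0]

    rng = np.random.default_rng(0)
    n = 500
    A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    approx = arnoldi_fAb(A, b, m=40)
    exact = expm(A) @ b
    print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))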







Apr 21
Tobias Breiten
TU Berlin

Computing the Lyapunov function of a system plays a crucial role in optimal feedback control, for example when the policy iteration is used. This talk will focus on the Lyapunov function of a nonlinear autonomous finite-dimensional dynamical system which will be rewritten as an infinite-dimensional linear system using the Koopman operator. Since this infinite-dimensional system has the structure of a weak-\(*\) continuous semigroup in a specially weighted \(L_p\)-space, one can establish a connection between the solution of an operator Lyapunov equation and the desired Lyapunov function. It will be shown that the solution to this operator equation exhibits rapid eigenvalue decay, which justifies finite rank approximations with numerical methods. The usefulness for numerical computations will also be demonstrated with two short examples. This is joint work with Bernhard Höveler (TU Berlin).







Apr 28
Cankat Tilki
Virginia Tech

Dynamic Mode Decomposition (DMD) and its variants have been successfully used in data-driven modeling of dynamical systems in diverse applications. DMD has been generalized so that it admits external forcing and outputs, giving rise to input-output DMD (ioDMD). DMD and ioDMD work with full state measurements. However, in practice one does not always have access to full measurements. This leads to versions of DMD, such as delay-DMD, that use partial state measurements rather than full ones. One of the recent techniques that works with partial state information in a DMD setting is Wavelet-DMD (WDMD). The main idea behind this method is to create observables based on the stationary wavelet coefficients of the available measurements and to apply ioDMD with these observables. Specifically, WDMD uses the maximal overlap discrete wavelet transform to generate an auxiliary state variable from partial measurements, then uses ioDMD to analyze the dynamics with respect to this auxiliary variable and creates a data-driven model for the underlying dynamics. In this talk, we demonstrate that WDMD can be represented formally in the context of Extended DMD with a specific choice of observables, thus establishing the connection between WDMD and the extensions of DMD. We also provide numerical examples to illustrate the theory.







Sam Bender
Virginia Tech

Time-periodic systems are ubiquitous both naturally and as engineered systems. This is often a consequence of periodic forcing due to rotation (e.g., the Earth's rotation generates both tidal gravity forces and diurnal temperature gradients that cyclically drive atmospheric and ocean flows; gyroscopic forces can generate significant vibration and noise in vehicles). One particular example of interest is tidal currents, which are modeled with nonlinear PDEs. Linearization around a periodic solution leads to linear time-periodic PDEs, then further discretization in space leads to large systems of time-periodic ODEs. In this presentation, I investigate computational strategies for model order reduction techniques that further simplify these time-periodic systems of ODEs into constant coefficient systems.







May 1
Perfect Y. Gidisu
TU Eindhoven

A CUR factorization is a type of low-rank matrix decomposition that approximates a given matrix using a small subset of its rows and columns as bases for its row and column spaces. Unlike traditional low-rank decompositions, which use orthonormal bases, a CUR factorization offers advantages such as preserving sparsity and non-negativity, improved data interpretation, and reduced storage requirements. Mathematically, we aim to solve a combinatorial problem where we select a subset of rows and columns from a data matrix to construct thin matrices \(C\) and \(R\). The goal is to minimize the approximation error over all possible choices for \(C\) and \(R\). An exact solution to this problem is intractable, so researchers seek efficient mathematical methods that yield a solution not worse than the optimal by some factor. However, finding deterministic algorithms that result in a modest factor is still an open problem. We have developed deterministic algorithms that seek to minimize this factor while also extending the CUR factorization to multiple matrices.
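
A common deterministic baseline, column and row selection by pivoted QR followed by an optimal middle factor, gives the flavor of a CUR factorization (the talk's algorithms are more sophisticated than this sketch):

    import numpy as np
    from scipy.linalg import qr

    def cur(A, k):
        # Select k columns and k rows of A via QR with column pivoting.
        _, _, col_piv = qr(A, pivoting=True, mode='economic')
        _, _, row_piv = qr(A.T, pivoting=True, mode='economic')
        cols, rows = col_piv[:k], row_piv[:k]
        C, R = A[:, cols], A[rows, :]
        U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # optimal middle factor
        return C, U, R

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 15)) @ rng.standard_normal((15, 300))  # rank 15
    C, U, R = cur(A, 15)
    print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # ~ machine eps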







Fall 2022

Sep 09
Matthew Brown
Virginia Tech

ARC is Virginia Tech's centralized resource for research computing at scale. ARC hosts clusters of computational equipment designed to be a platform for high performance computing (HPC) and based on standard models for HPC in academic and industrial settings. In addition, ARC provides data storage, software, and application interfaces to lower barriers to entry along with support, instruction, and consultation services. This talk will provide an overview of ARC's systems, a brief introduction to some standard parallelization concepts which are fundamental to computational system design, and some "getting started" information for those interested in using ARC systems.







Sep 16
Linus Balicki
Virginia Tech

Rational approximation represents a powerful tool for accurately capturing complex system dynamics via low-order models. The adaptive Antoulas-Anderson (AAA) algorithm combines the benefits of interpolatory methods and least-squares approximation to compute univariate rational approximants from a set of function samples. A generalization of the AAA algorithm which enables approximation via multivariate rational functions has been introduced as the parametric AAA (p-AAA) algorithm. The method has been successfully used to obtain low-order surrogate models which precisely capture the behaviour of parametric dynamical systems and stationary models. I will present recent modifications to p-AAA which increase its viability for practical use cases, among other things by computing real-valued state-space representations of the p-AAA approximant and by improving the conditioning of the numerical computations at various steps of the algorithm.
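
For orientation, the core loop of the univariate AAA algorithm can be sketched in a few lines (this is the textbook greedy/least-squares iteration, not the p-AAA implementation discussed in the talk):

    import numpy as np

    def aaa(Z, F, tol=1e-12, mmax=50):
        Z, F = np.asarray(Z, complex), np.asarray(F, complex)
        mask = np.ones(len(Z), dtype=bool)    # points not yet support points
        zj, fj = [], []
        R = np.full(len(Z), F.mean())         # current rational fit
        for _ in range(mmax):
            j = int(np.argmax(np.abs(F - R) * mask))       # worst-fit point
            zj.append(Z[j]); fj.append(F[j]); mask[j] = False
            zv, fv = np.array(zj), np.array(fj)
            C = 1.0 / (Z[mask, None] - zv[None, :])        # Cauchy matrix
            L = F[mask, None] * C - C * fv[None, :]        # Loewner matrix
            w = np.linalg.svd(L)[2][-1].conj()   # weights: min. sing. vector
            R = F.copy()
            R[mask] = (C @ (w * fv)) / (C @ w)   # barycentric evaluation
            if np.max(np.abs(F - R)) <= tol * np.max(np.abs(F)):
                break
        return zv, fv, w

    Z = np.linspace(-1.0, 1.0, 500)
    zj, fj, w = aaa(Z, np.tanh(5 * Z))
    print(len(zj), "support points for tanh(5x)")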







Sep 23
Ning Wan
Virginia Tech

Anderson acceleration is a method for speeding up the convergence of fixed-point iterations; it is widely used within the self-consistent field (SCF) iteration in quantum chemistry and physics computations. In this talk, we will discuss the convergence analysis and numerical behavior of the Anderson acceleration method.
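
A compact sketch of the (Type-II, windowed) variant commonly found in the literature, with a linear contraction as a test problem (the window size and test matrix are illustrative):

    import numpy as np

    def anderson(g, x0, m=5, maxit=500, tol=1e-10):
        x = x0.copy()
        gx_hist, f_hist = [], []
        for it in range(maxit):
            gx = g(x)
            f = gx - x                            # fixed-point residual
            if np.linalg.norm(f) < tol:
                return x, it
            gx_hist.append(gx); f_hist.append(f)
            gx_hist, f_hist = gx_hist[-(m + 1):], f_hist[-(m + 1):]
            if len(f_hist) == 1:
                x = gx                            # plain fixed-point step
            else:
                dF = np.diff(np.array(f_hist), axis=0).T  # residual updates
                dG = np.diff(np.array(gx_hist), axis=0).T
                gamma = np.linalg.lstsq(dF, f, rcond=None)[0]
                x = gx - dG @ gamma               # Anderson mixing step
        return x, maxit

    # Linear contraction x = Mx + c with rate 0.95: the plain iteration needs
    # roughly 450 steps for 1e-10; Anderson acceleration needs far fewer.
    rng = np.random.default_rng(0)
    n = 100
    S = rng.standard_normal((n, n)); S = (S + S.T) / 2
    M = 0.95 * S / np.linalg.norm(S, 2)
    c = rng.standard_normal(n)
    x, iters = anderson(lambda x: M @ x + c, np.zeros(n))
    print("converged in", iters, "iterations")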







Sep 30
Jonathan Baker
Virginia Tech

Eigenvalue veering is a well-known phenomenon affecting most problems involving spectra of parameterized symmetric matrices. As a symmetric matrix changes, its (real) eigenvalues may change, but they do not typically combine into a double eigenvalue. When the eigenvalues are plotted against the matrix parameter, the eigenvalues may approach each other, but they appear to "veer" away to avoid each other. What's more, the eigenvectors of the veering eigenvalues seem to "jump" from one eigenvalue to the other. It turns out that veering and jumping tend to occur in parameterized nonlinear eigenvalue problems as well. In my presentation, I'll introduce this topic more thoroughly with examples and some theory.
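
A two-by-two toy example already exhibits the phenomenon: for coupling eps = 0 the eigenvalue branches of A(t) cross, while any eps > 0 opens a gap and the branches veer (a minimal sketch, with our illustrative parameterization):

    import numpy as np

    # Eigenvalues of A(t) are 1 +- sqrt(t^2 + eps^2): they cross at t = 0 when
    # eps = 0 and avoid each other by a gap of 2*eps otherwise.
    def branches(eps, ts):
        return np.array([np.linalg.eigvalsh(np.array([[1.0 + t, eps],
                                                      [eps, 1.0 - t]]))
                         for t in ts])

    ts = np.linspace(-1, 1, 9)
    print("eps = 0   :", branches(0.0, ts)[:, 0])   # branches cross at t = 0
    print("eps = 0.1 :", branches(0.1, ts)[:, 0])   # gap 2*eps: veering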







Oct 14
Eric de Sturler
Virginia Tech

We use the Golub-Kahan bidiagonalization algorithm with hybrid projection to solve regularized inverse problems when only a limited part of the matrix is available at a time and only limited memory can be used. The algorithm allows accessing the matrix one block at a time; this can be a block of rows or a block of columns (and a mix of these in the future). This is useful for extremely large problems, for fast processors with limited (local) memory, and for problems where we want to compute or update approximate solutions while data is coming in. It also allows us to combine randomization with iterative solvers. The possibility of solving a linear system with only a subset of columns of the matrix available at a time also allows a range of interesting solution strategies for general matrices.







Petar Mlinarić
Virginia Tech

The iterative rational Krylov algorithm (IRKA) is a well-known method for \(\mathcal{H}_2\)-optimal model order reduction (MOR) of linear time-invariant systems. The \(\mathcal{H}_2\)-optimal MOR problem amounts to finding the distance between a point and a set in the Hardy \(\mathcal{H}_2\) space, where the point represents the full-order model and the set a subset of reduced-order models. We consider the Riemannian embedded submanifold of reduced-order models of fixed McMillan degree in the \(\mathcal{H}_2\) space and explore whether IRKA can be interpreted as a Riemannian optimization method over this manifold.







Oct 17 (10-11am)
Daniel Appelö
Michigan State U

We discuss the WaveHoltz iteration for solving the Helmholtz equation. Our method makes use of time-domain methods for wave equations to design frequency-domain Helmholtz solvers. We show that the WaveHoltz iteration we propose results in a positive definite linear system even though we are solving the Helmholtz equation. A unique "free lunch" property that WaveHoltz possesses allows us to solve for multiple frequencies at the cost of a single solve. We will present numerical examples, using various discretization techniques, that show that our method can be used to solve fairly large problems with rather high wave numbers.







Oct 21
Christian Frederiksen
Tulane U

One is often interested in estimating functional parameters in a partial differential equation given sparse and noisy observations of the solution. A Bayesian statistical methodology provides a comprehensive approach for such problems, and establishing posterior consistency is an important step in validating this approach. In this talk I will introduce both posterior consistency and the Bayesian approach to inverse problems and present a newly developed abstract framework for establishing posterior consistency in PDE inverse problems. The abstract nature of the framework makes it easily adaptable to new problems, unlike existing results, which focus on specific problems. Additionally, and quite significantly, it allows for the use of fixed Gaussian priors and different observation types, both of which are absent from the existing literature on PDE inverse problems. The talk should be readily accessible to anyone with a decent understanding of basic probability and PDE theory. This is joint work with Nathan Glatt-Holtz.







Ashlyn McDonald
Virginia Tech

Inverse problems have a wide variety of applications in medicine, environmental sciences, astronomy, and beyond, but these problems are often ill-posed for a number of reasons. In order to combat this, we can use prior knowledge in the form of hyperparameters to find approximate solutions to inverse problems. To begin, I'll provide some basic information about inverse problems and discuss previous work that has been done using machine learning to find hyperparameters. From there, I'll discuss our approach to using deep neural networks to learn multiple hyperparameters and demonstrate a few numerical results via an example in tomography.







Oct 28
Serkan Güğercin
Virginia Tech

We introduce a non-intrusive data-driven time-domain formulation of balanced truncation (BT) for bilinear control systems. We build on our recent method [Gosea, Gugercin, Beattie, SIAM SISC, 2022] that recasts the classic BT method for linear time invariant systems as a data-driven method requiring only evaluations of either transfer function values or impulse responses. We extend the domain of applicability of this non-intrusive data-driven method to bilinear systems, a class of weakly nonlinear systems. This is a joint work with Ion Victor Gosea (MPI Magdeburg), Igor Pontes Duff (MPI Magdeburg), and Christopher Beattie (VT).







Johann Rudi
Virginia Tech

The main driver for the presented computational methods is the goal of simulating Earth's mantle convection, which models the motion of rock on 10,000-year time scales from the surface down to a depth of roughly 3000 km. While Earth's mantle flow is a fundamental geophysical process, enormous knowledge gaps remain. The reason is that realistic mantle models pose computational challenges due to highly nonlinear rheologies, severe heterogeneities, anisotropies, and wide ranges of spatial scales. We present advances in nonlinear implicit solvers for Earth's instantaneous mantle flow governed by nonlinear Stokes PDEs: (1) heterogeneity-robust Schur complement preconditioning; (2) advancing Newton's method for the computational challenges of viscoplastic Stokes flow.







Nov 04
Yichen Guo
Virginia Tech

In finite element methods for partial differential equations, the computed solution approximates the exact solution of the partial differential equation in a discretization space and is obtained by solving a linear system of equations with iterative methods. A natural question is when to stop the iterations so as to obtain a sufficiently accurate solution while avoiding over-solving the linear system. In this talk, I will introduce several methods to estimate the energy norm of the difference between the exact solution and the approximate solution obtained when the iteration is stopped by some criterion. We will start with error estimation in the conjugate gradient algorithm linked with Gauss quadrature, and then discuss a polynomial-degree-robust method to estimate the total error using flux reconstructions.
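
The Gauss-quadrature connection yields a practical delay-based estimate built entirely from quantities CG computes anyway; a sketch follows, with the delay d as an illustrative tuning parameter:

    import numpy as np

    # In exact arithmetic, ||x - x_k||_A^2 - ||x - x_{k+d}||_A^2 equals
    # sum_{j=k}^{k+d-1} gamma_j ||r_j||^2, so the sum is a lower estimate of
    # the squared A-norm error at step k (Hestenes-Stiefel / Strakos-Tichy).
    def cg_with_error_estimate(A, b, maxit, d=4):
        x, r = np.zeros(len(b)), b.copy()
        p, rr = r.copy(), r @ r
        gammas, res2, xs = [], [], []
        for _ in range(maxit):
            Ap = A @ p
            gamma = rr / (p @ Ap)
            gammas.append(gamma); res2.append(rr); xs.append(x.copy())
            x = x + gamma * p
            r = r - gamma * Ap
            rr_new = r @ r
            p = r + (rr_new / rr) * p
            rr = rr_new
        est = [sum(g * s for g, s in zip(gammas[k:k + d], res2[k:k + d]))
               for k in range(maxit - d)]
        return x, np.sqrt(np.array(est)), xs

    rng = np.random.default_rng(0)
    n = 100
    Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
    A = Q @ np.diag(np.linspace(1, 100, n)) @ Q.T   # SPD, condition number 100
    b = rng.standard_normal(n)
    x, est, xs = cg_with_error_estimate(A, b, maxit=60)
    x_star = np.linalg.solve(A, b)
    true_err = [np.sqrt((x_star - xk) @ A @ (x_star - xk)) for xk in xs[:len(est)]]
    print(np.max(est / np.array(true_err)))   # ratios stay below ~1: lower bound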







Sean Reiter
Virginia Tech

Quantum chemistry provides powerful tools for understanding the rich electronic structure of molecular systems. One major challenge faced by standard implementations of traditional wave function methods is that they incur significant computational costs, e.g., scaling with complexity \(\mathcal{O}(N^5)\) and higher, where \(N\) is proportional to the size of the chemical system of interest. This necessitates efficient implementations of these methods that take advantage of modern parallelism in order to study problems of even modest size. In this talk, we discuss a new integral-driven implementation of a selected configuration interaction method, CIPSI (Configuration Interaction using a Perturbation Selection made Iteratively). Our contributions include an integral-driven framework for implementing the Slater-Condon rules, which provides an alternative means for deriving Hamiltonian matrix elements in a particular basis of Slater determinants. We also discuss the overall parallelization scheme, as well as a distributed "matrix-free" implementation of Davidson's diagonalization algorithm.







Nov 11
Dan Folescu
Virginia Tech

Nonlinear operators present unique obstacles to the stable yet efficient characterization of their spectra. In this setting, contour integral methods (CIMs) are particularly appealing: the trapezoid rule applied to conformal images of the unit circle yields exponentially convergent numerical quadratures and great potential for parallelization. Further, one can interpret the nonlinear eigenvalue problem (NLEVP) as a problem of data realization for a corresponding transfer function. These qualities provide a fertile foundation for the exploration of optimal data sampling strategies, an aspect which will be showcased in this talk. We will begin with a motivating example and a brief systems-theoretic description of the NLEVP. Care will be taken to delineate Hankel and single-point Loewner strategies, with investigations via robust visual codes to follow. Finally, we will envision near-future tools of the trade, and end with a question: what features would help you find a rank drop in a \(\sigma\)-stack?
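
For readers new to CIMs, here is a hedged sketch of a basic contour integral eigensolver in the style of Beyn's algorithm (probe count, node count, and rank tolerance are illustrative choices; the sanity check uses a linear T, though the code applies verbatim to genuinely nonlinear ones):

    import numpy as np

    # Trapezoid-rule moments of T(z)^{-1} on a circle, then a small linear
    # eigenproblem recovers the eigenvalues of T(z)v = 0 inside the contour.
    def beyn(T, center, radius, n, k, N=64, seed=0):
        rng = np.random.default_rng(seed)
        V = rng.standard_normal((n, k))
        A0 = np.zeros((n, k), dtype=complex)
        A1 = np.zeros((n, k), dtype=complex)
        for j in range(N):                    # trapezoid rule on the circle
            z = center + radius * np.exp(2j * np.pi * j / N)
            S = np.linalg.solve(T(z), V)
            w = (z - center) / N              # node weight for (1/2 pi i) dz
            A0 += w * S
            A1 += w * z * S
        W, s, Vh = np.linalg.svd(A0, full_matrices=False)
        m = int(np.sum(s > 1e-10 * s[0]))     # eigenvalue count inside contour
        B = (W[:, :m].conj().T @ A1 @ Vh[:m].conj().T) / s[:m]
        return np.linalg.eigvals(B)

    # Sanity check: linear T(z) = zI - D has eigenvalues diag(D).
    D = np.diag([0.3, 0.5, 1.8, 2.5, -3.0])
    T = lambda z: z * np.eye(5) - D
    print(beyn(T, center=0.0, radius=1.0, n=5, k=5))   # ~ {0.3, 0.5}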







Sam Bender
Virginia Tech

Time-periodic systems are ubiquitous both naturally and as engineered systems. This is often a consequence of periodic forcing due to rotation (e.g., the Earth's rotation generates both tidal gravity forces and diurnal temperature gradients that cyclically drive atmospheric and ocean flows; gyroscopic forces can generate significant vibration and noise in vehicles). One particular example of interest is tidal currents, which are modeled with nonlinear PDEs. Linearization around a periodic solution leads to linear time-periodic PDEs, then further discretization in space leads to large systems of time-periodic ODEs. In this presentation, I investigate computational strategies for model order reduction techniques that further simplify these time-periodic systems of ODEs into constant coefficient systems.







Nov 18
Arvind K. Saibaba
NC State

For real symmetric matrices that are accessible only through matrix-vector products, we present Monte Carlo estimators for computing the diagonal elements. Our probabilistic bounds for normwise absolute and relative errors apply to Monte Carlo estimators based on random Rademacher, sparse Rademacher, and normalized and unnormalized Gaussian vectors. The novel use of matrix concentration inequalities in our proofs represents a systematic model for future analyses. Our bounds mostly do not depend on the matrix dimension and imply that the accuracy of the estimators increases with the diagonal dominance. I will also present LanczosMC, a method for estimating the diagonal of the inverse of a symmetric positive definite matrix that combines a preconditioned Lanczos low-rank approximation of the inverse with a Monte Carlo estimator. The accuracy of the methods is demonstrated by numerical experiments on synthetic test matrices and applications to inverse problems, sensitivity analysis, and network science.
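
In its simplest Rademacher form, the estimator needs only matrix-vector products; a minimal sketch (sample size and test matrix are illustrative):

    import numpy as np

    # With v_k having i.i.d. +-1 entries, E[v * (A v)] = diag(A) elementwise,
    # so averaging N probes gives an unbiased diagonal estimate.
    def diag_estimate(matvec, n, N=200, seed=0):
        rng = np.random.default_rng(seed)
        d = np.zeros(n)
        for _ in range(N):
            v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
            d += v * matvec(v)                    # elementwise product
        return d / N

    rng = np.random.default_rng(1)
    n = 300
    A = np.diag(np.linspace(1, 10, n)) + 0.1 * rng.standard_normal((n, n))
    A = (A + A.T) / 2                             # symmetric test matrix
    d_est = diag_estimate(lambda v: A @ v, n)
    print(np.max(np.abs(d_est - np.diag(A))))     # error shrinks like 1/sqrt(N)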







Dec 02
Alan Garcia
Virginia Tech

Electrical power grids are among the biggest and most important industrial systems, so we need proper mathematical models (the so-called swing equations) to understand their dynamics and make predictions about their performance. We can use a DMD approach to recover matrices that reproduce the measurements, but this is not enough to estimate the dynamical parameters of the model. We need to recover matrices that preserve the sparsity pattern and underlying structural properties, such as being a symmetric Laplacian, diagonal, or identity matrix. We will present a DMD framework that includes a regularization step in which we impose the underlying structure of the model.
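
A toy version of the structure-imposing idea, simplified to a single projection step (the actual framework's regularization may well differ):

    import numpy as np

    # Fit x' ~ A x by least squares, then project A onto Laplacian-structured
    # matrices (symmetric, zero row sums). All data here are synthetic.
    rng = np.random.default_rng(0)
    n, T = 6, 200
    W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
    L_true = np.diag(W.sum(axis=1)) - W          # symmetric graph Laplacian

    X = rng.standard_normal((n, T))              # state snapshots
    Y = L_true @ X + 1e-3 * rng.standard_normal((n, T))   # noisy derivatives

    A_ls = Y @ np.linalg.pinv(X)                 # unstructured DMD-style fit
    A_st = (A_ls + A_ls.T) / 2                   # impose symmetry
    np.fill_diagonal(A_st, A_st.diagonal() - A_st.sum(axis=1))  # zero row sums
    for A, name in ((A_ls, "unstructured"), (A_st, "structured")):
        print(name, np.linalg.norm(A - L_true) / np.linalg.norm(L_true))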