Invited Plenary Speakers
William D. Collins, Lawrence Berkeley National Laboratory
Examining Earth's Fast Radiative Feedbacks Using Machine-Learning-Based Emulators of the Climate System
Abstract
The response of the climate system to increased greenhouse gases and other radiative perturbations is governed by a combination of fast and slow feedbacks. Slow feedbacks are typically activated in response to changes in ocean temperatures on decadal timescales and often manifest as changes in climatic state with no recent historical analogue. On the other hand, fast feedbacks can be activated in response to rapid atmospheric physical processes on timescales of weeks and are already operative in the present-day climate. This distinction implies that the physics of fast radiative feedbacks is present in the historical reanalyses that have served as the training data for many of the most successful recent machine-learning-based emulators of weather and climate. In addition, these feedbacks are functional under the historical boundary conditions pertaining to the top-of-atmosphere radiative balance and sea-surface temperatures. We discuss our work using historically trained weather emulators to characterize and quantify fast radiative feedbacks without the need to retrain for conditions pertinent to a future warmer climate.
Alice Gabriel, University of California, San Diego
Cracking the Earthquake Code - Digital Twins for Earthquake Physics
Abstract
Earthquakes remain one of nature’s most unpredictable and destructive phenomena, occurring right beneath our feet. Neither earthquakes nor their secondary hazards, including tsunamis, can be predicted, yet they can cause severe casualties and hundreds of billions to trillions of dollars in losses even in well-prepared societies.
Earthquakes are governed by highly nonlinear dynamics and vary in size over many orders of magnitude. Recent large events, such as the 2023 Turkey earthquakes, ruptured in complex multi-fault and multi-event sequences, “jumping” from one segment of a geologic fault system to another, “communicating” via subsurface stress transfers, and rupturing at “supershear” speeds once considered theoretically implausible. Despite dense seismic and geodetic networks and the large number of observed earthquakes, a predictive, physics-based understanding of the earthquake system remains elusive. A fundamental gap persists between long-term earthquake hazard assessment (decades to centuries), where hazard is approximated as time-independent, and operational statistical aftershock forecasting after large events.
We are entering a transformative era for earthquake science, driven by the integration of high-performance computing, dense observational data, large-scale physics-based modeling, and data science methodologies to develop digital twins (DTs) of earthquake systems: computational analogs dynamically updated with real-time observational data streams, designed to represent the complex behavior of earthquake processes for predictive decision-making. I will present physics-based forward and inverse models, closely integrated with interdisciplinary observations and utilizing supercomputing, that elucidate the complicated rupture dynamics of recent large earthquakes toward a comprehensive, physically consistent understanding of the mechanics of faults. However, none of the existing supercomputing methods are efficient enough for real-time use (early warning) or for full physics-based probabilistic seismic hazard assessment (routinely evaluating tens of thousands of complex models). Instead, the output of high-fidelity models can be used to construct a time-dependent reduced-order model: a low-dimensional yet accurate approximation of the earthquake forward simulations that is fast enough to evaluate new earthquake scenarios for rapid response and physics-based probabilistic seismic hazard assessment.
These components pave the way toward digital twins that not only simulate but anticipate seismic behavior in complex tectonic environments. The nonlinear and transient nature of earthquake systems provides natural testbeds to develop new ways of analyzing complex dynamics through DTs that can be generalized across Earth and engineering sciences.
Felix Herrmann, Georgia Institute of Technology
Context-Aware Digital Twin for Underground Storage
Abstract
We introduce an uncertainty-aware Digital Twin (DT) for monitoring and optimizing underground storage operations, with a focus on Geological Carbon Storage (GCS). In real-world scenarios, forward models are often misspecified due to uncertainties in subsurface dynamics and observation models. Our DT addresses this challenge by incorporating context-awareness into its neural networks to account for complexities in indirect seismic observations, including variability in rock-physics relations that link reservoir states (e.g., pressure and saturation) to time-lapse seismic responses.
To achieve this, we employ sensitivity-aware amortized Bayesian inference (SA-ABI), a simulation-based inference method that integrates sensitivity analysis into the training phase. This enables the DT to quantify and propagate uncertainty stemming from model discrepancies, particularly in rock-physics parameters. Computational efficiency is maintained through shared neural network weights that capture structural similarities across simulations based on different rock-physics models. This design supports fast amortized inference: once trained, the network can evaluate the impact of varying rock-physics parameters without retraining.
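The amortization idea above can be illustrated with a deliberately simple stand-in. The sketch below is not the SA-ABI method of the talk: it replaces the neural network with a single linear least-squares regression shared across all contexts, and the simulator, priors, and variable names are hypothetical. It only shows the workflow the abstract describes: train once on simulated (state, context, data) triples, then answer “what-if” rock-physics scenarios for new observations without retraining.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy simulator: a reservoir state theta (e.g., saturation)
# maps to a seismic observable through a rock-physics slope c, which plays
# the role of the DT's context variable.
def simulate(theta, c, noise=0.05):
    return c * theta + noise * rng.standard_normal(theta.shape)

# Amortized training set: draw (state, context) pairs from priors, simulate data.
N = 20_000
theta = rng.uniform(0.0, 1.0, N)        # prior over the reservoir state
context = rng.uniform(0.8, 1.2, N)      # prior over the rock-physics parameter
y = simulate(theta, context)

# One estimator shared across all contexts (analogous to shared network
# weights); features (y, c, y*c) let it adapt its answer to the context.
X = np.column_stack([np.ones(N), y, context, y * context])
w, *_ = np.linalg.lstsq(X, theta, rcond=None)

def posterior_mean(y_obs, c):
    """Instant inference for new data under context c -- no retraining."""
    return np.array([1.0, y_obs, c, y_obs * c]) @ w

# "What-if" scenario analysis: same observation, two rock-physics contexts.
print(posterior_mean(0.5, 0.9), posterior_mean(0.5, 1.1))
```

Because the forward map is y ≈ c·θ, the same observation implies a larger state estimate under the smaller rock-physics slope, which is exactly the kind of context sensitivity the DT is meant to expose.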
In addition to producing posterior distributions over reservoir states conditioned on observed seismic data, our approach supports “what-if” scenario analysis, where posterior samples can be drawn for different values of the DT’s context variables. The resulting context-aware DT improves decision-making under uncertainty and provides a flexible, scalable foundation for future extensions to other sensing modalities and subsurface processes.
Toward Real-Time Predictive Modeling of Wind Turbine Wakes: From Multiscale Methods to Reduced-Order Simulation of Complex Atmospheric Flows
Abstract
As wind energy continues to scale, accurate and efficient modeling of wind turbine wakes becomes essential for optimizing the performance, layout, and control of utility-scale wind farms. This talk presents a reduced-order modeling (ROM) framework developed by the CFSM group at the University of Calgary, built on Proper Orthogonal Decomposition (POD)-Galerkin projection and variational multiscale (VMS) turbulence modeling. Coupled with the Actuator Line Method (ALM) and mesh-based hyper-reduction strategies, the framework enables accurate, cost-efficient simulations of full-scale turbine wakes under realistic operating conditions. Recent extensions of this framework have incorporated stratified atmospheric flows and wake interactions among multiple turbines. We will present new results on the ALM-VMS-ROM’s application to large turbine arrays, showing its ability to capture long-range wake interactions and their impact on power losses and structural loads. Simulations of stratified flows reveal critical influences of thermal layering on wake recovery and turbine performance, highlighting the importance of coupling physical insight with scalable computation. We will also discuss challenges in translating high-fidelity modeling into real-time predictive tools for wind energy. The broader goal is to bridge the gap between simulation accuracy and operational practicality, bringing us closer to physics-aware digital twins for atmospheric flows and renewable energy systems.
[1] S. Dave and A. Korobenko, “Consistent reduced order modeling for wind turbine wakes using variational multiscale method and actuator line model”, Computer Methods in Applied Mechanics and Engineering, 2025, under review.