Dan Gabriel Cacuci holds the South Carolina SmartState™ Endowed Chair and is Director of the Center of Economic Excellence in Nuclear Science and Energy at the University of South Carolina (Columbia, SC, USA, since 2012). He is the Founding Editor-in-Chief of the Journal of Nuclear Energy (2019-present) and served as Editor-in-Chief of Nuclear Science and Engineering, the international research journal of the American Nuclear Society (1984-2020). He received his M.S., M.Phil., and Ph.D. degrees in applied physics and nuclear engineering from Columbia University in the City of New York. His scientific expertise encompasses the following areas: predictive modeling (including sensitivity and uncertainty analysis, data assimilation, model calibration, and inverse problems) of large-scale physical and engineering systems; large-scale scientific computations; and nuclear engineering (reactor multi-physics, dynamics, and safety). Professor Cacuci’s career encompasses both academia and large-scale multidisciplinary research centers. His teaching and research experience as a full professor (tenured or visiting) at leading academic institutions includes appointments at the University of Tennessee (1983-88), the University of California at Santa Barbara (1988-90), the University of Illinois at Urbana-Champaign (1990-93), the University of Virginia (1993-2000), the University of Michigan (1995-2000), the University of California at Berkeley (2001-06), the Royal Institute of Technology, Stockholm (2003-04), the French National Institute for Nuclear Sciences and Technologies in Paris (2006-09), the University of Karlsruhe/KIT (Ordinarius Chaired Professor and Department Director, 1992-2012), North Carolina State University (2010-12), and Imperial College London (2013-2017).
Professor Cacuci’s research and management experience at leading national research centers includes positions as senior section head at Oak Ridge National Laboratory (1976-1988), Institute Director at the Nuclear Research Center Karlsruhe in Germany (1993-2004), and Scientific Director of the Nuclear Energy Directorate/Sector, Commissariat à l’Énergie Atomique in France (2004-2009). Professor Cacuci is a member of several European national and international academies and has received many prestigious awards, including four titles of Doctor Honoris Causa, the E. O. Lawrence Award and Gold Medal (US DOE, 1998), the Presidential Citation of the American Nuclear Society (2020), the Fred C. Davidson Distinguished Scientist Award, Citizens for Nuclear Technology Awareness (2019), the Qian Sanqiang Award, Shanghai, China (2017), the Annual Distinguished Lecture, Korea Advanced Institute of Science and Technology (KAIST), the Arthur Holly Compton Award (ANS, 2011), the Eugene P. Wigner Reactor Physics Award (ANS, 2003), the Glenn Seaborg Medal (ANS, 2002), ANS Fellow (1986), and the Alexander von Humboldt Prize for Senior Scholars (Germany, 1990). Professor Cacuci has served on numerous international committees, was founding coordinator of the EURATOM Integrated Project NURESIM (European Platform for Nuclear Reactor Simulation, 2004-2008), and was founding coordinator (2004-2007) of the Coordinated Action for establishing a Sustainable Nuclear Fission Technology Platform (SNF-TP) in Europe. He has made over 600 presentations worldwide, has authored seven book chapters and 290 peer-reviewed articles, and has published seven books.
BERRU Predictive Modeling: Best Estimate Results with Reduced Uncertainties
The results of measurements and computations are never perfectly accurate. On the one hand, the results of measurements inevitably reflect the influence of experimental errors, imperfect instruments, or imperfectly known calibration standards. Around any reported experimental value, therefore, there always exists a range of values that may also be plausibly representative of the true but unknown value of the measured quantity. On the other hand, computations are afflicted by errors stemming from numerical procedures, uncertain model parameters, boundary and initial conditions, and/or imperfectly known physical processes or problem geometry. Therefore, knowing just the nominal values of experimentally measured or computed quantities is insufficient for applications; the quantitative uncertainties accompanying these nominal values are also needed. Extracting “best estimate” values for model parameters and predicted results, together with “best estimate” uncertainties for these parameters and results, requires combining the experimental and computational data, including their accompanying uncertainties (standard deviations and correlations). The goal of predictive modeling is to perform such a combination, which requires reasoning from incomplete, error-afflicted, and occasionally discrepant information, in order to predict future outcomes based on all recognized errors and uncertainties. In contradistinction to the methods customarily used for data assimilation, the BERRU predictive modeling methodology presented in this book uses the maximum entropy principle to avoid the need for minimizing an arbitrary, user-chosen “cost functional” (usually a quadratic functional representing the weighted errors between measured and computed responses), thus generalizing and significantly extending the customary “data adjustment” and/or 4D-VAR data assimilation procedures.
The acronym BERRU stands for “Best-Estimate Results with Reduced Uncertainties,” because the application of the BERRU predictive modeling methodology reduces the predicted standard deviations of both the best-estimate predicted responses and parameters. The BERRU predictive modeling methodology also provides a quantitative indicator, constructed from response sensitivities and from response and parameter covariance matrices, for determining the consistency (agreement or disagreement) among the a priori computational and experimental information available for parameters and responses. Furthermore, the maximum entropy principle ensures that the more information is assimilated, the more the standard deviations of the predicted responses and parameters are reduced, since the introduction of additional knowledge reduces the state of ignorance (as long as the additional information is consistent with the underlying physical system), as would also be expected based on principles of information theory.
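As a back-of-the-envelope illustration of why assimilating consistent information reduces uncertainty, one may consider the simplest possible case: a single scalar response for which one computed value and one measured value are available, each with its own standard deviation. The inverse-variance combination below is a generic textbook device, not the BERRU formalism itself, and all numbers are invented for illustration; it nonetheless exhibits the two features discussed above, namely a combined standard deviation smaller than either input and a chi-square-type consistency indicator.

```python
def combine(comp_mean, comp_sd, meas_mean, meas_sd):
    """Inverse-variance combination of one computed and one measured value
    of the same response (a minimal sketch, not the BERRU methodology)."""
    w_c = 1.0 / comp_sd ** 2            # weight of the computed value
    w_m = 1.0 / meas_sd ** 2            # weight of the measured value
    best_mean = (w_c * comp_mean + w_m * meas_mean) / (w_c + w_m)
    # the combined standard deviation is always smaller than either input
    best_sd = (w_c + w_m) ** -0.5
    # consistency indicator: squared discrepancy weighted by the combined
    # variance; values much larger than 1 signal disagreement between the
    # computational and experimental information
    chi2 = (comp_mean - meas_mean) ** 2 / (comp_sd ** 2 + meas_sd ** 2)
    return best_mean, best_sd, chi2

# invented numbers: computed response 10.0 +/- 0.5, measured 10.4 +/- 0.3
best, sd, chi2 = combine(10.0, 0.5, 10.4, 0.3)
```

Here the best-estimate value lies between the computed and measured values (closer to the more precise measurement), its standard deviation is smaller than 0.3, and the consistency indicator is well below 1, signaling agreement.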
In general terms, the modeling of a physical system and/or the result of an indirect experimental measurement requires consideration of the following modeling components:
(a) a mathematical model comprising linear and/or nonlinear equations (algebraic, differential, and integral) that relate the system's independent variables and parameters to the system's state (i.e., dependent) variables;
(b) inequality and/or equality constraints that delimit the ranges of the system's parameters;
(c) one or several computational results, customarily referred to as system responses (or objective functions, or indices of performance), which are computed using the mathematical model; and
(d) experimentally measured responses, with their respective nominal (mean) values and uncertainties (variances, covariances, skewness, kurtosis, etc.).
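The four components above can be made concrete on a deliberately trivial toy problem; everything below (the one-parameter model, the numbers, the measured value) is invented solely to map code onto items (a)-(d):

```python
def state(alpha, b=2.0):
    """Solve the toy model equation alpha * x = b for the state variable x."""
    # (b) an inequality constraint delimiting the parameter's range
    if alpha <= 0.0:
        raise ValueError("parameter alpha outside its admissible range")
    # (a) the mathematical model relating parameter alpha and source b
    #     to the state (dependent) variable x
    return b / alpha

def response(alpha):
    # (c) a system response computed from the model's state
    return state(alpha)

computed_response = response(alpha=0.5)
# (d) an experimentally measured response: nominal value and std. deviation
measured_response = (3.8, 0.2)
```

Even in this one-parameter setting, the computed response (4.0) and the measured response (3.8 ± 0.2) differ, which is exactly the situation the predictive modeling combination described above is designed to reconcile.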
Predictive modeling comprises three key elements, namely model calibration, quantification of the validation domain, and model extrapolation. Model calibration addresses the integration of experimental data for the purpose of updating the data of the computer/numerical simulation model. Important components underlying model calibration include quantification of uncertainties in the data and the model, quantification of the biases between model predictions and experimental data, and the computation of the sensitivities of the model responses to the model’s parameters. For large-scale models, the current model calibration methods are hampered by the significant computational effort required for computing exhaustively and exactly the requisite response sensitivities. Reducing this computational effort is paramount, and methods based on adjoint sensitivity models show great promise in this regard, as will be demonstrated in this book. The quantification of the validation domain underlying the model under investigation requires estimation of contours of constant uncertainty in the high-dimensional space that characterizes the application of interest. In practice, this involves the identification of areas where the predictive estimation of uncertainty meets specified requirements for the performance, reliability, or safety of the system of interest. The conceptual and mathematical development of methods for quantifying the validation domain is in a relatively incipient stage. Model extrapolation aims at quantifying the uncertainties in predictions under new environments or conditions, including both untested regions of the parameter space and higher levels of system complexity in the validation hierarchy. Extrapolation of models and the resulting increase of uncertainty are poorly understood, particularly the estimation of uncertainty that results from nonlinear coupling of two or more physical phenomena that were not coupled in the existing validation database.
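The customary linear “data adjustment” update that BERRU generalizes can be sketched in a few lines. The two-parameter/one-response setup and all matrices below are invented for illustration; the sensitivity matrix S is precisely the quantity whose efficient computation by adjoint methods is discussed above:

```python
import numpy as np

# prior (nominal) parameter values and their covariance (invented numbers)
alpha0 = np.array([1.0, 2.0])
C_alpha = np.diag([0.04, 0.09])

S = np.array([[1.5, -0.5]])      # response sensitivities dR/d(alpha)
r_comp = np.array([3.0])         # response computed at alpha0
r_meas = np.array([3.2])         # measured response
C_meas = np.array([[0.01]])      # measurement covariance

# covariance of the measured-minus-computed discrepancy
C_d = C_meas + S @ C_alpha @ S.T
gain = C_alpha @ S.T @ np.linalg.inv(C_d)

# calibrated ("adjusted") parameters and their reduced covariance
alpha_best = alpha0 + gain @ (r_meas - r_comp)
C_alpha_best = C_alpha - gain @ S @ C_alpha

# chi-square consistency indicator for the discrepancy
chi2 = float((r_meas - r_comp) @ np.linalg.inv(C_d) @ (r_meas - r_comp))
```

The diagonal entries of `C_alpha_best` are smaller than those of `C_alpha`, illustrating the reduction of parameter uncertainties upon assimilating a consistent measurement; a chi-square value much larger than 1 would instead flag an inconsistency between the computation and the experiment.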
The numerical results presented in this book were obtained by the author in collaboration with his former doctoral students, Drs. Madalina C. Badea, Erkan Arslan, Christine Latten, James J. Peltz and Federico Di Rocco, to whom the author wishes to express his deep gratitude for their contributions. The author is also grateful to Drs. Aurelian F. Badea, Ruxian Fang, and Jeffrey A. Favorite for their collaboration and contributions. Special thanks are due to the author’s long-time collaborator, Dr. Mihaela Ionescu-Bujor, for her very significant contributions to the BERRU predictive modeling methodology, and for constructively reviewing this book. Last, but not least, the author is grateful to the Springer Editorial team, especially to Dr. Christoph Baumann, Springer Nature, for their guidance throughout the publication process.