Verification, validation, and predictive capability in computational engineering and physics
Abstract
Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue. This paper presents our viewpoint of the state of the art in V&V in computational physics. (In this paper we refer to all fields of computational engineering and physics, e.g., computational fluid dynamics, computational solid mechanics, structural dynamics, shock wave physics, computational chemistry, etc., as computational physics.) We describe our view of the framework in which predictive capability relies on V&V, as well as other factors that affect predictive capability. Our opinions about the research needs and management issues in V&V are very practical: What methods and techniques need to be developed and what changes in the views of management need to occur to increase the usefulness, reliability, and impact of computational physics for decision making about engineering systems? We review the state of the art in V&V over a wide range of topics, for example, prioritization of V&V activities using the Phenomena Identification and Ranking Table (PIRT), code verification, software quality assurance (SQA), numerical error estimation, hierarchical experiments for validation, characteristics of validation experiments, the need to perform nondeterministic computational simulations in comparisons with experimental data, and validation metrics. We then provide an extensive discussion of V&V research and implementation issues that we believe must be addressed for V&V to be more effective in improving confidence in computational predictive capability. Some of the research topics addressed are development of improved procedures for the use of the PIRT for prioritizing V&V activities, the method of manufactured solutions for code verification, development and use of hierarchical validation diagrams, and the construction and use of validation metrics incorporating statistical measures. Some of the implementation topics addressed are the needed management initiatives to better align and team computationalists and experimentalists in conducting validation activities, the perspective of commercial software companies, the key role of analysts and decision makers as code customers, obstacles to the improved effectiveness of V&V, effects of cost and schedule constraints on practical applications in industrial settings, and the role of engineering standards committees in documenting best practices for V&V. There are 207 references cited in this review article.
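As an illustration of the code-verification practice named in the abstract, the sketch below applies the method of manufactured solutions to a 1D Poisson problem. This example is not taken from the paper; the manufactured solution, the finite-difference scheme, the `solve_poisson` helper, and the grid sizes are all assumptions chosen for brevity. The idea is the standard one: choose an exact solution a priori, derive the source term it implies, solve numerically, and compare the observed order of accuracy from a grid-refinement study against the scheme's formal order.

```python
# Minimal sketch (illustrative, not from the paper) of the method of manufactured
# solutions (MMS) for code verification: pick an exact solution, derive the source
# term it implies, solve numerically, and check the observed order of accuracy.
import numpy as np

def solve_poisson(n):
    """Solve -u''(x) = f(x) on [0,1], u(0)=u(1)=0, with a 2nd-order finite-difference scheme."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)             # interior grid points
    f = np.pi**2 * np.sin(np.pi * x)           # source manufactured from u_exact = sin(pi x)
    A = (np.diag(2.0 * np.ones(n)) -
         np.diag(np.ones(n - 1), 1) -
         np.diag(np.ones(n - 1), -1)) / h**2   # standard 3-point Laplacian
    u = np.linalg.solve(A, f)
    err = np.max(np.abs(u - np.sin(np.pi * x)))  # discretization error vs. manufactured solution
    return h, err

# Grid-refinement study: the observed order should approach the formal order (2).
(h1, e1), (h2, e2) = solve_poisson(40), solve_poisson(80)
p_observed = np.log(e1 / e2) / np.log(h1 / h2)
print(f"observed order of accuracy: {p_observed:.2f} (formal order: 2)")
```

Agreement between the observed and formal orders of accuracy is the usual acceptance criterion in a code-verification test of this kind.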
- Publication:
- Applied Mechanics Reviews
- Pub Date:
- September 2004
- DOI:
- 10.1115/1.1767847
- Bibcode:
- 2004ApMRv..57..345O