Making the Solar System Accessible with Trusted and Autonomous Systems
Abstract
A close partnership between people and semi-autonomous machines has enabled decades of space exploration, but to look farther, future systems must be more autonomous, enabling scientific exploration of regions that have thus far remained inaccessible. Models are essential to system autonomy. In most current designs, though, models are only implicit, especially at the system level (e.g., planning, execution, configuration and mode management, fault management). Models present in designers' understanding of a system inform its design, but do not overtly reside within that design. Accordingly, reasoning about models is necessarily limited to design time, when all required actions of the system must be elaborated a priori and then encoded (usually procedurally) into its behavior. Such implementations are laborious to develop and difficult to validate, often involving substantial trial and error due to the combinatorial complexity of this approach. Moreover, these procedurally based implementations are limited, being assured to work only when a priori assumptions hold during mission operation. In more advanced approaches, such problems are addressed by encoding models (typically in declarative form) that can be deployed in automated reasoning functions to determine actions appropriate to objectives and circumstances. Such models are generally more straightforward to develop and validate, while automated reasoning capabilities may be multipurpose and evolvable from project to project, yielding development advantages on both fronts. Furthermore, both models and reasoning engines, considered separately, are more amenable to formal methods of validation. With such model-based foundations in place, higher-level functions can more confidently assess dynamic situations and adjust objectives and priorities accordingly, with the underlying reasoning counted upon for reliability in a wider variety of circumstances.
Even machine learning and other data-driven approaches to autonomy, which can aid operation in more uncertain and dynamic regions, do so by adapting and improving models, and must rely on model-based foundations for integration into the overall system. Model-based reasoning capability therefore enables other aspects of autonomy as well. Integration grounded in model-based reasoning can help in important ways to establish trust in autonomous systems (i.e., reasonable assurance of their rational and explainable behavior). Machine learning alone cannot do this, but in a mixed approach, where machine learning produces revised models, supervisory control remains model-based and amenable to checking. Thus, capable model-based reasoning is a necessary precursor to the assured exploitation of machine learning. Advances in autonomous systems technology will dramatically increase science return by extending the reach, productivity, and robustness of planetary science missions. While not completely general, the model-based reasoning technologies available today can already take over many of the functions routinely exercised in current flight systems. To realize future benefits, we must transition now to model-based reasoning for the tasks we already understand best, even where legacy capability would suffice, as a vital step toward the ambitious missions we envision.
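The contrast drawn above, between procedurally encoded behavior and a declarative model consulted by a generic reasoning engine, can be sketched in miniature. The component, mode names, and commands below are hypothetical illustrations, not drawn from any flight system: the point is that system knowledge lives in a data structure that can be validated on its own, while the engine is a small, reusable search routine.

```python
from collections import deque

# Declarative model: modes of a hypothetical valve and the commanded
# transitions between them. This is the artifact a designer writes and
# validates; it encodes no procedures, only what transitions exist.
VALVE_MODEL = {
    "closed":  {"open_cmd": "open"},
    "open":    {"close_cmd": "closed", "vent_cmd": "venting"},
    "venting": {"close_cmd": "closed"},
}

def plan(model, start, goal):
    """Generic reasoning engine: breadth-first search over the declared
    transitions, returning a shortest command sequence reaching the goal
    mode, or None if the model says the goal is unreachable."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        mode, cmds = frontier.popleft()
        if mode == goal:
            return cmds
        for cmd, nxt in model.get(mode, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, cmds + [cmd]))
    return None

print(plan(VALVE_MODEL, "venting", "open"))  # ['close_cmd', 'open_cmd']
```

The same engine works unchanged for any model of this shape, which is the development advantage the abstract describes: new missions revise the model, not the reasoning code, and each part can be checked separately.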
- Publication:
- 43rd COSPAR Scientific Assembly. Held 28 January - 4 February
- Pub Date:
- January 2021
- Bibcode:
- 2021cosp...43E.201D