Learning implicitly in reasoning in PAC-Semantics
Abstract
We consider the problem of answering queries about propositional formulas using background knowledge that is represented partly as explicit formulas and partly as partially obscured examples drawn independently from a fixed probability distribution. Queries are answered with respect to a weaker semantics than usual, PAC-Semantics (introduced by Valiant, 2000), which is defined in terms of the distribution of examples. We describe a fairly general, efficient reduction from versions of this reasoning problem in which some of the background knowledge is not given explicitly as formulas, but is only learnable from the examples, to correspondingly limited versions of the decision problem for a proof system (e.g., bounded-space treelike resolution or bounded-degree polynomial calculus). Crucially, we never generate an explicit representation of the knowledge extracted from the examples, so the "learning" of the background knowledge is done only implicitly. As a consequence, the approach can make use of background-knowledge formulas that are not perfectly valid over the distribution, essentially the analogue of agnostic learning in this setting.
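The reduction can be pictured as a sampling-based decision procedure: on each partially obscured example, restrict both the query and the explicit knowledge base by the observed values and hand the restriction to a limited proof search; knowledge that would otherwise have to be learned is simply witnessed true and simplifies away. The Python sketch below illustrates this outer loop under simplifying assumptions that are not from the paper: the query is a single clause, refutation by unit propagation stands in for the bounded proof systems named in the abstract, and the acceptance threshold omits the concentration-bound margin a full analysis would require.

```python
"""Hypothetical sketch of the decision procedure described above.  This is
not the paper's pseudocode: the query is assumed to be a single clause, and
unit-propagation refutation is a crude stand-in for the limited proof systems
(e.g. bounded-space treelike resolution) mentioned in the abstract."""
from typing import Dict, FrozenSet, List, Optional

Clause = FrozenSet[int]           # DIMACS-style literals: +v means "v true", -v means "v false"
CNF = List[Clause]
PartialExample = Dict[int, bool]  # observed variables only; masked variables are absent


def restrict(cnf: CNF, rho: PartialExample) -> Optional[CNF]:
    """Restrict a CNF by a partial example: drop clauses witnessed true,
    delete falsified literals, and return None if some clause is falsified."""
    out: CNF = []
    for clause in cnf:
        if any(rho.get(abs(lit)) == (lit > 0) for lit in clause):
            continue                                  # clause witnessed satisfied
        reduced = frozenset(lit for lit in clause if abs(lit) not in rho)
        if not reduced:
            return None                               # clause falsified by the example
        out.append(reduced)
    return out


def unit_prop_refutes(cnf: CNF) -> bool:
    """Stand-in 'limited proof system': does unit propagation alone derive a
    conflict from cnf?"""
    assignment: Dict[int, bool] = {}
    changed = True
    while changed:
        changed = False
        for clause in cnf:
            if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
                continue                              # clause already satisfied
            unassigned = [lit for lit in clause if abs(lit) not in assignment]
            if not unassigned:
                return True                           # all literals false: conflict
            if len(unassigned) == 1:                  # unit clause forces an assignment
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return False


def decide_query(kb: CNF, query: Clause, examples: List[PartialExample],
                 epsilon: float) -> bool:
    """Accept the query clause as (roughly) (1 - epsilon)-valid if, on all but
    an epsilon fraction of the partial examples, the restricted query is either
    witnessed true or refutable from the restricted knowledge base."""
    failures = 0
    for rho in examples:
        q_r = restrict([query], rho)
        if q_r == []:
            continue                                  # query witnessed true on this example
        kb_r = restrict(kb, rho)
        if kb_r is None:
            kb_r = [frozenset()]                      # KB falsified: trivially contradictory
        # Negate whatever remains of the query (unit clauses) and try to refute it.
        negation = [frozenset({-lit}) for lit in q_r[0]] if q_r else []
        if not unit_prop_refutes(kb_r + negation):
            failures += 1
    return failures <= epsilon * len(examples)


if __name__ == "__main__":
    # Toy run: KB says x1 -> x2 and x2 -> x3; the query is (not x1 or x3).
    kb = [frozenset({-1, 2}), frozenset({-2, 3})]
    query = frozenset({-1, 3})
    examples = [{1: True, 2: True, 3: True}, {1: False}, {2: True}]  # partially observed
    print(decide_query(kb, query, examples, epsilon=0.1))
```

The point of the sketch is that no explicit hypothesis is ever produced from the examples: an example contributes only by simplifying the restricted instance handed to the prover, which is the sense in which the learning is implicit.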
- Publication: arXiv e-prints
- Pub Date: September 2012
- DOI: 10.48550/arXiv.1209.0056
- arXiv: arXiv:1209.0056
- Bibcode: 2012arXiv1209.0056J
- Keywords: Computer Science - Artificial Intelligence; Computer Science - Data Structures and Algorithms; Computer Science - Machine Learning; Computer Science - Logic in Computer Science