Bayesian neural network (BNN) priors are defined in parameter space, making it hard to encode prior knowledge expressed in function space. We formulate a prior that incorporates functional constraints about what the output can or cannot be in regions of the input space. Output-Constrained BNNs (OC-BNNs) represent an interpretable approach to enforcing a range of constraints, fully consistent with the Bayesian framework and amenable to black-box inference. We demonstrate how OC-BNNs improve model robustness and prevent the prediction of infeasible outputs in two real-world applications: healthcare and robotics.
- Pub Date: May 2019
- Computer Science - Machine Learning;
- Statistics - Machine Learning
- Presented at the ICML 2019 Workshop on Uncertainty and Robustness in Deep Learning and the Workshop on Understanding and Improving Generalization in Deep Learning, Long Beach, CA, 2019
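
The abstract's core idea, a prior that penalizes network outputs falling outside a feasible range on a constrained input region, can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact formulation: the tiny network architecture, the flat weight parameterization, and the softplus-hinge penalty with sharpness `tau` are all assumptions made for this sketch.

```python
import numpy as np

def forward(weights, x):
    # Tiny one-hidden-layer network; `weights` is a flat vector of length 16
    # (hypothetical parameterization chosen for this sketch).
    w1, b1, w2, b2 = weights[:5], weights[5:10], weights[10:15], weights[15]
    h = np.tanh(np.outer(x, w1) + b1)   # (n, 5) hidden activations
    return h @ w2 + b2                  # (n,) scalar outputs

def log_constraint_prior(weights, lo=0.0, hi=1.0, x_lo=-1.0, x_hi=1.0,
                         n_samples=50, tau=10.0, rng=None):
    """Log-prior = Gaussian parameter-space prior + a soft penalty whenever
    the network output leaves [lo, hi] on the constrained input region
    [x_lo, x_hi]. The penalty is estimated by Monte Carlo sampling of inputs
    in the constrained region; its softplus form is an illustrative choice."""
    rng = rng or np.random.default_rng(0)
    log_p_param = -0.5 * np.sum(weights ** 2)          # standard normal prior
    xs = rng.uniform(x_lo, x_hi, size=n_samples)       # sample constrained inputs
    ys = forward(weights, xs)
    # softplus penalties grow sharply when ys < lo or ys > hi
    viol = np.logaddexp(0.0, tau * (lo - ys)) + np.logaddexp(0.0, tau * (ys - hi))
    return log_p_param - np.sum(viol)
```

Because the constraint enters only through an additive log-prior term, it composes with any black-box inference scheme that consumes log densities; weights whose function satisfies the output constraint receive higher prior mass than weights that violate it.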