On the Differential Privacy of Bayesian Inference
Abstract
We study how to communicate findings of Bayesian inference to third parties, while preserving the strong guarantee of differential privacy. Our main contributions are four different algorithms for private Bayesian inference on probabilistic graphical models. These include two mechanisms for adding noise to the Bayesian updates, either directly to the posterior parameters, or to their Fourier transform so as to preserve update consistency. We also utilise a recently introduced posterior sampling mechanism, for which we prove bounds for the specific but general case of discrete Bayesian networks; and we introduce a maximum-a-posteriori private mechanism. Our analysis includes utility and privacy bounds, with a novel focus on the influence of graph structure on privacy. Worked examples and experiments with Bayesian naïve Bayes and Bayesian linear regression illustrate the application of our mechanisms.
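To make the first mechanism concrete, the sketch below adds Laplace noise directly to the parameters of a conjugate posterior, using a Beta-Bernoulli model as the simplest example. This is an illustrative sketch under stated assumptions, not the paper's exact algorithm: the function name, the choice of Beta-Bernoulli, and the sensitivity bound of 2 (replacing one Bernoulli record changes the success and failure counts by one each) are all assumptions made here for illustration.

```python
import numpy as np

def private_beta_posterior(data, alpha0=1.0, beta0=1.0, epsilon=1.0, rng=None):
    """Release noisy Beta posterior parameters (illustrative sketch).

    Assumption: replacing one Bernoulli observation changes the count
    vector (successes, failures) by at most 1 in each entry, so its L1
    sensitivity is 2. Laplace noise with scale 2/epsilon on the counts
    then yields epsilon-differential privacy for the released parameters.
    This is a hypothetical example, not the paper's specific mechanism.
    """
    rng = np.random.default_rng() if rng is None else rng
    successes = int(np.sum(data))
    failures = len(data) - successes
    sensitivity = 2.0
    noisy_alpha = alpha0 + successes + rng.laplace(scale=sensitivity / epsilon)
    noisy_beta = beta0 + failures + rng.laplace(scale=sensitivity / epsilon)
    # Clamp so the released values remain valid Beta parameters.
    return max(noisy_alpha, 1e-6), max(noisy_beta, 1e-6)

data = [1, 0, 1, 1, 0, 1, 1, 1]
a, b = private_beta_posterior(data, epsilon=0.5)
```

A third party can then draw from `Beta(a, b)` without ever seeing the raw counts; the clamping step is one simple way to keep the noisy release inside the parameter space.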
- Publication:
- arXiv e-prints
- Pub Date:
- December 2015
- DOI:
- 10.48550/arXiv.1512.06992
- arXiv:
- arXiv:1512.06992
- Bibcode:
- 2015arXiv151206992Z
- Keywords:
- Computer Science - Artificial Intelligence;
- Computer Science - Cryptography and Security;
- Computer Science - Machine Learning;
- Mathematics - Statistics Theory;
- Statistics - Machine Learning
- E-Print:
- AAAI 2016, Feb 2016, Phoenix, Arizona, United States