Privacy-preserving Learning via Deep Net Pruning
Abstract
This paper asks whether neural network pruning can serve as a tool for achieving differential privacy without losing much data utility. As a first step toward understanding the relationship between neural network pruning and differential privacy, it proves that pruning a given layer of a neural network is equivalent to adding a certain amount of differentially private noise to the layer's hidden activations. The paper also presents experimental results that show the practical implications of this theoretical finding and the effect of the key parameter values in a simple practical setting. These results suggest that neural network pruning can be a more effective alternative to directly adding differentially private noise to neural networks.
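To make the comparison concrete, the two operations the abstract relates can be sketched as follows. This is an illustrative sketch only, not the paper's construction: it assumes simple magnitude-based pruning of a hidden-activation vector and the standard Gaussian mechanism for (epsilon, delta)-differential privacy; the paper's exact pruning rule and noise calibration may differ.

```python
import numpy as np

def prune_activations(h, keep_ratio=0.5):
    """Zero out the smallest-magnitude entries of a hidden-layer
    activation vector, keeping the top `keep_ratio` fraction.
    (Illustrative magnitude pruning; not the paper's exact rule.)"""
    k = max(1, int(len(h) * keep_ratio))
    threshold = np.sort(np.abs(h))[-k]  # k-th largest magnitude
    return np.where(np.abs(h) >= threshold, h, 0.0)

def gaussian_mechanism(h, sensitivity, epsilon, delta, rng=None):
    """Add Gaussian noise to activations, with sigma set by the
    standard analytic bound
    sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return h + rng.normal(0.0, sigma, size=h.shape)
```

For example, pruning `[0.1, -2.0, 0.05, 3.0]` at `keep_ratio=0.5` keeps only the two largest-magnitude activations, while the noise mechanism perturbs every entry; the paper's contribution is relating the privacy guarantees of operations of these two kinds.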
- Publication:
- arXiv e-prints
- Pub Date:
- March 2020
- DOI:
- 10.48550/arXiv.2003.01876
- arXiv:
- arXiv:2003.01876
- Bibcode:
- 2020arXiv200301876H
- Keywords:
- Computer Science - Machine Learning;
- Computer Science - Cryptography and Security;
- Statistics - Machine Learning