A Desynchronization-Based Countermeasure Against Side-Channel Analysis of Neural Networks
Abstract
Model extraction attacks are widely used to recover confidential parameters of multi-layer neural networks. Recently, side-channel analysis of neural networks has enabled highly effective parameter extraction even for networks with many deep layers. It is therefore of interest to implement a certain level of protection against these attacks. In this paper, we propose a desynchronization-based countermeasure that makes the timing analysis of activation functions harder. We analyze the timing properties of several activation functions and design the desynchronization so that the dependency on both the input and the activation type is hidden. We experimentally verify the effectiveness of the countermeasure on a 32-bit ARM Cortex-M4 microcontroller and use a t-test to assess the side-channel information leakage. The overhead depends on the number of neurons in the fully-connected layer; for example, for the 4096-neuron layer in VGG-19, the overhead is between 2.8% and 11%.
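The abstract describes the countermeasure only at a high level. A minimal C sketch of the general idea follows, assuming a hypothetical `trng_read()` wrapper around the microcontroller's hardware RNG; the function names, the `max_iter` parameter, and the ReLU wrapper are all illustrative, not the paper's implementation:

```c
#include <stdint.h>

/* Hypothetical wrapper around the board's hardware RNG peripheral
 * (e.g., via the vendor HAL on a Cortex-M4 with a TRNG). */
extern uint32_t trng_read(void);

/* Burn a random, data-independent number of cycles so that the start
 * and end of the activation no longer line up across traces. */
static void random_delay(uint32_t max_iter)
{
    volatile uint32_t n = trng_read() % max_iter;
    while (n--) {
        __asm volatile ("nop");
    }
}

/* Activation wrapped in random delays: the observable timing then
 * depends on neither the input value nor the activation type. */
static float relu_desync(float x, uint32_t max_iter)
{
    random_delay(max_iter);
    float y = (x > 0.0f) ? x : 0.0f;
    random_delay(max_iter);
    return y;
}
```

The t-test mentioned in the abstract is commonly a TVLA-style Welch's t-test, computed per sample point between two trace sets (e.g., traces recorded with fixed vs. random inputs); a |t| value above roughly 4.5 is the conventional threshold for flagging leakage. A self-contained sketch of the statistic, again only illustrative of the standard method:

```c
#include <stddef.h>
#include <math.h>

/* Welch's t-statistic between two trace sets at one sample point. */
static double welch_t(const double *a, size_t na, const double *b, size_t nb)
{
    double ma = 0.0, mb = 0.0, va = 0.0, vb = 0.0;
    for (size_t i = 0; i < na; i++) ma += a[i];
    for (size_t i = 0; i < nb; i++) mb += b[i];
    ma /= (double)na;
    mb /= (double)nb;
    for (size_t i = 0; i < na; i++) va += (a[i] - ma) * (a[i] - ma);
    for (size_t i = 0; i < nb; i++) vb += (b[i] - mb) * (b[i] - mb);
    va /= (double)(na - 1);
    vb /= (double)(nb - 1);
    return (ma - mb) / sqrt(va / (double)na + vb / (double)nb);
}
```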
- Publication:
- arXiv e-prints
- Pub Date:
- March 2023
- DOI:
- 10.48550/arXiv.2303.18132
- arXiv:
- arXiv:2303.18132
- Bibcode:
- 2023arXiv230318132B
- Keywords:
- Computer Science - Cryptography and Security;
- Computer Science - Machine Learning
- E-Print:
- Accepted to the International Symposium on Cyber Security, Cryptology and Machine Learning 2023 (CSCML)