Federated Mimic Learning for Privacy Preserving Intrusion Detection
Abstract
Internet of Things (IoT) devices are prone to attacks due to the limitations of their privacy and security components. These attacks range from exploiting backdoors to disrupting the devices' communication network. Intrusion Detection Systems (IDS) play an essential role in ensuring the information privacy and security of IoT devices against these attacks. Recently, deep learning-based IDS techniques have become more prominent due to their high classification accuracy. However, conventional deep learning techniques jeopardize user privacy because user data must be transferred to a centralized server. Federated learning (FL) is a popular privacy-preserving decentralized learning method: models are trained locally at the edge devices, and the local models, rather than the sensitive data, are transferred to a centralized server. Nevertheless, FL can suffer from reverse-engineering ML attacks that extract information about users' data from the shared model. To overcome this problem, mimic learning offers another way to preserve the privacy of ML-based IDS: a student model is trained on a public dataset that is labeled by a teacher model trained on sensitive user data. In this work, we propose a novel approach, namely federated mimic learning, that combines the advantages of FL and mimic learning to create a distributed IDS while minimizing the risk of jeopardizing users' privacy, and we benchmark its performance against other ML-based IDS techniques using the NSL-KDD dataset. Our results show that federated mimic learning achieves 98.11% detection accuracy.
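The pipeline the abstract describes — each client trains a private teacher, labels a shared public dataset with it, trains a student on those labels, and the server averages only the student models — can be sketched as follows. This is an illustrative toy, not the paper's implementation: the synthetic data, the logistic-regression teacher/student, and all hyperparameters here are assumptions made for the sake of a self-contained example.

```python
# Toy sketch of federated mimic learning (illustrative only, not the paper's code).
# Per client: (1) train a private "teacher" on sensitive data,
#             (2) label a shared public dataset with that teacher,
#             (3) train a "student" on the teacher-labeled public data.
# The server then averages the student parameters (FedAvg-style), so neither
# raw data nor teacher models ever leave the clients.
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, lr=0.5, epochs=200):
    """Plain logistic regression fitted by batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        g = p - y                                 # gradient of log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

# Shared unlabeled public dataset, visible to every client and the server.
X_pub = rng.normal(size=(200, 2))

client_students = []
for _ in range(3):  # three clients, each with private (sensitive) data
    X_priv = rng.normal(size=(100, 2))
    y_priv = (X_priv[:, 0] + X_priv[:, 1] > 0).astype(int)  # private labels
    teacher = train_logreg(X_priv, y_priv)        # teacher stays on the client
    y_mimic = predict(*teacher, X_pub)            # teacher labels public data
    client_students.append(train_logreg(X_pub, y_mimic))  # local student

# Server-side averaging over student parameters only (no data, no teachers).
w_avg = np.mean([w for w, _ in client_students], axis=0)
b_avg = float(np.mean([b for _, b in client_students]))

# Evaluate the aggregated student on held-out data.
X_test = rng.normal(size=(500, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
acc = (predict(w_avg, b_avg, X_test) == y_test).mean()
print(f"federated student accuracy: {acc:.2f}")
```

The key privacy property the sketch mirrors is that only student parameters, trained on public inputs, reach the server; the teachers, which saw the sensitive data directly, never leave the clients.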
- Publication: arXiv e-prints
- Pub Date: December 2020
- DOI: 10.48550/arXiv.2012.06974
- arXiv: arXiv:2012.06974
- Bibcode: 2020arXiv201206974A
- Keywords: Computer Science - Networking and Internet Architecture
- E-Print: 6 pages, 6 figures, accepted to BlackSeaCom 2020