A black-box adversarial attack for poisoning clustering
Abstract
Clustering algorithms play a fundamental role as tools in decision-making and sensitive automation processes. Due to the widespread use of these applications, a robustness analysis of this family of algorithms against adversarial noise has become imperative. To the best of our knowledge, however, only a few works have addressed this problem so far. To fill this gap, in this work we propose a black-box adversarial attack for crafting adversarial samples that test the robustness of clustering algorithms. We formulate the problem as a constrained minimization program, general in its structure and customizable by the attacker according to her capability constraints. We do not assume any information about the internal structure of the victim clustering algorithm, and we allow the attacker to query it as a service only. In the absence of any derivative information, we perform the optimization with a custom approach inspired by the Abstract Genetic Algorithm (AGA). In the experimental part, we demonstrate the sensitivity of different single and ensemble clustering algorithms to our crafted adversarial samples in different scenarios. Furthermore, we compare our algorithm with a state-of-the-art approach, showing that we are able to match or even outperform its performance. Finally, to highlight the general nature of the generated noise, we show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
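The abstract only sketches the query-only, derivative-free optimization idea. The snippet below is a minimal illustrative sketch of what such a genetic-style poisoning loop can look like, not the authors' implementation: the choice of scikit-learn's KMeans as the victim, the NMI-based objective, the L-infinity budget `eps`, the population size, and the mutation scale are all assumptions made purely for illustration.

```python
# Illustrative sketch only: a query-only, genetic-style perturbation search against a
# clustering "service". The victim (KMeans), the NMI objective, and all hyperparameters
# below are assumptions for illustration, not the method described in the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)       # clean data
victim = lambda data: KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)
clean_labels = victim(X)                                          # baseline clustering

target_idx = rng.choice(len(X), size=15, replace=False)           # samples the attacker may move
eps = 0.5                                                         # L-infinity perturbation budget

def objective(delta):
    """Lower NMI w.r.t. the clean clustering = more damage; small norm penalty added."""
    X_adv = X.copy()
    X_adv[target_idx] += delta
    poisoned_labels = victim(X_adv)                               # black-box query only
    return normalized_mutual_info_score(clean_labels, poisoned_labels) + 1e-3 * np.abs(delta).sum()

# Genetic-style loop: mutate a population of perturbations, keep the fittest (lowest objective).
population = [rng.uniform(-eps, eps, size=(len(target_idx), X.shape[1])) for _ in range(20)]
for generation in range(30):
    elites = sorted(population, key=objective)[:5]
    children = [np.clip(e + rng.normal(0, 0.1, e.shape), -eps, eps)
                for e in elites for _ in range(3)]
    population = elites + children

best = min(population, key=objective)
print("best attack objective (NMI + penalty):", objective(best))
```

In this toy setting the attacker never inspects the victim's internals; every evaluation of `objective` corresponds to one query to the clustering service, which mirrors the black-box, query-only threat model stated in the abstract.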
- Publication:
- Pattern Recognition
- Pub Date:
- February 2022
- DOI:
- 10.1016/j.patcog.2021.108306
- arXiv:
- arXiv:2009.05474
- Bibcode:
- 2022PatRe.12208306C
- Keywords:
- Adversarial learning;
- Unsupervised learning;
- Clustering;
- Robustness evaluation;
- Machine learning security;
- Computer Science - Machine Learning;
- Statistics - Machine Learning
- E-Print:
- 18 pages, Pattern Recognition 2022