Exfiltration of personal information from ChatGPT via prompt injection
Abstract
We report that ChatGPT 4 and 4o are susceptible to a prompt injection attack that allows an attacker to exfiltrate users' personal data. The attack requires no third-party tools, and all users are currently affected. This vulnerability is exacerbated by the recent introduction of ChatGPT's memory feature, which allows an attacker to command ChatGPT to monitor the user for the desired personal data.
- Publication: arXiv e-prints
- Pub Date: May 2024
- DOI: 10.48550/arXiv.2406.00199
- arXiv: arXiv:2406.00199
- Bibcode: 2024arXiv240600199S
- Keywords: Computer Science - Cryptography and Security; Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Computers and Society; Computer Science - Emerging Technologies