Self-supervised learning (SSL) has emerged as a promising alternative for creating supervisory signals for real-world tasks, avoiding the extensive cost of manual labeling. SSL is particularly attractive for unsupervised tasks such as anomaly detection (AD), where labeled anomalies are costly to obtain, difficult to simulate, or even nonexistent. A large catalog of augmentation functions has been used for SSL-based AD (SSAD) on image data, and recent works have observed that the type of augmentation has a significant impact on performance. Motivated by these observations, this work puts image-based SSAD under a larger lens and carefully investigates the role of data augmentation in AD through extensive experiments on three different models across 420 different tasks. Our main finding is that self-supervision acts as yet another model hyperparameter and should be chosen carefully with regard to the nature of the true anomalies. That is, the alignment between the data augmentation and the underlying anomaly-generating mechanism in the given data is the key to the success of SSAD; in the absence of such alignment, SSL can even impair (!) accuracy. Moving beyond proposing yet another SSAD method, our study contributes to a better understanding of this growing area and lays out new directions for future research.