Handling out-of-distribution (OOD) samples has become a central challenge in the real-world deployment of machine learning systems. This work explores the use of self-supervised contrastive learning for the simultaneous detection of two types of OOD samples: unseen classes and adversarial perturbations. Since the distribution of such samples is not known in advance in practice, we do not assume access to OOD examples. We first show that similarity functions trained with contrastive learning can be leveraged with the maximum mean discrepancy (MMD) two-sample test to verify whether two independent sets of samples are drawn from the same distribution. Inspired by this approach, we introduce CADet (Contrastive Anomaly Detection), a method based on contrastive transformations that performs anomaly detection on single samples. CADet compares favorably to existing adversarial detection methods in detecting adversarially perturbed samples on ImageNet. At the same time, it achieves performance comparable to unseen-label detection methods on two challenging benchmarks: ImageNet-O and iNaturalist. CADet is fully self-supervised and requires neither labels for in-distribution samples nor access to OOD examples.
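For concreteness, a minimal sketch of the two-sample statistic such a test relies on, under the assumption that the contrastively learned similarity function $k(\cdot,\cdot)$ is used directly as the kernel (the exact statistic and normalization used in the method may differ): given sets $X = \{x_1, \dots, x_m\}$ and $Y = \{y_1, \dots, y_n\}$, the empirical squared MMD is
\[
\widehat{\mathrm{MMD}}^2(X, Y) \;=\; \frac{1}{m^2} \sum_{i,j=1}^{m} k(x_i, x_j) \;+\; \frac{1}{n^2} \sum_{i,j=1}^{n} k(y_i, y_j) \;-\; \frac{2}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} k(x_i, y_j),
\]
and the two sets are deemed to come from different distributions when this statistic exceeds a threshold, calibrated for instance by a permutation test.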