ServiceNow Research

Contrastive Self-supervision Defines General-Purpose Similarity Functions

Abstract

Handling out-of-distribution (OOD) and adversarial inputs has become a major challenge in the real-world deployment of machine learning systems. In this work, we explore the use of the maximum mean discrepancy (MMD) two-sample test in conjunction with self-supervised contrastive learning to verify whether two sets of samples have been drawn from the same distribution. In particular, we find that the similarity functions defined on top of models trained with contrastive learning lead to high testing power across different types of distributional shift. Our approach differentiates CIFAR10 from CIFAR10.1 with much higher probability, and using fewer samples, than previous methods. Moreover, when trained on ImageNet, our approach efficiently detects both adversarial attacks and OOD data on challenging benchmarks, using only 3 to 20 samples.
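The core statistical tool the abstract refers to can be sketched compactly. Below is a minimal, hedged illustration of an MMD two-sample test with a Gaussian kernel and a permutation-based p-value; it assumes the inputs `x` and `y` are feature vectors already extracted by a frozen contrastive encoder (the encoder itself is not shown), and the kernel bandwidth `gamma` and permutation count `n_perm` are illustrative choices, not the paper's settings.

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    # Pairwise squared Euclidean distances, then a Gaussian kernel.
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Biased estimate of the squared MMD between sample sets x and y."""
    kxx = rbf_kernel(x, x, gamma).mean()
    kyy = rbf_kernel(y, y, gamma).mean()
    kxy = rbf_kernel(x, y, gamma).mean()
    return kxx + kyy - 2 * kxy

def mmd_permutation_test(x, y, gamma=1.0, n_perm=500, seed=0):
    """Permutation p-value for H0: x and y share a distribution."""
    rng = np.random.default_rng(seed)
    observed = mmd2(x, y, gamma)
    pooled = np.vstack([x, y])
    n = len(x)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        stat = mmd2(pooled[perm[:n]], pooled[perm[n:]], gamma)
        count += stat >= observed
    # Add-one smoothing keeps the p-value strictly positive.
    return (count + 1) / (n_perm + 1)
```

A small p-value is evidence that the two sets were drawn from different distributions, which is how an OOD or adversarial batch would be flagged; the paper's contribution lies in the feature space (contrastive embeddings) on which this test is run, not in the test itself.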

Publication
Workshop at the Conference on Neural Information Processing Systems (NeurIPS)
Charles Guille-Escuret
Visiting Researcher, Low Data Learning, Montreal, QC, Canada.

David Vazquez
Manager of Research Programs, Research Management, Montreal, QC, Canada.

João Monteiro
Research Scientist, Low Data Learning, London, UK.