ServiceNow Research
Adversarial Attacks
Constraining Representations Yields Models That Know What They Don't Know
A well-known failure mode of neural networks is that they may confidently return erroneous predictions. Such unsafe behaviour is …
João Monteiro, Pau Rodriguez, Pierre-André Noël, Issam H. Laradji, David Vazquez
International Conference on Learning Representations (ICLR), 2023.
PDF · Cite
Maximal Jacobian-based Saliency Map Attack
The Jacobian-based Saliency Map Attack is a family of adversarial attack methods for fooling classification models, such as deep neural …
Rey Reza Wiyatno, Anqi Xu
Montreal AI Symposium (MAIS), 2018.
PDF · Cite · Code