ServiceNow AI Research
Tag: Prompt Injection
Attack What Matters: Integrating Expert Insight and Automation in Threat-Model-Aligned Red Teaming
Prompt injection attacks target a key vulnerability in modern large language models: their inability to reliably distinguish between …
Kiarash Mohammadi, Abhay Puri, Georges Belanger Albarran, Mihir Bansal, Navdeep Gill, Yanick Chénard, Segan Subramanian, Marc-Etienne Brunet, Jason Stanley
NOW AI, 2025.
Shifting AI Security to the Left: Design-Time Defenses to Mitigate the Risks of Prompt Injections
Prompt injections expose a critical weakness in modern Large Language Models, making it difficult for AI to distinguish between …
Abhay Puri, Kevin Kasa, Kiarash Mohammadi, Georges Belanger Albarran, Mihir Bansal, Yanick Chénard, Marc-Etienne Brunet, Jason Stanley
NOW AI, 2025.