ServiceNow AI Research

Safety and Security

Attack What Matters: Integrating Expert Insight and Automation in Threat-Model-Aligned Red Teaming
Prompt injection attacks target a key vulnerability in modern large language models: their inability to reliably distinguish between …
Shifting AI Security to the Left: Design-Time Defenses to Mitigate the Risks of Prompt Injections
Prompt injections are a critical weakness of modern Large Language Models, making it difficult for AI to distinguish between …
DoomArena: A framework for Testing AI Agents Against Evolving Security Threats
We present DoomArena, a security evaluation framework for AI agents. DoomArena is designed on three principles: 1) It is a …
Silent Sabotage: Injecting Backdoors into AI Agents Through Fine-Tuning
The rise of AI agents that can use tools, browse the web, and interact with computers on behalf of a user has sparked strong interest …
No, of course I can! Refusal Mechanisms Can Be Exploited Using Harmless Fine-Tuning Data
Leading language model (LM) providers like OpenAI and Google offer fine-tuning APIs that allow customers to adapt LMs for specific use …