By John Castelly, Chief Ethics and Compliance Officer at ServiceNow
It’s no secret that AI adoption is accelerating at an astounding pace. In fact, ServiceNow’s 2025 Enterprise AI Maturity Index, which surveyed executives at about 4,500 global companies, found that 82% of respondents plan to ramp up AI investments in the next year. ServiceNow itself is no exception. We’ve been an AI leader for more than a decade, but our AI usage has exploded over the last few years. That growth is an opportunity for incredible innovation and value creation, and we’re seeing the benefits every day.
However, scaling AI adoption responsibly requires effective AI governance. It’s all about trust. Organizations rightly worry about AI risks such as hallucinations, privacy violations, and misinformation. How do you confidently deploy AI while proactively minimizing these risks? And how do you take swift corrective action if deployed AI starts exhibiting concerning behavior?
Organizations struggle with AI governance
There’s a huge gap between technology leaders’ eagerness to embrace the transformative potential of AI and their confidence in their ability to govern it effectively. In fact, a recent Deloitte survey found that 80% of leaders expect generative AI (GenAI) to drive significant transformation in the next three years, yet only 23% feel their risk management and governance functions are ready to support scaling GenAI.
ServiceNow has faced this governance challenge in our own AI journey. That’s why we’ve worked aggressively to establish an effective AI governance framework. It’s still a work in progress, but it’s been critical for scaling AI across our business.
And we’ve learned a lot along the way.