How platform thinking drives AI maturity

ARTICLE | September 10, 2025 | VOICES

Balancing governance and innovation

How Pacesetters navigate AI risk and opportunity

By Vijay Kotu, Chief Analytics Officer at ServiceNow


As someone deeply involved in shaping enterprise AI strategy, I’ve seen firsthand how experimentation drives transformation. To get ahead, businesses must encourage their employees to try out new technologies—and artificial intelligence is no exception. That’s one of the key takeaways of ServiceNow’s 2025 Enterprise AI Maturity Index research, which surveyed nearly 4,500 executives around the world to discover how organizations achieve AI maturity.

In fact, the businesses that foster a culture of AI experimentation are more likely to be Pacesetters. These leaders in the race to put AI to work are further ahead on key markers of AI maturity than their peers. They’re more collaborative, more profitable, and better poised to take on the challenges of the future.

When it comes to experimentation, however, organizations must maintain a balance. It’s a tightrope walk—lean too far into caution, and you risk stagnation; lean too far into freedom, and you risk chaos.

Executives are concerned about AI governance and security, according to the Enterprise AI Maturity Index. This is unsurprising: The more leeway employees are given to experiment, the greater the risk to the organization. Yet failing to experiment out of an overabundance of caution carries its own risk: the risk of being left behind.

Fortunately, there are steps that organizations can take to balance bold innovation with sound governance.


Freedom to innovate doesn't come from a lack of rules; it comes from having the right ones. To empower your teams, you first need to establish the foundational principles of AI governance. For any AI initiative to succeed, its results must be valid and reliable, safe and secure, fair with mitigated biases, transparent and accountable, and respectful of privacy.

This isn't something you have to invent from scratch. There are several industry-standard risk management frameworks available, with the NIST AI Risk Management Framework being a prominent example. About 50% of companies currently use the NIST framework for risk management. Adopting a framework like this helps you map out your AI initiatives, evaluate the responsible deployment of each use case, and establish a clear lifecycle for every model. This framework then becomes the backbone of your internal AI policy, providing clear, unambiguous guidelines on what is and isn't permissible.
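The NIST AI Risk Management Framework organizes risk management into four core functions: Govern, Map, Measure, and Manage. As a rough sketch of how a lifecycle gate built on those functions might look, the following Python snippet tracks which functions have been addressed for a use case before it can be deployed. The class and field names are illustrative assumptions, not part of the framework or of any ServiceNow product.

```python
from dataclasses import dataclass, field

# The NIST AI RMF's four core functions. Everything else in this
# sketch (class names, the policy gate) is a hypothetical illustration.
RMF_FUNCTIONS = ("govern", "map", "measure", "manage")

@dataclass
class AIUseCase:
    name: str
    owner: str
    # Which RMF functions have been completed for this use case.
    completed: set = field(default_factory=set)

    def mark_done(self, function: str) -> None:
        if function not in RMF_FUNCTIONS:
            raise ValueError(f"Unknown RMF function: {function}")
        self.completed.add(function)

    def ready_for_production(self) -> bool:
        # A simple policy gate: every function must be addressed
        # before the model can ship.
        return set(RMF_FUNCTIONS) <= self.completed

case = AIUseCase(name="invoice-triage-agent", owner="finance-ops")
case.mark_done("govern")
case.mark_done("map")
print(case.ready_for_production())  # False until measure and manage are done
```

In practice the gate would check evidence (evaluations, sign-offs) rather than a simple flag, but the principle of an explicit, per-use-case lifecycle is the same.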

A framework is only a blueprint; you need a dedicated team to bring it to life. The most effective way to implement AI governance is by forming a central governing body, such as a responsible AI council. This shouldn't be a siloed committee. Rather, it must be a cross-functional group that includes leaders from legal, security, AI engineering, product management, and HR.

This team’s mission is to collaborate on the operational details, establishing the cadence for reviewing metrics, discussing new use cases, and evolving policies as the technology landscape changes. Ultimately, this governing body is responsible for enforcing the company’s standards, ensuring that responsible AI is upheld across the entire organization.

While the governing body sets the strategy, it’s the AI teams on the ground that have day-to-day responsibility for executing it. To support this process, internal AI policies should be clearly communicated and understood.

This is where technology becomes your greatest ally. Achieving organization-wide compliance and transparency is nearly impossible without a platform that offers a unified, real-time view of all AI use cases across the company. This allows you to see what’s being developed and what’s in production and determine whether everything aligns with your established standards. Approved and unapproved use cases should be clearly communicated through this platform, creating a powerful feedback loop for organizational learning and adherence to best practices.
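To make the idea of a unified, real-time view concrete, here is a minimal sketch of a central use-case registry, assuming a simple in-memory store. A real platform would back this with a database, access controls, and audit trails; all names and statuses below are illustrative.

```python
from enum import Enum

class Status(Enum):
    PROPOSED = "proposed"
    IN_DEVELOPMENT = "in development"
    APPROVED = "approved"
    REJECTED = "rejected"
    IN_PRODUCTION = "in production"

# Central registry: one entry per AI use case across the company.
registry: dict[str, Status] = {}

def register(use_case: str) -> None:
    """Every new initiative starts as a proposal visible to the council."""
    registry[use_case] = Status.PROPOSED

def set_status(use_case: str, status: Status) -> None:
    """The governing body moves use cases through the lifecycle."""
    registry[use_case] = status

def unified_view() -> dict[str, list[str]]:
    """Group use cases by status: the organization-wide view in one call."""
    view: dict[str, list[str]] = {}
    for name, status in registry.items():
        view.setdefault(status.value, []).append(name)
    return view
```

The point of the sketch is visibility: because every use case lives in one registry, anyone can see what is approved, what is rejected, and what is still under review.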

Optimizing performance means tracking key metrics such as the adoption of AI tools among different employee personas, the accuracy and reliability of the models’ outputs, and, most importantly, the tangible improvements in your company’s operating outcomes, such as reduced processing time and increased customer satisfaction.

Some AI models will fail to deliver their intended value. It is the responsibility of the AI owners to continuously monitor performance, take corrective action when a model is underperforming, and have the discipline to sunset models that are no longer adding value.
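The monitoring discipline described above can be sketched as a simple review rule: flag models whose recent accuracy drops below a floor, and recommend sunsetting those that stay persistently below it. The threshold, window size, and function names here are assumptions for illustration, not prescribed values.

```python
from statistics import mean

ACCURACY_FLOOR = 0.85   # minimum acceptable rolling accuracy (assumed)
WINDOW = 30             # number of recent evaluations to consider (assumed)

def review_model(name: str, accuracy_history: list[float]) -> str:
    """Classify a model as healthy, needing corrective action, or a sunset candidate."""
    recent = accuracy_history[-WINDOW:]
    if not recent:
        return f"{name}: no data - keep monitoring"
    avg = mean(recent)
    if avg >= ACCURACY_FLOOR:
        return f"{name}: healthy ({avg:.2f})"
    # Persistently below the floor: every recent score fails the bar.
    if all(score < ACCURACY_FLOOR for score in recent):
        return f"{name}: sunset candidate ({avg:.2f})"
    return f"{name}: corrective action needed ({avg:.2f})"
```

In a real deployment the inputs would come from an evaluation pipeline rather than a list, and a sunset recommendation would trigger a review by the model’s owner rather than automatic retirement.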

Finally, bring in an external perspective from a third party, such as an academic institution, an ethics board, or a compliance auditing firm, to ensure your AI initiatives consistently meet regulatory, compliance, and ethical standards. Periodic reviews of your policies and your adherence to them provide an invaluable layer of objectivity. Partnering with third parties not only helps you identify blind spots, but also builds trust with your customers, partners, and regulators by demonstrating a deep commitment to responsible AI.

By building this structure, you create an environment where governance is an enabler to innovation rather than a barrier to creativity. The organizations that master this balance will not only lead in AI—they’ll redefine what responsible innovation looks like.

Related articles

REPORT
The future-ready manufacturer

New research shows how technology investments can yield priority outcomes across the value chain

ARTICLE
The six biggest AI questions facing companies today

How to start filling in the blanks when it comes to the AI era’s most pressing unknowns

ARTICLE
Go ahead and automate that bad process

AI can help turn a bad process into a good one

Author

Vijay Kotu is ServiceNow’s Chief Analytics Officer