
ARTICLE | January 22, 2024 | VOICES

Governing AI’s future

Executives need to incorporate well-thought-out governance into every AI deployment

By John Castelly, chief ethics & compliance officer, ServiceNow


Every business leader knows they need generative AI. Nearly 80% consider it the top emerging technology of the next few years, according to KPMG. But few are embracing AI governance with the same fervor.

As the Biden administration’s recent executive order underscores, governments are getting serious about regulating AI. The order aims to promote the “safe, secure, and trustworthy development and use of artificial intelligence” and directs the National Institute of Standards and Technology (NIST) to develop guidelines and standards that make AI adoption more ethical and effective. Meanwhile, the European Union has reached agreement on the comprehensive EU AI Act, which classifies AI systems according to the risk they pose to users and directs companies to mitigate those risks.

This is a cycle we’ve seen before with environmental, social, and governance (ESG) issues and with emerging technologies like cryptocurrency. Regulation begins slowly and amorphously before governments establish a coherent approach. But just because there’s historical precedent doesn’t mean we’re ready for what’s coming. To avoid getting caught flat-footed, executives need to start thinking about what AI governance looks like in their own businesses.


The key to good AI governance is good data governance. Researchers train AI on data, and bias and noise in that data can lead to inaccurate, skewed results and unethical outcomes. The challenge for businesses, then, is to bolster the quality and integrity of that data. In fact, when executives think about the future of AI, data governance tops the list of their concerns.

The cost of bad AI governance is high. AI models can be incredibly expensive to train. If a model is found to be in violation of a law or a policy, it can take the business a long time—and a lot of money—to adjust the training data to fix it. In other words, it’s worth getting AI governance right the first time.

Data governance means knowing your data inside and out: where it comes from, what’s in it, where it’s going, whether it’s accurate and valid, and whether it’s appropriate for the algorithm that’s using it. Data privacy and security are also key components of AI governance, since unauthorized access can compromise the quality of the data and the integrity of a model’s outcomes.


Executives who are skeptical about the importance of AI governance should think back to a lesson from our recent past. When executives started paying closer attention to making progress on ESG issues, they were surprised by the results. The companies that perform better on ESG also perform better financially. However, this correlation has little to do with ESG itself. Rather, to make progress on ESG, companies must develop the muscles for gathering data, analyzing risk, and parsing trends. Once developed, those muscles can be applied to myriad other parts of the organization. The result is better numbers across the board.

Many organizations opted for a voluntary ESG disclosure framework rather than waiting to be hit with regulatory penalties. Executives recognized that transparency could be a competitive advantage and that if they weren’t facing pressure from stakeholders at the time, they soon would be.

Even as governments take AI governance seriously, companies may find that it’s in their best interest to be proactive rather than wait for regulation. Fortunately, if they’re already thinking about data governance, they can apply many of those principles to AI governance as well. In turn, AI governance best practices will improve overall data governance and reporting across the entire organization. It’s good for business.

Historically, governance has been within the purview of the executive in charge of compliance. But AI governance is not a responsibility that a single person or team can manage. Rather than trying to handle it alone, organizations should work to create a businesswide culture of responsible AI usage, grounded in principles like transparency, fairness, and compassion.

When it comes to building AI models, these aren’t just abstract ideals. Companies have made headlines for deploying algorithms that caused real-world harm such as discrimination and racial bias. As we try to anticipate the future regulatory landscape, choosing our ethical North Star is important. But it can also be the difference between building models that work and models that don’t: innovating for the future or repeating the mistakes of the past.



Author


John Castelly is ServiceNow’s chief ethics and compliance officer. Previously, he was the chief compliance officer at Personal Capital and most recently at Robinhood, where he designed and scaled its brokerage compliance programs. Castelly has held senior legal positions at Morgan Stanley and TD Ameritrade and served as special counsel for the U.S. Securities and Exchange Commission’s Division of Trading and Markets.
