The business case for AI governance

The right guardrails can improve AI performance and boost the bottom line

Putting the right AI technology guardrails in place can help drive better business outcomes.

Artificial intelligence is not the “black box” it once was. Instead of embedding enterprise AI algorithms into their systems without understanding how they actually work, most organizations that treat enterprise AI as a priority technology now use digital tools that provide transparency into their models.

And yet for too many large organizations, AI governance remains an afterthought, or worse. Why, when so much rides on the quality of the intelligence, artificial or otherwise, that a company produces about its products, customers, and employees?

It’s a resource problem. In most industries (except heavily regulated sectors such as healthcare and financial services), proper AI governance remains optional—as does the significant investment of financial and human capital required to create it. What new technology spending will be needed? Which C-level leaders, data scientists, and lawyers will do the hard work of setting the rules? How many hours will monitoring eat up per annum?

Many companies haven’t yet recognized the strong ROI available from creating a framework for AI governance. Good AI governance increasingly means better business outcomes—measured by improved performance, better risk mitigation, and stronger trust between the company and its customers.

Governance and the bottom line

Beyond creating the structure for the ethical use of AI in workflows, products, and services, there’s real money to be made by providing the right AI governance.

Well-governed AI that streamlines an HR hiring process or sniffs out customers’ problems before the customers notice them can produce tangible cost savings. Proper governance enables organizations to spend dollars more efficiently while increasing customer and employee satisfaction.

[Read also: Exclusive interview with Turing Award-winner Yoshua Bengio]

If the financial benefits aren’t persuasive enough, consider the risks of regulatory noncompliance. The European Union’s General Data Protection Regulation (GDPR), which took effect in 2018, signals what may be in store in the years ahead for enterprise AI.

So far the EU has set the pace for AI regulation, but other regions aren’t far behind. It’s only a matter of time before governments start requiring companies to provide specific levels of transparency in the form of hard metrics around their AI models. For all these reasons, putting a sturdy AI governance framework in place today makes long-term business sense.

Learning loops

AI algorithms evolve and improve in learning loops based on the data they’re fed. Good managers never ask human employees to work without proper support and oversight, and AI deserves the same treatment. The right governance gives managers course-correcting guardrails that keep each algorithm on track so it develops in beneficial ways.
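
To make the idea of a course-correcting guardrail concrete, here is a minimal sketch in Python. The function name, the 0.9 accuracy threshold, and the three-check window are illustrative assumptions, not a standard; a production system would monitor many metrics and route alerts into a review workflow.

```python
def guardrail(metric_history, threshold=0.9, window=3):
    """Flag a model for human review when a monitored metric
    (e.g., accuracy) stays below `threshold` for `window`
    consecutive monitoring checks."""
    recent = metric_history[-window:]
    return len(recent) == window and all(m < threshold for m in recent)

# A healthy model passes; a drifting one trips the guardrail.
healthy = [0.95, 0.94, 0.96, 0.93]
drifting = [0.95, 0.88, 0.87, 0.85]
guardrail(healthy)   # False
guardrail(drifting)  # True
```

Requiring several consecutive low readings, rather than one, keeps a single noisy measurement from escalating a model unnecessarily.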

A generative adversarial network (GAN), for instance, can generate synthetic data that probes an algorithm for skewed outcomes. Using this technique, a financial services firm can test its loan approval model for fairness, a retailer can check whether its in-store facial recognition software is racially profiling shoppers, and an HR leader in any industry can verify that hiring software is free of bias.
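
The GAN itself is beyond a short example, but the downstream fairness check it enables is simple. Below is a hypothetical Python sketch of a demographic-parity test on a loan model: `loan_model`, `synthetic_applicants`, and the five-point divergence threshold are all illustrative stand-ins, with plain random draws standing in for GAN-generated applicants.

```python
import random

random.seed(0)  # reproducible illustration

def loan_model(income, group):
    """Hypothetical stand-in for a trained loan-approval model.
    This toy rule ignores group entirely; a real model might not."""
    return income > 40_000

def synthetic_applicants(n, group):
    """Stand-in for GAN-generated synthetic applicants; a real
    pipeline would sample from a generator, not random.gauss."""
    return [(random.gauss(50_000, 15_000), group) for _ in range(n)]

def approval_rate(applicants):
    approvals = sum(loan_model(income, group) for income, group in applicants)
    return approvals / len(applicants)

# Demographic-parity check: compare approval rates across two groups
# and raise a red flag when they diverge by more than five points.
rate_a = approval_rate(synthetic_applicants(1_000, "A"))
rate_b = approval_rate(synthetic_applicants(1_000, "B"))
flagged = abs(rate_a - rate_b) > 0.05
```

A `flagged` result here is precisely the kind of red flag that should travel up a governance review ladder rather than sit in a dashboard.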

Any red flags raised by these monitoring tools can be kicked up the review ladder established by your governance framework, with ultimate accountability at the CEO and board level. This shared responsibility creates trust among all stakeholders, from talent to investors to customers, in your ethical use of digital technology.

Regulation: Not if, but when

As digital technology drives ever more GDP globally, the pressure to regulate enterprise AI use is mounting. Historically, most AI regulators have been more reactive than proactive, for fear of stifling innovation and competition. But the regulatory wheels are in motion. In April 2021, the European Commission unveiled the Artificial Intelligence Act, which, if adopted, would define and ban “unacceptable” uses of AI.

In the United States, Sens. Ron Wyden (D-Ore.) and Cory Booker (D-N.J.) are co-sponsoring the Algorithmic Accountability Act, which would require companies with $50 million or more in revenue (or in possession of more than 100 million people’s data) to produce verified impact assessments of their AI algorithms.

I helped draft the European Commission’s Ethics Guidelines for Trustworthy AI. In formulating these guidelines, my colleagues and I found that numerous laws, regulations, and formally articulated principles already existed to deal with most AI use cases. What was lacking was regulation around transparency.

In the EU, committees formed after the European Commission unveiled the AI Act are now working out how the regulation will apply to industries such as healthcare, manufacturing, and energy. The strategy is to set down markers for the standards regulators will enforce, giving organizations time to build internal governance frameworks before the rules take effect. It’s a well-balanced approach that lets each industry implement an AI framework with the right level of transparency.

[Infographic: 8 key moments in the evolution of enterprise AI]

Industry leaders should consider the U.S. Food and Drug Administration’s mission, structure, and operations. The FDA’s mandate is to regulate products and services that impact the U.S. population’s health. As AI algorithms become more embedded in all aspects of our lives, from advertising and banking to medical diagnoses and self-driving cars, I believe many industries will need the equivalent of an FDA for AI governance.

Self-governing AI

One bit of good news for industry is that AI governance will become easier over time as AI itself gets better at monitoring other algorithms. Increasingly, we’ll rely on AI to manage and make sense of the digital world.

Technology itself is evolving so quickly that humans can’t keep up. Startups are already building AI-based services that monitor algorithms for us. On the content front, I’ve seen sophisticated recommender systems that can filter for the information we really care about and fact-check it.

These algorithms earn our trust by engaging reliably with search engines and other news feed systems. Again, the key word is trust: while proper AI governance can yield measurable business outcomes, the trust benefits are priceless.