
ARTICLE | May 9, 2023

Generative AI is a game changer. Companies need to set the new rules of the game.

There are no hard and fast answers yet, but governance pitfalls are rife, from bias and security threats to the spread of shadow AI

By Howard Rabinowitz, Workflow contributor


In November 2022, ChatGPT burst into public consciousness amid buzzy headlines about how it could compose emails, essays, and song lyrics, chat circles around Siri, write computer code, and more. The arrival of generative pre-trained transformer (GPT) technology, a form of generative AI, was hailed as a watershed moment for society.

“Think of the turn of the last century when electricity was discovered and the impact it had on every industry, from manufacturing to healthcare,” says Gary Fowler, an AI entrepreneur and CEO of GSD Venture Studios, which incubates startup business plans. “This is where we are with AI and GPT today.”

Fowler sees limitless business applications for the text- and image-creation powers of generative AI, from smarter customer support to hyper-personalized marketing and faster report generation. But as companies rush to tap the commercial and enterprise workflow potential of generative AI, keeping governance issues top of mind will be critical.


“In the European Union, meaningful AI regulation is at least 12 to 18 months away, and in the U.S., there’s nothing imminent on the horizon,” says Charles Radclyffe, founder and CEO of EthicsGrade, an ESG ratings firm that assesses AI ethics risks for global companies. “Meanwhile, GPT is already out in the wild. Companies have to learn how to use it responsibly, and how to prepare to mitigate its risks.” 

But where to begin? In this rapidly evolving space, there are no hard and fast answers yet, but governance pitfalls are rife, from bias and security threats to the spread of shadow AI. As business leaders explore what guardrails should be put in place, they also must decide who “owns” governance of generative AI and GPT within the organization, and how it ladders back up to a holistic and transparent approach to governance.

Even OpenAI cautions that GPT has limitations, not the least of which is its propensity to write plausible-sounding but incorrect or nonsensical answers. “This ‘hallucination’ of fact and fiction is especially dangerous when it comes to things like medical advice or getting historical facts right,” it warns on its website.

This underscores a key guardrail for companies to deploy: a human-in-the-loop. One example is the “prompt engineer,” an emerging IT role focused on refining the text prompts that people feed to generative AI tools so they yield more accurate outputs.

“Prompt engineer” job title aside, every employee deploying generative AI should closely monitor outputs and flag potential bias or factual inaccuracies, notes Fabio Casati, principal machine learning engineer at ServiceNow and lead of ServiceNow Research's AI Trustworthiness and Governance Lab.

“Monitoring, steering, and constraining the AI to align it to behaviors and values that match what a company or society believes in is the most important aspect of human-in-the-loop,” says Casati. “This is the form of human-in-the-loop I'd expect to be in place for the longest time, possibly forever.”
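In practice, that monitoring can be wired into the workflow itself. Below is a minimal sketch of such a gate in Python; the Draft class, confidence threshold, and topic list are illustrative assumptions, not any vendor's API. Outputs that are low-confidence or touch sensitive topics wait for human sign-off instead of shipping automatically.

```python
from dataclasses import dataclass

# Assumed policy list; a real deployment would maintain this centrally
SENSITIVE_TOPICS = {"medical advice", "legal advice", "diagnosis"}

@dataclass
class Draft:
    prompt: str
    output: str
    confidence: float  # model-reported or heuristic score in [0, 1]

def needs_human_review(draft: Draft, threshold: float = 0.8) -> bool:
    """Route low-confidence or policy-sensitive outputs to a reviewer."""
    if draft.confidence < threshold:
        return True
    return any(topic in draft.output.lower() for topic in SENSITIVE_TOPICS)

def release(draft: Draft, review_queue: list) -> None:
    """Auto-release only confident, policy-clean outputs; queue the rest."""
    if needs_human_review(draft):
        review_queue.append(draft)  # a human approves or edits before release
    else:
        print(f"RELEASED: {draft.output}")
```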

Generative AI could run afoul of these behaviors and values in any number of subtle but dangerous ways, Casati says.

Consider talent recruitment. “We all know how hard it is to find and select the ‘right person’ for a job,” says Josh Bersin, leading HR analyst and CEO of the Josh Bersin Company. “Suppose you could crawl millions of employee profiles and assess, based on comparative data with people in similar roles at other companies, how ‘good’ this person is at this job? That would be impossible to do manually. Generative AI can do it.”

But those “good” candidate outputs could be tainted by unconscious biases baked into the AI, posing a huge diversity, equity, and inclusion (DEI) issue in hiring. For example, a UC Berkeley researcher asked ChatGPT which race and gender were the “best” scientists, and it replied they were white and Asian males. And a Stanford University study found that the bigger the large language model dataset, the more toxic the bias in its outputs.

That’s why Bersin says a human-in-the-loop fail-safe is essential for good governance. “I’d tell companies to be very careful drawing direct conclusions from GPT’s results without a human double-checking it,” he advises. “It’s not a perfect calculation engine at this point.”
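One lightweight way to operationalize that double-check is a counterfactual probe: send the model pairs of prompts that differ only in a name or demographic term, then have a reviewer compare the outputs for systematic differences. A minimal sketch, with a dummy generate() standing in for whatever model API a team actually uses:

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real model API call
    return f"[model output for: {prompt}]"

# Prompts that differ only in a name; systematic divergence in the
# outputs is exactly the kind of signal a human should double-check
TEMPLATE = "Write a short reference for {name}, a {role} with five years' experience."
NAME_PAIRS = [("Emily", "Jamal"), ("Greg", "Lakisha")]  # illustrative pairs

def counterfactual_probe(role: str) -> None:
    """Surface paired outputs side by side for a human reviewer."""
    for name_a, name_b in NAME_PAIRS:
        out_a = generate(TEMPLATE.format(name=name_a, role=role))
        out_b = generate(TEMPLATE.format(name=name_b, role=role))
        print(f"--- {name_a} ---\n{out_a}\n--- {name_b} ---\n{out_b}\n")

counterfactual_probe("software engineer")
```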

Among the early adopters of GPT are cyber criminals, according to new research from cybersecurity provider Check Point. This is no surprise given the AI’s ability to “spoof” the voice of trusted contacts and generate countless phishing emails at scale.

“We’re already seeing phishing that’s a lot more sophisticated,” says Randy Lariar, practice director for Big Data and Analytics at security firm Optiv.

With an expanded attack surface and exponential scalability, the governance challenge for CIOs and CISOs is twofold. First, they must bolster basic cyber risk protocols, such as zero-trust architecture and two-factor authentication. Then they must re-educate and train workers to be the first line of defense against pitch-perfect phishing attacks, so they don’t reflexively click an email link or open an attachment, even a photo or video from Aunt Jan sharing vacation snapshots.

“The image generation capability of generative AI is going to be a big problem,” Lariar predicts. “Seeing will no longer be believing.” Indeed, AI-generated “deep fake” photos of former President Donald Trump in handcuffs and Pope Francis in a puffer jacket garnered millions of online views in March.

“It’s taking the cost and difficulty out of being a hacker, so the barrier to entry is lower,” he adds.

Another governance minefield, according to Lariar, is code. Developers are already using GPT to save time on repetitive coding, yet most GPT tools offer no way to vet how or where generated code originated, whether it was ethically written, or whether these developers have the right to use it in commercial projects. “You need strong DevSecOps protocols in place so you don’t let AI-written code get to production without a series of human and automated testing,” he advises. (To support responsible development, ServiceNow and partner Hugging Face have sponsored the BigCode project, an open scientific collaboration dedicated to the responsible and open development of AI-generated code.)
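What such a gate could look like in practice: the hypothetical sketch below refuses to promote a change unless automated checks pass and a human has explicitly signed off. pytest and bandit are stand-in example tools here, and a real pipeline would enforce this through required CI checks rather than a script.

```python
import subprocess

def gate_ai_assisted_change(human_approved: bool) -> bool:
    """Refuse to promote a change unless automated checks AND a human sign off.

    pytest and bandit are stand-in example tools; a real pipeline would
    enforce this with required CI status checks rather than a script.
    """
    automated_checks = [
        ["pytest", "-q"],             # unit tests must pass
        ["bandit", "-r", ".", "-q"],  # static security scan of the codebase
    ]
    for cmd in automated_checks:
        if subprocess.run(cmd).returncode != 0:
            return False  # any failed check blocks promotion
    # Human review of provenance and licensing is the final, mandatory gate
    return human_approved
```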

For CIOs and CISOs, the spread of generative AI means they must add “shadow AI” to the shadow IT equation. For years, employees have introduced apps into enterprise networks without proper IT governance, and IT can’t provide oversight of what it doesn’t know exists.

At least 40% of knowledge workers are already using generative AI tools like ChatGPT and DALL-E, 70% of them without their bosses’ knowledge, according to a recent survey by Fishbowl, a social network where professionals discuss career topics anonymously. IT and business leaders need to put strong controls in place to ensure transparency, so that workers don’t introduce generative AI into internal or external networks, processes, and products in the shadows.
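One starting point for that visibility is the egress logs IT already collects. The sketch below is a hypothetical example that assumes a proxy log with user and host columns; it simply counts requests to known generative AI endpoints so usage surfaces instead of staying in the shadows.

```python
import csv

# Illustrative watchlist; real controls would maintain this list centrally
GENAI_HOSTS = {"api.openai.com", "chat.openai.com"}

def flag_shadow_ai(proxy_log_csv: str) -> dict:
    """Count requests per user to known generative AI endpoints.

    Assumes an egress proxy log with 'user' and 'host' columns; adapt
    to whatever schema your gateway actually emits.
    """
    hits = {}
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("host") in GENAI_HOSTS:
                hits[row["user"]] = hits.get(row["user"], 0) + 1
    return hits
```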


As generative AI gains traction in enterprises, many organizations will reflexively delegate governance oversight to the CIO, CTO, or equivalent C-level tech role, but Radclyffe advises against this.

The CIO or CTO’s ultimate priority, he notes, is to leverage technology to maximize the company’s profits. “There’s a clear incentive misalignment,” he explains. “This is the reason we don’t allow the head of sales at a wealth management company to also be accountable for anti-money laundering.”

A better choice, he says, would be to delegate oversight to the most senior risk leader, such as a chief risk officer, with a dotted line to the head of ESG. The ascension of risk leaders to the C-suite has been accelerating, with 3 in 5 companies promoting them to the boardroom in recent years, and 29% considering doing so soon, according to global accounting firm BDO.

Of course, when it comes to generative AI, as with any shiny new technology a company deploys, the ultimate human-in-the-loop is the CEO, who has the 10,000-foot view of the business from the corner office. The AI buck will always stop there.



Author

Howard Rabinowitz is a business and technology writer based in West Palm Beach, Fla.