
ARTICLE | September 20, 2024 

Three keys to scaling in the GenAI era

The road from pilot programs to enterprisewide deployments is bumpy. Here are tips for smoothing the ride.
By Howard Rabinowitz, Workflow contributor

It’s Tuesday morning and your AI-infused enterprise is humming. Machine learning apps are honing sales forecasts, revealing zero-day cyberattacks, and assisting in dozens of other ways. And generative AI (GenAI) copilots are turbocharging the talents of people in every corner of the organization: writing code for software engineers, giving real-time suggestions to customer service reps, and creating first drafts of everything from social posts for marketers to merger contracts for lawyers. Thanks to the underlying AI platform, data from all of these silos is continuously collected and analyzed to reveal new efficiencies and growth opportunities.

That’s the much-hyped promise of AI, but for most companies it’s not yet the reality. According to a survey of nearly 4,500 companies by ServiceNow and Oxford Economics, relatively few have managed to scale from isolated AI pilot programs to an integrated, enterprisewide AI capability. Those that have, the AI Pacesetters, are already turning their investments into real business value.

What set these leaders apart? They were far more likely to have developed a clear AI strategy that spans functional areas, and their C-suite executives were twice as likely to have been engaged in defining and promoting that strategy. Leaders were also far more likely to say they had the right mix of talent to execute the strategy. And they are investing substantially more in AI: 40% of leaders plan to boost these investments by 15% or more in the next few years, compared to just 18% of non-leaders.

There’s no shame in having started slowly on your AI transformation journey. As the low grades on the Enterprise AI Maturity Index show, it’s not easy. But further delays will be costly, says Maribel Lopez, founder of Lopez Research. “Given how fast AI is developing and being adopted, you can’t be sitting around thinking you’ll put it in next year’s budget,” she says. “It’s going to be very hard to catch up once you fall behind.”

Here are three core practices that Lopez and other experts recommend to any company looking to scale AI successfully across the enterprise.


Data is the lifeblood of AI. But bad data is poison.

That’s because unless the data is vetted, properly organized, tagged, and up to date, the AI systems that use it cannot be trusted to deliver insights that are accurate, reliable, and useful.

Too often, this prep work gets back-burnered by IT teams drawn to sexier projects, such as working with large language models (LLMs), or, more likely, simply too busy putting out daily operational fires. IT leaders need to allocate the time and resources to build a robust data architecture and maintain data quality.
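What does maintaining data quality look like in practice? As a rough sketch only (not drawn from the survey or any particular vendor’s tooling), an automated audit can flag tables with duplicates, gaps, or stale records before they ever feed an AI system. The freshness threshold and column names below are assumptions:

```python
# Hypothetical data-quality gate: tables that fail these checks should
# be fixed before any AI system trains on or retrieves from them.
import pandas as pd

MAX_AGE_DAYS = 90  # assumed freshness threshold; tune per data source

def audit(df: pd.DataFrame, updated_col: str = "last_updated") -> dict:
    """Return simple quality metrics for one table."""
    age = pd.Timestamp.now() - pd.to_datetime(df[updated_col])
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_values": int(df.isna().sum().sum()),
        "stale_rows": int((age.dt.days > MAX_AGE_DAYS).sum()),
    }

# Invented sample records for illustration.
records = pd.DataFrame({
    "account": ["A1", "A1", "A2", None],
    "last_updated": ["2024-09-01", "2024-09-01", "2023-01-15", "2024-08-20"],
})
print(audit(records))  # e.g. {'rows': 4, 'duplicate_rows': 1, ...}
```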

This advice holds especially true if you’re looking to scale GenAI across your operations. Today, most companies are using LLMs, such as OpenAI’s GPT-4, that are relatively easy to deploy but are trained on data from the public internet. The job gets more complicated, and costly, when companies decide to fine-tune these models with their own proprietary data, and still more complicated and costly if they build their own models from scratch. 
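To make the fine-tuning step concrete: most providers accept training examples as a JSON Lines file of chat exchanges built from a company’s own records. The sketch below is illustrative only; field names vary by provider, and the Q&A pairs are invented:

```python
# Hypothetical prep of proprietary data for fine-tuning, using the
# common JSONL chat format; exact schema differs by provider.
import json

internal_examples = [
    ("What is our refund window?", "Refunds are accepted within 30 days."),
    ("Do we ship to Canada?", "Yes, standard shipping takes 5-7 days."),
]

with open("finetune_train.jsonl", "w") as f:
    for question, answer in internal_examples:
        f.write(json.dumps({"messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]}) + "\n")
```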

Ultimately, the companies that take this proprietary-data path will get greater strategic benefits from AI. They’ll also be better able to protect their proprietary data from falling into the wrong hands or leaking into the public domain, and they’ll have the control needed to design and implement effective governance systems.

But this path is tougher. It requires big investments of time and money, not only to prep the data but also to modernize the underlying data architecture. For example, companies that want to make the most of GenAI need to meld data warehouses, traditionally used to store structured data such as financial records, with data lakes, which typically hold unstructured data such as videos, images, and social media posts. The result is an integrated architecture known as a data lakehouse.
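A toy example can make the lakehouse idea concrete. In the hypothetical sketch below (assuming pandas with the pyarrow Parquet engine installed), structured invoice records and a catalog of unstructured assets land in one columnar store, so a single query can join business data to the context around it. All data, paths, and URIs are invented:

```python
import pandas as pd
from pathlib import Path

Path("lakehouse").mkdir(exist_ok=True)

# Structured, warehouse-style records (hypothetical).
invoices = pd.DataFrame({
    "customer_id": [101, 102],
    "amount": [2500.00, 980.50],
})

# Lake-style catalog rows pointing at unstructured assets (hypothetical).
assets = pd.DataFrame({
    "customer_id": [101, 102],
    "asset_uri": ["s3://lake/calls/101.wav", "s3://lake/chats/102.json"],
    "asset_type": ["audio", "chat_log"],
})

# Landing both in one open columnar format (Parquet) is the kernel of
# the lakehouse pattern: one storage layer, one query surface.
invoices.to_parquet("lakehouse/invoices.parquet")
assets.to_parquet("lakehouse/assets.parquet")

joined = invoices.merge(assets, on="customer_id")
print(joined[["customer_id", "amount", "asset_type"]])
```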
 


Also, data tuning for GenAI is not a one-and-done proposition. It’s a continuous process, says former JPMorgan Chase CTO Michelle Bonat, now chief AI officer at AI Squared, a data integration platform vendor. “When your chatbot meets the world for the first time, it’s seeing data it’s never seen before,” she explains. “It’s critical that you keep tuning and retraining and updating the model once it’s deployed.”
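One hypothetical way to operationalize that advice is a scheduled drift check: compare what the deployed model is seeing against what it was trained on, and flag it for retraining when the two diverge. The sketch below applies a two-sample Kolmogorov-Smirnov test to a single numeric feature; the threshold and synthetic data are assumptions, not anything from AI Squared:

```python
# Hypothetical drift monitor for one model input feature.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Synthetic stand-ins for real feature values.
training_inputs = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_inputs = rng.normal(loc=0.6, scale=1.0, size=1_000)  # world has shifted

stat, p_value = ks_2samp(training_inputs, live_inputs)
if p_value < 0.01:  # assumed significance threshold; set per use case
    print(f"Drift detected (KS={stat:.3f}); queue model for retraining.")
else:
    print("No significant drift; keep monitoring.")
```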

Responsible AI is not just the right thing to do. It’s also the smart thing to do. Since GenAI burst onto the scene in November 2022, companies have been pilloried for misusing the technology, from Sports Illustrated failing to disclose that some of its articles were written by AI to Air Canada being sued after its chatbot incorrectly promised a customer a bereavement discount for travel to his grandmother’s funeral.

The damage goes far beyond embarrassing headlines. In an Artificial Intelligence Industry Association survey of 1,000 companies with more than $1 billion in revenue, 20% reported losses of $50 million to $100 million from poorly governed GenAI models and applications, 24% reported losses of $100 million to $200 million, and 10% admitted to losses of $200 million or more.


Yet, according to ServiceNow’s Enterprise AI Maturity Index, governance is by far the weakest link for most companies as they try to scale their AI operations. Even leaders gave themselves a score of just 7 out of 100, only slightly better than non-leaders, who reported an average score of 5.

To do better, companies should start by defining a clear, comprehensive framework for making ethical business choices at every step of the deployment plan, from training and tuning LLMs to ensuring that employees give customers accurate information about company policies. Companies need to build, and publicize, clear principles and policies across the organization: who can and cannot use particular tools, and what data can and cannot be shared with them.
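Such policies are most useful when they are explicit enough to be checked automatically. The sketch below is illustrative only; the roles, tools, and data tiers are hypothetical, not anyone’s actual policy:

```python
# Hypothetical rule table: "who can use which tool with what data,"
# expressed as an explicit, auditable mapping with a default deny.
POLICY = {
    # (role, tool) -> highest data classification that may be shared
    ("engineer", "code_copilot"): "internal",
    ("support_rep", "chat_assistant"): "customer_pii",
    ("marketer", "chat_assistant"): "public",
}

SENSITIVITY = ["public", "internal", "customer_pii", "restricted"]

def may_share(role: str, tool: str, data_class: str) -> bool:
    """True if this role may send data of this class to this tool."""
    ceiling = POLICY.get((role, tool))
    if ceiling is None:
        return False  # default deny: unlisted combinations are blocked
    return SENSITIVITY.index(data_class) <= SENSITIVITY.index(ceiling)

print(may_share("marketer", "chat_assistant", "customer_pii"))  # False
print(may_share("engineer", "code_copilot", "internal"))        # True
```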


A key challenge is figuring out who will make and enforce the rules of the road. “The most important thing is to have a clear understanding of who is going to take the ball and run with it,” says Tom Davenport, distinguished professor of IT and management at Babson College. “Is it the chief information officer, the chief technology officer, the chief data and analytics officer? Many approaches can work.”

Davenport favors an AI steering committee approach, often with the CIO or CTO leading a group that may also include the chief security officer, a data officer, a risk officer, and representatives from legal and HR. Many organizations also create an AI Center of Excellence, bringing together business leaders and data scientists to determine the best use cases for AI and how to get maximum value.

John Castelly, chief ethics and compliance officer at ServiceNow, agrees that AI governance is a multidisciplinary affair. “It’s most definitely a team sport, and that’s because you can’t afford a bottleneck,” he says. “The speed of innovation requires movement. It requires collaboration, and it requires buy-in. You can’t have one person that’s responsible for understanding and knowing all about the development and the deployment of your AI or GenAI strategy.”

No matter what organizational structure is chosen, the governance body must oversee all aspects of scaling, says Isaac Sacolick, founder and president of StarCIO, a consulting company. “If you don’t have a defined process, you’ll probably end up with a shadow AI problem,” where, as with shadow IT, individuals throughout the company take matters into their own hands and start using AI tools without central oversight. “That’s not just inefficient. It has real risk to it.”

Troublingly, more than half of workers admit to using generative AI tools from outside the organization and hiding it from their managers. 

“We don’t want to open up a bunch of tools, throw [them] at the organization, and just say, ‘Have at it,’” cautions Sacolick. “Governance policies determine how much structure or how much flexibility you provide to employees to use these LLMs for different processes. Some organizations are going to give access to everybody. Some are going to be more selective.”

Much of the business promise of AI is tied up in GenAI, which, unlike earlier generations of the technology, can directly transform the way knowledge workers do their jobs, whether by summarizing a lengthy document or creating a PowerPoint deck in seconds. But companies are struggling mightily to get the right mix of human and machine: Only 19% of companies surveyed for the AI Maturity Index say they have created workflows that leverage human and AI capabilities to make work more efficient.

The top barrier to scaling GenAI is a lack of technical skills, according to Deloitte’s State of Generative AI 2024 report. The same report finds that three-quarters of organizations expect to evolve their talent strategies within the next two years to develop generative AI-powered workforces.

But are they moving fast enough? Consider this: Only 25% of companies are planning to offer workers training on how to use GenAI in their jobs in the coming year, according to the 2024 Work Trend Index Annual Report from Microsoft and LinkedIn.

ServiceNow’s AI Maturity Index shows that although the majority of office workers have used generative AI in some capacity, almost half of them still don’t understand how GenAI can best support them in their specific roles.
 

“In early 2023, we thought we could just give a bot interface to everybody and we’d be done,” recalls Carm Taglienti, chief data officer at Insight. “It doesn’t work that way.” Employees need to be educated not just about governance policies, but also how to collaborate with the AI and tap its vast data and compute capabilities to do their jobs better.

Without that granular change management, companies risk disengagement and lack of buy-in. “For those who are early adopters within the organization, it’s a little bit easier to deliver these services,” notes Taglienti. “But there’s a big group of the population who will resist any change. If they try it and don’t see the value, they’ll never use it again.”



For scaling to succeed, companies need to conduct AI training across the enterprise. It’s not enough to give basic primers on GenAI, or even to show how to use copilots tuned for specific jobs. Anyone using AI needs to understand responsible AI practices to ensure ethical, unbiased use of the technology, and to grasp the strategic value of AI at the enterprise level, not just in their own work. The potential synergies are huge: maximizing the value of data to drive more AI-generated insights, increased collaboration across functions, and streamlined governance processes, to name a few. Ignore such benefits at your peril, because your competitors likely won’t.


Author

Howard Rabinowitz is a business and technology writer based in West Palm Beach, Fla.
