Are you ready for Generative AI?

ARTICLE | May 19, 2023

Buckle up: ChatGPT and similar chatbots will change business irrevocably

By Nicolas Chapados, Workflow contributor


Late last November, tech startup OpenAI released ChatGPT, an experimental chatbot built on so-called generative AI technology. ChatGPT can perform a broad range of tasks—from writing code and passing exams to composing essays and poetry—at an apparently human level in response to queries written in natural language.

By January, even though it was still a research project, ChatGPT had more than 100 million monthly active users, making it the fastest-growing consumer app in history, according to data from the analytics firm Similarweb. In late January, Microsoft confirmed its multiyear, multibillion-dollar investment in OpenAI. In February, it offered limited access to new AI-powered capabilities based on ChatGPT in its Bing search engine and announced plans for a wider release of the technology via Azure Cloud and Office 365. Google, Baidu, and other big tech companies have announced rival chatbots as well, sparking a generative AI arms race.

So far, most analysts have focused on how ChatGPT and its competitors will transform internet search, supplant human creativity, turbocharge plagiarism, or make millions of white-collar workers redundant.

Few, however, are talking about the implications of this new technology for the enterprise itself. Going forward, tools like ChatGPT will be a source of uncertainty for organisations whose business models they threaten to disrupt. For companies that understand its potential, generative AI presents a huge opportunity.

ChatGPT is built on what is known as a large language model, part of a broader family of technologies called foundation models. These systems are trained on gigantic volumes and varieties of data, but they are not specialised for any specific task. Rather, they are pregnant, if you will, with a huge set of potential capabilities. They can perform at near-human levels when given just a tiny amount of data related to a specific task.


In 2020, OpenAI released GPT-3 (Generative Pre-trained Transformer 3), the model that underpins ChatGPT. That version encompassed 175 billion parameters and was trained on a multitude of data sources from across the internet.

Most importantly, it reshaped the notion of generality in AI. After the system was trained, you could adapt it to a specific task simply by giving it a few examples of text along with instructions in natural language. That core ability to instruct a chatbot the same way you would a fellow human is what’s revolutionary about GPT-3.
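
To make that concrete, here is a minimal sketch of the few-shot pattern, written against the OpenAI Python library roughly as it existed when this article was published (the client interface has since evolved). The support-ticket task, the example data, and the model name are illustrative assumptions, not a reference to any particular product.

```python
# A minimal sketch of few-shot prompting: a plain-language instruction
# plus a handful of examples, with no retraining of the model itself.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = """Classify each support ticket as HARDWARE, SOFTWARE, or BILLING.

Ticket: My laptop screen flickers when I undock it.
Category: HARDWARE

Ticket: I was charged twice for my March subscription.
Category: BILLING

Ticket: The app crashes whenever I open the settings page.
Category: SOFTWARE

Ticket: The projector in room 4B won't turn on.
Category:"""

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model
    prompt=prompt,
    max_tokens=5,
    temperature=0,  # deterministic output suits classification
)
print(response["choices"][0]["text"].strip())  # e.g. "HARDWARE"
```

The same model, handed different examples and instructions, could just as easily summarise contracts or draft replies; that is the generality described above.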

The model’s emergent capabilities make it great at testing creative ideas, fast. It’s also very useful for retrieving information, and software developers have found it to be quite skillful at suggesting code when prompted with a programming challenge.

GPT-3 also has significant limitations. One is its tendency to “hallucinate”: the system sometimes provides made-up answers to questions. Hallucinations arise from the static nature of the model’s knowledge, which is whatever the engineers trained into it as of a specific date. Without the ability to update itself or access the internet, its knowledge is limited, and when it doesn’t know the answer to a question, it may make one up.

These limitations are not permanent. For now, ChatGPT can’t connect to real-time or updated corpora of knowledge or application programming interfaces (APIs). However, Microsoft’s more advanced version can search the internet and include very recent information and news. Such breakthroughs will unlock even more of this technology’s potential.
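
One common way to connect a model to updated knowledge is retrieval-augmented generation: fetch relevant, current documents first, then instruct the model to answer only from them. The sketch below illustrates the pattern; the tiny in-memory corpus and the search_knowledge_base helper are hypothetical stand-ins for a real search API or vector store, and the model name is an assumption.

```python
# A hedged sketch of retrieval-augmented generation: ground the model's
# answer in retrieved, up-to-date documents rather than its frozen
# training data, reducing the room for made-up answers.
import openai

# Toy corpus standing in for a live, regularly updated knowledge base.
CORPUS = [
    "Policy: Employees may expense up to $50 per day for meals while travelling.",
    "IT notice: VPN access requires multi-factor authentication as of April 2023.",
]

def search_knowledge_base(query: str) -> list[str]:
    # Hypothetical retrieval step; in practice, a search API or a
    # vector-similarity lookup. Here: naive keyword overlap.
    words = set(query.lower().split())
    return [doc for doc in CORPUS if words & set(doc.lower().split())]

def grounded_answer(question: str) -> str:
    context = "\n\n".join(search_knowledge_base(question))
    prompt = (
        "Answer the question using ONLY the context below. If the context "
        "does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=150,
        temperature=0,
    )
    return response["choices"][0]["text"].strip()

print(grounded_answer("How much can I expense for meals?"))
```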

It’s easy to imagine everyone in an organisation soon having access to cognitive assistants powered by large language model-based AI.

Sitting at your computer, you could ask your AI assistant questions throughout the day about the tasks you’re working on and have conversations with it. You could talk to it as you would a trusted colleague and get help testing hypotheses, trying things out, gathering information from a knowledge base, organising your day, and so on. Such functionality would help you do your job better and faster.

There’s also a lot of potential in functionality that resembles GitHub Copilot, a generative AI tool that offers on-the-fly coding suggestions for programmers. For everyday enterprise system users, we can imagine every text-entry box automated to suggest likely inputs based on the context and what you’ve done before. (Of course, the human user always has the final word on what the system should do.) In effect, Copilot-like technology could be deployed at scale everywhere.
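
As a rough illustration of what a Copilot for text boxes might look like under the hood, the sketch below turns a user’s recent entries into few-shot examples and asks the model for a one-line suggestion. The function name, field names, and model are all hypothetical; the user remains free to discard the output.

```python
# A hedged sketch of context-aware autocomplete for an enterprise form
# field: recent entries become few-shot examples, and the model proposes
# a completion that the user can accept, edit, or reject.
import openai

def suggest_completion(field_label: str, history: list[str], typed_so_far: str) -> str:
    # Use the user's five most recent entries as examples of typical input.
    examples = "\n".join(f"{field_label}: {entry}" for entry in history[-5:])
    prompt = f"{examples}\n{field_label}: {typed_so_far}"
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=30,
        temperature=0.3,
        stop=["\n"],  # one suggested line, nothing more
    )
    # Only a suggestion: the human user has the final word.
    return typed_so_far + response["choices"][0]["text"]

history = [
    "Reset password for user in the finance department",
    "Provision laptop for new hire in sales",
]
print(suggest_completion("Short description", history, "Provision "))
```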

A bit further out, it’s only a small speculative jump to imagine a digital twin for various work personas, such as IT help desk agents. In such a future, newly hired agents would get up to speed quickly because they would benefit from the autocompletion and automated assistance of their digital twin, which is the product of the AI system learning from the hundreds of workers who came before them. Anything a user produces can serve as feedback to further train these models through reinforcement learning, so the feedback itself contributes to making the models better. Soon the digital twin might be as capable as top human performers in a variety of everyday situations.

Looking at products, we can imagine companies providing highly trained foundation models for specific industries or verticals. A foundation model tailored for telecom, manufacturing, or the public sector would understand the vocabulary and processes unique to that sector, because it would be fine-tuned on specialised data sets that capture context-specific nuances. Companies would then provide these models as part of a platform capability or service.
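
For a sense of the mechanics, here is a hedged sketch of how such a vertical model might be produced, using the OpenAI fine-tuning API roughly as it existed in early 2023 (the interface has since changed). The file name and training data are illustrative assumptions.

```python
# A hedged sketch of fine-tuning a base model on domain-specific data.
# The JSONL file holds prompt/completion pairs drawn from, for example,
# historical telecom support tickets and their resolutions.
import openai

# 1. Upload the domain data set.
upload = openai.File.create(
    file=open("telecom_tickets.jsonl", "rb"),  # hypothetical data file
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on a GPT-3 base model.
job = openai.FineTune.create(
    training_file=upload["id"],
    model="davinci",
)

# The finished job yields a new, vertical-specific model that can be
# offered as part of a platform capability or service.
print(job["id"])
```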

If we think about foundation models as a new kind of operating system for AI, the possibilities become quite exciting.

Looking into the future, we can imagine the emergence of what I’ll call “intelligence value chains”: an ecosystem of products, services, and applications based on the pervasive use and availability of AI-based foundation models. We’re already seeing specialised AI prompt stores, where short pieces of natural language that efficiently coax a model into accomplishing a task sell for a couple of dollars apiece.

This has two important implications for the future. One is that standardised AI dialogue turns, consisting of a text-in/text-out exchange, become so important to business that they evolve into a form of cognitive commodity. Not unlike a barrel of oil or a bushel of wheat, a cognitive commodity would be a standardised unit for AI services. From there, it’s not hard to imagine the creation of cognitive futures contracts to guarantee the future availability of AI services at a set price.

A second possibility, perhaps nearer and more concrete, is the emergence of an app economy for these models. This would allow not only prompts, but also a combination of prompts plus additional training parameters to plug into the model to produce new behaviours, in the same way that apps for iOS and Android can enrich our phones with new functionality. 

If this market for customising foundation models takes off, it might give rise to flywheel effects in favour of some models, which would garner more users because they have more apps, and more app developers because they have more users. (In effect, these models would become the AI version of Microsoft Windows or Apple’s iOS.)

It’s always difficult to predict how revolutionary technologies will shape the future. I believe this moment represents the beginning of a new era in software, with the industry likely to change more in the next five years than it has in the last 50. When it comes to enterprise-specific foundation models, conversational interfaces will become a primary way users interact with machines, on an equal footing with today’s point-and-click interfaces. Users will be empowered to interact and partner with their machines more intuitively, to create highly customisable and sophisticated workflows, to automate complex daily tasks, and to be more effective both at work and throughout their lives.

Author

Nicolas Chapados is the vice president of research at ServiceNow. He co-founded Element AI, which was acquired by ServiceNow in 2021.