- A new NLP model that mimics human language could accelerate enterprise automation
- Developers have so far tapped the model’s API to create more than 300 applications
- Companies are using the technology to produce digests of financial news and create realistic artificial voices
Last June, OpenAI, the San Francisco–based AI research lab, released the world’s most powerful natural language processing model to date. Known as GPT-3, short for Generative Pre-trained Transformer 3, it can automatically complete a statement, respond to questions, or generate lines of code from a few simple commands, and it can do so with more human-like realism than any previous AI-based language program.
The technology is still in private beta, but initial enthusiasm has been strong. Developers have already produced more than 300 applications using the model’s API, including programs that help gather material for legal briefs, return more relevant search results, answer customer-service questions, and create simple websites.
Commercial applications remain limited for now, but that’s expected to change as GPT-3’s facility with everyday language accelerates automation across the enterprise. The technology could enhance a wide range of business operations, from analyzing, digesting, or translating the content in documents to producing marketing, sales, or other text-based materials.
“I think GPT-3 will have a significant and lasting impact on NLU [natural language understanding] technologies,” says Bryan Healey, CTO of Aiera, a company that produces financial-news digests. “But we’re still in early days for this technology.”
A language ‘transformer’
To be sure, much has been made of GPT-3, maybe too much. Sam Altman, OpenAI’s CEO, wrote on Twitter last year that “GPT-3 hype is way too much.” While it might free content creators from routine writing tasks, it’s incapable of understanding or reasoning, and its output is sometimes incoherent or nonsensical.
GPT-3 works like a super-powered version of Google’s text-prediction algorithm, which automatically tries to complete search queries or suggests basic email responses. Given a prompt, GPT-3 can produce a long-form answer to a question, write an original story, essay, or poem, or generate the code to create an application.
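In practice, "given a prompt" means sending GPT-3 a block of text and letting it continue it. As a rough sketch, here is what a question-answering call might look like; the few-shot prompt format is illustrative, and the `engine` name and client calls reflect the 2020-era `openai` Python SDK, so treat the specifics as assumptions rather than any vendor's exact setup.

```python
# Hypothetical sketch of prompting GPT-3 for question answering.
# The prompt format is an illustrative few-shot example; the engine name
# and parameters follow the 2020-era `openai` Python client.
import os


def build_prompt(question: str) -> str:
    """Frame a question as a completion prompt, with one worked example
    so the model continues in the same Q/A pattern."""
    return (
        "Q: What is the capital of France?\n"
        "A: Paris\n"
        f"Q: {question}\n"
        "A:"
    )


def answer(question: str) -> str:
    """Send the prompt to GPT-3 and return its completion.
    Requires the `openai` package and an API key; at the time of
    writing, API access was gated behind a private beta."""
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]
    resp = openai.Completion.create(
        engine="davinci",              # assumed engine name
        prompt=build_prompt(question),
        max_tokens=32,
        temperature=0.0,
        stop=["\n"],                   # stop at the end of the answer line
    )
    return resp.choices[0].text.strip()
```

The same mechanism covers the article's other examples: swap the Q/A framing for a story opening, an essay title, or a code comment, and the model completes in kind.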
What makes GPT-3 so powerful is the size of its model, which uses 175 billion learning parameters — the learned weights it uses in predicting which words to produce. That is more than 10 times the count of the previous language-model champ, Microsoft’s Turing NLG, and the reason it can generate more accurate and realistic language. OpenAI has already begun licensing the technology to Microsoft, which plans to make its features available through its Azure cloud-computing platform.
Using GPT-3 AI to revamp the search experience
At Aiera, based in New York, Healey began experimenting with GPT-3 last year, using it to summarize news releases and to highlight the most important segments in a corporate-earnings call. The company is also testing the tool’s ability to improve search results. Rather than respond to a query with a list of links, a search request could produce a document with the requested information.
“This could, in theory, allow our users to be more abstract with their searching, i.e., what is the most important product for Apple?” Healey says. Rather than giving users a firehose that includes every story written about Apple, GPT-3 could allow Aiera to digest all of that information and turn it into something more coherent and usable.
The tool is powerful but can sometimes falter when auto-completing queries, repeating itself or suggesting profanities and other inappropriate phrases. “It’s not impossible to overcome, but can be challenging,” Healey says. Still, he plans to integrate the technology into Aiera’s search engine once the kinks are worked out.
More authentic chatbots
At Resemble.ai, a producer of synthetic voices, GPT-3 is being used to create more realistic vocal interactions for video games and for voice-response systems in call centers. Traditionally, this kind of work involves hiring voice actors to record lines for every conceivable response. Now, after being fed five minutes’ worth of spoken words, Resemble’s AI does the rest, creating audio responses dynamically.
With the technology, game characters can have conversations with players without relying on prerecorded dialogue, and the television and film industries could recreate voices of long-dead actors or historical figures, says Fawad Ahmed, Resemble’s growth manager. Last year, its technology was used to create a video for a well-known tech reviewer using a synthetic voice and without the participation of the reviewer.
The same ability can be used to create more realistic customer-service chatbots. The tool can mine a company’s knowledge base and provide customers with a bespoke answer instead of canned, prewritten dialogue.
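One common way to "mine a knowledge base" like this is to retrieve the most relevant entries for a customer's question and splice them into the prompt so the model answers from them. The sketch below is a minimal illustration of that pattern; the keyword-overlap scoring, knowledge-base format, and prompt wording are all assumptions, not any vendor's actual pipeline.

```python
# Illustrative sketch of retrieval-then-prompt for a support chatbot.
# The scoring and prompt format here are assumptions for demonstration.

def retrieve(knowledge_base: list[str], query: str, k: int = 2) -> list[str]:
    """Rank knowledge-base entries by keyword overlap with the query
    (a crude stand-in for real semantic search) and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda entry: len(q_words & set(entry.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_support_prompt(knowledge_base: list[str], question: str) -> str:
    """Assemble a prompt that grounds the model's reply in retrieved facts,
    rather than letting it improvise a canned-sounding answer."""
    context = "\n".join(retrieve(knowledge_base, question))
    return (
        f"Answer using only these facts:\n{context}\n\n"
        f"Customer: {question}\nAgent:"
    )
```

The resulting string would then be sent to GPT-3 as a completion prompt, as in the earlier examples, with the retrieved facts keeping the answer specific to the company.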
A GPT-3-powered chatbot not only has a better grasp of the meaning of a query than previous NLU technologies, but it can also come back with more relevant and realistic-sounding answers. For example, check out the italicized paragraphs of this blog post, which were written by a GPT-3 bot in response to the preceding paragraphs, written by people.
Customer-service practitioners suggest that by itself, GPT-3 isn’t ready to handle real-life queries. It is unable to read between the lines when a customer phrases a question imperfectly, and will respond with bland, generic answers instead of something more helpful. But its ability to quickly search a company’s knowledge base allows it to feed customer-service reps accurate answers in a fraction of the time it would take a human to look them up. Other AI-based customer-support tools offer similar abilities, but GPT-3 generally delivers faster, more realistic results.
A boost for low-code development
GPT-3 is also generating buzz among low-code developers.
While traditional low-code tools let programmers build software through point-and-click interfaces, GPT-3 is a different animal: just type a short description of what you want, or a few lines of sample code, and the algorithm fills in the rest.
This is easiest to see in a tweet by Sharif Shameem, who developed an app that takes a simple description of a table, a button, or a web page layout and returns the code needed to create that element. GPT-3 can also write full apps.
> This is mind blowing.
>
> With GPT-3, I built a layout generator where you just describe any layout you want, and it generates the JSX code for you.
>
> W H A T pic.twitter.com/w8JkrZO4lk
>
> — Sharif Shameem (@sharifshameem) July 13, 2020
Check out Shameem’s to-do list app, which was written entirely by GPT-3 in response to nothing more than a description of what the app should do.
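A description-to-code generator of this kind can be sketched as few-shot prompting: show the model a couple of description/markup pairs, then ask it to complete the next one. The example pairs and prompt format below are illustrative assumptions; Shameem's actual prompt has not been published.

```python
# Few-shot prompt sketch for description-to-code generation.
# The example pairs and labels are illustrative, not Shameem's prompt.
EXAMPLES = [
    ("a red button that says stop",
     '<button style="color: red;">Stop</button>'),
    ("a large heading that says welcome",
     "<h1>Welcome</h1>"),
]


def build_codegen_prompt(description: str) -> str:
    """Turn description/markup pairs plus a new description into a
    completion prompt; the model is expected to continue after 'code:'."""
    shots = "\n".join(f"description: {d}\ncode: {c}" for d, c in EXAMPLES)
    return f"{shots}\ndescription: {description}\ncode:"
```

Sent to GPT-3, a prompt like this nudges the model to emit markup in the same pattern, which is essentially how a description becomes a button, a table, or a page layout.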
These are simple examples, of course, and they don’t always work perfectly. Critics also note that GPT-3 doesn’t scale well, and that writing code isn’t the major challenge in app development anyway; clarifying and solving problems are the bigger hurdles. GPT-3 may be able to write code, but it cannot understand what code needs to be written.
With Microsoft now holding the reins of GPT-3, and holding them rather tightly, the outlook for a general release of the technology remains murky. Until, that is, Outlook starts finishing your emails for you.