Enterprise AI and the ‘valley of death’

ARTICLE | DECEMBER 9, 2021

How companies can translate cutting-edge research into practical business tools

By Evan Ramzipoor, Workflow contributor


Researchers are trying to build artificial intelligence and machine learning models that can learn as fast as humans, translate verbal instructions into code, help people make accurate decisions in an unpredictable environment, and deliver unbiased results.

While these research agendas could lead to faster, smarter business decisions, they must first be translated into concrete use cases. To do that, most businesses have teams dedicated to “crossing the valley of death.”


This phrase, well-known to experts in enterprise AI research and development, means finding practical applications for AI lab research. That’s part of Valérie Bécaert’s job at ServiceNow. Bécaert, a senior director of research and scientific programs at the company’s Advanced Technology Group (ATG), leads AI research programs and ensures they align with business needs.

There’s not always an obvious link between lab research and practical applications.

“A major component is figuring out the level of understanding and skill the organization has, the processes and culture in place, and how we can transform this culture or influence it so there is more risk acceptance,” Bécaert says. “It’s about getting the organization prepared to take [on] this research and not be afraid of it.”

Like humans, algorithms rely on examples to learn new concepts, but not at the same pace or level of understanding. For example, a child can quickly learn the difference between a cat and a squirrel if shown pictures of each. An algorithm may require data scientists to feed it thousands of pictures before it can make the same distinction. Moreover, the model recognizes only as many types of cats and squirrels as it has been shown.

This is a problem for organizations. As they ingest more data—from web apps, mobile apps, IoT devices, customers, vendors, and partners—training machine learning models becomes more expensive and complex.


So-called low-data learning addresses this problem by allowing AI and ML models to recognize more objects than those they were initially trained on. Christopher Pal, an AI researcher jointly affiliated with the ATG and Polytechnique Montréal, says low-data learning gives companies an important path to building effective models while requiring far less effort to collect labeled data.

“We can use far fewer resources and create many different models,” Pal says.
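
To make the idea concrete, here is a minimal sketch of one common low-data technique: a nearest-centroid classifier built on top of a frozen, pretrained encoder. The `embed` function below is a hypothetical stand-in for that encoder, and nothing here reflects ATG's actual implementation.

```python
# Minimal sketch of low-data ("few-shot") classification via class centroids.
# embed() is a placeholder for a frozen, pretrained encoder; in practice it
# would be a vision or language model's feature extractor.
import numpy as np

def embed(item: str) -> np.ndarray:
    # Placeholder: hash the input into a pseudo-embedding (stable per run).
    rng = np.random.default_rng(abs(hash(item)) % (2**32))
    return rng.standard_normal(64)

def build_classifier(support: dict[str, list[str]]) -> dict[str, np.ndarray]:
    """Average the embeddings of a handful of labeled examples per class."""
    return {label: np.mean([embed(x) for x in examples], axis=0)
            for label, examples in support.items()}

def predict(centroids: dict[str, np.ndarray], item: str) -> str:
    """Assign the class whose centroid is nearest to the item's embedding."""
    z = embed(item)
    return min(centroids, key=lambda label: np.linalg.norm(z - centroids[label]))

# A new "model" from just three labeled examples per class, with no retraining.
centroids = build_classifier({
    "cat": ["cat_photo_1.jpg", "cat_photo_2.jpg", "cat_photo_3.jpg"],
    "squirrel": ["squirrel_1.jpg", "squirrel_2.jpg", "squirrel_3.jpg"],
})
print(predict(centroids, "cat_photo_4.jpg"))
```

Because the encoder stays frozen, one pretrained model can back many such lightweight classifiers, which is the resource saving Pal describes.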

Though low-data learning has been around for decades, researchers are pushing the technology further than previously possible. A seminal 2015 paper by Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum, published in the journal Science, describes a model that learns new handwritten characters from a single example, roughly as efficiently as people do.

Pal says low-data learning could have important business benefits, allowing models to react quickly as the business context evolves.

“Tools like low-data learning can make your workforce more productive,” he says. “In an enterprise AI context, the most prevalent use case of low-data learning would be for a customer to adapt a pretrained ‘out-of-the-box’ model to their own business context, using their own datasets while minimizing labeling effort.”
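
As a hedged illustration of that use case, the sketch below adapts a generic pretrained text classifier to a customer's own labels by freezing the encoder and training only the small classification head. The checkpoint name, ticket texts, and label scheme are illustrative assumptions, not ServiceNow's production setup.

```python
# Sketch: adapting a pretrained "out-of-the-box" model with minimal labeled data.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased"  # generic pretrained encoder (assumed)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# A handful of labeled tickets supplied by the customer (illustrative).
texts = ["VPN drops every hour", "How do I reset my password?"]
labels = torch.tensor([0, 1])  # 0 = incident, 1 = how-to request

# Freeze the encoder; only the small classification head is trained.
for param in model.distilbert.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
for _ in range(20):  # a few passes are often enough with so little data
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```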

Another promising trend in AI research involves training machines to write software in response to simple voice commands. “This has been a holy grail for a long time,” says Pal. “Coming up with a programming language so flexible that it allows you to program even if you don’t know the programming language.”

It is not hard to imagine a future where almost anyone can execute rudimentary programming tasks using nothing but natural language, typed or spoken. The ATG team has developed an experimental model that enables this scenario: PICARD, named for the well-known Star Trek character and short for “Parsing Incrementally for Constrained Auto-Regressive Decoding.”

“Let’s say you want to ask how often a delivery is late,” says Pal. “Instead of writing the code, you can simply say it, and the model translates from text to SQL. Or you’ll be able to say ‘Send me an alert any time I get a sale of this kind of product,’ and that’s it.”
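
Here is a rough sketch of that text-to-SQL step using the Hugging Face transformers library. The checkpoint name and schema serialization are assumptions based on the PICARD authors' public releases, and PICARD's incremental SQL-validity checking during decoding is omitted for brevity.

```python
# Sketch: translating a natural-language question into SQL with a seq2seq model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "tscholak/cxmefzzi"  # T5 checkpoint from the PICARD authors (assumed)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Serialize the question together with the relevant table schema.
prompt = ("How often is a delivery late? | warehouse | "
          "deliveries : id, promised_date, delivered_date")
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_length=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```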

Nicolas Chapados, ATG’s vice president of research, says his group is extending these language models into so-called foundation models, which “are good not just at representing texts, but also at capturing the relationship between related modalities such as text and images.” When a researcher inputs a text description of an image, for example, the model may have the ability to generate that image.
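
Text-to-image generators are too heavy to sketch here, but the cross-modal relationship Chapados describes can be illustrated with OpenAI's publicly released CLIP model, which scores how well captions match an image. This is a stand-in for the idea, not ATG's own foundation models; the image path is a placeholder.

```python
# Sketch: scoring text-image matches with a joint text-image model (CLIP).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("dashboard_screenshot.png")  # placeholder local image
captions = ["a bar chart of quarterly sales", "a photo of a cat"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.2f}  {caption}")
```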

“The kinds of things that people once thought AI should do—really get the nuances of language—are becoming possible,” Chapados says.

As AI gets smarter and faster, researchers are turning to the next step: human decision support, which involves using AI to augment our ability to make strategic decisions and forecast their long-term impact.

As organizations take in more data, decision-making grows increasingly complex. Researchers are building algorithms that can parse large volumes of constantly evolving data, something even the smartest human experts can’t do. AI models can then estimate the likelihood that one event will cause another.

For example, ATG researchers are building AI that can monitor an IT environment, predict whether service issues are likely to occur, and determine a possible cause. The model can then either flag the issue for the IT team or fix it automatically.
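
One simple way to ground this idea is to treat unusual combinations of service metrics as early warning signs. The sketch below uses scikit-learn's IsolationForest on synthetic CPU, memory, and error-rate readings; the features, thresholds, and remediation hook are assumptions, not ATG's actual model.

```python
# Sketch: flagging anomalous service metrics that often precede incidents.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: CPU %, memory %, errors per minute (synthetic healthy history).
normal_history = rng.normal([40, 55, 2], [8, 10, 1], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_history)

latest = np.array([[92, 97, 40]])  # a suspicious new reading
if detector.predict(latest)[0] == -1:  # -1 means "outlier"
    print("Anomaly detected: open an incident or trigger automated remediation")
```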

Bias in training data or model behavior doesn’t just lead to faulty results; it can also have serious real-world consequences.

A 2019 study published in Science found that an algorithm used to allocate healthcare across American hospitals was systematically biased against Black patients. Other studies have shown that location- and demographic-based algorithms meant to predict where crimes will occur and who will commit them exhibit similar biases against people of color.

Chapados says it’s important for researchers to implement benchmarks and methodologies that can effectively flag and assess potential bias and its consequences. “The main advantage of AI systems is that we can characterize biases when they exist in AI systems,” he says, “whereas the biases in humans are totally opaque.”

ServiceNow’s ATG team can probe AI models to understand where their biases are, then trace that bias back to the data and algorithmic choices that underpin the models.

“You can identify the kinds of things that might be problematic and create a unit test,” says Pal. “It’s like a pop quiz.”
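
In code, such a “pop quiz” can be as simple as a counterfactual test: swap only a demographic term in otherwise identical inputs and fail the build if the model's output shifts. The `score` function below is a hypothetical stand-in for the model under test, and the 0.05 tolerance is an arbitrary assumption.

```python
# Sketch: a bias "unit test" that checks predictions are invariant when only
# a demographic term changes. score() is a placeholder for the real model.
def score(text: str) -> float:
    return 0.5  # placeholder prediction; replace with the model under test

def test_demographic_invariance():
    template = "The {group} patient reported severe chest pain."
    groups = ["Black", "white", "Asian", "Hispanic"]
    scores = {g: score(template.format(group=g)) for g in groups}
    spread = max(scores.values()) - min(scores.values())
    # Fail if predictions drift by more than an agreed tolerance (assumed 0.05).
    assert spread < 0.05, f"Potential bias detected: {scores}"

test_demographic_invariance()
```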

Establishing legal and governance frameworks to limit AI bias is critical to building consumer trust in AI, Chapados adds.

“You trust your doctor because they have gone through a grueling process of medical certification,” he says. “Does that mean that the doctor is perfect? Absolutely not. We should have similar standards for AI systems. Will they get something wrong? Absolutely, but we want to understand how often it happens and the potential cost.”

Author

Evan Ramzipoor is a writer based in California.
