As organizations ingest more data, training machine learning models becomes more expensive and complex.
By Evan Ramzipoor, Workflow contributor
Researchers are trying to build artificial intelligence and machine learning models that can learn as fast as humans, translate verbal instructions into code, help people make accurate decisions in an unpredictable environment, and deliver unbiased results.
While this research could lead to faster, smarter business decisions, it first must be translated into concrete use cases. To do that, most businesses have teams dedicated to “crossing the valley of death.”
This phrase, well-known to experts in enterprise AI research and development, means finding practical applications for AI lab research. That’s part of Valérie Bécaert’s job at ServiceNow. Bécaert, a senior director of research and scientific programs at the company’s Advanced Technology Group (ATG), leads AI research programs and ensures they align with business needs.
There’s not always an obvious link between lab research and practical applications.
“A major component is figuring out the level of understanding and skill the organization has, the processes and culture in place, and how we can transform this culture or influence it so there is more risk acceptance,” Bécaert says. “It’s about getting the organization prepared to take [on] this research and not be afraid of it.”
Bias in training data or model behavior doesn’t just lead to faulty results; it can have serious real-world consequences.
A 2019 study published in Science found that an algorithm used to allocate healthcare across American hospitals was systematically biased against Black patients. Other studies have shown that location- and demographic-based algorithms meant to predict where crimes will occur and who will commit them exhibit similar biases against people of color.
Nicolas Chapados, vice president of research at ServiceNow, says it’s important for researchers to implement benchmarks and methodologies that can effectively flag and assess potential bias and its consequences. “The main advantage of AI systems is that we can characterize biases when they exist in AI systems,” he says, “whereas the biases in humans are totally opaque.”
ServiceNow’s ATG team can probe AI models to understand where their biases are, then trace that bias back to the data and algorithmic choices that underpin the models.
“You can identify the kinds of things that might be problematic and create a unit test,” says Christopher Pal, a researcher with the ATG team. “It’s like a pop quiz.”
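Pal doesn’t detail what such a test looks like, but in spirit it might resemble the following Python sketch, which fails an automated check when a model’s positive-prediction rate diverges too far across demographic groups. The demographic-parity metric, the toy data, and the 0.2 threshold here are illustrative assumptions, not ServiceNow’s actual tooling.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)


def test_model_passes_bias_pop_quiz():
    # Hypothetical audit set: a model's decisions (1 = "approve")
    # alongside each record's demographic group label.
    predictions = [1, 0, 1, 0, 1, 1, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

    gap = demographic_parity_gap(predictions, groups)

    # Fail the test run if approval rates diverge too far between groups,
    # flagging the model for the kind of bias audit Pal describes.
    assert gap <= 0.2, f"demographic parity gap {gap:.2f} exceeds threshold"
```

Run under a test harness such as pytest, a check like this turns a fairness criterion into a pass/fail gate, so a biased model surfaces the same way a broken feature would.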
Establishing legal and governance frameworks to limit AI bias is critical to building consumer trust in AI, Chapados adds.
“You trust your doctor because they have gone through a grueling process of medical certification,” he says. “Does that mean that the doctor is perfect? Absolutely not. We should have similar standards for AI systems. Will they get something wrong? Absolutely, but we want to understand how often it happens and the potential cost.”