ARTICLE | December 3, 2021 | 5 min read

Why AI needs to show its work

Explainability is a business necessity and can help forestall regulation

By Janet Rae-Dupree, Workflow contributor

  • Users and regulators are demanding greater transparency from AI systems
  • AI applications must be able to explain how they make decisions
  • New AI design principles are helping companies build more transparent algorithms from the start

Artificial intelligence is changing how companies do business. But it still has trouble showing its work.

Nine out of 10 businesses using AI today report a critical need for better explanations about what’s happening inside AI’s “black box,” according to a 2021 IBM report. More than three-quarters of IT professionals in the survey say it’s critical they can trust that their AI’s output is fair, safe, and reliable.

Designers of AI systems know what data goes in and the answers that come out. Too often, what happens in between remains a mystery. Hidden biases in the data can deliver results that are inaccurate, unethical, and, in some cases, illegal. Faced with increased regulatory pressure—and growing demand for transparency from customers, employees, and investors—organizations need to explain the reasoning of their AI systems to show they are delivering accurate results and operating within ethical boundaries.

When AI models recommend potentially costly or risky decisions, the humans in charge need to know why. Explainable AI, or XAI, is the term of art for an AI system that allows people to understand how it reaches its decisions. In short, the algorithm provides an explanation along with its output.

As noted in previous Workflow stories, various forms of explainable AI have emerged in recent years. Next Move Strategy Consulting estimates that explainable AI providers will generate more than $8 billion in revenue by the end of this year, and more than $14 billion by 2025.

Dozens of companies now provide tools to improve the transparency of machine-learning systems. For example, some tools perform algorithmic auditing to test whether an AI system has blind spots or biases. So-called shadow algorithms can also be used to detect and mitigate AI bias.
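
What such an audit can look like in practice: the sketch below is a minimal, purely illustrative Python example that checks whether a hypothetical classifier approves applicants from one group far less often than another. The group labels, sample predictions, and reference-group choice are assumptions for illustration, not features of any particular auditing product.

```python
# Minimal, hypothetical sketch of an algorithmic audit: compare a model's
# positive-prediction ("approval") rates across demographic groups to flag
# potential bias. Group labels and sample data are illustrative only.

from collections import defaultdict

def approval_rates(predictions, groups):
    """Share of positive (1) predictions for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates, reference_group):
    """Each group's approval rate relative to a reference group.
    Ratios well below 1.0 suggest the model favors the reference group."""
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Example audit of ten loan-approval predictions for two applicant groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = approval_rates(preds, groups)
print(rates)                         # {'A': 0.6, 'B': 0.4}
print(disparate_impact(rates, "A"))  # {'A': 1.0, 'B': approx. 0.67}
```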

Explainable AI makes the technology more trustworthy and accountable, which encourages enterprise adoption and reduces the pressure for government regulation.

Trust is the first and perhaps most important driver of interest in XAI. It is especially significant for applications that make high-stakes decisions, often in the absence of rigorous testing. To trust the predictions these applications make, users need to know the predictions are valid and appropriate.

XAI must reveal the true limitations of an AI model so users know the bounds within which they can trust the model.


This is particularly important because many people are prone to blindly trusting AI explanations, based on the false assumption that they understand the process that produced them. To avoid misplaced trust, the explanations an AI model provides must clearly show how the model works, and where its limits lie, in a format users can readily understand.

“Everyone expects that there will be new regulations, but for now, trust is the key driver,” says Meeri Haataja, CEO of Saidot, a provider of enterprise AI risk-assessment platforms. “Companies are looking at AI governance and responsibility because they see that it’s essential for their stakeholders.”

As the IBM report and other surveys reveal, consumers are more likely to give their business to companies that are transparent about how they use AI models. Organizations can face public relations headaches and potential legal liability when opaque AI systems produce biased outcomes.

Companies and government entities are deploying AI applications to make increasingly impactful decisions. In recent years, we’ve seen harmful consequences from machine bias in health care, criminal justice, and consumer lending.

In a heavily regulated industry like financial services, companies must be able to demonstrate that their lending decisions aren’t biased against, for instance, women or minority borrowers. Similarly, credit agencies now rely on machine learning to determine credit scores. These agencies face increasing pressure to explain the logic behind their algorithms.

The European Union’s General Data Protection Regulation (GDPR) requires all businesses that gather personal data to explain how their automated systems make decisions. The EU has proposed additional regulations that would set stricter standards for transparency, accuracy, and responsible use, standards that only explainable AI can satisfy.

Singapore has also been a leader in this field. In 2018, the Monetary Authority of Singapore released its FEAT framework—an acronym for fairness, ethics, accountability, and transparency—as a blueprint for how financial services companies should manage AI and data analytics.

Over time, most AI models deliver increasingly precise results while making it harder for humans to understand how they arrived at them. Control over a model is rarely distributed evenly among its stakeholders, and its performance shifts depending on how it is used and by whom.

In a multi-stakeholder value chain, who’s in charge of XAI? The systems designers focused on reducing bias? The product team that guides overall development of the AI solution? A government watchdog? Or all three?

Achieving XAI involves answering such thorny questions. The AI Now Institute at New York University, a nonprofit organization that researches the social impact of AI, has urged public agencies responsible for high-stakes decisions to ban so-called black box AIs whose decisions can’t be explained.

In a June report, the U.S. National Institute of Standards and Technology (NIST) included explainability in a list of nine factors needed to measure trust in AI systems. According to the NIST framework, AI systems should be able to provide reasons for their outputs that are understandable to users and that correctly reflect how the decisions were reached.

Developers are also working to design AI systems with explainability built in.

More than a decade ago, Duke University computer science and engineering professor Cynthia Rudin worked with Con Edison, an electric utility serving the New York City region, to build an algorithm that predicted which manholes were at risk of exploding. Rudin developed explainability features for the algorithm that helped improve the accuracy of its predictions.

“These systems are much easier to troubleshoot when you understand the reasoning process,” Rudin says.

Aperio Consulting Group, a company that explores the intersection between behavioral science and AI, developed an AI app that analyzes psychological factors in entrepreneurs. The target audience: Investors who want to know how to guide startup founders toward success. Explainability was designed into the tool from the start, says Bryce Murray, Aperio’s director of technology and data science.

For example, a black-box AI tool might give a single score of an entrepreneur’s potential. Aperio’s explainable system provides a more detailed explanation of the factors that influence the score—such as a low level of perseverance—making it possible to outline more targeted coaching guidance to help the entrepreneur “tune up” that quality.
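
To make that contrast concrete, here is a minimal sketch of the idea, assuming a simple additive scoring model with made-up factor names and weights; it is illustrative only and does not describe Aperio’s actual system.

```python
# Hypothetical sketch of an additive scoring model that returns per-factor
# contributions alongside its overall score. Factor names, weights, and
# values are invented for illustration; this is not Aperio's actual model.

FACTOR_WEIGHTS = {
    "perseverance": 0.40,
    "openness_to_feedback": 0.35,
    "risk_tolerance": 0.25,
}

def score_with_explanation(factors):
    """Return the overall score plus each factor's contribution to it."""
    contributions = {
        name: FACTOR_WEIGHTS[name] * value
        for name, value in factors.items()
    }
    total = sum(contributions.values())
    # Rank factors from weakest to strongest contribution so a coach can
    # see which quality (e.g., low perseverance) to target first.
    ranked = sorted(contributions, key=contributions.get)
    return total, contributions, ranked

founder = {"perseverance": 0.3, "openness_to_feedback": 0.8, "risk_tolerance": 0.7}
total, contributions, ranked = score_with_explanation(founder)

print(f"score: {total:.2f}")            # the single number a black box stops at
print(contributions)                    # the per-factor breakdown an XAI tool adds
print(f"coaching focus: {ranked[0]}")   # 'perseverance'
```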

“We’re working with people data, so if we were using unexplainable AI, it wouldn’t work,” Murray says. “We have to explain beyond just throwing lots of statistics and numbers out there.”

Despite the recent progress, explainability is still part of a “continuous improvement journey,” says Grace Abuhamad, who heads up trustworthy AI research at ServiceNow.

“Explainability is still an area of active research,” she says. “There are a lot of companies that are trying to do work in that area, and a lot of startups are launching themselves as experts in this space, but overall it’s not a solved question.”

Author

Janet Rae-Dupree has been covering innovation for more than two decades, specializing in writing about emerging technologies and the science of technology. She has been on staff at U.S. News & World Report, BusinessWeek, the L.A. Times, the San Jose Mercury News, the Silicon Valley Business Journal and a number of smaller publications.