Machine learning can produce powerful results. In 2016, AlphaGo, a system built by Alphabet’s DeepMind, became the first AI to defeat a world champion at the strategy game Go. It mastered the game by playing thousands of matches. A later version of the system became even more proficient by playing against itself.
However, the old principle “garbage in, garbage out” applies to machine‑learning algorithms, which are only as good as the data you
feed them. In a well‑known example, an MIT study found that
facial‑recognition software from IBM failed to identify the faces of
black women 35% of the time, compared to a failure rate of just 0.3%
for white men. The problem was the limited set of data used to train
the software. (IBM has since updated its database to be more diverse.)
Algorithmic
auditing uses a variety of techniques to test whether an AI
program has blind spots or other biases by looking for questionable
patterns in the decisions the software produces, such as sharply different outcomes for different demographic groups (a simple check of this kind is sketched below). While such audits can help identify bias and other flaws in the data used to train machine‑learning systems, they can’t explain how decisions are reached. Other approaches are needed. Ditto, which sells
explainable‑AI systems to healthcare, waste management, and other
companies, bases its tool on technology known as symbolic AI.
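To make the auditing idea concrete, here is a minimal sketch of a disparate-outcome check over a model’s past decisions; the records, group labels, and the 20-point threshold are all invented for illustration.

```python
# Toy audit of a model's decisions: compare approval rates across demographic
# groups and flag large gaps. The records, group labels, and the 20-point
# disparity threshold are invented for illustration.
from collections import defaultdict

decisions = [  # (group, approved) pairs the audited model is assumed to have produced
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def approval_rates(records):
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}
if gap > 0.20:
    # A questionable pattern: flag it for human review of the model and its training data.
    print(f"Possible bias: approval rates differ by {gap:.0%} across groups.")
```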
Symbolic AI dates back to the 1950s. It uses natural‑language
concepts to build large‑scale knowledge bases that map how different
terms relate to each other. In finance, for instance, symbolic AI
would recognize that “principal,” “interest,” “income,” and “default”
are all factors in making a loan decision.
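As a rough illustration (and not any vendor’s actual implementation), a symbolic knowledge base can be pictured as a set of labeled relations between concepts; the concept and relation names below are assumptions made up for this example.

```python
# Minimal sketch of a symbolic knowledge base: concepts are nodes and labeled
# relations record how terms connect. The concepts and relation names below
# are illustrative, not a real product's ontology.
knowledge_base = {
    ("principal", "is_factor_in", "loan_decision"),
    ("interest", "is_factor_in", "loan_decision"),
    ("income", "is_factor_in", "loan_decision"),
    ("default", "is_factor_in", "loan_decision"),
    ("income", "must_cover", "interest"),
}

def factors_in(decision):
    """Return every concept the knowledge base links to a given decision."""
    return {subj for subj, relation, obj in knowledge_base
            if relation == "is_factor_in" and obj == decision}

print(sorted(factors_in("loan_decision")))
# -> ['default', 'income', 'interest', 'principal']
```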
This ability can be tapped to explain the reasoning behind an
AI‑based decision. Analyzing a loan application, the system could
decide to reject the applicant and also tell the bank that it was
doing so because one concept (income) couldn’t support another concept
(interest payments).
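A decision rule built on such concepts can return its reason alongside its verdict. The following sketch assumes a toy affordability check; the field names, threshold, and wording are invented for illustration, not drawn from any particular product.

```python
# Sketch of a rule that produces its decision and its explanation together,
# so the system can always say why it rejected an application. The field
# names and the affordability rule are assumptions for illustration only.
def decide_loan(annual_income: float, annual_interest_due: float):
    if annual_income < annual_interest_due:
        reason = (f"Rejected: income of ${annual_income:,.0f} cannot cover "
                  f"interest payments of ${annual_interest_due:,.0f}.")
        return False, reason
    return True, "Approved: income covers the interest payments."

approved, reason = decide_loan(annual_income=30_000, annual_interest_due=42_000)
print(reason)
# -> Rejected: income of $30,000 cannot cover interest payments of $42,000.
```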
“I am not giving this person a loan because they do not have
sufficient income to cover the cash payments,” says Ryan Welsh, CEO of
San Mateo, Calif.–based Kyndi, another explainable‑AI firm. “That’s an explanation.”
Kyndi’s machine‑learning tool mines thousands of documents and automatically extracts their key concepts. It can then answer questions about its decisions in plain terms that people can understand.
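As a toy illustration (not Kyndi’s actual method), key concepts could be surfaced by keeping the most frequent meaningful terms across a document collection:

```python
# Toy illustration of mining documents for key concepts (not Kyndi's actual
# method): keep the most frequent meaningful terms across a small collection.
from collections import Counter
import re

documents = [
    "The applicant's income is too low to cover the interest payments.",
    "High default risk: interest exceeds income for this principal amount.",
    "Income growth would reduce the risk of default on the principal.",
]

STOPWORDS = {"the", "is", "to", "of", "for", "this", "on", "would", "too", "s"}

def key_concepts(docs, top_n=4):
    counts = Counter()
    for doc in docs:
        counts.update(w for w in re.findall(r"[a-z]+", doc.lower())
                      if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

print(key_concepts(documents))
# -> ['income', 'interest', 'default', 'risk']
```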