Businesses will soon face new limits on how they use applications and services powered by artificial intelligence. They can thank European Union regulators, who will require them, starting in May, to justify AI-driven decisions that affect European residents.
The EU’s General Data Protection Regulation takes effect May 25, 2018. GDPR mandates aggressive new standards for consumer rights regarding the use of their personal data. It’s the first data privacy update from the EU in more than two decades.
The regulation also includes an implied right to an explanation when an EU resident is subject to an “automated” decision. This right, and uncertainty over its implementation, could change how businesses embrace AI, says Nick Wallace, senior policy analyst in Brussels for the Information Technology and Innovation Foundation.
GDPR’s language related to the right to an explanation remains vague; the rule is not explicitly spelled out in the document’s main text. Instead, GDPR gives residents a right to “meaningful information” about automated decisions that affect them. It also mandates a right to human intervention when contesting a decision. With “meaningful information” and other language in GDPR not clearly defined, the new regulations will likely be enforced and tested on a case‑by‑case basis.
This uncertainty leaves many companies in the dark about how to navigate this new challenge. Some analysts believe GDPR presents problems for current uses of machine learning, while others suggest the imprecise language in GDPR simply creates toothless regulations that can’t be enforced.
CIOs are unlikely to sit around waiting for the legal drama to play out. Regardless of how the EU rule is interpreted or enforced, it augurs closer regulatory scrutiny for AI‑based decision making. Smart organizations will start looking more closely for signs of bias in their results, as well as flaws in their algorithms and in the fidelity of their data.
Jana Eggers, CEO of Nara Logics, a startup that sells an AI‑powered recommendation engine, says companies should be developing cross‑functional processes “for evaluating algorithms and data used, for evaluating results on a regular basis to spot changes for review, and for responding to customer inquiries in a consistent, transparent and ethical manner.”
If GDPR ultimately requires only a basic explanation about an AI’s decision, most businesses may be able to comply, Wallace says. But if EU regulators require detailed explanations about the elements of a particular algorithm—along with frequent human interventions to second‑guess decisions—companies may rethink what they choose to automate, and at what cost.
GDPR also raises a thorny question: What is the purpose of automated decision‑making if it requires constant human oversight?
“The whole point in using AI is either it can do things more efficiently than humans or more accurately,” Wallace says. “If a human can just as easily replicate these decisions, why would you bother investing in AI research?”
Tradeoffs for process automation
Some studies suggest that regulatory requirements for AI transparency can hurt accuracy. “If you program an algorithm to be transparent, that puts a constraint on the complexity of the decisions that you make,” Wallace says.
For example, an AI might use deep learning techniques to interpret thousands of data points to estimate the probability that an individual will develop cancer. The tradeoff for that level of analysis is that the system may not be able to explain its prediction in a way that most humans can understand.
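To make the tradeoff concrete, here is a minimal sketch using scikit-learn and synthetic data: a logistic regression whose coefficients a human can read, side by side with a gradient-boosted ensemble that often scores higher but offers no single readable explanation per decision. The data, feature count, and models are illustrative assumptions, not any real diagnostic system.

```python
# Minimal sketch of the transparency/accuracy tradeoff using
# scikit-learn on synthetic data. Everything here is illustrative;
# no real diagnostic system is modeled.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "patient" data: 20 features, binary outcome.
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Transparent model: each coefficient is a readable weight on one feature.
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Opaque model: hundreds of trees, often more accurate, but with
# no single human-readable explanation for any one decision.
opaque = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", simple.score(X_test, y_test))
print("gradient boosting accuracy:  ", opaque.score(X_test, y_test))

# The transparent model's "explanation" is simply its weights.
top = sorted(zip(simple.coef_[0], range(X.shape[1])), reverse=True)[:3]
print("three most positive weights (weight, feature index):", top)
```

The gap between those two accuracy figures, and whether it justifies giving up readable weights, is exactly the judgment call Duarte describes below.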
Yet a tradeoff between transparency and accuracy isn’t always necessary, says Natasha Duarte, policy analyst at the Center for Democracy and Technology, an advocacy group. AI developers often face a choice between a simpler model that humans can interpret and a more complex, opaque one.
“Sometimes going with a simpler model means losing some accuracy, but not always—or not always a lot of accuracy,” Duarte says. “And sometimes, losing a little accuracy for more interpretability is a good thing.”
In some cases, a more transparent AI may allow researchers to spot flaws in an algorithmic decision. That’s what happened when an AI downplayed the risk that asthma patients would contract pneumonia.
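The kind of audit that catches such a flaw can be sketched simply, assuming a transparent model whose learned weights can be checked against domain knowledge. The feature names and coefficients below are invented to mirror the asthma example, not taken from any real clinical model.

```python
# Hypothetical audit of an interpretable model's learned weights.
# Feature names and coefficients are invented for illustration.
learned_weights = {
    "age": 0.8,       # model: older patients -> higher risk (plausible)
    "smoker": 0.6,    # model: smokers -> higher risk (plausible)
    "asthma": -0.5,   # model: asthma -> LOWER risk (suspicious)
}

# Clinicians' prior expectation for each factor's direction of effect.
expected_direction = {"age": +1, "smoker": +1, "asthma": +1}

for feature, weight in learned_weights.items():
    if weight * expected_direction[feature] < 0:
        print(f"FLAG: '{feature}' weight {weight} contradicts "
              f"domain knowledge; investigate before deployment.")
```

An opaque model can learn the same counterintuitive pattern; it is simply much harder to see.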
Some AI experts welcome a push for more transparency. While explaining decisions can be computationally expensive, it’s important for humans to understand and trust the results, says Eggers. She compares the right to an explanation for an AI decision to the right of medical patients to seek second opinions. “We often don’t trust an expert’s opinion, and we ask for more information,” she says. “We should be able to do that with machines.”
Companies should view the mushiness of GDPR language related to AI as a feature, not a bug. It will buy them time to evaluate potential risks before a new set of rules arrives—likely with sharper teeth.