Is your AI fair?

4 ways to mitigate machine bias at work

By Alex Salkever

Companies are using facial recognition tools in a range of applications, from unlocking phones to verifying customers at ATMs. Yet even as the technology goes mainstream, it’s becoming clear that AIs don’t always treat faces equally.

In early 2018, researchers at the MIT Media Lab published a study showing that AI-based facial ID systems from IBM and Microsoft had dramatically higher error rates for non-white faces. The systems misidentified one-third of darker-skinned female faces, compared to error rates of less than 1% for lighter-skinned males. The disparities in accuracy, the authors noted, “require urgent attention if commercial companies are to build genuinely fair, transparent and accountable facial analysis algorithms.”
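
The check behind such findings is a disaggregated evaluation: measure error rates per group rather than in aggregate. The sketch below is a minimal illustration with made-up predictions and group labels, not the MIT team's methodology.

```python
# Minimal sketch of a disaggregated evaluation: compute the error rate
# for each demographic group instead of one aggregate accuracy figure.
# All data below are illustrative placeholders.
import numpy as np

def error_rate_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each group label."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        str(g): float(np.mean(y_true[groups == g] != y_pred[groups == g]))
        for g in np.unique(groups)
    }

# Toy example: 80% aggregate accuracy hides a 50% error rate for group "b".
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 1, 1, 0, 0, 1, 1, 1, 0]
groups = ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b"]
print(error_rate_by_group(y_true, y_pred, groups))  # {'a': 0.0, 'b': 0.5}
```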

Machine bias isn’t just a challenge for nascent biometric technologies. It’s one that thousands of companies seeking to leverage AI to automate business processes must manage proactively. Bias can creep into a wide range of computational tasks, including sentiment analysis, word associations, and medical diagnosis. Here’s a look at what machine bias is and how to monitor for it.

Know the basics of bias

Machine bias occurs when an algorithm produces results that are systematically biased against a group that shares a common characteristic such as gender or skin color. The problem dates back to the early days of computerized decision‑making. In 1996, for instance, a Freddie Mac study found that its credit‑scoring algorithms discriminated against Hispanics and African Americans on mortgage applications.
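
One widely used screen for this kind of systematic skew is the disparate-impact ratio borrowed from US employment law: if one group's favorable-outcome rate falls below four-fifths of another's, the result warrants scrutiny. The sketch below applies it to invented approval data; the threshold and data are illustrative, not a legal test.

```python
# Hedged sketch of a disparate-impact check on two groups' outcomes,
# where 1 means a favorable decision (e.g., a loan approval).
def disparate_impact(outcomes_a, outcomes_b):
    """Ratio of favorable-outcome rates between two groups (lower/higher)."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

approvals_group_a = [1, 1, 1, 0, 1, 1, 1, 1]  # 87.5% approved
approvals_group_b = [1, 0, 0, 1, 0, 1, 0, 0]  # 37.5% approved
ratio = disparate_impact(approvals_group_a, approvals_group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.43, below the 0.8 rule of thumb
```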

The rise of automation in business increases the risks of machine bias, not just because the technologies are new, but because the processes are more complex and opaque than those used in Freddie Mac’s old FICO models. Left unchecked, machine bias can result in everything from bad press and missed sales opportunities to class-action lawsuits.

Companies investing in AI “need to understand and monitor for machine bias so that you’re not going to be embarrassed when your company has been found to be—intentionally or unintentionally—biased,” says Rich Caruana, a senior researcher at Microsoft Research. He adds that machine bias can make companies overlook entire customer segments.

Monitor for risk

Automated algorithms already affect significant aspects of our lives, from our credit scores, to the ads we see, to our medical diagnoses. Algorithms factor heavily into critical decisions such as whether a prisoner gets parole, or whether parents who have already been reported to child protective services can retain custody of their children. Algorithms powered by machine learning have made some horrific blunders. In 2015, for example, the image recognition engine inside Google Photos identified dark‑skinned people as gorillas.

According to Cathy O’Neil, author of the book “Weapons of Math Destruction,” the myriad business processes now in the hands of automation are too often taken at face value and are assumed to be fair and objective simply because they’re mathematical. She calls this “the authority of the inscrutable.”

As automation expands its footprint across organizations, machines have more opportunities to inject bias. Which sales prospect will receive a higher priority in an automated marketing system? Which ethnic groups will be targeted for ads on social media? Which recruiting prospects will algorithms favor over the recommendation of hiring managers?

Identify root causes

Modern deep-learning systems produce models so complex that even the computer scientists who build them can’t fully explain their behavior. Consider “Move 37,” the unexpected but game-winning move that Google’s AlphaGo program made against South Korean Go champion Lee Sedol in 2016. AlphaGo’s creators had never seen the move before. Neither had the world’s greatest Go players. Nobody could even explain it.

In such situations, humans can only test the outputs to ensure they’re accurate or effective. (For more information, check out our story about algorithmic auditing, a nascent discipline that aims to make the computational calculus of AI more accountable.)

Sometimes machine bias is injected into decision processes through insufficient or imbalanced data used to train the AI systems. In the example of facial ID failures, part of the problem was that the machine learning systems trained on fewer darker‑skinned faces, reducing the accuracy of the algorithm.
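
One common, if partial, remedy is to rebalance the training set. The sketch below oversamples the underrepresented group in a hypothetical face dataset so the model trains on roughly equal numbers of examples per group; real pipelines might instead collect more data or use library support such as scikit-learn's class_weight option.

```python
# Minimal sketch of oversampling to balance group representation in
# training data. Dataset fields and group names are hypothetical.
import random

def oversample_to_balance(examples, group_key):
    """Duplicate minority-group examples until all groups are the same size."""
    by_group = {}
    for ex in examples:
        by_group.setdefault(ex[group_key], []).append(ex)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        balanced.extend(random.choices(members, k=target - len(members)))
    random.shuffle(balanced)
    return balanced

faces = ([{"group": "lighter", "img_id": i} for i in range(900)]
         + [{"group": "darker", "img_id": i} for i in range(100)])
balanced = oversample_to_balance(faces, "group")
print(sum(1 for ex in balanced if ex["group"] == "darker"))  # 900
```

Naive duplication can overfit the model to the repeated examples, so this is a starting point rather than a cure.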

University of Michigan computer scientist H.V. Jagadish has studied machine-bias scenarios in the hiring realm. He found that algorithms used to recommend promotions often prioritize uninterrupted full-time employment. As a result, women who take time off to care for children or elders may be penalized even if the algorithm doesn’t explicitly include gender in its decision criteria.
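
A quick way to surface this kind of indirect penalty is to measure how strongly each input feature correlates with the protected attribute the model supposedly ignores. The sketch below does so with invented hiring data; the feature names and values are hypothetical.

```python
# Sketch of a proxy-variable check: correlate each feature with a
# protected attribute to flag features that can stand in for it.
import numpy as np

def proxy_correlations(features, protected):
    """Pearson correlation of each feature column with the protected attribute."""
    protected = np.asarray(protected, dtype=float)
    return {
        name: float(np.corrcoef(np.asarray(col, dtype=float), protected)[0, 1])
        for name, col in features.items()
    }

features = {
    "years_uninterrupted": [10, 9, 8, 2, 3, 1, 9, 2],  # career gaps show up here
    "certifications":      [3, 1, 2, 3, 1, 2, 2, 3],   # unrelated credential count
}
took_leave = [0, 0, 0, 1, 1, 1, 0, 1]  # 1 = took caregiving leave
print(proxy_correlations(features, took_leave))
# years_uninterrupted correlates strongly (negatively), acting as a proxy
```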

Data handling and selection problems can also inject machine bias. Data labeled by hand may capture latent or overt biases. For example, two image libraries labeled by Facebook and Microsoft employees identified women with cooking and cleaning, according to a 2017 study by researchers at the University of Virginia. Machine learning models then amplified the bias when they were trained on this data.



Always question the data

The most likely source of bias is problematic data. CIOs looking to strip out bias should focus on weaknesses in their data sources. These can include data that have been mislabeled, data that don’t sufficiently represent the diversity of potentially affected parties, or data types such as zip code or job tenure that may seem objective on their face but are actually vectors for bias.
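
In practice, questioning the data can start with a short audit script that reports each group’s share of the dataset and the label distribution within it, before any model is trained. The sketch below uses hypothetical loan records; the field names are made up.

```python
# Minimal sketch of a pre-training data audit: group counts and
# positive-label rates for a list of dict-style records.
from collections import Counter

def audit_representation(rows, group_field, label_field):
    """Print each group's share of the data and its positive-label rate."""
    counts = Counter(row[group_field] for row in rows)
    for group, n in counts.items():
        positives = sum(1 for row in rows
                        if row[group_field] == group and row[label_field] == 1)
        print(f"{group}: {n} rows ({n / len(rows):.0%}), "
              f"positive-label rate {positives / n:.0%}")

rows = [
    {"zip_region": "urban", "approved": 1}, {"zip_region": "urban", "approved": 1},
    {"zip_region": "urban", "approved": 1}, {"zip_region": "urban", "approved": 0},
    {"zip_region": "rural", "approved": 0}, {"zip_region": "rural", "approved": 1},
]
audit_representation(rows, "zip_region", "approved")
# urban: 4 rows (67%), positive-label rate 75%
# rural: 2 rows (33%), positive-label rate 50%
```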

A range of organizations, including Microsoft, McKinsey, and the OpenAI Initiative, have mobilized to fight machine bias. Researchers are studying novel approaches to reducing bias, such as “shadow algorithms” that mimic what primary algorithms are doing and allow engineers to swap variables and compare results.
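
One way to approximate the swap-and-compare idea without a full shadow system is a counterfactual flip test: score each record twice, once with the protected attribute flipped, and count how often the decision changes. The sketch below uses a deliberately biased stand-in model; in practice you would call your production scorer.

```python
# Hedged sketch of a counterfactual flip test over a binary attribute.
def counterfactual_flip_rate(model, records, attr, values=(0, 1)):
    """Fraction of records whose prediction changes when `attr` is flipped."""
    changed = 0
    for record in records:
        flipped = dict(record)
        flipped[attr] = values[1] if record[attr] == values[0] else values[0]
        if model(record) != model(flipped):
            changed += 1
    return changed / len(records)

# Stand-in model that (problematically) keys off the attribute itself.
model = lambda r: int(r["score"] > 600 and r["group"] == 0)
records = ([{"score": 650, "group": g} for g in (0, 0, 1, 1)]
           + [{"score": 550, "group": g} for g in (0, 1)])
print(f"{counterfactual_flip_rate(model, records, 'group'):.2f}")  # 0.67
```

A nonzero flip rate means the attribute (or something standing in for it) is driving decisions; a zero rate does not prove fairness, since proxy variables can survive the flip.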

However, it’s not currently possible to eliminate all machine bias. And to some degree, bias is in the eye of the beholder. What appears biased to some observers may strike others as innocuous market segmentation.

“Some companies may want to bias algorithms towards selling certain things to certain types of people because those people are the most obvious customers,” says Michael Skirpan, founder of an algorithmic auditing consultancy called Probable Models. “A company trying to sell skateboards may want to avoid marketing to people in wheelchairs and that would likely not be viewed as discriminatory.”

That said, machine bias detection remains a nascent discipline that lacks standardized methods and solutions. Managing bias requires vigilance, which means asking lots of questions and not trusting the black boxes embedded in your processes and workflows.

Alex Salkever is the co-author, with Vivek Wadhwa, of “The Driver in the Driverless Car: How Our Technology Choices Can Change the Future.” He writes frequently about disruptive business technologies and artificial intelligence.
