Dealing with AI backlash

Trust can be hard to repair when AI goes wrong

By Christopher Null

  • Trust in new technology erodes quickly when something goes wrong
  • Avoiding a backlash requires a sustained record of success and openness about problems and how you plan to address them
  • A majority of workers want their companies to be more transparent about how they plan to use AI

It’s been a rough few months for artificial intelligence. In March 2018, a pedestrian was killed by a self‑driving vehicle operated by Uber. A few days later, a Tesla traveling in semi‑autonomous Autopilot mode slammed into a concrete barrier, killing its driver.

A few weeks after that came a far more benign AI mishap, but one that left just as many people feeling unsettled. Google Assistant, Google’s voice‑enabled AI platform, showed off its new conversational features by phoning a restaurant to make a dinner reservation.

The assistant carried on a three‑minute conversation with the host and successfully booked a table. But it never identified itself as an AI assistant to the host, who believed he was talking to a person—a revelation that media outlets called creepy, crazy, and “scary as hell.”



It’s not just consumers feeling the impact of runaway AIs. Microsoft’s experimental Tay chatbot was designed to learn how to converse with people via Twitter. Microsoft was horrified to see Tay evolve into a misogynistic racist in less than 24 hours, a public embarrassment for the company.

Thousands of companies are offering new AI products and incorporating the technology into applications. But as these and other incidents show, promising but unpredictable technology can become an overnight pariah, leaving companies scrambling to deal with the backlash.

In the wake of the Uber and Tesla accidents, some manufacturers began pulling back on self‑driving programs, and several lawmakers quickly moved to ban autonomous vehicles outright. In response to the outcry over the public demo of the restaurant call, Google officials announced that future tests would start with a clear disclosure: “I’m the Google Assistant and I’m calling for a client.” Even so, the disclosure failed to quell public concern.

People have long worried about the threat of intelligent machines slipping out of our control. That presents an ongoing challenge for companies experimenting with AI technology. How do you build support for AI initiatives and products that are vulnerable to negative public sentiment? And how do you rebound from failures that are sure to come?

Successful technologies build and maintain trust, says Keng Siau, a professor at the Missouri University of Science and Technology. “AI is supposed to make our lives better,” Siau says, but that’s far from a predictable outcome. “Smarter companies are being proactive about this by developing trust.”

Trust is dynamic and not something that develops overnight, as Siau and coauthor Weiyu Wang argue in their March 2018 paper, Building Trust in Artificial Intelligence, Machine Learning, and Robotics.

Siau breaks down trust into two general types: initial trust and continuous trust. Creating initial trust is the easier challenge. A successful trial run of a new product or a simple friendly face can help establish it. That’s one reason robots fashioned after humans are so popular: it’s easier for people to “establish an emotional connection” with them, says Siau.

Longer term, companies must prove a product or service’s reliability and usability, showing that it’s not prone to downtime or accidents, that it can collaborate effectively with humans, and that it exhibits strong security.

In the case of driverless cars, trust remains a long‑term challenge, and it can be destroyed overnight. In January 2017, 78% of U.S. drivers said they would be afraid to ride in a fully autonomous car, according to an American Automobile Association survey. By December 2017, that number had fallen to 63%. Then came the Tesla and Uber incidents, which dented public confidence in autonomous vehicles. In the survey’s April 2018 update, conducted after the crashes, the figure had climbed back to 73%, erasing most of the previous year’s gains.

Rebuilding Bonds

When something bad happens, a backlash is inevitable—from customers, employees, the public, or all three. “Fixing the problem takes a long time and a lot of energy,” says Siau. “You have to assure the public that you will investigate, report on the issues you identified, and so on.”



Such a report might note how many hours of testing were undertaken, whether the issue was an isolated incident, and what’s being done to fix it. Tesla, for example, has updated the public twice about the accident, outlining in significant detail what went wrong (the highway barrier’s crash attenuator had been damaged in a prior collision and never replaced, making the impact far more severe) and citing its overall safety statistics while noting the impossibility of achieving a perfect safety record.

When problems occur, it’s also critical for companies to help people understand the underlying technologies. People naturally fear the unknown. “You need to demystify the process by which the product works,” Siau says.

For example, it isn’t good enough for an AI application to make a medical recommendation to a patient. Trust is built only when the patient understands why the application made the recommendation. “The key is to be even more transparent,” says Siau.

The same principle applies internally to building trust with employees, who may have lingering concerns not just about AI products but also about future competition for their jobs. “Companies should be candid about their automation plans and communicate with employees about retraining, redeployment, and continuous education possibilities,” says Siau.

The most recent workplace statistics suggest employers have their work cut out for them. Nearly 60% of organizations today have yet to discuss the potential impact of AI on their workforce with employees, according to a 2018 study by the Workforce Institute. And 61% of employees say they would like to see their companies be more transparent about future plans for AI.

No matter how a company’s plans for AI may evolve, the backlash potential suggests that when it comes to your workforce, any successful AI strategy will require a large dose of humanity.

Christopher Null is a longtime technology and business journalist who contributes regularly to TechHive, PCWorld, Wired, and other publications.
