The imitation game

Computers have a lot to learn from children

By Richard McGill Murphy

Most people are familiar with the Turing Test, proposed by the British computing pioneer Alan Turing as a way to measure artificial intelligence. The idea is that if you interrogate a computer and can’t tell if the answers are coming from a person or a machine, then the computer has won the “imitation game” and can be said to possess intelligence.

Despite the recent furor over a Google chatbot that phoned a hair salon and booked an appointment without revealing that it wasn’t a carbon-based life form, computers generally struggle to imitate people convincingly. The imitation game, on the other hand, comes naturally to children, who have always learned by copying their elders and peers.

Turing’s game points to a world where people and computers develop by mirroring each other. This game ultimately has no winner because it yields a synthesis of human and machine intelligence.

This is in fact the world we live in: one where children grow up learning via technology that in turn learns to meet defined goals (winning a chess game, diagnosing an ailment, resolving a customer complaint) without being explicitly programmed. In short, human and machine learning are distinct yet inextricable from each other.



In the 1950 paper where he proposed the Turing Test, Turing suggested that the key to machine intelligence was to build a machine that thought like a child, not an adult. He also argued that his hypothetical machine could benefit from acting randomly, at least some of the time.

Turing was on to something there. Computers excel at drawing inferences from structured hypotheses, but they aren’t designed to act randomly. Children, by contrast, learn by testing hypotheses that can strike adults as irrational, if not crazy; try playing “pretend” with a three-year-old if you want to remember just how wacky the process can get. Yet there’s method in their madness. Unlike computers, which interpret the data sets that humans feed them, children instinctively explore the world around them and extract the data they need to learn essential skills like knowing what to eat and whom to trust.

“Babies systematically look longest at the events around them that are most likely to be informative, and they play with objects in a way that will teach them the most,” writes Berkeley psychologist Alison Gopnik.

When small children explore the world in this way, they are tapping the same tension between rationality and randomness that we see in pretend games. You could also say that kids excel at extracting signal from noise. And they have remarkably acute BS detectors. According to Gopnik, three-year-olds can read subtle cues to determine whether grown-ups are just saying what they think or are intentionally trying to be instructive. “Even the most sophisticated computers have yet to master the ability to roll their eyes at adult fatuity,” she adds.

So how should we prepare kids for a world where they will need to work productively with intelligent machines? Much of the contemporary debate around AI reflects fears that computers threaten human jobs. This has been a recurring theme in the history of automation. In 1930, for example, the British economist John Maynard Keynes predicted that advances in mechanized production would cause a wave of “technological unemployment.”

This theme returned in the automation debates of the early 1960s, when U.S. pundits and policymakers worried that factory robots and electronic data processing systems would put millions of American workers on the bread line. The same concerns echo today in predictions that AI will make most human workers redundant and lead to a social crisis that will need to be resolved by redistributive measures like a universal basic income and/or a robot tax.

Automation anxiety is a recurring historical theme because each new wave of automation causes short‑term disruption and pain in labor markets. In the long run, automation has not reduced the volume of work for which humans are needed. In fact, automation tends to increase employment while it changes the nature of work and the skills that workers need to be successful. That’s because productivity increases spur demand for goods and services by reducing their cost. Companies create new jobs to meet this demand, often in new industries that grow up around emerging technologies.  

While the transition can be hard on individuals, society as a whole adapts. Starting in the late 19th century, for example, new factory jobs absorbed the farm labor displaced by mechanized agriculture. More recently, a host of new, digitally enabled occupations (social-media strategist, Uber driver, Bitcoin miner) have replaced much of the labor demand lost when U.S. manufacturing employment plummeted in the late 20th century due to factory automation and offshoring.

From a business perspective there’s little point in raging against the machine. Artificial intelligence will transform work whether we like it or not, because the operational and strategic benefits of AI are simply too compelling to pass up. Companies that don’t leverage AI will be outcompeted by companies that do.

Nor should we teach kids to race against the machine. If we train our children to compete with computers, we’re setting them up for failure. There’s usually one outcome when humans try to beat machines at their own game, like John Henry racing the steam drill in the old folk song. Metaphorically or literally, the human dies with his hammer in his hand.

Instead, we need to prepare kids for a division of labor in which computers handle the routine, repetitive tasks so that people can focus on what they do best: asking smart questions, synthesizing information from different domains, applying ethical standards, setting goals and inspiring their colleagues.

Richard McGill Murphy is the editor in chief of Workflow. A journalist and social anthropologist by background, he runs a research and publishing program at ServiceNow that studies how emerging technologies are shaping the future of work.
