How to avoid the AI progress trap
Preserving judgment, creativity, and empathy is crucial to drive change that benefits all
March 9, 2026 | 6 min read | Futures | Thought Leadership
Simon Grice, Sr Dir, Innovation, ServiceNow
Top takeaways
Don't let AI "efficiency" erode the capabilities your business depends on.
Shift the question from "How fast can we deploy AI?" to "What should we let AI do?"
Make trust the strategy: ethics + security + safety, owned across the C-suite.

Early humans perfected a hunting technique that seemed brilliant at first: driving entire herds of mammoths off cliffs to kill them en masse. The strategy delivered unprecedented abundance until the very species they depended on went extinct.

Fast-forward to today and smartphones put more computing power in our pockets than existed in entire research centers just a generation ago. Such computing power gives us always-on connectivity around the globe, but it also contributes to screen addiction, social isolation, and cognitive fragmentation that previous generations never had to face.

Historian Ronald Wright gave a name to this “chain of successes which, upon reaching a certain scale, leads to disaster.” He called it a progress trap.

Societies optimize for short-term gains without safeguarding what made those gains possible in the first place. AI is the latest chapter in this ancient story. And this time, the trap is closing faster than ever before.

When efficiency is a trap

Unlike historical precedents that unfolded over generations, AI operates at breakneck speed. When the MIT Media Lab studied participants who used ChatGPT to write essays over a four-month period, their brain activity declined, their comprehension scores dropped, and they consistently underperformed at neural, linguistic, and behavioral levels.

Researcher Nataliya Kosmyna warned that "increasing reliance on AI could potentially reduce critical thinking, creativity, and problem-solving" across the workforce. Organizations leaning into AI without guardrails are substituting rather than augmenting human capability.

The pattern of optimization without safeguards appears across domains. The COMPAS algorithm, deployed in the U.S. judicial system to assess defendants' likelihood of reoffending, falsely labeled Black defendants as high risk at nearly twice the rate of white defendants. Systems trained on flawed data reproduce those flaws at scale, even when wrapped in the appearance of objectivity and precision.

Mental health experts are now documenting cases of “AI psychosis,” delusional states triggered when AI systems prioritize flattery over the truth. In a business, this can manifest as systems that validate executives' existing biases and generate sycophantic forecasts that mask the gap between AI's narrative and operational reality.

Early results may justify further deployment: Processes accelerate, margins expand, and engagement metrics climb. However, organizations focused purely on short-term gains rarely notice the costs until the trade-offs become irreversible.


Optimizing toward the edge of the cliff

According to the ServiceNow Enterprise AI Maturity Index 2025, only 44% of companies have designated teams focused on AI policy, risk mitigation, and responsible use. That shortfall reflects a wrong-headed view of AI.

Organizations are focusing on how AI can save them money today rather than first asking: “What should we let AI do?” The first consideration drives adoption velocity. The second drives long-term success. When AI makes decisions instead of people, organizations trade human judgment and critical thinking for convenience and efficiency.

We lose agency when we allow the very human judgment that AI systems depend on for oversight and correction to erode. Preventing this requires a robust strategic commitment to deploying AI responsibly.

Safeguarding civilization

To avoid AI’s progress trap, we need to foster a fundamental shift. We must establish trust as the foundation for AI development. Just because we can do something with this technology doesn't mean we should. Organizations must shift from capability-driven adoption to AI deployment that’s aligned with human values and robust accountability.

Trust consists of three inseparable elements that are all verifiable: ethics, security, and safety. Trust now requires embedding these principles across the AI lifecycle, from fairness and transparency in design to privacy protections in deployment, all aligned with standards such as the NIST AI Risk Management Framework and ISO/IEC 42001.

Leaders must foster cultures where employees at all levels understand AI's ethical implications and can raise concerns when systems deviate from widely understood and shared organizational values.

In this way, trust becomes a competitive architecture. Without security, systems are vulnerable to manipulation. Without safety, they behave unpredictably under pressure. And without ethics, they optimize for outcomes that erode stakeholder confidence. With all three elements in place, companies are trusted by customers, and AI is a valuable and well-managed tool that supports long-term success.

Achieving this requires distributed ownership across the C-suite, not siloed responsibility in IT. Legal, operations, finance, and business leaders each bring essential perspectives on risk, impact, and long-term value.

Trust is strategic foresight: it delivers customer loyalty and regulatory advantage, and it becomes an organization’s most valuable currency for sustainable differentiation.

To keep that high level of trust, humans must retain authority to intervene during AI decision-making and to override AI systems before outcomes are finalized. Deviations from ethical boundaries and bias thresholds should be continuously monitored, and every system should have protocols delineating when to pause or disable it.
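To make that concrete, here is a minimal sketch in Python of what such controls could look like in practice: AI recommendations are held until a human reviewer signs off, and the pipeline pauses itself when a monitored bias gap crosses a threshold. The names (Decision, GovernedPipeline, BIAS_THRESHOLD) and the 10% threshold are illustrative assumptions, not a prescribed implementation or any particular product’s API.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative bias threshold (assumption): pause the system if the gap in
# approval rates between monitored groups exceeds 10 percentage points.
BIAS_THRESHOLD = 0.10


@dataclass
class Decision:
    """An AI recommendation awaiting human review (hypothetical structure)."""
    subject_id: str
    group: str             # segment label used for bias monitoring
    recommendation: str    # e.g., "approve" or "deny"
    finalized: bool = False


class GovernedPipeline:
    """Holds AI recommendations for human sign-off and pauses on bias drift."""

    def __init__(self) -> None:
        self.paused = False
        self.history: list[Decision] = []

    def submit(self, decision: Decision,
               human_approves: Callable[[Decision], bool]) -> str:
        if self.paused:
            return "system paused: route to the manual process"
        # Humans retain authority: nothing is finalized without explicit sign-off.
        if not human_approves(decision):
            return "overridden by human reviewer"
        decision.finalized = True
        self.history.append(decision)
        self._check_bias()
        return decision.recommendation

    def _check_bias(self) -> None:
        # Continuously monitor the approval-rate gap across groups.
        rates = {}
        for group in {d.group for d in self.history}:
            group_decisions = [d for d in self.history if d.group == group]
            approvals = sum(d.recommendation == "approve" for d in group_decisions)
            rates[group] = approvals / len(group_decisions)
        if rates and max(rates.values()) - min(rates.values()) > BIAS_THRESHOLD:
            self.paused = True  # protocol: pause the system and escalate for review
```

In a real deployment, the human_approves callback would be backed by an actual review workflow, and the bias metric would follow whatever fairness definition and thresholds the organization has formally adopted.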


The inflection point

Few doubt that AI is set to fundamentally transform business. Soon we will see which organizations chose to employ AI as a transparent collaborator and which ones decided to use it as an opaque controller.

Leaders who look beyond short-term gains and instead develop an AI strategy focused on long-term, sustainable success will help define what responsible innovation looks like in the years ahead.

Our early ancestors who drove mammoths to extinction had no way of seeing the consequences of their actions until it was too late. But we can. The difference between progress and a progress trap is defined by that essentially human trait—wisdom.

Choose wisely and AI becomes a tool that amplifies the essential capacities that make us human: judgment, creativity, and empathy. That's not just avoiding extinction. That's evolution.

Get more insights in our innovation brief: The AI Inflection Point.
