April 6, 2026 | 4 min read

What AI practitioners can learn from ‘Jurassic Park’

AI, for all its capability, lacks the judgment, accountability, and empathy needed to make the highest-stakes decisions. It requires governance.

AI Thought Leadership
Lisa Lee, Writer, ServiceNow
Illustration of two dinosaurs hovering over two workers at a desk
Top takeaways:
- Treat AI risk as a governance problem, not a technology problem.
- Human in the loop isn’t sufficient; you need human at the helm.
- Require explainability and auditability before scaling AI.

In “Jurassic Park,” Dr. Ian Malcolm tells the park’s overzealous founder, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

We all know how that ended. The fences failed, the dinosaurs got out, and the park’s cutting-edge technology became completely unmanageable because nobody had built the infrastructure to contain it.

Change the dinosaurs to an AI algorithm making a hiring decision, denying a loan, or interpreting the results of a cancer screening, and Dr. Malcolm’s admonition stops being a movie line and becomes a cautionary tale.

AI, for all its capability and sophistication, doesn’t have the moral judgment, accountability, or empathy needed to independently make the highest-stakes decisions. These are the ones where being wrong has real consequences for real people.

Yet organizations are deploying it there anyway. AI has already been used in systems that influence loan and credit approvals, insurance claims, medical decisions, hiring decisions, and criminal justice outcomes.

In most cases, the people affected have no idea an algorithm was involved and face a murky path to challenge it when it’s wrong. That’s not a technology problem. That’s a governance problem. And it’s one many enterprises are ill-equipped to solve, as AI’s capabilities advance faster than organizations’ ability to put safeguards in place.

The ServiceNow Enterprise AI Maturity Index found that most enterprises don’t have effective guardrails in place to govern AI. In fact, only 44% report having a designated team that drafts AI policies, mitigates AI risks, and focuses on the responsible use of AI.


Why humans need to be more than “in the loop”

There’s a version of “human in the loop” that might sound reassuring but means almost nothing in practice. A reviewer who approves 200 AI-driven decisions a day without the time, context, or authority to push back isn’t oversight. It’s a rubber stamp.

Human accountability is completely different, and in high-stakes situations, the distinction is much more than semantic. It’s the difference between a system with a safety net and one that merely appears to have one.

A lot of that comes down to language—specifically, two terms that sound similar but describe very different relationships between people and the systems they’re supposed to be governing: human in the loop and human at the helm. The difference sounds subtle. It isn’t.

When you’re in the loop, AI does the heavy lifting and you’re a fact-checker at the end. AI generates an output—like an email, some code, or a customer service response—and the human reviews it, corrects it if necessary, and hits send.

This works well for many use cases but can be risky when humans become complacent and trust AI so much that they allow errors to get through.

When you’re at the helm, you define the strategy, set the constraints, and use AI to arrive at a specific outcome. For example, instead of simply instructing AI to write an email to a customer, you tell it your goal, describe the audience, and request various options to critique against your brand voice.

You are the director, steering AI through a multistep workflow that you designed and control.
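The at-the-helm pattern inverts that structure: the human's goals and constraints come first, and the model produces candidates inside them. Again a hedged sketch with hypothetical names, not a real API:

```python
# Hedged sketch of "human at the helm": the person sets the goal and
# constraints up front, asks for multiple options, and filters them
# against explicit, human-defined criteria. Illustrative names only.

def generate_options(goal: str, audience: str, n: int = 3) -> list[str]:
    """Stand-in for an AI producing candidate drafts under a brief."""
    return [f"Option {i + 1}: {goal} (for {audience})" for i in range(n)]

def meets_brand_voice(option: str, banned_words: set[str]) -> bool:
    """A constraint the human defined before any generation happened."""
    return not any(word in option.lower() for word in banned_words)

options = generate_options("explain the outage and offer a credit",
                           audience="enterprise customers")
acceptable = [o for o in options if meets_brand_voice(o, {"oops", "whoops"})]
# The human, not the model, makes the final selection from `acceptable`.
print(len(acceptable), "candidates pass the human-set constraints")
```

The design point is that the constraints exist before, and independently of, any single model output; the model never gets to define its own acceptance criteria.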

According to ServiceNow's Blueprint for Agentic Business, “Humans have intuition about boundaries. They know not to look at a colleague’s compensation data. They know to get approvals before making payroll changes. They watch deadlines, read context, and exercise judgment.”

AI agents, the blueprint notes, can amplify risk unless the platform supplies guardrails, including identity resolution, entitlements, workflow constraints, integration governance, audit evidence, and change management.

In high-stakes AI deployment, guardrails like these, and the explainability they make possible, are prerequisites for accountability.

Examples of human at the helm

It’s not that AI should never be used in high-stakes domains. It’s just that humans need to be in charge. Research from Arizona State University found that loan officers made more accurate and fair decisions when encouraged to evaluate applicants themselves instead of automatically accepting AI recommendations.

Another example is Google DeepMind’s g-AMIE model, which takes a patient’s history, asks questions, gathers symptoms, and documents details. It then generates a clinical note for a physician to analyze and use to recommend treatment.

In a blog post, Google researchers described what responsible AI looks like:

“Preserving conversational properties, AMIE can operate within guardrails, performing history-taking without providing individualized medical advice. This disentangles history-taking from decision-making, ensuring patient safety with the overseeing physician remaining accountable.”

Illustration of two large dinosaurs creeping up on two people doing work

Governance in practice

The instinct in many enterprises may be to treat AI governance as a compliance checkbox. Build a policy, form a committee, and add a disclaimer. But governance is more than a document you write once and file away. It’s a living, dynamic system that requires the same rigor and intentionality as building AI itself.

True AI governance in high-stakes domains requires three things working together:

  1. Human oversight: In any situation where an AI decision can materially affect someone’s health, financial security, employment, or freedom, a human needs to be at the helm as a genuine check. That means building workflows where human review is mandatory, visible, and documented.
  2. Explainability by default: If your organization can’t clearly articulate why an AI system made a specific decision, you’re not ready to deploy it in critical situations. The “black box” defense—“We don’t know exactly how the AI decided this”—is a liability waiting to materialize.
  3. Connected auditable infrastructure: Governance without visibility is not true governance. Enterprises need systems that log AI decisions in real time, flag anomalies, track outcomes, and make the entire decision trail accessible to the humans responsible for oversight. Most organizations fall short here, not because they lack good intentions, but because their systems were never built to support this level of transparency.
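The third requirement, an auditable decision trail, reduces to a simple discipline: every AI-assisted decision is recorded with its inputs, output, and the accountable reviewer. A minimal sketch, assuming a hypothetical credit-decision workflow (real systems would persist to durable, access-controlled storage):

```python
# Hedged sketch of auditable AI infrastructure: one append-only record
# per AI-assisted decision, capturing what was decided, from what
# inputs, and which human signed off. Names are illustrative.

import json
from datetime import datetime, timezone

def record_decision(model: str, inputs: dict, output: str,
                    reviewer: str, approved: bool) -> str:
    """Serialize one decision as a JSON audit-log line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,          # which system made the recommendation
        "inputs": inputs,        # what it saw
        "output": output,        # what it recommended
        "reviewer": reviewer,    # who is accountable
        "approved": approved,    # human oversight, documented
    }
    return json.dumps(entry)

log_line = record_decision("credit-model-v2", {"applicant_id": "A-001"},
                           "approve", reviewer="j.doe", approved=True)
print(log_line)
```

With records in this shape, flagging anomalies and reconstructing why a decision was made becomes a query over the log rather than an after-the-fact investigation.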

This is precisely where ServiceNow delivers a fundamentally different approach. AI Control Tower is the command center for all the AI running across your enterprise.

Instead of having AI scattered across different tools and departments with nobody really knowing what it’s doing or how well it’s working, AI Control Tower is one place to see it all—what’s in use, how it’s performing, and whether it’s helping make fair and accurate decisions.

It also handles the governance side of things. So if an AI model starts drifting, showing bias, or making decisions that don’t align with company policies, you can catch and correct it before it becomes a bigger problem. The fences, in other words, hold.

“Jurassic Park’s” Dr. Malcolm wasn’t arguing against innovation. He was arguing against capability without accountability. The enterprises that understand this will build the infrastructure to deploy AI responsibly, and win.

Find out how ServiceNow can help you put responsible AI to work for people.
