Charles Benedi1

The Strategic Shift to Intelligence-First Architecture

As we move through 2026, the conversation surrounding Artificial Intelligence in the Australian Federal Government has matured, shifting from "What can it do?" to "How does it work within my governance framework?"

We have moved past the initial "wow factor" of Large Language Models (LLMs) and into a phase of rigorous architectural scrutiny. For Australian Federal Government departments and agencies governed by security policy frameworks such as the Information Security Manual (ISM), the Protective Security Policy Framework (PSPF) and various privacy mandates, the leap from Predictive AI to Agentic AI represents a fundamental shift in how we deliver citizen services.


Understanding the Spectrum: From Assist to Agent

In my recent mentoring sessions with architects and CTAs, we discussed the ServiceNow AI roadmap as a hierarchy of capability within the wider AI progression. At the base is Now Assist, which serves as a "co-pilot." It excels at productivity boosts: summarising long Incident or Case histories, generating knowledge articles from closed tickets, and providing code suggestions for developers.


While Now Assist provides immediate productivity gains through summarisation and content generation, Agentic AI is the true game-changer. Unlike traditional chatbots that follow a rigid, branching logic, Agentic AI is goal-oriented. It can understand a complex user intent—such as "I need to onboard a new contractor with NV1 clearance by next Monday"—and autonomously decompose that goal into sub-tasks. It can check identity via an integration, trigger a background check workflow, and provision hardware, all while navigating the platform’s business rules without manual scripting for every edge case.
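To make the goal-oriented behaviour concrete, here is a minimal sketch of how an agent might decompose an intent into ordered sub-tasks. The task names, the `decompose` mapping, and the `run_agent` loop are illustrative assumptions, not a real ServiceNow API; in practice each step would invoke a platform workflow or integration.

```python
from dataclasses import dataclass

@dataclass
class SubTask:
    name: str
    done: bool = False

def decompose(goal: str) -> list[SubTask]:
    """Map a recognised intent to the sub-tasks an agent would execute.
    A hypothetical mapping: a real agent would plan dynamically."""
    if "onboard" in goal.lower() and "contractor" in goal.lower():
        return [
            SubTask("verify_identity"),           # e.g. via an identity integration
            SubTask("trigger_background_check"),  # e.g. the NV1 clearance workflow
            SubTask("provision_hardware"),
        ]
    return []  # unrecognised intent: no autonomous action

def run_agent(goal: str) -> list[str]:
    """Execute the plan in order; each step is a placeholder for a workflow call."""
    plan = decompose(goal)
    for task in plan:
        task.done = True  # stand-in for the actual workflow execution
    return [t.name for t in plan if t.done]

completed = run_agent("I need to onboard a new contractor with NV1 clearance")
```

The key design point is that the agent owns the decomposition and sequencing, rather than a designer hand-scripting every branch of a conversation tree.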

 

The CMA’s Dilemma: Governance and Guardrails

From a CMA's perspective, the challenge is Architectural Governance. In the government sector, this is the area where we must provide sound, convincing guidance to our customers. We must address three critical pillars:

  1. Data Sovereignty and LLM Selection: Federal departments and agencies require certainty that their data isn't training public models. We focus on ServiceNow’s "Domain Specific" LLMs, which are hosted within the ServiceNow infrastructure, ensuring that sensitive data remains within the agreed residency boundaries. However, when it comes to external LLMs, how do we ensure data sovereignty still meets the security frameworks (ISM, PSPF)?
  2. Accuracy and Hallucination Management: In a government context, providing the wrong information to a citizen can have legal consequences. We implement "Grounding" techniques, ensuring the AI only answers based on the agency’s verified Knowledge Management (KM) databases.
  3. The "Human-in-the-Loop" Design: We architect workflows where the Agentic AI performs the heavy lifting, but a human officer provides the final "Authorise" step for sensitive decisions. This maintains accountability while achieving massive efficiency gains.
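The grounding pillar above can be sketched as a simple retrieval-or-refuse check: the assistant answers only when the response can be traced to a verified knowledge article, and escalates otherwise. The `VERIFIED_KB` contents and matching logic are simplified stand-ins for illustration, not ServiceNow's actual retrieval pipeline.

```python
# Illustrative verified knowledge base (contents are placeholder examples).
VERIFIED_KB = {
    "passport renewal": "Renew online via the agency portal.",
}

def grounded_answer(question: str) -> str:
    """Answer only when the response is grounded in a verified article;
    otherwise refuse and escalate rather than risk a hallucination."""
    for topic, article in VERIFIED_KB.items():
        if topic in question.lower():
            return f"{article} (source: verified KB article '{topic}')"
    return "No verified knowledge article covers this; escalating to an officer."
```

The deliberate choice here is asymmetric failure: a refusal costs a handoff to a human officer, while an ungrounded answer to a citizen can carry legal consequences.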


 

Architectural Considerations for the CMA

For a Lead Architect, the challenge isn't just turning on a plugin. It’s about:

  • Guardrails and Governance: Ensuring that LLMs operate within the specific context of agency policy.
  • Data Integrity: AI is only as good as the CMDB and Knowledge Base it feeds on.
  • Human-in-the-Loop: Designing workflows where the AI "Agent" performs the heavy lifting, but a human officer provides the final validation for sensitive government decisions.
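The human-in-the-loop pattern in the list above reduces to a small authorisation gate: the agent prepares a sensitive action, but nothing executes without an officer's explicit approval. The `ProposedAction` shape and approval flag are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    sensitive: bool
    authorised: bool = False

def execute(action: ProposedAction, officer_approves: bool) -> str:
    """Gate sensitive actions behind an explicit human 'Authorise' step."""
    if action.sensitive and not officer_approves:
        return "blocked: awaiting human authorisation"
    action.authorised = True  # accountability rests with the approving officer
    return f"executed: {action.description}"
```

This keeps the efficiency gains (the agent does the preparation) while the accountable decision remains with a named officer.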

By embracing an "Intelligence-First" strategy, we aren't just automating tasks; we are redesigning the government service delivery model to be proactive, predictive, and—most importantly—compliant.
