◆ VIBE-CODING ◆
Vibe-Coding a ServiceNow App: What Worked, What Didn’t
My experience using ServiceNow Build Agent and AI tooling (Zurich) to build the Recognition Card application — the good parts, the frustrations, and where I had to go old-school.
#ServiceNow · #BuildAgent · #Vibe-Coding
If you haven’t read the first post about the Recognition Card application, the short version: a scoped ServiceNow app that lets employees create and send collectible recognition cards to colleagues. Two catalog items, a Service Portal, a Script Include doing the heavy lifting, Flow Designer for fulfilment, and everything sitting on reused CMDB tables — no custom tables created.
The whole point of this build was to test something: how far can you get building a real, functional ServiceNow application using AI tooling and prompting alone? No manual scripting from the start — just describe what you want and iterate. That approach is what people are calling “vibe-coding”, and I wanted to find out where it actually gets you on the Now Platform.
01 — The Tool: Build Agent
The primary tool for this experiment was ServiceNow Build Agent — the AI-powered development assistant embedded in the Now Platform. Describe what you want to build and the agent generates application files, scripts, catalog items, flows, and configuration directly in your instance.
Build Agent relies on ServiceNow Fluent as its underlying framework. Fluent is the structured representation of Now Platform artifacts the agent reads, writes, and reasons about. The quality of what the agent produces is directly tied to how well Fluent represents the platform construct you’re working with. If Fluent covers it, the agent tends to do well. If coverage is thin, the agent guesses, produces something broken, or refuses to try.
02 — How the Build Went
Describe the application, let the agent scaffold it, review, prompt again to refine, repeat. For early scaffolding — app scope, initial structure, basic catalog variables — it moved fast. The overall experience was positive enough that I’d do it again. But I also hit a wall in several places, and those are worth being specific about.
03 — Build Agent: The Limitations
Custom tables instead of platform reuse
The first thing the agent suggested was a new custom table. No exploration of whether anything on the platform already served the purpose. I had to explicitly push back and guide it toward cmdb_model for card designs and alm_information_asset for assigned cards — both perfectly suited, both already there. It defaults to “create new” rather than “reuse what exists.” For someone less familiar with the platform, that default would add unnecessary technical debt from the start.
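To make the reuse concrete, here is roughly the shape it takes. This is a minimal sketch, not the app's actual Script Include; the class, method, and field choices are illustrative, using only standard alm_asset columns:

```javascript
// Minimal sketch (not the app's actual Script Include; names and fields are
// illustrative): card designs are cmdb_model records, and an assigned card is
// an alm_information_asset record pointing at that design and at the recipient.
var RecognitionCardUtil = Class.create();
RecognitionCardUtil.prototype = {
    initialize: function() {},

    // cardDesignSysId: sys_id of the cmdb_model record for the chosen design
    // recipientSysId:  sys_id of the sys_user receiving the card
    assignCard: function(cardDesignSysId, recipientSysId) {
        var design = new GlideRecord('cmdb_model');
        if (!design.get(cardDesignSysId))
            return ''; // unknown design, nothing to assign

        var card = new GlideRecord('alm_information_asset');
        card.initialize();
        card.model = design.getUniqueValue(); // which design this card is an instance of
        card.assigned_to = recipientSysId;    // who received it
        card.comments = 'Created by the Recognition Card app';
        return card.insert();                 // sys_id of the new assigned-card record
    },

    type: 'RecognitionCardUtil'
};
```

Both tables already carry what the app needs (a model reference, assignment, lifecycle fields), which is exactly why no custom table was required.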
Scoped data and predefined records
A significant chunk of the application’s logic depended on a specific Model Category record and a predefined user group. The agent struggled to include these as proper artifacts in scope. It was actually creative in its workaround — it suggested a Business Rule or Fix Script to generate the required data on first run, which I liked as an idea — but it missed the fact that Fluent supports defining records directly and referencing them as attributes from other records. The capability existed. The agent just wasn’t using it.
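For illustration, the seed-on-first-run idea looks something like the following: a hedged sketch of an idempotent Fix Script with invented record names, not the app's actual artifacts.

```javascript
// Hedged sketch of the "seed on first run" workaround the agent suggested
// (record names are invented): an idempotent Fix Script that creates the
// Model Category and fulfilment group the logic depends on, only if missing.
function ensureRecord(table, encodedQuery, values) {
    var gr = new GlideRecord(table);
    gr.addEncodedQuery(encodedQuery);
    gr.query();
    if (gr.next())
        return gr.getUniqueValue(); // already present, reuse it

    gr.initialize();
    for (var field in values)
        gr.setValue(field, values[field]);
    return gr.insert();
}

ensureRecord('cmdb_model_category', 'name=Recognition Card', {
    name: 'Recognition Card'
});
ensureRecord('sys_user_group', 'name=Recognition Card Fulfilment', {
    name: 'Recognition Card Fulfilment',
    description: 'Fulfils recognition card requests'
});
```

The cleaner route, and the one the agent missed, is to define these as Fluent source records and reference them as attributes from the rest of the app.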
UI/UX: classic pages and broken widgets
This was the most frustrating part. I wanted the user interface through Service Portal — catalog items and widgets. The agent's default was a classic UI Page instead. On top of being the wrong approach, it didn't work, and after five attempts to fix it the agent was no closer.
When I pushed for a widget, it initially resisted — claiming widgets weren’t supported, which is not true. Widgets are officially supported by Fluent. After convincing it to try, the generated widget still didn’t work. Multiple fix attempts followed, and to be fair the agent suggested some useful debugging steps — adding output directly into the page to trace what was happening. But it couldn’t find the root cause itself.
I eventually found it myself: the agent had miswired how the data sent from the server was being accessed on the client. A fundamental wiring mistake it couldn't diagnose through repeated iterations.
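For anyone who hasn't hit this before, the wiring in question is the standard Service Portal contract: whatever the widget server script puts on the data object is what the client controller reads back as c.data. A minimal sketch, with illustrative table and field names:

```javascript
// Widget server script (illustrative): everything the client needs goes on `data`.
(function() {
    data.cards = [];
    var gr = new GlideRecord('alm_information_asset');
    gr.addQuery('assigned_to', gs.getUserID()); // cards assigned to the current viewer
    gr.query();
    while (gr.next()) {
        data.cards.push({
            design: gr.model.getDisplayValue(),
            received: gr.getValue('sys_created_on')
        });
    }
})();

// Widget client controller (illustrative): read only what the server serialized.
function() {
    var c = this;
    // c.data is the serialized `data` object from the server script; GlideRecord
    // objects and other server-side variables do not exist on the client.
    c.cards = c.data.cards;
}
```

The HTML template then binds to c.cards (for example with ng-repeat), and nothing server-side leaks across. That handoff is what the generated widget kept getting wrong.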
04 — Catalog Items: Where I Stepped Away
For catalog items I moved away from Build Agent and used the native Catalog Builder instead. Honestly I missed an opportunity here — there’s a dedicated Catalog Item Generation skill I didn’t use, and in hindsight it would have been worth testing.
◆ Positive surprise: The Catalog Builder’s AI assist for auto-filling descriptions, tooltips, and help text on variables was genuinely good. Saved real time and the quality was usable with minor edits. If you haven’t tried it, worth a look.
05 — Workflow Generation: Prompt Quality Matters
Flow Designer generation was where I most clearly felt the difference between a good prompt and a lazy one. My first couple of attempts produced something vague and structurally wrong.
◆ What actually worked: I used ChatGPT first to write a detailed prompt — describing exact steps, conditions, data inputs, and expected outputs — then fed that into Build Agent. Even with a good prompt the output was a skeleton at best. Not every step used the correct action, and several create/update steps pointed at the wrong tables. Think of it as a rough starting point that saves you from a blank canvas, not a finished flow you can deploy.
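To give a sense of the level of detail that helped, the prompt was structured roughly like the following (the steps shown are an invented example for illustration, not my actual prompt):

```text
Create a flow in the Recognition Card scope.
Trigger: a new request record is created for the "Send a card" catalog item.
Step 1: Look up the selected card design on cmdb_model using the catalog variable.
Step 2: Create an alm_information_asset record; set model to the design and
        assigned_to to the recipient from the request.
Step 3: If the lookup in Step 1 returns nothing, notify the fulfilment group and stop.
Output: the sys_id of the created alm_information_asset record.
```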
06 — Mid-Build: Upgrade, IDE Errors, Old School
Somewhere in the middle of development we ran an instance upgrade. The app was exported as an update set for backup first. After the upgrade and activating relevant plugins including the ServiceNow IDE, I started getting errors about syncing metadata files that blocked me from continuing in the Build Agent flow.
At that point I switched to finishing development the traditional way — directly in the platform, with Claude helping alongside for scripting, debugging, and logic. No more vibe-coding pipeline. Just proper development with AI as a pair programmer rather than the driver. That combination worked really well. It’s probably where I’d position AI tooling today: most effective as an accelerator when you’re doing the steering yourself.
07 — Overall Take
✓ What worked: fast early scaffolding (app scope, structure, basic catalog variables), the Catalog Builder's AI assist for variable descriptions and help text, useful debugging suggestions along the way, and Flow Designer skeletons that beat a blank canvas.
✕ What was frustrating: defaulting to new custom tables instead of reusing existing ones, a broken UI Page and widgets it couldn't fix itself, scoped data not defined as proper Fluent records, flow steps pointing at the wrong actions and tables, and IDE sync errors after the upgrade.
Build Agent got me moving fast and produced some things I'd have spent longer on manually. But for anything beyond simple scaffolding — reusing platform tables, scoped data, Service Portal widgets, complex flows — it needed constant guidance and manual intervention. In essence it started out really optimistic, but somewhere in the middle I found myself micromanaging: debugging tricky issues, correcting wrong table references, fixing broken widgets. In the end I'm not sure I saved any time compared to doing it myself from scratch.
Is there still value? Absolutely — but you need to understand where it fits right now. For me the strongest area is server-side work: tables, forms, heavy scripting. It can also produce solid client-side code if you know enough to review and steer it. The key is going in with realistic expectations: Build Agent is a capable accelerator for contained, well-defined tasks. It is not an end-to-end app builder — not yet.
Fluent expansion feels like the real bottleneck for both agent capability and adoption. Some areas of the platform — Flows, Now Assist, Reporting, Mobile — have data models complex enough that even reading and editing them in Fluent seems challenging, let alone generating them reliably. Until Fluent coverage matures across these products, the agent's reach will stay limited to the areas it already covers well.
Swivel chair development isn't going away in the near future. Each individual tool is improving, but I don't see end-to-end vibe-coding covering all ServiceNow components arriving soon. For now the most realistic position is: use the agent where it's strong, know when to step away, and don't expect it to hold the whole architecture in its head.
One thing I'm still thinking about: the source code → build → install → test pipeline feels like it could become a friction point in real-world delivery. Whether that artificial separation ends up feeling slow or adds genuine rigour — I'm curious to find out with more use.
The biggest mental shift: with Build Agent, the instance is the deployment target, not the source of truth. Fluent source files are where the app actually lives. That said, the ServiceNow IDE does support bi-directional sync — so direct instance changes aren't necessarily lost, as long as you pull them back into your Fluent source before running the next build. Skip that sync step, run a build from stale source, and your instance change disappears. The discipline is: sync before you build, every time. It's manageable, but it's a different habit from classic ServiceNow development where the instance always wins.
Making Build Agent globally available is the right move. But I'd still be deliberate about where to use it — right now the sweet spot feels like targeted assist rather than end-to-end generation. Specific, contained tasks. Well-defined scope. Not "build me an app." It’s not a replacement for knowing the platform. It rewards people who already know what good looks like.
The frustrating part is that once you've used it at all, the gap becomes impossible to ignore. Knowing the agent can create a system property and wire it into your code in a single prompt — and then having to click through five screens to do it yourself — hits differently than it used to.
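A small example of the kind of one-prompt change I mean (the property name here is invented): the agent creates the sys_properties record and wires the lookup straight into the script in the same pass.

```javascript
// One-prompt change (property name invented for the example): Build Agent can
// create the sys_properties record and drop this lookup into the script for you.
var maxCardsPerMonth = parseInt(gs.getProperty('x_recog_card.max_cards_per_month', '5'), 10);
```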
Have you tried Build Agent on something non-trivial? I’d be curious what your experience was — drop a comment below.
⚠ Disclaimer: Development was done on the Zurich release. At some point I may give Australia a try with the same prompt, aiming to cover most ServiceNow areas and see how far it can get by itself 😄
Stay Curious | Dzmitry Peshkur · Certified Master Architect · “Was doing ServiceNow before it became mainstream”
