
laszloballa
ServiceNow Employee

You know what nobody became a developer for? Manually copying requirements from a doc into Agile 2.0 stories, one field at a time, for an hour straight. And yet here we all are, doing exactly that every time a new requirements doc or gap analysis lands on our desk.

 

I built a small open-source CLI tool called sn-story-transformer to skip that part. It's a Python pipeline you run from your terminal (or your IDE's integrated terminal) that takes a requirements document, uses AI to break it into epics and stories, and pushes them to your ServiceNow instance via the Table API. Two scripts, a few minutes, and your backlog is populated.

 


 

To be clear: this is just one way to solve this problem. You could build something similar with Flow Designer, a scoped app, or a completely different stack. I went with a standalone CLI tool because it was the fastest path to something useful, but the approach matters less than the idea. If this sparks a better version, even better.


Two scripts, one review gate

 

The whole thing is intentionally simple.

 

python scripts/analyze_doc.py --input GAP_ANALYSIS.md   →  stories.json
python scripts/create_stories.py                         →  ServiceNow

 

Step 1 sends your document to an AI model. Out comes a structured JSON file with epics and user stories. Step 2 reads that JSON and POSTs the records to your rm_epic and rm_story tables via the Table API.
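To make step 2 concrete, here's a minimal sketch of what a Table API push can look like. The field names, function names, and payload shape are illustrative assumptions, not the tool's actual code; only the `/api/now/table/<table>` endpoint is the real ServiceNow REST path.

```python
from typing import Dict

def table_api_url(instance: str, table: str) -> str:
    """Build the Table API endpoint for an instance, e.g. 'dev12345'."""
    return f"https://{instance}.service-now.com/api/now/table/{table}"

def story_payload(story: Dict, product_sys_id: str) -> Dict:
    """Map one story from stories.json onto Table API fields (names illustrative)."""
    return {
        "short_description": story["title"],
        "description": story.get("description", ""),
        "product": product_sys_id,
    }

def create_story(instance: str, story: Dict, product_sys_id: str, auth) -> Dict:
    """POST a single story record and return the created record."""
    import requests  # imported here so the pure helpers stay dependency-free
    resp = requests.post(
        table_api_url(instance, "rm_story"),
        json=story_payload(story, product_sys_id),
        auth=auth,
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]
```

The same pattern applies to rm_epic; the epic is created first so its sys_id can be referenced from its stories.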

 

The stories.json file sitting between the two steps is the most important design choice here. It's not a temp file. It's your review gate. Open it in your IDE, scan the output, fix anything that looks off, then push. The AI does the grunt work. You keep full control over what actually ends up in your product backlog.
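For a sense of what lands in that review gate, here is a hypothetical stories.json shape. The exact keys are defined by schema.yaml, so treat this as an illustration, not the tool's contract:

```json
{
  "epics": [
    {
      "title": "User onboarding",
      "stories": [
        {
          "title": "Self-service registration form",
          "description": "As a new user, I want to register myself so that ...",
          "priority": "high",
          "acceptance_criteria": ["Given ... When ... Then ..."]
        }
      ]
    }
  ]
}
```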

 

No one should blindly trust AI output going straight into production systems. This gives you the checkpoint without making the process painful.


Use whatever AI model you want

 

The analysis script uses LiteLLM under the hood, so you're not locked into any specific provider. Set AI_MODEL and your API key in .env:

 

AI_MODEL=claude-sonnet-4-6
ANTHROPIC_API_KEY=sk-ant-...

 

Want to use OpenAI, Gemini, Mistral, or Azure? Swap the values. The script doesn't care. If you don't set AI_MODEL at all, it auto-detects based on whichever API key it finds in your environment.
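A sketch of how that resolution and call might look with LiteLLM. The fallback mapping and function names are assumptions for illustration; the real auto-detection logic lives in the script itself.

```python
import os
from typing import Optional

# Illustrative fallback order; model names here are assumptions.
_KEY_TO_MODEL = {
    "ANTHROPIC_API_KEY": "claude-sonnet-4-6",
    "OPENAI_API_KEY": "gpt-4o",
    "GEMINI_API_KEY": "gemini/gemini-1.5-pro",
}

def resolve_model() -> Optional[str]:
    """Honor AI_MODEL if set, otherwise guess from whichever API key exists."""
    explicit = os.environ.get("AI_MODEL")
    if explicit:
        return explicit
    for key, model in _KEY_TO_MODEL.items():
        if os.environ.get(key):
            return model
    return None

def analyze(document: str) -> str:
    """Send the document through LiteLLM; the provider is chosen by model name."""
    from litellm import completion  # routes to any provider LiteLLM supports
    resp = completion(
        model=resolve_model(),
        messages=[{"role": "user", "content": document}],
    )
    return resp.choices[0].message.content
```

Swapping providers really is just swapping the model string; LiteLLM normalizes the request and response formats behind `completion()`.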

 

The prompt logic itself lives in schema.yaml, not hardcoded in the Python. That file defines how work gets grouped into epics, how stories are structured, what priority labels to use, and what acceptance criteria look like. You can tweak prompt behavior without ever opening the scripts.
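As a rough sketch, such a file could look like this. Every key below is a hypothetical stand-in; check the repo's schema.yaml for the real structure:

```yaml
# Hypothetical shape; see the repo's schema.yaml for the actual keys.
tables:
  epic: rm_epic
  story: rm_story
fields:
  story:
    title: short_description
    description: description
priorities:
  critical: "1"
  high: "2"
  moderate: "3"
  low: "4"
prompt:
  grouping: |
    Group related requirements into epics of roughly 3-8 stories each.
  acceptance_criteria: |
    Every story gets testable Given/When/Then acceptance criteria.
```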


Auth and safety nets

 

The ServiceNow script authenticates via OAuth 2.0 client credentials by default (the grant type you'd configure in Application Registry under System OAuth). If your instance setup is simpler, set SERVICENOW_AUTH=basic in .env and use Basic auth instead.
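For context, the client-credentials exchange against ServiceNow's `/oauth_token.do` endpoint boils down to one form-encoded POST. This is a hedged sketch, not the script's implementation; function names are mine:

```python
from typing import Dict, Tuple

def token_request(instance: str, client_id: str, client_secret: str) -> Tuple[str, Dict]:
    """Build the client-credentials token request for an instance."""
    url = f"https://{instance}.service-now.com/oauth_token.do"
    data = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
    return url, data

def fetch_token(instance: str, client_id: str, client_secret: str) -> str:
    """Exchange client credentials for a bearer token."""
    import requests  # kept out of module scope so token_request stays dependency-free
    url, data = token_request(instance, client_id, client_secret)
    resp = requests.post(url, data=data, timeout=30)
    resp.raise_for_status()
    return resp.json()["access_token"]
```

The returned token then goes into an `Authorization: Bearer ...` header on the Table API calls.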

 

Before writing a single record, the script runs a pre-flight check: it verifies both target tables are reachable and that the product sys_id you specified actually exists. If permissions are wrong or the sys_id is off, you'll know immediately. Not after 40 records have been created. Or worse, not at all.
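A pre-flight check like that can be as simple as a handful of cheap GETs that must all return 200. The sketch below is an assumption about how it works; in particular, the table the product sys_id is looked up in (`cmdb_model` here) is a guess:

```python
from typing import List, Sequence

def preflight_urls(instance: str, product_sys_id: str,
                   tables: Sequence[str] = ("rm_epic", "rm_story")) -> List[str]:
    """GETs that must all succeed before any record is written (sketch)."""
    base = f"https://{instance}.service-now.com/api/now/table"
    # sysparm_limit=1 keeps the reachability probe nearly free.
    checks = [f"{base}/{t}?sysparm_limit=1" for t in tables]
    # Look up the product record directly by sys_id; table name is an assumption.
    checks.append(f"{base}/cmdb_model/{product_sys_id}")
    return checks

def preflight(instance: str, product_sys_id: str, auth) -> None:
    """Fail fast with the offending URL before anything is created."""
    import requests
    for url in preflight_urls(instance, product_sys_id):
        resp = requests.get(url, auth=auth,
                            headers={"Accept": "application/json"}, timeout=30)
        if resp.status_code != 200:
            raise SystemExit(f"Pre-flight failed ({resp.status_code}): {url}")
```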

 

Two flags worth knowing about:

 

--dry-run: Shows you exactly what would be created without making any API calls. Use this first. Always.

--update: Patches existing records matched by title instead of creating duplicates. Great for iterating on the same document.
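The interaction between the two flags reduces to a small decision function. This is a sketch of the semantics described above, with names of my own choosing, not the tool's code:

```python
from typing import Set

def plan_action(title: str, existing_titles: Set[str],
                update: bool, dry_run: bool) -> str:
    """Decide what happens to one story given the two flags (sketch)."""
    # With --update, a title match means PATCH; otherwise a new record is POSTed.
    verb = "update" if (update and title in existing_titles) else "create"
    # With --dry-run, report the plan instead of calling the API.
    return f"would {verb}" if dry_run else verb
```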

One YAML file controls everything

 

schema.yaml is the single configuration file for all structural behavior. Target table names, field mappings, priority values (human-readable keys in the JSON, numeric ServiceNow values at write time), and the AI prompt instructions. All in one place.
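The priority translation at write time is a one-line lookup. The mapping below uses ServiceNow's standard 1-4 priority values, but in the tool it would come from schema.yaml rather than being hardcoded like this:

```python
# Hypothetical mapping; in the tool this comes from schema.yaml, not code.
PRIORITY_MAP = {"critical": "1", "high": "2", "moderate": "3", "low": "4"}

def translate_priority(label: str) -> str:
    """Turn a human-readable key from stories.json into the numeric value."""
    try:
        return PRIORITY_MAP[label.lower()]
    except KeyError:
        raise ValueError(f"Unknown priority label: {label!r}") from None
```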

 

If your team uses different tables, different fields, or different priority schemes, you change the YAML. Not the scripts. That was the goal: make the tool adaptable without asking anyone to read through Python.


Where this fits in the real world

 

Let's be honest: requirements almost never start as a clean document. They're scattered across meeting recordings, Miro boards, Confluence and Loop pages, Figma files, or even email and chat threads (that nobody can find anymore).

Today's AI tools are already great at consolidating that chaos into something coherent: summarizing a transcript, describing a wireframe, pulling notes together from five different sources, attaching screenshots, and so on.

 

Once you have that document, however you got there, this tool handles the last mile.
From "we know what we need to build" to "it's in the backlog and ready to plan."

The workflow fits however you like to work. Run the scripts from your terminal, or use your IDE's integrated terminal and edit stories.json right in the editor before pushing. The two-step design keeps the human in the loop without making the process slow.


Try it out

 

The project is on GitHub: sn-story-transformer. You need Python 3.9+ and four pip packages. Clone the repo, copy .env.example to .env, fill in your ServiceNow credentials and an AI provider API key, and you're up and running.

Fork it, adapt the schema to your setup, and see how much of the backlog grunt work you can automate away.

 

And yes, the obvious next step is turning this into a proper ServiceNow app. No terminal, no Python, just upload a doc to some nice UI and go. We might build that next. But if someone from the community gets there first, we'd honestly love that even more.


Built something similar? Found a completely different approach? We'd love to hear about it in the comments👇