Don't Waste Your Time or Your Tokens
The prompting pro guide for ServiceNow AI Agents (yes, I know we don't use tokens—it rhymed so nicely)
Look, I know this is basic. Prompting is like protein in the fitness world—you're probably doing it, but you should do it more and do it better.
Here's how to not waste your time or energy on bad prompts. Read it. Apply it. Rinse and repeat forever.
Visual Aid
The Framework: GCES (Goal, Context, Expectations, Source)
Every effective prompt answers four questions:
Goal - What do you want from AI?
Bad: 'Help me prepare for a meeting.'
Good: 'Generate 3-5 bullet points to prepare me for a meeting with Client X to discuss their Phase 3+ brand campaign.'
Specificity matters. The more precise your goal, the more useful the AI output.
Context - Why do you need it and who is involved?
Bad: 'Write meeting prep.'
Good: 'For an upcoming meeting with Client X focused on Project Y. I'm the marketer on the ITSM product acting as product marketing.'
Context shapes tone, depth, and relevance. Give AI the situational awareness it needs.
Expectations - How should AI respond to best fulfill your request?
Bad: [No guidance on format or tone]
Good: 'Base your response on emails and Teams chats from the last 2 weeks. Refer to product sheet X [link to document]. Take into account current market trends and collaboration preferences. Please use simple language so I can get up to speed quickly.'
Tell AI how to respond—format, tone, detail level, what to emphasize. Don't make it guess.
Source - What information or samples do you want AI to use?
Bad: [Hope AI finds the right information]
Good: 'Use emails and Teams chats from the last 2 weeks. Reference product sheet X [link]. Consider our brand guidelines [link]. Review competitor analysis [link].'
Point AI to specific sources. Don't let it guess which documents or data matter.
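To make the four GCES questions concrete, here's a minimal sketch of assembling them into one prompt string. The function name and field labels are my own illustration, not a ServiceNow API; the example values are pulled from the Good examples above.

```python
def build_gces_prompt(goal: str, context: str, expectations: str, source: str) -> str:
    """Combine the four GCES elements into a single, clearly labeled prompt."""
    return "\n\n".join([
        f"Goal: {goal}",
        f"Context: {context}",
        f"Expectations: {expectations}",
        f"Source: {source}",
    ])

prompt = build_gces_prompt(
    goal="Generate 3-5 bullet points to prepare me for a meeting with Client X "
         "to discuss their Phase 3+ brand campaign.",
    context="The meeting is focused on Project Y. I'm the marketer on the ITSM "
            "product acting as product marketing.",
    expectations="Use simple language so I can get up to speed quickly.",
    source="Emails and Teams chats from the last 2 weeks; product sheet X; "
           "our brand guidelines; the competitor analysis.",
)
```

The labels aren't magic; what matters is that all four elements are present and explicit, so the model never has to guess your goal, situation, format, or grounding material.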
The Five Principles That Actually Work
- Give Direction
Describe the desired outcome in detail. Vague requests get vague results. 'Write an email' produces generic garbage. 'Write a 3-paragraph email to executive stakeholders explaining the Q3 delay, maintaining a professional tone, focusing on the recovery plan rather than excuses' produces something you can actually use.
- Specify Format
Define the structure. 'Summarize this case' is incomplete. 'Summarize this case in 3 bullet points: Issue, Actions Taken, Resolution. Use simple language for non-technical stakeholders' gives AI clear constraints that improve output quality.
- Provide Examples
Show AI what good looks like. Include 2-3 examples of ideal outputs. This dramatically improves reliability. But don't overdo it—too many examples constrain creativity when you actually need it.
- Evaluate Quality
Test your prompts systematically. Try different variations. Identify what works and what fails. Rate outputs on faithfulness, completeness, and format adherence. Iterate based on results, not assumptions.
- Divide Labor
Break complex tasks into steps. Instead of 'analyze this data and create a presentation,' try: Step 1 - 'Analyze this data and identify top 3 insights.' Step 2 - 'Create presentation outline based on these insights.' Step 3 - 'Generate slide content for each section.' Chain prompts for complex goals.
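The three-step chain above can be sketched as plain sequential calls, where each prompt consumes the previous step's output. Note that `call_llm` here is a hypothetical stand-in for whatever model API you actually use, not a real library function.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call - swap in your model API of choice."""
    return f"[model response to: {prompt.splitlines()[0]}]"

def analyze_then_present(data_summary: str) -> str:
    """Divide labor: each step gets one focused prompt instead of one mega-prompt."""
    # Step 1: narrow analysis task
    insights = call_llm(f"Analyze this data and identify the top 3 insights:\n{data_summary}")
    # Step 2: structure, grounded in Step 1's output
    outline = call_llm(f"Create a presentation outline based on these insights:\n{insights}")
    # Step 3: content generation, grounded in Step 2's output
    slides = call_llm(f"Generate slide content for each section of this outline:\n{outline}")
    return slides
```

The design point: each prompt has a single goal, so you can inspect and fix the weak link in the chain instead of re-rolling one giant do-everything prompt.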
The Real-World Application
When you're building AI Agents in ServiceNow, these principles determine success:
Give Direction: Your agent instructions should clearly state the goal, not assume AI will figure it out.
Specify Format: If the agent's output feeds another system, define exact format requirements. JSON? Specific field structure? Be explicit.
Provide Examples: Include examples of ideal agent behavior in your instructions. 'When triaging incidents, prioritize like this...' with concrete examples.
Evaluate Quality: Use Agentic Evaluations to test your agent across 50+ scenarios. Don't guess—measure faithfulness, completeness, and tool-calling accuracy.
Divide Labor: Complex agentic workflows benefit from breaking tasks across multiple agents. One agent investigates, another triages, another remediates. Chain them effectively.
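As a quick illustration of "Specify Format" when an agent's output feeds another system, here's a sketch of an instruction that pins down an exact JSON shape, plus a check that a reply matches it. The field names (`priority`, `category`, `summary`) are hypothetical, not a ServiceNow schema.

```python
import json

# Hypothetical agent instruction: be explicit about the exact output shape.
instruction = (
    "When you finish triaging, respond ONLY with JSON in this shape:\n"
    '{"priority": "<1-4>", "category": "<string>", "summary": "<one sentence>"}'
)

def is_valid_triage(reply: str) -> bool:
    """Return True if the model's reply parses as JSON with the required fields."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and {"priority", "category", "summary"} <= data.keys()

print(is_valid_triage('{"priority": "2", "category": "network", "summary": "VPN outage."}'))  # True
print(is_valid_triage("Sure! The priority is probably 2."))  # False
```

Validating the format downstream closes the loop: a vague format instruction fails silently, while an explicit one gives you something you can actually test.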
The Bottom Line
Bad prompts waste your time generating useless output you have to redo. Good prompts save hours by producing exactly what you need on the first try.
It's like protein: basic, essential, and most people aren't getting enough. The difference between mediocre AI results and genuinely useful AI assistance is almost always prompt quality.
So spend 30 seconds adding clear direction, context, expectations, and sources to your prompts. Those 30 seconds turn 'meh' AI output into 'wow, this is exactly what I needed.'
Read. Apply. Rinse. Repeat. Forever.
What's your biggest prompting frustration? Drop it in the comments. Let's troubleshoot together.
Views expressed are my own and do not represent ServiceNow, my team, partners, or customers.