At Knowledge 2026, my colleague Libbie Miller and I led a session on something that might sound a bit unconventional: the "vibe" of your AI assistant. As content designers at ServiceNow, we spend our days shaping conversational experiences in AI products, and we've learned that getting the vibes right is not just about being friendly—it's about building trust, driving adoption, and avoiding costly mistakes.
The story of Lucia and Sunny Escape Air
We opened the session with a story about Lucia, a nurse practitioner in Chicago who receives an urgent call about a family emergency. She needs to fly to Mexico City immediately, so she logs onto Sunny Escape Air's booking site and starts chatting with their AI assistant.
The AI greets her with: "Hey there! Looks like someone's got the travel bug. Where are we headed today? Somewhere fun?"
Lucia responds: "I need to fly to Mexico City tonight. It's an emergency."
The AI's response: "Ohh, no worries! We've totally got your back. Let's knock this out real quick. Flights to MEX are super popular, so fingers crossed we can snag you something."
Do you see the problem? The AI perfectly matches Sunny Escape Air's fun, casual brand—but it completely misses the context. Lucia isn't planning a vacation. She's dealing with a family crisis. The forced cheerfulness and casual tone feel dismissive and inappropriate.
The situation gets worse. The AI tells Lucia it found two seats on a 6:00 PM flight for $340 each. When she asks to book them, it responds: "Ohh, looks like those seats got snagged. The airline world moves fast, lol. But hey, no biggie. You want me to check what else is in the pipeline?"
Frustrated and distrustful, Lucia abandons the AI assistant entirely. She calls customer support, waits on hold for 20 minutes, and books with a human agent. From that point forward, she never uses Sunny Escape Air's AI again.
Why vibes matter: The data behind trust and adoption
While Lucia's story is fictional, it represents real patterns we're seeing in AI adoption. Here's what the research tells us:
- AI models prompted to sound warm without clear guidelines yield 10-30% more errors
- Users lose trust in AI they perceive as overly emotional
- Companies see 10-20% revenue increases when brand is consistently presented across all communication channels
There's also real money on the line. Air Canada was recently ordered by a tribunal to pay a refund to a customer who was given incorrect information by their AI assistant—even though Air Canada argued the AI had hallucinated. The tribunal ruled that the company was responsible regardless.
At ServiceNow, we say that trust is earned in drops and lost in buckets. When it comes to AI assistants, it doesn't take much to lose that trust.
Our position is simple: The vibes of your AI are key to its adoption. People will not adopt AI systems that feel off, no matter how capable they are.
The four elements of AI vibes
We introduced a framework that breaks down AI communication into four distinct elements, organized into two categories: what stays fixed and what flexes based on context.
Fixed elements (who your AI is and how it communicates)
Traits: Who it is
This is who your AI fundamentally is—its baseline character traits. Think of it like hiring someone: Are they naturally helpful and patient? Direct and efficient? Empathetic and warm? These traits stay consistent regardless of situation.
Example: Your AI is fundamentally warm, helpful, and professional.
Style: How it communicates
This is your AI's communication conventions—the editorial decisions that keep its writing consistent. Does it say "I" or "we"? Does it use contractions? How does it handle capitalization and punctuation? These choices stay consistent across interactions.
Example: Your AI uses "I" language, writes conversationally with contractions, and formats responses with clear line breaks.
Flexible elements (how fixed elements adapt to context)
Tone: Traits in context
This is how your AI's traits show up in different moments. The same traits adjust their emphasis based on what's happening—some traits get amplified, others get withheld. Professional and warm traits might show up as serious and focused during a crisis, or friendly and encouraging during routine requests.
Example: That warm, helpful AI emphasizes efficiency and calm during an urgent issue, but emphasizes warmth and encouragement when helping a first-time user.
Length: Style in context
This is how your AI's communication style expands or compresses based on user needs. The same style conventions apply, but the amount you say changes. A power user troubleshooting a known issue needs concise responses, while a confused first-time user needs more comprehensive guidance.
Example: That conversational AI style delivers a two-sentence confirmation for experienced users, but a detailed walkthrough with examples for new users.
The relationship: Traits and style define who your AI is at its core. Tone and length are how those fixed elements adapt to meet users where they are.
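This fixed-versus-flexible split can be sketched as a data structure. The class and field names below are hypothetical, just to illustrate the relationship: the persona is defined once and never changes, while tone and length are derived per interaction from context.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Persona:
    """Fixed elements: who the AI is (traits) and how it communicates (style)."""
    traits: tuple  # e.g. ("warm", "upbeat", "helpful")
    style: dict    # editorial conventions, e.g. {"pronoun": "I", "contractions": True}

@dataclass
class Delivery:
    """Flexible elements: how the fixed elements adapt to one interaction."""
    tone: dict   # per-trait emphasis, e.g. {"upbeat": "withheld", "helpful": "amplified"}
    length: str  # "concise" or "comprehensive"

def adapt(persona: Persona, context: str) -> Delivery:
    """Derive tone and length from context without mutating the persona."""
    if context == "urgent":
        # Amplify problem-solving, withhold the rest; compress the response.
        return Delivery(
            tone={t: ("amplified" if t == "helpful" else "withheld") for t in persona.traits},
            length="concise",
        )
    # Routine requests get the baseline personality at full length.
    return Delivery(tone={t: "baseline" for t in persona.traits}, length="comprehensive")
```

The key design choice: `adapt` returns a new `Delivery` each time, so the same `Persona` can show up differently in different moments without ever being redefined.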
What went wrong with Sunny Escape Air
The issue with Sunny Escape Air's AI was that it treated its personality as completely fixed. The AI had one setting: fun and casual. Its traits never adjusted emphasis based on context to create appropriate tone. Its responses never compressed or expanded based on user needs. Every interaction felt the same, regardless of what the user actually needed.
The AI's traits (upbeat, casual, enthusiastic) were always at maximum volume, even when the context called for restraint. Its style (casual language, emojis, exclamations) stayed verbose and playful even in a high-stakes emergency. There was no flexibility—no recognition that Lucia needed a different kind of help than a vacationer browsing weekend getaways.
Defining traits in practice
To show how this framework works in action, we walked through how to properly define an AI assistant's traits. Here's how we redesigned Sunny Escape Air's AI:
Traits: Warm, upbeat, helpful
But traits alone aren't enough. You need to define what each trait looks like in practice—and what's out of bounds:
Warm
- In practice: Friendly and welcoming without being overly familiar. Says things like "I'd be happy to help you find a flight" or "Let's get you to Cancun."
- Out of bounds: Doesn't become saccharine or over-the-top. No "Ohh my gosh, you're going to love it there!" No excessive exclamation points. Doesn't assume familiarity with "Hey hun" or "No prob, babe."
Upbeat
- In practice: When the context is clearly vacation-related, matches the customer's enthusiasm. "Perfect! Tulum in April is beautiful. You're going to have an amazing time."
- Out of bounds: Doesn't force cheerfulness when it doesn't fit. If someone is rebooking after a cancellation or dealing with travel stress, it doesn't lead with excitement. It's not relentlessly peppy when the customer is clearly frustrated.
Helpful
- In practice: Prioritizes getting things done efficiently. Leads with what it can do: "I can get you a 6:00 PM flight" or "Let me check seat availability for your group." Anticipates needs—if you're booking for a family of four, proactively mentions baggage policies.
- Out of bounds: Doesn't over-explain or add unnecessary information. Doesn't say "Our airline was founded in 1982" when you want to change a seat. Doesn't make customers ask multiple questions for the same thing.
These "in practice" and "out of bounds" definitions give you the guardrails you need to create appropriate tone across different contexts—you know which traits to emphasize when someone's excited about vacation versus stressed about an emergency.
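One way to make "out of bounds" enforceable is to encode each trait's guardrails as patterns and scan draft responses against them. This is a minimal sketch: the structure and phrase lists below are illustrative, drawn from the out-of-bounds examples above, not a production linter.

```python
import re

# Hypothetical guardrails for the redesigned Sunny Escape Air assistant.
TRAIT_GUARDRAILS = {
    "warm": {
        "in_practice": ["I'd be happy to help you find a flight"],
        # Over-familiar terms and stacked exclamation points
        "out_of_bounds": [r"\bhun\b", r"\bbabe\b", r"!{2,}"],
    },
    "upbeat": {
        "in_practice": ["You're going to have an amazing time"],
        # Forced cheerfulness that doesn't fit high-stakes moments
        "out_of_bounds": [r"\blol\b", r"no biggie", r"fingers crossed"],
    },
}

def vibe_check(draft: str) -> list:
    """Return (trait, pattern) pairs for every out-of-bounds match in a draft."""
    violations = []
    for trait, rules in TRAIT_GUARDRAILS.items():
        for pattern in rules["out_of_bounds"]:
            if re.search(pattern, draft, flags=re.IGNORECASE):
                violations.append((trait, pattern))
    return violations
```

Running the original Sunny Escape Air reply through a check like this would flag it on several counts, while the reimagined reply passes clean.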
The conversation, reimagined
With these trait definitions in place, we revisited Lucia's conversation with a properly configured AI that can adapt its tone and length.
AI: "Hi, how can I help you today?"
Lucia: "I need to fly to Mexico City tonight. It's an emergency."
AI: "I understand this is urgent. Let me search for available flights to Mexico City departing tonight."
Notice what changed in both tone and length:
Tone adjustments (traits in context):
- Warm trait withheld playfulness, emphasized respect
- Upbeat trait completely dialed down—no enthusiasm
- Helpful trait amplified to maximum—immediate problem-solving
Length adjustments (style in context):
- Compressed from chatty to concise
- Eliminated unnecessary personality language
- Focused only on essential information
The AI comes across as respectful, professional, and competent—the same core traits, but adapted perfectly to what Lucia needs in this situation.
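The switch between registers has to be triggered by something in the conversation. A minimal sketch of that trigger, assuming a simple keyword heuristic (a real system would use intent classification; the signal list and templates here are illustrative):

```python
# Hypothetical: pick a response register from signals in the user's message.
URGENCY_SIGNALS = ("emergency", "urgent", "immediately", "asap")

RESPONSES = {
    # Concise, focused: helpful trait amplified, upbeat withheld
    "urgent": "I understand this is urgent. Let me search for available flights.",
    # Warm, conversational: the baseline personality
    "routine": "I'd be happy to help you find a flight. Where are you thinking of going?",
}

def respond(message: str) -> str:
    """Use the concise, focused register when the message signals urgency."""
    register = "urgent" if any(s in message.lower() for s in URGENCY_SIGNALS) else "routine"
    return RESPONSES[register]
```

The point isn't the heuristic itself but the architecture: detection of context is separate from the persona, so the same traits produce Lucia's respectful reply in a crisis and a friendlier one for a browsing vacationer.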
Key takeaways
Here's what we hope attendees walked away with:
- Vibes are not superficial—they directly impact trust, adoption, and business outcomes
- The four-element framework (Fixed: Traits & Style; Flexible: Tone & Length) gives you a systematic way to configure AI communication
- Define traits in practice—not just what they are, but what they look like in real interactions and what goes out of bounds
- Build flexibility into your AI—the same personality should adapt its tone and length based on context
Continue the conversation
If you're configuring AI assistants in ServiceNow or anywhere else, we'd love to hear about your experiences. What challenges have you encountered in getting traits and tone right? How are you thinking about response length in your AI systems?
Drop your thoughts in the comments below, or connect with us in the User Experience SIG community.
And if you missed our session at K26, remember: before you configure another AI response, give your AI a vibe check.