Security teams are not lacking data. They are lacking time, clarity, and confidence when making decisions under pressure.
In Vulnerability Response, analysts and IT remediation owners constantly ask questions such as:
- Which vulnerabilities are internet-facing?
- Which ones are actively exploitable?
- What impacts production or business-critical assets?
While the answers exist across scanner findings, CMDB records, and threat intelligence feeds, retrieving them often requires navigating multiple interfaces, understanding complex data schemas, and manually stitching together results using filters and dot-walking.
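To make that concrete, here is a rough sketch of what one such manual stitch might look like today: pulling the public CISA KEV JSON feed and cross-referencing its CVE IDs against vulnerable item records through the Table API. The instance URL, credentials, and the table and field names used here (sn_vul_vulnerable_item, vulnerability.id) are assumptions for illustration, not a definitive implementation.

```python
# Sketch: cross-referencing CISA KEV CVEs against ServiceNow vulnerable items.
# Instance URL, credentials, and table/field names are illustrative assumptions.
import requests

INSTANCE = "https://example.service-now.com"   # assumption: your instance URL
AUTH = ("api_user", "api_password")            # assumption: basic-auth service account
KEV_FEED = ("https://www.cisa.gov/sites/default/files/"
            "feeds/known_exploited_vulnerabilities.json")

# 1. Pull the CISA KEV catalog and collect its CVE IDs.
kev = requests.get(KEV_FEED, timeout=30).json()
kev_cves = [entry["cveID"] for entry in kev["vulnerabilities"]]

# 2. Query vulnerable items whose related vulnerability entry matches a KEV CVE.
#    The dot-walked field name is an assumption; adjust to your data model.
impacted = []
for i in range(0, len(kev_cves), 100):          # batch CVEs to keep the URL manageable
    query = "active=true^vulnerability.idIN" + ",".join(kev_cves[i:i + 100])
    resp = requests.get(
        f"{INSTANCE}/api/now/table/sn_vul_vulnerable_item",
        params={
            "sysparm_query": query,
            "sysparm_fields": "number,cmdb_ci,vulnerability",
            "sysparm_limit": "1000",
        },
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=60,
    )
    resp.raise_for_status()
    impacted.extend(resp.json().get("result", []))

print(f"{len(impacted)} open vulnerable items map to CISA KEV CVEs")
```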
From Query-Heavy to Outcome-Driven
AI changes this dynamic by shifting vulnerability data retrieval from a manual, query-heavy process to an intuitive, outcome-driven conversational experience.
In January, we are introducing the VR Data Retrieval Agent, enabling both vulnerability analysts and IT remediation owners to ask natural-language questions and retrieve vulnerability and exposure data across both the legacy UI and the Unified Security Exposure Management (USEM) workspace.
Instead of constructing complex queries or navigating multiple list views, users can simply ask for the information they need in their own words.
Examples include:
- Show me all open critical VITs.
- Which open vulnerabilities are internet-facing and exploitable?
- How many open VITs impact our production environment?
- List all VITs created on Windows servers.
- Are any of our assets impacted by the latest CISA KEV vulnerabilities?
AI returns precise, structured results, including counts and direct links to the underlying records for immediate follow-up.
This is not a static dashboard with predefined filters—it is a dynamic conversational interface that adapts to the question being asked.
Eliminating Complexity Behind the Scenes
Consider a common scenario. As a vulnerability analyst, you may want to quickly identify high-priority vulnerability items, for example:
“Show me all open, critical-risk vulnerability items that are internet-facing, and are due within the next 7 days.”
Traditionally, answering this question requires navigating to the Vulnerable Item (VIT) list view and manually applying multiple filters. Analysts must know that attributes such as internet-facing do not exist on the VIT table itself, understand which related CI tables contain the required fields, and correctly dot-walk across relationships to apply each condition. This process demands deep familiarity with the data model and often involves trial and error across multiple views and filters.
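For illustration, the manual version of that question might reduce to an encoded query along the following lines. The table, fields, and choice values used here (sn_vul_vulnerable_item, risk_rating, cmdb_ci.internet_facing, due_date) are assumptions about a typical data model, not the exact out-of-box schema.

```python
# Sketch of the manual approach: a dot-walked encoded query against the
# vulnerable item table. Field and value names are illustrative assumptions.
from datetime import date, timedelta

import requests

INSTANCE = "https://example.service-now.com"   # assumption: your instance URL
AUTH = ("api_user", "api_password")            # assumption: basic-auth service account

cutoff = (date.today() + timedelta(days=7)).isoformat()

# Each condition below is a filter the analyst would otherwise build by hand,
# including the dot-walk from the VIT to its related CI record.
encoded_query = "^".join([
    "active=true",                    # open items only
    "risk_rating=1",                  # assumed choice value for "Critical"
    "cmdb_ci.internet_facing=true",   # assumed dot-walked attribute on the related CI
    f"due_date<={cutoff}",            # remediation due within the next 7 days
])

resp = requests.get(
    f"{INSTANCE}/api/now/table/sn_vul_vulnerable_item",
    params={
        "sysparm_query": encoded_query,
        "sysparm_fields": "number,cmdb_ci,risk_rating,due_date",
        "sysparm_limit": "500",
    },
    auth=AUTH,
    headers={"Accept": "application/json"},
    timeout=60,
)
resp.raise_for_status()
for vit in resp.json().get("result", []):
    print(vit["number"], vit["due_date"])
```

Every condition in that filter is something the analyst has to know in advance, from the choice value that represents critical risk to the CI attribute that can only be reached by dot-walking.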
With AI, that complexity disappears. The analyst simply opens the Now Assist Panel and asks the question in plain language, without needing to understand table relationships, schema details, or complex filtering logic.
The system returns the total count along with a direct link to the corresponding VIT list, pre-filtered and ready for action.
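Conceptually, that "count plus pre-filtered link" outcome is close to pairing an aggregate count with a list URL that carries the same encoded query. A minimal sketch, with the same assumed instance, credentials, and filter as in the previous example:

```python
# Sketch: total count via the Aggregate (stats) API plus a pre-filtered list URL.
# Instance, credentials, and the filter are the same illustrative assumptions as above.
from urllib.parse import quote

import requests

INSTANCE = "https://example.service-now.com"   # assumption: your instance URL
AUTH = ("api_user", "api_password")            # assumption: basic-auth service account
encoded_query = "active=true^risk_rating=1^cmdb_ci.internet_facing=true"  # assumed filter

# Aggregate (stats) API call for the total count of matching vulnerable items.
count_resp = requests.get(
    f"{INSTANCE}/api/now/stats/sn_vul_vulnerable_item",
    params={"sysparm_query": encoded_query, "sysparm_count": "true"},
    auth=AUTH,
    headers={"Accept": "application/json"},
    timeout=60,
)
count_resp.raise_for_status()
total = count_resp.json()["result"]["stats"]["count"]

# A list URL carrying the same query yields the pre-filtered, ready-for-action view.
list_link = f"{INSTANCE}/sn_vul_vulnerable_item_list.do?sysparm_query={quote(encoded_query)}"

print(f"{total} matching vulnerable items: {list_link}")
```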
Unlocking the Full Value of Vulnerability Data
AI isn’t just about automating manual steps—it’s about making vulnerability data easier to access, understand, and act on.
With natural-language retrieval of Vulnerability Response data, vulnerability analysts and IT remediation owners can quickly get the answers they need, without navigating complex schemas or building advanced filters. This capability is available across both the Vulnerability Response (VR) workspace and the Unified Security Exposure Management (USEM) workspace, helping teams investigate faster, prioritize more effectively, and make remediation decisions with greater confidence.
To try it out, download the latest version of Now Assist for Vulnerability Response from the ServiceNow Store.
What vulnerability data questions do you find yourself asking most often? Let us know in the comments—we’d love to hear your feedback!