Help with duplicate automated incidents
3 weeks ago
Hi Community.
I am working on an issue regarding duplicate automated incident creation. Amazon GuardDuty is creating duplicate incidents for the same finding and assigning them to different groups, which leads to confusion between teams.
I am attaching a screenshot that shows the work notes of the automated incident. The problem is that I am unable to find the source of these alerts/integrations. I have not worked with automated incident creation before and would appreciate any help on where to look for the root cause.
The incident's contact type is 'Automated Alert' and the caller is an AWS bot. The short description/description always reads '[Amazon GuardDuty Finding] Recon:EC2/PortProbeUnprotectedPort - Finding Severity: Low'.
3 weeks ago
Hi @AwanishK,
Based on the screenshot and the text format ("Hello, This is an automatic update notification..."), this is most likely coming from an Inbound Email Action, or from the AWS Service Management Connector failing to deduplicate the findings.
Here is a step-by-step approach to find the root cause and stop the duplicates:
1. Identify the Source (Email vs. API)
Open one of the duplicate incidents.
Right-click the header -> History -> Calendar.
Look at the very first entry (CREATE). Who is the user in the "Updated by" / "Created by" field?
If it is "guest" or "system": It is likely an Email Inbound Action.
If it is a specific user (e.g., aws_integration): It is likely a REST API or the AWS Connector App.
2. Scenario A: It is coming via Email (Most Likely)
The text in your screenshot says "Count: 1210". GuardDuty sends emails every time the count increases.
The Issue: If the Subject Line of the email changes (e.g., includes the timestamp or count) and there is no ServiceNow Watermark (Ref:MSG...) in the body, ServiceNow treats it as a New Email instead of a Reply.
The Fix:
Go to System Policy > Email > Inbound Actions.
Search for the script that sets contact_type to "Automated Alert".
Modify the script to search for an existing active incident with the same GuardDuty Finding ID before creating a new one; you may need to extract this ID from the email body with a regex and store it in the Correlation ID field. A minimal sketch of that logic follows below.
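As a rough illustration, here is a minimal sketch of that dedup logic inside an Inbound Email Action script. The "Finding ID:" label and the regex are assumptions about your email format, and storing the ID in correlation_id is just one common convention; adjust both to what your emails actually contain.

```javascript
// Minimal dedup sketch for an Inbound Email Action (type "New").
// Assumption: the email body contains a line like "Finding ID: <id>".
(function runAction(current, event, email) {
    var match = email.body_text.match(/Finding ID:\s*(\S+)/i); // hypothetical label
    var findingId = match ? match[1] : '';

    if (findingId) {
        var existing = new GlideRecord('incident');
        existing.addActiveQuery();
        existing.addQuery('correlation_id', findingId);
        existing.query();
        if (existing.next()) {
            // Duplicate finding: log the update on the existing incident instead
            existing.work_notes = 'GuardDuty update received:\n' + email.body_text;
            existing.update();
            return; // skip creating a new incident
        }
    }

    // No open incident for this finding: create one and tag it for future matching
    current.correlation_id = findingId;
    current.short_description = email.subject;
    current.description = email.body_text;
    current.contact_type = 'Automated Alert';
    current.insert();
})(current, event, email);
```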
3. Scenario B: It is the AWS Connector
If you are using the official AWS Service Management Connector:
Navigate to System Import Sets > Administration > Transform Maps.
Search for maps related to "AWS" or "GuardDuty".
Check the Coalesce field.
The Fix: Ensure that the "Finding ID" (unique ID from AWS) is set to Coalesce = True. If the integration is not coalescing on a unique ID, it will create a new record for every update payload sent by AWS.
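If you want to verify this quickly, a background script along these lines lists which fields your AWS-related transform maps currently coalesce on (the 'AWS' name filter is an assumption; adjust it to your actual map names):

```javascript
// Background-script sketch: list coalescing fields on AWS-related transform maps
var entry = new GlideRecord('sys_transform_entry');
entry.addQuery('coalesce', true);
entry.addQuery('map.name', 'CONTAINS', 'AWS'); // adjust to your map naming
entry.query();
while (entry.next()) {
    gs.info(entry.map.name + ' coalesces on: ' + entry.getValue('target_field'));
}
```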
Why different groups?
Since duplicates are being created, your Assignment Rules or Flows run from scratch on every new ticket. If the payload contains different data (e.g., a different Source IP or Region), your rules might route them differently each time. Fixing the deduplication (preventing the new ticket from being created) will resolve the assignment issue automatically.
If this response helps you solve the issue, please mark it as Accepted Solution.
This helps the community grow and assists others in finding valid answers faster.
Best regards,
Brandão.
3 weeks ago
Hey Itallo.
Thanks for the response. The Created by field says "system". However, I can't find any inbound email action or transform map that could trigger this. Also, I noticed the contact type isn't set to 'Automated Alert' during incident creation; rather, it is updated later by the person working on the incident.
3 weeks ago
Hi Awanish,
Thank you for the update. The fact that contact_type is manual and the user is "system" changes the investigation strategy.
Since you couldn't find the code by searching for "Automated Alert," we need to search for the Content (Short Description) instead.
Here is the definitive way to find exactly what triggered this, without guessing:
Step 1: The "Smoking Gun" (Check Email Logs)
Even if you didn't find an Inbound Action script, we need to confirm whether an email was actually received.
Type sys_email.list in the navigator.
Filter: Type is Received AND Subject contains GuardDuty (or Created on [Today]). A background-script version of this query is sketched after this step.
If you find records:
Open one of the emails that corresponds to a duplicate incident.
Scroll down to the bottom to the Related List named "Email Log" (or look at the "Log" field on the form).
Look for a line that says: "Processed 'Record Action' name: [NAME OF THE SCRIPT]".
Result: This will give you the exact name of the Inbound Action or Flow that processed it.
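If you prefer a script over the list view, a background script along these lines pulls the same information, including which record each email created or updated (the subject filter is the same assumption as above):

```javascript
// Background-script sketch: today's received GuardDuty emails and their target records
var mail = new GlideRecord('sys_email');
mail.addQuery('type', 'received');
mail.addQuery('subject', 'CONTAINS', 'GuardDuty');
mail.addQuery('sys_created_on', '>=', gs.beginningOfToday());
mail.query();
while (mail.next()) {
    gs.info(mail.getValue('subject') + ' -> ' +
            mail.getValue('target_table') + '/' + mail.getValue('instance'));
}
```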
Step 2: Check Event Management (ITOM)
If sys_email is empty, "System" often indicates the Event Management module.
Open one of the duplicate Incidents.
Check if the field "Alert" is populated (you might need to look at the XML or configure the form layout to see it).
If an Alert is linked, the duplication is happening in the Alert Management Rules (em_alert_management_rule), not in the Incident table directly.
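A quick way to check this without touching the form layout is a background script like the one below (em_alert only exists if Event Management is installed; INC0012345 is a placeholder number):

```javascript
// Background-script sketch: check for alerts linked to a duplicate incident
var inc = new GlideRecord('incident');
if (inc.get('number', 'INC0012345')) { // placeholder: use one of your duplicates
    var alert = new GlideRecord('em_alert');
    alert.addQuery('incident', inc.getUniqueValue());
    alert.query();
    gs.info('Alerts linked to ' + inc.getValue('number') + ': ' + alert.getRowCount());
}
```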
Step 3: Check Flow Execution
If neither of the above works:
Navigate to Process Automation > Flow Designer > Executions.
Filter by Created [Today] and Result = Complete.
Look for any flow running as "System" that matches the timestamp of the incident creation.
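To correlate timestamps without clicking through the UI, you can also query the executions directly; this sketch assumes they live in sys_flow_context (the backing table for the Executions list):

```javascript
// Background-script sketch: today's flow executions, newest first
var ctx = new GlideRecord('sys_flow_context');
ctx.addQuery('sys_created_on', '>=', gs.beginningOfToday());
ctx.orderByDesc('sys_created_on');
ctx.query();
while (ctx.next()) {
    gs.info(ctx.getValue('name') + ' | ' + ctx.getValue('state') +
            ' | ' + ctx.getValue('sys_created_on'));
}
```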
My bet: It is likely still an Inbound Action, but one you missed because the script doesn't set the contact_type.
Once you find it via Step 1, you can add the logic to query for existing tickets before inserting a new one.
Let us know what you find in the sys_email table!
Best regards,
Brandão.
