Discovery Schedule for Data Center Devices Not Completing After Yokohama Upgrade
05-28-2025 01:45 AM
Hi everyone,
We recently upgraded to the Yokohama release and have encountered an issue with one of our Discovery Schedules targeting data center devices. The schedule starts as expected and the IP ranges are correctly configured, but the discovery process cancels near the end of the run.
Upon reviewing the logs and device statuses, we noticed that the devices which do not complete have their current activity stuck at "Updating CI". This behavior is new since the upgrade, and we haven't made any changes to the probes or patterns involved.
Has anyone else experienced similar behavior after upgrading to Yokohama? Any insights or suggestions would be greatly appreciated.
Thanks in advance!
Labels: Data Health Tools
07-24-2025 01:37 AM
Hi Kabelo
We have a similar issue. Did you find a solution?
Regards
Adri
07-24-2025 03:48 AM
Hi @KabeloMohale, I think you need to raise a case with ServiceNow Support for this; they may be able to help you with that part.
If my response helps you, please mark it as helpful and accept it as the solution.
07-24-2025 07:19 AM
Hi @KabeloMohale,
Here is my understanding of what's happening:
After upgrading to Yokohama, there have been reports (including known issues) of Discovery Schedules cancelling or hanging during the final reconciliation step. This is often due to:
* Tightened validation or changes in Identification and Reconciliation Engine (IRE)
* Changes in transform logic when Discovery tries to update CMDB CIs
* Updated CMDB rules that can cause unexpected validation failures or errors
* Potential conflicts with data integrity or stale identification rules
Recommended steps to troubleshoot and resolve:
1. Check the Discovery logs & CMDB logs
* Go to Discovery Log (Discovered devices > Discovery Log)
* Filter for error, warning, or cancel messages around the time of the stuck activity.
* Also check:
* cmdb_ire_error table (any failed identification or reconciliation errors)
* syslog table for recent errors related to Discovery or CMDB updates (see the query sketch after this list)
2. Review your Identification Rules and Data Integrity
* In Yokohama, stricter rules may cause CIs to fail reconciliation if:
* Required identifying attributes are missing
* Duplicate entries exist
* Go to CI Class Manager > Identification Rules for the affected classes (e.g., network gear, servers) and confirm that the rules are still valid and that the required fields are populated.
3. Validate affected CIs
For CIs that get stuck on “Updating CI”:
* Open them in the CMDB and check:
* Are required fields missing?
* Are there duplicate CIs conflicting?
* Check the Reconciliation Definition and make sure no unexpected changes happened post-upgrade.
4. Check Discovery Patterns and Probes
* Ensure your patterns/probes haven't been impacted by the upgrade:
* Go to Discovery Patterns > run test pattern on an affected CI
* Verify that all steps complete and that the data being returned is valid
* Re-publish patterns if needed
5. Review ECC Queue and Discovery Schedule
* Check the ECC Queue for stuck or errored records, especially around the time of the discovery run (also covered in the sketch after this list).
* Make sure MID Servers are healthy and properly assigned.
6. Test with a smaller scope
* Clone the discovery schedule and reduce IP range to just 2–3 devices that consistently fail.
* Observe if they still get stuck.
7. If you find IRE errors:
* Consider cleaning up duplicate CIs
* Validate CMDB health and resolve completeness/correctness issues
* Adjust Identification Rules to better match your data
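If it helps, below is a minimal sketch (Python with the requests library, against the standard Table API) for pulling the most recent cmdb_ire_error records and ecc_queue records in error state, as referenced in steps 1 and 5. The instance URL, credentials, and field lists are placeholders and assumptions rather than a definitive implementation; adjust them to your environment and make sure your integration user can read those tables.

```python
"""Quick triage sketch: list recent IRE errors and errored ECC Queue records
through the ServiceNow Table API. Instance URL and credentials are placeholders."""
import requests

INSTANCE = "https://yourinstance.service-now.com"  # placeholder
AUTH = ("api.user", "api.password")                # placeholder credentials


def recent(table, query, fields, limit=25):
    """Return up to `limit` records from `table` matching the encoded `query`."""
    resp = requests.get(
        f"{INSTANCE}/api/now/table/{table}",
        params={
            "sysparm_query": query,
            "sysparm_fields": fields,
            "sysparm_limit": limit,
        },
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]


# Most recent IRE errors (table name as referenced in step 1 above).
ire_errors = recent("cmdb_ire_error", "ORDERBYDESCsys_created_on", "sys_created_on,sys_id")
print(f"Recent cmdb_ire_error records: {len(ire_errors)}")
for rec in ire_errors[:5]:
    print("IRE error at", rec["sys_created_on"])

# ECC Queue records that ended in an error state, newest first (step 5).
ecc_errors = recent(
    "ecc_queue",
    "state=error^ORDERBYDESCsys_created_on",
    "sys_created_on,agent,topic,name,source",
)
for rec in ecc_errors:
    print(rec["sys_created_on"], rec["agent"], rec["topic"], rec["source"])
```

Comparing the timestamps of these records against the Discovery run window should help narrow down whether the problem sits with IRE or with the MID Server/probe side.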
Additional: ServiceNow Known Issue / Patch
* Review ServiceNow release notes and known issues for Yokohama:
* There have been hot fixes and patches specifically for discovery schedules cancelling unexpectedly or getting stuck in "Updating CI."
* Open an HI case with ServiceNow Support if the issue persists; they can identify whether your case matches a known defect.
Summary:
* Check cmdb_ire_error and Discovery Logs to see why CIs get stuck.
* Validate and adjust Identification Rules for the affected CI classes.
* Review Discovery Patterns/Probes and re-test.
* Reduce scope to reproduce the issue on a smaller scale.
* Watch for known issues from ServiceNow for Yokohama (patches might be available).
Please appreciate the efforts of community contributors by marking the appropriate response as Helpful or accepting it as the Solution; this may help other community users find the correct solution in the future.
Thank You
AJ - TechTrek with AJ
LinkedIn:- https://www.linkedin.com/in/ajay-kumar-66a91385/
YouTube:- https://www.youtube.com/@learnitomwithaj
ServiceNow Community MVP 2025
07-28-2025 06:14 AM
Hi Kabelo,
We had a similar experience after the Yokohama upgrade and eventually figured out that our MID Server configurations had reverted to the default settings ("threads.max" value="25").
We changed our MID Server thread count back to what it was before the upgrade, and the scheduled scans now run to completion (a quick way to check the current value across MID Servers is sketched below).
We also used credential aliases in the scheduled scans, and that seems to make the scans even faster.
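For anyone who wants to double-check the same thing across several MID Servers, here is a minimal sketch (Python, Table API) that lists the threads.max parameter per MID Server. It assumes the parameter overrides are stored in the ecc_agent_config_parameter table (worth verifying on your instance); values set only in the MID Server's config.xml on the host will not show up there. The instance URL, credentials, and field names are placeholders.

```python
"""Sketch: list the threads.max value recorded for each MID Server.
Assumes MID Server parameters live in ecc_agent_config_parameter."""
import requests

INSTANCE = "https://yourinstance.service-now.com"  # placeholder
AUTH = ("api.user", "api.password")                # placeholder credentials

resp = requests.get(
    f"{INSTANCE}/api/now/table/ecc_agent_config_parameter",
    params={
        "sysparm_query": "name=threads.max",
        "sysparm_display_value": "true",  # show MID Server names instead of sys_ids
    },
    auth=AUTH,
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

for rec in resp.json()["result"]:
    # Field names are assumptions; print whatever your instance returns.
    print(rec.get("agent"), ":", rec.get("name"), "=", rec.get("value"))
```

If the values came back at the default after the upgrade, compare them with what you had documented pre-upgrade before raising them again.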