Discovery Admin Workspace: how to resolve errors

SotaT
Tera Contributor

I have a question about the list of errors displayed in Discovery Admin Workspace > Diagnostics > Diagnostics. What are the best practices for debugging these errors? Please tell me what to check, how to resolve them, and how to manage them operationally.

(Two screenshots of the Diagnostics list were attached.)

 


AJ-TechTrek
Giga Sage

Hi @SotaT ,

 

Based on my understanding, please go through the points below; you should be able to find the probable solution.

 

1. Understand what this Diagnostics list shows
The Diagnostics list under Discovery Admin Workspace → Diagnostics → Diagnostics shows:
* Discovery probe and pattern errors
* MID Server issues (connection, credentials, connectivity)
* Classification and Identification failures
* General discovery runtime errors
These entries come from the discovery log and queue tables (such as discovery_log and ecc_queue).
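Because the entries trace back to ordinary tables, you can also pull them programmatically through the Table API for offline triage. A minimal sketch, assuming a hypothetical instance name and Discovery run sys_id; the field names are typical for discovery_log but should be verified on your instance:

```python
from urllib.parse import urlencode

def discovery_log_url(instance, status_sys_id, limit=50):
    """Build a Table API URL for the error-level discovery_log rows of one
    Discovery run (status_sys_id = sys_id of the discovery_status record).
    Field names below are typical; verify them on your own instance."""
    query = f"level=Error^status={status_sys_id}^ORDERBYDESCsys_created_on"
    params = urlencode({
        "sysparm_query": query,
        "sysparm_fields": "source,message,cmdb_ci",
        "sysparm_limit": limit,
    })
    return f"https://{instance}.service-now.com/api/now/table/discovery_log?{params}"
```

You would then issue an authenticated GET against that URL (for example with the requests library) and inspect the returned rows.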


2. Steps to debug


Step 1: Identify the type of error
Read the Error Type and Message columns.
Common categories:
* Probe failed / Pattern failed
* Classification failed
* Identification failed
* MID server errors (unavailable, down, credential errors)
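As a rough illustration of this first triage step, exported error messages can be bucketed by keyword. The keyword lists below are my own illustration, not an official ServiceNow taxonomy:

```python
# Rough triage helper: bucket Diagnostics messages into the common
# categories listed above. Keyword lists are illustrative only.
CATEGORIES = [
    ("credential", ["authentication", "login failed", "credential"]),
    ("connectivity", ["timed out", "refused", "unreachable", "no route"]),
    ("classification", ["classif"]),
    ("identification", ["identif", "duplicate", "required attribute"]),
    ("pattern", ["pattern", "probe"]),
]

def triage(message):
    """Return the first category whose keywords appear in the message."""
    m = message.lower()
    for label, needles in CATEGORIES:
        if any(n in m for n in needles):
            return label
    return "other"
```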

 

Step 2: Drill down to raw logs
Click on the error entry → it usually links to:
* discovery_log (raw discovery log)
* ecc_queue messages (input/output)
This helps you see:
* Which probe/pattern failed
* Which target CI or IP it was working on
* Detailed stack trace or SNMP/SSH/WinRM command error
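The same drill-down can be done with an encoded query against ecc_queue. A sketch, assuming hypothetical instance/MID Server names; the agent field follows the real `mid.server.<name>` convention:

```python
from urllib.parse import urlencode

def ecc_queue_url(instance, mid_server, target_ip):
    """Table API URL for the ecc_queue traffic of one MID Server against
    one target IP. queue=output is instance -> MID; queue=input is the
    MID Server's response (where stack traces and command errors land)."""
    query = (
        f"agent=mid.server.{mid_server}"
        f"^source={target_ip}"
        "^ORDERBYDESCsys_created_on"
    )
    params = urlencode({
        "sysparm_query": query,
        "sysparm_fields": "topic,name,queue,state,error_string",
    })
    return f"https://{instance}.service-now.com/api/now/table/ecc_queue?{params}"
```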

 

Step 3: Validate environment issues
* Check network connectivity: MID Server ↔ target
* Validate firewall rules and ports
* Confirm credentials (use “Test Credentials” feature in Discovery Credentials)
* Check that the MID Server is up, has enough resources, and that MID Server validation passes
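A quick way to test the MID Server → target path from the MID host is a plain TCP connect to the Discovery ports (SSH 22, WMI 135, WinRM 5985/5986). Note that SNMP on 161 is UDP, so a TCP check does not cover it:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within
    the timeout; run this from the MID Server host against the target."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```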

 

Step 4: Use built-in tools
From Discovery Admin Workspace:
* Classifier test tool: confirm classification logic matches the target device
* Pattern Debugger: run the pattern manually against the target IP and walk through each step
* SNMP/WMI/SSH Command Tester: confirm MID can execute commands

 

Step 5: Check IRE (Identification & Reconciliation Engine)
If the error is in the identification phase, go to:
* Identification Log table
* See if there is a mismatch (e.g., missing identifiers, wrong serial number)
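For context, IRE consumes payloads shaped roughly like the sketch below: identification fails when the "values" block is missing the attributes your identification rules require. The CI class and values here are hypothetical placeholders:

```python
import json

# Hypothetical IRE input payload: one item per CI, with "values" carrying
# the attributes the identification rules match on (e.g. serial_number,
# name). If those identifiers are missing or wrong, the Identification Log
# records the mismatch described in Step 5.
payload = {
    "items": [{
        "className": "cmdb_ci_linux_server",   # hypothetical CI class
        "values": {
            "name": "web01",                   # placeholder values
            "serial_number": "SN-0001",
            "ip_address": "10.0.0.5",
        },
    }]
}
print(json.dumps(payload, indent=2))
```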

 

Step 6: For probe/pattern errors
* Open the pattern in Discovery Patterns → Patterns
* Use “Debug” mode with the IP to see exactly where the failure occurs
* For probe errors, check if the target OS or device actually responds to the probe

 

3. After you fix an issue
* Mark the diagnostic error as Resolved (you can add a note)
* Rerun discovery to validate the fix

 

4. Best practices for ongoing management


A. Regularly monitor Diagnostics → prioritize by frequency & criticality


B. Use tagging & filtering to focus on production vs. test environments


C. Document frequent issues & solutions internally


D. Regularly validate MID Server capacity and patch level


E. For recurring credential failures: review rotation policies and use External Credential Storage (like CyberArk)
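Practice A can be sketched as a simple frequency-and-criticality sort over exported Diagnostics rows. The (category, severity) row shape below is my own assumption, not a ServiceNow export format:

```python
from collections import Counter

def prioritize(rows):
    """rows: iterable of (category, severity) tuples, severity 1 = most
    critical. Returns buckets sorted by severity first, then by how often
    the error occurs, so the worst and most frequent issues surface first."""
    counts = Counter(rows)
    return sorted(counts.items(), key=lambda kv: (kv[0][1], -kv[1]))
```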

 

Please appreciate the efforts of community contributors by marking the appropriate response as Helpful or accepting it as the solution; this may help other community users find the correct solution in the future.
 

Thank You
AJ - TechTrek with AJ - ITOM Trainer
LinkedIn:- https://www.linkedin.com/in/ajay-kumar-66a91385/
YouTube:- https://www.youtube.com/@learnitomwithaj
Topmate:- https://topmate.io/aj_techtrekwithaj (Connect for 1-1 Session)
ServiceNow Community MVP 2025
