Catalog Tasks Closing as “Closed Incomplete” After Approval
3 weeks ago
Hi community,
The ADAPT integration is implemented as part of the catalog task workflow.
Once a “TPE Central Email Service (CES) Request” is raised and the RITM is approved, the corresponding catalog tasks are created and assigned to the system user “ADAPT”.
This assignment triggers the integration, which:
- Collects the required data from the RITM / catalog task
- Sends the data to AWS
- Receives the response
- Posts the response back to the catalog task
However, after the CES Request is raised and approved, the catalog tasks are being automatically closed as “Closed Incomplete.”
Currently, the backend behavior in ServiceNow is unclear, as the ticket is populated with the following error messages:
“Request has failed to process” with HTTP 400 errors indicating:
“Product code with the specified product type already exists”
Or intermittently:
“Failed to process: Endpoint request timed out.”
Due to these errors, the request is not being processed successfully, and the task is closed without completion.
Please refer to the attached screenshots and advise on the reason for these errors and why the integration is failing.
Additionally, guidance on how to troubleshoot and validate this issue would be appreciated.
Please let me know if you require any more details.
Thanks,
Srinivasu
3 weeks ago
Hi Srinivasu,
From what you describe, there are really two separate things happening.
First, the integration is failing for specific technical reasons: an HTTP 400 duplicate error and occasional timeouts.
Second, your catalog task lifecycle logic is likely treating any integration failure as a terminal outcome and auto-closing the task as Closed Incomplete.
Below is how I would interpret each symptom and how to prove it in ServiceNow.
Why the tasks are closing as Closed Incomplete
In ServiceNow, catalog tasks typically end up Closed Incomplete when a workflow or Flow Designer path decides the request cannot be fulfilled, or when an activity that is considered mandatory returns an error and the process handles it by closing tasks rather than leaving them open.
What to check to confirm the closing mechanism
- Open one of the affected catalog tasks and check the Activity or Stage history and the Audit History. Look for the update that changes State to Closed Incomplete, and note the Updated by user and the exact timestamp (a query sketch follows this list).
- At the same timestamp, check what actually performed the update. Common causes are a workflow activity, a Flow Designer action, a Business Rule, or a Script Action.
- Review these areas, in this order:
  - Flow Designer execution details for the record, especially any failed action output or error branch.
  - Workflow context, if you are still using the classic workflow engine for that catalog item.
  - Business Rules on sc_task, task, or your specific catalog task table that run on insert or update and set the state based on an error flag.
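If the Audit History alone does not make it obvious, a quick background script over sys_audit lists every state change on one affected task, including who or what made it. This is only a sketch; replace the placeholder sys_id with a real task, and note that the numeric value for Closed Incomplete on sc_task is typically 4 but can vary by instance.

```javascript
// Background script (sketch): list state changes on one affected catalog task.
// Replace the placeholder below with the sys_id of an affected sc_task record.
var taskSysId = 'replace_with_task_sys_id';

var audit = new GlideRecord('sys_audit');
audit.addQuery('tablename', 'sc_task');
audit.addQuery('documentkey', taskSysId);
audit.addQuery('fieldname', 'state');
audit.orderBy('sys_created_on');
audit.query();

while (audit.next()) {
    // "user" shows which user or integration account made the change;
    // the timestamp lets you line it up with a Flow, Business Rule, or Workflow run.
    gs.info('State ' + audit.getValue('oldvalue') + ' -> ' + audit.getValue('newvalue') +
            ' by ' + audit.getValue('user') + ' at ' + audit.getValue('sys_created_on'));
}
```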
The key point is this: the integration error does not close the tasks by itself.
Something in your orchestration is deciding that an error means Closed Incomplete.
Meaning of the HTTP 400 “Product code with the specified product type already exists”
This is almost always an idempotency or duplicate creation problem.
Typical root causes:
- The request is being sent more than once for the same RITM or the same catalog task. For example, repeated updates to assignment_group or assigned_to keep re-triggering the outbound call.
- The AWS endpoint expects a create on the first call and an update on subsequent calls, but ServiceNow always calls the create operation.
- The payload contains a product code that must be unique per product type, and your mapping sends the same product code for different requests. For example, it uses a static value or a non-unique field, or truncates a value so it collides.
- The remote system already has the product code from a previous attempt, and you are retrying without a safe upsert pattern.
How to validate quickly
A. Prove whether you are sending duplicates
Check outbound logs and count how many times the same task triggers the outbound request.
If you see multiple sends for one task, fix the trigger condition first.
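If each outbound call is logged somewhere, counting sends per task takes one aggregate query. The sketch below assumes a custom log table named u_adapt_outbound_log with a u_task reference to the catalog task; both names are placeholders for whatever your integration actually writes to.

```javascript
// Sketch only: u_adapt_outbound_log and u_task are placeholder names for
// whatever table your integration logs its outbound calls to.
var agg = new GlideAggregate('u_adapt_outbound_log');
agg.addAggregate('COUNT');
agg.groupBy('u_task');
agg.query();

while (agg.next()) {
    var count = parseInt(agg.getAggregate('COUNT'), 10);
    if (count > 1) {
        // More than one send for the same task points to a duplicate trigger,
        // which lines up with the "already exists" 400 from AWS.
        gs.info('Task ' + agg.getValue('u_task') + ' was sent ' + count + ' times');
    }
}
```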
B. Compare payloads
Take two failing examples and compare the exact product code and product type being sent.
If they match, then you are trying to create a duplicate.
If they differ, then your mapping may be wrong and producing collisions.
C. Confirm expected API behavior with AWS team
Ask whether the endpoint is create only, or supports idempotency keys, or supports upsert.
If they have an idempotency header or request id, you should pass a stable correlation id such as the RITM sys_id plus task sys_id.
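If the AWS team confirms the endpoint honours an idempotency or request-id header, the call can pass a stable key built from the RITM and task, so a retry cannot create a second product. The header name, endpoint, and payload fields in this sketch are assumptions; substitute whatever the AWS team actually supports.

```javascript
// Sketch of a direct RESTMessageV2 call that sends a stable, per-task request id.
// Endpoint URL, header name, and the u_* payload fields are placeholders.
var task = new GlideRecord('sc_task');
if (task.get('replace_with_task_sys_id')) {
    // The same task always yields the same id, so retries can be de-duplicated remotely.
    var requestId = task.getValue('request_item') + '_' + task.getUniqueValue();

    var rm = new sn_ws.RESTMessageV2();
    rm.setEndpoint('https://example-adapt.aws.endpoint/ces/product'); // placeholder
    rm.setHttpMethod('post');
    rm.setRequestHeader('Content-Type', 'application/json');
    rm.setRequestHeader('X-Request-Id', requestId); // assumed header name - confirm with AWS
    rm.setRequestBody(JSON.stringify({
        productCode: task.getValue('u_product_code'), // placeholder field
        productType: task.getValue('u_product_type')  // placeholder field
    }));

    var resp = rm.execute();
    gs.info('ADAPT call returned ' + resp.getStatusCode() + ': ' + resp.getBody());
}
```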
Meaning of “Endpoint request timed out”
This is usually one of the following:
- The AWS endpoint is slow or doing long running processing synchronously.
- Network path issues, proxy, DNS, firewall, or transient connectivity.
- ServiceNow REST timeout settings are too low for that call.
- AWS throttling or rate limiting leading to delayed responses, which show up as timeouts.
- Large payloads or TLS handshake issues that become intermittent.
How to validate quickly
- Check if timeouts cluster around busy hours.
- Check if retries are happening automatically.
- Check the exact timeout value configured in the REST Message or IntegrationHub action.
If the remote side needs more time, the best pattern is asynchronous.
ServiceNow sends the request, remote returns immediately with an acknowledgement and a correlation id, then ServiceNow polls or receives a callback.
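Until that asynchronous pattern is in place, at least confirm and, if necessary, raise the timeout on the call itself. A rough sketch, assuming the integration uses RESTMessageV2 (the message and method names are placeholders): setHttpTimeout takes milliseconds, and executeAsync with waitForResponse avoids blocking the current transaction.

```javascript
// Sketch: raise the HTTP timeout and use the async execute pattern.
// "TPE CES Request" / "post" are placeholder REST Message and method names.
var rm = new sn_ws.RESTMessageV2('TPE CES Request', 'post');
rm.setHttpTimeout(60000);      // 60 seconds, in milliseconds

var resp = rm.executeAsync();  // send without blocking the current transaction
resp.waitForResponse(60);      // wait up to 60 seconds for the reply

var status = resp.getStatusCode();
if (resp.haveError() || status >= 400) {
    // Do not close the task here - record the failure and leave it open,
    // as described in the checklist below.
    gs.error('ADAPT call failed with status ' + status + ': ' + resp.getErrorMessage());
}
```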
Practical troubleshooting checklist in ServiceNow
- Identify the exact integration entry point. Is it a Business Rule, Flow Designer action, Script Include, or Workflow activity that runs when assigned_to becomes ADAPT?
- Confirm trigger conditions are safe. Only trigger when the state is a specific value and a dedicated flag indicates the task has not been sent yet. For example, u_sent_to_adapt is false, and you set it to true before sending. This prevents repeated sends when the task is updated for other reasons (a sketch follows this checklist).
- Capture the correlation id and payload. Write the outbound request payload, response body, status code, and correlation id back to fields on the catalog task. You already post responses back, but make sure you also persist the request that was sent, not only the response.
- Turn on the right logs for one test case. Enable REST or integration logging for a short window, then reproduce with a single RITM. Collect the request headers, endpoint, payload, response code, and response body.
- Trace why the task closes. On the catalog task, look for the update that sets it to Closed Incomplete, then locate the exact Flow or Business Rule that did it and read its condition. Often it is something like “if the response indicates failure, then close incomplete”.
- Change the failure handling behavior. Instead of closing the task, set it to a pending state, populate a clear error field, and assign it to an integration support group. This gives operations a chance to remediate without losing the task.
- Validate with a direct call. Take the same payload and call the AWS endpoint from a REST client. If it still returns “already exists”, the issue is in the request content or the remote data state, not ServiceNow transport.
- Validate the uniqueness rule for the product code. Document exactly how the product code is derived. If it must be unique, derive it from a unique key such as the RITM number plus a sequence, or use a guaranteed unique identifier.
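A minimal sketch of the one-time trigger guard and payload capture from the checklist: a before-update Business Rule on sc_task whose condition only matches when the task is assigned to ADAPT and has not yet been sent. u_sent_to_adapt and u_adapt_request_payload are assumed custom fields, and buildAdaptPayload / sendToAdapt stand in for whatever your existing integration call is.

```javascript
// Before-update Business Rule on sc_task (sketch).
// Condition, built in the rule's condition builder rather than in script:
//   Assigned to  changes to  ADAPT   AND   Sent to ADAPT  is  false
(function executeRule(current, previous /*null when async*/) {

    // Flip the flag before calling out. Because this is a before rule, the flag
    // is saved with the same update, so later updates to the task no longer
    // match the condition and cannot send the request a second time.
    current.setValue('u_sent_to_adapt', true);

    // Persist the exact payload being sent, not only the response,
    // so failed calls can be compared and replayed later.
    var payload = buildAdaptPayload(current);              // placeholder helper
    current.setValue('u_adapt_request_payload', payload);  // assumed custom field

    sendToAdapt(current, payload); // placeholder for the existing integration call

})(current, previous);
```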
What I would fix first, based on your errors
- Stop duplicate sends. Add a dedicated one-time trigger flag and ensure that assignment to ADAPT does not retrigger the call on every update.
- Make the request idempotent. Send a stable request id or correlation id to AWS if the endpoint supports it. If not, implement your own logic: if a task has already been sent successfully or partially created, call an update operation instead of create.
- Adjust the process so failures do not auto-close the task. Closed Incomplete should be a deliberate human decision, or a final decision after controlled retries (see the sketch below).
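For that last point, the failure branch could look roughly like this instead of closing the task. The state value, the u_integration_error field, and the assignment group are illustrative assumptions.

```javascript
// Sketch of a failure branch that leaves the task open instead of closing it.
// State value, u_integration_error field, and the group sys_id are placeholders.
function handleAdaptFailure(task, statusCode, responseBody) {
    // Keep the task in an open, pending state rather than Closed Incomplete.
    task.setValue('state', 2); // e.g. Work in Progress - adjust to your state model

    // Make the failure visible on the record itself.
    task.setValue('u_integration_error', 'HTTP ' + statusCode + ': ' + responseBody); // assumed field
    task.work_notes = 'ADAPT integration failed; task left open for remediation.';

    // Route to a team that can correct the data or retry, instead of auto-closing.
    task.setValue('assignment_group', 'sys_id_of_integration_support_group'); // placeholder
    task.update();
}
```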
Let me know if this helps you.
Best regards
Bruno
