Expert Feedback Loop Issue
06-21-2023 08:05 AM - edited 06-21-2023 08:06 AM
Hello all,
Someone decided to write a novel for an utterance that the Expert Feedback Loop picked up. We had it marked as "not sure" and accepted some other items. Afterwards, when I attempt to train the model to complete the process, I get:
Exception caught in submitTrainingJob method: Synchronous training failed - reason: {"status":"failure","response":{"messages":[{"type":"ERROR","message":"Enter utterances that only contain less than 200 characters and less than 25 words.","messageKey":"Enter utterances that only contain less than {0} characters and less than {1} words.","replacements":["200","25"],"
I updated the item in ml_solution, open_nlu_predict_intent_feedback, open_nlu_predict_feedback, open_nlu_predict_entry_feedback, open_nlu_predict_log, and ml_label_candidate.
Until I updated ml_label_candidate, the feedback loop would only show the invalid version. At that point I attempted to update the item in EFL and matched it to a known topic. This still did not resolve the issue, and I'm still getting the "Enter utterances that only contain less than 200 characters and less than 25 words." error. Any suggestions?
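For anyone else hitting this, here is a quick sketch of the limits the trainer appears to enforce, per the error message (200 characters, 25 words). The helper name is my own, and treating whitespace runs as word separators is an assumption about how the platform counts words:

```javascript
// Hypothetical pre-check mirroring the limits reported in the
// training error: fewer than 200 characters AND fewer than 25 words.
// Word counting via whitespace splitting is an assumption.
function isValidUtterance(utterance, maxChars, maxWords) {
  maxChars = maxChars || 200;
  maxWords = maxWords || 25;
  var trimmed = utterance.trim();
  var wordCount = trimmed === '' ? 0 : trimmed.split(/\s+/).length;
  return trimmed.length < maxChars && wordCount < maxWords;
}
```

Running candidate utterances through something like this before training would flag the "novel" before the training job fails.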
Thanks,
John

06-21-2023 11:54 AM
Ugh, I ran into this issue a few weeks ago, floundered my way around it, and eventually found the solution, but I don't recall what it was T-T
I definitely recall bouncing around all the NLU tables like you noted, and I'm pretty sure the solution was either deleting a record or deleting part of a field within a record - but I don't recall which one, sorry 😞
This is absolutely a major defect of Expert Feedback Loop and ServiceNow needs to put in controls to prevent EFL from including utterances that are too long if the system clearly can't handle them.
06-21-2023 12:04 PM
Thank you for at least confirming I am not the only one and that I'm on the right track toward resolving this. I agree with you 100% that this is a defect, so I'm looping in @Victor Chen and will also create a HIWAVE case so this can be addressed further; in the meantime, we can get a solution posted here for others until the defect is fixed.
So, just for the record, here are the utterance counts it accepted lol

06-23-2023 05:33 AM - edited 06-23-2023 05:35 AM
Did you check the [ml_labeled_data] and [ml_label_user_feedback] tables for the utterance with > 200 characters? Also, this sounds like a defect if the Expert Feedback Loop retrieved an invalid utterance of > 200 characters to provide feedback on. Let me know if this helps. Regards, Brian
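To make that check concrete, here is a sketch of the filtering logic over rows pulled from one of those tables (e.g. via a background script or the Table API). The `sys_id` and `utterance` field names are assumptions - verify the actual column name on your instance before relying on this:

```javascript
// Sketch: given rows exported from a table such as ml_labeled_data,
// flag the ones that break the 200-character / 25-word training
// limits. The 'utterance' property name is an assumption.
function findOverLimitRows(rows, maxChars, maxWords) {
  maxChars = maxChars || 200;
  maxWords = maxWords || 25;
  return rows.filter(function (row) {
    var text = (row.utterance || '').trim();
    var words = text === '' ? 0 : text.split(/\s+/).length;
    // Flag rows at or over either limit.
    return text.length >= maxChars || words >= maxWords;
  });
}
```

The flagged rows' sys_ids would then tell you exactly which records still hold the over-length utterance.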
06-23-2023 07:11 AM
Hello @Brian Bakker ,
@Chris D and I both agree with you that this is a defect and should never happen. That's why I've submitted a HIWAVE ticket, so it can be labeled as a known issue and an update/hotfix can be provided. It's also why I tagged Victor in this post.
As to your question, yes, both of those tables were updated as well but were apparently left off my list above. I was going through so many tables that night trying to figure out where the root record is.
Thanks,
John