Expert Feedback Loop Issue

johndoh
Mega Sage

Hello all,

 

Someone decided to write a novel for an utterance that the Expert Feedback Loop picked up. We marked it as "Not sure" and accepted some other items. Afterwards, when I attempt to train the model to complete the process, I get:

Exception caught in submitTrainingJob method: Synchronous training failed - reason: {"status":"failure","response":{"messages":[{"type":"ERROR","message":"Enter utterances that only contain less than 200 characters and less than 25 words.","messageKey":"Enter utterances that only contain less than {0} characters and less than {1} words.","replacements":["200","25"],"
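For anyone searching later: the two limits in that error are 200 characters and 25 words. A quick background script like the rough sketch below (paste the suspect text over the placeholder) confirms which limit an utterance actually breaks:

function violatesUtteranceLimits(utterance) {
    var charCount = utterance.length;
    var wordCount = utterance.trim().split(/\s+/).length;
    // The error says "less than 200 characters and less than 25 words",
    // so anything at or over either limit gets rejected.
    return charCount >= 200 || wordCount >= 25;
}

var suspect = 'paste the long utterance here';
gs.info('Breaks the limits: ' + violatesUtteranceLimits(suspect));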

[screenshot: johndoh_0-1687359016098.png]

I updated the item in ml_solution, open_nlu_predict_intent_feedback, open_nlu_predict_feedback, open_nlu_predict_entry_feedback, open_nlu_predict_log, and ml_label_candidate. 
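In case it helps anyone retracing this, that table-by-table hunt can be scripted as a background script along these lines. Big caveat: the 'utterance' field name is a guess and probably differs per table (ml_solution likely doesn't store the raw text at all), so confirm the column names in each table's dictionary and run this in a sub-production instance first:

var tables = [
    'ml_solution',
    'open_nlu_predict_intent_feedback',
    'open_nlu_predict_feedback',
    'open_nlu_predict_entry_feedback',
    'open_nlu_predict_log',
    'ml_label_candidate'
];

for (var i = 0; i < tables.length; i++) {
    var gr = new GlideRecord(tables[i]);
    if (!gr.isValid())
        continue; // skip if the table name is wrong on this instance
    gr.query();
    while (gr.next()) {
        var text = gr.getValue('utterance') || ''; // assumed field name
        if (text.length >= 200 || text.trim().split(/\s+/).length >= 25) {
            gs.info(tables[i] + ': ' + gr.getUniqueValue() + ' (' + text.length + ' chars)');
        }
    }
}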

 

Until I updated ml_label_candidate, the feedback loop would only show the invalid version. At that point I updated the item in EFL and matched it to a known topic. That still did not resolve the issue, and I am still getting the "Enter utterances that only contain less than 200 characters and less than 25 words" error. Any suggestions?

 

Thanks, John


Chris D
Mega Sage

Ugh, I ran into this issue a few weeks ago, floundered my way around it, and eventually found the solution, but I don't recall what it was T-T

I definitely recall bouncing around to all the NLU tables like you noted and I'm pretty sure the solution was either deleting a record or deleting (part of a field) within a record - but I don't recall which one, sorry 😞
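If it does turn out to be one of those two fixes, a background-script sketch like this is probably the shape of it. The table and field names just mirror the ones John listed and are not confirmed, so note the sys_id first and test in sub-production before deleting anything:

var candidateSysId = 'sys_id_of_the_over_long_record'; // placeholder

var gr = new GlideRecord('ml_label_candidate');
if (gr.get(candidateSysId)) {
    gs.info('Candidate found: ' + gr.getUniqueValue());
    // Option 1: remove the record entirely
    gr.deleteRecord();
    // Option 2 (instead of deleting): shorten the assumed 'utterance' field
    // gr.setValue('utterance', 'shortened version of the text');
    // gr.update();
}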

 

This is absolutely a major defect of Expert Feedback Loop and ServiceNow needs to put in controls to prevent EFL from including utterances that are too long if the system clearly can't handle them.

Thank you for at least confirming that I am not the only one and that I am on the right track for resolving it. I agree with you 100% that this is a defect, so I am looping in @Victor Chen and will also be creating a HIWAVE case so this can be further addressed, and we can post a solution here for others until the defect is fixed.

 

So just for the record, here are the utterance counts it accepted, lol

 

[screenshot: johndoh_0-1687374250134.png]

 

Brian Bakker
ServiceNow Employee

@johndoh 

Did you check the [ml_labeled_data] and [ml_label_user_feedback] tables for the utterance with > 200 characters? Also, this sounds like a defect if the Expert Feedback Loop retrieved an invalid utterance of > 200 characters to provide feedback on. Let me know if this helps. Regards, Brian

Hello @Brian Bakker ,

 

@Chris D and I both agree with you that this is a defect and should never happen. That is why I have submitted a HIWAVE ticket, so it can be logged as a known issue and an update/hotfix provided. It is also why I tagged Victor in this post.

 

As to your question, yes, both of those tables were updated as well; I apparently left them off the list above. I was going through so many tables that night trying to figure out where the root record lives.
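For anyone else retracing this later, those two tables can be swept the same way as the earlier sketch, with the same caveat that the 'utterance' field name is only an assumption:

var extraTables = ['ml_labeled_data', 'ml_label_user_feedback'];

for (var i = 0; i < extraTables.length; i++) {
    var gr = new GlideRecord(extraTables[i]);
    if (!gr.isValid())
        continue;
    gr.query();
    while (gr.next()) {
        var text = gr.getValue('utterance') || ''; // assumed field name
        if (text.length >= 200) {
            gs.info(extraTables[i] + ': ' + gr.getUniqueValue() + ' (' + text.length + ' chars)');
        }
    }
}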

 

Thanks,

John