Utterances are triggering incorrect topics

RenatoMendoza
Tera Contributor

Hi all,

 

Context: Utterances are triggering incorrect topics when end users interact with the virtual agent.

Actions Taken:

  1. Submitted feedback via the NLU Workbench test panel, marking the utterance as not relevant to any model.
  2. Verified the feedback is present in the ml_labeled_data table.
  3. Added the utterance to the test set in the build and test module, selecting "not relevant" for the model.
  4. Reviewed all training utterances and found none similar.
  5. Trained the model, ran the batch test, and published the model, but the issue persists.
  6. Adjusted the confidence threshold.

Environment: Xanadu release.
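For intuition, the confidence-threshold adjustment in step 6 behaves roughly like the sketch below. This is illustrative Python only, not ServiceNow code; the topic names, scores, and the 0.75 default are made-up assumptions:

```python
def route_utterance(predictions, threshold=0.75):
    """Return the top-scoring topic only if its confidence clears the threshold.

    predictions: dict mapping topic name -> model confidence (0..1).
    Anything below the threshold falls through to 'not_relevant',
    which is how raising the threshold suppresses false positives
    (at the cost of more utterances going unmatched).
    """
    if not predictions:
        return "not_relevant"
    topic, score = max(predictions.items(), key=lambda kv: kv[1])
    return topic if score >= threshold else "not_relevant"

# A weak match is suppressed instead of triggering the wrong topic:
print(route_utterance({"reset_password": 0.62, "order_laptop": 0.30}))  # not_relevant
# A confident match still routes normally:
print(route_utterance({"reset_password": 0.91}))  # reset_password
```

The trade-off to watch: a threshold high enough to block the false positive may also start rejecting legitimate utterances, so re-run the batch test after each adjustment.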

Question: What additional steps can I take to prevent this issue? I am encountering false positives that are affecting the model's accuracy.

Any guidance would be greatly appreciated.

 

1 Reply

Lynda1
Kilo Sage

When this happens to me, after going through the steps you have already taken, I go to the correct intent, add the utterance as a training example there, then train and publish the NLU model to force it to behave the way I need.