Logan Poynter
Mega Sage

When it comes to ServiceNow, there’s very little you can’t do to meet a business need - sometimes getting there just takes a bit of thought. Let me set the background that led to the creation of this article.

While working through a Knowledge Management workshop, the topic of article comments and feedback came up (as it naturally does). After talking for a bit, a stakeholder asked, “How does ServiceNow handle profanity? If we have public articles, how can we ensure that someone isn’t acting out and creating a bad representation of {company}?”

This is an excellent question, but I didn’t have an answer when it was asked. After some Googling, I found that this capability is offered for Agent Chat through a store application, but nothing exists for Knowledge! Why?!

 

Note: Before continuing, ensure you have the required dependencies available to follow along! 

  • Agent Chat Profanity Filter Application (Store link: sn_va_profanity)
    This creates the ML Classification solution that determines whether a message contains profanity; we leverage it in our business rule.
    Dependencies:
    • I18N: Internationalization
    • Glide Conversation Server
    • Agent Chat

 

Peeling Back The Layers

What makes the Agent Chat Profanity Filtering work? Let’s check what is installed with this application (filtering out irrelevant items like form/list layouts, UX changes, etc.):

[Screenshot: items installed with the Agent Chat Profanity Filter application]

These are the seven items I found during my research that looked valuable. Two of them led to the solution we’ll get to shortly: the VA Profanity Filter Solution Definition and the VAUserProfanityFilter Script Include.

While we cannot view the table used by this ML solution due to access controls, nor gain anything by changing the items in its definition, we can see that the solution returns a classification of either “PROFANE” or “NON-PROFANE” for a given input.

[Screenshot: the VA Profanity Filter ML solution definition]

Now, this ties into the script include and scheduled job in this order:

  1. The scheduled job passes its own job ID to the filter function in our script include, which reads the filter mode defined in a system property and then invokes the respective model-based or keyword-based filter function:

    [Screenshot: the scheduled job invoking the script include’s filter function]

  2. Since we are leveraging ML model-based filtering in this example, we land here, where a lookup finds the model ID (sys_id) of our solution:

    [Screenshot: the model-based filter function looking up the solution’s model ID]

  3. This function takes us into the _filterEligibleRecordsFromInteraction function, which simply queries all Agent Chat conversations whose chat language matches the language of our ML solution.

    [Screenshot: the _filterEligibleRecordsFromInteraction function querying conversations]

  4. Once we have all conversations, we get each message sent by the user and pass it to our _filter_with_model or _filter_with_keywords function, depending on our system property.

    [Screenshot: user messages being passed to the filter function]

  5. From this point, we run a prediction of the message against our Classification solution, and if the message contains profanity, a record is inserted into the Profane Message Log.

    [Screenshot: the prediction being run against the Classification solution]

    [Screenshot: the profane message being inserted into the Profane Message Log]

    The part I have highlighted is where I realized that we could leverage this same logic to evaluate Knowledge Feedback (or anywhere else in the platform we want to monitor for profanity) in real time with a before business rule. A simplified sketch of this end-to-end flow follows below.
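To make that flow concrete, here is a heavily simplified sketch of the pattern the walkthrough describes. This is not the actual VAUserProfanityFilter source - the property name and keyword list are assumptions I made for the example - but the solution name and predict() call mirror what we will use in our business rule shortly.

// Illustrative sketch only - NOT the actual VAUserProfanityFilter code.
// The property name and keyword list below are assumptions made for this example.
function classifyMessage(messageText) {
    var filterMode = gs.getProperty('sn_va_profanity.filter_mode', 'model'); // assumed property name

    if (filterMode == 'model') {
        // Model-based path (_filter_with_model): ask the ML Classification solution for a class
        var mlSolution = sn_ml.ClassificationSolutionStore.get('ml_x_snc_sn_va_profanity_global_va_profanity_filter');
        var result = JSON.parse(mlSolution.getActiveVersion().predict([{
            'sys_id': 'msg',
            'payload': messageText
        }], {
            top_n: 1,
            apply_threshold: false
        }));
        return result['msg'][0]['predictedValue']; // 'PROFANE' or 'NON-PROFANE'
    }

    // Keyword-based path (_filter_with_keywords): naive word-list check, purely for illustration
    var badWords = ['examplebadword']; // placeholder list
    for (var i = 0; i < badWords.length; i++) {
        if (messageText.toLowerCase().indexOf(badWords[i]) > -1) {
            return 'PROFANE';
        }
    }
    return 'NON-PROFANE';
}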

 

Let’s make it!

If you’re a heavy admin or in a position like mine, operating as a Technical Consultant / BPC, you know that ServiceNow documents its Server API and Client API exceptionally well. There will be plenty of times when someone wants something that isn’t natively offered, so familiarizing yourself with these docs will play a part in your success. Our script makes use of the ClassificationSolutionVersion.predict API if you want to read up on it further outside this article.

Head to System Definition > Business Rules and create a new business rule. For this demonstration, I’m using “KB Feedback Profanity Filtering” as the name - you can use whatever is appropriate for your organization or use case. Our table is going to be Knowledge Feedback [kb_feedback].

  • Check advanced
  • When is set to before
  • Check insert
  • Set a condition that comments is not empty

Head over to the Advanced tab and enter the following script:

(function executeRule(current, previous /*null when async*/ ) {

    // Gather the comment of the feedback post
    var comment = current.comments.toString();

    // Create an input json object with a payload key:value pair
    var input = [{
        'sys_id': 'feedback', // This is optional - if not provided our result becomes a 1 in place of feedback; e.g. result[1][0]
        'payload': comment
    }];

    // Create an options object
    var options = {};

    // You can read up on these values at https://docs.servicenow.com/bundle/sandiego-application-development/page/app-store/dev_portal/API_reference/ClassificationSolutionVersion/concept/ClassificationSolutionVersionAPI.html#ClssfnV-predict_O_A
    options.top_n = 1;
    options.apply_threshold = false;

    // Create a variable to hold our Classification Solution 
    var mlSolution = sn_ml.ClassificationSolutionStore.get('ml_x_snc_sn_va_profanity_global_va_profanity_filter');

    // Set a variable called result to be the result of a prediction against our ML Solution
    var result = mlSolution.getActiveVersion().predict(input, options);

    // Pull out our predictedValue - aka the Predicted Class - from our Solution
    var predictedValue = JSON.parse(result)["feedback"][0]["predictedValue"];

    // If the solution identified our feedback comment as profane, then we want to log it, add an error message and abort the action
    // This can be extended or modified to trigger an event which will then notify the appropriate party(ies) that this action occurred
    if (predictedValue == 'PROFANE') {
        gs.log('PROFANITY CAUGHT! User ' + current.user.name + ' tried to post the following comment on knowledge article ' + current.article.number + ': ' + comment);
        gs.addErrorMessage('No profanity in comments!!');
        current.setAbortAction(true);
    }

})(current, previous);
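For reference, here is roughly the shape of the parsed prediction result for our input. This is illustrative only - the exact fields returned by predict() can vary - but the access path matches the predictedValue lookup in the script above.

// Illustrative shape of JSON.parse(result) for our input; additional fields
// (such as a confidence score) may also be present.
var exampleResult = {
    "feedback": [{
        "predictedValue": "PROFANE" // or "NON-PROFANE"
    }]
};
// exampleResult["feedback"][0]["predictedValue"] is the value the business rule checks.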

Final Result

With our business rule in place, it’s time to test. Head over to your Knowledge Base, open an article, and try to leave a nasty feedback comment!

Upon clicking submit, you should see this message show up at the top of the screen:

[Screenshot: the error message displayed after submitting the feedback]

Success! While this tells us that the filter worked, we can head into our primary account (non-impersonation) and check the system logs to see that it did indeed catch some profanity!

[Screenshot: the system log entry showing the caught profanity]

Now that we have this documented, we can take appropriate action against the offending user. Again, you can extend this further by adding logic that fires an event to trigger a notification - or anything else you want to happen in a situation like this. A hedged example of that extension follows below.
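If you want to go the event/notification route, a minimal sketch looks like the snippet below, added inside the PROFANE branch of our business rule. The event name is made up for this example - register it under System Policy > Events > Registry and build a notification (or script action) against it before relying on it.

// Hypothetical extension inside the 'PROFANE' branch of the business rule.
// 'kb.feedback.profanity' is an example event name - register it in the Event Registry first.
gs.eventQueue('kb.feedback.profanity', current, current.user.getDisplayValue(), current.article.getDisplayValue());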

 

Thanks for reading, have a great day!
--
Liked this article? Hit bookmark and mark it as helpful - it would mean a lot to me.
See something wrong with this article? Let me know down below!
LinkedIn

Comments

Ryan Lee2
ServiceNow Employee

Super helpful article!   Great write up on how to apply machine learning to report on KB comments.  Much appreciated!

Logan Poynter
Mega Sage

@Ryan Lee2

Thanks Ryan, greatly appreciate the kind words!
