
Ashley Snyder
ServiceNow Employee

 

 

In this session, we covered our Now LLM Service, answering questions on why it was built, the benefits of using Now LLMs in our products, and how we train our models. We also covered Responsible AI at ServiceNow and how we incorporate it into each step of the product development lifecycle, how data sharing is used to improve our models, and our work on the BigCode project for our StarCoder LLM, along with attendee questions on each of these topics.

 

Here are the links discussed in the session. Keep in mind the KB links are behind a Support login; if you do not have a Support login, contact your account team to get access to the information.

 

 

Here's the answered Q&A from this session. Keep in mind this information is subject to change; the Q&A listed in this blog is a snapshot of our functionality and practices as of July 2024. For any of our models, we recommend viewing the latest version of the model cards for current information.

 

Q: How will this affect Virtual Agent?
A: Now LLM is infused into VA in several ways:
1) Q&A with KB Summarization
2) Multi-turn service catalog intake
3) Now LLM-based VA Topics (vs. tree-like VA topics) for admins

Q: Is sentiment analysis available for use with ITSM? We want survey inputs to run through sentiment analysis, but we were told by ServiceNow that sentiment analysis requires other 3rd-party LLMs.
A: We don’t specifically support sentiment analysis with any of our OOTB skills. You will need an external LLM for that capability and will need to leverage the Generative AI Controller to build out this functionality. Watch this space though - we have a feature releasing in Xanadu that will open this area up. I believe I have an AI Academy scheduled for 9/17 that will cover something aligned with your use case!

Q: So will each customer have a form of the Now LLM trained specifically on their instance's data, or will everyone be using the same fork?
A: Today, everyone is using the same fork which we update regularly.

Q: What are some ways that we can measure the effectiveness of the summarization related to knowledge articles? Are there metrics around content that was summarized? I would think views are expected to go down.
A: As a best practice, there should be a human in the loop to review the relevance of KB generation. The effectiveness of the KB article itself, once published, can be measured by the number of deflections, feedback, article search stats, etc.
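As a quick sketch of the signals mentioned above, you could combine per-article counts into a simple effectiveness report. The field names below are illustrative only (not actual ServiceNow table columns), and the metrics are one possible definition, not an official formula:

```python
# Hypothetical per-article stats, e.g. exported from a knowledge dashboard.
# Field names are illustrative, not actual ServiceNow columns.
articles = [
    {"number": "KB0010001", "views": 500, "helpful": 40, "not_helpful": 10, "deflections": 25},
    {"number": "KB0010002", "views": 120, "helpful": 3, "not_helpful": 9, "deflections": 1},
]

def kb_effectiveness(a):
    """Combine simple signals: deflections per view and the helpful-feedback ratio."""
    feedback = a["helpful"] + a["not_helpful"]
    return {
        "number": a["number"],
        "deflection_rate": a["deflections"] / a["views"] if a["views"] else 0.0,
        "helpful_ratio": a["helpful"] / feedback if feedback else None,
    }

report = [kb_effectiveness(a) for a in articles]
```

A human reviewer would still interpret these numbers in context - for example, lower views with steady deflections may indicate the summary is resolving issues earlier.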

Q: Can more than one LLM be used? E.g. Now LLM or one from Google or any other vendor?
A: One model can power one or more skills in the platform experience. We employ a mix of models, implementing the best model for the task. Additionally, customers can implement custom use cases powered by their own choice of non-ServiceNow model if desired.

Q: Can more than one LLM be used simultaneously? E.g. NowLLM, along with Google’s LLM
A: Yes, using the Generative AI Controller you can point a custom-created skill to a designated LLM service. Note that currently you cannot override the LLM used by an OOTB skill.

Q: Can Now Assist summarize more than one ticket at a time? Can it summarize like 1000 tickets at once so we can trend on it in reports or deep-dive analysis?
A: This isn’t available OOTB today. You can build a custom workflow using the generative AI controller, or wait for the Xanadu release where we will have something that may also help here 🙂

Q: Will there be more targeted, industry-specific small language models, for example an IT support SLM?
A: This is a great question following industry trends. We’re investigating this possibility, but this isn’t yet on our roadmap for execution (safe harbor).

Q: How is AI and customer information intake into the ServiceNow AI model different between SN Commercial and Government versions?
A: Customers in the government version cannot contribute data due to the regulated nature of those environments.

Q: How is customer data and inputs leveraged to train the ServiceNow AI model?
A: See KB1648406 for specifics on what customer data and inputs are leveraged to train our models.

Q: How is ServiceNow AI and LLM different between SN Commercial and Government Instances?
A: There is no difference in the AI and LLM capabilities themselves; however, the infrastructure and distribution differ.

Q: Is there somewhere we can try this out (plugin in a PDI, demo) to see how it will fit our environment?
A: Contact your account team for demos.

Q: How do we differentiate from competitors? All can extract and get the data right?
A: Sean answered this at 26:45 of the video, covering where we stand on the Stanford University Foundation Model Transparency Index. There is a chart in that portion of the video showing our index scores and how we differentiate from competitors on major dimensions of transparency.

Q: Instead of Now LLM, can a customer use other LLM models and integrate them with ServiceNow using some controller outside of ServiceNow? If this is possible, would a ServiceNow Now Assist subscription (ITSM Pro+) then not be required?
A: Yes - we have the Generative AI Controller, which allows you to connect to LLMs other than our own. This does not allow you to change the model used for OOTB skills, however; those are tied to Now LLM. Usage of the controller also requires a Pro/Pro+ license. You can see an example here: https://www.youtube.com/watch?v=1P1qWidrh9Q&list=PLkGSnjw5y2U407_1UQQaVVrD13-MFi5ia&index=2
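To make the external-LLM idea concrete, here is a minimal sketch of the kind of chat-completion request body a custom skill might send to a third-party provider. The endpoint, model name, and payload shape below are assumptions in the style of common chat-completion APIs; they are not the actual Generative AI Controller configuration or a documented ServiceNow interface:

```python
import json

# Hypothetical external provider endpoint (assumption, not a real URL).
EXTERNAL_LLM_ENDPOINT = "https://llm.example.com/v1/chat/completions"

def build_chat_request(model: str, system_prompt: str, user_prompt: str) -> str:
    """Build a JSON body in the common chat-completion style."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        # Low temperature keeps task-focused output more deterministic.
        "temperature": 0.2,
    }
    return json.dumps(payload)

body = build_chat_request(
    "example-model",
    "Summarize incidents concisely.",
    "Summarize INC0012345.",
)
```

In practice the controller handles the connection details (credentials, endpoint, provider selection) through its own configuration rather than hand-built requests like this.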

Q: Another question regarding a GenAI use case: we have KBAs in ServiceNow with doc attachments in them; can GenAI help find relevant information within the contents of those attachments?
A: As of July 2024, AI Search in Now Assist does not search attachments, but it is on our roadmap (safe harbor). Check with your account team for more details.

Q: How is data sovereignty guaranteed?
A: See the KBs linked or check with your account rep to go deeper into this subject.

Q: But just to confirm, ServiceNow has never used our data without us signing off that it's okay for you to use it?
A: Yes, that is correct. Customers need to actively sign contracts for that and opt in.

Q: How frequently is the LLM updated? And will customers that opt in be "expected" to contribute data on a regular basis to ensure optimum LLM behavior over time?
A: Customers do not need to continually opt in for data sharing; this is a one-time process. For more information on data sharing, see KB1648406. ServiceNow continuously works to improve its models, updating them on a per-use-case basis. Information on the latest models can be found in the latest update of the model cards with every scheduled ServiceNow release. Furthermore, as models are introduced and updated in the ServiceNow Regional Data Centers, they become available to all dependent Now Assist skills across all Now Platform versions where the Now Assist skill is available. When ServiceNow changes the model used to power a skill, the new model is used from the time of the next platform upgrade initiated by the admin.

Q: How to enable code generation on PDIs (Personal Developer Instances)?
A: This feature isn’t available on PDIs today. Reach out to your account rep for a demo.

Q: Would you say our AI inferencing is “zero data retention”?
A: Yes. When data is used for inference, such as incident summarization, the prompts and responses are processed in-memory only; there is no disk storage for inferencing. Data is deleted from the shared infrastructure immediately after processing. This ensures that customer prompts and responses are not available to other customers, maintaining data privacy and confidentiality. For further explanation, see KB1584492.