
10-01-2023 10:12 PM - edited 10-05-2023 12:32 PM
With the recently announced Vancouver release, we are releasing Now LLM, ServiceNow's large language models for enterprise domain use cases.
The generative AI highlights for the initial release include an interactive Q&A capability for requesters to get answers from a relevant knowledge corpus; incident/case and chat summarization capabilities that give customer support and IT agents quicker handoffs and resolutions; and assistive code generation that increases developer productivity.
To deliver Now LLM, we have used some best-in-class models as foundational models, including the pre-trained model that ServiceNow Research developed in partnership with Hugging Face, models from our recently announced partnership with NVIDIA, and other leading open source models. Depending on the usage scenario, we fine-tune and deliver proprietary, custom models that are specific to our domains and use cases.
This continues our investment in AI strategy and natural language technologies and builds on our prior work with language models for language understanding; it has been made possible by rapid advances in technology and by auto-regressive/generative language models becoming mainstream.
To power these use cases, a Now LLM has been tuned appropriately to provide quality responses, resulting in an improved experience for users. Depending on the specific scenario, some or all of these steps are undertaken to deliver the right model:
- Extended pre-training – Making the models suitable for enterprise domains.
- Instruction fine-tuning – Fine-tuning on domain- and use-case-specific data annotated with instructions.
- Dialog fine-tuning – Fine-tuning so that users can get answers through multi-turn interactions delivered through a conversational interface/experience.
- Retrieval Augmented Generation – Improving the quality of LLM-generated responses by grounding the model on customer-specific sources of knowledge and data.
- User feedback – Improving model performance based on human feedback in the product, including both implicit and explicit signals.
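As an illustrative sketch only (not ServiceNow's actual implementation), the Retrieval Augmented Generation step above can be approximated as: retrieve the knowledge entries most relevant to a query, then ground the prompt in them before calling the model. All function names here are hypothetical, and the keyword-overlap scoring stands in for a real search/embedding retriever.

```javascript
// Minimal RAG sketch (hypothetical; real systems use search engines or
// embeddings, not keyword overlap).

function tokenize(text) {
  return text.toLowerCase().match(/[a-z0-9]+/g) || [];
}

// Score = number of query tokens that appear in the document body.
function score(queryTokens, doc) {
  const docTokens = new Set(tokenize(doc.body));
  return queryTokens.filter((t) => docTokens.has(t)).length;
}

// Retrieve the topK most relevant documents for a query.
function retrieve(query, docs, topK = 2) {
  const queryTokens = tokenize(query);
  return docs
    .map((doc) => ({ doc, s: score(queryTokens, doc) }))
    .filter((r) => r.s > 0)
    .sort((a, b) => b.s - a.s)
    .slice(0, topK)
    .map((r) => r.doc);
}

// Ground the prompt in the retrieved context before sending it to an LLM.
function buildGroundedPrompt(query, docs) {
  const context = retrieve(query, docs)
    .map((d) => `[${d.title}]\n${d.body}`)
    .join("\n\n");
  return `Answer using only the context below.\n\nContext:\n${context}\n\nQuestion: ${query}`;
}

// Example knowledge corpus (made-up articles).
const kb = [
  { title: "KB0001", body: "To reset your VPN password, open the portal and choose Reset VPN." },
  { title: "KB0002", body: "Printer setup requires the corporate driver package." },
];

const prompt = buildGroundedPrompt("How do I reset my VPN password?", kb);
```

The point of the grounding step is that only relevant articles (here, only KB0001) make it into the prompt, so the model answers from customer data rather than from its training corpus alone.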
Now LLM has been made possible by significant effort spanning engineering, research, product, QE, and design teams, as well as the datacenter operations teams building and supporting the underlying GPU infrastructure in our datacenters, all in record time.
While we continue to learn from customers and accelerate the features we are delivering, we are only scratching the surface with this initial release, and we have an exciting roadmap ahead of us.
How can we use generative AI to summarize the document?
The capabilities in the Now Platform are incident and case summarization based on the short description and other text fields linked to the ticket/case. Is your requirement to summarize documents uploaded as attachments in ServiceNow? If the documents are independent of the incidents/cases, as of now (safe harbor) you may need to look at the Gen AI Controller for pure document summarization. https://docs.servicenow.com/bundle/vancouver-intelligent-experiences/page/administer/generative-ai-c...
How can we summarize the document in ServiceNow?

Document summarization is not available OOTB, but it can be done using Flow Designer in conjunction with the Gen AI Controller's Summarize action:
1. Read the attachment file.
2. Store the data in an array.
3. Pass the array to the Summarize capability of Gen AI.
Also explore Document Intelligence (https://www.servicenow.com/community/ai-intelligence-articles/document-intelligence-quick-start-guid...), which is used for data extraction from attachments, just FYI.
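The three steps above can be sketched generically as follows. This is a standalone illustration, not Flow Designer or Glide code: `summarize` is a stub standing in for the Gen AI Controller's Summarize action, and the chunking size is an arbitrary example value.

```javascript
// Generic sketch of the flow: read attachment -> store in array -> summarize.
// `summarize` is a placeholder for the Gen AI Controller's Summarize action.

// Step 1: read the attachment file (here, from an in-memory string).
function readAttachment(content) {
  return content;
}

// Step 2: store the data in an array of fixed-size chunks so each
// piece stays within the model's input limit.
function chunk(text, size = 100) {
  const chunks = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

// Step 3: pass the array to the summarize capability (stubbed here as
// "first sentence fragment of each chunk" purely for illustration).
function summarize(chunks) {
  return chunks.map((c) => c.split(". ")[0]).join(" ");
}

const doc = readAttachment(
  "Incident report. The VPN gateway failed at 09:00. " +
  "Users in EMEA lost connectivity. Failover completed at 09:20."
);
const parts = chunk(doc, 60);
const summary = summarize(parts);
```

In an actual flow, step 1 would use the platform's attachment record, and step 3 would invoke the Summarize action once per chunk (or once on the combined text, if it fits the input limit).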
Regards
RP
Hi,
I would love to better understand how RAG works with our existing ServiceNow knowledge base. How would screenshots and custom CSS implemented in knowledge articles come through in a RAG model? For example, content placed in an ordered list versus in a table. I'm curious whether there are desired structures we should be following for RAG, or best practices that will result in accurate responses.

RAG in ServiceNow today uses AI Search (AIS) plus internal sources, so whatever AIS supports (see our docs) is the structure you need.
To better understand how RAG works in ServiceNow, please see this excellent article from our product management team: Under the Hood for Now Assist for AIS.
I've seen that article before; thank you for referencing it again. I wonder if there is new guidance or direction since the day it was posted. We are using a good amount of CSS with tables, so I wonder how LLMs process this data from knowledge articles and summarize it. We also have KCS templates that separate the data into distinct fields; for example, a KCS How To article has five fields such as Internal Notes, Procedure, and Objective, not just the standard OOB article body.
Just curious if anyone has experience with complex CSS and HTML, and how summarization handles the data inside CSS tables that look nice and neat to end users but may come through differently in an LLM's output. I also wonder how screenshots of error messages or resolutions are summarized. Are images not supported? Very curious to see this in someone's production instance.
Question - does the customer data that feeds into the platform GenAI capabilities (Virtual Agent and others) stay inside the customer's ServiceNow instance, or does it go into a 'broad' ServiceNow GenAI data pool?
Is Now LLM something that needs to be configured? Or is it automatically included after purchasing a generative AI-related application in the ServiceNow instance?
Hi @xingrui, Now LLM does not have any configuration requirements and is part of any GenAI product that requires it.
Hello @shivramanna @Shamus Mulhall @Rahul Priyadars ,
I read this article - https://www.servicenow.com/community/ai-intelligence-articles/now-assist-faqs/ta-p/2685122
I understand that Now LLM supports only English right now. We also want this capability enabled in Dutch, French, and German in our instance. Do you know how soon this capability would be possible in these languages in future releases?
Do you have any alternatives or Now Store apps that we can use until ServiceNow brings this feature OOTB?
Thanks,
Shubham

@Shubham - correct, the Now LLM family has been trained on English. A few options to support Dutch, French, and German: (1) leverage Dynamic Translation, which is supported in the workspaces and VA; (2) SAFE HARBOR - Dutch, French, and German will be supported in the Now LLMs in 2H of 2024. Please reach out directly to your ServiceNow sales team for details on the roadmap or on how to leverage Dynamic Translation if needed today. -Lener

@Shubham Garg Please find attached the release and details on Now Assist with multilingual support.
https://www.youtube.com/watch?v=5DuyXa8Bons
Regards
RP
Regarding Now LLM, can the user control what is trained? For example, can I provide several articles for it to train on?
Or is the user only allowed to use this product, with training being ServiceNow's own business?
Can the user not influence the training process and content?
Best regards

@yang7: Generative models are pre-trained by nature, so you cannot control what they are trained on.
You can apply Now Assist to your own documents, though (KBs or SharePoint docs), for summarization purposes, for instance.

@yang7 - adding to Laurent's comments: ServiceNow uses a RAG architecture when answering questions from your KBs or SharePoint documents. This has the advantage of using your data to generate the response without going through the time-intensive process of fine-tuning the LLM. This is preferred for most customers, as our Now LLM is a shared model that other customers leverage. If you do have a unique requirement where you need to train or fine-tune an LLM with your data, you can use Now Assist Skill Kit (NASK) to integrate a third-party LLM that you control into the ServiceNow platform. You can see examples in the NASK Use Case Library.
Can I use an S3 bucket instead of knowledge articles for Now Assist? If yes, please guide me on how.
Thank you for these tips! Very useful. Is there any guidance on using or not using collapsible sections in the knowledge articles? Thanks!
Created a skill in Now Assist, but I'm not getting the expected result.