Posted on 08-01-2024 10:06 AM, edited on 08-12-2024 02:40 PM by Victor Chen
What is meant by custom generative AI solutions?
Custom generative AI solutions are any solutions that leverage generative AI beyond the standard offerings provided OOTB (Out of the Box) with applications such as Now Assist for ITSM/CSM/HRSD’s Task Summarization or Resolution Note Generation skills.
Custom solutions can be delivered in a range of formats including:
- Flows
- Virtual Agent Topics
- Scripts
- Custom skills using the Now Assist Skill Kit (Xanadu release and later)
Custom solutions should only be built when a requirement is present that OOTB skills or flows cannot be configured to fulfill.
There are two primary methods of building custom generative AI solutions available today – building directly with the Generative AI Controller, and using the Now Assist Skill Kit (on the Xanadu release or later).
Where can generative AI add value in my custom solutions?
Incorporating generative AI into custom solutions allows you to expedite certain processes such as:
- Creation of content
- Analysis of data
- Simplification (e.g., condensing complex content)
- Personalization (e.g. generating context-specific content)
How should I approach the decision to build custom vs OOTB?
When you need to implement a function in your instance that involves generative AI processing, it is recommended to follow these steps in order to find a solution:
- Utilize OOTB skills as is
Best practices dictate that one should endeavor to align with the OOTB features provided by ServiceNow. This results in ease of maintenance, as you minimize technical debt and can receive future updates for that skill. Try to find a OOTB workflow or skill that achieves the outcome you are looking for, and leverage that. Our capabilities are designed to adhere to industry standards, so it is not uncommon for them to be fit for purpose without any modification. - Configure an OOTB skill
If the OOTB capabilities don’t align directly with your needs, then the focus should be on configuring the OOTB skill. To do so, navigate to the skill within the Now Assist Admin console, and adjust the options offered to tailor the skill to your needs without the need to start anew.
An example of this can be seen during the activation of the Incident Summarization skill. When activating the skill, you will notice that the OOTB setup has 5 preselected fields from the incident table that will be used to provide context to the LLM when generating the summary.
You may wish to supplement these 5 fields with additional fields or information from related lists. To do so, you can use the guided configuration options to tailor the skill to your needs. This avoids the need to create a net new skill or workflow simply to address the gaps between OOTB and your organization’s requirements, whilst allowing you to receive any upgrades we release for the skill.
- Create a custom solution
If the above options still don’t result in a solution that is fit for purpose, then you may consider building a custom skill or workflow. In this scenario, you need to consider the following:
- Which LLM should I use?
- Should I host the solution in a Flow, custom skill (using the Now Assist Skill Kit), Virtual Agent Topic, or within a script?
- What should the outcome of the workflow be?
- Am I choosing the right process to automate?
- Do we have the data to support the ask?
- Can we maintain this custom solution in the future?
- How can I measure success?
- Will this solution scale?
We also recommend reviewing the guidelines laid out by our Trust & Governance team in the product documentation here. While it focuses specifically on building a custom skill using the Now Assist Skill Kit, it also contains information relevant to those building directly with the Generative AI Controller.
Once you have reviewed the above questions, you can begin the build process, which is discussed in the question “How can I build a custom generative AI solution?”
What are some example use cases that require building a custom generative AI solution?
The following use cases may necessitate the build of a custom solution:
- Highly complex workflows where the output of an LLM is required to drive further action.
- Usage of an external LLM (i.e. non-ServiceNow managed model).
- This is a common requirement where you may need domain-specific knowledge or particular data handling and security restrictions that prevent you from using a NowLLM.
- You have organization-specific use cases that OOTB skills do not cater to.
How can I build a custom generative AI solution?
There are two tools available today for you to build custom generative AI solutions within your ServiceNow instance:
The Now Assist Skill Kit lets you develop your own custom skills using an interface designed specifically for this use case. The high-level process to build a custom skill is as follows:
- Name your skill and select the provider for the LLM. If using an external LLM, ensure you have set up the connection and provided credentials to grant access to their API first.
- Build your prompt, add pre- and post-processors, and indicate where you want to pull data to provide context to the request.
- Select where to deploy your skill – as of the August 2024 release, we only offer UI Actions OOTB, but future releases will expand the potential deployment surfaces.
- Publish the skill, then activate it from within the Now Assist Admin Console.
You can directly build using the Generative AI Controller to integrate generative AI functionality within Flow Designer, Virtual Agent Designer, and general platform scripting. This allows for the creation of custom applications and workflows that can generate content, summarize text, answer questions, and more using large language models (LLMs) such as OpenAI, Azure OpenAI, Google Bard, and WatsonX.
We offer four capabilities (prebuilt actions) for you to leverage in your workflows:
- Generate Content
- Sentiment Analysis
- Summarize
- Generic Prompt (i.e., any prompt you need that isn’t covered by the above)
You can see an example build here:
https://www.youtube.com/watch?v=1P1qWidrh9Q&list=PLkGSnjw5y2U407_1UQQaVVrD13-MFi5ia&index=3
When should I build a solution in Now Assist Skill Kit vs using the Generative AI Controller approach?
In the majority of scenarios, we recommend leveraging the Now Assist Skill Kit to build a self-contained skill that can then be deployed across multiple areas. This layer of abstraction means that you can leverage the same skill across multiple workflows without having to rebuild it each time.
It also allows for additional flexibility – within the Now Assist Skill Kit, we provide the following features to better serve you when creating something uniquely suited to your organization:
- Prompt editor – You can edit the prompt directly within the tool.
- Prompt augmentation - Easily configure where you want to bring data in from to augment your prompt with context. This includes selecting record types, bringing in the results of flows, scripts, integrations, and more.
- Prompt testing – You can test your prompt within the editor, allowing you to iterate on your phrasing with ease.
- Deployment settings – You can select where you want the skill to be deployed to. As of the August 2024 release we only offer triggering of the skill as a UI Action, but future releases will focus on expanding this list.
- Activation mechanism – Custom skills you create within Now Assist Skill Kit can be managed directly within the Now Assist Admin console, which consolidates all your generative AI capabilities within a single location.
- Skill monitoring – Usage of custom skills deployed using the Now Assist Skill Kit can be monitored using the same dashboards leveraged by OOTB skills.
In contrast, simply using Generative AI Controller provides you with a much less streamlined approach for the following reasons:
- You are restricted to the capabilities we offer – generic prompt, summarize, sentiment analysis, and generate content.
- You are unable to leverage our Now LLM service – with this method you need to have access to an external LLM.
- You can only build generative AI solutions using the controller directly within Flow Designer, Virtual Agent Designer, or within scripts, all of which demand a certain level of familiarity.
- You have to manually determine and build the mechanism to pull in data to augment your prompt with the necessary context.
- You must define the method of deployment yourself.
This method is typically used if you are looking to use generative AI as a minor piece of a larger workflow or have existing solutions you wish to augment with generative AI components.
How does the generative AI portion of my custom solution operate?
When calling a generative AI capability, the following actions take place:
- Data is collected from where you define it – it could be from a record, an integration, an event, or directly from a user.
- This data is inserted into a prompt that is either preconfigured (e.g., for sentiment analysis) or created by you.
- The prompt with the injected data is sent to the LLM to generate a response. This response is delivered back to your instance for use wherever you want to deploy it.
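The three steps above can be sketched in code. The following is an illustrative Python sketch only, not ServiceNow API code: function names like `build_prompt` and `run_skill`, and the field names in the template, are hypothetical stand-ins for whatever data source and model connection your solution actually uses.

```python
# Illustrative sketch of the generative AI call flow described above.
# All names here are hypothetical -- the point is the pipeline shape:
# collect data -> inject it into a prompt -> send to the LLM -> return.

PROMPT_TEMPLATE = (
    "Summarize the following incident for a service desk agent.\n"
    "Short description: {short_description}\n"
    "Work notes: {work_notes}\n"
)

def build_prompt(record: dict) -> str:
    """Step 2: inject the collected data into the prompt template."""
    return PROMPT_TEMPLATE.format(
        short_description=record.get("short_description", ""),
        work_notes=record.get("work_notes", ""),
    )

def run_skill(record: dict, call_llm) -> str:
    """Steps 1-3: data in, prompt built, response returned to the caller."""
    prompt = build_prompt(record)   # step 2: preconfigured or custom prompt
    response = call_llm(prompt)     # step 3: request sent to the LLM
    return response                 # delivered back for use in the workflow

# Example with a stubbed-out model call standing in for the real LLM:
fake_llm = lambda prompt: "Summary: printer outage on floor 3."
record = {"short_description": "Printer offline", "work_notes": "Rebooted, no fix."}
print(run_skill(record, fake_llm))
```

Whatever the deployment surface, the same three-stage shape applies; only the data collection and the model connection change.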
Which LLMs can I use?
If you build a custom skill using the Now Assist Skill Kit, you can connect to any LLM, including a generic version of our NowLLM Service.
Custom solutions built using the Generative AI Controller approach within Flow/Virtual Agent Designers, or called via scripts, are currently not able to use the NowLLM service, thus requiring you to use an external LLM.
We offer several prebuilt spokes that allow you to connect to external LLMs with ease. The list as of August 2024 is:
- Azure OpenAI
- OpenAI
- Aleph Alpha
- WatsonX
- Google Gemini (MakerSuite and Vertex AI)
Note that although these are spokes, they don’t consume Integration Hub transactions but rather Assists. For more information on this topic, please contact your account representative.
However, instances on at least the Washington DC release are also able to use the generic LLM connector to connect to any external LLM not listed above (Bring your own model). This process requires a fair amount of technical acumen. To integrate with a non-spoke-supported LLM, you need:
- An API key from the provider
- Endpoint for the LLM
- Access to API documentation for that LLM to assist with writing the transformation script to translate the input and response into an acceptable format
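As a rough illustration of what such a transformation script has to do, the sketch below maps a generic prompt into a hypothetical provider request payload and extracts the generated text from a hypothetical response shape. The field names (`input`, `output`, `text`) are assumptions for illustration only; consult your provider’s API documentation for the real formats.

```python
import json

# Hypothetical provider request/response shapes -- check your LLM provider's
# API docs for the real field names. The point is the two translations a
# bring-your-own-model transformation script must perform.

def to_provider_request(prompt: str, api_key: str) -> dict:
    """Translate the generic input into the provider's request format."""
    return {
        "headers": {
            "Authorization": f"Bearer {api_key}",   # API key from the provider
            "Content-Type": "application/json",
        },
        "body": json.dumps({"input": prompt, "max_tokens": 512}),
    }

def from_provider_response(raw_body: str) -> str:
    """Translate the provider's raw response back into plain generated text."""
    data = json.loads(raw_body)
    # Assumed response shape: {"output": [{"text": "..."}]}
    return data["output"][0]["text"]

req = to_provider_request("Summarize this ticket.", api_key="sk-example")
fake_response = '{"output": [{"text": "Ticket summary here."}]}'
print(from_provider_response(fake_response))
```

The endpoint URL, authentication scheme, and payload structure all come from the provider’s API documentation, which is why access to that documentation is listed as a prerequisite above.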
Regardless of the external LLM you choose to connect to, you will be responsible for managing the appropriate license and model configuration for your use case.
What are the licensing implications and requirements?
Leveraging any generative AI capability within ServiceNow requires access to the Generative AI Controller, which is included with the purchase of any Now Assist license. Please contact your account representative for more information.
How is my data handled when using custom generative AI flows?
The way you build custom generative AI solutions determines the answer to this question.
If you are using NowLLM as your model in your custom solution (only available via the Now Assist Skill Kit) then it adheres to the same data handling and security protocols outlined here.
If you are using an external, non-ServiceNow-managed LLM, you must refer to the LLM provider’s documentation for instructions on how they handle your data.
I have sensitive data in my instance that I don't want exposed to either an external LLM or NowLLM – how can I do this?
You can employ the use of the Sensitive Data Handler plugin. It is not active by default and must be installed and configured by the admin to help detect and mask data considered sensitive for your business. To configure, you must define the regex patterns that match what you consider to be sensitive information.
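To give a rough idea of what regex-based masking looks like, the sketch below replaces matches of admin-defined patterns before text leaves your control. This is illustrative Python, not the plugin itself: the patterns and mask token shown are example assumptions, and actual configuration is done within the Sensitive Data Handler plugin.

```python
import re

# Example regex patterns an admin might define as "sensitive" -- these are
# illustrative only, not the plugin's configuration or built-in patterns.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US SSN-style number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),     # email address
]

def mask_sensitive(text: str, mask: str = "[REDACTED]") -> str:
    """Replace every match of a sensitive pattern before text is sent to an LLM."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub(mask, text)
    return text

print(mask_sensitive("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [REDACTED], SSN [REDACTED].
```

The quality of the masking depends entirely on how well your regex patterns capture what your business considers sensitive, so test them against representative data.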
Additionally, within the Now Assist Skill Kit, you can leverage the pre- and post-processor options to implement your own solution to protect sensitive data.
Additional Resources