Gen AI Controller output cuts off despite recursive summarization

SJB14
Tera Contributor

Hi all,

 

I'm working with the Gen AI Controller on a Xanadu instance, following the steps in this blog post:
https://developer.servicenow.com/blog.do?p=/post/generative-ai-controller/

Additionally, as I expect the output of the LLM to be large, I have enabled recursive summarization as detailed in the documentation here (also see screenshot):
https://www.servicenow.com/docs/bundle/xanadu-intelligent-experiences/page/administer/generative-ai-...

 

[Screenshot: large input screengrab.png]

 

Nonetheless, the output of my LLM is cut off/incomplete when testing the Generate Content functionality in Workflow Studio (see screenshot; it should be a full JSON object, but only about half is returned).

 

[Screenshot: cut-off output.png]


I know the full output can be retrieved, because I have another version that uses an outbound REST request called from a business rule script. Ideally, though, we would like to replace that with the OOB functionality of the Gen AI Controller.
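For context, below is a rough sketch of the business rule approach that currently works for us. The endpoint URL, system property name, prompt, and target field are placeholders rather than our real configuration:

(function executeRule(current, previous /*null when async*/) {
    try {
        // Build the outbound REST call to the LLM provider (placeholder endpoint)
        var request = new sn_ws.RESTMessageV2();
        request.setEndpoint('https://llm.example.com/v1/generate'); // placeholder
        request.setHttpMethod('POST');
        request.setRequestHeader('Content-Type', 'application/json');
        // API key read from a system property (placeholder property name)
        request.setRequestHeader('Authorization', 'Bearer ' + gs.getProperty('x_my_app.llm_api_key'));
        request.setRequestBody(JSON.stringify({
            prompt: 'Generate the JSON object for: ' + current.getValue('short_description')
        }));

        var response = request.execute();
        if (response.getStatusCode() == 200) {
            // The full response body arrives here without truncation
            current.setValue('u_llm_output', response.getBody()); // placeholder field
            current.update(); // assumes an async (after) business rule
        } else {
            gs.error('LLM call failed: ' + response.getStatusCode() + ' - ' + response.getErrorMessage());
        }
    } catch (ex) {
        gs.error('LLM call exception: ' + ex.message);
    }
})(current, previous);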

 

Does anyone have an idea of how I can get and store the full output of the LLM?

 

Cheers

1 REPLY

Chris Yang
Tera Sage

I don't have a solution to your problem, but I wanted to get your thoughts on why you want to migrate to the Gen AI Controller. I struggle with the value proposition of the Gen AI Controller, since writing a simple outbound REST call is much easier than configuring the Gen AI Controller.