Now Assist Skill Kit feature

Vedavalli
Kilo Sage

In Now Assist Skill Kit, the payload is sent to the LLM. In which table can we see the stored payload?
Also, can we evaluate the prompts using a scheduled job?

1 ACCEPTED SOLUTION

Matthew_13
Mega Sage

Hi Buddy!

When a Now Assist Skill Kit skill sends a prompt to the LLM, the actual payload/request content is logged in ServiceNow's Generative AI log table (sys_generative_ai_log). That's the only place where the prompt and response are persisted, and access to it is typically restricted (maint/admin) because it can contain sensitive data. There's also a separate usage/metrics table (sys_gen_ai_usage_log) that tracks things like who ran the skill and how often, but it doesn't store the full prompt.
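
If you want to spot-check what was actually sent, a quick background script against that table works. Here's a minimal sketch; the table name is the one above, but the 'request' and 'response' field names are assumptions on my part, so verify them in the table's dictionary on your instance:

```javascript
// Background script: peek at the most recent Generative AI log entries.
// The 'request' and 'response' field names are assumptions - check the
// table's dictionary in your instance. Reading this table typically
// requires maint/admin access.
var gr = new GlideRecord('sys_generative_ai_log');
gr.orderByDesc('sys_created_on');
gr.setLimit(5);
gr.query();
while (gr.next()) {
    gs.info('When: ' + gr.getValue('sys_created_on'));
    gs.info('Payload: ' + gr.getValue('request'));   // assumed field name
    gs.info('Response: ' + gr.getValue('response')); // assumed field name
}
```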

As far as prompt evaluation goes: there isn't an out-of-the-box way to run Skill Kit evaluations on a schedule. Evaluations are meant to be run interactively from the Skill Kit UI.

That said, you can build your own scheduled validation if needed. A good pattern is to use a Scheduled Script Execution (or a Flow) to invoke the skill with a predefined dataset, store the responses in a custom table, and then report on or score the results over time (see the sketch below). It's not the same as the built-in evaluation feature, but it works well for regression or quality checks.
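
As a rough illustration, here's what that Scheduled Script Execution could look like. Everything instance-specific here is an assumption: the custom tables u_prompt_eval_case and u_prompt_eval_result (and their u_ fields) are hypothetical tables you'd create yourself, and the exact request shape for sn_one_extend.OneExtendUtil.execute() varies by release, so verify the capability sys_id and payload structure against the Skill Kit documentation before relying on this.

```javascript
// Scheduled Script Execution sketch: replay a fixed prompt dataset
// through a skill and persist the responses for later scoring.
// NOTE: u_prompt_eval_case / u_prompt_eval_result (and their u_ fields)
// are hypothetical custom tables, and the request shape for
// sn_one_extend.OneExtendUtil.execute() should be verified against the
// Skill Kit documentation for your release.
var CAPABILITY_ID = '<sys_id of the skill capability>'; // placeholder

var testCase = new GlideRecord('u_prompt_eval_case'); // hypothetical table
testCase.addActiveQuery();
testCase.query();

while (testCase.next()) {
    var request = {
        executionRequests: [{
            capabilityId: CAPABILITY_ID,
            payload: {
                prompt: testCase.getValue('u_prompt') // hypothetical field
            }
        }]
    };

    // Assumed invocation API; check your release's Skill Kit docs.
    var response = sn_one_extend.OneExtendUtil.execute(request);

    var result = new GlideRecord('u_prompt_eval_result'); // hypothetical table
    result.initialize();
    result.setValue('u_case', testCase.getUniqueValue());
    result.setValue('u_response', JSON.stringify(response));
    result.insert(); // sys_created_on timestamps the run automatically
}
```

Schedule it daily (or once per release) and point a report or dashboard at the results table to watch for regressions over time.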

Hope that helps clarify, my Friend 🙂

@Vedavalli - Please mark Solution Accepted and give a Thumbs Up if you found this helpful!
