
Weekly instance performance

If routine tasks are a problem, finding errors, warnings, large log files, and slow jobs will help you get those tasks running smoothly.

  • Make sure your scheduled jobs run as they should.
  • Find the repeated errors and warnings in your logs.
  • Find excessive logging and unusually large log files.
  • Determine if slow‑ or long‑running jobs are causing issues.

Review your scheduled jobs

By reviewing your scheduled job activity, you can help ensure your background activities, such as scheduled reports, discovery sensors, and other routine tasks, run smoothly. Check for anything that’s running for more than an hour (3,600,000 ms).

1.     Navigate to System Logs > Transactions (Background).

2.     Apply a filter with the following conditions (see Figure 3):

            Created > on > This week

            URL > starts with > JOB

            Response time > greater than > 3600000
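If you prefer, the same filter can be expressed as an encoded query on the transaction log list (this sketch assumes the module lists the syslog_transaction table with its standard url, response_time, and sys_created_on columns):

```
urlSTARTSWITHJOB^response_time>3600000^sys_created_onONThis week@javascript:gs.beginningOfThisWeek()@javascript:gs.endOfThisWeek()
```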

Note: The query may take several minutes to return. If you don’t get any results at the one‑hour threshold, try the same steps again with a more stringent value, such as half an hour (1,800,000 ms). Of course, some scheduled jobs will legitimately take a long time because they have a lot of work to process. Due to how the transaction log tables are stored and rotated in the database, the “group by” function is not available in the list view. Because of this, you may find it easier to do your trend analysis by exporting the result set to Excel.
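Once the result set is exported, the trend analysis the list view can’t do for you is a simple group‑by. A minimal sketch, assuming the export is a CSV whose column headers are “URL” and “Response time” (those names are hypothetical; match them to your actual export):

```python
import csv
from collections import defaultdict

def summarize_jobs(csv_path):
    """Group exported background transactions by job URL,
    tracking how often each job ran and how long it took."""
    stats = defaultdict(lambda: {"runs": 0, "total_ms": 0, "max_ms": 0})
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            job = row["URL"]                 # hypothetical column name
            ms = int(row["Response time"])   # hypothetical column name
            s = stats[job]
            s["runs"] += 1
            s["total_ms"] += ms
            s["max_ms"] = max(s["max_ms"], ms)
    # Jobs that exceeded the threshold most often come first
    return sorted(stats.items(), key=lambda kv: kv[1]["runs"], reverse=True)
```

A job that appears near the top of this list repeatedly, week after week, is a strong candidate for the drill‑down described below.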

If you see a job that has executed multiple times for a long duration, drill down into the problem. The most common culprits are GlideRecord queries that request information from large tables with unindexed “where” clauses or sorts/groups. These are often found inside scripted transform maps and sometimes inside script includes or business rules.

Figure 3: Filter showing all job transactions created in the current week that took more than 3,600,000 ms to complete

Configure scheduled jobs to use “Burst” scheduler workers

To insulate against backed‑up scheduler worker queues, set the Priority field on the sys_trigger entry for the scheduled job to 25. This ensures that the core jobs (event processors, SMTP sender, POP reader, and SMS sender) get triggered in a timely fashion. Should all the scheduler workers be busy with other jobs, an “important” job that is more than 60 seconds past due will spawn a “Burst” scheduler worker and execute in parallel to the core eight schedulers on the node.

Heads up! Using “Burst” scheduler workers is good insulation, but don’t use them as an excuse to avoid addressing the root causes of long‑running or high‑volume scheduled jobs.

Check for repeated errors in the error log

  1. Navigate to System Logs.
  2. Select Errors.
  3. Look for actionable errors as well as their frequency.
  4. Look for an increase in error volume by checking the total number in the top right corner of the screen.
  5. If you see a message like org.mozilla.javascript.gen.sys_script_include_5daa9bf593233100fa71b33e867ffb9b_, you can discover more about the error by examining the sys_script_include record with that sys_id.

Look for repeated warnings in the warning log

  1. Navigate to System Logs.
  2. Select Warnings.
  3. Look for actionable warnings as well as their frequency.

Based on the warnings you see, you may be able to search through sys_script records for the text being output.

Look for excessive logging

Next, look for unusually large log files. This is a relatively crude—but surprisingly accurate—way to spot potential problems that warrant closer attention.

  1. Navigate to System Logs > Utilities > Node Log File Download.
  2. Apply a Name starts with local filter. This will show you all the application logs for the node your session is active on.

Note that the most recent five days of log files are unzipped, and the remaining files are zipped. The size value is measured in KBs. If you notice that one day is significantly larger than the others, or that there is a progressive increase in file size, you may need to investigate further.
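“Significantly larger than the others” can be made concrete with a crude outlier check. A minimal sketch, assuming you have copied the file names and KB sizes from the Node Log File Download list into a dictionary (the threshold factor is an arbitrary starting point, not a ServiceNow recommendation):

```python
from statistics import median

def flag_log_spikes(sizes_kb, factor=2.0):
    """Flag log files whose size exceeds `factor` times the median
    size of the set -- a crude spike detector for daily logs.
    `sizes_kb` maps file name -> size in KB."""
    m = median(sizes_kb.values())
    return {name: kb for name, kb in sizes_kb.items() if kb > factor * m}
```

Any file this flags is a candidate for the closer inspection described in the next section.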

Expert Tip


The application logs all transactions and associated parameters, so if the number of users has ramped up or a new piece of functionality has gone live, the log files will naturally increase.

Find log files over 1 GB

Log files over 1 GB may indicate frequent errors or logging issues that you need to fix.

  1. First, look for a significant spike in log file size.

    Note: This may indicate that gs.log or gs.print statements used in sub‑production testing have not been removed. Unnecessary logging makes the tables bulky, which slows maintenance activities like backups and also makes searching the syslog table slow and cumbersome. If that’s the case, remove the gs.log and/or gs.print statements (unless you need them) and complete steps 1–4 again.
  2. Find the log files that are over 1 GB.

Figure 4: A log file over 1 GB
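Because the list reports sizes in KB, the 1 GB cutoff is 1,048,576 KB, which is easy to misjudge by eye. A small sketch of the conversion and filter, using the same hypothetical name‑to‑KB dictionary as above:

```python
GB_IN_KB = 1024 * 1024  # the Node Log File Download list reports sizes in KB

def files_over_1gb(sizes_kb):
    """Return the log files larger than 1 GB, biggest first.
    `sizes_kb` maps file name -> size in KB."""
    return sorted(
        (name for name, kb in sizes_kb.items() if kb > GB_IN_KB),
        key=lambda name: sizes_kb[name],
        reverse=True,
    )
```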

Find slow‑running jobs

  1. Navigate to the System Scheduler.
  2. Select Slow Job Log.                                                                  
  3. View the job details in the URL and Response time columns.
  4. Check the SQL time column for the time the job has spent in the database.
  5. Check the Business rule time column for the time the job has spent executing business rule logic.
  6. Right‑click the Response time column heading and select Sort (z to a).

Review the Response time, SQL time, and Business rule time to look for suspiciously long run times.

Figure 5: Example of a Slow Job log

Find long‑running jobs

  1. Navigate to User Administration.
  2. Select Active Transactions.
  3. If there is a background job running, it will show in the User column. Check the Age column to see how long it’s been running.
  4. To kill a job that’s been running for too long or seems to be completely stuck, right‑click the User name and select Kill. (See Figure 6.)
  5. A confirmation message will appear at the top of the list. 

Figure 6: Right‑click menu for killing a stuck job

Trend your top 20 transactions

Create a spreadsheet to trend your top 20 transactions. These might be the 20 most frequently executed transactions in a given week, the most business‑critical transactions (like incident or catalog transactions), or a mixture of both. Keep tracking the data week after week.
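Populating that spreadsheet each week can be sketched as a small aggregation over an exported transaction list. This assumes a CSV export with a “URL” column (a hypothetical header name; adjust to match your export):

```python
import csv
from collections import Counter

def top_transactions(csv_path, n=20):
    """Count transactions per URL in a weekly export and return the
    `n` most frequent -- one week's row of the trend spreadsheet."""
    with open(csv_path, newline="") as f:
        counts = Counter(row["URL"] for row in csv.DictReader(f))  # hypothetical column name
    return counts.most_common(n)
```

Running this against each week’s export and pasting the results side by side gives you the week‑over‑week trend.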

Refer to this HI article for advice on how to investigate the performance of individual transactions.

