
Hendrik Engler
ServiceNow Employee

If you have been running a ServiceNow instance for a few years, you have likely had that moment: a clone takes forever, an upgrade window keeps stretching, or someone from finance asks why the hosting bill climbed again. The root cause is almost always the same: unchecked database growth, often paired with a lack of proper data lifecycle management. It is the silent performance killer in ServiceNow environments. This guide provides a comprehensive framework for understanding, measuring, and managing your instance footprint across all major table categories:

 

  • Performance: Large tables degrade query performance silently; maintaining huge tables becomes increasingly difficult over time.
  • Operations: Upgrades, clones, and backups take proportionally longer; weekend windows become multi-day affairs.
  • Cost: Exceeding contractual storage thresholds drives hosting costs up; reactive cleanup initiatives are enormously expensive.
  • Top tables: Excessive auditing, attachments, system and transaction logs, emails, task-related tables, and the CMDB consistently dominate storage in most instances.
  • Key tools: Data Management Console, Audit Management Console, DB Compaction, Table Cleaner, Email Retention, and more.

 

Table of Contents

 

  1. Why your instance footprint matters
  2. Understanding the data lifecycle
  3. How to check database and table sizes
  4. Database compaction and automated space reclamation
  5. Common tables that need your attention
    1. Attachments
    2. Audit related tables
    3. System logs
    4. Email
    5. Platform Analytics
  6. (Orphan) User data and settings
  7. Additional resources

 

 

1. Why your instance footprint matters

Database growth cascades into every operational dimension of the platform. Understanding these impacts is the foundation for building urgency around data management governance.

 

Footprint.PNG

 

 

2. Understanding the data lifecycle

Every record moves through a predictable arc. Your data management strategy should map to each phase and become an essential part of solution design and the technical governance framework.

 

Phases.PNG

 

Notice that archiving requires special attention. Based on business requirements and compliance, risk, or legal policies, certain data might need to remain available for a specific period of time (retention policies). After that period, the data can usually be safely deleted (or exported to external systems if required). Data archiving is an out-of-the-box feature that moves specific data from an operational table to a dedicated archive table; the data remains available on the ServiceNow instance. An alternative (but more complex) approach is to externalize and export the data in scope prior to deletion.
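As a quick orientation, active archive rules can be listed with a short background script. This is a hedged sketch: it assumes the rules live in the sys_archive table with a "table" field, which you should verify on your instance.

// List active archive rules and their source tables.
// Assumption: archive rules are stored in sys_archive with a 'table' field.
var rule = new GlideRecord('sys_archive');
rule.addActiveQuery();
rule.query();
while (rule.next()) {
    gs.print('Active archive rule for table: ' + rule.getValue('table'));
}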

 

3. How to check database and table sizes

You cannot manage what you cannot measure. ServiceNow provides several mechanisms to understand your storage footprint, along with tooling and dashboards to analyze the contributing factors and drill down further:

 

  • Database Footprint (via Now Support)

    In the Automation Store you will find a catalog request to create a database footprint report for a specific ServiceNow instance. The report lists the top tables (up to 100) by size and their contribution to the overall storage consumption. It is limited (no growth history or drill-down options) but provides a quick overview of the current state.

    Database Footprint.PNG

  • Physical Table Stats (System Table)

    Each instance collects the individual table sizes through periodic sampling (daily, weekly, monthly, and yearly), stored in an internal system table (sys_physical_table_stats).

    Physical Table Statistics.PNG

    For each table you will find the table size (GB) and the row count for a specific sampling period, which allows you to build a historical trend per table. In addition, there is an interesting data point for reclaimable space: an estimate of the space that can be reclaimed within the underlying table space (common when a table space grew over time but did not shrink after data deletion). Here lies the potential to free up space and reduce the overall database size. Internally, the platform uses these metrics to perform database compaction (see the next chapters for details) and do the job for you. A minimal query sketch against this table follows after this list.

  • Data Management Console (Zurich+)

    The Zurich release introduced an out-of-the-box dashboard for system administrators to oversee, track, and manage database and table size growth from a central place.

    Data Management Console.PNG

    On the homepage you can quickly review the overall database size, its historical trend, and the top contributing tables (by growth and by overall consumption). The aggregated information is primarily based on the collected physical table statistics. In addition, a detailed drill-down per table is available to understand relationships within the data model and the related data driving storage consumption. These related tables are called Driver Tables, and attachments are an excellent example, being one of the top storage-consuming datasets. As attachments always belong to a specific record on the platform, you want to understand where most attachments are used (for example, within the ITSM capabilities and processes).

    Driver Tables.PNG

    This is the recommended and fastest approach to identify areas of concern and derive corrective measures (like data archiving or deletion). Speaking of measures, the same tool also provides a Rules tab to oversee and create rules right away:

    Rules.PNG

    New cleanup and archiving rules can be configured per table. A system administrator can also review the historical runs of these rules and their current status. It is the perfect place to configure measures and policies from a single location.

    Caution: Ensure that data policies are carefully planned, assessed, and tested on a sub-production instance first to avoid data loss and to remain compliant with related policies.

  • Subscription Management (Cloud Capacity)

    For completeness, you might be interested in an overall usage and consumption report for your (customer) account across all your instances. The product documentation provides a good summary of how to access this cloud capacity report alongside your other subscriptions.
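As referenced for the Physical Table Stats above, the statistics table can also be queried directly as a background script. This is a hedged sketch: the field names used here (table_name, size, row_count, reclaim_size) are assumptions and may differ per release, so check the dictionary of sys_physical_table_stats on your instance first.

// Hedged sketch: print recent table statistics samples, newest first.
// Field names (table_name, size, row_count, reclaim_size) are assumptions.
var stats = new GlideRecord('sys_physical_table_stats');
stats.orderByDesc('sys_created_on');
stats.setLimit(20);
stats.query();
while (stats.next()) {
    gs.print(stats.getValue('table_name') +
        ' | size (GB): ' + stats.getValue('size') +
        ' | rows: ' + stats.getValue('row_count') +
        ' | reclaimable (GB): ' + stats.getValue('reclaim_size'));
}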

 

4. Database compaction and automated space reclamation

Table fragmentation occurs naturally over time as records are updated and deleted. Starting with Utah, ServiceNow introduced an automated nightly DB Compaction job that identifies qualifying tables and compacts them automatically. While a system administrator usually does not need to intervene, it is worth understanding the criteria and configuration options.

 

Database Compaction.PNG

 

Recall that table statistics are collected per instance. Based on this information, paired with the eligibility criteria outlined below, you can better understand when and under which circumstances a compaction of database tables will run:

 

  • Optimization on the table is not skipped (via the no_optimize dictionary attribute)
  • Not in a table rotation setup
  • Not a shadow, rollback, temporary, or virtual table (like a database view)
  • Not an extension of the application files (sys_metadata) table
  • The reclaimable space is more than 10 GB (aka "Reclaim Estimate in GB")
  • Over 50% free space, calculated by dividing the reclaimable space by the overall table size (aka "Table size in GB"); for example, if the overall table size is 50 GB, the reclaimable space must be at least 25 GB
  • The table has fewer than 100M records
  • The table size is less than 100 GB

 

While the eligibility criteria are strict, you may still need to free up a significant amount of space to reduce the database size, especially if the pattern repeats across many large tables on the platform. One option is to engage the Now Support team with a dedicated request to verify whether a database compaction can be run from the support side for specific tables. Another option is to adjust the criteria that determine which tables get compacted. For this purpose, four system properties are available (they do not exist out of the box) that allow you to change the respective thresholds:

 

Property | Description | Default value (Washington+)
glide.db.compaction.criteria.reclaim_size_mb | Minimum reclaim size (MB) | 10240
glide.db.compaction.criteria.reclaim_percentage | Reclaim percentage (%) | 50
glide.db.compaction.criteria.max_row_count | Maximum row count (#) | 100000000
glide.db.compaction.criteria.max_table_size_mb | Maximum table size (MB) | 102400
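Since these properties do not exist OOTB, they must be created before they take effect. A minimal sketch using gs.setProperty, shown with the default values from the table above; adjust them deliberately and test on a sub-production instance first.

// Create / adjust the compaction thresholds. The property names are taken
// from the table above; the values shown mirror the documented defaults.
gs.setProperty('glide.db.compaction.criteria.reclaim_size_mb', '10240');    // 10 GB
gs.setProperty('glide.db.compaction.criteria.reclaim_percentage', '50');    // 50%
gs.setProperty('glide.db.compaction.criteria.max_row_count', '100000000');  // 100M rows
gs.setProperty('glide.db.compaction.criteria.max_table_size_mb', '102400'); // 100 GB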

 

Keep the following hints, recommendations, and limitations in mind:

 

  • The internal database compaction job (DB Compaction) is scheduled to run once a day (the trigger time can be adjusted; outside business hours is preferred)
  • Only up to five eligible tables are compacted per day
  • The database compaction job automatically decides which tables to compact; you cannot influence this behavior (unless you exclude tables via the system dictionary attribute)
  • All compaction attempts are logged in the sys_compaction_run table, also indicating why specific tables were skipped (with an outline of the four main criteria, which are also configurable); a query sketch follows after this list

    Compaction Run.PNG

  • Do not change these system properties without testing on a sub-production instance to evaluate the compaction runs and their impact prior to the production rollout
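To review the logged compaction attempts mentioned above, a short background script can read the run table. This is a hedged sketch: the field names (table_name, state) are assumptions, so verify them against the sys_compaction_run dictionary first.

// Hedged sketch: show the latest compaction attempts and their outcome.
// 'table_name' and 'state' are assumed field names -- verify first.
var run = new GlideRecord('sys_compaction_run');
run.orderByDesc('sys_created_on');
run.setLimit(10);
run.query();
while (run.next()) {
    gs.print(run.getValue('table_name') + ' | ' + run.getValue('state'));
}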

 

5. Common tables that need your attention

This chapter focuses on the top five tables and areas that are common concerns regarding overall size and growth over time. These areas are usually relevant platform-wide and used across most of the available platform capabilities and features. Of course, other areas like the CMDB (Configuration Management Database) or task-related records are also common topics and require attention through an early data management strategy aligned to their lifecycle, as indicated earlier.

 

Top Tables.PNG

 

5.1 Attachments

Attachments are split across two system tables and are always related to another record (like the task they are attached to). The sys_attachment table contains the list of all attachments with certain metadata (like size), while a second table (sys_attachment_doc) contains the binary chunks that make up the attachment; all chunks belong to a parent attachment record. Take the following comments and strategies into consideration:

 

  • Archiving

    Do not archive records from the attachment table directly, as this will leave orphaned records in the chunks table. Only archive the specific records (which might have attachments), and the platform will automatically repoint the attachment records to the archived records.

  • Deletion

    You can implement custom solutions to delete attachments via the OOTB APIs when they are no longer required. A common scenario is attachments on incident records that are already archived or no longer require such context information. Flows let you use the OOTB attachment actions for the same purpose. The table cleaner (data management policy) can also be used for scheduled cleanups.

  • Attachment file size

    You can define a global file size limit per attachment via a system property (com.glide.attachment.max_size, value in MB). It can be changed at any time, but consider the business requirements; the OOTB default of 1 GB is a high maximum. In addition (and in the context of security), you can restrict attachment uploads by extension and/or MIME type.

 

As described earlier, it is recommended to use the Data Management Console to understand which tables contribute to a large attachment table so you can make an educated decision on where to focus. If the Data Management Console is not yet available to you, a simple script can determine the top ten tables by attachment size (it can be run as a background script).

 

attachmentAggregate();

function attachmentAggregate() {
  // Aggregate all attachment metadata records by the table they belong to.
  var ga = new GlideAggregate('sys_attachment');

  ga.addAggregate('COUNT');                 // number of attachments per table
  ga.addAggregate('SUM', 'size_bytes');     // total attachment size per table
  ga.groupBy('table_name');
  ga.orderByAggregate('SUM', 'size_bytes'); // largest consumers first
  ga.setLimit(10);                          // top ten tables only
  ga.query();

  while (ga.next()) {
    var table = ga.getValue('table_name');
    var count = ga.getAggregate('COUNT');
    var sizeGB = ga.getAggregate('SUM', 'size_bytes') / 1024 / 1024 / 1024; // bytes to GB

    // Example output: Size in GB: 12.8 | Count: 154 | Table: incident
    gs.print("Size in GB: " + sizeGB.toFixed(1) + " | Count: " + count + " | Table: " + table);
  }
}

 

 

5.2 Audit related tables

Audit tables are among the most aggressive growers in any ServiceNow instance. Keep the following general best practices and strategies in mind:

 

  • Auditing can be enabled at the table level (including all fields) and at the individual field level
  • You may use inclusion and/or exclusion lists of fields to be audited for a specific table (prefer an inclusion list over an exclusion list)
  • Disable auditing via the dictionary for all non-required audit records and define a retention period for audit records (or use the recommended Audit Management Console as the preferred option); a sketch to review the current audit flags follows after this list
  • Archiving audit records is usually less practical than deletion via a table cleaner (data management policy)
  • Be aware that OOTB many tables are audit-enabled by default (for example, in the ITSM and CMDB areas)
  • Refrain from auditing fields with a high update frequency (like the last login timestamp on a user record) unless strictly required
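As mentioned in the list above, table-level auditing is driven by the dictionary. A hedged way to review which tables currently carry the audit flag is to query the collection entries for the audit=true attribute; this relies on the common dictionary convention, so double-check the results against the Audit Management Console.

// List tables whose collection dictionary entry carries 'audit=true'.
var dict = new GlideRecord('sys_dictionary');
dict.addQuery('internal_type', 'collection');          // table-level entries only
dict.addQuery('attributes', 'CONTAINS', 'audit=true'); // table-level audit flag
dict.orderBy('name');
dict.query();
while (dict.next()) {
    gs.print('Audited table: ' + dict.getValue('name'));
}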

 

The audit-related data model involves several interconnected tables:

  • sys_history_*: sys_history_set and sys_history_line are built from sys_audit for faster audit access. sys_history_set records are deleted 30 days after their last update; sys_history_line records are under table rotation and are dropped after 28 days. Guidance: DO NOT TOUCH; the platform maintains these tables itself.
  • sys_audit: Holds all historical information for audited records, including journal field entries (journal entries are ultimately stored in the sys_journal_field table). Can be archived or deleted upon archival of the related process records. Guidance: include in archiving / deletion strategies and configurations.
  • sys_audit_delete: Contains the information necessary to restore deleted records. Audit deletes only when it is an absolute must; remove records with the table cleaner (data management policy). Guidance: include in archiving / deletion strategies and configurations.
  • sys_audit_relation: Tracks relationship changes between audited records. Cannot be archived along with process records. Guidance: include in deletion strategies and configurations.

 

Different system properties steer the auditing of records and changes. Take note of the following recommendations and comments before making custom changes:

 

Property | Description | Default value | Comment
glide.sc.audit.variables | Audit changes to service catalog variables. | true | When enabled, variable value changes are audited when changed from the associated task.
glide.sys.audit_inserts | Create audit records for the initial values when a record is inserted. | false | Do not enable; it causes massive growth of the audit table.
glide.ui.audit_deleted_tables | List of system tables that will have deletes audited. | sys_user, sys_user_group, sys_user_role, sys_user_has_role, sys_user_grmember, sys_group_has_role, sys_security_acl_role, sysapproval_approver, sys_report, sys_app_application, sys_app_module | Avoid unnecessary delete auditing for system tables. Remove tables you do not need to audit and consider field-level auditing instead.
glide.db.audit.ignore.delete | Specifies the tables where record deletion is not audited. | N/A (does not exist OOTB) | Another option to prevent delete auditing. If not in use, disable auditing of deleted records on a specific table via the dictionary attribute.
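To check where an instance currently stands, the properties above can be read with gs.getProperty; the second argument serves as a fallback when the property does not exist.

// Print the effective audit-related property values.
gs.print('audit_inserts: ' + gs.getProperty('glide.sys.audit_inserts', 'false'));
gs.print('audit_deleted_tables: ' + gs.getProperty('glide.ui.audit_deleted_tables', ''));
gs.print('ignore.delete: ' + gs.getProperty('glide.db.audit.ignore.delete', 'not set'));
gs.print('sc.audit.variables: ' + gs.getProperty('glide.sc.audit.variables', 'true'));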

 

In the context of auditing, the platform audits record deletion by default if enabled for specific tables or as part of the OOTB configuration. While this can be a lifesaver when restoring deleted data, sys_audit_delete is one of the tables that grows quickly over time. A fundamental question to answer is how long, from an operational perspective, this audit data needs to be kept for a potential review. OOTB the data is kept indefinitely, but operationally you might only want to restore data from the last couple of months; anything older might not be of interest, or might be too risky to restore. Common best practices to keep in mind:

 

  • For large tables, consider creating indexes on the important fields used for queries and filtering (table name and sys_created_on); this makes operational tasks more efficient
  • Assess the table contents but avoid querying the entire table; instead, review data within a short timeframe and increase it gradually to assess which data is retained (see the sketch after this list)
  • Prevent delete auditing for specific tables where possible (preventing auditing is a better approach than cleanup activities)
  • Delete expired / old records using a table cleaner / data management policy to continuously clean up the table (for example, all records older than six months)
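Following the advice above to assess only a short timeframe, a sketch like the one below counts recent delete-audit records per table. It assumes the table name is stored in a tablename field (as on sys_audit); verify this on your instance.

// Count delete-audit records per table for the last 7 days only,
// to avoid scanning the entire (potentially huge) table.
var del = new GlideAggregate('sys_audit_delete');
del.addQuery('sys_created_on', '>=', gs.daysAgoStart(7));
del.addAggregate('COUNT');
del.groupBy('tablename'); // assumed field name -- verify first
del.query();
while (del.next()) {
    gs.print(del.getValue('tablename') + ': ' + del.getAggregate('COUNT'));
}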

 

Besides the dictionary and some system properties, it is a common challenge to keep track of which tables and fields are audited. In addition, you usually want to define a retention period for audited data (the maximum timeframe to keep audit data, deleting everything older). The tool of choice to track and manage auditing is the Audit Management Console, introduced with the Zurich release:

 

  • Oversee all tables and their current auditing enablement
  • Enable / disable auditing at the table level as well as the individual field level
  • Access and configure retention periods for audited data

 

Audit Management Console.PNG

 

Changes to the audit configuration on table or field level result in a direct update of the dictionary entries (and the creation of corresponding updates within the appropriate update sets); this is useful for configuring on a sub-production instance and staging the updates up to production. A configured retention period is stored in an internal table (sys_audit_retention) and used by an internal scheduled job (AuditCleanerScheduleJob), which runs daily and picks up the defined retentions to clean up the targeted audit data. All executions and their progress can be accessed via a logging table (sys_audit_cleaner_job_progress).

 

Audit Cleanup Job Progress.PNG

 

5.3 System logs

The syslog tables in ServiceNow are specific system tables that store log entries related to system activity, performance metrics, errors, and warnings. These logs are generated automatically by the system during various operations, such as executing scripts, processing workflows, or handling integration requests. The syslog tables enable administrators and support teams to track events, errors, and other significant activities that occur within ServiceNow instances.

 

Various OOTB system logs (and related tables) are configured with a generous table rotation. By default, 42 days of logs are kept. If a shorter retention period is required, the table rotation configuration can be changed to keep the size of the respective tables under control and avoid overly large system logs piling up. Rotation configurations use two main parameters to determine the maximum number of days logs are kept (the duration in days per rotation and the total number of rotations). The following simple formula determines how many days of logs are kept:

 

Days to keep = Days Duration * (Number of Rotations - 2)

 

The rotation configuration can be adjusted per table to keep data for fewer days.

 

System Logs.PNG

 

As an example, you may want to keep only 30 days of logs because that is sufficient from an operational perspective. You could keep the total of 8 rotations but reduce the duration to five days (which can lead to an overall size reduction of up to roughly 25%, assuming the system workload and the number of log entries remain stable):

 

30 Days = 5 Days Duration * (8 Rotations - 2)
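The formula is simple enough to sanity-check in a one-off background script before touching any rotation configuration; a minimal helper:

// Helper implementing the retention formula from above.
function daysKept(daysDuration, numberOfRotations) {
    return daysDuration * (numberOfRotations - 2);
}

gs.print(daysKept(7, 8)); // OOTB default: 42 days
gs.print(daysKept(5, 8)); // reduced duration: 30 days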

 

Besides those configurations, the amount of data being logged, especially in the system or application-specific logs, can be directly influenced by the customer and implementation teams:

 

  • Establish development guidelines and standards for which information to log (and at which logging level)
  • Introduce system properties defining the logging level for custom development (scripts, applications, ...) with proper logging levels (info, warn, error, debug)
  • Minimize logging on production and enable operations teams to switch log levels to a more verbose setting if required
  • Avoid legacy (gs.log / gs.print) and client-side (console.log / jslog) logging statements
  • Prefer the GSLog API, which adds important information like the caller scope and respects a logging level defined via a system property:

    // logger instance using a system property defining the log level and the source script
    var logger = new GSLog("my.logging.level", "MyUtilsScript");
    logger.logInfo("processing started"); // written only if the property allows the info level

 

5.4 Email

The mail tables grow rapidly in instances with high notification volumes, either outbound or inbound. Evaluate whether mails need to be retained on the platform at all, and to which extent (define this in line with your retention policies). The Email Retention plugin provides built-in archive and destroy rules, which are active by default for new instances. It is recommended to keep the default settings as a starting point to ensure older and less relevant mails get archived first (ar_sys_email) and purged later.
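Before changing any retention rules, it helps to know the actual mail volume. A small aggregation on sys_email by type (for example, received vs. sent) over a recent window gives a quick picture; a minimal sketch:

// Mail volume by type for the last 30 days.
var mail = new GlideAggregate('sys_email');
mail.addQuery('sys_created_on', '>=', gs.daysAgoStart(30));
mail.addAggregate('COUNT');
mail.groupBy('type');
mail.query();
while (mail.next()) {
    gs.print(mail.getValue('type') + ': ' + mail.getAggregate('COUNT'));
}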

 

Mail Retention.PNG

 

From a requirements and best practice perspective, it is recommended to minimize mail notifications as much as possible. Mail communication (especially for inbound scenarios) is quite limited in the information it can carry and parse, and mail remains an asynchronous medium with unstructured data. In addition, you want to maximize user adoption of the appropriate platform features like portals and workspaces.

 

5.5 Platform Analytics

A common oversight is the data collected by Platform Analytics (formerly Performance Analytics) and the related indicators and jobs. OOTB, many pre-configured dashboards, reports, and KPIs are collected across many different platform capabilities. Over time, the data collection jobs accumulate a large amount of data (based on indicators, sources, breakdowns, and more), but it is not kept forever. Collected scores and snapshots grow over time and can affect system performance. ServiceNow uses a scheduled job to remove outdated scores and snapshots: the Clean PA Collections job can be enabled and run daily, and it ensures that only a small amount of data is deleted each time, with no significant impact on system performance. Two important system properties / configurations define the retention periods of both scores and snapshots (per collection period).
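To verify whether the cleanup job is present and active on an instance, a hedged lookup by name can help. This assumes the job is a Scheduled Script Execution (sysauto_script) and that its name contains the fragment mentioned above; the exact name may vary per release.

// Look for the PA cleanup job; the 'Clean PA' name fragment is an
// assumption based on the job name mentioned above -- adjust if needed.
var job = new GlideRecord('sysauto_script');
job.addQuery('name', 'CONTAINS', 'Clean PA');
job.query();
while (job.next()) {
    gs.print(job.getValue('name') + ' | active: ' + job.getValue('active'));
}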

 

PA.PNG

 

Outlined above are the OOTB settings for the PA cleanups. For example, daily collected scores are kept for two years and daily collected snapshot data for half a year. These settings can be freely adjusted but should be changed with potential reporting requirements in mind.

 

6. (Orphan) User data and settings

In the introduction of this article, we learned about the importance of understanding the lifecycle of the data on the platform. User-specific data is a good example, not just in the context of compliance or data privacy, but also because of its relevance as core platform data. User records are usually created in large numbers, representing employees, staff, externals, or even technical user accounts. They also have a lifecycle and a state: employees join or leave the company, and the user record (sys_user) must reflect their status (active / inactive / logged out).

Users working on the platform leave a footprint. They use and navigate through different applications, create bookmarks and favorites, run reports, file items in their personal shopping cart, and more. All this metadata and these user-specific settings are stored in different system tables accordingly. But what happens when a user becomes (temporarily or permanently) inactive? It is quite common, due to regulatory and audit-relevant requirements, that the user account remains in the system and is not cleaned up for a long time. The user-specific metadata also remains in the system, even though it has become useless. Outlined below are a number of system tables that contain user-specific settings and configurations and can therefore be considered for cleanup as well.

 

User.PNG

 

All the system tables above keep a reference to a user record for personal settings (while some records within those tables are system-wide and not user-bound). For example, users might have created their own bookmarks (and bookmark groups). Once a user becomes inactive (and stays inactive), that data remains in the system without use. Using the OOTB table cleaner (data management policy), you can perform a regular cleanup. Make sure the right condition is applied, for example to only clean up records of users who are inactive (ideally with additional conditions depending on the user lifecycle):

 

// sample condition for a cleanup rule for the targeted table
User.Active = false && User.Last login timestamp < Last 6 months
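A table cleaner entry encodes exactly this kind of condition. The sketch below creates a hypothetical cleaner for user bookmarks (sys_ui_bookmark); the sys_auto_flush field names reflect the Table Cleaner module but should be verified on your instance, and any such rule belongs on a sub-production instance first.

// Hypothetical table cleaner: delete bookmarks of inactive users that
// have not been updated for ~6 months. Verify field names and test first!
var cleaner = new GlideRecord('sys_auto_flush');
cleaner.initialize();
cleaner.setValue('tablename', 'sys_ui_bookmark');    // target table
cleaner.setValue('matchfield', 'sys_updated_on');    // age is measured against this field
cleaner.setValue('age', 180 * 24 * 60 * 60);         // ~6 months, in seconds
cleaner.setValue('conditions', 'user.active=false'); // only records of inactive users
cleaner.insert();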

 

While the data in those tables is usually not very relevant to the overall database size or system performance, it belongs in your data management and cleanup procedures and best practices.

 

7. Additional resources

ServiceNow Community

ServiceNow Support

ServiceNow Product Documentation

ServiceNow Impact Accelerators
