
Over your storage limit? Reducing our storage footprint; our lessons learned.

Janel
Kilo Sage

I can't make articles anymore so I have to post this as a question.  I also had to break this down into multiple smaller posts because some automated thing kept flagging my big post as spam.  So instead, here is a little mini-series on some things we learned while trying to reduce our storage footprint.

 

Recently, we got to go through a lovely exercise of drastically reducing our storage footprint.  We ran into many issues and roadblocks along the way, and I'd like to share what we did to bring our footprint down by nearly 8TB, using mostly out-of-box features and Support.

Here are a few things we learned that helped us figure out what approach to take to deleting records, what we could delete, and when we needed to get Now Support involved.

 

Disclaimer:

This post is about the deletion of records to reduce the storage footprint size at our company.  This is what worked for us; it may not work for you.  Know your own policies and retention periods, and as always, review and test everything in a sub-prod instance first.

 

If you haven't already seen @Mark Roethof's blog about keeping your instance shiny, I recommend you give it a read.

Keeping your instance database footprint tidy (02): Maintain Attachments and Attachment Documents  

 

 

How do you even know your footprint size?

There is a self-service "report" you can run from the Support website, if you have the correct role.

https://support.servicenow.com/kb?id=kb_article_view&sysparm_article=KB0819668

Your account team should also be able to pull you a more detailed report.

 

Be warned!  This is only a daily snapshot and not a real report!  The numbers are a point-in-time snapshot, refreshed roughly every 4 hours.  It provides no insight into what may be growing or shrinking, only where things stood at that moment.  We eventually started checking it daily and copying the information into Excel to see where changes were happening.
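To go with that Excel tracking, a quick background script can log row counts for the tables you are watching.  Row counts are not bytes, and the table list below is just our example, but trending the counts day over day helps show where growth is coming from:

// Background script sketch: log row counts for a hand-picked list of large tables.
// Row counts are not disk bytes, but the day-over-day trend shows where growth is.
var watched = ['sys_attachment', 'sys_attachment_doc', 'sys_email', 'sys_audit_delete']; // example list
for (var i = 0; i < watched.length; i++) {
    var ga = new GlideAggregate(watched[i]);
    ga.addAggregate('COUNT');
    ga.query();
    if (ga.next())
        gs.info(watched[i] + ': ' + ga.getAggregate('COUNT') + ' rows');
}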

 

They (SN) round everything up to the next full TB when you are presented the bill.  So if you are 1GB, or even 1MB, over a TB boundary, you are billed for the full additional TB (for example, a 12.001TB footprint bills as 13TB).  What makes that more confusing/frustrating is that the rounding up is not reflected on the Database Footprint "report".

If you are targeting a certain footprint size, make sure your actual usage lands comfortably under that target, not just barely near it.

 

 

All instances count towards your TOTAL storage footprint.

Yes, this includes all of your sub-production instances!

All instances come with 4TB of space (confirm with your account team).  You can purchase additional space in blocks of 1TB.  Any additional space you purchase goes into a pool that is shared across all your instances.  If any instance goes beyond its 4TB limit, the overage is drawn from that shared pool.  For example, if you purchased 2 extra TB and one instance sits at 4.5TB, that 0.5TB of overage leaves only 1.5TB of pool for every other instance.

 

 

ServiceNow automatically grows tables to reserve free space, but doesn't automatically shrink them

When a table grows, SN will automatically reserve free space for that table (for performance reasons) and lock it into a specific size.  Deleting a large chunk of records from the table does not mean the table size will shrink.

 

There are 3 ways to resize a table:

  1. Use Table Cleaner (which is not always an option).
  2. Open a case with Support and request they do it (in some cases this took us 6+ weeks; one time it took 3 months).
  3. Delete enough records from the table to force DB Compaction (sys_compaction_run) to run.
    1. This is undocumented and new as of Utah.  It replaced the "force_optimize" checkbox on table cleaner records, which went away in Utah.
    2. There is a scheduled job that is supposed to identify tables with enough free space and resize them.  We have only ever seen it run on 2 tables, and neither of them was one of the big ones we were dealing with (see the sketch below for a way to check this on your instance).

If we were unable to use table cleaner to resize the table, we opened a case with Support.
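If you want to verify whether the compaction job has ever touched your tables, you can dump recent sys_compaction_run records from a background script.  Since the table is undocumented, this sketch deliberately avoids guessing at its columns and just prints a link to each run for manual review:

// Background script sketch: list recent DB Compaction runs for manual review.
// sys_compaction_run is undocumented, so nothing beyond the standard
// sys_created_on field is assumed here.
var run = new GlideRecord('sys_compaction_run');
run.orderByDesc('sys_created_on');
run.setLimit(20);
run.query();
while (run.next())
    gs.info('Compaction run ' + run.getValue('sys_created_on') + ': ' + run.getLink(false));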

 

Also, a table can only be resized if enough space has been cleared from it.  The threshold varies by table and size (I thought I heard mention of 30-50%, or something huge like that).  SN doesn't provide any information on what the required free space is, or any way to see whether your specific table has met it (or even a way to guess).  This would be great to know, so we could focus our efforts on the tables that would actually benefit.

 

 

sys_attachment_doc and sys_attachment are 2 different things.

Records on sys_attachment are just metadata about an attachment.  The actual attachment data lives on sys_attachment_doc, which is why that table is so much bigger than sys_attachment.  It is important to make sure you're cleaning up, and looking for orphans, against both tables.

 

Read the section "Attachments: How do they work?" if you want more details about how the two tables work.

https://snprotips.com/blog/2016/2/25/understanding-and-using-glideattachment
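If you want to see the relationship for yourself, you can total the sys_attachment_doc chunks behind a single attachment from a background script.  A minimal sketch; the sys_id below is a placeholder you'd swap for a real sys_attachment record:

// Background script sketch: sum the sys_attachment_doc data behind one attachment.
// Replace the placeholder with a real sys_attachment sys_id before running.
var attachmentSysId = 'PUT_A_REAL_SYS_ATTACHMENT_SYS_ID_HERE';
var doc = new GlideAggregate('sys_attachment_doc');
doc.addQuery('sys_attachment', attachmentSysId);
doc.addAggregate('COUNT');
doc.addAggregate('SUM', 'length');
doc.query();
if (doc.next())
    gs.info('Chunks: ' + doc.getAggregate('COUNT') + ', total bytes: ' + doc.getAggregate('SUM', 'length'));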

 

 

Figure out your record retention policies

Figuring this out pretty much drove everything we did from here.  Set up meetings with your records retention department and learn how long you actually need to keep certain things (emails, incidents, HR records, user records, etc.).  We thought we needed to keep most of our data for 7-10 years; after speaking with our retention department, it turned out much of it only needed to be kept for 1-3 years.

 

Taking that information (and documenting it in a knowledge article!), we were able to go back to any stakeholders/process owners and have the appropriate discussions with them on data deletion.

 

Check in with whoever you need to, get a clear understanding of what you need to keep and for how long, then base your deletion and archiving decisions on that information.

 

 

Things we did

Here are some of the things we implemented to help keep our footprint down.

 

Exclude large tables when cloning.

This was an easy win.  We excluded many large tables from our clone profiles (if you have Discovery, there are a bunch) to keep our four sub-prod instances under the 4TB limit.

 

Check the Database Footprint "report" for large tables and determine which ones you can live without in your sub-prod instances.  If you do need data from some of the large tables, see if an XML export of a limited number of records, pulled only when needed, would be an option instead.

 

If needed, consider setting up multiple clone profiles: one that excludes the large tables, and one that includes them for when you know work will be performed against those tables.

https://docs.servicenow.com/en-US/bundle/utah-platform-administration/page/administer/managing-data/...
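As a quick audit, you can also list what an instance's exclusion list currently contains.  A minimal sketch, assuming the exclusions live in the clone_data_exclude table (the table behind the clone "Exclude Tables" list on recent releases; verify the name on yours):

// Background script sketch: list tables currently excluded from clones.
// Assumption: exclusions live in clone_data_exclude; verify this on your release.
var ex = new GlideRecord('clone_data_exclude');
ex.orderBy('name');
ex.query();
while (ex.next())
    gs.info('Excluded from clone: ' + ex.getValue('name'));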

 

 

Add no_audit_delete=true on tables we didn't need to keep deleted records for

If you have tables where you don't care if something is deleted, open the collection for the table and give it the attribute no_audit_delete=true.  This saves space on the sys_audit_delete and sys_audit_relation tables.

https://support.servicenow.com/kb?id=kb_article_view&sysparm_article=KB0725007

I was not able to get clarification from Support on whether this is needed if the table does not have auditing enabled.

 

When a record is deleted from a table with no_audit_delete=true, it will not go into sys_audit_delete; it will go into oblivion.  You should spend some time figuring out which tables this is okay to do on.  Good candidates are things like import tables, data such as filters or sys_template records, task_ci and CMDB relationship records, or anything that could be easily recreated if it were accidentally deleted.

We focused on relationship tables, where it'd be easier/faster for a user to recreate a record than to contact us to restore one.

2024/01/08 EDIT: Adding no_audit_delete=true will also stop entries being added to sys_audit.  We saw this on sys_email_log: when a record was deleted, it was adding a rather large audit entry.
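To decide where the attribute will actually save space, it helps to see which tables are generating the most delete-audit rows today.  A minimal background-script sketch:

// Background script sketch: row counts in sys_audit_delete, grouped by source table.
// Tables near the top are the best candidates for no_audit_delete=true (assuming
// you can live without the ability to restore their deleted records).
var ga = new GlideAggregate('sys_audit_delete');
ga.addAggregate('COUNT');
ga.groupBy('tablename');
ga.query();
while (ga.next())
    gs.info(ga.getValue('tablename') + ': ' + ga.getAggregate('COUNT'));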

 

 

sys_attachment has many orphans

We learned through multiple Support cases that there are known out-of-box processes that leave orphans behind.  One big one is against ml_model_artifact, leaving 300+MB files orphaned.  Archiving is also known to leave orphaned records behind (I asked for the problem records and haven't seen them yet).

 

Finding and deleting Orphaned attachments (or any other orphaned record)

https://www.servicenow.com/community/developer-forum/finding-and-deleting-orphaned-attachments/m-p/2...

EDIT: We began keeping track of the number of orphans deleted.  On the last run through, out of 4.9 million records, 17,094 were deleted as orphans. 
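For reference, the basic shape of an orphan check is: for each sys_attachment record, verify that the record it points at still exists.  A deliberately cautious sketch; it only counts candidates (the delete is commented out), and it skips ZZ_YY-prefixed entries, which image fields use:

// Background script sketch: count sys_attachment records whose parent is gone.
// Read-only by default; only enable the delete after verifying in sub-prod.
var orphans = 0;
var att = new GlideRecord('sys_attachment');
att.setLimit(10000); // work in batches on large instances
att.query();
while (att.next()) {
    var tbl = att.getValue('table_name');
    if (!tbl || tbl.indexOf('ZZ_YY') === 0)
        continue; // image-field attachments use a ZZ_YY table_name prefix; leave them alone
    var parent = new GlideRecord(tbl);
    if (!parent.isValid())
        continue; // the table itself no longer exists; investigate manually
    if (!parent.get(att.getValue('table_sys_id'))) {
        orphans++; // parent record is gone: orphan candidate
        // att.deleteRecord(); // uncomment only after verifying the query in sub-prod
    }
}
gs.info('Orphaned attachment candidates: ' + orphans);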

 

 

Table Cleaner (and its limits)

https://www.servicenow.com/community/developer-forum/over-your-storage-limit-table-cleaner-and-its-l...

 

 

Archiving may not be your friend

https://www.servicenow.com/community/developer-forum/over-your-storage-limit-archiving-may-not-be-yo...

 

 

sys_email is loud (and full of junk)

https://www.servicenow.com/community/developer-forum/over-your-storage-limits-sys-email-is-loud-and-...

 

 

Wrangling in knowledge articles

https://www.servicenow.com/community/developer-forum/over-your-storage-limits-wrangling-in-knowledge...


Mark Roethof
Tera Patron

Hi there,

 

Just a small correction needed I think:

"Be warned!  This is only a daily snapshot and not a real report!"

 

The report is actually a real-time calculation.

 

Kind regards,
Mark

 
