‎12-26-2017 09:18 AM
Lately the change team has been getting this error, and I also ran into it when adding a True/False field to the Change Request table:
Syntax Error or Access Rule Violation detected by database (Row size too large (> 8126).
I was able to create the field, but whenever the change team hits this error they have to add the information as an attachment instead.
While researching this error I came across the following:
Do not add more than 10 medium-length or longer String fields to a single table. Attempting to save a large number of characters in 11 or more String fields can result in the following error: Syntax Error or Access Rule Violation detected by database (Row size too large (> 8126). Add and customize a field in a table
Seeing as the Task and Change Request tables together already have nine string fields longer than 256 characters, I was wondering what workarounds others have found for adding more fields despite this constraint. So my question: what are the best practices when working with string-heavy records?
Thank you
(Semi related interesting read: https://www.servicenowguru.com/scripting/client-scripts-scripting/maximum-length-large-string-fields...)

‎05-23-2018 03:25 AM
Hi, I know this is an old question, but I thought I could still share what I found out.
There is not much you can do: ServiceNow uses an InnoDB paging option with the smallest possible page size, which unfortunately means you can use only a fraction of the physical row size otherwise available in MySQL / MariaDB (depending on the instance version).
Basically, any field you add uses up some space depending on its size - e.g. a string with max length 40 is mapped to varchar(40) and uses exactly the number of bytes stored in it for each record (max 40).
A true/false field maps to tinyint(1) and should use only 1 byte - but I would have to confirm this with ServiceNow, as I'm not 100% sure.
Bigger fields, such as strings longer than 255 characters or any HTML-type field, are a bit more complex. Each such field uses up to 768 bytes of your limit (less if the text in that particular field is shorter); if there is more content, the remaining data is kept in a BLOB and referenced - but you don't have to care how that works, as it happens in the DB layer. You only need to know that if you put 10 HTML fields on a table, you are basically stuck, as they can eat up 7680 bytes.
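To make the arithmetic concrete, here is a small sketch that estimates the worst-case on-row footprint of a field list against the 8126-byte limit. The per-type byte costs are the figures claimed in this thread (1 byte for booleans, up to 768 for long strings/HTML, 32 for references), not official ServiceNow numbers, and the field descriptors are made up for illustration:

```javascript
// Rough per-field on-row byte cost, per the InnoDB behavior described
// above. These costs are assumptions from this thread, not official figures.
function onRowBytes(field) {
  switch (field.type) {
    case 'boolean':
      return 1; // stored as tinyint(1)
    case 'string':
      // Short strings map to varchar(n): worst case n bytes on-row.
      // Strings > 255 chars keep up to 768 bytes on-row, with the
      // remainder spilled to an off-page BLOB.
      return field.maxLength > 255 ? 768 : field.maxLength;
    case 'html':
      return 768; // same on-row prefix as a long string
    case 'reference':
      return 32;  // sys_id reference
    default:
      return 0;   // other types ignored in this sketch
  }
}

// Worst-case footprint for a list of fields vs the 8126-byte limit.
function rowBudget(fields) {
  var used = fields.reduce(function (sum, f) {
    return sum + onRowBytes(f);
  }, 0);
  return { used: used, remaining: 8126 - used, overLimit: used > 8126 };
}
```

For example, ten HTML fields alone account for 7680 of the 8126 available bytes, leaving almost no room for anything else on the row.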
Solutions officially proposed by ServiceNow (and, to be honest, I don't see many alternatives):
1. Remove unused fields (especially strings) and reuse existing fields where possible.
2. Reduce the max field length for strings (quite often the max length is increased only so that the field appears bigger in the UI - is that really necessary?).
3. What we had to do in one of our projects: change big string fields and HTML fields into reference fields. Create a custom table where all that content is stored and point your references at it. This reduces the size needed for each field from 768 bytes to 32. It is not an easy process, though:
- Check for existing data - you will have to write a background script to copy existing data to the new table automatically if you want it to keep working.
- Check for any scripts that use those fields - you will have to update them to get the value by reference rather than directly from the field, e.g. what was previously yourtable.yourfield becomes yourtable.yourreferencefield.newtablecontentfield.
- You may hit issues changing field types - we did it via script, and I remember some odd workarounds were needed to make it work. You can also do it manually one field at a time, which takes longer but is safer.
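The data-copy step in option 3 can be sketched in plain JavaScript. This is a simplified in-memory model of what the background script has to do (on the platform you would use GlideRecord instead); the field names u_notes, u_notes_ref, and u_content are hypothetical:

```javascript
// Move a large field's content into a separate content table and leave
// a 32-byte reference behind. `records` models rows of the source table;
// the returned object models the new content table keyed by sys_id.
function migrateLargeField(records, fieldName, refFieldName) {
  var contentTable = {}; // models e.g. a custom u_large_content table
  var nextId = 1;
  records.forEach(function (rec) {
    if (rec[fieldName] == null) return; // nothing to migrate
    var sysId = 'id' + nextId++;        // stand-in for a real sys_id
    contentTable[sysId] = { u_content: rec[fieldName] };
    rec[refFieldName] = sysId;          // scripts now dot-walk via this
    rec[fieldName] = null;              // free the on-row bytes
  });
  return contentTable;
}
```

After migration, a script that read `yourtable.yourfield` has to dot-walk `yourtable.yourreferencefield.newtablecontentfield` instead, as described above.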
Hope this clarifies some of it. I wish ServiceNow would release official documentation on field sizes and mappings, as the current docs are not really useful for architects.
EDIT: Oh, and you do not have to worry about fields on your base table, such as Task - those do not eat up space on the table you extend from it.

‎08-07-2018 03:43 AM
Thanks for your reply, Tomasz! These are some nice best practices.
I tried deleting some fields in our sandbox, but it didn't seem to help, so I created an incident on HI. They found that the fields had not actually been deleted in the back end and helped clean them up. They also analyzed record size in our task table and suggested moving some fields to offrow storage.
"Offrow Storage was introduced to help customers get around the 8126 byte MYSQL limit for task table. This functionality is only available for Maint users. An Offrow Storage table is a table that is created with a primary key, such as sys_id. These tables are named by using the original storage table's name with a suffix of "_offrow". When an element is migrated into offrow storage, the element's storage column definition (using the element's storage_alias) is copied to the offrow storage table. Then the data is copied from the original table. After updating sys_storage_alias, the corresponding element's original ("on-row") storage column values are set to NULL. "
Hope this helps others.
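The mechanism quoted above can be modeled in a few lines, purely to illustrate the data movement: values migrate to a parallel "_offrow" store keyed by sys_id, and the original on-row column is set to NULL. Table and column names here are illustrative only, and the real process is performed by ServiceNow maintenance, not by customer script:

```javascript
// Toy model of offrow storage: copy one column into an offrow table
// keyed by sys_id, then null the on-row values to reclaim row bytes.
function moveToOffrow(table, column) {
  var offrow = {}; // models e.g. a task_offrow table
  table.forEach(function (row) {
    offrow[row.sys_id] = {};
    offrow[row.sys_id][column] = row[column]; // copy the data over
    row[column] = null;                       // on-row value set to NULL
  });
  return offrow;
}
```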

‎08-19-2018 04:28 AM
Wow, good to know! Thank you for sharing your findings 🙂 This process does not seem to be documented but looks like a better solution than manually moving large fields to another table.
It would be great to hear your feedback on this after some time - has it caused any noticeable adverse effects such as slowness or any issues with accessing those fields? I hope it works OK!