Parallel Export is a new feature that improves export performance for large datasets by splitting data into smaller chunks and processing them simultaneously.
When to use:
- Dataset has 50,000+ records.
- Exports are slow or resource-intensive.
How it works:
- Data is divided into chunks.
- Multiple export sets are created.
- Processed in parallel across nodes.
Example:
A company exporting 1 million incident records:
- Before: 40–60 minutes.
- After: 5–10 minutes using parallel export.
Key Benefits:
- Faster processing.
- Better performance.
- Reduced system load.
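The chunking step above can be sketched in generic Python. This is a minimal illustration of the idea, not ServiceNow's actual implementation; the chunk size of 100,000 is an arbitrary assumption chosen to match the 1-million-record example:

```python
import math

def plan_chunks(total_records, chunk_size=100_000):
    """Split a large export into contiguous (start, end) record ranges."""
    n_chunks = math.ceil(total_records / chunk_size)
    return [(i * chunk_size, min((i + 1) * chunk_size, total_records))
            for i in range(n_chunks)]

# 1 million incident records become 10 independent export chunks.
chunks = plan_chunks(1_000_000)
```

Each range can then become its own export set, which is what turns one large job into many small ones.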
Parallel Export in ServiceNow is a performance optimization feature designed to handle large data exports more efficiently by dividing the workload into smaller parts and processing them simultaneously. In a traditional export process, all records are handled sequentially within a single export set, which can become slow and resource-intensive when dealing with large datasets, especially those exceeding 50,000 records or involving complex scripts and transformations.
Parallel Export addresses this limitation by splitting the data into multiple smaller chunks, where each chunk represents a subset of the total records. For each chunk, a separate export set is created, effectively converting one large job into multiple smaller jobs.
These export sets are then processed in parallel across different worker threads or nodes within the ServiceNow instance. This means that instead of a single node processing the entire dataset, multiple nodes handle different portions of the data at the same time, significantly reducing the total processing time.
Once all chunks are processed, the system consolidates the results into a single output file, such as CSV or Excel, so the end user still experiences it as one complete export. This approach is particularly beneficial for scheduled exports, integrations, reporting, and data migration scenarios where performance and time efficiency are critical.
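The consolidation step can be sketched as follows: each chunk's rows are appended behind a single CSV header, so the end user receives one file. Again, this is a generic Python illustration of the concept, not the platform's internal code:

```python
import csv
import io

def consolidate(chunk_outputs, fieldnames):
    """Merge per-chunk row lists into one CSV string with a single header."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    for rows in chunk_outputs:  # chunks are appended in order
        writer.writerows(rows)
    return buf.getvalue()

# Two chunks, each contributing its own rows, merged into one output.
chunk_outputs = [
    [{"number": "INC0001", "state": "Closed"}],
    [{"number": "INC0002", "state": "Open"}],
]
merged = consolidate(chunk_outputs, ["number", "state"])
```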
The feature is typically triggered when the dataset exceeds a certain threshold, controlled by the system property glide.scheduled_export.min_rows_for_parallel_export, which is usually set around 50,000 records.
If this property is not available in a Personal Developer Instance (PDI), it can be created manually.
While Parallel Export provides significant performance improvements, it is most effective in multi-node environments and may not offer benefits for smaller datasets due to the overhead of managing multiple export sets. Overall, it enhances scalability and ensures faster data processing by leveraging concurrent execution.
System Property:
glide.scheduled_export.min_rows_for_parallel_export (default: 50,000)
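The threshold behavior can be expressed as a small decision sketch. The property name and its default come from the description above; the exact decision logic here is an assumption for illustration:

```python
# Mirrors glide.scheduled_export.min_rows_for_parallel_export (default 50,000).
MIN_ROWS_FOR_PARALLEL = 50_000

def choose_export_mode(row_count, threshold=MIN_ROWS_FOR_PARALLEL):
    """Use parallel export only when the dataset exceeds the configured threshold."""
    return "parallel" if row_count > threshold else "sequential"
```

Below the threshold, the overhead of managing multiple export sets outweighs the gain, so the export stays sequential.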
Key Concepts Involved:
- sys_export_set table: stores export jobs
- Workers / Threads: process chunks
- Nodes (multi-node architecture): distribute chunk processing across application servers
- Schedulers / Background workers: assign chunks to workers and manage the job lifecycle
1. sys_export_set table (Export Jobs):
- This is the central tracking table for export operations.
- In Parallel Export:
- Instead of one export record, multiple export set records are created (chunks).
- Each record represents a portion of the total dataset.
- It helps:
- Track progress of each chunk
- Manage retries/failures independently
- Combine results at the end
"Task list where each task = one chunk of data to export"
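The "task list" analogy above can be modeled as one tracking record per chunk, each with its own state and retry count. The field names here are illustrative only; the real sys_export_set table has its own schema:

```python
def create_export_sets(chunks):
    """One tracking record per chunk, so progress and retries are independent."""
    return [{"chunk_id": i, "range": r, "state": "pending", "retries": 0}
            for i, r in enumerate(chunks)]

export_sets = create_export_sets([(0, 100_000), (100_000, 200_000)])

# A failed chunk can be retried without touching the others.
export_sets[1]["state"] = "failed"
export_sets[1]["retries"] += 1
export_sets[1]["state"] = "pending"  # re-queued for retry
```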
2. Workers / Threads:
- Workers (threads) are responsible for executing export chunks in parallel.
- Each worker:
- Picks one sys_export_set record (chunk).
- Processes it independently.
- More workers = more chunks processed simultaneously.
Example:
- 100,000 records = split into 10 chunks.
- 10 workers = all chunks processed at the same time.
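The worker behavior above maps naturally onto a thread pool. This sketch uses Python's standard library to show the pattern; `export_chunk` is a hypothetical placeholder for the real per-chunk export work:

```python
from concurrent.futures import ThreadPoolExecutor

def export_chunk(chunk):
    """Placeholder for reading and serializing one range of records."""
    start, end = chunk
    return f"rows {start}-{end} exported"

# 100,000 records split into 10 chunks of 10,000.
chunks = [(i * 10_000, (i + 1) * 10_000) for i in range(10)]

# 10 workers, so all 10 chunks are in flight at the same time.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(export_chunk, chunks))
```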
3. Nodes (Multi-node Architecture):
- In a multi-node instance, ServiceNow has multiple application servers (nodes).
- Parallel Export distributes work across different nodes.
Role:
- Each node can process multiple chunks.
- Improves scalability and performance.
Example:
- Node 1: processes chunks 1–3
- Node 2: processes chunks 4–6
- Node 3: processes chunks 7–10
Result: Faster execution vs single-node processing.
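The node distribution in the example can be sketched as splitting the chunk list into contiguous blocks, one per node. How ServiceNow actually balances work across nodes is internal to the platform; this only reproduces the assignment shown above:

```python
def assign_to_nodes(chunk_ids, nodes):
    """Split the chunk list into contiguous blocks, one block per node."""
    per_node = len(chunk_ids) // len(nodes)
    assignment = {}
    for i, node in enumerate(nodes):
        start = i * per_node
        # the last node takes any remainder
        end = start + per_node if i < len(nodes) - 1 else len(chunk_ids)
        assignment[node] = chunk_ids[start:end]
    return assignment

# 10 chunks across 3 nodes: 3, 3, and 4 chunks respectively.
plan = assign_to_nodes(list(range(1, 11)), ["node1", "node2", "node3"])
```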
4. Schedulers / Background Workers:
- These are system-level job handlers that:
- Trigger export jobs.
- Manage execution lifecycle.
- Pick export jobs from sys_export_set.
- Assign them to available workers/threads.
- Handle queueing and retries.
Think of them as: "Traffic controllers" deciding which worker runs which chunk and when.
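The "traffic controller" role can be sketched as a simple FIFO dispatcher: it pulls pending jobs off a queue and hands them to free worker slots. This is a conceptual model only, not the platform's actual scheduler:

```python
from collections import deque

def run_scheduler(jobs, worker_count):
    """Dispatch pending jobs to free worker slots in FIFO order."""
    queue = deque(j for j in jobs if j["state"] == "pending")
    dispatched = []
    while queue and len(dispatched) < worker_count:
        job = queue.popleft()
        job["state"] = "running"
        dispatched.append(job["chunk_id"])
    return dispatched

# 5 pending chunks but only 3 free workers: the rest stay queued.
jobs = [{"chunk_id": i, "state": "pending"} for i in range(5)]
running = run_scheduler(jobs, worker_count=3)
```

Chunks that do not get a worker remain pending until a slot frees up, which is how queueing and retries stay orderly.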
Thanks and Regards
Gaurav Shirsat
