Adam_on_Now
ServiceNow Employee

I can't take credit for the insight below, but I thought it was valuable enough to share so that others interested in Stream Connect and Kafka, and in its challenges (real and perceived), are thinking about this in the right way.

 

----

 

When considering Stream Connect, a customer questioned whether it truly improves inbound flow triggering, suggesting it simply shifts the bottleneck from API calls to flow executions. There is some truth to this: processing is almost always the bottleneck in high-throughput scenarios.

 

However, Stream Connect is designed to optimize how data enters the platform, not how it's processed. By leveraging Kafka, it handles bursty traffic and backpressure far more effectively than APIs, which are prone to timeouts and throttling. This allows the platform to ingest data in near real-time and focus resources on processing.
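
To make the contrast concrete, here is a minimal consumer sketch using the plain Java Kafka client (a generic illustration, not Stream Connect's internals; the broker address, group id, and topic name are placeholders). The key point is that the consumer pulls records at whatever pace processing allows, so a burst simply waits on the broker instead of timing out a synchronous API caller.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class BurstTolerantConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "inbound-ingest");          // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Cap how much work is pulled per cycle: the consumer sets the pace, the broker absorbs the burst.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("inbound.events")); // placeholder topic
            while (true) {
                // poll() returns whatever is ready; if processing is slow, unread records
                // simply wait on the broker instead of timing out a caller.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record.value());
                }
            }
        }
    }

    private static void process(String payload) {
        // Stand-in for the real processing step (flow, transform, script, etc.).
        System.out.println("processed " + payload.length() + " bytes");
    }
}
```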

 

Once data is ingested, customers can choose the most appropriate processing method based on their needs: Extract, Transform, Load (ETL), Robust Transform Engine (RTE), Flow execution, or script. This isolates performance concerns to the processing layer, where optimization efforts should be focused.
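
As a rough illustration of that isolation (hypothetical names, not a ServiceNow API), the ingestion side can stay fixed while the processing strategy is swapped behind a small seam; any tuning then happens entirely inside the processor implementation:

```java
/** Hypothetical seam between ingestion and processing: only the processor changes when tuning. */
interface RecordProcessor {
    void process(String payload);
}

/** Stand-in for one concrete strategy (a script, a transform, a flow trigger, etc.). */
class LoggingProcessor implements RecordProcessor {
    @Override
    public void process(String payload) {
        System.out.println("handled " + payload.length() + " bytes");
    }
}

class IngestionLoop {
    private final RecordProcessor processor;

    IngestionLoop(RecordProcessor processor) {
        this.processor = processor;
    }

    /** The ingestion side never changes; it hands each record to whatever processor is plugged in. */
    void handle(Iterable<String> records) {
        for (String record : records) {
            processor.process(record);
        }
    }
}
```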

 

A common misconception is “Stream Connect is slow,” when in reality, it’s the processing logic that’s underperforming. For example, if data is ingested at 2MB/sec but processed at only 500KB/sec, the entire pipeline feels slow, and Stream Connect unfairly takes the blame. In nearly every CSTASK tied to this perception, the resolution involves improving flow logic, scripts, or RTE definitions.
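
The arithmetic behind that example is worth spelling out: ingesting at 2 MB/sec while processing at 0.5 MB/sec means unprocessed data piles up at 1.5 MB/sec, no matter how quickly Kafka delivers it. A quick back-of-the-envelope sketch, using the rates from the example above:

```java
public class BacklogGrowth {
    public static void main(String[] args) {
        double ingestMBps = 2.0;    // ingestion rate from the example above
        double processMBps = 0.5;   // processing throughput from the example above
        double deficitMBps = ingestMBps - processMBps; // 1.5 MB of unprocessed data per second

        for (int minutes : new int[] {1, 10, 60}) {
            double backlogMB = deficitMBps * minutes * 60;
            System.out.printf("After %3d min the unprocessed backlog is ~%.0f MB%n", minutes, backlogMB);
        }
    }
}
```

After an hour that is roughly 5.4 GB of backlog, which is why the whole pipeline "feels slow" even though ingestion never fell behind.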

Put simply: Stream Connect doesn't create bottlenecks; it reveals them, and that visibility is key to building scalable, performant solutions.