When people talk about AI, machine learning, and large language models (LLMs), they often describe them as reducing a lot of toil for human operators by automating incident management, garden-variety debugging, and things of that nature. That's going to keep improving over the next five to 10 years, but I think we're getting really close to a breakthrough with LLMs.
We're going to see what they can do for the maintenance and operation of really complex software systems. They can see around corners in a way that most people can't. It takes a developer or site reliability engineer many, many years to develop a truly coherent understanding of how the whole thing fits together. I've been absolutely gobsmacked by how an LLM can take a 6,000-word strategy doc and pull out the one aspect you care about. Imagine what that could do for a complex system with thousands of microservices that are constantly being deployed and redeployed. With every new release, having something you can direct questions to, something that gives you global knowledge at the snap of your fingers, that's mind-blowing to me. And I think observability will never be the same after those innovations have been incorporated into these products.