In early 2025, AI researcher Andrej Karpathy casually posted a description of his new coding workflow on X. Using a large language model, he was writing software without a detailed plan or close review, mostly going along with whatever the AI suggested. Karpathy called this method vibe coding.
Within months, product managers with no development background were shipping internal tools. Marketing teams were building dashboards. Analysts were generating scripts to automate workflows that had previously required a ticket to IT.
The productivity gains were impressive. So was something else: a quiet, organization-wide expansion of software in production that nobody had fully reviewed, tested, or sanctioned.
The pattern was not entirely new. For years, low-code and no-code tools had been widening the circle of people who could build software inside an enterprise, but those tools still maintained some structural relationship to the underlying logic.
Vibe coding removed even that tether. A product manager typing a natural-language prompt into an AI assistant was now further from the code than a citizen developer dragging components in a low-code builder had ever been.
“Shadow IT predates low-code. Low-code accelerated it. And now, vibe coding is accelerating it further and much quicker,” says Craig Riegelhaupt, a product marketing director at ServiceNow who covers AI governance.
For decades, security researchers have catalogued the same recurring vulnerabilities in enterprise software: injection flaws, exposed credentials, broken authentication, and insecure data handling. While vibe coding hasn’t introduced any new vulnerabilities, it has increased business risk.
“AI is making it easier to reproduce these vulnerabilities without developers even realizing what’s going on,” says Rennie Naidoo, a professor at the University of the Witwatersrand in Johannesburg, South Africa, who studies enterprise risk and AI-generated code. “It amplifies existing risks rather than creating entirely new ones.”
Veracode’s 2025 GenAI Code Security Report found that 45% of AI-generated code contains security vulnerabilities. AI produces code fast, in large quantities, and with less review, which means there’s a risk that vulnerabilities are being replicated at a pace traditional security processes were never designed to absorb.
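To make the pattern concrete, here is a hypothetical illustration (not an example from the Veracode report) of the kind of injection flaw that AI assistants routinely reproduce from their training data, alongside the parameterized form that closes the hole:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Injection flaw: user input is interpolated directly into the SQL string.
    # A username like "x' OR '1'='1" makes the WHERE clause always true
    # and returns every row in the table.
    cur = conn.execute(f"SELECT id, name FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input as data, not SQL,
    # so the same payload matches nothing.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()
```

Both functions look equally authoritative on the page, which is precisely the problem: nothing in the formatting signals that one of them is exploitable.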
There’s also a structural problem. Most AI coding tools generate output from general training data with no awareness of the enterprise environment into which the code is being deployed and no knowledge of the organization’s data model, security policies, existing integrations, or compliance requirements. The result is code that may function correctly in isolation but is fundamentally disconnected from the context that determines whether it’s safe to run.
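A small, hypothetical sketch of that disconnect (the key name and policy are illustrative, not from any specific tool): a context-unaware assistant will happily inline a credential, because it knows nothing about the organization's secrets policy, while the environment-aware version keeps the secret out of source control entirely.

```python
import os

# What a context-unaware assistant often produces: a credential pasted
# directly into the source (illustrative placeholder, not a real key).
API_KEY_BAD = "sk-live-EXAMPLE-not-a-real-key"  # hardcoded secret: flaw

# Context-aware version: the secret lives in the deployment environment
# (or a vault), so the code works only where it is sanctioned to run.
API_KEY = os.environ.get("PAYMENTS_API_KEY")

def auth_header(key):
    # Fail loudly when the environment is not configured, rather than
    # silently sending an unauthenticated request.
    if not key:
        raise RuntimeError("PAYMENTS_API_KEY is not configured")
    return {"Authorization": f"Bearer {key}"}
```

The first version "functions correctly in isolation," exactly as described above; it only becomes a liability once it lands in a repository, a log, or a shared prompt.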
Because the output looks authoritative—formatted correctly, sensibly named, logically structured—developers don’t scrutinize it as carefully as they would code they’d written themselves, says Naidoo. The dynamic is even more pronounced for non-developers, who may lack the training to evaluate the output at all.
Organizations facing a global shortage of skilled developers have a real incentive to let AI close the gap. The U.S. Bureau of Labor Statistics projects that employment of software developers will grow 15% over the next decade, far faster than hiring can keep pace. That shortage is part of what makes vibe coding so difficult to restrict: When teams can’t hire fast enough, AI-generated code stops being optional and starts feeling necessary.
Deploying it at speed carries governance consequences, though. “Most organizations think visibility means a registry of what’s deployed,” says Riegelhaupt. “That’s table stakes, and most still don’t have it.” What they actually need, he explains, is operational visibility, tracking:
- What data each application touches
- What decisions it makes
- What the exposure looks like if it behaves unexpectedly
- Who owns accountability when the person who built it is gone
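The four dimensions above can be sketched as a single record per application. This is a hypothetical shape, not a ServiceNow schema; every field name here is illustrative:

```python
from dataclasses import dataclass

@dataclass
class AppVisibilityRecord:
    """One operational-visibility entry per deployed application
    (illustrative sketch of the four dimensions discussed above)."""
    app_name: str
    data_touched: list[str]       # what data the application reads or writes
    decisions_made: list[str]     # what it decides without human review
    failure_exposure: str         # what is at risk if it behaves unexpectedly
    accountable_owner: str        # a role that survives the builder leaving

# Example entry for a hypothetical vibe-coded internal tool.
record = AppVisibilityRecord(
    app_name="refund-dashboard",
    data_touched=["customer_pii", "billing"],
    decisions_made=["auto-approves refunds under $50"],
    failure_exposure="unauthorized refunds; customer PII leakage",
    accountable_owner="finance-ops team lead",
)
```

The design point is the last field: ownership is recorded as a role rather than a person, so accountability does not vanish when the original builder moves on.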
Beyond security, there’s a practical gap that the conversation tends to obscure: many vibe-coded applications simply aren’t ready for production. They lack lifecycle management, rollback capability, change tracking, and integration with the systems they need to talk to.
Building a working prototype can happen quickly. Getting that prototype into a state where it can be deployed, maintained, and governed at enterprise scale is the difficult part. And the gap is widening.
Organizations are now enabling employees to engage in agentic development, where AI agents act autonomously, making decisions, accessing data, and executing tasks without waiting for a human to prompt them.
“You have to treat an AI agent less like software and more like an employee,” says Riegelhaupt. “You need to know what it is authorized to do, what data it can access, what decisions it can make on its own, and who is accountable when something goes wrong. Most organizations have none of that defined.”
“We’re never going to convince businesses to simply sit back and stop encouraging vibe coding,” Naidoo says. “We’ve just got to bring balance into the conversation.”
That balance is more architectural than procedural, according to Riegelhaupt. “Guardrails should be automated and invisible to the developer,” he says.
In that model, the platform handles visibility, policy enforcement, and auditability in the background. Security becomes a property of the environment rather than a checklist.
For that to work, though, the AI tools, the data they access, the workflows they power, and the governance layer that oversees them all have to live close together. When governance is bolted on from a separate system, the gap between what’s being built and what’s being monitored is where risk accumulates.
That doesn’t mean organizations need to abandon the tools their teams are already using. Developers and business users are going to experiment with whatever AI coding tools are fastest and most accessible. Trying to prevent that is a losing strategy.
A more realistic approach is to ensure that wherever an application is built, there’s a governed environment where it can be deployed, tested, integrated, and maintained at enterprise scale. The point isn’t to control where people build. It’s to control what happens when what they build needs to run in production.
“Avoidance is not going to be the answer,” Naidoo says. “We’ve got to advocate for responsible vibe coding.”
Find out how ServiceNow can help you ensure responsible vibe coding.