"Vibe coding" has become a polarizing concept in software engineering conversations. Advocates echo this as a path to flow, where you can use AI's buoyancy for turning intent to outcome. Detractors dismiss this as a path to unmaintainable systems and production disasters. But are these the right questions to ask?
When experienced engineers push back against vibe coding, their resistance comes from real concerns about quality, safety, and maintainability.
However, these concerns point to a much deeper challenge: the gap between what we can "vibe code" (individual components) and what we actually need to manage (production systems that encompass code, infrastructure, and operational complexity).
Traditional programming requires dual-layer thinking: Your brain must simultaneously hold the high-level intent ("I want users to see their purchase history") AND the low-level implementation details ("I need to join the users table with the orders table, handle pagination, format timestamps..."). You're constantly switching between "what do I want this to do?" and "how do I make the machine understand this?".
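To make the "low-level layer" concrete, here is a minimal sketch of what "join the users table with the orders table, handle pagination, format timestamps" actually looks like in code. The schema, table names, and sample data are hypothetical, chosen only to illustrate the implementation details an engineer must juggle beneath the simple intent "show users their purchase history":

```python
import sqlite3

# Hypothetical schema and sample data (illustration only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER,
                         item TEXT, created_at TEXT);
    INSERT INTO users  VALUES (1, 'Ada');
    INSERT INTO orders VALUES (101, 1, 'keyboard', '2024-05-01T10:15:00'),
                              (102, 1, 'monitor',  '2024-05-03T09:00:00');
""")

def purchase_history(user_id, page=0, page_size=10):
    # The implementation layer: join users with orders, order the
    # results, paginate with LIMIT/OFFSET, and format timestamps.
    rows = conn.execute(
        """SELECT u.name, o.item, o.created_at
           FROM users u JOIN orders o ON o.user_id = u.id
           WHERE u.id = ?
           ORDER BY o.created_at DESC
           LIMIT ? OFFSET ?""",
        (user_id, page_size, page * page_size),
    ).fetchall()
    # Format ISO timestamps for display.
    return [(name, item, ts.replace("T", " ")) for name, item, ts in rows]

print(purchase_history(1))
```

Every line here is the "how do I make the machine understand this?" layer; none of it is visible in the intent "I want users to see their purchase history."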
Vibe coding collapses this thinking into a single layer focused on intent and system design. When you describe what you want to an AI, your brain can dedicate its full capacity to understanding the problem domain, user needs, and architectural decisions. Even with advanced vibe coding, engineers remain crucial: they guide and control the AI's output, using it primarily for high-level tasks within structured environments.
The core tenet of vibe coding is simple: stay at the level of intent, and let the AI handle the implementation.
The catch? This requires you to trust the AI systems you are relying on. If you have to pause and ask, "But does the AI know about the weird edge case in the legacy billing module?" or "What are the downstream impacts of changing this API?" you’re right back in dual-layer thinking, breaking the “flow” state.
The data on vibe coding adoption points to an interesting split: YC startups report that as much as 95% of their code is AI-generated, while enterprises hover around 25%. Why the difference?
For an enterprise engineer, blindly trusting AI-generated code feels like negligence. Their job is straightforward conceptually but extremely hard in practice: build new things fast without breaking anything already in production. This defensive posture is a rational response to the environment they work in.
YC startups showing "95% AI-generated code" aren't just working with smaller systems. They're building systems where "coding" encompasses only a fraction of the total complexity. But "coding" in reality is also about how it uses your existing systems and operates together cohesively. That's the real hard problem. New code is just one component of a much larger system, and currently, it's the only thing we can effectively "vibe code."
Meanwhile, enterprises with "~25% AI-generated code" are trying to retrofit AI collaboration onto systems designed for human comprehension. It's like trying to drive a car on train tracks.
But here's the kicker: our enterprise systems have already grown beyond human cognitive capacity. An average enterprise system has millions of lines of code, thousands of dependencies, and decades of accumulated intuition. No single human understands it completely. We've been operating in this territory for years.
The final and most significant blocker is tribal knowledge, or engineering intuition: the unwritten wisdom held by your most seasoned engineers. It is the intuitive understanding of which services are brittle, which alerts are secretly critical, and how the system really behaves under pressure. This knowledge is rarely documented, and when it is, it goes stale unless maintained daily. It also cannot be fed into any legacy tool or platform.
Resistance to vibe coding isn't just about production safety. It's about confronting the uncomfortable truth that these systems are already too complex for pure human management. AI systems aren't just adding to production complexity; they are revealing that maintainability was already an illusion. And at least for the foreseeable future, AI isn't capable of replacing human programmers on complex projects. The better question is: which high-level tasks, within structured environments, can it already perform with greater speed and efficiency?
The startup vs. enterprise adoption difference isn't really about codebase size. It's about our approach to systems. When seasoned engineers say AI-generated code is unmaintainable, they are implicitly assuming that humans will always be responsible for reading and maintaining production code.
Consider this thought: an AI system writes code that implements your requirements but uses unconventional patterns. Is it "unmaintainable" only because you assume humans need to modify it manually? But if you have another AI system that can read, understand, and modify your previous work, then “vibing-maintaining” becomes an AI-to-AI communication problem.
We're moving from human-readable code to AI-readable systems based on human design and guardrails. The question shouldn’t be "can I understand this code?" but "can I use AI to understand this system's behavior well enough to direct its evolution?" This shift in thinking is not trivial. It's about confronting the reality that most human changes are also context-blind. We just pretend otherwise because admitting the full scope of what we don’t know about our production systems is too uncomfortable.
The real maintainability question now becomes: can I effectively collaborate with AI to evolve this system over time? This requires an entirely different skill set from traditional production operations.
It's time we accept that complete human understanding of production systems is impossible, and that we need AI systems as digital teammates that can navigate the complexity autonomously and at scale. This represents a philosophical shift from "systems should be understandable" to "systems should be navigable."

Agentic AI systems are designed not just to write code, but to understand production systems. By integrating directly with operational data (traces, logs, metrics, and incident history), they build a complete model of how your system actually works, including its hidden dependencies and learned behaviors. They don't need a human to explain the tribal knowledge, because they learn it directly and continuously from every interaction.

When agentic AI systems work with you in production, what you get is not a generic solution from the internet; it's a reasoned recommendation grounded in the reality of your system. They can investigate incidents autonomously, explain complex dependencies, and guide resolutions with an awareness that, until now, existed only in the minds of your senior staff. This frees the engineer from the fear of the unknown, finally creating the psychological safety required to "give in to the vibe."
Resolve AI is the agentic AI company for software engineering founded by the co-creators of OpenTelemetry.
Resolve AI understands your production environments, reasons like your seasoned engineers, and learns from every interaction to give your teams decisive control over on-call incidents with autonomous investigations and clear resolution guidance. Resolve AI also helps you ship quality code faster and improve reliability by revealing hidden system context and operational behaviors. With Resolve AI, customers like DataStax, Tubi, and Rappi have increased engineering velocity and systems reliability by putting machines on call for humans and letting engineers just code.
Varun Krovvidi
Product Marketing Manager
Varun is a product marketer at Resolve AI. As an engineer turned marketer, he is passionate about making complex technology accessible by blending technical fluency with storytelling. Most recently, he was at Google, bringing the story of multi-agent systems and products like the Agent2Agent protocol to market.
Manveer Sahota
Product Marketing Manager
Manveer is a product marketer at Resolve AI who enjoys helping technology and business leaders make informed decisions through compelling and straightforward storytelling. Before joining Resolve AI, he led product marketing at Starburst and executive marketing at Databricks.