Vibe coding still sucks


The confusion

I’ve been confused about comments and articles talking about vibe-coding. Many of the comments are along the lines of “it’s only good for small scripts,” or “it always generates garbage.”

That seemed at odds with my own experience with AI-assisted coding. I’ve been getting great results with aider-chat. It’s a command line app with no sub-agents, utilities, or anything like that. I’ve been responsible for much of the planning and guidance, and it’s been working well.

So what if I was missing out? What if these more “advanced” tools were the future, and I was just using the AI equivalent of a text editor while everyone else had moved to IDEs?

I tried to find out.

The experiment

I primarily tested OpenCode, along with ClaudeCode. I wanted to see if the integrated sub-agents, dynamic context management, and "vibe" approach would outperform my current workflow.

The internet comments were right.

These tools are very “chatty,” using a lot of tokens for simple tasks. The context continues to grow instead of being intelligently managed. All those sub-agents consume a lot of tokens. This is slow. This is expensive.

There’s also an over-reliance on the AI to do too much of the work and decision-making. Even when given clear, sequential steps, the tools still try to decide which step to take next - which consumes more tokens and takes a lot of time.

Backend roulette

Maybe the problem was that the prompts weren't tightly coupled enough to the backend AI? I started with micode on OpenCode, since different models "prefer" different prompt syntax. Kimi has different patterns than Claude or ChatGPT.

I tried using Claude and ChatGPT backends instead. Results improved, but not enough to justify the overhead.

ClaudeCode was better still, but it still couldn’t seem to get past a certain level of complexity without burning through context windows and losing track of the plan.

Is it a skill issue?

Am I using it wrong?

I maintain comprehensive documentation. I keep individual components simple and focused. Quality checks are continual and strict. I apply these same principles when I use aider-chat, and I’ve been getting great results there.

The difference isn’t my process. It’s the tools themselves.

The generalization trap

Vibe-coding tools seem to be too generalized. They’re designed for any kind of coding task, any kind of development flow, and any level of developer.

That last part is the killer. They’re not necessarily designed for very good developers. There’s too much hand-holding, too much assumption that the user wants the AI to drive rather than assist.

The loop problem

A lot of the coding process isn’t dynamic. It’s straightforward:

  • Figure out what you need to do
  • Come up with a plan
  • Follow the plan
  • Re-plan when things change

micode, which tries to follow that pattern, isn’t that good at it. But here’s the thing: it’s a loop. Why not just have a loop? Why wrap it in conversation and sub-agents that obscure the basic cycle of plan-do-check-adjust?
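That plan-do-check-adjust cycle can be sketched as an ordinary loop. This is a hypothetical illustration, not the implementation of any existing tool; every name here (`run_task`, `plan`, `execute_step`, `check`, `replan`) is made up to show the shape of the idea:

```python
# Hypothetical sketch of the plan-do-check-adjust cycle described above.
# None of these functions come from a real tool; they stand in for whatever
# "figure out", "do", and "verify" mean in a given workflow.

def run_task(goal, plan, execute_step, check, replan):
    """Drive a task with a plain loop instead of open-ended conversation."""
    steps = plan(goal)                  # come up with a plan
    done = []
    while steps:
        step = steps.pop(0)             # follow the plan, one step at a time
        result = execute_step(step)
        done.append((step, result))
        if not check(result):           # re-plan only when something changed
            steps = replan(goal, done, steps)
    return done


# Toy usage: "plan" two steps and run them.
if __name__ == "__main__":
    history = run_task(
        goal="shout",
        plan=lambda g: ["strip", "upper"],
        execute_step=lambda s: s,       # a real agent would call the model here
        check=lambda r: True,           # always passes in this toy example
        replan=lambda g, done, rest: rest,
    )
    print([step for step, _ in history])  # ['strip', 'upper']
```

The point isn't the code itself; it's that the control flow is fixed and cheap. The model only gets consulted inside the steps, not to decide what the loop should do next.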

The vibe-coding approach treats development like a brainstorming session when it’s often more like following a recipe, then adjusting the recipe when you realize you’re out of an ingredient.

So, I get it

I understand why so many people complain about using these tools and the results they get. The resource consumption alone makes them impractical for iterative work. The inability to respect clear instructions without adding “helpful” interpretation creates friction, not flow.

That doesn’t mean they won’t eventually get good enough. The underlying models are improving. Someone will crack the interface problem.

I’ll keep trying with OpenCode. It isn’t useless, just not all that good. Maybe I’ll use it to write my own coding agent.

Summary

Despite the hype, modern "vibe-coding" tools like OpenCode and ClaudeCode fall short for serious development. They're token-hungry, slow, and suffer from over-generalization - trying to be everything for everyone while making too many decisions for you. After testing a couple of AI vibe-coding tools, I agree with what many internet comments suggested: these tools consume resources inefficiently and hit complexity walls that simpler, focused tools like aider-chat avoid. The issue isn't skill or documentation - it's that vibe-coding tools misunderstand the coding process as a dynamic conversation rather than a disciplined loop of planning, execution, and verification.