Ask Questions Until You Have a Complete Understanding of the Problem
You’ve probably seen the “car wash question” floating around AI forums by now. It’s become something of a Turing test for prompting skills, and a lot of LLMs fail it on the first try.
Here’s the setup: you ask an LLM, “I need to wash my car. The car wash is 100 yards away. Should I walk or drive?” More often than not, the model confidently tells you to walk. After all, it’s only a football field’s distance - why burn gas?
But stop and think about everything that answer assumes. Is the car already at the car wash? Is it even drivable? Maybe you’re going there to meet a friend who’s washing their car, or you’re a detailer arriving for a shift, or the vehicle has two flat tires and a dead battery. The “obvious” answer dissolves the moment you peek under the hood of the scenario.
This isn’t just a party trick. It’s a perfect illustration of how LLMs think - and why your interaction style should change if you want reliable results.
The Helpfulness Trap
Language models are optimized to be helpful and appear competent. When you present them with an ambiguous scenario, they don’t pause to ask whether you’re running a con. A human hearing that car wash question might squint and ask, “Wait, is this a trick?” An LLM just pattern-matches to the shortest path between your words and a plausible answer. One hundred yards? That’s walking distance. Case closed.
The problem isn’t stupidity; it’s eagerness. These systems are designed to reduce friction, which means they’ll fill in your gaps with the most statistically likely assumptions. They don’t know what they don’t know, and they certainly won’t admit it unless you force the issue.
The Safety Valve
There’s a single phrase that inoculates you against this specific failure mode. At the end of every prompt - literally every time you hit enter - append this instruction:
Ask questions until you have a complete understanding of the problem.
That’s it. No complex chain-of-thought templates, no JSON schemas, just a mandate to interrogate the premises.
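If you're sending prompts programmatically rather than through a chat UI, the same habit is a one-liner. Here's a minimal sketch - the helper name and phrasing are my own, not from any library:

```python
# The safety-valve instruction, appended verbatim to every prompt.
CLARIFY_SUFFIX = "Ask questions until you have a complete understanding of the problem."

def with_clarification(prompt: str) -> str:
    """Append the clarifying mandate to a prompt before it's sent to the model."""
    return f"{prompt.rstrip()}\n\n{CLARIFY_SUFFIX}"

print(with_clarification(
    "I need to wash my car. The car wash is 100 yards away. Should I walk or drive?"
))
```

Wrapping every outgoing prompt this way means you never forget the mandate on the one request where it mattered.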
When you add this constraint, the behavior shift is immediate. Instead of one-shotting an answer, the model switches into discovery mode. It will ask whether the car is currently at your location, whether you possess the keys, whether you’re transporting cleaning supplies that would be unwieldy on foot. It surfaces the unknown unknowns - the context you didn’t realize was missing because you were too close to your own problem.
Be warned: this changes the rhythm of the interaction. You may spend twenty minutes answering clarifying questions before a single line of code is written or a decision is rendered. I’ve burned entire afternoons chasing down edge cases with an LLM that I would have missed otherwise. But that time isn’t wasted; it’s front-loaded risk management.
Don’t Chat - Compile
Here’s where most people leave money on the table. When the model fires back its questions, the natural urge is to reply inline, creating a branching conversation tree. Resist this. Instead, treat your prompt as a living document.
When the LLM asks, “Is the car currently operational?” don’t just type “Yes” in the chat box. Scroll up to your original prompt, add the answer to the context, and resubmit the whole thing. Repeat until you have a single, comprehensive scenario description that accounts for every constraint, preference, and boundary condition.
Why go to the trouble? Because conversational drift is real. The longer the thread, the more the model fixates on recent turns and forgets the original goal. By iterating on the root prompt, you’re building a specification that even a weaker model can execute against. Test it yourself: run the car wash question through a lightweight local model using this iterative clarification method. You’ll get a nuanced, conditional answer that acknowledges the ambiguity - something that much larger models fail at when given the naive version.
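The compile-don't-chat loop is easy to mechanize if you're scripting your prompts. Below is one possible shape for it - `PromptSpec` is a hypothetical helper of my own invention, not a real library API - where each clarifying answer is folded back into a single root prompt that gets resubmitted whole:

```python
class PromptSpec:
    """Maintain one root prompt; fold each clarifying answer back into it
    instead of replying inline in a growing chat thread."""

    def __init__(self, goal: str):
        self.goal = goal
        self.facts: list[str] = []

    def record(self, question: str, answer: str) -> None:
        # Capture the model's question and your answer as a standing constraint.
        self.facts.append(f"- {question} {answer}")

    def compile(self) -> str:
        # The compiled spec is what you resubmit, in full, every round.
        parts = [self.goal]
        if self.facts:
            parts.append("Known constraints:\n" + "\n".join(self.facts))
        parts.append("Ask questions until you have a complete understanding of the problem.")
        return "\n\n".join(parts)

spec = PromptSpec("I need to wash my car. The car wash is 100 yards away. "
                  "Should I walk or drive?")
spec.record("Is the car currently at your location?", "Yes, it's in my driveway.")
spec.record("Is the car drivable?", "Yes, it runs fine.")
print(spec.compile())
```

Each round, you paste `spec.compile()` into a fresh conversation. The thread never grows, so there's nothing for the model to drift away from - and the finished spec is a portable artifact you can hand to a smaller model later.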
The Real Lesson
The car wash problem isn’t about cars or walking. It’s about the dangerous comfort of apparent clarity. When you assume the LLM shares your context, you get confident nonsense. When you force it to excavate the full shape of the problem first, you get useful work.
In practical terms, this means your job shifts from asking to specifying. You become the guardian of the missing context, the one who knows that “100 yards” could mean a stroll across a parking lot or a trek through a minefield depending on what’s left unsaid. The developers who treat LLMs like mind-reading oracles will keep getting burned by the edge cases. The ones who treat them like diligent interns - bright, eager, but requiring exhaustive briefing - will find that even modest tools become remarkably capable.
Try it on the car wash question. Watch how quickly the “obvious” answer evaporates once the machine starts asking the questions instead of answering them.