Arbini.Dev

Judgment in the AI Era

The question that mattered was always "is this the right approach?" rather than "does it work?"

But those two questions often got confused. Something could work and still be wrong for the context. A solution could pass tests and still create problems downstream. Implementation could be clean and still miss the point.

That confusion was always there. What's changed is how forgiving the environment is.

The Pace That Forgave

Before AI, software development had a built-in margin for error. The time between decisions created space to think. Writing code by hand forced a kind of engagement that built understanding, even when you weren't aware of it.

When someone headed in the wrong direction, the slowness of implementation meant others could usually catch it before it compounded. A senior engineer might notice in code review. A teammate might raise a question in a standup. The natural friction of building software gave everyone more chances to course-correct.

Wrong direction was recoverable because implementation was slow.

What Changed

AI compresses all of that. What used to take hours now takes minutes. What used to require deep engagement can now be delegated. The friction that forced reflection is largely gone.

This isn't a complaint. It's genuinely useful. For pet projects, demos, and proofs of concept, vibe coding is amazing. You can try ideas quickly, see what works, throw things away and start over. The cost of a wrong direction is low because the investment is low.

But not every context is the same.

In production systems serving tens of thousands of customers and handling real money, "works" isn't enough. The code that "works" but isn't the right approach doesn't stay contained. It spreads. Other code gets built on top of it. Patterns get repeated. Assumptions get baked in.

By the time anyone notices the drift, there's a lot to unwind.

The Skill Is Knowing Which Context You're In

Vibe coding isn't bad. Neither is using AI to accelerate your work. The problem isn't the tools. It's knowing when they're appropriate and when they're not.

That's judgment.

Judgment is the ability to recognize which context you're operating in. To know when speed matters and when it doesn't. To understand that "works" might be good enough for a demo but isn't good enough for a system people depend on.

It's also the ability to catch yourself, or catch the AI, when the direction is wrong. To notice when the solution fits the prompt but misses the problem. To ask "is this the right approach?" even when "does it work?" has already been answered.

What This Means for Hiring

I've been thinking about this in the context of hiring engineers. If judgment is the skill that matters more now, are we actually assessing it?

Most technical interviews optimize for observable output. Can the candidate write code that works? Can they debug? Can they communicate? These are all valuable. But they don't necessarily tell you whether someone knows the difference between "works" and "right," or whether they can recognize when they're in a context that demands the distinction.

AI fluency is now table stakes. I'm not worried about whether candidates can use the tools. I'm thinking more about what signals reveal whether they can use them well, with judgment rather than just velocity.

I don't have a full answer yet. But I'm reconsidering what we look for. Requirements decomposition, the ability to turn a vague task into the right questions, feels more important than ever. So does adaptability: how someone responds when the direction changes midstream.

These aren't new skills. But they matter more now, because the pace no longer forgives their absence.

A Shift, Not a Warning

None of this is meant as alarm. AI is changing how we work, and that change has real implications. But the core of good engineering hasn't changed. It's always been about judgment: asking the right questions, understanding context, and knowing when "works" isn't enough.

What's shifted is how quickly bad judgment becomes expensive. The margin for learning by slow iteration is narrower. The cost of compounding in the wrong direction is higher.

The pace no longer forgives the confusion between "works" and "right."

That's not a problem to fear. It's a shift in what good engineering looks like, and a reminder that the human skill at the center of it matters more, not less.

February 7, 2026