← Back to blog

There is still no silver bullet

In 1986, Fred Brooks wrote that no single technology would give software engineering a tenfold improvement in productivity. The hard part of building software, he argued, was never the typing. It was understanding the problem. He called it essential complexity, and distinguished it from accidental complexity, the stuff that gets in the way: slow compilers, bad tooling, manual memory management.

Forty years later, we have AI coding assistants that can scaffold a project, write tests, refactor modules, and generate entire features from a sentence. But the accidental complexity didn't disappear; it changed shape. Instead of fighting slow tools and boilerplate, we're now shipping solutions we can't explain, built on methods we never learned.

The research is not encouraging

METR ran a study in 2025 with experienced open-source developers, people who had contributed to their codebases for years. Real issues were randomly assigned to be worked on with or without AI tools. The developers using AI took 19% longer. Not faster. Slower.

That number should make you pause. These are skilled engineers using frontier models on codebases they know intimately. If anyone should benefit from AI assistance, it's them. Yet 69% kept using the tools after the study ended, even though the tools slowed them down. The tools felt productive, but the data said otherwise.

Then in January 2026, Anthropic published their own findings. Fifty-two engineers learned a new Python library; half used AI assistance, half didn't. The group that used AI scored 17% lower on comprehension tests afterward. They shipped code they didn't understand.

The kicker: the effect was asymmetric. Senior developers held steady; junior developers fell behind. Intuitively, you'd expect AI to level the playing field. Instead, it tilted it further.

A Microsoft and Carnegie Mellon study from 2025 found something adjacent. The more people relied on AI tools, the less critical thinking they engaged in. The researchers called it cognitive offloading. When you stop doing the thinking, you lose the ability to do it.

The mechanism is simple

For decades, the struggle of writing code was where developers built understanding of what they were writing. You couldn't implement a webhook handler without understanding webhooks. You couldn't write a database migration without understanding schemas. The accidental complexity was annoying, but it was also the forcing function for learning the essential complexity.

AI removes the struggle; that's the whole point. But it removes the learning with it.

This week I asked my AI assistant to add Stripe billing with webhooks. The code 'magically' appeared: it handled checkout sessions, invoice payments, and subscription cancellations, and it even included idempotency checks. I reviewed it, it looked reasonable, and I shipped it.

Then a code review caught that the idempotency check had a race condition. Two concurrent webhook deliveries could both pass the duplicate check before either one recorded the event. I didn't catch it during my review because I never really understood how Stripe's retry mechanism worked. The AI wrote plausible code and I accepted it.
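To make the failure concrete, here is a minimal sketch of the fix, not the actual shipped code. The names (`processed_events`, `handle_webhook`, `evt_...`) are hypothetical, and SQLite stands in for whatever database the handler uses. The idea: instead of a check-then-record sequence, which two concurrent deliveries can interleave, the handler records the event ID atomically and lets a unique constraint reject the duplicate.

```python
import sqlite3

# In-memory stand-in for the application's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE processed_events (event_id TEXT PRIMARY KEY)")

def handle_webhook(event_id: str) -> bool:
    """Return True if this delivery should be processed, False if it's a duplicate.

    The buggy version did: SELECT to check for the ID, then INSERT after
    processing. Two concurrent deliveries could both pass the SELECT before
    either INSERT ran. Here the INSERT itself is the check: the primary-key
    constraint guarantees exactly one delivery per event ID succeeds.
    """
    try:
        conn.execute("INSERT INTO processed_events VALUES (?)", (event_id,))
        conn.commit()
        return True  # we claimed the event; safe to process it
    except sqlite3.IntegrityError:
        return False  # another delivery already claimed this event ID

print(handle_webhook("evt_123"))  # True: first delivery, process it
print(handle_webhook("evt_123"))  # False: Stripe retry, skip it
```

The point of the unique constraint is that the claim and the check become one atomic operation, so there is no window for a second delivery to slip through.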

That's what essential complexity looks like when you skip it. The code compiled and the tests passed, but the understanding was missing.

Brooks was right, but incomplete

Brooks said there would be no silver bullet for essential complexity. He was right. But he didn't anticipate a technology that would make it possible to skip the essential complexity entirely and still ship working code. At least for a while.

Every previous abstraction still required understanding. You could use an ORM, but you still had to understand queries. You could use a framework, but you still had to understand request lifecycles. The abstraction made you faster, not ignorant.

AI is different. You can produce correct-looking code without understanding the problem it solves. Unlike a bad query that fails loudly, code you don't understand fails quietly. Weeks later, under conditions you didn't think to test for.

The only path forward runs through understanding

So how do you keep developers engaged with essential complexity when AI handles all the accidental complexity for them?

You can't just tell people to "be curious" or "review the AI output carefully." The METR study showed developers thought they were being productive while being slower. The Anthropic study showed developers thought they understood the code while scoring lower on tests. Self-assessment doesn't work when the tools make everything feel easy.

After the webhook bug, my AI assistant asked me if I could explain why Stripe's at-least-once delivery guarantee makes idempotency necessary. I couldn't, not really. So I learned it. That question, timed right, about the thing I was actually shipping, was worth more than any documentation I would have skimmed.

That's the idea behind Entendi. It watches what concepts you're working with and checks whether you understand them. Not a quiz, not a gate. More like a colleague who notices you're shipping code that touches distributed systems primitives and asks if you've thought about the failure modes.

What happens next

I think we're going to see a split. Some teams will optimize purely for speed: maximum AI delegation, ship everything, no questions asked. They'll move fast until something breaks in a way nobody on the team understands. (And then they'll stop moving entirely.)

Other teams will use AI for what it's actually good at, writing the code, while making sure their people still understand what the code does. Those teams will be slower to start with, and a lot faster when it matters.

The silver bullet was never going to be a tool that writes code for you. If there's anything that gets us closer, it's making sure the humans using these tools keep understanding what they build. Essential complexity doesn't care whether you engage with it. It just waits.