Fast, Good, and Cheap: Why the Old Trade-Off No Longer Applies
The iron triangle of software development is dead. AI has changed the math.
This article was written as a message to my team. I'm publishing it here because I believe the ideas apply broadly to any engineering team building products today.
Team,
I want to share some thoughts on something we've been discussing internally. Every developer knows the iron triangle: fast, good, cheap. Pick two. It's been the unspoken law of software development for decades.
I think that rule is dead. And I want to explain why I believe that, because it shapes how I think we should work going forward.
The Time Split Has Flipped
Before AI, building a feature estimated at three days looked something like this: 20-30% of the time went into thinking. Understanding the problem, laying out a plan, figuring out how to tackle it. The remaining 70-80% was spent writing code, writing tests, wiring things together. The mechanical work dominated.
Now that ratio has completely flipped. A three-day task becomes a day and a half. But here's the important part: of that day and a half, roughly 70% goes into thinking about how to solve the problem. AI agents handle the remaining 30%, the actual building.
And this is exactly why quality goes up, not down. Quality scales with thinking time. When you compress the mechanical part and expand the thinking part, you don't just ship faster. You ship something better.
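To make the arithmetic concrete, here's a minimal sketch of the claim above. The percentages and durations are the rough figures from the text, not measurements:

```python
def time_split(total_days: float, thinking_share: float) -> tuple[float, float]:
    """Split a task's duration into thinking time and mechanical build time."""
    return total_days * thinking_share, total_days * (1 - thinking_share)

# Before AI: a 3-day task, roughly 25% thinking, 75% mechanical work.
before_think, before_build = time_split(3.0, 0.25)  # 0.75 days thinking, 2.25 building

# With AI: the same task shrinks to 1.5 days, roughly 70% thinking, 30% building.
after_think, after_build = time_split(1.5, 0.70)    # 1.05 days thinking, 0.45 building
```

Note what falls out of the numbers: total time halves, yet absolute thinking time grows from 0.75 to 1.05 days. The compression all comes out of the mechanical work.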
Why We're Pushing for Speed
We're building commercial products and pushing hard to ship fast. Not because we love velocity for its own sake, but because we want to get to market as quickly as possible. We want real customer feedback. Not hypothetical theories from whiteboards and meeting rooms.
The only way to get meaningful feedback is to put something real in front of paying customers. And that means we need to build momentum: ship fast and ship quality. Because shipping garbage doesn't generate useful feedback. Customers can't tell you what to improve if the product doesn't work in the first place.
Execution Is No Longer the Bottleneck
The biggest shift I've personally felt is that execution is no longer a constraint. When I'm speaking to customers or prospects, I can identify their pain points, find the gaps between what we offer and what they need, and turn that into a solution. Fast.
The loop between learning about a problem and delivering a fix has become incredibly tight. I'm pushing into an 80/20 mentality: ship at 80%, then let the remaining 20% be shaped by real customer feedback from actual usage.
Here's what I want all of us to internalize: don't start by thinking about how much budget or time you have to implement something. Think about how you would build it in a perfect world. And then actually do that. AI removed the ceiling. The constraint used to be "what can I afford to build?" Now it's simply "what's the best solution?"
Rethinking Quality
When I say "quality," I don't just mean clean code. Code quality is one dimension under a much larger umbrella. Quality also means:
- Problem-solution fit. Does this actually solve the customer's problem?
- Implementation ease. How easily can a customer adopt this for their specific use case?
- Business value. Does it serve a clear business need?
Customers don't read your code. They don't care about your abstractions or your test coverage. They care about whether the product works and whether it solves their problem. Beautiful code that solves the wrong problem isn't quality.
This is really important for us as developers to understand: code quality is just one dimension, and it needs to serve a clear business need and a clear purpose.
The New Economics of Code Review and Testing
This part might be controversial, and I'm fine with that.
Why did we do code reviews in the past? Because every change to the codebase was expensive. A developer had to find the issue, understand the context, make the change. And if something slipped through, fixing it later was costly. Code review was a form of insurance: catch problems early to save money later.
But that economic argument has shifted. If fixing a bug reported by a real customer costs almost nothing and takes almost no time, the ROI of heavy upfront review changes fundamentally. The same logic applies to exhaustive test suites.
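A back-of-the-envelope sketch of that ROI shift. Every number here is hypothetical, chosen only to illustrate the shape of the argument, and it assumes the bug is cheaply reversible:

```python
def expected_cost(review_hours: float, bug_prob: float, fix_hours: float) -> float:
    """Expected cost of a change: upfront review plus the expected cost of a later fix."""
    return review_hours + bug_prob * fix_hours

# Old world (hypothetical): a slipped bug cost two days to fix,
# so heavy upfront review that cuts the bug rate pays for itself.
old_heavy = expected_cost(review_hours=2.0, bug_prob=0.05, fix_hours=16.0)  # 2.8
old_light = expected_cost(review_hours=0.5, bug_prob=0.30, fix_hours=16.0)  # 5.3

# New world (hypothetical): AI shrinks the fix to half an hour,
# and the same heavy review no longer earns back its cost.
new_heavy = expected_cost(review_hours=2.0, bug_prob=0.05, fix_hours=0.5)   # ~2.0
new_light = expected_cost(review_hours=0.5, bug_prob=0.30, fix_hours=0.5)   # 0.65
```

The model deliberately excludes irreversible damage: for data loss or security incidents, `fix_hours` is effectively unbounded, which is why those paths still deserve full rigor.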
I'm not saying throw all discipline out the window. There's an important distinction here: yes, we can quickly fix a reported bug now. But some bugs cause real damage that no hotfix can undo. Data loss. Security incidents. Broken trust with customers. Being able to ship a fix in minutes does nothing to repair that kind of real-world damage.
So a solid infrastructure and test setup is still essential. The obvious things need to be covered. Critical paths, data integrity, security boundaries. That's non-negotiable. What changes is how much time we spend reviewing every cosmetic detail or writing tests for edge cases that will never happen in practice.
Architecture still matters enormously and needs deep human thinking, ideally done together with AI. Describe how customers use the product, find the right patterns for today, and think about how the problem might evolve in the future. Is the architecture ready to support a changed context down the road?
But spending hours reviewing every line of code for style and minor issues? That math doesn't add up anymore.
Smaller Teams, Bigger Output
The most expensive part of building software was never version one. It was everything after. All the iterations, touching existing code, maintaining backwards compatibility, responding to feedback across a growing codebase.
AI drives that iteration cost toward zero. Writing code, changing code, iterating on code: the marginal cost approaches zero. And that changes how we should structure ourselves.
A smaller team is an advantage, not a compromise. We communicate better. We can be in the same room, talk to customers constantly, share learnings internally. Each person tackles a problem, comes back with improvements, distills those learnings into a knowledge base that levels up the whole company over time.
That's what I want us to be. A tight group that learns fast and ships fast.
Where Confidence Really Comes From
Some people hear "smaller teams, faster shipping, less code review" and think it sounds reckless. I get it. But I'd challenge where we're building our confidence.
Confidence shouldn't come from passing tests in a GitHub Action. It should come from the sentiment and feedback of our actual customers. The people paying us to use the product. If they say "this is great, this solves exactly the problems we have, this helps us in our day-to-day business," then we're doing something right.
That's real confidence. Building it on vanity metrics in a test suite or a code quality dashboard is a false sense of security.
The Developer of the Future
If you're a developer who's only good at coding, you're going to have a hard time. The most important skill now is the ability to talk to your users, quickly build a mental model of their problems, and extract the information you need to improve the product.
You need to translate features into benefits. You need to bridge the gap between what a user struggles with and what the product should do. You need to build these mental models fast and turn them into action.
Our role is evolving from "code writer" to "problem translator." The code is the easy part now. Understanding the problem, that's where the value is.
What This Means for Us
The iron triangle didn't disappear by magic. AI made execution so cheap and fast that the trade-off collapsed on its own. We can have all three: fast, good, and cheap. But only if we stop optimizing for code and start optimizing for customer outcomes.
Let's do exactly that.