Genuine question though. If "a successful build is not a reliable signal of correctness," then what is? Because most non-technical founders I know ship vibe-coded MVPs, get users, and iterate based on what breaks. They never develop "architectural awareness," and some of them are doing just fine.
Feels like "artifact spectator" is the real risk at scale, not at MVP stage.
Good point.
Mostly, what leads them to "iterate based on what breaks" is a build implemented without the right scaffolding of abstractions in place.
Having something run successfully has never guaranteed correctness. I am not talking about logical errors here; that is another creeping beast of its own, leading to tech debt that translates into business debt in real time. A successful build not being a reliable signal of correctness is not a recent, AI-induced problem. It has been the case for decades, long before the current wave of AI-assisted software development began to spread. Judging correctness has always been a human responsibility, and it is something coding agents haven't nailed so far with the current evolution of models.
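A toy illustration of that point (hypothetical function and numbers, not from the thread): this Python snippet runs without a single error or warning, yet the business logic is wrong. "It ran" tells you nothing; only checking the output against intent does.

```python
def apply_discount(price: float, percent: float) -> float:
    """Intended behaviour: return price reduced by `percent` percent."""
    # Bug: divides by 10 instead of 100. The interpreter is perfectly
    # happy with this line, so a clean run is not a correctness signal.
    return price - price * percent / 10

# A 20% discount on 100.0 should be 80.0...
result = apply_discount(100.0, 20.0)
print(result)  # prints -100.0, not 80.0
```

Nothing short of comparing the result to the spec (a test, or a human who understands the wiring) catches this, which is the whole argument for awareness over spectating.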
I am not sidelining the different harnessing approaches being experimented with by frontier labs, tooling companies, and engineering orgs. Interesting approaches are being worked on. A fascinating space.
Back to the question at hand: true, you can quickly create something functional (a fairly basic MVP), especially with CodeGen platforms, and for the more adventurous, AI-integrated IDEs and terminal-based coding agents can be even more empowering.
All of this without having to deal with the integration and deployment aspects of the software creation workflow (CodeGen platforms make those much more seamless).
But down the road, someone, whether you, a technical co-founder, or employee #1 at the budding startup that gained momentum from a quick vibe-coded MVP, must be able to decode what went into the wiring.
That is when architectural awareness from the get-go comes in handy.
What gets wired to what, how, and for what reason.
You can tell what is what.
You can translate your wiring idea to the coding agents when you vibe code, and later to the technical help you bring on board.
I believe that, to develop a scalable and, more importantly, maintainable codebase for a software product, architectural awareness is valuable.
Most importantly, during the initial build process, you get to understand what the coding agents are actually doing at any given point, rather than idly watching implementation reports scroll past like fast-rolling closing credits, as artifact spectators do.
The "artifact spectator" framing is spot on. I see the same problem from the other direction. Experienced developers can fall into it too, just differently. Instead of not knowing the architecture, they know it so well they stop checking whether the AI is respecting it. Context drifts and they don't catch it because they assumed the model held the same mental map they did.
The official best practices guide calls this "the trust-then-verify gap." You let the agent run for 20 minutes, then discover it went off course 18 minutes ago. Your scaffolding framework would actually help here because it gives both the human and the AI a shared vocabulary for what's expected. Covered this and the other common failure patterns here https://reading.sh/context-is-the-new-skill-lessons-from-the-claude-code-best-practices-guide-3d27c2b2f1d8?postPublishedType=repub
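One cheap way to give the human and the agent that shared vocabulary is an architectural fitness check that fails fast when a boundary is crossed, instead of surfacing the drift 18 minutes later. A minimal sketch, assuming hypothetical layer names (`ui`, `db`) that are not from the thread:

```python
# Sketch: flag imports that cross a forbidden architectural boundary,
# using only the standard-library ast module. Layer names are made up.
import ast

FORBIDDEN = {"ui": {"db"}}  # ui code must not import db directly

def violations(module_name: str, source: str) -> list:
    """Return the banned top-level modules imported by `source`."""
    banned = FORBIDDEN.get(module_name, set())
    found = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module.split(".")[0]]
        else:
            continue
        found.extend(n for n in names if n in banned)
    return found

print(violations("ui", "import db\nimport json"))  # ['db']
```

Run as a pre-commit hook or CI step, a check like this turns "the agent drifted" from something you notice in review into something the build reports immediately.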
Do you find your non-techie readers actually maintain the architectural awareness long-term, or does it tend to fade once the first build is running?
Not fade but evolve, I would say, though most of the time it gets lost in what the agents recommended along the way.
Great post. But the answer to the question that triggered the post is then... "everywhere!"
I think we need to make it easier for the "non-techies" to take their MVP from 80% (vibe coding) to 90% (a shipped app that's safe from the largest pitfalls)?
I look forward to your reply in another article 🙃 (just kidding, up to you ofc!), thanks for this interesting take of yours.