Discussion about this post

JP

The "artifact spectator" framing is spot on. I see the same problem from the other direction. Experienced developers can fall into it too, just differently. Instead of not knowing the architecture, they know it so well they stop checking whether the AI is respecting it. Context drifts and they don't catch it because they assumed the model held the same mental map they did.

The official best practices guide calls this "the trust-then-verify gap": you let the agent run for 20 minutes, then discover it went off course 18 minutes ago. Your scaffolding framework would actually help here because it gives both the human and the AI a shared vocabulary for what's expected. I covered this and the other common failure patterns here: https://reading.sh/context-is-the-new-skill-lessons-from-the-claude-code-best-practices-guide-3d27c2b2f1d8?postPublishedType=repub

Do you find your non-techie readers actually maintain the architectural awareness long-term, or does it tend to fade once the first build is running?

C-Suite of One

Great post. But then the answer to the question that triggered the post is... "everywhere!"

I think we need to make it easier for the "non-techies" to take their MVP from 80% (vibe coding) to 90% (a shipped app that's safe from the biggest pitfalls).

I look forward to your reply in another article 🙃 (just kidding, up to you ofc!). Thanks for this interesting take.

