7 AI Coding Workflows and the 5 Principles That Actually Matter
Superpowers, GSD, BMAD, Spec Kit and others have 300k+ GitHub stars combined. Before installing any of them, understand the 5 principles that work with any tool.
Tools solve specific problems. Principles adapt to any problem.
The Problem Nobody Shows in Tutorials
Every code-generating AI has a fundamental limitation: the context window. It can only process a limited amount of information at a time. At the start of a session, memory is fresh and potential is at its peak. But as you keep making requests — project setup, screens, database schemas, listings — that memory fills up.
When it fills up, the AI starts forgetting. It forgets which database you're using. It creates duplicate components because it doesn't remember what it already built. It breaks things that were working.
This creates the biggest dilemma in AI-assisted programming: how to split development into sessions without losing context between them.
That's exactly the problem seven popular workflows try to solve.
The 7 Workflows
These frameworks have over 300,000 GitHub stars combined. Each proposes a different approach to the same problem:
| Workflow | Stars | Approach |
|---|---|---|
| Superpowers | 126k | Skills that guide the agent from brainstorm to deploy. Strict TDD, subagents with isolated context |
| Spec Kit (GitHub) | 83k | Specification as the primary artifact. Pipeline: constitution → specify → clarify → plan → tasks → implement |
| GSD | 45k | Pure context engineering. Subagents with fresh windows, XML plans, parallel wave execution |
| BMAD | 43k | Simulates an agile team with 9 AI personas (analyst, architect, dev...) that debate each other |
| Task Master | 26k | Task management with automatic decomposition and progress tracking |
| OpenSpec | — | Delta specs: declares only what changes (added, modified, removed). Good for existing projects |
| Agent OS | — | Full agent orchestration (v2). In v3, removed everything — agents got good enough on their own |
These are serious projects, built by competent people, with active communities. But here's where the concern comes in.
The Problem with Workflows
All of these workflows are tools. You install them, configure them, and follow their pipeline. And that's where the danger lies.
Need to fix a broken link on your site? Imagine going through specification, constitution, clarification, planning, and task phases — for a link.
Need to add a database field? Imagine an analyst researching, another writing the PRD, another defining architecture, another implementing — all for a single field.
Most day-to-day tasks are simple or medium complexity. Using a seven-step pipeline for them is like using a bazooka to kill a fly.
And there's another problem: tools become obsolete. Agent OS is the best example. In version 2, it had full orchestration with subagents and integrated TDD. In version 3, it removed nearly everything. The author realized AI agents had improved so much — as is the case with Claude Code — that the orchestration had become redundant.
Yesterday's tool was too complex for today's agent.
What survives when tools change? Principles.
The 5 Principles
After studying these workflows, testing some of them, and building real projects from scratch, I arrived at five principles. They're not a framework. They don't need installation. They work with any tool — or with none.
1. Have a map, not a route
Before asking AI to build anything, you need a document that describes what the project should do. In software, this is called a PRD (Product Requirements Document) — a simple document that answers: what am I building, for whom, and what are the features.
The problem is that many people create a PRD once and never update it. In practice, building reveals things you didn't foresee: features that don't make sense, new ideas that emerge, shifting priorities.
An outdated PRD is worse than no PRD — because it drives decisions in the wrong direction.
The map defines the destination, not the exact path. Update it as the project evolves.
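A living PRD doesn't need to be elaborate. As a hypothetical sketch (the project name, features, and reasons below are invented for illustration):

```markdown
# PRD: Recipe Box (hypothetical example)

## What
A website where home cooks publish and search recipes.

## For whom
Hobby cooks who want one simple place to save and share what they make.

## Features
- [x] Publish a recipe (title, ingredients, steps, photo)
- [ ] Search by ingredient
- ~~Weekly email digest~~ (dropped: early users showed no interest)
```

The strikethrough line is the point: the map records when the destination changes instead of pretending the original plan still holds.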
2. Plan the implementation at the right time
This might be the most important one. I'm not talking about the project plan (the map, previous principle). I'm talking about the technical plan: how to build this feature, which files to create, what structure to use.
In manufacturing, Toyota's just-in-time concept means: produce when demand calls for it, don't stockpile ahead. In programming, JIT (just-in-time) compilers do the same thing: they compile code the moment it's about to run, not before.
When you ask the agent to plan implementation now, it analyzes the project as it is now: existing dependencies, established code patterns, current structure. A technical plan created three weeks ago assumes a version of the project that might not exist anymore.
Many workflows generate detailed technical plans before implementing. Some generate plans for the entire project before writing a single line of code. In practice, those plans become obsolete by the third feature.
Plan the implementation when you're about to implement.
3. Separate the "what" from the "how"
Before asking AI to implement, describe what needs to be done without getting into technical details:
"I need an article listing page with tag filtering and two-language support."
That's the "what." Then, clear the context. In Claude Code, a /clear does the job. In a clean session, ask the agent to read that description and plan the implementation.
Why separate? Because the exploratory discussion of "what do I want?" pollutes the implementation context. You go back and forth, change your mind, discard options. By the time the AI actually implements, half the context is noise from abandoned decisions.
Polluted context leads to inconsistent decisions. Define the destination in one conversation and drive in a separate one.
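In Claude Code, the separation can be as simple as two prompts with a `/clear` between them. A hypothetical sketch (the file path is an invented convention, not a Claude Code requirement):

```markdown
<!-- Session 1: explore and settle the "what" -->
I need an article listing page with tag filtering and two-language support.
Let's discuss the scope. When we agree, save it to docs/scope-article-listing.md.
Don't implement anything yet.

/clear

<!-- Session 2: fresh context, only the "how" -->
Read docs/scope-article-listing.md and plan the implementation.
```

The scope file survives the `/clear`, so only the agreed destination crosses into the implementation session, not the back-and-forth that produced it.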
4. Choose the tool by the demand
In practice, there's a simple test. Ask yourself: "Can I describe what I want in a short list?"
| Answer | Complexity | What to do |
|---|---|---|
| Yes, it's straightforward | Simple | Go direct, no ceremony |
| Yes, but there are decisions to make | Medium | Describe the scope first, then implement |
| Can't even list without exploring | High | Start with brainstorming and design |
Most day-to-day tasks are simple or medium. Don't force everything into the same mold.
In practice, my workflow with Claude Code is:
- Simple stuff → go direct
- Medium complexity → use Claude Code's native plan mode
- High complexity → use Superpowers (the only framework I keep installed)
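As a hypothetical illustration of the three tiers (the tasks are invented, and the framework entry point shown may differ from Superpowers' actual commands):

```markdown
<!-- Simple: go direct -->
The footer link to /about is broken. Fix it.

<!-- Medium: scope first, then plan mode -->
Add an "archived" boolean to projects and hide archived projects from
the listing. Enter plan mode and show me the plan before writing code.

<!-- High: framework territory -->
Brainstorm with me: I want a multi-tenant billing module and I'm not
sure where to start.
```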
5. AI executes, you decide
AI can implement every task correctly and still produce a result that isn't what you wanted. The sum of the parts isn't always obvious.
After AI implements, validate. And use AI itself for that:
- Ask it to navigate through affected routes
- Ask for screenshots of the pages
- Ask it to compare what was done with what you requested
Then look yourself: does the result make sense? Does the flow work? Is the visual right?
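A hypothetical end-of-session validation prompt, continuing the article-listing example (the routes are invented for illustration):

```markdown
Before we finish:
1. Navigate to /articles and /articles?tag=ai in both languages.
2. Take a screenshot of each page.
3. Compare what you built with the scope we agreed on, and list
   anything that is missing or different.
```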
AI is excellent at executing. Deciding what to build, approving the plan, and validating the result — that's irreplaceably human.
Summary: 5 Principles in One Table
| # | Principle | In one sentence |
|---|---|---|
| 1 | Have a map, not a route | A living PRD that evolves with the project |
| 2 | Plan at the right time | Just-in-time technical plan, not anticipated |
| 3 | Separate "what" from "how" | Scope and implementation in separate sessions |
| 4 | Tool by demand | Simple → direct; medium → plan mode; complex → framework |
| 5 | AI executes, you decide | Always validate the final result |
Next Steps
These are the principles I use every day to build apps with Claude Code. If you want to go further and learn how to turn an idea into a complete app — from zero to a published website — check out the App Creator Course.
And if you want a practical Claude Code reference always by your side — commands, settings, skills, all organized step by step — the Claude Code Guide was born from exactly that need.