You’ve been vibe coding. Time to give Claude the context it’s been missing — and watch the difference.

Overview

Apply Module 03’s spec-driven workflow to a real project you’re already working on. Write a CLAUDE.md, create a feature spec, implement against it, and compare the results to your old workflow.

Draws from: Modules 02 and 03
Time estimate: 2-3 hours
Difficulty: Low-medium
Prerequisites: An existing project you’re actively developing with Claude Code

Step-by-Step

Hour 1: Build Your CLAUDE.md (45-60 min)

  1. Open a project you’re currently working on
  2. Create CLAUDE.md in the project root
  3. Write each section from the Module 03 template:
    • Overview (2-3 sentences about what this project is)
    • Tech stack (be specific about versions and choices)
    • Project structure (describe directory layout)
    • Architecture decisions (list 3-5 key decisions)
    • Coding conventions (how you name things, organize imports, handle errors)
    • Commands (build, test, deploy)
    • Do Not Touch (files Claude should never modify without permission)
    • Current Focus (what you’re working on right now)
  4. Run Claude Code’s /init command and compare its output to what you wrote. Merge anything it caught that you missed.
  5. Test it: Start a fresh Claude Code session. Ask “Describe this project’s architecture” and check whether it answers correctly from the CLAUDE.md alone.
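A minimal skeleton covering the sections above might look like the following. Every value here is a placeholder; swap in your project’s actual stack, commands, and conventions:

```markdown
# CLAUDE.md

## Overview
Internal tool for tracking customer feedback. Next.js frontend, Postgres backend.

## Tech Stack
- Next.js 14 (App Router), TypeScript 5.x
- Postgres 16 via Prisma

## Project Structure
- `app/` — routes and pages
- `lib/` — shared utilities
- `prisma/` — schema and migrations

## Architecture Decisions
- Server components by default; client components only for interactivity
- All DB access goes through `lib/db.ts`, never raw Prisma calls in routes

## Coding Conventions
- Named exports only; no default exports
- Errors: throw typed errors, catch at route boundaries

## Commands
- Build: `npm run build`
- Test: `npm test`
- Deploy: `npm run deploy`

## Do Not Touch
- `prisma/migrations/` — never edit applied migrations

## Current Focus
Adding CSV export to the feedback dashboard.
```

The headings matter more than the contents: Claude reads this file at the start of every session, so keep each section short enough that the whole file stays skimmable.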

Hour 2: Write a Feature Spec (30-45 min)

  1. Pick a feature you’ve been meaning to build — something non-trivial but scoped (not “rewrite the whole app”)
  2. Write a spec using the Module 03 template:
    • Summary (what and why)
    • Requirements (must-have, nice-to-have — all testable)
    • API contract or UI spec (what the interface looks like)
    • Data model (what’s stored and how)
    • Edge cases (at least 5)
    • Testing requirements
    • Out of scope (what this is NOT)
  3. Save it as specs/[feature-name].md
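As a sketch, a spec file following that template could be structured like this (the feature, endpoints, and fields shown are invented placeholders, not part of the template itself):

```markdown
# Spec: CSV Export for Feedback Dashboard

## Summary
Let admins download the current filtered feedback list as CSV, so they can
share reports without database access.

## Requirements
Must-have:
- Export respects the active filters (date range, tag)
- File downloads as `feedback-YYYY-MM-DD.csv`
Nice-to-have:
- Column selection before export

## API Contract
`GET /api/feedback/export?from=&to=&tag=` → `text/csv`, 200
Unauthenticated → 401

## Data Model
No new tables. Reads from `feedback` joined with `tags`.

## Edge Cases
1. Zero rows match the filters (return header row only)
2. Fields containing commas or quotes (must be escaped)
3. Very large result sets (stream, don’t buffer)
4. Invalid date range (`from` after `to`) → 400
5. Non-admin user → 403

## Testing Requirements
Unit tests for CSV escaping; integration test for the filtered export path.

## Out of Scope
Scheduled/emailed exports. Excel (.xlsx) format.
```

Note how every requirement is phrased so a test could verify it; that is what makes the comparison in Hour 3 meaningful.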

Hour 3: Implement and Compare (45-60 min)

  1. Tell Claude: “Implement the spec in specs/[feature-name].md”
  2. Observe: Does it follow the spec? Does it make decisions you didn’t authorize? Does it use your CLAUDE.md conventions?
  3. The comparison test: Think about how this feature would have gone with your old vibe-coding workflow. Count the decisions the AI got right because the spec existed versus the ones it would otherwise have guessed wrong.

What Success Looks Like

Stretch Goals


Back to Project Lab | Next: Project B: Build a Personal Data Dashboard