# The 5 Techniques Separating Top Agentic Engineers Right Now

Tom Brewer

These notes are based on the YouTube video by Cole Medin


## Key Takeaways

  • Start with a PRD (Product Requirement Document) – a single markdown file that acts as the project’s North Star and drives all AI‑assistant interactions.
  • Keep global rules lightweight and split task‑specific rules into separate markdown files that are loaded only when needed.
  • “Commandify” any recurring prompt – turn it into a reusable slash‑command or workflow to save keystrokes and ensure consistency.
  • Reset the context window between planning and execution by feeding the AI only the structured plan document, leaving maximal room for reasoning.
  • Treat every bug as a system‑evolution opportunity: update rules, reference docs, or commands so the same mistake never recurs.

## Detailed Explanations

### 1. PRD‑First Development

  • What it is: A markdown file that captures the entire scope of a project—target users, mission, in‑scope/out‑of‑scope items, and architecture.
  • Why it matters:
    • Serves as a single source of truth for the coding agent.
    • Enables you to break the project into granular features (e.g., API, UI, auth) that the agent can handle one at a time.
  • Greenfield vs. Brownfield:
    • Greenfield: PRD defines everything you need to build the MVP.
    • Brownfield: PRD documents the existing codebase and outlines the next set of enhancements.
  • Workflow snippet:
    # Habit Tracker PRD
    ## Target Users
    - People who want to track daily habits
    ## Mission
    - Provide a simple, visual habit‑tracking UI
    ## In Scope
    - Calendar view, habit CRUD, basic auth
    ## Out of Scope
    - Social sharing, advanced analytics
    ## Architecture
    - Frontend: React + Vite
    - Backend: Node.js + Express
    - DB: SQLite
  • Practical tip: After aligning with the AI on what to build, run the slash command /create PRD to generate/update this document automatically.

🔗 See Also: Claude Code Agents: The Feature That Changes Everything

### 2. Modular Rules Architecture

  • Global rules file (Claude.md, agents.md, etc.) should stay concise (on the order of a few hundred lines) and contain only universal conventions:
    • Project structure
    • Common commands (npm run dev, npm test)
    • Logging standards, naming conventions
  • Task‑specific rule files live in a reference/ folder and are referenced from the global file:
    # Claude.md (global)
    ## Global Rules
    - Project root: ./src
    - Run frontend: npm run dev
    - Run backend: npm run server
    ## References
    - API rules → ./reference/api_rules.md
    - UI component rules → ./reference/ui_rules.md
  • Why modular?
    • Protects the AI’s context window.
    • Loads only the relevant rule set when you’re working on a particular domain (e.g., API, UI); an illustrative rule file is sketched below.
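
To make the split concrete, here is a minimal sketch of what a task‑specific rule file such as reference/api_rules.md might contain (the contents are illustrative assumptions, not rules taken from the video):

    # API Rules (reference/api_rules.md)
    - Keep all endpoint handlers under ./src/api
    - Validate request bodies before touching the database
    - Return errors as JSON: { "error": "<message>" }
    - Add or update an integration test for every new or changed endpoint

Because the global file only points to this document, it costs context only when you are actually doing API work.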

💡 Related: 5 Claude Code MCP Servers You Need To Be Using

Note: The “few hundred lines” figure above is a practical guideline rather than a hard rule; the key point is to keep the global file short enough to stay comfortably within the model’s token budget.

### 3. Commandifying Everything

  • Definition: Convert any prompt you use more than twice into a reusable slash‑command (or markdown workflow) that the AI can invoke directly.
  • Typical candidates:
    • Creating PRDs (/create PRD)
    • Making Git commits (/git commit)
    • Running validation steps (/validate)
    • System‑evolution actions (/evolve)
  • Benefits:
    • Saves thousands of keystrokes.
    • Guarantees consistent phrasing and parameters.
    • Easy to share across teams.
  • Example command file (illustrative path):
    # /create PRD
    Prompt: "Help me plan a habit‑tracker app. Output a markdown PRD with sections for users, mission, scope, architecture."
    Output: Write to ./docs/PRD.md
  • Note: The exact directory (e.g., .claude/commands/) and file format are illustrative; you can store commands wherever your tooling expects them.
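
As a second sketch, a /validate command could live in its own markdown file alongside /create PRD; the file name and wording below are assumptions rather than the video’s exact prompt:

    # /validate
    1. Run `npm test` and capture the output.
    2. If anything fails, summarize the failure and propose a fix before editing code.
    3. Apply the agreed fix, then re-run `npm test` to confirm the suite passes.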

🔗 See Also: I Went Deep on Claude Code—These Are My Top 13 Tricks

### 4. Context Reset (Planning → Execution)

  • Process:
    1. Prime the AI with the current codebase and the PRD (/prime command).
    2. Plan the next feature, outputting a structured markdown plan.
    3. Reset the conversation (/clear or restart the chat).
    4. Execute the plan by feeding only the plan document (/execute plan.md); a sketch of such a command follows the sample plan below.
  • Result: The AI receives a lean context, maximizing its reasoning capacity for code generation and self‑validation.
  • Sample plan (feature_plan.md):
    # Feature: Calendar Visual Improvements
    ## User Story
    As a user, I want a clearer visual representation of my habit streaks on the calendar.
    ## Tasks
    - Update Calendar component UI
    - Add streak color logic
    - Write unit tests for new UI
    ## Acceptance Criteria
    - Streak colors reflect consecutive days
    - No regression in existing calendar functionality
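
To make step 4 concrete, here is a hedged sketch of what an /execute command file might contain; the file name and wording are assumptions about your tooling rather than the video’s exact setup:

    # /execute
    1. Read the plan file passed to this command (e.g., feature_plan.md).
    2. Implement the tasks in order, one at a time.
    3. After each task, run `npm test` and fix any failures before moving on.
    4. Stop only when every acceptance criterion in the plan is satisfied.

Because the plan carries its own tasks and acceptance criteria, the agent can self‑validate without needing the original planning conversation in context.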

### 5. System Evolution (Turning Bugs into Strength)

  • Mindset: When a bug appears, don’t just fix the symptom—identify the missing rule, reference, or command that caused the mistake and update the system.
  • Typical improvement targets:
    • Global rules: Add a one‑liner for a missing import style.
    • Reference docs: Create a detailed auth flow doc if the AI repeatedly mis‑implements authentication.
    • Commands/workflows: Extend the structured plan template to include a mandatory testing step.
  • Workflow example (a reusable /evolve command capturing this loop is sketched at the end of this section):
    1. Bug discovered (e.g., wrong import style).
    2. Issue a voice or text command: “Claude, I noticed the import style is wrong.”
    3. Update Claude.md with a rule:
      # Import Style Rule
      - Use ES6 named imports: `import { foo } from 'module'`
    4. Re‑prime the AI to ingest the updated rule set.
  • Outcome: The coding agent becomes progressively more reliable, reducing future hallucinations and repetitive errors.
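
The /evolve command listed under “Commandifying Everything” could package this loop into a single step; the sketch below is illustrative (the file name and wording are assumptions):

    # /evolve
    1. Ask me to describe the mistake the agent just made.
    2. Decide whether the fix belongs in the global rules, a reference doc, or a command.
    3. Draft the rule or doc update and show it to me for approval.
    4. Apply the update and remind me to re-prime the session.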

## Summary

The video outlines a repeatable, high‑output workflow for “agentic engineers” who collaborate with AI coding assistants. The core practices are:

  1. Begin every project with a comprehensive PRD that guides the AI’s scope and feature breakdown.
  2. Maintain a lightweight global rule set and load task‑specific rules only when needed, preserving the AI’s context window.
  3. Convert frequently used prompts into slash‑commands to automate repetitive interactions.
  4. Separate planning from execution by resetting the conversation and feeding the AI a concise, structured plan.
  5. Iteratively evolve the system by turning each bug into an update to rules, reference docs, or commands, thereby continuously strengthening the AI’s performance.

Adopting these five meta‑skills enables engineers to unlock the full potential of AI coding assistants, dramatically increasing development speed and code quality without requiring new tools—just smarter processes.

🔗 See Also: The EASIEST way to build iOS apps with Claude Code (Opus 4.5)
💡 Related: I Went Deep on Claude Code—These Are My Top 13 Tricks


Thanks for reading my notes! Feel free to check out my other notes or contact me via the social links in the footer.

# Frequently Asked Questions

What is a PRD‑First development workflow and how do I start using it with an AI coding assistant?

A PRD‑First workflow means creating a single markdown Product Requirement Document that captures the project’s purpose, scope, users, and architecture before any code is written. Begin by drafting the PRD, then use a slash‑command like /create PRD to have the AI generate or update the file, and reference that document as the sole source of truth for all subsequent prompts.

Why should I split my AI assistant’s rules into a small global file and separate task‑specific rule files?

Keeping a concise global rules file preserves the model’s context window, allowing more tokens for actual planning and code generation. By modularizing rules—e.g., placing API conventions in reference/api_rules.md and UI standards in reference/ui_rules.md—you load only the relevant rules when needed, which reduces token waste and improves the assistant’s accuracy.

How do I “commandify” recurring prompts and what are the benefits?

Commandifying means turning any prompt you use more than twice into a reusable slash‑command or workflow file (e.g., /git commit, /validate). This saves keystrokes, enforces consistent phrasing, and lets the AI execute complex actions with a single command, making the development process faster and less error‑prone.

What is the purpose of resetting the AI’s context window between planning and execution phases?

Resetting the context window ensures the AI receives only the structured plan document (for example, a feature plan distilled from the PRD) when it starts generating code, freeing up token space for deeper reasoning and fewer hallucinations. To do this, clear the previous conversation or start a new chat thread and feed the AI the latest plan document before asking it to write code.

How can I turn bugs into system‑evolution opportunities rather than setbacks?

When a bug appears, update your rule or command files with the new knowledge—add a rule that prevents the same mistake, or create a command that automates the fix. This way the AI learns from each error, and future prompts automatically incorporate the corrected behavior, turning each bug into a permanent improvement to your workflow.
