Project Management System (Compact)

AI-assisted development with context persistence. Full docs: .project/docs/

{{SLOT:guidelines}}

Rust Production Guidelines

Core Principles

  • Edition: Rust 2021+ (latest stable).
  • Philosophy: Zero-cost abstractions, memory safety, fearless concurrency.
  • Naming:
    • Files: snake_case (e.g., user_repository.rs).
    • Modules: snake_case (e.g., mod database_client;).
    • Types/Traits: PascalCase (e.g., struct UserProfile, trait Serialize).
    • Functions/Variables: snake_case (e.g., fn calculate_hash()).
    • Constants: SCREAMING_SNAKE_CASE (e.g., const MAX_CONNECTIONS: usize = 100;).

Project Structure (Workspace)

├── Cargo.toml           # Workspace manifest
├── src/
│   ├── main.rs          # Binary entry point
│   ├── lib.rs           # Library root
│   ├── models/          # Domain models
│   ├── repositories/    # Data access layer
│   ├── services/        # Business logic
│   ├── handlers/        # HTTP/API handlers
│   └── utils/           # Shared utilities
├── tests/               # Integration tests
├── benches/             # Benchmarks (criterion)
└── examples/            # Usage examples

Production Coding Standards

Error Handling (CRITICAL)

  • NEVER use .unwrap() or .expect() in production code paths.
  • ALWAYS use Result<T, E> for fallible operations.
  • PREFER ? operator for error propagation.
  • USE anyhow for applications, thiserror for libraries.
  • IMPLEMENT proper error types with context.
// GOOD: Production-ready
pub async fn fetch_user(id: UserId) -> Result<User, DatabaseError> {
    let row = db
        .query_one("SELECT * FROM users WHERE id = $1", &[&id])
        .await
        .map_err(DatabaseError::Query)?;
    User::try_from(row).map_err(DatabaseError::Mapping) // map the row into the domain type
}

// BAD: Crashes on error
pub async fn fetch_user(id: UserId) -> User {
    db.query_one("SELECT * FROM users WHERE id = $1", &[&id])
        .await
        .unwrap()  // ❌ NEVER IN PRODUCTION
}

Resource Management

  • ALWAYS implement Drop for cleanup when holding resources.
  • USE RAII pattern (Resource Acquisition Is Initialization).
  • AVOID clone() unless necessary - prefer borrowing (&T, &mut T).
  • USE Arc<T> for shared ownership, Rc<T> only in single-threaded contexts.
  • PREFER Cow<'a, T> when data might be borrowed or owned.

Async/Await (Modern Rust)

  • RUNTIME: Use Tokio for production (most battle-tested).
  • AVOID blocking operations in async contexts - use tokio::task::spawn_blocking.
  • USE native async fn in traits (stable since Rust 1.75); reach for the async-trait crate when dyn-compatible traits are needed.
  • PREFER tokio::select! over manual polling.
  • IMPLEMENT timeouts with tokio::time::timeout.
// GOOD: Non-blocking I/O with a timeout
use std::time::Duration;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Outer ? unwraps the timeout; inner ? propagates fetch_data's own error
    let _data = tokio::time::timeout(Duration::from_secs(5), fetch_data()).await??;
    Ok(())
}

// BAD: Blocking the async runtime
async fn bad_async() {
    std::thread::sleep(Duration::from_secs(1)); // ❌ Blocks executor; use tokio::time::sleep(...).await
}

Concurrency & Thread Safety

  • USE Send + Sync bounds explicitly when needed.
  • PREFER message passing (channels) over shared state.
  • USE tokio::sync::RwLock in async, std::sync::RwLock in sync code.
  • AVOID Mutex for read-heavy workloads - use RwLock.
  • IMPLEMENT graceful shutdown with CancellationToken (tokio-util).

Type System (Leverage Fully)

  • USE newtypes for domain modeling (struct UserId(Uuid);).
  • IMPLEMENT From/TryFrom for conversions.
  • USE #[non_exhaustive] for public enums that might grow.
  • PREFER Option<T> over nullable patterns.
  • USE builder pattern for complex constructors (derive with derive_builder).
// GOOD: Type-safe domain modeling
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct UserId(Uuid);

impl From<Uuid> for UserId {
    fn from(id: Uuid) -> Self { Self(id) }
}

// BAD: Primitive obsession
type UserId = String; // ❌ No type safety

Essential Production Crates

Core Infrastructure

  • tokio - Async runtime (features: full)
  • anyhow - Error handling (applications)
  • thiserror - Error derives (libraries)
  • serde + serde_json - Serialization
  • tracing + tracing-subscriber - Structured logging

Web/API (if applicable)

  • axum - Modern web framework (Tokio-based)
  • tower - Middleware and service abstractions
  • hyper - Low-level HTTP (Axum builds on this)
  • reqwest - HTTP client

Database

  • sqlx - Async SQL with compile-time query checks
  • diesel - Sync ORM (if preferred)
  • redis - Redis client (async)

Performance & Observability

  • criterion - Benchmarking (in benches/)
  • pprof - CPU profiling
  • tracing-opentelemetry - Distributed tracing
  • metrics + metrics-exporter-prometheus - Metrics

Security

  • secrecy - Secret types (zeroize on drop)
  • argon2 - Password hashing
  • jsonwebtoken - JWT handling

Linting & Formatting (MANDATORY)

Clippy (Strict)

# Cargo.toml
[lints.clippy]
all = { level = "warn", priority = -1 }       # priority lets lints below override the group
pedantic = { level = "warn", priority = -1 }
unwrap_used = "deny"           # No unwrap in production
expect_used = "deny"           # No expect in production
panic = "deny"                 # No panics
missing_errors_doc = "warn"    # Document error cases

Rustfmt (Auto-format)

# rustfmt.toml
edition = "2021"
max_width = 100
hard_tabs = false
tab_spaces = 4
newline_style = "Unix"
use_small_heuristics = "Max"

Pre-commit Checks

# ALWAYS run before commit
cargo fmt --check
cargo clippy -- -D warnings
cargo test --all-features
cargo audit  # Security vulnerabilities

Testing Strategy

Unit Tests

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_user_creation() {
        let user = User::new("alice");
        assert_eq!(user.name(), "alice");
    }

    #[tokio::test]
    async fn test_async_operation() {
        let result = fetch_data().await;
        assert!(result.is_ok());
    }
}

Integration Tests (tests/)

  • Test public API surface.
  • Use test fixtures and builders.
  • Mock external dependencies with mockall or wiremock.

Property-based Testing

  • USE proptest or quickcheck for edge cases.
  • Generates randomized inputs to find bugs.

Performance Optimization

Profiling

# CPU profiling
cargo build --release
perf record -g ./target/release/myapp
perf report

# Memory profiling: `dhat` is a library crate, not a cargo subcommand.
# Add `dhat` to [dependencies] and instrument main with `dhat::Profiler`.

Optimization Tips

  • AVOID premature optimization - profile first.
  • USE #[inline] sparingly (hot paths only).
  • PREFER iterators over manual loops (zero-cost).
  • USE SmallVec for stack-allocated vectors (<= 8 items).
  • ENABLE LTO in release builds:
[profile.release]
lto = true
codegen-units = 1
opt-level = 3

Documentation

Rustdoc (Required)

/// Fetches user data from the database.
///
/// # Arguments
/// * `id` - The unique user identifier
///
/// # Errors
/// Returns `DatabaseError::NotFound` if user doesn't exist.
///
/// # Examples
/// ```
/// let user = fetch_user(UserId::new()).await?;
/// ```
pub async fn fetch_user(id: UserId) -> Result<User, DatabaseError> {
    // ...
}

README.md Structure

  1. Project Overview - What it does
  2. Installation - cargo install or Docker
  3. Configuration - Environment variables
  4. Usage - Quick start examples
  5. Architecture - High-level design
  6. Contributing - How to contribute
  7. License - Apache-2.0 / MIT dual-license (standard)

Security Best Practices

Secrets Management

use std::fmt;
use secrecy::{ExposeSecret, Secret};

#[derive(Clone)]
pub struct DatabasePassword(Secret<String>);

// Never logs or displays the secret
impl fmt::Debug for DatabasePassword {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "DatabasePassword([REDACTED])")
    }
}

Dependency Auditing

# Install cargo-audit
cargo install cargo-audit

# Run before every release
cargo audit

# Check for outdated deps
cargo outdated

Deny Policy (cargo-deny)

# deny.toml
[advisories]
vulnerability = "deny"
unmaintained = "warn"

Deployment

Docker

FROM rust:1.75 as builder
WORKDIR /app
COPY . .
RUN cargo build --release

FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/myapp /usr/local/bin/
CMD ["myapp"]

Observability

  • Structured Logging: tracing with JSON format
  • Metrics: Export Prometheus metrics
  • Health Checks: /health endpoint (liveness/readiness)
  • Graceful Shutdown: SIGTERM handling

Common Anti-Patterns to AVOID

  • ❌ Using unwrap()/expect() in production → ✅ Use ? or match with proper error handling
  • ❌ Blocking in async contexts → ✅ Use spawn_blocking for sync operations
  • ❌ Excessive cloning → ✅ Prefer borrowing with lifetimes
  • ❌ Arc<Mutex<T>> everywhere → ✅ Use channels for message passing
  • ❌ No error context → ✅ Add context with .context() (anyhow) or .map_err()
  • ❌ String for everything → ✅ Use newtypes and domain types
  • ❌ No integration tests → ✅ Test public API in tests/
  • ❌ Ignoring Clippy warnings → ✅ Fix or explicitly allow with justification

CI/CD Pipeline

# .github/workflows/ci.yml
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: dtolnay/rust-toolchain@stable
      - run: cargo fmt -- --check
      - run: cargo clippy -- -D warnings
      - run: cargo test --all-features
      - run: cargo audit
      - run: cargo build --release

Version & Stability

  • Public API: Follow SemVer strictly (breaking = major bump).
  • MSRV: Document Minimum Supported Rust Version.
  • Deprecation: Use #[deprecated] before removing APIs.
  • Changelog: Maintain CHANGELOG.md (Keep a Changelog format).

References

{{/SLOT:guidelines}}

Directory Structure

.project/
  ├── _templates/          # Templates (task, context, adr, backlog)
  ├── current-task.md      # Active work
  ├── context.md           # Session state persistence
  ├── backlog/             # Tasks: T{XXX}-{name}.md
  ├── completed/           # Archive: {YYYY-MM-DD}-T{XXX}-{name}.md
  ├── decisions/           # ADRs: {YYYY-MM-DD}-ADR{XXX}-{name}.md
  ├── ideas/               # Ideas: I{XXX}-{name}.md
  ├── reports/             # Reports: {YYYY-MM-DD}-R{XXX}-{name}.md
  ├── docs/                # Documentation
  ├── scripts/             # Automation
  └── README.md            # Project overview

Session Protocol

Start:

  1. Read context.md (session state, next_action)
  2. Read current-task.md (active checklist)
  3. Review last commit: git log -1 --oneline
  4. Continue from next_action

Interruption Recovery:

  • Use aipim pause --reason="..." to save state, optionally stashing changes.
  • Use aipim resume to restore context and stashed work.

During:

  • Update task checkboxes as completed
  • Commit frequently
  • Add discoveries to task or backlog

End:

  1. Update current-task.md: actual_hours, checkboxes
  2. Update context.md: session++, next_action, summary
  3. Check aipim deps to ensure no blockers
  4. Select next task from backlog (priority + dependencies)
  5. Move it to current-task.md
  6. Run Quality Check (optional): .project/scripts/analyze-quality.sh --manual
  7. Update metrics (see Metrics Protocol below)
  8. Commit & push

Task Workflow Protocol (MANDATORY)

When user says "do next task" or "start task X", you MUST follow this protocol:

1. Context Loading (REQUIRED FIRST STEP):

  • Read THIS FILE (GEMINI.md/CLAUDE.md) - contains MANDATORY project guidelines
  • Read .project/context.md - session state and metrics
  • Read .project/backlog/ - check which task is next (or specified)
  • Count completed tasks in .project/completed/ to know progress

2. Quality Gates (BEFORE marking task complete):

  • All tests passing (cargo test, npm test, or project equivalent)
  • No lint warnings (cargo clippy, npm run lint, or equivalent)
  • Definition of Done satisfied (see DoD section above)
  • Code reviewed and clean (no debug code, stray logging, TODOs)

3. Git Commit (ONE atomic commit per task):

  • Format: type(scope): description (SINGLE LINE ONLY)
  • Types: feat, fix, docs, style, refactor, test, chore
  • Example: feat(task): implement user authentication
  • NO multi-line commits, NO extra explanations in commit message

4. Context Awareness (CRITICAL):

  • When your context window is running low (>70% used):
    • ⚠️ WARN the user explicitly: "Context running low, recommend pausing"
    • DO NOT continue to next task automatically
    • Let user start fresh session to maintain quality
  • This prevents rushed work and maintains code quality

5. Session Integrity (CRUCIAL FOR QUALITY):

  • NEVER pick up a session mid-task
  • Complete current task fully OR pause and let user resume later
  • One task = one complete cycle (start → implement → test → commit → done)
  • This ensures each task has full context and attention

Why this matters: Tasks done with fragmented context lead to bugs, inconsistencies, and technical debt. Better to pause and resume fresh than rush through with degraded context.

Large Task Auto-Breakdown Protocol

MANDATORY: Break down tasks >12h into manageable phases

When to Break Down

When starting a task with estimated_hours > 12:

  1. Analyze complexity and identify natural phases
  2. Create 3-5 phases of 2-6 hours each
  3. Document phases in current-task.md under Implementation section
  4. Treat each phase as mini-task with its own commit upon completion

Phase Breakdown Criteria

Typical phase structure:

  1. Setup/Scaffolding (1-3h)

    • Schema/migrations
    • Directory structure
    • Dependencies
    • Basic configuration
  2. Core Implementation (4-6h)

    • Main business logic
    • Data processing
    • API endpoints
    • Integration points
  3. Testing (2-4h)

    • Unit tests
    • Integration tests
    • Edge cases
    • Error handling
    • Document pain points: record any friction or slowness
  4. Documentation & Polish (1-3h)

    • API documentation
    • README updates
    • Code cleanup
    • Examples

Phase Documentation Format

Add to current-task.md under Implementation:

### Phase 1: Setup & Schema (3h)
- [ ] Define data schema
- [ ] Create database tables/migrations
- [ ] Setup dependencies
- [ ] Basic configuration

**Deliverable:** Database ready, dependencies installed
**Commit message:** feat(phase1): setup schema and dependencies

### Phase 2: Core Implementation (6h)
- [ ] Implement main business logic
- [ ] Create API endpoints
- [ ] Add data validation
- [ ] Error handling

**Deliverable:** Core functionality working
**Commit message:** feat(phase2): implement core functionality

### Phase 3: Testing (4h)
- [ ] Unit tests for core logic
- [ ] Integration tests
- [ ] Edge case coverage
- [ ] Performance validation

**Deliverable:** 80%+ test coverage
**Commit message:** test(phase3): comprehensive test suite

### Phase 4: Documentation (2h)
- [ ] API documentation
- [ ] README updates
- [ ] Usage examples
- [ ] Code comments where needed

**Deliverable:** Complete documentation
**Commit message:** docs(phase4): add comprehensive documentation

Validation Rules

Before starting implementation, verify:

  1. ✅ All phases clearly defined with deliverables
  2. ✅ Sum of phase estimates matches total estimate (±1h tolerance)
  3. ✅ Each phase is 2-6 hours (not too small, not too large)
  4. ✅ Each phase has clear completion criteria
  5. ✅ Commit message convention defined per phase

During implementation:

  • Complete phases sequentially (no skipping)
  • Commit immediately after completing each phase
  • Update checkboxes in real-time
  • If phase exceeds estimate by >50%, reassess remaining phases

Benefits

  • Reduced stall risk: Clear checkpoints prevent midway blockage
  • Better tracking: Visible progress through phase completion
  • Atomic commits: Each phase is a logical unit of work
  • Easier estimation: Breaking down improves accuracy for future tasks
  • Context preservation: Easier to resume if interrupted

Example: 18h Task Breakdown

Before (risky):

title: "Implement Oracle Profiling System"
estimated_hours: 18

After (manageable):

## Implementation

### Phase 1: Setup & Schema (3h)
- [ ] Define profile schema (tactics, patterns, confidence)
- [ ] Create PostgreSQL tables with indexes
- [ ] Write migrations
- [ ] Setup model relationships

### Phase 2: Tactical Pattern Detection (6h)
- [ ] Implement pin detection algorithm
- [ ] Add fork detection logic
- [ ] Create skewer pattern recognition
- [ ] Build discovered attack tracker
- [ ] Add confidence scoring system

### Phase 3: Testing & Validation (5h)
- [ ] Unit tests for each pattern detector
- [ ] Integration tests with sample games
- [ ] Edge case handling (ambiguous positions)
- [ ] Performance tests (1000+ game analysis)

### Phase 4: Documentation & Polish (4h)
- [ ] API documentation for profiling endpoints
- [ ] README with usage examples
- [ ] Code cleanup and optimization
- [ ] Add inline comments for complex algorithms

Result: 4 phases × avg 4.5h = 18h total, each with clear deliverable

Metrics Tracking Protocol

MANDATORY: Update metrics after completing each task

When to Update

  1. After completing each task - Update productivity metrics
  2. Weekly on Fridays - Generate weekly report
  3. Monthly on last Friday - Generate monthly trend report

Metrics to Track

Update the Metrics section in context.md with:

Productivity:

- Tasks completed this week: X
- Tasks completed this month: Y
- Estimate accuracy: Z (actual/estimated avg)
- Velocity trend: ↗️/→/↘️ (compare last 4 weeks)

Calculation formulas:

  • Estimate accuracy = sum(actual_hours from completed/) / sum(estimated_hours from completed/)
  • Velocity trend = Compare task counts: last week vs 4-week average
    • ↗️ Improving: >10% increase
    • → Stable: ±10%
    • ↘️ Declining: >10% decrease

Quality:

- Test coverage: X% (from coverage reports)
- Bugs reported this week/month: Y
- Code quality warnings: Z (linter output)

Blockers:

- Most common type: X (parse blocker: field from completed tasks)
- Average resolution time: Y hours
- Active blockers: Z (from current-task.md)

Trends (Last 30 Days):

- Tasks completed: X (prev: Y) ↗️ +Z%
- Average task size: Xh (prev: Yh)
- Rework rate: X% (tasks requiring fixes after completion)

Automation Helper

Optional: Use .project/scripts/calculate-metrics.sh to auto-calculate metrics from completed task files.

# Generate metrics report
.project/scripts/calculate-metrics.sh

# Output formatted for context.md
.project/scripts/calculate-metrics.sh --format=markdown

Example Update

After completing TASK-003 (5h estimated, 4.5h actual):

  **Productivity:**
- - Tasks completed this week: 2
+ - Tasks completed this week: 3
- - Estimate accuracy: 1.15
+ - Estimate accuracy: 1.12

Backlog Health Check Protocol

MANDATORY: Weekly review on Fridays

Protocol

  1. Analyze Backlog:
    • Identify "Stale" tasks (>4 weeks without update)
    • Identify "Blocked" tasks (>2 weeks blocked)
  2. Generate Report:
    • Run .project/scripts/backlog-health.sh
  3. Take Action:
    • Archive: Move stale or no-longer-relevant tasks to .project/backlog/archived/
    • Unblock: Schedule action items for blocked tasks
    • Deprioritize: Downgrade P2s to P3s if they are stalling

Health Metrics

  • Stale Rate: % of backlog items untouched for >30 days (Target: <20%)
  • Blocker Age: Max days a task has been blocked (Target: <14 days)

Task File Format

---
title: "Feature Name"
created: 2025-01-07T10:00:00-03:00
last_updated: 2025-01-07T14:00:00-03:00
priority: P1-M              # P1-S/M/L | P2-S/M/L | P3 | P4
estimated_hours: 8
actual_hours: 0
status: in-progress         # backlog | in-progress | blocked | completed
blockers: []
tags: [backend, api]
---

Template: .project/_templates/task.md

Priority System

  Code    Meaning                     Action
  P1-S    Critical + Small (<2h)      Do now
  P1-M    Critical + Medium (2-8h)    Today
  P1-L    Critical + Large (>8h)      Break down
  P2-S    High + Small                Quick win
  P2-M    High + Medium               This week
  P2-L    High + Large                Plan
  P3      Nice to have                Backlog
  P4      Low priority                Maybe never

Definition of Done (Essential)

Must check ALL before completing:

Functionality: [ ] Works [ ] Edge cases [ ] Errors [ ] Loading [ ] Responsive
Testing: [ ] Unit [ ] Feature [ ] Browser OK [ ] 80%+ coverage
Performance: [ ] No N+1 [ ] Eager load [ ] Indexes [ ] Cache [ ] <2s [ ] Paginated
Security: [ ] Validation [ ] Auth [ ] No secrets logged [ ] CSRF [ ] SQL-safe [ ] XSS-safe
Code: [ ] Lint/format clean [ ] Docs [ ] No debug [ ] Clean names
Docs: [ ] Time logged [ ] ADR if needed [ ] README if API changed
Git: [ ] Atomic commits [ ] Convention [ ] No conflicts

Full checklist: .project/docs/definition-of-done.md

Common Commands

# Start new task from template (for new tasks)
cp .project/_templates/task.md .project/current-task.md

# Start task from backlog (IMPORTANT: use mv not cp)
mv .project/backlog/T{XXX}-{name}.md .project/current-task.md

# Complete task
mv .project/current-task.md .project/completed/$(date +%Y-%m-%d)-T{XXX}-{name}.md

# Create ADR
cp .project/_templates/adr.md .project/decisions/$(date +%Y-%m-%d)-ADR{XXX}-{name}.md

# Validate quality
.project/scripts/validate-dod.sh

# Check session budget
.project/scripts/pre-session.sh

⚠️ Critical: When starting a task from backlog, use mv (not cp) to remove it from backlog. Task should only exist in one place: backlog → current-task → completed.

Context Management

context.md Structure

---
session: 42
last_updated: 2025-01-07T14:30:00-03:00
active_branches: [feature/auth]
blockers: []
next_action: "Complete login tests"
---

# Current State
[2-3 sentences: where are we?]

# Active Work
[What's in progress, what's blocked]

# Recent Decisions
[Last 2-3 important decisions]

# Next Steps
1. Immediate action
2. Then this
3. Then that

Keep concise: Max 200 lines. Archive old sessions to context-archive/.

Template: .project/_templates/context.md

Automatic Context Pruning

MANDATORY: Archive old sessions every 10 sessions

When session number % 10 == 0:

  1. Create archive directory (if needed):

    mkdir -p .project/context-archive
  2. Archive old sessions:

    • Extract sessions 1 through (N-5) from context.md
    • Save to .project/context-archive/YYYY-MM-period.md
    • Filename format: 2026-01-weeks1-2.md (based on date range)
  3. Update context.md:

    • Keep only last 5 session summaries
    • Preserve current state, active work, recent decisions
    • Update session counter
  4. Commit:

    git add .project/context.md .project/context-archive/
    git commit -m "chore: archive context sessions 1-N"

Archive structure:

.project/context-archive/
├── 2026-01-weeks1-2.md    # Sessions 1-10
├── 2026-01-weeks3-4.md    # Sessions 11-20
└── 2026-02-weeks1-2.md    # Sessions 21-30

Why: Keeps context.md under 200 lines, reduces token consumption, preserves history.

Script: Optional helper at .project/scripts/archive-context.sh (see below)

File Management

Complex tasks (>20 checkboxes OR >500 lines): Use directory

current-task/
  ├── main.md           # Master checklist
  ├── implementation.md # Technical details
  └── testing.md        # Test plan

Completion: Move entire directory to completed/YYYY-MM-DD-name/

Code Context Loading (IMPORTANT)

Load selectively to save tokens:

  • Only files being actively edited (max 3-4 per turn)
  • Use view tool for reference, don't keep in memory
  • Avoid loading large files entirely (use line ranges)

Example:

# GOOD (specific)
view app/Models/User.php

# BAD (loads everything)
view app/

Output Guidelines

Be concise by default:

  • Code: No explanatory comments unless complex
  • Responses: 2-3 sentences per point
  • Examples: Only when requested
  • No apologies or meta-commentary

Expand when:

  • User asks "explain in detail"
  • Security/performance critical
  • Complex architectural decision

Architectural Decisions (ADR)

Create ADR for:

  • Technology/framework selection
  • Database schema design
  • API architecture
  • Security approach
  • Major refactoring

Format: See .project/_templates/adr.md

Status values: Proposed | Accepted | Deprecated | Superseded

ADR Auto-Detection Protocol

MANDATORY: Check for architectural decisions after each task

Detection Triggers

If you (the Agent) or the user have made any of the following changes, you MUST prompt for an ADR:

  1. Technology/Library Choice: "Chose X over Y" (e.g., zod vs yup, postgres vs mongo)
  2. Schema Design: "Designed database structure for X" (e.g., relations, indexing strategy)
  3. API Architecture: "Defined endpoints/patterns for X" (e.g., REST vs GraphQL, error envelope)
  4. Security Mechanism: "Implemented auth/security for X" (e.g., JWT, RBAC, encryption)
  5. Performance Pattern: "Optimized X using Y" (e.g., caching strategy, lazy loading)
  6. Refactoring Pattern: "Refactored to use [Pattern Name]" (e.g., Factory, Strategy, Observer)

Agent Protocol

If a trigger is detected, the Agent MUST ask:

"I noticed we made an architectural decision regarding [TOPIC]. Should we create an ADR to document the context, alternatives, and rationale?

Run: cp .project/_templates/adr.md .project/decisions/$(date +%Y-%m-%d)-ADR{XXX}-[topic].md"

Automation

Validation Script

# Before completing task
.project/scripts/validate-dod.sh

Checks: debug statements, tests, N+1 queries, security, documentation

Pre-session Check

# Before starting session
.project/scripts/pre-session.sh

Shows: token estimate, large files, git status

Working with Multiple Devs

Option 1: Personal task files

current-task-rafhael.md
current-task-joao.md

Option 2: Branch-based (recommended)

git checkout feature/auth     → .project/current-task.md
git checkout feature/payments → .project/current-task.md (different)

Metrics to Track

Quality:

  • Test coverage trend
  • Bugs in production
  • DoD compliance rate

Velocity:

  • Tasks completed/week
  • Estimate accuracy (actual vs estimated)
  • Time to restore context

Process:

  • Stale tasks (>14 days not updated)
  • Blocker resolution time
  • ADR review compliance

Troubleshooting

context.md too long?

mkdir -p .project/context-archive/
mv .project/context.md .project/context-archive/2025-01-Q1.md
cp .project/_templates/context.md .project/context.md

Backlog overwhelming?

# Archive low-priority old items
mkdir -p .project/ideas/archived/
mv .project/backlog/*P4*.md .project/ideas/archived/

Hit token limits?

  • Run pre-session.sh before starting
  • Archive old context sessions
  • Load fewer files in memory
  • Use line ranges with view tool

Documentation Links

Full documentation:

  • .project/docs/task-management.md - Complete task workflow
  • .project/docs/definition-of-done.md - Full DoD checklist
  • .project/docs/adr-guide.md - Architecture decision records
  • .project/docs/examples/ - Complete examples

Quick references:

  • Git convention: type(scope): description
    • Types: feat, fix, docs, style, refactor, test, chore
  • Commit atomically (one logical change per commit)
  • Branch naming: feature/name or fix/name

Version & Updates

Version: 1.2 Compact
Last updated: 2025-01-07
Full version: Available as CLAUDE-full.md for reference

When to use full version:

  • First time using system (read once)
  • Training new team members
  • Need detailed examples

When to use compact version:

  • Daily development (this file)
  • Quick reference
  • Token optimization