AI-assisted development with context persistence. Full docs:
.project/docs/
{{SLOT:guidelines}}
- Edition: Rust 2021+ (latest stable).
- Philosophy: Zero-cost abstractions, memory safety, fearless concurrency.
- Naming:
  - Files: `snake_case` (e.g., `user_repository.rs`).
  - Modules: `snake_case` (e.g., `mod database_client;`).
  - Types/Traits: `PascalCase` (e.g., `struct UserProfile`, `trait Serialize`).
  - Functions/Variables: `snake_case` (e.g., `fn calculate_hash()`).
  - Constants: `SCREAMING_SNAKE_CASE` (e.g., `const MAX_CONNECTIONS: usize = 100;`).
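Taken together, the conventions read like this in practice (a sketch; `calculate_hash` is a toy stand-in, not a real hash):

```rust
// user_repository.rs — snake_case file name

const MAX_CONNECTIONS: usize = 100; // SCREAMING_SNAKE_CASE constant

// PascalCase type
pub struct UserProfile {
    pub display_name: String, // snake_case field
}

// snake_case function; toy byte-sum "hash" for illustration only
pub fn calculate_hash(input: &str) -> u64 {
    input.bytes().map(u64::from).sum()
}
```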
- Project layout:
```
├── Cargo.toml        # Workspace manifest
├── src/
│   ├── main.rs       # Binary entry point
│   ├── lib.rs        # Library root
│   ├── models/       # Domain models
│   ├── repositories/ # Data access layer
│   ├── services/     # Business logic
│   ├── handlers/     # HTTP/API handlers
│   └── utils/        # Shared utilities
├── tests/            # Integration tests
├── benches/          # Benchmarks (criterion)
└── examples/         # Usage examples
```
- NEVER use `.unwrap()` or `.expect()` in production code paths.
- ALWAYS use `Result<T, E>` for fallible operations.
- PREFER the `?` operator for error propagation.
- USE `anyhow` for applications, `thiserror` for libraries.
- IMPLEMENT proper error types with context.
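For illustration, this is roughly what `thiserror` derives for you: a domain error type implementing `Display` and `std::error::Error`. Variant names here are hypothetical:

```rust
use std::fmt;

// A domain error with context. With thiserror this whole Display/Error
// boilerplate would come from #[derive(Error)] plus #[error("...")] attributes.
#[derive(Debug)]
pub enum DatabaseError {
    Query(String),
    NotFound,
}

impl fmt::Display for DatabaseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            DatabaseError::Query(msg) => write!(f, "query failed: {msg}"),
            DatabaseError::NotFound => write!(f, "record not found"),
        }
    }
}

impl std::error::Error for DatabaseError {}
```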
```rust
// GOOD: Production-ready
pub async fn fetch_user(id: UserId) -> Result<User, DatabaseError> {
    let row = db
        .query_one("SELECT * FROM users WHERE id = $1", &[&id])
        .await
        .map_err(DatabaseError::Query)?;
    Ok(User::from(row)) // assumes an `impl From<Row> for User`
}

// BAD: Crashes on error
pub async fn fetch_user(id: UserId) -> User {
    db.query_one("SELECT * FROM users WHERE id = $1", &[&id])
        .await
        .unwrap() // ❌ NEVER IN PRODUCTION
}
```
- ALWAYS implement `Drop` for cleanup when holding resources.
- USE the RAII pattern (Resource Acquisition Is Initialization).
- AVOID `clone()` unless necessary; prefer borrowing (`&T`, `&mut T`).
- USE `Arc<T>` for shared ownership, `Rc<T>` only in single-threaded contexts.
- PREFER `Cow<'a, T>` when data might be borrowed or owned.
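A minimal RAII sketch: the guard releases its resource in `Drop`, so cleanup runs on every exit path. The atomic counter is a stand-in for a real resource (connection, lock, file handle):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for "resource released" bookkeeping.
static RELEASED: AtomicUsize = AtomicUsize::new(0);

struct ConnectionGuard;

impl ConnectionGuard {
    fn open() -> Self {
        // acquire the resource here
        ConnectionGuard
    }
}

impl Drop for ConnectionGuard {
    fn drop(&mut self) {
        // release runs automatically, even on early return or unwind
        RELEASED.fetch_add(1, Ordering::SeqCst);
    }
}

fn use_resource() {
    let _guard = ConnectionGuard::open();
    // work with the resource; no manual cleanup call needed
} // `_guard` dropped here
```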
- RUNTIME: Use Tokio for production (most battle-tested).
- AVOID blocking operations in async contexts; use `tokio::task::spawn_blocking`.
- USE `async fn` in traits via the `async-trait` crate, or natively (stable since Rust 1.75, with `dyn Trait` limitations).
- PREFER `tokio::select!` over manual polling.
- IMPLEMENT timeouts with `tokio::time::timeout`.
```rust
// GOOD: Non-blocking I/O
#[tokio::main]
async fn main() -> Result<()> {
    let _result = tokio::time::timeout(
        Duration::from_secs(5),
        fetch_data(),
    )
    .await??;
    Ok(())
}

// BAD: Blocking the async runtime
async fn bad_async() {
    std::thread::sleep(Duration::from_secs(1)); // ❌ Blocks the executor
}
```
- USE `Send` + `Sync` bounds explicitly when needed.
- PREFER message passing (channels) over shared state.
- USE
tokio::sync::RwLockin async,std::sync::RwLockin sync code. - AVOID
Mutexfor read-heavy workloads - useRwLock. - IMPLEMENT graceful shutdown with
CancellationToken(tokio-util).
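A std-only sketch of message passing over shared state: each worker owns its chunk and only results cross thread boundaries (in async code, `tokio::sync::mpsc` plays the same role):

```rust
use std::sync::mpsc;
use std::thread;

// No shared mutable state: workers own their data and send results back.
fn sum_in_parallel(chunks: Vec<Vec<u64>>) -> u64 {
    let (tx, rx) = mpsc::channel();
    for chunk in chunks {
        let tx = tx.clone();
        thread::spawn(move || {
            let partial: u64 = chunk.iter().sum();
            let _ = tx.send(partial); // ignore error if receiver is gone
        });
    }
    drop(tx); // close the channel so `rx.iter()` terminates
    rx.iter().sum()
}
```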
- USE newtypes for domain modeling (`struct UserId(Uuid);`).
- IMPLEMENT `From`/`TryFrom` for conversions.
- USE `#[non_exhaustive]` for public enums that might grow.
- PREFER `Option<T>` over nullable patterns.
- USE the builder pattern for complex constructors (derive with `derive_builder`).
```rust
// GOOD: Type-safe domain modeling
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct UserId(Uuid);

impl From<Uuid> for UserId {
    fn from(id: Uuid) -> Self { Self(id) }
}

// BAD: Primitive obsession
type UserId = String; // ❌ No type safety
```
- `tokio` - Async runtime (features: `full`)
- `anyhow` - Error handling (applications)
- `thiserror` - Error derives (libraries)
- `serde` + `serde_json` - Serialization
- `tracing` + `tracing-subscriber` - Structured logging
- `axum` - Modern web framework (Tokio-based)
- `tower` - Middleware and service abstractions
- `hyper` - Low-level HTTP (Axum builds on this)
- `reqwest` - HTTP client
- `sqlx` - Async SQL with compile-time query checks
- `diesel` - Sync ORM (if preferred)
- `redis` - Redis client (async)
- `criterion` - Benchmarking (in `benches/`)
- `pprof` - CPU profiling
- `tracing-opentelemetry` - Distributed tracing
- `metrics` + `metrics-exporter-prometheus` - Metrics
- `secrecy` - Secret types (zeroize on drop)
- `argon2` - Password hashing
- `jsonwebtoken` - JWT handling
```toml
# Cargo.toml
[lints.clippy]
all = { level = "warn", priority = -1 }      # groups need lower priority
pedantic = { level = "warn", priority = -1 } # than the individual lints below
unwrap_used = "deny"        # No unwrap in production
expect_used = "deny"        # No expect in production
panic = "deny"              # No panics
missing_errors_doc = "warn" # Document error cases
```

```toml
# rustfmt.toml
edition = "2021"
max_width = 100
hard_tabs = false
tab_spaces = 4
newline_style = "Unix"
use_small_heuristics = "Max"
```

```shell
# ALWAYS run before commit
cargo fmt --check
cargo clippy -- -D warnings
cargo test --all-features
cargo audit # Security vulnerabilities
```

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_user_creation() {
        let user = User::new("alice");
        assert_eq!(user.name(), "alice");
    }

    #[tokio::test]
    async fn test_async_operation() {
        let result = fetch_data().await;
        assert!(result.is_ok());
    }
}
```
- Test the public API surface.
- Use test fixtures and builders.
- Mock external dependencies with `mockall` or `wiremock`.
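The idea behind `mockall`, in hand-rolled form: depend on a trait and substitute a fake in tests (names here are illustrative):

```rust
// Production code depends on the trait, not on a concrete database.
trait UserStore {
    fn find_name(&self, id: u32) -> Option<String>;
}

fn greeting(store: &dyn UserStore, id: u32) -> String {
    match store.find_name(id) {
        Some(name) => format!("hello, {name}"),
        None => "hello, stranger".to_string(),
    }
}

// Test double: canned responses, no real I/O.
struct FakeStore;

impl UserStore for FakeStore {
    fn find_name(&self, id: u32) -> Option<String> {
        if id == 1 { Some("alice".to_string()) } else { None }
    }
}
```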
- USE `proptest` or `quickcheck` for edge cases: they generate randomized inputs to find bugs.
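What `proptest`/`quickcheck` automate, in miniature: generate many random inputs and assert an invariant over all of them. The crates add shrinking and far better generators; the tiny LCG below just keeps this sketch dependency-free:

```rust
// Deterministic LCG (Knuth's MMIX constants) so no external crate is needed.
fn lcg(seed: &mut u64) -> u64 {
    *seed = seed
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    *seed
}

// Property under test: reversing a vector twice yields the original.
fn reverse_roundtrip_holds(input: &[u8]) -> bool {
    let mut v = input.to_vec();
    v.reverse();
    v.reverse();
    v == input
}

// Run the property against `cases` randomized inputs.
fn check_property(cases: usize) -> bool {
    let mut seed = 42;
    (0..cases).all(|_| {
        let len = (lcg(&mut seed) % 16) as usize;
        let data: Vec<u8> = (0..len).map(|_| lcg(&mut seed) as u8).collect();
        reverse_roundtrip_holds(&data)
    })
}
```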
```shell
# CPU profiling
cargo build --release
perf record -g ./target/release/myapp
perf report

# Memory profiling: add the dhat crate and instrument in code
cargo add dhat
```
- AVOID premature optimization; profile first.
- USE `#[inline]` sparingly (hot paths only).
- PREFER iterators over manual loops (zero-cost).
- USE `SmallVec` for stack-allocated vectors (<= 8 items).
- ENABLE LTO in release builds:

```toml
[profile.release]
lto = true
codegen-units = 1
opt-level = 3
```

````rust
/// Fetches user data from the database.
///
/// # Arguments
/// * `id` - The unique user identifier
///
/// # Errors
/// Returns `DatabaseError::NotFound` if the user doesn't exist.
///
/// # Examples
/// ```
/// let user = fetch_user(UserId::new()).await?;
/// ```
pub async fn fetch_user(id: UserId) -> Result<User, DatabaseError> {
    // ...
}
````
- Project Overview - What it does
- Installation - `cargo install` or Docker
- Configuration - Environment variables
- Usage - Quick start examples
- Architecture - High-level design
- Contributing - How to contribute
- License - Apache-2.0 / MIT dual-license (standard)
```rust
use std::fmt;

use secrecy::{ExposeSecret, Secret}; // ExposeSecret gates reads of the value

#[derive(Clone)]
pub struct DatabasePassword(Secret<String>);

// Never logs or displays the secret
impl fmt::Debug for DatabasePassword {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "DatabasePassword([REDACTED])")
    }
}
```

```shell
# Install cargo-audit
cargo install cargo-audit

# Run before every release
cargo audit

# Check for outdated deps
cargo outdated
```

```toml
# deny.toml
[advisories]
vulnerability = "deny"
unmaintained = "warn"
```

```dockerfile
FROM rust:1.75 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/myapp /usr/local/bin/
CMD ["myapp"]
```
- Structured Logging: `tracing` with JSON format
- Metrics: Export Prometheus metrics
- Health Checks: `/health` endpoint (liveness/readiness)
- Graceful Shutdown: SIGTERM handling
❌ Using unwrap()/expect() in production
✅ Use ? or match with proper error handling
❌ Blocking in async contexts
✅ Use spawn_blocking for sync operations
❌ Excessive cloning
✅ Prefer borrowing with lifetimes
❌ Arc<Mutex<T>> everywhere
✅ Use channels for message passing
❌ No error context
✅ Add context with .context() (anyhow) or .map_err()
❌ String for everything
✅ Use newtypes and domain types
❌ No integration tests
✅ Test public API in tests/
❌ Ignoring Clippy warnings
✅ Fix or explicitly allow with justification
```yaml
# .github/workflows/ci.yml
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: dtolnay/rust-toolchain@stable
      - run: cargo fmt -- --check
      - run: cargo clippy -- -D warnings
      - run: cargo test --all-features
      - run: cargo audit
      - run: cargo build --release
```
- Public API: Follow SemVer strictly (breaking = major bump).
- MSRV: Document the Minimum Supported Rust Version.
- Deprecation: Use `#[deprecated]` before removing APIs.
- Changelog: Maintain `CHANGELOG.md` (Keep a Changelog format).
{{/SLOT:guidelines}}
.project/
├── _templates/ # Templates (task, context, adr, backlog)
├── current-task.md # Active work
├── context.md # Session state persistence
├── backlog/ # Tasks: T{XXX}-{name}.md
├── completed/ # Archive: {YYYY-MM-DD}-T{XXX}-{name}.md
├── decisions/ # ADRs: {YYYY-MM-DD}-ADR{XXX}-{name}.md
├── ideas/ # Ideas: I{XXX}-{name}.md
├── reports/ # Reports: {YYYY-MM-DD}-R{XXX}-{name}.md
├── docs/ # Documentation
├── scripts/ # Automation
└── README.md # Project overview
Start:
- Read `context.md` (session state, next_action)
- Read `current-task.md` (active checklist)
- Review last commit: `git log -1 --oneline`
- Continue from next_action

Interruption Recovery:
- Use `aipim pause --reason="..."` to save state, optionally stashing changes.
- Use `aipim resume` to restore context and stashed work.
During:
- Update task checkboxes as completed
- Commit frequently
- Add discoveries to task or backlog
End:
- Update `current-task.md`: actual_hours, checkboxes
- Update `context.md`: session++, next_action, summary
- Check `aipim deps` to ensure no blockers
- Select the next task from the backlog (priority + dependencies) and move it to `current-task.md`
- Run quality check (optional): `.project/scripts/analyze-quality.sh --manual`
- Update metrics (see Metrics Protocol below)
- Commit & push
When user says "do next task" or "start task X", you MUST follow this protocol:
1. Context Loading (REQUIRED FIRST STEP):
- Read THIS FILE (GEMINI.md/CLAUDE.md) - contains MANDATORY project guidelines
- Read `.project/context.md` - session state and metrics
- Read `.project/backlog/` - check which task is next (or the specified one)
- Count completed tasks in `.project/completed/` to know progress
2. Quality Gates (BEFORE marking task complete):
- All tests passing (`npm test` or equivalent)
- No lint warnings (`npm run lint` or equivalent)
- Definition of Done satisfied (see DoD section above)
- Code reviewed and clean (no debug code, console.logs, TODOs)
3. Git Commit (ONE atomic commit per task):
- Format: `type(scope): description` (SINGLE LINE ONLY)
- Types: feat, fix, docs, style, refactor, test, chore
- Example: `feat(task): implement user authentication`
- NO multi-line commits, NO extra explanations in the commit message
4. Context Awareness (CRITICAL):
- When your context window is running low (>70% used):
  - ⚠️ WARN the user explicitly: "Context running low, recommend pausing"
  - DO NOT continue to the next task automatically
- Let user start fresh session to maintain quality
- This prevents rushed work and maintains code quality
5. Session Integrity (CRUCIAL FOR QUALITY):
- NEVER pick up a session mid-task
- Complete current task fully OR pause and let user resume later
- One task = one complete cycle (start → implement → test → commit → done)
- This ensures each task has full context and attention
Why this matters: Tasks done with fragmented context lead to bugs, inconsistencies, and technical debt. Better to pause and resume fresh than rush through with degraded context.
MANDATORY: Break down tasks >12h into manageable phases
When starting a task with estimated_hours > 12:
- Analyze complexity and identify natural phases
- Create 3-5 phases of 2-6 hours each
- Document phases in current-task.md under Implementation section
- Treat each phase as mini-task with its own commit upon completion
Typical phase structure:
1. Setup/Scaffolding (1-3h)
   - Schema/migrations
   - Directory structure
   - Dependencies
   - Basic configuration
2. Core Implementation (4-6h)
   - Main business logic
   - Data processing
   - API endpoints
   - Integration points
3. Testing (2-4h)
   - Unit tests
   - Integration tests
   - Edge cases
   - Error handling
4. Documentation & Polish (1-3h)
   - API documentation
   - README updates
   - Code cleanup
   - Examples

Throughout, document pain points: record any frustration or slowness.
Add to current-task.md under Implementation:
### Phase 1: Setup & Schema (3h)
- [ ] Define data schema
- [ ] Create database tables/migrations
- [ ] Setup dependencies
- [ ] Basic configuration
**Deliverable:** Database ready, dependencies installed
**Commit message:** feat(phase1): setup schema and dependencies
### Phase 2: Core Implementation (6h)
- [ ] Implement main business logic
- [ ] Create API endpoints
- [ ] Add data validation
- [ ] Error handling
**Deliverable:** Core functionality working
**Commit message:** feat(phase2): implement core functionality
### Phase 3: Testing (4h)
- [ ] Unit tests for core logic
- [ ] Integration tests
- [ ] Edge case coverage
- [ ] Performance validation
**Deliverable:** 80%+ test coverage
**Commit message:** test(phase3): comprehensive test suite
### Phase 4: Documentation (2h)
- [ ] API documentation
- [ ] README updates
- [ ] Usage examples
- [ ] Code comments where needed
**Deliverable:** Complete documentation
**Commit message:** docs(phase4): add comprehensive documentation

Before starting implementation, verify:
- ✅ All phases clearly defined with deliverables
- ✅ Sum of phase estimates matches total estimate (±1h tolerance)
- ✅ Each phase is 2-6 hours (not too small, not too large)
- ✅ Each phase has clear completion criteria
- ✅ Commit message convention defined per phase
During implementation:
- Complete phases sequentially (no skipping)
- Commit immediately after completing each phase
- Update checkboxes in real-time
- If phase exceeds estimate by >50%, reassess remaining phases
- Reduced stall risk: Clear checkpoints prevent midway blockage
- Better tracking: Visible progress through phase completion
- Atomic commits: Each phase is a logical unit of work
- Easier estimation: Breaking down improves accuracy for future tasks
- Context preservation: Easier to resume if interrupted
Before (risky):
```yaml
title: "Implement Oracle Profiling System"
estimated_hours: 18
```

After (manageable):
## Implementation
### Phase 1: Setup & Schema (3h)
- [ ] Define profile schema (tactics, patterns, confidence)
- [ ] Create PostgreSQL tables with indexes
- [ ] Write migrations
- [ ] Setup model relationships
### Phase 2: Tactical Pattern Detection (6h)
- [ ] Implement pin detection algorithm
- [ ] Add fork detection logic
- [ ] Create skewer pattern recognition
- [ ] Build discovered attack tracker
- [ ] Add confidence scoring system
### Phase 3: Testing & Validation (5h)
- [ ] Unit tests for each pattern detector
- [ ] Integration tests with sample games
- [ ] Edge case handling (ambiguous positions)
- [ ] Performance tests (1000+ game analysis)
### Phase 4: Documentation & Polish (4h)
- [ ] API documentation for profiling endpoints
- [ ] README with usage examples
- [ ] Code cleanup and optimization
- [ ] Add inline comments for complex algorithms

Result: 4 phases × avg 4.5h = 18h total, each with a clear deliverable
MANDATORY: Update metrics after completing each task
- After completing each task - Update productivity metrics
- Weekly on Fridays - Generate weekly report
- Monthly on last Friday - Generate monthly trend report
Update the Metrics section in context.md with:
Productivity:
- Tasks completed this week: X
- Tasks completed this month: Y
- Estimate accuracy: Z (actual/estimated avg)
- Velocity trend: ↗️/→/↘️ (compare last 4 weeks)

Calculation formulas:
- Estimate accuracy = `sum(actual_hours from completed/) / sum(estimated_hours from completed/)`
- Velocity trend = compare task counts: last week vs the 4-week average
  - ↗️ Improving: >10% increase
  - → Stable: ±10%
  - ↘️ Declining: >10% decrease
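The estimate-accuracy formula can be computed mechanically from the front matter of completed task files. A sketch, with sample data created in a temp dir purely for illustration:

```shell
# Two fake completed-task files using the front-matter fields from the task template
dir=$(mktemp -d)
printf -- '---\nestimated_hours: 5\nactual_hours: 4.5\n---\n' > "$dir/t1.md"
printf -- '---\nestimated_hours: 8\nactual_hours: 10\n---\n' > "$dir/t2.md"

# Estimate accuracy = sum(actual_hours) / sum(estimated_hours)
awk '/^estimated_hours:/ {e += $2} /^actual_hours:/ {a += $2} END {printf "%.2f\n", a / e}' "$dir"/*.md
```

In practice, point the `awk` line at `.project/completed/*.md`.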
Quality:
- Test coverage: X% (from coverage reports)
- Bugs reported this week/month: Y
- Code quality warnings: Z (linter output)

Blockers:
- Most common type: X (parsed from the blockers field of completed tasks)
- Average resolution time: Y hours
- Active blockers: Z (from current-task.md)

Trends (Last 30 Days):
- Tasks completed: X (prev: Y) ↗️ +Z%
- Average task size: Xh (prev: Yh)
- Rework rate: X% (tasks requiring fixes after completion)

Optional: use `.project/scripts/calculate-metrics.sh` to auto-calculate metrics from completed task files.
```shell
# Generate metrics report
.project/scripts/calculate-metrics.sh

# Output formatted for context.md
.project/scripts/calculate-metrics.sh --format=markdown
```

After completing TASK-003 (5h estimated, 4.5h actual):
**Productivity:**
- - Tasks completed this week: 2
+ - Tasks completed this week: 3
- - Estimate accuracy: 1.15
+ - Estimate accuracy: 1.12

MANDATORY: Weekly review on Fridays
- Analyze Backlog:
  - Identify "stale" tasks (>4 weeks without update)
  - Identify "blocked" tasks (>2 weeks blocked)
- Generate Report:
  - Run `.project/scripts/backlog-health.sh`
- Take Action:
  - Archive: move widely irrelevant/stale tasks to `.project/backlog/archived/`
  - Unblock: schedule action items for blocked tasks
  - Deprioritize: downgrade P2s to P3s if they are stalling
- Stale Rate: % of backlog items untouched for >30 days (Target: <20%)
- Blocker Age: Max days a task has been blocked (Target: <14 days)
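Stale rate can be approximated from file modification times. A sketch (GNU coreutils assumed for `touch -d`; the sample files exist only for illustration):

```shell
dir=$(mktemp -d)
touch -d '45 days ago' "$dir/T001-old.md"       # stale
touch "$dir/T002-fresh.md" "$dir/T003-fresh.md" # recently updated

total=$(find "$dir" -name '*.md' | wc -l)
stale=$(find "$dir" -name '*.md' -mtime +30 | wc -l)
echo "stale rate: $((100 * stale / total))%"
```

In practice, run the `find` commands against `.project/backlog/` and compare the result to the <20% target.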
```yaml
---
title: "Feature Name"
created: 2025-01-07T10:00:00-03:00
last_updated: 2025-01-07T14:00:00-03:00
priority: P1-M # P1-S/M/L | P2-S/M/L | P3 | P4
estimated_hours: 8
actual_hours: 0
status: in-progress # backlog | in-progress | blocked | completed
blockers: []
tags: [backend, api]
---
```

Template: `.project/_templates/task.md`
| Code | Meaning | Action |
|---|---|---|
| P1-S | Critical + Small (<2h) | Do now |
| P1-M | Critical + Medium (2-8h) | Today |
| P1-L | Critical + Large (>8h) | Break down |
| P2-S | High + Small | Quick win |
| P2-M | High + Medium | This week |
| P2-L | High + Large | Plan |
| P3 | Nice to have | Backlog |
| P4 | Low priority | Maybe never |
Must check ALL before completing:
Functionality: [ ] Works [ ] Edge cases [ ] Errors [ ] Loading [ ] Responsive
Testing: [ ] Unit [ ] Feature [ ] Browser OK [ ] 80%+ coverage
Performance: [ ] No N+1 [ ] Eager load [ ] Indexes [ ] Cache [ ] <2s [ ] Paginated
Security: [ ] Validation [ ] Auth [ ] No secrets logged [ ] CSRF [ ] SQL-safe [ ] XSS-safe
Code: [ ] PSR-12 [ ] Docs [ ] No debug [ ] Clean names
Docs: [ ] Time logged [ ] ADR if needed [ ] README if API changed
Git: [ ] Atomic commits [ ] Convention [ ] No conflicts
Full checklist: .project/docs/definition-of-done.md
```shell
# Start new task from template (for new tasks)
cp .project/_templates/task.md .project/current-task.md

# Start task from backlog (IMPORTANT: use mv not cp)
mv .project/backlog/T{XXX}-{name}.md .project/current-task.md

# Complete task
mv .project/current-task.md .project/completed/$(date +%Y-%m-%d)-T{XXX}-{name}.md

# Create ADR
cp .project/_templates/adr.md .project/decisions/$(date +%Y-%m-%d)-ADR{XXX}-{name}.md

# Validate quality
.project/scripts/validate-dod.sh

# Check session budget
.project/scripts/pre-session.sh
```

Use `mv` (not `cp`) so the task is removed from the backlog. A task should only exist in one place: backlog → current-task → completed.
```markdown
---
session: 42
last_updated: 2025-01-07T14:30:00-03:00
active_branches: [feature/auth]
blockers: []
next_action: "Complete login tests"
---

# Current State
[2-3 sentences: where are we?]

# Active Work
[What's in progress, what's blocked]

# Recent Decisions
[Last 2-3 important decisions]

# Next Steps
1. Immediate action
2. Then this
3. Then that
```

Keep concise: max 200 lines. Archive old sessions to `context-archive/`.
Template: .project/_templates/context.md
MANDATORY: Archive old sessions every 10 sessions
When session number % 10 == 0:

1. Create archive directory (if needed): `mkdir -p .project/context-archive`
2. Archive old sessions:
   - Extract sessions 1 through (N-5) from context.md
   - Save to `.project/context-archive/YYYY-MM-period.md`
   - Filename format: `2026-01-weeks1-2.md` (based on date range)
3. Update context.md:
   - Keep only the last 5 session summaries
   - Preserve current state, active work, recent decisions
   - Update the session counter
4. Commit:

```shell
git add .project/context.md .project/context-archive/
git commit -m "chore: archive context sessions 1-N"
```
Archive structure:
.project/context-archive/
├── 2026-01-weeks1-2.md # Sessions 1-10
├── 2026-01-weeks3-4.md # Sessions 11-20
└── 2026-02-weeks1-2.md # Sessions 21-30
Why: Keeps context.md under 200 lines, reduces token consumption, preserves history.
Script: Optional helper at .project/scripts/archive-context.sh (see below)
Complex tasks (>20 checkboxes OR >500 lines): Use directory
current-task/
├── main.md # Master checklist
├── implementation.md # Technical details
└── testing.md # Test plan
Completion: Move entire directory to completed/YYYY-MM-DD-name/
Load selectively to save tokens:
- Only files being actively edited (max 3-4 per turn)
- Use the `view` tool for reference; don't keep files in memory
- Avoid loading large files entirely (use line ranges)
Example:
```shell
# GOOD (specific)
view app/Models/User.php

# BAD (loads everything)
view app/
```
Be concise by default:
- Code: No explanatory comments unless complex
- Responses: 2-3 sentences per point
- Examples: Only when requested
- No apologies or meta-commentary
Expand when:
- User asks "explain in detail"
- Security/performance critical
- Complex architectural decision
Create ADR for:
- Technology/framework selection
- Database schema design
- API architecture
- Security approach
- Major refactoring
Format: See .project/_templates/adr.md
Status values: Proposed | Accepted | Deprecated | Superseded
MANDATORY: Check for architectural decisions after each task
If you (the Agent or User) have made any of the following changes, strictly prompt for an ADR:
- Technology/Library Choice: "Chose X over Y" (e.g., `zod` vs `yup`, `postgres` vs `mongo`)
- Schema Design: "Designed database structure for X" (e.g., relations, indexing strategy)
- API Architecture: "Defined endpoints/patterns for X" (e.g., REST vs GraphQL, error envelope)
- Security Mechanism: "Implemented auth/security for X" (e.g., JWT, RBAC, encryption)
- Performance Pattern: "Optimized X using Y" (e.g., caching strategy, lazy loading)
- Refactoring Pattern: "Refactored to use [Pattern Name]" (e.g., Factory, Strategy, Observer)
If a trigger is detected, the Agent MUST ask:
"I noticed we made an architectural decision regarding [TOPIC]. Should we create an ADR to document the context, alternatives, and rationale?
Run: `cp .project/_templates/adr.md .project/decisions/$(date +%Y-%m-%d)-ADR{XXX}-[topic].md`"
```shell
# Before completing a task
.project/scripts/validate-dod.sh
```
Checks: debug statements, tests, N+1 queries, security, documentation

```shell
# Before starting a session
.project/scripts/pre-session.sh
```
Shows: token estimate, large files, git status
Option 1: Personal task files
current-task-rafhael.md
current-task-joao.md
Option 2: Branch-based (recommended)
git checkout feature/auth → .project/current-task.md
git checkout feature/payments → .project/current-task.md (different)
Quality:
- Test coverage trend
- Bugs in production
- DoD compliance rate
Velocity:
- Tasks completed/week
- Estimate accuracy (actual vs estimated)
- Time to restore context
Process:
- Stale tasks (>14 days not updated)
- Blocker resolution time
- ADR review compliance
context.md too long?
```shell
mkdir -p .project/context-archive/
mv .project/context.md .project/context-archive/2025-01-Q1.md
cp .project/_templates/context.md .project/context.md
```

Backlog overwhelming?
```shell
# Archive low-priority old items
mkdir -p .project/ideas/archived/
mv .project/backlog/*P4*.md .project/ideas/archived/
```

Hit token limits?
- Run `pre-session.sh` before starting
- Archive old context sessions
- Load fewer files in memory
- Use line ranges with the `view` tool
Full documentation:
- `.project/docs/task-management.md` - Complete task workflow
- `.project/docs/definition-of-done.md` - Full DoD checklist
- `.project/docs/adr-guide.md` - Architecture decision records
- `.project/docs/examples/` - Complete examples
Quick references:
- Git convention: `type(scope): description`
- Types: feat, fix, docs, style, refactor, test, chore
- Commit atomically (one logical change per commit)
- Branch naming: `feature/name` or `fix/name`
Version: 1.2 Compact
Last updated: 2025-01-07
Full version: Available as CLAUDE-full.md for reference
When to use full version:
- First time using system (read once)
- Training new team members
- Need detailed examples
When to use compact version:
- Daily development (this file)
- Quick reference
- Token optimization