Commit eea3b5b

docs: add SE2.0 philosophy outline with talking points
- Structure Part I into 8 sections covering paradigm shift, management parallel, three core principles, tool design for agency, meta-tooling, and human touch
- Add talking points for each subsection
- Include explicit interfaces and enforced next steps principles
- Reference MicroCLI AST enforcement as implementation detail
- Summary lists 7 principles total (3 core + 3 tool + 1 human)
1 parent f874c7c commit eea3b5b

1 file changed

Lines changed: 227 additions & 0 deletions
@@ -0,0 +1,227 @@
---
title: The Philosophy of Software Engineering 2.0
subtitle: Outline with Talking Points
status: draft
---

# Part I: The Philosophy of Software Engineering 2.0

## 1. Introduction: The Paradigm Shift

### The Context Saturation Problem
- **Talking point:** No matter how large the context window (1M tokens, 10M tokens), we will eventually saturate it
- **Talking point:** LLMs don't forget—they *drift*. They reinterpret based on lossy, blurry context
- **Talking point:** Symptom: the work gets 95% done, but the last 5% requires arguing with the LLM to get it right
- **Talking point:** The real problem isn't memory—it's *meaning*. Models lose track of what's important

### Why Traditional SE Isn't Enough
- **Talking point:** SE1.0 principles (DRY, YAGNI, SOLID) assume human readers
- **Talking point:** These principles optimize for human comprehension, not AI comprehension
- **Talking point:** New principles are needed that work for both humans and AI
- **Talking point:** The shift: from "write code for humans" to "design systems for intelligibility" (bidirectional)

### Thesis: Agent-as-Collaborator
- **Talking point:** AI is not a tool to be wielded—it's a collaborator to be organized
- **Talking point:** The relationship changes: from human-drives-AI to human-and-AI-drive-together
- **Talking point:** This requires rethinking processes, not just prompts
- **Talking point:** SE2.0 = SE1.0 principles extended to include AI as a first-class participant

---

## 2. Good AI Users Are Good Managers

### The Parallel to People Management
- **Talking point:** All the science of people management applies to AI management
- **Talking point:** Clear roles and responsibilities, explicit documentation, don't assume—ask
- **Talking point:** The same skills that make a good manager make a good AI orchestrator
- **Talking point:** Example: You don't micromanage good employees; you don't micromanage well-designed AI workflows

### SE1.0: Processes for People
- **Talking point:** SE1.0's insight: the most critical aspect of software is people
- **Talking point:** Processes exist to help people collaborate effectively
- **Talking point:** Code review, standups, planning poker—all designed for human cognition and communication
- **Talking point:** These processes worked because they accounted for human limitations (memory, attention, context switching)

### SE2.0: Processes for AI-as-Collaborator
- **Talking point:** AI has different limitations: context drift, hallucination, inability to recover gracefully from errors
- **Talking point:** AI has different strengths: parallel processing, perfect memory (when designed right), tireless execution
- **Talking point:** SE2.0 extends SE1.0's insight: make processes work for AI *in addition to* people
- **Talking point:** The goal: create systems where human + AI together are more effective than either alone

---

## 3. Principle One: Make Important Things Explicit

### Context Engineering
- **Talking point:** There are two kinds of context: what you put in, and what the model retains
- **Talking point:** The art: don't pollute context with noise, but re-inject what's relevant
- **Talking point:** This is a tradeoff, not a solution—and being conscious of it is the first step
- **Talking point:** Example: In a long project, the model forgets *why* a decision was made months ago

### Files as Long-Term Memory
- **Talking point:** Important decisions should be committed to writing before they are acted upon
- **Talking point:** Plans in markdown files = physical source of truth
- **Talking point:** Research summaries stored in real-time prevent re-research
- **Talking point:** The journal pattern: a chronological record of decisions for future reference (see the sketch below)
- **Talking point:** These files serve both human and AI—shared understanding
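
A minimal sketch of the journal pattern, assuming a plain `docs/decisions.md` file; the helper name and entry fields are illustrative, not something this outline prescribes:

```python
from datetime import date
from pathlib import Path

JOURNAL = Path("docs/decisions.md")  # assumed location, not prescribed

def record_decision(title: str, choice: str, reason: str) -> None:
    """Append a dated entry so humans and AI share one durable record."""
    JOURNAL.parent.mkdir(parents=True, exist_ok=True)
    entry = (
        f"\n## {date.today().isoformat()}: {title}\n"
        f"- **Decision:** {choice}\n"
        f"- **Why:** {reason}\n"
    )
    with JOURNAL.open("a", encoding="utf-8") as f:
        f.write(entry)

# The "why" is written down before anyone acts on it, so a future
# session (human or AI) never has to guess the rationale.
record_decision(
    title="HTTP client",
    choice="Use library X over library Y",
    reason="X has native async support; Y would need a thread-pool shim",
)
```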

### The Cost of Implicit Knowledge
- **Talking point:** "It should be obvious from the code" fails when context is saturated
- **Talking point:** Explicit documentation isn't just for humans—it's for AI continuity
- **Talking point:** The cost of implicit: the model invents plausible but wrong explanations
- **Talking point:** Example: Why did we use library X over library Y? If it's not documented, the AI will guess

---

## 4. Principle Two: Resist the Urge to Guess

### Explicit Commands Over Implicit Skills
- **Talking point:** Implicit skills (auto-activated based on context) are powerful but uncontrollable
- **Talking point:** Explicit commands: `/plan`, `/research`, `/build`—you invoke them, you control them
- **Talking point:** Example: Instead of hoping the model knows to make a plan, explicitly invoke `/plan`
- **Talking point:** The tradeoff: explicit is more verbose, but more controllable

### Agents Should Ask, Not Assume
- **Talking point:** A good collaborator asks when uncertain—this applies to AI too
- **Talking point:** Design systems where asking is cheap and rewarded, not penalized
- **Talking point:** Example: A model that asks "should I research this library's API?" is more trustworthy than one that guesses
- **Talking point:** The failure mode: models that seem confident but are wrong (hallucination)

### Planning Before Execution
- **Talking point:** The `/plan` cycle: understand → plan → execute
- **Talking point:** Plans should be physical files, not mental states (see the example layout below)
- **Talking point:** Plans serve as contracts between human intent and AI execution
- **Talking point:** Example: "Read the plan, follow the plan, update the plan if needed"
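
One possible shape for such a plan file; the headings and checklist style are illustrative assumptions, not a format the outline prescribes:

```markdown
# Plan: migrate the HTTP client

## Goal
Replace library Y with library X without changing public behavior.

## Steps
1. [ ] Inventory all call sites of library Y
2. [ ] Wrap library X behind the existing interface
3. [ ] Migrate call sites one module at a time
4. [ ] Remove library Y and its shims

## Open questions
- Do downstream plugins need a compatibility window?
```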

---

## 5. Principle Three: Delegate, Delegate, Delegate

### Context Isolation
- **Talking point:** The key insight: subagents have *private* context that doesn't pollute the main session
- **Talking point:** Example: A 30-minute research loop with hundreds of web pages returns only a summary
- **Talking point:** This enables long-running tasks without context exhaustion
- **Talking point:** The cost: you only get back what the subagent summarizes—but that's usually enough

### Specialists and Subagents
- **Talking point:** Different tasks need different thinking styles
- **Talking point:** Subagents can be specialized: researcher, planner, writer, reviewer
- **Talking point:** Each subagent has a clear role and boundary—don't overlap contexts unnecessarily
- **Talking point:** Example: The researcher scours the web and returns synthesized findings; the main agent builds on them

### Preserving Reasoning Without Pollution
- **Talking point:** Internal reasoning should be kept private, summarized results shared
- **Talking point:** This is the opposite of "think step by step" prompting—it's "think privately, report cleanly"
- **Talking point:** The agent shouldn't have to remember *how* the research was done, only *what* was found
- **Talking point:** This principle enables scaling: 10 subagents working in parallel, one main agent synthesizing (see the sketch below)
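
A minimal sketch of the isolation boundary; `call_model` is a stub standing in for a real LLM API, and every name here is an assumption for illustration:

```python
def call_model(prompt: str) -> str:
    """Stub for an LLM call; replace with a real API client."""
    return f"[model response to: {prompt[:40]}...]"

def run_subagent(task: str) -> str:
    """Run a task in a private context and return only a summary.

    Intermediate reasoning and raw material stay inside this function;
    the caller's context receives one clean result.
    """
    scratch: list[str] = []  # private working context, never returned
    for step in ("gather sources", "extract findings", "check conflicts"):
        scratch.append(call_model(f"{step} for task: {task}"))
    # Only the synthesis crosses the boundary back to the main agent.
    return call_model("Summarize these findings:\n" + "\n".join(scratch))

# The main agent sees one summary, not hundreds of pages of raw research.
summary = run_subagent("Compare library X and library Y for async HTTP")
print(summary)
```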

---

## 6. Tools That Enable Agency

### Instructions, Not Data
- **Talking point:** Traditional tools return data: "Here's the result"
- **Talking point:** Agent-friendly tools return instructions: "Here's the result, and here's what to do next"
- **Talking point:** Example pattern: `.explain()` — generates exact command strings with hydrated arguments (see the sketch below)
- **Talking point:** The shift: from "return a value" to "return a next step"
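
A minimal sketch of the instructions-not-data shift; the outline names `.explain()`, but this `ToolResult` type and the exact signature are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ToolResult:
    value: str          # the data a traditional tool would stop at
    next_command: str   # the exact follow-up invocation, arguments hydrated

    def explain(self) -> str:
        """Return the result plus a copy-pasteable next step."""
        return f"{self.value}\nNext: run `{self.next_command}`"

# A tool that created a draft tells the agent precisely how to continue.
result = ToolResult(
    value="Created draft 'se2-outline' (id=42)",
    next_command="python tool.py publish --id 42 --confirm true",
)
print(result.explain())
```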

### Self-Discoverable Interfaces
- **Talking point:** Tools should be explorable without documentation
- **Talking point:** Example: `--tour` mode that walks through all commands and their possible next steps (sketched below)
- **Talking point:** Agents can discover capabilities at runtime, not just at design time
- **Talking point:** This enables agents to learn new tools without human intervention
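
One way a `--tour` mode could work, assuming a simple in-tool command registry; the registry shape and command names are illustrative:

```python
# Hypothetical command registry: name -> (description, possible next steps)
COMMANDS: dict[str, tuple[str, list[str]]] = {
    "create": ("Create a draft", ["preview", "publish"]),
    "preview": ("Render a draft without saving", ["publish"]),
    "publish": ("Publish a draft", []),
}

def tour() -> None:
    """Walk an agent through every command and where it can lead."""
    for name, (desc, next_steps) in COMMANDS.items():
        print(f"{name}: {desc}")
        print(f"  next steps: {', '.join(next_steps) or 'none (terminal)'}")

tour()  # what an agent would run on first contact with the tool
```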

### Explicit Interfaces (No Implicits)
- **Talking point:** CLI tools for agents should have no implicit defaults, no positional arguments—all explicit
- **Talking point:** One way to call a tool: `tool.py --flag value` is the only pattern, always
- **Talking point:** Why: agents can't guess defaults or remember argument order; they need exact invocations
- **Talking point:** Example: `--name "value"` instead of positional `"value"`, a `--save` flag instead of implicit behavior
- **Talking point:** Consistency enables predictability: if every tool follows the same interface pattern, agents can generalize (see the sketch below)
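
A minimal sketch of such an interface using Python's standard `argparse`; the flag names are illustrative:

```python
import argparse

# Every argument is a named, required flag: no positionals, no hidden defaults.
parser = argparse.ArgumentParser(prog="tool.py")
parser.add_argument("--name", required=True, help="resource name")
parser.add_argument("--env", required=True, choices=["dev", "prod"],
                    help="target environment (must be stated, never assumed)")
parser.add_argument("--save", action="store_true",
                    help="apply changes; omitting it means preview only")
args = parser.parse_args()

# The only valid invocation shape: tool.py --name "value" --env dev [--save]
print(f"name={args.name} env={args.env} save={args.save}")
```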

### Enforced Next Steps (Design Principle)
- **Talking point:** Every command should return `m.ok` or `m.fail` with a `next=` parameter specifying what to do next
- **Talking point:** The principle: tools should never leave agents wondering "what now?"
- **Talking point:** This isn't just convention—it should be enforced by the tool framework
- **Talking point:** Example: `m.fail("Preview mode", next="Run with --save to apply")` or `m.ok("Created", next="Run --deploy to publish")`
- **Talking point:** The agent never invents next steps—the tool provides them, always (see the sketch below)
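
A guess at the contract behind these calls. MicroCLI's real implementation isn't shown in this outline, so this is a sketch of the shape, not the library:

```python
import json

class Messages:
    """Result helpers that force a next step into every outcome."""

    def ok(self, message: str, *, next: str) -> str:
        return json.dumps({"status": "ok", "message": message, "next": next})

    def fail(self, message: str, *, next: str) -> str:
        return json.dumps({"status": "fail", "message": message, "next": next})

m = Messages()

# Both outcomes tell the agent exactly what to do afterwards.
print(m.ok("Created", next="Run --deploy to publish"))
print(m.fail("Preview mode", next="Run with --save to apply"))
```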

### Descriptive Failures
- **Talking point:** Silent failures are the enemy of agentic workflows
- **Talking point:** Example: `m.fail("Docker not installed")` tells the agent exactly what went wrong
- **Talking point:** Failure modes should be documented in the tool itself, not just the docs
- **Talking point:** The goal: when something fails, the agent knows exactly why and what to do

### The Two-Phase Pattern
- **Talking point:** Preview → Confirm → Execute
- **Talking point:** Example: "Preview mode. Run with --save to apply"
- **Talking point:** This pattern enables safety without friction—human-in-the-loop when it matters
- **Talking point:** Agents can explore and preview without breaking things (see the sketch below)
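
A minimal sketch of the two-phase pattern; the function and arguments are illustrative:

```python
def apply_rename(old: str, new: str, save: bool) -> str:
    """Preview by default; execute only on explicit confirmation."""
    if not save:
        # Phase 1: describe the change, touch nothing.
        return f"Preview mode: would rename {old} -> {new}. Run with --save to apply"
    # Phase 2: ... perform the real rename here ...
    return f"Renamed {old} -> {new}. Run --verify to check the result"

print(apply_rename("draft.md", "final.md", save=False))  # explore safely
print(apply_rename("draft.md", "final.md", save=True))   # apply for real
```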

---

## 7. Meta-Tooling: Agents Creating Tools

### The Feedback Loop
- **Talking point:** Agents that use tools can improve tools
- **Talking point:** The cycle: agent uses tool → identifies friction → builds better tool → uses it
- **Talking point:** This is the key to self-improving systems
- **Talking point:** Example: An agent that notices repetitive workflows can create a new command

### Tools That Help Agents Build Tools
- **Talking point:** Meta-tools: tools that help agents create other tools
- **Talking point:** Example: A scaffolding command that generates boilerplate for new commands
- **Talking point:** The agent doesn't need to know everything—it needs to know how to delegate tool creation
- **Talking point:** This extends the "delegate, delegate, delegate" principle to tooling itself
- **Talking point:** Implementation detail: MicroCLI enforces `m.ok`/`m.fail` with `next=` via AST parsing at decorator time (see the sketch below)
- **Talking point:** The framework is a meta-tool that enforces tool quality—developers can't forget to return next steps
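
MicroCLI's actual enforcement code isn't shown in this outline, but the described mechanism can be approximated with the standard `ast` and `inspect` modules; a minimal sketch, with all names assumed:

```python
import ast
import inspect
import textwrap

class _M:
    """Stand-in for the framework's message helper."""
    def ok(self, msg: str, *, next: str) -> dict:
        return {"status": "ok", "message": msg, "next": next}
    def fail(self, msg: str, *, next: str) -> dict:
        return {"status": "fail", "message": msg, "next": next}

m = _M()

def command(func):
    """At decoration time, reject commands that can return without a next step.

    Simplified: checks explicit `return` statements only, not implicit
    `return None` paths.
    """
    tree = ast.parse(textwrap.dedent(inspect.getsource(func)))
    for node in ast.walk(tree):
        if not isinstance(node, ast.Return):
            continue
        call = node.value
        well_formed = (
            isinstance(call, ast.Call)
            and isinstance(call.func, ast.Attribute)
            and call.func.attr in ("ok", "fail")
            and any(kw.arg == "next" for kw in call.keywords)
        )
        if not well_formed:
            raise TypeError(
                f"{func.__name__}: every return must be "
                "m.ok(..., next=...) or m.fail(..., next=...)"
            )
    return func

@command
def create(name: str) -> dict:
    return m.ok(f"Created {name}", next="Run --deploy to publish")  # accepted

# A command returning bare data would raise TypeError at import time,
# before any agent could ever see a result without a next step.
```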

### Self-Improving Systems
- **Talking point:** The long-term vision: systems that evolve based on usage patterns
- **Talking point:** Agents identify inefficiencies and create solutions
- **Talking point:** This requires tools that are designed for modification, not just use
- **Talking point:** The open question: how much autonomy to give agents in tool creation?

---

## 8. The Human Touch

### What AI Automates (The 80%)
- **Talking point:** AI handles the first 80%: compilation, research synthesis, draft generation
- **Talking point:** This 80% is valuable precisely because it's repetitive and time-consuming
- **Talking point:** AI is good at: gathering, organizing, drafting, iterating fast
- **Talking point:** Example: Research that would take a day can be synthesized in minutes

### What Remains Irreducibly Human (The 20%)
- **Talking point:** The final 20%: taste, judgment, personal voice, ethical reasoning
- **Talking point:** AI drafts are "AI-ish"—they lack the human fingerprint that makes work original
- **Talking point:** The 20% is where the real work happens: polishing, questioning, reimagining
- **Talking point:** Example: "This article is probably 80% different from what the AI gave me"

### The Collaboration That Works
- **Talking point:** The best results come from human + AI, not either alone
- **Talking point:** AI amplifies human capability; it doesn't replace human judgment
- **Talking point:** The key: knowing when to use AI and when to override it
- **Talking point:** SE2.0 isn't about replacing human engineers—it's about making them more effective

### The Principles Endure
- **Talking point:** These principles work for people too, not just AI
- **Talking point:** SE2.0 extends SE1.0—it doesn't replace it
- **Talking point:** The future: agents creating agents, humans orchestrating teams of agents
- **Talking point:** The constants: clarity, intentionality, good judgment

---

## Summary: The Principles of SE2.0

### Three Core Principles
1. **Make important things explicit** — Context engineering, files as memory, no implicit knowledge
2. **Resist the urge to guess** — Explicit commands, agents should ask, plan before execute
3. **Delegate, delegate, delegate** — Context isolation, specialists, preserve reasoning

### Tool Design Principles
4. **Design for agency** — Instructions > data, self-discoverable, explicit interfaces, no implicits
5. **Enforced next steps** — Every command returns `m.ok`/`m.fail` with `next=`; the framework enforces this
6. **Enable self-improvement** — Meta-tooling, agents creating tools, self-improving systems

### The Human Principle
7. **Human in the loop** — AI amplifies, human judges; the 80/20 split
