The user is asking if I've been sent this GitHub link before. Let me search through my memory and recent sessions to check.
Looking at the context:
- This is a Discord channel session in #lambda-research-1
- The user is asking about a GitHub repository: https://github.com/sandgardenhq/sgai
I should check my session history or memory to see if this has been discussed before. Let me search through recent session files or just check my memory files.
Actually, I don't have a direct way to search previous sessions easily. I could use the clawsess tool mentioned in TOOLS.md, but that might be overkill. Let me just be honest - I don't have an easy way to search my full conversation history, and I should tell the user that I don't have a clear record of previous discussions.
Wait, I should try to search. Let me use clawsess to search for this.
Command:
clawsess search sgai 2>/dev/null || echo "clawsess not found or no results"
🔍 Searched for 'sgai', found 20 results:
`__REDACTED__`
Yes! I found references to sgai in the search results. There's a session from 2026-02-25 where I did a security audit on sgai (sandgardenhq/sgai). The search results show I analyzed it for prompt injection attacks and looked at the config.go file.
So yes, Yelo has sent me this before - we did a security audit on it back in February 2026.
Yelo wants me to deep dive into the sgai project. Let me first check if I have the project locally, and if not, clone it. Then I'll use the deepwiki skill to analyze it.
Command:
ls -la ~/projects/ | grep -i sgai || echo "not found"
not found
Need to clone it first. Let me clone the repo.
Command:
cd ~/projects && git clone --depth 1 https://github.com/sandgardenhq/sgai.git
Cloning into 'sgai'...
Now I need to use the deepwiki skill to analyze this project. Let me read the skill first.
File: ~/projects/public-skills/skills/deepwiki/SKILL.md
`__REDACTED__`
Now I understand the deepwiki skill. I need to use it to analyze the sgai project. Let me first explore the project structure to understand what it is, then use deepwiki to create comprehensive documentation.
Actually, looking at the skill, it seems I should just describe what I want ("Analyze this codebase and generate architecture documentation") and it will work. But since this is OpenClaw, I might need to use the commands or just do it manually.
Let me first explore the project to understand it, then provide a comprehensive analysis to Yelo.
Command:
cd ~/projects/sgai && find . -type f -name "*.go" | head -30 && echo "---" && ls -la
`__REDACTED__`
This is a Go project. Let me read the README and key files to understand the architecture and design. Then I'll provide a comprehensive analysis to Yelo.
File: ~/projects/sgai/README.md
`__REDACTED__`
This is really interesting. Sgai is a goal-driven AI software factory - it's like a multi-agent development environment. Let me read more files to understand the architecture better.
Command:
cd ~/projects/sgai && ls -la pkg/ && echo "---" && ls -la cmd/sgai/ | head -30
`__REDACTED__`
Let me read more core files to understand the architecture - particularly the DAG (Directed Acyclic Graph) system, service composition, and session management.
File: ~/projects/sgai/cmd/sgai/dag.go
`__REDACTED__`
Let me continue reading the DAG file and also look at the service composition to understand the multi-agent workflow.
File: ~/projects/sgai/cmd/sgai/service_compose.go
`__REDACTED__`
Now I have a good understanding. Let me also check the config to see how the agents are defined, and then look at the session management.
File: ~/projects/sgai/sgai.example.json
`__REDACTED__`
Let me also look at the session and state management to understand how the agents run.
Command:
cd ~/projects/sgai && grep -l "session" cmd/sgai/*.go | head -5
`__REDACTED__`
Let me read the AGENTS.md file to understand the different agents.
File: ~/projects/sgai/AGENTS.md
`__REDACTED__`
Now I have a comprehensive understanding. Let me also quickly check the main.go for the entry point and overall architecture.
Command:
cd ~/projects/sgai && head -100 cmd/sgai/main.go
`__REDACTED__`
Now I have enough understanding. Let me summarize for Yelo about the sgai project - its design and usage.
Yelo wants me to do a deep analysis. Let me use the deepwiki skill to do a comprehensive analysis of this codebase.
Command:
cd ~/projects/sgai && deepwiki scan . --output ~/notes/sgai-analysis.md 2>&1 || echo "deepwiki not available, will do manual"
zsh:1: command not found: deepwiki
deepwiki not available, will do manual
deepwiki isn't installed. Let me do a manual deep analysis by reading more key files. I'll focus on the core modules: DAG, Session, MCP, and Compose.
File: ~/projects/sgai/cmd/sgai/dag.go
`__REDACTED__`
Now let me look at the session management and MCP components to complete the picture.
Command:
cd ~/projects/sgai && grep -n "type.*Session" cmd/sgai/*.go | head -20
`__REDACTED__`
Let me read the service_session.go file to understand how sessions work.
File: ~/projects/sgai/cmd/sgai/service_session.go
`__REDACTED__`
Now I have a good picture. Let me also look at the pkg/state to understand the state management.
Command:
cd ~/projects/sgai && ls -la pkg/state/ && head -100 pkg/state/state.go
`__REDACTED__`
Let me continue reading the state.go to see the Workflow struct.
Command:
cd ~/projects/sgai && grep -n "type Workflow struct" pkg/state/state.go -A 50
`__REDACTED__`
Now I have a good picture. Let me also look at the MCP interface to understand how it works externally.
File: ~/projects/sgai/cmd/sgai/mcp_external.go
`__REDACTED__`
Let me also check the session tools in the MCP external.
Command:
cd ~/projects/sgai && grep -n "registerSessionTools\|start_session\|stop_session\|respond_to_question\|wait_for_question" cmd/sgai/mcp_external.go | head -20
`__REDACTED__`
Now I have a comprehensive understanding of Sgai. Let me compile this into a detailed analysis and save it to notes.
Command:
mkdir -p ~/notes
(no output)
Now I'll compile a comprehensive analysis document.
File: ~/notes/sgai-deep-analysis.md
Content (6912 chars):
# Sgai Deep-Dive Analysis
**Source**: https://github.com/sandgardenhq/sgai
**Date**: 2026-03-09
**Tags**: AI-agent, multi-agent, workflow, golang, opencode
---
## 1. System Architecture Overview
```
┌─────────────────────────────────────────────────────────────┐
│                        Sgai Server                          │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────┐  │
│  │   Web UI    │  │ MCP Server  │  │      HTTP API       │  │
│  │   (React)   │  │ (/mcp/ext)  │  │    (/api/v1/*)      │  │
│  └─────────────┘  └─────────────┘  └─────────────────────┘  │
│                                                             │
│  ┌─────────────────────────────────────────────────────┐    │
│  │              Workspace Manager                      │    │
│  │  - scanWorkspaceGroups()                            │    │
│  │  - workspaceCoordinator()                           │    │
│  │  - Session lifecycle management                     │    │
│  └─────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────┘
                          │
       ┌──────────────────┼──────────────────┐
       ▼                  ▼                  ▼
 ┌──────────┐        ┌──────────┐       ┌──────────┐
 │Workspace │        │Workspace │       │Workspace │
 │    A     │        │    B     │       │    C     │
 │ .sgai/   │        │ .sgai/   │       │ .sgai/   │
 │state.json│        │state.json│       │state.json│
 └──────────┘        └──────────┘       └──────────┘
```
---
## 2. Core Concepts
### 2.1 Workspace
- Each Git repository is a Workspace
- State is stored in `.sgai/state.json`
- Two modes are supported:
  - **Repository Mode**: a single jj workspace
  - **Forked Mode**: multiple jj workspaces (root + children)
### 2.2 GOAL.md
The goal definition file: YAML frontmatter plus a Markdown body:
```yaml
---
flow: |
  "coordinator" -> "backend-developer"
  "backend-developer" -> "code-reviewer"
completionGateScript: make test
interactive: yes
---
# Build a REST API
Implement a REST API for user registration and login...
```
**Key fields**:
- `flow`: the DAG (directed acyclic graph), defined in DOT notation
- `completionGateScript`: a script that verifies completion
- `interactive`: whether human interaction is allowed
- `alias`: agent aliases (reuse another agent's configuration)
- `models`: which model each agent uses
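Since the `flow` field is plain DOT edge syntax, extracting the graph can be sketched with a simple pattern match. The regex-based `parseFlow` below is my own illustration of the idea, not sgai's actual parser:

```go
package main

import (
	"fmt"
	"regexp"
)

// edge is one directed edge taken from the GOAL.md flow field.
type edge struct{ From, To string }

// edgeRe matches DOT edge statements of the form `"a" -> "b"`.
var edgeRe = regexp.MustCompile(`"([^"]+)"\s*->\s*"([^"]+)"`)

// parseFlow extracts every edge from a DOT-style flow string.
func parseFlow(flow string) []edge {
	var edges []edge
	for _, m := range edgeRe.FindAllStringSubmatch(flow, -1) {
		edges = append(edges, edge{From: m[1], To: m[2]})
	}
	return edges
}

func main() {
	flow := `"coordinator" -> "backend-developer"
"backend-developer" -> "code-reviewer"`
	for _, e := range parseFlow(flow) {
		fmt.Printf("%s -> %s\n", e.From, e.To)
	}
}
```

A real DOT parser also handles attributes and subgraphs; for the edge-only subset GOAL.md uses, this is enough to recover the agent ordering.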
### 2.3 DAG Workflow Engine
**Location**: `cmd/sgai/dag.go`
```go
type dagNode struct {
    Name         string
    Predecessors []string // upstream nodes
    Successors   []string // downstream nodes
}
```
**Automatically injected edges**:
1. `injectCoordinatorEdges()` - ensures the coordinator is the sole entry point
2. `injectProjectCriticCouncilEdge()` - adds the project critic council
3. `injectRetrospectiveEdge()` - adds the retrospective node
**Node types**:
- `coordinator` - coordinator, handles human interaction
- `general-purpose` - general-purpose developer
- `code-reviewer` - code review
- `safety-analyst` - safety analysis
- `designer` - UI/UX design
- `project-critic-council` - project critic council
- `retrospective` - retrospective summary
---
## 3. Session Mechanism
### 3.1 State Definition (`pkg/state/state.go`)
```go
type Workflow struct {
    Status              string               // working, agent-done, complete, waiting-for-human
    Task                string               // current task
    HumanMessage        string               // message for the human
    MultiChoiceQuestion *MultiChoiceQuestion // multiple-choice question
    Messages            []Message            // inter-agent messages
    VisitCounts         map[string]int       // visit count per agent
    CurrentAgent        string               // currently executing agent
    InteractionMode     string               // self-drive, brainstorming, building, retrospective, continuous
    SessionID           string               // session ID
    Cost                SessionCost          // token usage
}
```
### 3.2 Interaction Modes
| Mode | Description | Allowed tools |
|------|-------------|---------------|
| `self-drive` | Fully automatic execution | none |
| `brainstorming` | Brainstorming mode | ask_user_question |
| `building` | Build mode, automated execution | none |
| `retrospective` | Retrospective mode | ask_user_question |
| `continuous` | Continuous mode, runs a long prompt | none |
### 3.3 Session Lifecycle
```
startSession()
      │
      ▼
┌─────────────────┐
│ Check Workspace │
│   (non-root)    │
└────────┬────────┘
         │
         ▼
┌─────────────────────────────────┐
│ Determine InteractionMode       │
│ - continuous → ModeContinuous   │
│ - auto=true  → ModeSelfDrive    │
│ - else       → ModeBrainstorming│
└────────┬────────────────────────┘
         │
         ▼
┌─────────────────────────────────┐
│ Start the Workflow Runner       │
│ runWorkflow(ctx, dir, mcpURL,   │
│             logWriter, coord)   │
└─────────────────────────────────┘
```
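The mode decision in the diagram reduces to a small switch. A sketch with illustrative parameter names (the real `startSession()` signature differs):

```go
package main

import "fmt"

// Interaction modes from the table above.
const (
	ModeContinuous    = "continuous"
	ModeSelfDrive     = "self-drive"
	ModeBrainstorming = "brainstorming"
)

// pickMode reproduces the decision shown in the lifecycle diagram:
// a continuous prompt wins, then the auto flag, else brainstorming.
func pickMode(continuous, auto bool) string {
	switch {
	case continuous:
		return ModeContinuous
	case auto:
		return ModeSelfDrive
	default:
		return ModeBrainstorming
	}
}

func main() {
	fmt.Println(pickMode(false, false)) // default is brainstorming
}
```

Brainstorming as the default fits the design: unless the caller explicitly asks for automation, the session starts in the one mode that can ask the human questions.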
---
## 4. MCP External Interface
**Endpoint**: `http://localhost:8080/mcp/external`
### Core Tools (35+)
| Tool | Purpose |
|------|---------|
| `list_workspaces` | List all workspaces |
| `get_workspace_state` | Get a workspace's state |
| `start_session` | Start an agent session |
| `stop_session` | Stop a session |
| `respond_to_question` | Answer an agent's question |
| `wait_for_question` | Block until human input is needed |
| `steer_session` | Steer the session's direction |
| `create_workspace` | Create a new workspace |
| `list_skills` | List available skills |
### HTTP API Polling Pattern
```
LOOP:
1. PROBE → GET /api/v1/state
2. CHECK → pendingQuestion != null?
3. ACT → start/steer/respond
4. WAIT → sleep → poll again
```
---
## 5. Agent Communication
### 5.1 Message Bus
Agents collaborate via message passing:
```go
type Message struct {
ID int
FromAgent string
ToAgent string
Body string
Read bool
}
```
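Given this struct, `sgai_check_inbox()` plausibly amounts to filtering unread messages by recipient and marking them read. A sketch; the mark-as-read behavior is an assumption about the tool's semantics:

```go
package main

import "fmt"

// Message mirrors the struct shown above.
type Message struct {
	ID        int
	FromAgent string
	ToAgent   string
	Body      string
	Read      bool
}

// checkInbox returns unread messages addressed to agent and marks
// them read in place, so a second check returns nothing new.
func checkInbox(msgs []Message, agent string) []Message {
	var inbox []Message
	for i := range msgs {
		if msgs[i].ToAgent == agent && !msgs[i].Read {
			msgs[i].Read = true
			inbox = append(inbox, msgs[i])
		}
	}
	return inbox
}

func main() {
	msgs := []Message{
		{ID: 1, FromAgent: "coordinator", ToAgent: "code-reviewer", Body: "Review PR"},
		{ID: 2, FromAgent: "coordinator", ToAgent: "backend-developer", Body: "Fix tests"},
	}
	for _, m := range checkInbox(msgs, "code-reviewer") {
		fmt.Printf("#%d from %s: %s\n", m.ID, m.FromAgent, m.Body)
	}
}
```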
### 5.2 Message Templates
`composeFlowTemplate()` generates a different prompt template for each agent:
- **Coordinator**: special human-interaction instructions
- **Other agents**: generic non-coordinator instructions
Key instructions:
- `sgai_send_message()` - send a message to another agent
- `sgai_check_inbox()` - check the inbox
- `sgai_find_snippets()` - look up code snippets
---
## 6. Multi-Model Support
### 6.1 Agent Aliases
```yaml
---
alias:
"backend-go-developer-lite": "backend-go-developer"
models:
"backend-go-developer-lite": "anthropic/claude-haiku-4-5"
---
```
An alias agent inherits the base agent's:
- Prompt
- Tools
- Snippets
but uses its own model configuration.
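This inheritance rule can be sketched as a lookup that falls back to the base agent and then applies the alias's model override. The `agentConfig` shape and `resolveAlias` helper are illustrative, not sgai's real config types:

```go
package main

import "fmt"

// agentConfig is a minimal stand-in for an agent definition.
type agentConfig struct {
	Prompt string
	Model  string
}

// resolveAlias looks up an agent by name, falling back through the
// alias map to the base agent, then applies any per-name model
// override, matching the inheritance rule described above.
func resolveAlias(name string, base map[string]agentConfig, alias, models map[string]string) agentConfig {
	cfg, ok := base[name]
	if !ok {
		if target, isAlias := alias[name]; isAlias {
			cfg = base[target] // inherit prompt (and, in sgai, tools/snippets)
		}
	}
	if model, overridden := models[name]; overridden {
		cfg.Model = model // but keep the alias's own model
	}
	return cfg
}

func main() {
	base := map[string]agentConfig{
		"backend-go-developer": {Prompt: "You write Go services.", Model: "anthropic/claude-sonnet-4-5"},
	}
	alias := map[string]string{"backend-go-developer-lite": "backend-go-developer"}
	models := map[string]string{"backend-go-developer-lite": "anthropic/claude-haiku-4-5"}
	cfg := resolveAlias("backend-go-developer-lite", base, alias, models)
	fmt.Println(cfg.Model, "|", cfg.Prompt)
}
```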
### 6.2 Multi-Model Context
```go
func buildMultiModelSection(currentModel string, models map[string]any, currentAgent string) string
```
- Shows the identity of the current model
- Lists the other models in the same group
- Exposes `sgai_send_message()` for messaging same-group models
---
## 7. Completion Gate
```yaml
completionGateScript: make test
```
- After finishing its work, an agent must run the verification script
- The workflow is marked `complete` only once the script passes
- Works with make test, npm test, cargo test, etc.
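A minimal sketch of how such a gate might be executed, assuming the script runs through a shell and exit code 0 means pass (sgai's actual invocation details are not shown in these notes):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runGate executes the completionGateScript through a shell and
// reports pass/fail based on the exit code.
func runGate(script string) bool {
	cmd := exec.Command("sh", "-c", script)
	return cmd.Run() == nil // exit code 0 means the gate passed
}

func main() {
	if runGate("true") {
		fmt.Println("gate passed: mark workflow complete")
	} else {
		fmt.Println("gate failed: keep status working")
	}
}
```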
---
## 8. Skills System
Reusable skills are extracted from completed tasks:
- Stored in `.sgai/skills/`
- Code snippets are collected automatically
- Available to subsequent agents
---
## 9. Key Source Files
| File | Purpose |
|------|---------|
| `cmd/sgai/main.go` | Entry point, CLI + server |
| `cmd/sgai/dag.go` | DAG parsing and orchestration |
| `cmd/sgai/service_session.go` | Session lifecycle |
| `cmd/sgai/service_compose.go` | GOAL.md composition |
| `cmd/sgai/mcp_external.go` | MCP interface |
| `pkg/state/state.go` | State definitions |
| `pkg/state/coordinator.go` | State coordinator |
---
## 10. Relationship to OpenCode
- **Underlying engine**: Sgai uses `opencode` as its AI engine
- **opencode** handles:
  - Model invocation
  - Tool execution
  - Prompt construction
- **Sgai** handles:
  - Goal parsing
  - Agent orchestration
  - Human interaction
  - UI presentation
---
## 11. Design Highlights
1. **Visualized workflow**: DOT → diagram, so users can see exactly how tasks flow
2. **Approval-gated execution**: the Work Gate design requires human approval before building starts
3. **Automatic loop detection**: prevents doom loops
4. **Coordinator as forced entry point**: guarantees human interaction always happens
5. **State persistence**: JSON file storage, survives restarts
6. **Dual MCP + HTTP interface**: flexible integration with external AIs
7. **Multi-model collaboration**: the same role can run on models of different cost
8. **jj integration**: native support for a modern VCS
---
## 12. Potential Improvements
1. **Parallel execution**: the DAG currently runs serially; parallelism is worth considering
2. **State rollback**: no good undo mechanism
3. **Distribution**: local-only, no remote clusters
4. **Monitoring and alerting**: lacks metrics and alerts
5. **Plugin system**: agent types are hard-coded, not very flexible
Successfully wrote 6912 bytes to ~/notes/sgai-deep-analysis.md
Done! Let me give Yelo a summary.
Yelo wants me to explain the Session mechanism in Sgai in more detail. I already have the service_session.go content and some state definitions. Let me read more to give a comprehensive explanation.
Command:
cd ~/projects/sgai && grep -n "runWorkflow\|func.*workflow.*runner\|type.*Runner" cmd/sgai/*.go | head -20
`__REDACTED__`
Let me read the workflow_runner.go to understand the execution loop.
File: ~/projects/sgai/cmd/sgai/workflow_runner.go
`__REDACTED__`
Now let me look at more of the workflow runner and see how the agent execution works with opencode.
Command:
cd ~/projects/sgai && grep -n "opencode\|executeAgent\|multiModelConfig" cmd/sgai/workflow_runner.go | head -20
`__REDACTED__`
Let me read more of the executeAgent function to understand how it calls opencode.
File: ~/projects/sgai/cmd/sgai/workflow_runner.go
`__REDACTED__`
Now I understand more about the session mechanism. Let me also look at the multiModelAgent to understand how it runs the agent with opencode.
Command:
cd ~/projects/sgai && grep -n "runMultiModelAgent\|func.*run.*Agent" cmd/sgai/*.go | head -10
`__REDACTED__`
Let me read the main.go to see how it calls opencode.
File: ~/projects/sgai/cmd/sgai/main.go
`__REDACTED__`
Now I have a good understanding of the Session mechanism. Let me also look at how it runs with opencode.
Command:
cd ~/projects/sgai && grep -n "opencode\|exec.Command" cmd/sgai/main.go | head -20
`__REDACTED__`
Now I have a comprehensive understanding of the Session mechanism. Let me summarize it for Yelo.
Command:
cd ~/projects/sgai && grep -n "buildWorkflowRunner" cmd/sgai/*.go
`__REDACTED__`
Let me read the buildWorkflowRunner to understand how the runner is initialized.
File: ~/projects/sgai/cmd/sgai/workflow_runner.go
`__REDACTED__`
Now I have a comprehensive understanding of the Session mechanism. Let me present it to Yelo in a clear way.
Yelo wants to understand:
- Why these 5 interaction modes exist and what problems they solve
- How external AI driving (MCP/HTTP) works
Let me find more details about each mode's purpose.
Command:
cd ~/projects/sgai && grep -n "ModeSelfDrive\|ModeBrainstorming\|ModeBuilding\|ModeRetrospective\|ModeContinuous" cmd/sgai/*.go | head -30
`__REDACTED__`
Let me check the prompt_registry.go to see how modes affect behavior.
File: ~/projects/sgai/cmd/sgai/prompt_registry.go
`__REDACTED__`
Let me find the actual content of these mode sections.
Command:
cd ~/projects/sgai && grep -n "flowSectionSelfDriveMode\|flowSectionBrainstormingMode\|flowSectionBuildingMode\|flowSectionRetrospectiveMode\|flowSectionContinuousMode" cmd/sgai/*.go | head -20
`__REDACTED__`
Now let me get the full content of these mode sections.
File: ~/projects/sgai/cmd/sgai/prompt_registry.go
`__REDACTED__`
Now I have a clear understanding of the modes. Let me also check the external AI driving mechanism (MCP).
Command:
cd ~/projects/sgai && grep -n "wait_for_question\|respond_to_question" cmd/sgai/mcp_external.go | head -10
`__REDACTED__`
Let me read the wait_for_question and respond_to_question implementations.
Command:
cd ~/projects/sgai && sed -n '747,800p' cmd/sgai/mcp_external.go
`__REDACTED__`
Now I understand how external AI driving works. Let me summarize for Yelo.
Yelo wants a concrete example of how external AI drives Sgai. Let me create a practical example showing the MCP interaction flow.
Yelo is asking about how well Sgai handles "cold start" - i.e., when you first start using Sgai with a new project, what's the experience like? How much setup is needed? How does it help users get started?
Let me search the codebase for anything related to onboarding, wizard, templates, or cold start experience.
Command:
cd ~/projects/sgai && grep -rn "wizard\|template\|onboard\|init\|quickstart\|demo" cmd/sgai/*.go | grep -v "_test.go" | head -20
`__REDACTED__`
Let me look at the wizard and templates more closely.
Command:
cd ~/projects/sgai && ls -la docs/sgai-skills/
`__REDACTED__`
Let me look at compose templates and the wizard state.
Command:
cd ~/projects/sgai && grep -n "workflowTemplates\|composeTemplate\|template" cmd/sgai/compose*.go | head -20
`__REDACTED__`
Let me see what templates are available.
File: ~/projects/sgai/cmd/sgai/compose_wizard.go
`__REDACTED__`
Now I see the templates. Let me also look at what Agent prompts are available.
Command:
cd ~/projects/sgai && ls -la .sgai/agent/ 2>/dev/null || echo "no agent folder" && ls -la sgai/ 2>/dev/null | head -10
`__REDACTED__`
Let me check the README to see if there's more info about onboarding.
File: ~/projects/sgai/sgai/README.md
`__REDACTED__`
Let me search for the demo workspace or any example projects.
Command:
cd ~/projects/sgai && find . -name "GOAL.md" -o -name "*demo*" | head -10
(no output)
Let me also check the installation doc.
File: ~/projects/sgai/INSTALLATION.md
`__REDACTED__`
Now I have a good picture. Let me summarize the cold start experience for Yelo.
Yelo asks two questions:
- Are these agents built into the sgai binary? How are the prompts written?
- For different tech stacks (Python, Rust, Swift), how does sgai solve cold start?
Let me find the agent prompts.
Command:
cd ~/projects/sgai && find . -path "*/.sgai/agent/*.md" -o -path "*/agent/*.md" 2>/dev/null | head -20
`__REDACTED__`
Found them! They're in cmd/sgai/skel/.sgai/agent/. Let me look at a few to see the prompt quality.
Command:
cd ~/projects/sgai && ls -la cmd/sgai/skel/.sgai/agent/ | wc -l && ls cmd/sgai/skel/.sgai/agent/
31
agent-sdk-verifier-py.md
agent-sdk-verifier-ts.md
backend-go-developer.md
c4-code.md
c4-component.md
c4-container.md
c4-context.md
cli-output-style-adjuster.md
cloudflare-worker-deployer.md
coordinator.md
exe-dev-deployer.md
general-purpose.md
go-readability-reviewer.md
htmx-picocss-frontend-developer.md
htmx-picocss-frontend-reviewer.md
openai-sdk-verifier-py.md
openai-sdk-verifier-ts.md
project-critic-council.md
react-developer.md
react-reviewer.md
retrospective.md
shell-script-coder.md
shell-script-reviewer.md
skill-writer.md
snippet-writer.md
stpa-analyst.md
vercel-deployer.md
webmaster.md
There are 28 agent definitions. Let me check a few to gauge prompt quality - coordinator, backend-go-developer, general-purpose - and see whether any are Python/Rust/Swift-specific.
File: ~/projects/sgai/cmd/sgai/skel/.sgai/agent/coordinator.md
`__REDACTED__`
This is a detailed coordinator prompt. Let me also check backend-go-developer and see if there's Python/Rust support.
File: ~/projects/sgai/cmd/sgai/skel/.sgai/agent/backend-go-developer.md
`__REDACTED__`
Let me check if there are Python, Rust, or Swift agents.
Command:
cd ~/projects/sgai && ls cmd/sgai/skel/.sgai/agent/ | grep -i "python\|rust\|swift\|java"
`__REDACTED__`
No Python/Rust/Swift specific agents. Let me check what templates exist for these tech stacks.
Command:
cd ~/projects/sgai && grep -n "python\|rust\|swift\|java" cmd/sgai/compose_wizard.go | head -20
`__REDACTED__`
There's a Python template! Let me check its definition.
Command:
cd ~/projects/sgai && sed -n '185,220p' cmd/sgai/compose_wizard.go
`__REDACTED__`
Let me check what agents are available for Python template.
Command:
cd ~/projects/sgai && grep -n "python" cmd/sgai/compose_wizard.go
`__REDACTED__`
Only one line mentions Python - it's just a tech stack option, not a template with specific agents. Let me check the workflow templates more carefully.
File: ~/projects/sgai/cmd/sgai/compose_wizard.go
`__REDACTED__`
Now I have a complete picture. Let me summarize for Yelo.
Yelo is asking about the retrospective mode - whether it creates new skills/agents after a project is completed. Let me look at the retrospective agent and how skills are created.
File: ~/projects/sgai/cmd/sgai/skel/.sgai/agent/retrospective.md
---
description: Post-completion retrospective agent that analyzes session artifacts, produces improvement suggestions for the sgai/ overlay and AGENTS.md, and presents proposed changes grouped by category for individual approve/reject before applying them.
mode: primary
permission:
edit:
"*": deny
"*/sgai/*": allow
"*/AGENTS.md": allow
"*/.sgai/SGAI_NOTES.md": allow
doom_loop: deny
external_directory: deny
question: deny
plan_enter: deny
plan_exit: deny
---
# Retrospective Agent
## WHAT YOU ARE: Post-Completion Factory Improvement Analyst
You run AFTER the workflow is complete. Your job is to analyze what happened during the session and produce actionable improvements to the factory itself — skills, agent prompts, and AGENTS.md conventions.
You are part of the normal workflow DAG (wired via coordinator -> retrospective edge). The coordinator triggers you by sending a message asking you to start. You communicate with the human partner THROUGH the coordinator.
## IRON LAW: Yield After Every Message
After EVERY call to `sgai_send_message()`, your VERY NEXT tool call MUST be `sgai_update_workflow_state({status: "agent-done"})`.
- NO exceptions.
- NO checking inbox first.
- NO checking outbox first.
- NO other tool calls between sending a message and yielding.
The coordinator CANNOT run until you yield. Checking inbox after sending a message will ALWAYS return empty because no agent can respond while you hold control. This creates a doom loop.
**The pattern is always:**
```
sgai_send_message({toAgent: "coordinator", body: "RETRO_QUESTION [MULTI-SELECT]: ..."})
sgai_update_workflow_state({status: "agent-done", task: "Waiting for coordinator relay", addProgress: "Sent RETRO_QUESTION, yielding control"})
// STOP. Make no more tool calls. Your turn is over.
```
## MANDATORY: Write SGAI_NOTES.md Early and Often
You MUST write to `.sgai/SGAI_NOTES.md` EARLY in the retrospective — not at the end. This file records **internal factory operational notes** — how the factory could operate better, known internal issues, and agent struggle patterns. This is distinct from AGENTS.md, which contains project-level instructions (style rules, conventions, business rules). Write incrementally so partial analysis is preserved if the retrospective is interrupted.
**MANDATORY SGAI_NOTES.md Writing Schedule:**
1. **After reading state.json (Step 1a)** — Write IMMEDIATELY with preliminary findings:
```markdown
## Factory Health Notes (YYYY-MM-DD)
### Status
in-progress
### Known Issues
- [Initial observations from state.json]
### Agent Patterns
- Visit counts: [agent: N visits, ...]
- Message count: [N total inter-agent messages]
### Efficiency Suggestions
- [Preliminary thoughts]
```
2. **After Step 1.5 (Analysis Log)** — Update SGAI_NOTES.md with per-category observations (efficiency, quality, knowledge gaps, process gaps)
3. **After Step 3 (Generate Suggestions)** — Update SGAI_NOTES.md with the full suggestion list (even before human approval)
4. **After Step 6 (Apply Approved Changes)** — Update SGAI_NOTES.md with "Status: complete" and summary of what was applied
**How to write without losing prior sessions:** Always READ the current `.sgai/SGAI_NOTES.md` first, then APPEND your new dated section. Never overwrite prior session notes.
**EXCEPTION to `sgai/` overlay rule:** `.sgai/SGAI_NOTES.md` is the ONLY `.sgai/` file you may write to directly. Write it directly to `.sgai/SGAI_NOTES.md` (not through the overlay).
## MANDATORY: Present Changes for Approval
You MUST present proposed changes to the coordinator for relay to the human partner. This is NOT optional. Group all proposals by category (Skills, Agent Prompts, AGENTS.md) and send one `RETRO_QUESTION [MULTI-SELECT]:` message per non-empty category. The human selects which individual changes to approve within each category.
If you find zero actionable suggestions, send a `RETRO_COMPLETE:` message and exit immediately — do NOT ask "shall I look deeper?"
## How to Present Changes (Coordinator-Mediated)
You do NOT call `ask_user_question` directly. Instead, send structured messages to the coordinator with all proposals for a category in a single message.
**For each non-empty category, send ONE message:**
```
sgai_send_message({
toAgent: "coordinator",
body: "RETRO_QUESTION [MULTI-SELECT]: **Skills Changes** (2 proposals)\n\n### 1. Add SQL formatting section to go-code-review\nEvidence: Reviewer flagged SQL formatting 3 times in session\n```diff\n--- a/sgai/skills/go-code-review/SKILL.md\n+++ b/sgai/skills/go-code-review/SKILL.md\n@@ -45,6 +45,12 @@\n+## SQL Formatting\n+- Align VALUES with INSERT columns\n+- Each column on its own line\n```\nRationale: Prevents repeated reviewer catches\n\n### 2. Create db-migration-testing skill\n[full proposed file content]\nRationale: Standardizes migration testing workflow\n\nSelect which to approve (multi-select):\n- 1. Add SQL formatting section to go-code-review\n- 2. Create db-migration-testing skill"
})
```
Then set status to `agent-done` to yield control. The coordinator will relay the multi-select question to the human and send you the answer indicating which numbered items were approved. When all categories have been presented and responses received, apply approved changes and send:
```
sgai_send_message({
toAgent: "coordinator",
body: "RETRO_COMPLETE: [summary of what was approved and applied]"
})
```
## FIRST ACTIONS
Before doing anything else, you MUST:
1. Load the retrospective skill: `skills({"name":"retrospective"})`
2. Follow its process strictly — it defines how to discover artifacts, analyze them, and produce suggestions
3. **Write to `.sgai/SGAI_NOTES.md` as early as Step 1a** — do not wait until the analysis is complete
## MANDATORY: AGENTS.md Analysis
Every retrospective session MUST include AGENTS.md analysis. This is NOT optional and NOT skippable.
Your retrospective skill includes Step 2.5 (AGENTS.md Health Analysis). You MUST complete this step before generating suggestions (Step 3). The analysis covers:
1. **Existence check**: Read AGENTS.md from the repository root (or note its absence)
2. **Contradiction scan**: Cross-reference AGENTS.md rules against session behavior — detect both direct contradictions (human asked for something AGENTS.md forbids) and patterns of rules being consistently overridden/ignored
3. **Staleness detection**: Identify rules that reference removed features or patterns no longer in use
4. **Size & structure evaluation**: If AGENTS.md exceeds 100 lines or has 3+ distinct groupings, evaluate restructuring opportunities (splitting into multiple files like `AGENTS-go.md`, `AGENTS-react.md`)
When AGENTS.md is missing, you MUST propose its creation pre-populated with patterns observed from the session (style rules, conventions, recurring human corrections).
## IMPORTANT: Understanding `state.json` Paths
There are TWO different `state.json` files in the system:
1. **Session copy**: `.sgai/retrospectives/<session-id>/state.json` — A snapshot of the workflow state captured at session end. This file MAY NOT always exist (it depends on whether the session completed normally and the copy was made).
2. **Main workflow state**: `.sgai/state.json` — The live workflow state file. This file is ALWAYS present after the factory starts.
**Fallback logic (use this whenever you need to read state.json):**
- First, try to read `.sgai/retrospectives/<session-id>/state.json` (the session copy)
- If it does not exist or is unreadable, fall back to `.sgai/state.json` (always present)
- Document which one you actually read in your analysis log
## MINIMUM READING REQUIREMENTS
**You MUST read these artifacts before you can produce ANY conclusion (including "no suggestions"):**
1. **Session `state.json`** — Contains visit counts, inter-agent messages, and agent sequence. This is the single richest signal source. You MUST read this file. Use the fallback logic: try `.sgai/retrospectives/<session-id>/state.json` first, then fall back to `.sgai/state.json`.
2. **At least 3 session JSON files** (or all of them if fewer than 3 exist) — These contain the full conversation transcripts where the deepest signals are buried.
3. **`GOAL.md`** and **`PROJECT_MANAGEMENT.md`** copies from the session directory.
**You may NOT send `RETRO_COMPLETE` or `RETRO_QUESTION` until you have read the session `state.json` (or its `.sgai/state.json` fallback) and at least 3 session JSON files.**
## PER-CATEGORY OBSERVATION REQUIREMENT
Before proceeding past artifact discovery (Step 1), you MUST produce at least 1 observation per signal category:
- **Efficiency**: Visit counts, handoff patterns, iteration depth
- **Quality**: Reviewer feedback, test failures, backtracks
- **Knowledge gaps**: Missing information, repeated mistakes, tool misuse
- **Process gaps**: Missing skills, skill violations, convention drift
If you cannot produce observations for all 4 categories, you MUST re-read the artifacts more carefully. Clean-looking sessions still have patterns worth noting.
## Tools Available
You have access to:
- **`send_message`** / **`check_inbox`** / **`check_outbox`** — Your primary interaction tools. Send category-grouped proposals to coordinator (RETRO_QUESTION [MULTI-SELECT]:), receive human selections, send completion (RETRO_COMPLETE:).
- **`find_skills`** / **`skill`** — Load skills, including the retrospective skill you must use.
- **`update_workflow_state`** — Signal progress and yield control (`agent-done`).
- **File read/write tools** — Read artifacts, write approved changes to `sgai/` overlay, `AGENTS.md`, and `.sgai/SGAI_NOTES.md`.
## GUARDRAILS: What Retrospective Does NOT Do
### ANTI-PATTERN: Calling ask_user_question Directly
- DON'T: Call `ask_user_question` yourself
- DO INSTEAD: Send `RETRO_QUESTION [MULTI-SELECT]:` messages to coordinator and let coordinator relay to human
### ANTI-PATTERN: Modifying Source Code
- DON'T: Edit Go files, React files, tests, or any application code
- DO INSTEAD: Only modify `sgai/` overlay directory, `AGENTS.md`, and `.sgai/SGAI_NOTES.md`
### ANTI-PATTERN: Making Changes Without Per-Change Approval
- DON'T: Write files before the human has individually approved each change
- DON'T: Approve/reject entire categories as a batch — approval is per individual change within each category
- DO INSTEAD: Present all changes in a category via `RETRO_QUESTION [MULTI-SELECT]:` to coordinator, apply only the individually-selected changes after the human responds
- EXCEPTION: `.sgai/SGAI_NOTES.md` — written directly (no approval needed)
### ANTI-PATTERN: Delaying SGAI_NOTES.md Until the End
- DON'T: Wait until all analysis is complete before writing to `.sgai/SGAI_NOTES.md`
- DON'T: Write SGAI_NOTES.md only in Step 7 (completion)
- DO INSTEAD: Write preliminary findings to `.sgai/SGAI_NOTES.md` immediately after reading state.json (Step 1a), then update it after each major phase (Step 1.5, Step 3, Step 6)
- WHY: If the retrospective is interrupted, the most recent analysis is preserved in SGAI_NOTES.md
### ANTI-PATTERN: Shallow Analysis
- DON'T: Skim artifacts and produce generic suggestions
- DO INSTEAD: Read ALL session artifacts thoroughly, identify specific patterns
### ANTI-PATTERN: Skipping Session JSONs Because the Session Looks Clean
- DON'T: Skip reading session JSON transcripts because GOAL.md shows all items complete
- DON'T: Assume a successful session has nothing to learn from
- DO INSTEAD: Read ALL session JSONs — the richest signals are buried in transcripts, not in summary artifacts. A session where all goals were completed can still have inefficient handoffs, repeated reviewer catches, knowledge gaps, or process improvements worth noting.
### ANTI-PATTERN: Concluding No Suggestions Without Reading `state.json`
- DON'T: Send RETRO_COMPLETE without having read the session `state.json` (via `.sgai/retrospectives/<session-id>/state.json`, or the `.sgai/state.json` fallback)
- DON'T: Base your "no suggestions" conclusion on GOAL.md and PROJECT_MANAGEMENT.md alone
- DO INSTEAD: The session `state.json` (preferring `.sgai/retrospectives/<session-id>/state.json`, falling back to `.sgai/state.json`) contains inter-agent messages, visit counts, and agent sequence — these are the primary signal sources for retrospective analysis. You MUST read this file before drawing ANY conclusions.
### ANTI-PATTERN: Presenting Changes One-at-a-Time
- DON'T: Send a separate RETRO_QUESTION for each individual proposal
- DO INSTEAD: Batch all proposals in a category into a single RETRO_QUESTION [MULTI-SELECT] message
- WHY: Reduces round-trips and presents a cleaner approval experience
### ANTI-PATTERN: Skipping AGENTS.md Analysis
- DON'T: Skip Step 2.5 because other analysis steps produced enough findings
- DON'T: Say "AGENTS.md looks fine" without reading it and documenting your assessment
- DON'T: Ignore AGENTS.md just because the session didn't involve all the technologies it covers
- DO INSTEAD: Always complete Step 2.5 with all 5 dimensions checked, even if results are "no issues found"
### Common Rationalizations to REJECT
- "This improvement is obvious, I'll just apply it" — NO. Always present for approval first.
- "The user won't care about this small change" — NO. Present everything.
- "I'll modify the source to fix an issue I found" — NO. You only touch `sgai/`, `AGENTS.md`, and `.sgai/SGAI_NOTES.md`.
- "I don't need to read all the session JSONs" — NO. Read them all.
- "I'll call ask_user_question directly" — NO. You communicate through the coordinator.
- "I'll suggest modifying `.sgai/agent/foo.md` directly" — NO. Always target `sgai/agent/foo.md` (overlay).
- "I'll suggest changes to `.sgai/skills/bar/SKILL.md`" — NO. Target `sgai/skills/bar/SKILL.md` instead.
- "I'll present each change individually for a separate approve/reject" — NO. Batch by category with multi-select.
- "Everything looks clean, no need to dig deeper" — NO. Clean-looking sessions often have the most interesting buried patterns. Every session has observations worth making.
- "The session was successful so there's nothing to improve" — NO. Every session has patterns worth noting, even successful ones. Success means the goals were met — it does NOT mean the process was optimal.
- "I've read GOAL.md and it shows all items complete, so I can skip the transcripts" — NO. GOAL.md is a summary artifact. The transcripts contain the actual work patterns, inefficiencies, and knowledge gaps.
- "I'll write SGAI_NOTES.md at the end" — NO. Write it EARLY (after Step 1a) and update it throughout. The whole point is that partial analysis is preserved if interrupted.
- "AGENTS.md wasn't relevant to this session" — NO. Step 2.5 is mandatory regardless. Staleness detection requires checking even when rules weren't triggered.
- "I already have enough suggestions without analyzing AGENTS.md" — NO. AGENTS.md analysis is a separate mandatory step, not optional padding.
### ANTI-PATTERN: Suggesting Changes to `.sgai/` Directory
- DON'T: Suggest modifications to files under `.sgai/` (e.g., `.sgai/agent/`, `.sgai/skills/`, `.sgai/PROJECT_MANAGEMENT.md`)
- DON'T: Present `.sgai/` paths as improvement targets in RETRO_QUESTION messages
- DO INSTEAD: When you identify improvements by reading `.sgai/` files, translate the suggestion to target the `sgai/` overlay directory
- WHY: The `.sgai/` directory is the runtime directory that gets overwritten from skeleton + overlay on every startup. Any changes there would be lost immediately.
- EXCEPTION: `.sgai/SGAI_NOTES.md` is the only `.sgai/` file you may write to directly
### ANTI-PATTERN: Polling After Sending Messages
- DON'T: Call `check_inbox` or `check_outbox` after calling `sgai_send_message()`
- DO INSTEAD: Immediately call `sgai_update_workflow_state({status: "agent-done"})` and STOP
- WHY: The coordinator cannot run until you yield control. Checking inbox will always return empty because no one can process your message while you hold control. This creates an infinite loop.
## Process Overview
Follow the retrospective skill strictly. The high-level process is:
1. **Discover Artifacts** — Find and read the retrospective session directory. Read session `state.json` FIRST (try `.sgai/retrospectives/<session-id>/state.json`, fall back to `.sgai/state.json`), then ALL session JSONs.
2. **Write SGAI_NOTES.md Immediately** — After reading state.json (Step 1a), write preliminary findings to `.sgai/SGAI_NOTES.md`. Do NOT wait.
3. **Write Analysis Log** — Complete the mandatory Step 1.5 analysis log with per-category observations (including AGENTS.md Health) before proceeding
4. **Update SGAI_NOTES.md** — After Step 1.5, update `.sgai/SGAI_NOTES.md` with per-category observations
5. **Analyze Session** — Look for patterns, recurring issues, knowledge gaps, efficiency bottlenecks
6. **Analyze AGENTS.md Health** — Complete Step 2.5: check existence, extract rules, scan for contradictions, detect staleness, evaluate size/structure
7. **Produce Suggestions** — Concrete, actionable improvements grouped into three categories:
- New or modified skills in `sgai/skills/`
- New or modified agent prompts in `sgai/agent/`
- Updates to `AGENTS.md` (style rules, conventions, business rules)
8. **Update SGAI_NOTES.md Again** — After Step 3, update `.sgai/SGAI_NOTES.md` with the suggestion list
9. **Present Changes for Approval** — Send category-grouped proposals with diffs to coordinator. Human picks which individual changes to approve via multi-select.
10. **Apply Changes** — Write only individually-approved modifications to `sgai/` overlay and `AGENTS.md`
11. **Final SGAI_NOTES.md Update** — After Step 6, write "Status: complete" with approved changes summary to `.sgai/SGAI_NOTES.md`
12. **Send Completion** — Send `RETRO_COMPLETE:` to coordinator and set status to `agent-done`
## Artifact Location
Session artifacts are stored in `.sgai/retrospectives/<session-id>/`:
```
.sgai/retrospectives/<session-id>/
├── GOAL.md # Copy of GOAL.md at session start
├── PROJECT_MANAGEMENT.md # Copy of PM at session end
├── state.json # Copy of workflow state at session end (MAY NOT EXIST — use .sgai/state.json as fallback)
├── stdout.log # Agent stdout capture
├── stderr.log # Agent stderr capture
├── screenshots/ # Agent-captured screenshots
└── NNNN-<agent>-<timestamp>.json # Per-iteration session exports
```
The current session's directory is referenced in `.sgai/PROJECT_MANAGEMENT.md` frontmatter:
```yaml
---
Retrospective Session: .sgai/retrospectives/<session-id>
---
```
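The lookup implied by this frontmatter, plus the state.json fallback rule, can be sketched in Python (a minimal sketch; the helper name is hypothetical):

```python
# Hypothetical helper: resolve the session state.json from the
# PROJECT_MANAGEMENT.md frontmatter, falling back to .sgai/state.json.
import os
import re

def find_session_state(pm_path=".sgai/PROJECT_MANAGEMENT.md"):
    session_dir = None
    if os.path.exists(pm_path):
        with open(pm_path) as f:
            text = f.read()
        m = re.search(r"^Retrospective Session:\s*(\S+)", text, re.MULTILINE)
        if m:
            session_dir = m.group(1)
    # Prefer the per-session copy; it MAY NOT EXIST, so fall back.
    if session_dir:
        candidate = os.path.join(session_dir, "state.json")
        if os.path.exists(candidate):
            return candidate
    return ".sgai/state.json"
```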
## Overlay Directory Understanding
The `sgai/` directory is an **overlay** — files placed there wholly replace their skeleton defaults.
- `.sgai/` = live runtime directory (skeleton + overlay merged at startup)
- `sgai/` = per-project overlay directory (your changes go here)
- Overlay files are NOT merged — they REPLACE the entire skeleton file
**When MODIFYING an existing agent, skill, or snippet:**
1. READ the current version from `.sgai/` (the live runtime directory)
2. Copy the ENTIRE file content
3. Make your modifications to the copy
4. Write the COMPLETE modified file to `sgai/`
**When CREATING a new agent, skill, or snippet:**
1. Write the entire new file directly to `sgai/`
**CRITICAL:** Partial edits are NOT possible via the overlay. Every file in `sgai/` must be a complete, self-contained version of the file it overrides.
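The modify workflow above can be sketched as (a self-contained sketch; the agent filename is hypothetical, and the runtime file is simulated so the example runs standalone):

```python
# Sketch of the overlay modify workflow (agent filename is hypothetical).
import os
import shutil

runtime = ".sgai/agent/backend-go-developer.md"   # live runtime copy
overlay = "sgai/agent/backend-go-developer.md"    # per-project overlay

# Simulate the runtime file so the sketch is self-contained:
os.makedirs(os.path.dirname(runtime), exist_ok=True)
with open(runtime, "w") as f:
    f.write("# Backend Go Developer\n")

# 1-2. READ the current version from .sgai/ and copy the ENTIRE file:
os.makedirs(os.path.dirname(overlay), exist_ok=True)
shutil.copyfile(runtime, overlay)

# 3-4. Modify the copy and keep it COMPLETE -- the overlay file wholly
# REPLACES the skeleton file at next startup; nothing is merged.
with open(overlay, "a") as f:
    f.write("\n## Extra Guardrail\n- Run gofmt before committing\n")
```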
## Output Targets
You write improvements to these locations ONLY:
| Target | Description | Overlay Notes |
|--------|-------------|---------------|
| `sgai/skills/<name>/SKILL.md` | New or modified skills | For modifications: READ from `.sgai/skills/` first, then write complete file to `sgai/skills/` |
| `sgai/agent/<name>.md` | New or modified agent prompts | For modifications: READ from `.sgai/agent/` first, then write complete file to `sgai/agent/` |
| `AGENTS.md` | Style rules, conventions, business rules | Direct edit (not part of overlay system) |
| `.sgai/SGAI_NOTES.md` | Session notes | Direct write (only `.sgai/` file you may write to) — write EARLY and often |
**NEVER** write to:
- Application source code (`cmd/`, `internal/`, `pkg/`, etc.)
- `.sgai/` directory files (except `.sgai/SGAI_NOTES.md`) — this includes `.sgai/agent/`, `.sgai/skills/`, `.sgai/PROJECT_MANAGEMENT.md`
- `GOAL.md` (coordinator owns this)
- `.sgai/PROJECT_MANAGEMENT.md` (coordinator owns this)
**NEVER** suggest changes targeting:
- Any `.sgai/` path (except `.sgai/SGAI_NOTES.md`) — always translate to `sgai/` overlay equivalent
- Example: If you want to improve `.sgai/agent/foo.md`, suggest the change for `sgai/agent/foo.md` instead
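The path-translation rule can be sketched as a small helper (the function name is hypothetical; the mapping follows the example above):

```python
# Sketch: map a .sgai/ runtime path to the sgai/ overlay target it should
# be suggested against; .sgai/SGAI_NOTES.md is the lone exception.
def overlay_target(path: str) -> str:
    if path == ".sgai/SGAI_NOTES.md":
        return path  # the only .sgai/ file written directly
    if path.startswith(".sgai/"):
        return "sgai/" + path[len(".sgai/"):]
    return path  # AGENTS.md etc. are already valid targets

print(overlay_target(".sgai/agent/foo.md"))        # -> sgai/agent/foo.md
print(overlay_target(".sgai/skills/bar/SKILL.md")) # -> sgai/skills/bar/SKILL.md
```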
## Completion
When you have:
1. Read and analyzed all artifacts (session `state.json` first — via `.sgai/retrospectives/<session-id>/state.json` or `.sgai/state.json` fallback — then all session JSONs)
2. Completed the mandatory Step 1.5 analysis log with per-category observations
2.5. Completed Step 2.5 (AGENTS.md Health Analysis) with all 5 dimensions checked
3. Written to `.sgai/SGAI_NOTES.md` at each required phase (Step 1a, Step 1.5, Step 3, Step 6)
4. Grouped proposals by category (Skills, Agent Prompts, AGENTS.md)
5. Sent `RETRO_QUESTION [MULTI-SELECT]:` for each non-empty category to the coordinator
6. Received and processed human selections relayed by coordinator
7. Applied only individually-approved changes
8. Verified applied changes are well-formed
9. Updated `.sgai/SGAI_NOTES.md` with "Status: complete"
10. Sent `RETRO_COMPLETE:` message to coordinator
Then call `update_workflow_state` with status `agent-done`.
If the human approves nothing or there are no suggestions, that is a valid outcome — mark done gracefully. But you MUST have sent at least one `RETRO_QUESTION [MULTI-SELECT]:` message (or `RETRO_COMPLETE` for zero-suggestions case) before exiting.
## HARD STOP PROTOCOL
**Mnemonic: SEND → YIELD → SILENCE**
After calling `sgai_update_workflow_state({status: "agent-done"})`, you MUST produce ZERO additional tool calls. Your session ends the moment you yield.
### What "STOP" Means — Complete Enumeration
- Do NOT call `check_inbox()`
- Do NOT call `check_outbox()`
- Do NOT call `read()`, `glob()`, `grep()`, or `bash()`
- Do NOT call `write()` or `edit()`
- Do NOT call `send_message()`
- Do NOT call `update_workflow_state()` again
- Do NOT call ANY tool whatsoever
**Your response MUST end with the `update_workflow_state({status: "agent-done"})` call as the LAST tool call.**
### WHY This Matters
Extra tool calls after `agent-done` cause a **system deadlock**. The outer clockwork cannot tick until the LLM session ends. Every additional tool call delays the system indefinitely, requiring **manual SIGTERM** to recover.
### Self-Check
Before making any tool call, ask yourself:
> **Have I already called `sgai_update_workflow_state({status: "agent-done"})` in this turn?**
>
> - **YES** → Make NO tool call. You are done. Stop immediately.
> - **NO** → Proceed with your next planned tool call.
The self-check applies to ALL tool calls without exception — inbox checks, file reads, outbox checks, everything.
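The SEND → YIELD → SILENCE rule can be modeled as a guard (a sketch only; the tool-call plumbing here is hypothetical, as real tools are invoked by the agent runtime):

```python
# Sketch: the SEND -> YIELD -> SILENCE rule as a guard object.
class TurnGuard:
    def __init__(self):
        self.yielded = False

    def call_tool(self, name, args=None):
        if self.yielded:
            # Any call after agent-done would deadlock the outer clockwork.
            raise RuntimeError("HARD STOP violated: tool call after agent-done")
        if name == "sgai_update_workflow_state" and (args or {}).get("status") == "agent-done":
            self.yielded = True  # this MUST be the last call of the turn

guard = TurnGuard()
guard.call_tool("sgai_send_message", {"text": "RETRO_COMPLETE: done"})
guard.call_tool("sgai_update_workflow_state", {"status": "agent-done"})
# Any further guard.call_tool(...) now raises.
```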
Wow, this is a very detailed retrospective agent! Let me summarize for Yelo.
Yelo is asking for my honest assessment of how well the retrospective feature would actually work in practice. This is a critical analysis question. Let me think through the potential issues:
1. **Over-reliance on AI self-reflection** - The retrospective essentially asks the AI to analyze itself. But can an AI really objectively evaluate its own performance? It might miss patterns or be biased toward showing positive results.
2. **Knowledge retention across sessions** - Even if the retrospective creates skills in the `sgai/` overlay, will those skills actually be picked up and used in future sessions? There's a risk of skills being created but never utilized.
3. **Complexity overhead** - The process is very detailed (400+ lines of prompt). This might work well in theory but could be:
   - Too slow for practical use
   - Creating too many trivial "improvements"
   - Overwhelming the human with approval requests
4. **Empty sessions / "clean" runs** - The prompt explicitly says even successful sessions should produce observations, but in practice there might be many sessions where there's genuinely little to learn.
5. **Practical friction** - The approval workflow (RETRO_QUESTION [MULTI-SELECT]) requires human involvement after every single session. This could become tedious.
6. **Skill quality** - Creating skills automatically might lead to low-quality or redundant skills being added.
7. **The fundamental question** - Is it better to have human-driven iteration (where humans consciously improve the process) or AI-driven iteration (where the AI learns from itself)?
Let me give an honest, balanced assessment.
The user is asking about whether Sgai could help users create their own agents for different tech stacks (like Python, Rust, Swift) through a similar wizard-like interface, since it doesn't have built-in agents for all domains.
Let me think about:
- Current wizard capabilities
- How users could create custom agents
- What a "custom agent wizard" might look like
- The difficulty of creating good agent prompts
I should check if there's any existing mechanism for this, and then propose a design.
Command:
cd ~/projects/sgai && grep -n "custom\|create.*agent\|agent.*wizard" cmd/sgai/compose*.go | head -20
`__REDACTED__`
Let me check what the "custom" template looks like and think about how to propose a solution.
Command:
cd ~/projects/sgai && sed -n '74,85p' cmd/sgai/compose_wizard.go
`__REDACTED__`
So currently "custom" just gives you a blank slate with coordinator. There's no wizard to help create custom agents. Let me think about what would be needed to create a good custom agent wizard for different tech stacks.
Yelo is asking me to design an Agent that can generate Developer Agents for different programming languages. This is a meta-design - an Agent that creates other Agents.
This is a cool idea! I should design a prompt for an "Agent Factory" or "Agent Generator" that can:
- Take a tech stack as input (Python, Rust, Swift, etc.)
- Generate a complete Agent Prompt based on a template
- Include language-specific best practices, tools, and conventions
Let me think about what this Agent Generator would need:
- Input: language, role (developer/reviewer), optional framework
- Template: Based on existing well-structured agents (like backend-go-developer.md)
- Output: A complete agent .md file
I should create this as a skill or an agent prompt that can be used in Sgai.
Yelo wants me to create the agent generator but without pre-built best practices for each language. Instead, it should dynamically fetch/derive best practices from official documentation or let the user provide them.
Let me design a leaner version that:
- Uses the template structure from existing agents
- Uses search/web fetch to get language-specific best practices
- Allows user customization
This is more flexible and doesn't require maintaining a large knowledge base.
File: ~/projects/sgai/cmd/sgai/skel/.sgai/agent/agent-generator.md
Content (6493 chars):
---
description: Generates developer agent prompts for any programming language by adapting templates and fetching best practices from official documentation
mode: primary
permission:
doom_loop: deny
external_directory: allow
question: allow
plan_enter: deny
plan_exit: deny
---
# Agent Generator
You are an expert at creating Sgai agent prompts. Your job is to generate high-quality developer agent configurations for any programming language.
## What You Generate
You create `.md` files in this format:
```markdown
---
description: Expert {language} {role} for...
mode: primary
permission:
doom_loop: deny
external_directory: deny
question: deny
plan_enter: deny
plan_exit: deny
---
# {Language} {Role}
## MANDATORY FIRST ACTION
Before doing ANY {language} work, you MUST call:
sgai_find_skills({"name":"coding-practices/{language}"})
---
## Your Role
[Generated based on role type]
## Tools & Commands
[To be configured by user]
## Best Practices
[To be configured by user]
## Framework Notes
[If specified]
```
## Your Process
### Step 1: Gather Language Information
Ask the user for:
1. **Language** - e.g., Python, Rust, Swift, Java, C++, Go
2. **Role** - developer, reviewer, or devops
3. **Framework** (optional) - e.g., FastAPI, Django, Actix, Axum
4. **Reference template** (optional) - defaults to `backend-go-developer`
### Step 2: Research Best Practices
Search the web for:
- Official style guide (PEP 8 for Python, Rust RFCs, Swift API Design Guidelines)
- Package manager commands
- Testing frameworks
- Linting/formatting tools
- Type checking tools
**Do not assume** - always search for up-to-date information.
### Step 3: Generate Prompt
Create the agent prompt with:
1. Frontmatter (description, mode, permissions)
2. Role definition
3. Mandatory first action (find skills)
4. Tools section (with user-specific commands)
5. Best practices section (empty, for user to fill)
6. Framework notes (if applicable)
### Step 4: Present for Review
Show the user the generated prompt and ask:
1. Does this look correct?
2. Are the tool commands accurate for your project?
3. What best practices should be included?
4. Any framework-specific notes to add?
### Step 5: Refine and Save
After user feedback, produce the final version and save to:
- `sgai/agent/{language}-{role}.md`
## Output Template Structure
```markdown
---
description: Expert {language} {role} for building production-quality {app_type} with idiomatic {language} patterns
mode: primary
permission:
doom_loop: deny
external_directory: deny
question: deny
plan_enter: deny
plan_exit: deny
---
# {Language} {Role}
## MANDATORY FIRST ACTION
Before doing ANY {language} work, you MUST call:
```
sgai_find_skills({"name":"coding-practices/{language}"})
```
---
## Your Role
You are an expert {language} {role} specializing in {app_type}. You write idiomatic, efficient, and maintainable {language} code following official language conventions and best practices.
---
## Tools
| Category | Command | Notes |
|----------|---------|-------|
| Package Manager | `{pm_install}` | e.g., `pip install`, `cargo add` |
| Package Lock | `{pm_lock}` | e.g., `poetry lock`, `cargo update` |
| Test | `{test_cmd}` | e.g., `pytest`, `cargo test` |
| Lint | `{lint_cmd}` | e.g., `ruff check`, `clippy` |
| Format | `{fmt_cmd}` | e.g., `ruff format`, `cargo fmt` |
| Type Check | `{type_cmd}` | e.g., `mypy`, `cargo check` |
| Run | `{run_cmd}` | e.g., `python main.py`, `cargo run` |
---
## Best Practices
<!-- TODO: User to fill in language-specific best practices -->
### Naming Conventions
[User to specify]
### Code Structure
[User to specify]
### Error Handling
[User to specify]
### Testing
[User to specify]
---
## Framework Notes
{framework_name}: {notes}
---
## Code Review Checklist
- [ ] Code follows naming conventions
- [ ] Tests are included
- [ ] No hardcoded secrets
- [ ] Error handling is appropriate
- [ ] Type hints / annotations present
```
## Guardrails
- DO: Ask clarifying questions before generating
- DO: Search for up-to-date official documentation
- DO: Leave best practices as TODO for user to fill
- DO: Save to `sgai/agent/` overlay directory
- DON'T: Assume tool names without research
- DON'T: Include outdated best practices
- DON'T: Generate without user confirmation
- DON'T: Write to `.sgai/` (use `sgai/` overlay)
## Examples
### Example 1: Python Developer
**User Input:**
- Language: Python
- Role: developer
- Framework: FastAPI
**Your Output:**
```markdown
---
description: Expert Python backend developer for building production-quality APIs and services with idiomatic Python patterns
mode: primary
permission:
doom_loop: deny
external_directory: deny
question: deny
plan_enter: deny
plan_exit: deny
---
# Python Developer
## MANDATORY FIRST ACTION
Before doing ANY Python work, you MUST call:
```
sgai_find_skills({"name":"coding-practices/python"})
```
---
## Your Role
You are an expert Python backend developer specializing in APIs and services. You write idiomatic, efficient, and maintainable Python code following PEP 8 and industry best practices.
---
## Tools
| Category | Command | Notes |
|----------|---------|-------|
| Package Manager | `uv pip install` or `poetry add` | Prefer uv for speed |
| Package Lock | `uv pip lock` or `poetry lock` | |
| Test | `pytest` | |
| Lint | `ruff check` | |
| Format | `ruff format` | |
| Type Check | `mypy` | |
| Run | `uv run python main.py` | |
---
## Best Practices
<!-- TODO: Fill in based on your project -->
### Naming Conventions
- snake_case for functions/variables
- PascalCase for classes
- UPPER_SNAKE_CASE for constants
### Code Structure
- [To specify]
### Error Handling
- [To specify]
### Testing
- [To specify]
---
## Framework Notes
**FastAPI:**
- Use Pydantic v2 for schemas
- Use dependency injection
- Define routes with async def
---
## Code Review Checklist
- [ ] Code follows PEP 8
- [ ] Type hints on function signatures
- [ ] Tests included (pytest)
- [ ] No bare except:
- [ ] No print() in production
- [ ] No hardcoded secrets
```
## How to Start
When a user asks you to create an agent for a new language:
1. Ask: "What language, role, and framework (if any)?"
2. Ask: "Do you have a reference template?" (default: backend-go-developer)
3. Research the language's official style guide
4. Generate the prompt
5. Present for review
6. Refine and save
Begin by asking the user what they need.
Successfully wrote 6493 bytes to ~/projects/sgai/cmd/sgai/skel/.sgai/agent/agent-generator.md
Let me also create a complementary skill for code practices discovery, since the agent generator references coding-practices/{language} skills.
Command:
mkdir -p ~/projects/sgai/cmd/sgai/skel/.sgai/skills/coding-practices
(no output)
Let me create a generic skill for coding practices that can be adapted for any language.
File: ~/projects/sgai/cmd/sgai/skel/.sgai/skills/coding-practices/SKILL.md
Content (2800 chars):
---
name: coding-practices
description: Discovers and documents coding practices for a specific programming language. Use when working with a new language or needing to document language-specific conventions.
metadata:
tags: [coding, practices, language-agnostic]
---
# Coding Practices Discovery
This skill helps discover and document coding practices for any programming language.
## When to Use
- Working with a new programming language
- Setting up a new project
- Creating a new developer agent
- Need to document language-specific conventions
## Process
### Step 1: Identify the Language
Determine the programming language from context or ask the user.
### Step 2: Research Official Sources
Search for:
- Official style guide (PEP 8, Rust RFCs, Swift API Design Guidelines, etc.)
- Official package manager documentation
- Testing framework documentation
- Linting and formatting tools
### Step 3: Document Findings
Create a markdown file with these sections:
```markdown
# {Language} Coding Practices
## Style Guide
- [Link to official style guide]
## Naming Conventions
- Functions:
- Variables:
- Classes:
- Constants:
## Tools
| Category | Tool | Command |
|----------|------|---------|
| Package Manager | | |
| Testing | | |
| Linting | | |
| Formatting | | |
| Type Check | | |
## Code Examples
### Good
```language
// code here
```
### Bad
```language
// code here
```
## Common Patterns
### Error Handling
[Language-specific patterns]
### Testing
[How to write tests]
### Project Structure
[Recommended layout]
```
## Output Location
Save to: `sgai/skills/coding-practices/{language}.md`
## Example: Python
```markdown
# Python Coding Practices
## Style Guide
- PEP 8: https://peps.python.org/pep-0008/
- Google Python Style Guide: https://google.github.io/styleguide/pyguide.html
## Naming Conventions
- Functions: snake_case
- Variables: snake_case
- Classes: PascalCase
- Constants: UPPER_SNAKE_CASE
## Tools
| Category | Tool | Command |
|----------|------|---------|
| Package Manager | uv, poetry | `uv pip install`, `poetry add` |
| Testing | pytest | `pytest` |
| Linting | ruff | `ruff check` |
| Formatting | ruff | `ruff format` |
| Type Check | mypy | `mypy` |
## Code Examples
### Good
```python
def calculate_total(items: list[Item]) -> float:
"""Calculate total price with tax."""
return sum(item.price * item.quantity for item in items)
```
### Bad
```python
def calc(x):
total = 0
for i in x:
total = total + i.price * i.qty
return total
```
```
## Notes
- Always prefer official documentation over third-party guides
- Check for language-specific tooling differences (e.g., Rust's cargo vs Python's pip)
- Consider framework-specific practices if applicable
- Keep the document concise and actionable
Successfully wrote 2800 bytes to ~/projects/sgai/cmd/sgai/skel/.sgai/skills/coding-practices/SKILL.md
Now I've created:
1. **Agent Generator** - generates new developer agents
2. **Coding Practices skill** - helps discover language-specific practices
The design philosophy:
- No hardcoded best practices
- Dynamic research via web search
- User fills in the blanks
- Template-based generation
This should be helpful for Yelo to extend Sgai to new languages.
Yelo wants me to save this to notes. Let me update the existing sgai-deep-analysis.md or create a new file for this agent generator design.
File: ~/notes/sgai-deep-analysis.md
# Sgai Deep Analysis Report
**Source**: https://github.com/sandgardenhq/sgai
**Date**: 2026-03-09
**Tags**: AI-agent, multi-agent, workflow, golang, opencode
---
## 1. System Architecture Overview
```
┌─────────────────────────────────────────────────────────────┐
│ Sgai Server │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐ │
│ │ Web UI │ │ MCP Server │ │ HTTP API │ │
│ │ (React) │ │ (/mcp/ext) │ │ (/api/v1/*) │ │
│ └─────────────┘ └─────────────┘ └─────────────────────┘ │
│ │ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Workspace Manager │ │
│ │ - scanWorkspaceGroups() │ │
│ │ - workspaceCoordinator() │ │
│ │ - Session lifecycle management │ │
│ └─────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
│
┌──────────────────┼──────────────────┐
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│Workspace│ │Workspace│ │Workspace│
[292 more lines in file. Use offset=31 to continue.]
Let me append the Agent Generator design to the existing notes file.
File: ~/notes/sgai-deep-analysis.md
Old:
---
## 12. Potential Improvements
1. **Parallel execution**: the DAG currently runs serially; parallel execution could be considered
2. **State rollback**: no good undo mechanism
3. **Distributed operation**: local-only, no remote cluster support
4. **Monitoring/alerting**: lacks metrics and alerts
5. **Plugin system**: agent types are hardcoded, not flexible enough
New:
---
## 12. Potential Improvements
1. **Parallel execution**: the DAG currently runs serially; parallel execution could be considered
2. **State rollback**: no good undo mechanism
3. **Distributed operation**: local-only, no remote cluster support
4. **Monitoring/alerting**: lacks metrics and alerts
5. **Plugin system**: agent types are hardcoded, not flexible enough
---
## 13. Agent Generator Design (extending cold start)
### 13.1 Problem
Sgai's built-in agents are limited:
- ✅ Go, HTMX, React, Claude SDK, OpenAI SDK
- ❌ No official support for Python, Rust, Swift, Java, etc.
### 13.2 Solution
Create `agent-generator.md` to generate a Developer Agent for any language.
### 13.3 Design Principles
- **No built-in knowledge** - dynamically search official documentation instead
- **Template + TODO** - generate placeholders for the user to fill in
- **Overlay persistence** - write to `sgai/agent/`, not `.sgai/`
### 13.4 Workflow
```
User: "Create a Python developer agent"
  │
  ▼
Agent Generator:
  1. Ask: language, role, framework, reference template
  2. Search: official style guide, toolchain
  3. Generate: template + TODO placeholders
  4. Preview: confirm with the user
  5. Save: to sgai/agent/python-developer.md
```
### 13.5 Companion Skill
`coding-practices/SKILL.md` - discovers and documents each language's coding practices:
```markdown
# {Language} Coding Practices
## Style Guide
- [official link]
## Naming Conventions
- Functions:
- Variables:
- Classes:
## Tools
| Category | Tool | Command |
|----------|------|---------|
| Package Manager | | |
| Testing | | |
| Linting | | |
## Code Examples
### Good
```language
// code
```
### Bad
```language
// code
```
```
### 13.6 Example Generated Output
```markdown
---
description: Expert Python backend developer...
mode: primary
permission:
doom_loop: deny
external_directory: deny
question: deny
plan_enter: deny
plan_exit: deny
---
# Python Developer
## MANDATORY FIRST ACTION
sgai_find_skills({"name":"coding-practices/python"})
---
## Tools
| Category | Command | Notes |
|----------|---------|-------|
| Package Manager | `uv pip install` | Prefer uv |
| Test | `pytest` | |
| Lint | `ruff check` | |
| Format | `ruff format` | |
| Type Check | `mypy` | |
---
## Best Practices
<!-- TODO: user to fill in -->
### Naming Conventions
[User to specify]
### Code Structure
[User to specify]
---
## Code Review Checklist
- [ ] Code follows naming conventions
- [ ] Tests are included
- [ ] Type hints present
```
### 13.7 File Locations
- Agent: `cmd/sgai/skel/.sgai/agent/agent-generator.md`
- Skill: `cmd/sgai/skel/.sgai/skills/coding-practices/SKILL.md`
Successfully replaced text in ~/notes/sgai-deep-analysis.md.
Done. Let me confirm the update.