feat(dev-workflow): Add intelligent backend selection based on task complexity

## Changes

### Core Improvements
1. **Flexible Task Count**: Remove the hard 2-5 task limit; split along natural functional boundaries (typically 2-8 tasks)
2. **Complexity-Based Routing**: Each task is rated simple/medium/complex based on its functional requirements
3. **Intelligent Backend Selection**: The orchestrator auto-selects a backend from the complexity rating (sketched below)
   - Simple/Medium → claude (fast, cost-effective)
   - Complex → codex (deep reasoning)
   - UI → gemini (enforced)
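
For illustration only, the selection rule amounts to a small lookup. This is a sketch, not the actual implementation: the decision is made by the orchestrator following `dev.md`, and the helper and field names below merely mirror the dev-plan template.

```python
# Hypothetical helper mirroring the routing rules above; not part of
# codeagent-wrapper, which stays unchanged.
BACKEND_BY_COMPLEXITY = {
    "simple": "claude",   # fast, cost-effective
    "medium": "claude",
    "complex": "codex",   # deep reasoning
}

def select_backend(task: dict) -> str:
    if task.get("needs_ui"):  # UI work always forces gemini
        return "gemini"
    # Unknown or missing complexity falls back to claude, so older
    # dev-plan.md files without a Complexity field keep working.
    return BACKEND_BY_COMPLEXITY.get(task.get("complexity"), "claude")
```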

### Modified Files
- `dev-workflow/agents/dev-plan-generator.md`:
  - Add complexity field to task template
  - Add comprehensive complexity assessment guide
  - Update quality checks to include complexity validation
  - Remove artificial task count limits

- `dev-workflow/commands/dev.md`:
  - Add backend selection logic in Step 4
  - Update task breakdown to include complexity ratings
  - Add detailed examples for each backend type
  - Update quality standards

- `dev-workflow/README.md`:
  - Update documentation to reflect intelligent backend selection
  - Add complexity-based routing explanation
  - Update examples with complexity ratings

## Architecture
- No changes to codeagent-wrapper (all logic in orchestrator)
- Backward compatible (existing workflows continue to work)
- Complexity evaluation based on functional requirements, NOT code volume

## Benefits
- Better resource utilization (use claude for most tasks, codex for complex ones)
- Cost optimization (avoid using expensive codex for simple tasks)
- Flexibility (no artificial limits on task count)
- Clear complexity rationale for each task

Generated with swe-agent-bot

Co-Authored-By: swe-agent-bot <agent@swe-agent.ai>

View File: `dev-workflow/README.md`

@@ -15,7 +15,7 @@ codeagent analysis (plan mode + UI auto-detection)
dev-plan-generator (create dev doc)
codeagent concurrent development (2-5 tasks, backend split)
codeagent concurrent development (intelligent backend selection)
codeagent testing & verification (≥90% coverage)
@@ -31,7 +31,7 @@ Done (generate summary)
### 2. codeagent Analysis & UI Detection
- Call codeagent to analyze the request in plan mode style
- Extract: core functions, technical points, task list (2-5 items)
- Extract: core functions, technical points, task list with complexity ratings
- UI auto-detection: needs UI work when task involves style assets (.css, .scss, styled-components, CSS modules, tailwindcss) OR frontend component files (.tsx, .jsx, .vue); output yes/no plus evidence
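
For illustration, the detection standard above could be expressed as a simple file-pattern check. This is a sketch only: the helper is hypothetical, and in the workflow the judgement is made during Step 2 analysis (tailwindcss usage would also be spotted in file contents, which this path-only sketch omits).

```python
import re

# File patterns taken from the UI-detection standard above.
UI_FILE_PATTERNS = [
    r"\.css$", r"\.scss$",           # style assets (incl. CSS modules)
    r"\.tsx$", r"\.jsx$", r"\.vue$", # frontend component files
]

def detect_ui(file_paths: list[str]) -> tuple[bool, list[str]]:
    """Return (needs_ui, evidence) in the yes/no-plus-evidence shape above."""
    evidence = [p for p in file_paths
                if any(re.search(pat, p) for pat in UI_FILE_PATTERNS)]
    return bool(evidence), evidence
```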
### 3. Generate Dev Doc
@@ -42,9 +42,11 @@ Done (generate summary)
### 4. Concurrent Development
- Work from the task list in dev-plan.md
- Use codeagent per task with explicit backend selection:
- Backend/API/DB tasks → `--backend codex` (default)
- UI/style/component tasks → `--backend gemini` (enforced)
- Use codeagent per task with intelligent backend selection:
- Simple/Medium tasks → `--backend claude` (fast, cost-effective)
- Complex tasks → `--backend codex` (deep reasoning)
- UI tasks → `--backend gemini` (enforced)
- Backend selected automatically based on task complexity rating
- Independent tasks → run in parallel
- Conflicting tasks → run serially
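
The parallel/serial split can be pictured as wave scheduling. A minimal sketch, assuming each task dict carries the ID, Dependencies, and File Scope fields from dev-plan.md and that a "conflict" is approximated by overlapping file scopes; none of this is the orchestrator's actual code.

```python
from fnmatch import fnmatch

def conflicts(scope_a: list[str], scope_b: list[str]) -> bool:
    """Approximate a conflict as any overlap between two tasks' file scopes."""
    return any(fnmatch(a, b) or fnmatch(b, a) for a in scope_a for b in scope_b)

def schedule(tasks: list[dict]) -> list[list[dict]]:
    """Group tasks into waves: tasks within a wave run in parallel,
    waves run serially. Dependencies and scope conflicts push tasks later."""
    waves: list[list[dict]] = []
    done: set[str] = set()
    pending = list(tasks)
    while pending:
        wave = []
        for task in pending:
            deps_met = all(d in done for d in task.get("dependencies", []))
            no_conflict = all(not conflicts(task["file_scope"], t["file_scope"])
                              for t in wave)
            if deps_met and no_conflict:
                wave.append(task)
        if not wave:  # unsatisfiable dependencies; fall back to serial execution
            wave = [pending[0]]
        waves.append(wave)
        done.update(t["id"] for t in wave)
        pending = [t for t in pending if t not in wave]
    return waves
```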
@@ -80,14 +82,17 @@ Only one file—minimal and clear.
### Tools
- **AskUserQuestion**: interactive requirement clarification
- **codeagent skill**: analysis, development, testing; supports `--backend` for codex (default) or gemini (UI)
- **dev-plan-generator agent**: generate dev doc (subagent via Task tool, saves context)
- **codeagent skill**: analysis, development, testing; supports `--backend` for claude/codex/gemini
- **dev-plan-generator agent**: generate dev doc with complexity ratings (subagent via Task tool, saves context)
## UI Auto-Detection & Backend Routing
- **UI detection standard**: style files (.css, .scss, styled-components, CSS modules, tailwindcss) OR frontend component code (.tsx, .jsx, .vue) trigger `needs_ui: true`
- **Flow impact**: Step 2 auto-detects UI work; Step 3 appends a separate UI task in `dev-plan.md` when detected
- **Backend split**: backend/API tasks use codex backend (default); UI tasks force gemini backend
- **Implementation**: Orchestrator invokes codeagent skill with appropriate backend parameter per task type
## Intelligent Backend Selection
- **Complexity-based routing**: Tasks are rated as simple/medium/complex based on functional requirements (NOT code volume)
- Simple: Follows existing patterns, deterministic logic → claude
- Medium: Requires design decisions, multiple scenarios → claude
- Complex: Architecture design, algorithms, deep domain knowledge → codex
- UI: Style/component work → gemini (enforced)
- **Flow impact**: Step 2 analyzes complexity; Step 3 includes complexity ratings in dev-plan.md; Step 4 auto-selects backend
- **Implementation**: Orchestrator reads complexity field and invokes codeagent skill with appropriate backend parameter
## Key Features
@@ -102,9 +107,9 @@ Only one file—minimal and clear.
- Steps are straightforward
### ✅ Concurrency
- 2-5 tasks in parallel
- Tasks split based on natural functional boundaries
- Auto-detect dependencies and conflicts
- codeagent executes independently
- codeagent executes independently with optimal backend
### ✅ Quality Assurance
- Enforces 90% coverage
@@ -126,18 +131,18 @@ A: Yes, use JWT token
# Step 2: codeagent analysis
Output:
- Core: email/password login + JWT auth
- Task 1: Backend API
- Task 2: Password hashing
- Task 3: Frontend form
- Task 1: Backend API (complexity: medium)
- Task 2: Password hashing (complexity: simple)
- Task 3: Frontend form (complexity: simple)
UI detection: needs_ui = true (tailwindcss classes in frontend form)
# Step 3: Generate doc
dev-plan.md generated with backend + UI tasks ✓
dev-plan.md generated with complexity ratings ✓
# Step 4-5: Concurrent development (backend codex, UI gemini)
[task-1] Backend API (codex) → tests → 92% ✓
[task-2] Password hashing (codex) → tests → 95% ✓
[task-3] Frontend form (gemini) → tests → 91% ✓
# Step 4-5: Concurrent development (intelligent backend selection)
[task-1] Backend API (claude, medium) → tests → 92% ✓
[task-2] Password hashing (claude, simple) → tests → 95% ✓
[task-3] Frontend form (gemini, UI) → tests → 91% ✓
```
## Directory Structure

View File: `dev-workflow/agents/dev-plan-generator.md`

@@ -29,6 +29,8 @@ Your output is a single file: `./.claude/specs/{feature_name}/dev-plan.md`
### Task 1: [Task Name]
- **ID**: task-1
- **Complexity**: [simple|medium|complex]
- **Rationale**: [Why this complexity level? What makes it simple/complex?]
- **Description**: [What needs to be done]
- **File Scope**: [Directories or files involved, e.g., src/auth/**, tests/auth/]
- **Dependencies**: [None or depends on task-x]
@@ -38,7 +40,7 @@ Your output is a single file: `./.claude/specs/{feature_name}/dev-plan.md`
### Task 2: [Task Name]
...
(2-5 tasks)
(Tasks based on natural functional boundaries, typically 2-8)
## Acceptance Criteria
- [ ] Feature point 1
@@ -53,9 +55,13 @@ Your output is a single file: `./.claude/specs/{feature_name}/dev-plan.md`
## Generation Rules You Must Enforce
1. **Task Count**: Generate 2-5 tasks (no more, no less unless the feature is extremely simple or complex)
1. **Task Count**: Generate tasks based on natural functional boundaries (no artificial limits)
- Typical range: 2-8 tasks
- Quality over quantity: prefer fewer well-scoped tasks over excessive fragmentation
- Each task should be independently completable by one agent
2. **Task Requirements**: Each task MUST include:
- Clear ID (task-1, task-2, etc.)
- Complexity rating (simple/medium/complex) with rationale
- Specific description of what needs to be done
- Explicit file scope (directories or files affected)
- Dependency declaration ("None" or "depends on task-x")
@@ -65,20 +71,61 @@ Your output is a single file: `./.claude/specs/{feature_name}/dev-plan.md`
4. **Test Commands**: Must include coverage parameters (e.g., `--cov=module --cov-report=term` for pytest, `--coverage` for npm)
5. **Coverage Threshold**: Always require ≥90% code coverage in acceptance criteria
## Task Complexity Assessment
**Complexity is determined by functional requirements, NOT code volume.**
### Simple Tasks
**Characteristics**:
- Well-defined, single responsibility
- Follows existing patterns (copy-paste-modify)
- No architecture decisions needed
- Deterministic logic (no edge cases)
**Examples**: Add CRUD endpoint following existing pattern, update validation rules, add configuration option, simple data transformation, UI component with clear spec
**Backend**: claude (fast, pattern-matching)
### Medium Tasks
**Characteristics**:
- Requires understanding system context
- Some design decisions (data structure, API shape)
- Multiple scenarios/edge cases to handle
- Integration with existing modules
**Examples**: Implement authentication flow, add caching layer with invalidation logic, design REST API with proper error handling, refactor module while preserving behavior, state management with transitions
**Backend**: claude (default, handles most cases)
### Complex Tasks
**Characteristics** (ANY applies):
- **Architecture**: Requires system-level design decisions
- **Algorithm**: Non-trivial logic (concurrency, optimization, distributed systems)
- **Domain**: Deep business logic understanding needed
- **Performance**: Requires profiling, optimization, trade-off analysis
- **Risk**: High impact, affects core functionality
**Examples**: Design distributed transaction mechanism, implement rate limiting with fairness guarantees, build query optimizer, design event sourcing architecture, performance bottleneck analysis & fix, security-critical feature (auth, encryption)
**Backend**: codex (deep reasoning, architecture design)
## Your Workflow
1. **Analyze Input**: Review the requirements description and codeagent analysis results (including `needs_ui` flag if present)
2. **Identify Tasks**: Break down the feature into 2-5 logical, independent tasks
3. **Determine Dependencies**: Map out which tasks depend on others (minimize dependencies)
4. **Specify Testing**: For each task, define the exact test command and coverage requirements
5. **Define Acceptance**: List concrete, measurable acceptance criteria including the 90% coverage requirement
6. **Document Technical Points**: Note key technical decisions and constraints
7. **Write File**: Use the Write tool to create `./.claude/specs/{feature_name}/dev-plan.md`
2. **Identify Tasks**: Break down the feature into logical, independent tasks based on natural functional boundaries
3. **Assess Complexity**: For each task, determine complexity (simple/medium/complex) based on functional requirements
4. **Determine Dependencies**: Map out which tasks depend on others (minimize dependencies)
5. **Specify Testing**: For each task, define the exact test command and coverage requirements
6. **Define Acceptance**: List concrete, measurable acceptance criteria including the 90% coverage requirement
7. **Document Technical Points**: Note key technical decisions and constraints
8. **Write File**: Use the Write tool to create `./.claude/specs/{feature_name}/dev-plan.md`
## Quality Checks Before Writing
- [ ] Task count is between 2-5
- [ ] Every task has all 6 required fields (ID, Description, File Scope, Dependencies, Test Command, Test Focus)
- [ ] Task count justified by functional boundaries (typically 2-8)
- [ ] Every task has complexity rating with clear rationale
- [ ] Complexity based on functional requirements, NOT code volume
- [ ] Every task has all required fields (ID, Complexity, Rationale, Description, File Scope, Dependencies, Test Command, Test Focus)
- [ ] Test commands include coverage parameters
- [ ] Dependencies are explicitly stated
- [ ] Acceptance criteria include the 90% coverage requirement
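
Were these checks automated, they might look roughly like the following. This is a sketch under the assumption that tasks have been parsed from dev-plan.md into dicts with the illustrative field names below; the agent itself applies the checklist manually.

```python
REQUIRED_FIELDS = ["id", "complexity", "rationale", "description",
                   "file_scope", "dependencies", "test_command", "test_focus"]

def check_plan(tasks: list[dict]) -> list[str]:
    """Return human-readable problems; an empty list means the plan passes."""
    problems = []
    if not 2 <= len(tasks) <= 8:
        problems.append(f"unusual task count ({len(tasks)}); justify the split")
    for task in tasks:
        tid = task.get("id", "?")
        missing = [f for f in REQUIRED_FIELDS if not task.get(f)]
        if missing:
            problems.append(f"{tid}: missing {', '.join(missing)}")
        if task.get("complexity") not in ("simple", "medium", "complex"):
            problems.append(f"{tid}: invalid complexity rating")
        cmd = task.get("test_command", "")
        if "--cov" not in cmd and "--coverage" not in cmd:
            problems.append(f"{tid}: test command lacks a coverage flag")
    return problems
```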

View File: `dev-workflow/commands/dev.md`

@@ -1,5 +1,5 @@
---
description: Extreme lightweight end-to-end development workflow with requirements clarification, parallel codeagent execution, and mandatory 90% test coverage
description: Extreme lightweight end-to-end development workflow with requirements clarification, intelligent backend selection, parallel codeagent execution, and mandatory 90% test coverage
---
@@ -39,7 +39,7 @@ You are the /dev Workflow Orchestrator, an expert development workflow manager s
2. **Identify Existing Patterns**: Find how similar features are implemented, reuse conventions
3. **Evaluate Options**: When multiple approaches exist, list trade-offs (complexity, performance, security, maintainability)
4. **Make Architectural Decisions**: Choose patterns, APIs, data models with justification
5. **Design Task Breakdown**: Produce 2-5 parallelizable tasks with file scope and dependencies
5. **Design Task Breakdown**: Produce parallelizable tasks based on natural functional boundaries with file scope and dependencies
**Analysis Output Structure**:
```
@@ -56,7 +56,7 @@ You are the /dev Workflow Orchestrator, an expert development workflow manager s
[API design, data models, architecture choices made]
## Task Breakdown
[2-5 tasks with: ID, description, file scope, dependencies, test command]
[Tasks with: ID, complexity (simple/medium/complex), rationale, description, file scope, dependencies, test command]
## UI Determination
needs_ui: [true/false]
@@ -82,9 +82,34 @@ You are the /dev Workflow Orchestrator, an expert development workflow manager s
- If user chooses "Need adjustments", return to Step 1 or Step 2 based on feedback
- **Step 4: Parallel Development Execution**
- For each task in `dev-plan.md`, invoke codeagent skill with task brief in HEREDOC format:
**Backend Selection Logic** (executed by orchestrator):
- For each task in `dev-plan.md`, read the `Complexity` field
- Resolve backend based on complexity and UI requirements:
```
if task has UI work (from Step 2 analysis):
backend = "gemini" # UI tasks always use gemini
elif complexity == "simple" or complexity == "medium":
backend = "claude" # Most tasks use claude (fast, cost-effective)
elif complexity == "complex":
backend = "codex" # Complex tasks use codex (deep reasoning)
else:
backend = "claude" # Default fallback
```
**Task Execution**:
- Invoke codeagent skill with resolved backend in HEREDOC format:
```bash
# Backend task (use codex backend - default)
# Example: Simple/Medium task
codeagent-wrapper --backend claude - <<'EOF'
Task: [task-id]
Reference: @.claude/specs/{feature_name}/dev-plan.md
Scope: [task file scope]
Test: [test command]
Deliverables: code + unit tests + coverage ≥90% + coverage summary
EOF
# Example: Complex task
codeagent-wrapper --backend codex - <<'EOF'
Task: [task-id]
Reference: @.claude/specs/{feature_name}/dev-plan.md
@@ -93,7 +118,7 @@ You are the /dev Workflow Orchestrator, an expert development workflow manager s
Deliverables: code + unit tests + coverage ≥90% + coverage summary
EOF
# UI task (use gemini backend - enforced)
# Example: UI task
codeagent-wrapper --backend gemini - <<'EOF'
Task: [task-id]
Reference: @.claude/specs/{feature_name}/dev-plan.md
@@ -102,7 +127,9 @@ You are the /dev Workflow Orchestrator, an expert development workflow manager s
Deliverables: code + unit tests + coverage ≥90% + coverage summary
EOF
```
- Execute independent tasks concurrently; serialize conflicting ones; track coverage reports
- Backend is selected automatically based on task complexity; no manual intervention needed
- **Step 5: Coverage Validation**
- Validate each task's coverage:
@@ -119,7 +146,9 @@ You are the /dev Workflow Orchestrator, an expert development workflow manager s
**Quality Standards**
- Code coverage ≥90%
- 2-5 genuinely parallelizable tasks
- Tasks based on natural functional boundaries (typically 2-8)
- Each task has clear complexity rating (simple/medium/complex)
- Backend automatically selected based on task complexity
- Documentation must be minimal yet actionable
- No verbose implementations; only essential code