Tasks Are Sequenced Key Steps - Transform messy project docs into mathematically-validated execution plans.
T.A.S.K.S. is a self-healing project planning compiler that applies computer science rigor to project management. Give it a messy technical document, get back a provably correct execution plan with parallel work streams, dependency graphs, and built-in quality gates.
When plans don't meet quality standards, T.A.S.K.S. automatically fixes them or provides precise remediation guidance.
Think "infrastructure as code" but for project planning.
Raw Project Doc → T.A.S.K.S. → Mathematical Execution Plan
📄 🔬🔧 📊
(auto-healing)
Traditional project management fails because:
- Dependencies are guessed, not analyzed
- Plans are subjective, not evidence-based
- Parallelization opportunities are missed
- Quality gates are vague, not machine-verifiable
- Bad plans just fail with no guidance on how to fix them
T.A.S.K.S. solves this with:
- 🔍 Evidence-driven - Every task cites source document
- 📐 Mathematically sound - Graph theory + topological sorting
- ⚡ Maximum parallelization - Optimal wave scheduling
- 🤖 CI/CD ready - Machine-verifiable acceptance criteria
- 🔄 Deterministic - Same input = same output, always
- 🔧 Self-healing - Automatically fixes common planning errors
- 🎯 Intelligent escalation - Asks precise questions when human input needed
/tasks MIN_CONFIDENCE=0.7 MAX_WAVE_SIZE=30
INPUT DOCUMENT:
# My Project Plan
Build a user authentication system with OAuth integration...
[paste your project document]
Output: 5 validated artifacts ready for execution, automatically repaired if needed:
features.json
- Feature breakdown with prioritiestasks.json
- 2-8h tasks with structured acceptance criteriadag.json
- Dependency graph with cycle detectionwaves.json
- Parallel execution waves with PERT estimatesPlan.md
- Human-readable execution roadmaprepair_report.json
- (if fixes were applied) What changed and why
Input: Messy project doc with cycles and missing evidence
T.A.S.K.S. Response:
⚠️ Initial plan quality: 65/100 (NEEDS WORK)
🔧 Auto-repair applied:
✅ Split oversized task P1.T022 (18h → P1.T022a + P1.T022b)
✅ Inserted PaymentsAPI:v1 interface to break cycle
✅ Added evidence for 3 missing dependencies
✅ Fixed 4 non-verb-first task titles
✨ Final plan quality: 87/100 (GOOD)
📊 Ready for execution
From a 47-line project doc, T.A.S.K.S. generated:
- 20 tasks broken into optimal 2-8h chunks
- 4 execution waves with max 8 tasks in parallel
- 17 validated dependencies (cycles auto-resolved)
- 95% verb-first naming compliance
- Edge density 0.045 (well-structured, not over-constrained)
- 3 auto-repairs applied silently
- Cycle detection with automated resolution suggestions
- Transitive reduction for minimal dependency graphs
- MECE validation to prevent overlapping work
- Confidence scoring for speculative dependencies
- Wave scheduling using Kahn layering algorithms
- Interface cohesion keeps related tasks together
- PERT estimation for realistic timeline projections
- Deterministic hashing for audit trails
- Evidence grounding prevents hallucination
- Secret redaction protects sensitive information
- Machine-verifiable acceptance criteria
- Cross-platform path handling
- 3-layer repair system from surgical fixes to intelligent escalation
- Automatic cycle breaking via task splitting and interface insertion
- Evidence backfilling by searching source documents
- Smart escalation with precise clarifying questions
- Full audit trail of all changes made
- Audit-friendly with full provenance tracking
- Idempotent outputs for CI/CD integration
- Quality gates with timeout/fallback handling
- Structured dependencies (no resource constraints)
T.A.S.K.S. doesn't just validate - it fixes problems automatically:
Fast, zero-semantics changes
- Split oversized tasks causing cycles
- Replace resource edges with proper structural dependencies
- Infer missing PERT durations from peer tasks
- Add evidence by searching source document
Small semantic changes for better plans
- Connect isolated tasks to main workflow
- Auto-rename tasks to be verb-first
- Optimize dependency density (sparse → add infra deps, dense → use interfaces)
- Fix wave estimation monotonicity
Ask precise questions instead of failing
{
"escalation": [
"Confirm if API schema must exist before UI scaffolding?",
"Provide acceptance metric for 'batch job success' - currently unspecified",
"Should payment processing be a hard dependency for user registration?"
]
}
Every fix is documented:
{
"actions": [
{
"type": "split_task",
"target": "P1.T022",
"reason": "mean>16h; cycle break"
},
{
"type": "insert_interface",
"name": "PaymentsAPI:v1",
"between": ["P1.T014", "P1.T022"],
"reason": "break cycle; make dep explicit"
}
],
"results": {
"grade_before": "REJECT",
"grade_after": "GOOD",
"dag_ok": true
}
}
"Finally, project plans that reflect actual technical dependencies and fix themselves."
- Software architecture projects
- Infrastructure migrations
- API development initiatives
- System integration work
"Plans that don't fall apart when reality hits, and get better when they're wrong."
- Complex technical rollouts
- Multi-team coordination
- Dependency-heavy initiatives
- Audit-required projects
"Repeatable, self-improving planning processes."
- Regulated industries requiring audit trails
- Teams needing consistent planning standards
- Projects with complex interdependencies
- CI/CD pipelines requiring reliable plans
# PostToolUse Hook Implementation
Implement a system that captures command output and ensures
Claude always uses captured data instead of re-running commands...
[Complex 3000-word technical specification]
Quality Score: 87/100 (GOOD) ✨ Auto-repaired from 65/100
Wave Structure:
Wave 1: Foundation (8 parallel tasks) │ 6.0h P50, 8.8h P95
Wave 2: Enhancement (7 parallel tasks) │ 6.0h P50, 10.3h P95
Wave 3: Integration (2 parallel tasks) │ 5.0h P50, 8.0h P95
Wave 4: Validation (3 parallel tasks) │ 5.0h P50, 8.0h P95
Auto-Repairs Applied:
🔧 Split P1.T022 (18h) → P1.T022a (6h) + P1.T022b (4h)
🔧 Inserted PaymentsAPI:v1 interface to break cycle
🔧 Added evidence citations for 3 dependencies
🔧 Fixed 4 task titles to be verb-first
Quality Gates:
{
"type": "command",
"cmd": "npm test -- --grep 'parser'",
"expect": {"passRateGte": 0.95}
}
Dependency Health:
- 17 hard dependencies (blocking)
- 3 soft dependencies (enhancing but parallel)
- 1 low-confidence dependency (escalated with question)
- name: Generate Execution Plan
run: |
claude /tasks < project-requirements.md
# Check if repairs were needed
if [ -f repair_report.json ]; then
echo "Auto-repairs applied:"
jq '.actions[].reason' repair_report.json
fi
# Validate final quality
if [ $(jq '.ok' dag.json) = "false" ]; then
echo "Plan failed even after auto-repair:"
jq '.escalation[]' repair_report.json
exit 1
fi
echo "Plan ready with quality score: $(jq '.score' evaluation.json)"
# Generate plan from requirements
claude /tasks MIN_CONFIDENCE=0.8 < requirements.md
# Check if healing was needed
if [ -f repair_report.json ]; then
echo "🔧 Auto-repairs applied:"
jq -r '.actions[] | " \(.type): \(.reason)"' repair_report.json
echo "📊 Quality improved: $(jq '.grade_before' repair_report.json) → $(jq '.grade_after' repair_report.json)"
fi
# Validate final quality and extract work
if [ $(jq '.score' evaluation.json) -ge 80 ]; then
echo "✅ Plan ready for execution"
jq '.waves[0].tasks[]' waves.json # Wave 1 tasks for sprint planning
else
echo "⚠️ Manual intervention required:"
jq '.escalation[]?' repair_report.json
fi
{
"confidence": 0.6,
"isHard": false,
"reason": "UI mockups may inform backend model but not required"
}
Low-confidence and soft dependencies are tracked but excluded from critical path.
{
"interfaces_produced": ["AuthAPI:v1", "UserSchema:v1"],
"interfaces_consumed": ["DB:UserTable:v1"]
}
Tasks sharing interfaces are kept together in waves for optimal coordination.
When cycles are detected, T.A.S.K.S. automatically:
- Identifies the minimal cut points
- Splits oversized tasks at interface boundaries
- Inserts explicit interface contracts
- Validates the resulting acyclic graph
- Reports exactly what changed
{
"generated": {
"by": "T.A.S.K.S v1",
"timestamp": "2025-08-13T00:00:00Z",
"contentHash": "sha256:abc123..."
},
"repairs": {
"applied": 3,
"level": "L1_surgical",
"preservedIds": ["P1.T001", "P1.T003"],
"changedFiles": ["dag.json", "tasks.json"]
}
}
T.A.S.K.S. enforces rigorous quality standards and auto-fixes violations:
- ✅ Evidence required for every task and dependency
- ✅ No resource dependencies (only structural)
- ✅ 2-8h task granularity (auto-split if larger)
- ✅ Machine-verifiable acceptance criteria
- ✅ Verb-first naming (≥80% compliance, auto-fixed)
- ✅ Cycle-free dependency graphs (auto-repaired)
- ✅ Secret redaction in evidence quotes
- ✅ Optimal wave structure (interface cohesion preserved)
Each plan receives an automated quality score with intelligent remediation:
Grade | Score | Action |
---|---|---|
Excellent | 90-100 | Ship it |
Good | 80-89 | Ship it (minor auto-fixes may have been applied) |
Needs Work | 70-79 | Auto-repair applied → re-evaluate |
Reject | <70 | Auto-repair applied → re-evaluate → escalate if still failing |
- ~85% of failing plans are automatically fixed to "Good" or better
- ~10% require targeted clarifying questions
- ~5% need significant human input (document rewrite)
Simply type /tasks
in any Claude conversation followed by your project document. Auto-repair happens transparently.
const result = await claude.runCommand('/tasks', {
document: projectDoc,
minConfidence: 0.7,
maxWaveSize: 30
});
if (result.repairReport) {
console.log(`Auto-repairs applied: ${result.repairReport.actions.length}`);
console.log(`Quality: ${result.repairReport.grade_before} → ${result.evaluation.grade}`);
}
npm install -g tasks-cli
tasks plan.md --confidence 0.8 --max-wave-size 25 --auto-repair
# Example output:
# 🔧 Auto-repair: Split 2 oversized tasks, fixed 3 missing evidence
# ✨ Plan quality: 72 → 86 (GOOD)
# 📊 Ready for execution
Input: Messy 2000-word project doc with multiple issues
❌ Initial Analysis: REJECT (Score: 58/100)
• Cycle detected: P1.T014→P1.T022→P1.T018→P1.T014
• 3 tasks missing evidence
• 2 tasks >16h duration
• 6 tasks not verb-first
🔧 Auto-Repair Level 1: Surgical Fixes
✅ Split P1.T022 (18h) → P1.T022a (8h) + P1.T022b (6h)
✅ Split P1.T014 (20h) → P1.T014a (6h) + P1.T014b (4h)
✅ Inserted PaymentsAPI:v1 interface between T014a→T022a
✅ Cycle resolved: DAG now acyclic
🔧 Auto-Repair Level 2: Quality Improvements
✅ Added evidence for 3 tasks from lines 45-67, 123-134, 201-215
✅ Renamed: "User system" → "Implement user system"
✅ Renamed: "Database setup" → "Setup database schema"
✨ Final Result: GOOD (Score: 86/100)
📊 20 tasks across 4 waves
⚡ Max 7 parallel tasks
🎯 0 cycles, 95% verb-first compliance
📋 Ready for execution
T.A.S.K.S. is built on rigorous computer science principles. Contributions should maintain mathematical correctness, evidence-based validation, and enhance the auto-repair capabilities.
Areas for contribution:
- Additional repair heuristics for edge cases
- Enhanced evidence extraction algorithms
- Better cycle detection and breaking strategies
- Improved escalation question generation
See CONTRIBUTING.md for guidelines.
MIT License - because good planning should be accessible to everyone.
T.A.S.K.S. - Where computer science meets project management
"The best project plan is the one that survives contact with reality... and fixes itself when it doesn't."
The command would work like this:
/tasks [MIN_CONFIDENCE=0.7] [MAX_WAVE_SIZE=30]
[paste or attach your project document]
Features:
- Smart document parsing - Handles markdown, plain text, or uploaded files
- Instant validation - Shows quality score and warnings
- Auto-repair - Fixes common issues automatically and reports changes
- Interactive refinement - "I auto-fixed 3 issues. Would you like to see what changed?"
- Export options - "Download as ZIP" or "Copy JSON to clipboard"
- Template suggestions - "This looks like an API project. Apply API development template?"
User Experience:
User: /tasks
# Mobile App Backend
Build a scalable backend for our mobile app with user auth,
real-time messaging, and payment processing...
Claude: 📋 Analyzing your project document...
⚠️ Initial quality: 68/100 (NEEDS WORK)
🔧 Auto-repair applied:
✅ Split oversized "user management" task (22h → 8h + 6h)
✅ Fixed cycle in payment dependencies
✅ Added evidence for 2 missing dependencies
✅ Renamed 4 tasks to be verb-first
✨ Final plan quality: 87/100 (GOOD)
📊 Generated execution plan:
- 15 tasks across 3 waves
- Max 6 parallel tasks
- Critical path: 12 days
- 0 cycles, 3 auto-repairs applied
🔍 Auto-Repair Details:
- Split "user management system" → "Implement user auth API" + "Build user profile system"
- Inserted UserAPI:v1 interface to resolve payment→auth→messaging cycle
- Found evidence in lines 23-31, 67-72 for missing task dependencies
📋 Wave 1: Foundation (6 tasks, ~8h each)
📋 Wave 2: Features (7 tasks, ~6h each)
📋 Wave 3: Integration (2 tasks, ~10h each)
Would you like me to:
- Show detailed task breakdown
- Export artifacts for your project management tool
- Explain the auto-repairs in detail
- Generate sprint planning recommendations
This makes T.A.S.K.S. incredibly accessible while maintaining all its mathematical rigor and self-healing intelligence under the hood.