Complete automation for the bug-fix loop: tests → tickets → fixes → retests.
Planfile provides automated CI/CD integration that:
- Runs tests and code analysis
- Generates bug reports using LLM when tests fail
- Creates tickets in PM systems (GitHub, Jira, GitLab)
- Optionally auto-fixes bugs with LLM
- Repeats until all tests pass or strategy is complete
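Conceptually, the loop behaves like the sketch below. The helper functions (`run_tests`, `analyze_failure`, `create_tickets`, `apply_fixes`) are illustrative stand-ins, not Planfile's actual API:

```python
# Illustrative control flow of the auto-loop; helper callables are
# hypothetical stand-ins for Planfile internals.
def auto_loop(run_tests, analyze_failure, create_tickets, apply_fixes,
              max_iterations=5, auto_fix=True):
    for iteration in range(1, max_iterations + 1):
        failures = run_tests()                  # list of failing tests
        if not failures:
            return {"status": "passed", "iterations": iteration}
        reports = [analyze_failure(f) for f in failures]
        create_tickets(reports)                 # one ticket per bug report
        if not auto_fix:
            return {"status": "tickets_filed", "iterations": iteration}
        apply_fixes(reports)                    # attempt fixes, then retest
    return {"status": "max_iterations_reached", "iterations": max_iterations}
```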
## Quick Start

### 1. Install

```bash
pip install planfile[all]
pip install llx  # For AI analysis
```

### 2. Configure Backends

```bash
# GitHub
export GITHUB_TOKEN=your_token
export GITHUB_REPO=owner/repo

# Jira
export JIRA_URL=https://company.atlassian.net
export JIRA_EMAIL=your@email.com
export JIRA_TOKEN=your_token
export JIRA_PROJECT=PROJ

# GitLab
export GITLAB_TOKEN=your_token
export GITLAB_PROJECT_ID=123

# LLM providers
export OPENAI_API_KEY=your_key
export ANTHROPIC_API_KEY=your_key
```
### 3. Run Auto-Loop
```bash
planfile auto loop \
  --strategy ./strategy.yaml \
  --project . \
  --backend github \
  --backend jira \
  --max-iterations 5 \
  --auto-fix
```

## How It Works

### Phase 1: Test Execution

```bash
pytest tests/ -v --cov=src
ruff check src/
mypy src/
```
### Phase 2: Bug Detection
- Identify failing tests
- Analyze code quality issues
- Check security vulnerabilities
- Detect performance regressions
```python
# Generate bug report
bug_report = llx.analyze_failure(
    test_output=test_results,
    code_context=source_code,
    error_type="test_failure"
)
```
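Identifying which tests failed usually starts from machine-readable test output. A minimal sketch, assuming pytest was run with `--junitxml=results.xml`; the parsing below is illustrative, not Planfile's internal implementation:

```python
import xml.etree.ElementTree as ET

def failing_tests(junit_xml: str) -> list[dict]:
    """Extract failing test cases from a JUnit XML report string."""
    failures = []
    for case in ET.fromstring(junit_xml).iter("testcase"):
        failure = case.find("failure")
        if failure is not None:                 # passing cases have no <failure>
            failures.append({
                "test": f"{case.get('classname')}.{case.get('name')}",
                "message": failure.get("message", ""),
            })
    return failures
```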
### Phase 3: Ticket Creation

Example auto-generated ticket:

```yaml
title: "Fix: Authentication module test failures"
body: |
  ## Bug Report
  **Test**: test_auth_login_invalid_credentials
  **Error**: AssertionError: Expected 401, got 500

  ## Analysis
  The authentication service is not properly handling invalid credentials,
  causing a server error instead of returning 401 Unauthorized.

  ## Suggested Fix
  Update auth_service.py to catch validation errors and return
  appropriate HTTP status codes.

  ## Affected Files
  - src/auth/auth_service.py
  - tests/test_auth.py
```
### Phase 4: Auto-Fix (Optional)

```python
# Generate fix code
fix_code = llx.generate_fix(
    bug_report=bug_report,
    source_code=auth_service_code,
    context="authentication_error_handling"
)

# Apply fix
apply_fix(fix_code, file_path="src/auth/auth_service.py")
```
### Phase 5: Retest

```bash
pytest tests/test_auth.py -v
```

The ticket is then updated based on the result:

```python
if tests_pass:
    close_ticket(ticket_id)
else:
    update_ticket_status(ticket_id, "needs_review")
```
## Docker Deployment

### Dockerfile
```dockerfile
FROM python:3.11-slim
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    curl \
    && rm -rf /var/lib/apt/lists/*
# Install Planfile
RUN pip install planfile[all]
# Copy project
WORKDIR /workspace
COPY . .
# Run auto-loop
CMD ["planfile", "auto", "loop", "--strategy", "strategy.yaml"]
```

### docker-compose.yml

```yaml
version: '3.8'
services:
  planfile-runner:
    build: .
    environment:
      - GITHUB_TOKEN=${GITHUB_TOKEN}
      - GITHUB_REPO=${GITHUB_REPO}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    volumes:
      - .:/workspace
      - ./results:/app/results
    command: planfile auto loop --strategy strategy.yaml --max-iterations 10
```

## Strategy Configuration

### strategy.yaml

```yaml
name: "CI/CD Automation Strategy"
project_type: "web"
domain: "software"

sprints:
  - id: 1
    name: "Bug Fix Sprint"
    length_days: 7
    quality_gates:
      - type: "test_coverage"
        threshold: 80
      - type: "security_scan"
        threshold: "no_critical"
    tasks:
      - type: "bug_fix"
        pattern: "test_failure"
        auto_fix: true
        priority: "high"
```

### Quality Gates

```yaml
quality_gates:
  - name: "Test Coverage"
    type: "coverage"
    threshold: 80
    command: "pytest --cov=src --cov-report=xml"
  - name: "Security Scan"
    type: "security"
    threshold: "no_critical"
    command: "bandit -r src/"
  - name: "Code Quality"
    type: "quality"
    threshold: "no_issues"
    command: "ruff check src/"
```

## GitHub Actions Integration

```yaml
name: Planfile Auto-Loop
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

jobs:
  auto-loop:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install Planfile
        run: pip install planfile[all]
      - name: Run Auto-Loop
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          planfile auto loop \
            --strategy .github/strategy.yaml \
            --project . \
            --backend github \
            --max-iterations 3 \
            --auto-fix
```

## Monitoring and Reporting

```bash
# Check CI status
planfile auto ci-status

# Export results as JSON
planfile auto results --format json

# Generate a report
planfile strategy report \
  --strategy strategy.yaml \
  --output ci-report.md
```
### Metrics Collection
```yaml
metrics:
  - name: "bug_fix_rate"
    type: "percentage"
    target: 95
  - name: "test_coverage"
    type: "percentage"
    target: 80
  - name: "auto_fix_success"
    type: "percentage"
    target: 70
```
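Checking measured values against these percentage targets is a simple comparison; a minimal sketch (the helper below is hypothetical, not Planfile's API):

```python
def evaluate_metrics(measured: dict, targets: dict) -> dict:
    """Compare measured percentage metrics against their targets."""
    return {
        name: {
            "value": measured.get(name, 0.0),
            "target": target,
            "passed": measured.get(name, 0.0) >= target,  # meet or exceed
        }
        for name, target in targets.items()
    }
```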
## Backend Integration

### GitHub

```python
from strategy.integrations.github import GitHubBackend
github = GitHubBackend(
    token="github_token",
    repo="owner/repo"
)

# Create issue for bug
issue = github.create_issue(
    title="Fix: Authentication test failure",
    body=bug_report,
    labels=["bug", "auto-generated"]
)

# Update issue status
github.update_issue(issue.id, state="closed")
```

### Jira

```python
from strategy.integrations.jira import JiraBackend
jira = JiraBackend(
    url="https://company.atlassian.net",
    email="user@company.com",
    token="jira_token",
    project="PROJ"
)

# Create ticket
ticket = jira.create_ticket(
    summary="Authentication module bug fix",
    description=bug_report,
    issue_type="Bug",
    priority="High"
)
```

### GitLab

```python
from strategy.integrations.gitlab import GitLabBackend
gitlab = GitLabBackend(
    token="gitlab_token",
    project_id=123
)

# Create issue
issue = gitlab.create_issue(
    title="Fix authentication bug",
    description=bug_report,
    labels=["bug", "auto-generated"]
)
```

## Customization

### Custom Test Runner

```python
from strategy.ci_runner import CustomTestRunner
import subprocess

class CustomRunner(CustomTestRunner):
    def run_tests(self):
        # Run pytest with coverage and a JUnit XML report
        results = subprocess.run([
            "pytest", "tests/",
            "--cov=src",
            "--junitxml=results.xml"
        ])
        return results.returncode == 0
```

### Custom Bug Analyzer

```python
from strategy.ci_runner import BugAnalyzer
class CustomAnalyzer(BugAnalyzer):
    def analyze_failure(self, test_output, code_context):
        # Custom analysis logic
        return {
            "type": "logic_error",
            "severity": "high",
            "suggested_fix": "Update validation logic"
        }
```

### Ticket Templates

```yaml
ticket_templates:
  bug_fix:
    title: "Fix: {test_name} failure"
    body: |
      ## Bug Report
      **Test**: {test_name}
      **Error**: {error_message}

      ## Analysis
      {analysis}

      ## Suggested Fix
      {suggested_fix}
    labels: ["bug", "auto-generated"]
    priority: "high"
```

## Best Practices

### Strategy Design

- Keep sprints focused and time-boxed
- Define clear quality gates
- Use task patterns for consistency

### Test Organization

- Structure tests by feature/module
- Use descriptive test names
- Include assertion messages

### Bug Reporting

- Provide meaningful error messages
- Include context in bug reports
- Use structured logging

### AI Integration

- Provide clear prompts for the LLM
- Validate AI-generated fixes
- Use human-in-the-loop for critical changes

### Monitoring

- Track key metrics
- Set up alerts for failures
- Review strategies regularly
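The "validate AI-generated fixes" point can be enforced mechanically: apply the candidate patch to a scratch copy of the project and keep it only if the tests pass there. A sketch under that assumption (the helper and its parameters are illustrative, not part of Planfile):

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def validate_fix(project_dir, rel_path, fixed_source,
                 test_cmd=("pytest", "-q")):
    """Apply a candidate fix in a scratch copy and run the tests there."""
    with tempfile.TemporaryDirectory() as scratch:
        work = Path(scratch) / "project"
        shutil.copytree(project_dir, work)      # never modify the real tree
        (work / rel_path).write_text(fixed_source)
        result = subprocess.run(list(test_cmd), cwd=work)
        return result.returncode == 0           # keep the fix only on green
```

Only a fix that passes validation should be written back to the working tree; anything else goes to a human for review.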
## Troubleshooting

### Loop Stalls

```bash
# Inspect current status, then try a single dry-run iteration
planfile auto ci-status
planfile auto loop --max-iterations 1 --dry-run
```

### AI Fixes Failing

```bash
# Check API keys
echo $OPENAI_API_KEY

# Test AI connection
planfile ai test --provider openai

# Fall back to manual mode
planfile auto loop --no-auto-fix
```

### Backend Connection Issues

```bash
planfile backend test github
planfile backend test jira
planfile backend check github --permissions
```

### Debugging

```bash
# Enable debug logging
planfile auto loop --debug --log-level debug

# Save detailed logs
planfile auto loop --log-file debug.log

# Dry-run mode
planfile auto loop --dry-run --verbose
```
Planfile - Automating your SDLC, one loop at a time. 🚀