📝 Project Context Tracking

Prevent context loss and circular debugging with implementation tracking that maintains continuity across AI agent sessions.

❌ The Problem: Context Loss & Circular Debugging

When working with AI coding agents across multiple sessions, critical information gets lost:

  • Lost Implementation Context: New agent sessions don't know what was built, why decisions were made, or what approaches were tried.
  • Repeated Failed Experiments: Agents try solutions that were already proven not to work, wasting time and effort.
  • Forgotten Design Decisions: Important technical decisions and their rationale disappear when context windows reset.
  • Circular Debugging: Teams go in circles, re-attempting the same failed fixes because there's no record of what doesn't work.
  • Hallucination Without Facts: AI agents without documented facts may confidently suggest incorrect solutions based on assumptions.

Result: Wasted developer time, frustration from repeating work, and reduced productivity as context constantly needs rebuilding.

🎯 The Solution: WORKLOG.md

AI Guardrails introduces WORKLOG.md - a lightweight implementation tracking system that maintains project context across agent sessions through brief, structured entries.

✅ How It Works

WORKLOG.md is a simple markdown file that tracks:

  1. Features Added: Brief summaries (1-2 lines) of what was built
  2. Findings & Decisions: Why certain approaches were chosen and important discoveries
  3. What Doesn't Work: Failed approaches that should not be re-attempted

Each entry is dated and kept intentionally brief to maximize value while minimizing maintenance overhead.
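To reduce the friction of starting a new entry, a small helper can append a dated skeleton matching this structure for the agent (or developer) to fill in. This is a hypothetical sketch, not a script shipped with AI Guardrails; the name `worklog-entry.sh` is an assumption:

```shell
#!/bin/sh
# worklog-entry.sh - hypothetical helper (not part of AI Guardrails):
# appends a dated entry skeleton to WORKLOG.md, ready to fill in.

WORKLOG="${1:-WORKLOG.md}"          # target file, defaults to WORKLOG.md
TODAY="$(date +%Y-%m-%d)"           # entries are keyed by ISO date

# Append the three-section skeleton described above.
cat >> "$WORKLOG" <<EOF

## $TODAY

### Features Added
-

### Findings & Decisions
-

### What Doesn't Work
-
EOF

echo "Added $TODAY skeleton to $WORKLOG"
```

Running it once per work session keeps the format consistent without requiring anyone to remember the section names.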

🆚 WORKLOG.md vs CONTEXT.md

CONTEXT.md - Standards

  • Purpose: How to code (standards, rules)
  • Updates: Rarely (only when standards change)
  • Content: Design principles, security rules, coding conventions
  • Example: "Never commit secrets, use type hints, quote bash variables"

WORKLOG.md - Implementation

  • Purpose: What was built (features, decisions)
  • Updates: Frequently (after significant work)
  • Content: Features, findings, failed approaches
  • Example: "Built auth system, tried JWT but Redis sessions work better"

📋 Entry Format

Each WORKLOG entry follows a simple, consistent structure:

```markdown
## 2026-02-15

### Features Added
- Implemented JWT authentication with refresh tokens
- Added rate limiting middleware using Redis
- Created token refresh endpoint /api/auth/refresh

### Findings & Decisions
- JWT access tokens expire in 15 minutes, refresh tokens in 7 days
- Redis chosen for rate limiting - 10x faster than database queries
- httpOnly cookies prevent XSS attacks on stored tokens

### What Doesn't Work
- ❌ Storing JWT in localStorage - vulnerable to XSS attacks
- ❌ Long-lived access tokens - can't revoke without blacklist overhead
- ❌ Database queries for every auth check - too slow (200ms vs 2ms)
```

🔄 Dual-Approach Workflow

AI Guardrails uses a combined approach to ensure WORKLOG.md stays current:

1. Manual Prompting (Primary Method)

Human-Driven Updates

Developers explicitly prompt AI agents to update the worklog:

```
# Starting a session
"Read WORKLOG.md before starting. Has JWT auth been attempted before?"

# After completing work
"Update WORKLOG.md with:
- What you just implemented
- Key decisions made
- Any approaches that didn't work
Keep each section to 1-3 lines."
```

Why manual? Humans maintain quality control, can review before committing, and stay in the loop on documentation.
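For teams that want a safety net on top of manual prompting, a pre-commit check can warn when code is staged without a WORKLOG.md update. The sketch below is hypothetical (the `check_worklog` name and file-extension list are assumptions, and it is not part of AI Guardrails):

```shell
#!/bin/sh
# check_worklog: hypothetical pre-commit helper. Given a list of changed
# files, fail if source code changed but WORKLOG.md was not touched.
check_worklog() {
  code_changed=no
  worklog_changed=no

  for f in "$@"; do
    case "$f" in
      WORKLOG.md) worklog_changed=yes ;;
      *.ts|*.py|*.go|*.sh) code_changed=yes ;;   # treat these as source files
    esac
  done

  if [ "$code_changed" = yes ] && [ "$worklog_changed" = no ]; then
    echo "Reminder: code changed but WORKLOG.md was not updated." >&2
    return 1
  fi
  return 0
}

# Example use from a git pre-commit hook:
#   check_worklog $(git diff --cached --name-only) || exit 1
```

Treating this as a warning rather than a hard block keeps the human in control, consistent with the manual-first approach above.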

2. Context File Reminders (Backup Method)

Automatic Reminders in AI Configs

AI agent configuration files include instructions to use WORKLOG.md:

  • AGENTS.md: Section 2.1 with detailed workflow instructions
  • CLAUDE.md: Prominent reminder after "Your Role" section
  • .cursor/rules/001_workspace.mdc: "Read WORKLOG.md First" in workspace context
  • CONTRIBUTING.md: Part of development workflow and documentation checklist

These reminders ensure agents are aware of WORKLOG.md even without explicit prompting.
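For illustration, a reminder of this kind might read as follows. The wording below is hypothetical, not the literal text shipped in AGENTS.md:

```markdown
## 2.1 Implementation Tracking

Before starting work, read WORKLOG.md for recent features, key decisions,
and approaches that are known not to work. After completing significant
work, append a dated entry (1-3 lines per section).
```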

🎓 Example: Good vs Bad Entries

❌ Bad Entry (Too Vague)

```markdown
## 2026-02-15

### Features Added
- Fixed some stuff
- Updated files

### Findings & Decisions
- Made it work better

### What Doesn't Work
- Some things didn't work
```

Problem: No actionable information. Future agents won't know what was done or why.

✅ Good Entry (Specific & Actionable)

```markdown
## 2026-02-15

### Features Added
- Fixed memory leak in WebSocket connection handler
- Added connection cleanup in src/websocket/manager.ts

### Findings & Decisions
- WeakMap prevents memory leaks (automatic cleanup)
- Must use socket.terminate() not socket.close()
- Connection pool limit: 1000 concurrent connections

### What Doesn't Work
- ❌ Using Map for connections - causes memory leak
- ❌ Relying on disconnect event - unreliable
- ❌ setTimeout for cleanup - race conditions
```

Why it's good: Specific files, concrete reasons, and clear anti-patterns documented.

🔧 Implementation Workflow

```
┌─────────────────────────────────────────┐
│ 1. New AI Agent Session Starts          │
└─────────────┬───────────────────────────┘
              │
              ▼
┌─────────────────────────────────────────┐
│ 2. Agent Reads Config Files             │
│    → Learns about WORKLOG.md            │
└─────────────┬───────────────────────────┘
              │
              ▼
┌─────────────────────────────────────────┐
│ 3. Developer Prompts:                   │
│    "Read WORKLOG.md before starting"    │
└─────────────┬───────────────────────────┘
              │
              ▼
┌─────────────────────────────────────────┐
│ 4. Agent Reads WORKLOG.md               │
│    → Understands recent work            │
│    → Sees what doesn't work             │
│    → Learns from past decisions         │
└─────────────┬───────────────────────────┘
              │
              ▼
┌─────────────────────────────────────────┐
│ 5. Agent Implements Feature             │
│    → Makes informed decisions           │
│    → Avoids known pitfalls              │
└─────────────┬───────────────────────────┘
              │
              ▼
┌─────────────────────────────────────────┐
│ 6. Developer Prompts:                   │
│    "Update WORKLOG.md with summary"     │
└─────────────┬───────────────────────────┘
              │
              ▼
┌─────────────────────────────────────────┐
│ 7. Agent Updates WORKLOG.md             │
│    → Documents features added           │
│    → Records findings & decisions       │
│    → Notes what doesn't work            │
└─────────────┬───────────────────────────┘
              │
              ▼
┌─────────────────────────────────────────┐
│ 8. Developer Reviews & Commits          │
│    → git add WORKLOG.md                 │
│    → git commit -m "..."                │
└─────────────────────────────────────────┘
```

✨ Benefits

For AI Agents

For Developers

For Projects

🚀 Getting Started

WORKLOG.md is included automatically when you use AI Guardrails:

```bash
# 1. Clone AI Guardrails
git clone https://github.com/christopherpaquin/Guardrails-AI .ai-guardrails

# 2. Bootstrap copies WORKLOG.md template automatically
./.ai-guardrails/template/bootstrap-guardrails.sh

# 3. Start using it immediately
cat WORKLOG.md  # Read before starting work

# 4. Prompt AI agents to update
"Update WORKLOG.md with what we just built"
```

📚 Documentation

Complete documentation is provided in WORKLOG_USAGE.md.

💡 Best Practices

🔗 Related Resources

🎯 The Bottom Line

WORKLOG.md prevents context loss and circular debugging by maintaining a lightweight, human-readable log of implementation work that AI agents can reference across sessions.

Combined with manual prompting and automatic reminders in context files, it ensures critical project knowledge is preserved, failed experiments are documented, and development velocity stays high even as agent sessions reset.

Get Started with AI Guardrails