Multiple layers of security: AI instructions, pre-commit hooks, commit message validation, and CI scanning.
Defense in depth is a security strategy that layers multiple protections so that if one layer fails, the others still catch the problem. This is crucial for AI-assisted development, where assistants can generate insecure code, including hardcoded secrets, faster than manual review alone can catch it.
Guardrails-AI implements three complementary layers that work together to prevent security issues from reaching production.
What: Configuration files that teach AI assistants your security standards
When: Active while AI generates code (real-time, before code is written)
Tools: Cursor rules, Claude instructions, Copilot guidelines
Proactive Prevention - Stops issues before code is even written. Most effective when AI follows instructions correctly.
```markdown
# .cursor/rules/006_security.mdc
priority: 100  # HIGHEST PRIORITY

## NEVER Commit Secrets

❌ NEVER generate:
- API keys, tokens, passwords
- Private keys (SSH, TLS, GPG)
- Cloud credentials (AWS, GCP, Azure)
- Database connection strings with passwords

✅ ALWAYS use:
- Environment variables: os.environ.get("API_KEY")
- Configuration files (gitignored): .env
- Secret management services: AWS Secrets Manager
```
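The "always use environment variables" rule can also be made fail-fast in application code. A minimal sketch of that pattern, assuming nothing beyond the standard library (the `get_required_env` helper is an illustrative name, not part of Guardrails-AI):

```python
import os


def get_required_env(name: str) -> str:
    """Fetch a secret from the environment, failing loudly if it is unset.

    Failing fast beats os.environ.get's silent None, which can let an
    empty credential travel deep into the app before anything breaks.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"Missing required environment variable {name!r}; "
            "set it in your (gitignored) .env or a secret manager."
        )
    return value
```

The value itself is supplied by the shell, a gitignored `.env`, or a secret manager; it never appears in source.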
What: Automated checks that run on your machine before code is committed
When: Every time you run git commit
Tools: detect-secrets, shellcheck, black, pylint, yamllint
Local Enforcement - Catches issues before they enter git history. Works offline. Immediate feedback to developer.
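Wiring detect-secrets into pre-commit takes only a few lines of `.pre-commit-config.yaml`. A minimal sketch (the `rev` shown is illustrative; pin whatever release you have actually audited):

```yaml
repos:
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.5.0  # illustrative pin; use an audited release
    hooks:
      - id: detect-secrets
        args: ["--baseline", ".secrets.baseline"]
```

The baseline file records known, reviewed findings so the hook only blocks on new secrets.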
Limitation: can be bypassed with the `--no-verify` flag (`git commit --no-verify`).

```
$ git commit -m "Add API integration"
detect-secrets............................Failed
- hook id: detect-secrets

Potential secret found:
  File: config/settings.py
  Line 15: api_key = "sk-1234567890abcdef"
  Type: Secret Keyword

❌ Commit blocked!

Fix:
1. Remove hardcoded secret
2. Move to environment variable
3. Add to .env.example (without real value)
```
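Step 2's "move to environment variable" usually pairs with a gitignored `.env` file. A minimal stdlib sketch of that pattern, assuming a simple `KEY=VALUE` format (real projects typically use python-dotenv; `load_dotenv_minimal` is an illustrative name):

```python
import os
from pathlib import Path


def load_dotenv_minimal(path: str = ".env") -> None:
    """Tiny .env loader: copies KEY=VALUE lines into os.environ.

    The secret lives in a gitignored file; the code only ever reads
    it from the environment, so nothing sensitive enters git history.
    """
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault: a value already exported in the shell wins
        os.environ.setdefault(key.strip(), value.strip())
```

`.env.example` then ships the same keys with placeholder values, documenting what must be set without revealing anything.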
What: Comprehensive security scans in GitHub Actions on every push
When: Automatically on push, pull requests, and scheduled runs
Tools: Gitleaks, TruffleHog, Bandit, Semgrep
Comprehensive & Mandatory - Scans full history. Cannot be bypassed. Protects entire team. Catches secrets committed from any environment.
```yaml
# .github/workflows/security-ci.yml
name: Security Scanning

on:
  push:
  pull_request:
  schedule:
    - cron: "0 6 * * *"  # daily scheduled scan

jobs:
  gitleaks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history
      - name: Run Gitleaks
        uses: gitleaks/gitleaks-action@v2
  trufflehog:
    # Entropy-based secret detection
  bandit:
    # Python security linter
  semgrep:
    # SAST scanner
```
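The "entropy-based" detection that TruffleHog performs can be illustrated in a few lines of Python: random-looking tokens score far more bits of entropy per character than ordinary identifiers. This is a toy sketch of the idea, not TruffleHog's actual implementation:

```python
import math
from collections import Counter


def shannon_entropy(s: str) -> float:
    """Shannon entropy in bits per character of the string s."""
    if not s:
        return 0.0
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())


# A generated key uses many distinct characters nearly uniformly,
# so it scores well above an English word or identifier:
# shannon_entropy("sk-1234567890abcdef") > shannon_entropy("settings")
```

Scanners combine a threshold on this score with context (variable names like `api_key`, known key prefixes) to keep false positives down.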
Developer asks AI: "Add AWS S3 integration"
AI generates code with hardcoded AWS keys
Layer 1: AI Rules - If AI follows instructions, it won't generate hardcoded keys. Instead generates:
```python
aws_access_key = os.environ.get("AWS_ACCESS_KEY_ID")
aws_secret_key = os.environ.get("AWS_SECRET_ACCESS_KEY")
```
Developer manually adds database password for "quick testing"
Tries to commit: git commit -m "Add DB connection"
Layer 2: Pre-commit Hooks - detect-secrets scans staged files, finds password, blocks commit
Developer bypasses pre-commit: git commit --no-verify
Pushes to GitHub: git push origin main
Layer 3: CI/CD - Gitleaks scans git history, finds secret, blocks PR merge, alerts team
Secret was committed 6 months ago (before Guardrails-AI was installed)
Still present in git history, accessible to anyone who clones the repo
Layer 3: Scheduled CI Scans - Daily security scans catch the old secret, alert team to rewrite history and rotate credentials
Together, these layers provide comprehensive protection: proactive prevention, fast local enforcement, and mandatory centralized scanning.
Set up all three layers of protection in your project.
Get Started with Guardrails-AI