By default Claude is trained to be agreeable. This resets that. Works best in Settings → Global Instructions so it applies to every session.
Claude's context window is a spotlight, not storage. More tokens don't mean Claude remembers more; the spotlight just gets dimmer. Anthropic calls this "context rot."
Symptoms: Claude forgets earlier instructions, repeats corrected mistakes, and output quality drops mid-session.
The 4 fixes:
- /compact: Compress session history. Use it proactively at task boundaries, not after quality drops.
- /clear: Full reset. Use when switching tasks entirely. Pair with a written spec.
- /rewind: Roll back to a previous state without killing the session.
- Trim CLAUDE.md: It loads every session. Over 300 lines = a token tax on every request.
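A quick way to gauge that tax is a rough line-and-token count. This sketch assumes a crude 4-characters-per-token ratio, which is only a heuristic:

```python
from pathlib import Path

def estimate_token_tax(path="CLAUDE.md", chars_per_token=4):
    """Roughly estimate the per-request cost of CLAUDE.md.

    The 4-chars-per-token ratio is a crude heuristic, not a real tokenizer.
    """
    text = Path(path).read_text(encoding="utf-8")
    lines = len(text.splitlines())
    return {
        "lines": lines,
        "approx_tokens": len(text) // chars_per_token,
        "over_budget": lines > 300,  # the 300-line threshold from above
    }
```

Run it against your own CLAUDE.md; if `over_budget` comes back true, start trimming.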
- ABOUT ME/: about-me.md, my-voice.md (tone + phrases you never use), my-rules.md (ask first, show a plan)
- PROJECTS/: one subfolder per project
- TEMPLATES/: reusable prompt templates
- OUTPUTS/: everything Claude generates
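If you'd rather scaffold this from a script, a minimal sketch (folder and file names taken straight from the list above):

```python
from pathlib import Path

FOLDERS = ["ABOUT ME", "PROJECTS", "TEMPLATES", "OUTPUTS"]
ABOUT_ME_FILES = ["about-me.md", "my-voice.md", "my-rules.md"]

def scaffold(root="."):
    """Create the four-folder layout described above, plus empty ABOUT ME files."""
    root = Path(root)
    for name in FOLDERS:
        (root / name).mkdir(parents=True, exist_ok=True)
    for fname in ABOUT_ME_FILES:
        f = root / "ABOUT ME" / fname
        if not f.exists():
            f.touch()
    return sorted(p.name for p in root.iterdir() if p.is_dir())
```

It is idempotent, so rerunning it on an existing workspace won't clobber anything.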
Global Instructions:
- Edit, don't restart. Edit previous messages rather than starting a new chat.
- New chat every ~15 messages. Shorter sessions = more headroom.
- Batch into one message. Combine follow-ups. One message, one response.
- Turn off Extended Thinking when you don't need deep reasoning.
- Pace your day. Don't burn your quota in the first session of the morning.
- Plan Mode Default: Use for any non-trivial task (3+ steps). Stop and re-plan immediately if something goes wrong.
- Subagent Strategy: Use subagents to keep main context clean. Offload research and parallel analysis.
- Self-Improvement Loop: After every session, update tasks/lessons. Iterate on failing tests until they pass.
- Verify Before Done: Diff changes, run tests, check logs. Demonstrate correctness before claiming complete.
- Demand Elegance: Ask "Is there a more elegant way?" Don't push solutions the user hasn't reviewed.
- Zero Handholding: Fix bugs at root cause. Zero context switching required from user.
- Task Management: Write a checkable plan first. Explain changes. Capture lessons.
- Extract Architecture: Claude reverse-engineers its own code into a master markdown file.
- Boundary Testing: Write failing unit tests for all auth and payment paths, then make them pass.
- Establish Telemetry: Integrate error logging (Sentry etc.) to catch silent failures.
- Refactor for Modularity: Break monolithic files into single-responsibility modules.
- Sanitize Dependencies: Audit every imported library for vulnerabilities.
- Document Deployment: Create a containerized build script.
- Threat Model: Run an adversarial AI agent to find injection flaws before hackers do.
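To make the Boundary Testing step concrete, here is a toy sketch. The `charge` function is hypothetical, a stand-in for a real payment path; the point is the shape of the edge-case tests:

```python
def charge(amount_cents, authorized=True):
    """Hypothetical payment entry point (stand-in for a real payment path)."""
    if not authorized:
        raise PermissionError("not authorized")
    if not isinstance(amount_cents, int) or amount_cents <= 0:
        raise ValueError("amount must be a positive integer of cents")
    return {"charged": amount_cents}

def test_boundaries():
    # Zero, negative, float, and string amounts must all be rejected
    for bad in (0, -1, 2.5, "100"):
        try:
            charge(bad)
            assert False, f"accepted bad amount {bad!r}"
        except ValueError:
            pass
    # The unauthorized path must fail loudly, not silently succeed
    try:
        charge(100, authorized=False)
        assert False, "accepted unauthorized charge"
    except PermissionError:
        pass
    # The happy path still works
    assert charge(100) == {"charged": 100}
```

Have Claude write tests like these before it touches the code, then iterate until they pass.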
Before Claude starts, have it identify what it actually needs to know. This eliminates the biggest source of bad output: Claude making assumptions instead of asking.
The prompt:
Power variation:
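The author's exact prompt and its power variation aren't reproduced above. A minimal sketch of the idea, with wording that is my own, might look like:

```python
# Hypothetical wording; adjust to taste
CLARIFY_FIRST = (
    "Before you start, list every question you need answered to do this well. "
    "Ask them all at once, and do not begin until I reply."
)

def with_clarification(task):
    """Prepend the clarifying-questions instruction to any task prompt."""
    return f"{CLARIFY_FIRST}\n\nTask: {task}"
```

Prepending it to every non-trivial task turns assumptions into questions before any output is generated.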
Instead of describing the format you want, show Claude 2–3 examples of the exact output. Claude's pattern matching is exceptional; it will nail your format, tone, and structure on the first try.
Template:
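The original template isn't shown above; a minimal sketch of a few-shot builder along these lines:

```python
def few_shot_prompt(instruction, examples, new_input):
    """Build a few-shot prompt: instruction, 2-3 input/output pairs, new input."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts += [f"Input: {inp}", f"Output: {out}", ""]
    parts += [f"Input: {new_input}", "Output:"]
    return "\n".join(parts)
```

Ending on a bare `Output:` cues the model to continue the pattern rather than explain it.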
When your prompt has multiple parts, wrap each section in XML tags. Claude was trained on structured data and natively understands XML hierarchy; it attends to each section more accurately.
Template:
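The original template isn't shown above; a minimal helper in this spirit:

```python
def xml_prompt(sections):
    """Wrap each (tag, content) pair in matching XML tags, one block per section."""
    return "\n\n".join(
        f"<{tag}>\n{content}\n</{tag}>" for tag, content in sections
    )
```

Tag names are up to you; `context`, `task`, and `constraints` are common choices.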
After Claude gives you an answer, ask it to critique its own response. It finds gaps and missing angles it didn't catch the first time. The second pass is almost always noticeably better.
Basic (one follow-up):
Full loop (build into the original prompt):
For important decisions:
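The exact follow-up prompts aren't reproduced above. A sketch of the full-loop shape, using the generic chat message format (the critique wording is my own):

```python
# Hypothetical wording for the second pass
CRITIQUE = (
    "Critique your previous answer: what is missing, weak, or wrong? "
    "Then produce an improved version."
)

def critique_loop(first_prompt, first_answer):
    """Return the message list for a second, self-critique pass."""
    return [
        {"role": "user", "content": first_prompt},
        {"role": "assistant", "content": first_answer},
        {"role": "user", "content": CRITIQUE},
    ]
```

For important decisions, repeat the critique turn once more before accepting the answer.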
Claude has strong default behaviors: bullet points, hedging, summaries, corporate-speak. Explicitly prohibiting these breaks habits faster than trying to describe the style you want.
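One way to enforce a do-not list is to state it in the prompt and then scan responses for slips. The banned phrases below are hypothetical placeholders; substitute the defaults you actually want to prohibit:

```python
# Hypothetical examples; replace with your own banned patterns
BANNED = ["in conclusion", "it's worth noting", "delve", "i hope this helps"]

def prohibitions():
    """Render an explicit do-not list to paste into the prompt."""
    return "Do NOT use any of these: " + "; ".join(BANNED)

def violations(text):
    """Scan a response for banned phrases (case-insensitive)."""
    low = text.lower()
    return [p for p in BANNED if p in low]
```

If `violations` flags anything, quote the offending phrase back and ask for a rewrite.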