The problem with one-off corrections
When Claude does something wrong, you correct it and move on. That correction lives only in your head — or at most in a comment to yourself. Next week, when a different session makes the same mistake, you correct it again.
The pattern repeats without accumulating into anything you can act on. Each correction is isolated. Nothing compounds.
Think of a reporter who notices every time a source misleads them but never updates how they evaluate that source. The individual corrections are real, but they're not building anything. The same instincts that led to the first mistake lead to the next one.
What error logging is
Error logging is a structured system for capturing failures when they happen. You record what you asked for, what the AI did instead, what category of failure it was, and a hypothesis about the root cause.
Common failure categories: hallucination, instruction ignored, wrong tool selected, incomplete execution, context lost between steps. These categories aren't interchangeable: each one points to a different fix.
The exact prompt that triggered the failure is the most critical piece. You need the verbatim text, not your recollection of it. Memory smooths over the details that matter most for diagnosis.
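A log entry can be as simple as one JSON object per line in an append-only file. Here is a minimal sketch of that structure; the field names, the category spellings, and the `error_log.jsonl` filename are assumptions for illustration, not part of any prescribed format:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# One spelling per failure category, so entries can be tallied later.
CATEGORIES = {
    "hallucination",
    "instruction_ignored",
    "wrong_tool",
    "incomplete_execution",
    "context_lost",
}

@dataclass
class ErrorEntry:
    prompt: str      # the verbatim prompt, not a paraphrase
    expected: str    # what you asked for
    observed: str    # what the AI did instead
    category: str    # one of CATEGORIES
    hypothesis: str  # best guess at the root cause
    timestamp: str = ""

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def log_error(entry: ErrorEntry, path: str = "error_log.jsonl") -> None:
    # Append-only JSONL: one structured entry per line.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(entry)) + "\n")
```

The fixed category set is the part that pays off later: free-text categories drift ("hallucinated" vs. "hallucination") and can't be counted.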
The /log_error command
This is a slash command that forks the current conversation to preserve context, asks you what went wrong, and writes a structured entry to a log file. You invoke it immediately after a failure — while the session context is still fresh and before you've started compensating for the mistake.
Over time, the log becomes a record of what your AI environment gets wrong. It's also a record of conditions: what were you working on, how long had the session been running, what had already been loaded into context. Patterns emerge from that detail that don't emerge from vague recollections.
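Those patterns can be surfaced mechanically rather than by rereading the log. A sketch, assuming each entry is one JSON object per line with a `category` field (both the filename and the field name are assumptions):

```python
import json
from collections import Counter

def failure_counts(path: str = "error_log.jsonl") -> Counter:
    """Tally failure categories across the whole log."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                counts[json.loads(line)["category"]] += 1
    return counts
```

Calling `failure_counts().most_common(3)` tells you which class of mistake dominates, which is exactly the signal a vague recollection can't give you.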
Log wins too
The mirror command is /log_success. It captures prompts that worked well: what you asked for, what made the phrasing effective, whether it can be templated for reuse.
Successful patterns are as useful as failure analysis. A prompt that produced exactly the right output — the right level of detail, the right format, the right scope — is the input that eventually becomes a skill or a standing instruction in your CLAUDE.md.
Without a log, those wins disappear the same way errors do. You remember vaguely that "something like this worked before" but can't reconstruct it precisely enough to reuse it.
Where the logs feed back
The Module 2 exercise asks you to add a "hard-won lessons" section to your CLAUDE.md and leave it empty for now. This is where log entries feed back: each one starts as a /log_error capture, gets analyzed for its root cause, and becomes a rule that prevents the same mistake from recurring.
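The promotion step can start mechanically too. A hedged sketch that drafts candidate lesson lines from recurring entries, assuming one JSON object per line with `category` and `hypothesis` fields; the threshold and wording are arbitrary, and a human still writes the final rule:

```python
import json
from collections import Counter

def draft_lessons(path: str = "error_log.jsonl", threshold: int = 3) -> list[str]:
    """Turn categories that recur at least `threshold` times into
    draft bullets for the hard-won lessons section of CLAUDE.md."""
    counts = Counter()
    hypotheses: dict[str, str] = {}
    with open(path) as f:
        for line in f:
            if line.strip():
                entry = json.loads(line)
                counts[entry["category"]] += 1
                # Keep the first recorded hypothesis as the draft explanation.
                hypotheses.setdefault(entry["category"], entry["hypothesis"])
    return [
        f"- Recurring failure: {cat} ({n} times). Working hypothesis: {hypotheses[cat]}"
        for cat, n in counts.most_common()
        if n >= threshold
    ]
```

The output is deliberately a draft, not a rule: the point of the analysis step is that you decide what instruction actually prevents the recurrence.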
Over a project, that section becomes the most useful part of your context file. It's the accumulated record of what this specific AI, in this specific workflow, gets wrong — and what you've figured out about why.
That's different from generic advice about how to prompt AI. It's specific to your work, your tools, your common tasks. Nobody else has that file. You built it.
SOURCE: Adapted from "The error logging system" pattern, originally by The Agentic Lab, extracted and organized at github.com/jamditis/stash.
NEXT: Head to Module 3 to start building your skills library — and start the habit of logging what works.