What this skill does
Creates structured retrospective documents that capture what worked, what didn't, and—most importantly—what you thought you were building versus what was actually needed. The goal is honest, specific documentation that saves others from the same mistakes.
- Investigations: research projects
- Events: conferences, workshops
- Publications: newsletters, podcasts
- Tools: newsroom software
The most important section: "The real problem"
This is the most valuable part of any retrospective. It answers:
"What did we THINK we were building vs. what was ACTUALLY needed?"
Good example
"We built a comprehensive tagging system when users just needed full-text search. Three weeks on features no one used."
Bad example (too generic)
"We learned the importance of user research."
LESSONS.md template structure
- Project metadata: name, dates, status, author
- Summary: what it did, its impact, why it matters
- What worked: technical wins, process wins
- What didn't work: critical failures, technical debt, external factors
- The real problem: what you thought you were building vs. what was actually needed
- Recommendations: if continuing, if starting fresh, tech stack verdict
- Reusable artifacts: components others can use
- Questions for next time: unanswered questions worth investigating
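A minimal LESSONS.md skeleton following the structure above might look like this (the headings come from the list; the metadata fields and placeholder text are illustrative, not prescribed by the skill):

```markdown
# LESSONS: [Project name]

**Dates:** [start] – [end] | **Status:** [active / archived] | **Author:** [name]

## Summary
One paragraph: what it did, its impact, why it matters.

## What worked
- Technical wins
- Process wins

## What didn't work
- Critical failures, technical debt, external factors

## The real problem
What we thought we were building vs. what was actually needed.

## Recommendations
If continuing / if starting fresh / tech stack verdict.

## Reusable artifacts
Components others can use.

## Questions for next time
Unanswered questions worth investigating.
```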
The specificity test
For each item in "What didn't work," ask:

- Can I name the specific decision?
- Can I quantify the impact?
- Would this help someone avoid the same mistake?
Too vague
- Communication could have been better
- We underestimated the complexity
- Testing was insufficient
Specific and actionable
- Skipped schema validation on data files. Cost: 3 hours debugging a typo that caused silent failures.
- Built a custom date picker when the browser-native input would have worked. 2 days wasted.
- No error messages when data fails to load. Users just see a blank screen.
Red flags in your writing
If you catch yourself writing generic phrases like the "Too vague" examples above, stop and be more specific. They are placeholders for real insights.
A strong "real problem" example
"We built an admin dashboard for editors when they actually needed a Slack bot. They live in Slack—forcing them to open a web app was friction they'd never accept. The dashboard has 2 monthly active users; the Slack bot prototype we built in a day has 47."
Journalism-specific templates
- `research-project.md` (investigations, data journalism): for deep-dive investigative projects with data analysis, source management, and publication timelines.
- `event.md` (conferences, workshops): for in-person and virtual events with attendance metrics, speaker feedback, and logistics lessons.
- `publication.md` (newsletters, podcasts): for ongoing content series with audience growth, production workflows, and sustainability considerations.
- `editorial-tool.md` (newsroom software, AI tools): for internal tools with adoption metrics, integration challenges, and user feedback tracking.
Voice guidelines
- Honest, specific, slightly self-deprecating: like explaining to a friend why the project took twice as long.
- No corporate speak or blame-shifting: name specific mistakes, not vague "challenges."
- Quantify when possible: "2 days wasted" is better than "some time lost."
- Focus on decisions, not people: include failures with context, exclude blame for individuals.
Installation
```shell
# Clone the repository
git clone https://github.com/jamditis/claude-skills-journalism.git

# Copy the skill to your Claude config
cp -r claude-skills-journalism/project-retrospective ~/.claude/skills/
```
Or download just this skill from the GitHub repository.
Start capturing lessons
The best retrospectives are written by people who got burned and want to save others from the same fate.