How to Build a Weekly Retrospective System for Side Projects (Inside Your Browser)
Most developers I know have tried to run a weekly retrospective on their side projects. Most of them quit by week three.
The pattern is almost always the same. Sunday evening, open a fresh Notion page, write three sections (what went well, what didn’t, what’s next), stare at a blank cursor, type two bullets, close the laptop. The next week the page is blank because nobody opened it. By week four the doc is buried under five other “systems” and the habit is dead.
The reason isn’t laziness. It’s that retrospectives written for company teams do not match how solo side projects actually run. You don’t have sprint goals. You don’t have stakeholders. You don’t have a scrum master nagging you to fill in the template. What you have is forty-seven open tabs, three projects in flight, and a hunch that you spent the week on the wrong one.
This article is about a retrospective system that survives past week three. It is light enough to do in fifteen minutes, structured enough to surface real signal, and works for developers running multiple side projects in parallel.
Why side project retrospectives usually fail
Before we get to the format, it helps to name the failure modes.
Templates that ask the wrong questions. “What did you accomplish this sprint?” assumes a sprint. Side projects don’t have sprints. They have weeks where you shipped a feature, weeks where you read three articles and refactored nothing, and weeks where you opened the repo once. A template needs to handle all three without making you feel like a fraud.
No data to ground the answers. When you sit down on Sunday to write “what did I do this week”, you are running on memory. Memory lies. You will overweight the last thing you did and forget the four hours you spent on Wednesday morning. Without a log of commits, completed tasks, and resources you saved, the retrospective becomes a vibe report.
Friction between writing and reading. A retrospective is only useful if you read last week’s before you write this week’s. If the doc lives somewhere you have to navigate to, you will not open it. The whole loop has to live where you already spend time.
Performative scope. People write retros like they expect a manager to read them. Solo retros should be private, blunt, and unembarrassed. “I procrastinated on the auth refactor for the third week in a row” is more useful than “Worked on authentication module.”
A working system has to fix all four.
What a working retrospective needs
After cycling through five different setups across two years of side projects, the format that stuck has five properties.
- Data-anchored. It pulls in commits, completed tasks, and saved resources automatically so the writing part starts from facts, not memory.
- Three-window scope. Daily logs feed weekly reviews feed monthly summaries. You skip a daily, the weekly still works. You skip a weekly, the monthly still works.
- Five questions max. More than five and you stop filling it in. Fewer than three and you don’t extract enough signal.
- Lives where the work lives. Not a separate Notion workspace. Not an Obsidian vault you open once a week. The retrospective lives in the same surface as the projects it reviews.
- Comparable across periods. Week 12 should be visually comparable to week 11 without manual diff work. Patterns only emerge when you can see them.
That last one is what most homemade systems miss. A pile of weekly markdown files is searchable but not comparable. You need the system to surface “this is the third week running you spent more time on Project A than B” without you having to count.
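If you are rolling the comparison yourself, the core of it is small. Here is a minimal sketch of the "third week running" check: given one dict of per-project activity counts per week, find which project has led the most recent consecutive weeks. The function name and the shape of `weekly_counts` are illustrative, not from any particular tool.

```python
def streak_leader(weekly_counts):
    """weekly_counts: list of {project: commits_or_hours} dicts, oldest week
    first. Returns (project, streak) for the project that has absorbed the
    most activity in the most recent consecutive weeks."""
    leaders = [max(week, key=week.get) for week in weekly_counts if week]
    if not leaders:
        return None, 0
    current = leaders[-1]
    streak = 0
    for leader in reversed(leaders):
        if leader != current:
            break
        streak += 1
    return current, streak

weeks = [
    {"project-a": 12, "project-b": 3},
    {"project-a": 9,  "project-b": 5},
    {"project-a": 7,  "project-b": 6},
]
print(streak_leader(weeks))  # ('project-a', 3)
```

A streak of three against a stated priority of Project B is exactly the signal a pile of markdown files will never hand you unprompted.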
The five-question weekly review
Here is the actual template. It takes about fifteen minutes if you have daily data, thirty if you are reconstructing the week from memory.
1. What did the data say?
Before you write a word of opinion, look at the numbers. Commits per project. Tasks completed vs created. Hours logged. Resources saved by category. The point is to anchor the rest of the answers in actual activity, not what you feel like you did.
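If you want to pull the commit numbers yourself rather than read them off a dashboard, one `git log` call per repo is enough. A sketch, assuming `git` is on your PATH and each project is a local clone; both function names here are made up for illustration:

```python
import subprocess

def count_nonempty_lines(text):
    """Count commits from `git log --oneline` output (one line per commit)."""
    return sum(1 for line in text.splitlines() if line.strip())

def commits_last_week(repo_path):
    """Tally the last seven days of commits for one repo."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--since=7 days ago", "--oneline"],
        capture_output=True, text=True, check=True,
    ).stdout
    return count_nonempty_lines(out)

# With a dict of {project_name: repo_path}, one call per repo gives the tally:
# {name: commits_last_week(path) for name, path in repos.items()}
```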
2. What shipped?
A specific list of things that left your machine. A merged PR. A deployed change. A doc published. A demo recorded. If nothing shipped, write “nothing shipped” honestly. That is also signal.
3. What got stuck?
The thing you avoided. The PR you left in draft for five days. The task you dragged across three days without touching. Name it specifically. “Auth refactor blocked by uncertainty about token storage” beats “didn’t make progress on auth.”
4. What did you learn?
A link, an article, a tutorial, a debugging story. Something you would teach a future version of yourself. If you have an archive of saved resources for the week, half of this answer is already written.
5. What is the one thing for next week?
Not a list of five. One thing. The single piece of work that, if you finished it, would make next week’s review feel good. Everything else is either supporting that one thing or noise.
That’s it. Five questions, fifteen minutes, no more.
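If you keep the reviews as files, pre-filling question 1 from data is the cheapest way to beat the blank page. A sketch of a generator for the template above; the function name and field names are invented for this example:

```python
TEMPLATE = """\
## Week of {week}

1. What did the data say?
   Commits: {commits}. Tasks done/created: {done}/{created}.
2. What shipped?
   -
3. What got stuck?
   -
4. What did you learn?
   -
5. One thing for next week:
   -
"""

def draft_review(week, commits, done, created):
    """Pre-fill question 1 from data; the rest stays blank for judgment."""
    return TEMPLATE.format(week=week, commits=commits, done=done, created=created)

print(draft_review("2024-W12", commits=14, done=9, created=11))
```

Opening a file that already says "Commits: 14. Tasks done/created: 9/11." is a very different Sunday evening than opening an empty one.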
The monthly zoom-out
The weekly review is for momentum. The monthly review is for direction.
Once a month, look at four weeks of weekly reviews stacked together and ask three questions.
- Which project absorbed the most time? Compare it to which project you said was your priority. The gap is the diagnostic.
- What pattern repeated across “what got stuck”? If the same thing showed up three weeks running, it is no longer a stuck task. It is a structural problem with how you work or what you are building.
- Was the monthly direction worth it? Sometimes you finish a month and realize the project you spent it on was the wrong project. Better to notice in month one than month six.
The monthly review is the one that prevents you from spending a quarter on the wrong thing. It is also the one most people skip.
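The "repeated stuck item" check is also mechanical if your weekly reviews are parseable. A minimal sketch, assuming each weekly review yields a list of stuck-item labels; the function name and threshold default are my own choices:

```python
from collections import Counter

def repeated_stuck(weekly_stuck, threshold=3):
    """weekly_stuck: one list of stuck-item labels per weekly review.
    Returns labels stuck in `threshold` or more weeks: by the article's
    rule of thumb, these are structural problems, not stuck tasks."""
    counts = Counter(label for week in weekly_stuck for label in set(week))
    return sorted(label for label, n in counts.items() if n >= threshold)

month = [
    ["auth refactor", "blog draft"],
    ["auth refactor"],
    ["auth refactor", "deploy script"],
    ["deploy script"],
]
print(repeated_stuck(month))  # ['auth refactor']
```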
Where STACKFOLO fits
The reason I built this system inside STACKFOLO is that the data the retrospective needs is already in the browser. GitHub commits across all my repos are in the Timeline view. Completed tasks for the week are in the Tasks panel. Saved resources for the week sit in the Archive grouped by category and project.
The Reports feature is what closed the loop. Daily logs (sleep, focus, mood, satisfaction, one-line note) feed an AI-generated daily report. Weekly, monthly, quarterly, and yearly views aggregate those daily logs into a Life Area radar chart and a period summary. The five questions above map directly to fields in the weekly retrospective form, with AI-drafted starting text per field that you can accept, edit, or rewrite.
The point is not that you need an AI to do your retros. The point is that ninety percent of the friction in keeping a retrospective habit is the blank-page problem. If the data is already pulled in and the first draft is half written, the only thing left is the part that requires your judgment.
A two-week trial
If you have never run a retrospective system that stuck, try this for two weeks before deciding if it works.
- Days 1 through 6: Just keep daily logs. Two minutes each. Sleep, focus hours, one-line note. Skip the weekly review for now.
- Day 7: Open the week. Answer the five questions. Pick one thing for next week.
- Days 8 through 13: Same daily logs. On day 10, re-read your day-7 review. Notice what you said you’d ship. Notice if you are.
- Day 14: Second weekly review. Compare week 1 vs week 2. The signal is in the comparison.
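If you want to run the trial with plain files instead of an app, the two-minute daily log can be a JSON line per day. A sketch under those assumptions; `LOG_PATH` and both function names are hypothetical:

```python
import json
from datetime import date
from pathlib import Path
from statistics import mean

LOG_PATH = Path("daily_log.jsonl")  # hypothetical location

def log_day(sleep_hours, focus_hours, note, path=LOG_PATH):
    """Append one two-minute daily entry as a JSON line."""
    entry = {"date": date.today().isoformat(), "sleep": sleep_hours,
             "focus": focus_hours, "note": note}
    with path.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def week_summary(entries):
    """Aggregate parsed daily entries into the numbers question 1 needs."""
    return {"avg_sleep": round(mean(e["sleep"] for e in entries), 1),
            "total_focus": sum(e["focus"] for e in entries)}
```

On day 7 and day 14, parse the lines back and feed `week_summary` into question 1; the day-14 comparison is then two dicts side by side.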
Two weeks is enough to know whether the format fits how you work. The third week is when most homemade systems die; this one survives because it never asked you to keep a separate journal in the first place.
Want a weekly retrospective system that pulls from the projects you already track? Try STACKFOLO free on Chrome Web Store →