Inspiration
This started from a practical frustration inside my own team. As our use of AI coding tools like Copilot and Cursor increased, our velocity went up, but careful checking of the AI-generated code went down. Engineers were committing code they hadn't truly examined. Reviews were happening later, sometimes too late, and often superficially. This led to obscure bugs and long debugging sessions in production. Clearly, we needed a solution.
I didn't want another dashboard. I wanted a strong nudge to review code at the right place, exactly where responsibility for a change is taken: the git commit.
git-lrc was born from the idea that review shouldn't be an afterthought. It should be structurally encouraged while keeping the developer in control. So in git-lrc, a review is triggered automatically, but the developer can still consciously skip it, or manually review and "vouch" for the change they are making.
All of these micro review decisions are recorded in the git log for future analysis, so the team can operate at higher engineering standards.
git-lrc takes 60 seconds to set up and is completely free for any number of reviews, thanks to Google Gemini's free tier.
I encourage you to give git-lrc a try and see the difference in the quality of your code as well as concrete outcomes such as reduced production bugs.
What it does
git-lrc hooks into your Git workflow and reviews staged changes before a commit is finalized.
It takes your diff, runs an AI-powered review, and shows structured feedback — inline comments, issues, and a summary — before the commit lands. You either fix the issues or consciously proceed.
It shifts review left. Not to PR time. To commit time.
How I built it
I built it in Go as a CLI tool that installs global Git hooks.
When you commit (see the sketch after this list):
- Captures the staged diff.
- Sends it to the review engine.
- Opens a local review UI that looks like a lightweight code review system.
- Blocks or allows the commit depending on your decision.
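For illustration, here is a minimal sketch of that commit-time flow in Go. The reviewDiff stub stands in for the real review engine and local UI; the names and behavior are assumptions, not the actual git-lrc source.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// reviewDiff is a stand-in for the real review step: in git-lrc this would send
// the diff to the AI review engine, open the local review UI, and return the
// developer's decision. Here it simply approves everything.
func reviewDiff(diff string) (bool, error) {
	fmt.Printf("git-lrc: reviewing %d bytes of staged changes...\n", len(diff))
	return true, nil
}

func main() {
	// Capture the staged diff exactly as the commit would record it.
	diff, err := exec.Command("git", "diff", "--cached").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "git-lrc: could not read staged diff:", err)
		os.Exit(0) // fail open: a tool error should never block commits
	}
	if len(diff) == 0 {
		os.Exit(0) // nothing staged, nothing to review
	}

	approved, err := reviewDiff(string(diff))
	if err == nil && !approved {
		os.Exit(1) // a non-zero exit from the pre-commit hook aborts the commit
	}
	os.Exit(0) // allow the commit
}
```

Wired in as a pre-commit hook, the exit code alone is what blocks or allows the commit.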
The key design constraint was zero ceremony. Installation is one command. After that, it just works across repositories.
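Under the hood, "global hooks" in Git are typically wired through the core.hooksPath setting. Below is a rough sketch of what a one-command installer could do; the directory layout and the `git-lrc review --staged` subcommand are illustrative assumptions, not the tool's actual code.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		fmt.Fprintln(os.Stderr, "install failed:", err)
		os.Exit(1)
	}

	// Illustrative location for the shared hooks directory.
	hooksDir := filepath.Join(home, ".git-lrc", "hooks")
	if err := os.MkdirAll(hooksDir, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, "install failed:", err)
		os.Exit(1)
	}

	// Write a pre-commit hook that delegates to the git-lrc binary
	// ("git-lrc review --staged" is a hypothetical subcommand).
	hook := "#!/bin/sh\nexec git-lrc review --staged \"$@\"\n"
	if err := os.WriteFile(filepath.Join(hooksDir, "pre-commit"), []byte(hook), 0o755); err != nil {
		fmt.Fprintln(os.Stderr, "install failed:", err)
		os.Exit(1)
	}

	// Point every repository on the machine at this hooks directory.
	if err := exec.Command("git", "config", "--global", "core.hooksPath", hooksDir).Run(); err != nil {
		fmt.Fprintln(os.Stderr, "install failed:", err)
		os.Exit(1)
	}
	fmt.Println("git-lrc pre-commit hook active for all repositories")
}
```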
Challenges I ran into
The hardest problem wasn’t technical. It was behavioral.
Too much friction, and engineers disable it.
Too little friction, and it becomes noise.
I had to tune the review depth, latency, and UI clarity so that it felt like assistance — not policing.
Handling large diffs, dealing with edge cases in Git hooks, and making the system robust under network and API variability were also non-trivial.
Accomplishments that I'm proud of
- Making review structurally unavoidable without being oppressive.
- Keeping disruption low enough not to break flow: it feels like part of Git, not an external tool.
- Getting engineers to actually read their own diffs again.
- Shipping something that enforces standards without meetings or policy documents.
What I learned
AI generation increases output but decreases deliberate thinking unless counterbalanced.
If you want better engineering behavior, change the system, not the people.
Git hooks are far more powerful than what most teams use them for.
What's next for git-lrc
- Smarter context-aware reviews (understanding project structure).
- Team-level policies and shared review baselines.
- Editor integrations.
- Better signal filtering to reduce over-reporting.
- Making it viable for larger organizations without turning it into enterprise bloat.