This is a submission for the GitHub Copilot CLI Challenge
## What I Built
I vibe-coded ReflectCLI! A tool that promotes thoughtful coding through structured reflection. Before each commit, you answer reflective questions to build your personal developer knowledge dataset.
I fully leaned into GitHub Copilot CLI to build this tool so I could fully assess its capabilities. It's also a tool I had wanted to build for a while but didn't have time for due to other ongoing projects. This was a perfect opportunity to make it a reality!
## Tech Stack
- Language: TypeScript
- Runtime: Node.js 18+
- CLI Framework: inquirer.js
- Testing: Jest
- Linting: ESLint 8 + Prettier
- CI/CD: GitHub Actions
- Build: TypeScript compiler
## Why I built this
AI-assisted development is powerful, but it can encourage cognitive offloading, letting tools do the thinking instead of developing deeper understanding. git-reflect interrupts this pattern by making reflection part of your workflow.
Each answer becomes part of your personal knowledge base, documenting not just what you built, but why you built it that way and what you learned. Over time, this dataset reveals patterns in your thinking and helps you grow as a developer.
One could argue that good engineers already reflect when writing PR descriptions. But PR descriptions happen after the decisions are made.
git-reflect happens while those decisions are still fluid. It’s the difference between explaining your thinking and improving it.
## Demo
The process is simple:
- Install the hook:

```shell
git-reflect install
```

- Make changes and commit:

```shell
git add .
git commit -m "Your commit message"
```

- Answer the reflection questions — you'll be prompted with 7 thoughtful questions before the commit completes.
- View your reflections:

```shell
cat .git/git-reflect/log.json | jq .
```
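To give a sense of what the stored data might look like, here is a small sketch of a reflection entry and a one-line summary helper. The field names (`timestamp`, `commitMessage`, `answers`) are illustrative assumptions, not ReflectCLI's actual schema:

```typescript
// Hypothetical shape of one entry in .git/git-reflect/log.json.
// Field names are illustrative; the real schema may differ.
interface Reflection {
  timestamp: string;               // ISO 8601 commit time
  commitMessage: string;
  answers: Record<string, string>; // question id -> answer
}

// Render a one-line summary of an entry, e.g. for a quick log view.
function summarize(entry: Reflection): string {
  const n = Object.keys(entry.answers).length;
  return `${entry.timestamp}: "${entry.commitMessage}" (${n} answer${n === 1 ? "" : "s"})`;
}

const example: Reflection = {
  timestamp: "2026-01-15T10:30:00Z",
  commitMessage: "Add retry logic to API client",
  answers: { why: "Transient network failures kept surfacing in production" },
};

console.log(summarize(example));
// 2026-01-15T10:30:00Z: "Add retry logic to API client" (1 answer)
```

Because each entry is plain JSON, it stays trivially greppable and `jq`-friendly, as in the command above.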
Here is the full flow in action:
And here is the stored reflection log:
## My Experience with GitHub Copilot CLI
Setup was easy; I was ready to go in just a few minutes.
The plan mode was particularly impressive. It helped establish a clear, structured plan with distinct phases, which made the implementation phase much easier. The clarifying questions Copilot asked to define the scope were thoughtful and helped sharpen the direction of the project.
During implementation, the level of control and transparency stood out. Permissions were granular, and the execution steps were very transparent, allowing me to follow along closely and ensure the right changes were being made.
I appreciated the PROJECT_SUMMARY generated at the end. It included useful metrics such as lines of code, files created, number of commits, and total implementation time. This made the process feel tangible and measurable.
Troubleshooting was also a very positive experience. When I realized parts of the hook were not working correctly, I resumed the Copilot session and described the issue. Copilot resolved multiple problems on the first attempt and provided clear explanations of what went wrong, which made it a valuable learning experience.
Having everything happen directly from the CLI felt seamless. It allowed me to stay in flow and complete the project from start to finish without constantly switching contexts or windows.
Overall, a fun experience that made me realize (again) how efficient coding assistants are becoming. I was able to plan and execute this idea in just a few hours!
If you are interested in the code and want to try it out on your own machine, check out my repository on GitHub.
If you have any questions or can think of areas for improvement/feature ideas, do share!





## Top comments
Really fascinating concept! The meta-irony of "using AI to limit AI usage" is brilliant.
I ran into a similar reflection challenge with my own submission. I found that the 23 checkpoints Copilot CLI generated became an accidental "decision log" — each checkpoint captured why I made certain architectural choices, not just what I built.
Your approach of making reflection explicit (rather than accidental) is intriguing. The idea of building a personal knowledge dataset from commit context is powerful — it's basically creating a "decision graph" over time.
Question: How do you envision using the reflection data long-term? Feed it to an LLM for pattern detection? Export to knowledge management tools?
Looking forward to seeing where you take this!
Thanks Pascal! It's a special conundrum indeed! Interesting times we live in.
Explicit reflection is intentional and provides an opportunity to slow down and think about why a decision is being made.
A "decision graph" is a nice way of framing this.
Exactly, I was thinking of feeding the reflection data to an LLM to surface insights such as the most common uncertainties, suggestions, and a breakdown of the types of changes. The weekly summary task could run on the first commit of each week, covering the week before, so it's seamless for the user.
The export function could be for use with third-party tools, and integrations here could be interesting to explore.
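The weekly "breakdown of types of changes" mentioned above could be sketched as a simple tally over the log. The `changeType` field here is a hypothetical name, not ReflectCLI's actual schema:

```typescript
// Minimal sketch of a weekly-summary step: count how often each type of
// change appears in a week's reflections. `changeType` is a hypothetical
// field name used only for illustration.
type WeekEntry = { changeType: string };

function tallyChangeTypes(entries: WeekEntry[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of entries) {
    counts.set(e.changeType, (counts.get(e.changeType) ?? 0) + 1);
  }
  return counts;
}

const lastWeek: WeekEntry[] = [
  { changeType: "feature" },
  { changeType: "bugfix" },
  { changeType: "feature" },
];

const counts = tallyChangeTypes(lastWeek);
console.log(counts.get("feature")); // 2
console.log(counts.get("bugfix"));  // 1
```

A tally like this could then be handed to an LLM as compact context rather than sending the raw log.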
Love the roadmap! Weekly summaries + LLM insights = powerful combo.
The "decision graph" concept has legs. I've been thinking about how checkpoint systems (like mine) could feed into similar analysis pipelines.
One question: Have you tested the current hook extensively yourself? Curious how the daily cadence feels over 1-2 weeks of real work.
Looking forward to seeing v2! 🚀
Thanks!
I tested this in a couple of my personal repos today after building the tool. I will be testing this over the upcoming weeks so I can get a sense of the impact and how it affects my cadence. I will share my impressions with you.
Reminded me of this meme I made for one of my posts a few weeks back (≧︶≦))( ̄▽ ̄ )ゞ

Looks like a fun project! Would definitely like to give it a try!
Thanks Aryan! Hahaha exactly!
This is such a strong and intentional project. Making reflection part of the commit process instead of just dumping thoughts in a PR later is so damn brilliant.
I’m new to the community, but this is exactly the kind of idea that makes me excited to learn and build more thoughtfully.
love the concept behind it
I love that this concept inspired you Omaima. And welcome to the community!
thanks much !!
Wow
You're way ahead of the game with the code, can you give me some tips to improve?
PS: I'm new to the community, but congratulations on your work!

Hi Luigi,
Thanks! I don't feel ahead of the game though haha, there is always something new to be learning.
As for tips I can share with you based on my experience so far:
I am not sure if these are the tips you are looking for. What are the goals you are working towards? Maybe I can tailor my advice better to your needs if I have more information.
I'm currently studying JavaScript, so I'm trying to create a personal website to build a portfolio and look for a job at a new company someday. Thank you for the suggestions you gave me. If you have any other ideas on how to improve, I would appreciate them, because I currently have a lot of things to study and my head is exploding.
Got it. Building an online presence with a website and an active portfolio is so important, and I see you are already doing this early on, which is great. Studying is good, but you want to put your theory into practice, and you can do this by building side projects. It also makes the process more fun to build things you can actually use. Focus on shipping fewer projects of high quality rather than many projects of low quality. Quality over quantity wins.
Share your learnings in public, through this platform, your website, and/or linkedin. It helps build your online presence and improves your visibility for potential future recruiters/companies.
Look out for opportunities to volunteer and contribute to open source projects; that also teaches you how to work alongside other developers.
Finally, trust the process. It's a marathon, not a sprint. Keep learning bit by bit and showing up, and you will reach your goals. Also, I am not saying not to use AI. It can be a great tool to help you brainstorm ideas or automate parts of code you have already become familiar with. But use it with caution; don't rely on it for coding fundamentals.
Ok thanks so much for the nice words, I will put into practice what you recommended and we will see in a few months what comes out.
Sounds great! Keep me posted on your progress.
Really compelling idea - I love the irony of using AI to encourage less automatic reliance on AI. I faced a similar dilemma in my own project, where Copilot CLI’s auto-generated checkpoints unintentionally became a kind of decision journal.
Your tool’s more deliberate and structured approach to reflection is a smart upgrade. Capturing context around commits to build a personal knowledge base feels like creating a long-term decision map of your development process.
Curious - do you see this reflection data being used later on?
Thanks Sunil! I am thinking of feeding the reflection data to an LLM to surface insights such as most common uncertainties, suggestions, breakdown of types of changes. This can become a weekly task.
Do you have other suggestions of use cases for this data?
The real power here is the dataset you are building.
Imagine reviewing your reflection logs from 6 months ago and seeing patterns in your decision-making. That's self-awareness most developers never get. It's like having a mirror for your thinking process.
I agree, there could be lots of value in this dataset. I feel like it's hard to find time for reflection as a developer. Prompting for reflection at commit time integrates those moments straight into the developer workflow.
Meta brilliance. Building AI to guard against AI overuse = the 2026 dev signal.
Lived this: Created "prompt budget" wrapper for LLM framework team. Hard limit: 5 AI generations per feature branch. Forced human reasoning → 20% fewer bugs.
The irony multiplier: Tool that prevents blind AI faith... built with AI. Perfect.
One Q: How do you handle false negatives (tool blocks good AI use)? Ship it! 🚀
Thanks Charan!
The "prompt budget" wrapper is a really interesting idea! Do you have a link to your tool I can check out?
My tool doesn't block AI usage. Rather it pushes you to think/reflect more on what changes you are making with AI.
Ah! That budget wrapper was part of a workspace repo, so I can't really share it. But the core idea is very simple: implement a middleware layer that tracks and controls model_type, input_tokens, and output_tokens for every LLM request. This lets you automatically switch between models based on the importance of the LLM query (the converse() API call). For example, if the query is decisive/pivotal in the workflow, it uses pro models (which often use more tokens and are therefore more expensive); if it's a simple/bland query, it automatically switches to standard models (which use fewer tokens).
You can probably try it out!
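The routing rule described above could be sketched roughly like this. Every name here (ModelTier, LlmQuery, routeModel) is hypothetical, not a real API; the point is the decision rule, not the implementation:

```typescript
// Sketch of the model-routing middleware idea: pick a model tier based on
// how important the query is. All names here are hypothetical.
type ModelTier = "pro" | "standard";

interface LlmQuery {
  prompt: string;
  decisive: boolean; // caller flags queries that are pivotal in the workflow
}

function routeModel(query: LlmQuery): ModelTier {
  // Decisive/pivotal queries go to the expensive pro model;
  // simple queries fall back to the cheaper standard model.
  return query.decisive ? "pro" : "standard";
}

console.log(routeModel({ prompt: "Pick the system architecture", decisive: true }));
// "pro"
console.log(routeModel({ prompt: "Rename this variable", decisive: false }));
// "standard"
```

In a real wrapper the `decisive` flag would presumably be derived from context (or a token budget) rather than set by hand.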
Interesting, thanks for sharing! Will keep this in mind for future projects.
Really thoughtful concept. Using GitHub Copilot CLI to build a tool that prevents blind AI usage is both practical and philosophical. Embedding reflection before commits is a smart way to strengthen decision-making, not just document it. I especially like the focus on capturing intent over time. This feels like a healthy evolution of AI-assisted development.
Thanks Sunil. I will test this tool further over the next weeks and assess how datapoints like intent evolve and what insights I can extract from them.
Great explanation of the use of AI! I am curious: have you tried Gemini as well?
Thanks Benjamin! I haven't tried Gemini yet, would you recommend it?
Yes, I would recommend Gemini :). I am going to work on a small project with Gemini 3 Flash sometime this week.
Ok will try it out then. Nice! Let me know how it goes and maybe share a post about your learnings here. Would be curious to learn more about it.
will do! I will do a project shortly
I’m a big advocate for AI governance, and I think you’ve hit on a critical point here: we risk over-investing time and energy into AI-generated output if we don’t pause to reflect on what we’re actually engaging with.
Without a 'human-in-the-loop' check like ReflectCLI, it’s too easy to prioritize speed over true understanding. This tool acts as a necessary 'cognitive speed bump,' ensuring that the governance happens at the source, the developer’s own intent. It turns a standard Git workflow into a deliberate practice of accountability. Great job on the execution!
Thanks Geets!