Micheal Angelo

From Manual Chaos to Workflow Engineering: Automating LeetCode with AI

Automating the LeetCode Workflow with Mistral (via Ollama)

Daily LeetCode practice is simple in theory:

  • Solve one problem
  • Push it to GitHub
  • Write a clean explanation
  • Stay consistent

In reality, the friction builds up.

The actual algorithm might take 20–30 minutes.

Formatting files, updating the README, sorting entries, writing explanations, and committing properly all take additional effort.

That repetitive overhead becomes the bottleneck.

To address this, a CLI-based automation tool was built to handle the entire workflow.

🔗 Repository:

https://github.com/micheal000010000-hub/LEETCODE-AUTOSYNC

The goal:

Automate the predictable. Focus on solving.


The Core Problem

Maintaining a structured LeetCode repository usually involves:

  • Manually creating solution files
  • Adding standardized headers
  • Updating README under the correct difficulty
  • Sorting entries numerically
  • Avoiding duplicate entries
  • Writing structured markdown explanations
  • Committing and pushing consistently

None of these improve algorithmic skill.

They are mechanical tasks — and mechanical tasks should be automated.


The Workflow

Running:

```
python autosync.py
```

Provides two options:

```
1 → Add new solution locally + Generate AI solution post
2 → Push existing changes to GitHub
```

Option 1 — Add Solution + Generate AI Explanation

You provide:

  • Problem number
  • Problem name
  • Difficulty
  • Problem link
  • Python solution

The tool then executes four steps.


1️⃣ Structured File Creation

Solution files are automatically placed inside:

```
easy/
medium/
hard/
```

With standardized headers:

"""
LeetCode 506_Relative Ranks
Difficulty: Easy
Link: https://leetcode.com/...
"""
Enter fullscreen mode Exit fullscreen mode
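A minimal sketch of what this step can look like (the helper name and file-naming scheme here are illustrative, not taken from the repository's `repo_manager.py`):

```python
from pathlib import Path

def create_solution_file(number: int, name: str, difficulty: str,
                         link: str, code: str, root: Path = Path(".")) -> Path:
    """Write a solution file with a standardized header into the
    difficulty folder (easy/, medium/, hard/). Hypothetical helper."""
    folder = root / difficulty.lower()
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{number}_{name.replace(' ', '_')}.py"
    header = (
        f'"""\nLeetCode {number}_{name}\n'
        f"Difficulty: {difficulty}\nLink: {link}\n"
        '"""\n\n'
    )
    path.write_text(header + code, encoding="utf-8")
    return path
```

The header format mirrors the example above, so every file in the repository stays uniform.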

2️⃣ Automatic README Update

The tool:

  • Inserts the new entry under the correct difficulty section
  • Keeps entries numerically sorted
  • Prevents duplicates

Manual README edits are eliminated.
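A minimal sketch of the sorted, duplicate-safe insertion (the function name and README line format are assumptions for illustration):

```python
def insert_entry(lines: list[str], entry: str, number: int) -> list[str]:
    """Insert '<number>. <entry>' into a numerically sorted list of
    README lines, skipping duplicates. Hypothetical helper."""
    new_line = f"{number}. {entry}"
    if new_line in lines:
        return lines                      # prevent duplicate entries
    out = list(lines)
    for i, line in enumerate(out):
        try:
            existing = int(line.split(".", 1)[0])
        except ValueError:
            continue                      # skip headings and blank lines
        if existing > number:
            out.insert(i, new_line)       # first entry with a larger number
            return out
    out.append(new_line)
    return out
```

Keeping this logic deterministic (rather than asking the LLM to edit the README) is what makes the update safe to run repeatedly.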


3️⃣ LLM-Generated Structured Explanation (Mistral)

Instead of relying on hosted APIs with rate limits, the project now supports:

  • Local Mistral via Ollama
  • Hosted Mistral API endpoints

Example model:

```
mistralai/mistral-7b
```

The model generates:

  • A descriptive solution title
  • ## Intuition
  • ## Approach
  • ## Time Complexity
  • ## Space Complexity
  • Properly formatted python3 code block

The generated markdown can be directly pasted into LeetCode’s “Solutions” section.

Running locally via Ollama removes:

  • API rate limits
  • External dependency concerns
  • Cloud inference latency
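A minimal sketch of a non-streaming call to Ollama's `/api/generate` endpoint (function names are illustrative; the payload fields `model`, `prompt`, `stream` and the `response` key match Ollama's documented JSON API):

```python
import json
import urllib.request

def build_request(model: str, prompt: str,
                  url: str = "http://localhost:11434") -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{url}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def generate(model: str, prompt: str) -> str:
    """Send the request and return the generated text.
    Requires a running Ollama server on localhost:11434."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]
```

With `stream` set to `False`, the server returns one JSON object instead of a token stream, which keeps the client logic trivial.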

4️⃣ Clean Output Handling

Generated markdown is stored in:

```
copy_paste_solution/
```

This folder:

  • Is cleared before each run
  • Always contains one fresh solution
  • Is excluded from Git tracking

The main repository remains clean.
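A sketch of how such an output folder can be reset on each run (the helper name is hypothetical; keeping the folder out of Git is handled separately via `.gitignore`):

```python
import shutil
from pathlib import Path

def prepare_output_dir(path: Path) -> Path:
    """Wipe and recreate the output folder so it always holds
    exactly one fresh solution."""
    if path.exists():
        shutil.rmtree(path)               # clear previous run's output
    path.mkdir(parents=True)
    return path
```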


Option 2 — Git Automation

The CLI runs:

```
git add .
git commit -m "commit_DD_MM_YYYY"
git push -f
```

The commit message is generated automatically from the current date.


Running Mistral Locally with Ollama

The project supports local inference via ollama, which exposes an HTTP API (default: http://localhost:11434).

Typical setup:

  1. Install Ollama
  2. Pull a Mistral model:

```
ollama pull mistralai/mistral-7b
```

  3. Configure `.env`:

```
LEETCODE_REPO_PATH=ABSOLUTE_PATH
OLLAMA_URL=http://localhost:11434
MISTRAL_MODEL=mistralai/mistral-7b
```

  4. Ensure llm_generator.py sends requests to:

```
/api/generate
```

This enables fully local AI-assisted explanation generation.


Architecture Overview

```
autosync.py          # CLI entry point
repo_manager.py      # File creation + README updates
git_manager.py       # Git automation
llm_generator.py     # Mistral integration (Ollama or hosted)
config.py            # Environment handling
```

Clear separation of concerns.

  • Deterministic file logic stays in code.
  • Explanation generation is delegated to the LLM.
  • Git operations remain isolated.

Why Mistral?

  • Strong technical explanation capabilities
  • Efficient local inference
  • Open model ecosystem
  • No external rate limits when using Ollama
  • Flexible hosted or local deployment

It allows full control over the workflow.


Why This Matters

This project is not just about LeetCode.

It demonstrates:

  • Workflow engineering
  • LLM integration into real developer tooling
  • Local AI deployment via Ollama
  • Clean automation architecture
  • Reducing cognitive overhead

Consistency becomes easier when friction is removed.


Who This May Help

  • Students practicing daily
  • Developers maintaining public GitHub consistency
  • Anyone exploring local LLM deployment
  • Anyone tired of repetitive markdown formatting

Future Improvements

Possible extensions:

  • Auto-copy markdown to clipboard
  • Auto-open LeetCode submission page
  • Add statistics dashboard
  • Add model selection CLI flag
  • Add logging and structured error handling

Contributions are welcome.


Final Thought

Consistency is not about discipline alone.

It is about removing friction from the system.

Automate the boring.

Solve the hard.

Stay consistent.
