Bowen

I Built a Zero-Code LLM Fine-Tuning App for Mac — Here's How It Works

The Problem

Fine-tuning LLMs locally on a Mac usually means:

  • Wrestling with Python environments
  • Writing training scripts
  • Managing model weights manually
  • Figuring out how to actually use your fine-tuned model

I wanted something simpler.

The Solution: M-Courtyard

M-Courtyard is an open-source macOS desktop app that handles the entire
fine-tuning pipeline through a visual interface:

  1. Import your documents (txt, pdf, docx)
  2. Generate training data automatically using your local Ollama model
  3. Fine-tune with LoRA — watch the loss curve in real time (see the command-line sketch after this list)
  4. Test your model with built-in chat
  5. Export to Ollama with one click (Q4/Q8/F16 quantization)
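
To make these steps concrete, here is a rough command-line equivalent of steps 2-4 using mlx-lm directly. The model name, hyperparameters, and paths are illustrative rather than the app's exact invocation; M-Courtyard drives this machinery behind the UI:

  # Step 2's output: mlx-lm's LoRA trainer expects a data directory
  # holding train.jsonl / valid.jsonl, one example per line, e.g.
  #   {"text": "Question: ...\nAnswer: ..."}

  pip install mlx-lm

  # Step 3: LoRA fine-tune (loss is printed as training progresses)
  mlx_lm.lora --model mlx-community/Qwen3-0.6B-4bit \
    --train --data ./data --iters 600 --adapter-path ./adapters

  # Step 4: chat-style sanity check with the trained adapters
  mlx_lm.generate --model mlx-community/Qwen3-0.6B-4bit \
    --adapter-path ./adapters --prompt "Ask your model something"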

Everything Runs Locally

No cloud services. No API keys. No data leaves your Mac.

It's built on Apple's MLX framework, which takes full advantage of
Apple Silicon's unified memory architecture.
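
The Ollama side of the pipeline is just an HTTP server bound to localhost by default, so it's easy to confirm nothing is leaving the machine. For example, this standard Ollama REST endpoint lists the models in your local store:

  curl http://localhost:11434/api/tags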

Supported Models

Works with models from the mlx-community organization on Hugging Face:

  • Qwen 3 (0.6B to 14B)
  • DeepSeek R1
  • GLM 5
  • Llama 3
  • And many more
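
In practice these are plain Hugging Face repo ids; mlx-lm downloads the weights on first use and caches them locally. A quick smoke test (repo name illustrative):

  mlx_lm.generate --model mlx-community/Qwen3-0.6B-4bit --prompt "Hello"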

One-Click Export to Ollama

After fine-tuning, export your model directly to Ollama.
Then just run ollama run your-model-name.
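
Under the hood this is a fuse-then-import flow. A rough manual sketch, assuming mlx-lm's fuse command and Ollama's Modelfile import (names and the quantization tag are illustrative; the app handles any format conversion between the two tools for you):

  # Fold the LoRA adapters back into the base weights:
  mlx_lm.fuse --model mlx-community/Qwen3-0.6B-4bit \
    --adapter-path ./adapters --save-path ./fused-model

  # Register the fused weights with Ollama; --quantize selects the
  # Q4/Q8 variant, while F16 corresponds to an unquantized import:
  echo "FROM ./fused-model" > Modelfile
  ollama create your-model-name -f Modelfile --quantize q4_K_M

  ollama run your-model-name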

Tech Stack

  • Frontend: React + TypeScript
  • Backend: Rust (Tauri 2.x)
  • ML Engine: mlx-lm (Apple MLX)
  • Model Runtime: Ollama
  • License: AGPL 3.0

Try It Out

Requirements: macOS 14+, Apple Silicon (M1/M2/M3/M4), 16GB+ RAM recommended

Feedback and contributions welcome! ⭐
