The Problem
Fine-tuning LLMs locally on a Mac usually means:
- Wrestling with Python environments
- Writing training scripts
- Managing model weights manually
- Figuring out how to actually use your fine-tuned model
I wanted something simpler.
The Solution: M-Courtyard
M-Courtyard is an open-source macOS desktop app that handles the entire
fine-tuning pipeline through a visual interface:
- Import your documents (.txt, .pdf, .docx)
- Generate training data automatically with your local Ollama model (sketched below)
- Fine-tune with LoRA and watch the loss curve update in real time
- Test your model in the built-in chat
- Export to Ollama with one click (Q4/Q8/F16 quantization)
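Under the hood, the data-generation and training steps map onto standard tooling you could also drive by hand. The sketch below is not M-Courtyard's actual code, just a minimal illustration of the same pipeline: it assumes Ollama is serving a model on its default port (localhost:11434) and that mlx-lm's LoRA entry point is invoked with its usual `--model/--train/--data` flags; the model name `llama3` and paths like `data/train.jsonl` are placeholders.

```python
import json
import subprocess
from pathlib import Path

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def make_training_example(chunk: str, model: str = "llama3") -> dict:
    """Ask the local Ollama model to turn a document chunk into a Q&A pair."""
    prompt = (
        "Write one question this passage answers and its answer, "
        "as JSON with keys 'question' and 'answer'.\n\n" + chunk
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False, "format": "json"},
        timeout=120,
    )
    resp.raise_for_status()
    pair = json.loads(resp.json()["response"])
    # mlx-lm's LoRA trainer accepts JSONL lines with a single "text" field.
    return {"text": f"Q: {pair['question']}\nA: {pair['answer']}"}


def write_dataset(chunks: list[str], data_dir: Path) -> None:
    """Write train/valid JSONL splits in the directory layout mlx-lm expects."""
    data_dir.mkdir(parents=True, exist_ok=True)
    examples = [make_training_example(c) for c in chunks]
    split = max(1, len(examples) // 10)  # hold out ~10% for validation
    for name, rows in [("valid.jsonl", examples[:split]), ("train.jsonl", examples[split:])]:
        with open(data_dir / name, "w") as f:
            f.writelines(json.dumps(row) + "\n" for row in rows)


def finetune(model_repo: str, data_dir: Path) -> None:
    """Hand off to mlx-lm's LoRA trainer. It streams train/val loss to stdout,
    which is the kind of signal a UI can parse to draw a live loss curve."""
    subprocess.run(
        ["python", "-m", "mlx_lm.lora",
         "--model", model_repo,
         "--train",
         "--data", str(data_dir),
         "--iters", "600",
         "--adapter-path", "adapters"],
        check=True,
    )
```

The `{"text": ...}` JSONL layout is one of the dataset formats mlx-lm's trainer understands, which is why adapters trained this way stay usable outside the app.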
Everything Runs Locally
No cloud services. No API keys. No data leaves your Mac.
It's built on Apple's MLX framework, which takes full advantage of
Apple Silicon's unified memory architecture.
Supported Models
Works with models from the mlx-community organization on Hugging Face, including:
- Qwen 3 (0.6B to 14B)
- DeepSeek R1
- GLM 5
- Llama 3
- And many more
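If you want to sanity-check one of these base models outside the app, loading it with mlx-lm takes a few lines. The repo id below is just an example pick from the mlx-community org; substitute any model your RAM can hold.

```python
from mlx_lm import load, generate

# Example mlx-community repo id; any supported model loads the same way.
model, tokenizer = load("mlx-community/Qwen3-0.6B-4bit")

prompt = "Explain LoRA fine-tuning in one sentence."
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```

The first call downloads the weights from Hugging Face; after that, everything runs from the local cache.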
One-Click Export to Ollama
After fine-tuning, export your model directly to Ollama. Then just run `ollama run your-model-name`.
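If you ever need to replicate that export by hand, the manual equivalent is a two-step Ollama import: write a Modelfile pointing at the converted weights, then register it with `ollama create`. This sketch assumes you already have a fused, quantized GGUF file; the path and model name are placeholders.

```python
import subprocess
from pathlib import Path


def export_to_ollama(gguf_path: Path, name: str) -> None:
    """Register a local GGUF file with Ollama under the given model name."""
    modelfile = gguf_path.parent / "Modelfile"
    # Minimal Modelfile: FROM points Ollama at the local weights file.
    modelfile.write_text(f"FROM {gguf_path.name}\n")
    subprocess.run(
        ["ollama", "create", name, "-f", str(modelfile)],
        cwd=gguf_path.parent,  # resolve the relative FROM path
        check=True,
    )


export_to_ollama(Path("./export/model-q4.gguf"), "my-finetune")
# Afterwards: ollama run my-finetune
```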
Tech Stack
- Frontend: React + TypeScript
- Backend: Rust (Tauri 2.x)
- ML Engine: mlx-lm (Apple MLX)
- Model Runtime: Ollama
- License: AGPL 3.0
Try It Out
- GitHub: github.com/tuwenbo0120/m-courtyard
- Download: Latest Release (.dmg)
- Discord: Join the community
Requirements: macOS 14+, Apple Silicon (M1/M2/M3/M4), 16GB+ RAM recommended
Feedback and contributions welcome! ⭐