I've been building Drape, a mobile IDE for iOS that lets you code, preview, and ship apps entirely from your phone. The AI doesn't just suggest code - it's an autonomous agent that reads your project, writes files, runs commands, and iterates.
In this post I want to share how it works under the hood, the architectural decisions I made, and the challenges of building a real IDE on mobile.
What Drape Does
Before diving into the tech, here's what the user experience looks like:
- You create a new project (or clone from GitHub)
- You describe what you want: "A dashboard with revenue analytics and a chart"
- You pick a model (Gemini 3.0, GPT, Claude), and the AI reads the project context and starts coding
- Files appear in the file explorer, code is written, dependencies are installed
- A live preview shows your app running in real time
- You iterate: "Make the chart interactive" → AI edits the code → preview updates
All of this happens on your phone. No laptop, no desktop, no SSH into a remote machine.
┌─────────────────────────────┐
│ iOS App │
│ (React Native + Expo) │
│ │
│ ┌─────────┐ ┌───────────┐ │
│ │ AI Chat │ │ Terminal │ │
│ │ │ │ (xterm) │ │
│ ├─────────┤ ├───────────┤ │
│ │ File │ │ Live │ │
│ │ Manager │ │ Preview │ │
│ └────┬────┘ └─────┬─────┘ │
│ │ │ │
└───────┼─────────────┼───────┘
│ SSE │ WebView
▼ ▼
┌─────────────────────────────┐
│ Backend (Node.js) │
│ │
│ ┌──────────┐ ┌───────────┐│
│ │ Agent │ │ VM ││
│ │ Loop │ │ Manager ││
│ └────┬─────┘ └─────┬─────┘│
│ │ │ │
└───────┼──────────────┼──────┘
│ │
▼ ▼
┌──────────────┐ ┌───────────┐
│ AI Models │ │ Fly.io │
│ (Gemini/GPT/ │ │ Micro-VMs │
│ Claude) │ │ │
└──────────────┘ └───────────┘
The App Layer
Built with React Native + Expo using the new architecture (Fabric). State management is split across multiple Zustand stores:
- chatStore: Conversation history, message state
- tabStore: Terminal items per tab (each project gets tabs)
- uiStore: UI state (modals, panels, selections)
- workstationStore: Project/VM state
The app has four main views: AI Chat, Terminal, File Manager, and Live Preview. They all share state and update in real time.
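For a sense of what that looks like, here's a trimmed-down sketch of one of these stores in Zustand. The message shape and field names are illustrative, not Drape's actual types:

```typescript
import { create } from 'zustand';

// Hypothetical message shape; the real store tracks more per-message state.
interface ChatMessage {
  id: string;
  role: 'user' | 'assistant' | 'tool';
  content: string;
}

interface ChatState {
  messages: ChatMessage[];
  isStreaming: boolean;
  appendMessage: (msg: ChatMessage) => void;
  setStreaming: (streaming: boolean) => void;
}

// chatStore: conversation history and message state, shared across views.
export const useChatStore = create<ChatState>((set) => ({
  messages: [],
  isStreaming: false,
  appendMessage: (msg) =>
    set((state) => ({ messages: [...state.messages, msg] })),
  setStreaming: (isStreaming) => set({ isStreaming }),
}));
```

Any view (chat, terminal, file manager, preview) can subscribe to the slice it cares about, which is what keeps them in sync as the agent works.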
The Agent Loop
This is the core of the AI experience. It's not a simple request/response - it's an agentic loop.
User message
↓
Agent receives message + project context
↓
┌─→ Model decides next action ──┐
│ (edit_file, run_command, │
│ read_file, ask_user) │
│ ↓ │
│ Execute action on VM │
│ ↓ │
│ Observe result │
│ ↓ │
└── Need more actions? ←────────┘
↓ (done)
Final response to user
The agent can:
- Read files: Understands project structure and existing code
- Create/edit files: Writes new components, modifies existing ones
- Run terminal commands: npm install, git commit, build scripts
- Check preview: Verifies the app is running correctly
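Condensed into code, the backend loop looks roughly like this. It's a hedged sketch: `callModel`, `executeOnVM`, and `emit` are stand-ins for the real model, VM, and streaming clients, and the step cap is an assumption:

```typescript
type ToolCall =
  | { tool: 'read_file'; path: string }
  | { tool: 'edit_file'; path: string; content: string }
  | { tool: 'run_command'; command: string }
  | { tool: 'ask_user'; question: string };

type ModelStep =
  | { type: 'tool_call'; call: ToolCall }
  | { type: 'final'; text: string };

// Hypothetical signatures standing in for the real model/VM clients.
declare function callModel(history: unknown[]): Promise<ModelStep>;
declare function executeOnVM(call: ToolCall): Promise<string>;
declare function emit(event: object): void; // streamed to the app over SSE

async function runAgent(userMessage: string, projectContext: string) {
  const history: unknown[] = [
    { role: 'user', content: userMessage, projectContext },
  ];

  // Loop: let the model pick an action, run it on the VM, feed the result back.
  for (let step = 0; step < 50; step++) {      // hard cap to avoid runaway loops
    const next = await callModel(history);

    if (next.type === 'final') {
      emit({ type: 'text', text: next.text }); // final response to the user
      return;
    }

    emit({ type: 'tool_start', call: next.call });
    const observation = await executeOnVM(next.call);
    emit({ type: 'tool_complete', call: next.call });

    history.push({ role: 'tool', content: observation });
  }
}
```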
All events stream to the app via Server-Sent Events (SSE). The app processes them in real time - you see the AI "thinking", then files appearing, terminal commands running, and the preview updating.
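On the wire, each event is a plain SSE data frame. A minimal Node.js sketch of the server side (the endpoint and the example events are illustrative):

```typescript
import http from 'node:http';

// Minimal SSE endpoint: each agent event becomes one `data:` frame.
const server = http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });

  const send = (event: object) => {
    res.write(`data: ${JSON.stringify(event)}\n\n`);
  };

  // Illustrative only: the real agent pushes its own event stream here.
  send({ type: 'thinking' });
  send({ type: 'tool_start', tool: 'edit_file', path: 'src/App.tsx' });
  send({ type: 'tool_complete', tool: 'edit_file' });
  send({ type: 'text', text: 'Done - the dashboard component is ready.' });
  res.end();
});

server.listen(3000);
```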
Cloud VMs
Every project gets its own Fly.io micro-VM. This is a real Linux environment with:
- Node.js + npm
- Full filesystem
- Network access
- Process management
Why not WebAssembly or a browser sandbox? Because real projects need real environments. You can't run a Node.js server in WASM (well, not reliably). The VM approach adds some latency but gives you an authentic development experience.
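Provisioning is one API call per project. Here's a hedged sketch against the Fly.io Machines REST API; the app name, image, region, and sizing are placeholders rather than Drape's actual configuration:

```typescript
// Hedged sketch: create one micro-VM per project via the Fly Machines API.
// FLY_API_TOKEN, the app name, and the image are placeholders.
async function createProjectVM(projectId: string) {
  const res = await fetch(
    'https://api.machines.dev/v1/apps/drape-workspaces/machines',
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.FLY_API_TOKEN}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        name: `project-${projectId}`,
        region: 'iad',
        config: {
          image: 'registry.fly.io/drape-workspace:latest', // Node.js + toolchain baked in
          guest: { cpu_kind: 'shared', cpus: 1, memory_mb: 512 },
        },
      }),
    },
  );
  return res.json(); // machine id, state, addresses, etc.
}
```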
Challenges I Faced
1. SSE Streaming + React State
The AI agent generates a stream of events: thinking, text, tool_start, tool_input, tool_complete, etc. Processing these correctly while keeping React state consistent was the hardest part.
Key lesson: Never merge events in-place. Always append as new events. If you mutate an existing array element, React refs that track "last processed index" won't detect the change because the array length didn't change.
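Concretely, the store only ever appends to the events array, and consumers track how far they've processed with a ref. A simplified sketch of the pattern (store and hook names are illustrative):

```typescript
import { create } from 'zustand';
import { useEffect, useRef } from 'react';

interface AgentEvent { type: string; [key: string]: unknown }

interface EventState {
  events: AgentEvent[];
  pushEvent: (e: AgentEvent) => void;
}

// Append-only: never mutate an existing element, always add a new one.
const useEventStore = create<EventState>((set) => ({
  events: [],
  pushEvent: (e) => set((s) => ({ events: [...s.events, e] })),
}));

function useAgentEvents(onEvent: (e: AgentEvent) => void) {
  const events = useEventStore((s) => s.events);
  const lastProcessed = useRef(0); // index of the next unprocessed event

  useEffect(() => {
    // Because events are append-only, new work is exactly the tail slice.
    for (let i = lastProcessed.current; i < events.length; i++) {
      onEvent(events[i]);
    }
    lastProcessed.current = events.length;
  }, [events, onEvent]);
}
```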
2. Fabric View Management
React Native's new architecture (Fabric) tracks native subview indices precisely. If you try to insert a native overlay view into a Fabric-managed view hierarchy, it shifts indices and causes crashes:
Attempt to unmount a view which has a different index
Solution: Add overlays to UIWindow directly (outside the Fabric tree) and track position with CADisplayLink.
3. Multi-Model AI Support
Each model (Gemini, GPT, Claude) has different APIs, capabilities, and quirks:
- Different message formats
- Different tool calling conventions
- Different streaming formats
- Different context window sizes
The backend abstracts these differences so the frontend just sees a unified event stream.
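One way to picture that abstraction: each provider implements the same adapter contract and yields one normalized event type. This is an illustrative sketch, not Drape's actual interfaces:

```typescript
// The one event shape the frontend ever sees, regardless of provider.
type AgentEvent =
  | { type: 'thinking'; text: string }
  | { type: 'text'; text: string }
  | { type: 'tool_start'; tool: string; input: unknown }
  | { type: 'tool_complete'; tool: string; output: unknown };

interface UnifiedMessage { role: 'user' | 'assistant' | 'tool'; content: string }
interface ToolSpec { name: string; description: string; schema: object }

// Each provider implements the same contract and hides its own quirks:
// message format, tool calling convention, streaming format, context limits.
interface ModelAdapter {
  name: 'gemini' | 'gpt' | 'claude';
  maxContextTokens: number;
  // Translate the unified history into the provider's format, stream the
  // response, and yield normalized events.
  stream(messages: UnifiedMessage[], tools: ToolSpec[]): AsyncIterable<AgentEvent>;
}
```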
4. Mobile UX for Development
Coding on a phone sounds terrible. Making it feel good required:
- Smart defaults (AI does the heavy lifting, you guide)
- Split views that actually work on a phone screen
- Quick actions instead of typing everything
- File tree that's easy to navigate with touch

[Screenshot: Full workspace with file explorer]
What's Next
- Android version
- Collaborative editing
- More AI models
- Plugin system
Try It
Drape is free to start. Download it and build something.
drape.info
I'd love to hear from anyone who's tried mobile development before, or anyone building AI-powered dev tools. What features would make you switch from your laptop?
Follow me for more posts about building dev tools, AI agents, and mobile development.
