In the rapidly evolving landscape of AI agents, one of the biggest challenges is moving beyond simple chatbots to reliable, goal-oriented assistants. A critical component of this reliability is the Human-in-the-Loop (HITL) pattern. Today, we're exploring how to build a robust travel planning agent that plans, asks for approval, and then executes—all within a modern TypeScript/Next.js environment.
The Problem: Autonomous Agents are Scary
Fully autonomous agents can be unpredictable. When you ask an agent to "Book a trip to Paris," you probably don't want it to instantly charge $3,000 to your credit card without you seeing the itinerary first. You need a Review & Approve stage.
The Solution: LangGraph Interrupts
This project is inspired by the architectural patterns documented by MarkTechPost, where they demonstrated this flow in Python. We've taken that core idea and ported it to LangGraph.js, leveraging the power of Next.js to create a seamless, interactive user experience.
Architecture Overview
Our agent is defined as a state machine with explicit control flow. Here’s how the logic is structured:
- Planning Node: The LLM parses the user's natural language request into a structured JSON schema.
- Validation: A custom helper ensures the JSON is well-formed. This is especially important for local models (Ollama) which might struggle with perfect formatting.
- Interrupt (HITL): Using LangGraph's `interrupt()` function, the graph execution literally pauses. The browser UI picks up this state and presents the user with an editable JSON editor.
- Execution Node: Once the user clicks "Approve," the graph resumes, passing the (potentially edited) plan to the tool execution logic.
Technical Deep Dive
1. Robust Local LLM Support
One of the key enhancements we added during the build was deep support for Ollama. Local models like llama3.2 are powerful but can be finicky with JSON output. We implemented:
- Native JSON Mode: Utilizing Ollama's `format: "json"` configuration.
- Balanced-Brace Extraction: A robust parsing mechanism to strip away LLM "chatter" and extract the core JSON object.
- Type Fallbacks: If a model forgets a field, our `validatePlan` helper injects sensible defaults instead of crashing the workflow.
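The balanced-brace idea fits in a few lines of plain TypeScript. This is a sketch, not the project's actual code: the helper names and the plan's field shape are illustrative.

```typescript
// Extract the first balanced top-level JSON object from LLM output
// that may be wrapped in "chatter" (prose, markdown fences, etc.).
function extractJson(raw: string): string | null {
  const start = raw.indexOf("{");
  if (start === -1) return null;
  let depth = 0;
  let inString = false;
  for (let i = start; i < raw.length; i++) {
    const ch = raw[i];
    if (inString) {
      if (ch === "\\") i++; // skip the escaped character
      else if (ch === '"') inString = false;
    } else if (ch === '"') inString = true;
    else if (ch === "{") depth++;
    else if (ch === "}") {
      depth--;
      if (depth === 0) return raw.slice(start, i + 1);
    }
  }
  return null; // braces never balanced
}

// Inject sensible defaults for missing fields instead of throwing
// (hypothetical plan shape for illustration).
function validatePlan(raw: string): { destination: string; days: number } {
  const json = extractJson(raw);
  const parsed = json ? JSON.parse(json) : {};
  return {
    destination: parsed.destination ?? "unknown",
    days: parsed.days ?? 1,
  };
}
```

Tracking string state matters: a naive regex would trip over braces inside quoted values, while this scan only counts braces outside strings.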
2. State Management with LangGraph.js
LangGraph.js manages the agent's state across multiple turns. We use the MemorySaver checkpointer, which allows us to "hibernate" the agent's state while waiting for user input. This makes the application feel incredibly responsive and reliable.
```typescript
// Example of the main graph definition
import { StateGraph, MemorySaver, START, END } from "@langchain/langgraph";

const workflow = new StateGraph(StateAnnotation)
  .addNode("planning", make_llm_plan)
  .addNode("approve", wait_for_approval)
  .addNode("execute", execute_tools)
  .addEdge(START, "planning")
  .addEdge("planning", "approve")
  .addEdge("approve", "execute")
  .addEdge("execute", END);

// Compile with the checkpointer so the graph can pause at the
// interrupt and resume later under the same thread_id.
const graph = workflow.compile({ checkpointer: new MemorySaver() });
```
3. Modern Next.js Frontend
The UI is a sleek, dark-themed dashboard built with standard CSS and React. It features:
- Real-time Streaming feedback (simulated via graph states).
- JSON Input/Output synchronization.
- A Multi-Provider Settings Drawer for switching between OpenAI, Aisa.one, and local Ollama instances.
What's Next?
This project is an open template for the community. You can fork this today and:
- Connect it to Real Booking APIs (Amadeus, Skyscanner).
- Add a Clarification Loop where the agent asks questions before the first plan.
- Implement Budget Constraints that guard the tool execution.
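A budget constraint, for instance, can start as a simple predicate checked before the execute node runs. The `Plan` shape and routing helper below are hypothetical, just one way to wire a guard into a conditional edge:

```typescript
interface Plan {
  destination: string;
  estimatedCostUsd: number;
}

// Refuse to execute when the approved plan exceeds the user's budget.
function withinBudget(plan: Plan, maxBudgetUsd: number): boolean {
  return plan.estimatedCostUsd <= maxBudgetUsd;
}

// Usable as a conditional-edge router: proceed to "execute",
// or loop back to "planning" for a cheaper itinerary.
function routeAfterApproval(
  plan: Plan,
  maxBudgetUsd: number
): "execute" | "planning" {
  return withinBudget(plan, maxBudgetUsd) ? "execute" : "planning";
}
```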
Human-in-the-loop isn't just a safety feature; it's a UX requirement for the next generation of AI agents. By combining LangGraph.js with the speed of Next.js, we’ve created a blueprint for agents that are both powerful and trustworthy.
Special thanks to MarkTechPost for the original architectural inspiration.
