1) What “Laravel AI” actually is
Laravel split AI into three first-party pieces:
Laravel AI SDK (in-app AI features)
A first-party package you install in your Laravel app to build AI features with a unified API across providers: agents, tools, structured output, multimodal (images/audio), embeddings, reranking, files, vector stores, and failover.
Laravel Boost (better AI-assisted coding in your repo)
A dev dependency that installs an MCP server plus guidelines/skills so coding tools like Cursor, Claude Code, and other agents get accurate context about your Laravel app (routes, schema, config, logs, docs knowledge).
Laravel MCP (expose your app as an MCP server)
A package/pattern for building MCP servers in Laravel: define tools (actions), resources (read-only data), and prompts (templates), secured with Laravel middleware/OAuth/Sanctum patterns.
2) Laravel AI SDK: install + configure
Step 1 — Install the package
composer require laravel/ai
Step 2 — Publish config + migrations
php artisan vendor:publish --provider="Laravel\Ai\AiServiceProvider"
php artisan migrate
This creates the tables used for conversation storage: agent_conversations and agent_conversation_messages.
Step 3 — Add provider keys in .env
Laravel AI SDK supports multiple providers; you can configure them via config/ai.php or env vars. Common env keys include:
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
GEMINI_API_KEY=
MISTRAL_API_KEY=
OLLAMA_API_KEY=
XAI_API_KEY=
COHERE_API_KEY=
JINA_API_KEY=
VOYAGEAI_API_KEY=
ELEVENLABS_API_KEY=
Step 4 — (Optional) set custom base URLs
Useful if you route through LiteLLM / gateways / proxies:
'providers' => [
    'openai' => [
        'driver' => 'openai',
        'key' => env('OPENAI_API_KEY'),
        'url' => env('OPENAI_BASE_URL'),
    ],
],
Custom base URLs are supported for several providers (OpenAI/Anthropic/Gemini/Groq/Cohere/DeepSeek/xAI/OpenRouter).
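For example, routing the OpenAI driver through a self-hosted gateway might look like this in .env (the URL is just a placeholder):
OPENAI_BASE_URL=https://your-litellm-gateway.example.com/v1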
3) Your first Agent (the Laravel way)
Laravel AI SDK centers around an Agent class: instructions, tools, schema, context, and output live in one place.
Step 1 — Generate an agent class
php artisan make:agent SalesCoach
php artisan make:agent SalesCoach --structured   # scaffold the agent with structured output support
Step 2 — Implement instructions (your “system prompt”)
Example pattern from the docs:
<?php

namespace App\Ai\Agents;

use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Promptable;
use Stringable;

class SalesCoach implements Agent
{
    use Promptable;

    public function instructions(): Stringable|string
    {
        return 'You are a sales coach, analyzing transcripts and providing feedback.';
    }
}
Step 3 — Call the agent from a route/controller
use App\Ai\Agents\SalesCoach;

Route::post('/coach', function () {
    $response = (new SalesCoach)->prompt('Analyze this sales transcript...');

    return ['analysis' => (string) $response];
});
Structured output (strongly recommended)
Agents can also define a schema for structured output, which is the reliable way to get JSON you can validate, persist, or render (the --structured flag shown above generates the scaffold).
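As a rough illustration only, here is what a structured agent might look like; the schema() method name and array shape below are hypothetical, so check the stub that make:agent --structured generates for the SDK's actual contract:
<?php

namespace App\Ai\Agents;

use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Promptable;
use Stringable;

class SalesCoach implements Agent
{
    use Promptable;

    public function instructions(): Stringable|string
    {
        return 'You are a sales coach, analyzing transcripts and providing feedback.';
    }

    // Hypothetical method name: the --structured stub defines the real schema contract.
    public function schema(): array
    {
        return [
            'type' => 'object',
            'properties' => [
                'summary' => ['type' => 'string'],
                'score' => ['type' => 'integer'],
                'action_items' => ['type' => 'array', 'items' => ['type' => 'string']],
            ],
            'required' => ['summary', 'score', 'action_items'],
        ];
    }
}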
4) Streaming, queueing, and “real app” behavior
Streaming responses
If you’re building chat-like UX, stream tokens/events:
use Laravel\Ai\Responses\StreamedAgentResponse;

Route::get('/coach', function () {
    return (new SalesCoach)
        ->stream('Analyze this sales transcript...')
        ->then(function (StreamedAgentResponse $response) {
            // $response->text, $response->events, $response->usage...
        });
});
Queueing long jobs
use Illuminate\Http\Request;
use Laravel\Ai\Responses\AgentResponse;
use Throwable;

Route::post('/coach', function (Request $request) {
    return (new SalesCoach)
        ->queue($request->input('transcript'))
        ->then(function (AgentResponse $response) {
            // ...
        })
        ->catch(function (Throwable $e) {
            // ...
        });
});
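Queued agent runs are processed by your application's queue workers, so make sure one is running (or Horizon, if you use it):
php artisan queue:work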
5) Tools: let the agent “do things” (safely)
There are two big categories:
A) Your own Tools (custom PHP tools)
You can create a tool class that exposes an input schema plus a handler the agent can invoke (see the example on laravel.com/ai).
B) Provider Tools (web/file search, etc.)
The SDK ships provider-backed tools such as WebSearch, WebFetch, and FileSearch.
Example wiring tools into an agent:
use Laravel\Ai\Providers\Tools\WebSearch;
use Laravel\Ai\Providers\Tools\WebFetch;
use Laravel\Ai\Providers\Tools\FileSearch;

public function tools(): iterable
{
    return [
        new WebSearch,
        new WebFetch,
        new FileSearch(stores: ['store_id']),
    ];
}
FileSearch supports metadata filtering and multiple vector store IDs for RAG.
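For example, pointing FileSearch at more than one store (the store IDs below are placeholders):
use Laravel\Ai\Providers\Tools\FileSearch;

public function tools(): iterable
{
    return [
        new FileSearch(stores: ['store_products', 'store_support_docs']),
    ];
}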
6) RAG in Laravel AI SDK: embeddings + vector search
Laravel gives you two common RAG paths:
Option 1 — Similarity search over your own DB rows (pgvector)
You can store embeddings in a vector column and query them. Vector queries are currently supported on PostgreSQL with the pgvector extension.
Migration pattern:
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

Schema::ensureVectorExtensionExists();

Schema::create('documents', function (Blueprint $table) {
    $table->id();
    $table->string('title');
    $table->text('content');
    $table->vector('embedding', dimensions: 1536);
    $table->timestamps();
});
You can add an HNSW index with cosine distance via ->index().
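If you want to see the underlying idea (or query without the SDK's helpers), here is a minimal sketch using Laravel's query builder and pgvector's cosine-distance operator <=>; it assumes $embedding is already an array of floats for your search text:
use Illuminate\Support\Facades\DB;

// Build a pgvector literal from the embedding array (must match the column's dimensions).
$vector = '[' . implode(',', $embedding) . ']';

// Smaller cosine distance = more similar; grab the five closest documents.
$nearest = DB::table('documents')
    ->select('id', 'title', 'content')
    ->selectRaw('embedding <=> ?::vector as distance', [$vector])
    ->orderBy('distance')
    ->limit(5)
    ->get();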
Option 2 — Vector stores + FileSearch (provider-backed)
Vector stores let you upload documents, vectorize them, then let agents query them using FileSearch for RAG.
7) Multimodal: images, audio, transcription, reranking
Laravel AI SDK exposes “high-level” facades/classes:
Image generation / remixing
use Laravel\Ai\Image;
use Laravel\Ai\Files;

$image = Image::of('A donut sitting on the kitchen counter.')
    ->quality('high')
    ->landscape()
    ->timeout(120)
    ->generate();

// Remix an existing image
$image = Image::of('Update this photo to be in the style of an impressionist painting.')
    ->attachments([Files\Image::fromStorage('photo.jpg')])
    ->landscape()
    ->generate();
Audio (TTS) + transcription (STT)
use Laravel\Ai\Audio;
use Laravel\Ai\Transcription;
$audio = Audio::of('I love coding with Laravel.')->generate();
$transcript = Transcription::fromStorage('audio.mp3')->generate();
Reranking (very useful for search results)
use Laravel\Ai\Reranking;
$response = Reranking::of([
    'Django is a Python web framework.',
    'Laravel is a PHP web application framework.',
    'React is a JavaScript library for building user interfaces.',
])->rerank('PHP frameworks');

$best = $response->first()->document;
Provider availability differs per feature (text/images/embeddings/reranking/files/etc.).
8) Failover: keep your AI feature online
You can provide an array of providers/models so the SDK fails over automatically on rate limits/outages:
$response = (new SalesCoach)->prompt(
    'Analyze this sales transcript...',
    provider: ['openai', 'anthropic'],
);
This is one of the biggest “production-ish” wins: you avoid writing your own fallback glue.
9) Laravel Boost: make Cursor/Claude Code stop guessing
Boost is how you get higher-quality AI coding output because the agent can inspect your app (routes, schema, config, logs, artisan commands, etc.) through MCP tools and Laravel-specific guidelines.
Install Boost
composer require laravel/boost --dev
php artisan boost:install
The installer generates agent guidelines/skill files based on the coding agent/editor you pick.
What you get in practice
When you ask your coding agent to “add feature X”, it can use Boost’s capabilities (application introspection, DB tools, route inspection, artisan discovery, log analysis, browser logs, even Tinker integration), so it can propose changes grounded in your actual codebase rather than generic Laravel snippets.
10) Laravel MCP: turn your app into “tools” for AI clients
If Boost is “AI inside your editor”, Laravel MCP is “AI clients can call your app”. It’s designed around MCP servers, tools, resources, and prompts.
Typical flow:
- Create an MCP server class that lists its tools, resources, and prompts (see the Flightio example on laravel.com/ai/mcp).
- Register MCP routes (web or local server) and secure them with middleware; auth, Sanctum, or OAuth routes are the typical setup (a sketch follows below).
- Implement tools with input schemas and handlers so AI clients can execute safe actions.
This is the clean approach when you want something like: “AI client can create a ticket, fetch an itinerary, run a search, etc.” without giving the model raw DB access.
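As a rough sketch of the route-registration step (the server class and route path are hypothetical, and you should check the laravel/mcp docs for the exact facade and registration API), something like this would live in routes/ai.php:
use App\Mcp\Servers\FlightServer;
use Laravel\Mcp\Facades\Mcp;

// Hypothetical server class; protect the endpoint with your usual auth middleware.
Mcp::web('/mcp/flights', FlightServer::class)
    ->middleware('auth:sanctum');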
11) Where your product fits: Vibecoderplanner.com
Laravel AI gives you the building blocks, but the hard part in real life is keeping AI work structured: what’s the feature, what’s the spec, what are the acceptance criteria, which prompts do we run first, what’s the commit order, what’s the migration plan, and how do we avoid “AI spaghetti”?
That’s exactly where https://vibecoderplanner.com/ fits: it acts as the planning and execution map on top of Laravel AI.
A practical way to use both together:
- In VibeCoderPlanner, define the feature as a small backlog (example: “AI support chat with RAG”, “document ingest”, “rerank results”, “agent tool to create tickets”).
- For each task card, store the exact prompt you want Cursor/Claude Code to run (including “use Laravel AI SDK agents + tools + schema output”).
- Use Boost in the repo so the agent can read your real routes/models/migrations while executing those prompts.
- Implement the feature with Laravel AI SDK (agents, embeddings, vector stores, reranking, failover).
- If you want external AI clients to trigger actions safely, expose those actions via Laravel MCP tools/resources/prompts.
So: Laravel AI handles the capabilities, and VibeCoderPlanner keeps the build “vibe-coded” but still professional: scoped tasks, reproducible prompts, clean delivery.
Conclusion
Laravel AI is not just another wrapper around an API. It is a structured, production-ready foundation for building real AI features inside Laravel applications. With the AI SDK you can create agents, tools, structured outputs, streaming responses, embeddings, reranking, and multimodal workflows using a unified interface. With Boost, your coding assistant actually understands your application instead of hallucinating generic snippets. And with MCP, you can expose your Laravel app as a secure, AI-callable system built on proper middleware and authorization.
What makes this powerful is not just the technology stack, but the architecture discipline it encourages. Agents are classes. Tools are explicit. Schemas are defined. Vector search is integrated. Failover is supported. Everything fits naturally into Laravel’s service container, queues, events, and testing ecosystem. That means you can move from prototype to production without rewriting your entire approach.
However, AI development without structure quickly becomes chaotic. Prompts drift. Features grow without scope. Context gets lost between sessions. That is why pairing Laravel AI with a planning system like https://vibecoderplanner.com/ makes a significant difference. Instead of randomly prompting your coding agent, you define clear feature tasks, structured prompts, execution order, and acceptance criteria. Boost handles repository awareness. Laravel AI handles capability. VibeCoderPlanner handles direction.
The result is simple: you move faster without sacrificing architecture.
Laravel AI gives you the engine.
Boost gives you intelligence inside your codebase.
MCP gives you controlled extensibility.
VibeCoderPlanner gives you execution clarity.
If you are serious about building AI-powered Laravel applications in 2026 and beyond, this stack is not just experimental. It is strategic.