In the first post of this series, we called the OpenAI and Claude APIs directly
using Laravel's HTTP client: raw API calls, custom service classes, manual headers.
That approach works well and teaches you exactly what happens under the hood.
But Laravel being Laravel, the framework now has an official, first-party answer to
AI integration: the Laravel AI SDK. Released in early 2026, it brings the same
clean, expressive syntax you expect from Laravel — but for AI providers.
Instead of writing HTTP requests and parsing JSON responses yourself, you create
dedicated agent classes, run Artisan commands, and call ->prompt(). The SDK handles
OpenAI, Anthropic (Claude), Gemini, and more through one unified interface.
In this guide we'll install the SDK, understand how agents work, build some real
features, and cover why the SDK is a better fit than raw HTTP calls for most projects.
The Laravel AI SDK (laravel/ai) is an official first-party package that gives you a
consistent way to work with multiple AI providers without caring about their individual
API differences.
Key things it handles for you:
- Building the HTTP requests, headers, and JSON parsing for each provider
- One unified interface across OpenAI, Anthropic (Claude), Gemini, and more
- Storing and retrieving conversation history in your database
- Queueing slow calls so your responses stay fast
- Faking agents in tests so your suite never hits a real API
Install the package via Composer:
composer require laravel/ai
Publish the config and migration files:
php artisan vendor:publish --provider="Laravel\Ai\AiServiceProvider"
Run the migrations. The SDK needs two tables to store conversation history:
php artisan migrate
This creates agent_conversations and agent_conversation_messages tables automatically.
Add whichever providers you plan to use in your .env file:
OPENAI_API_KEY=sk-your-openai-key-here
ANTHROPIC_API_KEY=sk-ant-your-claude-key-here
GEMINI_API_KEY=your-gemini-key-here
The SDK reads these automatically through the published config/ai.php file. You don't
need to wire them up manually like we did in part one — the package takes care of it.
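For reference, the published config file maps those keys to providers. The exact contents depend on the SDK version, so treat this as an illustrative sketch rather than the real file (the key names 'default' and 'providers' are assumptions):

```php
<?php

// config/ai.php -- illustrative sketch only. The actual published file
// may differ; the key names below are assumptions, not the SDK's real shape.
return [
    // Which provider agents use when none is set via attribute
    'default' => env('AI_PROVIDER', 'openai'),

    'providers' => [
        'openai' => ['key' => env('OPENAI_API_KEY')],
        'anthropic' => ['key' => env('ANTHROPIC_API_KEY')],
        'gemini' => ['key' => env('GEMINI_API_KEY')],
    ],
];
```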
The core concept in the Laravel AI SDK is the Agent. Instead of writing AI logic
scattered across controllers and service classes, you create a dedicated PHP class
for each AI task in your app.
Think of an agent as a specialized assistant you configure once (its instructions,
provider, model, and tools) and then prompt from anywhere in your app.
Generate your first agent with Artisan:
php artisan make:agent SupportBot
This creates app/Ai/Agents/SupportBot.php. Here's what a basic agent looks like:
// app/Ai/Agents/SupportBot.php
namespace App\Ai\Agents;
use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Promptable;
use Stringable;
class SupportBot implements Agent
{
    use Promptable;

    /**
     * Instructions the AI should follow.
     */
    public function instructions(): Stringable|string
    {
        return 'You are a friendly customer support assistant for AcmeCorp.
            Only answer questions about our products and services.
            Be concise and helpful. If you cannot help, say so politely.';
    }
}
Prompt it anywhere in your app:
$response = (new SupportBot)->prompt('How do I reset my password?');
return (string) $response;
That's it. No HTTP clients, no headers, no JSON parsing. The Promptable trait handles all of it.
You can control which provider and model an agent uses through PHP attributes placed directly on the class:
use Laravel\Ai\Attributes\MaxTokens;
use Laravel\Ai\Attributes\Model;
use Laravel\Ai\Attributes\Provider;
use Laravel\Ai\Attributes\Temperature;
use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Enums\Lab;
use Laravel\Ai\Promptable;
#[Provider(Lab::Anthropic)]
#[Model('claude-haiku-4-5-20251001')]
#[MaxTokens(500)]
#[Temperature(0.7)]
class SupportBot implements Agent
{
    use Promptable;

    public function instructions(): string
    {
        return 'You are a helpful support assistant.';
    }
}
The available attributes:
- #[Provider(...)]: which provider to use (Lab::OpenAI, Lab::Anthropic, Lab::Gemini, etc.)
- #[Model(...)]: the specific model name
- #[MaxTokens(...)]: the maximum response length
- #[Temperature(...)]: creativity, from 0.0 to 1.0

If you don't set a provider, the SDK uses the default configured in config/ai.php.
Let's build features you'd actually ship.
A simple agent that summarizes any article or document a user pastes in:
// app/Ai/Agents/Summarizer.php
namespace App\Ai\Agents;
use Laravel\Ai\Attributes\MaxTokens;
use Laravel\Ai\Attributes\Provider;
use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Enums\Lab;
use Laravel\Ai\Promptable;
#[Provider(Lab::Anthropic)]
#[MaxTokens(300)]
class Summarizer implements Agent
{
    use Promptable;

    public function instructions(): string
    {
        return 'You are a summarization assistant. When given text, summarize it
            in 3 clear bullet points. Be concise and focus on key takeaways.';
    }
}
Use it in a controller:
public function summarize(Request $request)
{
    $request->validate(['text' => 'required|string|max:10000']);

    $summary = (new Summarizer)->prompt($request->text);

    return response()->json(['summary' => (string) $summary]);
}
Instead of getting back a raw string, you can ask the SDK to return structured data
that matches a schema you define. No more parsing messy text responses.
Generate a structured agent:
php artisan make:agent TicketClassifier --structured
// app/Ai/Agents/TicketClassifier.php
namespace App\Ai\Agents;
use Illuminate\Contracts\JsonSchema\JsonSchema;
use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Contracts\HasStructuredOutput;
use Laravel\Ai\Promptable;
class TicketClassifier implements Agent, HasStructuredOutput
{
    use Promptable;

    public function instructions(): string
    {
        return 'You are a support ticket classifier. Given a ticket message,
            determine its category and priority level.';
    }

    public function schema(JsonSchema $schema): array
    {
        return [
            'category' => $schema->string()
                ->enum(['billing', 'technical', 'account', 'general'])
                ->required(),
            'priority' => $schema->string()
                ->enum(['low', 'medium', 'high'])
                ->required(),
            'summary' => $schema->string()->required(),
        ];
    }
}
Access the result like an array — no string parsing required:
$result = (new TicketClassifier)->prompt($ticket->body);
$ticket->update([
    'category' => $result['category'],
    'priority' => $result['priority'],
    'summary' => $result['summary'],
]);
For a chatbot that remembers previous messages, use the RemembersConversations
trait. The SDK stores and retrieves conversation history from the database automatically
— no manual message management needed.
// app/Ai/Agents/ChatAssistant.php
namespace App\Ai\Agents;
use Laravel\Ai\Concerns\RemembersConversations;
use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Contracts\Conversational;
use Laravel\Ai\Promptable;
class ChatAssistant implements Agent, Conversational
{
    use Promptable, RemembersConversations;

    public function instructions(): string
    {
        return 'You are a helpful assistant. Be friendly and concise.';
    }
}
Start a conversation for a user and get back a conversation ID to continue it later:
// Start a new conversation
$response = (new ChatAssistant)->forUser($user)->prompt('Hello!');
$conversationId = $response->conversationId;
// Continue the same conversation later
$response = (new ChatAssistant)
    ->continue($conversationId, as: $user)
    ->prompt('Can you remind me what we talked about?');
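Under the hood, conversation memory boils down to storing each exchange and replaying it on the next prompt. Here is a dependency-free sketch of that idea; the class and method names are illustrative, not the SDK's actual internals:

```php
<?php

// Dependency-free sketch of what conversation memory does conceptually:
// persist each turn, then send the stored history along with the next
// prompt so the model can "remember" earlier messages. Illustrative only;
// the SDK's real internals and storage differ.
class ConversationStore
{
    /** @var array<int, array{role: string, content: string}> */
    private array $messages = [];

    public function remember(string $role, string $content): void
    {
        $this->messages[] = ['role' => $role, 'content' => $content];
    }

    /** Everything the model will see on the next turn. */
    public function history(): array
    {
        return $this->messages;
    }
}

$store = new ConversationStore();
$store->remember('user', 'Hello!');
$store->remember('assistant', 'Hi! How can I help?');
$store->remember('user', 'Can you remind me what we talked about?');

// The whole history (3 messages) is replayed on the next model call.
echo count($store->history());
```

In the real SDK, that history lives in the agent_conversations and agent_conversation_messages tables, keyed by the conversation ID you get back from the first prompt.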
AI API calls can take several seconds. For anything non-interactive — like processing
a batch of uploaded documents — queue the work so your HTTP response stays fast:
Route::post('/documents/analyze', function (Request $request) {
    $document = Document::create(['path' => $request->file('doc')->store('documents')]);

    (new Summarizer)
        ->queue("Summarize the document with ID {$document->id}: {$document->content}")
        ->then(function ($response) use ($document) {
            $document->update(['summary' => (string) $response]);
        })
        ->catch(function (\Throwable $e) use ($document) {
            logger()->error('Summarization failed', [
                'document_id' => $document->id,
                'error' => $e->getMessage(),
            ]);
        });

    return response()->json(['message' => 'Document queued for analysis.']);
});
Pass an array of providers and the SDK will automatically switch to the next one if
the first is unavailable or rate-limited:
use Laravel\Ai\Enums\Lab;
$response = (new SupportBot)->prompt(
    'Help me understand my invoice.',
    provider: [Lab::OpenAI, Lab::Anthropic],
);
If OpenAI is down or returns a rate limit error, the request automatically retries
with Anthropic. No fallback logic, no try/catch, no extra code on your end.
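Conceptually, that failover is just "try each provider in order, move on when one throws". A simplified, dependency-free sketch (callables stand in for real providers; all names are illustrative):

```php
<?php

// Simplified sketch of provider failover: try each provider in order and
// fall through to the next when one fails. Illustrative only; the SDK's
// real rules (which errors trigger failover, retry timing) will differ.
function promptWithFailover(array $providers, string $message): string
{
    $lastError = null;

    foreach ($providers as $provider) {
        try {
            return $provider($message);
        } catch (RuntimeException $e) {
            $lastError = $e; // down or rate-limited: try the next provider
        }
    }

    throw $lastError ?? new RuntimeException('No providers configured.');
}

// First "provider" is rate-limited, so the second one answers.
$down = fn (string $m) => throw new RuntimeException('429 Too Many Requests');
$up = fn (string $m) => "Answer to: {$m}";

echo promptWithFailover([$down, $up], 'Help me understand my invoice.');
```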
Tools let an agent call your application code while generating a response. For example,
an agent that can look up a user's order status in your database:
php artisan make:tool GetOrderStatus
// app/Ai/Tools/GetOrderStatus.php
namespace App\Ai\Tools;
use App\Models\Order;
use Illuminate\Contracts\JsonSchema\JsonSchema;
use Laravel\Ai\Contracts\Tool;
use Laravel\Ai\Tools\Request;
use Stringable;
class GetOrderStatus implements Tool
{
    public function description(): Stringable|string
    {
        return 'Look up the status of an order by order number.';
    }

    public function handle(Request $request): Stringable|string
    {
        $order = Order::where('number', $request['order_number'])->first();

        if (! $order) {
            return 'Order not found.';
        }

        return "Order {$order->number}: {$order->status}. Expected delivery: {$order->delivery_date}.";
    }

    public function schema(JsonSchema $schema): array
    {
        return [
            'order_number' => $schema->string()->required(),
        ];
    }
}
Register the tool in your agent's tools() method:
use App\Ai\Tools\GetOrderStatus;
use Laravel\Ai\Contracts\HasTools;
class SupportBot implements Agent, HasTools
{
    use Promptable;

    public function instructions(): string
    {
        return 'You are a support assistant. Use the GetOrderStatus tool
            when a customer asks about their order.';
    }

    public function tools(): iterable
    {
        return [
            new GetOrderStatus,
        ];
    }
}
When a user asks "What's the status of order #1234?", the agent automatically calls
your tool, gets the real data from the database, and includes it in the response.
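What the SDK automates there is a tool-call loop: the model replies with a tool request instead of text, your code runs the tool, and the result goes back to the model for the final answer. A dependency-free sketch of that round trip (the fake model and message shapes are illustrative, not the SDK's wire format):

```php
<?php

// Dependency-free sketch of the tool-call loop the SDK runs for you.
// Shapes and names here are illustrative, not the SDK's actual protocol.

// Tools: plain callables keyed by name.
$tools = [
    'get_order_status' => fn (array $args) => "Order {$args['order_number']}: shipped.",
];

// A fake "model": the first turn requests a tool, the second turn answers
// using the tool result it was handed back.
$model = function (array $messages): array {
    $last = end($messages);

    if ($last['role'] === 'tool') {
        return ['type' => 'text', 'content' => "Good news! {$last['content']}"];
    }

    return [
        'type' => 'tool_call',
        'tool' => 'get_order_status',
        'args' => ['order_number' => '1234'],
    ];
};

function runAgent(callable $model, array $tools, string $prompt): string
{
    $messages = [['role' => 'user', 'content' => $prompt]];

    while (true) {
        $reply = $model($messages);

        if ($reply['type'] === 'text') {
            return $reply['content']; // final answer for the user
        }

        // The model asked for a tool: run it, feed the result back in.
        $result = $tools[$reply['tool']]($reply['args']);
        $messages[] = ['role' => 'tool', 'content' => $result];
    }
}

echo runAgent($model, $tools, "What's the status of order #1234?");
```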
One of the biggest advantages of using the SDK is testability. You never want real
API calls hitting OpenAI or Anthropic in your test suite — it's slow, costs money,
and is unreliable.
The SDK makes faking agents straightforward:
use App\Ai\Agents\TicketClassifier;
it('classifies tickets correctly', function () {
    // Fake the agent — no real API call is made
    TicketClassifier::fake([
        [
            'category' => 'billing',
            'priority' => 'high',
            'summary' => 'Customer cannot process payment.',
        ],
    ]);

    $result = (new TicketClassifier)->prompt('My credit card keeps getting declined');

    expect($result['category'])->toBe('billing');
    expect($result['priority'])->toBe('high');

    // Assert the agent was actually called
    TicketClassifier::assertPrompted('My credit card keeps getting declined');
});
For tests where you want to make sure no unexpected agent calls happen:
TicketClassifier::fake()->preventStrayPrompts();
// Now any unmatched prompt will throw an exception
If you don't configure a default provider in config/ai.php and don't set one via
attribute, the SDK might not know which provider to use.
Bad:
// No provider set anywhere, and config/ai.php still has defaults pointing to nothing
class SupportBot implements Agent
{
    use Promptable;
}
Good:
#[Provider(Lab::Anthropic)]
class SupportBot implements Agent
{
    use Promptable;
}
Or set a global default in config/ai.php so all agents use it unless overridden.
The whole point of agents is to keep AI logic organized. Don't bypass the pattern.
Bad:
// Messy, untestable, hard to reuse
public function summarize(Request $request)
{
    $response = agent(instructions: 'Summarize this.')->prompt($request->text);

    return (string) $response;
}
Good:
// Clean, testable, reusable
public function summarize(Request $request)
{
    return (string) (new Summarizer)->prompt($request->text);
}
Calls to AI providers can take 5–30 seconds for large inputs. Running them synchronously
blocks your server and frustrates users.
Bad:
// Blocks the request for potentially 20+ seconds
$response = (new DocumentAnalyzer)->prompt($longDocument);
Good:
// Non-blocking, processed in the background
(new DocumentAnalyzer)
    ->queue($longDocument)
    ->then(fn ($response) => $this->saveResult($response));
If you don't fake your agents in tests, every test run makes real API calls.
Bad:
it('saves the summary', function () {
    // This hits the real OpenAI API every time 🚫
    $result = (new Summarizer)->prompt('Some long article...');

    expect($result)->not->toBeEmpty();
});
Good:
it('saves the summary', function () {
    Summarizer::fake(['Here are 3 key takeaways...']);

    $result = (new Summarizer)->prompt('Some long article...');

    expect((string) $result)->toBe('Here are 3 key takeaways...');
});
The SDK supports quick anonymous agents with the agent() helper. These are fine for
one-off scripts or quick prototyping, but don't use them for features you'll reuse
or test — a named class is always easier to maintain.
OK for quick scripts:
use function Laravel\Ai\{agent};
$response = agent(instructions: 'You are a helpful assistant.')
    ->prompt('What is Laravel?');
Better for real features:
// Dedicated class: testable, configurable, reusable
(new SupportBot)->prompt('What is Laravel?');
# Create a basic agent
php artisan make:agent AgentName
# Create an agent with structured output
php artisan make:agent AgentName --structured
# Create a tool
php artisan make:tool ToolName
# Create agent middleware
php artisan make:agent-middleware MiddlewareName
// Basic prompt
$response = (new MyAgent)->prompt('Your message here');
return (string) $response;
// Override provider at runtime
$response = (new MyAgent)->prompt('Message', provider: Lab::Anthropic);
// Failover between providers
$response = (new MyAgent)->prompt('Message', provider: [Lab::OpenAI, Lab::Anthropic]);
// Queue the prompt
(new MyAgent)->queue('Message')->then(fn ($r) => ...)->catch(fn ($e) => ...);
#[Provider(Lab::Anthropic)] // Set provider
#[Model('claude-haiku-4-5-20251001')] // Set model
#[MaxTokens(500)] // Limit response length
#[Temperature(0.7)] // Set creativity (0.0–1.0)
#[MaxSteps(10)] // Max tool-use iterations
#[Timeout(120)] // HTTP timeout in seconds
#[UseCheapestModel] // Auto-select cheapest model
#[UseSmartestModel] // Auto-select most capable model
Agent // Required. Base interface for all agents
Conversational // Agent has conversation history
HasStructuredOutput // Agent returns typed schema instead of plain text
HasTools // Agent can call tools
HasMiddleware // Agent has middleware pipeline
HasProviderOptions // Agent passes provider-specific options
// Fake with fixed response
MyAgent::fake(['Response text']);
// Fake with dynamic response
MyAgent::fake(fn ($prompt) => 'Response to: ' . $prompt->prompt);
// Prevent real API calls from slipping through
MyAgent::fake()->preventStrayPrompts();
// Assertions
MyAgent::assertPrompted('Expected text');
MyAgent::assertNeverPrompted();
For most Laravel projects started today, the SDK is the right default.
The Laravel AI SDK brings AI into Laravel properly — not as a bolted-on HTTP call,
but as a first-class feature with agents, Artisan commands, queues, testing helpers,
and a consistent interface across providers.
Key takeaways:
- Install with composer require laravel/ai, publish the config, and run the migrations
- Generate agent classes with php artisan make:agent
- Use PHP attributes (#[Provider], #[Model], #[MaxTokens]) to configure each agent
- Implement HasStructuredOutput to get typed schema responses instead of raw strings
- Use RemembersConversations to automatically persist and load chat history
- Queue slow calls with ->queue() to keep your app responsive
- Fake agents in tests with MyAgent::fake() so you never hit a real API in your test suite

If you haven't read part one of this series, it covers how to call the OpenAI and
Claude APIs directly using Laravel's HTTP client — a good foundation for understanding
what the SDK is doing for you behind the scenes.