You've been hearing about AI everywhere. Your boss wants a "smart" feature in the app.
Your clients are asking why their software doesn't have a chatbot yet. You open the
OpenAI or Anthropic docs and suddenly you're staring at API keys, tokens, and JSON
payloads wondering where to even start.
The good news: integrating AI into a Laravel app is simpler than it looks. Both OpenAI
and Anthropic (the company behind Claude) offer clean REST APIs, and Laravel makes
consuming them straightforward.
In this guide, we'll connect both APIs to a fresh Laravel project, build a simple
feature with each, compare how they work, and help you decide which one fits your use case.
Before writing any code, let's get the concepts straight.
Both APIs follow a similar pattern: you send a message, you get a message back.
The differences are in the details — and we'll get to those.
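To make that pattern concrete, here is roughly what each provider's response body looks like and where the reply text lives in each. The arrays below are trimmed-down sketches of the documented JSON formats, not complete responses:

```php
<?php

// Trimmed-down sketch of an OpenAI chat completion response:
$openaiResponse = [
    'choices' => [
        ['message' => ['role' => 'assistant', 'content' => 'Hello!']],
    ],
];

// ...and of an Anthropic messages response:
$anthropicResponse = [
    'content' => [
        ['type' => 'text', 'text' => 'Hello!'],
    ],
];

// OpenAI nests the text under choices -> message -> content:
$openaiText = $openaiResponse['choices'][0]['message']['content'];

// Anthropic returns a list of content blocks; the text is in the first block:
$anthropicText = $anthropicResponse['content'][0]['text'];

echo $openaiText . PHP_EOL;    // Hello!
echo $anthropicText . PHP_EOL; // Hello!
```

Keep these two shapes in mind; they explain the `choices.0.message.content` and `content.0.text` lookups you'll see in the service classes below.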
Start with a fresh Laravel install if you don't have one:
composer create-project laravel/laravel ai-demo
cd ai-demo
We'll use Laravel's HTTP client (which wraps Guzzle) to call both APIs.
No extra packages required for the basics — Laravel ships with everything you need.
Never hardcode API keys in your code. Add them to your .env file:
OPENAI_API_KEY=sk-your-openai-key-here
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here
Then register them in config/services.php:
// config/services.php
'openai' => [
    'key' => env('OPENAI_API_KEY'),
],

'anthropic' => [
    'key' => env('ANTHROPIC_API_KEY'),
],
This way you access them cleanly with config('services.openai.key') anywhere in your app.
OpenAI's chat endpoint expects a list of messages with roles: system (instructions for the AI),
user (what the user says), and assistant (previous AI replies for context).
Here's a basic service class:
// app/Services/OpenAIService.php

namespace App\Services;

use Illuminate\Support\Facades\Http;

class OpenAIService
{
    protected string $apiKey;

    protected string $endpoint = 'https://api.openai.com/v1/chat/completions';

    public function __construct()
    {
        $this->apiKey = config('services.openai.key');
    }

    public function ask(string $prompt): string
    {
        $response = Http::withToken($this->apiKey)
            ->post($this->endpoint, [
                'model' => 'gpt-4o',
                'messages' => [
                    ['role' => 'system', 'content' => 'You are a helpful assistant.'],
                    ['role' => 'user', 'content' => $prompt],
                ],
                'max_tokens' => 500,
            ]);

        return $response->json('choices.0.message.content') ?? 'No response.';
    }
}
Let's break down what's happening:

- Http::withToken() sends the key as an Authorization: Bearer header
- gpt-4o is OpenAI's current flagship model
- The system role sets the AI's behavior; user is your input

Anthropic's API is similar but has a few differences in structure. The system prompt is a top-level field (not inside the messages array), and you must pass a version header.
// app/Services/AnthropicService.php

namespace App\Services;

use Illuminate\Support\Facades\Http;

class AnthropicService
{
    protected string $apiKey;

    protected string $endpoint = 'https://api.anthropic.com/v1/messages';

    public function __construct()
    {
        $this->apiKey = config('services.anthropic.key');
    }

    public function ask(string $prompt): string
    {
        $response = Http::withHeaders([
            'x-api-key' => $this->apiKey,
            'anthropic-version' => '2023-06-01',
            'Content-Type' => 'application/json',
        ])
        ->post($this->endpoint, [
            'model' => 'claude-sonnet-4-5',
            'max_tokens' => 500,
            'system' => 'You are a helpful assistant.',
            'messages' => [
                ['role' => 'user', 'content' => $prompt],
            ],
        ]);

        return $response->json('content.0.text') ?? 'No response.';
    }
}
Key differences from OpenAI:

- Authentication uses an x-api-key header instead of a Bearer token
- A required anthropic-version header
- The system prompt is a top-level field, not a message in the array
- The response is a content array of blocks, so the text lives at content.0.text

Let's create a simple controller that uses both services:
// app/Http/Controllers/AiController.php

namespace App\Http\Controllers;

use App\Services\OpenAIService;
use App\Services\AnthropicService;
use Illuminate\Http\Request;

class AiController extends Controller
{
    public function __construct(
        protected OpenAIService $openai,
        protected AnthropicService $anthropic,
    ) {}

    public function ask(Request $request)
    {
        $request->validate([
            'prompt' => 'required|string|max:1000',
            'provider' => 'required|in:openai,anthropic',
        ]);

        $answer = match ($request->provider) {
            'openai' => $this->openai->ask($request->prompt),
            'anthropic' => $this->anthropic->ask($request->prompt),
        };

        return response()->json(['answer' => $answer]);
    }
}
Add the route in routes/web.php or routes/api.php:
Route::post('/ai/ask', [AiController::class, 'ask']);
Now you can POST to /ai/ask with a prompt and provider field and get a response from either AI.
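For a quick smoke test, this is the JSON body the endpoint expects; the field names match the validation rules in the controller above. A plain-PHP sketch of building it:

```php
<?php

// The request body for POST /ai/ask, matching the controller's
// validation rules: 'prompt' (string, max 1000 chars) and
// 'provider' ('openai' or 'anthropic').
$body = json_encode([
    'prompt' => 'Explain queues in one sentence.',
    'provider' => 'openai',
]);

echo $body . PHP_EOL;

// The endpoint replies with a JSON object containing 'answer'.
```

You can send this body with any HTTP client (or a Laravel feature test) and assert that the response contains an `answer` key.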
Let's look at practical things you might build with this setup.
A user pastes a long article and wants a short summary. Pass the text as context in the prompt:
public function summarize(string $text): string
{
    $prompt = "Summarize the following text in 3 bullet points:\n\n{$text}";

    return $this->anthropic->ask($prompt);
}
Claude handles long documents especially well — it has a large context window and follows
formatting instructions reliably. This makes it a strong choice for summarization tasks.
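Even with a large context window, it's worth guarding against oversized input before you send it. A common rule of thumb for English text is roughly 4 characters per token; note this is only an approximation for budgeting, not the tokenizer the API actually uses:

```php
<?php

// Rough input guard: truncate text to an approximate token budget.
// The ~4 characters-per-token ratio is a heuristic assumption for
// English text, not the provider's real tokenizer.
function truncateToApproxTokens(string $text, int $maxTokens): string
{
    $maxChars = $maxTokens * 4;

    if (strlen($text) <= $maxChars) {
        return $text;
    }

    return substr($text, 0, $maxChars) . '...';
}

$short = truncateToApproxTokens('A short note.', 100);
$long  = truncateToApproxTokens(str_repeat('word ', 10000), 1000);

echo strlen($long) . PHP_EOL; // 4003 (4000 chars + '...')
```

A guard like this keeps a pathological paste from blowing up your per-request cost; for precise counts you'd use the provider's own token-counting tools instead.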
An e-commerce app that auto-writes product descriptions from a product name and features:
public function generateDescription(string $name, array $features): string
{
    $featureList = implode(', ', $features);

    $prompt = "Write a short, engaging product description for: {$name}.
        Key features: {$featureList}. Keep it under 100 words.";

    return $this->openai->ask($prompt);
}
Automatically tag incoming support tickets by category:
public function classifyTicket(string $ticketContent): string
{
    $prompt = "Classify this support ticket into one of these categories:
        billing, technical, account, general.
        Reply with only the category name, nothing else.
        Ticket: {$ticketContent}";

    return strtolower(trim($this->anthropic->ask($prompt)));
}
APIs fail. Networks fail. You should always handle errors gracefully:
public function ask(string $prompt): string
{
    try {
        $response = Http::withToken($this->apiKey)
            ->timeout(30)
            ->post($this->endpoint, [
                'model' => 'gpt-4o',
                'messages' => [
                    ['role' => 'user', 'content' => $prompt],
                ],
                'max_tokens' => 500,
            ]);

        if ($response->failed()) {
            logger()->error('OpenAI error', ['status' => $response->status(), 'body' => $response->body()]);

            return 'The AI service is currently unavailable. Please try again.';
        }

        return $response->json('choices.0.message.content') ?? 'No response.';
    } catch (\Exception $e) {
        logger()->error('OpenAI exception', ['message' => $e->getMessage()]);

        return 'Something went wrong. Please try again.';
    }
}
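Transient failures (timeouts, rate limits) often succeed on a second attempt. Laravel's HTTP client has a chainable retry() method for exactly this; as a framework-free sketch, the same idea with exponential backoff looks like:

```php
<?php

// Generic retry helper with exponential backoff, a sketch of how you
// might wrap the API call for transient failures. (Laravel's HTTP
// client also ships with a built-in retry() method you can chain.)
function retryWithBackoff(callable $fn, int $attempts = 3, int $baseDelayMs = 200): mixed
{
    for ($i = 0; $i < $attempts; $i++) {
        try {
            return $fn();
        } catch (\Exception $e) {
            if ($i === $attempts - 1) {
                throw $e; // out of attempts, rethrow to the caller
            }
            usleep($baseDelayMs * 1000 * (2 ** $i)); // 200ms, 400ms, 800ms, ...
        }
    }
}
```

Keep the attempt count low: retrying a request the user is actively waiting on more than two or three times just trades an error message for a long hang.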
Both APIs work well, and the practical differences are mostly the ones you've already seen: OpenAI authenticates with a Bearer token and takes the system prompt inside the messages array, while Anthropic uses an x-api-key header, a version header, and a top-level system field. Claude is particularly strong with long documents; gpt-4o-mini is a good budget option on the OpenAI side.
For most Laravel apps, either one works fine. Try both with your specific use case and see which output quality you prefer.
Bad:
$response = Http::withToken('sk-abc123realkey...')
    ->post($this->endpoint, [...]);
Good:
$response = Http::withToken(config('services.openai.key'))
    ->post($this->endpoint, [...]);
Your API key will end up in version control and get leaked. Always use .env.
If you skip max_tokens, the AI might return a very long response and you'll be charged for it.
Always set a reasonable limit based on what you actually need.
// Bad: no limit set
'model' => 'gpt-4o',
// Good: limit is set
'model' => 'gpt-4o',
'max_tokens' => 300,
If the same prompt gets asked repeatedly (like generating a static FAQ), cache the response:
use Illuminate\Support\Facades\Cache;

public function ask(string $prompt): string
{
    $cacheKey = 'ai_response_' . md5($prompt);

    return Cache::remember($cacheKey, now()->addHours(24), function () use ($prompt) {
        return $this->callApi($prompt);
    });
}
This saves money and speeds up your app significantly for repeated queries.
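One refinement worth considering: if you route the same prompt to different providers or models, include those in the cache key so their responses don't collide. A sketch of a key builder:

```php
<?php

// Build a cache key that varies by provider and model, not just the
// prompt, so 'Hello' answered by gpt-4o and by Claude are cached
// separately. The 'ai_response_' prefix mirrors the example above.
function aiCacheKey(string $provider, string $model, string $prompt): string
{
    return 'ai_response_' . md5($provider . '|' . $model . '|' . $prompt);
}

echo aiCacheKey('openai', 'gpt-4o', 'Hello') . PHP_EOL;
```

You'd pass the result straight to Cache::remember() in place of the simpler md5($prompt) key.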
Users can try to "jailbreak" your AI feature with clever prompts. Use a system prompt to set boundaries:
// Weak: no system instructions
'messages' => [
    ['role' => 'user', 'content' => $userInput],
]

// Better: system prompt limits what the AI will do
'messages' => [
    ['role' => 'system', 'content' => 'You are a customer support assistant for AcmeCorp.
        Only answer questions about our products. Politely decline anything unrelated.'],
    ['role' => 'user', 'content' => $userInput],
]
Bad:
// This will crash if the API call fails
return $response->json('choices.0.message.content');
Good:
if ($response->failed()) {
    return 'Unable to get a response right now.';
}

return $response->json('choices.0.message.content') ?? 'No response.';
Http::withToken($key)->post('https://api.openai.com/v1/chat/completions', [
    'model' => 'gpt-4o', // or gpt-4o-mini for cheaper/faster
    'max_tokens' => 500,
    'messages' => [
        ['role' => 'system', 'content' => 'Your instructions here'],
        ['role' => 'user', 'content' => $prompt],
    ],
]);

// Get the response text:
$response->json('choices.0.message.content');
Http::withHeaders([
    'x-api-key' => $key,
    'anthropic-version' => '2023-06-01',
])->post('https://api.anthropic.com/v1/messages', [
    'model' => 'claude-sonnet-4-5', // or claude-haiku-4-5 for cheaper/faster
    'max_tokens' => 500,
    'system' => 'Your instructions here',
    'messages' => [
        ['role' => 'user', 'content' => $prompt],
    ],
]);

// Get the response text:
$response->json('content.0.text');
OpenAI:
gpt-4o → Most capable, higher cost
gpt-4o-mini → Fast and cheap, great for most tasks
Anthropic:
claude-opus-4-5 → Most capable, higher cost
claude-sonnet-4-5 → Balanced speed and quality
claude-haiku-4-5 → Fastest and cheapest
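If your app mixes task types, it can pay to pick the model per task instead of hardcoding one. A sketch for the OpenAI side; the mapping below is an illustrative assumption (cheap, fast models for simple tasks, the flagship where quality matters):

```php
<?php

// Illustrative model picker: route simple, high-volume tasks to the
// cheaper model and everything else to the flagship. The task names
// here are assumptions for this example, not an official taxonomy.
function pickOpenAiModel(string $task): string
{
    return match ($task) {
        'classification', 'tagging' => 'gpt-4o-mini', // fast and cheap
        default => 'gpt-4o',                          // most capable
    };
}

echo pickOpenAiModel('classification') . PHP_EOL; // gpt-4o-mini
echo pickOpenAiModel('summarization') . PHP_EOL;  // gpt-4o
```

The same idea applies on the Anthropic side with claude-haiku-4-5 for the cheap tier and claude-sonnet-4-5 or claude-opus-4-5 above it.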
Adding AI to a Laravel app doesn't require special packages or complex setup. Both OpenAI
and Claude follow a simple pattern: send a message, get a message back. Laravel's built-in
HTTP client handles the rest.
Key takeaways:
- Store your API keys in .env and access them via config() — never hardcode them
- OpenAI authenticates with a Bearer token header; Anthropic uses a custom x-api-key header
- OpenAI's system prompt goes inside the messages array; Anthropic's sits at the top level
- Set max_tokens to control costs and response length
- Check $response->failed() for graceful error handling

Start small — pick one feature, wire up one API, and get it working end to end.
Once you have a working service class, adding more AI-powered features is just a matter
of writing new prompts.