Created: March 31st 2026
Categories: Artificial intelligence (AI),  IT Development,  Laravel
Author: Milos Jevtic

Integrating OpenAI and Claude API into a Laravel App

Tags:  AI,  ChatGPT,  Claude,  Laravel,  OpenAI,  PHP

Introduction: So You Want to Add AI to Your Laravel App?

You've been hearing about AI everywhere. Your boss wants a "smart" feature in the app.
Your clients are asking why their software doesn't have a chatbot yet. You open the
OpenAI or Anthropic docs and suddenly you're staring at API keys, tokens, and JSON
payloads wondering where to even start.

The good news: integrating AI into a Laravel app is simpler than it looks. Both OpenAI
and Anthropic (the company behind Claude) offer clean REST APIs, and Laravel makes
consuming them straightforward.

In this guide, we'll connect both APIs to a fresh Laravel project, build a simple
feature with each, compare how they work, and help you decide which one fits your use case.

What Are These APIs, Exactly?

Before writing any code, let's get the concepts straight.

  • OpenAI API - Made by OpenAI, gives you access to models like GPT-4o and GPT-3.5. The most widely used AI API in the world right now.
  • Anthropic API (Claude) - Made by Anthropic, gives you access to Claude models (Sonnet, Haiku, Opus). Known for following instructions carefully and handling long documents well.
  • Model - The actual AI brain you're talking to. Different models have different speeds, costs, and capabilities.
  • Prompt - The message or instruction you send to the AI.
  • Completion / Response - What the AI sends back.
  • Token - How AI measures text length. Roughly 1 token ≈ 4 characters. Both APIs charge by tokens used.

Both APIs follow a similar pattern: you send a message, you get a message back.
The differences are in the details — and we'll get to those.
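If you want a quick sense of what a prompt will cost before sending it, you can estimate token counts from character length. This is only a sketch built on the ~4 characters per token rule of thumb above — real tokenizers vary by model and language, and both providers offer exact counters:

```php
// Rough token estimate using the ~4 chars/token heuristic.
// For exact counts, use the provider's tokenizer (e.g. tiktoken for OpenAI).
function estimateTokens(string $text): int
{
    return (int) ceil(mb_strlen($text) / 4);
}

// A 2,000-character article works out to roughly 500 input tokens.
```

Good enough for back-of-the-envelope cost estimates; don't use it for hard limits.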

Setting Up Your Laravel Project

Start with a fresh Laravel install if you don't have one:

composer create-project laravel/laravel ai-demo
cd ai-demo

We'll use Laravel's HTTP client (which wraps Guzzle) to call both APIs.
No extra packages required for the basics — Laravel ships with everything you need.

Storing Your API Keys

Never hardcode API keys in your code. Add them to your .env file:

OPENAI_API_KEY=sk-your-openai-key-here
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here

Then register them in config/services.php:

// config/services.php

'openai' => [
    'key' => env('OPENAI_API_KEY'),
],

'anthropic' => [
    'key' => env('ANTHROPIC_API_KEY'),
],

This way you access them cleanly with config('services.openai.key') anywhere in your app.

Calling the OpenAI API

OpenAI's chat endpoint expects a list of messages with roles: system (instructions for the AI),
user (what the user says), and assistant (previous AI replies for context).

Here's a basic service class:

// app/Services/OpenAIService.php

namespace App\Services;

use Illuminate\Support\Facades\Http;

class OpenAIService
{
    protected string $apiKey;
    protected string $endpoint = 'https://api.openai.com/v1/chat/completions';

    public function __construct()
    {
        $this->apiKey = config('services.openai.key');
    }

    public function ask(string $prompt): string
    {
        $response = Http::withToken($this->apiKey)
            ->post($this->endpoint, [
                'model'    => 'gpt-4o',
                'messages' => [
                    ['role' => 'system', 'content' => 'You are a helpful assistant.'],
                    ['role' => 'user',   'content' => $prompt],
                ],
                'max_tokens' => 500,
            ]);

        return $response->json('choices.0.message.content') ?? 'No response.';
    }
}

Let's break down what's happening:

  • withToken() - Laravel's HTTP client automatically adds the Authorization: Bearer header
  • model - Which GPT model to use. gpt-4o is OpenAI's current flagship
  • messages - An array of message objects. The system role sets AI behavior, user is your input
  • max_tokens - Limits how long the response can be. Keeps costs predictable
  • choices.0.message.content - OpenAI returns an array of choices. We grab the first one's text
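To see where that `choices.0.message.content` path comes from, here's an abridged sketch of what OpenAI sends back (fields trimmed for brevity; values are illustrative):

```
{
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Here is the answer..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 25, "completion_tokens": 120, "total_tokens": 145 }
}
```

The `usage` block is also worth logging — it tells you exactly how many tokens each call consumed.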

Calling the Anthropic (Claude) API

Anthropic's API is similar but has a few differences in structure. The system prompt
is a top-level field (not inside the messages array), and you must pass a version header.

// app/Services/AnthropicService.php

namespace App\Services;

use Illuminate\Support\Facades\Http;

class AnthropicService
{
    protected string $apiKey;
    protected string $endpoint = 'https://api.anthropic.com/v1/messages';

    public function __construct()
    {
        $this->apiKey = config('services.anthropic.key');
    }

    public function ask(string $prompt): string
    {
        $response = Http::withHeaders([
                'x-api-key'         => $this->apiKey,
                'anthropic-version' => '2023-06-01',
                'Content-Type'      => 'application/json',
            ])
            ->post($this->endpoint, [
                'model'      => 'claude-sonnet-4-5',
                'max_tokens' => 500,
                'system'     => 'You are a helpful assistant.',
                'messages'   => [
                    ['role' => 'user', 'content' => $prompt],
                ],
            ]);

        return $response->json('content.0.text') ?? 'No response.';
    }
}

Key differences from OpenAI:

  • x-api-key header - Anthropic uses a custom header instead of Bearer token
  • anthropic-version - Required header. Always include this or the request will fail
  • system - Sits at the top level of the request body, not inside messages
  • content.0.text - Anthropic wraps the response in a content array of blocks
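For comparison, an abridged Anthropic response looks roughly like this (fields trimmed; values illustrative) — note the `content` array of blocks that `content.0.text` digs into:

```
{
  "content": [
    { "type": "text", "text": "Here is the answer..." }
  ],
  "stop_reason": "end_turn",
  "usage": { "input_tokens": 25, "output_tokens": 120 }
}
```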

Wiring It Up With a Controller

Let's create a simple controller that uses both services:

// app/Http/Controllers/AiController.php

namespace App\Http\Controllers;

use App\Services\OpenAIService;
use App\Services\AnthropicService;
use Illuminate\Http\Request;

class AiController extends Controller
{
    public function __construct(
        protected OpenAIService   $openai,
        protected AnthropicService $anthropic,
    ) {}

    public function ask(Request $request)
    {
        $request->validate([
            'prompt'   => 'required|string|max:1000',
            'provider' => 'required|in:openai,anthropic',
        ]);

        $answer = match ($request->provider) {
            'openai'    => $this->openai->ask($request->prompt),
            'anthropic' => $this->anthropic->ask($request->prompt),
        };

        return response()->json(['answer' => $answer]);
    }
}

Add the route in routes/web.php or routes/api.php:

Route::post('/ai/ask', [AiController::class, 'ask']);

Now you can POST to /ai/ask with a prompt and provider field and get a response from either AI.
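A round trip might look like this — the `/ai/ask` path and field names come from the controller above; if you registered the route in routes/api.php instead, the path becomes `/api/ai/ask`:

```
POST /ai/ask
Content-Type: application/json

{ "prompt": "Explain Laravel middleware in one sentence.", "provider": "openai" }

→ 200 OK
{ "answer": "Middleware filters HTTP requests entering your application..." }
```

One caveat: routes in routes/web.php are behind CSRF protection, so a plain POST from an external client will be rejected without a CSRF token. For an API-style endpoint like this, routes/api.php is usually the better home.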

Real-World Scenarios

Let's look at practical things you might build with this setup.

Scenario 1: Summarizing User-Submitted Text

A user pastes a long article and wants a short summary. Pass the text as context in the prompt:

public function summarize(string $text): string
{
    $prompt = "Summarize the following text in 3 bullet points:\n\n{$text}";

    return $this->anthropic->ask($prompt);
}

Claude handles long documents especially well — it has a large context window and follows
formatting instructions reliably. This makes it a strong choice for summarization tasks.

Scenario 2: Generating Product Descriptions

An e-commerce app that auto-writes product descriptions from a product name and features:

public function generateDescription(string $name, array $features): string
{
    $featureList = implode(', ', $features);

    $prompt = "Write a short, engaging product description for: {$name}. 
               Key features: {$featureList}. Keep it under 100 words.";

    return $this->openai->ask($prompt);
}

Scenario 3: Classifying Support Tickets

Automatically tag incoming support tickets by category:

public function classifyTicket(string $ticketContent): string
{
    $prompt = "Classify this support ticket into one of these categories: 
               billing, technical, account, general.
               Reply with only the category name, nothing else.
               
               Ticket: {$ticketContent}";

    return strtolower(trim($this->anthropic->ask($prompt)));
}
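Models occasionally ignore even explicit "reply with only the category name" instructions, so it's worth validating the returned label against your known categories before storing it. A small sketch — the `general` fallback is an assumption, pick whatever default fits your app:

```php
// Guard against unexpected model output: accept only known categories,
// otherwise fall back to a safe default instead of storing garbage.
function normalizeCategory(string $raw): string
{
    $allowed = ['billing', 'technical', 'account', 'general'];
    $category = strtolower(trim($raw));

    return in_array($category, $allowed, true) ? $category : 'general';
}
```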

Scenario 4: Adding Error Handling

APIs fail. Networks fail. You should always handle errors gracefully:

public function ask(string $prompt): string
{
    try {
        $response = Http::withToken($this->apiKey)
            ->timeout(30)
            ->post($this->endpoint, [
                'model'      => 'gpt-4o',
                'messages'   => [
                    ['role' => 'user', 'content' => $prompt],
                ],
                'max_tokens' => 500,
            ]);

        if ($response->failed()) {
            logger()->error('OpenAI error', ['status' => $response->status(), 'body' => $response->body()]);
            return 'The AI service is currently unavailable. Please try again.';
        }

        return $response->json('choices.0.message.content') ?? 'No response.';

    } catch (\Exception $e) {
        logger()->error('OpenAI exception', ['message' => $e->getMessage()]);
        return 'Something went wrong. Please try again.';
    }
}

  • timeout(30) - Don't let a slow API hang your entire request
  • failed() - Laravel's HTTP client returns true for 4xx and 5xx responses
  • logger()->error() - Log the error silently so you can debug later without showing details to users
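On top of timeouts and logging, Laravel's HTTP client can retry transient failures for you. A sketch — the retry count and delay here are arbitrary choices, tune them to your traffic:

```php
// Retry up to 3 times with 200 ms between attempts.
// Useful for transient network errors and occasional 429/5xx responses.
$response = Http::withToken($this->apiKey)
    ->timeout(30)
    ->retry(3, 200)
    ->post($this->endpoint, [
        'model'      => 'gpt-4o',
        'messages'   => [['role' => 'user', 'content' => $prompt]],
        'max_tokens' => 500,
    ]);
```

Note that retry() throws an exception once all attempts are exhausted, so keep the try/catch from the example above around the call.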

OpenAI vs Claude: How Do They Compare?

Both APIs work well. Here's a practical comparison to help you choose:

  • Pricing - Both charge per token (input + output). Check their websites for current rates — they change often. For most beginner projects the cost is very low.
  • Speed - GPT-4o Mini and Claude Haiku are the fastest (and cheapest) models from each provider. Use them for high-volume features.
  • Instruction following - Claude tends to follow detailed formatting instructions more precisely. Useful when you need structured output.
  • Context window - Claude handles very large inputs well. Good for summarizing long documents or codebases.
  • Ecosystem - OpenAI has a larger community, more tutorials, and more third-party packages.
  • Safety - Both have content filters. Anthropic's are stricter by default.

For most Laravel apps, either one works fine. Try both with your specific use case and see which output quality you prefer.

Common Mistakes to Avoid

1. Hardcoding API Keys

Bad:

$response = Http::withToken('sk-abc123realkey...')
    ->post($this->endpoint, [...]);

Good:

$response = Http::withToken(config('services.openai.key'))
    ->post($this->endpoint, [...]);

Hardcoded keys end up in version control and, sooner or later, leak. Always load them from .env via config().

2. Not Setting max_tokens

If you skip max_tokens, the AI might return a very long response and you'll be charged for it.
Always set a reasonable limit based on what you actually need.

// Bad: no limit set
'model' => 'gpt-4o',

// Good: limit is set
'model'      => 'gpt-4o',
'max_tokens' => 300,

3. Calling the API on Every Request Without Caching

If the same prompt gets asked repeatedly (like generating a static FAQ), cache the response:

use Illuminate\Support\Facades\Cache;

public function ask(string $prompt): string
{
    $cacheKey = 'ai_response_' . md5($prompt);

    return Cache::remember($cacheKey, now()->addHours(24), function () use ($prompt) {
        return $this->callApi($prompt);
    });
}

This saves money and speeds up your app significantly for repeated queries.

4. Sending User Input Directly Without Sanitizing Intent

Users can try to "jailbreak" your AI feature with clever prompts. Use a system prompt to set boundaries:

// Weak: no system instructions
'messages' => [
    ['role' => 'user', 'content' => $userInput],
]

// Better: system prompt limits what the AI will do
'messages' => [
    ['role' => 'system', 'content' => 'You are a customer support assistant for AcmeCorp. 
     Only answer questions about our products. Politely decline anything unrelated.'],
    ['role' => 'user', 'content' => $userInput],
]

5. Ignoring Failed Responses

Bad:

// This will crash if the API call fails
return $response->json('choices.0.message.content');

Good:

if ($response->failed()) {
    return 'Unable to get a response right now.';
}

return $response->json('choices.0.message.content') ?? 'No response.';

Quick Reference

OpenAI Request Structure

Http::withToken($key)->post('https://api.openai.com/v1/chat/completions', [
    'model'      => 'gpt-4o',          // or gpt-4o-mini for cheaper/faster
    'max_tokens' => 500,
    'messages'   => [
        ['role' => 'system', 'content' => 'Your instructions here'],
        ['role' => 'user',   'content' => $prompt],
    ],
]);

// Get the response text:
$response->json('choices.0.message.content');

Anthropic (Claude) Request Structure

Http::withHeaders([
    'x-api-key'         => $key,
    'anthropic-version' => '2023-06-01',
])->post('https://api.anthropic.com/v1/messages', [
    'model'      => 'claude-sonnet-4-5',  // or claude-haiku-4-5 for cheaper/faster
    'max_tokens' => 500,
    'system'     => 'Your instructions here',
    'messages'   => [
        ['role' => 'user', 'content' => $prompt],
    ],
]);

// Get the response text:
$response->json('content.0.text');

Models Cheat Sheet

OpenAI:
  gpt-4o          → Most capable, higher cost
  gpt-4o-mini     → Fast and cheap, great for most tasks

Anthropic:
  claude-opus-4-5   → Most capable, higher cost
  claude-sonnet-4-5 → Balanced speed and quality
  claude-haiku-4-5  → Fastest and cheapest

Conclusion

Adding AI to a Laravel app doesn't require special packages or complex setup. Both OpenAI
and Claude follow a simple pattern: send a message, get a message back. Laravel's built-in
HTTP client handles the rest.

Key takeaways:

  • Store API keys in .env and access them via config() — never hardcode them
  • OpenAI uses a Bearer token header; Anthropic uses a custom x-api-key header
  • OpenAI's system prompt goes inside the messages array; Anthropic's sits at the top level
  • Always set max_tokens to control costs and response length
  • Wrap API calls in try/catch and check $response->failed() for graceful error handling
  • Cache repeated prompts to save money and improve performance
  • Use system prompts to control what the AI will and won't do in your app

Start small — pick one feature, wire up one API, and get it working end to end.
Once you have a working service class, adding more AI-powered features is just a matter
of writing new prompts.