Laravel AI Integration with OpenAI & OpenRouter: Complete Guide 2025

AI features are no longer a nice-to-have — users expect intelligent responses, smart search, and automated assistance. In this guide you'll learn how to integrate both OpenAI's ChatGPT API and OpenRouter's multi-model platform into your Laravel app, covering setup, controllers, conversation history, streaming, and cost management.

What You'll Learn

OpenAI and OpenRouter setup, building a chatbot controller, storing conversation history, streaming responses for a real-time feel, function calling, and managing API costs in production.

Why Use OpenRouter Alongside OpenAI?

OpenAI is the default choice, but OpenRouter gives you access to 100+ models — GPT-4, Claude 3.5, Gemini, Llama — through a single API endpoint. This means no vendor lock-in, easier model comparison, and access to free-tier models for development. Both approaches are covered here so you can choose what fits your project.

  • Automated customer support — answer user questions 24/7 without adding staff
  • Content generation — blogs, product descriptions, marketing copy at scale
  • Smart search — semantic search that understands intent, not just keywords
  • Code assistance — help users write and debug code within your app

1. Prerequisites & Setup

Get your API keys

For OpenAI, sign up at platform.openai.com. For OpenRouter, sign up at openrouter.ai. Both offer free trial credits to get started.

Install the OpenAI PHP package

composer require openai-php/laravel

php artisan vendor:publish --provider="OpenAI\Laravel\ServiceProvider"

Configure your environment

# .env
OPENAI_API_KEY=sk-your-key-here

# OpenRouter (if using)
OPENROUTER_API_KEY=sk-or-v1-your-key-here
OPENROUTER_SITE_URL=https://yoursite.com
OPENROUTER_APP_NAME=YourAppName
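One caveat: `env()` calls return `null` once you run `php artisan config:cache`, so outside of config files it's safer to read these values through `config()`. A minimal sketch, assuming you add a (hypothetical) `openrouter` entry to `config/services.php`:

```php
<?php

// config/services.php (illustrative 'openrouter' entry — not shipped by Laravel)
return [

    'openrouter' => [
        'key'      => env('OPENROUTER_API_KEY'),
        'site_url' => env('OPENROUTER_SITE_URL'),
        'app_name' => env('OPENROUTER_APP_NAME'),
    ],

];
```

Elsewhere in the app you would then call `config('services.openrouter.key')` rather than `env('OPENROUTER_API_KEY')` directly.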

2. OpenAI Integration

A simple chat controller using the OpenAI facade:

<?php

// app/Http/Controllers/AIChatController.php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use OpenAI\Laravel\Facades\OpenAI;

class AIChatController extends Controller
{
    public function chat(Request $request)
    {
        $request->validate([
            'message' => 'required|string|max:1000',
        ]);

        try {
            $result = OpenAI::chat()->create([
                'model' => 'gpt-4o-mini', // Mini for cost efficiency
                'messages' => [
                    ['role' => 'system', 'content' => 'You are a helpful assistant.'],
                    ['role' => 'user', 'content' => $request->message],
                ],
                'max_tokens' => 500,
            ]);

            return response()->json([
                'success' => true,
                'response' => $result->choices[0]->message->content,
            ]);
        } catch (\Exception $e) {
            report($e); // log the real error, return a generic message to the client

            return response()->json([
                'success' => false,
                'error' => 'AI service unavailable. Please try again.',
            ], 500);
        }
    }
}

Register the route

// routes/api.php
use App\Http\Controllers\AIChatController;

Route::post('/ai/chat', [AIChatController::class, 'chat'])
    ->middleware('throttle:20,1'); // 20 requests per minute per user

3. OpenRouter Integration

OpenRouter uses the same OpenAI-compatible API format, so the integration is straightforward via Laravel's HTTP client:

use Illuminate\Support\Facades\Http;

public function chatWithOpenRouter(Request $request)
{
    $request->validate([
        'message' => 'required|string|max:1000',
        'model' => 'nullable|string',
    ]);

    $model = $request->model ?? 'openai/gpt-4o-mini';

    try {
        $response = Http::withHeaders([
            'Authorization' => 'Bearer ' . env('OPENROUTER_API_KEY'),
            'HTTP-Referer' => env('OPENROUTER_SITE_URL'),
            'X-Title' => env('OPENROUTER_APP_NAME'),
        ])->timeout(30)->post('https://openrouter.ai/api/v1/chat/completions', [
            'model' => $model,
            'messages' => [
                ['role' => 'user', 'content' => $request->message],
            ],
        ])->throw(); // turn 4xx/5xx responses into exceptions instead of indexing a missing 'choices' key

        $data = $response->json();

        return response()->json([
            'success' => true,
            'response' => $data['choices'][0]['message']['content'],
            'model' => $model,
        ]);
    } catch (\Exception $e) {
        report($e);

        return response()->json([
            'success' => false,
            'error' => 'Service unavailable',
        ], 500);
    }
}

Popular OpenRouter models

$models = [
    'openai/gpt-4o-mini'          => 'Cheap & fast',
    'anthropic/claude-3.5-sonnet' => 'Best for reasoning',
    'google/gemini-pro'           => 'Free tier available',
    'meta-llama/llama-3.1-70b'    => 'Open source',
];

4. Conversation History

For a real chatbot, you need to persist conversation context. Start with a migration:

php artisan make:migration create_conversations_table
Schema::create('conversations', function (Blueprint $table) {
    $table->id();
    $table->foreignId('user_id')->constrained();
    $table->string('session_id')->index();
    $table->string('model');
    $table->text('message');
    $table->text('response');
    $table->timestamps();
});
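The controller below writes rows via `Conversation::create()`, i.e. mass assignment, so the Eloquent model needs those columns whitelisted. A minimal sketch of the model:

```php
<?php

// app/Models/Conversation.php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Conversation extends Model
{
    // Allow mass assignment for the columns the chat controller writes
    protected $fillable = [
        'user_id',
        'session_id',
        'model',
        'message',
        'response',
    ];
}
```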

Controller with history support

use App\Models\Conversation;
use Illuminate\Support\Str;

public function chatWithHistory(Request $request)
{
    $request->validate([
        'message' => 'required|string|max:1000',
        'session_id' => 'nullable|uuid',
    ]);

    $sessionId = $request->session_id ?? (string) Str::uuid();

    // Load the last 10 exchanges for context, oldest first
    $history = Conversation::where('session_id', $sessionId)
        ->latest()
        ->take(10)
        ->get()
        ->reverse();

    $messages = [['role' => 'system', 'content' => 'You are a helpful assistant.']];

    foreach ($history as $conv) {
        $messages[] = ['role' => 'user', 'content' => $conv->message];
        $messages[] = ['role' => 'assistant', 'content' => $conv->response];
    }

    $messages[] = ['role' => 'user', 'content' => $request->message];

    $result = OpenAI::chat()->create([
        'model' => 'gpt-4o-mini',
        'messages' => $messages,
    ]);

    $response = $result->choices[0]->message->content;

    Conversation::create([
        'user_id' => auth()->id(),
        'session_id' => $sessionId,
        'model' => 'gpt-4o-mini',
        'message' => $request->message,
        'response' => $response,
    ]);

    return response()->json([
        'success' => true,
        'response' => $response,
        'session_id' => $sessionId,
    ]);
}

5. Streaming Responses

Streaming makes your app feel like ChatGPT — tokens arrive in real time rather than waiting for the full response:

public function streamChat(Request $request)
{
    return response()->stream(function () use ($request) {
        $stream = OpenAI::chat()->createStreamed([
            'model' => 'gpt-4o-mini',
            'messages' => [['role' => 'user', 'content' => $request->message]],
        ]);

        foreach ($stream as $response) {
            if (isset($response->choices[0]->delta->content)) {
                echo "data: " . json_encode([
                    'content' => $response->choices[0]->delta->content,
                ]) . "\n\n";

                if (ob_get_level() > 0) {
                    ob_flush(); // flush PHP's output buffer if one is active
                }
                flush();
            }
        }
    }, 200, [
        'Content-Type' => 'text/event-stream',
        'Cache-Control' => 'no-cache',
        'X-Accel-Buffering' => 'no', // stop nginx from buffering the stream
    ]);
}

6. Cost Management

Approximate pricing per 1M tokens (check provider sites for current rates)

  • GPT-4o: ~$5 per 1M input tokens
  • GPT-4o Mini: ~$0.15 per 1M input tokens
  • Claude 3.5 Sonnet: ~$3 per 1M input tokens
  • Gemini Flash: ~$0.075 per 1M input tokens
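To see what these rates mean for your traffic, turn the `usage` token counts each API response returns into a dollar figure. A small sketch in plain PHP — the rates array is illustrative only, based on the approximate prices above, and includes assumed output-token rates as well:

```php
<?php

// Approximate cost per 1M tokens in USD; illustrative numbers, check current pricing
$rates = [
    'gpt-4o'      => ['input' => 5.00, 'output' => 15.00],
    'gpt-4o-mini' => ['input' => 0.15, 'output' => 0.60],
];

function estimateCost(string $model, int $inputTokens, int $outputTokens, array $rates): float
{
    $r = $rates[$model];

    // Rates are quoted per 1,000,000 tokens
    return ($inputTokens / 1_000_000) * $r['input']
         + ($outputTokens / 1_000_000) * $r['output'];
}

// e.g. a chat that used 1,200 prompt tokens and 400 completion tokens on Mini
echo number_format(estimateCost('gpt-4o-mini', 1200, 400, $rates), 6); // 0.000420
```

Logging this per request from day one makes it obvious which endpoints (and which users) dominate your bill.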

The most effective cost controls in production: cache identical or near-identical prompts with Redis, use a cheap model like GPT-4o Mini for simple tasks and only escalate to GPT-4o or Claude when quality demands it, set sensible max_tokens limits, and throttle at the route level so no single user can run up your bill.

use Illuminate\Support\Facades\Cache;

$key = 'ai_' . md5($request->message);

$response = Cache::remember($key, 3600, function () use ($request) {
    return OpenAI::chat()->create([
        'model' => 'gpt-4o-mini',
        'messages' => [['role' => 'user', 'content' => $request->message]],
    ])->choices[0]->message->content;
});
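The "cheap model first, escalate only when quality demands it" idea can be sketched as a small helper. Note that `shouldEscalate()` is a hypothetical, app-specific check — substitute whatever quality signal you have, such as a complexity flag on the task or a validation rule the draft failed:

```php
// Sketch: try the cheap model first, escalate to the expensive one when needed
public function answer(string $prompt): string
{
    $draft = OpenAI::chat()->create([
        'model' => 'gpt-4o-mini',
        'messages' => [['role' => 'user', 'content' => $prompt]],
    ])->choices[0]->message->content;

    // shouldEscalate() is an assumed app-specific heuristic, not a library call
    if (! $this->shouldEscalate($prompt, $draft)) {
        return $draft;
    }

    return OpenAI::chat()->create([
        'model' => 'gpt-4o',
        'messages' => [['role' => 'user', 'content' => $prompt]],
    ])->choices[0]->message->content;
}
```

Since Mini handles most requests, the expensive model only runs on the small fraction of traffic that genuinely needs it.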

Common Troubleshooting

  • Rate limit errors: move heavy AI tasks to queue jobs so they don't block the request cycle.
  • Timeouts: set the HTTP client timeout to 30+ seconds for longer prompts.
  • OpenRouter errors: make sure the HTTP-Referer header matches your registered site URL.
  • Token limits: summarise older conversation history rather than sending the full context every time.
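Moving AI calls into a queued job, as suggested for rate limits, looks roughly like this — class and property names are illustrative, and the client would poll or listen on a websocket for the stored response:

```php
<?php

// app/Jobs/ProcessAIRequest.php (illustrative)

namespace App\Jobs;

use App\Models\Conversation;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use OpenAI\Laravel\Facades\OpenAI;

class ProcessAIRequest implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 3;    // retry transient API failures
    public int $backoff = 10; // wait 10 seconds between attempts

    public function __construct(
        public int $userId,
        public string $sessionId,
        public string $message,
    ) {}

    public function handle(): void
    {
        $result = OpenAI::chat()->create([
            'model' => 'gpt-4o-mini',
            'messages' => [['role' => 'user', 'content' => $this->message]],
        ]);

        Conversation::create([
            'user_id' => $this->userId,
            'session_id' => $this->sessionId,
            'model' => 'gpt-4o-mini',
            'message' => $this->message,
            'response' => $result->choices[0]->message->content,
        ]);
    }
}
```

Dispatch it from the controller with `ProcessAIRequest::dispatch(auth()->id(), $sessionId, $request->message);` so the HTTP request returns immediately.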

Start cheap, scale up

Use free OpenRouter models during development. Switch to GPT-4o Mini for production — it handles the vast majority of use cases well at a fraction of the cost of GPT-4o. Only upgrade when you actually see quality problems.

Conclusion

You now have everything you need to add AI to your Laravel application — from a basic chat endpoint to conversation history, streaming, and cost-aware model selection. Start simple, measure your token usage from day one, and add complexity only as your use case demands it.

For more on building scalable Laravel backends, see the REST API Best Practices guide and Performance Optimization guide.

Need AI Features Built Into Your Laravel App?

I've integrated OpenAI and OpenRouter into production Laravel applications — chatbots, content generators, and smart search. Happy to help you build something that actually ships.

Based in Bangladesh · Remote worldwide · Fast turnaround

About the Author

Kamruzzaman Polash — Software Engineer specialising in Laravel, REST APIs, and scalable backend systems. 10+ projects delivered for clients worldwide.