OpenAIProvider
Use OpenAI directly, or route through any OpenAI-compatible endpoint (OpenRouter, Ollama, Cloudflare AI Gateway, llama.cpp) by setting baseUrl. OpenAIProvider is also the parent class for GeminiProvider and MistralProvider.
Construction
public function __construct(
ProviderConfig $config,
ClientContract|null $client = null,
Closure|null $sleep = null,
)
Parameters:
- $config: ProviderConfig carrying apiKey, model, baseUrl, plus a passthrough options bag.
- $client: optional OpenAI\Contracts\ClientContract (e.g. for tests). Defaults to the SDK's OpenAI::factory() chain.
- $sleep: optional closure invoked instead of the native sleep(...) between retry attempts.
Methods
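A minimal construction sketch, assuming ProviderConfig accepts named apiKey/model/baseUrl arguments and that the sleep closure receives the number of seconds to wait (both assumptions, not confirmed signatures):

```php
$provider = new OpenAIProvider(
    config: new ProviderConfig(
        apiKey: getenv('OPENAI_API_KEY'),
        model: 'gpt-4o-mini', // illustrative model name
        baseUrl: 'https://api.openai.com/v1',
    ),
    // Defaults to the SDK's OpenAI::factory() chain when null.
    client: null,
    // In tests, inject a no-op sleep so retries don't actually wait.
    sleep: fn (int $seconds) => null,
);
```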
generateObject
Builds a chat.completions request with response_format: { type: 'json_schema', strict: true }, sends it, and returns the decoded JSON object.
$provider->generateObject(
messages: [
['role' => 'system', 'content' => 'Return JSON only.'],
['role' => 'user', 'content' => 'Pick three colors.'],
],
schema: [
'type' => 'object',
'properties' => ['colors' => ['type' => 'array', 'items' => ['type' => 'string']]],
'required' => ['colors'],
'additionalProperties' => false,
],
);
Throws ProviderException when the decoded response is not a JSON object, or when the upstream call returns 4xx/5xx after the retry chain.
generateText
Sends a chat.completions request without response_format and returns the message content.
$provider->generateText(
messages: [
['role' => 'user', 'content' => 'Describe three primary colors.'],
],
);
Throws ProviderException when the response carries no text content, or when the upstream call returns 4xx/5xx after the retry chain.
Retry Behavior
Up to 3 attempts on RateLimitException (429), ServerException (5xx), TransporterException (network), and ErrorException with status 429 or ≥500. The wrapper honors Retry-After, falling back to 2^attempt seconds.
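The backoff policy above can be sketched as follows (names are illustrative, not the plugin's actual internals):

```php
// Returns how long to wait before the next attempt: honor a numeric
// Retry-After header when present, otherwise back off exponentially.
function backoffSeconds(int $attempt, ?string $retryAfter): int
{
    if ($retryAfter !== null && is_numeric($retryAfter)) {
        return (int) $retryAfter;
    }

    // Fallback: 2^attempt seconds (2, 4, 8, ...).
    return 2 ** $attempt;
}
```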
After 3 failed attempts the wrapper throws ProviderException with reason: 'request failed: <message>', responseExcerpt (body shortened to 200 chars), httpCode, and previous set to the original Throwable.
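A hedged sketch of handling that exception, assuming reason, responseExcerpt, and httpCode are readable properties (the exact accessors are not confirmed here):

```php
try {
    $text = $provider->generateText(
        messages: [['role' => 'user', 'content' => 'Hello']],
    );
} catch (ProviderException $e) {
    // Log the failure context; getPrevious() holds the original Throwable.
    error_log(sprintf(
        'LLM call failed (HTTP %d): %s | excerpt: %s',
        $e->httpCode,
        $e->getMessage(),
        $e->responseExcerpt,
    ));
}
```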
OpenAI-Compatible Endpoints
OpenAIProvider doubles as the transport for any OpenAI-compatible API. Configure the base URL and model in config.php:
OpenRouter:
'providers' => [
'openai' => [
'apiKey' => env('OPENROUTER_API_KEY'),
'baseUrl' => 'https://openrouter.ai/api/v1',
'model' => 'anthropic/claude-3.5-sonnet',
],
],
llama.cpp (or any self-hosted OpenAI-compatible server):
'providers' => [
'openai' => [
'apiKey' => 'sk-no-key-required',
'baseUrl' => 'https://llama.example.com/v1',
'model' => 'llama-3.2-3b-instruct',
],
],
Cloudflare AI Gateway:
'providers' => [
'openai' => [
'apiKey' => env('OPENAI_API_KEY'),
'baseUrl' => 'https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/openai',
'model' => 'gpt-5.4',
],
],
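Ollama also exposes an OpenAI-compatible endpoint under /v1. A sketch for a default local install (the 'ollama' key is a placeholder — Ollama ignores it, but the SDK requires a non-empty value; model tag is illustrative):

```php
'providers' => [
    'openai' => [
        'apiKey' => 'ollama',
        'baseUrl' => 'http://localhost:11434/v1',
        'model' => 'llama3.2',
    ],
],
```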
For Chat Completions vs. Responses API selection, see the api option.
Not every OpenAI-compatible endpoint supports strict json_schema translation. Test before relying on blocks or layout generation through this path.
Provider-Specific Options
Anything in providers.openai that isn't apiKey, model, or baseUrl lands in ProviderConfig::$options and is spread into every request payload:
'providers' => [
'openai' => [
'apiKey' => env('OPENAI_API_KEY'),
'temperature' => 0.7,
'reasoning_effort' => 'medium',
'top_p' => 0.9,
],
],
The plugin doesn't validate option names – anything that doesn't match an OpenAI Chat Completions field is sent as-is and may be rejected upstream.
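An illustrative sketch of that merge (not the plugin's actual code), using PHP 8.1+ string-keyed array spread:

```php
// Config options are spread into every request body alongside the
// core fields, so unknown keys flow straight through to the endpoint.
$payload = [
    'model' => $config->model,
    'messages' => $messages,
    ...$config->options, // e.g. temperature, reasoning_effort, top_p
];
```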