Troubleshooting

Fix generation timeouts, API key errors, malformed blocks output, and missing inline suggestions across Herd, production nginx, and Cloudflare setups.

This page covers common issues you might encounter when using Kirby Copilot, along with solutions to get you back on track.

Long Generations Time Out

If you're generating longer content (300+ words) and see errors like "No object generated: could not parse the response", "JSON parsing failed: Unterminated string in JSON", a 504 Gateway Timeout, or an abruptly closed connection, the AI response is being cut off before it completes.

Starting with Copilot v3, all AI requests go through a server-side PHP proxy for security. This means your web server needs to keep the connection alive for the entire generation – which can take 60+ seconds for longer content. The culprit is usually a timeout at the web server level, not PHP itself.

The Copilot proxy already calls set_time_limit(0) to disable PHP's execution timeout. The issue is typically nginx closing the connection before PHP finishes.
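Conceptually, the proxy's side of this looks like the following simplified sketch (not the plugin's actual code – the `$upstream` token iterator is hypothetical; `set_time_limit(0)`, the SSE content type, and the `no-transform` header are taken from this page):

```php
// Simplified sketch of a streaming proxy loop, NOT the plugin's actual code
set_time_limit(0); // disable PHP's max_execution_time for this request

// Stream tokens to the client as Server-Sent Events
header('Content-Type: text/event-stream');
header('Cache-Control: no-transform'); // opt out of intermediary compression/buffering

foreach ($upstream as $chunk) { // $upstream: hypothetical iterator over AI tokens
    echo 'data: ' . $chunk . "\n\n";
    flush(); // push each token immediately – the web server must not buffer
}
```

Even with PHP's own timeout disabled, the connection still passes through the web server – which is why the fixes below target nginx and Apache.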

Laravel Herd

Herd uses nginx with FastCGI, and the default fastcgi_read_timeout is 60 seconds – often too short for longer AI generations.

Edit Herd's global config at ~/Library/Application Support/Herd/config/nginx/herd.conf. Inside the existing location ~ [^/]\.php(/|$) { } block, add:

herd.conf
fastcgi_read_timeout 300;
fastcgi_send_timeout 300;
send_timeout 300;

Then restart Herd:

herd restart

Herd may overwrite its global config files on update. For a more durable setup, run herd isolate (or herd secure) on the affected site – this generates a dedicated per-site config at ~/Library/Application Support/Herd/config/valet/Nginx/<your-domain> that you can safely edit.
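For illustration, the per-site file contains its own PHP location block where the same directives can be added (the block shape is assumed from standard Herd/Valet configs; only the three timeout lines are the actual change):

```nginx
# ~/Library/Application Support/Herd/config/valet/Nginx/<your-domain>
location ~ [^/]\.php(/|$) {
    # ...existing fastcgi_* directives stay as generated...
    fastcgi_read_timeout 300;
    fastcgi_send_timeout 300;
    send_timeout 300;
}
```

Restart Herd afterwards so the per-site config is reloaded.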

Production Environments

For deployed sites, the configuration depends on your hosting setup:

Environment | What to check
nginx + PHP-FPM | Raise fastcgi_read_timeout, fastcgi_send_timeout, and send_timeout to 300s
Apache + PHP-FPM | Raise Apache's ProxyTimeout and FPM's request_terminate_timeout
Apache + mod_fcgid | Set FcgidOutputBufferSize 0 – the default 64 KB buffer delays streamed tokens
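For the Apache + PHP-FPM row, the two directives live in different files. A sketch (file paths and the proxy handler setup are assumptions that depend on your distribution):

```apache
# Apache vhost or global config (assuming a mod_proxy_fcgi setup)
ProxyTimeout 300
```

```ini
; PHP-FPM pool config, e.g. /etc/php/8.3/fpm/pool.d/www.conf (path varies)
request_terminate_timeout = 300
```

Reload both Apache and PHP-FPM after changing these values.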
If you use Cloudflare, the default 120-second response timeout (HTTP 524) is an idle timeout between successive reads from your origin – not a wall-clock limit. Since the Copilot proxy streams tokens via Server-Sent Events, you should not hit this limit during normal streaming generations.

If you do see 524 errors, ensure no intermediate layer buffers the response – for nginx, add fastcgi_buffering off; to the __copilot__/proxy location block. If streams still arrive in bursts, Cloudflare's automatic compression may be buffering text/event-stream; the proxy already sends Cache-Control: no-transform to opt out, but you can also disable Brotli/gzip for that route in your Cloudflare dashboard.
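A location block that disables buffering for just the proxy route might look like this (a sketch – the location path is derived from the __copilot__/proxy endpoint mentioned above, and the fastcgi_pass/include lines must match your main PHP block):

```nginx
# Disable FastCGI response buffering for the Copilot proxy route only
location ~ ^/__copilot__/proxy {
    fastcgi_buffering off;
    # ...same fastcgi_pass and include directives as your main PHP location...
}
```

Scoping the directive to this one route keeps buffering enabled for the rest of the site, where it is usually beneficial.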

API Key Not Working

If requests fail with Missing API key for provider: …, the plugin received an empty API key for the selected provider:

  1. Confirm the key is present under providers.<name>.apiKey – not at a higher level.
  2. If you load the key via env('…'), make sure the variable is actually set in the environment the Panel runs under. CLI and web server environments often differ.
  3. If you use a closure, verify it returns a non-empty string for the current Panel user – for example when returning different keys based on user role.
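Putting these checks together, a working provider configuration might look like this (the providers.<name>.apiKey shape and the env() helper are taken from this page; the closure logic is purely illustrative and the environment variable names are assumptions):

```php
// site/config/config.php
return [
    'johannschopplich.copilot' => [
        'providers' => [
            'openai' => [
                // 1. The key lives under providers.<name>.apiKey – not higher up
                'apiKey' => env('OPENAI_API_KEY')

                // 3. Or a closure – it must return a non-empty string for the
                //    current Panel user (role-based logic shown as an example):
                // 'apiKey' => fn () => kirby()->user()?->isAdmin()
                //     ? env('OPENAI_ADMIN_KEY')
                //     : env('OPENAI_API_KEY')
            ]
        ]
    ]
];
```

For check 2, remember that env() reads from the environment the web server runs under – a variable visible in your shell is not necessarily visible to PHP-FPM.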

If requests fail with Invalid provider: …, the top-level provider key in your config doesn't match one of google, openai, anthropic, or mistral.
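As an example, assuming the option is named provider (the page only says "top-level provider key"), the value must be exactly one of the four supported identifiers:

```php
// site/config/config.php
'johannschopplich.copilot' => [
    'provider' => 'openai' // one of: google, openai, anthropic, mistral
]
```

Typos like 'open-ai' or 'gemini' would trigger the Invalid provider error.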

Blocks Generation Returns Malformed Content

When generating blocks or layouts, you might see missing fields, empty results, or incorrectly structured content. This usually comes down to the AI model's ability to handle nested JSON schemas.

A few things to try:

  1. Switch to Google Gemini – it has the best support for structured output with nested schemas.
  2. Simplify your prompt or generate fewer blocks at a time.
  3. Enable debug logging to inspect the raw AI response:
    config.php
    'johannschopplich.copilot' => [
        'logLevel' => 'debug'
    ]
    
See the Blocks and Layouts guide for more details on structured content generation.

Requests Fail with 404 or JSON Parse Errors from a Gateway

If your OpenAI-compatible endpoint returns 404 on /v1/responses or requests fail with parse errors, the endpoint likely only exposes /v1/chat/completions. Set providers.openai.api to chat.
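The setting from this section in context (the option path providers.openai.api and the value chat are from this page; the rest of the config shape follows the earlier examples):

```php
// site/config/config.php
'johannschopplich.copilot' => [
    'providers' => [
        'openai' => [
            // Use /v1/chat/completions instead of the default /v1/responses
            'api' => 'chat'
        ]
    ]
]
```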

See the api option and compatibility table for details.

Inline Suggestions Not Appearing

If ghost text suggestions don't appear when typing in writer fields, check the following:

  1. Make sure the copilot-suggestions mark is added to your writer field:
Writer Field
text:
  type: writer
  marks:
    - copilot-suggestions
  2. Verify that completion is not disabled in your global configuration.

Learn more about how inline suggestions work.
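For the second check, assuming the option is named completion (this page only says "completion is not disabled"), make sure your config doesn't switch it off:

```php
// site/config/config.php – a disabled completion would look like this
'johannschopplich.copilot' => [
    'completion' => false // remove or set to true to re-enable ghost text
]
```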