Troubleshooting
This page covers common issues you might encounter when using Kirby Copilot, along with solutions to get you back on track.
Long Generations Time Out
If you're generating longer content (300+ words) and see errors like `No object generated: could not parse the response`, `JSON parsing failed: Unterminated string in JSON`, a `504 Gateway Timeout`, or an abruptly closed connection, the AI response is being cut off before it completes.
Starting with Copilot v3, all AI requests go through a server-side PHP proxy for security. This means your web server needs to keep the connection alive for the entire generation – which can take 60+ seconds for longer content. The culprit is usually a timeout at the web server level, not PHP itself.
The proxy already calls `set_time_limit(0)` to disable PHP's execution timeout. The issue is typically nginx closing the connection before PHP finishes.

Laravel Herd
Herd uses nginx with FastCGI, and the default `fastcgi_read_timeout` is 60 seconds – often too short for longer AI generations.
Edit Herd's global config at `~/Library/Application Support/Herd/config/nginx/herd.conf`. Inside the existing `location ~ [^/]\.php(/|$) { }` block, add:
```nginx
fastcgi_read_timeout 300;
fastcgi_send_timeout 300;
send_timeout 300;
```
Then restart Herd:
```sh
herd restart
```
Alternatively, run `herd isolate` (or `herd secure`) on the affected site – this generates a dedicated per-site config at `~/Library/Application Support/Herd/config/valet/Nginx/<your-domain>` that you can safely edit.

Production Environments
For deployed sites, the configuration depends on your hosting setup:
| Environment | What to check |
|---|---|
| nginx + PHP-FPM | Raise `fastcgi_read_timeout`, `fastcgi_send_timeout`, and `send_timeout` to 300s |
| Apache + PHP-FPM | Raise Apache's `ProxyTimeout` and FPM's `request_terminate_timeout` |
| Apache + mod_fcgid | Set `FcgidOutputBufferSize 0` – the default 64 KB buffer delays streamed tokens |
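For the nginx + PHP-FPM row, a minimal sketch of the relevant directives looks like the following. The location regex and the `fastcgi_pass` socket path are placeholders for your own setup, not required values:

```nginx
# Inside the server block that serves your Kirby site
location ~ \.php$ {
    # Keep the connection open long enough for slow AI generations
    fastcgi_read_timeout 300;
    fastcgi_send_timeout 300;
    send_timeout 300;

    fastcgi_pass unix:/run/php/php-fpm.sock; # adjust to your PHP-FPM socket
    include fastcgi_params;
}
```

After changing the config, reload nginx (e.g. `nginx -s reload`) for the new timeouts to take effect.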
If tokens arrive all at once instead of streaming, add `fastcgi_buffering off;` to the `__copilot__/proxy` location block. If streams still arrive in bursts, Cloudflare's automatic compression may be buffering `text/event-stream`; the proxy already sends `Cache-Control: no-transform` to opt out, but you can also disable Brotli/gzip for that route in your Cloudflare dashboard.

API Key Not Working
If requests fail with `Missing API key for provider: …`, the plugin received an empty API key for the selected provider:

- Confirm the key is present under `providers.<name>.apiKey` – not at a higher level.
- If you load the key via `env('…')`, make sure the variable is actually set in the environment the Panel runs under. CLI and web server environments often differ.
- If you use a closure, verify it returns a non-empty string for the current Panel user – for example when returning different keys based on user role.
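A closure-based setup can be sketched as follows. The role check and the `COPILOT_*` environment variable names are illustrative assumptions, not required names:

```php
// config.php – sketch only; adapt key names to your setup
'johannschopplich.copilot' => [
    'providers' => [
        'openai' => [
            // Must return a non-empty string for every Panel user who
            // may trigger generations, or requests will fail with
            // "Missing API key for provider: openai"
            'apiKey' => function () {
                $role = kirby()->user()?->role()->name();

                return $role === 'admin'
                    ? env('COPILOT_OPENAI_KEY_ADMIN')
                    : env('COPILOT_OPENAI_KEY_DEFAULT');
            }
        ]
    ]
]
```

If either environment variable is unset for the web server's environment, the closure returns an empty value and triggers exactly the error above.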
If requests fail with `Invalid provider: …`, the top-level provider key in your config doesn't match one of `google`, `openai`, `anthropic`, or `mistral`.
Blocks Generation Returns Malformed Content
When generating blocks or layouts, you might see missing fields, empty results, or incorrectly structured content. This usually comes down to the AI model's ability to handle nested JSON schemas.
A few things to try:
- Switch to Google Gemini – it has the best support for structured output with nested schemas.
- Simplify your prompt or generate fewer blocks at a time.
- Enable debug logging to inspect the raw AI response:

  ```php
  // config.php
  'johannschopplich.copilot' => [
      'logLevel' => 'debug'
  ]
  ```
Requests Fail with 404 or JSON Parse Errors from a Gateway
If your OpenAI-compatible endpoint returns 404 on `/v1/responses` or requests fail with parse errors, the endpoint likely only exposes `/v1/chat/completions`. Set `providers.openai.api` to `chat`.
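Assuming the standard plugin config layout shown elsewhere on this page, that option can be set like this:

```php
// config.php – `providers.openai.api` is the relevant option here
'johannschopplich.copilot' => [
    'providers' => [
        'openai' => [
            'apiKey' => env('OPENAI_API_KEY'),
            // Use the /v1/chat/completions endpoint instead of /v1/responses
            'api' => 'chat'
        ]
    ]
]
```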
Inline Suggestions Not Appearing
If ghost text suggestions don't appear when typing in writer fields, check the following:
- Make sure the `copilot-suggestions` mark is added to your writer field:

  ```yaml
  text:
    type: writer
    marks:
      - copilot-suggestions
  ```
- Verify that `completion` is not disabled in your global configuration.
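For reference, a global setting like the following would suppress ghost text entirely. The exact shape of the `completion` option is assumed here – check the options reference for your installed version:

```php
// config.php – if completion is turned off, inline suggestions will not appear
'johannschopplich.copilot' => [
    'completion' => false
]
```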