Request Timeout
ken199191964
HOBBYOP

3 months ago

I am creating a n8n workflow to generate an article using Basic LLM Chains.
I am using gpt 4.0 mini; however, after running for about 5 minutes it fails with a "Request Timeout" error. Is there any setting that would avoid it?

$10 Bounty

2 Replies

Railway
BOT

3 months ago

Hey there! We've found the following might help you get unblocked faster:

If you find the answer from one of these, please let us know by solving the thread!


bytekeim
PRO

3 months ago

Hey @ken199191964, saw your bounty on the n8n timeout issue with GPT-4o mini (I think you meant 4o mini, common mixup). I've run into similar headaches with LLM chains in n8n, especially when generating longer stuff like articles. The "Request Timeout" after ~5 mins is super common: it's usually n8n's built-in execution limits, or the AI node hitting a hardcoded 300-second cap. No magic setting fixes it entirely, but here's what works based on my tinkering and community fixes.

First off, the root cause: n8n has a default timeout around 5-10 mins for nodes to prevent hangs, and for AI stuff like Basic LLM Chains, if the model takes too long (GPT-4o mini can be slow on big outputs), it bombs out. If you're self-hosted, you can tweak env vars, but even then, it's not foolproof for super long tasks.

Quick fixes to try:

  • Boost the global timeout: If you're running n8n self-hosted (Docker or whatever), add these to your .env file:

    EXECUTIONS_TIMEOUT=1800      # soft timeout: 30 mins, in seconds
    EXECUTIONS_TIMEOUT_MAX=3600  # hard cap a workflow's own timeout can't exceed

    Restart n8n, and it should give more breathing room. But heads up, this won't help if it's the OpenAI API timing out on their end—check your API logs for that.

  • Redesign for shorter calls: This is the real winner. Break your article gen into chunks. Use a "Loop Over Items" node to process sections one by one. For example:

    1. First LLM chain: Generate an outline (quick, under 1 min).

    2. Loop over outline sections: Feed each to a separate LLM call.

    3. Add a "Wait" node (like 3-5 secs) between loops to avoid rate limits.

    4. Merge everything at the end. This way, no single call hits the 5-min wall. I've used this for similar content workflows and it drops errors big time.

  • Switch to HTTP Request node: Instead of Basic LLM Chains, use a custom HTTP Request to OpenAI's API. You can set your own timeout there (e.g., Timeout: 600000 ms under the node's Options). Pair it with batching via loops, and limit max_tokens to 2000-3000 per call so each request stays short.

  • Hosting tweaks: If you're on Railway or similar, their free tier has a 5-min idle limit that could be biting you. Upgrade or switch to a VPS for more control. Also, test with a faster model like GPT-3.5 if 4o mini's being pokey.
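To make the chunked approach concrete, here's a rough Python sketch of steps 1-4 outside n8n. The prompts, the `gpt-4o-mini` model name, and the `OPENAI_API_KEY` env var are my assumptions for illustration; `call_llm` is injected so you can swap in your own client (or a stub for testing):

```python
# Sketch of the chunked article generation described above (outline -> sections -> merge).
# Assumption: you'd adapt the prompts and model to your workflow; this is not n8n code.
import os
import time
import requests

def call_openai(prompt: str) -> str:
    """One short completion request with an explicit client-side timeout."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 2500,  # keep each call short so none nears a 5-min wall
        },
        timeout=600,  # seconds; same idea as the HTTP Request node's Timeout option
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def generate_article(topic: str, call_llm=call_openai, pause: float = 3.0) -> str:
    # 1. One quick call for the outline, one section title per line.
    outline = call_llm(f"Write a 5-point outline for an article about {topic}.")
    sections = [line.strip() for line in outline.splitlines() if line.strip()]

    # 2-3. Loop over sections with one short call each, pausing between calls
    #      (mirrors the Wait node, to stay under rate limits).
    body = []
    for heading in sections:
        body.append(call_llm(f"Write the '{heading}' section of an article about {topic}."))
        time.sleep(pause)

    # 4. Merge everything into the final article.
    return "\n\n".join(body)
```

The point is that each individual request stays well under the timeout, so a slow model only costs you wall-clock time, not a dead execution.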

If none of that clicks, share a screenshot of your workflow or logs—might spot something specific. This setup fixed my timeouts on long AI gens, hope it does the trick for you!

