Unable to call Streaming Response from FastAPI in production

simondpalmerHOBBY

10 months ago

My StreamingResponse with FastAPI using Hypercorn works in development but not during production on Railway.
The deploy logs show Prisma debug output but stop midway through the function with no error. On the frontend it errors with a 504 because the request just times out.

Is there anything unique I should be aware of with Streaming Responses on Railway?

Project ID: 272293fe-814d-4a92-9d85-82c242f56daa

My API route I am calling is attached
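For context on what the FastAPI side is doing: a streaming endpoint typically wraps an async generator in a `StreamingResponse`, and the client receives whatever chunks the generator yields. A minimal, dependency-free sketch of that generator shape (the tokens and names here are made up for illustration; in the real app the generator would be passed to `StreamingResponse(..., media_type="text/event-stream")`):

```python
import asyncio
from typing import AsyncIterator


async def token_stream() -> AsyncIterator[str]:
    """Simulates the kind of async generator FastAPI's StreamingResponse consumes."""
    for token in ["Hello", " ", "world"]:
        await asyncio.sleep(0)  # yield control, as a real LLM/DB call would
        yield token


# In the FastAPI route this would be:
#   return StreamingResponse(token_stream(), media_type="text/event-stream")
# Here we just drain it to show the chunked shape the client sees.
async def collect(stream: AsyncIterator[str]) -> list[str]:
    return [chunk async for chunk in stream]


chunks = asyncio.run(collect(token_stream()))
print(chunks)  # ['Hello', ' ', 'world']
```

If anything between the generator and the client buffers the whole body (a proxy, or a server that doesn't flush per chunk), the stream looks like a hang until the timeout fires.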


10 months ago

this is just SSE right?


simondpalmerHOBBY

10 months ago

Yes, it's via an API call from a Next.js server


10 months ago

no issues with SSE on railway -


10 months ago

are you sending SSEs to a client's browser or? need a little more context here


simondpalmerHOBBY

10 months ago

Yes, sorry, I am sending it to a client's browser. They make an API call from the Next.js backend to Railway for this 'gen_query'.


10 months ago

where does fastapi come into play with next and a clients browser


simondpalmerHOBBY

10 months ago

A call from next/api is sent to FastAPI via:


simondpalmerHOBBY

10 months ago

const fetchResponse = await fetch(`${process.env.NODE_ENV !== 'production' ? 'http://127.0.0.1:8000' : 'https://ideally.up.railway.app'}/api/parcel/genquery`, {
    method: 'POST',
    headers: {
      'Accept': 'application/json',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ "messages": [{ role: "user", interest_id: lotInterestAccess.interest.id }] })
  })

simondpalmerHOBBY

10 months ago

the whole route.ts is as follows:

import { NextResponse, NextRequest } from 'next/server'
import { OpenAIStream, StreamingTextResponse } from 'ai'
export const maxDuration = 300;
export const dynamic = 'force-dynamic'; // always run dynamically

// POST /api/
export async function POST(req: NextRequest) {

  const { lotInterestAccess } = await req.json();

  try {
    // const fetchResponse = await fetch(`${process.env.NODE_ENV !== 'production' ? 'http://127.0.0.1:5000' : 'https://ideally-api.up.railway.app'}/ideal/zoneinfo?lotInterestId=${lotInterestAccess.interest.id}&zoneType=${lotInterestAccess.interest.lot.zoneType}&zoneDescription=${lotInterestAccess.interest.lot.zoneDescription}`)
    const fetchResponse = await fetch(`${process.env.NODE_ENV !== 'production' ? 'http://127.0.0.1:8000' : 'https://ideally.up.railway.app'}/api/parcel/genquery`, {
      method: 'POST',
      headers: {
        'Accept': 'application/json',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ "messages": [{ role: "user", interest_id: lotInterestAccess.interest.id }] })
    })

    return new StreamingTextResponse(fetchResponse.body!);
  } catch (error) {
    // close out the try; log and surface upstream failures
    console.error(error);
    return NextResponse.json({ error: 'Upstream request failed' }, { status: 500 });
  }
}

10 months ago

for testing, cut out the nextjs app and call the public domain of the fastapi service
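For a quick direct test, something like the following should work (endpoint taken from the snippets above; `-N` disables curl's output buffering so streamed chunks print as they arrive, and the `interest_id` value is a placeholder you'd swap for a real one):

```shell
curl -N -X POST 'https://ideally.up.railway.app/api/parcel/genquery' \
  -H 'Content-Type: application/json' \
  -d '{"messages": [{"role": "user", "interest_id": "SOME_ID"}]}'
```

If the stream works here but not through the Next.js route, the problem is in the proxy layer; if it hangs here too, it's in the FastAPI service itself.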


simondpalmerHOBBY

10 months ago

Okay, will do. I have tested several different ways to make API calls, but it seems once it hits one error or warning it stalls and I can't call it again… I thought it was maybe a Hypercorn thing


10 months ago

this is no doubt a code or config issue, it's just a question of where


simondpalmerHOBBY

10 months ago

What is the best way of logging on Railway during API calls?


10 months ago

json structured logs would be best


simondpalmerHOBBY

10 months ago

okay i'll try it out. thanks!


simondpalmerHOBBY

10 months ago

How come debug output in the Deploy Logs is highlighted red with level: "error", with really no other information besides the message itself?


simondpalmerHOBBY

10 months ago

I get that this means it's printing to stderr


10 months ago

are you doing json logging?


simondpalmerHOBBY

10 months ago

a lot of it is print(). Should I use 'structlog', or is there a preference on Railway?


10 months ago

if you are just using print what other information would you expect to be printed beside your message?


simondpalmerHOBBY

10 months ago

I was just confused as to why it 'errored' when it was only printing to stderr.
The main problem is that I am struggling to work out how to debug this, because all I get is a FUNCTION_INVOCATION_TIMEOUT when I make calls in production, while in development no errors come up and everything works fine.
What would be the best way to debug this?


10 months ago

Add verbose debug logging. You are finding it hard to debug because you do not have the level of observability into your code that you need.


simondpalmerHOBBY

10 months ago

Ok, so I added:

import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))  # redundant: basicConfig already attached a stdout handler, so this duplicates every line

which offers plenty of system info during deploy. The debug log stops once Prisma is disconnected, though; after that, nothing (there should be callbacks logged at that point). If I make any further requests, no debug output is displayed at all.


10 months ago

are you making sure to log unbuffered?


simondpalmerHOBBY

10 months ago

How do I do that?


10 months ago

you would need to reference the loggers / python docs for that
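For the Python side specifically, "unbuffered" usually comes down to making sure stdout is flushed per message. Bare print() to a pipe is block-buffered, so output can sit in the buffer until the process exits or freezes, which looks exactly like "the logs stopped". A small in-memory demonstration of the effect, plus the usual fixes as comments:

```python
import io

# Simulate a pipe: writes land in a text buffer, not the underlying sink.
raw = io.BytesIO()
buffered = io.TextIOWrapper(raw, newline="\n")
buffered.write("checkpoint: prisma disconnect reached\n")
assert raw.getvalue() == b""            # nothing reached the sink yet
buffered.flush()
assert b"checkpoint" in raw.getvalue()  # flushed through

# Fixes for a real service (pick one):
#   print("...", flush=True)   # flush per call
#   PYTHONUNBUFFERED=1         # env var in the service settings
#   python -u app.py           # unbuffered interpreter flag
# logging.StreamHandler flushes after every record, so switching from
# print() to logging also avoids the problem.
```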


simondpalmerHOBBY

10 months ago

I figured it out. Disconnecting from the Prisma Query Engine would just freeze the server. I switched from Hypercorn to Uvicorn and now it works!
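For anyone landing here later, the Uvicorn start command would look something like this (a sketch; `main:app` is a placeholder for the actual module path and app variable, and `$PORT` is the port the platform injects):

```shell
uvicorn main:app --host 0.0.0.0 --port "$PORT"
```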


10 months ago

awesome, glad to hear it


simondpalmerHOBBY

10 months ago

thanks for the support


10 months ago

no problem!