Unable to call Streaming Response from FastAPI in production

simondpalmerHOBBY

8 months ago

My StreamingResponse from FastAPI (running under Hypercorn) works in development but not in production on Railway.
The deploy logs show Prisma debug output but stop midway through the function with no error. On the frontend the request errors with a 504 because it just times out.

Is there anything unique I should be aware of with Streaming Responses on Railway?

Project ID: 272293fe-814d-4a92-9d85-82c242f56daa

The API route I am calling is attached.
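
Roughly, the route looks like this (a simplified sketch, not the exact attached code; the request model and generator below are placeholders):

# Minimal sketch of a streaming FastAPI route; the request model and
# generator are placeholders, only the path comes from the real setup.
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class GenQueryMessage(BaseModel):
    role: str
    interest_id: str

class GenQueryRequest(BaseModel):
    messages: list[GenQueryMessage]

async def generate_chunks(interest_id: str):
    # placeholder for the real generation logic (e.g. an LLM stream)
    for chunk in ["partial ", "response ", "text"]:
        yield chunk

@app.post("/api/parcel/genquery")
async def gen_query(payload: GenQueryRequest):
    interest_id = payload.messages[0].interest_id
    return StreamingResponse(generate_chunks(interest_id), media_type="text/plain")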


8 months ago

This is just SSE, right?


simondpalmerHOBBY

8 months ago

Yes, it's via an API call from a Next.js server.


8 months ago

No issues with SSE on Railway.


8 months ago

Are you sending SSEs to a client's browser, or somewhere else? I need a little more context here.


simondpalmerHOBBY

8 months ago

Yes, sorry, I am sending it to a client's browser. They make an API call from the Next.js backend to Railway for this 'gen_query'.


8 months ago

Where does FastAPI come into play between Next.js and a client's browser?


simondpalmerHOBBY

8 months ago

A call from next/api is sent to FastAPI via:


simondpalmerHOBBY

8 months ago

const fetchResponse = await fetch(`${process.env.NODE_ENV !== 'production' ? 'http://127.0.0.1:8000' : 'https://ideally.up.railway.app'}/api/parcel/genquery`, {
    method: 'POST',
    headers: {
      'Accept': 'application/json',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ "messages": [{ role: "user", interest_id: lotInterestAccess.interest.id }] })
  })

simondpalmerHOBBY

8 months ago

the whole route.ts is as follows:

import { NextResponse, NextRequest } from 'next/server'
import { OpenAIStream, StreamingTextResponse } from 'ai'
export const maxDuration = 300;
export const dynamic = 'force-dynamic'; // always run dynamically

// POST /api/
export async function POST(req: NextRequest) {
  const { lotInterestAccess } = await req.json();

  try {
    // const fetchResponse = await fetch(`${process.env.NODE_ENV !== 'production' ? 'http://127.0.0.1:5000' : 'https://ideally-api.up.railway.app'}/ideal/zoneinfo?lotInterestId=${lotInterestAccess.interest.id}&zoneType=${lotInterestAccess.interest.lot.zoneType}&zoneDescription=${lotInterestAccess.interest.lot.zoneDescription}`)
    const fetchResponse = await fetch(`${process.env.NODE_ENV !== 'production' ? 'http://127.0.0.1:8000' : 'https://ideally.up.railway.app'}/api/parcel/genquery`, {
      method: 'POST',
      headers: {
        'Accept': 'application/json',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ "messages": [{ role: "user", interest_id: lotInterestAccess.interest.id }] })
    })

    return new StreamingTextResponse(fetchResponse.body!);
  } catch (error) {
    // the paste ended at the return above; a minimal error response is assumed here
    return NextResponse.json({ error: 'genquery request failed' }, { status: 500 });
  }
}

8 months ago

for testing, cut out the nextjs app and call the public domain of the fastapi service
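
For example, something like this hits the FastAPI service directly and prints the stream as it arrives (a sketch using the requests library; the interest_id value is a placeholder):

# Call the FastAPI service directly, bypassing the Next.js proxy.
import requests

resp = requests.post(
    "https://ideally.up.railway.app/api/parcel/genquery",
    json={"messages": [{"role": "user", "interest_id": "PLACEHOLDER_ID"}]},
    stream=True,
    timeout=300,
)
resp.raise_for_status()

# Print chunks as they arrive to confirm streaming works end to end.
for chunk in resp.iter_content(chunk_size=None):
    print(chunk.decode(errors="replace"), end="", flush=True)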


simondpalmerHOBBY

8 months ago

Okay, will do. I have tested several different ways to make API calls, but it seems that once it hits one error or warning it stalls and I can't call it again… I thought it might be a Hypercorn thing.


8 months ago

This is no doubt a code or config issue; it's just a question of where.


simondpalmerHOBBY

8 months ago

What is the best way of logging on Railway during API calls?


8 months ago

JSON structured logs would be best.
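
For instance, a minimal sketch of JSON-structured logging with the standard logging module (the field names here are illustrative):

# Emit one JSON object per line so the log viewer can pick up level and message.
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname.lower(),
            "message": record.getMessage(),
            "logger": record.name,
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.DEBUG, handlers=[handler])

logging.info("gen_query started")  # -> {"level": "info", "message": "gen_query started", "logger": "root"}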


simondpalmerHOBBY

8 months ago

Okay, I'll try it out. Thanks!


simondpalmerHOBBY

8 months ago

How come the debug output in the Deploy Logs is highlighted red with level: "error", with really no other information besides that?


simondpalmerHOBBY

8 months ago

I get that this means it's printing to stderr.


8 months ago

Are you doing JSON logging?


simondpalmerHOBBY

8 months ago

A lot of it is print(). Should I use 'structlog', or is there a preference on Railway?


8 months ago

If you are just using print, what other information would you expect to be printed besides your message?


simondpalmerHOBBY

8 months ago

I was just confused as to why it 'errored' when printing to stderr.
The main problem is that I am struggling to work out how to debug this issue, because all I get is a FUNCTIONINVOCATIONTIMEOUT when I make calls in production. In development no errors come up and everything works fine.
What would be the best way to debug this?


8 months ago

Adding verbose debug logging. You are finding it hard to debug because you do not have the level of observability into your code that you need.


simondpalmerHOBBY

8 months ago

Okay, so I added:

import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

This offers plenty of system info during deploy. However, the debug log stops displaying once Prisma is disconnected. After that, nothing (there should be callbacks logged at this point). If I try to make any further requests, no debug output is displayed at all.


8 months ago

Are you making sure to log unbuffered?


simondpalmerHOBBY

8 months ago

How do I do that?


8 months ago

You would need to reference the logging / Python docs for that.
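
For reference, a couple of ways to keep Python output from being held in a buffer (a sketch):

import logging
import sys

# Option 1: force-flush each print so it isn't held in stdout's block buffer.
print("before prisma disconnect", flush=True)

# Option 2: run the process unbuffered, e.g. set PYTHONUNBUFFERED=1 in the
# service variables or start the app with `python -u`.

# Note: logging's StreamHandler flushes after every record, so messages routed
# through logging (as below) should appear immediately.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.debug("this line is flushed as soon as it is emitted")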


simondpalmerHOBBY

8 months ago

I figured it out. When disconnecting from the Prisma Query Engine it would just freeze the server. I switched from Hypercorn to Uvicorn and now it works!
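
(For anyone landing here later, a sketch of starting the app under Uvicorn; "main:app" and the PORT handling are assumptions about this project's layout:)

# Launch FastAPI under Uvicorn instead of Hypercorn; the module path is assumed.
import os
import uvicorn

if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=int(os.environ.get("PORT", "8000")))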


8 months ago

awesome, glad to hear it


simondpalmerHOBBY

8 months ago

thanks for the support


8 months ago

no problem!