Unable to call Streaming Response from FastAPI in production
simondpalmer
HOBBYOP

a year ago

My StreamingResponse with FastAPI using Hypercorn works in development but not in production on Railway.
The deploy logs show Prisma debug output but stop midway through the function with no error. On the frontend it errors with a 504 because the request simply times out.

Is there anything unique I should be aware of with Streaming Responses on Railway?

Project ID: 272293fe-814d-4a92-9d85-82c242f56daa

My API route I am calling is attached
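(The attached route is not reproduced in this thread. For readers, a FastAPI streaming route of this general shape might look like the sketch below; the path matches the /api/parcel/genquery endpoint called later in the thread, but the generator body is purely illustrative.)

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

async def event_stream():
    # Yield Server-Sent Events: each event is a "data: ...\n\n" block.
    for chunk in ("hello", "world"):
        yield f"data: {chunk}\n\n"

@app.post("/api/parcel/genquery")
async def gen_query():
    # text/event-stream marks this as SSE so proxies keep the stream open.
    return StreamingResponse(event_stream(), media_type="text/event-stream")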

30 Replies

brody
EMPLOYEE

a year ago

this is just SSE, right?


simondpalmer
HOBBYOP

a year ago

Yes, it's via an API call from a Next.js server


brody
EMPLOYEE

a year ago

no issues with SSE on Railway


brody
EMPLOYEE

a year ago

are you sending SSEs to a client's browser, or something else? need a little more context here


simondpalmer
HOBBYOP

a year ago

Yes, sorry, I am sending it to a client's browser. The client makes an API call from the Next.js backend to Railway for this 'gen_query'.


brody
EMPLOYEE

a year ago

where does FastAPI come into play between Next and a client's browser?


simondpalmer
HOBBYOP

a year ago

A call from next/api is sent to FastAPI via:


simondpalmer
HOBBYOP

a year ago

const fetchResponse = await fetch(`${process.env.NODE_ENV !== 'production' ? 'http://127.0.0.1:8000' : 'https://ideally.up.railway.app'}/api/parcel/genquery`, {
    method: 'POST',
    headers: {
      'Accept': 'application/json',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ "messages": [{ role: "user", interest_id: lotInterestAccess.interest.id }] })
  })

simondpalmer
HOBBYOP

a year ago

the whole route.ts is as follows:

import { NextResponse, NextRequest } from 'next/server'
import { OpenAIStream, StreamingTextResponse } from 'ai'

export const maxDuration = 300;
export const dynamic = 'force-dynamic'; // always run dynamically

// POST /api/
export async function POST(req: NextRequest) {
  const { lotInterestAccess } = await req.json();

  try {
    // const fetchResponse = await fetch(`${process.env.NODE_ENV !== 'production' ? 'http://127.0.0.1:5000' : 'https://ideally-api.up.railway.app'}/ideal/zoneinfo?lotInterestId=${lotInterestAccess.interest.id}&zoneType=${lotInterestAccess.interest.lot.zoneType}&zoneDescription=${lotInterestAccess.interest.lot.zoneDescription}`)
    const fetchResponse = await fetch(`${process.env.NODE_ENV !== 'production' ? 'http://127.0.0.1:8000' : 'https://ideally.up.railway.app'}/api/parcel/genquery`, {
      method: 'POST',
      headers: {
        'Accept': 'application/json',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ "messages": [{ role: "user", interest_id: lotInterestAccess.interest.id }] })
    })

    return new StreamingTextResponse(fetchResponse.body!);
  } catch (error) {
    // The tail of the file was cut off in the original post; a minimal close:
    console.error(error)
    return NextResponse.json({ error: 'Upstream request failed' }, { status: 500 })
  }
}

brody
EMPLOYEE

a year ago

for testing, cut out the nextjs app and call the public domain of the fastapi service
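(A direct test along those lines, sketched with Python's requests library; the URL and body are copied from the fetch() call above, and the interest_id value is a placeholder.)

import requests

resp = requests.post(
    "https://ideally.up.railway.app/api/parcel/genquery",
    headers={"Accept": "application/json", "Content-Type": "application/json"},
    json={"messages": [{"role": "user", "interest_id": 123}]},  # 123 is a placeholder
    stream=True,   # read the response incrementally instead of buffering it
    timeout=300,
)
for line in resp.iter_lines():
    if line:
        print(line.decode())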


simondpalmer
HOBBYOP

a year ago

Okay, will do. I have tested several different ways of making API calls, but it seems that once it hits one error or warning it stalls and I can't call it again… I thought it was maybe a Hypercorn thing.


brody
EMPLOYEE

a year ago

this is no doubt a code or config issue, it's just a question of where


simondpalmer
HOBBYOP

a year ago

What is the best way of logging on Railway during API calls?


brody
EMPLOYEE

a year ago

JSON structured logs would be best
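(A minimal standard-library take on that, for reference; the field names below are illustrative, though Railway can read a per-line JSON "level" attribute when filtering logs. Libraries like structlog or python-json-logger do the same with less code.)

import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    # Emit one JSON object per line so the platform can parse it.
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname.lower(),
            "message": record.getMessage(),
            "logger": record.name,
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.DEBUG, handlers=[handler])

logging.info("request received")  # -> {"level": "info", "message": "request received", "logger": "root"}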


simondpalmer
HOBBYOP

a year ago

okay, I'll try it out. thanks!


simondpalmer
HOBBYOP

a year ago

How come debug output in the Deploy Logs is highlighted red with level: "error" when there is really no other information besides the message itself?


simondpalmer
HOBBYOP

a year ago

I get that this means it's printing to stderr.


brody
EMPLOYEE

a year ago

are you doing JSON logging?


simondpalmer
HOBBYOP

a year ago

A lot of it is print(). Should I use structlog, or is there a preferred approach on Railway?


brody
EMPLOYEE

a year ago

if you are just using print, what other information would you expect to be printed besides your message?


simondpalmer
HOBBYOP

a year ago

I was just confused as to why it 'errored' when printing to stderr.
The main problem is that I am struggling to work out how to debug this issue, because all I get is a FUNCTION_INVOCATION_TIMEOUT when I make calls in production. In development I get no errors and it works fine.
What would be the best way to debug this?


brody
EMPLOYEE

a year ago

Add verbose debug logging. You are finding it hard to debug because you don't have the level of observability into your code that you need.


simondpalmer
HOBBYOP

a year ago

Ok, so I added:

import logging
import sys

# Note: basicConfig already installs a stdout handler, so the extra
# addHandler line below duplicates every log message.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

This offers plenty of system info during deploy. However, the debug log stops once Prisma disconnects; after that, nothing (there should be callbacks logged at this point). If I make any further requests, no debug output is displayed at all.
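(For context on the disconnect point mentioned above: with prisma-client-py, the disconnect typically lives in a FastAPI lifespan hook, roughly as in the sketch below. The client variable and module layout are assumptions, not the poster's actual code.)

from contextlib import asynccontextmanager
from fastapi import FastAPI
from prisma import Prisma

db = Prisma()

@asynccontextmanager
async def lifespan(app: FastAPI):
    await db.connect()      # connect the Prisma query engine on startup
    yield
    await db.disconnect()   # shutdown: the point where the logs went silent

app = FastAPI(lifespan=lifespan)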


brody
EMPLOYEE

a year ago

are you making sure to log unbuffered?


simondpalmer
HOBBYOP

a year ago

How do I do that?


brody
EMPLOYEE

a year ago

you would need to reference the logging / Python docs for that
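(For reference, the usual Python answers are either setting PYTHONUNBUFFERED=1 as an environment variable on the service, or flushing explicitly, roughly as below.)

import sys

# Option 1: set PYTHONUNBUFFERED=1 in the service's environment variables;
# this disables stdout/stderr buffering for the whole process.

# Option 2: flush on every write.
print("processing request", flush=True)   # flush print output immediately

sys.stdout.write("raw write\n")
sys.stdout.flush()                        # push the buffer out right away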


simondpalmer
HOBBYOP

a year ago

I figured it out. Disconnecting from the Prisma Query Engine would just freeze the server. I switched from Hypercorn to Uvicorn and now it works!
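(For anyone hitting the same thing: a minimal Uvicorn entry point for a FastAPI app on Railway might look like the sketch below. The main:app module path is an assumption, and PORT is the variable Railway injects.)

import os
import uvicorn

if __name__ == "__main__":
    uvicorn.run(
        "main:app",                              # assumed module:attribute path
        host="0.0.0.0",                          # listen on all interfaces
        port=int(os.environ.get("PORT", 8000)),  # Railway-provided port
    )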


brody
EMPLOYEE

a year ago

awesome, glad to hear it


simondpalmer
HOBBYOP

a year ago

thanks for the support


brody
EMPLOYEE

a year ago

no problem!

