Hectic response time metrics
djordje-st
HOBBYOP

2 months ago

My project has been having varying response times ranging from 5ms up to 1.5 seconds (see screenshot in attachment).

I'm using private networking and everything so I'm curious what may be causing this.

I have a TanStack Start project with Drizzle and a Postgres database.

Let me know if more details are required.

Attachments

$10 Bounty

8 Replies

Railway
BOT

2 months ago

Hey there! We've found the following might help you get unblocked faster:

If you find the answer from one of these, please let us know by solving the thread!


ilyassbreth
FREE

2 months ago

i think this looks like a connection pooling issue. if you're using postgres.js with drizzle you gotta set max connections, otherwise it opens a new one per request:

```ts
const client = postgres(process.env.DATABASE_URL, { max: 10 })
```

what does your db client setup look like?



djordje-st
HOBBYOP

2 months ago

This is my setup:

```ts
import { drizzle } from 'drizzle-orm/node-postgres'
import { Pool } from 'pg'
import * as schema from './schema'

const pool = new Pool({
  connectionString: process.env.DATABASE_URL!,
  max: 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 5000,
  keepAlive: true,
  keepAliveInitialDelayMillis: 10000,
})

export const db = drizzle(pool, {
  schema,
})
```

ilyassbreth
FREE

2 months ago

interesting, your pooling setup looks solid actually. few questions:

do the spikes happen after periods of inactivity? (could be cold starts). also, is it all routes or just specific ones? and have you tried testing with the public db url temporarily to rule out private networking?


djordje-st
HOBBYOP

2 months ago

Don't think it's because of cold starts; the longest period of inactivity is about 30 minutes, probably even less.

Looking at the logs, the queries that take 1s or longer are all server functions from TanStack Start, not public API endpoints.

I'll give it a go with the public URL and report back.


ilyassbreth
FREE

2 months ago

ok, i'm waiting for your response


2 months ago

Missing or misconfigured connection pooling was a very good callout, but I think the most productive next step here would be to implement some detailed tracing so that you can pinpoint exactly where the slowdown is coming from.
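One lightweight way to do that, sketched here with an invented helper (not a real tracing library; in practice OpenTelemetry or similar plays this role), is to time named spans inside each server function:

```typescript
// Minimal tracing sketch: `makeTracer` is an assumed helper, not a library
// API. It times named async steps so a slow request can be broken down.
function makeTracer(now: () => number = Date.now) {
  const spans: { name: string; ms: number }[] = []
  return {
    // time an async step and record it under `name`
    async span<T>(name: string, fn: () => Promise<T>): Promise<T> {
      const start = now()
      try {
        return await fn()
      } finally {
        spans.push({ name, ms: now() - start })
      }
    },
    // one line you can drop into the request log to see where time went
    report: () => spans.map((s) => `${s.name}: ${s.ms}ms`).join(', '),
  }
}
```

Logging the report per request would show whether the 1s is spent in the query itself, connection acquisition, or somewhere else entirely.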


djordje-st
HOBBYOP

2 months ago

Unfortunately the public URL change didn't make a difference. Metrics screenshot attached.

I also added a cache layer for Drizzle using this: https://orm.drizzle.team/docs/cache#custom-cache.

Everything feels fast on my end but the metrics still look off.

@brody do you happen to know a good example for tanstack start?

I have logging middleware for every request and in some server functions as well.

I'll attach a cleaned up sample of the logs from the global middleware. These are all for requests that take 1s+.
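For reference, a generic shape such slow-request logging could take (the names here are assumptions for illustration, not TanStack Start's actual middleware API) is a wrapper that only logs calls exceeding a threshold:

```typescript
// Hypothetical slow-request logger: wraps any async handler and logs a
// warning when the call exceeds `thresholdMs`. Names are invented.
type Handler<T> = () => Promise<T>

function logSlow<T>(
  name: string,
  handler: Handler<T>,
  thresholdMs = 1000,
  log: (msg: string) => void = console.warn,
): Handler<T> {
  return async () => {
    const start = Date.now()
    try {
      return await handler()
    } finally {
      const took = Date.now() - start
      if (took >= thresholdMs) log(`[slow] ${name} took ${took}ms`)
    }
  }
}
```

Filtering at log time like this keeps the output focused on exactly the 1s+ requests in question.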

