11 days ago
I never had any issue with requests to my service. Recently I have been getting 502 errors like the one below on some requests; sometimes they work, sometimes they don't. It seems random, but it could be related to the number of requests, which peaks at about 50 per second.
[Nest] 15 - 08/12/2025, 3:20:10 PM ERROR [ExceptionsHandler] request aborted
BadRequestError: request aborted
at IncomingMessage.onAborted (/app/node_modules/raw-body/index.js:245:10)
at IncomingMessage.emit (node:events:517:28)
at IncomingMessage._destroy (node:_http_incoming:224:10)
at _destroy (node:internal/streams/destroy:109:10)
at IncomingMessage.destroy (node:internal/streams/destroy:71:5)
at abortIncoming (node:_http_server:781:9)
at socketOnClose (node:_http_server:775:3)
at Socket.emit (node:events:529:35)
at TCP.<anonymous> (node:net:350:12)
Any idea to keep it stable?
It seems to be an application-level error. Could you please share a snippet of your build and HTTP logs that contain the error?
Build
Is the URL: cdm-server-production.up.railway.com?
It is only listening for HTTP requests!! Try that.
And why on earth is it showing this in the UI?
Unfortunately I can't find any example right now in PROD.
But yes, my understanding is that it can be due to a large number of requests arriving all at once. Still, this should be configurable, or it should wait a bit longer before a 502 is thrown.
Can you please provide the initial setup config lines for your server?
this is my main.ts:
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

const port = process.env.PORT || 3000;

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Enable CORS globally
  app.enableCors({
    origin: '*',
    methods: 'GET,HEAD,PUT,PATCH,POST,DELETE,OPTIONS',
    allowedHeaders: 'Content-Type, Authorization, X-API-Key',
  });

  await app.listen(port, "0.0.0.0");
}
bootstrap();
In your server file, you can add the following changes:
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { Request, Response, NextFunction } from 'express';

const port = process.env.PORT || 3000;

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Enable CORS globally
  app.enableCors({
    origin: '*',
    methods: 'GET,HEAD,PUT,PATCH,POST,DELETE,OPTIONS',
    allowedHeaders: 'Content-Type, Authorization, X-API-Key',
  });

  // Middleware to listen for aborted requests
  app.use((req: Request, res: Response, next: NextFunction) => {
    req.on('aborted', () => {
      console.warn('Request aborted by the client');
    });
    next();
  });

  // Global error handling middleware for aborted request error
  app.use((err: any, req: Request, res: Response, next: NextFunction) => {
    if (err.name === 'BadRequestError' && err.message === 'request aborted') {
      console.warn('Caught request aborted error:', err);
      // 499 indicates client closed the request
      res.status(499).send('Client Closed Request');
    } else {
      next(err);
    }
  });

  // Increase server timeout (adjust timeout as needed)
  const server = app.getHttpServer();
  server.setTimeout(120000); // 2 minutes

  await app.listen(port, "0.0.0.0");
}
bootstrap();
This will:
a. Log aborted requests without crashing the app.
b. Respond with status 499 (Client Closed Request) to aborted requests gracefully.
c. Increase server timeout to reduce premature request terminations under load.
And I am sure that with these changes you will cover the small edge cases that were previously untouched.
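If you want to reproduce the "request aborted" path locally before the next burst of traffic, a raw socket that promises a body and never finishes sending it should trigger the same error. This is just a sketch: the host, port, path, and the callsign value are assumptions taken from your main.ts and HTTP logs.

import { connect } from 'node:net';

// Open a plain TCP connection to the local server and send an HTTP request
// that declares a 1000-byte body but only delivers a fragment of it.
const socket = connect({ host: '127.0.0.1', port: 3000 }, () => {
  socket.write(
    'POST /slotService/setCdmData?callsign=TEST123 HTTP/1.1\r\n' +
      'Host: 127.0.0.1\r\n' +
      'Content-Type: application/json\r\n' +
      'Content-Length: 1000\r\n' +
      '\r\n' +
      '{"partial":',
  );
  // Drop the connection mid-body; raw-body should then raise "request aborted"
  // and the new middleware should log "Request aborted by the client".
  setTimeout(() => socket.destroy(), 100);
});

socket.on('error', (err) => console.error('repro socket error:', err.message));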
Unfortunately, it seems the error can still be reproduced:
{
"requestId": "-0kJlgvZTtSALGLtg5js2Q",
"timestamp": "2025-08-13T09:43:48.888053779Z",
"method": "POST",
"path": "/slotService/setCdmData",
"host": "cdm-server-production.up.railway.app",
"httpStatus": 502,
"upstreamProto": "",
"downstreamProto": "HTTP/1.1",
"responseDetails": "Retried single replica",
"totalDuration": 28,
"upstreamAddress": "",
"clientUa": "",
"upstreamRqDuration": 27,
"txBytes": 109,
"rxBytes": 352,
"srcIp": "xxxxx",
"edgeRegion": "europe-west4",
"upstreamErrors": "[{\"deploymentInstanceID\":\"009a9bb2-1e3c-4215-bace-4d1902ff37b7\",\"duration\":22,\"error\":\"connection closed unexpectedly\"},{\"deploymentInstanceID\":\"009a9bb2-1e3c-4215-bace-4d1902ff37b7\",\"duration\":5,\"error\":\"body read after close\"}]"
}
Roger, you can do one thing:
Edit the PORT NUMBER and change it to something above 5000 (lower-numbered ports are sometimes used for internal routing and can occasionally cause issues).
Make sure to change it in the public networking setting also
One in main.ts, and the second in the Railway UI:
Select the desired service on the dashboard, go to Settings -> Public Networking -> Edit -> Update the port number there
Give that a spin by creating a new deployment of the service.
Was this coming from the build logs or the HTTP logs?
And this was not showing up previously, right? It only appeared after you updated the server file?
If the previous change did not help, revert the file to its original state and then just update the PORT NUMBER. Most of the time, a 502 is caused by using the wrong port!
Are you sure that the port you are using is specific to just this one service (not being used by any of your other deployed services)?
Also confirm that you have not set the service to Serverless!
Just to make sure, in main.ts:
should I change the line const port = process.env.PORT || 3000; ?
Yes, do this in main.ts:
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

const port = process.env.PORT || 5656;

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Enable CORS globally
  app.enableCors({
    origin: '*',
    methods: 'GET,HEAD,PUT,PATCH,POST,DELETE,OPTIONS',
    allowedHeaders: 'Content-Type, Authorization, X-API-Key',
  });

  const server = app.getHttpServer();
  server.setTimeout(120000); // 2 minutes

  await app.listen(port, "0.0.0.0");
}
bootstrap();
And then in the public networking section, update that to 5656 too!
Use the setTimeout function too, so that the server can handle requests for a longer time.
That's the best I can guide you.
Btw, you are on which plan?
Please provide a screenshot of the port number dropdown in its expanded form.
Your app is listening on 8080 by default! So mention that in the server file!
Just select 8080 from the dropdown, and update the main.ts too
If this did not work, try this:
Yours is just a NestJS server, right? And main.ts is the only file responsible for it.
Railway usually finds the target port from what you have defined in your server file. Assuming you are using 8080 as the port number in your main.ts file, do these things:
a. Provide a PORT variable in the VARIABLES panel for that service, and set it to 8080,
b. Then, via the PUBLIC ENDPOINT config, select port 8080.
Generally, the magic port found by Railway matches what is defined in the code. But to be on the safe side, make sure everything points at the same port so that all routing reaches the correct one; a quick way to confirm this is sketched below.
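As a sanity check, you could log the resolved port at startup so the deploy logs show which value the app actually bound to. A minimal sketch based on your existing main.ts; the 8080 fallback is only an assumption:

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

// The PORT service variable wins; 8080 is only the local fallback.
const port = process.env.PORT || 8080;

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(port, '0.0.0.0');
  // One log line so the deploy logs show which port the app bound to,
  // and whether it came from the PORT variable or the fallback.
  console.log(`Listening on 0.0.0.0:${port} (PORT env: ${process.env.PORT ?? 'not set'})`);
}
bootstrap();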
Great!! Do mark my previous reply as the solution for your query via the help-station
Unfortunately, the main issue is still not solved:
Deploy logs:
Request aborted by the client
[Nest] 15 - 08/13/2025, 10:50:10 AM ERROR [ExceptionsHandler] request aborted
BadRequestError: request aborted
at IncomingMessage.onAborted (/app/node_modules/raw-body/index.js:245:10)
at IncomingMessage.emit (node:events:529:35)
at IncomingMessage._destroy (node:_http_incoming:224:10)
at _destroy (node:internal/streams/destroy:109:10)
at IncomingMessage.destroy (node:internal/streams/destroy:71:5)
at abortIncoming (node:_http_server:781:9)
at socketOnClose (node:_http_server:775:3)
at Socket.emit (node:events:529:35)
at TCP.<anonymous> (node:net:350:12)
HTTP logs:
{
"requestId": "o4bZiwMATXOKuoSbSh8_Fw",
"timestamp": "2025-08-13T10:50:10.037079244Z",
"method": "POST",
"path": "/slotService/setCdmData",
"host": "cdm-server-production.up.railway.app",
"httpStatus": 502,
"upstreamProto": "",
"downstreamProto": "HTTP/1.1",
"responseDetails": "Retried single replica",
"totalDuration": 982,
"upstreamAddress": "",
"clientUa": "",
"upstreamRqDuration": 982,
"txBytes": 109,
"rxBytes": 351,
"srcIp": "109.154.104.110",
"edgeRegion": "europe-west4",
"upstreamErrors": "[{\"deploymentInstanceID\":\"519c2578-3898-4662-8963-bf970346a9c6\",\"duration\":977,\"error\":\"connection reset by peer\"},{\"deploymentInstanceID\":\"519c2578-3898-4662-8963-bf970346a9c6\",\"duration\":5,\"error\":\"body read after close\"}]"
}
Using:
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { Request, Response, NextFunction } from 'express';

const port = process.env.PORT || 8080;

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Enable CORS globally
  app.enableCors({
    origin: '*',
    methods: 'GET,HEAD,PUT,PATCH,POST,DELETE,OPTIONS',
    allowedHeaders: 'Content-Type, Authorization, X-API-Key',
  });

  // Middleware to listen for aborted requests
  app.use((req: Request, res: Response, next: NextFunction) => {
    req.on('aborted', () => {
      console.warn('Request aborted by the client');
      // Optional: Cleanup resources related to this request here
    });
    next();
  });

  // Global error handling middleware for aborted request error
  app.use((err: any, req: Request, res: Response, next: NextFunction) => {
    if (err.name === 'BadRequestError' && err.message === 'request aborted') {
      console.warn('Caught request aborted error:', err);
      res.status(499).send('Client Closed Request'); // 499 indicates client closed the request
    } else {
      next(err);
    }
  });

  // Increase server timeout (adjust timeout as needed)
  const server = app.getHttpServer();
  server.setTimeout(120000); // 2 minutes

  await app.listen(port, "0.0.0.0");
}
bootstrap();
Did some of the equality operators change? This is the version I am looking at:
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { Request, Response, NextFunction } from 'express';

const port = process.env.PORT || 8080;

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  app.enableCors({
    origin: '*',
    methods: 'GET,HEAD,PUT,PATCH,POST,DELETE,OPTIONS',
    allowedHeaders: 'Content-Type, Authorization, X-API-Key',
  });

  // Middleware to detect aborted requests
  app.use((req: Request, res: Response, next: NextFunction) => {
    req.on('aborted', () => {
      console.warn('Request aborted by the client');
      // Optional: Cleanup logic here
    });
    next();
  });

  // Global error handling middleware
  app.use((err: any, req: Request, res: Response, next: NextFunction) => {
    if (err.name === 'BadRequestError' && err.message === 'request aborted') {
      console.warn('Caught request aborted error:', err);
      res.status(499).send('Client Closed Request'); // 499 status code for client abort
    } else {
      next(err);
    }
  });

  const server = app.getHttpServer();
  server.setTimeout(120000); // 2 minutes

  await app.listen(port, '0.0.0.0');
}
bootstrap();
Just to be sure: Public endpoint is listening to 8080, and you have set up the PORT variable for the service too?
Yes, the public endpoint is on 8080, but I don't have the PORT variable defined. Setting it would make sense, but if it is not defined, 8080 is used as the fallback anyway.
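If you want the deploy logs to make that explicit, a tiny startup check would show whether the fallback is actually being used. Just a sketch; the 8080 fallback mirrors your current main.ts:

// Warn when the PORT service variable is missing so the fallback is never silent.
const rawPort = process.env.PORT;
if (!rawPort) {
  console.warn('PORT is not set on this service; falling back to 8080');
}
const port = Number(rawPort) || 8080;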
Can you please provide the slotService/setCdmData endpoint's code?
@UseGuards(Key)
@Post('setCdmData')
async setCdmData(
  @Query('callsign') callsign: string,
  @Query('tobt') tobt: string,
  @Query('tsat') tsat: string,
  @Query('ttot') ttot: string,
  @Query('ctot') ctot: string,
  @Query('reason') reason: string,
) {
  if (callsign != '') {
    return await this.flightService.setCdmData(
      callsign,
      tobt,
      tsat,
      ttot,
      ctot,
      reason,
    );
  }
}
Everything seems perfect. Just to be on the safer side, add these:
server.keepAliveTimeout = 65000;
server.headersTimeout = 66000;
Yes, before the app.listen call, and after this line: server.setTimeout(120000). Roughly like this:
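A sketch of where those two lines would sit in the bootstrap() from your current main.ts; the values are the ones suggested above:

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

const port = process.env.PORT || 8080;

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // (CORS and the abort-handling middleware from your current main.ts go here.)

  const server = app.getHttpServer();
  server.setTimeout(120000); // overall socket timeout: 2 minutes
  // Keep idle connections open slightly longer than a typical proxy idle timeout,
  // and keep headersTimeout just above keepAliveTimeout so keep-alive sockets
  // are not torn down while a new request's headers are still arriving.
  server.keepAliveTimeout = 65000;
  server.headersTimeout = 66000;

  await app.listen(port, '0.0.0.0');
}
bootstrap();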
This could be because of the high number of incoming requests, which is why you are seeing a 502 on roughly 1 in 10 requests.
Nope! You can try upgrading once to see whether another plan serves you better, but it is not a problem with your application or your Railway subscription.
Just waiting for the solution to be approved, so that this thread can be marked as solved ✌️
10 days ago
!s
Status changed to Solved by angelo-railway • 10 days ago
10 days ago
Thanks so much for answering the question! 🙂