Urgent: Database is not starting

vambePRO

9 months ago

Mounting volume on: /var/lib/containers/railwayapp/bind-mounts/b6e81e81-3415-4fe8-93ec-9cf9b12f9bf0/vol_iktz6xcxstkn2c7a

Certificate will not expire

PostgreSQL Database directory appears to contain a database; Skipping initialization

2024-07-19 17:56:03.605 UTC [5] LOG: starting PostgreSQL 16.3 (Debian 16.3-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit

2024-07-19 17:56:03.606 UTC [5] LOG: listening on IPv4 address "0.0.0.0", port 5432

2024-07-19 17:56:03.606 UTC [5] LOG: listening on IPv6 address "::", port 5432

2024-07-19 17:56:03.714 UTC [5] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"

2024-07-19 17:56:03.719 UTC [28] LOG: database system was interrupted; last known up at 2024-07-19 17:06:30 UTC

2024-07-19 17:56:03.763 UTC [28] LOG: invalid resource manager ID in checkpoint record

2024-07-19 17:56:03.763 UTC [28] PANIC: could not locate a valid checkpoint record

2024-07-19 17:56:03.764 UTC [5] LOG: startup process (PID 28) was terminated by signal 6: Aborted

2024-07-19 17:56:03.764 UTC [5] LOG: aborting startup due to startup process failure

2024-07-19 17:56:03.766 UTC [5] LOG: database system is shut down

container event container died

container event container restart

(The same mount, certificate, and startup sequence repeats on each container restart; every attempt ends in the same "invalid resource manager ID in checkpoint record" / PANIC pair shown above.)
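The PANIC means the startup process read the checkpoint location from `pg_control` but could not find a valid checkpoint record at that position in the WAL, so crash recovery cannot begin. Before attempting anything destructive, it can help to inspect what `pg_control` currently claims. A minimal sketch, assuming the volume is mounted at a typical Debian-image data path (adjust `PGDATA` to the real mount point):

```shell
# Hedged sketch: inspect the cluster state before attempting recovery.
# The data-directory path below is an assumption; adjust it to wherever
# the Railway volume is mounted inside the Postgres container.
PGDATA="${PGDATA:-/var/lib/postgresql/data/pgdata}"

# pg_controldata prints the checkpoint location stored in pg_control,
# i.e. the record the startup process is failing to read from the WAL.
if command -v pg_controldata >/dev/null 2>&1; then
    pg_controldata "$PGDATA"
else
    echo "pg_controldata not found; run this inside the Postgres container"
fi
```

The output's "Latest checkpoint location" and WAL segment fields tell you how far recovery would need to replay, which is useful context before deciding on a WAL reset.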


Awaiting User Response

24 Replies

9 months ago

Hey, I can't seem to find this volume anywhere? Did you delete it?


Status changed to Awaiting User Response railway[bot] 9 months ago


vambePRO

9 months ago

Nope, it's still there


Status changed to Awaiting Railway Response railway[bot] 9 months ago


vambePRO

9 months ago

We are trying to save the data, so we connected it to a GitHub-based service.

Attachments


vambePRO

9 months ago

But feel free to mount it to a Postgres service and see the build error for yourself.


9 months ago

Postgres-GCS-backup is attached to volume
vol_62wpv63vsw3hj4wb

which is different from the one you shared.


Status changed to Awaiting User Response railway[bot] 9 months ago


vambePRO

9 months ago

Nope, it's the same volume; that is the original. The ID could have changed when I changed its location.


Status changed to Awaiting Railway Response railway[bot] 9 months ago


vambePRO

9 months ago

That's the one I want to connect to a Postgres service.


vambePRO

9 months ago

It's called pgdata.


vambePRO

9 months ago

Feel free to detach it and mount it in a postgres service


9 months ago

Nope, it's the same volume; that is the original. The ID could have changed when I changed its location.

Sorry, do you mean you migrated it?


Status changed to Awaiting User Response railway[bot] 9 months ago


9 months ago

Could you please give us more information about what you've done here? It looks like whatever you've done has put your instance in a potentially corrupt state.

It's very important you share exactly what you've done so that we can potentially help you recover the data.


vambePRO

9 months ago

The service failed; I tried to redeploy Postgres, and the volume data is now corrupted. To get the data volume into my own hands, I made a service connected to it with a script that downloads it to GCP, but I still can't solve the corruption.
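For anyone in a similar spot, the safest first move is a file-level snapshot of the raw data directory before any further recovery attempts, since every failed startup can mutate the cluster further. A minimal sketch of the download-to-GCS idea; the `PGDATA` path and bucket name are hypothetical placeholders:

```shell
# Hedged sketch: freeze the raw data directory, then copy it to GCS.
# PGDATA and BUCKET are assumptions; substitute your real mount path
# and bucket name.
PGDATA="${PGDATA:-/var/lib/postgresql/data/pgdata}"
BUCKET="${BUCKET:-gs://example-pg-backups}"

# Archive the directory as-is, so later recovery attempts can always
# restart from this untouched copy.
if [ -d "$PGDATA" ]; then
    tar -czf /tmp/pgdata-snapshot.tar.gz -C "$(dirname "$PGDATA")" "$(basename "$PGDATA")"
else
    echo "no data directory at $PGDATA"
fi

# gsutil ships with the Google Cloud SDK.
if command -v gsutil >/dev/null 2>&1 && [ -f /tmp/pgdata-snapshot.tar.gz ]; then
    gsutil cp /tmp/pgdata-snapshot.tar.gz "$BUCKET/"
else
    echo "skipping upload (gsutil missing or no snapshot)"
fi
```

Working only on copies of the snapshot is what keeps a recoverable case recoverable.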


Status changed to Awaiting Railway Response railway[bot] 9 months ago


vambePRO

9 months ago

My startup depends on this data. Please help, this is urgent.


vambePRO

9 months ago

I've now attached it to a Postgres service so you can see the error clearly.

Attachments


vambePRO

9 months ago

Nope, it's the same volume; that is the original. The ID could have changed when I changed its location.

Sorry, do you mean you migrated it?

Yes.


vambePRO

9 months ago

I thought the issue could be the server location, so I changed it a couple of times.


9 months ago

That's not good. You may want to try some additional steps to recover it.

If you're willing to waive any liability, we can attempt to help you recover it.


Status changed to Awaiting User Response railway[bot] 9 months ago


vambePRO

9 months ago

Okay I am willing to waive any liability


Status changed to Awaiting Railway Response railway[bot] 9 months ago


9 months ago

I made some initial attempts but haven't been able to retrieve the data here or reset the WAL.

I have escalated this to a member of the team. We will attempt what we can over the next 24 hours and circle back.
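The WAL reset mentioned above usually means `pg_resetwal`, which rewrites `pg_control` with a fresh checkpoint so the server can start without replaying the broken WAL. It is strictly a last resort: it discards any unreplayed transactions and can leave the cluster internally inconsistent. A hedged sketch, assuming the same hypothetical data path as before:

```shell
# Hedged sketch of a WAL reset, assuming the volume is mounted at this
# (hypothetical) path inside a Postgres 16 container.
PGDATA="${PGDATA:-/var/lib/postgresql/data/pgdata}"

if command -v pg_resetwal >/dev/null 2>&1; then
    # -n is a dry run: it prints the values pg_resetwal would write
    # without modifying anything. Review these first.
    pg_resetwal -n "$PGDATA"
    # DESTRUCTIVE last resort: discards unreplayed WAL and can leave the
    # cluster inconsistent. If the server starts afterwards, pg_dump
    # everything immediately and rebuild from the dump:
    # pg_resetwal -f "$PGDATA"
else
    echo "pg_resetwal not found; run this inside the Postgres container"
fi
```

If a forced reset does get the server up, treat the instance as damaged: dump, verify, and restore into a fresh cluster rather than continuing on the reset one.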


Status changed to Awaiting User Response railway[bot] 9 months ago


vambePRO

9 months ago

Any news?


Status changed to Awaiting Railway Response railway[bot] 9 months ago


9 months ago

Hi, apologies for the delay here. We've been trying to restore the data but have been unsuccessful in our attempts, due to the corruption from the additional steps you attempted. Could you share how this was corrupted and what prompted you to take the restorative actions?


Status changed to Awaiting User Response railway[bot] 9 months ago


vambePRO

9 months ago

The service failed; I tried to redeploy Postgres, and the volume data is now corrupted. To get the data volume into my own hands, I made a service connected to it with a script that downloads it to GCP, but I still can't solve the corruption.


Status changed to Awaiting Railway Response railway[bot] 9 months ago


9 months ago

Apologies for the delay! We'll have another look at this, but it's unlikely we can perform a full restoration of your data, because the steps you took afterwards seem to have corrupted it further.


Status changed to Awaiting User Response railway[bot] 9 months ago


9 months ago

Hey vambe, I've taken another look at this, and unfortunately it doesn't seem like we can recover the data, due to how the volume was re-mounted to different services that performed read/write actions on it, causing further corruption of the Postgres data itself.

I'm really sorry about this.