a year ago
Hi there,
During the night my database reached the 5GB limit and now I am kinda stuck.
I tried deploying the file browser template and mounting the volume to it, but because the volume is full, the file browser couldn't be deployed.
The volume is attached to LibSQL and the growth comes from the WAL mode. Usually, when I redeploy the libsql server it compacts all the WAL frames and the real size of the data is approximately 600MB. However, since it reached the limit, I also can't redeploy the libsql server ("disk I/O error"), so it can't compact the volume back down to its "normal" size.
I'm now trying to recreate a new volume and restore data from my backup so I can at least restart my services.
Is there a way to grow the volume, either temporarily (to 10GB for example) or via a paid add-on for the Hobby plan (also to 10GB)?
PS: How can I download all the files from the volume? The file browser doesn't work (error: "mkdir: can't create directory '/data/storage': No space left on device").
id: 34849d22-d685-4e73-8858-fbd4fe42ea65
volume: db249e52-dbf7-43a5-963e-090ad7f2b034
a year ago
I can grow this to 10gb for you when I'm back at the computer.
to download files you would need a service that doesn't write data to the volume on startup; filebrowser writes its disk-based database to the volume along with metadata
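(For reference, a minimal sketch of such a read-only service in Go: it only ever reads from the volume, so a full disk doesn't stop it from starting. The /data path and port are assumptions, not the template's actual code.)

```go
// A read-only browser for the volume: it never creates files or directories,
// so it can start even when the disk is completely full.
package main

import (
	"log"
	"net/http"
)

func main() {
	// "/data" is an assumption; point this at wherever the volume is mounted.
	http.Handle("/", http.FileServer(http.Dir("/data")))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```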
a year ago
Alright, that would be awesome, let me know when you are able to do it!
Is there any existing template for that? Otherwise I will write something myself in case of a future emergency like this.
a year ago
nothing that I'm aware of to simply dump your volume in one go like filebrowser could
a year ago
done
a year ago
thank you, this is very much appreciated!
And I'm writing a stupid simple template to do a dump of a volume
a year ago
like just dump a zip?
a year ago
it sure is a good thing we have volume alerting on the pro plan
a year ago
yep, a zip
a year ago
Sure is, but tbh, an automatic scaling/resizing would be even better. Alerts can still be missed
a year ago
that's why you can set alerts for different thresholds
a year ago
I also need to take a closer look at the sqld configuration; I might be able to tune the WAL settings so it doesn't grow so fast
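(As a side note, for a plain SQLite/libSQL database file the standard way to shrink the WAL is a TRUNCATE checkpoint; whether sqld exposes this directly depends on its configuration, so the sketch below is only illustrative and assumes direct access to the file plus the mattn/go-sqlite3 driver.)

```go
// Minimal sketch of forcing a WAL checkpoint on a SQLite/libSQL file.
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	// The path is illustrative; use the database file on the volume.
	db, err := sql.Open("sqlite3", "/data/db.sqlite")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// TRUNCATE moves all WAL frames into the main database file and resets
	// the WAL to zero length, which is what reclaims the space on disk.
	if _, err := db.Exec("PRAGMA wal_checkpoint(TRUNCATE);"); err != nil {
		log.Fatal(err)
	}
	log.Println("WAL checkpoint done")
}
```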
a year ago
i mean, Pro allows you to grow your own volume to 250GB so take that how you want haha
a year ago
Haha yeah, then I wouldn't have to worry that much. I'm still following the changes to the Pro plan closely, maybe with included usage in the future.
For the moment, the Hobby plan is enough for my side project.
a year ago
sounds good to me, just a fair warning, your next volume increase will need to be done by upgrading to pro.
a year ago
Alright, that should do the job https://railway.app/template/EBwdAh
a year ago
are you down for some template feedback?
a year ago
Yeah sure, it's the first one I wrote
a year ago
your code expects the user to mount the volume to the correct location.
if you give the user room to do something wrong, it's guaranteed that they will do something wrong.
instead, have the code use the Railway volume mount path variable so that there is no opportunity for the user to do something wrong.
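(A small sketch of that pattern; the exact variable name, RAILWAY_VOLUME_MOUNT_PATH, is assumed from Railway's injected service variables and should be double-checked against the docs.)

```go
// Resolve the mount point from Railway's injected variable so the template
// works no matter where the user attaches the volume.
package main

import (
	"fmt"
	"os"
)

func volumePath() string {
	if p := os.Getenv("RAILWAY_VOLUME_MOUNT_PATH"); p != "" {
		return p
	}
	return "/data" // local fallback, not used on Railway
}

func main() {
	fmt.Println("serving volume from:", volumePath())
}
```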
a year ago
another question, how fast can node even zip for example a 50gb directory?
a year ago
I actually started using a variable for the volume path, but when mounting the volume to the service you have to define the mount path, so it's still error prone. Unless you can define the volume path when mounting it to your service?
Good question, it might not be the best. Do you have a way for me to test with some dummy data through Railway? I could write the zipping part in whatever is fastest.
a year ago
if you use the variable, it doesn't matter where the user mounts the volume
a year ago
as for benchmarking node zipping a directory, it doesn't need to be railway specific, you can run the test with a 50gb directory on your own computer as long as you have an nvme drive
a year ago
Alright cool, I'll improve that, thanks
a year ago
alright, updated the template and rewrote it in Go. Performance is quite good locally, ~1:30 min for 40GB. Through Railway I only have 400MB of data to play with for testing, and that took 20 seconds.
If you have any big dummy data to try it out on, that could be interesting. I have one issue to fix where the memory stays very high after streaming the ZIP; I'll look at it later.
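(For what it's worth, one common way to keep memory flat is to stream the archive straight to the HTTP response instead of buffering it. The sketch below is not the template's real code; the handler path, port and env var are illustrative assumptions.)

```go
// Stream a zip of the volume directly to the HTTP response so memory stays
// flat regardless of volume size.
package main

import (
	"archive/zip" // klauspost/compress/zip is a drop-in replacement with the same API
	"io"
	"log"
	"net/http"
	"os"
	"path/filepath"
)

func dumpHandler(root string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/zip")
		w.Header().Set("Content-Disposition", `attachment; filename="volume.zip"`)

		zw := zip.NewWriter(w) // writes directly to the response, no in-memory buffer
		defer zw.Close()

		err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
			if err != nil || info.IsDir() {
				return err
			}
			rel, _ := filepath.Rel(root, path)
			dst, err := zw.Create(rel)
			if err != nil {
				return err
			}
			src, err := os.Open(path)
			if err != nil {
				return err
			}
			defer src.Close()
			_, err = io.Copy(dst, src) // file contents are streamed, never held in memory
			return err
		})
		if err != nil {
			log.Println("zip dump failed:", err)
		}
	}
}

func main() {
	root := os.Getenv("RAILWAY_VOLUME_MOUNT_PATH") // assumed Railway variable
	if root == "" {
		root = "/data"
	}
	http.HandleFunc("/dump", dumpHandler(root))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```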
a year ago
haha if i knew you were gonna rewrite it in go i would have given you some tips
a year ago
i know go's gzip package is single threaded, so im going to assume its zip package is too. you could swap to using klauspost's pgzip
a year ago
I am using this package https://pkg.go.dev/github.com/klauspost/compress/zip from the same user; I can give the one you mentioned a try.
a year ago
looks way faster with pgzip
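(For reference, a minimal sketch of that approach. pgzip writes a gzip stream, so the usual pattern is to pair it with a tar writer, making the dump a .tar.gz rather than a .zip; the paths below are illustrative, not the template's real code.)

```go
// Parallel-compressed dump of a directory using klauspost/pgzip.
package main

import (
	"archive/tar"
	"io"
	"log"
	"os"
	"path/filepath"

	"github.com/klauspost/pgzip" // gzip-compatible API, compresses blocks in parallel
)

func tarGz(root string, out io.Writer) error {
	gz := pgzip.NewWriter(out)
	defer gz.Close()
	tw := tar.NewWriter(gz)
	defer tw.Close()

	return filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name, _ = filepath.Rel(root, path)
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	out, err := os.Create("/tmp/volume.tar.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := tarGz("/data", out); err != nil { // "/data" is illustrative
		log.Fatal(err)
	}
}
```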
a year ago
how much faster are we talking?