Resize volume on hobby plan

9 months ago

Hi there,

During the night my database reached the 5GB limit and now I am kinda stuck.

I tried deploying the file browser template and mounting the volume to it, but because the volume is full, the file browser couldn't be deployed.

The volume is attached to a LibSQL database, and the growth is due to the WAL (write-ahead log). Usually, when I redeploy the libsql server it compacts the WAL and the real size of the data is approximately 600MB. However, since the volume hit the limit, I can't redeploy the libsql server either ("disk I/O error"), so it can't compact the WAL and shrink back to its "normal" size.

I'm now trying to recreate a new volume and restore data from my backup so I can at least restart my services.

Is there a way to grow the volume, either temporarily (to 10GB, for example) or as a paid add-on for the hobby plan (also to 10GB)?

PS: How can I download all the files from the volume? The file browser doesn't work (error: "mkdir: can't create directory '/data/storage': No space left on device").

id: 34849d22-d685-4e73-8858-fbd4fe42ea65
volume: db249e52-dbf7-43a5-963e-090ad7f2b034


9 months ago

I can grow this to 10gb for you when I'm back at the computer.

to download files you would need a service that doesn't write data to the volume on startup, filebrowser writes its disk-based database to the volume along with metadata
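
For reference, a minimal sketch of such a read-only service in Go, assuming the volume is mounted at /data and that the port comes from a PORT variable; http.FileServer only reads from disk, so it can start even when the volume has zero bytes free:

```go
// Minimal read-only file server over the volume: it never creates anything
// on the mount, so it starts fine even when the disk is completely full.
package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	root := "/data" // volume mount path (assumption, adjust to your mount)

	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}

	// http.FileServer only reads; directory listings let you browse and
	// download every file without writing a single byte to the volume.
	http.Handle("/", http.FileServer(http.Dir(root)))
	log.Printf("serving %s on :%s", root, port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```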


9 months ago

Alright, that would be awesome, let me know when you are able to do it!

Is there any existing template for that? Otherwise I will write something myself in case of a similar emergency in the future


9 months ago

nothing that I'm aware of to simply dump your volume in one go like filebrowser could


9 months ago

done


9 months ago

thank you, this is very much appreciated!

And I'm writing a stupid simple template to do a dump of a volume


9 months ago

like just dump a zip?


9 months ago

it sure is a good thing we have volume alerting on the pro plan 😉


9 months ago

yep, a zip


9 months ago

Sure is, but tbh, an automatic scaling/resizing would be even better. Alerts can still be missed


9 months ago

that's why you can set alerts for different thresholds 🙂


9 months ago

I also need to take a closer look at the sqld configuration, I might be able to tune the WAL behaviour so it doesn't grow so fast
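
As a side note on shrinking it: at the SQLite level, a checkpoint is what folds the -wal file back into the main database. A rough sketch, assuming the database file can be reached directly and using the modernc.org/sqlite driver (whether sqld exposes an equivalent knob is a separate question):

```go
// Force a WAL checkpoint so the -wal file is merged into the main database
// and truncated to zero bytes. Sketch against plain SQLite, not sqld's API;
// the driver and file path are assumptions.
package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite" // pure-Go SQLite driver (assumed available)
)

func main() {
	db, err := sql.Open("sqlite", "/data/db.sqlite") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// TRUNCATE checkpoints every frame and resets the WAL file to zero bytes.
	var busy, walFrames, checkpointed int
	err = db.QueryRow(`PRAGMA wal_checkpoint(TRUNCATE);`).
		Scan(&busy, &walFrames, &checkpointed)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("busy=%d wal_frames=%d checkpointed=%d", busy, walFrames, checkpointed)
}
```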



9 months ago

i mean, Pro allows you to grow your own volume to 250GB so take that how you want haha


9 months ago

Ahah yeah, I wouldn't have to worry that much. I'm still closely following the changes to the Pro plan, maybe with included usage in the future.

For the moment, the hobby plan is enough for my side project


9 months ago

sounds good to me, just a fair warning, your next volume increase will need to be done by upgrading to pro.


9 months ago

Alright, that should do the job https://railway.app/template/EBwdAh


9 months ago

are you down for some template feedback?


9 months ago

Yeah sure, it’s the first one I wrote


9 months ago

your code expects the user to mount the volume to the correct location.

if you give the user room to do something wrong, it's guaranteed that they will do something wrong.

instead have the code use that railway volume mount path variable so that there is no opportunity for the user to do something wrong.
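
A rough sketch of that lookup, assuming the variable Railway injects is RAILWAY_VOLUME_MOUNT_PATH; with this, the mount location stops mattering:

```go
package main

import (
	"fmt"
	"os"
)

// volumeRoot resolves the volume root from the variable Railway injects into
// any service with a volume attached (assumed to be RAILWAY_VOLUME_MOUNT_PATH),
// so the code works no matter where the user mounts the volume.
func volumeRoot() string {
	if p := os.Getenv("RAILWAY_VOLUME_MOUNT_PATH"); p != "" {
		return p
	}
	return "/data" // fallback for local testing only (assumption)
}

func main() {
	fmt.Println("dumping volume from", volumeRoot())
}
```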


9 months ago

another question, how fast can node even zip a 50gb directory, for example?


9 months ago

I actually started using a variable for the volume path, but when mounting the volume to the service you have to define the mount path yourself, so it’s still error-prone. Unless you can define the volume path when mounting it to your service?

Good question, it might not be the best. Do you have a way for me to try with some dummy data through Railway? I could write the zipping part in whatever is fastest.


9 months ago

if you use the variable, it doesn't matter where the user mounts the volume


9 months ago

as for benchmarking node zipping a directory, it doesn't need to be railway specific, you can run the test with a 50gb directory on your own computer as long as you have an nvme drive


9 months ago

Alright cool, I'll improve that, thanks 👍


9 months ago

Alright, updated the template, rewrote it in Go, performance is quite good locally, ~1min30s for 40GB. Through Railway I only have 400MB of data to play with for testing, and that took 20sec.

If you have any big dummy data to try it out with, that could be interesting. I have one issue to fix, where memory stays very high after streaming the ZIP; I'll look at it later.
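
One likely culprit is building the whole archive in memory before sending it. A rough sketch (not the template's actual code; handler, port, and mount path are assumptions) that streams zip entries straight to the response, so only small per-file copy buffers stay resident:

```go
// Stream a zip of the volume directly to the HTTP response instead of
// assembling it in a buffer first.
package main

import (
	"archive/zip"
	"io"
	"io/fs"
	"log"
	"net/http"
	"os"
	"path/filepath"
)

func dumpHandler(root string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/zip")
		w.Header().Set("Content-Disposition", `attachment; filename="volume.zip"`)

		zw := zip.NewWriter(w) // writes straight to the response
		defer zw.Close()

		_ = filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, err := filepath.Rel(root, path)
			if err != nil {
				return err
			}
			entry, err := zw.Create(rel)
			if err != nil {
				return err
			}
			f, err := os.Open(path)
			if err != nil {
				return err
			}
			defer f.Close()
			_, err = io.Copy(entry, f) // copied in small chunks, never whole-file
			return err
		})
	}
}

func main() {
	http.HandleFunc("/dump", dumpHandler("/data")) // mount path assumed
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```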


9 months ago

haha if i knew you were gonna rewrite it in go i would have given you some tips


9 months ago

i know go's gzip package is single threaded, so im going to assume its zip package is too, you could swap to using pgzip instead


9 months ago

I am using this package https://pkg.go.dev/github.com/klauspost/compress/zip from the same author, I can give the one you mentioned a try


9 months ago

looks way faster with pgzip
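
Worth noting that pgzip parallelizes gzip, so it produces a .gz stream rather than a .zip archive; the parallel route ends up as a .tar.gz. A rough sketch of one way to wire it up, assuming github.com/klauspost/pgzip (paths are placeholders, and in the template you'd write to the HTTP response instead of a local file):

```go
// Pack a directory into a .tar.gz with klauspost's parallel gzip writer,
// so compression is spread across every CPU instead of a single core.
package main

import (
	"archive/tar"
	"io"
	"io/fs"
	"log"
	"os"
	"path/filepath"
	"runtime"

	"github.com/klauspost/pgzip"
)

func main() {
	root := "/data"                             // volume mount path (assumption)
	out, err := os.Create("/tmp/volume.tar.gz") // output location (assumption)
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	gz := pgzip.NewWriter(out)
	// 1 MiB blocks, one block in flight per CPU; both numbers are tunable.
	if err := gz.SetConcurrency(1<<20, runtime.NumCPU()); err != nil {
		log.Fatal(err)
	}
	defer gz.Close()

	tw := tar.NewWriter(gz)
	defer tw.Close()

	walkErr := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		info, err := d.Info()
		if err != nil || !info.Mode().IsRegular() {
			return err // skip symlinks, sockets, etc.
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name, _ = filepath.Rel(root, path)
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
	if walkErr != nil {
		log.Fatal(walkErr)
	}
}
```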


9 months ago

how much faster are we talking?