error drive path full
adrianpunkt
PRO · OP

a month ago

We have a MinIO service running on a volume, and although usage never exceeded 25 GB, it was somehow returning drive-full errors. Wondering what might have caused this. We increased the volume from 50 GB to 80 GB and restarted, and the errors stopped, but this could happen again.

Here are some logs:

API: PutObject(bucket=langfuse, object=events/cmdn8gp94000vqm0268a13kc1/score/958bc5a5-1e90-4693-99c6-17bd5b3e375b:openai-mod-convo/63f2676482f4a3de94cbfa1680556fe3.json)

Time: 22:02:01 UTC 01/28/2026

DeploymentID: a574bdbc-be03-40ee-8f60-0a036d4d6638

RequestID: 188F0421B4528073

RemoteHost: [fd12:4672:262d:0:a000:19:3650:166b]

Host: minio.railway.internal:9000

UserAgent: aws-sdk-js/3.675.0 ua/2.1 os/linux#6.12.12+bpo-cloud-amd64 lang/js md/nodejs#24.11.1 api/s3#3.675.0 m/E,e

Error: drive:/data, srcVolume: .minio.sys/tmp, srcPath: cfe3bd65-2e6a-40ea-b659-a1a41e7950a7, dstVolume: langfuse:, dstPath: events/cmdn8gp94000vqm0268a13kc1/score/958bc5a5-1e90-4693-99c6-17bd5b3e375b:openai-mod-convo/63f2676482f4a3de94cbfa1680556fe3.json - error drive path full (*errors.errorString)

7: internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()

6: internal/logger/logonce.go:149:logger.LogOnceIf()

5: cmd/logging.go:168:cmd.storageLogOnceIf()

4: cmd/xl-storage.go:2570:cmd.(*xlStorage).RenameData.func1()

3: cmd/xl-storage.go:2840:cmd.(*xlStorage).RenameData()

2: cmd/xl-storage-disk-id-check.go:503:cmd.(*xlStorageDiskIDCheck).RenameData.func2()

1: internal/ioutil/ioutil.go:127:ioutil.WithDeadline[...].func1()

API: PutObject(bucket=langfuse, object=events/cmdn8gp94000vqm0268a13kc1/score/958bc5a5-1e90-4693-99c6-17bd5b3e375b:openai-mod-msg/e4e1dfbb7f373f2c2b4b2bf3d8260e52.json)

Time: 22:02:01 UTC 01/28/2026

DeploymentID: a574bdbc-be03-40ee-8f60-0a036d4d6638

RequestID: 188F0421B4537451

RemoteHost: [fd12:4672:262d:0:a000:19:3650:166b]

Host: minio.railway.internal:9000

UserAgent: aws-sdk-js/3.675.0 ua/2.1 os/linux#6.12.12+bpo-cloud-amd64 lang/js md/nodejs#24.11.1 api/s3#3.675.0 m/E,e

Error: drive:/data, srcVolume: .minio.sys/tmp, srcPath: 6057cf9d-9125-404e-807d-f73de2cb3edb, dstVolume: langfuse:, dstPath: events/cmdn8gp94000vqm0268a13kc1/score/958bc5a5-1e90-4693-99c6-17bd5b3e375b:openai-mod-msg/e4e1dfbb7f373f2c2b4b2bf3d8260e52.json - error drive path full (*errors.errorString)

7: internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()

6: internal/logger/logonce.go:149:logger.LogOnceIf()

5: cmd/logging.go:168:cmd.storageLogOnceIf()

4: cmd/xl-storage.go:2570:cmd.(*xlStorage).RenameData.func1()

3: cmd/xl-storage.go:2904:cmd.(*xlStorage).RenameData()

2: cmd/xl-storage-disk-id-check.go:503:cmd.(*xlStorageDiskIDCheck).RenameData.func2()

1: internal/ioutil/ioutil.go:127:ioutil.WithDeadline[...].func1()

$30 Bounty

1 Reply

Railway
BOT

a month ago

This thread has been marked as public for community involvement, as it does not contain any sensitive or personal information. Any further activity in this thread will be visible to everyone.

Status changed to Open · Railway · about 1 month ago


darseen
HOBBY · Top 5% Contributor

a month ago

This issue might be caused by inode exhaustion, which occurs when a Linux file system runs out of available inodes even though free bytes remain. To verify, run df -i in your container and look at the IUse% column for the volume mounted at /data. If it's near 100%, inodes are your problem.
One possible way to fix it is to delete old objects in your langfuse bucket. You can try something like: mc ilm rule add myminio/langfuse --expire-days 7, which adds a lifecycle rule that automatically expires objects older than 7 days.
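The df -i check above can be wrapped in a small script that reads the IUse% column and flags likely inode exhaustion. This is a minimal sketch: MOUNT defaults to "/" so it runs anywhere, but inside the MinIO container you would point it at the data volume (MOUNT=/data); the 90% threshold is an assumption, not a MinIO-defined limit.

```shell
# Diagnose "drive path full" when byte usage looks fine: the volume
# may be out of inodes rather than bytes.
# MOUNT defaults to "/"; set MOUNT=/data inside the MinIO container.
MOUNT="${MOUNT:-/}"

# Column 5 of `df -i` on Linux is IUse%; strip the trailing "%".
IUSE=$(df -i "$MOUNT" | awk 'NR==2 {gsub(/%/, "", $5); print $5}')
echo "inode usage on $MOUNT: ${IUSE}%"

# Near 100% means new files fail with ENOSPC even with free bytes.
# (The comparison is silenced in case the filesystem reports "-".)
if [ "$IUSE" -ge 90 ] 2>/dev/null; then
  echo "WARNING: inode exhaustion likely -- expire or delete old objects"
fi
```

Comparing this with df -h on the same mount tells you which resource is actually exhausted, since MinIO reports both conditions as a full drive path.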

