As far as I know, this started on the 9th.
I see some uploads on the 10th, so almost correct.
Maybe as a stopgap, you can try cleaning up the auto-generated variants (alternate-size thumbnails) that pict-rs has stored? Might get you under whatever threshold was tripped while support works out the main issue.
https://git.asonix.dog/asonix/pict-rs/#user-content-api
`DELETE /internal/variants`: Queue a cleanup for generated variants of uploaded images. If any of the cleaned variants are fetched again, they will be re-generated.
My instance is fairly conservative with media uploads and cached thumbnails, but it still shaved off a few gigs that had accumulated over 6-7 months. Heads up that the call to that endpoint returns immediately, since it just queues the job in pict-rs.
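If it helps, this is roughly how I triggered it. Just a sketch: the localhost:8080 bind and the PICTRS_API_KEY env var are assumptions about your setup, and the X-Api-Token header is what pict-rs expects for its internal endpoints, if I remember right.

```python
# Rough sketch: queue a pict-rs variant cleanup via the internal API.
# Assumes pict-rs is reachable on localhost:8080 and an api_key is configured;
# adjust the URL and how you supply the key to your own deployment.
import os
import urllib.request

url = "http://127.0.0.1:8080/internal/variants"  # assumed local bind
api_key = os.environ["PICTRS_API_KEY"]           # your configured pict-rs api_key

req = urllib.request.Request(url, method="DELETE")
req.add_header("X-Api-Token", api_key)           # auth header for internal endpoints

with urllib.request.urlopen(req) as resp:
    # Returns right away; the actual cleanup runs as a background job in pict-rs.
    print(resp.status, resp.read().decode())
```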
Unfortunately, I already tried deleting some backups I had in there. Even after freeing 10 GB, it still gave me the same issue, so I think it's a bug on their end.
Ah, gotcha. Definitely a more comprehensive "them" problem than I was imagining. Hope they get ya sorted.
I can suggest other cheap alternatives, since there are plenty. I tried Contabo before and it wasn't a good experience.
Ye, I'm getting increasingly frustrated with them. The main problem is how fast a pict-rs OS migration can happen.
I think you can migrate the files while pict-rs is running in read-only mode, without downtime. Then changing the S3 URL should be enough, I guess? :)
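For example (just a rough sketch, not the migration itself; the endpoint, bucket name, and credentials below are all placeholders), you could sanity-check that the new bucket is readable before pointing pict-rs at it:

```python
# Hypothetical pre-switch check: confirm the migrated objects are readable
# from the new bucket before changing the S3 URL in the pict-rs config.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://new-object-storage.example.com",  # placeholder
    aws_access_key_id="NEW_ACCESS_KEY",                     # placeholder
    aws_secret_access_key="NEW_SECRET_KEY",                 # placeholder
)

# List a handful of keys to make sure the migration actually copied things over.
resp = s3.list_objects_v2(Bucket="pictrs-media", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```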
Read-only mode is a v0.5 feature tho.
I really need to do that upgrade too, and I'm not terribly looking forward to it, as it looks complex.
It's not so much complex as time-consuming. Shout if you need a hand.