this post was submitted on 19 Jul 2023
81 points (96.6% liked)
main
1337 readers
5 users here now
Default community for midwest.social. Post questions about the instance or questions you want to ask other users here.
founded 3 years ago
Is there any way us other instances can help? We still have copies of the content (except for images).
We should also have the activitypub objects that we've received cached in our database.
Edit: it seems that my instance fetches the latest posts instead of using its cache when loading a midwest.social community. But still, we should have some content in the activitypub table.
I'm not sure what I could do with it unfortunately. If it can recover itself, that would be awesome.
I don't think there's an easy way to "replay" them. But in theory, you should be able to take the entries in that table related to midwest.social from any other instance and start broadcasting them anew. The instances that still have the content will reject them as duplicates, but midwest.social would be able to recover the lost content.
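To see what a tool like that would have to work with, something along these lines should pull the cached objects from another instance's database. This is only a sketch: the column names (`ap_id`, `data`) are assumptions about the Lemmy schema, so check with `\d+ activity` first.

```sql
-- Hedged sketch: find cached activitypub objects that mention midwest.social.
-- Column names are assumptions about the Lemmy schema; verify before relying on them.
SELECT ap_id, data
FROM activity
WHERE data::text LIKE '%midwest.social%'
LIMIT 10;
```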
Now I realize this is far more complex, but in theory it should be possible to create a tool that does this specifically for these scenarios.
I'm sorry this happened to you :(
Thanks. Yeah, this is what happens when my need to try everything to resolve an issue gets the better of me too fast. My filesystem was 90% full and I didn't want it to run out, so I deleted something I shouldn't have. If you have any idea why my disk would be using up lots of storage even though I'm using an S3 bucket and have my logs limited let me know.
Yes, that might actually be the activitypub table in the database. You can safely delete older entries, like anything from two weeks ago or older. Otherwise it just keeps growing with the logs of all activitypub objects that the server has sent out and received.
Do you know where I can find the query to do that? Databases are not my forte.
Yes, let me log in to my server and try to retrieve the exact query I used. BRB
Edit: here it is. The table is actually called `activity`:

```sql
DELETE FROM activity WHERE published < '2023-06-27';
```

Just make sure to change the date to whatever you need. Leaving two weeks of entries is more than enough to detect and refuse duplicates.

To get access to the database, you should probably be able to run:
```shell
docker exec -it midwestsocial_postgres_1 busybox /bin/sh
```

Then access Postgres with `psql -U username`; the default username is 'lemmy'. Then connect to the database with:

```
\c lemmy
```

You can list tables with `\dt` and view the definition of each table with `\d+ tablename`, for example `\d+ activity`. You can get some sample data from the table with:

```sql
SELECT * FROM activity LIMIT 10;
```
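Before deleting anything, it's worth confirming that this table really is what's eating your disk. Postgres ships functions for this:

```sql
-- Total on-disk size of the activity table, including its indexes and TOAST data
SELECT pg_size_pretty(pg_total_relation_size('activity'));
```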
You'll see that the activity table holds activitypub logs and should be cleared out regularly, as mentioned by dessalines in this post: https://github.com/LemmyNet/lemmy/issues/1133

Important
After deleting the entries (which could take some minutes depending on how much data the table holds), you will not see a difference in the filesystem. The database keeps that freed-up space for itself, but you should see backups get much lighter, and of course the filesystem itself will stop growing for a while, until the database has grown back to its previous size.
If you want to return that space to the filesystem, you need to run a "vacuum full", but that requires downtime and could take anywhere from several minutes up to an hour, depending on how much space was used and how much is still free. I haven't done this myself, since backups have gone down in size and I don't need extra free space in the filesystem as long as I keep the database from growing out of control again.
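If you do decide to reclaim the space, the command itself is simple; just run it from the same psql session, and plan for downtime, because it rewrites the table under an exclusive lock:

```sql
-- Rewrites the table on disk and returns the freed space to the filesystem.
-- Takes an exclusive lock on the table: nothing can read or write it while this runs.
VACUUM FULL activity;
```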
W3 Schools entry on SQL DELETE statement
I don't know the actual table definitions for lemmy, but it should look something like:
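Since I haven't checked the real schema, this is only a guess at what the query might look like, using a rolling two-week window instead of a hardcoded date. The column name `published` is an assumption; verify it with `\d+ activity` before running anything.

```sql
-- Hypothetical sketch: the column name is a guess at the Lemmy schema,
-- so check the actual table definition first.
DELETE FROM activity WHERE published < now() - interval '14 days';
```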
Thanks for the transparency! I don't mind, mistakes happen, but I understand it's frustrating and a bit problematic with the lost content.
There was a post about that on a Lemmy admin community a few days ago. Someone with a ~1k userbase was eating up a GB/day on average. IIRC, there were lots of logs, but also, if I understand correctly, every server stores mirrors of the data from anything its users subscribe to. That could eat up a lot of data pretty quickly as the fediverse scales up.
If you wanted to suggest a shift for improved scalability, maybe servers could form tight pacts with a few who mirror each other's content, and then more loosely federated servers load data directly from the pact. A compromise between ultimate content preservation and every larger server having to host the entire fediverse.
So basically, a few servers would form a union. Each union would be a smaller fediverse (like cells in an organ), and they'd connect to other organs to form the fediverse/body.
Also, are users who joined in the past few days affected? I suppose they might need to sign up again.
Yeah, if they signed up in the last few days they'll need to do it again. Ugh.