this post was submitted on 06 Apr 2024
38 points (100.0% liked)

datahoarder


Who are we?

We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.

We are one. We are legion. And we're trying really hard not to forget.

-- 5-4-3-2-1-bang from this thread


While clicking through some random Lemmy instances, I found one that's due to be shut down in about a week: https://dmv.social. I'm trying to archive what I can onto the Wayback Machine, but I'm not sure of the most efficient way to go about it.

At the moment, what I've been doing is going through each community and archiving each sort type (except the ones under a month, since the instance was locked a month ago) with capture outlinks enabled. But is there a more efficient way to do it? I know of the Internet Archive's save-from-spreadsheet tool, which would probably work well, but I don't know how I'd go about crawling all the links into a sitemap or CSV or something similar, and I don't have the know-how to set up a web crawler/spider.
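For reference, here's a rough sketch of how such a list could be generated straight from the Lemmy HTTP API instead of a crawler. The /api/v3/community/list endpoint is part of the standard Lemmy API, but the sort parameter names in the generated URLs are an assumption and may not match the lemmy-ui version dmv.social runs:

```python
# Rough sketch: list the instance's local communities through the Lemmy HTTP
# API and write one row per community/sort combination, ready to feed into the
# Wayback Machine's save-from-spreadsheet tool.
# Assumption: the "?sort=" values below match the lemmy-ui version in use.
import csv
import requests

INSTANCE = "https://dmv.social"
SORTS = ["Hot", "New", "Old", "TopAll", "TopYear", "MostComments"]

def local_communities():
    page = 1
    while True:
        resp = requests.get(
            f"{INSTANCE}/api/v3/community/list",
            params={"type_": "Local", "limit": 50, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        communities = resp.json().get("communities", [])
        if not communities:
            return
        for entry in communities:
            yield entry["community"]["name"]
        page += 1

with open("dmv_social_urls.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for name in local_communities():
        for sort in SORTS:
            writer.writerow([f"{INSTANCE}/c/{name}?sort={sort}"])
```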

Any suggestions?

top 6 comments
[–] [email protected] 10 points 7 months ago* (last edited 7 months ago) (1 children)

Well, since posts are numbered sequentially, you could archive all of them by generating the links. The tiny issue is that this would include every post that was federated to the server, which seems to be almost 2 million. A bit overkill for a relatively small instance.

I think if you filter by local on the main page and click next until you get to the end, there aren't that many pages. You could save those with outlinks.

Also, I believe the posts will live on at other instances regardless.
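Something along these lines could pull that same list out of the API instead of clicking next by hand, printing one /post/<id> URL per local post to feed to the Wayback Machine. A rough sketch only: the parameters follow the current Lemmy API docs, and newer Lemmy versions paginate /api/v3/post/list with page_cursor rather than page:

```python
# Sketch of the "filter by Local, page to the end" idea, done through the API
# instead of the web UI: list every local post and print its canonical
# /post/<id> URL.
# Assumption: the instance still accepts the plain "page" parameter; newer
# Lemmy versions expect "page_cursor" instead.
import requests

INSTANCE = "https://dmv.social"

def local_post_urls():
    page = 1
    while True:
        resp = requests.get(
            f"{INSTANCE}/api/v3/post/list",
            params={"type_": "Local", "sort": "Old", "limit": 50, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        posts = resp.json().get("posts", [])
        if not posts:
            return
        for view in posts:
            yield f"{INSTANCE}/post/{view['post']['id']}"
        page += 1

if __name__ == "__main__":
    for url in local_post_urls():
        print(url)
```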

[–] [email protected] 3 points 7 months ago

Oh, good idea, thank you! Yeah, I think because of the federation stuff it should persist, although I think that will complicate searching and finding things. I'm pretty sure this is the largest instance to go down to date, so I'd rather be safe than lose things, even if it is only a small instance.

This does make me a bit nervous about what archiving larger instances will look like when one eventually dies, though. A spider that logs everything into a spreadsheet, which could then be split into different groups, would probably be the best option. Or maybe a local ArchiveBox setup could work too. All the Lemmy admins seem fairly reasonable though, so perhaps they might even upload everything directly to the Internet Archive themselves.

[–] [email protected] 6 points 7 months ago (1 children)

I don't know enough about how ActivityPub works to be sure, but I suspect the right way to archive a Lemmy instance would be to create software that acts like another instance, federates with the one you want to archive, and saves the raw stream of ActivityPub activities.
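As a very rough illustration of the "save the raw stream" half only, an inbox endpoint that appends whatever activities it receives could look something like this. Everything real federation needs (publishing an actor, answering WebFinger, sending Follow requests to each community, verifying HTTP signatures) is left out:

```python
# Bare sketch: accept ActivityPub deliveries and append them, one JSON object
# per line, to an archive file. This is only the "save the raw stream" part;
# actor setup, Follow requests, and HTTP-signature checks are not shown.
import json
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

class InboxHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the raw activity exactly as it was delivered.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        record = {
            "received_at": datetime.now(timezone.utc).isoformat(),
            "path": self.path,
            "activity": json.loads(body),
        }
        with open("activitypub-archive.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")
        self.send_response(202)  # delivery accepted
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), InboxHandler).serve_forever()
```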

[–] [email protected] 4 points 7 months ago

Oh, yeah, you're probably right. Unfortunately I absolutely do not have the knowledge required to do that, but I'll keep it in mind. Thanks

[–] [email protected] 4 points 7 months ago

You could reach out to Archive Team. They are not affiliated with archive.org, but the two often work together.

They're reachable on IRC, and if you know how, you can also join the channels from Matrix.

[–] [email protected] 3 points 7 months ago

Maybe a plugin for the Lemmy server could be developed to automatically back up and/or restore instances from Arweave. Some protocol could be used to turn the instances into JSON, which could then be uploaded as documents and parsed, or something like that, and the JSON could later potentially be restored from. There might be many pages for a large instance, but they could perhaps be organized in a thoughtful and functional way.
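A sketch of just the "turn it into JSON documents" half, assuming the standard Lemmy API endpoints and a made-up post id; the Arweave upload and any restore logic are left out entirely:

```python
# Sketch: fetch one post and its comment tree from the Lemmy API and write
# them out as a single self-contained JSON document. Uploading the documents
# (to Arweave or anywhere else) and restoring from them is not covered here.
import json
import requests

INSTANCE = "https://dmv.social"

def export_post(post_id):
    post = requests.get(
        f"{INSTANCE}/api/v3/post", params={"id": post_id}, timeout=30
    ).json()
    comments = requests.get(
        f"{INSTANCE}/api/v3/comment/list",
        params={"post_id": post_id, "max_depth": 8},
        timeout=30,
    ).json()
    document = {"instance": INSTANCE, "post": post, "comments": comments}
    with open(f"post-{post_id}.json", "w") as f:
        json.dump(document, f, indent=2)

export_post(12345)  # hypothetical post id
```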