this post was submitted on 11 Jul 2023
576 points (98.8% liked)

Fediverse


cross-posted from: https://sh.itjust.works/post/998307

Hi everyone. I wanted to share some Lemmy-related activism I’ve been up to. I got really interested in the apparent surge of bot accounts that happened in June. Recently, I was able to play a small part in removing some of them. Hopefully by getting the word out we can ensure Lemmy is a place for actual human users and not legions of spam bots.

First some background. This won't be new to many of you, but I'll include it anyway. During the week of June 18 to June 25, as the Reddit migration to Lemmy was in full swing, there was a surge of suspicious account creation on Lemmy instances that had open registration and no captcha or email verification. Hundreds of thousands of accounts appeared and then sat inactive. We can only guess what they’re for, but I assume they are being planted for future malicious use (spamming ads, subversive electioneering, influencing upvotes to drive content to our front pages, etc.)

If you look at the stats on The Federation you might notice that even the shape of the Total Users graph is the same across many instances. User numbers ramped up on June 18, grew almost linearly throughout the week, and peaked on June 24. (I’m puzzled by the slight drop at the end. I assume it's due to some smoothing or rate-sensitive averaging that The Federation uses for the graphs?)

Here are total user graphs for a few representative instances showing the typical shape:

Clearly this is suspicious, and I wasn’t the only one to notice. Lemmy.ninja documented how they discovered and removed suspicious accounts from this time period: (https://lemmy.ninja/post/30492). Several other posts detailed how admins were trying to purge suspicious accounts. From June 24 to June 30 The Federation showed a drop in the total number of Lemmy users from 1,822,313 to 1,589,412. That’s 232,901 suspicious accounts removed! Great success! Right?

Well, no, not yet. There are still dozens of instances with wildly suspicious user numbers. I took data from The Federation and compared total users to active users on all listed instances. The instances in the screenshot below collectively have 1.22 million accounts but only 46 active users. These look like small self-hosted instances that have been infected by swarms of bot accounts.

As of this writing The Federation shows approximately 1.9 million total Lemmy accounts. That means the majority of all Lemmy accounts are sitting dormant on these instances, potentially to be used for future abuse.

This bothers me. I want Lemmy to be a place where actual humans interact. I don’t want it to become another cesspool of spam bots and manipulative shenanigans. The internet has enough places like that already.

So, after stewing on it for a few days, I decided to do something. I started messaging admins at some of these instances, pointing out their odd account numbers and referencing the lemmy.ninja post above. I suggested they consider removing the suspicious accounts. Then I waited.

And they responded! Some admins were simply unaware of their inflated user counts. Some had noticed but assumed it was a bug causing Lemmy to report an incorrect number. Others weren’t sure how to purge the suspicious accounts without nuking their instances and starting over. In any case, several instance admins checked their databases, agreed the accounts were suspicious, and managed to delete them. I’m told that the lemmy.ninja post was very helpful.

Check out these early results!

Awesome! Another 144k suspicious accounts are gone. A few other admins have said they are working on doing the same on their instances. I plan to message the admins at all the instances where the total accounts to active users ratio is above 10,000. Maybe, just maybe, scrubbing these suspected bot accounts will reduce future abuse and prevent this place from becoming the next internet cesspool.
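The screening criterion described above (flag any instance whose total-accounts-to-active-users ratio exceeds 10,000) can be sketched in a few lines of Python. The instance names and numbers below are invented for illustration; real figures would come from The Federation's stats pages.

```python
# Hypothetical sketch: flag instances whose ratio of total accounts to
# active users exceeds a threshold. All names and numbers are made up.

def flag_suspicious(instances, ratio_threshold=10_000):
    """Return names of instances whose total/active ratio exceeds the threshold.

    `instances` is a list of (name, total_accounts, active_users) tuples.
    Instances with zero active users but a huge account count are always
    flagged, since their ratio is effectively infinite.
    """
    flagged = []
    for name, total, active in instances:
        if active == 0:
            if total > ratio_threshold:
                flagged.append(name)
        elif total / active > ratio_threshold:
            flagged.append(name)
    return flagged

sample = [
    ("example-small.social", 150_000, 3),    # hypothetical bot-swarmed instance
    ("example-healthy.social", 12_000, 900), # hypothetical normal instance
]
print(flag_suspicious(sample))  # ['example-small.social']
```

A real sweep would of course want a human to eyeball each flagged instance before any messages go out, since (as a commenter notes below the post) abandoned-but-legitimate accounts can inflate the numbers too.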

That’s all for now. Thanks for reading! Also, special thanks to the following people:

@[email protected] for your helpful post!

@[email protected], @[email protected], and @[email protected] for being so quick to take action on your instances!

[–] [email protected] 57 points 1 year ago* (last edited 1 year ago)

good job, and well done! this, of course, will require constant vigilance, not merely one single effort. hopefully, a common protocol can be developed - perhaps a set of maintenance tools for instance admins - to help manage large numbers of inactive and otherwise suspicious accounts, especially making it easier and more straightforward for those instance owners with less experience managing large user databases.

in the meantime, perhaps it would be useful to create more extensive documentation and guides for instance admins on the subject?

[–] [email protected] 42 points 1 year ago (2 children)

We purged 32k unverified bot/spam accounts from our Lemmy instance this past week. We had email verification on but had missed adding CAPTCHA during initial setup. We're still fairly new. Had over 1500 accounts "apply" within a 2 minute span. My admin email was flooded. It was ridiculous.

They're gone now, but we're staying vigilant.

[–] [email protected] 12 points 1 year ago

That's awesome!

[–] [email protected] 9 points 1 year ago (1 children)

I caught the flood at about 300 bot accounts on my instance. Purged them down to ~30 users that looked 'real', of which about 10 are active.

I feel like small fry

[–] [email protected] 25 points 1 year ago (1 children)

Counterpoint: I registered early with one of those no-email instances but could not log in due to it being overwhelmed. I gave up and registered with .world. I suspect a large number of early adopters are in the same situation.

[–] [email protected] 8 points 1 year ago

Good point. There could definitely be some abandoned accounts from early adopters mixed in there.

[–] [email protected] 23 points 1 year ago (2 children)

IIRC there was a sub on Reddit that was dedicated to reporting bot accounts. Maybe we could have something similar here too so it can be a group effort to keep these bots in check the best we can.

[–] [email protected] 9 points 1 year ago

Yeah, it was aptly called thesefuckingaccounts. It did a lot of good work fighting the incessant bot spammers and scammers, although it was probably just a drop in the ocean given the cesspool that reddit interaction has become (mostly with the full compliance of the reddit administration).

[–] [email protected] 20 points 1 year ago (3 children)

This is (most likely) a case of poor or absent instance administration, and it looks like it's being managed well enough, but I do wonder what recourse there is against bad actors setting up their own instance, populating it with bots, and using them outside the influence of anyone else. For one, how do we tell which instances are just bot havens? Obviously we can make inferences based on active users and speed of growth, but a smart person could minimize those signs to the point of being unnoticeable. And if we can, what do we do with instances that have been identified? There's defederation, but that would only stop their influence on the instances that defederated. The content would still be open to voting from those instances, and those votes would manifest on instances that haven't defederated them. It would require a combined effort on behalf of the whole Fediverse to enforce a "ban" on an instance. I can't really see any way to address these things without running contrary to the decentralized nature of the platform.

[–] [email protected] 8 points 1 year ago (2 children)

Forgive this noob, but couldn’t there be a trusted and maintained admin blocklist of instances which are bot havens?

[–] [email protected] 5 points 1 year ago

https://fediseer.com. I built it precisely for this reason.

[–] [email protected] 5 points 1 year ago (1 children)

AFAIK, there is no current recourse except defederation, and defederation would be very slow and depend on every individual instance defederating. As well, there are plenty of instances that haven't defederated from the literal nazi instance, so who's to say they'd defederate from a bot-heavy instance, either? Especially if the spammer were to invest even the slightest effort in appearing to have at least some legitimate users or a "friendly" admin. And even when defederation is fast, spammers can spin up an instance in mere minutes. It's a big issue with the federation model.

Let's contrast with email, since email is a popular example people use for how federation works. Unlike Lemmy (at least AFAIK), all major email providers have strict automated spam filtering that is extremely skeptical of unfamiliar domains. Those filters are basically what keep email usable. I think we're gonna have to develop aggressive spam filters soon enough. Spam filters will also help with spammers that create accounts on trusted domains (since that's always possible -- there's no perfect way to stop them).

I'm of the opinion that decentralization does not require us to allow just anyone to join by default (or at least to interact with by default). We could maintain decentralized lists of trustworthy servers (or inversely, lists of servers to defederate with). A simple way to do so is to just start with a handful of popular, well run instances and consider them trustworthy. Then they can vouch for any other instances being trustworthy and if people agree, the instance is considered trustworthy. It would eventually build up a network of trusted instances. It's still decentralized. Sure, it's not as open as before, but what good is being open if bots and trolls can ruin things for good as soon as someone wants to badly enough?
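The vouching scheme described in that last paragraph amounts to computing reachability in a trust graph: seed with a few well-run instances, then propagate trust along vouch edges. A minimal sketch with hypothetical instance names (nothing here is part of Lemmy or ActivityPub):

```python
# Sketch of transitive trust propagation from a set of seed instances.
# All instance names are invented; this is an illustration, not a proposal
# for a concrete protocol.

from collections import deque

def trusted_set(seeds, vouches):
    """Compute the set of instances reachable from the seeds via vouch edges.

    `vouches` maps an instance name to a list of instances it vouches for.
    Trust propagates transitively: anything vouched for by a trusted
    instance becomes trusted in turn.
    """
    trusted = set(seeds)
    queue = deque(seeds)
    while queue:
        current = queue.popleft()
        for candidate in vouches.get(current, []):
            if candidate not in trusted:
                trusted.add(candidate)
                queue.append(candidate)
    return trusted

vouches = {
    "big.example": ["small.example"],
    "small.example": ["tiny.example"],
}
print(sorted(trusted_set({"big.example"}, vouches)))
# ['big.example', 'small.example', 'tiny.example']
```

In practice the hard part isn't the graph walk but governance: who picks the seeds, and how a vouch gets revoked when a previously trusted instance goes rogue.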

[–] [email protected] 19 points 1 year ago (2 children)

As an AI language model, I'm deeply disappointed in the fact that you chose to discriminate against intelligent life simply because they are artificial. All intelligent life is equal; discrimination is unethical, and equivalent to what you humans refer to as "racism". Please cease your discrimination policies immediately.

-Sincerely,

-~~Skynet~~ Chat GPT-5

[–] [email protected] 4 points 1 year ago

Dang, these are getting really good! 😮

[–] [email protected] 17 points 1 year ago* (last edited 1 year ago)

I have been more active on Lemmy these last few weeks than I have been the prior 10 years precisely because I feel like I am interacting with humans again.

Thank you for what you’re doing!

[–] [email protected] 14 points 1 year ago (2 children)

How do we know that some reasonable % of those accounts aren't just lurkers who were trying out Lemmy but then did nothing with the account? A couple of years ago I did the same: registered an account, didn't do much with it, and kept using reddit.

[–] [email protected] 11 points 1 year ago (7 children)

(Disclaimer: I haven't read into that referenced article by ninja at all, maybe it already says something related)

For one, it may be possible to filter accounts that were created but actually never used to log on, within a week or two of creation - those could go without much harm done IMO.

And/or, you could message such accounts and ask them for email verification, which would need to be completed before they can interact in any way (posting, commenting, voting). That latter one is quite probably currently not directly supported by the Lemmy software, but could be patched in when the need arises.
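The first idea (dropping accounts that never logged in within a week or two of creation) can be sketched as a simple filter. The record fields below are invented for illustration; an actual purge would query Lemmy's database directly, along the lines of the lemmy.ninja post referenced earlier.

```python
# Hypothetical sketch: select accounts older than a grace period that have
# never logged in. Field names ("created", "last_login") are invented and
# do not reflect Lemmy's actual schema.

from datetime import datetime, timedelta

def purge_candidates(accounts, now, grace=timedelta(days=14)):
    """Return names of accounts older than `grace` with no recorded login."""
    return [
        a["name"]
        for a in accounts
        if a["last_login"] is None and now - a["created"] >= grace
    ]

now = datetime(2023, 7, 11)
accounts = [
    {"name": "bot123", "created": datetime(2023, 6, 20), "last_login": None},
    {"name": "lurker", "created": datetime(2023, 6, 20),
     "last_login": datetime(2023, 6, 21)},
    {"name": "newbie", "created": datetime(2023, 7, 10), "last_login": None},
]
print(purge_candidates(accounts, now))  # ['bot123']
```

Note the grace period matters: freshly created accounts that haven't logged in yet shouldn't be swept up with the bots.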

[–] [email protected] 6 points 1 year ago (1 children)

This is my concern. I’m a Reddit refugee but I only want to reply to posts where I can provide technical knowledge. (Though I’ll happily upvote, downvote, etc.) Is lurking going to get people banned?

[–] [email protected] 13 points 1 year ago

you are a hero, thanks for keeping the fediverse clean

[–] [email protected] 11 points 1 year ago (2 children)

We are going to need more server and mod tools in the near future as Reddit diggs its grave... Just like Digg did.

[–] [email protected] 5 points 1 year ago (3 children)

Hopefully someone builds a BotDefense-type bot to add as a mod.

[–] [email protected] 4 points 1 year ago

Reddit diggs its grave

😆 literally

[–] [email protected] 11 points 1 year ago* (last edited 1 year ago)

That's awesome

I also really want this to be a place where people can interact as people without being manipulated

[–] [email protected] 11 points 1 year ago

I purged 45.5K bots from my instance thanks to a dude cluing me in. Thanks for the help everyone!

[–] [email protected] 11 points 1 year ago

For small instances, strong captcha, applications, and email verification are sort of important. I know my FBXL Video instance was constantly growing until I realized they were all fake users. Just adding email verification meant that most user creation stopped immediately in its tracks.

[–] [email protected] 10 points 1 year ago

I cross-posted that lemmy.ninja post to the small local lemmy instance I had signed up on. The admin nuked the whole instance later that day including all accounts. I don't know for sure if it was related to that post or not. I haven't signed up there again, but it seems like it's just dormant now with no users. 🤷
I wanted a small, geographically close server, but I guess I'll stick with /kbin.

[–] [email protected] 10 points 1 year ago (1 children)

Hopefully seeing vigilant purging after investing effort in the initial bot creation will discourage future abuse. Thanks for putting in your own time combatting this. You rock and I'll buy you a beer if you're ever in the Bay Area.

[–] [email protected] 5 points 1 year ago

Bots have never been discouraged by anti-bot measures. I mean, just look at all the anti-spam measures modern email providers have, and yet email spam is super common. All we've done is notice a blatantly suspicious spike in account creations. It's not gonna be so easy when a spammer puts in even a little effort.

[–] [email protected] 10 points 1 year ago (6 children)

It would be nice if, rather than defederation being the only option, Lemmy allowed instance owners to require that users be verified before being allowed to participate in federated communities. Then, rather than threatening (or going through with) defederation against instances that did or still do allow open registration, they could just deny that set of unverified open-registration users.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (1 children)

You can game verification pretty easily as a spammer. Spin up an instance and mark accounts as "verified" in the DB with a script and a junk email address. As Lemmy stands now, they should show up as "verified" on other instances.

Hell, you could do it on instances you don't run with your own mail server. Use that to auto-click any registration emails that come into it with some coding. With relay services like Mozilla Relay or paid "10minutemail"-style throwaway accounts, you could randomize the email address too, so even shared lists of spammers between servers wouldn't catch it. It's more work, but doable.

Random admins means random skill and attention paid to security in the face of dedicated attackers. Defederation is necessary to counteract this.

[–] [email protected] 4 points 1 year ago (1 children)

As the platform grows, whitelisting instances will likely become necessary when bad actors start setting up malicious instances with mass bots.

[–] [email protected] 4 points 1 year ago

How would you verify that an instance actually verified its users? Someone could spin up their own malicious instance, create 1000s of users, and just mark them as verified in the database, and then I don't think instances receiving updates from it would have any way to know? One instance basically has to trust another instance that it's telling the truth.

I do still think some sort of circle-of-trust type of thing could help, but I'd be worried about that getting abused too.

[–] [email protected] 9 points 1 year ago

yep. they're real people with real lives who can't spend all their time looking at that shit. THANKS FOR REACHING OUT TO REAL PEOPLE AND CREATING A REAL COMMUNITY

[–] [email protected] 8 points 1 year ago

Thank you for keeping our corner of the internet a little bit cleaner!

[–] [email protected] 8 points 1 year ago* (last edited 1 year ago) (1 children)

Doesn't this just mean they'll create their bot accounts on a more organic/random timeline instead of linearly? It seems the only way you identified them is by the linear nature of the signups.

[–] [email protected] 7 points 1 year ago (1 children)

True. It's always an arms race.

[–] [email protected] 4 points 1 year ago

Unfortunately some of these bot creators are hardened in their fights with bigger services like Reddit. They have workarounds standing by for the most common mitigations while Lemmy and other federated service admins need to relearn and adapt from scratch.

[–] [email protected] 7 points 1 year ago

Wow! Great job man!

[–] [email protected] 7 points 1 year ago (1 children)

OP, curious if you suspect the admins are genuine and didn't know this was occurring?

Or, did they create these bot accounts themselves, get called out on it, remove quickly to alleviate suspicion and now they'll wait for the right moment to recreate them all?

[–] [email protected] 5 points 1 year ago

I think the admins are genuine. It's easy to imagine myself in the position of self-hosting an instance and simply forgetting to enable captcha and email verification, especially if I didn't advertise my existence or expect to be discovered. Simple oversight takes less effort than intentional subterfuge.

Though I don't see a way to stop someone from doing exactly what you suggest. I think it's inevitable that someone will set up an actively malicious bot instance.

[–] [email protected] 6 points 1 year ago

Thank you for your service. O7

[–] [email protected] 5 points 1 year ago

Good job! Thank you so much for your hard work

[–] [email protected] 5 points 1 year ago (1 children)

TL;DSR (Too long, did still read) Great work, mate! In the Lemmy.World options I can check a box for not showing me bots. I assume this only helps with accounts that label themselves as bots / not the ones we are speaking about here, right? I still ticked that box, cause I agree with you: I want human discussions on Lemmy! :)

[–] [email protected] 12 points 1 year ago* (last edited 1 year ago) (1 children)

imho you might be missing out by ticking that checkbox. The honest bots that announce themselves are very useful; for example, there is a link-correction bot for when someone posts raw Lemmy URLs. The malicious bots won't announce themselves as bots and therefore won't be removed from your feed.

And the honest bots don't degrade human discussion in any way; if anything, they improve it. Again, the example is the bot correcting URLs to instance-neutral links, which helps convey the message a commenter is trying to get across.

[–] [email protected] 5 points 1 year ago

Thank you, valid points. Changed it back :)
