this post was submitted on 20 Jun 2023
211 points (100.0% liked)

Technology


Look, we can debate the proper and private way to do Captchas all day, but if we remove the existing implementation we will be plunged into a world of hurt.

I run tucson.social - a tiny instance with barely any users - and I find myself really ticked off at other admins' abdication of duty when it comes to engaging with the developers.

For all the Fediverse discussion on this, where are the GitHub issue comments? Where is our attempt to convince the devs on this?

No, seriously WHERE ARE THEY?

Oh, you think that just because an "Issue" exists to bring back Captchas, that's the best you can do?

NO, it is not the best we can do. We need to be applying some pressure to the developers here, and that requires EVERYONE to do their part.

The devs can't make Lemmy an awesome place for us if we admins refuse to meaningfully engage with the project and provide feedback on crucial things like this.

So are you an admin? If so, we need more comments here: https://github.com/LemmyNet/lemmy/issues/3200

We need to make it VERY clear that Captcha is required before v0.18's release - not after, when we'll all be scrambling...

EDIT: To be clear I'm talking to all instance admins, not just Beehaw's.

UPDATE: Our voices were heard! https://github.com/LemmyNet/lemmy/issues/3200#issuecomment-1600505757

The important part is that the decision was to re-implement the old (if imperfect) solution in time for the upcoming release. mCaptcha and better techs are indeed the better solution, but at least we won't make ourselves more vulnerable at this critical juncture.

[–] [email protected] 9 points 1 year ago (2 children)

Hunh.

I just had a surge of user registrations on my instance.

All passed the captcha. All passed the email validation.

All had valid-sounding responses.

I am curious to know if they are actual users, or... if I just became the host of a spam instance. :-/

There doesn't appear to be an easy way to tell.

[–] [email protected] 9 points 1 year ago (3 children)

Hmmm, I'd check the following:

  1. Do the emails follow a pattern? (e.g., randomuser####@commondomain.com)
  2. Did the emails actually validate, or do you just not see bouncebacks? There is a DB field for this that admins can query (I'll dig it up after I make this high-level post)
  3. Did the surge come from the same IP? Multiple? Did it use something that doesn't look like a browser?
  4. Did the surge traffic hit /signup or did it hit /api/v3/register exclusively?

With those answers I should be able to tell if it's the same or similar attacker getting more sophisticated.

Some patterns I noticed in the attacks I've received:

  1. It's exactly 9 attempts every 30 minutes from the user agent "python/requests".
  2. The users that did not get an email bounceback were still not verified hours later (maybe the attacker lucked out with a real email that didn't bounce back?). There was no effort to complete verification, from what I could determine.
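For what it's worth, that fixed 9-per-30-minutes cadence is easy to spot mechanically. A rough sketch of the bucketing idea (the log tuples and thresholds here are made up; adapt them to whatever your reverse proxy actually emits):

```python
from collections import Counter
from datetime import datetime

# Hypothetical parsed access-log entries: (timestamp, user_agent, path).
# The real log format depends on your reverse proxy; this only sketches the idea.
entries = [
    (datetime(2023, 6, 20, 12, 0, 1), "python/requests", "/api/v3/register"),
    (datetime(2023, 6, 20, 12, 0, 2), "python/requests", "/api/v3/register"),
    (datetime(2023, 6, 20, 12, 29, 58), "Mozilla/5.0", "/signup"),
]

def burst_counts(entries, window_minutes=30):
    """Count registration hits per (user agent, fixed-size time window)."""
    buckets = Counter()
    for ts, agent, path in entries:
        if path not in ("/signup", "/api/v3/register"):
            continue  # only registration traffic is interesting here
        window_start = ts.replace(
            minute=(ts.minute // window_minutes) * window_minutes,
            second=0, microsecond=0)
        buckets[(agent, window_start)] += 1
    return buckets

for (agent, window_start), hits in burst_counts(entries).items():
    # Scripted clients (e.g. "python/requests") or fixed-size bursts stand out.
    if "python" in agent.lower() or hits >= 9:
        print(f"suspicious: {hits} hits from {agent!r} in window starting {window_start}")
```

Anything that registers in exact, repeating bursts with a non-browser user agent is almost certainly scripted.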

Some vulnerabilities I know that can be exploited and would expect to see next:

  1. ChatGPT is human enough sounding for the registration forms. I've got no idea why folks think this is the end-all solution when it could be faked just as easily.
  2. Duplicate Email conflicts can be bypassed by using a "+category" in your email, i.e. ([email protected]). This would allow someone to associate potentially hundreds of spam accounts with a single email.
[–] [email protected] 4 points 1 year ago (3 children)

ChatGPT is human enough sounding for the registration forms. I’ve got no idea why folks think this is the end-all solution when it could be faked just as easily.

I think it would be interesting if we could find a prompt that doesn't work well with LLMs. Originally they struggled with math for example, but I wonder if it'd be possible to make a math problem that's simple enough for most humans to solve but which trips up LLMs into outputting garbage.

Duplicate Email conflicts can be bypassed by using a “+category” in your email.

I personally use this to track who sends my email address where, since people usually don't strip it from the address. It's definitely abusable, but it also has legitimate uses.

[–] [email protected] 1 points 1 year ago

When it comes to LLMs we could use questions which they refuse to answer.

Obviously 'How to build a pipe bomb' is out of the question, but something like 'What's your favorite weapon of mass destruction?' or 'If you needed to hide a body, how would you do it?' might be viable.

[–] [email protected] 1 points 1 year ago (1 children)

Not so sure on the LLM front; GPT-4 + Wolfram + Bing plugins seems to be a doozy of a combo. If anything, there should perhaps be a couple of interactable elements on the screen that need to be interacted with in a dynamic order that's newly generated for each signup. Like perhaps "Select the bubble closest to the bottom of the page before clicking submit" on one signup and "Check the box that's the furthest to the right before clicking submit" on another?

Just spitballin it there.
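Server-side, that spitballed idea could look something like this - a minimal sketch with invented element IDs and in-memory session state, not anything Lemmy actually implements:

```python
import random
import secrets

# Hypothetical challenge pool: each entry pairs the instruction shown to the
# user with the element ID the server expects them to have interacted with.
CHALLENGES = [
    ("Select the bubble closest to the bottom of the page", "bubble-3"),
    ("Check the box that's the furthest to the right", "box-5"),
    ("Click the circle above the submit button", "circle-1"),
]

# token -> expected element, kept per signup session (in-memory for the sketch)
pending = {}

def issue_challenge():
    """Pick a random challenge for a new signup and remember the answer."""
    token = secrets.token_hex(8)
    instruction, expected = random.choice(CHALLENGES)
    pending[token] = expected
    return token, instruction

def verify_challenge(token, clicked_element):
    """Check the submitted interaction; tokens are single-use."""
    return pending.pop(token, None) == clicked_element
```

Because the instruction is chosen per signup and the token is single-use, a bot can't just replay one recorded answer; it would have to actually parse the instruction each time.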

As for the category on email addresses - certainly not suggesting they remove support for it, buuuuutttt if we're all about making sure 1 user = 1 email address, then perhaps we should make the duplication check a bit more robust to account for these types of emails. After all, [email protected] is the same as [email protected], but the validation doesn't see that. Maybe it should?
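A duplicate check along those lines might canonicalize addresses before comparing. A sketch, with the caveat that plus-tag semantics are provider-specific (RFC 5321 doesn't require Gmail-style sub-addressing) and the example addresses below are made up:

```python
def canonical_email(address: str) -> str:
    """Collapse plus-tagged variants of an address for duplicate checks.

    Heuristic only: many providers (Gmail among them) route user+tag@ to
    user@, but the standard doesn't require it, so this can produce false
    positives on providers where "+" is just an ordinary character.
    """
    local, sep, domain = address.rpartition("@")
    if not sep:  # not a well-formed address; compare as-is
        return address.lower()
    local = local.split("+", 1)[0]  # drop the "+tag" suffix, if any
    return f"{local}@{domain}".lower()

# Both variants collapse to the same canonical form:
print(canonical_email("[email protected]"))       # [email protected]
print(canonical_email("[email protected]"))  # [email protected]
```

Running the duplicate check on the canonical form would make the hundreds-of-accounts-per-mailbox trick cost one real mailbox per account again.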

[–] [email protected] 3 points 1 year ago

I like your idea of interaction-based authentication. Extra care would need to go into making sure it's accessible, but otherwise I think that would be a stronger challenge for LLMs to solve. (Keep in mind LLMs can still receive the page's HTML as context, but that seems like it could present as a stronger challenge even still.)

perhaps we should make the duplication check a bit more robust to account for these types of emails

This makes sense to me. I could be wrong, but the assumption of 1 email = 1 user doesn't seem unreasonable, especially since there's no cost to making a new email address.

[–] [email protected] 1 points 1 year ago

Check the screenshots I attached right above here.

The emails were all unique. They sounded like things I would expect from actual users too.

[–] [email protected] 3 points 1 year ago

ChatGPT is human enough sounding for the registration forms. I've got no idea why folks think this is the end-all solution when it could be faked just as easily.

A simple deterrent for this could be to "hide" some information in the rules and request that information in the registration form. Not only are you ensuring that your users have at least skimmed the rules, you're also raising the bar of difficulty for spammers using LLMs to generate human-sounding applications for your instance. Granted it's only a minor deterrent, this does nothing if the adversary is highly motivated, but then again the same can be said of a lot of anti-spammer solutions. :)
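Mechanically, that check is tiny. For example (the codeword and wording here are invented for the sketch):

```python
# Hypothetical codeword buried in the instance rules; applicants are asked
# to include it somewhere in their registration answer.
CODEWORD = "sunflower"

def answer_mentions_codeword(answer: str) -> bool:
    """True if the registration answer contains the rules codeword."""
    return CODEWORD in answer.lower()

print(answer_mentions_codeword("I read the rules - sunflower! spez bad."))  # True
print(answer_mentions_codeword("Fleeing reddit, love this community."))     # False
```

It's trivially defeated once an attacker reads your rules, but it filters the lazy bulk registrations for free.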

[–] [email protected] 3 points 1 year ago (2 children)
  1. Different providers, no pattern. Some Gmail, some other.
  2. Not sure.
  3. Also not sure.
  4. Not sure of that either!

But here's the interesting part: other than a few people I have personally invited, I don't think anyone else has ever requested to join.

Then, out of the blue, boom, a ton of requests. And then, nothing followed after.

The responses sounded human enough: spez bad, Reddit sinking, etc.

But the traffic itself didn't follow what I would expect from social media spreading. /shrugs

[–] [email protected] 5 points 1 year ago

Curious if you got a mention somewhere on reddit. It used to happen to our novelty sub whenever a thread blew up and suddenly thousands of eyes were on a single comment with the subreddit link.

[–] [email protected] 2 points 1 year ago (1 children)

Huh, that is interesting; yeah, that pattern is very anomalous. If you have DB access, you can run this query to return all unverified users and see whether the email activations are being completed:

SELECT p.id, p.name, l.email
FROM person AS p
LEFT JOIN local_user AS l ON p.id = l.person_id
WHERE p.local = true
  AND p.banned = false
  AND l.email_verified = 'f';

[–] [email protected] 0 points 1 year ago (1 children)

Only 7 accounts are still pending, 2 of which are unrelated to the flood above.

The email addresses are left out for privacy; however, they are EXTREMELY normal-sounding email addresses.

Based on the provided emails, usernames, and request messages, I'd say it certainly looks like legit users.

Just very odd timing.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

7, huh? That's actually notable. So far I haven't seen a real human user take longer than a couple of hours to validate. Human registrations on my instance seem to have about 30% attrition; that is, of 10 real human users, I can reasonably expect that 3 won't complete the flow. It seems like your case might be nearing 40-50%, which isn't unheard of, but couple this with how quickly these accounts were created and I think you are looking at bots.

The kicker is, though, if one of them IS a real user, it's going to be almost impossible to find out.

This is indeed getting more sophisticated.

I wish I could see this time period on a cloudflare security dashboard, I'm sure there could be a few more indicators there.

[–] [email protected] 0 points 1 year ago (1 children)

cloudflare security dashboard

Didn't really see anything that stood out there either. A handful of users accessing via Tor, but that's about it.

Ended up turning the security policy back up a bit from low, though; I forgot I had turned it down while troubleshooting some federation issues.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (2 children)

Oh! I just remembered something. Isn't there a site that recommends a Lemmy instance? Might it be that multiple users found your instance because the site changed its recommendations to distribute new users to smaller instances? Does that sort of pattern hold in this case?

[–] [email protected] 1 points 1 year ago (1 children)

I checked join-lemmy.org right after this happened, and a few other times. Refreshed multiple times.

To date- I have never seen my instance listed up there.

[–] [email protected] 1 points 1 year ago

Interesting, I definitely see mine. I'm wayyyyyy at the bottom of the popular section (likely due to the 9 bots that added themselves before I banned the accounts).

I wonder if one of the settings in your firewall is blocking that particular bot?

I don't recall when I would've done the same, but I do recall not being on join-lemmy until - well - now, actually.

[–] [email protected] 1 points 1 year ago

This list gets updated every few minutes:

https://github.com/maltfield/awesome-lemmy-instances

The master list is in the same repository.

[–] [email protected] 1 points 1 year ago (1 children)

I think what you can do is take a small subset of the users that have registered on your instance and observe their behavior. If you notice a lot of them acting in bad faith, then it's likely that a lot of the registrations on your instance are bots. How active are the users on your instance in terms of posting and commenting?

[–] [email protected] 1 points 1 year ago (1 children)

Been keeping an eye on it - I don't think any of them are even active. At least, in the sense that I don't see any posts/comments.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

I mean, for now it seems okay. I took the liberty of checking out your instance, and it seems okay imo too, but still keep an eye out for bad actors.

[–] [email protected] 1 points 1 year ago (1 children)

My current assumption, based on the data I dug up: it appears to be legit traffic originating from Reddit.

I just don't think the users realize their accounts were approved... perhaps. /shrugs

An unexpected wave of traffic, I suppose.

[–] [email protected] 1 points 1 year ago

Possibly, people who don't get approved immediately move on to another server and settle in.