this post was submitted on 05 Sep 2024
565 points (96.2% liked)

Technology


This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; to ask if your bot can be added, please contact us.
  9. Check for duplicates before posting; duplicates may be removed.

Approved Bots


founded 2 years ago

Social media platforms like Twitter and Reddit are increasingly infested with bots and fake accounts, leading to significant manipulation of public discourse. These bots don't just annoy users—they skew visibility through vote manipulation. Fake accounts and automated scripts systematically downvote posts opposing certain viewpoints, distorting the content that surfaces and amplifying specific agendas.

Before coming to Lemmy, I was systematically downvoted by bots on Reddit for completely normal comments that were relatively neutral and not controversial at all. There seemed to be no pattern to it. One time I commented that my favorite game was WoW and was downvoted to -15 for no apparent reason.

For example, a bot on Twitter that made API calls to GPT-4o ran out of funding and began posting its prompt and system information publicly.

https://www.dailydot.com/debug/chatgpt-bot-x-russian-campaign-meme/

Example shown here
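The failure mode behind that incident is mundane: a bot that posts the raw API response with no validation will, once billing fails, post the error payload instead of a completion. A minimal sketch, with the function names and post text invented for illustration (the `insufficient_quota` error type mirrors what OpenAI's API returns when credits run out):

```python
import json

def api_call(prompt: str, credits: int) -> str:
    """Stand-in for a GPT-4o API call (hypothetical). When billing fails,
    the real API returns an error document instead of a completion."""
    if credits <= 0:
        return json.dumps({
            "error": {"message": "You exceeded your current quota",
                      "type": "insufficient_quota"}
        })
    return "Totally organic opinion about current events."

def make_post(prompt: str, credits: int) -> str:
    # The bug: the bot posts whatever string comes back, with no check
    # that it is actually a completion. Once funding runs out, it posts
    # the error payload (and often its own configuration) verbatim.
    return api_call(prompt, credits)
```

That is all it takes: no validation step between "API responded" and "publish", so the self-exposure is automatic.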

Bots like these probably number in the tens or hundreds of thousands. Reddit once did a huge ban wave of bots, and some major top-level subreddits went quiet for days because of it. Unbelievable...

How do we even fix this issue or prevent it from affecting Lemmy??

[–] [email protected] 6 points 3 months ago* (last edited 3 months ago)

You can't get rid of bots, nor of spammers. The only option is a more aggressive automated punishment system, which will inevitably punish good users along with the bad ones.

[–] [email protected] 5 points 3 months ago

Bots after getting banned: 📉📉📉📉

[–] [email protected] 5 points 3 months ago

I am glad clever people like yourselves are looking into this. Best of luck.

[–] [email protected] 5 points 3 months ago

I think the only way to solve this problem for good would be to tie social media accounts to proof of identity. However, apart from what would certainly be a difficult technical implementation, this would create a whole bunch of different problems. The benefits would probably not outweigh the costs.

[–] [email protected] 5 points 3 months ago (1 children)

Some sort of "report as bot" --> required captcha pipeline would be useful
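A sketch of what that pipeline could look like (none of this is an existing Lemmy feature; the class, method names, and threshold are all illustrative): distinct reporters accumulate against an account, and past a threshold the account cannot post again until it solves a CAPTCHA.

```python
from collections import defaultdict

REPORT_THRESHOLD = 5  # hypothetical tuning value

class BotReportPipeline:
    """'Report as bot' -> required-CAPTCHA flow (illustrative sketch)."""

    def __init__(self):
        self.reports = defaultdict(set)  # account -> set of reporter ids
        self.must_verify = set()

    def report_as_bot(self, account: str, reporter: str) -> None:
        # Sets dedupe repeat reports from the same user, so one person
        # can't trigger the flag alone.
        self.reports[account].add(reporter)
        if len(self.reports[account]) >= REPORT_THRESHOLD:
            self.must_verify.add(account)

    def can_post(self, account: str) -> bool:
        return account not in self.must_verify

    def pass_captcha(self, account: str) -> None:
        # Clear the flag and reset the counter once the challenge is solved.
        self.must_verify.discard(account)
        self.reports[account].clear()
```

Requiring distinct reporters raises the cost of weaponizing the report button, though as the reply below notes, the CAPTCHA itself is the weak link.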

[–] [email protected] 4 points 3 months ago (1 children)

Captchas are already mostly machine-breakable. I've seen some interesting new pattern-based ones, but nothing that you couldn't defeat with image training.

At some point not too far in the future, you won't be able to use captchas to stop bots from posting at all. They simply won't even be a hurdle, just a couple of extra pennies of computational power.

There's probably some power in detecting accounts that are blocked by many people. The problem is that no matter what we do, we're heading towards blocking them with an algorithm or AI, and I'd hate to see that on Lemmy.

This place is just the stuff you follow, with the raw up and down votes. We don't hide unpopular posts, which makes brigading less useful.

[–] [email protected] 5 points 3 months ago

You have to watch where you are: if you call out a bot, you'll have your comment removed and get banned. They tell you to report the bot and they'll take care of it. Then, when you report the obvious troll/bot, they ban you for it. Some shady mods out there.

[–] [email protected] 5 points 3 months ago (5 children)

The internet is not a place for public discourse; it never was. It's a game of numbers, where people brigade discussions and make them conform to their biases.

Post something critical of the US, backed by facts and statistics, in a US-centric Reddit sub, YouTube video, or article, and watch it devolve into brigading, name-calling, and racism. Do the same on lemmy.ml to call out China or Russia. Go to YouTube videos with anything critical of India.

For any country with a massive population on the internet, you're going to get bombarded with lies, deflection, whataboutism, and strawmen. Add in a few bots and you shape the narrative.

There's also burying bad press by simply downvoting and never interacting.

Both are easy on the internet when you've got a brainwashed, gullible mass to steer the narrative.

[–] [email protected] 4 points 3 months ago* (last edited 3 months ago)

Signup safeguards will never be enough because the people who create these accounts have demonstrated that they are more than willing to do that dirty work themselves.

Let's look at the anatomy of the average Reddit bot account:

  1. Rapid points acquisition. These are usually new accounts, but they don't have to be. These posts and comments are often done manually by the seller if the account is being sold at a significant premium.

  2. A sudden shift in contribution style, usually preceded by a gap in activity. The account has now been fully matured to the desired amount of points, and is pending sale or set aside to be "aged". If the seller hasn't loaded on any points, the account is much cheaper but the activity gap still exists.

  • When the end buyer receives the account, they probably won't be posting anything related to what the seller was originally involved in, as they set about their own mission, unless they're extremely invested in the account. It becomes much easier to stay active in old forums if the account is now AI-controlled, but the account suddenly ceases making image contributions and mostly sticks to comments instead. Either way, the new account owner is probably accumulating far fewer points than the account was before.
  • A buyer may attempt to hide this obvious shift in contribution style by deleting all the activity before the account came into their possession, but now they have months of inactivity leading up to the beginning of the account's contributions and thousands of points unaccounted for.
  3. Limited forum diversity. Fortunately, platforms like this have a major advantage over platforms like Facebook and Twitter, because propaganda bots there can post on their own pages and gain exposure with hashtags without having to interact with other users or separate forums. On Lemmy, programming an effective bot means that it has to interact with a separate forum to achieve meaningful outreach, and these forums probably have to be manually programmed in. When a bot has one sole objective with a specific topic in mind, it makes great and telling use of a very narrow swath of forums. This makes platforms like Reddit and Lemmy less preferred for automated propaganda bot activity, and more preferred for OnlyFans sellers, undercover small business advertisers, and scammers who do most of the legwork of posting and commenting themselves.
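Two of the signals above, the activity gap and the limited forum diversity, are mechanical enough to sketch as a heuristic. The thresholds here are illustrative guesses, not tuned values:

```python
from dataclasses import dataclass

@dataclass
class Event:
    day: int        # days since account creation
    community: str  # where the post/comment landed
    points: int

def suspicion_signals(events: list[Event],
                      gap_days: int = 60,
                      min_communities: int = 3) -> list[str]:
    """Flag accounts matching the anatomy described above."""
    flags = []
    days = sorted(e.day for e in events)
    # Signal: a long gap in activity, typical of an account matured,
    # then set aside or sold before changing hands.
    if any(b - a >= gap_days for a, b in zip(days, days[1:])):
        flags.append("activity_gap")
    # Signal: a single-objective bot posts in very few communities.
    if len({e.community for e in events}) < min_communities:
        flags.append("low_forum_diversity")
    return flags
```

Neither signal is damning on its own (plenty of humans take breaks or stick to one community), which is why these would only feed a review queue, not an automatic ban.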

My solution? Implement a weighted visual timeline of a user's points and posts to make it easier for admins to single out accounts that have already been found to be acting suspiciously. There are other troublesome types of malicious accounts, such as self-run engagement farms that consistently make front-page contributions featuring their own political (or other) lean, but the type described first is a major player in Reddit's current shitshow and is much easier to identify.
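That visual timeline could be as simple as a per-week points sparkline an admin scans for the rapid-gain, long-gap, sudden-shift shape. An illustrative sketch, not an existing admin feature:

```python
BARS = " ▁▂▃▄▅▆▇█"

def points_sparkline(weekly_points: list[int]) -> str:
    """Render a per-week points history as a one-line sparkline.
    Each week is scaled against the account's peak week; negative
    weeks are clamped to the empty cell."""
    peak = max(max(weekly_points), 1)
    return "".join(BARS[round(max(p, 0) / peak * (len(BARS) - 1))]
                   for p in weekly_points)
```

A matured-then-sold account would read as a burst of tall bars, a long flat stretch, then a differently shaped burst, which is exactly the pattern described in the anatomy above, visible at a glance.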

Most important is moderator and admin willingness to act. Many subreddit moderators on Reddit already know their subreddit has a bot problem but choose to do nothing because it drives traffic. Others are just burnt out and rarely even lift a finger to answer modmail, doing the bare minimum to keep their subreddit from being banned.

[–] [email protected] 4 points 3 months ago

Some say the only solution will be strong identity control to guarantee that a person is behind each comment, as with election voting. But that raises a lot of concerns about privacy and freedom of expression.

[–] [email protected] 4 points 3 months ago (2 children)

Perhaps the only way to get rid of them for sure is to require a CAPTCHA before all posts. That has its own issues though.

[–] [email protected] 4 points 3 months ago (1 children)

On an instance level, you can close registration after reaching a threshold of users that you are comfortable with. Then you can defederate from instances driven by capitalist ideals like eternal growth (e.g., Threads from Meta).
