this post was submitted on 05 Sep 2024
565 points (96.2% liked)


Social media platforms like Twitter and Reddit are increasingly infested with bots and fake accounts, leading to significant manipulation of public discourse. These bots don't just annoy users—they skew visibility through vote manipulation. Fake accounts and automated scripts systematically downvote posts opposing certain viewpoints, distorting the content that surfaces and amplifying specific agendas.

Before coming to Lemmy, I was systematically downvoted by bots on Reddit for completely normal comments that were relatively neutral and not controversial at all. There seemed to be no pattern to it... One time I commented that my favorite game was WoW and got downvoted to -15 for no apparent reason.

For example, a bot on Twitter that relied on API calls to GPT-4o ran out of funding and started posting its prompts and system information publicly.

https://www.dailydot.com/debug/chatgpt-bot-x-russian-campaign-meme/

Example shown here

Bots like these probably number in the tens or hundreds of thousands. Reddit ran a huge ban wave of bots at one point, and some major top-level subreddits went quiet for days because of it. Unbelievable...

How do we even fix this issue or prevent it from affecting Lemmy??

[–] [email protected] 2 points 3 months ago (2 children)

Long before cryptocurrencies existed, proof-of-work was already being used to hinder bots: for every post, vote, etc., the device performing the action has to solve a cryptographic task. This is imperceptibly fast for a normal user, but a really annoying speed bump for a bot trying to perform hundreds or thousands of actions in a row.

See e.g. https://wikipedia.org/wiki/Hashcash

Combined with more classic barriers such as CAPTCHAs (especially image recognition, which is still expensive at scale despite advances in AI), this should at least present a first major obstacle.
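As a rough illustration of the Hashcash idea, here is a minimal sketch of a proof-of-work check; the 20-bit difficulty, the function names, and the challenge format are my own assumptions for the example, not anything Lemmy actually implements:

```python
# Minimal Hashcash-style proof-of-work sketch (illustrative only).
import hashlib
import os

DIFFICULTY_BITS = 20  # assumed difficulty: ~1 million hash attempts on average

def solve(challenge: bytes, difficulty: int = DIFFICULTY_BITS) -> int:
    """Client side: find a nonce so that sha256(challenge + nonce) starts with `difficulty` zero bits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int = DIFFICULTY_BITS) -> bool:
    """Server side: a single hash, cheap no matter how much work solving took."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

challenge = os.urandom(16)       # server issues a fresh challenge per action
nonce = solve(challenge)         # client burns CPU time here
assert verify(challenge, nonce)  # server verifies with a single hash
```

The asymmetry is the point: solving takes on the order of 2^difficulty hash attempts on average, while verifying is a single hash, so the cost lands on whoever is submitting the action.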

[–] [email protected] 6 points 3 months ago (1 children)

Why resort to an expensive decentralized mechanism when we already have a client-server model? We can just implement rate-limiting on the server.
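For comparison, server-side rate limiting can be as simple as a token bucket keyed by account. This is just a sketch with made-up limits, not Lemmy's actual implementation:

```python
# Token-bucket rate limiter keyed by account (illustrative limits).
import time
from collections import defaultdict

RATE = 1.0   # tokens refilled per second (assumed)
BURST = 5.0  # maximum burst size (assumed)

buckets = defaultdict(lambda: (BURST, time.monotonic()))  # account -> (tokens, last_check)

def allow(account: str) -> bool:
    """Return True if this account may post/vote right now, False if it is rate-limited."""
    tokens, last = buckets[account]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill since the last check
    if tokens < 1.0:
        buckets[account] = (tokens, now)
        return False
    buckets[account] = (tokens - 1.0, now)
    return True
```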

[–] [email protected] 5 points 3 months ago (1 children)

Can't this simply be circumvented by attackers operating several Lemmy servers of their own? That way they can pump as many messages into the network as they want. With PoW, however, the network would only accept messages that work was actually done for.

[–] [email protected] 1 points 3 months ago

Rate-limiting could also be applied at the federation level, but I'm less sure of what the implementation would look like. Requiring filters on a per-account basis might be resource intensive.
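One hedged way to sketch it: instead of per-account filters, count recent activities per originating instance, which keeps the state small. The window size, the ceiling, and keying on the instance domain are all assumptions here:

```python
# Sliding-window limit on incoming federated activities, keyed by source instance (illustrative).
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_ACTIVITIES_PER_WINDOW = 600  # assumed ceiling per remote instance

recent = defaultdict(deque)  # instance domain -> timestamps of recent activities

def accept_activity(instance_domain: str) -> bool:
    """Accept or drop an incoming activity based on how busy its source instance has been."""
    now = time.monotonic()
    q = recent[instance_domain]
    while q and now - q[0] > WINDOW_SECONDS:  # discard timestamps outside the window
        q.popleft()
    if len(q) >= MAX_ACTIVITIES_PER_WINDOW:
        return False
    q.append(now)
    return True
```

The obvious downside is the one raised above: an attacker running many small instances of their own stays under each per-instance ceiling.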

[–] [email protected] 5 points 3 months ago (2 children)

The issue I have with this is that users now basically have to "pay" (with compute time) to speak their mind. It would be similar to having to pay to vote in political elections: it favors the rich. A poor user might not be able to afford an extra $20 a month on their electricity bill, but a large operation (state-sponsored, or pushing a corporate agenda) might have $1,000,000.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago)

We're talking about fractions of a cent per post here. Of course, this all needs to be worked out in detail, and the variables and scaling need to be calculated properly. For someone who only posts 2-3 times a day, the cost and delay are practically too low to measure; but if you start pushing out 100 posts per minute, the difficulty of the PoW calculation goes up.

A delay of a fraction of a second to do the PoW for a single post is not a problem. But a spam bot that is suddenly limited to making 1 post per minute instead of 100 makes a huge difference, and it could drive up the cost even for someone with deep pockets.
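To make the "difficulty scales with volume" idea concrete, here is one possible rule; the base difficulty, the free allowance, and the growth curve are made-up numbers, just brainstorming in code:

```python
# Scale PoW difficulty with an account's recent posting rate (all numbers are assumptions).
import math

BASE_BITS = 18           # cheap enough to be unnoticeable for a normal user
FREE_POSTS_PER_HOUR = 5  # posting rate before the difficulty starts climbing

def difficulty_for(posts_last_hour: int) -> int:
    """Each doubling of the rate beyond the free allowance adds one bit,
    i.e. roughly doubles the expected work per post."""
    if posts_last_hour <= FREE_POSTS_PER_HOUR:
        return BASE_BITS
    return BASE_BITS + math.ceil(math.log2(posts_last_hour / FREE_POSTS_PER_HOUR))

# 3 posts/hour   -> 18 bits (baseline)
# 100 posts/hour -> 23 bits, roughly 32x the work per post
```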

But I'm not an expert in this field. I only know that spam bots and the like are a problem almost as old as the Internet, and that there have been countless attempts to solve it so far, all of which have more or less failed. But maybe we can find a combination that works for our specific case.

Of course, there are still a lot of things to clarify. How do we stop someone from constantly creating new accounts, for example?

Would we have to start new users at a “harder difficulty” to counteract this?

Do we need some kind of reputation system?

And how do we tune these measures accurately enough not to drive away new users while still fulfilling their purpose?

But as I said, I'm not an expert. Just brainstorming here.

[–] [email protected] 1 points 3 months ago

I see it more as a tax: you can evade taxes in a political system, but you're still supposed to pay them if you're voting.