submitted 1 year ago by [email protected] to c/[email protected]

The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.

Fighting fake accounts is hard and most implementations do not currently have an effective way of filtering out fake accounts. I'm sure that the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.

[-] [email protected] 19 points 1 year ago* (last edited 1 year ago)

This was a problem on reddit too. Anyone could create accounts - heck, I had 8 accounts:

one main, one alt, one "professional" (linked publicly on my website), and five for my bots (whose accounts were optimistically created, but were never properly run). I had all 8 accounts signed in on my third-party app and I could easily manipulate votes on the posts I posted.

I feel like this is what happened when you'd see posts with hundreds or thousands of upvotes but only 20-ish comments.

There needs to be a better way to solve this, but I'm unsure if we truly can solve this. Botnets are a problem across all social media (my undergrad thesis many years ago was detecting botnets on Reddit using Graph Neural Networks).

Fwiw, I have only one Lemmy account.

[-] [email protected] 6 points 1 year ago

I see what you mean, but there's also a large number of lurkers, who will only vote but never comment.

I don't think it's implausible for a highly upvoted post to have only a small number of comments.

[-] [email protected] 2 points 1 year ago

If it's a meme or shitpost there isn't anything to talk about

[-] [email protected] 1 points 1 year ago

Maybe you're right, but it just felt uncanny to see thousands of upvotes on a post with only a handful of comments. Maybe someone who's active in the bot-detection subreddits can pitch in.

[-] [email protected] 0 points 1 year ago

I agree completely. 3k upvotes on the front page with 12 comments just screams vote manipulation

[-] [email protected] 2 points 1 year ago

True, but there were also a number of subs (thinking of the various meirl spin-offs, for example) that naturally had limited engagement compared to other subs. It wasn’t uncommon to see a post with like 2K upvotes and five comments, all of them remarking on how few comments there actually were.

[-] [email protected] 2 points 1 year ago

Reddit had ways to automatically catch people trying to manipulate votes, though, at least the obvious ones. A friend of mine posted a Reddit link in our group for everyone to upvote and got temporarily suspended for vote manipulation about an hour later. I don't know if something like that can be implemented in the Fediverse, but some people on GitHub have suggested a way for instances to share with other instances how trusted or distrusted a user or instance is.

[-] [email protected] 1 points 1 year ago

I’m curious what value you get from a bot? Were you using it to upvote your posts, or to crawl for things that you found interesting?

[-] [email protected] 1 points 1 year ago* (last edited 1 year ago)

The latter. I was making bots to collect data (for the previously-mentioned thesis) and to make some form of utility bots whenever I had ideas.

I once had an idea to make a community-driven tagging bot to tag images (like hashtags). This would have been useful for graph building and just general information-lookup. Sadly, the idea never came to fruition.

[-] [email protected] 1 points 1 year ago

Cool, thank you for clarifying!

[-] [email protected] 1 points 1 year ago

On Reddit there were literally bot armies that could cast thousands of votes instantly. It will become a problem if votes have any actual effect.

It’s fine if they’re only there as an indicator, but if votes determine popularity and prioritize visibility, it will become a total shitshow at some point, and quickly. So yeah, better to have a defense system in place asap.

[-] [email protected] 0 points 1 year ago

Yes, I feel like this is a moot point. If you want it to be "one human, one vote" then you need to use some form of government login (like id.me, which I've never gotten to work). Otherwise people will make alts and inflate/deflate the "real" count. I'm less concerned about "accurate points" and more concerned about stability, participation, and making this platform as inclusive as possible.

[-] [email protected] 0 points 1 year ago* (last edited 1 year ago)

In my opinion, the biggest (and quite possibly most dangerous) problem is someone artificially pumping up their own ideas. For all the users who sort by active / hot, this would be quite problematic.

I'd love to see some social media research groups consider how to detect and potentially eliminate this issue on Lemmy, since Lemmy is quite new and still malleable (compared to other social media). For example, if they think metric X may be a good idea to include in all metadata to increase the chances of detection, it may be possible to add it to the source code of posts / comments / activities.

I know a few professors and researchers who do research on social media and associated technologies, I'll go talk to them when they come to their office on Monday.

[-] [email protected] 1 points 1 year ago

@Lumidaub Ok, I will remind you on Monday Jul 10, 2023 at 9:36 AM PDT.

[-] [email protected] 0 points 1 year ago

I don't know how you got away with that, to be honest. Reddit has fairly good protection against that behaviour. If you upvote something from the same IP with different accounts reasonably close together, there's a warning. Do it again and there's a ban.
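For the curious, the check described here could look something like this rough sketch (invented data model and thresholds, definitely not Reddit's actual implementation): flag any post where two different accounts voted from the same IP within a short window.

```python
from collections import defaultdict

def same_ip_alerts(events, window=3600):
    """events: (post_id, account, ip, unix_ts) tuples. Flag (post, ip) groups
    where two or more distinct accounts voted within `window` seconds."""
    groups = defaultdict(list)
    for post, account, ip, ts in events:
        groups[(post, ip)].append((ts, account))
    alerts = []
    for (post, ip), hits in groups.items():
        hits.sort()  # order by timestamp, then compare adjacent votes
        for (t1, a1), (t2, a2) in zip(hits, hits[1:]):
            if a1 != a2 and t2 - t1 <= window:
                alerts.append((post, ip))
                break
    return alerts

# Two accounts, same IP, two minutes apart: flagged.
events = [(42, "main", "1.2.3.4", 0), (42, "alt", "1.2.3.4", 120)]
same_ip_alerts(events)  # [(42, '1.2.3.4')]
```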

[-] [email protected] 1 points 1 year ago

I did it two or three times with 3-5 accounts (never all 8). I also used to ask my friends (N=~8) to upvote stuff too (yes, I was pathetic) and I wasn't warned or banned. This was five or six years ago.

[-] [email protected] 1 points 1 year ago* (last edited 9 months ago)

[This comment has been deleted by an automated system]

[-] [email protected] 2 points 1 year ago

Web of trust is the solution. Show me vote totals that only count people I trust, 90% of people they trust, 81% of people they trust, etc. (0.9 multiplier should be configurable if possible!)
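A rough sketch of what that could mean in practice (all names and the trust graph are made up): weight each vote by decay ** hops, where hops is the shortest trust-graph distance from you to the voter, with the 0.9 decay configurable.

```python
from collections import deque

def trust_weights(trust_graph, me, decay=0.9):
    """BFS from `me`; a voter's weight is decay ** (trust-graph distance).
    Users unreachable through my trust network weigh 0."""
    weights = {me: 1.0}
    queue = deque([me])
    while queue:
        user = queue.popleft()
        for trusted in trust_graph.get(user, []):
            if trusted not in weights:
                weights[trusted] = weights[user] * decay
                queue.append(trusted)
    return weights

def weighted_score(votes, weights):
    """votes: {user: +1 or -1}; scale each vote by the voter's trust weight."""
    return sum(v * weights.get(u, 0.0) for u, v in votes.items())

graph = {"me": ["alice"], "alice": ["bob"]}
w = trust_weights(graph, "me")  # alice -> 0.9, bob -> 0.81
# A stranger outside my web of trust contributes nothing to the total.
score = weighted_score({"alice": 1, "bob": 1, "stranger": 1}, w)  # 1.71
```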

[-] [email protected] 2 points 1 year ago

The nice things about the Federated universe is that, yes, you can bulk create user accounts on your own instance - and that server can then be defederated by other servers when it becomes obvious that it's going to create problems.

It's not a perfect fix, and as this post demonstrates, it's only really effective after a problem has been identified. But for cross-server vote manipulation, a server could at least flag a post for a human to review if it detects that, say, 99% of new upvotes are coming from an instance created yesterday with one post.
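That flag-for-review heuristic could be sketched like so (the thresholds and data model are invented for illustration): surface a post when most of its recent upvotes come from instances that are both very new and nearly empty.

```python
from dataclasses import dataclass

@dataclass
class Vote:
    instance: str
    instance_age_days: int
    instance_post_count: int

def flag_for_review(votes, share_threshold=0.9, max_age_days=7, max_posts=5):
    """True if at least `share_threshold` of votes come from young,
    low-activity instances; a human admin then takes a look."""
    if not votes:
        return False
    suspicious = sum(
        1 for v in votes
        if v.instance_age_days <= max_age_days
        and v.instance_post_count <= max_posts
    )
    return suspicious / len(votes) >= share_threshold

# 99 votes from a day-old instance with one post, 1 from an established one.
fresh = [Vote("shady.example", 1, 1) for _ in range(99)]
normal = [Vote("lemmy.world", 400, 10000)]
flag_for_review(fresh + normal)  # True
```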

[-] [email protected] 0 points 1 year ago

It actually seems like an interesting problem to solve. Instance admins have the SQL database with all the voting records, so finding manipulative instances looks a bit like a machine learning problem to me.

[-] [email protected] 2 points 1 year ago

You can buy 700 votes anonymously on reddit for really cheap

I don't see that it's a big deal, really. It's the same as it ever was.

[-] [email protected] 1 points 1 year ago

Over a hundred dollars for 700 upvotes O_o

I wouldn't exactly call that cheap 🤑

On the other hand, ten or twenty quick downvotes on an early answer could swing things I guess ...

[-] [email protected] 2 points 1 year ago

PSA: internet votes are based on a biased sample of users of that site and bots

[-] [email protected] 1 points 1 year ago

Reddit had/has the same problem. It's just that federation makes it way more obvious on the threadiverse.

[-] [email protected] 1 points 1 year ago

In case anyone's wondering this is what we instance admins can see in the database. In this case it's an obvious example, but this can be used to detect patterns of vote manipulation.
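For illustration, here's a hypothetical sketch of one pattern an admin could mine from that vote table (the rows and the overlap threshold are made up): pairs of accounts whose votes agree on several posts and never disagree are likely one operator.

```python
from collections import defaultdict
from itertools import combinations

def suspicious_pairs(vote_rows, min_overlap=3):
    """vote_rows: (user, post_id, vote) tuples. Return account pairs whose
    votes agree on at least `min_overlap` shared posts and never disagree."""
    by_user = defaultdict(dict)
    for user, post, vote in vote_rows:
        by_user[user][post] = vote
    pairs = []
    for a, b in combinations(sorted(by_user), 2):
        shared = by_user[a].keys() & by_user[b].keys()
        if len(shared) >= min_overlap and all(
            by_user[a][p] == by_user[b][p] for p in shared
        ):
            pairs.append((a, b))
    return pairs

# Two sockpuppets upvote the same three posts; alice votes independently.
rows = [(u, p, +1) for u in ("shill1", "shill2") for p in (1, 2, 3)]
rows += [("alice", 1, +1), ("alice", 2, -1)]
suspicious_pairs(rows)  # [('shill1', 'shill2')]
```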

[-] [email protected] 2 points 1 year ago

“Shill” is a rather on-the-nose choice for a name to iterate with haha

[-] [email protected] 1 points 1 year ago* (last edited 9 months ago)

[This comment has been deleted by an automated system]

[-] [email protected] 1 points 1 year ago

This man is over 100 years old

[-] [email protected] 1 points 1 year ago

I've set the registration date on my account back 100 years just to show how easy it is to manipulate Lemmy when you run your own server.

That's exactly what a vampire that was here 100 years ago would say.

[-] [email protected] 1 points 1 year ago

The lack of karma helps some. There's no point in trying to rack up the most points for your account(s), which is a good thing. Why waste time on the lamest internet game when you can engage in conversation with folks on Lemmy instead?

[-] [email protected] 1 points 1 year ago

It can still be used to artificially pump up an idea. Or used to bury one.

[-] [email protected] 1 points 1 year ago

This is the problem. All the algorithms are based on the upvote count. Bad actors will abuse this.

[-] [email protected] 0 points 1 year ago

So maybe more weight should be put on comment count? Much harder to fake those.

[-] [email protected] 0 points 1 year ago

That's where all the harm comes from

[-] [email protected] 2 points 1 year ago

Agree. Farming karma is nothing compared to making a single individual's polar opinion APPEAR as though it is others' (or most people's) opinion. We know that others' opinions are not our own, but they do influence ours. It's pretty important that either 1) like numbers mean nothing, in which case hot/active/etc. are meaningless, or 2) we work together to ensure trust in like numbers.

[-] [email protected] 1 points 1 year ago

You mean to tell me that copying the exact same system that Reddit was using and couldn’t keep bots out of is still vuln to bots? Wild

Until we find a smarter way or at least a different way to rank/filter content, we’re going to be stuck in this same boat.

Who’s to say I don’t create a community of real people who are devoted to manipulating votes? What’s the difference?

The issue at hand is the post ranking system/karma itself. But we’re prolly gonna be focusing on infosec going forward given what just happened

[-] [email protected] 1 points 1 year ago

Did anyone ever claim that the Fediverse is somehow a solution for the bot/fake vote or even brigading problem?

[-] [email protected] 1 points 11 months ago* (last edited 11 months ago)

This blog post is fantastic! It's packed with valuable insights and actionable advice. Thanks for sharing such an informative and well-written article. buy Linkedin Connections

[-] [email protected] 1 points 1 year ago

Federated actions are never truly private, including votes. While it's inevitable that some people will abuse the vote viewing function to harass people who downvoted them, public votes are useful to identify bot swarms manipulating discussions.

[-] [email protected] 0 points 1 year ago

Reddit admins manipulated vote counts all the time.

[-] [email protected] 0 points 1 year ago

Maybe we can show a breakdown of which servers the votes are coming from, so anything sus can be found out right away. It would be easy enough to identify a bot farm that way, I'd think.

this post was submitted on 09 Jul 2023
45 points (94.1% liked)
