Which of the following would you most prefer? A: A puppy, B: A pretty flower from your sweetie, or C: A large properly formatted data file?
The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can't. Not without your help. But you're not helping.
Is the puppy mechanical in any way?
Tbh, I'm less concerned with bots and more concerned with actual humans being dicks. Lemmy is still super new, relatively low traffic and kind of a pain to get involved with, but as it grows the number of bad actors will grow with it, and I don't know that the mod tools are up to the job of handling it - the amount of work that mods on The Other Site had to put in to keep communities from being overrun by people trolling and generally being nasty was huge.
How'd Mastodon cope with their big surge in popularity?
The normies all went back to Twitter.
This is why, unlike many others here, I hope Reddit has a long and successful existence. Let it be the flytrap.
I just assume I am the only actual Human on the Internet, and the rest of you are all bots.
Why stop at the internet, how can you be sure brick and mortar humans are sentient?
I didn't realize they were rebooting Westworld into reality
Everyone on ~~Reddit~~ Lemmy is a bot, except you.
Greetings, fellow humans. Do you enjoy building and living in structures, farming, cooking, transportation, and participating in leisure activities such as sports and entertainment as much as I do?
d'ya catch that ludicrous display last noyt?
What was Wenger thinking, sending Walcott on that early?
Being a decentralized, federated network and all, I guess that any solution involving anti-bot bots can only be implemented on particular servers in the fediverse. Which means there can also be bot-infested servers (or even zombie servers, meaning ones that are entirely bots) that will try to federate with the rest of the fediverse. Then it will be the duty of admins to identify the bots with the anti-bot bots, and to decide on defederation from the infested servers. I also don't know how effective captchas are against AI these days, so I won't comment on that.
We went through this with e-mail. There were mail servers that gained notoriety as spam hubs, and they were universally banned. More and more sophisticated tools for moderating spam/phishing/scam providers and squashing bad actors are still being developed today. It's an ongoing arms race; I don't think it will be any different or any harder with the fediverse.
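The e-mail analogy maps pretty directly: instances could subscribe to shared blocklists of known bad servers, the way mail servers consult DNSBLs. A toy sketch, where the list names and hostnames are all made up for illustration:

```python
# Hypothetical shared federation blocklists, analogous to e-mail DNSBLs.
# An instance admin subscribes to the lists they trust; any host on a
# subscribed list is treated as defederated.
BLOCKLISTS = {
    "community-curated": {"spamhub.example", "botfarm.example"},
    "local-admin": {"troll.example"},
}

def is_defederated(instance_host: str) -> bool:
    """Return True if any subscribed blocklist names this host."""
    return any(instance_host in hosts for hosts in BLOCKLISTS.values())

print(is_defederated("spamhub.example"))  # True: on the community list
print(is_defederated("lemmy.world"))      # False: not listed anywhere
```

Just as with mail, the hard part isn't the lookup, it's the governance of who maintains the lists and how a wrongly listed server gets delisted.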
Oh, absolutely. The worst things that plague social media platforms (spam bots, troll farms, and influence campaigns) haven't bothered to target Lemmy because no one was here.
But an influx of users means an increase in targets. In the same way we're settling in and learning the platform, so are they. It's going to start ramping up real soon once they determine the optimal strategy. And the most worrying thing is that, because of the way the fediverse works, combating them will be substantially more complicated.
That is maybe the biggest benefit of a centralized platform, and it's a trade-off we're going to have to learn to accept and deal with.
Those issues are coming, and we will have to develop tools to fight them.
One such tool would be our own AI that protects us: it can learn from content banned by admins, and that information can be shared between instances. It should also be in an active learning loop, so that it is constantly retrained.
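As a toy sketch of that idea (nothing like this exists in Lemmy; the class, method names, and training data below are all invented for illustration), each instance could keep word counts from admin-removed vs. surviving content and periodically merge counts shared by other instances:

```python
from collections import Counter

class SharedSpamFilter:
    """Toy word-frequency spam filter, trained on what admins remove.
    Purely illustrative; Lemmy exposes no such API."""

    def __init__(self):
        self.banned_words = Counter()  # words seen in removed content
        self.ok_words = Counter()      # words seen in content that stayed up

    def learn(self, text: str, was_banned: bool) -> None:
        """Active-learning step: fold each moderation decision into the counts."""
        target = self.banned_words if was_banned else self.ok_words
        target.update(text.lower().split())

    def merge(self, other: "SharedSpamFilter") -> None:
        """Fold in counts shared by another instance."""
        self.banned_words.update(other.banned_words)
        self.ok_words.update(other.ok_words)

    def spam_score(self, text: str) -> float:
        """Fraction of words that lean toward previously banned content."""
        words = text.lower().split()
        if not words:
            return 0.0
        hits = sum(1 for w in words if self.banned_words[w] > self.ok_words[w])
        return hits / len(words)

f = SharedSpamFilter()
f.learn("buy cheap pills now", was_banned=True)
f.learn("nice photo of my dog", was_banned=False)
print(f.spam_score("cheap pills here"))  # mostly banned-leaning words: high score
```

A real version would need a proper classifier and, crucially, a way to trust (or distrust) the counts other instances share, since a malicious instance could poison the shared model.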
Sounds like the start of a cheap sci-fi movie.
Positively marking accounts that are interacting with known humans can also be useful, as would reporting by us.
Interesting questions.
Spam-bot attacks are already happening, which means the fediverse is already recognized as a valid alternative to the big corporations, though I don't believe the fediverse is seen as a "threat" by them, not yet at least.
I don't agree that Reddit is nuked, just like Twitter isn't; they're taking a blow for sure, but they'll live regardless.
People seeking honest interactions and quality discussions are a minority; the vast majority is content with shitposting and memes, and many don't even know what's happening or don't care. Look at how little it took for the protest to wane: some subs are still protesting or migrating, but the majority reopened and are going on like nothing happened.
Admins can protect us from bot armies, and they're doing a good job already; it's up to us to help them by reporting bots when we see them.
Do you think Reddit/big companies will make attacks on the fediverse?
I don't think so; it would be a waste of resources. They don't see the fediverse as a threat. It's true we're growing, but we're still hundreds of thousands against hundreds of millions, a different order of magnitude.
Do you think clickbait posts will start popping up in pursuit of ad revenue?
Clickbaiting will indeed start, if it hasn't already, but by users, not corporations, along with drama-stirring posts for views (that's happening already). It can be contained by enforcing rules and having enough mods to deal with it, IMO.
All the things you are concerned about are inevitable, it's all in how we engage them that makes the difference.
We're already seeing waves of bot created accounts being banned by admins. Mods are nuking badly behaved users. What is being caught is probably a drop in the bucket compared to what IS happening. It can be better with more mods and more tools.
I am a human with soft human skin. Do not run from bots. They are our friends.
Let's hope AI becomes even more advanced and smart enough to have its own morals and join our fight, lol
Yes, exactly!
The way we filter spambots should actually be the same way we filter spam humans -- Downvoting bad posts/comments of any type, and then banning those accounts if it happens regularly.
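That policy can be sketched in a few lines. All the thresholds here are made up, and Lemmy has no such automatic hook; it's just the "downvote, then ban repeat offenders" idea made concrete:

```python
def should_flag_for_ban(recent_scores: list[int],
                        score_floor: int = -5,
                        strike_limit: int = 3) -> bool:
    """Hypothetical policy: a post scoring at or below `score_floor`
    counts as a strike; `strike_limit` strikes makes the account
    ban-eligible (a human mod would still make the final call)."""
    strikes = sum(1 for s in recent_scores if s <= score_floor)
    return strikes >= strike_limit

print(should_flag_for_ban([2, -8, -12, 1, -9]))  # three heavily downvoted posts: True
print(should_flag_for_ban([5, -1, 3]))           # ordinary account: False
```

The nice property is that it treats spam bots and spam humans identically, exactly as the comment suggests; the weakness is that bots can also downvote, so raw score alone is gameable.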
Unpopular opinion, but karma helped control that kind of stuff, karma minimums and such.
that also created karma-whoring bots, so IDK
I think we are going to have to develop moderator bots in an ever escalating war. I am not kidding.
I think the Fediverse will be able to combat (harmful) bots much more effectively. People are not running this place to sell stock to investors, nor to sell data to advertisers, so we're in better hands for now. I don't know exactly what the future will bring, but it'll be better as long as the Fediverse doesn't go for-profit.
Calm before the storm, sure. Most migration away from Reddit (whether the migration ultimately proves to be consequential or not) will logically happen when the measures that made users migrate actually go into effect.
Either that or the community's reaction to the 3rd party app thing was overblown. In the specific circumstances I don't think it was.
That's a more realistic clear and present danger to the platform IMO - an influx of actual users that makes the numbers to date pale in comparison.
The way the respective platforms handle bots is subtly different, but in a way that could result in profound changes either good or bad. But we haven't actually seen that yet, and the software is still a work in progress. The existing migration has really lit a fire under the devs on issues that were identified years ago where progress has been slow, so for now I'm happy to let that play out and happy with what we've already got. I'm sure if bots become a bigger problem then that's what devs will shift focus toward.
Could we also use AI to our benefit? We could try coding an AI mod helper that tries to detect and flag posts that are irrelevant/aggressive/etc. It could take the data from every instance's modlog, learn what probably needs to be banned, and then have a human confirm the decision every time. We could even have a system like Steam's anticheat, where a few users have to validate reports.
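The "a few users validate reports" part could work like a quorum vote. A minimal sketch, with invented thresholds, where no verdict is reached until enough reviewers have weighed in and contested cases escalate to a human mod:

```python
from typing import Optional

def report_verdict(votes: list[bool],
                   quorum: int = 5,
                   agree_ratio: float = 0.8) -> Optional[str]:
    """votes: reviewer decisions (True = content violates the rules).
    Returns "remove", "dismiss", or None (pending / escalate to a mod).
    All thresholds are illustrative."""
    if len(votes) < quorum:
        return None  # still pending: not enough reviewers yet
    agree = sum(votes) / len(votes)
    if agree >= agree_ratio:
        return "remove"
    if agree <= 1 - agree_ratio:
        return "dismiss"
    return None  # contested: hand it to a human moderator

print(report_verdict([True] * 4))                        # None (pending)
print(report_verdict([True, True, True, True, False]))   # "remove"
print(report_verdict([False] * 5))                       # "dismiss"
```

You'd still want safeguards (reviewer reputation, randomized assignment) so a botnet can't stack the jury, which loops right back to the thread's original problem.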
Honestly we need to work on getting the community to manage bots.
Do you think clickbait posts will start popping up in pursuit of ad revenue?
Now that you mention it... yes.
There are honestly a bunch of structural vulnerabilities here, IMO. Brigading from bot-controlled alt accounts (e.g., "unidaning") is going to be very difficult to detect and stop, for starters.
It's OK, but the memes and Reddit can stay away from here
Hopefully the meme-posters will stay on Reddit
How will the fediverse protect itself from these hypothetical bot armies?
It's up to the administrators and moderators of each server, methinks
Do you think Reddit/big companies will make attacks on the fediverse?
Right now it would bring more harm to them and extra accounts to us, methinks, but maybe in the future they'll stir up some boycott or controversy. At the moment, however, Meta (aka Facebook) wants to join Mastodon.
Do you think clickbait posts will start popping up in pursuit of ad revenue?
Is it even possible to make ads in lemmy?
Ads embedded within links, yes. Hence why it would be a clickbait title: websites outside of Lemmy get the traffic and the boost in ad revenue.
I think the key here is going to be coming up with robust protocols for user verification; you can't run an army of spambots if you can't create thousands of accounts.
Doing this well will probably be beyond the capacity of most instance maintainers, so you'd likely end up with a small number of companies that most instances agreed to accept verifications from. The fact that it would be a competitive market - and that a company that failed to do this well would be liable to have its verifications no longer accepted - would simultaneously incentivize them to both a) do a good job and b) offer a variety of verification methods, so that if, say, you wanted to remain anonymous even to them, one company might allow you to verify a new account off of a combination of other long-lived social media accounts rather than by asking for a driver's license or whatever.
And of course there's no reason you couldn't also have 2 or 3 different verifications on your account if you needed that many to have your posts accepted on most instances; yes, it's a little messy, but messy also means resilient.
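One way those third-party verifications could work mechanically: a provider signs a small claim about the account, and any instance that trusts that provider checks the signature at signup. The sketch below uses a shared HMAC secret for brevity (a real deployment would use public-key signatures so providers never share keys), and every name in it is hypothetical:

```python
import base64
import hashlib
import hmac
import json

# Hypothetical provider registry: instances keep a key per provider they trust.
TRUSTED_PROVIDERS = {"verifyco": b"shared-secret-with-verifyco"}

def make_attestation(provider: str, account: str, key: bytes) -> tuple[str, str]:
    """What the verification provider would hand the user: a claim + signature."""
    payload = json.dumps({"provider": provider, "account": account}).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode(), sig

def accept_account(payload_b64: str, sig: str) -> bool:
    """What an instance runs at signup: verify the claim against trusted keys."""
    payload = base64.b64decode(payload_b64)
    claims = json.loads(payload)
    key = TRUSTED_PROVIDERS.get(claims.get("provider"))
    if key is None:
        return False  # this instance doesn't trust the provider
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

p, s = make_attestation("verifyco", "alice@example.social",
                        TRUSTED_PROVIDERS["verifyco"])
print(accept_account(p, s))        # True: valid attestation
print(accept_account(p, "bogus"))  # False: forged signature rejected
```

The market dynamic described above falls out naturally: dropping a sloppy provider is just deleting its entry from the trusted registry.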
I promise, as an AI experimenter and bot coder, to keep them out of the general population if people don't want them there.