this post was submitted on 05 Jun 2024
99 points (93.0% liked)
Gaming
Because the devs/mods have the power to at least attempt to remove the person from the game before anyone else has to suffer their comments.
It's pretty simple to enable mod actions, too. Game devs make a list of rules about what you can and can't say. You agree to those rules when you start playing the game. Breaking the rules earns you a punishment. If you don't like it, you don't play the game. If the rules are unfairly restrictive then people won't play the game and it will fail. This is how internet moderation has worked since forever.
Yes, that is how moderation has worked in some places in the past. It's also historically been unpaid volunteer work, and not particularly effective, especially at large scales. Most of the people here have at least one story about bad moderation on reddit precisely because that kind of moderation is inefficient and heavily influenced by the personal bias of whichever moderator reviews a report. You still needed to block people regularly if you wanted to both participate and avoid harassment from a subset of users. That's how it is all over the internet, and nothing can completely remove that element of online activity. Hence the need for thicker skin.
Well yeah, that's why part of Riot's solution seems to be adding more mods. I'd be more understanding if Riot didn't have the resources to add more paid mod support, but I truly don't think that's the case. So yeah, pay more mods and use more advanced technology to flag communication, I think that's an attainable goal.
I'm not saying that people shouldn't still protect themselves by blocking harassment, but I believe it's perfectly within devs' abilities to at least attempt to remove the most heinous bullies from the game.
While that is true in many respects, voice chat is quite difficult to police compared to text chat. I'm not sure how you go about automating or even monitoring it without recording everything people say using your service, which then brings up a whole host of issues, from data storage costs to privacy concerns to consent-to-record laws. You pretty much have to rely on users to submit evidence of their claims, and that leads us back to the idea that users need to expect an active role in enforcing any sort of moderation policy.
Recording people for moderation purposes doesn't bring up any issues if it's in the Terms of Service of whatever service/game you're using. Agreeing to the ToS is a form of contract. CoD's voice chat, for example, is already monitored and recorded.
Also, as AI voice recognition gets better, so will the effectiveness of those moderation tools, not just in terms of speed but also in terms of cost.
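The pipeline being imagined here is roughly: speech-to-text, then scan the transcript for prohibited terms, then queue hits for a human moderator. A minimal sketch of just the flagging step might look like the following; the function name, blocklist, and placeholder terms are all illustrative, not any real game's or vendor's API, and the speech-to-text stage is assumed to have already run.

```python
import re

# Placeholder terms; a real system would curate and localize this list.
BLOCKLIST = {"slurone", "slurtwo"}

def flag_transcript(transcript: str, blocklist=BLOCKLIST) -> list[str]:
    """Return the blocklisted words found in a speech-to-text transcript.

    A real pipeline would attach timestamps and queue flagged audio
    clips for human review rather than auto-punishing on a keyword hit.
    """
    words = re.findall(r"[a-z']+", transcript.lower())
    return sorted(set(words) & set(blocklist))

hits = flag_transcript("that was slurone, just uninstall")
# 'hits' now lists any blocklisted terms; empty means nothing flagged.
```

Keyword matching alone is crude (it misses phonetic evasion and context), which is why the human-review queue matters, but it illustrates how falling transcription costs make this kind of tooling more attainable.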