this post was submitted on 18 Feb 2024
147 points (95.1% liked)
Technology
The concept is: an end user reports misinformation, and fact checkers at the company take the reported claim and check it against a database. If it's listed as false in the database, it gets squelched, and the AI gets a little tuning to make sure it stays squelched. If it's in the database and it's true, the user is informed that it's not false information. If it's not in the database, that's when it gets dicey. Does the team of people moderating the posts make the call, or does it go to another team to be classified? At what point do you block it: if one detail is wrong, if two details are wrong, if half the post is wrong? Do you squelch "mostly true"? Or do we just get disclaimers everywhere for 6 months?
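Just to make the flow concrete, here's a rough sketch of that triage logic in Python. The function name, the `Verdict` enum, and the dict-as-database are all my own illustration, not anything from an actual moderation system:

```python
from enum import Enum

class Verdict(Enum):
    SQUELCH = "squelch"          # known false: suppress and tune the model
    CONFIRM_TRUE = "confirm"     # known true: tell the reporter it checks out
    ESCALATE = "escalate"        # unknown: the dicey case, hand it to humans

def triage_report(claim: str, fact_db: dict[str, bool]) -> Verdict:
    """Route a user-reported claim against a hypothetical fact database.

    fact_db maps a claim to True (verified true) or False (verified false).
    """
    if claim in fact_db:
        # Known claim: squelch if the database says it's false,
        # otherwise inform the reporter it's not misinformation.
        return Verdict.SQUELCH if fact_db[claim] is False else Verdict.CONFIRM_TRUE
    # Not in the database: no automated call is safe here.
    return Verdict.ESCALATE
```

Note that this only handles the easy whole-claim cases; the "one detail wrong vs. half the post wrong" question would need some partial-match scoring that nobody seems to have a good answer for.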
I'm mostly puzzled by how this would be carried out when the vast majority of information seems to be discretionary: interpreted, perceived, opinion. Like the statement I just made ;)
Facts either are or aren't.
Misinformation is far more challenging because it's usually derived from an event that was a fact, but the interpretation, analysis, significance, etc. is based on the person's bias.