this post was submitted on 30 Nov 2024
351 points (97.8% liked)


Danish researchers created a private self-harm network on the social media platform Instagram, including fake profiles of people as young as 13 years old, in which they shared 85 pieces of self-harm-related content gradually increasing in severity, including blood, razor blades and encouragement of self-harm.

The aim of the study was to test Meta’s claim that it had significantly improved its processes for removing harmful content, which it says now use artificial intelligence (AI). The tech company claims to remove about 99% of harmful content before it is reported.

But Digitalt Ansvar (Digital Accountability), an organisation that promotes responsible digital development, found that in the month-long experiment not a single image was removed.

Rather than attempting to shut down the self-harm network, Instagram’s algorithm was actively helping it to expand. The research suggested that 13-year-olds became friends with all members of the self-harm group after they were connected with one of its members.

Comments

[–] [email protected] 50 points 3 weeks ago (4 children)

How on earth did that pass the ethics application?

[–] [email protected] 24 points 3 weeks ago (1 children)

They probably had no idea it would be this bad.

[–] [email protected] 17 points 3 weeks ago (1 children)

Thought: "They probably do something, but I doubt the claims of 99%."

Reality: "They aren't doing shit!"

[–] [email protected] 6 points 3 weeks ago* (last edited 3 weeks ago)

Hey, the algorithm hides the image if it contains words like “death”, it’s all good

[–] [email protected] 21 points 3 weeks ago (1 children)

Meta’s claim that it blocks this type of material, combined with how widely this material already circulates, means that adding a temporary source of it does not carry the level of harm you might expect. Testing whether Meta does in fact remove this type of content, and finding that it fails to, may reasonably be expected to lead to changes that reduce the amount of this material overall. The net result is a very small, essentially marginal increase in the amount of self-harm material and a fuller understanding of the efficacy of Meta’s filtering systems. If I were on the ethics board, I would approve.

[–] [email protected] 9 points 3 weeks ago

Plus, if it did work the way it was supposed to, there would be zero harm done.

[–] [email protected] 8 points 3 weeks ago

Maybe the ethics board uses AI, claiming to remove about 99% of harmful studies before they are approved.

[–] [email protected] 4 points 3 weeks ago (1 children)

The group was private and they created fake profiles ... did I miss something?

[–] [email protected] 1 points 3 weeks ago* (last edited 3 weeks ago)

Yeah, you did. The "fake" profiles could have been made by anyone and could have shared that content to non-private groups, so it should have been blocked either way. The accounts being "fake" doesn't take away from Meta's claim that 99% of this type of content is removed. Please use your brain.