Code used in the analysis is here

[email protected] 2 points 8 months ago (last edited 8 months ago)

> You do it with math. Measure how many women hold C-level positions at the company and introduce deliberate bias into the hiring process (human or AI) to steer the company towards a target of 50%.
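
In the abstract, that kind of steering is easy enough to sketch out (a toy illustration only; the candidate fields, scores, and boost value are all made up):

```python
# Hypothetical sketch of "steering towards 50%": measure current
# representation and nudge the underrepresented group's scores up.
# The field names, scores, and boost factor are purely illustrative.

def steer_hiring(candidates, current_female_share, target=0.5, boost=0.1):
    """candidates: list of dicts with 'score' and 'gender' keys."""
    adjusted = []
    for c in candidates:
        score = c["score"]
        # Deliberate counter-bias towards the target share.
        if current_female_share < target and c["gender"] == "female":
            score += boost
        elif current_female_share > target and c["gender"] == "male":
            score += boost
        adjusted.append({**c, "score": score})
    # Rank candidates by the adjusted score.
    return sorted(adjusted, key=lambda c: c["score"], reverse=True)
```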

That only works if you can recognise the bias, and identify what's causing it, in order to fix it.

It's not implausible that the AI could arrive at the same trend through correlated signals, even if you excised the gender data: particular names, hobbies, whether someone had joined a sorority, and so on.
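
As a toy demonstration (all the data here is synthetic and the features are invented), a model trained on biased historical decisions with the gender column removed can still rediscover the penalty through a correlated proxy feature:

```python
# Proxy leakage demo: gender is hidden from the model, but a
# correlated feature (sorority membership) lets the model
# reconstruct the historical gender penalty anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)               # 1 = female (NOT given to the model)
skill = rng.normal(0, 1, n)                  # genuinely job-relevant signal
sorority = (gender == 1) & (rng.random(n) < 0.4)  # proxy: correlates with gender

# Historical decisions: skill mattered, but women were penalised.
hired = (skill - 1.0 * gender + rng.normal(0, 0.5, n)) > 0

# Train WITHOUT the gender column -- only skill and the proxy.
X = np.column_stack([skill, sorority.astype(float)])
model = LogisticRegression().fit(X, hired)

print("coefficients [skill, sorority]:", model.coef_[0])
# The sorority coefficient comes out strongly negative: the model
# has rebuilt the gender penalty from the proxy feature alone.
```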

A slapdash fix that just layers a positive adjustment on top to patch the bias might not do much, because most of the time you don't know the specifics of what goes on inside a model, or which parts contribute what. That goes double for a model owned by another company, like OpenAI's ChatGPT; OpenAI would very much not like people pulling apart its LLMs to figure out how they work and what they were trained on.

Consider the whole Google Gemini image generation debacle, where it's suspected that Google secretly appended extra keywords to prompts to try to minimise bias, and caused a whole raft of other problems because the rewriting had unpredicted effects.
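
Nobody outside Google knows the exact mechanism, but the suspected rewriting would amount to something like this (a purely hypothetical sketch; the keywords and trigger rule are guesses, not Google's actual code):

```python
# Hypothetical prompt rewriting: the client silently appends
# diversity keywords before the prompt reaches the image model.
# The suffixes and the trigger rule are invented for illustration.
HIDDEN_SUFFIXES = ["diverse", "of various ethnicities and genders"]

def rewrite_prompt(user_prompt: str) -> str:
    # Applied blindly to any prompt that mentions people, with no
    # awareness of historical or factual context.
    if "person" in user_prompt or "people" in user_prompt:
        return user_prompt + ", " + ", ".join(HIDDEN_SUFFIXES)
    return user_prompt

# "a group of people signing the US Constitution" now silently asks
# for a diverse group -- historically inaccurate, and the user never
# sees why the output looks wrong.
print(rewrite_prompt("a group of people signing the US Constitution"))
```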