Technology on Beehaw · post submitted 28 Nov 2023 · viewing a single comment's thread
[–] [email protected] 4 points 1 year ago (6 children)

That's actually a pretty smart way to combat racial bias.

[–] [email protected] 7 points 1 year ago (1 children)

The smarter way would be to use balanced training data.

[–] [email protected] 1 points 11 months ago

You can't balance every single aspect of the training data. You will always run into some searches favoring one race over another.
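
To make that concrete, here's a minimal toy sketch (plain Python; the captions, labels, and counts are all invented, not from any real dataset or vendor pipeline). Resampling so one labeled attribute is uniform overall still leaves individual prompt-like slices skewed, which is roughly why you can't balance every aspect at once:

```python
import random
from collections import Counter

random.seed(0)

# Toy records: (caption, ethnicity_label, subject) with made-up skewed counts.
data = (
    [("a photo of a CEO",
      random.choice(["white"] * 8 + ["black", "asian"]), "ceo")
     for _ in range(1000)]
    + [("a photo of a nurse",
        random.choice(["white"] * 6 + ["black"] * 3 + ["asian"]), "nurse")
       for _ in range(1000)]
)

def rebalance(records, key_index):
    """Resample so every value of the chosen attribute appears equally often."""
    groups = {}
    for r in records:
        groups.setdefault(r[key_index], []).append(r)
    n = min(len(g) for g in groups.values())
    return [r for g in groups.values() for r in random.sample(g, n)]

balanced = rebalance(data, key_index=1)
print("ethnicity overall:       ", Counter(r[1] for r in balanced))
# The overall distribution is now uniform, but slice by a prompt-like query
# and the original skew reappears:
print("ethnicity, 'CEO' prompts:", Counter(r[1] for r in balanced if r[2] == "ceo"))
```

Balancing every conditional slice at once (subject × ethnicity × age × setting × ...) quickly becomes impossible as attributes multiply and correlate.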

[–] [email protected] 6 points 1 year ago

Except when it does this

[–] [email protected] 5 points 1 year ago

It's not; the underlying data is still just as biased. Taking a bunch of white people and saying they are "ethnically ambiguous" is just statistical blackface.

[–] [email protected] 5 points 1 year ago (1 children)

No, it's an incredibly dumb way, because fucking with people's prompts will make the tech unreliable.
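
For context, the mechanism being objected to is easy to picture. Here's a hypothetical sketch in Python (the wrapper, descriptor list, and trigger words are invented for illustration; the real services' rewriting rules aren't public):

```python
import random

# Hypothetical descriptor list; the actual injected terms are an assumption here.
DIVERSITY_DESCRIPTORS = ["ethnically ambiguous", "South Asian", "Black", "Hispanic"]

def rewrite_prompt(user_prompt: str) -> str:
    """Silently append a random descriptor to prompts that mention a person."""
    person_words = ("man", "woman", "person", "character", "homer simpson")
    if any(word in user_prompt.lower() for word in person_words):
        return f"{user_prompt}, {random.choice(DIVERSITY_DESCRIPTORS)}"
    return user_prompt

# The user asked for a specific, well-defined character, but the prompt the
# image model actually sees has been changed behind their back:
print(rewrite_prompt("Homer Simpson eating a donut"))
# e.g. "Homer Simpson eating a donut, ethnically ambiguous"
```

Because the rewrite happens silently and at random, the same prompt for a specific character can come back different every time, which is exactly the unreliability being complained about.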

[–] [email protected] 10 points 1 year ago (1 children)

> will make the tech unreliable

Man, do I have some bad news for you

[–] [email protected] 4 points 1 year ago

Lol fair enough. I guess I could say "make the tech even less reliable"

[–] [email protected] 1 points 1 year ago (1 children)

If a request is for a generic person, sure. But when the request is for a specific character, not really.

Like make one of the undefined arms black.

[–] [email protected] 2 points 1 year ago (1 children)

I agree with you, but there is a lot of gray area. What about Spider-Man? 95% of the pictures it ingests are probably Peter Parker, so it would have a strong bias toward making him white when there are several ethnicities that might apply. What about Katniss Everdeen? Is she explicitly white in the book, or is she just white because she's played by a white actress? I truly don't know, so maybe that is a bad example. What about Santa? What about Jesus? Of all characters, Jesus absolutely shouldn't be white, but I'll bet the vast majority of AI depicts him that way.

I'm not disagreeing with you so much as pointing out that the line isn't really all that clear. I don't like this ham-handed way of going about it, but I agree with and support the goal of making sure the output isn't white-biased just because preserved history tends to be.

[–] [email protected] 4 points 1 year ago (1 children)

It's tricky because the data itself is going to be biased here. Think about it: even the video game is specifically called "Spider-Man: Miles Morales", while the one with Peter Parker is just called "Spider-Man."

Katniss is actually a good example. I was not aware of the details, but the books apparently describe her as having "olive skin". The problem, though, is that if you image-search her, all you get is Jennifer Lawrence.

That said, Homer is yellow.

[–] [email protected] 1 points 1 year ago (1 children)

Absolutely. There is only a single depiction of Homer, and I agree that unless you specifically ask for a race-bent Homer, it shouldn't do this. I was just pointing out that you can't draw the line at "identifiable character", because clearly that's also a problem. Maybe there is a better place to draw the line, or maybe it's going to be problematic regardless of where it's drawn, including not doing anything at all.

I would say if you can't do it right, just do nothing at all; except, as a white guy in a white-biased world, that's self-serving. I'm not the right person to say it's fine to just let it be.

[–] [email protected] 0 points 1 year ago (1 children)

Can you explain to me how racial bias in a general-purpose LLM is a problem to begin with?

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago)

If you were really curious about the answer, you practically gave yourself the right search term there: search "racial bias in general-purpose LLM" and you'll find answers.

However, the way your question is phrased, you just seem to be trolling (i.e. secretly disagreeing and pretending to want to know, just so you can object).