this post was submitted on 12 Mar 2024
52 points (100.0% liked)

Technology


Brin’s “We definitely messed up”, delivered at an AI “hackathon” event on 2 March, followed a slew of social media posts showing Gemini’s image generation tool depicting a variety of historical figures – including popes, founding fathers of the US and, most excruciatingly, German second world war soldiers – as people of colour.

all 36 comments
[–] [email protected] 28 points 8 months ago (1 children)

was it really offensive or was it just "target selling pride clothes during pride month" offensive?

[–] [email protected] 16 points 8 months ago (2 children)

I don't know that "offensive" is the right word. More just "shitty" and "lazy".

Like they took the time to teach it "diversity" but couldn't be bothered to train it past "diversity = people who are not white", or to acknowledge when the user asks specifically for a white person or for a different region or time period.

[–] [email protected] 6 points 8 months ago* (last edited 8 months ago) (1 children)

I, for one, welcome Japanese George Washington, Indian Hitler and Inuit Gandhi to our historical database.

[–] [email protected] 3 points 8 months ago

Jojo Rabbit featured Jewish Maori Hitler and was very well received.

[–] [email protected] 4 points 8 months ago

I think the lesson here is that political correctness isn't very machine learnable. Human history and modern social concerns are very complex in a precise way and really should be addressed with conventional rules and algorithms. Or manually, but that's obviously not scalable at all.

[–] [email protected] 16 points 8 months ago (3 children)

It's not just historical. I'm a white male and I prompted Gemini to create images of a middle-aged white man building a Lego set, etc. Only one image was of a white male; two of the others were an Indian man and a Black man. Why, when I asked for a white male? It was an image I wanted to share with my family. Why would Gemini go off the prompt? I did not ask for diversity, nor was it expected for that purpose, and I got no other options for images which I could consider, so it was a fail.

[–] [email protected] 33 points 8 months ago (3 children)

The problem is that the training data is biased and these AIs pick up on biases extremely well and reinforce them.

For example, people of color tend to post fewer pictures of themselves on the internet, mostly because remaining anonymous is preferable to experiencing racism.
So, if you've then got a journalistic picture, like from the food banks mentioned in the article, suddenly there will be relatively many people of color there, compared to what the AI has seen from its other training data.
As a result, it will store that one of the defining features of what a food bank looks like is that there are people of color there.

To try to combat these biases, the bandaid fix is to prefix your query with instructions to generate diverse pictures. As in, literally prefix. They're simply putting words in your mouth (which is industry standard).
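The "literally prefix" part can be sketched in a few lines. This is a hypothetical illustration of the general technique, not Google's actual pipeline; the prefix wording and function name are invented for the example.

```python
# Hypothetical sketch of a provider silently prepending diversity
# instructions to every image prompt. The prefix text is invented;
# it is not Google's real system prompt.
DIVERSITY_PREFIX = (
    "Generate images depicting a diverse range of ethnicities and genders. "
)

def build_image_prompt(user_prompt: str) -> str:
    """Prepend the hidden instruction before the user's own words."""
    return DIVERSITY_PREFIX + user_prompt

# The user never sees that their prompt was rewritten:
final_prompt = build_image_prompt("a banker sitting at a desk")
```

The user's words survive verbatim; they just arrive at the model with extra instructions stapled on the front, which is why the effect shows up even in prompts that already specify demographics.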

[–] [email protected] 6 points 8 months ago (1 children)

For example, people of color tend to post fewer pictures of themselves on the internet, mostly because remaining anonymous is preferable to experiencing racism.

That is quite the bold statement. Source?

[–] [email protected] 7 points 8 months ago

I don't think I came up with that myself, but yeah, I've got nothing. It would have been multiple years since I read about it.
Maybe strike the "mostly", but it seemed logical enough to me that this would be a factor, similar to how some women will avoid revealing their gender (in certain contexts on the internet) to steer clear of sexual harassment.
For that last part, I can refer you to a woman from whom I've heard first-hand that she avoids voice chat in games because of that.

[–] [email protected] 5 points 8 months ago (1 children)

Sometimes you do want something specific. I can understand if someone just asked for a person doing x, y, z and then gets a broader selection of men, women, young, old, Black or white. But if one asks for a middle-aged white man, I would not expect it to respond with a young Black woman just to have variety. I'd expect other non-stated variables to be varied. It's like asking for a scene of specifically leafy green trees: I would not expect to see a whole lot of leafless trees.

[–] [email protected] 13 points 8 months ago (1 children)

Yeah, the problem with that is that there's no logic behind it. To the AI, "white person" is equally as white as "banker". It only knows what a white person looks like, because it's been shown lots of pictures of white people and those were labeled "white person". Similarly, it's been shown lots of pictures of white people and those were labeled "banker".

There is a way to fix that, which is to introduce a logic before the query is sent to the AI. It needs to be detected whether your query contains explicit reference to skin color (or similar), and if so, that query prefix needs to be left out.

Where it gets wild, is that you can ask the AI whether your query contains such explicit references to skin color and it will genuinely do quite well at answering that correctly, because text processing is its core competence.
But then it will answer you "Yes." or "No." or "Potato chips." and you have to program the condition to then leave out the query prefix.
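The conditional logic described above can be sketched simply. Here the detection is done with an invented keyword list rather than a second LLM call, purely for illustration; a real system might ask a classifier model instead, and the prefix text is again hypothetical.

```python
import re

DIVERSITY_PREFIX = (
    "Generate images depicting a diverse range of ethnicities and genders. "
)

# Invented, incomplete keyword list for illustration only. A production
# system might instead ask an LLM "does this prompt specify demographics?"
# and branch on its yes/no answer.
EXPLICIT_TERMS = re.compile(
    r"\b(white|black|asian|indian|latino|latina|male|female|man|woman)\b",
    re.IGNORECASE,
)

def build_image_prompt(user_prompt: str) -> str:
    """Only inject the diversity prefix when demographics were left open."""
    if EXPLICIT_TERMS.search(user_prompt):
        return user_prompt  # honor the explicit request as-is
    return DIVERSITY_PREFIX + user_prompt
```

With this gate, "a middle-aged white man building Lego" passes through untouched, while "a banker at a desk" still gets the prefix, which is exactly the behavior the commenter above was asking for.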

[–] [email protected] 4 points 8 months ago

Yes, it could be that, and may explain why the Nazi images came out like they did. But it sounded more like to me, Google was forcing diversity into the images deliberately. But sometimes that does not make sense. For general requests, yes. Otherwise they can just as well decide that grass should not always be green or brown, but sometimes also just make it blue or purple for variety.

[–] [email protected] 3 points 8 months ago (1 children)

Nah, in this case I think it's a classic case of overcorrection and prompt manipulation. The bias you're talking about is real, so to try to combat it, they and other AI companies manipulate your prompt before feeding it to the LLM. I'm very sure they are stripping out "white male" and/or subbing in different ethnicities to try to counter the bias.

[–] [email protected] 4 points 8 months ago

TFW you accidentally leave the hidden diversity LoRA weight at 1.00.
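For anyone who missed the joke: a LoRA is a low-rank weight delta merged into the base model with a scale factor, so "leaving the weight at 1.00" means applying the adapter at full strength. A toy sketch of that merge, using plain nested lists (the matrices here are made up for illustration):

```python
def matmul(X, Y):
    """Naive matrix multiply for small nested-list matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def apply_lora(W, A, B, weight):
    """Merge a low-rank adapter into base weights: W' = W + weight * (B @ A)."""
    delta = matmul(B, A)
    return [[w + weight * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]  # toy 2x2 base weights
B = [[1.0], [0.0]]            # rank-1 adapter factors
A = [[0.0, 1.0]]

# weight=0.0 disables the adapter entirely; weight=1.0 applies it fully.
assert apply_lora(W, A, B, 0.0) == W
```

Dialing `weight` down is exactly how you'd soften a fine-tune's effect rather than ship it at full force.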

[–] [email protected] 8 points 8 months ago (1 children)

Could you elaborate on the use case you're describing? You were trying to make an image of a middle aged white man building Lego for your family?

[–] [email protected] 5 points 8 months ago (1 children)

Yes, but it does not really matter what the rest of the prompt detail was. The point is, it was supposed to be an image of me doing an activity. I'd clearly prompted for a white man, but it gave me two other images that were completely not that. Why was Gemini deviating from specific prompts like that? It seems like the identical issue to the case with the Nazis: introducing variations completely of its own.

[–] [email protected] 13 points 8 months ago (1 children)

Yeah yeah sure sure but why were you generating an image of a middle aged white man building Lego for your family? I'm baffled.

[–] [email protected] 6 points 8 months ago (1 children)

That is really just not relevant at all to the discussion here, but to satisfy your curiosity: I'm busy building a Lego model that a family member sent me, so the generated AI photo was supposed to depict someone who looked vaguely like me building such a Lego model. I used Bing in the past, and it usually delivered 4 usable choices. The fact that Google gave me something that was distinctly NOT what I asked for means it is messing with the specifics that are asked for.

[–] [email protected] 6 points 8 months ago (2 children)

Why use an AI? Just like... take a selfie

[–] [email protected] 3 points 8 months ago

I'm not the lego person, but I am not taking that selfie because: 1) I don't want to clean the house to make it look all nice before judgey relatives critique the pic, 2) my phone is old and all its pics are kinda fish-eyed, 3) I don't actually want to spend the time doing the task right now when AI can get me an image in seconds.

[–] [email protected] 3 points 8 months ago (1 children)

So, what you're saying is that white people shouldn't use AI?

[–] [email protected] 2 points 8 months ago

It would appear that is exactly what I'm saying as long as the reader lacked any reading comprehension skills.

[–] [email protected] 6 points 8 months ago

A while back, one of the image generation AIs (Midjourney?) caught flak because the majority of the images it generated only contained white people. Like...over 90% of all images. And worse, if you asked for a "pretty girl" it generated uniformly white girls, but if you asked for an "ugly girl" you got a more racially diverse sample. Wince.

But then their reaction was to just literally tack "...but diverse!" onto the end of prompts or something. They literally just inserted stuff into the text of the prompt. This solved the immediate problem, and the resulting images were definitely more diverse...but it led straight to the sort of problems that Google is running into now.

[–] [email protected] 9 points 8 months ago* (last edited 8 months ago) (1 children)

What? Gemini has an image gen tool? That fucker told me it didn't when I asked! Dumbass AI don't even know what it can do... SMH

[–] [email protected] 10 points 8 months ago

They switched off image generation after these issues, so it (correctly) said that it couldn’t generate images at the time.

[–] [email protected] 2 points 8 months ago

Maybe there is a ban from ChatGPT

[–] [email protected] 1 points 8 months ago

🤖 I'm a bot that provides automatic summaries for articles:

Brin’s comments, at an AI “hackathon” event on 2 March, follow a slew of social media posts showing Gemini’s image generation tool depicting a variety of historical figures – including popes, founding fathers of the US and, most excruciatingly, German second world war soldiers – as people of colour.

The pictures, as well as Gemini chatbot responses that vacillated over whether libertarians or Stalin had caused the greater harm, led to an explosion of negative commentary from figures such as Elon Musk who saw it as another front in the culture wars.

But it follows a similar pattern to an uncovered system prompt for OpenAI’s Dall-E, which was instructed to “diversify depictions of ALL images with people to include DESCENT and GENDER for EACH person using direct term”.

Dame Wendy Hall, a professor of computer science at the University of Southampton and a member of the UN’s advisory body on AI, says Google was under pressure to respond to OpenAI’s runaway success with ChatGPT and Dall-E and simply did not test the technology thoroughly enough.

Hall says Gemini’s failings will at least help focus the AI safety debate on immediate concerns such as combating deepfakes rather than the existential threats that have been a prominent feature of discussion around the technology’s potential pitfalls.

Dan Ives, an analyst at the US financial services firm Wedbush Securities, says Pichai’s job may not be under immediate threat but investors want to see multibillion-dollar AI investments succeed.


Saved 78% of original text.