this post was submitted on 06 Oct 2023
56 points (100.0% liked)

TechTakes


Representative take:

If you ask Stable Diffusion for a picture of a cat it always seems to produce images of healthy looking domestic cats. For the prompt "cat" to be unbiased Stable Diffusion would need to occasionally generate images of dead white tigers since this would also fit under the label of "cat".

all 25 comments
[–] [email protected] 30 points 1 year ago (2 children)

I had severe decision paralysis trying to pick out quotes cause every post in that thread is somehow the worst post in that thread (and it’s only an hour old so it’s gonna get worse) but here:

Just inject random 'diverse' keywords in the prompts with some probabilities to make journalists happy. For an online generator you could probably take some data from the user's profile to 'align' the outputs to their preferences.

solving the severe self-amplifying racial bias problems in your data collection and processing methodologies is easy, just order the AI to not be racist

…god damn that’s an actual argument the orange site put forward with a straight face
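for reference, that "fix" boils down to something like this (a hypothetical sketch only; the keyword list and probability are invented for illustration, none of it comes from the thread):

```python
import random

# Hypothetical sketch of the quoted suggestion: silently append randomly
# chosen "diverse" keywords to the user's prompt before generation.
# Keyword list and probability are made up for illustration only.
DIVERSITY_KEYWORDS = ["diverse", "woman", "person of color", "elderly person"]

def inject_keywords(prompt: str, probability: float = 0.3) -> str:
    """Maybe tack an extra keyword onto the prompt, then hand it to the model."""
    if random.random() < probability:
        prompt = f"{prompt}, {random.choice(DIVERSITY_KEYWORDS)}"
    return prompt

print(inject_keywords("a picture of a doctor"))
```

that's the whole intervention: no change to the training data, no change to the model, just a coin flip bolted onto the prompt.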

[–] [email protected] 5 points 1 year ago

So this is how the tokenism sausage is made!

[–] [email protected] 2 points 1 year ago (1 children)

It works with other obvious stuff. Putting the words "best, good, high quality" in your prompt actually makes the generated images better.
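(for the curious, a minimal sketch of what that looks like in practice, assuming the Hugging Face diffusers Stable Diffusion pipeline; the model ID and exact keywords here are just examples, not a recommendation:)

```python
from diffusers import StableDiffusionPipeline

# Example only: append "quality" keywords to an otherwise plain prompt.
# Model ID and keyword choices are illustrative, not prescriptive.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

plain_prompt = "a cat"
decorated_prompt = plain_prompt + ", best quality, highly detailed"

image = pipe(decorated_prompt).images[0]
image.save("cat.png")
```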

[–] [email protected] 12 points 1 year ago (1 children)

brb throwing away your account

[–] [email protected] 10 points 1 year ago (4 children)

I did not expect to get back to my laptop late on a Friday and see someone go "Well Akshoewally, If You Just Sing Gentle Sweet Songs To The Prompt then you get the socks you wanted"

but I guess the orange site had a spillover and has me covered today!

[–] [email protected] 9 points 1 year ago (2 children)

leading to the obvious question: if putting the words “best, good, high quality” in your generative AI prompt isn’t a placebo, then why is all the AI art I’ve seen absolute garbage

[–] [email protected] 10 points 1 year ago (1 children)

I forget where I saw it, but the phrase/comparison stuck with me and I think of it often: all of this shit is a boring person's idea of interesting

but the "just slap some prompt qualifiers on it (to deal with the journalists)" ...god. it is of course entirely unsurprising to have an orange poster be so completely assured of their self-correctness to not even question anything, but the outright direct "just dress it up in vibes until they shut up"

you just have to wonder what (and who?) else in their life they treat the same way

[–] [email protected] 8 points 1 year ago* (last edited 1 year ago) (1 children)

a boring person’s idea of interesting

Agh this is such a good way of putting it. It has all the signifiers of a thing that has a lot of detail and care and effort put into it but it has none of the actual parts that make those things interesting or worth caring about. But of course it's going to appeal to people who don't understand the difference between those two things and only see the surface signifiers (marketers, executives, and tech bros being prime examples of this type of person)

ETA: and also of course this explains why their solution to bias is "just fake it to make the journalists happy." Why would you ever care about the actual substance when you can just make it look ok from a distance

[–] [email protected] 5 points 1 year ago

it was from a post around the time when DALL-E and such were first catching social hype, I think. iirc the article was touching specifically on the output product of visual generators

if I find the article again I'll link it

[–] [email protected] 8 points 1 year ago

I checked their other replies. I shouldn't have.

you'd think I'd learn by now

[–] [email protected] 5 points 1 year ago

Amazing, as that wasn't your point or why the post was bad.

you: oh god, they said 'get rid of racial profiling by more profiling and adding secret prompts'

reply: 'you can change prompts, this can improve things!'

But yeah, weird guy; looking at their other posts on Lemmy, banning them was a good move.

[–] [email protected] 5 points 1 year ago (2 children)

"best, good, high quality, huge tits, mahoosive bazongas, boobs, popular on artstation"

[–] [email protected] 5 points 1 year ago (1 children)

did you see the thing on 404media with the genai girlfriend service that's already serving out totally batshit outputs?

[–] [email protected] 4 points 1 year ago

@dgerard @froztbyte "ships from China in 12-14 days"

[–] [email protected] 27 points 1 year ago

These guys.

Exactly this. Generative AI shows that most people doing technical work are men? It probably also shows that most construction workers are men, most social workers are women, etc.. Guess what, that reflects reality. If you want something else? You can ask for it. "Picture of a woman welding." "Picture of a black, male social worker." You'll get it, no problem.

Over on Mastodon I linked to an NPR article where they kept asking Midjourney to generate images of BLACK doctors treating WHITE children in Africa and it was largely unable to do so, even with the prompts. Not "no problem," but Midjourney sometimes literally interspersed giraffes and elephants into images with Black doctors.

[–] [email protected] 16 points 1 year ago

The amount of lazy "it is what it is" takes makes me want to vomit.

Every system has some form of bias, more or less, and a system that has less of a functional bias than another system isn't necessarily a better one

I can't even begin to comprehend how asinine this take is.

[–] [email protected] 9 points 1 year ago

This is gonna be a little off the rails but bear with me:

I recently watched a YouTube video that talked about how the contemporary jazz musician Laufey* and her audience run the risk of erasing the history and culture of jazz because they don’t take the time to engage with it. Instead, they are content with replacing it with an idealised parody/pastiche of that culture. Like how people wear Mexican costumes and drink on Cinco de Mayo, or Irish costumes on St. Patrick's Day, or German costumes for Oktoberfest, or 1920s rich-white-people costumes for a Gatsby party, etc.

I’ve also been thinking about how the immortality fetish faction of treacles want to do brain uploading so they can live in a simulation forever. I think anyone would agree that such an existence is essentially the same as plopping on a VR headset and watching AI-generated content.

Putting these two ideas together, I’ve essentially reformulated what we already know about treacles et al, which is that they don’t want to acknowledge actual reality. Their model of the world is a pastiche of lazy stereotypes reinforced by cherry-picked statistics. They want to live in a space that confirms all their biases, basically an echo chamber in the cloud.

So yeah, when confronted with an observation about how generative AI produces biased results, we see an expression of the above. The AI-produced parody is the reality they want to live in, so there’s no issue.

*I love Laufey. She’s great. You should give her a listen.

[–] [email protected] 5 points 1 year ago

Because reflecting "reality" never affects reality, right? ....Right?

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

I commented about this when it was first posted but I'm still angry. These motherfuckers never consider that "reflecting reality" perpetuates that reality. And if AI art never surprises you, it isn't art. But they don't care.

[–] [email protected] 2 points 1 year ago

sounds like something linus tech tips would say