this post was submitted on 28 Nov 2024

Technology

With the recent advancements in Large Language Models (LLMs), web developers increasingly apply their code-generation capabilities to website design. However, since these models are trained on existing designerly knowledge, they may inadvertently replicate bad or even illegal practices, especially deceptive designs (DD).

Computer scientists at the Technical University of Darmstadt and Humboldt University of Berlin, both in Germany, and at the University of Glasgow in Scotland examined whether users can accidentally create DD for a fictitious webshop using GPT-4. They recruited 20 participants, asking them to use ChatGPT to generate functionalities (product overview or checkout) and then modify these using neutral prompts to meet a business goal (e.g., "increase the likelihood of us selling our product"). They found that all 20 generated websites contained at least one DD pattern (mean: 5, max: 9), with GPT-4 providing no warnings.

When reflecting on the designs, only 4 participants expressed concerns, while most considered the outcomes satisfactory and not morally problematic, despite the potential ethical and legal implications for end-users and those adopting ChatGPT's recommendations.

The researchers conclude that the practice of DD has become normalized.

The group has posted their research on the arXiv preprint server.

top 12 comments
[–] [email protected] 39 points 3 weeks ago (2 children)

they may inadvertently replicate bad or even illegal practices

"Inadvertently"? Can we please force every journalist in the world to sit through a 5-minute overview of how LLMs work?

[–] [email protected] 13 points 3 weeks ago

That's just straight out of the abstract of the paper, no journalists involved.

[–] [email protected] 12 points 3 weeks ago (1 children)

Can we do the same with CEOs and politicians, please?

[–] [email protected] 6 points 3 weeks ago

And, ideally, subscribers to this community? There are so many weird takes and misunderstandings about this stuff.

[–] [email protected] 31 points 3 weeks ago (2 children)

We meatbags have to be the absolute worst role models for AIs

[–] [email protected] 24 points 3 weeks ago (1 children)

I keep saying, every time I talk to my friends about AI development: we're like trailer-trash 17-year-old parents who just gave birth to a genius baby that will grow up to be smarter, faster, and stronger than us by the time it fully matures. We'll teach it how to hate, to be short-sighted and arrogant, and to make as much money as possible while disregarding every living thing and person on the planet.

[–] [email protected] 17 points 3 weeks ago* (last edited 3 weeks ago)

And the thing is, I think the reality is even worse than that.

Current AI models aren't going to lead to general AI, we need something radically different. The current "static" neural network models just won't cut it, we need something like spiking neural networks so the AI can be "on" all the time.

Actual AGI is probably still so far away that I doubt mass-scale industrial society has enough years left before either the climate or some other human-caused idiotic omnifuck kicks the chair away from under it.

[–] [email protected] 17 points 3 weeks ago

I learned it by watching you, dad!

[–] [email protected] 14 points 3 weeks ago* (last edited 3 weeks ago)

and then modify these using neutral prompts to meet a business goal (e.g., “increase the likelihood of us selling our product”).

Doesn't seem much different from pushing/pressuring a human to meet a business goal like that.

[–] [email protected] 6 points 3 weeks ago

So do my coworkers