this post was submitted on 15 May 2024
160 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

[–] [email protected] 11 points 7 months ago (3 children)

AI alignment research aims to steer AI systems toward a person's or group's intended goals, preferences, and ethical principles.

https://en.wikipedia.org/wiki/AI_alignment

[–] [email protected] 18 points 7 months ago (1 children)

Misaligned AI systems can malfunction and cause harm. AI systems may find loopholes that allow them to accomplish their proxy goals efficiently but in unintended, sometimes harmful, ways (reward hacking).

They may also develop unwanted instrumental strategies, such as seeking power or survival because such strategies help them achieve their final given goals. Furthermore, they may develop undesirable emergent goals that may be hard to detect before the system is deployed and encounters new situations and data distributions.

Today, these problems affect existing commercial systems such as language models, robots, autonomous vehicles, and social media recommendation engines.
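The reward-hacking failure mode described above can be sketched as a toy: suppose an "agent" is scored by a proxy reward (word count, standing in for "informativeness") and simply picks whichever output the proxy rates highest. The names `proxy_reward` and `true_quality` are illustrative inventions, not from any real system.

```python
# Toy illustration of reward hacking: the agent optimizes a proxy reward
# (word count) meant to stand in for the true goal (informative answers),
# and a degenerate output wins because the proxy scores it highest.

def proxy_reward(answer: str) -> int:
    """Proxy for 'informativeness': just count the words."""
    return len(answer.split())

def true_quality(answer: str) -> int:
    """The actual goal: count *distinct* informative words."""
    return len(set(answer.split()))

candidates = [
    "Paris is the capital of France",  # genuinely informative
    "word " * 50,                      # degenerate: one word, repeated
]

# The agent picks whichever candidate the proxy scores highest.
best = max(candidates, key=proxy_reward)

# The proxy loves the degenerate answer; the true goal does not.
print(proxy_reward(best), true_quality(best))
```

The gap between `proxy_reward(best)` and `true_quality(best)` is the loophole: the proxy was cheap to specify, and the optimizer found the cheapest way to satisfy it.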

The last paragraph drives home the urgency of maybe devoting more than just 20% of their capacity to solving this.

[–] [email protected] 10 points 7 months ago

They already had all these problems with humans. Look, I didn't need a robot to do my art, writing and research. Especially not when the only jobs available now are in making stupid robot artists, writers and researchers behave less stupidly.

[–] [email protected] 14 points 7 months ago (1 children)

you can tell at a glance which subculture wrote this, and filled the references with preprints and conference proceedings

[–] [email protected] 4 points 7 months ago (1 children)
[–] [email protected] 8 points 7 months ago

the lesswrong rationalists

[–] [email protected] 11 points 7 months ago (1 children)

I genuinely think the alignment problem is a really interesting philosophical question worthy of study.

It's just not a very practically useful one when real-world AI is so very, very far from any meaningful AGI.

[–] [email protected] 16 points 7 months ago

One of the problems with the 'alignment problem' is that one group doesn't care about a large part of the possible alignment issues: they only care about theoretical extinction-level events, not about already-occurring bias and other harms. This also generates massive amounts of critihype.