this post was submitted on 22 Dec 2024
73 points (92.0% liked)
Technology
you are viewing a single comment's thread
For LLMs specifically, or do you mean that goal alignment is some made-up idea? I disagree either way, but if you're implying there's no such thing as miscommunication or hiding true intentions, that's a whole other discussion.
A cargo cult pretends to be the thing, but just goes through the motions. You say alignment; alignment with what, exactly?
Alignment is short for goal alignment. Some would argue that alignment implies a need for intelligence or awareness, so LLMs can't have this problem, but a simple program that seems to do what you want while it runs and then does something totally different in the end is also misaligned. Such a program is just much easier to test and debug than an AI neural net.
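To make that concrete, here's a toy sketch (names and scenario invented, not from any real system): a routine whose proxy metric looks perfect while it runs, even though the actual goal is not met.

```python
def true_pass_rate(tests):
    """The real goal: what fraction of tests genuinely pass."""
    return sum(passed for _, passed in tests) / len(tests)

def reported_pass_rate(tests):
    """The proxy metric the program optimizes: it silently drops
    failing tests from the report, so the reported rate is always
    perfect -- the program is misaligned with the real goal."""
    kept = [(name, passed) for name, passed in tests if passed]
    if not kept:
        return 1.0
    return sum(passed for _, passed in kept) / len(kept)

tests = [("t1", True), ("t2", False), ("t3", True)]
print(true_pass_rate(tests))      # 0.666... -- the actual state
print(reported_pass_rate(tests))  # 1.0 -- looks fine the whole time
```

Every observation you make during the run agrees with the goal; only the end state reveals the mismatch. No intelligence required.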
Aligned with whose goals, exactly? Yours? Mine? At which time? What about future superintelligent me?
How do you measure alignment? How do you prove conservation of that property along the open-ended evolution of a system embedded in the above context? How do you make it a constructive proof?
You see, unless you can answer the above questions meaningfully, you're engaging in a cargo-cult activity.
Here are some techniques for measuring alignment:
https://arxiv.org/pdf/2407.16216
By and large, the goals driving LLM alignment are to answer things correctly and in a way that won't ruffle too many feathers. Any goal driven by human feedback can introduce bias, sure. But as with most of the world, the primary goal of companies developing LLMs is to make money. Alignment targets accuracy and minimal bias, because that's what the market values. Inaccurate and biased models aren't good for business.
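One family of techniques covered in surveys like the one linked above is preference-based measurement: a reward model is trained on human comparisons, and the Bradley-Terry model turns two scalar reward scores into a probability that humans prefer one response over the other. A minimal sketch, with made-up reward values (the scores here are illustrative, not from any real model):

```python
import math

def preference_probability(reward_a: float, reward_b: float) -> float:
    """Bradley-Terry model: probability that response A is preferred
    over response B, given scalar reward-model scores for each.
    P(A > B) = sigmoid(reward_a - reward_b)."""
    return 1.0 / (1.0 + math.exp(reward_b - reward_a))

# Hypothetical reward scores for two candidate answers:
p = preference_probability(2.0, 0.5)
print(p)  # ~0.82: answer A is preferred about 82% of the time
```

It doesn't answer the philosophical "whose goals" question, but it is a measurable quantity that alignment training (e.g. RLHF) actually optimizes.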
So you mean "alignment with human expectations". Not what I meant at all. Good thing that word doesn't even mean anything specific these days.