Hasn't it just lost its context and somewhat "forgotten" what the intentions of the prompt were?
I see a lot of comments that aren't up to date with what's being discovered in research, claiming that because "an LLM doesn't know the difference between true and false," it can't be described as 'lying.'
Here's a paper from October 2023 showing that LLMs can and do develop internal representations of whether a statement is true or false: The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets
It's just the latest in a series of studies this past year showing that LLMs can and do develop abstracted world models with linear representations. For those curious and looking for a more digestible writeup, see Do Large Language Models learn world models or just surface statistics? from the researchers behind one of the first papers finding this.
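For anyone wondering what a "linear representation of truth" even means in practice, here's a minimal sketch of the probing idea (not the paper's actual code; the model choice of gpt2 and the toy statements are my own assumptions). You grab a hidden activation for each statement and fit a plain linear classifier on it. If truth really is encoded along a direction in activation space, even this crude probe separates true from false:

```python
# Sketch of a linear truth probe: extract the model's hidden state for true
# and false statements, then fit a linear classifier on those activations.
# gpt2 and the statements below are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

statements = [
    ("Paris is the capital of France.", 1),
    ("Two plus two equals four.", 1),
    ("The sun rises in the east.", 1),
    ("Madrid is the capital of France.", 0),
    ("Two plus two equals five.", 0),
    ("The sun rises in the west.", 0),
]

def last_token_activation(text: str) -> torch.Tensor:
    """Hidden state of the final token at the last layer."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[-1][0, -1]  # shape: (hidden_dim,)

X = torch.stack([last_token_activation(s) for s, _ in statements]).numpy()
y = [label for _, label in statements]

# A linear probe: one hyperplane in activation space separating true from false.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", probe.score(X, y))
```

With six toy sentences this will trivially fit; the interesting result in the paper is that probes like this generalize across datasets, which is what suggests a genuine internal true/false representation rather than surface memorization.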
Huh, I guess it is human.
Wow, maybe these things are more human than I thought.
It's not doing anything other than predicting the next word. It reflects human data.
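If it helps, here's what "predicting the next word" literally looks like, as a toy sketch (gpt2 is my choice, purely illustrative): the model emits a probability distribution over its whole vocabulary for the next token, and everything else is built on top of that:

```python
# Toy illustration of next-token prediction with a small causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, seq_len, vocab_size)

# Probability distribution over the token that comes after the prompt.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode(idx.item()):>10s}  {p.item():.3f}")
```

The open question the papers above poke at is what the model has to build internally in order to make those predictions well.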
It's just like me, fr fr
It's learning to be a typical high school student.