this post was submitted on 17 Aug 2023
343 points (98.0% liked)
Technology
I used an analogy somewhere else of giving a dog a math test and then criticizing the dog for not being intelligent when it just barks in response.
Large language models are trained on words and their relationships. They understand what they are trained on, and they understand logic in the form of words and their relationships. But the beautiful thing is that words and their relationships can express most human knowledge, so in learning to predict those things these LLMs have also picked up most human knowledge and can make rational conclusions from it.
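To give a feel for what "learning words and their relationships to predict the next one" means, here's a deliberately tiny toy sketch: a bigram counter, not a real LLM, just an illustration of the prediction objective (the corpus and function names are made up for the example).

```python
from collections import defaultdict, Counter

# Toy illustration only, NOT how a real LLM works internally:
# count which word follows which in some training text, then
# "predict" the most common continuation.
corpus = "the dog barks the dog runs the cat sleeps".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the most frequent word seen after `word` in training.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "dog" follows "the" twice, "cat" only once
```

A real model replaces the counting with a neural network over long contexts, but the training signal is the same kind of thing: predict what comes next.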
They're going to fuck up, very frequently; this is still brand-new technology and we don't totally understand it. But to suggest that these things don't have logic or reason behind what they do, I think that's just crazy.
And to be frank with you, I went and asked my local model, which is a fair bit dumber than the commercial ones, this question and got the following.
Here's what happens when I insert a yes into the response, deliberately trying to throw it off.
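For anyone curious, "inserting a yes into the response" is basically response prefilling: you start the assistant's turn with a word of your choosing and let the model continue from it. A minimal sketch of building such a prompt (the template below is illustrative, not any specific model's real chat format):

```python
# Hypothetical prompt template for "prefilling" a model's answer.
# Real chat models each have their own turn format; this only shows
# the idea of seeding the response with "Yes" to try to throw it off.
def build_prompt(question, prefill=""):
    return f"User: {question}\nAssistant: {prefill}"

print(build_prompt("Is the square root of 2 rational?", prefill="Yes"))
```

The model then continues generating after the planted "Yes", and the interesting part is whether it doubles down on the seeded answer or corrects itself.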