this post was submitted on 13 Oct 2024
TechTakes
When you ask an LLM a reasoning question, you're not expecting it to think for you; you're expecting that it has crawled multiple people asking semantically the same question and getting semantically the same answers, which are now encoded in its vectors.
That's why you can ask it: because it encodes semantics.
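To make the claim concrete, here's a minimal sketch of what "encodes semantics" is usually taken to mean: paraphrased questions land near each other in embedding space. It uses the sentence-transformers library; the model name and the example sentences are purely illustrative, not anything from this thread.

```python
# Minimal sketch: paraphrases cluster together in embedding space.
# "all-MiniLM-L6-v2" is a common public checkpoint, chosen arbitrarily.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "How do I reverse a linked list?",          # question, phrasing A
    "What's the way to invert a linked list?",  # same question, phrasing B
    "What should I cook for dinner tonight?",   # unrelated question
]
emb = model.encode(sentences)  # one vector per sentence

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb[0], emb[1]))  # high: the two phrasings sit close together
print(cosine(emb[0], emb[2]))  # lower: the unrelated question sits farther away
```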
Paraphrasing Neil Gaiman: LLMs don't give you information; they give you information-shaped sentences.
They don't encode semantics. They encode the statistical likelihood that each token will follow a given sequence of tokens.
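A minimal sketch of what that means mechanically: given a sequence of tokens, the model produces a probability distribution over which token comes next. This uses the transformers library with GPT-2, a small public model picked purely for illustration; the prompt is my own example.

```python
# Minimal sketch: a causal LM is a next-token probability machine.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the *next* token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx)):>10s}  {p.item():.3f}")
# " Paris" tends to come out on top, not because the model knows geography,
# but because that token overwhelmingly follows this sequence in its training data.
```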
It's worth pointing out that they do happen to reconstruct information remarkably well considering it's just likelihood. They're pretty useful tools like any other; it's funny ofc to watch Silicon Valley stumble all over itself chasing the next smartphone.
The only remarkable thing is how fucking easy it is to convince the median consumer that vaguely-correct-shaped sentences are correct.