this post was submitted on 13 Oct 2024
192 points (100.0% liked)

TechTakes

1335 readers
176 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago
(page 2) 50 comments
[–] [email protected] 6 points 2 days ago* (last edited 2 days ago) (7 children)

"sigh"

(Preface: I work in AI)

This isn't news. We've known this for many, many years. It's one of the reasons why many companies didn't bother using LLMs in the first place, coupled with the sheer number of hallucinations you'll get, which will often utterly destroy a company's reputation (lol Google).

With that said, for commercial services built on LLMs, the claim doesn't hold in quite the same way. The models themselves won't reason, but many systems wire in separate expert agents or API endpoints that the model is told to use to disambiguate or better understand what is being asked, what context is needed, etc.

It's kinda funny, because many AI bros rave about how LLMs are getting super powerful, when in reality the real improvements we're seeing are in smaller models that teach an LLM about things like personas, where to seek expert opinion, what a user "might" mean if they misspell something or ask for something out of context, etc. The LLMs themselves are only getting slightly better; the machinery in front of them is propping them up to make them look better.

IMO, LLMs are what they are: a good way to spit information out fast. They're an orchestration mechanism at best. When you think about them this way, every improvement we see tends to make a lot of sense. The article is kinda true, but not in the way they want it to be.
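To make the "orchestration mechanism" point concrete, here's a minimal sketch of the pattern the comment describes: a routing layer cleans up the input and fetches context from expert helpers before the model ever sees the prompt. All names and helpers here are hypothetical stand-ins, not any vendor's actual API.

```python
def correct_spelling(query: str) -> str:
    # Stand-in for a small model that guesses what a user "might"
    # mean if they misspell something.
    fixes = {"wether": "weather", "recieve": "receive"}
    return " ".join(fixes.get(word, word) for word in query.split())

def fetch_context(query: str) -> str:
    # Stand-in for an expert agent / API endpoint the LLM is told
    # to call for extra context.
    return f"[retrieved docs relevant to: {query}]"

def orchestrate(query: str) -> str:
    # The LLM as orchestrator: smaller components do the real work
    # of cleaning input and gathering context; the model just gets
    # a structured prompt at the end.
    cleaned = correct_spelling(query)
    context = fetch_context(cleaned)
    return f"Context: {context}\nUser: {cleaned}"

print(orchestrate("whats the wether today"))
```

The point of the sketch: the visible "improvement" comes from the pre-processing stages, not from the model at the end of the pipeline.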

[–] [email protected] 7 points 2 days ago* (last edited 2 days ago) (2 children)

what a user “might” mean if they misspell something

this but with extra wasabi

[–] [email protected] 58 points 4 days ago (47 children)

Did someone not know this like, pretty much from day one?

Not the idiot executives who blew all their budget on AI and made up for it with mass layoffs - the people actually interested in it. Wasn't it clear that there was no "reasoning" going on?

[–] [email protected] 37 points 4 days ago* (last edited 4 days ago) (7 children)

Well, two responses I have seen to the claim that LLMs are not reasoning are:

  1. we are all just stochastic parrots lmao
  2. maybe intelligence is an emergent ability that will show up eventually (disregard the inability to falsify this and the categorical nonsense that is our definition of "emergent").

So I think this research is useful as a response to these, although I think "fuck off, promptfondler" is pretty good too.

[–] [email protected] 21 points 4 days ago (1 children)

“Language is a virus from outer space”

[–] [email protected] 8 points 3 days ago

I thought it came from Babylonian writing that recoded the brains and planted the languages.

[–] [email protected] 27 points 4 days ago (1 children)

there’s a lot of people (especially here, but not only here) who have had the insight to see this being the case, but there’s also been a lot of boosters and promptfondlers (ie. people with a vested interest) putting out claims that their precious word vomit machines are actually thinking

so while this may confirm a known doubt, rigorous scientific testing (and disproving) of the claims is nonetheless a good thing

[–] [email protected] 11 points 3 days ago

No, they did not, I'm afraid. Hell, I didn't even know that even ELIZA caused people to think it could reason (and that this worried its creator) until a few years ago.

[–] [email protected] 28 points 4 days ago (5 children)

We suspect this research is likely part of why Apple pulled out of the recent OpenAI funding round at the last minute. 

Perhaps the AI bros “think” by guessing the next word and hoping it’s convincing. They certainly argue like it.

🔥
