this post was submitted on 06 Jan 2025
750 points (100.0% liked)

TechTakes

1533 readers
247 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
[–] [email protected] -3 points 2 days ago (1 children)

What the LLMs do, at the end of the day, is statistics. If you want a more precise model, you need to make it larger. Basically, exponentially scaling marginal costs meet exponentially decaying marginal utility.
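The cost-versus-utility claim can be illustrated with a toy curve. This is not the commenter's own math: it assumes the power-law loss fits reported in published scaling-law work (the constants below are the Kaplan et al. fits, used purely for illustration), under which each 10x increase in parameter count costs roughly 10x more compute but buys a smaller absolute loss reduction:

```python
def loss(n_params, n_c=8.8e13, alpha=0.076):
    # Power-law loss curve L(N) = (N_c / N)^alpha; the constants are
    # the published Kaplan et al. (2020) fits, used only as an example.
    return (n_c / n_params) ** alpha

sizes = [1e8, 1e9, 1e10, 1e11]
for small, big in zip(sizes, sizes[1:]):
    gain = loss(small) - loss(big)
    # Each 10x jump in size yields a smaller loss improvement than the last.
    print(f"{small:.0e} -> {big:.0e} params: loss drops by {gain:.3f}")
```

Under this assumed curve the successive improvements shrink monotonically, which is the "decaying marginal utility" half of the argument; the "scaling marginal costs" half is just that training compute grows at least linearly with parameter count.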

[–] [email protected] 1 points 2 days ago (1 children)

Some LLM bros must have seen this comment and become offended.

[–] [email protected] 7 points 2 days ago (2 children)

guess again

what the locals are probably taking issue with is:

If you want a more precise model, you need to make it larger.

this shit doesn’t get more precise for its advertised purpose when you scale it up. LLMs are garbage technology that plateaued a long time ago and are extremely ill-suited for anything but generating spam; any claims of increased precision (like those that openai makes every time they need more money or attention) are marketing that falls apart the moment you dig deeper — unless you’re the kind of promptfondler who needs LLMs to be good and workable just because it’s technology ~~and because you’re all-in on the grift~~

[–] [email protected] 6 points 2 days ago

look bro just 10 more ~~reps~~ gpt3s bro it'll get you there bro I swear bro

[–] [email protected] -5 points 2 days ago (1 children)

Well, then let me clear it up. The statistics become more precise. As in, for a given prefix A and token x, the difference between the model's estimated probability of x following A, P̂(x|A), and the actual probability P(x|A) becomes smaller. Obviously, if you are dealing with a novel problem, the LLM can't produce a meaningful answer. And if you're working on a halfway ambitious project, you're virtually guaranteed to encounter a novel problem.
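A minimal sketch of what "the statistics become more precise" means, using a toy next-token setup instead of a real LLM (and substituting sample size for model size as the knob, which is an analogy, not the commenter's claim): estimate P(x|A) by counting, and watch the estimate's error against a known true distribution shrink.

```python
# Toy illustration: for prefix "a", the true next-token distribution
# P(x|a) is known, and a frequency count estimates it from samples.
import random
from collections import Counter

random.seed(0)
true_p = {"b": 0.7, "c": 0.3}  # assumed true P(x | A) for prefix "a"

def estimate(n_samples):
    # Draw n next-tokens after the prefix and estimate P(x|a) by frequency.
    draws = random.choices(list(true_p), weights=list(true_p.values()),
                           k=n_samples)
    counts = Counter(draws)
    return {x: counts[x] / n_samples for x in true_p}

for n in (10, 1000, 100000):
    est = estimate(n)
    err = max(abs(est[x] - true_p[x]) for x in true_p)
    print(f"n={n:>6}: estimated P(b|a)={est['b']:.3f}, max error={err:.3f}")
```

The catch, as the reply below notes, is that a sharper estimate of P(x|A) only helps where the training distribution actually covers the problem; for a genuinely novel prefix there is no distribution to converge to.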

[–] [email protected] 7 points 2 days ago

Obviously, if you are dealing with a novel problem, then the LLM can’t produce a meaningful answer.

it doesn’t produce any meaningful answers for non-novel problems either