this post was submitted on 06 Jan 2025
750 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] -4 points 3 days ago* (last edited 2 days ago) (2 children)

LLM inference can be batched, reducing the cost per request. If you have too few customers, you can't fill the optimal batch size.

That said, the optimal batch size on today's hardware is not big (<100). I would be very very surprised if they couldn't fill it for any few-seconds window.
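A toy sketch of the arithmetic, for anyone who wants to see why under-filled batches raise cost per request (the dollar figure and batch size below are made-up placeholders, not anyone's real numbers):

```python
# Toy model of batched LLM inference: a single batched forward pass
# costs roughly the same whether its slots are full or not, so the
# cost per request depends on how many requests you can pack in.

COST_PER_BATCH = 0.01   # hypothetical $ per batched pass -- made-up placeholder
OPTIMAL_BATCH = 64      # hypothetical optimal batch size -- made-up placeholder


def cost_per_request(requests_in_window: int) -> float:
    """Cost per request when this many requests arrive within one batching window."""
    filled = min(requests_in_window, OPTIMAL_BATCH)
    return COST_PER_BATCH / filled


for n in (1, 8, 64):
    print(f"{n:>2} requests/window -> ${cost_per_request(n):.5f} per request")
# 1 request/window   -> $0.01000 per request
# 8 requests/window  -> $0.00125 per request
# 64 requests/window -> $0.00016 per request
```

Past the optimal batch size, extra requests just start another batch, which is why this effect only matters at very small scale.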

[–] [email protected] 4 points 2 days ago (1 children)

i would swear that in an earlier version of this message the optimal batch size was estimated to be as large as twenty.

[–] [email protected] 2 points 2 days ago

yep, original is still visible on mastodon

[–] [email protected] 8 points 2 days ago (1 children)

this sounds like an attempt to demand others disprove the assertion that they're losing money, in a discussion of an article about Sam saying they're losing money

[–] [email protected] -4 points 2 days ago (1 children)

What? I'm not doubting what he said. Just surprised. Look at this. I really hope Sam IPOs his company so I can short it.

[–] [email protected] 5 points 2 days ago (1 children)

oh, so you’re that kind of fygm asshole

good to know

[–] [email protected] -4 points 2 days ago* (last edited 2 days ago) (1 children)

Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.

@sc_[email protected] asked how request frequency might impact cost per request. Batched inference is one reason (ask anyone in the self-hosted LLM community). I noted that this reason only applies at very small scale, probably much smaller than what ~~Open~~AI is operating at.

@[email protected] why did you say I am demanding someone disprove the assertion? Are you misunderstanding "I would be very very surprised if they couldn't fill [the optimal batch size] for any few-seconds window" to mean "I would be very very surprised if they are not profitable"?

The tweet I linked shows that good LLMs can be much cheaper. I am saying that ~~Open~~AI is very inefficient and thus economically "cooked", as the post title would have it. How does this make me FYGM? @[email protected]

[–] [email protected] 9 points 2 days ago

Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.

my god! let me fix that