this post was submitted on 17 Jul 2024
401 points (99.5% liked)

PC Gaming


For PC gaming news and discussion. PCGamingWiki

Rules:

  1. Be Respectful.
  2. No Spam or Porn.
  3. No Advertising.
  4. No Memes.
  5. No Tech Support.
  6. No questions about buying/building computers.
  7. No game suggestions, friend requests, surveys, or begging.
  8. No Let's Plays, streams, highlight reels/montages, random videos or shorts.
  9. No off-topic posts/comments.
  10. Use the original source, no clickbait titles, no duplicates. (Submissions should be from the original source if possible, unless from paywalled or non-English sources. If the title is clickbait or lacks context you may lightly edit the title.)

founded 1 year ago
[–] [email protected] 13 points 3 months ago* (last edited 3 months ago) (1 children)

And when traditional AI programs can be run on much lower-end hardware at the same speed and quality, those chips will have no use. (Spoiler alert: it's happening right now.)
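To put that point in concrete terms: a quantized model can already be run entirely on an ordinary CPU with an off-the-shelf library such as llama-cpp-python, no NPU involved. The snippet below is only a minimal sketch, not a benchmark; the model filename is a placeholder for any locally downloaded quantized GGUF model.

```python
# Minimal sketch: local inference on a plain CPU, no dedicated AI silicon.
# Assumes llama-cpp-python is installed and a quantized GGUF model file exists
# at the (placeholder) path below.
from llama_cpp import Llama

llm = Llama(
    model_path="./model-q4_k_m.gguf",  # placeholder: any quantized GGUF model
    n_ctx=2048,                        # context window size
    n_threads=4,                       # ordinary CPU threads
)

out = llm("Summarize why local inference matters:", max_tokens=64)
print(out["choices"][0]["text"])
```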

Corporations, for some reason, can't fathom why people wouldn't want to pay hundreds of dollars more just for a chip that can run AI models they won't need most of the time.

If I want to use an AI model, I will, but if you keep developing shitty features that nobody wants just because "AI = new & innovative," then I have no incentive to use them. LLMs are useful to me sometimes, but an LLM that tries to summarize the activity on my computer isn't that useful to me, so I'm not going to pay extra for a chip that I won't even use for that purpose.

[–] [email protected] 5 points 3 months ago (2 children)
[–] [email protected] 4 points 3 months ago (1 children)
[–] [email protected] 1 points 3 months ago

That still needs an FPGA. While they certainly seem to be able to use smaller ones, adding an FPGA chip will still add cost.

[–] [email protected] 1 points 3 months ago

Whoops, no clue how that happened, fixed!