this post was submitted on 04 Aug 2023
PCGaming
They're not that different, really. CUDA cores are what most AI training runs on, and they're the main processors in both Nvidia's consumer desktop cards and its enterprise machine-learning cards. As "AI" demand rises, more of the supply of CUDA-capable dies and VRAM chips will be diverted to enterprise products, which fetch higher prices through corporate deals. That leaves fewer chips for the consumer GPU supply, which drives prices up for normal consumers. Nvidia has been banking on this for a long time; that's why they don't care about overpricing the consumer market and keep pushing people toward cloud subscriptions like GeForce Now, where you don't own the hardware at all and basically just rent the processing power to play games.
Also, just to be pedantic: the 3090 and 4090 have 24 GB of VRAM, not 32 GB. And unlike gaming these days, you can distribute a training workload across multiple GPUs in one system, or across a network of machines.
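To illustrate the multi-GPU point: the usual approach is data parallelism, where each GPU gets a shard of the batch and the results are combined afterward. Here's a minimal sketch of just the sharding step, with the GPUs simulated as plain worker slots (the function name and counts are illustrative, not from any specific library).

```python
# Minimal sketch of data parallelism: split one batch of work
# across several GPUs (simulated here as plain lists, one per device).
# All names are illustrative, not taken from any real framework.

def shard_batch(batch, num_gpus):
    """Round-robin a batch into roughly equal shards, one per GPU."""
    shards = [[] for _ in range(num_gpus)]
    for i, sample in enumerate(batch):
        shards[i % num_gpus].append(sample)
    return shards

batch = list(range(10))          # ten training samples
shards = shard_batch(batch, 4)   # pretend we have four GPUs
print(shards)  # -> [[0, 4, 8], [1, 5, 9], [2, 6], [3, 7]]
```

In real setups a framework like PyTorch handles this (plus gradient synchronization between devices), but the core idea is the same: the batch is divided, each device works on its slice, and the pieces are merged.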