I'm starting to have a sneaking suspicion that putting 24 GB of VRAM on a card isn't happening because they don't want people running AI models locally. The moment you can expect the modern gamer's computer to have that kind of local computing power is the moment they stop getting to slurp up all of your data.
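As a back-of-the-envelope illustration of what "that kind of local computing power" means (my sketch, not from the comment above: the `estimate_vram_gb` helper and the ~20% overhead figure are illustrative assumptions), model weights alone take roughly parameters × bytes per parameter:

```python
# Rough VRAM estimate for running an LLM locally.
# Rule of thumb only: weights = params * bytes_per_param, plus some
# headroom for the KV cache and activations (assumed ~20% here).

def estimate_vram_gb(params_billions: float, bits_per_param: int,
                     overhead: float = 0.20) -> float:
    weights_gb = params_billions * bits_per_param / 8  # 1e9 params * bytes ~= GB
    return weights_gb * (1 + overhead)

for params in (7, 13, 24, 70):
    for bits in (16, 8, 4):
        print(f"{params:>3}B @ {bits:>2}-bit: ~{estimate_vram_gb(params, bits):5.1f} GB")
```

By that rough math, a 24 GB card comfortably fits a 7B model at 16-bit or a 24B model at 4-bit, which is exactly the territory of usable local assistants.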
It's because the GTX 10xx generation had plenty of VRAM (yes, the 1070 had 8 GB of VRAM in 2016) and was such a good generation that it lasted for many years. Clearly they want you to replace GPUs more often, and that's why they limit the VRAM.
Why wouldn't they want that when they're the ones selling you the hardware?
Do you think they'd make more money selling every individual an AI GPU, or selling one GPU to OpenAI that serves thousands of users?
Honestly, I think it's because of DLSS. If you could get a $300 card that handles 4K DLSS Performance well, why would you need to buy an xx70 (Ti) or xx80 card?
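For context on what "4K DLSS Performance" implies: the card isn't rendering 4K natively, it's rendering at a fraction of the output resolution and upscaling. A quick sketch using the commonly cited per-axis scale factors for each mode (approximations on my part, not official specs):

```python
# Internal render resolutions for DLSS modes at a 4K output.
# Scale factors are the commonly cited per-axis ratios; treat them
# as approximations.

MODES = {
    "Quality": 2 / 3,           # ~66.7% per axis
    "Balanced": 0.58,           # ~58% per axis
    "Performance": 1 / 2,       # 50% per axis -> 1080p at 4K output
    "Ultra Performance": 1 / 3, # ~33.3% per axis
}

out_w, out_h = 3840, 2160
for mode, scale in MODES.items():
    w, h = round(out_w * scale), round(out_h * scale)
    print(f"{mode:>17}: {w}x{h} ({scale * scale:.0%} of the output pixels)")
```

So "4K DLSS Performance" means rendering roughly 1920x1080, a quarter of the pixels, which is why a mid-range card can suddenly pass for a 4K card.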
Lossless Scaling (on Steam) has also shown huge promise in dual-GPU setups. I've seen some impressive results from people piping the output of their Nvidia card into an Intel GPU (integrated or discrete) and dedicating that second GPU to the upscaling.
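If you want to try a setup like that, the first step is just seeing which adapters Windows reports, so you know what to assign as the render GPU versus the upscaling GPU in Lossless Scaling's settings. A minimal sketch, assuming Windows with PowerShell available (the script is mine, not part of Lossless Scaling):

```python
# List the video adapters Windows reports, e.g. an Nvidia dGPU plus an
# Intel iGPU. Windows-only; adapter names vary by system.
import subprocess

result = subprocess.run(
    ["powershell", "-Command",
     "Get-CimInstance Win32_VideoController | Select-Object -ExpandProperty Name"],
    capture_output=True, text=True, check=True)
for name in result.stdout.splitlines():
    if name.strip():
        print(name.strip())
```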