this post was submitted on 20 May 2025
46 points (96.0% liked)

PC Gaming

11037 readers
689 users here now

For PC gaming news and discussion. PCGamingWiki

Rules:

  1. Be Respectful.
  2. No Spam or Porn.
  3. No Advertising.
  4. No Memes.
  5. No Tech Support.
  6. No questions about buying/building computers.
  7. No game suggestions, friend requests, surveys, or begging.
  8. No Let's Plays, streams, highlight reels/montages, random videos or shorts.
  9. No off-topic posts/comments, within reason.
  10. Use the original source, no clickbait titles, no duplicates. (Submissions should be from the original source if possible, unless from paywalled or non-english sources. If the title is clickbait or lacks context you may lightly edit the title.)

founded 2 years ago
[–] [email protected] 20 points 2 days ago (3 children)

I'm starting to have a sneaking suspicion that putting 24 GB of VRAM on a card isn't happening because they don't want people running AI models locally. The moment you can expect the modern gamer's computer to have that kind of local computing power is the moment they stop getting to slurp up all of your data.

[–] [email protected] 8 points 2 days ago

It's because the GTX 10XX generation had plenty of VRAM (yes, the 1070 had 8 GB of VRAM in 2016) and was a super good generation that lasted many years. Clearly they want you to replace GPUs more often, and that's why they limit the VRAM.

[–] [email protected] 2 points 2 days ago

Why wouldn't they want that when they're the ones selling you the hardware?

Do you think they'd make more money selling every individual an AI GPU or selling 1 GPU to OpenAI to serve thousands of users?

[–] [email protected] 3 points 2 days ago (1 children)

Honestly, I think it is because of DLSS. If you can get a $300 card that can run 4K DLSS Performance mode well, why would you need to buy an xx70 (Ti) or xx80 card?

[–] [email protected] 8 points 2 days ago

Lossless Scaling (on Steam) has also shown HUGE promise from a two-GPU standpoint. I've seen some impressive results from people piping their Nvidia card's output into an Intel GPU (integrated or discrete) and using that second GPU as a dedicated upscaler.