this post was submitted on 31 Jan 2024
501 points (97.0% liked)


AMD’s new CPU hits 132fps in Fortnite without a graphics card::Also get 49fps in BG3, 119fps in CS2, and 41fps in Cyberpunk 2077 using the new AMD Ryzen 8700G, all without the need for an extra CPU cooler.

[–] [email protected] 5 points 7 months ago (2 children)

I think the opposite is true. Discrete graphics cards are on the way out; SoCs are the future. There are just too many disadvantages to having a discrete GPU and CPU, each with its own RAM. We'll see SoCs catch up and eventually overtake PCs built from discrete components, especially with the growth of AI applications.

[–] [email protected] 2 points 7 months ago (1 children)

People will be building dedicated AI PCs.

[–] [email protected] 2 points 7 months ago

They may build dedicated PCs for training, but those models will be used everywhere. All computers will need to have hardware capable of fast inference on large models.

[–] [email protected] 1 points 7 months ago (1 children)

I agree, especially with the prices of graphics cards being what they are. The 8700G can also fit in a significantly smaller case.

[–] [email protected] 3 points 7 months ago

Unified memory is also huge for the performance of AI tasks, especially with more specialized accelerators being integrated into SoCs. The CPU, GPU, neural engine, and video encoders/decoders can all access the same RAM with zero overhead. You can decode a video, have the GPU preprocess the image, then feed it to the neural engine for whatever kind of ML task, without being limited by the low bandwidth of the PCIe bus or the latency of copying data back and forth.
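As a rough illustration of why those copies matter, here's a back-of-the-envelope sketch. The figures are my own assumptions, not from the comment: one uncompressed 4K RGBA frame, the ~32 GB/s theoretical bandwidth of a PCIe 4.0 x16 link (real-world throughput is lower), and a guess of three bus crossings for a decode → GPU preprocess → accelerator pipeline built from discrete parts:

```python
# Assumed figures: 4K RGBA frame, theoretical PCIe 4.0 x16 bandwidth.
FRAME_BYTES = 3840 * 2160 * 4   # one uncompressed 4K RGBA frame (~31.6 MiB)
PCIE4_X16_BPS = 32e9            # theoretical PCIe 4.0 x16 bandwidth

def copy_ms(n_copies: int) -> float:
    """Milliseconds spent just moving one frame across the bus n times."""
    return n_copies * FRAME_BYTES / PCIE4_X16_BPS * 1e3

# Discrete pipeline (assumed): decoded frame -> GPU for preprocessing ->
# back to host -> out to an accelerator, i.e. roughly three bus crossings.
discrete = copy_ms(3)
# Unified memory: every block reads the same RAM, zero bus crossings.
unified = copy_ms(0)

budget_60fps = 1000 / 60        # ~16.7 ms per-frame budget at 60 fps
print(f"copies alone: {discrete:.2f} ms of a {budget_60fps:.1f} ms budget")
```

Even at theoretical bandwidth, the copies alone eat roughly 3 ms (close to a fifth) of a 60 fps frame budget, before counting transfer latency or synchronization.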

My predictions: Nvidia is going to focus more and more on the high-end AI market with dedicated AI hardware while losing interest in the consumer market. AMD already has APUs; they will take the next logical step and move towards full SoCs. Apple is already in that market and seems to be getting serious about their GPUs, so I expect big improvements there in the coming years. No clue what Intel is up to though.