Am I blind? I don't see any information in there to draw any conclusions about power efficiency. The little information that I do see actually seems to imply the apple silicon chip would be more efficient. Help me out please?
24 threads at 2.00 GHz vs. 8 threads at 0.66 GHz, with roughly a 40% difference in TDP. The AMD chip may draw more power, but it has much higher performance. Simplifying things, it can perform about 9x the operations of the Apple silicon (24 × 2.00 = 48 vs. 8 × 0.66 ≈ 5.3) for only about 1.4x the power draw.
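As a rough illustration of that back-of-the-envelope math, here is a minimal sketch. The thread counts and clocks are the ones quoted above; treating the "40% difference in TDP" as a flat 1.4x power ratio is an assumption, and the whole thing deliberately ignores IPC, instruction sets, and memory.

```python
# Naive throughput proxy: threads x clock. Ignores IPC, ISA, memory, etc.
amd_threads, amd_clock_ghz = 24, 2.00
apple_threads, apple_clock_ghz = 8, 0.66
tdp_ratio = 1.4  # assumed AMD power draw relative to the Apple chip

amd_score = amd_threads * amd_clock_ghz        # 48.0
apple_score = apple_threads * apple_clock_ghz  # ~5.3

throughput_ratio = amd_score / apple_score          # ~9.1x
perf_per_watt_ratio = throughput_ratio / tdp_ratio  # ~6.5x

print(f"naive throughput ratio: {throughput_ratio:.1f}x")
print(f"naive perf-per-watt ratio: {perf_per_watt_ratio:.1f}x")
```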
That... is a very naive and inaccurate approach. You can't use frequency and core counts to guesstimate performance even when the chips in question are closely related, and they're utterly useless when it's two very different chips that don't even use the same instruction set. Anyway, there are benchmarks on that page, and they show that the AMD chip is clearly not performing 9x the operations. It is obviously more powerful, though not nearly by that much.
I desperately want something to start competing with Apple silicon, believe me, but knowing just how good the Apple silicon chips are from first-hand experience, forgive me if I am a little bit sceptical about a little writeup that only deals in benchmark results and official specs. I want to read about how it performs in real-life scenarios, because I also know from experience that benchmark results and official specs alone don't always give an accurate picture of how the thing performs in real life.
That's exactly how you guesstimate CPU performance. It obviously won't be accurate to real-life use cases, but you don't necessarily need benchmarks to get a ballpark comparison of raw performance. The standard comparison is FLOPS: floating-point operations per second. Yes, different architectures have different instruction sets, but they're all relatively similar, especially for basic arithmetic. It breaks down with more complex computations, but there are only so many ways to add two numbers together.
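For what it's worth, the usual peak-FLOPS estimate is just cores × clock × floating-point ops per cycle per core. A minimal sketch follows; the per-cycle figures and example numbers are illustrative assumptions, not the actual specs of either chip discussed here.

```python
# Theoretical peak FLOPS estimate: cores x clock x FLOPs per cycle per core.
# Ignores memory bandwidth, thermals, scheduling, and everything else that
# makes real-world performance diverge from the ceiling.

def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Peak GFLOPS for a hypothetical chip."""
    return cores * clock_ghz * flops_per_cycle

# Example: a hypothetical core doing 2 x 256-bit FMA units on FP64:
#   (256 / 64) lanes x 2 ops per FMA x 2 units = 16 FLOPs per cycle
print(peak_gflops(cores=8, clock_ghz=3.0, flops_per_cycle=16))  # 384.0 GFLOPS
```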