|
Cygni posted:Looks like Microcenter has the 12600k for $320, 5600X for $290, the 11600k for $230, and the 10400 for $180. They really are crowding that "new middle" tier of the market. Looks like prices already dipped a bit: 5600X for $280, 11600k for $200, 10400 for $160. But the 12600k is also down $20, so maybe it's a sale?
|
# ? Nov 5, 2021 09:34 |
|
|
|
Cygni posted:Zen 3 also had temp misreportings at launch, why is this becoming a thing. Like if you are Intel or AMD, you KNOW people are gonna be lookin for this. I hope it's properly investigated, buyers might be over-emphasising extreme cooling for the top end SKUs unnecessarily because of the 100c results from reviewers.
|
# ? Nov 5, 2021 09:35 |
|
BurritoJustice posted:I hope it's properly investigated, buyers might be over-emphasising extreme cooling for the top end SKUs unnecessarily because of the 100c results from reviewers. It seems like it could be also related to unrestricted power limits just sucking up crazy power for no real benefit
|
# ? Nov 5, 2021 09:45 |
|
Paul MaudDib posted:
Benches with JEDEC memory like Anandtech does are stacked against Intel because the beefy L3 cache on Zen2/3 absorbs the impact of trash memory latency and benches with tight timings are also stacked against Intel because ???/"Paul said so".
|
# ? Nov 5, 2021 13:10 |
|
1440p benchmarks are generally rare, but this Alder Lake review makes the Intel 12900K or 12600K look worth buying: https://www.eurogamer.net/articles/digitalfoundry-2021-intel-core-i9-12900k-i5-12600k-review?page=4 CP2077 with an RTX 3090 at 1440p, with raytracing on. The 12th gen Intel chips have a significant fps edge (like 35%+!) compared to any AMD chips (5950X and 5600X are basically the same). I'm kind of surprised, really -- you don't see this in almost any other 1440p situation. I guess raytracing is the real use case here? I still expected that to be GPU-bound, but not according to these benchmarks. Anyone spot a problem with that test, or do we think this is legit and that Alder Lake will actually outperform Zen 3 in realistic 1440p raytraced gaming situations?
|
# ? Nov 5, 2021 14:10 |
|
Arzachel posted:Benches with JEDEC memory like Anandtech does are stacked against Intel because the beefy L3 cache on Zen2/3 absorbs the impact of trash memory latency and benches with tight timings are also stacked against Intel because ???/"Paul said so". Tight timings are fair because they show the CPU's potential and they're what the dedicated will do. JEDEC or XMP are also fair, because that's what most people are going to do. As long as you disclose which you're doing and don't arbitrarily mix them, it's all good.
|
# ? Nov 5, 2021 14:10 |
|
Intel's memory controller is still massively superior. People insist on benching with unrealistically slow memory and it creates an unrealistic result. The memory differences are part of the difference; you don't buy the same memory for an AMD and an Intel system. It would be stupid. Paul is right that anyone who advocated for any of the AMD CPUs for gaming over the 8700k was a fool. It was obvious at the time that the 8700k was the better value proposition, but there were countless goons fanboying over AMD's fat core counts even though they've never done anything with the CPUs but game. There were so loving many people talking about how the 1700 was a great buy for gaming even though single-thread-wise it was basically no uplift from a 2500k.
|
# ? Nov 5, 2021 14:15 |
If you're benchmarking things right, you only test one thing at a time, and keep every other variable the same.
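That discipline is easy to encode: sweep one knob at a time against a fixed baseline config. A minimal sketch (the config keys and values here are made up for illustration, not from any real benchmark harness):

```python
import copy

# Sketch: vary exactly one benchmark knob at a time while holding
# the rest of the configuration fixed. All config names here are
# illustrative.
BASELINE = {"memory": "DDR4-3200 JEDEC", "power_limit_w": 125, "resolution": "1080p"}

def one_at_a_time(baseline, sweeps):
    """Yield (knob, config) pairs where each config differs from the
    baseline in exactly one key."""
    for key, values in sweeps.items():
        for value in values:
            cfg = copy.deepcopy(baseline)
            cfg[key] = value
            yield key, cfg

runs = list(one_at_a_time(BASELINE, {"memory": ["DDR4-3600 CL16", "DDR5-6000 CL36"]}))
print(len(runs))  # -> 2; each run changes only the memory kit
```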
|
|
# ? Nov 5, 2021 14:24 |
|
Khorne posted:This is confusing because JEDEC memory harms AMD far more than Intel.
|
# ? Nov 5, 2021 14:31 |
|
RDR2 is notoriously AMD-favored at low settings. If the 5900X were on that graph it'd have 190 fps. At high settings Intel leads by 1-2 fps because it becomes GPU-limited and latency comes into play. It's not clear to me which review this came from, but Anandtech has used mixed memory timings for charts like this before. If they did do that then AMD & the i7 11xxx would be running 3200 at JEDEC, which doesn't harm AMD that much vs some older reviews that used much slower memory. K8.0 posted:Intel's memory controller is still massively superior. People insist on benching with unrealistically slow memory and it creates an unrealistic result. The memory differences are part of the difference, you don't buy the same memory for an AMD and an Intel system. It would be stupid. Paul is right, yes. If I upgraded from my 3770k at that point I'd 100% have gone with an 8700k. The 1700X was identical in perf or slightly slower than my Ivy Bridge CPU in gaming. I opted to wait for Zen 2/Zen 3, and Intel was not that appealing at that point. Alder Lake is good. Khorne fucked around with this message at 16:11 on Nov 5, 2021 |
# ? Nov 5, 2021 15:38 |
|
K8.0 posted:Paul is right that anyone who advocated for any of the AMD CPUs for gaming over the 8700k was a fool. It was obvious at the time that it was a better value proposition, but there were countless goons fanboying over AMD's fat core counts even though they've never done anything with the CPUs but game. There were so loving many people talking about how the 1700 was a great buy for gaming even though single thread wise it was basically no uplift from a 2500k. I think I missed it in the reviews but why is power draw so high with Alder Lake? I am not saying it is bad but with a new process I would have thought it would be closer to AMD offerings.
|
# ? Nov 5, 2021 15:40 |
|
Budzilla posted:Weren't people fawning over the 1600 (non-X) for games? I think it was more that Zen 1/Zen+ was a very cheap on-ramp to getting a 6c/12t CPU - AFAIK people were quite realistic with their expectations that Skylake could push more frames for games, but also that it was quite pricey for what you were getting. Intel was still making hyperthreading a premium add-on until 10th gen.
|
# ? Nov 5, 2021 15:49 |
|
Budzilla posted:I think I missed it in the reviews but why is power draw so high with Alder Lake? I am not saying it is bad but with a new process I would have thought it would be closer to AMD offerings.
|
# ? Nov 5, 2021 15:50 |
|
Budzilla posted:Weren't people fawning over the 1600 (non-X) for games? I remember people saying the 1700 was good for games but better for productivity compared to what was previously offered from Intel at the same price point, and the Ryzen 1800X too. This was also at a time when people were saying the 2500K was good enough for games. Coffee Lake came out 7 months after Ryzen 1 and was largely a paper launch for a couple of months. As for the Alder Lake power draw: because Intel has chosen to make default power limits absurdly high for K chips (technically by making the PL2 duration unlimited), meaning the chips will opportunistically boost as high as possible as long as thermals are inside acceptable parameters. I was hoping to see some benchmarks with something like a 150W PL enforced, but I haven't yet.
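For anyone wanting to try that 150W cap themselves: on Linux, the stock intel-rapl powercap sysfs interface is one way to clamp the package power limit without touching the BIOS. A sketch of the unit conversion involved (the sysfs paths are the standard powercap ones; values are in microwatts/microseconds, and actually writing them needs root plus RAPL-capable hardware - this just prints the commands):

```python
# Sketch: compute the values you'd write to the Linux intel-rapl
# powercap interface to cap package power at 150 W.

def rapl_values(watts: float, window_s: float = 28.0) -> dict:
    """Convert a power limit in watts (and averaging window in
    seconds) to the microwatt/microsecond units the powercap
    sysfs files expect."""
    return {
        "power_limit_uw": int(watts * 1_000_000),
        "time_window_us": int(window_s * 1_000_000),
    }

limit = rapl_values(150)
base = "/sys/class/powercap/intel-rapl:0"  # package 0 on a typical system
print(f"echo {limit['power_limit_uw']} > {base}/constraint_0_power_limit_uw")
print(f"echo {limit['time_window_us']} > {base}/constraint_0_time_window_us")
```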
|
# ? Nov 5, 2021 16:04 |
|
Budzilla posted:I think I missed it in the reviews but why is power draw so high with Alder Lake? I am not saying it is bad but with a new process I would have thought it would be closer to AMD offerings. Because you can't win the FRAMEZ WARS if you don't juice.
|
# ? Nov 5, 2021 16:56 |
|
SourKraut posted:Because you can't win the FRAMEZ WARS if you don't juice. Not frame wars, actually - power consumption in games appears to be quite reasonable, even with the 241W PL. You're only gonna see the bonkers numbers in applications that can heavily load all the cores at once, if these numbers are to be believed. https://www.igorslab.de/en/intel-core-i9-12900kf-core-i7-12700k-and-core-i5-12600k-review-gaming-in-really-fast-and-really-frugal-part-1/9/ So to win the cinebench warz I guess?
|
# ? Nov 5, 2021 17:49 |
|
The 12900K seems like a pretty compelling upgrade from my 8700K but I imagine trying to get a decent mATX Z690 is going to suck for a bit. I want to reuse my existing mATX case and move the 8700K into a Node 804 to replace my existing NAS box. Ah well can wait for DDR5 to shake out a little bit.
|
# ? Nov 5, 2021 18:21 |
|
Has there been any indication when the mobile parts might launch?
|
# ? Nov 5, 2021 18:34 |
|
VorpalFish posted:Has there been any indication when the mobile parts might launch? If Intel's roadmap is still accurate, they should be delivered to the SIs' factories starting next week. The roadmap has week 45 for mass delivery. Probably looking at an early 2022 launch? There have already been lots of leaks of the mobile parts, so there are definitely a bunch of samples floating around. For example: https://www.ashesofthesingularity.com/benchmark#/benchmark-result/56f434ea-e4e0-4232-acb1-e7b793362941
|
# ? Nov 5, 2021 19:13 |
|
Well the Intel iGPU still needs work... https://www.phoronix.com/scan.php?page=article&item=uhd-graphics-770&num=1
|
# ? Nov 5, 2021 19:18 |
Also, where are the standard deviation, median/mean values, and confidence intervals for the data?
|
|
# ? Nov 5, 2021 19:23 |
|
Pablo Bluth posted:Well the Intel iGPU still needs work... That seems about in line with what you'd expect? The desktop iGPUs are all the 32 EU variants. I don't think I've seen anything to suggest it's different to what's in Rocket Lake, so I guess the performance uplift is all down to memory bandwidth. I guess their reasoning is they expect anyone who wants GPU performance in a desktop is just going to go discrete. Top end mobile parts should have 96 EU variants; I was really disappointed to hear they aren't going bigger with the jump to LPDDR5/DDR5. I'd love a 128 EU GPU in a thin-and-light.
|
# ? Nov 5, 2021 19:52 |
|
Dedicating die space to the CPU bits was probably far more important. Plus I expect all their best GPU designers are busy making their discrete stuff fit for launch... That said, irrespective of the expectations and reasons, it's probably one of the few saving graces of the low-end Ryzens right now, if you can't/won't do discrete but want decent graphics.
|
# ? Nov 5, 2021 20:05 |
|
I haven't heard anything about Atoms in a long time, how have the last few atom generations been? I've been running a Haswell NUC at home for ages as a little lightweight server, and I'm wanting to add another node on the cheap. What's going to have a better future in front of it, a used NUC or micro-form factor corporate PC with Broadwell or Skylake i5, or a current Tremont atom like a N5095? On paper they look pretty comparable, the Atom has 2 more physical cores and faster RAM (DDR4-2933), but I'm pretty sure a Broadwell or Skylake -U i5 will still have faster single threaded speed. It's hard to compare because nobody is still running the same benchmarks as they were 6 years ago! https://www.notebookcheck.net/6200U-vs-4250U-vs-Celeron-N5095_6966_4219_13189.247596.0.html https://www.cpubenchmark.net/compare/Intel-i5-4250U-vs-Intel-Celeron-N5095-vs-Intel-i5-6200U/1944vs4472vs2556
|
# ? Nov 5, 2021 20:16 |
|
And here's a list of games you won't be able to play on Alder Lake: https://arstechnica.com/gaming/2021/11/faulty-drm-breaks-dozens-of-games-on-intels-alder-lake-cpus/ (Disabling the E-cores in BIOS would presumably get around this issue?)
|
# ? Nov 6, 2021 00:00 |
|
Drakhoran posted:And here's a list of games you won't be able to play on Alder Lake: From TechSpot: quote:Luckily, there is a way to avoid the issue before the patches roll out. It involves enabling Legacy Game Compatibility Mode, which will place the E-cores in a standby mode while playing games. Here’s how to enable the feature:
|
# ? Nov 6, 2021 00:16 |
|
Dr. Video Games 0031 posted:So... that bios option just makes it so scroll lock toggles the e-cores? Pretty neat. It's a great workaround. But there also should be a process flag in windows so the e-cores can still run Netflix/discord/mail while you game on the p-cores. Maybe one of those compatibility flags you can set on the executable file.
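Until a per-exe flag like that exists, CPU affinity is the closest approximation. A sketch of building the mask, assuming (and this is an assumption about enumeration order - verify it on your own machine) that the 12900K's eight hyperthreaded P-cores show up as logical CPUs 0-15 and the eight E-cores as 16-23:

```python
# Sketch: build a CPU affinity bitmask restricting a process to the
# P-cores. Assumes the 8 hyperthreaded P-cores enumerate as logical
# CPUs 0-15 and the 8 E-cores as 16-23 -- check your own machine's
# topology before relying on this.

def affinity_mask(cpu_ids) -> int:
    """One bit per logical CPU, in the format used by
    SetProcessAffinityMask on Windows or taskset on Linux."""
    mask = 0
    for cpu in cpu_ids:
        mask |= 1 << cpu
    return mask

P_CORES = range(0, 16)   # 8 P-cores x 2 threads (assumed layout)
E_CORES = range(16, 24)  # 8 E-cores, no hyperthreading

print(hex(affinity_mask(P_CORES)))  # -> 0xffff, the P-core-only mask
```

On Windows the mask could then be applied through a process's `ProcessorAffinity` property (e.g. from PowerShell), and on Linux with `taskset`.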
|
# ? Nov 6, 2021 01:19 |
|
The article says an upcoming Win11 update will fix some of the games, so they probably are just adding a special case that pins those specific exes to the P-cores.
|
# ? Nov 6, 2021 01:25 |
|
Amazing how they fit those p-cores into the balls
|
# ? Nov 6, 2021 01:34 |
|
VorpalFish posted:That seems about in line with what you'd expect? The desktop iGPUs are all the 32 EU variants. I don't think I've seen anything to suggest it's different to what's in Rocket Lake so I guess the performance uplift is all down to memory bandwidth. Yes, this is an important point, the iGPU on desktop chips is much much smaller than the laptop variants. Perhaps we will see a "10400G" or whatever someday, a socketed laptop chip for that specific niche market, like AMD's APU lineup, but they haven't yet. On the other hand AMD's graphics block is faster, but still basically has the exact same capability set as 2017-era Vega - so it still doesn't support HDMI 2.1 or AV1 or HDMI Org VRR. So it's faster at graphics but also worse at HTPC and can't do VRR to an LG OLED (for example, for streaming). Twerk from Home posted:I haven't heard anything about Atoms in a long time, how have the last few atom generations been? Intel has been pushing aggressively on Atom for a long time now - tbh you can probably argue the big.little stuff has had probably 10 years of groundwork behind it. Silvermont was really where it all changed, that's where Intel decided to put out-of-order on Atom and start pushing performance upwards really aggressively. They also moved away from the older Intel GMA graphics to the same mainline Intel HD/UHD architecture as the core chips. Bay Trail was acceptable as a low-power server for its era (similar to Athlon 5350, etc) but I think it was very pokey at the desktop, especially with the RAM-limited configurations I was using. Goldmont stepped up performance aggressively again, and introduced a backported media block from Xe (it's otherwise the same Intel UHD graphics - but it supports better decode and HDMI 2.0b). Tremont is the next full generation, and it's unfortunately the latest thing you can buy in a NUC; there is no Gracemont-based NUC yet, so it's a generation behind. 
But still, node shrink to 10nm, another giant leap in per-thread performance, and now it has Xe graphics, so you get AV1 decode and Adaptive Sync and so on - but no HDMI 2.1. Gracemont steps performance aggressively again and you get HDMI 2.1 and HDMI Org VRR, and full AVX2 support. So anyway it depends on what you're going to be doing with it, specifically. If it's encoding type stuff, AVX2 on a USFF Haswell may be a selling point, but the Tremont NUC will have better hardware transcoding and codec support. The Haswell USFF also may be allowed to dissipate more power (which is always advantageous) but will have much worse perf/watt, much worse idle power, and perhaps even lower single-threaded performance than Tremont in non-AVX tasks. Tremont is really really good, it's a huge step. But it seems inevitable that there will be a Gracemont-based NUC at some point, or perhaps Nextmont. Alder Lake is where it has finally sunk into the public consciousness that "oh poo poo Atom is good now", they're hitting Skylake per-thread performance with Gracemont, but at a much lower clock and power. But like, Goldmont was already really good, you get performance that's about halfway between a midrange Core2Quad and a midrange Nehalem i5, out of a NUC that tops out at 15W peak and idles at 3W, and that's two major architectures behind Gracemont. So like, Tremont edging out Haswell in some non-AVX scenarios as those Passmark numbers suggest actually wouldn't surprise me at all. Passmark is a generally lovely benchmark but I mean... I'd expect Tremont to be somewhere around Ivy Bridge at worst. 
I used a J5005 NUC as a cheap citrix-into-work client over the last 2 years so I didn't have to run my gaming rig and have it heat up my office, you used to be able to pick them up for like $120 in basically like-new surplus condition (mine still had plastic on them) and yeah it's not the fastest thing in the world but it's perfectly usable for basic desktop poo poo, browsing the web and so on. The one comment is, do be sure to put enough RAM in them, as a power user I was running out of RAM with 8GB and when it swaps, performance dies, so 16GB is recommended. Supposedly it will even do 32GB, actually, despite 8GB being the official maximum (needs to be 2400C16 though). Tremont and Gracemont were both huge steps (as Nextmont will also be) so if you can wait, there will be better Atoms coming. I think AVX is a really big selling point, as is the full HDMI 2.1 support and so on. I fully intend to use them as various media PCs around the house for a long time to come, and I'd like to have HDMI 2.1 VRR for that, for streaming to an OLED or whatever. But I got them for $120 each, barebones, so my expectations are low. Would I pay like $500 for one? Probably not, unless you're super concerned about heat (idle power is going to be notably worse on Haswell USFF), or hardware transcoding, or HTPC. The Haswell USFF will not do HDMI 2.0b without an adapter for sure, but it also will be faster in software transcoding or other AVX tasks. Paul MaudDib fucked around with this message at 01:59 on Nov 6, 2021 |
# ? Nov 6, 2021 01:39 |
|
also if your budget is like $500 such that you're considering a current-gen NUC (even an Atom-based one), let me also raise that you could buy an M1 Mac Mini and ride that whole wave with Asahi Linux. The Mac Mini is actually very affordable for what it is and it's the best "NUC" on the market right now essentially at any price. Not only do you get 4 Skylake-tier e-cores but you also get 4 full-fat p-cores with 5900X-level performance, in a NUC-level power budget. It's wildly outperforming Intel NUCs costing more than twice as much to build. The Linux graphics aren't quite there yet, but they have the distro itself running fine at this point with software rendering, and the graphics are getting very close. If you're just using it as a server, you don't care. No graphics also means no transcoding though. There are certainly going to be problems with lack of x86 compatibility and/or you would have to be adaptable in terms of using emulation or debugging your toolchain, but there's a lot of stuff being written for it because running in native ARM mode the new MacBooks have insane battery life, like multiple days of developer work per charge. But honestly that's one of the things I do find attractive about NUCs - I've done the raspberry pi game, etc, and just running standard x86 binaries on the standard open-source Intel graphics driver is fantastically low-effort for what it is. I strongly prefer recommending the Goldmont-based NAS units for Synology/etc just out of pure convenience. Everything is x86, everything just works. Also Apple's e-core is crazy crazy good, they are at Gracemont performance from a core 40% of the size, even considering the node shrink it's insane. If you call 10ESF "TSMC 7nm class", the official TSMC number is 1.8x density for N7->N5, but Apple got about a 35% shrink across the whole chip going A13 to A14, so Apple is getting Gracemont performance at about 55-75% of the equivalent area of Gracemont. A15 is wizard tier hardware engineering, it's amazing. 
Paul MaudDib fucked around with this message at 02:36 on Nov 6, 2021 |
# ? Nov 6, 2021 01:57 |
It's not just a question of wizard engineering, it's also that Apple controls the compiler. ICC is well-known for doing a much better job on Intel chips than LLVM, GCC, and other compilers - for much the same reason.
|
|
# ? Nov 6, 2021 12:19 |
|
Apparently ADL is more efficient than AS at the same power. Smoke and mirrors from Apple? https://twitter.com/no_one180/status/1456966531460501507
|
# ? Nov 6, 2021 17:34 |
|
carry on then posted:Apparently ADL is more efficient than AS at the same power. Smoke and mirrors from Apple? Lol hosed up if true
|
# ? Nov 6, 2021 18:58 |
|
Gonna need more context than one tweet with two numbers, I think.
|
# ? Nov 6, 2021 19:27 |
|
Level 1 Thief posted:Gonna need more context than one tweet with two numbers, I think. The source is linked in a reply. https://www.youtube.com/watch?v=WSXbd-PqCPk
|
# ? Nov 6, 2021 20:25 |
|
If you like the J5005 NUC but can't find it for a good price, look for Wyse 5070 thin clients. The extended chassis even has a low-profile PCIe slot.
|
# ? Nov 6, 2021 20:39 |
|
carry on then posted:Apparently ADL is more efficient than AS at the same power. Smoke and mirrors from Apple? That's a pretty arbitrary bench, it's 6+8 @ 35W vs 8+2 @ 30W with a big rear end gpu adding to idle power. Also, what's the single core perf at 35W? If it's less than an M1, what's the point. edit: The tweet is wrong, the numbers actually show AS has better perf/W. 14288/35 = 408 points/W, 12326/30 = 410 points/W. Perplx fucked around with this message at 21:07 on Nov 6, 2021 |
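Spelling out that correction (scores and wattages straight from the tweet, treating the reported package power as exact):

```python
# Perf/W from the tweeted numbers: benchmark score / package watts.
adl_score, adl_watts = 14288, 35   # Alder Lake 6+8 @ 35 W
as_score, as_watts = 12326, 30     # Apple Silicon 8+2 @ 30 W

adl_eff = adl_score / adl_watts    # ~408.2 points/W
as_eff = as_score / as_watts       # ~410.9 points/W
print(f"ADL {adl_eff:.1f} pts/W vs AS {as_eff:.1f} pts/W")
```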
# ? Nov 6, 2021 20:41 |
|
Core counts are a social construct, buddy.
|
# ? Nov 6, 2021 22:00 |
|
|
|
Eletriarnation posted:If you like the J5005 NUC but can't find it for a good price, look for Wyse 5070 thin clients. The extended chassis even has a low-profile PCIe slot. Oh, that's a good tip, I vaguely remember seeing those linked on ServeTheHome's hot deals forum and looked at them briefly but didn't dig into them too much. Any major downsides as a NUC home-server replacement or HTPC?
|
# ? Nov 6, 2021 22:23 |