|
at what point is the NUC really new still tho
|
# ? Jan 6, 2024 00:17 |
|
Worf posted:at what point is the NUC really new still tho Personally, I'd hoped they went with something more closely aligned with their ExpertCenter line of devices, like the PN42 - which is a completely fanless system with the N100 processor. If I can ever afford one, I'd love to get it for an HTPC.
|
|
# ? Jan 6, 2024 14:29 |
|
Well, like Intel said, new doesn’t always mean best
|
# ? Jan 6, 2024 15:47 |
|
https://x.com/ghost_motley/status/1744803710834806960?s=20
|
# ? Jan 10, 2024 02:38 |
|
Beef posted:It's dumb how good that branding works. There are constantly people saying that their laptop is still good because it's an i7 or i9 without having a clue that there are generations. My favorite was a Steam user review I saw a few days ago of a somewhat recent Spider-Man game where someone complained about performance. Their included hardware list just claimed their PC specs were way over recommended and then listed "Intel Core i7 @ 2.60GHz, 16,0GB of RAM, NVIDIA GeForce RTX 2060, SSD 480GB." The Wikipedia page on i7 CPUs gives 17 results (16 if you ignore the embedded 1255UL having it as the E-core frequency) when searching for processors with a base clock of 2.6 GHz, and it's mostly mobile or embedded CPUs from the i7-3720QM released in 2012 to the i7-13650HX released in 2023. Given the historical use of the @ to sometimes indicate overclocking/underclocking frequency it could also have a completely different base clock and the user might just be running it at that frequency manually. The RAM/SSD type or speed not being listed also doesn't help, so I can't even tell for sure if they are on an old desktop or a somewhat more recent laptop by that post
|
# ? Jan 10, 2024 21:07 |
|
Bofast posted:My favorite was a Steam user review I saw a few days ago of a somewhat recent Spider-Man game where someone complained about performance. Their included hardware list just claimed their PC specs were way over recommended and then listed "Intel Core i7 @ 2.60GHz, 16,0GB of RAM, NVIDIA GeForce RTX 2060, SSD 480GB." The only i7s that ran that slow (that aren't laptop chips) are like the first-generation 2008-09 Nehalem processors. Sandy and Ivy Bridge have nothing that clocks that low at the i7 level. They're totally on an old desktop that's over a decade old. Of course your processor is going to wimp out. I think there's something like a 20-30% performance uplift between the first-gen i series and Sandy Bridge alone, not to mention Ivy Bridge and everything that comes after it.
|
# ? Jan 10, 2024 21:18 |
|
Rawrbomb posted:The only i7's that ran that slow (that are not a laptop) is like the first generation 2008-9 Nehalem processors. Sandy and Ivy bridge have nothing that clocks that low at the i7 level. They could be on something like a 2020 Asus G15, which would also match those specs and should run Spider-Man fine. That gets back to Bofast's point that marketing bullshit makes things harder.
|
# ? Jan 10, 2024 22:04 |
|
It would probably be so much simpler if the games just gave the recommendations as required benchmark results.
|
# ? Jan 10, 2024 22:18 |
|
Saukkis posted:It would probably be so much simpler if the games just gave the recommendations as required benchmark results. Do you remember the Windows Experience Index?
|
# ? Jan 10, 2024 22:37 |
|
Beef posted:Hats off to the guy on the NUC team that found a Skull Canyon on the map to satisfy the geographic naming convention guidelines. Worf posted:at what point is the NUC really new still tho BlankSystemDaemon posted:The new part is that it's now more gaudy than ever, after Asus took over. Worf posted:Well, like intel said. New doesn’t always mean best
|
# ? Jan 12, 2024 01:08 |
|
Now it's going to be AU for the Absolute Unit. I don't think this was posted here, but apparently the 14th-gen game optimizer thing APO will cover 14 games soon and will expand to some 13th and 12th gen CPUs as well: https://www.pcgamer.com/intel-to-roll-out-14th-gens-game-optimization-software-to-older-1213th-gen-hybrid-cpus-after-all/
|
# ? Jan 12, 2024 01:11 |
|
mobby_6kl posted:Now it's going to be AU for the Absolute Unit. As someone that still plays WoW, I gotta say that it's pretty awesome, but also hilarious, that WoW is one of those 14 games.
|
# ? Jan 12, 2024 04:02 |
|
Canned Sunshine posted:As someone that still plays WoW, I gotta say that it's pretty awesome, but also hilarious, that WoW is one of those 14 games. I know WoW has gotten graphical revamps and the newer zones are more complex, but is it really a demanding game?
|
# ? Jan 12, 2024 04:10 |
|
CPU wise, absolutely, at least in raids.
|
# ? Jan 12, 2024 05:23 |
|
whereas in FFXIV standing in a crowd of people as it struggles to load all of their glams is harder on the CPU than anything in a raid
|
# ? Jan 12, 2024 05:28 |
|
gradenko_2000 posted:I know WoW has gotten graphical revamps and the newer zones are more complex but is it really a demanding game Yeah, as others said, it can still be demanding CPU-wise. And while you can generally crank all settings to max with enough of a system, there's even weird stuff here and there that can hit the GPU hard. Liquid details in particular, for some reason, can bring cards to their knees, including the 4090. I'm guessing it's just not well optimized, but it's pretty funny. Generally anything over "Fair" for Liquid Details resulted in a pretty major fps drop; a 4090 could generally handle it, but still with a pretty noticeable loss, lol.
|
# ? Jan 12, 2024 05:49 |
|
Canned Sunshine posted:Yeah, as others said, it can be still demanding CPU-wise. Ah, the GPU salesman graphics option
|
# ? Jan 12, 2024 09:32 |
|
Edit: I'm an idiot that meant to post in the GPU thread, but since I got a reply, I'm keeping it to avoid even more confusion. I'm assuming a 4070 Ti Super will do great with a VR headset in the current MSFS, but does anyone have thoughts on how that might change with the upcoming 2024 edition? That's about the only regular thing I can count on where I'll want that kind of performance (outside of being an idiot doing 3D game stuff of my own). Rocko Bonaparte fucked around with this message at 21:26 on Jan 12, 2024 |
# ? Jan 12, 2024 09:42 |
|
It's impossible to say since they've shown so little of the 2024 edition so far. They're aiming to put in a lot more ground detail it seems, but I don't know how much that will affect performance, or if you can tune it back down to 2020 levels. edit: i just realized this was the intel thread. we're pretty off-topic but whatever. VR in MSFS benefits from having as much GPU horsepower as you can possibly throw at it. Even the 4090 isn't good for a consistent 90fps at max settings if you have a high-res headset like the Reverb G2 or Quest 3, but you can make lower-end cards work by dropping down the settings a lot. this will probably be true for the 2024 edition too. Dr. Video Games 0031 fucked around with this message at 13:21 on Jan 12, 2024 |
# ? Jan 12, 2024 10:37 |
|
Chips & Cheese has some tests of MTL: https://chipsandcheese.com/2024/01/11/previewing-meteor-lake-at-ces/
|
# ? Jan 12, 2024 21:03 |
|
Dr. Video Games 0031 posted:edit: i just realized this was the intel thread. Yeah that was supposed to go into the GPU thread. I've got big win energy over here.
|
# ? Jan 12, 2024 21:25 |
|
Fab 9 is now open, co-located with Fab 11X in New Mexico. https://www.techpowerup.com/318257/intel-opens-fab-9-foundry-in-new-mexico Apparently it does something something EMIB Foveros something chiplets etc.? Fab 11X had previously been the home of Optane (rest in piss), and I legit have no idea what's going on there now.
|
# ? Jan 24, 2024 22:05 |
|
mdxi posted:Fab 11X had previously been the home of Optane (rest in piss), and I legit have no idea what's going on there now. https://www.anandtech.com/show/20042/intel-foundry-services-to-make-65nm-chips-for-tower-semiconductor Analog and RF processors, apparently.
|
# ? Jan 24, 2024 23:23 |
|
Some rumors on Arrow Lake possibly not having HT. Personally I don't really have an opinion on that either way; presumably they've looked into it and it wouldn't be worth it. As always, could be BS anyway. https://www.notebookcheck.net/Deskt...s.795626.0.html Also "following apple" lol
|
# ? Jan 24, 2024 23:56 |
|
With the increased core count, I find it hard to even play devil's advocate for HT on client. On the flip side, I do have the opinion that big-core servers should get 4-way or higher SMT, or none at all.
|
# ? Jan 25, 2024 00:14 |
|
mobby_6kl posted:Some rumors on Arrow Lake possibly not having HT. Personally I don't really have an opinion on that either way, presumably they've looked into it and it wouldn't be worth it. As always could be BS anyway. Speculation on possible reasons: HT on the big cores doesn't make as much sense when you have a sea of small cores to handle highly parallel loads, and also such loads aren't that important in client notebooks anyways, and also HT makes performance/power aware scheduling more difficult, and also HT keeps on playing a role in various security flaws. If it's not providing compelling value to make up for its downsides any more, why not remove it? Or at least disable it in client parts? (oh another one: HT has enjoyed a long run on processors that were stuck on 4-wide decode with much wider execution backends. For many workloads, getting lots of work out of that wide backend meant feeding it with multiple threads. Intel's finally improved decode width to 6-wide in their big cores, which converts some of that 2-thread throughput potential into improved 1-thread performance. So that's another reason why HT may not be looking as great in client any more.) (speaking of Apple, that last one is very likely why Apple has never bothered with HT. Decoding a fixed-width RISC ISA is so easy that even their phone cores have 8-wide decode. Their front end can keep the back end well occupied with a single thread.)
|
# ? Jan 25, 2024 00:30 |
|
There are a lot of interesting workloads that are backend memory latency bound where SMT is the only practical mechanism that can help hide DRAM latency. You use it to let the core go do something else while you need to wait for a load in the critical path. Think of any kind of walk of unstructured data. Graph analysis and analytical databases spring to mind, but also parallel and/or concurrent GC. But 2 threads per core is barely enough speedup to justify the implementation pains. Going to 4 or 8 would make the tradeoff more interesting. Sun did this with Niagara, IBM with their z-series and Intel with KNL. So the question becomes, how important are those workloads and are the big core clients willing to pay extra for it when there are also small core server chips like Sierra forest.
|
# ? Jan 25, 2024 10:13 |
Gwaihir posted:CPU wise, absolutely, at least in raids. Oh man… WoW was built in an era when multi-core systems weren't the norm. The WoW client isn't still single threaded, is it? It's like how even on a modern system there's an upper bound on the performance of Quake 2's software renderer and games like Crysis: the one core gets pegged to 100% and that's it. WoW doesn't have these problems anymore, right? The guts of WoW aren't still stuck in the design decisions of twenty years ago Coffee Jones fucked around with this message at 12:33 on Jan 25, 2024 |
|
# ? Jan 25, 2024 12:31 |
|
I started playing WoW on an AMD Athlon 2100+ 💀
|
# ? Jan 25, 2024 12:31 |
|
Beef posted:But 2 threads per core is barely enough speedup to justify the implementation pains. Going to 4 or 8 would make the tradeoff more interesting. Sun did this with Niagara, IBM with their Not exactly a list of products with a track record of success against fat no/low-SMT cores tho. * Power is not Z. Z does 2-way SMT.
|
# ? Jan 25, 2024 15:48 |
|
Oops, I should have checked to make sure. And yeah, the economics are not in its favor.
|
# ? Jan 25, 2024 16:59 |
|
Coffee Jones posted:Oh man… WoW was built in an era when multiple core systems weren’t the norm. It’s not like the WoW client is single threaded? No, WoW's engineers are very very good at what they do and have rewritten the game to be much more suitable for today's computers. It was its immediate competitor, EverQuest 2, that died on the altar of "single thread go up forever." It's just that WoW is still very CPU intensive when there's lots of players doing lots of things around you. It's a fact of the MMO genre.
|
# ? Jan 25, 2024 20:49 |
|
WoW will use multiple cores, though I still think it tops out around 4-6 cores with the current x86 codebase. But it'll easily push 2-3 cores, sometimes 4, up to 100% utilization, yeah. I think the arm64 build of WoW is actually better optimized than the x86 version, largely because some of Apple's software developers worked with Blizzard's staff to optimize the game for Apple Silicon, so it makes better use of it even though Apple Silicon does not have SMT.
|
# ? Jan 25, 2024 21:00 |
|
When are we expecting Arrow Lake? I want to build, but I can't pull the trigger on a final-generation socket plus ridiculous power use. Considering AMD, but I do a lot of video editing and the Intel features would be welcome.
|
# ? Jan 26, 2024 00:35 |
|
BobHoward posted:processors that were stuck on 4-wide decode with much wider execution backends. For many workloads, getting lots of work out of that wide backend meant feeding it with multiple threads. Intel's finally improved decode width to 6-wide in their big cores incredibly curious about this, the old had to be 4-1-1-1 but I don't have a guess at what 6 breaks down to
|
# ? Jan 26, 2024 00:39 |
BobHoward posted:Speculation on possible reasons: HT on the big cores doesn't make as much sense when you have a sea of small cores to handle highly parallel loads, and also such loads aren't that important in client notebooks anyways, and also HT makes performance/power aware scheduling more difficult, and also HT keeps on playing a role in various security flaws. If it's not providing compelling value to make up for its downsides any more, why not remove it? Or at least disable it in client parts? Beef posted:There are a lot of interesting workloads that are backend memory latency bound where SMT is the only practical mechanism that can help hide DRAM latency. You use it to let the core go do something else while you need to wait for a load in the critical path. Because we have much more bandwidth available than we used to, DRAM latencies have actually gone up (though not significantly compared to the order-of-magnitude difference there already is between cache latency and DRAM latency). If memory serves, there are also big differences between the way SPARC 4-way SMT worked, the way Intel SMT works, and even how POWER(8+?) works. Coffee Jones posted:Oh man… WoW was built in an era when multiple core systems weren’t the norm. It’s not like the WoW client is single threaded? Even with more advanced locking primitives in modern OS kernels, there's a lot of compute that needs to be done serially, so there's no benefit to be had from putting it on different threads (and in fact there can be a downside if you end up invalidating the L1 or L2 cache). One of the newer optimizations that've been catching on, at least for the engines that can take advantage of it (either because they aren't so old as to be too big to change, are new enough to default to it, or are written by teams big enough to do the work of changing it), is data-oriented design. The idea, if memory serves, is to try to make sure everything fits into the cachelines of each individual CPU core.
|
|
# ? Jan 26, 2024 00:59 |
|
JawnV6 posted:incredibly curious about this, the old had to be 4-1-1-1 but I don't have a guess at what 6 breaks down to Went looking, found an Anandtech article about Alder Lake's Golden Cove written near its launch date (that being the first 6-wide decoder core), but all it had to say was "We asked if there was still a kind of special-case decoder layout as in previous generations (such as the 1 complex + 3 simple decoder setup), however the company wouldn’t dwell deeper into the details at this point in time." Maybe there's more info out there now. I would be surprised if there's any more than 2 complex decoders, it can't be important to have lots of them or they never would've done 1+3 in the first place.
|
# ? Jan 26, 2024 01:37 |
|
BobHoward posted:Maybe there's more info out there now. I would be surprised if there's any more than 2 complex decoders, it can't be important to have lots of them or they never would've done 1+3 in the first place. There's only ever going to be one "all" decoder, no sense routing a bunch of wires to a second slot when most ucode flows are just going to lock down the front end for a while, but there's options for the smaller ones.
|
# ? Jan 26, 2024 01:47 |
|
Canned Sunshine posted:WoW will use multiple cores, though I still think it's pretty limited to upwards of 4-6 cores with the current x86 codebase. But it'll easily push 2-3 cores, sometimes 4, up to 100% utilization, yeah. I'd guess any help Apple gave mostly concerned Metal / Apple GPU performance tuning. SMT or not wouldn't matter much to WoW; as long as you've got at least 4 fast cores (which every Apple Silicon Mac does), you're doing pretty good. Speaking of, on the CPU side, if anything it's more like Apple's CPUs are optimized for what software like WoW needs. Apple's fast cores don't use much power, only about 5 to 6 watts at peak clock rate. Thanks to that, Apple doesn't have to roll back multi-core clocks nearly as much as Intel Turbo Boost. That's good for imperfectly-scaling programs like WoW (and most other games): you want a few fast cores, not a ton of slower ones.
|
# ? Jan 26, 2024 02:40 |
|
|
Seems like this is from Jan 1st, but I haven't seen the chart here (or anywhere else) before: https://www.hardwaretimes.com/intel-1st-gen-core-ultra-meteor-lake-beats-amds-ryzen-7000-cpus-at-ultra-low-power/ It's just SPECint, so I dunno how much you can extrapolate from that. But there are a few interesting things here.
|
# ? Jan 27, 2024 16:08 |