|
hobbesmaster posted:While 140W is the default, it can go much, much higher. The “can’t draw twice the power” isn’t actually true. With a good board and uncapped PBO settings 5950x users report 240W in prime95 small FFT which should give an absolute upper bound. My 5800x3d can hit 120W in the same test. I guess technically that’s more than double because the io die is like max 20W on both? He's right that the 5950X didn't use 2x the power of the 5800X, and the 5800X could use a lot more than 120W in stress tests with PBO enabled. The 5800X3D is different, though, since it lacks PBO or any other kind of power manipulation or overclocking. Anyway, if your primary use case is gaming, the 5800X3D makes more sense than the 5950X. It's just the better gaming chip, and it's not even close.
|
# ? May 29, 2022 21:54 |
|
|
speaking of the 5800X3D, anyone else see their cpu go to 70c on boot and stay there for a bit? i thought i was doomed to go through this forever, as it also happened again a bit later, but apparently that was some weird windows background services that probably decided to run because the computer had been turned off for over a week while i was setting up in a new case. had to take off one of the d15 fans for the o11d evo, but still pretty weird. maybe i just didn't hear the fans before in my older case, whereas i got this one to make it easier to observe everything that was going on - for better and for worse. makes you a bit more paranoid when you can hear all the hysteresis. cinebench performs better than before, but i'll have to check 3dmark tomorrow to compare
|
# ? May 30, 2022 01:10 |
|
Despite people saying the big air tower coolers are equivalent to 240mm AIOs, an AIO's thermal "soak time" is longer than a few seconds, which helps a bit with brief spikes like that. The regular 5800X immediately runs hot like that too; i guess there's just too much going on in the one CCD.
|
# ? May 30, 2022 02:01 |
|
It's a lot of heat in a very small spot. I've also heard the "direct contact" coolers that got so popular might not be the best with CPUs like the 5800 because the hot spot is so small it could only be touching one of those heatpipes.
|
# ? May 30, 2022 02:08 |
|
FuturePastNow posted:I've also heard the "direct contact" coolers that got so popular might not be the best with CPUs like the 5800 because the hot spot is so small it could only be touching one of those heatpipes. This is true, but if kliras has a D15 that's not a problem.

hobbesmaster posted:Despite people saying the big air tower coolers are equivalent to 240mm AIOs, “soak time” is longer than a few seconds which helps a bit with brief stuff like that. A D15 isn't gonna be thermally soaked by a couple cores of load from a windows boot or update.

I think this is just the same pattern of observed high CPU temperature that we've seen on all 7nm chips, just one step worse. Temps under light workloads are very high because the CPU can boost a core to the max. One core is so much heat in so small a spot that the thermal conduction of the silicon itself becomes a limiting factor. Spinning up fans doesn't really do much because the fin stack isn't the problem - it's a <60W load, that's nothing. The 3000 series was this way, and "is my CPU hot?" was a super common question. The 5000s did the same but could boost even higher, and the X3D adds a slab of silicon that makes it even worse.

Water cooling does "help", because modern waterblocks get the water super close to the CPU - the skived plate is only a few mm thick, so it's the fastest possible evacuation of heat from the core. But alternatively you can realize that the problem isn't really a problem: 70c just isn't as hot as it used to be. If you have a good cooler that you know is mounted well (because all-core prime95 temps are low), just raise your fan curves and set longer step-up/step-down times. Hell, on a D15 + 5800X3D pair, set a low static fan speed. It has loads more capacity than the CPU can generate.
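To put rough numbers on the hotspot argument above - every figure here is an illustrative guess, not a measured or spec value - a single boosting core dumps its heat into a few mm² of silicon, while an all-core load spreads a much larger total over the whole CCD:

```python
# Back-of-envelope power density comparison: one boosting core vs. an
# all-core load. All numbers are illustrative assumptions, not specs.

def power_density(watts: float, area_mm2: float) -> float:
    """Heat flux in W/mm^2 over the given silicon area."""
    return watts / area_mm2

# One core boosting hard: assume ~18 W concentrated in ~4 mm^2 of core area.
single_core = power_density(18, 4)

# All-core stress test: assume ~120 W spread across an ~80 mm^2 CCD.
all_core = power_density(120, 80)

print(f"single-core hotspot: {single_core:.1f} W/mm^2")
print(f"all-core average:    {all_core:.1f} W/mm^2")
```

Even though the all-core load draws far more total power, the single-core hotspot is several times denser - which is why spinning the fans up (the fin-stack side of the equation) barely moves the needle.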
|
# ? May 30, 2022 03:28 |
|
My last CPU that didn't immediately go to >65C under a moderate load was the 2500K. I have also set the fan curves for my 5600X and 5600 to stay at zero until 75C.
|
# ? May 30, 2022 09:47 |
|
Leaked slide confirms AMD Mendocino RDNA2 graphics specifications with two Compute Units Oofta. Is Intel still using last-gen HD graphics at the low end?
|
# ? May 30, 2022 19:41 |
|
SwissArmyDruid posted:Leaked slide confirms AMD Mendocino RDNA2 graphics specifications with two Compute Units 2 CU is going to be slow. Even the universally panned 6500XT has 16 CU
|
# ? May 30, 2022 20:07 |
|
Not sure what you were expecting out of a chromebook chip
|
# ? May 30, 2022 20:19 |
|
I was more replying to the implication that Intel was somehow going to be far behind at the low end.
|
# ? May 30, 2022 20:29 |
|
Turning on Progress Quest's ray-tracing mode is gonna own on the Mendocino chips
|
# ? May 30, 2022 20:32 |
|
gradenko_2000 posted:Turning on Progress Quest's ray-tracing mode is gonna own on the Mendocino chips Please tell me this is real and you get periodic updates on what Ray is up to.
|
# ? May 30, 2022 21:31 |
|
Frames per second? More like seconds per frame
|
# ? May 30, 2022 22:37 |
|
Hell, at just 2 CU the dual channel DDR5 is a huge waste. Might as well have gone with single channel DDR4 to really cut costs.
|
# ? May 31, 2022 00:48 |
|
The ddr5 support is probably to "future-proof" it. this seems like one of those chips that's going to stick around and be sold to OEMs for an annoyingly long time, past the point where everyone has already switched to DDR5. It hopefully won't take long for low-end DDR5 to reach price parity with DDR4 anyway.
|
# ? May 31, 2022 00:56 |
|
what's the state of emulation and CPUs? with xenoblade coming out in July, i might want to try out emulation given how bad performance is on the switch these days are there CPU-specific enhancements on modern emulators? do they leverage the GPU at all?
|
# ? May 31, 2022 01:02 |
|
shrike82 posted:what's the state of emulation and CPUs? with xenoblade coming out in July, i might want to try out emulation given how bad performance is on the switch these days Some emulators make use of AVX-512 instructions to give a large speed-up, with RPCS3 getting the most benefit there. Yuzu also uses AVX-512, but it seems to receive a much smaller boost from it. This is largely a moot point since you can't buy any modern CPUs with AVX-512 support anymore. You could kinda get it with Intel 12th gen through some BIOS fuckery, but Intel started physically modifying the chips so they don't support those instructions anymore. Yuzu reportedly benefits from the 5800X3D's extra cache by a lot. I'm seeing reports of a 40% speedup over the 5800X.
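As a quick sanity check on your own machine, you can look for the AVX-512 feature flags the kernel reports. This is a toy sketch assuming a Linux-style `flags` line; the sample string below is made up for illustration:

```python
# Parse a /proc/cpuinfo-style "flags" line and report which AVX-512
# subsets are present. The sample line is a hypothetical example.

def avx512_subsets(flags_line: str) -> set[str]:
    """Return the set of avx512* feature flags found in a flags line."""
    return {f for f in flags_line.split() if f.startswith("avx512")}

# Hypothetical flags line for illustration only:
sample = "fpu sse sse2 avx avx2 avx512f avx512dq avx512cd avx512bw avx512vl"

subsets = avx512_subsets(sample)
print(sorted(subsets))

# On a real Linux box you'd pull the line from /proc/cpuinfo instead, e.g.:
# flags_line = next(l for l in open("/proc/cpuinfo") if l.startswith("flags"))
```

If the set comes back empty, the CPU (or a microcode/BIOS fuse, in the 12th-gen case) isn't exposing AVX-512 and the emulator falls back to narrower vector paths.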
|
# ? May 31, 2022 01:10 |
|
Dr. Video Games 0031 posted:Yuzu reportedly benefits from the 5800X3D's extra cache by a lot. I'm seeing reports of a 40% speedup over the 5800X.
|
# ? May 31, 2022 01:11 |
|
Dr. Video Games 0031 posted:This is largely a moot point since you can't buy any modern CPUs with AVX-512 support anymore Zen4 should have it, though. AMD is being cagey about confirming which instructions it has besides the AI ones, but leaks have shown it has very good coverage Wikichip on Zen4 posted:AVX-512 - 512-bit Vector Instructions Ignoring the stuff that Intel only ever shipped on accelerators, it's only missing VP2INTERSECT compared to Intel's latest implementation (that you can't buy lol) repiv fucked around with this message at 01:19 on May 31, 2022
# ? May 31, 2022 01:17 |
|
I'm getting very little in the way of corroboration with regards to Yuzu getting a large boost from the 5800X3D. Someone posted screenshots of xenoblade 2 getting a large framerate boost, and someone else on reddit mentioned getting a boost, but that's it. It seems to be a very undertested scenario. edit: this comparison is also incomplete without also showing what an Intel chip can do. Dr. Video Games 0031 fucked around with this message at 01:30 on May 31, 2022 |
# ? May 31, 2022 01:27 |
|
not surprising the emulator performs well when you can fit the entire game into cpu cache
|
# ? May 31, 2022 02:02 |
|
Dr. Video Games 0031 posted:The ddr5 support is probably to "future-proof" it. this seems like one of those chips that's going to stick around and be sold to OEMs for an annoyingly long time, past the point where everyone has already switched to DDR5. It hopefully won't take long for low-end DDR5 to reach price parity with DDR4 anyway.
|
# ? May 31, 2022 08:17 |
|
Anime Schoolgirl posted:I desperately want the punchline of Mendocino to be that it shows up as an Athlon 5000G to provide an SKU for the very bottom end of AM5, which is fast approaching "premium-only platform" status man, that'd be rough. Even the Athlon was a Vega 3 [compute units]
|
# ? May 31, 2022 08:22 |
|
gradenko_2000 posted:man, that'd be rough. Even the Athlon was a Vega 3 [compute units] To be fair, these are RDNA2 CUs. Not loving GCN. God gently caress, I am so goddamn happy we are done with GCN forever.
|
# ? May 31, 2022 09:19 |
|
yeah now we'll be stuck with rdna for a decade lmao
|
# ? May 31, 2022 12:03 |
|
The concept of even being able to emulate a current gen (even Nintendo) system is insane to me. My emulation box is an HP SFF desktop with a Haswell 4770, and it struggles to do PS2 stuff, can't handle Wii U, and occasionally struggles with N64 games. I feel like grabbing a second-hand Wii U and homebrewing it is gonna be a better option than trying to emulate more recent Nintendo stuff, given that a heap of Switch launch titles were apparently ported straight from that system anyway?
|
# ? May 31, 2022 12:42 |
|
Dude the Wii U came out one year before Haswell, of course it’s having trouble emulating that.
|
# ? May 31, 2022 15:34 |
|
SwissArmyDruid posted:Leaked slide confirms AMD Mendocino RDNA2 graphics specifications with two Compute Units AMD's Robert Hallock and Frank Azor tried to heavily set expectations about Mendocino's built-in graphics in the podcast/vid linked in the other Videocardz article you posted a few pages back. Here's the vid where they talk about it, the entire thing is pretty drat good: https://www.youtube.com/watch?v=_s4w49nAgVs tl;dr: market saturation, and providing a product that the market, not us nerds, wants. Other APUs will have much more powerful integrated graphics.
|
# ? May 31, 2022 17:22 |
|
I’ve been Very Annoyed ™️ by the Google hardware experience for the last 3 years but a Zen Chrome tablet tickles some very nerdy neurons.
|
# ? May 31, 2022 17:45 |
|
This article is a pretty good breakdown of the IO capabilities of Zen 4 and the new chipsets: https://www.techpowerup.com/295394/amd-zen-4-socket-am5-explained-pcie-lanes-chipsets-connectivity In short, a Ryzen 7000 CPU in an X670 motherboard will have a total of 24 usable PCIe Gen 5 lanes, 12 usable Gen 4 lanes, and 8 usable Gen 3 lanes/SATA connections. In comparison, an Alder Lake CPU in a Z690 motherboard has a total of 16 usable PCIe Gen 5 lanes, 16 usable Gen 4 lanes, and 16 usable Gen 3 lanes (not shared with SATA). Technically AMD has more total PCIe bandwidth here due to having 8 more Gen 5 lanes. In either case, it's more than you'd ever practically need.
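Roughly tallying the bandwidth behind those lane counts - using rounded per-lane figures of 1, 2, and 4 GB/s for Gen 3/4/5 (the real numbers are closer to 0.985/1.969/3.938 GB/s), so treat these as approximations:

```python
# Rough total one-direction PCIe bandwidth from the lane counts above.
# Per-lane throughput approximated as 1/2/4 GB/s for Gen 3/4/5.

GBPS_PER_LANE = {3: 1.0, 4: 2.0, 5: 4.0}

def total_bandwidth(lanes_by_gen: dict[int, int]) -> float:
    """Sum approximate bandwidth in GB/s across all lanes, by generation."""
    return sum(GBPS_PER_LANE[gen] * n for gen, n in lanes_by_gen.items())

amd_x670   = {5: 24, 4: 12, 3: 8}   # Ryzen 7000 + X670, per the article
intel_z690 = {5: 16, 4: 16, 3: 16}  # Alder Lake + Z690

print(f"AMD:   ~{total_bandwidth(amd_x670):.0f} GB/s")    # ~128 GB/s
print(f"Intel: ~{total_bandwidth(intel_z690):.0f} GB/s")  # ~112 GB/s
```

Same total lane count (44 each), but the extra Gen 5 lanes tip the aggregate in AMD's favor - and either figure is absurd overkill for a desktop.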
|
# ? May 31, 2022 17:59 |
|
Dr. Video Games 0031 posted:This article is a pretty good breakdown of the IO capabilities of Zen 4 and the new chipsets: https://www.techpowerup.com/295394/amd-zen-4-socket-am5-explained-pcie-lanes-chipsets-connectivity cool where's my next $80 mobo thank u very much
|
# ? May 31, 2022 23:09 |
|
Prescription Combs posted:AMD's Robert Hallock and Frank Azor tried to heavily set expectations in the podcast/vid about the built in graphics of Mendocino in the other Videocardz article you posted a few pages back. well maybe the market should want better poo poo. =E I am still mad about my Raven Ridge experience.
|
# ? Jun 1, 2022 03:43 |
|
Dr. Video Games 0031 posted:This article is a pretty good breakdown of the IO capabilities of Zen 4 and the new chipsets: https://www.techpowerup.com/295394/amd-zen-4-socket-am5-explained-pcie-lanes-chipsets-connectivity With Gen 5, and people implementing devices with only Gen 5 lane widths in mind (or doing bandwidth bridges), that lane count is beyond fantastic on either platform (e.g., a x2 Gen 5 link == x4 Gen 4 == x8 Gen 3). Even with the 'usual' widths:

x16 PEG
x4 NVMe
x4 NVMe
x2 Wi-Fi

Tons of lanes left. Most consumer 10Gb MACs are still PCIe 3.0, I'd assume. The server space will be wild -- x1 Gen 5 links on their own are stupid amounts of bandwidth for most devices.
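That equivalence in the parenthetical falls straight out of per-lane bandwidth doubling each generation. A tiny sketch, using ~1 GB/s per Gen 3 lane as a rounded baseline (an approximation, not the exact spec figure):

```python
# Per-lane PCIe bandwidth roughly doubles each generation.
# Baseline: ~1 GB/s per Gen 3 lane (rounded for illustration).

def link_gbps(gen: int, width: int) -> float:
    """Approximate link bandwidth in GB/s for a generation and lane width."""
    return width * 1.0 * 2 ** (gen - 3)

# x2 Gen 5 == x4 Gen 4 == x8 Gen 3, each ~8 GB/s:
print(link_gbps(5, 2), link_gbps(4, 4), link_gbps(3, 8))
```

So a device that keeps a Gen 5 PHY but drops to a x2 or even x1 link still matches what older devices needed a wide slot for.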
|
# ? Jun 1, 2022 03:59 |
|
20Gbps USB ports on a x1 card
|
# ? Jun 1, 2022 05:02 |
|
AMD APUs with RDNA2 integrated graphics only got their first release back in January. Mendocino is just a lowest-end continuation of that
|
# ? Jun 1, 2022 05:13 |
|
Asrock announced X670 boards. Figured to check out the Taichi, since I fared well with that series. They put loving Killer network stuff onto the board. Goddamnit. --edit: Wait, looking it up, I get to Intel Ark. Did they buy it, or just co-opt that name?!
|
# ? Jun 1, 2022 21:36 |
|
Combat Pretzel posted:Asrock announced X670 boards. Figured to check out the Taichi, since I fared well with that series. They put loving Killer network stuff onto the board. Goddamnit. Wait, where are you seeing anything about Intel Ark?
|
# ? Jun 1, 2022 21:52 |
|
Intel bought Killer two years ago; at this point I'd assume their newer products are just Intel NICs with gamerified drivers. Dunno if you can use the generic Intel drivers with them, though
|
# ? Jun 1, 2022 21:55 |
|
Combat Pretzel posted:Asrock announced X670 boards. Figured to check out the Taichi, since I fared well with that series. They put loving Killer network stuff onto the board. Goddamnit. i am way more bothered than i should be at the location of the top x16 slot. msi is doing this too and i hate it
|
# ? Jun 1, 2022 21:55 |
|
|
Combat Pretzel posted:Asrock announced X670 boards. Figured to check out the Taichi, since I fared well with that series. They put loving Killer network stuff onto the board. Goddamnit. Yeah, that's one of those stories that got lost in the middle of COVID. Intel purchased Rivet Networks back in 2020, and now owns the rights to Killer.
|
# ? Jun 2, 2022 00:18 |