|
Mine was a reference model with a single fan, so that thing ran loud and hot. It did play games well, I'll give it that much.
|
# ? Aug 16, 2021 19:08 |
|
|
Building my PC with a 3900XT CPU and DDR4 3600MHz memory. I remember there being a unique relationship between Infinity Fabric clock and memory clock for AMD processor performance, and that you should keep the ratio at 2:1 (so 1800MHz FCLK for 3600MHz MCLK). Is that necessary, or is there no real performance difference? BIOS says 'auto' sets FCLK to equal MCLK (which I don't think is possible since FCLK can't go that high, so I'm not sure what auto is doing, maybe downclocking the memory), but I can manually set 1800 for the Infinity Fabric frequency.
|
# ? Aug 17, 2021 23:14 |
|
PageMaster posted:Building my PC with a 3900XT CPU and DDR4 3600MHz memory. I remember there being a unique relationship between Infinity Fabric clock and memory clock for AMD processor performance, and that you should keep the ratio at 2:1 (so 1800MHz FCLK for 3600MHz MCLK). Is that necessary, or is there no real performance difference? BIOS says 'auto' sets FCLK to equal MCLK (which I don't think is possible since FCLK can't go that high, so I'm not sure what auto is doing, maybe downclocking the memory), but I can manually set 1800 for the Infinity Fabric frequency. DDR operates at, predictably, Double Data Rate, which means an EFFECTIVE data rate of 3600MHz, since it signals on the rise AND fall of the signal. 1800 FCLK matching 1:1 with 1800 MCLK is the correct setting.
|
# ? Aug 17, 2021 23:52 |
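To make the clock arithmetic in that exchange concrete, here's a minimal sketch; nothing vendor-specific is assumed beyond the DDR doubling and the 1:1 FCLK:MCLK rule discussed above:

```python
# DDR transfers data on both edges of the clock, so the advertised
# "DDR4-3600" figure is transfers per second, not the clock rate.
def ddr_clocks(effective_mts):
    """Return (MCLK, FCLK) in MHz for an advertised DDR transfer rate,
    assuming the optimal 1:1 FCLK:MCLK ratio on Zen 2/Zen 3."""
    mclk = effective_mts // 2  # real memory clock is half the transfer rate
    fclk = mclk                # Infinity Fabric runs 1:1 with MCLK
    return mclk, fclk

print(ddr_clocks(3600))  # (1800, 1800) for a DDR4-3600 kit
print(ddr_clocks(2133))  # (1066, 1066), the JEDEC fallback seen without XMP
```

So a readout of 1066/2133 just means the kit fell back to JEDEC defaults rather than running its XMP profile.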
|
just a heads-up to anyone who cares that there are still no 5700G drivers on Linux. I figured this was a gimme since Cezanne has been available in laptops for a year now...
|
# ? Aug 18, 2021 01:15 |
|
CFox posted:My 290 came from the crypto mines, I used it for a few years, and then it went right back into the crypto mines again. Sold it for more than I bought it for originally even. Terrible GPU but best value ever. I sold a Kickstarter era Star Citizen account to some dude on Reddit in like 2014 and bought a 290X with the proceeds. Ended up giving the 290X to a friend when I bought a 1070, then traded the 1070 and a few hundred bucks to a different friend for a 1080 Ti. In the year 2026 I will finally buy an RTX 3080.
|
# ? Aug 18, 2021 01:24 |
|
Paul MaudDib posted:just a heads-up to anyone who cares that there are still no 5700G drivers on Linux. I got one to toss into a DeskMini for another project I have in mind, because apparently I just can’t bring myself to use a VM like a normal person in 2021. It’s going to be 99% headless and rather than deal with popping in / out a GPU, I’ll take whatever shows up on the 5700G. Forgot it got robbed of cache though, but… it’s also in-stock at MSRP. (That project uses GovCloud / export-controlled Office 365 which is just a pain in the loving rear end to co-exist with non-GovCloud stuff and at this point it just needs its own loving computer and I’ll just RDP into it).
|
# ? Aug 18, 2021 03:09 |
|
Kazinsal posted:I sold a Kickstarter era Star Citizen account to some dude on Reddit in like 2014 and bought a 290X with the proceeds. Ended up giving the 290X to a friend when I bought a 1070, then traded the 1070 and a few hundred bucks to a different friend for a 1080 Ti. Oh poo poo, I still have one of those, I should flip my account.
|
# ? Aug 18, 2021 03:53 |
|
Paul MaudDib posted:just a heads-up to anyone who cares that there are still no 5700G drivers on Linux. A bit of poking around shows people running them (blogs and phoronix test suite runs) under kernel 5.13+, though some note that you may need a git version of linux-firmware if you're not on a rolling release distro.
|
# ? Aug 18, 2021 04:04 |
|
Paul MaudDib posted:just a heads-up to anyone who cares that there are still no 5700G drivers on Linux. The 2200G was a tough nut to crack in its day, so I'm not surprised.
|
# ? Aug 18, 2021 07:01 |
|
lol if you aren't using a rolling distrSEGMENTATION FAULT
|
# ? Aug 18, 2021 07:09 |
|
PageMaster posted:Building my PC with a 3900XT CPU and DDR4 3600MHz memory. I remember there being a unique relationship between Infinity Fabric clock and memory clock for AMD processor performance, and that you should keep the ratio at 2:1 (so 1800MHz FCLK for 3600MHz MCLK). Is that necessary, or is there no real performance difference? BIOS says 'auto' sets FCLK to equal MCLK (which I don't think is possible since FCLK can't go that high, so I'm not sure what auto is doing, maybe downclocking the memory), but I can manually set 1800 for the Infinity Fabric frequency. Auto is optimal at 1:1; 3600 MCLK is actually 1800 because DDR. Make sure you set your memory to XMP so it isn't running at default speeds.
|
# ? Aug 18, 2021 13:22 |
|
SwissArmyDruid posted:Oh poo poo, I still have one of those, I should flip my account. i don't think you can still do this, iirc the last time you could actually get $$$ to make it worth the effort was like mid 2014 before the cracks got super obvious
|
# ? Aug 18, 2021 13:30 |
|
Some questions on power usage. I've got a 3900X and I'm looking at Ryzen Master seeing 20-35W core power at 1.15V average, sitting at the desktop doing nothing. 1% CPU usage. Peak frequency hanging around 800MHz. This is using the basic Windows balanced power plan. If I switch the power plan to "power saver", power usage instantly drops to 5-7W core at 0.45V. Nothing else changed, it's still sitting there doing nothing, just doing nothing a lot more efficiently. However this mode limits the maximum frequency to 50% which is a real drag when speed is actually required. I'd like it always to ramp down when idle but also allow maximum performance. However Windows is dumb and if I configure the maximum CPU performance to 100% in the power saver plan, it simply ignores it and still limits the frequency to half. It seems like none of the CPU settings in Windows power plans do anything at all. Any idea how to get the best of both worlds here? I tried the Ryzen power plans but there's no difference.
|
# ? Aug 18, 2021 17:11 |
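If the power-plan sliders are being ignored, one thing worth trying is setting the processor throttle states directly with powercfg from an elevated prompt. This is a sketch, not a guaranteed fix; `scheme_current`, `sub_processor`, and the `PROCTHROTTLE*` names are powercfg's built-in aliases for the active plan, the processor-power subgroup, and the min/max processor state settings:

```shell
# Windows-only config sketch: pin min processor state low and max at 100%
# on the currently active power plan. Values are percentages.
powercfg /setacvalueindex scheme_current sub_processor PROCTHROTTLEMIN 5
powercfg /setacvalueindex scheme_current sub_processor PROCTHROTTLEMAX 100
powercfg /setactive scheme_current
```

The same settings exist separately for battery via `/setdcvalueindex`, and `powercfg /query scheme_current sub_processor` will show whether the values actually took.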
|
SwissArmyDruid posted:DDR operates at, predictably, Double Data Rate, which means an EFFECTIVE data rate of 3600MHz, since it signals on the rise AND fall of the signal. owls or something posted:Auto is optimal at 1:1, 3600mclk is actually 1800 because DDR. Make sure you set your memory to XMP so it isn't running at default speeds. Thanks. With everything on auto, memory in Windows shows 1066/2133 MHz so I'm not sure why it's not running at 1800/3600 by default. I did see that the timings in Windows are much different (15-15-15) vs what the memory specs are (18-22-22, which is also shown in the BIOS XMP profile), so I wonder if the automatic settings are changing everything to optimize it in some way. Either way, it sounds like manually setting the frequency of FCLK and memory to 1800 is the way to go?
|
# ? Aug 18, 2021 18:10 |
|
The 5900x I have is running at 90C on heavy load and I'm starting to think I maybe did something wrong when I installed a noctua d15s. My previous build was with a 4790k also with a noctua.. something and I never saw CPU temps spiking this much or going that high. Is this just normal for this gen of Ryzen CPUs or did I gently caress something up?
|
# ? Aug 18, 2021 18:22 |
|
PageMaster posted:Thanks. With everything on auto, memory in Windows shows 1066/2133 MHz so I'm not sure why it's not running at 1800/3600 by default. I did see that the timings in Windows are much different (15-15-15) vs what the memory specs are (18-22-22, which is also shown in the BIOS XMP profile), so I wonder if the automatic settings are changing everything to optimize it in some way. Either way, it sounds like manually setting the frequency of FCLK and memory to 1800 is the way to go? You need to enable XMP to get the RAM to run at the correct speed, it's turned off by default. ARRGHPLEASENONONONO posted:The 5900x I have is running at 90C on heavy load and I'm starting to think I maybe did something wrong when I installed a noctua d15s. My previous build was with a 4790k also with a noctua.. something and I never saw CPU temps spiking this much or going that high. Is this just normal for this gen of Ryzen CPUs or did I gently caress something up? Do you have PBO enabled? Edit: For reference my 5950X with a Noctua C14S maxes around 71C with PBO off but 88C with PBO on. Ambient is around 26C. NoDamage fucked around with this message at 18:33 on Aug 18, 2021 |
# ? Aug 18, 2021 18:27 |
|
ARRGHPLEASENONONONO posted:The 5900x I have is running at 90C on heavy load and I'm starting to thing I maybe did something wrong when I installed a noctua d15s. My previous build was with a 4790k also with a noctua.. something and I never saw CPU temps spiking this much or going that high. Is this just normal for this gen of Ryzen CPUs or did I gently caress something up? Also make sure the fans are blowing the right way, at the right speed at the right temp/idle/load, etc.
|
# ? Aug 18, 2021 18:30 |
|
NoDamage posted:You need to enable XMP to get the RAM to run at the correct speed, it's turned off by default. PBO on Auto, so probably enabled. kliras posted:I would try removing the cooler and see how much of the CPU is covered. An X pattern is much easier to get right than a pea dot. I actually had to remove the noctua and put it back as I stupidly mounted it upside down (so no room for the first PCIe slot). I redid a similar pea blob from the first time that got very nice circular coverage across the entire CPU, so hopefully it's not that. Also made sure all fans are going front to back, including the noctua one
|
# ? Aug 18, 2021 18:36 |
|
NoDamage posted:You need to enable XMP to get the RAM to run at the correct speed, it's turned off by default. Xmp is enabled. The xmp profile title in the bios actually lists the correct timings and mclock as well. PageMaster fucked around with this message at 18:54 on Aug 18, 2021 |
# ? Aug 18, 2021 18:52 |
|
PageMaster posted:Thanks. With everything on auto, memory in windows shows 1066/2133 MHz so I'm not sure why it's not running at 1800/3600 by default. I did see that the timings in window are much different )(15-15-15) vs what the memory specs are (18-22-22, which is also shown in bios XMP profile), so I wonder if the automatic settings are changing everything to optimize it in some way. Either way, it sounds like manually setting frequency of fclk and memory to 1800 is the way to go? Enable XMP, leave FCLK/MCLK on Auto (1:1) and you'll have your 1800/3600 and your timings set to auto should also set correctly from switching XMP on.
|
# ? Aug 18, 2021 19:08 |
|
owls or something posted:Enable XMP, leave FCLK/MCLK on Auto (1:1) and you'll have your 1800/3600 and your timings set to auto should also set correctly from switching XMP on. XMP is enabled with MCLK and FCLK set to auto (1:1), but memory frequency is only showing 2133MHz.
|
# ? Aug 18, 2021 19:12 |
|
PageMaster posted:Xmp is enabled. The xmp profile title in the bios actually lists the correct timings and mclock as well You might need to pick the other profile than profile 1 if there is one. If not, maybe reset the cmos, go into bios and only change XMP to on and touch nothing else. Save & reboot and see what is being reported in Windows.
|
# ? Aug 18, 2021 19:14 |
|
ARRGHPLEASENONONONO posted:The 5900x I have is running at 90C on heavy load and I'm starting to thing I maybe did something wrong when I installed a noctua d15s. My previous build was with a 4790k also with a noctua.. something and I never saw CPU temps spiking this much or going that high. Is this just normal for this gen of Ryzen CPUs or did I gently caress something up? What is "heavy load"?
|
# ? Aug 18, 2021 19:15 |
|
3600 might not actually work even if you have a 3600 kit, because overclocking RAM and memory controllers isn't an exact science. My brother's PC doesn't run at 3600 with XMP so he just runs at 3200 now
|
# ? Aug 18, 2021 19:16 |
|
LRADIKAL posted:What is "heavy load"? My bad I'm probably overstating. No benchmarking, mining or anything like that, I was just playing Humankind and checked afterwards.
|
# ? Aug 18, 2021 19:21 |
|
owls or something posted:You might need to pick the other profile than profile 1 if there is one. If not, maybe reset the cmos, go into bios and only change XMP to on and touch nothing else. Save & reboot and see what is being reported in Windows. I finally just took off the cooler and moved the two memory sticks from the mobo recommended slots and put them in the other two; xmp works fine now
|
# ? Aug 18, 2021 21:11 |
|
ARRGHPLEASENONONONO posted:My bad I'm probably overstating. No benchmarking, mining or anything like that, I was just playing Humankind and checked afterwards. I played Humankind for an hour just now, with the same Noctua cooler you have on a 5950X, and max I saw was 74C. PBO would definitely explain the high temps if it's on.
|
# ? Aug 18, 2021 21:53 |
|
ARRGHPLEASENONONONO posted:My bad I'm probably overstating. No benchmarking, mining or anything like that, I was just playing Humankind and checked afterwards. If that's a 3D game it could also be your graphics card heating up the case in concert with the CPU load.
|
# ? Aug 19, 2021 00:29 |
|
ARRGHPLEASENONONONO posted:PBO on Auto, so probably enabled
|
# ? Aug 19, 2021 00:37 |
|
Latest BIOS cures this problem and now it idles in the low power state. Boosts higher too. Board is an X470 MSI Gaming Plus Max for anyone with the same problem
|
# ? Aug 19, 2021 02:42 |
|
https://twitter.com/HansDeVriesNL/status/1427611644717305863 i'm the 124 watt io die
|
# ? Aug 19, 2021 07:31 |
|
Malloc Voidstar posted:https://twitter.com/HansDeVriesNL/status/1427611644717305863 the thing is a monster though, gigabyte leaked the specs and the io die alone is 400mm2. I would assume there’s some extra space to handle the PHYs for the additional CCDs (that part won’t scale very well), and maybe they will eventually do the cache-on-IO-die thing.
|
# ? Aug 19, 2021 08:08 |
|
Paul MaudDib posted:the thing is a monster though, gigabyte leaked the specs and the io die alone is 400mm2. Isn't the current server io die ~400mm2 already?
|
# ? Aug 19, 2021 12:00 |
|
An I/O die to die for
|
# ? Aug 19, 2021 13:07 |
|
Well, the HEDT CPUs are coming, too. Let's see how expensive they'll be. Probably O_o levels. Too bad the rumored 16-core TR isn't a thing.
|
# ? Aug 19, 2021 14:26 |
|
Fats posted:I played Humankind for an hour just now, with the same Noctua cooler you have on a 5950X, and max I saw was 74C. PBO would definitely explain the high temps if it's on. That was it. Averaging mid 70s now after turning it from auto to off. It wasn't the GPU; that was running at about 75C as well. I throttled it back since my previous card (a 2080) decided to commit suicide while I was playing CK3, so I prefer to be extra cautious.
|
# ? Aug 19, 2021 15:20 |
|
Combat Pretzel posted:Well, the HEDT CPUs are coming, too. Let's see how expensive they'll be. Probably O_o levels. Too bad the rumored 16-core TR isn't a thing. They might make lower core count TR Pro CPUs. The buyers looking for small Threadripper chips are likely in it for the memory bandwidth and PCIe, so it makes more sense for the Pros. It's like the current TR Pro 12-core SKU.
|
# ? Aug 19, 2021 15:32 |
|
If they can make 8-core F-sku EPYCs, they can make 16-core TRs. This is the world that per-core licensing has pushed us into.
|
# ? Aug 19, 2021 19:04 |
|
BurritoJustice posted:They might make lower core count TR Pro CPUs. The buyers looking for small threadripper chips are likely in it for the memory bandwidth and PCIe so it makes more sense for the Pros. It's like the current TR Pro 12 core SKU The idea I have behind a 16-core TRX40 one would be CCXes with half the cores enabled but full cache available, i.e. twice the cache of the 5950X*, plus quad channel memory bandwidth, plus more CPU lanes. (*: I guess that one will become moot with the stacked cache Zen3.)
|
# ? Aug 19, 2021 20:24 |
|
|
After ignoring it for 8 months, I've started messing with the curve optimizer with my 5600X and am pretty happy with the results. I tried to take it slow and careful by going in increments of -5, one day at a time, since I heard that instability can commonly occur in low-workload or idle situations that can only be exposed through normal use. For me, instability happened immediately on startup once I upped the offset to -25. Many of my desktop icons wouldn't load, and there was some unresponsiveness. I tried to restart the system and the start menu wasn't registering my clicks. So yeah, I force restarted, set the offset back to -20, and it's been stable for the couple weeks since then. Temps have been better during lightly threaded workloads, and I'm hitting higher boost frequencies (+150MHz or so) during all-core workloads. This is at the default power limit (PPT maxing out at around 75W). I guess it's possible to go further with the cores Ryzen Master marks as your best, but I'm just sticking with an all-core offset for now. This seems like something that's probably worth experimenting with for most Zen 3 owners. It's a fairly low-risk way of improving thermals and eking out some extra performance since this is really just an undervolt. (if anything, doing this is better for your CPU's health, no?)
|
# ? Aug 20, 2021 00:20 |
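The stepwise approach in that post (walk the all-core offset down in -5 steps, back off at the first instability) can be sketched as a small search loop. `is_stable` here is a stand-in for "a day of normal use at that offset", not a real API:

```python
# Hypothetical sketch of the curve-optimizer search described above.
def find_offset(is_stable, step=-5, limit=-30):
    """Walk the all-core offset down by `step` until `is_stable` fails
    or `limit` is reached; return the last known-good offset."""
    offset = 0
    while offset > limit:
        candidate = offset + step
        if not is_stable(candidate):
            return offset  # back off to the last stable value
        offset = candidate
    return offset

# Mimicking the post: everything past -25 is unstable, so we land on -20.
print(find_offset(lambda off: off > -25))  # -20
```

The real-world version of `is_stable` is the slow part; as the post notes, idle and light-load instability can take days of normal use to surface, which is why stepping one increment per day is the sensible schedule.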