CFox
Nov 9, 2005
Mine was a reference model with a single fan so that thing ran loud and hot. It did play games well I'll give it that much.


PageMaster
Nov 4, 2009
Building my PC with a 3900XT CPU and DDR4 3600MHz memory. I remember there being a unique relationship between Infinity Fabric clock and memory clock for AMD processor performance, and that you should keep the ratio at 2:1 (so 1800MHz FCLK for 3600MHz MCLK). Is that necessary, or is there no real performance difference? BIOS says 'auto' sets FCLK to equal MCLK (which I don't think is possible since FCLK can't go that high, so I'm not sure what auto is doing, maybe downclocking the memory), but I can manually set 1800 for the Infinity Fabric frequency.

SwissArmyDruid
Feb 14, 2014

by sebmojo

PageMaster posted:

Building my PC with a 3900XT CPU and DDR4 3600MHz memory. I remember there being a unique relationship between Infinity Fabric clock and memory clock for AMD processor performance, and that you should keep the ratio at 2:1 (so 1800MHz FCLK for 3600MHz MCLK). Is that necessary, or is there no real performance difference? BIOS says 'auto' sets FCLK to equal MCLK (which I don't think is possible since FCLK can't go that high, so I'm not sure what auto is doing, maybe downclocking the memory), but I can manually set 1800 for the Infinity Fabric frequency.

DDR operates at, predictably, Double Data Rate, which means an EFFECTIVE data rate of 3600MHz, since it signals on the rise AND fall of the signal.

1800 FCLK matching 1:1 with a 1800 MCLK is the correct setting.
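For anyone who wants to sanity-check the arithmetic, here's a throwaway sketch (the `fclk_for` helper is a made-up name, just for illustration):

```python
# DDR transfers data on both the rising and falling clock edge, so a
# "DDR4-3600" kit has a real memory clock (MCLK) of 3600 / 2 = 1800 MHz.
# Ryzen 3000/5000 generally performs best with the Infinity Fabric
# clock (FCLK) matched 1:1 to that real MCLK.

def fclk_for(ddr_rating: int) -> int:
    """Return the 1:1 FCLK target in MHz for a DDR effective rating (MT/s)."""
    mclk = ddr_rating // 2  # real memory clock is half the effective rate
    return mclk             # 1:1 means FCLK == MCLK

print(fclk_for(3600))  # → 1800
```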

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
just a heads-up to anyone who cares that there are still no 5700G drivers on Linux.

absolute :wtf: I figured this was a gimme since Cezanne has been available in laptops for a year now...

Kazinsal
Dec 13, 2011


CFox posted:

My 290 came from the crypto mines, I used it for a few years, and then it went right back into the crypto mines again. Sold it for more than I bought it for originally even. Terrible GPU but best value ever.

I sold a Kickstarter era Star Citizen account to some dude on Reddit in like 2014 and bought a 290X with the proceeds. Ended up giving the 290X to a friend when I bought a 1070, then traded the 1070 and a few hundred bucks to a different friend for a 1080 Ti.

In the year 2026 I will finally buy an RTX 3080.

movax
Aug 30, 2008

Paul MaudDib posted:

just a heads-up to anyone who cares that there are still no 5700G drivers on Linux.

absolute :wtf: I figured this was a gimme since Cezanne has been available in laptops for a year now...

I got one to toss into a DeskMini for another project I have in mind, because apparently I just can’t bring myself to use a VM like a normal person in 2021. It’s going to be 99% headless and rather than deal with popping in / out a GPU, I’ll take whatever shows up on the 5700G. Forgot it got robbed of cache though, but… it’s also in-stock at MSRP.

(That project uses GovCloud / export-controlled Office 365 which is just a pain in the loving rear end to co-exist with non-GovCloud stuff and at this point it just needs its own loving computer and I’ll just RDP into it).

SwissArmyDruid
Feb 14, 2014

by sebmojo

Kazinsal posted:

I sold a Kickstarter era Star Citizen account to some dude on Reddit in like 2014 and bought a 290X with the proceeds. Ended up giving the 290X to a friend when I bought a 1070, then traded the 1070 and a few hundred bucks to a different friend for a 1080 Ti.

In the year 2026 I will finally buy an RTX 3080.

Oh poo poo, I still have one of those, I should flip my account.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Paul MaudDib posted:

just a heads-up to anyone who cares that there are still no 5700G drivers on Linux.

absolute :wtf: I figured this was a gimme since Cezanne has been available in laptops for a year now...

A bit of poking around shows people running them (blogs and phoronix test suite runs) under kernel 5.13+, though some note that you may need a git version of linux-firmware if you're not on a rolling release distro.
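If you want to check a given box quickly, something like this works (a hedged sketch; the 5.13 cutoff comes from the reports above, and `kernel_at_least` is a made-up helper):

```python
# Parse a Linux kernel release string and check it against the ~5.13
# cutoff where Cezanne (5700G) graphics support reportedly landed.
import platform

def kernel_at_least(release: str, minimum=(5, 13)) -> bool:
    """True if a release like '5.13.0-30-generic' meets the minimum."""
    major, minor = release.split(".")[:2]
    minor = minor.split("-")[0]  # strip '-30-generic' style suffixes
    return (int(major), int(minor)) >= minimum

print(kernel_at_least(platform.release()))
```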

Craptacular!
Jul 9, 2001

Fuck the DH

Paul MaudDib posted:

just a heads-up to anyone who cares that there are still no 5700G drivers on Linux.

absolute :wtf: I figured this was a gimme since Cezanne has been available in laptops for a year now...

The 2200G was a tough nut to crack in its day, so I'm not surprised.

SCheeseman
Apr 23, 2003

lol if you aren't using a rolling distrSEGMENTATION FAULT

owls or something
Jul 7, 2003

PageMaster posted:

Building my PC with a 3900XT CPU and DDR4 3600MHz memory. I remember there being a unique relationship between Infinity Fabric clock and memory clock for AMD processor performance, and that you should keep the ratio at 2:1 (so 1800MHz FCLK for 3600MHz MCLK). Is that necessary, or is there no real performance difference? BIOS says 'auto' sets FCLK to equal MCLK (which I don't think is possible since FCLK can't go that high, so I'm not sure what auto is doing, maybe downclocking the memory), but I can manually set 1800 for the Infinity Fabric frequency.

Auto is optimal at 1:1, 3600mclk is actually 1800 because DDR. Make sure you set your memory to XMP so it isn't running at default speeds.

Truga
May 4, 2014
Lipstick Apathy

SwissArmyDruid posted:

Oh poo poo, I still have one of those, I should flip my account.

i don't think you can still do this, iirc the last time you could actually get $$$ to make it worth the effort was like mid 2014 before the cracks got super obvious

Spatial
Nov 15, 2007

Some questions on power usage. I've got a 3900X and I'm looking at Ryzen Master seeing 20-35W core power at 1.15V average, sitting at the desktop doing nothing. 1% CPU usage. Peak frequency hanging around 800MHz.

This is using the basic Windows balanced power plan. If I switch the power plan to "power saver", power usage instantly drops to 5-7W core at 0.45V. Nothing else changed, it's still sitting there doing nothing, just doing nothing a lot more efficiently. However this mode limits the maximum frequency to 50% which is a real drag when speed is actually required.

I'd like it always to ramp down when idle but also allow maximum performance. However Windows is dumb and if I configure the maximum CPU performance to 100% in the power saver plan, it simply ignores it and still limits the frequency to half. It seems like none of the CPU settings in Windows power plans do anything at all.

Any idea how to get the best of both worlds here? I tried the Ryzen power plans but there's no difference.
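One thing worth trying from an admin command prompt (a hedged sketch; `SCHEME_CURRENT`, `SUB_PROCESSOR`, and `PROCTHROTTLEMIN`/`PROCTHROTTLEMAX` are standard powercfg aliases, but whether the plan honors them any better than the GUI sliders is exactly what's in question here):

```shell
:: Set minimum processor state low (so it can ramp down at idle) and
:: maximum to 100% on the currently active plan, then re-apply it.
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 5
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 100
powercfg /setactive SCHEME_CURRENT
```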

PageMaster
Nov 4, 2009

SwissArmyDruid posted:

DDR operates at, predictably, Double Data Rate, which means an EFFECTIVE data rate of 3600MHz, since it signals on the rise AND fall of the signal.

1800 FCLK matching 1:1 with a 1800 MCLK is the correct setting.


owls or something posted:

Auto is optimal at 1:1, 3600mclk is actually 1800 because DDR. Make sure you set your memory to XMP so it isn't running at default speeds.

Thanks. With everything on auto, memory in Windows shows 1066/2133 MHz so I'm not sure why it's not running at 1800/3600 by default. I did see that the timings in Windows are much different (15-15-15) vs what the memory specs are (18-22-22, which is also shown in the BIOS XMP profile), so I wonder if the automatic settings are changing everything to optimize it in some way. Either way, it sounds like manually setting the FCLK and memory frequency to 1800 is the way to go?

ARRGHPLEASENONONONO
Feb 5, 2001

The 5900X I have is running at 90C on heavy load and I'm starting to think I maybe did something wrong when I installed a Noctua D15S. My previous build was with a 4790K, also with a Noctua... something, and I never saw CPU temps spiking this much or going that high. Is this just normal for this gen of Ryzen CPUs or did I gently caress something up?

NoDamage
Dec 2, 2000

PageMaster posted:

Thanks. With everything on auto, memory in Windows shows 1066/2133 MHz so I'm not sure why it's not running at 1800/3600 by default. I did see that the timings in Windows are much different (15-15-15) vs what the memory specs are (18-22-22, which is also shown in the BIOS XMP profile), so I wonder if the automatic settings are changing everything to optimize it in some way. Either way, it sounds like manually setting the FCLK and memory frequency to 1800 is the way to go?
You need to enable XMP to get the RAM to run at the correct speed, it's turned off by default.

ARRGHPLEASENONONONO posted:

The 5900X I have is running at 90C on heavy load and I'm starting to think I maybe did something wrong when I installed a Noctua D15S. My previous build was with a 4790K, also with a Noctua... something, and I never saw CPU temps spiking this much or going that high. Is this just normal for this gen of Ryzen CPUs or did I gently caress something up?
Do you have PBO enabled? Those temps are kind of normal for PBO but if you're running stock then it does seem a bit high.

Edit: For reference my 5950X with a Noctua C14S maxes around 71C with PBO off but 88C with PBO on. Ambient is around 26C.

NoDamage fucked around with this message at 18:33 on Aug 18, 2021

kliras
Mar 27, 2021

ARRGHPLEASENONONONO posted:

The 5900X I have is running at 90C on heavy load and I'm starting to think I maybe did something wrong when I installed a Noctua D15S. My previous build was with a 4790K, also with a Noctua... something, and I never saw CPU temps spiking this much or going that high. Is this just normal for this gen of Ryzen CPUs or did I gently caress something up?
I would try removing the cooler and see how much of the CPU is covered. An X pattern is much easier to get right than a pea dot.

Also make sure the fans are blowing the right way, at the right speed at the right temp/idle/load, etc.

ARRGHPLEASENONONONO
Feb 5, 2001

NoDamage posted:

You need to enable XMP to get the RAM to run at the correct speed, it's turned off by default.

Do you have PBO enabled? Those temps are kind of normal for PBO but if you're running stock then it does seem a bit high.

Edit: For reference my 5950X with a Noctua C14S maxes around 71C with PBO off but 88C with PBO on. Ambient is around 26C.

PBO on Auto, so probably enabled

kliras posted:

I would try removing the cooler and see how much of the CPU is covered. An X pattern is much easier to get right than a pea dot.

Also make sure the fans are blowing the right way, at the right speed at the right temp/idle/load, etc.

I actually had to remove the Noctua and put it back on, as I stupidly mounted it upside down (so no room for the first PCIe slot). I redid a pea blob similar to the first time, which got very nice circular coverage across the entire CPU, so hopefully it's not that.

Also made sure all fans are going front to back including the noctua one

PageMaster
Nov 4, 2009

NoDamage posted:

You need to enable XMP to get the RAM to run at the correct speed, it's turned off by default.
Xmp is enabled. The xmp profile title in the bios actually lists the correct timings and mclock as well

PageMaster fucked around with this message at 18:54 on Aug 18, 2021

owls or something
Jul 7, 2003

PageMaster posted:

Thanks. With everything on auto, memory in Windows shows 1066/2133 MHz so I'm not sure why it's not running at 1800/3600 by default. I did see that the timings in Windows are much different (15-15-15) vs what the memory specs are (18-22-22, which is also shown in the BIOS XMP profile), so I wonder if the automatic settings are changing everything to optimize it in some way. Either way, it sounds like manually setting the FCLK and memory frequency to 1800 is the way to go?

Enable XMP, leave FCLK/MCLK on Auto (1:1) and you'll have your 1800/3600 and your timings set to auto should also set correctly from switching XMP on.

PageMaster
Nov 4, 2009

owls or something posted:

Enable XMP, leave FCLK/MCLK on Auto (1:1) and you'll have your 1800/3600 and your timings set to auto should also set correctly from switching XMP on.

Xmp is enabled with mclk and fclk set to auto (1:1), but memory frequencies only showing 2133mhz.

owls or something
Jul 7, 2003

PageMaster posted:

Xmp is enabled. The xmp profile title in the bios actually lists the correct timings and mclock as well



You might need to pick a profile other than profile 1, if there is one. If not, maybe reset the CMOS, go into the BIOS, change only XMP to on and touch nothing else. Save & reboot and see what's being reported in Windows.

LRADIKAL
Jun 10, 2001

Fun Shoe

ARRGHPLEASENONONONO posted:

The 5900X I have is running at 90C on heavy load and I'm starting to think I maybe did something wrong when I installed a Noctua D15S. My previous build was with a 4790K, also with a Noctua... something, and I never saw CPU temps spiking this much or going that high. Is this just normal for this gen of Ryzen CPUs or did I gently caress something up?

What is "heavy load"?

Truga
May 4, 2014
Lipstick Apathy
3600 might not actually work even if you have a 3600 kit, because overclocking RAM and memory controllers isn't an exact science. my brother's pc doesn't run at 3600 with xmp so he just runs at 3200 now

ARRGHPLEASENONONONO
Feb 5, 2001

LRADIKAL posted:

What is "heavy load"?

My bad I'm probably overstating. No benchmarking, mining or anything like that, I was just playing Humankind and checked afterwards.

PageMaster
Nov 4, 2009

owls or something posted:

You might need to pick the other profile than profile 1 if there is one. If not, maybe reset the cmos, go into bios and only change XMP to on and touch nothing else. Save & reboot and see what is being reported in Windows.

I finally just took off the cooler and moved the two memory sticks from the mobo recommended slots and put them in the other two; xmp works fine now :shrug:

Fats
Oct 14, 2006

What I cannot create, I do not understand
Fun Shoe

ARRGHPLEASENONONONO posted:

My bad I'm probably overstating. No benchmarking, mining or anything like that, I was just playing Humankind and checked afterwards.

I played Humankind for an hour just now, with the same Noctua cooler you have on a 5950X, and max I saw was 74C. PBO would definitely explain the high temps if it's on.

LRADIKAL
Jun 10, 2001

Fun Shoe

ARRGHPLEASENONONONO posted:

My bad I'm probably overstating. No benchmarking, mining or anything like that, I was just playing Humankind and checked afterwards.

If that's a 3D game it could also be your graphics card heating up the case in concert with the CPU load.

NoDamage
Dec 2, 2000

ARRGHPLEASENONONONO posted:

PBO on Auto, so probably enabled
Auto means disabled on most boards IIRC so 90C does seem high. What GPU do you have and what do your GPU temps look like?

Spatial
Nov 15, 2007

Latest BIOS cures this problem and now it idles in the low power state. Boosts higher too. Board is an X470 MSI Gaming Plus Max for anyone with the same problem

Malloc Voidstar
May 7, 2007

Fuck the cowboys. Unf. Fuck em hard.
https://twitter.com/HansDeVriesNL/status/1427611644717305863
i'm the 124 watt io die

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

the thing is a monster though, gigabyte leaked the specs and the io die alone is 400mm2.

I would assume there’s some extra space to handle the PHYs for the additional CCDs (that part won’t scale very well), and maybe they will eventually do the cache-on-IO-die thing.

Arzachel
May 12, 2012

Paul MaudDib posted:

the thing is a monster though, gigabyte leaked the specs and the io die alone is 400mm2.

I would assume there’s some extra space to handle the PHYs for the additional CCDs (that part won’t scale very well), and maybe they will eventually do the cache-on-IO-die thing.

Isn't the current server io die ~400mm2 already?

Bofast
Feb 21, 2011

Grimey Drawer
An I/O die to die for :D

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Well, the HEDT CPUs are coming, too. Let's see how expensive they'll be. Probably O_o levels. Too bad the rumored 16-core TR isn't a thing.

ARRGHPLEASENONONONO
Feb 5, 2001

Fats posted:

I played Humankind for an hour just now, with the same Noctua cooler you have on a 5950X, and max I saw was 74C. PBO would definitely explain the high temps if it's on.

That was it. Averaging mid 70s now after turning it from auto to off.

Wasn't the GPU; that was running at about 75 as well. I throttled it back since my previous card (a 2080) decided to commit suicide while I was playing CK3, so I prefer to be extra cautious.

BurritoJustice
Oct 9, 2012

Combat Pretzel posted:

Well, the HEDT CPUs are coming, too. Let's see how expensive they'll be. Probably O_o levels. Too bad the rumored 16-core TR isn't a thing.

They might make lower core count TR Pro CPUs. The buyers looking for small threadripper chips are likely in it for the memory bandwidth and PCIe so it makes more sense for the Pros. It's like the current TR Pro 12 core SKU

SwissArmyDruid
Feb 14, 2014

by sebmojo
If they can make 8-core F-sku EPYCs, they can make 16-core TRs.

This is the world that per-core licensing has pushed us into.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

BurritoJustice posted:

They might make lower core count TR Pro CPUs. The buyers looking for small threadripper chips are likely in it for the memory bandwidth and PCIe so it makes more sense for the Pros. It's like the current TR Pro 12 core SKU
Yeah, but WRX80 boards :stare: :10bux:

The idea I have behind a 16-core TRX40 one would be CCXes with half the cores enabled but full cache available, i.e. twice the cache of the 5950X*, plus quad channel memory bandwidth, plus more CPU lanes.

(*: I guess that one will become moot with the stacked cache Zen3.)


Dr. Video Games 0031
Jul 17, 2004

After ignoring it for 8 months, I've started messing with the curve optimizer with my 5600X and am pretty happy with the results. I tried to take it slow and careful by going in increments of -5, one day at a time since I heard that instability can commonly occur in low-workload or idle situations that can only be exposed through normal use. For me, instability happened immediately on startup once I upped the offset to -25. Many of my desktop icons wouldn't load, and there was some unresponsiveness. I tried to restart the system and the start menu wasn't registering my clicks. So yeah, I force restarted, set the offset back to -20, and it's been stable for the couple weeks since then. Temps have been better during lightly threaded workloads, and i'm hitting higher boost frequencies (+150MHz or so) during all-core workloads. This is at the default power limit (PPT maxing out at around 75W). I guess it's possible to go further with the cores Ryzen Master marks as your best, but I'm just sticking with an all-core offset for now.

This seems like something that's probably worth experimenting with for most Zen 3 owners. It's a fairly low-risk way of improving thermals and eking out some extra performance since this is really just an undervolt. (if anything, doing this is better for your CPU's health, no?)
