kliras
Mar 27, 2021

Cygni posted:

There are some bizarre outliers and conflicting results out there in some of the reviews (what the hell is going on in the Far Cry games), but this one is pretty interesting:



That’s an insane uplift over the 5800X. SimFolks take note.
read somewhere that ms flight sim is pretty bad for ryzen, guess the cache makes up for that in a pretty nuts way?

that being said, dx12 is available as a test branch, so maybe something will change later

CoolCab
Apr 17, 2005

glem
i've asked this before but - why do chip manufacturers want to force obsolescence with their motherboards? i don't understand their incentive here. do they get paid a significant idk chipset premium or something? my intuitive assumption would be that it's in the chip makers' interest to make people buy lots of chips and not many motherboards, and the motherboard makers have the opposite incentive, but that's the exact opposite of what is happening.

i doubt enormously that AM4 is going to be repeated (although i was right about them eventually finishing out the stack! lol) but i want to understand why.

Klyith
Aug 3, 2007

GBS Pledge Week

CoolCab posted:

i've asked this before but - why do chip manufacturers want to force obsolescence with their motherboards? i don't understand their incentive here. do they get paid a significant idk chipset premium or something?

Yes. When AMD was in the dumpster Intel was raking in a surprising amount of money just from the chipset chips. They're easy to make and you get steady profit even from people buying the cheapest CPUs.


kliras posted:

benchmark results like that are mostly for peak load. ryzen's been pretty bad at managing p-states on my end, so i had to run it very high all the time, which is more expensive than intel automatically managing the clock and load of the cpu

assuming you're talking about the latency from switching out of sleep, manually forcing poo poo like this is the dumbest thing. yes it has a measurable impact but it is so goddamn trivial.

New Zealand can eat me
Aug 29, 2008

:matters:


Hughmoris posted:

What's the day to day reality of such a high power draw? Will my room become noticeably warmer, or my electricity bill noticeably higher?

Or would most people not be able to tell a difference in power draws if it was a blind test?

You won't notice the hit on your bill from the rig alone, even if it was fully loaded 24/7 it'd only be a few dollars a month at most residential rates.

But! On a hot summer day when the AC is fighting to keep a modest sized bedroom cool? That + a loaded GPU is more than enough to counteract any effect of the AC, and make the room quite uncomfortable if the door is closed.

I'm no HVAC guy, but even a conservative 400W of heat is ~1,350 BTU/hr. The internet says you need ~20 BTU/hr of cooling for every square foot of space, so a 10'x10' bedroom requires 2,000 BTU/hr. That extra heat is essentially making the room ~3ft bigger in both directions! It's much harder to notice in bigger areas, but if you're limited on space it will get toasty quick
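For anyone who wants to run that napkin math themselves, here's a quick Python version. It uses the same ~20 BTU/hr-per-square-foot rule of thumb from the post; the only added number is the standard watts-to-BTU/hr conversion factor (1 W ≈ 3.412 BTU/hr):

```python
# Napkin math from the post: how much "extra room" does a loaded PC add?
# Uses the ~20 BTU/hr per square foot AC-sizing rule of thumb cited above.
WATT_TO_BTU_HR = 3.412           # 1 W of continuous heat ~= 3.412 BTU/hr

pc_watts = 400                   # conservative load figure from the post
btu_per_sqft = 20                # internet rule of thumb for AC sizing
room_side_ft = 10                # 10' x 10' bedroom

pc_btu_hr = pc_watts * WATT_TO_BTU_HR        # ~1,365 BTU/hr of waste heat
extra_sqft = pc_btu_hr / btu_per_sqft        # ~68 sq ft of "phantom" floor space
effective_side = (room_side_ft**2 + extra_sqft) ** 0.5

print(f"PC heat: {pc_btu_hr:.0f} BTU/hr, equivalent to {extra_sqft:.0f} extra sq ft")
print(f"The {room_side_ft}'x{room_side_ft}' room cools like a "
      f"{effective_side:.1f}'x{effective_side:.1f}' one")  # ~13' x 13'
```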

Cygni
Nov 12, 2005

raring to post

CoolCab posted:

i've asked this before but - why do chip manufacturers want to force obsolescence with their motherboards? i don't understand their incentive here. do they get paid a significant idk chipset premium or something? my intuitive assumption would be that it's in the chip makers' interest to make people buy lots of chips and not many motherboards, and the motherboard makers have the opposite incentive, but that's the exact opposite of what is happening.

i doubt enormously that AM4 is going to be repeated (although i was right about them eventually finishing out the stack! lol) but i want to understand why.

Some of the reasons normally given are:
  • They make money on chipset sales.
  • They spend money and resources recertifying and changing new stuff to work with/work around old chipsets that could be spent making new products instead.
  • Consumers get confused (and they lose brand value) when they buy an ancient board with the same socket as the newest CPU and it's incompatible for whatever reason, and vice versa.
  • Consumers get confused (and they lose brand value) when they stick a brand new CPU in an ancient poo poo tier motherboard that technically supports it, and the platform doesn't support the latest features or performance they thought they would get.
  • They are trapped into physical/electrical constraints with pins or physical design of the old sockets.

Although a ton of consumers did benefit from the longevity of AM4, it wasn't always smooth sailing. There are only a handful of boards on the market that actually support every retail CPU in the socket, and basically every problem above was experienced by AMD at one point or another, which is part of the reason that AMD attempted to segment AM4 multiple times.

Cygni fucked around with this message at 19:17 on Apr 14, 2022

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
I hope these results spur Intel to do a consumer version of Sapphire Rapids' HBM-on-package. Entire games in L4 cache :unsmigghh:

redeyes posted:

Looks like Intel wins most stuff except a few games. With 3x more power usage.

Don't forget spending twice as much for the CPU, mobo and DDR5.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
My whole setup pulls like 700W at the socket. In spring and fall, when it's just mild outside (i.e. heat can escape the apartment), I need to leave the door of the living room open so that it doesn't heat up too much; otherwise the thermostat doesn't trigger and have the boiler generate heat for the rest of the drat apartment.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

ConanTheLibrarian posted:

Don't forget spending twice as much for the CPU, mobo and DDR5.

How do you figure that? Do you mean just for gaming, or what? I'm building a higher end Intel DAW system now, and the CPU does cost more than its nearest AMD competition because I'm going with the high end 12900K part, which was $599 when I bought it. Until recently that was inverted: the most comparable AMD part cost $799 before the competition forced a price drop that makes it more attractive at $535 or so. That is a difference, but it's not a huge one.

I'm using DDR4 RAM because I haven't seen a lot of information that suggests it's a terrible idea, with the things I do in DAWs (using lots of DSP, and virtual instruments that are not sample-based but rather make sounds via synthesis) seemingly performing identically between the two types. DDR5 does somewhat better with sampled VSTIs when going for absolute maximum polyphony, which makes sense I think, but I don't rely heavily on those at all, and it seems like it's pretty neck and neck at this time in consumer applications and games (with some games actually performing better with DDR4, since not having to run in Gear 2/4 results in lower latency). Of course DDR5 will be out-and-out better in time, but right now it is not so compelling, while certainly costing a great deal more.

As far as the motherboard goes, the series I am getting has a comparable part in the same series from the same maker aimed at Ryzen 5 compatibility; it is about $50 less and still costs $223. It also has half the M.2 NVMe slots, which would mean less storage for me, as I will be populating 3/4 of the slots on my Z690 board, among other things.

I agree that the overall platform cost is higher, but double is a stretch, at least in this segment, and there are some feature differences that matter, at least to me. Maybe I am not aware of the conditions under which that statement applies. I did think pretty hard about making a 5950X-based system, but the single threaded performance increase is likely to help what I am doing more, personally. I like it when AMD is doing well; even if I haven't built one since the Barton days, it's definitely a good thing for them to be standing up and providing fierce competition as they are now. It was a close call too - the 5950X is a really nice processor as well, and if I were doing more things that rely exclusively on excellent multi-threaded performance, it would have won me over this go around.

Agreed fucked around with this message at 21:48 on Apr 14, 2022

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

kliras posted:

benchmark results like that are mostly for peak load. ryzen's been pretty bad at managing p-states on my end, so i had to run it very high all the time, which is more expensive than intel automatically managing the clock and load of the cpu

computers turning off because the psu can't provide enough power aside, cpu coolers can only dissipate so many watts, so there will be some fun cut-off points where certain coolers might not cut the mustard. i'm not a huge heatsink benchmark person, so i don't know what the listed vs practical limits are, but the good old hyper 212 evo has a listed 150w cap for instance - even though it's a bit more complicated. time to get familiar with fan curves

cpu cooling isn't an abrupt cut-off thing, you just won't be able to sustain boost once the cpu reaches the throttle temp. There is notionally nothing stopping you from putting an overclocked 5950X under a low-profile 35W cooler; it's not gonna light on fire, it will just throttle to whatever the 5950X can do with that level of dissipation.

to make those numbers extra-fun though, they really are just colloquial rules-of-thumb: CPUs are getting harder and harder to cool due to thermal density, and this is actually getting worse as chips shrink. So a "150W cooler" might only be effective for a 95W processor (made up numbers) on 7nm, running as hot as it would with a 150W cpu on 14nm...
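To make the throttling point concrete, here's a toy model in Python. Like the post's 95W/150W example, the wattages and the density derating factor are made-up illustration numbers, not measurements:

```python
# Toy model: sustained package power is the lesser of what the CPU wants to
# draw and what the cooler can actually move; thermal density on a denser
# node effectively derates the cooler's paper rating. Numbers are made up.

def sustained_watts(cpu_power_limit: float, cooler_rated: float,
                    density_derate: float) -> float:
    """Long-term power the chip settles at once it hits the throttle temp."""
    effective_capacity = cooler_rated * density_derate
    return min(cpu_power_limit, effective_capacity)

# A "150W" cooler handling a diffuse 14nm die vs. a dense 7nm die
# (142W is AM4's stock PPT for 105W-TDP parts):
print(sustained_watts(142, cooler_rated=150, density_derate=1.00))  # 142.0 W
print(sustained_watts(142, cooler_rated=150, density_derate=0.63))  # ~94.5 W
```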

Dr. Video Games 0031
Jul 17, 2004

Cygni posted:

There are some bizarre outliers and conflicting results out there in some of the reviews (what the hell is going on in the Far Cry games), but this one is pretty interesting:



That’s an insane uplift over the 5800X. SimFolks take note.

I cannot figure out what LTT's test setup was. They just completely gloss over that for the sake of having snappy video pacing, which would've been fine if they had put the test setup in the video description or something instead, but it's just nowhere to be found. That's frustrating, because many of their results diverge from everyone else's.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Cygni posted:

There are some bizarre outliers and conflicting results out there in some of the reviews (what the hell is going on in the Far Cry games), but this one is pretty interesting:

dunia is super broken; far cry 5 was the game where a 3.2 GHz 2C4T pentium was beating a 5.2 GHz 6C6T 9600K/8600K in GN's tests.

it's a popular game and if you want to play it you have to work with it, and it's interesting in the way it's a relatively cpu-heavy benchmark, but when it does bizarre stuff it's probably best to take it with a massive grain of salt and, you know, not put out a series of videos harping on how 6C6T is completely dead based on a game where they're beaten by a cpu with 1/2 the ST and 1/4 the MT performance.

hobbesmaster
Jan 28, 2008

Dr. Video Games 0031 posted:

I cannot figure out what LTT's test setup was. They just completely gloss over that for the sake of having snappy video pacing, which would've been fine if they had put the test setup in the video description or something instead, but it's just nowhere to be found. That's frustrating, because many of their results diverge from everyone else's.

This is why tech Jesus is the only reviewer to pay attention to.

MikeC
Jul 19, 2004
BITCH ASS NARC

CaptainSarcastic posted:

:same:

Still waiting to see pricing and availability settle out before completely making up my mind.

drat it, Tech Jesus and HUB have convinced me to waste my money.

hobbesmaster
Jan 28, 2008

Amazon (ships from and sold by) has the 5900x at $384 right now - less than 5800x3d retail will be.

edit: microcenter has it down at $369

hobbesmaster fucked around with this message at 15:58 on Apr 15, 2022

v1ld
Apr 16, 2012

Jeez, per core that's close to the $175 I paid for a 3600 pre-pandemic.

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
Yeah that is a genuinely impressive price. For anyone who needs a PC beyond just browsing and gaming, that's a great deal.

hobbesmaster
Jan 28, 2008

TSAIK (MSI's "official overclocker", like evga's kingpin) uploaded a cpu-z validation of a 5800x3d above 5ghz.

https://wccftech.com/amd-ryzen-7-5800x3d-breaks-the-5-ghz-barrier-overclocked-to-5-15-ghz-on-msi-meg-x570-godlike/

45.5 multiplier, 113MHz bus clock and 1.2V.

Now, most people would immediately ask "yes but is the cpu stable?" but I'm jumping straight to "is the keyboard stable?"
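For reference, those validation numbers multiply out to the headline clock:

```python
# Effective core clock = multiplier x base clock, per the CPU-Z validation
multiplier = 45.5
bclk_mhz = 113
print(f"{multiplier * bclk_mhz} MHz")  # 5141.5 MHz, i.e. the ~5.15 GHz headline
```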

AARP LARPer
Feb 19, 2005

THE DARK SIDE OF SCIENCE BREEDS A WEAPON OF WAR

Buglord

hobbesmaster posted:

Now, most people would immediately ask "yes but is the cpu stable?" but I'm jumping straight to "is the keyboard stable?"

lmao. it boots but touching the keyboard gives you a shock

hobbesmaster
Jan 28, 2008

USB spec says the clock (differential data lines) needs to be 48MHz +/- 0.25%. USB is a heavily abused spec, so devices' PLLs can often handle more, but 13% is slightly higher than 0.25%. PCIe is also 300ppm, which is 0.03%, so yes, I'm really wondering if the keyboard, mouse, and disk are stable at that bclk
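To put those tolerances side by side (assuming the stock 100MHz BCLK as the nominal reference, and that peripheral clocks scale with BCLK as the post implies):

```python
# How far a 113MHz BCLK overclock sits outside the interface specs above.
nominal_bclk = 100.0        # assumption: stock AM4 base clock
oc_bclk = 113.0             # from the CPU-Z validation
deviation = (oc_bclk - nominal_bclk) / nominal_bclk

usb_tolerance = 0.0025      # USB spec: 48MHz +/- 0.25%
pcie_tolerance = 300e-6     # PCIe spec: +/- 300 ppm = 0.03%

print(f"BCLK deviation: {deviation:.0%}")                        # 13%
print(f"vs USB tolerance:  {deviation / usb_tolerance:.0f}x")    # 52x over spec
print(f"vs PCIe tolerance: {deviation / pcie_tolerance:.0f}x")   # 433x over spec
```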

FuturePastNow
May 19, 2014


As is common for the super extreme overclocker stuff, that brand new very expensive motherboard has a PS/2 port for just this reason.

Klyith
Aug 3, 2007

GBS Pledge Week

hobbesmaster posted:

USB spec says the clock (differential data lines) needs to be 48MHz +/- 0.25%. USB is a heavily abused spec, so devices' PLLs can often handle more, but uh, 13% is slightly higher than 0.25%. PCIe is also 300ppm, which is 0.03%.

Spec might say one thing, but people have definitely run systems with BCLK overclocks in the +10% range and had them stable.

IIRC the thing that matters the most at that range is having very few devices on the bus. I had an Ivy Bridge non-K CPU, which I OC'd a little by running BCLK at I think 106. It was stable as long as I had exactly 1 PCIe card (gpu) and 1 PCI card (audio). Had to keep the mobo audio and various other stuff disabled.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy

FuturePastNow posted:

As is common for the super extreme overclocker stuff, that brand new very expensive motherboard has a PS/2 port for just this reason.

I would totally believe this, but then why do even bottom barrel budget boards also have PS/2 ports?

FuturePastNow
May 19, 2014


gradenko_2000 posted:

I would totally believe this, but then why do even bottom barrel budget boards also have PS/2 ports?

'Cause grandpa still needs his trackball from 1997

hobbesmaster
Jan 28, 2008

I’d say because it’s “free” with the super I/O chip but any connector is a surprising large hit to the BOM.

hobbesmaster
Jan 28, 2008

Klyith posted:

Spec might say one thing, but people have definitely run systems with BCLK overclocks in the +10% range and had them stable.

IIRC the thing that matters the most at that range is having very few devices on the bus. I had an Ivy Bridge non-K CPU, which I OC'd a little by running BCLK at I think 106. It was stable as long as I had exactly 1 PCIe card (gpu) and 1 PCI card (audio). Had to keep the mobo audio and various other stuff disabled.

I also remember from times long past that about 105 BCLK was the limit where things would absolutely be breaking, sometimes less. NVMe drives apparently refuse to work above 104.25 bclk.

As for the spec…
At work: ok good, nice solid 1.2V line for this ddr4 chip as per the manufacturer data sheet! Now to set up some script to run a very long test and ensure it never spikes above the never-exceed voltage of 1.5V…
At home: some guy on Reddit said that B-die can run 24/7 at 1.55V, so let's do that
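A minimal sketch of what that "work" approach looks like in practice. read_vdimm() is a hypothetical stand-in for whatever sensor interface your board actually exposes:

```python
# Long soak test: log the DRAM rail and flag any spike past the never-exceed
# voltage. read_vdimm() is hypothetical; wire it to your real sensor readout.
import time

NEVER_EXCEED_V = 1.50   # per the manufacturer data sheet mentioned above

def read_vdimm() -> float:
    raise NotImplementedError("replace with your board's sensor interface")

def soak_test(hours: float = 48, interval_s: float = 1.0) -> None:
    deadline = time.monotonic() + hours * 3600
    worst = 0.0
    while time.monotonic() < deadline:
        v = read_vdimm()
        worst = max(worst, v)
        if v > NEVER_EXCEED_V:
            print(f"FAIL: {v:.3f}V spiked past the {NEVER_EXCEED_V}V limit")
            return
        time.sleep(interval_s)
    print(f"PASS: worst observed {worst:.3f}V over {hours}h")
```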

Mr. Crow
May 22, 2008

Snap City mayor for life

hobbesmaster posted:

This is why tech Jesus is the only reviewer to pay attention to.

Nah, hardware unboxed is good too, probably better tbh for gamers; tech Jesus gets too technical sometimes, and HUB does more suites of games with different configurations

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

FuturePastNow posted:

'Cause grandpa still needs his trackball from 1997

My trackball is a wireless Logitech, thankyouverymuch.

New Zealand can eat me
Aug 29, 2008

:matters:


hobbesmaster posted:

TSAIK (MSI's "official overclocker", like evga's kingpin) uploaded a cpu-z validation of a 5800x3d above 5ghz.

https://wccftech.com/amd-ryzen-7-5800x3d-breaks-the-5-ghz-barrier-overclocked-to-5-15-ghz-on-msi-meg-x570-godlike/

45.5 multiplier, 113MHz bus clock and 1.2V.

Now, most people would immediately ask "yes but is the cpu stable?" but I'm jumping straight to "is the keyboard stable?"

I was wondering the same thing, but he's somehow booted with an nvme drive! Any other time I've seen "suicide runs" like this for Zen overclocking, they've had to boot from SATA drives because nvme is supposed to fall off past 105MHz or so

E: Would it be possible that these oc bioses are applying some sort of offset to the bus clock, to allow for the processor to go hog wild while everything else stays within the realm of reason?

New Zealand can eat me fucked around with this message at 00:57 on Apr 17, 2022

BurritoJustice
Oct 9, 2012

Klyith posted:

Spec might say one thing, but people have definitely run systems with BCLK overclocks in the +10% range and had them stable.

IIRC the thing that matters the most at that range is having very few devices on the bus. I had an Ivy Bridge non-K CPU, which I OC'd a little by running BCLK at I think 106. It was stable as long as I had exactly 1 PCIe card (gpu) and 1 PCI card (audio). Had to keep the mobo audio and various other stuff disabled.

Intel has basically all the IO buses decoupled from BCLK now, so you can go hog wild and only really hit the CPU. AMD hasn't yet.

kliras
Mar 27, 2021
https://www.youtube.com/watch?v=9XB3yo74dKU

e: video didn't embed the first time for some reason

kliras fucked around with this message at 13:40 on Apr 17, 2022

SwissArmyDruid
Feb 14, 2014

by sebmojo
In short, HUB's conclusion is the same as what everyone else has been saying: the KF only trades punches with the X3D, while costing twice as much, sucking down more power, and requiring expensive DDR5.

Maybe they should have just stayed home; this doesn't seem like any kind of a statement from Intel other than "look at us! we're still relevant! we promise!"

It also certainly doesn't make me feel good about any laptop purchases going forward. It makes me feel like Intel has forgotten how to do anything other than "MOAR FREQUENCY, MOAR VOLTAGE" to get perf, and while this doesn't translate directly to mobile parts, it *does* make me wonder if Intel aren't leaving something on the table by not having a single, unified, more efficient core design rather than the P/E split.

FuturePastNow
May 19, 2014


The DDR5 kit they used is $449 lol

Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?

FuturePastNow posted:

The DDR5 kit they used is $449 lol

Would DDR4 have made that big a difference, though?

PC LOAD LETTER
May 23, 2005
WTF?!

SwissArmyDruid posted:

It also certainly doesn't make me feel good about any laptop purchases going forward. It makes me feel like Intel has forgotten how to do anything other than "MOAR FREQUENCY, MOAR VOLTAGE" to get perf,

I think all the chip companies do this when they get stuck due to the long production and design cycles and their competition is beating them out. There's always some exec out there who gets the bright idea to pump the power/clocks to win on a few benches and claim some sort of marketing victory.

The classic examples were the P4s and the higher clocked Bulldozer chips for x86.

Given the way the process tech is starting to really run out of headroom, I kind of expect everyone's stuff to use more power in the future in general though.

There are hints of this already, given the way power is going up a lot for GPUs.

300W for a GPU at stock clocks used to be considered nuts not too long ago, but I think when the R7xxx and NV4xxx GPUs come out, that will either be the norm or perhaps even look low in comparison, given some of the rumors of how much power the top end NV4xxx GPUs are going to use (700W+ = WTFFF). That, and how AMD apparently is going to have some 170W stock TDP chips for AM5.

I dunno if default 200W+ TDP CPUs will ever become normal for everyday PCs (I'd actually expect them to stay around 90W or less for the CPU at stock), but I could see it being common for enthusiast and power users' PCs eventually over the next few years.

Once the process tech is tapped out, mo' power to pump clocks is the path of least resistance to more performance. Major redesigns matter but take too long.

Indiana_Krom
Jun 18, 2007
Net Slacker
There are diminishing returns and just plain limitations on how much heat can reliably be removed in any given form factor. Like a full tower case can handle a lot, but stuff 1000w into one without significant attention to cooling and there will be major thermal problems. Honestly even 500w is already stretching the limits of heat rejection from the average tower case.

And then you have to deal with thermal density. GPUs have it fairly easy because they have thousands of cores that split the load, so each individual core isn't dealing with as much heat and it is spread more evenly over the entire chip. CPUs, on the other hand, have fewer, much higher performance cores, so the heat and power are concentrated in those cores, which are very small areas on the die, and it is a huge challenge to get the heat away from those hot spots.
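A rough illustration of that density gap, with assumed (not measured) die and core areas:

```python
# Power density comparison with assumed areas: a GPU spreads its watts over a
# big die, a CPU concentrates most of its power in a few small cores.
gpu_watts, gpu_die_mm2 = 350, 600    # assumption: large monolithic gaming GPU
cpu_watts, core_area_mm2 = 150, 70   # assumption: most package power in ~70mm2 of cores

print(f"GPU: {gpu_watts / gpu_die_mm2:.2f} W/mm^2")    # ~0.58 W/mm^2
print(f"CPU: {cpu_watts / core_area_mm2:.2f} W/mm^2")  # ~2.14 W/mm^2
```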

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy

Indiana_Krom posted:

There are diminishing returns and just plain limitations on how much heat can reliably be removed in any given form factor. Like a full tower case can handle a lot, but stuff 1000w into one without significant attention to cooling and there will be major thermal problems. Honestly even 500w is already stretching the limits of heat rejection from the average tower case.

what would be a "better" form factor for dissipating 1000w of heat, if tower cases aren't really good for that sort of thing anymore?

like, ignoring practicality and extant practices, would we be better off shifting to... 1U/2U-style server form factors?

Kibner
Oct 21, 2008

Acguy Supremacy

gradenko_2000 posted:

what would be a "better" form factor for dissipating 1000w of heat, if tower cases aren't really good for that sort of thing anymore?

like, ignoring practicality and extant practices, would we be better off shifting to... 1U/2U-style server form factors?

Open-air test benches, basically.

https://www.youtube.com/watch?v=kMmTMr66Csc

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
Strapping a box fan to the side of your case has always been the pro move.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


Server form factors won't work at home because they trade noise for cooling efficiency. They use small, high-RPM fans which can get quite loud. No one will want that under their desk.

We are probably due for a new form factor given the power draw and cooling needs of modern equipment. There are all sorts of ways to design a modern chassis to cool parts more effectively. Getting it adopted over the entrenched ATX form factor though? That’s the real trick

Klyith
Aug 3, 2007

GBS Pledge Week

SwissArmyDruid posted:

It also certainly doesn't make me feel good about any laptop purchases going forward. It makes me feel like Intel has forgotten how to do anything other than "MOAR FREQUENCY, MOAR VOLTAGE" to get perf, and while this doesn't translate directly to mobile parts, it *does* make me wonder if Intel aren't leaving something on the table by not having a single, unified, more efficient core design rather than the P/E split.

Nah dog the P/E split is all about laptops. Intel cares a lot about laptops, and they're not just competing against Ryzen. They're also worried about ARM.

Look at the Apple M1. It's spanking both Alder Lake and Ryzen in efficiency. But when you ask for something other than perf/watt it starts looking weak. Against real desktops the high-end M1s aren't as dominant, even in highly multi-core stuff like cinebench and video encoding. It can't shove more watts into each core, so it loses to x86 chips with fewer cores that can effectively use 150W.


So Intel is trying to fight two fronts at once. This is their first stab at the P/E thing so I wouldn't write it off yet. Until Intel unfucks their internal problems -- behind on process, shake-ups to design teams that I guess spent the entire 2010s partying -- it's too soon to judge. Don't forget that the entire Core architecture originally came from an "efficiency" project.


Rinkles posted:

Would DDR4 have made that big a difference, though?

In gaming, not a huge amount. About 2-3% across many games. (Though with some games DDR4 actually performs better!)

I think that the idea of both CPUs running the exact same 3200 kit would actually have benefited Intel overall, since that would knock 200MHz off the IF clock on Ryzen.
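Quick math on that, assuming the usual 1:1 FCLK:MCLK setting and a DDR4-3600 kit as the Ryzen baseline (the baseline kit speed is my assumption):

```python
# At 1:1, Infinity Fabric clock is half the DDR transfer rate.
fclk_3600 = 3600 // 2   # 1800 MHz with a DDR4-3600 kit
fclk_3200 = 3200 // 2   # 1600 MHz with a shared DDR4-3200 kit
print(f"{fclk_3600 - fclk_3200} MHz lost")  # 200 MHz
```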

Klyith fucked around with this message at 16:30 on Apr 17, 2022
