DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Quaint Quail Quilt posted:

There are rumors it'll help with VR.
I am also considering it. I don't see it at best buy yet, and amd site is maybe getting hammered?

Looks like the stock was rather low. I never saw it on BB, and it's already sold out on Amazon. If you have a Microcenter around you I'd check--my local one still has a couple.

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

drat, I just got a 5800X in November. I don't need the 5800X3D but I do play poorly optimized FPSes so it's tempting.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

DrDork posted:

I've always wondered why Intel didn't bother looking into that any further, since basically everyone was impressed with how much work that "L4" cache was doing for the 5775C. I'm sure it added a bit to the BOM and all, and I know at the time Intel was running so far ahead of AMD that they didn't have to bother with anything fancy, but still.

I think Crystal Well really was created specifically for Apple. Broadwell exists (in laptop form) without the L4; consumers just tend to think they’re tied together because the desktop chips all have it, and even then it wasn’t that big a product due to the early 14nm woes.

But Crystal Well specifically was basically never used in anything except Apple, the Skull Canyon NUC using Skylake-R, and the Broadwell-C socketed desktop chips (which have the push factor of some of the iMacs behind them too). Super low volume product in general.

I can only imagine the story as being “Apple goes to Intel and asks if they can make the iGPU not suck”, and since very few other companies are willing to splash out on expensive chips when the commodity x86 laptop market doesn’t really reward it in any way, Intel just threw some “infinity cache” on and called it a day.

It’s a fun little footnote though, and as a Weird Hardware enthusiast I couldn’t say no when a big batch of 5775Cs popped up for $90 on eBay a while back. I upgraded a Z97 mITX system I’ve got kicking around, and I’m gonna throw an RX 6600 in it and use it as a hackintosh for my new OLED. Sadly, there’s not a ton of m.2 support in boards that old, and this wasn’t a premium board even then.

Paul MaudDib fucked around with this message at 19:06 on Apr 21, 2022

hobbesmaster
Jan 28, 2008

hobbesmaster posted:

In case anyone wanted an update on my Liquid Freezer II problem: I just got an exchange through Amazon and the new one has the same issue. Again, good spread on the heat spreader, so I’m thinking Arctic must be having some production issues. They haven’t responded to my support case yet, so I’m guessing they weren’t kidding about the 9 days to hear a reply thing.

And I returned everything and got a 5800X3D and a Lian Li Galahad 240mm.

It appears that MSI forgot to turn off some of the AMD overclocking menu features related to PBO even in the 5800X3D BIOS. However, if I tried to turn PBO on it wouldn’t even POST.

edit: here’s a fun comparison. In cyberpunk 2077, RT medium, rt sun+local shadows on, DLSS balanced at 1440p
3600x: 60 avg fps, 13 low fps, 93 max fps
5800x3d: 65 avg fps, 46 low fps, 84.41 max fps

(The 3070 Ti’s OC settings are not the same, so I’m guessing that’s the max fps discrepancy. Also, I don’t have a RAM OC at the moment, but I doubt that matters.)

hobbesmaster fucked around with this message at 20:20 on Apr 21, 2022

kliras
Mar 27, 2021
by coincidence, i just happened to come across a bugged galahad aio. might wanna listen and see if you get the same rattling. they probably mounted in a bad way, but worth looking out for

hobbesmaster
Jan 28, 2008

kliras posted:

by coincidence, i just happened to come across a bugged galahad aio. might wanna listen and see if you get the same rattling. they probably mounted in a bad way, but worth looking out for

That’s probably air in the pump?

kliras
Mar 27, 2021

hobbesmaster posted:

That’s probably air in the pump?
probably, but lian li has had some weird qa issues. i'm still gonna get a case from them, but it just doesn't hurt to check quickly. it might also be the mounting orientation of the waterblock with the loop not going above the pump and stuff like that

Klyith
Aug 3, 2007

GBS Pledge Week
As someone who ran a watercooling setup back in the days when people used aquarium pumps, auto heater core radiators, and parts from the plumbing section of the hardware store, let me just say: water cooling is dumb. AIOs have only made it marginally less dumb.

The first modern tower heatpipe heatsinks came out less than a year after I bought all that poo poo. I kept using water for many years afterward out of sunk cost, feeling increasingly stupid the whole time. And that was all back when the CPU was the main heat source of a system! In 2022, an AIO CPU cooler is cooling a component that, in practice, for most of the people using them, produces between 1/2 and 1/5th as much heat as their GPU during normal operation.


Someone should make a GPU AIO cooler that's pervasively compatible; that would make me think that water is a great idea.

hobbesmaster
Jan 28, 2008

Klyith posted:

Someone should make a GPU AIO cooler that's pervasively compatible; that would make me think that water is a great idea.

Tell EVGA to make more ultra hybrid variants or whatever.

Dr. Video Games 0031
Jul 17, 2004

Klyith posted:

Someone should make a GPU AIO cooler that's pervasively compatible; that would make me think that water is a great idea.

Every board is too different. There have been attempts at creating kits that convert CPU AIOs into GPU AIOs (NZXT made one for Turing GPUs), but even those have issues.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Klyith posted:

Someone should make a GPU AIO cooler that's pervasively compatible; that would make me think that water is a great idea.

I had an HP Z420 workstation that I was given and later regifted to a friend in need. It came with basically a tower cooler, but the tower was a small radiator (120mm?) standing vertically on edge. So literally a tower cooler, but with water inside instead of a heatpipe working from evaporation. I’ve mostly not seen that approach before; Corsair had an AIO (Bulldog?) that sat parallel to the CPU (flat against the mobo) pushing air upwards, or front to back, or something like that, but the “air cooler design but with water as a working fluid” approach is relatively unexplored.

I think mostly because it has a lot of the disadvantages of both, of course. Heatpipes are a pretty good working fluid (better than liquid water in some respects), and you can’t change the fundamental physics of heat dissipation; it’s purely a game of airflow and surface area.

A 120mm AIO will move a lot of heat; an OC’d 295X2 could do 500W at 60C through a 120mm AIO. AIO performance on CPUs is limited primarily by the IHS and coldplate, not really the radiator. With bare dies, multiple chips/chiplets, and lots of surface area, GPUs can do really well with that. Tbh, if you want a “universal GPU cooler,” a 120mm AIO is really fine, and that fits basically any tower case designed in the last 10-15 years.

One interesting novelty playing on the “bulldog” design might be to have a 3 slot GPU that’s a blower card. Use the aio to move the heat and then have a fairly large quiet fan push the air out the back as much as possible. Super blower. But I guess that comes back to “is it going to be noticeably better than a heatpipe for a given amount of surface area”… but a 120mm standard depth radiator isn’t that much surface area either.

The real :okpos: idea is to just have compressed air as a working fluid. The reason whiskey rocks suck is that the energy of a phase change (melting ice) is much greater than the energy needed to raise water a degree; expanding air isn’t really a phase change, but it carries energy in the same way. Since it’s cooling itself as it expands, it will carry much more energy away than just raising room-temperature air by a degree. So have an external compressor, and the PC has “water block” fittings that channel the expanded, cool air through the PC and out the back.

Much like water pumps are just a different way to move heat (using water as a working fluid rather than evaporating water inside a heatpipe), this is just a different working fluid too, but compressed air is much easier to handle in a lot of respects than water: small leaks don’t really matter as long as the general direction of airflow is good, there’s much less corrosion and contamination (and those are known problems with known solutions for air tools), etc. Plumbing air around and having quick disconnects is an off-the-shelf solution, and unlike water you don’t have to drain the whole PC to service something.

It’s of course not much different from a blower, except the blower is your compressor, so the PC itself doesn’t need loud blowers to build up the pressure. The compressor takes the heat from compression, but compressors are easily purchased with a 100% duty cycle, while your PC gets the cooling from the expansion side of the equation. You’d just want to keep everything above the condensation point, of course.
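
To put rough numbers on how much air you’d actually have to move (back-of-envelope only; the 500W load and the 15C temperature deltas are just numbers I picked, and this only counts sensible heat, ignoring any expansion effects):

code:

# Airflow needed to carry a given heat load, from Q = m_dot * cp * dT.
# cp and density are standard values for dry air near room temperature.
CP_AIR = 1005.0   # J/(kg*K)
RHO_AIR = 1.2     # kg/m^3

def airflow_needed(watts, delta_t_c):
    """Return (m^3/s, CFM) of air needed to soak up `watts` with a `delta_t_c` rise."""
    mass_flow = watts / (CP_AIR * delta_t_c)   # kg/s
    vol_flow = mass_flow / RHO_AIR             # m^3/s
    return vol_flow, vol_flow * 2118.88        # 1 m^3/s is about 2118.88 CFM

print(airflow_needed(500, 15))   # 500W, air warms 15C on the way through: ~59 CFM
print(airflow_needed(500, 30))   # air arrives 15C below ambient, so a 30C rise: ~29 CFM

So air that shows up pre-chilled roughly halves the flow you need for the same exhaust temperature, which is the whole pitch.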

If you put a big compressor in your garage you could just plumb the compressed air over with copper pipe (standard stuff), and the only sound the PC would make would be the hiss of airflow moving through the heatsinks and out the back.

I’d love to see compressed air as a service fluid in data centers. I think that could help power density keep pushing upwards.

Klyith
Aug 3, 2007

GBS Pledge Week

Dr. Video Games 0031 posted:

Every board is too different. There have been attempts at creating kits that convert CPU AIOs into GPU AIOs (NZXT made one for Turing GPUs), but even those have issues.

Yeah, I know. :) After we get the sarcasm tag, we need a "playing dumb" tag too.


Though now I'm wondering. If either Nvidia or AMD sat down and defined a standard not just for their reference card, but for all cards of that generation, such that AIOs could be made... it'd probably be a decent competitive advantage. Give those specs to Asetek or whatever their name is, and you could have GPU AIOs by Corsair and NZXT on shelves for your new GPU by the time the OEM models are out.

Like, AMD did a Vega 64 that came with water cooling already and it sucked, I know, but Vega was poo poo to begin with. And the built-in one was a crappy 120 or 140mm rad to maximize case compatibility. Do it on a competitive GPU and as an aftermarket product for :homebrew: enthusiasts and it could work.

Now you've got a standard that people are invested in, just like my bad watercooling setup.


Paul MaudDib posted:

The real :okpos: idea is to just have compressed air as a working fluid.
...
If you put a big compressor in your garage you could just plumb the compressed air over with copper pipe

I don't think a normal air compressor does enough compression to really have that much cooling advantage from expansion. Like, 5-10 degrees under ambient isn't that much, and a narrow pipe carrying a decent volume of air won't be silent.

OTOH if you're going with poo poo in your garage you can do fun things with water. Like the guy who put down a bunch of pipe before his concrete got poured and used his garage floor as a radiator.

wargames
Mar 16, 2008

official yospos cat censor

Klyith posted:

Like, AMD did a Vega 64 that came with water cooling already and it sucked, I know, but Vega was poo poo to begin with. And the built-in one was a crappy 120 or 140mm rad to maximize case compatibility. Do it on a competitive GPU and as an aftermarket product for :homebrew: enthusiasts and it could work.


vega isn't poo poo, it just wasn't the best. been running a vega56 for a few years and it holds up well.

Dr. Video Games 0031
Jul 17, 2004

Klyith posted:

OTOH if you're going with poo poo in your garage you can do fun things with water. Like the guy who put down a bunch of pipe before his concrete got poured and used his garage floor as a radiator.

That will be a fun discovery when the next homeowner randomly digs it up.

hobbesmaster
Jan 28, 2008

I’m keeping an eye on the x3d while dialing in the memory OC and it’s just straight weird to see a constant 4450.13 straight across all cores.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Klyith posted:

Someone should make a GPU AIO cooler that's pervasively compatible; that would make me think that water is a great idea.

The NZXT one mentioned above actually was compatible across virtually every NVidia GPU for like 5 generations or something, and almost every AIB board was compatible. They changed the mounting holes for the 3xxx series, though, so no dice.

But yeah, I used it, and it worked real well. GPU dies are very large compared to CPU dies, with a much more even heat generation pattern, so they take to water cooling quite well.

The downside was the same as with any AIO, though: they only last so long before they inevitably leak enough fluid to have functional problems, and as long as we stick with the terribly outdated "upside-down" mounting for PCIe cards, you're putting the AIO pump in a pretty non-optimal spot.

Still, if it was an option again, I'd do it again. Waaaaay cheaper than doing a custom waterblock for a card, especially since you can't transplant that to any other card. The G10+AIO I used on 4 different cards.

SwissArmyDruid
Feb 14, 2014

by sebmojo

ConanTheLibrarian posted:

If that's the path forwards, just go with HBM-on-package like Sapphire Rapids. That way you could forego RAM entirely. This would allow for new motherboard form factors that ditch DIMM slots entirely to allow more room for the heat sinks of future 1000W+ GPUs.

my sibling in christ, have you forgotten the horrors of HBM GPUs? You wanna talk about how Intel is sucking down 300W+ with the 12900KS, throw some HBM on there and you're gonna need 220V service to your PC.

SwissArmyDruid fucked around with this message at 04:55 on Apr 22, 2022

New Zealand can eat me
Aug 29, 2008

:matters:


You can double that with just one Vega 64 :hellyeah:

If the air was cold enough the coil whine sounded like a 650W slot car, like there was something really moving around down there that would claim a finger

VorpalFish
Mar 22, 2007
reasonably awesometm

SwissArmyDruid posted:

my sibling in christ, have you forgotten the horrors of HBM GPUs? You wanna talk about how Intel is sucking down 300W+ with the 12900KS, throw some HBM on there and you're gonna need 220V service to your PC.

Uh, I believe one of the advantages of hbm is that it's actually pretty efficient. More efficient than ddr even.

Harder to cool because it's denser, but the power draw on those vega gpus was almost certainly not the fault of the hbm (and I've seen it argued that one of the reasons they even went hbm for those gpus was that they couldn't get gddr in the power envelope).

Edit: yeah it's more than 3x the efficiency, so you could triple your bandwidth and still be under ddr power consumption in absolute terms.
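
Rough math on that, using the ballpark pJ/bit figures that were floating around in the Fury era (roughly 3.5-4 pJ/bit for hbm vs ~12 pJ/bit for gddr5; illustrative numbers, not datasheet values):

code:

# Memory interface power is roughly bandwidth * energy-per-bit.
# The pJ/bit values below are rough marketing-era ballparks, not datasheet numbers.
PJ_PER_BIT = {"hbm": 3.6, "gddr5": 12.0}

def interface_power_watts(bandwidth_gb_s, pj_per_bit):
    bits_per_second = bandwidth_gb_s * 8e9
    return bits_per_second * pj_per_bit * 1e-12

for mem, pj in PJ_PER_BIT.items():
    print(mem, round(interface_power_watts(500, pj), 1), "W at 500 GB/s")
# hbm ~14.4 W, gddr5 ~48.0 W, so roughly 3x the bandwidth for the same power budget.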

VorpalFish fucked around with this message at 12:32 on Apr 22, 2022

kliras
Mar 27, 2021
what is more fluid than a ryzen bios

https://twitter.com/VideoCardz/status/1517499962023632897

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Too bad the 5800X3D is sold out everywhere--and is now up on eBay for $600+.

I assumed the stock levels were going to be low, given that this is pretty obviously a taste-test chip, so it'll be interesting to see how long it takes for another batch to show up.

Actuarial Fables
Jul 29, 2014

Taco Defender
There's still 20 available at the Madison Heights Micro Center ~

FuturePastNow
May 19, 2014



What does burning cache smell like?

kliras
Mar 27, 2021
if there's one thing i learned about reddit, it's that it's literally the worst place for overclocking advice. i can't even begin to imagine the sacrifices that will be made in the name of bclk

Arzachel
May 12, 2012

kliras posted:

if there's one thing i learned about reddit, it's that it's literally the worst place for overclocking advice

:hai:

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

FuturePastNow posted:

What does burning cache smell like?

Honestly, at 1.33v it's not even as high as some other "super pro overclocking guidance" that people are running around with. Unless the additional cache is considerably more delicate than the rest of the CPU, I'd expect it'll be fine--at least for the normal lifetimes that people are probably gonna be using this sort of CPU for.

But yeah, messing with BCLK is a quick way to see how sensitive the rest of your system is to out-of-spec clocks.

hobbesmaster
Jan 28, 2008

kliras posted:

i can't even begin to imagine the sacrifices that will be made in the name of bclk

Just buy m.2 drives like packs of gum.

edit: correction

quote:

This BIOS will support MEG X570 GODLIKE, Ace and Unify series, so motherboards which have external clock generators, the report claims.

This will not burn through m.2 drives; instead, it may or may not burn through $450 CPUs.

hobbesmaster fucked around with this message at 16:54 on Apr 22, 2022

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot

DrDork posted:

Honestly, at 1.33v it's not even as high as some other "super pro overclocking guidance" that people are running around with. Unless the additional cache is considerably more delicate than the rest of the CPU, I'd expect it'll be fine--at least for the normal lifetimes that people are probably gonna be using this sort of CPU for.

But yeah, messing with BCLK is a quick way to see how sensitive the rest of your system is to out-of-spec clocks.

I'm not convinced it's just a matter of delicacy. Has AMD configured and validated temperature reporting as accurately on the 5800X3D as on the rest of its lineup? It's fundamentally quite a different product; it may be the case that there are unreported hot spots that turn into more of a problem than you would guess based on the numbers.

hobbesmaster
Jan 28, 2008

K8.0 posted:

I'm not convinced it's just a matter of delicacy. Has AMD configured and validated temperature reporting as accurately on the 5800X3D as on the rest of its lineup? It's fundamentally quite a different product; it may be the case that there are unreported hot spots that turn into more of a problem than you would guess based on the numbers.

I’m running prime95 memory tests on my x3d now and current core temps are 72C, L3 cache temp is reported as 47.3C. Maxes are 85.2C and 48.1C.

I just fired up small FFTs and Tdie pegs at 90C, L3 temp sensor is right at 50C.

50C is nothing for silicon so I’m guessing you’re probably right that that sensor isn’t truly showing limits.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

hobbesmaster posted:

50C is nothing for silicon so I’m guessing you’re probably right that that sensor isn’t truly showing limits.

Yeah, I wouldn't trust any temp-sensor software currently out, since there's a real high chance that it's reading the wrong/incomplete temp data from the brand-new chip.

That said, I would expect that the CPU itself can monitor its own temps properly, and will throttle or shut down if the cache gets too toasty.

Frankly, given that the extra cache is layered on top of the rest of the chip, which now also has some hefty chunks of "structural silicon" shims sitting on top of a good portion of the core layouts, I'd be more worried that the more traditional portions of the chip could have unexpected problems with dissipating heat--and that's stuff that we already know has pretty good temp monitoring and whatnot, since it's the exact same stuff as the rest of the current Zen lineup.

Cygni
Nov 12, 2005

raring to post

https://www.anandtech.com/show/17356/tsmc-roadmap-update-n3e-in-2024-n2-in-2026-major-changes-incoming

TSMC pushing back N2, pulling up the half-node "N3E", and likely adding more half nodes.

Posting it here because AMD is still a heavily TSMC shop... for now!

CaptainSarcastic
Jul 6, 2013



FuturePastNow posted:

What does burning cache smell like?

Victory.

redeyes
Sep 14, 2002

by Fluffdaddy
AMD prolly found chips dying with OCing while validating and then we get a hard lockdown.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

redeyes posted:

AMD prolly found chips dying with OCing while validating and then we get a hard lockdown.

Looks that way, yeah. I found a part of an interview where AMD was noted as saying the limit for the VCache is 1.3-1.35v, so there's not a whole lot of headroom there. Whether that's strictly because the cache itself can't take higher than that, or that more voltage was causing heat problems with the silicon spacers acting as insulators wasn't noted--either seems possible, though.

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
Speaking of heat causing problems, does anyone know if thermal expansion could be a problem with these designs? For example if the cores were really busy but the cache wasn't being used much (Prime95 small FFTs maybe? iunno), could differential expansion cause enough mechanical stress to screw up the little copper vias?

New Zealand can eat me
Aug 29, 2008

:matters:


I'm no physics student but I'd expect you would need to get well past the rated operating temperatures for that to start to become a problem.


redeyes posted:

AMD prolly found chips dying with OCing while validating and then we get a hard lockdown.

The only requirement is the presence of a clock generator on the board. It seems like a "hard lockdown" because most boards do not have one.

The voltage limit isn't a function of physical safety; anything past that would put you into the zone where all it's accomplishing is increasing the error rate, which is especially bad when it's something like L3 cache, where the average latency is something like 40 cycles (~10ns?) for Zen 3. It'd be like the opposite of diminishing returns (compounding gains?) towards making the processor extremely fuckin slow and unstable. It would also leave the door open for disingenuous tech youtubers to deliberately do this and then make outrage videos about AMD being doomed because vcache is the worst thing ever.
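
(Quick check on that parenthetical, taking the ~4.45 GHz all-core clock mentioned a few posts up as an assumed reference point:)

code:

# Convert a 40-cycle L3 latency into wall-clock time at a couple of clock speeds.
for ghz in (3.4, 4.45):
    print(f"{40 / ghz:.1f} ns at {ghz} GHz")
# ~11.8 ns at 3.4 GHz, ~9.0 ns at 4.45 GHz, so "~10ns" is about right.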

Edit: It's also not like HBM where even if you could separately control the processor/memory (cache in this case) voltages to do the 'undervolt the processor as low as possible to free up all of the thermal headroom for the memory' dance, any benefit you could gain from the cache going faster is very tightly coupled to the speed of the processor.

Zen chips are p good at turning themselves off before they can be murdered. If you turn LN2 mode on and throw 1.5v @ LLC5 on air, with enough repeated attempts you're more likely to damage the board (and then the processor while you're inserting it into another board) than you are to actually kill the chip. Most boards with them will go past forcing a CMOS reset after enough sequential fast-fails, and force a hard power cycle that ignores the LN2 jumper on the next boot.

hobbesmaster posted:

50C is nothing for silicon so I’m guessing you’re probably right that that sensor isn’t truly showing limits.

Sounds like they're using the same approach they did for Junction Temps on the HBM cards, where there's a shitload of temp sensors and they're doing some fancy math to calculate 'effective heat' or whatever

New Zealand can eat me fucked around with this message at 20:46 on Apr 22, 2022

Klyith
Aug 3, 2007

GBS Pledge Week

ConanTheLibrarian posted:

Speaking of heat causing problems, does anyone know if thermal expansion could be a problem with these designs? For example if the cores were really busy but the cache wasn't being used much (Prime95 small FFTs maybe? iunno), could differential expansion cause enough mechanical stress to screw up the little copper vias?

Very unlikely. One of the nice properties of silicon is that it has an extremely low coefficient of thermal expansion. Otherwise this would be a problem for regular chips all over the place.
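
For scale, plugging textbook CTE values (silicon ~2.6 ppm/K, copper ~17 ppm/K) into the linear expansion formula, with the die size, via height, and temperature swing just picked for illustration:

code:

# Linear thermal expansion: dL = alpha * L * dT.
ALPHA_PER_K = {"silicon": 2.6e-6, "copper": 17e-6}   # typical handbook CTEs

def expansion_um(material, length_um, delta_t_c):
    return ALPHA_PER_K[material] * length_um * delta_t_c

print(expansion_um("silicon", 10_000, 60))  # a ~10mm die warming 60C grows by ~1.6 um total
print(expansion_um("copper", 50, 60))       # a ~50um tall via over the same swing: ~0.05 um

Even with copper expanding roughly 6x faster than silicon, the absolute displacements at those dimensions are tiny fractions of a micron.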

Plus the stacked cache is in the center of the CCD, on top of the normal cache and non-core bits of the CPU.

In the very long term it might be possible that stacked chips have a lower lifetime than normal, but that's in comparison to most CPUs, which don't have much functional aging at all.

NewFatMike
Jun 11, 2015

One of the things that AMD said ahead of the launch is that their bonding method is effectively the same thing that is done to wring gauge blocks. The effect is described in this video:

https://youtu.be/qE7dYhpI_bI

I would be EXTREMELY surprised if this can manage the extremes of the operating specs that regular monolithic silicon can. It’s pretty much just cohesive forces keeping everything together.

I’m sure it’ll be fine in spec, but I wouldn’t be surprised if the cache pops clean off or peels when someone tries an LN2 overclock, from the temperature differential and the aspect ratio of wide, flat silicon bodies resting on each other.

hobbesmaster
Jan 28, 2008

How would you even do an LN2 overclock? I’m sure we’ll find out shortly.

https://valid.x86.fr/m1eu6l
Like that bus clock overclock is at 1.2V.

hobbesmaster fucked around with this message at 23:34 on Apr 22, 2022

New Zealand can eat me
Aug 29, 2008

:matters:


On the flipside, it's insane that these things are still stable down at 0.844v!! https://wccftech.com/amd-ryzen-7-5800x3d-cpu-undervolting-monster-efficency-sub-1v-same-performance-runs-cooler-lower-power/

That multicore cinebench r20 score is almost twice as high as the M1 Max at very similar power levels. I wish he would have done a geekbench pull so we'd have a better comparison to make

(Funny to see the comments just making poo poo up like "1.2V should easily be stable at 4.65GHz", given the above post)
