Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Scarecow posted:

Im doing a full retard build (never done one and im going all out with sli 1080tis full custom water loop with 2 res and 4 radiators hard line tubing etc)

Yes i know its loving stupid but ive always wanted to do one so thats why i want to do a 10core intel or a 16core threadripper

Knock yourself out, and I don't think anybody trying to build the baddest computer around will regret going Skylake-X. It's likely to have fewer weird new platform issues than Threadripper anyway.


canyoneer
Sep 13, 2005


I only have canyoneyes for you
Every time I see "Threadripper" I think of that Doom comic with the guy yelling RIP AND TEAR THE GUTS

Kazinsal
Dec 13, 2011



canyoneer posted:

Every time I see "Threadripper" I think of that Doom comic with the guy yelling RIP AND TEAR THE GUTS

eames
May 9, 2009

Skylake-X rumored to use FIVR (Fully Integrated Voltage Regulator) which would explain some of the rather extreme TDP/temps observed so far.

I vaguely remember that it was deemed a bad idea after Haswell, and something about two alternating teams in the US and Israel designing Intel CPUs, one of them being pro-FIVR, the other against it.
It's easy to see why integrated regulators are useful for mobile devices (less PCB space required), but for HEDT?

http://www.tweaktown.com/news/58013/intels-skylake-use-integrated-voltage-regulator/index.html

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

eames posted:

Skylake-X rumored to use FIVR (Fully Integrated Voltage Regulator) which would explain some of the rather extreme TDP/temps observed so far.

I vaguely remember that it was deemed a bad idea after Haswell, and something about two alternating teams in the US and Israel designing Intel CPUs, one of them being pro-FIVR, the other against it.
It's easy to see why integrated regulators are useful for mobile devices (less PCB space required), but for HEDT?

http://www.tweaktown.com/news/58013/intels-skylake-use-integrated-voltage-regulator/index.html

Came here to post this.

It probably improves efficiency for the datacenter guys but it seems to be a net loss for enthusiasts. The strange thing is there is no FIVR on Small Skylake/Kaby Lake, so this does appear to be something slightly different from merely Skylake With More Cores and will behave slightly differently at the margins.

The catch with mobile devices is that you need inductor discretes on-package, which increases the thickness of the package. To keep it under control with Broadwell-Y, you needed to cut a hole in your PCB for the inductors to poke down into.

BurritoJustice
Oct 9, 2012

Just a comment on the der8auer Skylake-X overclocking video, because a few people commented saying they distrust him since Intel is providing him with so many expensive CPUs to bin.

He works for caseking.de, which bins Intel CPUs and sells them for more than retail (think SiliconLottery.com). That's where he got the piles of Skylake-X CPUs. As with SiliconLottery, even if they were malicious, it would be in their best interest to understate the average overclockability of retail processors so that their own binning service looks more appealing. He even states in the video that 5GHz is a binned chip and standard chips are closer to 4.8GHz.

As with SiliconLottery last time this was discussed, I think it's reasonable not to immediately distrust his stats, even if no weight is given to his reputation in the overclocking community.

E: My personal guess is that with the 14nm+ process of Kaby and the IVR the chips will have fantastic voltage scaling up to 5GHz but will be heavily temperature limited making it unfeasible for most. The higher core count chips will cut a few hundred MHz off that like with past HEDT platforms.

BurritoJustice fucked around with this message at 18:56 on Jun 14, 2017

Gwaihir
Dec 8, 2009
Hair Elf

PerrineClostermann posted:

I didn't mean it sarcastically, if that's how it was interpreted. Browsing the net can sap a lot of resources and multitasking is pretty important.

You don't have to preach that to me, heh, I traded in an XPS13 for a Precision 7510 because I wasn't really happy with the CPU performance of the ULV chips just for web browsing with 20 tabs.

(But just going to a real 45w quad did the trick.)

movax
Aug 30, 2008

eames posted:

Skylake-X rumored to use FIVR (Fully Integrated Voltage Regulator) which would explain some of the rather extreme TDP/temps observed so far.

I vaguely remember that it was deemed a bad idea after Haswell, and something about two alternating teams in the US and Israel designing Intel CPUs, one of them being pro-FIVR, the other against it.
It's easy to see why integrated regulators are useful for mobile devices (less PCB space required), but for HEDT?

http://www.tweaktown.com/news/58013/intels-skylake-use-integrated-voltage-regulator/index.html

Not necessarily a "bad" idea, but yeah, Haswell came out of the US following Sandy Bridge from Israel. It comes down to the simple fact that die area is valuable, and do you want to spend it on something that can be on the motherboard (at the expense of complexity + cost + efficiency) or do you spend it on something you can only do on the die (i.e. more transistors for logic)? There's a lot of non-CPU logic on the dies these days; for example QuickSync's H.264 encoder is fixed-function hardware sitting there doing IDCT / motion estimation / etc (there's only one thing it leans on an EU for, I forget what it is off hand); those are all transistors spent on stuff that isn't the CPU core.

eames
May 9, 2009

movax posted:

Not necessarily a "bad" idea, but yeah, Haswell came out of the US following Sandy Bridge from Israel. It comes down to the simple fact that die area is valuable, and do you want to spend it on something that can be on the motherboard (at the expense of complexity + cost + efficiency) or do you spend it on something you can only do on the die (i.e. more transistors for logic)? There's a lot of non-CPU logic on the dies these days; for example QuickSync's H.264 encoder is fixed-function hardware sitting there doing IDCT / motion estimation / etc (there's only one thing it leans on an EU for, I forget what it is off hand); those are all transistors spent on stuff that isn't the CPU core.

Interesting you mention that because Skylake-X doesn't support Quicksync. Perhaps they "replaced" the iGPU with FIVR.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Any opinions on the mesh instead of the ring bus on Skylake-X/SP? I hear that average hop count is lower, but the latencies are ostensibly a little higher? Any truth to this, specifically the latter?

eames posted:

Interesting you mention that because Skylake-X doesn't support Quicksync. Perhaps they "replaced" the iGPU with FIVR.

The HEDT dies never had that logic on them to begin with.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
Welp, poo poo is fast

http://hexus.net/tech/reviews/cpu/107017-intel-core-i9-7900x-14nm-skylake-x/



Wirth1000
May 12, 2010

#essereFerrari
Disassembling and returning my Ryzen 1600X build faster than the i9-7900X.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Doing some naive scaling math, the 7820X should easily beat the 6950X. More so, given it has higher stock clocks than the 7900X.
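That naive scaling math can be sketched in a few lines. Every score and clock below is a placeholder I'm assuming for illustration, not a benchmark result from this thread:

```python
# Naive core/clock scaling: estimate an 8C 7820X result from a measured
# 10C 7900X score, assuming near-linear core scaling. All numbers here
# are placeholders for illustration, not real benchmark data.
def scaled_score(score, cores_from, cores_to, ghz_from, ghz_to):
    return score * (cores_to / cores_from) * (ghz_to / ghz_from)

score_7900x = 2200  # hypothetical 7900X multi-core score
score_6950x = 1800  # hypothetical 6950X score for comparison

# assume the 7820X holds slightly higher all-core clocks (4.3 vs 4.0 GHz)
est_7820x = scaled_score(score_7900x, 10, 8, 4.0, 4.3)
print(round(est_7820x))  # 1892 -> ahead of the 6950X in this sketch
```

Under those assumed inputs, losing two cores is more than paid back by the clock bump plus the newer core's IPC (which this sketch doesn't even count).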

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Bit-tech has a review up as well.

System power is 350W at 4.7 GHz, and the chip is clearly thermally throttling IMO. With delidding and an AIO I bet Der8auer is right, a decent chip should go to 5.0 and with a custom loop it should hit 5.1.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!
I wonder if the 18-core part will hit 5GHz, you'd need quite the cooling setup for that.

Cygni
Nov 12, 2005

raring to post

Interesting to see the board partners so unprepared for X299. BIOSes seem to be all messed up in both reviews. Seems to be a repeat of the Ryzen launch at the moment.

Both reviews were pretty clear to point out they didn't source their parts through Intel, so I wonder if the other reviews are still under NDA.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
They got their chips from other sources. They are not under NDA and the launch isn't till the end of the month.

Cygni
Nov 12, 2005

raring to post

Don Lapre posted:

They got their chips from other sources. They are not under NDA and the launch isn't till the end of the month.

There were rumors that the review NDA would lift last Monday (which didn't happen) and then again today, but that doesn't seem to be the case either, was my point.

eames
May 9, 2009

I was hoping for better (idle) power consumption compared to desktop Kaby Lake, SKL-X being a server-based design and all. It is a 10C quad-channel HEDT platform, so I'm not sure what I expected. :shrug:

Intel should lift the NDA at this point. Hope we'll see some 6C/8C numbers too.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

Cygni posted:

There were rumors that the review NDA would lift last Monday (which didn't happen) and then again today, but that doesn't seem to be the case either, was my point.

But rumors about when the NDA may lift don't really mean anything.

Am I saying the motherboard partners are ready? No. But I wouldn't base that opinion on people who got their chips from third parties and a rumored NDA date that didn't come true.

Don Lapre fucked around with this message at 19:08 on Jun 16, 2017

3peat
May 6, 2010

quote:

There seems to be much more headroom with Skylake-X than its predecessor, and the main limiting factor is temperature if our CPU is anything to go by. We plumbed in 1.3V as a starting point and crept up from 4GHz all the way to an astounding 4.7GHz, which is 300MHz higher than we managed with the Core i7-6950X. Even more impressive was the fact that it was still completely stable with just 1.28V - far lower than the 1.44V we needed with the older CPU.

However, temperatures were definitely a concern with Cinebench and Terragen pushing 100°C with our 240mm AIO liquid cooler. As a result, while stable and potentially tameable under custom water-cooling, we decided to go for 4.6GHz for benchmarking, which required a super-low 1.22V. Interestingly our Core i7-6950X ran much cooler despite using a significantly higher voltage, albeit at 4.4GHz. This could well be due to thermal paste having been used between the heatspreader and CPU core with the new Skylake-X CPUs, in which case delidding could potentially yield significant benefits given the high heat density.

Indeed, as we were writing this, overclocker der8auer released a video on YouTube of him successfully delidding a Core i9-7900X and achieving a 5GHz overclock using an off-the-shelf AIO liquid cooler. If you're prepared to risk it with your £1,000 / $1,000 CPU, then there are likely serious gains to be had.
https://www.bit-tech.net/hardware/2017/06/16/intel-core-i9-7900x-and-x299-chipset-revie/8

lmfao what a joke

anyways, intel's $1000 ten core cpu at 4.6ghz is 40% faster in cinebench than a $300 eight core amd at 4ghz; if the 16 core threadripper is really gonna cost $700-800, then you must be an idiot to buy this poo poo

Cygni
Nov 12, 2005

raring to post

3peat posted:

lmfao what a joke

anyways, intel's $1000 ten core cpu at 4.6ghz is 40% faster in cinebench than a $300 eight core amd at 4ghz; if the 16 core threadripper is really gonna cost $700-800, then you must be an idiot to buy this poo poo

The rumored $850 threadripper is the entry level version of the 16c with limited clocks. Pretty likely that if the top end threadripper (god what an awful name) is truly competitive, AMD will price it as such. But it won't hit shelves until mid August in the first place.

And of course there is the whole single thread performance thing.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Entry-level Threadripper 16C (or heck, Epyc) is going to be great for servers. I dunno about it as a workstation chip though; I would maybe go for a 10-12C with higher clocks.

Also, Threadripper is probably going to suck for gaming, so the 6C/8C/10C Skylake-X will definitely have a home in a few gaming rigs, simply because that's the fastest thing for gaming right now. Kaby Lake clocks with 6-10 cores is going to be perfect for gaming.

eames
May 9, 2009

I cannot wait to see how Apple will cool those in their iMac Pro with a vapor chamber and two radial fans. There's no way the 18C will be able to stay anywhere near nominal frequency without sounding like a jet taking off.

craig588
Nov 19, 2005

by Nyc_Tattoo
I've had some HCC Haswell Xeons and they run very cool fully loaded. It really makes the consumer parts look like reject dies they figured they could sell instead of throwing away. 18 cores at 2.4GHz barely touching 60C with a tiny, virtually silent stock Intel cooler.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

craig588 posted:

I've had some HCC Haswell Xeons and they run very cool fully loaded. It really makes the consumer parts look like reject dies they figured they could sell instead of throwing away. 18 cores at 2.4GHz barely touching 60C with a tiny, virtually silent stock Intel cooler.

My 10C E5-2650v3 (Haswell-E, 2.6 GHz all-core) pulls 60W doing a Handbrake encode. My desktop (5820K, 6C at 4.13 GHz all-core) pulls 90W for similar total performance. Clocking down makes a huge difference in efficiency.

(2650v3: 10C * 2.6GHz = 26 core-GHz, 5820K: 6 * 4.13 = 24.78 core-GHz, so this is roughly what you'd expect)

I have encode quality turned way up so these framerates will sound low, but I'm compressing 1440p 60fps at CRF 24 with veryslow quality and I get about 10 frames per second on either processor.

This is another thing that skews expectations with Ryzen - Ryzen overshoots its TDP really badly if you are boosting on all cores, while Intel undershoots it considerably. That 2650v3 is listed as a 105W processor, and the 60W figure includes some AVX (x264 is AVX-aware); I think I would need to run Prime95 SmallFFT to get close to 105W.
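The core-GHz arithmetic above is simple enough to check in a few lines, using only the figures quoted in this post:

```python
# "Core-GHz" as a rough throughput proxy: cores x all-core clock.
# Wattage figures are the encode-load numbers quoted above.
def core_ghz(cores, ghz):
    return cores * ghz

xeon_work, xeon_watts = core_ghz(10, 2.6), 60   # E5-2650v3, 2.6 GHz all-core
hedt_work, hedt_watts = core_ghz(6, 4.13), 90   # 5820K, 4.13 GHz all-core

# Near-identical throughput, very different power draw:
print(xeon_work, round(hedt_work, 2))           # 26.0 24.78
print(round(xeon_work / xeon_watts, 2),
      round(hedt_work / hedt_watts, 2))         # 0.43 0.28 core-GHz per watt
```

Same work per second within a couple of percent, but the downclocked part does it for two-thirds the power, which is the whole clock-vs-efficiency point.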

Paul MaudDib fucked around with this message at 21:35 on Jun 16, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Another interesting note is that Intel is back in the perf/watt lead. Stock 7900X all-core boost clock is 4.0 GHz, and they compare directly against the 1800X OC'd to 4.0 GHz. Intel is pulling 267W and AMD is pulling 259W (whole system load, measured during Prime95 SmallFFT), so they're pulling 8 watts more with 4 additional cores at the same clocks, and obviously have a pretty commanding performance lead at those levels.

Intel is really pushing the clocks hard in the HEDT lineup for a change and the TDP suffers as a result, but I bet the Xeon chips are going to be nice and cool.

Scarecow
May 20, 2008

3200mhz RAM is literally the Devil. Literally.
Lipstick Apathy
Jesus loving Christ intel go back to soldering your cpus you tight rear end fucks

I am drooling over that 10 core but having to delid it to stop thermals being an issue at 999usd fuuuuuuuck youuuuuuuu

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
If the 7900X is so much further ahead in Handbrake performance, even though x264 supposedly scales to only about six cores (forgot which thread said this), and the power draw is that huge, I suppose it's AVX at play here, and performing rather well?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Scarecow posted:

Jesus loving Christ intel go back to soldering your cpus you tight rear end fucks

I am drooling over that 10 core but having to delid it to stop thermals being an issue at 999usd fuuuuuuuck youuuuuuuu

Yeah the TIM is obviously the bottleneck right now. On a $350 processor, whatever, fine, guess you can delid, but that's bullshit on a $1k processor.

OK, so, assertion check here, but I don't think heat dissipation is actually the problem with AIOs. AMD cools 500W with a 120mm cooler on the 295x2 and it runs at like 60C. Same for Fury X, 300W+ on a 120mm AIO and temps are no problem at all. I think if you measured the radiator it probably wouldn't go past 60C tops for most CPU AIOs.

Instead, the problem is moving the heat out of the die fast enough. The GPUs in the example have bare dies that can be directly cooled, and they have comparatively larger heatspreaders.

My assertion is that Skylake-X (large die) and Threadripper (dual-die) should both have an easier time dissipating heat than you would expect when comparing against consumer CPUs, for the same reasons. At least once you replace the toothpaste with some liquid metal.

Also, why has nobody put a peltier in a CPU waterblock yet? Yes, it would add some heat too but my assertion here is that we have plenty of thermal capacity to spare, the bottleneck is getting the heat into the liquid. A peltier would be able to keep the CPU cooler and increase the contact temp delta at the coldplate, which should mean more heat gets carried away.

Or better yet, a peltier that replaces the IHS entirely. We still need at least one layer to shim the die to the proper height to contact the cooler... but instead of having a useless IHS layer and then a second peltier layer on top, we can just make the shim the peltier.

gently caress having an IHS anyway. Not that I don't have bad memories of installing the cooler on bare-die Athlon XP 1800+s, but the problem there was the locking mechanism. Contact force is way lower nowadays; bare die plus today's modern pogo-pin/screw-down/lever-down mechanisms would probably be fine.
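The heat-density argument can be put in numbers. The die areas below are rough public estimates I'm assuming for illustration, not figures from this thread:

```python
# Heat flux (W per mm^2) falls with die area, which is why a big GPU die
# at 300 W can be easier to cool than a small CPU die at half that power.
# Die areas are rough assumed estimates for illustration only.
def heat_flux(watts, die_mm2):
    return watts / die_mm2

fiji_gpu  = heat_flux(300, 596)  # Fury X-class big die, bare-die cooled
quad_core = heat_flux(150, 122)  # overclocked mainstream quad, small die
sklx_10c  = heat_flux(300, 322)  # OC'd Skylake-X 10C, larger die

print(f"{fiji_gpu:.2f} {quad_core:.2f} {sklx_10c:.2f}")  # 0.50 1.23 0.93
```

Under these assumed areas, the big GPU die runs the lowest W/mm² despite the highest wattage, and the Skylake-X die sits well below a small overclocked quad, which is the "larger die should dissipate easier" claim in numeric form.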

Paul MaudDib fucked around with this message at 23:13 on Jun 16, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Combat Pretzel posted:

If the 7900X is so much further ahead in Handbrake performance, even though x264 supposedly scales to only about six cores (forgot which thread said this), and the power draw is that huge, I suppose it's AVX at play here, and performing rather well?

Umm, where did you see that?

AFAIK it varies by resolution, naturally. I do 1440p CRF 24 veryslow at 60 fps, and my results indicate that it scales well up to at least 10 cores. My E5-2650v3 (10C12T 2.6 GHz all-core) and my 5820K (6C12T at 4.13 GHz all-core) have pretty much identical framerates, which is what you'd expect (2650v3 is 26 core-GHz and 5820K is 24.78 core-GHz). Gets about 10-11 frames per second in most titles.

At 720p I definitely would bet that it doesn't scale much beyond 6 cores, but you're probably also zipping through at 120+ fps even with fairly slow quality modes.

Paul MaudDib fucked around with this message at 23:11 on Jun 16, 2017

Scarecow
May 20, 2008

3200mhz RAM is literally the Devil. Literally.
Lipstick Apathy
Trouble with a Peltier is you would need a monstrous 400W one (not actually running at 400W), the power supply for it, and a controller to stop you going into the dew point range, but god drat that would be interesting

Gwaihir
Dec 8, 2009
Hair Elf

eames posted:

I cannot wait to see how Apple will cool those in their iMac Pro with a vapor chamber and two radial fans. There's no way the 18C will be able to stay anywhere near nominal frequency without sounding like a jet taking off.

Dollars to doughnuts Apple gets a custom SKU with no heat spreader, laptop-style. Direct die cooling will be no problem.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Scarecow posted:

Trouble with a Peltier is you would need a monstrous 400W one (not actually running at 400W), the power supply for it, and a controller to stop you going into the dew point range, but god drat that would be interesting

Can you explain this further? I understand the general concept of peltiers but I've never used one.

Is the idea that you size it for 400W average load, then you toggle it on and off to keep the processor within a given range? Or that you physically need a 400W size to cover a large IHS?

How much heat is the Peltier going to generate, in terms of a percentage of the moved heat I assume? I also assume they drop off in efficiency at extremes (cold side is already very cold, etc).

How much of a thermal delta can you sustain across a peltier without damaging it? Or is it more of an absolute operating temperature range?

Scarecow
May 20, 2008

3200mhz RAM is literally the Devil. Literally.
Lipstick Apathy

Paul MaudDib posted:

Can you explain this further? I understand the general concept of peltiers but I've never used one.

Is the idea that you size it for 400W average load, then you toggle it on and off to keep the processor within a given range? Or that you physically need a 400W size to cover a large IHS?

How much heat is the Peltier going to generate, in terms of a percentage of the moved heat I assume? I also assume they drop off in efficiency at extremes (cold side is already very cold, etc).

How much of a thermal delta can you sustain across a peltier without damaging it? Or is it more of an absolute operating temperature range?

It's more about how the peltier itself works. It needs electricity to run, and that generates heat, so if you run the peltier at max power it's putting out more heat of its own; you have to move the heat of whatever you're trying to cool along with the peltier's. There are also efficiency reasons (cooling vs. power use) why you run one at 50% or less of its max power, and the deltas depend on the peltier and what it's rated for. Admittedly I'm still learning about them (only just got one to tinker with), but there's a lot of good info about them floating around.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Paul MaudDib posted:

Umm, where did you see that?

Not sure, I thought it was in the AMD thread, but I can't find it there. Probably Reddit then.

eames
May 9, 2009

Peltier elements are inefficient active heat pumps. You'd need an absolutely giant peltier element to meaningfully cool 350W, if such an element even exists. An undersized element will simply heat up the CPU.

It's been ages since I last played with peltier/TEC gear, but this site shows that you'd need ~1.6 kW of peltier elements to cool a 350W processor to 50°C with the hot side of the peltier cooled to 75°C, while putting out a total of ~1950W.

You'd also need a super massive copper block as a heat spreader (the calculation assumes perfectly even heat distribution), thermal insulation, etc. Phase change cooling obviously makes a lot more sense.
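The arithmetic behind those figures follows from the COP definition. The COP value below is back-solved to roughly match the post's numbers, not taken from any datasheet:

```python
# Peltier energy balance: heat rejected = heat pumped + electrical input,
# with COP = Q_cold / P_electric. The COP here is an assumed value chosen
# to reproduce the ~1.6 kW / ~1950 W ballpark above.
def peltier_totals(q_cold_w, cop):
    p_electric = q_cold_w / cop     # electrical input required
    q_hot = q_cold_w + p_electric   # total heat at the hot side
    return p_electric, q_hot

p_in, q_out = peltier_totals(350, 0.22)
print(round(p_in), round(q_out))  # 1591 1941
```

At a COP around 0.22, pumping 350W of CPU heat takes ~1.6 kW of electricity and dumps nearly 2 kW into the hot side, which is why phase change looks sane by comparison.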

GRINDCORE MEGGIDO
Feb 28, 1985


Or a water chiller and WC, keep it above the dew point.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Scarecow posted:

It's more about how the peltier itself works. It needs electricity to run, and that generates heat, so if you run the peltier at max power it's putting out more heat of its own; you have to move the heat of whatever you're trying to cool along with the peltier's. There are also efficiency reasons (cooling vs. power use) why you run one at 50% or less of its max power, and the deltas depend on the peltier and what it's rated for. Admittedly I'm still learning about them (only just got one to tinker with), but there's a lot of good info about them floating around.

Pretty sure that's not correct; the cold side gets cold, and it getting hot on that side would defeat the purpose of a thermoelectric pump.

The bigger problem appears to be efficiency. A StackOverflow answer mentioned more like 10% efficiency (pumping 10W continuously requires 100W, so 110W total). That would be pretty catastrophic.

Just doing some idle googling, it looks like you are gated mostly by the DTmax (temperature difference between the two sides) the TEC can withstand. The closer you push to DTmax, the less efficient it is.

I came up with this link which gives some trendlines that might be relevant. Using this 350W cooler would give me a DTmax of 68C; figure you aim for half that in reality (DT/DTmax = 0.5), which would keep your CPU 34C cooler than the AIO. Worst case, that makes the coefficient of performance (how much heat we move with 1W of electricity) ~0.35 at full 350W load. Since the max load also includes the heat from the Peltier itself (I assume), we would basically divide by 3. So worst case you could cool roughly 116W continuously at a minimum, and you'd eat 350W extra to do it.

I don't think it's quite there yet for the 300W or so that an OC'd 10-core Skylake-X is cranking out, but it actually sounds ballpark feasible for smaller chips, maybe the 6C or the 7700K. The key with a smaller Peltier would be finding one with enough capacity, because this one is 62mm x 62mm (~2.5" square), and that's bigger than even SKL-X's IHS.
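A quick check of that estimate, using only the post's own assumed trendline figures:

```python
# Worst-case TEC sketch from the figures above: drive the element at its
# 350 W electrical rating with COP ~0.35 (the assumed trendline value at
# DT/DTmax = 0.5). All inputs are the post's estimates, not measurements.
dt_max = 68                # rated max delta-T of the element, deg C
dt_target = dt_max / 2     # aim for half: CPU ~34 C below the loop temp
cop = 0.35                 # heat pumped per watt of electrical input

p_electric = 350.0
q_pumped = cop * p_electric          # CPU heat actually moved
q_rejected = q_pumped + p_electric   # heat dumped into the water loop
print(round(q_pumped, 1), round(q_rejected, 1))  # 122.5 472.5
```

So roughly 120W of CPU heat moved continuously, close to the post's ~116W figure, at the cost of 350W extra into the loop.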

Paul MaudDib fucked around with this message at 00:37 on Jun 17, 2017


Malloc Voidstar
May 7, 2007

Fuck the cowboys. Unf. Fuck em hard.

Combat Pretzel posted:

If the 7900X is so much further ahead in Handbrake performance, even though x264 supposedly scales to only about six cores (forgot which thread said this), and the power draw is that huge, I suppose it's AVX at play here, and performing rather well?

I'd assume it's AVX-512, yeah. Also x264 thread scaling depends on how tall the video you're encoding is. 10 cores should be fine for 1080p.
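A rough sketch of that height dependence. The minimum-rows-per-thread value below is an assumed illustrative number, not x264's actual heuristic:

```python
# x264's threading hands each thread a band of 16-pixel macroblock rows,
# so frame height caps the useful thread count. The 4-row minimum per
# thread here is an assumed value for illustration only.
def max_useful_threads(height_px, min_mb_rows_per_thread=4):
    mb_rows = (height_px + 15) // 16  # number of 16x16 macroblock rows
    return mb_rows // min_mb_rows_per_thread

for h in (720, 1080, 1440):
    print(h, max_useful_threads(h))  # 720->11, 1080->17, 1440->22
```

Even with this crude cutoff, 1080p comfortably keeps 10+ cores busy while 720p starts running out of rows to hand out much past a dozen threads.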
