BlankSystemDaemon
Mar 13, 2009



SwissArmyDruid posted:

And yet one is benched as being capable of about 10% fewer FLOPS than the other. Gee. It's almost like IPC and transistor count between a 28nm process and a 10nm process actually *means* something.
That's assuming a lot, frankly.
Benchmarks today seem to consist of at most one run, with no ministat(1) or similar statistical treatment, and with all sorts of optimizations enabled against no established baseline.
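
(For the curious: the point of ministat(1) is that you take several runs per configuration and compare means with confidence intervals, not single numbers. A rough Python sketch of the same idea, with made-up timings, just to show what a single-run benchmark throws away:)

code:
# Sketch of ministat(1)-style analysis: several samples per configuration,
# plus a dispersion estimate on each and on the difference. Numbers are invented.
from statistics import mean, stdev
from math import sqrt

baseline  = [41.2, 40.8, 41.5, 41.1, 40.9]   # seconds per run, stock settings
optimized = [39.9, 40.2, 40.1, 39.8, 40.3]   # seconds per run, tuned settings

m1, s1 = mean(baseline), stdev(baseline)
m2, s2 = mean(optimized), stdev(optimized)
# standard error of the difference of the means (Welch-style, no t-table lookup here)
se = sqrt(s1**2 / len(baseline) + s2**2 / len(optimized))

print(f"baseline:   {m1:.2f} s +/- {s1:.2f}")
print(f"optimized:  {m2:.2f} s +/- {s2:.2f}")
print(f"difference: {m1 - m2:.2f} s (standard error {se:.2f})")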

canyoneer
Sep 13, 2005


I only have canyoneyes for you

D. Ebdrup posted:

That's assuming a lot, frankly.
Benchmarks today seem to consist of at most one run, with no ministat(1) or similar statistical treatment, and with all sorts of optimizations enabled against no established baseline.

try renaming it to quack.exe

JawnV6
Jul 4, 2004

So hot ...
how does a dgpu get ‘more’ bandwidth

it’s not the dedicated memory?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

JawnV6 posted:

how does a dgpu get ‘more’ bandwidth

it’s not the dedicated memory?

dedicated memory, wider memory bus/higher clocks (faster memory), and delta compression, yeah

for iGPUs LPDDR4X makes a huge difference: it's faster, and its narrower channels give you effectively quad-channel from two modules (it's not physically a DIMM since it's soldered down, so "per module" is probably the right term)
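
To put rough numbers on that (illustrative figures, not measurements of any specific part): peak bandwidth is just bus width times transfer rate, which is why even a midrange dGPU's wide GDDR bus dwarfs dual-channel DDR4, and why LPDDR4X helps an iGPU at the same total width.

code:
# peak bandwidth in GB/s = (bus width in bytes) * (transfers per second, GT/s)
# all figures below are illustrative, not benchmarks of any particular product
def peak_gbps(bus_width_bits, transfer_rate_gtps):
    return (bus_width_bits / 8) * transfer_rate_gtps

print(peak_gbps(128, 3.2))    # dual-channel DDR4-3200 (2 x 64-bit):    ~51 GB/s
print(peak_gbps(128, 4.266))  # LPDDR4X-4266 as 4 x 32-bit channels:    ~68 GB/s
print(peak_gbps(256, 14.0))   # midrange dGPU, 256-bit GDDR6 @ 14 GT/s: ~448 GB/s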

Cygni
Nov 12, 2005

raring to post

AT has apparently put the LGA1159 rumors to bed. Intel made some Comet Lake samples on LGA1151 (because Comet Lake is basically the same as Coffee Lake R), but they aren't retail products. LGA1200 is the socket for retail Comet Lake.

https://www.anandtech.com/show/15359/trx80-and-wrx80-dont-exist-neither-does-the-intel-lga1159-socket

Worf
Sep 12, 2017

If only Seth would love me like I love him!

man if i had gotten rid of my trusty ol' asrock z97 for nothing i would be the one melting down

eames
May 9, 2009

Cygni posted:

AT has apparently put the LGA1159 rumors to bed. Intel made some Comet Lake samples on LGA1151 (because Comet Lake is basically the same as Coffee Lake R), but they aren't retail products. LGA1200 is the socket for retail Comet Lake.

https://www.anandtech.com/show/15359/trx80-and-wrx80-dont-exist-neither-does-the-intel-lga1159-socket

I also expect CML to only arrive on the new socket but I’m not sure the article “puts the rumours to bed”, as you phrased it. In any case it would have been strange to see a new consumer CPU released across two sockets.

On the topic of strange things that kind of make no sense, computerbase has spotted another X299 based refresh, this time with the XCC Die. 22 Cores, 4.0/5.0 GHz, 380W rated TDP... :shrug:

Demostrs
Mar 30, 2011

by Nyc_Tattoo
lol even the igpus on intel processors are getting kneecapped

https://www.phoronix.com/scan.php?page=article&item=intel-gen7-hit&num=4

Worf
Sep 12, 2017

If only Seth would love me like I love him!

Demostrs posted:

lol even the igpus on intel processors are getting kneecapped

https://www.phoronix.com/scan.php?page=article&item=intel-gen7-hit&num=4

Yeah before anybody clicks out of huge concern, the big hits are to broadwell and haswell era chips, basically

Which can no longer render some text or play q3 on integrated lmao

The headline I first read said "Intel G7 graphics" which they meant as...haswell. ofc I assumed they meant icelake mobile sku at first and lold/cried

eames
May 9, 2009

Demostrs posted:

lol even the igpus on intel processors are getting kneecapped

https://www.phoronix.com/scan.php?page=article&item=intel-gen7-hit&num=4

If this goes through, 2013-2015 MBPs may become unusable? I remember that the early models struggled with 2D scrolling performance when they launched because the resolution was so high for the time.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
While I get that Intel puts iGPUs in a lot of their CPUs so that you can run an office machine without needing a dedicated GPU, what is the use-case for putting an iGPU in something like an i5 or an i7? You wouldn't expect someone to get a quad-core (or more) just as an office/browsing machine, but at the same time, any gamer/enthusiast/content-producer is going to get a dedicated GPU anyway, especially since the Intel iGPUs are not powerful enough to really be a stand-alone solution the way AMD's APUs are/were intended to be.

I guess it would make sense for laptops, but I checked and even something like an i5-6600k for the desktop has an iGPU. Is it just a production thing where it's not worth "removing" the iGPU?

Theris
Oct 9, 2007

gradenko_2000 posted:

Is it just a production thing where it's not worth "removing" the iGPU?

Is this. Desktop chips are high leakage laptop chips. It doesn't make sense to have a separate GPU-less die for mid range desktops since it's relatively low volume and there'd be nowhere for them to use laptop chips that don't make the cut.

Theris fucked around with this message at 11:56 on Jan 16, 2020

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Demostrs posted:

lol even the igpus on intel processors are getting kneecapped

https://www.phoronix.com/scan.php?page=article&item=intel-gen7-hit&num=4

Wow, what a total mess

SwissArmyDruid
Feb 14, 2014

by sebmojo

gradenko_2000 posted:

While I get that Intel puts iGPUs in a lot of their CPUs so that you can run an office machine without needing a dedicated GPU, what is the use-case for putting an iGPU in something like an i5 or an i7? You wouldn't expect someone to get a quad-core (or more) just as an office/browsing machine, but at the same time, any gamer/enthusiast/content-producer is going to get a dedicated GPU anyway, especially since the Intel iGPUs are not powerful enough to really be a stand-alone solution the way AMD's APUs are/were intended to be.

I guess it would make sense for laptops, but I checked and even something like an i5-6600k for the desktop has an iGPU. Is it just a production thing where it's not worth "removing" the iGPU?

Theris posted:

Is this. Desktop chips are high leakage laptop chips. It doesn't make sense to have a separate GPU-less die for mid range desktops since it's relatively low volume and there'd be nowhere for them to use laptop chips that don't make the cut.

And I know you didn't ask, but to tag onto what Theris is saying, by that same token but inverted, HEDT -X parts are all high-leakage server chips. Server and mobile get all of the R&D money, because those are the situations where CPU silicon is the most constrained: either it needs to sip power off a battery, or it needs to share wattage with another socket, four GPUs, and a dozen drives in RAID, while keeping TDPs low enough not to cook itself to death in a server rack or a laptop case. You can always slap on a bigger air cooler, or more fans, or go to water cooling, or use a bigger power supply in a desktop case, but not necessarily in the other two markets.

It is a brilliant tactic that has served Intel well over the years, and if it ever seems like Intel doesn't care about desktop, you're absolutely correct, they don't, except as a dumping ground for anything they could not bin into mobile or server.

Do not take how AMD approaches the market as anything remotely related to how Intel approaches the market. Yes, AMD is/was basically playing in Intel's junkyard before intruding on their laptop/server markets, but it was Intel's junkyard, and they had everything arranged just how they liked it, damnit, and you paid what they were asking for their castoffs. And now they struggle to sell their bum silicon off.

https://twitter.com/TechEpiphany/status/1215016030868852736

SwissArmyDruid fucked around with this message at 15:00 on Jan 16, 2020

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
This is going to segue into a more general question about technology and manufacturing:

As I understand it, and I could be dead wrong, the basic idea is that you have a wafer of silicon, and then the pattern/design for a chip gets "etched" onto the silicon by a process that, by now, is EUV, and then the wafer gets cut up into "dies", and each die represents a chip.

All of these chips are made from the same pattern, but each individual chip is not going to perform the same way, and that's what gets branched off into different SKUs: some chips will have bad cores so you sell them as CPUs with fewer cores, some will clock better than others so you "bin" them into a different SKU with a higher base/default clock, and so on.

Is that about right?

And if I'm following correctly what y'all are saying, the desktop CPUs and the laptop CPUs are all from the same manufacturing process, and separating them into each of these two categories is just another "binning distinction" (my term), where you pick the chips that consume low amounts of power (is that what "low leakage" is?) and use them for laptops, and then you pick the chips that consume (relatively) lots of power and use them for desktops (since you can always throw more cooling at a desktop CPU).

And then, HEDT CPUs and server CPUs are all from the same manufacturing process themselves, where you pick the chips that consume low amounts of power and use them for servers (since servers need to be relatively cooler, since they rely on passive cooling on a rack), and then you pick the chips that consume lots of power and use them for HEDT.

Is that about right?

I guess the part that's still blowing my mind is the idea that the laptop CPUs that consume as little as 15W or less are, ultimately, very similar to if not the same as the desktop CPUs, except maybe with some internally programmed limiters to keep it that way?

EDIT: I guess another question would be whether the above paradigm that I'm describing is specific to Intel, at least as far as SwissArmyDruid's suggestion that the desktop/HEDT CPUs are all just cast-offs.

gradenko_2000 fucked around with this message at 15:22 on Jan 16, 2020

Vanagoon
Jan 20, 2008


Best Dead Gay Forums
on the whole Internet!
I've wondered how the CPU model number/designation is set. If they're trying to make a bunch of the same chip but have to bin down some of them, is this done by wiring it up differently in the fiberglass (?) PCB the CPU is soldered to, or do they blow fuses on the chip itself, or what?

It would be really interesting to know how exactly they disable faulty cache/cores/etc and make what would have been a certain model into another one, I mean.

Blorange
Jan 31, 2007

A wizard did it

gradenko_2000 posted:

I guess the part that's still blowing my mind is the idea that the laptop CPUs that consume as little as 15W or less are, ultimately, very similar to if not the same as the desktop CPUs, except maybe with some internally programmed limiters to keep it that way?

Remember that the relationship between clock speed, voltage and power use is not linear. Lowering the stock voltage and clock speed of a chip dramatically reduces how much power you end up using on the same silicon. The clockspeed boosting algorithms also take TDP into account, so a laptop chip will apply boost for a shorter duration to conserve power.
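
As a back-of-the-envelope illustration of that non-linearity (the operating points below are invented; dynamic power goes roughly as C x V^2 x f, and leakage is ignored):

code:
# dynamic power scales roughly as C * V^2 * f; since voltage usually drops with
# clocks, power falls much faster than performance does (made-up operating points)
def relative_power(volts, ghz, v0=1.25, f0=4.5):
    return (volts / v0) ** 2 * (ghz / f0)

print(relative_power(1.25, 4.5))  # 1.00  -> desktop-ish operating point
print(relative_power(0.85, 2.8))  # ~0.29 -> ~62% of the clock for ~29% of the power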

SwissArmyDruid
Feb 14, 2014

by sebmojo

gradenko_2000 posted:

This is going to segue into a more general question about technology and manufacturing:

As I understand it, and I could be dead wrong, the basic idea is that you have a wafer of silicon, and then the pattern/design for a chip gets "etched" onto the silicon by a process that, by now, is EUV, and then the wafer gets cut up into "dies", and each die represents a chip.

All of these chips are made from the same pattern, but each individual chip is not going to perform the same way, and that's what gets branched off into different SKUs: some chips will have bad cores so you sell them as CPUs with fewer cores, some will clock better than others so you "bin" them into a different SKU with a higher base/default clock, and so on.

Is that about right?
In general terms, correct.

quote:

And if I'm following correctly what y'all are saying, the desktop CPUs and the laptop CPUs are all from the same manufacturing process, and separating them into each of these two categories is just another "binning distinction" (my term), where you pick the chips that consume low amounts of power (is that what "low leakage" is?) and use them for laptops, and then you pick the chips that consume (relatively) lots of power and use them for desktops (since you can always throw more cooling at a desktop CPU).
"low leakage" gets used here as a casual shorthand for "in order to hold stable clocks, they do not need the voltage turned up to levels that are not suitable for the mobile market"

Of course, since it's desktop, the TDPs allowed are far higher, so they can also just goose the poo poo out of the voltage on these chips, and boost their base and boost frequencies far above what is sensible for the mobile market, again.

quote:

And then, HEDT CPUs and server CPUs are all from the same manufacturing process themselves, where you pick the chips that consume low amounts of power and use them for servers (since servers need to be relatively cooler, since they rely on passive cooling on a rack), and then you pick the chips that consume lots of power and use them for HEDT.

Is that about right?
Or have too many bum cores, or whatever.

quote:


I guess the part that's still blowing my mind is the idea that the laptop CPUs that consume as little as 15W or less are, ultimately, very similar to if not the same as the desktop CPUs, except maybe with some internally programmed limiters to keep it that way?

EDIT: I guess another question would be whether the above paradigm that I'm describing is specific to Intel, at least as far as SwissArmyDruid's suggestion that the desktop/HEDT CPUs are all just cast-offs.

I answered this pre-emptively, it seems, but because the maximum desktop TDP is "as high as the socket will take without melting", they can really goose the poo poo out of processors to make them do whatever they want them to do. See Intel's forthcoming FX-9590 10-core Comet Lake desktop processors, which are reported to draw up to 300W, and the already existing i9-9900 variants that hit 5 GHz and suck 250W+.

And yes, this paradigm is specific to Intel, because nobody else shares the same kind of broad market portfolio that Intel does, encompassing laptops, desktop, and server. This might change in the future as ARM slowly wombles along, but it's been the year of the ARM server for half a decade now, and promises of Windows on ARM for at least twice as long, so......

SwissArmyDruid fucked around with this message at 20:58 on Jan 16, 2020

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Blorange posted:

Remember that the relationship between clock speed, voltage and power use is not linear. Lowering the stock voltage and clock speed of a chip dramatically reduces how much power you end up using on the same silicon. The clockspeed boosting algorithms also take TDP into account, so a laptop chip will apply boost for a shorter duration to conserve power.

The TDP is also not a true reflection of maximal power draw on modern processors; it's just a configurable limit on sustained draw that the processor won't exceed.

A 15W laptop chip isn't twice as efficient as a desktop chip or whatever, it just will throttle when you try to run an all-core load. You can take a desktop chip and set a lower power limit and then it will be "more efficient" too, because it's running in the efficiency sweet spot re clockspeed/voltage/power like you say.

Laptop chips are somewhat better binned than desktop, but remember they make up a huge amount of Intel's volume so not every shittop can have a 9900KS-tier bin. A midrange laptop processor probably isn't hugely better than a midrange desktop processor with an equivalent power limit.

Also note that Intel does not die harvest their cores at all - all 2C dies are real 2C dies with no disabled cores, all 4Cs are 4C dies with no disabled cores, etc. Depending on yields, this can be better for fab throughput than having to fab bigger chips and turn off cores to satisfy market demand. And 14++ is super super mature at this point. The only thing they die harvest is bad GPUs which turn into -F parts or lower-tier parts with smaller GPUs.

This means laptop and desktop binning streams mostly don't overlap for Intel. Desktop chips don't use the 2C dies (except for Pentium/Celeron) and the workstation mobile processors are pretty low volume. So it's not like the best 80% become laptop chips and the garbage becomes 9700Ks. There exist lovely laptop chips too, by necessity. You just hide that in your specifications: you set a voltage/frequency target that pretty much every chip should be able to meet.
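
To make the "TDP is just a configurable limit" point concrete, here's a toy governor in Python: boost above the sustained limit is allowed until a running average of package power catches up. The PL1/PL2/tau values and the averaging are simplified stand-ins, not Intel's exact algorithm.

code:
# toy power-limit governor: PL2 boost is allowed only while a running average
# of package power stays under the sustained limit PL1 (simplified illustration)
PL1, PL2, TAU = 15.0, 44.0, 28.0   # sustained limit (W), boost limit (W), averaging window (s)

avg = 0.0                          # running average of package power
for second in range(60):
    cap = PL2 if avg < PL1 else PL1       # throttle once the average reaches PL1
    power = cap                           # pretend an all-core load always pins the cap
    avg += (power - avg) / TAU            # exponentially-weighted moving average
    if second % 10 == 0:
        print(f"t={second:2d}s  cap={cap:.0f} W  avg={avg:.1f} W")

Same silicon, different fuse settings for PL1/PL2, and you get the 15W part that boosts briefly and then settles versus the desktop part that just stays up there.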

Paul MaudDib fucked around with this message at 21:32 on Jan 16, 2020

Indiana_Krom
Jun 18, 2007
Net Slacker
Some random info about desktop parts vs mobile parts: they may not be cut from the same wafers at all. Transistors can be optimized in multiple ways, but basically it comes down to a trade-off between performance and energy consumption. Generally speaking, higher-performing transistors also consume more energy (because of higher leakage). The important thing to remember about leakage is that transistors are analog devices, so a transistor that is very high performance and can switch between its on and off states very quickly will likely leak a considerable amount of current during its "off" state, because in order to attain that performance the difference between on and off is tiny. And on the other side of things, a transistor that leaks very little energy in its off state is probably not going to perform very well, because the electrical field that is responsible for switching is likely significantly larger and more powerful and will take a lot longer to charge or discharge. This optimization happens at the design and process phases for the transistors, so binning alone cannot explain the variance between most mobile and desktop processors. No matter how aggressively you bin and downclock a desktop processor, those high-performance transistors will always leak an unsustainable amount of energy for a mobile device. And a leakage-optimized mobile transistor will never be able to switch as quickly as that desktop processor no matter how much voltage and current you try to ram through it.
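
The textbook first-order version of that trade-off (generic device equations, not Intel's numbers): off-state leakage grows exponentially as you lower the threshold voltage, while drive current, and hence switching speed, only grows roughly polynomially with the overdrive.

\[ I_{\mathrm{off}} \propto 10^{-V_{th}/S}, \qquad S \approx 60\text{--}100\ \mathrm{mV/decade} \]
\[ I_{\mathrm{on}} \propto (V_{DD} - V_{th})^{\alpha}, \qquad \alpha \approx 1\text{--}2 \]

So shaving, say, 100 mV off V_th buys a modest gain in drive current but can cost an order of magnitude or more in off-state leakage, which is why a leakage-optimized mobile transistor and a speed-optimized desktop transistor are different design/process choices rather than different bins of the same thing.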

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy

Thank you! This has been very educational.

AARP LARPer
Feb 19, 2005

THE DARK SIDE OF SCIENCE BREEDS A WEAPON OF WAR

Buglord
I agree, great stuff. Thank you.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Indiana_Krom posted:

Some random info about desktop parts vs mobile parts: they may not be cut from the same wafers at all. Transistors can be optimized in multiple ways, but basically it comes down to a trade-off between performance and energy consumption. Generally speaking, higher-performing transistors also consume more energy (because of higher leakage). The important thing to remember about leakage is that transistors are analog devices, so a transistor that is very high performance and can switch between its on and off states very quickly will likely leak a considerable amount of current during its "off" state, because in order to attain that performance the difference between on and off is tiny. And on the other side of things, a transistor that leaks very little energy in its off state is probably not going to perform very well, because the electrical field that is responsible for switching is likely significantly larger and more powerful and will take a lot longer to charge or discharge. This optimization happens at the design and process phases for the transistors, so binning alone cannot explain the variance between most mobile and desktop processors. No matter how aggressively you bin and downclock a desktop processor, those high-performance transistors will always leak an unsustainable amount of energy for a mobile device. And a leakage-optimized mobile transistor will never be able to switch as quickly as that desktop processor no matter how much voltage and current you try to ram through it.

What you're describing is a different process (e.g. some of the mobile vs HPC nodes, I forget whose) with different design rules. I'm not sure I agree that mobile Kaby Lake and desktop Kaby Lake/Coffee Lake/Comet Lake cores are inherently different designs/masks/etc. Different in minor aspects, sure; designed for different processes, no. Intel's uarch was tied too closely to their process.

They literally couldn't have ported it to a different node if they tried - and they did try. "We're tied too closely to the node" is a tacit admission there.

Broadwell sucked, and a bunch of that was the early node. Broadwell-E shows it: compared to Haswell-E at a given core count, per-clock performance went up a bit but clocks dropped a ton, so total performance didn't improve much given the design/shrink/etc. It's also a very interesting reference point for a DDR4-capable Haswell/Broadwell and other such hypotheticals.

Skylake was a big refinement of the uarch and the node, that was 14+. Kaby is 14++, it's cleaning up the skylake uarch for the new node and higher timings. All Coffee Lake and Comet Lake designs are Kaby with more cores stamped out, plus some core stepping increment.

You may note I'm presenting this in terms of uarch and process steps together. It's really hard to do what Intel is claiming and be "process portable". I'm kinda mystified how that works.

Paul MaudDib fucked around with this message at 14:13 on Jan 17, 2020

silence_kit
Jul 14, 2011

by the sex ghost

Paul MaudDib posted:

What you're describing is a different process (eg some of the mobile vs HPC nodes, I forget from who) with different design rules.

Indiana_Krom is mainly describing how transistor sensitivity to input voltage is somewhat fundamental, so you trade off transistor on-current density (and thus speed) with off-state leakage current density (and thus leakage power dissipation) by changing the transistor threshold voltage.

I don’t work for Intel, nor do I actually work in the computer chip industry, so I can only speculate and go off what I have read, but I suspect that Intel has multiple threshold voltage transistor options in their recent processes. So I believe that to perform this trade off, Intel would not need to switch processes. They would need to switch designs though—there is not a way to dynamically change transistor threshold voltage after manufacture and testing.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

silence_kit posted:

Indiana_Krom is mainly describing how transistor sensitivity to input voltage is somewhat fundamental, so you trade off transistor on-current density (and thus speed) with off-state leakage current density (and thus leakage power dissipation) by changing the transistor threshold voltage.

I don’t work for Intel, nor do I actually work in the computer chip industry, so I can only speculate and go off what I have read, but I suspect that Intel has multiple threshold voltage transistor options in their recent processes. So I believe that to perform this trade off, Intel would not need to switch processes. They would need to switch designs though—there is not a way to dynamically change transistor threshold voltage after manufacture and testing.

I don't know of any evidence that Intel has a different stepping of skylake cores with a different library for mobile transistors

I would believe such a library exists for things like Atom/Denverton cores though

Paul MaudDib fucked around with this message at 16:39 on Jan 17, 2020

cycleback
Dec 3, 2004
The secret to creativity is knowing how to hide your sources
I'm looking at building a workstation based on an i9-10900X processor for some simulation code that needs both high single-threaded CPU performance and memory bandwidth (4+ memory channels) with at least 8 cores. Fast memory might help with the simulations I am planning. Some of the simulation codes were likely compiled with Intel's MKL.

Does anyone have any motherboard recommendations for the i9-10900X?

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Paul MaudDib posted:

I don't know of any evidence that Intel has a different stepping of skylake cores with a different library for mobile transistors

The evidence of a different stepping (more precisely, mask set, cell library, cell library usage choices, and possibly even process recipe) is that Intel often does an entirely different tapeout for true mobile chip segments (by which I mean under 30W TDP). For example, if we look at parts codenamed Whiskey Lake, there are no desktop options, just mobile (and low power embedded, which is the same thing), all 15W:

https://ark.intel.com/content/www/us/en/ark/products/codename/135883/whiskey-lake.html

Where you do find binning to differentiate mobile and desktop is the high end of mobile TDP, the ~45W bin, with mainstream high performance desktop. See for example Coffee Lake:

https://ark.intel.com/content/www/us/en/ark/products/codename/97787/coffee-lake.html

For example the 9900K (95W 8-core) and 9980HK (45W 8-core) are almost certainly the same, just binned differently, with different turbo and TDP limits programmed into fuse bits at the factory. (Since they're both "k", you can of course override those limits and make one behave much like the other.)

Back to the process thing, it's pretty normal in the industry to develop differentiated recipes and cell libraries on the same node. Intel is no exception, e.g. they offer at least two versions of 14nm to foundry customers (14GP and 14LP, general purpose or low power).

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

BobHoward posted:

The evidence of a different stepping (more precisely, mask set, cell library, cell library usage choices, and possibly even process recipe) is that Intel often does an entirely different tapeout for true mobile chip segments (by which I mean under 30W TDP). For example, if we look at parts codenamed Whiskey Lake, there are no desktop options, just mobile (and low power embedded, which is the same thing), all 15W:

https://ark.intel.com/content/www/us/en/ark/products/codename/135883/whiskey-lake.html

Where you do find binning to differentiate mobile and desktop is the high end of mobile TDP, the ~45W bin, with mainstream high performance desktop. See for example Coffee Lake:

https://ark.intel.com/content/www/us/en/ark/products/codename/97787/coffee-lake.html

For example the 9900K (95W 8-core) and 9980HK (45W 8-core) are almost certainly the same, just binned differently, with different turbo and TDP limits programmed into fuse bits at the factory. (Since they're both "k", you can of course override those limits and make one behave much like the other.)

Back to the process thing, it's pretty normal in the industry to develop differentiated recipes and cell libraries on the same node. Intel is no exception, e.g. they offer at least two versions of 14nm to foundry customers (14GP and 14LP, general purpose or low power).

I thought it was common knowledge that the reason Intel limited mainstream desktop chips to 4 cores pre-Ryzen was that they just harvested leaky mobile processors. It's also why they always had an iGPU despite it being pointless on most desktops.

silence_kit
Jul 14, 2011

by the sex ghost

Malcolm XML posted:

I thought it was common knowledge that the reason Intel limited mainstream desktop chips to 4 cores pre-Ryzen was that they just harvested leaky mobile processors. It's also why they always had an iGPU despite it being pointless on most desktops.

If you go to wikichip.org, you can find die images for Haswell, Skylake, etc. designs. The web site shows two different die images for Intel's 2 core (primarily laptop) & 4 core (primarily desktop) products for many of its generations. This suggests that Intel does not harvest leaky mobile processors to create its desktop products. If Intel harvested leaky mobile processors to create its desktop products, there would only be one die image for the 2 core & 4 core products.

Paul MaudDib is right though in the quote below: there is no evidence on wikichip.org that the circuit designs for each core are different for the 2 core & 4 core products. That does not preclude the possibility that the core designs are different though for the 2 core & 4 core products.

Paul MaudDib posted:

I don't know of any evidence that Intel has a different stepping of skylake cores with a different library for mobile transistors

I would believe such a library exists for things like Atom/Denverton cores though

JawnV6
Jul 4, 2004

So hot ...
wow, pictures from a wiki? wrap it up chipailures

silence_kit
Jul 14, 2011

by the sex ghost
Lol nowhere did I claim that they were authoritative, or that I am engaging in anything other than speculation

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

silence_kit posted:

If you go to wikichip.org, you can find die images for Haswell, Skylake, etc. designs. The web site shows two different die images for Intel's 2 core (primarily laptop) & 4 core (primarily desktop) products for many of its generations. This suggests that Intel does not harvest leaky mobile processors to create its desktop products. If Intel harvested leaky mobile processors to create its desktop products, there would only be one die image for the 2 core & 4 core products.

Paul MaudDib is right though in the quote below: there is no evidence on wikichip.org that the circuit designs for each core are different for the 2 core & 4 core products. That does not preclude the possibility that the core designs are different though for the 2 core & 4 core products.

Fair enough. The iGPU also makes more sense for nucs and business machines but I'd guess everyone's moved to laptops these days.

Cygni
Nov 12, 2005

raring to post

cycleback posted:

I'm looking at building a workstation based on an i9-10900X processor for some simulation code that needs both high single-threaded CPU performance and memory bandwidth (4+ memory channels) with at least 8 cores. Fast memory might help with the simulations I am planning. Some of the simulation codes were likely compiled with Intel's MKL.

Does anyone have any motherboard recommendations for the i9-10900X?

Honest answer is basically nobody is buying 10900Xs at the moment, or the X299 platform in general, so you probably won't get many people with strong opinions/experience. You might want to run some tests to determine whether single-core speed or memory bandwidth is more important. If the answer leans more single core, a 9900KS will almost certainly be much faster and cheaper all-in. A lot of workloads end up being faster on the KS than the 10900X due to the much higher clock speeds.

If it really is a balance of the two though and you need a 10900X, the X299 platform is a bit of a mess unfortunately. The brands released new board revisions for the 10 series, but often with the same drat names as the old stuff. And a lot of the X299 stock on the market is super old (like 2 years) and doesn't have BIOS revisions that support 10th Gen. So for old boards, you want one with BIOS flashback.

If you don't want to deal with that, one way to find new boards is to sort on PCPartPicker for boards that support 256GB of RAM. That will bring up only the newer revisions of boards. The ASRock Steel Legend is the cheapest option, and has more than enough VRM to handle a 10900X. I have no personal experience with it, but it might be a good option to look at.

There is also the AMD Threadripper 3960X, which is quad-channel and offers a ton more cores if your workload scales at all. But it's probably $1k more expensive with the board.

Sorry I can't be more helpful!
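
If you want a cheap way to get a feel for whether the code is bandwidth-bound before committing to X299, a crude STREAM-style triad in numpy is one option (a rough probe only; the array size is arbitrary, numpy's temporaries understate the true traffic, and profiling the actual simulation is the real answer):

code:
# crude STREAM-style triad to get a rough read on effective memory bandwidth
import time
import numpy as np

n = 100_000_000                     # ~0.8 GB per float64 array; shrink if RAM is tight
a = np.zeros(n)
b = np.random.rand(n)
c = np.random.rand(n)

start = time.perf_counter()
a[:] = b + 2.5 * c                  # triad: read b and c, write a
elapsed = time.perf_counter() - start

bytes_moved = 3 * n * 8             # two reads + one write of float64 per element
print(f"~{bytes_moved / elapsed / 1e9:.1f} GB/s effective (understates hardware peak)")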

Cygni fucked around with this message at 05:42 on Jan 22, 2020

Ika
Dec 30, 2004
Pure insanity

We are looking into a similar setup right now, but haven't finalized parts :/ We are sticking with Intel for MKL support, and grabbing a low-end late-2019 release board + 128 or 256GB RAM.


The iGPU is nice because you don't need a dedicated graphics card, and desktops are still cheaper and support more storage/memory/add-on cards than laptops.

Ika fucked around with this message at 19:56 on Jan 22, 2020

eames
May 9, 2009

Tom's Hardware reports that Comet Lake-S was supposed to launch with PCIe 4.0 and all components support it, but they allegedly ran into problems with the chipset.

https://www.tomshardware.com/news/intel-gets-the-jitters-plans-then-nixes-pcie-40-support-on-comet-lake

FRINGE
May 23, 2003
title stolen for lf posting

gradenko_2000 posted:

While I get that Intel puts iGPUs in a lot of their CPUs so that you can run an office machine without needing a dedicated GPU, what is the use-case for putting an iGPU in something like an i5 or an i7? You wouldn't expect someone to get a quad-core (or more) just as an office/browsing machine, but at the same time, any gamer/enthusiast/content-producer is going to get a dedicated GPU anyway, especially since the Intel iGPUs are not powerful enough to really be a stand-alone solution the way AMD's APUs are/were intended to be.

I guess it would make sense for laptops, but I checked and even something like an i5-6600k for the desktop has an iGPU. Is it just a production thing where it's not worth "removing" the iGPU?

If you're building out sets of office machines that need some power for loads of parallel tasks, but the only display power they need is at the level of browsers, Office, and maybe VS/SQL, the integrated stuff with a mediocre motherboard is fine.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

FRINGE posted:

If you're building out sets of office machines that need some power for loads of parallel tasks, but the only display power they need is at the level of browsers, Office, and maybe VS/SQL, the integrated stuff with a mediocre motherboard is fine.

it was a big gamechanger when the intel integrated stuff started supporting dual-displays. there was a $100/box line item that everyone could drop overnight

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

BobHoward posted:

Back to the process thing, it's pretty normal in the industry to develop differentiated recipes and cell libraries on the same node. Intel is no exception, e.g. they offer at least two versions of 14nm to foundry customers (14GP and 14LP, general purpose or low power).

Even on a given node, you can still do a lot of tweaking to the basic FinFET transistors via doping, length and width of the channel, etc. That way when you power and clock gate different areas of the chip, you can have less leaky but slower stuff for things that don't need to run at the full core speed, and save more of your TDP budget for the cores and cache, where a lot of the magic happens.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"
Not that it's a shock, but the optics of this: https://www.techpowerup.com/263318/intel-400-series-chipset-motherboards-to-lack-pcie-gen-4-0-launch-pushed-to-q2

eames
May 9, 2009

I'm sure motherboard manufacturers are thrilled that they had to invest in developing (and perhaps even manufacturing) boards with PCIe 4.0 compatibility when they didn't need to. The GN guy said he already saw 400 series boards back in December.
