forbidden dialectics
Jul 26, 2005





Palladium posted:

man have some sympathy for the small 10M+ sub channel who can't afford to get a real 9700K

No idea how, he's got to be one of the most obnoxious people on youtube. The PewDiePie of hardware reviews.

Insha'allah, but I'm contemplating delidding and running the 9900k bare die. I've been doing this with my 4700k for like 4 years and it's been great. You're already cracking a $500 CPU in a fancy vice, what's another hour of dremelling some plastic on the motherboard socket?? Get your calipers and start stacking tiny washers, my dudes.

forbidden dialectics fucked around with this message at 07:45 on Oct 22, 2018


PC LOAD LETTER
May 23, 2005
WTF?!

Winks posted:

They've known that their 10 nm fabs were not going to be up by now for quite some time.
Yup, and as you noted there have been signs of shortages for a while now, but it takes a long time to transition anything at the fab level, especially for something like a modern high-performance CPU. There were stories a few weeks or a month ago about Intel moving production of their chipsets and other stuff off of their 14nm+++ lines, but Intel was probably planning to do this back in early 2018 at least.

Probably even earlier than that, like late 2017, right after it became apparent they weren't going to be able to make any miracles happen with their 10nm process and rumors started popping up about those half-busted (no working iGPU), low-clocked 10nm chips being sold in China on the down low.

eames
May 9, 2009

Hyperthreading's benefits have always been somewhat dubious for games: some benefit from it, some are negatively impacted. The move to 8 physical cores likely makes it even less useful. My personal experience is that systems with HT enabled feel far more responsive under load, so given the option I'd always leave HT on. But anyway, this is LTT we are talking about. :rolleyes:


I believe the far more interesting question is why Intel (or their board partners) chose not to enforce the turbo power limits on the 9900K by default. Limiting it to 95W instead of leaving it unlimited (200W+) generally decreases performance in productivity applications by only 10-15%, while it reduces power consumption by 50% and eliminates all thermal issues.
Average power consumption in games is around 75W, so with a 95W limit you only lose 1-2 FPS on average and about 3 FPS on the 99th percentile frametimes. Note that you can still overclock the CPU within those power limits.
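
If you want to try the capped configuration yourself on Linux, a minimal sketch like the one below works against the intel-rapl powercap interface in sysfs. This assumes the usual /sys/class/powercap/intel-rapl:0 package domain (an assumption; check it on your box), needs root to write, and the board firmware can still override whatever you set here.

code:

from pathlib import Path

# Package-0 RAPL domain under the standard Linux powercap layout (an assumption;
# `cat /sys/class/powercap/intel-rapl:0/name` should say "package-0").
PKG = Path("/sys/class/powercap/intel-rapl:0")

def read_uw(name):
    # sysfs reports power limits in microwatts
    return int((PKG / name).read_text())

def show_limits():
    # constraint_0 is the long-term limit (PL1), constraint_1 the short-term one (PL2)
    for i in (0, 1):
        label = (PKG / f"constraint_{i}_name").read_text().strip()
        watts = read_uw(f"constraint_{i}_power_limit_uw") / 1e6
        print(f"{label}: {watts:.0f} W")

def cap_long_term(watts):
    # Set PL1, e.g. cap_long_term(95) for the advertised TDP. Needs root,
    # and firmware/BIOS settings can still clamp or override this.
    (PKG / "constraint_0_power_limit_uw").write_text(str(int(watts * 1e6)))

if __name__ == "__main__":
    show_limits()
    # cap_long_term(95)  # uncomment to actually enforce the 95 W limit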

This 9x00K series is another perfect example of chips running way past their efficiency curves to squeeze out those last few percent of pointless and barely noticeable benchmark performance in order to look good in the YouTube chart slides, just like Vega, Pascal and even Ryzen to a lesser degree.
I can only assume that the reason Intel doesn't lock things down like Nvidia (very limited power and voltage limits that board partners can't modify) is that they're scared of Zen 2.

some sources:

application and gaming performance with 95W turbo limit:
https://www.computerbase.de/2018-10/intel-core-i9-9900k-i7-9700k-cpu-test/3/#abschnitt_benchmarks_mit_95wattpowertarget

more slides with applications and games and the 95W turbo limit:
https://www.golem.de/news/core-i9-9900k-im-test-acht-verloetete-5-ghz-kerne-sind-extrem-1810-136974-5.html

power efficiency: (last image :lol:)
https://www.techpowerup.com/reviews/Intel/Core_i9_9900K/16.html

eames fucked around with this message at 11:43 on Oct 22, 2018

TheCoach
Mar 11, 2014
Yeah it does seem that if you just leave it stock it will be fine. OC'ing this thing is just rolling coal but for a computer.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Xerophyte posted:

I don't mean that it's a yield improvement because they can disable any shorted or otherwise nonfunctional execution schedulers; I certainly expect that would mean disabling the physical core. I mean that they're setting fixed thermal, clock and power targets and you get a higher yield for those with hyperthreading disabled. If that is the case, it's a marketing decision to prioritize thermals, power and base clock over HT on the i7, but the product line specifics are always a marketing decision.

Sorry for the misunderstanding, I took your post as being about defects. I agree there is binning going on to make the 9900K. In modern times they're often binning on power efficiency instead of or in addition to Fmax; the die with the lowest leakage and/or best Hz/volt can hit higher clocks without exceeding TDP.

For that reason I agree that the i7 HT decision could've been about prioritizing clocks over HT; it's plausible that yield distributions required disabling HT to yield enough high frequency parts that would stay inside 95W TDP.
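
As a back-of-the-envelope illustration of why the better Hz-per-volt dies are the ones that can hold high clocks inside a TDP: dynamic power scales roughly with C·V²·f, so a die that hits the same clock at slightly lower voltage saves power quadratically. The numbers below are purely illustrative, not real 9900K figures.

code:

# Dynamic power scales roughly with C * V^2 * f; leakage is ignored here.
def dynamic_power(c_eff, volts, freq_ghz):
    return c_eff * volts ** 2 * freq_ghz

C_EFF = 12.0  # effective switched capacitance, arbitrary illustrative units

good_die = dynamic_power(C_EFF, 1.20, 4.7)   # hits 4.7 GHz at 1.20 V
worse_die = dynamic_power(C_EFF, 1.30, 4.7)  # needs 1.30 V for the same clock

print(f"good die : {good_die:.0f} (arbitrary units)")
print(f"worse die: {worse_die:.0f} (arbitrary units)")
print(f"penalty  : {worse_die / good_die - 1:.0%}")  # ~17% more dynamic power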

Vanagoon posted:

I'm curious as to what happens to completely bad dies. Do they shred them up before they dispose of them, to make it harder for anyone to study them and steal info from/about them, or is there some process to reclaim valuable elements from them (like the copper wiring; I imagine after a while they would be throwing away a lot of copper that might be worth saving)?

On the last point, there's not enough metal to be worth recycling. Most defect testing happens on the wafer or just after dicing, prior to packaging, and there are several orders of magnitude more copper in the heatspreader slug that's part of a finished, packaged CPU. By mass, I'd expect the die is >99.9% silicon and silicon dioxide.

One consequence of there being so little mass of any material that's not highly refined sand is that they have no shame about using super rare metals. For example, Intel used hafnium as a gate material at their 45nm node (and 32nm too I think).

I wouldn't be surprised if most of Intel's scrap gets shredded, but not to stop people from peeking at their circuits. There are commercial outfits which provide this kind of analysis as a service, and they just buy retail samples AFAIK. Avoids legal issues, and it's cheap compared to the cost of doing the work. Intel might care about trying to prevent it from happening prior to product launch, but once chips are being sold to the public there's absolutely no way to stop it.

The more likely reason to grind up scrap: it's super cheap, and it leaves zero possibility of any of it becoming counterfeit CPUs through black markets.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

eames posted:

Hyperthreading's benefits have always been somewhat dubious for games: some benefit from it, some are negatively impacted. The move to 8 physical cores likely makes it even less useful. My personal experience is that systems with HT enabled feel far more responsive under load, so given the option I'd always leave HT on. But anyway, this is LTT we are talking about. :rolleyes:
The Windows scheduler has been SMT aware for quite a long while now, so there isn't really any reason to disable Hyperthreading, unless you're also running a lot of crap in the background, leading to the logical cores being loaded eventually. But the same would happen with Hyperthreading disabled, because if a lot of stuff is ready to run, it'll hit your cores anyway and mess with your framerates.
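
If you want to sanity-check whether SMT/HT is actually enabled on the box you're benchmarking, a quick sketch is below; it assumes the third-party psutil package is installed, which is an extra dependency.

code:

import psutil  # third-party: pip install psutil

logical = psutil.cpu_count(logical=True)
physical = psutil.cpu_count(logical=False)

print(f"{physical} physical cores, {logical} logical CPUs")
if physical and logical and logical > physical:
    print("SMT/HT is enabled")
else:
    print("SMT/HT is disabled (or couldn't be determined)")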

Llamadeus posted:

Extremely let down by youtube channel Linus Tech Tips, exemplars of scientific rigour.
I'm not entirely sure why testing idle temperatures and power draw doesn't involve Linux, because you can make it reliably shut the gently caress up, unlike Windows, which runs maintenance jobs in the background when it thinks the user is away. I mean, it was a point of contention in that video.
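
For what it's worth, logging idle package power on Linux is only a few lines against the RAPL energy counter. Rough sketch, assuming the standard /sys/class/powercap/intel-rapl:0 package domain exists on your system and ignoring counter wraparound:

code:

import time
from pathlib import Path

# Package energy counter in microjoules (standard powercap path; may differ per system)
ENERGY = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

def package_watts(interval=1.0):
    e0 = int(ENERGY.read_text())
    time.sleep(interval)
    e1 = int(ENERGY.read_text())
    return (e1 - e0) / 1e6 / interval  # uJ -> J, then J/s = W (counter wrap not handled)

for _ in range(10):
    print(f"package: {package_watts():.1f} W")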

Combat Pretzel fucked around with this message at 11:55 on Oct 22, 2018

TheCoach
Mar 11, 2014
Linus tested on a 4-phase mobo that limited the CPU TDP and throttled it. Drop a 9900K into a proper motherboard and it'll draw 150 watts without touching any overclocking settings.
Hilariously enough, both numbers are considered "in spec" because the base freq is only 3.6 GHz.

https://www.youtube.com/watch?v=NGHiRrQ2AAo
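
A rough model of why both numbers count as "in spec": TDP only binds at base clock, and turbo behaviour is governed by PL1/PL2 plus a time constant tau that board vendors are free to raise or simply ignore. The ~119 W and 28 s figures below are just the commonly reported defaults for this part, not anything guaranteed, and the moving-average scheme is a simplification.

code:

PL1 = 95.0        # long-term limit, W (the advertised TDP)
PL2 = 1.25 * PL1  # short-term limit, ~119 W (commonly cited recommendation, not a guarantee)
TAU = 28.0        # averaging time constant, s (commonly cited recommendation)

def turbo_seconds(draw_watts, dt=0.1):
    # Exponentially weighted moving average of package power, starting from idle.
    # The chip may exceed PL1 until this average reaches PL1, then must back off.
    avg, t = 0.0, 0.0
    while avg < PL1:
        avg += (dt / TAU) * (draw_watts - avg)
        t += dt
    return t

print(f"at ~{PL2:.0f} W: about {turbo_seconds(PL2):.0f} s of turbo before PL1 kicks in")
print(f"at 150 W:   about {turbo_seconds(150):.0f} s")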

mewse
May 2, 2006

forbidden dialectics posted:

Insha'allah, but I'm contemplating delidding and running the 9900k bare die.

Removing the solder looks like a loving nightmare. Steve from gamersnexus was doing it to the bare die with a blade snapped off a box cutter; I kept expecting him to ruin the chip.

repiv
Aug 13, 2009

https://semiaccurate.com/2018/10/22/intel-kills-off-the-10nm-process/

big if true

Kung-Fu Jesus
Dec 13, 2003

repiv posted:

big if true

At least compared to everyone else on <14nm

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Can anyone explain how a substantially smaller company like AMD can beat Intel to a better process node despite the massive financial war chest Intel has compared to AMD? I know AMD is a fabless chip company but even still I’d been of the belief that cash wins the war when it comes to making chips. I know back in the Athlon days that AMD did real well with a superior architecture (I built almost nothing but AMD boxes for like 6 years until the Core series in 2008) but that doesn’t have anything to do with manufacturing. Is TSMC just better at fabs than Intel via magic secret sauce engineers or something?

ufarn
May 30, 2009
For one, Intel ignored the findings in the reports they commissioned about the state of their fab capability.

Having Jim Keller help you out probably doesn't hurt either.

TheCoach
Mar 11, 2014
It's Intel vs. all the other fabs. Intel can't really choose to make their chips anywhere else, so they end up putting everything in one basket. Intel loving up where at least one of their competitors does not has always been a possibility, and it finally happened.

AMD just has the freedom to choose the winner and fab their chips there.

PC LOAD LETTER
May 23, 2005
WTF?!

necrobobsledder posted:

Can anyone explain how a substantially smaller company like AMD can beat Intel to a better process node despite the massive financial war chest Intel has compared to AMD?
Intel management screwed the pooch for several years in a row trying to get their manufacturing side (Intel TMG) to hit very ambitious and aggressive performance and transistor density metrics for their 10nm process and blew their huge process lead.

necrobobsledder posted:

I’d been of the belief that cash wins the war when it comes to making chips
Incompetence and hubris of the management can apparently combine to make a financial black hole of sorts that cannot be filled with any amount of money.

Perplx
Jun 26, 2004


Best viewed on Orgasma Plasma
Lipstick Apathy
Also, TSMC is ahead because they get that Apple money, which makes up 75% of their business. If Intel had taken Apple's offer to make iPhone chips, they would probably be ahead now.

Sininu
Jan 8, 2014

Perplx posted:

Also, TSMC is ahead because they get that Apple money, which makes up 75% of their business. If Intel had taken Apple's offer to make iPhone chips, they would probably be ahead now.

Oh, I did not know they made chips for Apple. Any danger of Apple buying them out?

Anime Schoolgirl
Nov 28, 2002

Sininu posted:

Oh, I did not know they made chips for Apple. Any danger of Apple buying them out?
Given that it's only a low double digit percentage of their output at best, no. TSMC has a ton of customers, especially now that they're the leading process fab.

ufarn
May 30, 2009
The GloFo contract running out was also a millstone lifted off AMD's neck.

Sininu posted:

Oh, I did not know they made chips for Apple. Any danger of Apple buying them out?
Apple's aversion to accepting any risk to their extreme profit margins makes that very unlikely. Apple prefers to use suppliers/contractors onto whom they can push all the risk, sometimes to the point of bankruptcy.

Perplx
Jun 26, 2004


Best viewed on Orgasma Plasma
Lipstick Apathy
I ordered a 9900k, hope I didn't make a huge mistake.

Cygni
Nov 12, 2005

raring to post

ufarn posted:

The GloFo contract running out was also a millstone lifted off AMD's neck.

Hasn't run out yet; GloFo just gave up on making high-performance nodes and they agreed to "renegotiate" the agreement.

eames
May 9, 2009

The news isn't entirely unexpected, but I do wonder if Intel has the guts to release another incompatible chipset (or even socket) for 14nm++++++ before 7nm.
Chances are they'll crank CFL up to 12 cores on the ring bus, but with enforced power limits to keep it from melting the copper IHS. :allears:

eames fucked around with this message at 16:33 on Oct 22, 2018

Winks
Feb 16, 2009

Alright, who let Rube Goldberg in here?
Semiaccurate's last big Intel scoop was that Apple dropped Intel as a modem supplier for the new iPhone. Before that it was Apple dropping Intel chips in their laptops.

Winks fucked around with this message at 16:42 on Oct 22, 2018

repiv
Aug 13, 2009

demerjian'd again

https://twitter.com/intelnews/status/1054397715071651841

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

TheCoach posted:

Linus tested on a 4-phase mobo that limited the CPU TDP and throttled it. Drop a 9900K into a proper motherboard and it'll draw 150 watts without touching any overclocking settings.
Hilariously enough, both numbers are considered "in spec" because the base freq is only 3.6 GHz.

https://www.youtube.com/watch?v=NGHiRrQ2AAo

He tested on an Asus Maximus XI Hero, right? That's a doubled 4-phase, which is practically as good as an 8-phase. Also, each of those stages can handle 100A, which is about twice as much as the phases on other boards.

Not saying he didn't manually set a power limit, or that there aren't BIOS shenanigans, but a Maximus Hero should be capable of pushing much, much more than 95W. Does not pass the smell test.
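
Back-of-the-envelope on that, using the figures from this post (8 power stages rated around 100A each) and an assumed ~1.25V loaded Vcore; it ignores VRM conversion losses and thermal derating, but the margin is large enough that the conclusion holds anyway.

code:

STAGES = 8            # doubled 4-phase, per the post
AMPS_PER_STAGE = 100  # rating quoted in the post
VCORE = 1.25          # assumed typical loaded Vcore, volts

package_watts = 200   # roughly what an unlimited 9900K pulls in heavy all-core loads
needed_amps = package_watts / VCORE
available_amps = STAGES * AMPS_PER_STAGE

print(f"needed:    ~{needed_amps:.0f} A")
print(f"available: ~{available_amps:.0f} A ({available_amps / needed_amps:.0f}x headroom)")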

Arzachel
May 12, 2012

quote:

For several years now SemiAccurate has been saying that the 10nm process as proposed by Intel would never be financially viable

Pad everything with weasel words and you're never wrong!

mewse
May 2, 2006


This denial reeks of desperation

WhyteRyce
Dec 30, 2001

Winks posted:

Semiaccurate's last big Intel scoop was that Apple dropped Intel as a modem supplier for the new iPhone. Before that it was Apple dropping Intel chips in their laptops.

On the eve of the first Optane drive launch, he also went on some big rant about how it was HORRIBLY BROKEN and Intel was LYING in their benchmarks and endurance was the EXACT SAME as NAND, because he had no understanding of what metadata was vs. traditional over-provisioning. Then the drives launched, no reviewer bothered to substantiate any of his claims (if they even cared to), and he shut up and moved on to a different topic.

I think he was even ranting about how some on-site demo/news release day was full of SUSPICIOUS and UNDERHANDED tactics, all because he wasn't invited, therefore Intel must be hiding something. Then he received an invite and shut up about it.

WhyteRyce fucked around with this message at 17:08 on Oct 22, 2018

Cygni
Nov 12, 2005

raring to post

I mean, I assume Intel did shitcan the “initially proposed” 10nm like three years ago.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Arzachel posted:

Pad everything with weasel words and you're never wrong!

If Demerjian is saying that Intel loosened up some of their design rules to get 10nm out the door... didn't that one come out like a couple of months ago? It looks like SemiAccurate covered it weeks ago, at a minimum.

Sounds like he's going farther here and saying that Intel is skipping 10nm entirely, which... is an idea. Doesn't sound super plausible, but.

WhyteRyce posted:

On the eve of the first Optane drive launch, he also went on some big rant about how it was HORRIBLY BROKEN and Intel was LYING in their benchmarks and endurance was the EXACT SAME as NAND, because he had no understanding of what metadata was vs. traditional over-provisioning. Then the drives launched, no reviewer bothered to substantiate any of his claims (if they even cared to), and he shut up and moved on to a different topic.

I think he was even ranting about how some on-site demo/news release day was full of SUSPICIOUS and UNDERHANDED tactics, all because he wasn't invited, therefore Intel must be hiding something. Then he received an invite and shut up about it.

Charlie has predicted 7 of the last 3 disasters for both Intel and NVIDIA. He's an old-school ATI dude (I think?) who just absolutely loathes them and to steal a phrase from someone here, you can hear his hateboner ripping through his pants with every breathless post about the imminent disasters that are going to befall them.

Paul MaudDib fucked around with this message at 17:30 on Oct 22, 2018

Hed
Mar 31, 2004

Fun Shoe
I have an Asus Z390-H Strix. Is it going to get normal performance from a 9900K, or should I be exchanging it for a ~Maximus~? I don't understand if the power rail crap people are talking about is for OCing or just stock.

Zorak of Michigan
Jun 10, 2006


Can anyone explain why Intel would just throw the entire process out? I have been part of enough computer screwups to believe that they might have to roll back six months' worth of hacks and try something new to get yields up, or whatever, but getting to a point where they would need a clean sheet of paper seems really odd.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


Intel will undoubtedly use the process to make something that will work with whatever its flaws are. It might not be CPUs but they will do whatever they can to salvage it.

forbidden dialectics
Jul 26, 2005





mewse posted:

Removing the solder looks like a loving nightmare. Steve from gamersnexus was doing it to the bare die with a blade snapped off a box cutter; I kept expecting him to ruin the chip.

Yeah I'm gonna whip out the sandpaper like der8auer, looks like the smallest chance of slicing my fingers off. what is wrong with me

Winks
Feb 16, 2009

Alright, who let Rube Goldberg in here?
They've had a bottom end 10 nm CPU on the market since May, the i3-8121U.

SlowBloke
Aug 14, 2017
One of my local webstores is selling a new-in-box combo of an Asus X299-Deluxe and an i7-7740X for 745€; how does it compare to a Z390 + i9-9900K combo (which is 900€ new)?

craig588
Nov 19, 2005

by Nyc_Tattoo
Very badly; that's the phased-out 4-core X299 model that gets beaten by regular 4-core CPUs. There was no good reason for it to exist.

evilweasel
Aug 24, 2002

Winks posted:

They've had a bottom end 10 nm CPU on the market since May, the i3-8121U.

yeah but that's worse in every respect than the relevant 14nm chip and seems to have been shoved out for no other purpose than to claim that it's in production

Dingwick
May 3, 2007

This is always the highlight of my day.

Hed posted:

I have an Asus Z390-H Strix. Is it going to get normal performance from a 9900K, or should I be exchanging it for a ~Maximus~? I don't understand if the power rail crap people are talking about is for OCing or just stock.

The Strix models can handle roughly 200W TDP while the ~Maximus~ can handle up to 250W. That just means you can push higher voltages, i.e. higher clock speeds, with more confidence on a Maximus.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

evilweasel posted:

yeah but that's worse in every respect than the relevant 14nm chip and seems to have been shoved out for no other purpose than to claim that it's in production

Also to recoup a tiny, tiny bit of cash from the node, and to help debug the many and varied issues with the process.


Winks
Feb 16, 2009

Alright, who let Rube Goldberg in here?
It can produce CPUs. Can it produce good CPUs? We'll see.
