Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Tab8715 posted:

With 10nm delayed, does that mean we won’t see any significant boosts in mobile or laptop performance for the next few years?

Aside from small incremental gains or little things like a new Bluetooth version.

You will once AMD laptops become more common!


Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️
I got a feeling my 4790K will still be shockingly good for a 10 year old chip.

eames
May 9, 2009

The next obvious performance increase for Intel CPUs should be a copy of Nvidia's and AMD's refined boost/factory-OC strategies, along with higher TDPs and maybe further intra-node improvements, if that's a thing. They won't be able to leave much OC headroom on the table when Zen 2 launches with further refined Precision Boost/XFR and a process advantage.
Naturally that won't do the power limited mobile/laptop market much good. Maybe they'll roll out custom silicon/cache using EMIB SiP for companies like Apple but I doubt it.

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money

Palladium posted:

I got a feeling my 4790K will still be shockingly good for a 10 year old chip.

When I was exploring ways to cheaply virtualize, I had a hearty LOL at the fact that the 4790K was the first overclockable consumer chip Intel ever produced with VT-d support.

So yeah, basically you're right. No good reason to move beyond the 4790k. Even Intel haven't done so except by gluing on a couple more cores.

GRINDCORE MEGGIDO
Feb 28, 1985


What's the reason behind expecting 7nm to be a high performing process with good yields, and for it to launch on time? Isn't that a lot of ifs?

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

mcbexx posted:

So what I'm reading from all this is I should sacrifice a goat to make sure my 2500k@4.2 lasts another 18-24 months? :sweatdrop:

get it up to 4.4 you coward

eames
May 9, 2009

GRINDCORE MEGGIDO posted:

What's the reason behind expecting 7nm to be a high performing process with good yields, and for it to launch on time? Isn't that a lot of ifs?

https://www.anandtech.com/show/12677/tsmc-kicks-off-volume-production-of-7nm-chips

GRINDCORE MEGGIDO
Feb 28, 1985


Looking good so far.
LOL how I missed everything about TSMC. :downs:

GRINDCORE MEGGIDO fucked around with this message at 23:26 on May 16, 2018

Cygni
Nov 12, 2005

raring to post

TSMC seems to be the only fab that's really on track with 7nm, tbh (both Ryzen and Vega 7nm are on TSMC). I guess the research alliance isn't scheduled to start volume production for a while though, so maybe they're still on track.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

bobfather posted:

I still think mATX is the best form factor. Even better than ATX, since who (honestly) runs that many expansion cards?

:agreed:, I got some Supermicro X11SPM-F boards in for general-purpose servers and I can’t think of anything missing from them that I would need a full ATX board for. Two full x16 slots and an x8? Check. Metric poo poo ton of SATA ports? Check. M.2 slot? Check. Dual GbE, IPMI, VGA onboard; it’s great.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

bobfather posted:

So yeah, basically you're right. No good reason to move beyond the 4790k. Even Intel haven't done so except by gluing on a couple more cores.
Literally the only reason I’m considering upgrading my 4790K is better NVMe support and maybe USB-C and Thunderbolt. I also have an old E3-1230v1 hanging around, but I’m just tired of it being so... old, rather than it being bad or actually outdated.

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"

BangersInMyKnickers posted:

get it up to 4.4 you coward

:agreed:

I've been running 4.4 GHz now for...*looks*...6.5 years. :cripes:

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


GRINDCORE MEGGIDO posted:

What's the reason behind expecting 7nm to be a high performing process with good yields, and for it to launch on time? Isn't that a lot of ifs?

AFAIK, Intel wasn’t interested in updating their GPU without a new process.

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️

BIG HEADLINE posted:

:agreed:

I've been running 4.4 GHz now for...*looks*...6.5 years. :cripes:

Finding new mobos for Intel's EOLed CPUs, especially the Z series, is always a bitch. Another reason to go AMD next time: new AM4 mobos will probably still be made in 2022, and if there's a new AM4+ with backwards CPU and DDR4 compatibility, even better.

Tab8715 posted:

AFAIK, Intel wasn’t interested in updating their GPU without a new process.

Which is further proof of Intel's idiocy, since in this age chip design matters more than ever relative to process improvements.

Mr Chips
Jun 27, 2007
Whose arse do I have to blow smoke up to get rid of this baby?
Will TSMC 7nm produce ICs with physical features that are equivalent to Intel 7nm? I recall seeing some posts ITT about how the numbers from various manufacturers weren't a like for like comparison.

Xae
Jan 19, 2005

Mr Chips posted:

Will TSMC 7nm produce ICs with physical features that are equivalent to Intel 7nm? I recall seeing some posts ITT about how the numbers from various manufacturers weren't a like for like comparison.

TSMC 7nm is roughly equivalent to Intel 10nm.


PC LOAD LETTER
May 23, 2005
WTF?!
IIRC Intel's 7nm is delayed to the 2022 timeframe too.

To be fair, Intel's 7nm will probably be a fair amount better than TSMC's/GF's :airquote:7nm:airquote:, but it's also coming much later and will probably be forced to compete with TSMC's/GF's :airquote:5nm:airquote: process, which might well offer similar performance/yields overall again.

It's possible Intel's once-dominant process lead has been, if not permanently blown (commercial process scaling is getting ever more difficult, and after 5nm progress will probably slow down even more due to still greater cost and difficulty), then effectively reduced for good: a minor advantage at best, process parity typically, or perhaps a moderate disadvantage versus TSMC/GF's offerings of the time at worst. That isn't going to be anywhere near, say, BD vs Sandy Bridge levels of horrible for them, but it's definitely not good. At that point they can only really expect to compete on design for the most part. Gotta keep those ASPs up, after all, otherwise they won't be able to afford to keep their fabs running!

A little dated but still fairly relevant article on this subject IMO. And a more recent sorta kinda follow up on that article from earlier this year.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
TSMC's/GloFo's 7nm is roughly on par with Intel's 10nm in terms of pitch size, via size, density, and SRAM cell sizes, ±10% or so depending on which metric you're looking at.

Mr Chips
Jun 27, 2007
Whose arse do I have to blow smoke up to get rid of this baby?

Xae posted:

TSMC 7nm is roughly equivalent to Intel 10nm.
excellent - now I can fully savour the schadenfreude.

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️
https://techreport.com/blog/33653/the-days-of-casual-overclocking-are-numbered

So TR is now saying what I've said since the 4790K: value-for-money OCing is dead.

wargames
Mar 16, 2008

official yospos cat censor

Mr Chips posted:

excellent - now I can fully savour the schadenfreude.

https://www.overclock3d.net/news/cpu_mainboard/globalfoundries_expects_great_things_from_7nm_-_clocks_in_the_5ghz_range/1

is a claim being made.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Palladium posted:

https://techreport.com/blog/33653/the-days-of-casual-overclocking-are-numbered

So TR is now saying what I've said since the 4790K: value-for-money OCing is dead.

On one hand, you're wrong, there's still close to 20% headroom to squeeze out on an 8700K and it's pretty easy to get them to at least 4.8/4.9 even without delidding. Good chips can do 5 GHz without delidding if you are comfortable with higher temps.

On the other hand, TR is also correct that AMD's XFR2 and NVIDIA's GPU Boost 3.0 do a good job of extracting almost all the performance the chip has to offer, that Intel is almost alone in that their turbo clocks are nowhere near what the silicon can do, and that as competition tightens up that they will probably be forced to push the silicon a little harder out of the box.

The real reason they haven't done so is largely TDP-centric - god, can you imagine the reviewers hand-wringing about a consumer CPU that pulls 150-200W out of the box? Because with 8C 14nm CPUs and 12C 7nm CPUs running at 5 GHz, that's where we're headed. Liquid cooling and Noctua D15s all around!

The first company to push TDP that high is going to get cockslapped by reviewers for no good reason... then the other will follow suit and reviewers will grudgingly accept it... then in 5 years it'll be the new normal. After all, IPC gains are gone, process gains are gone... and increasing core count will inevitably run up the TDPs unless you drop the clocks again. Barring some change in those fundamentals, that's just how it's going to be, you can pick any two of: good clocks, high core count, and decent TDP.
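That pick-any-two triangle is easy to put rough numbers on: dynamic power scales roughly as cores × C·V²·f. In the sketch below the constant is invented, calibrated only so that a 6C part at 4.3 GHz and 1.20 V lands near 95 W; all three configurations are illustrations, not measurements of real chips.

```python
# Back-of-the-envelope dynamic power scaling: P ~ k * cores * V^2 * f.
# k folds in switched capacitance and is made up, chosen so a
# 6C / 4.3 GHz / 1.20 V part lands at roughly 95 W.

def dynamic_power(cores, freq_ghz, volts, k=2.558):
    """Rough dynamic power estimate in watts."""
    return k * cores * volts**2 * freq_ghz

base       = dynamic_power(6, 4.3, 1.20)  # ~95 W-class 6-core
more_cores = dynamic_power(8, 4.3, 1.20)  # same clocks, two extra cores
more_clock = dynamic_power(8, 5.0, 1.30)  # higher f usually needs higher V

print(f"6C @ 4.3 GHz, 1.20 V: {base:.0f} W")
print(f"8C @ 4.3 GHz, 1.20 V: {more_cores:.0f} W")
print(f"8C @ 5.0 GHz, 1.30 V: {more_clock:.0f} W")
```

Even with made-up constants, adding cores and clocks together blows straight past 150 W, which is why high core counts at 5 GHz imply those scary out-of-the-box TDPs.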

At some point there may even be some re-evaluation of whether we really need 12 cores just to play video games, if it means having a stupid TDP. Sure, enthusiasts will like having the multithreaded oomph, but an 8C at 5 GHz is probably going to be the gaming-value sweet spot for the next few generations (i.e. something like a Ryzen 3600X), with 6C settling in as the ramen-noodles build. (Not to imply that the current crop of 6C 5 GHz or 8C 4.2 GHz processors are bad, or will be bad anytime in the near future, but there is poised to be a pretty big value increment as we move onto 7nm and higher core counts next generation.)

Intel will also need to either fix their Z-height problem or just solder the IHS to really extract all the potential of the chip... at some point the extra 200 MHz of headroom is going to be worth more than the $2 they save in solder.

Paul MaudDib fucked around with this message at 19:25 on May 17, 2018

Inept
Jul 8, 2003

Paul MaudDib posted:

On one hand, you're wrong, there's still close to 20% headroom to squeeze out on an 8700K and it's pretty easy to get them to at least 4.8/4.9 even without delidding.

That's not really value-based overclocking though. You get a decent performance bump, but you pay more for the CPU, motherboard, and heatsink, and have ongoing higher electric bills.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Inept posted:

That's not really value-based overclocking though. You get a decent performance bump, but you pay more for the CPU, motherboard, and heatsink, and have ongoing higher electric bills.

Performance bump and cost relative to what?

Inept
Jul 8, 2003

Upgrading with the saved money more frequently. For example, buying a new video card every 3 years instead of every 4. The analogy works less well for processors because of how stagnant performance has been. Unless you're buying the latest equipment every year or two and don't care about money, saving that money for future upgrades instead of paying more to push your current stuff makes more sense for most people.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
I feel like a lot of people underestimate the value proposition of overclocking because they only consider the increased cost versus what they invested in a CPU instead of the entire platform.

Sure, if you're paying $100-150 more so that you can get a K-chip with an aftermarket heatsink and a Z-chipset motherboard then you might be increasing your cost by 50% of the CPU line item and it seems like a big deal.

If you're building a whole new system for $1000+ though then the CPU is key to getting the maximum potential out of that whole investment. Getting just 5-10% more performance out of the CPU could easily be equivalent to having a system a generation newer, let alone 20+%, and could mean that you go another year or two without having to replace that investment or at least the CPU-RAM-mobo core of it. It also feels less wasteful to lean on the side of building expensive systems that last longer, versus cheaper ones which will be replaced sooner. Less so if you are effective at selling your old parts, but for me they tend to just gather dust unless I still have a use for them.

By building cheaper, just-adequate systems you do hedge against future generations performing better than expected, but on the other hand if you start out with more performance you get to enjoy that performance for the entire life of the system.
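As a sketch of that math (the dollar figures and the +20% gain below are made-up illustrations, not benchmarks): judged against the CPU line item alone the OC parts look expensive, but against the whole build the cost per unit of performance can actually drop.

```python
# Cost per unit of performance, stock build vs. the same build with
# OC-capable parts. All numbers are illustrative, not real prices.

def cost_per_perf(total_cost, relative_perf):
    return total_cost / relative_perf

stock = cost_per_perf(1000, 1.00)        # $1000 system, baseline performance
oc    = cost_per_perf(1000 + 150, 1.20)  # +$150 K-chip/Z-board/cooler, +20% perf

print(f"stock build: ${stock:.0f} per unit of performance")
print(f"OC'd build:  ${oc:.0f} per unit of performance")
```

The +15% line-item spend buys +20% performance in this example, so the overclocked build comes out ahead; shrink the OC gain or grow the parts premium and the conclusion flips, which is the whole debate in miniature.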

Eletriarnation fucked around with this message at 21:14 on May 17, 2018

evilweasel
Aug 24, 2002

Eletriarnation posted:

I feel like a lot of people underestimate the value proposition of overclocking because they only consider the increased cost versus what they invested in a CPU instead of the entire platform.

Sure, if you're paying $100-150 more so that you can get a K-chip with an aftermarket heatsink and a Z-chipset motherboard then you might be increasing your cost by 50% of the CPU line item and it seems like a big deal.

If you're building a whole new system for $1000+ though then the CPU is key to getting the maximum potential out of that whole investment. Getting just 5-10% more performance out of the CPU could easily be equivalent to having a system a generation newer, let alone 20+%, and could mean that you go another year or two without having to replace that investment or at least the CPU-RAM-mobo core of it. It also feels less wasteful to lean on the side of building expensive systems that last longer, versus cheaper ones which will be replaced sooner. Less so if you are effective at selling your old parts, but for me they tend to just gather dust unless I still have a use for them.

By building cheaper, just-adequate systems you do hedge against future generations performing better than expected, but on the other hand if you start out with more performance you get to enjoy that performance for the entire life of the system.

the number of times that my cpu is what's bottlenecking my system, compared to my hard drive, my internet speed, or my gpu, is rather minuscule

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

evilweasel posted:

the number of times that my cpu is what's bottlenecking my system, compared to my hard drive, my internet speed, or my gpu, is rather minuscule

This depends on what game you're playing. Like, wildly so. Many, many popular games are CPU bottlenecked, and even if the averages are OK the minimums sometimes are not.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

evilweasel posted:

the number of times that my cpu is what's bottlenecking my system, compared to my hard drive, my internet speed, or my gpu, is rather minuscule

That's fine, whether it's a good value proposition for you will of course depend upon your use case. It seems self evident that you should not invest much in solving problems you don't have. A lot of people buy new machines largely because the CPU in the old one isn't fast enough for games or whatever, and they're the main audience I had in mind.

Eletriarnation fucked around with this message at 22:02 on May 17, 2018

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot

Inept posted:

That's not really value-based overclocking though. You get a decent performance bump, but you pay more for the CPU, motherboard, and heatsink, and have ongoing higher electric bills.

Since absolute performance gains have fallen off a cliff since 2011, I suspect that the gaming perf delta/$ of going from a 2500 build to an 8700 build is smaller than the difference between an 8700 and an efficiently OC'd 8700K. Overclocking is becoming more relevant, not less. It's going to lose a lot of relevance once Intel catches up to the auto-OCing everyone else already has, but until then it's pretty much a no-brainer. Even then, unless Intel changes something, it's likely to be well worth the 50 or 60 bucks to delid and replace the TIM on a high-end CPU, since the thermal advantages are so dramatic they will likely have a big impact on the auto-OCing.

K8.0 fucked around with this message at 22:20 on May 17, 2018

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

Paul MaudDib posted:

This depends on what game you're playing. Like, wildly so. Many, many popular games are CPU bottlenecked, and even if the averages are OK the minimums sometimes are not.

I'm not so sure about that. Some games that you'd think for sure would be CPU bound (for example, Factorio) are actually memory bound. You see the same in video processing too - many simple filters (such as resizers, 3x3 convolutions, simple FIR filters, etc.) are actually memory bound these days. The CPUs are already too fast and there are too many threads.
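A back-of-the-envelope roofline check shows why simple filters end up memory bound. The bandwidth and throughput figures below are illustrative assumptions, not measurements of any particular CPU, and the per-pixel op/byte counts are rough:

```python
# Rough roofline check: a kernel is memory bound when its arithmetic
# intensity (ops per byte of memory traffic) is below the machine
# balance (ops the CPU can do per byte the RAM can deliver).
# Both machine numbers here are illustrative assumptions.

BANDWIDTH  = 40e9    # bytes/s  (~dual-channel DDR4, assumed)
THROUGHPUT = 400e9   # ops/s    (multi-core SIMD, assumed)
machine_balance = THROUGHPUT / BANDWIDTH   # ops per byte the CPU can feed

def bound_by(ops_per_pixel, bytes_per_pixel):
    intensity = ops_per_pixel / bytes_per_pixel
    return "memory" if intensity < machine_balance else "compute"

# Point filter / simple resizer: ~3 ops per ~2 bytes of unique traffic.
print("resizer: ", bound_by(3, 2))    # -> memory
# 3x3 convolution: ~18 ops per ~2 bytes (neighbors reused from cache).
print("3x3 conv:", bound_by(18, 2))   # -> memory, but only barely
```

With these assumed numbers, even the 3x3 convolution can't quite feed the cores, and anything lighter than that is hopeless; faster RAM helps such filters more than faster cores do.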

Sininu
Jan 8, 2014

TheFluff posted:

I'm not so sure about that. Some games that you'd think for sure would be CPU bound (for example, Factorio) are actually memory bound. You see the same in video processing too - many simple filters (such as resizers, 3x3 convolutions, simple FIR filters, etc.) are actually memory bound these days. The CPUs are already too fast and there are too many threads.

How does memory latency affect performance? I read that it gets worse and worse with each DDR generation.

Yaoi Gagarin
Feb 20, 2014

Sininu posted:

How does memory latency affect performance? I read that it gets worse and worse with each DDR generation.

It's not actually getting worse and worse; it's staying roughly the same. But since the numbers are expressed in clock cycles and the frequency keeps increasing, the latency numbers appear to get bigger.

Sininu
Jan 8, 2014

VostokProgram posted:

It's not actually getting worse and worse; it's staying roughly the same. But since the numbers are expressed in clock cycles and the frequency keeps increasing, the latency numbers appear to get bigger.

Ohh, that's interesting.

The Illusive Man
Mar 27, 2008

~savior of yoomanity~
So, given Intel’s never ending 10nm woes, this brings up a couple points I’ve been curious about for a while. And being as I’m most definitely not a CPU architect, I apologize in advance if these are overly moronic.

One thing I’ve been very curious about the past few years is whether the x86 architecture is mostly ‘finished’ as far as major improvements go - i.e., are we done with Sandy Bridge-style massive generational improvements? People have harped on Intel for not innovating because AMD was non-competitive prior to Ryzen, but I’ve wondered more whether all the low-hanging fruit has been picked and there’s simply not much left in the way of improvements that could be made to the x86 architecture, specifically in the ~100W TDP mainstream desktop space - which is why we keep getting the incremental 5% gains year after year but nothing more.

Secondly, is there a reason Intel doesn’t go ahead and roll out a new architecture on 14nm instead of just increasing clocks and cores on the existing Skylake architecture? Unless I’m missing something, Kaby Lake and Coffee Lake are architecturally identical to Skylake, ignoring changes to the iGPU (as evidenced by identical clock-for-clock IPC).

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

TheFluff posted:

I'm not so sure about that. Some games that you'd think for sure would be CPU bound (for example, Factorio) are actually memory bound. You see the same in video processing too - many simple filters (such as resizers, 3x3 convolutions, simple FIR filters, etc.) are actually memory bound these days. The CPUs are already too fast and there are too many threads.

Yeah, and I should also qualify that: games that are CPU-bound are usually single-thread-bound. There really aren't many games that can even saturate a 6-core 5 GHz processor given the current GPUs.

But yeah, tbh these days you should really be looking for at least 3000 or 3200 RAM. And if you really want to squeeze the maximum out of your system, there are stupider things than dumping an extra $200 into a 2x8 GB 4266 kit and looking for a memory-OC motherboard like a Maximus Apex, although I realize that my computer-building advice largely consists of :homebrew:. You do get what you pay for though; there is real performance to extract in a lot of titles from higher-end RAM and clocks.

In fact I'd say that next generation you are arguably better off going with a 6C and a better RAM kit than an 8-12C processor with crappier RAM, if you are tight on cash. Most games aren't going to saturate 6C worth of MT for a while.

Paul MaudDib fucked around with this message at 01:44 on May 18, 2018

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Space Racist posted:

Secondly, is there a reason Intel doesn’t go ahead and roll out a new architecture on 14nm instead of just increasing clocks and cores on the existing Skylake architecture? Unless I’m missing something, Kaby Lake and Coffee Lake are architecturally identical to Skylake, ignoring changes to the iGPU (as evidenced by identical clock-for-clock IPC).

I'm rooting for Atom's time to shine. Skylake was their new architecture on 14nm; Broadwell was the first 14nm part!

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️

Sininu posted:

How does memory latency affect performance? I read that it gets worse and worse with each DDR generation.

For example, 1 cycle of latency at 100MHz has exactly the same absolute latency in nanoseconds as 2 cycles of latency at 200MHz, but the latter gives you 2x the bandwidth. Ditto for 10 cycles @ 1000MHz.

The relatively long RAM traces on the PCB already put a hard limit on how low the absolute latency can go, but CPUs can definitely use the added bandwidth for iGPUs or for more prefetching into the caches.
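To put rough numbers on that: absolute CAS latency is cycles divided by the I/O clock, which is half the MT/s figure for DDR. A quick sketch with a few common speed grades (picked as illustrations):

```python
# Absolute CAS latency in nanoseconds for some common DDR speed grades.
# ns = CL cycles / I/O clock, where the I/O clock is half the MT/s figure.

def cas_ns(cl_cycles, mt_per_s):
    io_clock_mhz = mt_per_s / 2
    return cl_cycles / io_clock_mhz * 1000  # MHz -> ns conversion

for name, cl, rate in [("DDR3-1600 CL9 ", 9, 1600),
                       ("DDR4-2400 CL17", 17, 2400),
                       ("DDR4-3200 CL16", 16, 3200)]:
    print(f"{name}: {cas_ns(cl, rate):.2f} ns")
```

DDR3-1600 CL9 works out to 11.25 ns and DDR4-3200 CL16 to 10.00 ns, so absolute latency is roughly flat across the generations even though the cycle counts look much bigger, while the transfer rate doubles.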


MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!

Space Racist posted:

So, given Intel’s never ending 10nm woes, this brings up a couple points I’ve been curious about for a while. And being as I’m most definitely not a CPU architect, I apologize in advance if these are overly moronic.

One thing I’ve been very curious about the past few years is whether the x86 architecture is mostly ‘finished’ as far as major improvements go - i.e., are we done with Sandy Bridge-style massive generational improvements? People have harped on Intel for not innovating because AMD was non-competitive prior to Ryzen, but I’ve wondered more whether all the low-hanging fruit has been picked and there’s simply not much left in the way of improvements that could be made to the x86 architecture, specifically in the ~100W TDP mainstream desktop space - which is why we keep getting the incremental 5% gains year after year but nothing more.

Secondly, is there a reason Intel doesn’t go ahead and roll out a new architecture on 14nm instead of just increasing clocks and cores on the existing Skylake architecture? Unless I’m missing something, Kaby Lake and Coffee Lake are architecturally identical to Skylake, ignoring changes to the iGPU (as evidenced by identical clock-for-clock IPC).

You are correct that the low-hanging fruit has been picked; even if the architecture were changed to ARM or something, from a microarchitectural standpoint we are hitting diminishing returns. Back in the day, doubling your transistor count could mean something like adding a whole new level of cache, which would result in huge IPC gains; now you're just making an already-big cache bigger or an already-wide pipeline wider, etc., and there are hard limits to IPC. That's why the focus has shifted to multi-core: if you can split the workload across multiple threads, you can get more out of your larger transistor budget by adding entire new cores, but this has limits for a lot of tasks as well.

That doesn't mean it's impossible to beat Intel's anemic performance gains, though: AMD claims a 10% IPC gain for Zen 2, and combined with faster clocks that could be pretty substantial. If Intel had known 10nm would be delayed, possibly until 2020, perhaps they would have done a new arch on 14nm, but at this point they will probably just wait it out.

MaxxBot fucked around with this message at 03:15 on May 18, 2018
