canyoneer
Sep 13, 2005


I only have canyoneyes for you
I've heard the old joke that high-performance computing is the practice of turning a CPU-bound problem into an I/O-bound problem.

Thin client computing sounds like the next level of that


Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

Flipperwaldt posted:

They don't have backups at all and some hdd failure modes allow you to recover some data, which is what they use as their retarded safety net, whereas this is a lot less likely to be possible with ssds because they tend to crap out suddenly and entirely without warning. Is what I took from that.

e: also, in case of total failure, a specialist data recovery firm can get something back from platters, where they probably won't from a busted ssd.

If it makes you feel better I work in a research group of non-programmer types who use an in-house code/library to do our research and one of the "programmer dudes" just set up a git server for us since we didn't have some kind of centralized repository. It's set up on his work machine and I asked him what the backup solution was when I started here because I know for a fact our work computers/laptops don't have a routine backup system. He shrugged and said there is none at the moment (and there still isn't) and thinks it's not a big deal because git keeps a local version on your computer so there are backups on everyone's computer :downs:.

Keep in mind this is a university research group and he is encouraging us to keep things like our academic papers and thesis drafts and experimental data saved in this system, without a backup solution.

Rastor
Jun 2, 2001

Boris Galerkin posted:

If it makes you feel better I work in a research group of non-programmer types who use an in-house code/library to do our research and one of the "programmer dudes" just set up a git server for us since we didn't have some kind of centralized repository. It's set up on his work machine and I asked him what the backup solution was when I started here because I know for a fact our work computers/laptops don't have a routine backup system. He shrugged and said there is none at the moment (and there still isn't) and thinks it's not a big deal because git keeps a local version on your computer so there are backups on everyone's computer :downs:.

Keep in mind this is a university research group and he is encouraging us to keep things like our academic papers and thesis drafts and experimental data saved in this system, without a backup solution.

I would actually kinda sorta back up the programmer dude on that: any person's local git repo could be used as the base of a new central repo going forward.

Still, it's always best practice to have backups.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Easiest solution is to use a hosted git repo. Bitbucket is free, and GitHub is free for public repos; they also offer a student pack that includes 5 free private repos.

He's right, though: barring a freak fire that destroys all your machines at once, you could just copy the repo over from someone else's machine. There's no inherent "central repository" in the git model.

I have had local git history get corrupted, however, so it is a good idea to have it in more than one place.
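(For the thread: a minimal sketch of what "have it in more than one place" could look like, assuming git is on the PATH; the repo URL and backup path are made up for illustration. Pointing a cron job or scheduled task at something like this is already a big step up from nothing.)

code:
import subprocess

# Made-up locations, purely for illustration.
REPO_URL = "ssh://git-host.example/srv/git/research.git"
BACKUP_PATH = "/mnt/backup/research.git"

def mirror_backup():
    """Keep a bare mirror of the shared repo on a second machine or drive."""
    try:
        # If the mirror already exists, just fetch whatever is new.
        subprocess.run(["git", "-C", BACKUP_PATH, "remote", "update", "--prune"], check=True)
    except subprocess.CalledProcessError:
        # First run (or broken mirror): create a fresh bare mirror clone.
        subprocess.run(["git", "clone", "--mirror", REPO_URL, BACKUP_PATH], check=True)

if __name__ == "__main__":
    mirror_backup()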

WhyteRyce
Dec 30, 2001

Given the number of people I work with who never push their changes, let their local version get months out of sync, then try to push everything at once and act confused and angry when it fails, that can be dangerous.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
I play fast and loose at times, but if you were using it for things that would cost money to replace or, worse, make money for the business, I would put the kibosh on that immediately. How hard is it to get a Linux VM spun up to host the git repo? Surely your central IT can give you that for peanuts.

pigdog
Apr 23, 2004

by Smythe
[wrong thread]

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!

feedmegin posted:

Or, y'know, Moore's Law is dying, as we always knew it would. We see only small improvements now because we've pushed the physics about as far as it can go. A healthier AMD might push prices down but it wouldn't make speeds shoot up again because, again, physics.

It's dead in the sense that we're not going to see doubling every 18 months anymore but it's not dead in the sense that we have to settle for paltry 5% performance improvements every generation like we've been getting from Intel desktop CPUs. GPUs, server CPUs, FPGAs, mobile CPUs, etc have all continued to improve at much higher rates, those too will probably start to hit a wall sometime in the 2020s but we're not there yet.
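(Rough back-of-the-envelope on the gap being complained about here, assuming one desktop generation per year; the numbers are illustrative, not benchmarks.)

code:
# Compound a Moore's-law-style doubling every 18 months against ~5% per
# generation over the same span, assuming one generation per year.
years = 6
moore_style = 2 ** (years / 1.5)     # doubling every 18 months
intel_desktop = 1.05 ** years        # ~5% per generation
print(f"{moore_style:.0f}x vs {intel_desktop:.2f}x over {years} years")
# -> 16x vs 1.34x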

champagne posting
Apr 5, 2006

YOU ARE A BRAIN
IN A BUNKER

That paltry 5% increase in performance, combined with the huge gains in power efficiency, means that you can have a laptop you don't have to charge for 8 hours.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

Boiled Water posted:

That paltry 5% increase in performance, combined with the huge gains in power efficiency, means that you can have a laptop you don't have to charge for 8 hours.

Except that the battery is still gonna die after like 6 months because battery technology is poo poo and definitely hasn't improved every 12/18/24 months.

NihilismNow
Aug 31, 2003

Boris Galerkin posted:

Except that the battery is still gonna die after like 6 months because battery technology is poo poo and definitely hasn't improved every 12/18/24 months.

I was promised disposable ethanol fuel cells like a decade ago. Where are they?

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!
Yeah I didn't mean that to be a complaint as much as just pointing out how we've made a lot more progress in areas other than single-threaded CPU performance, and that we're not butting up against a physics wall yet. Now if Nvidia or AMD released a new line of GPUs with only a 5% performance improvement that would be something to panic about.

Arsten
Feb 18, 2003

NihilismNow posted:

I was promised disposable ethanol fuel cells like a decade ago. Where are they?

After the fifth or sixth immolated research scientist, others have stopped offering to thin the herd.

HMS Boromir
Jul 16, 2011

by Lowtax

Boiled Water posted:

That paltry 5% increase in performance, combined with the huge gains in power efficiency, means that you can have a laptop you don't have to charge for 8 hours.

Honestly, Intel switching to a power efficiency focus seems win/win to me. Laptop owners get better battery life and desktop owners (IE, me) get to upgrade less often and save money. I expect most people on a hardware enthusiast forum like this one would gladly pay extra for more power if it was available but personally I'm very happy with the possibility that my 6600K build will last me most of a decade, barring something weird like PCIe 4.0 forcing me to upgrade if I want a new GPU.

computer parts
Nov 18, 2010

PLEASE CLAP

HMS Boromir posted:

Honestly, Intel switching to a power efficiency focus seems win/win to me. Laptop owners get better battery life and desktop owners (IE, me) get to upgrade less often and save money. I expect most people on a hardware enthusiast forum like this one would gladly pay extra for more power if it was available but personally I'm very happy with the possibility that my 6600K build will last me most of a decade, barring something weird like PCIe 4.0 forcing me to upgrade if I want a new GPU.

A lot of hardware enthusiasts just like watching the numbers go up.

In terms of actual processing need, there hasn't really been that much for a long time. That's probably a major reason why you're not getting giant advances in technology - what's the point?

champagne posting
Apr 5, 2006

YOU ARE A BRAIN
IN A BUNKER

Boris Galerkin posted:

Except that the battery is still gonna die after like 6 months because battery technology is poo poo and definitely hasn't improved every 12/18/24 months.

If you stop treating your electronics like poo poo you'll see that batteries actually last a fairly long time.

NihilismNow
Aug 31, 2003

HMS Boromir posted:

Honestly it seems win/win to me. Laptop owners get better battery life and desktop owners (IE, me) get to upgrade less often and save money. I expect most people on a hardware enthusiast forum like this one would gladly pay extra for more power if it was available but personally I'm very happy with the possibility that my 6600K build will last me most of a decade, barring something weird like PCIe 4.0 forcing me to upgrade if I want a new GPU.

It is available, just not at the price we may like. You can buy a 12-core, 24-thread 3.0 GHz monster today if you want. Quad socket if you need it (price may equal a mortgage down payment).

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

computer parts posted:

In terms of actual processing need, there hasn't really been that much for a long time. That's probably a major reason why you're not getting giant advances in technology - what's the point?

144hz or 165hz monitors, or VR.

Gaming is a niche market, but you simply cannot do without good single-core performance. Multiple cores help nowadays too, but you can't escape Amdahl's Law. There's usually some critical portion that needs to be run as fast as possible. For DX11 and earlier, that is the thread which makes draw calls (since that cannot be multithreaded in those APIs). Also, multi-threading implies synchronization overhead that is not present in a single-threaded implementation.
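(Quick sketch of why Amdahl's Law bites, assuming a generous 80% of the per-frame work is parallelizable; that fraction is made up for illustration.)

code:
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n) for parallel fraction p on n cores.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (2, 4, 8, 16):
    print(cores, round(amdahl_speedup(0.8, cores), 2))
# -> 2 1.67, 4 2.5, 8 3.33, 16 4.0 -- the serial 20% dominates fast.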

High-refresh, in particular, is a growth market. You can get a 144hz 1080p monitor for about $150-170, or a 1440p 100hz monitor for $200. A 1440p 144hz monitor runs you about $350-450 nowadays. Right now this relies heavily on Free/G-sync because the hardware simply cannot reliably push enough frames.

Similarly, if you can't push 90hz 1440p you can't really do VR. Latency or microstutter produces unacceptable physical responses (people hurl). Free/G-sync produces an unacceptable amount of latency here.

What is out right now is barely good enough for many 60hz situations. At higher refresh rates, it's not good enough. You really need at least a factor of 2 for both GPU and CPU relative to what is currently available, across the board.
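(For context, the frame-time budgets behind those refresh rates; straightforward arithmetic, nothing vendor-specific.)

code:
# The whole CPU + GPU pipeline has to finish inside 1000/Hz milliseconds, every frame.
for hz in (60, 90, 144, 165):
    print(f"{hz} Hz -> {1000 / hz:.1f} ms per frame")
# 60 Hz -> 16.7 ms, 90 Hz -> 11.1 ms, 144 Hz -> 6.9 ms, 165 Hz -> 6.1 ms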

Paul MaudDib fucked around with this message at 02:44 on Mar 22, 2016

mobby_6kl
Aug 9, 2009

by Fluffdaddy
There are now more uses for computing power than ever; it's just no longer required for poo poo like emails and cat pictures. But machine learning/AI, image processing and VR stuff are growing fields and are way more computationally intensive than basic office tasks ever were or will be. In this context, having only 5% yearly gains is very frustrating.

champagne posting
Apr 5, 2006

YOU ARE A BRAIN
IN A BUNKER

Machine learning as a whole doesn't require that much. Most laptops compute very useful linear models in seconds. Non-linear models, on the other hand, are where there's room for improvement. Most of that improvement won't be found in a CPU though; it'll come through GPU acceleration.
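(Not rigorous, but to put a number on the "laptops fit linear models in seconds" bit: a plain least-squares fit on a million synthetic rows, nothing exotic. Timing will obviously vary by machine.)

code:
import time
import numpy as np

# A million rows, 20 made-up features.
rng = np.random.default_rng(0)
X = rng.standard_normal((1_000_000, 20))
y = X @ rng.standard_normal(20) + rng.standard_normal(1_000_000)

t0 = time.perf_counter()
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
print(f"fit {X.shape[0]} rows in {time.perf_counter() - t0:.2f} s")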

Anime Schoolgirl
Nov 28, 2002

and it's mostly because of the insane parallelization GPUs afford, which people still have a hard time achieving with CPUs

champagne posting
Apr 5, 2006

YOU ARE A BRAIN
IN A BUNKER

That makes it a non-problem, doesn't it? Just throw GPU power at machine learning and you'll (mostly) be fine. At some point you'll hit the PCI-e bottleneck and then you can worry about CPUs.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!
Yeah, that's why the 5% gains don't frustrate me too much; all of the cool stuff is happening with GPUs and they're still improving in leaps and bounds for now. The new Pascal Titan is supposed to be a 100% improvement in raw compute power over the Titan X. If DX12 is able to successfully reduce the burden on CPUs and better utilize many-core CPUs then I'll probably stop caring completely.

Also, with respect to the AI stuff, there has been a lot of progress on neuromorphic chips, which completely deviate from the traditional von Neumann architecture that has defined computing as we know it for decades. Traditional CPUs are really ill-suited for these types of tasks and even GPUs might not be used for this stuff after the neuromorphic architectures are better developed and researched.

I think that the future of computing architecture is going to be CPU + GPU + FPGA/Neuromorphic ASIC all on the same die with some HBM and shared cache. The traditional CPU is going to be relegated to roles where serial processing is absolutely necessary while the other components do the heavy lifting.

MaxxBot fucked around with this message at 01:02 on Mar 23, 2016

SwissArmyDruid
Feb 14, 2014

by sebmojo
http://www.pcper.com/news/Processors/Intel-officially-ends-era-tick-tock-processor-production

Finally.

Rastor
Jun 2, 2001


Moore’s law really is dead this time.

computer parts
Nov 18, 2010

PLEASE CLAP

mobby_6kl posted:

There are now more uses for computing power than ever; it's just no longer required for poo poo like emails and cat pictures. But machine learning/AI, image processing and VR stuff are growing fields and are way more computationally intensive than basic office tasks ever were or will be. In this context, having only 5% yearly gains is very frustrating.

That's my point though. For a long while (the past decade or so) we haven't really needed much beyond 5% yearly gains.

VR is an actual demand source now, but it remains to be seen how big it actually is. If lots of people want VR, there's your incentive to increase the numbers again.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Anime Schoolgirl posted:

To fit a pin count for PGA that supports quad channel you need a PCB the size of an LGA2011 chip for a 1200-1300 pin chip; LGA2011 is only very slightly bigger than a G34 (whereas C32 is slightly bigger than an LGA115x).

I don't think we're going to see quad channel on the consumer end for quite a while, especially as DDR4 looks to be practically 75% faster than DDR3 and 40-48GB/s would be enough for a 14nm GPU on a 65W APU.

Remember when I said AMD is likely going for quad channel DDR4 for high end APU systems?

They're expressly building a chip with about that pin count. The vast majority of boards will likely come with dual channel DDR4, but high end boards now have a stronger argument for quad channel due to the pin count. Also, the PGA socket can be switched out for an LGA socket, and with a max supported TDP of 140W I'm wondering if this doesn't mean some server processors will be available for enthusiasts in PGA configuration.
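(Rough bandwidth arithmetic behind the dual vs quad channel argument, assuming DDR4-2400 on standard 64-bit channels; the speed grade is just an example.)

code:
# Per-channel bandwidth = transfers/s * 8 bytes per 64-bit transfer.
mt_per_s = 2400                        # DDR4-2400, mega-transfers per second
per_channel_gb_s = mt_per_s * 8 / 1000
for channels in (2, 4):
    print(f"{channels}-channel: {channels * per_channel_gb_s:.1f} GB/s")
# -> 2-channel: 38.4 GB/s, 4-channel: 76.8 GB/s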

EDIT: Also, with 1331 pins, hype for Bristol Ridge being Excavator Opterons?

EmpyreanFlux fucked around with this message at 21:47 on Mar 23, 2016

Anime Schoolgirl
Nov 28, 2002

I'll take that with a cube of salt given that it's WCCF, which sources from an equally dodgy site, but the PCB will be loving hilarious if they did manage to fit that many PGA pins in the 939 size :allears:

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Anime Schoolgirl posted:

I'll take that with a cube of salt given that it's WCCF which sources from an equally dodgy site, but the PCB will be loving hilarious if they did manage to fit that many PGA pins in the 939 size :allears:

Well, they are claiming µOPGA, and the way it's described makes it sound like they're putting LGA-style socket pins on the CPU instead, so imagine 1366 socket pins on a CPU PCB. Also, I'm not sure how comfortable I am with such small, weak pins on the CPU, as CPUs are an order of magnitude more expensive than a motherboard.

PC LOAD LETTER
May 23, 2005
WTF?!
You can just use a mechanical pencil with no lead in it to fix a bent CPU pin. Though maybe a large syringe needle would be better for pins as small as they're aiming for. CPUs are also easier to ship off to get fixed or RMA'd if the worst happens and you can't fix it yourself.

I'd love to be wrong about them not doing quad memory channel, but I'm still not optimistic even though the WCCF article does have some interesting stuff in it. It also seems a bit strange given we know they're going to put out an HBM2-based APU sometime in 2017. Unless the HBM2 they're using is very low capacity, which wouldn't make sense, it'd make quad memory channel APUs largely pointless.

Maaaaybe the HBM2 APU will be their "super high end ~$300+ APU" and the quad channel memory APUs will be the "mainstream/sub-$300 APUs"? I'd assume there would be a whole lot of cheaper (~$200) APUs limited to dual channel for lower end systems. Just spitballin' there, but nothing else really makes sense to me.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

PC LOAD LETTER posted:

You can just use a mechanical pencil with no lead in it to fix a bent CPU pin. Though maybe a large syringe needle would be better for pins as small as they're aiming for. CPUs are also easier to ship off to get fixed or RMA'd if the worst happens and you can't fix it yourself.

I'd love to be wrong about them not doing quad memory channel, but I'm still not optimistic even though the WCCF article does have some interesting stuff in it. It also seems a bit strange given we know they're going to put out an HBM2-based APU sometime in 2017. Unless the HBM2 they're using is very low capacity, which wouldn't make sense, it'd make quad memory channel APUs largely pointless.

Maaaaybe the HBM2 APU will be their "super high end ~$300+ APU" and the quad channel memory APUs will be the "mainstream/sub-$300 APUs"? I'd assume there would be a whole lot of cheaper (~$200) APUs limited to dual channel for lower end systems. Just spitballin' there, but nothing else really makes sense to me.

That's what I am thinking as well. It's been teased but not confirmed that consumer dies are noticeably smaller than server dies for Zen. If so then it's entirely possible quad channel will be exclusive to consumer variants of Zen server dies, with mainstream Zen being dual channel (and thus not having to screw around much with separate mainstream Zen dies with different memory controllers). Motherboards would then be designed to accommodate dual or quad channel chips. They'd probably separate APUs into A4, A6, A8 and A10 mainstream Zen dies with Polaris, and A8 and A10 server dies with either Polaris or Vega. Any Vega APU (with HBM2) would likely use a memory technique very similar to the one the current Fury drivers use to get around the 4GB limit without noticeable loss in performance.

This would be the perfect time to bring back Duron, Athlon and Phenom; Phenom are server dies, Athlon are mainstream, Duron are mainstream with disabled bits. No one liked Semiporn anyway, and it's overall a bad name.

canyoneer
Sep 13, 2005


I only have canyoneyes for you

FaustianQ posted:

No one liked Semiporn anyway, and it's overall a bad name.

Those were the processors on late night Cinemax right?

PC LOAD LETTER
May 23, 2005
WTF?!
Eeehhhh I'd ditch the Phenom branding altogether. Too much stink on it now. Duron for the low end would be perfect though. Maybe something like Athlon-ZP+ branding for higher end versions and Athlon-Z for mainstream chips.

Though honestly I kinda half want Intel and AMD to go back to the old school naming scheme from the 486 days and just call their chips 14-86's or whatever generation they count themselves as up to now. Not like most people care that much about CPU branding. Especially these days.

NihilismNow
Aug 31, 2003

PC LOAD LETTER posted:

Though honestly I kinda half want Intel and AMD to go back to the old school naming scheme from the 486 days and just call their chips 14-86's or whatever generation they count themselves as up to now. Not like most people care that much about CPU branding. Especially these days.

People absolutely care about branding. Many people buy a CPU/PC/laptop based purely on "it's an i7 so it is powerful" without looking at further specs. Even in IT departments I see this with some people: "vendor says we need an i7 so we must buy i7s, can't buy an i5, won't be supported." I've even seen support calls where users specifically demand an i7 because they need one. Intel has trained people really well on where Celeron/Pentium/i3/i5/i7/Xeon fit in the hierarchy.
There is nothing to gain by going to a name that can't be protected by trademarks, like n-x86. You can't build brand loyalty on a generic name.

PC LOAD LETTER
May 23, 2005
WTF?!
Most of those are the types you could say "it's 64-bit so it's powerful" to and they'd buy it, though. They have little to no understanding of the underlying tech and don't really know what it is they're buying. Just that it's "better" somehow.

Your IT dept. example has nothing to do with branding either. They're buying what they're required to buy because of software or hardware requirements, they will likely actually know what for and why they're buying it, and they will have had to justify the expense to someone up on high who doesn't want to spend the money unless they have to. Branding will have little to no effect on them.

Plenty of people asked for 486's, using just that name, because they needed one too back in the day.

No one has brand loyalty these days in the CPU market. If they did Intel would own the mobile phone CPU market too. Or at least have a decent chunk of it. But as things stand now they've got nearly nothing and that looks increasingly unlikely to change any time soon. That they own the desktop and laptop PC market right now has more to do with a lack of effective competition than branding.

edit: PCs are advertised in all sorts of ways. I've seen more people focus on hard drive size or main system memory or even case color than on the CPU. Many still ask for computers with the "Pentiums" and more "hurtz". The CPU is just one component in a complex system they have hardly any understanding of. The branding of it means very, very little to the average person, and the lack of knowledge means they're far more easily swayed by other factors. \/\/\/\/\/\/\/

PC LOAD LETTER fucked around with this message at 12:32 on Mar 24, 2016

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
That's different though. PCs are advertised with an x-core i7, which is obviously better than an i5. People who buy PCs have a choice. They can get a Dell with an i7, or an i5. They can also get one with an Nvidia, or a Radeon. People don't give so much of a poo poo about the Dell brand as they do about the i5 brand. Nobody cares if they get an HP or Dell PC as long as it has an i5. The market here isn't so much the PC manufacturer as what's inside it. Two people who talk about the Dell and HP they each bought the other day are gonna compare them on the basis of whether they have a 4-core i5 or an 8-"core" i7, or whether it's an Nvidia or a Radeon.

Laptops are a tiny bit different because people care more in this area; the differences are bigger. A Thinkpad looks ugly and huge next to an MBP or whatever Sony ultra slim they have. Plus, you know, Sony and Apple are household names. There are still some options for processors and what have you, but it's less important because people will either pick an MBP because they want one, or a Sony ultrabook because it's lightweight and has an i7, or a Thinkpad because their work ordered them one.

Phones are entirely different because people don't give a poo poo at all what's in them. They want an iPhone 6S. They want the new Samsung. Or they don't want a Samsung, so they get an LG. iPhone users know their processors get better every year. Android users who buy top of the line Android phones are just gonna stick with the next iteration of their Samsung or LG phones, and the ones that don't are just going to buy whatever cheapest phone is offered to them.

tl;dr: PCs are marketed by the i5/i7/AMD/Nvidia brand. Phones are marketed on the company's brand. Brand loyalty exists for PCs wrt processors but not in the phones department, because the manufacturers have done a way better job marketing the Apple/Samsung names than the "A9" or "Snapdragon" names.

Boris Galerkin fucked around with this message at 12:10 on Mar 24, 2016

feedmegin
Jul 30, 2008

MaxxBot posted:

It's dead in the sense that we're not going to see doubling every 18 months anymore but it's not dead in the sense that we have to settle for paltry 5% performance improvements every generation like we've been getting from Intel desktop CPUs. GPUs, server CPUs, FPGAs, mobile CPUs, etc have all continued to improve at much higher rates, those too will probably start to hit a wall sometime in the 2020s but we're not there yet.

No, it really is, in that sense. Not just Intel desktop CPUs, by the way; if that's all there was to it you'd see IBM's POWER line racking up the clocks, or SPARC or even Itanium. (Which server CPUs do you think are increasing in speed above 5% or so year on year, by the way?)

GPUs and FPGAs are both highly parallel, which gets them around the clock speed limit - but not all tasks are parallel, or the growing popularity of multicore systems would have led to much greater performance improvements. Most code is inherently serial (do this thing in order that you can do this thing in order that you can do this thing) and there isn't some magical way around that, for all the research that has been done on auto-parallelisation. Mobile CPUs aren't pushing the highest possible clock speed, they're aiming for power efficiency, so again they're not hit by the hard physical limit on how high we can clock things.
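(A toy example of "inherently serial", since it comes up a lot: every iteration needs the previous result, so more cores buy you nothing here, whereas summing independent elements splits across cores trivially.)

code:
# Loop-carried dependency: step i cannot start until step i-1 has finished.
def serial_chain(x: float, steps: int) -> float:
    for _ in range(steps):
        x = x * 1.000001 + 1.0   # needs the previous iteration's x
    return x

print(serial_chain(1.0, 1_000_000))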

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!

feedmegin posted:

No, it really is, in that sense. Not just Intel desktop CPUs, by the way; if that's all there was to it you'd see IBM's POWER line racking up the clocks, or SPARC or even Itanium. (Which server CPUs do you think are increasing in speed above 5% or so year on year, by the way?)

GPUs and FPGAs are both highly parallel, which gets them around the clock speed limit - but not all tasks are parallel, or the growing popularity of multicore systems would have led to much greater performance improvements. Most code is inherently serial (do this thing in order that you can do this thing in order that you can do this thing) and there isn't some magical way around that, for all the research that has been done on auto-parallelisation. Mobile CPUs aren't pushing the highest possible clock speed, they're aiming for power efficiency, so again they're not hit by the hard physical limit on how high we can clock things.

I didn't mean clock speed, I meant overall compute performance of the chip. Server CPU clock speeds have been steady for a while but their core counts and overall compute performance have still improved a fair amount every generation, especially when compared to desktop CPUs. Plenty of server applications can make good use of the extra cores, as that can directly translate to being able to have more VMs running on the machine. There are a lot of things that are hard to parallelize, but look at what sort of things people are trying to do nowadays that require a ton of compute. Machine learning, AI, advanced image/video processing: all of those tasks are highly parallelizable or can even use alternative architectures, as I mentioned earlier, where clock speed becomes totally irrelevant.

feedmegin
Jul 30, 2008

MaxxBot posted:

I didn't mean clock speed, I meant overall compute performance of the chip. Server CPU clock speeds have been steady for a while but their core counts and overall compute performance have still improved a fair amount every generation, especially when compared to desktop CPUs. Plenty of server applications can make good use of the extra cores, as that can directly translate to being able to have more VMs running on the machine.

Sure... but the argument that started this off was something like "if AMD were stronger, Intel would be forced to compete and our desktop CPUs would be shooting up in performance again." My counter-argument is that this is untrue because of physics, and that remains the case. People at home, even power users, aren't generally running dozens of VMs or high-traffic webservers or whatever, so giving them more CPUs or more cores wouldn't do them any good, and otherwise we are stuck with 5% improvements in the only area where improvements matter for anything you can't do with your GPU.


HMS Boromir
Jul 16, 2011

by Lowtax
IPC is what's getting the 5% boost per generation anyway. Clock speeds are practically stagnant.
