PC LOAD LETTER
May 23, 2005
WTF?!

Beef posted:

The whole idea is that Intel offers cheaper SKUs that have a bunch of special-purpose accelerators disabled.
Are they actually dropping their prices though? Especially enough to matter?

The accelerators, going by the commentary from the STH guys about Sapphire Rapids, are the main reason to buy that chip. If you don't use them it hardly makes any sense to buy in the first place, since otherwise it'll just get utterly spanked by Genoa for most applications.

Seems kinda dumb from an institutional standpoint to me too, since the accelerators would really benefit from wide adoption to help justify software support. But then Intel has pretty much flubbed widely introducing the AVX512 stuff too, so this would be just more of the same I guess.


SSJ_naruto_2003
Oct 12, 2012



PC LOAD LETTER posted:

Are they actually dropping their prices though? Especially enough to matter?

The accelerators, going by the commentary from the STH guys about Sapphire Rapids, are the main reason to buy that chip. If you don't use them it hardly makes any sense to buy in the first place, since otherwise it'll just get utterly spanked by Genoa for most applications.

Seems kinda dumb from an institutional standpoint to me too, since the accelerators would really benefit from wide adoption to help justify software support. But then Intel has pretty much flubbed widely introducing the AVX512 stuff too, so this would be just more of the same I guess.

They ensured that the AVX512-capable chip I purchased can no longer do AVX512 (12600K)

Beef
Jul 26, 2004
The economics for datacenter CPUs are totally different than client.

  • We're firmly in the "dark silicon" era, meaning that we can put a lot more transistors on a die than we can power. Transistor real estate is extremely cheap for a feature that can be power gated.
  • A few big players absolutely dominate sales. It makes economic sense (because of the previous point) to add something that only benefits a specific big customer's application (like SAP HANA, DPDK, ...).
  • Those accelerators are typically so special purpose that their use rarely overlaps. If customer A buys a new Xeon because of accelerator X, it likely does not care about accelerators Y and Z.
  • It does not make economic or technical sense to make a version of a Xeon without that Google-only or Oracle-only accelerator. So it gets fused off instead.
  • Having a single SKU means that everyone is essentially subsidizing the R&D for features used by only a few big players. Intel now actually has competition in the datacenter, so offering a cheaper version to customers is probably another way they can give the Xeons a competitive edge.

These are not in-core features like AVX-512 that we're talking about. It's stuff integrated on the die but outside the core, like security enclaves and an IO accelerator.
Take DLB as an example: it is essentially a PCIe device attached to the on-die mesh network to help with a certain aspect of software-defined network processing. It is absolutely useless in Xeons for an HPC cluster. Perhaps you could use it for a handful of head nodes, so it might make sense to pay for that feature on those head nodes but not for all the other worker nodes.
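
A rough way to actually see those blocks on a Linux box is that they enumerate as ordinary PCIe functions. Minimal sketch (it just walks sysfs and filters on Intel's PCI vendor ID; the device IDs for specific accelerators like DLB/DSA/QAT vary by platform, so none are hard-coded here):

code:
# List Intel (vendor 0x8086) PCI functions and the driver bound to each.
# Off-core accelerators show up here as plain PCIe endpoints even though they
# physically sit on the CPU die/mesh. Linux-only; no accelerator device IDs
# are assumed, since those are platform-dependent.
from pathlib import Path

PCI = Path("/sys/bus/pci/devices")

for dev in sorted(PCI.iterdir()):
    if (dev / "vendor").read_text().strip() != "0x8086":   # Intel's PCI vendor ID
        continue
    device = (dev / "device").read_text().strip()
    pci_class = (dev / "class").read_text().strip()
    driver = (dev / "driver").resolve().name if (dev / "driver").exists() else "(no driver bound)"
    print(f"{dev.name}  device={device}  class={pci_class}  driver={driver}")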


quote:

Seems kinda dumb from a institutional standpoint to me too since the accelerators would really benefit from wide adoption to help justify software support.

Absolutely. Hardware segmentation like that sucks from a software developer's point of view. It probably means that only Intel is going to bother doing the development to either add support to existing applications or create new software products that rely on the hardware functionality.

BlankSystemDaemon
Mar 13, 2009



To put a fine point on it, no consumers matter to Intel - their entire business is built around big corporations as their consumers, and providing what those customers want.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

BlankSystemDaemon posted:

To put a fine point on it, no consumers matter to Intel - their entire business is built around big corporations as their consumers, and providing what those customers want.

A bold move given that customers increasingly want extremely high core counts per socket and rock-bottom power usage per core, areas where Intel is behind not only AMD but also Ampere and Annapurna Labs.

PC LOAD LETTER
May 23, 2005
WTF?!

Beef posted:

The economics for datacenter CPUs are totally different than client.
Sure, but I don't see how that works in favor of Intel's approach here (they'd have to significantly drive down costs of their Xeon moneymaker while still having the accelerators on die anyway, while Genoa eats their lunch), and I don't see how it'll drive adoption of these accelerators either, which will greatly diminish their practical value in the marketplace.

Intel is huge and can throw lots of resources at a given product to try and jump start adoption, but as they've shown with AVX512, if you limit the products that support it too much that doesn't much matter.

Their accelerators are powerful and could be compelling if they get widespread support, yes even if some of the use cases are niche now, but if they don't get support they'll be a waste of money and die space.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Intel buying Xilinx makes sense to be able to integrate these off-die SoCs and even make them programmatically accessible. It's a middle ground compared to the full-blown custom dies that many top-dollar customers want, and some companies are actually using FPGAs for their SDN already (namely MS and IBM, at the very least, for updating fabric routes with much better latency than via classic software-heavy SDN). Furthermore, FPGAs have been used for a good while by HFT firms, and being able to sell the CPU and FPGA together in one package is certainly attractive for many use cases.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

necrobobsledder posted:

Intel buying Xilinx makes sense to be able to integrate these off-die SoCs and even make them programmatically accessible. It's a middle ground compared to the full-blown custom dies that many top-dollar customers want, and some companies are actually using FPGAs for their SDN already (namely MS and IBM, at the very least, for updating fabric routes with much better latency than via classic software-heavy SDN). Furthermore, FPGAs have been used for a good while by HFT firms, and being able to sell the CPU and FPGA together in one package is certainly attractive for many use cases.

Minor correction, Intel bought Altera, AMD bought Xilinx. Your points stand though!

AMD just announced a Xilinx price hike :sigh:

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

PC LOAD LETTER posted:

Sure, but I don't see how that works in favor of Intel's approach here (they'd have to significantly drive down costs of their Xeon moneymaker while still having the accelerators on die anyway, while Genoa eats their lunch), and I don't see how it'll drive adoption of these accelerators either, which will greatly diminish their practical value in the marketplace.

If the premise is that there are some features that only a few customers want, then there are several scenarios for how Intel could handle it.

1. They could manufacture different hardware SKUs, but this would cause unwanted expenses. Most CPUs wouldn't have the features, a small portion would have feature A, another feature B, and a tiny portion both A and B.

2. They could only manufacture full-featured CPUs. This would force most customers to overpay for features they don't want, and the few customers who want them would get the CPUs cheaper than they are worth.

3. They can hardware disable the features. This way they can build a single product, but sell it at different prices to different customers.

4. They can software disable the features. Similar situation to the previous one, but if you get a new version of the software that could benefit from the feature, you can then license it instead of buying new hardware. This would significantly improve the chances that the feature achieves wider use.
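
To make the trade-off between those scenarios concrete, a toy back-of-the-envelope comparison (every number is invented purely for illustration; the point is the shape of the economics, not the values):

code:
# Toy model of the SKU strategies above. All numbers are made up for illustration.
BASE_VALUE = 300            # hypothetical value of the CPU to a buyer who ignores accelerator A
ACCEL_VALUE = 400           # hypothetical extra value to the few buyers who do want A
FRACTION_WANTING_A = 0.05   # assume only 5% of buyers care about accelerator A

# Scenario 2 (one full-featured SKU, one price): either price low and give A away,
# or price high and lose the 95% of buyers who won't pay for it.
rev_low = BASE_VALUE
rev_high = FRACTION_WANTING_A * (BASE_VALUE + ACCEL_VALUE)
print(f"single SKU, low price : {rev_low:.0f} average revenue per potential buyer")
print(f"single SKU, high price: {rev_high:.0f} average revenue per potential buyer")

# Scenarios 3/4 (same silicon, A fused off or license-locked, two prices):
# each buyer pays roughly what the feature set is worth to them.
rev_segmented = (1 - FRACTION_WANTING_A) * BASE_VALUE \
    + FRACTION_WANTING_A * (BASE_VALUE + ACCEL_VALUE)
print(f"segmented SKUs        : {rev_segmented:.0f} average revenue per potential buyer")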

PC LOAD LETTER
May 23, 2005
WTF?!

Saukkis posted:

If the premise is that there are some features that only a few customers want
Yeah I don't really buy that.

If it was truly only something that a relative handful of customers wanted Intel wouldn't bother in the first place. They have to have volume to keep their fabs profitable.

By making the features widely available they'd open the door to widespread adoption which could only help them sell more in the long run if support takes off. By paywalling it that possibility is pretty much 0 and they have to hope that those "few" customers will sign up to get shafted on pricing while competing against Genoa.

You might as well argue that Intel should've paywalled AVX512, heck AVX256 while you're at it, years ago too lol

My WAG is the shareholders pushed for the Intel VIPs to try something, anything, to get more money in, and this was what squeezed out.

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.
You've been buying CPUs and GPUs with functionality present but disabled for years. That isn't a big deal at all.

Recurring monthly fees to access hardware features are stupid bullshit though, and that is what this is looking to try to do.

WhyteRyce
Dec 30, 2001

All the people who cry foul over product segmentation would probably have a total meltdown if they ever saw the giant product SKU spreadsheet or a soft SKU test plan

WhyteRyce fucked around with this message at 18:38 on Dec 1, 2022

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot

PC LOAD LETTER posted:

If it was truly only something that a relative handful of customers wanted Intel wouldn't bother in the first place. They have to have volume to keep their fabs profitable.

By making the features widely available they'd open the door to widespread adoption which could only help them sell more in the long run if support takes off. By paywalling it that possibility is pretty much 0 and they have to hope that those "few" customers will sign up to get shafted on pricing while competing against Genoa.

It is a handful of customers. A handful of customers that represent an enormous revenue stream. If you can say "Hey, we run this task that represents 20% of your computing 60% faster than AMD can in the same wattage/space/whatever", that can be worth many millions of dollars.

At the same time, literally no one else gives a gently caress. All opening up the door does is mean that they have to charge through the roof on all their products with this feature, or they lose out on capturing the revenue that drove the development in the first place.

WhyteRyce
Dec 30, 2001

K8.0 posted:

It is a handful of customers. A handful of customers that represent an enormous revenue stream.

Yeah. I wonder how many product meetings some people have sat in, because Intel has been doing one-off catering to actual big-deal customers (i.e. not your pcmasterrace crowd) since forever

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

necrobobsledder posted:

Intel buying Xilinx makes sense to be able to integrate these off-die SoCs and even make them programmatically accessible. It's a middle ground compared to the full-blown custom dies that many top-dollar customers want, and some companies are actually using FPGAs for their SDN already (namely MS and IBM, at the very least, for updating fabric routes with much better latency than via classic software-heavy SDN). Furthermore, FPGAs have been used for a good while by HFT firms, and being able to sell the CPU and FPGA together in one package is certainly attractive for many use cases.

Intel tried on-socket FPGAs in the Skylake era, but the power budget on socket is too tight for it to be competitive. With CXL and Gen5 coming soon, it's difficult to see on-package or on-die FPGAs except for weird edge cases.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

WhyteRyce posted:

All the people who cry foul over product segmentation would probably have a total meltdown if they ever saw the giant product SKU spreadsheet or a soft SKU test plan

Yeah, these complaints seem to pop up every few years (product launch cycles!) and it's the same bitchin' every drat time.

Honestly, having a soft-settable ability to change the SKUs vs a fused version is even better, because then you have an easy upgrade path if you need it, without even needing to put hands on the hardware.

WhyteRyce
Dec 30, 2001

priznat posted:

Yeah, these complaints seem to pop up every few years (product launch cycles!) and it's the same bitchin' every drat time.

Honestly, having a soft-settable ability to change the SKUs vs a fused version is even better, because then you have an easy upgrade path if you need it, without even needing to put hands on the hardware.

That software HT thing was odd because it was just Intel trying to recoup some money after grandma paid the least amount of money she could for some Gateway PC she bought at Walmart

Cygni
Nov 12, 2005

raring to post

I think it makes a ton of sense in Big Iron land. And personally speaking, I think a "pay once to unlock stuff" model for the consumer market could be a positive thing in decreasing ewaste and increasing the longevity for platforms. Of course, it could also be abused and end up worse for consumers.

As it stands, I have piles of CPUs that I bought for peanuts that very easily could have had years more service life if they had efuses to activate the rest of the silicon that is already present... stuff like Ivy Bridge/Haswell 2/2 Celerons that could have been one efuse away from being a 4/8 i7 still in daily use for general desktop users. And that's not even bringing up the thousands (millions?) of tons of laptops with low-tier BGA CPUs that people hucked into the trash.

But it would all come down to the pricing. Anand's Law and all, "no bad products, just bad prices" (and no, he wasn't talking about products that could harm you, it's just an easy maxim about pricing, c'mon). But if people could pay $15 to turn these Haswell Celerys into, say, a 4790S and have something useful for a family member to take to college? Or to build an HTPC? That sounds like a win for everyone vs it ending up in the trash, and may actually end up giving Intel a better margin than selling a new low-end boxed CPU. Of course, they could also inflate CPU prices across the board to compensate for people's longer platform lives, and the whole thing could be a net negative for the consumer. But they could already do all that today, so.

I think part of people's revulsion for the idea stems from the fact that they want to believe that they are buying a product that is giving everything the company could possibly give them for the dollar out of the box, and that the margins are set by taking the cost to produce the thing and adding some fixed percentage on top... but that's not how the silicon world has been for 25+ years. Reminding people of that fact pisses 'em off, so instead we all just continue pretending that a 7600X or 13600K had a "defect" that led to it being fused and sold for less, when in reality it was perfectly functional and cost the exact same to produce as the 7700X or 13900K next to it in the case. The price and performance difference was created by the producer to make more money.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
setting aside a more utopian set-up, my issue with that sort of thing breaking into the more mainstream consumer space is "how do you know?" - if it's possible to unlock a CPU, and it'll stay unlocked, forever, with no more need for any kind of validation or internet connectivity or whatnot until the end of time, that solves half the problem. But the other half is, if I'm buying a second-hand i3-10100 that the seller claims has been unlocked to become an i5-10400, or even just an i3-10105, it seems like validation would be difficult at best.
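
For what it's worth, the basic checks aren't hard once you can boot the thing; the hard part is doing them before money changes hands. A minimal Linux-only sanity-check sketch (the CLAIMED_* values are placeholders for whatever the listing says, not real data for any SKU, and the brand string is burned in at the factory, so it may not change even if cores really were unlocked - the core and thread counts are the more telling numbers):

code:
# Compare what a CPU reports in /proc/cpuinfo against what a seller claims.
# Linux-only. CLAIMED_* values are placeholders, not real data for any SKU.
CLAIMED_MODEL_SUBSTRING = "i5-10400"   # placeholder: whatever the listing said
CLAIMED_CORES = "6"                    # placeholder
CLAIMED_THREADS = "12"                 # placeholder

info, flags = {}, set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "flags":
            flags |= set(value.split())
        elif key in ("model name", "cpu cores", "siblings"):
            info.setdefault(key, value)

print("model name :", info.get("model name"))
print("cpu cores  :", info.get("cpu cores"))
print("siblings   :", info.get("siblings"))       # hardware threads per package
print("avx512f    :", "yes" if "avx512f" in flags else "no")

matches = (CLAIMED_MODEL_SUBSTRING in info.get("model name", "")
           and info.get("cpu cores") == CLAIMED_CORES
           and info.get("siblings") == CLAIMED_THREADS)
print("matches the listing?", "yes" if matches else "NO")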

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Cygni posted:

I think it makes a ton of sense in Big Iron land. And personally speaking, I think a "pay once to unlock stuff" model for the consumer market could be a positive thing in decreasing ewaste and increasing the longevity for platforms. Of course, it could also be abused and end up worse for consumers.


Dell now sells you a PC with X Cores at Y Speed*

*3 month introductory promotion, retention of speed and core count requires monthly $19.99 subscription.


People would just jailbreak them or run whatever Sr. Juarez authentication server emulator the torrent site they use recommended.

In big iron land, it's even more of a nightmare because of how many software systems that could benefit from more cores are already licensed on a per core basis. I'm sure Oracle would be super interested in a list of all the customers who used it, so they could conduct a 'random' licensing audit and stick people with 10 more core licenses at a 5x 'pay or we sue you' premium.

Cygni
Nov 12, 2005

raring to post

gradenko_2000 posted:

setting aside a more utopian set-up, my issue with that sort of thing breaking into the more mainstream consumer space is "how do you know?" - if it's possible to unlock a CPU, and it'll stay unlocked, forever, with no more need for any kind of validation or internet connectivity or whatnot until the end of time, that solves half the problem. But the other half is, if I'm buying a second-hand i3-10100 that the seller claims has been unlocked to become an i5-10400, or even just an i3-10105, it seems like validation would be difficult at best.

Yeah, it could easily end up similar to the AMD PSB problem for used CPUs, where you basically have to take the seller's word for it and seek a refund if it's wrong. Like the PSB problem, the bigger issue might end up being the listings in this theoretical that just say... nothing, because nobody at the ewaste recycler bothered to check beyond "does it post?"


Methylethylaldehyde posted:

Dell now sells you a PC with X Cores at Y Speed*

*3 month introductory promotion, retention of speed and core count requires monthly $19.99 subscription.


People would just jailbreak them or run whatever Sr. Juarez authentication server emulator the torrent site they use recommended.

In big iron land, it's even more of a nightmare because of how many software systems that could benefit from more cores are already licensed on a per core basis. I'm sure Oracle would be super interested in a list of all the customers who used it, so they could conduct a 'random' licensing audit and stick people with 10 more core licenses at a 5x 'pay or we sue you' premium.

The monthly service thing is my nightmare as a hardware dork, but like you pointed out, we will probably be the ones to crack that. The market rejection of Stadia was a good sign of the general rejection of "hardware as a service", but I'm still worried it's creeping in anyway.

On the big iron side, I was thinking more of Sapphire Rapids' whole story being "we are the platform with the accelerators". Efusing those various accelerators for the customers that don't want them would allow Intel to charge them a lower upfront cost, and pass that charge on to the 2nd owner or whoever else if they want to turn those accelerators on. It makes sense from a segmentation standpoint.

AMD's story is "we are the platform with the cores/raw performance", and they're artificially segmenting Genoa based on TDP, aka performance per core. If you want a higher TDP, you pay more money for it, but the hardware underneath is identical. Intel instead segmenting based on accelerators, and allowing people to upsell later, might give them different inroads to customers, because they won't win going against Genoa head on. So yeah... it makes sense to me.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

in a well actually posted:

Intel tried on-socket FPGAs in the Skylake era, but the power budget on socket is too tight for it to be competitive. With CXL and Gen5 coming soon, it's difficult to see on-package or on-die FPGAs except for weird edge cases.
I imagine that Intel engineers were aware of the power profile and wanted to see what they could sacrifice for some heterogeneous compute, given that a lot of FPGA gains were really from process improvements more than anything else and Intel was leading there for quite a while. Granted, integrated heterogeneous computing really just never took off commercially, and the biggest example of that reality is the lack of competitive iGPU options compared to discrete GPUs. However, if you look at it from a total market perspective, iGPU is a grand slam half-court dunk given that's what basically everyone uses... except the company that did it best isn't even Intel or AMD but Apple.

Which goes back to why the software side of the equation for a heterogeneous compute model is the real holy grail. It's part of why I worked on heterogeneous compute compilers and FPGA firmware designs for a while in school - the compute model to use it all was so much friction for developers that it takes a company like Intel, AMD, or perhaps TSMC to really shape up the software (we tried various research programming languages, and vector languages like Matlab or R / S-Plus were settled on). nVidia, on the other hand, went with shaders on GPU render pipelines and ran the GPGPU ball back and forth across the field with the resurgence of neural nets, so they were better positioned in many respects to do heterogeneous compute circa 2010. That's sort of what the nVidia Ion chipset series became as a product - a test of whether nVidia could go to market as the OEM - and they just noped out in the end and stuck with discrete devices.

mmkay
Oct 21, 2010

Who is Big Iron and where does the name come from?

ijyt
Apr 10, 2012

idk but it sounds like one of those buzzwords that one person uses correctly and the rest of the thread latches on to for a 4-12 week period

BlankSystemDaemon
Mar 13, 2009



mmkay posted:

Who is Big Iron and where does the name come from?
Big Iron doesn't exist anymore; the last Big Iron machine made was the Sun E10000.

I guess some people also refer to mainframes as Big Iron, but in my mind they were different back in the day.

Cygni
Nov 12, 2005

raring to post

It's a term that's been in use since the 80s (maybe earlier?) to describe what used to be called "mainframes". Big processing, big computing, big boxes, big iron. It's now more broadly used to describe big server/datacenter customers, as the old "mainframe" concept has itself been kinda eroded by cloud/distributed compute and x86 in general.

“Big iron” these days to me are the big customers ordering and deploying batches of hundreds/thousands of compute units/pizza boxes. I generally wouldn’t include supercomputers in that though… as that’s a whole other ballgame. I dunno if I would include the hyperscalers either, but that might just be me. It’s a fun old term, have fun with it.

E: according to techopedia, the term is from the 1970s and was used to differentiate from in-vogue terms like minicomputer and microcomputers. Ahh what a time. https://www.techopedia.com/definition/2157/big-iron

Cygni fucked around with this message at 10:16 on Dec 2, 2022

BlankSystemDaemon
Mar 13, 2009



Mainframes existed before big iron, and still exist to this day.

Minis and micros were, respectively, fridge sized and palm sized.

Cygni
Nov 12, 2005

raring to post

BlankSystemDaemon posted:

Mainframes existed before big iron, and still exist to this day.

Minis and micros were, respectively, fridge sized and palm sized.

Tbh it's not like there are real definitions for any of those terms. If you Google it, the PET/Apple II/TRS-80 are all described as micros (there weren't really palm-sized computers in the 70s), and Wikipedia and Techopedia both have Big Iron and mainframes as synonymous.

It doesn’t really matter, they are all fun terms, not legal descriptors.

BlankSystemDaemon
Mar 13, 2009



Yeah today I'm not convinced it matters a lot.

As an example, poudriere@bigiron.local is the user on my buildserver that's responsible for building both FreeBSD and the ports I use, as well as the documentation tree whenever I need to test changes.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

BlankSystemDaemon posted:

Big Iron doesn't exist anymore; the last Big Iron machine made was the Sun E10000.

I guess some people also refer to mainframes as Big Iron, but in my mind they were different back in the day.

How do *you* define big iron?

shrike82
Jun 11, 2005

Something like a DGX if we're talking about stuff we actually work with

carry on then
Jul 10, 2010

by VideoGames

(and can't post for 10 years!)

mmkay posted:

Who is Big Iron and where does the name come from?

Think Marty Robbins had a song about him.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

carry on then posted:

Think Marty Robbins had a song about him.

Now I'm picturing some dudebro silicon valley dipshit with socks+sandals, a 10 gallon Stetson, and a 1U pizzabox server poorly belted to his hip, 40mm case fans screaming in protest.

Cygni posted:

The monthly service thing is my nightmare as a hardware dork, but like you pointed out, we will probably be the ones to crack that. The market rejection of Stadia was a good sign of the general rejection of "hardware as a service", but I'm still worried it's creeping in anyway.

Stadia was poo poo for a whole bunch of reasons entirely unrelated to the underlying Hardware as a Service model.

Honestly I'd expect the PS5+/XBX-Xtreme to flirt with it before desktop/server hardware tries it. "Unlock Pro mode, 120hz output, and 15 extra Apex Legends FPS, for only $19.95/mo!". If you can monetize the compute efficiency of the mid-cycle refresh, that's tens of millions of dollars in basically free money. And the PS5/XBX is a locked down enough platform that they could probably actually prevent most people from being able to unlock the extra shaders+cache without too much effort. Or at least console banning people for jailbreaking it, forcing offline only mode.

Methylethylaldehyde fucked around with this message at 23:46 on Dec 2, 2022

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

Methylethylaldehyde posted:

Stadia was poo poo for a whole bunch of reasons entirely unrelated to the underlying Hardware as a Service model.

What problems did it have? The one time I tried it at a friend's place it worked fine, and he had been happy with the service.

Inept
Jul 8, 2003

Saukkis posted:

What problems did it have? The one time I tried it at a friend's place it worked fine, and he had been happy with the service.

Buy games and they only work on stadia
Few games
Run by Google so people immediately knew it would get dropped within a few years
They bought some game studios and shitcanned them in under a year
Proprietary game controller that needs the stadia app to connect to a PC, so the hardware is now e-waste

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Have any reviewers looked at hyper-threading on/off on Alder Lake / Raptor Lake?

I don't understand the trade-offs of hyper-threading anymore now that CPUs are acting more like GPUs, where they are frequently power limited. Hyper-threading uses additional power, so will I get higher clocks for a given load without hyper-threading? Given that a 13900K has 24 physical cores, does it really need the 8 extra hyper-threads?

BlankSystemDaemon
Mar 13, 2009



HalloKitty posted:

How do *you* define big iron?
A single server, with a backplane that CPUs and daughterboard expansion cards attach to, that takes up an entire rack.
Sort of like this and a whole bunch of SAS chassis as well as a bunch of daughterboard expansion chassis.

The issue with that, of course, is that motherboards look nothing like that anymore - nowadays we have no northbridge that the CPU connects to, and there's no single bus that carries everything; instead we have a CPU with its own PCIe lanes as well as a bus that carries everything else to the PCH.

BlankSystemDaemon fucked around with this message at 06:00 on Dec 3, 2022

Potato Salad
Oct 23, 2014

nobody cares


I think that while we try to use big iron to describe a system, what it really describes is a use case.

Beef
Jul 26, 2004

Twerk from Home posted:

Have any reviewers looked at hyper-threading on/off on Alder Lake / Raptor Lake?

I don't understand the trade-offs of hyper-threading anymore now that CPUs are acting more like GPUs, where they are frequently power limited. Hyper-threading uses additional power, so will I get higher clocks for a given load without hyper-threading? Given that a 13900K has 24 physical cores, does it really need the 8 extra hyper-threads?

Hyperthreading makes logical threads share most of the core's infrastructure with only a negligible transistor overhead to save register state and such, so the hardware power cost difference of HT on or off can be safely ignored.

The consideration of turning HT on or off has more to do with efficiency: if enabling HT for your application makes it perform worse, it will drop the perf/watt ratio. If your workload runs faster with HT, it increases its power efficiency.

Think of it this way: if a thread spends half its time idle waiting for an external memory load, it is wasting half its power. A core will waste less power if it can switch to another logical thread that can make progress. On the flip side, if the core is happily number-crunching away, it will waste power if it has to switch to another thread.
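
A toy way to put numbers on that stall argument (values invented purely for illustration; it ignores the real costs of HT, like the siblings sharing caches and execution ports):

code:
# Toy illustration of the memory-stall argument. Numbers are made up, and real
# HT siblings also contend for caches and execution ports, which this ignores.
def core_utilization(stall_fraction: float, threads: int) -> float:
    # The core only sits idle when every resident logical thread is stalled.
    return 1 - stall_fraction ** threads

for label, stall in [("memory-bound", 0.5), ("compute-bound", 0.05)]:
    print(f"{label}: HT off {core_utilization(stall, 1):.0%}, "
          f"HT on {core_utilization(stall, 2):.0%}")
# memory-bound : HT off 50%, HT on 75%   -> the second thread hides the stalls
# compute-bound: HT off 95%, HT on ~100% -> almost nothing left to hide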

Rule of thumb for whether HT will help or not: are you running a large number of threads that are doing a lot of memory or IO accesses? Then leave HT enabled. That means raytracing, encoding/compression, databases, webservers, compiling, etc. Theoretically games could benefit, but they rarely run enough threads to matter.
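
If you want to check your own workload, recent Linux kernels also expose a runtime SMT switch, so you can A/B hyper-threading without a BIOS trip. Minimal sketch (assumes /sys/devices/system/cpu/smt exists, i.e. a reasonably recent kernel; writing to it needs root; run your own benchmark between toggles, nothing here measures anything):

code:
# Check (and optionally flip) SMT at runtime on Linux for HT on/off A-B tests.
# Uses /sys/devices/system/cpu/smt/{active,control}; writing needs root.
import sys
from pathlib import Path

SMT = Path("/sys/devices/system/cpu/smt")

def read(name: str) -> str:
    return (SMT / name).read_text().strip()

print("SMT active :", read("active"))    # 1 = hyper-thread siblings are online
print("SMT control:", read("control"))   # on / off / forceoff / notsupported

if len(sys.argv) == 2 and sys.argv[1] in ("on", "off"):
    # Writing here onlines/offlines the HT sibling threads immediately, no reboot.
    (SMT / "control").write_text(sys.argv[1])
    print("SMT control now:", read("control"))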


edit: forgot to mention that modern GPUs are essentially giant hyperthreaded machines. There are a ton of transistors dedicated to saving a stupid number of register states, each pixel gets its own logical thread, and there's a hardware scheduler.

Beef fucked around with this message at 10:35 on Dec 4, 2022


feedmegin
Jul 30, 2008

BlankSystemDaemon posted:

Mainframes existed before big iron, and still exist to this day.

Minis and micros were, respectively, fridge sized and palm sized.

I don't think at any point anyone has used micro to mean, like, a Palm Pilot.
