BlankSystemDaemon
Mar 13, 2009




priznat posted:

It’s pretty wild it lasted as long as it did; iirc the name was supposed to be based on "five" for the 586.

Intel never gave us a Sexium though :negative:

E: lol they did trademark it though https://alter.com/trademarks/sexium-74695934
Didn't it start out life as the P5 microarchitecture which introduced MMX and superscalar functionality to the original 80386 design?

The 80486 added a few microarchitecture changes, but the big difference was the presence of the L1 cache and a built-in FPU, whereas a superscalar design is a more fundamental redesign.

EDIT: Yep, at the microarchitecture level, only BSWAP was added compared to the 80386, while the FPU added 5 separate instructions, according to Agner Fog's instruction latency tables.

BlankSystemDaemon fucked around with this message at 23:46 on Sep 16, 2022

PBCrunch
Jun 17, 2002

Lawrence Phillips Always #1 to Me

BlankSystemDaemon posted:

Didn't it start out life as the P5 microarchitecture which introduced MMX and superscalar functionality to the original 80386 design?

The 80486 added a few microarchitecture changes, but the big difference was the presence of the L1 cache and a built-in FPU, whereas a superscalar design is a more fundamental redesign.

MMX instructions were introduced on the fourth (!) process node version of the Pentium.

P5: 60-66 MHz, Socket 4 (5V), 3.1M transistors, 0.8μm (800 nm)
P54C: 75-100 MHz, Socket 5 (3.3V), 3.2M transistors, 0.5 or 0.6μm depending on who you ask
P54CQS: 120 MHz, Socket 5 (3.3V), 3.3M transistors, 0.35μm
P54CS: 133-200 MHz, Socket 7 (3.3V), 3.3M transistors, 0.35μm
*
P55C: 120-233 MHz, Socket 7 (2.8V), 4.5M transistors, 0.28μm
Tillamook: 166-300 MHz, different formats for mobile and embedded applications, 4.5M transistors (0.25μm)

The * is for the weirdo P24T Pentium Overdrive chips that ran on a kind of parallel riser board with a built-in VRM to let the 3.3V chips run on 5V 486 boards.
P24T: 63-83 MHz, Socket 2 or Socket 3 (3.3V), 0.6μm

Hasturtium
May 19, 2020

And that year, for his birthday, he got six pink ping pong balls in a little pink backpack.

PBCrunch posted:

MMX instructions were introduced on the fourth (!) process node version of the Pentium.

P5: 60-66 MHz, Socket 4 (5V), 3.1M transistors, 0.8μm (800 nm)
P54C: 75-100 MHz, Socket 5 (3.3V), 3.2M transistors, 0.5 or 0.6μm depending on who you ask
P54CQS: 120 MHz, Socket 5 (3.3V), 3.3M transistors, 0.35μm
P54CS: 133-200 MHz, Socket 7 (3.3V), 3.3M transistors, 0.35μm
*
P55C: 120-233 MHz, Socket 7 (2.8V), 4.5M transistors, 0.28μm
Tillamook: 166-300 MHz, different formats for mobile and embedded applications, 4.5M transistors (0.25μm)

The * is for the weirdo P24T Pentium Overdrive chips that ran on a kind of parallel riser board with a built-in VRM to let the 3.3V chips run on 5V 486 boards.
P24T: 63-83 MHz, Socket 2 or Socket 3 (3.3V), 0.6μm

You said it all better than I could have. One Weird Trick I remember reading, years after I could have used that information, was that with beefier cooling P55C still supported 3.3V signaling and could work on older motherboards, especially since they used a remapped multiplier. I wish I'd known that when I stumbled on a dual socket Digital Pentium 90 on a curb back around 2002 - a pair of Pentium MMXes would have made it a decently spry Shoutcast server in a corner of my room.

It is weird how the Pentium name lost its luster with the Pentium 4 and was then kept around as the rung above their Celeron-named chips for a decade and a half. I think deprecating those two product lines is going to bite them in the rear end - "drat it, why did I buy this low end desktop with an INTEL PROCESSOR?"

PBCrunch
Jun 17, 2002

Lawrence Phillips Always #1 to Me
Way back when, I had a laptop that came with a K6 233 MHz CPU. Not a K6-2, a K6. It was fine at the time of purchase, but technology marches on.

I read about the then-new K6-III with its big-rear end (for the time) full-speed L2 cache on the die (which pushed the board cache down to L3). The important bit for me was that it remapped the 2x multiplier to something like 6x. My crummy little Socket 7 laptop had jumpers. It had markings to set a 2x multiplier. Could it run a K6-III at 400 MHz?

Only one way to find out. I ordered the chip and wouldn't you know it, the machine booted on the first try. Instant performance doubling. The battery life was terribly bad, but all laptops had pretty crappy battery life back then. WiFi wasn't really a thing so you needed wires for network access anyway. The laptop got a little bit hotter than before, but it was as stable and reliable as ever.

RIP Celeron. The first PC I ever built for myself had two Celeron 366As overclocked to 550 MHz and beyond on an Abit (also RIP) BP6, with an Nvidia Riva TNT2 Ultra on the side. Oh, the days when overclocking had meaningful benefits and was fun.

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️
The 300A... if only we could still buy something that performs just as well as the top end but at 1/4 the price.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

BlankSystemDaemon posted:

Didn't it start out life as the P5 microarchitecture which introduced MMX and superscalar functionality to the original 80386 design?

The 80486 added a few microarchitecture changes, but the big difference was the presence of the L1 cache and a built-in FPU, whereas a superscalar design is a more fundamental redesign.

EDIT: Yep, at the microarchitecture level, only BSWAP was added compared to the 80386, while the FPU added 5 separate instructions, according to Agner Fog's instruction latency tables.

Yeah it still would have been a 586 anyway but they couldn’t trademark that designation.

Kind of an interesting development tree though!

FuturePastNow
May 19, 2014


Just make the Celeron and Pentium the Core i1 and i2, still a dumb name but nobody would be confused

Wild EEPROM
Jul 29, 2011


oh, my, god. Becky, look at her bitrate.
the core naming scheme is a dumb way of naming things in the first place.

pentium 4: okay, it's the 4th pentium
core duo: sure, it's got 2 cores now.
core 2 solo, duo, quad: sure, it's the second version of core
core i7: the 7th generation of processors

oh but then there's core i5 which is the 5th generation?

wait i7 2nd generation? 3rd generation?
core i9?
core i3?

SwissArmyDruid
Feb 14, 2014

by sebmojo

Weird Al is gonna have to rename that song, I guess. https://www.youtube.com/watch?v=qpMvS1Q1sos

wargames
Mar 16, 2008

official yospos cat censor

Wild EEPROM posted:

the core naming scheme is a dumb way of naming things in the first place.

pentium 4: okay, it's the 4th pentium
core duo: sure, it's got 2 cores now.
core 2 solo, duo, quad: sure, it's the second version of core
core i7: the 7th generation of processors

oh but then there's core i5 which is the 5th generation?

wait i7 2nd generation? 3rd generation?
core i9?
core i3?

well i2, i4, and i6 were the itanium series.

mobby_6kl
Aug 9, 2009

by Fluffdaddy

Wild EEPROM posted:

the core naming scheme is a dumb way of naming things in the first place.

pentium 4: okay, it's the 4th pentium
core duo: sure, it's got 2 cores now.
core 2 solo, duo, quad: sure, it's the second version of core
core i7: the 7th generation of processors

oh but then there's core i5 which is the 5th generation?

wait i7 2nd generation? 3rd generation?
core i9?
core i3?

Are you mad at BMW having a 3/5/7 series? You still get the generation and other specs in the model number.

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️
you see it's very confusing so suckers can keep buying outdated SKUs and feel good at full MSRP

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
can’t wait to see the looks on all your faces next week when they announce they’re rebranding to the Pentium Corporation :smuggo:

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Paul MaudDib posted:

can’t wait to see the looks on all your faces next week when they announce they’re rebranding to the Pentium Corporation :smuggo:

Wouldn't be the worst thing they've done recently! :haw:

Dr. Video Games 0031
Jul 17, 2004

Some Chinese reviewers apparently got their hands on a final retail sample of the 13900K: https://www.bilibili.com/read/cv18648273

The results are all within the realm of the expected and it appears to be a pretty detailed review, so there's no reason to doubt its authenticity. +40-50% multi-threaded, +10-15% single-threaded. Lots of really good boosts to multi-threaded applications. +13% CS:GO performance at 1080p, with other games seeing lower boosts.

The important thing to note is that these tests were done with no power limits on either CPU, and the 13900K with no limits reached an eye-watering 343W. That kind of power draw is simply not going to be doable for a lot of users, so the actual results in an average user's computer may be a fair bit lower. I didn't really understand the Google translation of why they didn't test the chip with power limits in place (something something motherboard NDAs), but they say they'll update the review with those tests later.

Boat Stuck
Apr 20, 2021

I tried to sneak through the canal, man! Can't make it, can't make it, the ship's stuck! Outta my way son! BOAT STUCK! BOAT STUCK!

Dr. Video Games 0031 posted:

Some Chinese reviewers apparently got their hands on a final retail sample of the 13900K: https://www.bilibili.com/read/cv18648273

The results are all within the realm of the expected and it appears to be a pretty detailed review, so there's no reason to doubt its authenticity. +40-50% multi-threaded, +10-15% single-threaded. Lots of really good boosts to multi-threaded applications. +13% CS:GO performance at 1080p, with other games seeing lower boosts.

The important thing to note is that these tests were done with no power limits on either CPU, and the 13900K with no limits reached an eye-watering 343W. That kind of power draw is simply not going to be doable for a lot of users, so the actual results in an average user's computer may be a fair bit lower. I didn't really understand the Google translation of why they didn't test the chip with power limits in place (something something motherboard NDAs), but they say they'll update the review with those tests later.

Netburst is back baby!

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️
ok now im very interested in how much a 13th gen can be downpowered while retaining 90% of the performance
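
If you want to run that experiment yourself on Linux, the intel_rapl powercap sysfs interface lets you tighten the long-term package power limit and re-run a fixed workload at each cap. A minimal sketch only, assuming the usual intel-rapl:0 package domain and using stress-ng as a stand-in benchmark (both are assumptions); it needs root and ignores RAPL counter wraparound:

code:
import subprocess
import time

RAPL = "/sys/class/powercap/intel-rapl:0"  # package-0 domain (assumed path)

def set_long_term_limit(watts):
    # constraint_0 is the long-term (PL1-style) limit, in microwatts;
    # needs root, and some boards also need the domain's "enabled" flag set
    with open(f"{RAPL}/constraint_0_power_limit_uw", "w") as f:
        f.write(str(int(watts * 1_000_000)))

def package_energy_j():
    # cumulative package energy counter; wraparound ignored for brevity
    with open(f"{RAPL}/energy_uj") as f:
        return int(f.read()) / 1_000_000

def run_benchmark():
    # fixed amount of work so wall time is comparable across power caps;
    # stress-ng is just a placeholder for whatever workload you care about
    e0, t0 = package_energy_j(), time.perf_counter()
    subprocess.run(["stress-ng", "--cpu", "0", "--cpu-ops", "200000"],
                   check=True)
    return time.perf_counter() - t0, package_energy_j() - e0

for cap in (253, 200, 150, 125, 95, 65):  # example caps in watts
    set_long_term_limit(cap)
    seconds, joules = run_benchmark()
    print(f"{cap:>4} W cap: {seconds:6.1f} s, {joules:8.0f} J")
Plot the wall time against the cap and you get exactly the "how far can I drop it before I lose 10%" curve being asked about.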

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
Where can I buy a 700 Hz monitor

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Dr. Video Games 0031 posted:

Some Chinese reviewers apparently got their hands on a final retail sample of the 13900K: https://www.bilibili.com/read/cv18648273

The results are all within the realm of the expected and it appears to be a pretty detailed review, so there's no reason to doubt its authenticity. +40-50% multi-threaded, +10-15% single-threaded. Lots of really good boosts to multi-threaded applications. +13% CS:GO performance at 1080p, with other games seeing lower boosts.

The important thing to note is that these tests were done with no power limits on either CPU, and the 13900K with no limits reached an eye-watering 343W. That kind of power draw is simply not going to be doable for a lot of users, so the actual results in an average user's computer may be a fair bit lower. I didn't really understand the Google translation of why they didn't test the chip with power limits in place (something something motherboard NDAs), but they say they'll update the review with those tests later.

Didn't the 13900K increase the core/thread count by 33% over the 12900K?

So in general, they basically got the 10-15% ST increase by cranking the power up.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
Winter is coming. I see no problem with a 350 watt CPU.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



K8.0 posted:

Winter is coming. I see no problem with a 350 watt CPU.

Well yeah, because it's from Intel. ;)

Dr. Video Games 0031
Jul 17, 2004

SourKraut posted:

Didn't the 13900K increase the core/thread count by 33% over the 12900K?

So in general, they basically got the 10-15% ST increase by cranking the power up.

The 13900K is going from 8P + 8E to 8P + 16E, so it's +50% cores but all the new cores are E-cores. I think the max boost clock is also higher this time around, which accounts for the higher ST scores.

Shipon
Nov 7, 2005
More cores than anyone really needs for the most part, but buying the top SKU is the only way to get the best single-threaded performance out of a generation, since that's how they segment their products.

SwissArmyDruid
Feb 14, 2014

by sebmojo
Intel lied to consumers for almost a decade, claiming anything more than 4c/8t on a non-enthusiast desktop socket or 10% IPC uplift YOY was an impossibility.

This just feels like a wild overcorrection.

CoolCab
Apr 17, 2005

glem
man remember the more cores picture, lol

CoolCab
Apr 17, 2005

glem
you die a hero or live long enough to become a villain

SwissArmyDruid
Feb 14, 2014

by sebmojo
I dunno. I'm sure Intel is genuinely onto something with the E-cores, but they seem to be using them indiscriminately and like a bludgeon.

SwissArmyDruid fucked around with this message at 06:55 on Sep 18, 2022

lih
May 15, 2013

Just a friendly reminder of what it looks like.

We'll do punctuation later.
well raptor lake has been confirmed to only exist as a stop-gap because meteor lake was behind schedule

Dr. Video Games 0031
Jul 17, 2004

I hope you're excited for even more E-cores for the 14th and 15th gen too, by the way. Intel apparently plans on not adding any more P-cores for a while and just pushing more and more E-cores instead.

lih
May 15, 2013

Just a friendly reminder of what it looks like.

We'll do punctuation later.
isn't the rumour that meteor lake is going to have a third type of core in low power E-cores

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
hot take: what if you put big/little on the server platform? even in, eg, web services, you've still got threads and functions that are "hotspots" and could benefit from being moved to a high-performance core. Something like zen3 (in the epyc clock ranges) is fine as a little core, but, instead of a zen3 ccx you could roughly have 3-4 intel golden cove cores, ish, iirc. Obviously, again, there are a lot of things where twice the cores at 70% the potency really adds up, but, you'd also have some fat cores to deal with hotspots.

you could of course do things like launching particular threads or types of delegates onto particular threadpools, but, you could also do it at a method level and annotate particular methods as being "hotspots" and count time spent inside those, or similar, and preferentially allocate those threads onto the p-cores. I bet you could squeeze some extra architectural PPA with either heuristics or annotation like that.

yeah lol at the support involved, but, moving a couple big enterprise applications to it might actually get a lot of bang. There's a lot of applications where java or rdbms etc are the particular bottleneck and you really wish you could run that one part faster.
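
For what it's worth, the "annotate the hotspots and steer them onto the fat cores" idea can be prototyped from userspace today, no scheduler changes needed. A hand-rolled sketch on Linux's sched_setaffinity, not any real framework: the decorator name and the logical-CPU ranges for P- vs E-cores below are made up for illustration (check lscpu or /proc/cpuinfo on real hardware):

code:
import os
import threading
import time
from functools import wraps

P_CORES = set(range(0, 16))   # assumed P-core logical CPUs (8 cores + HT)
E_CORES = set(range(16, 24))  # assumed E-core logical CPUs

def hotspot(func):
    """Mark a method as a hotspot: whatever thread calls it gets pinned
    to the P-cores for the duration, and its time there is accounted."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        old = os.sched_getaffinity(0)       # 0 = the calling thread
        os.sched_setaffinity(0, P_CORES)
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            wrapper.total_time += time.perf_counter() - start
            os.sched_setaffinity(0, old)    # hand the thread back
    wrapper.total_time = 0.0
    return wrapper

@hotspot
def recalc_pricing(batch):
    return sum(x * 1.07 for x in batch)     # stand-in for the hot path

def background_task():
    os.sched_setaffinity(0, E_CORES)        # everything else lives on E-cores
    time.sleep(0.1)

if __name__ == "__main__":
    t = threading.Thread(target=background_task)
    t.start()
    recalc_pricing(range(1_000_000))
    t.join()
    print(f"time spent in hotspots: {recalc_pricing.total_time:.3f}s")
The accounting half is the interesting bit: once you can see where the wall time actually goes, the "heuristics or annotation" tradeoff becomes measurable instead of a guess.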

mobby_6kl
Aug 9, 2009

by Fluffdaddy

SwissArmyDruid posted:

I dunno. I'm sure Intel is genuinely onto something with the E-cores, but they seem to be using them indiscriminately and like a bludgeon.
I think the main issue so far has been that a) they're running the P and E cores at the same voltage*, which is not ideal for the latter, and to make it worse, b) they're boosting everything to the moon to get the benchmark wins over AMD.




https://chipsandcheese.com/2022/01/28/alder-lakes-power-efficiency-a-complicated-picture/

So it seems like you wouldn't want to run them at more than ~3 GHz or ~10 W, but that's not what happens. Depending on the task, there's probably a lot of room to optimize the P/E core power split for the best results. Strangely, I haven't really seen anyone do this in a comprehensive manner (a rough way to measure it yourself is sketched after this post).

I was never a huge fan of big.LITTLE, especially with the instruction set issues, but it seems to actually make sense for desktop usage. I can't think of many (well, any, really) tasks that would scale to more than 8 threads but wouldn't benefit from another 16 E-cores. And the tradeoff is really 4 E-cores for 1 P-core.


*

Paul MaudDib posted:

one specific problem with Alder Lake is that the big cores are designed to run fast and the little cores are (supposed to) run efficient, but they're driven off the same voltage rail. So if you want your P-cores to go fast, you're pouring voltage into the E-cores far beyond what you'd ideally want them to run. As long as they're on the same rail, you might as well clock them as high as they can go, it's not going to reduce power that much simply by reducing frequency without bringing the voltage down along with it... but I think that is what is behind a lot of the "wow the little cores are space efficient but not really all that much more power efficient" stuff. It's not that they were designed that way as a purposeful thing, that's just the implication of having them on the same rail. Sierra Forest or Atom SOC efficiency may look fairly different to Alder Lake because they'll run the e-cores at their ideal voltage instead of unintentionally squeezing the last 10% by running them at p-core voltages.
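
Incidentally, that chipsandcheese-style efficiency curve is reproducible at home: pin a fixed workload to the E-cores, sweep their cpufreq cap, and log RAPL energy at each step. A rough sketch only; the E-core CPU IDs, frequency steps, and workload are assumptions, it needs root, and the RAPL counter is package-wide (so idle P-core power is mixed into the joules):

code:
import subprocess
import time

E_CPUS = list(range(16, 24))   # assumed E-core logical CPUs
RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

def cap_ecore_freq(khz):
    # clamp only the E-cores via cpufreq's per-CPU scaling_max_freq
    for cpu in E_CPUS:
        path = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_max_freq"
        with open(path, "w") as f:
            f.write(str(khz))

def package_energy_j():
    with open(RAPL_ENERGY) as f:
        return int(f.read()) / 1_000_000   # wraparound ignored for brevity

cpu_list = ",".join(map(str, E_CPUS))
for mhz in (1200, 1800, 2400, 3000, 3600, 3900):
    cap_ecore_freq(mhz * 1000)
    e0, t0 = package_energy_j(), time.perf_counter()
    # fixed amount of work, pinned to the E-cores (placeholder workload)
    subprocess.run(["taskset", "-c", cpu_list,
                    "stress-ng", "--cpu", str(len(E_CPUS)),
                    "--cpu-ops", "200000"], check=True)
    dt, dj = time.perf_counter() - t0, package_energy_j() - e0
    print(f"{mhz:>4} MHz cap: {dt:6.1f} s, {dj:8.0f} J package")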

Arivia
Mar 17, 2011

Dr. Video Games 0031 posted:

I hope you're excited for even more E-cores for the 14th and 15th gen too, by the way. Intel apparently plans on not adding any more P-cores for a while and just pushing more and more E-cores instead.

I really appreciate this HUB video that got linked in the gpu thread about how little background multitasking is actually hurting your gaming performance on today's CPUs: https://www.youtube.com/watch?v=Nd9-OtzzFxs

I have a really hard time thinking of what the e-cores are really gonna be useful for outside of content creation or streaming. For someone like me who's just gaming with a bunch of browser tabs open to reference and Discord up, I figured the e-cores would be a big help, but it sure doesn't really look like it.

Rinkles
Oct 24, 2010

What I'm getting at is...
Do you feel the same way?

Arivia posted:

I really appreciate this HUB video that got linked in the gpu thread about how little background multitasking is actually hurting your gaming performance on today's CPUs: https://www.youtube.com/watch?v=Nd9-OtzzFxs

I have a really hard time thinking of what the e-cores are really gonna be useful for outside of content creation or streaming. For someone like me who's just gaming with a bunch of browser tabs open to reference and Discord up, I figured the e-cores would be a big help, but it sure doesn't really look like it.

My 11600K will sometimes choke on Defender's Antimalware Service Executable in CPU-intensive games; I'm guessing an extra core or two would help with that?

Rinkles fucked around with this message at 10:30 on Sep 18, 2022

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

mobby_6kl posted:

I think the main issue so far has been that a) they're running the P and E cores at the same voltage*, which is not ideal for the latter, and to make it worse, b) they're boosting everything to the moon to get the benchmark wins over AMD.

it does have FIVR for each e-core module or p-core though, so, it doesn't matter as much as I thought it did. each FIVR should be able to deliver an independent voltage.

mobby_6kl
Aug 9, 2009

by Fluffdaddy

Paul MaudDib posted:

it does have FIVR for each e-core module or p-core though, so, it doesn't matter as much as I thought it did. each FIVR should be able to deliver an independent voltage.

Ah, thanks for the more accurate information. So essentially that means the E-cores are just pushed beyond the point of diminishing returns by design

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



mobby_6kl posted:

Ah, thanks for the more accurate information. So essentially that means the E-cores are just pushed beyond the point of diminishing returns by design

Because they want the MOST FRAMZ

VorpalFish
Mar 22, 2007
reasonably awesometm

Arivia posted:

I really appreciate this HUB video that got linked in the gpu thread about how little background multitasking is actually hurting your gaming performance on today's CPUs: https://www.youtube.com/watch?v=Nd9-OtzzFxs

I have a really hard time thinking of what the e-cores are really gonna be useful for outside of content creation or streaming. For someone like me who's just gaming with a bunch of browser tabs open to reference and Discord up, I figured the e-cores would be a big help, but it sure doesn't really look like it.

I mean, that's just high-core-count CPUs in general; there are workloads that scale well with core counts, like rendering, compiling, video encoding, etc., but if you just play games you care more about per-core performance than parallelism past maybe 6 cores.

There's a reason nobody's recommending the 5950x as a gaming CPU. These halo products are good at gaming, sure, but if that's your main use case it's a waste of money.

cerious
Aug 18, 2010

:dukedog:

lih posted:

isn't the rumour that meteor lake is going to have a third type of core in low power E-cores

Yes but only on the SoC die per this

canyoneer
Sep 13, 2005


I only have canyoneyes for you

SourKraut posted:

Because they want the MOST FRAMZ

Getting 800 frames per second in a 10 year old video game is the most important problem to be solved in computing
