  • Locked thread
Nintendo Kid
Aug 4, 2011

by Smythe

Paul MaudDib posted:

The idea on 4+1 is that you build it using slower, lower-power processes than the big cores.

Which is completely unnecessary in the x86 world. It doesn't get you anything. You simply clock down one of your cores and it uses much less power naturally. The extra 0.1 watt of power saving or whatever isn't worth the significantly more complex situation of having a different-architecture core (say, an Atom) on the same die.


Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

Paul MaudDib posted:

The tradeoff is that your software has to be smart enough to take advantage of it. If your kernel treats the battery-saver core like a normal core you're going to have issues.
No software is smart enough for this. Also, 4+1 is dead in favor of 4+4 A53+A57 (or A53+A53 if you're Huawei lol), which software is even less equipped to handle.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Professor Science posted:

No software is smart enough for this. Also, 4+1 is dead in favor of 4+4 A53+A57 (or A53+A53 if you're Huawei lol), which software is even less equipped to handle.

I would think it's a pretty straightforward fix - you tweak your kernel scheduler and power manager to prefer the battery-saver core when load is below some threshold. I guess I shouldn't have said "software" - that's a kernel thing. Userland software shouldn't handle processor management.

Guess I'm behind the times on that. In terms of being "equipped to handle that", my intuition would be that it's a lot simpler to write a rule for handling one low-power core when load is below some threshold (let's say sysload < ~0.1) than four cores, for various reasons.
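The threshold rule described above could be sketched as a toy scheduler decision. All names, core numbering, and the 0.1 cutoff here are made up for illustration; this is not how any real kernel's load balancer is written:

```python
# Toy sketch of the "prefer the battery-saver core below a load threshold"
# policy. Names and the 0.1 threshold are illustrative, not from any
# real kernel.

LOW_POWER_CORE = 0          # the single battery-saver core
BIG_CORES = [1, 2, 3, 4]    # the four performance cores
LOAD_THRESHOLD = 0.1        # sysload below this -> stay on the little core

def pick_cores(sysload):
    """Return the list of cores the scheduler should place work on."""
    if sysload < LOAD_THRESHOLD:
        return [LOW_POWER_CORE]   # park the big cores, save power
    return BIG_CORES              # load is real, wake the big cluster

print(pick_cores(0.05))  # -> [0]
print(pick_cores(0.8))   # -> [1, 2, 3, 4]
```

The simplicity is exactly the objection raised later in the thread: the rule only ever looks at past load, so it says nothing about what the workload does next.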

PC LOAD LETTER
May 23, 2005
WTF?!
The whole 'power saver core' thing really only seems to matter for form factors that are extremely power limited like smartphones. For laptops, desktops, and servers current x86 CPU power saving tech can do a pretty good job in low performance or idle situations.

http://techreport.com/review/24879/intel-core-i7-4770k-and-4950hq-haswell-processors-reviewed/7

At idle or in low-performance situations the PSU inefficiency is probably a bigger problem for most desktops, since most of the cheap 80 Plus PSUs have crap efficiency under 20% of capacity. Only relatively recently have even the more expensive 80 Plus Gold PSUs been trying to resolve that issue.
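The light-load problem can be made concrete with a quick calculation. The efficiency curve below is invented to show the shape of the issue, not measured from any real unit:

```python
# Illustrative only: wall-power draw for a cheap 80 Plus PSU whose
# efficiency sags at light load. The curve points are made up.

def psu_efficiency(load_fraction):
    """Rough efficiency vs. fraction of rated capacity (hypothetical curve)."""
    if load_fraction < 0.10:
        return 0.70   # well under 20% load: efficiency craters
    if load_fraction < 0.20:
        return 0.78
    return 0.85       # 80 Plus territory at 20-100% load

def wall_watts(dc_watts, rated_watts=500):
    frac = dc_watts / rated_watts
    return dc_watts / psu_efficiency(frac)

# A 40 W idle desktop on a 500 W PSU (8% load) pulls far more at the wall:
print(round(wall_watts(40), 1))   # 40 / 0.70 = ~57.1 W
print(round(wall_watts(200), 1))  # 200 / 0.85 = ~235.3 W
```

So at idle the wall draw can be 40%+ higher than the DC load, which swamps the fraction of a watt a battery-saver core would buy you on a desktop.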

big shtick energy
May 27, 2004


Paul MaudDib posted:

I would think it's a pretty straightforward fix - you tweak your kernel scheduler and power manager to prefer the battery-saver core when load is below some threshold. I guess I shouldn't have said "software" - that's a kernel thing. Userland software shouldn't handle processor management.

Guess I'm behind the times on that. In terms of being "equipped to handle that", my intuition would be that it's a lot simpler to write a rule for handling one low-power core when load is below some threshold (let's say sysload < ~0.1) than four cores, for various reasons.

Google wasn't able to do it (Android never really took advantage of the 4+1 arrangement properly), which means either they didn't care to because the gains were small, or they weren't able to because of limitations in what they could do with the software.

Nintendo Kid
Aug 4, 2011

by Smythe
There were both Samsung Galaxy S 4 phones with a quad-core "high power usage" cluster plus a quad-core "low power usage" cluster, and Samsung Galaxy S 4 models with just a single (and faster) quad-core CPU. The one with the switchable high/low power set had negligibly more battery life in real-world use (something like a few extra minutes on top of hours and hours of active use, and a few extra minutes on top of days of standby).

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

Paul MaudDib posted:

I would think it's a pretty straightforward fix - you tweak your kernel scheduler and power manager to prefer the battery-saver core when load is below some threshold. I guess I shouldn't have said "software" - that's a kernel thing. Userland software shouldn't handle processor management.
everybody seems to think this (including _lots_ of people in industry) and it's magical thinking, that you can somehow know the load at an exact point in time on the small core and instantly migrate. migration is not free power-wise, it's not free latency-wise, and there's no way to predict the need to migrate from one to the other (you only know load in the past, and past load is not an indicator of future load, especially when you consider that most workloads are bursty). as a result, you're going to have up to N ms (whatever your scheduler interval is) of being totally overloaded on the low power core before the scheduler load balances. and that means you still have to clock up the larger cores, which is not free latency-wise either. so best case, you're right and you probably have some period of time where the system performed badly (which may be okay or it may cause dropped frames or stuttering or other bad things), or you powered up the big core unnecessarily, migrated, kept it on for as long as it takes the scheduler to decay the big core's load, and then migrated everything back to the little core (so much for saving power).
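The point about migration not being free can be put in a toy energy model: moving to the big core only pays off if the burst lasts long enough to amortize the wake-up and migration overhead. Every number below is invented purely to illustrate the break-even, not taken from any real SoC:

```python
# Toy energy model: migrate a burst to the big core, or stay on the
# little one? All constants are invented for illustration.

LITTLE_POWER = 0.3     # watts while running the burst on the little core
BIG_POWER = 1.0        # watts while running on the big core
SPEEDUP = 4.0          # big core finishes the burst this much faster
MIGRATE_ENERGY = 0.05  # joules spent waking the big core and moving state

def energy_little(burst_seconds):
    return LITTLE_POWER * burst_seconds

def energy_big(burst_seconds):
    return MIGRATE_ENERGY + BIG_POWER * (burst_seconds / SPEEDUP)

# Short burst: migration overhead dominates, staying put wins.
print(energy_little(0.05) < energy_big(0.05))  # True
# Long burst: race-to-idle on the big core wins.
print(energy_little(2.0) < energy_big(2.0))    # False
```

The catch, as the post says, is that the scheduler can't know the burst length in advance; it only sees past load, so it routinely pays the migration cost for bursts on the wrong side of the break-even point.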

canyoneer
Sep 13, 2005


I only have canyoneyes for you

FaustianQ posted:

Hey come on now, HP-Compaq recovered :ohdear:

I love how Sun's ex-CEO described the merger as "two garbage trucks colliding"

SwissArmyDruid
Feb 14, 2014

by sebmojo

FaustianQ posted:

Hey come on now, HP-Compaq recovered :ohdear:

u wot m8

Are we talking about the same HP that's splitting into two companies by the end of this year?

Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope

SwissArmyDruid posted:

u wot m8

Are we talking about the same HP that's splitting into two companies by the end of this year?

Which you think will perform better, H or P?

SwissArmyDruid
Feb 14, 2014

by sebmojo

Wheany posted:

Which you think will perform better, H or P?

Actually, it's going to be an enterprise/consumer split. I can't remember who holds onto the printer business, but I fear it's the Meg Whitman-led consumer division.

Kazinsal
Dec 13, 2011


SwissArmyDruid posted:

Actually, it's going to be an enterprise/consumer split. I can't remember who holds onto the printer business, but I fear it's the Meg Whitman-led consumer division.

Printers are going to consumer. Whitman's running Enterprise, and the executive VP of printing and PCs will be running consumer.

Proud Christian Mom
Dec 20, 2006
READING COMPREHENSION IS HARD
Garbage all around

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Don't forget the HP-EDS merger too. Now it's three garbage trucks.

HP is a microcosm of tech company mismanagement and a godawful place to work. It survived because it's a giant but its inertia is finally running out. In no way should it be viewed as a successful anything.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Wikipedia posted:

The split, first reported by The Wall Street Journal and confirmed by other media, will result in two publicly traded companies: Hewlett-Packard Enterprise and HP, Inc. Meg Whitman will serve as chairman of HP, Inc. and CEO of Hewlett-Packard Enterprise, Patricia Russo will be chairman of the enterprise business, and Dion Weisler will be CEO of HP, Inc.

HP:
Chair: Meg Whitman
CEO: Dion Weisler

HPE:
Chair: Patricia Russo
CEO: Meg Whitman

Worse than I thought.

JawnV6
Jul 4, 2004

So hot ...

Professor Science posted:

everybody seems to think this (including _lots_ of people in industry) and it's magical thinking, that you can somehow know the load at an exact point in time on the small core and instantly migrate. migration is not free power-wise, it's not free latency-wise, and there's no way to predict the need to migrate from one to the other
Core Hopping falls in the same kind of gap.

Nintendo Kid posted:

Which is completely unnecessary in the x86 world. It doesn't get you anything. You simply clock down one of your cores and it uses much less power naturally. The extra 0.1 watt of power saving or whatever isn't worth the significantly more complex situation of having a different-architecture core (say, an Atom) on the same die.
There's still times when activity is required every 16ms or so, and you're completely clocked off otherwise. It's not really enough time to get down to a platform power state and back up each time. With a 4+1 (if I'm understanding it correctly) kind of setup you could take power away from the whole big core plane.

And I really think that you want as different an ISA as possible. With similar-enough ISA's people hide behind ACPI and try to take this difficult real-time problem that's very hardware dependent and leave it up to an OS policy with half of those hardware details unavailable. I really don't think that's going to solve it. If you break it and have a completely different ISA, you're ensuring that anyone who wants those power savings knows what they're doing and has invested enough time to understand it.

A Bad King
Jul 17, 2009


Suppose the oil man,
He comes to town.
And you don't lay money down.

Yet Mr. King,
He killed the thread
The other day.
Well I wonder.
Who's gonna go to Hell?
Reading an issue of Maximum PC online, back from when the Phenom 9600 was about to be released.

http://books.google.com/books?id=bgIAAAAAMBAJ&printsec=frontcover&rview=1#v=onepage&q&f=true

It makes me sad to see AMD having spent practically the last 7 years trying to play off their misfortunes as a potential value-market boon. How has AMD survived for so long, especially after being practically ignored by OEMs for so long?

Is it inertia? Are we watching a slow death? VIA didn't take as long to slow down to a 14mil/yr revenue business, but then again they didn't have any viable offerings beyond a brief blip with the Nanos back when the industry was saying the netbook was the future.

Anyways, these back issues are fun. "Powerline-based networking is the future?" "Hey, those GeForce 8800GT's are good!" "What is this Windows Home Server thing? Oh, it's pretty neat!"

A Bad King fucked around with this message at 15:44 on Apr 19, 2015

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Angry Fish posted:

How has AMD survived for so long, especially after being practically ignored by OEMs for so long?

AMD was never good, just competitive, and only thanks to the Netburst head trauma that Intel got over. The P-Ms were competitive with 754 and 939 single cores, and used like 60% less power. Imagine a timeline where Intel never goes Netburst and just focuses on releasing updated P-Ms, never losing the performance crown to AMD and beating AMD to every important milestone. Imagine 2004, except instead of P4Es barely meeting an Athlon 3800+ in performance, some Socket P monster destroys AMD's offerings, 2005 drops C2D, and from 2001-2006 Intel never needs to change the socket, undercutting any value capability AMD has.

AMD lives because Intel had a "Bulldozer", and it colored consumer perception enough. AMD might also be alive because the smartest thing they ever did was acquire ATI (which, rip Radeon, Samsung please buy!).

SwissArmyDruid posted:

u wot m8

Are we talking about the same HP that's splitting into two companies by the end of this year?

I wasn't entirely serious; still, they did bounce back a little after Miss Fiorina. Hey, did you know she wants to run for POTUS? America, the next HP-Compaq.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

FaustianQ posted:

AMD was never good,

Now chill that kind of rhetoric. drat. Whose 64-bit instruction set became de facto? Intel's IA-64?

HalloKitty fucked around with this message at 16:35 on Apr 19, 2015

Not Wolverine
Jul 1, 2007

HalloKitty posted:

Now chill that kind of rhetoric. drat. Whose 64-bit instruction set became de facto? Intel's IA-64?

Correct me if I am wrong, but isn't that mainly because IA-64 was 64-bit only and AMD's was 32- and 64-bit? Like if you bought an Itanic, you would also be required to get a 64-bit OS (which did not exist in the consumer world at the time) and upgrade all of your software to 64-bit versions, and even today a lot of software is 32-bit. Or were there other bigger problems with the Itanium?

Nintendo Kid
Aug 4, 2011

by Smythe

Crotch Fruit posted:

Correct me if I am wrong, but isn't that mainly because IA-64 was 64-bit only and AMD's was 32- and 64-bit? Like if you bought an Itanic, you would also be required to get a 64-bit OS (which did not exist in the consumer world at the time) and upgrade all of your software to 64-bit versions, and even today a lot of software is 32-bit. Or were there other bigger problems with the Itanium?

Yeah IA-64 was completely unrelated to x86.

GRINDCORE MEGGIDO
Feb 28, 1985


Crotch Fruit posted:

Correct me if I am wrong, but isn't that mainly because IA-64 was 64-bit only and AMD's was 32- and 64-bit? Like if you bought an Itanic, you would also be required to get a 64-bit OS (which did not exist in the consumer world at the time) and upgrade all of your software to 64-bit versions, and even today a lot of software is 32-bit. Or were there other bigger problems with the Itanium?

AMD added 64-bit instructions to x86; Itanic was a totally different architecture.

"AMD was never good" - "some alternate timeline where they weren't"... that's kinda bad posting right there.

GRINDCORE MEGGIDO fucked around with this message at 17:15 on Apr 19, 2015

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

HalloKitty posted:

Now chill that kind of rhetoric. drat. Whose 64-bit instruction set became de facto? Intel's IA-64?

And? Yea, AMD definitely came out with a 64-bit instruction set, but they still needed Intel's permission to use x86. Their processors are still poo poo, their GPUs are hot, and they'll run out of money, at which point Intel just gobbles up the AMD64 license and locks everyone out of x86-64 forever.

Hey, VIA isn't total poo poo, they made good chipsets once.
Hey, Voodoo isn't bad, they came up with SLI!
Hey, Cyrix had a lot of really smart people on their bench, stop calling them bad :qq:

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

Crotch Fruit posted:

Correct me if I am wrong, but isn't that mainly because IA-64 was 64-bit only and AMD's was 32- and 64-bit? Like if you bought an Itanic, you would also be required to get a 64-bit OS (which did not exist in the consumer world at the time) and upgrade all of your software to 64-bit versions, and even today a lot of software is 32-bit. Or were there other bigger problems with the Itanium?

A combination of x86-64 being backwards compatible and the original Itaniums being a dumpster fire sealed the deal for the Itanic. My understanding is that Microsoft in particular made it very clear they supported x86-64 over Itanium because of BC, even if IA-64 was the technically superior instruction set. Then came the first Itanium stinking up the joint, leaving a bad taste in people's mouths, and the first Opterons, made available soon after, were a much more desirable choice in the server market.

That being said, saying AMD was never good was straight up wrong. They were clearly superior to Intel's offerings during the early Athlon 64 era when its competition were Netburst CPU's. 200 dollar Athlon 64's were generally faster than the 1000 dollar Pentium 4 Extreme Edition available at the time.

GRINDCORE MEGGIDO
Feb 28, 1985


FaustianQ posted:

And? Yea, AMD definitely came out with a 64-bit instruction set, but they still needed Intel's permission to use x86. Their processors are still poo poo, their GPUs are hot, and they'll run out of money, at which point Intel just gobbles up the AMD64 license and locks everyone out of x86-64 forever.

Hey, VIA isn't total poo poo, they made good chipsets once.
Hey, Voodoo isn't bad, they came up with SLI!
Hey, Cyrix had a lot of really smart people on their bench, stop calling them bad :qq:

Are you off your meds?

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
It's strange to hear people define good as "not poo poo". AMD not making GBS threads their pants while Intel fucks up with Netburst doesn't make AMD good. If Intel never went Netburst and just stayed the course with P-M, AMD would never even have a golden age people could pine and weep for; instead AMD might be competitive up until 2005 instead of 2009.

JawnV6
Jul 4, 2004

So hot ...

FaustianQ posted:

they'll run out of money, at which point Intel just gobbles up the AMD64 license and locks everyone out of x86-64 forever.

That's not a viable long-term play for Intel though. A lot of AMD's existence is owed to second sourcing, and the bigger customers would start funding any alternative they could find if Intel suddenly found themselves sole owners of x86. Intel would have a great run for ~5 years until all the money being dumped into obviating them caught up and they found themselves begging for fab work.

JnnyThndrs
May 29, 2001

HERE ARE THE FUCKING TOWELS

FaustianQ posted:

Bunch of stuff

VIA NEVER made good chipsets.

Every loving one of them was unstable, had lovely disk throughput, conflicted with hardware, or suffered some unholy combination of the above. The best thing that could be said about them is that they were usually fairly cheap.

JnnyThndrs fucked around with this message at 19:33 on Apr 19, 2015

WhyteRyce
Dec 30, 2001

JnnyThndrs posted:

VIA NEVER made good chipsets.

Every loving one of them was unstable, had lovely disk throughput, conflicted with hardware, or suffered some unholy combination of the above. The best thing that could be said about them is that they were usually fairly cheap.

VIA and Creative Labs arguing over whose fault it was that audio was scratchy was fun

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

JawnV6 posted:

That's not a viable long-term play for Intel though. A lot of AMD's existence is owed to second sourcing, and the bigger customers would start funding any alternative they could find if Intel suddenly found themselves sole owners of x86. Intel would have a great run for ~5 years until all the money being dumped into obviating them caught up and they found themselves begging for fab work.

So where is all that funding for the current alternative known as AMD?

LLCoolJD
Dec 8, 2007

Musk threatens the inorganic promotion of left-wing ideology that had been taking place on the platform

Block me for being an unironic DeSantis fan, too!

FaustianQ posted:

It's strange to hear people define good as "not poo poo". AMD not making GBS threads their pants while Intel fucks up with Netburst doesn't make AMD good.

They seemed pretty "good" to me at the time, because the price/performance ratio was the best around. Your "good" seems to be some theoretical target that even the best R&D on the planet wasn't able to produce at the time.


FaustianQ posted:

If Intel never went Netburst and just stayed the course with P-M, AMD would never even have a golden age people could pine and weep for; instead AMD might be competitive up until 2005 instead of 2009.

If.

I'm not sure how much partisan posting is really going on here. I think I speak for the vast majority in saying that competition is good for driving innovation and lower prices. If anyone misses AMD's "golden age" it's for that reason rather than for AMD brand loyalty. So surely that's understandable.

JnnyThndrs
May 29, 2001

HERE ARE THE FUCKING TOWELS

WhyteRyce posted:

VIA and Creative Labs arguing over whose fault it was that audio was scratchy was fun

Considering I had a SB Platinum and a dual PIII w/Via 694x chipset at the time, yeah, that was loads of fun :)

JawnV6
Jul 4, 2004

So hot ...

FaustianQ posted:

So where is all that funding for the current alternative known as AMD?

Half of it's in gcc/llvm/compiler research, the other half is in ARM? AMD's limping along just fine, but dump everything going to them into those two areas and it changes things. I'm confused why you think "second source" apparently implies gangbuster profits for AMD instead of the subsistence-level funding they're currently getting.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

LLCoolJD posted:

They seemed pretty "good" to me at the time, because the price/performance ratio was the best around. Your "good" seems to be some theoretical target that even the best R&D on the planet wasn't able to produce at the time.


If.

I'm not sure how much partisan posting is really going on here. I think I speak for the vast majority in saying that competition is good for driving innovation and lower prices. If anyone misses AMD's "golden age" it's for that reason rather than for AMD brand loyalty. So surely that's understandable.

That theoretical target was the PIII-M and P-M, both of which could compete with the Athlons despite being mobile chips. It's kind of hard to say that making PIII-M and P-M desktop parts was unachievable.

Has nothing to do with brand loyalty, but exactly as you said - AMD's golden age was the height of real competition between the companies, and then everything was downhill from Bulldozer. Wanting that again is something I hope for, but I'm under no illusion that such a time was because of AMD's genius but rather Intel's idiocy.

JawnV6 posted:

Half of it's in gcc/llvm/compiler research, the other half is in ARM? AMD's limping along just fine, but dump everything going to them into those two areas and it changes things. I'm confused why you think "second source" apparently implies gangbuster profits for AMD instead of the subsistence-level funding they're currently getting.

So they're pretty much okay with someone holding onto the AMD64 license that's not Intel, regardless of that company's competitive capability, as everyone scrambles to try and obsolete x86-64 to begin with?

Also, 2016 will be hilarious for AMD stock. I can't wait for their next noncompetitive products in CPUs and GPUs.

JawnV6
Jul 4, 2004

So hot ...

FaustianQ posted:

So they're pretty much okay with someone holding onto the AMD64 license that's not Intel, regardless of that company's competitive capability,
Dunno who "they" is, but the mobile segment is where the money and attention is. There's a long history of companies competing over the bargain-segment and learning enough iterating there to eat up the higher-end segments. That's all I'm saying will happen. There's no need to invoke a cabal of shadowy figures.

I mention compilers because I think ISAs matter less and less. Apple could come out with a new phone on a new ISA and hardly mention it to anyone. They control the entire toolchain that gets HLL code onto their mobile platforms. It wouldn't be free, but if some ISA had 50% gains on power or perf they could eat that cost.

FaustianQ posted:

as everyone scrambles to try and obsolete x86-64 to begin with?
This is pretty much happening. Your phrasing acts like it isn't, which is sorta confusing? Intel's doing their best to jam it into the mobile segment but everyone else is counting down the days until ARM scales up to servers.

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

JawnV6 posted:

This is pretty much happening. Your phrasing acts like it isn't, which is sorta confusing? Intel's doing their best to jam it into the mobile segment but everyone else is counting down the days until ARM scales up to servers.
either that or Power stuff starts coming out. I think it's much more likely that the EU funds ARM enough to be a thing in servers rather than IBM somehow making money on Power to continue meaningful development, though.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

JawnV6 posted:

Dunno who "they" is, but the mobile segment is where the money and attention is. There's a long history of companies competing over the bargain-segment and learning enough iterating there to eat up the higher-end segments. That's all I'm saying will happen. There's no need to invoke a cabal of shadowy figures.

This is pretty much happening. Your phrasing acts like it isn't, which is sorta confusing? Intel's doing their best to jam it into the mobile segment but everyone else is counting down the days until ARM scales up to servers.

Uh, I wasn't referencing a shadowy cabal either, just using they in the same sense as you were here

JawnV6 posted:

...A lot of AMD's existence is owed to second sourcing, and the bigger customers would start funding any alternative they could...

JawnV6 posted:

This is pretty much happening. Your phrasing acts like it isn't, which is sorta confusing? Intel's doing their best to jam it into the mobile segment but everyone else is counting down the days until ARM scales up to servers.

I mean fair enough and everything, but why care about AMD64 at all then? It seems pretty drat irrelevant since Intel currently has de facto control over x86-64 in the server market, and is kind of irrelevant in the mobile market. So Intel grabs AMD64, no one cares as ARM marches on and Intel fiddles with an increasingly dead technology, woop, or moves to ARM as well.

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

FaustianQ posted:

I mean fair enough and everything, but why care about AMD64 at all then? It seems pretty drat irrelevant since Intel currently has de facto control over x86-64 in the server market, and is kind of irrelevant in the mobile market. So Intel grabs AMD64, no one cares as ARM marches on and Intel fiddles with an increasingly dead technology, woop, or moves to ARM as well.
cause Windows desktops. if Apple wanted OSX to be ARM64 or Power or homegrown ISA, they could (new Xcode, it compiles to the new thing, they're a big enough market that people will generally do whatever). same thing with Linux, since the vast majority is open source. the massive amount of legacy Windows software that will never be compiled to target a new ISA is the only reason why anyone really cares about x86 as an ISA (vs Intel or AMD processors as generic CPUs at a given price/perf point) at this point.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
Oh come on, this market is saturated with out of work desperate coders, I'm sure we can put them on commission, and start them on recompiling legacy code :kheldragar:


WhyteRyce
Dec 30, 2001

If our financial institutions still run COBOL I doubt anyone bothers to port some Windows 98 greeting card software
