Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Kerbtree posted:

People are already just about making it work on the Surface Pro X; that's ARM with a compat layer. Windows HAL's a thing.

https://youtu.be/BceSt_Mx8Hk

Whatever happened with Intel getting pissy about how x86-on-ARM violated their IP, anyway? Seemed frivolous, but I'm not a big fan of the way IP law works around instruction sets in the first place... evidently, since it went ahead, they must have reached some settlement?


BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

shrike82 posted:

RE ML and CPUs, inferencing is moving to INT8 quantized operations.

GPUs will always be better for training but in production when you're serving inferencing queries to end-users, people shift to CPU since they're cheaper and scale up better.

What? No, that is not necessarily true at all.

The cheaper part is, I suppose, but I work on a system which does production inference on GPU because the product would literally be impossible if limited by CPU inference performance, even if said CPU had AVX512. (Even if you just gave an Intel AVX512 CPU the same TOPS as a GPU, it'd still be impossible - GPUs have much faster memory interfaces and we need that memory bandwidth.)

I shouldn't be any more specific than that, so let's change topic to a completely unrelated thread-relevant public example. Apple has been shipping inferencing acceleration in SoCs since the iPhone X. Face ID needs lots of performance to recognize (or reject) the phone's owner's face in a tenth of a second or so, and also has to be power efficient, so just doing it on the CPU with standard SIMD wasn't really an option. And now that Apple has that hardware in their SoCs, they're finding other ways to use it.

From your description I'm guessing you're talking about some kind of cloud workload where the datasets are relatively small, the amount of inference work done per query is low, and the cost of cloud instances with GPUs is prohibitive. Change any of those parameters and things can flip in favor of doing inference on GPUs or dedicated ML accelerators.
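To put hypothetical numbers on the bandwidth point (nothing here is from our actual system; every figure is an illustrative assumption):

```python
# Rough ceiling on queries/sec when each query streams the full set of
# model weights from memory and nothing else is the bottleneck.

def max_queries_per_sec(model_bytes: float, mem_bandwidth_gbs: float) -> float:
    """Upper bound for a purely memory-bound inference workload."""
    return mem_bandwidth_gbs * 1e9 / model_bytes

MODEL = 2e9  # hypothetical 2 GB of weights

cpu_ceiling = max_queries_per_sec(MODEL, 100)  # ~6-channel DDR4 server CPU
gpu_ceiling = max_queries_per_sec(MODEL, 900)  # HBM2-class GPU

print(f"CPU ceiling: {cpu_ceiling:.0f} queries/s")  # 50
print(f"GPU ceiling: {gpu_ceiling:.0f} queries/s")  # 450
```

Give the CPU all the TOPS you want; if each query has to stream the weights, the memory system sets the ceiling.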

shrike82
Jun 11, 2005

Yah, obviously it'll vary by use-case. I do NLP stuff - we do training on a mix of local GPUs and cloud TPUs, but we've migrated to CPU-based inferencing mostly for scale + cost.

Stubb Dogg
Feb 16, 2007

loskat naamalle
The main problem at work with real-time inference is time spent copying between CPU and GPU memory; it takes lots of hand tuning of the tensor graph's device placements to get latency and GPU utilisation to acceptable levels.

Having a CPU with proper tensor cores that share memory with the CPU cores would be a killer, and we'd switch our inference workloads there immediately.
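Rough numbers on why those copies hurt; every figure below is an illustrative assumption, not a measurement from our workload:

```python
# How much of a latency budget a host<->device copy eats.
# Illustrative numbers only.

def copy_ms(n_bytes: float, link_gbs: float) -> float:
    """One-way transfer time in milliseconds over a link of given GB/s."""
    return n_bytes / (link_gbs * 1e9) * 1e3

ACTIVATIONS = 50e6   # hypothetical 50 MB of tensors per request
PCIE3_X16 = 12.0     # ~12 GB/s achievable on PCIe 3.0 x16

one_way = copy_ms(ACTIVATIONS, PCIE3_X16)
print(f"one-way copy: {one_way:.2f} ms")      # ~4.17 ms
print(f"round trip:   {2 * one_way:.2f} ms")  # ~8.33 ms of, say, a 10 ms SLA
```

That's most of a real-time budget gone before any math happens, which is why the device placement tuning matters so much.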

Dr. Fishopolis
Aug 31, 2004

ROBOT

Stubb Dogg posted:

Having a CPU with proper tensor cores that share memory with the CPU cores would be a killer, and we'd switch our inference workloads there immediately.

I guess that answers my question from a page ago. I wonder who will do it first.

repiv
Aug 13, 2009

Isn't Intel's AMX basically tensor cores on the CPU?

https://en.wikichip.org/wiki/x86/amx

It's supposed to ship with Sapphire Rapids next year
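For the curious, the core operation a unit like AMX's TMUL accelerates is an int8 matrix multiply accumulating into int32. A plain-Python sketch of the semantics, not the actual tile intrinsics or tile shapes:

```python
# What an int8 matrix unit computes at its core: multiply int8 matrices
# and accumulate into int32, so no intermediate product overflows.

def int8_matmul_acc32(A, B):
    """C[i][j] = sum_k A[i][k]*B[k][j]; int8 inputs, int32 accumulator."""
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for k in range(inner):
            a = A[i][k]
            for j in range(cols):
                C[i][j] += a * B[k][j]  # widening accumulate
    return C

A = [[127, -128], [1, 2]]   # int8 range endpoints
B = [[1, 0], [0, 1]]        # identity
print(int8_matmul_acc32(A, B))  # [[127, -128], [1, 2]]
```

The hardware does this a tile at a time; the point is just that the accumulator is wider than the inputs, so nothing overflows mid-sum.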

shrike82
Jun 11, 2005

I've spent a fair bit of time recently on transferring GPU-trained models to either CPUs or embedded platforms with cut-down tensor cores for inferencing. They're tenable for a lot of use-cases. The thrust of research and work in this area is to compress or distill the models in an efficient manner without losing much if any performance. Common techniques include throwing out precision (FP32->FP16->INT8), pruning them (throwing out swathes of weights), or distilling them (using the trained model as a teacher to transfer its knowledge to a smaller model with an efficient architecture).

Research has shown that a good way to build ML models is to train large and deep models then compress them down to fit your inferencing platform.
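A toy sketch of the FP32->INT8 step, assuming simple symmetric per-tensor quantization (real toolchains add per-channel scales, calibration data, quantization-aware training, etc.):

```python
# Pick a scale from the weight range, round each weight to int8,
# then measure what reconstruction loses. Symmetric per-tensor scheme.

def quantize(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

w = [0.51, -1.27, 0.003, 0.89]       # toy fp32 weights
q, s = quantize(w)
w2 = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(w, w2))
print(q)                             # int8 codes
print(f"max abs error: {err:.4f}")   # bounded by scale/2
```

The reconstruction error is bounded by half the scale, which is why well-behaved weight distributions survive INT8 with little accuracy loss.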

Beef
Jul 26, 2004
Chiming in with some GPU vs CPU NN training experience. GPUs have a distinct advantage when training deep networks with convolution layers, because a lot of the VRAM-resident data gets reused and the computational intensity (flops/byte) is high. However, there are a ton of critically useful models that are shallow feedforward networks. On top of that, many real-world datasets are sparse (take a shot every time someone says 'embedding' at a conference). In those cases, the GPU's training advantage largely becomes one of existing software and frameworks. One major advantage of CPU training is that you don't have to fight to make your model fit the relatively limited GPU RAM.
It was also nice not having to fight for the handful of GPU nodes in the on-residence cluster :coal:
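To put toy numbers on the flops/byte point (illustrative shapes, fp32, assuming each tensor is read from memory once):

```python
# A conv layer reuses its weights across every output position;
# a batch-1 dense layer touches each weight exactly once.

def conv_intensity(h, w, cin, cout, k):
    flops = 2 * h * w * cin * cout * k * k                # MACs x2 over the output map
    bytes_ = 4 * (cin * cout * k * k + h * w * cin + h * w * cout)  # weights + in + out
    return flops / bytes_

def dense_intensity(n_in, n_out, batch=1):
    flops = 2 * batch * n_in * n_out
    bytes_ = 4 * (n_in * n_out + batch * (n_in + n_out))
    return flops / bytes_

print(f"3x3 conv, 56x56x256: {conv_intensity(56, 56, 256, 256, 3):.0f} flops/byte")  # ~421
print(f"dense 1024->1024, batch 1: {dense_intensity(1024, 1024):.2f} flops/byte")    # ~0.50
```

A conv layer sits comfortably in compute-bound territory; a batch-1 dense layer is hopelessly memory-bound, so the GPU's flops advantage barely matters there.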

Beef fucked around with this message at 10:04 on Jul 27, 2020

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Beef posted:

Chiming in with some GPU vs CPU NN training experience. GPUs have a distinct advantage when training deep networks with convolution layers, because a lot of the VRAM-resident data gets reused and the computational intensity (flops/byte) is high. However, there are a ton of critically useful models that are shallow feedforward networks. On top of that, many real-world datasets are sparse (take a shot every time someone says 'embedding' at a conference). In those cases, the GPU's training advantage largely becomes one of existing software and frameworks. One major advantage of CPU training is that you don't have to fight to make your model fit the relatively limited GPU RAM.
It was also nice not having to fight for the handful of GPU nodes in the on-residence cluster :coal:

How many of the nodes are new enough to have the AVX512 CPU instructions though? Or are you in private industry and not a university that's still running Westmere in production? https://www.vanderbilt.edu/accre/technical-details/

WhyteRyce
Dec 30, 2001

Some fallout at Intel
https://www.businesswire.com/news/home/20200727005752/en/Intel-Technology-Organization

Murthy is out which can't be a bad thing

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Yeah, there's really no way Intel's board should be looking at the current state of things and not trying to figure out whose heads need to go on the chopping block.

WhyteRyce
Dec 30, 2001

I like how the press release didn't even pretend he wasn't fired

WhyteRyce
Dec 30, 2001

DrDork posted:

Yeah, there's really no way Intel's board should be looking at the current state of things and not trying to figure out whose heads need to go on the chopping block.

It probably didn't take them long to figure this out, and you knew the board wasn't happy with him when they spent so long looking for a replacement CEO (continually ignoring the theoretically qualified candidate sitting right there under the CEO, continually looking for anyone else) before finally settling on the guy who had previously said he didn't want the job. And honestly, given his time in the job, giant compensation package, bluster and big talk, the amount of change and consolidation he oversaw, and his continued poor technical track record, if there was one person who deserved an unceremonious boot, it was him

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

> Kelleher has been head of Intel manufacturing, where she ensured continuous operations through the COVID-19 pandemic while increasing supply capacity to meet customer needs and accelerating the ramp of Intel’s 10nm process.

They just promoted an underling; what exactly does that guarantee, given that the manufacturing senior leadership as a whole was clearly incompetent?

WhyteRyce
Dec 30, 2001

Murthy was bad and in charge of a whole bunch of things that they just unconsolidated. They still have issues and the scrutiny needs to go down farther in the ranks but getting rid of someone very bad at the top is never a bad thing

Also, for all we know Murthy got fired for lying or fibbing about 7nm up until the point he couldn't hide it any longer and BS had to go out and publicly get reamed for the delay and I'm sure bosses hate getting embarrassed covering for their employee failures

WhyteRyce fucked around with this message at 00:09 on Jul 28, 2020

Beef
Jul 26, 2004

Twerk from Home posted:

How many of the nodes are new enough to have the AVX512 CPU instructions though? Or are you in private industry and not a university that's still running Westmere in production? https://www.vanderbilt.edu/accre/technical-details/

Yeah, most are Broadwells and Haswells; I did see some improvements testing with Skylake AVX512. Even if the first layer is a sparse computation, the rest is the usual dense matrix mult that does well under AVX512.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



WhyteRyce posted:

BS had to go out and publicly get reamed for the delay and I'm sure bosses hate getting embarrassed covering for their employee failures

If he only just found out about any issues/delays, then he's a bad boss.

WhyteRyce
Dec 30, 2001

SourKraut posted:

If he only just found out about any issues/delays, then he's a bad boss.

BS is a finance guy, not a tech guy. Deferring to the tech experts that report to him is probably all he can do, and what a person like him should do.

Not that I'm saying BS is without fault, but he's a bean counter, and the bean counting has been going pretty well. If a technical person is needed to pull Intel out of these troubles, then the board should have hired a technical person

WhyteRyce fucked around with this message at 01:02 on Jul 28, 2020

Cygni
Nov 12, 2005

raring to post

thinkin bout that interview of Steve Jerbs on the death of Xerox by "toner heads" again

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

WhyteRyce posted:

BS is a finance guy, not a tech guy. Deferring to the tech experts that report to him is probably all he can do, and what a person like him should do.

I can see that cutting both ways. I mean, yeah, at some point you gotta trust that your department heads have the experience and information to give you reasonable reports. "We've run into difficulties but I'm confident we can overcome them in x months" is the sort of info the CEO wants and can work off of, rather than an in-depth explanation of node mechanics that probably takes a PhD to fully understand.

On the other hand, when you're now years behind, you'd think there'd have been a bunch of those sorts of meetings, and at some point you'd expect the CEO to start asking some other people whether said department head is blowing smoke up his rear end or not instead of just taking him at his word.

Of course, none of us have any actual idea how any of that worked out--best we can do is wave good bye and hope Kelleher does better.

wet_goods
Jun 21, 2004

I'M BAAD!

wet_goods posted:

The bad MGMT is at the top of the manufacturing org, plus a few tiers of awful VPs there, not the C-level. The biggest sin of the C-level people is that they didn't fire about three tiers of managers in the manufacturing org when 10 failed, long before Bob was CEO. Bob should 100% axe people at the top of manufacturing now that 7nm is going to be a zoo, because that's on him.

One more thing, it's the c level and sales people that have kept things growing at actually a really good rate for the past few years to cover for manufacturing gently caress ups.

Self quoting cause I am a prophet

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



DrDork posted:

I can see that cutting both ways. I mean, yeah, at some point you gotta trust that your department heads have the experience and information to give you reasonable reports. "We've run into difficulties but I'm confident we can overcome them in x months" is the sort of info the CEO wants and can work off of, rather than an in-depth explanation of node mechanics that probably takes a PhD to fully understand.

On the other hand, when you're now years behind, you'd think there'd have been a bunch of those sorts of meetings, and at some point you'd expect the CEO to start asking some other people whether said department head is blowing smoke up his rear end or not instead of just taking him at his word.

Of course, none of us have any actual idea how any of that worked out--best we can do is wave good bye and hope Kelleher does better.

You said it a lot better than I would have. To me it comes off as just scapegoating, and while probably to some extent justified, at some point the buck stops with the CEO, and his head will have to roll.

Josh Lyman
May 24, 2009


Quarantine and the anticipation of building a Zen 3/RTX 3000 machine has me catching up on the last 8 years of CPU progress. A noob question about PCIe though: I'm still using a 3570K/Z77/GTX 970 machine. Ivy Bridge supported 16 PCIe 3.0 lanes, Z77 supports a single x16 device, and the 970 is a PCIe 3.0 x16 device. Does this mean that any other PCIe device, like my wireless card, cuts the video card down to x8? Similarly, does this mean that with the TOP GAMING CPU 9900K/10900K, which only support 16 PCIe 3.0 lanes, if you use an add-in card or NVMe SSD then your $1000 RTX 2080 Ti also only gets 8 lanes? It seems like Zen 2 supporting 24 lanes would help in this regard.

Josh Lyman fucked around with this message at 01:11 on Jul 28, 2020

WhyteRyce
Dec 30, 2001

DrDork posted:


On the other hand, when you're now years behind, you'd think there'd have been a bunch of those sorts of meetings, and at some point you'd expect the CEO to start asking some other people whether said department head is blowing smoke up his rear end or not instead of just taking him at his word.


7nm wasn't known to be this badly delayed until now, and this was a pretty swift response once the information became public. The 7nm delay is not the 10nm delay, and the accepted understanding at most levels at Intel before BS took over seemed to be that the failures and learnings from 10 were addressed in 7. BS could have been paranoid and continued to drill but, again, he doesn't have the technical chops to really back that up or go to war with the entire company.

Now, an alternative is that BS did know about the delays but sat on it until he couldn't any longer and fed Murthy to the wolves, but I don't think BS is the type of guy to open himself up to that kind of scrutiny and prosecution.

Not excusing BS and he deserves some blame but you don't put a finance guy in charge to fix technical problems at a technical company

WhyteRyce fucked around with this message at 01:17 on Jul 28, 2020

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"
Intel chipsets have separate lanes for GPUs. The CPU controls the GPU's lanes and allocates 16 *dedicated* lanes. The Z390/490 chipset then has 24 PCIe lanes to itself to allocate to system components and add-in cards that aren't the GPU. So when you see "24 PCIe lanes," it should really be "16+24," but they don't say that because then people would assume they have 40 PCIe lanes to play with.

Josh Lyman
May 24, 2009


BIG HEADLINE posted:

Intel chipsets have separate lanes for GPUs. The CPU controls the GPU's lanes and allocates 16 *dedicated* lanes. The Z390/490 chipset then has 24 PCIe lanes to itself to allocate to system components and add-in cards that aren't the GPU. So when you see "24 PCIe lanes," it should really be "16+24," but they don't say that because then people would assume they have 40 PCIe lanes to play with.
Okay, so in my case, my 3570K has all 16 PCIe 3.0 dedicated to the GPU, then the Z77 chipset has 8 additional PCIe 2.0 lanes for add-in cards?

Does AMD also have separate GPU + add-in/NVMe lanes? 24 seems like an unusual number in this case.

karoshi
Nov 4, 2008

"Can somebody mspaint eyes on the steaming packages? TIA" yeah well fuck you too buddy, this is the best you're gonna get. Is this even "work-safe"? Let's find out!

Cygni posted:

thinkin bout that interview of Steve Jerbs on the death of Xerox by "toner heads" again

https://www.youtube.com/watch?v=yraBG1s4gm8 if you're curious.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

WhyteRyce posted:

Murthy was bad and in charge of a whole bunch of things that they just unconsolidated. They still have issues and the scrutiny needs to go down farther in the ranks but getting rid of someone very bad at the top is never a bad thing

Also, for all we know Murthy got fired for lying or fibbing about 7nm up until the point he couldn't hide it any longer and BS had to go out and publicly get reamed for the delay and I'm sure bosses hate getting embarrassed covering for their employee failures

He was hired in 2015, so the initial 10nm is probably not on him, but everything from 2017 and later sure is

Anyone have details on what went so wrong?

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

EUV hard. 10nm too ambitious. Culture rot.

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map
I'm kind of feeling some meritocracy vibes here...

PC LOAD LETTER
May 23, 2005
WTF?!

WhyteRyce posted:

7nm wasn't known to be this badly delayed until now, and this was a pretty swift response once the information became public.
Huuuh?

7nm was originally supposed to be 2017-18!

It absolutely was as badly delayed as Intel's 10nm (which was supposed to be out in 2016)!

WhyteRyce posted:

The 7nm delay is not the 10nm delay and the accepted agreement at most levels at Intel before BS took over seemed to be that the failures and learnings from 10 were addressed in 7.
That was what Intel was saying but there were rumors 7nm was having issues too.

Back when the first leaks about their 10nm process were coming out in mid/late 2018 or so there was some mention that their 7nm process was having problems too.

What those problems were, no one knew, and those rumors weren't exactly detailed anyway (supposedly they made some of the same mistakes they did with 10nm, i.e. excessively aggressive economic, performance, and transistor density targets... remember, the team doing the work was different but answered to the same management as the 10nm team). So most seem to have forgotten or ignored them, but there were hints that Intel's entire lineup of future processes might be messed up and need serious, if not drastic hail-mary, efforts to fix.

WhyteRyce
Dec 30, 2001

PC LOAD LETTER posted:

Huuuh?

7nm was originally supposed to be 2017-18!

It absolutely was as badly delayed as Intel's 10nm (which was supposed to be out in 2016)!

That was what Intel was saying but there were rumors 7nm was having issues too.

Back when the first leaks about their 10nm process were coming out in mid/late 2018 or so there was some mention that their 7nm process was having problems too.

What those problems were, no one knew, and those rumors weren't exactly detailed anyway (supposedly they made some of the same mistakes they did with 10nm, i.e. excessively aggressive economic, performance, and transistor density targets... remember, the team doing the work was different but answered to the same management as the 10nm team). So most seem to have forgotten or ignored them, but there were hints that Intel's entire lineup of future processes might be messed up and need serious, if not drastic hail-mary, efforts to fix.

That's an old slide and not representative of what Intel was saying publicly for some time. Intel hasn't publicly said or admitted to 7nm being massively delayed and I think the future roadmap plans were based around 7nm not being pushed out at least a year. This news is brand new and fucks up their already hosed up plans

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

The analyst line was that the problems on 10 were because they weren’t on EUV and that 7 was significantly different tech so the delays around scaling 10 weren’t going to impact the 7 schedule beyond normal.

PC LOAD LETTER
May 23, 2005
WTF?!

WhyteRyce posted:

That's an old slide and not representative of what Intel was saying publicly for some time.
Yeah but it was the original release date right?

Doesn't really matter how old it was if it was an official roadmap.

WhyteRyce posted:

Intel hasn't publicly said or admitted to 7nm being massively delayed and I think the future roadmap plans were based around 7nm not being pushed out at least a year.
Yeah initially they did the same thing with 10nm back in 2018 when the rumors first started to surface and look how that played out. Obviously you can't go believing every rumor but since the 10nm rumors panned out and the 7nm rumors supposedly came from the same source I don't think you can dismiss them.

PCjr sidecar posted:

The analyst line was that...
The original rumors weren't really all that clear about what the problem was. Just that they were huge and basically shat all over Krzanich and blamed him and the rest of the board for meddling where they shouldn't to please shareholders.

PC LOAD LETTER fucked around with this message at 04:05 on Jul 28, 2020

WhyteRyce
Dec 30, 2001

PC LOAD LETTER posted:

Yeah but it was the original release date right?


What does the original road map in this context mean? The road map changed. Intel said it identified and created solutions or plans to mitigate 7nm issues as well as explained how things would be different. The recent roadmap is the one BS has been pitching and Murthy had full ownership of delivering. Who cares about whatever roadmap stuff BK pitched or promised and tossed out in the context of what Swan is responsible for.

quote:

Yeah initially they did the same thing with 10nm back in 2018 when the rumors first started to surface and look how that played out. Obviously you can't go believing every rumor but since the 10nm rumors panned out and the 7nm rumors supposedly came from the same source I don't think you can dismiss them.

BK also most likely got canned over his 10nm response and lying about the health and status of it. I really find it hard to believe that BS would willingly put himself into that position. He's appeared to be more transparent about things, possibly because the Board said no more of that bullshit.

I guess to go back to my original point, so as not to lose the forest for the trees: there probably needs to be a deeper inspection of what went wrong in the organization, and further restructurings, but Murthy absolutely should be fired, and saying he was just a sacrificial goat ignores the amount of compensation and responsibility he was given. Bob isn't the right person to fix the huge technical mess this has become, but he has kept the company financials running pretty well and did take swift action when this news became public, so he's probably doing what I'd expect a finance guy to do

WhyteRyce fucked around with this message at 04:33 on Jul 28, 2020

PC LOAD LETTER
May 23, 2005
WTF?!

WhyteRyce posted:

What does the original road map in this context mean?
Well as I was pointing out: 7nm was as delayed as 10nm. I honestly don't see how you can look at this any other way.

Yeah they changed the road maps but so what?

If AMD had released a roadmap last year saying Zen 3 was going to be out later this year, and then released another one saying 'nope, we changed it, now it's 2021', everyone including me would call that a delay too.

WhyteRyce posted:

Who cares about whatever roadmap stuff BK pitched or promised and tossed out in the context of what Swan is responsible for.
Since the development of their 7nm started in something like 2015 and ran for quite a while under BK, I'd say it matters quite a bit.

Intel is still trying to cope with getting 10nm out, right? And that started development in 2014 or so.

Lots of things, mistakes included, get 'baked' into products from the get-go, so even though BK is long gone, the fallout from his decisions will still be felt for a long time to come. Hell, even their 5nm process development started under BK I think, so even after 7nm they might still be dealing with major manufacturing issues for all we know.

WhyteRyce posted:

I really find it hard to believe that BS would willingly put himself into that position.
Lots of people had a hard time believing Intel hosed up their 10nm process too or that there would be problems with 7nm as well.

WhyteRyce posted:

saying he was just a sacrificial goat
When did I say this?

Quote me where I said exactly this.

To be clear: I'm not saying this at ALL and I don't have a clue how you got that out of what I posted.

FWIW too I don't think BS can do much of anything to speed up or magic away Intel's process troubles. Everything I've read about process development seems to suggest that doing anything at all new is incredibly difficult, risky, expensive, and takes lots of time (years) and so careful planning is typically required 3-5yr in advance to get things done.

Because of that mistakes made early on can result in multi year delays to get fixed. That sure does seem to be the situation Intel is in here with 10nm and I see no reason to be terribly optimistic about their 7nm process that they were once so sure of.

quote:

Intel is "very pleased" with the progress it is making on its 7-nanometer manufacturing technology, which might come as a surprise given that it's still not shipping 10-nanometer 'Cannon Lake' processors in volume, and won't until the end of next year. As it turns out, Intel has a separate team working on 7nm.

Dr. Murthy Renduchintala, chief engineering officer at Intel and head of the company's technology, systems architecture, and client group, made some interesting comments about 10nm and 7nm at Nasdaq's 39th Investor Conference.

"7 nanometers for us is a separate team and a largely separate effort. And we are quite pleased with our progress on 7, in fact very pleased with our progress on 7, and I think that we have taken a lot of lessons out of the 10-nanometer experience as we defined that and defined a different optimization point between transistor density, power and performance, and schedule predictability," Dr. Renduchintala said.

WhyteRyce
Dec 30, 2001

PC LOAD LETTER posted:

Well as I was pointing out: 7nm was as delayed as 10nm. I honestly don't see how you can look at this any other way.


Again, because it's in the context of Bob Swan and Murthy's current commitments to investors. The roadmap changed multiple times, but the current roadmap (the one they just said would have a 1-year delay of 7nm, which sent the stock down like 15%, got Bob in a whole poo poo of hot water with investors, and got Murthy fired) is what we are talking about. Previous roadmap changes already happened and already had their effect on investors and the stock. Bob and Murthy completely owned this one. From a historical, technical perspective you are correct, but you are also missing the point. This is new information, a new delay, and what caused Murthy to rightfully, finally get the boot.

I think my whole argument is that this delay is new information and he fired the technical person in charge of this pretty promptly after this came out, which is about as good as you can expect from a finance guy in regards to a technical problem. Bob isn't the guy to fix this but he's doing about as well as you can expect a finance guy to do in the role and if you don't like it then go tell the Board off for picking him

quote:

Lots of people had a hard time believing Intel hosed up their 10nm process too or that there would be problems with 7nm as well.

False equivalence? Bob's behavior and level of transparency have nothing to do with hard technical problems and the limits of physics so I don't know why you are bothering to try and equate the two.

quote:

When did I say this?

Quote me where I said exactly this.

I never said you said this. Someone else said they felt like Murthy was being scapegoated and I think that undersells the impact, responsibility, and compensation he had. He's not the only reason why 7nm is late but he has his own treasure trove of failures to own up to and was rightfully let go

WhyteRyce fucked around with this message at 05:41 on Jul 28, 2020

PC LOAD LETTER
May 23, 2005
WTF?!

WhyteRyce posted:

Again because it's in the context of Bob Swan and Murthy's responsibilities.
Their individual responsibilities aren't in question at all by me in my OP, nor do I see how that would matter, given that the roadmap I linked was also official.

Yes, I know they just released info about new delays recently; I'm pointing out that it was already delayed and has been for a long, long time. That the old roadmap is old doesn't suddenly mean it no longer counts.

WhyteRyce posted:

False equivalence?
Oooor I'm pointing out people's beliefs in general about what Intel as a company could or would do don't much matter?

WhyteRyce posted:

Bob's behavior and level of transparency have nothing to do with hard technical problems and the limits of physics so I don't know why you are bothering to try and equate the two.
I wasn't and I don't know why you think I was trying to.

Quote me exactly where I said BS was to blame for technical issues with Intel's 10/7nm processes.

WhyteRyce posted:

I never said you said this
But your reply was only to me in a post that was quoting only me and you sure weren't clear at all that you were talking to someone else in thread.

Are you drunk posting or something??

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



WhyteRyce posted:

Again, because it's in the context of Bob Swan and Murthy's responsibilities. The roadmap changed multiple times, but the current roadmap (the one they just said would have a 1-year delay of 7nm, which sent the stock down like 15%, got Bob in a whole poo poo of hot water with investors, and got Murthy fired) is what we are talking about. Previous roadmap changes already happened and already had their effect on investors and the stock. From a historical, technical perspective you are correct, but you are also missing the point. This is new information, a new delay, and what caused Murthy to rightfully finally get the boot.

You're using strawmen to try and deflect criticism of Bob Swan's leadership away for some unknown reason, when who the hell cares whether Bob Swan gets criticized or not? No-one is arguing that Bob Swan should have known more than Murthy with regards to the issues at hand or that he should personally be involved on the technical side for solving it.

You got proven wrong on your arguments about the 10nm/7nm timelines, and instead of just acknowledging it, you keep trying to argue that it wasn't a delay because Intel updated their roadmaps/etc. But no one except you is associating it with impacts on the stock market, investors, etc. And if Intel initially indicates that 10nm is out in 2016 and 7nm would be out in 2017, as was shown, and then updates it between the initial presentation and those dates to show it's now 2018, 2019, 2020, whatever, it doesn't matter what the reason for the delay is, because it is still a delay.

And for reference, last week's 7 nm update was the 4th roadmap/update that Intel has publicly provided guidance on since Bob Swan took over as CEO. Which isn't to say that he is to blame for it, but he should officially be on the hot seat for it, and if there are further delays, he should be held even more accountable because he can be involved in project updates at a high level without having to have any detailed technical knowledge of the challenges that are occurring.

WhyteRyce posted:

False equivalence? Bob's behavior and level of transparency have nothing to do with hard technical problems and the limits of physics so I don't know why you are bothering to try and equate the two.
Ultimately it doesn't matter what you believe, because as you continue to like to point out, he is a "finance guy", and so he's going to strive to maximize shareholder value, just as BK did... until it cost him. Which isn't to say that BS will do the same, and hopefully he demands and provides full transparency going forward.


WhyteRyce posted:

I never said you said this and other people than you have been posting in here about this
That was me, and I felt it was justified to some extent (I even said as much by noting he deserved it), but this is a problem that goes far deeper than Murthy alone. Maybe Bob Swan can start by changing Intel's toxic work culture, which has driven everyone I know who used to work on their fabrication engineering teams away over the last 8 years.

Ultimately, you seem to have some type of emotional investment in defending Bob Swan/Intel, but a delay is a delay is a delay, whether due to technical reasons, economic reasons, force majeure, whatever your preference. And if a CEO comes into the role having either known beforehand or learned afterward that one of the issues that caused his predecessor's ouster was the failure to deliver 10nm, I would certainly think the new CEO would have a pretty vested interest in keeping track of how it is progressing, and not just via a lunchroom "Hey Murthy, how's 7nm going?" "Good." "Cool, thanks for the update, I'll let the investors know!"


punk rebel ecks
Dec 11, 2010

A shitty post? This calls for a dance of deduction.
It's crazy how things have managed to completely flip, but the timeline for it to be possible checks out. I can't believe it's been almost ten years since the Bulldozer fiasco. I wonder if Intel is in a worse position now than AMD was in 2011?
