Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

canyoneer posted:

The PE pushed back and said that's stupid, because you're going to want these people back in 8 months and they have a pretty unique skillset, I'm not going to go along with this. It continued to heat up and BK took a swing at the guy. They both landed a few punches before the other people in the room pulled them apart, nobody got fired, and life went on.

literally fistfighting the executives to keep your people is SS+ tier management.

WhyteRyce
Dec 30, 2001

The “you’re going to have to rehire them in 8 months” thing is extra meaningful: when BK became CEO and did the massive ACT layoffs, he specifically took steps to make rehiring difficult, and explicitly stated in his talks that this was a continual Intel problem and that they were implementing explicit “no rehire” policies

Dude was a petty little poo poo and probably still smarting from that confrontation

Beef
Jul 26, 2004

movax posted:

Am I correct in quoting (likely from someone in this thread) that the first few seconds of first power-on of Nehalem executed more clock cycles / instructions than had ever been simulated for the design? Or was that my old boss (ex-Intel) exaggerating slightly?

I’ve done logic design / validation before, but nothing at the scale of a modern CPU or experience with leading-edge validation tools, so I’m curious.

Sounds about right. Cycle-accurate simulators are extremely, extremely slow.

Architects relied almost exclusively on cycle-accurate simulators for design development, testing, and performance measurement. You can only simulate a few mil instructions in any kind of actionable time with those kinds of techniques, which also limits you to single-core traces of SPEC benchmarks and the like.
Things have changed, slowly; it's a tanker that has been turning for a decade now, though Jim Keller and Pat helped accelerate it. Intel bought Simics for system/platform simulation and has developed some application-level simulators in-house (in the style of gem5/Carbon/Sniper). Of course, those run off specs provided (or not) by the architecture teams, which can be divorced from reality. And that kind of high-level simulator still won't save you from some reset trace being shorted to ground. But they can be used to guide design based on real applications: does it make sense to double the L3 cache here? Add HBM? What about this kind of prefetcher?

RTL simulators are used after the design is frozen. They are also slow as balls and exceedingly expensive, but they can be used to inspect waveforms and validate individual blocks or state machines. Anecdotally, I find that still lets a ton of stupid design bugs through, ranging from hard-to-find edge cases when tons of things are in flight and interact in a weird way, to dumb-as-balls "oops, we did not test DMA with size 0, why would anyone want to do that?". I'm not on the validation side of any products, so I do not know if any other tools are used for that.
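To put rough numbers on "a few mil instructions in any kind of actionable time" — the throughput figures below are assumptions for illustration, not anything from the post:

```python
sim_ips = 10_000        # assumed cycle-accurate simulator throughput, instr/sec
trace = 5_000_000       # "a few mil instructions"

minutes_per_trace = trace / sim_ips / 60           # ~8.3 minutes for one run
print(f"one trace: ~{minutes_per_trace:.1f} min")

# ...but a design-space sweep multiplies that out fast
configs, benchmarks = 50, 20
campaign_days = configs * benchmarks * trace / sim_ips / 86_400
print(f"{configs} configs x {benchmarks} benchmarks: ~{campaign_days:.1f} days of simulation")
```

One trace is tolerable; sweeping configurations against a benchmark suite at those rates is why architects lean on short single-core traces.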

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Does "validation" in Intel-speak refer to pre-silicon (i.e. SystemVerilog etc.) or system-level test? Some places distinguish between “verification” and “validation”

Beef
Jul 26, 2004
idk. I mostly just hear "pre-silicon validation" being used.

Mind that pre-silicon validation tools are also used during power-on to reproduce and root cause hardware bugs.

WhyteRyce
Dec 30, 2001

Intel uses both terms, pre-silicon and post-silicon validation. In my time they were very separate roles that did very separate things, although some post-silicon engineers would frequently use giant emulators or FPGA setups before power-on as part of pre-silicon work. Smart leaders said “what if pre-silicon and post-silicon used the same tools to do testing” and forced you to devote all resources to making this happen

It’s just a matter of scale, right? On emulators/FPGAs you do a few cycles, and for post-si you just crank the dial, like that dril tweet

WhyteRyce fucked around with this message at 00:09 on Feb 9, 2023

Beef
Jul 26, 2004
gently caress it, we'll fix it in post.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
https://twitter.com/VideoCardz/status/1623380010919665688?t=og9jSc2Qzqg1Llj9PLpBzw&s=19

Rip rocket lake, we barely knew you

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

movax posted:

Am I correct in quoting (likely from someone in this thread) that the first few seconds of first power-on of Nehalem executed more clock cycles / instructions than had ever been simulated for the design? Or was that my old boss (ex-Intel) exaggerating slightly?

I’ve done logic design / validation before, but nothing at the scale of a modern CPU or experience with leading-edge validation tools, so I’m curious.

I worked on SoC FPGA emulation for a few years and we used to joke about exactly this. It's exaggerated in some ways; the total number of cycles simulated should be reasonably large by tapeout. But it's also serious in others. Have you ever simulated ten continuous seconds worth of the chip being powered on? One hundred? One thousand? RTL sim is so slow that out of necessity most test cases are short and very directed. It's common to just slam values into config registers through a "backdoor" system built for sim so that regression tests can hit the ground running without needing to boot anything. It's hard to use RTL simulation tools to run all of a large, complex system over a long period of time.

Even in cases where the FPGA emulator board could only run core clocks at single-digit MHz, it was several orders of magnitude faster, which greatly expanded the kind of things you could test. FPGA isn't good at the post-synthesis logic being identical to post-synthesis ASIC logic, because let me tell you, FPGA synthesis tools get up to Some Bullshit. However, it was great for things like a designer undersizing a FIFO leading to the occasional performance hiccup in real world use cases. We'd catch that kind of problem easily in FPGA when it might have taken weeks in RTL sim, provided we'd even known we should go looking.
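The "several orders of magnitude" is easy to put numbers on. A sketch with assumed rates — the single-digit-MHz figure is from the post above, the RTL-sim rate and target clock are guesses:

```python
design_hz = 4e9        # assumed target clock of the real silicon
fpga_hz = 5e6          # "single-digit MHz" emulation clock from the post
rtl_sim_hz = 1e3       # assumed effective RTL-sim rate, cycles/sec

cycles = design_hz * 10                        # ten seconds of chip time

fpga_hours = cycles / fpga_hz / 3600           # ~2.2 hours on the emulator
rtl_sim_days = cycles / rtl_sim_hz / 86_400    # ~463 days in RTL sim
print(f"FPGA: {fpga_hours:.1f} h vs RTL sim: {rtl_sim_days:.0f} days")
```

Hours versus more than a year for the same ten seconds of chip time is why "have you ever simulated ten continuous seconds?" is a fair question.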

JawnV6
Jul 4, 2004

So hot ...

WhyteRyce posted:

I don't know how many times I've had to hear designers complaining about validation people just coming up with these implausible, unrealistic scenarios that waste everyone's time when it's all emulation or A0 testing, and then immediately start bitching about coverage gaps when it's a high-priority customer issue blocking launch

Those complaints about validation wasting time and resources got more traction when PC sales started to slip and managers got desperate to find some way to maintain profit margins above all else.
I was barely in the headcount discussions when I left, but yeah, everyone acts like validation is a wank fest until it's not. And it was certainly after my time, but I'm guessing the Spectre/Meltdown conversation took some cheap shots at validation for not figuring it all out.

movax posted:

Am I correct in quoting (likely from someone in this thread) that the first few seconds of first power-on of Nehalem executed more clock cycles / instructions than had ever been simulated for the design? Or was that my old boss (ex-Intel) exaggerating slightly?

I’ve done logic design / validation before, but nothing at the scale of a modern CPU or experience with leading-edge validation tools, so I’m curious.
idk if I've told it here, but one of my early career-limiting moves was making exactly this joke at a team dinner, in front of the boss's boss's boss (an SVP now). We'd just slipped tapeout by 4 weeks and I said something like "but it's fine, post-silicon will catch up in a few seconds." She politely chuckled, then dismantled the entire thing (visibility & controllability): you don't learn nearly as much from a lump of metal you can't talk to as from a complete trace of every signal in the design.

10 seconds of a 4 GHz chip is 40 billion cycles. Smaller sections of the chip ("clusters") will have their own simulation environments; a reasonable test there might be 100,000 cycles. 400,000 tests doesn't seem astronomical over a multi-year design cycle? But with a few dozen chips in the lab, by the 1-day mark I think it's almost certain you're eclipsing the pre-si cycle count. Possibly even running a meaningful instruction or two!
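That arithmetic checks out; a quick sketch (the chip count is an assumed "few dozen"):

```python
freq = 4e9                                 # 4 GHz
post_si_cycles = 10 * freq                 # 10 s on one chip: 40 billion cycles

tests, cycles_per_test = 400_000, 100_000
pre_si_cycles = tests * cycles_per_test    # also 40 billion: parity at ~10 s

# with a few dozen parts in the lab, one day dwarfs the pre-si total
chips, seconds_per_day = 36, 86_400
lab_day_cycles = chips * freq * seconds_per_day
print(f"one lab day / pre-si total: {lab_day_cycles / pre_si_cycles:,.0f}x")
```

So ten seconds of silicon matches the entire hypothetical pre-si campaign, and a day in the lab exceeds it by five orders of magnitude.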

BobHoward posted:

I worked on SoC FPGA emulation for a few years and we used to joke about exactly this. It's exaggerated in some ways; the total number of cycles simulated should be reasonably large by tapeout. But it's also serious in others. Have you ever simulated ten continuous seconds worth of the chip being powered on? One hundred? One thousand? RTL sim is so slow that out of necessity most test cases are short and very directed. It's common to just slam values into config registers through a "backdoor" system built for sim so that regression tests can hit the ground running without needing to boot anything. It's hard to use RTL simulation tools to run all of a large, complex system over a long period of time.

Even in cases where the FPGA emulator board could only run core clocks at single-digit MHz, it was several orders of magnitude faster, which greatly expanded the kind of things you could test. FPGA isn't good at the post-synthesis logic being identical to post-synthesis ASIC logic, because let me tell you, FPGA synthesis tools get up to Some Bullshit. However, it was great for things like a designer undersizing a FIFO leading to the occasional performance hiccup in real world use cases. We'd catch that kind of problem easily in FPGA when it might have taken weeks in RTL sim, provided we'd even known we should go looking.
In my experience, doing a full gate-level simulation of boot happens... once. Even then it cheats, because letting it run for 18 hours to simulate hundreds of thousands of really boring cycles where it's just waiting for a PLL to lock is a waste.

canyoneer
Sep 13, 2005


I only have canyoneyes for you
HR executive goes to doctor. Says she's depressed. Says life seems harsh and cruel. Says she feels all alone in a threatening business environment in a company that has difficulty with recruiting and retaining key talent, despite correcting a long-running compensation gap to peer companies by unilaterally raising salaries 10 months ago.
Doctor says, 'Treatment is simple. Chief People Officer from Fortune 50 company Intel Christy Pambianchi is in my LinkedIn network. Go and seek mentoring from her. That should help you solve your problem.'
HR executive bursts into tears. Says, 'But doctor...'

Yaoi Gagarin
Feb 20, 2014

canyoneer posted:

HR executive goes to doctor. Says she's depressed. Says life seems harsh and cruel. Says she feels all alone in a threatening business environment in a company that has difficulty with recruiting and retaining key talent, despite correcting a long-running compensation gap to peer companies by unilaterally raising salaries 10 months ago.
Doctor says, 'Treatment is simple. Chief People Officer from Fortune 50 company Intel Christy Pambianchi is in my LinkedIn network. Go and seek mentoring from her. That should help you solve your problem.'
HR executive bursts into tears. Says, 'But doctor...'

Hardware in general seems to pay terribly compared to software. Idk why

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

BobHoward posted:

So now the relevant questions become: Why not hardware state machines? Why not ordinary device drivers?

The answer to the first is that if we RTL engineers actually put in a shitload of complex hardware state machines, we'd have to debug them before tapeout. It is insanely hard to do this kind of work in simulation, so the bugs would not get found until validation, and we'd have to do a bunch of chip spins to fix them. FPGA and other forms of pre-silicon emulation can help, but probably not enough.

The answer to the second is that often, you need to make control or other decisions quite frequently, but it would be wasteful to wake up a real CPU core for every one because the total compute power required is only a tiny fraction of a percent of a real CPU. Also, these decisions often come with hard realtime requirements, which are difficult to provide on cores shared with user applications. A simple 32-bit or 64-bit RISC core with fully deterministic timing costs an inconsequential amount of die area these days, so the easy path is to just slap one down and clock it so low it uses nanowatts.
As a software person who did hardware design with HDLs on FPGAs for a few years of my career, I'm being kind of cheeky in suggesting that hardware folks learn from software. There are some novel, imaginative lessons to learn and some dead-wrong ones, and the best lessons usually come from watching the failures of others and taking them to heart. Unfortunately, you're also correct that test harnesses are themselves non-zero cost and can cost tons of money, so basic economics will rear its head and foil the most noble plans of engineers.

Last I remember, there was solid proof that we essentially can't simulate everything relevant on a 2 GHz CPU, but we can use stochastic methods to probe edge cases — similar to how some software folks are finding they can simply test every single 32-bit float exhaustively — except in hardware we can also go in the reverse direction, where certain really awkward states may need to be shown to be impossible to enter. There are also formal methods, like TLA+, that may be useful for hardware design implementations.

All sorts of buffer-overflow and memory-mapping issues in hardware are quite easy to discover, or at least mitigate, if these SoCs are engineered with better validation methods and security in mind. I would hope that after the kind of work done by Bunny Huang, and the threats from various state actors around the world, hardware engineers would be driven toward better secure-systems practices, but given the vulnerabilities I've seen in the past few years I don't think the industry is prioritizing it as much as I'd prayed it would.
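The "test every single 32-bit float" idea is concrete enough to sketch. This is a generic illustration, not any particular validation tool, and the property under test at the end is a trivial hypothetical stand-in:

```python
import struct

def f32(bits: int) -> float:
    """Reinterpret a 32-bit pattern as an IEEE-754 single-precision value."""
    return struct.unpack('<f', struct.pack('<I', bits))[0]

def to_bits(x: float) -> int:
    """Round a Python float to float32 and return its bit pattern."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def exhaustive_check(impl, reference, bit_range):
    """Compare two unary float32 functions, rounding results back to
    float32 before comparing. The full sweep is 2**32 (~4.3e9) inputs:
    hours in a compiled language, far longer in pure Python, hence the
    explicit bit_range argument."""
    for bits in bit_range:
        x = f32(bits)
        if to_bits(impl(x)) != to_bits(reference(x)):
            return bits  # first failing input, as a bit pattern
    return None

# hypothetical stand-in property: x + 0.0 is the identity for every
# non-negative float32 in the sampled range (sign bit excluded)
assert exhaustive_check(lambda x: x + 0.0, lambda x: x,
                        range(0, 100_000)) is None
```

Enumerating bit patterns rather than values is the key trick: it guarantees every NaN payload, denormal, and infinity gets visited exactly once.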

sauer kraut
Oct 2, 2004
Ugh I wanna die :smith:
New PC came with an older IME build but everything ran fine, so naturally I couldn't resist patching, aand.. now my 13400F is recognized everywhere (incl. BIOS) as a 12100F.
Anyone ever come across this problem? Board is an Asus Prime B660M-K D4.

sauer kraut fucked around with this message at 11:48 on Feb 14, 2023

Zedsdeadbaby
Jun 14, 2008

You have been called out, in the ways of old.
uh did you get that off wish or alibaba or somesuch

sauer kraut
Oct 2, 2004
I could swear it ran fine as a six-core when I got it. The brick-and-mortar guy has been the local PC hero for over a decade now, so I don't think there is any foul play over a 210€ CPU.
Holy crap if he got scammed at the distributor level.. going to have a talk tomorrow.

sauer kraut fucked around with this message at 11:49 on Feb 14, 2023

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.
I've never heard of that happening so that is super shady.

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

I'd roll back a BIOS version or something to see if it fixes it. You can always take the heatsink off if you think you were bait and switched, but remember to clean off thermal paste and apply new stuff when you put it back on.

sauer kraut
Oct 2, 2004
Welp, mystery solved. Builder plugged in a loaner 12100 to flash the board for 13th gen, and promptly forgot about it :3:

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy

sauer kraut posted:

Welp, mystery solved. Builder plugged in a loaner 12100 to flash the board for 13th gen, and promptly forgot about it :3:

I like hearing about happy, untroublesome endings

Zedsdeadbaby
Jun 14, 2008

You have been called out, in the ways of old.
"Forgot"

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.
Was it reporting as a 13400 before or did you just not notice it?

I think negligence is more likely here as that seems like a lot of work to swap a $100 CPU with a $200 CPU.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Yeah, I'd also assume it was a mistake. Computer shop guy is not exactly getting away with a lot by holding onto a midrange processor that he can't resell because it was already opened.

redeyes
Sep 14, 2002

by Fluffdaddy

Eletriarnation posted:

Yeah, I'd also assume it was a mistake. Computer shop guy is not exactly getting away with a lot by holding onto a midrange processor that he can't resell because it was already opened.

Can't resell? Of course he can. Regardless, that made me laugh a bit.

sauer kraut
Oct 2, 2004
Cheers for the kind replies. I never really looked too closely either until I noticed only 4 cores in a benchmark overlay.

Guy almost had a panic attack on the phone and sent me an iPhone pic of my tray 13400F waiting in the shop.

Anime Schoolgirl
Nov 28, 2002

sauer kraut posted:

Welp, mystery solved. Builder plugged in a loaner 12100 to flash the board for 13th gen, and promptly forgot about it :3:
free cpu!!!!

Beef
Jul 26, 2004
Things are going to get interesting. Warren Buffett sheds his TSMC investments and Bloomberg shitposts a rumor that Intel is going to cut back on the dividend.


PS. How good are 2012-era workstations? I've seen some place throw/give away (unused) rack servers and workstations from that era and I'm wondering if I can make some school IT happy with a donation.

movax
Aug 30, 2008

Beef posted:

PS. How good are 2012-era workstations? I've seen some place throw/give away (unused) rack servers and workstations from that era and I'm wondering if I can make some school IT happy with a donation.

Power efficiency might be the only bummer, but I'd consider myself a power user and I was running an overclocked 2600K on Win10 up until last summer and it was fine. Maybe they can use them to populate a lab or something?

Cygni
Nov 12, 2005

raring to post

I’ve somehow ended up with piles of Ivy Bridge stuff from 2012, and it does just fine for basic daily use. I don’t think I would bother with Nehalem/Westmere stuff at this point though.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Hoard old electronics now before the blockade/invasion of taiwan happens and strangles semiconductor supplies :tinfoil:

I read an extremely depressing article about the UK wargaming this scenario and poo poo will get pretty hosed up

Potato Salad
Oct 23, 2014

nobody cares


why the gently caress is Intel being allowed to slow down its subsidized rollout of domestic fab capacity

how are we not holding them at the point of a bayonet on this

this is a deeply crucial national security vulnerability

Potato Salad
Oct 23, 2014

nobody cares


priznat posted:

I read an extremely depressing article about the UK wargaming this scenario and poo poo will get pretty hosed up

it will be

SO

hosed

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Potato Salad posted:

it will be

SO

hosed

I'll be out of a job that's for drat sure

Methanar
Sep 26, 2013

by the sex ghost

Potato Salad posted:

why the gently caress is Intel being allowed to slow down its subsidized rollout of domestic fab capacity

how are we not holding them at the point of a bayonet on this

this is a deeply crucial national security vulnerability

I just don't know how we got to the point that the overwhelming majority of the global semiconductor manufacturing base was built within artillery range of NK and China. This problem should have been obvious 20 years ago to the military.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Potato Salad posted:

why the gently caress is Intel being allowed to slow down its subsidized rollout of domestic fab capacity

how are we not holding them at the point of a bayonet on this

this is a deeply crucial national security vulnerability


Methanar posted:

I just don't know how we got to the point that the overwhelming majority of the global semiconductor manufacturing base was built within artillery range of NK and China. This problem should have been obvious 20 years ago to the military.

:capitalism:

Cygni
Nov 12, 2005

raring to post

china is not going to destroy taiwan, yall read too much dogshit

mobby_6kl
Aug 9, 2009

by Fluffdaddy

Beef posted:

Things are going to get interesting. Warren Buffett sheds his TSMC investments and Bloomberg shitposts a rumor that Intel is going to cut back on the dividend.


PS. How good are 2012-era workstations? I've seen some place throw/give away (unused) rack servers and workstations from that era and I'm wondering if I can make some school IT happy with a donation.
Cuts to dividends and taxes on stock buybacks? How number go up!?

I have an Ivy Bridge 3470 as my main PC, and it's fine. Like you can tell it's slower than something brand new, but still usable in a way that doesn't make you mad.


movax posted:

Power efficiency might be the only bummer, but I'd consider myself a power user and I was running a 2600K (overclocked) up until last summer running Win10 and it was fine. Maybe they can use them to populate a lab or something?
"perf/watt" might not be great, but they're still not using that much power. Like maybe 25W total system idle? I measured it a while ago, will check if I have the note somewhere.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

mobby_6kl posted:

Cuts to dividends and taxes on stock buybacks? How number go up!?

I have an Ivy Bridge 3470 as my main PC, and it's fine. Like you can tell it's slower than something brand new, but still usable in a way that doesn't make you mad.

"perf/watt" might not be great, but they're still not using that much power. Like maybe 25W total system idle? I measured it a while ago, will check if I have the note somewhere.

It sounded like OP was talking about servers and workstations, which I interpreted as Xeon E5 platform chips. Compared to desktop stuff, those will absolutely use more power at idle, to the point where I'm not even sure Sandy Bridge Xeons are worth the power to have turned on. A higher-end Sandy Bridge Xeon was going on eBay for under $100 seven years ago. That's how old these are: https://www.servethehome.com/intel-xeon-e5-2670-v1-prices-dropping-now-around-100/

Also, for anything remotely like a server-type workload, anything that would be happy running on an 8-core Sandy Bridge Xeon would be equally happy on 3 or 4 cores of something modern, and modern chips have more cores too. It'd take 6+ of these Sandy Bridge boxes to match even a single modern entry-level server; Xeon Silvers are 20 cores now, each of which is much faster than a 2012-era Xeon core.

Server idle power usage is high, and a bunch of them adds up quickly. The reason these got cleared out on eBay for $70-100 seven years ago is that they were not worth the power to run even then.

Client stuff idles lower and a single box has less impact, so there's no reason not to use your i5-3470.
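Rough dollars on "not worth the power to run" — the wattages and electricity price below are assumed figures, not measurements:

```python
KWH_PRICE = 0.15                 # assumed $/kWh
HOURS_PER_YEAR = 24 * 365

def annual_idle_cost(idle_watts: float) -> float:
    """Electricity cost of leaving a box idling all year."""
    return idle_watts / 1000 * HOURS_PER_YEAR * KWH_PRICE

server = annual_idle_cost(150)    # assumed dual-socket E5 idle draw
desktop = annual_idle_cost(25)    # a ~25 W desktop idle like the i5-3470
print(f"server: ${server:.0f}/yr, desktop: ${desktop:.0f}/yr")
```

Roughly $200/year per idle server box versus ~$33 for the desktop: a rack of free Sandy Bridge servers costs more in power than their resale value within a year.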

Yaoi Gagarin
Feb 20, 2014

Beef posted:

Things are going to get interesting. Warren Buffett sheds his TSMC investments and Bloomberg shitposts a rumor that Intel is going to cut back on the dividend.


PS. How good are 2012-era workstations? I've seen some place throw/give away (unused) rack servers and workstations from that era and I'm wondering if I can make some school IT happy with a donation.

I'd call the school IT first and ask them.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
China isn't going to attack Taiwan anytime soon. Y'all been listening to way too many right-wing pundits. The Chinese military is a complete loving joke that has no way to invade Taiwan, never mind cope with the goatse-dwarfing rear end stretching that would promptly commence courtesy of the US Navy.

China still buys their fighter jet engines from Russia. Russia, the country which has utterly failed to conquer a much smaller country that DOESN'T have a loving ocean between them or important economic ties to the US, by and large has better military technology and manufacturing capability than China.

The US government has been incredibly successful at strangling the Chinese economy. Lots of poo poo is made in China - with machines made outside of China, in US-aligned countries. As long as that remains the case, China is not ever going to be a military superpower, and even if it were it would have literally decades of catching up to the US to do, not to mention trying to match the hilariously huge US spending.

K8.0 fucked around with this message at 22:13 on Feb 16, 2023
