Shaocaholica
Oct 29, 2002

Fig. 5E

movax posted:

search LinkedIn for validation/verification engineers at Intel and see how many hits you get

I wonder how much that pays. I went to school for EE and I never heard of that, but maybe that's why I'm not doing professional EE work post-grad.


movax
Aug 30, 2008

Shaocaholica posted:

I wonder how much that pays. I went to school for EE and I never heard of that, but maybe that's why I'm not doing professional EE work post-grad.

IMO it's more of a computer engineering position, but depending on what your school considered EE, you might have the skillset for it, which is primarily HDL, OVM/UVM, experience with Cadence/Mentor/Synopsys/whoever simulators, and familiarity with the silicon you're debugging (computer architecture in general, maybe experience in CG if you're doing graphics stuff, comms if you're doing wireless, etc.).

e: and if you job search you can find out Intel is doing some kind of stereographic 3D imaging gizmo.
e2: and I like how none of the positions want VHDL :getout:

Shaocaholica
Oct 29, 2002

Fig. 5E

movax posted:

IMO it's more of a computer engineering position, but depending on what your school considered EE, you might have the skillset for it, which is primarily HDL, OVM/UVM, experience with Cadence/Mentor/Synopsys/whoever simulators, and familiarity with the silicon you're debugging (computer architecture in general, maybe experience in CG if you're doing graphics stuff, comms if you're doing wireless, etc.).

e: and if you job search you can find out Intel is doing some kind of stereographic 3D imaging gizmo.
e2: and I like how none of the positions want VHDL :getout:

Haha, I had to design a simple MMX-capable CPU in VHDL as a senior project. It was such a lovely CPU, but my hand-drawn adders were the bomb, yo.

Now I work in Hollywood doing CG and stereo poo poo. Barely even touch code. Go figure.

JawnV6
Jul 4, 2004

So hot ...

Shaocaholica posted:

Passed 200 hours of Prime95 but crashes on Crysis right away? I've seen that case, and never have I heard anyone blame the chipmaker or the developers.

Gosh, immediate crashes are easy to root cause? Validation missed a precious gem in you.

redeyes
Sep 14, 2002

by Fluffdaddy
Can someone explain to me why Haswell mobos include all kinds of poo poo around the CPU socket? I thought Haswell had the VRMs inside the CPU, and thus they don't need to spend money on external VRMs... so what the hell is going on?

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

redeyes posted:

Can someone explain to me why Haswell mobos include all kinds of poo poo around the CPU socket? I thought Haswell had the VRMs inside the CPU, and thus they don't need to spend money on external VRMs... so what the hell is going on?

The motherboard still has to convert from +12V down to ~1.8V for the IVR on the CPU to handle the last step. The motherboard no longer has to do the complicated high-frequency switching or supply multiple rails, however.

movax
Aug 30, 2008

redeyes posted:

Can someone explain to me why Haswell mobos include all kinds of poo poo around the CPU socket? I thought Haswell had the VRMs inside the CPU, and thus they don't need to spend money on external VRMs... so what the hell is going on?

You still need the lower-voltage rails for DDR and other logic on the motherboard, and the FIVR wants around 2.4V in, IIRC. So you end up with a two-stage buck converter, with the motherboard design getting easier because now you just have to buck down to a constant 2.4V (all VIDs are taken care of by the FIVR). The motherboard will still use a switching topology, it'll just be simpler and easier to implement as it's less demanding. Intel did some really cool things to get an insanely high switching frequency + ~18nH or so of inductance using thin-film on-die magnetics, which leads to the excellent ripple performance.

The VRM is made up of "power cells" which can be scaled up and down in quantity to meet the needs of a specific SKU. Reported fs is anywhere from 30 to 140MHz, with each cell having sixteen phases (at least in the prototype). The current capacity is essentially thermally constrained; I think the prototype was ~25A per cell.
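
For a sense of why the on-die magnetics matter, here's a back-of-the-envelope ripple estimate for an ideal buck stage using the figures above (the 1.0V output is my illustrative assumption, not an Intel spec):

[code]
/* Peak-to-peak inductor ripple for an ideal buck converter,
 * using the rough FIVR figures quoted above. Illustrative only. */
#include <stdio.h>

int main(void) {
    double vin  = 2.4;    /* input rail from the motherboard, V */
    double vout = 1.0;    /* assumed core voltage, V */
    double L    = 18e-9;  /* on-die thin-film inductance, H */
    double fs   = 140e6;  /* switching frequency, Hz */

    double duty = vout / vin;
    /* dI = (Vin - Vout) * D / (L * fs) */
    double ripple = (vin - vout) * duty / (L * fs);

    printf("duty = %.2f, ripple = %.2f A peak-to-peak\n", duty, ripple);
    return 0;
}
[/code]

At 140MHz and 18nH that works out to roughly a quarter of an amp of ripple per phase, which is why the multi-MHz switching frequency buys such clean output.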

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

movax posted:

No TSX on the Ks sucks, but let's face it, business adoption of CPUs that support it + the eventual rollout of software to leverage it will all be utilizing non-K SKUs.

The only interesting thing about TSX for desktop computers is lock elision. And that can be trivially supported and quasi-retrofitted by implementing it in the respective places, e.g. the Win32 mutex/critsec APIs and the various threading libraries on Linux.
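
For reference, the RTM flavor of lock elision looks roughly like this. A minimal sketch using the documented _xbegin/_xend/_xabort intrinsics (compile with gcc -mrtm); real implementations like glibc's elided mutexes add retry and adaptation policies omitted here:

[code]
#include <immintrin.h>

static volatile int fallback_lock = 0;

void elided_lock_acquire(void) {
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        /* In a transaction: read the lock so a real owner aborts us. */
        if (fallback_lock == 0)
            return;          /* run the critical section transactionally */
        _xabort(0xff);       /* lock actually held: abort, take it for real */
    }
    /* Aborted (or RTM unsupported): fall back to a plain spinlock. */
    while (__sync_lock_test_and_set(&fallback_lock, 1))
        while (fallback_lock)
            ;                /* spin until it looks free, then retry */
}

void elided_lock_release(void) {
    if (fallback_lock == 0)
        _xend();             /* still transactional: commit */
    else
        __sync_lock_release(&fallback_lock);
}
[/code]

The appeal is exactly the retrofit angle: drop this behind an existing mutex API and lock-based code whose critical sections don't actually conflict starts running concurrently, with no source changes.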

WhyteRyce
Dec 30, 2001

Shaocaholica posted:

I wonder how much that pays. I went to school for EE and I never heard of that, but maybe that's why I'm not doing professional EE work post-grad.

Pretty much any college student you interview will have no clue about validation or that it even exists, and when you try to explain what the job is to them, they think you just mean test benches, screening, or checking for manufacturing defects. It's very disheartening :mad:

WhyteRyce fucked around with this message at 21:37 on Jun 5, 2013

Magog
Jan 9, 2010
drat Intel, why must you disappoint. I've been holding out with an unstable Sandy Bridge for over a year waiting for Haswell... I guess the motherboards have some cool gimmicks this generation.

Shaocaholica
Oct 29, 2002

Fig. 5E
Well I bit the bullet and bought a Haswell 4770K. First 'new' CPU I've bought since my Q6600. Quite a hit to the wallet since all I could salvage was HDDs and a GTX 470.

Phantom Limb
Jun 30, 2005

blargh

movax posted:

IMO it's more of a computer engineering position, but depending on what your school considered EE, you might have the skillset for it, which is primarily HDL, OVM/UVM, experience with Cadence/Mentor/Synopsys/whoever simulators, and familiarity with the silicon you're debugging (computer architecture in general, maybe experience in CG if you're doing graphics stuff, comms if you're doing wireless, etc.).

e: and if you job search you can find out Intel is doing some kind of stereographic 3D imaging gizmo.
e2: and I like how none of the positions want VHDL :getout:

Validation also sucks imo and unless you're specifically passionate about it, you'll just be miserable. I know some people who did verification as a stepping stone to design work and they basically hate life.

JawnV6
Jul 4, 2004

So hot ...
It shouldn't be a stepping stone to design, it should be a stepping stone to architecture. And they might just be the special snowflake types who would've been equally unhappy at the reality of design work, staring at the same 5 critical paths for a couple of months, imo.

Henrik Zetterberg
Dec 7, 2007

WhyteRyce posted:

Pretty much any college student you interview will have no clue about validation or that it even exists, and when you try to explain what the job is to them, they think you just mean test benches, screening, or checking for manufacturing defects. It's very disheartening :mad:

This is exactly what I thought when I applied for an Intel validation job during my senior year of my EE degree.

I'm now in my 8th year of the job :tipshat:

Phantom Limb posted:

Validation also sucks imo and unless you're specifically passionate about it, you'll just be miserable. I know some people who did verification as a stepping stone to design work and they basically hate life.

This is most certainly incorrect.

e: VV I was more addressing his blanket statement that it is miserable. Of course people like different things.

Henrik Zetterberg fucked around with this message at 01:58 on Jun 6, 2013

movax
Aug 30, 2008

Phantom Limb posted:

Validation also sucks imo and unless you're specifically passionate about it, you'll just be miserable. I know some people who did verification as a stepping stone to design work and they basically hate life.

This is my opinion on it also, but then you have:

Henrik Zetterberg posted:

This is most certainly incorrect.

so people like different things. I have similar views on testing in automotive and other industries; driving a car until it breaks is kind of fun and awesome when you're fresh out of school, but 10 years down the line I'd be hoping you're far, far above that. But again, that's my opinion/feelings.

I feel like validation would be amazing if you were autistic. (no offense Henrik/anyone)

Phantom Limb
Jun 30, 2005

blargh

movax posted:

This is my opinion on it also, but then you have:


so people like different things. I have similar views on testing in automotive and other industries; driving a car until it breaks is kind of fun and awesome when you're fresh out of school, but 10 years down the line I'd be hoping you're far, far above that. But again, that's my opinion/feelings.

I feel like validation would be amazing if you were autistic. (no offense Henrik/anyone)

Fair enough. I think my perception of it is also colored by the fact that a lot of younger engineers at my company got shoved into doing validation since there was no other work, and they pretty much universally hated it.

WhyteRyce
Dec 30, 2001

Like any other field, it depends on your personality, the group you work in, and the type of work you are doing. You can learn a lot about how poo poo works in validation. But you have to be a bit self-driven, and if all you're doing is running tests and then throwing up your hands when they fail, then I can see being miserable.

borkencode
Nov 10, 2004
To K or not to K..

Is the virtualization support of the non-K 4770 worth not getting the 100 MHz bump of the 4770K? Or is the extra MHz worth the extra cost? I'm not really interested in overclocking, and use Virtualbox occasionally.

\/\/\/\/\/ Mostly for just messing around with Linux, nothing very intensive.

borkencode fucked around with this message at 03:00 on Jun 6, 2013

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
"Occasionally" for what? Both CPUs have Vt-x, so purely CPU-based performance will be similar. The non-K's Vt-d is more a feature for a permanently virtualized router or NAS or suchlike VMs where direct access to peripherals improves I/O throughput and latency.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

borkencode posted:

To K or not to K..

Is the virtualization support of the non-K 4770 worth not getting the 100 MHz bump of the 4770K? Or is the extra MHz worth the extra cost? I'm not really interested in overclocking, and use Virtualbox occasionally.

You do not sound like a good candidate for a K SKU chip. Go with the one that works for your situation.

JawnV6
Jul 4, 2004

So hot ...

Phantom Limb posted:

Fair enough. I think my perception of it is also colored by the fact that a lot of younger engineers at my company got shoved into doing validation since there was no other work, and they pretty much universally hated it.

You've got to stamp that out.

There's a bit of a difference between what companies call "validation/verification". There's the team that answers "Is the layout an equivalent circuit to the RTL?", which is mostly automated now; if you have humans doing any significant portion of it, yeah, it's not the best. There's also the team that answers "Is this design doing what's expected?", which requires actual engineering, since over the life of the project you have time to simulate maaaaaybe 1 second of real CPU time. You have to be clever about what tests you run on what portions of the chip, so that the billions spent to get the first ones back enable the post-silicon teams to run multiple seconds (:eyepop:) of quality content to flush out the nastier bugs.
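
To put that "1 second" in perspective (the simulator throughput below is my own ballpark assumption, not a measured figure):

[code]
/* Why pre-silicon validation gets ~1 second of real CPU time.
 * Full-chip RTL simulation commonly runs in the kHz range or below;
 * the 1 kHz figure here is an assumed ballpark. */
#include <stdio.h>

int main(void) {
    double chip_hz = 3e9;  /* target silicon: 3 GHz */
    double sim_hz  = 1e3;  /* assumed full-chip RTL sim throughput */

    double slowdown = chip_hz / sim_hz;
    double days = slowdown / 86400.0;  /* sim wall-clock days per real second */

    printf("slowdown: %.0fx -> %.0f days of simulation per second of CPU time\n",
           slowdown, days);
    return 0;
}
[/code]

Three million to one, or about 35 wall-clock days of simulation per emulated second, and that's before you divide the compute farm among every unit that wants coverage.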

If you exclusively hire RCGs, treat them like second-class citizens, and actively poach the best of them for other teams... it's really hard to imagine an outcome other than a terrible team? Ideally you've got experienced validators who engage during the early stages of architectural planning (or even take architectural ownership) to shape the project in a way that's easy to validate, to cut that long pole down and let you build a wider tent.

Zhentar
Sep 28, 2003

Brilliant Master Genius

Shaocaholica posted:

Passed 200 hours of Prime95 but crashes on Crysis right away? I've seen that case, and never have I heard anyone blame the chipmaker or the developers.

More frequently, it's passed 1 hour of Prime95, and then 6 months later it crashes on Crysis. And then they blame the developers, because they've pretty much forgotten about that overclocking thing they did (and besides, it's stable!).

It does hurt Intel if using their newer, fancier features increases support costs because of overclocking, and makes developers more reluctant to actually use them (and, for that matter, it hurts overclockers if their overclock is limited by features they don't really need, or if their system is rendered unstable by things they can't stress test). I don't know how much any of that weighs into Intel's decision to remove any of those features, but there are at least some good reasons to do so.

Shaocaholica
Oct 29, 2002

Fig. 5E

Zhentar posted:

More frequently, it's passed 1 hour of Prime95, and then 6 months later it crashes on Crysis. And then they blame the developers, because they've pretty much forgotten about that overclocking thing they did (and besides, it's stable!).

It does hurt Intel if using their newer, fancier features increases support costs because of overclocking, and makes developers more reluctant to actually use them (and, for that matter, it hurts overclockers if their overclock is limited by features they don't really need, or if their system is rendered unstable by things they can't stress test). I don't know how much any of that weighs into Intel's decision to remove any of those features, but there are at least some good reasons to do so.

I always thought that the vast majority of overclockers consider a speed stable only if ALL aspects of the CPU are stable. Hence the 100s of different stress tests. If people just wanted a bare minimum of stability, they wouldn't even bother stress testing and would just say 'hey, it booted into Windows, mission accomplished'.

At that point you might as well sell a special overclocking chip that has almost everything disabled so people can clock them up to 10GHz and still have a lovely CPU, but hey, it's -10GHz-.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Shaocaholica posted:

I always thought that the vast majority of overclockers consider a speed stable only if ALL aspects of the CPU are stable. Hence the 100s of different stress tests. If people just wanted a bare minimum of stability, they wouldn't even bother stress testing and would just say 'hey, it booted into Windows, mission accomplished'.

At that point you might as well sell a special overclocking chip that has almost everything disabled so people can clock them up to 10GHz and still have a lovely CPU, but hey, it's -10GHz-.

Different strokes for different folks. Some overclockers are happy with "does it boot?" Others are fine if it plays their games, and who cares about anything else. I wrote the OC thread OP with an eye toward an arbitrarily stable, 24/7-safe overclock, but that's not the only way it has to be, especially in the suicide-run crowd, where all they want is a validated CPU-Z upload.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I guess it's a form of drag racing of sorts, where each year everyone's rushing for updated records, it's alright to crash after the finish line, and everyone gets to define their own line. So to me it's hardly competitive in this respect without some rules and standards.

Shaocaholica
Oct 29, 2002

Fig. 5E

necrobobsledder posted:

I guess it's a form of drag racing of sorts, where each year everyone's rushing for updated records, it's alright to crash after the finish line, and everyone gets to define their own line. So to me it's hardly competitive in this respect without some rules and standards.

Well, there's competitive motorsports, and there are sports cars normal people buy in the thousands. Intel owes nothing and gains nothing from targeting the suicide-run guys.

TomWaitsForNoMan
May 28, 2003

By Any Means Necessary
So from the looks of things the performance increase isn't going to be that great, is that a fair assessment?

As someone who has a Sandy Bridge i7-2600K would I even see a noticeable performance increase in games if I don't have a GPU bottleneck?

movax
Aug 30, 2008

TomWaitsForNoMan posted:

So from the looks of things the performance increase isn't going to be that great, is that a fair assessment?

As someone who has a Sandy Bridge i7-2600K would I even see a noticeable performance increase in games if I don't have a GPU bottleneck?

If you have a 2600K you have no reason to upgrade whatsoever unless you absolutely need every single bit of CPU performance possible. For pure gaming, there's no need to consider upgrading for a while yet.

Zhentar
Sep 28, 2003

Brilliant Master Genius

Shaocaholica posted:

I always thought that the vast majority of overclockers consider a speed stable only if ALL aspects of the CPU are stable.

Even if this is true, the vast majority of overclockers do not have the knowledge, understanding, or tools to verify that all aspects of their CPU are stable. They run Prime95/OCCT/stress test of choice, testing some small portion of their CPU. If they're really a go-getter, they'll run the tool in several different modes, stress testing a little bit more of their CPU.

Have you ever run a VT-x stress test? I'm guessing not, since I don't think there even is one. How do you even know if your overclock is truly stable, then? You wouldn't run one for VT-d, for the same reason, and likewise for TSX. And with TSX it's actually potentially a significant problem, because it will theoretically be used by everyday desktop applications, and since there's lots of interaction with the memory system, it's very likely on a critical path or two.
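
About the best you can do from software is confirm the features are even there to stress. A quick sketch (assuming GCC or Clang on x86; the bit positions are the standard CPUID ones from Intel's SDM):

[code]
/* Report VMX (VT-x) and TSX (HLE/RTM) support via CPUID.
 * Leaf 1 ECX[5] = VMX; leaf 7 EBX[4] = HLE, EBX[11] = RTM. */
#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("VMX (VT-x): %s\n", (ecx >> 5) & 1 ? "yes" : "no");

    if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        printf("HLE: %s\n", (ebx >> 4) & 1 ? "yes" : "no");
        printf("RTM: %s\n", (ebx >> 11) & 1 ? "yes" : "no");
    }
    return 0;
}
[/code]

And presence is all that tells you: knowing the feature exists is a long way from having exercised it under your overclock.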

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

movax posted:

If you have a 2600K you have no reason to upgrade whatsoever unless you absolutely need every single bit of CPU performance possible. For pure gaming, there's no need to consider upgrading for a while yet.

Yeah, for better and for worse, Sandy Bridge is the Core 2 Quad of the next few years. Just gonna keep on keeping on while Intel focuses on improvements in areas that make them more money because they make the big buyers happier. Regular user experience will not meaningfully differ at all, though non-overclockers will be getting progressively faster computers, the capabilities of which will rarely if ever be used. It's looking like it will be years before desktop builders actually start being able to notice a difference due to the CPU.

One part Sandy Bridge was awesome, one part Intel knows where their bread is buttered and that's where they're understandably spending their money. Sucks for those of us who would be upgrading to new -K SKUs along the way, but the performance difference just doesn't make sense. The only reason to upgrade is if you're running out of PCI-e lanes or want to do development work with some of the new specialized instruction sets.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
One thing that's been missing a little in these conversations is that DDR4 is supposed to be coming sometime in the next couple of years, but with prices so low on DDR3 and consumer trends being what they are, I don't see it being of much use/value until into Broadwell for end users. With laptop and desktop revenues on the decline across all makers, I have a hard time believing they'll even release DDR4 in DIMMs for a couple of years either. So whether LPDDR4 lands first in servers or in BGA (or other OEM-centric) packages is the question to me.

Also, Agreed is a hardcore SH/SCer to the core, apparently posting all the way into his surgery day.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

necrobobsledder posted:

One thing that's been missing a little in these conversations is that DDR4 is supposed to be coming sometime in the next couple of years, but with prices so low on DDR3 and consumer trends being what they are, I don't see it being of much use/value until into Broadwell for end users. With laptop and desktop revenues on the decline across all makers, I have a hard time believing they'll even release DDR4 in DIMMs for a couple of years either. So whether LPDDR4 lands first in servers or in BGA (or other OEM-centric) packages is the question to me.

Also, Agreed is a hardcore SH/SCer to the core, apparently posting all the way into his surgery day.

I'm in the freakout zone now, too, waiting on a call within the next two hours to tell me whether the lab work or chest x-ray would disqualify me for surgery, and if not, what time I need to show up in the morning - so me appearing composed and logical is totally a front allowing me to act out when really I'm terribly anxious.

SH/SC is my happy place. And TFR chat. TFR IRC is great. Is there a SH/SC IRC? I bought mIRC a month or two ago, right before Mibbit stopped sucking :v:

To the point, the biggest thing is that the performance gains for DDR4 vs. DDR3 suck, which is especially salient given the disappointing real-world outcome of Intel's supposed-to-be-hot-stuff further memory access integration (I think Factory Factory calculated it at approximately PCI-e 2.0 4x access speeds, which is fractional compared to theoretical I/O bandwidth). Take an already marginal improvement and then cut down how much it means, and really we're waiting to see whether Broadwell takes it further, or whether it'll be another tock before we see the kinds of gains that make even the big guys perk up their ears, the ones who pay attention to fractional gains because they mean something on the bottom line.
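
For scale on "fractional" (using the standard interface figures: PCIe 2.0 at 5 GT/s per lane with 8b/10b encoding, DDR3-1600 at 12.8 GB/s per channel):

[code]
/* Compare PCIe 2.0 x4 throughput against dual-channel DDR3-1600. */
#include <stdio.h>

int main(void) {
    double lane_gbps = 5.0 * 8.0 / 10.0;           /* 4 Gb/s usable per lane */
    double pcie_x4   = 4.0 * lane_gbps / 8.0;      /* = 2 GB/s */
    double ddr3_dual = 2.0 * 12.8;                 /* = 25.6 GB/s */

    printf("PCIe 2.0 x4: %.1f GB/s, DDR3-1600 dual channel: %.1f GB/s (%.0f%%)\n",
           pcie_x4, ddr3_dual, 100.0 * pcie_x4 / ddr3_dual);
    return 0;
}
[/code]

Call it 2 GB/s against 25.6 GB/s, or under 8% of what the memory interface can theoretically move.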

Agreed fucked around with this message at 21:31 on Jun 6, 2013

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
I know Forbes is far from the best source for tech news, but a columnist of theirs brings up a point I hadn't considered yet: Haswell prices being higher than Ivy Bridge. http://www.forbes.com/sites/sharifsakr/2013/06/04/intel-haswell-prices/

I'm excited for the better battery life and integrated graphics that come with Haswell, but side by side I wouldn't pay much of a price premium for a Haswell device at all. I didn't realize it cost any more, because pricing seemed just like Sandy & Ivy Bridge around launch; stores are advertising the 4770K for $279 and the 4670K for $199. I'm really hoping Haswell lineups and pricing come from Dell and Lenovo soon; I'm telling several people who need laptops within 2 months to wait and see what happens rather than buying now.

sincx
Jul 13, 2012

furiously masturbating to anime titties
.

sincx fucked around with this message at 05:54 on Mar 23, 2021

JawnV6
Jul 4, 2004

So hot ...

Shaocaholica posted:

I always thought that the vast majority of overclockers consider a speed stable only if ALL aspects of the CPU are stable. Hence the 100s of different stress tests.

Oh my, hundreds of tests? That might begin to cover some of the billions of paths through the chip. As I said before, though, I doubt you'd even hit every vmexit possibility, and since those have a very high chance of going to triple-fault shutdown, they're quite nasty to try to root-cause long after the fact.

It's cute that you think "100s" is a big big number though :3: It's ok, most designers are quite myopic.

sincx posted:

Monopoly pricing.

With slashed idle power, TCO should be lower, and the price reflects that value-add. Not to mention FIVR reduces component count on the board, lowering the supplier's cost elsewhere. Or you could just go for the most simplistic kneejerk analysis, that's good too.

Shaocaholica
Oct 29, 2002

Fig. 5E

JawnV6 posted:

Oh my, hundreds of tests? That might begin to cover some of the billions of paths through the chip.

I was speaking to the mentality, not the actuality. If you wrote something that would crash an otherwise 'stable' CPU and released it into the overclocking community, they would reevaluate their view of stable.

Zhentar
Sep 28, 2003

Brilliant Master Genius

JawnV6 posted:

With slashed idle power, TCO should be lower, and the price reflects that value-add. Not to mention FIVR reduces component count on the board, lowering the supplier's cost elsewhere.

From Anandtech's numbers, the reduced idle power should save me as much as $10/yr over a comparable Ivy Bridge. Maybe the motherboards will be appreciably cheaper, although the constant shift onto the CPU never seems to help much there.
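
For anyone wondering how an idle-power delta becomes dollars (the wattage, duty cycle, and electricity rate here are illustrative assumptions, not Anandtech's measurements):

[code]
/* Annual cost of an idle-power delta. */
#include <stdio.h>

int main(void) {
    double watts_saved    = 10.0;         /* assumed idle delta, W */
    double hours_per_year = 20.0 * 365.0; /* assumed idle time */
    double usd_per_kwh    = 0.12;         /* assumed electricity rate */

    double kwh = watts_saved * hours_per_year / 1000.0;
    printf("~%.0f kWh/yr, ~$%.2f/yr\n", kwh, kwh * usd_per_kwh);
    return 0;
}
[/code]

With those numbers it's about $9/yr, so the $10/yr figure is in the right ballpark.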

Rawrbomb
Mar 11, 2011

rawrrrrr
How exactly is Intel doing monopoly pricing? Haven't the high-end chips always been around the same price for a few years now, +/- :10bux:?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
It's a lot more apparent in the Ultrabook and mobile SKUs, where the top performers are priced competitively with an i7-3930K-to-3970X, not with price/performance chips like the i5-4670K and i7-4770K.


HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Rawrbomb posted:

How exactly is Intel doing monopoly pricing? Haven't the high-end chips always been around the same price for a few years now, +/- :10bux:?

A joke, I assume? Only Intel has been considered high-end for a few years now.
