McGlockenshire
Dec 16, 2005

GOLLOCKS!

FISHMANPET posted:

So I'm trying to figure out the difference between E5-2600 and E5-2400.

Intel's ARK site can be useful here.

Here's a comparison between the E5-2609, E5-2603, E5-2407 and E5-2403. The E5-2600s have two QPI links, support twice the memory and have one additional memory channel which yields more memory bandwidth.

They're also a different socket type.
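The extra channel matters because theoretical peak memory bandwidth scales linearly with channel count. A rough sketch of the arithmetic, assuming DDR3-1600 and 64-bit (8-byte) channels; actual supported memory speeds vary by SKU:

```python
# Rough theoretical peak memory bandwidth: channels * transfer rate * bus width.
# Illustrative numbers only (assumes DDR3-1600, 64-bit channels).

def peak_bandwidth_gb_s(channels, mt_per_s=1600, bus_bytes=8):
    """Peak bandwidth in GB/s for a given number of memory channels."""
    return channels * mt_per_s * bus_bytes / 1000  # MT/s * bytes -> GB/s

e5_2600 = peak_bandwidth_gb_s(4)  # quad-channel: 51.2 GB/s
e5_2400 = peak_bandwidth_gb_s(3)  # triple-channel: 38.4 GB/s
```

So on paper the fourth channel buys you about a third more bandwidth, before accounting for the second QPI link.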


Shaocaholica
Oct 29, 2002

Fig. 5E
When Intel releases a new architecture to the market (IVB), how long do they keep making the older architecture (SNB)?

Shaocaholica fucked around with this message at 06:54 on Jul 5, 2012

movax
Aug 30, 2008

Shaocaholica posted:

When Intel releases a new architecture to the market (IVB), how long do they keep making the older architecture (SNB)?

The consumer SKUs fall off relatively quickly, but they guarantee availability for products coming out of their embedded group, and I imagine the server SKUs may enjoy a little more longevity as well. For example, Intel told us we can buy embedded SKUs of, say, the i7-620 for at least the next eight years.

hobbesmaster
Jan 28, 2008

movax posted:

The consumer SKUs fall off relatively quickly, but they guarantee availability for products coming out of their embedded group, and I imagine the server SKUs may enjoy a little more longevity as well. For example, Intel told us we can buy embedded SKUs of, say, the i7-620 for at least the next eight years.

And SNB-E is still the highest-end consumer hardware. As discussed earlier, IVB-E and the related Xeons won't be out for quite a while, so there are plenty of SNB-related processors still being sold.

As for embedded, Intel will still sell you Pentium 3s if you want them.

Shaocaholica
Oct 29, 2002

Fig. 5E
How does turbo boost handle small, infrequent CPU loads? For instance, image editing: you might move a slider for 0.5 seconds, but you want to see realtime feedback. Or any other situation where there's something like 0.1-0.5 seconds of full load followed by long gaps of idle. How much of the CPU's full potential can be realized in those brief moments if the CPU isn't running at full speed? I know it should be able to switch really, really fast, but how fast? Is there logic there to prevent it from thrashing?

Shaocaholica fucked around with this message at 16:26 on Jul 5, 2012

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

McGlockenshire posted:

Intel's ARK site can be useful here.

Here's a comparison between the E5-2609, E5-2603, E5-2407 and E5-2403. The E5-2600s have two QPI links, support twice the memory and have one additional memory channel which yields more memory bandwidth.

They're also a different socket type.

Yes; E5-2400s exist mainly to allow OEMs to reuse slightly-modified Westmere-EP motherboards and systems instead of designing new Socket R platforms.

edit:

Shaocaholica posted:

How does turbo boost handle small, infrequent CPU loads? For instance, image editing: you might move a slider for 0.5 seconds, but you want to see realtime feedback. Or any other situation where there's something like 0.1-0.5 seconds of full load followed by long gaps of idle. How much of the CPU's full potential can be realized in those brief moments if the CPU isn't running at full speed? I know it should be able to switch really, really fast, but how fast? Is there logic there to prevent it from thrashing?

It scales up and down on the order of microseconds.
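If that's roughly right, the ramp time is negligible even for the 0.1-0.5 second bursts described above. A toy model with made-up ramp times makes the point:

```python
# Toy model: what fraction of a short burst runs at the boosted frequency,
# given a frequency-ramp latency. The ramp times below are illustrative,
# not measured figures for any particular CPU.

def full_speed_fraction(burst_s, ramp_s):
    """Fraction of the burst spent at full speed (crude linear model)."""
    return max(0.0, (burst_s - ramp_s) / burst_s)

# A microsecond-scale ramp barely dents a 100 ms slider drag:
short_ramp = full_speed_fraction(0.1, 50e-6)   # ~0.9995
# Even a hypothetical millisecond-scale ramp would hardly matter:
long_ramp = full_speed_fraction(0.1, 2e-3)     # 0.98
```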

in a well actually fucked around with this message at 16:40 on Jul 5, 2012

Shaocaholica
Oct 29, 2002

Fig. 5E

theclaw posted:

It scales up and down on the order of microseconds.

According to this, the OS is actually in control of turbo boost:

wikipedia posted:

It is activated when the operating system requests the highest performance state of the processor.

Has anyone done any synthetic tests in Windows/Linux/OS X to see how turbo works under different dynamic loads?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
AnandTech looked into p-state operation on the Bulldozer Opterons and found that AMD's architecture needed fiddling with the OS's power management for full performance, but also that Intel's Westmere (Nehalem die shrink) chips did not.

AnandTech's look at Lynnfield went more in depth on how Turbo Boost works, including that while the OS governs p-states, it does so based on the I/O of a 486's worth of transistors that manage CPU power states. That overview also covers the basics of when Turbo will kick in and when it won't, i.e. that it's controlled by temperature and power draw, and the higher each of those is, the less turbo you will see under stock behavior. With variables like that, synthetic benchmarks are even more useless than they usually are.
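For anyone tempted to try anyway, a deliberately crude probe along these lines can at least show the effect on a single machine. This is a sketch, not a benchmark; `work_rate` and `warm_up` are hypothetical helpers, and the numbers will swing with temperature, power limits, and OS power settings, which is exactly the point:

```python
# Crude synthetic probe: time a fixed busy loop immediately (possibly before
# the clock has ramped), then again after a warm-up spin that gives the
# governor time to raise the P-state. Illustrative only.
import time

def work_rate(iterations=2_000_000):
    """Iterations per second for a fixed integer busy loop."""
    start = time.perf_counter()
    x = 0
    for i in range(iterations):
        x += i
    return iterations / (time.perf_counter() - start)

def warm_up(seconds=0.2):
    """Spin so the CPU has a chance to reach its boosted frequency."""
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        pass

cold = work_rate()  # may start at a low P-state
warm_up()
warm = work_rate()  # often slightly higher once boosted
```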

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

theclaw posted:

Yes; E5-2400s exist mainly to allow OEMs to reuse slightly-modified Westmere-EP motherboards and systems instead of designing new Socket R platforms.

Good enough for me, since the Dell systems using the E5-2400s are 25% cheaper.

Zhentar
Sep 28, 2003

Brilliant Master Genius

Shaocaholica posted:

According to this, the OS is actually in control of turbo boost:

That's not actually what that means. What it really means is that turbo boost won't run when the OS has told SpeedStep to underclock the processor. The CPU itself is totally in control of whether or not it overclocks itself.

japtor
Oct 28, 2005

hobbesmaster posted:

And SNB-E is still the highest-end consumer hardware. As discussed earlier, IVB-E and the related Xeons won't be out for quite a while, so there are plenty of SNB-related processors still being sold.

As for embedded, Intel will still sell you Pentium 3s if you want them.

Where does stuff like the E3 v2 fit in? Is it just considered a more or less regular IVB part or something? (I don't know the whole "E" nomenclature well to begin with.)

JawnV6
Jul 4, 2004

So hot ...
All the p-state stuff is well documented in the ACPI spec. It defines the communication between applications, the OS, and hardware regarding power states. It's basically the evolution of the MPS spec, APM, all that fun stuff. It's a great read if you're some kind of nerd.
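For a feel of what the spec describes: ACPI advertises each processor's performance states to the OS through objects like _PSS, each entry carrying a frequency, power draw, and transition latency, and the OS requests among them. A toy model with made-up values; the selection policy below is illustrative, not what any real OSPM implementation does:

```python
# Toy model of an ACPI _PSS-style performance-state table. Real tables come
# from platform firmware; these entries are invented for illustration.
from collections import namedtuple

PState = namedtuple("PState", "freq_mhz power_mw latency_us")

pss = [                        # P0 first, as in a real _PSS package
    PState(3300, 95000, 10),   # P0: turbo range ("highest performance state")
    PState(2900, 80000, 10),   # P1: nominal frequency
    PState(1600, 35000, 10),   # Pn: deepest throttle
]

def pick_pstate(demand_mhz):
    """Lowest-power state that still meets the demanded frequency."""
    for state in reversed(pss):   # walk from deepest throttle upward
        if state.freq_mhz >= demand_mhz:
            return state
    return pss[0]
```

Requesting P0 here is what "the highest performance state" means in the Wikipedia quote above: the OS asks for it, and the hardware decides how much turbo it actually delivers.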

movax
Aug 30, 2008

JawnV6 posted:

All the p-state stuff is well documented in the ACPI spec. It defines the communication between applications, the OS, and hardware regarding power states. It's basically the evolution of the MPS spec, APM, all that fun stuff. It's a great read if you're some kind of nerd.

I unfortunately have experience with ASL now, I will die forever alone and unloved, like a true super-nerd. Some program manager somewhere decided we should support ACPI-mediated PCIe hot-plug as well :saddowns:

Shaocaholica
Oct 29, 2002

Fig. 5E

JawnV6 posted:

All the p-state stuff is well documented in the ACPI spec. It defines the communication between applications, the OS, and hardware regarding power states. It's basically the evolution of the MPS spec, APM, all that fun stuff. It's a great read if you're some kind of nerd.

Do Macs use ACPI? I ask because Apple isn't on the list of companies that contribute to it.

JawnV6
Jul 4, 2004

So hot ...

movax posted:

I unfortunately have experience with ASL now, I will die forever alone and unloved, like a true super-nerd. Some program manager somewhere decided we should support ACPI-mediated PCIe hot-plug as well :saddowns:

I've read... large parts of the spec. Also written MPS tables, etc.

Shaocaholica posted:

Do Macs use ACPI? I ask because Apple isn't on the list of companies that contribute to it.

You can implement a spec without contributing to its definition? If you've heard Mac people talking about DSDT/SSDT, those are both part of ACPI.

Shaocaholica
Oct 29, 2002

Fig. 5E

JawnV6 posted:

You can implement a spec without contributing to its definition? If you've heard Mac people talking about DSDT/SSDT, those are both part of ACPI.

Well yeah but I figure if they were going to implement it they would contribute as well given their size. Just thought they might be rolling their own thing.

movax
Aug 30, 2008

Shaocaholica posted:

Well yeah but I figure if they were going to implement it they would contribute as well given their size. Just thought they might be rolling their own thing.

ACPI is a pretty complex monster (understatement); Apple would have been fools not to use ACPI when they jumped over to x86. There's a goon here who could comment much more on that if he feels like revealing himself.

JawnV6 posted:

I've read... large parts of the spec. Also written MPS tables, etc.

The few parts of Aptio that still involve assembly also happen to be the ones responsible for ACPI table generation/MPS generation/etc. I picture a dark cave-like room in Norcross filled with skinny Chinese and Indian dudes cranking this poo poo out.

Shaocaholica
Oct 29, 2002

Fig. 5E
So Apple doesn't contribute to ACPI because of how new they are to x86? Or is it a closed club? It would seem that Apple would be one of the most vocal on new features.

Zhentar
Sep 28, 2003

Brilliant Master Genius
Apple controls their hardware. If Apple wants a feature, they tell their suppliers to make it, and it gets done, whether it's a ratified part of a standard or not (see also: mSATA).

JawnV6
Jul 4, 2004

So hot ...

Shaocaholica posted:

So Apple doesn't contribute to ACPI because of how new they are to x86?

:confused: Why do you think ACPI is x86-only?

You're also vastly misunderstanding the depth of information that ACPI can provide. The extensions have mostly been to support entirely new interfaces like PCIe cards.

Shaocaholica
Oct 29, 2002

Fig. 5E

JawnV6 posted:

:confused: Why do you think ACPI is x86-only?

Because of this:

movax posted:

ACPI is a pretty complex monster (understatement); Apple would have been fools not to use ACPI when they jumped over to x86.

Also, the contributors to ACPI seem to all be in the x86 industry, with the exception of HP, I guess, which does some non-x86 stuff.

JawnV6
Jul 4, 2004

So hot ...
Apple would have been fools to not use little-endian when they jumped over to x86. Therefore all little-endian machines are x86.

Lots of triangles
Oct 7, 2002
I'm sick of the newbie avatar

Shaocaholica posted:

Well yeah but I figure if they were going to implement it they would contribute as well given their size. Just thought they might be rolling their own thing.

They can't really 'roll their own thing', as ACPI controls interoperability with any OS (OS X, Windows, etc). If Apple wants to continue supporting Boot Camp, their firmware needs to publish ACPI tables that work with MS's ACPI interpreters.

movax
Aug 30, 2008

Some (old) Intel test hardware snuck its way onto eBay apparently. Worth a look if you're curious about what their test fixtures look like.

I guess if Intel is like most companies, most of this type of hardware is either destroyed, shoved into storage somewhere, floated from cube to cube, or kept as a trophy on an office wall.

e: I forgot about ACPI chat; nowhere did I intend to say ACPI was x86 only

feld
Feb 11, 2008

Out of nowhere its.....

Feldman

Lots of triangles posted:

They can't really 'roll their own thing', as ACPI controls interoperability with any OS (OS X, Windows, etc). If Apple wants to continue supporting Boot Camp, their firmware needs to publish ACPI tables that work with MS's ACPI interpreters.

You're forgetting that ACPI is broken thanks to Microsoft

Ankit Chaturvedi posted:

So ACPI is a great specification, it lets you put your PC to sleep/suspend and save power and all. But then why does your linux box fail to recover from a suspend-resume cycle? The answer is again MS. ACPI standard was first implemented by Microsoft a long time ago, and due to the popularity of Windows hardware manufacturers chose to get their hardware passed by the Microsoft’s Hardware Compliance Tests. But herein lies the problem. Microsoft’s ACPI implementation differs slightly from the specification, and so came a lot of buggy BIOS’es that don’t fully follow the ACPI specs from Intel, which is what Linux implementation is based upon. So much for the history, now the question is, what does it affect on your system? For Linux developers, this provides a challenge as vendors are reluctant to change their BIOS to conform to Linux.

ACPI in Linux is implemented pretty straightforwardly. The ACPI specification says that a ACPI enabled OS must be able to understand AML, the ACPI Machine Language. For the linux-kernel to understand the ACPI Machine Language it needs an interpreter inside the kernel, which amounts to about 72000 lines of code alone and is part of the kernel since the 2.6 series.

movax
Aug 30, 2008

feld posted:

You're forgetting that ACPI is broken thanks to Microsoft

I guess you can "blame" Microsoft for enabling lazy companies to skirt around being "100% compliant" with ACPI and their BIOSes breaking on Linux. My company is the exact opposite; we test/debug our BIOS code against Linux 99% of the time, because that's all our hardware will run in a customer setting. Booting anything else is just icing on the cake.

Also, not really Intel CPU chat, but I've been under a rock regarding the renaming of their Ethernet controllers. Didn't realize that 82xxx was out of style and I350/etc. were in.

Longinus00
Dec 29, 2005
Ur-Quan

movax posted:

I guess you can "blame" Microsoft for enabling lazy companies to skirt around being "100% compliant" with ACPI and their BIOSes breaking on Linux. My company is the exact opposite; we test/debug our BIOS code against Linux 99% of the time, because that's all our hardware will run in a customer setting. Booting anything else is just icing on the cake.

Also, not really Intel CPU chat, but I've been under a rock regarding the renaming of their Ethernet controllers. Didn't realize that 82xxx was out of style and I350/etc. were in.

It doesn't help that statements like this are in the public record.

http://antitrust.slated.org/www.iowaconsumercase.org/011607/3000/PX03020.pdf

There's really no reason to do anything else once you get the magic WHQL certification for consumer/laptop boards anyway as the following exchange illustrates.

Foxconn posted:

Dear Ryan,

You are incorrect in that the motherboard is not ACPI complaint. If it were not, then it would not have received Microsoft Certification for WHQL.

Ryan posted:

I saw you targeting Linux with an intentionally broken ACPI table, you also have one for NT and ME, a separate one for newer NT variants like 2000, XP, Vista, and 2003/2008 Server, I'm sure that if you actually wrote to Intel ACPI specs instead of whatever quirks you can get away with for 8 versions of Windows and then go to the trouble of giving a botched table to Linux

http://ubuntuforums.org/showthread.php?t=869249

Longinus00 fucked around with this message at 20:39 on Jul 25, 2012

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
I've previously posted here and elsewhere about the abysmal power delivery quality on Gigabyte's Intel motherboards, which results in problematic swings in CPU core voltage that violate Intel's specifications. AnandTech has discovered that Gigabyte is falsifying the voltages reported to monitoring software to make their boards look better. Instead of the true values, the motherboard reports the voltage set in the BIOS, with some minor variations to make the reading seem real. This explains why amateurish reviewers started getting rock-solid voltage readings under load on their Gigabyte boards.

Countdown to Gigabyte acknowledging a "bug" and promising to fix it...
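The trick AnandTech describes can be sketched as a toy model, with made-up numbers: the "reported" value tracks the BIOS setpoint no matter what the load is doing, while the real rail droops under load.

```python
# Toy model of falsified voltage reporting: the reported reading is just the
# BIOS setpoint plus cosmetic noise, while the real rail sags with load.
# All numbers are invented for illustration.
import random

VSET = 1.200  # BIOS setpoint, volts

def real_vcore(load):
    """True rail voltage: droops as load (0.0-1.0) rises (toy loadline)."""
    return VSET - 0.080 * load

def reported_vcore(load):
    """What monitoring software sees: setpoint plus jitter, load ignored."""
    return VSET + random.uniform(-0.005, 0.005)

loads = [0.0, 0.5, 1.0]
print([round(real_vcore(l), 3) for l in loads])      # [1.2, 1.16, 1.12]
print([round(reported_vcore(l), 3) for l in loads])  # flat, ~1.2 throughout
```

The jitter is the tell AnandTech caught: "some minor variations to make the reading seem real" while the mean never moves off the setpoint.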

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Alereon posted:

I've previously posted here and elsewhere about the abysmal power delivery quality on Gigabyte's Intel motherboards, which results in problematic swings in CPU core voltage that violate Intel's specifications. AnandTech has discovered that Gigabyte is falsifying the voltages reported to monitoring software to make their boards look better. Instead of the true values, the motherboard reports the voltage set in the BIOS, with some minor variations to make the reading seem real. This explains why amateurish reviewers started getting rock-solid voltage readings under load on their Gigabyte boards.

Countdown to Gigabyte acknowledging a "bug" and promising to fix it...

Unfortunately, that's not exactly news, and nothing seems to be happening to change it. AnandTech first noted the behavior in May with their second Gigabyte board review. If you track the three Gigabyte reviews, comments go like this:
  1. Hey, that looks great!
  2. That looks suspiciously good. Must be manipulated by some middleware. Not unheard of, but hmm.
  3. We're not actually sure this has any use, but it's being manipulated anyway.

Henrik Zetterberg
Dec 7, 2007

movax posted:

Some (old) Intel test hardware snuck its way onto eBay apparently. Worth a look if you're curious about what their test fixtures look like.

I guess if Intel is like most companies, most of this type of hardware is either destroyed, shoved into storage somewhere, floated from cube to cube, or kept as a trophy on an office wall.

:lol:

I know exactly what this is. Hilarious.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Factory Factory posted:

Unfortunately, that's not exactly news, and nothing seems to be happening to change it. AnandTech first noted the behavior in May with their second Gigabyte board review. If you track the three Gigabyte reviews, comments go like this:
  1. Hey, that looks great!
  2. That looks suspiciously good. Must be manipulated by some middleware. Not unheard of, but hmm.
  3. We're not actually sure this has any use, but it's being manipulated anyway.

The final analysis seems to be "well, they're tricking overclockers into thinking they have more stable voltage than they really do, but it doesn't seem to make the product any worse, just, cough, a bit buggy."

That seems like a good board for IVB, even for IVB overclocking. You just have to know what's going on behind the outright misinformation it shows you and you can still overclock well on it.

But I'll tell you this, if I found out Asus was doing some similar bullshit I'd reevaluate their standing in my eyes just out of principle - and I already didn't intend to get a Gigabyte product, but this pretty much clinches it. The only thing this affects is the enthusiast's perception of a value that is correlated but not necessarily causally related to a particular overclock. They're putting out a product that pats the user on the head with some comforting bullshit and then works behind the scenes normally. That's a breach of trust.

Supradog
Sep 1, 2004

A POOOST!?!??! YEEAAAAHHHH
Has anyone here actually had a use for the DVI/HDMI/DP connectors on a Z77 motherboard?

Ika
Dec 30, 2004
Pure insanity

Running a third monitor @ work using those.

movax
Aug 30, 2008

Didn't see much talk here about Intel buying a stake in ASML, and the subsequent pseudo-whoring by ASML to get some more investment from competitors. I like the theory that Intel made their lives a little harder process-equipment-wise because they're in the lead with 22nm fabrication... but nobody else is (they're all in the midst of planning their 20nm shrinks, I assume), so they have to inject some cash into the tooling manufacturers to get the tools/tech they need.

Bing the Noize
Dec 21, 2008

by The Finn

Alereon posted:

I've previously posted here and elsewhere about the abysmal power delivery quality on Gigabyte's Intel motherboards, which results in problematic swings in CPU core voltage that violate Intel's specifications. AnandTech has discovered that Gigabyte is falsifying the voltages reported to monitoring software to make their boards look better. Instead of the true values, the motherboard reports the voltage set in the BIOS, with some minor variations to make the reading seem real. This explains why amateurish reviewers started getting rock-solid voltage readings under load on their Gigabyte boards.

Countdown to Gigabyte acknowledging a "bug" and promising to fix it...

Well gently caress.
Is this an issue restricted to Gigabyte Z77 boards/overclocking though?
(I have a Gigabyte H77 board and I will not be happy if it's giving my CPU weird voltage)

Bing the Noize fucked around with this message at 18:03 on Jul 28, 2012

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

ACID POLICE posted:

Well gently caress.
Is this an issue restricted to Gigabyte Z77 boards/overclocking though?
(I have a Gigabyte H77 board and I will not be happy if it's giving my CPU weird voltage)

No, it's been a consistent issue with their Intel boards since LGA-1156. If you make sure LoadLine Calibration/vDroop Compensation (I can't remember which Gigabyte calls it) is disabled, that will help a lot, since it magnifies the problem. LLC jacks the voltage up under load to compensate for droop; the key problem is that when the CPU load drops, this sends a brief high-voltage spike to the CPU, possibly causing it to crash or potentially even be damaged. If you're not overclocking and LLC is disabled it may well work fine, but there have been people posting in the Haus who just couldn't get their systems stable without switching motherboard brands or disabling power saving features.

Example using made-up numbers: Your set voltage is 1.35v, under maximum load the voltage droops to 1.25v. Using LLC, the set voltage would rise to 1.45v under load, so that it would be close to your set 1.35v after droop. The problem is that when the CPU exits load, not only does it get the full 1.45v that LLC was setting before it adapts to the load, there's also a brief moment of overshoot where it gets more than that, maybe 1.50v or so. Now, these spikes are short enough that the CPU probably won't be damaged, but it certainly may hang or crash.
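Those made-up numbers, written out as arithmetic (the overshoot figure is illustrative, as in the example above):

```python
# Alereon's made-up LLC example as arithmetic: LLC raises the setpoint to
# cancel droop, so the instant load disappears the CPU briefly sees the
# full compensated voltage plus a transient overshoot.

VSET      = 1.35   # what you asked for in the BIOS, volts
DROOP     = 0.10   # sag under maximum load
OVERSHOOT = 0.05   # brief transient when load is released (illustrative)

llc_setpoint = VSET + DROOP              # 1.45 V driven under load
v_under_load = llc_setpoint - DROOP      # ~1.35 V after droop, as intended
v_spike      = llc_setpoint + OVERSHOOT  # ~1.50 V briefly hits an idle CPU
```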

Bing the Noize
Dec 21, 2008

by The Finn

Alereon posted:

No, it's been a consistent issue with their Intel boards since LGA-1156. If you make sure LoadLine Calibration/vDroop Compensation (I can't remember which Gigabyte calls it) is disabled, that will help a lot, since it magnifies the problem. LLC jacks the voltage up under load to compensate for droop; the key problem is that when the CPU load drops, this sends a brief high-voltage spike to the CPU, possibly causing it to crash or potentially even be damaged. If you're not overclocking and LLC is disabled it may well work fine, but there have been people posting in the Haus who just couldn't get their systems stable without switching motherboard brands or disabling power saving features.

Example using made-up numbers: Your set voltage is 1.35v, under maximum load the voltage droops to 1.25v. Using LLC, the set voltage would rise to 1.45v under load, so that it would be close to your set 1.35v after droop. The problem is that when the CPU exits load, not only does it get the full 1.45v that LLC was setting before it adapts to the load, there's also a brief moment of overshoot where it gets more than that, maybe 1.50v or so. Now, these spikes are short enough that the CPU probably won't be damaged, but it certainly may hang or crash.

Thanks for the explanation, I'm going to do some testing and keep a closer watch on my board's power management.

Man, and I bought this board for stability :smithicide: (aside from the fact that it was one of the only boards that booted OS X sans DSDT)

Shaocaholica
Oct 29, 2002

Fig. 5E
The Haswell wiki page says this:

"DDR4 for the enterprise/server variant (Haswell-EX)"

Does that mean no DDR4 for consumer CPUs? If so, it seems odd that new tech would be introduced to the server/workstation market first.

Zhentar
Sep 28, 2003

Brilliant Master Genius

Shaocaholica posted:

If so, seems odd new tech would be introduced to the server/workstation market first.

It does? The server market has a lot of niche needs and demand for high performance, and is willing to pay high dollar for it.


Shaocaholica
Oct 29, 2002

Fig. 5E

Zhentar posted:

It does? The server market has a lot of niche needs and demand for high performance, and is willing to pay high dollar for it.

I thought that market was usually very conservative, only adopting new tech that's matured a bit in the consumer space? Mission-critical + new tech seems risky.
