|
FISHMANPET posted:So I'm trying to figure out the difference between E5-2600 and E5-2400. Intel's ARK site can be useful here. Here's a comparison between the E5-2609, E5-2603, E5-2407 and E5-2403. The E5-2600s have two QPI links, support twice the memory and have one additional memory channel which yields more memory bandwidth. They're also a different socket type.
|
# ? Jul 4, 2012 21:41 |
|
When Intel releases a new architecture to the market (IVB), how long do they keep making the older architecture (SNB)?
Shaocaholica fucked around with this message at 06:54 on Jul 5, 2012 |
# ? Jul 5, 2012 05:49 |
|
Shaocaholica posted:When Intel releases a new architecture to the market (IVB), how long do they keep making the older architecture (SNB)? The consumer SKUs fall off relatively quickly, but they guarantee availability for products coming out of their embedded group, and I imagine the server SKUs may enjoy a little more longevity as well. For example, Intel told us we can buy embedded SKUs of, say, the i7-620 for at least the next eight years.
|
# ? Jul 5, 2012 14:48 |
|
movax posted:The consumer SKUs fall off relatively quickly, but they guarantee availability for products coming out of their embedded group, and I imagine the server SKUs may enjoy a little more longevity as well. For example, Intel told us we can buy embedded SKUs of, say, the i7-620 for at least the next eight years. And SNB-E is still the highest-end consumer hardware. As discussed earlier, IVB-E and the related Xeons won't be out for quite a while, so there are plenty of SNB-related processors still being sold. As for embedded, Intel will still sell you Pentium 3s if you want them.
|
# ? Jul 5, 2012 15:10 |
|
How does turbo boost handle small, infrequent CPU loads? For instance, image editing: you might move a slider for 0.5 seconds but you want to see realtime feedback. Or any other situation where there's something like 0.1-0.5 seconds of full load and then long gaps of idle. How much of the CPU's full potential can be realized in those brief moments if the CPU isn't already running at full speed? I know it should be able to switch really, really fast, but how fast? Is there logic there to prevent it from thrashing?
Shaocaholica fucked around with this message at 16:26 on Jul 5, 2012 |
# ? Jul 5, 2012 16:23 |
|
McGlockenshire posted:Intel's ARK site can be useful here. Yes; E5-2400s exist mainly to allow OEMs to reuse slightly-modified Westmere-EP motherboards and systems instead of designing new Socket R platforms. edit: Shaocaholica posted:How does turbo boost handle small infrequent CPU loads? For instance, image editing. You might move a slider for 0.5 seconds but you want to see realtime feedback. Or any other situation where there's something like 0.1-0.5 seconds of full load and then long gaps of idle. How much of the CPUs full potential can be realized in those brief moments if the CPU isn't running at full speed? I know it should be able to switch really really fast but how fast? Is there logic there to prevent it from thrashing? It scales up and down on the order of microseconds. in a well actually fucked around with this message at 16:40 on Jul 5, 2012 |
# ? Jul 5, 2012 16:35 |
|
theclaw posted:It scales up and down on the order of microseconds. According to this, the OS is actually in control of turbo boost: wikipedia posted:It is activated when the operating system requests the highest performance state of the processor. Has anyone done any synthetic tests in Windows/Linux/OS X to see how turbo works under different dynamic loads?
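One crude way to probe this yourself is a burst-load microbenchmark: time the same fixed chunk of work several times, idling between runs so the CPU can clock down, and see whether the first moments of a burst run slower. A minimal sketch in Python (wall-clock only, so it can't resolve the microsecond-scale ramp mentioned above; the workload size and idle interval are arbitrary values picked for illustration):

```python
import time

def time_burst(work_iters=200_000, bursts=5, idle_s=0.2):
    """Time a fixed busy-loop workload across several bursts, sleeping
    between them so the CPU can drop out of any boosted state. If turbo
    ramped slowly, early bursts would take measurably longer."""
    durations = []
    for _ in range(bursts):
        time.sleep(idle_s)          # let the CPU idle down between bursts
        t0 = time.perf_counter()
        acc = 0
        for i in range(work_iters): # fixed amount of integer work
            acc += i * i
        durations.append(time.perf_counter() - t0)
    return durations
```

In practice you'd want to pin the process to one core and repeat many times; on a CPU with a fast turbo ramp the per-burst times come out nearly identical.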
|
# ? Jul 5, 2012 19:17 |
|
AnandTech looked into p-state operation on the Bulldozer Opterons and found that AMD's architecture needed fiddling with the OS's power management for full performance, but also that Intel's Westmere (Nehalem die shrink) chips did not. In AnandTech's look at Lynnfield it went more in depth about how Turbo Boost works, including that while the OS governs p-states, it does so based on the I/O of a 486's worth of transistors that manage CPU power states. That overview also covers the basics of when Turbo will kick on and when it won't, i.e. that it's controlled by temperature and power draw, and the higher each of those is, the less turbo you will see under stock behavior. With variables like that, synthetic benchmarks are even more useless than they usually are.
|
# ? Jul 5, 2012 19:41 |
|
theclaw posted:Yes; E5-2400s exist mainly to allow OEMs to reuse slightly-modified Westmere-EP motherboards and systems instead of designing new Socket R platforms. Good enough for me, since the Dell systems using E5-2400s are 25% cheaper.
|
# ? Jul 5, 2012 19:41 |
|
Shaocaholica posted:According to this, the OS is actually in control of turbo boost: That's not actually what that means. What it really means is that turbo boost won't run when the OS has told SpeedStep to underclock the processor. The CPU itself is totally in control of whether or not it overclocks itself.
|
# ? Jul 5, 2012 20:17 |
|
hobbesmaster posted:And SNB-E is still the highest end consumer hardware. As discussed earlier IVB-E and related xeons won't be out for quite a while so there's plenty of SNB related processors still being sold.
|
# ? Jul 5, 2012 23:47 |
|
All the p-state stuff is well documented in the ACPI spec. It defines the communication between applications, the OS, and hardware regarding power states. It's basically the evolution of the MPS spec, APM, all that fun stuff. It's a great read if you're some kind of nerd.
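As a small concrete example of the kind of thing the spec pins down: every ACPI table carries a checksum byte (offset 9 of the standard table header) chosen so that all of the table's bytes sum to zero modulo 256. A sketch in Python:

```python
def acpi_checksum_ok(table: bytes) -> bool:
    """Per the ACPI spec, every byte of a table (checksum included)
    must sum to zero modulo 256."""
    return sum(table) % 256 == 0

def fix_checksum(table: bytearray, offset: int = 9) -> bytearray:
    """Recompute the header checksum byte (offset 9 in the standard
    ACPI table header) so the whole table sums to zero."""
    table[offset] = 0
    table[offset] = (-sum(table)) % 256
    return table
```

This is the check an OS's ACPI interpreter runs before trusting a table handed to it by firmware.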
|
# ? Jul 6, 2012 05:21 |
|
JawnV6 posted:All the p-state stuff is well documented in the ACPI spec. It defines the communication between applications, the OS, and hardware regarding power states. It's basically the evolution of the MPS spec, APM, all that fun stuff. It's a great read if you're some kind of nerd. I unfortunately have experience with ASL now; I will die forever alone and unloved, like a true super-nerd. Some program manager somewhere decided we should support ACPI-mediated PCIe hot-plug as well.
|
# ? Jul 6, 2012 06:12 |
|
JawnV6 posted:All the p-state stuff is well documented in the ACPI spec. It defines the communication between applications, the OS, and hardware regarding power states. It's basically the evolution of the MPS spec, APM, all that fun stuff. It's a great read if you're some kind of nerd. Do Macs use ACPI? I ask because Apple isn't on the list of companies that contribute to it.
|
# ? Jul 6, 2012 08:18 |
|
movax posted:I unfortunately have experience with ASL now, I will die forever alone and unloved, like a true super-nerd. Some program manager somewhere decided we should support ACPI-mediated PCIe hot-plug as well Shaocaholica posted:Do Macs use ACPI? I ask because Apple isn't on the list of companies that contribute to it. You can implement a spec without contributing to its definition? If you've heard Mac people talking about DSDT/SSDT, those are both part of ACPI.
|
# ? Jul 6, 2012 19:29 |
|
JawnV6 posted:You can implement a spec without contributing to its definition? If you've heard Mac people talking about DSDT/SSDT, those are both part of ACPI. Well yeah, but I figure if they were going to implement it they would contribute as well, given their size. Just thought they might be rolling their own thing.
|
# ? Jul 6, 2012 19:45 |
|
Shaocaholica posted:Well yeah but I figure if they were going to implement it they would contribute as well given their size. Just thought they might be rolling their own thing. ACPI is a pretty complex monster (understatement), Apple would have been fools to not use ACPI when they jumped over to x86. There's a goon here who could comment much more on that if he feels like revealing himself. JawnV6 posted:I've read... large parts of the spec. Also written MPS tables, etc. The few parts of Aptio that still involve assembly also happen to be the ones responsible for ACPI table generation/MPS generation/etc. I picture a dark cave-like room in Norcross filled with skinny Chinese and Indian dudes cranking this poo poo out.
|
# ? Jul 6, 2012 20:27 |
|
So Apple doesn't contribute to ACPI because of how new they are to x86? Or is it a closed club? It would seem that Apple would be one of the most vocal on new features.
|
# ? Jul 6, 2012 21:05 |
|
Apple controls their hardware. If Apple wants a feature, they tell their suppliers to make it, and it gets done, whether it's a ratified part of a standard or not (see also: mSATA).
|
# ? Jul 6, 2012 21:23 |
|
Shaocaholica posted:So Apple doesn't contribute to ACPI because of how new they are to x86? Why do you think ACPI is x86-only? You're also vastly misunderstanding the depth of information that ACPI can provide. The extensions have mostly been to support entirely new interfaces like PCIe cards.
|
# ? Jul 6, 2012 22:44 |
|
JawnV6 posted:Why do you think ACPI is x86-only? Because of this: movax posted:ACPI is a pretty complex monster (understatement), Apple would have been fools to not use ACPI when they jumped over to x86. Also, the contributors to ACPI seem to all be in the x86 industry, with the exception of HP, I guess, which does some non-x86 stuff.
|
# ? Jul 6, 2012 23:08 |
|
Apple would have been fools to not use little-endian when they jumped over to x86. Therefore all little-endian machines are x86.
|
# ? Jul 9, 2012 21:49 |
|
Shaocaholica posted:Well yeah but I figure if they were going to implement it they would contribute as well given their size. Just thought they might be rolling their own thing. They can't really 'roll their own thing', as ACPI controls interoperability with any OS (OS X, Windows, etc). If Apple wants to continue supporting Boot Camp, their firmware needs to publish ACPI tables that work with MS's ACPI interpreters.
|
# ? Jul 10, 2012 05:49 |
|
Some (old) Intel test hardware snuck its way onto eBay apparently. Worth a look if you're curious about what their test fixtures look like. I guess if Intel is like most companies, most of this type of hardware is either destroyed, shoved into storage somewhere, floated from cube to cube, or kept as a trophy on an office wall. e: I forgot about ACPI chat; nowhere did I intend to say ACPI was x86-only
|
# ? Jul 25, 2012 15:00 |
|
Lots of triangles posted:They can't really 'roll their own thing', as ACPI controls interoperability with any OS (OS X, Windows, etc). If Apple wants to continue supporting Boot Camp, their firmware needs to publish ACPI tables that work with MS's ACPI interpreters. You're forgetting that ACPI is broken thanks to Microsoft Ankit Chaturvedi posted:So ACPI is a great specification, it lets you put your PC to sleep/suspend and save power and all. But then why does your linux box fail to recover from a suspend-resume cycle? The answer is again MS. ACPI standard was first implemented by Microsoft a long time ago, and due to the popularity of Windows hardware manufacturers chose to get their hardware passed by the Microsoft’s Hardware Compliance Tests. But herein lies the problem. Microsoft’s ACPI implementation differs slightly from the specification, and so came a lot of buggy BIOS’es that don’t fully follow the ACPI specs from Intel, which is what Linux implementation is based upon. So much for the history, now the question is, what does it affect on your system? For Linux developers, this provides a challenge as vendors are reluctant to change their BIOS to conform to Linux.
|
# ? Jul 25, 2012 18:19 |
|
feld posted:You're forgetting that ACPI is broken thanks to Microsoft I guess you can "blame" Microsoft for enabling lazy companies to skirt around being "100% compliant" with ACPI and their BIOSes breaking on Linux. Personally, our company is the exact opposite; we test/debug our BIOS code against Linux 99% of the time because that's all our hardware will run in a customer setting. Booting anything else is just icing on the cake. Also, not really Intel CPU chat, but I've been living under a rock regarding the renaming of their Ethernet controllers. Didn't realize that 82xxx was out of style and I350/etc. were in.
|
# ? Jul 25, 2012 20:25 |
|
movax posted:I guess you can "blame" Microsoft for enabling lazy companies to skirt around being "100% compliant" with ACPI and their BIOSes breaking on Linux. Personally, our company is the exact opposite; we test/debug our BIOS code against Linux 99% of the time because that's all our hardware will run in a customer setting. Booting anything else is just icing on the cake. It doesn't help that statements like this are in the public record. http://antitrust.slated.org/www.iowaconsumercase.org/011607/3000/PX03020.pdf There's really no reason to do anything else once you get the magic WHQL certification for consumer/laptop boards anyway as the following exchange illustrates. Foxconn posted:Dear Ryan, Ryan posted:I saw you targeting Linux with an intentionally broken ACPI table, you also have one for NT and ME, a separate one for newer NT variants like 2000, XP, Vista, and 2003/2008 Server, I'm sure that if you actually wrote to Intel ACPI specs instead of whatever quirks you can get away with for 8 versions of Windows and then go to the trouble of giving a botched table to Linux http://ubuntuforums.org/showthread.php?t=869249 Longinus00 fucked around with this message at 20:39 on Jul 25, 2012 |
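The mechanism behind that exchange is ACPI's `_OSI` method: firmware asks the OS which interface strings it supports and can branch on the answers. A toy Python model of the anti-pattern being described (the table names are invented for illustration; `"Windows 2006"`, `"Windows 2001"`, and `"Linux"` are real `_OSI` strings, though modern Linux kernels deliberately answer false to `"Linux"` for exactly this reason):

```python
# Hypothetical table variants keyed by _OSI answers; names are made up.
VARIANTS = [
    ("Windows 2006", "vista_table"),
    ("Windows 2001", "xp_table"),
    ("Linux",        "linux_table"),  # the "botched" one in the post above
]

def pick_table_variant(os_supports, variants=VARIANTS, default="generic_table"):
    """First _OSI string the OS claims to support wins, mirroring how
    DSDT bytecode typically chains If(_OSI(...)) checks in order."""
    for osi_string, variant in variants:
        if os_supports(osi_string):
            return variant
    return default
```

Since every shipping Windows answers true to its own strings, only the Windows paths ever get tested by the vendor.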
# ? Jul 25, 2012 20:37 |
|
I've previously posted here and elsewhere about the abysmal power delivery quality on Gigabyte's Intel motherboards, which result in problematic swings in CPU core voltage that violate Intel's specifications. Anandtech has discovered that Gigabyte is falsifying the voltages reported to monitoring software to make their boards look better. Instead of the true values, the motherboard reports the voltage set in the BIOS, with some minor variations to make the reading seem real. This explains why amateurish reviewers started getting rock solid voltage readings under load on their Gigabyte boards. Countdown to Gigabyte acknowledging a "bug" and promising to fix it...
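A reviewer could have caught this with a trivial sanity check: log Vcore while the load varies and flag any series whose spread is implausibly small. A hypothetical heuristic (the 5 mV threshold is invented for illustration, not a measured figure):

```python
def looks_too_clean(vcore_readings, min_spread_mv=5.0):
    """Flag a Vcore log whose total spread is implausibly small for a
    varying load. The threshold is illustrative, not a real spec limit."""
    spread_mv = (max(vcore_readings) - min(vcore_readings)) * 1000.0
    return spread_mv < min_spread_mv
```

Real telemetry under a changing load should droop and recover; a sensor that only ever wiggles a millivolt around the BIOS set point is reporting the set point, not the rail.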
|
# ? Jul 26, 2012 01:36 |
|
Alereon posted:I've previously posted here and elsewhere about the abysmal power delivery quality on Gigabyte's Intel motherboards, which result in problematic swings in CPU core voltage that violate Intel's specifications. Anandtech has discovered that Gigabyte is falsifying the voltages reported to monitoring software to make their boards look better. Instead of the true values, the motherboard reports the voltage set in the BIOS, with some minor variations to make the reading seem real. This explains why amateurish reviewers started getting rock solid voltage readings under load on their Gigabyte boards. Unfortunately, that's not exactly news, and nothing seems to be happening to change it. AnandTech first noted the behavior in May with their second Gigabyte board review. If you track the three Gigabyte reviews, comments go like this:
|
# ? Jul 26, 2012 01:55 |
|
movax posted:Some (old) Intel test hardware snuck its way onto eBay apparently. Worth a look if you're curious about what their test fixtures look like. I know exactly what this is. Hilarious.
|
# ? Jul 26, 2012 02:47 |
|
Factory Factory posted:Unfortunately, that's not exactly news, and nothing seems to be happening to change it. AnandTech first noted the behavior in May with their second Gigabyte board review. If you track the three Gigabyte reviews, comments go like this: The final analysis seems to be "well, they're tricking overclockers into thinking they have more stable voltage than they really do, but it doesn't seem to make the product any worse, just, cough, a bit buggy." That seems like a good board for IVB, even for IVB overclocking. You just have to know what's going on behind the outright misinformation it shows you and you can still overclock well on it. But I'll tell you this, if I found out Asus was doing some similar bullshit I'd reevaluate their standing in my eyes just out of principle - and I already didn't intend to get a Gigabyte product, but this pretty much clinches it. The only thing this affects is the enthusiast's perception of a value that is correlated but not necessarily causally related to a particular overclock. They're putting out a product that pats the user on the head with some comforting bullshit and then works behind the scenes normally. That's a breach of trust.
|
# ? Jul 26, 2012 05:45 |
|
Has anyone here actually had a use for the DVI/HDMI/DP outputs on a Z77 motherboard?
|
# ? Jul 26, 2012 07:57 |
|
Running a third monitor @ work using those.
|
# ? Jul 27, 2012 11:44 |
|
Didn't see much talk here about Intel buying a stake in ASML, and the subsequent pseudo-whoring by ASML to get some more investment from competitors. I like the theory that Intel made their lives a little harder process-equipment-wise because they're in the lead with 22nm fabrication...but nobody else is (they're all in the midst of planning their 20nm shrinks, I assume), so they have to inject some cash into the tooling manufacturers to get the tools/tech they need.
|
# ? Jul 27, 2012 22:40 |
|
Alereon posted:I've previously posted here and elsewhere about the abysmal power delivery quality on Gigabyte's Intel motherboards, which result in problematic swings in CPU core voltage that violate Intel's specifications. Anandtech has discovered that Gigabyte is falsifying the voltages reported to monitoring software to make their boards look better. Instead of the true values, the motherboard reports the voltage set in the BIOS, with some minor variations to make the reading seem real. This explains why amateurish reviewers started getting rock solid voltage readings under load on their Gigabyte boards. Well gently caress. Is this an issue restricted to Gigabyte Z77 boards/overclocking though? (I have a Gigabyte H77 board and I will not be happy if it's giving my CPU weird voltage) Bing the Noize fucked around with this message at 18:03 on Jul 28, 2012 |
# ? Jul 28, 2012 17:59 |
|
ACID POLICE posted:Well gently caress. No, it's been a consistent issue with their Intel boards since LGA-1156. If you make sure LoadLine Calibration/vDroop Compensation (I can't remember which Gigabyte calls it) are disabled that will help a lot, since that magnifies the problem. Example using made-up numbers: Your set voltage is 1.35v, under maximum load the voltage droops to 1.25v. Using LLC, the set voltage would rise to 1.45v under load, so that it would be close to your set 1.35v after droop. The problem is that when the CPU exits load, not only does it get the full 1.45v that LLC was setting before it adapts to the load, there's also a brief moment of overshoot where it gets more than that, maybe 1.50v or so. Now, these spikes are short enough that the CPU probably won't be damaged, but it certainly may hang or crash.
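The made-up numbers in that explanation can be turned into a tiny model (same illustrative values as the post, not measurements of any real board):

```python
def llc_model(v_set=1.35, droop=0.10, overshoot=0.05):
    """Replay the post's made-up numbers. Without LLC the CPU sees
    v_set minus droop under load; with LLC the VRM raises its target
    by the droop amount, so on load release the CPU briefly sees that
    raised target plus an overshoot transient."""
    no_llc_load = v_set - droop           # 1.25 V under load, no LLC
    llc_target  = v_set + droop           # 1.45 V effective set point
    llc_load    = llc_target - droop      # back near 1.35 V under load
    llc_spike   = llc_target + overshoot  # ~1.50 V transient on release
    return no_llc_load, llc_load, llc_spike
```

The point of the model: LLC fixes the steady-state number you see in monitoring software at the cost of a worse transient, which is exactly the spike the post is warning about.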
|
# ? Jul 28, 2012 19:04 |
|
Alereon posted:No, it's been a consistent issue with their Intel boards since LGA-1156. If you make sure LoadLine Calibration/vDroop Compensation (I can't remember which Gigabyte calls it) are disabled that will help a lot, since that magnifies the problem. LLC jacks the voltage up under load to compensate for droop, the key problem is that when the CPU load drops, this sends a brief high voltage spike to the CPU, possibly causing it to crash or potentially even be damaged. If you're not overclocking and LLC is disabled it may well work fine, but there have been people posting in the Haus that just couldn't get their systems stable without switching motherboard brands or disabling power saving features. Thanks for the explanation, I'm going to do some testing and keep a closer watch on my board's power management. Man and I bought this board for stability (aside from it was one of the only boards that booted OS X sans DSDT)
|
# ? Jul 28, 2012 19:16 |
|
The Haswell wiki page says this: "DDR4 for the enterprise/server variant (Haswell-EX)" Does that mean no DDR4 for consumer CPUs? If so, it seems odd that new tech would be introduced to the server/workstation market first.
|
# ? Aug 2, 2012 19:06 |
|
Shaocaholica posted:If so, seems odd new tech would be introduced to the server/workstation market first. It does? The server market has a lot of niche needs and demand for high performance, and is willing to pay high dollar for it.
|
# ? Aug 2, 2012 19:25 |
|
Zhentar posted:It does? The server market has a lot of niche needs and demand for high performance, and is willing to pay high dollar for it. I thought that market was usually very conservative and only adopted new tech that's matured a bit in the consumer space? Mission critical + new tech seems risky.
|
# ? Aug 2, 2012 19:33 |