Zhentar
Sep 28, 2003

Brilliant Master Genius

Factory Factory posted:

The two chips on the R-series LGA are not "CPU and eDRAM," they're "CPU-eDRAM MCP and PCH." They're SoCs.

That is a picture of "CPU-eDRAM MCP and PCH", yes. The PCH is the package on the left, and on the right you can see the CPU and eDRAM dies sharing the multi-chip package.

If you read the text of the Anandtech article, you'll notice it states exactly that. If you click through to their VR-Zone source, you'll notice they also posted a diagram matching the pictured package, with the larger die showing a quad core CPU with GPU, and the smaller die next to it labelled "L4".

Zhentar
Sep 28, 2003

Brilliant Master Genius

Shaocaholica posted:

Whats the worst that can happen? The PSU shuts down the whole machine if the CPU goes into low power idle?

As the power draw drops below 6W, the 12V rail will start creeping up to a higher voltage. If the PSU is well-behaved, the voltage can eventually creep too far out of spec, at which point the PSU will trigger an overvoltage protection shutdown.

Zhentar
Sep 28, 2003

Brilliant Master Genius

necrobobsledder posted:

Seems pretty ballsy given that they'll have to die harvest Haswell chips for something and if that's what they're doing for their mobile strategy, well that's pretty risky if you ask me.

They don't really have to, though. Intel likes getting good yields, and they are pretty good at it. You can't even harvest that many dual-core CPUs from Haswell anyway - the CPU cores are only a third of the total die area. They'd get a low single-digit percentage gain in yields out of it.
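A back-of-envelope version of that yield argument, using a simple Poisson defect model. The die area and defect density below are assumptions picked for illustration, not Intel's real numbers:

```python
import math

# Illustrative Poisson yield model -- the die area and defect density
# are assumed values for the sketch, not Intel's actual figures.
DIE_AREA_MM2 = 177.0      # rough quad-core Haswell GT2 die size (assumed)
DEFECT_DENSITY = 0.002    # defects per mm^2 (0.2 per cm^2, assumed)

def clean_probability(area_mm2, d0=DEFECT_DENSITY):
    """Probability a region of the die has zero random defects."""
    return math.exp(-d0 * area_mm2)

# Fully working quad-core dies:
full_yield = clean_probability(DIE_AREA_MM2)

# A die is salvageable as a dual core only if every defect lands in the
# two cores being disabled. Cores are ~1/3 of the die, so two of four
# cores cover ~1/6 of it; the remaining 5/6 must be defect-free.
salvage = clean_probability(DIE_AREA_MM2 * 5 / 6) - full_yield

print(f"quad-core yield:      {full_yield:.1%}")
print(f"extra dual-core dies: {salvage:.1%}")
```

With those assumed inputs the harvestable fraction comes out around 4%, consistent with "a low single-digit percentage gain."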


Ivy Bridge Pentiums weren't die harvested either. They had two dual core layouts (with GT1/GT2 graphics) that they used to supply the Pentium/Celeron lines.

Zhentar
Sep 28, 2003

Brilliant Master Genius

Shaocaholica posted:

Passed 200 hours of prime95 but crashes on Crysis right away? I've seen that case and never have I heard anyone blame the chipmaker or the developers.

More frequently, it's passed 1 hour of Prime95, and then 6 months later it crashes on Crysis. And then they blame the developers, because they've pretty much forgotten about that overclocking thing they did (and besides, it's stable!).

It does hurt Intel if using their newer, fancier features increases support costs because of overclocking, and makes developers more reluctant to actually use them (and, for that matter, it hurts overclockers if their overclock is limited by features they don't really need, or if their system is rendered unstable by things they can't stress test). I don't know how much any of that factored into Intel's decision to remove any of those features, but there are at least some good reasons to do so.

Zhentar
Sep 28, 2003

Brilliant Master Genius

Shaocaholica posted:

I always thought that the vast majority of overclockers consider a speed stable only if ALL aspects of the CPU are stable.

Even if this is true, the vast majority of overclockers do not have the knowledge, understanding, or tools to verify that all aspects of their CPU are stable. They run Prime95/OCCT/stress test of choice, testing some small portion of their CPU. If they're really a go-getter, they'll run the tool in several different modes, stress testing a little bit more of their CPU.

Have you ever run a VT-x stress test? I'm guessing not, since I don't think there even is one. How do you even know if your overclock is truly stable, then? You wouldn't run one for VT-d, for the same reason, and likewise, TSX. And with TSX, it's actually potentially a significant problem, because it will theoretically be used by everyday desktop applications, and since there's lots of interaction with memory systems, it's very likely on a critical path or two.

Zhentar
Sep 28, 2003

Brilliant Master Genius

JawnV6 posted:

With slashed idle power, TCO should be lower and the price reflects that value add. Not to mention FIVR reduces component count on the board, lowering the suppliers cost elsewhere.

From Anandtech's numbers, the reduced idle power should save me as much as $10/yr over a comparable Ivy Bridge. Maybe the motherboards will be appreciably cheaper, although the constant shift onto the CPU never seems to help much there.
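Sanity-checking that ~$10/yr figure; the wattage delta and electricity rate here are assumed round numbers, not Anandtech's exact measurements:

```python
# Back-of-envelope check on the ~$10/yr idle power savings claim.
idle_savings_w = 10          # Haswell vs. Ivy Bridge idle draw delta (assumed)
hours_per_year = 24 * 365    # machine left on 24/7
rate_per_kwh = 0.12          # USD per kWh, assumed residential rate

kwh_saved = idle_savings_w * hours_per_year / 1000
print(f"{kwh_saved:.0f} kWh/yr -> ${kwh_saved * rate_per_kwh:.2f}/yr")
```

That's roughly $10.50/yr, and only for a machine that never sleeps; realistic duty cycles shrink it further.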

Zhentar
Sep 28, 2003

Brilliant Master Genius

Sober posted:

so is there any reason to pick up Haswell over IB?

Haswell is better than IB, just by margins that are disappointing to people. If you're buying something right now, the only reason to buy an IB is if you can get it significantly cheaper than Haswell (which won't happen at retail prices, but could happen with a good inventory clearing sale).

Tab8715 posted:

I'm just hoping I can leapfrog from Sandy Bridge to Skylake... That would be such an ideal scenario and seems easy enough.

Haswell looks pretty tempting to me, and I'm on a Nehalem right now. Intel's been doing a steady 15-20% performance improvement each new architecture, which I think makes for a compelling upgrade after 3 of them.
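Compounding those per-generation gains shows why three architectures add up to a worthwhile jump:

```python
# 15-20% per architecture, compounded over three generations
# (e.g. Nehalem -> Sandy Bridge -> Ivy Bridge -> Haswell).
for per_gen in (0.15, 0.20):
    total = (1 + per_gen) ** 3 - 1
    print(f"{per_gen:.0%} per generation x3 -> {total:.0%} cumulative")
```

So three generations of 15-20% each lands somewhere around a 50-75% cumulative improvement.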

Zhentar
Sep 28, 2003

Brilliant Master Genius
But IB already had plenty, so that doesn't really matter.

Zhentar
Sep 28, 2003

Brilliant Master Genius

zeroprime posted:

They've got to be running up fast against a physical barrier to any continued process shrinkage.

They've been running into those and working around them for a long time.

Zhentar
Sep 28, 2003

Brilliant Master Genius
All the major players are planning on sticking to silicon through 7nm. 5nm could be a different story.

Zhentar
Sep 28, 2003

Brilliant Master Genius

davebo posted:

I can't help but think that my PC case is mostly empty and if cpu's were twice the size I wouldn't ever notice once I put it together.

Yeah, the physical space used up isn't the problem. The biggest limiting factor is the speed of light - in a 5GHz processor, each cycle lasts 0.2ns. In 0.2ns, light can travel a little over two inches. A signal in a copper wire travels quite a bit slower than that. Making fast chips becomes increasingly difficult as they get bigger, because it simply takes too long for a signal to get from one end of the chip to the other.
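The 5GHz numbers above, worked out (the on-wire signal speed of ~0.5c is an assumed round figure; real interconnect delay varies):

```python
# How far a signal can possibly travel in one 5 GHz clock cycle.
C = 299_792_458              # speed of light in vacuum, m/s
freq_hz = 5e9
cycle_s = 1 / freq_hz        # 0.2 ns per cycle

light_in = C * cycle_s / 0.0254    # meters -> inches
signal_in = light_in * 0.5         # assumed ~0.5c signal propagation on-chip
print(f"light: {light_in:.2f} in/cycle, copper signal: ~{signal_in:.1f} in/cycle")
```

And that's the best case for a straight wire; real on-chip paths also pay RC and gate delays, so the usable distance per cycle is far shorter still.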

Edit:

davebo posted:

weren't there chips on a completely different architecture that didn't generate heat to work? Or am I misremembering completely?

You are. The laws of thermodynamics are not compatible with such a thing.

Zhentar fucked around with this message at 21:17 on Jun 18, 2013

Zhentar
Sep 28, 2003

Brilliant Master Genius
I read something vague about TSMC using germanium for 5nm, and went looking for more reliable info... instead, I found this:

Some idiot in 2004 posted:

I don't think so! Frankly, I think that the commercial availabilty of processors and memory modules will never reach beyond the 90 manometer node.

Does anyone want to face the cold hard truth that even light waves themelves are not that small!

Zhentar
Sep 28, 2003

Brilliant Master Genius

Yeah, that is what I read. But I wanted something more reliable (and detailed); silicon-germanium alloys are old hat, so it seems more likely that this is some other take on silicon plus germanium than that they're cutting silicon out of the picture entirely.

Zhentar
Sep 28, 2003

Brilliant Master Genius

Jan posted:

It's mostly idle curiosity, because as a graphics developer I'd like to gently caress around and see how they handle eDRAM. Obviously it'd be in their best interest to handle it as transparently as possible in their drivers... I suppose they'd just hook up to D3D texture allocations, and try to cram all the render targets into eDRAM.

The eDRAM is just an L4 cache. It's not even set aside for graphics specifically; it's just another, bigger layer of caching for the CPU/GPU. It's transparent even to their drivers, because it's handled by the CPU itself.

Zhentar
Sep 28, 2003

Brilliant Master Genius

JawnV6 posted:

With the furious attention to power management, I'm surprised enthusiasts are still able to wring correct performance out of overclocks. Intel's very good at characterizing speed paths and binning. Overclocking is built on the assumption that they screwed up one of those. Or, as I suspect, the speed path that put a part into a lower bin isn't hit by the OC stress test.

Intel characterizes speed paths at a certain TDP. A good part of what overclockers get out of chips is just accepting an extra 50 or 100 watts of power dissipation.
