|
I agree that this is probably not as big of a deal as most people are making it out to be, but it feels like the endcap on the desktop era. I remember when I was TA'ing a computer science course and was blown away that none of the freshmen knew what the internet was like pre-Google. In another couple of years, I'd bet we'll get the first crop of students who have never owned a desktop.
|
# ? Nov 29, 2012 06:39 |
|
|
Chuu posted:I agree that this is probably not as big of a deal as most people are making, but it feels like the endcap on the desktop era. We're entering an era where all the new engineering students just have smartphones and tablets and have never done any actual work or tinkering on their own. In my day we didn't have app stores, and if you wanted something done you'd just write a drat program yourself to do it. WhyteRyce fucked around with this message at 08:00 on Nov 29, 2012 |
# ? Nov 29, 2012 07:57 |
|
I still don't get how BGA is that much better than LGA. I'm sure there are minor improvements in signaling, but is it so much that it's worth it?
|
# ? Nov 29, 2012 16:33 |
|
Combat Pretzel posted:I still don't get how BGA is that much better than LGA? I'm sure there's minor improvements in signalling, but is it so much it's worth it? It's probably much more of a packaging and cost issue than a performance issue.
|
# ? Nov 29, 2012 16:35 |
|
I'll tell you what though, eliminating validation on a whole package saves approximately 1 million headaches for me. Bye LGA!
|
# ? Nov 29, 2012 17:48 |
|
WhyteRyce posted:We're entering an era where all the new engineering students just have smart phones and tablets and never have done any actual work or tinkering around on their own The boards I learned on seem hilariously quaint by comparison. Parallel port JTAG, a separate board for USB, Windows MFC programming... so painful. Henrik Zetterberg posted:I'll tell you what though, eliminating validation on a whole package saves approximately 1 million headaches for me. Bye LGA!
|
# ? Nov 29, 2012 18:59 |
|
You guys are missing another great upside to this whole deal. Instead of making you buy an i3/i5/i7, Intel is now in a better position than ever to make that software-controllable. This gives you the upgrade path you want without having to buy a whole new mobo/CPU combo. Hell, if you have an OS that supports CPU hotplug you might even be able to do this online.
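For what it's worth, the OS side of this already exists: Linux exposes CPU hotplug through sysfs. A minimal sketch of listing which logical CPUs the kernel currently has online (the `base` parameter is just there so the function can be pointed at a fake tree; the real path is `/sys/devices/system/cpu`):

```python
from pathlib import Path

def online_cpus(base="/sys/devices/system/cpu"):
    """Return names of logical CPUs the kernel has online.

    cpu0 often has no 'online' file because many kernels don't
    allow unplugging the boot CPU; treat that case as online.
    """
    cpus = []
    for p in sorted(Path(base).glob("cpu[0-9]*")):
        flag = p / "online"
        if not flag.exists() or flag.read_text().strip() == "1":
            cpus.append(p.name)
    return cpus
```

Offlining a core is just writing `0` to `cpuN/online` as root; the hypothetical part of the upgrade scheme would be Intel unlocking extra cores in the first place.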
|
# ? Nov 29, 2012 20:22 |
|
JawnV6 posted:FPGA's with millions of gates That just makes students soft
|
# ? Nov 29, 2012 20:27 |
|
Longinus00 posted:You guys are missing another great upside to this whole deal. Instead of buying i3/i5/i7, intel is now in a better position than ever to make that software controllable. This gives you the upgrade path that you want without having to buy a whole new mobo/cpu combo.
|
# ? Nov 29, 2012 20:32 |
|
Andy Bryant Will Now Lead Intel Into The Foundry Era This may be too biz-strategy-focused for this thread, but the article is pretty interesting. It's one of the more believable "future of Intel" pieces I've read. The author makes a pretty good case for Intel ditching Atom and mobile chips entirely, keeping its profitable client and datacenter businesses, and opening its leading-edge fabs as foundries making chips for its current competitors. It makes sense in a lot of ways: almost no semiconductor designers operate their own fabs anymore, and the accelerating investment required to keep up with fab tech is leaving even some pure foundry companies in the dust. Intel will probably need to either make piles of money off a big win in the mobile space (which few are optimistic about) or exit mobile entirely and open up as a foundry for those former competitors. Right now it's in that awkward in-between.
|
# ? Nov 29, 2012 21:56 |
|
WhyteRyce posted:We're entering an era where all the new engineering students just have smart phones and tablets and never have done any actual work or tinkering around on their own "Do you pine for the nice days of minix-1.1, when men were men and wrote their own device drivers?" (Torvalds). And this is what happens when you have essentially a monopoly on a product. It may not seem like a big deal at first for most people, but I believe it will hurt in the long term by leaving fewer options when you're looking to buy a new machine. On the other hand, it'll take some time to get there, and lots of things can happen in the meantime.
|
# ? Nov 30, 2012 06:16 |
|
AnandTech's Podcast 12 talked about the Broadwell BGA rumor for almost an hour. If they have any concrete information, they were NDA'd. The discussion largely followed what it's been here, except for two points we hadn't thought of:

1) Broadwell, being a process shrink, may be a redo of Westmere in terms of impact on the desktop. Westmere produced some dual-core mainstream parts where Lynnfield and Nehalem had been just quad-core high-end parts, and the super-high-end server-type/six-core parts came late in the game. The big deal with Westmere was introducing the HD Graphics hardware and making nice laptop parts. Similarly, while Broadwell may be BGA-only, that would be because it will be a mobile-focused release; Skylake after it may return to LGA-based desktop parts. Intel would basically be saying "Haswell is good enough to sell desktops for 24-28 months, not just 12-14." And this would be a prime time for AMD to get back in the game, if it ever will.

2) BGA packaging may tie into Intel's Larrabee/Xeon Phi dream, where down the line you slot your main compute into a backbone board the way you plug a GPU into a motherboard today. The desktop of 2020 or so would involve a heterogeneous mix of a heavy-duty x86 compute card, a Xeon Phi-like lightweight x86 card, and the GPU, with no component being as "central" as a CPU is today.
|
# ? Dec 3, 2012 20:52 |
|
Factory Factory posted:
Maybe I'm not thinking radically enough, but I don't see how this would really be any better for any consumer or consumer PC maker.
|
# ? Dec 3, 2012 22:19 |
|
Well, Intel's basically running a "consumers get the enterprise's cast-offs and/or gizmo chips" business these days. They're just getting more explicit about it.
|
# ? Dec 3, 2012 22:26 |
|
In 2020 I see handheld devices being entry-level computers, unless there's some huge leap in what people will want to do on their computers in the next 8 years. I don't see anybody trying to push multiple components like that except in high-end computers for gaming and idiot workplaces that think their "current" version of Office 2013 needs $3000 computers.

For this to happen, handheld devices will need programs that are not cut-down versions of their desktop counterparts, multi-monitor support, multiple-window support, and a UI that supports both touch and keyboard+mouse controls. Businesses will have a harder time, considering the amount of legacy software they have to deal with that nobody wants to replace because of cost or because a worthless user refuses to change.

VMs can help with this, although more work needs to be done to make them seamless. If you run shitty_legacy_app from a VM, it should just start up without the user knowing they're running in a VM. As a bonus, don't require IT to set the application up to do it.
|
# ? Dec 4, 2012 03:29 |
|
Can someone confirm that this is how the IVB memory controller works? I came across this diagram here: http://www.xbitlabs.com/articles/memory/display/ivy-bridge-ddr3_2.html
|
# ? Dec 4, 2012 06:29 |
|
Shaocaholica posted:Can someone confirm if this is how IVB memory controller works: Looks like something out of the EDS, and I don't know if that's really supposed to be publicly confirmed/shared or not.
|
# ? Dec 4, 2012 07:42 |
|
But more generally, yes, IVB (and SNB) will interleave on asymmetric DIMMs. If you pair a 2GB DIMM and an 8GB DIMM, the first 4GB of RAM is interleaved (dual-channel), and the remaining 6GB on the 8GB DIMM operates as single-channel.
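The split described above (Intel brands it Flex Memory) is easy to model. A minimal sketch, assuming just two DIMMs, one per channel:

```python
def flex_memory_split(dimm_a_gb, dimm_b_gb):
    """Model Flex Memory interleaving on asymmetric DIMMs:
    a region equal to twice the smaller DIMM is interleaved
    across both channels (dual-channel); the remainder of
    the larger DIMM runs single-channel."""
    dual = 2 * min(dimm_a_gb, dimm_b_gb)
    single = abs(dimm_a_gb - dimm_b_gb)
    return dual, single

# The 2GB + 8GB pairing from the post: 4GB dual-channel, 6GB single-channel.
print(flex_memory_split(2, 8))  # (4, 6)
```

With matched DIMMs the single-channel remainder is zero, which is the ordinary symmetric dual-channel case.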
|
# ? Dec 4, 2012 07:47 |
|
Cool, thanks. How far back do Intel memory controllers support interleaving on asymmetric DIMMs? It's not a recent thing, is it?
|
# ? Dec 4, 2012 08:36 |
|
Well...that explains 6GB memory setups on laptops.
|
# ? Dec 4, 2012 09:07 |
|
Crackbone posted:Maybe I'm not thinking radically enough, but I don't see how this would really be any better for any consumer or consumer PC maker. Ever since I read about the Xeon Phi, I'd hoped something like that would make it to the consumer market, simply because it would mean I could put together a computer in like 5 minutes.
|
# ? Dec 4, 2012 11:07 |
|
Shaocaholica posted:Cool, thanks. How far back have Intel memory controllers supported interleaving on asymmetric DIMMs? Its not a recent thing is it? I think all the IMCs (so, since Core iX) can do it; I don't know about older NBs/MCHs.
|
# ? Dec 4, 2012 17:16 |
|
Well this seems pretty good: http://www.maximumpc.com/article/news/intel_says_company_committed_sockets2012 I guess start the FUD on how long "the foreseeable future" is and exactly which "socketed parts in the LGA package" they intend to keep selling.
|
# ? Dec 8, 2012 06:43 |
|
Nostrum posted:Well this seems pretty good: Probably all it took was a few employees browsing rumor-article comments and AMD's response to realize it'd be a PR nightmare, and a competitive opportunity they don't necessarily have to give away.
|
# ? Dec 8, 2012 08:15 |
|
So Intel used to have some ULV mobile processors, like the U7700, with a TDP of 10W. Whatever happened to that class of CPU? It seems like the lowest-power IVB is 17W, which is considerably more than the 10W of the older 65nm C2Ds. What gives? The embedded GPU?
|
# ? Dec 12, 2012 07:00 |
|
Shaocaholica posted:So Intel used to have some ULV mobile procs like the U7700 which had a TDP of 10W. What ever happened to that class of CPU? Seems like the lowest power IVB is 17W which is considerably more than 10W of older 65nm C2Ds. What gives? The embedded GPU? Yeah, I think the overall power usage of a system using that IVB part is lower than a comparable system with a 10W mobile C2D plus a separate integrated GPU and so on back in the day.
|
# ? Dec 12, 2012 07:05 |
|
Yeah, chipset and iGPU were a big deal, power-wise, in the C2D days. The original Atoms had ~2.5W TDPs for the CPUs, but the chipset (Intel 945GC, the latest revision of the C2D-era chipset) was a good 22.2W. A 17W Ivy Bridge ULV is matched with a 4.1W or 3.7W HM77 PCH, for a bit less power overall. Plus, I think Intel is starting to put 10W and 13W IVB ULVs on the market anyway. Configurable TDP was always planned as a feature for IVB, the idea being you could have different levels of Turbo Boost performance based on battery/plug, workload, user preference, etc. Right now, the rumor mill is suggesting the following:

Speaking of low-power, Intel just beat ARM to market on many-tiny-cores microserver parts with (presumably an all-new) Atom, codename Centerton. It's a Saltwell-core Atom with ECC, VT-x, 8 PCIe lanes, and support for up to 8 GB of ECC RAM per dual-core SoC. The idea behind microservers is that thread-heavy, compute-light applications show much better performance per watt on a ton of wimpy cores than on fewer beefy cores - stuff like content delivery, simple search, and other stuff I'm not gonna pretend to understand.

Minor roadmap update, too: Xeon E3 v3 (Haswell) is being promised in 2013 (not a huge shock, but nice to know), and Atom is going to be synced on process starting then, too, first with the 22nm Avoton, and then with 2014 bringing 14nm Broadwell and a "next-gen" Atom in the same year. Avoton will probably be based on the redesigned Silvermont Atom core, and there's a good chance it will beat ARMv8 to market.
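The platform-level arithmetic behind "a bit less power overall" is easy to sanity-check using only the TDP figures quoted above:

```python
# Platform TDPs (watts) from the figures in the post above.
atom_platform = 2.5 + 22.2     # original Atom CPU + Intel 945GC chipset
ivb_ulv_platform = 17.0 + 4.1  # 17W Ivy Bridge ULV + HM77 PCH (worst case)

# The IVB ULV platform comes in lower despite the far beefier CPU,
# because the PCH costs ~4W instead of the 945GC's ~22W.
print(atom_platform, ivb_ulv_platform)  # 24.7 21.1
```

These are rated TDPs, not measured power draw, so treat the comparison as a rough bound rather than battery-life math.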
|
# ? Dec 12, 2012 09:47 |
|
So Atom is not dead? Dead in the consumer space though?
|
# ? Dec 12, 2012 23:10 |
|
There are still Atom devices out there. Right now they're mostly going into entry-level Win8 Pro tablets (Clover Trail), but there are also Android phones (Medfield) and $200 netbooks (Cedarview). Future ULV Haswell will hit the same TDPs as current nettop Atoms, though, and probably at a much higher performance point.
|
# ? Dec 12, 2012 23:17 |
|
Shaocaholica posted:So Atom is not dead? Dead in the consumer space though?
|
# ? Dec 13, 2012 00:30 |
|
Factory Factory posted:Yeah, chipset and iGPU were a big deal, power-wise, in the C2D days. The original Atoms had ~2.5W TDPs for the CPUs, but the chipset (Intel 945GC, latest revision of the C2D era chipset) was a good 22.2W. A 17W Ivy Bridge ULV is matched with a 4.1W or 3.7W HM77 PCH, for a bit less power overall. I was pretty excited when I saw the announcement since it seemed like a no-brainer to get an ITX version of this out there tailored for FreeNAS that could be loaded up with ECC Memory. Then I saw the 8GB limit. Seriously Intel?
|
# ? Dec 13, 2012 06:04 |
|
Chuu posted:I was pretty excited when I saw the announcement since it seemed like a no-brainer to get an ITX version of this out there tailored for FreeNAS that could be loaded up with ECC Memory. Then I saw the 8GB limit. Seriously Intel? Just imagine Paul Otellini jumping around on stage, sweating and yelling "MARKET SEGMENTATION! MARKET SEGMENTATION! MARKET SEGMENTATION!" and you've pretty well got the reason for it.
|
# ? Dec 13, 2012 16:34 |
|
Chuu posted:I was pretty excited when I saw the announcement since it seemed like a no-brainer to get an ITX version of this out there tailored for FreeNAS that could be loaded up with ECC Memory. Then I saw the 8GB limit. Seriously Intel? The -K versions have always lacked VT-d as well, which sucks and is pure market segmentation too. At least you still get VT-x on them, but I think Microsoft would get very annoyed if Intel started segmenting that away. In other news, the 2600K and its friends are getting discontinued in March (not that this wasn't expected, just more of an FYI), and model numbers for Haswell chips were leaked; no real surprises there either. The 4670K will take over as the recommended CPU for the most part, with the 4770K coming in at the top for now. New graphics core as well, so Haswell laptops that don't sport a discrete GPU (ThinkPad T440) are looking better and better.
|
# ? Dec 13, 2012 16:41 |
|
movax posted:The -K versions have always lacked Vt-d as well, which sucks and is pure market segmentation as well. At least you still get Vt-x on them, but I think Microsoft would get very annoyed if they started segmenting that away. I have a dinosaur first-gen Athlon 64 paired with a dinosaur GeForce GT 240. I am broke, and by default I'm waiting until Haswell for a refresh. From what I've been reading, I can probably expect the integrated GPU to be better than what I currently have.
|
# ? Dec 13, 2012 17:21 |
|
movax posted:In other news, the 2600K and its friends are getting discontinued in March (not that this wasn't expected just more of a FYI), and model numbers for Haswell chips were leaked, no real surprises there either. Interesting that the TDP is going up slightly on the top-end SKUs, though I guess that's not super-surprising given the transistor increase iso-process. Makes me wonder what they're going to do re: attaching the heatspreader. If it's still glued on with crappy thermal paste, the unlocked parts are going to have very little headroom for overclocking (which was always a concern, though I'd anticipated the problem being the integrated VRM bits). I hope they go back to soldering it, or at least use a higher-quality paste on the -K SKUs.
|
# ? Dec 13, 2012 20:04 |
|
Looks like the dual-core mobile "i5" (god I wish they'd just stick to i3 for dual/i5 for quad/i7 for HT) Ivy Bridge stuff launched a month after the desktop quad parts. Not sure whether to hold out for said hypothetical T440 or buy a T430 with the NVS 5400M now, as far as GPU performance goes, given the rumors that the extra SPs are there more for power consumption than anything.
|
# ? Dec 13, 2012 20:13 |
|
Factory Factory posted:Interesting that the TDP is going up slightly on the top-end SKUs, though I guess not super-surprising given the transistor increase isoprocess. Makes me wonder what they're going to do re: attaching the heatspreader. If it's still glued on with crappy thermal paste, the unlocked parts are going to have very little headroom for overclocking (which was always a concern, but anticipating the problem to be the integrated VRM bits). I hope they go back to soldering it, or at least use a higher-quality paste on the -K SKUs. Aren't people content now going bare-die or re-applying the TIM? I know there are always going to be people who won't, but the OC crowd is typically less hesitant to void warranties and such.
|
# ? Dec 13, 2012 20:19 |
|
Well, in theory, overclocking voids your warranty anyway. But of all the folks who have posted about IVB overclocks in the overclocking thread, only one had the chutzpah to pop off the heatspreader and reapply TIM. It worked well, but a lot more people (probably myself included) aren't willing to go even that far, never mind apply 80 lbs. of mounting force directly to the die of our expensive CPUs, what with their easily-warping PCBs.
|
# ? Dec 13, 2012 20:38 |
|
Factory Factory posted:Interesting that the TDP is going up slightly on the top-end SKUs, though I guess not super-surprising given the transistor increase isoprocess. Makes me wonder what they're going to do re: attaching the heatspreader. If it's still glued on with crappy thermal paste, the unlocked parts are going to have very little headroom for overclocking (which was always a concern, but anticipating the problem to be the integrated VRM bits). I hope they go back to soldering it, or at least use a higher-quality paste on the -K SKUs. Yeah, definitely interested to see what the per-clock improvements are as well over Sandy. I think most folks are in agreement that this is going to be very nice for the mobile-space, but desktops should see some loving too. People with C2Qs might actually upgrade finally!
|
# ? Dec 13, 2012 20:45 |
|
|
Factory Factory posted:Well, in theory, overclocking voids your warranty anyway. But of all the folks who have posted about IVB overclocks in the overclocking thread, only one had the chutzpah to pop off the heatspreader and reapply TIM. It worked well, but a lot more people (probably myself included) aren't willing to go even that far, never mind apply 80 lbs. of mounting force directly to the die of our expensive CPUs, what with their easily-warping PCBs. You don't need that much mounting force with a water block, and those fully sealed joints seem to be pretty popular these days.
|
# ? Dec 13, 2012 23:23 |