Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Here's some early coverage of Bulldozer benchmarks that was discussed in the Sandy Bridge thread. In short, the claim is that a Bulldozer CPU performs close to a hex-core Core i7.

For reference, here are single- and multi-threaded benchmarks of current CPUs. Given that Bulldozer has 8 cores, it's not entirely surprising that it would be competitive with a 6-core CPU. It has always been expected that Bulldozer would put out excellent performance in heavily threaded integer workloads that can take advantage of all its cores; the real concern is how it performs in more typical workloads that depend on the performance of each thread. There's also the possibility that the benchmark wasn't entirely fair: for example, an 8-thread benchmark would max out a Sandy Bridge i7 or a Bulldozer, but would leave 4 logical CPUs unused on the Gulftown i7. Interestingly, in an 8-thread test configuration the Sandy Bridge i7 is exactly as fast as a Gulftown i7, and thus probably as fast as an 8-core Bulldozer, all with only 4 physical cores. That would make it roughly twice as fast core-for-core as Bulldozer, and AMD would need to do magical things with clock speeds and Turbo to start making up the difference. Turbo/Cool'n'Quiet have historically been problematic for AMD, so we'll see how well this all works out.
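Here's that core-for-core arithmetic as a quick sketch, under the assumed scenario where the 8-thread benchmark posts the same score on both chips (the numbers are arbitrary, just to show the division):

```python
# Rough per-core throughput comparison under the assumed scenario where an
# 8-thread benchmark posts identical scores on both CPUs.
def per_core_throughput(total_score: float, physical_cores: int) -> float:
    """Divide total benchmark throughput evenly across physical cores."""
    return total_score / physical_cores

score = 100.0  # arbitrary units; assume both CPUs post the same 8-thread score
sandy_bridge = per_core_throughput(score, physical_cores=4)  # 4 cores / 8 threads
bulldozer = per_core_throughput(score, physical_cores=8)     # 8 cores (4 modules)

print(f"Sandy Bridge per core: {sandy_bridge:.1f}")               # 25.0
print(f"Bulldozer per core:    {bulldozer:.1f}")                  # 12.5
print(f"Core-for-core ratio:   {sandy_bridge / bulldozer:.1f}x")  # 2.0x
```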

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

adorai posted:

Does anyone really care about "typical" single threaded workloads any more? More and more apps are being written with multithreading in mind, and even desktop computing seems to be moving towards virtualization (see: XP mode). Going forward single thread performance will be less and less important across the board. Most of the money is made in the enterprise, where max memory and number of cores are king.
On the desktop, per-thread performance is still very important. Sure, some apps are multi-threaded, but very few will take advantage of more than 3 cores. Outside of applications that can scale to an arbitrary number of threads (like 3ds Max or Cinebench), you're basically not going to use more than 4 cores on the desktop.
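To put a rough number on that, here's a minimal Amdahl's-law sketch; the 70% parallel fraction is an assumption for illustration, not a measured figure for any particular app:

```python
# Amdahl's law: overall speedup from N cores when only a fraction p of the
# work can run in parallel. Shows why desktop apps gain little past ~4 cores.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.70  # assumed parallel fraction for a "typical" desktop app (illustrative)
for cores in (1, 2, 4, 6, 8):
    print(f"{cores} cores: {amdahl_speedup(p, cores):.2f}x")
# 4 cores -> 2.11x, 8 cores -> 2.58x: doubling the core count buys only ~23% more.
```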

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Here's an Xbitlabs article with more details on the Bulldozer delay and the accelerated release of the Llano series of APUs. AMD's spin appears to be "it's not a delay because we said it would launch in Q2 and it's still going to launch in Q2. Barely." This sounds like a Bulldozer-specific problem, because Brazos is launching on time and in line with conservative expectations, and it seems like Llano will be launching ahead of schedule on the same Global Foundries 32nm process, despite the fact that it's basically a new platform.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
RDRAM debuted on the i820 chipset for the Coppermine Pentium IIIs. The 1.13GHz 0.18-micron Coppermine P3 was disastrous because it didn't actually work: certain intensive workloads caused the CPU to error out without additional voltage. The Tualatin P3s (built on the 0.13-micron process with copper interconnects instead of aluminum) were pretty excellent processors, since they overclocked well, had up to 512KB of full-speed L2, and ran pretty cool. The only problem was that by then the Thunderbird was out, so the Tualatin was only interesting for its mobile versions or if you were married to Intel for some reason.

The really lovely thing about the early P3s/P4s that used RDRAM was that PC133 compatibility was provided via a Memory Translator Hub, which added extra latency and overhead and also turned out to be broken as poo poo, resulting in recalls and free replacements with RDRAM (seriously, every other generation Intel fucks up their platform somehow, and they still have a reputation as the gods of QA and reliability). The P4 sucked a lot less by the time the i875P brought dual-channel DDR to the platform, combining the high memory bandwidth of RDRAM with the low latency of SDRAM. By this time the Pentium 4 HT was out with a faster 800MHz FSB, HyperThreading, and more cache, but without the horrible power leakage of the later Prescott processors.

Bonus Edit: And I got beaten a little on the Tualatin/i820 stuff; that's what I get for getting distracted reading about the old K75 processors with their variable L2 cache clockspeeds :P

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
It seems like it's going to come down to how efficient AMD's Turbo Core 2.0 is. Turbo Boost on Sandy Bridge has turned into a huge platform advantage, and there's basically nothing out there that's going to let AMD leverage more than four cores. AMD has been trying ever since Cool'n'Quiet without much success, but maybe they've closed the gap. I'm more excited about the Llano APU; it's like getting a free Radeon 6500-class videocard with your Athlon II X4, which isn't a bad deal. The CPU can also probably hold up alongside a discrete videocard a lot better than an i3 can.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
My guess would be that Asus got enough advance notice of the AM3+ spec to design those boards with it in mind, though perhaps not to the point where they're fully compliant with the final spec.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
The motherboard choice is a LITTLE disingenuous on the AMD side. A more appropriate choice would be an ASUS M4A87TD/USB3 AM3 AMD 870 SATA 6Gb/s USB 3.0 motherboard for $92.99; that puts the AMD CPU+mobo at $202.98 versus $354.98 for the Intel CPU+mobo, a difference of $152.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Coredump posted:

So wrong. So very wrong. Tech Report has covered this very subject in more thorough detail, and the Athlon X4 is a great system to build around on a budget.

Edit: Hey here's a link! http://techreport.com/articles.x/18448/17 Ownage.
That article is a year old, doesn't include the current generation of Intel processors, and STILL shows the Intel Core i5 750 as having the best performance/dollar when total system cost is factored in.

I mean, obviously if you can't afford $350 for your motherboard and CPU, then AMD is going to give you compellingly better value than Intel will. Of course, when I built a computer for my grandma I used an Athlon II X4 and the onboard video, because she's barely going to use that CPU and the extra performance would have been wasted. If you do notice performance, though, and you have the money, it's a little silly to say an i5 2500 wouldn't be worth it.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
I was typing up a post, but then I decided that instead we should all have a group hug and agree that if you wanna spend >$350 on your CPU and motherboard you should buy Intel, and if you don't you should buy AMD, and everyone can decide for themselves which group they fall into for their own reasons. Also, I want to see an Intel Cedar Trail Atom and a 28nm die-shrunk AMD E-series duke it out for low-power x86 supremacy later this year or maybe next. My money is on the E-series, because I still think Intel has no loving clue about the Atom.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
The AMD-750 (Irongate) chipset was horribly unstable, mostly because it was a first-generation chipset on a brand-new platform. People were still in the habit of using the same generic low-draw 250W power supply and generic PC100 RAM they would in a Pentium II system, which didn't work because AMD systems were a lot more sensitive. The VIA KX133 was actually an improvement over Irongate, though it didn't support the "Super Bypass" tech AMD used to improve performance. By the time of the KT133A, the VIA chipsets actually made for a pretty reasonable platform, as long as you didn't have a Sound Blaster Live! card.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
It kinda pissed me off that the nForce2 was never supported in Vista. I know it was a few years old when Vista launched, but nVidia could at least have provided 32-bit Vista drivers like every other chipset manufacturer did. After the nForce2 they kind of turned into the Pentium 4-era Intel of chipsets, making monstrosities that used a lot of power (thus REQUIRING a fan) and making motherboard manufacturers buy NF200 PCIe bridge chips to "upgrade" PCIe v1.0 nForce chipsets to PCIe v2.0.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

pienipple posted:

http://www.active-hardware.com/english/reviews/mainboard/a7n8x.htm

This was the Athlon XP board to have, and it was great. I still have it in my htpc but it's defunct because I dropped my monstrous Zalman heatsink on the chip and cracked the core.
Pshhh, Abit NF7-S v2.0, bitches. Remember when nVidia accidentally made good onboard audio on the nForce2, called SoundStorm? I assume it was an accident because they never repeated their mistake on future chipsets.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
The 28nm successor to the AMD Ontario and Zacate low-power x86 processors, the Wichita series, has taped out at AMD for production at TSMC. "Taping out" means the chip design has been completed and sent for initial test manufacturing, after which they make any further necessary changes. About two weeks ago we learned that the AMD Radeon HD 7000-series taped out about six weeks prior, indicating that both products may be on track for a launch potentially as early as Q3. I'd expect the new C- and E-series processors to have VLIW4-based Radeon HD 7300 graphics or something to that effect. I'm thinking it will have 64 shaders (one VLIW4 SIMD block), though they could double that instead of scaling up clockspeeds.
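For reference, the shader math behind that guess; the SIMD organization here is how Cayman's VLIW4 blocks are laid out, and assuming the same layout for a hypothetical Wichita GPU is my speculation:

```python
# Shader count for a GPU built from VLIW4 SIMD blocks: each SIMD block has
# 16 VLIW4 units, and each unit has 4 ALU lanes ("shaders").
vliw4_units_per_simd = 16
lanes_per_unit = 4
shaders_per_simd = vliw4_units_per_simd * lanes_per_unit

print(f"One SIMD block:  {shaders_per_simd} shaders")      # 64
print(f"Two SIMD blocks: {2 * shaders_per_simd} shaders")  # 128
```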

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Keep in mind that we're talking about an incredibly simple device here, nothing like a big CPU or GPU. At 40nm the Zacate APU is 75mm^2; even with added hardware, Wichita may be smaller. Small parts make excellent products for a new process node, as they're less affected by the high defect rates that come with process teething problems. As for whether AMD wants to jump on 28nm early, you're drat right they do. They've been stuck on the 40nm process longer than anyone expected and are at the limits of its capabilities with the 6970. While they certainly don't want to have to deal with potentially low yields and unpredictable issues, they need the competitive advantage in the graphics space that an early and successful transition to 28nm brings. The low-power x86 market is another area where AMD is highly competitive but supply-constrained, so it makes sense to transition to 28nm quickly where they can increase their performance and potentially make gains on manufacturing cost and volume.
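To illustrate the small-die argument, here's a sketch using the textbook first-order Poisson yield model; the defect density is a made-up, illustrative number (real 28nm figures aren't public), and the ~389mm^2 die is Cayman-sized for comparison:

```python
import math

# First-order Poisson yield model: Y = exp(-A * D), where A is die area and
# D is defect density. Purely illustrative numbers, not real fab data.
def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

d0 = 0.5  # assumed defects/cm^2 for an immature process (made up for illustration)
for name, area_mm2 in (("~75 mm^2 APU (Zacate-sized)", 75.0),
                       ("~389 mm^2 GPU (Cayman-sized)", 389.0)):
    print(f"{name}: ~{poisson_yield(area_mm2, d0):.0%} estimated yield")
# Small dies shrug off a high defect rate; big dies get hammered by it.
```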

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Not A Gay Name posted:

Well, for CPUs AMD is still at 45nm, not 40nm. As far as I know there are no 40nm CPUs, only GPUs, and there Nvidia is still at the 40nm process as well.

In that regard they are certainly behind Intel with its 32nm process, which is what I thought (at least high-end) Bulldozer was going to use, rather than 28nm.

Though I don't see any reason they can't skip 32 and go to 28 if it's ready.
AMD CPUs are currently manufactured at Global Foundries on the 45nm Silicon-on-Insulator process, transitioning to 32nm. AMD GPUs are currently manufactured at TSMC on a bulk-silicon 40nm process, transitioning to 28nm. You can't make AMD CPUs at TSMC because they're designed for GF's Silicon-on-Insulator tech, and vice versa. GPUs have historically been made at "half-node" processes, which are sort of like half-steps between full process shrinks. For example, while CPUs shrank from 90nm to 65nm to 45nm, GPUs went from 80nm to 55nm to 40nm.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Global Foundries is using SOI and TSMC (and nearly everybody else) isn't, sorry if I wasn't being clear. That means there are pretty substantial differences between the processes and the resulting chips. Those differences are why it made sense for AMD to fab the Zacate APUs at TSMC: they were reusing their graphics cores and could design the new CPU cores for TSMC's bulk CMOS process.

E: Apparently I can't spell foundries.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
I happened to be looking at Zacates on Newegg, and saw this new Sapphire option that looks good:

SAPPHIRE Pure White Fusion E350 for $109.99

It has a Marvell Yukon Ethernet chipset, which tends to work better than Realtek for some people. I'm just bummed we don't have more options with passive heatsinks; you have to spend $145 for the Asus, though it is a nice product with extra eSATA and USB 3.0.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Anandtech posted a Nettop and Mini-ITX Buyer's Guide today. One of the interesting things I noticed was that they used an Antec nettop case with a quiet 100mm side fan, which I'm thinking would let you remove the fan from the Zacate heatsink and still get acceptable cooling.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Nordic Hardware has a rundown of the Llano benchmarks leaked by Donanim Haber. They could of course be BS, but they seem pretty reasonable as benches of IGP game performance versus Sandy Bridge. It still won't play Crysis, but everything else works well. Fudzilla also has details on the top of the Llano mobile lineup. The highest-end mobile CPU will be the Fusion A8-3530MX, a 32nm quad-core with a base clock of 1.9GHz, Turboing up to 2.6GHz. It features Radeon HD 6620G graphics (400 shaders @ 444MHz). This compares to the current Phenom II X4 940BE, a 2.4GHz 45W CPU without graphics.

Like I've said in the past, the critical aspect is how well AMD's Turbo Core works. The graphics seem pretty compelling, so per-thread CPU performance is the only real question.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Here's a Google translation of the Donanim Haber article with the original image attached.

Google Translate posted:

Price information is starting to emerge for AMD's Bulldozer FX and next-generation Llano Fusion processors. With the launch date approaching, a Chinese site has published end-user retail prices for the new four-, six-, and eight-core processors, though nothing has been officially confirmed yet. According to the table, the Fusion Llano A-series processors, which depending on the model will have integrated graphics units with 160-400 shaders, will be offered at the $70-170 level. As is known, these processors use an updated version of the K10.5 core design, codenamed Husky.

As expected, the Bulldozer FX processors, which AMD will position in the high-performance segment, will come with an aggressive pricing strategy. Built on 32nm process technology with up to 8MB of L2 and 8MB of L3 cache depending on the model, the new Bulldozer FX processors are priced between $190 and $320. Recall that in AMD's official statements the 8-core Bulldozer FX was compared against Intel's Core i7-2600. Given the resulting price chart and the current selling price of the Core i7-2600K, the estimate is that AMD's fastest 8-core processor will be fighting the Core i7-2600.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Tab8715 posted:

The hell? They can't be right, the integrated graphics core has a R 6550? That's what I've got in my desktop right now...
To be fair, it isn't directly comparable to desktop-class cards because of the reduced memory bandwidth. Llano has 29.8GB/sec shared between the CPU cores and the GPU, while desktop cards intended for 3D have at least 64GB/sec. Lower-end cards intended for HTPC and video applications do have as little as 28.8GB/sec though, so Llano should easily make dedicated videocards necessary only for gaming.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Factory Factory posted:

Actually, video cards aren't anywhere near as bandwidth-intensive as their interconnects suggest. Everything below a dual-GPU card actually loses only a handful percent of their performance scaling all the way down to a PCIe x4 connect.

Also, PCIe 2.0 x16 only runs at 8 GB/s (500 MB/s per lane times 16 lanes). You might be thinking gigabits.
I'm talking about memory bandwidth: a Radeon HD 6570 has a 128-bit GDDR5 memory bus at 4GHz effective, but the Radeon HD 6550 is going to have to make do with a 128-bit DDR3-1866 bus shared with the CPU.
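For anyone who wants the arithmetic, peak bandwidth is just bus width times effective transfer rate; a quick sketch:

```python
# Peak memory bandwidth = bus width (in bytes) * effective transfer rate.
def peak_bandwidth_gb_s(bus_width_bits: int, effective_mt_s: float) -> float:
    return (bus_width_bits / 8) * effective_mt_s / 1000.0  # MT/s -> GB/s

print(f"HD 6570, 128-bit GDDR5 @ 4000 MT/s: {peak_bandwidth_gb_s(128, 4000):.1f} GB/s")  # 64.0
print(f"Llano,   128-bit DDR3  @ 1866 MT/s: {peak_bandwidth_gb_s(128, 1866):.1f} GB/s")  # ~29.9
# ...and on Llano that ~29.9 GB/s is shared between the CPU cores and the GPU.
```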

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
The Brazos APU is pretty tightly integrated; it's definitely not comparable to Clarkdale or Atom, which lack true integrated memory controllers. Clarkdale actually used separate chips (32nm CPU cores, 45nm northbridge) on the same package, while Atom uses a single chip with two independent ICs linked by an on-die FSB. These approaches require little development work and provide some of the cost savings of an integrated memory controller, but you don't get the performance improvements an IMC offers (and an actual IMC uses even less power and die space). The main limitation on Brazos performance is the very low CPU clocks and the lack of Turbo support, both of which should be remedied in the 28nm die shrink. New chipsets probably also wouldn't hurt (especially for nettops), but overall platform power usage is already pretty low.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Fudzilla has an article about AMD's Fusion strategy (direct link to AMD presentation slides), as well as another article with more details about the upcoming Fusion Z-series APUs for tablets. Intel has announced they're slashing the prices of their upcoming 32nm Atoms by about 50%, so competition is really starting to heat up in the low-power computing arena. I'm thinking we'll see Atoms in cheap, low-performance devices (like ChromeOS smartbooks), with Fusion processors making significant headway in Windows-based devices that require higher performance. The real question for AMD longer term is whether they can write Linux/Android graphics drivers that will allow them to capture some of that market, or if they'll just leave it to the ARM SoCs. AMD has two years until Intel produces a competitive Atom (22nm Silvermont in 2013), so they had better take advantage of this time by executing well.

Let's all just pray the rumors of Carly Fiorina being selected as the new AMD CEO were false, otherwise AMD is just plain done.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
AMD says it has sold 5 million Fusion APUs so far and that it is sold out, with demand far exceeding supply. Engadget is patting themselves on the back for predicting the death of the netbook, but I think the reality is that consumers are catching on to the fact that Atoms simply aren't fast enough for netbooks. They were barely adequate when the netbook form factor first appeared, but because the Atom never evolved, the computing demands of basic web browsing far outpaced it. It's unfortunate but understandable that Fusion APUs are ending up in low-end laptops rather than the netbooks they'd be perfect for.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Bulldozer has been delayed until approximately late July due to performance issues. Apparently the B0 and B1 steppings had unacceptable performance, so AMD is spinning up a B2 stepping and hoping that it will make a big enough difference. So much for those optimistic performance projections :(

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Nordic Hardware is reporting that the Bulldozer lineup has been canceled, replaced with a new lineup with much more conservative clockspeed targets, to be launched in September. This news story came out before we got confirmation at Computex that AMD was going to have to make a new stepping, so it seems pretty likely right now. Since Llano still seems to be on track, I'm thinking this is more of an issue with the Bulldozer architecture than the manufacturing process (though we have no idea what the clockspeed targets are on Llano). Unfortunately, this probably spells the end of any chance AMD had at competing with Sandy Bridge in terms of per-thread performance, and doesn't bode well for their chances with well-threaded workloads when compared to Sandy Bridge-E. It seems like Bulldozer is going to end up like the Thuban hex-core Phenom IIs, not the fastest, but the cheapest way to get 6+ cores if you have an application that can use them all.

On the plus side, we know that Intel's next-generation Ivy Bridge CPUs have been pushed back from Q1 2012 to Q2, meaning AMD is going to have a generous period of graphics dominance with Llano and Brazos.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
I think it's fair to say that the 900-series chipsets and boards will be delayed to launch alongside Bulldozer, but that's just a rebrand anyway and you can buy AM3+ boards now (list linked in the OP). Realistically though, you should probably put some serious thought into an Intel Sandy Bridge CPU and Z68 board. AMD could still pull off something awesome, but that's looking less likely and less worth waiting for.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Sinestro posted:

That is not true. There are 2 128-bit FPUs in a module, but for 256-bit FP calculation they can fuse. Unless AVX is being used on one core, there are two FPUs, each tied to one core.
That's not really accurate: there's a single FPU with two 128-bit FMACs, which is the amount of FPU resources you'd normally dedicate to a single core. See the Bulldozer slides in comparison to Phenom II on this page. The FPU is, in effect, using Hyperthreading to serve two cores. That said, I don't think FPU performance is a limiting factor in most workloads, and if you're building a cluster to process floating-point scientific data, you'll probably buy Xeons anyway. The actual problem for AMD appears to have been that Bulldozer couldn't scale to high enough clockspeeds to offer competitive per-thread performance, which has always been the big risk of going with >4-core CPUs.
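To put rough numbers on the shared-FPU point, here's a peak-FLOPs sketch for one module; it counts an FMA as two FLOPs and assumes both 128-bit FMACs stay fully fed, which real code rarely manages:

```python
# Rough peak single-precision FLOPs per cycle for one Bulldozer module.
fmacs_per_module = 2           # two 128-bit FMAC pipes shared by the module's two cores
sp_lanes_per_fmac = 128 // 32  # 4 single-precision lanes per 128-bit pipe
flops_per_fma = 2              # count a fused multiply-add as 2 FLOPs

module_peak = fmacs_per_module * sp_lanes_per_fmac * flops_per_fma
print(f"Module peak: {module_peak} SP FLOPs/cycle")  # 16
# The peak is the same whether one thread issues 256-bit ops to the fused pipes
# or both cores each issue 128-bit ops; that's the sense in which the FPU is
# shared rather than duplicated per core.
```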

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Xbitlabs is reporting their sources as saying that Bulldozer is currently topping out at 2.5GHz before Turbo, which certainly explains why they couldn't launch it. Even desktop Sandy Bridge has nearly 1GHz over that, and Sandy Bridge-E will probably have a similar lead, and that's with 8 cores.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Anandtech has some more AMD updates from Computex. One is that in 2012 they intend to launch the Trinity APU series to replace Llano, combining Bulldozer modules with a Radeon graphics core. Another Bulldozer tidbit is that Anandtech reports B1-stepping CPUs are clocking at 3.8GHz nominal, which contradicts earlier claims that they were only able to achieve 2.5GHz. This could mean the clockspeed stories were wrong and the problem was just overall performance, or it could be that the issue is what clocks they can hit within their TDP targets. The processor renumbering really points to the problem being clockspeed-related somehow, but it's always possible that story was just wrong.

AMD has also officially announced their Z-series "Desna" APUs for tablets. The AMD Z-01 is basically an Ontario C-50 chip that has been binned for lower power consumption, coming in at 5.9W versus 9W on the C-50. It's a 1GHz dual-core CPU with a 280MHz Radeon HD 6250 GPU.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
AMD has announced the E-450 APU. It's basically the E-350 with slightly bumped CPU and GPU clockspeeds, graphics turbo support, and support for DDR3-1600 memory (instead of DDR3-1333). This should result in significant performance boosts in graphics-bound games; unfortunately, most games are CPU-bound on Zacate, and they apparently can't bump CPU clocks meaningfully without a die shrink.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

E2M2 posted:

So is there a price range on how much these new Bulldozer CPUs will cost? And the new motherboards?
Here's a pricing/spec table from Donanim Haber from May 20th. This was before the rumors of performance/lineup changes, so things may have changed; take it with a grain of salt. It looks reasonable, though.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
It's a plus for AMD if they're trying to maintain profitability :)

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Llano previews are out; I'm thinking of giving it its own thread. Anandtech Desktop Preview, Anandtech Notebook Preview.

Notebook: Absolutely unbeatable for inexpensive gaming performance, with excellent, Sandy Bridge-like battery life in general usage. The CPU performance is poor, but you're clearly buying a Llano notebook for the graphics. The CPU does hold back the GPU in CPU-heavy games like StarCraft 2, but not by enough to keep the GPU from delivering a commanding lead. The new Hybrid CrossFire isn't bad (when it works), though its value depends on how cheaply they can throw low-end GPUs into laptops (low-end AMD mobile GPUs seem pretty drat cheap). No graphics turbo :(

Desktop: Disappointing. The GPU is more hamstrung by sharing memory bandwidth with the CPU than hoped, though it still has a compelling performance advantage. The CPU is showing its age, easily trounced by Sandy Bridge Core i3s. It still obviously wins at gaming without a dedicated graphics card thanks to 50-100% faster graphics performance, but I'm holding out for final reviews and to find out how overclockable it is.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

freeforumuser posted:

Trouble for AMD is SB + GT540M is better than mobile Llano in every aspect and the former isn't exactly expensive to boot either.
Keep in mind that Anandtech was reviewing a 35W low-power processor, while the laptop with the GT 540M was using a 45W quad-core i7 2630QM. The GPU itself is competitive enough that it instantly obsoletes everything below the GT 555M/GTX 560M, which is amazing when you consider that we're talking about integrated graphics and not having to pay for a videocard at all. I don't think AMD or its OEMs are stupid enough to price Llano notebooks similarly to Sandy Bridge notebooks with capable dGPUs. One thing we don't have yet is gaming battery life benchmarks, which is one area where we can expect Llano to excel. It's already winning over dGPU-equipped Sandy Bridge in the Internet and video playback tests; once the dGPU kicks on, you'd expect battery life for the GT 540M laptop to fall in a hole.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

HalloKitty posted:

Let's be honest though, the BAPCo thing probably is because their CPUs aren't performing all that well, and they think the suite didn't emphasise GPU performance enough..
Keep in mind that nVidia and VIA quit at the same time along with AMD. This is really an issue of Intel controlling BAPCo and selecting and weighting the benchmarks to show Intel's products in the best possible light. That probably means a focus on single-threaded performance with minimal emphasis on graphics, which would definitely disadvantage competitors who optimized for multi-threaded performance and GPU speed.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

COCKMOUTH.GIF posted:

Yes, AMD is trying to go up against a monolithic company that's been using underhanded tactics. In response, they should be shifting their focus to what needs to be done to compete with Intel. Llano is a good start, but they could keep looking at what Intel is not doing well and strong-arm their way into those areas. For instance, Microsoft and Intel aren't getting along in terms of ARM architecture. Work with Microsoft, then; see if some deals can be made. Start there, see if Apple will do the same. Spend more on R&D, look at what can be done in the server market. AMD could also try combining the AMD/ATi technology to start pushing out AMD reference mainboards (like Intel does). Make them stable, make them reliable, push them to consumers and manufacturers. I'd build an AMD machine with an AMD reference board if it's comparable to Intel's. Stability, simplicity and well-documented functionality. Obviously you'll need a comparable CPU to go with it.
SemiAccurate had an interesting article on Wednesday about the AMD/ARM collaboration. In short, they're working on creating a standard on-chip interconnect so that manufacturers can combine ARM CPU cores with an AMD GPU block, or combine AMD Bobcat CPU cores with ARM hardware-acceleration blocks for ultra-low-power x86/x64 applications. The fact that Bobcat CPU cores are designed for production in third-party fabs (since Brazos is fabbed at TSMC, not Global Foundries) will help make adoption more likely.

There's no real reason for AMD to make their own branded motherboards; that would make them a competitor to their own partners, which is what killed 3dfx years ago. Intel isn't doing so well in the motherboard arena either, as the complexity and engineering challenges of modern motherboards require a level of focus and dedicated resources that you only see in companies like Asus. None of Intel's motherboards have any interesting or differentiating features, and even their high-end boards are unreliable garbage because they simply don't do the engineering/QA necessary to make good products. Intel's recent drive to push its own branded motherboards could easily end up as a coup for AMD, as Intel damages its relationships with its partners.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Star War Sex Parrot posted:

Wait what? Intel said Windows 8 for ARM will not have an emulation layer. Has Microsoft stated otherwise?
MS stated the day after the Intel statement that it was "factually inaccurate and unfortunately misleading." They didn't elaborate, though, and I think it's more likely they were responding to Intel's claim that there would be four different Windows 8 for ARM SKUs and that apps for one wouldn't be compatible with any of the others, which would be bizarre if true and is probably just Intel FUD.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Realistically, how much software do you actually need for Windows 8 ARM? As long as Firefox, Flash and Java get ported, that pretty much covers you for everything but games.

Also, would .NET applications need to be recompiled, or can the CLR just JIT into native ARM code?
