Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

David Tennant posted:

If Firefox, Flash, and Java are the only major things ported, what's going to stop you from buying an Android tablet instead?
It's just like Android vs. Windows Phone 7: MS is going to have to compete on its own merits against the Google alternatives. To be honest, I don't think it's going to work, though it will make Windows a much more compelling option for low-power x64 machines.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

fishmech posted:

Ask yourself: how many programs were ported to Itanium Windows? How many programs were ported to PowerPC Windows NT (when that existed)?
To be fair, both of those architectures failed because of the lack of compelling benefits versus x86/x64. Imagine if Itanium had been fast enough to justify rewriting code for it, for example. These days, though, we're going to see people rewriting apps as JavaScript webapps, not recompiling them for ARM on Windows.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

fishmech posted:

ARM has no compelling benefits other than low power usage, which comes at the cost of performance. No one's going to be replacing full laptops or desktops with ARM-based ones, because the high-speed ARM CPUs that start to approach two-generation-old x86/x64 chips also use as much power as those did.

The only thing slower than x86 apps in an emulation layer on ARM is JavaScript replicating a full x86 app on ARM.
The thing is, even four 1GHz Cortex A9 cores are enough to provide a pretty effective computing experience when combined with the capable hardware acceleration you find on modern ARM SoCs. No one's talking about beating Intel Core processors, but Atoms don't have effective hardware acceleration, so they're brought to their knees by video. It's pretty easy for ARM CPUs to take market share in any segment currently served by processors like Atom. If you think about the way most people use small computing devices like that, we're talking web browsing, YouTube, and Facebook games, not complex native applications.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

fishmech posted:

The problem is that if you want to get a lot of people bothering to port things to work on the platform, knowing it will be restricted to tablets and netbooks is a disincentive. If ARM was actually capable of providing a robust experience at a good pricepoint on full laptops and desktops, then there'd be a lot more push to actually port apps to work on ARM windows.

Also ARM devices do provide a good experience now - but your iPad or Android device sure as poo poo ain't stuck running Javascript based programs.

Edit: And seriously, for something that's meant to be a netbook? If all you want to do is beat the Atom, there's the AMD Zacate or whatever it is for netbooks.
I think the market for ARM is a lot larger than you're giving it credit for. Smartphone SoCs are becoming similar in capabilities to current-generation game consoles, four 1GHz+ Cortex A9 cores with hardware acceleration really is "good enough" to provide a good computing experience, and once we transition to Cortex A15 that will be even more true. The average user barely touches native apps anymore, doing everything through the web browser. My grandma still likes to design and print craft projects, so she'll probably keep her PC since there isn't a comparable web equivalent yet, but all my mom does is read HuffPo and stuff, listen to Pandora, and watch Hulu/Netflix/YouTube. A cheap ARM nettop would work great for her, and would outperform the old machine she currently uses with much lower power usage.

Modern JS engines and the ready availability of hardware acceleration mean that there are very few applications that can't be replaced by webapps that have acceptable or even superior performance from the end-user's perspective. The only common application that does enough work that this isn't the case is 3D games, and we're going to see exciting developments in that area thanks to WebGL. You won't be playing Crysis on your ARM netbook/nettop, but you might be playing Battlefield Heroes, Quake Live, or even Counter-Strike without the developer maintaining a native-code plugin for your platform.

AMD's Brazos series (Zacate/Ontario/Desna) certainly are great low-power processors, but the on-the-ground reality is that severe shortages (relative to demand; AMD is pumping them out) mean no one's making Zacate netbooks when the same chips can go into much more expensive ultra-portable notebooks, small desktops, or tablets, and the products that are coming out carry high prices. Even if these supply problems were solved, ARM processors will come in at a substantially lower power envelope and with much higher power efficiency, meaning much longer netbook battery life and easier integration into nettops. ARM SoCs are also physically smaller and more tightly integrated (fewer supporting chips) than even the Brazos platform, which has its own advantages.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
And now back to our regularly scheduled Bulldozerchat:

XbitLabs is reporting that Bulldozer-based Opterons will have user-configurable TDPs with 1W granularity. This means AMD will no longer be shipping SE/HE/EE versions of its CPUs; instead, you just buy the speed grade you want and set the TDP in the BIOS to exactly what you want it to be.
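If that pans out, the old fixed bins just become endpoints of a range. Here's a hypothetical sketch of the selection logic; the wattages and bin values are invented for illustration and are not AMD's actual figures:

```c
/* Hypothetical sketch of 1W-granularity TDP selection replacing fixed
 * SE/HE/EE bins. The limits and bin values below are made up for
 * illustration; they are not AMD's actual figures. */
#include <stdio.h>

#define TDP_MIN_W 65   /* assumed floor for the speed grade */
#define TDP_MAX_W 140  /* assumed ceiling for the speed grade */

/* Old model: pick one of three fixed bins. */
static int binned_tdp(char grade)
{
    switch (grade) {
    case 'E': return 65;   /* EE */
    case 'H': return 85;   /* HE */
    default:  return 140;  /* SE */
    }
}

/* New model: any whole-watt value within the grade's range. */
static int configurable_tdp(int requested_w)
{
    if (requested_w < TDP_MIN_W) return TDP_MIN_W;
    if (requested_w > TDP_MAX_W) return TDP_MAX_W;
    return requested_w; /* 1W granularity: no rounding needed */
}

int main(void)
{
    printf("old HE bin: %dW\n", binned_tdp('H'));
    printf("requested 97W -> set to %dW\n", configurable_tdp(97));
    return 0;
}
```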

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Llano reviews are officially out:

Anandtech's A8-3850 Desktop Review
Anandtech's A8-3850 HTPC Review
Anandtech's Desktop Llano Overclocking and ASRock A75 Extreme6 Review

Turbo Core: only available on the 65W TDP processors, and adds up to 300MHz to the CPU clock speed only. Disappointing.

Overclocking: overclocking is of limited usefulness because the chip still tries to stay within the same power envelope, so overclocking the CPU underclocks the GPU. Overclocking the memory is highly beneficial: it overclocks the CPU cores as well, but the improved memory bandwidth more than makes up for the reduced power available to the GPU. For an unoverclocked system, you definitely want 1866MHz DDR3 if possible, or at least 1600MHz.
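To make the envelope-sharing concrete, here's a toy model of a fixed power budget split between CPU and GPU. Everything in it is an assumption for illustration (the 100W budget, the baseline clocks and watts, the linear clock-to-power relation); it is not AMD's actual power-management algorithm:

```c
/* Toy model of a fixed shared power envelope, as on Llano: raising
 * the CPU's share leaves less headroom for the GPU. Numbers and the
 * linear clock/power relation are invented for illustration. */
#include <stdio.h>

#define ENVELOPE_W 100.0 /* hypothetical total APU budget in watts */

/* Assume (simplistically) sustained GPU clock scales linearly with
 * the power left over after the CPU takes its share. */
static double gpu_clock_mhz(double cpu_watts, double gpu_base_mhz,
                            double gpu_base_watts)
{
    double gpu_watts = ENVELOPE_W - cpu_watts;
    return gpu_base_mhz * (gpu_watts / gpu_base_watts);
}

int main(void)
{
    /* Hypothetical baseline: GPU runs 600MHz on a 35W allotment. */
    for (double cpu = 65.0; cpu <= 85.0; cpu += 5.0)
        printf("CPU %5.1fW -> GPU ~%6.1fMHz\n",
               cpu, gpu_clock_mhz(cpu, 600.0, 35.0));
    return 0;
}
```

Running it shows the trade directly: every extra watt the overclocked CPU claims pulls the modeled GPU clock down, which is why memory overclocking (free bandwidth, no envelope cost to the GPU) is the better lever.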

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Longinus00 posted:

How is GPGPU for AMD parts in Linux? If this more or less requires the official AMD drivers to actually use the Fusion part of the CPU, I'm not going to be interested at all.
Phoronix has lots of tests of AMD hardware in Linux. Here's their review of AMD Fusion with open source drivers, though that was Brazos, not Llano. Why don't you want to use the Catalyst drivers?

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
[H]ardOCP has an article on E3 rumors about next-generation console hardware. The really interesting thing here is that Sony may be using Bulldozer as the Playstation 4's CPU. Nintendo is switching to an IBM quad-core CPU, and Microsoft may be switching to a next-generation Cell for the Xbox. Only the news that the PS4 might use Bulldozer is shocking: the Wii used IBM PowerPC cores, the PS3's Cell used IBM PowerPC cores plus the SPE vector co-processors, and the 360 used the same IBM PowerPC cores from the Cell minus the SPEs. I would certainly expect upcoming consoles to continue using IBM PowerPC CPU cores, though the continued usefulness of the SPEs is questionable now that we have GPUs with incredible, accessible compute performance.

On the GPU front, AMD will be powering all three next-generation consoles. The 360 and Wii are both powered by AMD GPUs, and it makes a lot of sense that Sony would switch to AMD given the direction nVidia seems to be going with GPUs. It's possible that Sony could be using a Bulldozer-based APU in the PS4; I don't think this would necessarily provide the graphics horsepower that you'd want, but then again it makes sense in the context of Sony saying they wanted to make the PS4 a less expensive console with less investment in hardware development.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

wicka posted:

What does that mean, exactly?
nVidia's GPUs are overly compute-focused for the console market, and their power efficiency is significantly worse. This article from SemiAccurate about the development of nVidia's upcoming Kepler GPU has a bit more information, especially about the manufacturing and engineering challenges nVidia is facing on TSMC's 28nm process, challenges AMD seems to have overcome. Basically, AMD can deliver a graphics solution that's smaller, cooler, and less expensive for the same amount of performance, and they seem to be in a better position to actually deliver upcoming products sooner (remember how late the Geforce 400-series was). I'm mostly comparing the Radeon HD 6870 to the Geforce GTX 560 Ti here, since I don't think anyone's putting a GTX 580/R6970 in a console, but I think the same relative proportions are likely to apply to the next generation of mid-range parts.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

wicka posted:

It's not really strange or amazing that Sony promoted something new and expensive and it turned out not to be the next big thing.
To be fair, the Cell was an interesting solution to a problem that was then solved by the next generation of GPUs, which came out right after the PS3. The Cell really isn't a bad idea inherently; it just seems like the GPU (Geforce 7950 GT) is so weak that no one ever had much use for the Cell's capabilities, especially given the programming effort needed to unlock them. Sony tried to offer the Cell for embedded applications where you need a lot of performance, like TVs and video boxes, but it turns out an SoC with an ARM CPU core and some dedicated video decode hardware is a much better, cheaper, and more efficient solution. There just aren't that many applications that require a lot of processing power, don't have available ASICs, and won't/can't be run on x86.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Ryokurin posted:

It will be interesting to see which generation of GPU they use: VLIW4, or the vector + scalar design that Southern Islands is alleged to be based on. VLIW5 and 4 were much more efficient than Nvidia's designs, but also pretty hard to fully utilize, which explains why it never really walked all over anything Nvidia put out. Vector + scalar is closer to what Nvidia uses, but with some improvements. I doubt it will be hotter than what Nvidia will produce, but we also don't know yet if the improvements they have made will actually be seen in the real world either.
Here's the Anandtech article about Graphics Core Next (GCN) for those who haven't seen it. I'm pretty sure GCN is still pretty far out, more likely a target for the Radeon HD 8000 or even 9000 series. I think it's confirmed that Southern Islands, the Radeon HD 7000-series due for release in a couple of months, will be VLIW4-based; Bulldozer APUs are definitely VLIW4-based. I think AMD is likely to maintain their power-usage lead for a while, since it's more a philosophy of aiming for a lower target, and AMD has a significant technology lead in memory controllers (one of the reasons the R6870 has such good efficiency).
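The quoted point about VLIW being hard to fully utilize comes down to the compiler having to find four independent operations for every bundle; dependent chains leave issue slots empty. Here's a toy packing simulation of that effect (the instruction stream and the one-bundle latency model are invented for illustration; this is not how AMD's shader compiler works):

```c
/* Toy illustration of VLIW4 utilization: a bundle issues up to four
 * INDEPENDENT ops, so dependent chains leave slots empty. The
 * instruction stream and latency model are invented for illustration. */
#include <stdio.h>

#define NOPS 8

int main(void)
{
    /* dep[i] = index of the op that op i depends on, -1 if none.
     * Ops 0,1,2,5 are independent; 3->0, 4->3, 6->4, 7->6 chain. */
    int dep[NOPS] = { -1, -1, -1, 0, 3, -1, 4, 6 };
    int done[NOPS] = { 0 };
    int completed = 0, bundles = 0, filled = 0;

    while (completed < NOPS) {
        int picked[4], n = 0;
        /* Ready = producer finished in an EARLIER bundle, so ops
         * can't chain within a single bundle. */
        for (int i = 0; i < NOPS && n < 4; i++)
            if (!done[i] && (dep[i] < 0 || done[dep[i]]))
                picked[n++] = i;
        for (int j = 0; j < n; j++)
            done[picked[j]] = 1;
        completed += n;
        filled += n;
        bundles++;
        printf("bundle %d: %d/4 slots used\n", bundles, n);
    }
    printf("overall: %d/%d slots used\n", filled, bundles * 4);
    return 0;
}
```

With this stream the first bundle fills all four slots, then the dependent chain limps along at one op per bundle: 8 ops in 5 bundles, 8/20 slots used. That's the utilization problem in miniature, and why a scalar design like GCN is easier to keep busy.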

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
It makes a bit more sense for MS to use an AMD CPU than for Sony, since MS wouldn't have to find a way to reimplement code that was using the SPEs. Still, I remain unconvinced that an APU would provide enough graphics horsepower for a real generational leap over the 360 and PS3. Console games are EXTREMELY shader-bound, and you can do cool things with more shader horsepower, like using post-processing to hide how crappy your game looks. Currently console games are rendered below native resolution (1024x576, for example), then scaled up for display using a soft filter. The goal is to increase rendering performance while using the blur from upscaling to hide aliasing and your low-resolution textures.

With more shader horsepower, you can render at native resolution and use a shader-based antialiasing filter like nVidia's FXAA (which works on all hardware, since it's just a pixel shader integrated into the game by the developers) or AMD's MLAA (which is part of the drivers, so it only works on AMD hardware, but is otherwise similar). That'll improve edge sharpness and clarity a lot, but with only 30GB/sec of memory bandwidth shared between the CPU and GPU, we're probably not going to be seeing games with high-res, detailed textures. On the plus side, you'll notice low-res textures less because there will be neat shader tricks covering them up. Unfortunately this isn't the same result as just throwing a real GPU in there with dedicated GDDR5 memory, which WOULD give you enough bandwidth for high-res textures and real anti-aliasing.
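For a feel of what a shader-based AA filter actually does, here's a minimal CPU-side sketch of the core idea: find high-contrast edges by comparing neighborhood luma, then blend across them. This is not nVidia's actual FXAA algorithm (real FXAA estimates edge direction and does sub-pixel blending); it's just the detect-and-blend skeleton, run on a grayscale buffer for clarity, with the contrast threshold as a tunable assumption:

```c
/* Sketch of the idea behind shader-based AA filters like FXAA:
 * detect high-contrast edges via local luma range, blend across them.
 * NOT the real FXAA algorithm -- just its detect-and-blend core. */
#include <math.h>
#include <string.h>

void aa_pass(const float *src, float *dst, int w, int h, float threshold)
{
    memcpy(dst, src, (size_t)w * h * sizeof(float));
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            float c = src[y * w + x];
            float n = src[(y - 1) * w + x], s = src[(y + 1) * w + x];
            float e = src[y * w + x + 1],   o = src[y * w + x - 1];
            /* local contrast: luma range of the 3x3 cross */
            float lo = fminf(fminf(n, s), fminf(e, fminf(o, c)));
            float hi = fmaxf(fmaxf(n, s), fmaxf(e, fmaxf(o, c)));
            if (hi - lo > threshold)                 /* looks like an edge */
                dst[y * w + x] = (c + n + s + e + o) / 5.0f; /* blend it */
        }
    }
}
```

The point of doing this as a shader is that it's one cheap full-screen pass over an already-rendered frame, which is exactly the kind of work extra shader horsepower buys you.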

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Keep in mind that this generation of consoles is all about cost reduction, both in terms of bill of materials but ESPECIALLY hardware development investment. I'd love to see AMD develop some custom APU with a wide GDDR5 memory bus, but I think it's more likely that any console using a Bulldozer APU will use a regular AMD Trinity APU (Bulldozer cores plus VLIW4 graphics) and a custom chipset to provide a low-cost integrated platform.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

JnnyThndrs posted:

Wasn't the RROD issue due to lead-free solder hassles (similar to those lovely HP DV-series laptops), rather than poor design? Or were there multiple causes?
While I think the actual failure mode was similar (high temperature differentials weakening and disconnecting solder balls), the cause was different. On the HP laptops it was due to a design defect in the packaging of the Geforce 8M GPUs (specifically, a missing underfill layer designed to keep the solder balls attached to the die), which is the generation following the 360's. I think the key problem for the 360 was insufficient cooling, since the later models had much larger heatsinks with heatpipes.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Sinestro posted:

Llano is the chip without a market. Anyone who would want one would get SB + /[H|Z]6\d/ using the IGP, or AM3 + dGPU. :iiam: why AMD thinks there is a big market for "lovely CPU with good (for an IGP) GPU". The laptop performance is poo poo vs the category of "any dGPU + SB", and Optimus makes the batt. life comparable to an IGP. The notion of Llano on the desktop is laughable, because the desktop is the land of the power user, where Intel is king. I guess for gaming on a *very* tight budget at 720p resolutions, it might make sense.
A Llano system is a good $100-$200 cheaper than a Sandy Bridge system, and it actually has sufficient graphics horsepower for moderate gaming and HTPC applications. You only get the Intel HD Graphics 3000 if you buy a K-edition desktop processor (or a crappy i3 2105, or the low-power i5 2405S), and I don't think you really appreciate how terrible previous-generation integrated graphics was on the AM3 platform. It's true that the CPU performance, equivalent to an Athlon II X4 640 or so, isn't very impressive, but it beats Intel's dual-core CPUs and is fast enough for basic gaming and desktop applications. Basically, if you're buying anything less than an i5 2500K, or care more about decent graphics performance than high-end CPU performance, Llano is the better option. I really wish they'd get Turbo working in a future refresh, though; that should help a lot in applications that don't use all four cores, and really boost competition against Intel.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Digitimes is reporting that yields are significantly lower than expected on AMD's 32nm process, resulting in shortages of Llanos. This is probably also the reason why the Bulldozer release date keeps creeping back.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
SemiAccurate got their hands on an AMD internal presentation disclosing the Bulldozer die size as 315mm^2, which is very large compared to other contemporary CPUs. A full Sandy Bridge die is 216mm^2; the Phenom II X6 (Thuban) 45nm die was 346mm^2.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Some weird news on the GPU front that I didn't see earlier: Nordic Hardware claims to have leaked specs and details for Radeon HD 7000-series cards, including that the Radeon HD 7900-series will use Rambus XDR2 memory rather than GDDR5. The specs for the Radeon HD 7800-series are bang-on with expectations, basically the 6900-series die-shrunk to 28nm and slightly overclocked. I'm pretty skeptical both of the claim that the 7900-series uses XDR2 and of the claim that it would be manufactured at TSMC; the rumors up until now have been that it would use the GlobalFoundries 28nm process, since a ground-up redesign is a great time to switch to another fab.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Alereon posted:

Some weird news on the GPU front that I didn't see earlier: Nordic Hardware claims to have leaked specs and details for Radeon HD 7000-series cards, including that the Radeon HD 7900-series will use Rambus XDR2 memory rather than GDDR5. The specs for the Radeon HD 7800-series are bang-on with expectations, basically the 6900-series die-shrunk to 28nm and slightly overclocked. I'm pretty skeptical both of the claim that the 7900-series uses XDR2 and of the claim that it would be manufactured at TSMC; the rumors up until now have been that it would use the GlobalFoundries 28nm process, since a ground-up redesign is a great time to switch to another fab.
Welp that was fast:

Charlie Demerjian from SemiAccurate posted:

Let me be blunt here, THERE IS NO XDR2 IN SI/HD7000/GCN. Trust me on this, the spec list floating is complete bull, and you can tell by who is re-posting it and who is not. Some people know, and they are being VERY quiet on the subject.
This is not meant to knock XDR2 and/or Rambus, it is just a statement about what is in and what is not in the next GPU.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Anandtech has a post detailing pricing and model lineup for the initial AMD Bulldozer FX launch. It looks like they're pushing the release date back until Q4, which starts in October. Intel plans to compete by releasing a Core i7 2700K CPU, which will take the top slot and push down pricing on the i7 2600K and i5 2500K.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Bob Morales posted:

Interesting, hope the benchmarks they show are accurate. Not sure if their top-end part will be worth it, being priced over the i5 2500K.

The slides showing the improvement against the Phenom X6 worry me, because there's not much performance there to begin with.
Their benchmarks are also against the i7 980X on the Intel side, and while that's a hex-core, it's not terribly competitive with the Sandy Bridge quad-cores in most applications, especially gaming. I think Bulldozer is going to end up performing to expectations: unbeatable value in highly threaded applications, coming up short in most others. Turbo Core seems to be pretty effective, but the performance-per-clock-per-core doesn't seem to be there, though I could be wrong.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Corvettefisher posted:

I was fairly sure that down the line they planned to push out a 16-core desktop CPU
There really isn't anything on the desktop that could use 8 cores, much less 16. The 16-core Opteron Bulldozers will be made from two 8-core dies, which has the added benefit of providing a quad-channel memory connection, something servers genuinely need.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Anandtech is reporting that AMD is finally confirming that they are having 32nm yield issues and that these issues will cause them to miss revenue forecasts. I guess that confirms why we don't have Bulldozer yet, along with the Llano shortages we already knew about.

Edit: Remember how we were supposed to get the Radeon HD 7800-series this month? Fudzilla is reporting that AMD has pushed the release back due to issues with TSMC's 28nm low-power process. There's still some hope of a Q4 2011 soft launch, but they don't expect product to be available in volume until 2012. One question is whether we'll see additional delays for the 7900-series; those products were expected in Q1 2012, but are using an even more problematic process. TSMC :argh:

Alereon fucked around with this message at 05:11 on Sep 29, 2011

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

FormerFatty posted:

Do any AMD motherboards support SSD caching? If not, will they in the future?
No, but expect third-party SSD caching solutions to become more prevalent in the near future.

Longinus00 posted:

Hmm, I wonder why the architecture is always so behind in games?
Games have been AMD's weak area for a while now, due to their poor per-thread performance. Since games are typically very poorly threaded, an 8-core processor pretty much has to suck at them unless it has incredibly good Turbo Core. We'll get more details about precisely what went wrong when Anandtech finally does their review.

Alereon fucked around with this message at 23:04 on Sep 29, 2011

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Agreed posted:

How long do we let them slide on the "Intel are basically rat bastards" thing before we just come out and say that everything has been AMD's weak area for, what, five years now? You don't shrink from 25% to less than 5% of the server market despite extremely low priced hardware without some culpability. Their processors have been lagging in performance since Intel ditched the long pipeline and started building more around the Pentium M architecture. The good old days are going to be awfully hard to reclaim at this rate and I'm worried it's going to put Intel in a position of even more market dominance that'll be like a boot on AMD's throat, how can they get up?

At least their graphics cards are legitimate great price to performance cards, and they swept the next gen consoles. That'll... help. But Bulldozer is looking like a total disappointment at this point, maybe even a flop if they can't get the price down (and how will they do that when their yields are so crap?) and I am pretty upset about it. Every single thing that could go wrong has, both from a bad luck but also from a decision making standpoint. Yeah, Intel still hosed them over pretty bad, it would be unfair to forget about that but god drat it.
I think that's a little pessimistic. Their desktop performance has been generally weak, but they delivered a hex-core processor far more cheaply than Intel did, and maintained performance leadership in the value segment up until Sandy Bridge. Their strategy has always been more cores/$ on the desktop, and more performance/$ and performance/watt in servers (as well as more memory quantity and bandwidth/$), which doesn't translate into a desktop strategy since desktop applications are still poorly threaded. Marketshare numbers don't tell the whole story, because AMD Opterons excel in 4+ socket configurations while Intel leads in 1-2 sockets, so each server win for AMD sells more processors, and those processors are worth more (as they're 12-core, 8-way models). AMD has had great success in the supercomputer market for this reason, for example. Basically, AMD's server strategy targets huge virtualization servers, while Intel concentrates on more performance-sensitive applications.

The situation is almost entirely reversed in the low-power market, as Intel completely screwed themselves with Atom and won't begin to recover until 2013. AMD's low-power Fusion processors (the E, C, and Z-series) have a commanding lead and sell as fast as AMD can make them. Llano's CPU performance is pretty disappointing, but putting a decent GPU on-die makes it pretty compelling for laptops, where the costs (especially in power) of a discrete GPU aren't worth it. Llano also delivers 4 cores in a space where Intel usually delivers 2, which doesn't always help but isn't as useless as having more than 4 cores typically is.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

freeforumuser posted:

It is pessimistic because Intel can annihilate AMD in every sector, if they choose to.
You're basically saying "in a world where Intel could do anything it wanted, it would win", which is obviously true. In a world where AMD could do anything it wanted, it would win. In the real world, companies have limited R&D and can support limited product lines, and sometimes they plain screw something up. Intel seems to make terrible choices with regularity, but generally executes well on those plans (minus the 6-series chipset and SSD bugs). AMD seems to have pretty good decision-making, but just isn't very good on the details.

quote:

Server: I don't track stuff here, but anything wrong with the Intel server platform was long since fixed with Nehalem's IMC, QPI, and HT. Sandy Bridge only made the same stuff even better.
There is no Sandy Bridge for servers; it'll be Ivy Bridge before Intel does a refresh. Westmere-EX is by no means bad, but you get a maximum of 10 cores per socket with a minimum TDP of 105W. AMD is happy to sell you a 12-core with an 85W TDP right now, and a 16-core Bulldozer with a configurable TDP tomorrow (proverbially). The AMD platform also has advantages in how much memory you can use, how much that memory costs, and how much power it uses. The Intel platform is pretty awesome for high-load servers that demand low response times; the AMD platform is pretty awesome for virtualization farms, applications that can use shittons of threads, or situations where you want to shove as many cores as possible into a given space/power/money budget.

quote:

Desktops: Release an unlocked i3. No reason to even buy AMD anymore.
A better choice would be an i3 with Turbo Boost; that would really kick AMD's rear end at CPU responsiveness. However, Intel already beats AMD at per-thread CPU performance, and AMD's answer is vastly superior graphics performance (even vs. the HD 3000) and twice the cores. Intel can't touch that until Ivy Bridge.

quote:

Laptops: Cut SB prices against Llano. Actually Intel doesn't even need to do anything, since dual-core SB laptops with Llano-grade GPUs are already competitive as it is, pricing-wise (considering how slow the Llano CPU is).

Netbooks: Release a single-core SB. Zacate? What is that again?
See above regarding graphics and core count. A single-core SB also couldn't compete with Zacate because of the high power cost of its support chips. Intel could design a new SoC around a power-tweaked Sandy Bridge core, but that's very non-trivial work, and they're already working on the Silvermont Atom for 2013, which is supposed to be actually good this time. Or so they say.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
I wonder if it has any way of keeping two floating-point threads from being assigned to the same module? There were cases back in the day where HyperThreading reduced performance significantly, and it seems like you could hit similar problems on Bulldozer.
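Until the scheduler learns about modules, an application can sidestep the question by pinning its FP-heavy threads to separate modules itself. Here's a minimal Linux sketch using pthread affinity; the assumption that logical CPUs (0,1), (2,3), etc. pair up within one module matches how these chips were commonly enumerated, but nothing guarantees it, so treat the mapping as hypothetical:

```c
/* Sketch: pin each floating-point-heavy thread to the first logical
 * CPU of a distinct Bulldozer module, so no two FP threads share one
 * FPU. Assumes Linux, and assumes logical CPUs (0,1), (2,3), ... are
 * the two integer cores of one module -- a common enumeration, but
 * not something the OS guarantees. Compile with -pthread. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

static void *fp_worker(void *arg)
{
    long module = (long)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET((int)(module * 2), &set); /* first integer core of module */
    pthread_setaffinity_np(pthread_self(), sizeof set, &set);
    /* ...floating-point-heavy work would go here... */
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (long m = 0; m < 4; m++)      /* one FP thread per module */
        pthread_create(&t[m], NULL, fp_worker, (void *)m);
    for (int m = 0; m < 4; m++)
        pthread_join(t[m], NULL);
    return 0;
}
```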

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
A quad-core Bulldozer should be compared to a dual-core Phenom II. Each Bulldozer module contains two integer cores and one floating-point unit, so a "quad-core" Bulldozer is only two modules.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
How does the first Bulldozer review describe performance? "Downright tragic."

If those numbers are anywhere near what the platform should be producing, AMD is screwed. It's simply not performing worth a damn in multi-threaded applications, and that should be its strong suit. The fact that in some tests it even loses to a Phenom II leaves me at a loss for words.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

trandorian posted:

Intel vs. ARM isn't even a battle. Intel laptops can run for 7 hours (Macbook Air, Macbook Pro) or more time (netbooks) all while having way more computing power to hand than any ARM chip.

The latest, most cutting edge, 8 core ARM chips are on par, performance wise, with a Core 2 Duo from 2006, and take just as much power to do that as said Core 2 Duo chips did. Cost more too. It would take a radical change in a lot of things to get comparable performance for everyday non-trivial Windows applications on the ARM platform, and if things actually started getting close, Intel could simply license ARM themselves and start cranking them out again.
I'm not sure where these rumors of poor performance scaling on ARM got started, but they simply aren't true. The Dhrystone MIPS performance numbers from this Wikipedia article are illuminating. To start with, performance on a currently-available ARM Cortex A9 is almost exactly double the clock-for-clock, core-for-core performance of the Intel Atom. The upcoming ARM Cortex A15 architecture dramatically improves performance and efficiency: a dual-core Cortex A15 at 2.5GHz would be similar in performance to an Athlon 64 X2 4200+ dual-core CPU, and a quad-core Cortex A15 at 2.5GHz would beat any Intel Core 2 Duo ever produced, including Core 2 Extreme processors. As for power usage, you can buy cellphones with dual-core 1.5GHz Cortex A9s today, and while the Cortex A15 is more powerful and complex, it also heralds the shrink to 28nm. While a quad-core Cortex A15 would be more like a powerful tablet or netbook processor, we're still talking <5W maximum.

The reality is that with the move to an out-of-order architecture in the Cortex A9, ARM processors became performance-competitive, especially against low-power x86 designs like the Atom. With the move to the Cortex A15, per-core performance rises to a level that begins to be adequate for desktop workloads. Paired with a current-generation mobile GPU, you have performance that's better than a 360/PS3, meaning even gaming isn't outside the realm of possibility. When you consider whether such a system would be usably fast, remember that people were happy to buy Atom netbooks and nettops with a tiny fraction of that raw CPU power and no ability to play video or do 3D; an ARM processor can do all that AND is unencumbered by a legacy x86 Windows codebase. We're not going to see ARM beating Core i5s in desktop performance benchmarks, but i3s might be within reach, and at a fraction of the power usage.

Edit: I should note that it's hard to directly compare a mobile processor built on a low-power manufacturing process to a desktop one built on a high-performance process, much less processors with vastly different CPU architectures. If ARM decided to attack the desktop market directly, they'd probably design a processor built on a high-performance process that scales to high clockspeeds. That's a lot of work and would take a long time, so it's likely they'll just attack the periphery of the market with the Cortex A15, where Intel/AMD-level performance isn't required. Keep an eye on nVidia, though: they have a project to develop a high-performance ARM processor using the Code Morphing technology they licensed from Transmeta. Code Morphing was originally used to run x86 code faster and more efficiently on a special processor custom-designed for the purpose, the Transmeta Crusoe. nVidia licensed it with the goal of making x86 programs run on their ARM processors, but Intel sued to put the kibosh on that. Instead, nVidia will go back to Transmeta's roots and try to use Code Morphing to run ARM code on a custom-designed CPU even faster and more efficiently than it would run on an ARM CPU.

Alereon fucked around with this message at 12:41 on Oct 9, 2011

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Just for anyone wondering, final reviews will be up at 12:01AM Eastern, which is in about an hour.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Distressing fact: The DIPS/clock/core (number of Dhrystone integer operations per clock cycle, per core) for Bulldozer is 3.78, which is pretty similar to the upcoming ARM Cortex A15 at 3.5. For comparison, Sandy Bridge scores 9.43.
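As a back-of-the-envelope check, chip-level Dhrystone throughput is just that per-clock, per-core figure times clock times core count. The clock speeds and core counts below are illustrative assumptions for comparison, not benchmark results:

```c
/* Back-of-the-envelope chip-level Dhrystone throughput from the
 * per-clock, per-core figures quoted above. Clocks and core counts
 * are illustrative assumptions, not measured results. */
#include <stdio.h>

static double dmips(double per_clock_per_core, double ghz, int cores)
{
    return per_clock_per_core * ghz * 1000.0 * cores; /* GHz -> MHz */
}

int main(void)
{
    printf("Bulldozer    8c @ 3.6GHz: ~%.0f DMIPS\n", dmips(3.78, 3.6, 8));
    printf("Cortex A15   4c @ 2.5GHz: ~%.0f DMIPS\n", dmips(3.50, 2.5, 4));
    printf("Sandy Bridge 4c @ 3.4GHz: ~%.0f DMIPS\n", dmips(9.43, 3.4, 4));
    return 0;
}
```

The distressing part falls out of the arithmetic: under these assumptions a quad-core Sandy Bridge out-throughputs an 8-core Bulldozer while issuing two and a half times the work per core per clock.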

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Factory Factory posted:

That said, I'm getting to the benchmarks (finally) in Tom's review, and the multicore performance really is pretty good. Even hobbled by whatever inefficiencies may be from the module-not-cores architecture, the FX-8150 is holding up extremely well in content creation apps. Sometimes it's the out-and-out winner between the 2600K, 2500K, and 1100T, sometimes it's in between those, and sometimes...

Well, okay, sometimes it's outperformed by a quad-core Phenom II. Still, my expectations have been minimized sufficiently that this is slightly impressing me.
I would consider an 8-core processor that can't quite equal a quad-core to be a pretty serious failure. The HardOCP Cinebench numbers show Bulldozer BARELY beating a Phenom II X6, and losing slightly to the i7 2600K. Things are a bit better in POV-Ray, but I'd definitely say that multi-threaded performance is far below expectations. I never really expected per-core performance to be good, but I at least thought it would win pretty handily in heavily multi-threaded integer workloads, and that is definitively not the case. I would have also hoped that per-thread floating-point performance would go up over Phenom II, but instead it seems to have dropped, pretty seriously when you consider that Bulldozer has a 200-500MHz clock speed advantage, depending on how effective Turbo Core is.

Bonus Edit: Anandtech's review has been delayed because Anand Lal Shimpi was hospitalized yesterday (he's fine now). They're hoping to have it up "soon", which should hopefully mean tonight.

Alereon fucked around with this message at 06:24 on Oct 12, 2011

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Longinus00 posted:

Cinebench and POVRay are integer workloads?
I certainly could be wrong, but I thought the actual workload was primarily integer, and some quick Googling seems to verify this.

Edit: Yes I was in fact wrong.

Alereon fucked around with this message at 07:42 on Oct 12, 2011

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

SourKraut posted:

I almost feel like getting a FX-8150 solely to give AMD some pity cash.
I stopped reading the Anandtech Bulldozer thread because posters were unironically telling people that it doesn't matter how much Bulldozer sucks, they still need to buy it so Intel would have competition. "You should willingly buy garbage so that the other guy won't get away with just making garbage!"

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Maxwell Adams posted:

So... Intel is dropping the price on i5's when bulldozer hits the market, right?
The i5 2500K is positioned against the FX-8120, which is 500MHz slower than the FX-8150. In that matchup the 2500K is the clear winner, so Intel doesn't have much reason to lower prices. On the other hand, dropping the i7 2600K down to ~$250 or so (or even just replacing it with the i7 2700K) might happen.

Bonus Edit: VR-Zone has done a quick test of memory bandwidth scaling on Bulldozer; it seems that basic DDR3-1600 is required for optimal performance, but going beyond that or using lower-latency modules provides minimal benefit. It would be interesting to see how data compression is affected, since that's very memory-bandwidth sensitive.
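To see what "memory-bandwidth sensitive" means in practice: a streaming access pattern that touches far more data than fits in cache ends up limited by DRAM, not the cores. A minimal STREAM-triad-style sketch (the array size and methodology are my own invention, not VR-Zone's test):

```c
/* Minimal STREAM-triad-style loop: streams ~384MB through memory,
 * far too much for any cache, so the measured rate approximates
 * sustained memory bandwidth. A sketch, not VR-Zone's methodology. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24) /* 16M doubles (~128MB per array) */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;
    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i]; /* triad: two streamed reads, one write */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("~%.2f GB/s (check: %g)\n", 3.0 * N * sizeof(double) / sec / 1e9,
           a[N / 2]); /* read a result so the loop isn't optimized away */
    free(a); free(b); free(c);
    return 0;
}
```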

Alereon fucked around with this message at 09:47 on Oct 12, 2011

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

HalloKitty posted:

Here's a wild card: Hardware Heaven's review
http://www.hardwareheaven.com/revie...revolution.html

Seems completely off to me, can anyone spot the problems with it? I'm more inclined to believe AnandTech, but it's interesting..
It looks like a problem with their test selection. If I were in a charitable mood I'd say they didn't use an adequate variety of tests; if I weren't, I'd say they only published tests where Bulldozer did reasonably well.

Edit: As I look further this really does just seem like a marketing vehicle, given that they're reviewing the combination of the CPU, motherboard, and videocard.

Alereon fucked around with this message at 15:32 on Oct 12, 2011

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Bob Morales posted:

It's almost like they cherry-picked the games so it wouldn't look so bad. With the gratuitous amount of AMD logos on the page...

They still show it far behind the i7 on the other tasks like encoding/playback. They used the faster RAM but it's only a few % improvement according to some other sites. Plus they don't show the 2500k in the ratings which wouldn't help.
Yeah, I read further, and the conclusion and 9/10 rating offended me enough to call them out on their own forums for lack of editorial integrity and writing reviews to please their sponsors. My favorite part was when the reviewer said Bulldozer needed a lower price, and still gave it a 9/10 for value. My second-favorite part was where they didn't mention the higher power usage in the conclusion at all, or even on the page with the power usage numbers.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Agreed posted:

Who thought up the modules=cores idea? Why? If it was an engineer I don't understand it, if it was a marketing guy fire the fucker now.
I don't think the idea is necessarily flawed, just the implementation. It looks like the key mistakes were focusing on high clock speeds at the expense of IPC, and then being unable to reach those clock targets due to process teething issues. Well, that and making the processor so big, with so much cache, that they didn't have the transistor budget left to make it good. It really is the Willamette Pentium 4 all over again (though that had the opposite cache problem: not remotely enough).

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Agreed posted:

The -idea- isn't stupid, it's actually pretty cool. 2 128bit or 1 256bit FP, that's neat. But advertising it is awful. At their best these will perform like modern 4-core processors. A module is not a core in the sense that people expect something WOWEE from an 8-core processor. Though it sure uses power like an 8-core...
While that's true for floating-point workloads, most people really care about integer performance. If Bulldozer actually performed like an 8-core for integer work but a quad-core for floating point, pretty much everyone would consider that a good deal. Except somehow they managed to make it perform like a slower hex-core at best.

Star War Sex Parrot posted:

I'm expecting big things from Southern Islands.
My goal is to get a Radeon HD 7870 for the holidays, hopefully one of the initial cards that trickle onto the market before we get volume in Q1. If things go according to plan it will basically be a tweaked, higher-clocked Radeon HD 6970 with lower power usage and a lower price, which is all I could possibly ask for. At this point I'm concerned about how well the 7900-series will turn out, given that it's completely unlike any GPU anyone's ever designed before, so drivers and overall performance are an unknown quantity.

Alereon fucked around with this message at 17:01 on Oct 12, 2011
