|
WhyteRyce posted:No one was talking about performance of Medfield vs. ARM's latest offerings, just that the opinion of a lot of armchair architects seemed (or still seems) to be that x86 is just too inefficient and wasteful to ever be used in small, low-power applications.

To be clear, the comparison I was referencing between Medfield and ARM processors was against the ARM processor generation previous to Medfield, which is why it turned in numbers that were in the same ballpark. Medfield devices (were there more than one?) were released at about the same time that higher-clocked 32nm Cortex A9 SoCs replaced the 45nm ones, and today we have 32nm Krait cores (Qualcomm's A9/A15 hybrid) in shipping phones and Cortex A15 on the way. 2013-2014 are going to be pretty fun years if Intel doesn't completely gently caress up Silvermont. I think we'll find Silvermont-based SoCs having significantly faster single-threaded CPU performance than ARM, but with ARM SoCs continuing to have far more product wins due to better and more flexible bundled IP blocks. Intel's switch to an in-house GPU will also give the ARM SoC vendors a long lead time where they can rely on lovely graphics drivers to keep Intel from being competitive.

Colonel Sanders posted:I think Apple certainly has enough $$$ to buy AMD, and I don't think there are many other tech companies with that much cash. Assuming Apple bought AMD and the R&D department, I think Apple has the funds needed to compete against Intel.

I don't think it would ever happen. Why would Apple want to design their own CPUs? It would mean diverting funds away from developing crazy poo poo like retina displays. But more importantly, to make a profit off research $$$, Apple would have to sell CPU designs to 3rd parties, and I really don't believe Apple wants to do that.
|
# ? Oct 17, 2012 02:04 |
|
|
|
Alereon posted:By "busted" I guess you meant "confirmed", as Medfield solidly proved that it will take a completely new microarchitecture for Intel x86 offerings to be competitive with ARM. Keep in mind that the phones 32nm Medfield was tested against were using last-gen 45nm dual-core Cortex A9 processors. Intel did write excellent x86 browser code that is responsible for a meaningful browsing performance difference versus older ARM products, but currently-shipping ARM processors beat it because they are just that much faster (and other SoC vendors can and do write custom ARM browser code to improve performance). When you consider that the GPU (PowerVR SGX 540) was obsolete at launch (and Intel doesn't seem likely to make any improvements in the future), and the poor battery life compared to dual-core 45nm ARM SoCs, it's clear why Intel isn't a player in the smartphone market.

Maybe I'm missing something. In AnandTech's iPhone 5 review, the Droid Razer i shows to be a significant performer, even compared to the Apple A6, even as a single core, dumpy ol' Atom.
|
# ? Oct 17, 2012 13:30 |
|
Factory Factory posted:Maybe I'm missing something. In AnandTech's iPhone 5 review, the Droid Razer i shows to be a significant performer, even compared to the Apple A6, even as a single core, dumpy ol' Atom. Anandtech had an article with some benchmarks of similar OEM-delivered optimizations provided for the stock Android browser by Qualcomm, I believe, but I'm having trouble finding the article.
|
# ? Oct 17, 2012 20:23 |
|
I agree. x86 isn't imposing any limitations on the user experience.
|
# ? Oct 18, 2012 00:00 |
|
I think this is interesting: http://www.tomshardware.com/news/amd-ultramobile-tablet-apu-cpu,18546.html In particular:

Tom's Hardware posted:As a result, he is not only reducing AMD's cost, he is also moving away from the PC market as a whole. "40 to 50 percent" of AMD's future business will not be focused on PCs. Instead, he will aim half of AMD's business at three areas: servers, which will leverage AMD's own CPUs and "third-party" CPUs and will count on SeaMicro's server fabric to provide custom solutions; "semi-custom" APUs for the gaming, industrial and communications market; and APUs aimed at ultramobile devices.

AMD expects to generate 40-50% of their business in Q3 2013 in areas where they have little or no presence? And by 3rd-party CPUs I assume they mean ARM, but other than the TrustZone thing, they haven't announced any work with ARM cores. There are already a lot of companies producing ARM chips, so I'm not sure what AMD could really bring to that market. Sure, the market for low-power chips has been expanding (even Intel is taking low power seriously), but it seems like AMD is a bit late to the party and will just be squeezed out on both the high-end performance side and the low end, since they can't match the power consumption of Intel or ARM chips.
|
# ? Oct 19, 2012 05:10 |
|
I think the "third party CPUs" thing is Tom's Hardware being lovely when summarizing the call. Xbitlabs outlined the growth areas as:

1. Low-power servers
2. Game consoles and embedded/industrial applications
3. Ultra-mobile computing

I think THG got the "third party" reference from sticking together AMD chips with third party (SeaMicro and formerly Cray Research) glue logic. The idea is that low-power APUs have the possibility to be awesome in servers because you get >Atom performance and power efficiency, combined with capable GPUs that can be used for hardware acceleration. The 50% target doesn't seem too unreasonable when you consider that AMD is providing at least the GPU for all three next-gen consoles, and shrinking CPU revenues will actually help them reach this goal. I think they have had some traction with their embedded Radeons, so Trinity APUs and their successors have some potential there. In the ultramobile market, AMD has Bobcat and its successors positioned as higher-performance, graphically capable alternatives to Atom. Silvermont might improve Atom's CPU competitiveness in 2013, but I think AMD's GPU advantage will likely be compelling for at least two years. 10W Haswells are a risk here, however.
|
# ? Oct 19, 2012 18:19 |
|
Alereon posted:The 50% target doesn't seem too unreasonable when you consider that AMD is providing at least the GPU for all three next-gen consoles, and shrinking CPU revenues will actually help them reach this goal. I think they have had some traction with their embedded Radeons, so Trinity APUs and their successors have some potential there. I hadn't considered the consoles, which would certainly help AMD. Especially if they have a full APU rather than just the graphics, as the new Xbox is rumored to have.
|
# ? Oct 21, 2012 04:52 |
|
e - I'll post that in the GPU thread.
GRINDCORE MEGGIDO fucked around with this message at 10:20 on Oct 22, 2012 |
# ? Oct 22, 2012 10:10 |
|
It's nice to see AMD is still doing things for their hardcore customers like throwing LAN parties, given the current situation. http://www.tomshardware.com/picturestory/610-extravalanza-lan-party-byoc.html
|
# ? Oct 22, 2012 13:29 |
|
So, The Tech Report has just released their review of the FX 8350. It does well in some multi-threaded tests, like x264 encoding and some of the rendering benchmarks, in a couple instances even beating the 3770K. Single-threaded performance is poo poo though. Overclocking was more of the same. There's not much headroom in the CPUs, and they require a ludicrous voltage. The sample wasn't stable at 4.5 GHz with 1.5375V. Apparently AMD said that you can "...expect something closer to 5 GHz". Power consumption at this setting was about 60W higher than at stock (262W when overclocked). For most of us, don't bother, because any i5 is a better gaming value.

EDIT: Anand's review basically corroborates Tech Report's, although they had a better overclocking experience, with the 8350 reaching 4.8 GHz. He didn't say exactly what the voltage was (10% above stock), but power consumption jumped 100W over stock.

unpronounceable fucked around with this message at 06:22 on Oct 23, 2012 |
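That power jump is about what simple dynamic-power scaling predicts. A rough sketch (the ~1.30 V stock voltage below is my assumption; the reviews only quote the overclocked 1.5375 V figure):

```python
# Rough dynamic-power scaling for the FX-8350 overclock discussed above.
# Dynamic power goes roughly as C * V^2 * f, so the voltage bump hurts far
# more than the clock bump. Stock voltage of ~1.30 V is an assumption.

def dynamic_power_ratio(v_stock, f_stock_ghz, v_oc, f_oc_ghz):
    """Overclocked vs. stock dynamic power; the capacitance term cancels."""
    return (v_oc / v_stock) ** 2 * (f_oc_ghz / f_stock_ghz)

ratio = dynamic_power_ratio(v_stock=1.30, f_stock_ghz=4.0,
                            v_oc=1.5375, f_oc_ghz=4.5)
print(f"~{(ratio - 1) * 100:.0f}% more dynamic power")  # roughly +57%
```

A ~57% rise in CPU dynamic power on a 125W-class part lands in the same ballpark as the roughly 60W system-level increase Tech Report measured, which is why voltage-hungry overclocks get ugly fast.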
# ? Oct 23, 2012 05:58 |
|
The FX-8350 is really the product Bulldozer should have been. Unfortunately, Ivy Bridge is still better, but at least it's not humiliating. I'm not confident about AMD making up more ground with the next generation; I fully expect Haswell to exceed expectations, with improved clockspeed scaling due to greater experience with the 22nm process. I'm also hoping Intel goes back to using soldered heat spreaders, or at least metallic thermal paste. The paste was a big limiter of overclocking with Ivy Bridge, and Haswell's improved IPC should make it a MONSTER when overclocked.
|
# ? Oct 23, 2012 06:41 |
|
On the lower end of things, at least the FX 4300 competes nicely with the i3-3220 (http://www.anandtech.com/bench/Product/700?vs=677), and at a similar/lower price point too. Shame it's still stuck at 95W TDP though.
|
# ? Oct 23, 2012 08:07 |
|
Alereon posted:I'm not as confident about AMD making up more ground with the next generation, I fully expect Haswell to exceed expectations with improved clockspeed scaling due to greater experience with the 22nm process. If I remember correctly, AMD's next process isn't even 22nm. They'll be moving from 32nm to 28nm, which as you said will only widen Intel's lead.
|
# ? Oct 24, 2012 04:57 |
|
I ask this not as a troll, but as a genuinely curious person: How is it that AMD fell this far behind in the CPU arms race? I mean 10 years ago they were neck-and-neck with Intel.
|
# ? Oct 24, 2012 05:31 |
|
chocolateTHUNDER posted:I ask this not as a troll, but as a genuinely curious person: Intel woke up and stopped pulling stupid poo poo; and had the advantage of all their money to follow through.
|
# ? Oct 24, 2012 05:38 |
|
Install Gentoo posted:Intel woke up and stopped pulling stupid poo poo; and had the advantage of all their money to follow through. Add onto that how performance CPU design and manufacture not only are fantastically expensive, but become more so with each successive generation. Also, even when Intel was behind in performance it still leveraged its name to keep a lot of the most lucrative contracts so even when AMD was on top making lots of money was hard for them. Finally, AMD made some big bets on risky gambles that didn't pan out.
|
# ? Oct 24, 2012 05:58 |
|
Killer robot posted:Add onto that how performance CPU design and manufacture not only are fantastically expensive, but become more so with each successive generation. Also, even when Intel was behind in performance it still leveraged its name to keep a lot of the most lucrative contracts so even when AMD was on top making lots of money was hard for them. Finally, AMD made some big bets on risky gambles that didn't pan out. Care to name some of those risky bets that didn't pan out? This kind of stuff fascinates me
|
# ? Oct 24, 2012 06:06 |
|
chocolateTHUNDER posted:Care to name some of those risky bets that didn't pan out? This kind of stuff fascinates me Bulldozer
|
# ? Oct 24, 2012 06:12 |
|
Install Gentoo posted:Intel woke up and stopped pulling stupid poo poo; and had the advantage of all their money to follow through. Not to mention the large variety of products Intel makes...ethernet controllers, flash, etc. Lots of cash sources.
|
# ? Oct 24, 2012 06:15 |
|
Wasn't Intel also doing illegal things like paying their customers to not buy AMD CPUs?
|
# ? Oct 24, 2012 08:31 |
|
Ragingsheep posted:Wasn't Intel also doing illegal things like paying their customers to not buy AMD CPUs? This would be the number one reason AMD was unable to get more revenue when they were on top. Intel had to pay a relatively paltry sum because of it.
|
# ? Oct 24, 2012 12:27 |
|
chocolateTHUNDER posted:I ask this not as a troll, but as a genuinely curious person: Part of it may have been the spinning off of manufacturing. At this level, making a CPU involves both design and manufacturing considerations; you can't do them separately. Many of AMD's designs just couldn't be manufactured with decent yield by GlobalFoundries.
|
# ? Oct 24, 2012 13:40 |
|
Well, AMD's lackluster single-core performance finally caused me to jump ship after 12 years of AMD processors, starting with the K6-2. I'll keep an eye on AMD's lineup, but I just don't see how they can recover from Bulldozer. I guess with all their rumored design wins in the console market they could scrape up the cash to come back. I was reminded of an interesting article about Bulldozer, and I don't know if it's been posted, but: http://www.xbitlabs.com/news/cpu/display/20111013232215_Ex_AMD_Engineer_Explains_Bulldozer_Fiasco.html

SYSV Fanfic fucked around with this message at 14:31 on Oct 24, 2012 |
# ? Oct 24, 2012 14:20 |
|
chocolateTHUNDER posted:Care to name some of those risky bets that didn't pan out? This kind of stuff fascinates me They must have figured that 8 integer cores with poo poo FPU performance would run today's software faster. That's what you get by assuming programmers and compilers were going to magically cause all software to be incredibly multi-threaded. Unfortunately it's still true that fewer, faster cores are better than more, slower cores. Unless of course you're running a handful of applications that can be easily multi-threaded.
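The "fewer, faster cores" point drops straight out of Amdahl's law. A quick sketch (the 30% per-core speed edge for the quad-core is an illustrative assumption, not a measured number):

```python
# Amdahl's-law sketch of "8 slower cores vs. 4 faster cores".
# The 1.3x per-core speed advantage for the quad-core is an illustrative
# guess, not a benchmark result.

def speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup for a given parallelizable fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for p in (0.5, 0.9, 0.99):
    eight_slow = speedup(p, 8)          # 8 cores at baseline speed
    four_fast = 1.3 * speedup(p, 4)     # 4 cores, each 30% faster
    winner = "8 slow" if eight_slow > four_fast else "4 fast"
    print(f"parallel fraction {p:.2f}: {winner} cores win")
```

With only half the work parallelizable, the four fast cores win outright; the eight-core layout only pulls ahead once workloads are heavily threaded, which is exactly the bet that didn't pay off for Bulldozer.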
|
# ? Oct 24, 2012 14:35 |
|
keyvin posted:I was reminded of an interesting article about Bulldozer, and I don't know if it's been posted, but:

That seems like a rather myopic take. You're asking the guy who was in charge of X why a massive engineering effort involving hundreds of people across dozens of unrelated disciplines failed, and surprise, surprise, it's some decision about X!!
|
# ? Oct 24, 2012 18:37 |
|
keyvin posted:I was reminded of an interesting article about Bulldozer, and I don't know if it's been posted, but:

Yeah, posted, digested, and puked back up as largely nonsense. That's a facile explanation for the product's many substantial failures at market, and comes from a "told you so!" perspective. The guy saying if they'd only done things the way HE thinks they should have been done, well, by golly, it'd have turned out differently has hindsight to make him look more correct than he actually is.

Really, automated layout is pretty much necessary; we're talking a staggering number of incredibly tiny parts. Humans working in conjunction with automated layout tools can produce exceptional products, see: everything else of similar or greater technological sophistication.

Really it's a whole mess of different problems, from inefficient resource utilization to thread scheduling failures (leading to, basically, an either/or "fix" that helped some things and hurt others), but I think just a lot of bad decisions rolled up together made for a product that nobody was particularly interested in. Piledriver's current performance should have been Bulldozer's performance when Bulldozer launched, except with greater power efficiency and better relative IPC.

And yeah, a ton of it goes back to Intel's outright anticompetitive practices back when AMD had some lean, badass CPUs that were cleaning Intel's clock for efficiency, performance, and multitasking (even with Hyperthreading, the shorter execution pipeline and drastically less wasteful processing of the Athlon XP series of processors made them better at serialized multitasking than any single-core iteration of the Pentium 4).
At no point did AMD ever go above 25% market share in servers despite dramatic superiority in their processors, and that's all on Intel, using bags of money to block paths for what could have been AMD's ascendancy in the market, through forced bundling and a lot of pressure on partners at all levels. AMD never really got a leg up after that, while Intel was able to keep selling Pentium 4 processors even though they sucked, all the while using their superior resources to go back to the drawing board and build on the P6 development branch. The first sign that Intel had something exciting coming was the Pentium M, a processor that was pretty sickeningly close to AMD's designs at the time in terms of the basic engineering principles behind it. Past that, Intel never got a substantial punishment for what they had done to AMD, and AMD had issues growing beyond the Athlon 64 processors. Hell, they were already tapped out with the last of the Athlon XP processors; they never got the kind of performance they hoped for out of the Barton-core XPs. They barely saved the 64-bit Athlon/Phenom development track by increasing the cache available in the Athlon II/Phenom II, addressing a serious design flaw, but that was the K10 generation, and it went on forever. Bulldozer was supposed to succeed it. Meanwhile Intel had successfully Tick-Tocked their way to processors that were eating AMD's lunch and were more power efficient besides. Bulldozer was pushed back, and pushed back again... how many times? By the time Bulldozer launched, it would have had to compete on every level, and AMD's miscalculations torpedoed the chances of that happening. Even their marketing material was desperate, putting it up against processors that predated Sandy Bridge by a generation or more.
Then the first reviews came in and evidence piled up that Bulldozer wasn't even able to MATCH, let alone BEAT the performance of their own, year-2007-level Phenom II X4 processors when it came to single-threaded performance. Dark loving times.
|
# ? Oct 24, 2012 19:30 |
|
Bob Morales posted:They must have figured that 8 integer cores with poo poo FPU performance would run today's software faster. That's what you get by assuming programmers and compilers were going to magically cause all software to be incredibly multi-threaded. Unfortunately it's still true that fewer, faster cores are better than more, slower cores. Genuinely curious: if Bulldozer/Piledriver has poo poo FPU performance, then how come it's actually competitive in 3D rendering applications like POV-Ray and in video compression? That is 4 FPUs vs. 4 FPUs, right?
|
# ? Oct 24, 2012 22:30 |
|
It's four double-wide (256 bit) FPUs that can be automagically partitioned into two 128-bit pipelines in the same situations where something like Hyperthreading would be useful. Intel's FPUs, while also 256-bit wide, cannot subdivide like that. On workloads that aren't as parallelizable or rely on 256-bit wide FP instructions, those double-wide FPUs can't be scheduled as well and work a lot more like four than "eight."
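As a toy model of that arrangement (illustrative only, not anywhere near cycle-accurate):

```python
# Toy model of the shared "flex FP" unit described above: each of
# Bulldozer's four modules has one 256-bit FPU that can split into two
# 128-bit pipelines. Illustrative sketch, not a real simulation.

def fp_threads_served(modules, threads, op_width_bits):
    """How many threads can issue an FP op in the same cycle."""
    if op_width_bits <= 128:
        # Split mode: two independent 128-bit pipes per module.
        return min(threads, 2 * modules)
    # A 256-bit op occupies the whole unit, so only one thread per module.
    return min(threads, modules)

print(fp_threads_served(4, 8, 128))  # 8: behaves like "eight cores" for SSE-width work
print(fp_threads_served(4, 8, 256))  # 4: behaves like "four cores" for AVX-width work
```

That's why well-threaded 128-bit workloads like x264 and POV-Ray look fine on the chip while 256-bit or poorly-threaded FP work exposes it as four units.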
|
# ? Oct 24, 2012 23:12 |
|
Didn't Bulldozer have slightly better performance/power draw on Windows 8? Is that still the case with Piledriver?
|
# ? Oct 25, 2012 05:56 |
|
Maxwell Adams posted:Didn't Bulldozer have slightly better performance/power draw on Windows 8? Is that still the case with Piledriver? Emphasis on the slightly here; most computing tasks still aren't suited to it, but the OS can schedule things onto it slightly better. Any recent Intel chip still wipes the floor with it in most use.
|
# ? Oct 25, 2012 06:04 |
|
The same scheduling patches made it in to Windows 7, reducing the difference between the two OSes to near-nothing. Interestingly, Bulldozer can be scheduled differently depending on your priorities, using the power management settings. You can schedule a module like two full cores, and this increases per-core performance and power efficiency because it allows higher turbo states when other modules are parked. Alternatively, you can schedule a thread per module before loading up a second, and this yields higher threaded performance at the expense of some power efficiency, as modules can't be idled as much.
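A simplified sketch of those two policies (the "pack"/"spread" labels and the logic are my own illustration, not the actual Windows scheduler):

```python
# Sketch of the two Bulldozer module-scheduling policies described above.
# "pack": fill a module with two threads before touching the next, so the
#         unused modules can park and the busy ones can turbo higher.
# "spread": give each thread its own module first, maximizing per-thread
#           resources at the cost of keeping more modules awake.
# Labels and logic are illustrative, not the real OS scheduler.

def assign_threads(n_threads, n_modules, policy):
    """Return per-module thread counts (max two threads per module)."""
    load = [0] * n_modules
    for _ in range(n_threads):
        if policy == "pack":
            # First module that still has a free slot.
            target = next(i for i, c in enumerate(load) if c < 2)
        else:  # "spread"
            # Least-loaded module gets the next thread.
            target = load.index(min(load))
        load[target] += 1
    return load

print(assign_threads(4, 4, "pack"))    # [2, 2, 0, 0]: two modules can park
print(assign_threads(4, 4, "spread"))  # [1, 1, 1, 1]: full module per thread
```

With four threads on four modules, "pack" leaves two modules parked for turbo headroom, while "spread" gives every thread an unshared FPU and front-end, which is the threaded-performance-vs-efficiency trade described above.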
|
# ? Oct 25, 2012 06:34 |
|
Is an A10-5800K with a 6670 going to cut it playing Battlefield 3 at 1600x1050 on high?
Pegged Lamb fucked around with this message at 15:06 on Oct 25, 2012 |
# ? Oct 25, 2012 14:59 |
|
wikipe tama posted:Is an A10-5800k going to cut it playing Battlefield 3 at 1600x1050 on high http://www.tomshardware.com/reviews/trinity-gaming-performance,3304-7.html Nope.
|
# ? Oct 25, 2012 15:03 |
|
Maxwell Adams posted:http://www.tomshardware.com/reviews/trinity-gaming-performance,3304-7.html Sorry I forgot to mention the passive card. Looks like the answer from what I can find is no here as well Pegged Lamb fucked around with this message at 15:15 on Oct 25, 2012 |
# ? Oct 25, 2012 15:08 |
|
wikipe tama posted:Sorry I forgot to mention the passive card. Looks like the answer from what I can find is no here as well
|
# ? Oct 25, 2012 18:30 |
|
I'll probably go with that but the box says the gpu can combine with a 6670 (or 6570, or 7670) for some kind of crossfire boost. If it can't even play Skyrim on medium at high it's pretty much crap. I need to do more research before buying stuff Pegged Lamb fucked around with this message at 21:47 on Oct 25, 2012 |
# ? Oct 25, 2012 21:34 |
|
wikipe tama posted:I'll probably go with that but the box says the gpu can combine with a 6670 (or 6570, or 7670) for some kind of crossfire boost. That capability is technologically neat and also a useless gimmick for all practical intents and purposes.
|
# ? Oct 25, 2012 22:29 |
|
Speaking from experience (A8-3500M laptop w/ Radeon 6750M discrete GPU), Crossfire only serves to make everything slow. I get much better performance just running the discrete by itself. It really is useless.
|
# ? Oct 25, 2012 22:49 |
|
I'm really interested in the FX 8350. It still has trouble competing with Intel in some areas, but it looks like it performs significantly better than my lovely FX 6100. Video encoding on that processor is painfully slow.
|
# ? Oct 27, 2012 16:23 |
|
|
|
Mill Village posted:I'm really interested in the FX 8350. It still has trouble competing with Intel in some areas, but it looks like it performs significantly better than my lovely FX 6100. Video encoding on that processor is painfully slow. The 8350 is OK to a certain degree, it seems, but is still ridiculous in terms of power consumption, and has almost no overclocking headroom on air, unlike the Intel gear. How did you end up with a Bulldozer in the first place?
|
# ? Oct 27, 2012 17:26 |