Factory Factory
Mar 19, 2010


Rastor posted:

Interesting rumors of an AMD "project Fastforward" which seems aimed at reducing memory bottlenecks. Could this mean your RAM would be bundled on the package with the APU?

Edit: or is it more like it would be on the motherboard or otherwise placed near the processor?

On the package. Stacked DRAM is new but not unheard of - it's been in the pipeline for, I believe, Nvidia Volta ever since that was revealed on Nvidia's roadmap, and Intel's Knights Landing many-core CPU/coprocessor uses stacked DRAM as well. E: Volta was pushed back; stacked DRAM will be in Pascal, a new uarch between Maxwell and Volta.

However, it's expensive and low-yield, so don't expect it to replace the entirety of system RAM. For example, Knights Landing has ~16 GB of stacked DRAM to match up with ~60-70 Silvermont Atom cores. Expect it to be used as a last-level cache in smaller amounts instead.

Factory Factory fucked around with this message at 13:11 on Jul 22, 2014

Factory Factory
Mar 19, 2010

I briefly tried streaming Batman: AO and Civ 5 at 1920x1080 to an Atom Z3770 tablet, which is as close as you can get to a Q1900 without being a Q1900. Worked fine. Host specs very similar to mayodreams'. Gigabit to two-stream 802.11n 5 GHz (i.e. a 300 Mbps client with zero spectrum crowding).

Factory Factory
Mar 19, 2010

If you can't afford Intel, how can you afford the electricity to run AMD? :smuggo:

Factory Factory
Mar 19, 2010


adorai posted:

How much time does the CPU in your home PC spend at full load? At idle there isn't much difference.

I crunched the numbers on this for the previous system building thread. If you take the higher idle power...

[image: idle vs. gaming power draw comparison]

...and then you game for four to eight hours per week or so, then at $0.20 per kWh and spending $350 for an Intel CPU/mainboard over $200 for AMD, you break even in two years. After two years, the TCO for an FX chip exceeds the TCO for an Intel chip.
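
Back-of-the-envelope, the arithmetic works out something like this Python sketch (the wattage deltas and the always-on assumption are illustrative guesses, not the exact figures I used; with these numbers you land in the two-to-three-year ballpark):

code:

# Break-even sketch for the Intel-vs-FX TCO argument above.
# The wattage deltas are illustrative assumptions, not measurements.
IDLE_DELTA_W = 30            # assumed extra idle draw of the FX box, watts
LOAD_DELTA_W = 100           # assumed extra draw while gaming, watts
GAMING_HOURS_PER_WEEK = 6    # middle of the 4-8 hours/week range
PRICE_PER_KWH = 0.20         # dollars
PRICE_GAP = 350 - 200        # Intel CPU/mainboard premium, dollars

# Assumes the machine is powered on 24/7.
idle_hours_per_week = 7 * 24 - GAMING_HOURS_PER_WEEK
kwh_per_week = (IDLE_DELTA_W * idle_hours_per_week
                + LOAD_DELTA_W * GAMING_HOURS_PER_WEEK) / 1000.0
extra_cost_per_year = kwh_per_week * 52 * PRICE_PER_KWH

print(f"extra electricity: ${extra_cost_per_year:.0f}/year")
print(f"break-even: {PRICE_GAP / extra_cost_per_year:.1f} years")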

E: Plus since it's lower performance, you'll want to upgrade sooner, and then the whole concept of "buy AMD for better price/performance" just shits itself completely.

Factory Factory
Mar 19, 2010


Col.Kiwi posted:

Huh?

http://ark.intel.com/products/80807/Intel-Core-i7-4790K-Processor-8M-Cache-up-to-4_40-GHz

(You're right in the point you're making but I think you're a bit confused on 4790k specs)

The turbo bin for 4 cores is a lower clock rate than the turbo bin for 1 core (which is what the 4.4 GHz maximum turbo refers to).

Factory Factory
Mar 19, 2010


Lord Windy posted:

Why do their FX chips suck so much? They draw more power and run faster in terms of GHz, but they just aren't as good as comparable Intel chips?

They don't do as much work per clock tick. They are severely outclassed by Intel in single-core performance, to the point where 8 FX cores at very high clocks struggle to keep up with 4 Intel cores at lower clocks. Programs that cannot use all eight cores (and there are a ton of them, especially games) are dominated by Intel.

And drawing more power isn't a good thing, just the opposite. For a given level of performance, it's better to achieve it using less electricity, not more. And Intel just kills AMD chips here, too.

There are all sorts of reasons as to why, but it all boils down to Intel doing a good job at CPUs for Core and AMD doing a bad job for A-series and FX.

Factory Factory
Mar 19, 2010


WhyteRyce posted:

Oh I see, you're talking about power/performance with Intel chips for laptops, not saying that the big (negative) deal with Intel is bad performance per watt on desktop.

What? That's referring to Core, not Netburst. Intel rather handily reversed the whole bad performance per watt thing.

Factory Factory
Mar 19, 2010

My first PC was built on the Asus A7A266, with the ALi Magik 1 chipset by Acer that enabled both SDR and DDR SDRAM support for Thunderbird Athlons.

I recall it was many, many years before motherboards were good enough that reviewers stopped routinely counting the number of crashes and bluescreens during testing.

Re: Ars' pre-post-mortem of AMD, this one blew my mind:

quote:

According to both reports at the time and to Ruiz’s own book, Nvidia was considered a potential acquisition target first, since the company had plenty of graphics experience and some of the best chipsets for AMD's K7 and K8-based CPUs. But Jen-Hsun Huang, Nvidia's outspoken CEO, wanted to run the combined company—a non-starter for AMD's leadership. The discussion then turned to ATI, which AMD eventually bought for $5.4 billion in cash and more in stock in October 2006.

A combined AMD-Nvidia would've been a loving POWERHOUSE. Who knows what would've happened to ATI, but AMD-Nvidia vs. Intel... Holy poo poo.

Factory Factory fucked around with this message at 17:34 on Aug 21, 2014

Factory Factory
Mar 19, 2010

It'd be tough to say. The CPU side would include their ARM uarch license and their SoCs, so it'd leave... Well, something that isn't ATI any more, but the way it shambles kind of resembles it from one angle.

Graphics is basically the only thing they do now that isn't entirely about putting CPUs in things, and a lot of the graphics stuff is about putting them in the CPUs that get put in things.

Factory Factory
Mar 19, 2010

I'd like a general CPU thread. Alereon talked about an SoC thread last year or something, but while that's neat, it's also less distinct from a CPU thread now that Haswell and future uarchs have SoC versions, and AMD is doing SoCs too, e.g. the Kabini AM1 chips and the ARM Cortex-A57-based Seattle server SoC. In the meantime, there's no place to talk about ARM stuff like Nvidia Denver or Apple Cyclone or whatever Qualcomm is doing lately.

Speaking of Qualcomm, fun fact that I had forgotten about its Adreno GPUs: those are the result of AMD selling off ATI's Imageon cores.

Factory Factory
Mar 19, 2010

Anyone else feelin' pretty dumb after the last few posts? Heck, I'm almost ready to buy an AMD CPU.

Factory Factory
Mar 19, 2010

One time I programmed a method for taking screenshots at high speed by installing FRAPS, setting up a small RAM drive, and having a while loop mash the screenshot hotkey. I am totally qualified to contribute meaningfully to this conversation.
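
In spirit, it was something like this Python sketch (assumes pyautogui for the key mashing and FRAPS's default F10 screenshot hotkey, with the output folder pointed at the RAM drive):

code:

# Mash the FRAPS screenshot hotkey in a loop. Assumes FRAPS is running,
# its screenshot key is the default F10, and its output folder sits on
# a small RAM drive so disk writes don't become the bottleneck.
import time

import pyautogui

pyautogui.PAUSE = 0              # drop pyautogui's built-in inter-key delay

stop_at = time.time() + 10       # mash for ten seconds
while time.time() < stop_at:
    pyautogui.press("f10")       # FRAPS saves one screenshot per press
    time.sleep(0.05)             # ~20 presses per second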

Factory Factory
Mar 19, 2010


eggyolk posted:

This makes a lot more sense. Thank you. It also makes me wonder why people speak about the shift from DDR3 to DDR4 as a relatively unimportant or inconsequential step. If memory bandwidth is such a limiting factor as you say, then why isn't there more development on the thinnest point of the bottleneck?

I finally have something useful to add:

There has been quite a bit of development! But rather than improving the DRAM bus, it's been more in the direction of using predictive prefetch and lots of cache, so that a significant percentage of accesses out to DRAM are in progress or finished by the time they are needed. These techniques and the underlying hardware, which are iterated on with each uarch generation, are combined with multiple levels of cache. Cache memory is closer to the chip than DRAM and so can be accessed with less latency and higher bandwidth, with the trade-off that the closer you get, the less room there is for large amounts of memory.
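
If you want to watch the prefetcher and caches earn their keep, a toy numpy benchmark makes the point (illustrative only; absolute numbers vary by machine) - same data and the same total work, but sequential access lets the hardware stream memory ahead of the core, while random access eats a cache miss on nearly every element:

code:

# Toy benchmark: sequential vs. random access over the same array.
import time

import numpy as np

data = np.arange(20_000_000, dtype=np.int64)
rand_idx = np.random.permutation(len(data))  # a random visiting order

t0 = time.perf_counter()
seq_sum = data.sum()               # sequential: prefetcher stays ahead
t1 = time.perf_counter()
rnd_sum = data[rand_idx].sum()     # random gather: mostly cache misses
t2 = time.perf_counter()

assert seq_sum == rnd_sum          # same work, different memory behavior
print(f"sequential: {t1 - t0:.3f}s   random: {t2 - t1:.3f}s")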

It's pretty much standard these days for the cache levels to be coherent. Nutshell: if one core is operating on an area of memory, another core will be able to see those changes in real time. This description is probably an oversimplification, so have a giant Wiki article I guess.
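
A loose illustration of the idea (Python threads, so the GIL is doing some of the lifting here; on real multicore hardware it's a coherence protocol like MESI that keeps each core's view of memory in sync):

code:

# One thread writes a shared location; another polls it and sees the
# updates appear without any explicit copying. On a real multicore
# chip, the cache coherence protocol is what makes this work.
import threading
import time

shared = {"value": 0, "done": False}

def writer():
    for i in range(1, 6):
        shared["value"] = i        # store becomes visible to the reader
        time.sleep(0.01)
    shared["done"] = True

def reader():
    seen = set()
    while not shared["done"]:
        seen.add(shared["value"])
    print("reader observed:", sorted(seen))

t_read = threading.Thread(target=reader)
t_write = threading.Thread(target=writer)
t_read.start()
t_write.start()
t_write.join()
t_read.join()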

As an example of progress in cache, here's a table I ganked from AnandTech on the iterations in cache size, latency, and bandwidth across three Intel uarchs:

[table: per-core cache size, latency, and bandwidth across three Intel uarchs]

These are only per-core numbers; L3 cache is shared among all the cores on the chip. For a fully decked-out quad-core die, there are 64 KB of L1 cache per core, 256 KB of L2 per core, up to 8 MB of L3 chipwide, and an optional 128 MB L4 (Crystalwell).

Most CPUs and APUs above the netbook/nettop level have three levels of cache before the memory controller ever has to be involved. Three-level cache was a supercomputing feature in 1995. In 2003, on-chip L3 cache was an esoteric server feature on Intel's Itanium (as well as optional off-chip L4). In 2008, AMD's Phenom II made L3 cache a high-end desktop feature. Now it's pretty much a given in a notebook or larger CPU, and it's becoming more common in ultra-mobile parts (especially ones with high-performance GPUs).

Some desktop/notebook Intel chips using integrated graphics implement an L4 cache, Crystalwell - an on-package eDRAM cache that's only 128 MB, but it runs super fast (compared to DRAM) and is something like half the latency of going all the way out to DRAM. In server chips, right this instant it's mostly just enormous L3 cache.

The Next Big Thing in avoiding RAM access is using stacked DRAM technology to put a shitload of memory right next to the CPU on the same package, used as last-level cache. Nvidia has it on the roadmap a few GPU uarchs down, Intel is already doing it with their current-gen many-core Knight's Whatsit (up to 16 GB of L4 cache for like 70 Silvermont Atom cores).

So it is getting worked on! But rather than simply making the bottleneck wider, development is going into avoiding reaching a bottleneck state in the first place.

Factory Factory fucked around with this message at 06:40 on Aug 29, 2014

Factory Factory
Mar 19, 2010

Building six AMD APU systems and want the stock coolers, but don't need all of those manuals? Save up to $3 per system with a six-pack.

I'm sure this makes business sense, somehow, but it seems a bit silly.

Factory Factory
Mar 19, 2010

AMD keeps slinging the 8-cores. Now it's the FX-8310, 95W TDP, 3.4 GHz, no turbo clock, $125. Multiplier is unlocked, though.

AMD really needs to get off Piledriver and do something new if they want to keep selling in the desktop non-APU space.

Bright side: TSMC announced that they'll fab AMD CPUs at their 16nm FinFET node.

Factory Factory
Mar 19, 2010

IIRC it takes about six years from starting a design to shipping product. But this suggests that they had no idea how bad Bulldozer was until it first missed its release date.

Factory Factory
Mar 19, 2010


Rosoboronexport posted:

And scratch that, HP is only offering AMD on the 14" model; other models feature Intel Bay Trail Celerons or Atoms. Either Intel is throwing Bay Trails to OEMs for almost free or AMD has problems getting the performance or thermals for these things.

Intel's taking a loss on Bay Trail. The SoC itself is priced at cost, and then they'll help you engineer an ARM design into a Bay Trail design and provide extra support chips for free. They're in the "market grab" phase right now trying to catch up to ARM, and they're banking that the investment means less subsidy will be necessary from here on out.

orange juche posted:

Unless something crazy happens when you no longer constrain an ARM architecture processor within the thermal limits of a tablet, ARM gets walked all over by Intel's Core architecture. Sure, ARM is extremely power and thermal efficient, but it also doesn't have much oomph under the hood at comparable thermal limits when compared with Core anything, due to race-to-sleep and the other stuff Intel has been doing with its ultra low power chips.

Most ARM cores today are architected for low power, and Core is very much a high-power core in comparison. Today's ARM cores are much closer to apples-to-apples vs. the current Atom, which is x86 like Core but very ARM-like in terms of power and performance.

I think the biggest ARM-ISA core right now is Apple's Cyclone, and while it's pretty buff, it's not up to Core in buffness, and certainly nobody has tried to scale it up to desktop-type thermals and core counts.

Factory Factory
Mar 19, 2010


PerrineClostermann posted:

So this popped up on my news feed...

Am I correct in assuming this doesn't bode well for AMD?

That's a particularly dour reporting job. AnandTech and Tech Report are much more neutral-to-positive.

Factory Factory
Mar 19, 2010

I love my ASRock Q1900-ITX, but I do not expect it to game. Portal 2 is a struggle for my laptop, which has an older Sandy Bridge CPU with Intel HD 3000 graphics, and the J1900's GPU is certainly not better. If you want local gaming (and not just Steam Home Streaming like I use, for which the J1900 works great), then I would definitely favor an AM1 APU. The CPU strength is actually equal, with a far better GPU. The downside is power consumption - 25W vs. 10W for Bay Trail D. Now, that 15W is important to me for an always-on NAS/HTPC, but for a system that actually gets turned off, you may struggle to give a poo poo.

Factory Factory
Mar 19, 2010

Well, if the Zen uarch is worth anything, then maybe, actually! If.

Factory Factory
Mar 19, 2010

We've got our first Zen uarch rumors.

Assuming this is true: 14nm FinFET (that's probably Samsung's process), FM3 socket, DDR4 controller on-chip. PCIe controller at rev 3.0 and moving on-chip, so the standalone northbridge is gone on AMD, too. Up to 8 cores in 95W. No word on graphics. Uarch details are light except for a rumor that it's moving from module-based, Bulldozer-ish clustered multithreading to simultaneous multithreading on unitary cores, like Intel does with Hyper-Threading.

Factory Factory
Mar 19, 2010

3rd party, but apparently fulfilled by Amazon.

Factory Factory
Mar 19, 2010


SYSV Fanfic posted:

I was looking at AMD APU video benchmarks. There doesn't seem to be a lot of gain since Trinity. Has AMD stated they are only working on power consumption, or did they realize killing off the low-end card market was a bad idea?

With only dual-channel DDR3 to work with, they're pretty bottlenecked; their low-end cards have far more memory bandwidth to work with. Within that constraint, power consumption has indeed become their priority, and the gains have been pretty solid - the A10-7800 at 45W performs like the A10-6800K at 100W in many titles. The next big thing, I think, is going to be applying Tonga's (R9 285) end-to-end memory compression scheme, which should help a lot, but I'm not sure if that's coming in the next generation or if it will have to wait for Zen uarch APUs in 2016 (which will also benefit from DDR4).
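
The napkin math on that bandwidth gap (peak bandwidth = channels x bus width x transfer rate; the GDDR5 figures are ballpark for a low-end card, not any specific SKU's spec sheet):

code:

# Peak memory bandwidth = channels x bus width (bytes) x transfer rate.
def bandwidth_gb_s(channels: int, bus_bits: int, mt_per_s: float) -> float:
    return channels * (bus_bits / 8) * mt_per_s * 1e6 / 1e9

# APU on dual-channel DDR3-2133, 64 bits per channel:
print(f"DDR3-2133 dual channel: {bandwidth_gb_s(2, 64, 2133):.1f} GB/s")    # ~34 GB/s
# Ballpark low-end discrete card: 128-bit GDDR5 at ~4.6 GT/s:
print(f"128-bit GDDR5 @ 4.6 GT/s: {bandwidth_gb_s(1, 128, 4600):.1f} GB/s") # ~74 GB/s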

Factory Factory
Mar 19, 2010


NyxBiker posted:

Is the AMD Opteron 4334 optimal for a small-medium web hosting company? (On a Dedi)

There's no way it's optimal and I have doubts it's even acceptable. It's about half the per-core performance of an Intel Ivy Bridge Xeon, and there are now Haswell Xeons with solidly better performance per watt than Ivy. The scale-out prospects on an old AMD CPU are just awful. If you did Atom or ARM microservers you'd get far better performance per watt at a similar cost, with the downside of lower peak performance. And you could just keep some buff Xeon servers around for the relatively small number of heavy load customers, and they'd be the best of everything except for up-front hardware costs.

Factory Factory
Mar 19, 2010

Yeah, Mom/Grandma builds can get by on significantly less hardware. As long as cost lines up and power consumption isn't a deal-breaker (and for a desktop, it almost never is), it'll work great, and the CPU side of the chip will be plenty. Heck, my mom is using a dual-core 1.1 GHz Haswell Celeron laptop and she's thrilled with it.

Factory Factory
Mar 19, 2010


slidebite posted:

She certainly plays grandma games (seek and find, card games, etc.), but I'd also be playing on it when I come to visit if I'm killing some time. I'd like a bit of real 3D performance, but I am under no illusions it's going to give a modern card a run for its money.

Could I play, say, Skyrim at 1600x900 with an on-chip solution?

I'm not against a super-small form factor like a NUC; I'm just not sure if it'll be any cheaper by the time I get it equipped.

If I had to pick a way right now, I'd probably lean towards an econo build with an mATX form factor and an A8. It would probably run me "around" $600 CDN... and I'm not sure I could do one of those tiny form-factor units ready to go for less than that.

A NUC with Iris Pro graphics will rock Skyrim at 1080p. The lesser HD 4400 graphics on an i3 or whatnot are more 1366x768 Medium, though they have way more ability to do shader-based tasks than to push pixels, so you might get 1366 High a lot better than 1600 Low. A-series APU graphics... an A8 will be about like Iris Pro, maybe a bit under, but geared more to push pixels, so higher res at lower details will have a performance curve more like a desktop GPU's.
