SwissArmyDruid
Feb 14, 2014

by sebmojo

sauer kraut posted:

Phenom II X2 and a Radeon 6850 I got cheap because my 9600 GT died.
Just ran the Tomb Raider benchmark at normal settings with SSAO and got 79 fps min, 120 max.
The first game that I was interested in and couldn't play was DA: Inquisition a month ago.

Was this the last good CPU AMD made 5 years ago that y'all are referring to?

I wouldn't say that specific processor, but yeah, I feel like it's been all downhill since the Phenom II quad-cores.


SwissArmyDruid
Feb 14, 2014

by sebmojo

PC LOAD LETTER posted:

The antitrust thing is an old holdover fear from the '80s/early '90s. As the mega-mergers between banks, telecom, and media companies have shown over the last 10-15 years, if the company is big enough, it doesn't have anything to worry about anymore as far as antitrust regulation goes. The regulators are all too willing to look the other way, or even help companies get around the rules or rewrite them if necessary.

Even when they do gently caress up colossally, these mega-corps are often shielded from much or even any sort of legal action, and even if found guilty and fined, they often have their fines drastically reduced at a later date, like BP did with the Deepwater Horizon oil spill or the banks did with the robosigning scandals. Antitrust is a dead issue in this day and age, and I don't know why it keeps being brought up as a serious issue anymore.

It should be noted: Fines are also tax deductible.

SwissArmyDruid
Feb 14, 2014

by sebmojo

PC LOAD LETTER posted:

I didn't know that.

Should've suspected that though.

Newsweek ran an article on why fining big banks is pointless regardless of how big or small the fines are. http://www.newsweek.com/2014/11/07/giant-penalties-are-giant-tax-write-offs-wall-street-279993.html

SwissArmyDruid
Feb 14, 2014

by sebmojo
Some lawyer argued that:

* Business liabilities are tax deductible.
* Fines are a business liability.
* Ergo, fines are tax deductible.

Under the letter of the law this makes complete sense. Under the spirit of the law...

SwissArmyDruid fucked around with this message at 21:56 on Jan 1, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

BurritoJustice posted:

Half height single slot 750ti.

An order of magnitude faster than any iGPU and you don't have to hamstring yourself on the CPU side with an APU.

Jesus gently caress, I have been looking for this form factor 750 Ti for months. I'mma grab this and a low-profile bracket, then give away this 965 BE I'm on as an HTPC. Thank you for the link.

SwissArmyDruid fucked around with this message at 06:39 on Jan 4, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

Lolcano Eruption posted:

Noo, don't. It's double slot. I have one sitting uselessly in my trunk from making that mistake.

No, it's fine: the case I'm going to buy is for mATX boards and has multiple expansion card slots. I'll just need to get a low-profile bracket if it's not included. They can try to hide it with the end-on shot with the cooler removed on their website, but I have been building computers for too long not to immediately intuit that it needs two slots for the heatsink.

Hmm. Maybe I could take that card off your hands for you? Hit me up in PMs, and we'll hash it out there.

SwissArmyDruid
Feb 14, 2014

by sebmojo
All I can say is we should wait and see how AMD's effort at integrating HBM as a combination L3 cache/VRAM works out. Wherever that goes will probably dictate their entire lineup going forward, for both APUs and x86.

SwissArmyDruid
Feb 14, 2014

by sebmojo
The usual WCCFT caveats apply, but the fact that AMD is funneling more money towards high-performance R&D is a good sign.

http://wccftech.com/amd-earnings-call/

SwissArmyDruid
Feb 14, 2014

by sebmojo

Rastor posted:

Hmm, I think WCCFT is making a super gigantic leap there from the actual quote ("enterprise, embedded and semi-custom ... server ... x86 and ARM-based leadership products") to "high-performance" / "enthusiast" parts. AMD has repeatedly been saying lately that they see their opportunities as being in the performance/watt category and I would expect any goals of "leadership products" to be in that category, not the "benchmark king" category.

I *did* say that the usual WCCFT caveats apply, didn't I? =P

That said, you have to consider what Intel does with their parts. They have a high-end server line (read: enterprise) whose parts they bring down to the desktop in the form of chipsets that end in 8 or 9. That's why those latest enthusiast i7s have loving absurd amounts of cache and no integrated graphics: they're actually server parts that can't be sold as server parts because they run too hot to be stuck inside a 1U case or something. They still cost a goddamn arm, a leg, and a spleen, though.

Then they also have their mobile parts line (read: performance/watt), where if a chip doesn't run cool enough to be put into a laptop or tablet, again, they push it "up" to desktop, because desktop can accommodate better cooling, with the chipsets going up to 7. You think it's a coincidence that their desktop parts run with as low a TDP as they do? That it's just because Intel reserves their best process tech for themselves that they're always ahead in everything? Well, you're not entirely wrong, but even they have some chips that shake out of the binning process. They run too hot to be stuck inside a laptop, but if you slap a big air cooler or a closed-loop AIO onto one, it'll be happier than a pig in mud.

That's the genius of Intel's strategy, and if AMD is starting to follow suit, by targeting these two markets and dumping the parts that don't quite cut it into desktop (read: high performance/enthusiast), we could do a lot worse.
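
In pseudocode terms, that flow looks something like this (a minimal Python sketch; the TDP cutoffs and chipset labels are invented for illustration, not Intel's actual binning criteria):

```python
# Toy model of the binning flow described above. All thresholds are made up.
def bin_part(tdp_watts: float, designed_for: str) -> str:
    """Route a die to a market segment based on how hot it came out of binning."""
    if designed_for == "server":
        # Runs cool enough for a 1U chassis? Ship it as a server part.
        return "server" if tdp_watts <= 145 else "enthusiast desktop (8/9-series chipset)"
    if designed_for == "mobile":
        # Runs cool enough for a laptop/tablet? Otherwise push it 'up' to desktop.
        return "laptop/tablet" if tdp_watts <= 28 else "mainstream desktop (up to 7-series chipset)"
    raise ValueError("in this model, parts are designed for server or mobile first")

print(bin_part(160, "server"))  # too hot for a 1U -> enthusiast desktop
print(bin_part(45, "mobile"))   # too hot for a laptop -> mainstream desktop
```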

SwissArmyDruid fucked around with this message at 05:32 on Jan 24, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

Factory Factory posted:

We've got our first Zen uarch rumors.

Assuming this is true: 14nm FinFET (that's probably Samsung's process), FM3 socket, DDR4 controller on-chip. PCIe controller at rev 3.0, and moving on-chip, so the northbridge is done on AMD, too. Up to 8 cores in 95W. No word on graphics. Uarch details are light except for a rumor that it's moving from Module-based, Bulldozer-ish clustered multithreading to symmetric multithreading on unitary cores, like Intel does with Hyperthreading.

Argh, and no word if HBM is going to make its way on there as a jumbo L3 cache or anything!

SwissArmyDruid
Feb 14, 2014

by sebmojo

blowfish posted:

"up to 8 cores" - if they go in a consumer craptop, will most software be able to make use of that computing power, or will we see processors good for very specific circumstances but bad for general purpose laptops?

I think the specs listed are for server parts downgraded to desktop, a la Xeon down to i7. It doesn't make sense for AMD to stop making APUs for the "craptops", nor does it make sense to bring hot and power-hungry server parts down to mobile.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Rastor posted:

Ah yeah, Zen is very different from Carrizo. Then there's also Godavari.

Actually, Godavari appears to be just a refresh of Kaveri. The Kaveri successor appears to be something called "Bristol Ridge"? Notably: DDR4 memory controller onboard.

The usual WCCFT caveats apply: http://wccftech.com/amd-bristol-ridge-apu-2016/

SwissArmyDruid
Feb 14, 2014

by sebmojo

Rastor posted:

B) AMD is making a long-term bet that HSA will be actually used to make your spreadsheets faster and your games fancier.

I'm still hoping that HBM means processor caches become huge (relative to their current sizes). Like, "gently caress going to system memory, that poo poo's weak. We're just gonna do most of our work here on-package." It seems like a good way to boost performance. I've now actually got a bet going with a co-worker that this was the entire endgame of HSA all along.

I mean, if they scale putting HBM onto parts all the way down to their sub-10W range, can you imagine what that would do for performance on the tablet/ultrathin end? Slide #12 of this Hynix presentation at Memcon last year ( http://www.memcon.com/pdfs/proceedings2014/NET104.pdf ) says that it should consume half as much power as DDR4, and less than a third as much as DDR3. With successive generations of HBM, I don't doubt for a moment that we could see 4 GB as a single stack on the APU package, easy. (Same presentation: Hynix is claiming HBM2 is 1 GB per layer. Not my typo. Gigabyte. In 4- or 8-layer flavors.)

I couldn't even begin to speculate on the energy savings, but they seem like they could be significant: increasing bandwidth and feeding those hungry-rear end, heretofore-bandwidth-choked GCN cores while reducing power requirements.

Now (and I realize this is still a pipedream), the only thing that remains to be done is a processor that can look at a given task, figure out what would be done best on the CPU and what would be done best on the GPU, and then assign it appropriately without the need for OpenCL.
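
To make that concrete, here's a toy of what such a scheduler would have to decide (purely hypothetical Python; real HSA runtimes expose nothing like this, and the heuristic thresholds are invented):

```python
# Toy sketch of the dispatch decision described above. Entirely speculative.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    parallel_items: int       # independent work-items the task splits into
    branch_divergence: float  # 0.0 = uniform control flow, 1.0 = very branchy

def choose_device(task: Task) -> str:
    """Route wide, uniform workloads to the GPU; branchy/serial ones to the CPU."""
    if task.parallel_items >= 1024 and task.branch_divergence < 0.3:
        return "GPU"
    return "CPU"

for t in (Task("matrix multiply", 4096, 0.05),
          Task("spreadsheet recalc", 64, 0.70)):
    print(f"{t.name} -> {choose_device(t)}")
```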

SwissArmyDruid fucked around with this message at 05:50 on Feb 7, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

FaustianQ posted:

Hold a tic, what if that's the real advantage of being early to HBM, then: not usage on GPUs but on APUs? Intel won't have a processor ready for HBM in 2015-16, maybe not even 2017 (2018 is stretching it but possible), but if Zen drops with something like 64 MB of cache or greater for the CPU and ~128-256 MB of cache for the APU (maybe I'm not thinking big enough here), that's a massive advantage, correct? Weren't a lot of the shortcomings of the C2D/C2Q series obviated through copious cache? If so, and Intel doesn't have an answer in ~1 year, that's potentially a lot of market capture for AMD.

This still requires Zen to not be a complete flop, or at least to be competitive with the Core series, enough so that just dropping HBM on a Skylake processor doesn't obviate Zen's performance lead [?]. I'm an idiot, shatter my dreams.

To address your questions in order:

* As I've been saying, AMD was probably working on this as the solution to those bandwidth-choked GCN cores from the start, which makes me think that HSA was targeted at HBM all along. I also suspect that whole virtual-core thing from a few months back was targeted at APUs too, but didn't pan out quite as well as expected.
* Intel might be along faster than that. They are already embedding, at great cost, 128 MB of eDRAM onto Haswell processors with Iris Pro graphics. They probably had the same idea as well, because that 128 MB is split between L4 cache and graphics memory.
* I think you're thinking too small. HBM1 (which was in risk production at the end of last year, and mass production now) comes in 2-gigabit increments. The default shipping configuration seems to be 4 layers high, so 2 Gb x 4 layers = 1 gigabyte of HBM, assuming they don't offer smaller increments in thinner stacks for whatever reason. HBM2, slated to go live in mid-2016, ramps that up to 8 Gb per layer, in 2-, 4-, and 8-layer configurations. And as I said in some of my previous posts (I am really starting to forget how many times I've typed this), one of the reasons the graphics in current-gen APUs aren't better, allowing Intel to catch up, is that DDR3 is just too freaking slow and doesn't provide enough bandwidth for it to make sense for AMD to put more than eight GCN CUs onto even the top-of-the-line APU. Consider your absolute bare-minimum AMD card for gaming, the R7 260: that card has 12 GCN CUs, by comparison, off a 128-bit memory bus. (Quick arithmetic on all of this below.)
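
Putting rough numbers on those claims (a minimal sketch; the dual-channel DDR3-2133 and 6 Gbps GDDR5 figures are my assumptions for a typical APU build and the R7 260 respectively):

```python
# Stack capacities from the Hynix Memcon slides quoted above.
hbm1_stack_GB = 2 * 4 / 8            # 2 Gb/layer * 4 layers, gigabits -> gigabytes
print(hbm1_stack_GB, "GB per HBM1 stack")              # 1.0

for layers in (2, 4, 8):             # HBM2: 8 Gb per layer
    print(layers, "layers ->", 8 * layers // 8, "GB per HBM2 stack")

# Why DDR3 starves the APU's GCN cores:
ddr3_GBps  = 2133e6 * 8 * 2 / 1e9    # MT/s * 8 bytes/channel * 2 channels ~ 34 GB/s
gddr5_GBps = 6.0e9 * 128 / 8 / 1e9   # 6 Gbps effective * 128-bit bus      ~ 96 GB/s
print(round(ddr3_GBps, 1), "GB/s (dual-channel DDR3) vs",
      round(gddr5_GBps, 1), "GB/s (R7 260's GDDR5)")
```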

I don't really have much in the way of dream-shattering. I only recently found out about this patent filed by AMD back in 2012: http://www.google.com/patents/US20130346695

Of note is this block diagram:

Look familiar? It should. Remember what I said about a jumbo L3 cache? There it is.

It would be advantageous to AMD if this patent prevents Intel from putting HBM on Broadwell and Skylake parts.

SwissArmyDruid fucked around with this message at 12:59 on Feb 8, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

FaustianQ posted:

How would 4, 8 or even 16GB work on die with the processor/GCN? Wouldn't this take up enormous space anyway? Would this obviate the need for any kind of third party DRAM? Wouldn't this make the processors ridiculously expensive?

So in theory, AMD could obsolete everything below the shallow end of high range cards? Is this cutting into their own profits? Won't such configurations run pretty drat hot? Is this possibly why ARM cores are being looked into?

Bolded: wouldn't this effectively kill Intel? It'd mean Intel would require a whole new architecture to remain competitive, if that's even possible. We'd be comparing processors where one needs a bunch of ancillary components to run, while the other only needs them for extremely intensive tasks.

Also, that patent date is rather suspiciously well timed with the theoretical stage of producing a processor, correct? I think it was Factory who mentioned that processors had something like a six-year development cycle before market consumption, so if we assume 2011 is where all the theory was done, then: ?

* No. That's 4, 8, or 16 GB, PER DIE. You really should look at the presentation I linked. It shows a size comparison of a single 4-layer HBM1 die that occupies a space smaller than an aspirin pill. As for it making the chip overly large, well, Intel is using very large packages themselves for Broadwell-U anyway:


Back to the presentation: the configuration shown as an example looks an awful lot like an old Thunderbird XP. You know, with the bare CPU die in the center and the four foam stabilizer pads, one at each corner? Given that AMD is partnering with Hynix on this, this could very well be what the new AMD die shots look like. And yes, it could obviate the need for any system memory. How much it adds to the cost of a processor, nobody knows yet. But it can't be that much if, eventually with HBM2, you can just put one additional die onto a package and be done.

* In theory, yes. With improved GPU performance from HBM, and improved CPU performance from whatever architecture of Zen makes its way down into APUs, you could give OEMs a very convincing reason to go all-AMD. And any AMD graphics solution that's sold is one less sale for Nvidia.

* No, I do not think it will kill Intel. There is no way in hell AMD will have sufficient production capability from GloFo or their partners to suddenly take over the kind of volume that Intel holds. Their marketshare gains will likely be limited by their production. There's a possibility that Samsung could step in, since it *is* their FinFET process, after all. But how much capacity they could contribute without hurting production of their own products remains to be seen.

* It is. I suspect that we are seeing the fingerprints of Dirk Meyer's work just before he left for the second time. AMD has been playing the long game this entire time, assembling a very good hand of cards. It remains to be seen if they can do anything with it. Execution, I think, will be what makes or breaks AMD in the next few years, not tech.

SwissArmyDruid fucked around with this message at 09:37 on Feb 8, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo
So, they've got the address the bum processors were shipped from, a bank account into which the money was transferred, and a name.

Not really seeing the problem here; this ought to be wrapped up pretty quickly.

SwissArmyDruid
Feb 14, 2014

by sebmojo
What's the protocol these days? Is it razor blades and boiling water, or the vise-and-wooden-block shearing method?

SwissArmyDruid
Feb 14, 2014

by sebmojo

SYSV Fanfic posted:

I was looking at AMD APU video benchmarks. There doesn't seem to have been a lot of gain since Trinity. Has AMD stated they are only working on power consumption, or did they realize killing off the low-end card market was a bad idea?

Factory Factory posted:

With only dual-channel DDR3 to work with, they're pretty bottlenecked. Their low-end cards have far more memory bandwidth to work with. Within that constraint, power consumption has indeed become their priority, and the gains have been pretty solid. The A10-7800 at 45W performs like the A10-6800K at 100W in many titles. The next big thing is going to be applying Tonga's (R9-285) end-to-end memory compression scheme I think, which should help a lot, but I'm not sure if that's coming in the next generation or if it will have to wait for Zen uarch APUs in 2016 (which will also benefit from DDR4).

Come on, guys, I know you were both reading page 86.

I'll give you the teaser from an AMD patent filing in 2012:

SwissArmyDruid
Feb 14, 2014

by sebmojo

PC LOAD LETTER posted:

G34 has all those pins for extra memory slots + allowing CPUs to communicate; it's not needed for HBM or any other on-package memory, really.

Socket size isn't the limitation for HBM. It's price that could (probably will) kill the idea for low-cost APUs, which is the way AMD has to price them in order to sell them.

Slapping HBM on their APUs might allow them to get mid-tier-ish GPU performance, but it also means they'll have to price them a lot higher just to break even. WAG on my part here, but $200-300 would probably be the price range of a 'high end' APU with a 1-2 GB HBM cache on package via interposer. Even tied to Excavator CPUs the performance vs. price wouldn't be bad, but most enthusiasts, and those are the ones who'd be interested, probably wouldn't bother with it at that price. A low-end Intel chip and a mid-range dGPU would probably be better value overall even if it ends up costing a bit more.

It sucks, but I think they're stuck being bandwidth-limited with their APUs for a long time.

I don't think so, not quite as much.

We honestly don't know the pricing for HBM right now, but if I had to take a guess, its eventual pricing can't be much more than mass-production GDDR5. If it is, it doesn't make sense to use it across an entire product line the way Nvidia wants to do with Pascal, because that's that entire architecture's thing.

I really do not think that it will cost anything remotely approaching Intel's 128 MB of eDRAM. A lot of the cost incurred there is due to the cost of the interconnect, something neatly dodged by how HBM is built from the ground up.

Now, what casual information I have on bulk GDDR5 pricing is from the BOM breakdown of the PS4, which at the time was said to be $110-$140 for the 8 GB of GDDR5. Per gig, that works out to $13.75-$17.50. That's... not really that much.
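
The division, for anyone checking (numbers straight from the BOM figures above):

```python
# Per-GB cost implied by the PS4 BOM estimate quoted above.
low, high, capacity_GB = 110, 140, 8
print(low / capacity_GB, "to", high / capacity_GB, "USD per GB")  # 13.75 to 17.5
```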

I feel that any increase in cost will be large enough to cover BOM + interposer + added complexity, but remain low enough to allow AMD to market the part to ultrabook makers. It's contingent on them getting the pricing on this absolutely right.

Pricing should come down even further with HBM2, which is slated to hit the market in 2016. Given that HBM only began risk-production sampling back in... I want to say August? That's a phenomenally fast dev cycle for quadrupling per-layer capacity, which means it must be something they were already cooking, plus a die shrink.

And really, given that current Intel parts with onboard memory only have 128 MB, I think AMD stands to gain a lot more performance with the 1 GB minimum per stack.

SwissArmyDruid fucked around with this message at 12:20 on Feb 14, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo
PSA: WCCFT can be full of crap. Trust nothing unless you've got a second corroborating source, other than the one they list, that has independently verified the story instead of just parroting it.

It's a good practice in general, but particularly pertinent for WCCFT.

That aside, it's interesting to see that AMD is aping Intel by targeting servers and mobile and letting binned parts flow to desktop. I think that's the best approach, and there's no shame in stealing a better way of doing things.

Also, I'm pretty sure that AMD *tried* to make a MorphCore competitor. It was that thing from the consortium they were part of last year, the one they debuted while looking for buyers, wasn't it? I gotta look back through my post history to remember the name.

EDIT: Soft Machines and the VISC architecture.

SwissArmyDruid fucked around with this message at 21:44 on Feb 23, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo
Another design win for AMD. MediaTek is going to start licensing their graphics. http://www.fudzilla.com/news/graphics/37209-mediatek-to-license-amd-graphics

I like to think that it must have been because AMD knows how to play nice with ARM on the same die.

Should be interesting, though. AMD sold off Imageon to Qualcomm for way too goddamn little, and it now forms the basis of their Adreno graphics. So technically, this is ATI vs. AMD.

This, plus all the news about glNext supplanting embedded OpenGL, should make things pretty interesting.

SwissArmyDruid fucked around with this message at 06:30 on Mar 11, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

HalloKitty posted:

It has a name now: Vulkan, and being based on Mantle, it goes right to the heart of what AMD said: they'd be happy to share.

I really hope Vulkan gets huge support, although Microsoft won't be so happy, as DX12 won't be such a draw to Windows 10.

The king is dead, long live the king.

What I want to see, actually, is whether or not Vulkan can run on existing silicon, as DX12 will. That, I think, will be the biggest draw to widespread adoption beyond "we need a graphics API but we're not building for a Windows platform".

Just as a purely scientific experiment, I'd love to see if any additional performance can be eked out of the PS4 using it, and then see how it stacks up against a DX12-enabled Xbone.

SwissArmyDruid
Feb 14, 2014

by sebmojo
Somehow, we all missed this news: http://www.bbc.com/news/technology-25635719

China lifted its ban on video game consoles; the news broke in January of this year. AMD stands to profit, since the Chinese market is ostensibly huge, and, what do you know? AMD's silicon is in all three consoles.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Beautiful Ninja posted:

From what I've seen so far, with the Xbox One officially released and the PS4 available on the grey market, the market for the new consoles is small. The consoles are expensive, you can't pirate their software, and the lack of F2P titles means the Chinese aren't really showing interest; they are sticking with last-gen consoles and PC for their gaming needs.

As the licensor of the tech that goes into the systems, AMD doesn't give two shits if things don't sell in China. They only care whether Sony/Microsoft/Nintendo ramp up production to have supply in China. Once the silicon leaves their hands and the money goes into their pockets, they could not give a drat.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Longinus00 posted:

Intel loves new architectures.

https://en.wikipedia.org/wiki/Intel_iAPX_432
https://en.wikipedia.org/wiki/Intel_i860
https://en.wikipedia.org/wiki/Itanium

You might notice a common pattern behind the failure of those projects.

They all start with the letter "i"? :v:

SwissArmyDruid
Feb 14, 2014

by sebmojo
Jesus, that old chestnut again?

SwissArmyDruid
Feb 14, 2014

by sebmojo

Lord Windy posted:

How good are the Samsung fabs? Wouldn't they be better off doing something else than attempt to make AMD designs better?

They already make ARM-based Exynos chips, as well as both planar and 3D TLC NAND. And that's only at their wholly-owned facilities. Recent financials from Nvidia also suggest that Samsung is making chips for Nvidia too, although whether those are GPUs or Tegra is uncertain.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Lord Windy posted:

Since we're already in crazy land, couldn't they just make ARM desktop chips instead of fighting with Intel on x86? If there is even a market to fight for, that is, but Android could probably work as a desktop thing.

Lemme ask you a question: if Intel is willing to get into the mobile SoC market with their sub-5-watt parts and heavily subsidize those parts, why is it crazy for Samsung to want into the x86 market? The idea isn't for Samsung to make ARM desktops; it's to get their fingers into the cross-licensing agreement that AMD has with Intel. Intel lets AMD use x86 free of charge, because AMD lets Intel use x86-64 free of charge.

This was the crux of a pretty big lawsuit a few years ago, where Intel's lawyers tried to break the agreement when the Samsung-buying-AMD rumors came up.

SwissArmyDruid fucked around with this message at 01:24 on Mar 26, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

Nintendo Kid posted:

Samsung doesn't have the magic ability to make AMD's current lineup perform as well as Intel's stuff, so they'd still need to buy a lot of intel parts for any of their desktops/laptops/servers that are worth a poo poo.

Agreed. But having three other fabs whose 14nm FinFET process can be directly applied to an AMD product is going to improve TDPs relative to current parts, at least, so it's *sort of* like magic. (Especially since they were going to be using Samsung's FinFET process at GloFo anyway.)

Remember how, a few years back, AMD was said to have lost an Apple contract over concerns that they wouldn't be able to keep Apple supplied?

Remember how AMD couldn't keep up with production when demand spiked heavily for bitcoin mining last year?

I'm not saying that Samsung buying AMD would make everything better. But it wouldn't be benefit-neutral for AMD, either. They would actually get something out of it, which, I assume, is why the rumors persist and keep coming back every few years or so.

SwissArmyDruid fucked around with this message at 02:29 on Mar 26, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

slidebite posted:

Yeah, I know it's old, but they run drat well and from what I can tell will still give a new A8 a run for its money... or am I completely missing something stupidly obvious?

I actually thought of an NUC but I thought by the time I equipped an NUC I'll be pretty close to the same $$ as a decent build so I kind of ruled it out. She'll want an optical (I know), more storage than the SSD has, wifi, etc.

I guess a 240GB SSD might be big enough, but if not we'll have to go external HDD and the optical.

Do they make NUCs based on the A8 or something, so we can take advantage of the better on-die GPU? That would make me feel better about not having a PCI slot for a video card if desired.
e: Her display is (pretty sure) VGA and would rather not go down the road of a new monitor.

Yes, look up Zotac; they make a number of SFF boxen in varying flavors. But really, Intel graphics have come a long way since (ugh, I'm getting ill just thinking about it) GMA 900 or whatever. I'd definitely be okay with an Intel for a grandma build.

SwissArmyDruid
Feb 14, 2014

by sebmojo
Leaked slides: http://wccftech.com/amd-gpu-apu-roadmaps-2015-2020-emerge/

Of note: 200-300W TDP APU. :eyepop:

For servers, of course, but still.

Discuss.

SwissArmyDruid
Feb 14, 2014

by sebmojo

LeftistMuslimObama posted:

Not to mention that a basic livelock like he's describing would be easily fixed by a competent developer through any number of locking and concurrency primitives.

...are we calling EA/Bioware competent developers, now?

SwissArmyDruid
Feb 14, 2014

by sebmojo

wipeout posted:

Are Dragon Age and DA:I well optimised, or bad ports generally?

Glitchy, hitchy, and they make way too many concessions to console limitations in controls, UI, and map layout.

SwissArmyDruid
Feb 14, 2014

by sebmojo

The Lord Bude posted:

The first Dragon Age is one of the best games ever made, and it is excellent on PC. Dragon Age: Inquisition is still a decent game, but suffers from being designed with almost no consideration for PC gamer habits, and also from some EA executive deciding that since Skyrim was so successful, all RPGs should be more like Skyrim.

While I mostly agree with your statements, I think you're casting DA:O in too kind a light. It still has a memory leak that remains unpatched to this day. If I didn't have 16 GB of RAM (and I only have that much because I managed to snag it while it was still cheap, before the Southeast Asia floods wrecked the DRAM and hard drive factories down there and the market shifted away from DRAM to NAND), each playthrough would have taken me much longer. Even then, things get kind of stupid when you're staring at a loading screen for minutes when it didn't take nearly that long when you first started the game.

SwissArmyDruid
Feb 14, 2014

by sebmojo
We're waiting on a confluence of technologies to come together and create a product that is greater than the sum of its parts: Zen and HBM1/2, combined with GCN, in one package.

SwissArmyDruid
Feb 14, 2014

by sebmojo

Angry Fish posted:

Did some reading. Holy poo poo. :stare: But how does this apply to consumer products? A 16 core chip with that much cache and memory on board would be priced in the thousands per unit, right?

I seem to have been misunderstood. Briefly, then:

* Bulldozer and its derivatives suck. Zen is K11b. We hope it's good.
* GCN cores on APUs are bandwidth-starved as hell. The top APU has only about half the cores you'd find on a lovely R9 260X, and they're way, way, way downclocked, because any faster and they'd be bottlenecked by the DDR3 anyway. Future GCN parts are alleged to focus on power efficiency, perhaps hinted at by their codename, Arctic Islands. "Greenland" is alleged to be the top-of-the-line part, and purports to pull a Maxwell on GCN.
* HBM is the new hotness. It's got the bandwidth to feed those hungry GCN cores, a wide-rear end memory bus, the physical size to go onto the same package as a CPU/GPU, and there's an AMD patent letting them put HBM in the line between the L2 cache and system memory. If it's anything like how Intel does it, this will allow HBM to be used as a combination of cache and system memory, EXCEPT that this would be L3 cache, not L4. Then you get the more conventional memory controller tailing off of that, allowing for mixed-memory packages of HBM and DDR4. (At least, it's a good bet that Zen will use DDR4.) A toy sketch of that lookup order follows this list.
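
Here's roughly what that L2 -> HBM L3 -> DDR4 line implies, as a toy model (all sizes invented for illustration; the patent obviously specifies none of this):

```python
# Purely illustrative model of the hierarchy the patent implies:
# on-die L2, an HBM stack acting as L3, then conventional DDR4 behind it.
HIERARCHY = [
    ("L2 cache", 2 * 2**20),    # 2 MiB on-die SRAM (invented)
    ("HBM L3",   1 * 2**30),    # 1 GiB HBM stack on-package (invented)
    ("DDR4",     16 * 2**30),   # conventional system memory (invented)
]

def served_from(working_set_bytes: int) -> str:
    """Return the first level big enough to hold the working set."""
    for name, capacity in HIERARCHY:
        if working_set_bytes <= capacity:
            return name
    return "swap/disk"

for ws in (512 * 2**10, 256 * 2**20, 64 * 2**30):
    print(f"{ws / 2**20:>10.1f} MiB working set -> {served_from(ws)}")
```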

Now, we know that AMD has begun to adopt the same binning strategy as Intel. Intel presently makes parts intentionally only for the server market and the mobile market (markets that are power- and thermally constrained). Any chips that fail binning for these two markets get pushed to the desktop channel, where they can get the extra power and cooling they need to run stably.

--SPECULATION BEGINS HERE--

So, take 4/8 Zen cores, pair them with more GCN cores running at full speed than Carrizo and its predecessors could ever hope to utilize, and drop 2 GB of HBM onto the package. Congratulations: there is now a mainstream notebook part capable of competing with Intel's mobile parts, annihilating them in graphics performance, while offering equivalent or lower cost. (There's no goddamn way in hell that putting eDRAM onto processors, as Intel has done with the Iris Pro parts, is cheaper than HBM, or anywhere near the capacity.)

Cut it down to 2 cores, pare back the GCN cores to match, but increase the amount of HBM to 8 GB? Here's a feature-complete SoC for your ultrabooks. You probably won't even need to bother with LPDDR.

Take 16 Zen cores, give them 2048 GCN cores, and load on 32 GB of HBM? You now have a server part capable of handling highly threaded applications as well as OpenCL computation.

Server die has some bad cores? Disable the bad ones, put it onto a package with reduced HBM, and you've got a top-of-the-line enthusiast part. That sort of thing.

Mainstream mobile die isn't stable without extra voltage, or runs too hot? Bump it up to desktop: there are your mainstream and low-end desktop parts.

--SPECULATION ENDS HERE--

So that's what we're waiting for. It's just that AMD is being tight-lipped about Zen, and the 300 series won't come out until Computex, so we don't know poo poo yet. But we're hoping.

SwissArmyDruid fucked around with this message at 22:25 on Apr 8, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo
Bam, called it~



APU + Zen + HBM + awesome GCN parts.

I still worry that AMD is going to shoot themselves in the foot with regard to single-threaded performance again, because 16 Zen cores is a lot even for a server part. But other than that, this is probably what our next-gen K11-B is gonna look like.

I was also incorrect that the HBM would be the shared L3 cache.

Article: http://www.fudzilla.com/news/processors/37494-amd-x86-16-core-zen-apu-detailed

SwissArmyDruid fucked around with this message at 22:38 on Apr 10, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

FaustianQ posted:

No giant L3 cache, but no one plans for 16 GB of HBM without some major horsepower backing it. That easily explains the rumored 300W TDP: 95W for the Zen cores, 200W for the GCN, which could give us top-tier processors reaching into enthusiast territory. If AMD don't gently caress up, that's a lot of mobile market capture, since Iris is still at what, R7 240 level?

No, I think that TDP number might be way overinflated. Those are Greenland GCN cores, which are part of the Arctic Islands series of silicon, which is rumored to emphasize power efficiency a la Maxwell. Also, if Zen arrives on 14nm FinFET as rumored, I think the TDP might be lower again relative to what we know is "normal" for an AMD server part.

However, if that number *is* correct, I'm thinking it's closer to a 50/50 split, as it *is* 16 physical cores, after all. Intel's highest-TDP Xeon E7 v2 part is a 155W 15-core clocked at 3.2 GHz with 37.5 MB of L3, although that is probably down to binning, as there also exists a whole gamut of processors with differing TDPs and core counts, with frequencies scaling inversely with core count: http://ark.intel.com/products/family/78584/Intel-Xeon-Processor-E7-v2-Family#@Server

(Seems like they have trouble getting that one last core to not be DOA.)
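
For what the 50/50 guess implies per core (simple arithmetic on the numbers quoted above, nothing more):

```python
# Sanity-checking the 50/50 TDP split speculation above.
apu_tdp_W, zen_cores = 300, 16
cpu_half_W = apu_tdp_W / 2
print(round(cpu_half_W / zen_cores, 1), "W per Zen core")   # ~9.4 W

xeon_tdp_W, xeon_cores = 155, 15
print(round(xeon_tdp_W / xeon_cores, 1), "W per Xeon core") # ~10.3 W
```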

SwissArmyDruid fucked around with this message at 04:28 on Apr 11, 2015

SwissArmyDruid
Feb 14, 2014

by sebmojo

repiv posted:

What's the use case of a fast-ish iGPU in a server though? You either don't need a GPU at all, or you're doing heavy GPGPU work and will be stuffing every PCI-E slot with discrete cards that crush any iGPU.

AMD owns SeaMicro. I think that's the use case. You don't build a server chip that you can't use in the microserver company that you own.

SwissArmyDruid fucked around with this message at 04:18 on Apr 12, 2015

Adbot
ADBOT LOVES YOU

SwissArmyDruid
Feb 14, 2014

by sebmojo
It really *is* the promised land if they can pull it off. I happened to poke through iFixit earlier today, and here's a shot of the entire mainboard of the new MacBook:



That's it, from top to bottom. The processor is on the opposite side, but there's an Intel Core M, non-Iris graphics, DDR, everything. But man, if Zen is comparable to the contemporary Intel architecture, that would be a coup that AMD could easily orchestrate.
