 
  • Locked thread
EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
I'm going to OD on all this salt

This likely means nothing, but it'd be pretty cool if the EHP is coming along nicely. It should also indicate that functional engineering samples of Zen exist and are in the late stages of sampling, or heading into first revisions, if AMD wants to meet the Q3 target with enough volume.

EDIT: Fixed link

EmpyreanFlux fucked around with this message at 04:36 on Feb 3, 2016


Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

FaustianQ posted:

I'm going to OD on all this salt

This likely means nothing, but it'd be pretty cool if the EHP is coming along nicely. It should also indicate that functional engineering samples of Zen exist and are in the late stages of sampling, or heading into first revisions, if AMD wants to meet the Q3 target with enough volume.
uh the very next patch indicates that this is for Carrizo?

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Professor Science posted:

uh the very next patch indicates that this is for Carrizo?

I hosed up the link, should be fixed!

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

FaustianQ posted:

I hosed up the link, should be fixed!
well, they should have had silicon for at least six months if they're going to be shipping it anytime this year; a new CPU architecture takes a really long time to validate, and CPUs aren't trivially fixable in software like GPUs are. the problem is more "does it work, and does it have acceptable yields and perf/W," because they may have samples floating around under NDA but still require respins before they can go to full production.

SwissArmyDruid
Feb 14, 2014

by sebmojo
More AMD stuff. The usual WCCFT caveats apply.

http://fudzilla.com/news/processors/39932-amd-s-zen-based-opteron-will-have-32-cores

TL;DR: A CERN computer engineer's slides got out into the wild; apparently he got specs ahead of time.

* 40% improvement in IPC. (I think we knew this already.)
* 32 physical cores + SMT
* 8-channel DDR4

Seems even when they throw in the towel on CMT, they still can't help but load up on physical core count.

At least the interconnect seems to be competitive both ways: Storm Lake and the leftover SeaMicro fabric are both slated to hit 100 GB/s.
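For a rough sense of what 8-channel DDR4 buys over a desktop dual-channel setup, peak bandwidth scales linearly with channel count. A back-of-envelope sketch (DDR4-2400 is an assumed speed grade for illustration, not a figure from the leak):

```python
def ddr4_bandwidth_gbs(mt_per_s, channels):
    """Peak theoretical DDR4 bandwidth in GB/s.

    Each DDR channel is 64 bits (8 bytes) wide, so peak bandwidth is
    transfer rate (MT/s) * 8 bytes * channel count.
    """
    return mt_per_s * 8 * channels / 1000

dual = ddr4_bandwidth_gbs(2400, 2)  # typical desktop: 38.4 GB/s
octo = ddr4_bandwidth_gbs(2400, 8)  # rumored 32-core Opteron: 153.6 GB/s
print(dual, octo)
```

So the rumored server part would have roughly 4x the peak memory bandwidth of a contemporary desktop at the same DRAM speed.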

Anime Schoolgirl
Nov 28, 2002

lol that's tripling their threadcount from 32nm/vishera if that's a standard opteron

Assepoester
Jul 18, 2004
Probation
Can't post for 10 years!
Melman v2


Hmmmmm

Malloc Voidstar
May 7, 2007

Fuck the cowboys. Unf. Fuck em hard.
What's the hmm for? Bulldozer launched in 2011, and current AMD CPUs are 4th-gen Bulldozer.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Anime Schoolgirl posted:

lol that's tripling their threadcount from 32nm/vishera if that's a standard opteron

Doubt it; the feeling is AMD will have monolithic 8-core dies, and the server parts will be 16, 24, and 32 cores built from 2, 3, and 4 dies. I think the standard Opteron will be 16C/32T, as that should give the most leeway in frequency within their 95W envelope. Of course that may be meaningless when talking about server processors, and it'll be most interesting if they share sockets with desktop. I could see people trying to stuff Opterons into AM4 boards like ye olde 939, but I just don't think it will be possible.

Also a little concerned about the octa-channel memory; this doesn't mean that desktop will still be locked to drat dual channel, right? Seriously AMD, APUs need quad channel to function.

SwissArmyDruid
Feb 14, 2014

by sebmojo
The desktop benefits of going beyond dual-channel are well documented. They're nonexistent.

Now, note that I said "benefits".

There is absolutely, 100% an increase in bandwidth. But the actual performance increase from adding additional channels is insignificant at best, and sometimes a little worse!

I am willing to accept that what we see with quad-channel memory is the same thing as with multiple cores: applications won't take advantage of the increased bandwidth and number of channels unless they're coded specifically for it.

But I am also willing to bet that HBM on-package for desktop is going to be a bigger deal than quad-channel. ESPECIALLY in the context of APUs.

SwissArmyDruid fucked around with this message at 02:43 on Feb 15, 2016

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

SwissArmyDruid posted:

The desktop benefits of going beyond dual-channel are well documented. They're nonexistent.

Now, note that I said "benefits".

There is absolutely, 100% an increase in bandwidth. But the actual performance increase from adding additional channels is insignificant at best, and sometimes a little worse!

I am willing to accept that what we see with quad-channel memory is the same thing as with multiple cores: applications won't take advantage of the increased bandwidth and number of channels unless they're coded specifically for it.

But I am also willing to bet that HBM on-package for desktop is going to be a bigger deal than quad-channel. ESPECIALLY in the context of APUs.

I don't know about how channels specifically scale, but benchmarks very readily reflect the fact that iGPUs are heavily bottlenecked by bandwidth. Both FM2 and Skylake iGPUs scale noticeably with memory bandwidth.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Paul MaudDib posted:

I don't know about how channels specifically scale, but benchmarks very readily reflect the fact that iGPUs are heavily bottlenecked by bandwidth. Both FM2 and Skylake iGPUs scale noticeably with memory bandwidth.

Yeah, this basically. Even the 8-CU R7s on current APUs would get a huge benefit from the improved bandwidth of quad-channel DDR4, and I believe the benefits improve substantially with higher memory clocks. Maybe it'll make little difference in a CPU-bound task, but the iGPU will want all the bandwidth you can feed it. Also, HBM is highly likely to be restricted to high-end APUs only; lower-end APUs won't get it and will still want quad channel. Also also, mobile may have issues with HBM, and there's no reason to handicap oneself like that, at least until an LPHBM solution is available.

They should at least make boards that support dual and quad and just toggle it in the UEFI settings.

SwissArmyDruid
Feb 14, 2014

by sebmojo
I am 100% with you guys in that the GCN cores on past APU products are bandwidth-starved. That said, I think that HBM will be coming down the stack to the mid-range much sooner than you guys think.

With aggressive binning of HBM chips (and if you think there *aren't* going to be HBM chips that come out of production where the TSVs just don't work for any stacks above them, you're a fool), and the bus being 1024 bits wide for a single stack, they maybe only need a single layer of HBM2 (8 Gb/2GB!) to basically tell Iris Pro, "Get that weak poo poo outta here."
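The per-stack numbers bear this out. A quick sketch using the nominal per-pin data rates from the JEDEC HBM specs (1 Gbps/pin for HBM1, 2 Gbps/pin for HBM2; shipping parts may vary):

```python
def hbm_stack_gbs(gbps_per_pin, bus_width_bits=1024):
    """Peak per-stack HBM bandwidth in GB/s over a 1024-bit interface."""
    return gbps_per_pin * bus_width_bits / 8  # 8 bits per byte

hbm1 = hbm_stack_gbs(1.0)  # 128 GB/s per stack
hbm2 = hbm_stack_gbs(2.0)  # 256 GB/s per stack
print(hbm1, hbm2)
```

Even a single stack dwarfs dual-channel DDR4 (roughly 38 GB/s at 2400 MT/s), which is why one stack is plausibly enough to feed an iGPU.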

Anime Schoolgirl
Nov 28, 2002

FaustianQ posted:

They should at least make boards that support dual and quad and just toggle it in the UEFI settings.
have to reserve 1152 pins for quad channel ram though

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Anime Schoolgirl posted:

have to reserve 1152 pins for quad channel ram though

So PGA can't support quad channel? Then it's confirmation that either AM4 is fairly large or it's LGA. I honestly wouldn't be too surprised if it's built off C32 or G34, with maximum backward compatibility with chips for those sockets, for hilarity.

Also, can anyone help me source the 28nm dGPUs in this image? I get the feeling they're mostly (scrapped) prototypes, but I wonder if AMD is still considering 28nm dGPUs. They honestly look like scrapped 300-series successors to Oland and Pitcairn.


SwissArmyDruid posted:

I am 100% with you guys in that the GCN cores on past APU products are bandwidth-starved. That said, I think that HBM will be coming down the stack to the mid-range much sooner that you guys think.

With aggressive binning of HBM chips, (and if you think there *aren't* going to be HBM chips that come out of production where the TSVs just don't work for any stacks above it, you're a fool) and the bus being 1024 bits wide for a single stack, they maybe only need a single layer of HBM2 (8 Gb/2GB!) to basically tell Iris Pro, "Get that weak poo poo outta here."

Even a single stack of HBM1 should honestly be enough, and that's a more mature process as well. I mean, I want Bristol Ridge to feature HBM1, but I'll eat my hat :toxx: if it features any.

EmpyreanFlux fucked around with this message at 15:40 on Feb 15, 2016

Anime Schoolgirl
Nov 28, 2002

FaustianQ posted:

So PGA can't support quad channel? Then it's confirmation that either AM4 is fairly large or it's LGA.
to fit a pin count for PGA that supports quad channel, you need a PCB the size of an LGA2011 chip for a 1200-1300 pin package; LGA2011 is only very slightly bigger than G34 (whereas C32 is slightly bigger than an LGA115x)

i don't think we're going to see quad channel on the consumer end for quite a while, especially as ddr4 looks to be practically 75% faster than ddr3, and 40-48 GB/s would be enough for a 14nm GPU on a 65w APU
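Both of those figures are easy to sanity-check: dual-channel peak bandwidth is transfer rate times 16 bytes. The speed grades below are assumed "representative" DDR3 vs DDR4 bins for illustration, not numbers from the post:

```python
def dual_channel_gbs(mt_per_s):
    # two 64-bit channels * 8 bytes per transfer = 16 bytes per transfer
    return mt_per_s * 16 / 1000

# "~75% faster than DDR3": e.g. DDR4-2800 vs DDR3-1600
speedup = dual_channel_gbs(2800) / dual_channel_gbs(1600)  # 1.75x

# the 40-48 GB/s window corresponds to roughly DDR4-2500 through DDR4-3000
low, high = dual_channel_gbs(2500), dual_channel_gbs(3000)  # 40.0, 48.0
print(speedup, low, high)
```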

PC LOAD LETTER
May 23, 2005
WTF?!

Anime Schoolgirl posted:

lol that's tripling their threadcount from 32nm/vishera if that's a standard opteron
I think they have to have something they can point to that is at least close to or even with Intel, and threads are probably the easiest way to get that, since they can't win on single-thread performance and probably performance/watt too.

Anime Schoolgirl
Nov 28, 2002

PC LOAD LETTER posted:

I think they have to have something they can point to that is at least close to or even with Intel, and threads are probably the easiest way to get that, since they can't win on single-thread performance and probably performance/watt too.
research/server stuff scales well with core count at least

unless you try using ARM :cb:

opteron vishera was okay against its contemporaries on performance per watt, though those chips were lots of 2ghz cores instead of the 3+ghz monstrosities they seem to be shooting for here

PC LOAD LETTER
May 23, 2005
WTF?!

Anime Schoolgirl posted:

i don't think we're going to see quad channel on the consumer end for quite a while
Yea, I doubt AM4 will be quad channel too. Everything that has come out about it (which is very little) seems to suggest the big change is DDR4, plus it being a do-all (CPU and APU) socket for consumer-market stuff.

Quad channel would be awesome for APUs (it would make mid-range-ish dGPU-esque performance from an iGPU commonplace) but is probably too expensive to do, and would be of little to no benefit for the new Zen FX CPUs.

Anime Schoolgirl posted:

research/server stuff scales well with core count at least
Yea, for HPC/server, thread/core count per package is a big deal, and if they price it right + the TDP is OK + performance really is Haswell-ish, then AMD should be able to sell quite a bit to that market, which would be good for them.

They really need to up their ASPs to survive.

PC LOAD LETTER fucked around with this message at 16:05 on Feb 15, 2016

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
Yet remember that AM4 is supposed to cover HEDT (unless they make consumer boards for the server socket, in which case ignore me, because that's obviously the simplest way to get quad/octa-channel), so quad channel isn't impossible. I don't know if you'd ever think of an APU as a HEDT part, but it's AMD; the degree of HSA integration on Zen APUs is unknown, so there might be a case where an octa-core + 16 CUs (even with HBM2) is going to want quad channel, and may perform better than an octa-core with SMT and dual channel.

Spitballing here; I just think AMD has options, and I don't see the point of them shooting themselves in the foot by not having a quad-channel option on some boards.

Anime Schoolgirl
Nov 28, 2002

it'll have to be in a bigger LGA socket than C32 then :shepface:

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Anime Schoolgirl posted:

it'll have to be in a bigger LGA socket than C32 then :shepface:

I dunno, AMD's only real limit is maintaining compatibility with cooling solutions IMHO, so something the size of 1366 or 2011 isn't really out of the question. Can Intel raise a stink if AM4 uses similar mounting holes/solutions to 2011? I don't think space will be an issue for 8C/16CU on 14LPP; it's just the necessary pins, and, uh, maybe the thermal envelope (easily 150W TDP).

Also, I think 1C to 2 CUs is an appropriate ratio if I understand HSA at all, and that scaling much past 1C to 6 CUs will produce rapidly diminishing returns, yes?

SwissArmyDruid
Feb 14, 2014

by sebmojo

FaustianQ posted:

So PGA can't support quad channel? Then it's confirmation that either AM4 is fairly large or it's LGA. I honestly wouldn't be too surprised if it's built off C32 or G34, with maximum backward compatibility with chips for those sockets, for hilarity.

Also, can anyone help me source the 28nm dGPUs in this image? I get the feeling they're mostly (scrapped) prototypes, but I wonder if AMD is still considering 28nm dGPUs. They honestly look like scrapped 300-series successors to Oland and Pitcairn.



Even a single stack of HBM1 should honestly be enough, and that's a more mature process as well. I mean, I want Bristol Ridge to feature HBM1, but I'll eat my hat :toxx: if it features any.

I don't think "mature" applies in this case, considering how fast they're going from HBM1 to HBM2. For example, Samsung is bypassing HBM1 entirely and going straight to 2, and Greenland (if that's still a thing) is purported to have HBM2 as well.

I think HBM1 was more of a "proof of concept, but we need something to fill the gap between now and the actual product, so let's just give it the HBM1 name so we can do something with this silicon."

SwissArmyDruid
Feb 14, 2014

by sebmojo
http://www.pcper.com/news/Memory/Samsungs-HBM2-will-be-ready-you-are

Check those timestamps, yo. Called it.

Man, I really need to get a job doing market analysis.

SwissArmyDruid fucked around with this message at 10:37 on Feb 16, 2016

PC LOAD LETTER
May 23, 2005
WTF?!
Hmm?

Nothing works with what they're describing (HBM2 DIMMs!!), and so far there has been no information or hint of anything that will work with them either.

Don't get me wrong, I'd love to see it, but this seems about as likely to happen as the GDDR5 DIMMs that were rumored to be supported for AMD APUs but never materialized.

Really though, I don't even think an HBM or HBM2 DIMM would make sense. All the bandwidth they bring is due in part to the way they interface with the GPU/CPU/APU, which is only possible via an interposer of some sort. If you mount them on a DIMM, I don't think there's much reason to believe they'd get all that much better bandwidth than current or near-future DDR4.

edit: the actual Samsung release doesn't specify DIMMs either, just the article itself does; probably a misread or wishful thinking on the author's part... :/

PC LOAD LETTER fucked around with this message at 15:28 on Feb 16, 2016

Durinia
Sep 26, 2014

The Mad Computer Scientist
Posting to confirm that the author of that article is an idiot.

That said, I think there's potentially a market for HBM in HEDT processor packages. Honestly, PCs don't need a lot of memory capacity, and capacity is the biggest drawback of HBM versus DDR. It could really open up APU/iGPU performance in ways DDR can't, in a tiny package.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
WCCF, but it looks like Stoney Ridge is beating Airmont and low-end Skylake in perf/watt for single-threaded tasks. I'm impressed at their ability to squeeze more blood from the construction-core design and 28nm; I'm seriously thinking Jim Keller had a hand in Excavator's design.

Apparently it's using an even newer Excavator design that some are calling Excavator+ (likely due to the Zen and Zen+ nomenclature), and these are not Cat cores, by the way. Hopefully they can get some bites for designs and develop more SKUs; it looks like you could get a phone part out of them with an 800MHz idle clock, 1.2GHz base clock, and a 1.6-1.7GHz turbo. I don't think they'd be able to make it much faster beyond adding more cores though; 3.5GHz is pretty close to optimum for Excavator.

GrizzlyCow
May 30, 2011
Any idea how much these will cost compared to current AMD offerings and comparable Intel offerings? If the price is right, these will be pretty good for AMD.

Now the only thing AMD has to do is offer a good midrange SKU, and they might receive some exposure again.

champagne posting
Apr 5, 2006

YOU ARE A BRAIN
IN A BUNKER


I think if AMD could afford to undercut Intel, they would have done it already.

A Bad King
Jul 17, 2009


Suppose the oil man,
He comes to town.
And you don't lay money down.

Yet Mr. King,
He killed the thread
The other day.
Well I wonder.
Who's gonna go to Hell?
Apparently, Bloomberg is reporting that Intel is looking to license AMD GPU patents, as their deal with Nvidia expires Q1 2017. Man, they could use the cash...

SwissArmyDruid
Feb 14, 2014

by sebmojo
Not surprised. This is probably the first step towards AdaptiveSync-capable Intel parts, as promised. That poo poo was never making it into Kaby or Cannon Lake anyways.

Just think, the current deal that Nvidia inked back in 2011 with Intel is $1.5 billion over five years.

A roughly equal deal would be a lifeline to AMD, a kick in the nuts to Nvidia (especially since this cash is high-margin), and Intel gets to indirectly prop up their only other main competition in the CPU market for reasons of "not looking like a trust again".

SwissArmyDruid fucked around with this message at 20:18 on Mar 17, 2016

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

SwissArmyDruid posted:

Not surprised. This is probably the first step towards AdaptiveSync-capable Intel parts, as promised. That poo poo was never making it into Kaby or Cannon Lake anyways.

Just think, the current deal that Nvidia inked back in 2011 with Intel is $1.5 billion over five years.

A roughly equal deal would be a lifeline to AMD, a kick in the nuts to Nvidia (especially since this cash is high-margin), and Intel gets to indirectly prop up their only other main competition in the CPU market for reasons of "not looking like a trust again".

Also, it potentially enables AMD to knock the snot out of what they perceive to be a more realistic competitor: Nvidia (Big Blue may die before they turn poo poo around, Qualcomm can't hack it, and I don't think Samsung really competes in the same space?). Competition within the GPU space reduces Nvidia's income and thus its ability to compete with Intel. Vice versa, Nvidia doesn't want AMD evaporating and Intel buying up RTG, because lol, Radeons on Intel's process tech vs Nvidia's TSMC stuff would be a slaughter.

AMD is right now a battered child being used as a proxy between vicious drunk daddy and emotionally abusive mommy. Depressingly, this is a high point for AMD; don't dwell on that too much.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

FaustianQ posted:

Vice versa, Nvidia doesn't want AMD evaporating and Intel buying up RTG, because lol, Radeons on Intel's process tech vs Nvidia's TSMC stuff would be a slaughter.

I realize that CPU processes are different from GPU processes, but can you imagine AMD's IP on Intel's fabs :circlefap:

A Bad King
Jul 17, 2009


Suppose the oil man,
He comes to town.
And you don't lay money down.

Yet Mr. King,
He killed the thread
The other day.
Well I wonder.
Who's gonna go to Hell?

SwissArmyDruid posted:

Not surprised. This is probably the first step towards AdaptiveSync-capable Intel parts, as promised. That poo poo was never making it into Kaby or Cannon Lake anyways.

Just think, the current deal that Nvidia inked back in 2011 with Intel is $1.5 billion over five years.

A roughly equal deal would be a lifeline to AMD, a kick in the nuts to Nvidia (especially since this cash is high-margin), and Intel gets to indirectly prop up their only other main competition in the CPU market for reasons of "not looking like a trust again".

I'm highly doubtful that Intel will cut AMD a similar deal. I'm sure they want to play these critical patent holders against one another to lower their iGPU costs; 66 million a month is a big fee for critical tech patent licensing. But though Intel might ask AMD to give up the secret sauce for less, I just saw AMD stock jump on this rumor.

SwissArmyDruid
Feb 14, 2014

by sebmojo

A Bad King posted:

I'm highly doubtful that Intel will cut AMD a similar deal. I'm sure they want to play these critical patent holders against one another to lower their iGPU costs; 66 million a month is a big fee for critical tech patent licensing. But though Intel might ask AMD to give up the secret sauce for less, I just saw AMD stock jump on this rumor.

Uhhhhhhh, your numbers seem a bit off. $1.5B / 12 / 5 is $25M a month; $66M sounds more like a quarterly amount.
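Spelled out, using the deal figure quoted upthread:

```python
deal_total_usd = 1.5e9  # Nvidia-Intel licensing deal, $1.5B over 5 years
years = 5

per_month = deal_total_usd / (years * 12)   # $25M per month
per_quarter = deal_total_usd / (years * 4)  # $75M per quarter
print(per_month, per_quarter)
```

So a "$66 million" figure is indeed far closer to a quarterly amount than a monthly one.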

A Bad King
Jul 17, 2009


Suppose the oil man,
He comes to town.
And you don't lay money down.

Yet Mr. King,
He killed the thread
The other day.
Well I wonder.
Who's gonna go to Hell?

SwissArmyDruid posted:

Uhhhhhhh your numbers seem a bit off. $1.5B / 12 / 5 is $25M a month. $66 sounds more like a quarterly amount.

Augh. Yes, quarterly amount!

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
Keep in mind, cash flow itself isn't everything, and AMD may be willing to make trades on more nebulous stuff that they think would keep them in business longer, such as "Don't gently caress us on Zen in the OEM space," "Help support GPUOpen," etc.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
Isn't it also in Intel's interest not to have AMD go bankrupt? Like, I thought that's why Microsoft invested in Apple way back when and such.

A Bad King
Jul 17, 2009


Suppose the oil man,
He comes to town.
And you don't lay money down.

Yet Mr. King,
He killed the thread
The other day.
Well I wonder.
Who's gonna go to Hell?

Boris Galerkin posted:

Isn't it also in Intel's interest to not have AMD go bankrupt? Like I thought that's why Microsoft invested in Apple way back when and such.

Intel needs a marginalized competitor in order to avoid antitrust issues.


EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

A Bad King posted:

Intel needs a marginalized competitor in order to avoid antitrust issues.

Which means something like 7% server market share, 15% in mobile and laptop, and 30% in desktop. AMD runs in the black forever but isn't a real threat. They'd prefer those numbers lower, with AMD taking a larger share of the GPU market, of course.
