Notorious b.s.d.
Jan 25, 2003

seeing as naples is four loving chips in one package, it would be pretty shameful if they didn't beat an e5 on speeds and feeds

(not so sure about watts and dollars!)

Notorious b.s.d.
Jan 25, 2003

ryzen looks surprisingly ok

for naples all this poo poo hangs on their "infinity fabric," because it's four chips in one god drat can. if the interconnect sucks, naples sucks. if the interconnect rules, then... naples might be kinda sorta ok, maybe.

amd has been pretty tight-lipped about the interconnect so, uh, it don't look real good

Notorious b.s.d.
Jan 25, 2003

this is latency between cores on a single chip. it is not relevant to the question. what's important for naples is the performance of the interconnect between chips.

naples is literally 4x ryzen-type chips glued together. i don't mean that conceptually, or metaphorically. i mean four discrete, separated pieces of silicon glued into a single package with little wires between them.

if the interconnect between chips is good, then naples will be good. if the interconnect is bad, then naples will be a turd.
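you can measure this yourself the day hardware shows up. quick sketch of mine below (not anything from amd): pin two threads to two cores and ping-pong a cache line between them. run it once with both cores on the same die, then once with a core on each die, and the gap is the interconnect tax. the cpu ids are placeholders, check lscpu for your layout first.

code:
/* cache-line ping-pong: rough core-to-core round-trip latency.
 * linux + gcc, build with: gcc -O2 -pthread pingpong.c
 * cpu ids below are placeholders; pick same-die vs cross-die pairs. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ITERS 1000000
static atomic_int flag;               /* the contended cache line */

static void pin(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *ponger(void *arg) {
    pin(*(int *)arg);
    for (int i = 0; i < ITERS; i++) {
        while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
            ;                         /* wait for ping */
        atomic_store_explicit(&flag, 0, memory_order_release);   /* pong */
    }
    return NULL;
}

int main(void) {
    static int cpu_b = 1;             /* placeholder second core */
    pin(0);                           /* placeholder first core */
    pthread_t t;
    pthread_create(&t, NULL, ponger, &cpu_b);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++) {
        atomic_store_explicit(&flag, 1, memory_order_release);   /* ping */
        while (atomic_load_explicit(&flag, memory_order_acquire) != 0)
            ;                         /* wait for pong */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    pthread_join(t, NULL);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("avg round trip: %.1f ns\n", ns / ITERS);
    return 0;
}

on a four-die naples part, the cross-die round trip vs the same-die round trip is basically the whole question.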

"infinity fabric" is the new interconnect. like hypertransport, except, AMD. if anyone sees something about "infinity fabric" pls post it here

Notorious b.s.d.
Jan 25, 2003

MrBadidea posted:

what i'm interested in is how the 2-socket systems are gonna handle big gpgpu clusters. each socket has 128 pcie lanes, but in 2-socket setups 64 of them are used for the fabric interconnect, so there's still 128 lanes leaving the sockets for the rest of the system, half from each socket

this could get kinda ugly

if it's 16 pci-e lanes per chip, does that mean it's literally one gpu per chip, and moving data in/out of gpu memory requires going across the interconnect for every memory access?
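if it does shake out that way, the first thing to do on one of these boxes is figure out which die each gpu actually hangs off. on linux the kernel already exposes this through sysfs. trivial sketch, and the pci address is made up, grab yours from lspci:

code:
/* which numa node (i.e. which die, on a multi-die part) owns a pci device.
 * linux only. the device address is a placeholder -- take it from lspci. */
#include <stdio.h>

int main(void) {
    const char *path = "/sys/bus/pci/devices/0000:41:00.0/numa_node";
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }
    int node = -1;
    if (fscanf(f, "%d", &node) == 1)
        printf("device sits on numa node %d\n", node);   /* -1 = unknown */
    fclose(f);
    return 0;
}

then something like numactl --cpunodebind=N --membind=N ./your_app keeps the staging buffers on the die that owns the lanes. put them anywhere else and the gpu's dma traffic crosses the fabric on every transfer.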

Notorious b.s.d.
Jan 25, 2003

Paul MaudDib posted:

the standard R7 chip (1700, 1700X, 1800X) is just a pair of dies glued together though, so Nipples will actually be 8 pieces of silicon glued together

the ryzen is still a single die, even if it is two "core complexes" connected by fabric on that single die

i'm not sure we can infer very much from the fabric performance in the degenerate case. we don't know how many links exist on the ryzen CCXs vs a naples CCX, or what the specific topology will be with eight CCXs on four dies

Paul MaudDib posted:

it's the same interconnect in both cases, and the performance of the interconnect between a pair of dies still tells us a little about how it might perform between 8 dies - although the 5960X is not really the relevant comparison there, since you'd want to look at the big quad-die E5s/E7s where the interconnect is being stressed a little harder

as far as i know all intel x86 chips are 1 die in 1 package. even the monster 22 core E5s are on a single gigantic chip.

the main difference between e5 and e7 is how many qpi links you've got, which determines the possible topologies to link sockets together

Notorious b.s.d.
Jan 25, 2003

BangersInMyKnickers posted:

High end xeons have the same problem where the 12+ core packages are basically two 6 or 8 core packages glued together with a high speed crossbar.

yes the really big xeon chips (haswell "HCC") have a funny logical layout, but i'm pretty sure they're still physically a single huge die.

here is an hcc die shot from a xeon e5-2600 v3. it's very easy to see, and count, the L2 caches for the 18 cores. (if this is actually two chips glued together, i sure don't see the seam.)

[die shot of the 18-core haswell HCC xeon, image not preserved]

BangersInMyKnickers posted:

Nope, they call it cluster-on-die. Intel broke out the glue gun too.

"cluster-on-die" is a bios flag that changes l3 cache handling

you can either split the big chips into two numa zones, with lower latency and crappier L3 cache performance, or you can leave them in a single pool. higher latency, better caching.
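this is easy to check from software: flip cod in the bios and the os just reports twice as many numa nodes, no new silicon required. small libnuma sketch (linux, build with gcc cod.c -lnuma) that dumps the node count and distance matrix, same info numactl --hardware gives you:

code:
/* print numa node count and distance matrix, like numactl --hardware.
 * linux + libnuma: gcc cod.c -lnuma */
#include <numa.h>
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "kernel built without numa\n");
        return 1;
    }
    int max = numa_max_node();
    printf("nodes: %d\n", max + 1);   /* cod doubles this per socket */
    for (int a = 0; a <= max; a++) {
        for (int b = 0; b <= max; b++)
            printf("%4d", numa_distance(a, b));   /* 10 = local */
        printf("\n");
    }
    return 0;
}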
