aluminumonkey
Jun 19, 2002

Reggie loves tacos

Otakufag posted:

I decided to get a 2600x and just wait for the 3600x instead of getting a 9600k. What are a couple of good mobo recommendations that might serve me well even further, until 2020 for Ryzen 3?

I'm in the same boat. I'm upgrading my 2500k to a 2700X for now, then to whatever 3rd Gen Ryzen is equivalent when they come out.


Truga
May 4, 2014
Lipstick Apathy

MaxxBot posted:

Here's another pic where the footprint for the second chiplet is clearly visible.

https://twitter.com/brianmacocq/status/1083269332338204672?s=19

wouldn't a dual channel 16 core be hilariously memory bandwidth starved tho? i'm looking at a 16core TR for my pc purely due to quad channel support.

Arzachel
May 12, 2012

Truga posted:

wouldn't a dual channel 16 core be hilariously memory bandwidth starved tho? i'm looking at a 16core TR for my pc purely due to quad channel support.

Do you think Rome is going to be bandwidth starved? It has the same ratio of cores to memory channels.
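To put numbers on that ratio (the DDR4-3200 speed here is an assumed example, not from the thread), a rough sketch:

```python
# Back-of-envelope per-core memory bandwidth. One 64-bit DDR4 channel
# moves MT/s * 8 bytes; at an assumed DDR4-3200 that's 25.6 GB/s.
def per_core_bandwidth_gbs(cores, channels, mts=3200):
    channel_gbs = mts * 8 / 1000  # GB/s per 64-bit channel
    return channels * channel_gbs / cores

desktop_16c = per_core_bandwidth_gbs(16, 2)  # dual-channel 16-core desktop part
rome_64c = per_core_bandwidth_gbs(64, 8)     # Rome: 64 cores, 8 channels

print(f"{desktop_16c:.1f} vs {rome_64c:.1f} GB/s per core")  # 3.2 vs 3.2
```

Same 8 cores per channel either way, so same per-core bandwidth on paper.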

Risky Bisquick
Jan 18, 2008

PLEASE LET ME WRITE YOUR VICTIM IMPACT STATEMENT SO I CAN FURTHER DEMONSTRATE THE CALAMITY THAT IS OUR JUSTICE SYSTEM.



Buglord
Is your specific workload memory sensitive?

teh_Broseph
Oct 21, 2010

THE LAST METROID IS IN
CATTIVITY. THE GALAXY
IS AT PEACE...
Lipstick Apathy

aluminumonkey posted:

I'm in the same boat. I'm upgrading my 2500k to a 2700X for now, then to whatever 3rd Gen Ryzen is equivalent when they come out.

Count me in too - I've been itching to replace the 2500k setup since about FFXV came out and hammered it, a 9700k setup just sounds way too goddamn expensive, and another 6 months for the Zen 3000s sounds too long. I'm working my way through Asscreed Odyssey now with some lowered settings, had to lower settings on FFXV and new Tomb Raider, and with Anthem and FFXV DLC on the way watching the 2500k choke out and bottleneck my 1080 is getting painful.

Truga
May 4, 2014
Lipstick Apathy

Risky Bisquick posted:

Is your specific workload memory sensitive?

I'm honestly not sure. How memory bandwidth hungry is encoding x264?

Risky Bisquick
Jan 18, 2008

PLEASE LET ME WRITE YOUR VICTIM IMPACT STATEMENT SO I CAN FURTHER DEMONSTRATE THE CALAMITY THAT IS OUR JUSTICE SYSTEM.



Buglord

Truga posted:

I'm honestly not sure. How memory bandwidth hungry is encoding x264?

Not very. This is a bit old but still valid:

https://www.anandtech.com/show/8959/ddr4-haswell-e-scaling-review-2133-to-3200-with-gskill-corsair-adata-and-crucial/4
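A quick way to see why: even the raw 4:2:0 input stream the encoder chews through is tiny next to DRAM bandwidth. The resolution and framerate below are assumed examples, and motion search does touch each frame several times, but that's still a small multiple of this:

```python
# Raw YUV 4:2:0 input rate for a video encoder: 1.5 bytes per pixel.
def raw_yuv420_mbs(width, height, fps):
    return width * height * 1.5 * fps / 1e6  # MB/s

rate = raw_yuv420_mbs(1920, 1080, 60)
print(f"{rate:.0f} MB/s")  # ~187 MB/s, vs ~25,600 MB/s for one DDR4-3200 channel
```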

Truga
May 4, 2014
Lipstick Apathy
Oh, that's great then.

eames
May 9, 2009

Otakufag posted:

So latencies are going up no matter what? AMD isn't implementing something to counter that?

I'm not qualified to judge that, though I suspect they have a huge amount of cache in the 14nm IO chiplet (it is pretty big) to make up for the downsides. I would be shocked if Ryzen 3 turns out to have better latencies (cache and memory) than a current single-ringbus i9.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
The IO die isn't that much smaller than the old Zen die, yet is lacking all the CPU stuff, which isn't that small either, given the size of the 7nm chiplet. The L4 cache speculation could very well be true.

Also, I've read claims that Intel's last CPUs without an IMC had pretty dope latencies all things considered - better than the first models with an IMC. So Zen 2 probably won't be that bad with this change either.

The Illusive Man
Mar 27, 2008

~savior of yoomanity~

teh_Broseph posted:

Count me in too - I've been itching to replace the 2500k setup since about FFXV came out and hammered it, a 9700k setup just sounds way too goddamn expensive, and another 6 months for the Zen 3000s sounds too long. I'm working my way through Asscreed Odyssey now with some lowered settings, had to lower settings on FFXV and new Tomb Raider, and with Anthem and FFXV DLC on the way watching the 2500k choke out and bottleneck my 1080 is getting painful.

I'm honestly curious how many CPU upgrades AssCreed Odyssey has spurred. I know it pushed me over the edge when it was 100%ing my old 6700K.

MagusDraco
Nov 11, 2011

even speedwagon was trolled

Space Racist posted:

I'm honestly curious how many CPU upgrades AssCreed Odyssey has spurred. I know it pushed me over the edge when it was 100%ing my old 6700K.

I'm running into this with Hitman 2018 on some maps. Mumbai makes my 1080ti drop down to 20 or 30 fps sometimes at 1440p. Kind of sucks but there's nothing I can really do about it for now. Poor i5-3550 can only do so much.

Arzachel
May 12, 2012

Combat Pretzel posted:

The IO die isn't that much smaller than the old Zen die, yet is lacking all the CPU stuff, which isn't that small either, given the size of the 7nm chiplet. The L4 cache speculation could very well be true.

Also, I've read claims that Intel's last CPUs without an IMC had pretty dope latencies all things considered - better than the first models with an IMC. So Zen 2 probably won't be that bad with this change either.

The IO die is ~122mm² while Summit Ridge minus both CCXes works out to ~125mm².
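For reference, the arithmetic behind that estimate (the ~213 mm² Summit Ridge and ~44 mm² per-CCX figures are commonly cited die-shot estimates, not official numbers):

```python
summit_ridge_mm2 = 213   # Zen 1 die, estimated
ccx_mm2 = 44             # one 4-core CCX, estimated
io_die_mm2 = 122         # Matisse IO die, estimated

# Subtract both CCXes to get the "uncore" left over in Summit Ridge.
uncore_mm2 = summit_ridge_mm2 - 2 * ccx_mm2
print(uncore_mm2, io_die_mm2)  # 125 vs 122 - same ballpark
```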

B-Mac
Apr 21, 2003
I'll never catch "the gay"!

Space Racist posted:

I'm honestly curious how many CPU upgrades AssCreed Odyssey has spurred. I know it pushed me over the edge when it was 100%ing my old 6700K.

Is Odyssey more CPU hungry than Origins? The Origins DLC would push my stock 9900K to 70-80% at times, with a single core in the 90s. I'd see a power draw of 120W on the CPU alone; it was nuts.

Dr. Fishopolis
Aug 31, 2004

ROBOT

B-Mac posted:

Is Odyssey more CPU hungry than Origins? The Origins DLC would push my stock 9900K to 70-80% at times, with a single core in the 90s. I'd see a power draw of 120W on the CPU alone; it was nuts.

denuvo is a hell of a drug.

MagusDraco
Nov 11, 2011

even speedwagon was trolled

Dr. Fishopolis posted:

denuvo is a hell of a drug.

It ended up not being Denuvo, just poo poo programming in general / Arkham Knight's revenge: it loads stuff in and out really aggressively, and does so at the speed of your storage, so if you're on an SSD (or god forbid something stupid like RAID 0 NVMe SSDs) it creates massive driver overhead, crushing the CPU.

edit: it doesn't want to leave textures loaded into VRAM because it's afraid you'll run out of VRAM, so instead it loads the same textures into and out of VRAM thousands of times over the course of an hour.


https://github.com/Kaldaien/SpecialK/releases/tag/sk_odyssey

incoherent
Apr 24, 2004

01010100011010000111001
00110100101101100011011
000110010101110010

MagusDraco posted:

It ended up not being Denuvo, just poo poo programming in general / Arkham Knight's revenge: it loads stuff in and out really aggressively, and does so at the speed of your storage, so if you're on an SSD (or god forbid something stupid like RAID 0 NVMe SSDs) it creates massive driver overhead, crushing the CPU.

edit: it doesn't want to leave textures loaded into VRAM because it's afraid you'll run out of VRAM, so instead it loads the same textures into and out of VRAM thousands of times over the course of an hour.


https://github.com/Kaldaien/SpecialK/releases/tag/sk_odyssey

How the gently caress did it get so bad from Origins? It's literally the same game one year apart, plus CYOA voices and boat tech.

MagusDraco
Nov 11, 2011

even speedwagon was trolled

incoherent posted:

How the gently caress did it get so bad from Origins? It's literally the same game one year apart, plus CYOA voices and boat tech.

I was under the impression Origins was just as bad, and was the source of all the "wow, look at how bad Denuvo and this other DRM on top of Denuvo hammers the CPU" comments.


I mean, they've done two separate performance patches to Odyssey since Kaldaien did that write-up and did what he could for the game on his end. The latest one added weird microstutters for some people unless you lock the framerate with the in-game tools (and it has to be locked below 60), though, so like... I wouldn't put it past Ubi to just be bad at patching poo poo

orcane
Jun 13, 2012

Fun Shoe

incoherent posted:

How the gently caress did it get so bad from Origins? It's literally the same game one year apart, plus CYOA voices and boat tech.

It's my understanding that Origins does the same thing, but who knows.

I'm only running it off an HDD at 1200p though, and so far it hasn't made me want to upgrade my 4790k :v:

Risky Bisquick
Jan 18, 2008

PLEASE LET ME WRITE YOUR VICTIM IMPACT STATEMENT SO I CAN FURTHER DEMONSTRATE THE CALAMITY THAT IS OUR JUSTICE SYSTEM.



Buglord

quote:

https://www.digitimes.com/news/a20190111PD207.html

AMD has unveiled recently its 7nm CPU and GPU lineup designed for high-performance PCs, gaming and data center applications. The new processor series are all reportedly being fabricated by Taiwan Semiconductor Manufacturing Company (TSMC).

TSMC is also among the backend partners of AMD for its new 7nm computing and graphics products, according to industry sources. Siliconware Precision Industries (SPIL) under Taiwan's ASE Technology Holding, and China-based Tongfu Microelectronics (TFME) are other backend service providers for the chips, the sources continued.

TSMC with its CoWoS (chip-on-wafer-on-substrate) packaging has grabbed orders for AMD's 7nm datacenter CPU, while SPIL and TFME share the flip-chip packaging orders placed by AMD for its new 7nm CPU and GPU designed for desktops and notebooks, the sources indicated.

TFME (formerly Nantong Fujitsu Microelectronics) through its acquisition of an 85% stake in AMD's Penang, Malaysia and Suzhou, China ATMP (assembly, test, mark, and pack) facilities has allowed the China-based company to win part of the orders for AMD's new 7nm processors, the sources identified.

"2019 will be an inflection point for the industry as we bring these new products to market," said AMD president and CEO Lisa Su when announcing the company's first 7nm CPU and GPU. "From our 7nm Radeon graphics chips to our next-generation 7nm AMD Ryzen and AMD EPYC processors, it's going to be an exciting year for AMD and the industry."

AMD using different packaging for Server and Desktop/GPU

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Given they've already shown off the new Threadripper (I assume it was a TR instead of an Epyc, since it was chip-on-substrate and not CoWoS), is it wishful thinking that they may switch over anyway?

Malcolm XML
Aug 8, 2009

I always knew it would end like this.
CoWoS is just an interposer, so maybe HBM is coming?

Cygni
Nov 12, 2005

raring to post

I imagine Rome is actually going to be pretty different from the desktop designs, just 'cause one I/O die with a dual-channel config ain't gonna fly there. Maybe multiple, smaller I/O dies? Is there some reason a denser chiplet layout would need an interposer?

As a side note, I will fully admit my predictions were totally wrong about AMD moving to chiplets this fast. This is pretty awesome/insane.

e: Also, no chiplet APUs are coming for Matisse:

https://www.anandtech.com/show/13852/amd-no-chiplet-apu-variant-on-matisse-cpu-tdp-range-same-as-ryzen2000

Cygni fucked around with this message at 22:53 on Jan 11, 2019

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
What makes you think the IO die on the Epyc has only two channels? The thing is enormous compared to the one on the Ryzen.

Cygni
Nov 12, 2005

raring to post

Combat Pretzel posted:

What makes you think the IO die on the Epyc has only two channels? The thing is enormous compared to the one on the Ryzen.

oh i somehow missed that they already showed the package for Rome, my bad

e: that said looking at this picture, it doesnt look like there is an interposer?

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Hm, so monolithic designs again. I think this might indicate Matisse is a 4C/8T, probably 12 to 14CU design then, definitely Navi. They might get away with a ~120mm² or so die that way. I'm not seeing the advantage of doing a larger die to accommodate even more cores for what amounts to a mobile and budget desktop design, and more than 14CU seems pointless in a monolithic design without a high-bandwidth solution. I dunno, AMD might go eDRAM - GloFo has IBM 14nmHP to make high-density eDRAM and eSRAM chips - and they wouldn't need much, 64-128MB being enough really. HBM2 is better, but that needs an interposer.

NewFatMike
Jun 11, 2015

Someone posted rather recently that DDR4 4000 is faster than eDRAM, so if motherboards get there, it could be a detriment to allocate die space there.
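The bandwidth side of that claim checks out on paper, with the caveat that latency is a separate question (the Crystalwell figure is the commonly quoted ~50 GB/s per direction, not an official spec):

```python
# Peak bandwidth: dual-channel DDR4-4000 vs Intel's Crystalwell eDRAM.
ddr4_4000_dual_gbs = 4000 * 2 * 8 / 1000   # MT/s * channels * 8 bytes = GB/s
crystalwell_per_dir_gbs = 50               # ~50 GB/s each direction (quoted figure)

print(ddr4_4000_dual_gbs, crystalwell_per_dir_gbs)  # 64.0 vs 50
```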

LRADIKAL
Jun 10, 2001

Fun Shoe
Faster? In throughput? What about latency?

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

NewFatMike posted:

Someone posted rather recently that DDR4 4000 is faster than eDRAM, so if motherboards get there, it could be a detriment to allocate die space there.

That depends entirely on a huge number of factors, same generation to same generation, not really? You trade capacity for latency, more or less, so it comes down to the specific implementation details.

I'd be super excited to see a SKU that has a 1GB SRAM cache sitting right next to the IO die that acts as a really retarded fast L4 cache. Who cares if it's 3x as big as eDRAM when it has twice the throughput and 20% less latency. A huge quantity of scientific and media systems have core loops that could fit entirely within L4 cache, improving memory-related metrics substantially.

NewFatMike
Jun 11, 2015

Methylethylaldehyde posted:

That depends entirely on a huge number of factors, same generation to same generation, not really? You trade capacity for latency, more or less, so it comes down to the specific implementation details.

I'd be super excited to see a SKU that has a 1GB SRAM cache sitting right next to the IO die that acts as a really retarded fast L4 cache. Who cares if it's 3x as big as eDRAM when it has twice the throughput and 20% less latency. A huge quantity of scientific and media systems have core loops that could fit entirely within L4 cache, improving memory-related metrics substantially.

That's good to know - I was a little suspicious of the figure and hadn't really researched it enough to know for sure if it was true.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Cygni posted:

e: that said looking at this picture, it doesnt look like there is an interposer?
Yea, the demoed CPU was on substrate. I could swear I managed to google up an image of the CoWoS one, but gently caress me if I could actually replicate that search.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Methylethylaldehyde posted:

retarded fast L4 cache

please don't do that

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

BangersInMyKnickers posted:

please don't do that

Also... slow fast L4 cache?

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
It could literally be a retarded fast L4 cache, high latency high bandwidth.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

K8.0 posted:

It could literally be a retarded fast L4 cache, high latency high bandwidth.

That's what HBM gets you: lots of throughput, loads of latency, but a bus that's 512 or 1024 bits wide, so as long as you know what your accesses will look like (shader processing) it works a treat.

The nicest part of AMD's chiplet design is that there's no reason why getting a 1GB eDRAM or 256MB SRAM chip from SK Hynix or Micron couldn't work. You'd have to tweak the IMC some to get the 4th layer of cache in place, but having a super huge eviction cache for L3 would prevent a lot of memory accesses that would otherwise need to happen. Best thing is, depending on the exact implementation details, you can make it 100% transparent to the 7nm chiplets, which would save them tons of cash on new masks.
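The eviction-cache idea sketches out in a few lines. This is a toy LRU victim cache, not anything AMD has announced; the capacity and replacement policy are purely illustrative:

```python
from collections import OrderedDict

class VictimCache:
    """Toy L4 that only holds lines evicted from L3 (a victim/eviction cache)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()

    def insert(self, addr):
        # Called when L3 evicts a line.
        self.lines[addr] = True
        self.lines.move_to_end(addr)
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)  # drop the oldest line

    def lookup(self, addr):
        # Called on an L3 miss; a hit here avoids a trip to DRAM.
        # The line moves back up to L3, so it leaves the victim cache.
        return self.lines.pop(addr, None) is not None

l4 = VictimCache(capacity=2)
for addr in (0x100, 0x200, 0x300):  # third insert pushes out 0x100
    l4.insert(addr)
print(l4.lookup(0x100), l4.lookup(0x200))  # False True
```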

karoshi
Nov 4, 2008

"Can somebody mspaint eyes on the steaming packages? TIA" yeah well fuck you too buddy, this is the best you're gonna get. Is this even "work-safe"? Let's find out!

Methylethylaldehyde posted:

The nicest part of AMD's chiplet design is that there's no reason why getting a 1GB eDRAM or 256MB SRAM chip from SK Hynix or Micron couldn't work. You'd have to tweak the IMC some to get the 4th layer of cache in place, but having a super huge eviction cache for L3 would prevent a lot of memory accesses that would otherwise need to happen. Best thing is, depending on the exact implementation details, you can make it 100% transparent to the 7nm chiplets, which would save them tons of cash on new masks.

Or make it 100% OS controlled. Some Xeon SKUs for networking boxes used to have programmable caches where you could slice part of it off and use it for things you don't want touching the memory, i.e. packets not destined for the machine. Surely putting things like the root of the page table, the syscall entry points, interrupt handlers or allowing apps to request part of the SRAM to be mapped into their address space might be useful.

I'd love to see raytracing benchmarks with as much of the spatial tree in SRAM as it could fit. Framebuffer access is coherent but bouncing rays are a horrible memory access pattern for DRAM. SRAM dgaf about access coherency and has great latency to boot, ideal for non-coherent workloads where cache misses kill all your out-of-order IPC gains.
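The access-pattern point is easy to demonstrate: a pointer-chase over a shuffled permutation (roughly what BVH traversal looks like to the memory system) does the same work as a sequential walk but gives the prefetcher nothing to work with. Timings vary wildly by machine, so this sketch just builds and verifies both walks:

```python
import random

def make_chain(n, shuffled):
    """Build a single-cycle pointer chain over n slots."""
    order = list(range(n))
    if shuffled:
        random.shuffle(order)
    nxt = [0] * n
    for a, b in zip(order, order[1:] + order[:1]):
        nxt[a] = b  # each slot points at the next slot in the cycle
    return nxt

def walk(nxt):
    i = 0
    for _ in range(len(nxt)):
        i = nxt[i]
    return i  # a full lap around the cycle lands back at the start

seq = make_chain(1 << 12, shuffled=False)  # prefetcher-friendly order
rnd = make_chain(1 << 12, shuffled=True)   # BVH-like scattered order
print(walk(seq) == 0, walk(rnd) == 0)  # same work, very different locality
```

Timing the two walks over an array much larger than L3 is a classic way to measure effective memory latency.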

teh_Broseph
Oct 21, 2010

THE LAST METROID IS IN
CATTIVITY. THE GALAXY
IS AT PEACE...
Lipstick Apathy
X-post cause from Parts I was talking about it here too:

2500k@4.5 (with 8GB DDR3-1600) swap over to a 2600x (with 16GB DDR4-3000) complete! Replaced the power supply in the process, and gyah, forgot how long it really takes to actually swap out all that stuff and the wires and clean years of dust and funk in the process, whew. Using the cooler+paste that comes with the 2600x, and folks are right, it's kinda noisy, though I haven't crawled down and confirmed that's where the sound is coming from. Not worrying about a replacement cooler since I'm probably switching to a Zen 2 chip when they come out anyway, figure I'll wait till then.

It's pretty darn cool to have just thrown it together, turned on XMP-2, and bam, I'm all set without spending 2 days tweaking and testing overclock settings! Still on the agenda: trying to get the advertised 3000MHz and CL15 timing for the RAM - the profile put it at 2933 and CL16. And I haven't done stress testing and watching temps and volts and all that to verify everything's looking good either, outside of launching CPU-Z and seeing some CPU speed spikes up to ~4.3GHz.
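For what it's worth, the fallback profile is closer to the rated spec than the raw numbers suggest - first-word latency in nanoseconds works out as CAS cycles over the actual clock (a sketch using the numbers above):

```python
# CAS latency in ns: the clock runs at MT/s / 2, so latency = cl * 2000 / mts.
def cas_ns(mts, cl):
    return cl * 2000 / mts

booted = cas_ns(2933, 16)   # what the XMP profile actually trained to
rated = cas_ns(3000, 15)    # the kit's advertised 3000 CL15

print(f"{booted:.2f} ns vs {rated:.2f} ns")  # 10.91 ns vs 10.00 ns
```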

Couple quick modern benchmarks before and after, same settings and everything. Turned out about as expected - not much of a difference in raw framerate, but overall smoothed out, with better frame times and less hitching and stutter. Note: the TR bench is on DX11 because DX12 caused my last setup to completely take a dump in game, so while it's an apples-to-apples comparison there, it may bench better on the 2600x if I turned DX12 back on.

Shadow of the Tomb Raider:



Asscreed Odyssey:

Seamonster
Apr 30, 2007

IMMER SIEGREICH
Thanks for your post, Broseph. I'm in nearly the exact same boat with an OC'd 2500k and a 1080, wondering what a Ryzen upgrade looks like.

90s Solo Cup
Feb 22, 2011

To understand the cup
He must become the cup



What brand of RAM did you buy? I'd heard how finicky Ryzen was when it came to RAM sticks, so I bought 16GB of AMD-optimized DDR4-3200 G.Skill Trident Z RGB. Had absolutely no trouble getting those two sticks to 3200MHz with A-XMP.

I went for an all-core overclock on my 2700x instead of letting Precision Boost do its thing and wound up with a stable 4.23GHz @ 1.3625V. Before then I was getting brief single-core spikes of 4.5GHz @ 1.5V.


GutBomb
Jun 15, 2005

Dude?
I get better performance and thermals doing the XFR/PBO stuff in games than I did with manual overclocking on my 2700x. The performance in synthetic benchmarks is actually worse, but in game benchmarks and just general game performance it was better.

Also, 1.5V might look scary, but AMD engineers have posted on Reddit several times not to worry about it: it's never there for sustained periods, and those voltage spikes are in there by design. It's how XFR/PBO works.
