8-bit Miniboss
May 24, 2005

CORPO COPS CAME FOR MY :filez:
Huh, I guess this idiosyncrasy is limited to ASRock, or maybe just the X370 Taichi. I'm not sure. But I've had this 1700 on an X370 Taichi using P-state overclocking for over a year now, because there's a mindshare of people on an overclock.net thread who believe that standard frequency overclocking somehow doesn't allow for downclocking and undervolting at idle, so if you wanted that, your only option was to overclock with P-states. Some other guy in the thread recently thinks it's some weird bug, because they mention that no downclocking occurs when the frequency is even, but when it's odd it "works".

Decided to give this a try: turned off P-state overclocking, cleared CMOS, and started fresh. Set up frequency overclocking at the 3.9GHz I was getting before. Boot back into Windows: yep, no downclocking or undervolting. Boot back into BIOS, tune the frequency a step down to 3.875, boot back into Windows. Downclocking and undervolting works.

What?? How does that happen? :psyduck:
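
For anyone who wants to sanity-check the same behavior without staring at a monitoring tool: a minimal Python sketch, assuming the third-party psutil package, that samples reported core clocks at idle. If downclocking works, idle cores should sit well below the 3.9GHz overclock.

import time
import psutil  # third-party: pip install psutil

# Sample reported core clocks a few times while the system sits idle.
# Note: on some platforms psutil returns one aggregate entry rather than
# a true per-core list.
for _ in range(5):
    freqs = psutil.cpu_freq(percpu=True)
    print([f"{f.current:.0f} MHz" for f in freqs])
    time.sleep(2)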


SamDabbers
May 26, 2003



Yeah I think the ASRock UEFI is just buggy. The only thing I overclock in the UEFI on my X370 Taichi is RAM, and I set the P-states after booting. It's more predictable that way.
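
For the curious, tools that set P-states after boot work by rewriting the Zen P-state definition MSRs, whose bit layout is community-documented (ZenStates and friends) rather than official. A minimal read-only sketch under those assumptions, for Linux with the msr kernel module loaded and root privileges:

import struct

PSTATE0 = 0xC0010064  # first Zen P-state definition MSR (community-documented)

def read_msr(reg, cpu=0):
    # Needs root and `modprobe msr` on Linux.
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        f.seek(reg)
        return struct.unpack("<Q", f.read(8))[0]

val = read_msr(PSTATE0)
if val >> 63:                   # bit 63 = P-state enabled
    fid = val & 0xFF            # frequency ID
    did = (val >> 8) & 0x3F     # divisor ID
    vid = (val >> 14) & 0xFF    # voltage ID
    # Per the community docs: clock = 200 MHz * FID / DID,
    # voltage = 1.55 V - 0.00625 V * VID.
    print(f"P0: {200 * fid / did:.0f} MHz at {1.55 - 0.00625 * vid:.3f} V")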

JockstrapManthrust
Apr 30, 2013
Well, time to convert Singapore dollars to your local currency. Looks like Ryzen 3000s are starting to show up in the price lists in Sim Lim Square.

https://www.bizgram.com/pricelists/001%20Bizgram%20Daily%20DIY%20Pricelist.pdf

https://i.postimg.cc/XNKx1vDD/tgterterte.jpg <- price list fragment in case that PDF dies.

JockstrapManthrust fucked around with this message at 20:26 on Mar 2, 2019

NewFatMike
Jun 11, 2015

Via Tom's Hardware:

https://www.tomshardware.com/news/amd-ryzen-3000-specification-price,38731.html

Micro Center is selling 9700Ks for $418, :laffo: if the R5 3600X is essentially the same part for $260. If the R9 3850X is hitting 16C/32T up to 5GHz, I am very excited for what entry level Threadripper is going to be like.

NewFatMike fucked around with this message at 23:47 on Mar 2, 2019

OhFunny
Jun 26, 2013

EXTREMELY PISSED AT THE DNC
I can see a $50 price bump on the higher end and a $20-30 bump at the lower end.

sincx
Jul 13, 2012

furiously masturbating to anime titties
.

sincx fucked around with this message at 05:50 on Mar 23, 2021

90s Solo Cup
Feb 22, 2011

To understand the cup
He must become the cup



sincx posted:

The 3700X looks sweet.

:same:

It hits the price/performance sweet spot for me.

Cygni
Nov 12, 2005

raring to post

Those prices seem awfully low compared to t-ripp. Guess we will see soon.

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
The TDPs are officially worthless at this stage. 3600X -> 3700X is a jump of 50% more cores and 200 MHz, but somehow only takes 10W more power :crossarms:

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Cygni posted:

Those prices seem awfully low compared to t-ripp. Guess we will see soon.
The Threadrippers are rumored to come with appropriately adjusted core counts.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot

ConanTheLibrarian posted:

The TDPs are officially worthless at this stage. 3600X -> 3700X is a jump of 50% more cores and 200 MHz, but somehow only takes 10W more power :crossarms:

They're going to 7nm.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
It's garbage bullshit speculation, guys, and I can't believe I'm the one saying that.

Khorne
May 1, 2002

EmpyreanFlux posted:

It's garbage bullshit speculation, guys, and I can't believe I'm the one saying that.
I'm just hoping for a 9900K but cheaper.

To be honest though, I don't see why they can't do those prices. I know AMD has tended not to cut much of a discount in the past when they have a competitive product, but this time I could easily see them trying to pull out in front. They have the lead in HEDT and server processors by a lot right now, and those are high margin.

2020 is supposed to be Zen 3, which is another tick following this year's Zen 2 tick. If that's true it's going to be crazy, especially if they beat Intel's 10nm+ to market and stick to AM4.

Khorne fucked around with this message at 04:13 on Mar 3, 2019

k-uno
Jun 20, 2004
Are there many real-world workloads that can use 12-16 cores at 100%, and yet not be bottlenecked by just two channels of DDR4? If the next Ryzen is going to be socket compatible with current chips then it will still be limited to dual channel RAM. Up until this past year pretty much all high end chip designs (including EPYC and the 28 core xeons) topped out around 4 cores per memory channel, though AMD seems to be walking away from it with TR2 (32 cores/4 channels) and the next EPYC chips (which will be up to 64 cores on 8 channels). I'm asking this as a serious question; my experience with scientific computing is that a lot of really hard computing tasks end up being memory bandwidth limited, so 16 cores on 2 channels seems insane to me. But maybe this isn't the case for typical high end consumer tasks?
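
Rough peak-bandwidth arithmetic behind that concern, assuming dual-channel DDR4-3200 and the usual 8 bytes per transfer on a 64-bit channel:

def peak_gbs(channels, mt_s):
    # Each 64-bit DDR channel moves 8 bytes per transfer.
    return channels * mt_s * 8 / 1000

dual = peak_gbs(2, 3200)              # AM4 dual-channel DDR4-3200: 51.2 GB/s
for cores in (8, 12, 16):
    print(f"{cores} cores: {dual / cores:.1f} GB/s per core")
# The old ~4-cores-per-channel rule of thumb works out to ~6.4 GB/s per core;
# 16 cores on two channels halves that.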

Khorne
May 1, 2002

k-uno posted:

Are there many real-world workloads that can use 12-16 cores at 100%, and yet not be bottlenecked by just two channels of DDR4? If the next Ryzen is going to be socket compatible with current chips then it will still be limited to dual channel RAM. Up until this past year pretty much all high end chip designs (including EPYC and the 28 core xeons) topped out around 4 cores per memory channel, though AMD seems to be walking away from it with TR2 (32 cores/4 channels) and the next EPYC chips (which will be up to 64 cores on 8 channels). I'm asking this as a serious question; my experience with scientific computing is that a lot of really hard computing tasks end up being memory bandwidth limited, so 16 cores on 2 channels seems insane to me. But maybe this isn't the case for typical high end consumer tasks?
The biggest one will probably be the other processes you have running. The main benefit of dual core and quad core initially was that your worst-case fps wasn't getting hammered by background processes. With many engines scaling somewhat with multiple cores, browser windows dominating single cores at times, every app being Electron, and whatever else, the extra cores should have tangible performance gains in the real world. Benchmarks don't often represent how people actually use computers.

Games that support lots of cores probably won't choke.

I agree with your concern, though. I wish there were a quad channel am4. I wonder if am5 with DDR5 will be quad channel.

Khorne fucked around with this message at 04:15 on Mar 3, 2019

GRINDCORE MEGGIDO
Feb 28, 1985


Super handy if you want to game, stream, and surf the internet at the same time.

crazypenguin
Mar 9, 2005
nothing witty here, move along

k-uno posted:

my experience with scientific computing... But maybe this isn't the case for typical high end consumer tasks?

Correct. Scientific workloads are all optimized around wide SIMD operations on linear chunks of memory, because that's how you actually get maximum performance out of a CPU.

Consumer workloads are all following pointers and stalling the CPU waiting for memory to arrive (i.e. RAM latency sensitive, not bandwidth sensitive.)

Games are somewhere in the middle, but I wouldn't want to guess when a memory bandwidth bottleneck might appear, except that it's probably at least twice as many cores as you're used to, and very likely more.
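
A quick numpy sketch of that distinction, if you want to see it on your own hardware: both passes touch the same bytes, but the sequential scan runs near memory bandwidth while the random gather stalls on latency.

import time
import numpy as np

N = 1 << 24                        # ~128 MB of float64, far larger than any cache
data = np.ones(N)
order = np.random.permutation(N)   # stand-in for pointer-chasing access patterns

t0 = time.perf_counter(); data.sum();        t1 = time.perf_counter()
t2 = time.perf_counter(); data[order].sum(); t3 = time.perf_counter()

print(f"sequential scan: {t1 - t0:.3f}s (bandwidth-bound)")
print(f"random gather:   {t3 - t2:.3f}s (latency-bound, same bytes touched)")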

Khorne posted:

I wonder if am5 with DDR5 will be quad channel.

Probably not, but DDR5 is bringing twice the bandwidth anyway.

Fantastic Foreskin
Jan 6, 2013

A golden helix streaked skyward from the Helvault. A thunderous explosion shattered the silver monolith and Avacyn emerged, free from her prison at last.

Isn't Adored the YouTuber who decided to throw in 100% with the r/amd crowd, or am I getting them mixed up with someone else?

Fantastic Foreskin fucked around with this message at 16:06 on Mar 3, 2019

FlapYoJacks
Feb 12, 2009
I am just hoping beyond hope that they come out with a 64C Threadripper.

NewFatMike
Jun 11, 2015

k-uno posted:

Are there many real-world workloads that can use 12-16 cores at 100%, and yet not be bottlenecked by just two channels of DDR4? If the next Ryzen is going to be socket compatible with current chips then it will still be limited to dual channel RAM. Up until this past year pretty much all high end chip designs (including EPYC and the 28 core xeons) topped out around 4 cores per memory channel, though AMD seems to be walking away from it with TR2 (32 cores/4 channels) and the next EPYC chips (which will be up to 64 cores on 8 channels). I'm asking this as a serious question; my experience with scientific computing is that a lot of really hard computing tasks end up being memory bandwidth limited, so 16 cores on 2 channels seems insane to me. But maybe this isn't the case for typical high end consumer tasks?

Rendering products is something that I often have to do multiple times a day, whether it's adjustments to a product's makeup or just colors. Looking at the 2990WX, which has wonky memory access, it still steamrolls more memory optimized configurations.

In my use case, a Threadripper 3000 series will pay for itself with a quickness for work and just let me be lazy with closing out background stuff when I'm playing.

PC LOAD LETTER
May 23, 2005
WTF?!

k-uno posted:

I'm asking this as a serious question; my experience with scientific computing is that a lot of really hard computing tasks end up being memory bandwidth limited, so 16 cores on 2 channels seems insane to me. But maybe this isn't the case for typical high end consumer tasks?
Supposedly AMD is going to recommend and officially support DDR4-3200 memory, at a minimum, for the very high core count AM4 chips. That plus the greatly increased cache is supposed to do surprisingly well at mitigating (but not eliminating) memory bottlenecks. At least, that's the rumor anyway.

Everyone in the server market who has gotten to play with 64C/128T Zen 2 seems completely unworried, and even fairly impressed so far with high core count Rome, which was also predicted to be terrible due to memory bandwidth limitations, so there is good reason to believe it's not BS.

If the desktop chips can reliably support much higher clocked RAM, like the DDR4-4000 stuff, then the memory bandwidth bottleneck would be pretty much eliminated for most things, I'd think. Right now the practical max memory speed you can get away with most of the time on Zen+, without major attempts at overclocking the RAM, is DDR4-3400 to 3600.
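
For scale, the peak dual-channel numbers at the speeds mentioned above; real workloads never hit these, but the ratios hold:

# Dual-channel peak bandwidth: 2 channels x MT/s x 8 bytes per transfer.
for mt_s in (3200, 3400, 3600, 4000):
    print(f"DDR4-{mt_s}: {2 * mt_s * 8 / 1000:.1f} GB/s")
# 51.2 -> 64.0 GB/s is only a 25% bump from DDR4-3200 to DDR4-4000, which is
# why the bigger cache is doing so much of the work in those rumors.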

PC LOAD LETTER fucked around with this message at 08:27 on Mar 3, 2019

Cygni
Nov 12, 2005

raring to post

EmpyreanFlux posted:

It's garbage bullshit speculation, guys, and I can't believe I'm the one saying that.

yeah i mean i feel like if release was really that imminent, we would have seen leaks galore from the board partners. god knows they cant help themselves.

PC LOAD LETTER
May 23, 2005
WTF?!

Khorne posted:

I wonder if am5 with DDR5 will be quad channel.
It would be wooonderful for iGPU performance, and it would probably just about eliminate the need for some sort of huge on-package/on-die cache, but it probably won't happen on the common, super cheap desktops that APUs mostly get put into. I'd love it if they did, of course, but yeah, it probably won't happen.

The issue blocking them from doing that would essentially be cost. The CPU package would need lots more pins (hundreds more) and be bigger. The motherboard would probably need more layers too.

Prices would have to go up to pay for that stuff. How much exactly? I don't really know. But look at how much Intel socket 2066* mobos cost to get a (probably very) rough idea of what you could expect to pay retail for a mobo like that. Even the "cheap" ones go for north of $150 new, going by Newegg prices.

*Yeah, it's an Intel socket, but realistically anything AMD would end up shipping to support quad-channel memory would be fairly similar in terms of pin or pad count, size, cost, etc.

Cygni posted:

yeah i mean i feel like if release was really that imminent, we would have seen leaks galore from the board partners. god knows they cant help themselves.
If a June release for Zen 2 is correct, that's only 3 months away though.

And prices that turned out to be reasonably close to accurate were leaking for Zen back in late 2016 (Nov or Dec), which is close to that 3 month time frame too.

Personally I was expecting high(er) prices for Zen 2 as well, particularly for the 12 and 16 core parts. But everything from the rumor mill keeps suggesting that AMD is going to price Zen 2 aggressively to focus on sales volume, rather than dramatically increasing ASPs like I'd thought they would.

PC LOAD LETTER fucked around with this message at 08:36 on Mar 3, 2019

SwissArmyDruid
Feb 14, 2014

by sebmojo
I would be loving floored if AMD were to ever go above 16 cores on a mainstream desktop socket. Eight-core CCXes plus a separate IMC ameliorate so many of the problems associated with older Zen designs that I don't think quad-channel memory actually provides anything DDR5 doesn't on its own.

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
DDR5 is likely a 2020 thing at this stage, right?

Kazinsal
Dec 13, 2011


ConanTheLibrarian posted:

DDR5 is likely a 2020 thing at this stage, right?

The first functional DDR4 module was produced in 2011 and processors using DDR4 hit the market in 2015. The first functional DDR5 module was produced in late 2017.

It's going to be a while.

SwissArmyDruid
Feb 14, 2014

by sebmojo
Micron started shipping validation DDR5 a little under a year ago. SK Hynix and Samsung have promised DDR5 products before EOY.

2020-2021 for widespread adoption isn't unreasonable.

SwissArmyDruid fucked around with this message at 12:43 on Mar 3, 2019

PC LOAD LETTER
May 23, 2005
WTF?!
2021 for widespread adoption is probably more reasonable, since at launch new memory standards tend to be very expensive and not all that much faster than the higher clocked versions of the previous standard they're replacing.

AMD will probably do a new socket around that time frame too to support DDR5 and has said they will end support for AM4 sometime in 2020.

edit: a trip down memory lane for when DDR4 was first coming on market years ago

tl;dr: don't be in a rush to buy a new platform just to get the latest memory standard, since it's probably not worth it

edit 2: \/\/\/\/\/\/\/\/\/ Back when DDR4, DDR3, DDR2, and DDR were first introduced, there were always claims of incredible performance increases to be had, and at launch, at least, they never panned out. The newer memory standards were always able to scale up and eventually become worthwhile, especially once costs dropped from launch prices, but launch hype is never real. I don't doubt you're going to see some impressive synthetic bench numbers, but who really cares about synth benches? \/\/\/\/\/\/\/

PC LOAD LETTER fucked around with this message at 13:15 on Mar 3, 2019

SwissArmyDruid
Feb 14, 2014

by sebmojo

Anandtech posted:

As noted back in May, the primary feature of DDR5 SDRAM is capacity of chips, not just a higher performance and a lower power consumption. DDR5 is expected to bring in I/O speeds of 4266 to 6400 MT/s, with a supply voltage drop to 1.1 V and an allowable fluctuation range of 3% (i.e., at ±0.033V). It is also expected to use two independent 32/40-bit channels per module (without/or with ECC). Furthermore, DDR5 will have an improved command bus efficiency (because the channels will have their own 7-bit Address (Add)/Command (Cmd) buses), better refresh schemes, and an increased bank group for additional performance. In fact, Cadence goes as far as saying that improved functionality of DDR5 will enable a 36% higher real-world bandwidth when compared to DDR4 even at 3200 MT/s (this claim will have to be put to a test) and once 4800 MT/s speed kicks in, the actual bandwidth will be 87% higher when compared to DDR4-3200. In the meantime, one of the most important features of DDR5 will be monolithic chip density beyond 16 Gb.

https://www.anandtech.com/show/13490/cadence-and-micron-ddr5-update
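
Sanity-checking Cadence's claim in that quote: the raw transfer-rate jump from DDR4-3200 to DDR5-4800 is only 50%, so an 87% real-world gain implies the protocol changes (independent sub-channels, better refresh, bigger bank groups) are worth roughly another 25% on top.

raw = 4800 / 3200            # 1.50x from transfer rate alone
claimed = 1.87               # Cadence's "87% higher" real-world figure
print(f"implied efficiency gain: {claimed / raw:.2f}x")   # ~1.25x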

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
It's still not going to solve the bandwidth issue iGPUs have, which can't be resolved without an exotic or expensive solution.

PC LOAD LETTER
May 23, 2005
WTF?!

EmpyreanFlux posted:

It's still not going to solve the bandwidth issue iGPUs have, which can't be resolved without an exotic or expensive solution.
It might get closer than you'd think. At least so long as you don't go expecting mid or high end dGPU performance out of an iGPU.

Dual channel DDR5 should be able to achieve around 100GB/s. For reference, an Nvidia 1050 with 128-bit GDDR5 has about 112GB/s of bandwidth.

Quad channel DDR5 would give around 200GB/s, which would allow for mid range dGPU performance. High end dGPU performance would require lots of on-package or on-die high bandwidth memory no matter what. Heat and power, as well as cost, would still be major issues though.
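
Those figures line up with the top of the DDR5 range quoted earlier (6400 MT/s), assuming the usual 8 bytes per 64-bit channel per transfer:

# channels x MT/s x 8 bytes per 64-bit channel
for channels, label in ((2, "dual"), (4, "quad")):
    print(f"{label}-channel DDR5-6400: {channels * 6400 * 8 / 1000:.1f} GB/s")
# ~102 GB/s dual and ~205 GB/s quad, vs 112 GB/s on a 1050's 128-bit GDDR5.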

SwissArmyDruid
Feb 14, 2014

by sebmojo
Well, here's the thing. GDDR5 is built for bandwidth: higher latencies, but larger bandwidth, to push entire datasets for a bunch of CUs to chew through. But if your bandwidth isn't large enough to push the major datasets the way you do with discrete graphics, while latency is low enough that streaming data off the memory can be close to the speed of moving large chunks the way they do now, then the bottleneck becomes the low shader engine count, sized that way because more would just result in CUs not being fed fast enough and sitting idle. Enter a model where two, or three, or four smaller datasets are pushed to the iGPU to preserve overall system usability (because remember, you're sharing the RAM bus with the CPU too), with their data being pushed to the framebuffer as each finishes, and then only pushing from the framebuffer to the display when a complete frame is ready.

Oh wait, that sounds a lot like why they needed Freesync, doesn't it?

Point is, the increased bandwidth from DDR5 obviates a large chunk of the front end of that equation, which means that even if nothing else changes, and they can shuffle around the thermal budget accordingly, AMD can put a number of CUs onto an iGPU that is better suited to handling graphics in the conventional manner. It becomes a cost-saving measure on the front end of development. And Freesync still improves the experience overall.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

PC LOAD LETTER posted:

It might get closer than you'd think. At least so long as you don't go expecting mid or high end dGPU performance out of an iGPU.

Dual channel DDR5 should be able to achieve around 100GB/s. For reference, an Nvidia 1050 with 128-bit GDDR5 has about 112GB/s of bandwidth.

Quad channel DDR5 would give around 200GB/s, which would allow for mid range dGPU performance. High end dGPU performance would require lots of on-package or on-die high bandwidth memory no matter what. Heat and power, as well as cost, would still be major issues though.

Is 1050 performance in 2021 really moving the bar though? I mean, it'd basically be an iGPU which could reasonably run everything up to 2015 pretty fluently, and that's an impressive feat alone, but that's still bottom tier performance. Maybe that's all you need out of them, to compete with the very bottom of dGPUs, but being able to compete with low-to-mid tier dGPUs while still beating them on price is essentially the goal, I think. To get that you need an exotic or expensive solution IMHO: eDRAM, eSRAM, HBM, or quad channel. A workable design already exists to meet this performance criteria, the custom unit from the Subor Z+, a 400mm² die that almost meets a 1060 3GB (it's choking on the 2GB of RAM provided to it, though, so it may actually perform around the RX 570/1060 3GB range rather than just under it). For 7nm, I think the die would be ~220mm², which seems reasonable.

Still, I think it's between HBM2 and quad channel, and the question of which would be cheaper depends entirely on whether HBM pricing can ever come down. It might actually be practical to make quad channel a feature of X series boards while B series boards remain dual channel and, for practical purposes, take over what the X series used to offer. Like, as an example, X670 would offer PCIe 4 support and quad channel, but B670 would only offer PCIe 3 and dual channel. B650 would meet the bottom criteria for overclocking and features, and A620 boards could exist as extremely low end, featureless boards for dirt cheap.
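
The ~220mm² guess is about what naive shrink arithmetic gives, assuming the Subor part is on 14nm, 7nm roughly doubles logic density, and a hypothetical three-quarters of the die is logic (the IO/analog remainder barely shrinks); the split is for illustration only:

die = 400                       # mm^2, the Subor Z+ SoC as described above
logic_frac = 0.75               # hypothetical logic/IO split, illustration only
logic, io = die * logic_frac, die * (1 - logic_frac)
# Logic roughly halves at 7nm; IO and analog shrink only modestly.
print(f"~{logic / 2 + io * 0.8:.0f} mm^2 at 7nm")   # ~230 mm^2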

wargames
Mar 16, 2008

official yospos cat censor

EmpyreanFlux posted:

Is 1050 performance in 2021 really moving the bar though?

For games, probably no, but think of it as a co-processor for streaming, rendering video, and that sort of thing. That's what a lot of people use their iGPUs for.

SwissArmyDruid
Feb 14, 2014

by sebmojo

EmpyreanFlux posted:

Is 1050 performance in 2021 really moving the bar though? I mean, it'd basically be an iGPU which could reasonably run everything up to 2015 pretty fluently, and that's an impressive feat alone, but that's still bottom tier performance.

As you say, I'm still waiting on an iGPU that can run 2015 games. I played FFXIV and Warframe on a 750 Ti as the last gasp before I hand-me-downed that computer to my stepbrothers. 1050 performance is the loving LEAST they can shoot for, since iGPUs still can't even hit that 750 Ti mark yet.

Seamonster
Apr 30, 2007

IMMER SIEGREICH
I've pretty much given up on iGPUs now - that severe bandwidth limitation and all. HBM or some kind of insanely fast cache may be a cure but that's $$$ and the whole point of iGPUs is saving those $$.

Still waiting for 1060/580 level performance out of a slot powered only card though which fortunately seems much more likely with 7nm coming on.

k-uno
Jun 20, 2004

NewFatMike posted:

Rendering products is something that I often have to do multiple times a day, whether it's adjustments to a product's makeup or just colors. Looking at the 2990WX, which has wonky memory access, it still steamrolls more memory optimized configurations.

In my use case, a Threadripper 3000 series will pay for itself with a quickness for work and just let me be lazy with closing out background stuff when I'm playing.

Fair enough! I guess 12/16 core Ryzen would make sense, in that case, probably followed by a 12C/2 channel i9 from Intel a few months later.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!

Seamonster posted:

I've pretty much given up on iGPUs now - that severe bandwidth limitation and all. HBM or some kind of insanely fast cache may be a cure but that's $$$ and the whole point of iGPUs is saving those $$.

Still waiting for 1060/580 level performance out of a slot powered only card though which fortunately seems much more likely with 7nm coming on.

Yeah I've always wanted a powerful console-like APU but I think that would only happen if GDDR or HBM were built in and that seems unlikely :(.

Arivia
Mar 17, 2011

MaxxBot posted:

Yeah I've always wanted a powerful console-like APU but I think that would only happen if GDDR or HBM were built in and that seems unlikely :(.

Isn't that what Intel was doing with those CPU+Vega APUs with HBM on die?


PC LOAD LETTER
May 23, 2005
WTF?!

EmpyreanFlux posted:

Is 1050 performance in 2021 really moving the bar though?
Yeah, it would be. Going by the rest of your post, though, probably not enough to make you happy.

Which is fair. I think lots of people would be well served by an affordable APU that performed like an RX 570 or 580 and had a 4C/8T Zen 2 CPU riding along with it on die.

I just don't see that happening anytime soon.

Neither Intel nor AMD seems interested in making quad channel DDR4/5 the standard across all their product lines, and HBM is still too expensive. I vaguely remember HBM3 was supposed to be focused on getting costs down, but I haven't heard anything about it in a long time, so I don't know what the status is on it actually appearing in something a consumer could buy for a good price.

Doing a quick Google, it seems HBM3 might be a thing in 2020. But who really knows? We'll just have to wait and see.
