Cygni
Nov 12, 2005

raring to post

The 35W versions of the 2200 and 2400 just launched, branded “GE”, and will be available in AM4 versions. But apparently they are mostly for systems integrators, so it might be a while before we see BIOS support and retail versions in stock.

Core clocks are 300-400 MHz lower, but the GPUs are untouched.


Stanley Pain
Jun 16, 2001

by Fluffdaddy

Measly Twerp posted:

and it's highly likely that KVM would have the exact same problem.

I wouldn't be so sure about that. KVM is actively being developed, and as others have mentioned, its developers tend to find, fix, or work around a number of oddities within running VMs.

Kazinsal
Dec 13, 2011


VirtualBox gets fixes on Windows.

Not so much other platforms.

SamDabbers
May 26, 2003



Combat Pretzel posted:

Regarding B-die memory, what are the chances I will be able to push it a lot, regardless of what the sticks say (e.g. pushing a DDR4-2400 stick to 3200)? Mostly interested in this to get DDR4-3200 out of ECC sticks.

I got a pair of 16GB B-die EUDIMMs rated for DDR4-2400 CL17 at 1.2V running at DDR4-2933 CL16 at stock voltage on my 1700 without any effort. Pushing them to 3200 will probably require more voltage and/or relaxed timings, but I haven't spent the time to fiddle.

tl;dr: B-die is the best for Ryzen

SwissArmyDruid
Feb 14, 2014

by sebmojo

Stanley Pain posted:

I wouldn't be so sure about that. KVM is actively being developed, and as others have mentioned, its developers tend to find, fix, or work around a number of oddities within running VMs.

....have we forgotten about the KVM NPT bug with unimplemented stubs that persisted for over a decade and was ONLY just squashed last year?

Stanley Pain
Jun 16, 2001

by Fluffdaddy

SwissArmyDruid posted:

....have we forgotten about the KVM NPT bug with unimplemented stubs that persisted for over a decade and was ONLY just squashed last year?

That was a pretty hilarious happening in general ;).

Anarchist Mae
Nov 5, 2009

by Reene
Lipstick Apathy
So an update on my virtual machine issues. The newer BIOS versions for my motherboard did break something, specifically when overclocking. Here's the thread over on the Level1Techs forum.

catsay posted:

Interestingly your ASRock board also seems to have the clocksource instability/calibration problem when overclocked via the BIOS.

This turned out to be correct and running at CPU stock clocks fixes it. Of course the older BIOS versions did not have this issue so... gently caress ASRock?

SamDabbers
May 26, 2003



Measly Twerp posted:

So an update on my virtual machine issues. The newer BIOS versions for my motherboard did break something, specifically when overclocking. Here's the thread over on the Level1Techs forum.


This turned out to be correct and running at CPU stock clocks fixes it. Of course the older BIOS versions did not have this issue so... gently caress ASRock?

The latest BIOS for my X370 Taichi (4.60, PinnaclePI 1.0.0.1) exhibits the same problem with unstable TSC when overclocking. In my case, any OC via the BIOS also only seems to affect core 0, and all the other ones top out at stock frequencies under load. I reverted to 4.40 and everything works properly again. The newer AGESA introduced IBPB instructions for Spectre v2 mitigation, so hopefully the next release fixes things so we can have our overclocks and IBPB too.

One thing I haven't yet tried is overclocking once the OS has loaded, a la Ryzen Master. I'm running Linux though, so it takes a little more fiddling. There's a ZenStates.py script out there that can adjust the frequency and voltage MSRs, but it needs the msr kernel module, which isn't built into the stock Fedora kernels, and I've been too lazy to take the time to fiddle.
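For the curious, here's roughly what ZenStates.py is poking at: a Python sketch decoding a Zen P-state MSR value into its frequency/voltage fields. The bit layout and formulas below are taken from how ZenStates.py is commonly described, so treat the exact bit positions as an assumption; actually reading MSR 0xC0010064 on real hardware needs root and the msr module (/dev/cpu/*/msr).

```python
# Decode the fields of a Zen P-state MSR value (MSR 0xC0010064 and up),
# using the bit layout ZenStates.py is described as assuming:
#   bits  0-7  : CpuFid (frequency ID)
#   bits  8-13 : CpuDfsId (divisor ID)
#   bits 14-21 : CpuVid (voltage ID)
# Core frequency = FID / DID * 200 MHz; voltage = 1.55V - VID * 6.25mV.

def decode_pstate(msr_value: int) -> dict:
    fid = msr_value & 0xFF
    did = (msr_value >> 8) & 0x3F
    vid = (msr_value >> 14) & 0xFF
    freq_mhz = fid / did * 200 if did else 0.0
    voltage = 1.55 - vid * 0.00625
    return {"fid": fid, "did": did, "vid": vid,
            "freq_mhz": freq_mhz, "voltage": voltage}

# Hypothetical example: FID=0x88 (136), DID=8 -> 136 / 8 * 200 = 3400 MHz,
# VID=0x20 (32) -> 1.55 - 32 * 0.00625 = 1.35 V
example = (0x20 << 14) | (0x8 << 8) | 0x88
print(decode_pstate(example))
```

Writing a modified value back through /dev/cpu/0/msr is how the script actually applies an overclock, which is why it needs the kernel module loaded.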

SwissArmyDruid
Feb 14, 2014

by sebmojo

Klyith posted:

consoles have a huge lead-time between nailing the specs and putting out the box, because they have to give devs time to make games for the launch. so they're definitely not using navi. and they're gonna be using AMD graphics because that's the only game in town -- the conditions that drove both consoles to use x86 & AMD have not materially changed. so the PS5 at least (which apparently is already in super-early dev kit land) will probably use ryzen & vega derived architecture.

consoles don't care that vega is a trash fire, they're not really competing against PCs.

however, here's some reasons vega might not be quite as bad a trash fire as you think:
1. it was priced for its performance competition in the crypto market, not the video games market. AMD has sold big GPUs for minimal margin before, so Vega probably could have been priced cheaper. they didn't because crypto was distorting everything.
2. therefore the only people who bought them were crypto-miners (idiots) and AMD super-fans (also idiots), with the vast majority of purchases being crypto idiots. Vega doesn't even show up on steam hardware charts.
3. given that, zero work is being done to optimize games for vega. Vega really did have arch improvements for GCN, but I doubt they're really getting any attention.
4. GCN is old and creaky, but it's gotten a hell of a lot of lifespan extension already and a lot of targeted optimization based on the consoles. If the next consoles use Vega, that stuff that's being ignored might actually get used!

I've been thinking about this for a bit, and I'm pretty sure you're also in the GPU thread, so I will remind everyone else: GCN is hitting the limits of its efficiency. Die size, power, memory bandwidth, thermals... everything. It sucks down so much power that they had to use HBM instead of more stacks of GDDR5 to hit the bandwidth needed to field Vega 56/64. And thermals... nobody likes a hot or noisy console.

Now, they *might* be able to rework it to use GDDR5X. That's up to the newly re-headed GPU department, but at this point they were probably already building around GDDR6, which is supposed to land this year. With GDDR6 as its base, I would rather they just turn the burners on and go full steam through getting Navi out, and on towards whatever the hell comes after it; the sooner, the better. (And for the love of all that is unholy, please let it not be another loving GCN derivative. At the very least, so that they can start with a completely new driver stack that's not the shambling monstrosity that the current GCN drivers are.)

SwissArmyDruid fucked around with this message at 18:45 on Apr 24, 2018

GRINDCORE MEGGIDO
Feb 28, 1985


Ohhhh it's so gonna be a GCN derivative, isn't it.

Cygni
Nov 12, 2005

raring to post

Navi is a GCN derivative.

Truga
May 4, 2014
Lipstick Apathy
there isn't going to be a brand new gpu arch from nvidia or amd ever again, stop dreaming lol

SwissArmyDruid
Feb 14, 2014

by sebmojo

Cygni posted:

Navi is a GCN derivative.

I worded that less clearly than I should have. Yes, it's a GCN derivative, but what's beyond it is not. The sooner they get Navi out, the sooner they can get to whatever the hell it is that comes after.

sauer kraut
Oct 2, 2004
I see the goalposts are already being shifted to post-Navi, very nice.

Arzachel
May 12, 2012

SwissArmyDruid posted:

I worded that less clearly than I should have. Yes, it's a GCN derivative, but what's beyond it is not. The sooner they get Navi out, the sooner they can get to whatever the hell it is that comes after.

I mean, that's when the Ryzen money should start bringing results so :same: but I don't think GCN was/is anywhere near their biggest problem.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
I don't think GCN is as much the issue as it's made out to be, as the engineers have been very forthright that scaling GCN wider, as is needed, costs money and is not an intrinsic issue of the uarch. Further, GloFo has managed to eke out even more performance from 14nm, substantially so, when clock for clock a Ryzen 2700X pulls a third less wattage than a Ryzen 1800X.

On the Ryzen front though, Anand redid their testing https://www.anandtech.com/show/12678/a-timely-discovery-examining-amd-2nd-gen-ryzen-results

The issue was enabling HPET on Intel processors. Ryzen has something like a 1-3% variance when enabled, Intel can have up to 30% variance in games when enabled. What the gently caress Intel.

Also, people have been pushing R7 2700Xs to about 4.45-4.5GHz with a very slight bump to BCLK (can't go above 105 BCLK or the system loses its poo poo). So it seems Pinnacle Ridge isn't limited by the process but maybe by design? It's too bad AMD couldn't have found a way to make the BCLK async; multiplier OC is still better, but an async BCLK might have enabled even higher clocks, maybe 4.6 to 4.7GHz, or a really cold 4.4GHz.

buglord
Jul 31, 2010

Cheating at a raffle? I sentence you to 1 year in jail! No! Two years! Three! Four! Five years! Ah! Ah! Ah! Ah!

Buglord
I tried reading the article but most of it went over my head.

FaustianQ posted:

The issue was enabling HPET on Intel processors. Ryzen has something like a 1-3% variance when enabled, Intel can have up to 30% variance in games when enabled. What the gently caress Intel.
What's the layman translation for this? I noticed a few people mentioned the 30% variance thing in the comments too.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

buglord posted:

I tried reading the article but most of it went over my head.

What's the layman translation for this? I noticed a few people mentioned the 30% variance thing in the comments too.

Basically, there are different levels of accuracy for the onboard clock. Normally you don't get the High Precision Event Timer (HPET) unless you ask for it, because using it imposes a performance overhead. There is an option in many BIOSes to force all timing calls to use HPET. Anandtech decided to enable this option on all their boards thinking it would give them better results, and it turns out that it imposes a pretty significant performance hit on Intel while having little effect on Ryzen.

(also, if you've ever used Ryzen Master then it's forced by default and you need registry hacks to disable it)

That explains the difference between Ryzen 2000 and Coffee Lake, but the article doesn't explain why they show Ryzen 2000 as being 30% faster than Ryzen 1000, which is still a massive outlier vs other reviewers.

(also, it may imply that some of the results people are getting for BCLK overclocks may not be entirely accurate, because one of the situations Anandtech called out where standard timing can have some drift is when you are using non-standard BCLK)
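To make the overhead concrete, here's a tiny sketch you can run anywhere: it times a batch of back-to-back clock reads. Forcing a slower timer source (HPET is an MMIO read rather than a cheap TSC-based instruction) inflates exactly this per-call cost, which is why clock-happy game engines and benchmarks lose real performance. The numbers will vary wildly by machine; only the shape of the measurement matters.

```python
# Rough microbenchmark of clock-read overhead: time how long a batch of
# back-to-back clock reads takes, then report the average cost per read.
import time

def clock_read_cost_ns(samples: int = 100_000) -> float:
    start = time.perf_counter_ns()
    for _ in range(samples):
        time.perf_counter_ns()  # one clock read per iteration
    end = time.perf_counter_ns()
    return (end - start) / samples  # average nanoseconds per read

print(f"~{clock_read_cost_ns():.0f} ns per clock read")
```

A game that polls the clock thousands of times per frame multiplies whatever this number is, so a 10x slower timer source shows up directly in frame times.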

Paul MaudDib fucked around with this message at 18:42 on Apr 25, 2018

TheJeffers
Jan 31, 2007

FaustianQ posted:

I don't think GCN is as much the issue as it's made out to be, as the engineers have been very forthright that scaling GCN wider, as is needed, costs money and is not an intrinsic issue of the uarch. Further, GloFo has managed to eke out even more performance from 14nm, substantially so, when clock for clock a Ryzen 2700X pulls a third less wattage than a Ryzen 1800X.

On the Ryzen front though, Anand redid their testing https://www.anandtech.com/show/12678/a-timely-discovery-examining-amd-2nd-gen-ryzen-results

The issue was enabling HPET on Intel processors. Ryzen has something like a 1-3% variance when enabled, Intel can have up to 30% variance in games when enabled. What the gently caress Intel.

Also, people have been pushing R7 2700Xs to about 4.45-4.5GHz with a very slight bump to BCLK (can't go above 105 BCLK or the system loses its poo poo). So it seems Pinnacle Ridge isn't limited by the process but maybe by design? It's too bad AMD couldn't have found a way to make the BCLK async; multiplier OC is still better, but an async BCLK might have enabled even higher clocks, maybe 4.6 to 4.7GHz, or a really cold 4.4GHz.

It's not a matter of HPET being enabled or disabled: they were forcing the operating system to use it as the sole clock reference for timer-dependent values. Since referencing HPET requires reading from a specific memory location, it incurs frequent I/O calls that apparently reduce performance on Intel platforms because of the Spectre/Meltdown mitigations. In a default "HPET enabled" state, the OS will choose the timer it thinks is best for the operation it's being asked to perform; HPET is just one of the sources the operating system can potentially use for that information.

Basically, an assumption rooted in extreme overclocking that may have been OK in the past (forcing HPET as the sole timer) is no longer OK because of Spectre and Meltdown.

TheJeffers fucked around with this message at 18:42 on Apr 25, 2018

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
So it's merely an artifact of Spectre/Meltdown? Huh, as Paul points out that doesn't explain why Pinnacle is so much faster than Summit.

TheJeffers
Jan 31, 2007

I mean, there could always be more things wrong with their testing platform and approach, this was just the most prominent one because not even AMD was putting its chips universally ahead in 1080p gaming.

LRADIKAL
Jun 10, 2001

Fun Shoe
So, maybe HPET is forced on in my system by MSI Afterburner and it is affecting my CPU performance? I'll have to look it up and check. If it is forced on, what deleterious effects could restoring the default setting have on my system? Phone posting, I'll have to investigate later.

Arzachel
May 12, 2012

FaustianQ posted:

Also, people have been pushing R7 2700Xs to about 4.45-4.5GHz with a very slight bump to BCLK (can't go above 105 BCLK or the system loses its poo poo). So it seems Pinnacle Ridge isn't limited by the process but maybe by design? It's too bad AMD couldn't have found a way to make the BCLK async; multiplier OC is still better, but an async BCLK might have enabled even higher clocks, maybe 4.6 to 4.7GHz, or a really cold 4.4GHz.

Not sure if it's limited to X470 but boards with external clock gen should have an async mode.

The Stilt posted:

Pinnacle Ridge CPUs also support multiple reference clock inputs. Motherboards which support the feature will allow "Synchronous" (default) and "Asynchronous" operation. In synchronous-mode the CPU has a single reference clock input, just like Summit Ridge did. In this configuration increasing the BCLK frequency will increase CPU, MEMCLK and PCI-E frequencies.

In asynchronous-mode the CPU cores will have their own reference clock input. MEMCLK, FCLK and PCI-E input will always remain at 100.0MHz, while the CPU input becomes separately adjustable. This allows even finer grain CPU frequency control, than the already extremely low granularity "Fine Grain PStates" (with 25MHz intervals) do.
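The Stilt's sync/async distinction comes down to simple arithmetic, sketched below with hypothetical numbers (the 43x multiplier and DDR4-2933 memory ratio are made up for illustration): in synchronous mode one reference clock feeds everything, so raising BCLK drags MEMCLK and PCIe along with the cores, while in asynchronous mode only the core clock scales and the rest stay at 100 MHz.

```python
# Sketch of synchronous vs asynchronous BCLK on Pinnacle Ridge.
# In sync mode, MEMCLK/FCLK/PCIe share the raised reference clock;
# in async mode they stay pinned at 100 MHz while the cores scale.

def clocks(bclk_mhz: float, multiplier: float, mem_ratio: float,
           async_mode: bool) -> dict:
    core = bclk_mhz * multiplier
    ref = 100.0 if async_mode else bclk_mhz  # MEMCLK/FCLK/PCIe reference
    return {"core_mhz": core,
            "memclk_mhz": ref * mem_ratio,
            "pcie_ref_mhz": ref}

# Hypothetical 2700X-ish setup: 105 BCLK, 43x multiplier, 14.66 memory ratio
print(clocks(105.0, 43.0, 14.66, async_mode=False))  # everything scales by 5%
print(clocks(105.0, 43.0, 14.66, async_mode=True))   # only the cores scale
```

This is why sync-mode BCLK overclocking tops out so quickly: the PCIe reference goes out of spec long before the cores run out of headroom.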

GRINDCORE MEGGIDO
Feb 28, 1985


Grr I want to see Threadripper+ and boards with the new chipset.

TheJeffers
Jan 31, 2007

For those who are curious (as I was) whether their system is being forced to use the HPET as its timer source: on Windows, running the following command will tell you (use cmd as admin) !!! but keep reading, as this will also change the setting !!!:

code:
bcdedit /deletevalue useplatformclock
If that command returns an error message, your system wasn't forcing HPET. If you get a confirmation message, it was (and you just unforced it). To put it back the way it was:

code:
bcdedit /set useplatformclock true
Either way, you need to reboot for any settings changes to take effect.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Arzachel posted:

Not sure if it's limited to X470 but boards with external clock gen should have an async mode.

Ooohh, holy poo poo, Pinnacle might be worth it then. I was going to wait for Zen 2, but if I can get 110 BCLK async on an X470, a 2700X sounds worth it.

ufarn
May 30, 2009
Oh yeah, what was the embargoed reason why 2800X wasn’t a thing by the way?

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
AMD did a thing, guys: the Q1 2018 financial report. Oh yeah, they actually posted positive earnings per share, GAAP.
http://ir.amd.com/static-files/20fd2f99-1dc5-4ccd-93ce-30927e144920

ufarn posted:

Oh yeah, what was the embargoed reason why 2800X wasn’t a thing by the way?

Apparently AMD didn't see a point in it?

SwissArmyDruid
Feb 14, 2014

by sebmojo

quote:

- Revenue increased 40 percent year-over-year -

good on them for not burying the lede

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Buy a TR+ if you want the 2800X cores.

Risky Bisquick
Jan 18, 2008

PLEASE LET ME WRITE YOUR VICTIM IMPACT STATEMENT SO I CAN FURTHER DEMONSTRATE THE CALAMITY THAT IS OUR JUSTICE SYSTEM.



Buglord
7nm EPYC sampling this year :wow:

Cygni
Nov 12, 2005

raring to post

ufarn posted:

Oh yeah, what was the embargoed reason why 2800X wasn’t a thing by the way?

Marketing. They are basically saving the 2800 name for when Intel responds in the fall, was what I took away from the quotes.

ufarn
May 30, 2009

Cygni posted:

Marketing. They are basically saving the 2800 name for when Intel responds in the fall, was what I took away from the quotes.
I almost got the impression that it wouldn't even surface then. Maybe I'll just ... wait until June to order. :negative:

At least all RAM price issues will be sorted out by then. :unsmith:

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Risky Bisquick posted:

7nm EPYC sampling this year :wow:

Wait, seriously? That's... hella soon, like wow. Was there a specific date for when they'd begin? Sampling for Zen gen 1 was around September-October IIRC, so about 6 months from sampling to shelves. That puts Ryzen 3000 in like a March-June launch window.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

FaustianQ posted:

Wait, seriously? That's... hella soon, like wow. Was there a specific date for when they'd begin? Sampling for Zen gen 1 was around September-October IIRC, so about 6 months from sampling to shelves. That puts Ryzen 3000 in like a March-June launch window.

Yup, AMD is maintaining roughly a 1Y cadence on their Ryzen launches. That's been the rumor for a while.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Paul MaudDib posted:

Yup, AMD is maintaining roughly a 1Y cadence on their Ryzen launches. That's been the rumor for a while.

That's still stupid fast for 7nm products, and if Intel is mulling over a Coffee Lake 8C for later this year, it means AMD would indeed have a window of time where they are legitimately just better than Intel at everything.

Rastor
Jun 2, 2001

Remember that the nm number is meaningless marketing speak these days; each process has unique advantages and drawbacks. But yeah, Intel has stumbled hard and seems on track to fully lose their process lead.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Rastor posted:

Remember that the nm number is meaningless marketing speak these days; each process has unique advantages and drawbacks. But yeah, Intel has stumbled hard and seems on track to fully lose their process lead.

GloFo's 7nm seems marginally superior to Intel's 10nm, though, going by WikiChip.

https://en.wikichip.org/wiki/7_nm_lithography_process
https://en.wikichip.org/wiki/10_nm_lithography_process

GloFo 7nm (DUV)
30nm fin pitch
56nm CPP
40nm MMP
0.0353 µm² SRAM (HP)
0.0269 µm² SRAM (HD)

Intel 10nm
34nm fin pitch
54nm CPP
36nm MMP
0.0441 µm² SRAM (HP)
0.0312 µm² SRAM (HD)
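Quick sanity check on those figures: comparing the quoted SRAM cell areas directly (smaller cell = denser), GloFo's 7nm cells come out roughly 16-25% denser than Intel's 10nm, depending on the cell variant.

```python
# Compare the quoted SRAM cell areas head to head. These are the WikiChip
# numbers from the post above, in square micrometres per bitcell.
glofo_7nm = {"HP": 0.0353, "HD": 0.0269}
intel_10nm = {"HP": 0.0441, "HD": 0.0312}

for variant in ("HP", "HD"):
    ratio = intel_10nm[variant] / glofo_7nm[variant]
    print(f"{variant}: GloFo 7nm cell is {ratio:.2f}x denser "
          f"({glofo_7nm[variant]} vs {intel_10nm[variant]} µm²)")
```

SRAM cell area is only one axis of a process comparison (logic density, drive current, and yield matter at least as much), but it's one of the few numbers both companies publish.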

Scarecow
May 20, 2008

3200mhz RAM is literally the Devil. Literally.
Lipstick Apathy

FaustianQ posted:

GloFo's 7nm seems marginally superior to Intel's 10nm, though, going by WikiChip.

https://en.wikichip.org/wiki/7_nm_lithography_process
https://en.wikichip.org/wiki/10_nm_lithography_process

GloFo 7nm (DUV)
30nm fin pitch
56nm CPP
40nm MMP
0.0353 µm² SRAM (HP)
0.0269 µm² SRAM (HD)

Intel 10nm
34nm fin pitch
54nm CPP
36nm MMP
0.0441 µm² SRAM (HP)
0.0312 µm² SRAM (HD)

:flashfap: holy poo poo yeaaaaaah


Risky Bisquick
Jan 18, 2008

PLEASE LET ME WRITE YOUR VICTIM IMPACT STATEMENT SO I CAN FURTHER DEMONSTRATE THE CALAMITY THAT IS OUR JUSTICE SYSTEM.



Buglord

Paul MaudDib posted:

Yup, AMD is maintaining roughly a 1Y cadence on their Ryzen launches. That's been the rumor for a while.

They are on track to sample from TSMC both a 7nm Vega AND a 7nm Epyc. 7nm is rumoured to be a 6-core CCX, so Intel is super boned.

We'll hear from Intel where 10nm is today during the earnings call.
