pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


Combat Pretzel posted:

Is there even a reliable unified list of POST codes?

I thought most BIOS manufacturers did their own thing, and even changed them up from time to time. I've always looked them up. The few I've memorized were common problems on a specific system I had to support. Most of the time the errors show up on delivery, in which case I just call in the warranty; I'm not going to open it and waste my time. If I'm building a new computer, it's for myself, and I'll have the manual with all the error codes within arm's reach if I happen to need it.

If it starts beeping a few years later I just google for the error code for that model motherboard.

pixaal fucked around with this message at 15:39 on Mar 15, 2019


Cygni
Nov 12, 2005

raring to post

Pshhhh, standardizing POST codes/beep codes/light codes would take EFFORT! Just like front panel connector design: why don't we all just do our own thing for decades and then eventually remove all the old documentation from our sites so that nobody can diagnose anything? Now that's good thinkin.

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

Combat Pretzel posted:

Is there even a reliable unified list of POST codes?

No, but the motherboard manual should have a fairly comprehensive list of codes for that specific motherboard.
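
If you wanted to script the lookup, it's just a table. A rough sketch seeded with the classic AMI beep codes; boards vary, so treat these as placeholders and cross-check your own manual:

code:

# Hypothetical beep-code lookup, seeded with the classic AMI codes.
# Real boards differ; always cross-check the motherboard manual.
AMI_BEEP_CODES = {
    1: "Memory refresh timer error",
    2: "Parity error in base memory",
    3: "Base 64K memory read/write failure",
    4: "System timer failure",
    5: "Processor error",
    6: "Keyboard controller gate A20 failure",
    7: "Virtual mode exception error",
    8: "Display memory read/write failure",
}

def diagnose(beeps):
    return AMI_BEEP_CODES.get(beeps, "Unknown code -- check the manual")

print(diagnose(3))  # Base 64K memory read/write failure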

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


Cygni posted:

Pshhhh, standardizing POST codes/beep codes/light codes would take EFFORT! Just like front panel connector design: why don't we all just do our own thing for decades and then eventually remove all the old documentation from our sites so that nobody can diagnose anything? Now that's good thinkin.

A few places tried synthetic voice errors back in 2004-2006. It was pretty creepy, or maybe a fever dream; I can only find a little evidence of it on Google. Pretty sure it was an ASUS board that screamed MEMORY ERROR at me.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Cygni posted:

Pshhhh, standardizing POST codes/beep codes/light codes would take EFFORT!
The basis of virtually every drat BIOS is from AMI. How much effort can it be?

doomisland
Oct 5, 2004

pixaal posted:

A few places tried synthetic voice errors back in 2004-2006. It was pretty creepy, or maybe a fever dream; I can only find a little evidence of it on Google. Pretty sure it was an ASUS board that screamed MEMORY ERROR at me.

I had one of those; it was quite weird.

pixaal
Jan 8, 2004

All ice cream is now for all beings, no matter how many legs.


doomisland posted:

I had one of those; it was quite weird.

With modern text-to-speech it would probably be an improvement over beep codes. Then again, I hate getting new hardware in now; gently caress you Cortana, go away. Just need to tell her to shut up 29 more times.

Not Wolverine
Jul 1, 2007
I thought beep codes were standardized, at least on aftermarket boards, based on whether you had a Phoenix/Award or AMI BIOS, but OEMs un-standardize poo poo just to gently caress with people (see Dell PSU connectors). I can't remember a single time I've ever used beep codes to troubleshoot a PC, though; if it doesn't POST, just reseat the RAM and hope the smoke is still inside the CPU.

Craptacular!
Jul 9, 2001

Fuck the DH
I have a dumb question: I OC'd my 1600 with the Wraith Spire (pretty simple OC, 3.7 @ 1.14 vcore and medium LLC), and now the fan drone kicks up an octave and then back down regularly, and that's annoying. I can actually make it happen by loading certain websites.

Is this a fan curve thing I ought to adjust or something else?

LRADIKAL
Jun 10, 2001

Fun Shoe
I use Argus Monitor to control my fans; it'll let you set the fan speed based on a 10-second average rather than however fast these things normally respond. Alternatively, you can raise the temperature at which the fan spins up from idle speed, to help prevent spikes from kicking up the fan.
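
The averaging idea is simple enough to sketch; this isn't Argus Monitor's actual algorithm, just a rough illustration with made-up curve numbers:

code:

from collections import deque

class SmoothedFanCurve:
    """Drive the fan off a sliding-window average of the temperature
    (like a 10-second average) so brief load spikes don't ratchet it up."""

    def __init__(self, window_seconds=10, poll_interval=1.0):
        self.samples = deque(maxlen=int(window_seconds / poll_interval))

    def update(self, temp_c):
        # Call once per poll_interval with the current CPU temperature.
        self.samples.append(temp_c)
        avg = sum(self.samples) / len(self.samples)
        return self.pwm_for(avg)

    @staticmethod
    def pwm_for(temp_c):
        # Illustrative curve: flat 20% PWM below 50C, linear to 100% at 80C.
        if temp_c <= 50:
            return 20.0
        return min(100.0, 20.0 + (temp_c - 50) * (80.0 / 30.0))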

Craptacular!
Jul 9, 2001

Fuck the DH
I figured if I did that, the temp would simply rise to the new threshold and do the same thing. Dunno. I considered simply raising the bottom RPMs a bit because I'm okay with a little noise, but action-based noise levels annoy the poo poo out of me (this is why I can't stand GPU coil whine either).

EDIT: The default fan curve seems fine. The MSI BIOS does let you adjust how long the temperature needs to stay at a point before it kicks up the fans, and going from 0.1 seconds to 0.3 seems to have fixed the fan that couldn't figure out what speed it wanted.

Craptacular! fucked around with this message at 10:08 on Mar 16, 2019

ufarn
May 30, 2009

Craptacular! posted:

I have a dumb question: I OC'd my 1600 with the Wraith Spire (pretty simple OC, 3.7 @ 1.14 vcore and medium LLC), and now the fan drone kicks up an octave and then back down regularly, and that's annoying. I can actually make it happen by loading certain websites.

Is this a fan curve thing I ought to adjust or something else?
It's a P-state thing; your minimum processor state is set too low in your Windows power settings (hopefully you're on the Balanced profile). Set it to 20% and you'll avoid the sawtooth-like temperatures.

Your CPU basically spins up when minor background tasks perform basic things, and because it spins down again afterwards, your fans and their corresponding curves can't really do anything about it.
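
If you'd rather script it than dig through the control panel, powercfg can set the same thing. A minimal sketch, assuming the standard powercfg aliases and an elevated prompt:

code:

import subprocess

# Raise "minimum processor state" to 20% on the active power plan,
# for both AC and DC. scheme_current, sub_processor and PROCTHROTTLEMIN
# are the stock powercfg aliases for the plan, the processor subgroup,
# and the minimum-state setting.
for flag in ("/setacvalueindex", "/setdcvalueindex"):
    subprocess.run(
        ["powercfg", flag, "scheme_current", "sub_processor",
         "PROCTHROTTLEMIN", "20"],
        check=True,
    )
subprocess.run(["powercfg", "/setactive", "scheme_current"], check=True)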

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
AFAIK it's Precision Boost creating heat. If you disable it in the BIOS, it stops doing that.

The only real alternative, at least if you don't want a fan curve that results in higher idle temperatures, is water cooling with the fans controlled by the water temperature. At least that's what I had to do, because ASRock BIOSes don't do time constants for fan spooling.

LRADIKAL
Jun 10, 2001

Fun Shoe
You noobs. He fixed it!

ufarn
May 30, 2009

LRADIKAL posted:

You noobs. He fixed it!
Not really, they just increased the spin-up time (i.e. the hysteresis) to make it less audible.

ufarn fucked around with this message at 19:01 on Mar 16, 2019

Craptacular!
Jul 9, 2001

Fuck the DH

Combat Pretzel posted:

AFAIK it's Precision Boost creating heat. If you disable it in the BIOS, it stops doing that.

I don’t believe I’m using PBoost. The only PBO setting I know of in my BIOS is the PB Overdrive setting, which is disabled and pointless anyway, since this is a 1600 and not a 2600(X).

And no, it’s still happening; I’ll try some of the suggestions here. I did move to Windows Balanced as soon as I installed the chipset drivers. This isn’t my first Ryzen chip, but it is my first Ryzen OC, as I run the home server at stock.

Klyith
Aug 3, 2007

GBS Pledge Week
Keep in mind the +20°C offset on Ryzens. On my motherboard (MSI) the fan curve responds to Tdie (the actual temperature) while I'm in the BIOS setting up the fan curve, and to Tctl (the +20 offset) once the OS boots.

My CPU idles below 30°C (actual) but spikes up about 10 degrees whenever it clocks up to do the smallest amount of work. Given the offset, that means anything below ~60°C on the fan curve graph is effectively idle temperature. So my fan curve is flat at 20% PWM up to 50, a very shallow slope up to 30% at 65, and then some actual fan above that.

Now, I have a big 120mm tower heatsink, so you'll probably need a higher minimum setting than that, but don't be afraid to go ham on the fan curve if you think it's spinning up for no reason.
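
The offset itself is just a fixed constant, so the conversion is trivial. A minimal helper, assuming the +20°C figure above (which, per the posts below, applies to the first-gen X-series parts):

code:

RYZEN_TCTL_OFFSET_C = 20.0  # first-gen X-series offset discussed above

def tdie_from_tctl(tctl_c, has_offset=True):
    """Convert reported Tctl back to the actual die temperature (Tdie)."""
    return tctl_c - RYZEN_TCTL_OFFSET_C if has_offset else tctl_c

# A fan curve drawn against Tctl needs its thresholds shifted up by the
# same 20 degrees: "idle" at 40C actual reads as 60C in Tctl terms.
print(tdie_from_tctl(60.0))  # 40.0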

LRADIKAL
Jun 10, 2001

Fun Shoe

ufarn posted:

Not really, they just increased the spin-up time (i.e. the hysteresis) to make it less audible.

Right. The issue of the fan constantly throttling is gone.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Craptacular! posted:

I don’t believe I’m using PBoost. The only PBO setting I know of in my BIOS is the PB Overdrive setting, which is disabled and pointless anyway, since this is a 1600 and not a 2600(X).
PBO and PB/PB2 are separate things, more or less. I don't think Precision Boost is even called that in the BIOS. Over here on my X399 Taichi, it's called Core Performance Boost (a quick Google says it's the same for regular Ryzen CPUs on various mainboards).

Klyith posted:

Keep in mind the +20°C offset on Ryzens. On my motherboard (MSI) the fan curve responds to Tdie (the actual temperature) while I'm in the BIOS setting up the fan curve, and to Tctl (the +20 offset) once the OS boots.
That's certainly an interesting way for MSI to go about it.

I'm glad I eventually sprang for a Commander Pro and a water temp sensor to bypass all this Tdie, Tctl and ratcheting bullshit. Wish my mainboard had support for external 10K sensors, to save that money, but alas.

Combat Pretzel fucked around with this message at 01:46 on Mar 17, 2019

Craptacular!
Jul 9, 2001

Fuck the DH

Klyith posted:

Keep in mind the +20°C offset on Ryzens. On my motherboard (MSI) the fan curve responds to Tdie (the actual temperature) while I'm in the BIOS setting up the fan curve, and to Tctl (the +20 offset) once the OS boots.

I think this is only for Ryzen 7s, at least for the original generation? When AMD blogged about it, they only acknowledged the 1700/1800.

Anarchist Mae
Nov 5, 2009

by Reene
Lipstick Apathy

Craptacular! posted:

I think this is only for Ryzen 7s, at least for the original generation? When AMD blogged about it, they only acknowledged the 1700/1800.

All of the X series chips have the temperature offset.

Not Wolverine
Jul 1, 2007

Measly Twerp posted:

All of the X series chips have the temperature offset.
Is the temp offset a good thing? It just seems very weird for AMD to choose to report that the CPU is running hotter than it actually is since I think most enthusiasts care about temperatures and want to see lower numbers.

Anarchist Mae
Nov 5, 2009

by Reene
Lipstick Apathy

Crotch Fruit posted:

Is the temp offset a good thing? It just seems very weird for AMD to choose to report that the CPU is running hotter than it actually is since I think most enthusiasts care about temperatures and want to see lower numbers.

I've heard some suggest that the +20 adjustment was to combat hot spots on the die. For AMD to sell higher-clocked X chips, they felt they needed to be sure those hot spots were accounted for, as opposed to the non-X chips, where if you overclock too far and too hot, that's on you.

Klyith
Aug 3, 2007

GBS Pledge Week

Measly Twerp posted:

All of the X series chips have the temperature offset.

Ah, I have a 1600X; didn't know it wasn't the whole 1st-gen series. NVM Craptacular, guess that doesn't have anything to do with your deal.


Combat Pretzel posted:

That's certainly an interesting way for MSI to go about it.

I'm glad I eventually sprang for a Commander Pro and a water temp sensor to bypass all this Tdie, Tctl and ratcheting bullshit. Wish my mainboard had support for external 10K sensors, to save that money, but alas.

The offset behavior is weird, but not difficult once I figured out what was happening.

Fan curves & hysteresis are really not that hard to figure out and set to be non-annoying... but you can't do that when you need to reboot into the BIOS every time you want to change something. Luckily for me I was using SpeedFan for the last decade or so; sad that it stopped being updated to work with modern mobos. But all the mobo brands have their own utilities that can set fan curves, AFAIK. (The MSI one is crap tho. Afterburner good, MSI commander bad.)

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE
Argus Monitor is a decent modern alternative to SpeedFan, and less obnoxious than most motherboard vendor fan control software.

Worf
Sep 12, 2017

If only Seth would love me like I love him!

TheFluff posted:

Argus Monitor is a decent modern alternative to SpeedFan, and less obnoxious than most motherboard vendor fan control software.

Is there laptop compatibility there? Just wondering.

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

Statutory Ape posted:

Is there laptop compatibility there? Just wondering.

No idea, never tried it on anything other than a desktop machine.

e: the motherboard compatibility list does say it supports "Lenovo / IBM Thinkpad Notebooks" and "Dell Notebooks". There's a free trial, so try it I guess?

TheFluff fucked around with this message at 14:18 on Mar 17, 2019

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
Some Ryzen 3000 news, based on people rifling through the newest BIOS updates (MSI, Asus, Biostar): https://www.overclock.net/forum/13-amd-general/1640919-new-dram-calculator-ryzena-1-4-1-overclocking-dram-am4-414.html#post27895416

quote:

Translation into simple language. We have:

1) New memory controller with partial error correction for non-ECC memory
2) Desktop processor with two chiplets (2 CCDs) on board, 32 threads maximum
3) New MBIST (memory built-in self-test)
4) Core watchdog: a fail-safe function used to reset the system if the microprocessor gets lost due to address or data errors
5) XFR: at the moment I do not see anything special about it; the algorithm and limits have been updated. Scalar Control comes back with the new processors.
6) Updated core control has a symmetric configuration of the active cores. In 2-CCD configurations, each chiplet has its own RAM channel in order to minimize memory access latency. One channel for 8 cores will be a bottleneck if you use the system in the default state.

quote:

The XFR is actually quite impressive. If I read it right, they added an FCLK register, and we need to find out what that bus is doing. On Intel Skylake, AnandTech wrote the following:

The register in question is called the FCLK (or ‘f-clock’), which controls some of the cross-frequency compensation mechanisms between the ring interconnect of the CPU, the System Agent, and the PEG (PCI Express Graphics). Basically this means it is to do with data from the processor to the GPUs. So when data is handed from one end to another, this element of the processor manages the data buffers to allow that cross boundary migration in a lossless way. This is a ratio frequency setting which is tied directly to the base frequency of the processor (the BCLK, typically 100 MHz), and can be set at 4x, 8x or 10x for 400 MHz, 800 MHz or 1000 MHz respectively.
https://www.anandtech.com/show/9607/...-optimizations


They also now allow the clock set on the Infinity Fabric (UCLK) to select the divisor, which means we are potentially looking at the IF being clocked equal to the memory frequency at dual rate instead of single rate (e.g. 3200 MHz instead of 1600 MHz). That has a lot of implications for performance if I'm reading it correctly! EXCITED!!!

Edit: For anyone better with limits in calculus, here are some data points from a pro-Intel review company, PCPerspective (Ryan Shrout ran it, and Shrout Research, and regularly attacked AMD, but they did show the latency of going off-CCX, although their memory timings were crap and I get lower latency than they ever achieved through a combination of core clock, memory speed, timings, etc.).
https://www.pcper.com/reviews/Proces...ging-between-t

Another way would be to test Zen or Zen+ with SiSoft Sandra's latency test at different memory speeds, then extrapolate the expected drop in latency for a speed double the single rate. In other words, find the limit the curve is approaching, since latency does not drop linearly with the speed increase of the memory controller, and therefore of the Infinity Fabric. This can show how the bandwidth doubles with the upcoming Infinity Fabric changes, while the latency improvement would be estimated through this calculation. (Math is the reason I dropped out of engineering/physics in undergrad; the only way to pass calc II is to have taken calc II, even though calc I can handle this math problem.)

With that information, we can estimate a lot about the upcoming performance increase from reduced latency, as well as whether the IF had bandwidth limitations. Unfortunately, we can't get the full picture, but a data point is a data point.
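
The extrapolation described above is just a curve fit. Here's a rough sketch of how you'd do it, with made-up latency numbers standing in for real Sandra measurements (the model, latency = floor + constant/fclk, is my assumption about the curve shape, not anything from AMD):

code:

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (fabric clock MHz, memory latency ns) measurements;
# substitute real Sandra/AIDA readings from a Zen/Zen+ system.
fclk = np.array([1200.0, 1333.0, 1467.0, 1600.0])
latency = np.array([82.0, 78.0, 75.0, 73.0])

# Model latency as a fixed floor plus a term inversely proportional
# to the fabric clock: lat(f) = c0 + c1/f. c0 is the asymptote the
# curve approaches as the fabric speeds up (the "limit" in question).
def model(f, c0, c1):
    return c0 + c1 / f

(c0, c1), _ = curve_fit(model, fclk, latency)
print(f"latency floor ~= {c0:.1f} ns")
print(f"predicted at 3200 (dual rate): {model(3200.0, c0, c1):.1f} ns")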

Klyith
Aug 3, 2007

GBS Pledge Week

EmpyreanFlux posted:

Some Ryzen 3000 news, based on people rifling through the newest BIOS updates (MSI, Asus, Biostar): https://www.overclock.net/forum/13-amd-general/1640919-new-dram-calculator-ryzena-1-4-1-overclocking-dram-am4-414.html#post27895416

lol at amd fanboys still having a grudge with PCperspective


but Zen 2 finally solving the IF/memory clock coupling is good

ufarn
May 30, 2009
Interesting if they manage to backport all that functionality to current AM4 mobos. Or at least X470.

90s Solo Cup
Feb 22, 2011

To understand the cup
He must become the cup



Current scuttlebutt says B350 mobos won't get support for Ryzen 3k -- only X470, X370 and B450.

https://www.youtube.com/watch?v=ezyTaUnXJkQ

Risky Bisquick
Jan 18, 2008

PLEASE LET ME WRITE YOUR VICTIM IMPACT STATEMENT SO I CAN FURTHER DEMONSTRATE THE CALAMITY THAT IS OUR JUSTICE SYSTEM.



Buglord
Heresy, B450 is the same as B350

lllllllllllllllllll
Feb 28, 2010

Now the scene's lighting is perfect!
They made a promise once. :-(

Cygni
Nov 12, 2005

raring to post

So, like rumored earlier, AMD is powering the Google game streaming service they just launched. People are digging through the specs, but it's hard to tell if it's a new semi-custom design or traditional Rome + Vega servers running VMs. Or something else entirely.

There is a lot of goofy stuff, like the slides mentioning HyperThreading by name (which would suggest Intel, as AMD can't use the term), but then Intel not appearing on the 'partners' slide at all. It'll be interesting to see what Google is actually using, and whether it's actually usable for latency-sensitive games.

https://www.anandtech.com/show/14105/google-announces-stadia-a-game-streaming-service

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Intel can't touch AMD's stuff for cost-effective core density right now. The only way they could set that up at scale is with something running on Zen cores.

eames
May 9, 2009

DF has details

2.7 GHz custom CPU with SMT
AVX2 SIMD
9.5 MB L2+L3

AMD 10.7 TF GPU
56 CU HBM2

16 GB HBM2 shared by CPU and GPU (:kiss:)
484 GB/s bandwidth

The GPU seems very similar to Vega 56 and, as they rightfully pointed out, what a coincidence that Crytek published their RT tech demo on a Vega 56 of all cards. It almost seems like AMD has given Google early access to next-gen console silicon scaled up to datacenter dimensions.

Good for AMD, though I don't think this is good for general high-performance computing if gaming moves to the cloud (which seems inevitable once connection and bandwidth issues are solved). They quote 20 GB per hour of gameplay.
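
The quoted numbers roughly sanity-check, as a quick back-of-the-envelope (the 1.89 Gbps pin speed is my assumption, picked because it's the standard HBM2 rate that yields 484 GB/s on a 2048-bit bus):

code:

# 484 GB/s from a 2048-bit HBM2 bus at an assumed 1.89 Gbps per pin:
bus_bits, gbps_per_pin = 2048, 1.89
print(bus_bits * gbps_per_pin / 8)       # ~483.8 GB/s

# 20 GB per hour of gameplay as a sustained bitrate:
bytes_per_hour = 20e9
print(bytes_per_hour * 8 / 3600 / 1e6)   # ~44.4 Mbit/s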

eames fucked around with this message at 00:07 on Mar 20, 2019

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
That's going to be an assload of EPYCs and Vega 10s.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I'm not sure how to feel about this. Now that consoles are halfway decent and not holding back the PC ports/builds, suddenly a new graphics hardware bottleneck shows up for devs to target. While it's easy to toss tons of RAM and cores into a server, sufficient graphics hardware is harder to condense.

Klyith
Aug 3, 2007

GBS Pledge Week

Combat Pretzel posted:

I'm not sure how to feel about this. Now that consoles are halfway decent and not holding back the PC ports/builds, suddenly a new graphics hardware bottleneck shows up for devs to target. While it's easy to toss tons of RAM and cores into a server, sufficient graphics hardware is harder to condense.

I don't see the next-gen consoles having higher specs than that thing. Even with Google launching a year ahead of them, all that poo poo isn't gonna fit in a $400 box.

And really, graphics hardware is getting to the point where actually targeting the leading edge is a problem in itself. Those ultra-detailed high-res assets cost a lot of money. Cost of development is already a problem right now; if you expect the next gen to set the baseline at what Rockstar does now, then I think you're gonna be disappointed.


What you should be worried about is streaming itself, because making games for streaming play requires changes to gameplay. No matter how good your bandwidth is, you can't avoid the speed of light. Input lag is an unavoidable problem, and that makes entire categories of games & gameplay either impossible or very different from how they are now.
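
The physics floor is easy to put numbers on. A quick sketch, assuming light in fiber travels at roughly 2/3 of c and ignoring encode/decode and routing overhead entirely:

code:

# Lower bound on round-trip input lag from distance alone.
C_FIBER_KM_S = 300_000 * 2 / 3  # ~200,000 km/s in fiber

def min_rtt_ms(distance_km):
    return 2 * distance_km / C_FIBER_KM_S * 1000

for d in (50, 500, 2000):
    print(f"{d:>5} km to the datacenter: >= {min_rtt_ms(d):.1f} ms RTT")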


Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I just read that it's entirely Linux-based, and it reads like they want game developers to make Linux builds of their games (or, I guess, something that'll work fine in Wine/DXVK), so maybe something good will come of it. That is, if EA, Ubisoft et al. even care.
