Indiana_Krom
Jun 18, 2007
Net Slacker
In my experience, RAID done in the BIOS has the same resource utilization as pure software RAID, but with the immediate downside that it often disables/hides the SMART status of the attached drives, so the OS can't sound the alarm when one of them is degrading or about to fail. (Not that SMART always catches a drive on its way out; sudden catastrophic failures usually evade it. On the other hand, it will alert you to basically every graceful failure mode, often days or weeks before you would begin to lose data, allowing ample time to back up and replace the drive.)
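To make the "graceful failure mode" point concrete, here's a minimal Python sketch of the kind of check the OS loses when the RAID firmware hides SMART. The attribute IDs (5, 187, 197, 198) are real standard counters, but the "any nonzero raw value is a warning" logic is a simplified illustration; real tools like smartctl or CrystalDiskInfo read the full vendor-specific table.

```python
# Sketch: flag a drive as degrading from a few well-known SMART attributes.
# Raw-value counters where any nonzero reading warrants a backup-and-replace
# plan; these usually start ticking days/weeks before data loss.
CRITICAL_ATTRIBUTES = {
    5: "Reallocated Sectors Count",
    187: "Reported Uncorrectable Errors",
    197: "Current Pending Sector Count",
    198: "Offline Uncorrectable",
}

def degradation_warnings(raw_values):
    """raw_values: dict mapping SMART attribute ID -> raw counter value."""
    return [
        f"{CRITICAL_ATTRIBUTES[attr_id]}: {value}"
        for attr_id, value in sorted(raw_values.items())
        if attr_id in CRITICAL_ATTRIBUTES and value > 0
    ]

# Hypothetical drive with pending sectors: a graceful failure in progress
warnings = degradation_warnings({5: 0, 9: 18000, 197: 8, 198: 2})
```

With BIOS RAID hiding the drives, nothing in the OS ever sees those counters move.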


Indiana_Krom
Jun 18, 2007
Net Slacker
Yeah, I had a 7700k; it poo poo itself in Shadow of the Tomb Raider pretty badly, and a 9900k runs that game a lot better. IIRC the 95th-percentile frame rate in the benchmark improved by something approaching 3x.

Far Cry 3/4/5/etc., on the other hand, show practically no difference when isolating the CPU; they're all about single-core/IPC performance, and that isn't all that different between the 7700k and the 9900k.

Indiana_Krom
Jun 18, 2007
Net Slacker

denereal visease posted:

oh wow, didn't realize ASUS was so thoroughly stunting on their competition; it must be the over-the-top gamer aesthetic / ROG poo poo

posting from a machine with a ROG mobo and ASUS GPU

Posting this from a system with a Gigabyte board, which IMO has been a great sales pitch for going back to ASUS next time I build a PC.

Indiana_Krom
Jun 18, 2007
Net Slacker
There are diminishing returns and just plain limits on how much heat can reliably be removed in any given form factor. A full tower case can handle a lot, but stuff 1000W into one without significant attention to cooling and there will be major thermal problems. Honestly, even 500W is already stretching the limits of heat rejection from the average tower case.

And then you have to deal with thermal density. GPUs have it fairly easy because they have thousands of cores that split the load, so each individual core isn't dealing with as much heat and it is spread more evenly over the entire chip. CPUs, on the other hand, have fewer, much higher-performance cores, so the heat and power are concentrated in those cores, which are very small areas on the die, and it is a huge challenge to get the heat away from those hot spots.
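A back-of-envelope heat-flux comparison makes the density point obvious. All the numbers below are illustrative round figures, not measurements of any specific chip:

```python
# Heat flux (W/mm^2): same idea as thermal density in the post above.
def heat_flux_w_per_mm2(power_w, area_mm2):
    return power_w / area_mm2

# Hypothetical CPU: 150 W concentrated in 8 cores of ~7 mm^2 of hot logic each
cpu_flux = heat_flux_w_per_mm2(150, 8 * 7)

# Hypothetical GPU: 300 W spread over ~600 mm^2 of active die
gpu_flux = heat_flux_w_per_mm2(300, 600)

# Despite drawing half the power, the CPU's hot spots see several times
# the heat flux of the GPU's evenly loaded die.
```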

Indiana_Krom
Jun 18, 2007
Net Slacker

Kibner posted:

Yes. I am also checking the windows event log for memory errors.

Big assumption there that the firmware even reports ECC errors to the OS. Practically no consumer motherboards do, last time I checked (which I'll admit was a year or so ago). More often than not, the only indicator Windows will give you of memory errors is a BSOD.

Indiana_Krom
Jun 18, 2007
Net Slacker

Paul MaudDib posted:

The general advice is don't update unless you have a specific reason. But Cygni is correct that AMD is churning AGESA a lot which requires updated BIOS to get it. If you want the new stuff you'll have to update. But if you don't need new stuff then don't update, because there's always the possibility of introducing new bugs/regressions, and actually this happens quite often on AMD because of how hard AMD is churning the AGESA code. Partners are having to patch around new bugs in AMD's stuff and then AMD changes it again and breaks everything. The "socket support for 5 years" stuff is very difficult on both AMD and partners.

There is very very rarely a compelling performance reason to update BIOS if everything is working - if your processor is supported, then whatever you get 6 months after launch is going to be within 1-2% of what you get when you retire the system. You upgrade because you need new processor support or new features, or maybe AMD finally fixes the USB bug :haw:

Actually, it bears saying that a new BIOS can regress performance too, especially if a patch addressing the Spectre vulnerability of the week includes a workaround that hurts performance a little.
Yup, generally if it isn't broken, don't fix it.

Also worth noting: if you are running Windows, the necessary microcode updates to protect you from the Spectre vulnerability of the week are loaded at runtime by the OS, as long as you are current on Windows Update. Doing it in a BIOS update doesn't change the level of protection you get; it just makes it kick in before the OS loads, which for Spectre/Meltdown/etc. is basically irrelevant, since if exploit code were running before the OS loads, your system would already be completely owned.

Indiana_Krom
Jun 18, 2007
Net Slacker

hobbesmaster posted:

In theory that could match the performance of fast, tuned DDR4.

In theory.

In bandwidth, maybe, but DDR4 will crush it in latency by a huge margin.
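A quick sketch of why latency dominates for typical CPU memory access: fetching a single 64-byte cache line is almost all latency, so extra bandwidth barely helps pointer-chasing workloads. The latency and bandwidth figures are illustrative round numbers, not measurements of any particular memory type:

```python
# Time to fetch one cache line = fixed latency + transfer time.
def fetch_time_ns(latency_ns, bandwidth_gb_s, bytes_moved=64):
    transfer_ns = bytes_moved / (bandwidth_gb_s * 1e9) * 1e9  # bytes / (bytes per ns)
    return latency_ns + transfer_ns

# Hypothetical fast, tuned DDR4: 60 ns latency, 50 GB/s
tuned_ddr4 = fetch_time_ns(latency_ns=60, bandwidth_gb_s=50)

# Hypothetical higher-bandwidth but higher-latency memory: 120 ns, 100 GB/s
slower_latency = fetch_time_ns(latency_ns=120, bandwidth_gb_s=100)

# Doubling bandwidth shaves fractions of a nanosecond off a cache-line
# fetch; doubling latency nearly doubles the total cost.
```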

Indiana_Krom
Jun 18, 2007
Net Slacker

hobbesmaster posted:

So, I’m guessing 280mm Asetek means one of the big Arctic ones (or whoever else Asetek sells that to). This is “special cooling” in the sense that it will not fit in most cheaper cases and has a significant performance gain over any air cooler. I would’ve chosen “exotic” for what videocardz was driving at, which probably means something sub-ambient.

280mm AIOs don't actually perform all that impressively over a 140mm heat-pipe tower cooler. The bigger 140mm tower coolers can outperform even a 360mm AIO in the right case setup. AIOs are more about flexibility than raw performance; you can often still mount one in cases that can't handle a 170mm-tall tower cooler.

Indiana_Krom
Jun 18, 2007
Net Slacker

CaptainSarcastic posted:

Of course this requires using a flashlight because the inside is so dark that all my components are lurking in shadow.

Or you could just switch the RGB on for a second using that open source RGB controller application.

Indiana_Krom
Jun 18, 2007
Net Slacker
I haven't heard of the gsync module being removed from actual gsync monitors yet; I think it's still there in the few left on the market. Though maybe with the new VESA standard actually having testing and certification, the gsync module can finally be put out to pasture.

Indiana_Krom
Jun 18, 2007
Net Slacker
The only reason someone would experience anxiety about applying thermal paste is because they have gotten caught up in the incredibly stupid internet politics of applying thermal paste. (More paste than you strictly need always performs better than the air gaps you get from not using enough.)

Indiana_Krom
Jun 18, 2007
Net Slacker
You can use isopropyl alcohol and a simple rag to clean up and break down most thermal pastes. The basic ~70% stuff won't hurt your hands over the couple of minutes it takes to completely dissolve any leftover paste, and then you can wash up with regular soap and water.

Indiana_Krom
Jun 18, 2007
Net Slacker

SwissArmyDruid posted:



This stuff. Arctic Silver used to repackage and sell d-limonene and a small bottle of 99% isopropyl in teeny-tiny dropper bottles at a steep markup.

I have tried Goo Gone on 8-9 year old Haswells with the stock paste (and from the 486 days, I'm pretty sure it's just paraffin wax, which is why it resists alcohol, and requires a hydrocarbon solvent) and it works a treat. Start with the orange oil, finish with the alcohol, don't be afraid to drop just enough to wet the compound and let it soak a little before agitating.

I do have a bottle of that on my shelf; it is extremely effective against pretty much any thermal grease, paste, pad, powder, etc. Dab a bit on a rag and it will erase Arctic Silver in a single wipe. It also smells, and you could probably run a lawn mower engine with it or explode a poorly ventilated room. Although if it comes to that, gasoline is also a very powerful solvent that will cut through most thermal paste instantly (and as a bonus, is incredibly cheap compared to the average petroleum solvent).

Indiana_Krom
Jun 18, 2007
Net Slacker

SwissArmyDruid posted:

sounds less like you have "Goo Gone" and more "Goof Off", different brand, different formulation.



Goof Off is a petroleum distillate and reeks like you think it would, Goo Gone is made from d-limonene, an extract lipid of orange peels.

Nah, it's Goo Gone from Magic American corporation; it even says "citrus power" on the label. Read the back label: it still contains petroleum distillates.

Although goof off would also work just as well.

Indiana_Krom
Jun 18, 2007
Net Slacker
Slowing down the GPU so the CPU doesn't have to work as hard is one way of reducing the CPU temperature I guess.

Indiana_Krom
Jun 18, 2007
Net Slacker
Nothing craters your performance with expensive, fast RAM harder than swapping to disk because you couldn't afford enough of it.
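The arithmetic behind that is brutal. A simple average-access-time sketch, with illustrative latencies (~100 ns for RAM, ~100 µs for a fast NVMe page-in; spinning disk would be far worse):

```python
# Average memory access time when a fraction of accesses spill to swap.
def effective_access_ns(ram_ns, swap_ns, swap_fraction):
    return ram_ns * (1 - swap_fraction) + swap_ns * swap_fraction

all_in_ram = effective_access_ns(100, 100_000, 0.0)
one_percent_swapped = effective_access_ns(100, 100_000, 0.01)

# Even 1% of accesses hitting swap makes memory roughly 11x slower on
# average; no amount of tighter RAM timings buys that back.
```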

Indiana_Krom
Jun 18, 2007
Net Slacker

Palladium posted:

AKA by far the best utility software ever made by a mobo manufacturer

The reason is that its code was originally an independent piece of software called RivaTuner (hence why MSI Afterburner to this day also asks to install RivaTuner Statistics Server for the full monitoring/overlay/OSD functionality).

Although one should note that EVGA Precision X1 is also a branch of RivaTuner, so that lineage is not a guarantee of quality.

Indiana_Krom
Jun 18, 2007
Net Slacker

TheFluff posted:

The 8700K was one of those CPU's that really benefited from delidding, it's completely thermally bottlenecked by the internal thermal paste. You basically had to delid it if you wanted to overclock. I delidded mine and put liquid metal on it back in early 2018, haven't touched it since and the temps seem unchanged. I'm not really feeling a need to upgrade it though, and I'm not sure what I'll do with it if I do. Probably hard to sell, I didn't glue the IHS back on so I really don't want to take it out of the socket.

The 7700k was the same. I had one, delidded it, and threw on some liquid metal, then lightly glued the IHS back on; I think it was a 15-20°C across-the-board drop in load temperatures (not much change at idle, for obvious reasons). But the limit of my overclocking on that CPU was just making the all-core turbo match the single-core turbo, so it cranked to 4.5 GHz on all cores under load and had the thermal headroom to get away with it. It also dropped peak power consumption from 125W to 120W in Prime95 small-FFT AVX: literally 5W of leakage-current reduction just from lowering the temperature.

Indiana_Krom
Jun 18, 2007
Net Slacker
Cool Thing! Its the new Hot Thing.

Indiana_Krom
Jun 18, 2007
Net Slacker
Far Cry 3/4/5 (don't have 6) are heavily single-thread dependent and don't scale well, or sometimes at all, with more cores. They probably also hit memory bandwidth/cache really hard, especially the encrypted DRM flavors of the later games (not only is the engine poorly optimized, it has to have its memory and executable constantly decrypted/encrypted on the fly in software).

Like seriously, Far Cry 3 doesn't perform any better on a 9900k/RTX 3080 Ti than it does on a 2700k/GTX 680. It is really odd when Far Cry 2/Dunia 1 was one of the first game engines to show a major benefit from going from 2 cores to 4.

Indiana_Krom
Jun 18, 2007
Net Slacker
Last time I went PC building, I found most of my stuff on Newegg and used that to get it all organized, then used Google to find the majority of the parts on Amazon/B&H/etc. and bought them there, because Amazon's built-in search is the most garbage search in the known universe. I don't know how Amazon does it, really: you search for, I donno, say stationery supplies, and the search shows you car parts and apparel instead, because you might need a car and clothes to mail a letter or some poo poo. It's so loving random.

Indiana_Krom
Jun 18, 2007
Net Slacker

orcane posted:

Sorry, the 50 cents of BoM was too much in our $150 entry level mainboard :shrug:

That's 50 cents not going into some executive's wallet.

Indiana_Krom
Jun 18, 2007
Net Slacker
MicroUSB wasn't used for charging laptops or anything else big because of its power limits: you can't charge a laptop on 5 watts.

USB-C Power Delivery can handle up to 100W, and the newest revision even has a 240W mode, which is why it can charge laptops and other high-power devices.
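Those power levels fall straight out of voltage times current: micro-USB stuck at 5 V with around an amp or so of current, versus USB-C PD's 20 V / 5 A (100 W) and the PD 3.1 Extended Power Range 48 V / 5 A (240 W) profiles:

```python
# Power = voltage x current, the whole story of USB charging tiers.
def watts(volts, amps):
    return volts * amps

micro_usb = watts(5, 1.5)   # 7.5 W: about the practical ceiling for micro-USB
usb_c_pd = watts(20, 5)     # 100 W: classic USB-C Power Delivery maximum
usb_c_epr = watts(48, 5)    # 240 W: PD 3.1 Extended Power Range
```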

Indiana_Krom
Jun 18, 2007
Net Slacker

Insurrectionist posted:

Babby's first CPU question here. I just built my first PC in like 15 years today, after only playing on laptops and not really doing any overclocking ever. I enabled DOCP in the BIOS on my modest 5600X during setup, and I notice my clock speed is fluctuating like crazy, just sitting on the desktop transferring files and occasionally downloading/installing poo poo so far. Is this something I should care about/fix, or just natural for overclocked CPUs?

Modern CPUs boost their clock speeds up and down faster than you can see; it's totally normal. The only thing that doesn't look quite right in your graph is that it isn't clocking down as far as it should: at idle on the desktop with normal background stuff going on, it should be dropping to around 1 GHz or less.

Indiana_Krom
Jun 18, 2007
Net Slacker

Klyith posted:

You can't compare that across apps.

The reported clock speed by many apps is a very naive "current clockspeed", to which the CPU reports the max speed of the current fastest core. That result will never drop below 2.2ghz, which is actually the minimum operating frequency of zen 2 & 3 (dunno about 4). Below 2.2ghz you get various halt/sleep/park states where the clock is not running.
code:
> cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
2200000
If an app is saying 1ghz, it's doing something like querying all the individual cores and then averaging them. Which is a much better and more accurate answer to how hard a CPU is really working at the moment, but still isn't the "real" answer.

Ah, I didn't know what the minimum frequency on Ryzen was; my only experience is my Intel CPU, which idles at 800 MHz.
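The two reporting styles Klyith describes are easy to sketch. Assumed behavior per the post above: "current clock" tools report the fastest core and never show less than the minimum running frequency, while averaging tools count parked/halted cores as 0, which is how readings below the hardware minimum appear:

```python
# Style 1: what many monitoring tools show as "current clockspeed".
def naive_current_clock(core_freqs_mhz):
    # The busiest core; can never read below the minimum P-state (~2200 MHz
    # on Zen 2/3) because a core that is running runs at least that fast.
    return max(core_freqs_mhz)

# Style 2: average across all cores, parked cores reporting 0.
def average_clock(core_freqs_mhz):
    return sum(core_freqs_mhz) / len(core_freqs_mhz)

# Hypothetical mostly-idle 6-core Zen part: one CCX-worth of cores parked,
# two cores awake at the 2200 MHz floor.
freqs = [2200, 0, 0, 0, 2200, 0]
```

The same chip reads "2200 MHz" in one tool and "~733 MHz" in another, and neither number is wrong, exactly.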

Indiana_Krom
Jun 18, 2007
Net Slacker
The base clock on Intel CPUs is just the guaranteed clock speed within the official TDP spec: if a CPU is rated at 65W or 95W, the base clock is the minimum clock speed it is guaranteed to sustain while constrained to 65W or 95W. SpeedStep has been idling down to 800 MHz since the days of the Core 2. AMD is almost certainly doing something similar (and also straight up shutting down entire cores).


Indiana_Krom
Jun 18, 2007
Net Slacker
It is the same thing you can see that 3080 Ti doing in my attachment. On the desktop it idles way down; as soon as I launch a game it jumps to almost 2 GHz, but it can bounce around anywhere from 210 MHz to 2 GHz depending on the load/power limits, even though the base clock is 1365 MHz. I've literally never seen it run at the base clock; even at a 50% power limit it still hovers in the 1600 MHz range. The main point is that at these super low idle speeds the GPU is operating at only 0.768v, and the CPU is at a similarly low voltage at 800 MHz. It is all for power savings at idle: there is no point in blasting away at the base clock when nothing is happening, so chips either idle down to a really low state or power gate portions of themselves off entirely, which is how even insanely high-power modern desktop chips idle down to single-digit watts.
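A rough sketch of why dropping voltage and clock together saves so much: dynamic switching power scales roughly with f x V², so the idle state above costs over an order of magnitude less switching power than boost. The formula is the standard first-order approximation; the boost-state voltage is an assumed illustrative figure (the 210 MHz / 0.768v idle numbers come from the post):

```python
# Relative dynamic power ~ frequency x voltage^2 (capacitance cancels out
# when comparing states of the same chip).
def relative_dynamic_power(freq_mhz, volts):
    return freq_mhz * volts ** 2

boost = relative_dynamic_power(2000, 1.05)   # near-max boost, assumed ~1.05v
idle = relative_dynamic_power(210, 0.768)    # desktop idle state from the post
ratio = boost / idle                          # roughly 18x the switching power
```

And that's before power gating, which takes whole blocks to effectively zero.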

Indiana_Krom
Jun 18, 2007
Net Slacker

VorpalFish posted:

Does it? Don't think Ryzen does shared L1/L2 so when you disable the cores in the CCD you should be losing half the L1 and L2 cache as well, no?

Yeah, I'm confused here, because the 7950X3D having double the L1 and L2 cache makes sense when you consider it also has double the cores. So correct me if I'm wrong, but I was assuming the 7800X3D would have the same ratio of L1+L2 cache per core as the 7950X3D.

Indiana_Krom
Jun 18, 2007
Net Slacker

Twerk from Home posted:

Why does the AV1 have to be hardware? Why not just buy a CPU with more cores than games can use anyway and do encoding in software? Software encoders look significantly better than any of the hardware encoders if you can throw enough CPU at them. Software encoders also get better and better over time.

Power. An ASIC will do it reliably at roughly a thousandth of the energy cost; quality is secondary.

Indiana_Krom
Jun 18, 2007
Net Slacker

Cygni posted:

It's kinda like the primary US airlines. Everyone has one they just fuckin hate, but its different person to person. gently caress United btw.
:hf: gently caress United.

I haven't had that many motherboards die on me, but I assume it is just like hard drives: some people swear by Western Digital, some people swear by Seagate, but I've seen them all fail and have zero brand loyalty because of it. For example, my current backups are two 16 TB drives loaded identically; one is a Seagate, the other a Western Digital.

My last ASUS board couldn't do XMP even though the memory I had in it was on the QVL; the only difference was that the QVL entry was for the 64 GB configuration with all 4 DIMMs populated, and I only had 32 GB in two DIMMs (and the board was a T-topology, so it didn't matter where the sticks went, but I tried all combinations anyway!). Currently using a Gigabyte board that took considerable fighting to get XMP working, but it did pull it off in the end. The main gripe I have with Gigabyte is that if memory training fails, the only way out is to clear CMOS and reset EVERYTHING. When the ASUS boards failed memory training, they would fall back to JEDEC settings and get you back into the BIOS with all your settings and profiles intact so you could continue troubleshooting.

Indiana_Krom
Jun 18, 2007
Net Slacker

Klyith posted:

Personally I have massive doubts about pretty much all blanket generalizations about brand quality, at least once you go above the garbage level.

I've seen far too many times where people gaslight themselves or others into believing that component X is or is not responsible for their problems based on internet reputation. Dopes out there still saying "AMD drivers bad" if someone has any type of issue and happens to have an AMD GPU, even if the issue is unrelated. And I've seen too many "tier lists" that are junk in one direction or another.


Right now I'd put a lot of weight for AM5 problems on the first-gen memory platform thing, rather than any brand. DDR4 and 3 were the same way. And because that's going to have CPU, mobo, and ram all involved it's easy to get "this one works fine, this one doesn't" and have that be an interaction rather than one particular component.

IIRC there was at one point way back when a user-submitted hard drive reliability survey that was tied to some old enthusiast site. IIRC it was extremely difficult to get unbiased results because most people only reported when they had failures. So NAS & server grade drives looked extremely good, better than they really were, because they were more likely to be used by hyper-enthusiasts who updated their reports every year regardless.

I think the only really good conclusion it could ever show was "yep, IBM deathstars fail a whole lot". Which we probably didn't need a survey to figure out. :v:

There are also products where a particular model or generation is just plain bad. It happens to every manufacturer; sometimes it is impossible to predict a flaw or failure mode that won't show up in normal testing and validation. Take the Seagate ST3000DM001: I had one fail at just over 3 years, and that particular model was bad enough that there was even a class action lawsuit about it. I keep CrystalDiskInfo running in the background scanning every drive's SMART stats once a day, and it caught the impending failure of mine early, so I was able to back up everything and replace the drive (with another Seagate that is still here working totally fine). I've had basically every brand of hard drive, and I've seen them all fail in one way or another, enough to know there is no substitute for backups and health monitoring.

Unfortunately, motherboards and video cards typically don't have failure modes as graceful or predictable as mechanical hard drives, and they are much lower volume products, so it is harder to tell the difference between a manufacturing defect in a single board and a straight-up design flaw that affects all of them to varying degrees, because you won't have Backblaze publishing statistics showing something obvious like "47% of these failed during the third year of operation".

Indiana_Krom
Jun 18, 2007
Net Slacker

Prescription Combs posted:

Would be nice to see the total number of each drive per manufacturer in that graph, too.

e: currently agonizing over which drives to get for a new NAS build. gently caress, it helps to actually click the link.

Also, the steady climb in Seagate and Toshiba drive failures is mostly explained by those being the oldest drives in service; increasing failure rates are expected as many of them age out. That chart is not a good one to post out of context; the whole article is filled with charts much like it, and you basically have to read the whole thing to draw any sound conclusions from it.
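This is also why Backblaze reports annualized failure rate rather than raw failure counts: failures divided by accumulated drive-days of exposure, scaled to a year. A sketch with hypothetical fleets showing how the same failure count means very different things depending on exposure:

```python
# Backblaze-style AFR: failures per drive-year of service, as a percentage.
def annualized_failure_rate(failures, drive_days):
    drive_years = drive_days / 365
    return failures / drive_years * 100

# Hypothetical: 20 failures among 10,000 drives observed for ~73 days each
short_exposure = annualized_failure_rate(20, drive_days=10_000 * 365 // 5)

# Hypothetical: 20 failures among 10,000 drives observed for a full year
full_year = annualized_failure_rate(20, drive_days=10_000 * 365)

# Same failure count, 5x the exposure: one-fifth the annualized rate.
```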

Indiana_Krom
Jun 18, 2007
Net Slacker
I remember my Abit Socket A board fondly. I paired it with an incredibly overclockable Athlon T-bred B 1700+ from a specific batch that, with some tweaking, I got to boot all the way up to 2600 MHz from its stock 1466 MHz, nearly doubling the clock speed. I probably could have managed to boot it at 2.8 GHz with some more tweaking (I actually ran it at 2.4 GHz rock solid stable for years, until my GPU died, which is what pushed me to build an LGA775 system around a Conroe Core 2 Duo and switch from AGP to PCIe video cards). But what was really nice about it was the nForce2 chipset's digital audio output: the realtime Dolby Digital AC3 5.1 encoding was amazing when piped over optical SPDIF into a suitable receiver, at a time when most PC speakers were cheap 2.0 or 2.1 setups at best.

Indiana_Krom
Jun 18, 2007
Net Slacker

wargames posted:

I could be wrong but isn't that Core voltage?

The SPD page is the DRAM voltage needed for each profile.

Indiana_Krom
Jun 18, 2007
Net Slacker

Combat Pretzel posted:

I’m kind of angry that most boards/BIOSes don’t feature ramping (anymore). In the past it was more common to be able to set a time constant to filter fan speed demands and respectively smooth out transients.

My Gigabyte BIOS has this as well. The actual term for it is hysteresis.

I have it disabled because I tied all my fan/pump speeds to the coolant temperature in my custom water loop, which takes 3-5 minutes to move significantly in response to load changes anyway and provides basically perfect hysteresis naturally.

Indiana_Krom
Jun 18, 2007
Net Slacker

Kibner posted:

Probably stupid question, but is there a reason to use the chipset drivers that come from the mobo manufacturer over the ones provided directly by AMD themselves?

e: for desktop motherboards

None. Use the AMD ones.

Indiana_Krom
Jun 18, 2007
Net Slacker
I've been wanting to upgrade my PC for about two years now, and was really close to settling on a 7800X3D, since it has most of the marks of a platform with legs. I was really tempted a weekend ago when I saw RTX 4090 Founders Editions in stock, but just shoving one into my existing system would do almost nothing because I'm already CPU limited in pretty much every RTX game I have. But the first-generation DDR5 platform issues, plus the incidents of CPUs, motherboards, and GPUs burning up or exploding, have kept me from building the rest of the system up to properly feed a 4090.

Indiana_Krom
Jun 18, 2007
Net Slacker
RT global illumination and path tracing are just nice because games with them can do dynamic lighting with day/night cycles and the lighting never breaks. In pure raster games, anything not pre-baked and fixed in the dynamic cycle will eventually fail and look bad/wrong, and even plenty of the pre-baked/fixed stuff will still crumble completely the moment the player interacts with it or looks at it from a different angle.

Indiana_Krom
Jun 18, 2007
Net Slacker

CaptainSarcastic posted:

*raises hand*

I don't know what the percentage would be, but I think there are a fair number of enthusiasts or whatever term you want to use who are very averse to jumping on the start of a new generation of hardware, especially memory standards. I remember very clearly how early DDR2 just sucked, and didn't jump onto DDR3 or DDR4 until they were quite far into their lifetimes. I'm sure I'll end up with a DDR5 machine at some point, but upgrading to top-of-the-line DDR4 made way more sense to me.

That is the whole reason I haven't gone for a 7800X3D: never buy the first-generation platform of a new memory standard if you can hold out. (It also helps that by the time the second-generation CPUs/motherboards for the "new" memory standard come out, it will usually have lost most if not all of its price premium.) My existing system (Intel 9900K + 3080 Ti) has had a pretty good run coming up on 5 years, but I'm already deep into obviously-CPU-limited territory, so this system is pretty much tapped out for upgrade potential, which is why I passed on a 4090 Founders Edition when I had plenty of chances a few weeks back. The next upgrade is going to be a complete new build from the ground up.


Indiana_Krom
Jun 18, 2007
Net Slacker

Kibner posted:

Haven't read the rest, yet, but heat produced is a function of how much power is drawn. A quick google suggests the 7800x3d using far less power than the 8700 when doing something like rendering and also probably less while gaming. If anyoen can find some better reviews, please let me know:

- 8700: https://www.tomshardware.com/reviews/intel-coffee-lake-i7-8700k-cpu,5252-12.html
- 7800x3d: https://www.tomshardware.com/reviews/amd-ryzen-7-7800x3d-cpu-review/2

e: that 7800x3d review does a handbrake test specifically and it draws a load of 76w, which is only very slightly more than the 8700's 71w while gaming. I think it is safe to say that the 7800x3d will be drawing less power than your old 8700 in most cases

Power divided by die area, really. Modern chips run incredibly hot because they are pushing significantly more power than older chips through much smaller dies.

Basically, the more power you put through less area, the closer your CPU comes to an incandescent light bulb filament.
