|
In my experience, RAID done in the BIOS has the same resource utilization as pure software methods, but has the immediate downside of often disabling/hiding the SMART status of the attached drives, so the OS can't sound the alarm when one of them is degrading or about to fail. (Not that SMART always catches a drive on its way out; the sudden catastrophic failures usually evade it. On the other hand, it will alert you for basically every graceful failure mode, and often signal it days/weeks before you would begin to lose data, allowing ample time to back up and replace the drive.)
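If you'd rather not trust the BIOS at all, the core of SMART-based alerting is embarrassingly simple: watch a handful of raw counters and scream the moment any of them go nonzero. A toy Python sketch (the attribute IDs are the standard SMART ones, but the sample readout is made up; in real life you'd feed it values from smartctl or CrystalDiskInfo):

```python
# SMART attributes that usually precede a graceful mechanical failure.
# These IDs are standard; the sample values below are invented.
WATCHLIST = {
    5:   "Reallocated Sector Count",
    187: "Reported Uncorrectable Errors",
    197: "Current Pending Sector Count",
    198: "Offline Uncorrectable",
}

def degrading(raw_values: dict) -> list:
    """Return human-readable warnings for any nonzero watchlist attribute."""
    return [
        f"{WATCHLIST[attr]} = {raw_values[attr]}"
        for attr in WATCHLIST
        if raw_values.get(attr, 0) > 0
    ]

# Sample readout: one pending sector, everything else clean
sample = {5: 0, 187: 0, 197: 1, 198: 0}
warnings = degrading(sample)
assert warnings == ["Current Pending Sector Count = 1"]
```

Obviously real monitoring tools do more (tracking rate of change, thresholds per attribute), but nonzero-on-the-watchlist is already enough to buy you backup time for most graceful failures.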
|
# ¿ Aug 1, 2021 15:33 |
|
|
Yeah I had a 7700k, it poo poo itself in Shadow of the Tomb Raider pretty badly; a 9900k does that game a lot better. IIRC it was approaching something like a 3x improvement in the 95th-percentile frame rate in the benchmark. Far Cry 3/4/5/etc on the other hand showed practically no difference when isolating the CPU; it's all single core/IPC, and that isn't all that different from the 7700k to the 9900k.
|
# ¿ Jan 10, 2022 00:16 |
|
denereal visease posted:oh wow, didn't realize ASUS was so thoroughly stunting on their competition; it must be the over-the-top gamer aesthetic / ROG poo poo Posting this using a system with a Gigabyte board, which IMO has been a great sales pitch to go back to ASUS next time I build a PC.
|
# ¿ Apr 8, 2022 21:51 |
|
There are diminishing returns and just plain limitations on how much heat can reliably be removed in any given form factor. Like a full tower case can handle a lot, but stuff 1000w into one without significant attention to cooling and there will be major thermal problems. Honestly even 500w is already stretching the limits of heat rejection from the average tower case. And then you have to deal with thermal density, GPUs have it fairly easy because they have thousands of cores that split the load so each individual core isn't dealing with as much heat and it is spread more evenly over the entire chip. CPUs on the other hand have fewer cores that are much higher performance, so the heat and power is concentrated in those cores which are very small areas on the die so it is a huge challenge to get the heat away from those hot spots.
|
# ¿ Apr 17, 2022 15:31 |
|
Kibner posted:Yes. I am also checking the windows event log for memory errors. Big assumption there that the firmware even reports ECC errors to the OS. Practically no consumer motherboards do, last time I checked (which I'll admit was a year or so ago). More often than not, the only indicator Windows will give you of memory errors is a BSOD.
|
# ¿ May 2, 2022 02:56 |
|
Paul MaudDib posted:The general advice is don't update unless you have a specific reason. But Cygni is correct that AMD is churning AGESA a lot which requires updated BIOS to get it. If you want the new stuff you'll have to update. But if you don't need new stuff then don't update, because there's always the possibility of introducing new bugs/regressions, and actually this happens quite often on AMD because of how hard AMD is churning the AGESA code. Partners are having to patch around new bugs in AMD's stuff and then AMD changes it again and breaks everything. The "socket support for 5 years" stuff is very difficult on both AMD and partners. Also worth noting that for instance if you are running windows, the necessary microcode updates to protect you from the Spectre-vulnerability-of-the-week are loaded at runtime by the OS as long as you are up to date on windows update. Doing it in a BIOS update doesn't change the level of protection you get, it just makes it kick in even before the OS loads, which for Spectre/Meltdown/etc is basically entirely irrelevant since if they have code that could exploit those running BEFORE the OS loads it would mean your system is already completely owned.
|
# ¿ May 7, 2022 12:27 |
|
hobbesmaster posted:In theory that could match the performance of fast, tuned DDR4. In bandwidth maybe, but DDR4 will crush it in latency by a huge margin.
|
# ¿ May 23, 2022 02:50 |
|
hobbesmaster posted:So, I’m guessing 280mm asetek means one of the big Arctic ones (or whoever else asetek sells that to). This is “special cooling” in the sense that it will not fit in most cheaper cases and has a significant performance gain over any air cooler. I would’ve chosen “exotic” for what videocardz was driving at, which probably means something sub ambient. 280mm AIOs don't actually perform all that impressively over a 140mm heat pipe tower cooler. The bigger 140mm tower coolers can outperform even a 360mm AIO in the right case setup. AIOs are more about flexibility than performance; you can often still mount one in cases that can't handle a 170mm tall tower cooler.
|
# ¿ May 25, 2022 02:42 |
|
CaptainSarcastic posted:Of course this requires using a flashlight because the inside is so dark that all my components are lurking in shadow.
|
# ¿ May 28, 2022 22:49 |
|
I haven't heard of the removal of the gsync module in actual gsync monitors yet; I think they are still there in the few left on the market. Though maybe with the new VESA standard actually having testing and certification, the gsync module can finally be put out to pasture.
|
# ¿ Aug 29, 2022 01:14 |
|
The only reason someone would experience anxiety about applying thermal paste is because they have gotten caught up in the incredibly stupid internet politics of applying thermal paste. (More than you strictly need always performs better than the air gaps you get from not using enough.)
|
# ¿ Sep 3, 2022 02:05 |
|
You can use isopropyl alcohol and a simple rag to clean and break up most thermal pastes. The basic ~70% stuff won't hurt your hands over the couple of minutes it takes to completely dissolve any leftover paste, and then you can wash it off with regular soap and water.
|
# ¿ Sep 3, 2022 03:44 |
|
SwissArmyDruid posted:
I do have a bottle of that on my shelf; it is extremely effective against pretty much any thermal grease, paste, pads, powder, etc. Dab a bit on a rag and it will erase Arctic Silver in a single wipe. It also stinks, and you could probably run a lawn mower engine with it or blow up a poorly ventilated room. Although if it comes to that, gasoline is also a very powerful solvent that will cut through most thermal paste instantly (and as a bonus, is incredibly cheap compared to the average petroleum solvent).
|
# ¿ Sep 3, 2022 21:00 |
|
SwissArmyDruid posted:sounds less like you have "Goo Gone" and more "Goof Off", different brand, different formulation. Nah, it's Goo Gone from Magic American Corporation, it even says "citrus power" on the label. Read the back label; it still contains petroleum distillates. Although Goof Off would also work just as well.
|
# ¿ Sep 3, 2022 21:47 |
|
Slowing down the GPU so the CPU doesn't have to work as hard is one way of reducing the CPU temperature I guess.
|
# ¿ Sep 3, 2022 23:15 |
|
Nothing craters your performance with expensive fast RAM more than swapping to disk because you couldn't afford enough of it.
|
# ¿ Sep 10, 2022 23:04 |
|
Palladium posted:AKA by far the best utility software ever made by a mobo manufacturer The reason is that its code was originally an independent piece of software called RivaTuner (hence why MSI Afterburner to this day also asks to install RivaTuner Statistics Server for the full monitoring/overlay/OSD functionality). Although one should note that EVGA Precision X1 is also a branch of RivaTuner, so the lineage is not a guarantee of quality.
|
# ¿ Sep 18, 2022 02:21 |
|
TheFluff posted:The 8700K was one of those CPUs that really benefited from delidding, it's completely thermally bottlenecked by the internal thermal paste. You basically had to delid it if you wanted to overclock. I delidded mine and put liquid metal on it back in early 2018, haven't touched it since and the temps seem unchanged. I'm not really feeling a need to upgrade it though, and I'm not sure what I'll do with it if I do. Probably hard to sell, I didn't glue the IHS back on so I really don't want to take it out of the socket. The 7700k was the same: I delidded mine and threw on some liquid metal, then lightly glued the IHS back on. I think it was a 15-20C across-the-board drop in load temperatures (not much change at idle for obvious reasons). But the limit of my overclocking on that CPU was just making the all-core turbo match the single-core turbo, so it cranked to 4.5 GHz on all cores at load and had the thermal headroom to get away with it. It also dropped the peak power consumption from 125w to 120w in Prime95 small FFT AVX: literally 5w of leakage power reduction just from lowering the temperature.
|
# ¿ Sep 25, 2022 21:33 |
|
Cool Thing! It's the new Hot Thing.
|
# ¿ Oct 13, 2022 00:05 |
|
Far Cry 3/4/5 (don't have 6) are heavily single thread dependent and don't scale well, or sometimes at all, with more cores. And they probably also hit memory bandwidth/cache really hard, especially the encrypted DRM flavors of the later games (not only is the engine poorly optimized, but it has to have its memory and executable constantly decrypted/encrypted on the fly in software). Like seriously, Far Cry 3 doesn't perform any better on a 9900k/RTX 3080 Ti than it does on a 2700k/GTX 680. It is really odd given that Far Cry 2/Dunia 1 was one of the first game engines to show a major benefit from going to 4 cores instead of 2.
|
# ¿ Oct 23, 2022 12:20 |
|
Last time I went PC building, I found most of my stuff on Newegg and used that to get it all organized, then used Google to find the majority of the parts on Amazon/B&H/etc and bought them there. Because Amazon's built-in search is the most garbage search in the known universe. I don't know how Amazon does it, really: you search for, I donno, say stationery supplies, and the search shows you car parts and apparel instead, because you might need a car and clothes to mail a letter or some poo poo. It's so loving random.
|
# ¿ Oct 28, 2022 19:03 |
|
orcane posted:Sorry, the 50 cents of BoM was too much in our $150 entry level mainboard That's 50 cents not going into some executive's wallet.
|
# ¿ Dec 25, 2022 16:03 |
|
MicroUSB wasn't used for charging laptops or anything else big because of its maximum power limitations: you can't charge a laptop on 5 watts. USB-C power delivery can handle up to 100W, and the newer revisions even have a 240W mode which is why it can charge laptops or other high power devices.
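The arithmetic is just watts = volts x amps; quick sketch with the headline profiles (exact current limits depend on the cable and charger, so treat these as ballpark):

```python
# Power = volts x amps: why micro-USB couldn't charge laptops but USB-C
# PD can. Profiles are the commonly cited ones; real-world current limits
# vary by cable/charger, so these are ballpark figures.
profiles = {
    "micro-USB (BC 1.2)":    (5.0, 1.5),   # ~7.5W ceiling in practice
    "USB-C PD (20V/5A)":     (20.0, 5.0),  # 100W, the classic PD maximum
    "USB-C PD EPR (48V/5A)": (48.0, 5.0),  # 240W Extended Power Range mode
}
for name, (volts, amps) in profiles.items():
    print(f"{name}: {volts * amps:.1f}W")
```

Even a generous micro-USB charger is an order of magnitude short of what a gaming laptop pulls, which is the whole story in one line of multiplication.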
|
# ¿ Jan 15, 2023 20:59 |
|
Insurrectionist posted:Babby's first CPU question here. I just built my first PC in like 15 years today after only playing on laptops and not really doing any overclocking ever. I enabled DOCP in the BIOS on my modest 5600X during setup, and I notice my clock speed is fluctuating like crazy. Just sitting on desktop transferring files and occasionally downloading/installing poo poo so far. Is this something I should care about/fix or just natural for overclocked CPUs?
|
# ¿ Jan 28, 2023 23:31 |
|
Klyith posted:You can't compare that across apps. Ah, I didn't know what the minimum frequency on Ryzen is, my only experience is my intel cpu that idles at 800 MHz.
|
# ¿ Jan 29, 2023 00:05 |
|
The base clock on Intel CPUs is just the guaranteed clock speed within the official TDP spec: if a CPU is rated at 65w or 95w, the base clock is the minimum clock speed it is guaranteed to sustain while constrained to that power. SpeedStep has been idling down to 800 MHz since the days of the Core 2s. AMD is almost certainly doing something similar (and also straight up shutting down entire cores).
|
# ¿ Jan 29, 2023 01:42 |
|
It is the same thing you can see that 3080 Ti doing in my attachment on the desktop: as soon as I launch a game it jumps to almost 2 GHz, but it can bounce around anywhere from 210 MHz to 2 GHz depending on the load/power limits, even though the base clock is 1365 MHz. I've literally never seen it run at the base clock; even at a 50% power limit it still hovers in the 1600 MHz range. The main point is that at these super low idle speeds the GPU is operating at only 0.768v, and the CPU is similarly low voltage at 800 MHz. It is all for power savings at idle, when there is literally no point in blasting away at whatever base clock it has; either they idle down to a really low state or even power gate portions of the chip off entirely, which is how even insanely high power modern desktop chips idle down to single digit watts.
|
# ¿ Jan 29, 2023 02:12 |
|
VorpalFish posted:Does it? Don't think Ryzen does shared L1/L2 so when you disable the cores in the CCD you should be losing half the L1 and L2 cache as well, no? Yeah, I'm confused here because the 7950X3D having double the L1 and L2 cache makes sense when you consider it also has double the cores. So correct me if I'm wrong but I was assuming the 7800X3D will have the same ratio of L1+L2 cache per core as the 7950X3D.
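For what it's worth, the arithmetic works out if you assume the per-core figures stay fixed (to my understanding Zen 4 is 64 KB of L1 plus 1 MB of L2 per core, none of it shared, but treat those as from-memory numbers):

```python
# Per-core private cache on Zen 4 (figures from memory, not a datasheet):
L1_PER_CORE_KB = 64    # 32 KB instruction + 32 KB data
L2_PER_CORE_KB = 1024  # 1 MB L2 per core

# Since nothing here is shared, halving the cores halves the totals but
# leaves the per-core ratio identical.
for name, cores in [("7950X3D", 16), ("7800X3D", 8)]:
    total_l1 = cores * L1_PER_CORE_KB
    total_l2 = cores * L2_PER_CORE_KB
    print(f"{name}: {total_l1} KB L1, {total_l2 // 1024} MB L2, "
          f"{L1_PER_CORE_KB + L2_PER_CORE_KB} KB L1+L2 per core")
```

So yes: double the cores, double the L1/L2 totals, same cache per core either way.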
|
# ¿ Apr 4, 2023 22:57 |
|
Twerk from Home posted:Why does the AV1 have to be hardware? Why not just buy a CPU with more cores then games can use anyway and do encoding in software? Software encoders look significantly better than any of the hardware encoders if you can throw enough CPU at them. Software encoders also get better and better over time. Power. An ASIC will do it more reliably for 1000x less energy cost, quality is secondary.
|
# ¿ Apr 8, 2023 16:03 |
|
Cygni posted:It's kinda like the primary US airlines. Everyone has one they just fuckin hate, but its different person to person. gently caress United btw. I haven't had that many motherboards die on me, but I assume it is just like hard drives: some people swear by Western Digital, some people swear by Seagate, but I've seen them all fail and have zero brand loyalty because of it. For example, my current backups are two 16 TB drives loaded identically; one is a Seagate, the other is a Western Digital. My last Asus board couldn't do XMP even though the memory I had in it was on the QVL; the only difference is the QVL was for the 64 GB configuration with all 4 DIMMs populated and I only had 32 GB with two DIMMs populated (and the board was a T-topology so it didn't matter where the sticks were, but I tried both/all combinations anyway!). Currently using a Gigabyte board that took considerable fighting to get XMP working, but it did pull it off in the end. The main gripe I have about Gigabyte is that if memory training fails, the only way out is to clear CMOS and reset EVERYTHING. When the Asus boards failed memory training, they would fall back to JEDEC settings and get you back into the BIOS with all the settings and profiles intact so you could continue troubleshooting.
|
# ¿ Apr 12, 2023 02:54 |
|
Klyith posted:Personally I have massive doubts about pretty much all blanket generalizations about brand quality, at least once you go above the garbage level. There are also products where a particular model or generation is just plain bad; it happens to every manufacturer, and sometimes it is impossible to predict a flaw or failure mode that won't show up in normal testing and validation. Take the Seagate ST3000DM001: I had one fail at just over 3 years, and that particular model was bad enough there was even a class action lawsuit about it. I keep CrystalDiskInfo running in the background scanning every drive's SMART stats once a day, and it caught the impending failure of mine early, so I was able to back up everything and replace the drive (with another Seagate that is still here working totally fine). I've had basically every brand of hard drive, and I've seen them all fail in one way or another, enough to know there is no substitute for backups and health monitoring. Unfortunately motherboards and video cards typically don't have failure modes as graceful or predictable as mechanical hard drives, and they are much lower volume products, so it is harder to tell the difference between a manufacturing defect in a single board and a straight-up design flaw that affects all of them to varying degrees, because you won't have Backblaze publishing statistics showing something obvious like "47% of these failed during the third year of operation".
|
# ¿ Apr 22, 2023 19:46 |
|
Prescription Combs posted:Would be nice to see the total number of each drive per manufacturer in that graph, too. Also, the steady climb in Seagate and Toshiba drive failures is mostly explained by those also being the oldest drives in service, so increasing failure rates are expected as many of them are just aging out. That chart is not a good one to post out of context; the whole article is filled with charts much like it, and you basically have to read the whole thing to draw any sound conclusions from it.
|
# ¿ Apr 23, 2023 02:04 |
|
I remember my Abit Socket A board fondly. I paired it with an incredibly overclockable Athlon T-bred B 1700+ chip from a specific batch that, with some tweaking, I got to boot all the way up to 2600 MHz from its stock 1466 MHz, nearly doubling the clock speed. I probably could have managed to boot it at 2.8 GHz with some more tweaking (I actually ran it at 2.4 GHz rock solid stable for years, until my GPU died, which is what gave me the push to build an LGA775 system around a Conroe Core 2 Duo and switch from AGP to PCIe video cards). But what was really nice about it was the nForce2 chipset's digital audio output: the realtime Dolby Digital AC3 5.1 channel encoding was amazing when piped over optical SPDIF into a suitable receiver, at a time when most PC speakers were cheap 2.0 or 2.1 setups at best.
|
# ¿ Apr 24, 2023 02:53 |
|
wargames posted:I could be wrong but isn't that Core voltage? The SPD page is the DRAM voltage needed for each profile.
|
# ¿ Apr 28, 2023 00:08 |
|
Combat Pretzel posted:I’m kind of angry that most boards/BIOSes don’t feature ramping (anymore). In the past it was more common to be able to set a time constant to filter fan speed demands and respectively smooth out transients. My Gigabyte BIOS has this as well; the term it uses for it is hysteresis. I have it disabled because I tied all my fan/pump speeds to the coolant temperature in my custom water loop, which takes 3-5 minutes to move significantly with changes in load anyway, so it provides basically perfect smoothing naturally.
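For anyone curious what that filtering actually looks like, here is a rough sketch: run the temperature through an exponential moving average (the "time constant") and only act when the filtered value escapes a deadband (the "hysteresis"). All the constants are invented for illustration; real BIOSes expose some subset of these knobs under various names.

```python
# Toy fan controller: EMA smoothing plus a deadband, so short load spikes
# don't spin the fans up and down. Constants are made up for illustration.
class SmoothedFanCurve:
    def __init__(self, alpha=0.1, deadband_c=1.0):
        self.alpha = alpha          # lower alpha = slower, smoother response
        self.deadband = deadband_c  # ignore filtered changes smaller than this (C)
        self.filtered = None
        self.last_acted = None

    def update(self, temp_c):
        """Feed one temperature sample; return fan duty cycle in percent."""
        if self.filtered is None:
            self.filtered = self.last_acted = temp_c
        else:
            self.filtered += self.alpha * (temp_c - self.filtered)
            # only move the fans once the filtered temp escapes the deadband
            if abs(self.filtered - self.last_acted) >= self.deadband:
                self.last_acted = self.filtered
        # simple linear curve: 30% duty at 30C ramping to 100% at 60C
        duty = 30 + (self.last_acted - 30) * (70 / 30)
        return max(30.0, min(100.0, duty))

curve = SmoothedFanCurve()
print(curve.update(30.0))  # settles at minimum duty
print(curve.update(80.0))  # a one-sample spike moves the fans only slightly
```

A big coolant loop does the same job for free: the water's thermal mass is the low-pass filter.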
|
# ¿ Jun 9, 2023 22:07 |
|
Kibner posted:Probably stupid question, but is there a reason to use the chipset drivers that come from the mobo manufacturer over the ones provided directly by AMD themselves?
|
# ¿ Jun 13, 2023 00:03 |
|
I've been wanting to upgrade my PC for about two years now, and was really close to making up my mind with a 7800X3D since it has most of the marks of a platform that should have legs. I was really tempted a weekend ago when I saw RTX 4090 Founders Editions in stock to grab one, but just shoving it into my existing system would do almost nothing because I'm already CPU limited in pretty much every RTX game I have. But the whole first-generation DDR5 platform issues, plus the incidents of CPUs, motherboards, and GPUs burning up or exploding, have kept me from actually building the rest of the system up to properly feed a 4090.
|
# ¿ Jul 3, 2023 19:11 |
|
RT global illumination and path tracing are just nice because games with them can do dynamic lighting with day/night cycles where the lighting never breaks. In pure raster games, anything not pre-baked and fixed in the dynamic cycle will eventually fail and look bad/wrong, and even plenty of the stuff that is pre-baked/fixed will still crumble completely the moment the player interacts with it or looks at it from a different angle.
|
# ¿ Jul 3, 2023 19:58 |
|
CaptainSarcastic posted:*raises hand* That is the whole reason I haven't gone for a 7800X3D: never buy the first generation platform of a new memory standard if you can hold out. (It also helps that usually by the time the second generation CPU/motherboards come out for the "new" memory standard it will have lost most if not all of its price premium.) My existing system (Intel 9900K + 3080 Ti) has been a pretty good run coming up on 5 years, but I'm already deep into obviously CPU limited land so this system is pretty much tapped out for upgrade potential which is why I passed on a 4090 founders edition when I had plenty of chances a few weeks back. Next upgrade I'm going to need a complete new build from the ground up.
|
# ¿ Jul 23, 2023 21:46 |
|
|
Kibner posted:Haven't read the rest, yet, but heat produced is a function of how much power is drawn. A quick google suggests the 7800x3d using far less power than the 8700 when doing something like rendering and also probably less while gaming. If anyone can find some better reviews, please let me know: Power divided by die area. Modern chips run incredibly hot because they are using significantly more power than older chips and pumping all of it through much smaller dies. Basically the more power you use spread out over less area, the closer your CPU comes to an incandescent light bulb.
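Quick back-of-envelope on the heat flux point; the die sizes and power draws below are rough figures from memory rather than datasheet values, the ratio is the point:

```python
# Heat flux in W/cm^2 is just power divided by die area.
# Numbers here are rough from-memory figures, not datasheet values.
chips = {
    # name: (watts, die area in cm^2)
    "older 90nm desktop CPU": (90, 1.1),
    "modern desktop CPU":     (250, 0.7),
}
for name, (watts, area_cm2) in chips.items():
    print(f"{name}: {watts / area_cm2:.0f} W/cm^2")
```

Even with made-up-but-plausible numbers, the modern chip pushes several times the flux through the die, before you even account for the hot spots being concentrated in a few tiny cores.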
|
# ¿ Aug 3, 2023 23:05 |