v1ld
Apr 16, 2012

I'm planning to upgrade from this 3770k to a Zen 3 12/16 core desktop part when that becomes available, hopefully later this year. But this 7+ year old system is starting to creak a bit at the seams and I was wondering if it makes sense to build a new system with an AM4 board and a cheaper current Zen 2 cpu.

How future proof are current AM4 boards? Do we expect to see new boards with significant improvements come out with Zen 3 desktop cpus? Is it better to just wait it out?

v1ld
Apr 16, 2012

Klyith posted:

What people mean by "future proof" is very nebulous, it would help if you were more specific about what you want. Your ivy bridge system lasted a good long time: was it just the speed of the CPU that kept it fresh, or was it upgrades you added along the way?

Good question. I mean the ability to add storage/PCIe cards (GPU/wifi). I do not expect to change out CPUs or memory, since I prefer to buy a good, high-performing chip that will last me a few years over more frequent upgrades. PCIe 4 and M.2 will be the biggest steps up.

After reading your summary, waiting on the B550 boards seems like the right call. So I'll either look at a cheap CPU/mobo combo that performs somewhat better than the 3770k/Z77 I have now, or see how to cheaply address its current issues (which probably makes the most sense).

Thanks for the summary.

e: Out of curiosity: is there a cheap mobo to pair with a 1600 AF or 3300X that could tide me over till a 16-core Zen 3 part is out? For below $250 say, with PCIe 3, no M.2 needed. Asking because that 16-core part may be a lot further away than the end of this year.

v1ld fucked around with this message at 15:26 on Apr 27, 2020

v1ld
Apr 16, 2012

Klyith posted:

PCIe gen 4 probably will not have any real impact over the life of a system -- today's high-end GPUs have barely detectable performance loss from being on a PCIe gen 2 connection, and the same thing was true of 1->2.

And note that you can get all the IO you'd ever need (and PCIe 4) with X570 boards. They're just fairly expensive, the good X570s are $200 and up. But if you get a high-core chip you need to pay attention to VRM quality, so a cheap board isn't a good idea anyways. (Don't get a 16 core, it isn't "future proof". Normal poo poo isn't gonna need 16 cores anytime soon.)

I did a bunch more reading and only just realized that it's not a given that Zen 3 will use the AM4 socket. Expected and not contradicted by AMD, but not announced either.

Sounds best to just wait and keep this system going till there's clarity. Thanks for the advice/info!

v1ld
Apr 16, 2012

Ok Comboomer posted:

I guess I should’ve phrased this as a question.

What do? I figure I’d actually be pretty happy with something like a 3600X/3700X (or hell, even like a 3100/3300X) a year or two from now.

I am in the same boat - 3600 now, upgrade to AM4 large-core when AM5 comes out - and went with an X570 (still on its way).

There's a new MSI X570 board that's supposedly out this month at $200 that seems like a good all-rounder: the X570 Tomahawk. But that's a lot more than $80.

v1ld
Apr 16, 2012

Doing the mental run-through of an upgrade path for all CPU gens, even if you're size-limited, doesn't make this seem that complicated.

I don't know what's stored in the BIOS for each supported CPU gen, but assume it's roughly the same size for each gen and that a significant subset of BIOSes cannot store more than 3 gens' worth of CPU-specific data. Call a BIOS image that supports gens 1/2/3 a 123 BIOS. Then you would need the following BIOS permutations to allow upgrading from a 123 BIOS to a gen 4 chip:
134
234

I.e., say you're on a gen 1 chip and want to go to a gen 4 chip. You upgrade to the 134 BIOS and then plop in the gen 4 chip. After which you can update to the latest and greatest 234 BIOS going forward.

The 234 will be in the B550/X570, so that's one whole extra BIOS download, 134, you have to offer before folks can upgrade to a new chip.

The real concern is going to be users shooting themselves in the foot I guess.

E: Put another way, everyone that's not on a gen 1 chip just updates to the 234 BIOS, which is the default BIOS, and moves on with life. If you're on a gen 1 chip you have to use a 134 BIOS as a special one-time upgrade BIOS, then go to the normal 234 BIOS like everyone else. Only the gen 1 folks have to do anything special. Lots and lots of assumptions, but I still think AMD is being unnecessarily silly here.
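
E2: Here's a tiny Python sketch of the argument, in case it helps - the 3-gens-per-image limit and the 123/134/234 image names are my own assumptions from the speculation above, not anything AMD has published:

```python
# Sketch of the upgrade-path argument above (all assumptions, nothing official):
# a BIOS image only has room for ~3 generations' worth of CPU support, boards
# start today on a "123" image, and every owner should be able to reach gen 4.

def can_reach_gen4(current_gen, installed, published):
    """A user can flash any published image that still supports their current
    chip; they reach gen 4 if one of those images also supports gen 4."""
    flashable = [img for img in published | {installed} if current_gen in img]
    return any(4 in img for img in flashable)

installed = frozenset({1, 2, 3})          # the "123" BIOS everyone starts on
published = {
    frozenset({2, 3, 4}),                 # "234": the default image going forward
    frozenset({1, 3, 4}),                 # "134": the one extra bridge image
}

for gen in (1, 2, 3):
    print(f"gen {gen} owner can get to gen 4: {can_reach_gen4(gen, installed, published)}")
```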

v1ld fucked around with this message at 01:58 on May 8, 2020

v1ld
Apr 16, 2012

And why not allow board vendors to handle this? That's the bit I find weird - that AMD would prevent them from letting this be a natural competitive differentiator in the market.

The info out there right now on the topic is either secondhand, albeit through usually reliable sources, or from that one slide deck. I'd like to see something more direct from AMD - maybe it's out there and I just didn't see it.

v1ld
Apr 16, 2012

Some Goon posted:

There's a possibility this is coming from the board partners, given that they want to sell new boards. It's also possible AMD doesn't want bad press from half-supported setups. And, on the third hand, AMD might just want to sell chipsets.

Here it is direct from AMD, but it's no different than what's being reported.

Thanks for the link.

The post itself is correct - the socket does continue to work with Zen 1-3 in 2020, as they said it would. But whether you can use a given CPU on your board depends on the chipset, which they never said anything about. Ugh. I don't care whether it's the socket or the chipset that's forcing me to change boards, I care that I have to change boards.

That they have to release a post like this, which passive-aggressively points out that they're living up to the letter of what they said even if most customers understood it differently, is not a good thing.

Looking at that slide again, is this an accurate summary of how many CPU gens you could run for a given chipset?
_3__: 2 CPU gens (Ryzen 1000/2000)
_4__: 3 CPU gens (Ryzen 1000/2000/3000)
_5__: 3 CPU gens (Ryzen 2000/3000/4000 - though B550 can't run 2000)

But that paints a rosier picture than is actually the case: given when the _4__ and _5__ chipsets were released, you are effectively looking at being able to run the current and next Ryzen CPUs. I doubt many bought an X470 to run a Ryzen 1000 CPU.

That's just not impressive at all when you're releasing posts talking up the compatibility of the AM4 socket.
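
E2: For reference, here's my reading of that slide as a quick sketch (not an official AMD table; "4000" here means the upcoming Zen 3 desktop parts):

```python
# My reading of the slide: chipset family -> Ryzen desktop series it can run.
# Note B550 drops 2000 support even though the rest of the 500-series column
# covers 2000/3000/4000.
SUPPORT = {
    "X370/B350": ("1000", "2000"),
    "X470/B450": ("1000", "2000", "3000"),
    "X570":      ("2000", "3000", "4000"),
    "B550":      ("3000", "4000"),
}

for chipset, series in SUPPORT.items():
    print(f"{chipset}: {len(series)} gens ({'/'.join(series)})")
```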

v1ld fucked around with this message at 19:54 on May 9, 2020

v1ld
Apr 16, 2012

So reuse in the sense of "AM4 through 2020" is good for AMD and motherboard vendors being able to use the same socket and associated design/manufacturing/tooling across generations. It's a marvelous engineering achievement to have had that reusability as a design goal and to make it work in the real world while still being able to crank out performance, no doubt of that. It almost certainly reduced the cost paid by the consumer and gave AMD and the board vendors faster time to market. But it's also probably not what most customers heard when "2020" was originally announced.

I'm not personally affected by the incompatibility since I ordered an x570 board for the usual wrong reasons - and in spite of better and more sensible advice from goons - but I'm grateful for my reliably bad decision making since the whole point was to get a 3600 now and a better chip in a few years after they move to a new socket.

v1ld
Apr 16, 2012

The machine I remember most fondly is a dual-processor Pentium Pro board that I got from work when we retired it from being a server, late '90s. It replaced some x86 chip I cannot remember. I overclocked those 200MHz PPros all the way to 210MHz!

The next work hand-me-down was a DEC Alpha 21064-based machine on which I installed NetBSD. Had to fix a (minor) bug in the PPP code to use it as my main box at home - I guess no one else had run 64-bit PPP at a time when there were no 64-bit Intel processors.

v1ld
Apr 16, 2012

mdxi posted:

The machines that I consider interesting and relevant to my current manycore leanings were:
  • A SPARCstation 20, with a pair of TI SuperSPARC CPU modules running at 50MHz with 1MB of L2$ each. Of course, I bought this system as even the UltraSPARCs were becoming obsolete, so I got it for about $600 instead of the $20,000 MSRP. It was my first SMP system though, and I was hooked. There was always a processor for whatever I was doing in the foreground, and one for whatever needed to run in the background! No lag or hitching! Astonishing!
  • An Intergraph TD-4, with a pair of 90MHz PPros. I picked this one up used from eBay, also at a staggering discount from its original sticker price. A pizza box machine that started its life as a CAD workstation, but I used it to continue my explorations of Linux and to download truly shameful amounts of USENET porn. Oh, and to run XMMS which basically ate one of the CPUs just playing MP3s.
  • When AMD released the Athlon MP I finally built my own multi-processor system. A Tyan mobo, a PcPowerCooling PSU, and a pair of Athlon MP 1600+s, in a bigass brushed aluminum Lian Li case. That machine lasted me for years and years. Basically until the Core 2 Duo happened -- hey, welcome back, Pentium Pro!

The PPro truly was a great processor. As you said, welcome back PPro in the Core lineup. The PPro "inherited" some of the Alpha's cool features (OOO execution!) so the Alpha lives on.

What did you run on those SPARCs? I ran Solaris 6 x86 on some Intel hardware at home but I cannot remember what it was. I doubt it was the dual-PPro box, which was Linux exclusively I'm pretty sure.

E: Or maybe it was Solaris 7 x86.

Also had one of these in a SPARC box at work, briefly: Windows on a PCI card inside a SPARC machine. Wasn't very good. https://en.wikipedia.org/wiki/SunPCi

v1ld fucked around with this message at 20:17 on May 18, 2020

v1ld
Apr 16, 2012

Anyone know what the impact of enabling Hyper-V on gaming performance is? I would have thought it would be significant, but the internet is surprisingly unclear on the topic.

Thinking of using WSL2, but don't want to have to toggle Hyper-V to game.

v1ld
Apr 16, 2012

Good to hear that the passive cost of enabling Hyper-V is low. I'm not planning to run virtualized services in parallel with games, but I was wondering about Windows itself running virtualized, which seems to be the case with Hyper-V enabled.

v1ld
Apr 16, 2012

Some Goon posted:

It's quite telling that they've been straight banned from multiple hardware-related subreddits. When Reddit is too good for you, you have problems.

Now I have to ask: which site is that?

v1ld
Apr 16, 2012

Virtualization was disabled by default in the MSI X570 Unify BIOS for this new machine. Will toggle it on this weekend. Virtualization-Based Security sounds pretty good to have.

v1ld
Apr 16, 2012

ufarn posted:

The new Windows Sandbox is a super neat feature for opening sketchy files and executables from other people, too. I didn't use Ryzen Master anyway, so not a big loss for me.

Jeez, yeah. I need to enable all of that.

Ryzen Master has been neat for seeing which cores go to sleep and what frequency they're actually running at, since HWiNFO64 and others report very different numbers from RM, but it's not worth sacrificing all those security features. Manual CPU overclocks haven't seemed useful for this 3600, PBO is good enough. Memory, on the other hand, seems worth the tuning, but RM isn't needed for it.

Speaking of, I can vouch for this Patriot Viper Steel 4400MHz memory being an incredibly good deal for OC at $120 for 16GB. Running it at 3800 CL14, 1:1 FCLK, where I simply copied Buildzoid's OC values and then tightened them following his suggestions.

v1ld
Apr 16, 2012

I ran 3DMark before enabling SVM in the BIOS, after enabling it, and then finally after enabling Hyper-V. Forgot to take a pic when SVM was disabled, but here are the results with SVM enabled in the BIOS and then with Hyper-V enabled in Windows.

1.3% slowdown in the CPU score for that tiny test in 3DMark, close to the 2% that Paul said. The graphics difference is noise - the non-SVM case was slightly lower than the SVM-enabled value. (That's an RX 5700 with the XT BIOS, not the XT hardware.)


v1ld fucked around with this message at 20:55 on May 22, 2020

v1ld
Apr 16, 2012

Klyith posted:

holy gently caress that's fast

the slightly slower 4133 is only $100, I wonder if that would do something approximately similar if they're still B-die

The 4133's timings are 19-21-21-41 while the 4400's are 19-19-19-39. I don't know enough about this stuff to know how much that plays a part.
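
For a rough feel of how much the CAS numbers matter, first-word latency in nanoseconds is just the CAS count divided by the actual memory clock (half the transfer rate). A quick back-of-the-envelope using the rated profiles above:

```python
# First-word latency in ns = CAS cycles / memory clock in MHz * 1000,
# where the memory clock is half the DDR transfer rate.
def cas_latency_ns(transfer_rate_mts, cas):
    return cas / (transfer_rate_mts / 2) * 1000

for label, rate, cas in [("4400 CL19", 4400, 19),
                         ("4133 CL19", 4133, 19),
                         ("3800 CL14 (the OC above)", 3800, 14)]:
    print(f"{label}: {cas_latency_ns(rate, cas):.2f} ns")
```

By that rough measure the rated profiles are all slower than the 3800 CL14 OC anyway; the real question is just how much headroom the chips underneath have.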


Aside: I bought 4 sticks and one was bad - it blocked POST. It was unfortunately present in all 3 combos of memory I first tried, and was also the stick I tried first by itself. That so threw me off that I thought all the other sticks were bad too, when they were actually clearing POST - I just didn't notice because the LCD POST display was slowly cycling through the temperature readout. It took a reddit post pointing that out before I went back and tested properly. An ugly couple of hours spent hoping I wouldn't have to take the huge NH-D15 off the CPU if the motherboard or CPU turned out to be the problem.

v1ld
Apr 16, 2012

Windows Sandbox is indeed neat. But while looking into enabling VBS, I seem to have stumbled into a whole host of possible improvements, so: how worthwhile is putting in a TPM 2.0 module? My MSI board has a header for it.

Microsoft lists a bunch of stuff that can use it, in addition to enabling the Hardware Security Capability.

I don't want to enable all this just to find that games run 10% slower and all overlays and injectors are now forever locked out.

v1ld
Apr 16, 2012

Paul MaudDib posted:

I wish you could get 3800 or 4000 kits with 2x16GB at a reasonable price. Running 4 sticks hurts performance more than going to a higher-rank stick. I would not mind upgrading to 32GB, I miss my old rig where I had 4x8, but fast 32GB kits are still almost $400.

Buildzoid is running those timings with 4 x 8GB sticks, which is what pushed me to get the Patriots. I'm waiting on the RMA to come back with the replacement so I can try all 4.

v1ld
Apr 16, 2012

Enabled the fTPM, turned on Secure Boot, enabled DEP for all programs. Pretty soon I won't be able to do anything at all, but with very fast RAM.

E: And now all of the VBS options too - I was mildly surprised when the system managed to reboot. 3DMark shows the exact same CPU score as before, so good stuff all around.

v1ld fucked around with this message at 01:01 on May 23, 2020

v1ld
Apr 16, 2012

Klyith posted:

holy gently caress that's fast

the slightly slower 4133 is only $100, I wonder if that would do something approximately similar if they're still B-die

The 4400 is rated at 1.45V while the 4133 and lower are rated at 1.35V. Buildzoid's creed is "why wouldn't you run B-die at 1.5V" and I couldn't get his OC to work at lower voltages, though I didn't spend much time on that part of it. E: The G.Skill 3800 CL14 memory is rated for 1.5V too.

In case it's useful, here are the notes I made from the Buildzoid video. The notes on the side are his suggested tweaks. All values with a yellow "!!" or underlined in blue were also added in once the primary values showed themselves to be stable. That passes MemTest for 20 minutes or so, and the DRAM calculator's built-in MEMBench for similar durations. Will run MemTest for longer once I have all 4 sticks.

v1ld fucked around with this message at 00:20 on May 23, 2020

v1ld
Apr 16, 2012

Seamonster posted:

Post your AIDA64 numbers?

Saving the AIDA64 trial period for after I have all 4 sticks in, since I want to do extensive burn-in testing then. Here's the MEMBench result from within Ryzen DRAM Calculator if that helps.



The 102.6 Best time is not mine, that's some number it came with for 3800 MHz. When I ran the 3800 16-16-16 from the Memory Try It! in the BIOS, the Time was around 140 if I remember correctly. The numbers with this OC hover in the 103.1 to 103.6 range.

It used to show the memory timings on the bottom left correctly but maybe Hyper-V is preventing it from reading those now. It's defaulting to 2^n-1 values as you can see.


Craptacular! posted:

So you put your GPU in one of the PCIE4 slots if you need to (if it even matters at all)

Is there any reason why putting an RX 5700 in a PCIe 4.0 slot would prevent me from overclocking its memory? I used to be able to bump it to 1825MHz from the base 1750MHz with long-term stability, but now that it's in the X570's PCIe 4.0 x16 slot, I get immediate flickering if I do any sort of increase to that mem clock.

Quoting your Tech Jesus link because I wonder if I should just drop to PCIe 3 for that card if that's even possible on this board.

v1ld fucked around with this message at 14:17 on May 23, 2020

v1ld
Apr 16, 2012

Arzachel posted:

A refresh this late would be spicy because it implies that Zen3 will perform favorably compared to a discounted Zen2+ with ~10% higher clocks.

Yeah, this was my reaction to those higher clocks as well. They have to avoid stealing the thunder from their own next release - so that's quite spicy indeed if true.

v1ld
Apr 16, 2012

Like RMS is a crazy and apparently pervy coot, ugh, and I'm not trying to or wanting to defend him as a person. But he did move the needle for open source back when it wasn't a thing, so...

My first interaction with him was when I sent in a patch to the GNU tar list back in Feb/March 1996. I got an email from him asking for money to keep GNU going - a pattern that repeated a few times till the early 2000s when they had more funding.

But back in the mid 90s, the BSD toolchain wasn't as easy to get hold of as it is now (I don't even remember if the lawsuits had been settled) and the GNU toolchain was important to doing anything open source.

RMS didn't do most of that, but he did start on many of the core tools or push folks to fill in the blanks to get a free (as in money, not just code - compilers cost 100s and 1000s of real $$$$) compiler suite and supporting toolchain done. Not to say that wouldn't have happened anyway, of course.

But then he also hung onto maintainership of GCC well beyond when he should have. Handed it off to that Ada professor who was even slower, and that led to the Cygnus fork etc. So good beginnings, bad endings even back then.

Likewise with Hurd, which he kept pushing as the one true way well after Linux showed it wasn't.

I don't even know why I'm writing this, but it's kinda sad to see all the hosed up bullshit in the person when the stuff they began was so useful and had a huge impact on what I've done over the years (Emacs and the GNU toolchain I mean).

v1ld
Apr 16, 2012

gradenko_2000 posted:

The 3600
The 3600X
The 3600XT

The 3700X
The 3800X
The 3800XT

Why wasn't the 8-core part called 3800 with a 3800X bin in the first place? There must've been some reason beyond "just because."


uhhhhahhhhohahhh posted:

I hope it's a banger but if they had something that was considerably better than their competition, wouldn't they be talking about it and hyping it already?

They want to sell the XTs and existing CPUs in the meantime. Osborne showed the problems with hyping your next product as being much better than what's available to buy. Apple and others learned their lesson from that debacle and pretty much only announce details right when the product is available to purchase.

v1ld
Apr 16, 2012

Llamadeus posted:

Imo it's simply to show that the 3700X is a strict upgrade from the 3600X (with a beefier stock cooler too), for the occasional buyer who might choose a 3600X over a 3700/3800 just for the X.

Yeah, good theory.

It's funny how their product numbers are converging to 3999.99999X and will in the limit achieve 4000X.

v1ld
Apr 16, 2012

Pablo Bluth posted:

I suspect completely bypassing the CPU when loading data to GPU memory will be more important than the data compression side. Here's an NVIDIA devblog about their direct GPU storage solution where they claim up to an 8x throughput speedup when you don't have to have the CPU make a copy in main memory as part of the process.

The DMA from SSD direct to memory, with decompression not blowing out CPU caches and related data paths, is definitely one of the touted features. Thanks for the NVIDIA paper link.

I wonder if we'll see GPUs add some of these features over time. Would it be possible to have the GPU fetch data from the SSD and do the decompression itself? I.e., could some of the I/O complex here move into the GPU (phone screenshot, as you can see):


Cerny called the phase of pulling data into memory the Check In, and quoted one Zen 2 core as being needed in some cases for just that copy overhead. He also quoted up to 9 Zen 2 cores for the Kraken decompression. Even assuming those are the lower-clocked cores in the PS5 and he's taking a high-usage boundary case, that's still a lot of CPU in better situations.

Boy, Cerny gives a good presentation - he lays out the motivation for each feature so well. Spends a lot of time at the end on the #CUs vs sizeof(CU) discussion, so that's obviously a sore point. But still, one of the cooler tech design presentations I've seen in a while. Even the reason for the abysmally slow patching on the PS4 is evident now.

v1ld
Apr 16, 2012

Seamonster posted:

Post your AIDA64 numbers?

Bit of a late follow-up, but I finally have all 4 x 8GB sticks. Tightened the timings some more from 14-16-13-25-38 to 14-15-13-21-34, where it seems stable after running Karhu RAM Test for 45 minutes. AIDA64 seems in the ballpark looking at other 4 x 8GB numbers.





Learned an unamusing lesson: the board would simply not POST with 4 sticks, though either of the 2-stick combos worked individually. Even tried 2133MHz JEDEC settings, no dice with 4 sticks. Finally tried seating all 4 and resetting the CMOS - it trained on the memory and there have been no problems since. A side effect of non-QVL memory?

v1ld posted:

Is there any reason why putting an RX 5700 on a PCIe 4.0 slot would prevent me from overclocking its memory?

This was from Vsoc being set to 1.15V. The OC runs fine at 1.0V, which lets the GPU VRAM be OCed again. Buildzoid made a random comment that higher Vsoc values can cause problems with PCIe 4.0 devices, and that turns out to be true.

v1ld
Apr 16, 2012

The latest beta of HWiNFO shows whether the motherboard is misreporting power to improve benchmarks: https://www.hwinfo.com/forum/threads/explaining-the-amd-ryzen-power-reporting-deviation-metric-in-hwinfo.6456/

v1ld
Apr 16, 2012

gradenko_2000 posted:

On the one hand, this does mean that people might have their CPUs running hotter than they might expect, if they were looking at their thermals relative to the reported wattage, and it doesn't seem like a good idea for your motherboard metrics to be not giving you "real" numbers.

Yeah, it seems like modern CPUs/GPUs have a lot of protections built in when it comes to preventing damage to themselves from overly aggressive boosts. My thought was your latter point - there's a lot of code in these things to optimize behavior across load/power/thermals, and misrepresenting power seems like a bad idea when the general consensus is that AMD is not leaving much on the table to further optimize to begin with.

Hopefully the availability of these numbers in HWiNFO will drive more accurate power reporting in the future.
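
As a rough back-of-the-envelope of why the metric matters (these numbers are made up for illustration, not from the HWiNFO thread):

```python
# If the board's VRM telemetry under-reports current, the CPU thinks it is
# still under its PPT limit while actually drawing more. Illustrative only.
ppt_limit_w = 88            # stock PPT for a 65 W-class Ryzen part
reporting_deviation = 0.70  # HWiNFO "Power Reporting Deviation" of 70%

actual_draw_w = ppt_limit_w / reporting_deviation
print(f"CPU believes it's at {ppt_limit_w} W; actual draw is roughly {actual_draw_w:.0f} W")
```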

v1ld
Apr 16, 2012

It's great just how much logic is present in CPUs/GPUs/boards to optimize behavior for current conditions. Obviously that code isn't perfect and it's an extremely gnarly optimization problem with a huge number of variables, but I still feel better about those decisions being made at the point where more complete information is available than by just pushing up one parameter while ignoring all else.

One feature of the Buildzoid OC for the PVS memory I followed is that he only specifies a few voltages (Vdimm, Vsoc, VDDG CCD/IOD) and most of the timings. He leaves all other parameters on auto, including every parameter whose unit is ohms, with the comment that the existing code does a good job of optimizing these already. I like that - it's locking down some parameters to narrow the search space and then letting the code get on with finding the solution. No doubt someone can come up with a better manual solution, but as long as auto is at most a few points off...

v1ld
Apr 16, 2012

Looks like 3600s are back-ordered a week or two at most places, though the price has dropped to $167. I would have thought that the surge in people buying gear because of being stuck at home would have dropped off by now. Has AMD talked about production shortfalls?

v1ld
Apr 16, 2012

Some Goon posted:

As of last week webcams were still completely sold out at my local microcenter. How much of that is China slowdown vs demand I don't know, but I don't think the demand surge is over yet, especially if a bunch of people decided a $30 price cut was enough of a difference to upgrade.

Yeah, that's a good example and one I can particularly relate to. Didn't wake up to the need for a webcam until a few weeks in and have given up on getting one.

If it's just demand there will probably be no effect on the next gen of both processors and GPUs; otherwise it's going to get interesting if we have a supply shortfall for both current and future parts. In fact...

SwissArmyDruid posted:

Anyone wants to sell me their lightly-used 3600 for a song because they're upgrading, hmu. XD

Hi, want to buy a heavily used, constantly OCed, 7+ year old 3770k for a very cacophonous price?

v1ld
Apr 16, 2012

BeastOfExmoor posted:

I ordered something from AliExpress on March 7th and it still hasn't made it to the USA.

I bought a pair of earphones for Covid work at home from AliExpress in March that's still stuck somewhere in China's post-customs shipping areas. They'll probably have one of those storage auction TV shows to get rid of all the stored stuff when they open up.

v1ld
Apr 16, 2012

Sounds like AMD is forking the naming for its AM4 BIOS packages. 300/400 series boards get AGESA 1.0.0.6 BIOS firmware while 500 series boards will get AGESA 1.0.0.2 V2. Is it fair to assume that only the V2s will support Zen 3, and so 400 series boards will eventually get V2s as well?

I don't like that naming at all. What was wrong with 2.0.0.2, AMD? Putting the most significant part of the name in a (new!) suffix is weird.

v1ld fucked around with this message at 16:10 on Jun 17, 2020

v1ld
Apr 16, 2012

iGPUs are good for the ultralight laptop market, which I think will grow over time, especially for Windows.

One of the better purchases I've made was an HP Spectre x360 - got it for $1050 new in a sweet and lucky deal. The UHD 620 held it back for gaming; would have loved to see something better there.

These things are pretty great otherwise: 13" 4K screen, ultralight, i7 that could perform decently with some undervolting, 2x USB C w/ Thunderbolt. The weight and design make them very comfortable to use (barring a very badly placed touchpad).

Windows is still a lovely tablet OS, but I used it as a laptop. iOS is a lovely desktop - though I haven't used it since I gave my Air to my mom. ChromeOS on my Slate is designed for both tablet and desktop use and that does show up well, but it's not great at either yet.

v1ld
Apr 16, 2012

Oxyclean posted:

Not sure if there's a better thread for this (don't really want to make a full tech support thread just yet) - I'm having a hard time gauging if I should be particularly concerned about my Ryzen 5 3600's (stock cooler) temperatures or not. I'm idling around 50*C right now, and saw it 85-88*C earlier under load while playing Satisfactory. Ambient has been maybe 25-30*C? (Mobo is B450 Tomahawk, Case is Meshify C, with solid/non glass side panel)

The GN Meshify C write-up said they got an additional 7°C drop on CPU temps by swapping out the single front 120mm fan for 2x 140mms. If you have spare fans it's worth putting them in.

v1ld
Apr 16, 2012

There was a GN video looking at whether too much paste is bad. It concluded there was no downside thermally, but the overflow from the sides could cause problems. It's obviously better to have direct surface-to-surface contact with the paste only filling in where there is no contact, but their conclusion was that going overboard is better than using too little.

When I took the H100 off the 3770k after 7 years, it was easy to see the small pea approach had left a quarter of the surface near the edge on one side uncovered and probably with little contact at all. It still ran fine with a +20% OC for all that time.

So for the 3600 I used a small pea in the center with a very thin circle close to the edge. Idles at 35-40, maxes at 78-80 in synthetic tests and 70-72 otherwise under load. Which seems to be in line with GN's cooler reviews given ambient in the room.

v1ld
Apr 16, 2012

Oxyclean posted:

Might have to give this a shot, but only spare fans I got lying around are some 120mm from an old case, which I assume might still help?

It should, especially given that mesh has very low air resistance. If I recall that video from when I was looking at cases, they talked about where the front fans should actually be placed so that air flows to the CPU cooler, so you may want to skip around to where they discuss the two fans and their placement.

v1ld
Apr 16, 2012

Statutory Ape posted:

i hope not i still havent received 2 bongs i ordered on ebay

My earphones just showed up, 4 months after order. Hang in there!
