Indiana_Krom
Jun 18, 2007
Net Slacker
Yep, it was pretty obvious why they soldered these: 210W through a 178mm² strip of thermal paste is unrealistic for more than a fraction of a second. I highly doubt it was in response to consumers complaining about the old paste; to hit 4.7-5.0 GHz turbo speeds for any useful duration or load they had to solder it, or it would thermal throttle before it even came close. I'd say the 7700K pretty much hit the efficiency wall of 14nm at 4.5 GHz; they made it to the 8700K/8086K on binning, but it looks like even that avenue is pretty much exhausted.

On the other hand, my custom loop with a 420x38mm high-density copper radiator is suddenly looking like a pretty good investment after all, especially after reading the THG review where their 280mm AIO hit a 90C package temp. (My 7700K used to do that at stock; then I delidded it and swapped the paste for liquid metal, and now a 4.8 GHz AVX torture test only draws it up to the low 70s, while any realistic load typically stops in the mid 60s.)


Indiana_Krom
Jun 18, 2007
Net Slacker
If you are willing to spend $500 on a CPU, another $200 on a mobo, what's $150 more on a new PSU?

Also, the amount of power required depends on the whole system, so we would need to know what else is going in with it. A 9900K is noted to consume up to 200W; pair that with a high-end GPU and it adds up fast. I've seen a couple of RTX 2080 Ti cards with two 8-pin and one 6-pin power connectors, which combined with the PCIe slot is enough to deliver 450W to the card, for 650W of load potential in just two components. In practice it's pretty unlikely you'd see that much draw from both at once, so 650W would probably do it. But throw in a couple of hard drives, an SSD, and a bunch of cooling fans, then compound that with the degraded output capacity of a PSU after a few years of use, and you might want to think about something closer to 850W. Also, as noted, you probably want a PSU that has all the dedicated outputs your motherboard needs; some aux power connections *can* be optional if you aren't overclocking, but that varies by motherboard and by what CPU you stuff in it. Better to play it safe and get a PSU with every connector you need.
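A quick sketch of that budget math; all wattages are the same ballpark figures as above (not measurements), and the derating factor is my own assumption:

```python
# Rough PSU sizing sketch using the ballpark figures from the post.
# All numbers are approximations, not measurements.

components_w = {
    "9900K (peak)": 200,
    "2080 Ti (2x8-pin + 6-pin + slot)": 450,
    "drives, fans, misc": 50,
}

peak_w = sum(components_w.values())

# PSUs lose some output capacity as they age; assume ~15% loss over a
# few years of service (an illustrative figure, not a spec).
aging_derate = 0.85
recommended_w = peak_w / aging_derate

print(f"peak draw:   {peak_w} W")
print(f"recommended: {recommended_w:.0f} W")  # lands near the 850 W suggestion
```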

Indiana_Krom
Jun 18, 2007
Net Slacker

B-Mac posted:

Side note: why does AnandTech still use a 1080 for their gaming tests? You'd think a 1080 Ti/2080 or 2080 Ti would make sure they aren't GPU limited when testing CPUs.
At least their latest benchmarks have a 720p/IGP mode listed, though that's still not a perfect measure of the CPU for high-refresh gaming. I have a GTX 1080 and a 7700K, and when I ran the Shadow of the Tomb Raider benchmark I was consistently 10-15 fps average below the AnandTech result for the same CPU/video card combination. According to the benchmark I was only 30% GPU limited, and my 7700K has multi-core enhancement (4.5 GHz all-core turbo), so if anything I should have been beating their score. Then I realized AnandTech was also using a low-quality graphics preset for the 720p/IGP setting, where I kept everything at its highest settings except the resolution. The low-quality preset culls more detail/geometry close to the camera than the highest settings do, so it's not just easier on the GPU per frame but also easier on the CPU, which is why they got higher numbers.

No idea why they run 4K and 8K ultra benchmarks though; they're basically a complete waste of a run and everyone's time, because any mainstream gaming CPU from the last 5-7 years won't bottleneck before the GPU at those settings. Maybe eventually they will get an editor who realizes it probably isn't a good CPU benchmark when it shows almost no difference between massively different CPUs. A 5+ year old 4c/4t i5 might get almost the same average/95th-percentile framerate as a 9900K in a game at 4K, but that doesn't tell us that the smaller CPU might have taken 30+ seconds longer to load the benchmark in the first place.

Indiana_Krom
Jun 18, 2007
Net Slacker
Hmmm, I was even looking at that Asus Z390 board. I'm less concerned about overclocking or whether it can deliver the necessary power, since a doubled hardware 4-phase still has the 350 amps or whatever capacity you could possibly need; the question I have is what the boards do at idle and middling loads. Does the MSI design with its doubled 6-phase have the ability to shut off phases, or even one half of each split phase, and then cycle through them in sequence to raise efficiency and spread the thermals around when the system doesn't need every phase running at once? If it can, it would likely be comfortably ahead of the Asus board on power savings, but if it has to run all 6 doubled phases all the time it would cost a lot of power.

Indiana_Krom
Jun 18, 2007
Net Slacker
It might just mean a further harvested 9900K die where both the iGPU and part of the cache are disabled.

Indiana_Krom
Jun 18, 2007
Net Slacker

Khorne posted:

Between 1.4V-1.5V is safe for RAM depending on manufacturer, heat situation, etc. z370 is desktop so you're well in the clear.

The specs for VCCSA and VCCIO are usually less than 1V, and Asus is running them both over 1.4V. That said, I had to run them at 1.3V on my 9900K to get 3200 MHz RAM stable in XMP.

Indiana_Krom
Jun 18, 2007
Net Slacker
I still have my original Intel 160 GB SSD (the 320 series, which was the successor to the X25-M but with a more confusing name); these days I use it as a USB drive with one of those USB 3/Type-C to SATA adapters. Handy for brute-forcing large transfers that would be too slow over the wifi, and it also makes an excellent source for installing Windows because it is way faster and more durable than the average thumb drive.

Indiana_Krom
Jun 18, 2007
Net Slacker

Paul MaudDib posted:

Excluding the iGPU:
  • Sandy Bridge 4C: 1.02-1.31W/mm^2
  • Ivy Bridge 4C: 1.32-1.75W/mm^2
  • Haswell 4C: 1.25-1.65W/mm^2
  • Kaby Lake 4C: 1.26-1.9W/mm^2
  • Coffee Lake 8C: 1.55-2.33W/mm^2
So 50-75% higher thermal density than Sandy Bridge depending on how aggressive you get. Which is a large part of why solder sucks now, even if it was good enough then it's not good enough to move almost twice the heat.

(the other half being the die itself is a bit thicker now to resist mechanical working and cracking, because smaller dies don't have the mechanical strength to resist the expansion/contraction of the heatspreader layer as well... this is Intel's concession to die longevity.)

Yeah, getting cooling close enough to the transistors to be meaningful is going to be a significant issue on upcoming process nodes. The stresses from thermal cycling between the die and the substrate/heat spreader/etc. are only going to get worse as transistors shrink. The breakthrough that allows smaller nodes to clock as aggressively as Intel's 14+++ is probably going to be a packaging/substrate technology rather than anything to do with the process node itself.
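For anyone wanting to sanity-check the quoted densities, the math is just package power over core die area; the power and area figures below are rough public ballpark numbers I'm assuming for illustration, not official specs:

```python
# Thermal density = package power / (approximate CPU-core die area).
# Both columns are rough public figures, not official measurements.

chips = {
    "Sandy Bridge 4C": (95, 90),    # ~95 W into ~90 mm^2 of core area
    "Coffee Lake 8C": (210, 120),   # a hot-running 9900K, iGPU area excluded
}

for name, (watts, area_mm2) in chips.items():
    print(f"{name}: {watts / area_mm2:.2f} W/mm^2")
```

Both results fall inside the quoted ranges, which is all this sketch is meant to show.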

Indiana_Krom
Jun 18, 2007
Net Slacker
Perhaps switching the packaging, heat spreaders, or even the semiconductor itself to diamond, which has the highest thermal conductivity of any bulk material. Or some weird carbon nanotube heat spreader technology (although diamond is also just carbon, and probably cheaper than trying to scale up manufacturing of some weird bulk CNT material). But I think in the near term it is mostly going to come down to some type of mechanical solution; existing coolers are more than sufficient to dissipate the total wattage if you can just get the heat into the cold plate quickly enough.

Indiana_Krom
Jun 18, 2007
Net Slacker
Bringing DRAM closer to the die is for bringing data closer to the CPU so it can wait less and work more, which means it will also consume more power. Stacking everything together will definitely make cooling and power consumption worse in pretty much every way, because it sticks more power consuming and thermal dissipating components in less space than ever before and helps them all work harder besides.

Indiana_Krom
Jun 18, 2007
Net Slacker

eames posted:

I feel like there's a good chance we'll see real-time software encryption DRM schemes become standard as high core count adoption improves. :smith:

Like Denuvo, which makes games stutter, chop, freeze, and average significantly lower performance because it steals so much CPU time from actually running the game?

Indiana_Krom
Jun 18, 2007
Net Slacker

latinotwink1997 posted:

Where on this chart would “surface of the sun” be?

~6300 W/cm², higher than anything else on it, but it would still fit on the graph.
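That figure checks out against the Stefan-Boltzmann law for a black body at the sun's effective surface temperature (~5772 K):

```python
# Radiant flux at the sun's surface via the Stefan-Boltzmann law.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
T_SUN = 5772       # K, effective surface temperature of the sun

flux_w_m2 = SIGMA * T_SUN ** 4
flux_w_cm2 = flux_w_m2 / 1e4   # convert m^2 -> cm^2

print(f"{flux_w_cm2:.0f} W/cm^2")  # roughly 6300
```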

Indiana_Krom
Jun 18, 2007
Net Slacker
The ASICs that accelerate encoding, like those behind Quick Sync and NVENC, trade precision, complexity, and features for speed and power efficiency. Pure software encoding can be set to make no quality/precision/complexity or feature trade-offs, but it incurs a huge performance penalty in exchange. Whether the differences in implementation actually translate to something detectable by the average human in a high-bitrate archival encode is debatable. It is more likely that software encoding can achieve the same quality with fewer bits, rather than its output being universally better than hardware encoding once you throw as many bits at the encode as it needs to look good.

Indiana_Krom
Jun 18, 2007
Net Slacker

eames posted:

Silicon Lottery published their 9900Ks binning stats

100% 5.0 GHz
31% 5.1 GHz
3% 5.2 GHz

One thing about their binning statistics for the 9900K that strikes me as amusing: their 4.8 GHz all-core model has an AVX frequency of 4.6 GHz. Stock behavior of a 9900K is 4.7 GHz all-core with no AVX offset, so literally 100% of the samples should be able to do 4.7 GHz all-core AVX.

Indiana_Krom
Jun 18, 2007
Net Slacker
If you already have 3200C16 that works at the advertised speed, it would be a poor investment to push for something higher; you are talking single-digit improvements outside of specifically targeted benchmarks.

Also, the higher the frequency or the lower the latency, the more of a pain in the rear end it is to get it working even if it is on the QVL. I have 3200C14 and it took quite a while, plus a ridiculous 1.3V on both VCCSA and VCCIO, to get it to kick over.

Indiana_Krom
Jun 18, 2007
Net Slacker
Some random info about desktop parts vs mobile parts: they may not be cut from the same wafers at all. Transistors can be optimized in multiple ways, but basically it comes down to a trade-off between performance and energy consumption. Generally speaking, higher-performing transistors also consume more energy (because of higher leakage).

The important thing to remember is that transistors are analog devices. A transistor that is very high performance and can switch between its on and off states very quickly will likely leak a considerable amount of current in its "off" state, because in order to attain that performance the difference between on and off is tiny. On the other side, a transistor that leaks very little in its off state is probably not going to perform very well, because the electrical field responsible for switching is significantly larger and stronger, and takes a lot longer to charge or discharge.

This optimization happens at the design and process phases, so binning cannot explain the variance between most mobile and desktop processors. No matter how aggressively you bin and downclock a desktop processor, those high-performance transistors will always leak an unsustainable amount of energy for a mobile device. And a leakage-optimized mobile transistor will never switch as quickly as the desktop part, no matter how much voltage and current you try to ram through it.

Indiana_Krom
Jun 18, 2007
Net Slacker
I had a 7700K feeding my GTX 1080 at 1080p high-Hz, and in several newer games it consistently couldn't keep up, causing the GPU to idle down. The next couple generations of video cards are definitely going to leave 4-core CPUs behind with anything past mid-range.

Indiana_Krom
Jun 18, 2007
Net Slacker
The power supply should just plug directly into a distribution board/wire harness built into the case behind the motherboard tray, and the motherboard/drives/etc. should all plug into that; the only cables it should need are for fan hubs, pumps, GPUs, and the like. The power connector on the motherboard would be on the back side of the board and would just slot right into a matching plug on the case.

Indiana_Krom
Jun 18, 2007
Net Slacker

silence_kit posted:

When people say 'leaky silicon' what does that really mean?

Is it that the transistors in the chip in the non-T version have a wider range of threshold voltages, so some low-threshold voltage transistors conduct more current in the off-state than expected (this is the leakage)? And the fact that there are some high-threshold voltage transistors in the distribution means that the chip will either need a little extra time or a little extra voltage for the switching circuits to be able to complete its computations within the clock period?

Does this explanation capture the physical differences between non-T & T versions of chips, or is it something else that creates that distinction?

Transistors are analog devices, so "off" isn't zero current; the whole operating range of a transistor is a curve. Really fast transistors generally let more current through even when they are off, and the distance between off and on is smaller, allowing them to switch really quickly at the expense of always "leaking" a lot of power. Basically, a high-performance transistor needs to be able to quickly swing its field strength across the threshold, and the only real way to accomplish that is to have a fairly weak field, because of the capacitance involved in switching. Strong fields that block most or all of the current also have a lot of capacitance, which takes a lot more time to charge or discharge.

Indiana_Krom
Jun 18, 2007
Net Slacker
On this automotive tech derail: the reason Tesla has radically more modern-looking/performing tech is their almost complete vertical integration; they build their own tech. All the other automakers have extremely long-lived relationships with third-party suppliers, and those relationships are so stable that neither side is ever particularly motivated to innovate. Thus automotive tech generally sucks.

Indiana_Krom
Jun 18, 2007
Net Slacker
SMT is ultimately a method of increasing processor utilization. 4-way or 8-way probably isn't looked at because AMD and Intel are already getting satisfactory utilization of their execution units with 2-way and going higher either doesn't yield any benefits or comes with a complexity/latency trade-off that isn't worth it.

Indiana_Krom
Jun 18, 2007
Net Slacker

repiv posted:

I've just been reading up on the quirks of the license system and learned that speculative execution can trigger it :whitewater:

If you're writing a library and want to avoid messing with the power state you might be tempted to do something like

code:
if cpu_has_256bit_penalty() {
    do_128bit_path();
} else {
    do_256bit_path();
}
...but branch mispredictions will randomly cause the CPU to downclock even though the 256bit branch never actually gets taken

gross

Speculative/out-of-order execution for the win. (The CPU predicts which way the branch will go and executes ahead down that path, then discards the work whenever the prediction turns out to be wrong, which is how the 256-bit path can fire even though it never actually gets taken.)
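The usual workaround is to resolve the dispatch once at startup instead of branching on every call, so the predictor only ever sees one stable target. Sketched in Python purely for illustration; in real code this would be a function pointer in the compiled library, and `cpu_has_256bit_penalty()` is a made-up stand-in for a real feature check:

```python
# Illustrative sketch of one-time dispatch resolution. In compiled code
# this would be a function pointer set at startup; an indirect call with
# a never-changing target gives the predictor nothing to misspeculate on.

def cpu_has_256bit_penalty():
    return True  # pretend this CPU downclocks on 256-bit instructions

def do_128bit_path():
    return "128-bit path"

def do_256bit_path():
    return "256-bit path"

# Resolve once; every later call goes straight to the chosen function,
# so the 256-bit code is never even fetched on penalized CPUs.
kernel = do_128bit_path if cpu_has_256bit_penalty() else do_256bit_path

print(kernel())  # -> 128-bit path
```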

Indiana_Krom
Jun 18, 2007
Net Slacker

jink posted:

poo poo, I am tempted to go with a 9900KS from my 9900K and then I see benchmarks and realize that extra 200-400mhz is NOT WORTH IT.

Also why I never bothered to overclock my 9900K (beyond removing the turbo power/time limits): I wouldn't even notice the difference, especially with my current GTX 1080 after upgrading to a 1440p display.

Indiana_Krom
Jun 18, 2007
Net Slacker

jink posted:

Indeed. There is a guard around the chip from RockItCool: https://rockitcool.myshopify.com/collections/9th-gen-cpu/products/9th-gen-direct-to-die-frame-kit-complete

I did it myself and it wasn't difficult but made a mess.

How much of a temperature drop did direct die cooling yield?

For reference, my 9900K sits in the mid-to-upper 80s (C) running prime95 AVX on all 16 threads, with monitoring apps reporting roughly ~195W of CPU power. The only time I fiddled with overclocking I had the chip at 5.0 all-core, but the motherboard automatically fed it some insane voltage under LLC (1.4-ish), which caused it to throttle at 99C/~220W even without AVX, and I immediately returned to defaults. My custom water loop has more than enough capacity to handle the heat; with the CPU and GPU combined it easily dissipates 400W. The problem is the die and IHS are so thick I simply couldn't get the heat out of the CPU and into the coolant fast enough; the die, solder, and IHS basically become insulators because it's so much power in such a small space. I knew the only way I'd get any more cooling performance would be going direct-die, and while I do have an 1155 delid kit, I just didn't have the will to risk it, especially since it would just be chasing a number with no significant real-world benefit.
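For a ballpark on why direct-die helps, the effective die-to-coolant thermal resistance is just ΔT over power. The coolant temperature below is an assumption of mine, not a measured number:

```python
# Effective die-to-coolant thermal resistance, ballpark only.
t_package = 87.0   # C, mid-to-upper 80s under prime95 AVX (from the post)
t_coolant = 32.0   # C, ASSUMED loop temperature, not measured
power_w = 195.0    # W, reported package power

r_total = (t_package - t_coolant) / power_w   # C per watt
print(f"~{r_total:.2f} C/W die-to-coolant")

# Shaving even 0.05 C/W off the die/solder/IHS stack would drop the
# package temperature by roughly:
print(f"~{0.05 * power_w:.0f} C")
```

At ~195W, even a small reduction in that resistance translates into a large package-temperature drop, which is the whole appeal of delidding.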

Indiana_Krom
Jun 18, 2007
Net Slacker

Ika posted:

Is the intel memory controller also sensitive to single rank vs dual rank like zen3? Say I wanted 128gb RAM with an 11th gen CPU, should I take 4 x 32gb single rank which are hard to find, or could I do dual rank and not lose performance?

Does anyone even make single rank 32 GB sticks? Even my (granted, 3 years old) 16 GB sticks had to be dual rank.

Indiana_Krom
Jun 18, 2007
Net Slacker

Shrimp or Shrimps posted:

So, weird question, but does anybody know if the CPU cooler mounting holes on a Z490 board are the same as on a Z270 board? Wondering if you'd be able to swap 2 systems around, with the older one (7700K) going into an MSI Trident pre-built and the newer one (10700K) going into an NR200 case.

Look up the current specs for your cooler and check its socket compatibility list. But generally they should be; LGA1200 uses exactly the same mounting positions as LGA115x.

Indiana_Krom
Jun 18, 2007
Net Slacker
Heat spreaders are becoming a catch-22 on modern CPUs. They are necessary because dies are fragile, and they provide a thermal buffer that saves CPUs from instantly burning up under improperly installed coolers, but at this point they are basically insulators and prevent any cooling system from removing heat from the die as fast as it can be produced.

Indiana_Krom
Jun 18, 2007
Net Slacker

Space Gopher posted:

Oh well if it's from reputable vendor Corsair it must not have any security vulnerabilities.

Wait, what's this CVE-2020-8808? Well, they're so reputable, I'm sure that it was some obscure issue where someone could possibly fool the drivers into accessing some tiny chunk of should-be-off-limits memory, but it's probably not anything super serious.


Hmm, it turns out that any process could just ask the iCUE drivers to read or write arbitrary memory and bypass the entire Windows security model. That seems pretty bad, but they did patch it a couple of months after it was reported.

I'm sure that was just a one-time "whoops, we completely forgot to put any security in our software that runs in a highly privileged context" issue, though, surely that wouldn't be something that would be part of a longer running pattern. Oh, wait, both CVE-2018-12441 and CVE-2018-19592 detail other issues with Corsair software that allow any unprivileged user on the system to execute arbitrary commands with system-level permissions.

Eh, gently caress it, who needs security when you have fancy flashing lights.

Addressable RGB is something that was basically just smashed into firmware with zero fucks given about security. It is presented to the system as some tiny chunk of memory reserved by the firmware, and controlling it from the OS requires reading and writing physical memory directly. With the average hardware vendor's budget allocation for software development being "zero fucks given," security in RGB software is a permanently lost cause and will always be an automatic failure.

Indiana_Krom
Jun 18, 2007
Net Slacker

Fantastic Foreskin posted:

As someone who only has man-on-the-street level knowledge of chip fab, can someone explain to me what exactly it means for a node/process to fail, and how one does it for 5 years straight?
We could get into lots of technical reasons Intel 10nm has been a dumpster fire, but the gist of it is: they can't make chips as quickly as they would like, way too many of the ones they do make end up defective, and even the good ones don't perform well.

Why it has taken 5 years comes down to multiple reasons, from typical corporate mismanagement to goals too ambitious for the equipment they are trying to use. Also, high-volume manufacturing of features this small is just inherently incredibly difficult and risky.

What helped TSMC succeed with 7nm where Intel failed is that TSMC uses EUV (extreme ultraviolet) lithography for critical layers, where Intel is trying to do it with DUV (deep ultraviolet) lithography alone. Basically, if the ultraviolet you use is a marker, Intel is trying to use a 193nm-wide marker tip to draw a 10nm-wide line, where TSMC is using a 13.5nm-wide marker instead. Everyone used 193nm down to about the 12-14nm nodes; there are a lot of tricks and workarounds to make it work at those sizes, but the difficulty goes up exponentially as features shrink, and TSMC/Samsung/etc. simply waited until the machines that work with 13.5nm became available before they attempted anything smaller.
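The marker analogy maps onto the Rayleigh criterion for lithography resolution, CD = k1 · λ / NA. The k1 and NA values below are typical published ballpark figures, not exact tool specs:

```python
# Minimum printable feature size (critical dimension) per the Rayleigh
# criterion: CD = k1 * wavelength / NA. Values are typical, not exact.

def critical_dimension(k1, wavelength_nm, na):
    return k1 * wavelength_nm / na

# 193 nm immersion DUV: water immersion pushes NA to ~1.35, with k1
# near its practical floor of ~0.25 (heavy multi-patterning required).
duv = critical_dimension(0.25, 193.0, 1.35)

# 13.5 nm EUV: a lower NA (~0.33) and relaxed k1 still land far smaller.
euv = critical_dimension(0.40, 13.5, 0.33)

print(f"DUV single-exposure limit: ~{duv:.0f} nm")
print(f"EUV single-exposure limit: ~{euv:.0f} nm")
```

Which is the whole story in two numbers: a single DUV exposure bottoms out in the mid-30nm range, so anything smaller takes multi-patterning gymnastics, while EUV gets there in one pass.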

Indiana_Krom
Jun 18, 2007
Net Slacker

Fantastic Foreskin posted:

This, I think, is what I was asking. The manufacturing failure is obviously a technical problem, I just don't know enough about chip fab / design to know what these problems could even be. The 'five years' part was secondary, but if you're trying to drive a nail with a screwdriver that'll get you to five years no problem.
Yeah, it was basically a nail-with-a-screwdriver or square-peg-round-hole kind of problem. It isn't like there aren't valid reasons to not want to use 13.5nm wavelength light, though. It took a while for the tools to become available because nobody had a light source that could reliably put out that wavelength with sufficient power to be useful for photolithography, and the environment it requires further complicates things.

DUV machines use "regular" lasers; EUV machines have to use a laser to vaporize a tiny droplet of liquid metal, which then produces a flash in the EUV wavelength (and said flash isn't as "clean" as a laser). DUV machines can use lenses and conventional optics, only need to be filled with nitrogen (no oxygen), and can immerse the wafer in a special thin layer of water to improve the optics; EUV machines have to hold a near vacuum with only trace hydrogen inside and have to use mirrors, because the wavelength won't go through lenses or just about anything else. Also, EUV is powerful enough to qualify as ionizing radiation and gradually decays/destroys anything it comes into contact with, including the mirrors inside the machine (it is right on the edge of the X-ray spectrum).

13.5nm is probably the end of the line: any larger wavelength and you are stuck with the same "too big" problem; any smaller wavelength and, in exchange for passing through the atmosphere, it passes through EVERYTHING, because it's an X-ray or gamma ray, and the things it does hit get electrons violently dislodged from their atoms.

Anyway, it is just kind of an interesting/fascinating subject where microchip manufacturing meets nuclear physics. If you are bored on a weekend or in pandemic isolation some time, it can pass a few hours to read about photolithography and DUV/EUV on Wikipedia and see what you remember from your elementary science classes on the electromagnetic spectrum.

Indiana_Krom fucked around with this message at 17:21 on Apr 2, 2021

Indiana_Krom
Jun 18, 2007
Net Slacker

B-1.1.7 Bomber posted:

540 watts lmao I have a literal spaceheater that’s 600 watts.

When someone asks how powerful your CPU is, you can just say "It peaks at about 3/4 horsepower".

Indiana_Krom
Jun 18, 2007
Net Slacker
An ~800W gaming computer running for one solid month would consume 576 kWh of electricity, which is enough energy to drive an EV with 275 Wh/mi efficiency about 2100 miles.
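The arithmetic, for anyone checking:

```python
# One month of an ~800 W gaming PC, expressed as EV range.
power_w = 800
hours = 30 * 24                       # one solid month

energy_kwh = power_w * hours / 1000   # 576 kWh
ev_efficiency_wh_per_mi = 275
miles = energy_kwh * 1000 / ev_efficiency_wh_per_mi

print(f"{energy_kwh:.0f} kWh -> ~{miles:.0f} miles")
```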

Indiana_Krom
Jun 18, 2007
Net Slacker
Let it also be said that the penalty for using too much paste is far smaller than the penalty for not using enough. Too much paste makes a mess and squeezes out the sides but otherwise performs fine; too little and there will be air gaps between the surfaces, and performance will suffer significantly. Always err on the side of too much.

Indiana_Krom
Jun 18, 2007
Net Slacker
I still remember when the USB consortium announced the new names for the 5/10/20 Gbps standards. Even among some of the worst-performing standards bodies out there, it is exceptionally rare for one to invent a naming convention so colossally bad that it makes the whole internet pause for a moment because everyone who reads it thinks it is a prank.

Indiana_Krom
Jun 18, 2007
Net Slacker

carry on then posted:

This was definitely me, for awhile I ran the overclock to get my 9900k to 5GHz all core and besides being able to look at the nice round number on spec screens (except for Task Manager which cheated me out of 30 MHz) I never noticed a single performance benefit, so back to stock I went.

:hf:

I've run my 9900K at 5.0 all-core for all of about 3 minutes; I just couldn't be bothered to get the power consumption under control, because auto voltage (somewhere on the low side of 1.3V) shot it to about 225W with all cores pegged at 99C, granted this was running prime95. Even at stock it runs around 190W constantly, but at least the water cooler is able to keep it around 80C in prime. I concluded there was no way I'd notice a performance uplift from only a 6% difference in clock speed, and it certainly wasn't going to be worth a ~20% increase in power consumption.
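Dynamic power scales roughly with frequency times voltage squared, which is why a 6% clock bump costs so much. The stock voltage below is my own guess for illustration; only the ~1.3V overclock value comes from the post:

```python
# Rough dynamic-power scaling: P ~ f * V^2.
# The stock voltage is an ASSUMPTION; the OC voltage is the ~1.3 V
# auto value mentioned above.

f_stock, v_stock = 4.7, 1.20   # GHz, V (voltage assumed)
f_oc, v_oc = 5.0, 1.30         # GHz, V

ratio = (f_oc / f_stock) * (v_oc / v_stock) ** 2
print(f"~{(ratio - 1) * 100:.0f}% more power for ~6% more clock")
```

That lands in the same ballpark as the observed 190W to 225W jump, so the trade-off was never going to look good.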

Indiana_Krom
Jun 18, 2007
Net Slacker

Hasturtium posted:

I have a perfectly dumb question: for the purpose of gaming, would a 12400 be a better general choice than a 7940x? Just wondering if the per-core grunt and lower latency between cores would elevate it above fourteen cores of Skylake-X justice.

12400 would likely completely wreck a 7940x in gaming.

Indiana_Krom
Jun 18, 2007
Net Slacker

ConanTheLibrarian posted:

So that I could remap the buttons on a new Corsair mouse, I had to install an 836MB download. Despite the size, it was remarkably unintuitive to use and occasionally forgets the mappings. At least OSes and browsers do something.

While the Logitech mouse software is also pretty garbage, I'm grateful that the button mappings can be saved to the profiles (up to 3) on the mouse itself; then you can remove the software, and the mouse remembers the profiles/mappings/acceleration forever, even if you move it to a different computer. Every remappable keyboard/mouse should work that way.

Indiana_Krom
Jun 18, 2007
Net Slacker
There is a lot that goes into an architecture that determines final clock speeds; one of the ways x86 gets to 5+ GHz is by making sure it does as little work as possible each "tick". The more you try to do in a single clock cycle, the longer it takes, and the longer the clock cycle must be to accommodate the workload. ARM is optimized for a different spot on that curve, so it does more work per clock cycle but as a result cannot reach as high a clock speed. There is a lot of stuff way down in the nuts and bolts of an architecture involving pipelines, parallelism, and latency that is honestly beyond my ability to explain in a forum post. Probably the best at-a-glance metric is the number of pipeline stages: the old ARM10 core had 6 pipeline stages, while Intel's *lake architectures have 14 (some NetBurst chips reached as high as 31!). Generally speaking, the more stages you have, the less work needs to happen in each stage, so the less time each stage takes to complete, which allows the clock speed to go higher.

So basically, even if you threw unlimited power and cooling at the current iteration of the ARM architecture, it is simply not engineered to reach x86 clock speeds, due to the internal latency somewhere in its pipeline stages. There is some minimum amount of time required for one or more of the pipeline stages to complete their work, which sets a ceiling that no amount of power can overcome. Say it's something on the order of 0.3 nanoseconds; then no matter what you do with the energy input, the processor frequency cannot exceed 3.33 GHz. x86 gets to 5+ GHz by doing less work in each stage, so every stage in the pipeline can complete its work within something like 0.13 nanoseconds, hence 7+ GHz overclocks if you can ram enough power through one (under LN2, for instance).

The various architecture design teams probably have people who know exactly which pipeline stage has the highest latency, and could tell you an approximation of the highest clock physically possible on a given chip.
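The slowest-stage ceiling is simple arithmetic; the stage latencies here are the illustrative figures from the posts above, not real measurements:

```python
# Clock ceiling set by the slowest pipeline stage: f_max = 1 / t_stage.
# Stage latencies are illustrative numbers, not measured values.

def max_clock_ghz(slowest_stage_ns):
    return 1.0 / slowest_stage_ns  # 1/ns conveniently equals GHz

print(f"0.30 ns stage -> {max_clock_ghz(0.30):.2f} GHz ceiling")
print(f"0.13 ns stage -> {max_clock_ghz(0.13):.2f} GHz ceiling")
```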

Indiana_Krom
Jun 18, 2007
Net Slacker

CoolCab posted:

never run two programs that monitor temps or voltages etc at once, neither will report the right results as I understand it.

The way these programs generally work: your motherboard, CPU, and GPU all have temperature sensors read by embedded microcontrollers, and those controllers write the temperature data to some reserved memory addresses. Monitoring programs then read the data from those addresses and display it in a human-readable format. So you can actually run as many monitoring programs as you want, because 2, 3, or even 100 of them can all read that same address space just fine; it is "public" data inside your PC.

Where you can run into trouble is with monitoring programs that also let you control fans, RGB LEDs, overclocking, voltage, or power-saving features, which are controlled by writing to reserved memory near all that sensor data; then you can end up with multiple programs attempting to write to the same area and conflicting with each other.


Indiana_Krom
Jun 18, 2007
Net Slacker

Agreed posted:

The 4770K in my old comp is still working great in most games paired with a RTX 2070 non-S, my kiddo uses those and for the most part nothing has issues.

I was running into some trouble with it in the newest titles before I built this 12900K replacement for it, though. Warhammer II and III make it cry, Elden Ring did not run well. Jokes on me there though Elden Ring still does not run well, stutter city every time I play after the first one post-patch. I don't get why it should, everything else runs great. :negative:

I was recently tearing my hair out trying to figure out why Dying Light 2 was stuttering like mad on my machine, like constantly. After trying a couple of driver updates/clean installs and exiting all background tasks, I noticed the CPU usage graph I keep in Rainmeter on my second screen showed the CPU idling down during the game. So I switched my Windows power profile to High Performance and the stuttering immediately stopped. I still haven't figured out why Dying Light 2 wasn't keeping my CPU at full speed like pretty much every other game does, but at least I have a workaround for now.
