|
Yep, it was pretty obvious why they soldered these: 210W through a 178 mm² strip of thermal paste is unrealistic for more than a fraction of a second. I highly doubt it was in response to consumers complaining about the old paste; in order to hit 4.7-5.0 GHz turbo speeds for any useful duration or load, they had to solder it or it would thermal throttle before it even came close. I'd say the 7700k pretty much hit the efficiency wall on 14nm at 4.5 GHz; they made it to the 8700k/8086k on binning, but it looks like even that avenue is pretty much exhausted. On the other hand, my custom loop water cooler with a 420x38mm high density copper radiator is suddenly looking like a pretty good investment after all, especially after reading the THG review where their 280mm AIO hit a 90C package temp. (My 7700k used to do that at stock; then I delidded it and swapped the paste for liquid metal, and now a 4.8 GHz AVX torture test only draws it up to the low 70s, while any realistic load typically stops in the mid 60s.)
|
# ¿ Oct 20, 2018 01:11 |
|
|
# ¿ May 14, 2024 18:23 |
|
If you are willing to spend $500 on a CPU and another $200 on a mobo, what's $150 more on a new PSU? Also, the amount of power required depends on the whole system, so we would need to know what else is going in with it. A 9900k is noted to consume up to 200W; pair that with a high end GPU, for instance. I've seen a couple RTX 2080 Ti cards with two 8-pin and one 6-pin power connector, which combined with the PCIe slot is sufficient to deliver 450W to the card, bringing you to 650W of load potential in only two components. Now, it is pretty unlikely you would see that much current pull from both components at once in practice, so 650W would probably do it. But throw in a couple hard drives, an SSD, and a bunch of cooling fans, then compound that with the degraded output capacity of a PSU after a few years of usage, and you might want to think about something closer to 850W. Also as noted, you probably want a PSU that has all the necessary dedicated outputs for your motherboard. I know some aux power connections *can* be optional assuming you aren't overclocking, but this varies by motherboard and what CPU you stuff in it. It is better to play it safe and get a PSU that has every connector you need.
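That sizing logic can be sketched in a few lines. The wattages are the ballpark figures from this post (not measurements of any real build), and the 20% headroom for capacitor aging and transient spikes is just a common rule of thumb:

```python
# Rough PSU sizing sketch -- component wattages are ballpark figures
# from the post above, not measured values for a specific build.
def recommend_psu_watts(component_watts, headroom=0.20):
    """Sum worst-case component draw, then pad it to cover PSU aging
    and transient spikes (20% headroom is a modest rule of thumb)."""
    total = sum(component_watts.values())
    return total, total * (1 + headroom)

build = {
    "cpu_9900k": 200,        # observed package power under heavy load
    "rtx_2080_ti": 450,      # 2x 8-pin + 1x 6-pin + PCIe slot limit
    "drives_fans_misc": 50,  # hard drives, SSD, fans, etc.
}

raw, padded = recommend_psu_watts(build)
print(f"worst case draw: {raw}W, suggested PSU: ~{padded:.0f}W")
```

Which lands right around the 850W class suggested above once everything is added up.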
|
# ¿ Oct 21, 2018 16:01 |
|
B-Mac posted:Side note, why does Anandtech still use a 1080 for their gaming tests? You’d think a 1080 Ti/2080 or 2080 Ti would make sure they aren’t GPU limited when testing CPUs. No idea why they run 4k and 8k ultra benchmarks though; it's basically a complete waste of a run and everyone's time, because any mainstream gaming CPU from the last 5-7 years wouldn't bottleneck before the GPU at those settings. Maybe eventually they will get an editor who realizes it probably isn't a good CPU benchmark when it shows almost no difference between massively different CPUs. A 5+ year old 4c4t i5 might get almost the same average/95th percentile framerate as a 9900k in a game at 4k, but that doesn't tell us that the smaller CPU might have taken 30+ seconds longer to load the benchmark in the first place.
|
# ¿ Oct 27, 2018 20:45 |
|
Hmmm, I was even looking at that Asus Z390 board. I'm less concerned about overclocking or whether it can deliver the necessary power, since a doubled hardware 4 phase still has the 350 amps or whatever capacity you could possibly need; the question I have is what the boards do when the load is closer to idle/middle loads. Does the MSI design with its doubled 6 phase have the ability to shut off phases, or even one half of the split phases, and then cycle through them in sequence to improve efficiency and spread the thermals around when the system doesn't need every phase running at once? If it can, then it would likely be comfortably ahead of the Asus board on power savings, but if it has to run all 6 doubled phases all the time, it would cost a lot of power.
|
# ¿ Nov 4, 2018 19:32 |
|
It might just mean a further harvested 9900k die where both the iGPU and part of the Cache are disabled.
|
# ¿ Feb 16, 2019 01:11 |
|
Khorne posted:Between 1.4V-1.5V is safe for RAM depending on manufacturer, heat situation, etc. z370 is desktop so you're well in the clear. The spec for VCCSA and VCCIO is usually less than 1V, and Asus is running them both over 1.4. That being said, I had to run them at 1.3V on my 9900k to get 3200 MHz RAM stable in XMP.
|
# ¿ Aug 9, 2019 23:22 |
|
I still have my original Intel 160 GB SSD (the 320 series, which was the successor to the X25-M but with a more confusing name); these days I use it as a USB drive with one of those USB3 or type-C to SATA adapters. It's handy for brute forcing large transfers that would be too slow over the wifi, and it also makes an excellent source for installing Windows because it is way faster and more durable than the average thumb drive.
|
# ¿ Aug 31, 2019 02:43 |
|
Paul MaudDib posted:Excluding the iGPU: Yeah, getting cooling close enough to the transistors to be meaningful is going to be a significant issue in upcoming process nodes. The stresses from thermal cycling between the die and substrate/heat spreader/etc are only going to get worse as transistors shrink. The breakthrough that allows smaller nodes to clock as aggressively as Intel's 14+++ is probably going to be a packaging/substrate technology, rather than having much to do with the process node itself.
|
# ¿ Oct 12, 2019 12:57 |
|
Perhaps switching the packaging, heat spreaders, or even the semiconductor itself to diamond, which has the highest thermal conductivity of any bulk material. Or some weird carbon nanotube heat spreader technology (although diamond is also just carbon, and probably cheaper than trying to scale up manufacturing of some weird bulk CNT material). But I think in the near term it is mostly going to come down to some type of mechanical solution; existing coolers are more than sufficient to dissipate the total wattage if you can just get the heat into the cold plate quickly enough.
|
# ¿ Oct 12, 2019 13:37 |
|
Bringing DRAM closer to the die is about bringing data closer to the CPU so it can wait less and work more, which means it will also consume more power. Stacking everything together will definitely make cooling and power consumption worse in pretty much every way, because it packs more power consuming, heat dissipating components into less space than ever before and helps them all work harder besides.
|
# ¿ Oct 12, 2019 16:17 |
|
eames posted:I feel like there’s a good chance we’ll see real-time software encryption DRM schemes become standard as high core count adoption improves. Like Denuvo, which makes games stutter, chop, freeze, and average significantly lower performance because it steals so much CPU time away from actually running the game?
|
# ¿ Oct 13, 2019 12:52 |
|
latinotwink1997 posted:Where on this chart would “surface of the sun” be? ~6300 W/cm2, higher than anything else but would still fit in the graph.
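For what it's worth, that figure is easy to sanity check with the Stefan-Boltzmann law (a back-of-the-envelope sketch; 5772 K is the commonly cited effective temperature of the photosphere):

```python
# Radiated power density at the sun's surface via the
# Stefan-Boltzmann law: flux = sigma * T^4.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
T_SUN = 5772.0           # effective surface temperature, K

flux_w_m2 = SIGMA * T_SUN ** 4
flux_w_cm2 = flux_w_m2 / 1e4  # convert W/m^2 -> W/cm^2

print(f"{flux_w_cm2:.0f} W/cm^2")  # roughly 6300 W/cm^2
```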
|
# ¿ Oct 13, 2019 17:38 |
|
The ASICs that accelerate encoding, like those behind Quick Sync and NVENC, trade off precision, complexity, and features in exchange for speed and power efficiency; pure software encoding can be set to make no quality/precision/complexity or feature trade-offs, but incurs a huge performance penalty in exchange. Whether the differences in implementation actually translate to something detectable by the average human in a high bit rate archival encode is debatable. It is more likely that pure software encoding can achieve the same quality output with fewer bits, rather than its output being universally better than hardware encoding, if you just throw as many bits at the hardware encoder as it needs to look good.
|
# ¿ Nov 3, 2019 15:40 |
|
eames posted:Silicon Lottery published their 9900Ks binning stats One thing that strikes me as amusing about their binning statistics for the 9900k is that their 4.8 GHz all core model has an AVX frequency of 4.6 GHz. Stock behavior of a 9900k is 4.7 GHz all core with no AVX offset, so literally 100% of the samples should be able to do 4.7 GHz all core AVX.
|
# ¿ Nov 4, 2019 00:02 |
|
If you already have 3200C16 that works at the advertised speed, it would be a poor investment to push for something higher; you are talking single-digit improvements outside of specifically targeted benchmarks. Also, the higher the frequency or the lower the latency, the more of a pain in the rear end it is to get working even if it is on the QVL. I have 3200C14, and it took quite a while and a ridiculous 1.3V on both VCCSA and VCCIO to get it to kick over.
|
# ¿ Dec 12, 2019 01:28 |
|
Some random info about desktop parts vs mobile parts: they may not be cut from the same wafers at all. Transistors can be optimized in multiple ways, but basically it comes down to a trade-off between performance and energy consumption. Generally speaking, higher performing transistors also consume more energy (because of higher leakage). The important thing to remember about leakage is that transistors are analog devices: a transistor that is very high performance and can switch between its on and off states very quickly will likely leak a considerable amount of current in its "off" state, because in order to attain that performance, the difference between on and off is tiny. On the other side of things, a transistor that leaks very little in its off state is probably not going to perform very well, because the electrical field responsible for switching is likely significantly larger and more powerful, which takes a lot longer to charge or discharge. This optimization happens at the design and process phases for the transistors, so binning alone cannot explain the variance between most mobile and desktop processors. No matter how aggressively you bin and downclock a desktop processor, those high performance transistors will always leak an unsustainable amount of energy for a mobile device. And a leakage-optimized mobile transistor will never switch as quickly as the desktop part, no matter how much voltage and current you try to ram through it.
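The fast-vs-leaky trade-off can be sketched with the textbook subthreshold current model. This is a toy illustration, not a calibrated device model; the I0, ideality factor, and threshold voltages below are made-up round numbers:

```python
import math

def subthreshold_leakage(vth, i0=1e-6, n=1.5, temp_k=300.0):
    """Off-state (Vgs = 0) drain current from the textbook subthreshold
    model: I = I0 * exp((Vgs - Vth) / (n * kT/q)).
    A lower threshold voltage means exponentially more leakage."""
    kt_q = 8.617e-5 * temp_k  # thermal voltage kT/q in volts (~26 mV)
    return i0 * math.exp(-vth / (n * kt_q))

fast = subthreshold_leakage(vth=0.30)   # "desktop" low-Vth transistor
slow = subthreshold_leakage(vth=0.50)   # "mobile" high-Vth transistor
print(f"low-Vth leaks {fast / slow:.0f}x more than high-Vth")
```

A mere 200 mV of threshold difference produces a couple orders of magnitude of off-state leakage, which is why no amount of binning turns a desktop die into a phone chip.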
|
# ¿ Jan 16, 2020 23:33 |
|
I had a 7700k feeding my GTX 1080 at 1080p high refresh, and in several newer games it consistently couldn't keep up, causing the GPU to idle down. The next couple generations of video cards are definitely going to leave 4 core CPUs behind with anything past mid-range.
|
# ¿ Apr 8, 2020 00:44 |
|
The power supply should just plug directly into a distribution board/wire harness built into the case behind the motherboard tray, and the motherboard/drives/etc should all plug into that; the only cables it should need are for fan hubs, pumps, GPUs, and stuff like that. The power connector on the motherboard would be on the back side of the board and would just slot right into a matching plug on the case.
|
# ¿ May 5, 2020 22:13 |
|
silence_kit posted:When people say 'leaky silicon' what does that really mean? Transistors are analog devices, so "off" isn't zero current; the whole operating range of a transistor is a curve. Really fast transistors generally let more current through even when they are off, and the distance between off and on is smaller, allowing them to switch really quickly at the expense of always "leaking" a lot of power. Basically, in order to work, a high performance transistor needs to quickly swing its field strength across the threshold, but because of the capacitance involved in switching, the only real way to accomplish that is with a relatively weak field. A strong field that blocks most or all of the current from passing through also has a lot of capacitance, which requires a lot more time to charge or discharge.
|
# ¿ May 30, 2020 12:51 |
|
On this automotive tech derail: the reason Tesla has radically more modern looking/performing tech is their almost complete vertical integration; they build their own tech. All the other automakers have extremely long lived relationships with third party suppliers, and those relationships are in fact so stable that neither side is ever particularly motivated to innovate, thus automotive tech generally sucks.
|
# ¿ Aug 17, 2020 23:01 |
|
SMT is ultimately a method of increasing processor utilization. 4-way or 8-way probably isn't looked at because AMD and Intel are already getting satisfactory utilization of their execution units with 2-way and going higher either doesn't yield any benefits or comes with a complexity/latency trade-off that isn't worth it.
|
# ¿ Oct 10, 2020 13:49 |
|
repiv posted:I've just been reading up on the quirks of the license system and learned that speculative execution can trigger it Speculative/out of order execution for the win. (Modern CPUs predict which way a branch will go and execute past it ahead of time, then discard the work if the prediction turns out to be wrong once the actual branch is resolved.)
|
# ¿ Jan 6, 2021 02:32 |
|
jink posted:poo poo, I am tempted to go with a 9900KS from my 9900K and then I see benchmarks and realize that extra 200-400mhz is NOT WORTH IT. Also why I never bothered to overclock my 9900K (beyond getting rid of the turbo power/time limits). I wouldn't even notice the difference, especially with my current GTX1080 after I upgraded to a 1440p display.
|
# ¿ Jan 8, 2021 02:40 |
|
jink posted:Indeed. There is a guard around the chip from RockItCool: https://rockitcool.myshopify.com/collections/9th-gen-cpu/products/9th-gen-direct-to-die-frame-kit-complete How much of a temperature drop did direct die cooling yield? For reference, my 9900k sits in the mid to upper 80C range running prime95 AVX on all 16 threads, with monitoring apps reporting roughly ~195W CPU power. The only time I fiddled with overclocking I had the chip at 5.0 all core, but the motherboard automatically fed it some insane voltage under LLC (1.4 ish), which caused it to throttle at 99C/~220W even without AVX, and I immediately returned to defaults. My custom water loop has more than enough capacity to handle the heat; with the CPU and GPU combined it easily dissipates 400W. The problem is the die and IHS are so thick I simply couldn't get the heat out of the CPU and into the coolant fast enough; the die, solder, and IHS basically become insulators because it's so much power in such a small space. I knew the only way I'd get any higher cooling performance would be going direct die, and while I do have an 1155 delid kit, I just didn't have the will to risk it, especially because it would just be chasing a number with no significant real world benefit.
|
# ¿ Jan 9, 2021 14:55 |
|
Ika posted:Is the intel memory controller also sensitive to single rank vs dual rank like zen3? Say I wanted 128gb RAM with an 11th gen CPU, should I take 4 x 32gb single rank which are hard to find, or could I do dual rank and not lose performance? Does anyone even make single rank 32 GB sticks? Even my (granted, 3 years old) 16 GB sticks had to be dual rank.
|
# ¿ Jan 16, 2021 19:46 |
|
Shrimp or Shrimps posted:So, weird question but does anybody know if the cpu cooler mounting holes on a z490 board are the same as a z270 board? Wondering if you be able to swap 2 systems around, with an older one (7700k) going into a MSI trident pre-built and the newer one (10700k) going into a nr200 case. Look up the current specs for your cooler and check its socket compatibility list. But generally it should work; LGA1200 uses exactly the same mounting positions as LGA115x.
|
# ¿ Feb 10, 2021 04:31 |
|
Heat spreaders are becoming a catch-22 on modern CPUs. They are necessary because dies are fragile and they also provide a thermal buffer to save CPUs from instantly burning up from improperly installed coolers, but they also are basically insulators at this point and prevent any cooling system from being able to remove heat from the die as fast as it can be produced.
|
# ¿ Mar 6, 2021 16:52 |
|
Space Gopher posted:Oh well if it's from reputable vendor Corsair it must not have any security vulnerabilities. Addressable RGB is something that was basically just smashed into firmware with zero fucks given about security. It is presented to the system as just some tiny chunk of memory reserved by the firmware, and controlling it from the OS requires reading and writing directly to physical memory. With the average hardware vendor's budget allocation for software development being "zero fucks given", security in RGB software is a permanently lost cause and will always be an automatic failure.
|
# ¿ Mar 28, 2021 02:34 |
|
Fantastic Foreskin posted:As someone who only has man-on-the-street level knowledge of chip fab, can someone explain to me what exactly it means for a node/process to fail, and how one does it for 5 years straight? Why it is taking 5 years comes down to multiple reasons, from typical corporate mismanagement to the actual goals being too ambitious for the equipment they are trying to use. Also, high volume manufacturing of features that small is just inherently difficult and risky. What helped TSMC succeed with 7nm where Intel failed is that TSMC uses EUV (extreme ultraviolet) lithography for critical layers, where Intel tried to do it with DUV (deep ultraviolet) lithography alone. Basically, if the ultraviolet you use is a marker, Intel is trying to use a 193nm wide marker tip to draw a 10nm wide line, where TSMC is using a 13.5nm wide marker instead. Everyone was using 193nm down to about 12-14nm; there are a lot of tricks and workarounds to make it work at those sizes, but the difficulty goes up exponentially as the size decreases, and TSMC/Samsung/etc simply waited until the machines that work with 13.5nm became available before they attempted anything smaller.
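The marker analogy maps onto the Rayleigh resolution criterion, CD = k1 * λ / NA. This is a back-of-the-envelope sketch; the k1 and NA values are typical published ballpark figures, not any particular fab's actual numbers:

```python
def min_feature_nm(wavelength_nm, numerical_aperture, k1):
    """Rayleigh criterion: smallest printable feature (critical
    dimension) for a given light wavelength and lens."""
    return k1 * wavelength_nm / numerical_aperture

# 193nm immersion DUV pushed hard with process tricks (low k1)
duv = min_feature_nm(193.0, numerical_aperture=1.35, k1=0.30)
# 13.5nm EUV with a more relaxed k1 and 0.33 NA optics
euv = min_feature_nm(13.5, numerical_aperture=0.33, k1=0.40)

print(f"DUV single exposure: ~{duv:.0f}nm, EUV: ~{euv:.0f}nm")
```

Anything much smaller than the DUV single-exposure limit forces multi-patterning (printing one layer in several passes), which is where the exponential difficulty comes from.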
|
# ¿ Apr 2, 2021 14:50 |
|
Fantastic Foreskin posted:This, I think, is what I was asking. The manufacturing failure is obviously a technical problem, I just don't know enough about chip fab / design to know what these problems could even be. The 'five years' part was secondary, but if you're trying to drive a nail with a screwdriver that'll get you to five years no problem. 13.5nm is probably the end of the line; any longer wavelength and you are stuck with the same "too big" problem, any shorter wavelength and, in exchange for passing through the atmosphere, it passes through EVERYTHING because it's an X-ray or gamma ray, and whatever it does hit gets electrons violently dislodged from its atoms. Anyway, it is just kind of an interesting/fascinating subject where microchip manufacturing meets nuclear physics. If you are bored on a weekend or in pandemic isolation some time, it can pass a few hours to read about photolithography and DUV/EUV on Wikipedia and see what you remember from your elementary science classes on the electromagnetic spectrum. Indiana_Krom fucked around with this message at 17:21 on Apr 2, 2021 |
# ¿ Apr 2, 2021 17:15 |
|
B-1.1.7 Bomber posted:540 watts lmao I have a literal spaceheater that’s 600 watts. When someone asks how powerful your CPU is, you can just say "It peaks at about 3/4 horsepower".
|
# ¿ Aug 8, 2021 12:32 |
|
An ~800W gaming computer running for one solid month would consume 576 kWh of electricity, which is enough energy to drive an EV with a 275 Wh/mi efficiency about 2100 miles.
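The arithmetic, for anyone who wants to plug in their own numbers (275 Wh/mi is used here purely as an example efficiency figure):

```python
# Convert a month of continuous PC load into equivalent EV range.
pc_watts = 800.0
hours_per_month = 24 * 30                 # 720 hours
kwh = pc_watts * hours_per_month / 1000.0 # watt-hours -> kWh

ev_wh_per_mile = 275.0
miles = kwh * 1000.0 / ev_wh_per_mile

print(f"{kwh:.0f} kWh ≈ {miles:.0f} miles of EV driving")
```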
|
# ¿ Aug 9, 2021 03:02 |
|
Let it also be said that the penalty for using too much paste is far less of a problem than the penalty for not using enough. Too much paste makes a mess and squeezes out of the sides but otherwise performs fine; too little, and there will be air gaps between the surfaces and performance will suffer significantly. Always err on the side of too much.
|
# ¿ Dec 4, 2021 13:03 |
|
I still remember when the USB consortium announced the new names for the 5/10/20 Gbps standards. Even among the poorest performing standards bodies out there, it is exceptionally rare for one to invent a naming convention so colossally bad that it makes the whole internet pause for a moment because everyone who reads it thinks it is a prank.
|
# ¿ Dec 8, 2021 02:24 |
|
carry on then posted:This was definitely me, for awhile I ran the overclock to get my 9900k to 5GHz all core and besides being able to look at the nice round number on spec screens (except for Task Manager which cheated me out of 30 MHz) I never noticed a single performance benefit, so back to stock I went. I've run my 9900k at 5.0 all core for all of like 3 minutes; I just couldn't be bothered to get the power consumption under control, because the auto voltage (somewhere on the low side of 1.3V) shot it to about 225W with all cores pegged at 99C, granted this was running prime95. Even at stock it runs around 190W constantly, but at least the water cooler is able to keep it around 80C in prime. I concluded that there was no way I'd notice a performance uplift from only a 6% difference in clock speed, and it certainly wasn't going to be worth a 20% increase in power consumption.
|
# ¿ Jan 22, 2022 19:10 |
|
Hasturtium posted:I have a perfectly dumb question: for the purpose of gaming, would a 12400 be a better general choice than a 7940x? Just wondering if the per-core grunt and lower latency between cores would elevate it above fourteen cores of Skylake-X justice. A 12400 would likely completely wreck a 7940x in gaming.
|
# ¿ Feb 23, 2022 02:50 |
|
ConanTheLibrarian posted:So that I could remap the buttons on a new Corsair mouse, I had to install an 836MB download. Despite the size, it was remarkably unintuitive to use and occasionally forgets the mappings. At least OSes and browsers do something. While the Logitech mouse software is also pretty garbage, I'm grateful that the button mappings can be saved to the profiles (up to 3) on the mouse itself; you can then remove the software and the mouse remembers the profiles/mappings/acceleration forever, even if you move it to a different computer. Every vendor with remappable keyboard/mouse profiles should do that.
|
# ¿ Apr 24, 2022 00:10 |
|
There is a lot that goes into an architecture that determines its final clock speeds; one of the ways x86 gets to 5+ GHz is by making sure it does as little work as possible each "tick". The more you try to do in a single clock cycle, the longer it takes, and the longer the clock cycle must be to accommodate the workload. ARM is optimized for a different spot on that curve, so it does more work per clock cycle, but as a result cannot reach as high a clock speed. There is a lot of stuff way down in the nuts and bolts of an architecture involving pipelining, parallelism, and latency that is honestly beyond my ability to explain in a forum post. Probably the best "at a glance" metric is the number of pipeline stages: the ARM10 had 6 pipeline stages, Intel's *lake architectures have around 14 stages (some NetBurst parts reached as high as 31!). Generally speaking, the more stages you have, the less work each stage does, so the less time each stage takes to complete, which allows the clock speed to go higher. So even if you threw unlimited power and cooling at the current iteration of an ARM architecture, it is simply not engineered to reach x86 clock speeds, due to the internal latency of its slowest pipeline stage. There is some minimum amount of time required for one or more of the pipeline stages to complete their work, which sets a ceiling that no amount of extra power can overcome. Say something on the order of 0.3 nanoseconds, which means no matter what you do with the energy input, the processor frequency cannot exceed 3.33 GHz. x86 gets to 5+ GHz by doing less work in each stage, so every stage in the pipeline can complete its work within something like 0.13 nanoseconds, hence 7+ GHz overclocks if you can ram enough power through one (under LN2 for instance). 
At the various architecture design teams, there are probably people who know the latency of the slowest pipeline stage and could tell you an approximation of the highest clock physically possible on a given chip.
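The ceiling described above is just the reciprocal of the slowest stage's latency. The stage times here are the illustrative numbers from this post, not real measurements of any shipping core:

```python
# The critical path (slowest pipeline stage) sets a hard ceiling on
# clock speed: f_max = 1 / t_slowest_stage.
def max_clock_ghz(slowest_stage_ns):
    return 1.0 / slowest_stage_ns  # 1/ns works out to GHz

deep_pipeline = max_clock_ghz(0.13)   # many short stages (x86-style)
short_pipeline = max_clock_ghz(0.30)  # fewer, longer stages

print(f"deep pipeline ceiling:  {deep_pipeline:.2f} GHz")
print(f"short pipeline ceiling: {short_pipeline:.2f} GHz")
```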
|
# ¿ May 1, 2022 16:26 |
|
CoolCab posted:never run two programs that monitor temps or voltages etc at once, neither will report the right results as I understand it. The way these programs generally work: your motherboard, CPU, and GPU all have temperature sensors read by embedded microcontrollers, those controllers write the temperature data to some reserved memory addresses, and monitoring programs then read from those addresses and display the data in a human readable format for you. So you can actually run as many monitoring programs as you want, because 2, 3, or even 100 of them can all read that same memory address space just fine; it is "public" data inside your PC. Where you can run into trouble is with monitoring programs that also control fans, RGB LEDs, overclocking, voltage, or power saving features, which work by writing to reserved memory near all that sensor data, because then you can end up with multiple programs attempting to write to the same area and conflicting with each other.
|
# ¿ Aug 1, 2022 23:10 |
|
|
# ¿ May 14, 2024 18:23 |
|
Agreed posted:The 4770K in my old comp is still working great in most games paired with a RTX 2070 non-S, my kiddo uses those and for the most part nothing has issues. I was recently tearing my hair out trying to figure out why Dying Light 2 was stuttering like mad on my machine, like constantly. After trying a couple of driver updates and clean installs and exiting all background tasks, I noticed the CPU usage rainmeter graph I keep on my second screen showed my CPU idling down during the game. So I switched my Windows power profile to High Performance and the stuttering immediately stopped. I still haven't figured out why Dying Light 2 wasn't keeping my CPU at full speed like pretty much every other game does, but at least I have a workaround for now.
|
# ¿ Oct 9, 2022 18:51 |