|
Sir Bobert Fishbone posted:Is it normal to have a fairly consistent 10 degree difference between cores at idle on IB? Or did I gently caress up my thermal paste application? Nah, the difference on my SNB is like 7C at idle. There's the one stupid hot core and the one core that lives next to the iGPU and uses it like an extra heat sink.
|
# ? May 12, 2012 18:53 |
|
Sir Bobert Fishbone posted:Is it normal to have a fairly consistent 10 degree difference between cores at idle on IB? Or did I gently caress up my thermal paste application? Don't know. Physical layouts of the two chips are pretty similar, though: both of them sandwich two of the cores, and poor Core 3 is right at the center of the entire package. Still, I only have about a 3°C difference at idle. I doubt it's down to thermal paste unless your idle temperature as a whole is abnormal.
|
# ? May 12, 2012 19:03 |
|
My i5-3570K cores are currently: 30, 34, 34, 24.
|
# ? May 12, 2012 19:05 |
|
Alright, I'm back and I think that I'm finally getting the hang of this poo poo... but I still have a few quirks to clear out, and it'd be cool if someone with an MSI board with Click BIOS II could help me out. The reason is that Click BIOS II does not seem to have offset voltage; it only offers constant voltage, so a lot of the advice I'm getting in here is not really applicable to me. Here's where I stand now:
1. CPU is set to 4.2GHz
2. Core Voltage is set to 1.2V (though this can probably come down)
3. Vdroop is set to Level 2
4. Core OCP Expander is set to Enhanced
Everything else is unchanged. While I've gotten everything in the OS down below 1.2V, my problem is two-fold:
1. Temps are still peaking at mid-60s under load (for one core; all other cores are 55-60C)
2. I'm idling at 1.184V and using 1.160V under load... this is my biggest frustration, and I'd like some advice on what Vdroop setting I should use to make this thing idle at lower voltages.
Thanks. testtubebaby fucked around with this message at 01:28 on May 13, 2012 |
# ? May 13, 2012 01:08 |
|
This is my first time overclocking since I was literally an employee at FrozenCPU.com a million years ago with a Celeron 450 OC'd to like 900. I have an Ivy Bridge i7-3770K on an Asus P8Z77-V Deluxe (yeah I know), and a Corsair H60 water cooler set up. So. Right now I'm running at 42x100MHz, with 1.226V. My memory is 1600MHz and isn't overclocked at all. My idle temp is around 34 C. Seems pretty stable to me, but I just wanted to run it by you guys in case I'm GOING TO FRY MY COMPUTER.
|
# ? May 13, 2012 02:53 |
|
dunkman posted:This is my first time overclocking since I was literally an employee at FrozenCPU.com a million years ago with a Celeron 450 OC'd to like 900. Has it escaped past 74C so far? Bit warm (even with a water cooler), in my opinion.
|
# ? May 13, 2012 02:58 |
|
movax posted:Has it escaped past 74C so far? Bit warm (even with a water cooler), in my opinion. I'm going to let it run while I do some errands. I will see where it goes.
|
# ? May 13, 2012 03:01 |
|
Do you need that much voltage for 42x, or is that your first dial-in? Most of the reviews I've seen that have done overclocking didn't need that much voltage until 4.5-4.7 GHz.
|
# ? May 13, 2012 03:09 |
|
That was my first dial-in. I just let it run and it was hitting around 76ish at some peaks, but mostly around 71/2/3/4. So I should dial back on the voltage a bit? Any recommended value? I really am just stabbing in the dark on this one.
|
# ? May 13, 2012 03:20 |
|
dunkman posted:That was my first dial-in. I just let it run and it was hitting around 76ish at some peaks, but mostly around 71/2/3/4. 72.5°C for full-time loads is what's published as safe for the chip. We're not really sure about voltage yet; we're just waiting for some people to push them hard enough to cook them or start experiencing early failure to establish those parameters. Two new lithographic technologies at once, so all we know is temperature at this point. That said, for 42x, you're most likely going way overkill on the voltage. Two ways to go: either lower the voltage, or raise the multiplier 'til it gets unstable and then go one step back down from there. Thing is, as clock increases, the voltage will cause more heat as well, and you're already pushing safe temperatures. Partly it's because of your cooler; it's just not a very efficiently designed unit compared to modern mono- or dual-tower 4+ heat pipe coolers, which can wick heat much more quickly with less noise and greater performance per unit of radiator fin space. Either find a way to lower temps so you can push up the multiplier and get a really profound overclock out of it, or stay where the multiplier is now and lower voltage 'til you're at the least necessary for ironclad stability but with significantly lower temperatures.
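The "raise the multiplier 'til it gets unstable, then go one back down" procedure is just a linear search over settings. A minimal sketch of that logic; the `is_stable` callback is hypothetical and stands in for a real stress-test pass/fail at fixed voltage:

```python
def dial_in(multipliers, is_stable):
    """Walk the multiplier up until the stability test fails, then back off
    to the last setting that passed. Returns None if nothing passes."""
    best = None
    for m in multipliers:
        if not is_stable(m):
            break
        best = m
    return best

# e.g. if 45x is the first unstable multiplier, dial_in settles on 44x
```

In practice each `is_stable` probe is a BIOS change, a reboot, and an hour-plus of stress testing, which is why people step rather than binary-search.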
|
# ? May 13, 2012 03:28 |
|
I dropped it to 1.15v and now it runs OCCT Linpack at ~67 degrees.
|
# ? May 13, 2012 03:37 |
|
dunkman posted:I dropped it to 1.15v and now it runs OCCT Linpack at ~67 degrees. Yeah, Ivy Bridge is pretty leaky and the temperature at a given clock and voltage seems to rise rapidly, even at comparatively low clocks. I predict more people going for "golden chips" this time around; with Sandy Bridge, even a bog standard one could be reasonably expected to hit 44x-45x with a Hyper 212+ or Evo at a reasonable voltage. Decent chips started hitting that heat and voltage wall at around 46x. My 2600K (pre-2700K) will do 47x at 1.38V, the amended maximum safe voltage for Sandy Bridge, but it runs hot doing so - I've taken extreme measures, it has a Noctua NH-D14 with an optional third fan bolted onto it. As a result I could probably feed it 1.4V and shoot for 48x or go for broke and open it up to 1.425V-1.45V and shoot for 50x, but the truth is that performance gains are so minimal past 45x anyway that it's basically e-peen to go for those really high clocks. With Ivy Bridge, the voltage you were running at is, as FF noted, closer to what people running 44x-45x seem to be using (and may end up being a de facto "safe voltage" if Intel doesn't offer clarification, since the VID value is surprisingly high - like, Sandy Bridge high - and likely can't be trusted). Heat is going to be the primary limiting factor for the majority of Ivy Bridge overclocks, I'd guess, and an H60 would have to be using REALLY loud fans to move enough air over the radiator to help cut down the temperatures enough for a higher overclock. The current top-end closed loop pre-packaged liquid coolers do offer good performance; it's just poor performance if you take into consideration radiator space, CFM required, and noise. Hell, the pump on most closed loops is at least as loud as, usually louder than, my three-fan NH-D14. But that thing weighs like three pounds and looks like it was yanked out of a space shuttle cooling system; it virtually dwarfs the motherboard.
Custom liquid cooling has the advantage of removing the radiator to an arbitrary space and size, so you don't necessarily have to use really powerful Delta fans or whatever just to get competitive cooling. Give it enough surface area and it will passively radiate more than any high-end air cooler made. But in terms of efficiency for surface area for air-flow for noise, liquid loops are poor compared to modern heat pipes. I'd like to see vapor chamber cooling applications soon, it seems like a natural progression, especially given that it's already shown really impressive results on graphics cards.
|
# ? May 13, 2012 04:12 |
|
The best GPU cooling setups are still heat pipe, though. Vapor chamber just gets along real well with the default "blower exhaust all heat out of the computer because we don't trust the end user to have any airflow" design.
|
# ? May 13, 2012 04:34 |
|
I have an extra 80mm fan laying around, are you saying I should screw it to the "back" of the H60 radiator to get better temps? It's already screwed on to the one that comes with it on the back of my case. Edit: Did it anyways, now OCCT tops out at 60 degrees. Not bad, but a bit noisier. milquetoast child fucked around with this message at 05:14 on May 13, 2012 |
# ? May 13, 2012 04:38 |
|
dunkman posted:I have an extra 80mm fan laying around, are you saying I should screw it to the "back" of the H60 radiator to get better temps? It's already screwed on to the one that comes with it on the back of my case. This, but with 38mm-thick fans instead: Sanyo Denki/Panaflo (NMB) fans are generally built in that form factor with the intention that you'll hook them to a fan controller anyway. You'd see lower noise and temperatures (than the stock fan plus an additional 80mm) with two of them in push-pull, with the caveats that it would take more space due to the added thickness, and you'd want to get medium/high-specced fans and undervolt them on a controller for best results. SD fans have a Gentle Typhoon-style noise profile, so they're barely audible at 5-7V. You could also just get 2x 120x25mm fans, but the tiny radiator on the H60 means you'd have to run them faster (more noise) to overcome the resistance of the radiator. You'd have to get a little creative with the mounting solution. Zip-ties are easy but messy, although it wouldn't be that difficult to grab some longer bolts at your local hardware store as the H60 mounting bolts are a standard size. future ghost fucked around with this message at 06:40 on May 13, 2012 |
# ? May 13, 2012 06:37 |
|
Dogen posted:The best GPU cooling setups are still heat pipe, though. Vapor chamber just gets along real well with the default "blower exhaust all heat out of the computer because we don't trust the end user to have any airflow" design. To the best of my knowledge that's not really true - what is true is that the best aftermarket and non-reference cooling still uses 8mm heat pipes, not because they're superior to vapor chamber cooling but because they're still the most convenient when it comes to targeted wicking of heat from location A to location B for removal. Vapor chambers designed with regard to surface area and height still allow more efficient cooling, but adapting one to a non-blower cooler would require some creative thinking. Hence pretty much every aftermarket/non-reference cooling design coming in at either "juuuuust barely 2-slot", where it's questionable if you could actually run them adjacently, or straight up tossing the idea of 2-slot out the window and taking a full 3.
|
# ? May 13, 2012 12:03 |
|
Sir Unimaginative posted:Don't know. I reapplied paste anyway and dropped my load temps by 10C across the board. Still got a discrepancy between cores, but I'll take it!
|
# ? May 13, 2012 21:31 |
|
Factory Factory posted:That Vcore is way too high for a long-term safe overclock. That voltage would burn out Sandy Bridge prematurely, and its transistors are almost double the size. You seem to have good cooling if you're only getting 82 C (which is too high, and doubly dangerous with your voltage) at that high a Vcore, so dial the volts back to 1.3 and take what clock speed you can get. What would be a good offset config if I want to max at around 1.35 vcore? This option is nice but it's not really intuitive to mess around with.
|
# ? May 14, 2012 02:26 |
|
Glen Goobersmooches posted:I've been running my new 2550K @ 4.8GHz (it was the same price as the 2500K from my favorite vendor and I don't encode a drat thing ever okay!) nicely stable at 1.40 vcore for a while now on a Asus P8Z68-V Pro/Gen3 (never seen it exceed 68c in any core during BF3 in full swing). After reading the 1.38 vcore safety limit, I'm going to try to drop this to 1.30-1.35 and work up. Really, what are the hazards of keeping the 1.40v? Are we talking degrading performance or stability within months? I didn't think it was an issue but I suppose it's a designated upper limit for a reason, huh. I was going to say "if it's stable and doesn't get hot, don't worry about it" but then I saw that your temperature reference was Battlefield 3. Which, while a demanding game, is not really representative of a torture test load. What's it look like under IBT in admin mode with 4 threads, Very High stress, 5 iterations? (cancel the test if it starts topping 80ºC)
|
# ? May 14, 2012 02:45 |
|
Admittedly, I haven't run any of the regular stress tests and the temp numbers would be undoubtedly less sustainable. However, the most load my PC ever gets is from games like BF3, and as long as I'm never subject to BSODs in those programs I'm content. Is there a specific purpose to Prime95 and such besides finding the absolute limits to what your chip can do without losing its lunch? That said, I'd be remiss to not give it a try. Is this the right download for IBT? http://files.extremeoverclocking.com/file.php?f=213 I've dropped the core to 1.35v (couldn't get past win7 login without a BSOD before that!) and am anticipating a possible lockup in games. I might have to - dear god - drop 100MHz or two. This is a black day for baseball, and price/performance margins.
|
# ? May 14, 2012 02:54 |
|
A 24/7-safe overclock (to which the 1.38V limit applies) means "the chip lasts at least to the end of its warranty period, even if run at full 100% load 24 hours a day, 7 days a week." The primary problem you're working against is electromigration, the gradual decay of signal paths within the processor. While the process is gradual, the resulting instability will happen pretty much immediately once it starts. As in, one day the processor will work, and one day it will not. There is some variation involved - some CPUs will last longer than others given the same settings because of variations in the manufacturing process. But the 1.38V limit is Intel's recommendation for every single chip to last its warranty period running all day every day. If you're not running at 100% 24/7, you can actually get away with a bit more voltage and still have it last long enough. But if your CPU is particularly marginal or your cooling isn't good, the little extra voltage might kill it early anyway. So it's a matter of caution. Since you don't know whether it's a fragile chip or not beforehand, either you play it safe or you buy overclocking insurance in case it craps out within the warranty period.
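Electromigration lifetime is commonly modeled with Black's equation, and its temperature term alone shows why running cooler buys so much headroom. A purely illustrative sketch; the 0.7 eV activation energy is a textbook-style assumption, not an Intel-published figure:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K
ACTIVATION_EV = 0.7        # assumed activation energy; NOT an Intel spec

def relative_lifetime(cool_c, hot_c, ea=ACTIVATION_EV):
    """Temperature term of Black's equation: how many times longer the
    interconnects last at cool_c vs hot_c, current density held equal."""
    t_cool = cool_c + 273.15
    t_hot = hot_c + 273.15
    return math.exp((ea / K_BOLTZMANN_EV) * (1.0 / t_cool - 1.0 / t_hot))

# e.g. steady 60C instead of 80C works out to roughly 4x the lifetime
```

The exact multiplier depends entirely on the assumed activation energy and ignores the current-density term (which is where extra voltage bites), so treat this as intuition for the trend, not a warranty calculator.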
|
# ? May 14, 2012 03:14 |
|
No offense intended, but 100% of the info you need is in the OP series. It kinda sounds like you just "overclocked" it (scare-quotes intended). The purpose isn't really to test for PERFECT STABILITY(ilityilityility), since as someone smart mentioned, without ECC RAM, eventually even a perfectly operationally stable system will kick an error on tests which leave no margin for error because some stray radiation flips a bit and the calculation comes back wrong. It's more for testing for operational stability. A few IBT runs will tell you if your CPU and memory are doing what they're supposed to. If they're not, bad things happen. That may sound nebulous, but consider that every single damned thing that happens in your computer relies on your processor, all its integrated poo poo, and your RAM working as perfectly as can be expected. If your computation stuff decides to wig out while doing I/O operations, it can ruin files currently being written to, for example. Prime95 is the golden test because it's the most comprehensive measure of how your system will behave under heavy, but normal, computational loads involving both processor and RAM. It has your computer do a bunch of fancy poo poo to calculate a bunch of fancy versions of pi. That's seriously all it does. But, it does so in enough fancy ways that it gives your system a really solid workout. After each calculation, it compares the result to its table of known accurate results. If they all match, rad! Stable! If they all do not match, booooo, it'll stop that worker thread right there (or lock up your computer, or bluescreen, depending on the severity of the instability). Run Prime95 with Admin permissions for accuracy. 
As stated earlier, if you just let it stress test forever, even a stable system will eventually get an anomalous failure that's unrelated to operational stability because the parts involved in calculations and storing data temporarily are small enough that they can be affected by little sunspot emissions and stuff. 12 hours is a pretty safe test, 24 if you're anal. In my experience, an unstable system - one which will demonstrate issues of some kind at some point - will usually fail in some way during the three to five hour period. That was the case for a few Wolfdale/Yorkfield (last Core2 processors), and is the case for my Sandy Bridge 2600K system too. Getting to 12+ hour stability is my "aaaaand done" point in an overclock. Leave it on overnight or while you're away, once you've ascertained safe temperatures (no greater than 72.5ºC steady temps in Prime95). Speaking of temps and your high voltage and clock, watch for heat during IBT. Linpack, the stress testing utility that IBT acts as a handy front-end for, is, well, stressful. Really stressful. If you're too overclocked, it will push your processor well past the safe point for extended thermal operation. Nobody recommends prolonged IBT testing. 10 Standard runs, 5 Very High runs, 2 Maximum runs are my benchmarks there. Maximum runs are great for a quick assessment as to whether you might have memory-related instability. If you can pass 10 standards but your system hard locks on a maximum stress test, something is going wrong in the general communication between memory and processor. Solving that may require upping RAM voltage slightly, especially if you've got all of your board's DIMM sockets populated. If you are still unstable, bump VCCIO (the integrated memory controller voltage) up by one or two of your motherboard's increments, max. As memory control is integrated into the chip, you can expect these actions to raise your CPU temperature. 
If you suspect memory is causing you issues, run Memtest86+ at least one full go, preferably two or more; it's at least as good as Windows Memory Diagnostic, in my experience better at detecting unusual memory failures. BUT WHY DOES ALL THIS MATTER? I WAS PLAYING BF3 JUST FINE! Because you might be doing cumulative damage to your computer's hardware which will result in early component degradation, and you risk procedural reduction in data integrity as well. All that said, despite being a bit out of Intel's 24/7 recommendation, 1.4V is not an abnormally high voltage to reach 48x with a chip which can run at that multiplier to begin with. That's a pretty select group, by that point, too. Whether it's something you'll be able to safely do remains to be seen. It usually requires exceptional cooling and a good setup. That board features Asus' 12-phase Sandy Bridge VRM design, it won't get in your way if you've ensured that all phases are enabled and thermally controlled (t-probe, rather than maximum current at all times regardless of temperature). Edit: I should note that sometimes games and specialized processing can poke holes in stability tests, too - a system can be arbitrarily Prime95 stable and pass IBT flawlessly but still have instability if it's extremely marginal and a game or other application calls on it to do something that neither stability stress test would normally do. The third and final phase of "testing" is referred to as "just using your computer as normal, and noting if anything seems off or unstable." Agreed fucked around with this message at 03:37 on May 14, 2012 |
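The 72.5°C ceiling mentioned above is easy to police automatically while a long Prime95 run is unattended. A minimal sketch of the abort logic only, assuming you already have some way of sampling package temperature (nothing here is a real sensor API):

```python
TEMP_LIMIT_C = 72.5  # the published steady-state ceiling discussed above

def should_abort(samples, limit=TEMP_LIMIT_C, strikes=3):
    """True once `strikes` consecutive temperature samples exceed `limit`.
    Single-sample spikes are tolerated; sustained overheating is not."""
    consecutive = 0
    for temp in samples:
        consecutive = consecutive + 1 if temp > limit else 0
        if consecutive >= strikes:
            return True
    return False
```

Wired up to a once-a-second sensor poll and a `taskkill` on the stress tester, this is the difference between waking up to a passed 12-hour run and waking up to a throttled, cooked chip.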
# ? May 14, 2012 03:33 |
|
Some places use a big 7-zip decompression or video encode for real-world stress testers. They'll error out on unexpected behavior pretty reliably.
|
# ? May 14, 2012 03:48 |
|
Agreed posted:No offense intended, but 100% of the info you need is in the OP series. It kinda sounds like you just "overclocked" it (scare-quotes intended). The purpose isn't really to test for PERFECT STABILITY(ilityilityility), since as someone smart mentioned, without ECC RAM, eventually even a perfectly operationally stable system will kick an error on tests which leave no margin for error because some stray radiation flips a bit and the calculation comes back wrong. quote:All that said, despite being a bit out of Intel's 24/7 recommendation, 1.4V is not an abnormally high voltage to reach 48x with a chip which can run at that multiplier to begin with. That's a pretty select group, by that point, too. Whether it's something you'll be able to safely do remains to be seen. It usually requires exceptional cooling and a good setup. That board features Asus' 12-phase Sandy Bridge VRM design, it won't get in your way if you've ensured that all phases are enabled and thermally controlled (t-probe, rather than maximum current at all times regardless of temperature). Thanks for writing such a large, in-depth post. It's very helpful and I appreciate it. I did fully read the relevant part of the excellent OP, but had some specific queries about very particular numbers like this due to the original misinformation I had consumed. Like what kind of heat increase bumping the VRM frequency to 350MHz instead of auto might incur. And so on!
|
# ? May 14, 2012 14:54 |
|
Linpack first, then the 12 (or I'd do 24) hours of Prime95. If Linpack will error, it will do so much more quickly. No sense in spending the 12 hours if the 20 minutes would have told you immediately. -- On another note, I looked into MSI Sandy/Ivy boards' lack of Offset voltage, and it looks like they use Offset as the underlying mechanism, but keep it abstracted from the user. As in, the board auto-picks the offset given the CPU's base voltage in order to hit the user's target. I'd love to see somebody with an MSI board confirm this. The way to tell would be to watch Vcore on idle when C1E and EIST are enabled - if it drops along with frequency, then I'm right.
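That "watch Vcore on idle" check reduces to a tiny predicate over logged (frequency, Vcore) pairs. A sketch of the decision only - the 0.05V threshold is a made-up heuristic, and getting the samples is left to whatever monitoring tool you trust:

```python
def looks_like_offset(samples, min_drop_v=0.05):
    """samples: (mhz, vcore) pairs logged at the desktop with C1E/EIST on.
    If Vcore at the lowest clock sits well below Vcore at the highest clock,
    the board is applying an offset under the hood rather than a fixed Vcore."""
    lowest = min(samples)   # tuples compare on MHz first
    highest = max(samples)
    return (highest[1] - lowest[1]) >= min_drop_v

# offset-style behavior: 1600MHz @ 0.96V vs 4200MHz @ 1.25V -> True
# true constant voltage: Vcore pinned regardless of clock -> False
```

If it comes back False with C1E and EIST verifiably enabled, the board really is holding constant voltage and the offset theory is wrong.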
|
# ? May 14, 2012 16:13 |
|
I feel like my VCore is a bit high. I have an i5-2500K on an ASUS P8P67 Rev 3.1 mobo, clocked to 4.2GHz. At times it's hitting 1.32. Should I be alarmed? When I OC'ed, I did nothing more than set "turbo by all cores 42X" and disable PLL overvolting. I know the "safe" limit is 1.37, but I feel like I shouldn't even be coming close to that. From da OP: Factory Factory posted:Be sure to enable all of your VRM phases. By default, motherboards will operate their VRMs to mimic the 4+1 phase Intel reference design.
|
# ? May 14, 2012 17:47 |
|
betterinsodapop posted:From da OP: I believe it'll be under the 'Phase Control' option; going by memory here, but I think the Asus manual says that 'Extreme' gives you all the phases. Standard varies with CPU load and Optimized does...something.
|
# ? May 14, 2012 17:55 |
|
Optimized uses fewer phases on idle and all phases on load. As long as it's stable (which it should be), it's the best balance of power savings and performance. Duty control should be T.Probe. You can do this from UEFI or from the Digi+ VRM plug-in to AI Suite.
|
# ? May 14, 2012 18:00 |
|
Anyone shooting for a 47x+ overclock at higher voltage really ought to be ensuring the load is balanced across them - for "normal" overclocks, Optimized is probably fine, but if your OC is legitimately rather extreme, use Extreme. Do always use t-probe, though, unless you want to cook parts. AI Suite is the god-damned devil, no one should use it. Bloatware, gets in the way of overclocking. Keep the temperature monitoring part, if you want to, but they're pretty bad about keeping the BIOS and the software able to communicate accurately, and absurd temperature readings are pretty commonplace in AIsuite.
|
# ? May 14, 2012 18:07 |
|
Aha! I'm going to switch to Optimized and use the T.Probe, if it isn't already set up that way by default. I've heard some BAD things about AI Suite, which was why I was hoping to avoid it. Glad I should be able to do this from UEFI. You guys are the best.
|
# ? May 14, 2012 18:24 |
|
If you're feeling really [H]ardcore, it looks like replacing the TIM inside the integrated heat spreader on your Ivy Bridge processor will give you up to 23% lower temperatures under load! http://www.techpowerup.com/165882/TIM-is-Behind-Ivy-Bridge-Temperatures-After-All.html That's actually a pretty huge temperature difference - the guys on the Japanese site went from 84°C under load at 4.6GHz/1.2V to 69°C under load by reapplying the thermal paste.
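Quick sanity check on that 23% figure: 84°C down to 69°C is only about an 18% drop in absolute terms, but it works out to ~23% if measured against the delta over ambient. The ambient value below is my assumption; the article doesn't state it:

```python
ambient_c = 20.0                 # assumed room temperature, not from the article
before_c, after_c = 84.0, 69.0   # load temps at 4.6GHz/1.2V: stock TIM vs. repasted

# reduction as a fraction of the temperature rise above ambient
reduction = ((before_c - ambient_c) - (after_c - ambient_c)) / (before_c - ambient_c)
print(f"{reduction:.0%}")        # prints 23%
```

Delta over ambient is the honest way to compare coolers anyway, since no cooler can take the chip below room temperature.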
|
# ? May 14, 2012 19:16 |
|
I respectfully disagree on AI Suite. Specifically, I think it's useful for more than the temperature monitoring, and also the temperature monitoring can be broken so don't trust it for that. I stripped down my install to only the TurboV EVO/Digi+ VRM modules and the BIOS updater, and I barely use it, but I have a 6-series board with an already-dialed-in overclock. The 7-series version of AI Suite includes some pretty good fan control stuff and finer control on the overclocking parts with increased stability for on-the-fly frequency/volt changes. Basically, every review of 7-series boards says the in-OS overclocking software is exactly as advertised in terms of functionality. At worst, that functionality isn't very interesting or the interface is slow/clunky. But the stuff is definitely not so bad as to be shunned on principle. Overclocking SNB/IVB is simply a matter of telling an already-variable voltage regulator to change its output and setting a different ACPI p-state target. It's changing stock behavior only in terms of the specific numbers, not by any wild change in mechanisms. It's got a lot fewer failure points than, say, changing the FSB frequency on a Core 2 system.
|
# ? May 14, 2012 20:07 |
|
I'm going to have to remain skeptical of its functionality on the basis that they don't keep the software and BIOS updates in sync. If they support it, great, but as you note their temperature monitoring is all over the place (I had one BIOS where the extra special sensors on my Sabertooth P67 would show temps correctly in the then-current AI Suite; after that it's always been some sensors reading negative and others melting). Perhaps it's a difference of focus; perhaps the BIOS updates don't affect the TPU/EPU software integration. But even you had issues with it previously - permissions problems - and given that the Ivy Bridge chipset and motherboards are still pretty new and will surely see plenty of BIOS updates, I just don't think it should be relied on to accomplish what is easily and safely done from within the UEFI. I'd love for them to prove me wrong in the end; it's not like I wouldn't appreciate a utility which allows robust overclocking and configuration within the operating environment.
|
# ? May 14, 2012 20:47 |
|
For anyone with an ATI/AMD card running Catalyst 12.3 or higher: If you want to use MSI Afterburner (the latest version is the non-beta; Beta15 just expired today) with Catalyst 12.3 or higher, and you want to unlock the overclocking limits, you'll need to do some extra steps. AMD removed a couple .dll files from the newer Catalyst drivers, so the files that Afterburner looks for are no longer there. You'll need to add them in. Save the .dll files from here and extract both of them into Afterburner's root directory. I copied atipdl64.dll to \Windows\system32\ & atipdlxx.dll to \Windows\SysWOW64\ per the instructions, but the AB root folder should suffice. After this you'll want to run AB once (allow it to reboot when asked) to get the correct configuration profile for your card. This first run creates the /Profiles/ folder in AB's root directory. Once it's back in Windows, you'll need to make sure that BOTH MSIAfterburner.cfg files have the following modifications (make sure AB is closed when you do this):
[ATIADLHAL]
UnofficialOverclockingMode=1
UnofficialOverclockingEULA=I confirm that I am aware of unofficial overclocking limitations and fully understand that MSI will not provide me any support on it
The cfg files are located at:
1) /MSI Afterburner/Profiles/MSIAfterburner.cfg (you'll need to add the above section to the bottom of this file)
2) /MSI Afterburner/MSIAfterburner.cfg (add the above values to the existing lines, as the section is already present)
After you make the above modifications, run Afterburner and you can overclock beyond CCC limits and enable voltage control or set up fan profiles or whatever. Enter your clocks manually as the toggles are pretty sensitive.
This is the guru3d thread talking about the .dll change with 12.3 and higher: http://forums.guru3d.com/showthread.php?t=359671 I've been playing around with the new AB version, and while there's not many changes from the latest beta, they added powertune options to the main window which is a nice feature if you want to pump more voltage into your card for some maxx-epeen-overclocking (aftermarket cooling whatup).
|
# ? May 15, 2012 01:22 |
|
Newbie question: What does '24/7 stable' mean in regards to overclocking? Does it mean that your CPU is considered stable doing intensive CPU related tasks 24/7 or just leaving your computer on 24/7? Also, what are safe 24/7 temps, specifically for the 3570k?
|
# ? May 15, 2012 02:47 |
|
Factory Factory posted:A 24/7-safe overclock (to which the 1.38V limit applies) means "the chip lasts at least to the end of its warranty period, even if run at full 100% load 24 hours a day, 7 days a week." And 72-ish degrees centigrade.
|
# ? May 15, 2012 02:50 |
|
I'm planning out a 3570K/Hyper 212 Evo/ASUS GTX 670 DirectCU II/Z77 ATX system, and I reckon I've got my case choice down to two: the FD Define R3, at £70 ($110), or the FD Arc Midi, at £55 ($90). In the relevant Anandtech benches, the Arc Midi is the clear winner in terms of temps. At idle though, it's ~3 times louder than the R3. Now, the Arc Midi review criticises the fans (stock: 3x 140mm 66cfm 19dB), and suggests that the cooling performance could be maintained with some quieter 140mm's. The R3 review on the other hand suggests that the cooling (stock: 2x 120mm 40cfm 15dB) would likely be improved - whilst maintaining the noise profile - with an extra pair of fans. So, potential options:
- Arc Midi, stock (£55)
- Define R3, stock (£70)
- Arc Midi, swapped-out fans (<£85, guessing <£10/fan)
- Define R3, extra pair of fans (<£90, guessing <£10/fan)
- Fifth option that I've missed completely due to Fractal Design & Anandtech tunnel vision (Corsair 500R? Shinobi with more fans?)
If you've an alternative to suggest though, be aware that I'm very attached to the black monolith school of case design. Which is the best choice? Considering my build is already verging on £1,500 and I'm planning on keeping this thing 'till 2017, I've no problem with spending the extra if it makes for a noticeable improvement. Right now I'm leaning towards the 4th option, as the extra two fans would bring the R3's airflow up to roughly the same cfm as the Midi. As they're of largely the same interior design, I suspect this'd give cooling figures similar to the Midi's. Knowing nothing about this stuff however, here I am. (Define R3 review, Arc Midi review) coffeetable fucked around with this message at 17:36 on May 15, 2012 |
# ? May 15, 2012 17:13 |
|
If you want a "black monolith", you want the Define XL. http://www.fractal-design.com/?view=product&prod=68 It's just so... big. Also, I have one, and it keeps my OC'd 3570K at under 65C @ 4.53GHz. Also, you literally can't hear it when it's on. Did I mention it's big? Because it makes my Noctua NH-D14 look small. Edit: If you do get that case, make sure your PSU has long enough cables. My Seasonic X750 Gold's CPU 8-pin wasn't long enough to route behind the motherboard tray. I actually had to run it UNDER my video card. KillHour fucked around with this message at 17:55 on May 15, 2012 |
# ? May 15, 2012 17:52 |
|
Love my Corsair 500R. Looks clean, reasonably quiet stock fans, proper front USB3.0 port headers (can use USB 2.0 or 3.0 headers) and was built to be used with the H100. Though no case can help my ambient temps during the summer especially with these 6 cores + SLI 560's pushing out the heat they do. So much airflow coming out of the case makes the room pretty warm lol.
|
# ? May 15, 2012 18:58 |
|
KillHour posted:Noctua NH-D14 Man, an extender for that ought to be packaged with PSUs. I always have to do some goofy poo poo like run it alongside the rear fan and tighten it to the fan with a zip-tie or something like that. And in this case, as a fellow NH-D14 owner who did not plug in the 8-pin 'til the mobo and PSU were installed, let me tell you, plugging that barely-long-enough fucker in... was an unpleasant experience.
|
# ? May 15, 2012 19:18 |