Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Too long, didn't...


No. I mustn't. Real thought: great job. I'm still gathering data to try to put together a cogent graphics card effortpost, but until Kepler is more than a good idea there's not a lot I feel I can do there.


Agreed

kingcobweb posted:

What do people like to use for GPU stability testing (other than a game)? I tried the one included in OCCT, but it would remain stable at high clocks... then I would get a blank screen when opening Chrome v:confused:v

Unigine Heaven is great at showing you errors of varying severity. 3DMark 11 is good, too, but running that sequentially is way more of a pain in the rear end than Heaven. The thing to remember, though, is that DX9, DX10, and DX11 are different, and your card might wall at different frequencies for each. E.g. my 580 will do about 950MHz in DX9 stably, but 920MHz in DX10 and 925MHz in DX11. Different parts getting stressed. I go with the lowest common denominator because I can't be arsed, personally.
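That per-API wall is easy to keep track of if you jot the numbers down. A trivial sketch of the lowest-common-denominator approach in Python (the clocks are just my 580's figures from above, for illustration):

```python
# Stable core clocks (MHz) found by testing each DirectX mode
# separately in Heaven/3DMark -- one 580's figures, for illustration.
stable_clocks = {"DX9": 950, "DX10": 920, "DX11": 925}

# Lowest common denominator: run everything at the lowest clock
# that passed in every rendering mode.
safe_clock = min(stable_clocks.values())
print(safe_clock)  # 920
```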

Here are extreme examples of Heaven artifacting. One looks more like CPU trouble (disco poo poo popping up), whereas the other... looks like he shouldn't have flashed that 6950, frankly, holy poo poo. CPU and shaders.

GTX 560Ti SLI artifacts in Heaven
https://www.youtube.com/watch?v=WJLGBqKywUs

HD6950 (misguidedly, it appears) unlocked to 6970 artifacts in Heaven
https://www.youtube.com/watch?v=e5VGkvZcbTk

Those both look like GPU/Shader artifacting to my eyes. Memory manifests as either texture issues, large geometric spaces with peculiar color, or a driver crash (because previous-gen - that's pre-7970 - cards typically used fast GDDR5 but slower memory controllers and overclocking the memory tended to be a bit of a fool's errand).

Edit: Spelling.

Agreed fucked around with this message at 21:18 on Feb 8, 2012

Agreed

Animal posted:

Thanks for this tip. DX10 seems to run way hotter than DX11. That's curious.

That has been my experience as well. Could be because there's less limitation on raw FPS since it's not having to do DX11 tessellation, ADoF, etc., and so the GPU can really get going.

Agreed

Splash Damage posted:

I used the ATi Overdrive Tool to auto-detect the overclocking settings and tried playing Battlefield 3 with those. While the performance was improved, the game keeps crashing almost immediately after launching (about a minute in). What could be the problem?

Auto-detection tends to suck.

Driver crashes are often your card's way of telling you that something just really can't take it. It can be kind of frustrating isolating what, specifically, because you've got several variables in play, but if the game runs fine visually with no noticeable artifacts and then crashes out, I'd guess video memory/the video memory controller. Generally speaking, fast GDDR5 ruled the day last generation, but memory controllers often limited the clock potential. In fact, higher overclocks on the core/shaders can sometimes be had by leaving memory clocks completely alone and just focusing on voltage and core/shader clocks.

Bottom line for you is that the auto-overclock made some errors in computerized judgment, and you're going to want to reset it to stock settings and do it manually. For videocards, you can do core/shader bumps of as much as 15MHz to even 25MHz at a time, because they have robust protection from death, generally speaking, and will do what you experienced (crash out precipitously, then recover) within the OS itself without having to do the whole BSOD thing. For nVidia cards, driver crashes are followed by a restoration to factory default clocks and voltages - I would assume that's true for ATI as well, but I don't know. Factory Factory or another ATI user care to chime in?

The process for OCing videocard cores really isn't far off the process for OCing processors; you just do it from within your OS and use tools like the Unigine Heaven benchmark, Furmark/OCCT, EVGA Precision/MSI whatsitcalled... It's software-based overclocking unless you want to get fancy with flashing a non-factory BIOS. Don't get fancy, trust me, it's a pain in the rear end unless you bought a 7970 and just HAVE to see the biggest numbers on planet earth or something. Generally you'll hit a wall in your cores/shaders/memory before you max out your stock voltage.
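The bump-and-test routine boils down to a loop like this - a sketch only, where `is_stable` stands in for an actual Heaven/OCCT pass rather than anything a script can really do for you:

```python
def find_stable_clock(base_mhz, step_mhz, is_stable):
    """Raise the core clock one bump at a time until the stability
    test fails, then settle on the last clock that passed."""
    clock = base_mhz
    while is_stable(clock + step_mhz):
        clock += step_mhz
    return clock

# Pretend the card walls at 920MHz; start from a 772MHz stock clock
# and take 15MHz bumps:
print(find_stable_clock(772, 15, lambda mhz: mhz <= 920))  # 907
```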

Speaking of which, another thing that can cause crashes is a card drawing too much power. By default, protection circuitry is pretty hardy on reference designs, and there's hardware and software protection from drawing too much power. So you kinda get to play the "chip lottery" whether you like it or not. I got a GTX 580 that will run at 920MHz core at 1.25V in all DX rendering modes. That's pretty good - it brings it to nearly identical performance with a stock 7970 in games where VRAM isn't the bottleneck - but some folks get 'em up to 950MHz or higher at lower voltages. Since clockrate and voltage are both part of the total power draw calculation (and heat, which adds resistance as well - set up a custom fan profile to cool more aggressively when overclocking your card heavily), you might end up running up against what the card will allow power-wise even though technically it could go faster.
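To see why voltage dominates the power picture: dynamic power scales roughly with frequency times voltage squared. A back-of-the-envelope sketch - the stock voltage here is an assumed figure for illustration, not a measured one:

```python
def relative_power(f_mhz, volts, f0_mhz, v0_volts):
    """First-order approximation of dynamic power draw relative to
    stock: P ~ f * V^2. Ignores leakage, temperature, etc."""
    return (f_mhz / f0_mhz) * (volts / v0_volts) ** 2

# A 920MHz @ 1.25V overclock vs. an assumed 772MHz @ 1.088V stock:
print(round(relative_power(920, 1.25, 772, 1.088), 2))  # 1.57
```

So a ~19% clock bump with that voltage increase puts you around 1.5x the stock power draw, which is exactly how you hit the card's power ceiling before its actual clock ceiling.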

That's when you could, if you were tempted, get fancy and turn off protections and flash a custom BIOS and generally make a lot of errors in judgment and probably fry a perfectly good card, so be careful chasing those high numbers :)

Agreed

Splash Damage posted:

I don't understand a word you've posted. How would I go about overclocking an ATi Radeon 4800HD just a little bit, to get consistent 30fps in BF3?

A miracle, or extremely low resolutions with all settings down very very far.

It's time to upgrade; your card had a good run. There are excellent price:performance choices right now. To make a proper recommendation I need to know your resolution as well as what kind of settings you're after for that 30fps - preliminarily I'd recommend a 6850 as a killer budget card. Oh, also: your budget, if you have one. You can't play the latest hot-poo poo games at reasonably good-looking settings on hardware that's several generations old, unfortunately, so you're looking at either a LOT of compromises, or getting with the times.

Well, approaching the times cautiously, so as not to startle the times. The 6850 is in some important ways a two-generation-old card, very similar to the higher-end models of the 5800 series, but it overclocks well. (Though if you genuinely didn't get a word of that - and I would understand, since you tried the automatic utility first - overclocking might not be for you. Maybe consider a 560 Ti if you can budget it; that's a card everyone can get great framerates in BF3 with, even up to 1080p, so long as you aren't looking for maxed settings, and it'll still look a hell of a lot better than consoles :)).

Agreed

VorpalFish posted:

A 560ti is the last card I would recommend to someone looking for BF3 performance unless they finally fixed the awful triangular artifacting?

Edit: Less snarky, sorry, too early.

The devs of BF3 have been working on that one; it doesn't seem to be an nVidia problem, but rather some unpleasant confluence of issues. I think both companies have worked substantially to fix it, and it is mainly a "launch" problem at this point, with the vast majority of complaints months ago and hardly any since that could be tied to a specific, repeatable cause (in other words, individuals with poorly overclocked or just lovely videocards might not be getting the best experience, but for most folks, it seems like BF3 on a 560 Ti is doing fine now).

Agreed fucked around with this message at 18:29 on Feb 11, 2012

Agreed

The 6850/6870 is still probably a better choice, just on value merits, depending on what kind of experience he's looking for and what his monitor's resolution is. And what card he has now.

Hey, man, specific stuff:

1. What is your monitor's resolution?

2. What is the exact type of card you have now? No way you'll get any real performance gains overclocking it if it isn't top-tier from the time period, and even then, you're definitely looking at lowering settings/resolution for a minimum-30fps experience.

3. Are you interested in upgrading? If so, what's your budget? Depending on exactly how fancy you want to get, there are a lot of options ranging from about $150-$160 up to around $300 that are on the price:performance curve and wouldn't require you to screw around with anything unless you just wanted to. Overclocking nets performance gains, but at the cost of a lot of dicking around, and it does require an investment of time in learning how to do it. While this isn't really the thread for people who don't care about overclocking, it's still SH/SC: we try to help if we can, and we do understand if you're not up for putting in the effort to learn to overclock a card coming from knowing nothing about it. There's not much to learn, but if you just don't have time, that's your business.

HalloKitty posted:

Unless it's the XFX 4890, which I had. It had cheap analogue VRMs that were uncooled, and didn't like it when you did... anything in the way of overclocking.

What is it with XFX and cheaping out on power delivery? They'd be a great brand otherwise, but you just can't trust them not to have the bean counters go over every aspect of the reference board and take out anything that isn't necessary to achieve the stock clocks. They don't do it on every model, but they do it with alarming regularity on enough models that I feel like nobody ought to trust 'em.

Agreed fucked around with this message at 19:55 on Feb 11, 2012

Agreed

I've been thinking about reseating mine with the new hardware they mailed me, but my temps are great and I can't be arsed; it's got three fans and cools ridiculously well. I reseated it once and gained 3°C, then added a third fan for another 2-3°C under load. It's Done and I need to just relax.

Been overclocking my graphics card more lately, I think I might be able to get another 50MHz base memory clock at the same core clock :toot:

Kepler can't come soon enough. The wait is killing me. I've given up on moderation: I'm going to buy a high-end part when it launches, if it manages a substantial performance increase over ATI's 7970, so I can put the 580 in my backup machine and give its CUDA a serious kick in the rear. And nVidia doesn't usually launch the GTX 470/570/670-tier cards until they've got some less-than-ideal chips to make them with in quantity, so top dollar and top end it will be.

Agreed

Dogen posted:

I want to overclock this 580 some more, but I have to up the voltage to go over ~860MHz core, and the increase in fan speed just isn't worth it to me. I get the feeling I could hit 1GHz with it if I wanted to.

That was exactly my experience, and I figure if I flashed a higher-voltage BIOS I could get high 900s rather than the compromise 920MHz, but now that my core's dialed in, I really feel like there's room for the memory. It's EVGA's implementation of the reference cooler, which means everything is thermal-interface attached (well, thermal pad - hey, it works fine) and stays cool, and I'm at 920MHz core/1840MHz shaders and 4200MHz effective GDDR5 (or 1050MHz base/2100MHz DDR). Feels like there's room on the RAM. I don't need it, and I don't think it'd really do anything more, but the core is rock solid now and I just want to tweak it more, really.
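Since those three memory numbers trip people up constantly: they're all the same setting, just multiplied differently, because GDDR5 is quad-pumped. A trivial sketch:

```python
def gddr5_rates(base_mhz):
    """GDDR5 transfers four times per base command clock, so tools
    variously report the base clock, the DDR rate (2x), or the
    'effective' rate (4x)."""
    return {"base": base_mhz, "ddr": base_mhz * 2, "effective": base_mhz * 4}

print(gddr5_rates(1050))  # {'base': 1050, 'ddr': 2100, 'effective': 4200}
```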

Agreed

redeyes posted:

This rules, actually. I love being stuck at 3.8GHz with no stupid declocking. I do low-latency audio recording and this is extremely useful. Thanks.

Hey, buddy, me too :) I'll help if I can if you have any specific questions.

Agreed

I like Unigine because it's really quick to show artifacts in modern cards that are too overclocked, thanks to the specific features it's showing off. The various Furmark-alikes (EVGA OC Scanner, OCCT) can tell you about power draw, and that is useful, but apart from that, no game (even the most demanding ones) will heat up a card like OCCT will. It's sort of the IntelBurnTest of graphics card overclocking. Unigine is a useful tool to spot "my card is stable enough to load games but has occasional artifacts" problems, in the DX10/DX11 modes especially. Just a quick check, and easier to use (and free) compared to 3DMark 11.

But the rest is, really, just down to playing games and seeing if you get driver crashes once you're really sure you've got a stable overclock. Games aren't made like stress tests or tech demos, they have all kinds of their own stuff going on and you may not know whether your overclock is stable or not until you've run it through whatever game you haven't played yet that's especially demanding.

Before Metro 2033, my overclock was 960MHz core. Then it was 925MHz, until I spent more time playing S.T.A.L.K.E.R. Clear Sky Complete mod with Atmosfear 2, and CoP Swartz Mod and SGM 2.1 with Atmosfear 3 - their graphical features in DX10 mode brought my formerly "stable" 925MHz overclock to driver crashes. Now it's stable at 920MHz. I figure since that doesn't artifact in any stress tests or fancy tech demos and plays demanding games for long sessions without issues, it's stable - but there's always the possibility some new graphical knick-knack will come out and do things in a different way that stresses the card's logic somewhere new, and that 920MHz "solid as a rock" overclock may have to go down again.

Graphics card overclocking is a lot more trial-and-error than CPU overclocking, basically. A solid power supply and a quality motherboard are absolute requirements for heavy GPU overclocking, because you can approach the practical limits of what a PCIe slot and one or two PCIe power connectors can put out, and if you do, you need to be absolutely certain that your power supply won't have issues serving power, and that the motherboard's own power delivery to the slot is quality.

Agreed

Schiavona posted:

Can you go into this a little more? I was under the impression that the mobo limited CPU OC'ing, though it makes sense that it also has an effect on the GPU. What decides quality/power delivery?

Part choices, mainly. Cheap motherboards are cheap because they use cheap parts, or fewer parts, or even fewer & cheaper parts (and often have kinda poo poo QA, as well).

Motherboards range from "have the electronics-savvy bean counters go through and see every possible area that can be cut down on and have the computer still turn on and work" all the way to "really overkill but hey it's one of the vital components why not."

Agreed

Picked up 16GB to replace the RAM I've got now. I may just sit on it for a while and resell it unopened at higher prices - you seem to be very right that its price is trending up a lot - but at 4.7GHz on a 2600K, and with everything work-related on SSDs, I could see some actual improvement with higher clocks and better timings than the 9-9-9-24-2T 1600MHz that I've got in now.

So there's even money I'll rip it out of the box as soon as it gets here, go through the pain in the rear end process of taking my NH-D14's fan off, and stick that stuff in my motherboard to see what kind of tweaking I can get up to.

Hell, used prices seem to be keeping up with or exceeding new prices anyway.


I canceled my order. It is good RAM, obviously - but I just don't really have a problem that it solves, so... Not for me.

Agreed fucked around with this message at 21:29 on Feb 19, 2012

Agreed

movax posted:

Is it just me or are they nice and low-profile too?

They're 30nm - low-profile, low-voltage, and extremely high-performance. They're just also not very necessary if your system's already built. Great for new buyers, though.

Agreed

Dogen posted:

Hahaha they're $100 on Amazon now.

Not that I need to replace my sweet fast low voltage Mushkin anyway.

Amazon has an rear end in a top hat/awesome price matcher that seems to be able to catch trends ridiculously fast and run with supply/demand (rear end in a top hat) but also auto-match big sales (awesome).

When I ordered earlier it was like $56 for an 8GB pack; now it's taken off, so screw that. It's just the first in a series of next-gen RAM. Samsung's been profitable (the only profitable RAM division in the industry in 2011, iirc?) and continues producing high-quality memory, so it's not going to cost that much forever by any means.

Really exciting to see the performance potential of low-voltage, smaller-process RAM, though. Whenever Intel decides consumers have whacked the SB piñata enough and starts shipping Ivy Bridge, there ought to be some great memory on the market.

Agreed

Emanuel Yam posted:

I want to know whether the OP was being facetious with the 'Set the CPU multiplier to 42, save and exit' advice? Because I don't really want to crank it too much before I really know what I'm doing, but it seems crazy not to go for a near-1GHz increase for so little effort... Do I need to adjust anything else in the BIOS? Is it not that simple?

Sandy Bridge is awesome, is what's up there. Not facetious in the least. Not just 1GHz, either: 1GHz and turbo on ALL cores, instead of turbo managed within the stock TDP (up to 3.8GHz on one core, with the others clocked lower). The actual speed increase in multi-threaded applications is dramatic. Easy and killer, really can't go wrong.
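The arithmetic behind that multiplier setting, for anyone wondering where the numbers come from - Sandy Bridge's base clock is effectively locked at 100MHz, so the multiplier is the whole story:

```python
BCLK_MHZ = 100  # effectively fixed on Sandy Bridge

def core_ghz(multiplier):
    """Core clock is just multiplier x base clock."""
    return multiplier * BCLK_MHZ / 1000

# Stock 2600K turbo peaks at 3.8GHz on a single core; a 42x
# multiplier gives 4.2GHz on ALL cores at once.
print(core_ghz(42))  # 4.2
```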

Agreed

Animal posted:

If his CPU was only at 45% load, then he was probably using Quick Sync

I had a similar HOLY gently caress moment with my move from the old 2800+ / 1GB of RAM build (overclocked a bit, with a GIGANTIC copper heatsink - it fit an 80mm fan!!!) to a C2Q Q9550 with 8GB of DDR2 1066MHz :smuggo: in 2008.

Immediately and painlessly overclocked to 3.4GHz/core on stock voltage (ahh, the time I had a Golden Chip... I mean, I still have it, it's at 3.8GHz with a bit of a voltage bump now and still turns in nice numbers, just not 2600k-at-4.7GHz nice).

Went from transcoding a video in several hours to roughly 7 minutes, it blew my mind.

But I agree, with such low CPU utilization, that would have to be extremely I/O bound (which it obviously isn't to have executed in under 7 minutes).

Agreed

Son of a bitch, my machine went all Prime95-unstable on me again (related to the last BIOS update - I never did go through and do it right to figure out what stable settings would be for the new BIOS revision... Motherboards: if they ain't broke, don't fix 'em, folks. Ugh). Time for more voltage.

Luckily my Corsair mesh side panel came in today, and I've got three 200mm fans pushing air into the case now. It is positively frosty in there. Lowered my overclocked reference-design videocard temps by ~7°C, and now I've got substantially positive air pressure, since it's 2x200mm in and 1x120mm/1x200mm out. Take that, dust.
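Pressure balance is just intake CFM minus exhaust CFM. The ratings below are made-up placeholders to show the arithmetic - substitute your fans' actual rated airflow:

```python
# Hypothetical CFM ratings for illustration -- check your fans' specs.
intake_cfm = [166, 166]   # 2x 200mm intake
exhaust_cfm = [166, 74]   # 1x 200mm + 1x 120mm exhaust
net_cfm = sum(intake_cfm) - sum(exhaust_cfm)
print("positive" if net_cfm > 0 else "negative", net_cfm)  # positive 92
```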

Next video card will definitely be an internal multi-fan design - the machine, despite its many fans, would be very quiet except for the WHIIIRRRRRR from the reference blower. Definitely keeps the card cool, though; it's at 1.138V and running like a champ under OCCT and other high-power-draw stress tests.

Edit: I must have had some turbulence this is helping with - it dropped my CPU temps by about 5°C as well. They go -up- when I take the side of the case off. Airflow matters; I know some disagree, and I understand their perspective, but it's not my experience.

Agreed fucked around with this message at 04:12 on Feb 28, 2012

Agreed

Yinzer posted:

I hope this is the right thread for it; I think it is, since it has to do with OC'ing. To those that have EVGA Precision: is it standard to uninstall the old version and then do a fresh install with the new version, or can I just install the newest version of the program and have it override the old?

Install it over the old one; it's safe. I've never tried the other way - perhaps it's the same - but installing on top of the existing version will definitely maintain your custom fan/voltage/OC profiles.

Agreed

I used archery serving to tie an old stock AMD processor fan over the (removed) fan mounting shroud on a Leadtek WinFast 6800 GT that I had overclocked to Ultra specs. Worked for years, didn't raise my temps. The shroud was a massive hunk of copper... Ah, the pre-heatpipe years. I tied it off with serving because...

1. the stock fan, busted or not, HAD to be plugged in or else it wouldn't POST; it was a weird connector, so I had to keep that busted thing around, electrical-taped to the side of the card.

2. serving doesn't stretch, isn't susceptible to loosening by vibration, and is slightly resistant to simple sharp-edge wear, by design.

3. I was able to position it carefully to tie it off, without impeding the fan's airflow.

It was definitely a MacGyver fix, but it kept it cool at Ultra specs, and it only eventually died because, if I recall correctly, the RAMDAC went... Can't remember that well, though. But I did get years out of it, and it was way better than trying to deal with Leadtek WinFast support.

Agreed

Run IBT in administrator mode and set the number of threads to your number of threads (e.g. 2500K, 4 threads; 2600K/2700K, 8 threads). For a super quick crash test, run a default standard stress check 5 times. For a more thorough stress test, run "Very High" 5 times, then maximum stress for 2 runs to give your CPU, RAM, and integrated memory controller a workout. Maximum stress eats all available RAM; if you're unstable for memory reasons, it can make that show up sooner than Prime95 will. Sometimes you need to increase the voltage to the RAM or the VCCIO very slightly when overclocking. When using IBT, don't let temps get too close to or above 80°C. If you can pass these, you're stable enough to do the real stability testing (and you should probably not run IBT anymore, since the high temps and full exercise of the processor logic are hard on the CPU and supporting hardware, and nothing else will ever get it that stressed, pretty much ever).
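If you're not sure what to type into the threads box, the OS already knows - logical threads, not physical cores, is the number IBT wants (4 on a 2500K with no Hyper-Threading, 8 on a 2600K/2700K):

```python
import os

# os.cpu_count() reports logical threads: 4 on a 2500K, 8 on a
# 2600K/2700K with Hyper-Threading enabled.
print(os.cpu_count())
```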

Stability testing after that is just Prime95. Make sure to run it in administrator mode so Prime95 can make sense of your processor, or it'll make assumptions instead. Blend mode for 12+ hours should tell you whether your system is stable or not. The maximum-heat and small FFT modes are both sort of "specialty" tests, good for stressing the CPU specifically, but Blend is nice because it systematically works through your processor and RAM (and the relationship between them in fetching and execution) and so will root out low-level instabilities nicely.

Monitor temps throughout the process.

Agreed

They sold it to you as being capable of running at the stock clocks, RMA that sumbitch :clint:

Agreed

You also buy some throttling room (without hacked firmware, the cards are pretty insistent about staying within a given thermal envelope, which changes depending on the card and manufacturer). Still, if you hit a core/shaders wall, provided that your memory controller and VRAM chips are well cooled, it really can't hurt to raise your GDDR5 speed too, within tested safe and stable limits. It will increase bandwidth, which is good for certain AA methods, for example.
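The bandwidth gain from a memory overclock is strictly linear in the transfer rate. A sketch using the GTX 580's 384-bit bus and 4008 MT/s stock effective rate (the 4400 figure is a hypothetical overclock):

```python
def bandwidth_gbs(effective_mts, bus_bits):
    """Memory bandwidth in GB/s: effective transfer rate (MT/s)
    times bus width in bytes, divided by 1000."""
    return effective_mts * (bus_bits // 8) / 1000

print(bandwidth_gbs(4008, 384))  # stock GTX 580 -> 192.384
print(bandwidth_gbs(4400, 384))  # overclocked   -> 211.2
```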

Speaking of which, you can get totally lost in nVidia Inspector dicking around with the "hidden" AA modes, altering compatibility flags, etc.; I've finally got my GTX 580 actually flexing its muscles with 4x sparse grid SSAA combined with 8xCSAA (4 coverage, 4 color) mode in Mass Effect 3 - which is different from what looks best in ME2; there it's 2x sparse grid SSAA combined with 2x2 SSAA + 4xOGAA... Different for Mass Effect 1, too, but Mass Effect 1 does not take AA gracefully, even for an Unreal Engine game.

Been dicking around with ambient occlusion, too, though it's a bit of a performance hog for the looks, in my opinion. It has to be on quality mode or it's sloppy - clearly lower precision... and on quality mode, even this heavy hitter of a card can't combine it with the above high-powered AA modes.

Sparse grid supersampling is the poo poo. It's my favorite shimmer-reduction technology. Even 2xSGSSAA goes very far to reduce visible shimmer in textures. In games without good, engine-level FXAA implementations, I know what I'm going to be using.

Agreed

VID ≠ Vcore for these chips, at least. Look lower for your Vcore setting. VID has to do with how the chip requests voltage at a given clock (super, super simplifying), and is used, as far as I know, mainly in stock implementations and for motherboard auto-overclocking (could be a just-so story for the last one, but if you look at VID versus clockrate for motherboard auto overclocking, it seems to match up pretty well - leading to sensible values at lower overclocks, like a mere GHz or so, and not-so-sensible 1.4V+ voltages for higher OCs that need to be manually tuned for long-term stability).

Agreed

Yuns posted:

If you're looking at an Ivy Bridge overclock, it might be worth paying extra for a cooling solution like the Phanteks PH-TC14PE. Normally the $90 would be too much for most, but it's been reviewed really well and I've seen at least one preliminary test showing a big difference in 3570K cooling.

http://www.hitechlegion.com/forum?task=viewtopic&pid=36340

Ballsy of them to note that the NH-C14 is a good choice if you need low profile, then suggest that this new company's product - I've never heard of them, in fact? Not saying it doesn't look solid, but still - which is a really obvious clone of the Noctua NH-D14 (the highest-performance cooler Noctua makes that isn't intended for low-profile use), is the one to go for.

Top-end coolers may have a new addition, but the Silver Arrow and the NH-D14 are so effective and well-designed that they approach the limits of what can be achieved with or expected from a heat pipe & radiator dual-tower cooler. I see some controversy in that very thread about not including Noctua's actual high-performance/low-noise solution and handing the crown uncontested to a relative unknown.

the guy who posted that thread you linked posted:

Actually, the Phanteks dethroned the NH-D14 across the board in reviews (except one), but only by a slim margin. But, just as important in the Noctua/Phanteks showdown; Phanteks doesn't use brown fans.

The aesthetic win kinda makes me :rolleyes: a bit. If you've got a side cutout, any gigantic cooler is gonna look like a big radiator with fans stuck in it, basically. And maybe brown isn't the dude's favorite color, but... they aren't just brown, they're some of the highest-quality and longest-life fans on the market, and provide extreme performance:noise. Not to mention that it apparently isn't brought up that the Silver Arrow swings its own highly competitive temperature performance (wins some reviews, loses others), and its fans are probably the very best on the market for performance:noise, at 140mm apiece and pushing a very nice air column to move that heat off the fins.

Agreed fucked around with this message at 18:50 on Apr 29, 2012

Agreed

EdEddnEddy posted:

On to IB: it is interesting how the chip's main problem seems to be heat. The heat spreader is over thermal paste vs. being soldered on like most past chips, which is a bit odd/a bummer.

Sorry to cherry-pick from a larger post. I agree that SB's offset overclocking is a PITA and requires a lot of dicking with to get it exactly where you want. I learned to just stop worrying and love what works, in my setup. Trying to get exact behavior while maintaining all power saving wasn't happening, so I turned on LLC and upped the offset a tad, and all is stable while it still idles down at 16x and around 1V.

---Now to the cherry-picked quote here...---

Or they tested it and there's no real difference - which is certainly a possibility with two huge lithographic variables changing at once - and the part itself is just inherently a bit electrically leaky, since it's basically brand-new tech for Intel and it'll take some time with it to get the most out of it. I mean, the thermal performance is amazing under normal operating conditions; we're down to speculation as to why they went with TIM instead of solder. Maybe tiny 3D transistors in the wafer start breaking down at an unacceptable rate when you solder the heatspreader on, who knows? At least one test has been conducted where the heatspreader was removed and an NH-D14 was put directly on the surface of the chip (... which is HOLY poo poo crazy, but overclocking gets awesome like that), and the resulting temperature difference was within the margin of error for appropriate thermal paste application.

I'd guess it's a tech issue, but that is just speculation. I don't think anyone who actually knows is allowed to say anything, so what we get in the end is Ivy Bridge performing pretty much like Sandy Bridge when all is said and done, but with the benefit of Intel having made a much needed set of changes to their processor construction lithography. I expect cool stuff from future iterations :)

Agreed

Star War Sex Parrot posted:

Not that it applies to many people, but holy hell is overclocking the GTX 680 weird because of NVIDIA's new "turbo" feature. I mean, it's easy but it's also just not as precise as I'm used to.

Explain, please? This will become relevant to me at some point and I'd love to know what I need to do so I don't have to go digging around.

tijag posted:

edit: also, I used the 'spread the TIM with a credit card' method on the base of the heatsink. Should I take it off, clean it, and do the little dabs of heatsink method? In my mind that doesn't make as much sense as a very thin coating of TIM, but apparently my mind is wrong.

Hyper 212+/Evo and other direct-contact heatpipe coolers have some special requirements: it's usually best to apply the TIM directly to the heatsink itself, to prevent air from being trapped in any available space between the pipes (or on older models, between the pipes and the retaining brackets). What you've got will probably work, though, honestly; the difference between a piss-poor application of TIM and a perfect one is maybe 6-7ºC, and the margin between "pretty good" and perfect is closer to 2-3º.

Agreed fucked around with this message at 18:44 on Apr 30, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Only flat contact blocks should use methods which distribute the TIM with pressure rather than manual distribution. You probably have a really good TIM application for the Hyper 212+/Evo/etc. direct contact style, I would assume it should make great contact with the pipes and the heatspreader once you get that sucker installed.

It's flat contact blocks that are found on the top-end coolers (they usually use bigass 8mm nickel-plated copper heat pipes soldered into bigass nickel-plated copper contact blocks) where TIM application is more about letting the force of contact pressure spread it evenly for you.

I bet you did fine :)

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Neat, any way to override the TDP-limited overclocking?

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

It takes a hacked firmware to exceed the hard TDP limit on stock Fermi cards, I wouldn't be surprised if nVidia put their foot down on this one in a similar manner. Hopefully the improved power efficiency makes up for it and some nice overclocks are possible anyway.

For Fermi, a tip I picked up from Dogen: keep the memory at stock if possible. It has plenty of bandwidth, and you don't need to waste the power limit on the memory controller and framebuffer VRAM when it could be juicing the core instead. Since I'm not voiding my warranty hardcore by flashing a BIOS that ignores power draw enforcement, that advice let me get an extra 60MHz out of the GPU at the cost of 50MHz (post-doubling, so really 25MHz) on the RAM.

Wonder if anything like that might help with Kepler?

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

EdEddnEddy posted:

At the Nvidia launch event I attended, it was kinda cool seeing one of the head hosts overclock the 680 to something like 2GHz and still play BF3 with it. It may not be 100% settled yet, but it does overclock like mad from what we saw.

5000:1 odds against that being a very carefully hand-picked card, any takers, step right up

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

poo poo, really? I -knew- Zotac was looking pretty good these days, we need to let Crackbone know asap so he can add them to the good-to-buy list. Sapphire is still a solid brand, isn't it? Zotac's brand power has grown substantially since they were nVidia's main launch partner for the 560Ti-448, guess they figured (like any good arms dealer during a perpetual war) you really want to be selling ammo to both sides. :v:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Stability test the stock setup first. There is nothing more frustrating than doing everything right while overclocking your computer and getting endless blue screens, failed tests, etc. - only to drop it all back to stock, run the same tests, and it turns out you've got a bum stick of RAM, or the processor's defective and won't run at stock clocks.

Step zero of overclocking is do everything you would do to test the stability of an overclock to your stock-settings assembled computer.

Quick list (you don't get the joys of using the computer during this period but you can either have your cake or eat it, and it's a good idea to make sure the cake isn't spoiled first):

Memtest86+, at least one full pass but preferably a few. You want to work the integrated memory controller, the motherboard's pathways to the RAM, and of course the RAM itself. If you get errors, consider a slight bump to RAM voltage. If it's still unstable, try a slight voltage bump (as in, one increment, two max) to VCCIO. If you still get errors, it's probably bad RAM.

Prime95 in admin mode, blend test; for a first go at establishing stability I would say a solid 24-hour run isn't overkill. You're trying to prove beyond a shadow of a doubt that the base system you'll be overclocking from is stable. 9- or 12-hour Prime95 runs are fine for a lot of stuff, but you want ultimate confidence here, so let it roll.

I guess you could test with IntelBurnTest, but that seems really silly for establishing stock performance. That is, after all, the "quick check" of overclocking - 10-20 Standard stress runs in admin mode will usually let you know if your processor's thermal performance is acceptable at a given clock and voltage, and 2-3 runs at Maximum stress give the whole processor-and-RAM interface a nice workout without cooking your stuff, ideally. But it's a tool to save time later; for a base system stability confirmation you should stick to the golden tools above and skip the shortcut tools like IBT or OCCT.
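For what it's worth, here's the whole stock burn-in plan above collapsed into a little checklist printer (Python purely for illustration; the tool names are real, the durations are just my recommendations from this post):

```python
# The stock-stability burn-in plan from above as a simple checklist.
# Durations are this post's recommendations, not hard rules.
burn_in_plan = [
    ("Memtest86+", "RAM, IMC, board pathways", "1+ full passes, a few is better"),
    ("Prime95 blend (admin)", "CPU and the whole RAM interface", "a solid 24 hours"),
    ("IntelBurnTest/OCCT", "quick-check shortcut tools", "skip for the stock baseline"),
]

for tool, stresses, duration in burn_in_plan:
    print(f"{tool:22s} | {stresses:32s} | {duration}")
```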

Agreed fucked around with this message at 22:54 on May 1, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Is that vcore or vid?

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Zenzirouj posted:

Great, thanks guys! I ended up spending the evening with an icky gross GIRL so I didn't have time to do anything, but tonight is the night.

You got nothin' scrub I'mma new dad (incoming red title text: SHUT THE gently caress UP ABOUT BEING A NEW DAD YOU DICK) and that means as soon as this little dude learns to move around and mess with god damned everything I am probably going to have to give up my 200mm fan mounted on a mesh-cut side panel that I got from Corsair for free and put on the solid side panel that I got from Corsair for free.

Truly changing moments. My airflow :qq:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Thanks Movax! I am somewhat ashamed: we're down to 22nm transistors, but the smallest nuclei of neurons are 3,000nm. Guess I'll just have to try overclocking as much as possible, but tbqh I am concerned about heat dissipation.

Dogen posted:

Did you cut the mesh out? Can't see that mattering.

Oh, no - actually, you have to disassemble the plastic bit to put the mesh in. It doesn't have sharp bits or anything, his hands are safe, it's the innards of the computer that aren't :laugh: The answer to the question "what will a baby/toddler do to your computer" is an unknown unknown but with ominous connotations. I'll probably end up building or buying an end table to put it on, really, seems like the most sensible solution since there's a non-removable big fan and grille on the top :)

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Dogen posted:

Just for reference, my wife's best friend has a 14 month old that crawls around and is pulling up, and the baby's pop has a 650D with the window just sitting on the floor. Worst I think that has happened is she has accidentally turned off the computer (she likes the lights :) )

I would burn my computer in my back yard if it would make my baby boy happy, honestly, you go kind of crazy when you're a parent if my experiences are at all universal.

Man, it sucks to be someone with a 200mm LED fan though. Thank you, Corsair, for the tasteful, understated fans on the 650D and in that box you sent (for free - how, exactly, do you guys make money??)

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

tijag posted:

Is there any consensus on what is 'safe'?

Nope. Right now it's just based on VID and watch-and-see with the extreme overclockers at other places. Remember that we were initially told 1.5V was safe for Sandy Bridge - and sure, some extraordinary chips can take that with high-end cooling, but it turned out to cook other chips right away regardless of cooling (destroys them internally), and Intel amended that to 1.38V as a ballpark safe figure.

I kind of have my doubts that they themselves know what the actual safe 24/7 voltage is for Ivy Bridge at the moment. Two lithographic changes (one really huge in terms of its potential effects) at once? That's basically alchemy, no poo poo; there's a lot of guesswork involved in process shrinks and new process transitions. Who knows how many respins were involved in making Ivy Bridge chips that function? Intel is less forthcoming, it seems, about process issues than GPU makers are (but then Intel fabs their own chips, while nVidia and ATI can both say "TSMC :argh:" when shareholders get antsy).

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Hiyoshi posted:

So what you're referring to is the voltage difference between an intense 100% load and a light 100% load rather than the difference between a 0% load and 100% load, correct?

Yeah. Without vdroop compensation (which is what out-of-spec LLC is), there's an inherent drop-off in voltage as the processor is loaded more fully. It's part of the design, and LLC to combat it is basically the same thing as just raising your voltage to be stable under the most demanding loads, except *maybe* safer since it's only doing it when the higher voltage is needed rather than full time.

It happens faster than software polling can detect, so you're not going to get an accurate read on what your true voltage under load is with LLC enabled. Power delivery adjustments in the VRM phases happen in the 300-500kHz range; software polling usually happens at around 1Hz.
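To put rough numbers on that mismatch (a toy calculation; the transient length is an illustrative guess, not a measured figure): a load-change voltage event can be over in a fraction of a millisecond, while a 1Hz poller samples one instant per second, so it almost never lands on the transient at all.

```python
# Toy numbers: a ~100 microsecond droop transient vs. a 1Hz software poll.
# The transient duration here is an assumption for illustration only.
transient_s = 100e-6   # assumed length of a load-change voltage transient
poll_window_s = 1.0    # one sample per second from monitoring software

# Chance that a single instantaneous poll lands outside the transient:
miss_probability = 1.0 - transient_s / poll_window_s
print(miss_probability)  # 0.9999
```

So the monitoring software isn't lying, exactly; it's just blind to almost everything the VRM does between samples.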

Agreed fucked around with this message at 22:21 on May 3, 2012


Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Not a stupid question, you have a shiny new processor and it's fast out of the box and you don't want to damage it by trying to make it go faster.

As FactoryFactory notes, we've got a good sense of the safety margins for "standard" overclocks (we're still completely spoiled, really - we can count on 25-33% free extra performance, hah). It's the limits we don't know yet.

Edit: FactoryFactory has put a very large billboard up stating as much, but the thing to consider with Ivy Bridge really is heat. First of all, as temperature rises, so does resistance. More resistance means more voltage is required to overcome it. More voltage through the part raises the temperature further. See how that creates a cycle? Your processor is made of a bunch of teeny-tiny parts, each of which has a safe operating temperature limit. Intel has been a little conservative about that in the past, but I'd take it very seriously for Ivy Bridge until we know more about the thermal limits of the processor and how high temperatures affect its lifespan.
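That cycle reads like a toy fixed-point model, if you want to see it in numbers (every constant here is made up for illustration - this is not Ivy Bridge data - but it shows how a weaker cooler settles the loop at a much higher equilibrium):

```python
# Toy model of the heat/voltage feedback loop: hotter silicon needs more
# voltage, more voltage means more power, more power means more heat.
# All constants are illustrative, not real Ivy Bridge numbers.

def equilibrium_temp(theta_c_per_w, ambient=25.0, p0=77.0, v0=1.1,
                     k=0.002, iters=200):
    t = ambient
    for _ in range(iters):
        v = v0 * (1.0 + k * max(0.0, t - ambient))  # hotter -> more voltage needed
        p = p0 * (v / v0) ** 2                      # power scales roughly with V^2
        t = ambient + theta_c_per_w * p             # cooler turns watts into degrees
    return t

good_cooler = equilibrium_temp(0.25)  # ~0.25 C/W: a big air cooler
weak_cooler = equilibrium_temp(0.60)  # ~0.60 C/W: a much weaker cooler
print(round(good_cooler, 1), round(weak_cooler, 1))
```

A real chip is messier than this (leakage rises nonlinearly with temperature, for one), but the point stands: the cooler decides where the loop settles, which is why heat is the number to watch.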

Right now, be a touch conservative with voltage, but be very wary of heat. Until we know more, that is the best indicator that everything is fine, or not.

Agreed fucked around with this message at 19:55 on May 3, 2012
