Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

What's all this talk of automatically downclocking at 70ºC? Some people (especially back in March/early April) reported that, but others (even at the time) said they saw no such behavior. What are you using to overclock that makes it do that?

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

What a weirdly specific and reproducible thing.

Okay, well, a custom fan profile to keep it cool was already planned, and losing 13MHz off of a 150-200MHz overclock isn't a deal breaker :v:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

KKKLIP ART posted:

My Asus 670 will be here tomorrow and I am super stoked. I am using a GTX260 now, so this is going to be such a massive improvement that it isn't funny.

As someone who upgraded from a GTX 280 (not 285, 280) to a GTX 580 last generation, it's stupendously awesome.

I don't expect quite that level of :tviv: from the GTX 580 ---> GTX 680 when I install it later tonight (it's sitting beside me and it's really hard to focus on important work things because it's RIGHT HERE oh christ), but improvements of 30-50% are pretty god damned cool too and I look forward to that. But you, man, you're... drat. Have fun going from limping to sprinting the whole marathon.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

:stare:

The GTX 680 is an extremely, extremely high performance card. I've left it at stock voltage and I'm slightly less than doubling my previous 3DMark11 score, which was set with my former card, a GTX 580, clocked at 920MHz. I don't consider an overclock "stable" unless it has zero artifacts ever under any circumstances. I think I've actually got some room to go, but all I did was change three things:

1. Power target from 100% to 132% (max on the EVGA Precision X slider)
2. GPU Clock Offset +100
3. Mem Clock Offset +300

That's it. The firmware on the card, which is stock EVGA SC+ firmware, takes care of the rest. It seems to kinda just do what it wants in games: I've seen high clocks and occasionally a dip (it does seem to be able to adjust in really small increments; when it "downclocks" it's only by like 6MHz or 13MHz, never farther than that), but it generally hangs out around 1290MHz on the GPU and sticks to the memory clock solidly.
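
If it helps to picture what I mean by those small steps, here's a toy Python sketch of clocks snapping to ~13MHz bins above the reference 1006MHz base clock. This is purely my own illustration of what monitoring shows, not the actual GPU Boost algorithm from nVidia or EVGA.

```python
# Toy model of the stepping I'm seeing: the effective clock moves in small fixed
# bins (~13 MHz) rather than swinging around freely. Illustration only.

BIN_MHZ = 13
BASE_MHZ = 1006  # reference GTX 680 base clock

def snap_to_bin(requested_mhz: float) -> int:
    """Quantize a requested clock to the nearest 13 MHz bin relative to base."""
    bins = round((requested_mhz - BASE_MHZ) / BIN_MHZ)
    return BASE_MHZ + bins * BIN_MHZ

if __name__ == "__main__":
    for want in (1290, 1284, 1277):
        print(f"want {want} MHz -> runs at {snap_to_bin(want)} MHz")
```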

I do have a more aggressive fan profile set up, to keep it under 70ºC to be on the safe side.

Also, I did a bunch of research and was delighted to find out that my power supply, a Corsair HX750, is made by CWT, not Seasonic. It's technically certified Gold, but they downrated it to 750W because at higher temps its efficiency slips down into Silver territory a bit - and it can put out over 900W before any of its safety features kick in. Another way of looking at it is that it's an 850W 80+ Bronze power supply. So, I decided to keep my GTX 580 in, returned to stock clocks, to hang out and perform PhysX duties - GPU PhysX makes any single-card setup totally eat poo poo, but the comparatively exceptional compute performance of the GF110 chip means the 580 does PhysX really well. Holy smooth framerates with PhysX enabled in Batman games, Batman!

Total system power usage doesn't top 600W-650W under a load like that, and most of the time the GTX 580 hangs out at its lowest power and clock states. While there aren't too many games that take advantage of PhysX, the ones that do are extra badass now. :kiddo: And I can keep hoping to find some uses of CUDA since I've got a lifetime warranty and an advance RMA on the 580, and I'm hanging onto it until I can cash that in when it goes tits up :mad:

Agreed fucked around with this message at 04:49 on Jun 5, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

HalloKitty posted:

You can usually tell by the fact Channel Well uses crinkly green tape around the transformers. Oh, and loose screws (that one comes from Jonny Guru).
I'm not a fan, but mainly because I had so many Antec PSUs that were Channel Well made, and they cost quite a bit. They all failed, because of the Fuhjyyu capacitors. I doubt they're using them now, but that left me with a bitter taste.

They aren't; its construction is better than the contemporary Seasonic units that Corsair is still using elsewhere. And there's not one loose screw in the unit. It's a fantastic power supply at 750W - rather astounding that Corsair is being exceedingly fair in its labeling and calling it Silver in the first place, when 80 Plus and its room-temperature testing certified it Gold. It drops a little in efficiency, just slightly, but enough to fall into the Silver range in a 50ºC test environment.

Basically, it's powerfully overspecced for its stated usage. Says nothing about other CWT supplies in lower wattage or whatever, but this unit tends to have somewhat more of everything than it absolutely has to, compared to its technological contemporaries, and that's why running Metro 2033 was a breeze even though the GTX 680 (overclocked a lot!) and GTX 580 (stock clocks) were both running at full core/memory.

drat does that make the experience smooth, too (edit: welp finally got metro to smooth vsync with everything on, just in time for them to up the stakes in the next version I'm sure, just shoot me now so I don't keep doing this please). A dedicated PhysX card is a lot cooler for games that support it than Ageia made it out to be, conceptually, but it is exactly as situational as everybody figured it would be in practice.

Edit: Though I'm not sure the left hand is talking to the right when it comes to PhysX - if you look at some games' recommendations they suggest you use a 9800 GTX or GTX 260, but both of those just slow a modern card down and have since Fermi's fumbling arrival. If your dedicated PhysX card isn't at least a generation recent and at least in the good price:performance mid-range bracket you're probably going to slow down a modern top-end card, which is weird given its memory bandwidth I guess but still. A GTX 580 is stupid overkill, but if you just happen to have one sitting around... More likely the least you could get away with if you were silly enough to buy intentionally would be a 560, maybe a 560 Ti, and then only because Fermi's compute is ultra-super-badass compared to Kepler's.

Agreed fucked around with this message at 14:31 on Jun 5, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Factory Factory posted:

Here's the skinny on GK110. 7.1 billion transistors, 15 compute-optimized SMXs, 2,880 CUDA cores, 288 GB/s of memory bandwidth. But it still looks like it's optimized for real-time graphics... :circlefap:

In the sense that any gigantodie based on the same underlying architecture is necessarily going to be, sure - I guess it's possible they could put that into the consumer market space, but it'd cut very sharply against a number of tremendous accomplishments with the GTX 680. It seems more likely to me that they intend to keep the videogame and workstation/compute markets actually, rather than artificially, segmented...

If ATI does something that requires a response, I feel pretty confident based on the performance of the 680 that nVidia will have one without having to more than double the transistor count with a consumer product that's highly compute-focused and thus pretty inefficient at the given task. That would be a weirdly desperate move which seems unlikely to be necessary. Take back all the neato stuff gained from the Fermi --> Kepler move, d'oh.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

madsushi posted:

If you have a 670/680 and you're having problems with D3, it's probably due to V-Sync/Adaptive V-Sync. I know several people who all had this issue (or you can read the 100+ page thread on the nVidia forums).

The solution is to install the 302.71 beta nVidia drivers, which aren't "officially" even released in beta yet. After installing them, I've had no issues in D3.

That's a really dangerous solution for anyone not running Windows 8, it seems - it has a decent chance of hosing your system even when installed correctly. I haven't had any issues with overclocking or with Adaptive Vsync, but I don't play Diablo 3 either. I'm going to wait for these to become official before I ruin a good thing trying the next driver set, even though I do expect some performance/feature improvements in the coming updates, as with most new tech. We're, what, two driver releases into Kepler now? Hell, one release (maybe it was the first 300-series one, I don't remember off the top of my head) delivered up to 40% performance improvements in some circumstances on Fermi-based cards, so I'm not discounting any possibilities with regard to the hack-packs that are drivers. But I'm also not going to gently caress around with a literal hack just to install the thing, when I haven't had any issues.




Edit: Unreal 4, holy poo poo. I love what's going on with deferred rendering engines lately, can't wait for that tech to start punishing my system as soon as humanly possible. Also, use PhysX please, thank you.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

tijag posted:

My GTX 680 [EVGA] crashes games if i run at GPU clock offset +70 and Mem Clock Offset + 250.

You have a great sample.

That's on top of the stock boosts thanks to the EVGA "SC+" SKU, too.

I ended up backing it down after extended runs showed some marginal instability; final offsets for 100% stability and stupid good performance are +87 core, +300 memory, 132% power target (why not?... a custom fan profile keeps it under 55ºC in Metro 2033, thanks to Adaptive Vsync, what a great power saving tool; I've seen it get up to ~110-118% of the power target but no higher in use, and since it's the 4-phase rather than 5-phase VRM design I'd rather not risk it). It does a great job of going as fast as it needs to for the rendering task of the moment - I've seen it boost as high as 1300MHz on the core, but generally it's in the 1260s-1280s range.

I did set the voltage to 1.15V; it seems to help with memory offset stability, since it starts closer to where it's going to end up anyway. In demanding usage scenarios it gets to 1.175V, which is as high as it'll go, managed automatically based on TDP and the regulators I guess.

I am definitely pleased as punch with how its performance turned out. I've been reading around and people seem to think the EVGA SC cards are binned for specific performance targets, which fits what little we know (and the lots of speculation) about how they divide chips into different SKUs. If that's true, mine's hitting fairly above average for an SC/SC+ (the only difference between the two is the backplate, which is "supposed" to improve cooling; I have no idea whether it does, but the card does stay cool and quiet with a custom fan profile and the stock EVGA blower).

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

EVGA dampens the fan noise somewhat - it's not a 100% reference blower, iirc - but it is easily a lot quieter in games than the 580. In fact, in games where the 580 does CUDA PhysX processing, I can't hear the overclocked-as-far-as-it'll-go 680's fan (even with the custom fan profile that begins aggressively cooling at 60ºC to prevent throttling) over the GTX 580's fan, and the 580 has a much less demanding overall workload and is at stock clocks and voltage. It just has a noisier fan design.

That said obviously the farther you get from the reference design, the more and bigger fans you add, the quieter it'll be. But I've got three 200mm fans that are pretty much silent, meaning I've got at least three 200mm x 200mm holes in my case, and the 680 is inoffensively noisy even when going full blast.

It also cools REALLY well: even when it's hitting a power target >110% and getting a thorough workout, it doesn't get over about 63-64ºC with the custom fan profile. (I've independently verified that at 70ºC it throttles by 13MHz, and again at 80ºC by another 13MHz, but even in Metro 2033 - the only game I'm running that will actually get the GPU fully engaged at max settings - all it took was a custom fan profile that basically pegs fan speed percentage to temperature up to 55ºC, then goes to max from 55ºC to 60ºC to keep it from getting hot and throttling.)
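
For the curious, the fan profile is nothing fancy - roughly this shape. The sketch below is an approximation of what I set in Precision X, not anything official, and the exact numbers are my own.

```python
# Rough sketch of the custom fan curve described above: fan % roughly tracks
# temperature up to 55C, then ramps hard to 100% between 55C and 60C so the card
# never gets near the 70C throttle point. Approximation of my own profile only.

def fan_percent(temp_c: float) -> float:
    if temp_c <= 55:
        return max(30.0, temp_c)                              # "fan % = temperature", floored at 30%
    if temp_c < 60:
        return 55.0 + (temp_c - 55.0) * (100.0 - 55.0) / 5.0  # linear ramp from 55% to 100%
    return 100.0

if __name__ == "__main__":
    for t in (40, 55, 57, 60, 65):
        print(f"{t} C -> {fan_percent(t):.0f}% fan")
```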

It's a cool running card and anything based on the reference vapor chamber is going to cool well. More sophisticated/involved solutions can do the job quieter but I sincerely doubt better, barring BIOS modding and suicide-run super overclocks for 3Dmark e-peen.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

The cooler on my EVGA may not be totally reference. It's got sound dampening stuff on the fan itself. Is that a reference feature? I dunno. Definitely shitloads quieter than the 580 sitting below it, idling or running full-bore for PhysX crap.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Boten Anna posted:

Does it work basically by having each card draw different elements on the screen and them merging them together, using hardware or even just the raw video signal somehow?

Is it possible in the future that the connection between the two will be more of a... for a lack of a better word, logical link that basically just uses additional cores to throw more hardware at the rendering similar to a multi-core CPU?

I'm probably phrasing this in all kinds of terrible ways what with having an only rudimentary understanding of how any of this works under the hood.

Tom's Hardware actually has a decent explanation of this stuff, including some explanation of microstutter, SLI/Crossfire's not-so-pleasant side.

http://www.tomshardware.com/reviews/radeon-geforce-stutter-crossfire,2995.html

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Chuu posted:

I'm considering upgrading a computer with a 1680x1050 monitor to a 2560x1440 monitor. It's currently a Core i5-2500 /w a Radeon 4850, and I'm happy with the graphics as is.

How much am I realistically going to have to spend to keep a similar level of quality if I want to game at native resolutions?

1440p, you're looking at a GTX 670 or a Radeon 7950/7970, depending on your preferred card maker. nVidia is winning this generation but it's not a total beatdown or anything, 7950s/7970s overclock like CRAZY and they have great per-clock performance, meaning less overclock goes farther. They're also not at all bandwidth limited, thanks to the 384-bit memory bus, so you can pretty much go hog wild with the GPU/shaders clock and add memory as an afterthought. With the 670/680 you have to balance a solid memory overclock with a solid core overclock or else you won't get all the performance that the core has to offer, and that balancing act is a bit of a hassle since there isn't a good way to test it other than playing games and hoping you guess well.

Alereon posted:

Remember that time I accidentally edited my reply into your post? Edit != quote is apparently pretty easy for mods :shobon: Anyway, I would recommend against factory-overclocked cards due to the difficulty testing overclocks for stability on the GTX 600-series. I have the base EVGA GTX 670 card and I am very happy with it. It has some very, very minor tweaks over the reference design that should improve cooling and noise by an immeasurably small amount.

Yeah, I got the SC+ for one reason and one reason only: it was at the list price of the card for approximately five minutes, and I was ready to pull the trigger on anything available for a non-scalper price. The 670's disabled SMX means gently caress all, really, in terms of performance, and you can have a look at graphs suggesting it even opens up some overclocking headroom, which just makes keeping up with a 680 all the easier. I was lucky to receive a sample which, measured from stock GTX 680 factory settings, runs at more than a +100 core and +400 memory overclock, without any BIOS modification to disable overvoltage protection or anything like that.

A lot of people buying EVGA Superclocked cards are finding they perform as advertised with not much wiggle room above that, so there's no point playing the lotto on them if you can find a baseline card instead. They do have some designs with a more robust VRM, but that's not necessary unless you're going for [H]-level overclocks that aren't gonna do much for you anyway. nVidia didn't just say "eh, gently caress it" when they reduced the VRM phases from 5 to 4 - the card only needs 4, and that works just fine. Very sophisticated power management in the card's hardware takes care of everything nicely, and the cooler is extremely good.

I also think that EVGA has made some modification to the reference cooling design - it's the blower style, but there are removable acoustic dampening pads in important places, and it's far quieter than the previous generation's cooler despite being very similar in design. (In other words, on a straight reference card from nVidia you'd expect any noise improvement to come from Kepler's incredibly effective power management, which keeps the card's power draw to where the workload actually needs it to be; the great stock vapor chamber cooling setup was already introduced with Fermi, and blowers haven't changed much in a long time - they're just fans capable of moving a lot of air despite cramped conditions, with very long projected MTBF.)

My EVGA GTX 580 SC is qualitatively a lot noisier than my EVGA GTX 680 SC+. I haven't experienced the issue others have commented on about the pitch of the fan noise being more annoying - that was brought up in a Tech Report review, but it doesn't match my experience at all, apples to apples with a very similar cooler type from the same maker. The 580 does not have the noise-reduction doodads on the fan.

Agreed fucked around with this message at 04:58 on Jun 13, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Here's one extra that matters - way better DX11 performance; this is the generation where ATI really got DX11 right, in my opinion. Though in-game implementations still tend to somewhat favor nVidia's approach to DX11 (they have, for example, practically no performance hit when enabling tessellation, though ADOF can still be a hog), ATI is no longer a second-rate performer in all but synthetic benchmarks.

Wish I could find a better example, but Heaven's okay for demonstrating what I mean, I guess. Check the 7850 out vs the 570 vs the 6950:

http://www.techradar.com/reviews/pc-mac/pc-components/graphics-cards/amd-radeon-hd-7850-1068373/review/page:2#articleContent

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

AMD isn't losing on performance, they're losing on price. If they can make that competitive, they'll be back in the game. Otherwise, they won't.

Edit: Here's both of them in AnandTech's GPU Bench 2012 to demonstrate what I mean - remember that these are stock settings, and nVidia's card sort of auto-overclocks, so it edges higher than the stated clock would suggest. What people should really look at is how the two compare when overclocked, since the HD 7970 is an amazing overclocker (even more so than the 680 overall). Nothing's guaranteed running out of spec, but consider that there's a clock-for-clock discrepancy in the 7970's favor - enough of one that an overclocked 7970 run alongside a very overclocked GTX 680 starts to outperform it.

So it's not that the 7970 is bad. As usual some games do better for nVidia, some for ATI. It's just that it's too expensive and doesn't make sense as a value prospect when nVidia's got the 670 that runs like a 680 and can be found in stock at around $400. "Performs similarly, costs more" is not a good look.

Agreed fucked around with this message at 05:53 on Jun 17, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Dogen posted:

New nvidia beta driver release, more performance improvements for 400/500/600 cards and some 600 specific fixes

Wow, these drivers are... really something. I know, beta means what it says, but it's screwing with GPGPU capability in 670s/680s... Like, turning it the hell off... And some of the features aren't all the way there. They've got a ways to go with the FXAA selectivity functionality, but that's understandable given how many applications don't need to be FXAA'd up.

What I'd like is for different levels of FXAA to be selectable as global or specific options, though. Have a more aggressive FXAA that does more post-sharpening, for example, some games would strongly benefit from that. CSAA, as cool as the technology is, interferes with the rendering on some in-house deferred rendering engines in a really noticeable way (like, do not use CSAA with Diablo III, it breaks the visuals).

I'm surprised they singled out S.T.A.L.K.E.R. CoP for performance increase, since it already ran like CRAZY before - DX11, everything maxed, Absolute Nature 3, Atmosfear 3, forced SSAA and SGSSAA and FXAA and it pretty much chills out at the un-boosted clockrate and never hits above ~70% of the power target even indoors with tons of interactive shadows. Maybe if it were a higher resolution, I dunno.

I wonder what the vsync fix means. I noticed some weirdness with adaptive vsync especially, hopefully the new drivers take care of that...

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

There's a new pointless e-peen benchmark in town, ladies and gentlemen.

http://origin-www.geforce.com/whats-new/articles/put-your-system-through-its-paces-download-the-pla-directx-11-and-physx-benchmark/

It's an nVidia-centric benchmark in that it lets you test PhysX at various levels. It's a little unintuitive to get running correctly, you have to hit escape to access the settings button - even starting it in DX11 mode on high, it fires up in 720p and most settings are decidedly not high. Manual adjustments to make to compare it to other scores would be to set it to 1920x1080 resolution and make sure PhysX is set to High (requires at least as many CUDA cores as a 560Ti has, iirc, or it'll force a non-1:1 "Medium" PhysX processing that's okay but not nearly as impressive in how it's used).

I'm pleased as punch at the score my overclocked-as-hell 680 turns in, using the GTX 580 for PhysX. Strongly beats nVidia's benched score and compares favorably to 580 SLI scores others are turning in. :rock:

Since I'm on a P67 motherboard running Sandy Bridge, using both the 680 and the 580 means each has to run at PCI-e 2.0 8x; that sacrifices some bandwidth, costing anywhere between 2%-5% in performance, and it does show at 1920x1080 with current-gen cards. But two cards working in tandem with a minor bus penalty still come out majorly ahead once you consider the performance penalty of GPU-accelerated PhysX on a single card. It's really, almost surprisingly intensive. Shame more games don't use it, so the rest of the time I've just got a 680 going 2%-5% slower than it could for no reason :v:
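
If you want the raw numbers behind the x8 vs x16 thing, here they are - these are the PCI-e 2.0 spec figures; the 2%-5% performance hit comes from benchmarks and isn't something you can derive from bandwidth alone.

```python
# PCI-e 2.0 runs at 5 GT/s per lane with 8b/10b encoding, which works out to
# roughly 500 MB/s of usable bandwidth per lane, per direction.

MB_PER_LANE = 500  # approximate usable MB/s per PCI-e 2.0 lane, each direction

for lanes in (8, 16):
    print(f"PCIe 2.0 x{lanes}: ~{lanes * MB_PER_LANE / 1000:.0f} GB/s per direction")
```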

On that note, you know, 1920x1080 may be commodity when it comes to panels, but that's really not all that low of a resolution, I don't know why we tend to shrug at 1080p and only treat 1440p/1600p/surround resolutions as genuinely high resolutions when looking for really high performance at max or near-max settings.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Tunga posted:

You really need to stop posting about this, I keep seriously considering buying another card just for PhysX :swoon: and it's your fault!

If more games actually supported PhysX, I'd be all over this. It seems like it's basically Crossfire/SLI without the ridiculous limitations (fullscreen only) and flaws (microstutter).

But yeah...I wonder what the cheaptest card would be to make this worthwhile...

FactoryFactory and I kicked the ball around on that, and we figured that given the bandwidth of current-gen cards, and given that PhysX is a specific kind of compute workload that even the hamstrung GK104 parts are good at, you really don't need, or substantially benefit from, the amazing compute performance of a GF110 part (GTX 560 Ti 448, GTX 570, GTX 580). You just need a bunch of CUDA cores. When they're not being shaders, PhysX gives the SMs (aka CUDA cores) a very zen-like workload; they munch through it no problem. The only thing is you need to match like to like to some degree, or you'll have the rendering card chilling out waiting on the processing card to catch up. It's impossible to generalize like "a last-gen card should be fine," because while that is 100% true with Kepler's very gaming-focused performance (compared to Fermi's "amazing at both!... poo poo this is expensive" approach), it would not have been true last generation. A top-end G80 (8800 GTX) would slow a top-end GT200 (GTX 280) down. A top-end GT200 might be able to keep up with a GTX 560 Ti, but it could slow a GF110-based part down. So it goes.

After a long session of bullshitting about it we figured, eh, 560Ti would be a very safe bet. You could probably get away with a standard GTX 560. That's to not handicap a GTX 670/GTX 680's rendering speed. When using a card as a dedicated PhysX processor you can only overclock the memory. It is advisable to do so, you want bandwidth. CUDA workloads (and that includes PhysX) are almost all about bandwidth - tons of tiny parallelized processors working in tandem, hungry for all the bandwidth they can eat.

One thing's for sure, a GF110 part (let alone the top-tier version) is almost certainly dramatic overkill for the task at hand, it's just what I've got.

Agreed fucked around with this message at 11:16 on Jun 20, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Factory Factory posted:

That's certainly a better reply than the 500 pixel :geno: I was considering posting.

I have eye candy lust, but nowhere near enough to drop big bones on a PhysX assist card. Agreed worked very hard to earn a pass from my raised eyebrow of high-horse scorn.

Yeah, I should reiterate that I'm only hanging onto the 580 because it's an -AR part and I bought the advance RMA for it. I am keeping it until it dies, so I can get something current-gen with respectable performance to fill a similar role. EVGA's change to their warranty makes a lot of sense, imo - they've moved from one tenable model to another, and both cover actual warranty issues (failure within three years is better than the previous KR or unregistered AR two-year limited coverage). Past that, they're only obliged to replace it with something that performs similarly to the card they're replacing. So, for example, let's say my GTX 280 had been bought from EVGA instead of BFG (who said gently caress LIFETIME WARRANTIES and also YOU). It dies today. Oh no!... Okay, I send it off, they determine that it performs a lot like a GTX 560. That's more or less true, and it more or less sucks for me, since while that card cost $500, they don't go by cost. :v:

The interesting part is the two options for extending the warranty (which are also prerequisites for the step-up, so it's a more complex racket). You can gamble on buying two extra years to stay within some kind of likely performance margin. The blowers on 'em are rated for something like 9 years, and that's the part you're hoping is going to fail early if you're in it for the opportunity cost gamble, so the house has an advantage, but they always do. Even though it's still going to be similar to the above scenario, if my GTX 580 died some time a couple years from now and they decide that something like a GTX 760Ti (this is make-believe, bear with me) is a proper replacement, I'd probably be okay with that. Even though it's downgrading from the top notch part, it'll have whatever fancy technology is then-current, and, bonus, new, so not likely to die.

---

Shorter, more relatable version: nobody needs this, I just happen to have it because I used to use CUDA and now I really don't, and I splurged on a 680 because SHINY poo poo RULES. I did a bunch of digging to find out exactly what my power supply can do and good rough figures on system power usage of two cards like this, and... Now I've got a setup that I would not recommend anyone buy.

What'd you say, it's like SLI without the hassle? More like "all the cost of SLI without the majority of the benefits," really - it's still about $1000 of graphics cards, but only PhysX-accelerated games see any benefit from it, and then only when using PhysX. Whereas with SLI you've got scalable rendering that, sure, might not be perfect, but it helps in everything; if you're running a three-monitor setup, having one 680 rendering graphics while a 580 does nothing most of the time except suck up idle power is literally worthless. Or worse than worthless, if you're not on PCI-e 3.0, since PCI-e 2.0 at 8x costs some performance.

This message has been brought to you by the Scared Straight Vis A Vis GPU Opulence program.

Agreed fucked around with this message at 11:47 on Jun 20, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

When a game does support PhysX though it rules so hard. Last word. Honest.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Tunga posted:

Maybe I will run SLI and offload PhysX :colbert: .

Enjoy your crazy setup!

I'll drink to that. :tipshat:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

A hundred dollars for that level of performance is sort of a joke when the AMD 7750-900 eats its lunch for about twenty five bucks more.

Don't eat fast food one day, double your graphics performance for the current generation of cards.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Dogen posted:

How much fast food do you eat :stare:

Okay, pizza? Or just don't be a cheapass; it's an even smaller price difference now, as both are $109 MSRP. The 640's unique selling point, as it turns out, is that it actually works well for 4K HTPC use. Both are specced for it; only the 640 makes it not suck.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

HalloKitty posted:

Here we go, we knew Tahiti and GCN in general has headroom, so AMD keeps ramping clocks.
AMD Radeon HD 7970 GHz Edition

Quick summary for you: 1GHz base clock, turbo up to 1050MHz, which causes it to beat the GTX 680 in places it didn't before, across the board numbers are up (as you'd expect) and they've even boosted memory clocks.
Idle power consumption and noise are very low as you'd expect from AMD, but the boost in clocks has made noise under load terrible. But this is of little concern, really, because you'll see large fan coolers out from the usual suspects in no time, I'd wager. Avoid the reference cooler.

In AnandTech's testing here are the number of wins/card:
Gaming: 7970 - 18 / 680 - 16
Compute: 7970 - 5 / 680 - 2
Synthetics: 7970 - 4 / 680 - 1
Overall: 7970 - 27 / 680 - 19

More significant is that they're launching it price-competitive with the 680 at $499 MSRP. Still, with the 670 being the current card to beat for high-end gaming, they need to do something roughly as spectacular as what nVidia's up to at the $400 mark if they want to pull off a real coup here.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

eggyolk posted:

Where did you find a small form factor version of the 7750?

http://hexus.net/tech/reviews/graphics/37477-sapphire-hd-7750-ultimate-vtx3d-hd-7750/

or if you have slot space but not length,

http://hexus.net/tech/news/graphics/38157-powercolor-outs-passively-cooled-hd-7750-go-green/

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

td4guy posted:

Go to the nVidia Control Panel. Manage 3D settings. Program Settings tab. Add MassEffect.exe or whatever. Then set the antialiasing mode to override the application, then set the Setting to whatever level you want.

If a 670 can't handle FXAA and at least 2xSSAA (total performance hog, but amazing visual quality, and it works with Mass Effect series' deferred rendering better than MSAA or CSAA), something's up.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

tijag posted:

That doesn't work. It's not a big deal.

Download nVidiaInspector 1.9.6.6 (released yesterday). Set the Antialiasing option to "Override Application Setting" so that it takes over from the in-game deferred renderer and sets up its own back buffer. Set it to either 2xMSAA or (as the card should handle fine) 8XSQ (which combines 2x2 SSAA and 2xMSAA). Then, a couple options down, under Transparency Antialiasing, set it to 2x Sparse Grid SSAA. Then, a ways down, right above "Negative LOD Bias" where it should be defaulting to "Allow," set a manual negative LOD bias of -0.500. Turn FXAA on for good measure, because why the hell not.

You need to match MSAA level with SGSSAA level, because Sparse Grid Supersampling is a huge performance saver compared to regular transparency Supersampling, but it gets its points of comparison from MSAA. I like 8XSQ because it combines 2x2 SSAA and 2xMSAA for superb jaggie reduction without a dramatic performance hit (on a card like this - these aren't options to use with lower end cards, this is what $400-$500 gets you at 1080p-ish resolutions, no guarantees with multi-monitor or 1440p/1600p).

The negative LOD bias is only applicable to DX9 games, but it helps ensure crisper textures: -0.500 for 2xSGSSAA, -1.000 for 4xSGSSAA, -1.500 for 8xSGSSAA... But the truth is there's virtually no distinguishable visual difference past 2xSGSSAA, though you'll feel the hit performance-wise for sure. In DX10/DX11 games you don't have the option of setting a negative LOD bias, which can lead to less texture sharpness or even some unintended shimmering, but the visual quality trade is generally worth it, or taken care of by the heavy-duty AA you've got going on elsewhere.
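
Those bias values aren't arbitrary, by the way - they follow the usual community rule of thumb of -0.5 × log2(sample count), so mip selection keeps up with the extra texture samples. This is a sketch of that rule; the driver doesn't publish a formula, these are just the values people have settled on.

```python
# Rule of thumb for the negative LOD bias to pair with SGSSAA: -0.5 * log2(samples).
# Community-recommended values, not a documented driver formula.

from math import log2

def lod_bias_for_sgssaa(samples: int) -> float:
    return -0.5 * log2(samples)

if __name__ == "__main__":
    for s in (2, 4, 8):
        print(f"{s}xSGSSAA -> LOD bias {lod_bias_for_sgssaa(s):+.3f}")
```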

SSAA and SGSSAA are only feasible these days because GPUs are incredibly powerful. It used to be the method of choice, but then MSAA came along and offered an acceptable compromise with a much lower performance hit. Then shader-based post processing algorithms (MLAA, FXAA, SMAA, TXAA) came along and they're compatible with anything if implemented right, and (in their modern iterations) are practically free in terms of performance hit for AA comparable to 4xMSAA. MLAA-based algorithms are cool because they're kind of shader agnostic, they don't mess with the sorts of effects modern games like to use to look purty, but it has to be implemented in-game, it was initially AMD-only, and FXAA came out and kicked its rear end cross-platform. There's pretty much no reason to ever turn FXAA off in a game unless it causes unacceptable blurring of text, because the driver-level implementation is effective and leans on post-sharpening enough that it isn't as blurry as some injectors could be before it became officially force-able through nVidia's control panel.

The problem is that deferred rendering engines (which are not a bad thing at all, they can be quite efficient and offer some really impressive visuals with less of a hardware requirement) don't play well with conventional MSAA or even fancier new algorithms like CSAA. But SSAA is brute force and can be forced even on D3D games, from DX9 to DX11. Combine 2xSSAA with 2xMSAA in that 8XSQ (because transparency 2xSGSSAA needs the 2xMSAA to know where to take its lower sample count from) and you should have great image quality even in games which are not amenable to MSAA or CSAA.

I know this sounds like a lot of stuff, but it's really not, you just install one application, launch it as administrator, hit the button on the middle right to get into elevated driver control mode, and you change four things. Boom, good to go.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Tunga posted:

I run 1680x1050 so that probably explains why I don't really stretch the card.

Is this stuff applicable to all games? Can I just do this once and never touch it again, or will I be forever fiddling with settings for different games? I've never really got into overriding game-applied video settings but I'm interested.

It would also be a lot simpler if we didn't, as a species, feel the need to invent seventeen different ways to make video games edges look less jagged.

Best to do it game-by-game, because universal settings can be problematic. Exceptions: it's fine to have the universal profile set up a frame limiter (it works in tandem with vsync to provide less input lag with deferred renderers and/or triple buffering) - 58 frames for 60Hz vsync, different values for different refresh rates. It's also fine to have FXAA turned on universally. No real hassle there.
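
Mechanically, all a frame limiter is doing is something like this bare-bones sketch of the idea; the driver/nVidiaInspector limiter is obviously more sophisticated than a sleep loop, so this is just to show why 58 frames plays nicely with a 60Hz vsync.

```python
# Minimal sketch of a 58 FPS limiter: burn off the remainder of each ~17.2 ms
# frame budget so the render queue never runs ahead of vsync and piles up input lag.

import time

TARGET_FPS = 58
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~17.2 ms per frame

def limited_loop(render_frame, frames: int = 120) -> None:
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()                      # stand-in for the real rendering work
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)

if __name__ == "__main__":
    limited_loop(lambda: time.sleep(0.005))
```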

As far as "seventeen different ways," more like hundreds. Most games have preconfigured driver conditionals that force certain behaviors, some of it opaque but others editable by a user that knows what he or she is doing. Drivers are two things - underlying interface between the hardware and the OS to take advantage of acceleration as such, and a huge collection of the accumulated per-game hacks that make various games work right.

To recall a recent example, Skyrim started off with serious problems with nVidia cards. I found it astonishing that my GTX 580 ran it kinda like poo poo at 1920x1080, especially indoors. It didn't make sense. Then nVidia released a new driver set that offered about a 25% performance increase, and an accompanying fix in the drivers for a bug that caused undue poor performance indoors. Then the 300-series added another 40% performance increase on top of that, more or less patching up the bugs that were preventing Skyrim and nVidia's Fermi and Kepler cards from working to their capability. That's two significant hacks in a row and a clear indication that in the pre-adjusted state, high end hardware was only performing up to about 50% of what it could do.

Often for forcing special kinds of AA you need to enter in manual compatibility bits, too, which tells the control panel to ignore nVidia's chosen hacks in favor of other hacks that work better for something specific (e.g. if you're having trouble using the nVidia default UE3-friendly profile for Mass Effect 2, users have experimented and come up with 0x08009CC5 as an AA compatibility bit which allows for a wider variety of AA to be applied, at the cost of some occasional visual artifacts).

Also, with regard to technological progress in AA, it's kind of funny that if you CAN, "2x SSAA for fullscreen + 2x MSAA for a basically-free pass edge detection enhancement and more importantly tuning for transparency AA + 2xSGSSAA for lighter performance hit transparency AA + FXAA for postprocessing" is a great way to leverage older technologies for extreme visual performance. But the only thing in that whole mess of acronyms that's remotely new is FXAA, the other stuff has been around since at least the 8800GT. The goal with new technologies is to provide high performance antialiasing without sacrificing image quality for that performance. Hence FXAA, hence upcoming TXAA (which, being integrated into Unreal Engine 4, should be dynamite). MLAA was ATI's shot at a proprietary AA format and it's cool because it's shader-agnostic, but only recent versions get into the "basically free" territory of performance, and it's highly debatable whether they offer something that's worth it compared to the unquestionably higher performance FXAA.

But so long as game developers do stuff with the DX and OpenGL APIs that are not, strictly speaking, "to the letter of the law," expect it to be on a game by game basis. That's the fun part!

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Here's a list of nVidia compatibility bits that can make a game go from "the control panel forcing doesn't work :smith:" to "hooray AA :unsmith:"

http://www.forum-3dcenter.org/vbulletin/showthread.php?t=490867

The table is in English, and there are other good ones linked. This saves a LOT of effort - I mean, imagine trying out various bits yourself to see if they'd work... Talk about excruciatingly boring. Use hardware to play games, not play games to look at what your hardware can do. Lately I've been really digging the two Ys games that just got released, though since Absolute Nature 3 came out I am very much looking forward to a full playthrough of some of my favorite mods with that amazing, I would say legitimately redefining, texture and model pack for S.T.A.L.K.E.R. CoP.

Looks so great, I've spent tons of hours playing the various S.T.A.L.K.E.R. games, plenty on CoP alone because the engine just comes together so nicely... And I didn't recognize screenshots of some "famous" locations in CoP when AN3 came out. And man, it gets up to a very, very high VRAM utilization.

But it still runs at 60FPS (without PhysX as even part of the equation at all, not implemented in the game, like most games, so some kind of performance hit going on running the 680 in a PCI-e 2.0 8x slot, with the 580 just chilling... well, hotting... So to speak). God drat Kepler is a great generation. This is a game that on my GTX 580 couldn't be comfortably run in DX11 mode because of performance impacts that weren't acceptable (dips into the 40s maxed out with no forced AA). But the overclocked 680, which as others have noted is a good sample, I'm lucky in that regard I guess... just chews through it no problem. Totally, totally smooth performance.

Then I forced the above quad-AA crap. 2xSSAA and 2xMSAA mode, 2xSGSSAA, and FXAA (on top of some built-in post processing it does in the engine that is not identical to FXAA). I expected to have to turn some details down or something, but it freaking still runs at 60FPS, just completely, totally without issue. Unbelievable.

---

Now, to a different topic...

Remember that the 7970 is actually a better clock-for-clock performer than the 680, and is generally regarded as the better overclocker, all things told. Both cards show dramatic in-game DX11 improvements compared to the previous generation, especially when it comes to the all-important (if you care about graphics like this, which is ~40% too much to be considered normal :v:) minimum framerate.

The GTX 580 and the Radeon 7970 spend almost the exact same time rendering a given frame, though with particular DX11 features the 7970 will pull ahead. Of course, clocking a GTX 580 to 1200MHz or greater? Yeah, good luck, that's a suicide run - but the 7970, that ought to be in reach.

So all this nVidia talk, well, I figure there have got to be some dorks on the AMD side of things who can solve AA/control panel issues because of a dramatic overabundance of care. And it is worth looking into, because now that AMD's brought their cards into price and default performance parity with the 680, I'd think it's time to focus less on the GTX 680 and how rad Kepler is, and see what people can get going on with the 7970 side of things.

PhysX is only so cool, after all. ;) AMD's working hard to answer nVidia, since AMD created this generation's basic price:performance criteria and nVidia won :smuggo:-style. Still on the edge of my seat wondering what nVidia's mid-range will look like, whether they'll try to compete on performance or on price. Seems to me they need to aim for price or the 7800 cards will kill 'em; I just don't see them putting out a part that makes sense in that price bracket. Are they going to top GTX 580 performance for $250? AMD can do that, and of course they're happy to and grab up sales... But nVidia has some concerns there that AMD doesn't, obviously. I am quite excited to see what kind of lean, mean chip we end up with - how many SMXs, and especially how many ROPs. It's possible nVidia is going to be kind of stuck... But we'll see. I'm excited. It's yet another competition, much more interesting than just a boring generation win. :D

Even the GT 640, which has some modest features that improve the HTPC experience, saw a nearly immediate answer from a close AMD partner:

Seriously, look at this crazy thing!

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Low-profile cards that pack some oomph are really, really cool - as stated, mainly because they've gotta be somebody or some team's pet project. That's not a mass-market part. It's... well, for comparison's sake, here's the 7750 in other hands. Note that some are obviously the 7770 or the GHz edition or both, but still: while some engineers are making tiny ones, other folks are doing all manner of weird stuff with hilariously gigantic housings for a card that slots in as "well, it's better than a GTX 285 anyway" price:performance-wise.

http://www.google.com/search?q=radeon+7750&um=1&ie=UTF-8&hl=en&tbm=isch

Edit: Personal favorite, also from Sapphire, has to be the Sapphire Radeon HD 7750 Ultimate. Look at this crazy bastard. Passively cooled, maxxximum HD 7750 performance with no fan in sight.




... of course, high performance for a Radeon HD 7750 is still somewhere a bit over half a GTX 460, but imagine the work that went into making it! With no fan to fail, what parts do you warranty? :v:

Agreed fucked around with this message at 15:54 on Jun 24, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Skyrim is screwy - poor resource utilization, in my opinion. My GTX 580 got up to about 1.4GB of texture memory at 1080p. Though I think in general we simultaneously over-worry about and underestimate the importance of the framebuffer; paging to and from VRAM is not the end of the world, often it's barely a momentary hitch in performance. Having a goodly amount (and for current-gen high quality gaming I do think 1.5GB is the "goodly amount" point) is obviously desirable, but it's not game over if you need to hit storage. Especially in a game like Skyrim, which you might reasonably run from an SSD.

Hell, look at RAGE. For all its noted flaws with regard to depth, and noted successes with regard to gameplay, its rendering method relies on a large on-disk decompression cache and constant texture streaming. Arguably that's a choice made for the scalability of the megatexturing system's implementation, but it 100% for sure runs smoothly on even comparably low-powered hardware while hitting the HDD or SSD constantly. (20 gigs is a lot of space to give a short and kinda unfinished or unpolished game, but - in minor contradiction to my general point - it does run flawlessly without micro-hitching on that much faster medium...)

Games that don't handle VRAM limitations gracefully are another matter, obviously, but most of the time it's not a huge deal to need to move stuff in and out.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Could be coded to use whatever it can. Not necessarily a good or a bad thing, once it's at "enough for the user not to notice paging."

No modern game is going to be able to load every single texture, bar none, into VRAM at once. Games like Skyrim will encounter situations where their previous textures are useless and new textures are needed and it's time to page... Pretty sure since it's Gamebryo++, that means it'll have somewhat aggressive precaching with available video memory to prevent hitching.

Basically, look at any frame-by-frame GPU benchmarks of high resolution games. Watch how performance over time doesn't go to poo poo even though, say, an SLI setup of 1GB cards has to load stuff in and out of memory with some frequency. Does it cause a performance hit? Probably yes. A HUGE world-ender of one? Nah.
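
If you want a mental model for the paging I'm describing, think of VRAM as a fixed budget with least-recently-used eviction. Toy sketch only - real drivers and engines are far smarter than this, the names and sizes below are made up for illustration.

```python
# Toy VRAM model: textures are uploaded on demand and the least recently used
# ones get evicted when the budget is exceeded. A miss means an upload (a possible
# momentary hitch), not a catastrophe.

from collections import OrderedDict

class ToyVram:
    def __init__(self, budget_mb: int):
        self.budget_mb = budget_mb
        self.resident: "OrderedDict[str, int]" = OrderedDict()  # texture -> size in MB
        self.used_mb = 0

    def touch(self, name: str, size_mb: int) -> bool:
        """Return True if the texture was already resident (no upload needed)."""
        if name in self.resident:
            self.resident.move_to_end(name)
            return True
        while self.used_mb + size_mb > self.budget_mb and self.resident:
            _, evicted_mb = self.resident.popitem(last=False)  # evict least recently used
            self.used_mb -= evicted_mb
        self.resident[name] = size_mb
        self.used_mb += size_mb
        return False

if __name__ == "__main__":
    vram = ToyVram(budget_mb=1024)
    for tex in ["town", "dungeon", "town", "world", "dungeon"]:
        hit = vram.touch(tex, 400)
        print(f"{tex}: {'hit' if hit else 'upload (possible momentary hitch)'}")
```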

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

The 690 is a good card in the sense that it does exactly what it says it does: puts two 680s on one board without crippling them. That's thanks to the power efficiency and thermal performance of Kepler - it was Fermi's power-hungry and hot-running nature that caused the 590 to be downclocked by nearly 100MHz per core (a huge drop in per-GPU performance, greater than 10%) in order to fit within the thermal envelope of the slot configuration. ATI's shot at it whupped rear end, even though, if I recall correctly, it too was a little handicapped compared to two of the discrete cards.

The GTX 690 is about as noisy and performs exactly as well as two stock GTX 680s. They didn't have to downclock it. And I would wager it's probably not going to be nearly as prone to croaking early like the unfortunate 590 was, though that's just my estimation. I don't own one.

That said, even if you have extraordinary graphics needs which make such a substantial setup worth buying into, it's just a pretty bad idea to spend $1000 on a card that doesn't have much headroom for overclocking when you could go for two GTX 670s, which perform within 10% of the GTX 680 in nearly everything to begin with, for $400 each. Then overclock them and beat the 690's performance. You won't miss the loss of a single SMX per card - even in stock 670 vs. 680 comparisons, before overclocking enters the equation at all, it's barely noticeable. Maybe you could sell your GTX 680 and get a GTX 670? Supply is still limited enough that you might be able to profit, and if it's an EVGA card the three-year portion of the warranty is transferable - that's about as good as graphics cards get...

One reason to use it would be if you only have one PCI-e slot to give and it has to be a performance badass, I guess. But its main accomplishment is that they can finally say "hooray, we did the dual-GPU-on-one-PCB thing and it does not suck this time!"

Agreed fucked around with this message at 19:58 on Jun 25, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Factory Factory posted:

Can you even fit a quad-slot cooler like that in the top slot without hitting I/O panel ports, the CPU socket, or the RAM slots on an X79 board?

Figure that'd have to be the sole occupant of your entire PCI-e getup, slotted in the lower position, and on top of that there's a pretty good chance you'd need to really carefully consider your cooler, too.

Even whatever EVGA is doing with their absurd gigantoboards wouldn't fit that adjacent to anything at all, holy crap that's a lot of passive cooling.

Do those guys make a CPU cooler? :getin:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

HalloKitty posted:

Just thinking of the most outrageously large coolers I know of; maybe you haven't seen the NoFan Icepipe (which was available at some point), or the Scythe Godhand (which wasn't ever available).

Mother of god.



It's beautiful... Elegant... Horribly inefficient... I want one so bad, just to have, you know, maybe solder it to the top of my case like a crown so everyone knows my case is the queen case and other cases need to just take it easy or heads are gonna roll, you know

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

When you have a bunch of the same part go bad, start looking at the power delivery (from the PSU itself to the motherboard PCI-e slot) as a potential culprit, especially when it's temperature-related and consistent like that. As heat rises so does resistance, and with transistor counts in the 1 to 3 billion range, all that added resistance adds up to a more difficult load for iffy supporting hardware.
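
To put a number on the "heat raises resistance" bit, here's the textbook linear approximation for a metallic conductor. A GPU's power delivery obviously depends on a lot more than trace resistance, and the 10 milliohm path below is a made-up figure, so treat this purely as illustration.

```python
# Linear temperature coefficient approximation for a metal conductor:
# R ~= R0 * (1 + alpha * dT), with alpha ~0.004 per degree C for copper.

ALPHA_COPPER = 0.0039  # approximate temperature coefficient for copper, per C

def resistance(r0_ohms: float, delta_t_c: float) -> float:
    return r0_ohms * (1 + ALPHA_COPPER * delta_t_c)

if __name__ == "__main__":
    r0 = 0.010  # a hypothetical 10 milliohm delivery path
    for dt in (0, 30, 60):
        print(f"+{dt} C -> {resistance(r0, dt) * 1000:.2f} milliohms")
```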

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Any particular reason you bought a 680 over a 670? I'm too tired to remember if you're that guy who could get an overclocked 680 for less than a 670 domestic-to-you, or if you've just tossed a solid hundred bucks or so down the bin. The 670 performs at stock within 10% of a stock 680 and keeps up as they overclock. Odds of getting a "good" 670 (a sample that overclocks well) are around as good as a 680, judging by a bunch of crappy anecdotal data, probably because manufacturers themselves haven't really figured out how to effectively test for overclocking stability with the new aggressive clock stepping tech.

There are extremely, extremely few scenarios where a 680 just outperforms a 670. One SMX doesn't make much of a difference at all, and that's all that separates the two, they have identical memory bus, memory amount, core, ROPs - it's the least difference since... poo poo, I don't even remember. A long time.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Not super happy with the most recent beta drivers; they basically broke the framerate limiter function. Previously it was great at reducing input lag by capping frames to 58, with both forward and deferred rendering methods working perfectly well to fill out the vsync with no screen tearing (I am not using Adaptive Vsync - too much tearing... but that's mainly due to really aggressive settings fuckery to get high image quality; if my GPU isn't being utilized, I try to take steps to fix that).

The current version may be closer to the intended behavior of framerate targets on Kepler hardware - it certainly controls power and voltage much, much more aggressively than the previous versions, which used a framerate limiting method that had been in nVidiaInspector for some time and didn't seem to have much, if anything, to do with how the hardware and software controlled the various clock and power states. So it could be that they're fine-tuning it, and the eventual result will be both as-required performance and lower power usage. But in the meantime there's no real way to get the old style of framerate limiting, and I don't like seeing my card's core and SMX clocks dip down into the 600s when I'm playing a game that would otherwise be utilizing the full power of the card, since I've forced either aggressive CSAA (if it's not a deferred rendering engine) or the heavier-handed but workable "2xSSAA+2xMSAA w/ 2xSGSSAA transparency" go-to that plays nice with deferred rendering engines.

The aggressive downclocking results in a very not-smooth gameplay experience. I've had to disable the "framerate targets" manually and can't use the previous framerate limiter, even in games where vsync is poorly implemented, or whatever. Not ideal, hope they keep adjusting it so that it works better. This seems to be a very problematic intermediary step rather than a working technology.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Belan posted:

You shouldn't have to underclock it, the extra boost you see is called "Kepler boost" that gets added on top of the normal boost clock.

The amount of kepler boost changes from card to card which is probably why a lot of overclocked cards seem to be crashy.

More here http://www.overclock.net/t/1265110/the-gtx-670-overclocking-master-guide

6XX overclocking got very silly with all the boosts and temperature throttle points.

It got pretty awesome, if you ask me. It's really solid technology, and while it does complicate tiered/binned overclocking practices, I'm happy for all the power-saving I can get, in and out of game.

Gigabyte has a history of bad power delivery in addition to this generation's "how... do we test this overclock, exactly?" problem.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

A side fan is going to do way more for keeping a GPU cooler than front intake. Front intake is for HDDs and general airflow - a side fan usually sits right where the GPU(s) is/are. I've got... an unusual and power hungry high end setup, and the 200mm side fan (optional) is part of why even playing the most intensive games I've got, my temperatures don't go above 60ºC on either the graphics or the physx card. In the vast majority of games, temps stay right around 50ºC on the 680. Stock coolers on both the 680 and the 580.

But that's a pretty unusual and not very recommendable setup. If you are having actual problems with your temperatures, that's different, but the stock cooling tends to be optimized for noise levels by default, so setting a more aggressive cooling profile isn't necessarily a good idea, especially if you're not doing heavy overclocking where you might want to stay out of the higher reaches. I do agree with the other regulars that the thermals you've noted are within the card's safety window for sure, but I get a little uncomfortable seeing a GPU running at 90ºC today - these cards shouldn't be running that hot; that's more HD 4870/GTX 280 under-full-load territory, and makers have gotten better at power efficiency and cooling since then. If it's overclocked, that could cause some instability in marginal cases, I'd think, given that resistance increases as heat rises. But a video card between 70ºC and 80ºC with the stock cooler and default fan control is not outside the "24/7" safety margin; they can run a lot hotter than CPUs by design.

If cleaning out the dust took you from the high 80ºC region to the mid-to-high 60ºC region, I'd say just be more vigilant about blowing the dust out of the case. The intake fans will probably not help a lot with that, unfortunately - more intake means more dust coming in, even with filters. :-/

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Well, you might be able to get away with re-applying the TIM; on AR parts of the period, if I recall correctly EVGA's policy was "undo any modifications and restore the card to factory condition and it will be considered under warranty unless we discover something damaged by inept modification."

I note that because to me it sounds like the card might have misapplied TIM at first glance (er, first listen? ... metaphors). But then you note cleaning the dust out dropped your temps 20ºC. That's a lot. So obviously something was going on there. If you take an action and the effect is a huge reduction in temperatures, I'd say that's a pretty solid indicator that whatever you just did had something significant to do with the problem in the first place.
