Arzachel
May 12, 2012
It sucks that there are only two GPUs this gen that are worth a drat, the HD7850 and GTX670 :( While both of them are pretty great, I expected deals like that in every price range. I guess Canary Islands and GK110 aren't that far off now.

spasticColon posted:

I overclocked my HD7850 to 1GHz and ran the Heaven Benchmark 3.0 on it for about an hour without issues. The GPU only got to 61C under load at 1GHz too. :stare::fh:

I've seen some people strapping huge coolers to 7850s and running them at 1300MHz core. It seems like the only thing limiting even higher clocks is voltage throttling at around 1.3v or so.

Arzachel fucked around with this message at 18:16 on May 26, 2012


Arzachel
May 12, 2012

Agreed posted:

Do you mean HD 7950? The 7850 is a strong performer if you want "last gen performance++" but it's not in the top SKU, I wouldn't call it the great card from ATI this gen...

The 7850 is this generation's GTX460, except everybody ignored it at first due to overclocking being locked to a 1050MHz core maximum for a while. Up to 50% overclocks limited only by voltage throttling aren't something I've seen before. It trades blows with the GTX580 just by bumping the core slider as far as it goes.


quote:

I don't know WHAT to expect from GK110. My crystal ball is currently out for repair, but nVidia is strong on branding and if the performance improvement is commensurate with transistor count then that will be, like, the 700-generation card... Or we'll have another "GTX 285 and GTX 260 Core 216" situation where they bring more powerful hardware to market under the same generational name in order to compete with (what will hopefully be) an even stronger contender performance-wise from ATI.

The earlier launch ended up hurting AMD as much as it helped them. They traded clockspeeds for yields, which shows in the huge headroom most cards have. Nvidia stripping GK104 of compute grunt to focus on gaming was also quite a curveball. I'm guessing Nvidia will do the same with the 700 series, with GK114 taking up most of the high end and GK110 mostly relegated to compute.

Arzachel
May 12, 2012

Agreed posted:

Edit: Oh, hold on, the 7850 was artificially choked for overclocking and it turns out you can get more than that out of it? Reliably, or "the good cards?"

And I do think that it's possible nVidia took a lesson from Fermi not to cross the streams and may indeed relegate big Kepler to compute, it was a strange choice to build a card that has far more compute than a gamer will ever require - potentially never even use unless they're folding@home or something - and just accept the problems of making that high end compute chip also work for high end graphics. There's room for a separation there, and with the way big Kepler is designed, they could probably run it for some time as Quadro/Tesla workstation units, ignoring the gaming consumer market entirely.

I've yet to see a 7850 not hit 1100MHz on the core. Some might need a bit more voltage, but the average is around 1150-1200MHz for most, with outliers hitting ~1300MHz on custom cooling. At first you could only strip the limit through a setting in ASUS's overclocking tools, but it seems to have gotten easier lately.

The issue with relegating GK110 to compute only is that they can't offload R&D costs onto consumers. Focusing on GK114 would probably make them more competitive on the high end for less, but it seems they're willing to make up for it with a huge chip.

Arzachel fucked around with this message at 19:37 on May 26, 2012

Arzachel
May 12, 2012
Hard to tell, voltages on the 7850 range from ~1.07v up to ~1.21v. Try pushing it closer to 1100MHz if you have the time to mess around with it. The temperature increase from overclocking without overvolting is pretty negligible.

Arzachel
May 12, 2012

Agreed posted:

Could you point out some of these? 'Cause I'm looking but not having an easy time finding them. It seems more like "yeah, they overclock really well, you can pretty safely expect a good OC out of them and performance that deprecates the GTX 570 completely and pushes up against nVidia's top end last gen card for a great price" - which is an extremely laudable thing, of course, at the price point, I hope it's clear I'm not saying "pfffft that's nothin'" - but (multiple) 50% overclocking?

Reasonable expectations from what I'm seeing are more like a ceiling at 1100mhz-ish with some able and some not able to go much further. But if I'm looking in the wrong place, help me out, I keep up with nVidia's technology more than AMD/ATI's (example: I didn't know the previous 1050mhz ceiling was artificial :v:)

This one is on water I believe; the guy got the same card to 1260MHz core on air if I remember correctly: http://forums.anandtech.com/showthread.php?t=2245585

Between BIOS flashing and throttling woes, there are several people running stable at 1200MHz core and over on air: http://forums.anandtech.com/showthread.php?t=2239216&page=26

Found out about the earlier way to avoid the clock limitation here (OverclockersUK): http://forums.overclockers.co.uk/showthread.php?s=ecc68cd63d79e6708f55e701e85a34b5&t=18389760

Fake edit: oh wow. There is a way to run the cards at 1.3v without throttling now; the guy in that OCUK thread is running stable at 1350MHz core.

Real edit: There are two different reference 7850s, one using the same PCB as the 7770 and one on the same PCB as the 7870. Most 7870s run at 1.21v. You can probably see where this is going. Both the OC and non-OC Sapphire 7850s seem to use the 7870 PCB.

Arzachel fucked around with this message at 22:18 on May 26, 2012

Arzachel
May 12, 2012

Agreed posted:

So it's a bit of a retread of the 6950 flash to 6970 thing, then, but even more risky - sure, knock yourself out, but you're violating the warranty hardcore with a hacked BIOS and if it turns out that card had a non-artificial-segmentation reason for being a 7850, you might end up with a dead card.

If it's anything like previous AMD/ATI flash gambles, early bird gets the worm as they want to hit that market hard and grab up price:performance seekers as soon as possible before nVidia makes their next move. Hopefully TSMC isn't still lagging on production and they can nab the price:performance bracket while the nabbing's good.

Not quite. The shader units have been fused off, so the BIOS flash doesn't unlock anything; it only allows running the chip at 1.3v without throttling. That said, this option allows for insane overclocking, the guy in that OCUK thread is running the core 60% higher than stock :drat: Everyone can make use of the beefier circuitry on the 7870 PCB though; a lot of people seem to settle for 1.21v and ~1200MHz core (+40%), which is the stock voltage on the 7870.

But yeah, BIOS mods are pretty risky and I wouldn't and don't do them myself.
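As a quick sanity check on those percentages, assuming the 7850's 860MHz reference core clock (a throwaway sketch I knocked together, not anything from the thread):

```python
# Percent overclock relative to stock; 860MHz is the 7850's reference core clock.
def oc_percent(stock_mhz, oc_mhz):
    return round((oc_mhz / stock_mhz - 1) * 100)

print(oc_percent(860, 1200))  # 40 -> the common 1.21v result
print(oc_percent(860, 1350))  # 57 -> the OCUK 1.3v result, loosely "60% higher"
```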

Arzachel
May 12, 2012

spasticColon posted:

I have now pushed it to 1050MHz which is as far AMD OverDrive in the Catalyst Control Center will allow and the Heaven benchmark 3.0 has been running fine for over an hour now but I haven't messed with the VRAM clocks yet.

To go over 1050MHz, you'll need to use either Sapphire Trixx or the ASUS tweaking app. I would recommend the Sapphire app for the actual overclocking, but you might have to use the ASUS app to remove the limitation; I'm not sure if Sapphire allows that yet. It's generally not worth it to mess with the memory clocks too much, because the gains are small and error correction kicks in pretty quickly. Find the highest core clock at the voltage you're comfortable with, and only then bump up the memory clocks a bit, testing with some benchmark to make sure your scores don't deteriorate at the higher memory clocks.

Oh, also set the power control setting to +20% in CCC so you don't get throttled.
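The stepping procedure boils down to "walk the core clock up until it stops passing your stress test." A purely illustrative sketch; find_stable_core and is_stable are made-up placeholders for manual testing with your tuning app and an hour of Heaven, not real APIs:

```python
# Hypothetical sketch of the clock-stepping loop described above.
# is_stable() stands in for running your benchmark of choice at a given
# clock and watching for crashes/artifacts -- it is NOT a real API.

def find_stable_core(base_mhz, max_mhz, step_mhz, is_stable):
    """Return the highest clock that passed, backing off at the first failure."""
    best = base_mhz
    clock = base_mhz + step_mhz
    while clock <= max_mhz and is_stable(clock):
        best = clock
        clock += step_mhz
    return best

# Pretend the card gives up above 1150MHz, a typical 7870-PCB 7850 result:
print(find_stable_core(1000, 1300, 50, lambda mhz: mhz <= 1150))  # 1150
```

Once the core is settled you'd repeat the same walk on the memory clock, using benchmark scores rather than crashes as the failure signal, since error correction masks instability.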

Arzachel fucked around with this message at 13:12 on May 27, 2012

Arzachel
May 12, 2012

incoherent posted:

Intel is shipping poo poo (human fecal matter) and getting people to buy it.

Think about this: it ONLY supports DX10.1, and ships 9.0 natively. We're 2 years and a service pack into windows 7 and they're shipping DX9 hardware.

Intel is paying companies to use their crap, if that helps. (it doesn't)

Arzachel
May 12, 2012

Agreed posted:

That said, overclocking memory is a good way to waste TDP in modern cards, honestly. 200GB/s+ memory bandwidth is already more than plenty, and if you're running in SLI or Crossfire, double/triple/insanity mode quadruple that because of the nature of parallel access. Basically cards' memory bandwidth in the high end have been "plenty loving fast" for a few generations now, and overclocking memory by displayed rate usually means upping the real clock by at least 1mhz at a time, so you can set it to, say, 6007mhz if you want to, but the fundamental clock rate is probably not going to actually be 1001.75mhz with the way it's all strapped.

The best thing you can do is set it to factory stock settings to free up power and heat for the GPU and shaders, where it counts.

I agree for the most part, but 7970s seem to absolutely love memory bandwidth. At a certain point you get the same increase in performance from raising memory clocks as you get from the core. The same is probably true for 680s/670s, given how bandwidth limited those cards are.

Arzachel
May 12, 2012
Hawaii XT teaser: https://www.techpowerup.com/gpudb/2460/radeon-r9-290x.html

Clocks might not be final, though.

Arzachel
May 12, 2012

El Scotch posted:

"Data on this page may change in the future."

I'll reserve judgement until it's actually confirmed. I can't find a source for that data (yet).

Techpowerup are the guys behind GPU-Z, not some random hit-baiting site, so this has a reasonable chance of being true. We'll know for sure on the 25th though.

Arzachel
May 12, 2012
Aaand I was wrong, TPU does pull their pre-release specs from rumours. The 512bit memory bus is pretty ballsy if true.

Arzachel
May 12, 2012

LCD Deathpanel posted:

Not really.. ATI/AMD used to use 512bit memory buses on their cards up through the 2900XT's. They only moved to 256bit for the 3870's when they moved to faster GDDR4.

Yeah, that was a ring bus I believe. Nvidia ran 512bit GDDR3 on the GTX280 too. It's not that it hasn't been done before, but the added complexity and power draw weren't seen as worth it once AMD and Nvidia moved to GDDR5.

Arzachel
May 12, 2012

Agreed posted:

Edit: If current leaks are true, performance of the 9000s being ~roughly at Titan level (a few FPS here or there depending on AA, games, etc.) is not exciting to me. Releasing a new generation that competes strongly!... with the current generation. Welp. Hopefully they can hit hard on price this go-around, because I doubt Maxwell's highest end single GPU card is going to be an inch slower than the top-tier Kepler cards, would bet significantly faster; and, they've got some neat stuff that might be relevant to PC gaming being able to keep up with some of the cool poo poo that consoles are doing - at least according to currently available info on the green side of things. (disclaimer: if the leaks are true, obv.)

Node shrinks don't work miracles anymore. With TSMC's track record lately and how the 7970/680 launch went, we'd be lucky to see the first 20nm cards a year from now, slightly faster but more expensive than the 780/290X. If the performance is there, the 290X is likely to get even more mileage than the GTX580 did. Price it under the competition, bundle it with BF4, don't require BIOS flashing for sane manual overclocking, and it will do as well as a $600 GPU can.

Arzachel
May 12, 2012

Agreed posted:

The most complex turnpikes ever, and tessellated water meshes completely underneath geometry near coastlines :psyduck: No apparent distance culling on the water :psyduck: Did... they... do better with Crysis 3?

From what I've gathered lurking on the beyond3d forums, they actually do culling after the geometry has been calculated.

Arzachel
May 12, 2012

GrizzlyCow posted:

Have they spoken anything about how they'll approach their driver situation from now on? From what I gathered, nVidia stills beats them on that front especially for multi gpu setups. Are they finally moving away from dual-gpu cards, also, too?

Single GPU drivers are pretty much on par, although I prefer the look of Nvidia's control panel. For multi GPU, AMD still hasn't gotten frame pacing working for multi-monitor and tri/quadfire setups.

Arzachel
May 12, 2012

Zero VGS posted:

I got an MSI 7950 3GB Twin Frozr from Micro Center for $180 and I still have a few weeks to return it, does anyone actually think these Hawaii cards will have better value out of the gate or at least cause older AMD cards to drop substantially? I'm new to high-end PC gaming and don't really know how the market responds to this stuff.

AMD would have to really hit it out of the park, because an overclocked 7950 is pretty much unbeatable for bang per buck and that's not likely to change. No harm in waiting, but I can't remember the last time a discounted high end card from the previous gen wasn't a better deal than a new gen midrange card at launch (4870s were cheaper than 5770s for quite a while, same for the 480 vs 570, 6950 vs 7850, etc.).

Arzachel
May 12, 2012

Alereon posted:

Even post-correction frame-pacing on single AMD cards is as bad as nVidia's SLI frame pacing, which is rather unforunate.

Source? Everything I've seen since the single GPU fixes shows single GPU AMD cards with both higher frame rates and smoother frame delivery in the majority of cases versus comparable Nvidia GPUs, never mind SLI setups. I haven't followed this for a while, but I somehow doubt that Nvidia has pulled some magic drivers out in the meantime.

Arzachel
May 12, 2012

Alereon posted:

Check out the per-frame latency graphs in the TechReport article, by using the buttons to switch between the cards you can see that the GTX 680 graph is perfectly flat while the 7970 graph has some stutter. Though the 7970 does do perfectly in Sleeping Dogs, and AMD should be congratulated for the progress they've made in these games, the overall story of poor frame pacing on AMD cards remains true.

Yeah, you're right. The differences shown by Fraps are minuscule, but Nvidia does do better on the FCAT graphs. I was thinking of pcper's 13.2 beta 7 reviews (http://www.pcper.com/reviews/Graphics-Cards/Frame-Rating-GeForce-GTX-660-Ti-and-Radeon-HD-7950/Far-Cry-3, http://www.pcper.com/reviews/Graphics-Cards/Frame-Rating-GTX-660-vs-HD-7870-plus-HD-7790-HD-7850-GTX-650-Ti-BOOST) from before FCAT was a thing.

Arzachel
May 12, 2012
^^^ 280X != 290X

Performance is great, the price is better, the reference cooler is balls. I wouldn't be surprised if the performance delta to the 780 actually grows once non-reference cards come out, because the 780 has a pretty solid cooler.

Arzachel
May 12, 2012

Agreed posted:

Transistor density as high as that and clocks out the gate as high as that mean that operational temperatures as high as that are not the product of a lovely cooler. I'll be surprised if there's a lot of headroom to go faster on these cards. I really do think aftermarket cooling will be mainly for noise reduction, not better overclocking. Especially earlier on, maybe REALLY GOOD yields will change that later on, but I have my doubts.

The difference in power draw between the "reference" 7970GE reviewers got and current custom cooled 7970GEs is about 40W, give or take ten. The 290X shouldn't be such a rush job, but I have little confidence in AMD's reference blowers, and this looks like a slightly modified 7970 cooler. I'd be surprised if reviewers got the cards to clock over 900MHz under load.

Arzachel
May 12, 2012

Zero VGS posted:

Enlighten me, are blowers like, strictly worse than open-air fans? I would have thought a blower to be better from a scientific standpoint because it seems like it "scoops" the air through the system instead of just brute-force spinning some propellers.

Open air cooler fans can be bigger and you can mount several of them, and while dumping the heat into the case might seem counterproductive, you're passing it to large low-RPM case fans instead of having to move most of it with a dinky high-RPM blower fan.

In my opinion, the only reason to use a blower is in a small case with little room for case fans.

Arzachel fucked around with this message at 16:04 on Oct 24, 2013

Arzachel
May 12, 2012

Agreed posted:

The article uses a factory overclocked GTX 780, too - with the current WHQL driver, my GTX 780 is back up to 1176MHz (seriously, what the hell is up with driver versions changing the overclock's stable bin, that's just goofy), and in synthetic benches it seems like GK110 has a slight clock for clock advantage over AMD's newest GCN chip.

OcUK ran some synthetic benches showing the reverse to be true, a 1200MHz 290X beating a 1310MHz 780 HOF: http://forums.overclockers.co.uk/showpost.php?p=25171654&postcount=70, http://forums.overclockers.co.uk/showthread.php?t=18551534 (scroll down a bunch).

Arzachel
May 12, 2012
More on the Prolimatechs: http://forums.overclockers.co.uk/showthread.php?t=18551649

quote:

stock clocks and fan speeds

85'C in heaven on stock cooler

55'C in heaven on MK-26 cooler


Overclocked at 1200 Core + 1.4V


stock cooler at 85'c with 100% fan speed

MK-26 cooler at 72'C with silent fans.

quote:

Hi there

Right guys, being stress testing this solution over-night and all day to day.


At stock speeds, voltage reduced too 1225mv (Stock is 1250mv) the card running Heaven maxed out, never exceeded 50c underload and was completely silent. That is a good 30-35c improvement with the MK26 cooler over stock.

Maximum overclock does not improve any further with silent fans, but you can match the max overclock, again keeping silent fans your looking at approx 85c under load, so about 5c improvement over the stock fan at 100% which is hoover loud.

However this is the real sweet spot guys for 24/7 gaming:-

Core: 1100Mhz
RAM: 6000MHz
Voltage: 1300mv
Power: +50
Fans: Silent
Heaven 3.0: Tesselation Extreme, 16x/8x AA/AF at 2560x1440

Maximum load temperature was 62c and in SILENCE.
The stock cooler could do this but with two dis-advantages, you'd be pushing 85c area and the fans would be at 75-100%, so you'd be hotter and much louder.


In short this is the perfect gaming setup, yet you could push the OC easily to 1150 / 6400, but in our view why? Keep the card 110% safe and cool whilst still absolutely beasting everything.

The MK26 works superb, nothing is melting, VRM's, memory is all fine.

Soon we shall test EK Blocks for those who wish to go water as that will be the ultimate solution and a 24/7 gaming could be no doubt done easily with 1200/6400 speeds.

This is a fuckoff huge 4-slot cooler and they're selling their own product, but three-slot and the better two-slot aftermarket coolers would likely still cut 20°C under load, possibly while clocking higher, since the stock cooler throttles the card.

Arzachel
May 12, 2012

Straker posted:

I don't get it either. It seems to fill a very tiny niche... 1440p single-monitor gaming for people who really really can't be bothered with SLI/crossfire and are willing to pay a huge premium to avoid it? I see people posting about the "awesome price/performance". Since when is 20% more than a 7970 for well over double the price good price:performance? :raise:

The 290X can't handle triple monitors or 4K on anything fancy with fancy settings, so you're going to need two or three of them. I guess if you're committed to cramming as much GPU in your case as possible then yeah it sorta makes sense but if not, then it seems like you'd be much less stupid to go with 7970s or 780s or whatever. 7970s are $250 now, 7990s are as low as $500 and completely wreck the Titan or 290X or anything really, it's twice the card for twice the price which makes sense since, well, it's two cards in one. AMD's frame pacing isn't perfect for multiple monitor setups (presumably including 4K for now since those are generally tiled, right?) but it's awesome for, say, a single 1440p monitor which, again, is all you'd be using a single 290X for...

I recall launch reviews saying the Titan was 30% faster than the 7970GHz, so the 290X should be around the same unless I'm missing something. Anyway, gone are the days of performance doubling when going down a node; the GTX680 is what, 40-45% faster than the 580? 20nm planar seems to be underwhelming, so 20-30% more performance for the same price the 7970 launched at looks pretty decent. Ignoring SLI/CF profile issues, pre-290X Crossfire doesn't work properly on multi-monitor or >1440p setups, and the GTX690 is still pretty pricy and limited to 2GB VRAM. After a certain point more performance costs disproportionately more; I mean, you could have gotten an unlockable 6950 for about half the price of a 580 a bit after those launched. The hype is more due to the fact that the 290X brings high end prices more in line with previous gens.

Edit: Also, with how AMD Pro parts have traditionally turned out, I wouldn't be surprised by a $450 290 that's clock for clock only about 5% slower than the full part.

Arzachel fucked around with this message at 10:45 on Oct 26, 2013

Arzachel
May 12, 2012

Guni posted:

I don't think I've ever experienced screen tearing, thank gently caress. All this talk of new GPU's is making me wanna upgrade though. I've noticed that 7990's are like $700, are they a decent product [in terms of overall quality/usability etc], which brings me to another point, is AMD still having the crossfire issues?

For another question, what's the blower cooler like on the 770/780/titan's like if I were to look at a SLI setup?

Crossfire is fine under the following conditions: you run a 290/290X, or you don't use multiple monitors or a 1440p display. There should be a fix coming for multi-monitor Crossfire on pre-Hawaii cards, but until then don't bother. I wouldn't get the 7990 either; it's not very good and $700 is too much. Two 7950s with good coolers can be had for under $250 each; clock for clock they're within 5-8% of a 7970 with similar headroom, and they can usually hit 1000-1050MHz on the core while slightly undervolted, unless you get one with silly low stock voltage.

Arzachel
May 12, 2012

Mad_Lion posted:

I plan on finally upgrading my Core2Quad to a new Haswell i5 or i7 this Christmas. I already have a 7850 2GB. My question is, how do you think two 7850 2GB cards would do vs. say, a 280x/7970? They can be found used for pretty cheap, certainly cheaper than getting a 280x and selling my current one. Also, the one I've got will hit 1ghz core and 5ghz memory effortlessly.

Rule of thumb: two midrange cards are going to be faster than a single high(er) end card, but it's more of a hassle and you'd be limited to 2GB of VRAM. A cheap 7950/7970/280X could still be about 60-70% faster than your current card depending on the clocks, so it would be a pretty significant upgrade.

As an aside, if you're willing to tinker with Afterburner, most 7850s based on the 7870 PCB tend to hit 1100MHz with little to no extra voltage once the software clock limit comes off, with the best ones hitting up to 1250MHz overvolted.

Arzachel
May 12, 2012

Ruin Completely posted:

My 7850 can hit 1.2ghz with no problem, but when I tried to OC the memory poo poo crashed, I hope that doesn't prevent any worthwhile performance gains.

It shouldn't be too bandwidth bound; you can try running some benchmark at 1100/1150/1175/1200MHz and see if the scaling falls off too much for your liking.

Not sure why the hype about TXAA was ever a thing; from the screenshots it had the blurred out look of FXAA with a performance hit worse than MSAA. It might have worked better in motion, but I guess I don't hate jaggies enough to put up with vaseline-o-vision.

Arzachel
May 12, 2012

Rahu X posted:

The 290 has me feeling pretty mediocre. Sure, it's got amazing price/performance, but at the cost of having another loud hot pocket. Sure, you can slap on an Xtreme III to alleviate it, but you can only get a 12% OC out of it for the most part. A 780 can get a good bit more than that.

I call bullshit on this. The vast majority of launch day review OC tests are hilariously awful. "The slider won't go any higher in Catalyst, so I guess that's the best it can do" awful. People are running 290Xs at 1100MHz with stock volts on custom air cooling and 1200MHz+ slightly overvolted, so why would you think the 290 would do significantly worse?

Arzachel
May 12, 2012

Stanley Pain posted:

That video states it really well at the end. Boils down to what do YOU value most. $100, or more heat/loudness from your video card.

Once you factor in a custom cooler, the choice becomes whether Nvidia's brand name, feature set, and the 780 being more or less figured out by now are worth 60-70 bucks to you. High end GPUs aren't really a reasonable purchase by any means, so I can't fault people for going with the 780 regardless. Having to flash the BIOS for proper voltage control kinda sucks though.

Arzachel
May 12, 2012

Bloody Hedgehog posted:

I wouldn't worry to much about Mantle, it's not going to go anywhere and will basically end up as another Physx. There'll be a few developers that produce AAA games that take advantage of Mantle (like Batman and Phsyx), but it's not going to be accepted very widely. The last thing developers and publishers want to do is fracture their potential audience, and that's exactly what would happen if you started producing games where people in one video-card camp are getting a vastly superior game.

That is to say, if Mantle ends up as purely an AMD only option. If it ends up being able to be used by both AMD and Nvidia, I could see wider adoption, but honestly, even in that case I don't see it becoming "the next big thing".

Mantle isn't going to be used by Nvidia, much like PhysX isn't going to be used by AMD. What AMD is betting the farm on, and what differentiates Mantle from Glide, PhysX, etc., is that the API would be implemented in the bigger engines to be closely compatible with the console paths, so porting a multiplat to Mantle would have a low opportunity cost, since you've already written the code once. This rarely ends up as straightforward in practice, so we'll have to wait and see, but I'd say Mantle has a far greater chance to stick than GPU accelerated PhysX ever did.

Edit: 20nm GPUs are at the very least 9 months off (expect a year), and 20nm planar itself looks somewhat underwhelming, so there's no reason to think Maxwell/* Islands are going to obsolete your shiny new GPU overnight.

Arzachel fucked around with this message at 08:36 on Nov 6, 2013

Arzachel
May 12, 2012

BurritoJustice posted:

The main difference between Boost 2.0 and Powertune, is that with the stock cooler 780's won't just stay at the "Boost Clock", they will exceed it by a healthy margin when they can. While with the 290 and 290x the cards fight to try and reach the "Boost Clock". If a 780 has no thermal headroom, it will still likely run at it's "Boost Clock" but not in excess of it, while if the 290 has no headroom it will it run at the minimum speed and eventually increase fan speed to stay stable. Arbitrary difference yes, and more related to how the two different company's define "Base" and "Boost" clocks, but noticeable in practice.

Since reviewers usually don't limit Kepler GPUs to their boost clocks, there's zero difference in practice. Both a 290 and a 780 will perform worse than you'd expect from review benchmarks in a poorly ventilated case.

Arzachel
May 12, 2012

HalloKitty posted:

It may do well lower down, but at 3840x2160, it's essentially the same against the vastly cheaper 290X.

Not only at 3840x2160; the 290X performs pretty much the same clock for clock as the 780Ti in most synthetics except Valley: http://forums.overclockers.co.uk/showthread.php?t=1855542.

Arzachel
May 12, 2012
This probably goes here instead of the laptop or OC threads: is there a way to undervolt or at least disable turbo on my GTX660m? It throttles down to 720MHz after an hour or two under load and I would much rather fiddle with it by hand. I guess dusting the intake/fan is in order, but gently caress turbo anyway.

Arzachel fucked around with this message at 15:49 on Dec 28, 2013

Arzachel
May 12, 2012

craig588 posted:

You should be able to flash the bios to do that, but it's a bad idea. You can raise the power target in EVGA Precision to eliminate power throttling, but if it's temperature related you really want it throttling instead of burning itself up. At least on desktop boards there are over 40 different power states with different voltages and clock speeds depending on the temeprature and GPU load. Removing those states will give you worse overall performance and dramatically worse battery life. The Boost implementation on the Kepler GPUs is possibly one of the best power/temperature/performance management systems ever implemented and it's almost always a better idea to figure out what's causing it to throw a red flag and fix the problem instead of disabling Boost.

It's temperature; the GPU throttles to stay below 93°C, which is fine, but the thing would be stable at near idle voltages with a slight underclock. The laptop (Lenovo's Y580) is under a year old so I'm not sure it's dust; I'll probably have to either RMA it or refit the heatsink myself.

Arzachel
May 12, 2012

Agreed posted:

So if I were to mod my card, I could possibly get up to 1.31V or so (allowing for the less robust power delivery, but also allowing for the cooling being really good and my airflow being killer, consistently idling at around 20ºC and VERY rarely topping 45ºC in modern games or benchmarks) and go from trivial 1250 to trivial 1350+?

1500MHz, 7.1 billion transistors, fuuuuuck me.

Hmm hmm hmm I don't have a second BIOS and I don't know if it'd be worth it to go in and dick around but I am sure wondering if I might have been better off going with a Classified now :aaaaa:

You could do the same on 780s and Titans with a flashed BIOS; a bunch of people ran them at ~1400MHz core with decent cooling.

Arzachel
May 12, 2012
Is this where I joke about how electromigration can't keep up with your buying habits? :v:

Honestly, I wouldn't do it, not so much due to the voltage, since GK110s (and 290s and pretty much anything on TSMC 28nm except early-ish GK104s) seem to take voltage pretty well, but because I'm not man enough to flash the BIOS without having a second one to fall back on if I gently caress up. You've got a cool setup and seem happy with it. Unless you enjoy the tinkering more than actually playing games, in which case go hog wild!

Arzachel
May 12, 2012

Gwaihir posted:

It's the first ARM based SOC that builds in a full desktop Kepler core. It's a smartphone chip with a whole Kepler based compute unit, so it supports everything a real desktop GPU does, instead of the much more limited featureset most mobile GPUs offer.

Last I checked, the thing has a >5W TDP, which means you won't see it in phones. Maybe v2 on 20nm.

Arzachel
May 12, 2012

exquisite tea posted:

I looked around online and I think part of the weirdness had to do with AC4 forcing 30fps with VSync on if your card can't put out a continuous 60fps. I knew something was weird when I'd turn shadows from low to high with no framerate difference at all. Turning VSync off also isn't an option due to excessive screen tearing. I ended up having to download RivaTuner to force triple buffering and now the game runs on high settings at 45fps in open areas and 60fps everywhere else, this is with a GTX 760.

Download Nvidia Inspector and force a frame cap at 59/60fps or adaptive V-Sync. Hell, everybody should download Nvidia Inspector or RadeonPro; if you're asking "it'd be cool if you could make my GPU do X", chances are those two can do something like it.


Arzachel
May 12, 2012
Just to put 10% into perspective, that's the difference between a boost 7970 (or a 7950 at about 1130MHz) and a GTX780 in BF4 at 2560x1440.

On a completely different note, I blame the coincalypse on everyone who bought lovely 660 Tis, 670s and 760s instead of the 7950, even after the frame pacing fixes :colbert:
