BurritoJustice
Oct 9, 2012

Y'all worrying about the 980ti 3GB thing should probably click the "link" and not just copy it into a browser bar.


suddenlyissoon
Feb 17, 2002

Don't be sad that I am gone.
Why is Amazon loving around with stocking these cards?

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

pr0zac posted:

Well I bought a 980 ti. I bought it for someone else but still counts for bragging purposes right?

:glomp:

PC LOAD LETTER
May 23, 2005
WTF?!

Paul MaudDib posted:

You might be able to make a 4GB card behave like a 5GB or 6GB card (maybe 6GB) but it's not going to ever hit 200% or 300% of where it is right now.
AMD was claiming that their memory usage is massively inefficient. They claimed they never bothered to improve efficiency because, for years, adding more GDDR5 RAM was the easy option. They also wouldn't need an effective 200-300% improvement in capacity either. If 6GB is enough for the 980Ti, why wouldn't an effective ~6GB be enough for Fiji?

I'd also agree that their silence indicates they know they can't beat the 980Ti on performance. I'd also agree that at $600 Fiji won't sell well at all; at $500 it would be a good buy though.

xthetenth
Dec 30, 2012

Mario wasn't sure if this Jeb guy was a good influence on Yoshi.

They'd be drat fools not to position Fiji against where the 980ti is. Near-Titan performance at $650 changes the price/performance they have to hit. Yeah, they just got gut-punched, but we'll see what they come out with. (Incidentally, the fact that they're still working on making the BIOS actually work isn't consistent with that performance estimate being meaningful.)

Xotl
May 28, 2001

Be seeing you.
So if both companies are using the same size process and have been for years, how is Nvidia managing these massive performance increases while AMD is busy pioneering ways to add different letters and numbers to existing stock?

Xotl fucked around with this message at 04:28 on Jun 3, 2015

Bleh Maestro
Aug 30, 2003

xthetenth posted:

They'd be drat fools not to position Fiji against where the 980ti is. Near-Titan performance at $650 changes the price/performance they have to hit. Yeah, they just got gut-punched, but we'll see what they come out with. (Incidentally, the fact that they're still working on making the BIOS actually work isn't consistent with that performance estimate being meaningful.)

It's coming with a water cooler, and the 295x2 is like $600 right now. I don't see how they can price it competitively without eating a huge loss, or at least much thinner margins than they'd planned before the 980ti was announced.

LiquidRain
May 21, 2007

Watch the madness!

Xotl posted:

So if both companies are using the same size process and have been for years, how is Nvidia managing these massive performance increases while AMD is busy pioneering ways to add different letters and numbers to existing stock?
The same reason Intel can produce a better CPU than AMD, and how AMD beat Intel back in the day (and how ATI/AMD beat nVidia during the 9700/9500 series and the HD 4000 series): better research, betting on the right technologies, and focusing on efficiency over brute force.

It's funny because I remember reading that AMD felt hitting the mid-market ($200-$300) with a chip first was one of the reasons they did so well with the HD 4000 series, and that they'd made a mistake going back to gargantuan chips first for the HD 5000 series. They seemed to have learned that efficiency is what matters most, and yet... now... sigh.

Please catch up AMD. :/

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

PC LOAD LETTER posted:

AMD was claiming that their memory usage is massively inefficient. They claimed they never bothered to improve efficiency because, for years, adding more GDDR5 RAM was the easy option. They also wouldn't need an effective 200-300% improvement in capacity either. If 6GB is enough for the 980Ti, why wouldn't an effective ~6GB be enough for Fiji?

I'm just dubious that they can really squeeze that much blood from a stone.

You rob banks because that's where the money is, and you similarly attack memory waste by targeting the portions that consume the largest amount of memory first. While I don't have AMD's metrics, my gut tells me that big-rear end hi-res textures are what chomps up gigs and gigs of memory in high-end use-cases. You can't squeeze 50% memory improvement out of a store of bitmaps. Yeah, memory compression helps some, but the 980 Ti has that too. A 4GB card with compression is still going to lose to a 6GB card with compression every time.

His example was "our framebuffers are inefficient" and to me that seems like robbing where the money isn't. In a common high-end use case, a single 4K frame is 8.3 megapixels, which at 24 bpp works out to 24.9 megabytes a pop. So let's say we have 20 copies of that sitting around from our inefficient framebuffer manager; that's 500MB of memory. Let's assume AMD assigns a crack team of neckbeards to it and gets it down to the perfect best case - 2 frames, one rendering while the other draws to screen. That's a 450MB reduction total, huge in relative terms but only about an 11% improvement in absolute terms on a 4GB card. And I don't think they can get it to perfect given the crap the GPU has to do in pre-Mantle frameworks. Maybe not even then. A 50% reduction in framebuffer is only a 250MB reduction, a 6.1% total improvement.

So again, you rob banks because that's where the money is. And I think the places where there's money to be stolen in a GPU are Fort Knox in terms of difficulty.
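For the curious, the arithmetic sketches out like this (assuming a 4GB card and 24-bit 3840x2160 frames):

```python
# Back-of-envelope check of the framebuffer argument above.
width, height = 3840, 2160
bytes_per_pixel = 3  # 24 bpp
frame_mb = width * height * bytes_per_pixel / 1e6  # ~24.9 MB per 4K frame

copies_before, copies_after = 20, 2  # sloppy manager vs. perfect double-buffer
saved_mb = (copies_before - copies_after) * frame_mb
card_mb = 4096  # 4GB card

print(f"one 4K frame: {frame_mb:.1f} MB")
print(f"dropping {copies_before} copies to {copies_after} saves {saved_mb:.0f} MB "
      f"({saved_mb / card_mb:.0%} of a 4GB card)")
```

Huge relative to the framebuffer pool, small relative to the whole card - which is the point.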

quote:

I'd also agree that their silence indicates they know they can't beat the 980Ti on performance. I'd also agree that at $600 Fiji won't sell well at all; at $500 it would be a good buy though.

Yeah, given that they're running hotter and don't have the walled garden I think they have to be a bit cheaper. $500 would be about the normal pricing strategy for AMD vs NVIDIA.

It's really a shame because walled gardens are not a good thing in the long term, nor is NVIDIA driving their only real competitor out of business.

Paul MaudDib fucked around with this message at 04:44 on Jun 3, 2015

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
I'm still confused as to how Fiji can use up to 375W yet have less performance than the R9 295X2? Like, I know one is a Xfire card, but holy poo poo it sounds like the comparison isn't even there; it's not even a 6970-vs-5970 comparison.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Xotl posted:

So if both companies are using the same size process and have been for years, how is Nvidia managing these massive performance increases while AMD is busy pioneering ways to add different letters and numbers to existing stock?

Totally talking out my rear end here, but:

I think a major reason is that AMD has to spread their investment over a huge number of market segments, while NVIDIA just does GPUs. It's a high-margin business, and NVIDIA can keep rolling the profits back into research, while AMD's GPU group also has to keep pumping cash into their server/desktop/mobile/embedded groups, including all the necessary crap like chipsets. It's one of AMD's two lifelines, the other being APUs, which have an increasingly bleak future now that Intel has figured out how to design a reasonable iGPU.

Yeah, NVIDIA does embedded stuff like Tegra too, but they pretty much keep to making a few ARM SoCs, or a dev board here and there at most. And arguably that stuff can be treated as research too, since they can leverage it into efficient GPU designs.

NVIDIA also does server products, but they're server GPUs, which are an insanely high-margin business - a single K80 will set you back a full $5K. Yeah, AMD has FirePros, but NVIDIA has another walled garden there: CUDA is the dominant framework for GPGPU work, and OpenCL is a fraction of the market at best. It also gives them experience in designing top-of-market GPUs. It's another case of them leveraging the few excursions they make from their core business.

Paul MaudDib fucked around with this message at 05:07 on Jun 3, 2015

xthetenth
Dec 30, 2012

Mario wasn't sure if this Jeb guy was a good influence on Yoshi.

LiquidRain posted:

The same reason Intel can produce a better CPU than AMD, and how AMD beat Intel back in the day. (and how ATI/AMD beat nVidia during the 9700/9500 series, and the HD 4000 series) Better research, betting on the right technologies, and focusing on efficiency over brute force.

Funnily enough, their main play this year in processors is all about efficiency. I think part of it is that NV gutted the hell out of DP for Maxwell, to the point that by the standards of other Titan branding the Titan X is made of lies and bullshit - just an overpriced gaming card with too much RAM - while AMD was making a serious effort at workstation stuff, so they've got a ton of transistors leaking and not doing much. Also, Maxwell is a really well-designed architecture in terms of being able to scale, while AMD has had to push clock speeds higher and give up a bit of efficiency.

Honestly though, everything below the 970 is a hilarious mess where you can buy an anemic NV card or a powerful AMD card, and the AMD cards aren't getting much traction because apparently computers don't come with real power supplies these days. Also, AMD hasn't been able to pull off a top-to-bottom refresh in ages; their three-year-old cards are doing fantastically, all things considered.

Bleh Maestro posted:

It's coming with a water cooler, and the 295x2 is like $600 right now. I don't see how they can price it competitively without eating a huge loss, or at least much thinner margins than they'd planned before the 980ti was announced.

The 295X2 is two chips, another fan, a much bigger block, and overall much more stuff. I don't doubt for a second that their margins are going to be a lot tighter than they were expecting before the 980ti dropped. I wouldn't be surprised if the rumors of Titan-style market-milking prices were true right up until it became clear they weren't going up against the Titan, because the milking time is over. Also, there's got to be a binned part, and that may very well come with nice aftermarket coolers and a very nice price tag.

Paul MaudDib posted:

I'm just dubious that they can really squeeze that much blood from a stone.

You rob banks because that's where the money is, and you similarly attack memory waste by targeting the portions that consume the largest amount of memory first. While I don't have AMD's metrics, my gut tells me that big-rear end hi-res textures are what chomps up gigs and gigs of memory in high-end use-cases. You can't squeeze 50% memory improvement out of a store of bitmaps. Yeah, memory compression helps some, but the 980 Ti has that too. A 4GB card with compression is still going to lose to a 6GB card with compression every time.

Isn't the compression just for transmission, to let them run a higher-clocked, narrower bus and still get effectively the same bandwidth as a wider bus - not to save memory space?

quote:

His example was "our framebuffers are inefficient" and to me that seems like robbing where the money isn't. In a common high-end use case, a single 4K frame is 8.3 megapixels, which at 24 bpp works out to 24.9 megabytes a pop. So let's say we have 20 copies of that sitting around from our inefficient framebuffer manager; that's 500MB of memory. Let's assume AMD assigns a crack team of neckbeards to it and gets it down to the perfect best case - 2 frames, one rendering while the other draws to screen. That's a 450MB reduction total, huge in relative terms but only about an 11% improvement in absolute terms on a 4GB card. And I don't think they can get it to perfect given the crap the GPU has to do in pre-Mantle frameworks. Maybe not even then. A 50% reduction in framebuffer is only a 250MB reduction, a 6.1% total improvement.

So again, you rob banks because that's where the money is. And I think the places where there's money to be stolen in a GPU are Fort Knox in terms of difficulty.

I don't think they were robbing any banks before, so hopefully they can assemble small gains from all over the place into something big. Before, they just kept adding chips to keep ratcheting up bandwidth - why go robbing banks when you've got an interest-bearing fund giving you what you need?


Paul MaudDib posted:

The first Fiji PCBs were spotted on import invoices in November I think. The rumors were that Fiji would launch in January (I think), then another spotting on the import invoices, then another rumored launch in late Feb/early March. Then the launch got pushed back to Computex. I think that time segment is probably indicative of some hardware issues. Now that the launch is pushed back again and someone's demoing a Fiji with no BIOS - I think they're fighting software issues. In all fairness it does seem like it would be really hard to assess the performance of a card if you've got major BIOS issues. On the other hand it's also Not My Problem, I'll take a 980 Ti that works today, guaranteed, over a Fiji that costs 30% more that might beat it by 30% at some point down the road.

This seems really weird to me, considering they're apparently making an announcement on the 16th. Getting bitten even one time in five over the course of an entire generation seems way worse than waiting a few weeks. Between the drop-dead certainty of Fiji bins at lower price points - binned top chips like the 980ti, 970, 7950 et al have historically been great values - and how short the wait is, I don't see the point in not waiting.

FaustianQ posted:

I'm still confused as to how Fiji can use up to 375W yet have less performance than the R9 295X2? Like, I know one is a Xfire card, but holy poo poo it sounds like the comparison isn't even there; it's not even a 6970-vs-5970 comparison.

It's a water cooled card with huge potential OC headroom and a fixed design so it's at least partially pushing in on the historic turf of stuff like lightning and kingpin versions so a 6+8 wasn't necessarily going to cut it in all situations?

xthetenth fucked around with this message at 05:05 on Jun 3, 2015

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
I'm still holding out on the idea that AMD might start binning Fury cards so they can scale them downwards. Heck, if their claims are solid at all, a 3GB HBM card sounds workable, maybe even a 2GB card. Scale speeds appropriately and maybe they don't need to compete with the 980 Ti, but can murder the 970 instead.

So announce the RX 300 series, which has mild efficiency gains but is essentially the previous gen running on DX12. Drop Fury, then bin everything to make the RX 400 series. Drop the RX 500 series in 2016?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

xthetenth posted:

The 295X2 is two chips, another fan, a much bigger block and overall much more stuff. I don't doubt for a second that their margins are going to be a lot tighter than they were thinking before the 980ti dropped. I wouldn't be surprised if the rumors of titan style market milking prices were true until it became clear they weren't going up against Titan because the milking time is over.

The 295x2 is also a full 500W TDP part with a 744W peak, and it's a Crossfire-on-board unit, so some games will run at half speed (for both Fiji and the 295x2, roughly). If you're running at 1440p/4K, your game supports Crossfire, you have a case that can mount a big-rear end power supply, and you would never Crossfire your Fiji if you bought one - the 295x2 is the best value around. Stacking discounts, you can actually get them around the $450 range. If you fit the use case you'd be stupid not to; you'd be buying at least a 980 Ti to replicate that kind of capability.

quote:

Also there's got to be a binned part and that may very well come with nice aftermarket coolers and a very nice price tag.
In theory that's Fiji Pro, vs the full-chip Fiji XT. Rumors are that it'll clock in around $100 less than the equivalent Fiji XT board. I guess it depends on just how cut-down a part it is. I'd personally hazard a guess at GTX980-like performance for $450, dropping to $400 in a couple months.

Paul MaudDib fucked around with this message at 05:19 on Jun 3, 2015

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

xthetenth posted:

It's a water cooled card with huge potential OC headroom and a fixed design so it's at least partially pushing in on the historic turf of stuff like lightning and kingpin versions so a 6+8 wasn't necessarily going to cut it in all situations?

If it's got a lot of OC room then I take it you've got a grain of salt for the performance rumors?

Ragingsheep
Nov 7, 2009
They've already got the 290 and 290X, which are competitive with the 970, especially at the lower prices (just ignore power consumption), and which they appear to still have tons of stock of. It's the 980Ti and Titan tier where they don't have a viable single-card solution.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

Ragingsheep posted:

They've already got the 290 and 290X, which are competitive with the 970, especially at the lower prices (just ignore power consumption), and which they appear to still have tons of stock of. It's the 980Ti and Titan tier where they don't have a viable single-card solution.

Competitive in performance but not efficiency, which also makes them less competitive in price once you factor in the larger PSU. Maybe the R9 390s run cooler though, making the comparison more apt - maybe they can really undersell Nvidia.

Ak Gara
Jul 29, 2005

That's just the way he rolls.

Ozz81 posted:

Hell, I bought a 465 years ago since at the time it was sub $200 and had pretty great performance for a midrange card. I almost crapped myself when I decided to go SLI and noticed the primary card was hitting mid 60s in temperature...running Photoshop. I can't imagine having one or more of the 470-480 cards with how hot they got and the amount of power they used under load...

I can't find it right now but I once saw a video with the title "man turns quad sli 480 pc on" and when you click it, it's that one scene from Terminator 2.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Can you SLI a Titan X with a 980 Ti?

Obviously you'd only get the equivalent of two 980 Tis, but having the extra 6GB of memory for compute is probably worth the extra $350 to me. I don't care about the graphics performance, if I could get away with SLI'ing a 980 Ti I'd probably consider buying one of those down the line instead of a matching card.

Paul MaudDib fucked around with this message at 05:30 on Jun 3, 2015

Ragingsheep
Nov 7, 2009

FaustianQ posted:

Competitive in performance but not efficiency, which also makes them less competitive in price once you factor in the larger PSU. Maybe the R9 390s run cooler though, making the comparison more apt - maybe they can really undersell Nvidia.

Yeah, but if you're AMD, you can either write off a whole lot of 290/X chips and start binning the newer Fiji chips (which are presumably more expensive per unit) for the same-ish performance, or keep Fiji production just high enough for the very top end. Also, there's no indication that Fiji chips are more efficient than their existing range.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Ragingsheep posted:

Yeah, but if you're AMD, you can either write off a whole lot of 290/X chips and start binning the newer Fiji chips (which are presumably more expensive per unit) for the same-ish performance, or keep Fiji production just high enough for the very top end. Also, there's no indication that Fiji chips are more efficient than their existing range.

Since it's still a GCN-based card I guess it actually seems kind of questionable that efficiency would improve that drastically.

The more I think about it the more I think Fiji will end up around 300W. The connectors could feed up to 375W so I was thinking 300-350W, but the argument that they'd leave some headroom in the connectors for overclocking does make sense. Maybe 275W at the absolute low end, but probably closer to 300.
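For reference, the 375W figure is just the sum of the PCIe spec ceilings for the slot and aux connectors (spec limits, not measured draw):

```python
# PCIe power budget per the spec: 75W from the x16 slot,
# 75W per 6-pin connector, 150W per 8-pin connector.
SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150

def board_budget(*connectors):
    """Total spec power available to a card with the given aux connectors."""
    return SLOT + sum(connectors)

print("6+8-pin:", board_budget(SIX_PIN, EIGHT_PIN), "W")    # 300 W
print("8+8-pin:", board_budget(EIGHT_PIN, EIGHT_PIN), "W")  # 375 W
```

So an 8+8-pin board tops out at 375W on paper even if the card never actually draws that much.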

xthetenth
Dec 30, 2012

Mario wasn't sure if this Jeb guy was a good influence on Yoshi.

FaustianQ posted:

If it's got a lot of OC room then I take it you've got a grain of salt for the performance rumors?

I personally have a feeling that AMD is going to take clockspeeds to wherever they need to be to get the performance. I'm hoping the power draw is sane when they're through. I'm worried they're going to eat into the OC headroom and push the stock clocks up into diminishing-returns land for power/performance.

FaustianQ posted:

I'm still holding out on the idea that AMD might start binning Fury cards so they can scale them downwards. Heck, if their claims are solid at all, a 3GB HBM card sounds workable, maybe even a 2GB card. Scale speeds appropriately and maybe they don't need to compete with the 980 Ti, but can murder the 970 instead.

So announce the RX 300 series, which has mild efficiency gains but is essentially the previous gen running on DX12. Drop Fury, then bin everything to make the RX 400 series. Drop the RX 500 series in 2016?

I'm not sure there. I have a feeling they have a generation of stillbirths designed for 20nm they're still having to get out of the hole from. Fury chops could be interesting, but the memory situation may be trouble for them. I do have a feeling that a bin one rung below the water cooled premium pimp edition that was going to fill the gap between the 390X at 350 or less and the top one at 800 may very well be the price/performance pick of the high end. Or HBM is premature and we get what we've got till HBM2.

Paul MaudDib posted:

The 295x2 is also a full 500W TDP part with a 744W peak, and it's a Crossfire-on-board unit, so some games will run at half speed (for both Fiji and the 295x2, roughly). If you're running at 1440p/4K, your game supports Crossfire, you have a case that can mount a big-rear end power supply, and you would never Crossfire your Fiji if you bought one - the 295x2 is the best value around. Stacking discounts, you can actually get them around the $450 range. If you fit the use case you'd be stupid not to; you'd be buying at least a 980 Ti to replicate that kind of capability.

In theory that's Fiji Pro, vs the full-chip Fiji XT. Rumors are that it'll clock in around $100 less than the equivalent Fiji XT board. I guess it depends on just how cut-down a part it is. I'd personally hazard a guess at GTX980-like performance for $450, dropping to $400 in a couple months.

Yeah, you can get the 295x2 real cheap. I still wouldn't do it compared to a 980ti because of, for example, Unreal Engine 4. Also, even after CF really stepped up its game, frame times are still a lot less consistent, which is not good for how the game feels. If you want as much framerate for the money as you can get and are willing to deal with headaches it's certainly viable, but it's not something I'd go for.

FaustianQ posted:

Competitive in performance but not efficiency, which also makes them not competitive in price due to the larger PSU. Maybe the R9 390s run cooler though and the comparison is more apt, maybe they can really undersell Nvidia.

The price difference between the 290X and the 970 has usually been able to cover the difference in PSU (and honestly, system power draw for a 290X system seems to be safely under 500W, and getting an actually good PSU under that number takes some doing). I will note that for upgrading OEM systems it can be a big deal though, because OEM PSU picks are goddamn horrible trash.

Paul MaudDib posted:

Since it's still a GCN-based card I guess it actually seems kind of questionable that efficiency would improve that drastically.

The more I think about it the more I think Fiji will end up around 300W. The connectors could feed up to 375W so I was thinking 300-350W, but the argument that they'd leave some headroom in the connectors for overclocking does make sense. Maybe 275W at the absolute low end, but probably closer to 300.

It's likely the same version of GCN that Tonga was based on, and while the 285 was a resounding mediocrity, the full Tonga chips binned for Apple's use turned in considerably better performance. I wouldn't doubt 300W though; the rumored performance and all that makes me think they might be upping clocks to compensate, and I don't think even later GCN gets clock speed as cheaply as Maxwell does.

xthetenth fucked around with this message at 05:37 on Jun 3, 2015

Gwaihir
Dec 8, 2009
Hair Elf

Paul MaudDib posted:

Can you SLI a Titan X with a 980 Ti?

Obviously you'd only get the equivalent of two 980 Tis, but having the extra 6GB of memory for compute is probably worth the extra $350 to me. I don't care about the graphics performance, if I could get away with SLI'ing a 980 Ti I'd probably consider buying one of those down the line instead of a matching card.

Nope, different chips. SLI chips have to have matching memory and shader counts.

Ragingsheep
Nov 7, 2009

Paul MaudDib posted:

Since it's still a GCN-based card I guess it actually seems kind of questionable that efficiency would improve that drastically.

The more I think about it the more I think Fiji will end up around 300W. The connectors could feed up to 375W so I was thinking 300-350W, but the argument that they'd leave some headroom in the connectors for overclocking does make sense. Maybe 275W at the absolute low end, but probably closer to 300.

Yeah that was my point. Power efficiency is supposedly targeted by Arctic Islands (the next gen after Fiji).

BurritoJustice
Oct 9, 2012

Paul MaudDib posted:

Can you SLI a Titan X with a 980 Ti?

Obviously you'd only get the equivalent of two 980 Tis, but having the extra 6GB of memory for compute is probably worth the extra $350 to me. I don't care about the graphics performance, if I could get away with SLI'ing a 980 Ti I'd probably consider buying one of those down the line instead of a matching card.

No, SLI needs identical cores (a GM200-400 and a GM200-310 won't work together) and identical VRAM amounts (6GB vs 12GB). You can't even SLI a 2GB 760 with a 4GB 760.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

xthetenth posted:

The price difference between the 290X and the 970 has usually been able to cover the difference in PSU (and honestly, system power draw for a 290X system seems to be safely under 500W, and getting an actually good PSU under that number takes some doing (I will note that for upgrading OEM systems that can be a big deal though because OEM PSU picks are goddamn horrible trash).

Throw away your goddamned OEM PSU. Just shut up and do it. I don't care whether your NVIDIA GPU draws 50-75W less than an AMD equivalent, there's just absolutely no basis for trusting that your OEM power supply can actually deliver more than 35-50% of its rating.

I bought a decent Rosewill Capstone 550W gold-rated PSU for $40 AR from Newegg 3 weeks ago; the sale ran for an entire month and didn't sell out. Ask yourself if you're enough of a gambler to risk your entire build to save $40. I also grabbed another gold-rated Antec TPC-750W for $50 about a week ago.

We're talking about a device that's hooked up both to your 5-12V components and to an effectively unlimited 120V AC source; when it goes, it can go big time. There's just no reason to run those risks.

Paul MaudDib fucked around with this message at 05:46 on Jun 3, 2015

xthetenth
Dec 30, 2012

Mario wasn't sure if this Jeb guy was a good influence on Yoshi.

Paul MaudDib posted:

Throw away your goddamned OEM PSU. Just shut up and do it. I don't care whether your NVIDIA GPU draws 50-75W less than an AMD equivalent, there's just absolutely no basis for trusting that your OEM power supply can actually deliver more than 35-50% of its rating.

I bought a decent Rosewill Capstone 550W gold-rated PSU for $40 AR from Newegg; the sale ran for the entire month and didn't sell out. Ask yourself if you're enough of a gambler to risk your entire build to save $40.

I agree wholeheartedly. Throw those piles of poo poo away rather than buying a card that draws less power but costs much more for the performance. You can pay once for a decent PSU, or pay every time for a more efficient GPU that performs worse, on top of the cost of risking the whole build.

If you buy, say, an R9 290, which can be had with minimal searching for $250, plus a $50 EVGA 500 B (which reviewed well when I checked for a friend), IIRC you're getting nearly linear price/performance scaling over a GTX 960 at around $200, once you factor in the 290 having an aftermarket cooler that keeps the card from throttling like the reference does. Then the next time you make that choice you just buy the better card and save money, and you aren't putting your build at risk with a firecracker. If you're really worried about saving money by saving power, LED light bulbs can be had two for $5 - do that first.
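Back-of-the-napkin on that scaling claim - the prices are from above, but the 1.5x performance ratio is my assumption for illustration, not a benchmark:

```python
# Hypothetical figures: prices as quoted, performance ratio assumed.
r9_290_build = 250 + 50   # card plus a decent PSU (the EVGA 500 B)
gtx_960_build = 200       # card alone, into whatever PSU you already have
perf_ratio = 1.5          # assumed R9 290 vs GTX 960 performance

cost_ratio = r9_290_build / gtx_960_build
print(f"cost ratio {cost_ratio:.2f} vs perf ratio {perf_ratio:.2f}")
# Equal ratios = linear price/performance: you pay 1.5x for 1.5x the frames.
```

Point being, the extra PSU money doesn't break the scaling if the faster card is that much faster.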

xthetenth fucked around with this message at 05:51 on Jun 3, 2015

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

Paul MaudDib posted:

Throw away your goddamned OEM PSU. Just shut up and do it. I don't care whether your NVIDIA GPU draws 50-75W less than an AMD equivalent, there's just absolutely no basis for trusting that your OEM power supply can actually deliver more than 35-50% of its rating.

I bought a decent Rosewill Capstone 550W gold-rated PSU for $40 AR from Newegg 3 weeks ago, the sale ran for an entire month and didn't sell out. Ask yourself if you're enough of a gambler to risk your entire build to save $40. I also grabbed another gold-rated Antec TPC-750W for $50 about a week ago.

We're talking about a device that is hooked up to both 5-12V components as well as an unlimited source of 120V AC, when it goes it can go big time. There's just no reason to run those risks.

I've actually seen some HPs and Dells with 80 Plus units stock.

xthetenth
Dec 30, 2012

Mario wasn't sure if this Jeb guy was a good influence on Yoshi.

Don Lapre posted:

I've actually seen some HPs and Dells with 80 Plus units stock.

That's really awesome and I hope they keep doing that. My last experience with the things was that same friend, who had a 650W supply rated to provide fewer (!) amps than the 500W supply I mentioned.
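To put numbers on the amps thing: a GPU lives off the 12V rail, so a unit with a bigger sticker wattage can still deliver less usable power. The label figures below are made up for illustration:

```python
# Hypothetical label values: a "650W" unit with a weak 12V rail
# vs a "500W" unit with a strong one.
def rail_12v_watts(amps):
    """Usable 12V power for a given rated 12V amperage."""
    return 12 * amps

cheap_650w = rail_12v_watts(34)   # 408 W on the rail the GPU actually uses
decent_500w = rail_12v_watts(40)  # 480 W
print(cheap_650w, decent_500w)    # the "smaller" PSU wins where it counts
```

The rest of the sticker wattage is padded out on the 3.3V/5V rails nothing modern draws much from.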

I think the latter are providing a decent amount of the market for cards like the 750ti, whose main sales draw is not needing a PCIe connector (and performing like it), and the 960, which is selling well at a price where I, a longtime habitual buyer of GTX x60 cards, wouldn't even give it the time of day.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.


Dats a bigass gpu, or tiny asian hands.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

xthetenth posted:

That's really awesome and I hope they keep doing that. The last experience I had with the things was said friend, who had a 650W supply rated to provide fewer (!) amps than the 500W supply I mentioned.

I think the latter are providing a decent amount of the market for cards like the 750ti, which has not needing a PCIe connector (and performing like it) as its main sales draw, and the 960, which is selling well for a price where I, a longtime habitual buyer of GTX x60 cards, wouldn't even give it the time of day.

Same - a gifted HP Z400 workstation had a "450W" PSU that could be over-currented by an R9 280. It had an 80+ rating and everything.

It also had a custom mobo connector pinout so they could block compatibility with standard PSUs. However, with a little cunning and a little idiocy you can just wire in a second power supply.

Paul MaudDib fucked around with this message at 06:12 on Jun 3, 2015

xthetenth
Dec 30, 2012

Mario wasn't sure if this Jeb guy was a good influence on Yoshi.

Paul MaudDib posted:

Same - a gifted HP Z400 workstation had a 450W PSU that could be over-currented by an R9 280 and also was not compatible with standard ATX power supplies.

Yikes. I'm looking at benchmarks now and that 280 should be pulling a system draw of 300W, probably with an i7 because that's how review sites roll. At least tell me that thing had something a bit more hungry on the CPU side than a Haswell i7.

PC LOAD LETTER
May 23, 2005
WTF?!

Xotl posted:

So if both companies are using the same size process and have been for years, how is Nvidia managing these massive performance increases while AMD is busy pioneering ways to add different letters and numbers to existing stock?
NV cut out a lot of the GPGPU-oriented hardware to get the perf/watt they're getting. It chews up die space and puts out lots of heat. IIRC the 980 is about as good as a 4870 on DP workloads. That sort of workload just isn't very important to gaming or even some GPGPU stuff, so AMD being kick rear end at it isn't a big deal for games, which is all most people care about.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

PC LOAD LETTER posted:

NV cut out a lot of the GPGPU-oriented hardware to get the perf/watt they're getting. It chews up die space and puts out lots of heat. IIRC the 980 is about as good as a 4870 on DP workloads. That sort of workload just isn't very important to gaming or even some GPGPU stuff, so AMD being kick rear end at it isn't a big deal for games, which is all most people care about.

In particular the Titan X is a kick-rear end card for neural net stuff - you don't need double precision for that, and latency doesn't matter because you can queue up a billion threads and wait it out. My Master's thesis was a simulation done in single precision, as was the source program. It's not universally applicable, but I think a lot of people overestimate the need for double precision nowadays, just because it's there. If you can get away with floats (and for most applications you can, especially with some effort) then 12GB for $1k is a goddamn steal - a K40 with 12GB of memory will set you back $3500. That's why I'm thinking about shelling out the extra $350 for the Titan X. Doubling my working set for $350 is a no-brainer, even if I have to spend another $350 down the line if I SLI.
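The "doubling my working set" arithmetic is just element size: float32 is 4 bytes to float64's 8, so the same VRAM holds twice the elements. A quick NumPy sketch (the element count is illustrative, not from any particular workload):

```python
import numpy as np

# Hypothetical 1-billion-element state array, as in a large simulation.
n = 1_000_000_000
bytes_f64 = n * np.dtype(np.float64).itemsize  # 8e9 bytes
bytes_f32 = n * np.dtype(np.float32).itemsize  # 4e9 bytes

# Elements that fit in a 12GB card at each precision.
elems_in_12gb_f64 = 12 * 1024**3 // np.dtype(np.float64).itemsize
elems_in_12gb_f32 = 12 * 1024**3 // np.dtype(np.float32).itemsize

# Same card, twice the working set in single precision.
assert elems_in_12gb_f32 == 2 * elems_in_12gb_f64
```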

Die space is a good point too. You'd think it could ideally be gated off, but even then you can't avoid the area cost.

No need to recall, from last page (image 3):

Thanks again!

e: Actually could I get you to take another screenshot to see if it clocks up at all with the "load it up" option checked on 3? Sorry! :shobon:

Paul MaudDib fucked around with this message at 06:35 on Jun 3, 2015

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

xthetenth posted:

Yikes. I'm looking at benchmarks now and that 280 should be pulling a system draw of 300W, probably with an i7 because that's how review sites roll. At least tell me that thing had something a bit more hungry on the CPU side than a Haswell i7.

Xeon W3565, so a 135W TDP. But not like 450W worth for sure. And I edited in - that was a unit with an 80Plus sticker.

Voluntary certifications mean nothing. Film at 11. 80+ ain't worth poo poo.

Paul MaudDib fucked around with this message at 06:34 on Jun 3, 2015

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

Paul MaudDib posted:

In particular the Titan X is a kick-rear end card for neural net stuff - you don't need double-precision for that stuff, and latency doesn't matter because you can queue up a billion threads and wait it out. My Master's thesis was a simulation done in single-precision, as was the source program. It's not universally applicable but I think a lot of people overestimate the need for double precision nowadays, just because it's there. If you can get away with floats (and for most applications you can, especially with some effort) then 12GB for $1k is a goddamn steal - a K40 with 12Gb memory will set you back $3500. That's why I'm thinking about shelling out the extra $350 for the Titan X. Doubling my working set for $350 is a no brainer, even if I have to spend another $350 down the line if I SLI.
that chip is literally designed for neural nets, especially since CNNs can all reduce to GEMM or GEMM-like workloads so it can run well given the ridiculousness of the Maxwell ISA.
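For anyone wondering how a convolution "reduces to GEMM": the standard trick is im2col, unrolling each convolution window into a matrix row so the whole layer becomes one big matrix multiply. A minimal single-channel, stride-1, no-padding sketch:

```python
import numpy as np

def im2col(x, k):
    """Unroll each k x k patch of 2D array x into a row of a matrix."""
    h, w = x.shape
    out_h, out_w = h - k + 1, w - k + 1
    cols = np.empty((out_h * out_w, k * k))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = x[i:i + k, j:j + k].ravel()
    return cols

# Convolution (correlation, really) as GEMM: patches @ flattened kernel.
rng = np.random.default_rng(0)
x = rng.random((8, 8))
kern = rng.random((3, 3))
gemm_out = (im2col(x, 3) @ kern.ravel()).reshape(6, 6)

# Cross-check against a direct sliding-window computation.
direct = np.array([[np.sum(x[i:i + 3, j:j + 3] * kern)
                    for j in range(6)] for i in range(6)])
assert np.allclose(gemm_out, direct)
```

With many filters the flattened kernels stack into a matrix and the whole thing is a single dense GEMM, which is exactly the workload a gaming-focused chip is tuned to chew through.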

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Professor Science posted:

that chip is literally designed for neural nets, especially since CNNs can all reduce to GEMM or GEMM-like workloads so it can run well given the ridiculousness of the Maxwell ISA.

Edited in a response, but I'll pull it out because I'm curious what you think:

With the low double performance and the SIMD nature of the processor (i.e. you don't want to poo poo up the performance of a whole warp), I wonder if offloading key double-precision calculations/calculation sequences to the CPU through pinned memory would yield gains. That's probably something that could be handled at the framework level. There are ways to estimate correction factors for float error via a double approximation in certain circumstances, right?

Paul MaudDib fucked around with this message at 06:48 on Jun 3, 2015

xthetenth
Dec 30, 2012

Mario wasn't sure if this Jeb guy was a good influence on Yoshi.

Paul MaudDib posted:

Xeon W3565, so a 135W TDP. But not like 450W worth for sure. And I edited in - that was a unit with an 80Plus sticker.

Voluntary certifications mean nothing. Film at 11.

At least in the a la carte market they tend to get backed up by reviews and word of mouth. A 135W TDP shouldn't be causing a power draw jump of over 100W, that's just horrifying, especially when considered as a percent of a relatively small whole.

Titan X is definitely very good for certain compute workloads, but in general, given the DP-centric marketing of the original Titan brand, it feels like they found compute areas to match the card and talked them up to keep the premium nameplate.

xthetenth fucked around with this message at 06:45 on Jun 3, 2015

Etrips
Nov 9, 2004

Having Teemo Problems?
I Feel Bad For You, Son.
I Got 99 Shrooms
And You Just Hit One.
How much of a difference is there between reference vs non-reference cards?


Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

Paul MaudDib posted:

With the low double performance and the SIMD nature of the processor (i.e. you don't want to poo poo up the performance of a whole warp), I wonder if offloading key double-precision calculations/calculation sequences to the CPU through pinned memory would yield gains. That's probably something that could be handled at the framework level. There are ways to estimate correction factors for float error via a double approximation in certain circumstances, right?
depends on arithmetic intensity of each computation of course, but I can't imagine it working well. latency of PCIe plus waking up the CPU and the various cache flushes on the GPU side (I THINK you'd have to flush L2 every time you want to send data to the CPU from within a kernel, I don't know if mapped pinned memory bypasses L2 on Maxwell or not) would put a serious damper on it. it'd probably be better to have a processing pass, keep track of exceptional locations where you actually need DP (if it's not needed at every location), then do the fixup on the CPU side, pipelining all of this with PCIe and probably triple-buffering.
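A toy version of that flag-then-fixup pattern, with NumPy standing in for both the float32 "GPU pass" and the float64 "CPU fixup" (the flagging heuristic here is made up purely for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
a = rng.standard_normal(n)
b = rng.standard_normal(n)
big = rng.random(n) < 0.01  # ~1% of entries get huge magnitudes
b[big] *= 1e8

# "GPU" pass: everything in float32.
prod32 = a.astype(np.float32) * b.astype(np.float32)

# Flag exceptional locations where float32 likely lost bits -
# here, a crude large-magnitude heuristic (made up for the sketch).
flagged = np.abs(b) > 1e7

# "CPU" fixup pass: redo only the flagged entries in float64.
result = prod32.astype(np.float64)
result[flagged] = a[flagged] * b[flagged]

# Flagged entries now carry the exact double-precision value;
# the bulk of the array keeps the cheap float32 result.
assert np.array_equal(result[flagged], a[flagged] * b[flagged])
```

In the real GPU version the interesting part is everything this sketch skips: shipping the flagged indices and operands across PCIe, and overlapping that transfer and the CPU work with the next GPU pass.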
