I've generally been avoiding learning specifics about recent AMD cards due to price, but since they're coming down a bit, am I correct to assume a 280X is superior to a 770? Hopefully this is the start of a trend. I'm not even in the market for anything right now, but I like competition.
|
|
# ? Mar 21, 2014 22:14 |
|
|
|
The 280X is still essentially a 7970GHz. It performs relative to a 770 much as a 7970GHz performs to a 680. Trades blows based on which games like which architecture better, so it goes.
|
# ? Mar 21, 2014 23:12 |
|
ShaneB posted:It's not like the PC really has that many compelling new popular multiplayer FPS games to play instead

Excuse me, but Planetside 2 is the FPS Battlefield wishes it was. I'll take 2000-player combined-arms shootymans, ok thank you. Not to mention the literal tens of thousands of games floating around in the PC space. EA is the pop music of games (not a bad thing): popular, sure, but certainly not all that there is.

I've got a Mercury S8 CaseLabs case on the way, two Corsair H110s and an H90, a whole mess of Noctua fans, my two 780 Tis and two of those NZXT brackets. I hope to have a well-built, cool and quiet computer after it's all said and done. The only issue that has me worried is the length of the CPU cooler hoses and whether they'll reach. Although I could have just gone with a standard DIY liquid cooling setup, I plan on bringing this system with me to Quakecon and didn't want to deal with a true liquid cooling setup just yet. Since NVIDIA seems to be dragging their feet, I should be good for AT LEAST a year, right?
|
# ? Mar 22, 2014 02:40 |
|
NZXT's Kraken coolers have longer hoses than Corsair's versions. I'd also reverse them - two 140mm ones (or Kraken X40s, one per GPU) and the H110 for the CPU. It may seem counter-intuitive to use the smaller coolers on the 250W GPU, but the heat density of a GPU is actually much smaller than a CPU - huge cooling on a Haswell isn't so much about overall capacity as it is about the quick removal of heat, and GPUs have enough die area for their wattage that this isn't a problem. I cool a 250W-overclocked GeForce 680 on a 120mm radiator and have no complaints.
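The heat-density argument works out roughly like this. A sketch with ballpark figures (all die areas and power draws below are illustrative assumptions, not measured values):

```python
# Rough heat-flux comparison: total watts divided by die area.
# Numbers are ballpark assumptions for illustration only.
chips = {
    "overclocked Haswell quad (est.)": {"watts": 130, "die_mm2": 177},
    "250W big-die GPU (est.)":         {"watts": 250, "die_mm2": 561},
}

for name, c in chips.items():
    flux = c["watts"] / c["die_mm2"]  # W per mm^2 the cooler must pull
    print(f"{name}: {flux:.2f} W/mm^2")
```

Even though the GPU dissipates roughly twice the total power, it spreads that power over about three times the silicon, so the heat flux per mm^2 the cooler has to extract is considerably lower - which is why a modest closed loop copes fine with a 250W card.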
|
# ? Mar 22, 2014 02:55 |
|
Factory Factory posted:NZXT's Kraken coolers have longer hoses than Corsair's versions. I'd also reverse them - two 140mm ones (or Kraken X40s, one per GPU) and the H110 for the CPU. It may seem counter-intuitive to use the smaller coolers on the 250W GPU, but the heat density of a GPU is actually much smaller than a CPU - huge cooling on a Haswell isn't so much about overall capacity as it is about the quick removal of heat, and GPUs have enough die area for their wattage that this isn't a problem. I cool a 250W-overclocked GeForce 680 on a 120mm radiator and have no complaints. Yeah you can cool a GPU with the cheapest closed loop easily. Mine stays frosty with a h55 and 5v fan.
|
# ? Mar 22, 2014 02:59 |
|
Factory Factory posted:NZXT's Kraken coolers have longer hoses than Corsair's versions. I'd also reverse them - two 140mm ones (or Kraken X40s, one per GPU) and the H110 for the CPU. It may seem counter-intuitive to use the smaller coolers on the 250W GPU, but the heat density of a GPU is actually much smaller than a CPU - huge cooling on a Haswell isn't so much about overall capacity as it is about the quick removal of heat, and GPUs have enough die area for their wattage that this isn't a problem. I cool a 250W-overclocked GeForce 680 on a 120mm radiator and have no complaints.

But that's going to change my mounting plans! I've got a 140mm mount on the upper rear of the case and two "140.2" mounts along the top. The plan was to have the two H110s out the top and the H90 out the back, and pump in air with four 120s in the front and two 120s in the bottom for a positive-pressure system. I suppose two Kraken X40s should be able to share a 140.2 slot and the H110 would take the other one at the top, which would mean all the heat from the system would be blown out the top only. I did look at the NZXTs originally because their hoses are indeed longer, but the X60 is sold out everywhere. Not so with the X40, though. Thanks, looks like I'm doing an Amazon return again!

ShaneB posted:Yeah you can cool a GPU with the cheapest closed loop easily. Mine stays frosty with a h55 and 5v fan.

I mildly OC'd my 780s, and the H50 (my current case is cramped, which might have everything to do with this) couldn't keep the main one cool enough while playing Planetside 2 with SLI off. That happening is the catalyst for me doing all this over again.

EDIT: Taking your advice, Factory Factory - moving to two X40 Krakens and keeping the H110 for my CPU. Thanks for the help!

KakerMix fucked around with this message at 03:32 on Mar 22, 2014 |
# ? Mar 22, 2014 03:06 |
KakerMix posted:Excuse me but Planetside 2 is the FPS Battlefield wishes it was. I'll take 2000 player combined arms shootymans ok thank you. Not to mention the literal tens of thousand games floating around in the PC space. EA is pop music of games (not a bad thing), popular sure but certainly not all that there is.

Ah man... these games... might as well be on different planets, man. I like both a lot, but drat, dude. Planetside 2 was actually what finally pushed me to move to Intel: I tried upgrading my video card and, seeing virtually no difference, looked into it, and a new CPU solved it. It really showed the deficiencies of my AMD, even if it was a poorly optimized game at the time. The AMD ran BF4 fine, but the Intel really took it to the limit. But man, these games couldn't be more different, dude, lol.
|
|
# ? Mar 22, 2014 06:26 |
|
Wasn't the next NVIDIA GPU going to contain an ARM processor? What's up with that? What would be the use?
|
# ? Mar 22, 2014 06:41 |
|
It might be a Big-Maxwell-only feature, since it's compute-oriented. The idea behind it is that there are some lightly-threaded tasks that are rear end-slow on a GPU, but PCIe being what it is, it's slower to send those back to the CPU than just process them locally on a wimpy ARM core.
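The trade-off being described can be sketched with made-up round numbers (every latency below is an assumed order-of-magnitude figure for illustration, not a measurement):

```python
# Why a slow local core can beat a fast remote one: the bus hop
# dominates. All latencies are assumed illustrative figures.
pcie_round_trip_us = 10.0  # GPU -> host CPU -> GPU over PCIe
host_cpu_task_us = 0.5     # fast host core runs the serial step
local_arm_task_us = 2.0    # wimpy on-die ARM core, but no bus hop

offload_total = pcie_round_trip_us + host_cpu_task_us
local_total = local_arm_task_us

print(f"offload to host CPU: {offload_total:.1f} us")
print(f"handle on local ARM: {local_total:.1f} us")
```

Even if the ARM core is several times slower at the task itself, skipping the PCIe round trip can make it the faster option overall - that's the whole pitch.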
|
# ? Mar 22, 2014 06:42 |
|
Factory Factory posted:It might be a Big-Maxwell-only feature, since it's compute-oriented. The idea behind it is that there are some lightly-threaded tasks that are rear end-slow on a GPU, but PCIe being what it is, it's slower to send those back to the CPU than just process them locally on a wimpy ARM core.

This was my conjecture, and I even started to write a post about it yesterday, but I didn't want to interrupt the discussion. It would mean the chances of them running a similar generation entirely off of one chip would be significantly lower, as I imagine it'd be a bit much to just laser off the logic that interfaces with the ARM core... I think it'd be more of a ground-up redesign, maybe for a low-power variant aimed to hit the mobile market and OEMs square on, since they neither need nor benefit from the greater GPGPU performance. Or they could have changed course on it after their experience with the Tegra SoCs with integrated Kepler onward - who knows.
|
# ? Mar 22, 2014 15:27 |
|
They've already been on a path of merging their ARM SoC and GPU lines; this is a continuation of that, while also making some sense for compute tasks.
|
# ? Mar 22, 2014 23:06 |
|
I know at least one of you guys has an EVGA 780ti classified, and a few of you have the regular EVGA 780ti SC ACX, so I'm interested in hearing from you as to the relative overclocking performance (assuming normal usage, not crazy LN2 fuckery). Would you say that there's really $110 of performance difference between the two? I'm starting to think I should save my money.
|
# ? Mar 23, 2014 08:01 |
|
I haven't pushed mine very hard, but it'll trivially run at 1250MHz core and +150MHz on the memory (half effective via Precision, so +75MHz base: 1750MHz --> 1825MHz, or thereabouts), within the (extreme) limits of my ability to, and care about, validating it. No apparent artifacts, no driver crashes, no instabilities, and like a 10FPS jump in Firestrike just from that, which translates well to games that would have some trouble staying at or above 60, or whatever.

I am almost certain I could get (possibly a lot) more out of the core, but I think that's about where the memory ceiling is; it starts getting some artifacts if I push it up 25MHz above that, and I take a better-safe-than-sorry approach with already very fast GDDR5. The poo poo throws errors in rigorous testing at stock speeds; I'm not going to lament that I can only run it at 1825MHz. Good memory, but god drat, it's already fast, does it have to be faster? Really?

https://www.youtube.com/watch?v=U6RWiVRwoE8

Looking like my "early adopter for G-Sync" thing is a no-go, too - probably going to wait 'til Maxwell for that, as I've got a big move planned. So rather than being the absolute fuckin' coolest crap ever, it's just a really fast video card, and I like that about it. Let's see: it's nice that it keeps a high triple-buffering ratio on a 60Hz monitor with VSync turned on under even extremely demanding conditions. Neat!

Others, report as you wish. IIRC some dude has his running at like 1400MHz on the core with a Classified and its rather more robust power delivery, but I could probably gently caress around in the BIOS of mine and get similar performance, just because I seem to have lucked out as an early early adopter and got a really good chip that doesn't require a lot of volts to run really fast? However, that's conjecture; I'm not going to actually dick with it in that manner, so use your judgment.

I will say this: by now EVGA has certainly binned the gently caress out of them, and you're almost certainly going to get the performance you pay for, not more.

Agreed fucked around with this message at 08:48 on Mar 23, 2014 |
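The offset arithmetic above can be sanity-checked as a sketch (assuming the 2x offset scaling the post describes for Precision's memory slider, and the usual 4x "effective" data-rate multiplier for GDDR5):

```python
# Memory-offset arithmetic for the overclock described above.
# Assumes Precision's slider applies at double the base-clock change,
# and GDDR5's effective rate is 4x the base clock.
base_mem_mhz = 1750
precision_offset = 150

base_change = precision_offset // 2            # +75 MHz at the chip
new_base_mhz = base_mem_mhz + base_change      # 1825 MHz
effective_mhz = new_base_mhz * 4               # quad-pumped data rate

print(new_base_mhz)   # 1825
print(effective_mhz)  # 7300
```

So the quoted +150MHz offset lands the memory at an effective 7.3Gbps, up from the 7.0Gbps stock rate - a fairly small relative bump, which fits the "it's already fast" sentiment.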
# ? Mar 23, 2014 08:43 |
|
I got the Classified 780 Ti and have no regrets about the price ($930 AUD). Right out of the box on the stock BIOS it would boost to 1150MHz on the core, and while I haven't played around with it much, with the EVGA BIOS you can get, which unlocks the voltage past the 1.21V limit, I got it to 1300MHz core before temps got past 70C (need a waterblock). The Classified has a fully custom PCB with increased power input (two 8-pin connectors) and 14-phase/3-phase power delivery for the core/memory, so even just on air or water you should get a more stable OC than with the stock PCB setup.
|
# ? Mar 23, 2014 08:49 |
|
I've done some more reading as well. This will be the first GPU I've tried to overclock, and I'm not likely to push all that hard, so I'm going to save my money. That being said, for a similar price to the ACX SC there's the MSI Twin Frozr 780 Ti, and that's starting to look compelling as well - it definitely runs hotter than the ACX, but it seems to be the quietest cooler by a massive margin. One site that reviewed both has load noise for the MSI at 30dB, the EVGA ACX at 35dB, and the reference cooler at 39dB. Assuming I take my measurements and I can fit it into a Prodigy, the MSI might end up being my pick.
|
# ? Mar 23, 2014 10:18 |
|
The Lord Bude posted:I've done some more reading as well. This will be the first GPU I've tried to overclock, and I'm not likely to push all that hard so I'm going to save my money. That being said, for a similar price to the ACX SC is the MSI twin frozr 780ti, and that's starting to look compelling as well - it definitely runs hotter than the ACX, but it seems to be the quietest cooler by a massive margin - one site that reviewed both has load noise for the MSI at 30db, the EVGA ACX at 35db, and reference cooler at 39db. Assuming I take my measurements and I can fit it into a prodigy, the MSI might end up being my pick.

Don't forget to check the temperatures; some cards have more aggressive default fan profiles and are louder as a result.
|
# ? Mar 23, 2014 14:26 |
|
Factory Factory posted:It might be a Big-Maxwell-only feature, since it's compute-oriented. The idea behind it is that there are some lightly-threaded tasks that are rear end-slow on a GPU, but PCIe being what it is, it's slower to send those back to the CPU than just process them locally on a wimpy ARM core.
|
# ? Mar 23, 2014 20:33 |
|
Professor Science posted:I think you can trace this idea back several years to one article written during one of JHH's GTC presentations and it's metastasized since. it never made sense and it continues to make no sense because memory latency.
|
# ? Mar 24, 2014 12:40 |
Rastor posted:AMD and nVidia are both pursuing this, there must be a reason. It's definitely interesting that AMD's approach is to combine (comparatively) weak GPU cores with their fat x86 CPU chips, while nVidia is going to add small low-power ARM CPU cores to their top-end GPUs.

And one day the GPU card will get so large that it becomes the computer itself, moving the CPU socket to the GPU PCB, and the motherboard will wither away to nothing but a faint memory, making future generations wonder why they still plug the GPU into something. Really though, a processor on the GPU seems like a good idea, eliminating the need to clog the bus pipes with more trivial tasks. You can literally watch this happen with a latency monitor thousands of times a second. (I really do think what I said above will happen :P)

Ignoarints fucked around with this message at 15:06 on Mar 24, 2014 |
|
# ? Mar 24, 2014 15:04 |
|
Ignoarints posted:And one day the GPU card will get so large that it becomes the computer itself, moving the CPU socket to the GPU pcb and the motherboard will whither away to nothing but a faint memory, making future generations wonder why they still plug the GPU into something. Isn't that basically just a console?
|
# ? Mar 24, 2014 15:25 |
|
http://www.tomshardware.com/news/bi...998187516738456 Not directly GPU-related, but hopefully stuff like this relieves price pressure for regular gaming folk. Also, 130W is practically peanuts in GPU terms. I expected some sort of 400+ watt monster when I first read the headline.
|
# ? Mar 24, 2014 16:07 |
|
veedubfreak posted:Isn't that basically just a console?

I'd argue a PC is something on which the software can be completely customised without restriction. There are already plenty of integrated PCs on the market anyway.

Seamonster posted:http://www.tomshardware.com/news/bi...998187516738456 Not directly GPU related but hopefully stuff like this relieves prices for regular gaming folk. Also 130w is practically peanuts in GPU terms. I expected some sort of 400+ watt monster when I first read the headline.

Litecoin is what is driving up AMD card prices, and the algorithm it uses requires a lot of RAM, from what I understand. I don't think this kind of thing is suitable for it.

SCheeseman fucked around with this message at 16:11 on Mar 24, 2014 |
# ? Mar 24, 2014 16:09 |
|
Correct, that's a board of Bitcoin ASICs, and the altcoin ASICs haven't hit the market yet. But it's a good sign, and I'm hopeful that the altcoins will switch to ASICs.
|
# ? Mar 24, 2014 16:17 |
veedubfreak posted:Isn't that basically just a console? Sort of... if a console was upgradable and ran Windows or something. But it would definitely blur the line.
|
|
# ? Mar 24, 2014 16:23 |
|
Welp, EVGA is releasing a 780 with 6GB of memory and will also be making a 780 Ti with 6GB.
|
# ? Mar 24, 2014 21:38 |
|
Would NOT have guessed they'd do that; that's going to be a major boon for single-precision GPGPU. 1/10th the cost, a little more wattage, and way more than 1/10th the performance of a fully enabled GK110 Quadro. Yeah, it still has twice the RAM, but god drat.
|
# ? Mar 24, 2014 22:27 |
|
This is weirdly late in the game to start doing that, I wonder what finally convinced nvidia to greenlight that?
|
# ? Mar 24, 2014 22:40 |
|
Hace posted:This is weirdly late in the game to start doing that, I wonder what finally convinced nvidia to greenlight that? I'm guessing a delay in 20nm fabrication.
|
# ? Mar 24, 2014 22:47 |
|
Because this is like the third time they can sell the same product to some people and some people will buy it. I did twice but NOT AGAIN YOU FUCKS I KNOW YOU NOW I know you now i know you, now
|
# ? Mar 24, 2014 23:37 |
|
Of course they do this only 3 weeks after I get a 780ti classy, wankers
|
# ? Mar 24, 2014 23:52 |
|
I've been running my EVGA 780 SC ACX for the past 3 months without issue, using Precision X with a +50 clock offset and +100 memory offset. For the first time today, I've been having issues with BF4 where my game would freeze up every few minutes for about 5 seconds; afterwards it would also happen in WoW until I restarted. I'm not seeing any artifacts or anything that indicates a driver crash, so any ideas what could be causing this all of a sudden? To clarify two things: first, after I restarted and played WoW, things were fine until I turned on BF4 again; and second, the GPU temp is never going above the high 70s. EDIT: Fixed - turns out the FPS limiter in Precision X was turned on and causing the stutters. Haeleus fucked around with this message at 02:10 on Mar 25, 2014 |
# ? Mar 25, 2014 00:03 |
|
Michymech posted:Of course they do this only 3 weeks after I get a 780ti classy, wankers Not sure if you're actually upset, but do you have a 30-day return window? Microcenter and Amazon have 30-day returns. NewEgg has a 30-day free-shipping-no-restocking-fee return thing called Premier, with a trial which you can sign up for after the fact, then cancel once you have your full refund. Of course it might not be worth your time but I'd never buy a card from anywhere without a month return window, it's just too big an investment.
|
# ? Mar 25, 2014 00:14 |
|
I'm not that mad about it, as I don't see myself getting a 4K monitor anytime soon, and I don't see 3GB of RAM being an issue anytime soon either. Plus, I'm sure that by the point where it does become an issue, the 800 series cards will be out.
|
# ? Mar 25, 2014 01:20 |
|
Michymech posted:Of course they do this only 3 weeks after I get a 780ti classy, wankers

If it makes you feel better, so far only a regular 780 6-gig has been formally announced, and I haven't seen a release date. The 780 Ti 6-gig is still the realm of speculation and leaks, unless something changed overnight while I was asleep. Seems to me like, as far as gaming goes, this is really only useful for the folk who want two of them for 4K.
|
# ? Mar 25, 2014 01:20 |
|
Well, that's how I see it too. I'm planning to get a second one and stay at 1440p, plus Oculus Rift fun times soon. Plus, even with two 780 Tis it's really hard to get even a stable 60fps for gaming.
|
# ? Mar 25, 2014 01:29 |
|
The Lord Bude posted:If it makes you feel better, so far only a regular 780 6gig has been formally announced. And I haven't seen a release date. The 780ti 6gig is still the realm of speculation and leaks, unless something changed overnight while I was asleep.
|
# ? Mar 25, 2014 02:07 |
|
Rastor posted:According to the rumor/leak, GeForce GTX 780 with 6GB memory will be available in ‘few weeks’, while GTX 780 Tis are scheduled for May/June.

EVGA has been advertising the 780 6-gig on their website for at least a few days now, though, while I haven't seen anything about the 780 Ti except on leak sites. Now that the Titan Black is out, it kinda makes sense that a 6-gig 780 would be less of a direct threat to Titan sales than a 780 Ti 6-gig, and I suspect that's why Nvidia is allowing it. It could be that the rumours of a 780 Ti version are just that - speculation based on the fact that the 780 is getting a 6-gig version.
|
# ? Mar 25, 2014 02:19 |
|
Check the thread on overclock.net. The evga guy said they plan to have basically every possible iteration using 6gb.
|
# ? Mar 25, 2014 15:04 |
I wonder if we'll ever be able to upgrade VRAM, or is it too integrated into the specific application for that?
|
|
# ? Mar 25, 2014 15:27 |
|
|
|
EVGA also sent me an email flyer saying they're about to start accepting Step-Up applications towards 6GB cards. I bet they take the ones you mail back to them and just solder on some 6GB chips.
|
# ? Mar 25, 2014 15:27 |