|
Desuwa posted:The ACX 2.0 on my SC has the curved heatpipes. So there are four EVGA coolers: ACX, ACX 2.0, the ACX 2.0 that came with the FTW, and the ACX 2.0 that comes with the SSC (which has fewer heatpipes?). EVGA's 6% is comparing the new one (the 4th) to the curved-heatpipe ACX (the first or second) instead of the 3rd, which by all accounts is better than the 1st and 2nd while still being lovely. By all accounts they are all lovely. Hilarious.
|
# ? Jan 13, 2015 22:58 |
|
BurritoJustice posted:The second one is quite clearly exactly the same cooler from the Anandtech review that is poo poo Take a look for yourself: http://www.evga.com/Products/Product.aspx?pn=04G-P4-2974-KR Even in the promo shot with the box you can see the heatpipes running across the card. Their marketing is just smoking crack and throwing names at the wall to see what sticks. Within the 970 line alone there are now at least three different coolers that have been called "ACX 2.0". We'll see how it does when I get it; I mostly wanted the upgrade in the output configuration, and paying just shipping for that plus a probably-better cooler seems worth a shot. e:f;b
|
# ? Jan 13, 2015 23:04 |
|
Kazinsal posted:300W TDP? Jesus christ, that's like two MSI 970 4Gs. OTOH the rumor is that it's 50% again larger than the 290X, and HBM will alleviate one of the biggest bottlenecks in current-gen cards. Combine that with imperfect SLI scaling, and the thing could actually RUN like a pair of 970s.
|
# ? Jan 13, 2015 23:19 |
|
BurritoJustice posted:So there are four EVGA coolers: ACX, ACX 2.0, the ACX 2.0 that came with the FTW, and the ACX 2.0 that comes with the SSC (which has fewer heatpipes?). EVGA's 6% is comparing the new one (the 4th) to the curved-heatpipe ACX (the first or second) instead of the 3rd, which by all accounts is better than the 1st and 2nd while still being lovely. By all accounts they are all lovely. Hilarious. As far as I know all the non-FTW cards have three pipes, while the FTW versions have four. It's their attitude more than their technical incompetence that has me going through the motions to replace it.
|
# ? Jan 13, 2015 23:45 |
|
I don't know how many heatpipes my GTX 980 has because I'm too busy playing games with it.
|
# ? Jan 13, 2015 23:49 |
|
Currently mulling over some upgrade possibilities. Is there any particular reason to get the larger-sized 970 cards instead of the mini ones, other than for overclocking? Or is it identical at stock speeds, just with fewer fans and a smaller size?
|
# ? Jan 14, 2015 02:43 |
|
Pretty excited to see what HBM can do. Hopefully the GPU can keep up. 300 watts is disappointing but not unexpected, I suppose, save for that one leak earlier.
|
# ? Jan 14, 2015 02:50 |
|
univbee posted:Currently mulling over some upgrade possibilities. Is there any particular reason to get the larger-sized 970 cards instead of the mini ones, other than for overclocking? Or is it identical at stock speeds, just with fewer fans and a smaller size? At stock it's going to run hotter (some people with the mini-ITX version have reported temperatures 10°C higher than cards with more fans in some workloads) and probably louder, since it's one fan versus two or three.
|
# ? Jan 14, 2015 03:22 |
|
Factory Factory posted:OTOH the rumor is that it's 50% again larger than the 290X, and HBM will alleviate one of the biggest bottlenecks in current-gen cards. Combine that with imperfect SLI scaling, and the thing could actually RUN like a pair of 970s. But can I use the same nuke jokes that AMDfags used to describe Fermi with this card? Also, what bottleneck does HBM solve?
|
# ? Jan 14, 2015 06:36 |
|
Memory bandwidth. Since the 7970/GeForce 680, cards have gotten a lot of performance out of VRAM overclocking, and wide GDDR5 interfaces require a lot of shenanigans like huge die sizes and tons of memory controllers with complex access control. HBM will pack shitloads of bandwidth into fewer controllers (albeit still with a ton of I/O pins). It'll basically act like a giant and effective L4 cache, like Intel Crystalwell or the ESRAM cache on the Xboxen.
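To put rough numbers on that (the figures below are illustrative spec-sheet values, not from this thread: first-gen HBM is specced as a 1024-bit bus per stack at 500 MHz double data rate, while a single 7 GT/s GDDR5 chip runs a 32-bit bus at 1750 MHz quad-pumped):

```python
def bandwidth_gb_s(bus_width_bits, clock_mhz, transfers_per_clock):
    """Peak theoretical bandwidth: bytes per transfer * transfers per second."""
    return bus_width_bits / 8 * clock_mhz * 1e6 * transfers_per_clock / 1e9

hbm1_stack = bandwidth_gb_s(1024, 500, 2)  # one HBM1 stack -> 128.0 GB/s
gddr5_chip = bandwidth_gb_s(32, 1750, 4)   # one 7 GT/s GDDR5 chip -> 28.0 GB/s
```

One slow-and-wide stack delivers roughly what four to five GDDR5 chips do, which is why far fewer controllers suffice.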
|
# ? Jan 14, 2015 11:55 |
|
Today's GTX 960 leaks are dire. It's a GM204 cut in half, but they stepped hard on the clock speed to edge out a benchmark lead over the 760: 1200+ MHz out of the box means 120W and the same cooling as a 970. Somehow they managed to make a card that's less attractive than the 2GB Tonga.
|
# ? Jan 14, 2015 14:25 |
|
sauer kraut posted:Today's GTX 960 leaks are dire.
|
# ? Jan 14, 2015 14:49 |
|
sauer kraut posted:Today's GTX 960 leaks are dire. AMD has to be left confused right now, because their slightly underwhelming mid-range part was just copied. Maybe they should feel flattered? Any information about the retail pricing yet? Because NVIDIA hasn't historically been known for their good mid-range pricing. Either way, it seems the rumours were on the money. HalloKitty fucked around with this message at 15:33 on Jan 14, 2015 |
# ? Jan 14, 2015 15:23 |
|
BurritoJustice posted:By all accounts they are all lovely. Hilarious. Maybe it's just because my last card was a 5870, but my EVGA 970 seems pretty quiet.
|
# ? Jan 14, 2015 17:24 |
|
sauer kraut posted:Today's GTX 960 leaks are dire. Ouch, if that card is any higher than $200 then that's a bit sad.
|
# ? Jan 14, 2015 18:24 |
|
HalloKitty posted:AMD has to be left confused right now, because their slightly underwhelming mid-range part was just copied. Maybe they should feel flattered? $199 is what I've heard. For 2GB and a 128-bit bus in 2015, um....
|
# ? Jan 14, 2015 19:04 |
|
Recall:
1) It's supposed to be a new chip, and there are supposed to be Ti versions with harvested GM204 dies to fit between the 960 and 970.
2) Maxwell features end-to-end memory compression. Notice how the GeForce 980 matches or beats a 780 Ti (with a 384-bit bus) using a 256-bit bus and identical memory clocks.
3) A GeForce 760-ish card really can't take much advantage of more than 2 GB of VRAM while maintaining ~60 FPS by itself. Like, here's some benchmarks testing Far Cry 3. At 4K, the extra VRAM makes a 500% difference - from 2 FPS to 10 FPS. It's clearly a 1920x1080 card, and games at 1920x1080 only started consistently using more than 1 GB of VRAM a year or two ago.
4) Today's "High" settings are yesteryear's "Ultra," and today's "Ultra" involves supersampling and/or ludicrously expensive, barely perceptible upgrades like texture quality you'd only pick up on at 4K. It's not that today's Mass Effect needs more oomph, it's that people are exposing Metro 2033 levels of high-end options more regularly.
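The raw numbers behind point 2 (bus widths and the 7 GHz effective GDDR5 clock are public spec-sheet figures; the point is how much deficit the compression has to make up):

```python
def gddr5_bandwidth_gb_s(bus_width_bits, effective_mhz):
    # the "effective" GDDR5 clock already folds in the 4x data-rate multiplier
    return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

gtx_980 = gddr5_bandwidth_gb_s(256, 7000)    # 224.0 GB/s
gtx_780ti = gddr5_bandwidth_gb_s(384, 7000)  # 336.0 GB/s
ratio = gtx_780ti / gtx_980                  # 1.5x raw advantage for the 780 Ti
```

So the 780 Ti has 1.5x the raw bandwidth, yet the 980 keeps pace - that gap is what Maxwell's compression is effectively clawing back.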
|
# ? Jan 14, 2015 19:19 |
|
Factory Factory posted:3) A GeForce 760-ish card really can't take much advantage of more than 2 GB of RAM while maintaining ~60 FPS by itself. Like, here's some benchmarks testing Far Cry 3. At 4K, the extra VRAM makes a 500% difference - from 2 FPS to 10 FPS. It's clearly a 1920x1080 card, and 1920x1080 only broke the >1 GB benchmark consistently a year or two ago. It does prevent frame time spikes when games just assume that they can have 3+ gigs of textures cached at one time and 2 gig cards end up swapping textures to and from system memory.
|
# ? Jan 14, 2015 19:41 |
|
Wrong thread.
Simple Simon fucked around with this message at 19:59 on Jan 14, 2015 |
# ? Jan 14, 2015 19:43 |
|
The_Franz posted:It does prevent frame time spikes when games just assume that they can have 3+ gigs of textures cached at one time and 2 gig cards end up swapping textures to and from system memory. Kind of. I have a lot of opinions on NVIDIA's memory situation last gen. Most of the time it was very well matched: at least among last year's games, almost no card could reasonably push past the amount of memory on the board without suffering badly in raw FPS first. That said, running out of memory causes the worst kind of FPS drops, and even in the setups where it showed up first (such as 770 2GB SLI) it only happened in certain situations. I could mitigate those drops-to-zero by frame limiting, despite never having "enough" memory. I'm not sure how well some of those cards would fare with today's most memory-hogging games. I am very impressed with the memory improvements this time around, and it kind of clinched the idea that memory size and bandwidth are only really comparable in performance within the same generation of tech (which was evident in earlier generations as well). Still, the thought of 128-bit 2GB cards seems to be pushing it. We shall see.
|
# ? Jan 14, 2015 20:05 |
|
Presto posted:They're not lovely. They all do a perfectly adequate job of cooling the card. Just because it's maybe not quite as good as other cards doesn't mean it's some kind of total failure. I'd call louder than reference a pretty big failure for a two-fan open-air cooler. By "maybe not quite as good" do you mean up to 10 decibels louder and 10 degrees hotter than other custom cards you can get for the same money? How is that not lovely? It's a pretty awesome performer if you compare it to older, way-higher-TDP cards with basic blowers, but it's far below the standard for custom coolers this generation.
|
# ? Jan 14, 2015 22:28 |
|
Factory Factory posted:3) A GeForce 760-ish card really can't take much advantage of more than 2 GB of RAM while maintaining ~60 FPS by itself. Like, here's some benchmarks testing Far Cry 3. At 4K, the extra VRAM makes a 500% difference - from 2 FPS to 10 FPS. It's clearly a 1920x1080 card, and 1920x1080 only broke the >1 GB benchmark consistently a year or two ago. 2GB is just a shortsighted purchase even for a lower-end card these days. You can suffer performance drops by trying to push higher res textures than your card can drive, sure - but there's a hell of a difference between going from 40->30fps and "completely unplayable" which happens when you run out of texture memory.
|
# ? Jan 15, 2015 14:23 |
|
Happy_Misanthrope posted:...and there's a reason that ended about "a year ago" - with the introduction of the new consoles, that is no longer the case. 2GB, even 3+GB of VRAM can already be exceeded by several next-gen console ports at 1080p. Yeah, but see the point about halo-tier effects. Go look at Shadow of Mordor's 6 GB VRAM Ultra textures compared to High. Tell me you see a difference there. Tell me you'd see a difference when the game was in motion. Play it at 1080p rather than a higher resolution and then tell me you could spot the differences reliably. Because I can't.
|
# ? Jan 15, 2015 15:29 |
|
So why does gsync (and I think freesync/adaptive sync) require exclusive full-screen? Can the Windows compositor not scan out at variable rates?
|
# ? Jan 15, 2015 15:32 |
|
Most games in the last 6 months have either used nearly all of or even more than 2GB of VRAM, I'd be extremely surprised if that doesn't start becoming a major bottleneck this time next year.
|
# ? Jan 15, 2015 15:38 |
|
Factory Factory posted:Yeah, but see the point about halo-tier effects. Go look at Shadow of Mordor's 6 GB VRAM Ultra textures compared to High. Tell me you see a difference there. Tell me you'd see a difference when the game was in motion. Play it at 1080p rather than a higher resolution and then tell me you could spot the differences reliably. Because I can't. Even at 2560x1440 I couldn't notice a difference between SoM's high and ultra textures, so I went back to high because ultra textures made it leak memory faster. I'm surprised I can't find more talk online about Shadow of Mordor leaking memory. When I had a small fixed-size page file it would run out of memory and crash after about 20-30 min; now that I let the page file get gigantic it starts filling the pagefile and stuttering and freezing after 30 min. I'd say there was something weird about my machine, but a friend of mine who has a GTX 970 instead of my R9 290 has the exact same thing happen. Whenever I've asked other people about it, I just get told that 8GB isn't enough RAM anymore; it seems like most of my friends have at least 16GB and several have 32GB.
|
# ? Jan 15, 2015 16:00 |
|
Hace posted:Most games in the last 6 months have either used nearly all of or even more than 2GB of VRAM, I'd be extremely surprised if that doesn't start becoming a major bottleneck this time next year. Hell, it was a bottleneck some time ago with modded Skyrim. It seems very short-sighted now for anything more powerful than a 750 Ti.
|
# ? Jan 15, 2015 16:03 |
|
I guess 3-year-old 7970s for 230 bucks was just too good a deal for the dirty poors; it couldn't be allowed to happen again.
|
# ? Jan 15, 2015 16:11 |
|
Subjunctive posted:So why does gsync (and I think freesync/adaptive sync) require exclusive full-screen? Can the Windows compositor not scan out at variable rates? I would not go there
|
# ? Jan 15, 2015 16:47 |
|
Twerk from Home posted:Even at 2560x1440 I couldn't notice a difference between SoM's high and ultra textures, so I went back to high because ultra textures made it leak memory faster. I have a 980 and 8 gigs of system ram and it doesn't do that for me. It plays fantastically at 2560 * 1600 at ~70 fps for hours.
|
# ? Jan 15, 2015 16:48 |
|
Gwaihir posted:I have a 980 and 8 gigs of system ram and it doesn't do that for me. It plays fantastically at 2560 * 1600 at ~70 fps for hours. What we've worked out is that we both have 2500K, 8GB DDR3-1600, and Samsung 840 EVOs with RAPID enabled. I'm betting that RAPID is doing something dumb.
|
# ? Jan 15, 2015 16:52 |
|
Malcolm XML posted:I would not go there ?
|
# ? Jan 15, 2015 16:57 |
|
Twerk from Home posted:What we've worked out is that we both have 2500K, 8GB DDR3-1600, and Samsung 840 EVOs with RAPID enabled. I'm betting that RAPID is doing something dumb. Hm, I also have an 840 Evo with rapid, and a 3770K. Perhaps different motherboard chipsets, z67 vs z77.
|
# ? Jan 15, 2015 17:00 |
|
BTW, the size of the EVO matters for how much memory RAPID consumes. For what it's worth, I have seen something eat up my computer's nonpaged pool when I haven't restarted it for a few weeks (this is easiest to see in RAMMap). I have a 750GB EVO and 16GB of RAM though (I love my Chrome tabs and games so I overspent on RAM, sue meeeeee).
|
# ? Jan 15, 2015 17:31 |
|
Why would you install the lovely software that comes with a harddrive/SSD, and most of the stuff that comes with mainboards and GPUs? That's just begging for trouble
|
# ? Jan 15, 2015 17:35 |
|
sauer kraut posted:Why would you install the lovely software that comes with a harddrive/SSD, and most of the stuff that comes with mainboards and GPUs? That's just begging for trouble
|
# ? Jan 15, 2015 17:46 |
|
sauer kraut posted:Why would you install the lovely software that comes with a harddrive/SSD, and most of the stuff that comes with mainboards and GPUs? That's just begging for trouble I think you're misunderstanding exactly what RAPID is.
|
# ? Jan 15, 2015 17:47 |
|
The way that Windows interfaces with graphics and DX is complicated, and the answer is: maybe, but it would break applications, so I am not surprised that dynamic refresh can only be done in single-application fullscreen mode.
|
# ? Jan 15, 2015 17:48 |
|
sauer kraut posted:Why would you install the lovely software that comes with a harddrive/SSD, and most of the stuff that comes with mainboards and GPUs? That's just begging for trouble Do you know what rapid does?
|
# ? Jan 15, 2015 17:50 |
|
Malcolm XML posted:The way that Windows interfaces with graphics and DX is complicated, and the answer is: maybe, but it would break applications, so I am not surprised that dynamic refresh can only be done in single-application fullscreen mode. Yeah, I work with some ex-Windows-graphics people, but we weren't sure, and I couldn't find an explanation anywhere, nor any indication of whether it's something they could fix in a driver update (like they did when adding windowed-app support to Shadowplay, for example). I guess you're saying it could violate applications' expectations of how often DWM scans out, but DWM already composites, so apps update whenever they want; I don't really follow. I'll keep looking on the web. Does AMD's FreeSync stuff have the same limitation?
|
# ? Jan 15, 2015 19:06 |