|
The tech sites are doing their HBM analysis after AMD releases more info. Pick your poison:
http://www.anandtech.com/show/9266/amd-hbm-deep-dive
http://semiaccurate.com/2015/05/19/amd-finally-talks-hbm-memory/
http://www.pcper.com/reviews/General-Tech/High-Bandwidth-Memory-HBM-Architecture-AMD-Plans-Future-GPUs
Of note:
* Anandtech thinks that AMD will halve their memory power consumption: http://www.anandtech.com/show/9266/amd-hbm-deep-dive/4 This isn't the kind of power savings you'd find in Maxwell, though. Oh well, there's always Arctic Islands. Nvidia will benefit from being able to claim a "free" 15-30W power savings relative to Maxwell, and Team Green's marketing will probably spin this to make Team Red look bad.
* Semiaccurate thinks that while HBM is going to be a very high-end product, its tangential benefits that are helping to defray costs now (not needing to route memory through PCBs = less complex, cheaper PCBs; being able to repurpose old 65nm lines into dedicated interposer lines) will really help drive costs down once volume ramps up. In other words, HBM should not be any more expensive than whatever people were going to do with GDDR5, if not cheaper.
* PCPer notes that the interposer gives AMD some serious flexibility. Probably a "duh" thing, but it lets AMD use different process technologies, like if they wanted to use a 14nm process on the GPU and a 19nm process on the HBM. (Their article uses 28nm and 19nm, but let's face it, we're getting 14nm FinFET parts from both nVidia and AMD inside of two years.)
SwissArmyDruid fucked around with this message at 21:32 on May 19, 2015 |
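As a sanity check on those two numbers (my own back-of-envelope, not from any of the articles): if you assume a high-end card's GDDR5 memory subsystem draws somewhere in the 30-60W range, "halving memory power" lines up neatly with the quoted "free" 15-30W savings.

```python
# Back-of-envelope: does "halve memory power" match "15-30W savings"?
# The 30-60W GDDR5 subsystem range is an assumption, not an AnandTech figure.
for gddr5_watts in (30, 60):
    hbm_watts = gddr5_watts / 2  # AMD's claim: roughly half the power
    print(f"{gddr5_watts}W GDDR5 -> ~{gddr5_watts - hbm_watts:.0f}W saved with HBM")
```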
# ? May 19, 2015 20:16 |
|
Hay look, Nvidia decided to hook us up. http://www.geforce.com/titan-x-geforce-experience-beta
|
# ? May 19, 2015 20:29 |
|
It seems the Project Cars developers may have lied about their correspondence with AMD, per HardOCP. SwissArmyDruid posted:The tech sites are doing their HBM analysis after AMD releases more info. Pick your poison: I'm really hoping that this won't be a repeat of the Rx 200 or GTX 700 series where only two or three cards are new out of the whole lineup. I guess we won't know 'til this summer.
|
# ? May 19, 2015 21:24 |
|
veedubfreak posted:Hay look, Nvidia decided to hook us up. Oh poo poo.
|
# ? May 19, 2015 21:26 |
|
GrizzlyCow posted:It seems the Project Cars developers may have lied about their correspondence with AMD per HardOCP. I am unfamiliar with the personalities on [H]. Who is cageymaru and what makes what he says carry any weight?
|
# ? May 19, 2015 21:39 |
|
GrizzlyCow posted:It seems the Project Cars developers may have lied about their correspondence with AMD per HardOCP. quote:The PhysX makes 600 calculations per second on the CPU.
|
# ? May 19, 2015 21:46 |
|
GrizzlyCow posted:
haha im pretty sure that we are certain of
|
# ? May 19, 2015 21:55 |
|
GrizzlyCow posted:It seems the Project Cars developers may have lied about their correspondence with AMD per HardOCP. I'm sure AMD will continue to dress up the rotting corpse that is the 7970 into new clothes with yet another rebrand. Makes it really difficult to keep track of what actually supports their latest features like TrueAudio. That and the fact they've been keeping the prices on these rebrands the same for years now, stagnating the market a bit.
|
# ? May 19, 2015 21:57 |
|
GrizzlyCow posted:It seems the Project Cars developers may have lied about their correspondence with AMD per HardOCP. The amount of rumor being passed around as fact regarding Project CARS is unreal. If you believe the mob, it runs badly on AMD because the Gameworks source code is under NDA and AMD can't see it (even though the game doesn't use any Gameworks libraries besides PhysX), and because the PhysX simulation is only GPU-accelerated on nVidia cards (even though it runs perfectly with CPU-forced PhysX on nVidia, and on the consoles' pathetic CPUs). Who knows who's to blame for AMD's low performance, but making stuff up to paint nVidia as a mustache-twirling villain isn't helping. Slightly Mad put out a statement refuting that; it's their own CPU-based simulation that runs at 600hz. repiv fucked around with this message at 22:05 on May 19, 2015 |
# ? May 19, 2015 22:00 |
|
Is 4GB that limiting for modern 4k games?
|
# ? May 19, 2015 22:14 |
|
repiv posted:Slightly Mad put out a statement refuting that, it's their own CPU-based simulation that runs at 600hz. They're changing their story pretty much every day.
|
# ? May 19, 2015 22:16 |
|
Are they? They did say the physics ran at 600hz but as far as I can see they never said PhysX runs at 600hz. Someone unaware that the game has its own physics model for the vehicles just saw that and made a false assumption.
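For what it's worth, "runs at 600hz" normally means a fixed-timestep update loop that is decoupled from the render rate, so the physics rate never depends on FPS. A minimal sketch of the pattern (hypothetical code, nothing to do with Slightly Mad's actual engine):

```python
# Minimal fixed-timestep loop: physics at 600 Hz regardless of frame rate.
DT = 1.0 / 600.0  # 600 physics updates per simulated second

def run(frame_times, step):
    """Advance physics in fixed DT steps; frame_times are render-frame durations."""
    accumulator, steps = 0.0, 0
    for frame_dt in frame_times:
        accumulator += frame_dt
        while accumulator >= DT:   # may run many physics steps per frame...
            step(DT)               # ...or none, if the frame was very short
            accumulator -= DT
            steps += 1
    return steps

# One second of 60 fps frames -> roughly 600 physics steps.
n = run([1.0 / 60.0] * 60, lambda dt: None)
```

At 60 FPS each frame banks enough time for about ten physics steps; at 30 FPS it runs about twenty per frame, so the simulation rate stays constant either way.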
|
# ? May 19, 2015 22:24 |
|
There's no GPU accelerated physics in Project Cars regardless of whether you have an AMD or Nvidia gpu.
|
# ? May 19, 2015 22:29 |
|
I have been under the impression that only the APEX part of PhysX can run on the GPU. I know Arma 3 and all Unreal 4 games use PhysX for general physics that only runs on the CPU. Sininu fucked around with this message at 22:35 on May 19, 2015 |
# ? May 19, 2015 22:32 |
|
Tab8715 posted:Is 4GB that limiting for modern 4k games? GTA V on higher settings can blast past 4GB in 4K. Not sure about other games, I haven't really tried too hard to break it.
|
# ? May 19, 2015 22:35 |
|
Shadow of Mordor requires 6GB of VRAM to use the Ultra Textures setting. AMD is putting out a card rumored to be $850 that can't run a year old game at max settings.
|
# ? May 19, 2015 22:38 |
|
Mutation posted:Shadow of Mordor requires 6GB of VRAM to use the Ultra setting on textures. Are the Mordor Ultra textures (DLC, I think?) actually an improvement in visual quality even at 4K, or just a stunt?
|
# ? May 19, 2015 22:39 |
|
Subjunctive posted:Are the Mordor Ultra textures (DLC, I think?) actually an improvement in visual quality even at 4K, or just a stunt?
|
# ? May 19, 2015 22:48 |
|
SwissArmyDruid posted:I am unfamiliar with the personalities on [H]. Who is cageymaru and what makes what he says carry any weight? I'm not too familiar either. Dude just seems to be a backer. If it helps back up what he said, someone on the Steam forums accidentally force-enabled CPU-only PhysX, and the performance dropped to AMD levels. There's also this page which shows what happens when the R9 290X is used with Windows 10's updated D3D11 (which reduces load on the processor).
|
# ? May 19, 2015 22:51 |
|
Mutation posted:Shadow of Mordor requires 6GB of VRAM to use the Ultra Textures setting. It really doesn't; you can run ultra textures on a 970 with no problems
|
# ? May 19, 2015 22:52 |
|
GrizzlyCow posted:I'm not too familiar either. Dude just seems to be a backer. If it helps back up what he said, someone accidentally forced enable CPU only PhysX on the Steam forums, and the performance dropped to AMD levels. There's also this page which shows what happens when the R9 290X is used with Windows 10's updated D3D11 (which reduces load on the processor). I've tested CPU-only PhysX myself; it makes absolutely zero difference to performance even on ultra settings. Can't say what caused that guy's performance to tank, but it's more likely something unrelated caused it than CPU PhysX miraculously running faster on my machine.
|
# ? May 19, 2015 22:55 |
|
I don't know if the other HBM articles mentioned this, but in The Tech Report's interview with Joe Macri he talks about memory capacity efficiency. So maybe 4GB will still be enough if they're able to work around the problem without degrading quality. http://techreport.com/review/28294/amd-high-bandwidth-memory-explained/2 Joe Macri posted:When I asked Macri about this issue, he expressed confidence in AMD's ability to work around this capacity constraint. In fact, he said that current GPUs aren't terribly efficient with their memory capacity simply because GDDR5's architecture required ever-larger memory capacities in order to extract more bandwidth. As a result, AMD "never bothered to put a single engineer on using frame buffer memory better," because memory capacities kept growing. Essentially, that capacity was free, while engineers were not. Macri classified the utilization of memory capacity in current Radeon operation as "exceedingly poor" and said the "amount of data that gets touched sitting in there is embarrassing."
|
# ? May 19, 2015 22:58 |
|
ryangs posted:How's this for insane: I just bought a new GTX 970 and my PC won't boot if my monitor is connected. For the curious, running the monitor through a simple DVI to HDMI adapter has fixed this. Amazing. The POST screen now shows at 640x480 instead of 1920x1200, which may be a clue to why this fixes it.
|
# ? May 19, 2015 23:47 |
|
I'd much rather have 8GB cards at launch, and if they don't have them I'll be tempted to hold off. But what happened to all the people backing the 970 with "3.5GB is more than enough", and what of the fact that the GTX 980 only has 4GB? It doesn't sound like there's a technical reason they can't do 8 stacks - the only plausible objection I've heard is that the interposer might be too big. Kinda sounds like they might be holding 8GB units back as a counterattack after the 980 Ti launches with 6GB, along with the dual-GPU card (if rumors are to be believed (they're not)). Or they could go that route and back their 4GB up with 4-8GB of GDDR5 as a slow segment (after which NVIDIA immediately launches back with the "6GB means 6GB!" campaign and AMD's marketing team commits seppuku). AMD usually puts so much bandwidth on their memory that it would still probably be fine. After all, a Titan X only has 5% more bandwidth than a 290X, so a "slow segment" would still probably be faster than a 980 Ti, especially if (since it's a cut-down GM200) they start disabling parts of the memory system like when they cut GM204 down into the GTX 970. You're probably back into "huge board and runs hot" territory at that point, of course. Also, hopefully the DX12 "adding memory capacity of the cards together in SLI/Crossfire" thing comes to fruition, because that would be a killer feature. Or that devs tighten their poo poo up and reduce memory usage, although I'm dubious that that will solve the problems entirely. If that happens, people will just start running double-ultra textures at 8x SSAA in triple-monitor 4K. Whatever it takes to get your rig bogged down to 50 FPS. Paul MaudDib fucked around with this message at 00:22 on May 20, 2015 |
# ? May 20, 2015 00:00 |
|
Ugh, witcher 3 is 23gb. Gonna take a bit to download that one. But it was free, woo.
|
# ? May 20, 2015 01:02 |
|
repiv posted:I've tested CPU-only PhysX myself, it makes absolutely zero difference to performance even on ultra settings. Hmm. After looking around, there do seem to be more (user) benchmarks that follow your conclusion. Guess I goofed.
|
# ? May 20, 2015 01:03 |
|
wolrah posted:GTA V on higher settings can blast past 4GB in 4K. Not sure about other games, I haven't really tried too hard to break it. Off the top of my head Crysis 3 hits 3.7GB at 4K. VRAM consumption numbers are kind of dubious though, especially in streaming-heavy open-world games. You can't tell if the engine is bloating its consumption by aggressively pre-caching more assets than it needs when spare VRAM is available, or delaying deletion of probably-unneeded assets until more VRAM is actually required.
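That caching behaviour is easy to illustrate with a toy model (pure sketch, not based on any real engine): an evict-on-demand cache happily sits near its capacity even when only a handful of assets are actually being touched, so "VRAM used" overstates the true working set.

```python
from collections import OrderedDict

class AssetCache:
    """Evict-on-demand LRU: keeps stale assets around until space is needed."""
    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.assets = OrderedDict()  # name -> size in MB

    def load(self, name, size_mb):
        self.assets.pop(name, None)          # refresh LRU position on re-touch
        while self.used() + size_mb > self.capacity and self.assets:
            self.assets.popitem(last=False)  # evict least-recently-used asset
        self.assets[name] = size_mb

    def used(self):
        return sum(self.assets.values())

cache = AssetCache(capacity_mb=4096)
for i in range(30):          # engine streams past 30 distinct 128MB assets
    cache.load(f"chunk_{i}", 128)
# Only the last few chunks are still being touched, but "VRAM used" reads
# 3840MB: the tool can't tell stale pre-cached data from the real working set.
```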
|
# ? May 20, 2015 02:01 |
|
So far a lot of games with massive vram requirements seem to perform just fine when the excess is loaded into system memory. I think people overestimate just how much bandwidth is needed; as long as the driver is putting the bandwidth-hungry assets in GDDR5/HBM, most games should run fine.
|
# ? May 20, 2015 03:20 |
|
I've yet to run into any scenario where I couldn't just turn AA down either but ... I haven't played things like witcher 3 or gta5
|
# ? May 20, 2015 03:51 |
|
repiv posted:Off the top of my head Crysis 3 hits 3.7GB at 4K. That's correct. Witcher 3 only 'needs' a 2GB card for the highest quality textures, setting it to ultra just increases the buffer. Of course it helps that it uses a custom variant of FXAA and doesn't waste resources on MSAA. Well it applies MSAA on HairWorks stuff but at that point you're at the very high end anyway.
|
# ? May 20, 2015 04:43 |
|
Mutation posted:Shadow of Mordor requires 6GB of VRAM to use the Ultra Textures setting. Also most importantly they don't actually use 6 gigs without also bumping AA up to high levels.
|
# ? May 20, 2015 04:56 |
|
veedubfreak posted:Hay look, Nvidia decided to hook us up. This is supremely cool. Not only am I all 'yay team green' but now I am using GoG's Galaxy Beta where I'd probably have ignored it before. A good marketing move I do say so.
|
# ? May 20, 2015 07:25 |
|
Tab8715 posted:Is 4GB that limiting for modern 4k games? Not really. That graph from TPU shows the max memory of those games at either 1440p with 4xAA or 4K with no AA. (CoD is probably an outlier, filling as much memory as it's given without actually using it). Having at least 6 seems like the sweet spot so that it's not limiting for at least a couple of years.
|
# ? May 20, 2015 08:03 |
|
KakerMix posted:This is supremely cool. Not only am I all 'yay team green' but now I am using GoG's Galaxy Beta where I'd probably have ignored it before. A good marketing move I do say so. If they're smart, they'll keep it up. I'd imagine it costs them no more than ~$10 per download code buying in bulk (if not less), and if they leverage their capital over AMD to provide first-tier not-yet-released games to people who buy Titan cards for their first year, that munches up a *lot* of the price difference between their SKUs, and all but forces AMD to lower their prices or start a program to compete. I'd think the only thing that'd give them pause is setting up a system like Humble Bundle did for a while there where they automatically added titles to your linked Steam account so they could put the curb on people selling the codes. Of course, the folks on Guru3D are already theorycrafting how they could hack a cheap nVidia card in BIOS to act like a Titan X in name and DEV_ID only so they could game the system. BIG HEADLINE fucked around with this message at 12:45 on May 20, 2015 |
# ? May 20, 2015 12:41 |
|
Evil Kaneval posted:I'm wondering if there are others here that are running 770s in SLI that have experienced a tremendous amount of stutter while using nVidia drivers beyond 347.88 (namely 350.12 and 352.86)? This has been a confounding problem for me as 347.88 worked flawlessly. I updated to 350.12 for GTA 5 and experienced game breaking stuttering in that and other games (Guild Wars 2 and Diablo 3). The stuttering stopped when I disabled SLI, but I wasn't happy with that solution. I posted this on page 424 and I seemed to have isolated my issue and resolved my stuttering. It was a problem with my particular nVidia surround setup and I'm still not 100% sure what I was doing wrong. It seems that something was introduced in 350.12 and beyond that made my setup no longer functional. I wrote about it on the GeForce forums if anyone is curious: https://forums.geforce.com/default/...your-monitors-/
|
# ? May 20, 2015 15:25 |
|
I'm glad you figured out what was triggering it. I'd never have guessed lol
|
# ? May 20, 2015 15:49 |
|
How would heat generation and energy consumption compare between a 760 and a 970? I have a 760 that seems to throw off waaay more heat than my previous 660ti. I'm thinking of upgrading to the 970 for the 2 free games and selling my 760 to offset the cost, and part of that consideration would be decreased energy consumption. How much could I get for an EVGA 760 4gb, no overclocking?
|
# ? May 20, 2015 20:26 |
|
cat doter posted:So far a lot of games with massive vram requirements seem to perform just fine when the excess is loaded into system memory. I think people over estimate just how much bandwidth is needed, as long as the driver is putting the bandwidth hungry assets in GDDR5/HBM then most games should run fine. Maybe with HBM in laptops we can at least get some standardization for mobile GPUs in terms of memory bandwidth. For too long manufacturers have castrated otherwise decent GPUs with lovely DDR3, not mentioned it at all in the specs, and then had people wonder why they're not getting all the FPS they should or thought they would.
|
# ? May 20, 2015 20:34 |
|
760: 170W vs. 970: 145W. So the 970 is about 15% lower; up to you if that is really worth it. The main power savings were for people coming from 770s (230W) and 780s (250W). The 970 is certainly a much more powerful card in terms of performance, though, so you would see gains there.
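To put rough numbers on "is it worth it" (a back-of-envelope sketch; the hours-per-day and electricity price are assumptions, and TDP isn't the same as actual draw):

```python
# Back-of-envelope TDP comparison, using the board-power figures above.
tdp_760, tdp_970 = 170, 145  # watts
pct = (tdp_760 - tdp_970) / tdp_760 * 100  # ~14.7%, i.e. "about 15% lower"

# Hypothetical usage: 4 hours of gaming a day at $0.12/kWh (assumptions).
kwh_per_year = (tdp_760 - tdp_970) / 1000 * 4 * 365
cost_saving = kwh_per_year * 0.12
print(f"{pct:.1f}% lower TDP, ~{kwh_per_year:.1f} kWh/yr, ~${cost_saving:.2f}/yr saved")
```

Even so, the 25W delta alone won't pay for the upgrade; the performance jump and the free games are the real argument.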
|
# ? May 20, 2015 20:38 |
|
Cinara posted:760 170w I am amazed at the energy efficiency gains recently in computer parts. Hey, we're using significantly less power... And it performs way better than before! Any thoughts on what I could sell a 760 4gb for? The Slack Lagoon fucked around with this message at 20:50 on May 20, 2015 |
# ? May 20, 2015 20:47 |