The Lord Bude posted:This thread is the exact opposite of 'the high end stuff'. This thread, if anything, is very close to hostile towards people that have a looser interpretation of 'value' and want to spend more money on their PC. Most people who post here are trying to get the best bang for their buck at <1k budgets. The 290x and 780 aren't even on the list of graphics card recommendations. An i5 is not high end, and we make it explicitly clear not to buy an i7 for gaming.
|
|
# ? Feb 2, 2014 06:27 |
|
|
|
I'm seeing people putting up Mantle vs DX benchmarks on older CPUs and they're getting like, double the FPS on Q6600s and stuff. Mantle seems pretty marginal for Sandy Bridge-and-up i5s and such (though 8-10% increases just through quasi-software solutions isn't too shabby at the high end at all), but anything older than that and APUs? Huge increases, the kind that would definitely influence someone wanting to upgrade their old setup with minimal fuss.
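To see why the gains swing so much with the CPU, here's a toy frame-time model (all the numbers below are made up for illustration): a frame is gated by whichever of CPU submission or GPU rendering is slower, so cutting CPU overhead only pays off big when the CPU is actually the bottleneck.

```python
# Toy model: frame time is the slower of CPU-side submission work and
# GPU render work, so FPS = 1000 / max(cpu_ms, gpu_ms).
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical old quad core, heavily CPU-bound: halving API overhead
# roughly doubles the frame rate.
old_dx     = fps(cpu_ms=25.0, gpu_ms=12.0)   # 40 FPS
old_mantle = fps(cpu_ms=12.5, gpu_ms=12.0)   # 80 FPS

# Hypothetical modern i5, barely CPU-bound: the same halving is worth
# single digits because the GPU becomes the limit almost immediately.
new_dx     = fps(cpu_ms=12.0, gpu_ms=11.0)   # ~83 FPS
new_mantle = fps(cpu_ms=6.0,  gpu_ms=11.0)   # ~91 FPS, roughly +9%
```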
|
# ? Feb 2, 2014 06:31 |
|
edit, wrong thread
|
# ? Feb 2, 2014 07:04 |
|
I think the really good thing it's doing is exposing - at least with this engine, though I think this will become more broadly true as more practical implementations are released and tested - just how CPU-bound some things can become in games. Here's the rub, though: in order to really solve that problem, the solution has to be non-proprietary, or AMD has to somehow put nVidia completely out of business in graphics cards or enter into some sort of tech-sharing agreement - in any case, nothing they are currently doing, and probably not something nVidia would be super interested in at all, since they've been doubling down lately on building up OpenGL as the next big thing to use (even though, up until Mantle hit the horizon, it's basically just been a playground for cool features that developers can't use in D3D; it isn't really used for games, RAGE being the exception outside of tiny indie projects). Microsoft goofed and said DX11 was feature complete, then backpedaled to "D3D11 is mostly feature complete," then took all that poo poo down entirely because it's a load of horse crap.
I am all for - and now we see GPU makers suddenly being all for - creative solutions to the problem of GPUs being CPU-bound in many possible scenarios (not necessarily ACTUAL ones, because no dev worth a poo poo is going to ship a production game that's so CPU-bound on modern hardware that it can enjoy nearly twice the performance just from not being THAT poorly optimized). Building a viable platform-agnostic standard, which may be as straightforward as hammering OpenGL into something people actually use again instead of just a thing that DX devs look at and sigh, thinking "it sure would be cool if we had that feature, but oh well, back to my job making this game without it"? AWESOME. A proprietary solution is not awesome.
It is an interesting proof of concept that also has the benefit of giving some of you guys some extra free performance if you bought the right team at the right time, assuming you're using a lovely CPU or the game is actually, for real CPU bound or you're using a Crossfire setup (speaking of, that's the craziest preliminary result I've seen so far - I mean, we've known there's overhead in managing multiple cards for a long time, but I wouldn't have put it nearly that high previously). The neat things about Mantle aren't really what it does, but what it shows, in my opinion.
|
# ? Feb 2, 2014 07:06 |
|
Here's two quick runs of the Star Swarm benchmark. Mantle is insanely faster: min FPS hit about 7 on the DX render path, and 23 on the Mantle render path.
|
# ? Feb 2, 2014 07:12 |
|
Stanley Pain posted:Here's two quick runs of the Star Swarm benchmark. Mantle is insanely faster: Min FPS hit about 7 on the DX render path, and 23 on the mantle render path. Really? Something must be wrong with the DX11 path, because my 3570 @ 3.8 and 7870 get 30 FPS average. Mantle only bumps it to 40.
|
# ? Feb 2, 2014 07:59 |
|
Dr. McReallysweet posted:Really? Something must be wrong with the dx11 because my 3570@3.8 and 7870 gets 30fps average. Mantle only bumps it to 40 Running the extreme preset? I'm about to run numbers for 2560x1440. We'll see how those turn out. Also, which scenario are you running? I was running the attract one. Going to do follow next. I think there is something wrong with the benchmark though. Not getting consistent results at all. Stanley Pain fucked around with this message at 16:19 on Feb 2, 2014 |
# ? Feb 2, 2014 16:04 |
|
Dr. McReallysweet posted:Really? Something must be wrong with the dx11 because my 3570@3.8 and 7870 gets 30fps average. Mantle only bumps it to 40 According to the beta driver notes, GCN 1.0 cards like yours aren't currently getting the same kind of performance boosts from Mantle that GCN 1.1 cards are. Thinking about it, after seeing an R7 260X benchmark showing nice gains with Mantle and wondering why a card that wouldn't be GPU-limiting BF4 could get those boosts, I remembered that card is GCN 1.1, as are the R9 290 series cards. Now the interesting thing will be to see whether GCN 1.0 cards eventually get the same kind of boosts GCN 1.1 gets in Mantle - whether it's just an optimization issue with the current beta drivers, or whether GCN 1.0 is far enough behind 1.1 technically that there aren't worthwhile gains to be had.
|
# ? Feb 2, 2014 16:31 |
|
I don't think the architecture revision matters that much in that case though, seeing as how the 7870 is pretty much the pre-rebranded version of the R9 270/270x. Pretty sure only the 280Xs have anything close to new silicon, since that one allegedly has a slightly revised Tahiti core.
|
# ? Feb 2, 2014 17:22 |
|
I'm not sure when this was posted, but I just noticed the first test results for an 80 Plus Titanium power supply, the Super Flower SF-600P14TE 600W. Tested efficiency was 93.05% at max load, peaking at 94.61% at 50% load. For most applications this won't matter much, but if you're running Crossfire 290s, a higher-capacity model could mean cutting your power supply's dissipation from ~80W to ~50W at max load. That's nothing compared to your video cards, but it's significant for localized temperatures around the power supply. If I ever build a house I'm putting my electronics on a 240V circuit.
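If you want to sanity-check those dissipation numbers, the arithmetic is just DC load divided by efficiency, minus the load (the 88% figure below is an assumed Gold-ish comparison point, not from the review):

```python
# Heat dissipated inside the PSU = wall draw - DC load delivered.
def psu_heat_w(load_w, efficiency):
    return load_w / efficiency - load_w

gold_ish = psu_heat_w(600, 0.88)    # ~81.8 W of heat at full load
titanium = psu_heat_w(600, 0.9305)  # ~44.8 W of heat at full load
```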
|
# ? Feb 2, 2014 17:56 |
|
INKRO posted:I don't think the architecture revision matters that much in that case though, seeing as how the 7870 is pretty much the pre-rebranded version of the R9 270/270x. 280s are just rebadged 7970s, same silicon. The only new silicon I know of is on the 7790/R7 260X and the R9 290 series cards; those are the cards that support GCN 1.1 and TrueAudio.
|
# ? Feb 2, 2014 18:11 |
|
Fair enough, I got the 7790/260x stuff mixed up because Bonaire was one of those late-generation release cards with some features of the current generation of cards. Although AFAIK at least some of the 280xs past release would be using the Tahiti XTL chip, which is pretty much the GPU equivalent of a higher CPU stepping back in the old Core 2/2008-2009 days.
|
# ? Feb 2, 2014 23:39 |
|
redstormpopcorn posted:They're all getting snatched up for LiteCoins and Dogecoins and Coinye and whatever other stupid loving worthless pre-mined horseshit fork of Bitcoin is the new hot flavor this week. I don't believe Litecoin was pre-mined in the common sense - only the first block, or possibly the first two. Others, more recently, have been. I'd also say Litecoin was at least novel, being the first Bitcoin-alike to use a different proof-of-work algorithm. Most now simply copy Litecoin in the choice of Scrypt. It's not as if Bitcoin itself has any more worth than any of the lovely clones, and that's the basic take-away from all of this. There are places that will still let you exchange them for Bitcoin, and from there, to fiat. HalloKitty fucked around with this message at 23:55 on Feb 2, 2014 |
# ? Feb 2, 2014 23:49 |
|
Stanley Pain posted:Running the extreme preset? I'm about to run numbers for 2560x1440. We'll see how those turn out. Also, which scenario are you running? I was running the attract one. Going to do follow next. Yeah, this is hard to even accurately refer to as a tech demo - the one thing it's really trying to emphasize is batch count relative to DX11, but since there's nothing actually to it as a game, it'd be better kept, imo, as an in-house test-case tool rather than released as a public demo... I dunno, I'm just not sure they've done an effective job at communicating what it is doing (because it's a barebones skeletal structure with game-like behavior, not an actual game) to give you sufficient context to look for the specific bigger number to care about and why you should care about it. I've seen more (or at least as) effective third-party PhysX demos (most of which are similarly not-actually-a-game, to the point that they really serve as benchmarks for the technology in question - which in this case seems to be CPU efficiency with large batches under the Oxide engine).
I'd appreciate it if nobody would accuse me of being a damned fanboy just because I feel like the one practical and one theoretical test tool we have to demonstrate what Mantle is capable of do it kind of poorly. Honestly, it's the least fanboy thing in the world to try to take these proprietary results and generalize something more significant from them: there are situations where a CPU becomes a dramatically limiting factor, and a shipped game avoids them out of necessity (hence low returns when using the Mantle API) while an engine tech demo invites them. That we can see disparities in performance tells us in non-arguable ways that D3D11 has some serious issues, and that has gotta be welcome news. How am I a fanboy because I don't want the solution to "the de facto high-end game API has issues" to be "so buy this one brand and let's all switch to using their proprietary API"?
There have been some promising developments in OpenGL which could help remove the profound WDDM bottleneck, and that's a pretty well-established API, though... I mean, it's important to realize that for a lot of use cases, even with the overhead, D3D11 is still going to be powerful enough for modern games to keep pushing the envelope as hardware gets more powerful with them. I am not a fanboy just because I can 1. acknowledge a good proof of some clear symptoms of modern DX11 sucking, and 2. want to see it resolved without one hardware developer getting a monopoly on the solution. I feel exactly the same way about G-Sync: I don't want it to be proprietary, because I want to see it get wide adoption, and history shows that anything which fragments the market that much right out the gate only gets implemented if the people making the product (be it game engines or displays, whatever) get a major sweetheart deal with the GPU hardware vendor, severely limiting what would otherwise be a smart choice all around.
|
# ? Feb 3, 2014 01:57 |
|
It's alright buddy, I think people who don't know you confuse your enthusiasm for EVERYTHING if they only see a few of your posts with enthusiasm for one thing that would usually be associated with fanboyism.
|
# ? Feb 3, 2014 02:02 |
|
I read all of his posts and was looking forward to his take on the preliminary Mantle results. I was a bit disappointed when it was basically trying to make fun of AMD for spending a bunch of time and money on "only" 8-10% gains (which is actually a pretty big deal) and just generally crapping on it.
|
# ? Feb 3, 2014 02:09 |
|
beejay posted:I read all of his posts and was looking forward to his take on the preliminary Mantle results. I was a bit disappointed when it was basically trying to make fun of AMD for spending a bunch of time and money on "only" 8-10% gains (which is actually a pretty big deal) and just generally crapping on it. The problem is that one benchmark doesn't really tell the whole story. We don't know if the Mantle renderer in Frostbite had further optimizations beyond replacing API calls, nor do we know how Mantle compares to APIs other than D3D. Valve saw similar gains from moving to OpenGL from D3D, so until AMD releases the API publicly and some open tests are conducted, all we can do is guess.
|
# ? Feb 3, 2014 02:37 |
|
I was honestly expecting 0% performance increases for Mantle with beefy CPUs. The whole schtick they advertised was reducing the CPU overhead of draw() calls, and BF4 never struck me as a particularly CPU-bound game to begin with. But the 30%+ performance increases on APUs, that's drat impressive.
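The draw-call overhead story is easy to put rough numbers on with a back-of-the-envelope budget (the per-call costs below are purely hypothetical, just to show the shape of the problem):

```python
# CPU time spent just submitting draw calls, per frame, in milliseconds.
def submit_ms(draw_calls, us_per_call):
    return draw_calls * us_per_call / 1000.0

# Hypothetical slow core: 2000 draws at 15 µs each under a D3D-style
# path nearly eats a whole 33 ms frame budget on submission alone...
d3d_style    = submit_ms(2000, 15)  # 30.0 ms
# ...while a thin Mantle-style path at 3 µs per call leaves plenty of
# headroom for game logic, physics, and everything else.
mantle_style = submit_ms(2000, 3)   # 6.0 ms
```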
|
# ? Feb 3, 2014 03:09 |
|
Yeah, sorry if my lack of omniscience on this one disappoints, but I don't know anything more than what we've been shown and what we were told prior to being shown this, and the marketing before - specifically related to Battlefield 4 - was that with Mantle running it would "embarrass" (and I'm quoting, here) nVidia's top-end card in terms of performance. What benchmarks we've seen so far suggest that isn't actually the case. Anand's benches aren't especially comprehensive, either, which makes it hard to draw broader conclusions than "well, if you're really CPU-bound for one reason or another, this is handy!" - which, while only coming to public attention recently due to Mantle being an actual thing, has been well known as a problem for a long time. Rendering technology has advanced in many cases in spite of crap Microsoft just hasn't been willing to address in DX for years now, and if you read the blogs of developers involved in rendering you find a consistent pattern of frustration with the state of affairs (again, long before Mantle was even a thing). We're saying 8-10%, but Anand puts it at "7-10%", so maybe we should use that number since it's their partially exposed benchmark under primary discussion here. We've seen bigger performance gains on a single title through driver updates. That is a whole lot of expensive man-hours if that's the kind of performance gain one can hope to achieve as a ceiling, usable only with high-end proprietary hardware for the time being (of course that's set to change when they manage the next Mantle patch to back-port it to Tahiti and friends' GCN 1.0 architecture). As far as the other engine's bare-bones demo, it's hard to draw any meaningful conclusions from that at all, just because it's the sort of thing I'm kinda surprised has seen the light of day instead of staying in-house.
beejay, you haven't actually heard my take on real preliminary Mantle results because as far as I'm concerned we haven't actually seen them yet. That said, I'm not going to change my feeling that it's a mistake to try to strong-arm the industry with a proprietary methodology here, and on the other hand I'm not going to change my mind that it's a good thing to expose situations where developers are forced to make compromises for a published game (putting out of mind that BF4 has a bug or two, here) that prevent even a new API designed to alleviate CPU-bound situations from seeing a more dramatic performance improvement. As far as the Oxide thing, that's more dangling their shiny things out in front of other devs and saying "look what you can do if you use this, consider it!" But so long as it's proprietary, I think that will severely limit adoption because let's say we get some amazing, best-case-scenario forward looking game development on Mantle only and things can run a hardware generation faster (30%+) - what does that do to the publisher who knows that they need to service a broader market than just AMD GCN card users? What does it do to the developer who has to consider allocating hours to integrate a feature that can either be fundamental to the core of the game and thus basically required (locking out the competing hardware), or just an extra setting that turns on more shiny stuff - like PhysX, but for draws and batches and other stuff that just makes a scene extra fancy - unnecessary to the base gameplay experience, just an additional notch on the graphics slider for AMD GCN and above owners. The latter outcome surely isn't desired by anyone, I would think. But it seems like it's an almost necessary consequence of something being not revolutionary enough to change EVERYTHING, being interesting enough to attract some attention, but being proprietary and so bottlenecking adoption from the get-go.
|
# ? Feb 3, 2014 03:48 |
beejay posted:I read all of his posts and was looking forward to his take on the preliminary Mantle results. I was a bit disappointed when it was basically trying to make fun of AMD for spending a bunch of time and money on "only" 8-10% gains (which is actually a pretty big deal) and just generally crapping on it. and yeah, if this is just an AMD thing then it's going to be useless, and even if I mostly play BF4 right now I don't care about just getting results in BF4 (it already runs amazingly well for me, so what do I care), and as such I still haven't gotten around to playing anything at all since that Mantle announcement
|
|
# ? Feb 3, 2014 04:03 |
|
Straker posted:I remember Agreed making basic thermodynamics mistakes in the system building thread a few years ago so I think it's shobon as hell that he's found his niche I don't remember what you're talking about, but it's weird to me that you do
|
# ? Feb 3, 2014 04:09 |
I've heard that before. Don't be weirded out, I'm legitimately brilliant, I don't have a goons.xls or anything. If memory serves, it was some typical misunderstanding/disagreement (I was a third party) about heat dissipation vs. temperature on its own, and I think you were rationalizing higher temperatures in a system by arguing against large quantities of obnoxiously warm air blowing on your leg, probably from loud fans (in favor of a smaller volume of scalding air), or something silly like that. Doesn't matter now! I actually wasn't sure if it was you or Crackbone at first, but then I remembered whoever it was always made crazy wordy posts, and so that's definitely you! Straker fucked around with this message at 04:16 on Feb 3, 2014 |
|
# ? Feb 3, 2014 04:13 |
|
Edit: Let's just move right along
Agreed fucked around with this message at 04:27 on Feb 3, 2014 |
# ? Feb 3, 2014 04:20 |
Agreed posted:I don't think I would make a mistake as basic as confusing temperature in a closed system...
|
|
# ? Feb 3, 2014 04:27 |
|
Straker posted:I've heard that before. Don't be weirded out, I'm legitimately brilliant, I don't have a goons.xls or anything You're weird. Anyhoo, what's the latest skinny on Nvidia Maxwell? Are we still looking at mobile parts this month, desktop next month? Or is there a newer rumor?
|
# ? Feb 3, 2014 04:28 |
Last time I checked pretty much everyone who's posted more than twice in this thread is weird as balls, so. I've been looking, hoping to maybe flip my 290s to an altcoin miner in exchange for new and improved 800 cards while the former are still hot, but haven't seen anything in a long time now. I'd wonder how nvidia is going to deal with Maxwell mobile parts when they've already released a mobile "800"-series GPU, but I guess that's not any different from how all of the 2000s were, really.
|
|
# ? Feb 3, 2014 04:34 |
|
Factory Factory posted:You're weird. I've come across some second-hand information (co-worker whose friend works at nVidia) that supposedly the 880 will launch "in summer", replace the 780 Ti in its pricing tier, and shift all the existing parts down one tier, with the performance delta between the top tiers remaining about the same. It sounds plausible enough, but who knows. Oh, and he also says that 120Hz IPS G-Sync monitors will be coming around the same time, which I'm a little more dubious of (IGZO displays to get that refresh rate? would be pretty pricey, but maybe that's what they'll do).
|
# ? Feb 3, 2014 05:54 |
|
I wonder if Dice has any games on the horizon that are more likely to be CPU bound. An MMO or RTS perhaps? Can't wait to see what crazy things can happen when an MMO is fully optimized to utilize Mantle.
|
# ? Feb 3, 2014 07:54 |
|
MMOs are CPU bound, but usually because their processes aren't very parallel-friendly. Rift, for instance, can use multiple CPU cores, but the main process doesn't run well in a multi-threaded environment. So to increase FPS in Rift, you want faster single-core speed. I doubt that this is something that could be fixed with Mantle, but I would like to be wrong. I don't see any reason for Blizzard to try it with WoW as it is not as graphically intense as other MMOs. I think for RTS games the best you can do to help frame rates is to just limit the number of units on a map. With Supreme Commander, an older game I know, 8 players with a 200-unit cap would drag the frames down fast even if you weren't looking at anything but water. SlayVus fucked around with this message at 08:23 on Feb 3, 2014 |
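That single-thread wall is basically Amdahl's law in action; a quick sketch (the 70% serial fraction is an invented figure, purely for illustration) shows why throwing more cores at an MMO main loop stops helping fast:

```python
# Amdahl's law: overall speedup from N cores when a fixed fraction of
# the per-frame CPU work is stuck running on a single thread.
def speedup(serial_frac, cores):
    return 1.0 / (serial_frac + (1.0 - serial_frac) / cores)

two_cores   = speedup(0.7, 2)  # ~1.18x
eight_cores = speedup(0.7, 8)  # ~1.36x - four times the cores, barely faster
```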
# ? Feb 3, 2014 08:20 |
|
Just to put 10% into perspective, that's the difference between a boost 7970 (or a 7950 at about 1130MHz) and a GTX 780 in BF4 at 2560x1440. On a completely different note, I blame the coincalypse on everyone who bought lovely 660 Tis, 670s and 760s instead of the 7950, even after the frame pacing fixes
|
# ? Feb 3, 2014 10:30 |
|
Question for anyone who has Elpida memory on their 290s. Are you still getting blackscreens occasionally?
Agreed posted:Yeah, this is hard to even accurately refer to as a tech demo - the one thing it's really trying to emphasize is batch count relative to DX11, but since there's nothing actually to it as a game, it'd be better kept, imo, as an in-house test-case tool rather than released as a public demo... I dunno, just not sure they've done an effective job at communicating what it is doing (because it's a barebones skeletal structure with game-like behavior, not an actual game), to give you sufficient context to look for the specific bigger number to care about and why you should care about it. I've seen more (or at least as) effective third party PhysX demos (most of which are similarly not-actually-a-game to the point that they really serve as benchmarks for the technology in question, which in this case seems to be CPU efficiency with large batches under the Oxide engine). Yeah, it certainly looks like they're trying to show off how much more they can offload to the CPU. It's a neat tech demo at best right now, because as a benchmark it kinda sucks. Running the same "scene" doesn't necessarily produce the same visual results, and Mantle performance is all over the chart (great min FPS, at times horrible max FPS, skewing the AVG severely) whereas the DX render path can have some really, really low min FPS (7), or very high (80+). I'd call you a fanboy. A fanboy of GPUs
|
# ? Feb 3, 2014 13:32 |
|
Stanley Pain posted:Question for anyone who has Elpida memory on their 290s. Are you still getting Blackscreens occasionally? Sure do. I would only get it in Borderlands 2, but after the December update things seemed to be fine. That is until I attempted to play it with a friend of mine, at which point I can't seem to go longer than 10 minutes before black screening and having to force a shutdown. Pretty infuriating.
|
# ? Feb 3, 2014 14:30 |
|
Ghostpilot posted:Sure do. I would only get it in Borderlands 2, but after the December update things seemed to be fine. That is until I attempted to play it with a friend of mine, at which point I can seem to go longer than 10 minutes before black screening and having to force a shutdown. Are you running stock cooling? I've managed to pretty much get rid of it, and the solution depends on the type of cooler you have. If you're on stock cooling, you need to lower your memory speed (on the old Catalyst drivers, 10% was the magic number for me in Overdrive; with the 14.1 drivers, around 1100-1150 seemed to do the trick). If you have good cooling on the card, I've found that cranking the max power by 25% seems to work for me. Planetside 2 was my big black screen causer and I've gone 2 weeks without one now.
|
# ? Feb 3, 2014 15:18 |
|
I'm pretty sure at least one of my cards is Elpida memory and I have never had the black screen issue. But then again, cooling is not an issue at all. Still trying to figure out why my board is giving a 3% boost to gpu2. Almost tempted to do a full restore defaults on the board and see if it stops being weird.
|
# ? Feb 3, 2014 17:58 |
|
Stanley Pain posted:Are you running stock cooling? I've managed to pretty much get rid of it and the solution depends on the type of cooler you have. If you're on stock cooling, you need to lower your memory speed (old catalyst driver 10% was the magic number for me in Overdrive, 14.1 drivers around 1100-1150 seemed to do the trick). I have a Gelid Icy Vision on mine, so cooling hasn't been an issue. My memory is at the stock 1250. 25% seems to be a bit drastic, but I'll sure give it a shot in 5-10% increments. Curiously, I haven't encountered a black screen with any other game yet (I honestly expected it to happen in Guild Wars 2, as it is far more intensive). Edit: It's also come up intermittently during the Unigine Heaven benchmark, but I haven't run that in quite a while. Ghostpilot fucked around with this message at 18:48 on Feb 3, 2014 |
# ? Feb 3, 2014 18:44 |
|
Ghostpilot posted:I have a Gelid Icy Vision on mine, so cooling hasn't been an issue. My memory is at the stock 1250. 25% seems to be a bit drastic but I'll sure give it a shot in 5-10% increments. Curiously, I haven't encountered a black screen with any other game yet (I honestly expected it to happen in Guild Wars 2, as it is far more intensive). I'm almost positive that the cause is a memory timing issue which is why when I either lower the memory speed or give it MORE POWER it goes away for me. I'm not 100% positive that the power setting even feeds more power to the memory.
|
# ? Feb 3, 2014 19:17 |
|
Stanley Pain posted:Question for anyone who has Elpida memory on their 290s. Are you still getting Blackscreens occasionally? Yeah, in my head I was thinking "that's... kinda like Fluidmark" in terms of analogous software that purports to demonstrate how a thing is better but doesn't do so in a way that can really connect to "and thus videogames are better!" Uncapped emitters and post-processing can sure show a lot of difference in terms of CPU and GPU utilization, but it's not anything like a videogame. Which makes me wonder, by the way, what was up with the Oxide presentation at the AMD conference, then? It was rather different - maybe just the fully in-house version that they don't want getting out? I dunno. I remember at one point they had 10k units, and it seems like the demo doesn't ever get close to even a little over half that. Edit: Also, holy poo poo, I had no idea that Elpida's memory problems were so prolific and widespread. They crashed hard, god drat. Sorry to the afflicted. Hynix and Samsung seem to be all I've got on my cards, and they work as advertised. Though - and I know I've said this before, but it's still worth mentioning - overclocking VRAM is risky business and shouldn't be done unless there's an actual good reason to do it, like the GTX 680s being bandwidth-starved with their 1500MHz GDDR5 configuration... With 7GHz modules on recent nVidia cards, or just fantastically wide memory buses - or both - my advice stands at "don't overclock GDDR5 if you can help it"; it becomes unstable in ways that are sometimes subtle and sometimes very overt, pretty dang quickly. Relatively high fault tolerance doesn't change that, and we all know it's basically impossible to validate a GPU overclock beyond "this plays games acceptably well." I hate to think of small devs buying factory-overclocked cards to run CUDA or OpenCL programs on and overclocking their hardware to try to get something extra for free. It's a really bad idea.
Agreed fucked around with this message at 20:23 on Feb 3, 2014 |
# ? Feb 3, 2014 19:56 |
|
[H] reviews the Mantle preview and actually includes frame time data.
Agreed posted:Yeah, in my head I was thinking "that's... kinda like Fluidmark" in terms of analogous software that purports to demonstrate how a thing is better but doesn't do so in a way that can really connect to "and thus videogames are better!" Uncapped emitters and post-processing can sure show a lot of difference in terms of CPU and GPU utilization, but it's not anything like a videogame. Mantle is a boon for those who have weak processors and who game at 1080p or lower. Most of AMD's big numbers came from pairing a relatively weak processor with a strong graphics card and using Mantle to relieve the load on the weak processor, achieving big gains. Anyway, Mantle is in its infancy. Both the API and the drivers it's attached to are in beta, and the BF4 implementation probably isn't that great either. Once they mature some more, I'm sure Mantle will be a viable alternative to DirectX 11. It's still not there yet. Look at that [H] link. I do believe [H] are the first ones who actually plotted the frame latency numbers, and they paint an interesting picture. While Mantle is smoother overall, it has some infrequent big spikes that disrupt gameplay. Hopefully that can be ironed out in the near future.
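Frame-time plots matter because average FPS literally cannot see spikes; here's a synthetic example (made-up numbers) of two 100-frame runs with identical averages and very different worst cases:

```python
# Two synthetic 100-frame runs (times in ms) with the same total time.
smooth = [16.0] * 99 + [20.0]    # worst frame: 20 ms, imperceptible
spiky  = [15.0] * 99 + [119.0]   # worst frame: 119 ms, a visible hitch

def avg_fps(frame_times_ms):
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

# Both runs report ~62.3 average FPS; only a frame-time plot (or a
# worst-frame / percentile stat) tells them apart.
```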
|
# ? Feb 3, 2014 20:27 |
Stanley Pain posted:Question for anyone who has Elpida memory on their 290s. Are you still getting Blackscreens occasionally?
Agreed posted:overclocking VRAM is risky business and shouldn't be done unless there's an actual good reason to do it, like the GTX 680s being bandwidth starved with their 1500MHz GDDR5 configuration... With 7GHz modules on recent nVidia cards, or just fantastically wide memory buses - or both - my advice stands at "don't overclock GDDR5 if you can help it," it becomes unstable in ways that are sometimes subtle and sometimes very overt, pretty dang quickly
|
|
# ? Feb 3, 2014 21:05 |
|
|
|
For those of you having the black screen issue, have you tried putting a 290x bios on the card? All my cards have an unlocked bios from day1 and I have never had this issue. Perhaps the 290x bios gives more power to the memory along with the gpu.
|
# ? Feb 3, 2014 21:41 |