Straker
Nov 10, 2005

The Lord Bude posted:

This thread is the exact opposite of 'the high end stuff'. This thread, if anything, is very close to hostile towards people that have a looser interpretation of 'value' and want to spend more money on their PC. Most people who post here are trying to get the best bang for their buck at <1k budgets. The 290x and 780 aren't even on the list of graphics card recommendations. An i5 is not high end, and we make it explicitly clear not to buy an i7 for gaming.
you're in the GPU thread, not the "build me a computer" thread. my GPUs here are pretty reasonable, but cost as much as an entire PC in that other thread :)

INKRO
Feb 7, 2012
I'm seeing people putting up Mantle vs DX benchmarks on older CPUs and they're getting like, double the FPS on Q6600s and stuff. Mantle seems pretty marginal for Sandy Bridge-and-up i5s and such (though 8-10% increases just through quasi-software solutions aren't too shabby at the high end at all), but anything older than that, and APUs? Huge increases, the kind that would definitely influence someone wanting to upgrade their old setup with minimal fuss.

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE
edit, wrong thread

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

I think the really good thing it's doing is exposing - at least with this engine, though I think this will become more broadly true as more practical implementations are released and tested - just how CPU-bound some things can become in games. Here's the rub, though: to really solve that problem, the solution has to be non-proprietary, or AMD has to somehow put nVidia completely out of business in graphics cards or enter into some sort of tech-sharing agreement. Neither is anything they're currently doing, and it's probably not something nVidia would be interested in at all, since they've been doubling down lately on building up OpenGL as the next big thing to use (even though, until Mantle hit the horizon, OpenGL had basically just been a playground for cool features that developers can't use in D3D - it isn't really used for games, RAGE being the exception outside of tiny indie projects).

Microsoft goofed and said DX11 was feature complete, then backpedaled to "D3D11 is mostly feature complete," then took all that poo poo down entirely because it's a load of horse crap. I am all for - and now we see GPU makers suddenly being all for - creative solutions to the problem of GPUs being CPU-bound in many possible scenarios (not necessarily ACTUAL ones, because no dev worth a poo poo is going to ship a production game that's so CPU-bound on modern hardware that it can enjoy nearly twice the performance just from not being THAT poorly optimized). Building a viable platform-agnostic standard - which may be as straightforward as hammering OpenGL back into something people actually use, instead of just a thing that DX devs look at and sigh, thinking "it sure would be cool if we had that feature, but oh well, back to my job making this game without it" - would be AWESOME.

A proprietary solution is not awesome. It's an interesting proof of concept that also has the benefit of giving some of you guys some extra free performance if you bought the right team at the right time - assuming you're using a lovely CPU, or the game is actually, for real, CPU-bound, or you're using a Crossfire setup (speaking of, that's the craziest preliminary result I've seen so far - I mean, we've known there's overhead in managing multiple cards for a long time, but I wouldn't have put it nearly that high previously).

The neat things about Mantle aren't really what it does, but what it shows, in my opinion.

Stanley Pain
Jun 16, 2001

by Fluffdaddy
Here are two quick runs of the Star Swarm benchmark. Mantle is insanely faster: min FPS hit about 7 on the DX render path and about 23 on the Mantle render path.


code:
== Hardware Configuration =================================
GPU:		AMD Radeon R9 200 Series
CPU:		GenuineIntel
		       Intel(R) Core(TM) i7-3770K CPU @ 4.20GHz
Physical Cores:			4
Logical Cores:			8
Physical Memory: 		17131827200
Allocatable Memory:		140737488224256
===========================================================


== Configuration ==========================================
API:				Mantle
Scenario:			ScenarioAttract.csv
User Input:			Disabled
Resolution:			1920x1080
Fullscreen:			True
GameCore Update:		16.6 ms
Bloom Quality:			High
PointLight Quality:		High
ToneCurve Quality:		High
Glare Overdraw:			16
Shading Samples: 		64
Shade Quality:			Mid
Deferred Contexts:		Disabled
Temporal AA Duration:		16
Temporal AA Time Slice:		2
Detailed Frame Info:		Off
===========================================================


== Results ================================================
Test Duration:			360 Seconds
Total Frames:			18751

Average FPS:			52.09
Average Unit Count:		4369
Maximum Unit Count:		5372
Average Batches/MS:		1348.19
Maximum Batches/MS:		2909.14
Average Batch Count:		29407
Maximum Batch Count:		160265
===========================================================
code:
== Hardware Configuration =================================
GPU:		AMD Radeon R9 200 Series
CPU:		GenuineIntel
		       Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz
Physical Cores:			4
Logical Cores:			8
Physical Memory: 		17131827200
Allocatable Memory:		140737488224256
===========================================================


== Configuration ==========================================
API:				DirectX
Scenario:			ScenarioAttract.csv
User Input:			Disabled
Resolution:			1920x1080
Fullscreen:			True
GameCore Update:		16.6 ms
Bloom Quality:			High
PointLight Quality:		High
ToneCurve Quality:		High
Glare Overdraw:			16
Shading Samples: 		64
Shade Quality:			Mid
Deferred Contexts:		Disabled
Temporal AA Duration:		16
Temporal AA Time Slice:		2
Detailed Frame Info:		Off
===========================================================


== Results ================================================
Test Duration:			360 Seconds
Total Frames:			9516

Average FPS:			26.43
Average Unit Count:		4180
Maximum Unit Count:		5737
Average Batches/MS:		598.93
Maximum Batches/MS:		1061.18
Average Batch Count:		24065
Maximum Batch Count:		153697
===========================================================
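
The averages line up with the raw counts, for what it's worth - quick sanity check in Python using only the numbers above (nothing else assumed):

code:
# Average FPS should just be total frames / test duration.
runs = {
    "Mantle":  (18751, 360),   # total frames, test duration in seconds
    "DirectX": (9516, 360),
}
for api, (frames, seconds) in runs.items():
    print(f"{api}: {frames / seconds:.2f} average FPS")
# Mantle: 52.09 average FPS
# DirectX: 26.43 average FPS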

Dr. McReallysweet
Sep 12, 2006

The possibility of physical and mental collapse is now very real. No sympathy for the Devil, keep that in mind. Buy the ticket, take the ride.

Stanley Pain posted:

Here are two quick runs of the Star Swarm benchmark. Mantle is insanely faster: min FPS hit about 7 on the DX render path and about 23 on the Mantle render path.


code:
== Hardware Configuration =================================
GPU:		AMD Radeon R9 200 Series
CPU:		GenuineIntel
		       Intel(R) Core(TM) i7-3770K CPU @ 4.20GHz
Physical Cores:			4
Logical Cores:			8
Physical Memory: 		17131827200
Allocatable Memory:		140737488224256
===========================================================


== Configuration ==========================================
API:				Mantle
Scenario:			ScenarioAttract.csv
User Input:			Disabled
Resolution:			1920x1080
Fullscreen:			True
GameCore Update:		16.6 ms
Bloom Quality:			High
PointLight Quality:		High
ToneCurve Quality:		High
Glare Overdraw:			16
Shading Samples: 		64
Shade Quality:			Mid
Deferred Contexts:		Disabled
Temporal AA Duration:		16
Temporal AA Time Slice:		2
Detailed Frame Info:		Off
===========================================================


== Results ================================================
Test Duration:			360 Seconds
Total Frames:			18751

Average FPS:			52.09
Average Unit Count:		4369
Maximum Unit Count:		5372
Average Batches/MS:		1348.19
Maximum Batches/MS:		2909.14
Average Batch Count:		29407
Maximum Batch Count:		160265
===========================================================
code:
== Hardware Configuration =================================
GPU:		AMD Radeon R9 200 Series
CPU:		GenuineIntel
		       Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz
Physical Cores:			4
Logical Cores:			8
Physical Memory: 		17131827200
Allocatable Memory:		140737488224256
===========================================================


== Configuration ==========================================
API:				DirectX
Scenario:			ScenarioAttract.csv
User Input:			Disabled
Resolution:			1920x1080
Fullscreen:			True
GameCore Update:		16.6 ms
Bloom Quality:			High
PointLight Quality:		High
ToneCurve Quality:		High
Glare Overdraw:			16
Shading Samples: 		64
Shade Quality:			Mid
Deferred Contexts:		Disabled
Temporal AA Duration:		16
Temporal AA Time Slice:		2
Detailed Frame Info:		Off
===========================================================


== Results ================================================
Test Duration:			360 Seconds
Total Frames:			9516

Average FPS:			26.43
Average Unit Count:		4180
Maximum Unit Count:		5737
Average Batches/MS:		598.93
Maximum Batches/MS:		1061.18
Average Batch Count:		24065
Maximum Batch Count:		153697
===========================================================

Really? Something must be wrong with the DX11 run, because my 3570 @ 3.8GHz and 7870 get 30 FPS average. Mantle only bumps it to 40.

Stanley Pain
Jun 16, 2001

by Fluffdaddy

Dr. McReallysweet posted:

Really? Something must be wrong with the DX11 run, because my 3570 @ 3.8GHz and 7870 get 30 FPS average. Mantle only bumps it to 40.

Running the extreme preset? I'm about to run numbers for 2560x1440; we'll see how those turn out. Also, which scenario are you running? I was running the attract one. Going to do the follow one next.

I think there is something wrong with the benchmark though. Not getting consistent results at all.

Stanley Pain fucked around with this message at 16:19 on Feb 2, 2014

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

Dr. McReallysweet posted:

Really? Something must be wrong with the DX11 run, because my 3570 @ 3.8GHz and 7870 get 30 FPS average. Mantle only bumps it to 40.

According to the beta driver notes, GCN 1.0 cards like yours aren't currently getting the same kind of performance boosts from Mantle that GCN 1.1 cards are. Thinking about it, after seeing an R7 260X benchmark showing nice gains with Mantle and wondering why a card that wouldn't be GPU-limiting BF4 could get those boosts, I remembered that card is GCN 1.1, as are the R9 290 series cards. Now the interesting thing will be to see whether GCN 1.0 cards eventually get the same kind of boosts that GCN 1.1 gets in Mantle - whether it's just an optimization issue with the current beta drivers, or whether GCN 1.0 is technically far enough behind 1.1 that there aren't worthwhile gains with Mantle.

INKRO
Feb 7, 2012
I don't think the architecture revision matters that much in that case though, seeing as how the 7870 is pretty much the pre-rebranded version of the R9 270/270x.

Pretty sure only the 280Xs have anything close to new silicon, since that one allegedly has a slightly revised Tahiti core.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
I'm not sure when this was posted, but I just noticed the first test results for an 80 Plus Titanium power supply, the Super Flower SF-600P14TE 600W. Tested efficiency was 93.05% at max load, peaking at 94.61% at 50% load. For most applications this won't matter much, but if you're running Crossfire 290s, a higher-capacity model could mean cutting your power supply's dissipation from ~80W to ~50W at max load - nothing compared to your videocards, but significant for localized temperatures around the power supply.
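
Back-of-the-envelope, the dissipation math works out roughly like that - here's a quick sketch, assuming a ~600W DC load and ~88% efficiency for a typical Gold-class unit (both of those are my assumptions, not numbers from the review):

code:
# Heat dissipated inside the PSU = DC load * (1/efficiency - 1).
def psu_dissipation_w(load_w, efficiency):
    return load_w * (1.0 / efficiency - 1.0)

load_w = 600.0  # assumed max-load draw for a Crossfire 290 box
for label, eff in [("~88% (typical Gold, assumed)", 0.88),
                   ("93.05% (SF-600P14TE at max load)", 0.9305)]:
    print(f"{label}: ~{psu_dissipation_w(load_w, eff):.0f}W dissipated in the PSU")
# ~88% (typical Gold, assumed): ~82W dissipated in the PSU
# 93.05% (SF-600P14TE at max load): ~45W dissipated in the PSU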

If I ever build a house I'm putting my electronics on a 240V circuit.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

INKRO posted:

I don't think the architecture revision matters that much in that case though, seeing as how the 7870 is pretty much the pre-rebranded version of the R9 270/270x.

Pretty sure only the 280Xs have anything close to new silicon, since that one allegedly has a slightly revised Tahiti core.

280s are just rebadged 7970s - same silicon. The only new silicon I know of is in the 7790/R7 260X and the R9 290 series cards; those are the cards that support GCN 1.1 and TrueAudio.

INKRO
Feb 7, 2012
Fair enough, I got the 7790/260x stuff mixed up because Bonaire was one of those late-generation release cards with some features of the current generation of cards.

Although AFAIK at least some 280Xs made after the initial release use the Tahiti XTL chip, which is pretty much the GPU equivalent of a new CPU stepping back in the old Core 2 days of 2008-2009.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

redstormpopcorn posted:

They're all getting snatched up for LiteCoins and Dogecoins and Coinye and whatever other stupid loving worthless pre-mined horseshit fork of Bitcoin is the new hot flavor this week.

I don't believe Litecoin was pre-mined in the common sense, only the first block or possibly the first two. Others, more recently, have been.

I'd also say Litecoin was at least novel, being the first Bitcoin-alike to use a different proof-of-work algorithm. Most now simply copy Litecoin in its choice of Scrypt.

It's not as if Bitcoin itself has any more inherent value than any of the lovely clones, and that's the basic takeaway from all of this. There are places that will still let you exchange them for Bitcoin, and from there, to fiat.

HalloKitty fucked around with this message at 23:55 on Feb 2, 2014

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Stanley Pain posted:

Running the extreme preset? I'm about to run numbers for 2560x1440; we'll see how those turn out. Also, which scenario are you running? I was running the attract one. Going to do the follow one next.

I think there is something wrong with the benchmark though. Not getting consistent results at all.

Yeah, this is hard to even accurately refer to as a tech demo - the one thing it's really trying to emphasize is batch count relative to DX11, but since there's nothing actually to it as a game, it'd be better kept, imo, as an in-house test-case tool rather than released as a public demo... I dunno, just not sure they've done an effective job at communicating what it is doing (because it's a barebones skeletal structure with game-like behavior, not an actual game), to give you sufficient context to look for the specific bigger number to care about and why you should care about it. I've seen more (or at least as) effective third party PhysX demos (most of which are similarly not-actually-a-game to the point that they really serve as benchmarks for the technology in question, which in this case seems to be CPU efficiency with large batches under the Oxide engine).

I'd appreciate it if nobody would accuse me of being a damned fanboy just because I feel like the one practical and one theoretical test tool we have to demonstrate what Mantle is capable of both do it kind of poorly. Honestly, it's the least fanboy thing in the world to try to take these proprietary results and generalize something more significant from them: there are situations where a CPU becomes a dramatically limiting factor, and a shipped game avoids them out of necessity (hence low returns when using the Mantle API) while an engine tech demo invites them. That we can see disparities in performance tells us in non-arguable ways that D3D11 has some serious issues, and that has gotta be welcome news. How am I a fanboy because I don't want the solution to "the de facto high-end game API has issues" to be "so buy this one brand and let's all switch to using their proprietary API"? There have been some promising developments in OpenGL which could help remove the profound WDDM bottleneck, and that's a pretty well-established API, though... I mean, it's important to realize that for a lot of usage cases, even with the overhead, D3D11 is still going to be powerful enough for modern games to keep pushing the envelope as hardware gets more powerful with them.

I am not a fanboy just because I can 1. acknowledge a good proof of some clear symptoms of modern DX11 sucking, and 2. want to see it resolved without being stuck with one hardware developer getting a monopoly on the solution. I feel exactly the same way about G-Sync: I don't want it to be proprietary, because I want it to see wide adoption, and history shows that anything which fragments the market that much right out of the gate only gets implemented if the people making the product (be it game engines or displays, whatever) get a major sweetheart deal with the GPU hardware vendor, severely limiting what would otherwise be a smart choice all around.

Dogen
May 5, 2002

Bury my body down by the highwayside, so that my old evil spirit can get a Greyhound bus and ride
It's alright buddy. I think people who don't know you, and only see a few of your posts, confuse your enthusiasm for EVERYTHING with enthusiasm for one thing, which is what usually gets associated with fanboyism.

beejay
Apr 7, 2002

I read all of his posts and was looking forward to his take on the preliminary Mantle results. I was a bit disappointed when it was basically trying to make fun of AMD for spending a bunch of time and money on "only" 8-10% gains (which is actually a pretty big deal) and just generally crapping on it.

The_Franz
Aug 8, 2003

beejay posted:

I read all of his posts and was looking forward to his take on the preliminary Mantle results. I was a bit disappointed when it was basically trying to make fun of AMD for spending a bunch of time and money on "only" 8-10% gains (which is actually a pretty big deal) and just generally crapping on it.

The problem is that one benchmark doesn't really tell the whole story. We don't know if the Mantle renderer in Frostbite had further optimizations beyond replacing API calls, nor do we know how Mantle compares to APIs other than D3D. Valve saw similar gains from moving to OpenGL from D3D, so until AMD releases the API publicly and some open tests are conducted, all we can do is guess.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
I was honestly expecting 0% performance increases for Mantle with beefy CPUs. The whole schtick they advertised was reducing the CPU overhead of draw() calls, and BF4 never struck me as a particularly CPU-bound game to begin with. But the 30%+ performance increases on APUs? That's drat impressive.
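
A toy model of why it shakes out that way - the per-call costs here are invented, purely to illustrate the "API overhead matters more when the CPU is slow" point, not measured from anything:

code:
# Toy model: CPU time per frame = game logic + draw calls * per-call submit cost.
# Mantle's pitch is cutting the per-call cost; a fast CPU already had headroom,
# a slow CPU/APU didn't. All numbers below are invented for illustration.
def cpu_frame_ms(draw_calls, per_call_us, logic_ms):
    return logic_ms + draw_calls * per_call_us / 1000.0

draws = 5000
for cpu, logic_ms, slowdown in [("fast i7", 4.0, 1.0), ("slow APU", 10.0, 2.5)]:
    d3d    = cpu_frame_ms(draws, 2.0 * slowdown, logic_ms)   # thick API path
    mantle = cpu_frame_ms(draws, 0.5 * slowdown, logic_ms)   # thin API path
    print(f"{cpu}: D3D {d3d:.1f} ms vs Mantle {mantle:.1f} ms of CPU time per frame")
# If the GPU needs ~16 ms per frame anyway, the fast CPU sees almost no FPS change
# (14.0 -> 6.5 ms, both under 16), while the APU goes from ~35 ms (CPU-bound)
# to ~16 ms and stops being the bottleneck.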

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Yeah, sorry if my lack of omniscience on this one disappoints, but I don't know anything more than what we've been shown and what we were told prior to being shown this, and the marketing before specifically related to Battlefield 4 was that with Mantle running it would "embarrass" (and I'm quoting, here) nVidia's top-end card in terms of performance. What benchmarks we've seen so far suggest that isn't actually the case. Anand's benches aren't especially comprehensive, either, which makes it hard to draw broader conclusions than "well if you're really CPU bound for one reason or another, this is handy!" which - while only actually coming to public attention recently due to Mantle being an actual thing - has been well known as a problem for a long time. Rendering technology has advanced in many cases in spite of crap Microsoft just hasn't been willing to address in DX for years now, and if you read developer blogs of people involved in rendering you find a consistent pattern of frustration with the state of affairs (again, long before Mantle was even a thing).

We're saying 8-10%, but Anand puts it at "7-10%", so maybe we should use that number since it's their partially exposed benchmark under primary discussion here. We've seen bigger performance gains on a single title through driver updates. That is a whole lot of expensive man-hours if that's the kind of performance gain one can hope to achieve as a ceiling, and only on high-end proprietary hardware for the time being (of course, that's set to change when they manage the next Mantle patch to back-port it to Tahiti and friends on the GCN 1.0 architecture).

As far as the other bare-bones engine demo goes, it's hard to draw any meaningful conclusions from it at all, just because it's the sort of thing I'm kinda surprised has seen the light of day instead of just staying in-house.

beejay, you haven't actually heard my take on real preliminary Mantle results because as far as I'm concerned we haven't actually seen them yet. That said, I'm not going to change my feeling that it's a mistake to try to strong-arm the industry with a proprietary methodology here, and on the other hand I'm not going to change my mind that it's a good thing to expose situations where developers are forced to make compromises for a published game (putting out of mind that BF4 has a bug or two, here) that prevent even a new API designed to alleviate CPU-bound situations from seeing a more dramatic performance improvement. As far as the Oxide thing, that's more dangling their shiny things out in front of other devs and saying "look what you can do if you use this, consider it!"

But so long as it's proprietary, I think that will severely limit adoption. Let's say we get some amazing, best-case-scenario, forward-looking game development on Mantle only, and things can run a hardware generation faster (30%+) - what does that do to the publisher who knows they need to service a broader market than just AMD GCN card users? What does it do to the developer who has to consider allocating hours to integrate a feature that can either be fundamental to the core of the game and thus basically required (locking out the competing hardware), or just an extra setting that turns on more shiny stuff - like PhysX, but for draws and batches and other stuff that just makes a scene extra fancy - unnecessary to the base gameplay experience, just an additional notch on the graphics slider for AMD GCN-and-above owners?

The latter outcome surely isn't desired by anyone, I would think. But it seems like it's an almost necessary consequence of something being not revolutionary enough to change EVERYTHING, being interesting enough to attract some attention, but being proprietary and so bottlenecking adoption from the get-go.

Straker
Nov 10, 2005

beejay posted:

I read all of his posts and was looking forward to his take on the preliminary Mantle results. I was a bit disappointed when it was basically trying to make fun of AMD for spending a bunch of time and money on "only" 8-10% gains (which is actually a pretty big deal) and just generally crapping on it.
I remember Agreed making basic thermodynamics mistakes in the system building thread a few years ago so I think it's shobon as hell that he's found his niche :shobon:

and yeah if this is just an AMD thing then it's going to be useless, and even if I mostly play BF4 right now I don't care about just getting results in BF4 (it already runs amazingly well for me so what do I care), and as such I still haven't gotten around to playing anything at all since that mantle announcement :v:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Straker posted:

I remember Agreed making basic thermodynamics mistakes in the system building thread a few years ago so I think it's shobon as hell that he's found his niche :shobon:

I don't remember what you're talking about, but it's weird to me that you do :catstare:

Straker
Nov 10, 2005
I've heard that before. Don't be weirded out, I'm legitimately brilliant, I don't have a goons.xls or anything :)

If memory serves, it was some typical misunderstanding/disagreement (I was a third party) about heat dissipation vs. temperature on its own and I think you were rationalizing higher temperatures in a system by suggesting against large quantities of obnoxiously warm air blowing on your leg, probably by loud fans (in favor of a smaller volume of scalding air) or something silly like that. doesn't matter now!

I actually wasn't sure if it was you or Crackbone at first but then I remembered whoever it was always made crazy wordy posts and so that's definitely you!

Straker fucked around with this message at 04:16 on Feb 3, 2014

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Edit: Let's just move right along

Agreed fucked around with this message at 04:27 on Feb 3, 2014

Straker
Nov 10, 2005

Agreed posted:

I don't think I would make a mistake as basic as confusing temperature in a closed system...
the metacommentary of my posting over time as it is, no offense intended
Ironically and meta-ally enough I'm sure this is what happened that other time too. I don't think I was a dick with whatever I said, you demanded an apology thinking I'd misconstrued you apparently being wrong about some first/second law of thermodynamics stuff, I didn't think you were an idiot at the time either, just figured it was miscommunication and never bothered replying :) I have to lay off the booze for the next few days so I'm drinking in advance now, if it weren't for the forums being flaky when you go back years and years in post histories I might be tempted to poke around... but I know it's not worth the trouble :)

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Straker posted:

I've heard that before. Don't be weirded out, I'm legitimately brilliant, I don't have a goons.xls or anything :)

You're weird.

Anyhoo, what's the latest skinny on Nvidia Maxwell? Are we still looking at mobile parts this month, desktop next month? Or is there a newer rumor?

Straker
Nov 10, 2005
Last time I checked pretty much everyone who's posted more than twice in this thread is weird as balls, so.

I've been looking, hoping to maybe flip my 290s to an altcoin miner in exchange for new and improved 800 cards while the former are still hot, but haven't seen anything in a long time now. I'd wonder how nvidia is going to deal with Maxwell mobile parts when they've already released a mobile "800"-series GPU, but I guess that's not any different from how all of the 2000s were, really.

GokieKS
Dec 15, 2012

Mostly Harmless.

Factory Factory posted:

You're weird.

Anyhoo, what's the latest skinny on Nvidia Maxwell? Are we still looking at mobile parts this month, desktop next month? Or is there a newer rumor?

I've come across some secondhand information (a co-worker whose friend works at nVidia) that supposedly the 880 will launch "in summer", replace the 780 Ti in its pricing tier, and shift all the existing parts down one tier, with the performance delta between the top tiers remaining about the same.

It sounds plausible enough, but who knows. Oh, and he also says that 120Hz IPS G-Sync monitors will be coming around the same time, which I'm a little more dubious of (IGZO displays to get that refresh rate? It would be pretty pricey, but maybe that's what they'll do).

Purgatory Glory
Feb 20, 2005
I wonder if Dice has any games on the horizon that are more likely to be CPU bound. An MMO or RTS perhaps? Can't wait to see what crazy things can happen when an MMO is fully optimized to utilize Mantle.

SlayVus
Jul 10, 2009
Grimey Drawer
MMOs are CPU-bound, but usually because their processes aren't very parallel-friendly. Rift, for instance, can use multiple CPU cores, but the main process doesn't take advantage of multiple threads well. So to increase FPS in Rift, you want faster single-core speed, as sketched below.
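
Rough sketch of why single-core speed is what moves the needle there - an Amdahl's-law-style toy model, with made-up numbers:

code:
# Toy model: frame time = serial main-thread work + parallelizable work / cores.
# When most of the simulation sits on one thread, extra cores barely help and
# clock speed / IPC is what actually raises FPS. Numbers are invented.
def frame_ms(serial_ms, parallel_ms, cores, clock_scale=1.0):
    return (serial_ms + parallel_ms / cores) / clock_scale

serial_ms, parallel_ms = 20.0, 8.0   # assumed per-frame CPU work at baseline clocks
print(f"4 cores:             {frame_ms(serial_ms, parallel_ms, 4):.1f} ms/frame")
print(f"8 cores:             {frame_ms(serial_ms, parallel_ms, 8):.1f} ms/frame")
print(f"4 cores, +20% clock: {frame_ms(serial_ms, parallel_ms, 4, 1.2):.1f} ms/frame")
# Doubling the core count only saves ~1 ms, while a 20% faster core saves ~3.7 ms.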

I doubt that this is something that could be fixed with Mantle, but I would like to be wrong. I don't see any reason for Blizzard to try it with WoW as it is not as graphically intense as other MMOs.

I think for RTS games the best you can do to help frame rates is just to limit the number of units on a map. With Supreme Commander - an older game, I know - 8 players with a 200-unit cap would drag the frames down fast even if you weren't looking at anything but water.

SlayVus fucked around with this message at 08:23 on Feb 3, 2014

Arzachel
May 12, 2012
Just to put 10% into perspective, that's the difference between a boost 7970 (or a 7950 at about 1130MHz) and a GTX 780 in BF4 at 2560x1440.

On a completely different note, I blame the coincalypse on everyone who bought lovely 660tis, 670s and 760s instead of the 7950 even after the frame pacing fixes :colbert:

Stanley Pain
Jun 16, 2001

by Fluffdaddy
Question for anyone who has Elpida memory on their 290s. Are you still getting Blackscreens occasionally?

Agreed posted:

Yeah, this is hard to even accurately refer to as a tech demo - the one thing it's really trying to emphasize is batch count relative to DX11, but since there's nothing actually to it as a game, it'd be better kept, imo, as an in-house test-case tool rather than released as a public demo... I dunno, just not sure they've done an effective job at communicating what it is doing (because it's a barebones skeletal structure with game-like behavior, not an actual game), to give you sufficient context to look for the specific bigger number to care about and why you should care about it. I've seen more (or at least as) effective third party PhysX demos (most of which are similarly not-actually-a-game to the point that they really serve as benchmarks for the technology in question, which in this case seems to be CPU efficiency with large batches under the Oxide engine).


Yeah, it certainly looks like they're trying to show off how much more they can offload to the CPU. It's a neat tech demo at best right now, because as a benchmark it kinda sucks. Running the same "scene" doesn't necessarily produce the same visual results, and Mantle performance is all over the charts (great min FPS, at times horrible max FPS, skewing the AVG severely), whereas the DX render path can have some really, really low min FPS (7), or very high (80+).


I'd call you a fanboy. A fanboy of GPUs ;)

Ghostpilot
Jun 22, 2007

"As a rule, I never touch anything more sophisticated and delicate than myself."

Stanley Pain posted:

Question for anyone who has Elpida memory on their 290s. Are you still getting Blackscreens occasionally?

Sure do. I would only get it in Borderlands 2, but after the December update things seemed to be fine. That is, until I attempted to play it with a friend of mine, at which point I can't seem to go longer than 10 minutes before black screening and having to force a shutdown.

Pretty infuriating. :(

Stanley Pain
Jun 16, 2001

by Fluffdaddy

Ghostpilot posted:

Sure do. I would only get it in Borderlands 2, but after the December update things seemed to be fine. That is, until I attempted to play it with a friend of mine, at which point I can't seem to go longer than 10 minutes before black screening and having to force a shutdown.

Pretty infuriating. :(

Are you running stock cooling? I've managed to pretty much get rid of it, and the solution depends on the type of cooler you have. If you're on stock cooling, you need to lower your memory speed (on the old Catalyst driver, a 10% reduction in Overdrive was the magic number for me; on the 14.1 drivers, around 1100-1150 seemed to do the trick).

If you have good cooling on the card, I've found that cranking the max power by 25% seems to work for me. Planetside 2 was my big black screen causer and I've gone 2 weeks without one now.

veedubfreak
Apr 2, 2005

by Smythe
I'm pretty sure at least one of my cards has Elpida memory and I have never had the black screen issue. But then again, cooling is not an issue at all. Still trying to figure out why my board is giving a 3% boost to GPU 2. Almost tempted to do a full restore to defaults on the board and see if it stops being weird.

Ghostpilot
Jun 22, 2007

"As a rule, I never touch anything more sophisticated and delicate than myself."

Stanley Pain posted:

Are you running stock cooling? I've managed to pretty much get rid of it, and the solution depends on the type of cooler you have. If you're on stock cooling, you need to lower your memory speed (on the old Catalyst driver, a 10% reduction in Overdrive was the magic number for me; on the 14.1 drivers, around 1100-1150 seemed to do the trick).

If you have good cooling on the card, I've found that cranking the max power by 25% seems to work for me. Planetside 2 was my big black screen causer and I've gone 2 weeks without one now.

I have a Gelid Icy Vision on mine, so cooling hasn't been an issue. My memory is at the stock 1250. 25% seems to be a bit drastic but I'll sure give it a shot in 5-10% increments. Curiously, I haven't encountered a black screen with any other game yet (I honestly expected it to happen in Guild Wars 2, as it is far more intensive).

Edit: It's also come up intermittently during the Unigine Heaven benchmark, but I haven't run that in quite a while.

Ghostpilot fucked around with this message at 18:48 on Feb 3, 2014

Stanley Pain
Jun 16, 2001

by Fluffdaddy

Ghostpilot posted:

I have a Gelid Icy Vision on mine, so cooling hasn't been an issue. My memory is at the stock 1250. 25% seems to be a bit drastic but I'll sure give it a shot in 5-10% increments. Curiously, I haven't encountered a black screen with any other game yet (I honestly expected it to happen in Guild Wars 2, as it is far more intensive).

Edit: It's also come up intermittently during the Unigine Heaven benchmark, but I haven't run that in quite a while.


I'm almost positive that the cause is a memory timing issue, which is why it goes away for me when I either lower the memory speed or give it MORE POWER. I'm not 100% positive that the power setting even feeds more power to the memory.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Stanley Pain posted:

Question for anyone who has Elpida memory on their 290s. Are you still getting Blackscreens occasionally?



Yeah, it certainly looks like they're trying to show off how much more they can offload to the CPU. It's a neat tech demo at best right now, because as a benchmark it kinda sucks. Running the same "scene" doesn't necessarily produce the same visual results, and Mantle performance is all over the charts (great min FPS, at times horrible max FPS, skewing the AVG severely), whereas the DX render path can have some really, really low min FPS (7), or very high (80+).


I'd call you a fanboy. A fanboy of GPUs ;)

Yeah, in my head I was thinking "that's... kinda like Fluidmark" in terms of analogous software that purports to demonstrate how a thing is better but doesn't do so in a way that can really connect to "and thus videogames are better!" Uncapped emitters and post-processing can sure show a lot of difference in terms of CPU and GPU utilization, but it's not anything like a videogame.

Which makes me wonder, by the way, what was up with the Oxide presentation at the AMD conference, then? It was rather different, maybe just the fully in-house version that they don't want getting out? I dunno. I remember at one point they had 10k units and it seems like the demo doesn't ever get close to a little over half that.

Edit: Also, holy poo poo, I had no idea that Elpida's memory problems were so prolific and widespread. They crashed hard, god drat. Sorry to the afflicted. Hynix and Samsung seem to be all I've got on my cards, and they work as advertised, though - and I know I've said this before, but it's still worth mentioning - overclocking VRAM is risky business and shouldn't be done unless there's an actual good reason to do it, like the GTX 680s being bandwidth-starved with their 1500MHz GDDR5 configuration... With 7GHz modules on recent nVidia cards, or just fantastically wide memory buses - or both - my advice stands at "don't overclock GDDR5 if you can help it"; it becomes unstable in ways that are sometimes subtle and sometimes very overt, pretty dang quickly. Relatively high fault tolerance doesn't change that, and we all know it's basically impossible to validate a GPU overclock beyond "this plays games acceptably well." I hate to think of small devs buying factory-overclocked cards to run CUDA or OpenCL programs on and overclocking their hardware to try to get something extra for free. It's a really bad idea.

Agreed fucked around with this message at 20:23 on Feb 3, 2014

GrizzlyCow
May 30, 2011
[H] reviews the Mantle preview and actually includes frame time data.


Agreed posted:

Yeah, in my head I was thinking "that's... kinda like Fluidmark" in terms of analogous software that purports to demonstrate how a thing is better but doesn't do so in a way that can really connect to "and thus videogames are better!" Uncapped emitters and post-processing can sure show a lot of difference in terms of CPU and GPU utilization, but it's not anything like a videogame.

Mantle is a boon for those who have weak processors and who game at 1080p or lower. Most of AMD's big numbers came from pairing a relatively weak processor with a strong graphics card and using Mantle to relieve the load on the weak processor, achieving big gains.

Anyway, Mantle is in its infancy. Both the API and the drivers it's attached to are in beta, and the BF4 implementation probably isn't that great either. Once they mature some more, I'm sure Mantle will be a viable alternative to DirectX 11; it's just not there yet. Look at that [H] link. I do believe [H] are the first ones who actually plotted the frame latency numbers, and they paint an interesting picture: while Mantle is smoother overall, it has some infrequent big spikes that disrupt gameplay. Hopefully that can be ironed out in the near future.
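
To make the "smoother on average, but with spikes" point concrete, here's what that looks like on made-up frame times - these values are invented for illustration, not taken from the [H] data:

code:
# Invented frame times (ms): one run is fast on average with a few big hitches,
# the other is slower but perfectly steady.
spiky  = [14.0] * 95 + [90.0] * 5
steady = [22.0] * 100

def summarize(times):
    avg_ms = sum(times) / len(times)
    p99_ms = sorted(times)[int(len(times) * 0.99) - 1]   # ~99th percentile frame
    return 1000.0 / avg_ms, avg_ms, p99_ms

for name, times in [("spiky", spiky), ("steady", steady)]:
    fps, avg_ms, p99_ms = summarize(times)
    print(f"{name}: {fps:.0f} avg FPS, {avg_ms:.1f} ms avg, {p99_ms:.0f} ms 99th pct")
# spiky:  56 avg FPS, 17.8 ms avg, 90 ms 99th pct
# steady: 45 avg FPS, 22.0 ms avg, 22 ms 99th pct
# The spiky run "wins" on average FPS, but its worst frames are what you feel -
# which is exactly what a frame time plot shows and an FPS average hides.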

Straker
Nov 10, 2005

Stanley Pain posted:

Question for anyone who has Elpida memory on their 290s. Are you still getting Blackscreens occasionally?
Any way to check without taking off the cooler? I remember using that one utility to check if my 290s were unlockable but forget its name or if it gave any hints as to RAM manufacturer. I think I posted in this thread but I was getting some pretty loving infuriating bluescreens, driver problems etc. myself, but those have all cleared up for whatever reason. I'd reboot half a dozen times and get things working with new drivers, then BSOD for a seemingly unrelated (but obviously actually related) reason. I ended up taking out my ATI TV tuner and making one or two other small hardware changes that I can't remember, and still had the BSODs, and then they magically went away. Might have also been nudged along by a flaky hard drive that I think has caused just a small handful of BSODs over the past month or two.

Agreed posted:

overclocking VRAM is risky business and shouldn't be done unless there's an actual good reason to do it, like the GTX 680s being bandwidth starved with their 1500MHz GDDR5 configuration... With 7GHz modules on recent nVidia cards, or just fantastically wide memory buses - or both - my advice stands at "don't overclock GDDR5 if you can help it," it becomes unstable in ways that are sometimes subtle and sometimes very overt, pretty dang quickly
Is there any particular reason for saying this? Just curious - I know I mentioned it in at least one thread, but I mined a couple of gimmicky altcoins for fun for a few days, and to get the best performance out of 290s you generally have to OC the VRAM and sometimes even underclock the GPU. You can get pretty crazy overclocks on the RAM without any special cooling, especially if you don't have to worry about dealing with heat from an overvolted, OCed GPU, and mining makes it super obvious once you start getting errors - more so than just a couple of flashing pixels does, at least. Unless you're just saying not to bother because it's generally not very helpful, in which case yeah, of course - nobody overclocks their system RAM any more either.

veedubfreak
Apr 2, 2005

by Smythe
For those of you having the black screen issue, have you tried putting a 290X BIOS on the card? All my cards have had an unlocked BIOS from day 1 and I have never had this issue. Perhaps the 290X BIOS gives more power to the memory along with the GPU.
