Yaoi Gagarin
Feb 20, 2014

Seamonster posted:

I was hoping for an unfucking of Crossfire so 2 of these would crank on 4K gaming reasonably well, but then 2 GB of 256-bit-wide memory? It also means the added expense of the 4 GB cards will make Crossfiring even more of a no-go.

I'm pretty sure two of those wouldn't be enough for 4K even with more VRAM. 4K is in 2x780ti/2x290x territory right now.

Yaoi Gagarin
Feb 20, 2014

Mr.PayDay posted:

Total Biscuit showed the texture quality settings in his "WTF is Shadow of Mordor" video. The "Ultra" texture pack needs 6 GB of VRAM.
If I buy a 970 or 980 SLI system, does the VRAM add up? Am I still limited to 4 GB of VRAM, or do the two Nvidia cards add up to 8 GB?
Sorry if this has been asked before; the FAQ in the OP does not answer that.

No, VRAM is not additive in SLI or Crossfire. Each card keeps its own full copy of every texture and buffer, so two 4 GB cards still give you 4 GB of usable VRAM.

edit: drat, beaten by a hair.

Yaoi Gagarin
Feb 20, 2014

Fauxtool posted:

I understand how consoles can run nice looking games while having lower end specs due to having consistent hardware that makes it easier to squeeze every bit of power out.

For PCs is it easier to just overuse the CPU on a port than to optimize it to run both on the GPU and CPU somewhat equally? Do the recommended specs on the games reflect the higher CPU usage at least?

edit: please explain like I'm stupid

Well, there are different reasons for different games, it seems. For example, Far Cry 4 was coded such that it very specifically wants to run its main thread on the third CPU core. If you've only got two cores, no dice.

edit: There's really no good reason for a game to refuse to run just because only two processors are available. Any multithreaded program should work on a system with any number of processors, even one, although it might slow down a lot.
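To illustrate, here's a minimal sketch (not from any particular game) of sizing a worker pool to whatever the machine actually has rather than hard-coding a core count; the do_work() function is a hypothetical stand-in for per-thread game work:

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Hypothetical stand-in for a game's per-thread job.
void do_work(unsigned worker_id) {
    (void)worker_id; // real per-task work would go here
}

int main() {
    // hardware_concurrency() may return 0 if it can't tell; fall back to 1.
    unsigned n = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back(do_work, i);
    for (auto& t : pool)
        t.join();
}
```

Written that way, the same binary runs on one core or sixteen; pinning the main thread to "core 3" is exactly the kind of baked-in assumption that breaks.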

I feel like ever since early 2014 or so there's been a trend of AAA PC ports demanding a lot of hardware, more than seems right. It's possible that modern AAA games are genuinely doing a lot more work under the hood. But the evidence seems to suggest that a lot of games are just poorly coded to begin with, or get that way when they're ported.

Yaoi Gagarin fucked around with this message at 07:43 on Apr 21, 2015

Yaoi Gagarin
Feb 20, 2014

Josh Lyman posted:

Aside from Freesync (and I don't plan on buying a Freesync monitor anytime soon), is there anything on my 290 I'd be giving up by going to a 970?

That cozy feeling of warmth when you cuddle your PC. :3:

Yaoi Gagarin
Feb 20, 2014

Behold, in darkness, a doom sweeps the land.

Yaoi Gagarin
Feb 20, 2014

Mutation posted:

Wait, did he say that their software renderer on Sandy Bridge is only almost as fast as a Voodoo 3 2000? :psyduck:

CPU-side rendering really, really sucks, especially since the code from 1999 is, at best, using SSE1.

Yaoi Gagarin
Feb 20, 2014

Don Lapre posted:

DX13 is where AMD is really going to win.

And OpenGL Romulan.

Yaoi Gagarin
Feb 20, 2014

Germstore posted:

Thanks. With new hardware I'm always worried I got a lemon, but with GPUs the expectation of them never loving up is probably unrealistic.

It's probably just something in the driver, tbh. If there was anything even slightly wrong with the hardware, you'd probably know by now.

Yaoi Gagarin
Feb 20, 2014

I have a 280X that's recently started overheating. It idles at 53C in a room that's probably about 20C ambient. The idle frequency is 500 MHz and the load frequency is 1020 MHz, and under load the temperature keeps rising until eventually it hits 99C and the card throttles back to 500 MHz. There's no way the chip could suddenly start drawing more power, right? This is probably just a cooler problem? The card overheats even if I up the fan speed to 100% in Afterburner or CCC.

Yaoi Gagarin
Feb 20, 2014

VelociBacon posted:

Cooler problem, is it absolutely full of dust or cat hair? Are your case intake/exhaust fans working?

Case fans are working. I'll pull the card out and take a look, you're probably right about the dust. I've been using this card for about a year and the last case didn't have any filters.

Yaoi Gagarin
Feb 20, 2014

I posted here before about my 280X's temperature problems. After thoroughly cleaning and repasting the heatsink I saw a 10-15C drop in temperatures, but recently I've been seeing crashes while playing games. It turns out that under load, the card's temperature steadily rises until it hits 75C, at which point it immediately crashes the entire system. It's 100% reproducible, seems to happen sooner or later in every game, and only takes a minute to trigger in FurMark. Even at 100% fan speed. It's like the heatsink is literally incapable of dissipating heat fast enough. There are a couple of things that make this situation pretty weird.

First, I got this card refurbished in July of 2014, and the heat problems didn't start until this past summer. Is it possible for the cooler to degrade over time? It's not even a stock blower or anything, it's an MSI GAMING 3G. Second, it's strange that the failure point is 75C. That's new, from within the last few days. Last summer the problem was that the card would hit 100C and then throttle to 500 MHz, and the repasting fixed that. I'm not sure what would cause this behavior, could the GPU itself be damaged?

I'd be willing to try one of those Arctic aftermarket coolers, but their website doesn't seem to have a compatibility list for specific boards. Does anyone know if the GAMING 3G uses a different PCB than a stock 280X? I could also just buy a 970, but that's a last-resort option right now; I'd rather keep a perfectly good card and wait for Pascal.

Yaoi Gagarin
Feb 20, 2014

Ozz81 posted:

^ My thoughts exactly, especially with Furmark crashing in under a minute. Vostok, do you know how old and what wattage your PSU is?

Only eight months old, an EVGA SuperNOVA 550 GS. It got a pretty good review: http://www.jonnyguru.com/modules.php?name=NDReviews&op=Story6&reid=438


Seamonster posted:

Check your VRM temps, bruh. You rightfully expect the card to throttle once the GPU temp hits a certain point but most BIOSes don't give a poo poo about VRM temps and VRMs start to poo poo out past a certain temp too.

When I get home from work I'll look at the VRM temps. Can I see that in MSI Afterburner or do I need something else?

Yaoi Gagarin
Feb 20, 2014

Ozz81 posted:

I think you can still use tools like GPU-Z to show you the temps of the VRMs at idle and load; it has a Sensors tab that shows what's monitored. If it's not the PSU or anything else, the card might just be bad. It might be worth swapping parts around if you've got a spare card to test with.

GPU-Z doesn't show VRM temperature, nor does Afterburner. I think it's probably not exposed by the driver for this card.

I guess I'll buy both a PSU and GPU and return whichever doesn't fix the problem.

e: Power consumption scales linearly with frequency, correct? I underclocked the card to 510 MHz and it still crashed around 70C. Does that exonerate the PSU?
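(The first-order model behind that assumption, as a back-of-envelope sketch rather than anything measured on this card:

$$ P_{\text{dyn}} \approx \alpha\, C\, V^2 f $$

so at a fixed voltage, halving the clock should roughly halve dynamic power, and if the lower clock also permits a lower voltage the savings are better than linear. Static leakage doesn't scale with frequency at all, though, so the real drop is somewhat less.)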

Yaoi Gagarin fucked around with this message at 05:20 on Feb 13, 2016

Yaoi Gagarin
Feb 20, 2014

In the end the problem was with the GPU after all. :rip: 280X, you did good.

...

Does anyone know a nice 970 OC guide they could link me to?

Yaoi Gagarin
Feb 20, 2014

Funnily enough I had no trouble with AMD drivers for the last five years across three cards, right up until my 280X actually started to die, but since getting a 970 a couple months ago I've been experiencing intermittent driver crashes. Sometimes it won't even be during a game, but while watching a video.

One time the driver got stuck in a loop of crashing and restarting until I escaped the full-screen Twitch stream. Then I spent five minutes clicking the X button on the Windows 10 notifications it spawned :v:

Yaoi Gagarin
Feb 20, 2014

Gonkish posted:

This latest driver (368.39) is loving godawful. I'm on dual 760s right now and I had to roll back to the previous driver just to get basic stability back. It was crashing randomly, even doing basic poo poo in Windows like playing YouTube videos. Swapping back solved all of that instantly and things are back to normal. There's something seriously fucky going on with 368.39, and the 10xx series is stuck with it.

This sounds a lot like my problem. Is there a particular stable version I can roll back to?

Yaoi Gagarin
Feb 20, 2014

wicka posted:

agreed, progress is not real

Don't be a revisionist :rimshot:

Yaoi Gagarin
Feb 20, 2014

Harik posted:

I gave GPU passthrough a try, to free up one of my machines for other stuff.

The GPU part works great. Moving from a 2500k to a 4590 (non-K) canceled out the downsides of being virtualized. Didn't even require a reinstall, I just slapped my SSD and GPU in my linux box and went with it.

Anyone know if nVidia still has a stick up their rear end about virtualizing non-quadro cards? There was a bunch of outrage when the drivers first started throwing code 43 a while back, but either they gave up or people gave up on using nvidia GPUs for this, and nobody's really talking about it anymore.

Apparently yes, but if you're using KVM there's a way to hide from the guest that it's running in a virtual machine.

I haven't done it myself, but I've been reading various VFIO guides and that's what they say.
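For reference, what those guides describe boils down to a couple of lines in the libvirt domain XML; a sketch of the relevant fragment (the vendor_id value is an arbitrary string of up to 12 characters):

```xml
<features>
  <hyperv>
    <!-- report a non-KVM vendor ID so the guest driver doesn't balk -->
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <!-- hide the KVM hypervisor signature from the guest's CPUID -->
    <hidden state='on'/>
  </kvm>
</features>
```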

Yaoi Gagarin
Feb 20, 2014

I had a fun graphical hiccup today. My screen became pixelated, some of the colors messed up, and the whole thing "vibrated." It looked like something out of a movie where the bad guy subverts a computer system. After a few seconds it stopped and everything was normal, not even a notification saying the driver had to restart. :v:

Yaoi Gagarin
Feb 20, 2014

Risky Bisquick posted:

Seems like you like to live on the edge placing not one, but both MXM cards on the outside of an anti-static bag.

People do this at work all the time and I feel sad but don't correct them... :smith:

Yaoi Gagarin
Feb 20, 2014

Josh Lyman posted:

About 12 years ago, I remember Nvidia's cafeteria having a reputation for being the best in Silicon Valley and employees from other companies (like Intel) would go there to eat.

They just let people from other companies use their cafeteria?

Yaoi Gagarin
Feb 20, 2014

Paul MaudDib posted:

hahahahah I told you fucks that GPU manufacturers wouldn't be able to hold back from intervening when lovely DX12/Vulkan coders wrote garbage code that ran like poo poo on your architecture, the pressure to have your hardware look good isn't going anywhere

here we go, next stop driver-optimization town

edit:


oh boy was I wrong, here comes the DX11/OpenGL style wrappers

It is funny how we've come around, but even so it makes sense. A world where you can choose between a well-defined low-level API and a high-level API implemented in terms of that low-level API is still better than one where your only option is a high-level API with pure magic underneath.

Yaoi Gagarin
Feb 20, 2014

Zero VGS posted:

Okay, I have a GTX 1080, playing Mass Effect Andromeda in true Fullscreen, with the Asus 34" Ultrawide Gsync monitor running at 100hz, Gsync is set to "enable Gsync for windowed and fullscreen mode", Vsync and Triple Buffering were on in the game options by default so I turned those off...

Why the gently caress is it still tearing?

Edit: I tried setting the frame limit with Rivatuner to 100 fps, 101 fps, and uncapped, and confirmed them all with the fraps fps counter, but none of it fixes the tearing. The game is hitting the cap easily.

What happens if you cap to less than the monitor's refresh rate, like 96?

E: Gsync only works when your fps is below the monitor's refresh rate; if it's higher, your choices are to enable traditional vsync, cap your framerate below the refresh rate, or enable Fast Sync in your Nvidia options (which is vsync with "real" triple buffering), all of which introduce some level of input lag.
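A frame cap is just holding each frame to a fixed time budget; a toy sketch of the idea (illustrative numbers, not how RivaTuner is actually implemented):

```cpp
#include <chrono>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    // ~97 fps on a 100 Hz panel keeps you inside the G-Sync range.
    const auto frame_budget = std::chrono::microseconds(10309);

    auto next = clock::now();
    for (int frame = 0; frame < 1000; ++frame) {
        // ... render the frame here ...
        next += frame_budget;
        std::this_thread::sleep_until(next); // never start a frame early
    }
}
```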

Yaoi Gagarin fucked around with this message at 04:17 on Apr 5, 2017

Yaoi Gagarin
Feb 20, 2014

Air-cooled 970 for Battlefield 1

Yaoi Gagarin
Feb 20, 2014

I purchased a refurbished Cbmiprz which should hopefully arrive today :toot:

Yaoi Gagarin
Feb 20, 2014

repiv posted:

I can't comment on AMD's driver quality in general these days (haven't owned one of their cards for a while), but I find it hilarious that their cursor corruption bug refuses to die.

It first appeared on ATi cards around 2003/2004 and still happens on some Polaris systems :eng99:

Wait is this the bug where the cursor gets a weird outline that inverts the colors of whatever is behind it? That's an ATI/AMD driver bug???

I'd been seeing that for years and always just assumed it was a Windows thing, but I guess I haven't experienced it since I switched GPU sides...

Yaoi Gagarin
Feb 20, 2014

Isn't it because GCN has better integer operation throughput?

Yaoi Gagarin
Feb 20, 2014

repiv posted:

Yeah, GCN has always kicked rear end at 32-bit integer math. I do wonder why they chose to put so much power there; it seems like cryptocurrency and password cracking are the only applications that really need it, and there's no way AMD saw Bitcoin coming.

...unless Raja Koduri is really Satoshi Nakamoto :tinfoil:

GCN was designed soon after GPGPU had become A Thing, right? Maybe they thought they should make a more balanced architecture, not knowing all the applications that might emerge in the future.

Yaoi Gagarin
Feb 20, 2014

I think I just had my first-ever driver-induced BSOD, at least on Windows 10 :toot: The error code was VIDEO_TDR_FAILURE and I didn't fully catch the driver name, but it was nvldd-something.

I have a feeling that my system isn't playing nicely with my G-Sync monitor. Sometimes when I launch or alt-tab from a game on it, both of my screens go black and I have to reboot the PC. Anyone else see stuff like that after getting a G-Sync monitor?

Yaoi Gagarin
Feb 20, 2014

Kazinsal posted:

Profitability is tanking. Wait a couple months and then scoop some barely used GTX 1080s on the cheap.

I'm kind of worried that this won't actually happen. Now that NiceHash lets people mine whatever is most profitable at any point in time, and there are lots of cryptocurrencies to choose from, what if we end up in a situation where it's always profitable to be mining something on a GPU, so the demand from miners never goes down? An eternal GPU famine...
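The mechanism is simple enough to sketch: a profit-switcher takes whichever coin currently pays best and mines as long as that beats the power bill. A toy illustration with made-up numbers (not NiceHash's actual algorithm):

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Coin {
    std::string name;
    double revenue_per_day; // USD/day for one GPU at current price and difficulty
};

int main() {
    const double power_cost_per_day = 0.60; // ~200 W at $0.125/kWh
    std::vector<Coin> coins = {{"ETH", 1.40}, {"ZEC", 1.10}, {"XMR", 0.90}};

    // Pick the single best-paying coin right now.
    auto best = *std::max_element(coins.begin(), coins.end(),
        [](const Coin& a, const Coin& b) {
            return a.revenue_per_day < b.revenue_per_day;
        });

    double profit = best.revenue_per_day - power_cost_per_day;
    if (profit > 0)
        std::cout << "mine " << best.name << " (+$" << profit << "/day)\n";
    else
        std::cout << "idle\n"; // only here does GPU demand actually fall
}
```

Demand only lets up when that "idle" branch is hit for every coin at once, which is exactly the worry.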

Yaoi Gagarin
Feb 20, 2014

Zero VGS posted:

My 7700K gets 4.2 GHz with turbo disabled at 1.08 V, which caps out at 50 watts, so yeah, Intel is still the better value.

But you're getting 50 W at 4.2 GHz on just four cores, and he's describing an 8-core CPU at 3.0 GHz. That's 8 × 3.0 = 24 core-GHz against your 4 × 4.2 = 16.8, ignoring IPC differences, so it seems like a better deal to me.

Yaoi Gagarin
Feb 20, 2014

I got a 970 that sits at 99% utilization when I'm playing Battlefield 1 #WaitingForVolta

Yaoi Gagarin
Feb 20, 2014

At this point I'm really loving hoping all the cryptocurrency miners just roll over and die before Volta hits, because I really want to get a new card.

Yaoi Gagarin
Feb 20, 2014

go on

Yaoi Gagarin
Feb 20, 2014

Another data point for GPUmageddon: I was in Micro Center yesterday and the one 1060 they had was a 3 GB model that cost $250.

Yaoi Gagarin
Feb 20, 2014

Is an RX 560 or a GTX 1050 Ti a noticeable upgrade from an HD 6850? And how much would either of those cost if the market weren't hosed right now?

Yaoi Gagarin
Feb 20, 2014

Thanks for the many responses to my question, everyone. Not going to quote them all for space.

The person using the 6850 is for the most part fine with it, since they play a lot of 2D games that would run well on anything. They've only had trouble in Skyrim. But if they ever want to play more recent games it's going to be a problem. Also, this 6850 is now the noisiest component in the computer, since I modernized everything else but the drives this weekend.

PBCrunch posted:

Even if the difference in specifications isn't that great, the fact that Terascale driver development has been dead for years means either of those cards will beat an HD 6850 silly. From my experience even something as old as a GeForce GTX 460 smacks an HD 6850 around in newer games. Nvidia is still putting a tiny amount of effort into Fermi drivers.

AMD is quick to abandon older architectures. I'm pretty sure they quit trying on Terascale support while still selling low-end cards on the architecture.

Yeah, it being old Terascale is part of why I want to replace it. I set them up with the Crimson ReLive beta driver for Terascale, and that seems to work OK at least.

Yaoi Gagarin
Feb 20, 2014

22 Eargesplitten posted:

I don't know. I really don't. I assumed it would be in the system tray or an application to find in the start menu. I seriously searched for half an hour a few weeks ago, and this time within 2 minutes of searching I found it.

I looked right past the drat thing dozens of times in the past and never noticed it was even there.

It definitely used to be in the system tray. I went looking for it last week as well and didn't realize it was in the right-click menu until I googled where to find it.

Yaoi Gagarin
Feb 20, 2014

If only more games would actually have an SMAA option. FXAA still seems to be the ubiquitous cheap AA choice.

Yaoi Gagarin
Feb 20, 2014

Is there still time to say my 970 is trucking along at 1440p 165 Hz?
