DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Mogomra posted:

Good point. I just wasn't sure if 4gb was getting into the "ridiculous" territory. I plan on keeping it as long as humanly possible, so I'll go with that.
In the end it doesn't even matter.

If you go the high-end nVidia route you end up with a 3GB card; if you go AMD you get a 4GB card--if you don't think game companies are going to keep that in mind for a while still, I don't know what to tell you. Yes, 6GB and 8GB cards, respectively, do exist (sorta), but the price delta on them is stupid for something that "might" help your 1080p performance a bit some ways down the road.

The far better option is to simply not worry about it: get a decent card now, and in another year or three, when games start bumping up against the 3/4GB barrier (outside of stupidly un-optimized games like WatchDogs), you can sell your card, combine the proceeds with the money you didn't spend on that 6/8GB card today, and get a newer, better card that will out-perform a 6GB 780 Ti or an 8GB 290X anyhow. This applies equally to getting a 2GB 770 for sub-$300 instead of a 4GB 770 for $375+ (and if you're not married to nVidia, I'll take the opportunity to remind you that the sub-$350 range right now is littered with quality used 290s and 290Xs, so you really have no excuse for not getting one of those instead).

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Shaocaholica posted:

Yeah, but those aren't hard numbers for the life of the console and can change for future titles. I think I heard something in the news lately about them changing, too--changing to give games more memory. I don't remember which console, or whether it was something I read or heard from an industry friend.

It seems ridiculous that on the PS4, all the RAM isn't available. It has a secondary processor for background tasks with 256MiB RAM attached to it - the same amount as all of the PS3 system RAM - and yet it's still reserving an enormous amount of the main RAM too.

Giving the machines more resources has made the OS bloated as hell, as far as I can tell. I'd love to know what that background chip is responsible for, and why on earth all the RAM is not available. If it's because of some bullshit feature like the streaming and sharing, to hell with that.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

HalloKitty posted:

It seems ridiculous that on the PS4, all the RAM isn't available. It has a secondary processor for background tasks with 256MiB RAM attached to it - the same amount as all of the PS3 system RAM - and yet it's still reserving an enormous amount of the main RAM too.

Giving the machines more resources has made the OS bloated as hell, as far as I can tell. I'd love to know what that background chip is responsible for, and why on earth all the RAM is not available. If it's because of some bullshit feature like the streaming and sharing, to hell with that.

This is pretty typical. Lots of memory is reserved at the beginning of a console's life while they decide everything they want to do, and over time more and more resources are freed up. They can't take resources away from developers in the future, so it's safer to reserve a large amount at the start.

Wozbo
Jul 5, 2010

HalloKitty posted:

It seems ridiculous that on the PS4, all the RAM isn't available. It has a secondary processor for background tasks with 256MiB RAM attached to it - the same amount as all of the PS3 system RAM - and yet it's still reserving an enormous amount of the main RAM too.

Giving the machines more resources has made the OS bloated as hell, as far as I can tell. I'd love to know what that background chip is responsible for, and why on earth all the RAM is not available. If it's because of some bullshit feature like the streaming and sharing, to hell with that.

Yeah, this goes with a popular console concept of "Don't gently caress with the baseline." Unless you are braindead and change up the hardware *cough*, you have to be very conservative at the start; otherwise you might find out you needed 512MB when you only reserved 256MB, and welp, can't take it back now. Right now, what, 2.5GB is on lockdown? Devs aren't being too badly constrained by 5.5GB, so we'll likely see small bumps here and there. My estimate is that probably 1-1.5GB will still be on lockdown by the end of the lifespan.

Shaocaholica
Oct 29, 2002

Fig. 5E
Well, the whole point is the sense of a seamless transition from the game to the dashboard. Not every service needs to be actually ready to go when switching to the dash. As long as you can get to the dash without a hiccup and navigate around a bit, I think that's pretty good from a user-experience perspective. I don't think too many people will mind if certain edge services take a bit to start. And then you can start swapping the game out to disk to free up space for OS tasks. No way that should take 3GB or even 1GB, given optimizations and whatnot. I have faith in smart people figuring that out.

Shaocaholica fucked around with this message at 18:12 on Jun 17, 2014

Rime
Nov 2, 2011

by Games Forum

deimos posted:

Four R9 cores would probably melt an H220 (and I mean the metal).

Well, it's probably for the best anyways. Nobody should be spending that much on a GPU. :v:

Shaocaholica
Oct 29, 2002

Fig. 5E

Rime posted:

Well, it's probably for the best anyways. Nobody should be spending that much on a GPU. :v:

Titan Zee would like a word.

Rastor
Jun 2, 2001

wccftech is tossing another rumor onto the pile, saying the GTX 800 series will be coming by the end of the year, based on Maxwell, on the 28nm process.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Shaocaholica posted:

Titan Zee would like a word.
The Titan Z is such a hilarious waste; the only people who are going to buy it are ones who are absolutely locked into nVidia for whatever reason and can argue themselves some sort of discount--probably by going "well, I could get a 295x2 for half the price and get better performance, gimmie discount!" Or they have roughly 1500 more dollars than sense.

Rime
Nov 2, 2011

by Games Forum
You could at least rationalize a 295x2 by virtue of it lasting well over 3 years at the top of the food chain, and you'd spend at least as much buying the new "top of the line" card every year if you're that kind of gearhead. The Nvidia offerings just demonstrate contempt for the consumer.

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE
The Nvidia Titan offerings are compute cards. Gamers only buy them because the average customer is an idiot.

Shaocaholica
Oct 29, 2002

Fig. 5E

Rime posted:

You could at least rationalize a 295x2 by virtue of it lasting well over 3 years at the top of the food chain, and you'd spend at least as much buying the new "top of the line" card every year if you're that kind of gearhead. The Nvidia offerings just demonstrate contempt for the consumer.

I'd rather spend $1500 over 3 years than all at once on something that might last 3 years. Hell, you can get 2x 290X for waaaay less than $1500.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

The Lord Bude posted:

The Nvidia Titan offerings are compute cards. Gamers only buy them because the average customer is an idiot.
While true, you can get two 295x2's for the price of a Z and still come out on top in the raw compute slug-fest--or go dual Blacks and crush it even further. The Z does win on power consumption: 375W vs the 1000W (lol) of a dual 295x2 setup, or 500W for dual Blacks. So if you're in a high-electricity-cost area running 24/7 compute loads it looks like it would make sense, until you realize that at the $1000 price difference between it and two Blacks, and assuming 11c/kWh, it'll take something like 8 years of running 24/7 to make up the price difference. It's not even really any denser than Blacks, unless you're building a custom enclosure and board setup to deal with its triple-slot cooler. Also, a lot of the people who would care about that much compute power are running financial or scientific applications, where the lack of ECC memory on the Z is going to be a problem. I just struggle to see who, exactly, the card was aimed at.

You're right that gamers would have to be idiots to buy one, though. Whoever the market is, it certainly isn't gamers.
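
For the curious, that "something like 8 years" figure is easy to sanity-check. Here's a rough back-of-the-envelope sketch using the ballpark prices and wattages from the post above (approximate figures, not official specs):

code:
// Payback math: Titan Z vs. two Titan Blacks, using the rough numbers above.
#include <iostream>

int main() {
    const double price_titan_z    = 3000.0; // USD, approximate
    const double price_two_blacks = 2000.0; // 2 x ~$1000
    const double watts_titan_z    = 375.0;  // board power, approximate
    const double watts_two_blacks = 500.0;  // 2 x ~250W
    const double usd_per_kwh      = 0.11;   // the 11c/kWh assumed above

    const double price_delta      = price_titan_z - price_two_blacks;    // $1000 extra for the Z
    const double watts_saved      = watts_two_blacks - watts_titan_z;    // 125W saved by the Z
    const double kwh_per_year     = watts_saved / 1000.0 * 24.0 * 365.0; // ~1095 kWh
    const double savings_per_year = kwh_per_year * usd_per_kwh;          // ~$120/year

    std::cout << "Years of 24/7 load to break even: "
              << price_delta / savings_per_year << "\n";                 // ~8.3 years
    return 0;
}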

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

DrDork posted:

Also, a lot of the people who would care about that much compute power are running financial or scientific applications, where the lack of ECC memory on the Z is going to be a problem. I just struggle to see who, exactly, the card was aimed at.
CUDA developers. CUDA won in that market, and you don't need ECC if you're just building apps and testing perf before you deploy on a Tesla cluster.

Arzachel
May 12, 2012

Professor Science posted:

CUDA developers. CUDA won in that market, and you don't need ECC if you're just building apps and testing perf before you deploy on a Tesla cluster.

Why wouldn't you go for one or two Titans/Blacks then?

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

Arzachel posted:

Why wouldn't you go for one or two Titans/Blacks then?

You can run quad SLI with only two PCIe slots.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!
It would make slightly more sense if it actually offered the performance of 2x Titan Blacks, the way the 295x2 offers the performance of 2x 290Xs, but it's significantly slower. I'd think anyone willing to pay the large price premium for the Z would rather shell out the cash for a quad Titan Black setup.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Don Lapre posted:

You can run quad SLI with only two PCIe slots.
But with the Z taking 3 slots (so blocking at least 2 PCIe slots), unless you are custom-rolling both your case and motherboard, you're not really winning the density race compared to 2x Blacks, either.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Don Lapre posted:

You can run quad SLI with only two PCIe slots.

Titan Z is literally a 3-slot backplate design, so it saves no space

It's a product with no purpose

Proud Christian Mom
Dec 20, 2006
READING COMPREHENSION IS HARD
It takes money from people who have far too much of it for their own good. It's probably the best purpose possible.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

go3 posted:

It takes money from people who have far too much of it for their own good. It's probably the best purpose possible.
Maybe they can sell some to nVidia fan-boy buttcoin miners (do those even exist?), who will get around the 3-slot issue by simply using risers and laying them all on poorly built hanging racks?

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

HalloKitty posted:

Titan Z is literally a 3-slot backplate design, so it saves no space

It's a product with no purpose

2 of them would use 6 slots instead of 8.

You could run quad SLI on this board, for example



It's certainly a niche case, but there is a case for it.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Don Lapre posted:

2 of them would use 6 slots instead of 8.

You could run quad SLI on this board, for example
Sure, but there are far more boards with four 16x slots spaced to allow four dual-slot GPUs, so you'd be back to quad Blacks or dual Z's again. The board you have there is fairly unusual, and I can't see anyone thinking that it makes more sense to spend $6000 on dual Z's to fit their odd motherboard instead of $4000 on quad Blacks and pick up a new $150 motherboard if needed. I don't think there's actually a motherboard out there that would let you run 3 Z's, which is the only possible way it would be better than a similar set of Blacks.

wormil
Sep 12, 2002

Hulk will smoke you!
Been doing a little video editing, and my system (i3-2100, 8GB RAM) handles the editing just fine until I start multi-tasking (mostly online tutorials); then things get a little slow. I'm thinking a low-end graphics card might offload enough from the CPU and system memory to make things smooth again. My editing program (Corel) supports CUDA. I'm not doing any gaming and not worried about rendering speed. I'd prefer to go fanless, maybe a GT610 or similar. I know the basics of CUDA, just not how it performs in real life. Thoughts or recommendations?

Shaocaholica
Oct 29, 2002

Fig. 5E
Getting any CUDA/OpenCL-capable GPU for video editing isn't going to be a silver bullet for your performance issues, especially at the super low end like the 610. You could just buy something from a local store to test, and if it doesn't help you can always return it.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
I think the GT640 is the lowest end card that supports CUDA.

BurritoJustice
Oct 9, 2012

DrDork posted:

Sure, but there are far more boards with four 16x slots spaced to allow four dual-slot GPUs, so you'd be back to quad Blacks or dual Z's again. The board you have there is fairly unusual, and I can't see anyone thinking that it makes more sense to spend $6000 on dual Z's to fit their odd motherboard instead of $4000 on quad Blacks and pick up a new $150 motherboard if needed. I don't think there's actually a motherboard out there that would let you run 3 Z's, which is the only possible way it would be better than a similar set of Blacks.

That board isn't unusual; probably 90%+ of ATX boards use a similar slot/16x/slot/slot/16x/slot/slot layout to allow two dual-slot cards with spacing. From memory, the only motherboards that can run three triple-slot cards are the EVGA Z87 and Z97 Classifieds, due to their weird spacing.

wormil
Sep 12, 2002

Hulk will smoke you!

Shaocaholica posted:

Getting any CUDA/OpenCL-capable GPU for video editing isn't going to be a silver bullet for your performance issues, especially at the super low end like the 610. You could just buy something from a local store to test, and if it doesn't help you can always return it.

Sounds like a good plan. Just hoping that moving video to a card, plus whatever little bump I get from CUDA, might be enough to smooth things out. It shouldn't take much; it's only a very slight issue, more of a hesitation when switching between the browser and the editing program. Really, I'm kind of impressed that my system does as well as it does--I built it for basic general-purpose stuff.

Re: the 640, I'll have to check the specs again, but I thought the 610 supported CUDA. Thanks, all.

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

Arzachel posted:

Why wouldn't you go for one or two Titans/Blacks then?
Having two GPUs behind a PLX switch noticeably improves CUDA P2P behavior.
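
In case anyone wants to see what that buys a developer, here's a minimal sketch (my own illustration, not from the post) using the CUDA runtime API to probe and enable peer-to-peer access between two GPUs and copy a buffer directly between them. On a dual-GPU board, or two cards sharing a PLX switch, cudaDeviceCanAccessPeer should report true and the copy stays off the host path.

code:
// Minimal P2P sketch (compile with nvcc). Error handling trimmed for brevity.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count < 2) { std::printf("Need at least two GPUs for P2P.\n"); return 0; }

    // Check whether each device can directly access the other's memory.
    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    cudaDeviceCanAccessPeer(&can10, 1, 0);
    std::printf("P2P 0->1: %d, 1->0: %d\n", can01, can10);
    if (!can01 || !can10) return 0;

    // Enable peer access in both directions.
    cudaSetDevice(0); cudaDeviceEnablePeerAccess(1, 0);
    cudaSetDevice(1); cudaDeviceEnablePeerAccess(0, 0);

    // Allocate a buffer on each GPU and copy directly between them.
    const size_t bytes = 64 << 20; // 64 MiB
    void *buf0 = nullptr, *buf1 = nullptr;
    cudaSetDevice(0); cudaMalloc(&buf0, bytes);
    cudaSetDevice(1); cudaMalloc(&buf1, bytes);
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes); // GPU 0 -> GPU 1, no host bounce

    cudaSetDevice(0); cudaFree(buf0);
    cudaSetDevice(1); cudaFree(buf1);
    return 0;
}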

spasticColon
Sep 22, 2004

In loving memory of Donald Pleasance
I'm curious: do any games support or use CUDA or OpenCL for anything? I do know some games use PhysX, but do games use CUDA or OpenCL for anything else, like AI or netcode?

Shaocaholica
Oct 29, 2002

Fig. 5E

spasticColon posted:

I'm curious: do any games support or use CUDA or OpenCL for anything? I do know some games use PhysX, but do games use CUDA or OpenCL for anything else, like AI or netcode?

Pretty sure this wouldn't work, as it would require a context switch, which would kill performance if using one GPU. Not sure how Nvidia/PhysX works on one GPU with respect to context switching or the lack thereof.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
GPU PhysX uses CUDA to do the physics processing, and there is indeed a context switch involved. The context switch's performance impact is so great that you can improve average and minimum framerates more with a low-power dedicated PhysX card than with an SLI pairing.

For OpenCL... eh. Some games use DirectCompute, though - IIRC, Civ 5 uses DirectCompute to unpack compressed textures, rather than putting that on the CPU. E: Oh, Tomb Raider's TressFX does hair calculations through DirectCompute, too.

Factory Factory fucked around with this message at 04:12 on Jun 19, 2014

Zero VGS
Aug 16, 2002
ASK ME ABOUT HOW HUMAN LIVES THAT MADE VIDEO GAME CONTROLLERS ARE WORTH MORE
Lipstick Apathy
I was using EVGA Precision X to limit my framerate for GTX cards... what's the best app to limit framerates for Radeons?

Professor Science
Mar 8, 2006
diplodocus + mortarboard = party

Factory Factory posted:

GPU PhysX uses CUDA to do the physics processing, and there is indeed a context switch involved. The context switch's performance impact is so great that you can improve average and minimum framerates more with a low-power dedicated PhysX card than with an SLI pairing.

For OpenCL... eh. Some games use DirectCompute, though - IIRC, Civ 5 uses DirectCompute to unpack compressed textures, rather than putting that on the CPU. E: Oh, Tomb Raider's TressFX does hair calculations through DirectCompute, too.
AFAIK (and I'm pretty well informed), no game uses OpenCL (at least on Windows, presumably also on Linux; maybe something on Mac, especially via a core system API?), and the only thing besides PhysX that uses CUDA is Just Cause 2, for its water. Lots and lots of stuff uses DirectCompute, although it's usually a fancy rendering pass ("sweet rear end lighting model that can't be implemented as a pixel shader with any sort of efficiency") rather than what you would generally think of as a compute workload.

Endymion FRS MK1
Oct 29, 2011

I don't know what this thing is, and I don't care. I'm just tired of seeing your stupid newbie av from 2011.

Zero VGS posted:

I was using EVGA Precision X to limit my framerate for GTX cards... what's the best app to limit framerates for Radeons?

RadeonPro, probably.

spasticColon
Sep 22, 2004

In loving memory of Donald Pleasance

Factory Factory posted:

GPU PhysX uses CUDA to do the physics processing, and there is indeed a context switch involved. The context switch's performance impact is so great that you can improve average and minimum framerates more with a low-power dedicated PhysX card than with an SLI pairing.

For OpenCL... eh. Some games use DirectCompute, though - IIRC, Civ 5 uses DirectCompute to unpack compressed textures, rather than putting that on the CPU. E: Oh, Tomb Raider's TressFX does hair calculations through DirectCompute, too.

I knew that PhysX uses CUDA in some games, but nothing else does? That seems a bit underwhelming. I tried enabling PhysX in some games on my single 660Ti, but the performance hit was just too much, so I was thinking about getting a dedicated PhysX card--would a cheap GTX750 do the job? And would my motherboard support a dedicated PhysX card even though it doesn't support SLI? It also only supports two cards in x8 mode (PCI-E 2.0 spec), so would that be a bottleneck?

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

spasticColon posted:

I knew that PhysX uses CUDA in some games, but nothing else does? That seems a bit underwhelming. I tried enabling PhysX in some games on my single 660Ti, but the performance hit was just too much, so I was thinking about getting a dedicated PhysX card--would a cheap GTX750 do the job? And would my motherboard support a dedicated PhysX card even though it doesn't support SLI? It also only supports two cards in x8 mode (PCI-E 2.0 spec), so would that be a bottleneck?

You would be better off selling your 660Ti and putting the money you'd spend on a 750Ti toward a better overall card instead.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
To expand on the above: a PhysX coprocessor only helps when you play a PhysX game. Any other game, and you're better off just having a better single card. You have to play a lot of PhysX games and have a high standard for framerates and eye candy to make the coprocessor worthwhile, and even then, it's clearly a luxury purchase.

That said, a 750 or 750 Ti is a good choice for a PhysX coprocessor. It's all the power you need for even the heaviest in-game PhysX, with low power consumption to go with it.

Also, PCIe 2.0 x8 isn't really a bottleneck for anything that isn't a dual-GPU card. *Maybe* you'll see a tiny hit on something at the level of a 290X or 780 Ti/Titan Black.

Wistful of Dollars
Aug 25, 2009

In other odds and ends:

The rumor mill rumors that 800 series cards on 28nm will be arriving in Q4.

Also, AMD's Dick Huddy spoke with PCPer and, amongst other things, talked about Adaptive Sync.

quote:

Huddy discussed the differences, as he sees it, between NVIDIA's G-Sync technology and the AMD option called FreeSync but now officially called Adaptive Sync as part of the DisplayPort 1.2a standard. Beside the obvious difference of added hardware and licensing costs, Adaptive Sync is apparently going to be easier to implement as the maximum and minimum frequencies are actually negotiated by the display and the graphics card when the monitor is plugged in. G-Sync requires a white list in the NVIDIA driver to work today and as long as NVIDIA keeps that list updated, the impact on gamers buying panels should be minimal. But with DP 1.2a and properly implemented Adaptive Sync monitors, once a driver supports the negotiation it doesn't require knowledge about the specific model beforehand.

According to Huddy, the new Adaptive Sync specification will go up to as high as 240 Hz and as low as 9 Hz; these are specifics that before today weren't known. Of course, not every panel (and maybe no panel) will support that extreme of a range for variable frame rate technology, but this leaves a lot of potential for improved panel development in the years to come. More likely you'll see Adaptive Sync ready display listing a range closer to 30-60 Hz or 30-80 Hz initially.

Prototypes of FreeSync monitors will be going out to some media in the September or October time frame, while public availability will likely occur in the January or February window.
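
To make the "negotiated range" idea from that quote concrete, here's a purely illustrative sketch: the struct and function names are made up for this example (real drivers read the supported range from the display's EDID/DisplayID data), but it shows how a frame's scanout interval gets clamped to whatever min/max refresh the panel reports.

code:
// Illustrative only: clamping frame pacing to a negotiated refresh range.
#include <algorithm>
#include <cstdio>

struct AdaptiveSyncRange {
    double min_hz; // lowest refresh rate the panel reports
    double max_hz; // highest refresh rate the panel reports
};

// Clamp the interval between scanouts to what the panel can actually do.
double scanout_interval_ms(double frame_time_ms, const AdaptiveSyncRange& r) {
    const double fastest_ms = 1000.0 / r.max_hz; // can't refresh sooner than this
    const double slowest_ms = 1000.0 / r.min_hz; // must refresh again by this point
    return std::clamp(frame_time_ms, fastest_ms, slowest_ms);
}

int main() {
    const AdaptiveSyncRange panel{30.0, 80.0}; // a 30-80 Hz panel, per the article
    std::printf("%.2f ms\n", scanout_interval_ms(5.0, panel));  // 12.50 ms: capped at 80 Hz
    std::printf("%.2f ms\n", scanout_interval_ms(20.0, panel)); // 20.00 ms: within range
    std::printf("%.2f ms\n", scanout_interval_ms(50.0, panel)); // 33.33 ms: panel refreshes anyway
    return 0;
}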

Rime
Nov 2, 2011

by Games Forum
This 800 series bullshit is why I dropped Nvidia back in the 8800 era. 3 years of rehashing the same drat chip and claiming it's "new". Pfah, a pox on both their houses.

Saw a translated interview with AMD claiming they're going to focus heavily on VR performance optimizations going forward. Bodes well, along with the HSA stuff they're designing.

Rime fucked around with this message at 22:54 on Jun 19, 2014
