|
Mogomra posted:Good point. I just wasn't sure if 4GB was getting into the "ridiculous" territory. I plan on keeping it as long as humanly possible, so I'll go with that. If you go the high-end nVidia route you end up with a 3GB card; if you go AMD you get a 4GB card--if you don't think game companies are going to keep that in mind for a while still, I don't know what to tell you. Yes, 6GB and 8GB cards, respectively, do exist (sorta), but the price delta on them is stupid for something that "might" help your 1080p performance a bit some ways down the road. The far better option is to simply not worry about it, get a decent card now, and in another year or three, when games start bumping up against the 3/4GB barrier (outside of stupidly un-optimized games like Watch Dogs), you can sell your card, combine it with the money you didn't spend on that 6/8GB card today, and get a newer, better card that will out-perform your 6GB 780 Ti or your 8GB 290X anyhow. This applies equally to you getting a 2GB 770 for sub-$300 instead of a 4GB 770 for $375+ (if you're not married to nVidia, I'll take the opportunity to remind you that the sub-$350 range right now is littered with quality used 290s and 290Xs, and you really have absolutely no excuse for not getting one of them instead).
|
# ? Jun 17, 2014 06:12 |
|
Shaocaholica posted:Yeah but those aren't hard numbers for the life of the console and can change for future titles. I think I heard something about them changing lately in the news too. Changing to give the game more memory. Don't remember which console and if it was something I read or heard from an industry friend. It seems ridiculous that on the PS4, all the RAM isn't available. It has a secondary processor for background tasks with 256MiB RAM attached to it - the same amount as all of the PS3 system RAM - and yet it's still reserving an enormous amount of the main RAM too. Giving the machines more resources has made the OS bloated as hell, as far as I can tell. I'd love to know what that background chip is responsible for, and why on earth all the RAM is not available. If it's because of some bullshit feature like the streaming and sharing, to hell with that.
|
# ? Jun 17, 2014 17:08 |
|
HalloKitty posted:It seems ridiculous that on the PS4, all the RAM isn't available. It has a secondary processor for background tasks with 256MiB RAM attached to it - the same amount as all of the PS3 system RAM - and yet it's still reserving an enormous amount of the main RAM too. This is pretty typical. Lots of memory is reserved at the beginning of a console's life while they decide everything they want to do. Over time, more and more resources are freed up. They can't take resources away in the future, so it's safer to reserve a large amount at the beginning.
|
# ? Jun 17, 2014 17:14 |
|
HalloKitty posted:It seems ridiculous that on the PS4, all the RAM isn't available. It has a secondary processor for background tasks with 256MiB RAM attached to it - the same amount as all of the PS3 system RAM - and yet it's still reserving an enormous amount of the main RAM too. Yeah, this goes with a popular console concept of "Don't gently caress with the baseline." Unless you are braindead and change up the hardware *cough*, you have to be very conservative at the start, otherwise you might find out that you needed 512 MB when you only had 256, and welp, you can't take it back now. Right now, what, 2.5 gigs is on lockdown? Devs aren't being too majorly constrained by 5.5 GB, so likely we'll be seeing small bumps here and there. My estimate is that probably 1 - 1.5 gigs will be on lockdown by the end of the lifespan.
|
# ? Jun 17, 2014 17:51 |
|
Well, the whole point is the sense of a seamless transition from the game to the dashboard. Not every service needs to be actually ready-to-go when switching to the dash. As long as you can get to the dash without a hiccup and navigate around a bit, I think that's pretty good from a user experience perspective. I don't think too many people will mind if certain edge services take a bit to start. And then you can start swapping the game out to disk to free up space for OS tasks. No way that should take 3GB or even 1GB given optimizations and whatnot. I have faith in smart people figuring that out.
Shaocaholica fucked around with this message at 18:12 on Jun 17, 2014 |
# ? Jun 17, 2014 18:09 |
|
deimos posted:Four R9 cores would probably melt an H220 (and I mean the metal). Well, it's probably for the best anyways. Nobody should be spending that much on a GPU.
|
# ? Jun 17, 2014 22:02 |
|
Rime posted:Well, it's probably for the best anyways. Nobody should be spending that much on a GPU. Titan Zee would like a word.
|
# ? Jun 17, 2014 22:14 |
|
wccftech is tossing another rumor onto the pile, saying the GTX 800 series will be coming by the end of the year, based on Maxwell, on a 28nm process.
|
# ? Jun 17, 2014 22:16 |
|
Shaocaholica posted:Titan Zee would like a word.
|
# ? Jun 18, 2014 02:29 |
|
You could at least rationalize a 295x2 by virtue of it lasting well over 3 years at the top of the food chain, and you'd spend at least as much buying the new "top of the line" card every year if you're that kind of gearhead. The Nvidia offerings are just demonstrating contempt for the consumer.
|
# ? Jun 18, 2014 04:35 |
|
The Nvidia Titan offerings are compute cards. Gamers only buy them because the average customer is an idiot.
|
# ? Jun 18, 2014 05:23 |
|
Rime posted:You could at least rationalize a 295x2 by virtue of it lasting well over 3 years at the top of the food chain, and you'd spend at least as much buying the new "top of the line" card every year if you're that kind of gearhead. The Nvidia offerings are just demonstrating contempt for the consumer. I'd rather spend $1500 over 3 years than all at once for something that might last 3 years. Hell, you can get 2x 290X for waaaay less than $1500.
|
# ? Jun 18, 2014 05:45 |
|
The Lord Bude posted:The Nvidia Titan offerings are compute cards. Gamers only buy them because the average customer is an idiot. You're right that gamers would have to be idiots to buy one, though. Whoever the market is, it certainly isn't gamers.
|
# ? Jun 18, 2014 06:09 |
|
DrDork posted:Also, a lot of the people who would care about such high compute power are going to be financial or scientific applications, where the lack of ECC memory on the Z is going to be a problem. I just struggle to see who, exactly, the card was aimed at.
|
# ? Jun 18, 2014 06:56 |
|
Professor Science posted:CUDA developers. CUDA won in that market, and you don't need ECC if you're just building apps and testing perf before you deploy on a Tesla cluster. Why wouldn't you go for one or two Titans/Blacks then?
|
# ? Jun 18, 2014 13:23 |
|
Arzachel posted:Why wouldn't you go for one or two Titans/Blacks then? You can run quad SLI with only 2 pci-e ports
|
# ? Jun 18, 2014 14:53 |
|
It would make slightly more sense if it actually offered the performance of 2x Titan Blacks, the way the 295x2 offers the performance of 2x 290Xs, but it's significantly slower. I'd think anyone willing to pay the large price premium for the Z would rather shell out the cash for a quad Titan Black setup.
|
# ? Jun 18, 2014 16:18 |
|
Don Lapre posted:You can run quad SLI with only 2 pci-e ports
|
# ? Jun 18, 2014 17:39 |
|
Don Lapre posted:You can run quad SLI with only 2 pci-e ports Titan Z is literally a 3-slot backplate design, so it saves no space. It's a product with no purpose.
|
# ? Jun 18, 2014 17:43 |
|
It takes money from people who have far too much for their own good. It's probably the best purpose possible.
|
# ? Jun 18, 2014 17:45 |
|
go3 posted:It takes money from people that have far too much for their own good. Its probably the best purpose possible.
|
# ? Jun 18, 2014 17:57 |
|
HalloKitty posted:Titan Z is literally a 3-slot backplate design, so it saves no space 2 of them would use 6 slots instead of 8. You could run quad SLI on this board, for example. It's certainly a niche case, but there is a case for it.
|
# ? Jun 18, 2014 18:24 |
|
Don Lapre posted:2 of them would use 6 slots instead of 8.
|
# ? Jun 18, 2014 19:00 |
|
Been doing a little video editing, and my system (i3-2100, 8GB RAM) handles the editing just fine until I start multi-tasking (mostly online tutorials); then things get a little slow. I'm thinking a low-end graphics card might offload enough from the CPU and system memory to make things smooth again. My editing program (Corel) supports CUDA. Not doing any gaming and not worried about rendering speed. I'd prefer to go fanless, maybe like a GT610 or similar. I know the basics of CUDA, just not how it performs in real life. Thoughts or recommendations?
|
# ? Jun 18, 2014 20:30 |
|
Getting any CUDA/OpenCL-capable GPU for video editing isn't going to be a silver bullet for your performance issues, especially at the super low end like the 610. You could just buy something from a local store to test, and if it doesn't help you can always return it.
|
# ? Jun 18, 2014 20:40 |
|
I think the GT640 is the lowest end card that supports CUDA.
|
# ? Jun 18, 2014 20:41 |
|
DrDork posted:Sure, but there are far more boards with four 16x slots spaced to allow four dual-slot GPUs, so you'd be back to quad Blacks or dual Z's again. The board you have there is fairly unusual, and I can't see anyone thinking that it makes more sense to spend $6000 on dual Z's to fit their odd motherboard instead of $4000 on quad Blacks and pick up a new $150 motherboard if needed. I don't think there's actually a motherboard out there that would let you run 3 Z's, which is the only possible way it would be better than a similar set of Blacks. That board isn't unusual; probably 90%+ of ATX boards use a similar slot/16x/slot/slot/16x/slot/slot layout to allow two two-slot cards with spacing. From memory, the only motherboards that can run three triple-slot cards are the EVGA Z87 and Z97 Classifieds, due to their weird spacing.
|
# ? Jun 18, 2014 22:32 |
|
Shaocaholica posted:Getting any CUDA/OpenCL-capable GPU for video editing isn't going to be a silver bullet for your performance issues, especially at the super low end like the 610. You could just buy something from a local store to test, and if it doesn't help you can always return it. Sounds like a good plan. Just hoping that moving video to a card, plus whatever little bump I get from CUDA, might be enough to smooth things out. Shouldn't take much; it's only a very slight issue, more of a hesitation when switching between the browser and editing program. Really I'm kind of impressed that my system does as well as it does, since I built it for basic general-purpose stuff. Re: 640, I'll have to check the specs again, but I thought the 610 supported CUDA. Thanks all.
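If you want to confirm for yourself what CUDA actually sees on whatever card you end up with, a tiny standalone check using the stock CUDA runtime calls (cudaGetDeviceCount / cudaGetDeviceProperties) will tell you. This is just a generic sketch, nothing Corel-specific:
code:
#include <cstdio>
#include <cuda_runtime.h>

// List every CUDA-capable device the runtime can see, with its compute
// capability and memory - handy for checking whether a low-end card like
// a GT 610/640 is actually visible to CUDA-accelerated apps.
int main()
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        printf("No CUDA-capable device found (%s)\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, compute capability %d.%d, %zu MB\n",
               i, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024 * 1024));
    }
    return 0;
}
Build it with nvcc and run it; if the card shows up here, the driver side is fine and any remaining slowness is down to the app itself.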
|
# ? Jun 18, 2014 23:27 |
|
Arzachel posted:Why wouldn't you go for one or two Titans/Blacks then?
|
# ? Jun 19, 2014 02:26 |
|
I'm curious: do any games support or use CUDA or OpenCL for anything? I know some games use PhysX, but do games use CUDA or OpenCL for anything else, like the AI or the netcode?
|
# ? Jun 19, 2014 03:20 |
|
spasticColon posted:I'm curious: do any games support or use CUDA or OpenCL for anything? I know some games use PhysX, but do games use CUDA or OpenCL for anything else, like the AI or the netcode? Pretty sure this wouldn't work, as it would require a context switch, which would kill performance if using 1 GPU. Not sure how Nvidia/PhysX works on 1 GPU with respect to context switching or lack thereof.
|
# ? Jun 19, 2014 03:31 |
|
GPU PhysX uses CUDA to do the physics processing, and there is indeed a context switch involved. The context switch's performance impact is so great that you can improve average and minimum framerates more with a low-power dedicated PhysX card than with an SLI pairing. For OpenCL... eh. Some games use DirectCompute, though - IIRC, Civ 5 uses DirectCompute to unpack compressed textures, rather than putting that on the CPU. E: Oh, Tomb Raider's TressFX does hair calculations through DirectCompute, too. Factory Factory fucked around with this message at 04:12 on Jun 19, 2014 |
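To make that concrete, here's roughly the kind of data-parallel job GPU PhysX hands to CUDA each frame - a toy particle-integration kernel with made-up names, nowhere near real PhysX code, just to show the shape of it:
code:
#include <cuda_runtime.h>

// Toy physics step: each thread advances one particle by one timestep.
// Real PhysX kernels (cloth, debris, fluid) are far more involved, but the
// "one thread per object" structure is the same idea.
__global__ void integrateParticles(float3 *pos, float3 *vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    vel[i].y += -9.81f * dt;      // gravity only, to keep the sketch short
    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

void stepSimulation(float3 *d_pos, float3 *d_vel, int n, float dt)
{
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    integrateParticles<<<blocks, threads>>>(d_pos, d_vel, n, dt);
    cudaDeviceSynchronize();  // wait for the compute work; in a game this
                              // shares the GPU with rendering every frame,
                              // which is where the context-switch cost bites
}
The kernel itself is trivial; the expensive part on a single card is flipping the GPU between its graphics context and a compute context like this every frame, which is exactly why a cheap second card dedicated to PhysX can lift minimum framerates.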
# ? Jun 19, 2014 04:08 |
|
I was using EVGA Precision X to limit my framerate for GTX cards... what's the best app to limit framerates for Radeons?
|
# ? Jun 19, 2014 05:12 |
|
Factory Factory posted:GPU PhysX uses CUDA to do the physics processing, and there is indeed a context switch involved. The context switch's performance impact is so great that you can improve average and minimum framerates more with a low-power dedicated PhysX card than with an SLI pairing.
|
# ? Jun 19, 2014 05:24 |
|
Zero VGS posted:I was using EVGA Precision X to limit my framerate for GTX cards... what's the best app to limit framerates for Radeons? RadeonPro, probably.
|
# ? Jun 19, 2014 07:00 |
|
Factory Factory posted:GPU PhysX uses CUDA to do the physics processing, and there is indeed a context switch involved. The context switch's performance impact is so great that you can improve average and minimum framerates more with a low-power dedicated PhysX card than with an SLI pairing. I knew that PhysX uses CUDA in some games, but nothing else does? That seems a bit underwhelming. I tried enabling PhysX in some games on my single 660 Ti, but the performance hit was just too much, so I was thinking about getting a dedicated PhysX card. Would a cheap GTX 750 do the job? And would my motherboard support a dedicated PhysX card even though it doesn't support SLI? It also only supports two cards in x8 mode (PCI-E 2.0 spec), so would that be a bottleneck?
|
# ? Jun 19, 2014 17:32 |
|
spasticColon posted:I knew that PhysX uses CUDA in some games, but nothing else does? That seems a bit underwhelming. I tried enabling PhysX in some games on my single 660 Ti, but the performance hit was just too much, so I was thinking about getting a dedicated PhysX card. Would a cheap GTX 750 do the job? And would my motherboard support a dedicated PhysX card even though it doesn't support SLI? It also only supports two cards in x8 mode (PCI-E 2.0 spec), so would that be a bottleneck? You would be better off selling your 660 Ti and putting the money you'd spend on a 750 Ti toward a better overall card.
|
# ? Jun 19, 2014 17:36 |
|
To expand on the above: a PhysX coprocessor only helps when you play a PhysX game. Any other game, and you're better off just having a better single card. You have to play a lot of PhysX games and have a high standard for framerates and eye candy to make the coprocessor worthwhile, and even then, it's clearly a luxury purchase. That said, a 750 or 750 Ti is a good choice for a PhysX coprocessor. It's all the power needed for even the heaviest of game PhysX, with low power consumption to go with it. Also, PCIe 2.0 x8 isn't really a bottleneck for anything that isn't a dual-GPU card. *Maybe* you'll get a tiny hit on something on the level of a 290X or 780 Ti/Titan Black.
|
# ? Jun 19, 2014 18:04 |
|
In other odds and ends: the rumor mill rumors that 800 series cards on 28nm will be arriving in Q4. Also, AMD's Dick Huddy spoke with PcPer and, amongst other things, talked about Adaptive Sync. quote:Huddy discussed the differences, as he sees it, between NVIDIA's G-Sync technology and the AMD option called FreeSync but now officially called Adaptive Sync as part of the DisplayPort 1.2a standard. Beside the obvious difference of added hardware and licensing costs, Adaptive Sync is apparently going to be easier to implement as the maximum and minimum frequencies are actually negotiated by the display and the graphics card when the monitor is plugged in. G-Sync requires a white list in the NVIDIA driver to work today and as long as NVIDIA keeps that list updated, the impact on gamers buying panels should be minimal. But with DP 1.2a and properly implemented Adaptive Sync monitors, once a driver supports the negotiation it doesn't require knowledge about the specific model beforehand.
|
# ? Jun 19, 2014 21:30 |
|
This 800 series bullshit is why I dropped Nvidia back in the 8800 era. 3 years of rehashing the same drat chip and claiming it's "new". Pfah, a pox on both their houses. Saw a translated interview with AMD claiming that they are going to be focusing heavily on VR performance optimizations moving forwards. Bodes well, along with the HSA stuff they are designing. Rime fucked around with this message at 22:54 on Jun 19, 2014 |
# ? Jun 19, 2014 22:52 |