|
They've got enough money for a 980ti but can't even afford a basic aftermarket CPU HSF like a 212 or something?
|
# ? Dec 17, 2015 19:01 |
|
|
|
Seamonster posted:They've got enough money for a 980ti but can't even afford a basic aftermarket CPU HSF like a 212 or something? This isn't for me, and it's an i5-6500; the stock Intel cooler works perfectly fine. Sadly the Zotac has some rattling fans, but I also have a Windforce 980ti so I used that one instead. Don Lapre fucked around with this message at 19:09 on Dec 17, 2015 |
# ? Dec 17, 2015 19:04 |
|
Don Lapre posted:Air, zotac 980ti Amp! stock cooler Yeah ok this makes a lot more sense lol. Heaven has been that way as long as I've been using it.
|
# ? Dec 17, 2015 19:10 |
|
THE DOG HOUSE posted:Wow 1443 MHz factory settings is the highest I've heard of My factory setting GALAX 980 TI HOF does 1480 stock and can do 1575 on air. It also has a $100 price premium. SlayVus fucked around with this message at 19:27 on Dec 17, 2015 |
# ? Dec 17, 2015 19:20 |
|
xthetenth posted:I'm hoping Freesync gets borderless windowed and crossfire support soon because then it'll be a total replacement or at least close enough. I'll cheerfully accept it being slightly less capable for the price difference between an XR341CK and an X34 though. Aye, me too. I really want to give AMD my money, but being able to run SLI in borderless windowed with gsync is really, really convenient - particularly as a chronic alt-tabber.
|
# ? Dec 17, 2015 19:29 |
|
El Scotch posted:Aye, me too. I really want to give AMD my money, but being able to run SLI in borderless windowed with gsync is really, really convenient - particularly as a chronic alt-tabber. Yeah, I just took a loss sidegrading because I was having an issue with alt-tabbing, so not having borderless is the one real risk I feel I'm taking going with an XR341CK rather than the X34. Them adding low framerate compensation to Freesync with a driver upgrade makes me feel better about it to where I'll take the risk (that and my boss being interested in buying my 34UM95 covering much more of an XR than an X34).
|
# ? Dec 17, 2015 19:52 |
|
Paul MaudDib posted:Yeah at this point there are still definite choices you have to make. AMD has the better Eyefinity mode, NVIDIA has G-sync and DVI outputs. There's a pretty good lock-in right now to one ecosystem or the other. AMD also allows you to passthrough all their cards in VMware while Nvidia only allows that on their Quadro and Tesla cards. For all the dozens of people to whom this matters worldwide.
|
# ? Dec 17, 2015 20:11 |
|
Thanks for making me feel bad about my plain Jane evga acx 980ti that tops out at 1403 or goes down to 1390 when it gets really stressed. I don't have any extra volts on it though.
|
# ? Dec 17, 2015 21:09 |
|
NihilismNow posted:AMD also allows you to passthrough all their cards in VMware while Nvidia only allows that on their Quadro and Tesla cards. Not that hard to trick the Nvidia cards into thinking that they're not actually in a VM. I hear it only takes one or two things.
|
# ? Dec 17, 2015 22:41 |
|
Lol. GeForce Experience -> Twitch, Twitch auto-records and auto-exports to YouTube, imgur takes YouTube videos and converts them to gifs. The times we live in.
|
# ? Dec 17, 2015 23:13 |
|
Why the heck does the nvidia control panel take like a minute to load up every time? I've got windows on my SSD and my computer boots up in literally like 8 seconds, but just the nvidia panel takes forever. Sometimes I have to close it manually and open it again just to get it to stop hanging.
|
# ? Dec 17, 2015 23:19 |
|
cat doter posted:Why the heck does the nvidia control panel take like a minute to load up every time? I've got windows on my SSD and my computer boots up in literally like 8 seconds, but just the nvidia panel takes forever. Sometimes I have to close it manually and open it again just to get it to stop hanging. I don't know man, that and if you try to manually open GFE sometimes it can take an obscene amount of time.
|
# ? Dec 17, 2015 23:27 |
|
Dogen posted:Thanks for making me feel bad about my plain Jane evga acx 980ti that tops out at 1403 or goes down to 1390 when it gets really stressed. I don't have any extra volts on it though. Don't feel bad, people spend like an extra $300-$400 or more to get ASIC qualities above 80. The 72% guarantee ones are $850. A used 80%+ is selling for $1,500 on amazon.
|
# ? Dec 17, 2015 23:52 |
|
cat doter posted:Why the heck does the nvidia control panel take like a minute to load up every time? I've got windows on my SSD and my computer boots up in literally like 8 seconds, but just the nvidia panel takes forever. Sometimes I have to close it manually and open it again just to get it to stop hanging. I don't think SSD even makes a difference. It's awfully slow no matter what.
|
# ? Dec 18, 2015 00:03 |
|
SinineSiil posted:I don't think SSD even makes a difference. It's awfully slow no matter what. Yeah, it's got some weirdnesses. I had a file with a machine-generated file name that was crazy long lying in a folder, and every time I tried to open the per-game settings window it would crash the program, because it seems it scans the whole file system or a significant fraction of it. It took some combing through logs in process explorer to figure that one out, because exception handling is for scrubs. xthetenth fucked around with this message at 00:08 on Dec 18, 2015 |
# ? Dec 18, 2015 00:04 |
|
xthetenth posted:Yeah, it's got some weirdnesses. I had a file with a machine generated file name that was crazy long lying in a folder and every time I tried to open the per-game settings window it would crash the program because it seems it scans the whole file system or a significant fraction. It took some combing through logs in process explorer to figure that one out because exception handling is for scrubs. I had to reinstall it once because it kept crashing while starting up. Awful
|
# ? Dec 18, 2015 00:13 |
|
I'll take a moment to toot AMD's horn and say Radeon Software is incredibly responsive even on old dumb HDDs.
|
# ? Dec 18, 2015 00:18 |
|
It had better. It's hard to gently caress up a Qt-based UI.
|
# ? Dec 18, 2015 00:28 |
The Nvidia control panel opens in 4-5 seconds for me, Geforce Experience in 7 seconds.
|
|
# ? Dec 18, 2015 01:24 |
|
cat doter posted:Why the heck does the nvidia control panel take like a minute to load up every time? I've got windows on my SSD and my computer boots up in literally like 8 seconds, but just the nvidia panel takes forever. Sometimes I have to close it manually and open it again just to get it to stop hanging. gently caress yeah, that thing is jank-tastic. It follows a time-honoured tradition though - all 'driver UI' software that comes with hardware, even decent hardware, runs like utter poo poo. It's generally really ugly too. Mouse software is by far the worst, closely followed by soundcard/audio stuff. The fact that Samsung Magician is probably the least awful example of this tells you everything. Oh, and NZXT CAM, which still runs like crap, but at least looks quite nice. And AMD have just released their Crimson thing, and whilst I have no idea how that runs, at least it looks not as awful as Catalyst did, though that's a loving low bar. AVeryLargeRadish posted:The Nvidia control panel opens in 4-5 seconds for me, Geforce Experience in 7 seconds. My times are around the same. And playing terribly fast & loose with numbers here, that means opening & displaying a basic UI, plus running a few thousand (?) low-level hardware lookups, takes between 20-25 *billion* CPU operations, assuming one core. That is some sloppy poo poo... xiansi fucked around with this message at 01:36 on Dec 18, 2015 |
# ? Dec 18, 2015 01:31 |
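The 20-25 billion figure above is easy to sanity-check. A quick sketch — the ~4.5 GHz boost clock is my assumption, not from the posts; the ~5 s load time is:

```python
# Sanity check of the "20-25 billion CPU operations" figure above:
# cycles available to a single core while the control panel loads.
clock_hz = 4.5e9    # assumed ~4.5 GHz boost clock on one core
load_time_s = 5     # control-panel load time reported in the thread
cycles = clock_hz * load_time_s
print(f"{cycles / 1e9:.1f} billion cycles")  # 22.5 billion cycles
```

So the back-of-envelope math in the post checks out, give or take clock-speed assumptions.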
|
FaustianQ posted:I'll take a moment to toot AMDs horn and say Radeon Software is incredibly responsive even on old dumb HDDs. Whatever the last version of Catalyst Control Center that worked with the 5450 in my HTPC was the absolute worst for this - took over a minute to load sometimes.
|
# ? Dec 18, 2015 02:31 |
|
I really have to give AMD props for the rate of improvement in their drivers over the past year or so. Of course, they had the farthest to go so it's easier to see visible changes, but it was like they were doing gently caress-all for a long time.
|
# ? Dec 18, 2015 02:53 |
|
Here's what I am having a hard time understanding. So I'm running two MSI 970s. Shouldn't there be an "ideal" overclock for those cards since they're all the same? Like shouldn't I just be able to Google the "correct" overclock values and plug them in? Why do I need to experiment, etc., to find the ideal overclock? Is it because of their interaction with other parts of my PC (like eg the CPU, the RAM, etc.) or because each card is a bit different or what? I'm failing at overclocking and I feel like an idiot.
|
# ? Dec 18, 2015 04:42 |
|
beergod posted:Here's what I am having a hard time understanding. So I'm running two MSI 970s. Shouldn't there be an "ideal" overclock for those cards since they're all the same? beergod posted:Like shouldn't I just be able to Google the "correct" overclock values and plug them in? beergod posted:Why do I need to experiment, etc., to find the ideal overclock?
|
# ? Dec 18, 2015 04:48 |
|
beergod posted:Here's what I am having a hard time understanding. So I'm running two MSI 970s. Shouldn't there be an "ideal" overclock for those cards since they're all the same? Like shouldn't I just be able to Google the "correct" overclock values and plug them in? Why do I need to experiment, etc., to find the ideal overclock? Is it because of their interaction with other parts of my PC (like eg the CPU, the RAM, etc.) or because each card is a bit different or what? That would assume every chip is exactly the same. Atoms and the margin of error are way too big in comparison to transistors for that to be remotely true. Some chips are better than others and every one's unique. Basically the limiting factor of how small we can make transistors is the accuracy to which we can carve designs into wafers, and to get that extra performance we take it to the bleeding edge where some of the chips aren't even going to work. Then we pack up all the ones that actually hit a given performance target and ship them up to little boys and girls around the world. xthetenth fucked around with this message at 04:51 on Dec 18, 2015 |
# ? Dec 18, 2015 04:48 |
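The binning story above can be illustrated with a toy simulation — every number here (mean, spread, target clock) is invented for the sketch, not anything from an actual fab:

```python
import random

# Toy illustration of the silicon lottery: each die comes off the wafer
# with its own maximum stable clock, and parts are binned by which speed
# target they actually hit. All numbers below are invented.
random.seed(42)

TARGET_MHZ = 1000  # advertised clock for this hypothetical SKU
dies = [random.gauss(1050, 60) for _ in range(10_000)]  # per-die max stable clock

# Binning: only dies that hit the target ship as this SKU.
shipped = [d for d in dies if d >= TARGET_MHZ]

# Every shipped card runs at 1000 MHz out of the box, but each carries a
# different amount of headroom -- why "identical" cards overclock differently.
headroom = [d - TARGET_MHZ for d in shipped]
print(f"yield: {len(shipped) / len(dies):.0%}")
print(f"headroom range: {min(headroom):.0f}-{max(headroom):.0f} MHz")
```

Two cards from the same box can sit at opposite ends of that headroom range, which is why there's no "correct" overclock to Google.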
|
So what kind of FPS performance gain am I looking at in an "ideal" overclock? 5-10%? What are we using to drive our Predator X34s? My Fallout 4 Ultra FPS is dropping to 30-40 at night and That Is Unacceptable.
|
# ? Dec 18, 2015 05:06 |
|
beergod posted:So what kind of FPS performance gain am I looking at in an "ideal" overclock? 5-10%? From modern Maxwell cards, 20% is a fairly easy target (especially on the 980ti). The Fury series from AMD has sub 5% performance gains from overclocking. The Hawaii (290/390/x) series can get ~10%. Super rough numbers, of course.
|
# ? Dec 18, 2015 06:01 |
|
beergod posted:So what kind of FPS performance gain am I looking at in an "ideal" overclock? 5-10%? As BurritoJustice said, a GeForce 9xx card should easily see 20%-25% improvement, and that's before you start talking silly measures like liquid cooling and whatnot. We are using 980/980Ti's to drive X34's. Not sure what you've got right now, but FO4 is actually a comparatively light game on the GPU for a modern AAA title (though it appears to benefit more from a beefier CPU than many other current-gen games). I've got a single 980 pushing a 1440p monitor, and with everything maxed it's comfortably above 75FPS the vast majority of the time.
|
# ? Dec 18, 2015 06:24 |
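Putting rough numbers on those percentages: in a fully GPU-bound game, frame rate scales at best linearly with core clock, so the estimates below are upper bounds (the 35 fps baseline is the night-time dip mentioned above; real scaling comes in below linear because of memory bandwidth and power limits):

```python
# Best-case FPS estimate for a given core overclock, assuming a fully
# GPU-bound game where frame rate scales linearly with clock. Real games
# scale below this (memory bandwidth, power/thermal limits).
base_fps = 35  # the reported night-time Fallout 4 dip
for oc_gain in (0.05, 0.10, 0.20):
    est = base_fps * (1 + oc_gain)
    print(f"+{oc_gain:.0%} core clock -> at best ~{est:.1f} fps")
```

Even the optimistic 20% case only lifts a 35 fps dip to about 42 fps, which is why an overclock alone won't rescue a 30-40 fps low.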
|
beergod posted:So what kind of FPS performance gain am I looking at in an "ideal" overclock? 5-10%? Note that Fallout 4 does not currently have SLI support, so if you have SLI right now you are only using one card. You can force SLI using some other profiles and instructions around the Internet but that comes with its own side effects.
|
# ? Dec 18, 2015 06:31 |
|
xiansi posted:Mouse software is by far the worst Spoken like someone who has never had a printer.
|
# ? Dec 18, 2015 07:29 |
|
Here's a question I'm wondering about. I can currently get an EVGA 970 FTW+ for about $269, should I go for that instead of, say, a slightly less oc'd 970 from Gigabyte or MSI?
|
# ? Dec 18, 2015 16:57 |
|
real_scud posted:Here's a question I'm wondering about. I can currently get an EVGA 970 FTW+ for about $269, should I go for that instead of, say, a slightly less oc'd 970 from Gigabyte or MSI? That's a good price and I doubt the MSI or Gigabyte models would be any cheaper. I don't think the FTW ever got the crappy cooler.
|
# ? Dec 18, 2015 17:17 |
|
SwissArmyDruid posted:Not that hard to trick the Nvidia cards into thinking that they're not actually in a VM. I hear it only takes one or two things. And I'm one of those dozen people interested in this approach because I'd like to run some CUDA stuff in one VM running Linux and when not in use have the GPU switched to an HTPC VM that runs Windows (DRM BS for TV recording and such that's easier under Windows). Maybe I should just get separate GPUs or eat the cost of a Tesla but buying multiple $1k+ cards sucks for primarily home use.
|
# ? Dec 18, 2015 17:36 |
|
necrobobsledder posted:Not sure which trick(s) that's being used to do it. I'd think the driver would be what's in control of the VM detection logic and they could just patch it out easily otherwise. Only references I see that get around anything in any manner is to use nVidia GRID vGPU under ESXi and that's not what people are looking for normally I'd imagine. Things like changing MAC address to another vendor's range and whatnot seem like it wouldn't work since not every VM will be virtual network connected I'd think. Otherwise, not letting VMware Tools run or changing around PCI addresses for certain devices could very well cause bugs in other things that are not very nice either. I've heard it's possible, I've not yet done it myself. 2016 is the year I build a new box (Skull Canyon if it's got iGPU, Skylake if not, Arctic Islands either way) from scratch and move over fully to Linux and use hardware passthrough to VM Windows into its own little box where it can run my one or two remaining Windows-only apps and can't hurt us. And games. My chosen solution involving QEMU says that the driver looks for KVM extensions, and then self-terminates if it detects them. QEMU has flags you can use to hide those extensions from the driver, although it seems that they also look for Hyper-V as well. QEMU works around it, but apparently this costs real performance under Windows and may subject you to CLOCK_WATCHDOG_TIMEOUT bluescreens. Quote from Nvidia: "We fixed some hypervisor detection code that was breaking Hyper-V. It's possible that fix may be preventing GeForce cards from working in passthrough, but because it is not officially supported for GeForce cards, this will not be fixed." https://forums.geforce.com/default/...232923/#4232923 SwissArmyDruid fucked around with this message at 18:20 on Dec 18, 2015 |
# ? Dec 18, 2015 18:17 |
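For reference, the flags the post describes look roughly like this on a QEMU command line — a sketch, not a tested recipe; the PCI addresses and the vendor-id string are placeholders:

```shell
# Sketch of the hiding trick described above (QEMU/KVM, not ESXi).
# kvm=off hides the KVM CPUID signature from the guest, which is what
# the GeForce driver checks before refusing to load; hv_vendor_id
# replaces the Hyper-V vendor ID it reportedly also looks at.
# The vfio-pci host addresses below are placeholders for the card and
# its HDMI audio function.
qemu-system-x86_64 \
  -enable-kvm \
  -cpu host,kvm=off,hv_vendor_id=whatever123 \
  -device vfio-pci,host=01:00.0,x-vga=on \
  -device vfio-pci,host=01:00.1
```

Libvirt exposes the same knobs via `<kvm><hidden state='on'/></kvm>` and a Hyper-V `<vendor_id>` element in the domain XML, for those not driving QEMU by hand.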
|
Oh.... KVM basically. I've definitely read about getting nVidia card passthrough to work via KVM but not under ESXi. According to this it's just two flags http://www.se7ensins.com/forums/threads/how-to-setup-a-gaming-virtual-machine-with-gpu-passthrough-qemu-kvm-libvirt-and-vfio.1371980/ but it may not apply under Linux as the guest OS. Makes no sense why Hyper-V matters under KVM as the hypervisor but evidently the VM detection logic triggers under Windows as the guest OS for sure.
|
# ? Dec 18, 2015 18:44 |
|
SwissArmyDruid posted:Not that hard to trick the Nvidia cards into thinking that they're not actually in a VM. I hear it only takes one or two things. Do you have any links? I'd like to have options if I ever decide to upgrade the old AMD card I am using for this. I know it works with the older cards but I haven't found anyone who got it working with a consumer 9 series card.
|
# ? Dec 18, 2015 20:19 |
|
Any bets on how the next release cycle will play out? Is it really just going to be a "new 1080 matches/slightly beats the old 980 Ti"? Or will they lead off with big fully unlocked chips? It seems like a bit of a prisoner's dilemma to me. AMD really needs some high-margin parts and some good press as the market leader, and if they do it then NVIDIA has to have something to at least match them. Big chips on a new process node with oodles of HBM sounds expensive though. Paul MaudDib fucked around with this message at 21:08 on Dec 18, 2015 |
# ? Dec 18, 2015 21:02 |
|
Aren't the low-end cards (370/950) and top end parts (save for Fury) the high margin parts these days?
|
# ? Dec 18, 2015 21:20 |
|
Anime Schoolgirl posted:Aren't the low-end cards (370/950) and top end parts (save for Fury) the high margin parts these days? I'm pretty sure it's all "high margin"... they're all like $30 to make or something, most of the cost being the VRAM I think.
|
# ? Dec 18, 2015 21:27 |
|
|
|
Paul MaudDib posted:Any bets on how the next release cycle will play out? Is it really just going to be a "new 1080 matches/slightly beats the old 980 Ti"? Or will they lead off with big fully unlocked chips? I think AMD is intent on a summer release for Arctic Islands, likely with a Fiji and Tonga shrink (so 470 and 490 cards) while waiting a bit later with a Greenland, Baffin and Ellesmere drop. Nvidia may be holding back just to see AMD's play and try to match it to knock the wind out of Arctic's sails.
|
# ? Dec 18, 2015 21:36 |