|
I ran into a major issue upgrading to the Windows 10 Anniversary Update and the latest Nvidia drivers for my 1080: during the driver installation, the screen just glitched out into random artifacts. Rebooting didn't help, so I had to restart in safe mode and roll back to the pre-Anniversary build and the previous driver set. DDU didn't help, and Google doesn't seem to show this being a common issue. Kinda scratching my head on this one
|
# ¿ Aug 22, 2016 04:04 |
|
how safe is it to run a GeForce 1080 at 70C (44% GPU fan) for extended periods of time (days to weeks)? i'm getting a second box with a 1080 Ti to be my main PC and am thinking of moving my existing PC into a 29C-ambient store room as a headless hobby TensorFlow machine. wondering how safe it is to let it run full throttle.
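in case anyone else runs a headless box like this, my plan is to just log temps/fan/power with nvidia-smi's query mode (it ships with the driver; the field names are listed under `--help-query-gpu`) and skim the CSV every few days. a minimal sketch, run under tmux or a systemd unit:

```shell
# Sample temperature, fan speed, power draw and SM clock every 60 s,
# appending to a CSV on the headless machine.
nvidia-smi \
  --query-gpu=timestamp,temperature.gpu,fan.speed,power.draw,clocks.sm \
  --format=csv -l 60 >> gpu_health.csv
```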
|
# ¿ Jun 18, 2017 12:38 |
|
Oops wrong thread
|
# ¿ Jun 25, 2017 02:09 |
|
My Gigabyte 1080 Ti Gaming OC hits and sticks to 82C when I game (mid-60s for CUDA workloads). My brother's Gigabyte 1080 does the same. Is that a safe long-term operating temperature for gaming?
|
# ¿ Jun 30, 2017 09:29 |
|
It clocks in at 1922 MHz when gaming. How do I tell if it's throttling?
|
# ¿ Jun 30, 2017 09:51 |
|
I might be blind, but I upgraded last year from a vanilla Dell 24-inch 1080p 60Hz IPS to an Acer 27XBU 1440p 144Hz IPS G-Sync monitor (currently with a 1080 Ti), and while the higher res is great, I can't say the G-Sync has stood out for me. Is there a simple test video/app I can run to see what it actually does? Everything I've played runs at >60fps, so could that be why?
|
# ¿ Aug 8, 2017 07:17 |
|
I've been idly thinking about a 4x 1080 Ti build for a hobby ML project (the benchmarks for the frameworks I use point to linear scaling with the number of GPUs) and had some questions for you guys: 1) is Nvidia likely to come out with an 1180/Ti equivalent in the next half a year? 2) what CPU platform would be a good fit? I'm happy with my current Ryzen 1700, but a quick Google seems to suggest that if I'm getting 4x GPUs, I should be getting an i9 7XXX-series or TR-series chip, because Ryzen and i7 mobos don't seem built for that many GPUs. is that right? 3) i was thinking of the Zotac 1080 Ti Mini from a space standpoint. is there a specific model/make I should be looking at? 4) are there any other misc. considerations I should be aware of? thanks!
|
# ¿ Nov 11, 2017 12:23 |
|
Krailor posted:Another option for a bunch of pcie lanes would be to build an x99 system. It's a little older but plenty of places are still selling stock and it will be much cheaper than either a Threadripper or x299 system. ooh, hadn't thought about the need for blower-style cards. thanks. and in terms of the power supply, is 1000W enough for a TR + 4x 1080 Tis?
|
# ¿ Nov 11, 2017 15:04 |
|
for ML, I've been using nvidia-smi to turn down the power limit on my current 1080 Ti to 180W. i have to check whether my wall circuit supports >1kW safely
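for reference, the sketch of what I run (the supported power-limit range varies by card, so query it first):

```shell
# See the supported power-limit range for the card
nvidia-smi -q -d POWER

# Enable persistence mode so the setting sticks between CUDA jobs
sudo nvidia-smi -pm 1

# Cap board power on GPU 0 to 180 W (must be within the supported range)
sudo nvidia-smi -i 0 -pl 180
```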
|
# ¿ Nov 11, 2017 15:32 |
|
For Nvidia cards, is there a difference between lowering the power limit in watts using nvidia-smi, down-clocking at the frequency level, or "undervolting"? I've been using nvidia-smi to down-"watt" my 1080 Tis from 270W to 200W, but I'm wondering what the other techniques are. Or are they basically doing the same thing?
|
# ¿ Dec 8, 2017 02:56 |
|
LOL at suggesting that Nvidia restrict use of CUDA on their cards. No way this wouldn't piss the poo poo out of the entire ML sector
|
# ¿ Jan 19, 2018 01:17 |
|
i'm lolling at the fact that my home ML server (4x 1080 Ti) which I built 3 months ago would be 2 grand more expensive if i were to build it today
|
# ¿ Jan 20, 2018 02:58 |
|
i've won the cost of it back on kaggle over the same time period, so not really
|
# ¿ Jan 20, 2018 03:05 |
|
i've been playing around with ML frameworks (pytorch mainly at this point) as a hobby over the past year. started by just using a home gaming PC (Ryzen + 1x 1080 Ti). i then started working on Kaggle competitions (an ML competitions platform) and decided to build a full-blown ML machine. it's currently a Threadripper 1950X + 64GB RAM + 4x 1080 Ti. one of the few side benefits of the Bitcoin boom is you can buy nice, cheap aluminium open-air racks that hold multiple GPUs securely. the GPUs run at 70C while it's crunching numbers. and the thing about Kaggle is that while the non-deep-learning competitions are brutal, the image-related competitions are still easy to do well in, especially if you have a professional rack. i wouldn't be surprised if I make 20-30 grand this year in prizes from it as a hobby that i work on over the weekend.
|
# ¿ Jan 20, 2018 03:15 |
|
lol, i don't get the point of this argument. setting aside whether it's technically possible, Paul started the discussion by saying that nvidia isn't going to do it. so it's a mutual jack-off session? please count me out
|
# ¿ Jan 20, 2018 10:05 |
|
Paul MaudDib posted:says you can't deliver a utility that can crop out an arbitrary 15m segment of a 4K netflix show at full resolution for VLC playback, in the next 7x24 hours from this post (given a legit Netflix user/password/yadda yadda). lol, i don't think you realize who's coming out worse in this exchange
|
# ¿ Jan 20, 2018 14:15 |
|
is everything ok? you've been in continuous meltdown mode the past couple weeks with the "spectre is actually good news for Intel and bad for AMD" and now "nvidia DRM = no bitcoins"
|
# ¿ Jan 20, 2018 14:19 |
|
tehinternet posted:poo poo, I argued in favor of gambling in video games in the PUBG thread.
|
# ¿ Jan 20, 2018 14:55 |
|
lol at graphing "key word mentions" one of the few things worse than wall street is techbros playing at being financial analysts
|
# ¿ Jan 20, 2018 16:57 |
|
can't wait for Ampere to have bad yields and extend this shitshow for another 2 quarters
|
# ¿ Jan 23, 2018 02:05 |
|
god help anyone trying to get a card now
|
# ¿ Jan 26, 2018 15:40 |
|
Does it matter if I use HDMI or DP to connect my videocard to my monitor?
|
# ¿ Jan 29, 2018 13:40 |
|
lol if you're not in YOSPOS
|
# ¿ Feb 14, 2018 07:48 |
|
Paul MaudDib posted:that one is even more fabricated and homosexual. Paul MaudDib posted:Yeah, Turing spent his life searching for nonces too.
|
# ¿ Feb 16, 2018 14:35 |
|
It'll be interesting to see the ML benchmarks of 1x 2080 Ti versus 2x 1080 Ti given the price points of the two cards and the fact that the former still only has 11GB of memory.
|
# ¿ Aug 21, 2018 03:31 |
|
My gaming desktop with a 1080 Ti has recently been behaving oddly when playing videos (e.g., Netflix in a browser) or in-game: it'll stutter until I move my mouse around, then it'll "fast-forward" through the stuttered chunk and get back in sync. Any idea what might be causing it? I initially thought it was a Chrome bug, but I just started playing Ace Combat 7 and it showed the same behavior, both during cut-scenes ... and then in the middle of gameplay...
|
# ¿ Feb 1, 2019 02:21 |
|
It's not just a browser thing - it's impacting my gaming...
|
# ¿ Feb 1, 2019 09:11 |
|
Dafaq... I seem to have narrowed it down to a new "gigabit" wifi USB adapter ... unplugging it seems to have gotten rid of all the stuttering.
|
# ¿ Feb 1, 2019 10:34 |
|
The deep learning research papers that Nvidia or Nvidia-adjacent researchers have published don't give the sense that they have an edge over academia or the tech giants, e.g. Google. I wouldn't be surprised if DLSS is a toy application that some Nvidia researcher played with, which doesn't work well in real-life conditions when thrown against a broad set of games, but got seized upon by suits in the company out of desperation.
|
# ¿ Feb 14, 2019 13:05 |
|
I was looking at 2080 Tis as an upgrade for my home dev machine and ended up pulling the trigger on an RTX Titan: the 24GB of RAM and full-throughput FP16 support were too hard to resist.
|
# ¿ Apr 29, 2019 01:14 |
|
Yeah, the 2080 Ti runs neck and neck with the RTX Titan, but I need all the RAM I can get. I've been using 4x 1080 Tis till now, but the setup was finicky both physically and for development, so I'm hoping to move to 2x RTX Titans in a normal form-factor case.
|
# ¿ Apr 29, 2019 01:25 |
|
Nvidia's probably more worried about outfits like Amazon, Facebook, Google rolling out their own TPU-equivalent hardware for inference at scale. They've been trying to position their data center & visualization segments as growth areas as their gaming segment matures.
|
# ¿ May 1, 2019 05:15 |
|
hell yeah, got my RTX Titan in today and ran some TensorFlow code against it. Nvidia publishes a container image that lets you set an environment variable and have all TensorFlow code automagically use FP16 (Automatic Mixed Precision). Pretty gratifying to be able to immediately quadruple batch sizes (versus a 1080 Ti) with a one-line code change.
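the one-line change really is just an environment variable inside the NGC TensorFlow container (19.03+, if i remember right), set before TF initializes:

```python
import os

# Inside NVIDIA's NGC TensorFlow containers, this flag turns on the
# automatic mixed precision graph rewrite: matmuls/convs run in FP16 on
# Tensor Cores while numerically sensitive ops (and master weights) stay
# FP32, with loss scaling handled for you. Must be set before TensorFlow
# builds its graph.
os.environ["TF_ENABLE_AUTO_MIXED_PRECISION"] = "1"

# ...then build and train the model exactly as before.
```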
|
# ¿ May 2, 2019 08:12 |
|
yikes, obvious thing to do, but clean out the dust from your case vents once in a while. i'd been hitting 80C in Anno 1800 and figured that was just the game, but after checking a couple of things i noticed the entire front vent of my case was covered in dust. cleaning it out got temperatures down by 5C.
|
# ¿ May 5, 2019 10:34 |
|
By 2035, we're going to be un-mining carbon in the climate change gulags
|
# ¿ May 31, 2019 15:28 |
|
Setting aside the technical issues, why would you want to take the risk of buying into the Stadia library when Google has a track record of launching half-baked products and giving up on them? Especially when Microsoft and Sony are launching their own equivalent platforms, and they have a history in gaming.
|
# ¿ Jun 7, 2019 23:29 |
|
And i wonder what the market size is for the demographic that's interested in console/PC-level gaming and has good internet access, but isn't willing to get a console.
|
# ¿ Jun 7, 2019 23:31 |
|
Kinda wish Google would sell a cut-down version of their TPUs to light a fire under Nvidia's rear end. Right now, the working model for a lot of people doing ML work is to prototype on 1080/2080 Tis and then scale up to TPUs if necessary, and you really feel the memory pinch on the Tis.
|
# ¿ Jun 30, 2019 06:29 |
|
Malcolm XML posted:Has anyone used a radeon vii for ML work? I have heard that it can more or less do 2080 ti numbers for 2080 prices A colleague ran some tests against a 1080 Ti. It outperformed the Ti on off-the-shelf plaidml Keras pre-trained ImageNet models, even controlling for memory/batch size, but performed significantly worse on an actual work model (an NLP/transformer-derived architecture) on tf-rocm. The takeaway seems to be that the hardware is good, but you're making a bet that people will optimize software against it over time, à la cuDNN, XLA, etc.
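the comparison was nothing fancy: warm up, then average wall-clock time per step. a minimal harness sketch of the approach, where `run_batch` is a placeholder standing in for the actual Keras/plaidml or tf-rocm model step:

```python
import time

def avg_step_time(run_batch, n_warmup=5, n_runs=20):
    """Average wall-clock seconds per call, after warm-up.

    run_batch: zero-arg callable that executes one inference/training
    step and blocks until the device has finished (e.g. by fetching the
    output back to host memory).
    """
    # Warm-up runs absorb one-time costs (kernel compilation, caches).
    for _ in range(n_warmup):
        run_batch()
    start = time.perf_counter()
    for _ in range(n_runs):
        run_batch()
    return (time.perf_counter() - start) / n_runs
```

one gotcha with async GPU runtimes: make sure each call actually synchronizes, otherwise you're timing queue submission rather than execution.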
|
# ¿ Jul 2, 2019 06:32 |
|
AMD isn't Nvidia's main threat right now; it's Google. They'd blow up a big chunk of Nvidia's current market if they ever decided to sell their TPU hardware.
|
# ¿ Jul 16, 2019 05:29 |