shrike82
Jun 11, 2005

I ran into a major issue upgrading to the Windows 10 Anniversary Update and the latest Nvidia drivers for my 1080: during the driver installation, the screen just glitched out to random artifacts. Rebooting didn't help, so I had to boot into safe mode and restore to the pre-Anniversary, pre-latest-driver state.

DDU didn't help, and Googling doesn't suggest this is a common issue.

Kinda scratching my head on this

shrike82
Jun 11, 2005

how safe is it to run a GeForce 1080 at 70C (44% GPU fan) for extended periods of time (days to weeks)?

i'm getting a second box with a 1080 Ti to be my main PC and am thinking of moving my existing PC into a 29C-ambient store room as a headless hobby TensorFlow machine.
wondering how safe it is to leave it running at full throttle.

shrike82
Jun 11, 2005

Oops wrong thread

shrike82
Jun 11, 2005

My Gigabyte 1080 Ti Gaming OC hits and sticks to 82C when I game (mid-60s for CUDA workloads). My brother's Gigabyte 1080 does the same.

Is that a safe long-term operating temperature for gaming?

shrike82
Jun 11, 2005

It clocks in at 1922MHz when gaming. How do I tell if it's throttling?
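
e: for anyone else wondering, here's roughly how I've been checking -- a quick sketch that just polls nvidia-smi while a game runs. The query fields are standard nvidia-smi ones; the ~84C Pascal thermal limit mentioned in the comments is my assumption, not something I've confirmed.

code:
import subprocess, time

# Poll SM clock, temperature, power, and utilization once a second while gaming.
# If clocks.sm sags while the temperature sits near the card's thermal limit
# (roughly 84C on Pascal -- my assumption), GPU Boost is thermal-throttling.
# `nvidia-smi -q -d PERFORMANCE` also lists the active throttle reasons.
QUERY = "clocks.sm,temperature.gpu,power.draw,utilization.gpu"

while True:
    row = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(row)  # e.g. "1922 MHz, 82, 245.30 W, 99 %"
    time.sleep(1)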

shrike82
Jun 11, 2005

I might be blind, but I upgraded from a vanilla Dell 24 1080p 60Hz IPS to an Acer 27XBU 1440p 144Hz IPS G-Sync monitor last year (currently driven by a 1080 Ti), and while the higher res is great, I can't say the G-Sync has stood out for me.

Is there a simple test video/app I can run to see what it actually does?

Everything I've played runs at >60fps, so could that be why?

shrike82
Jun 11, 2005

I've been idly thinking about a 4x 1080 Ti build for a hobby ML project (the benchmarks for the frameworks I use point to linear scaling with the # of GPUs) and had some questions for you guys -
1) is Nvidia likely to come out with an 1180/Ti equivalent in the next half a year?
2) what CPU platform would be a good fit? I'm happy with my current Ryzen 1700, but a quick Google seems to suggest that if I'm getting 4x GPUs, I should be getting an i9 7XXX-series or TR-series chip because the Ryzen and i7 mobos don't seem built for that many GPUs. is that right?
3) i was thinking of the Zotac 1080 Ti Mini from a space standpoint. is there a specific model/make I should be looking at?
4) are there any other misc. considerations I should be aware of?

thanks!

shrike82
Jun 11, 2005

Krailor posted:

Another option for a bunch of PCIe lanes would be to build an X99 system. It's a little older, but plenty of places are still selling stock and it will be much cheaper than either a Threadripper or X299 system.

Also, if you're going to get 4x GPUs you want ones with blower-style coolers; they're made for sitting close together in SLI. The open-air coolers are great if there's space between cards, but they start choking when the cards are right next to each other.

ooh, hadn't thought about the need for blower style. thanks.

and in terms of the power supply, is 1kW enough for a TR + 4x 1080 Tis?
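
e: rough back-of-envelope I did afterwards, using stock board-power numbers -- the per-component wattages below are assumptions rather than measurements:

code:
# Stock board power: ~250W per 1080 Ti, ~180W TDP for a Threadripper 1950X,
# plus ~100W of slack for motherboard/RAM/drives/fans (all assumed figures).
gpus, gpu_w, cpu_w, rest_w = 4, 250, 180, 100

print(gpus * gpu_w + cpu_w + rest_w)   # 1280W at stock limits -> 1kW is not enough
print(gpus * 180 + cpu_w + rest_w)     # 1000W with cards power-limited to 180W -- still tight

so it looks like 1kW only works if I power-limit the cards, and even then there's basically no headroom.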

shrike82
Jun 11, 2005

for ML, I've been using nvidia-smi to turn down the power for my current 1080 Ti to 180W

i have to check whether my wall circuit supports >1kW safely
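
e: for reference, this is more or less what I run after boot -- a minimal sketch; the GPU index and the 180W value are just my setup, and setting the limit needs root:

code:
import subprocess

def set_power_limit(gpu_index: int, watts: int) -> None:
    """Cap a GPU's board power via nvidia-smi (needs root)."""
    # Persistence mode (Linux) keeps the driver loaded so the limit sticks
    # between jobs instead of resetting when the last CUDA client exits.
    subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pm", "1"], check=True)
    subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)], check=True)

if __name__ == "__main__":
    set_power_limit(0, 180)  # my 1080 Ti, down from its stock 250W limit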

shrike82
Jun 11, 2005

For Nvidia cards, is there a difference between lowering the power limit in watts using nvidia-smi, down-clocking at the frequency level, or "undervolting"?

I've been using nvidia-smi to lower the power limit on my 1080 Tis from 270W to 200W, but I'm wondering what the other techniques do. Or are they basically accomplishing the same thing?

shrike82
Jun 11, 2005

LOL at suggesting that Nvidia restrict use of CUDA on their cards

No way this wouldn't piss off the entire ML sector

shrike82
Jun 11, 2005

i'm lolling at the fact that my home ML server (4x 1080 Ti), which I built 3 months ago, would be 2 grand more expensive if i were to build it today

shrike82
Jun 11, 2005

i've won the cost of it back on Kaggle over the same time period, so not really

shrike82
Jun 11, 2005

i've been playing around with ML frameworks (PyTorch mainly at this point) as a hobby over the past year.
started by just using a home gaming PC (Ryzen + 1x 1080 Ti).

i then started working on Kaggle competitions (an ML competitions platform) and decided to build a full-blown ML machine.
It's currently a Threadripper 1950X + 64GB RAM + 4x 1080 Ti. One of the few side benefits of the Bitcoin boom is that you can buy nice, cheap aluminium open-frame racks that hold multiple GPUs securely. the GPUs run at 70C while it's crunching numbers.

and the thing about Kaggle is that while the non-deep-learning competitions are brutal, the image-related competitions are still easy to do well in, especially if you have a professional rig. i wouldn't be surprised if I make 20-30 grand this year in prizes from a hobby i work on over the weekends.

shrike82
Jun 11, 2005

lol, i don't get the point of this argument. setting aside whether it's technically possible, paul started the discussion by saying that nvidia isn't going to do it.

so it's a mutual jack-off session? please count me out

shrike82
Jun 11, 2005

Paul MaudDib posted:

:10bux: says you can't deliver a utility that can crop out an arbitrary 15m segment of a 4K netflix show at full resolution for VLC playback, in the next 7x24 hours from this post (given a legit Netflix user/password/yadda yadda).

:toxx: If you agree, one of us gets banned. Gauntlet is thrown, I'll do it if you will. Yes or no?

lol, i don't think you realize who's coming out worse in this exchange

shrike82
Jun 11, 2005

is everything ok? you've been in continuous meltdown mode for the past couple of weeks, first with "spectre is actually good news for Intel and bad for AMD" and now "nvidia DRM = no bitcoins"

shrike82
Jun 11, 2005

tehinternet posted:

poo poo, I argued in favor of gambling in video games in the PUBG thread.

:dogbutton:

shrike82
Jun 11, 2005

lol at graphing "keyword mentions"

one of the few things worse than Wall Street is techbros playing at being financial analysts

shrike82
Jun 11, 2005

can't wait for Ampere to have bad yields and extend this shitshow for another 2 quarters

shrike82
Jun 11, 2005

god help anyone trying to get a card now

shrike82
Jun 11, 2005

Does it matter if I use HDMI or DP to connect my video card to my monitor?

shrike82
Jun 11, 2005

lol if you're not in YOSPOS

shrike82
Jun 11, 2005

Paul MaudDib posted:

that one is even more fabricated and homosexual.

Paul MaudDib posted:

Yeah, Turing spent his life searching for nonces too.

:yikes:

shrike82
Jun 11, 2005

It'll be interesting to see the ML benchmarks of 1x 2080 Ti versus 2x 1080 Ti given the price points of the two cards and the fact that the former still only has 11GB of memory.

shrike82
Jun 11, 2005

My gaming desktop with a 1080 Ti has recently been behaving oddly when playing videos (e.g., Netflix in the browser) or in-game: it'll stutter until I move my mouse around, and then it'll "fast-forward" through the stuttered chunk and get back in sync.

Any idea what might be causing it? I initially thought it was a Chrome bug, but I just started playing Ace Combat 7 and it showed the same behavior both during cut-scenes ... and then in the middle of gameplay...

shrike82
Jun 11, 2005

It's not just a browser thing - it's impacting my gaming...

shrike82
Jun 11, 2005

Dafaq... I seem to have narrowed it down to a new "gigabit" wifi USB adapter ...
Unplugging it seems to have gotten rid of all the stuttering.

shrike82
Jun 11, 2005

The deep learning research papers that Nvidia or Nvidia-adjacent researchers have published don't give the sense that they have an edge over academia or the tech giants, e.g. Google.

I wouldn't be surprised if DLSS is a toy application that some Nvidia researcher played with, one that doesn't work well in real-life conditions when thrown against a broad set of games but got seized upon by suits at the company out of desperation.

shrike82
Jun 11, 2005

I was looking at 2080 Tis as an upgrade for my home dev machine and ended up pulling the trigger on an RTX Titan - the 24GB of RAM and full-throughput FP16 support were too hard to resist.

shrike82
Jun 11, 2005

Yeah, the 2080 Ti runs neck and neck with the RTX Titan, but I need all the RAM I can get.

I've been using a 4x 1080 Ti setup till now, but it was finicky both physically and for development, so I'm hoping to move to 2x RTX cards in a normal form-factor case.

shrike82
Jun 11, 2005

Nvidia's probably more worried about outfits like Amazon, Facebook, and Google rolling out their own TPU-equivalent hardware for inference at scale.

They've been trying to position their data center & visualization segments as growth areas as their gaming segment matures.

shrike82
Jun 11, 2005

hell yeah, got my RTX Titan in today and ran some TensorFlow code against it


Nvidia published a container image that lets you set an environment variable and have all TensorFlow code automagically use FP16 (Automatic Mixed Precision).
Pretty gratifying to be able to immediately quadruple batch sizes (versus a 1080 Ti) with a one-line code change.
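
e: for the curious, this is about all it takes inside the NGC TensorFlow container -- a minimal sketch, not my actual training script; the model below is a throwaway placeholder, and the env var name is the one from the container docs as I remember them (it has to be set before any graph gets built):

code:
import os

# NVIDIA's automatic mixed precision switch for the NGC TensorFlow (1.x) container:
# set it before TensorFlow builds any graph and the container's graph rewrite takes
# care of casting eligible ops to FP16 and of loss scaling.
os.environ["TF_ENABLE_AUTO_MIXED_PRECISION"] = "1"

import tensorflow as tf

# Throwaway placeholder model -- the point is that nothing below needs to change.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")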

shrike82
Jun 11, 2005

yikes, obvious thing to do, but clean the dust out of your case vents once in a while.
i've been hitting 80C in Anno 1800 and figured that was just how it was, but after checking a couple of things, I noticed that the entire front vent of my computer case was covered in dust. cleaning it out got temperatures down by 5C.

shrike82
Jun 11, 2005

By 2035, we're going to be un-mining carbon in the climate change gulags

shrike82
Jun 11, 2005

Setting aside the technical issues, why would you want to take the risk of buying into the Stadia library when Google has a track record of launching half-baked products and giving up on them?

Especially when you have Microsoft and Sony launching their own equivalent platforms, and they have a history in gaming.

shrike82
Jun 11, 2005

And i wonder what the market size is for the demographic that's interested in console/PC-level gaming, has good internet access, but isn't willing to get a console.

shrike82
Jun 11, 2005

Kinda wish Google would sell a cut-down version of their TPUs to light a fire under Nvidia's rear end.

Right now, the working model for a lot of people doing ML work is to prototype on 1080/2080 Tis and then scale up to TPUs if necessary, and you really feel the memory pinch on the Tis.

shrike82
Jun 11, 2005

Malcolm XML posted:

Has anyone used a Radeon VII for ML work? I have heard that it can more or less do 2080 Ti numbers for 2080 prices

A colleague did some tests against a 1080 Ti.
The Radeon VII outperformed the Ti on off-the-shelf plaidml Keras pre-trained ImageNet models, even controlling for memory/batch size.

But it performed significantly worse on an actual work model (an NLP/transformer-derived architecture) on tf-rocm.

Seems like the takeaway is that the hardware is good, but you're making a bet that people will optimize the software stack against it over time, a la cuDNN, XLA, etc.
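
e: for reference, the plaidml side of that kind of test is roughly this much setup -- a minimal sketch, not his actual script; the backend env var and the plaidml-setup step are the stock plaidml-keras workflow, and the batch size/iteration count are arbitrary:

code:
import os

# Point standalone Keras at the PlaidML backend before importing keras.
# (`pip install plaidml-keras` and run `plaidml-setup` once to pick the GPU.)
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

import time
import numpy as np
import keras
from keras.applications import ResNet50

model = ResNet50(weights="imagenet")                        # off-the-shelf ImageNet model
batch = np.random.rand(32, 224, 224, 3).astype("float32")   # arbitrary batch size

model.predict(batch)                                        # warm-up / kernel compilation
start = time.time()
for _ in range(20):
    model.predict(batch)
print("images/sec:", 20 * 32 / (time.time() - start))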

shrike82
Jun 11, 2005

AMD isn't Nvidia's main threat right now - it's Google. They'd blow up a big chunk of Nvidia's current market base if they ever decided to sell their TPU hardware.
