  • Locked thread
1gnoirents
Jun 28, 2014

hello :)
If they have outputs, what exactly is the difference? poo poo, if those are any cheaper than their "gaming" counterparts I don't see much reason to get a more expensive card with outputs you won't need.

edit: I suppose it's also protection for the manufacturer; if mining tanks, it's not like they aren't useful cards to sell.

1gnoirents fucked around with this message at 20:51 on Jun 27, 2017


Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Lockback posted:

The 1070s are designed to fail gracefully if you overclock, and they actually package the software to do it with the product. They literally market the ability to OC. Nothing he said is outside the spec of the card.

Underclocked core/overclocked memory is actually how you are supposed to run cards while mining. That is the "being nice to them" approach.

Overclocking without overvolting or increasing the power limit is perfectly safe for the core. It's not as good as reducing the power limit and undervolting, but it's no worse than running stock. And there's actually no way to control voltage on memory at all; it just is what it is.

To be honest, one of the really nice things about Pascal is that it actually doesn't even do well with overvolting. The card is basically power-limited (and then temperature is a second factor); increasing the voltage will actually run you into the power and temp limits even harder and cause throttling. A lot of gamers actually run their cards undervolted because of this. Pascal is also pretty loving smart about stepping down voltage internally: if you pull back on the power limit, it achieves the reduced TDP by not boosting as hard and cutting back on the voltage at the resulting lower clocks.

If you are gaming on Pascal, once you own a 1080/1080 Ti an AIO is one of the best upgrades you can make, because it keeps it nice and cool and lets it throttle up to max boost nonstop.
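A toy model of the behavior described above, with entirely made-up numbers (the constant and voltages below are hypothetical, not measured Pascal values): dynamic power scales roughly as V² × f, so under a fixed board power limit, overvolting costs sustained clocks while undervolting buys them.

```python
# Toy model of power-limited boost behavior: dynamic power ~ k * V^2 * f.
# Under a fixed power limit, the max sustainable clock falls as voltage
# rises. All constants here are invented for illustration.

def sustained_clock(voltage_v, power_limit_w, k=0.09):
    """Max clock (MHz) sustainable at a given voltage under the power limit."""
    return power_limit_w / (k * voltage_v ** 2)

POWER_LIMIT = 180  # watts, hypothetical

stock = sustained_clock(1.05, POWER_LIMIT)
overvolted = sustained_clock(1.10, POWER_LIMIT)
undervolted = sustained_clock(0.95, POWER_LIMIT)

# Overvolting hits the power limit harder and throttles sooner;
# undervolting leaves headroom for higher sustained clocks.
assert overvolted < stock < undervolted
```

This ignores temperature (the second limiter mentioned above) and the discrete voltage/frequency steps a real boost table uses; it only illustrates why more voltage can mean fewer sustained MHz at a fixed power budget.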

Junior Jr.
Oct 4, 2014

by sebmojo
Buglord
If manufacturers like ASUS and Sapphire are going to start taking mining seriously and make cards specifically for miners, why not make powerful single-slot cards, so that encourages more miners to stock up on them?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

1gnoirents posted:

If they have outputs, what exactly is the difference? poo poo, if those are any cheaper than their "gaming" counterparts I don't see much reason to get a more expensive card with outputs you won't need.

edit: I suppose it's also protection for the manufacturer; if mining tanks, it's not like they aren't useful cards to sell.

This is one reason I doubted the whole "mining cards" thing would ever happen. Why would NVIDIA/AMD give up a full-price sale of a second GPU and let you buy a discounted one instead? And mining cards with display outputs? That's literally just a price cut on whatever that card is.

There's a whooooole bunch of magical thinking on the part of AIBs with these mining cards. They make absolutely no sense to miners unless they are nearly functionally equivalent to a regular card, and at that point regular users are going to buy them too. If you gimp them too hard, miners are just going to buy regular cards as long as stock exists and only buy the gimped cards as an absolute last resort.

Comfy Fleece Sweater
Apr 2, 2013

You see, but you do not observe.

Any recs for a good AIO to keep my card in top shape? 1080ti

Fauxtool
Oct 21, 2008

by Jeffrey of YOSPOS
it's like a reserve of lovely cards that miners will only touch after all the good ones are out of stock. It achieves nothing as long as miners have no limit on the number of cards they can take in.

Comfy Fleece Sweater posted:

Any recs for a good AIO to keep my card in top shape? 1080ti

EVGA if it's an FE

NZXT G10 if it's not

Rastor
Jun 2, 2001

Junior Jr. posted:

Do you not have a monitor with a DP?

I find it really weird they're now releasing 'mining' cards with just a DVI port.
I assume the decision was to have a single display output, and of the available choices DVI was the cheapest (license fees, maybe?)


1gnoirents posted:

If they have outputs, what exactly is the difference? poo poo, if those are any cheaper than their "gaming" counterparts I don't see much reason to get a more expensive card with outputs you won't need.

edit: I suppose it's also protection for the manufacturer; if mining tanks, it's not like they aren't useful cards to sell.

Paul MaudDib posted:

This is one reason I doubted the whole "mining cards" thing would ever happen. Why would NVIDIA/AMD give up a full-price sale of a second GPU and let you buy a discounted one instead? And mining cards with display outputs? That's literally just a price cut on whatever that card is.

There's a whooooole bunch of magical thinking on the part of AIBs with these mining cards. They make absolutely no sense to miners unless they are nearly functionally equivalent to a regular card, and at that point regular users are going to buy them too. If you gimp them too hard, miners are just going to buy regular cards as long as stock exists and only buy the gimped cards as an absolute last resort.

Differences on a mining board:
1. Optimized for hashes instead of FPS. I assume this means underclocked core + overclocked RAM, done by the manufacturer so you don't need to modify the firmware yourself. ASUS, for example, is claiming 36% higher hashes/s vs. an equivalent gaming board.
2. One or zero display outputs.
3. Little or no warranty.
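A quick back-of-the-envelope on that claimed 36% uplift, with hypothetical numbers (the prices, hash rates, and earnings rate below are all invented for illustration): even with a shorter warranty, a cheaper board with more hashes/s pays itself back sooner.

```python
# Hypothetical payback comparison between a gaming board and a mining
# board, assuming the claimed 36% hash-rate uplift. Prices, hash rates,
# and the earnings rate are invented for illustration only.

gaming_price_usd = 400.0
mining_price_usd = 360.0          # assume the stripped-down board is cheaper
gaming_mhs = 24.0                 # hypothetical hash rate, MH/s
mining_mhs = gaming_mhs * 1.36    # the claimed 36% uplift
usd_per_mhs_day = 0.10            # hypothetical earnings per MH/s per day

def payback_days(price, mhs):
    return price / (mhs * usd_per_mhs_day)

gaming_days = payback_days(gaming_price_usd, gaming_mhs)
mining_days = payback_days(mining_price_usd, mining_mhs)
assert mining_days < gaming_days  # cheaper + faster = earlier breakeven
```

Of course, the whole comparison collapses if the mining board isn't actually cheaper, which is the point made above about these cards needing to be nearly equivalent to regular ones.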

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Comfy Fleece Sweater posted:

Any recs for a good AIO to keep my card in top shape? 1080ti

NZXT G10/G12 bracket (virtually interchangeable but G12 supposedly has easier installation) has the best compatibility with GPUs, then you need to pick one of the compatible AIOs off the list on their website. Most AIOs on the market are OEM'd from Asetek (due to their patent on waterblock/pump combo units) and anything they make with the given pattern should be compatible. I would probably just go for whatever's cheap (Corsair H55 or H75) but if you want to be fancy the NZXT Kraken series are supposed to be pretty decent, or you could go for a 2x120 or 2x140 unit like the H100i/H105/H110i. The G10/G12 bracket has a fan to blow air over the VRMs but most people recommend you put some copper heatsinks on the VRMs to keep them a little cooler (have not tried this myself).

There's also the EVGA Hybrid kits, which are a little more expensive and are specific to a particular model, but do look a lot more purpose-built instead of just being some poo poo cobbled together with a bracket. Note that the 1070/1080 models are different from the 1080 Ti models - the 1080 Ti has a second power connector and there's no cutout for that in the 1080 Hybrid kit's shroud. They charge a $50 premium for that model, or I guess you could take a hacksaw and do 'er up.

If you see yourself moving to a newer GPU in the near future then the G10/G12 gives you an upgrade path since it magically just fits pretty much anything.

Note that when you buy an AIO cooler (the cooler itself), do not buy refurb. The manufacturer warranty is important: if it explodes and destroys other components while under warranty, all the major companies have an informal policy of refunding anything else that's damaged, as long as you didn't physically damage the cooler yourself. The warranty is normally 5 years, but on refurb units it's much shorter, usually a year or less (often 90 days). AIOs have the potential to be an expensive accident; if the manufacturer doesn't trust a refurb unit, I don't see why I should either.

Paul MaudDib fucked around with this message at 21:45 on Jun 27, 2017

Cygni
Nov 12, 2005

raring to post

i think if you factor in consumer stupidity, especially the type of stupid consumer interested in cryptocurrency, the mining cards make more sense. they will have a slightly better profit margin for the AIB partners (less overbuilt power delivery and cooling), and there are enough idiots out there that think they will mine better because they have the word mining in the name to make the limited startup costs of reusing an already designed PCB worthwhile.

Also, I would wager that the DVI output is there so cryptominers can flip the cards to internet cafes/gaming dungeons in SE Asia that mostly use older DVI monitors.

Nybble
Jun 28, 2008

praise chuck, raise heck
I wonder if these mining cards would be good for Machine Learning too.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Nybble posted:

I wonder if these mining cards would be good for Machine Learning too.

Definitely; assuming you don't need any of the Tesla-specific features like NVLink or RDMA-over-InfiniBand, it's basically just a consumer Tesla card.

edit: does anyone here do machine learning? Do you know if machine learning networks are designed to scale across multiple cards, and if so can they do that reasonably well without the high-speed interconnects on Tesla (just peer-to-peer RDMA over PCIe)? Or is it single-card-only?

Paul MaudDib fucked around with this message at 21:38 on Jun 27, 2017

Comfy Fleece Sweater
Apr 2, 2013

You see, but you do not observe.

Thanks for the AIO recs, that Kraken looks pretty good actually, and it's not expensive at all (considering $30 vs. a $700 card).

Fauxtool
Oct 21, 2008

by Jeffrey of YOSPOS

Comfy Fleece Sweater posted:

Thanks for the AIO recs, that Kraken looks pretty good actually, and it's not expensive at all (considering $30 vs. a $700 card).

You still need to buy the AIO along with the G10; the Kraken is just the name of an AIO designed to fit. The EVGA is close to the same price but only works on reference cards.

1gnoirents
Jun 28, 2014

hello :)
I've done the G10 + Kraken route and others and was very, very pleased. I've heard a lot of complaints about pump noise with the EVGA kits, but I experienced none with the G10 + AIO route. I'm actually thinking about going back to that setup if I can figure out how to attach one to an RVZ02 case to eliminate most of the fan noise during gaming.

The concern in the past was no direct cooling for VRMs and memory, and it was a well-founded one in very specific scenarios for high-TDP cards. There was testing done with FLIR showing that the PCB temperature could "run away" if it passed a certain threshold. However, it took running FurMark on a 290 or OG Titan to pull it off, and only while overclocked. Otherwise the entire PCB cooled dramatically, with all the heat getting sucked up through the waterblock on the GPU itself. These days it's even less of a concern, outside of perhaps the Vega cards.

https://www.amazon.com/Saim-Cooling...AMAT7DP6CYPAWCW

You can stick on heatsinks like that if you want, though it is without a doubt no longer necessary; hey, more cooling isn't a bad thing, and there's a little peace of mind along with it.

Overall I'll always recommend the bracket + AIO route as a reasonably priced way to water cool GPUs. It's so effective you can set your AIO fan speed at a constant, inaudible level and there is nothing you can do to the GPU to get it to throttle. Another plus is you can use what are normally considered lovely AIOs, but they happen to work extremely well on GPUs.

Fauxtool
Oct 21, 2008

by Jeffrey of YOSPOS
What is the exact command line to lower intensity for Equihash on EWBF?
I am using --intensity= 17 and it doesn't seem to do anything. On the other pools that use that command it will take my GPU down to 88% or so. On EWBF it's 99% all the time, even if I try --intensity= 1.

Craptacular!
Jul 9, 2001

Fuck the DH

1gnoirents posted:

If they have outputs, what exactly is the difference? poo poo, if those are any cheaper than their "gaming" counterparts I don't see much reason to get a more expensive card with outputs you won't need.

edit: I suppose it's also protection for the manufacturer; if mining tanks, it's not like they aren't useful cards to sell.

Warranty, motherfucker.

They can also use different parts: louder cooling fans, maybe longer-lasting parts that consumers wouldn't want in their home rig because the bearings make a slight vibration hum or something.

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.

Paul MaudDib posted:

Definitely; assuming you don't need any of the Tesla-specific features like NVLink or RDMA-over-InfiniBand, it's basically just a consumer Tesla card.

edit: does anyone here do machine learning? Do you know if machine learning networks are designed to scale across multiple cards, and if so can they do that reasonably well without the high-speed interconnects on Tesla (just peer-to-peer RDMA over PCIe)? Or is it single-card-only?

Depends on the tool you are using. In TensorFlow you can, but you need to model your data in such a way that each GPU can take a particular set. Other packages like Deeplearning4j don't even always use the GPU all that efficiently to start with. But yeah, a lot of dev environments will use multi-GPU systems.

For production it usually just makes more sense to use cloud computing; that's what we do.

Ghostlight
Sep 25, 2009

maybe for one second you can pause; try to step into another person's perspective, and understand that a watermelon is cursing me



Comfy Fleece Sweater posted:

Btw does anyone know what's this retarded meme of typing "Hodl" instead of Hold
A buttcoiner once misspelt it, and their community is desperate for memes (on the internet, memes equal cultural relevancy), so they have used it ever since.

craig588
Nov 19, 2005

by Nyc_Tattoo

Fauxtool posted:

What is the exact command line to lower intensity for Equihash on EWBF?
I am using --intensity= 17 and it doesn't seem to do anything. On the other pools that use that command it will take my GPU down to 88% or so. On EWBF it's 99% all the time, even if I try --intensity= 1.

For EWBF the command is

quote:

-i mining intensity. Possible values: 0...4. 0 - lowest intensity and CPU usage, 4 - maximal intensity. You can also specify values for every card, for example "-i 4,2,4". Default value is "4".

Mining isn't worth it for me anymore, so I haven't tested what would be ideal for using a PC with no performance hit.

Edit: Oh, I see there's another fork that uses --intensity. This is what I was talking about: documentation ranging from poor to bad.

Edit 2: And yet another fork:

quote:

Fixed:

EWBF extra launch parameters not working

craig588 fucked around with this message at 01:17 on Jun 28, 2017

Fauxtool
Oct 21, 2008

by Jeffrey of YOSPOS

craig588 posted:

For EWBF the command is


Mining isn't worth it for me anymore, so I haven't tested what would be ideal for using a PC with no performance hit.

Edit: Oh, I see there's another fork that uses --intensity. This is what I was talking about: documentation ranging from poor to bad.

Edit 2: And yet another fork:

I tried -i and --intensity as low as 1 and it doesn't do anything. I have found some other people with this issue, but no solution.

willroc7
Jul 24, 2006

BADGES? WE DON'T NEED NO STINKIN' BADGES!
It's coming back!

Junior Jr.
Oct 4, 2014

by sebmojo
Buglord
I guess I have to admit ASIC mining on Slushpool was a stupid idea. I tried out Nicehash with my GPU rig, and so far the payout rate seems to be more efficient than whatever Slushpool did last year when I used it.

I made about £180 on Slushpool after Bitcoin inflated to $2,000, and apparently my stats on Nicehash are telling me I should be making about £37-40 per week, so I guess I was better off using this miner instead if I can make the same money back in a month and a half. :v:
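Sanity-checking that arithmetic, using only the figures quoted above: £180 at £37-40 per week works out to roughly four and a half to five weeks, so "a month and a half" is, if anything, slightly pessimistic.

```python
# Weeks for the quoted £37-40/week rate to match the £180 Slushpool total.
slushpool_total_gbp = 180.0
weekly_low_gbp, weekly_high_gbp = 37.0, 40.0   # quoted weekly range

weeks_at_low = slushpool_total_gbp / weekly_low_gbp    # slower case, ~4.9
weeks_at_high = slushpool_total_gbp / weekly_high_gbp  # faster case, 4.5
assert weeks_at_high < weeks_at_low < 5.0  # under five weeks either way
```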

Fauxtool
Oct 21, 2008

by Jeffrey of YOSPOS

willroc7 posted:

It's coming back!

making 4.30 today instead of 3.50 yesterday, time to buy 6 1080ti's!!!

originalnickname
Mar 9, 2005

tree

Junior Jr. posted:

I guess I have to admit ASIC mining on Slushpool was a stupid idea. I tried out Nicehash with my GPU rig, and so far the payout rate seems to be more efficient than whatever Slushpool did last year when I used it.

I made about £180 on Slushpool after Bitcoin inflated to $2,000, and apparently my stats on Nicehash are telling me I should be making about £37-40 per week, so I guess I was better off using this miner instead if I can make the same money back in a month and a half. :v:

If it makes you feel any better, I worked with a guy who was all-in on Bitcoin despite his normal job paying him a very decent living wage. He threw $24,000 (!!) into Bitcoin miners via wire transfer to a Chinese company. I bet you can guess how many miners got delivered!

(this was more in response to your statement about the ASIC mining)

Bardeh
Dec 2, 2004

Fun Shoe

Fauxtool posted:

making 4.30 today instead of 3.50 yesterday, time to buy 6 1080ti's!!!



It's time to invest!!! Extrapolating this graph, we'll be at $10,000,000 by Friday. IT'S FREE MONEY

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!

Fauxtool posted:

making 4.30 today instead of 3.50 yesterday, time to buy 6 1080ti's!!!

Seeing more 1080 Ti models out of stock on nowinstock than usual so people are probably actually doing this :lol:

Anime Schoolgirl
Nov 28, 2002

MaxxBot posted:

Seeing more 1080 Ti models out of stock on nowinstock than usual so people are probably actually doing this :lol:
:yum:

Shrimp or Shrimps
Feb 14, 2012


originalnickname posted:

If it makes you feel any better, I worked with a guy who was all-in on Bitcoin despite his normal job paying him a very decent living wage. He threw $24,000 (!!) into Bitcoin miners via wire transfer to a Chinese company. I bet you can guess how many miners got delivered!

(this was more in response to your statement about the ASIC mining)

Oh God whyyyyy? They just pocketed his money and then used *his* miners to mine, didn't they?

MaxxBot posted:

Seeing more 1080 Ti models out of stock on nowinstock than usual so people are probably actually doing this :lol:

Oh lord is it happening? Hoping for a nice cheap upgrade.

originalnickname
Mar 9, 2005

tree

Shrimp or Shrimps posted:

Oh God whyyyyy? They just pocketed his money and then used *his* miners to mine, didn't they?

We'll never know for sure, haha!

Junior Jr.
Oct 4, 2014

by sebmojo
Buglord

originalnickname posted:

If it makes you feel any better, I worked with a guy who was all-in on Bitcoin despite his normal job paying him a very decent living wage. He threw $24,000 (!!) into Bitcoin miners via wire transfer to a Chinese company. I bet you can guess how many miners got delivered!

So basically it's his fault that China owns half of Bitcoin. :v:

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Lockback posted:

Depends on the tool you are using. In tensorflow you can, but you need to model your data in such a way that each gpu can take a particular set. Other packages like deeplearning4J don't even always use the GPU all that efficiently to start with. But yeah, a lot of Dev environments will use multiple GPU systems.

For production it usually just makes more sense to use cloud computing, that's what we do.

So you couldn't train one giga-model, but you could do 4 little ones at the same time (for example), or a whole bunch of smaller ones? Not trying to make a self-driving car or anything, just curious. I did some pattern recognition back in college (classifiers, etc.); AI was joked about as a field that had been dead since Minsky, etc. poo poo's very different now just a couple years later.

shrike82
Jun 11, 2005
Probation
Can't post for 6 hours!
For a single model, you can parallelize by either having each GPU hold the entire model but train on different batches of data at a time, or having each GPU hold a separate chunk of the model and flow the same batch of data through. How easy it is to do this depends on the ML framework you're using.

This is a lot easier if you're doing model averaging or stacking; you can just have each GPU work on a separate model.
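The first (data-parallel) scheme can be sketched in plain Python with no framework at all; every name below is made up for illustration. Each simulated "GPU" computes the gradient of a squared-error loss on its shard of the batch, and the gradients are averaged (the all-reduce step) before the weight update:

```python
# Data-parallel training step for a one-parameter linear model y = w*x.
# Each simulated device holds the whole "model" (just w) and a shard of
# the batch; gradients are averaged before the update, which with equal
# shard sizes is mathematically identical to a full-batch step.

def grad_mse(w, shard):
    """Gradient of mean((w*x - y)^2) with respect to w over one shard."""
    return sum(2.0 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, batch, n_devices, lr=0.01):
    size = len(batch) // n_devices
    shards = [batch[i * size:(i + 1) * size] for i in range(n_devices)]
    grads = [grad_mse(w, s) for s in shards]   # one gradient per "GPU"
    avg_grad = sum(grads) / n_devices          # the all-reduce
    return w - lr * avg_grad

batch = [(float(x), 3.0 * float(x)) for x in range(1, 9)]  # true w = 3
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, batch, n_devices=4)
assert abs(w - 3.0) < 1e-6  # converges to the true weight
```

Model parallelism (each GPU holding a chunk of the model) is the harder case, because activations have to cross the interconnect on every forward and backward pass; that's where the NVLink-class bandwidth asked about earlier actually matters.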

Fauxtool
Oct 21, 2008

by Jeffrey of YOSPOS

Fauxtool posted:

I tried -i and --intensity as low as 1 and it doesn't do anything. I have found some other people with this issue, but no solution.

More on this: the command that works is --intensity, but only a value of 1 does anything, and leave out the "=" or it won't work:
"--intensity 1"

Fauxtool fucked around with this message at 03:17 on Jun 28, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

shrike82 posted:

For a single model, you can parallelize by either having each GPU hold the entire model but train on different batches of data at a time, or having each GPU hold a separate chunk of the model and flow the same batch of data through. How easy it is to do this depends on the ML framework you're using.

This is a lot easier if you're doing model averaging or stacking; you can just have each GPU work on a separate model.


Ignoring software ecosystem issues, do you see any benefit to Vega's compute uarch over an NVIDIA equivalent with an equivalent amount of capacity or connectivity for this or any other common GPGPU tasks? At least from the presentation slides? (it's launch day and Vega is MIA)

I think at one point they showed off M.2 NVMe onboard capability, but I'm not sure how 2 TB of relatively slow (4 GB/s) scratch or storage space really helps vs. the 16 GB/s on the PCIe bus. NVIDIA has long since supported RDMA to certain PCIe SSDs on the bus too, IIRC. What's the benefit of an onboard mount over RDMA, or over just having buttloads of system RAM? Or is it just a way of getting more lanes to the card?

Paul MaudDib fucked around with this message at 03:05 on Jun 28, 2017
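Putting the bandwidth question into numbers (the working-set size below is hypothetical): the card's onboard NVMe at ~4 GB/s is a quarter of the ~16 GB/s a PCIe 3.0 x16 link offers, so the onboard SSD only wins when the host side can't actually feed the bus, e.g. the dataset doesn't fit in system RAM and system storage is slower still.

```python
# Time to stream a working set into the GPU: onboard NVMe (~4 GB/s)
# vs. the PCIe 3.0 x16 bus (~16 GB/s). The 512 GB working set is a
# hypothetical dataset larger than any VRAM.

ONBOARD_NVME_GBPS = 4.0
PCIE3_X16_GBPS = 16.0
working_set_gb = 512.0

t_nvme_s = working_set_gb / ONBOARD_NVME_GBPS   # full pass via onboard SSD
t_pcie_s = working_set_gb / PCIE3_X16_GBPS      # full pass over the bus
assert t_nvme_s == 4 * t_pcie_s  # the bus is 4x faster when it can be fed
```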

repiv
Aug 13, 2009

Paul MaudDib posted:

I think at one point they showed off M.2 NVMe onboard capability, but I'm not sure how 2 TB of relatively slow (4 GB/s) scratch or storage space really helps vs. the 16 GB/s on the PCIe bus. NVIDIA has long since supported RDMA to certain PCIe SSDs on the bus too, IIRC. What's the benefit of an onboard mount over RDMA, or over just having buttloads of system RAM? Or is it just a way of getting more lanes to the card?

We don't know how the Vega-based Radeon SSG works yet, but as far as I can tell the Fiji and Polaris SSGs were just gimmicks. They have no direct link between the GPU and onboard SSDs; they just use a PLX chip to multiplex the GPU and SSDs onto the system bus, then have them communicate over plain old RDMA.

repiv fucked around with this message at 03:56 on Jun 28, 2017

Lockback
Sep 3, 2006

All days are nights to see till I see thee; and nights bright days when dreams do show me thee.

Paul MaudDib posted:

Ignoring software ecosystem issues, do you see any benefit to Vega's compute uarch over an NVIDIA equivalent with an equivalent amount of capacity or connectivity for this or any other common GPGPU tasks? At least from the presentation slides? (it's launch day and Vega is MIA)

I think at one point they showed off M.2 NVMe onboard capability, but I'm not sure how 2 TB of relatively slow (4 GB/s) scratch or storage space really helps vs. the 16 GB/s on the PCIe bus. NVIDIA has long since supported RDMA to certain PCIe SSDs on the bus too, IIRC. What's the benefit of an onboard mount over RDMA, or over just having buttloads of system RAM? Or is it just a way of getting more lanes to the card?

Software ecosystem is kind of everything; that's why CUDA is such a big thing. Honestly, I only use non-cloud for POC-type work, so I really only care about memory size and whether it's CUDA-compatible with what my team is doing. Hell, we'll sometimes just do the work on CPU rather than deal with some loving weird rear end implementation that isn't working with the GPUs on the first try.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Lockback posted:

Software ecosystem is kind of everything; that's why CUDA is such a big thing. Honestly, I only use non-cloud for POC-type work, so I really only care about memory size and whether it's CUDA-compatible with what my team is doing. Hell, we'll sometimes just do the work on CPU rather than deal with some loving weird rear end implementation that isn't working with the GPUs on the first try.

No, I fully understand this; I just purely want to know whether you see any advantages to what AMD's proposing in terms of hardware: "If the ecosystem were there, X might be nice."

Paul MaudDib fucked around with this message at 03:56 on Jun 28, 2017

eames
May 9, 2009

Just last night I was wondering if the new ransomware outbreak (Petya) would affect prices and sure enough... interesting timing.

eames fucked around with this message at 08:05 on Jun 28, 2017

Fauxtool
Oct 21, 2008

by Jeffrey of YOSPOS
what do you mean bitcoins are only good for criminal activity? Lots of upstanding taxpayers use them all the time like...


Junior Jr.
Oct 4, 2014

by sebmojo
Buglord

Fauxtool posted:

what do you mean bitcoins are only good for criminal activity? Lots of upstanding taxpayers use them all the time like...

They're also used for buying Steam games... I know I did... if you don't want to risk killing your wallet, then that's an option.
