Comfy Fleece Sweater
Apr 2, 2013

You see, but you do not observe.

wargames posted:

Till the next craze happens, then no more GPUs.

I don't see how any GPUs can compete with any ASIC.

Comfy Fleece Sweater
Apr 2, 2013

You see, but you do not observe.

QuarkJets posted:

Some more effortful details:

- All of the people with GPU mining rigs would probably be really happy to have another way to use their hardware to make money, like what that grad student I mentioned was trying to do. All that matters to them is that they be paid for running their hardware. Most of the computational hardware is in the hands of non-experts, so you just need to make the user interface as simple as possible. NiceHash did a great job of this: "click a button and start slowly making money from your idle hardware" is an easy-to-understand paradigm and would bring a lot of people on board. In fact, more generalized GPU computing would probably bring in more people than cryptocurrency mining ever did, because it would lack the natural skeeviness attached to cryptocurrency, and people already contribute idle resources for free to all kinds of good causes (SETI@Home, BOINC, etc.).

- Cryptocurrency miners are not actively seeking out these opportunities but would surely embrace them if they advertised themselves.

- Amazon is practically printing money because AWS offers a lot of computational power to anyone who wants to pay for it. AWS is highly generalized (i.e. it's not just for heavy computation), and there's certainly room for a lower-cost alternative focused on heavy computation.

- I can't emphasize the need for an intuitive user interface enough.

- The subset of tasks I mentioned that aren't well-suited to a GTX 1080 Ti are definitely in the minority. Machine learning accounts for the vast majority of computational effort right now; it's a big hot topic in HPC and it doesn't need any of the bells and whistles offered by the premium cards. Caveat: the higher-end cards have tensor cores that are extremely well-optimized for machine learning tasks, and they're way better at the half-precision computations that machine learning algorithms frequently perform. A GTX 1080 Ti will never be superior to a V100 for machine learning, but if compute time on ten GTX 1080 Tis is cheaper than compute time on one V100, then most professionals will go with the 1080 Tis.

- Corporations are risk-averse and are going to want assurances that you're not sending their data to their competitors.

- Solving the honesty problem means being able to easily verify computational outputs. This is easy by design in cryptocurrency: figuring out the correct nonce for the next block is hard, but verifying that the nonce is correct is easy. It's a lot harder for general computational problems. Say that I need to convolve one billion matrices with one billion other matrices; the only way to verify those outputs is to do the difficult computation yourself and check the answer. You can't solve this issue, but you can mitigate it: other providers on the network can perform the verification, and you could wrap that into the cost offered to people looking for computational power (e.g. a user could ask that N% of the outputs be independently verified X times by Y independent providers; N could be 100 for renders, and maybe this degree of redundant computation is still cost-effective because you don't have to purchase and maintain your own hardware).
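
For what it's worth, here's a minimal sketch of the asymmetry and the spot-check mitigation described in that last bullet. Everything in it is made up for illustration (the toy block header, the difficulty, the matrix workload, the dishonest output); it's not how any actual marketplace works.

code:
import hashlib
import random

import numpy as np


def verify_nonce(block_header: bytes, nonce: int, difficulty_bits: int = 20) -> bool:
    """Cheap by design: a single hash tells you whether a proof-of-work nonce is valid."""
    digest = hashlib.sha256(block_header + nonce.to_bytes(8, "little")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0


def spot_check(inputs, claimed_outputs, compute, fraction=0.05, rng=None):
    """Recompute a random fraction of a provider's outputs; return indices that don't match.

    `fraction` plays the role of the N% in the post; `compute` stands in for a trusted
    reference implementation run by an independent provider.
    """
    rng = rng or random.Random()
    sample = rng.sample(range(len(inputs)), max(1, int(fraction * len(inputs))))
    return [i for i in sample if not np.allclose(compute(inputs[i]), claimed_outputs[i])]


if __name__ == "__main__":
    # Verifying a nonce is one cheap hash, no matter how expensive it was to find.
    print("nonce valid?", verify_nonce(b"toy block header", nonce=123456))

    # Toy workload standing in for the convolutions: multiply each matrix by a fixed kernel.
    kernel = np.eye(8)
    compute = lambda m: m @ kernel
    inputs = [np.random.rand(8, 8) for _ in range(200)]
    outputs = [compute(m) for m in inputs]
    outputs[17] = outputs[17] + 1.0  # one dishonest (or broken) result

    # fraction=1.0 is the "N = 100 for renders" case: every output gets re-verified.
    print("suspect indices:", spot_check(inputs, outputs, compute, fraction=1.0))

The point being: checking the nonce costs one hash, while catching the bad matrix at index 17 means redoing that matrix's work; dialing `fraction` down is exactly the cost-versus-assurance trade-off described above.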

What QuarkJets is trying to say here is that you need a Blockchain to solve these problems.

Comfy Fleece Sweater
Apr 2, 2013

You see, but you do not observe.

Proof of steak heh heh heh 🥩

Comfy Fleece Sweater
Apr 2, 2013

You see, but you do not observe.

QuarkJets posted:

"Cpu hour" is computational time, it accounts for the fact that different computations take different lengths of time. "per computation" implies that time per computation is not a factor, which is not done anywhere that I know of

I think that attaching a certain amount of computational time (e.g. CPU hours) to a job submission is fine. But say that I want as many digits of pi as possible and I attach 100 CPU hours to the task. Even with verification of outputs, different systems will produce a different number of digits in the same amount of computational time. How do you identify bad actors in such a situation? "You can't" may be an acceptable answer, but it would be great to solve the problem anyway.
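
If it helps, here's a minimal sketch of one mitigation in the spirit of the "Y independent providers" idea above: hand the same budgeted job to a few providers, compare their answers over the longest prefix that everyone produced, and distrust whoever disagrees with the majority there. The provider names and outputs are invented.

code:
from collections import Counter


def flag_disagreeing_providers(outputs: dict[str, str]) -> list[str]:
    """outputs maps provider name -> the digit string that provider returned.

    Producing fewer digits per CPU hour isn't evidence of cheating on its own
    (hardware differs), but disagreeing on digits that everyone computed is.
    """
    common = min(len(s) for s in outputs.values())
    prefixes = {name: s[:common] for name, s in outputs.items()}
    majority, _ = Counter(prefixes.values()).most_common(1)[0]
    return [name for name, prefix in prefixes.items() if prefix != majority]


if __name__ == "__main__":
    results = {
        "provider_a": "314159265358979323846264338327950288",  # long and honest
        "provider_b": "3141592653589793238462643383",          # shorter but honest
        "provider_c": "314159265358979323846999999999",        # made-up digits
    }
    print(flag_disagreeing_providers(results))  # ['provider_c']

This only catches providers whose digits disagree; one that honestly grinds out fewer digits per CPU hour still looks clean, so it narrows the "how do you identify bad actors" question rather than fully answering it.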

BLOCKCHAIN!
