  • Locked thread
Eyes Only
May 20, 2008

Do not attempt to adjust your set.
Man I hate to contribute to this thread, but cryptonerds did make my machine learning box pay for itself. Plus I guess it's not really crypto if we're talking about legit computation.

DrDork posted:

Yeah, the answer to the "bad actors" issue is to simply farm out the same computation to multiple nodes and look for a consensus result. The real question is whether you can get that consensus with a small enough duplication factor that the cost of doing the same thing X times doesn't exceed simply running it on AWS or another HPC farm whose one-run output you implicitly trust. Idle buttcoin farms will work for very low wages, but generally won't work for free or at a loss (unless you pay them in fake crypto coins, I suppose...).

This seems pretty simple to me: you can beat bad actors easily enough as long as you split the computation into discrete jobs and only pay out an address once it has completed N jobs.

The way I imagine it, before payout the system randomly duplicates some % of the jobs and gives them to other actors to check. If you don't trust the checkers, you can establish trust by checking a fraction of their checks, recursively going up the tree with progressively fewer checks until you get to some "central" authority - presumably the original issuer of the job since their interest is aligned here.
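A rough sketch of how that check tree shrinks as it goes up. The per-tier check rate and the stopping point are my own assumptions; the post only says "progressively fewer checks":

```python
def check_load(n_jobs, check_rate):
    """Jobs re-done at each tier of the check tree, assuming the same
    fraction is re-checked at every tier (an assumption; the post
    leaves the per-tier rate open). Stops once a tier is small enough
    for the issuer to verify directly."""
    tiers = []
    checked = n_jobs * check_rate
    while checked >= 1:
        tiers.append(round(checked))
        checked *= check_rate
    return tiers

# 1,000,000 jobs at a 1% check rate: tier-1 checkers redo 10,000 jobs,
# tier-2 checkers redo 100 of those checks, and the issuer redoes 1.
print(check_load(1_000_000, 0.01))  # → [10000, 100, 1]
```

The point being that the issuer's own workload shrinks geometrically, which is why the root of the tree stays cheap.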

At payout time, you randomly check X% of that actor's jobs. If any fail, the actor forfeits the entire payout. The math works out such that, as long as N is greater than 1/X, cheating will be entirely unprofitable. Cheat 100% of the time and you'll never get paid. Cheat more intelligently just some of the time and you may have a chance of getting away with it, but you'll also have to contribute real compute to keep that chance above zero. In the end you wind up forfeiting more real compute than you gain from cheating, regardless of what fraction of your results are fake.
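A quick Monte Carlo sanity check on that claim. The pay and compute-cost figures are made-up assumptions, and checking round(X·N) jobs per payout is my reading of the scheme:

```python
import random

def expected_profit(cheat_frac, n_jobs, check_rate,
                    pay_per_job=1.0, cost_per_job=0.9, trials=10000):
    """Monte Carlo estimate of an actor's average profit when a given
    fraction of their results are faked. Pay/cost values are purely
    illustrative; forfeiting the whole payout on any failed check is
    per the post."""
    n_checked = max(1, round(check_rate * n_jobs))
    total = 0.0
    for _ in range(trials):
        # Each job is independently faked with probability cheat_frac.
        fakes = [random.random() < cheat_frac for _ in range(n_jobs)]
        caught = any(random.sample(fakes, n_checked))
        payout = 0.0 if caught else pay_per_job * n_jobs
        real_work = cost_per_job * fakes.count(False)
        total += payout - real_work
    return total / trials

honest = expected_profit(0.0, n_jobs=100, check_rate=0.01)
cheater = expected_profit(0.3, n_jobs=100, check_rate=0.01)
print(honest, cheater)  # cheating earns strictly less in expectation
```

With N = 100 and X = 1% (exactly one checked job per payout), faking a fraction f of jobs scales the expected payout by (1 − f) while only saving f of the compute cost, so any positive margin shrinks.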

Naturally you'd want some buffer room in either N or X to account for the checker trust system, but the total overhead can be kept pretty low. At a 1% check rate you only need about 100 jobs per payout to secure the system - achievable for most distributed tasks if you're targeting weekly or monthly payouts for a typical actor.
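Back-of-the-envelope for those two numbers, assuming the same check rate at every tier of the tree (which the post doesn't pin down); the "about 100 jobs" figure corresponds to the smallest N strictly above 1/X:

```python
import math

def total_overhead(check_rate):
    # Verification work summed over every tier of the check tree:
    # X + X^2 + X^3 + ... = X / (1 - X) of the original compute.
    return check_rate / (1 - check_rate)

def min_jobs_per_payout(check_rate):
    # Smallest N satisfying the condition N > 1/X, so at least one
    # job gets checked at payout time.
    return math.floor(1 / check_rate) + 1

print(round(total_overhead(0.01) * 100, 2))  # ~1.01% extra compute
print(min_jobs_per_payout(0.01))             # 101 jobs per payout
```

So the recursive checking adds almost nothing on top of the base 1% check rate.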

You'd need a reverse-check system to prevent the job issuers themselves from maliciously marking everyone as a cheater at the root level, and another to prevent actors from DDoSing the system by spamming fake results for no benefit. The same idea should cover the former with little overhead, since the issuer isn't checking a large number of results. I presume the latter is already solved somehow in existing blockchain protocols.
