The Management
Jan 2, 2010

sup, bitch?
for those of you who haven’t worked with machine learning, here’s how it works:

* you take a bunch of pictures of shoes or horses or whatever the gently caress you want to recognize.
* then you reduce those images to some ridiculously low resolution.
* then you cram those images through some matrices until you get some matrix that happens to tell you horse or not horse.
* nobody knows why this matrix is horse, but it seems to work so you’re like, cool, I made a horse model. (rough sketch below.)
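in code, that whole pipeline is about this deep. a minimal sketch in pytorch; the resolution, layer sizes, and which output means "horse" are all made up for illustration, and the training loop is left out because it doesn't change the point:

```python
# a made-up horse/not-horse "model": image in, matrices in the middle,
# a two-number verdict out. every size here is arbitrary.
import torch
import torch.nn as nn

RES = 32  # step 2: some ridiculously low resolution, 32x32 grayscale

model = nn.Sequential(          # step 3: cram through some matrices
    nn.Flatten(),               # 32x32 image -> 1024-long vector
    nn.Linear(RES * RES, 64),   # a matrix
    nn.ReLU(),
    nn.Linear(64, 2),           # another matrix -> [not horse, horse] scores
)

img = torch.rand(1, RES, RES)   # stand-in for an actual photo of a horse

scores = model(img)             # step 4: it says horse or it says not horse
print("horse" if scores.argmax(dim=1).item() == 1 else "not horse")
```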

now you take your horse model and set out to detect some horses. you take some pics and run them through your model and it’s all here a horse, there a horse, no horse there. but then there’s one picture of a horse and your model says no, not horse. and you have no idea why it’s not horse. the matrix is just a bunch of 8-bit values, it doesn’t even look like a horse. the answer, which you will never discover, is that the picture has a reflection of the sun in the water under the horse, which is not where the sun goes and your model couldn’t deal with that.
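and staring at the weights gets you nowhere. here's what "a bunch of 8-bit values" means concretely; the matrix below is random rather than trained, but a trained one is no more legible:

```python
# quantize a made-up weight matrix to int8 and look at it. this is the
# artifact you'd be debugging. (random weights; trained ones read the same.)
import torch

w = torch.randn(64, 1024)    # a stand-in "horse" weight matrix
scale = w.abs().max() / 127  # symmetric int8 quantization
w8 = (w / scale).round().clamp(-128, 127).to(torch.int8)

print(w8[:4, :12])  # rows of small integers. none of it looks like a horse.
```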

and then you realize that your model doesn’t understand poo poo, it hasn’t learned anything. it is incapable of learning, it’s a loving filter.


The Management
Jan 2, 2010

sup, bitch?
btw, the ML processors hyped as the next magical computing wave are just parallel arithmetic engines that operate on matrices. many of them only support 8-bit arithmetic. the point is, whenever you read some article hyperventilating about TPUs and ML chips, you can mentally replace them with floating point units or vector instructions and see if the article still makes sense.
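to make that concrete, here's the entire trick sketched in numpy. a TPU does this wider, faster, and in dedicated hardware, but not differently; the sizes here are arbitrary:

```python
# the core op of an "ML chip": multiply-accumulate over int8 matrices.
# accumulation happens in int32, as real 8-bit MAC units do, to avoid overflow.
import numpy as np

a = np.random.randint(-128, 128, size=(64, 64), dtype=np.int8)
b = np.random.randint(-128, 128, size=(64, 64), dtype=np.int8)

c = a.astype(np.int32) @ b.astype(np.int32)  # the whole magical wave
print(c.shape, c.dtype)                      # (64, 64) int32
```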

The Management
Jan 2, 2010

sup, bitch?

Cybernetic Vermin posted:

google, along with a lot of other actors, has realized that a lot of successful ml simply involves ridiculous amounts of semi-skilled man-hours spent tweaking parameters and topologies randomly until the thing seems to do the thing, so it is worth it to try to capture more hands early on.

there are some glimmers of theoretical work happening on what works and what doesn't, notably lin-tegmark(-rolnick) https://arxiv.org/abs/1608.08225v4 , which does the very basic needful of demonstrating that e.g. encoding multiplication requires exponentially wide hidden layers in a shallow network, which should reasonably be interpreted as "cannot be learned with current methods".

i never find the time, but i am personally quite sure that it is a pretty doable task to draw from minsky's perceptrons book (which famously killed off neural networks the last time they were popular) the needed machinery, combined with reasonable assumptions on what the limits of learning are (e.g. finite precision), to demonstrate that some practical-sounding problems cannot be taught to deep neural networks without encoding as much information as a full table realization of the function requires. which should again be interpreted as "cannot be learned with current methods". i don't think such a paper would change the enthusiasm that much, but it is one of those directions that ought to be investigated sooner or later.

I’m not an academic type, but if I’m reading this correctly, you’re saying that ML is horseshit, and I can confirm this from practical experience.
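for anyone who wants to poke at the lin-tegmark result themselves, here is my reading of the paper's four-neuron multiplication gadget, sketched in python. the softplus nonlinearity and the particular lam are my choices for the sketch, not the paper's; the takeaway is that even multiplying two numbers takes a deliberate construction, and chaining it to multiply n inputs in a shallow net blows up exponentially:

```python
# approximate x*y with four neurons, following the lin-tegmark construction:
# any smooth nonlinearity with sigma''(0) != 0 works; softplus has
# sigma''(0) = 1/4. the approximation error shrinks as lam -> 0.
import math

def softplus(u):
    return math.log1p(math.exp(u))

def approx_mul(x, y, lam=1e-3):
    s = softplus
    num = (s(lam * (x + y)) + s(-lam * (x + y))
           - s(lam * (x - y)) - s(-lam * (x - y)))
    return num / (4 * lam * lam * 0.25)  # divide by 4 * lam^2 * sigma''(0)

print(approx_mul(3.0, -2.0))  # ~ -6.0
print(approx_mul(0.5, 0.25))  # ~ 0.125
```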
