big scary monsters
Sep 2, 2011

-~Skullwave~-

MononcQc posted:

they're worrisome because everyone keeps the assumption that machines will suck and humans have to correct things and be the backstop to the technology. Adversarial examples that hit machine vision but not human vision break that possibility and explicitly remove our ability to easily detect and correct issues.

have there been examples of adversarial approaches that work with only limited access to the network they're trying to spoof? presumably a real adversary isn't going to hand you their model and give you a week's unrestricted cluster time running them against each other - what's the minimum number of attempts you need to turn a gun into a turtle?
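(for anyone curious what "query-limited black-box" looks like in practice: a minimal sketch, where the attacker only gets labels back from a predict() call and every call is counted. the target here is a made-up toy linear classifier standing in for the real network, and the attack is dumb random-perturbation search rather than any published method)

```python
# Sketch of a query-limited black-box attack. Assumption: the attacker
# can only call predict() (label in, label out) and every call counts
# against a budget. The "target" is a hypothetical toy linear classifier.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target model: fixed random linear weights, 2 classes.
W = rng.normal(size=(2, 16))

query_count = 0

def predict(x):
    """The only access the attacker has; each call is one query."""
    global query_count
    query_count += 1
    return int(np.argmax(W @ x))

def random_search_attack(x, true_label, eps=3.0, max_queries=200):
    """Try random L-inf perturbations until the label flips or the
    query budget runs out. Returns the adversarial input or None."""
    for _ in range(max_queries):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if predict(x + delta) != true_label:
            return x + delta
    return None  # budget exhausted

x0 = rng.normal(size=16)
label = predict(x0)
adv = random_search_attack(x0, label)
print(f"total queries spent: {query_count}")
```

a real attack would estimate gradients from the query responses instead of flailing randomly, which is how the published query-efficient attacks get the count down, but the counter is the point: every probe of the target costs an attempt.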


big scary monsters
Sep 2, 2011

-~Skullwave~-
beer pal seems to have a solid understanding of machine learning, missing only this key image that i guess i probably got from this thread but i'm too lazy to find the post to quote

big scary monsters
Sep 2, 2011

-~Skullwave~-
i ran it through a nn and now it's bigger and more true

big scary monsters
Sep 2, 2011

-~Skullwave~-
a surprisingly optimistic ending for watts
