|
MononcQc posted:they're worrisome because everyone keeps the assumption that machines will suck and humans have to correct things and be the backstop to the technology. Adversarial examples that hit machine vision but not human vision break that possibility and explicitly remove our ability to easily detect and correct issues. have there been examples of adversarial approaches that limit access to the network they're trying to spoof? presumably a real adversary isn't going to hand you their model and give you a week's unrestricted cluster time running them against each other - what's the minimum number of attempts you need to turn a gun into a turtle?
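the query-limited setting the post is asking about has been studied - the usual trick is to estimate gradients from score queries alone (NES-style, as in Ilyas et al.'s "Black-box Adversarial Attacks with Limited Queries and Information"), so the attack budget is literally a query count. here's a toy sketch of that idea, not any real attack: the "model" is a made-up stand-in function, and TARGET is an invented decision direction, just so the thing runs.

```python
import math
import random

random.seed(0)

# Hypothetical decision direction of the victim model (invented for
# this sketch so it is self-contained and runnable).
TARGET = [0.5, -0.3]

def model_score(x):
    # Stand-in for a remote classifier API: returns P(class "turtle").
    # In a real black-box attack this call is all the access you get.
    z = sum(xi * ti for xi, ti in zip(x, TARGET))
    return 1.0 / (1.0 + math.exp(-z))

def nes_gradient(x, n_queries=50, sigma=0.1):
    # NES-style gradient estimate with antithetic sampling:
    # each sample pair costs exactly two queries to the model.
    grad = [0.0] * len(x)
    for _ in range(n_queries // 2):
        u = [random.gauss(0, 1) for _ in x]
        diff = (model_score([a + sigma * b for a, b in zip(x, u)])
                - model_score([a - sigma * b for a, b in zip(x, u)]))
        grad = [g + diff * ui for g, ui in zip(grad, u)]
    return [g / (sigma * n_queries) for g in grad]

def attack(x, steps=200, lr=2.0, n_queries=50):
    # Gradient ascent on the target-class score using queries only.
    # Total query budget: steps * n_queries (here 10,000).
    x = list(x)
    for _ in range(steps):
        g = nes_gradient(x, n_queries)
        x = [xi + lr * gi for xi, gi in zip(x, g)]
    return x

start = [-2.0, 1.0]        # confidently "gun" to begin with
adv = attack(start)
print(model_score(start), model_score(adv))  # target score climbs
```

the point is that the attacker never touches weights or gradients, only the score the API hands back, so "minimum attempts" becomes a tunable budget (steps × queries per step) traded against how reliable the estimated gradient is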
|
# ¿ Sep 9, 2019 08:25 |
|
|
beer pal seems to have a solid understanding of machine learning, missing only this key image that i guess i probably got from this thread but i'm too lazy to find the post to quote
|
# ¿ Nov 16, 2020 17:09 |
|
i ran it through a nn and now it's bigger and more true
|
# ¿ Nov 16, 2020 17:39 |
|
a surprisingly optimistic ending for watts
|
# ¿ Nov 18, 2020 11:38 |