|
ZZZorcerer posted:I’ll try to get into the master’s program at the CS department at my uni later this year, but another option I was considering was the Philosophy dept, to do Law/Ethics in Computing/AI/ML

just get a phd from CMU's ML department
|
# ? Jun 7, 2019 02:43 |
|
Kilometres Davis posted:woah I thought palantir's code was under NDA

the posted example was a product of clean-room RE.
|
# ? Jun 7, 2019 03:53 |
|
huhwhat posted:
this always returns false, it should be the other way around
|
# ? Jun 7, 2019 04:06 |
|
lancemantis posted:all the ethics courses in the world won't matter when each individual is just some alienated contributor to a greater machine, and who can rationalize away their own involvement in anything horrible which may result from their work

ok but what if you did the least horrible thing you could, as far as you can tell from the information immediately available around you?
|
# ? Jun 7, 2019 05:13 |
|
It’s pretty hard to predict if anything you might work on could be weaponized. Actually it’s pretty easy: it most likely will be. Plenty of people working for google/amazon/whatever probably honestly didn’t think their work would be picked up by the MIC, but it was. Plenty of people doing research not even funded by one of the ARPAs might catch their interest out of the blue later, and suddenly they’re pumping money into it. Chemists and life-science folks probably didn’t expect chemical and biological weapons to come out of their stuff. Hell, some of the early nuclear weapons folks probably didn’t realize how insane that would become
|
# ? Jun 7, 2019 06:41 |
|
Lonely Wolf posted:You stole my implementation and that's . . . ethical.

DAMMIT
|
# ? Jun 7, 2019 16:30 |
|
weaponized memes
|
# ? Jun 7, 2019 17:21 |
|
huhwhat posted:weaponized memes

this is already a thing op
|
# ? Jun 7, 2019 19:54 |
|
https://twitter.com/farbandish/status/1103099163296772096?s=21
|
# ? Jun 7, 2019 23:14 |
|
i really hope it didn't take him that long to realize that
|
# ? Jun 8, 2019 00:56 |
|
lol https://www.technologyreview.com/s/613630/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/

"To get a better handle on what the full development pipeline might look like in terms of carbon footprint, Strubell and her colleagues used a model they’d produced in a previous paper as a case study. They found that the process of building and testing a final paper-worthy model required training 4,789 models over a six-month period. Converted to CO2 equivalent, it emitted more than 78,000 pounds and is likely representative of typical work in the field."

the headline's five-cars number is actually the paper's biggest experiment (a big transformer plus neural architecture search, ~626,000 lbs); the 78,000 lbs here is "only" about 60% of one car's lifetime, btw

animist fucked around with this message at 21:04 on Jun 9, 2019
# ? Jun 9, 2019 21:00 |
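a quick sanity check on those numbers, since the article's two figures are easy to mix up. both per-car and NAS figures below come from the same Strubell et al. paper the article describes, not from the quoted paragraph:

```python
# Both reference figures are from Strubell et al. 2019: ~126,000 lbs CO2e
# for an average US car's lifetime (fuel included), ~626,155 lbs for the
# largest experiment (a big transformer with neural architecture search).
model_lbs = 78_000          # the case-study pipeline quoted above
car_lifetime_lbs = 126_000
nas_lbs = 626_155

print(model_lbs / car_lifetime_lbs)  # ~0.62 of one car's lifetime
print(nas_lbs / car_lifetime_lbs)    # ~5 cars: the headline figure
```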
|
how many car-lifetimes per day is bitcoin
|
# ? Jun 9, 2019 21:28 |
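nobody knows precisely, but here's a back-of-envelope answer. every input below is a rough assumption: ~30 MtCO2/yr was a mid-range 2019 estimate for bitcoin, and the car figure is the Strubell et al. 126,000 lbs converted to metric tons:

```python
# All numbers are assumptions, not measurements.
btc_tco2_per_year = 30e6   # rough 2019 estimates ranged ~20-50 MtCO2/yr
car_lifetime_tco2 = 57     # ~126,000 lbs car-lifetime figure, in metric tons

per_day = btc_tco2_per_year / 365 / car_lifetime_tco2
print(round(per_day))  # on the order of a thousand car-lifetimes per day
```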
|
animist posted:lol

the other day, while something I was working on wasn’t working, I joked about the GPUs just producing heat instead of anything useful
|
# ? Jun 10, 2019 02:52 |
|
this ml app is actually kinda cool. neural art is neat, imo
|
# ? Jun 13, 2019 09:28 |
|
spooky paper: "adversarial examples" are actually just the computer picking up on patterns that humans can't see https://arxiv.org/abs/1905.02175

on the plus side you can train deep neural networks to not use those features. but then they lose accuracy
|
# ? Jun 20, 2019 17:37 |
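for anyone who hasn't seen how these get made: here's a minimal sketch of a gradient-based adversarial perturbation (FGSM-style). the "model" is a made-up toy logistic regression so the input gradient can be written by hand; the paper above is about deep networks, not this:

```python
import numpy as np

# Hypothetical fixed "classifier" weights for a 4-pixel input.
w = np.array([0.5, -1.0, 2.0, 0.1])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # probability the model assigns to class 1
    return sigmoid(w @ x + b)

def input_grad(x, y):
    # gradient of the cross-entropy loss w.r.t. the *input*; for logistic
    # regression it is (p - y) * w, so no autodiff is needed
    return (predict(x) - y) * w

def fgsm(x, y, eps):
    # step in the direction that increases the loss: x + eps * sign(grad)
    return x + eps * np.sign(input_grad(x, y))

x = np.array([0.2, 0.4, 0.1, 0.9])  # toy input with true label 1
x_adv = fgsm(x, y=1.0, eps=0.3)
print(predict(x), predict(x_adv))  # confidence in the true class drops
```

the point being that each pixel only moves by eps, but the model's confidence collapses, because the perturbation is aligned with features the model (not a human) cares about.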
|
I mean, the fact that the features picked up aren't necessarily the ones a human would consciously choose is a pretty well-known phenomenon in machine vision. The amusing bit is that people sometimes use this to get huffy about things, when humans also do ridiculous things, consciously or not, and the machines have much better sensors in some ways
|
# ? Jun 20, 2019 18:20 |
|
they're worrisome because everyone operates on the assumption that machines will suck and humans will have to correct things and be the backstop for the technology. Adversarial examples that hit machine vision but not human vision break that assumption and explicitly remove our ability to easily detect and correct issues.
|
# ? Jun 20, 2019 22:18 |
|
MononcQc posted:they're worrisome because everyone operates on the assumption that machines will suck and humans will have to correct things and be the backstop for the technology. Adversarial examples that hit machine vision but not human vision break that assumption and explicitly remove our ability to easily detect and correct issues.

no it's because it can theoretically break the panopticon in ways that are less obvious than IR blinders or strobes
|
# ? Jun 20, 2019 22:49 |
|
it is a moral imperative to use any such methods
|
# ? Jun 20, 2019 22:53 |
|
it hardly matters, most actual """AI""" applications are just mechanical turk and other people
|
# ? Jun 20, 2019 23:13 |
|
Captain Foo posted:no it's because it can theoretically break the panopticon in ways that are less obvious than IR blinders or strobes

gdi you're right, i just started research on improved adversarial defenses and somehow i didn't grasp that this is what i'm actually getting paid for, lol
|
# ? Jun 21, 2019 03:04 |
|
animist posted:gdi you're right

time to surreptitiously be terrible
|
# ? Jun 21, 2019 04:03 |
|
Captain Foo posted:time to surreptitiously be terrible

lol if you do this surreptitiously
|
# ? Jun 21, 2019 04:04 |
|
it would be a mitzvah if one of you would come up with an algorithm that analyses your face and tells you where to subtly apply a makeup convolution so that the system thinks your face is a turtle. tia
|
# ? Jun 21, 2019 18:33 |
|
Captain Foo posted:time to surreptitiously be terrible

/code review/ what’s this ‘fiducials.trumpface.backdoor.’ routine? oh ha ha, just joking around, let me delete that
|
# ? Jun 21, 2019 19:09 |
|
Sagebrush posted:it would be a mitzvah if one of you would come up with an algorithm that analyses your face and tells you where to subtly apply a makeup convolution so that the system thinks your face is a turtle. tia

why would i want an ai to think i was mitch mcconnell
|
# ? Jun 22, 2019 05:29 |
|
florida lan posted:why would i want an ai to think i was mitch mcconnell

i can think of a couple of reasons, actually
|
# ? Jun 24, 2019 05:19 |
|
https://twitter.com/byJoshuaDavis/status/1147538052639682565
|
# ? Jul 7, 2019 16:45 |
|
It's not that the robots are coming for your jobs. It's that your boss wants to replace you with a robot.
|
# ? Sep 8, 2019 20:18 |
|
MononcQc posted:they're worrisome because everyone operates on the assumption that machines will suck and humans will have to correct things and be the backstop for the technology. Adversarial examples that hit machine vision but not human vision break that assumption and explicitly remove our ability to easily detect and correct issues.

have there been examples of adversarial approaches that limit access to the network they're trying to spoof? presumably a real adversary isn't going to hand you their model and give you a week's unrestricted cluster time running them against each other - what's the minimum number of attempts you can use to turn a gun into a turtle?
|
# ? Sep 9, 2019 08:25 |
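yes, "black-box" / query-limited attacks are a whole research area; the crudest version is just random search against the model's score, with the query count as your budget. a toy sketch (everything here is made up, the "secret" model is a stand-in for whatever you can only query):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for someone else's deployed model: we can query it for a score
# but never see weights or gradients.
w_secret = rng.normal(size=16)
queries = 0

def model(x):
    global queries
    queries += 1  # count every query, since that's the scarce resource
    return 1.0 / (1.0 + np.exp(-(w_secret @ x)))

def attack(x, eps=0.5, steps=200):
    # dumb random-search attack: propose small perturbations inside an
    # eps-box around x, keep any candidate that lowers the score
    best, best_score = x.copy(), model(x)
    for _ in range(steps):
        cand = x + np.clip(best - x + 0.1 * rng.normal(size=x.shape), -eps, eps)
        score = model(cand)
        if score < best_score:
            best, best_score = cand, score
    return best, best_score

x0 = rng.normal(size=16)
orig = model(x0)
adv, adv_score = attack(x0)
print(f"{queries} queries, score {orig:.3f} -> {adv_score:.3f}")
```

the interesting research question is exactly the one asked above: fancier estimators (finite differences, surrogate models, transfer attacks) push that query count way down from "a week of cluster time".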
|
big scary monsters posted:presumably a real adversary isn't going to hand you their model and give you a week's unrestricted cluster time running them against each other

They absolutely will as soon as it becomes a commercial product.
|
# ? Sep 9, 2019 08:27 |
|
does google employ ml for image searches to help them determine what can be displayed under safe search? because I can see adversarial imaging making poo poo go wild: hardcore porno popping up in searches for “paw patrol”, or ted cruz’s ugly face being grouped with lemon party. the possibilities for fuckery are endless
|
# ? Sep 13, 2019 16:50 |
|
idk maybe they use a combination of classifying sites themselves (keywords) and images (some dumb nn) and then some threshold? ive seen porn go through a couple times.
|
# ? Sep 13, 2019 18:09 |
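if that guess is right, the pipeline is basically just a weighted blend and a cutoff. a sketch of that guess (entirely speculative, nothing here is Google's actual system; the scores and weights are invented):

```python
# Hypothetical safe-search filter: blend a page-level keyword score with a
# per-image classifier score and hide anything past a threshold.
def filter_image(keyword_score, image_score, w=0.4, threshold=0.6):
    blended = w * keyword_score + (1 - w) * image_score
    return blended >= threshold  # True -> hidden under safe search

print(filter_image(0.9, 0.7))  # porn site, porn image: filtered
print(filter_image(0.1, 0.5))  # clean page text, borderline image: slips through
```

which also shows why porn occasionally gets through: an image the classifier is unsure about, on a page with innocuous text, stays under the threshold.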
|
Krankenstyle posted:idk maybe they use a combination of classifying sites themselves (keywords) and images (some dumb nn) and then some threshold? ive seen porn go through a couple times.

I thought you could almost always find porn if you keep scrolling down the results.
|
# ? Sep 14, 2019 00:14 |
|
akadajet posted:I thought you could almost always find porn if you keep scrolling down the results.

probably? this was like page 1-2 of some image search
|
# ? Sep 14, 2019 00:43 |
|
something fun i noticed about google search: it doesn't matter what you're searching for, if you go about 5 or 6 pages back you'll eventually find a link that looks like exactly what you want but redirects to a game called oval office WARS. props to google for burying those links several pages back, but poo poo, they probably shouldn't be there at all
|
# ? Sep 14, 2019 02:55 |
|
https://twitter.com/dalykyle/status/1174360934237855749 No way this isn't an NLP error.
|
# ? Sep 18, 2019 20:40 |
|
lmao
|
# ? Sep 18, 2019 21:09 |
|
rofl
|
# ? Sep 18, 2019 23:28 |
|
ultrafilter posted:https://twitter.com/dalykyle/status/1174360934237855749

oh look, the handiwork of Summly, the "AI News Summarizing Product" built by some kids that Marissa gave like a billion dollars to. Working at yahoo under marissa was the fuckin worst. "guys, guys, don't you see how valuable tumblr is???"
|
# ? Sep 20, 2019 18:53 |