Lysidas
Jul 26, 2002

John Diefenbaker is a madman who thinks he's John Diefenbaker.
Pillbug

ZZZorcerer posted:

I’ll try to get into the master’s program at the CS department at my uni later this year, but another option I was thinking about was the Philosophy dept, to try Law/Ethics in Computer/AI/ML :shobon:

just get a phd from CMU's ML department

Midjack
Dec 24, 2007



Kilometres Davis posted:

woah I thought palantir's code was under NDA

the posted example was a product of clean room RE.

animist
Aug 28, 2018

huhwhat posted:

code:
if ethical:
	return not ethical
else:
	return ethical

this always returns False; it should be the other way around :smug:
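The swap the reply is asking for, so the check always comes out the other way, would look something like this (the `palantir_ethics` name is made up just to make the joke runnable):

```python
def palantir_ethics(ethical: bool) -> bool:
    # The quoted joke always returns False; swapping the two
    # branch bodies makes it always return True instead.
    if ethical:
        return ethical        # True  -> True
    else:
        return not ethical    # False -> True
```

Either way you call it, `palantir_ethics(True)` and `palantir_ethics(False)` both come back `True`.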

Deep Dish Fuckfest
Sep 6, 2006

Advanced
Computer Touching


Toilet Rascal

lancemantis posted:

all the ethics courses in the world won't matter when each individual is just some alienated contributor to a greater machine, and who can rationalize away their own involvement in anything horrible which may result from their work

ok but what if you did the least horrible thing you could, as far as you could tell from the information immediately available around you?

Arcteryx Anarchist
Sep 15, 2007

Fun Shoe
It’s pretty hard to predict if anything you might work on could be weaponized

Actually it’s pretty easy: it most likely will be

Plenty of people working for google/amazon/whatever probably honestly didn’t think their work would be picked up by the MIC but it was

Plenty of research that wasn’t even funded by one of the ARPAs might catch their interest out of the blue later, and suddenly they’re pumping money into it

Chemists and life science folks probably didn’t expect chemical and biological weapons to come out of their stuff

Hell some of the early nuclear weapons folks probably didn’t realize how insane that would become

flakeloaf
Feb 26, 2003

Still better than android clock

Lonely Wolf posted:

You stole my implementation and that's . . . ethical. DAMMIT

huhwhat
Apr 22, 2010

by sebmojo
weaponized memes

vodkat
Jun 30, 2012



cannot legally be sold as vodka

huhwhat posted:

weaponized memes

this is already a thing op

teppichporsche
May 11, 2019

https://twitter.com/farbandish/status/1103099163296772096?s=21

Vomik
Jul 29, 2003

This post is dedicated to the brave Mujahideen fighters of Afghanistan

i really hope it didn't take him that long to realize that

animist
Aug 28, 2018
lol

https://www.technologyreview.com/s/613630/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/

"To get a better handle on what the full development pipeline might look like in terms of carbon footprint, Strubell and her colleagues used a model they’d produced in a previous paper as a case study. They found that the process of building and testing a final paper-worthy model required training 4,789 models over a six-month period. Converted to CO2 equivalent, it emitted more than 78,000 pounds and is likely representative of typical work in the field."

that's the lifetime emissions of 5 cars btw

animist fucked around with this message at 21:04 on Jun 9, 2019

Bloody
Mar 3, 2013

how many car- lifetimes per day is bitcoin

Arcteryx Anarchist
Sep 15, 2007

Fun Shoe

animist posted:

lol

https://www.technologyreview.com/s/613630/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/

"To get a better handle on what the full development pipeline might look like in terms of carbon footprint, Strubell and her colleagues used a model they’d produced in a previous paper as a case study. They found that the process of building and testing a final paper-worthy model required training 4,789 models over a six-month period. Converted to CO2 equivalent, it emitted more than 78,000 pounds and is likely representative of typical work in the field."

that's the lifetime emissions of 5 cars btw

the other day while something I was working on wasn’t working I joked about having the GPUs just producing heat instead of anything useful

animist
Aug 28, 2018
this ml app is actually kinda cool. neural art is neat, imo

animist
Aug 28, 2018
spooky paper: "adversarial examples" are actually just the computer picking up on patterns that humans can't see :tinfoil:

https://arxiv.org/abs/1905.02175

on the plus side you can train deep neural networks to not use those features. but then they lose accuracy
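For anyone curious what the mechanics of an adversarial nudge look like, here's a toy fast-gradient-sign sketch (a generic illustration, not code from the linked paper; the linear "model", step size, and dimensions are all made up):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)       # weights of a toy linear scorer w.x
x = rng.normal(size=100)       # some input the model scores
eps = 0.01                     # tiny, "invisible to a human" step size

# For a linear score the gradient w.r.t. x is just w, so stepping
# every feature against sign(w) is the fast-gradient-sign move:
x_adv = x - eps * np.sign(w)

print(w @ x)                   # original score
print(w @ x_adv)               # strictly lower: dropped by eps * sum(|w|)
```

Each coordinate moves only by eps, but the score drops by eps times the total weight mass, which is why a perturbation too small to see can still flip a classification.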

Arcteryx Anarchist
Sep 15, 2007

Fun Shoe
I mean, the fact that the features picked up aren't necessarily the same ones a human might consciously choose is a pretty well-known phenomenon in machine vision; the amusing bit is that people sometimes get huffy about it, when people also do ridiculous things, consciously or unconsciously, and the machines have much better sensors in some ways

MononcQc
May 29, 2007

they're worrisome because everyone keeps the assumption that machines will suck and humans will have to correct things and be the backstop to the technology. Adversarial examples that hit machine vision but not human vision break that assumption and explicitly remove our ability to easily detect and correct issues.

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

MononcQc posted:

they're worrisome because everyone keeps the assumption that machines will suck and humans will have to correct things and be the backstop to the technology. Adversarial examples that hit machine vision but not human vision break that assumption and explicitly remove our ability to easily detect and correct issues.

no it's because it can theoretically break the panopticon in ways that are less obvious than IR blinders or strobes

flakeloaf
Feb 26, 2003

Still better than android clock

it is a moral imperative to use any such methods

Bloody
Mar 3, 2013

it hardly matters, most actual """AI""" applications are just mechanical turk and other people

animist
Aug 28, 2018

Captain Foo posted:

no it's because it can theoretically break the panopticon in ways that are less obvious than IR blinders or strobes

gdi you're right

i just started research on improved adversarial defenses and somehow i didn't grasp that this is what i'm actually getting paid for, lol

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

animist posted:

gdi you're right

i just started research on improved adversarial defenses and somehow i didn't grasp that this is what i'm actually getting paid for, lol

time to surreptitiously be terrible

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

Captain Foo posted:

time to surreptitiously be terrible

lol if you do this surreptitiously

Sagebrush
Feb 26, 2012

it would be a mitzvah if one of you would come up with an algorithm that analyses your face and tells you where to subtly apply a makeup convolution so that the system thinks your face is a turtle. tia

NoneMoreNegative
Jul 20, 2000
GOTH FASCISTIC
PAIN
MASTER




shit wizard dad

Captain Foo posted:

time to surreptitiously be terrible

/code review/

what’s this ‘fiducials.trumpface.backdoor.’ routine?
oh ha ha just joking around let me delete that

Qtotonibudinibudet
Nov 7, 2011



Omich poluyobok, skazhi ty narkoman? ya prosto tozhe gde to tam zhivu, mogli by vmeste uyobyvat' narkotiki

Sagebrush posted:

it would be a mitzvah if one of you would come up with an algorithm that analyses your face and tells you where to subtly apply a makeup convolution so that the system thinks your face is a turtle. tia

why would i want an ai to think i was mitch mcconnell

Uncle Enzo
Apr 28, 2008

I always wanted to be a Wizard

florida lan posted:

why would i want an ai to think i was mitch mcconnell

i can think of a couple of reasons actually

animist
Aug 28, 2018
https://twitter.com/byJoshuaDavis/status/1147538052639682565

ultrafilter
Aug 23, 2007

It's okay if you have any questions.



It's not that the robots are coming for your jobs. It's that your boss wants to replace you with a robot.

big scary monsters
Sep 2, 2011

-~Skullwave~-

MononcQc posted:

they're worrisome because everyone keeps the assumption that machines will suck and humans will have to correct things and be the backstop to the technology. Adversarial examples that hit machine vision but not human vision break that assumption and explicitly remove our ability to easily detect and correct issues.

have there been examples of adversarial approaches that limit access to the network they're trying to spoof? presumably a real adversary isn't going to hand you their model and give you a week's unrestricted cluster time running them against each other - what's the minimum number of attempts you can use to turn a gun into a turtle?

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

big scary monsters posted:

presumably a real adversary isn't going to hand you their model and give you a week's unrestricted cluster time running them against each other

They absolutely will as soon as it becomes a commercial product.
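The query-budget question above can be poked at with a toy random-search attack. Everything here is invented for illustration (the linear "remote model", the step size, the budget of 200), but it shows the score-only, count-your-queries setting a real black-box attacker would be in:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=50)

def remote_model(x):
    # The attacker only ever sees a score back, never gradients.
    return float(w @ x)

x0 = rng.normal(size=50)        # starting input
x, queries = x0.copy(), 0
best = remote_model(x); queries += 1

for _ in range(200):            # hard query budget
    candidate = x + 0.05 * rng.normal(size=50)    # blind random nudge
    score = remote_model(candidate); queries += 1
    if score < best:            # keep any nudge that lowers the score
        x, best = candidate, score

print(queries, best)            # 201 queries spent in total
```

Published query-limited attacks are much smarter about where they spend queries than this random search, but the accounting is the same: every probe of the target model costs you one attempt.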

funeral home DJ
Apr 21, 2003


Pillbug
does google employ ml for image searches to help them determine what can be displayed under safe search? because I can see adversarial imaging making poo poo go wild

hardcore porno popping up with searches of “paw patrol” or Ted Cruz’s ugly face being grouped with lemon party; the possibilities for fuckery are endless

Carthag Tuek
Oct 15, 2005

Tider skal komme,
tider skal henrulle,
slægt skal følge slægters gang



idk maybe they use a combination of classifying the sites themselves (keywords) and the images (some dumb nn) and then some threshold? i've seen porn go through a couple times.

akadajet
Sep 14, 2003

Krankenstyle posted:

idk maybe they use a combination of classifying the sites themselves (keywords) and the images (some dumb nn) and then some threshold? i've seen porn go through a couple times.

I thought you could almost always find porn if you keep scrolling down the results.

Carthag Tuek
Oct 15, 2005

Tider skal komme,
tider skal henrulle,
slægt skal følge slægters gang



akadajet posted:

I thought you could almost always find porn if you keep scrolling down the results.

probably?

this was like page 1-2 of some image search

DaTroof
Nov 16, 2000

CC LIMERICK CONTEST GRAND CHAMPION
There once was a poster named Troof
Who was getting quite long in the toof
something fun i noticed about google search: it doesn't matter what you're searching for, if you go about 5 or 6 pages back you'll eventually find a link that looks like exactly what you want but it redirects to a game called oval office WARS

props to google for burying those links several pages back, but poo poo, they probably shouldn't be there at all

ultrafilter
Aug 23, 2007

It's okay if you have any questions.


https://twitter.com/dalykyle/status/1174360934237855749

No way this isn't an NLP error.

Jonny 290
May 5, 2005



[ASK] me about OS/2 Warp
lmao

Carthag Tuek
Oct 15, 2005

Tider skal komme,
tider skal henrulle,
slægt skal følge slægters gang



rofl

Winkle-Daddy
Mar 10, 2007

oh look, the handiwork of Summly, the "AI News Summarizing Product" built by some kids that Marissa gave like a billion dollars to. Working at Yahoo under Marissa was the fuckin worst. "guys, guys, don't you see how valuable tumblr is???"
