poptart_fairy
Apr 8, 2009

by R. Guyovich

pookel posted:

Because the primary system is terrible and the other Republican candidates were all terrible, and Hillary Clinton is so widely loathed by U.S. conservatives that even people who wouldn't normally get behind Trump are willing to elect him if it means she loses.

Spite politics are the best, aren't they? See everyone who voted UKIP to somehow prove Brexit wasn't racist. :iiam:

Annointed
Mar 2, 2013

Doc Hawkins posted:

This thread is being mind-killed by politics. :(

Sargon is shown repeatedly to be an inattentive dumbass with the attention span and reading comprehension of a goldfish, Hawkins is kicked out because he has gone full sexist, thunderfoot is shown to be a loving manchild when pressed by actual professors, Repzion can't stop masturbating to Anita Sarkeesian, the doofus duo of the Sarkeesian Effect have laid low, Armored Skeptic is shown to be a sexist right-wing proponent of Trump, and The Bible Reloaded sided with Gay Uncle Tom and scammer Milo Yiannopoulos. So far all of them have already been milked and are done for at the moment.

Plus the alt-right is the one to look out for at the moment, considering they hung a human model with Hillary's face on it on the bust of a church.

Yes I want the YouTube Atheist community to die so I can laugh at them becoming Alex Jones clones.

divabot
Jun 17, 2015

A polite little mouse!
Neoreaction a Basilisk update:

Phil Sandifer posted:

soundchaser93 asked: How's the physical edition of NaB coming along? I've been putting off reading it until I can get that real "Grover yelling at you" experience.

Decent chance of me doing the scanning today - there was a bit of a hold-up photocopying the zine edition when the person with unlimited photocopier access that I asked to do it for me took two weeks to get around to it.

Terrible Opinions
Oct 18, 2013



Somfin posted:

Because the republicans are really that bad, and have really signed that many deals with various devils. Trump is a conman, playing their game better than they can, to the audience of rubes and suckers that they have spent decades of their all-too-mortal lives painstakingly assembling. Anyone got that Jastiger quote about Cruz?
I know that's true, but I really wish it was lizardmen rigging the election.

BornAPoorBlkChild
Sep 24, 2012

Annointed posted:

Plus the alt-right is the one to look out for at the moment, considering they hung a human model with Hillary's face on it on the bust of a church.

when did this happen?...

The Vosgian Beast
Aug 13, 2011

Business is slow

ikanreed posted:

Oh, so they're afraid of AI that's actually smart.

No

No it's not

ikanreed
Sep 25, 2009

I honestly I have no idea who cannibal[SIC] is and I do not know why I should know.

syq dude, just syq!

The point of pretending Marx was right is to annoy reactionaries, not to actually defend a classless communist society as a goal.

Also... your grammar seems to be replying to a different structure of sentence than I actually wrote. Not sure what your complaint is.

Improbable Lobster
Jan 6, 2012

What is the Matrix 🌐? We just don't know 😎.


Buglord

ikanreed posted:

The point of pretending Marx was right is to annoy reactionaries, not to actually defend a classless communist society as a goal.

Also... your grammar seems to be replying to a different structure of sentence than I actually wrote. Not sure what your complaint is.

Marx was right, comradeski

I Killed GBS
Jun 2, 2011

by Lowtax

ikanreed posted:

The point of pretending Marx was right is to annoy reactionaries, not to actually defend a classless communist society as a goal.

Also... your grammar seems to be replying to a different structure of sentence than I actually wrote. Not sure what your complaint is.

lookit this milquetoast

ikanreed
Sep 25, 2009

I honestly I have no idea who cannibal[SIC] is and I do not know why I should know.

syq dude, just syq!

Small Frozen Thing posted:

lookit this milquetoast

Hey, when I'm first against the wall, would you guys mind watering my plants? Thank you.

MizPiz
May 29, 2013

by Athanatos

pookel posted:

Because the primary system is terrible and the other Democratic candidates were all terrible, and Donald Trump is so widely loathed by U.S. liberals that even people who wouldn't normally get behind Clinton are willing to elect her if it means he loses.

Not actually trying to say anything, just making myself a little more depressed. :smith:

The Vosgian Beast
Aug 13, 2011

Business is slow

ikanreed posted:

The point of pretending Marx was right is to annoy reactionaries, not to actually defend a classless communist society as a goal.

Also... your grammar seems to be replying to a different structure of sentence than I actually wrote. Not sure what your complaint is.

Being afraid of AI is only slightly above being afraid of alien abduction

Antivehicular
Dec 30, 2011


I wanna sing one for the cars
That are right now headed silent down the highway
And it's dark and there is nobody driving And something has got to give

I personally prefer to reserve all my irrational Other-fear for Sasquatch, humanity's angry cousins.

uber_stoat
Jan 21, 2001



Pillbug
Roko's Basilisk doesn't scare me, but should I ever be confronted by the baleful stare of the Moth-man, god help me.

SolTerrasa
Sep 2, 2011

The Vosgian Beast posted:

Being afraid of AI is only slightly above being afraid of alien abduction

The interesting thing is that what they're afraid of is tangentially rooted in something near reality. I'm part of a working group to reduce racism in AI systems, because it's an important problem to solve. Apply a crazy alt-right filter to that and you get "making intrusive modifications so that AI regards black people as inherently superior to white people #whitegenocide #allfeaturevectorsmatter".

The Vosgian Beast
Aug 13, 2011

Business is slow

Antivehicular posted:

I personally prefer to reserve all my irrational Other-fear for Sasquatch, humanity's angry cousins.

I fear no cryptid that walks on land.

I would piss all over myself if I got a glimpse of any given lake or sea monster though

Antivehicular
Dec 30, 2011


I wanna sing one for the cars
That are right now headed silent down the highway
And it's dark and there is nobody driving And something has got to give

SolTerrasa posted:

The interesting thing is that what they're afraid of is tangentially rooted in something near reality. I'm part of a working group to reduce racism in AI systems, because it's an important problem to solve. Apply a crazy alt-right filter to that and you get "making intrusive modifications so that AI regards black people as inherently superior to white people #whitegenocide #allfeaturevectorsmatter".

Okay, this is a stupid question, but I'm curious: what form does racism in AI systems take? Is it a residual effect of racist input? AIs don't just turn into mortifying Facebook uncles on their own, right?

Tesseraction
Apr 5, 2009

I was under the impression it's the latent racism of society that ends up being hypercharged when it's picked up by a system not bound by the concept of 'shame'.

WrenP-Complete
Jul 27, 2012

Yeah, I'll bite, since I'm doing data tidying for the third day in a row now and my mind is mush. What racism in AI systems? What?

Tesseraction
Apr 5, 2009

WrenP-Complete posted:

Yeah, I'll bite, since I'm doing data tidying for the third day in a row now and my mind is mush. What racism in AI systems? What?

I believe an AI was fed court cases, bios included, and they found that black people who committed the same crime got longer sentences despite no other differences.

Tesseraction
Apr 5, 2009

Plus Google Images showing mugshots for 'black teens' and happy kids hanging out for 'white teens'

Shame Boy
Mar 2, 2010

Tesseraction posted:

I believe an AI was fed court cases, bios included, and they found that black people who committed the same crime got longer sentences despite no other differences.

Clearly the AI is flawed and we must fix it :colbert:

I AM GRANDO
Aug 20, 2006

Didn't that twitter bot from a while back turn fascist from reading tweets? I totally believe that a machine that learns about the world from its cultural productions would come up with beliefs that reflect the features of that culture. It's the same way that human intelligences become racist.

ikanreed
Sep 25, 2009

I honestly I have no idea who cannibal[SIC] is and I do not know why I should know.

syq dude, just syq!

The Vosgian Beast posted:

Being afraid of AI is only slightly above being afraid of alien abduction

Duh?

Being afraid of Marxist AI in particular because it meant their dreambots wouldn't validate their worldview was funny.

Assepoester
Jul 18, 2004
Probation
Can't post for 10 years!
Melman v2
https://www.youtube.com/watch?v=t4DT3tQqgRM

The Vosgian Beast
Aug 13, 2011

Business is slow

ikanreed posted:

Duh?

Being afraid of Marxist AI in particular because it meant their dreambots wouldn't validate their worldview was funny.

I feel this sentiment was poorly expressed in the post I was responding to

darthbob88
Oct 13, 2011

YOSPOS

Jack Gladney posted:

Didn't that twitter bot from a while back turn fascist from reading tweets? I totally believe that a machine that learns about the world from its cultural productions would come up with beliefs that reflect the features of that culture. It's the same way that human intelligences become racist.

Tay, yeah, and that's more or less it. She heard a lot of responses from fascists, so she started sending her own fascist tweets.

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Antivehicular posted:

Okay, this is a stupid question, but I'm curious: what form does racism in AI systems take? Is it a residual effect of racist input? AIs don't just turn into mortifying Facebook uncles on their own, right?

It's gonna be heavily influenced by racialised inputs. Like, a neural network that is taught what a face looks like using only white people's photos will look for stuff that isn't there on a black person's face and return a false negative, like in that video above.

AI projects are just systems that learn and improve without human decisions being made in the process. If an unconscious bias slips into the training process it will merrily run with it.
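
To make that concrete, here's a toy sketch of the failure mode (all numbers invented, nobody's actual detector): a threshold effectively tuned on one group's photos, evaluated separately per group. The bias only shows up when you split the metric.

import numpy as np

rng = np.random.default_rng(0)

def detect_face(brightness):
    # Toy "detector": a single brightness threshold that was effectively
    # tuned on light-skinned training photos only.
    return brightness > 0.45

# Synthetic feature values; every photo here genuinely contains a face.
light = rng.normal(loc=0.7, scale=0.1, size=1000)
dark = rng.normal(loc=0.4, scale=0.1, size=1000)

for name, photos in [("light-skinned", light), ("dark-skinned", dark)]:
    false_negative_rate = 1.0 - detect_face(photos).mean()
    print(f"{name}: false-negative rate = {false_negative_rate:.1%}")

# Pooled accuracy looks fine; only the per-group breakdown exposes the problem.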

WrenP-Complete
Jul 27, 2012

Oh got it, my mush brain was imagining that AI could come up with its own racism against toasters or microwaves or what not.

Woolie Wool
Jun 2, 2006


WrenP-Complete posted:

Oh got it, my mush brain was imagining that AI could come up with its own racism against toasters

Now I'm imagining a hate group consisting of Daleks that chant about exterminating Cylons.

Shame Boy
Mar 2, 2010

WrenP-Complete posted:

Oh got it, my mush brain was imagining that AI could come up with its own racism against toasters or microwaves or what not.

I mean it probably could; there are lots of cases of a system ending up with unintended racial biases because the people who designed it also had those biases, or didn't think about situations other than their own personal life as an upper-middle-class white Californian dude

Peel
Dec 3, 2007

Tesseraction posted:

I believe an AI was fed court cases, bios included, and they found that black people who committed the same crime got longer sentences despite no other differences.

If you're thinking of what I was thinking of, I think the program was to predict chances of reoffending. My guess is the company just fed their program a massive wedge of criminal record data without controlling for racist input, with the result that, shock, people with traits correlated with being black (if they didn't just straight-up have race as a feature) come out as more likely to be arrested in the future.

SolTerrasa
Sep 2, 2011

It's a really interesting topic, actually. The sorts of AI systems in wide use (deep neural nets, to the exclusion of almost anything else, in my experience) are basically universal approximators. You have a function, with inputs and outputs. Say the input is a 1920x1080 photo, and the output is a score from -1 to 1 saying how photogenic that photo is, to help users of your photo app rank their photos of themselves by attractiveness. You have 2,073,600 inputs (pixels), each of which is a number between 0 and 16.7 million (number of expressible colors). So your function has a huge number of inputs and a single output.

You have to teach this system how to compute the output from the inputs. It starts out totally random. So you do this with labeled data: this photo is a 0.7, this photo is a -0.2, this photo is a 1.0, etc. The network "learns" by mutating itself and seeing if it's doing better or worse, until finally it is "good enough". This is highly simplified but the details don't matter for understanding racism in AI.
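
If you want that loop even more concrete, here's a toy version (tiny fake images, made-up labels, one layer instead of a deep net, so purely illustrative):

import numpy as np

rng = np.random.default_rng(0)

n_pixels = 16 * 16 * 3                    # stand-in for the real 1920x1080 input
images = rng.random((200, n_pixels))      # 200 fake training photos, flattened
labels = rng.uniform(-1, 1, size=200)     # the human-provided scores: 0.7, -0.2, 1.0, ...

weights = rng.normal(0, 0.01, n_pixels)   # starts out (nearly) random, knowing nothing

for step in range(500):
    scores = np.tanh(images @ weights)                           # current guesses, squashed to [-1, 1]
    error = scores - labels
    grad = images.T @ (error * (1 - scores ** 2)) / len(labels)  # direction to nudge each weight
    weights -= 0.1 * grad                                        # nudge, repeat until "good enough"

print(f"final mean squared error: {np.mean((np.tanh(images @ weights) - labels) ** 2):.3f}")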

The volume of data you need in order to teach the system is related to the complexity of the function you're trying to teach it. The thing is, you as the programmer don't know the actual details of how the function should be implemented. If you did, you'd use that instead of mucking about with neural nets. This function, where we take 2 million inputs with 16 million values each, is pretty complicated, so we probably need a lot of data.

It costs a lot of money to collect and label the ~10ish million pictures you'd need for this (a guess based on experience). The internet is already there, and it's pretty cheap. So, you go scrape the Flickr API, and say that images with a lot of likes / favorites / retweets or whatever are "good", and the other ones are "bad". You see the problem already.
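
In toy form, that "cheap labels from engagement" step is literally just this (hypothetical records, not any real API response):

photos = [
    {"id": "a", "favorites": 530},
    {"id": "b", "favorites": 12},
    {"id": "c", "favorites": 8100},
    {"id": "d", "favorites": 3},
]

median = sorted(p["favorites"] for p in photos)[len(photos) // 2]
labels = {p["id"]: 1.0 if p["favorites"] >= median else -1.0 for p in photos}
print(labels)  # whatever the internet already likes becomes the definition of "good"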

The internet is pretty racist. Pictures of white people are way more popular than pictures of black people. It's also the case that the people working on AI are basically all white dudes (I'm a white dude). Not to mention the issue of access to the internet due to highly racially correlated poverty. The AI system is operating with basically no oversight, and it's nearly impossible to get good insight into why it makes the decisions it does. All we know is, it does well on the training and testing sets. But since the internet is so racist, and since access to the internet and free time is so correlated with race, the training and test set underrepresent black people.

AI is lazy as hell, by design (if you try to make it unlazy it will do worse things, ask me later if you want to know why). So, if it only rarely sees a picture of a black person, and most of the time the black person is scored low, it'll just learn "dark pixels = bad". And since the engineers working on it will test it with internet data, then *maybe* test it on photos of themselves, or their friends, it's really only going to get tested on white people. So they launch it, and later on a bunch of black people ask "why does this say all my photos of me are ugly?"
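
A toy version of that shortcut, with made-up numbers: the scraped set is 95% bright photos that mostly score high and 5% darker photos that mostly score low, and even a bare least-squares fit happily turns that into a rule about brightness.

import numpy as np

rng = np.random.default_rng(1)

brightness = np.concatenate([rng.normal(0.8, 0.05, 950), rng.normal(0.4, 0.05, 50)])
scores = np.concatenate([rng.normal(0.6, 0.2, 950), rng.normal(-0.3, 0.2, 50)])

slope, intercept = np.polyfit(brightness, scores, 1)   # one-feature least-squares fit
print(f"learned rule: score ≈ {slope:.2f} * brightness + {intercept:.2f}")
# The big positive slope is the model literally encoding "brighter pixels = better photo".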

So, how do we fix it?

Well, one solution is to spend the huge amount of money to label the training data yourselves, with your Certified Non-Racist Sensitivity Trained Employees. This works okay if you're Facebook, and you have huge amounts of money that you can throw at silly features just cause it's fun. Not great, because everyone's a little bit racist and cameras themselves are kinda racist (another story, ask me later), but better. And you can go collect the data yourself and spend even more money on that, but nobody does that. It's just too expensive to embark on a ten million dollar data collection every time you want to launch some weird silly tweak to your photo app.

We could also try to ban the AI system from making inferences based on racial or racialized traits. But the thing is, AI is really clever about being lazy. You tell it "no inferring anything based on skin tone" (at great expense, since this is actually really hard to do) and you find that it behaves the same; it just decided to use width of nose instead of average skin tone. So you fix that, and now it's something about the delta between pupil color and iris color. And you fix that. All the time you're making these changes, your AI system is getting worse, because these changes are invasive, expensive, and actually do negatively affect accuracy and recall. So now you've spent months and millions on making your system worse, and it's still kinda racist, it just works harder at it, and it's a little bit veiled.
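
Toy illustration of that whack-a-mole (synthetic data; "proxy" stands in for nose width, pupil/iris delta, or whatever the system latches onto next):

import numpy as np

rng = np.random.default_rng(2)
n = 5000

group = rng.integers(0, 2, n).astype(float)           # the protected attribute
proxy = group + rng.normal(0, 0.3, n)                 # a feature strongly correlated with it
other = rng.normal(0, 1, n)                           # a legitimate, unrelated feature
label = group + 0.5 * other + rng.normal(0, 0.3, n)   # biased labels

# The "fix": the protected attribute itself is never shown to the model.
X = np.column_stack([proxy, other, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, label, rcond=None)
preds = X @ coef

print(f"weight on proxy feature: {coef[0]:.2f}")                     # close to 1: the bias survived
print(f"mean prediction, group 0: {preds[group == 0].mean():.2f}")
print(f"mean prediction, group 1: {preds[group == 1].mean():.2f}")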

Upsettingly, there isn't a lot of good research in how to solve this. So we're trying to do some. I can't talk about details, but to summarize:

1) this is a real problem
2) this is a pain in the rear end to fix
3) most companies and people and researchers don't bother
4) they should
5) qualified people are working on it
6) if we are successful we will definitely publish and then I can talk about it

And if you thought my example was hyperbolic, that one is pretty realistic:

https://www.theguardian.com/technology/2016/sep/08/artificial-intelligence-beauty-contest-doesnt-like-black-people

Somfin
Oct 25, 2010

In my🦚 experience🛠️ the big things🌑 don't teach you anything🤷‍♀️.

Nap Ghost

Parallel Paraplegic posted:

I mean it probably could; there are lots of cases of a system ending up with unintended racial biases because the people who designed it also had those biases, or didn't think about situations other than their own personal life as an upper-middle-class white Californian dude

How it gathers data, and the assumptions made about that data, factor into things. If you made a criminal behaviour prediction AI and fed it incarceration data (on the assumption that incarceration means guilt and that the process leading to that point is fair), it would develop a racial bias toward assuming black people would be criminals, because of the massive overrepresentation of black people in prison.
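
In toy numbers (invented rates, not real statistics), that assumption failure looks like this:

import numpy as np

rng = np.random.default_rng(3)
n = 100_000

group = rng.integers(0, 2, n)
offended = rng.random(n) < 0.10                       # identical true rate for both groups
arrest_prob = np.where(group == 1, 0.60, 0.20)        # but very different enforcement
arrested = offended & (rng.random(n) < arrest_prob)   # what the training labels actually record

for g in (0, 1):
    print(f"group {g}: true offence rate {offended[group == g].mean():.1%}, "
          f"rate a model learns from arrest labels {arrested[group == g].mean():.1%}")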

WrenP-Complete
Jul 27, 2012

And now we've come around to my research topics too! I'm making dinner but will effort post about neural nets and drug trip data soon!

Tesseraction
Apr 5, 2009

Peel posted:

If you're thinking of what I was thinking of, I think the program was to predict chances of reoffending. My guess is the company just fed their program a massive wedge of criminal record data without controlling for racist input, with the result that, shock, people with traits correlated with being black (if they didn't just straight-up have race as a feature) come out as more likely to be arrested in the future.

Oh yeah, that was the one!

Annointed
Mar 2, 2013

Race Realists posted:

when did this happen?...

http://theslot.jezebel.com/some-guy-just-hung-a-topless-clinton-effigy-in-case-yo-1786581412

hackbunny
Jul 22, 2007

I haven't been on SA for years but the person who gave me my previous av as a joke felt guilty for doing so and decided to get me a non-shitty av

SolTerrasa posted:

ask me later if you want to know why

another story, ask me later

Is it late enough now

Sit on my Jace
Sep 9, 2016

I would read the poo poo out of an Ask/Tell thread about AI research, and I'm sure I'm not the only one.

WrenP-Complete
Jul 27, 2012

I could help contribute content, if someone else who is more of a specialist would start it!
