Morbus
May 18, 2004

CNNs (which are the main technique for this kind of computer vision these days) are famous both for replicating any biases in their training data and for performing very poorly when trained on an insufficiently broad set of examples. They are also trained against very large example sets and usually judged on overall accuracy; outside of that single success metric, how much any particular case matters is mostly down to engineer follow-up and judgment.

So if black faces are insufficiently represented in training data and/or you optimize your machine for an overall accuracy metric that can still be very good even if black faces aren't detected as well as white ones, you'll have some problems. Like, a classifier that is 10x better at identifying white male faces at the expense of a 4x reduction in detection accuracy for everyone else may score "higher" than a more neutral one, due to inherent biases in the training data. Additionally, white engineers are less likely to notice/care about the biases in the training data and even less likely to notice/care about poor performance on black faces as long as the overall score of the algorithm goes up.
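
To put toy numbers on that (completely made up, just to show how the single metric fails), here's a minimal Python sketch:

```python
# Hypothetical numbers: overall accuracy can go up even as accuracy
# for an underrepresented group craters, because that group is a
# small share of the (skewed) test data.

def overall_accuracy(groups):
    """groups: list of (n_examples, per_group_accuracy) pairs."""
    total = sum(n for n, _ in groups)
    correct = sum(n * acc for n, acc in groups)
    return correct / total

# Test set skewed 9:1 toward white male faces (group A) vs. everyone else (B).
neutral = [(9000, 0.80), (1000, 0.80)]  # equal accuracy for both groups
skewed  = [(9000, 0.95), (1000, 0.40)]  # much better on A, much worse on B

print(overall_accuracy(neutral))  # 0.80
print(overall_accuracy(skewed))   # 0.895 -- "wins" on the leaderboard anyway
```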

Another (bigger) problem: can you really say a data set is "biased" if it accurately reflects the real world? If reality itself underrepresents or undervalues black women, for example, the most "accurate" machine will often be similarly biased. If the data is genuinely full of racist patterns, then a pattern recognition machine will itself be racist. This was famously the case when companies used machine learning to "predict" the probability of a candidate being hired.

It's actually kind of a beautiful and horrifying illustration of the nature of institutional racism vs. individual bigotry. Even a stone-cold dumb machine with no concept of race, running a totally abstract, platonic ideal of an algorithm that has no human input or guidance and is indeed totally opaque to its human creators (i.e. an artificial neural network), cannot help but replicate the racist nature of the reality it is trained to understand.

You can try to band-aid the problem by going in and tweaking training data or changing metrics--in much the same way that you can try to address institutionalized racism via individual wokeness--but fundamentally the racist structure of society guides its constituents into racist patterns almost irrespective of their individual programming or goals, and without requiring their consent or even awareness.

crepeface
Nov 5, 2004

r*p*f*c*
a similar thing happened with film/cameras when they were first developed. dark skin has different lighting/calibration needs, and it wasn't until kodak developed new film stock that black people could be captured properly (and kodak only did that because furniture companies complained the old film couldn't differentiate dark wood).

someone might respond with "well black people are just harder to film, light can't be racist", which misses the point that it's almost never the inverse: there's never a product whose userbase includes a white guy where his experience is worse than everyone else's, because it wouldn't get released before that was fixed. technology and techniques are developed with a specific user in mind, and that person is usually a cis-het white guy.

if you hear techdorks start with excuses about why this algorithm had to be this way, tell them to fuck off.

(vox video warning) bias in film:
https://www.youtube.com/watch?v=d16LNHIEJzs

A Buttery Pastry
Sep 4, 2011

Delicious and Informative!
:3:

IAMKOREA posted:

Was your racist YouTube Swede the fat guy who goes and makes coffee from a pile of twigs he's gathered in the forest, wearing 1500 eur of Fjallraven gear in a little clearing that's probably 20 meters from his car, and giving bizarre lectures on masculinity and how modern society sucks because you can't pee anywhere you want? I feel like there's probably a lot of racist YouTube Swedes, but I'm sure we can get to the bottom of this.
Not every racist youtuber is Swedish, but every Swedish youtuber is racist. But no, not him. My racist was "The Golden One", who is certainly not fat, which he reminds you of by never putting on a shirt.

Giga Gaia posted:

if you hadn't said fat i'd have thought you were talking about varg
Varg is Norwegian though.

Hodgepodge
Jan 29, 2006
Probation
Can't post for 233 days!
well you see, confirmation bias is a huge problem for human cognition, so if we automate it and take humans out of the loop we can finally stop feeling guilty and blame the computer

Charles 2 of Spain
Nov 7, 2017

Everyone who works in computer vision is a cop

SplitSoul
Dec 31, 2000

https://twitter.com/aftertheboop/status/1308091057863888896

Charles 2 of Spain
Nov 7, 2017

Lmao

LargeHadron
May 19, 2009

They say, "you mean it's just sounds?" thinking that for something to just be a sound is to be useless, whereas I love sounds just as they are, and I have no need for them to be anything more than what they are.

Now that I actually understand what these are, this is one of the funniest things I’ve seen on the internet

e: I mean, the one I quoted, specifically, not the algorithm’s flaws in general

crepeface
Nov 5, 2004

r*p*f*c*

incredible. a perfect tweet.

Spoondick
Jun 9, 2000

that darned algorithm also picked a bunch of nazis to be twitter mods, they should hire better programmers

ur in my world now
Jun 5, 2006

Same as it ever was
Same as it ever was
Same as it ever was
Same as it ever was


Smellrose
we gotta stop the racist computers before they steal our racism factory jobs. mr president, it's time to build the wall

Victory Position
Mar 16, 2004

fully automated and engineered racism, produced at breakneck speeds and raced at the same

Dustcat
Jan 26, 2019

the way racism technology has advanced, we could all be down to a 12-hour racism workweek by now, but that wouldn't suit the needs of racist capital which needs to squeeze out every ounce of our racism for its profits

vyelkin
Jan 2, 2011

DemoneeHo posted:

Good news is that right now the algorithm can't accurately identify the faces of black people, so they don't need to worry about privacy infringement as much.

Bad news is that the police don't care about false positives and will use the inaccurate readings to punish black people, no matter how blatantly wrong their information is.

they're already doing this. there was a story recently about how police somewhere were using *~algorithms~* to determine who was likely to commit a crime, and then they would go harass those people even if they hadn't done anything wrong. surprise surprise, garbage in, garbage out: it was just another excuse for the cops to go bother innocent people of colour

Chamale
Jul 11, 2010

I'm helping!



Real Mean Queen posted:

I figured the title was referring to that microsoft (?) chatbot that went from “hello world” to “hitler did nothing wrong” within like a day, but I was also thinking it might refer to the youtube algorithm that says “oh you like any kind of video about any topic? We think you’ll enjoy racism.” The fact that “the racism computer” is vague enough to not refer to any one racist computer makes me think that maybe we’re not doing a great job having computers.

Yeah, the Youtube algorithm was programmed to show people videos that make them want to watch more videos. It turns out that white supremacists and conspiracy theorists watch a ton of Youtube videos, so the algorithm did what it was programmed to do and started trying to convert people to that mindset.
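
As a toy sketch of that objective (purely illustrative; the real ranking system is proprietary and vastly more complicated), note that the only thing being maximized is predicted watch time, and nothing anywhere cares what the videos actually say:

```python
# Toy engagement-maximizing recommender, illustrative only. There is
# no term for whether content is true, healthy, or radicalizing --
# only for how long it will keep the viewer watching.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # output of some engagement model

def recommend(candidates: list[Video], k: int = 5) -> list[Video]:
    # "show people videos that make them want to watch more videos"
    ranked = sorted(candidates, key=lambda v: v.predicted_watch_minutes,
                    reverse=True)
    return ranked[:k]
```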

ur in my world now
Jun 5, 2006

Same as it ever was
Same as it ever was
Same as it ever was
Same as it ever was


Smellrose
it's time for the comrades of the dick-sucking factories and the racism factories to put aside their differences and work together. workers of the world, unite!

Main Paineframe
Oct 27, 2010

vyelkin posted:

they're already doing this. there was a story recently about how police somewhere were using *~algorithms~* to determine who was likely to commit a crime, and then they would go harass those people even if they hadn't done anything wrong. surprise surprise, garbage in, garbage out: it was just another excuse for the cops to go bother innocent people of colour

well, the algorithm was that every time the police suspect you of a crime you get offender points, and every time you're arrested for a crime you get more offender points, even if you're found innocent or not charged at all. you get a bonus point modifier if the cops say you're a gang member or if your name appears frequently in police reports (even as a witness or victim)

the algorithm was just a simple points system that ranked people by suspiciousness, based solely on how many times cops thought that person was suspicious. it's literally just using a computer and fancy tech words to launder the subjective judgment of individual police officers so that they can get away with running a harassment program
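
written out as code (a hypothetical reconstruction from descriptions of the program; the actual point values aren't public), the whole "algorithm" is about this deep:

```python
# Hypothetical reconstruction of the described points system; the real
# point values aren't public. Every input is itself a product of
# police discretion, so the "score" just launders that discretion.

def offender_score(times_suspected, times_arrested,
                   says_gang_member, report_mentions):
    score = times_suspected * 1      # suspected of a crime: points
    score += times_arrested * 2      # arrested: more points, even if
                                     # found innocent or never charged
    if says_gang_member:             # cops say you're a gang member:
        score *= 1.5                 # bonus modifier
    if report_mentions > 5:          # name appears often in police
        score *= 1.5                 # reports, even as witness/victim
    return score
```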

Atrocious Joe
Sep 2, 2011

damn this algorithm
https://twitter.com/AliAbunimah/status/1308810241291808768?s=20

exmarx
Feb 18, 2012


The experience over the years
of nothing getting better
only worse.

StashAugustine posted:

butlerian jihad now

StashAugustine
Mar 24, 2013

Do not trust in hope- it will betray you! Only faith and hatred sustain.

I'm taking a biometrics course to round out a double major and I think it's driving me insane. Lecture this morning was on facial recognition of mugshots (all the mugshots were of black people), then later in lab we had a news report about British police employing human "super recognizers" to surveil protests. It's all some fucking Voight-Kampff shit

Deep Dish Fuckfest
Sep 6, 2006

Advanced
Computer Touching


Toilet Rascal
a ghost in the machine

TehSaurus
Jun 12, 2006

Deep Dish Fuckfest posted:

a ghost in the machine

The machine is a ghost of racisms past.

e-dt
Sep 16, 2019

The racism computer? I think we should turn it off, OP.

Hodgepodge
Jan 29, 2006
Probation
Can't post for 233 days!

Victory Position posted:

fully automated and engineered racism, produced at breakneck speeds and raced at the same

*changes the setting in the racism computer from "black" to "white"*

little trick i picked up from a show called rick and morty

e-dt
Sep 16, 2019

But for real it's a huge problem how companies can use these inherently opaque "algorithms" (they're not really algorithms, by the way, because they're not really an understandable series of steps except in the most basic sense that all software is), these machine learning methods to launder their racism. Because you can't really know how they work, the companies can get away with shrugging and saying "we don't know how this works or how it does the racism, oh well". But they do know how it learned the racism - the racist training data that e.g. overrepresents white people, or is based on previous racist manual decisions. The opaqueness is just a smokescreen.

StashAugustine
Mar 24, 2013

Do not trust in hope- it will betray you! Only faith and hatred sustain.

The Machine That Won The Race War

Colonel Cancer
Sep 26, 2015

Tune into the fireplace channel, you absolute buffoon
I wonder what other industry will get fully automated next. Maybe murder?

Tulip
Jun 3, 2008

yeah thats pretty good


Chamale posted:

Yeah, the Youtube algorithm was programmed to show people videos that make them want to watch more videos. It turns out that white supremacists and conspiracy theorists watch a ton of Youtube videos, so the algorithm did what it was programmed to do and started trying to convert people to that mindset.

Importantly, they also watch ads all the way through, and "a small core of people who will watch 16 hours of advertised content without skipping or adblocking" turns out to be pretty fucking valuable to what YouTube wants. Pretty fucking bad that those groups are mostly "babies" and "conspiracy nuts."

TehSaurus
Jun 12, 2006

e-dt posted:

But for real it's a huge problem how companies can use these inherently opaque "algorithms" (they're not really algorithms, by the way, because they're not really an understandable series of steps except in the most basic sense that all software is), these machine learning methods to launder their racism. Because you can't really know how they work, the companies can get away with shrugging and saying "we don't know how this works or how it does the racism, oh well". But they do know how it learned the racism - the racist training data that e.g. overrepresents white people, or is based on previous racist manual decisions. The opaqueness is just a smokescreen.

The sad thing is you can substantially limit the bias of a machine learning system in fairly well understood ways. But that's harder than just building a racism machine that converts misery to cash, so nobody does it.
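
For instance (one of the well-understood ways; the names and threshold here are made up for illustration), you evaluate per group and gate on the worst group instead of the overall average:

```python
# Sketch of one standard mitigation: report accuracy per group and
# refuse to ship a model whose *worst* group is below the bar, rather
# than gating on the single overall number.

def per_group_accuracy(predictions, labels, groups):
    """Return {group: accuracy} from parallel lists."""
    stats = {}
    for pred, label, group in zip(predictions, labels, groups):
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (pred == label), total + 1)
    return {g: c / t for g, (c, t) in stats.items()}

def ok_to_ship(per_group, min_accuracy=0.85):
    # Gate on the worst-performing group, not the average.
    return min(per_group.values()) >= min_accuracy
```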

I don't have any idea how you fix youtube though. turn that shit off and jettison it into the sun, goddamn.

Random Asshole
Nov 8, 2010

TehSaurus posted:

I don't have any idea how you fix youtube though. turn that shit off and jettison it into the sun, goddamn.

I mean, it'd be pretty easy to fix if Google was even remotely interested in fixing it.

Deep Dish Fuckfest
Sep 6, 2006

Advanced
Computer Touching


Toilet Rascal
that's just unrealistic idealism. a corporation with more than 150 billion in revenue and 30 billion in income doesn't really have the means to fix something like that

sorry

Farm Frenzy
Jan 3, 2007


lol

Grand Theft Autobot
Feb 28, 2008

I'm something of a fucking idiot myself

bvj191jgl7bBsqF5m posted:

https://twitter.com/NeutralthaWolf/status/1307907048726814722?s=19

I see this a lot from people that don't know things about computers, so I wonder if there is a way to demystify algorithms and explain that they are a set of steps that a human writes for a computer to follow, which can take on the biases of the people writing them, and are not magic that phases in and out of existence unracistly

You're right, but the designers of the neural networks themselves do not always understand why a particular decision was made or at what stage. Hence why Twitter has acknowledged the problem with image cropping and why they say they're struggling to find a solution.

Main Paineframe
Oct 27, 2010

Grand Theft Autobot posted:

You're right, but the designers of the neural networks themselves do not always understand why a particular decision was made or at what stage. Hence why Twitter has acknowledged the problem with image cropping and why they say they're struggling to find a solution.

From Twitter's perspective, do you think they actually think there's a problem? As long as the cropping maximizes engagements, views, and shares, I doubt they're seriously bothered by racism and sexism in the algorithm. After all, that's why the issue happened in the first place - they prioritized their metrics above equitable behavior or benefiting society. I imagine they're just paying lip service to the idea of there being a problem while they wait for the meme to pass out of the headlines.

T-man
Aug 22, 2010


Talk shit, get bzzzt.

What if we all pray to the Wire and hope the goddess of the super online takes pity on us?

Victory Position
Mar 16, 2004

Main Paineframe posted:

From Twitter's perspective, do you think they actually think there's a problem? As long as the cropping maximizes engagements, views, and shares, I doubt they're seriously bothered by racism and sexism in the algorithm. After all, that's why the issue happened in the first place - they prioritized their metrics above equitable behavior or benefiting society. I imagine they're just paying lip service to the idea of there being a problem while they wait for the meme to pass out of the headlines.

you'll hear about it in three or six months

Random Asshole
Nov 8, 2010

What's your favorite racism computer, guys?

Personally, I really like the mall robot whose anti-children programming made it racist against little people. Like all great twists, it's impossible to see coming but obvious in retrospect.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Grand Theft Autobot posted:

You're right, but the designers of the neural networks themselves do not always understand why a particular decision was made or at what stage. Hence why Twitter has acknowledged the problem with image cropping and why they say they're struggling to find a solution.

This is true, but the question then becomes why are you using ML at all if you know it produces racist, sexist outcomes and you have no idea how to make it do otherwise (because that’s impossible because it is trained on data generated by our racist, sexist society).

ate shit on live tv
Feb 15, 2004

by Azathoth

YOLOsubmarine posted:

This is true, but the question then becomes why are you using ML at all if you know it produces racist, sexist outcomes and you have no idea how to make it do otherwise (because that’s impossible because it is trained on data generated by our racist, sexist society).

ML has a lot of uses even if it's not the ideal tool for creating a perfectly equitable society. if the goal was to make a perfectly equitable society using ML, you might have a point. but if the goal for ML is making money via data sorting based on fuzzy pattern matching, and it's the best tool for the job so far, that's what you would use. seems straightforward to me.


YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

ate shit on live tv posted:

ML has a lot of uses even if it's not the ideal tool for creating a perfectly equitable society. if the goal was to make a perfectly equitable society using ML, you might have a point. but if the goal for ML is making money via data sorting based on fuzzy pattern matching, and it's the best tool for the job so far, that's what you would use. seems straightforward to me.

I didn’t say don’t use ML, I said don’t use it when the results are lovely and it literally just reproduces structural inequalities already baked into society. It’s a tool, and a fairly dull one at times. But plenty of companies treat it like the problem is in the tool (we don’t know why the model is racist, it’s unintentional, we’re trying to fix it) as opposed to their choice to use the tool for that particular problem.
