|
CNNs (which are the main technique for this kind of computer vision these days) are famous both for replicating any biases in their training data, and for performing very poorly when trained on an insufficiently broad set of examples. They are also trained against very large example sets, and usually judged on overall accuracy--outside of this overall success metric, the importance of particular cases mostly relies on engineer followup and judgement. So if black faces are insufficiently represented in the training data, and/or you optimize your machine for an overall accuracy metric that can still be very good even when black faces aren't detected as well as white ones, you'll have some problems. Like, a classifier that is 10x better at identifying white male faces at the expense of a 4x reduction in detection accuracy for everyone else may score "higher" than a more neutral one, due to inherent biases in the training data. Additionally, white engineers are less likely to notice/care about the biases in the training data, and even less likely to notice/care about poor performance on black faces as long as the overall score of the algorithm goes up.

Another (bigger) problem: can you really say a data set is "biased" if it is accurately reflecting the real world? If reality itself underrepresents or undervalues black women, for example, the most "accurate" machine will often be similarly biased. If the data is actually full of racist patterns, then a pattern recognition machine will itself be racist. This was famously the case when companies used machine learning to "predict" the probability of a candidate being hired. It's actually kind of a beautiful and horrifying illustration of the nature of institutional racism vs. individual bigotry.
Even a stone cold dumb machine, with no concept of race, running a totally abstract, platonic ideal of an algorithm which itself has no human input or guidance and is indeed totally opaque to its human creators (i.e. an artificial neural network), cannot help but replicate the racist nature of the reality it is trained to understand. You can try to band-aid the problem by going in and tweaking training data or changing metrics--in much the same way that you can try to address institutionalized racism via individual wokeness--but fundamentally the racist structure of society guides its constituents into racist patterns almost irrespective of their individual programming or goals, and without requiring their consent or even awareness.
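To make the overall-accuracy trap concrete, here's a back-of-the-envelope sketch (every number is fabricated for illustration):

```python
# A classifier evaluated on an imbalanced test set: 900 faces from the
# overrepresented group, 100 from the underrepresented one.
group_a_total, group_b_total = 900, 100
group_a_correct = 891  # 99% accuracy on the overrepresented group
group_b_correct = 60   # 60% accuracy on the underrepresented group

overall = (group_a_correct + group_b_correct) / (group_a_total + group_b_total)
print(f"overall accuracy: {overall:.1%}")                          # 95.1%
print(f"group A accuracy: {group_a_correct / group_a_total:.1%}")  # 99.0%
print(f"group B accuracy: {group_b_correct / group_b_total:.1%}")  # 60.0%
```

An engineer watching only the first number sees a 95% classifier; the 60% is invisible unless somebody bothers to break the metric out by group.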
|
# ? Sep 21, 2020 20:32 |
|
a similar thing happened with film/cameras when they were first developed. dark skin has different lighting/calibration needs, and it wasn't until kodak developed new film stock that black people could be captured properly (and kodak only fixed it because it was difficult to differentiate furniture made of dark wood). someone might respond with "well black people are just harder to film, light can't be racist", which misses the point that it's almost never the inverse. there's never a product whose userbase includes a white guy where his experience is worse than everyone else's, because it won't be released before that's fixed. technology and techniques are developed with a specific user in mind, and that person is usually a cis-het white guy. if you hear techdorks start with excuses about why this algorithm had to be this way, tell them to gently caress off. (vox video warning) bias in film: https://www.youtube.com/watch?v=d16LNHIEJzs
|
# ? Sep 22, 2020 04:30 |
|
IAMKOREA posted:Was your racist YouTube Swede the fat guy who goes and makes coffee from a pile of twigs he's gathered in the forest while wearing 1500 eur of Fjallraven gear in a little clearing that's probably 20 meters from his car and he gives bizzare lectures on masculinity and how modern society sucks because you can't pee anywhere you want? I feel like there's probably a lot of racist YouTube Swedes, but I'm sure we can get to the bottom of this. Giga Gaia posted:if you hadnt have said fat id have thought you were talking about varg
|
# ? Sep 22, 2020 06:06 |
|
well you see, confirmation bias is a huge problem for human cognition, so if we automate it and take humans out of the loop we can finally stop feeling guilty and blame the computer
|
# ? Sep 22, 2020 06:11 |
|
Everyone who works in computer vision is a cop
|
# ? Sep 22, 2020 07:08 |
|
https://twitter.com/aftertheboop/status/1308091057863888896
|
# ? Sep 22, 2020 10:07 |
|
Lmao
|
# ? Sep 22, 2020 10:28 |
|
Now that I actually understand what these are, this is one of the funniest things I’ve seen on the internet. e: I mean, the one I quoted, specifically, not the algorithm’s flaws in general
|
# ? Sep 22, 2020 14:48 |
|
incredible. a perfect tweet.
|
# ? Sep 22, 2020 14:55 |
|
that darned algorithm also picked a bunch of nazis to be twitter mods, they should hire better programmers
|
# ? Sep 22, 2020 15:08 |
we gotta stop the racist computers before they steal our racism factory jobs. mr president, it's time to build the wall
|
|
# ? Sep 22, 2020 16:18 |
|
fully automated and engineered racism, produced at breakneck speeds and raced at the same
|
# ? Sep 22, 2020 16:46 |
the way racism technology has advanced, we could all be down to a 12-hour racism workweek by now, but that wouldn't suit the needs of racist capital which needs to squeeze out every ounce of our racism for its profits
|
|
# ? Sep 22, 2020 16:50 |
|
DemoneeHo posted:Good news is that right now the algorithm can't accurately identify the faces of black people, so they don't need to worry about privacy infringement as much. they're already doing this, there was a story recently about how police somewhere were using *~algorithms~* to determine who was likely to commit a crime and then they would go harass those people even if they hadn't done anything wrong and surprise surprise garbage in garbage out it was just another excuse for the cops to go bother innocent people of colour
|
# ? Sep 22, 2020 16:53 |
|
Real Mean Queen posted:I figured the title was referring to that microsoft (?) chatbot that went from “hello world” to “hitler did nothing wrong” within like a day, but I was also thinking it might refer to the youtube algorithm that says “oh you like any kind of video about any topic? We think you’ll enjoy racism.” The fact that “the racism computer” is vague enough to not refer to any one racist computer makes me think that maybe we’re not doing a great job having computers. Yeah, the Youtube algorithm was programmed to show people videos that make them want to watch more videos. It turns out that white supremacists and conspiracy theorists watch a ton of Youtube videos, so the algorithm did what it was programmed to do and started trying to convert people to that mindset.
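The objective described here fits in a few lines. This is a toy sketch, not YouTube's actual system; the titles and watch-time numbers are made up:

```python
# An engagement-only recommender: it sees predicted watch time and nothing
# else, so whatever keeps people glued to the screen wins the ranking.
videos = [
    {"title": "calm documentary", "predicted_minutes": 8},
    {"title": "conspiracy rabbit hole pt. 37", "predicted_minutes": 95},
    {"title": "cooking tutorial", "predicted_minutes": 12},
]
ranked = sorted(videos, key=lambda v: v["predicted_minutes"], reverse=True)
print(ranked[0]["title"])  # the rabbit hole tops the feed
```

Nothing in the objective knows or cares what the top video is about; it only knows that a certain audience never stops watching it.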
|
# ? Sep 22, 2020 17:19 |
it's time for the comrades of the dick-sucking factories and the racism factories to put aside their differences and work together. workers of the world, unite!
|
|
# ? Sep 22, 2020 17:23 |
|
vyelkin posted:they're already doing this, there was a story recently about how police somewhere were using *~algorithms~* to determine who was likely to commit a crime and then they would go harass those people even if they hadn't done anything wrong and surprise surprise garbage in garbage out it was just another excuse for the cops to go bother innocent people of colour well, the algorithm was that every time the police suspect you of a crime you get offender points, and every time you're arrested for a crime you get more offender points, even if you're found innocent or not charged at all. you get a bonus point modifier if the cops say you're a gang member or if your name appears frequently in police reports (even as a witness or victim). the algorithm was just a simple points system that ranked people by suspiciousness, based solely on how many times cops thought that person was suspicious. it's literally just using a computer and fancy tech words to launder the subjective judgment of individual police officers so that they can get away with running a harassment program
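That points system is simple enough to sketch outright. The weights and field names below are invented (the real values weren't published), but the structure matches the description: every input measures police attention, none measures guilt:

```python
# Hypothetical weights -- every input comes from police behaviour,
# none from convictions.
def suspicion_score(stops, arrests, report_mentions, flagged_as_gang_member):
    score = stops * 1 + arrests * 2 + report_mentions * 1
    if flagged_as_gang_member:
        score *= 1.5  # the "bonus point modifier"
    return score

# Someone the cops keep stopping outranks someone they ignore,
# regardless of whether either ever did anything.
harassed = suspicion_score(stops=5, arrests=2, report_mentions=3,
                           flagged_as_gang_member=True)
ignored = suspicion_score(stops=0, arrests=0, report_mentions=0,
                          flagged_as_gang_member=False)
print(harassed, ignored)  # 18.0 0
```

There's no machine learning anywhere in it: the "suspiciousness ranking" is a deterministic function of how often officers chose to interact with you.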
|
# ? Sep 22, 2020 19:51 |
|
drat this algorithm https://twitter.com/AliAbunimah/status/1308810241291808768?s=20
|
# ? Sep 23, 2020 18:13 |
StashAugustine posted:butlerian jihad now
|
|
# ? Sep 23, 2020 20:50 |
|
I'm taking a biometrics course to round out a double major and I think it's driving me insane. Lecture this morning was on facial recognition of mugshots (all mugshots were of black people), then later in lab we had a news report about British police employing human "super recognizers" to surveil protests. It's all some loving Voight-Kampff poo poo
|
# ? Sep 23, 2020 21:05 |
|
a ghost in the machine
|
# ? Sep 23, 2020 21:24 |
|
Deep Dish Fuckfest posted:a ghost in the machine The machine is a ghost of racisms past.
|
# ? Sep 23, 2020 21:47 |
The racism computer? I think we should turn it off, OP.
|
|
# ? Sep 23, 2020 23:12 |
|
Victory Position posted:fully automated and engineered racism, produced at breakneck speeds and raced at the same *changes the setting in the racism computer from "black" to "white"* little trick i picked up from a show called rick and morty
|
# ? Sep 23, 2020 23:15 |
But for real it's a huge problem how companies can use these inherently opaque "algorithms" (they're not really algorithms, by the way, because they're not really an understandable series of steps except in the most basic sense that all software is), these machine learning methods to launder their racism. Because you can't really know how they work, the companies can get away with shrugging and saying "we don't know how this works or how it does the racism, oh well". But they do know how it learned the racism - the racist training data that e.g. overrepresents white people, or is based on previous racist manual decisions. The opaqueness is just a smokescreen.
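The "previous racist manual decisions" failure mode doesn't need a neural network to demonstrate; even the dumbest possible model inherits it. A toy sketch with fabricated hiring data:

```python
from collections import defaultdict

# Fabricated history of manual hiring decisions, skewed against group_b.
past_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, hired in past_decisions:
    counts[group][0] += hired
    counts[group][1] += 1

def predict_hire(group):
    # The "model" just reproduces whatever the past hire rate was.
    hires, total = counts[group]
    return hires / total > 0.5

print(predict_hire("group_a"), predict_hire("group_b"))  # True False
```

A real ML system does this with thousands of proxy features instead of one label, which makes the laundering harder to see but doesn't change what's being laundered.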
|
|
# ? Sep 23, 2020 23:20 |
|
The Machine That Won The Race War
|
# ? Sep 23, 2020 23:21 |
|
I wonder what other industry will get fully automated next. Maybe murder?
|
# ? Sep 23, 2020 23:25 |
|
Chamale posted:Yeah, the Youtube algorithm was programmed to show people videos that make them want to watch more videos. It turns out that white supremacists and conspiracy theorists watch a ton of Youtube videos, so the algorithm did what it was programmed to do and started trying to convert people to that mindset. Importantly they also will watch the whole way through ads, and "a small core of people who will watch 16 hours of advertised content without skipping or adblocking" turns out to be pretty loving valuable to what YouTube wants. Pretty loving bad that those groups are mostly "babies" and "conspiracy nuts."
|
# ? Sep 24, 2020 00:21 |
|
e-dt posted:But for real it's a huge problem how companies can use these inherently opaque "algorithms" (they're not really algorithms, by the way, because they're not really an understandable series of steps except in the most basic sense that all software is), these machine learning methods to launder their racism. Because you can't really know how they work, the companies can get away with shrugging and saying "we don't know how this works or how it does the racism, oh well". But they do know how it learned the racism - the racist training data that e.g. overrepresents white people, or is based on previous racist manual decisions. The opaqueness is just a smokescreen. The sad thing is you can substantially limit the bias of a machine learning system in fairly well understood ways. But it is harder than just building a racism machine that converts misery to cash so nobody does this. I don't have any idea how you fix youtube though turn that poo poo off and jettison it into the sun goddamn.
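One of those "fairly well understood ways" is reweighting the training data so each group contributes equally to the loss (analogous to scikit-learn's "balanced" class-weight formula); a sketch with made-up group sizes:

```python
from collections import Counter

# Made-up group labels for each training example: heavy skew toward "a".
groups = ["a"] * 900 + ["b"] * 100

freq = Counter(groups)
n_groups = len(freq)
# weight = n_samples / (n_groups * group_count), so each group's examples
# sum to the same total weight during training.
weights = {g: len(groups) / (n_groups * c) for g, c in freq.items()}

total_a = weights["a"] * freq["a"]  # 500.0
total_b = weights["b"] * freq["b"]  # 500.0
print(weights, total_a, total_b)
```

It doesn't make the underlying data any less racist; it just stops the majority group from drowning out the loss signal, which is exactly why it's a mitigation and not a fix.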
|
# ? Sep 24, 2020 01:38 |
|
TehSaurus posted:I don't have any idea how you fix youtube though turn that poo poo off and jettison it into the sun goddamn. I mean, it'd be pretty easy to fix if Google was even remotely interested in fixing it.
|
# ? Sep 24, 2020 01:53 |
|
that's just unrealistic idealism. a corporation with more than 150 billion in revenue and 30 billion in income doesn't really have the means to fix something like that sorry
|
# ? Sep 24, 2020 03:42 |
|
lol
|
# ? Sep 24, 2020 03:54 |
|
bvj191jgl7bBsqF5m posted:https://twitter.com/NeutralthaWolf/status/1307907048726814722?s=19 You're right, but the designers of the neural networks themselves do not always understand why a particular decision was made or at what stage. Hence, why Twitter has acknowledged the problem with image cropping and why they say they are struggling to find a solution.
|
# ? Sep 24, 2020 12:59 |
|
Grand Theft Autobot posted:You're right, but the designers of the neural networks themselves do not always understand why a particular decision was made or at what stage. Hence, why Twitter has acknowledged the problem with image cropping and why they say they are struggling to find a solution. From Twitter's perspective, do you think they actually think there's a problem? As long as the cropping maximizes engagements, views, and shares, I doubt they're seriously bothered by racism and sexism in the algorithm. After all, that's why the issue happened in the first place - they prioritized their metrics above equitable behavior or benefiting society. I imagine they're just paying lip service to the idea of there being a problem while they wait for the meme to pass out of the headlines.
|
# ? Sep 24, 2020 13:22 |
|
What if we all pray to the Wire and hope the goddess of the super online takes pity on us?
|
# ? Sep 24, 2020 17:45 |
|
Main Paineframe posted:From Twitter's perspective, do you think they actually think there's a problem? As long as the cropping maximizes engagements, views, and shares, I doubt they're seriously bothered by racism and sexism in the algorithm. After all, that's why the issue happened in the first place - they prioritized their metrics above equitable behavior or benefiting society. I imagine they're just paying lip service to the idea of there being a problem while they wait for the meme to pass out of the headlines. you'll hear about it in three or six months
|
# ? Sep 24, 2020 17:49 |
|
What's your favorite racism computer, guys? Personally, I really like the mall robot whose anti-children programming made it racist against little people. Like all great twists, it's impossible to see coming but obvious in retrospect.
|
# ? Sep 24, 2020 19:28 |
|
Grand Theft Autobot posted:You're right, but the designers of the neural networks themselves do not always understand why a particular decision was made or at what stage. Hence, why Twitter has acknowledged the problem with image cropping and why they say they are struggling to find a solution. This is true, but the question then becomes why are you using ML at all if you know it produces racist, sexist outcomes and you have no idea how to make it do otherwise (because that’s impossible because it is trained on data generated by our racist, sexist society).
|
# ? Sep 24, 2020 20:15 |
|
YOLOsubmarine posted:This is true, but the question then becomes why are you using ML at all if you know it produces racist, sexist outcomes and you have no idea how to make it do otherwise (because that’s impossible because it is trained on data generated by our racist, sexist society). ML has a lot of uses even if it's not the ideal tool for creating a perfectly equitable society. if the goal was to make a perfectly equitable society using ML you might have a point, but if the goal for ML is making money via data sorting based on fuzzy pattern matching and it's the best tool for the job so far, that's what you would use. seems straightforward to me.
|
# ? Sep 24, 2020 22:20 |
|
|
ate poo poo on live tv posted:ML has a lot of uses even if it's not the ideal tool for creating a perfectly equitable society. if the goal was to make a perfectly equitable society using ML you might have a point, but if the goal for ML is making money via data sorting based on fuzzy pattern matching and it's the best tool for the job so far, that's what you would use. seems straightforward to me. I didn’t say don’t use ML, I said don’t use it when the results are lovely and it literally just reproduces structural inequalities already baked into society. It’s a tool, and a fairly dull one at times. But plenty of companies treat it like the problem is in the tool (we don’t know why the model is racist, it’s unintentional, we’re trying to fix it) as opposed to their choice to use the tool for that particular problem.
|
# ? Sep 24, 2020 22:49 |