|
oh for sure. i thought this was some kind of "all tools are products of a racist society so we shouldn't use them" take. but in this case the answer is, capitalism will make "number go up" and drat the consequences. not climate change, racism, loss of biodiversity or human lives will be heeded, ever.
|
# ? Sep 24, 2020 23:07 |
|
|
# ? Jun 10, 2024 11:41 |
|
YOLOsubmarine posted:I didn't say don't use ML, I said don't use it when the results are lovely and it literally just reproduces structural inequalities already baked into society. It's a tool, and a fairly dull one at times. But plenty of companies treat it like the problem is in the tool (we don't know why the model is racist, it's unintentional, we're trying to fix it) as opposed to their choice to use the tool for that particular problem.

But the very definition of a racist society in the first place is that "reproducing structural inequalities" is a feature, not a bug. Just look at how badly people lose their poo poo when you do anything else. This isn't to say companies can't* or shouldn't do better, but it's not exactly mysterious why they favor a machine that performs well in the ways they care about without being bothered by it performing poorly in ways they don't care about.

*lol capitalism
|
# ? Sep 24, 2020 23:32 |
|
Charles 2 of Spain posted:Everyone who works in computer vision is a cop
|
# ? Sep 24, 2020 23:38 |
|
e-dt posted:The racism computer? I think we should turn it off, OP. But how will I game
|
# ? Sep 25, 2020 01:47 |
|
The solution is ML ML Machine Learning Marxism Leninism
|
# ? Sep 25, 2020 01:54 |
|
StashAugustine posted:I'm taking a biometrics course to round out a double major and I think it's driving me insane. Lecture this morning was on facial recognition of mugshots (all mugshots were of black people) then later in lab we had a news report about British police employing human "super recognizers" to surveil protests. It's all some loving Voight-Kampff poo poo lol, what the gently caress is a "super recognizer"? How does someone get a job like that?
|
# ? Sep 25, 2020 01:57 |
|
YOLOsubmarine posted:I didn’t say don’t use ML, I said don’t use it when the results are lovely and it literally just reproduces structural inequalities already baked into society. It’s a tool, and a fairly dull one at times. But plenty of companies treat it like the problem is in the tool (we don’t know why the model is racist, it’s unintentional, we’re trying to fix it) as opposed to their choice to use the tool for that particular problem.

Like someone else said upthread, it's about being able to shift blame to the computer. As an example, I worked for a few years on an ML project for bail setting in criminal courts. Judges routinely gave black defendants higher bail amounts than white defendants charged with the same or similar offenses. Part of the reason was that judges used an evaluation tool that assigned points based on poo poo like stable employment, level of education, marital status, geography, whether the person owned or rented, and so on.

Some consultants came in with a solution: a bail evaluation tool with a machine learning algorithm that would take the racism out of the process with magic, and better yet the judges would no longer have the freedom to set bail amounts outside of the ranges recommended by the algorithm. As it turns out, not only did the ML tool make zanily racist decisions on our test data, it actually discriminated against black defendants even more so than the judges did. This was because judges often assigned lower bail amounts than the old tool recommended, and sometimes they would let defendants go home without bail to wait for trial, even if the tool said lock them up. Because judges, despite many of them having biases against non-white people, can overcome their biases with effort, compassion, and empathy. It's easier, in my view, to make judges better at correcting social injustices than it is to craft the perfect algorithm to do the hard work for us.

In the end, much of the drive toward ML in public service is a bunch of egghead liberals who think we can fix society by adjusting the knobs ever so carefully. As if we can just move the racism slider a little to the left and leave a parasitic and depraved institution in place. Cash bail should be abolished. Full stop.

Grand Theft Autobot has issued a correction as of 02:08 on Sep 25, 2020 |
# ? Sep 25, 2020 02:05 |
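The point-based evaluation tool described in the post above can be sketched roughly like this. To be clear, this is a toy illustration and not the actual tool: every feature name, weight, zip code, and bail band here is invented, but it shows the shape of scoring on socioeconomic proxies:

```python
# Toy sketch of a point-based bail evaluation tool like the one described
# above. All feature names, weights, zip codes, and bail bands are invented
# for illustration; the real tool's scoring is not given in the thread.

HIGH_RISK_ZIPS = {"60623", "60644"}  # invented example values

def bail_points(defendant):
    """Score a defendant on socioeconomic proxies, old-tool style."""
    points = 0
    if not defendant.get("stable_employment"):
        points += 2
    if defendant.get("education_years", 0) < 12:
        points += 2
    if not defendant.get("married"):
        points += 1
    if not defendant.get("owns_home"):
        points += 1
    if defendant.get("zip_code") in HIGH_RISK_ZIPS:  # geography as a proxy
        points += 3
    return points

def recommended_bail(points):
    """Map a point total to a recommended bail range (bands invented)."""
    if points <= 2:
        return (0, 0)           # release without bail
    if points <= 5:
        return (500, 2000)
    return (5000, 20000)
```

Note that race never appears as an input, yet geography, homeownership, and employment all correlate with it in a segregated society, which is exactly how a tool like this can reproduce the disparities it was supposed to remove.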
|
So how long until captchas make us click on pictures of black people.
|
# ? Sep 25, 2020 02:06 |
|
Morbus posted:But the very definition of a racist society in the first place is that "reproducing structural inequalities" is a feature, not a bug. Just look at how badly people lose their poo poo when you do anything else. I don't think it's mysterious that companies behave this way, they're just following their prime directive. It's still important to call bullshit on it and point out that it's evil when people try to come up with technocratic explanations for why twitter's algorithm simply must be racist, because mathematics itself demands it. Grand Theft Autobot posted:Like someone else said up thread, it's about being able to shift blame to the computer. I agree with all of this, I was merely pointing out that "decision making in neural net ML models is opaque even to the people developing them" is not a valid excuse for a company to keep using a model that they know is broken in pretty disgusting ways. To say nothing of releasing it publicly in the first place.
|
# ? Sep 25, 2020 02:35 |
|
Gods_Butthole posted:lol, what the gently caress is a "super recognizer"? How does someone get a job like that? In theory some people are abnormally good at spotting facial similarities, like how some people are savants at math or whatever, but I'm pretty much assuming that in practice it's an excuse to invent probable cause
|
# ? Sep 25, 2020 03:12 |
|
Gods_Butthole posted:lol, what the gently caress is a "super recognizer"? How does someone get a job like that?
|
# ? Sep 25, 2020 05:19 |
|
Dr Pepper posted:So how long until captchas make us click on pictures of black people. Not likely, the machines don't even realize they exist.
|
# ? Sep 25, 2020 05:25 |
|
plugging the yospos computer racism thread where we also talk about computer racism
|
# ? Sep 25, 2020 05:25 |
|
https://twitter.com/baumard_nicolas/status/1308715606196342784 Lmao what the hell is this
|
# ? Sep 25, 2020 15:50 |
|
Charles 2 of Spain posted:https://twitter.com/baumard_nicolas/status/1308715606196342784 evolutionary psychology for the social sciences, focused on economics
|
# ? Sep 25, 2020 16:17 |
|
Main Paineframe posted:evolutionary psychology for the social sciences, focused on economics powerfully cursed premise
|
# ? Sep 25, 2020 16:50 |
|
Grand Theft Autobot posted:Like someone else said up thread, it's about being able to shift blame to the computer. really cool post. i enjoyed it and will retell it to friends without citing you as anyone other than "some guy online posted." bl*ss
|
# ? Sep 25, 2020 17:08 |
|
Charles 2 of Spain posted:https://twitter.com/baumard_nicolas/status/1308715606196342784 I read it and the idea is that you can tell what people find normal/expected/desirable by how artists draw portraits, because it's never photorealistic and they always exaggerate or deemphasize some features. So they took some large random samples of people, surveyed what faces they found trustworthy, trained an AI on that data, and then applied the AI to portraits. The result is that perceived trustworthiness has increased as crime has fallen and large-scale social cohesion has gone up, and it's correlated with GDP, probably because people will act more honorably when making one mistake won't lead them to die
|
# ? Sep 25, 2020 18:06 |
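The pipeline described in the post above (fit a model on surveyed trustworthiness ratings of faces, then score portrait features with it) can be sketched in a few lines. Everything here is a synthetic stand-in: the feature encoding, the data, and the weights are all invented for illustration, not taken from the study.

```python
import numpy as np

# Toy sketch of the described pipeline: learn a "perceived trustworthiness"
# model from survey ratings of faces, then apply it to feature vectors
# extracted from portraits. All data and features are invented.

rng = np.random.default_rng(0)

# Pretend each face is summarized by 3 measured features
# (e.g. brow angle, mouth curvature, eye spacing).
survey_faces = rng.normal(size=(50, 3))
true_w = np.array([0.5, 1.0, -0.3])  # hidden relation generating the toy ratings
ratings = survey_faces @ true_w + rng.normal(scale=0.1, size=50)

# Least-squares fit: recover the trustworthiness weights from the survey data.
w, *_ = np.linalg.lstsq(survey_faces, ratings, rcond=None)

# Apply the fitted model to "portrait" feature vectors from different eras;
# these scores are what would then be correlated with crime rates or GDP.
portraits = rng.normal(size=(10, 3))
scores = portraits @ w
```

The contested step, of course, is not the regression but the leap from "portrait artists drew these features" to claims about historical trust and GDP.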
|
Gods_Butthole posted:lol, what the gently caress is a "super recognizer"? How does someone get a job like that? Racist with an excuse.
|
# ? Sep 25, 2020 18:22 |
|
https://twitter.com/adamjohnsonNYC/status/1309126364205850627?s=20 I realize this was actual people at tech companies making the decision, and not the *~algorithm~* but this is still infuriating.
|
# ? Sep 25, 2020 18:53 |
|
Jewel Repetition posted:I read it and the idea is that you can tell what people find normal/expected/desirable by how artists draw portraits, because it's never photorealistic and they always exaggerate or deemphasize some features. So they took some large random samples of people, surveyed what faces they found trustworthy, trained an AI on that data, and then applied the AI to portraits. The result is that perceived trustworthiness has increased as crime has fallen and large-scale social cohesion has gone up, and it's correlated with GDP, probably because people will act more honorably when making one mistake won't lead them to die oh boy, just so stories with graphs, thanks evo psych
|
# ? Sep 25, 2020 19:31 |
|
Real Mean Queen posted:I figured the title was referring to that microsoft (?) chatbot that went from “hello world” to “hitler did nothing wrong” within like a day, but I was also thinking it might refer to the youtube algorithm that says “oh you like any kind of video about any topic? We think you’ll enjoy racism.” The fact that “the racism computer” is vague enough to not refer to any one racist computer makes me think that maybe we’re not doing a great job having computers. https://www.youtube.com/watch?v=HsLup7yy-6I probably :nsfw:
|
# ? Sep 25, 2020 19:37 |
|
A Buttery Pastry posted:First you must watch a TON of porn. Done (with gusto), what's the next step?
|
# ? Sep 25, 2020 20:51 |
|
Smythe posted:really cool post. i enjoyed it and will retell it to friends without citing you as anyone other than "some guy online posted." bl*ss "Weapons of Math Destruction" and "Technically Wrong" are both books that cover this same ground and have a lot of similar examples. There's also this ProPublica article about COMPAS, which is black-box software that generates a recidivism score that judges use during sentencing. quote:ON A SPRING AFTERNOON IN 2014, Brisha Borden was running late to pick up her god-sister from school when she spotted an unlocked kid's blue Huffy bicycle and a silver Razor scooter. Borden and a friend grabbed the bike and scooter and tried to ride them down the street in the Fort Lauderdale suburb of Coral Springs. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
|
# ? Sep 25, 2020 21:22 |
Charles 2 of Spain posted:https://twitter.com/baumard_nicolas/status/1308715606196342784 i want to die
|
|
# ? Sep 26, 2020 01:35 |
|
|
# ? Sep 26, 2020 02:06 |
|
Jewel Repetition posted:I read it and the idea is that you can tell what people find normal/expected/desirable by how artists draw portraits, because it's never photorealistic and they always exaggerate or deemphasize some features. So they took some large random samples of people, surveyed what faces they found trustworthy, trained an AI on that data, and then applied the AI to portraits. The result is that perceived trustworthiness has increased as crime has fallen and large-scale social cohesion has gone up, and it's correlated with GDP, probably because people will act more honorably when making one mistake won't lead them to die They trained it on portraits of rich fucks who were never one mistake from death.
|
# ? Sep 26, 2020 03:37 |
|
i think im now mentally prepared that if i don't get a job when i graduate i can just become the unabomber
|
# ? Sep 26, 2020 05:18 |
|
StashAugustine posted:i think im now mentally prepared that if i don't get a job when i graduate i can just become the unabomber goonabomber
|
# ? Sep 26, 2020 05:45 |
|
Platystemon posted:They trained it on portraits of rich fucks who were never one mistake from death. Yeah, but it wasn't measuring whether those people actually had the features, it was measuring whether the portrait artists gave them those features, and therefore the features were expected/valued
|
# ? Sep 26, 2020 05:50 |
|
Is blackface okay if you're doing it to hide from a Killbot?
|
# ? Sep 26, 2020 05:59 |
|
No but for other reasons than you think
|
# ? Sep 26, 2020 06:00 |
|
lmao
|
# ? Sep 26, 2020 06:03 |
|
Gods_Butthole posted:Done (with gusto), what's the next step?
|
# ? Sep 26, 2020 06:03 |
|
Colonel Cancer posted:No but for other reasons than you think It's wrong to hide from your just deserts?
|
# ? Sep 26, 2020 06:03 |
|
lol
|
# ? Sep 26, 2020 06:47 |
|
https://twitter.com/Simon_Whitten/status/1309382555918049282
|
# ? Sep 26, 2020 08:28 |
|
YOLOsubmarine posted:"Weapons of Math Destruction" and "Technically Wrong" are both books that cover this same ground and have a lot of similar examples. given the mentoring required to serve on the highest court in the land and everything on down, it sounds like judges are trained from a very early age to be complete babies
|
# ? Sep 26, 2020 08:37 |
I'm going to become unable to tell the difference between black people so that I can get a job as a super recogniser
|
|
# ? Sep 26, 2020 13:08 |
|
|
# ? Jun 10, 2024 11:41 |
|
StashAugustine posted:i think im now mentally prepared that if i don't get a job when i graduate i can just become the unabomber finally the qcs dream realized
|
# ? Sep 26, 2020 13:34 |