Grand Theft Autobot
Feb 28, 2008

I'm something of a fucking idiot myself

bvj191jgl7bBsqF5m posted:

https://twitter.com/NeutralthaWolf/status/1307907048726814722?s=19

I see this a lot from people who don't know much about computers, so I wonder if there is a way to demystify algorithms: explain that they're a set of steps a human writes for a computer to follow, that they can take on the biases of the people writing them, and that they are not magic that phases in and out of existence unracistly

You're right, but even the designers of the neural networks themselves do not always understand why a particular decision was made, or at what stage. That's why Twitter has acknowledged the problem with image cropping and says it is struggling to find a solution.


Grand Theft Autobot
Feb 28, 2008

I'm something of a fucking idiot myself

YOLOsubmarine posted:

I didn’t say don’t use ML, I said don’t use it when the results are lovely and it literally just reproduces structural inequalities already baked into society. It’s a tool, and a fairly dull one at times. But plenty of companies treat it like the problem is in the tool (we don’t know why the model is racist, it’s unintentional, we’re trying to fix it) as opposed to their choice to use the tool for that particular problem.

Like someone else said up thread, it's about being able to shift blame to the computer.

As an example, I worked for a few years on an ML project for bail setting in criminal courts. Judges routinely gave black defendants higher bail amounts than white defendants charged with the same or similar offenses. Part of the reason was that judges used an evaluation tool that assigned points based on poo poo like stable employment, level of education, marital status, geography, whether the person owned or rented, and so on.
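To make the problem concrete, here's a minimal sketch of what a points-based evaluation tool like that looks like. The feature names, weights, and zip codes are all made up for illustration, not the actual instrument; the point is that every input is a proxy for wealth and geography, so a "neutral" score encodes structural bias directly:

```python
# Illustrative placeholder values, not real data.
HIGH_RISK_ZIPS = {"60621", "60636"}

def bail_points(defendant):
    """Hypothetical points-based bail score. Every feature below is a
    proxy for wealth, class, or geography, so the score is 'objective'
    only in the sense that the bias is written down ahead of time."""
    points = 0
    if not defendant.get("stable_employment"):
        points += 3
    if defendant.get("education_years", 0) < 12:
        points += 2
    if not defendant.get("married"):
        points += 1
    if not defendant.get("owns_home"):
        points += 2
    if defendant.get("zip_code") in HIGH_RISK_ZIPS:  # geography as proxy
        points += 3
    return points
```

Nothing in that function mentions race, yet the inputs correlate with it, which is exactly how a "race-blind" tool produces racially skewed outputs.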

Some consultants came in with a solution: a bail evaluation tool with a machine learning algorithm that would take the racism out of the process with magic, and better yet the judges would no longer have the freedom to set bail amounts outside of the ranges recommended by the algorithm.

As it turns out, not only did the ML tool make zanily racist decisions on our test data, it actually discriminated against black defendants even more so than the judges did. This was because judges often assigned lower bail amounts than the old tool recommended, and sometimes they would let defendants go home without bail to wait for trial, even if the tool said to lock them up.
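The mechanism here is simple enough to show in a few lines. This is a toy illustration with synthetic numbers, not the project's actual model or data: if you train a model to predict historical bail decisions, the biased outcome *is* the target, so faithfully fitting the data means faithfully reproducing the disparity.

```python
from statistics import mean

# Synthetic history: (neighborhood_group, bail_set_by_old_process).
# Group B was historically assigned much higher bail.
history = [
    ("A", 500), ("A", 750), ("A", 600),
    ("B", 2000), ("B", 2500), ("B", 1800),
]

def fit_group_means(rows):
    """A 'model' that learns the average past bail per group -- the
    limiting behavior of any regressor given only this feature."""
    groups = {}
    for group, bail in rows:
        groups.setdefault(group, []).append(bail)
    return {g: mean(v) for g, v in groups.items()}

model = fit_group_means(history)
# The fitted model now recommends roughly 3x higher bail for group B,
# with no racism "in the algorithm" -- only in the data it was fit to.
```

And without the human discretion judges exercised (setting bail below the guideline, or releasing people outright), the floor drops out: the model's recommendation becomes the binding outcome.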

That's because judges, despite many of them having biases against non-white people, can overcome those biases with effort, compassion, and empathy. It's easier, in my view, to make judges better at correcting social injustices than it is to craft the perfect algorithm to do the hard work for us.

In the end, much of the drive toward ML in public service is a bunch of egghead liberals who think we can fix society by adjusting the knobs ever so carefully. As if we can just move the racism slider a little to the left and leave a parasitic and depraved institution in place. Cash bail should be abolished. Full stop.

Grand Theft Autobot has issued a correction as of 02:08 on Sep 25, 2020
