|
bvj191jgl7bBsqF5m posted: https://twitter.com/NeutralthaWolf/status/1307907048726814722?s=19 You're right, but the designers of neural networks themselves do not always understand why a particular decision was made, or at what stage. Hence why Twitter has acknowledged the problem with its image cropping and says it is struggling to find a solution.
|
# ¿ Sep 24, 2020 12:59 |
|
|
|
YOLOsubmarine posted: I didn't say don't use ML, I said don't use it when the results are bad and it literally just reproduces structural inequalities already baked into society. It's a tool, and a fairly dull one at times. But plenty of companies treat it like the problem is in the tool ("we don't know why the model is racist, it's unintentional, we're trying to fix it") as opposed to their choice to use the tool for that particular problem. Like someone else said upthread, it's about being able to shift blame to the computer.

As an example, I worked for a few years on an ML project for bail setting in criminal courts. Judges routinely gave black defendants higher bail amounts than white defendants charged with the same or similar offenses. Part of the reason was that judges used an evaluation tool that assigned points based on things like stable employment, level of education, marital status, geography, whether the person owned or rented, and so on.

Some consultants came in with a solution: a bail evaluation tool with a machine learning algorithm that would magically take the racism out of the process, and better yet, the judges would no longer have the freedom to set bail amounts outside the ranges recommended by the algorithm.

As it turns out, not only did the ML tool make wildly racist decisions on our test data, it actually discriminated against black defendants even more than the judges did. This was because judges often assigned lower bail amounts than the old tool recommended, and sometimes they would let defendants go home without bail to wait for trial even if the tool said to lock them up. Because judges, despite many of them having biases against non-white people, can overcome their biases with effort, compassion, and empathy. It's easier, in my view, to make judges better at correcting social injustices than it is to craft the perfect algorithm to do the hard work for us.
In the end, much of the drive toward ML in public service is a bunch of egghead liberals who think we can fix society by adjusting the knobs ever so carefully. As if we can just move the racism slider a little to the left and leave a parasitic and depraved institution in place. Cash bail should be abolished. Full stop. Grand Theft Autobot has issued a correction as of 02:08 on Sep 25, 2020 |
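A toy sketch of the kind of points-based evaluation tool described above. All feature names, weights, and dollar brackets here are invented for illustration, not taken from any real tool: the point is just that none of the inputs mention race, yet each one (housing, employment, zip code) correlates with race in practice, so the score reproduces the disparity anyway.

```python
# Hypothetical points-based "risk" tool. Every factor is facially
# race-neutral, but each one is a socioeconomic proxy, so defendants
# with identical conduct can get very different recommendations.
POINTS = {
    "unemployed": 3,
    "no_degree": 2,
    "unmarried": 1,
    "renter": 2,
    "high_crime_zip": 3,  # geography standing in for segregation
}

def bail_score(defendant: dict) -> int:
    """Sum the points for every 'risk factor' flagged on the defendant."""
    return sum(pts for factor, pts in POINTS.items() if defendant.get(factor))

def recommended_bail(score: int) -> int:
    """Map a score to a dollar amount (brackets are made up)."""
    if score <= 2:
        return 0          # release without cash bail
    elif score <= 5:
        return 5_000
    return 25_000

# Two defendants charged with the same offense, differing only in
# socioeconomic profile: the tool treats them very differently.
a = {"renter": True, "unemployed": True, "high_crime_zip": True}
b = {}
print(recommended_bail(bail_score(a)))  # 25000
print(recommended_bail(bail_score(b)))  # 0
```

Replacing the fixed weights with weights learned from historical bail decisions doesn't help, which is the post's point: if the training data encodes judges' past disparities, the fitted model optimizes toward reproducing them.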
# ¿ Sep 25, 2020 02:05 |