|
Chomp8645 posted:There is no way the explanation to this story wasn't going to be stupid. A kid paying off the whole school's lunch debt with his allowance is absurd. It could only mean that either his allowance is comically large or the debt is comically small. One actually useful thing I learned from the comments: this was in Irvine, where the median household income is $90k. I was also reminded of why I quit twitter: that's some real useful discourse this comments section is enabling, definitely worth keeping it, guys
|
# ? Jun 11, 2019 17:02 |
|
Though I did notice that the comments aren't actually below the article, you have to click a thing and they appear in a popup, which I assume is some sort of quarantine system
|
# ? Jun 11, 2019 17:03 |
|
Shame Boy posted:I will never understand people who buy novelty monopoly versions, even if you're a turbo dork fan of something. There are big companies dedicated strictly to making dumb licensed versions of games. Like as in double licensing: Hasbro doesn't make all of those dumb versions of Monopoly, instead other companies license both the board game and the other IP, and then Hasbro gets a cut of the profits too
|
# ? Jun 11, 2019 18:26 |
|
Len posted:It's actually better than the year before, where they didn't cover anything until you met the 4k deductible. Apparently their studies showed most people didn't meet that, so they added in it covering the first $1000. if they gave you an estimate and you authorized that, anything else without an additional authorization is unauthorized. too bad you cursed him out instead of pointing out that he just said you didn't have to pay for the tests you didn't authorize
|
# ? Jun 11, 2019 18:42 |
|
hobbesmaster posted:if they gave you an estimate and you authorized that, anything else without an additional authorization is unauthorized. too bad you cursed him out instead of pointing out that he just said you didn't have to pay for the tests you didn't authorize I mean, that doesn't really change the fact that the additional tests weren't authorized. It's not like there's some gotcha that changes that.
|
# ? Jun 11, 2019 19:08 |
|
bob dobbs is dead posted:Monopoly was designed to be unfun in order to teach kids about georgism hth My Econ 491 class decided that Monopoly wasn't unfun enough and didn't accurately reflect the state of the world, so we changed the rules to reflect wealth inequality. The new rules stratified the players into wealth categories, and really if you were part of the 'poor' bracket, your goal wasn't so much to win as to just stay in the game long enough for the timer to run out. Which is to say, for the seminar session to end.
|
# ? Jun 11, 2019 19:10 |
|
Necronomicon posted:I mean, that doesn't really change the fact that the additional tests weren't authorized. It's not like there's some gotcha that changes that. Seems like a pretty big deal if you can get them to admit that the tests weren't authorized.
|
# ? Jun 11, 2019 19:11 |
|
Shame Boy posted:One actually useful thing I learned from the comments: this was in Irvine, where the median household income is $90k welcome to the resistance Daddy
|
# ? Jun 11, 2019 19:11 |
|
hobbesmaster posted:if they gave you an estimate and you authorized that, anything else without an additional authorization is unauthorized. too bad you cursed him out instead of pointing out that he just said you didn't have to pay for the tests you didn't authorize Actually the guy said the tests were performed and they only quoted me for one because that's the only one I specifically asked for, and that there was nothing he could do and he was sorry it was that way but I had to pay it. Also the quote supposedly had a stipulation for "some additional charges". Fingers crossed I don't get sick the rest of the year since they took my entire year's health insurance!
|
# ? Jun 11, 2019 20:52 |
|
Worker tracking program can automatically fire you. Hundreds of workers have been fired already with no human supervisor involvement. https://www.businessinsider.com/amazon-system-automatically-fires-warehouse-workers-time-off-task-2019-4
|
# ? Jun 11, 2019 21:25 |
|
spacetoaster posted:Worker tracking program can automatically fire you.
|
# ? Jun 11, 2019 21:38 |
|
spacetoaster posted:Worker tracking program can automatically fire you. Optimism about AI is a mistake (They're gonna make it all racist and poo poo)
|
# ? Jun 11, 2019 22:31 |
|
Accretionist posted:Optimism about AI is a mistake A guy got really heated at me the other day talking about AI. He's completely certain that AI/robots will be able to run everything soon and that it'll be the end of the patriarchy, racism, and every bad thing. I mentioned that these programs/machines will be made by people. The same racist/murderous/hateful/etc people that he had just spent 5 minutes ranting about. I was wrong, apparently, because all the bad stuff will be programmed out. I just said "Oh, ok. Good." and went to lunch with my kids.
|
# ? Jun 11, 2019 23:05 |
|
The Fourth Law of Robotics: A robot shall treat all humans equally, unless such treatment conflicts with the other three laws (it will).
|
# ? Jun 11, 2019 23:08 |
|
spacetoaster posted:Worker tracking program can automatically fire you. Christ, how long until the only people earning any money at all are CEOs and the small number of IT devs who maintain their automation software?
|
# ? Jun 11, 2019 23:08 |
|
bike tory posted:Christ how long until the only people earning any money at all are CEOs and the small number of IT devs who maintain their automation software. Soon you can rest in comfort, the Masters will take care of everything
|
# ? Jun 11, 2019 23:18 |
|
bike tory posted:Christ how long until the only people earning any money at all are CEOs and the small number of IT devs who maintain their automation software.
|
# ? Jun 11, 2019 23:29 |
|
spacetoaster posted:I guy got really heated at me the other day talking about AI. He's completely certain that AI/robots will be able to run everything soon and that it'll be the end of the patriarchy, racism, and every bad thing. That's the worst part about "AI" in my opinion, how nearly everyone has this idea that it's magically impartial, that computers are somehow immune to bias because they operate on pure logic and well-defined rules and bias is something only humans can have with their squishy feelings and opinions. Like if it existed on its own it wouldn't be that terrible, just a tool like any other piece of software, but the public perception of it as somehow "better" is what really lets it cause tons of damage. Someone in yospos described it like "the equivalent of money laundering but for bias" which is pretty good
|
# ? Jun 11, 2019 23:42 |
|
Thanatosian posted:...Is this not pretty much where we are right now? Everyone else is drowning in debt. Idk, New Zealand still has a fairly sizeable proportion of the population that can be considered middle class. That's mostly buoyed by a property market bubble/boom so not necessarily sustainable, but our median household net wealth is 340k
|
# ? Jun 11, 2019 23:44 |
|
Huh, when asked to identify what it thinks a criminal is, this neural net lists "black man" as its first criterion. Well I guess it must be right and all black people are criminals, it clearly wasn't programmed to be racist, it's just operating on facts and logic!
|
# ? Jun 11, 2019 23:46 |
|
The AI will be fair and impartial. A beacon of equality and justice. *Glances awkwardly at Taybot*
|
# ? Jun 11, 2019 23:51 |
|
the future is cleverbot yelling new and terrifying slurs at you forever
|
# ? Jun 11, 2019 23:59 |
|
T-man posted:the future is cleverbot yelling new and terrifying slurs at you forever
|
# ? Jun 12, 2019 00:02 |
|
Chomp8645 posted:The Fourth Law of Robotics: A robot shall treat all humans equally, unless such treatment conflicts with the other three laws (it will). Guillotine-bot looking good.
|
# ? Jun 12, 2019 00:23 |
|
bike tory posted:Idk, New Zealand still has a fairly sizeable proportion of the population that can be considered middle class. That's mostly bouyed by a property market bubble/boom so not necessarily sustainable but our median household net wealth is 340k
|
# ? Jun 12, 2019 00:42 |
|
Shame Boy posted:Someone in yospos described it like "the equivalent of money laundering but for bias" which is pretty good This is a good line. And on that point... Article: Computer Program That Calculates Prison Sentences Is Even More Racist Than Humans, Study Finds Date: January 21, 2018 quote:A computer program used to calculate people’s risk of committing crimes is less accurate and more racist than random humans assigned to the same task, a new Dartmouth study finds.
|
# ? Jun 12, 2019 01:04 |
|
COMPAS: Computers Organizing Mass Prejudice Against Some
|
# ? Jun 12, 2019 01:09 |
|
Broken moral compas is a feature, not a bug.
|
# ? Jun 12, 2019 01:15 |
|
I'll bet the rear end in a top hat judges in Broward County LOVED it.
|
# ? Jun 12, 2019 01:20 |
|
I don't know, those computers might start putting hard-working racists out of work
|
# ? Jun 12, 2019 01:24 |
|
Accretionist posted:This is a good line. There is a great book by Cathy O'Neil called Weapons of Math Destruction that talks about this. When you make a machine learning model (read: AI) you need tons of data to train it. That data informs the model, so if the data is racist, like giving minorities higher sentences, then so follows the model. The kicker is that you cannot have a neural network (read: AI) justify its decision to you. It will just look like meaningless numbers.
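A toy sketch of that "biased data in, biased model out" point, with every number and name invented for illustration: assign historical labels using a rule that's harsher on one group, fit the dumbest possible per-group model to those labels, and the fit faithfully learns the harsher cutoff even though nothing in the training code mentions bias.

```python
import random

random.seed(0)

# Samples are (group, actual_risk). Historical labels were assigned with a
# bias: group 1 got flagged at a lower actual-risk cutoff than group 0.
def biased_label(group, risk):
    threshold = 0.5 if group == 0 else 0.3  # harsher cutoff for group 1
    return 1 if risk > threshold else 0

train = [(g, random.random()) for g in (0, 1) for _ in range(500)]
labels = [biased_label(g, r) for g, r in train]

# "Train" a trivially simple model: the per-group decision threshold that
# best reproduces the historical labels (a stand-in for any ML fit).
def fit_threshold(points):
    best_t, best_acc = 0.0, -1.0
    for t in [i / 100 for i in range(101)]:
        acc = sum((r > t) == bool(y) for (_, r), y in points) / len(points)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

by_group = {g: [((gg, r), y) for (gg, r), y in zip(train, labels) if gg == g]
            for g in (0, 1)}
learned = {g: fit_threshold(pts) for g, pts in by_group.items()}
print(learned)  # the fitted model applies a lower (harsher) cutoff to group 1
```

The point being: the bias lives entirely in the labels, and the fitting procedure just launders it into "the model's decision".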
|
# ? Jun 12, 2019 01:25 |
|
syntaxrigger posted:The kicker is that you cannot have a neural network (read: AI) justify its decision to you. It will just look like meaningless numbers. that's the benefit though
|
# ? Jun 12, 2019 01:26 |
|
syntaxrigger posted:The kicker is that you cannot have a neural network(read AI) justify its decision to you. It will just look like meaningless numbers.
|
# ? Jun 12, 2019 01:35 |
|
but a deep neural network is not convex
|
# ? Jun 12, 2019 01:36 |
|
also the theory behind convex optimization is about a billion times more developed than that of deep learning
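a quick way to see the non-convexity without any theory: a two-neuron net has a neuron-swap symmetry, so two distinct parameter settings give the exact same (here, perfect) fit, while the midpoint between them fits badly. a convex loss would forbid that, since convexity requires loss(mid) <= (loss(a) + loss(b)) / 2. toy numbers, obviously:

```python
import math

# two-neuron net: f(x) = v1*tanh(w1*x) + v2*tanh(w2*x)
def net(params, x):
    w1, v1, w2, v2 = params
    return v1 * math.tanh(w1 * x) + v2 * math.tanh(w2 * x)

xs = [i / 10 - 1 for i in range(21)]             # grid on [-1, 1]
target = [net((2, 1, -1, -0.5), x) for x in xs]  # data generated by a known net

def loss(params):
    return sum((net(params, x) - y) ** 2 for x, y in zip(xs, target)) / len(xs)

a = (2, 1, -1, -0.5)   # a global minimum (loss exactly 0)
b = (-1, -0.5, 2, 1)   # same function with the neurons swapped: also loss 0
mid = tuple((p + q) / 2 for p, q in zip(a, b))

print(loss(a), loss(b), loss(mid))
# if the loss were convex, loss(mid) would have to be <= 0 here. it isn't.
```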
|
# ? Jun 12, 2019 01:40 |
|
ai is a mirror. a smartish, adaptive mirror. that's all I've learned in the last 6 months since I've had to explain it to people who are smarter than I am
|
# ? Jun 12, 2019 01:44 |
|
|
# ? Jun 12, 2019 01:48 |
|
ai is when your robot runs in a circle because it will destroy itself coming near the centre but there's humans in danger so it can't go completely away
|
# ? Jun 12, 2019 01:49 |
|
Accretionist posted:This is a good line. So those personality tests for lovely retail jobs, but used to put you in prison forever? Nice. I wonder how many years get added if you say you wouldn't mail the nickel back to the phone company
|
# ? Jun 12, 2019 01:51 |
|
|
syntaxrigger posted:The kicker is that you cannot have a neural network (read: AI) justify its decision to you. It will just look like meaningless numbers.

ignoring the fact that not all ML/AI systems are neural networks, there are actually techniques that can create “transparent” models, or ensure that feature provenance is preserved through the system. another approach is to generate a rule set that closely approximates the behaviour of the model, which works better than I would have expected. mostly people can’t justify their decisions the way we want a machine to (if they’re even asked to), so there’s a reasonable hope that ML assistance or “sweet spot” handling will be more transparent than when humans operate purely on the basis of their own judgment. remains to be seen how widely visible those audit trails are made, of course.

humans are generally bad at telling you why they made a lot of kinds of decisions too, if you poke at it. why do you think that person looks sad? why did you make that chess move? why do you say that this is a picture of a dog and not a cat? you can get simplistic answers that are equivalent to “repayment risk” or “employment history” or “genuine remorse”, but they don’t generally survive much scrutiny beyond that, and often are retconned because there wasn’t conscious reasoning in the first place. it’s very rarely hard to find an example that meets the stated criteria but would “obviously” lead to a different conclusion.

this is likely one of the reasons that (largely supervised) example-based learning systems can outperform expert-authored sets of rules: we don’t actually *know* how we decided that this is a picture of mom and not her sister, or how to express it except through anecdotal labeling, even if we take the “pick them out of a 2 billion person lineup” aspect of scale out of the problem, or try to generalize it.

(because of the composition of ImageNet, there are a lot of cul-de-sac research projects out there that got very good at distinguishing dog breeds but failed to generalize well to other types of object.)
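the “generate a rule set that approximates the model” idea is easy to sketch. everything below is invented for illustration (the “black box” is just a stand-in function): query the model, fit the simplest possible rule to its *predictions* rather than the true labels, and report how often the rule agrees with the model (fidelity).

```python
import random

random.seed(1)

# stand-in for an opaque model: some nonlinear score over two features
def black_box(x):
    return 1 if (0.8 * x[0] + 0.4 * x[1] ** 2) > 0.6 else 0

# probe the model on random inputs and record its predictions
data = [(random.random(), random.random()) for _ in range(2000)]
preds = [black_box(x) for x in data]

# surrogate: the single feature/threshold rule that best mimics the box
best = None
for feat in (0, 1):
    for t in [i / 100 for i in range(101)]:
        fidelity = sum((x[feat] > t) == bool(y)
                       for x, y in zip(data, preds)) / len(data)
        if best is None or fidelity > best[0]:
            best = (fidelity, feat, t)

fidelity, feat, t = best
print(f"rule: x[{feat}] > {t:.2f} agrees with the model "
      f"{fidelity:.0%} of the time")
```

real rule-extraction work uses richer surrogates (decision trees, rule lists), but the shape is the same: you trade a little fidelity for a rule a human can actually read.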
|
# ? Jun 12, 2019 03:46 |