Shame Boy
Mar 2, 2010

Chomp8645 posted:

There was no way the explanation for this story wasn't going to be stupid. A kid paying off the whole school's lunch debt with his allowance is absurd: it could only mean that either his allowance is comically large or the debt is comically small.



Wow a whole school's lunch debt.... of $74.50. What a non-story.

One actually useful thing I learned from the comments: this was in Irvine, where the median household income is $90k :allears:

I also was reminded of why I quit twitter:



That's some real useful discourse this comments section is enabling, definitely worth keeping it guys


Shame Boy
Mar 2, 2010

Though I did notice that the comments aren't actually below the article, you have to click a thing and they appear in a popup, which I assume is some sort of quarantine system

Coolness Averted
Feb 20, 2007

oh don't worry, I can't smell asparagus piss, it's in my DNA

GO HOGG WILD!
🐗🐗🐗🐗🐗

Shame Boy posted:

I will never understand people who buy novelty monopoly versions, even if you're a turbo dork fan of something.

"OH MY GOD IT SAYS DIAGON ALLEY INSTEAD OF PARK LANE THIS IS SO MADE OF WIN I CAN'T BELIEVE IT!!!"

there are big companies dedicated strictly to making dumb licensed versions of games. Like, as in double licensing: Hasbro doesn't make all of those dumb versions of Monopoly, other companies license both the board game and the other IP, and then Hasbro gets a cut of the profits too

hobbesmaster
Jan 28, 2008

Len posted:

It's actually better than the year before, when they didn't cover anything until you met the $4k deductible. Apparently their studies showed most people didn't meet that, so they made it cover the first $1,000.

I had a phone call with the billing people again. Apparently the $250 quote was for just one test, which was the one I specifically called about, but then they ran two more without telling me. But it's fine and acceptable because the number I was given was an estimate not a fact.

So I said some things that offended my entire office and hung up on him. loving parasite.

if they gave you an estimate and you authorized that, then anything else without an additional authorization is unauthorized. too bad you cursed him out instead of pointing out that he just said you didn't have to pay for the tests you didn't authorize

Necronomicon
Jan 18, 2004

hobbesmaster posted:

if they gave you an estimate and you authorized that, then anything else without an additional authorization is unauthorized. too bad you cursed him out instead of pointing out that he just said you didn't have to pay for the tests you didn't authorize

I mean, that doesn't really change the fact that the additional tests weren't authorized. It's not like there's some gotcha that changes that.

Warmachine
Jan 30, 2012



bob dobbs is dead posted:

Monopoly was designed to be unfun in order to teach kids about georgism hth

My Econ 491 class decided that Monopoly wasn't unfun enough and didn't accurately reflect the state of the world, so we changed the rules to reflect wealth inequality. The new rules stratified the players into wealth categories, and really, if you were part of the 'poor' bracket, your goal wasn't so much to win as to just stay in the game long enough for the timer to run out. Which is to say, until the seminar session ended.

:capitalism:

hobbesmaster
Jan 28, 2008

Necronomicon posted:

I mean, that doesn't really change the fact that the additional tests weren't authorized. It's not like there's some gotcha that changes that.

Seems like a pretty big deal if you can get them to admit that the tests weren't authorized.

T-man
Aug 22, 2010


Talk shit, get bzzzt.

Shame Boy posted:

One actually useful thing I learned from the comments: this was in Irvine, where the median household income is $90k :allears:

I also was reminded of why I quit twitter:



That's some real useful discourse this comments section is enabling, definitely worth keeping it guys

welcome to the resistance Daddy

Len
Jan 21, 2008

Pouches, bandages, shoulderpad, cyber-eye...

Bitchin'!


hobbesmaster posted:

if they gave you an estimate and you authorized that, then anything else without an additional authorization is unauthorized. too bad you cursed him out instead of pointing out that he just said you didn't have to pay for the tests you didn't authorize

Actually the guy said the tests were performed, and they only quoted me for one because that's the only one I specifically asked for, and that there was nothing he could do, and he was sorry it was that way, but I had to pay it.

Also the quote supposedly had a stipulation for "some additional charges"

Fingers crossed I don't get sick the rest of the year since they took my entire year's health insurance!

spacetoaster
Feb 10, 2014

Worker tracking program can automatically fire you.

Hundreds of workers have been fired already with no human supervisor involvement.

https://www.businessinsider.com/amazon-system-automatically-fires-warehouse-workers-time-off-task-2019-4

Homocow
Apr 24, 2007

Extremely bad poster!
DO NOT QUOTE!


Pillbug

spacetoaster posted:

Worker tracking program can automatically fire you.
*amazon drone deposits letter in your mailbox and floats away*

Accretionist
Nov 7, 2012
I BELIEVE IN STUPID CONSPIRACY THEORIES

spacetoaster posted:

Worker tracking program can automatically fire you.

Hundreds of workers have been fired already with no human supervisor involvement.

https://www.businessinsider.com/amazon-system-automatically-fires-warehouse-workers-time-off-task-2019-4

Optimism about AI is a mistake

(They're gonna make it all racist and poo poo)

spacetoaster
Feb 10, 2014

Accretionist posted:

Optimism about AI is a mistake

(They're gonna make it all racist and poo poo)

A guy got really heated at me the other day talking about AI. He's completely certain that AI/robots will be able to run everything soon and that it'll be the end of the patriarchy, racism, and every bad thing.

I mentioned that these programs/machines will be made by people. The same racist/murderous/hateful/etc people that he had just spent 5 minutes ranting about.

I was wrong, apparently, because all the bad stuff will be programmed out. I just said "Oh, ok. Good." and went to lunch with my kids.

Meme Poker Party
Sep 1, 2006

by Azathoth
The Fourth Law of Robotics: A robot shall treat all humans equally, unless such treatment conflicts with the other three laws (it will).

voiceless anal fricative
May 6, 2007

spacetoaster posted:

Worker tracking program can automatically fire you.

Hundreds of workers have been fired already with no human supervisor involvement.

https://www.businessinsider.com/amazon-system-automatically-fires-warehouse-workers-time-off-task-2019-4

Christ how long until the only people earning any money at all are CEOs and the small number of IT devs who maintain their automation software.

Zombiepop
Mar 30, 2010

bike tory posted:

Christ how long until the only people earning any money at all are CEOs and the small number of IT devs who maintain their automation software.

Soon you can rest in comfort, the Masters will take care of everything

Ham Equity
Apr 16, 2013

The first thing we do, let's kill all the cars.
Grimey Drawer

bike tory posted:

Christ how long until the only people earning any money at all are CEOs and the small number of IT devs who maintain their automation software.
...Is this not pretty much where we are right now? Everyone else is drowning in debt.

Shame Boy
Mar 2, 2010

spacetoaster posted:

A guy got really heated at me the other day talking about AI. He's completely certain that AI/robots will be able to run everything soon and that it'll be the end of the patriarchy, racism, and every bad thing.

I mentioned that these programs/machines will be made by people. The same racist/murderous/hateful/etc people that he had just spent 5 minutes ranting about.

I was wrong, apparently, because all the bad stuff will be programmed out. I just said "Oh, ok. Good." and went to lunch with my kids.

That's the worst part about "AI" in my opinion, how nearly everyone has this idea that it's magically impartial, that computers are somehow immune to bias because they operate on pure logic and well-defined rules and bias is something only humans can have with their squishy feelings and opinions. Like if it existed on its own it wouldn't be that terrible, just a tool like any other piece of software, but the public perception of it as somehow "better" is what really lets it cause tons of damage.

Someone in yospos described it as "the equivalent of money laundering but for bias," which is pretty good

voiceless anal fricative
May 6, 2007

Thanatosian posted:

...Is this not pretty much where we are right now? Everyone else is drowning in debt.



Idk, New Zealand still has a fairly sizeable proportion of the population that can be considered middle class. That's mostly buoyed by a property market bubble/boom, so not necessarily sustainable, but our median household net wealth is $340k

Shame Boy
Mar 2, 2010

Huh, when asked to identify what it thinks a criminal is, this neural net lists "black man" as its first criterion. Well, I guess it must be right and all black people are criminals, it clearly wasn't programmed to be racist, it's just operating on facts and logic!

Meme Poker Party
Sep 1, 2006

by Azathoth
The AI will be fair and impartial. A beacon of equality and justice.



*Glances awkwardly at Taybot*

T-man
Aug 22, 2010


Talk shit, get bzzzt.

the future is cleverbot yelling new and terrifying slurs at you forever

Sing Along
Feb 28, 2017

by Athanatos

T-man posted:

the future is cleverbot yelling new and terrifying slurs at you forever

Hollandia
Jul 27, 2007

rattus rattus


Grimey Drawer

Chomp8645 posted:

The Fourth Law of Robotics: A robot shall treat all humans equally, unless such treatment conflicts with the other three laws (it will).

Guillotine-bot looking good.

Ham Equity
Apr 16, 2013

The first thing we do, let's kill all the cars.
Grimey Drawer

bike tory posted:

Idk, New Zealand still has a fairly sizeable proportion of the population that can be considered middle class. That's mostly buoyed by a property market bubble/boom, so not necessarily sustainable, but our median household net wealth is $340k
Sorry, I was referring to the dystopian capitalist hellscape that is modern-day America.

Accretionist
Nov 7, 2012
I BELIEVE IN STUPID CONSPIRACY THEORIES

Shame Boy posted:

Someone in yospos described it as "the equivalent of money laundering but for bias," which is pretty good

This is a good line.

And on that point...

Article: Computer Program That Calculates Prison Sentences Is Even More Racist Than Humans, Study Finds
Date: January 21, 2018

quote:

A computer program used to calculate people’s risk of committing crimes is less accurate and more racist than random humans assigned to the same task, a new Dartmouth study finds.

Before they’re sentenced, people who commit crimes in some U.S. states are required to take a 137-question quiz. The questions, which range from queries about a person’s criminal history, to their parents’ substance use, to “do you feel discouraged at times?” are part of a software program called Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS. Using a proprietary algorithm, COMPAS is meant to crunch the numbers on a person’s life, determine their risk for reoffending, and help a judge determine a sentence based on that risk assessment.

...

And a study released last week from Dartmouth researchers found that random, untrained people on the internet could make more accurate predictions about a person’s criminal future than the expensive software could.

A privately held software, COMPAS’s algorithms are a trade secret. Its conclusions baffle some of the people it evaluates.

...

COMPAS came to its conclusion through its 137-question quiz, which asks questions about the person’s criminal history, family history, social life, and opinions. The questionnaire does not ask a person’s race. But the questions — including those about parents’ arrest history, neighborhood crime, and a person’s economic stability — appear unfavorably biased against black defendants, who are disproportionately impoverished or incarcerated in the U.S.

A 2016 ProPublica investigation analyzed the software’s results across 7,000 cases in Broward County, Florida, and found that COMPAS often overestimated a person’s risk for committing future crimes. These incorrect assessments nearly doubled among black defendants, who frequently received higher risk ratings than white defendants who had committed more serious crimes.

...
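
The proxy mechanism the article describes is easy to reproduce with made-up numbers. This is a toy sketch, not COMPAS: the features, group split, and policing rates are all invented, and it assumes only numpy and scikit-learn. The point is just that a model which never sees race can still hand one group a higher false-positive rate once a race-correlated proxy and unevenly recorded labels are in the training data.

code:

# Toy numbers, not COMPAS: a race-correlated proxy can push up the
# false-positive rate for one group even though race is never a feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.binomial(1, 0.5, size=n)            # protected attribute, never shown to the model
prior_record = rng.normal(size=n)               # genuinely predictive, same distribution for both groups
proxy = group + rng.normal(scale=0.5, size=n)   # "neighborhood arrest rate"-style question, tracks group

# Underlying behaviour depends only on the individual's record...
behaviour = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * prior_record - 0.5))))
# ...but the *recorded* label (rearrest) also depends on how heavily the
# group is policed, which is where the historical bias enters the data.
caught = rng.binomial(1, 0.3 + 0.4 * group)
label = behaviour * caught

X = np.column_stack([prior_record, proxy])      # note: race is not a column of X
model = LogisticRegression().fit(X, label)

# Risk tools flag a top slice as "high risk"; do the same with the scores.
score = model.predict_proba(X)[:, 1]
flag = score >= np.quantile(score, 0.7)         # flag the riskiest 30%

for g in (0, 1):
    not_rearrested = (group == g) & (label == 0)
    print(f"group {g}: false-positive rate = {flag[not_rearrested].mean():.3f}")

Both the proxy feature and the unevenly recorded labels are doing the work here; the underlying behaviour was drawn identically for both groups.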

Meme Poker Party
Sep 1, 2006

by Azathoth
COMPAS: Computers Organizing Mass Prejudice Against Some

Outrail
Jan 4, 2009

www.sapphicrobotica.com
:roboluv: :love: :roboluv:
Broken moral compas is a feature, not a bug.

anonumos
Jul 14, 2005

Fuck it.
I'll bet the rear end in a top hat judges in Broward County LOVED it.

Accretionist
Nov 7, 2012
I BELIEVE IN STUPID CONSPIRACY THEORIES
I don't know, those computers might start putting hard-working racists out of work

syntaxrigger
Jul 7, 2011

Actually you owe me 6! But who's countin?

Accretionist posted:

This is a good line.

And on that point...

Article: Computer Program That Calculates Prison Sentences Is Even More Racist Than Humans, Study Finds
Date: January 21, 2018

There is a great book by Cathy O'Neil called Weapons of Math Destruction that talks about this.

When you make a machine learning model (read: AI) you need tons of data to train it. That data informs the model, so if the data is racist, like giving minorities higher sentences, then so follows the model. The kicker is that you cannot have a neural network (read: AI) justify its decision to you. It will just look like meaningless numbers.
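
A minimal sketch of that "biased data in, biased model out" point, using invented sentencing data and scikit-learn (nothing here comes from O'Neil's book or any real system; severity, zip_proxy, and the group penalty are all made up):

code:

# Biased historical labels get baked into the model, and the model's
# parameters offer no explanation of it.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 20_000

severity = rng.normal(size=n)                       # legitimate case feature
group = rng.binomial(1, 0.5, size=n)                # protected attribute (not a model input)
zip_proxy = group + rng.normal(scale=0.5, size=n)   # feature that quietly encodes group

# Historical labels: long sentences were handed out based on severity *plus*
# a penalty for belonging to group 1. That bias is now baked into y.
long_sentence = (severity + 1.0 * group + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X = np.column_stack([severity, zip_proxy])
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, long_sentence)

# Two cases with identical facts, differing only in the group-correlated proxy:
same_facts = np.array([[0.5, 0.0],    # looks like group 0
                       [0.5, 1.0]])   # looks like group 1
print(model.predict_proba(same_facts)[:, 1])   # the group-1-looking case scores higher

# "Justify its decision": the learned parameters are just arrays of floats.
print(model.coefs_[0])   # nothing in here says "zip_proxy is standing in for race"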

Shear Modulus
Jun 9, 2010



syntaxrigger posted:

The kicker is that you cannot have a neural network (read: AI) justify its decision to you. It will just look like meaningless numbers.

thats the benefit though

Accretionist
Nov 7, 2012
I BELIEVE IN STUPID CONSPIRACY THEORIES

syntaxrigger posted:

The kicker is that you cannot have a neural network (read: AI) justify its decision to you. It will just look like meaningless numbers.

Shear Modulus
Jun 9, 2010



but a deep neural network is not convex

Shear Modulus
Jun 9, 2010



also the theory behind convex optimization is about a billion times more developed than that of deep learning
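
A quick empirical poke at that, not a proof: the logistic regression objective is convex, so different solvers land on essentially the same optimum, while a small neural net started from different random seeds typically ends up at different training losses. Toy data via scikit-learn's make_moons:

code:

# Convex objective: one optimum, every solver finds it.
# Non-convex objective: the starting point decides which optimum you get.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=1000, noise=0.3, random_state=0)

# Convex problem: same objective, different solvers, (essentially) the same answer.
for solver in ("lbfgs", "newton-cg", "saga"):
    clf = LogisticRegression(solver=solver, max_iter=5000).fit(X, y)
    print(f"logreg ({solver}): train log-loss = {log_loss(y, clf.predict_proba(X)):.6f}")

# Non-convex problem: same data and architecture, different seeds,
# typically different final training losses (different local optima).
for seed in range(4):
    mlp = MLPClassifier(hidden_layer_sizes=(4,), solver="lbfgs",
                        max_iter=2000, random_state=seed).fit(X, y)
    print(f"mlp (seed={seed}): train log-loss = {log_loss(y, mlp.predict_proba(X)):.6f}")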

Eat This Glob
Jan 14, 2008

God is dead. God remains dead. And we have killed him. Who will wipe this blood off us? What festivals of atonement, what sacred games shall we need to invent?

ai is a mirror. a smartish, adaptive mirror. that's all I've learned in the last 6 months of having to explain it to people who are smarter than I am

syntaxrigger
Jul 7, 2011

Actually you owe me 6! But who's countin?


:laffo:

T-man
Aug 22, 2010


Talk shit, get bzzzt.

ai is when your robot runs in a circle because it will destroy itself coming near the centre but there's humans in danger so it can't go completely away

Raldikuk
Apr 7, 2006

I'm bad with money and I want that meatball!

Accretionist posted:

This is a good line.

And on that point...

Article: Computer Program That Calculates Prison Sentences Is Even More Racist Than Humans, Study Finds
Date: January 21, 2018

So those personality tests for lovely retail jobs but used to put you in prison forever? Nice

I wonder how many years saying you wouldn't mail the nickel back to the phone company gets ya


Subjunctive
Sep 12, 2006

✨sparkle and shine✨

syntaxrigger posted:

The kicker is that you cannot have a neural network (read: AI) justify its decision to you. It will just look like meaningless numbers.

ignoring the fact that not all ML/AI systems are neural networks, there are actually techniques that can create “transparent” models, or ensure that feature provenance is preserved through the system. another approach is to generate a rule set that closely approximates the behaviour of the model, which works better than I would have expected. mostly people can’t justify their decisions the way we want a machine to (if they’re even asked to) so there’s a reasonable hope that ML assistance or “sweet spot” handling will be more transparent than when humans operate purely on the basis of their own judgment. remains to be seen how widely visible those audit trails are made, of course.

humans are generally bad at telling you why they made a lot of kinds of decisions too, if you poke at it. why do you think that person looks sad? why did you make that chess move? why do you say that this is a picture of a dog and not a cat? you can get simplistic answers that are equivalent to “repayment risk” or “employment history” or “genuine remorse”, but they don’t generally survive much scrutiny beyond that, and often are retconned because there wasn’t conscious reasoning in the first place. it’s very rarely hard to find an example that meets the stated criteria but would “obviously” lead to a different conclusion. this is likely one of the reasons that (largely supervised) example-based learning systems can outperform expert-authored sets of rules: we don’t actually *know* how we decided that this is a picture of mom and not her sister, or how to express it except through anecdotal labeling, even if we take the “pick them out of a 2 billion person lineup” aspect of scale out of the problem, or try to generalize it. (because of the composition of ImageNet, there are a lot of cul-de-sac research projects out there that got very good at distinguishing dog breeds but failed to generalize well to other types of object.)
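
A rough sketch of the rule-set-approximation idea mentioned above (a global surrogate), assuming scikit-learn and using the built-in breast cancer dataset as a stand-in; the choice of black box and tree depth are arbitrary, not anyone's production setup:

code:

# Fit an opaque model, then fit a shallow decision tree to *its predictions*
# to get a human-readable rule set that approximates its behaviour.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

# The opaque model whose behaviour we want a readable approximation of.
black_box = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
).fit(X_train, y_train)

# Surrogate: a shallow tree trained on the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the rules agree with the model they are meant to explain.
fidelity = np.mean(surrogate.predict(X_test) == black_box.predict(X_test))
print(f"surrogate/black-box agreement on held-out data: {fidelity:.1%}")

# The human-readable rule set.
print(export_text(surrogate, feature_names=list(data.feature_names)))

The tradeoff is fidelity versus readability: a deeper tree tracks the model more closely but stops being something a person will actually read.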

  • Reply