|
FMguru posted:Nate is interesting because he was right about how there was a much larger chance of a massive systematic polling failure than anyone else was estimating, but that really undercuts the entire raison d'être of his website. It's great that he predicted that all this poll stuff had a good likelihood of being garbage, but I'm not sure that's going to be much of a selling point to his bosses at ESPN/ABC/Disney - "I was the only poll aggregator in the media who was right about how worthless all this polling stuff you pay us to do actually is" isn't a very strong pitch. The amount of polling error and the vulnerability of campaigns to it is variable. For example, Clinton was a lot more vulnerable to it than Obama was in 2012, despite Obama having a tighter lead. He did correctly predict, with a lot of confidence, that Obama would win then. I think the horse-race narrative shown on his website, which Wang and certain posters denounced, was also accurate. He showed that events and news had a genuine impact on the election, and that conclusion doesn't require systematic bias to work. Knowing the direction of the systematic error is also important - if you had fed his model into a real-time prediction as the election results came out, you could actually have made a bunch of money betting on election day. Having someone tell you the specific ways the polls are garbage is also important, as we see people dive into the polling data to confirm their own biases that it's all the fault of white men/old people/young people/hispanic people/the dnc/comey/women/black people/clinton/bernie/hillbots/bernie bros/ ... (Mind you, Nate himself is not exactly immune from this, but his site is still better than the general level of ignorance.) Fangz has issued a correction as of 15:57 on Nov 10, 2016 |
# ? Nov 10, 2016 15:50 |
|
|
Also holy gently caress Romney had a point back in 2012 about unskewing the polls
|
# ? Nov 10, 2016 17:45 |
|
Fuckin Trump Riot posted:And yet Wang's model is converging at a very reasonable EV number based on what we've seen so far.
|
# ? Nov 10, 2016 17:50 |
|
Typo posted:Also nah unskewing is still dumb
|
# ? Nov 10, 2016 17:53 |
|
Fangz posted:These types of models have been around forever and are always garbage; they demonstrate the problem of overfitting and the statistical incompetence of political science departments. Maybe you should learn what overfitting actually means before you decide to talk about statistics.
|
# ? Nov 10, 2016 18:10 |
|
mastershakeman posted:I don't have a stats background but my issue with what Nate did is essentially what Bill Mitchell did which is to go "the polls are bad go Trump". how can that even be modeled? Wang's model seems to have trusted the inputs because that's really all that can be done with a poll-driven forecast. drat this is a very stupid post. Even amongst the 500 pre-election posts saying "Nate is bad because he won't just tell us who wins!!! Shook!!", this one stands out. Probability. It all operates on probability. That's not the same as hand-waving or equivocation or making poo poo up.
|
# ? Nov 10, 2016 18:33 |
|
joepinetree posted:Maybe you should learn what overfitting actually means before you decide to talk about statistics. Lol
|
# ? Nov 10, 2016 18:54 |
|
Typo posted:538 is literally one of the most relevant websites in the US: It's popular now because it's built its brand as being able to predict the presidential election, no one's arguing that. It's more that three months from now those numbers crater and Disney/ESPN still has to slash jobs/budget to make their investors happy, 538's a prime candidate to be on the chopping block to some extent.
|
# ? Nov 10, 2016 19:12 |
|
C. Everett Koop posted:It's popular now because it's built its brand as being able to predict the presidential election, no one's arguing that. It's more that three months from now those numbers crater and Disney/ESPN still has to slash jobs/budget to make their investors happy, 538's a prime candidate to be on the chopping block to some extent. 538 is really not that expensive to run for a giant corporation like Disney. It's not like the site came into existence this year. Fangz has issued a correction as of 19:19 on Nov 10, 2016 |
# ? Nov 10, 2016 19:17 |
|
FMguru posted:Nate is interesting because he was right about how there was a much larger chance of a massive systematic polling failure than anyone else was estimating, but that really undercuts the entire raison d'etre of his website.. no, that's pretty much exactly what he has been trying to do the entire time
|
# ? Nov 10, 2016 19:33 |
|
Fangz posted:538 is really not that expensive to run for a giant corporation like Disney. It's not like the site came into existence this year. When you're looking to cut salary, everyone's up in the air. There's going to be jobs lost over there, to what extent is the question, and I'd be amazed if the 538 people weren't aware of that. There was thought before Grantland died that ESPN's two verticals would be merged together and let Grantland handle the more pop culture stuff that 538 was doing. Now that Grantland is gone and the majority of those resources have gone towards The Undefeated, I think it's more likely the non-political aspects of 538 get the axe, and it's possible that 538 gets rolled into ABC News and it's just Nate and a small handful of people.
|
# ? Nov 10, 2016 19:34 |
|
C. Everett Koop posted:When you're looking to cut salary, everyone's up in the air. There's going to be jobs lost over there, to what extent is the question, and I'd be amazed if the 538 people weren't aware of that. It might just be the circles I hang in but 538 seems a lot better known than Grantland, and also benefits from not having a ton of competitors. I see 538 in off years as a prestige thing that they wanna hold on to just to say they run it.
|
# ? Nov 10, 2016 19:45 |
|
Hot take: 538 will get more attention than ever under a Trump Presidency and we'll all get to watch Nate gain another 50 lbs and lose his remaining hair over the next 4 years as he completes his transformation into a fully-fledged pundit.
|
# ? Nov 10, 2016 19:47 |
|
Jeremys Iron posted:Can I just check whether this crow was eaten yet? I don't care about some poster eating crow. Has Wang eaten the bug?
|
# ? Nov 11, 2016 04:02 |
|
FMguru posted:but I'm not sure that's going to be much of a selling point to his bosses at ESPN/ABC/Disney - "I was the only poll aggregator in the media who was right about how worthless all this polling stuff you pay us to do actually is" isn't a very strong pitch. People keep saying this gleefully like it means Silver won't be around covering elections. His operation is pretty cheap. If they have to they'll keep him, jettison everybody else, and fold him into their stable of bloggers, albeit slightly more highly paid and probably with a team of his own come elections in 2020. Or he walks and does it on his own again like he did when he was first starting out. Or The Ringer hires him. The pieces of FiveThirtyEight that are expensive are the salaries of the rest of the editorial team. I think Silver's probably going to be able to continue drawing a decent living doing data journalism while trying to educate lay people about uncertainty. He's been at a different outlet each presidential election cycle. Who knows maybe WaPo buys 538. ErIog has issued a correction as of 04:24 on Nov 11, 2016 |
# ? Nov 11, 2016 04:21 |
|
|
There it is. The long-awaited "how my rear end taste" screed. http://fivethirtyeight.com/features/why-fivethirtyeight-gave-trump-a-better-chance-than-almost-anyone-else/
|
# ? Nov 11, 2016 22:16 |
|
Vox Nihili posted:drat this is a very stupid post. Even amongst the 500 pre-election posts saying "Nate is bad because he won't just tell us who wins!!! Shook!!" yeah that's not what I meant at all. the polls, especially in the Midwest, were so far outside the margin of error as to be useless (WI was 8.1% off, IIRC). there's no way to perform poll analysis if your argument is "lol, we can't know because the polls could miss by historic margins." I guess my real question is: what's the math on a bunch of polls with a margin of error of 3 or whatever all missing by 8? now that I see Nate's article it's making a lot more sense, but it's still a drastic margin of error for the Midwest states. mastershakeman has issued a correction as of 22:36 on Nov 11, 2016 |
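The math question here has a clean answer: averaging polls only cancels errors that are independent between polls, while an error shared by every poll (say, a wrong turnout model) survives averaging untouched. A minimal Python sketch, with every number invented for illustration:

```python
import random
import statistics

# Toy model: each poll has independent sampling error (a +/-3 margin of
# error implies an sd of about 3/1.96 ~ 1.5 points) plus a bias shared
# by every poll. Averaging shrinks the independent part like 1/sqrt(N),
# but the shared bias never averages out.
random.seed(0)

TRUE_MARGIN = -0.5        # hypothetical true margin
SHARED_BIAS = 5.0         # hypothetical error common to every poll
SAMPLING_SD = 3.0 / 1.96  # per-poll sd implied by a +/-3 MoE

def poll_average(n_polls):
    polls = [TRUE_MARGIN + SHARED_BIAS + random.gauss(0, SAMPLING_SD)
             for _ in range(n_polls)]
    return statistics.mean(polls)

# The average of 30 polls still misses the truth by roughly SHARED_BIAS.
miss = poll_average(30) - TRUE_MARGIN
```

So thirty individually honest "MoE 3" polls can collectively miss by five-plus points, because the stated margin of error only describes sampling noise, not whatever bias they all share.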
# ? Nov 11, 2016 22:23 |
|
mastershakeman posted:yeah that's not what I meant at all. the polls, especially in the Midwest, were so far outside the margin of error as to be useless (wi was 8.1% off iirc). theres no way to perform poll analysis if your argument is lol we can't know because the polls could miss by historic margins yeah basically the 538 model specifically accounted for regional polling errors such as what we saw, so when his model ran its thousands of simulations there were dozens/hundreds of outcomes where the Midwest defected just as it ultimately did. those dozens of outcomes were added to a pile that included "national polls miss badly" (didn't happen) and lesser outcomes like "major Clinton safe state flips" (this sort of happened with Pennsylvania). so while it's not like in 2012 where you can elegantly say "wow he called it all correct" because the polls were right, we can still look at his model and say "wow he accounted for the specific polling failures that actually happened, while others failed to do so"
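The correlated-error idea can be sketched in a few lines of Python. The leads and error sizes below are invented, not 538's actual inputs; the point is only that one shared regional error term makes "all three states flip together" far more common than independent errors would:

```python
import random

# Give the Midwest states one shared regional error plus small
# independent state-level noise, so when the region misses, it
# misses together.
random.seed(1)

MIDWEST_LEADS = {"WI": 5.0, "MI": 3.5, "PA": 2.0}  # hypothetical leads
N_SIMS = 10_000

def flips_in_one_sim():
    regional_error = random.gauss(0, 4.0)  # shared across the region
    flips = 0
    for lead in MIDWEST_LEADS.values():
        state_error = random.gauss(0, 2.0)  # independent per state
        if lead + regional_error + state_error < 0:
            flips += 1
    return flips

# With the shared term, "all three flip" happens in a meaningful
# fraction of simulations; with purely independent errors it would be
# roughly the product of three small probabilities, i.e. rare.
all_three = sum(flips_in_one_sim() == 3 for _ in range(N_SIMS)) / N_SIMS
```

Setting the regional sd to zero and widening the state noise instead collapses the triple flip back toward that tiny independent product, which is roughly why models assuming independence called the Midwest safe.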
|
# ? Nov 11, 2016 22:42 |
|
ok cool. good for him
|
# ? Nov 11, 2016 22:45 |
|
the interesting thing imo will be to see how 538 and the other forecasters change their systems in response to this election. will people make big changes, or just minor tweaks, things like that also, if anyone wants to see some hot bug action: https://twitter.com/SamWangPhD
|
# ? Nov 12, 2016 02:05 |
|
I don't think Nate needs to change anything except maybe the presentation. Looking forward to overcorrections from other people though.
|
# ? Nov 12, 2016 03:53 |
|
yeah I think displaying a probabilistic forecast as a single topline figure of "100% divided among these candidates" is not the best way to get across what your prediction is actually saying. 538's simulations obviously had a chunk of results in which the dice came up Trump in some of the key states he ended up winning, such as in the Midwest and upper Midwest. imo it doesn't get across to the reader very well why they are giving an X% chance of winning to this candidate, based on their current display systems, compared to something where (e.g.) they broke it down and showed key state groupings that, if they all fell one way or the other, would spell victory for one candidate or the other
|
# ? Nov 12, 2016 05:51 |
|
Lutha Mahtin posted:yeah i think displaying a probabilistic forecast as a single topline figure of "100% divided among these candidates" is not the best way to get across what your prediction is actually saying. 538's simulations obviously had a chunk of results in which the dice came up trump in some of the key states he ended up winning, such as in the midwest and upper midwest. imo it doesn't get across to the reader very well why they are giving X% of win to this candidate, based on their current display systems, compared to something (e.g.) where they broke it down and showed key state groupings that, if they all fall one way or the other, would spell victory for one candidate or the other I think they could do a better job visually displaying the results of the model, but it's not like the information you're talking about was hidden or nonexistent. They did lots of editorial content that explained it. They did multiple podcasts specifically about the model on top of talking about the results of the model on their other weekly elections podcast. I'm not accusing you of this, but it seems like people are now trying to justify to themselves why they didn't believe 538 and pointing the finger at some sort of weird messaging problem with 538 rather than back at themselves for not listening to what 538 editorial along with other people on this forum were trying to tell them about the polling and statistics.
|
# ? Nov 12, 2016 06:20 |
|
FWIW I always agreed with Nate, aside from assuming incorrectly that GOTV would be some hidden factor that isn't captured in the polls which would give Hillary some upside potential. I think there is an element of bad presentation though. The trendline thing was particularly poorly explained. I think some people will always accuse Nate of secretly manipulating the model for clicks, but some things could have been done to avoid feeding that view.
|
# ? Nov 12, 2016 13:17 |
|
https://www.youtube.com/watch?v=E2hIThs9fBs
|
# ? Nov 12, 2016 16:19 |
|
Sam Wang owns
|
# ? Nov 12, 2016 16:30 |
|
ErIog posted:I think they could do a better job visually displaying the results of the model, but it's not like the information you're talking about was hidden or nonexistent. They did lots of editorial content that explained it. They did multiple podcasts specifically about the model on top of talking about the results of the model on their other weekly elections podcast. nah i know all that, im just an idealistic devotee of edward tufte style BEAUTIFUL INFORMATION
|
# ? Nov 12, 2016 16:32 |
|
Fangz posted:I think there is an element of bad presentation though. The trendline thing was particularly poorly explained. I think some people will always accuse Nate of secretly manipulating the model for clicks, but some things could have been done to avoid feeding that view. People straight-up have a hard time parsing probabilities. This is especially true when you use percentages, which carry their own connotations from other areas (80% is a good grade, 80% is a good Metacritic score, so surely an 80% chance is an extremely good chance). At certain thresholds, people will, consciously or not, round the odds up to 100% or down to 0%. This means even if you're presenting a perfectly reasonable probability, the way you present it will send people walking off with the wrong impression. This is a tough nut to crack. I think one good solution would involve a diagram highlighting some select potential outcomes (i.e., local modes from those Monte Carlo simulations), and a concise verbal summary of their relative likelihood and the circumstances that could produce them. And that has its own downsides. It would be difficult from a visual design perspective. It would require a degree of human interpretation, which increases the risk of bias, editorializing, and punditry. You couldn't auto-update it more than once per day, which would be less satisfying to the feverish refreshers. And ultimately, people want The Numbers. So yeah I dunno.
|
# ? Nov 12, 2016 22:10 |
|
etalian posted:Sam Wang owns he is seriously better at eating bugs than statistics / polling analysis
|
# ? Nov 12, 2016 22:13 |
|
one thing 538 could do is, instead of a percentage, put the number of simulations in which each candidate won. or they could at least add a disclaimer like "68% of 10,000 simulations"
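That display is trivial to produce from the same simulations. A sketch, with hypothetical states, electoral votes, and win probabilities:

```python
import random

# Run simulations, count wins, and report the raw count alongside (or
# instead of) a percentage. All inputs below are invented for
# illustration.
random.seed(2)

STATE_EV = {"FL": 29, "NC": 15, "PA": 20, "WI": 10, "MI": 16}
WIN_PROB = {"FL": 0.55, "NC": 0.50, "PA": 0.75, "WI": 0.80, "MI": 0.75}
BASE_EV = 228  # hypothetical safe electoral votes
N_SIMS = 10_000

wins = 0
for _ in range(N_SIMS):
    ev = BASE_EV + sum(votes for state, votes in STATE_EV.items()
                       if random.random() < WIN_PROB[state])
    if ev >= 270:
        wins += 1

headline = f"won in {wins:,} of {N_SIMS:,} simulations"
```

Whether "7,900 of 10,000 simulations" actually reads as more uncertain than "79%" to a lay audience is an empirical question, but it does at least foreground that the number comes from counting outcomes.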
|
# ? Nov 12, 2016 23:26 |
|
Baloogan posted:he is seriously better at eating bugs than statistics / polling analysis
|
# ? Nov 12, 2016 23:34 |
|
Supercar Gautier posted:This is a tough nut to crack. I think one good solution would involve a diagram highlighting some select potential outcomes (ie, local modes from those monte carlo simulations), and a concise verbal summary of their relative possibility and the circumstances that could produce them. And that has its own downsides. It would be difficult from a visual design perspective. It would require a degree of human interpretation, which increases the risk of bias, editorializing, and punditry. You couldn't auto-update it more than once per day, which would be less satisfying to the feverish refreshers. And ultimately, people want The Numbers. One thing I could see is pulling from the simulations to create a red state / blue state map that would flash between different possible outcomes. Visually seeing a map that goes red 25% of the time would be a better way to get probabilities across than the current color map. Most people will round light blue up to blue without understanding that it's actually closer to light red than it is to deep blue.
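A sketch of what one frame of such a map could look like, with invented probabilities; over many frames a 25%-red state shows red about a quarter of the time:

```python
import random

# Each animation frame samples one outcome per state, so the flashing
# frequency itself carries the probability.
random.seed(3)

P_RED = {"FL": 0.45, "WI": 0.25, "CA": 0.02}  # hypothetical red chances

def frame():
    # one simulated outcome: a color per state
    return {state: ("red" if random.random() < p else "blue")
            for state, p in P_RED.items()}

frames = [frame() for _ in range(1000)]
wi_red_share = sum(f["WI"] == "red" for f in frames) / len(frames)
```

A real implementation would sample whole correlated outcomes from the model rather than flipping each state independently, so that correlated multi-state flips show up together in a frame.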
|
# ? Nov 12, 2016 23:36 |
|
in_cahoots posted:One thing I could see is pulling from the simulations to create a red state / blue state map that would flash between different possible outcomes. Visually seeing a map that goes red 25% of the time would be a better way to get probabilities across than the current color map. Most people will round light blue up to blue without understanding that it's actually closer to light red than it is to deep blue. if you sit a study participant in front of an array of lights where the lights blink in totally random patterns, the participant will swear up and down that there is a pattern to it. i'm not sure if blinking 51 different elements in an accurate probabilistic fashion would produce any different a result
|
# ? Nov 12, 2016 23:39 |
|
In the run-up to the Canadian election last year, The Globe and Mail had a model where you could click a button to run their parliamentary monte carlo simulation yourself. Click, it's a Liberal Party majority, click, it's an NDP minority, click, now it's a Conservative minority. It was cool for understanding how their model worked, but it wasn't informative as to the actual result. You could click and click until you got a result you liked, and walk away feeling confident it was possible/likely. Basically, throwing too much noise at people isn't any better than giving them oversimplified numbers.
|
# ? Nov 13, 2016 00:23 |
|
There's been recent discussion of using animations to illustrate uncertainty in the statistics community, yeah, and maybe it could work. That said, the NYT tried that during the election itself, didn't they? And I don't think people terribly liked that.
|
# ? Nov 13, 2016 01:06 |
|
https://twitter.com/ForecasterEnten/status/797590844727554048 He's beginning to (un)believe.
|
# ? Nov 13, 2016 01:09 |
|
Fangz posted:There's been recent discussion of using animations to illustrate uncertainty in the statistics community, yeah, and maybe it could work. That said, the NYT tried that during the election itself, didn't they? And I don't think people terribly liked that. I think the part they didn't like was, the animation that the NYT chose used the metaphor of a set of rapidly twitching pressure gauges, on a night when people's nerves were completely shot.
|
# ? Nov 13, 2016 01:10 |
|
Supercar Gautier posted:I think the part they didn't like was, the animation that the NYT chose used the metaphor of a set of rapidly twitching pressure gauges, on a night when people's nerves were completely shot. People were talking poo poo about that stuff very early in the night even when everybody thought it was going to be a Clinton landslide. It also just seemed extraordinarily click-baity, like they were trying to juice more excitement out of a night when anybody interested enough to look at their website is already pretty excited. It also gave a false impression of what their model was doing and made it seem like new information was coming in every millisecond when election night is a night where information just doesn't come in that fast. It felt like they handed all their poo poo off to a bunch of web and graphic designers who went loving nuts on it without any regard for the actual data. Vox Nihili posted:https://twitter.com/ForecasterEnten/status/797590844727554048 I like Harry Enten a lot, but this is some Monday-morning-quarterback bullshit. He needs to stay in his lane. She should have done those things, probably. The Comey letter still won Trump the election. The VRA being gutted still won Trump the election. By going with this narrative he's papering over systemic flaws and unforeseen fuckery that led to disastrous consequences. ErIog has issued a correction as of 01:23 on Nov 13, 2016 |
# ? Nov 13, 2016 01:15 |
|
|
This election loss was death by a thousand cuts. It was close enough that any one or two factors might have made the difference. It's silly for Enten to push this "It was X and Y but ABSOLUTELY NOT Z" angle when it's obvious that yes, X/Y/Z each had an effect.
|
# ? Nov 13, 2016 01:30 |