538 Model Megathread

Author Topic: 538 Model Megathread  (Read 85085 times)
Beefalow and the Consumer
Beef
Junior Chimp
*****
Posts: 9,123
United States


Political Matrix
E: -2.77, S: -8.78

« Reply #925 on: November 02, 2016, 01:35:29 PM »

What the bloody hell?  How does the Marquette poll showing Clinton holding Wisconsin by early-October margins cause her chances to tick downward?
Logged
adrac
adracman42
Jr. Member
***
Posts: 722


Political Matrix
E: -9.99, S: -9.99

« Reply #926 on: November 02, 2016, 01:35:29 PM »

I don't understand how the Marquette poll changes the final probability by 0.6%. Even adjusting to C+5, the poll still has her winning by more than the model expects her to, even after the poll's been added to the database. That goes against how the model uses polling data, right?
Logged
Yank2133
Junior Chimp
*****
Posts: 5,387


« Reply #927 on: November 02, 2016, 01:37:46 PM »

The Nowcast has NV, FL, NC, OH, and IA all leaning Trump.  Some by the narrowest of margins, of course.

Input garbage, output garbage. Only Nate could have his model claim Trump's leading NV and then release an article about how Clinton is likely to exceed her already positive NV polls.

The primary broke Nate.



The fact that he's resorting to quoting political futures markets to back his models is telling. Also the fact that he's basing his uncertainty on polling data going back to 1972 betrays that he really still wants the lesson of the primary to have been that nobody can know a thing, rather than that he just got it wrong.

He has been hedging his bets this entire cycle and it is cowardly. At least Cohn and Wang have the balls to stick to their guns.
Logged
Donnie
Jr. Member
***
Posts: 351


« Reply #928 on: November 02, 2016, 01:41:13 PM »

Nate Silver ‏@NateSilver538  8m8 minutes ago

Each candidate won Florida exactly 5,000 times in our 10,000 simulations just now. http://53eig.ht/29fvWfn
Logged
Mr. Morden
Atlas Legend
*****
Posts: 44,066
United States


« Reply #929 on: November 02, 2016, 01:41:18 PM »

What is the uncertainty of the model, given the # of simulations they run?  That is, each time they enter new polls, they apparently run 10,000 simulations based on the latest #s, and that produces (among other things) an overall projected vote margin and win probability.  But let's say they then ran *another* 10,000 simulations with the same input #s but a different random seed?  How different would the results be?  Because if they'd be different, then in theory you could put in a favorable poll for one candidate and it would end up "helping" the other candidate, just because of simulation noise.  But I'm assuming that 10,000 is enough for the simulation noise to be small?
Logged
Ebsy
Junior Chimp
*****
Posts: 8,001
United States


« Reply #930 on: November 02, 2016, 01:42:02 PM »

It's pretty obvious that Silver's model is a gigantic joke.
Logged
elcorazon
Sr. Member
****
Posts: 3,402


« Reply #931 on: November 02, 2016, 01:50:52 PM »

Silver's model underestimates the risk of a Trump Presidency at this point. I'd put it at best at a coin flip. I hate America.
Logged
PaperKooper
Jr. Member
***
Posts: 827
United States


Political Matrix
E: 5.23, S: 5.57

« Reply #932 on: November 02, 2016, 01:51:08 PM »

Let the salt course through your veins.  
Logged
Erc
Junior Chimp
*****
Posts: 5,823
Slovenia


« Reply #933 on: November 02, 2016, 01:56:17 PM »

What is the uncertainty of the model, given the # of simulations they run?  That is, each time they enter new polls, they apparently run 10,000 simulations based on the latest #s, and that produces (among other things) an overall projected vote margin and win probability.  But let's say they then ran *another* 10,000 simulations with the same input #s but a different random seed?  How different would the results be?  Because if they'd be different, then in theory you could put in a favorable poll for one candidate and it would end up "helping" the other candidate, just because of simulation noise.  But I'm assuming that 10,000 is enough for the simulation noise to be small?


This is purely a statistical noise effect; given Clinton's win percentage of 70% or so, we shouldn't be surprised by jumps of 0.6% or so in Clinton's win percentage just from rerunning the simulations (1 sigma).  If you start looking at individual battlegrounds, we definitely shouldn't be surprised if one of them jumps a percent or two just from statistical noise.
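You can see the size of this rerun noise with a toy sketch (just independent Bernoulli draws with a made-up 70% win probability, nothing like 538's actual model):

```python
import random
import statistics

def win_probability(p_true, n_sims, rng):
    # Count wins in n_sims independent Bernoulli(p_true) draws.
    wins = sum(rng.random() < p_true for _ in range(n_sims))
    return wins / n_sims

rng = random.Random(538)
# Re-run the "model" 200 times with identical inputs; only the randomness changes.
estimates = [win_probability(0.70, 10_000, rng) for _ in range(200)]

spread = statistics.stdev(estimates)
print(f"typical rerun-to-rerun spread: {spread * 100:.2f} points")
# Theory says sqrt(0.7 * 0.3 / 10_000) ~= 0.46 points, i.e. about half a point.
```

So even with nothing changing at all, the headline number wobbles by about half a point per rerun.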
Logged
Devout Centrist
Atlas Icon
*****
Posts: 10,141
United States


Political Matrix
E: -99.99, S: -99.99

« Reply #934 on: November 02, 2016, 01:59:03 PM »

Lol happy go lucky Larry Sabato may be better than Nate Silver.
Logged
adrac
adracman42
Jr. Member
***
Posts: 722


Political Matrix
E: -9.99, S: -9.99

« Reply #935 on: November 02, 2016, 02:00:59 PM »

What is the uncertainty of the model, given the # of simulations they run?  That is, each time they enter new polls, they apparently run 10,000 simulations based on the latest #s, and that produces (among other things) an overall projected vote margin and win probability.  But let's say they then ran *another* 10,000 simulations with the same input #s but a different random seed?  How different would the results be?  Because if they'd be different, then in theory you could put in a favorable poll for one candidate and it would end up "helping" the other candidate, just because of simulation noise.  But I'm assuming that 10,000 is enough for the simulation noise to be small?


This is purely a statistical noise effect; given Clinton's win percentage of 70% or so, we shouldn't be surprised by jumps of 0.6% or so in Clinton's win percentage just from rerunning the simulations (1 sigma).  If you start looking at individual battlegrounds, we definitely shouldn't be surprised if one of them jumps a percent or two just from statistical noise.

I wouldn't have expected changes that significant with 10,000 trials, although I will say my technical experience in statistics is fairly limited.
Logged
Erc
Junior Chimp
*****
Posts: 5,823
Slovenia


« Reply #936 on: November 02, 2016, 02:16:19 PM »

What is the uncertainty of the model, given the # of simulations they run?  That is, each time they enter new polls, they apparently run 10,000 simulations based on the latest #s, and that produces (among other things) an overall projected vote margin and win probability.  But let's say they then ran *another* 10,000 simulations with the same input #s but a different random seed?  How different would the results be?  Because if they'd be different, then in theory you could put in a favorable poll for one candidate and it would end up "helping" the other candidate, just because of simulation noise.  But I'm assuming that 10,000 is enough for the simulation noise to be small?


This is purely a statistical noise effect; given Clinton's win percentage of 70% or so, we shouldn't be surprised by jumps of 0.6% or so in Clinton's win percentage just from rerunning the simulations (1 sigma).  If you start looking at individual battlegrounds, we definitely shouldn't be surprised if one of them jumps a percent or two just from statistical noise.

I wouldn't have expected changes that significant with 10,000 trials, although I will say my technical experience in statistics is fairly limited.

Statistical noise scales as 1 / sqrt(number of trials), so it takes a lot of trials to get your error below the percent level.  To decrease your statistical error by a factor of 10, you need to run your simulation 100 times longer.

This does mean that 538 really shouldn't be quoting their probabilities to a tenth of a percentage point, unless they feel like running a million simulations each time.
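The 1/sqrt(n) scaling is easy to check with the usual standard-error formula for a proportion (toy numbers, assuming simple independent trials):

```python
import math

def sim_noise(p, n):
    """1-sigma statistical noise of a win probability estimated from n trials."""
    return math.sqrt(p * (1 - p) / n)

# For a ~70% Clinton win probability:
se_10k = sim_noise(0.70, 10_000)     # ~0.0046 -> about half a point
se_1m = sim_noise(0.70, 1_000_000)   # ~0.00046 -> 100x the trials, 10x less noise
print(f"10k sims: +/-{se_10k * 100:.2f} pts   1M sims: +/-{se_1m * 100:.3f} pts")
```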
Logged
Mr. Morden
Atlas Legend
*****
Posts: 44,066
United States


« Reply #937 on: November 02, 2016, 02:17:41 PM »

What is the uncertainty of the model, given the # of simulations they run?  That is, each time they enter new polls, they apparently run 10,000 simulations based on the latest #s, and that produces (among other things) an overall projected vote margin and win probability.  But let's say they then ran *another* 10,000 simulations with the same input #s but a different random seed?  How different would the results be?  Because if they'd be different, then in theory you could put in a favorable poll for one candidate and it would end up "helping" the other candidate, just because of simulation noise.  But I'm assuming that 10,000 is enough for the simulation noise to be small?


This is purely a statistical noise effect; given Clinton's win percentage of 70% or so, we shouldn't be surprised by jumps of 0.6% or so in Clinton's win percentage just from rerunning the simulations (1 sigma).  If you start looking at individual battlegrounds, we definitely shouldn't be surprised if one of them jumps a percent or two just from statistical noise.

I wouldn't have expected changes that significant with 10,000 trials, although I will say my technical experience in statistics is fairly limited.

Thinking about it some more, I guess it's just Poisson noise.  It's like taking a poll of 10,000 people.  Your margin of error is small, but it's not going to be as low as 0.1%.  You should expect the win probability for one of the candidates to shift by ~0.5% from one set of simulations to the next (meaning that the gap between them will shift by ~1% from one set of simulations to the next).

Which means, yeah, if the win probability changes by about half a percent, it is statistically meaningless.  You would literally expect a shift of that size just from using exactly the same set of polls, but running the simulations a second time.  And given that there are 50 states, there are going to be at least a few states where the simulation creates a big shift, even when the polls haven't changed at all.
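A toy version of the 50-states point (made-up win probabilities and simple independent draws, not anything 538 actually does): rerun the same inputs twice and count "phantom shifts".

```python
import random

rng = random.Random(2016)
N_SIMS = 10_000

# 50 hypothetical state win probabilities: a mix of safe and swing states.
state_probs = [0.05, 0.95, 0.50, 0.55, 0.45, 0.70, 0.30] * 7 + [0.60]

def estimate(p):
    # Win fraction across N_SIMS Bernoulli(p) draws.
    return sum(rng.random() < p for _ in range(N_SIMS)) / N_SIMS

run_a = [estimate(p) for p in state_probs]
run_b = [estimate(p) for p in state_probs]  # identical inputs, new randomness

# "Phantom shifts": states whose win probability moves >= 1 point between
# two runs even though not a single poll changed.
phantom = sum(abs(a - b) >= 0.01 for a, b in zip(run_a, run_b))
print(f"states shifting >= 1 point: {phantom} of {len(state_probs)}")
```

The swing states are the ones that wobble; the safe states barely move, since binomial noise peaks at p = 0.5.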
Logged
adrac
adracman42
Jr. Member
***
Posts: 722


Political Matrix
E: -9.99, S: -9.99

« Reply #938 on: November 02, 2016, 02:21:16 PM »

What is the uncertainty of the model, given the # of simulations they run?  That is, each time they enter new polls, they apparently run 10,000 simulations based on the latest #s, and that produces (among other things) an overall projected vote margin and win probability.  But let's say they then ran *another* 10,000 simulations with the same input #s but a different random seed?  How different would the results be?  Because if they'd be different, then in theory you could put in a favorable poll for one candidate and it would end up "helping" the other candidate, just because of simulation noise.  But I'm assuming that 10,000 is enough for the simulation noise to be small?


This is purely a statistical noise effect; given Clinton's win percentage of 70% or so, we shouldn't be surprised by jumps of 0.6% or so in Clinton's win percentage just from rerunning the simulations (1 sigma).  If you start looking at individual battlegrounds, we definitely shouldn't be surprised if one of them jumps a percent or two just from statistical noise.

I wouldn't have expected changes that significant with 10,000 trials, although I will say my technical experience in statistics is fairly limited.

Thinking about it some more, I guess it's just Poisson noise.  It's like taking a poll of 10,000 people.  Your margin of error is small, but it's not going to be as low as 0.1%.  You should expect the win probability for one of the candidates to shift by ~0.5% from one set of simulations to the next (meaning that the gap between them will shift by ~1% from one set of simulations to the next).

Which means, yeah, if the win probability changes by about half a percent, it is statistically meaningless.  You would literally expect a shift of that size just from using exactly the same set of polls, but running the simulations a second time.  And given that there are 50 states, there are going to be at least a few states where the simulation creates a big shift, even when the polls haven't changed at all.


Guess so; I'm just mildly surprised that I haven't seen effects like that pointed out before.

And the T+1 Nevada poll gives her 0.2% nationally too, huh.
Logged
Mr. Morden
Atlas Legend
*****
Posts: 44,066
United States


« Reply #939 on: November 02, 2016, 02:25:47 PM »

And the T+1 Nevada poll gives her 0.2% nationally too, huh.

Yeah, I just noticed that too.  It's a poll that's very mildly more pro-Trump than the Nevada average, but it's a very low-weight poll.  It's only the 8th most heavily weighted poll for Nevada at the moment, meaning that it should have virtually no impact on the overall win statistics.  And we see the national win numbers shift to Clinton by 0.2%, which again is not really because of this poll, but because of the statistical noise in the model.
Logged
Erich Maria Remarque
LittleBigPlanet
YaBB God
*****
Posts: 3,646
Sweden


« Reply #940 on: November 02, 2016, 02:29:35 PM »
« Edited: November 02, 2016, 02:31:42 PM by Happy Sad Trumpista »

IIRC, the old polls from the same pollster always get less weight at the moment a newer poll is added to the database.

It might be part of the explanation.

One should probably also compare to the Polling average rather than the Adjusted polling average, and definitely not to the Projected vote share for Nov. 8. But it is still strange.
Logged
Slander and/or Libel
Figs
Sr. Member
****
Posts: 2,338


Political Matrix
E: -6.32, S: -7.83

« Reply #941 on: November 02, 2016, 02:32:38 PM »

Anyone know precisely how the model handles correlation between states? I know he talked about it a bit, but it still feels hazy. When he's noting a trend in a state, is it a trend from previous polling, or from his adjustment to the polling, or from his adjustment due to correlation with polling from other states or nationally? It's good to note that states are correlated, but I think it quite likely that by putting his thumb on that scale so hard, he's double counting some things, or canceling out others.
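Nobody outside 538 knows the exact structure, but the standard trick is a shared national error term on top of independent state errors, along these lines (all margins and error sizes invented for illustration):

```python
import random

rng = random.Random(42)

# Toy Clinton-minus-Trump margins (points) for a few battlegrounds -- made up.
state_margin = {"NV": 2.0, "FL": 0.5, "NC": -0.5, "OH": -2.0, "WI": 6.0}

NATIONAL_SD = 3.0  # error shared by every state (a systematic polling miss)
STATE_SD = 4.0     # error independent in each state

def simulate_once():
    shared = rng.gauss(0, NATIONAL_SD)
    return {s: m + shared + rng.gauss(0, STATE_SD)
            for s, m in state_margin.items()}

N = 10_000
fl_wins = nc_wins = both = 0
for _ in range(N):
    r = simulate_once()
    fl_wins += r["FL"] > 0
    nc_wins += r["NC"] > 0
    both += r["FL"] > 0 and r["NC"] > 0

p_fl, p_nc, p_both = fl_wins / N, nc_wins / N, both / N
# The shared error makes state outcomes move together:
print(f"P(FL)*P(NC) = {p_fl * p_nc:.3f}   P(FL and NC) = {p_both:.3f}")
```

The joint probability comes out well above the independent product, which is exactly why correlated models give Trump a real chance of sweeping the close states even when he trails in each one individually.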
Logged
Mr. Morden
Atlas Legend
*****
Posts: 44,066
United States


« Reply #942 on: November 02, 2016, 02:43:07 PM »

IIRC, the old polls from the same pollster always get less weight at the moment a newer poll is added to the database.

It might be part of the explanation.

I don't think you need that to explain it.  As I said, shifts of around 0.5% in the win probability are going to happen even when you don't add any new polls at all, simply because of statistical noise.  And that's an average.  Sometimes the "phantom shifts" that cannot be explained by any particular poll are larger than that.
Logged
Erich Maria Remarque
LittleBigPlanet
YaBB God
*****
Posts: 3,646
Sweden


« Reply #943 on: November 02, 2016, 02:47:22 PM »

IIRC, the old polls from the same pollster always get less weight at the moment a newer poll is added to the database.

It might be part of the explanation.

I don't think you need that to explain it.  As I said, shifts of around 0.5% in the win probability are going to happen even when you don't add any new polls at all, simply because of statistical noise.  And that's an average.  Sometimes the "phantom shifts" that cannot be explained by any particular poll are larger than that.


OK. 10,000 seems like a lot for a 0.5% shift to me. And in the vast majority of cases, the changes seemed reasonable.

Someone might ask them about the model's variance on Twitter. The 538 guys usually answer.
Logged
Slander and/or Libel
Figs
Sr. Member
****
Posts: 2,338


Political Matrix
E: -6.32, S: -7.83

« Reply #944 on: November 02, 2016, 02:48:36 PM »

IIRC, the old polls from the same pollster always get less weight at the moment a newer poll is added to the database.

It might be part of the explanation.

I don't think you need that to explain it.  As I said, shifts of around 0.5% in the win probability are going to happen even when you don't add any new polls at all, simply because of statistical noise.  And that's an average.  Sometimes the "phantom shifts" that cannot be explained by any particular poll are larger than that.


OK. 10,000 seems like a lot for a 0.5% shift to me. And in the vast majority of cases, the changes seemed reasonable.

Remember that 0.5% means a change in 50 runs out of 10,000. It's not actually a big jump at all.
Logged
Mr. Morden
Atlas Legend
*****
Posts: 44,066
United States


« Reply #945 on: November 02, 2016, 02:48:44 PM »

IIRC, the old polls from the same pollster always get less weight at the moment a newer poll is added to the database.

It might be part of the explanation.

I don't think you need that to explain it.  As I said, shifts of around 0.5% in the win probability are going to happen even when you don't add any new polls at all, simply because of statistical noise.  And that's an average.  Sometimes the "phantom shifts" that cannot be explained by any particular poll are larger than that.


OK. 10,000 seems like a lot for a 0.5% shift to me.

As I said, it's basically equivalent to taking a poll with 10,000 people in it.  The MoE is not going to be less than 0.5%.
Logged
ursulahx
Jr. Member
***
Posts: 527
United Kingdom


« Reply #946 on: November 02, 2016, 02:58:01 PM »

Silver got burnt by Brexit, the UK general election and the primaries. He's desperate not to get this one wrong, so he's hedging his bets. You should hear him on the podcast, dissing all the other models that have Clinton at 90%.

If Clinton wins he can relax, no damage done. If Trump wins, he will look like the star player. Quite smart, in a way.
Logged
elcorazon
Sr. Member
****
Posts: 3,402


« Reply #947 on: November 02, 2016, 03:06:21 PM »

Sometimes I think we internalize our expectations before they're fully baked into the model. Say a few national polls show the race getting closer, and based on that we expect WI to be around C+3-5; then a poll comes in at C+6 and we think that's good for Clinton. But the model may read it as reinforcing the tightening, because before the tightening it might have expected C+8-9. It takes many polls to fully alter the overall picture.
Logged
Erich Maria Remarque
LittleBigPlanet
YaBB God
*****
Posts: 3,646
Sweden


« Reply #948 on: November 02, 2016, 03:13:03 PM »
« Edited: November 02, 2016, 03:14:43 PM by Happy Sad Trumpista »

As I said, it's basically equivalent to taking a poll with 10,000 people in it.  The MoE is not going to be less than 0.5%.
It makes sense now! Thanks!

Silver got burnt by Brexit, the UK general election and the primaries. He's desperate not to get this one wrong, so he's hedging his bets. You should hear him on the podcast, dissing all the other models who have Clinton at 90%.

If Clinton wins he can relax, no damage done. If Trump wins, he will look like the star player. Quite smart, in a way.
Brexit, UK general election???

The primary, yeah. Nate made a mistake, but his model didn't; it's practically the same as in 2012.
Logged
Figueira
84285
Atlas Icon
*****
Posts: 12,173


« Reply #949 on: November 02, 2016, 03:23:43 PM »

What is the uncertainty of the model, given the # of simulations they run?  That is, each time they enter new polls, they apparently run 10,000 simulations based on the latest #s, and that produces (among other things) an overall projected vote margin and win probability.  But let's say they then ran *another* 10,000 simulations with the same input #s but a different random seed?  How different would the results be?  Because if they'd be different, then in theory you could put in a favorable poll for one candidate and it would end up "helping" the other candidate, just because of simulation noise.  But I'm assuming that 10,000 is enough for the simulation noise to be small?


I've wondered the same thing. I suspect the noise should be small, although it would be noticeable if they actually reported the raw number of winning simulations instead of rounding to the nearest 10.
Logged
Powered by SMF 1.1.21 | SMF © 2015, Simple Machines
