Nate Silver's Model Bias
Pages: [1] 2
Author Topic: Nate Silver's Model Bias  (Read 4154 times)
ElectionsGuy
Atlas Star
*****
Posts: 21,101
United States


Political Matrix
E: 7.10, S: -7.65

« on: April 27, 2020, 05:21:29 AM »
« edited: April 27, 2020, 05:34:02 AM by ElectionsGuy »

So I gathered a bunch of data from Silver's models for both 2016 and 2018. I took his final average projected vote in every state for four sets of elections (2016 Senate, 2016 President, 2018 Senate, 2018 Governor) and compared it to the actual results. Here's the average of those errors (D+1 means the model on average overestimated Democrats by a point relative to the real results). Initially I wanted to cover 2014-2018 (one R year, one neutral year, and one D year, which I thought would be a perfect mix), but I can't even find a 2014 governor model (if he made one), so I decided against including that year.

[map: average model error by state, 2016-2018]

This isn't perfect. For some states only two elections are used; for others, all four. The following states have only two elections behind their figure: AK, CA, DE, KY, LA, MS, MT, NC, VA, WV. Most of them are short because their governor elections fall in non-midterm years (I'm not aware of any models he made for the 2016/2017/2019 governor races), or because a Senate race lacked a comparable two-party matchup (D vs. D in CA; the Democrat in AK in 2016 was nearly irrelevant next to a large third-party vote). These figures, along with those for heavily lopsided states that get little or no polling, should be taken with a grain of salt and aren't as important here.

Key findings:

 - Heavily partisan states usually have their margins underestimated. This was especially the case in his 2016 models.
 - Still, red states tend to have their margins underestimated much more harshly than blue states.
 - Unsurprisingly, the states with the strongest biases are either places trending toward one party (rural Midwest, suburban sunbelt) or the reddest and bluest places.
 - The kinds of areas that get underestimated for Republicans are unmistakable. In the projections for the 2018 House districts, the biggest errors came in rural/exurban, heavily white, non-college areas - places like OH-13, WI-03, and MN-08. He overestimated Republicans, by almost as much, in sprawling suburban sunbelt areas like TX-32 or any of the Orange County, California districts.
 - The Deep South, where racial patterns are highly predictive of voting, has the least bias.

There are some instances where the bias split between 2016 and 2018. For example, he underestimated R's in PA in 2016 but slightly overestimated them in 2018. He underestimated Trump in KS in 2016 but overestimated Kobach by almost as much in 2018. And although in most states the 2016 bias was worse than the 2018 bias, there are some where this is not the case, such as FL and IN (more of a D bias in 2018 than in 2016).

Nationally, I'm using the average of his error for the 2016 presidential race and the 2018 House vote. I made this partly out of curiosity, but also because I think it's important to keep these patterns in mind when his 2020 model comes out.
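For concreteness, the error-averaging just described can be sketched in a few lines of Python. All margins here are invented placeholders, not 538's actual numbers; the real exercise would feed in the final pre-election projected margins and certified results for each of the four race sets.

```python
# Average signed model error per state across several races.
# Positive = the model overestimated Democrats ("D+1" = one point too D).
# All margins below are illustrative placeholders, not the real 538 numbers.

def signed_error(projected_d_margin, actual_d_margin):
    """Positive result means Democrats were overestimated."""
    return projected_d_margin - actual_d_margin

# (projected D margin, actual D margin) per race, in points
races = {
    "WI": [(6.5, -0.8), (-1.0, 1.1)],
    "CA": [(28.0, 30.1)],
}

avg_bias = {
    state: sum(signed_error(p, a) for p, a in pairs) / len(pairs)
    for state, pairs in races.items()
}

for state, bias in sorted(avg_bias.items()):
    label = f"D+{bias:.1f}" if bias >= 0 else f"R+{-bias:.1f}"
    print(state, label)
```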
Skye
yeah_93
YaBB God
*****
Posts: 4,663
Venezuela


« Reply #1 on: April 27, 2020, 05:36:39 AM »

This is very interesting.

Can't blame him for overestimating Ds in North Dakota. I never would have expected Trump's margin there.
Brittain33
brittain33
Moderators
Atlas Star
*****
Posts: 22,849


« Reply #2 on: April 27, 2020, 05:47:21 AM »

This is very interesting.

Can't blame him for overestimating Ds in North Dakota. I never would have expected Trump's margin there.

Also, he missed Heitkamp winning in 2012.
ProgressiveModerate
Atlas Icon
*****
Posts: 18,299


« Reply #3 on: April 27, 2020, 07:58:03 AM »

He seems to underestimate trends, as well as overestimate parties in states where the demographics do not favor them. ND clearly has demographics that heavily favor R's, for example.
pppolitics
Junior Chimp
*****
Posts: 6,082


« Reply #4 on: April 27, 2020, 08:50:03 AM »

Democrats underperformed the polls across the board in 2016.

Also, results from the presidential election shouldn't be mixed with results from governor and senate races.
CivicParticipant
Spark498
Junior Chimp
*****
Posts: 9,693
United States


« Reply #5 on: April 27, 2020, 12:00:54 PM »

Democrats underperformed the polls across the board in 2016.

Also, results from the presidential election shouldn't be mixed with results from governor and senate races.
Fmr. Gov. NickG
NickG
Junior Chimp
*****
Posts: 9,099


Political Matrix
E: -8.00, S: -3.49

« Reply #6 on: April 27, 2020, 12:33:01 PM »

Have you looked at the correlation between average bias (absolute value) and the number of polls taken in a state? I would guess that many of the noncompetitive states are way off because there weren't many polls, and thus the model was much more heavily based on past election results.
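The check proposed here is a simple correlation. A sketch with made-up numbers (the real inputs would be each state's |average bias| from the OP's table and a count of polls taken in each state):

```python
# Pearson correlation between |average bias| and the number of polls per state.
# Both lists are made-up placeholders, not real data.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

num_polls = [2, 4, 30, 25, 9]           # polls taken in each state
abs_bias  = [8.2, 5.1, 1.0, 0.8, 3.3]   # |average model error|, in points

r = pearson(num_polls, abs_bias)
# A clearly negative r would support the "few polls -> bigger misses" guess.
```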
Mr.Bakari-Sellers
olawakandi
Atlas Institution
*****
Posts: 98,482
Jamaica


Political Matrix
E: -6.19, S: -4.17

« Reply #7 on: April 27, 2020, 12:44:35 PM »

Allow me: 538 says 278 en route to Prez.

It's also www.electionprojection.com's prediction too.
WI is the tipping-point state, and the NC Senate race is the tipping-point Senate race.

Alben Barkley
KYWildman
Atlas Icon
*****
Posts: 19,901
United States


Political Matrix
E: -2.97, S: -5.74

« Reply #8 on: April 27, 2020, 01:02:10 PM »

Silver’s model reflects the polls, not the actual results. If for example his model seems to overestimate Democrats by about 3 points in Michigan, it’s not because of “bias” on Silver’s part. It’s because the Michigan polls made it look like the Democrats would do better than they actually did in Michigan. If anything the adjustments Silver makes to his model (accounting for the known D or R leans of pollsters, etc.) generally make it closer to the actual results than the raw polls in most cases.

Nate Silver is not a fortune teller and has never claimed to be. He doesn’t try to predict election results. He makes models that reflect the state of a race at a given time according to whatever data he has available (namely polls). Why so many people fail to understand this is beyond me.
jake_arlington
Jr. Member
***
Posts: 459


« Reply #9 on: April 27, 2020, 01:56:41 PM »

I think a better way to read this would be to subtract the D+1.0 nationwide number from what you see in each state. (For instance, MN would then become an R+0.7 relative tilt.)

This is because, historically, modeling errors have tended to cancel out nationally over time, but specific states can show a tendency to systematically over- or under-estimate one party at the other's expense.
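The adjustment described above is just subtraction. A tiny sketch (the MN figure is the one implied by the example; everything here is illustrative):

```python
# Relative tilt = state bias minus national bias (points, D-positive).
national_bias = 1.0           # the D+1.0 nationwide figure from the OP
state_bias = {"MN": 0.3}      # implied by the MN example; illustrative

relative_tilt = {s: b - national_bias for s, b in state_bias.items()}
# MN: 0.3 - 1.0 = -0.7, i.e. an R+0.7 relative tilt
```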
jake_arlington
Jr. Member
***
Posts: 459


« Reply #10 on: April 27, 2020, 01:58:51 PM »

Silver’s model reflects the polls, not the actual results. If for example his model seems to overestimate Democrats by about 3 points in Michigan, it’s not because of “bias” on Silver’s part. It’s because the Michigan polls made it look like the Democrats would do better than they actually did in Michigan. If anything the adjustments Silver makes to his model (accounting for the known D or R leans of pollsters, etc.) generally make it closer to the actual results than the raw polls in most cases.

Nate Silver is not a fortune teller and has never claimed to be. He doesn’t try to predict election results. He makes models that reflect the state of a race at a given time according to whatever data he has available (namely polls). Why so many people fail to understand this is beyond me.

It's not that he doesn't understand how models work. The final pre-election forecast is still put forward as a useful indicator of the margin, and, as you point out, that figure differs from the raw polling average. So even granting that the model usually lands closer to the mark than the polls alone, the consistent direction of its misses is fair game to analyze; why you still fail to understand this is beyond me!
ElectionsGuy
Atlas Star
*****
Posts: 21,101
United States


Political Matrix
E: 7.10, S: -7.65

« Reply #11 on: April 27, 2020, 11:21:03 PM »

Have you looked at the correlation between average bias (absolute value) and the number of polls taken in a state? I would guess that many of the noncompetitive states are way off because there weren't many polls, and thus the model was much more heavily based on past election results.

I made a comment about that in my post.

Democrats underperformed the polls across the board in 2016.

Also, results from the presidential election shouldn't be mixed with results from governor and senate races.

Okay, but I think it's fair to include the election of somebody who will be on the ballot again in 2020. In fact, my own opinion is that the 2016 bias will be much more predictive than the 2018 bias. Regardless, I thought it would be a good mix to have both the most recent presidential election and the most recent midterm. And what else do you want me to do? If I included just the 2016 results, people would say "well, that's just 2016" (same for 2018), when many if not most of the statewide races in both 2016 and 2018 reaffirmed the same patterns, even in the D-friendly environment of 2018.

Silver’s model reflects the polls, not the actual results. If for example his model seems to overestimate Democrats by about 3 points in Michigan, it’s not because of “bias” on Silver’s part. It’s because the Michigan polls made it look like the Democrats would do better than they actually did in Michigan. If anything the adjustments Silver makes to his model (accounting for the known D or R leans of pollsters, etc.) generally make it closer to the actual results than the raw polls in most cases.

Nate Silver is not a fortune teller and has never claimed to be. He doesn’t try to predict election results. He makes models that reflect the state of a race at a given time according to whatever data he has available (namely polls). Why so many people fail to understand this is beyond me.

Bro, part of Silver's model is a forecast - an estimate of the future. Obviously he's not going to get it right all the time; my point here was to learn from the past for the future. This idea that we must accept everything he puts out because "it's a data point at a given time" is getting old, especially since the numbers I'm using from his forecast are the numbers right before the election. If a bias gets repeated multiple times, maybe something is missing, or something should be included that isn't. If the polls keep missing in the same direction, that should be accounted for in some way so we don't make the same mistakes again. My suggestion would be to make the forecast less poll-heavy and put more weight on partisanship or demographic data.

Also, his model has, on average, more of a D bias than plain polling averages, so your point about that just isn't true.
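The reweighting suggested above, in sketch form. The 70/30 split is invented; the post only argues for shifting some weight away from polls, not for any particular number:

```python
# Blend a polling average with a fundamentals prior (partisanship/demographics).
# The 0.7 poll weight is an invented placeholder, not a recommendation.
def blended_margin(poll_avg, fundamentals_prior, poll_weight=0.7):
    return poll_weight * poll_avg + (1 - poll_weight) * fundamentals_prior

# Polls say D+5.0 but partisanship/demographics suggest D+1.0:
m = blended_margin(poll_avg=5.0, fundamentals_prior=1.0)
```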
ElectionsGuy
Atlas Star
*****
Posts: 21,101
United States


Political Matrix
E: 7.10, S: -7.65

« Reply #12 on: September 07, 2020, 07:42:48 AM »

So I gathered a bunch of data from Silver's models for both 2016 and 2018. I took his final average projected vote in every state for four sets of elections (2016 Senate, 2016 President, 2018 Senate, 2018 Governor) and compared it to the actual results. Here's the average of those errors (D+1 means the model on average overestimated Democrats by a point relative to the real results). Initially I wanted to cover 2014-2018 (one R year, one neutral year, and one D year, which I thought would be a perfect mix), but I can't even find a 2014 governor model (if he made one), so I decided against including that year.

[map: average model error by state, 2016-2018]

This isn't perfect. For some states only two elections are used; for others, all four. The following states have only two elections behind their figure: AK, CA, DE, KY, LA, MS, MT, NC, VA, WV. Most of them are short because their governor elections fall in non-midterm years (I'm not aware of any models he made for the 2016/2017/2019 governor races), or because a Senate race lacked a comparable two-party matchup (D vs. D in CA; the Democrat in AK in 2016 was nearly irrelevant next to a large third-party vote). These figures, along with those for heavily lopsided states that get little or no polling, should be taken with a grain of salt and aren't as important here.

Key findings:

 - Heavily partisan states usually have their margins underestimated. This was especially the case in his 2016 models.
 - Still, red states tend to have their margins underestimated much more harshly than blue states.
 - Unsurprisingly, the states with the strongest biases are either places trending toward one party (rural Midwest, suburban sunbelt) or the reddest and bluest places.
 - The kinds of areas that get underestimated for Republicans are unmistakable. In the projections for the 2018 House districts, the biggest errors came in rural/exurban, heavily white, non-college areas - places like OH-13, WI-03, and MN-08. He overestimated Republicans, by almost as much, in sprawling suburban sunbelt areas like TX-32 or any of the Orange County, California districts.
 - The Deep South, where racial patterns are highly predictive of voting, has the least bias.

There are some instances where the bias split between 2016 and 2018. For example, he underestimated R's in PA in 2016 but slightly overestimated them in 2018. He underestimated Trump in KS in 2016 but overestimated Kobach by almost as much in 2018. And although in most states the 2016 bias was worse than the 2018 bias, there are some where this is not the case, such as FL and IN (more of a D bias in 2018 than in 2016).

Nationally, I'm using the average of his error for the 2016 presidential race and the 2018 House vote. I made this partly out of curiosity, but also because I think it's important to keep these patterns in mind when his 2020 model comes out.

I made this thread a while ago, but I want to revisit it to "unskew" the 2020 model (based on the biases on the map) and see what results come out. Here they are. For the first set I use just the 2016 presidential error, and for the second the average of the 2016 and 2018 errors.

It goes 2016 Error / '16 and '18 Average Error:

National: Biden +5.1% / Biden +5.5%

AL: Trump +26.2% / Trump +22.7%
AK: Trump +16.3% / Trump +14.3%
AZ: Biden +1.0% / Biden +1.4%
AR: Trump +21.2% / Trump +21.4%
CA: Biden +37.4% / Biden +37.1%
CO: Biden +10.2% / Biden +8.1%
CT: Biden +19.6% / Biden +19.0%
DE: Biden +20.3% / Biden +18.4%
FL: Trump +0.3% / Trump +1.4%
GA: Trump +4.3% / Trump +3.7%
HI: Biden +41.8% / Biden +35.0%
ID: Trump +40.7% / Trump +37.4%
IL: Biden +25.0% / Biden +25.3%
IN: Trump +21.4% / Trump +21.9%
IA: Trump +10.5% / Trump +9.4%
KS: Trump +20.2% / Trump +12.5%
KY: Trump +31.6% / Trump +28.2%
LA: Trump +18.9% / Trump +15.9%
ME: Biden +3.8% / Biden +6.0%
 ME2: Trump +14.6% / Trump +10.9%
MD: Biden +27.9% / Biden +28.0%
MA: Biden +35.4% / Biden +32.4%
MI: Biden +2.6% / Biden +3.9%
MN: Biden +1.3% / Biden +4.9%
MS: Trump +17.2% / Trump +16.5%
MO: Trump +19.4% / Trump +16.8%
MT: Trump +17.7% / Trump +15.7%
NE: Trump +26.2% / Trump +21.6%
 NE2: Biden +1.6% / Biden +1.5%
NV: Biden +7.5% / Biden +8.8%
NH: Biden +3.0% / Biden +5.5%
NJ: Biden +20.5% / Biden +19.1%
NM: Biden +15.1% / Biden +17.0%
NY: Biden +29.2% / Biden +29.6%
NC: Trump +4.0% / Trump +3.6%
ND: Trump +37.1% / Trump +36.3%
OH: Trump +8.2% / Trump +7.6%
OK: Trump +38.7% / Trump +34.9%
OR: Biden +16.2% / Biden +13.2%
PA: Trump +0.8% / Biden +2.6%
RI: Biden +24.5% / Biden +20.3%
SC: Trump +16.0% / Trump +9.0%
SD: Trump +35.2% / Trump +29.8%
TN: Trump +30.8% / Trump +25.6%
TX: Trump +4.6% / Trump +2.4%
UT: Trump +24.8% / Trump +22.0%
VT: Biden +33.8% / Biden +30.3%
VA: Biden +10.6% / Biden +10.8%
WA: Biden +26.5% / Biden +23.5%
WV: Trump +50.0% / Trump +44.3%
WI: Trump +0.9% / Biden +1.5%
WY: Trump +51.8% / Trump +48.1%

2020 Model Adjusted for the 2016 Error

[map: adjusted 2020 electoral map]

Trump/Pence: 278
Biden/Harris: 260

2020 Model Adjusted for the 2016/2018 Average Error

[map: adjusted 2020 electoral map]

Biden/Harris: 290
Trump/Pence: 248

So what's my conclusion? I think it's safe to say he won't have the same 8+ point errors in safe red states like WV again in 2020, but perhaps there will still be smaller errors in the same direction. I think it's quite possible this model is more accurate than 2016's, given that Biden is performing 3 points better than Clinton nationally but about the same in the swing states. The only swing state where I think the model significantly overestimates Biden is Michigan. This could mean state polls and projections are more accurate this time, but it could also mean national polls are more off than last time (something I tend to doubt). Either way, this exercise shows how Trump could defy the odds again and win a second term on nothing more than a polling and modeling error similar to 2016's, not anything huge like many would have you believe.
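The "unskewing" above amounts to subtracting the historical error from the current model margin. A minimal sketch, with invented margins rather than the actual state figures:

```python
# "Unskew" a model margin by subtracting the historical D-overestimate.
# bias > 0 means the model historically overestimated Democrats by that much.
def unskew(model_d_margin, historical_d_bias):
    return model_d_margin - historical_d_bias

# Illustrative only: a model margin of Biden +8.0 in a state where the
# 2016/2018 models ran D+4.0 would be shifted down to Biden +4.0.
adjusted = unskew(8.0, 4.0)
```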
ElectionsGuy
Atlas Star
*****
Posts: 21,101
United States


Political Matrix
E: 7.10, S: -7.65

« Reply #13 on: September 07, 2020, 07:43:19 AM »

Here is a detailed comparison of the 2016 and 2020 models and their win probabilities. "2016 Now" = September 7th, 2016. I also added the implied trend of each state, assuming the national vote is correct (this is based on 2016 results, not 2016 projections). It's easy to see why Biden may need to win by a margin of 5 points or more to win the EC. AZ, NE2, NV, and NH are the only critical states where he performs significantly better than Clinton in the model.

National

2016 Now: Clinton +3.5% (67/33)
2016 End: Clinton +3.6% (71/29)
2020 Now: Biden +6.5% (71/29)

Arizona

2016 Now: Trump +3.4% (68/32)
2016 End: Trump +2.2% (67/33)
2020 Now: Biden +2.3% (61/39) --> Trend: D+1.3%

Florida

2016 Now: Clinton +1.6% (59/41)
2016 End: Clinton +0.5% (55/45)
2020 Now: Biden +1.6% (58/42) --> Trend: R+1.6%

Georgia

2016 Now: Trump +3.8% (72/28)
2016 End: Trump +4.0% (79/21)
2020 Now: Trump +3.2% (68/32) --> Trend: R+2.5%

Iowa

2016 Now: Trump +0.8% (54/46)
2016 End: Trump +2.9% (70/30)
2020 Now: Trump +4.0% (69/31) --> Trend: D+1.0%

Maine

2016 Now: Clinton +6.4% (76/24)
2016 End: Clinton +7.5% (83/17)
2020 Now: Biden +8.2% (78/22) --> Trend: D+0.9%

Maine's 2nd District

2016 Now: Trump +3.8% (64/36)
2016 End: Clinton +0.3% (51/49)
2020 Now: Trump +4.0% (64/36) --> Trend: D+1.9%

Michigan

2016 Now: Clinton +4.3% (73/27)
2016 End: Clinton +4.2% (79/21)
2020 Now: Biden +7.0% (83/17) --> Trend: D+2.9%

Minnesota

2016 Now: Clinton +5.7% (78/22)
2016 End: Clinton +5.8% (85/15)
2020 Now: Biden +5.6% (77/23) --> Trend: R+0.3%

Nebraska's 2nd District

2016 Now: Trump +1.7% (56/44)
2016 End: Trump +1.8% (56/44)
2020 Now: Biden +2.0% (59/41) --> Trend: R+0.1%

Nevada

2016 Now: Clinton +2.8% (64/36)
2016 End: Clinton +1.2% (58/42)
2020 Now: Biden +6.3% (79/21) --> Trend: R+0.5%

New Hampshire

2016 Now: Clinton +3.8% (66/33)
2016 End: Clinton +3.6% (70/30)
2020 Now: Biden +6.2% (72/28) --> Trend: D+1.5%

North Carolina

2016 Now: Clinton +0.4% (53/47)
2016 End: Clinton +0.7% (55/44)
2020 Now: Biden +0.4% (53/47) --> Trend: R+0.4%

Ohio

2016 Now: Clinton +0.9% (55/44)
2016 End: Trump +1.9% (65/35)
2020 Now: Trump +2.0% (60/40) --> Trend: D+1.7%

Pennsylvania

2016 Now: Clinton +4.0% (72/28)
2016 End: Clinton +3.7% (77/23)
2020 Now: Biden +3.6% (70/30) --> Trend: R+0.1%

Texas

2016 Now: Trump +9.3% (92/8)
2016 End: Trump +8.5% (94/6)
2020 Now: Trump +4.2% (71/29) --> Trend: D+0.4%

Wisconsin

2016 Now: Clinton +3.7% (70/30)
2016 End: Clinton +5.3% (83/16)
2020 Now: Biden +5.2% (75/25) --> Trend: D+1.5%
BlueSwan
blueswan
Junior Chimp
*****
Posts: 7,721
Denmark


Political Matrix
E: -4.26, S: -7.30

« Reply #14 on: September 07, 2020, 08:26:36 AM »

Nice work, but I don't think you can deduce much from this. For instance, in 2016 there was a pretty heavy swing to Trump right up to the election, and undecideds broke heavily for him. This obviously wasn't caught in the polling, especially not in state-level polling, which is sparser.
ProgressiveModerate
Atlas Icon
*****
Posts: 18,299


« Reply #15 on: September 07, 2020, 09:46:21 AM »

As someone who's made a model, I'll give my 2 cents on what makes a good model and where models go wrong.

1. Including "expert ratings". The reason I believe this is such a flaw is because it's based on assumptions about states that may or may not be true. Lots of people who make these "Safe, Likely, Lean" models believe there are factors in play that the data isn't picking up, but their track record of successfully picking up on those things is spotty, to say the least. A great example is the Atlas Consensus: in 2016 it had the Rust Belt as blue as ever, despite polling showing Hillary in a weaker position than Obama in those states.

2. Using PVI from previous cycles. I believe this is a mistake because it constrains the trends suggested by the data. This is why, in my model, PVI becomes basically meaningless to a state's projection as that state accumulates more data.

Senate models tend to be a bit more interesting, because there are more factors at play. I was actually able to reverse-engineer a 2018 model that got every race correct (except FL-Sen) just by weighting factors differently. In this partisan day and age, 538 has weighted flawed incumbency scores too heavily and has not weighted the state's partisan lean enough (this is an appropriate place to use PVI, since down-ballot PVI lags behind presidential PVI anyway). Another really important factor is fundraising; it can tell you a lot about who has the enthusiasm edge. Again, for Senate races I wouldn't use expert ratings: they usually don't like calling flips, and early on they rate Senate races that end up being very competitive as lean, likely, or even safe for the incumbent party.

Results of a reverse-engineered 2018 Senate model:

[map: reverse-engineered 2018 Senate model results]

I guess, in conclusion, models are just speculation about how big a role different factors will play in a given cycle. A good model is never gonna be perfect, but it can pick up on a lot of what's at play in an election and can sort BS assumptions from actual facts.
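A weighted-factor Senate score of the kind described above might look like the sketch below. Every weight and input is invented for illustration; the post only argues the general direction (less weight on incumbency, more on partisan lean), not these numbers:

```python
# A toy weighted-factor Senate projection (D-positive margin, in points).
# Weights and inputs are invented placeholders, not a real model's values.
def project_margin(poll_avg, partisan_lean, incumbency, fundraising_edge,
                   w_polls=0.55, w_lean=0.30, w_inc=0.05, w_fund=0.10):
    # Sanity check: the weights should sum to 1 so the result stays in points.
    assert abs(w_polls + w_lean + w_inc + w_fund - 1.0) < 1e-9
    return (w_polls * poll_avg + w_lean * partisan_lean
            + w_inc * incumbency + w_fund * fundraising_edge)

# Polls D+2, state lean R+4, small incumbency bonus, D fundraising edge:
m = project_margin(poll_avg=2.0, partisan_lean=-4.0, incumbency=1.5,
                   fundraising_edge=3.0)
```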
Annatar
Jr. Member
***
Posts: 1,064
Australia


« Reply #16 on: December 06, 2020, 05:00:13 AM »

Looks like the bias continued this cycle in his models.
Mr.Bakari-Sellers
olawakandi
Atlas Institution
*****
Posts: 98,482
Jamaica


Political Matrix
E: -6.19, S: -4.17

« Reply #17 on: December 06, 2020, 06:00:03 AM »

Why are we comparing everything to 2016? There was Benghazi Hillary, and Gary Johnson helped Trump. Everything reverted back in 2018 and 2020; the Rs are gonna need a favorable candidate like Hillary to pick on in 2022, 2024, or 2028 as they push their tax-cuts-for-the-wealthy philosophy.
ElectionsGuy
Atlas Star
*****
Posts: 21,101
United States


Political Matrix
E: 7.10, S: -7.65

« Reply #18 on: December 06, 2020, 11:15:53 AM »

Looks like the bias continued this cycle in his models.

Oh, I've got some analysis coming very soon. Hint: it's bad, and parts of it are bad in a way that his previous models weren't. But yes, many of the same patterns of bias continued, as expected, regardless of whether Biden won or not.

I've basically just been waiting for everything to be finalized/certified before I do this stuff.
ElectionsGuy
Atlas Star
*****
Posts: 21,101
United States


Political Matrix
E: 7.10, S: -7.65

« Reply #19 on: January 11, 2021, 10:06:08 AM »

Compare this to the average at the top to see how well his model performed this time.

Presidential

[map: 2020 presidential model error by state]

The overall popular-vote error was D+3.6%, and the model overestimated Democrats in 47 of 50 states. This is his model, mind you. Almost across the board, the polling averages are even worse (his polling averages, which use house effects to "adjust" the polls for whatever bias he calculates they have, were even more wrong than straight polling averages). Silver commonly cites "average polling error" to describe recent elections while ignoring the one-sidedness of the error. He has also disingenuously insisted that because they got the winner correct, they weren't that bad; 1% more Republican and that wouldn't have been true.

Unlike his previous model errors, it was more of an across the board error and even happened in blue states (many small blue states like Rhode Island had some of the worst ones). But there are still patterns that can be observed. States with high percentages of whites with no degree get underestimated the most. The states with increasing college graduates and growing metros - such as Colorado, Minnesota, Georgia, and Virginia - do not get underestimated as badly. It seems to me that pollsters and modelers are simply unable to pick up on Republican trends in the way they are able to pick up on Democratic trends. That may also explain why pollsters for the first time in a while underestimated Republicans in critical sunbelt states like Arizona, Nevada, and Texas - where Trump and Republicans improved with Latinos much more than polling suggested.

Senate

[map: 2020 Senate model error by state]

Arkansas was only R+9.6% because he overestimated the share Cotton would get; the Libertarian in that race got almost the same share a Democrat would have.

The Senate errors were in many cases even worse than the presidential ones. This is a good reminder that (unlike in 2016) it's not just the Trump coalition being understated - it's the broader Republican coalition. The fact that Susan Collins was underestimated by more than Trump was in Maine should raise questions about what kind of voter the polls are missing. The House errors are far and away the worst, though, and I plan to analyze those later.

What's the lesson for the future? Politically astute observers should take Silver's models with a grain of salt. They have repeatedly been shown to overestimate Democrats overall and to rely far too heavily on polling. Polling error is aggravated by herding and nonresponse bias, both of which were worse in the 2020 elections than in years before. The bias is systematic in many ways, and it should be accounted for before making predictions or assessments that need to be accurate.
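The distinction drawn above between "average polling error" and one-sided error is the difference between mean absolute and mean signed error. A small sketch with illustrative numbers:

```python
# Two sets of state-level errors (D-positive, in points) with the SAME mean
# absolute error but very different signed bias - the distinction the
# "average polling error" framing glosses over. Numbers are illustrative.
one_sided = [3.0, 4.0, 3.5, 3.5]     # every miss overestimates Democrats
balanced  = [3.0, -4.0, 3.5, -3.5]   # misses in both directions

def mean_abs(errs):
    return sum(abs(e) for e in errs) / len(errs)

def mean_signed(errs):
    return sum(errs) / len(errs)

# mean_abs is 3.5 for both sets, but mean_signed exposes the one-sidedness:
# 3.5 for the one-sided set versus -0.25 for the balanced one.
```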
ChiefFireWaterMike
LordRichard
Jr. Member
***
Posts: 1,369


« Reply #20 on: January 11, 2021, 04:19:58 PM »

Compare this to the average at the top to see how well his model performed this time.

Presidential

[map: 2020 presidential model error by state]

The overall popular-vote error was D+3.6%, and the model overestimated Democrats in 47 of 50 states. This is his model, mind you. Almost across the board, the polling averages are even worse (his polling averages, which use house effects to "adjust" the polls for whatever bias he calculates they have, were even more wrong than straight polling averages). Silver commonly cites "average polling error" to describe recent elections while ignoring the one-sidedness of the error. He has also disingenuously insisted that because they got the winner correct, they weren't that bad; 1% more Republican and that wouldn't have been true.

Unlike his previous model errors, it was more of an across the board error and even happened in blue states (many small blue states like Rhode Island had some of the worst ones). But there are still patterns that can be observed. States with high percentages of whites with no degree get underestimated the most. The states with increasing college graduates and growing metros - such as Colorado, Minnesota, Georgia, and Virginia - do not get underestimated as badly. It seems to me that pollsters and modelers are simply unable to pick up on Republican trends in the way they are able to pick up on Democratic trends. That may also explain why pollsters for the first time in a while underestimated Republicans in critical sunbelt states like Arizona, Nevada, and Texas - where Trump and Republicans improved with Latinos much more than polling suggested.

Senate

[map: 2020 Senate model error by state]

Arkansas was only R+9.6% because he overestimated the share Cotton would get; the Libertarian in that race got almost the same share a Democrat would have.

The Senate errors were in many cases even worse than the presidential ones. This is a good reminder that (unlike in 2016) it's not just the Trump coalition being understated - it's the broader Republican coalition. The fact that Susan Collins was underestimated by more than Trump was in Maine should raise questions about what kind of voter the polls are missing. The House errors are far and away the worst, though, and I plan to analyze those later.

What's the lesson for the future? Politically astute observers should take Silver's models with a grain of salt. They have repeatedly been shown to overestimate Democrats overall and to rely far too heavily on polling. Polling error is aggravated by herding and nonresponse bias, both of which were worse in the 2020 elections than in years before. The bias is systematic in many ways, and it should be accounted for before making predictions or assessments that need to be accurate.
Amazing
Motorcity
Jr. Member
***
Posts: 1,471


« Reply #21 on: January 12, 2021, 03:05:18 PM »

My guess is that Nate Silver assumed Biden would get Gore and Kerry levels of support with the WWC because he's a white guy, while also getting Obama/Hillary levels with minorities.

That would have added 2-4 points for him in each state.
Tintrlvr
Junior Chimp
*****
Posts: 5,898


« Reply #22 on: January 12, 2021, 03:31:15 PM »

Is there a comparison of Nate Silver's model to the pure polling averages (using the polls he included, but unweighted)? That would be a helpful baseline. An obvious problem with this critique of the model is that it depends on the polls being high quality; if the polls are garbage, it's garbage in, garbage out, and hard to blame the model - but ideally the model still did better than the polls standing alone.
Figueira
84285
Atlas Icon
*****
Posts: 12,333


« Reply #23 on: January 12, 2021, 03:44:49 PM »

Is there a comparison of Nate Silver's model to the pure polling averages (using the polls he included, but unweighted)? That would be a helpful baseline. An obvious problem with this critique of the model is that it depends on the polls being high quality; if the polls are garbage, it's garbage in, garbage out, and hard to blame the model - but ideally the model still did better than the polls standing alone.

I guess you could compare it to RCP, although RCP has its own quirks.
vitoNova
YaBB God
*****
Posts: 3,848
United States


« Reply #24 on: January 12, 2021, 07:15:58 PM »

In hindsight, he basically predicted the black guy winning. 