Author Topic: NH-UNH: Clinton +11, Clinton +10 (4-way)  (Read 13468 times)
Desroko
Jr. Member
***
Posts: 346
« on: November 06, 2016, 11:46:59 PM »

Let's predict how much Nate Silver adjusts this toward Trump "out of caution"

I'm sure it'll reduce HRC's chances to win by 2% and flip Maine.

Sigh.  OK, fine, don't bother to even superficially understand how that stuff works guys, whatever. :)

Aggressive trendline adjustments to a noisy dataset are like turning the amps up to 11 at a Skrillex concert.
Desroko
Jr. Member
***
Posts: 346
« Reply #1 on: November 07, 2016, 02:45:34 AM »
« Edited: November 07, 2016, 02:50:54 AM by Desroko »

Let's predict how much Nate Silver adjusts this toward Trump "out of caution"

I'm sure it'll reduce HRC's chances to win by 2% and flip Maine.

Sigh.  OK, fine, don't bother to even superficially understand how that stuff works guys, whatever. :)

Aggressive trendline adjustments to a noisy dataset are like turning the amps up to 11 at a Skrillex concert.

On what basis are you convinced Silver's trendline adjustments are too aggressive?  We have past empirical data that Silver claims he used to make these decisions.  I'm always wary of dismissing models like that because they come out with counterintuitive results unless we can explain why some other approach is more sound.

You're wary of dismissing a model that shows a counterintuitive result, but you uncritically accept outliers.

Polling results are extraordinarily noisy under the best of circumstances. If you want empiricism, simulate an election in which underlying voter intentions stay fixed at 52-48, and commission one poll per day for 100 days, each with a ±3-point margin of error and no nonrandom error. It looks like an EKG in tachycardia, except less regular. Feel free to adjust your trendlines after every new survey, but you're just chasing noise.
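That thought experiment takes about ten lines to run yourself. This is my own illustration of the setup described above (fixed 52-48 intentions, daily polls, pure sampling error) - it has nothing to do with 538's actual code:

```python
import random

random.seed(1)

TRUE_DEM = 0.52   # underlying voter intentions never move
N = 1067          # sample size giving roughly a +/-3 MoE at 95%
DAYS = 100

margins = []
for day in range(DAYS):
    # Each respondent independently breaks D with probability TRUE_DEM.
    dem_votes = sum(random.random() < TRUE_DEM for _ in range(N))
    dem_share = dem_votes / N
    margins.append(round(100 * (2 * dem_share - 1), 1))  # D-R margin, in points

# Every wiggle in this series is sampling noise around the true +4 margin.
print(min(margins), max(margins))
```

Fit a trendline to each new point in that series and you'll "detect" surges and collapses all year, in an electorate where not one voter changed their mind.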

And of course, no poll is free of nonrandom error, which means even a favorable simulation like the one above is less noisy than real-world polling. To start with, much polling "movement" is actually differential nonresponse:

http://www.stat.columbia.edu/~gelman/research/published/swingers.pdf

http://www.columbia.edu/~rse14/Erikson_Panagopoulos_Wlezien.pdf

When you control for nonresponse, you find that polling margins are much more stable than an entertainment website would have you believe:

https://today.yougov.com/news/2016/11/01/beware-phantom-swings-why-dramatic-swings-in-the-p/

http://fivethirtyeight.com/features/most-voters-havent-changed-their-minds-all-year/
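To see how differential nonresponse manufactures phantom swings, here's a toy model of my own (not drawn from either paper): hold voter intentions completely fixed, let one side's response rate dip after a bad news cycle, and the raw margin "moves" even though nobody changed their mind.

```python
def observed_margin(dem_share, dem_response, rep_response):
    """Raw poll margin (in points) when response rates differ by party.

    Voter intentions are fixed; only willingness to answer the phone varies.
    """
    dem_weight = dem_share * dem_response
    rep_weight = (1 - dem_share) * rep_response
    dem_obs = dem_weight / (dem_weight + rep_weight)
    return round(100 * (2 * dem_obs - 1), 1)

# Same 52-48 electorate in all three calls.
print(observed_margin(0.52, 0.50, 0.50))  # 4.0  - equal response: the true margin
print(observed_margin(0.52, 0.42, 0.50))  # -4.7 - Dems screen calls: phantom Trump lead
print(observed_margin(0.52, 0.50, 0.42))  # 12.7 - Reps screen calls: phantom blowout
```

An 8-point response-rate gap produces a 17-point swing in the raw margin with zero actual movement - which is why panel studies and nonresponse controls find so much less volatility than the toplines suggest.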

TLDR: Unsophisticated people are impressed by the bells and whistles in a model. But all bells and whistles really do is make noise.


Logged
Desroko
Jr. Member
***
Posts: 346
« Reply #2 on: November 07, 2016, 06:27:53 AM »


538 is less likely to be predictive than its peers precisely because it's the outlier, quite apart from the methodological problems we'll get into. You don't throw the outlier away, but you have to recognize it for what it is instead of accepting it at face value - which is a problem that 538 itself struggles with, as we'll see below.

The model is a black box when it comes to exact methodology, so no one knows exactly what Silver is doing. But it is very naive, which can be demonstrated fairly easily from its public polling database and its probability updates.

1. The trendline adjustments are heavily affected by outliers, even ones that clearly did not presage an actual trend. This is visible even among highly rated outfits with low house effects, which in theory should be moved mostly or entirely by the trendline adjustment. Export the polling database to Excel, sort by date, and compute a rolling average of the adjustments: there are sharp changes precipitated by outlier polls that never turned into a trend. The USC/LA Times poll did it nearly every time. Prudent trendline adjustments shouldn't fall for these, but the 538 model is too naive and accepts them at face value - likely because it's mean-based and/or has a short memory - and ends up amplifying polling noise that smooths out over the long term and in the median.
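The mean-vs-median point is easy to demonstrate. A sketch of my own (538's actual adjustment formula is not public, so this is only the generic mechanism): drop one outlier into an otherwise steady series and compare a mean-based rolling trendline to a median-based one.

```python
def rolling(values, window, agg):
    """Trailing aggregate over the last `window` values."""
    return [agg(values[max(0, i - window + 1):i + 1]) for i in range(len(values))]

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    m = len(s) // 2
    return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2

# Steady Clinton +4 margins with a single Trump +5 outlier in the middle.
polls = [4, 3, 5, 4, 4, -5, 4, 5, 3, 4]

print(rolling(polls, 5, mean))    # the outlier drags the trendline for 5 days
print(rolling(polls, 5, median))  # the median barely notices it
```

A short-memory mean carries the outlier's full weight until it ages out of the window; the median discards it immediately, which is exactly the "smooths out in the median" behavior described above.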

2. Single surveys in thinly-polled states produce large swings in probability. See Nov. 4 at 5:18 pm, Nov. 3 at 11:17 am, Nov. 2 at 7:04 pm, Nov. 1 at 1:41 pm, and many more - though my personal favorite is Oct. 23 at 4:41 pm. An Oklahoma result of R+30/33 - almost exactly the 2012 margin - was enough to move the estimate a point and a half by itself. Lol, bullsh**t. Likely the assumed inter-state correlation is too high, and the model has so little information in non-battlegrounds that it places too much weight on individual surveys - which is exactly the sort of thing 538 is supposed to prevent. (We won't touch the fact that Trump +30/33 in OK is actually a fairly neutral or even pro-Clinton result.) Better modeling would have this information priced in, and would not derive national trends from individual state surveys.
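To make the correlation point concrete, here's a deliberately crude stand-in of my own for a full covariance model (538's real one is unpublished): propagate the "surprise" in one state to every other state, scaled by an assumed correlation. The state names and prior margins are made up for illustration.

```python
def update_states(priors, state, observed, correlation):
    """Shift every state's margin by `correlation` times the surprise
    observed in one state. A crude sketch of correlated updating."""
    surprise = observed - priors[state]
    return {s: m + (surprise if s == state else correlation * surprise)
            for s, m in priors.items()}

priors = {"OK": -34, "NH": 6, "PA": 4}  # D-R margins; hypothetical priors

# An OK poll at R+30 is a +4 "surprise" relative to an R+34 prior.
print(update_states(priors, "OK", -30, 0.6))  # high correlation: NH and PA jump too
print(update_states(priors, "OK", -30, 0.1))  # low correlation: they barely move
```

With the correlation dialed high, one safe-state poll that merely matched the 2012 margin shifts every battleground - which is the behavior complained about above. Pricing in the expected safe-state result beforehand would leave nothing for the model to "learn" from it.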

3. Some believe the model is double-counting national and state polling. Essentially: if we have a national trend of Trump +1 over a week, and a state trend of Trump +1, the model counts it as +2 instead of +1. 538 says it uses national polling only to inform the adjustments and to produce the projected vote margin, but outlier polls have swung the probability harder than seems reasonable if that were true. I'm not entirely sure what's going on there, but it would explain a lot.

4. And of course, as pointed out earlier, trends are mostly artifacts of nonrandom polling error and methodological changes (including but not limited to herding), not changes in voter intention among the population. You can ask a single panel over the course of an election, or use control questions, or simply look at changes in demographic response rates in raw polling data, and that becomes abundantly clear. 538 doesn't have an "approach" to this beyond literally modeling the noise.

As for why his methods all seem designed to amplify noise and increase variance - clicks. 538 is owned by ESPN, which is under severe financial pressure and which has already shut down Grantland, the closest thing to 538 under its umbrella. Silver is trying to keep his vertical economically viable, and I don't blame him for that much.

Sorry for the late reply. My clients are APAC-based, and I work when they do.