

Pages: 1 2 3 [4]
Author Topic: NH-UNH: Clinton +11, Clinton +10 (4-way)  (Read 13624 times)
Fmr President & Senator Polnut
polnut
Atlas Icon
*****
Posts: 19,489
Australia


Political Matrix
E: -2.71, S: -5.22

« Reply #75 on: November 07, 2016, 05:47:49 AM »

It's an outlier, but it shows that HRC is pulling away from the orange fascist

I think it's an outlier, but I don't think it's a huge one. I never bought the tie idea.
Logged
Desroko
Jr. Member
***
Posts: 346
« Reply #76 on: November 07, 2016, 06:27:53 AM »


538 is less likely to be predictive than its peers because it's the outlier, regardless of the methodological problems we'll get into. You don't throw the outlier away, but you have to recognize it for what it is instead of accepting it at face value - which is a problem that 538 itself struggles with, as we'll see below.

The model is a black box when it comes to exact methodology, so no one knows exactly what Silver is doing. But it is very naive, which can be demonstrated fairly easily from its public polling database and its probability updates.

1. The trend line adjustments are heavily affected by outliers, even those that clearly did not presage an actual trend. This is visible even among highly rated outfits with low house effects, which in theory should be affected mostly or entirely by the trendline adjustment. Export the polling database to Excel, sort by date, and compute a rolling average - there are sharp changes in the adjustment average precipitated by outlier polls that did not actually presage a trend. The USC/LA Times poll did this nearly every time. Prudent trendline adjustments shouldn't fall for these, but the 538 model is too naive and accepts them at face value - likely because it's mean-based and/or has a short memory - and ends up amplifying polling noise that smooths out over the long term and in the median.
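To make the mean-vs-median point concrete, here's a toy sketch (all the margins are invented, and this is not 538's actual code): a mean-based rolling average gets dragged by a single outlier poll, while a median-based one barely moves.

```python
# Toy illustration: mean vs. median rolling averages over daily poll margins.
# Day 5 (index 4) is a lone outlier; no real trend exists.

def rolling(values, window, agg):
    """Aggregate each trailing window of poll margins."""
    return [agg(values[max(0, i - window + 1):i + 1])
            for i in range(len(values))]

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

# Hypothetical Clinton margins from daily polls; the -3.0 is the outlier.
margins = [4.0, 4.5, 4.2, 4.3, -3.0, 4.1, 4.4]

mean_trend = rolling(margins, 3, mean)
median_trend = rolling(margins, 3, median)

# On the outlier day, the mean trend dives well below the true ~4.2 level;
# the median stays at a typical margin.
print(round(mean_trend[4], 2), round(median_trend[4], 2))
```

A trendline built on the median (or a longer memory) would have shrugged off the one bad print instead of treating it as movement.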

2. Single surveys in thinly-polled states produce large swings in probability. See Nov. 4 at 5:18 pm, Nov. 3 at 11:17 am, Nov. 2 at 7:04 pm, Nov. 1 at 1:41 pm, and many more - though my personal favorite is Oct. 23 at 4:41 pm. An Oklahoma result of R+30/33 - almost exactly the 2012 margin - was enough to move the estimate a point and a half by itself. Lol, bullsh**t. Likely because the state correlation is assumed to be too high, and because the model lacks information in non-battlegrounds and thus places too much importance on individual surveys - which is exactly the sort of thing 538 is supposed to prevent. (We won't touch the fact that Trump +30/33 in OK is actually a fairly neutral or pro-Clinton result.) Better modeling would have this information priced in, and would not derive national trends from individual state surveys.
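Roughly what that over-propagation looks like, with made-up weights and correlations (nothing here is 538's real math - the model is a black box, so this is only a sketch of the suspected mechanism):

```python
# Toy sketch: in a thinly polled state, one survey carries nearly all the
# weight in the state average, and an assumed high inter-state correlation
# propagates that survey's "surprise" into the national estimate.

def national_shift(prior_margin, poll_margin, poll_weight, correlation):
    """Shift applied to the national estimate from a single state poll."""
    surprise = poll_margin - prior_margin  # distance of the poll from the prior
    return correlation * poll_weight * surprise

# Oklahoma prior: Trump +33 (roughly the 2012 margin). New poll: Trump +30.
prior, poll = -33.0, -30.0  # negative = Republican margin

# Thinly polled state: this one survey dominates, and correlation is high.
heavy = national_shift(prior, poll, poll_weight=0.9, correlation=0.5)
# Better-calibrated assumptions: low per-poll weight, weaker correlation.
light = national_shift(prior, poll, poll_weight=0.2, correlation=0.2)

print(heavy, light)  # the near-prior poll moves the topline ~1.35 vs ~0.12
```

With the prior properly priced in, a poll that lands on the 2012 margin should move almost nothing.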

3. Some believe that the model is double-counting national and state polling. So essentially, if we have a national trend of Trump +1 over a week, and a state trend of Trump +1, the model counts it as +2 instead of +1. 538 says it only uses national polling to inform the adjustments and produce the projected vote margin, though outlier polls have swung the probability harder than seems reasonable if that were true. I'm not entirely sure what's going on there, but it would explain a lot.
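The suspected double count reduces to simple arithmetic (hypothetical, since nobody outside 538 knows what the model actually does with these trends):

```python
# Toy arithmetic for the double-count hypothesis: the state's own polls
# already contain the national movement, so adding both counts it twice.

national_trend = 1.0  # Trump +1 over the week, from national polls
state_trend = 1.0     # the same week, visible in the state's own polls

# Suspected behavior: both trends applied to the state projection.
double_counted = national_trend + state_trend  # +2

# Correct behavior: only the residual state-specific movement is added
# on top of the national trend.
state_specific = state_trend - national_trend  # 0 here
correct = national_trend + state_specific      # +1

print(double_counted, correct)
```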

4. And of course, as pointed out earlier, trends are mostly artifacts of nonrandom polling error and methodological changes (including but not limited to herding), not changes in voter intention among the population. Ask a single panel repeatedly over the course of an election, use control questions, or simply look at changes in demographic response rates in the raw polling data, and that becomes abundantly clear. 538 doesn't have an "approach" to this beyond literally modeling the noise.
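Differential response alone can manufacture a "trend", as a bit of toy arithmetic shows (the two blocs, the 50/50 split, and the response rates are all invented for the illustration):

```python
# Toy sketch: voter intentions are frozen at 50% D / 50% R, but the
# D bloc's response rate drops between waves. The unadjusted topline
# then "moves" with zero change in actual voter intention.

def topline_margin(dem_rate, rep_rate):
    """Expected D-minus-R margin, in points, when each bloc is half the
    electorate but the blocs respond to pollsters at different rates."""
    total = dem_rate + rep_rate
    return 100.0 * (dem_rate - rep_rate) / total

# Wave 1: Democrats answer at 40%, Republicans at 30%.
wave1 = topline_margin(0.40, 0.30)   # roughly +14 for the D candidate
# Wave 2: a bad news cycle; Democrats answer at only 25%.
wave2 = topline_margin(0.25, 0.30)   # flips to roughly -9

print(round(wave1, 1), round(wave2, 1))
```

A ~23-point "swing" appears in the topline even though not a single voter changed their mind - which is exactly the kind of noise a panel or a response-rate check would expose.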

As for why his methods all seem designed to amplify noise and increase variance - clicks. 538 is owned by ESPN, which is under severe financial pressure and which has already shut down Grantland, the closest thing to 538 under its umbrella. Silver is trying to keep his vertical economically viable, and I don't blame him for that much.

Sorry for the late reply. My clients are APAC-based, and I work when they do.
Logged
Erich Maria Remarque
LittleBigPlanet
YaBB God
*****
Posts: 3,646
Sweden


« Reply #77 on: November 07, 2016, 07:20:50 AM »
« Edited: November 07, 2016, 07:24:03 AM by Little Big BREXIT »


Quote from: Desroko on November 07, 2016, 06:27:53 AM
[538 critique snipped - quoted in full above]


Lol, stop it. At least read their model description or something. It does not amplify noise, no. It does not double-count state and national trends, no.

About this poll: TN Volunteer cherry-picked it, lol. Hillary won't win it in a landslide, not even close, unless there's a polling error across all states and nationally, lol
Logged
GeorgiaModerate
Moderators
Atlas Superstar
*****
Posts: 32,607


« Reply #78 on: November 07, 2016, 07:40:20 AM »


TLDR: Unsophisticated people are impressed by the bells and whistles in a model. But all bells and whistles really do is make noise.


I love this statement. It applies to most engineering projects as well.
Logged
Antonio the Sixth
Antonio V
Atlas Institution
*****
Posts: 58,068
United States


Political Matrix
E: -7.87, S: -3.83

« Reply #79 on: November 07, 2016, 07:49:25 PM »

ANGRY WOMEN WITH A VENGEANCE
Logged
Seriously?
Sr. Member
****
Posts: 3,029
United States


« Reply #80 on: November 09, 2016, 11:27:26 PM »

Pile of state poll junk.
Logged
Attorney General, LGC Speaker, and Former PPT Dwarven Dragon
Dwarven Dragon
Atlas Politician
Atlas Superstar
*****
Posts: 31,677
United States


Political Matrix
E: -1.42, S: -0.52

« Reply #81 on: November 09, 2016, 11:39:19 PM »

Logged