Author Topic: Non-Gallup/Rasmussen tracking polls thread  (Read 142700 times)
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #50 on: October 30, 2008, 02:12:13 PM »

Nah.  That was their attempt to correct it.  Basically, they'd been getting a weirdly low number of youth voters in their polls, and they ignored it until the sample shifted to being ridiculously McCain.  To fix that, they started applying a non-random sample to meet quotas.  In other words, the pollster ignored a methodological flaw until it started showing up, and then applied a non-random solution.  It's shoddy pollster behavior.
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #51 on: October 30, 2008, 03:25:55 PM »
« Edited: October 30, 2008, 03:30:14 PM by Alcon »

What I meant was that what you quoted indicated why the numbers were screwy (in that much-highlighted chart last week) in the first place -- extremely low sample sizes.

I'm not enough of a stats guy to get into the fix side of things.

What happened is they took a screwy, tiny sample and weighted it more heavily.  That makes the weird numbers themselves less likely to be indicative of bad methodology, because a small raw sample bounces around from noise alone.  On the other hand, the pollster basically just proved their methodology is bad when they tried to "fix" it.  The problem is that the initial tiny sample size is itself indicative of a methodological problem.  They essentially outright admitted they had one, and then they de-randomized their sample to fix it.  Bad juju.

It was indicative of some problem in methodology -- that problem was undersampling, which cannot be fixed by drastically overweighting a tiny sample, especially when that sample was wrong.  I was right, J. J. wasn't.  I am King of the Mountain, bring me your women.
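
For what it's worth, here's a toy way to see the weighting half of that.  Every number in it is made up -- I don't know this pollster's real cell sizes or weights -- but it shows that weighting a ~15-person youth cell up to a ~17% share makes the topline bounce around noticeably more than a decent-sized cell would, and it does nothing about whatever bias produced the tiny cell in the first place.

Code:
# Toy simulation (invented numbers, not the pollster's actual mechanics):
# upweighting a tiny youth cell to its "true" population share.
import random

random.seed(1)

TRUE_YOUTH_SUPPORT = 0.60   # assumed Obama share among 18-29s
TRUE_OLDER_SUPPORT = 0.48   # assumed share among everyone else
YOUTH_POP_SHARE    = 0.17   # share the youth cell gets weighted up to

def weighted_poll(n_total, n_youth):
    """One simulated poll: n_youth young respondents, the rest older,
    with the youth cell weighted to YOUTH_POP_SHARE of the topline."""
    youth = [random.random() < TRUE_YOUTH_SUPPORT for _ in range(n_youth)]
    older = [random.random() < TRUE_OLDER_SUPPORT for _ in range(n_total - n_youth)]
    youth_share = sum(youth) / len(youth)
    older_share = sum(older) / len(older)
    return YOUTH_POP_SHARE * youth_share + (1 - YOUTH_POP_SHARE) * older_share

def std_dev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

tiny = [weighted_poll(600, 15)  for _ in range(4000)]
okay = [weighted_poll(600, 100) for _ in range(4000)]
print("topline std dev with a 15-person youth cell:  %.3f" % std_dev(tiny))
print("topline std dev with a 100-person youth cell: %.3f" % std_dev(okay))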
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #52 on: October 30, 2008, 04:08:48 PM »

So why don't they just go out and get more 18-24s?  Or am I missing some geeky pollster sh** that allows a poll to be valid without actually polling people?  Smiley

That would be fine, and it's what they're doing.  But their sample size was still suspiciously small (unless I miscalculated it) -- small enough to indicate that their sampling was probably screwy.  Moreover, their new youth turnout target would be a decline from 2004.  I think youth turnout isn't going to boom, but I'd be surprised if it were down from '04.

The problem is his solution:

Quote
You must be logged in to read this quote.

The first is OK-ish, but has a flaw: it's not a random person under 30; rather, it's the youngest.  In a house with two sub-30s, it's not going to get a random sub-30, but rather the younger.  Trivial, but un-random.

The second, while non-specific, is not trivial.  How does he determine what houses are "likely to have younger voters"?  Why isn't he telling us?  What in the phone book says "kid here"?  How could this not introduce other variables?  I can't imagine a way of determining a kid is likely present that would not introduce other variables.

So, here are my complaints:

1. The guy's sampling was suspiciously low on young voters.

2. His new weights still seem suspiciously low.

3. Instead of fixing his sample size, he just upweighted (see the back-of-envelope sketch at the end of this post).

4. He didn't fix anything until suspicious samples forced him to.

5. The way he fixed it is un-transparent, un-random, and could easily introduce new error.  It also does not address whatever sampling error caused the problem originally.

Death by a thousand papercuts.  I don't trust his poll much anymore, and don't recommend anyone else does.
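
On complaint 3, a back-of-envelope sketch of what upweighting costs (the respondent counts and weights below are invented, not his): Kish's usual approximation for effective sample size is n_eff = (sum of weights)^2 / (sum of squared weights), and big weights on a small cell eat a lot of it.

Code:
# Hypothetical weights, not the pollster's: Kish effective sample size.
def effective_n(weights):
    return sum(weights) ** 2 / sum(w * w for w in weights)

# 600 respondents: 25 young people weighted ~4x to hit a 17% youth target,
# 575 everyone else weighted slightly under 1 so the weights still sum to 600.
youth_w = 0.17 * 600 / 25      # ~4.08
older_w = 0.83 * 600 / 575     # ~0.87
weights = [youth_w] * 25 + [older_w] * 575

print("nominal n:   600")
print("effective n: %.0f" % effective_n(weights))

Under those made-up weights, the nominal 600-person poll behaves more like a ~425-person one.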
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #53 on: November 02, 2008, 05:09:42 PM »

Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #54 on: November 02, 2008, 10:45:40 PM »


Talking codpiece?

That's a really weird metaphor
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #55 on: November 02, 2008, 11:00:19 PM »


I'm lost.  The Washington Post is secretly Democratic because their athletic wear may talk, threatening their masculinity and causing them to lust for their mothers?
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #56 on: November 03, 2008, 01:25:08 AM »

Alleged Zogby

Obama 50.9% (+1.4)
McCain 43.8% (nc)
Undecided 5.3% (-1.4)

Quote
You must be logged in to read this quote.
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #57 on: November 06, 2008, 10:49:34 PM »
« Edited: November 06, 2008, 11:05:23 PM by Alcon »

I guess if you keep narrating authoritatively...whatever.

Obviously the best explanation for Utah polling (of which there was little, and virtually none of it was respectable) is the Bradley Effect.  Utah, a state known for its rich history of racial tension. Utah.

Good god.
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #58 on: November 06, 2008, 11:56:56 PM »

No, but I do know that defaulting to the Bradley Effect in every instance is dumb.

Perhaps the lack of any racial tension correlation whatsoever is indicative of something, hmm.
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #59 on: November 07, 2008, 10:46:40 AM »

Statistically Obama should overperform his poll margins 50% of the time

Actually, Obama should overperform about 1 in 20 times out of the MOE.  We have Zogby numbers on Gallup and ABC/WP.  Come on, a 2 point MOE and the polls are off 4.5 points.

1. The statistical margin of error does not account for systematic (non-random) sampling error, which obviously exists in every poll.

2. How are you calculating a 2.0 MoE?  (Some back-of-envelope arithmetic below.)

3. The national samples lag several days, since they're rolling averages.

You can't just ignore these other variables and decide 'Bradley effect.'
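
For reference on point 2 (the sample sizes below are hypothetical -- I don't know what n J. J. has in mind): the usual 95% MoE on a single candidate's share is about 1.96 * sqrt(p(1-p)/n), and the MoE on the Obama-minus-McCain margin is roughly twice that.

Code:
# Back-of-envelope MoE arithmetic with hypothetical sample sizes.
import math

def moe_share(n, p=0.5, z=1.96):
    """95% margin of error on one candidate's share, in points."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (1000, 2400, 3000):
    one = moe_share(n)
    print("n=%4d  MoE on a share: +/-%.1f   on the margin: ~+/-%.1f"
          % (n, one, 2 * one))

Point being: a "2-point MoE" on the shares works out to roughly +/-4 on the margin, so you can't compare a share-level MoE directly to a 4.5-point miss on the margin.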
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #60 on: November 07, 2008, 12:14:40 PM »

1. The statistical margin of error does not account for systematic (non-random) sampling error, which obviously exists in every poll.

You're going to have to go into greater detail.

MoE assumes a perfectly representative sample.
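
To spell it out with a toy simulation (every number below is invented): the published MoE only covers random sampling error.  If the sample you can actually reach tilts a few points in one direction, the poll lands outside its own MoE almost every time, and no amount of n fixes that.

Code:
# Toy illustration: a systematic 4-point frame tilt vs. a ~2-point MoE.
import math, random

random.seed(7)
TRUE_SUPPORT = 0.52          # assumed true share in the electorate
FRAME_TILT   = -0.04         # assumed tilt of the reachable sample
N            = 2400

def one_poll():
    p = TRUE_SUPPORT + FRAME_TILT
    hits = sum(random.random() < p for _ in range(N))
    return hits / N

moe = 1.96 * math.sqrt(0.25 / N)
outside = sum(abs(one_poll() - TRUE_SUPPORT) > moe for _ in range(1000))
print("stated 95%% MoE: +/-%.1f points" % (100 * moe))
print("simulated polls outside it: %d / 1000" % outside)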
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #61 on: November 07, 2008, 09:09:36 PM »

*suggest Lunar puts down the crack pipe*

Ugh.
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #62 on: November 07, 2008, 10:16:30 PM »

So, 538's poll average-based model doesn't show the effect you allege because...?
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #63 on: November 07, 2008, 10:40:17 PM »


fivethirtyeight-dot-com

They weight by sample size, and time elapsed since release, and pollster record.  But unless you can explain why doing so removes the Bradley Effect, your suggestion that Obama under-performed vs. state polls is unfounded.
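
To be concrete about what that kind of weighting looks like -- and this is a generic sketch, not 538's actual formula (their pollster ratings and decay are their own), with invented poll numbers:

Code:
# Generic weighted poll average: weight ~ pollster rating * sqrt(n) * recency.
# NOT 538's actual formula; the polls below are invented.
import math

polls = [
    # (pollster_rating, sample_size, days_old, obama_margin)
    (1.0, 800,  1, 7.5),
    (0.8, 600,  3, 6.0),
    (0.5, 1200, 6, 9.0),
]

HALF_LIFE_DAYS = 5.0  # assumed recency half-life

def weight(rating, n, days_old):
    recency = 0.5 ** (days_old / HALF_LIFE_DAYS)
    return rating * math.sqrt(n) * recency

total_w = sum(weight(r, n, d) for r, n, d, _ in polls)
avg = sum(weight(r, n, d) * m for r, n, d, m in polls) / total_w
print("weighted average margin: Obama +%.1f" % avg)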

2.  Would you explain why, in terms of national polls, none were outside the MOE, except those showing a lead for Obama?

n=2?
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #64 on: November 07, 2008, 11:24:37 PM »

Sigh.

fivethirtyeight-dot-com = www.fivethirtyeight.com

C'mon man
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #65 on: November 07, 2008, 11:55:03 PM »


The section the particular point you are making, "538's poll average-based model." Roll Eyes

The Supertracker?

"Trend-Adjusted" under the right-hand side.  Did you bother to read the FAQ, or look through the entire page?  It's pretty self-explanatory.

Yes, that method is used with the Supertracker.
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #66 on: November 08, 2008, 01:28:58 AM »

Wait, so the Bradley Effect was contingent upon the place in the election cycle?

Besides, the weighting was done in such a way that polls in the last week were hugely over-weighted relative to earlier ones.
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #67 on: November 08, 2008, 11:38:11 AM »

We should remember in the midst of glorifying 538 that all his talk about cell phones and young voters emerging turned out to be bull.

True, but not related to the model.

In any case, I guess we're just not understanding J. J. on the following:

1. Weighting based on pollster quality, age and poll conduct time somehow gets rid of the Bradley Effect, in a way I assume J. J. will refuse to explain.

2. Any over-performance by Obama is not the Bradley Effect.  Any under-performance probably is, despite the fact that Kerry under-performed polls, too.  Was his West Virginia performance the Bradley Effect?

3. Instead of using fancy-schmancy state polls, we should stick to three trackers, two of which "are not Zogby."

4. The Bradley Effect not showing up in states with racial tension isn't a sign of anything, for some reason.  Yet even if a state has no real history of racial strife (Utah), any polling miss there should be assumed to be the Bradley Effect.

5. J. J. was going to argue for this, if he could find any remotely feasible opportunity to, no matter how intellectually dishonest it is.
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #68 on: November 08, 2008, 11:17:12 PM »
« Edited: November 08, 2008, 11:30:09 PM by Alcon »

Didn't you suggest that PA is likely to have the Bradley Effect because of racial tension?  (Caveat:  I was wrong to say that Utah has never had racial issues)

Okey doke, give me a list of pollsters.  Give me a timeframe.  We will agree on a methodology.  Either pick margin-vs.-final (margin minus final margin) or relative candidate-showings-vs.-final ([Obama minus final Obama] - [McCain minus final McCain]).  Then, I will run a poll compilation and test for statistical significance.

If there is no statistical significance, will you finally admit you are wrong -- or at least overestimated the Bradley Effect to the extent it was eaten up by noise/something else?

If there is statistical significance, will you admit that there are plausible reasons other than a Bradley Effect?  At that point, we could debate the relative merits of the possible explanations--but only if there is statistical significance.  Otherwise, seriously, I could claim there's any damn effect and just believe in it no matter what.  Useless.

So, agreed?  Again, you get to outline the specific methodology.  Hell, we can do both, and with multiple timeframes.  It's your choice.
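
Concretely, the test I have in mind looks something like this.  The final margin and poll margins below are placeholders, not real data, and as far as I can tell the two metrics above reduce to the same number algebraically, so one computation covers either.

Code:
# Sketch of the significance test (placeholder numbers, not real polls).
# Error = poll margin minus final margin, in points; negative = Obama understated.
import statistics as st

final_margin = 7.3                                         # placeholder
poll_margins = [6.0, 8.5, 5.0, 7.0, 9.0, 4.5, 7.5, 6.5]    # placeholders

errors = [m - final_margin for m in poll_margins]
mean_err = st.mean(errors)
se = st.stdev(errors) / len(errors) ** 0.5
t = mean_err / se

print("mean error: %.2f points, t = %.2f (df = %d)"
      % (mean_err, t, len(errors) - 1))
# With df = 7, |t| has to clear about 2.36 before the average error is
# distinguishable from zero at the 95% level.

If |t| doesn't clear that bar, the "effect" is indistinguishable from noise; if it does, then we argue about what's causing it.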
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #69 on: November 09, 2008, 12:34:47 AM »

No, you can tell me which polls are "complete crap" now.  You don't do an experiment and then toss out data points afterwards.  Why would you?  That serves no purpose whatsoever other than to potentially introduce bias.
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #70 on: November 09, 2008, 12:44:00 PM »

That's fine, J. J., but I'm not going to let you throw out polls after I do the analysis, lol.  That would be incredibly unscientific.  My argument isn't with the concept that "some polls are crappy," it's with ex post facto toss-outs.

Are you willing to list which polls you consider acceptable so as to look at this objectively, or not?
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #71 on: November 09, 2008, 02:40:22 PM »
« Edited: November 09, 2008, 02:45:18 PM by Alcon »

Well, you said you want to look at state and national polls.  Collectively, state polls involve many more interviews than national polls, and there are far more of them.  Thus, can we agree that they're the best place to find the Bradley Effect?  There's no logical reason to weight national polls in your determination any more than state polls.  They're the same thing, just with a national sample, and there are fewer of them.

Now:  Why would you throw pollsters out after testing for the Bradley Effect?  Seems that would just potentially introduce unconscious personal bias.  No reason not to throw them out now.  Let's do that, and then I'll test for you.

So, choose which of the following pollsters you want to throw out.

ARG
CNN
Field
Insider Advantage
Los Angeles Times
Marist
Mason-Dixon
National Journal
PPP
Quinnipiac
Rasmussen
Research 2000 (DailyKos)
Selzer
Strategic Vision
SurveyUSA
YouGov
Zogby (phone)

Now, decide what time period you want polls from, and which of the two methods I mentioned you want to use.

We're applying an objective mathematical test to figure this one out.  Then, we can see whether there's mathematical support for a Bradley Effect or not.  Then, we find out the truth, right? Smiley
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #72 on: November 10, 2008, 09:21:37 AM »
« Edited: November 10, 2008, 11:11:44 AM by Alcon »

I can include the national trackers too, if you like, but in proportion to their interviews.  Again, there is no logical reason to weight them any more strongly than state polls with an equal number of interviews.

I don't want to throw out polls afterward, no matter whom that favors.  It is unscientific.  It would introduce our own experimental bias into the mix.  There is no reason whatsoever not to throw them out beforehand.  Which of the listed polls do you want to remove?

The fact that you're defending something by arguing I should do it because it might show the results I like is exactly why people are skeptical of your analytical abilities.
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #73 on: November 10, 2008, 05:54:56 PM »

All right.  Again, which pollsters do you want to throw out of the above list?
Logged
Alcon
Atlas Superstar
*****
Posts: 30,866
United States


« Reply #74 on: November 10, 2008, 07:05:57 PM »

All right.  Again, which pollsters do you want to throw out of the above list?

And again, none.  Let's look at the last week of state polling, from 10/28 onward.  The only ones (and I can only think of one of these) that shouldn't be counted would be state tracking polls.

I don't see why not?  State tracking polls are just three-day polls that roll.  In its last instance, Muhlenberg was just a three-day poll.  Why not use it?

Other than that, I still need you to pick one of the following:

Quote
You must be logged in to read this quote.

Then we'll get to 'er.
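
(On the tracking-poll point above, since it keeps coming up: a rolling three-day poll is nothing exotic.  The release on day d is just the average of the interviews from days d-2 through d, which is also why trackers lag a moving race by roughly a day.  Toy numbers:)

Code:
# Toy illustration of a 3-day rolling tracker and its lag (made-up margins).
daily_margin = [4.0, 5.0, 6.0, 7.0, 8.0]   # hypothetical true daily margins

def three_day_roll(series):
    return [sum(series[i - 2:i + 1]) / 3 for i in range(2, len(series))]

for day, released in zip(range(3, len(daily_margin) + 1),
                         three_day_roll(daily_margin)):
    print("release on day %d: %.1f   (true margin that day: %.1f)"
          % (day, released, daily_margin[day - 1]))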
Logged