  Talk Elections
  General Politics
  Economics (Moderator: Torie)
Author Topic: Predict the September Jobs & UE numbers  (Read 1075 times)
muon2
Moderators
Atlas Icon
*****
Posts: 16,826


« on: October 09, 2012, 02:05:03 PM »

Since the UE numbers are based on surveys, it seems natural for experts in surveys to weigh in. Last week Gallup published its analysis of the September results. They share some of the concerns expressed by other skeptics of the rate, though not because they think anything was rigged; they just don't care for the methodology. Since they also track employment, they showed their comparison chart.

[image: Gallup's chart comparing its unemployment tracking with the BLS rate]

As I see it, Gallup shows a lot more volatility than the smoothed data from BLS. I know that BLS contacts about twice as many households as Gallup (60K vs. 30K), so there should be less volatility. But that difference alone can't explain it.

Gallup shows volatility, measured as a standard deviation from the trendline, of about 0.4% since the beginning of 2011. Sampling error scales as 1/√n, so with twice as many calls BLS should have a standard deviation of about 0.4%/√2 ≈ 0.3%. However, the BLS data shows a standard deviation closer to 0.1%, so it seems to be over-smoothed. This may be reflected in the revisions to past data.
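The √n scaling step above can be sketched in a few lines (the sample sizes are the approximate household counts cited in this thread, not official figures):

```python
import math

gallup_n, bls_n = 30_000, 60_000
gallup_sd = 0.4  # observed std. dev. around trend, in percentage points

# Sampling error shrinks as 1/sqrt(n), so doubling the sample size
# should cut the spread by a factor of sqrt(2), not take it to 0.1%.
expected_bls_sd = gallup_sd * math.sqrt(gallup_n / bls_n)
print(round(expected_bls_sd, 2))  # -> 0.28, i.e. about 0.3
```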

If the observed smoothing is due to revisions of past data, then the most recent data is less accurate than it may otherwise appear, because it hasn't had a chance to be smoothed by future surveys. If that's true, then the deviation based on the single-month survey is more appropriate. That puts an error of 0.3% on the announced value of 7.8%. In other words, there's only a 68% chance that the actual September UE rate is between 7.5% and 8.1%.
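The 68% figure is just the ±1σ coverage of a normal estimate, which is easy to check (the 7.8% point estimate and 0.3-point error are the numbers above):

```python
from statistics import NormalDist

rate, sigma = 7.8, 0.3          # announced rate and assumed one-sigma error
low, high = rate - sigma, rate + sigma
print(round(low, 1), round(high, 1))  # -> 7.5 8.1

# Probability mass within one standard deviation of a normal estimate:
dist = NormalDist(rate, sigma)
coverage = dist.cdf(high) - dist.cdf(low)
print(round(coverage, 2))  # -> 0.68
```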
muon2


« Reply #1 on: October 09, 2012, 09:37:46 PM »

First of all, it should be pointed out that Gallup is currently showing a 7.5 percent unemployment rate, which is even lower than the BLS figure.

Unemployment rates (unlike employment levels from the establishment survey) aren't revised after the fact, so the lower volatility of the BLS estimate can't be explained by post hoc smoothing. More to the point, a relatively stable measure has more credibility for such a broad statistic covering millions of people, which you wouldn't expect to surge and then plunge the way the Gallup data sometimes does.

The household survey is actually the Current Population Survey, a monthly survey of about 60,000 households, as you noted. It isn't used only to report unemployment numbers: it's the source of all household labor force statistics, has been around since 1940, and is widely used in social science because it is panel data.

This means that the BLS identifies about 72,000 households, makes repeated attempts to contact them, including by physically going to the house, and ultimately reaches about 60,000 households, for a response rate of 92 to 93 percent. Many of the surveys are conducted in person in the respondent's home by professional interviewers; others are conducted by computer.

They then go back and interview the same people for 8 months, yielding better continuity of the data and better estimates of change. Gallup doesn't report its response rate, but it does say it relies on random-digit dialing of telephone numbers, which means its response rate is almost certainly very low, as is the case for all telephone polls. The CPS is undoubtedly one of the most accurate surveys in America today; Gallup can't compare.

In fact, I found it baffling that Gallup proposed replacing the CPS with its P2P measure (for one, they measure different things, and Gallup never explained why its measure is better beyond saying that it's "simpler"), because in its own methodology description Gallup writes, "Demographic weighting targets are based on the March 2011 Current Population Survey figures for those aged 18 and older" -- so if the CPS were done away with, Gallup would not be able to weight its own P2P.

There's a piece of information in there that's of interest: the BLS surveys the same households for 8 months in a row. Are there 60K new households each month, for a total of 480K in the pool? That's not how I read it. Rather, they have 60K total, with roughly 8K rolling off each month. If so, that certainly accounts for the unusually smooth curve in their data.

However, it conceals the level of systematic error inherent in that type of sampling. An initially skewed sample propagates through the pool but isn't reflected in the statistical variations. A proper display of the data would cite the statistical and systematic errors separately, so that the reader would know to attribute greater error to the measurement than the intrinsic fluctuations suggest.

Generally this is not a preferred technique, precisely because it hides one of the sources of error in the data. A straight random sample exposes the natural fluctuations of the sample more clearly. I recognize that many pollsters like a rolling sample precisely because it smooths the data and is cheaper in contact hours. But in terms of measurement theory, a single larger, statistically independent sample is preferable: the reader can inspect the raw fluctuations to test whether they make statistical sense, and if a smoother curve is desired for display, it can be constructed from the statistically independent data points.
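The smoothing effect of a rolling panel can be illustrated with a toy Monte Carlo sketch. The pool size, one-eighth replacement fraction, and constant "true" rate below are illustrative assumptions, not the CPS's actual design:

```python
import random
import statistics

random.seed(42)
TRUE_RATE = 0.078   # assumed constant underlying unemployment rate
N = 2000            # respondents per monthly estimate (toy-sized)
MONTHS = 60

def draw(n):
    # n Bernoulli respondents: 1 = unemployed, 0 = employed
    return [1 if random.random() < TRUE_RATE else 0 for _ in range(n)]

# Independent cross-sections: a completely fresh sample every month.
indep = [statistics.mean(draw(N)) for _ in range(MONTHS)]

# Rolling panel: only an eighth of the pool is replaced each month,
# so consecutive estimates share 7/8 of their respondents.
pool = draw(N)
rolling = []
for _ in range(MONTHS):
    pool = pool[N // 8:] + draw(N // 8)  # retire the oldest eighth
    rolling.append(statistics.mean(pool))

def mom_volatility(series):
    # standard deviation of month-over-month changes
    return statistics.stdev([b - a for a, b in zip(series, series[1:])])

# The rolling panel looks far smoother month to month, even though both
# series measure the same underlying rate with the same pool size.
print(mom_volatility(indep), mom_volatility(rolling))
```

In expectation the rolling series' month-over-month swings are smaller by a factor of about √8, since only 1/8 of the respondents change each month; that visual smoothness is exactly what masks the shared-sample correlation.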
muon2


« Reply #2 on: October 10, 2012, 09:58:15 PM »

Systematic error is generally the result of the measurement tool, so it does not matter whether you are using a panel or a cross-sectional "straight random sample". Yes, the error in one particular cohort of the panel will likely carry over to later time periods, but if a new sample were taken in the next period instead, it would just as likely contain the same error. The way to deal with systematic error is to identify and eliminate measurement bias. That is why the CPS's careful interview methodology is likely more reliable than Gallup's phone polling. There are also more public, sophisticated methods for analyzing potential error in the CPS: since it has been used in social science for decades and has a transparent methodology, pretty much every social scientist in the country has been able to weigh in. Gallup does not face anything near the same level of scrutiny.
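That point about systematic error can be sketched with a toy instrument that has an assumed 0.5-point over-count baked in: enlarging the sample shrinks the random scatter but leaves the bias untouched.

```python
import random
import statistics

random.seed(1)
TRUE_RATE = 0.078
BIAS = 0.005  # assumed measurement bias: the instrument over-counts

def biased_estimate(n):
    # Each respondent is misclassified in a way that shifts the rate up,
    # regardless of how many respondents are contacted.
    p = TRUE_RATE + BIAS
    hits = sum(1 for _ in range(n) if random.random() < p)
    return hits / n

small = [biased_estimate(1_000) for _ in range(200)]
large = [biased_estimate(16_000) for _ in range(200)]

# Bigger samples shrink the random scatter...
print(statistics.stdev(small) > statistics.stdev(large))  # True
# ...but both stay centered near the biased value, not the true rate.
print(round(statistics.mean(large), 3))  # ~0.083, i.e. TRUE_RATE + BIAS
```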

This is clearly an area where social and physical scientists can disagree. In the physical sciences, better reproducibility of results has come from independent samples and double-blind model fits. We also worry a lot about eliminating measurement bias, so I suspect we have common cause on that point.