538 Model Debut: 64% Chance of Republican Majority; R+7 Most Likely
  Talk Elections
Author Topic: 538 Model Debut: 64% Chance of Republican Majority; R+7 Most Likely  (Read 3289 times)
Slander and/or Libel
« Reply #50 on: September 16, 2014, 08:22:01 AM »

So, what then is the value of forecasting, if it can never be proven right or wrong? That is, if we're saying there's an 80% chance of Republican control of the Senate on election day, but the Democrats wind up winning, the forecaster can just say, "This was part of the 20%." So what was the value of the exercise?

More than that, though, what's the value of continuous forecasting up until the election unless those forecasts are relatively stable? Seems to me the relative value of a particular forecast would be in its ability to predict, with a decent degree of confidence, what was going to happen before it happened. But if the best estimate of that is just to look at the polls the night before the election, then isn't most of this just gum-flapping at best?
dmmidmi
« Reply #51 on: September 16, 2014, 09:05:24 AM »

Quote from: Slander and/or Libel on September 16, 2014, 08:22:01 AM

So, what then is the value of forecasting, if it can never be proven right or wrong? That is, if we're saying there's an 80% chance of Republican control of the Senate on election day, but the Democrats wind up winning, the forecaster can just say, "This was part of the 20%." So what was the value of the exercise?

More than that, though, what's the value of continuous forecasting up until the election unless those forecasts are relatively stable? Seems to me the relative value of a particular forecast would be in its ability to predict, with a decent degree of confidence, what was going to happen before it happened. But if the best estimate of that is just to look at the polls the night before the election, then isn't most of this just gum-flapping at best?

This is what renders his NCAA March Madness brackets completely useless. In my office pool, I'm required to submit my picks before the Round of 64. Mr. Silver updates his as the tourney goes along. What good is that to anyone?
Mister Mets
« Reply #52 on: September 16, 2014, 10:58:44 AM »

While Silver does often emphasize the uncertainty in his predictions, he undercuts that by taking credit for correctly calling close races, which leaves him more vulnerable to a backlash in a bad cycle.

Continuous forecasting makes sense because things change: politicians make gaffes, endorsements land, effective ads run.
Lief 🗽
« Reply #53 on: September 16, 2014, 04:17:12 PM »

Down to a 53% chance today. It's happening folks.
Harry
« Reply #54 on: September 16, 2014, 07:40:08 PM »

Quote from: Slander and/or Libel on September 16, 2014, 08:22:01 AM

So, what then is the value of forecasting, if it can never be proven right or wrong? That is, if we're saying there's an 80% chance of Republican control of the Senate on election day, but the Democrats wind up winning, the forecaster can just say, "This was part of the 20%." So what was the value of the exercise?

Over time, when he says something has an 80% chance, it should happen around 80% of the time and not happen 20% of the time. With a large enough sample size, we can test to see if his model is good. If the things he says have an 80% probability only happen 60% of the time, his model sucks. Similarly, if they happen 90% of the time, his model sucks.

Remember, he's supposed to be "wrong" 20% of the time in this scenario. If he's only "wrong" 5% of the time, his model is awful, even if he probably wouldn't get blamed for it.
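Harry's calibration test can be sketched in a few lines: simulate a forecaster whose 80% calls really do come in 80% of the time, and check that the observed hit rate sits near the claim. This is a toy sketch with simulated data, nothing to do with 538's actual forecasts:

```python
import random

random.seed(0)

# Toy data: 500 races all forecast at 80%, with outcomes drawn so the
# forecaster is perfectly calibrated by construction.
forecasts = [(0.80, random.random() < 0.80) for _ in range(500)]

# Observed hit rate among the 80% calls.
hits = [happened for p, happened in forecasts if p == 0.80]
rate = sum(hits) / len(hits)
print(f"claimed 80%, observed {rate:.0%}")
```

Swap in a miscalibrated simulator (say, outcomes actually drawn at 60%) and the same check flags it once the sample is big enough.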
KCDem
« Reply #55 on: September 16, 2014, 07:43:52 PM »

Quote from: Harry on September 16, 2014, 07:40:08 PM

So, what then is the value of forecasting, if it can never be proven right or wrong? That is, if we're saying there's an 80% chance of Republican control of the Senate on election day, but the Democrats wind up winning, the forecaster can just say, "This was part of the 20%." So what was the value of the exercise?

Over time, when he says something has an 80% chance, it should happen around 80% of the time and not happen 20% of the time. With a large enough sample size, we can test to see if his model is good. If the things he says have an 80% probability only happen 60% of the time, his model sucks. Similarly, if they happen 90% of the time, his model sucks.

Remember, he's supposed to be "wrong" 20% of the time in this scenario. If he's only "wrong" 5% of the time, his model is awful, even if he probably wouldn't get blamed for it.

Since we'll never get a large enough sample size of his outcomes, the model is worthless and should be thrown out. Silver hasn't been all that impressive with his outcomes overall. Throw it on the pile or in the trash.
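The sample-size worry can actually be put in numbers: a normal approximation to the binomial gives a rough estimate of how many 80% calls it takes before a given level of miscalibration stands out from noise. The function name and the 95% band are my choices for illustration, not anything from 538:

```python
import math

def calls_needed(p_claimed, p_true, z=1.96):
    """Rough count of forecasts at p_claimed needed before an actual
    hit rate of p_true falls outside a 95% confidence band
    (normal approximation to the binomial)."""
    se_one = math.sqrt(p_claimed * (1 - p_claimed))  # per-call std dev
    gap = abs(p_claimed - p_true)
    # Want z * se_one / sqrt(n) < gap  =>  n > (z * se_one / gap) ** 2
    return math.ceil((z * se_one / gap) ** 2)

print(calls_needed(0.8, 0.6))   # gross miscalibration shows up fast
print(calls_needed(0.8, 0.75))  # subtle miscalibration takes hundreds of calls
```

So 80% claims that really hit 60% would stand out after a couple dozen races, while subtle miscalibration genuinely needs more races than a single cycle provides, which is the kernel of truth in the sample-size complaint.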
Linus Van Pelt
« Reply #56 on: September 16, 2014, 08:23:45 PM »

Quote from: Harry on September 16, 2014, 07:40:08 PM

So, what then is the value of forecasting, if it can never be proven right or wrong? That is, if we're saying there's an 80% chance of Republican control of the Senate on election day, but the Democrats wind up winning, the forecaster can just say, "This was part of the 20%." So what was the value of the exercise?

Over time, when he says something has an 80% chance, it should happen around 80% of the time and not happen 20% of the time. With a large enough sample size, we can test to see if his model is good. If the things he says have an 80% probability only happen 60% of the time, his model sucks. Similarly, if they happen 90% of the time, his model sucks.

Remember, he's supposed to be "wrong" 20% of the time in this scenario. If he's only "wrong" 5% of the time, his model is awful, even if he probably wouldn't get blamed for it.

According to a 2010 analysis by Andrew Gelman, Silver's predictions were generally somewhat underconfident: for probabilities p, the fraction of events assigned probability p that actually occurred tended to be greater than p where p > 0.5 and less than p where p < 0.5. That means he's "wrong", in the intuitive non-probabilistic sense, less often than he would be if his probabilities were perfectly calibrated, so this problem is actually the opposite of the usual criticism based on cases like the North Dakota Senate race.

http://andrewgelman.com/2010/11/03/some_thoughts_o_8/

As the link explains, this can be hard to avoid when, as in election modeling, the number of data points is fairly small. So I would not call his model "awful" on this count, but it is a fairly imperfect process.
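The underconfidence pattern is easy to reproduce in a toy simulation: shrink the true probabilities toward 50% before "publishing" them, and the races called at roughly 70% come in well above 70%. All names and numbers here are illustrative, not taken from Gelman's analysis:

```python
import random

random.seed(1)

def stated(p_true, shrink=0.5):
    # An underconfident forecaster: the published probability is
    # pulled halfway back toward a coin flip.
    return 0.5 + shrink * (p_true - 0.5)

races = []
for _ in range(20000):
    p_true = random.uniform(0.5, 1.0)  # favorite's true chance of winning
    races.append((stated(p_true), random.random() < p_true))

# Look at races the forecaster published at roughly 70%.
bucket = [won for p, won in races if 0.65 <= p < 0.75]
rate = sum(bucket) / len(bucket)
print(f"stated ~70%, favorite actually won {rate:.0%}")
```

The stated-70% bucket comes in near 90%: the forecaster is "wrong" less often than advertised, which is the opposite failure mode from overconfidence.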
Antonio the Sixth
« Reply #57 on: September 17, 2014, 12:42:17 AM »

Yeah, underconfidence is inherently much less of a problem than overconfidence.
Adam Griffin
« Reply #58 on: September 17, 2014, 01:18:26 AM »

Barring some big changes in several different races, 2014 may be the year when Silver loses his shine. Looking back, each national cycle he has covered has been relatively one-sided and not as many individual races were truly close; his "none here and one there" track-record of inaccuracies may fall apart. As it stands and as it has stood for many months, there's a good chance that:

  • several Senate races could be very close to 50/50 (two-way model)
  • the national PV could be very close to 50/50
  • the composition of the Senate may end up being 50/50

That makes his whole probability angle risky in terms of correctly identifying who will win (I don't care if the method provides a technical cop-out for him: people listen to him because they expect his probabilities are going to be the result).

Quote

Nate has said himself that he doubts he (or anyone else) is likely to get every race (or all but one) correct. That isn't (or at least shouldn't be) a knock on him. He's not a wizard; he can only work with the information available. If a race is a true tossup according to all available data, then he can't read people's minds.

That's not quite what I was saying. Hopefully this framing is acceptable to Harry, since he's right: Silver's model has missed the mark several times when it comes to matching the candidate it gave the better chance with the actual winner, and that's fine. It's been no big deal up to now, compared with how often his model's favorite does in fact win.

Basically, I'm saying it's possible that there are 6, 7 or 8 of those exceptions with his model in this election, instead of 1, 2 or 3. I think this could happen due to how uniformly close many individual races are, and the national sentiment/likely turnout. Maybe a lot of these races clearly solidify with a final trend before Election Day, and then his model would likely perform as usual.

If there are a lot of upsets, then people are going to start doubting him. It doesn't matter what his calculations actually reflect - people's perceptions do. Even among political nerds (who aren't statistics majors/the most data-driven of the data-driven), there'll be a lot of people who suddenly don't have as much faith in Nate Silver as they once did. And I'm fine with that (Silver should be, too), because incorrect perceptions of his line of work are what made him notable in the first place.