I or Silver might point out that there’s a 75% chance of anything besides two heads in a row happening (which is accurate).
Is it?
Suppose I gave you two coins, which may or may not be weighted. You think they aren’t, and I think they are weighted 2:1 towards heads. Your model predicts one head, and mine predicts two heads.
We toss and get two heads. Does that mean the odds I gave are right? Does it mean the odds you gave are wrong?
In the real world, your odds will depend on your priors, which you can never prove or disprove. If we were working with coins, then we could repeat the experiment and possibly update our priors.
But suppose we only have one chance to toss them, after which they shatter. In that case, the model we use for the coins, weighted vs. unweighted, is just a means to arrive at a prediction. The prediction can be right or wrong, but a one-shot model is unfalsifiable. Same with Silver and the 2016 election.
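To put numbers on the coin version (a minimal sketch, assuming independent tosses and reading “weighted 2:1 towards heads” as P(heads) = 2/3):

    # One-shot coin example: fair model vs. the 2:1-weighted model.
    # Assumption: "weighted 2:1 towards heads" means P(heads) = 2/3.
    p_hh_fair = 0.5 ** 2          # 0.25, i.e. a 75% chance of anything besides two heads
    p_hh_weighted = (2 / 3) ** 2  # ~0.444 under the weighted model

    # After observing two heads exactly once, a likelihood ratio is all we get:
    likelihood_ratio = p_hh_weighted / p_hh_fair  # ~1.78 in favor of the weighted model
    print(p_hh_fair, p_hh_weighted, likelihood_ratio)

A ratio of roughly 1.8:1 leans toward the weighted model but settles nothing; the single observation is consistent with both models, and the 75% figure above is only “accurate” relative to the fair-coin model we can no longer test.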
The thing is, Nate Silver did not make a prediction about the 2016 race.
He said that Hillary had a higher chance of winning. He didn’t say Hillary was going to win.
How can you falsify the claim “Clinton has a higher chance of winning”?
Alternately:
Silver said “Clinton has a higher chance of winning in 2016” whereas Michael Moore said “Trump has a higher chance of winning in 2016”.
In hindsight, is one of these claims more valid than the other? Because if two contradictory claims are equally valid, then they are both meaningless.
You can’t really falsify the claim “Clinton has a higher chance of winning”, at least the way Nate Silver models it. His model is based upon statistics, and he basically runs a bunch of simulations of the election. In more of those simulations, Clinton won, hence his claim. But we had exactly one actual election, and in that election, Trump won. Perhaps his model is just wrong, or perhaps the outcome matched one of the simulations in his model where Trump won. If we could somehow run the election hundreds of times (or observe what happened in hundreds of parallel universes), then maybe we could see whether his model matched the outcome across a statistically significant number of elections. Nevertheless, Nate Silver had a model and statistics to back up his claim.
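To be concrete about what “runs a bunch of simulations” means, here is a toy sketch of a simulation-based forecast. The states, electoral votes, and probabilities are made up, and it ignores things like correlated polling errors, so it is not Silver’s actual model, just the general shape of one:

    import random

    # Toy simulation-based forecast. Everything below is invented for
    # illustration; these are NOT FiveThirtyEight's numbers.
    STATES = {            # state: (electoral votes, assumed P(Clinton wins state))
        "A": (29, 0.55),
        "B": (20, 0.60),
        "C": (18, 0.45),
        "D": (16, 0.50),
        "E": (10, 0.65),
    }
    NEEDED = 47  # majority of the 93 toy electoral votes

    def simulate_once():
        ev = sum(votes for votes, p in STATES.values() if random.random() < p)
        return ev >= NEEDED  # True means Clinton wins this simulated election

    def win_probability(n=100_000):
        return sum(simulate_once() for _ in range(n)) / n

    print(f"Clinton wins in {win_probability():.0%} of simulated elections")

The headline percentage is just the fraction of simulated elections a candidate wins; the real modeling work is in how the state-level probabilities (and the correlations between them) get estimated from polls, which is the part that is not fully public.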
As for Michael Moore, I’m not sure exactly how he came up with his prediction, but I get the impression it was mostly a gut feeling based upon his observations of what was happening. Nevertheless, Michael Moore could still back up his statement by articulating why he was making that claim and what observations had led him to it.
Though one crucial difference is still the whole prediction thing. Michael Moore actually made a prediction of a Trump win. Whereas Nate Silver just stated that Clinton had a higher chance of winning, and once again that was not a prediction. So you’re really comparing two different things here.
Silver claimed that Trump had a 28% chance of winning in 2016.
Suppose I built a model that claimed Trump had a 72% chance of winning in 2016.
Given there is only one 2016 election and Trump won it, is there any reason to believe that Silver’s results are better or worse than mine?
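To put rough numbers on that question (a sketch using nothing beyond the two headline probabilities):

    # One election, two probabilistic forecasts. How much does the single
    # observed outcome (Trump won) favor one over the other?
    p_silver = 0.28  # Trump probability cited above for Silver's model
    p_mine = 0.72    # the hypothetical rival model above

    likelihood_ratio = p_mine / p_silver  # ~2.57 in favor of the rival model

    # One-shot Brier scores (lower is better); with a single data point this
    # mostly measures luck, not model quality.
    brier_silver = (p_silver - 1) ** 2    # ~0.52
    brier_mine = (p_mine - 1) ** 2        # ~0.08
    print(likelihood_ratio, brier_silver, brier_mine)

The lone outcome favors the 72% model by a factor of about 2.6 and gives it the better one-shot score, but one data point says almost nothing about which model was actually built better.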
Sure, you could present your model and the data it is based upon and everyone could make their judgement.
Ok. But Silver’s model is proprietary and the details of its workings have not been presented to the public. So on what basis should we trust it?
I guess it’s up to you whether you want to trust it or not. He doesn’t share all the details, but he (at least in the past) shared enough details on his blog that I felt pretty good that he knew what he was talking about.
I will point out that he was one of the very few aggregators in 2016 that was saying “hey look, Trump has a very real chance of winning this”. Which is why I find it so amusing when people say he got it wrong in 2016 when in actuality he was one of the few that was right. After 2008 there were a bunch of copycats out there trying to do similar things as Nate Silver, and many of them were saying things like 99.99% Clinton. If people are going to criticize, that’s where I would direct it.
Even if he was the only one saying that, why are we giving him credit for it?
Maybe he was the first, but going forward anyone can follow his example and say things like, “Harris has a very real chance of winning. So does Trump. Also, Cruz and Allred both have very real chances of winning. So do Elizabeth Warren and her opponent, John Deaton”.
Silver showed that if you hedge by replacing a testable prediction with a tautology, then you can avoid criticism regardless of the result. I don’t think that is useful political analysis.