I get where you're coming from, but in reality we're all relying on data for all of our predictions. What we lack is a thorough and consistent process for applying that data.
Having a model is a way to formalize that data process and make it consistent. We will never see a perfectly predictive model, and that's OK. We shouldn't expect one, just like we shouldn't expect it from more traditional pundits.
I think we all know that tossup/lean/likely/safe all have degrees within them. Sabato has WI, MI, AZ, NV, PA, GA, and NC all as tossup states for the presidential election. That doesn't mean each of them is exactly as likely to go to Harris, but rather that each sits somewhere on a spectrum, maybe in the range of 45-55 to 55-45. The same idea applies to lean and likely. In that sense there's little practical difference between "Lean D" and "70.3% chance of a D win". They're both the result of models, one informal and one formal. The exactness of the latter prediction comes from it being a formal, mathematically based model, not from it actually having anything approaching that degree of confidence.
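To make that concrete, here's a minimal Python sketch of the idea: an informal rating is just a coarse band over the same probability scale a formal model reports on. The cutoffs below are my own invented guesses for illustration, not Sabato's actual scale.

```python
# Illustrative only: these probability bands are guesses at what the
# informal categories might correspond to, not Sabato's actual scale.
RATING_BANDS = [
    ("Safe R",   0.00, 0.10),
    ("Likely R", 0.10, 0.25),
    ("Lean R",   0.25, 0.45),
    ("Tossup",   0.45, 0.55),
    ("Lean D",   0.55, 0.75),
    ("Likely D", 0.75, 0.90),
    ("Safe D",   0.90, 1.01),  # upper bound nudged past 1.0 so p = 1.0 lands here
]

def rating_for(p_dem_win: float) -> str:
    """Map a formal model's D-win probability onto an informal rating."""
    for label, lo, hi in RATING_BANDS:
        if lo <= p_dem_win < hi:
            return label
    raise ValueError(f"probability out of range: {p_dem_win}")

print(rating_for(0.703))  # -> "Lean D": same information, coarser label
```

Under cutoffs like these, "70.3%" and "Lean D" carry roughly the same information; the formal model just hasn't rounded it off to a label.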
So long as we take the limitations into account, I rather like formal data models. If a prediction changes, it will be known and obvious why. If you feed the exact same data into one for two different elections, it will give the same prediction. There's no fretting about emotions, secret sources, or personal bias. There's a place for these models so long as they can source good data.
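Here's a toy example of what that determinism looks like, again in Python; the logistic form and the scale constant are made up for illustration, not any real forecaster's method.

```python
import math

def win_probability(poll_margin_pct: float, scale: float = 3.0) -> float:
    """Toy deterministic forecast: a logistic curve over the polling margin.

    Both the functional form and `scale` are invented for illustration.
    """
    return 1.0 / (1.0 + math.exp(-poll_margin_pct / scale))

# Feed it the same data twice and you get the same answer -- no mood swings,
# no secret sources, no personal bias.
assert win_probability(1.5) == win_probability(1.5)
print(f"{win_probability(1.5):.1%}")  # ~62.2% for a D+1.5 polling average
```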
I've heard this before; I just haven't seen any reason to believe that these models are any more accurate than simply asking the people who would know, like Sabato, to put percentages on a candidate's likelihood of winning.