A nice lesson on poll accuracy

Rutgers-Eagleton has released an independent study of why their 2013 polls did so poorly. Basically, they correctly forecast that Christie and Booker would win by a lot but overestimated the margins by a huge amount. They had Christie at +36 but he “only” won by 22; Booker was at +22 and won by 10. I would have written it off to the problems of forecasting low-turnout elections, but I would have been wrong:

The Langer report identifies the primary reason for the inaccurate results as the failure to put the “head-to-head” questions, which asked respondents for their vote intention, at or near the beginning of the questionnaire. Because these questions were asked after a series of other questions, it appears that respondents were “primed” to think positively about Governor Chris Christie in the November survey, which then may have led Democrats and independents in particular to over-report their likelihood of voting for the Governor. A similar process occurred with the October Senate poll, where voters were first reminded of how little they knew about Lonegan and how much they liked Booker before being asked the vote question.
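
To get a feel for how a modest priming effect can inflate a margin by ten points or more, here is a minimal simulation sketch. The party shares, support rates, and 15 percent flip rate are my own illustrative assumptions, not numbers from the Langer report or the Rutgers-Eagleton polls.

```python
# A minimal simulation of how question-order priming can inflate a margin.
# All numbers here are illustrative assumptions, not figures from the
# Langer report or the Rutgers-Eagleton polls.
import random

random.seed(1)

def simulate_poll(n=2000, vote_question_first=False, prime_rate=0.15):
    """Return the Christie-minus-opponent margin in percentage points.

    Each respondent has a 'true' intention; if the horse-race question
    comes after a battery of Christie-friendly items (vote_question_first
    = False), a share of Democrats and independents who don't actually
    support him over-report that they'll vote for him.
    """
    christie = opponent = 0
    for _ in range(n):
        party = random.choices(["D", "R", "I"], weights=[40, 30, 30])[0]
        # assumed true support for Christie by party
        p_true = {"D": 0.30, "R": 0.95, "I": 0.65}[party]
        says_christie = random.random() < p_true
        # priming: some D/I respondents over-report support for Christie
        if not vote_question_first and party in ("D", "I") and not says_christie:
            if random.random() < prime_rate:
                says_christie = True
        if says_christie:
            christie += 1
        else:
            opponent += 1
    return 100 * (christie - opponent) / n

print("vote question asked first:", round(simulate_poll(vote_question_first=True), 1))
print("vote question asked late: ", round(simulate_poll(vote_question_first=False), 1))
```

Under these made-up assumptions the true margin is about +20, but asking the vote question after the friendly battery pushes the estimate past +30, which is the same order of magnitude as the Christie overshoot.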

As the post makes clear, this was not done for a nefarious purpose but simply to continue a series of questions polled over the years. Ideally there would have been separate “horse race” and issue polls. It’s a good lesson in how difficult it is to poll fairly, and a perfect example of public accountability. It’s too bad Rutgers-Eagleton doesn’t have a large budget, because I view their polls as an important public service.
