For the past four months, I have been arguing that the contest for the Democratic gubernatorial nomination between the sitting State Attorney General and the State Treasurer was much closer than the pollsters would have us believe. Now that the results on Election Day seem to support my conclusion, the pollsters who saw a 40-plus point Coakley lead in the spring, and a 20-plus point lead three days before the polls opened, need to figure out where they went wrong.
Why weren’t pollsters able to get accurate samples of the primary electorate even one week out? Some folks want us to believe that the race changed dramatically in the final week; that huge last-minute ad blitzes pushed undecided voters overwhelmingly toward the Treasurer. While this explanation is theoretically possible, it is hard to square with the polls showing a 20-plus point lead for Coakley in the final week. The Suffolk Poll, which used what I thought was a good likely voter screen, had Coakley beating Grossman by 12 points.
The good news is that, regardless of what they say publicly in the aftermath of last Tuesday’s election, the pollsters will go back to the drawing board and try to figure out how to avoid similarly poor performances going forward. Pollsters who get elections wrong either figure it out and fix it, or they go out of business. I have been asked repeatedly why the Globe and WBUR polls didn’t use the kind of likely voter screen used in the late August Suffolk poll. For the polls conducted in the last month or so, my honest answer is that I don’t know. For polls conducted in the spring and early summer, pollsters were probably operating with assumptions that had proven reasonable in the past, namely that tighter likely voter screens in the early going are prohibitively expensive and ultimately unnecessary. The reason they cost more is that screening out more respondents means conducting more interviews, probably many more, in order to end up with a sufficiently large sample, which drives up the production costs of the polls.
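To make that cost arithmetic concrete, here is a minimal sketch. The target sample size and screen pass rates below are hypothetical numbers chosen for illustration, not figures from any actual Globe, WBUR, or Suffolk methodology.

```python
import math

def contacts_needed(target_sample: int, pass_rate: float) -> int:
    """Completed screener contacts required to end up with
    `target_sample` respondents who pass the likely voter screen."""
    return math.ceil(target_sample / pass_rate)

TARGET = 500        # hypothetical target sample of likely voters
LOOSE_PASS = 0.60   # hypothetical: 60% of respondents pass a loose screen
TIGHT_PASS = 0.25   # hypothetical: 25% pass a much stricter screen

print(contacts_needed(TARGET, LOOSE_PASS))  # 834 screener contacts
print(contacts_needed(TARGET, TIGHT_PASS))  # 2000 screener contacts
```

Under these assumed pass rates, the tighter screen more than doubles the number of interviews the pollster has to pay for, which is the whole economic argument against using it early in the race.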
In most election cycles, as Election Day gets closer, tighter likely voter screens become increasingly cost-effective and results become increasingly accurate. When pollsters brag about the accuracy of their election polls, they are referring to the last one conducted before the polls open, not the first one conducted in the race. When they get results wrong, on the other hand, they energetically remind critics that even the latest polls are “snapshots,” not forecasts, a point that is rarely emphasized when they nail the results.
So, why didn’t the Globe and WBUR polls tighten up much more down the stretch? Why didn’t these pollsters rely on the tighter likely voter screening methods used in the late August Suffolk Poll? That’s a tough one. They may have shared the opinion of journalist David Bernstein regarding the profile of likely Democratic primary voters. As readers may recall, Bernstein argued that the Suffolk Poll got it wrong because it screened out a lot of likely voters. By screening out folks who couldn’t say (within a month) when the election would occur and folks who hadn’t voted in the last couple of primaries, Suffolk had, according to Bernstein, under-represented the number of low-information, inconsistent primary voters who would turn out on September 9th. In other words, the pollsters, like Bernstein, may have relied on an inaccurate assumption about the profile of Democratic primary voters and thereby over-estimated the turnout of low-information partisans and non-partisan primary voters.
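For readers who want the screening logic spelled out, here is a rough sketch of the two-part screen described above, as I understand Bernstein’s characterization of the Suffolk methodology. The data structure, field names, and exact thresholds are my own assumptions for illustration, not Suffolk’s actual instrument.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Respondent:
    # How far off (in days) the respondent's guess of the election date is;
    # None means they could not name a date at all.
    election_date_error_days: Optional[int]
    # Whether the respondent voted in each of the last few primaries,
    # most recent last.
    past_primary_votes: List[bool] = field(default_factory=list)

def passes_screen(r: Respondent) -> bool:
    """True if the respondent counts as a likely primary voter under
    the two-part screen described in the text."""
    # Part 1: must place the election within roughly a month of the real date.
    knows_date = (r.election_date_error_days is not None
                  and abs(r.election_date_error_days) <= 30)
    # Part 2: must have voted in at least one of the last two primaries.
    # (Whether the screen required one or both is an assumption here.)
    consistent_voter = any(r.past_primary_votes[-2:])
    return knows_date and consistent_voter

# A low-information, inconsistent voter is screened out:
print(passes_screen(Respondent(None, [False, False])))  # False
# A voter who knows the date and voted in a recent primary passes:
print(passes_screen(Respondent(5, [True, False])))      # True
```

Bernstein’s complaint, in these terms, was that the second filter throws away real September 9th voters; the election results suggest the filter was closer to right than he allowed.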
If the inaccuracy of the polls was due to faulty assumptions about who turns out in Massachusetts primary elections, then avoiding future poor performances may be as simple as pollsters increasing the rigor of their likely voter screens down the stretch. However, I think there might be more to it. I think Massachusetts pollsters also need to consider tightening up their likely voter screens much earlier in the campaign season, and I think this may have become necessary, at least in part, because of the Boston Globe’s introduction of weekly candidate preference polling in the three months prior to Primary Election Day.
Unfortunately, I haven’t yet crystallized my theory about the impact of weekly polling on the accuracy of polling down the stretch. Hopefully, pollsters and fellow political scientists with the expertise and resources will help clarify and test this hunch. Sadly, as far as I know, no statewide exit polls were conducted last Tuesday, which is a crying shame because a good exit poll could have been very illuminating here.