The other day, I found myself watching the 2000 Vice Presidential debate between Dick Cheney and Joe Lieberman on YouTube. At one point, Lieberman laid out the highlights of the Democratic ticket’s proposed tax reform, which included a number of specific credits and changes he said would help the middle class and spur growth. Though I had my qualms, I felt Lieberman clearly and succinctly communicated what he saw as the benefits of the proposal. But after he was done, Cheney dropped a rhetorical hammer, saying, in essence, that you’d have to have a PhD in math to understand how that complicated policy would benefit you and your family. What Americans need, he said, is simple: a tax cut for everyone.
Aside from the crystal-clear distinction Cheney skillfully drew for voters in a five-second sound bite, this exchange got me thinking about the broader disconnect between the “public” and the “policy” in public policy debates. That is, the gap between the depth of understanding among policy professionals who work on these issues every day and the nominal level of knowledge the average American can be expected to have on a policy issue as expansive and complicated as the US tax code. The exchange also made me reflect on my job as a researcher, clarifying my thinking about why simpler research can produce clearer insights on complicated policy issues.
From my experience working on and helping craft research on complicated policy issues, I’ve realized something that at first glance may seem counter-intuitive: the simpler the subject, the deeper and more complicated the research can be. More complicated subjects, on the other hand, are best served by simpler research designs that stay in the shallow end of the public’s policy knowledge. Here’s why:
When exploring opinion around subjects that are part of a respondent’s daily life, there are myriad ways opinions are formed and connected to other opinions, making it possible to dive deeply and explore attitudes, beliefs, expectations, hesitations, and concerns.
For instance, a survey on economic policy might explore views shaped by personal experiences, such as a job loss, or how respondents see the role of government in spurring economic growth. The survey may include whether respondents feel the economy has been getting better or worse over time and how optimistic they are about the economy for future generations. All those types of questions could provide needed context around responses to the economic policy the survey is focused on. As a result, research documents (e.g., surveys, focus group guides, one-on-one interview scripts) can be longer and more robust because there are multiple angles from which to look at the issue and different perspectives from which to frame a policy position.
On the other hand, measuring opinion on complicated policy issues, such as the effect legislation would have on how an industry or business operates, is harder because these issues are disconnected from most people’s lives. As a result, the research documents that provide the most useful context and direction are ones that wade in with broad and simple designs.
These simpler research designs often employ a few broad measures of underlying attitudes about the subject, a brief and accurate synopsis of the policy issue, simple, straightforward measures of support or opposition, and two to four simple messages from each side (if needed).
Even this simpler research can deliver the essence of what good policy research should be: a common set of accurate, easy-to-understand conventions about a given subject that helps guide communications and/or actions. But to do it right, three elements are essential to the strategic advice we provide our clients.
- The most important element of any opinion research effort is making sure you are talking to the right audience. Many complicated policy issues may be relevant only to business leaders in a single industry; others may be subjects that highly informed and engaged voters would be interested in hearing about. Importantly, research is an iterative process, and the first step is usually to quantify what those most engaged in and affected by a given policy understand and respond to.
- The second element is the generous use of open-ended questions, which we use to “listen” to how, if at all, respondents tie complicated policy back to their daily lives or overall worldview. As a firm, we pride ourselves on the high quality and richness of the verbatim responses we collect and give these qualitative data appropriate weight in our analysis.
- And, third, give respondents who don’t have enough information, or don’t feel strongly enough to choose one of the options provided, the opportunity to “opt out.” Forcing respondents into a four-point scale (e.g. strongly support, somewhat support, somewhat oppose, strongly oppose) on an issue they don’t really understand makes the data murky. Telling respondents it’s okay to say “I don’t really know enough” or “I don’t have strong feelings one way or the other” makes the data among those who do choose an option much clearer.
On many issues, more Americans are in the shallow end of policy than you may think. In the August NBC-WSJ poll, for instance, we found 40% of Americans didn’t know enough about “the rise of the terrorist group ISIS in Iraq,” 42% didn’t know enough about the Syrian civil war, and about a third said they didn’t know enough about the conflicts in Gaza (35%) and in Ukraine (32%) to say whether they were satisfied or dissatisfied with how the US was handling these situations.
Even 20% said they didn’t know enough about the immigration problem with children from Central America illegally crossing the border to have an opinion, a comparatively smaller share but still a considerable chunk of the electorate. The point is, we wouldn’t know what share of Americans weren’t informed enough to offer an answer if we didn’t give respondents the option to say so.
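For readers who like to see the mechanics, the opt-out approach can be sketched in a few lines of code. This is a toy illustration with entirely made-up responses, not data from any poll discussed here: it shows how tabulating the opt-out share separately, and computing support only among respondents who chose a position, keeps uninformed guesses from muddying the signal.

```python
# Toy tabulation of a four-point scale with explicit opt-out options.
# All response data below is hypothetical, for illustration only.
from collections import Counter

responses = [
    "strongly support", "somewhat support", "somewhat support",
    "somewhat oppose", "strongly oppose",
    "don't know enough", "don't know enough", "don't know enough",
    "no strong feelings",
]

counts = Counter(responses)
total = len(responses)

# Share of the full sample that opted out rather than guessing.
opt_out = counts["don't know enough"] + counts["no strong feelings"]
print(f"Opted out: {opt_out / total:.0%}")  # prints "Opted out: 44%"

# Support and opposition computed only among respondents who chose a
# position, so the reported split reflects actual opinion holders.
holders = total - opt_out
support = counts["strongly support"] + counts["somewhat support"]
oppose = counts["somewhat oppose"] + counts["strongly oppose"]
print(f"Support among opinion holders: {support / holders:.0%}")  # 60%
print(f"Oppose among opinion holders: {oppose / holders:.0%}")    # 40%
```

Without the opt-out categories, those four uninformed respondents would be forced onto the scale, and the reported support number would be shaped by guesses.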
Still, opinion polling is as much an art as a science. So, when in doubt and when the budget allows, pollsters should field a survey among a very small sample and conduct post-test interviews with respondents and interviewers (when applicable) to identify confusing questions or language.
In much of the policy work we do, we can delve deeply. But the secret to clear, accurate, and useful research on complicated policy issues is to stay in the shallow end of the policy pool, let respondents say when they just don’t know, and listen using open-ended questions.
Research designs on complicated policy issues that don’t identify the right audience, assume too deep a knowledge base, and/or force respondents into pre-determined responses can produce fuzzy data that don’t hold up in real life. And even with all these elements in place, pre-testing is advisable when the budget allows.