Public Opinion Polls: Propaganda or Plebiscite?
On the eve of the U.S. invasion of Iraq in 2003, USA Today ran a story under the headline: “Americans Want This War, Or So They Tell Pollsters.” The headline encapsulates the way the news media interpreted polls of the U.S. public in the run-up to the invasion of Iraq. Up front it offers a misreading of poll data. Americans were not telling pollsters that they wanted war with Iraq, for the simple reason that pollsters were not asking them, “Do you want war?” The second half of the headline, meanwhile, is suggestive of a certain doubt or skepticism – an awareness that reading poll results as showing that “Americans want this war” might be a mistake.
In the run-up to the invasion of Iraq, what were Americans telling pollsters? We identified two different types of questions regarding support for the Bush administration’s policy of invasion. The first type posed abstract but straightforward up-or-down questions about whether or not respondents would support an invasion. For example: “Do you support or oppose military action against Iraq?” The second kind of question raised specific considerations, such as U.N. approval of an invasion, or other contingencies. For example: “Do you agree or disagree that the U.S. should send ground troops to Iraq to remove Saddam Hussein from power even if the United Nations opposes such action?” The first type of question consistently returned majorities saying they “favored” or “supported” invasion. The second type, meanwhile, right up until the eve of the invasion, often returned majorities that disagreed with military action.
The public as measured by opinion polls, then, was incoherent. It seemed to be in favor of war in the abstract, but opposed to war under actually existing conditions. This is an unsurprising outcome. When it comes to policy questions (as opposed to “horserace” questions about which candidate respondents prefer in an election) polls usually do not offer clear, unambiguous conclusions about what the majority wants. This case is an instance of how differently worded questions often return majorities that seem to be on opposite sides of the same question. At a broader level, looking at multiple issues rather than one issue in isolation, polls regularly return majorities in favor of incompatible policies (for example, balancing the budget, increasing education spending, and lowering taxes). Polls do not, as a rule, enable us to say categorically that the majority clearly favors a particular policy path.
Despite the mathematical precision of polling data, then, they are not positive and unambiguous measurements of a concrete reality, but rather indicators of a complex and ill-defined phenomenon. Interpretation is a key element of using them to describe “public opinion.” To see how polls were interpreted in the U.S. news media before the invasion of Iraq, we examined transcripts from CNN, CBS News, and Fox News, as well as the content of the 26 most important national newspapers (including letters to the editor) in the half-year before the invasion.
There were two ways in which journalists tended to favor the interpretation that polls showed support for an invasion. The first was simply to ignore the ambiguity of polled support altogether, reporting only on the simple up-or-down question and ignoring contingencies such as U.N. support. We found that more than 40% of stories before the invasion did this, while fewer than 1% offered the opposite interpretation that the public unambiguously opposed an invasion.
In the remaining stories, we found that while ambiguity was acknowledged, it generally did not interfere with the narrative that “Americans” supported war. We can paraphrase the narrative prevailing in this body of stories as follows: a claim that “polls show a majority of Americans supports military action,” qualified by such considerations as “but only under certain conditions” or “but they think the administration should take more time to find a peaceful solution.” The interpretation that Americans opposed war under existing conditions would have been just as valid, but was rare.
The result of this coverage appears to have been a widespread perception, among journalists and their audiences, that polls showed unambiguously that the public was enthusiastic about war. Many sensed that this was not quite correct. Among this number was New York Times columnist Thomas Friedman, who argued, “don’t believe the polls.” Similar arguments were made by other journalists, as well as by several anti-war writers of letters to the editor. Pro-war writers of letters to the editor simply took for granted that their position was that of the majority. One wrote: “All objective polls indicate that two-thirds of Americans support President Bush in his determination to disarm Iraq.”
We see two fundamental misunderstandings of polling technology in all this. The first misunderstanding is that polls are a good vehicle for the expression of citizen demands. Consider a New York Times story that claimed that poll results showed that Europeans and Americans were “urging the United States to act.” If a pollster calls you up and asks you to choose among “support,” “oppose,” and “don’t know,” it is a stretch to claim that by opting for “support” you are urging anybody to do anything in particular.
The second misunderstanding is that poll results are authoritative measurements of a positive reality, rather than reliable indicators of a complex and vague phenomenon. We see this in the “don’t believe the polls” argument advanced by Friedman and others. Instead of questioning an excessively narrow and singular interpretation of poll results, these people questioned the validity of polls themselves, thereby reinforcing the fallacy that polls offer an uncomplicated measurement of a singular “public opinion.”
Especially in a context like this one, in which poll results were reported heavily in the news media, polls are not instruments that transcend what they measure. In this case at least, polling data were not isolated from the public discussion of which they became a part, a discussion dominated by the Bush administration’s full-court press for war, with almost no opposition expressed in Congress or at other high levels of government. In an essentially pro-war, mass-mediated political climate, journalists’ failure to do justice to the ambiguity of poll results obscured the fact that polled majorities neither clearly advocated nor clearly opposed war, validating and feeding back into the prevailing climate. We see this in anti-war letters to the editor. Instead of making the perfectly defensible argument that the majority was anti-war, many letters accepted the premise that the majority was in favor of war and resorted to arguments like, “popular does not mean it is right.”
In On Liberty, John Stuart Mill describes a democratic majority as “those who succeed in making themselves accepted as the majority.” In the months before the U.S. invasion of Iraq, polling data helped the pro-war camp succeed in making itself accepted as the majority. In this way, they served a purpose that was more propaganda than democratic plebiscite.