"Good" Polls / "Bad" Polls -- How Can You Tell?:
Ten Tips for Consumers of Survey Research

By Michael W. Link and Robert W. Oldendick

The number of polls and surveys conducted in the United States has increased dramatically in recent years. Every day, it seems, the public and policymakers alike are confronted with the results from the latest survey conducted by news organizations, political campaigns, interest groups, and a variety of public and private firms. Poll results have become an important component in public policy debates and, therefore, have a direct effect on governmental action. While well done surveys can provide important information about public attitudes and preferences, polls are no longer devices for simply taking the "public pulse"; rather, survey methodology has become a tool in many sophisticated efforts to manipulate opinions (see sidebar #1). Given the prevalence and potential impact of public polling, the question arises: How can policymakers (and the public for that matter) tell the difference between a "good" survey and a "bad" one?

Presented here are ten tips for evaluating research on public attitudes. While the list is certainly not exhaustive, it addresses many of the more common, potential problems with survey data. The important lesson is that one does not need to be an expert in survey methodology to evaluate the quality of survey research.

#1: Know the Purpose and the Sponsor of the Study

The objectives of a "good" survey should be specific, clear-cut, and unambiguous.

Surveys should be designed to develop statistical information about a topic or issue, not to produce predetermined results. Most commissioned surveys done by reputable polling organizations tend to produce reliable and accurate results. This is due, in part, to the fact that the sponsors of such surveys have a genuine need for accurate information. There are organizations, however, that conduct surveys for other reasons, such as promoting certain positions or convincing the public (and policymakers) of the wisdom of their particular issue stance. As one survey expert points out:

To that end the survey will be designed to yield the desired results; this is most often accomplished by use of highly loaded questions, although more subtle methods are also used. Sometimes the samples of people interviewed are skewed to ensure a predetermined outcome. In many cases the poll itself is secondary to the real aim of the group... (Source: Herbert Asher. 1992. Polling and the Public: What Every Citizen Should Know. Washington, DC: Congressional Quarterly Press).

Surveys are increasingly used for ulterior purposes, such as soliciting money, creating news filler, marketing, and even shaping opinions on certain issues (see sidebar #2). Consumers of survey research need an understanding, therefore, of why the study was conducted. Sometimes this will require the consumer to look past the stated objective of the survey and examine the context and manner in which the results are presented or published.

At times sponsorship can provide clues to a survey’s purpose. Sponsorship does not automatically invalidate the results of a survey, but it can raise some "red flags." Certainly every group or individual who publishes the results of a survey does so for some particular purpose, usually to inform or to persuade. The problem arises when groups or individuals manipulate or purposely bias results in order to make their point. For example, in 1993 political activist Ross Perot announced that he would conduct a "national referendum" on government reform. The centerpiece of this effort was a 17-question ballot distributed in TV Guide, newspapers, and magazines. People were encouraged to clip out the questionnaire and return it to Perot’s United We Stand America organization, which tabulated and reported the results. The effort drew strong criticism from the professional polling community for its unrepresentative sampling scheme and loaded questions. For example, one of the questions included in the "referendum" read: "Should Congress and officials in the White House set an example for sacrifice by eliminating all perks and special privileges currently paid by taxpayers?" Both the wording and tone of the question, as well as the context in which it was asked, ensured that many respondents would provide Perot with the responses he was looking for and hoping to capitalize on. Despite the high visibility of the effort and its sponsor, the "referendum" was a classic case of "pseudo-polling."

Point to Remember: Survey research is an effective tool which can be used to inform or misused to manipulate views of the public and policymakers alike.

#2. Know Who was Interviewed

One of the first steps in survey research is identifying the population of individuals to be surveyed. This can vary greatly depending on the purpose of the survey. Target populations, for example, can include all of the adults in a particular state, persons with disabilities, children in high schools, eligible voters in certain voting districts, or males between the ages of 30 and 70 who have been diagnosed with colon cancer. The possibilities are endless!

Consumers of survey research, however, should focus primarily on two questions: (1) Is the target population (that is, the population whose opinions the results purport to reflect) properly identified in reports of the survey results?, and (2) Was the sampling frame (that is, the list or source from which people were selected for interviews) an adequate representation of the target population? Information about the target population usually is (and always should be) included in any survey data report or news story. This piece of information is vital because survey results are applicable only to the population targeted. For example, we cannot interview adults in only one county and then claim that the results are applicable statewide. The reason is that the types of people living in the particular county (and hence the attitudes and opinions of these people) are usually not going to reflect the types (and hence attitudes) of people living in other parts of the state.

It is usually more difficult, however, for consumers of survey research to identify whether the sampling frame used adequately reflected the population of interest. Such information, if given at all, is usually only found in technical reports and rarely makes it into press releases and news stories. Yet, this should not take away from the importance of this information. If we want to make statements about the attitudes and opinions of a particular segment of society, we need to be sure that proper efforts were made to actually reach the potential members of that segment of society. For example, if we are interested in looking at the political viewpoints of women, we could not rely simply on a listing of members of a particular women’s group or organization. The reason is that the list would only contain the names of those women with a particular viewpoint who decided to join that particular organization. In other words, the sampling frame would be inadequate and not allow us to make broader statements about women’s views of politics. This does not mean that we need to talk to every woman to draw conclusions about women’s attitudes, but it does mean that we need a reliable way of ensuring that every woman has a calculable chance of being interviewed.

Point to Remember: The survey results are only applicable to the population targeted by the survey.

#3. Know How the Survey Respondents Were Selected

Perhaps one of the most mystifying aspects of survey research for most people involves the drawing of a sample and how this allows researchers to then make conclusions about a larger population. In a "good" survey, respondents are selected "randomly", but not "haphazardly." Following the tenets of probability theory, a sample of individuals to be surveyed should be selected in such a way that each person in the population has a measurable (although perhaps not equal) chance of selection. This allows the results to be reliably projected from the sample to the population with known levels of certainty or precision.

All serious surveys use some form of random or probability sampling strategy. The primary characteristic of a probability sample is that it allows the researcher to determine the probability of any single person being included in the sample. This is not true of non-probability samples, such as mall intercept (where interviewers stand at a busy location in a shopping mall and select respondents from the shoppers passing by) or call-in (1-900) surveys. In neither instance is the respondent selected in a random manner; rather, in both cases either the interviewer or the respondent has control over who will be surveyed. A well designed probability sample allows a researcher to talk to a relatively small number of individuals (usually 800 to 1500), and to project those responses to the larger population of interest (such as all adults statewide).
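To make the idea of a known chance of selection concrete, consider the short sketch below. It is our own illustration, not part of the original article; the population, frame size, and sample size are hypothetical, and Python is used simply for convenience. It shows why a simple random sample has a calculable inclusion probability while a mall-intercept or call-in "sample" does not.

    import random

    # A minimal illustrative sketch (not from the article); the frame and
    # sample sizes here are hypothetical.
    population = [f"person_{i}" for i in range(10_000)]   # hypothetical sampling frame
    n = 1_000                                              # interviews to be completed

    # Simple random sample: every member of the frame has the same known
    # chance of selection, n / N.
    sample = random.sample(population, n)
    print(len(sample), "respondents drawn")
    print(f"Chance of selection for any one person: {n / len(population):.1%}")   # 10.0%

    # By contrast, in a mall-intercept or 1-900 call-in poll, whoever happens to
    # walk by (or chooses to call) ends up in the "sample," so no such selection
    # probability can be calculated for members of the larger population.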

Underlying probability sampling is the concept of "representativeness"; that is, a good probability design will allow the researcher to select a sample which is representative of the larger population. Because of this property, it is quite possible that a probability sample of 1,000 respondents will produce results that more accurately reflect the population than a non-probability sample of 200,000 respondents.

The respondent selection process is analogous to having one’s blood drawn:

"To perform a blood test a medical technician draws only a drop or two of blood from the patient. This very small sample of the total amount of blood in the patient’s body is sufficient to produce accurate results, since any particular drop has properties identical to those of the remaining blood. The technician does not need to choose the specific blood cells to be tested." (Source: Asher. Polling and the Public).

The same is true of sampling in survey research.

A related point involves the response rate for the survey. Although there are a number of different ways in which response rates can be calculated, most involve dividing the number of completed interviews by the number of eligible respondents in the sample. Remember that when properly drawn, the sample is representative of the population from which it was selected. For the results to be representative, therefore, it is necessary to try to interview all individuals selected for the sample. This would result in a 100% response rate. Unfortunately, this goal is almost never achieved. For any number of reasons -- including respondents refusing to participate, being too sick to be interviewed, or being out of town during the interviewing period -- it is nearly impossible to interview everyone selected for the sample. Problems arise when the attitudes and opinions of those who are not interviewed deviate significantly from those who are interviewed. The results can be biased, reflecting the views of the latter group. For issues such as the need for government health care services, where the views of hard-to-reach populations such as those with low incomes may be quite distinct from those with higher incomes, nonresponse can become a serious issue. Most reputable surveys attain a response rate of 65% to 75%.
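As a simple illustration of the response-rate arithmetic described above, the following sketch (ours; the counts are hypothetical) divides completed interviews by eligible sample members:

    # Hypothetical counts used only to illustrate the calculation.
    completed_interviews = 720
    eligible_respondents = 1_000   # people drawn into the sample and eligible for interview

    response_rate = completed_interviews / eligible_respondents
    print(f"Response rate: {response_rate:.0%}")   # 72%, within the 65%-75% range cited above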

Point to Remember: The representativeness of the sample results vis-à-vis the larger population is a direct reflection of how the sample was drawn and the response rate -- not the number of interviews completed.

#4. Know How Many People Were Interviewed

While the number of people interviewed does not directly affect the representativeness of the sample per se, it does have direct bearing on the accuracy of the findings. The level of sampling error in survey findings is based, in part, on the number of people interviewed. Sampling error is simply the difference between the opinion estimates or results obtained from the sample and the true population value (or what we would have found if we could have collected data from every member of the population). For example, if a survey found that 40% of high school teenagers drank alcohol in the past year and the sampling error for the survey was 4%, then the actual percentage of teens using alcohol could really be as low as 36% or as high as 44%. In general, the larger the sample, the smaller the sampling error.

Yet, the relationship between sample size and sampling error is not a simple straight line. As shown in Figure 1, as the number of people interviewed increases, the sampling error decreases -- sharply at first, then essentially leveling off. The industry standard for sampling errors is about 4%. How likely it is that the actual population value on a question falls within the sampling error range is a function of the confidence level selected. Usually a 95% confidence level is used, meaning that if the survey were conducted 100 times, the results obtained from 95 of these surveys would fall within the sampling error range.

It is important to recognize, however, that the sampling error associated with analysis of subgroups within the larger sample is usually greater than the sampling error for the overall survey. For example, if we interviewed 800 residents of a particular state, the overall sampling error for questions answered by all 800 people would be 3.46%. Yet, if we wanted to look at only the responses of those age 65 and over, we would have to calculate a new sampling error. In this case, if older persons made up 20% of the sample (i.e., 160 respondents), the sampling error would increase to 7.75%. In sum, the fewer the number of people (or cases) within a subgroup, the larger the sampling error (and hence the lower the degree of precision in the results for that subgroup).
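The figures in this section follow from the standard formula for the sampling error of a percentage. The short sketch below is our own illustration; it assumes the conventional 95% margin-of-error formula evaluated at the most conservative case (a 50/50 split), which reproduces the 3.46% and 7.75% figures above as well as the values in Figure 1.

    # Our illustrative calculation; assumes the usual 95% margin-of-error
    # formula for a proportion at p = 0.5 (the most conservative case).
    from math import sqrt

    def margin_of_error(n, p=0.5, z=1.96):
        """95% margin of error, in percentage points, for a sample of size n."""
        return 100 * z * sqrt(p * (1 - p) / n)

    print(f"{margin_of_error(800):.2f}")    # 3.46 -- all 800 respondents
    print(f"{margin_of_error(160):.2f}")    # 7.75 -- the subgroup of 160 respondents age 65 and over
    print(f"{margin_of_error(1000):.1f}")   # 3.1  -- matches the n = 1,000 row of Figure 1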

Point to Remember: The sampling error (which is, in part, a reflection of sample size) and the confidence level provide measures of the potential error in a survey resulting from the fact that not all members of the population were interviewed.

#5 Know the Exact Wording of the Questions Used

Planning the questionnaire is perhaps the most important stage in the survey development process. How questions are asked and the response categories provided are crucial to determining results. Common sense tells us that the use of loaded words or the phrasing of a question can affect the pattern of responses to a survey question. For example, an organization of parents and teachers in a city in Indiana several years ago sent a questionnaire to parents in 36 elementary schools asking if they "oppose the forced busing of our elementary school children for the purposes of racial balance." The use of loaded words such as "forced" and the lack of balance in the question (i.e., there is no pro-busing stance or argument made) make the outcome of the question a virtual certainty. The fact that the survey was accompanied by a flyer stating "We must act now!" and "It’s time to stand up and be counted!" provided further cues that a strong anti-busing response was desired.

While individuals or groups with an ax to grind can easily construct questions that will generate desired responses, question wording effects can be present even when the questions are constructed in a non-biased, fair, and straightforward manner. Seemingly simple questions (such as the number of times one has seen a physician in the past year) can seem ambiguous or be interpreted differently by respondents. Some respondents will use the previous twelve months as a time frame, while others may simply include visits during the previous or current calendar year (depending on whether the survey is being conducted in the early or latter part of the year). Likewise, respondents are likely to interpret the meaning of "physician" differently.

Words that sound alike, such as "prophets" and "profits" or "very" and "fairly" can often be confusing. In the early 1940s, the Gallup organization asked respondents if they "owned stock." In analyzing the results, the percentage of the population owning "stock" was much higher than expected, especially in the South and West. Later investigation revealed that many of those surveyed interpreted the question as asking about "live stock." The problem of word choice is heightened when researchers and survey sponsors use technical terms or "buzz" words, which may mean something to the researchers, but mean little or nothing to the respondents. Even frequently used terms such as "managed care," "government mandates," and "life-long learning" can often leave respondents cold or confused.

As a general rule, the concepts employed should be clearly defined and questions unambiguously phrased. Respondents should be aware of what is being asked of them, not "tricked" into giving responses.

Point to Remember: The responses to survey questions are a direct reflection of the questions asked.

#6. Know the Order and the Context of the Questions

While question wording effects may seem like common sense to most survey consumers, the effects of question order usually are not. A typical survey contains many questions, in some cases exceeding a hundred items. Each question serves to set the context for subsequent questions. This can have a major impact on the responses given to these questions. For instance, if we ask people at the beginning of a survey, "What do you think is the most important problem facing the nation?", the top answers will usually include "the economy," "education," "crime," etc. If, however, we first asked a series of questions about the increase in sexually transmitted diseases among the nation’s youth and then asked the most important problem question, invariably the problem of "sexually transmitted diseases among youth" would rise towards the top of the public agenda. In effect, the respondents would have been "educated" on this issue and would have had it at the forefront of their minds when answering the question. A similar effect was reported by researchers who asked people about their interest in politics. Respondents who were asked about their attentiveness to politics early in the survey asserted much higher levels of interest than those who were asked this question after a series of difficult questions about their representative’s record. The wording of the questions was identical. All that was changed was the order in which the questions were asked.

Point to Remember: What comes first sets the context for what comes next.

#7 Recognize That There Are "Opinions" and "Non-Opinions"

No matter how much care is given to the sampling and question-construction aspects of a survey, it is important to keep in mind that respondents will almost always give you an answer to your questions -- even if they really don’t understand or know anything about the topic. Because surveys involve interaction between a respondent and an interviewer, there are social pressures which can lead respondents to give answers or opinions even when they do not really know an answer or hold an opinion. Because few people want to admit that they are uninformed about issues they feel others might expect them to be informed about, people will offer some type of opinion or response. These responses are often referred to as "nonattitudes."

Problems arise when these "nonattitudes" are treated as genuine expressions of public opinion. The result can be an inaccurate picture of public attitudes. Unfortunately, it is often difficult to distinguish between nonattitudes and genuine attitudes. The problem of nonattitudes was illustrated by researchers who asked respondents about their attitudes towards a fictitious "Public Affairs Act." Respondents were asked, "Some people say that the 1975 Public Affairs Act should be repealed. Do you agree or disagree with this idea?" Fully one-third of those surveyed offered an opinion. It is a certainty that such is also the case with many "real" survey items which are not central in the minds of respondents.

One way to gauge the level of nonattitudes and the accuracy of survey responses is to look at the distribution of opinion on items over time. Barring an event which would create a notable shift in public opinion, the distribution should be relatively stable over time. Consumers of survey research need to recognize, therefore, that not every issue of central importance to a researcher is an appropriate topic for public inquiry. Different people have different concerns; those who conduct and use survey research need to recognize this and proceed accordingly.

Point to Remember: Respondents will almost always give you an answer -- but does it mean anything?

#8 Check the Interpretation of the Data

In any report of public opinion, findings and interpretations should be presented honestly and objectively, with full reporting of all relevant findings, including any that may seem contradictory or unfavorable. Unfortunately, this standard is often not met when either the researcher or the survey sponsor is a stakeholder in the policy debate or issue in question. Sometimes this involves overtly biased reporting of findings in which the results of the survey are misconstrued. It is important, therefore, to make sure that the conclusions presented in a report or press release are consistent with the data provided.

One of the more common mistakes in presenting survey data involves the incorrect or misleading presentation of percentages in survey tables, as illustrated by the following tables:

 

EXAMPLE OF INCORRECT PRESENTATION OF PERCENTAGES

                        Never Used Marijuana    Used Marijuana    TOTAL
Never Used Heroin       81.5%  (799)            18.5%  (181)      100%  (980)
Used Heroin              5.0%    (1)            95.0%   (19)      100%   (20)

EXAMPLE OF CORRECT PRESENTATION OF PERCENTAGES

                        Never Used Marijuana    Used Marijuana
Never Used Heroin       99.9%  (799)            90.5%  (181)
Used Heroin              0.1%    (1)             9.5%   (19)
TOTAL                    100%  (800)             100%  (200)

Survey data similar to these have been reported as evidence that marijuana is a gateway drug for heroin. In the first example, the logical conclusion is that marijuana use leads to heroin use; that is, 95% of those who used heroin had also used marijuana. In fact, while marijuana users are more likely than non-users to try heroin, the likelihood is not as dramatic as this 95% figure implies. As shown in the second example, when the correct percentages are examined, 9.5% of those who used marijuana also used heroin, compared to only 0.1% of those who had not tried marijuana. In short, if percentages are calculated in the wrong direction, the resultant conclusions can be very misleading.
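The contrast between the two tables can be verified with a few lines of arithmetic. The sketch below is ours, not the authors'; the cell counts are taken from the tables above. It computes the percentages in both directions and shows how the 95% figure arises from percentaging within rows rather than within columns.

    # Cell counts from the tables above.
    never_heroin_never_marijuana = 799
    never_heroin_used_marijuana = 181
    used_heroin_never_marijuana = 1
    used_heroin_used_marijuana = 19

    # Wrong direction for the "gateway" claim: percentaging within the heroin rows.
    heroin_users = used_heroin_never_marijuana + used_heroin_used_marijuana        # 20
    print(100 * used_heroin_used_marijuana / heroin_users)                         # 95.0

    # Correct direction: percentaging within the marijuana columns.
    marijuana_users = never_heroin_used_marijuana + used_heroin_used_marijuana     # 200
    never_marijuana = never_heroin_never_marijuana + used_heroin_never_marijuana   # 800
    print(100 * used_heroin_used_marijuana / marijuana_users)                      # 9.5
    print(100 * used_heroin_never_marijuana / never_marijuana)                     # 0.125, i.e., about 0.1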

Another potential area of misinterpretation involves the reporting of some group differences. Reports often focus on one subgroup (e.g., African-Americans, women, older people); but are the results for this subgroup unique, or do they mirror those of the general population? With smaller samples, differences among groups may be reported as being important when, in fact, these distinctions may be the result of chance factors. With very large samples, some differences among subgroups may be reported as being statistically significant, but may not be substantively important.

Point to Remember: Recall the old adage, "statistics don't lie, but liars use statistics." Where possible, review all the figures reported as a check on the author's interpretation.

#9 Recognize That What Was Not Reported May Be Important

Apart from misinterpretation of the data, another problem that occurs with survey reports and press releases involves what is not reported. This can include lack of documentation of the sample population, sample size, response rate, question wording, or other technical aspects of the survey. It can also include not reporting findings that may have been unfavorable to the position held by the sponsor of the survey. For instance, in 1989 a pro-choice organization commissioned a survey to examine public attitudes towards abortion. While the survey indicated that there was a high level of support for a woman’s right to choose, it also showed that the public supported certain restrictions on abortion, particularly when it involved pregnant women under the age of 17. The press release put out by the group, however, reported the results showing the public to be in favor of choice while ignoring those that indicated the public’s misgivings, creating a misleading impression of support for the pro-choice position.

Point to Remember: What is missing from a report of survey data can be at least as important as what is reported.

#10 Be Familiar with the Reputation of the Organization Conducting the Study

Finally, survey research and public polling have become a cottage industry of sorts, with all manner of organizations, quasi-organizations, and individuals entering the field to meet the growing demand for information on public attitudes. As a result, there is great variation in the quality of the organizations conducting survey research. These can range from large organizations which specialize in survey research (such as Gallup or Louis Harris and Associates), to interest groups which conduct their own surveys on an intermittent basis, to college professors who wrangle groups of students into conducting ad hoc surveys for research projects. While decisions on the type of organization used are usually driven by cost considerations, the fact remains that the more reputable organizations that specialize in survey research will have both the expertise and resources to conduct the survey properly and to ensure quality control. This is not to say that a "good" survey cannot be conducted on a small scale, but rather that the likelihood is greater that consistently high quality data will be produced by an organization that specializes in the field. Survey research is an endeavor best undertaken by survey professionals.

Among the many trademarks of a professional survey organization, three characteristics stand out. First is the level of expertise and professionalism of the staff. Most reputable survey organizations are headed by individuals with extensive experience with survey research methods and approaches. Such individuals are knowledgeable about proper procedures and experienced in identifying and dealing with potential problems. The field work itself is often directed by field directors who oversee the entire data collection process and supervisors who monitor data collection. Expertise and professionalism also apply to the interviewing staff. Interviewers at professional organizations undergo comprehensive training in asking questions properly, probing respondents effectively, and generally ensuring that the data collected are of the highest quality. Such considerations, while oftentimes overlooked or ignored by less professional organizations, can have a major impact on the final results of a survey. A second characteristic involves the resources the organization has available. Today, most professional survey organizations have permanent offices and make extensive use of computer technology. CATI (Computer Aided Telephone Interviewing) and CAPI (Computer Aided Personal Interviewing) systems have become the norm and go a long way towards ensuring quality control in the data collection and processing phases of survey research. Finally, reputable organizations have a proven track record. The proof of their ability to collect and report accurate data on public attitudes can be seen in their prior work in this area.

Most importantly, a professional survey organization is aware of the factors --e.g., identifying a target population, selecting a representative sample, asking unbiased questions, collecting data in a neutral manner, etc. -- that comprise a good survey and that have been discussed in the previous points. A properly designed and executed survey can provide valuable information for the policy process; but, it involves much more than asking a few people some questions about an issue.

Point to Remember: Survey research is a professional endeavor and sloppy execution can oftentimes undermine the results of even the most well-planned survey.

Conclusion

It doesn’t take an advanced degree in statistics to become an astute consumer of survey research information. It does, however, take a basic understanding of the process involved and an awareness of the potential problems posed by this method of gauging public attitudes. The best advice for survey consumers, therefore, is "buyer beware."

"Just as consumers in a supermarket will often inspect the list of ingredients in a product, so should public opinion consumers question what went into a poll before accepting its results." (Source: Asher. Polling and the Public).

Yet, if survey consumers are to evaluate public opinion research in a reasoned manner, it is incumbent on the firms conducting surveys and the groups or individuals publishing the results to hold to the notion of "truth in advertising." Proper collection and accurate portrayal of public opinion results are the industry standard -- a standard enforced by professional survey research associations such as the American Association for Public Opinion Research (AAPOR). In their recent volume, "Best Practices for Survey and Public Opinion Research," the AAPOR governing council states:

Excellence in survey practice requires that survey methods be fully disclosed--reported in sufficient detail to permit replication by another researcher--and that all data (subject to appropriate safeguards to maintain privacy and confidentiality) be fully documented and made available for independent examination.

Only then can consumers of survey results have an adequate basis for judging "good" from "bad" polls.

REFERENCES

We’d like to thank Ron Shealy for his helpful comments on this article. In addition to the numerous suggestions and examples forwarded by our colleagues in the survey field, the following sources were used in the writing of this article:

Asher, Herbert. 1992. Polling and the Public: What Every Citizen Should Know. Washington, DC: Congressional Quarterly Press.

American Association for Public Opinion Research. 1997. Best Practices for Survey and Public Opinion Research and Survey Practices AAPOR Condemns. Ann Arbor, MI: AAPOR.

Bishop, George F., Robert W. Oldendick, and Alfred J. Tuchfarber. 1980. "Pseudo-Opinions on Public Affairs." Public Opinion Quarterly 44: 198-209.

Bishop, George F., Robert W. Oldendick, and Alfred J. Tuchfarber. 1984. "What Must My Interest in Politics Be If I Just Told You ‘I Don’t Know’?" Public Opinion Quarterly 48: 510-519.

Bradburn, Norman M., and Seymour Sudman. 1989. Polls and Surveys: Understanding What They Tell Us.

National Council on Public Polls. n.d. Twenty Questions a Journalist Should Ask About Poll Results. Conference handout.

Sudman, Seymour, and Norman M. Bradburn. 1987. "The Organizational Growth of Public Opinion Research in the United States." Public Opinion Quarterly 51: 17-28.

Traugott, Michael W., and Paul J. Lavrakas. 1996. The Voter’s Guide to Election Polls. Chatham, NJ: Chatham House Publishers.

SIDEBAR #1

Survey Research in Historical Perspective

The earliest counterpart of modern opinion surveys is generally traced to 1824, when the Harrisburg Pennsylvanian reported a straw poll based on the compilation of results from a number of sources, such as public meetings, militia musters, grand juries, Fourth of July picnics, and the like. Even though polls such as this were criticized as not being representative and were often inaccurate, their use continued throughout the 19th and well into the 20th centuries, due in large part to the fact that they provided information about what "the electorate" was thinking and, as such, were thought to be valuable in selling newspapers.

The first institutionalized public opinion poll using modern techniques was conducted in 1935 by Elmo Roper and was published in Fortune magazine. Later that same year the first Gallup Poll was published. One of the distinctive features of these polls was that they attempted to collect data from a representative sample of people so that the results could be generalized to some larger population. The success of scientific opinion polls led to a great increase in public opinion polls in the following years. The growth of survey research was seen in all sectors, including the federal government, non-profit organizations, academic circles, and commercial ventures.

The development of survey research was also markedly aided by advances in technology, such as the spread of telephone coverage, and the development of procedures for selecting representative samples from households with telephones. Most early surveys were conducted in-person. Beginning in the early 1970s, however, telephone coverage in the United States became almost universal, so that it was possible to reach most households by telephone. Today it is estimated that some 95% of households in the United States have telephones. This fact, together with the development of random digit dialing (RDD), produced a shift in the way that surveys are conducted. Today the largest percentage of survey research is conducted by telephone using CATI (Computer-Aided Telephone Interviewing) systems.


SIDEBAR #2

The Growth of "Pseudo-Polls"

The high demand for public opinion data has led to the growth of what some in the survey industry have labeled "pseudo-polls." They include efforts such as 1-900 call-in polls, clip-out or write-in polls, political "push polls," and internet polls, to name a few. The major problems with such efforts tend to be twofold. First, due to the way respondents are selected for these "polls," the samples are rarely representative of the larger populations they purport to represent. For example, many nightly news programs will pose questions and then ask viewers to call in and register their opinion. Those who do so are usually the viewers most interested in the topic (and they only include viewers watching that particular program at that particular time). Unfortunately, these results are often portrayed as representing the views of the "general public." Once this happens, the results become "facts" in the public domain.

A second problem with "pseudo-polls" is often their purpose. While reputable surveys are conducted to provide objective information about public attitudes and opinions, "pseudo-polls" oftentimes have a hidden motive or agenda, such as fund-raising or manipulating public attitudes. Increasingly political campaigns have turned to the use of so-called "push polls," a "telemarketing technique in which telephone calls are used to canvass potential voters, feeding them false or misleading ‘information’ about a candidate under the pretense of taking a poll to see how this ‘information’ affects voter preferences" (AAPOR, 1997). In reality, "push polls" are not "polls" at all, but simply sophisticated forms of telemarketing designed to manipulate public opinion, rather than measure it. The use of "pseudo-polls" and the representation of data from these "polls" as genuine reflections of public sentiment have been strongly condemned by professional survey research associations.


SIDEBAR #3

The Literary Digest Fiasco

One of the distinctive features of modern polling is that it attempts to collect data from a representative sample of the population. One of the first tests of these methods came during the presidential election campaign of 1936. Gallup, Roper, and others who were adopting a "scientific" approach to polling had conducted polls prior to the 1936 election that indicated that the Democratic candidate, Franklin Delano Roosevelt, would defeat the Republican, Alf Landon. This created something of a stir, because the most well-known poll during that period, conducted by the Literary Digest, had drawn the opposite conclusion. The Literary Digest, one of the largest circulation magazines of the time, had successfully predicted the presidential elections of 1920, 1924, 1928, and 1932. In fact, when Gallup criticized the Digest’s methods before the election, the editor of the Digest wrote: "Our fine statistical friend should be advised that the Digest will carry on with these old-fashion methods that have produced correct forecasts exactly one hundred percent of the time."

The Digest poll, which was based on 2.4 million mail ballots, predicted that Landon would get 57% of the vote; in reality, he got 38.5%.

What happened? Despite the phenomenal number of responses received, the sampling frame used by the Literary Digest was flawed. They had used telephone directories, automobile registration lists, and their own subscriber list in selecting people to whom they sent ballots. At that time, people who were on such lists tended to have higher incomes and were more likely to support Landon. Lower socio-economic status individuals were underrepresented in the sampling frame. This, together with a significant nonresponse rate (only one-fourth of the ballots that were mailed out were returned), led the Literary Digest to a badly mistaken conclusion.

 

SIDEBAR #4

Being a Wise Survey Research Consumer

Survey and polling data can provide decision-makers with valuable information about public attitudes and opinions on a range of important issues. When evaluating such data remember to keep the following in mind:

  1. What was the purpose and who was the sponsor of the survey? Surveys are conducted for any number of reasons, ranging from the need for accurate information to the desire to manipulate public attitudes. Familiarity with the purpose and sponsorship of a survey can assist poll consumers in evaluating the objectivity of a survey.
  2. Who was interviewed? Did the sample used adequately reflect the population whose opinions the results purport to reflect? Survey results are only applicable to the population from which the sample was drawn.
  3. How did the researchers select the people who were actually interviewed? Were the respondents chosen randomly (i.e., a probability survey) or were the interviewers or respondents allowed to choose who would be interviewed (i.e., a non-probability survey)? In general, results from a sample can only be generalized to a larger population if the sample was drawn in a random – but not haphazard – manner.
  4. How many interviews were conducted overall? How many interviews were conducted with members of particular subgroups of interest (e.g., men, teenagers, registered voters, etc.)? The potential level of error in survey results is, in part, a reflection of how many people were surveyed.
  5. What was the exact wording of each question? How questions are worded is critical to evaluating the results. The responses to surveys are a direct reflection of the specific questions asked.
  6. In what order did the questions appear? In what context were the questions asked? Minor variations in the placement of a question in a questionnaire can have significant effects on responses to that question.
  7. Do the responses reflect genuine opinions or are they simply "pseudo-opinions"? Keep in mind that respondents will almost always give you answers to your questions – even if they don’t understand or know anything about the topic.
  8. Was the analysis conducted properly? Were percentages calculated in the right direction? By focusing on a particular subgroup, survey reports can sometimes give the incorrect impression that the attitudes of this group are particularly important or distinctive when in fact the results may not be unique at all. Examining the data completely can help guard against such misuses of data.
  9. What doesn’t the report, article, or press release tell you? Sometimes important information such as sample size, target population, question wording, or even results unfavorable to the sponsoring organization are purposely not included in reports of survey data, which can lead to misperceptions about the accuracy and conclusions drawn from the survey.
  10. Who conducted the survey? Today the field of survey research has become a cottage industry with varying levels of competency and professionalism across polling organizations. As in most other professional fields, the conduct of "good" survey research requires a certain level of expertise and attention to detail.

These tips are meant to assist consumers of survey research in evaluating the validity and credibility of polling data. Problems with any one of these points need not automatically lead you to discard or ignore the results of a survey, but they should raise a "red flag" when interpreting and using the findings. Oftentimes all that is needed is a little follow-up with the sponsoring organization or the firm or individual conducting the poll. Most professionals will share some of the more technical aspects of a survey upon request. Consumers should be on their guard, however, when organizations are unable or unwilling to provide the information needed to comfortably assess the validity of a survey.


Figure 1
The Relationship Between Sample Size and Confidence Intervals

Sample Size    Confidence Interval at the 95% Level (+/- percent)
50 13.9
100 9.8
250 6.2
500 4.4
750 3.6
1,000 3.1
1,250 2.8
1,500 2.5
2,000 2.2
2,500 2.0
5,000 1.4


This article appeared in the Fall 1997 issue of the South Carolina Policy Forum magazine published by the University of South Carolina Institute for Public Service and Policy Research.

Robert Oldendick is Director, Survey Research Laboratory, Institute for Public Service and Policy Research, University of South Carolina.

Michael Link was Assistant Director of the Survey Research Laboratory at the Institute for Public Service and Policy Research.  He now works for the Research Triangle Institute in North Carolina.
