The overarching goal of this study was to develop an improved understanding of how the survey research methodology is assessed and developed within an educational setting in general, and of the use of the survey research method to determine the attitudes and behavioral intentions of students in a university setting regarding their acceptance of e-learning in particular. The first part of the study presents the breadth component, which is used to identify the differences between three important research paradigms; these are defined, compared, and contrasted with various types of research methodologies, with a particular emphasis on survey research methodology, using a selected bibliography to evaluate the methods.

Identifying Constraints to e-Learning for Rural Nigerian Students


The Breadth Component


This study is organized into three parts. The first part, the breadth component, identifies the differences between three important research paradigms, which are defined, compared, and contrasted with various types of research methodologies, with a particular emphasis on survey research methodology, using a selected bibliography to evaluate the methods. The second part of the study, the depth component, presents the respective strengths and weaknesses of the survey methodology, as well as an evaluation of data collection instruments and sampling strategies and the key steps that must be taken to ensure the successful use of the approach; this part also includes an annotated bibliography aligned to the research objectives. The final part, the application component, provides details concerning how the survey research method will be specifically used in the author’s thesis work, which seeks to determine the attitudes and behavioral intentions of students in a private university in a rural area of Nigeria regarding their acceptance of e-learning. This is accomplished by identifying a problem for the research, the research purpose, research questions, theoretical foundations of the proposed research, and the methodology used to conduct the research.

Breadth Objectives

The objectives of the Breadth Component were as follows:

1. Identify the differences between positivist, constructivist, and pragmatic research paradigms.

2. Define a wide range of commonly used quantitative and qualitative research methods in social and behavioral sciences, with a particular emphasis on survey research methodology.

3. Compare and contrast the survey research methodology against other research approaches.

Breadth Demonstration

Today, the people of Nigeria stand at an educational crossroads, with one path leading to a continuation of the lackluster status quo and the other leading to opportunities for improvement in the manner in which educational services are delivered. The former path will likely result in the country’s literacy rate remaining low, its infant mortality rate remaining high, and its people relegated to a life expectancy of less than 50 years (Nigeria, 2010). By sharp contrast, the latter path can lead to improvements in access to educational services in general, and for females in particular, for the large numbers of Nigerians who live in rural regions of the country, in ways that will contribute to their ability to gain meaningful employment and to the economic and social growth of Nigeria in the future.

According to Roffe (2004), Nigerians who live in rural regions of the country are faced with some profound and complicated challenges in overcoming the so-called “digital divide” that separates the information “haves” from the “have-nots.” Although electrification efforts have proceeded apace over the years, many parts of the country remain without reliable sources of grid-based electricity. Moreover, even assuming that an alternative energy source such as solar or wind power can be used to power Internet-enabled computers, some Nigerians may live in regions where hills or mountains interfere with the connection to one of the country’s Internet hosts needed for reliable Internet service. Assuming as well that these challenges can be overcome in a cost-effective fashion, the problem remains concerning whether Nigerians living in remote regions of the country will accept the technology and apply it for learning purposes. In this regard, Roffe emphasizes that, “People living in [rural] areas are assumed to be ‘digital poor.’ To benefit from technology, citizens will need a suite of e-skills, not just in digital literacy, but also in a range of associated key skills such as collaborative working and learning to learn” (p. 16).

Likewise, teachers will also need to develop new teaching skills to use a virtual learning environment effectively. For example, Stevens (2006) emphasizes that, “Teaching face-to-face and online are different skills and teachers have to learn to teach from one site to another. This is fundamental to the success of e-teaching. Teachers have to learn to teach collaboratively with colleagues from multiple sites and have to judge when it is appropriate to teach online and when it is appropriate to teach students in [person]. These judgments have to be defended on the basis of sound pedagogy” (p. 120).

Research on teacher preparation and adult learning highlights several factors that exemplify high-quality training of teachers for e-learning settings:

1. Subject matter must be made meaningful and understandable. Through activities and tasks, students must learn general principles to apply in authentic settings (field sites).

2. Subject matter is acquired best in environments where the knowledge and methods to be learned are modeled. Multi-media presentations generated from field sites can be used to demonstrate and analyze effective practice.

3. Online modules of evidence-based practices can serve this population well by providing practice, providing relevant examples, answering questions, and offering research citations to support the practice. Such modules can be increasingly easy to access, respond to consumers’ “need to know,” and be updated quickly as new evidence is published (West & Jones, 2007, p. 4).

Unfortunately, many of the benefits that are available to e-learners may remain out of reach even when these young Nigerian citizens are able to gain access to institutions of higher learning where modern learning tools are available, because they may lack the so-called “e-skills” needed to use e-learning tools to their best effect. It is reasonable as well to assume that students who lack these skills will hold a vastly different attitude concerning the introduction of e-learning initiatives compared to those who do have them. The advantages of e-learning in general, and for higher educational institutions that lack geographic proximity to larger urban centers in particular, are well documented, though. For example, Stevens notes that, “The growth of e-learning in schools has led to pedagogical considerations and to the development of new ways of managing knowledge that enable these institutions to assume extended roles in the regions they serve” (2006, p. 119).

Therefore, in order to formulate such a path to improvement, an appropriate research paradigm must be identified and used in an effective fashion. In this regard, Wright (2002) emphasizes that, “There is a current lack of relevance of educational research in Africa that highlights the need for paradigms that would link research better with policy and practice in education” (p. 279). Moreover, the unique nature of the Nigerian educational context requires a robust research paradigm that is capable of developing an understanding of the issues involved from an African perspective. As Wright concludes, “Any such [research] paradigm needs to be firmly rooted in the reality of a particular African educational context” (p. 279). It is in this African educational context that the review of the literature concerning the analysis and selection of an appropriate research paradigm proceeds below.

Review and Analysis

The Age of Information is characterized by incessant research of all types by people from all walks of life. Indeed, Internet “surfers” routinely search Google and other engines billions of times a day for timely answers to their academic, professional and personal questions. More formal approaches to research, though, typically involve a more systematic and rigorous approach to data collection and analysis. For instance, Leedy and Ormrod (2005) advise that, “research is a systematic process of collecting, analyzing, and interpreting information (data) in order to increase our understanding of the phenomenon about which we are interested or concerned” (p. 2).

The emphasis on a systematic approach as part of the requirements for formal research is also noted by Cohen, Manion and Morrison (2000), who point out, “Research is best conceived as the process of arriving at dependable solutions to problems through the planned and systematic collection, analysis, and interpretation of data” (p. 45). As an extension of formal research, research paradigms provide the general framework within which research can proceed in such a systematic fashion. In this regard, Olapurath (2008) reports that, “Research paradigms are coherent sets of beliefs about the nature of social reality, purpose of social science research, nature of knowledge, and research procedures and criteria, held by practicing researchers, and that guide the research they do” (p. 37).

There remains a lack of consensus concerning which research paradigm is best suited for specific purposes and some authorities even reject the categorization of research traditions into paradigm form at all. For instance, Corby notes that, “Some researchers do not see the value of classification by paradigm. The choice and construction of research approach is a technical matter reflecting the middle-range theory and intellectual reference point applied by the investigator to a research problem. Good researchers tend to pull methods out of a tool kit as they are needed” (2006, p. 54). Notwithstanding these criticisms and constraints, though, most social researchers seem to agree that classification by some type of research paradigm is a useful approach based on the need to determine which approach is best suited for a given research enterprise. In this regard, Corby concludes that, “The contested nature of research makes it impossible and unhelpful to ignore the different aims and purposes of various research projects and the methods and approaches being used to carry them out” (2006, p. 54). Therefore, the different aims and purposes of the positivist research paradigm, the constructivist research paradigm and the pragmatic research paradigm are discussed further below.

Positivist Research Paradigm

The positivist research paradigm is a quantitative-based approach that generally seeks to identify trends and patterns that can be used to formulate predictions concerning how humans behave. For instance, according to Neuman (2003), positivist social science is “an approach to social science that combines a deductive approach with precise measurement of quantitative data so researchers can discover and confirm causal laws that will permit predictions about human behavior” (p. 541). Likewise, Krauss (2005) notes that, “In the positivist paradigm, the object of study is independent of researchers; knowledge is discovered and verified through direct observations or measurements of phenomena; facts are established by taking apart a phenomenon to examine its component parts” (p. 759).

The quantitative basis of positivist research makes it appealing to some social researchers, who cite the validity and reliability that can be achieved using these methods. In this regard, Davis (1998) emphasizes that, “The quantitative (or positivist) research paradigm is based on the assumption that research is ‘value-free’ and objective. It is used to test hypotheses in a controlled environment based on validity, reliability, generalization, and replication” (p. 5). It is this assumption that the positivist research paradigm is “value-free” that has attracted criticism of the approach, with some researchers arguing that such objectivity is difficult if not impossible to achieve. Further, quantitative approaches such as positivist research may not deliver the reliability that their advocates promise. For instance, Davis adds that, “In quantitative research, the concept of reliability assumes an unchanging world, where inquiry can quite logically be replicated. In the real world, we know that change is a constant, and the social world we live in is always being constructed, therefore making replication and generalization difficult at best” (p. 5).

Despite these constraints, positivist and other quantitative-based research paradigms have been shown to be useful in such educational areas as “exercise physiology, public health, trends in recreation and leisure services, and movement analysis in dance forms” (Davis, 1998, p. 5). Social researchers who are interested in discerning broader social processes in educational settings may also elect to use a positivist methodology. For instance, according to Cohen, Manion and Morrison (2000), “Positivist researchers are more concerned to derive universal statements of general social processes rather than to provide accounts of the degree of commonality between various social settings (e.g. schools and classrooms)” (p. 109). In some ways, the positivist approach relies on both numbers as well as resources that contain numbers which can then be used for further analysis. In this regard, Lin (1998) reports that, “Positivist researchers believe that they can take information from thick description or case studies about variables and hypotheses that they then test more rigorously” (p. 162). The positivist approach contrasts sharply with other research paradigms such as the constructivist paradigm, which is discussed further below.

Constructivist Research Paradigm

In sharp contrast to the positivist research paradigm, the constructivist paradigm maintains that:

1. Knowledge is created via the meanings that humans attach to the phenomena under investigation;

2. Researchers interact with the subjects of study to obtain data;

3. Inquiry changes both researcher and subject; and,

4. Knowledge is context and time dependent (Krauss, 2005).

The constructivist research paradigm is also differentiated from the positivist research paradigm in that the latter is essentially objectivist: “There is the belief that it is possible for an observer to exteriorize the reality studied, remaining detached from it and uninvolved with it. The constructivist takes the position that the knower and the known are co-created during the inquiry” (Krauss, 2005, p. 761).

To help illustrate where the constructivist research paradigm stands in relation to other, lower-level approaches, Kincheloe (2002) draws the distinctions concerning the three levels of research cognition shown in Table 1 below, with the constructivist research paradigm situated at the top of the research cognition hierarchy.

Table 1

Three levels of research cognition

Level of Research Cognition


First level: Puzzle-solving research

First-level research revolves around the concept that puzzles are well-structured problems. All aspects necessary for a solution to a puzzle are knowable and there is a particular procedure for solving it. The role of the researcher is to learn this procedure and then to go about solving puzzles. Educational problems, thus, are viewed as puzzles for which particular solutions may be inductively agreed upon after researchers have all been exposed to a common set of empirical observations. Certainty is deemed possible because puzzles push researchers into one way of thinking — a correct pathway to a solution exists, the goal of the research act is to find it. Research as puzzle-solving does not require the consideration of alternative strategies; such a view of research often blinds the inquirer to information which does not ostensibly relate to the solution of the puzzle. The attempt to verify or refute existing educational theories is a form of puzzle-solving — the rules are all established a priori. Indeed, puzzle-solving as a mode of research has little to do with the everyday world of schooling because educational decision-making rarely presents itself in the form of puzzles. It is far more complex.

Second level

Level 2 research involves a form of meta-cognition where researchers reflect on their Level 1 research activities. Such reflection may involve the identification of mistakes and the analysis of alternative strategies and data-gathering tools in the attempt to solve the puzzle. New variables may be found which make the research process more sophisticated, and new forms of research which provide unique perspectives on the puzzle may be applied.

Third level

The third level of research draws upon our notion of critical constructivism as it opens the door to epistemological considerations. Here, researchers examine the criteria of knowing, and the certainty of knowing, asking questions about the nature of problems themselves. An important difference between the levels of research cognition involves the Level 2 analysis of the problem-solving potential of a particular research strategy and the Level 3 questioning of whether or not particular research questions allow for a solution we know for certain to be correct. Most research in educational situations involves ill-structured problems. Puzzles and ill-structured problems are different epistemologically, that is, in the ways they can be known. To argue that the act of teaching is a puzzle problem is to trivialize its multi-dimensional complexity. Level 3 teacher research with its postmodern rejection of certainty transcends the conventional view of inquiry which accepts the universal applicability of the educational knowledge base. There is a correct way, conventionalists assume, to set up a fifth grade math class. The empirical research base does not support diverse ways of teaching this subject at this level; they are not just different, they are incorrect.

Source: Kincheloe, 2002, p. 154

The foregoing levels of research cognition indicate that the constructivist research paradigm comes into play once the lower levels of research have been completed, an assumption that is consistent with Kincheloe’s guidance that, “Each level of research is necessary to the understanding of the next. Indeed, there are puzzle-like questions in education that lend themselves to empirical analysis. A form of meta-cognition is undoubtedly valuable to increasing the sophistication of such empirical questions. But such forms of research cognition do researchers little good when we begin to look at ill-defined questions such as ‘What is the relationship between school performance and social class?’ And ‘How do definitions of intelligence affect that relationship?’” (2002, p. 155).

In order to formulate timely answers to these research dilemmas, educational researchers must proceed along the continuum of research cognitions until they gain the background information they require to establish the evidence needed to proceed with the subsequent stages of inquiry (Kincheloe, 2002). Once this level of illumination has been achieved, the researcher can conduct the constructivist analyses in an informed fashion, assuming that the steps needed for this type of research paradigm are also taken into account. For example, the constructivist research paradigm maintains that sustained engagement with a study’s participants is an essential element that is required in order to establish trust and build a rapport; however, even the advocates of the constructivist research paradigm have warned researchers concerning the danger of “going native,” or, in other words, becoming overly involved with the culture of their participants (Torres & Magolda, 2002). According to these authorities, “Incorporating oneself into the culture being observed is essential, yet one needs some distance from which to render professional judgment. This involves balancing ‘falling in love’ with participants and maintaining some distance from them. Full involvement with participants, or in this case going into their stories, gaining access to intimate details of their lives, and caring about their well-being, yields rapport and understanding” (p. 475). Even here, though, there are caveats: “Researchers must also step back and see their own stories in the inquiry, the stories of the participants, as well as the larger landscape on which they all live” (Torres & Magolda, 2002, p. 475).

To date, the constructivist research paradigm has been applied to educational research in a number of ways, but there have been some caveats associated with its use in these settings (Kincheloe, 2002). For example, Kincheloe (2002) notes that, “The difference between critical constructivist research and objective traditional educational research rests on the willingness of critical constructivist researchers to reveal their allegiances, to admit their solidarities, their value structures, and the ways such orientations affect their inquiries” (p. 61). Other educational researchers suggest that the constructivist research paradigm has a great deal of value to offer but that, like other qualitative approaches, it lacks widespread acceptance among the members of the very profession that stands to gain the most from its use. For example, according to Engstrom (2000), “Our profession needs to think creatively about how to create adequate space in our literature for constructivist research efforts so the integrity of the research process is respected and the potential contributions of these works are maximized” (p. 132). A final research paradigm, one that focuses on achieving value-oriented results rather than theory, is the pragmatic research paradigm discussed further below.

Pragmatic Research Paradigms

According to Corby (2006), pragmatic research paradigms can be equated to various types of evaluations. In this context, and as the term implies, pragmatic evaluation is “research with a distinctive purpose, a focus on value — the most important purpose of evaluation is not to prove but to improve” (p. 53). The focus on improvement is not unique to the pragmatic research paradigm, of course, but the paradigm does stand apart in being less concerned with identifying specific causal relationships or discerning patterns and trends, and in avoiding rigid adherence to any one research paradigm to the exclusion of others. This point is also made by Klenke (2008), who notes, “Pragmatists share, with other anti-positivists, the view that multiple interpretations of events and different concepts and classificatory schemes can be used to describe phenomena” (p. 27).

Moreover, pragmatic researchers do not place as much emphasis on objectivity and subjectivity and the distinctions that are typically drawn between these constructs. In this regard, Klenke points out that, “While pragmatists reject an essential and fundamental distinction between objective and subjective, they can accept, for pragmatic reasons, that there are differences between facts and values and different methods of inquiry appropriate to each” (p. 27). By acknowledging these differences, pragmatic researchers can employ a wide range of methodological approaches in their research as and when they are needed. This methodological eclecticism has certainly not escaped the attention of critics of pragmatic research, but Corby suggests that, provided the researcher is forthcoming concerning the details and rationale involved in the use of different methodologies, the pragmatic research paradigm can be a valuable addition to the researcher’s repertoire: “The designation of the term ‘pragmatic’ to the methods used in research can obscure what is really happening. This does not mean that multi-method researching [or] an eclectic approach cannot take place and does not have value, provided it is done transparently” (Corby, 2006, p. 54).

Therefore, the pragmatic research paradigm, like its positivist and constructivist counterparts, is characterized by certain features that make it more appropriate for use in some settings than in others, just as all three research paradigms are marked by strengths and weaknesses that may make their use more appropriate for a given purpose. Finally, all of these research paradigms were shown to have their encamped advocates and critics, with few scholars willing to give much ground concerning the efficacy of their preferred approach compared to the others. Unfortunately, this same type of divisiveness tends to cloud the debate over whether qualitative or quantitative research methods are preferable, and why; these issues are discussed further below.

Commonly Used Quantitative and Qualitative Research Methods in Social and Behavioral Sciences

The decision concerning which research paradigm is best suited for a particular application specifically relates to what types of information are available and the intended goals of the researcher. For example, if a researcher is interested in determining the percentage increase in African-Americans living in the United States during the period from 1980 to 1990, a quantitative analysis of the census data for these periods would be most appropriate. By contrast, if a researcher were more interested in determining the gains or losses in quality-of-life indicators for this population during the same time period, a qualitative approach would be more appropriate. In some cases, researchers are able to achieve more using both qualitative and quantitative methods in their research than through the use of either approach in isolation from the other (Baines & Chansarker, 2002).
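The quantitative case described above reduces to a straightforward calculation. As a minimal illustration only, the sketch below computes a percentage increase between two population counts; the figures used are placeholders, not actual census data.

```python
# Percentage increase between two population counts.
# NOTE: the figures below are hypothetical placeholders,
# not actual U.S. census data.

def percentage_increase(earlier, later):
    """Return the percentage change from `earlier` to `later`."""
    return (later - earlier) / earlier * 100

pop_1980 = 26_500_000   # hypothetical 1980 count
pop_1990 = 30_000_000   # hypothetical 1990 count

print(f"{percentage_increase(pop_1980, pop_1990):.1f}%")  # prints 13.2%
```

The same one-line formula underlies any "percentage change over a decade" claim; the qualitative question in the paragraph above (quality-of-life gains or losses) has no comparable closed-form answer, which is the contrast the paragraph is drawing.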

Some of the more distinctive differences between qualitative and quantitative research are shown in Table 2 below.

Table 2

Differences between qualitative and quantitative research methods

Qualitative Research

1. Methods include focus groups, in-depth interviews, and reviews

2. Primarily inductive process used to formulate theory

3. More in-depth information on a few cases

4. Unstructured or semi-structured response options

5. No statistical tests

6. Can be valid and reliable: largely depends on skill and rigor of the researcher

7. Time expenditure lighter on the planning end and heavier during the analysis phase

8. Less generalizable

Quantitative Research

1. Primarily deductive process used to test pre-specified concepts, constructs, and hypotheses that make up a theory

2. Less in-depth but more breadth of information across a large number of cases

3. Fixed response options

4. Statistical tests are used for analysis

5. Can be valid and reliable: largely depends on the measurement device or instrument used

6. Time expenditure heavier on the planning phase and lighter on the analysis phase

7. More generalizable

Source: Quantitative research methods, 2010, para. 5
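To make the "statistical tests" entry in Table 2 concrete, the sketch below computes Welch's two-sample t statistic, a common test for comparing mean survey scores between two independent groups, using only the Python standard library. The function name and all data are illustrative assumptions, not figures from this study.

```python
# Illustrative Welch's t statistic for comparing mean Likert-scale scores
# from two independent groups of survey respondents.
# All response data below are invented for demonstration purposes only.
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Return Welch's t statistic for two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n - 1 denominator)
    se = (va / na + vb / nb) ** 0.5                  # standard error of the mean difference
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical 5-point Likert responses on e-learning acceptance
rural_scores = [3, 4, 2, 3, 3, 4, 2]
urban_scores = [4, 5, 4, 3, 5, 4, 4]

t_stat = welch_t(rural_scores, urban_scores)
print(round(t_stat, 2))  # prints -2.83
```

In practice a researcher would compare the statistic against a t distribution (or use a statistics package) to obtain a p-value; the point here is simply that the quantitative column of Table 2 ends in exactly this kind of fixed-response, numerically testable data.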

There are some similarities between the qualitative and quantitative research traditions as well. For instance, both are capable of being valid and reliable (Quantitative research methods, 2010). In addition, as Neuman (2003) points out, both qualitative and quantitative research methods “use several specific research techniques (e.g., survey, interview, and historical analysis), yet there is much overlap between the type of data and the style of research. Most qualitative-style researchers examine quantitative data and vice versa” (p. 16).

Quantitative Methods. Quantitative research methods rely on the use of numbers in some form to develop their findings (Neuman, 2003). Despite the growing acceptance of qualitative research methods for social research applications, many researchers continue to favor quantitative methods based on their perceived advantages of reliability and validity achieved through a scientific approach. These perceived advantages, however, have not gone unchallenged. For example, Davis points out that, “There are many fallacies associated with the quantitative paradigm. When studying human beings, is there really any research that can be truly ‘value-free’ and objective? Is it enough to study only ‘what is’ without looking at the ‘how and why?’” (p. 5).

Furthermore, just “counting the beans” may not be enough to gain a comprehensive picture of the topic being researched. In this regard, Davis also notes that, “Quantitative researchers spend years manipulating numbers and gathering surface data that have little meaning. They detach themselves from their subjects, missing valuable information that cannot be observed through ‘objective’ methods” (p. 5). Indeed, Davis suggests that quantitative validity is much more elusive than many advocates maintain. As this author emphasizes, “I have yet to see a quantitative research study in any research journal that has been done well – one that has included enough information to be replicated, that has controlled for all of the possible biases and extraneous variables that affect the results, that has provided sufficient evidence of informed consent, that has been generalized correctly to a specific target population, etc.” (Davis, 1998, p. 5). In a similar way, qualitative methods have their respective strengths and weaknesses — and their proponents and critics, and these issues are discussed further below.

Qualitative Methods. In contrast to quantitative research that uses numbers in some form, qualitative research methods rely on text, pictures, graphics and other non-quantified resources (Neuman, 2003). Despite some criticisms from the “quantitative crowd,” qualitative methods have gained increasing acceptance in recent years. For instance, Crowley (1994) notes that, “During the past two decades researchers have increasingly used qualitative research methods to access traditionally unavailable data. Far from a unified set of principles, qualitative research methods encompass a range of procedures to select from based on their suitability to the research purpose. These methods are used across the social and physical sciences” (p. 55). As with quantitative methods, there are also several methods that are available to the qualitative researcher, including the historical methodology, ethnography, phenomenology, hermeneutics, field-based case study, grounded theory as well as action research (Burton & Steane, 2004). In truth, both qualitative and quantitative research methodologies share some commonalities. For example, Davis (1998) emphasizes that, “The qualitative (or postpositivist) research paradigm explores a problem or describes a setting, a process, a social group, or a pattern of interaction. The goal of qualitative methodology is the same as quantitative methodology – to identify clear and consistent patterns of phenomena using a systematic process” (p. 5). There are some distinct differences in the framework of inquiry that is used by qualitative and quantitative researchers, though. As Krauss (2005) points out, “Many qualitative researchers operate under different epistemological assumptions from quantitative researchers. For instance, many qualitative researchers believe that the best way to understand any phenomenon is to view it in its context” (p. 75).

The context in which a phenomenon exists clearly cannot be captured in its entirety through number crunching, which is the basis for many of the criticisms directed at quantitative research by qualitative researchers: “[Qualitative researchers] see all quantification as limited in nature, looking only at one small portion of a reality that cannot be split or unitized without losing the importance of the whole phenomenon” (Krauss, 2005, p. 759). The strength of qualitative research, then, is related to its focus on the “importance of the whole phenomenon,” a focus which can contribute to the meaning-making process. For instance, according to Krauss (2005), “Qualitative research has the unique goal of facilitating the meaning-making process. The complexity of meaning in the lives of people has much to do with how meaning is attributed to different objects, people and life events” (p. 763). In this same vein, Creswell (1997) reports that, “The contours of qualitative research might be seen by looking across several perspectives shared by leading authors. Writers agree that one undertakes qualitative research in a natural setting where the researcher is an instrument of data collection who gathers words or pictures, analyzes them inductively, focuses on the meaning of participants, and describes a process that is expressive and persuasive in language” (p. 14).

Moreover, the same constraints that receive much of the criticism are viewed as being some of qualitative research’s main strengths: “The natural subjectivity of qualitative research, for which it is criticized the most, is actually its greatest strength. It is used by researchers to develop theory, describe complex social situations, gain entry into research areas that are not available to quantitative researchers, uncover rival hypotheses and unanticipated outcomes, and extend previous quantitative research” (Davis, 1998, p. 5). The ability to use the findings of one study to confirm the findings of another forms the basis of objectivity, and objectivity, Davis (1998) suggests, is achievable in qualitative studies since the objectivity relates to the data under review rather than the researcher. Although qualitative research is gaining in popularity, many peer-reviewed journals still tend to prefer to publish quantitative studies (Davis, 1998). According to Davis, “Qualitative research does not yet have the general acceptance that quantitative studies enjoy. Even at universities, dissertation committees continue to criticize systematic, well-grounded qualitative studies because they do not use numerical data or because they do not have enough people in their sample” (p. 5). These criticisms, though, are unfounded because they are based in large part on the same criteria that are applied to quantitative research methodologies, and as Smith points out, “Obviously, qualitative research cannot be judged by quantitative standards” (p. 5).

Most important, perhaps, is the fact that both quantitative and qualitative research methodologies have their specific strengths that can be brought to bear on a research topic. As Davis concludes, “Quantitative research based on the scientific method has contributed to the knowledge base in all of these areas for a long time and is certainly appropriate for some research purposes. However, that does not make it ‘better’ than qualitative research” (Davis, 1998, p. 5).

This point is also made by Torres and Magolda (2002), who report that both quantitative and qualitative methods are becoming mainstream, but that qualitative research studies are still viewed as “unscientific” by many. Indeed, Babbie (1999) notes that one constraint commonly associated with “nonscientific” qualitative research methods concerns selective observation and the involvement of the researcher’s ego, which can combine to produce findings that are subjective rather than objective. Likewise, Austin and Pinkleton (2001) emphasize that, “When research findings are objective, they unbiasedly reflect the attitudes and behaviors of study participants, regardless of the personal views of researchers or project sponsors. On the other hand, selective observation may occur when researchers selectively interpret results. When this happens, research results are worse than meaningless: they are wrong” (p. 82). Assuming that these constraints are avoided or otherwise controlled for, though, both qualitative and quantitative research methods can be used, alone or in tandem, to good effect in survey research; these issues are discussed further below.

Survey Research vs. Other Research Approaches

Human beings are enormously difficult subjects to study, especially over the long-term, accounting, perhaps, for the relatively few longitudinal studies that are conducted, particularly in educational settings. Longitudinal methods, like some other research methods, are also very expensive. In this regard, Loughborough (1999) notes that, “When longitudinal research of this kind is conducted [in organizational settings], there is often a loss of respondents, either through a lack of preparedness among some initial respondents to participate at a later stage or because of mobility within the organization or even turnover” (p. 125). It is in this area that survey research appears to have a distinct advantage. According to Malhotra (2004), “Surveys are the most flexible means of obtaining data from respondents” (p. 117). As a result of this flexibility, surveys have become an increasingly popular method of primary research collection. For example, Hopkins (1999) reports that, “The mailed questionnaire has become the most common data-gathering tool in educational and social research” (p. 52).

The use of surveys for the collection of primary data, though, may be superfluous to researchers’ specific needs. As Babbie (1990) points out, “Scientific research should not be equated with the collection and analysis of original data. In fact, some research topics can be examined through analysis of data already collected and compiled” (p. 31). In this regard, Dennis and Harris (2002) note that, “Secondary data are information that has been collected earlier for a different purpose, but which may still be useful to the research project under consideration” (p. 39). Further, Dennis and Harris add that, “Census data are a good example of secondary data, and of course the Internet can be searched by key words entered in search engines to obtain secondary data on a huge range of subjects. Finding the information needed to answer a particular research question from secondary data avoids the need to spend time and money on primary research, but the likelihood of an ideal match is remote” (2002, p. 39). Indeed, in some cases, secondary data are entirely satisfactory for the purposes of a given research project and primary research will not be needed at all. According to Boyle and Schmierbach (2004), “Our comparison with census data confirms the value of this process for researchers. Students conducting studies as part of a class are able to collect data equal in quality to data collected by professional survey research firms” (p. 374).

In other cases, though, primary data or a combination of secondary and primary data is regarded as being the gold standard or optimum approach. In this regard, Dennis and Harris point out, “Primary data are information that is being collected for the first time in order to address a specific research problem. This means that it is likely to be directly relevant to the research, unlike secondary data, which may be out of date or collected for a totally different purpose. Ideally, an effective research project should incorporate both primary and secondary data” (Dennis & Harris, 2002, p. 39).

According to the Oak Ridge Institute for Science and Education (2010), some of the strengths related to the use of survey research include the following:

1. When the survey involves a convenience sample (e.g., a mall intercept study), data can be collected and analyzed fairly quickly.

2. When the survey involves a statistically valid random sample, the results from the sample can be generalized to the entire population if the response rate is high enough.

3. Surveys can provide reliable (i.e., repeatable) direction for planning programs and messages.

4. Surveys can be anonymous, which is useful for sensitive topics.

5. Like qualitative research methods, surveys can include visual material and can be used to pretest prototypes.

6. Researchers can generalize their findings beyond their participant group.

Some of the weaknesses that are related to survey research include the following:

1. They have a limited ability to probe answers.

2. People who are willing to respond may share characteristics that do not apply to the population as a whole, creating a potential bias in the study.

3. They can be very costly (Quantitative research methods, 2010, para. 3).
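The generalizability point in item 2 above is ultimately a question of sampling error as well as response rate. As an illustrative sketch (the function name and toy numbers are my own, not drawn from the cited sources), the familiar normal-approximation margin of error for a sample proportion can be computed as follows:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5): a random sample of 384 yields roughly a +/-5-point
# margin, which is why n = 384 recurs in survey planning tables.
moe = margin_of_error(0.5, 384)   # ~0.05
```

Note that the margin shrinks only with the square root of the sample size, so doubling a sample does not halve the error.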

As to their cost, survey researchers have benefited from innovations in information and communications technologies as well. Computer-based applications and online survey services have proliferated in recent years and researchers can administer and analyze survey data faster today than ever before; in particular, the use of online surveys can help researchers gather large numbers of responses in a short amount of time in a highly cost effective fashion, with a number of online survey services offering both free and pay-based services. These online survey services also provide researchers with analysis of their survey data, including the presentation of the aggregated data in graph form (Babbie, 2009). These trends in survey administration follow other innovations in survey research over the past century or so. In this regard, Young and Ross (2000) note that, “Historically, survey research has been one of the most useful and valuable tools for obtaining information about attitudes and opinions of a particular population. The most significant advances in survey research methodology this past century were the introduction of random sampling in the 1940s and the telephone interview, which became popular in the mid-1970s” (p. 31). Eclipsing even face-to-face and telephonic interviews, though, is the growing use of computer-based and online surveys. According to Young and Ross, “The collection of data through electronic surveys is another development that may prove to be even more profound than random sampling and telephone interviewing. It has the potential to become the wave of the future in communicating and gathering information, attitudes, and opinions from a wide variety of respondents” (p. 31).

These emerging survey methods, though, are not without their constraints (Babbie, 2009). Self-administered surveys completed on computer-based applications in the presence of the researcher allow the identity of each participant to be verified, but verifying respondent identity remains a real problem when surveys are administered in online settings. The advantage of using computer-based survey applications, then, is the researcher’s “clear and constant presence in the data collection process” (Smith, MacQuarrie, Herbert, Cairns & Begley, 2004).

There are some disadvantages to this approach as well: “Having a researcher on site as a regular part of the data collection protocol might not be feasible. This is an extreme time commitment for researchers” (Smith et al., 2004, p. 13). This disadvantage is eliminated or mitigated in large part through the use of online surveys; however, although researchers can go to great lengths to help ensure that only the desired respondents participate in an online survey, there is always the chance that others will complete the survey in lieu of or in addition to the targeted population (Austin & Pinkleton, 2001). In this regard, Austin and Pinkleton caution that, “When surveys are posted on bulletin boards, it may be difficult to control who actually completes a questionnaire. Ultimately, the reliability and external validity of a survey’s results may be extremely low as a result of these limitations” (2001, p. 37).

Other constraints involved in the use of online and computer-based survey techniques relate to the so-called “digital divide,” constraints that will be especially challenging for the rural populations in Nigeria that are being targeted by this study.

Even in countries where the Internet is pervasive such as the United States, there remains a significant disparity in access between urban and rural residents (Borgida, Gangl & Jackson, 2002). Indeed, rural households in the U.S. continue to experience the lowest rate of Internet access and rural cities and towns are still behind their urban counterparts in gaining access to high-speed broadband services (Borgida et al., 2002).

Therefore, survey research that relies on computer-based or online survey approaches may be inappropriate or unviable in some rural settings. In this regard, Austin and Pinkleton (2001) emphasize that, “Perhaps most important, a sample is limited to persons with computers and/or online access. Although the use of technology is rapidly expanding, much of the population does not have computers and/or online access” (p. 37). These are particularly salient issues as they relate to the rural population of Nigeria, where Internet access may be limited and where there may be a lack of infrastructure to support the administration of computer-based or online surveys. There are some ways that this constraint can be overcome or at least addressed in an incremental fashion. While access to the technology and infrastructure that support computer-based and online surveys is important, the use of these technologies also depends on the willingness and abilities of the teachers and administrators involved, with widespread acceptance frequently requiring a top-down approach (Tao & Wepner, 2002).

As to the digital “have-nots” that may lack access to the computer-based resources needed in an e-learning environment, Brock et al. (2002) emphasize that there are a growing number of different ways that access can be improved for even the most remote regions of the world, and these methods may be useful for delivering educational services as well. For instance, rural farmers in Cote d’Ivoire are using a village cellular phone connection that allows them to monitor real-time cocoa prices that are broadcast from the Chicago commodities exchange, an innovation that has taken place in a part of the world where most people have never made a telephone call (Rischard, 2002). In this regard, Alkalimat (2001) makes the point that, “There are more phone lines in Manhattan than in all of Africa. On the other hand, what’s interesting is that new telephones in Africa are mainly wireless, cellular. In other words, the new information age has technology that enables Africans to skip over one era and create the possibility for communicating in ways that in the next decades will cause a revolution in consciousness” (p. 19).

Although the rural Nigerian university in question does not suffer from these specific types of infrastructure and access constraints, it is important to take these issues into account when developing an e-learning practicum because of the differing levels of e-skills that the students and teachers will possess by virtue of where they were born and raised in the country. By any measure, though, the nations of Africa could benefit from “skipping one era” entirely and simply moving into the next, and the continuing expansion of wireless technologies holds a great deal of promise for e-learning applications in the future.

Summary and Conclusion

The review of the relevant literature showed that the positivist, constructivist, and pragmatic research paradigms are among the predominant frameworks being used in educational settings today, and that each of these research approaches has certain strengths that can be used to good advantage provided that their corresponding weaknesses are taken into account. One of the overriding themes that emerged from the research was the fact that although qualitative and quantitative research methods are fundamentally distinct from each other in terms of their focus, these methods can be used where they are deemed most appropriate or in combination with each other to provide researchers with more robust findings, particularly when used in survey research. Finally, survey research was shown to be a cost-effective and efficient way to gather large amounts of data in a relatively short amount of time, particularly when computer-based or online surveys are used. Survey research, like other research methodologies, though, has its respective strengths and weaknesses that must be taken into account during the design, data collection, data analysis and interpretation in order to achieve successful outcomes.

Breadth References

Alkalimat, A. (2001). E-black facing up to the digital divide in higher education. Liberal Education, 87(2), 18-19.

Austin, E.W. & Pinkleton, B.E. (2001). Strategic public relations management: Planning and managing effective communication programs. Mahwah, NJ: Lawrence Erlbaum Associates.

Babbie, E. (1990). Survey research methods (2nd ed.). Belmont, CA: Wadsworth Publishing.

Babbie, E. (2009). The practice of social research (12th ed.). Belmont, CA: Wadsworth/Thomson Learning.

Borgida, E., Gangl, A., & Jackson, M.S. (2002). Civic culture meets the digital divide: The role of community electronic networks. Journal of Social Issues, 58(1), 125.

Boyle, M.P. & Schmierbach, M. (2004). Student-collected survey data: An examination of data quality and the value of survey research as a learning tool. Journalism & Mass Communication Educator, 58(4), 374-375.

Brock, T.C., Green, M.C., & Strange, J.J. (2002). Narrative impact: Social and cognitive foundations. Mahwah, NJ: Lawrence Erlbaum Associates.

Burton, S., & Steane, P. (2004). Surviving your thesis. New York: Routledge.

Baines, P. & Chansarkar, B. (2002). Introducing marketing research. New York: John Wiley & Sons.

Cohen, L., Manion, L. & Morrison, K. (2000). Research methods in education. London: Routledge Falmer.

Corby, B. (2006). Applying research in social work practice. Maidenhead, England: Open University Press.

Crowley, E.P. (1994). Using qualitative methods in special education research. Exceptionality, 5(2), 55-67.

Creswell, J.W. (1997). Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, CA: Sage Publications.

Davis, K. (1998). Could qualitative research become the ‘rule’ instead of the ‘exception’? JOPERD — The Journal of Physical Education, Recreation & Dance, 69(2), 5.

Dennis, C., & Harris, L. (2002). Marketing the e-business. London: Routledge.

Engstrom, C.M. (2000). Giving voice to critical campus issues: Qualitative research in student affairs. Journal of College Student Development, 41(1), 131-132.

Hopkins, K.D. (1999). Response rates in survey research: A meta-analysis of the effects of monetary gratuities. Journal of Experimental Education, 61(1), 52.

Kincheloe, J.L. (2002). Teachers as researchers: Qualitative inquiry as a path to empowerment. New York: RoutledgeFalmer.

Klenke, K. (2008). Qualitative research in the study of leadership. Bingley, UK: Emerald Group Publishing.

Krauss, S.E. (2005). Research paradigms and meaning making: A primer. The Qualitative Report, 10(4), 758-770.

Leedy, P.D., & Ormrod, J.E. (2001). Practical research: Planning and design (7th ed.). Upper Saddle River, NJ: Merrill/Prentice Hall.

Lin, A.C. (1998). Bridging positivist and interpretivist approaches to qualitative methods. Policy Studies Journal, 26(1), 162-163.

Loughborough, A. (1999). Research methods and organization studies. London: Routledge.

Malhotra, N.K. (2004). Marketing research: An applied orientation (4th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.

Neuman, W.L. (2003). Social research methods: Qualitative and quantitative approaches (5th ed.). New York: Allyn & Bacon.

Nigeria. (2010). U.S. government: CIA world factbook. Retrieved from https://www.

Olapurath, R.C. (2008). Organization of the firm’s activities: A pragmatic research paradigm extension of transaction cost theory. Unpublished dissertation, Purdue University. Retrieved from http://docs.lib.purdue.edu/dissertations/AAI9900242/.

Quantitative research methods. (2010). Oak Ridge Institute for Science and Education. Retrieved from http://www.orau.gov/cdcynergy/demo/Content/activeinformation/tools/toolscontent/quantitativemethods.htm.

Rischard, J.F. (2002). High noon: 20 global issues, 20 years to solve them. New York: Basic Books.

Roffe, I. (2004). Innovation and e-learning: E-business for an educational enterprise. Cardiff, Wales: University of Wales Press.

Singleton, R.A., Jr., & Straits, B.C. (2005). Approaches to social research (4th ed.). New York: Oxford University Press.

Smith, P.B., MacQuarrie, C.R., Herbert, R.J., Cairns, D.L. & Begley, L.H. (2004). Preventing data fabrication in telephone survey research. Journal of Research Administration, 35(2).

Stevens, K. (2006). Rural schools as regional centres of e-learning and the management of digital knowledge. International Journal of Education and Development using Information and Communication Technology, 2(4), 119-120.

Tao, L. & Wepner, S.B. (2002). From master teacher to master novice: Shifting responsibilities in technology-infused classrooms; when technology becomes an integral part of the classroom, teachers’ roles may change. The Reading Teacher, 55(7), 642.

Torres, V. & Magolda, M.B. (2002). The evolving role of the researcher in constructivist longitudinal studies. Journal of College Student Development, 43(4), 474-475.

West, E. & Jones, P. (2007). A framework for planning technology used in teacher education programs that serve rural communities. Rural Special Education Quarterly, 26(4), 3-4.

Wright, C.A. (2002). Collaborative action research in education (CARE) — Reflections on an innovative paradigm. International Journal of Educational Development, 8(4), 272-292.

Young, S.J. & Ross, C.M. (2000, June). Web questionnaires: A glimpse of survey research in the future. Parks & Recreation, 35(6), 30-31.

Depth Abstract

The second section of the study, the depth component, is used to describe the respective strengths and weaknesses of the survey methodology, as well as an evaluation of data collection instruments and sampling strategies, and the key steps that must be taken to ensure the successful use of the approach; this part also includes an annotated bibliography aligned to the research objectives. The objectives of this part were threefold: (a) present the strengths and weaknesses of the survey methodology; (b) evaluate data collection instruments and sampling strategies used in survey research; and (c) delineate key steps that must be taken to ensure successful use of the survey approach.

PART 2: The Depth Component


Researchers in the 21st century are faced with the same challenges that have characterized social research for the past century or more, but these challenges have become even more complicated in recent years as a result of changes in philosophical thought concerning the superior research paradigm that should be used and what sampling regimens are required in order to obtain meaningful results. Furthermore, the introduction of electronic surveying methods such as computer-based techniques and online surveys has raised some important questions concerning how survey researchers can ensure that the individuals who are completing their survey instruments are the intended ones, as well as how to improve the overall response rate for all types of survey instruments. To gain some fresh insights into these issues, the depth component was guided by the following objectives.

Depth Objectives

The objectives of this part are to:

1. Present the strengths and weaknesses of the survey methodology.

2. Evaluate data collection instruments and sampling strategies used in the survey research.

3. Delineate key steps that must be taken to ensure successful use of the survey approach.

Depth Demonstration

The depth component is demonstrated through two parts. The first part consists of a review of the relevant literature to determine how best to fortify the dissertation’s research design for measuring attitudes concerning e-learning initiatives at a rural Nigerian university and to achieve the above-stated objectives; the second part consists of an annotated bibliography of 15 peer-reviewed journal articles concerning the respective strengths and weaknesses of the survey methodology, types of data collection instruments typically used, what sampling strategies are used in survey research and some of the key steps that must be taken into account in order to ensure the successful use of this approach.

Review and Analysis

Strengths and weaknesses of the survey methodology.

One of the preeminent authorities on research methodologies reports that “survey research” is “quantitative social research in which one systematically asks many people the same questions, then records and analyzes their answers” (Neuman, 2003, p. 546). Although different methods are available, the general survey methodology is described by Loughborough (1999) as follows: “Data are collected, usually either by interview or by questionnaire, on a constellation of variables. The objective then is to examine patterns of relationship between the variables” (p. 29). Some of the most significant strengths of survey research relate to its flexibility and cost effectiveness (Neuman, 2003). Some of the most important weaknesses related to survey research involve the need to identify ways to improve response rates which are traditionally less than optimal and to design and administer a survey instrument in a fashion that is transparent and replicable by other researchers (Neuman, 2003). These respective strengths and weaknesses are discussed further below as they relate to the evaluation of data collection instruments and sampling strategies that are commonly used in survey research.

Evaluation of data collection instruments and sampling strategies used in the survey research.

The main tool used in survey research is the questionnaire (also known as a “survey”). According to Neuman (2003), a questionnaire is “a written document in survey research that has a set of questions given to respondents or used by an interviewer to ask questions and record the answers” (p. 542). Survey instruments and the types of data they collect have changed in significant ways over the past century or so. Following the failure of survey research methods to accurately predict a U.S. presidential victor during the early 20th century, representative sampling became the standard for survey researchers (Krosnick, 1999). These improved sampling methods became widely accepted, and survey researchers followed this type of standard guidance concerning sampling until relatively recently (Krosnick, 1999). According to this author, “This standard practice included not only the notion that systematic, representative sampling methods must be used, but also that high response rates must be obtained and statistical weighting procedures must be imposed to maximize representativeness” (Krosnick, 1999, p. 537). The advantages of surveys in general, and Web-based surveys in particular, relative to other data collection approaches are taken up in the discussion that follows.

In fact, one of the major drawbacks of self-administered surveys is viewed as being their traditionally poor return rate (Krosnick, 1999). For example, Krosnick emphasizes that, “One hallmark of survey research is a concern with representative sampling. Scholars have, for many years, explored various methods for generating samples representative of populations, and the family of techniques referred to as probability sampling methods do so quite well. Many notable inaccuracies of survey findings were attributable to the failure to employ such techniques” (p. 537). Based on these observations and the fact that other authorities have cited appropriate sampling methods as being essential elements in quantitative survey research, representative sampling remains the preferred approach. As Krosnick points out, “Consequently, the survey research community believes that representative sampling is essential to permit generalization from a sample to a population” (p. 537).

In addition, survey researchers have traditionally maintained that in order for a given sample to be considered representative of a larger population, the researcher must achieve a high response rate; the majority of telephone surveys, though, achieve a response rate of less than 60%, and most face-to-face surveys do only somewhat better, at around 70% (Krosnick, 1999). A return rate of 60% for mailed or hand-delivered self-administered surveys is regarded as normal (Babbie, 1990). According to Hopkins (1999), though, mailed questionnaires have a major drawback concerning their generally dismal response rates. “The proportion of the sample who do not return the questionnaire is usually large,” Hopkins notes, “and the question remains: ‘Is the experimentally accessible sample representative of the sample that was surveyed?’” (p. 52).
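These benchmark figures are simple proportions, so checking a study against them is straightforward. A minimal sketch (the counts are hypothetical; the benchmark values are the approximate norms discussed above):

```python
# Approximate response-rate norms drawn from the discussion above
# (Babbie, 1990; Krosnick, 1999); treat these as rough guides only.
BENCHMARKS = {"mail": 0.60, "telephone": 0.60, "face_to_face": 0.70}

def response_rate(returned: int, distributed: int) -> float:
    """Fraction of distributed questionnaires that were returned."""
    return returned / distributed

# Hypothetical mail-out: 240 of 400 questionnaires came back.
rate = response_rate(240, 400)            # 0.60
meets_norm = rate >= BENCHMARKS["mail"]   # True
```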

Based on the need to be able to generalize a survey study’s findings to a larger population, Krosnick suggests that researchers should reevaluate their sampling methods and goals. For instance, Krosnick writes, “In the extreme, a sample will be nearly perfectly representative of a population if a probability sampling method is used and if the response rate is 100%. But it is not necessarily true that representativeness increases monotonically with increasing response rate. Remarkably, recent research has shown that surveys with very low response rates can be more accurate than surveys with much higher response rates” (p. 537). These are important considerations because as Hopkins (1999) emphasizes, “The external validity of survey research is seriously eroded when a substantial proportion of the sample does not participate in the study” (p. 52).

Besides the fading gold standard of representative sampling, other sampling methods include convenience sampling and opportunistic sampling; opportunistic sampling is especially appropriate for research endeavors in which there is no requirement to generalize survey results to a larger population, but rather the focus is on developing comprehensive descriptions and assertions that can lead to theory (Babbie, 1990).
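The distinction between these sampling strategies is easy to see in code. A minimal sketch, with a hypothetical sampling frame and sample size of my own invention:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical sampling frame: a roster of 500 students.
population = [f"student_{i}" for i in range(1, 501)]

# Probability (simple random) sampling: every unit has an equal, known
# chance of selection, so results can be generalized to the frame.
srs = random.sample(population, 50)

# Convenience sampling: take whoever is easiest to reach (here, simply
# the first 50 on the roster); no claim of representativeness follows.
convenience = population[:50]
```

The probability sample supports inference to the whole roster; the convenience sample supports only description of the 50 students actually reached.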

As to data analysis, survey researchers today enjoy the benefit of a growing number of powerful and user-friendly data analysis software applications that can facilitate this traditionally complex aspect of the survey research approach, but the type of data analysis that is used will once again relate to the type of information that is desired and the goals of the researchers. Some typical data analysis techniques include those shown in Table 3 below.

Table 3

Typical data analysis techniques used with survey research

Type of Analysis

Description

Percentile

This type of analysis provides a measure of dispersion for one variable that indicates the percentage of cases at or below a score or point (Neuman, 2003).


Chi-Square Test

This type of analysis is used to test the hypothesis that the row and column variables are independent, without indicating strength or direction of the relationship; the Chi-Square Test procedure is used to tabulate a variable into categories and compute a chi-square statistic. This goodness-of-fit test is then used to compare the observed and expected frequencies in each category to test either that all categories contain the same proportion of values or that each category contains a user-specified proportion of values (SPSS version 11.0, 2005).

Regression analysis

Several types are available, but generally, this type of analysis is used to develop an estimation of the linear relationship between a dependent variable and one or more independent variables or covariates (SPSS version 11.0, 2005).

Frequency analysis

This type of analysis provides various statistics and graphical displays that are useful for describing many types of variables (SPSS version 11.0, 2005). Frequency analyses provide tabulations and percentages that are useful for developing descriptions for data from any distribution, particularly for those variables with ordered or unordered categories. The majority of the optional summary statistics, such as the mean and standard deviation, are based on normal theory and are appropriate for quantitative variables that have symmetric distributions. More robust statistics, such as the median, quartiles, and percentiles, are appropriate for quantitative variables that may or may not meet the assumption of normality (SPSS version 11.0, 2005).

Binomial Test

This procedure is used to compare the observed frequencies of the two categories of a dichotomous variable to the frequencies expected under a binomial distribution with a specified probability parameter (SPSS version 11.0, 2005).


One-Sample T Test

This analytical method is used to test whether the mean of a single variable differs from a specified constant (SPSS version 11.0, 2005).

The Paired-Samples T Test

This procedure is used to compare the means of two variables for a single group. It computes the differences between values of the two variables for each case and tests whether the average differs from 0 (SPSS version 11.0, 2005).


One-Way ANOVA

The analysis of variance (or “ANOVA”) is an analytical method that is used to test the null hypothesis that several group means are equal in the population, by comparing the sample variance estimated from the group means to that estimated within the groups (SPSS version 11.0, 2005). This procedure produces a one-way analysis of variance for a quantitative dependent variable by a single factor (independent) variable; the analysis of variance is used to test the hypothesis that several means are equal. This technique is an extension of the t test (SPSS version 11.0, 2005).

Sources: As indicated
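To make the hypothesis-testing procedures in Table 3 concrete, the following sketch computes four of the test statistics directly from their textbook formulas. This is an illustration only, not SPSS output: all data are invented, and significance is left to tabulated critical values rather than computed p-values.

```python
import math
import statistics

def chi_square(observed, expected):
    """Chi-square goodness-of-fit statistic: sum of (O-E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def one_sample_t(sample, mu):
    """t statistic for a single mean against a specified constant."""
    n = len(sample)
    return (statistics.mean(sample) - mu) / (statistics.stdev(sample) / math.sqrt(n))

def paired_t(first, second):
    """Paired-samples t: one-sample t on the per-case differences."""
    return one_sample_t([a - b for a, b in zip(first, second)], 0.0)

def one_way_f(groups):
    """One-way ANOVA F statistic: between-group vs. within-group variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Equal-proportions chi-square: df = 3, critical value 7.815 at alpha = .05.
chi2 = chi_square([30, 20, 25, 25], [25, 25, 25, 25])

# Hypothetical attitude scores tested against a neutral midpoint of 3.
t1 = one_sample_t([3.5, 4.0, 3.0, 4.5, 3.5, 4.0], 3.0)

# Post- vs. pre-intervention scores for the same five respondents.
t2 = paired_t([3, 4, 4, 4, 4], [2, 3, 2, 4, 3])

# Hypothetical attitude scores for three student cohorts.
f = one_way_f([[2, 3, 2, 3], [4, 4, 5, 3], [5, 4, 5, 4]])

print(round(chi2, 2), round(t1, 2), round(t2, 2), round(f, 2))  # 2.0 3.5 3.16 9.75
```

In practice a statistical package would also report degrees of freedom and exact p-values; the sketch above only shows how each statistic is formed from the raw data.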

As noted in Table 3 above, SPSS is a popular data analysis software application, but it is not cheap and there are other, less costly alternatives available as well, including those described in Table 4 below.

Table 4

Computer software for survey research

Software Package

Description

SAS

Offered by SAS, Inc., this software package is generally more comprehensive than SPSS but has no single product oriented specifically to survey research. Proc Surveyselect handles sampling (simple, stratified, clustered, multistage, and other designs), with output being a data set containing the selected units, with selection probabilities and sampling weights. A variety of survey-related statistical procedures are available, including Proc Surveymeans, Proc Surveyreg (regression), Proc Surveylogistic, and Proc Surveyfreq.


Stata

Offered by Stata, Inc., this is a comprehensive statistics package which, in the survey research area, supports statistical analysis of sampling weights; multistage designs; stratified sampling; cluster sampling; complex survey designs; poststratification analysis of non-response; and a wide variety of statistical procedures. Indeed, the Stata Survey Data Reference Manual is a valuable reference regardless of which software is used by the researcher; however, Stata Inc. does not currently feature specialized modules for survey creation, survey administration, or survey data entry.

SDA (Survey Documentation and Analysis)

SDA is software from the Computer-Assisted Survey Methods program at the University of California, Berkeley. SDA consists of a set of programs for the documentation and web-based analysis of survey data. The “Codebooks” module produces survey codebooks suitable for printing or web use, with data definitions for SAS, SPSS, Stata and DDI (XML) formats. The “Analysis” module provides basic, web-implemented statistical procedures for survey data. The “Subsetting” module has procedures for creating customized subsets of datasets. The website gives access to the demonstration SDA Archive, related documentation, and several datasets. The separate CASES software is designed for collecting survey data based on structured questionnaires, using telephone or face-to-face interviewing as well as self-administered procedures. CASES claims to be one of the most widely used packages for the purpose. CASES and SDA are not purchased but instead obtained by joining the Association for Computer-assisted Surveys (ACS), which requires a minimum one-year membership agreement and fee.

SPSS (various versions)

This vendor offers a wide range of survey research products and services. SPSS Data Entry supports creation of surveys and forms using the “SPSS Data Entry Builder” module, and data entry using the “SPSS Data Entry Station” module. SPSS SmartViewer Web Server software may be used to distribute survey results online. More advanced survey research is supported by SPSS Dimensions, which is a web-enabled survey system for creation and data collection of surveys by web, phone, scan forms, or mobile devices, along with survey coding and analysis. Central data storage in Dimensions integrates with most popular databases and operating systems. Dimensions features include support strategies for privacy protection, increasing response rates, and creation of web-based interactive reports on survey results. SPSS Complex Samples is the SPSS module for stratified, clustered, multistage, and other complex sampling. SPSS also offers online survey hosting and online survey research services.


SUDAAN

This is a specialized survey analysis software package from the Research Triangle Institute, Research Triangle Park, NC. It consists of nine analytic procedures to adjust survey data analysis for stratified, clustered, and other complex samples, including repeated measures designs. It claims to be the only broadly applicable software for analysis of correlated and weighted survey data.

IMPS and CSPro

IMPS (the Integrated Microcomputer Processing System) is freely downloadable software from the U.S. Census Bureau, designed for the major tasks in survey and census data processing: data entry, data editing, tabulation, data dissemination, statistical analysis and data capture control. CSPro (the Census and Survey Processing System) is also available, designed for entering, editing, tabulating, and disseminating data from censuses and surveys. CSPro combines the features of the Integrated Microcomputer Processing System (IMPS) and the Integrated System for Survey Analysis (ISSA), and is intended to replace both. While it can process census data, CSPro also can create user-defined data entry forms (screens) for data capture.

Source: Garson, 2009, para. 4-5
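Whichever package is adopted, the simpler descriptive procedures from Table 3, frequencies, percentile ranks, bivariate regression, and the binomial test, can also be prototyped without any of them. The following sketch uses only invented data and standard-library functions.

```python
from collections import Counter
from math import comb
import statistics

def frequency_table(responses):
    """Counts and percentages for each response category."""
    n = len(responses)
    counts = Counter(responses)
    return {v: (counts[v], 100.0 * counts[v] / n) for v in sorted(counts)}

def percentile_rank(scores, point):
    """Percentage of cases at or below `point`."""
    return 100.0 * sum(1 for s in scores if s <= point) / len(scores)

def ols(x, y):
    """Least-squares intercept and slope for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def binom_tail(k, n, p):
    """Exact probability of k or more successes in n trials at probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical Likert responses on a 1-5 scale.
likert = [5, 4, 4, 3, 5, 2, 4, 3, 5, 4]
print(frequency_table(likert))                  # e.g. category 4 -> (4, 40.0)
print(percentile_rank(likert, 3))               # 30.0: 3 of 10 cases at or below 3
print(ols([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))   # (0.0, 2.0) on toy linear data
print(round(binom_tail(14, 20, 0.5), 4))        # 14 "yes" of 20 vs. p = 0.5
print(statistics.median(likert))                # robust center for ordinal data
```

These toy functions are, of course, no substitute for a validated package on complex designs; they simply show what each procedure computes.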

Delineation of key steps that must be taken to ensure successful use of this approach.

The delineation of the key steps required for the successful use of survey research depends on the researcher’s goals and the type of information that is being sought, but there are methodological decisions that are required for any type of survey research project. For example, according to Stinson (1999), “In any survey design process, there are always a series of methodological decisions to be made” (p. 12). Until fairly recently, there were few general guidelines available for questionnaire design, with many researchers viewing survey design “as more of an art than a science” (Krosnick, 1999, p. 537). From this outdated perspective, it was possible to gain equal levels of information from respondents as long as representative sampling was used. In this regard, Krosnick emphasizes that, “There is no best way to design a question, said proponents of this view; although different phrasings or formats might yield different results, all are equally informative in providing insights into the minds of respondents” (p. 537). Today, things are different, and survey researchers have access to several timely guidelines and recommended steps concerning survey design that can be used to help develop an instrument that gathers the type of information that is desired from the desired population. Although the steps tend to differ somewhat from authority to authority, they also share commonalities that highlight the more important points involved, and these steps are discussed further below.

Designing a survey project and instrument requires a decision concerning content and population. According to Wiley and Legge (2006), “The first step in a typical survey project is to design an effective survey instrument. This means making certain key decisions: What questions to ask? How many questions to ask? Are benchmark data available? Who to survey? How frequently?” (p. 8). For this purpose, Raghunathan and Grizzle (1999) report that, like software engineers who reuse code with proven efficacy, many researchers simply “pick and choose” their questions from survey instruments with known validity and reliability. According to these authorities, “The design of surveys must balance many competing goals. Due to the financial burden of recruiting or selecting individuals for studies, many survey questionnaires are formed by pooling questions from existing survey instruments in an attempt to obtain information that can be used for a variety of purposes” (Raghunathan & Grizzle, 1999). Effective survey design must also take into account potential sources of random and systematic error (including question content and design), which include those shown in Table 5 below.
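The "pick and choose" pooling approach that Raghunathan and Grizzle describe can be sketched as selecting items, by source instrument and index, from existing validated item banks. The instrument names and item wordings below are invented placeholders, not actual published scales.

```python
# Hypothetical item banks drawn from previously validated instruments.
item_banks = {
    "instrument_a": ["The system is easy to use.",
                     "Using the system improves my performance."],
    "instrument_b": ["People who influence me think I should use the system.",
                     "I have the resources necessary to use the system."],
}

def pool_items(banks, wanted):
    """Assemble a questionnaire from (bank name, item index) selections."""
    return [banks[name][i] for name, i in wanted]

questionnaire = pool_items(item_banks,
                           [("instrument_a", 0),
                            ("instrument_b", 1),
                            ("instrument_a", 1)])
print(len(questionnaire))  # 3 pooled items
```

Keeping the provenance of each pooled item explicit, as the (bank, index) pairs do here, also makes it easier to cite the source instrument for each question later.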

Table 5

Sources of random and systematic errors in survey design



Poor survey design

Question ambiguity, which includes unclear questions and items that ask two or more questions (double-barreled statements), should be avoided. Some ‘negative’ or reverse-order questions should be included in each questionnaire to minimize any tendency for respondents simply to circle numbers without careful consideration. Reverse-order questions permit the analyst to determine whether each survey form is fit for inclusion in analysis.

Poor survey content

It is only reasonable to include questions to which respondents are qualified to respond. Questions should be limited to those areas in which respondents have direct experiences.

Source: Bedggood & Pollard, 1999, p. 129
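The reverse-order items recommended in Table 5 must be recoded before analysis so that all items point in the same direction. A minimal sketch, with invented item names and responses on a 1-5 Likert scale:

```python
def reverse_score(value, scale_max=5):
    """Recode a reverse-keyed Likert response (on 1-5: 1<->5, 2<->4)."""
    return scale_max + 1 - value

# Hypothetical responses; items q2 and q4 are reverse-keyed.
responses = {"q1": 4, "q2": 2, "q3": 5, "q4": 1}
reverse_keyed = {"q2", "q4"}

recoded = {item: reverse_score(v) if item in reverse_keyed else v
           for item, v in responses.items()}
print(recoded)  # {'q1': 4, 'q2': 4, 'q3': 5, 'q4': 5}
```

After recoding, a respondent who circled the same number for a positively keyed item and its reverse-keyed counterpart will show an inconsistency, which is exactly the fitness check Table 5 describes.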

It is also essential to establish the validity of the survey instrument which can be achieved in a number of different ways, including those shown in Table 6 below which are described in terms of their use and appropriateness in a representative educational setting.

Table 6

Methods for establishing validity of survey instruments

Validity Method

Description/Appropriateness of Use

Face validity

This method is achieved by acquiring expert consensus that the measure adequately represents a particular concept. It provides a modest starting point and merely establishes that the measure appears valid without empirical testing.

Criterion validity

This approach serves to establish that the construct behaves as expected; that the measure accurately predicts some criterion measure. For example, if the teaching objective is learning, then there should be a correlation between teacher effectiveness ratings and a learning measure. It should be noted, though, that other factors can obstruct criterion validity using student learning, because students may perform poorly or well for reasons other than the teacher’s performance. Therefore other checks for criterion validity may be necessary. For this purpose, Bedggood and Pollard recommend recording the ratings of the same instructor in different courses, noting changes in student behaviours, conducting experimental manipulations, and measuring progress on specific course objectives.

Construct validity

This method establishes that a rating is directly related to the construct it purports to represent. This involves determining whether the rating is correlated with several other measures of the same concept (convergent validity) and showing that it is not correlated with measures of a different concept (discriminant validity). Construct validity can be checked if survey ratings are supplemented with: observation by trained experts, self-evaluation / reflection, peer assessments, student work samples, and retrospective evaluations by former students. Drawing upon such sources is recommended.

Source: Bedggood & Pollard, 1999, p. 129

Depending on the research circumstances, one, two or all of these methods for establishing the validity of the survey instrument may be needed. In this regard, Bedggood and Pollard (1999) conclude that, “All three types of validity should be established to minimize systematic error. Establishing face and criterion validity alone may be insufficient” (p. 129).
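The convergent/discriminant logic of construct validity in Table 6 reduces to a pair of correlations: a rating should correlate highly with another measure of the same construct and weakly with a measure of a different one. The sketch below uses a hand-rolled Pearson correlation on invented data; the "expert observation" and "unrelated measure" variables are illustrative only.

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                           sum((b - my) ** 2 for b in y))

# Hypothetical data for five respondents.
rating     = [3, 4, 2, 5, 4]   # survey rating of the construct
expert_obs = [3, 5, 2, 5, 3]   # same construct -> expect high r (convergent)
unrelated  = [4, 2, 3, 3, 5]   # different construct -> expect low r (discriminant)

print(round(pearson(rating, expert_obs), 2))  # ≈ 0.85
print(round(pearson(rating, unrelated), 2))   # ≈ -0.04
```

In a real validity study the sample would be far larger and the supplementary measures would come from the sources Table 6 lists (trained observers, peer assessments, student work samples, and so on).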

Once these foregoing questions are answered satisfactorily and appropriate types of survey questions are identified, the next step involves designing a survey instrument that will achieve these research needs. For this purpose, Grinnell and Unrau (2005, p. 273) recommend that the following key steps shown in Table 7 below should be followed when developing survey instruments.

Table 7

Key steps to successful survey design



Survey Planning and Initial Design

1) Definition of the research problem area;

2) Definition of research questions and/or hypotheses;

3) Operational definition of variables;

4) Development of the survey design.

Development & Application of the Sampling Plan

1) Definition of the population;

2) Identification of subpopulations;

3) Detailed sampling procedures; and,

4) Selection of the sample.

Construction of Interview Schedule or Questionnaire

1) Development of questions or selection of measuring instrument;

2) Development of anticipated analysis procedures;

3) Pretest of instrument;

4) Revision of questions (as frequently and to the degree required).

Data Collection

1) Implementation of interviews, questionnaires, inventories, tests, or observation schedules;

2) Follow-ups;

3) Initial tabulation and coding.

Translation of the Data

1) Construction of category systems as necessary;

2) Technical preparation of data for analysis.

Data Analysis

1) Separate analyses of questions, individually or in groups;

2) Synthesis, interpretation of results.

Conclusions and Reporting

Various formats are available for this purpose.

A slightly different approach for survey design and administration is recommended by Neuman (2003) as shown in Table 8 below.

Table 8

Key steps to successful survey research



Step #1

Develop hypotheses.

Decide on type of survey.

Write survey questions.

Decide on response categories.

Design layout.

Step #2

Plan how to record data.

Step #3

Decide on target population.

Get sampling frame.

Decide on sample size.

Select sample.

Step #4

Locate respondents.

Administer survey.

Carefully record data.

Step #5

Enter data into computers.

Recheck all data.

Perform statistical analysis on data.

Step #6

Describe methods and findings in research report.

Present findings to others for critique and evaluation.

Source: Neuman, 2003, p. 268

Taken together, the key steps recommended by Grinnell and Unrau (2005) and Neuman (2003) agree that the survey design and development process should proceed in a step-wise fashion, beginning with a determination of what objectives are involved and which population, and how many of its members, can best serve to satisfy those objectives. The crafting of survey questions and their refinement also occupy different levels in the steps recommended by Grinnell and Unrau and Neuman, but these steps are among the first that should be accomplished from both perspectives. In reality, these are the easy parts and can be accomplished by even novice researchers if care is taken to follow the delineated steps; however, identifying appropriate research participants and successfully recruiting them for participation represents an entirely different matter. In organizational settings, recruiting a sufficient number of survey participants can be an especially daunting enterprise. In this regard, Loughborough (1999) reports that, “When the objective is to collect data from individuals about themselves, two levels of access are required: to the firm and to the individuals” (p. 105).

When survey researchers are confronted with both levels of access, they must exercise some diplomacy and tact in order to successfully navigate past the first organizational gatekeepers, who are typically senior management. According to Loughborough, “The nature of the research has to be explained to fairly senior managers before access to respondents can be granted. The administration of such research instruments is likely to involve considerable skill on the part of the investigator to be admitted to the organization and to gain the cooperation of those who will provide the data” (p. 106). Clearly, when unknown researchers approach an organization’s senior management concerning a survey of their employees, it is little wonder that there will be some reluctance on the part of the organization, particularly when there are no economic incentives involved in cooperating. As Loughborough points out, “Various parties may be suspicious about the research, albeit for different reasons” (p. 106). Survey researchers can also recruit subjects from a pool of colleagues, friends, classmates, faculty members and others in academic settings, where access might be more forthcoming. Survey recruiting advertisements placed in trade journals, special interest magazines and online forums also represent potentially valuable sources for subject recruitment (Loughborough, 1999).

Some other fundamental problems must be addressed in the design of effective surveys as well. Researchers must ensure that any relationships identified in the survey research are not attributable to other causes to the maximum extent possible. In this regard, Loughborough reports that, “The problems associated with establishing a relationship from a correlational survey design is handled by a post hoc imposition of control. In so doing, the survey researcher is providing an approximation to an experimental design, that is, ruling out alternative explanations of the postulated association” (p. 122).

The next step involved in the survey design process is included by Grinnell and Unrau (2005) but not Neuman (2003) and involves pre-testing the survey instrument. According to Krosnick (1999), “Questionnaire pretesting identifies questions that respondents have difficulty understanding or interpret differently than the researcher intended” (p. 537). Likewise, Garson (2009) emphasizes that, “Pretesting is considered an all-but-essential step in survey research. No matter how experienced the survey researcher, pretests almost invariably bring to light item ambiguities and other sources of bias and error” (para. 3). The pretesting process is also used to identify weak or misleading question content and to fine-tune the remaining questions to achieve the best possible results. In this regard, Garson also notes that:

The first pretest may have up to twice the number of items as the final, as one purpose is to identify weaker items and drop them from the survey. Items may also be dropped if the first pretest shows they exhibit too little variance to be modeled. The first pretest will also have many more probe items than the final, and respondents may even be told they are being given a pretest and their help solicited in refining the instrument (para. 4).
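Garson's point about dropping items with "too little variance to be modeled" can be operationalized as a simple screening pass over the pretest data. The item names, responses, and variance threshold below are invented for illustration.

```python
import statistics

def low_variance_items(pretest, threshold=0.25):
    """Flag pretest items whose response variance falls below a cutoff."""
    return [item for item, values in pretest.items()
            if statistics.pvariance(values) < threshold]

# Hypothetical pretest responses per item (1-5 Likert scale).
pretest = {
    "q1": [4, 5, 3, 4, 2],   # varied responses -> retained
    "q2": [5, 5, 5, 5, 4],   # near-unanimous agreement -> little variance
    "q3": [1, 3, 5, 2, 4],
}
print(low_variance_items(pretest))  # ['q2']
```

The threshold itself is a judgment call that depends on the scale and sample size; the point is simply that variance screening, like ambiguity probing, is mechanical enough to automate during pretest analysis.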

If the survey is going to be posted in an online setting of some sort (i.e., a special interest forum or an online survey service), there are some useful guidelines available for this purpose as well. For example, Young and Ross (2000) recommend the following steps for posting a completed survey online:

1. Provide survey respondents with a password that enables them exclusive access to the questionnaire. This security measure helps to maintain the integrity of the data by restricting access to the questionnaire from the general public browsing the Internet.

2. Use e-mail messages to send brief notes to respondents reminding them that their input is important to the data collection process. This method of communication creates a feeling for the respondents of more interaction with the person conducting the study. In addition, personalize email communication as much as possible by avoiding mass mailing correspondence that displays user groups, listserv, or multiple recipient addresses. Each survey participant should feel that they are receiving an individualized, personal e-mail message.

3. Design the electronic survey for individuals with minimal or no computer skills. Do not assume that all respondents know how to navigate the Web. For example, some respondents may not know how to reveal hidden responses behind a drop-down menu or how to enter text in open-ended boxes. Also, label buttons so that they clearly describe the action to be taken.

4. Use a larger font than would be appropriate for a paper questionnaire. The larger font (at least 13-point) is much easier to read when viewing the questions from a computer screen. Because of the significant differences in the resolution quality of various display monitors and different browser programs, it is recommended to limit the display line length to approximately 70 characters. Having screen pages “wrap-around” is annoying and confusing for respondents.

5. Use a textured background, colored headings, and small graphics when appropriate. By implementing these features, the questionnaire becomes more interesting and appealing.

6. Use appropriate response icons. Radio buttons are preferred for single choice selection answers; check boxes for multiple selection answers; and drop-down boxes when there are more than 10 items in a single-choice selection answer.

7. When open-ended questions are appropriate to use, the text box should be multi-lined with enough space to accommodate the maximum amount of text expected.

8. Use multiple category or section headings rather than one long questionnaire. Provide appropriate links to allow users to go to the top and bottom parts of each section, enabling the user to navigate through the questionnaire more easily than having to scroll through the entire document.

9. Link a customized thank-you page at the end of the questionnaire so that when respondents click on the “submit” icon, a brief thank-you note pops onto their screens (Young & Ross, 2000, p. 30).
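Guideline 2 above, individualized rather than mass-mailed reminders, can be sketched by building one separately addressed message per respondent. The names, addresses, and URL below are invented, and actual delivery via smtplib is omitted.

```python
from email.message import EmailMessage

def build_reminders(recipients, survey_url):
    """Build one individually addressed reminder message per respondent,
    so no recipient ever sees a list of other addresses."""
    messages = []
    for name, address in recipients:
        msg = EmailMessage()
        msg["To"] = address                      # exactly one recipient
        msg["Subject"] = "Reminder: your input matters"
        msg.set_content(
            f"Dear {name},\n\nYour response is important to this study.\n"
            f"The questionnaire is available at {survey_url}.\n")
        messages.append(msg)
    return messages

reminders = build_reminders(
    [("Ada", "ada@example.edu"), ("Chidi", "chidi@example.edu")],
    "https://survey.example.edu/elearning")
print(len(reminders), reminders[0]["To"])  # 2 ada@example.edu
```

Because each message carries a single To: header and a personalized salutation, every participant receives what reads as an individual note, which is precisely the interaction effect Young and Ross describe.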

Based on their study of response rates to Web-based surveys among college students in the United States, Mitra, Jain-Shukla, Robbins, Champion and Durant (2008) provide the following additional recommendations for posting surveys online:

1. Ample effort needs to be put towards the development of the questionnaire so that the paper and pencil version is appropriately translated to the html version. This is particularly important with respect to the creation and testing of the “skip” patterns in the questionnaire. There needs to be sufficient planning about what questions would be considered mandatory in the questionnaire.

2. It needs to be understood that in the data collection process the critical period is the first 96 hours following the broadcast of the email inviting people to participate in the study. As the data shows here, most of the data is collected within the first three days and there needs to be adequate planning to have personnel and resources available to harvest the data within this period.

3. It is important to have frequent reminders sent after the first email invitation. Given that most of the data is collected early in the process, it is best to plan on sending the reminder emails quickly after the first email.

4. It needs to be recognized that there is variability in the rate of response and response rate based on a variety of factors such as gender, school year and the technology environment of the school. The latter is particularly important since this can have a large impact on the efficiency of using web-based data collection processes. It is useful to have some sense of the technology environment of the target population (Mitra et al., 2008, p. 266).
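Monitoring Mitra et al.'s critical 96-hour window is a matter of comparing response timestamps against the invitation time. The dates and arrival times below are invented for demonstration.

```python
from datetime import datetime, timedelta

def within_window(invitation, response_times, hours=96):
    """Count responses arriving within `hours` of the invitation email."""
    cutoff = invitation + timedelta(hours=hours)
    return sum(1 for t in response_times if t <= cutoff)

# Hypothetical invitation broadcast and response arrival times.
invitation = datetime(2009, 3, 2, 9, 0)
responses = [invitation + timedelta(hours=h)
             for h in (2, 20, 50, 95, 120, 200)]

print(within_window(invitation, responses))  # 4 of 6 within 96 hours
```

Running such a tally daily during fieldwork shows when the early surge has passed and, per recommendation 3, when the reminder emails should go out.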

Following the design, questions, target population and the other steps described above, it is vitally important to include a follow-up feature to survey research in order to realize as much value from the process as possible. For this purpose, the seven-step model described in Table 9 below by Wiley and Legge (2006) can be used.

Table 9

Seven-step survey feedback and action model



Step 1: Understand results

Review survey results to understand the overall picture, including strengths and opportunities.

Step 2: Establish priorities

Select two to three key areas of focus for follow-up. Addressing too many issues based on survey findings dilutes the planning and results.

Step 3: Communicate results and priorities

Prepare and communicate an overview of the results to stakeholders.

Step 4: Clarify priorities

Find out not just what respondents think, but why they feel the way they do. Do not jump to conclusions about the results; instead, seek to understand respondent motivations and points of view.

Step 5: Generate recommendations

When communicating with administrators and teachers, brainstorm ideas to address priorities and generate recommended action items.

Step 6: Develop and implement action plans

Convert recommendations into formal action plans that include specific objectives and clear accountability.

Step 7: Monitor progress

Hold periodic reviews to ensure objectives are achieved.

Source: Wiley & Legge, 2006, p. 9

Clearly, these seven steps may require modification or fine-tuning when they are used for different purposes such as to measure the level of acceptance of e-learning methods by the population of interest herein, who, as noted above, are students attending a rural Nigerian university. Nevertheless, these key steps and follow-up steps provide a useful framework in which surveys can be designed, administered, and analyzed, and appropriate recommendations for action can be formulated.

Summary and Conclusion

The research was consistent in emphasizing that survey researchers today enjoy a growing body of evidence concerning how effective survey design can be achieved and what factors should be taken into account during the developmental process. Among these, survey question design was shown to be particularly important, and researchers must avoid the use of ambiguous or “double-barreled” questions that could result in misinterpretation by the survey respondents, creating data that may not only be misleading but essentially wrong. The research was also consistent, but not identical, in determining what steps should be followed in the survey design process, and all of the authorities reviewed noted that it is a step-by-step process that requires careful decision making throughout. Finally, there are a number of different data analysis methods available, and researchers can take advantage of the increasing number of powerful data analysis software applications (including Excel, which is typically included in most office software suites) to help them analyze, interpret and present the results of their survey research.

Depth References

Babbie, E. (1990). Survey research methods (2nd ed.). Belmont, CA: Wadsworth Publishing.


Bedggood, R.E. & Pollard, R.J. (1999). Uses and misuses of student opinion surveys in eight Australian universities. Australian Journal of Education, 43(2), 129.

Garson, G.D. (2009, August 28). Survey research. North Carolina State University. Retrieved from http://faculty.chass.ncsu.edu/garson/PA765/survey.htm

Grinnell, R.M. Jr. & Unrau, Y.A. (2005). Social work research and evaluation: Quantitative and qualitative approaches. New York: Oxford University Press.

Hopkins, K.D. (1999). Response rates in survey research: a meta-analysis of the effects of monetary gratuities. Journal of Experimental Education, 61(1), 52.

Loughborough, A. (1999). Research methods and organization studies. London: Routledge.

Mitra, A., Jain-Shukla, P., Robbins, A., Champion, H. & Durant, R. (2008). Differences in rate of response to Web-based surveys among college students. International Journal on E-Learning, 7(2), 265-266.

Neuman, W.L. (2003). Social research methods: Qualitative and quantitative approaches, 5th ed. New York: Allyn & Bacon.

Raghunathan, T.E. & Grizzle, J.E. (1999). A split questionnaire survey design. Journal of the American Statistical Association, 90(429), 54-55.

SPSS Version 11.0 for Windows (Student Version). (2005). SPSS. [DVD].

Stinson, L.L. (1999). Measuring how people spend their time: A time-use survey design. Monthly Labor Review, 122(8), 12.

Young, S.J. & Ross, C.M. (2000, June). Web questionnaires: a glimpse of survey research in the future. Parks & Recreation, 35(6), 30-31.

Annotated Bibliography

Ames, S.L., Gallaher, P.E., Sun, P. & Pearce, S. (2005). A Web-based program for coding open-ended response protocols. Behavior Research Methods, 37(3), 470-471.

Authors provide a description of a Web-based application that provides researchers with the ability to analyze participant-generated and open-ended data. Authors note that the application was developed in order to take advantage of online surveying based on its ease of use and flexibility. Authors note that this application may be of particular value to researchers who are employing the large sample sizes that are frequently needed for projects in which frequency analyses are required. The application uses a grid-based set of criteria to establish codes for participant-generated and open-ended data collected from online surveys and can be applied for scoring results from stem completion, word or picture associations, and comparable purposes in which such participant-generated responses require categorization and coding. Authors advise that they use this application for their own online surveying purposes in experimental psychology to examine substance abuse patterns derived from participant-generated responses to various verbal and nonverbal associative memory problems, but that the application is also appropriate for other research areas as well. Authors also note that the application helps improve survey reliability by providing a systematic approach to coding participant-generated responses as well as evaluating the quality of coding and interjudge reliability by researchers with little or no specific training for the purposes. Authors conclude that the coding application is helpful for survey research that uses open-ended responses in virtually any research area of interest.

Austin, T.M., Richter, R.R. & Reinking, M.F. (2008). A primer on Web surveys. Journal of Allied Health, 37(3), 180-181.

Authors report that survey research has become a widely accepted research methodology that has been facilitated through the introduction of computer-based and online survey methods. Authors also emphasize that although electronic survey methods are useful in a wide range of settings for a variety of purposes, they are not appropriate in every situation. Online surveys involve various technologies that have not been available (or required) for paper-and-pencil surveys and require special considerations involving their design, pilot testing, and response rates. Authors present the results of their empirical observations and professional experience in using Web-based surveys to illustrate some of the advantages and disadvantages of the approach, including security and confidentiality issues (they make the point that electronic surveys are particularly vulnerable to compromise and that survey data must be protected as the research progresses) as well as the special considerations that must be taken into account as they apply to this surveying approach. Authors also discuss issues such as sampling error, a “how-to” guide to writing survey questions for online media, and how to order questions to ensure that respondents answer accurately and faithfully. All in all, this was a very timely guide for researchers for identifying when Web-based surveys are most appropriate and what factors should be taken into account in the design, posting and analysis of online surveys.

Bartholomew, S. & Smith, A.D. (2006). Improving survey response rates from chief executive officers in small firms: The importance of social networks. Entrepreneurship: Theory and Practice, 30(1), 83-84.

Citing the growing popularity of survey research for small companies and entrepreneurs as well as the proliferation of social network sites such as Facebook and MySpace, authors investigate the influence of such forums on survey response rates received from small companies with regard to the impact of trade association testimonials and geographic affiliations. The results of this study found that published support from trade associations helps to improve survey response rates. Moreover, when researchers include a description of any specific social ties they may have to a small company’s region, the survey response rate is also improved. The results of this study formed the basis for authors’ recommendations for survey research that is focused on smaller enterprises. In particular, when researchers take the time to personally follow up with potential respondents who are geographically proximate to them, the response rate improves significantly in a highly cost-effective fashion. Because the response rate for surveys has traditionally been lower than needed, these recommendations represent valuable guidance for researchers who need to realize as much relevant information from their surveys as possible. Notwithstanding a number of constraints and limitations to their study, authors conclude that researchers who want to improve their survey response rates, particularly when small companies are the target of their research, should take the time needed to conduct follow-up contacts with potential respondents and to establish a degree of affiliation between the researcher and potential respondents.

Bedggood, R.E. & Pollard, R.J. (1999). Uses and misuses of student opinion surveys in eight Australian universities. Australian Journal of Education, 43(2), 129.

Authors cite the growing popularity of using survey research in educational settings and point out that the use of student opinion surveys in particular is becoming commonplace in many universities around the world to gauge student perceptions concerning various aspects of their education, including the performance of the teaching staff, with a goal of improving the quality of educational services that are provided. This study reviewed the use of student opinion surveys as well as survey design and analytical methods in eight Australian universities. Despite the availability of sound guidelines and advice concerning survey design, authors found many universities continue to use survey instruments that are poorly designed or structured based on the recommendations contained in current best practices. Because individual careers are affected by the results of student opinion surveys that are used to measure teacher performance, authors emphasize the need to ensure surveys are designed thoughtfully and follow the recommendations of survey experts depending on the goals of the particular study. In addition, authors present several valuable recommendations concerning how to achieve the level of validity required of survey researchers, the typical sources of systematic and random error introduced in the initial survey design, and ways to avoid or mitigate these sources of error. Finally, authors conclude that although student opinion surveys can play an important role in identifying current student perceptions concerning a wide range of issues, when they are used to measure teacher performance, special care must be taken in survey design and data interpretation.

Hopkins, K.D. (1999). Response rates in survey research: A meta-analysis of the effects of monetary gratuities. Journal of Experimental Education, 61(1), 52.

Author cites the notoriously low response rates traditionally received from mailed questionnaire instruments and examines the effect of including a gratuity, in the form of various bill denominations, with a mailed survey. By comparing the response rates for surveys that included a gratuity with those that did not, author found that the response rate for the gratuity-included surveys was significantly higher (by almost 20%). Based on a meta-analysis of 50 years’ worth of studies that indicated whether the survey researcher included a gratuity or not, author determined that the amount of the gratuity was a less important influence on the response rate than the fact that a gratuity was included in the first place, but that larger gratuities still resulted in higher response rates. For example, author determined that a one-dollar increase in the amount of gratuity provided increased the response rate by roughly 20%. Based on these findings, author recommends that survey researchers consider including some type of modest gratuity (author gives a dollar bill as an example, but other researchers have used lottery tickets and other comparably priced items as well), as opposed to compensation, to help improve the response rates for mailed questionnaires. Author emphasizes the need, though, to make the distinction between compensation and gratuity clear to the respondent, and notes that a promised gratuity tends to function just as well as an enclosed one.

Marbach-Ad, G. & McGinnis, J.R. (2009). Beginning mathematics teachers’ beliefs of subject matter and instructional actions documented over time. School Science and Mathematics, 109(6), 338-339.

Authors report their findings from survey research that analyzed responses from 31 novice mathematics and science teachers (K-9) who had completed a reform-based mathematics and science teacher preparation program, with a goal of comparing the responses received from beginning teachers across two separate survey administrations spanning a 4-year period. The focus of the survey research project was of less interest for the purposes of the study envisioned in this project than the surveying methods that were used. For example, authors employed the technique of including a token gratuity such as a one-dollar coin or a $2 bill in the first survey mailing as well as a $20 gratuity in their final mailing, together with a reminder letter and a duplicate copy of the survey; authors also employed telephone and email reminders as a follow-up to improve their response rates for both batches of surveys. Authors report that they experienced a return rate on their mailed surveys of 60% each, a healthy response rate they attribute to the gratuities provided as well as their diligent follow-up efforts as described above. Authors also present the results of their follow-up interviews with selected respondents, an addition to the survey component they believed allowed for more in-depth probing of the respondents’ views.

Maclin, K. & Calder, J.C. (2006). Interviewers’ ratings of data quality in household telephone surveys. North American Journal of Psychology, 8(1), 163.

The focus of this study was the quality of data collected through survey research interviewers. Authors note that the quality of such data is evaluated in a number of ways, including the level of error that is experienced, the effects of the interviewers themselves, the manner in which the survey instrument affects the type and quality of the data collected during the survey process, and the effect of respondent cognitive and motivational abilities. Authors note that a gap exists in the literature concerning interviewers’ perceptions of the quality of the interview process as well as any data that is collected. For this study, authors requested that survey research interviewers with a minimum of one year of experience provide their evaluations of the quality of data collected from a computer-assisted telephone interview of individuals who had received treatment for substance abuse. For this purpose, authors used a survey instrument designed to assess post-treatment abstinence and other psychosocial adjustment factors of individuals who had successfully completed treatment at residential alcohol and drug treatment centers. Survey questions were used to elicit information concerning current substance abuse practices, current employment status, respondent experiences with post-treatment services, and other views concerning the different treatment modalities they had completed. Here again, the focus of the study was of less interest than the telephone surveying techniques used by these researchers. Authors used a series of 13 Likert-scaled questions to gauge respondent views of their treatment, along with follow-up interview sessions with interviewers concerning their perceptions of the quality of the data collected during the multi-year research process. Authors found that a large majority (85%) of the interviewers assessed the quality of the data collected as satisfactory.

Mitra, A., Jain-Shukla, P., Robbins, A., Champion, H. & Durant, R. (2008). Differences in rate of response to Web-based surveys among college students. International Journal on E-Learning, 7(2), 265-266.

Authors point to the increasing popularity of Web-based surveys as well as the numerous benefits that can accrue to researchers who use these surveying methods. Authors also note the lack of a universal definition for Web-based surveys but suggest the defining characteristics of the approach include an Internet-based format wherein potential or already recruited respondents are provided with hyperlinks or a URL address to an online survey in an email invitation. In this study, authors identify almost 60 different types of Web-based surveys that have been used, ranging from consumer surveys to marketing and government-sponsored surveys. Authors cite the primary benefits of Web-based surveys as their cost effectiveness and ease of administration. The purpose of this review was to provide a set of best practices concerning the application and conceptual design of Web-based surveys. Authors examine the strengths and weaknesses of online surveying and report the results of such a Web-based survey that was used to analyze patterns of alcohol use among respondents attending 10 different colleges in North Carolina. The primary advantage of Web-based surveys for these types of educational research projects was the minimal costs involved. Excepting the researchers’ time and effort in survey design, which will differ from project to project, the costs of posting and administering Web-based surveys can range from absolutely free to just a few dollars, compared to average costs of almost $700 for traditional paper-and-pencil surveys.

Kline, W.B. & Farrell, C.A. (2005). Recurring manuscript problems: Recommendations for writing, training, and research. Counselor Education and Supervision, 44(3), 166-167.

Authors analyzed manuscripts that were submitted for publication in the peer-reviewed journal Counselor Education and Supervision to identify common failures and mistakes made by researchers using quantitative research methodologies. The majority of the manuscripts submitted to this journal during the 1-year period under review were survey research analyses. Not surprisingly, this category also accounted for the majority of the manuscripts that were rejected for publication, for several different reasons. Authors emphasize the need for good response rates to survey instruments and report that even when researchers are able to provide compelling reasons for lackluster response rates, quantitative survey studies with return rates of less than 50% were highly unlikely to be accepted for publication. Therefore, notwithstanding the need to obtain results that are otherwise as robust as possible, authors suggest that high response rates on survey research projects are an essential element for having the results accepted as valid. As a result, researchers are encouraged to review the relevant survey research literature to identify methods that can be used to improve survey return rates. Furthermore, researchers must pay particular attention to the sample framework they use for their surveys and include a rationale in support of this sampling approach in their studies. Such supporting rationale must include descriptions of the usefulness of the sampling approach as it applies to the guiding research questions as well as the purpose of the study, the ability of the results to be generalized further, and the overall significance of the survey findings.

Krosnick, J.A. (1999). Survey research. Annual Review of Psychology, 50, 537-567.

Author provides a comprehensive review of survey research and how it has evolved over the years, first with face-to-face and then telephone interviewing, as well as the impact of innovations in telecommunications on survey research. Author also provides an interesting discussion concerning sampling techniques and the criticisms that have been leveled at researchers who demand a representative sample but still tend to prefer certain groups for sampling because of their availability or convenience to the researcher, thereby diminishing the representativeness of the sample and the ability of the survey researcher to generalize any findings that result. Especially noteworthy was author’s explanation concerning how survey researchers may be misled by responses to survey questions that relate to socially desirable behaviors, with more positive responses being received for activities such as voting in the last presidential election than voting records confirm. In other words, the higher the level of social acceptability of the activity, the higher the positive response rate that is typically received. Even when researchers take extra steps to establish a comfort zone and a good rapport with respondents concerning the acceptability of behaviors that do not match these socially desirable behaviors, the response rates are still skewed in favor of the pro-social behavior. Finally, author discusses how survey respondents form answers to questions and the cognitive processes that may affect their responses, illustrating that different people will invest different levels of thought into formulating answers to survey questions (“answer fatigue” may cause respondents to answer haphazardly or at random); these differences should be considered when interpreting survey results, and shorter surveys would appear to be better than long ones.

Rogelberg, S.C., Spitzmueller, C., Little, I. & Reeve, C.L. (2006). Understanding response behavior to an online special topics organizational satisfaction survey. Personnel Psychology, 59(4), 903-904.

Authors point out that researchers frequently collect and analyze data from college students concerning their general attitudes, opinions and views on a wide range of topics, with the most commonly used instrument for this purpose being the survey. General opinion surveys and special topic surveys are among the more commonly used instruments. Survey research projects will inevitably encounter some members of the population being sampled who are reluctant to participate, and these survey hold-outs can adversely affect the response rate to the extent that researchers will be unable to use the survey results in a meaningful fashion because of a lack of data credibility. Low response rates also adversely affect the generalizability of the survey data that is collected. In an effort to improve survey response rates, authors describe several models that can be used to better understand why some people are willing to participate in survey research while others are not. In an organizational setting, even when respondents are able to remain anonymous (authors emphasize that it is very difficult to ensure anonymity for online research in organizational settings), respondents may be reluctant to participate in survey research based on their perception of greater psychological risk as well as a previous failing track record on the part of the surveying organization to act on the survey data it has collected. Therefore, authors conclude that analyzing previous response rates and the reasons for lack of participation can be used to improve the response rate for future survey projects.

Smith, P.B., MacQuarrie, C.R., Herbert, R.J., Cairns, D.L. & Begley, L.H. (2004). Preventing data fabrication in telephone survey research. Journal of Research Administration, 35(2).

After the authors determined that the survey company they had retained to conduct telephone surveys had jeopardized their research project by making up some of the data, these University of Prince Edward Island authors submitted this analysis to alert other researchers to the potential for data fabrication by third-party vendors and to provide several recommendations concerning how to ensure the credibility of survey data. Authors identified several inconsistencies in the data provided by the third-party vendor that led them to question the overall survey findings. Upon further investigation, authors identified several instances of the survey company fabricating the requisite data in order to satisfy the contractual requirements for 1,410 surveys. Based on their experiences and a review of approaches that can eliminate or minimize such occurrences, authors provide several alternatives to private third-party survey firms, including university-based survey centers (a preferred choice because of the emphasis on knowledge acquisition rather than a profit motive), but also some common-sense methods such as checking the references of private survey companies, including provisions in the contract that allow the researchers to renegotiate in the event of problems involving recruitment of sufficient numbers of respondents, and dividing the data-collecting process between two or more survey companies and then comparing the results to determine congruence; this last option was regarded as a more costly alternative, and authors emphasize that there will be problems inherent in trying to reconcile any discrepancies that are identified between the results provided by the different survey companies.

Thompson, E.H., Stainback, G.H. & Stovall, J.G. (2007, January). Survey says: Data to guide policy decisions — superintendents and boards can find formal research — rather than anecdotes and impressions — an advantage on key issues. School Administrator, 64(1).

Authors suggest that many educators and administrators tend to make important decisions based on their experience and intuition rather than the timely and valuable findings that survey research can provide. Authors report that by using a survey to measure taxpayer attitudes concerning a proposed tax increase to be used for construction projects in an Alabama school district, administrators were able to make an informed decision concerning how to structure their tax increase proposal to maximize its chance of approval. The report’s focus on the use of survey research in educational settings to measure stakeholder attitudes was deemed highly relevant to the purposes of the study envisioned in this project concerning surveying students in a rural Nigerian university about their attitudes toward e-learning opportunities. Authors emphasize the importance of this type of survey research in order to keep abreast of shifts in stakeholders’ attitudes but caution against using an in-house surveyor because of the expertise required to formulate survey questions that achieve the research project’s goals by avoiding ambiguity and bias in question design. Defining an appropriate population for the survey and drawing a sample are also important considerations, as is the need to determine which type of survey is best suited to the research goals. In this regard, authors note that four commonly used survey methods are personal interviews, mail surveys, telephone surveys and Web surveys; however, authors caution that the studies to date suggest that Web surveys are more appropriate for specialized populations. Authors also recommend that the data analysis be contracted to a professional survey company and that expectations in this area be made clear from the outset.

Wiley, J.W. & Legge, M. (2006). Disciplined action planning drives employee engagement. Human Resource Planning, 29(4), 8-9.

Authors are human resource management consultants who emphasize the need to design survey instruments with organizational goals in mind and to ensure that follow-up action is taken when the results of surveys become available. Failure to act on survey results translates into a dual loss: the resources required to conduct the initial and subsequent surveys, and the potential opportunities for improvement that may be missed or postponed if timely action is not taken. Authors also point out that there is no one universal best approach that can be used with equally good effect in all settings, but that surveys can serve as useful benchmarks in any environment to gauge the effectiveness of interventions and to make changes where necessary. In support of their position, authors present the results of their review of how one company used employee surveys to better align worker performance with organizational goals in ways that are analogous to surveying students concerning their attitudes about e-learning approaches. Of particular interest in this review was the important role played by the follow-up steps taken based on the recommendations developed from the survey findings and the importance of measuring the effectiveness of the actions taken in response. Also of interest was the need to act on survey results in organizational settings where respondents are watching the process with a great deal of interest to determine whether their views were considered valid and worthwhile and whether the organization’s leadership feels strongly enough about the process to validate their views through action.

Young, S.J. & Ross, C.M. (2000, June). Web questionnaires: A glimpse of survey research in the future. Parks & Recreation, 35(6), 30-31.

Although somewhat dated, authors present an informed and accurate assessment of the proliferation of online survey services and the value these resources have for survey researchers. Authors reiterate many of the same points raised by other authorities concerning electronic surveys, such as the fact that because they are anonymous, ensuring that the respondents who complete the survey are the intended respondents remains problematic and will continue to erode confidence in these methods unless and until a solution is found. Besides the useful guidelines authors provide concerning posting survey instruments online, there are some valuable insights as well about the potential for this survey approach in the future. In contrast to the highly labor-intensive and costly traditional mail and telephone survey approaches, Web-based surveys can be used to gauge the attitudes of selected populations in a flexible and cost-effective fashion. Authors also suggest that the proliferation of computers in the home will make Web-based surveys an increasingly attractive alternative for researchers of all types, but particularly in the social sciences. Authors also point out that while potential respondents can be emailed an invitation to participate in an online survey by including a hyperlink or URL to the survey, an alternative is to simply email the survey itself, either embedded in the email message or as an attachment (the date of the analysis likely accounts for the authors’ failure to mention the reluctance of many email recipients to open attachments out of fear of contracting a virus or other harmful malware).

Application Abstract

The final part of the study, the application component, provides details concerning how the survey research method will be specifically used to determine attitudes and behavioral intentions of the students in a private university in a rural area of Nigeria regarding their acceptance of e-learning techniques. This is achieved by identifying a problem for the research, the research purpose, research questions, theoretical foundations of the proposed research, and the methodology used to conduct the research.

PART 3: The Application Component


Distance learning programs are certainly not new, with correspondence courses being a good example of how teachers can deliver educational services in non-face-to-face settings. The use of technology for distance learning purposes, though, began with closed-circuit television presentations, followed by satellite and interactive television broadcasts, and has since evolved into the wide range of sophisticated online educational (“e-learning”) programs at all levels that are offered today (West & Jones, 2007). Using e-learning techniques to their maximum advantage, however, requires an assessment of the learners’ readiness levels and attitudes concerning the technology, issues that form the focus of the research project described further below.

Application Objectives

The objectives of the application component are to provide details of how the survey research methodology will be used to determine attitudes and behavioral intentions of the students in a private university in a rural area of Nigeria regarding their acceptance of e-learning. To this end, this section:

1. Identifies a problem of the research, the purpose of the research, the research questions, and the research hypotheses.

2. Presents the theoretical foundations of the proposed research model and hypotheses.

3. Explains the methodology used to conduct the research and provides an overview of the target population, data collection and analysis of the data.

Application Demonstration

The demonstration of the application component will be achieved through the design of a prototype of the proposal by identifying a problem for the research, the research purpose, research questions, research hypotheses, theoretical foundations of the proposed research, and the methodology used to conduct the research.

Review and Analysis

Prototype Design

Identification of a problem. The traditional brick-and-mortar educational institution is rapidly being supplemented by, and in some cases entirely replaced by, distance learning approaches. These trends have proven to be a double-edged sword for some educators, who are faced with a need to embrace new technologies while ensuring that they are used in ways that make the investment worthwhile. For instance, Stevens emphasizes that, “The rapid growth and educational application of the Internet has led to a challenge to traditional ways of teaching and learning at a distance that were based on paper and the postal system. E-Learning is Internet-based and does not require the degree of central control that distance educators have traditionally employed within dedicated institutions” (2006, p. 119). Despite all of the promise that e-learning holds for students at all levels around the world, it is clear that some are better prepared than others: Nigeria continues to rank low (158th in the world) in terms of Internet access while retaining an inordinately high percentage of its population living in rural regions of the country (Nigeria, 2010).

Further exacerbating the problem at hand is the overall inadequacy of the primary school system in Nigeria to satisfy the demands of students in the 21st century, who continue to be adversely affected by the relatively meager educational services that are provided and who therefore enter higher educational institutions ill equipped to achieve successful academic outcomes without effective remedial interventions. Compounding the difficulty of introducing e-learning initiatives in rural Nigerian universities is the fact that the readiness, abilities and attitudes of these young learners remain largely conjectural; to the author’s knowledge, no studies to date have directly confronted this need.

The purpose of the research. The purpose of the research envisioned in this study is to use an appropriate survey methodology to accurately determine the attitudes and behavioral intentions of the students in a private university in a rural area of Nigeria regarding their acceptance of e-learning.

The research questions. To achieve the above-stated research purpose, the research project will be guided by the following research questions:

1. What percentage of rural Nigerian university undergraduate students are computer literate?

2. What percentage of rural Nigerian university undergraduate students possess needed e-skills that are required for using e-learning methods effectively?

3. Do most Nigerian university undergraduate students prefer a traditional classroom (e.g., face-to-face) setting?

4. What cultural factors may play a role in preventing the widespread acceptance of e-learning methods in rural Nigerian universities?

The research hypotheses. It will be the working hypothesis of the research project that formed the basis of this study those rural Nigerian university undergraduate students who enter school with computer literacy and e-skills will have a more favorable attitude toward e-learning initiatives compared to their counterparts who do not.

The theoretical foundations of the proposed research model and hypotheses.

It is the theoretical foundation of the proposed research model and hypotheses that the digital divide has adversely affected the attitudes of university students who lack the requisite skills to navigate e-learning modules effectively, and that educators require an accurate assessment of their students’ readiness and abilities for e-learning programs prior to their design and implementation.

Methodology used to conduct the research

Based on the respective strengths and weaknesses of qualitative and quantitative research, a combination of these two research traditions in a single custom survey instrument was deemed a superior approach. The demographic data for the survey will include years of educational attainment, gender, age and so forth. The quantitative data concerning respondent attitudes toward e-learning initiatives will be collected using a series of Likert-scaled questions ranging from “strongly agree” to “strongly disagree,” together with a qualitative open-ended comment section. Given the limited availability of Internet access at the targeted university, the survey will be administered in a paper-and-pencil format to ensure that all respondents are comfortable with the format and can formulate their answers based on their critical thinking skills rather than having to learn a new interface.
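To make the Likert coding concrete, the following is a minimal illustrative sketch in Python. The verbal category labels and the 1-5 numeric coding are assumptions for illustration only, not the instrument’s final wording or scoring scheme.

```python
# Illustrative sketch only: coding verbal Likert answers to numeric values.
# The category labels and 5-point coding below are assumptions, not the
# study's final instrument design.

LIKERT_CODES = {
    "strongly agree": 5,
    "agree": 4,
    "neutral": 3,
    "disagree": 2,
    "strongly disagree": 1,
}

def code_response(answer: str) -> int:
    """Map a verbal Likert answer to its numeric code (5 = strongly agree)."""
    return LIKERT_CODES[answer.strip().lower()]

def score_respondent(answers: list[str]) -> float:
    """Average a respondent's coded items into a single attitude score."""
    codes = [code_response(a) for a in answers]
    return sum(codes) / len(codes)

# One hypothetical respondent's answers to three attitude items:
print(score_respondent(["Strongly agree", "Agree", "Neutral"]))  # 4.0
```

Coding each item numerically in this way is what allows the attitude responses to be treated quantitatively, while the open-ended comment section remains qualitative.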

Overview of the target population, data collection and analysis of the data

The target population for the survey will be all freshmen entering the rural Nigerian university in question during school year 2010-2011. The data collection will take place during normal school hours and the researcher will enlist the assistance of two or three research assistants to assist in the administration of the survey; the research assistants will be provided with instructions from the principal researcher concerning survey administration methods. The quantifiable data that results from the survey administration will be analyzed using SPSS Version 11.0 to develop relevant frequency analyses, means and standard deviations. The statistical data will be presented in tabular and graphic form for illustration purposes, and will be interpreted in a narrative fashion in the data analysis chapter. The qualitative data analysis will include reporting all comments received in the open-ended comment section verbatim as well as identifying key metaphors and themes that may be involved in these open-ended responses.
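As a rough sketch of the planned descriptive statistics (frequency counts, means, and standard deviations), the following Python fragment works through hypothetical coded responses for a single survey item. The actual analysis will be performed in SPSS; the scores shown here are invented for demonstration.

```python
# Illustrative sketch of the planned descriptive analysis (frequencies,
# means, standard deviations); the real study will use SPSS, and these
# coded Likert scores (1-5) are invented for demonstration.
from collections import Counter
from statistics import mean, stdev

item_scores = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]  # hypothetical responses to one item

frequencies = Counter(item_scores)  # how often each code was chosen
item_mean = mean(item_scores)       # central tendency of the item
item_sd = stdev(item_scores)        # sample standard deviation

print(dict(sorted(frequencies.items())))      # {2: 1, 3: 2, 4: 4, 5: 3}
print(round(item_mean, 2), round(item_sd, 2)) # 3.9 0.99
```

Note that `statistics.stdev` computes the sample standard deviation (n-1 denominator), which matches the default used by SPSS descriptive procedures.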

Application References

Roffe, I. (2004). Innovation and e-learning: E-business for an educational enterprise. Cardiff, Wales: University of Wales Press.

Stevens, K. (2006). Rural schools as regional centres of e-learning and the management of digital knowledge. International Journal of Education and Development using Information and Communication Technology, 2(4), 119-120.

West, E. & Jones, P. (2007). A framework for planning technology used in teacher education programs that serve rural communities. Rural Special Education Quarterly, 26(4), 3-4.