
AN OVERVIEW OF QUANTITATIVE AND QUALITATIVE DATA COLLECTION METHODS

5. DATA COLLECTION METHODS: SOME TIPS AND COMPARISONS

In the previous chapter, we identified two broad types of evaluation methodologies: quantitative and qualitative. In this section, we talk more about the debate over the relative virtues of these approaches and discuss some of the advantages and disadvantages of different types of instruments. In such a debate, two types of issues are considered: theoretical and practical.

Theoretical Issues

Most often these center on one of three topics:

• The value of the types of data

• The relative scientific rigor of the data

• Basic, underlying philosophies of evaluation

Value of the Data

Quantitative and qualitative techniques provide a tradeoff between breadth and depth, and between generalizability and targeting to specific (sometimes very limited) populations. For example, a quantitative data collection methodology such as a sample survey of high school students who participated in a special science enrichment program can yield representative and broadly generalizable information about the proportion of participants who plan to major in science when they get to college and how this proportion differs by gender. But at best, the survey can elicit only a few, often superficial reasons for this gender difference. On the other hand, separate focus groups (a qualitative technique related to a group interview) conducted with small groups of men and women students will provide many more clues about gender differences in the choice of science majors, and the extent to which the special science program changed or reinforced attitudes. The focus group technique is, however, limited in the extent to which findings apply beyond the specific individuals included in the groups.


Scientific Rigor

Data collected through quantitative methods are often believed to yield more objective and accurate information because they were collected using standardized methods, can be replicated, and, unlike qualitative data, can be analyzed using sophisticated statistical techniques. In line with these arguments, traditional wisdom has held that qualitative methods are most suitable for formative evaluations, whereas summative evaluations require “hard” (quantitative) measures to judge the ultimate value of the project.

This distinction is too simplistic. Both approaches may or may not satisfy the canons of scientific rigor. Quantitative researchers are becoming increasingly aware that some of their data may not be accurate and valid, because survey respondents may not understand the meaning of the questions to which they respond, and because people’s recall of events is often faulty. On the other hand, qualitative researchers have developed better techniques for classifying and analyzing large bodies of descriptive data. It is also increasingly recognized that all data collection, quantitative and qualitative, operates within a cultural context and is affected to some extent by the perceptions and beliefs of investigators and data collectors.

Philosophical Distinction

Researchers and scholars differ about the respective merits of the two approaches, largely because of different views about the nature of knowledge and how knowledge is best acquired. Qualitative researchers feel that there is no objective social reality, and that all knowledge is “constructed” by observers who are the product of traditions, beliefs, and the social and political environments within which they operate. Quantitative researchers, who also have abandoned naive beliefs about striving for absolute and objective truth in research, continue to adhere to the scientific model and to develop increasingly sophisticated statistical techniques to measure social phenomena.

This distinction affects the nature of research designs. According to its most orthodox practitioners, qualitative research does not start with clearly specified research questions or hypotheses to be tested; instead, questions are formulated after open-ended field research has been completed (Lofland and Lofland, 1995). This approach is difficult for program and project evaluators to adopt, since specific questions about the effectiveness of the interventions being evaluated are expected to guide the evaluation. Some researchers have suggested that a distinction be made between Qualitative work and qualitative work: Qualitative work (large Q) involves participant observation and ethnographic fieldwork, whereas qualitative work (small q) refers to open-ended data collection methods such as indepth interviews embedded in structured research (Kidder and Fine, 1987). The latter are more likely to meet NSF evaluation needs.

Practical Issues

On the practical level, four issues can affect the choice of method:

• Credibility of findings

• Staff skills

• Costs

• Time constraints

Credibility of Findings

Evaluations are designed for various audiences, including funding agencies, policymakers in governmental and private agencies, project staff and clients, researchers in academic and applied settings, and various other stakeholders. Experienced evaluators know that they often deal with skeptical audiences or stakeholders who seek to discredit findings that are too critical or not at all critical of a project’s outcomes. For this reason, the evaluation methodology itself may be rejected as unsound or weak in a given case.

The major stakeholders for NSF projects are policymakers within NSF and the federal government, state and local officials, and decisionmakers in the educational community where the project is located. In most cases, decisionmakers at the national level tend to favor quantitative information because these policymakers are accustomed to basing funding decisions on numbers and statistical indicators. On the other hand, many stakeholders in the educational community are often skeptical about statistics and “number crunching” and consider the richer data obtained through qualitative research to be more trustworthy and informative. A particular case in point is the use of traditional test results, a favorite outcome criterion for policymakers, school boards, and parents, but one that teachers and school administrators tend to discount as a poor tool for assessing true student learning.

Staff Skills

Qualitative methods, including indepth interviewing, observations, and the use of focus groups, require good staff skills and considerable supervision to yield trustworthy data. Some quantitative research methods can be mastered easily with the help of simple training manuals; this is true of small-scale, self-administered questionnaires in which most questions can be answered by yes/no checkmarks or selecting numbers on a simple scale. Large-scale, complex surveys, however, usually require more skilled personnel to design the instruments and to manage data collection and analysis.


Costs

It is difficult to generalize about the relative costs of the two methods: much depends on the amount of information needed, quality standards followed for the data collection, and the number of cases required for reliability and validity. A short survey based on a small number of cases (25-50) and consisting of a few “easy” questions would be inexpensive, but it also would provide only limited data. Even cheaper would be substituting a focus group session for a subset of 25-50 respondents; while this method might provide more “interesting” data, those data would be primarily useful for generating new hypotheses to be tested by more appropriate qualitative or quantitative methods. To obtain robust findings, the cost of data collection is bound to be high regardless of method.

Time Constraints

Similarly, data complexity and quality affect the time needed for data collection and analysis. Although technological innovations have shortened the time needed to process quantitative data, a good survey requires considerable time to create and pretest questions and to obtain high response rates. However, qualitative methods may be even more time consuming because data collection and data analysis overlap, and the process encourages the exploration of new evaluation questions. If insufficient time is allowed for evaluation, it may be necessary to curtail the amount of data to be collected or to cut short the analytic process, thereby limiting the value of the findings. For evaluations that operate under severe time constraints (for example, where budgetary decisions depend on the findings), choosing the best method can present a serious dilemma.

The debate with respect to the merits of qualitative versus quantitative methods is still ongoing in the academic community, but when it comes to the choice of methods in conducting project evaluations, a pragmatic strategy has been gaining increased support. Respected practitioners have argued for integrating the two approaches by putting together packages of the available, albeit imperfect, methods and theories, thereby minimizing bias by selecting the least biased and most appropriate method for each evaluation subtask (Shadish, 1993). Others have stressed the advantages of linking qualitative and quantitative methods when performing studies and evaluations, showing how the validity and usefulness of findings will benefit from this linkage (Miles and Huberman, 1994).

Using the Mixed-Method Approach

We feel that a strong case can be made for including qualitative elements in the great majority of evaluations of NSF projects. Most of the programs sponsored by NSF are not targeted to participants in a carefully controlled and restrictive environment, but rather to those in a complex social environment that has a bearing on the success of the project. To ignore the complexity of the background is to impoverish the evaluation. Similarly, when investigating human behavior and attitudes, it is most fruitful to use a variety of data collection methods. By using different sources and methods at various points in the evaluation process, the evaluation team can build on the strength of each type of data collection and minimize the weaknesses of any single approach. A multimethod approach to evaluation can increase both the validity and the reliability of evaluation data.

The range of possible benefits that carefully designed mixed-method designs can yield has been conceptualized by a number of evaluators. The validity of results can be strengthened by using more than one method to study the same phenomenon. This approach, called triangulation, is most often mentioned as the main advantage of the mixed-methods approach. Combining the two methods pays off in improved instrumentation for all data collection approaches and in sharpening the evaluator’s understanding of findings. A typical design might start out with a qualitative segment such as a focus group discussion alerting the evaluator to issues that should be explored in a survey of program participants, followed by the survey, which in turn is followed by indepth interviews to clarify some of the survey findings (Exhibit 12).

Exhibit 12. Example of mixed-methods design

Methodology:                Qualitative               Quantitative   Qualitative
Data Collection Approach:   Exploratory focus group   Survey         Personal interview
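As a rough illustration, the sequential design in Exhibit 12 can be written down as a simple three-phase plan, for example as sketched below. The phase names and stated purposes are illustrative assumptions drawn from the description above, not a prescribed implementation.

    # Illustrative sketch (not from the handbook): the sequential mixed-methods
    # design of Exhibit 12 expressed as a simple evaluation plan. Names and
    # purposes are assumptions based on the surrounding text.
    from typing import NamedTuple

    class Phase(NamedTuple):
        methodology: str   # "qualitative" or "quantitative"
        approach: str
        purpose: str

    mixed_method_plan = [
        Phase("qualitative", "exploratory focus group",
              "surface issues to explore in the survey"),
        Phase("quantitative", "survey of program participants",
              "measure how widely those issues apply"),
        Phase("qualitative", "indepth personal interviews",
              "clarify selected survey findings"),
    ]

    for i, phase in enumerate(mixed_method_plan, start=1):
        print(f"Phase {i}: {phase.methodology} - {phase.approach} ({phase.purpose})")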

It should be noted that triangulation, while very powerful when sources agree, can also pose problems for the analyst when different sources yield different, even contradictory, information. There is no formula for resolving such conflicts, and the best advice is to consider disagreements in the context in which they emerge. Some suggestions for resolving differences are provided in Altshuld and Witkin (2000). But this sequential approach is only one of several that evaluators might find useful. Thus, if an evaluator has identified subgroups of program participants or specific topics for which indepth information is needed, a limited qualitative data collection can be initiated while a more broad-based survey is in progress.

Mixed methods may also lead evaluators to modify or expand the adoption of data collection methods. This can occur when the use of mixed methods uncovers inconsistencies and discrepancies that should alert the evaluator to the need for re-examining data collection and analysis procedures. The philosophy guiding the suggestions outlined in this handbook can be summarized as follows:

The evaluator should attempt to obtain the most useful information to answer the critical questions about the project and, in so doing, rely on a mixed-methods approach whenever possible.

This approach reflects the growing consensus among evaluation experts that both qualitative and quantitative methods have a place in the performance of effective evaluations, be they formative or summative.

References

Altshuld, J., and Witkin, B.R. (2000). Transferring Needs into Solution Strategies. Newbury Park, CA: Sage.

Kidder, L., and Fine, M. (1987). Qualitative and Quantitative Methods: When Stories Converge. Multiple Methods in Program Evaluation. New Directions for Program Evaluation, No. 35. San Francisco, CA: Jossey-Bass.

Lofland, J., and Lofland, L.H. (1995). Analyzing Social Settings: A Guide to Qualitative Observation and Analysis. Belmont, CA: Wadsworth Publishing Company.

Miles, M.B., and Huberman, A.M. (1994). Qualitative Data Analysis, 2nd Ed. Newbury Park, CA: Sage.

Shadish, W.R. (1993). Program Evaluation: A Pluralistic Enterprise. New Directions for Program Evaluation, No. 60. San Francisco, CA: Jossey-Bass.


6. REVIEW AND COMPARISON OF SELECTED TECHNIQUES

In this section we describe and compare the most common quantitative and qualitative methods employed in project evaluations. These include surveys, indepth interviews, focus groups, observations, and tests. We also cover briefly some other less frequently used qualitative techniques. Advantages and disadvantages are summarized. For those interested in learning more about data collection methods, a list of recommended readings is provided at the end of the report. Readers may also want to consult the Online Evaluation Resource Library (OERL) web site (http://oerl.sri.com), which provides information on approaches used in NSF project evaluations, as well as reports, modules on constructing designs, survey questionnaires, and other instruments.

Surveys

Surveys are a very popular form of data collection, especially when gathering information from large groups, where standardization is important. Surveys can be constructed in many ways, but they always consist of two components: questions and responses. While sometimes evaluators choose to keep responses “open ended,” i.e., allow respondents to answer in a free-flowing narrative form, most often the “close-ended” approach, in which respondents are asked to select from a range of predetermined answers, is adopted. Open-ended responses may be difficult to code and require more time and resources to handle than close-ended choices. Responses may take the form of a rating on some scale (e.g., rate a given statement from 1 to 4 on a scale from “agree” to “disagree”), may give categories from which to choose (e.g., select from potential categories of partner institutions with which a program could be involved), or may require estimates of numbers or percentages of time in which participants might engage in an activity (e.g., the percentage of time spent on teacher-led instruction or cooperative learning). Although surveys are popularly referred to as paper-and-pencil instruments, this too is changing. Evaluators are increasingly exploring the utility of survey methods that take advantage of emerging technologies. Thus, surveys may be administered via computer-assisted calling, as e-mail attachments, and as web-based online data collection systems. Even the traditional approach of mailing surveys for self-guided response has been supplemented by using facsimile for delivery and return.

Selecting the best method for collecting surveys requires weighing a number of factors. These include the complexity of the questions, the resources available, the project schedule, etc. For example, web-based surveys are attractive for a number of reasons. First, because the data collected can be put directly into a database, the time and steps between data collection and analysis can be shortened. Second, it is possible to build in checks that keep out-of-range responses from being entered. However, at this time, unless the survey is fairly simple (no skip patterns, limited use of matrices), the technology needed to develop such surveys can require a significant resource investment. As new tools are developed for commercial use, this problem should diminish.
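As a concrete illustration of such a check, the sketch below shows one way a close-ended survey item with a predetermined 1-4 rating scale might reject out-of-range responses before they reach the database. The item wording, names, and scale labels are illustrative assumptions rather than features of any particular survey tool.

    # Illustrative sketch (not from the handbook): a close-ended survey item with
    # a 1-4 rating scale and a validation check that keeps out-of-range responses
    # from being entered into the database.
    from dataclasses import dataclass

    @dataclass
    class RatingItem:
        text: str
        scale_min: int = 1   # 1 = "agree"
        scale_max: int = 4   # 4 = "disagree"

        def is_valid(self, response: int) -> bool:
            """Accept only responses that fall on the predetermined scale."""
            return self.scale_min <= response <= self.scale_max

    # Hypothetical item and responses.
    item = RatingItem("The enrichment program increased my interest in majoring in science.")

    for answer in (3, 7):
        if item.is_valid(answer):
            print(f"Recorded response: {answer}")
        else:
            print(f"Rejected out-of-range response: {answer}")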

When to Use Surveys

Surveys are typically selected when information is to be collected from a large number of people or when answers are needed to a clearly defined set of questions. Surveys are good tools for obtaining information on a wide range of topics when indepth probing of responses is not necessary, and they are useful for both formative and summative purposes. Frequently, the same survey is used at spaced intervals of time to measure progress along some dimension or change in behavior. Exhibit 13 shows the advantages and disadvantages of surveys.

Exhibit 13. Advantages and disadvantages of surveys

Advantages:

• Good for gathering descriptive data

• Can cover a wide range of topics

• Are relatively inexpensive to use

• Can be analyzed using a variety of existing software

Disadvantages:

• Self-report may lead to biased reporting

• Data may provide a general picture but lack depth

• May not provide adequate information on context

Interviews

The use of interviews as a data collection method begins with the assumption that the participants’ perspectives are meaningful, knowable, and can be made explicit, and that their perspectives affect the success of the project. An in-person or telephone interview, rather than a paper-and-pencil survey, is selected when interpersonal contact is important and when opportunities for followup of interesting comments are desired.

Two types of interviews are used in evaluation research: structured interviews, in which a carefully worded questionnaire is administered, and indepth interviews, in which the interviewer does not follow a rigid form. In the former, the emphasis is on obtaining answers to carefully phrased questions; interviewers are trained to deviate only minimally from the question wording to ensure uniformity of interview administration. In the latter, however, the interviewers seek to encourage free and open responses, and there may be a tradeoff between comprehensive coverage of topics and indepth exploration of a more limited set of questions. Indepth interviews also encourage capturing respondents’ perceptions in their own words, a very desirable strategy in qualitative data collection. This allows the evaluator to present the meaningfulness of the experience from the respondent’s perspective. Indepth interviews are conducted with individuals or a small group of individuals.

When to Use Interviews

Interviews can be used at any stage of the evaluation process. Indepth interviews are especially useful in answering questions such as those suggested by Patton (1990):

• What does the program look and feel like to the participants? To other stakeholders?

• What do stakeholders know about the project?

• What thoughts do stakeholders knowledgeable about the program have concerning program operations, processes, and outcomes?

• What are participants’ and stakeholders’ expectations?

• What features of the project are most salient to the participants?

• What changes do participants perceive in themselves as a result of their involvement in the project?

Specific circumstances for which indepth interviews are particularly appropriate include situations involving complex subject matter, detailed information, high-status respondents, and highly sensitive subject matter. Exhibit 14 shows the advantages and disadvantages of interviews.

Exhibit 14. Advantages and disadvantages of interviews

Advantages:

• Usually yield richest data, details, new insights

• Permit face-to-face contact with respondents

• Provide opportunity to explore topics in depth

• Allow interviewer to experience the affective as well as cognitive aspects of responses

• Allow interviewer to explain or help clarify questions, increasing the likelihood of useful responses

• Allow interviewer to be flexible in administering interview to particular individuals or in particular circumstances

Disadvantages:

• Expensive and time-consuming

• Need well-qualified, highly trained interviewers

• Interviewee may distort information through recall error, selective perceptions, desire to please interviewer

• Flexibility can result in inconsistencies across interviews

• Volume of information very large; may be difficult to transcribe and reduce data

Focus Groups

Focus groups combine elements of both interviewing and participant observation. The focus group session is, indeed, an interview, not a discussion group, problem-solving session, or decision-making group. At the same time, focus groups capitalize on group dynamics. The hallmark of focus groups is the explicit use of the group interaction to generate data and insights that would be unlikely to emerge otherwise. The technique inherently allows observation of group dynamics, discussion, and firsthand insights into the respondents’ behaviors, attitudes, language, etc.

Focus groups are a gathering of 8 to 12 people who share some characteristics relevant to the evaluation. Originally used as a market research tool to investigate the appeal of various products, the focus group technique has been adopted by other fields, such as education, as a tool for data gathering on a given topic. Initially, focus groups took place in a special facility that included recording apparatus (audio and/or visual) and an attached room with a one-way mirror for observation.

There was an official recorder, who may or may not have been in the room. Participants were paid for attendance and provided with refreshments. As the focus group technique has been adopted by fields outside of marketing, some of these features, such as payment or refreshments, have sometimes been eliminated.

When to Use Focus Groups

Focus groups can be useful at both the formative and summative stages of an evaluation. They provide answers to the same types of questions as indepth interviews, except that they take place in a social context. Specific applications of the focus group method in evaluations include:

• Identifying and defining problems in project implementation

• Pretesting topics or ideas

• Identifying project strengths, weaknesses, and recommendations

• Assisting with interpretation of quantitative findings

• Obtaining perceptions of project outcomes and impacts

• Generating new ideas

Although focus groups and indepth interviews share many characteristics, they should not be used interchangeably. Factors to consider when choosing between focus groups and indepth interviews are displayed in Exhibit 15.

Observations

Observational techniques are methods by which an individual or individuals gather firsthand data on programs, processes, or behaviors being studied. They provide evaluators with an opportunity to collect data on a wide range of behaviors, to capture a great variety of interactions, and to openly explore the evaluation topic. By directly observing operations and activities, the evaluator can develop a holistic perspective, i.e., an understanding of the context within which the project operates. This may be especially important where it is not the event that is of interest, but rather how that event may fit into, or be affected by, a sequence of events. Observational approaches also allow the evaluator to learn about issues the participants or staff may be unaware of or that they are unwilling or unable to discuss candidly in an interview or focus group.


Exhibit 15. Which to use: Focus groups or indepth interviews?

• Group interaction: Use focus groups when interaction of respondents may stimulate a richer response or new and valuable thought; use interviews when group interaction is likely to be limited or nonproductive.

• Group/peer pressure: Use focus groups when group/peer pressure will be valuable in challenging the thinking of respondents and illuminating conflicting opinions; use interviews when group/peer pressure would inhibit responses and cloud the meaning of results.

• Sensitivity of subject matter: Use focus groups when the subject matter is not so sensitive that respondents will temper responses or withhold information; use interviews when the subject matter is so sensitive that respondents would be unwilling to talk openly in a group.

• Depth of individual responses: Use focus groups when the topic is such that most respondents can say all that is relevant or all that they know in less than 10 minutes; use interviews when a greater depth of response per individual is desirable, as with complex subject matter and very knowledgeable respondents.

• Data collector fatigue: Use focus groups when it is desirable to have one individual conduct the data collection and a few groups will not create fatigue or boredom for one person; use interviews when it is possible to use numerous individuals on the project and one interviewer would become fatigued or bored conducting all interviews.

• Extent of issues to be covered: Use focus groups when the volume of issues to cover is not extensive; use interviews when a greater volume of issues must be covered.

• Continuity of information: Use focus groups when a single subject area is being examined in depth and strings of behaviors are less relevant; use interviews when it is necessary to understand how attitudes and behaviors link together on an individual basis.

• Experimentation with the interview guide: Use focus groups when enough is known to establish a meaningful topic guide; use interviews when it may be necessary to develop the interview guide by altering it after each of the initial interviews.

• Observation by stakeholders: Use focus groups when it is desirable for stakeholders to hear what participants have to say; use interviews when stakeholders do not need to hear firsthand the opinions of participants.

• Logistics: Use focus groups when an acceptable number of target respondents can be assembled in one location; use interviews when respondents are geographically dispersed or not easily assembled for other reasons.

• Cost and training: Use focus groups when quick turnaround is critical and funds are limited; use interviews when quick turnaround is not critical and the budget will permit higher cost.

• Availability of qualified staff: Focus group facilitators need to be able to control and manage groups; interviewers need to be supportive and skilled listeners.


When to Use Observations

Observations can be useful during both the formative and summative phases of evaluation. For example, during the formative phase, observations can be useful in determining whether or not the project is being delivered and operated as planned. During the summative phase, observations can be used to determine whether or not the project has been successful. The technique would be especially useful in directly examining teaching methods employed by the faculty in their own classes after program participation. Exhibit 16 shows the advantages and disadvantages of observations.

Tests

Tests provide a way to assess subjects’ knowledge and capacity to apply this knowledge to new situations. Tests take many forms. They may require respondents to choose among alternatives (select a correct answer, select an incorrect answer, select the best answer), to cluster choices into like groups, to produce short answers, or to write extended responses. A question may address a single outcome of interest or lead to questions involving a number of outcome areas.

Exhibit 16. Advantages and disadvantages of observations

Advantages:

• Provide direct information about behavior of individuals and groups

• Permit evaluator to enter into and understand situation/context

• Provide good opportunities for identifying unanticipated outcomes

• Exist in natural, unstructured, and flexible setting

Disadvantages:

• Expensive and time consuming

• Need well-qualified, highly trained observers; may need to be content experts

• May affect behavior of participants

• Selective perception of observer may distort data

• Behavior or set of behaviors observed may be atypical

Tests provide information that is measured against a variety of standards. The most popular type of test has traditionally been the norm-referenced assessment. Norm-referenced tests provide information on how the target performs against a reference group or normative population. In and of themselves, such scores say nothing about how adequate the target’s performance may be, only how that performance compares with the reference group. Other assessments are constructed to determine whether or not the target has attained mastery of a skill or knowledge area. These tests, called criterion-referenced assessments, provide data on whether important skills have been attained but say far less about a subject’s standing relative to his/her peers. A variant on the criterion-referenced approach is proficiency testing. Like the criterion-referenced test, the proficiency test provides an assessment against a level of skill attainment, but it includes standards for performance at varying levels of proficiency, typically on a three- or four-point scale.
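To make the distinction concrete, the short sketch below contrasts a norm-referenced reading of a score (standing relative to a reference group, expressed here as a z-score) with a criterion-referenced reading (whether a mastery cutoff is met). The reference-group scores and the cutoff are invented for illustration, not taken from any actual test.

    # Illustrative sketch (not from the handbook): the same raw score interpreted
    # two ways. The norm-group scores and mastery cutoff below are hypothetical.
    from statistics import mean, stdev

    norm_group_scores = [52, 58, 61, 63, 66, 70, 71, 74, 78, 85]  # hypothetical reference group
    mastery_cutoff = 70                                            # hypothetical criterion

    def norm_referenced(score: float) -> float:
        """Standing relative to the reference group, expressed as a z-score."""
        return (score - mean(norm_group_scores)) / stdev(norm_group_scores)

    def criterion_referenced(score: float) -> bool:
        """Whether the score meets the mastery standard, regardless of peers."""
        return score >= mastery_cutoff

    target_score = 68
    print(f"z-score vs. norm group: {norm_referenced(target_score):.2f}")  # relative standing only
    print(f"mastery attained: {criterion_referenced(target_score)}")       # absolute standard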


