On Becoming a Pragmatic Researcher: The Importance of Combining Quantitative and
Qualitative Research Methodologies
Anthony J. Onwuegbuzie (University of South Florida) & Nancy L. Leech (University of
Colorado at Denver and Health Sciences Center)
International Journal of Social Research Methodology, 8(5), 375–387. DOI:
10.1080/13645570500402447
Abstract

The last 100 years have witnessed a fervent debate in the USA about quantitative and
qualitative research paradigms. Unfortunately, this has led to a great divide between quantita-
tive and qualitative researchers, who often view themselves as in competition with each
other. Clearly, this polarization has promoted purists, namely, researchers who restrict
themselves exclusively either to quantitative or to qualitative research methods. Mono-
method research is the biggest threat to the advancement of the social sciences. Indeed, as
long as we stay polarized in research, how can we expect stakeholders who rely on our
research findings to take our work seriously? Thus, the purpose of this paper is to explore
how the debate between quantitative and qualitative research is divisive and, hence,
counterproductive for advancing the social and behavioural science field. This paper
advocates that all graduate students learn to utilize and to appreciate both quantitative
and qualitative research. In so doing, students will develop into what we term pragmatic
researchers.
Introduction
Throughout the 20th century, social and behavioural science researchers in the USA
witnessed a great divide between two opposing camps: positivists on one side and
interpretivists on the other
side. Interestingly, as noted by Sechrest and Sidani (1995), it is only in the social and
behavioural sciences that debate persists over which research methods
should be used by researchers. Thus, the purpose of this paper is to explore how the
debate between quantitative and qualitative research is divisive and, hence, counterproductive
for advancing the social and behavioural science field. Instead, we advocate that all
graduate students learn to utilize and to appreciate both quantitative and qualitative
research. In so doing, students will develop into what we term pragmatic researchers.
believing that ‘epistemological purity doesn’t get research done’ (Miles & Huberman,
1984, p. 21). In any case, researchers who subscribe to epistemological purity disregard
the fact that research methodologies are merely tools that are designed to aid our
understanding of the world.
Moreover, although in the natural sciences many properties of objects can be meas-
ured with near-perfect reliability, in the social sciences the vast majority of measures
yield scores that are, to some degree, unreliable. This is because constructs of interest
in the social science fields typically represent abstractions (e.g. personality, achieve-
ment, intelligence, motivation, locus of control) that must be measured indirectly
(Onwuegbuzie & Daniel, 2002). Failure to attain 100% score reliability implies meas-
urement error which, in turn, introduces subjectivity into any interpretations. For
example, the vast majority of quantitative researchers do not provide reliability
estimates for their own data, with the percentage of studies failing to do so ranging
from 64.4% (Vacha-Haase, Ness, Nilsson, & Reetz, 1999) to 86.9% (Vacha-Haase, 1998).
This unsound research prac-
tice is surprising, considering that good standards for the reporting of findings call for
researchers to provide reliability estimates and standard errors for ‘each total score,
subscore, or combination of scores that is to be interpreted’ (American Educational
Research Association, American Psychological Association, & National Council on
Measurement in Education, 1999, p. 31). According to Thompson and Vacha-Haase
(2000), the lack of reporting sample-specific score reliability estimates stems, in part,
from a failure to realize that reliability is not a characteristic of an instrument, but is a
characteristic of scores. Because every sample yields a unique set of scores for a partic-
ular instrument, and because every set of scores reveals a unique score reliability coef-
ficient to a sufficient number of decimal places, it cannot be assumed that every set of
scores will yield equal or even similar score reliability coefficients. Consistent with this
contention, several researchers (e.g. Yin & Fan, 2000) have documented examples of
score reliability estimates that have varied significantly across different administrations
of the same instrument. Therefore, reporting findings without also delineating
the associated score reliability coefficients builds in subjectivity to subsequent
interpretations.
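The practical upshot is that score reliability should be recomputed for each sample. As a minimal illustration of the point, the following Python sketch (entirely synthetic data; the 10-item scale and noise levels are invented, not drawn from any study cited here) computes Cronbach's alpha for two samples completing the same instrument and shows the coefficient shifting with the sample:

import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    # alpha = (k/(k-1)) * (1 - sum of item variances / variance of total scores)
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)

def simulate_sample(n: int, noise_sd: float) -> np.ndarray:
    # One latent trait per respondent; each of 10 items measures it with error.
    trait = rng.normal(size=(n, 1))
    return trait + rng.normal(scale=noise_sd, size=(n, 10))

# Same 10-item instrument, two different samples: alpha differs.
print(round(cronbach_alpha(simulate_sample(200, noise_sd=1.0)), 3))  # ~0.91
print(round(cronbach_alpha(simulate_sample(200, noise_sd=2.0)), 3))  # ~0.71

Here the difference is driven by the samples rather than by the items, which is precisely why Thompson and Vacha-Haase insist that reliability characterizes scores.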
Similarly, the finding that ‘the average [hypothesized] power of null hypothesis
significance tests in typical studies and research literature is in the .40 to .60 range
…[with] .50 as a rough average …[indicates a] level of accuracy [that] is so low that it
could be achieved just by flipping a (unbiased) coin!’ (Schmidt & Hunter, 1997, p. 40)
suggests that ‘at least some controversies in the social and behavioral sciences may be
artifactual in nature’ (Rossi, 1997, p. 178). Thus, low power, which is characteristic of
the majority of quantitative research studies, makes the claim of complete objectivity
suspect at best. Indeed, in the social science field, at least, the techniques used by
positivists are no more inherently scientific than are the procedures utilized by
interpretivists.
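Schmidt and Hunter's coin-flip analogy can be checked directly by simulation. The sketch below is our illustration, not their analysis: the per-group sample size of 30, Cohen's d of 0.5, and alpha of .05 are hypothetical but typical values, and together they yield power close to the '.50 rough average' in the quotation:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulated_power(n_per_group: int, effect_size: float,
                    alpha: float = 0.05, reps: int = 10_000) -> float:
    # Fraction of simulated studies in which a two-sample t-test rejects H0.
    rejections = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect_size, 1.0, n_per_group)
        if stats.ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / reps

# A medium effect (d = 0.5) with 30 participants per group: power lands
# near .47, essentially the unbiased coin described above.
print(simulated_power(30, 0.5))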
Interpretivists also are not safe from criticism. In particular, their claim that multi-
ple, contradictory, but valid accounts of the same phenomenon always exist is extremely
misleading, inasmuch as it leads many qualitative researchers to adopt an ‘anything
goes’ relativist attitude, thereby not paying due attention to providing an adequate
rationale for interpretations of their data. That is, many qualitative methods of analysis
‘often remain private and unavailable for public inspection’ (Constas, 1992, p. 254).
Yet, without standards, how do we know whether what we know is trustworthy?
analysed to obtain meta-themes that subsume the original themes, thereby describing
the relationship among these themes. As an example, Nolan and Ryan (2000) asked 59
undergraduate students, comprising 29 men and 30 women, to describe their most
memorable horror film. The researchers identified the 45 most prevalent adjectives,
nouns and verbs used to describe these films. Next, they constructed a 45(word)-by-
59(person) matrix, whereby the entries in each cell indicated whether each participant
had mentioned the key word in her/his description. The investigators also developed a
59(person)-by-59(person) matrix of students that stemmed from the co-occurrence of
the words in their descriptions. These matrices revealed that women were more likely
to use words that suggested they were afraid of spiritual possession and deceitful inti-
macy, whereas men were more likely to use words that suggested fear of rural people
and locations. As stated by Ryan and Bernard, such examples make ‘abundantly clear
the value of turning qualitative data into quantitative data: Doing so can produce infor-
mation that engenders deeper interpretations of the meanings in the original corpus of
qualitative data. Just as in any mass of numbers, it is hard to see patterns in words
unless one first does some kind of data reduction’ (2000, p. 777). Similarly, with respect
to quantitative-based research, the collection, analysis and interpretation of qualitative
data can aid the interpretation of statistically significant, practically significant, clini-
cally significant and economically significant findings (Onwuegbuzie & Leech, 2004).
Additionally, the popularization of complex multivariate analyses (e.g. structural equa-
tion modelling and hierarchical linear modelling), coupled with the increased empha-
sis on generalizability theory, allows quantitative researchers to pay better attention to
context effects than previously has been the case.
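The matrices in the Nolan and Ryan example are simple to construct. The sketch below uses four invented participants and six invented keywords (stand-ins only; not the 59 students or 45 words of the actual study) to show both steps: a binary word-by-person matrix, and a person-by-person matrix derived from keyword co-occurrence:

import numpy as np

# Hypothetical stand-ins for participants' free-text film descriptions.
descriptions = {
    "p1": "possessed ghost betrayal",
    "p2": "ghost betrayal",
    "p3": "backwoods strangers chainsaw",
    "p4": "strangers chainsaw",
}
keywords = ["possessed", "ghost", "betrayal", "backwoods", "strangers", "chainsaw"]
people = list(descriptions)

# Word-by-person matrix: entry [i, j] is 1 if person j used keyword i.
word_by_person = np.array(
    [[int(word in descriptions[p].split()) for p in people] for word in keywords]
)

# Person-by-person matrix: entry [i, j] counts keywords persons i and j share.
person_by_person = word_by_person.T @ word_by_person

print(word_by_person)
print(person_by_person)

Clustering or scaling the person-by-person matrix is then an ordinary quantitative data-reduction step of the kind Ryan and Bernard describe.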
As noted by Newman and Benz (1998), rather than representing bi-polar opposites,
quantitative and qualitative research represent an interactive continuum. Moreover,
the role of theory is central for both paradigms. Specifically, in qualitative research the
most common purposes are those of theory initiation and theory building, whereas in
quantitative research the most typical objectives are those of theory testing and theory
modification (Newman & Benz, 1998). Clearly, neither tradition is independent of the
other, nor can either school encompass the whole research process. Thus, both quan-
titative and qualitative research techniques are needed to gain a more complete under-
standing of phenomena (Johnson & Onwuegbuzie, 2004; Newman & Benz, 1998).
Another way in which quantitative and qualitative research approaches are congru-
ent lies in the fact that quantitative and qualitative data are interchangeable. That is,
just as it could be contended that all data are basically qualitative (Berg, 1989), inas-
much as they represent an attempt to capture a raw experience, so it could be argued
that all data can be quantified (Sechrest & Sidani, 1995). More specifically, all data can
be binarized, a term coined by Onwuegbuzie (2003) to describe expressing a variable
dichotomously, in binary form (i.e. ‘1’ vs. ‘0’). Indeed, just as experimental, quasi-
experimental and correlational research designs can incorporate the collection of
observational and interview data, so can qualitative designs include the collection of
numerical data. As aptly stated by Kaplan: ‘Quantities are of qualities, and a measured
quality has just the magnitude expressed in its measure’ (1964, p. 207). Additionally,
Onwuegbuzie illustrated how inferential statistics can be utilized in qualitative data
analyses. According to this author, ‘this can be accomplished by treating words arising
from individuals, or observations emerging from a particular setting, as sample units
of data that represent the total number of words/observations existing from that
sample member/context’ (2003, p. 406). Onwuegbuzie argued that inferential statistics
can be used to provide more complex levels of verstehen than is presently undertaken
in qualitative research. Building on Onwuegbuzie’s work, Onwuegbuzie and Teddlie
(2003) outlined different ways of conducting mixed-method data analyses.
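A simplified rendering of both ideas, binarizing and then testing inferentially, is sketched below. The theme, groups, and counts are hypothetical, and Fisher's exact test is one reasonable choice for 0/1 data of this kind rather than a procedure prescribed by Onwuegbuzie (2003):

from scipy import stats

# Hypothetical binarized theme data: 1 = the theme appeared in a
# participant's interview, 0 = it did not (the '1' vs. '0' coding above).
theme_group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 8 of 10 voice the theme
theme_group_b = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # 3 of 10 voice the theme

table = [
    [sum(theme_group_a), len(theme_group_a) - sum(theme_group_a)],
    [sum(theme_group_b), len(theme_group_b) - sum(theme_group_b)],
]

# Does theme prevalence differ between the two groups?
odds_ratio, p_value = stats.fisher_exact(table)
print(odds_ratio, p_value)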
However, quantification should not be viewed as an end in itself, but instead as a
means of utilizing existing techniques that provide incremental validity to thematic
analyses (Weinstein & Tamur, 1978). Further, it should be stressed that mixed-method
analyses are not always possible or even appropriate. Indeed, the challenge is knowing
when it is useful to count and when it is difficult or inappropriate to count (Gherardi
& Turner, 1987; Sandelowski, 2001).
As discussed above, many parallels exist between quantitative and qualitative
research. Regardless of orientation, all research in the social sciences represents an
attempt to understand human beings and the world around them. Thus, it is clear that
although, presently, certain methodologies tend to be associated with and utilized by
one particular research tradition or the other, as stated by Dzurec and Abraham: ‘the
objectives, scope, and nature of inquiry are consistent across methods and across para-
digms’ (1993, p. 75). Indeed, the purity of a research paradigm is a function of the
extent to which the researcher is prepared to conform to its underlying assumptions. If
differences exist between quantitative and qualitative researchers, these discrepancies
do not stem from different goals, but from the different ways in which these two groups
of researchers have operationalized their strategies for reaching those goals (Dzurec &
Abraham, 1993). This suggests that methodological pluralism should be promoted. The best
way for this to occur is for as many investigators as possible to become pragmatic
researchers.
creative and exciting. Moreover, such courses would send a strong message to
students that applied quantitative and qualitative research have, for the most part, the
same goal: to understand phenomena systematically and coherently. As
such, students enrolled in these courses would come to regard research as being a
collaborative undertaking. Additionally, these courses would allow students to focus
on the similarities of quantitative and qualitative research outlined above, rather than
on the differences. However, most importantly, such courses would help to develop
pragmatic researchers equipped to utilize both quantitative and qualitative
techniques.
Greene, Caracelli, and Graham (1989) identified five purposes for mixed-method
studies: (a) triangulation (i.e. seeking convergence and corroboration of results from
different methods studying the same phenomenon); (b) complementarity (i.e. seeking
elaboration, enhancement, illustration and clarification of the results from one method
with results from the other method); (c) development (i.e. using the results from one
method to help inform the other method); (d) initiation (i.e. discovering paradoxes
and contradictions that lead to a re-framing of the research question); and (e) expan-
sion (i.e. seeking to expand the breadth and range of inquiry by using different methods
for different inquiry components). Greene et al.’s framework, as well as those outlined
in Tashakkori and Teddlie’s book entitled Handbook of Mixed Methods in Social and
Behavioral Research (Tashakkori and Teddlie, 2003), and mixed-method works such as
those of Bryman (1988), Brannen (1992), Johnson and Onwuegbuzie (2004), and the
work currently being undertaken under the auspices of the Economic and Social
Research Council’s Research Methods Programme [3], offer potential for developing
pragmatic researchers.
Conclusions
The last 100 years have witnessed a fervent debate about quantitative and qualitative
research paradigms. Unfortunately, this has led to a great divide in the USA between
quantitative and qualitative researchers, who often view themselves as in competition
with each other. Clearly, this polarization has promoted purists, namely researchers
who restrict themselves exclusively either to quantitative or to qualitative research
methods. Yet, relying on only one type of data (i.e. numbers or words) is extremely
limiting. As such, mono-method research is the biggest threat to the advancement of
the social sciences. Indeed, as long as we stay polarized in research, how can we expect
stakeholders who rely on our research findings to take our work seriously?
It has been shown throughout this paper that a false dichotomy exists between quan-
titative and qualitative research. In fact, as noted by Tashakkori and Teddlie (1998), all
distinctions between quantitative and qualitative research methods lie on continua. For
example, the extent to which an independent variable is manipulated lies on a contin-
uum ranging from situations in which the investigator is the agent of change in the
‘treatment’ to cases where the investigator has no control over such changes. Similarly,
the research setting used lies on a continuum ranging from natural to controlled.
Indeed, experiments can occur in natural settings (e.g. field experiments), while case
studies can occur in controlled settings (e.g. clinical case studies). Additionally,
hypotheses lie on a continuum ranging from exploratory to confirmatory. These are
just a few examples that illustrate the false dichotomy prevailing between the two tradi-
tions. Indeed, if a construct is measured using only one research method, then it would
be difficult to differentiate the construct from its particular mono-method operational
definition (Tashakkori & Teddlie, 1998).
As noted by Sechrest and Sidani (1995), a growth in the pragmatic researcher move-
ment has the potential to reduce some of the problems associated with singular meth-
ods. By utilizing quantitative and qualitative techniques within the same framework,
pragmatic researchers can incorporate the strengths of both methodologies. Most
Notes
[1] The differences between positivists and interpretivists highlighted in this section represent
[3] For information on the work being undertaken under the Economic and Social Research
Council’s Research Methods Programme, please see http://www.ccsr.ac.uk/methods
References
American Educational Research Association, American Psychological Association, & National
Council on Measurement in Education (1999). Standards for educational and psychological
testing (Rev. ed.). Washington, DC: American Educational Research Association.
Berg, B. L. (1989). Qualitative research methods for the social sciences. Boston, MA: Allyn & Bacon.
Brannen, J. (Ed.). (1992). Mixing methods: Qualitative and quantitative research. Aldershot: Avebury.
Bryman, A. (1984). The debate about quantitative and qualitative research: A question of method or
epistemology? British Journal of Sociology, 35, 78–92.
Bryman, A. (1988). Quantity and quality in social research. London: Unwin Hyman.
Collins, R. (1984). Statistics versus words. In R. Collins (Ed.), Sociological theory (pp. 329–362). San
Francisco, CA: Jossey-Bass.
Constas, M. A. (1992). Qualitative data analysis as a public event: The documentation of category
development procedures. American Educational Research Journal, 29, 253–266.
Cook, T. D., & Reichardt, C. S. (Eds.). (1979). Qualitative and quantitative methods in evaluation
research. Beverly Hills, CA: Sage.
Creswell, J. W. (1995). Research design: Qualitative and quantitative approaches. Thousand Oaks, CA:
Sage.
Creswell, J. W. (1998). Qualitative inquiry and research design: Choosing among five traditions.
Thousand Oaks, CA: Sage.
Daft, R. L. (1983). Learning the craft of organizational research. Academy of Management Review, 8,
539–546.
Denzin, N. K. (1978). The research act: A theoretical introduction to sociological methods. New York:
Praeger.
Dzurec, L. C., & Abraham, J. L. (1993). The nature of inquiry: Linking quantitative and qualitative
research. Advances in Nursing Science, 16, 73–79.
Elmore, P. B., & Woehlke, P. L. (1988). Statistical methods employed in American Educational
Researcher and Review of Educational Research from 1978 to 1987. Educational Researcher,
17(9), 19–20.
Gherardi, S., & Turner, B. A. (1987). Real men don’t collect soft data. Quaderno 13, Dipartimento di
Politica Sociale, Università di Trento.
Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for mixed-
method evaluation designs. Educational Evaluation and Policy Analysis, 11, 255–274.
Gueulette, C., Newgent, R., & Newman, I. (1999, December). How much of qualitative research is
really qualitative? Paper presented at the annual meeting of the Association for the Advance-
ment of Educational Research (AAER), Ponte Vedra, FL.
Howe, K. R. (1988). Against the quantitative–qualitative incompatibility thesis or dogmas die hard.
Educational Researcher, 17, 10–16.
Howe, K. R. (1992). Getting over the quantitative–qualitative debate. American Journal of Education,
100, 236–256.
Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose
time has come. Educational Researcher, 33(7), 14–26.
Johnson, R. B., Meeker, K., Loomis, E., & Onwuegbuzie, A. J. (2004, April). Development of the
philosophical and methodological beliefs inventory. Paper presented at the annual meeting of
the American Educational Research Association, San Diego, CA.
Kaplan, A. (1964). The conduct of inquiry. San Francisco, CA: Chandler.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.
Madey, D. L. (1982). Some benefits of integrating qualitative and quantitative methods in program
evaluation, with some illustrations. Educational Evaluation and Policy Analysis, 4, 223–236.
McLoughlin, Q. (1991). Relativistic naturalism: A cross-cultural approach to human science. New
York: Praeger.
Miles, M., & Huberman, M. (1984). Qualitative data analysis: An expanded sourcebook. Beverly Hills,
CA: Sage.
Miller, S. I., & Fredericks, M. (1991). Uses of metaphor: A qualitative case study. Qualitative Studies
in Education, 1(3), 263–272.
Newman, I., & Benz, C. R. (1998). Qualitative–quantitative research methodology: Exploring the
interactive continuum. Carbondale, IL: Southern Illinois University Press.
Nolan, J. M., & Ryan, G. W. (2000). Fear and loathing at the Cineplex: Gender differences in descrip-
tions and perceptions of slasher films. Sex Roles, 42(1–2), 39–56.
Onwuegbuzie, A. J. (2002). Positivists, post-positivists, post-structuralists, and post-modernists:
Why can’t we all get along? Towards a framework for unifying research paradigms. Education,
122, 518–530.
Onwuegbuzie, A. J. (2003). Effect sizes in qualitative research: A prolegomenon. Quality & Quantity:
International Journal of Methodology, 37, 393–409.
Onwuegbuzie, A. J., & Daniel, L. G. (2002). A framework for reporting and interpreting internal
consistency reliability estimates. Measurement and Evaluation in Counseling and Development,
35, 89–103.
Onwuegbuzie, A. J., & Leech, N. L. (2004). Enhancing the interpretation of “significant” findings:
The role of mixed methods research. Qualitative Report, 9(4), 770–792. Retrieved March 8,
2005, from http://www.nova.edu/ssss/QR/QR9-4/onwuegbuzie.pdf
Onwuegbuzie, A. J., & Teddlie, C. (2003). A framework for analyzing data in mixed methods
research. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and
behavioral research (pp. 351–383). Thousand Oaks, CA: Sage.
Onwuegbuzie, A. J., DaRos, D. A., & Ryan, J. (1997). The components of statistics anxiety: A
phenomenological study. Focus on Learning Problems in Mathematics, 19, 11–35.
Rossi, J. S. (1997). A case study in the failure of psychology as a cumulative science: The spontaneous
recovery of verbal learning. In L. L. Harlow, S. A. Mulaik, & J. H. Steiger (Eds.), What if there
were no significance tests? (pp. 175–197). Mahwah, NJ: Erlbaum.
Rossman, G. B., & Wilson, B. L. (1985). Numbers and words: Combining quantitative and qualitative
methods in a single large-scale evaluation study. Evaluation Review, 9, 627–643.
Ryan, G. W., & Bernard, H. R. (2000). Data management and analysis methods. In N. K. Denzin &
Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 769–802). Thousand Oaks,
CA: Sage.
Sandelowski, M. (1986). The problem of rigor in qualitative research. Advances in Nursing Science,
8(3), 27–37.
Sandelowski, M. (2001). Real qualitative researchers don’t count: The use of numbers in qualitative
research. Research in Nursing & Health, 24, 230–240.
Schmidt, F. L., & Hunter, J. E. (1997). Eight common but false objections to the discontinuation of
significance testing in the analysis of research data. In L. L. Harlow, S. A. Mulaik, & J. H.
Steiger (Eds.), What if there were no significance tests? (pp. 37–64). Mahwah, NJ: Erlbaum.
Sechrest, L., & Sidani, S. (1995). Quantitative and qualitative methods: Is there an alternative?
Evaluation and Program Planning, 18, 77–87.
Sieber, S. D. (1973). The integration of fieldwork and survey methods. American Journal of Sociology,
78, 1335–1359.
Smith, J. K. (1983). Quantitative versus qualitative research: An attempt to clarify the issue. Educa-
tional Researcher, 12, 6–13.
Smith, J. K., & Heshusius, L. (1986). Closing down the conversation: The end of the quantitative–
qualitative debate among educational inquirers. Educational Researcher, 15, 4–13.
Snizek, W. E. (1976). An empirical assessment of sociology: A multiple paradigm science. The
American Sociologist, 11, 217–219.
Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative
approaches. Applied Social Research Methods Series (Vol. 46). Thousand Oaks, CA: Sage.
Tashakkori, A., & Teddlie, C. (Eds.). (2003). Handbook of mixed methods in social and behavioral
research. Thousand Oaks, CA: Sage.
Thompson, B., & Vacha-Haase, T. (2000). Psychometrics is datametrics: The test is not reliable.
Educational and Psychological Measurement, 60, 174–195.
Vacha-Haase, T. (1998). Reliability generalization: Exploring variance in measurement error affect-
ing score reliability across studies. Educational and Psychological Measurement, 58, 6–20.
Vacha-Haase, T., Ness, C., Nilsson, J., & Reetz, D. (1999). Practices regarding reporting of reliability
coefficients: A review of three journals. The Journal of Experimental Education, 67, 335–341.
Vidich, A. J., & Shapiro, G. (1955). A comparison of participant observation and survey data. Ameri-
can Sociological Review, 20, 28–33.
Weinstein, E. A., & Tamur, A. J. (1978). Meaning, purposes, and structural resources in social
interaction. In J. G. Manis & B. N. Meltzer (Eds.), Symbolic interaction (3rd ed., pp. 138–140).
Boston, MA: Allyn & Bacon.
Willems, E. P., & Raush, H. L. (1969). Naturalistic viewpoints in psychological research. New York:
Holt, Rinehart, & Winston.
Yin, P., & Fan, X. (2000). Assessing the reliability of Beck Depression Inventory scores: Reliability
generalization across studies. Educational and Psychological Measurement, 60, 201–223.