Thursday, March 20, 2008

Evaluating Non-formal Learning: Using "tests of rigor" to validate findings

Without rigour, research is worthless, becomes fiction, and loses its utility. Hence, a great deal of attention is applied to reliability and validity in all research methods.

(Morse et al., 2002, p.1)

Morse and his colleagues develop their position by asserting that the challenges to rigour in qualitative inquiry paralleled the growth of statistical packages and the development of computing systems in quantitative research: without “the certainty of hard numbers and p values, qualitative inquiry expressed a crisis of confidence” (Morse et al., 2002, p.2).


A number of leading qualitative researchers (Leininger, 1994; Altheide & Johnson, 1998) argued that the two approaches are not compatible and should not be combined, given the fundamental differences between the two methods in the nature of knowledge, the relationship between the researcher and the object of the research, and the means of generating data.

My personal view aligns with the criteria suggested by Guba & Lincoln (1981) and Yin (1994) for determining reliability and validity, and thus ensuring rigour, in qualitative research. Reliability and validity relate to the degree of confidence we have that the data represent the participants’ reality or “truth” (Russ-Eft & Preskill, 2001, p.153). How can we define the truth (or even the accuracy) of participants’ responses in the subjective domain of qualitative research? Guba and Lincoln (1981) propose that researchers use so-called “tests of rigour” to establish validity through the naturalistic concept of “trustworthiness,” which comprises four aspects: credibility, transferability, dependability, and confirmability. Each aligns with a parallel scientific term associated with the quantitative method.

Table 1: Definitions of naturalistic terms (Russ-Eft & Preskill, 2001, pp.153-155)

Credibility

The scientific paradigm asserts that there is one reality and that information is valid when all relevant variables can be controlled; a naturalistic paradigm assumes that multiple realities exist in the minds of individuals. Hence, when using qualitative methods, the researcher seeks to establish the credibility of individuals’ responses. The study must be believable, and providing a detailed depiction of the multiple perspectives that exist can enhance the data’s credibility. For example, learner satisfaction surveys, along with interviews with training managers and instructors, would provide a more holistic picture of the learning experience.

Fittingness/transferability

How transferable the findings are to another setting is called generalisability in the empirical context. The goal of qualitative methods is to provide richly detailed descriptions that help readers relate certain findings to their own experience; we often think of these as “lessons learned.” For example, as stakeholders read an evaluation report, they may realise that something very similar has occurred in their own organisation and see where the findings can be used. Although the entire set of findings may not be applicable in their context, some of the issues identified may have applicability in other contexts.

Dependability/auditability

In the scientific paradigm, the notion of consistency is called reliability, where a study’s consistency, predictability, or stability is measured. Since reliability is necessary for validity, it is critical that data of any kind be reliable. Instead of considering data unreliable if they are inconsistent, evaluators using qualitative methods look for the reasons that cause the data to appear unstable (inconsistent). For example, an interviewee may give an opinion one day, and when asked again the following week might say something slightly different. What is important is to understand and capture the reasons for this change in perception. Such inconsistencies may stem from respondent error, an increase in available information, or changes in the situation. An audit trail that includes collected documents, interview notes, and a daily journal of how things are proceeding can help uncover some of the reasons for such inconsistencies.

Confirmability

Objectivity is often viewed as the goal of most evaluation and research studies. Evaluators and researchers who use qualitative methods don’t necessarily believe that true objectivity can ever be fully achieved; rather, they hold that it is impossible to completely separate the evaluator from the method. Instead of trying to ensure that the data are free from the evaluator’s biases, the goal is to determine the extent to which the data provide confirming evidence. “This means that data (constructions, assertions, facts and so on) can be tracked to their sources, and that the logic used to assemble the interpretations into structurally coherent and corroborating wholes is both explicit and implicit” (Guba & Lincoln, 1989, p.243). Establishing confirmability, like consistency, often takes the form of auditing.

References:

Altheide, D. & Johnson, J. M. C. (1998). Criteria for assessing interpretive validity in qualitative research. In: N. K. Denzin & Y. S. Lincoln (Eds.), Collecting and Interpreting Qualitative Materials. Thousand Oaks, CA: Sage.

Guba, E.G. & Lincoln, Y.S. (1981) Effective Evaluation. San Francisco, CA: Jossey-Bass.

Guba, E.G. & Lincoln, Y.S. (1989) Fourth Generation Evaluation. Newbury Park, CA: Sage.

Leininger, M. (1994). Evaluation criteria and critique of qualitative research studies. In J. M. Morse (Ed.), Critical Issues in Qualitative Research Methods. Newbury Park, CA: Sage.

Morse, J. M., Barrett, M., Mayan, M., Olson, K., & Spiers, J. (2002). Verification strategies for establishing reliability and validity in qualitative research. [Internet] International Journal of Qualitative Methods 1 (2), Article 2. Available from: http://www.ualberta.ca/~ijqm/ [Accessed 14th March 2008]

Russ-Eft, D. & Preskill, H. (2001) Evaluation in Organizations: A Systematic Approach to Enhancing Learning, Performance and Change. New York, NY: Perseus Books.

Yin, R. K. (1994). Discovering the future of the case study method in evaluation research. Evaluation Practice [Internet] 15 (3). Available from: http://aje.sagepub.com/cgi/reprint/15/3/283 [Accessed 15th January 2007]
