We're approaching the 40th anniversary of the first Moon landing. I've no doubt that a bombardment of documentaries, retrospectives, and "why aren't we there now?" features is coming this July, surrounding the big day itself. This will brighten up my summer no end. Despite its Cold War beginnings, I happen to think that the Apollo-era US Manned Space Program represents the epitome of human vision and endeavor.
What has this got to do with instructional design, say you?
Well, read on...
NASA wouldn't have got to the Moon, or even to the next town, without gimbals. NASA uses gimbals not only for orienting rocket engines, but also in navigational systems and instrument panels. Without gimbals, it would have been very difficult for NASA to send astronauts safely into space.
A gimbal is a mechanism that helps to keep an object on target: it's built into the platform's systems to correct deviations from a pre-determined goal.
On the Saturn V rocket, for example, gimbals were used to set the rocket at the correct pitch and yaw angles to safely "clear the tower" - that is, not bump into the rocket's support gantry on lift-off. Later in the flight, gimbals pitched the rocket's trajectory to align with the Earth's curve for its journey into orbit (rockets don't go "straight up" but rather ascend in an arc until they attain the required altitude).
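To make that correction mechanism concrete, here's a toy sketch in Python - a hypothetical illustration, nothing resembling real flight software - of the kind of closed loop a gimbal system runs: measure the deviation from the target, apply a correction, repeat.

```python
# A toy feedback loop in the spirit of a gimballed engine: measure how
# far the vehicle has drifted from its target attitude, nudge it back,
# and repeat. All names and numbers here are illustrative only.

def correct(attitude, target, gain=0.5):
    """One feedback step: apply a proportional correction toward target."""
    error = target - attitude      # deviation from the pre-determined goal
    return attitude + gain * error

attitude, target = 10.0, 0.0       # degrees of pitch deviation (made up)
for step in range(6):
    attitude = correct(attitude, target)
    print(f"step {step}: deviation = {attitude:.2f} degrees")
# Each pass shrinks the deviation - the system keeps itself "on target".
```

The deviation never gets fixed in one go; it's continuously measured and corrected. Hold that thought.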
So what, space nerd? What has this to do with instructional design, say you again, losing patience?
In my view, the task gimbals* perform in space flight is similar to the role evaluation performs in instructional design.
According to Donald Clark (2009):
Evaluation is the systematic determination of merit, worth, and significance of a learning or training process by using criteria against a set of standards. The evaluation phase is ongoing throughout the ISD process. The primary purpose is to ensure that the stated goals of the learning process will actually meet a required business need. Thus, it is performed during the first four phases of the ISD process.
Indeed, we can see that this strategy is codified in Dick and Carey's approach (see Figure 1), where an ongoing review process is indicated during the first six phases of the process.
Figure 1. Dick and Carey's Model
Formal evaluation proper is undertaken in steps 7-9 of their model:
1. Determine the instructional goal
2. Analyze the instructional goal
3. Analyze the learners and contexts
4. Write performance objectives
5. Develop assessment instruments
6. Develop instructional strategy
7. Design and conduct formative evaluation
8. Revise instruction
9. Undertake summative evaluation
Dick and Carey (2001) recommend three categories of formative evaluation to support this process: one-to-one (or clinical) evaluation, small-group evaluation, and field evaluation. In my view, though, they don't suggest a mechanism for evaluation per se, as the activities they suggest are standard ethnographic research methodologies. Similarly, while they consider ongoing reviews to be a component of their ID model, the research suggests such evaluation is rare in practice. In her 1989 article Evaluation of training and development programs: A review of the literature, Marguerite Foxon describes herself as "surprised" at the "general" and "superficial" nature of the research undertaken on evaluation, and considered that what was there was "difficult to understand and apply."
She continues:
Where evaluation of programs is being undertaken it is often a 'seat of the pants' approach and very limited in its scope. ...trainers often revert to checking in the only way they know - post-course reactions - to reassure themselves the training is satisfactory.
If the literature is a reflection of general practice, it can be assumed that many practitioners do not understand what the term evaluation encompasses, what its essential features are, and what purpose it should serve. ...Many practitioners regard the development and delivery of training courses as their primary concern, and evaluation something of an afterthought.
She suggests that many practitioners prefer to "remain in the dark," concerned that any actual evaluation will "confirm their [the instructional designers'] worst fears" about the educational quality of the courseware they deliver, with the result that they "choose to settle for a non-threatening survey" of Kirkpatrick Level 1-style trainee reactions.
As we have seen in our look at Three-Phase Design (3PD), in this model evaluation is not viewed as a post-delivery activity (Sims, 2008, p. 5): the nature of Web-based education is such that changes can be made immediately (that is, during Phase 2 - Evaluate, Enhance, Elaborate), as long as those changes don't affect the integrity of the learning program's objectives. The second phase can be
"conceptualised to take place during course delivery, with feedback from both teachers and learners being used to modify and/or enhance delivery.
(p5)
Sims and Jones (2003) call this process "proactive evaluation" (see Figure 2).
Figure 2. Proactive evaluation in 3PD
Using this approach, formative "feedbacks" occur between instructor and students during course implementation. The authors assert that this mechanism sustains the dynamic collaboration between the members of the development team. The second phase enables
generational changes in the course structure, with emphasis on the production (completion) of resources, and where learners can take a role of research and evaluation assistants. By developing and building effective communication paths between each of these three roles, a shared understanding of the course goals and learning outcomes can be established, thereby minimising any compromise in educational quality and effectiveness.
In my view (as shown in Figure 3), evaluation in this model is founded upon recursion. The enhancement process is undertaken by the actors (instructors, designers, and learners) using a strategy similar to dynamic programming, where complex problems are solved by breaking them down into simpler sub-problems.
Figure 3. Recursive evaluation in the 3PD Model
In essence, the enhancement process is repeated until the learning program is considered complete.
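To illustrate what I mean by recursion, here's a minimal sketch of that loop. Everything in it - the Module structure, the quality score, the evaluate() and enhance() functions - is a made-up stand-in, not anything from Sims's papers: the course is decomposed into sub-problems, each is evaluated, and enhancement repeats until every part passes.

```python
# A toy model of recursive evaluation in the 3PD spirit: decompose the
# course into simpler sub-problems (modules), evaluate each against the
# objectives, and keep enhancing until the whole program passes.
# Module, evaluate(), and enhance() are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class Module:
    name: str
    quality: float                 # 0.0 (poor) .. 1.0 (meets objectives)

def evaluate(module):
    """Formative check: does the module meet the stated objectives?"""
    return module.quality >= 0.8

def enhance(module):
    """Fold in feedback from instructors and learners (simulated here)."""
    module.quality = min(1.0, module.quality + 0.1)

def enhance_course(modules):
    """Repeat the evaluate/enhance cycle until every module passes."""
    while not all(evaluate(m) for m in modules):
        for m in modules:
            if not evaluate(m):
                enhance(m)

course = [Module("Orientation", 0.5), Module("Assessment", 0.7)]
enhance_course(course)
print([(m.name, round(m.quality, 2)) for m in course])
```

The point is only the shape of the process: evaluation isn't a final step tacked onto the end, it's the loop condition.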
Even during the Maintenance Phase, the ongoing process of
gathering and incorporating evaluation data caters for the sustainability of the course.
(Sims, 2008, p. 6)
Unlike the Dick and Carey and Kemp models, 3PD supports overlapping roles, skills, and responsibilities. These contributions may well change through the lifecycle of a learning program, as the model promotes and supports the development of instructors' and students' knowledge, skill, and experience via the virtuous circle of ongoing collaboration and communication between the actors, and the development of working relationships. The inclusion of learners in the content development process differentiates 3PD from the other models discussed here.
*(Note to hardcore design-heads: this is a metaphor†; I'm not suggesting they're literally equivalent. Go with it.)
†Metaphor (n) - a figure of speech in which a word or phrase literally denoting one kind of object or idea is used in place of another to suggest a likeness or analogy between them (Merriam-Webster Online Dictionary)
___________
References:
Clark, D. (2009). Evaluation in Instructional Design. [Internet] Available from: http://www.nwlink.com/~donclark/hrd/sat6.html Accessed 12 June 2009
Foxon, M. (1989). Evaluation of training and development programs: A review of the literature. Australian Journal of Educational Technology, 5(2), 89-104. [Internet] Available from: http://www.ascilite.org.au/ajet/ajet5/foxon.html Accessed 12 June 2009
Sims, R., & Jones, D. (2003). Where practice informs theory: Reshaping instructional design for academic communities of practice in online teaching and learning. Information Technology, Education and Society, 4(1), 3-20.
Sims, R. (2008). From three-phase to proactive learning design: Creating effective online teaching and learning environments. In J. Willis (Ed.), Constructivist Instructional Design (C-ID): Foundations, Models, and Practical Examples.