A REVIEW:
MODELS OF CURRICULUM EVALUATION
Presented by
OKPARAUGO OBINNA JOSEPH
CURRICULUM EVALUATION
DEPARTMENT OF EDUCATION FOUNDATION
FACULTY OF EDUCATION
SCHOOL OF POST GRADUATE STUDIES
FEDERAL UNIVERSITY, DUTSIN-MA
KATSINA STATE
FEBRUARY, 2021
INTRODUCTION
Given the dynamic nature of the global sphere and the place of education as the yardstick for
achieving global and national goals and objectives, it is a good time for evaluators to
critically appraise their program evaluation approaches and decide which ones are worthiest of
continued application and further development. It is equally important to decide which
approaches are best abandoned. In this spirit, this paper identifies and reviews twenty
models often employed to evaluate educational programs. These models, in varying degrees, are
unique and cover most educational program evaluation efforts; together, they represent efforts
to place education on the right path towards global uniformity.
Evaluation is a systematic investigation of the value of a program. More specifically, an
evaluation is a process of delineating, obtaining, reporting, and applying descriptive and
judgmental information about some object’s merit, worth, probity, and significance. A sound
evaluation model provides a link to evaluation theory, a structure for planning evaluations, a
framework for collaboration, a common evaluation language, a procedural guide, and standards
for judging evaluation. The scope and focus of evaluation generally, and of curriculum evaluation
in particular, have changed markedly over recent times. With the move towards school-based
curriculum development, attention has shifted away from measurement and testing alone. More
emphasis is now being placed upon a growing number of facets of curriculum development,
reflecting the need to collect information and make judgements about all aspects of curriculum
activities from planning to implementation. While curriculum theorists and some administrators
have realized the significance of this shift many teachers still appear to feel that curriculum
evaluation activities are something which do not directly concern them. However, the general
public, as well as the authorities, expect teachers to know about the effectiveness of their teaching
process and programmes. Given the need, why is it, then, that teachers may not become as
involved in evaluation as we might like? Hunkins (1980, p 297) suggests that it might be because
the teacher has to be:
"the doer, the person who reflects on his own
behaviour during the planning and implementation
phases; the observer of the students and the
resources used during the implementation; the
judge who receives and interprets the data
collected; and the actor who acts upon and makes
informed decisions based upon the data collected".
Expressed this way it does appear that this task may simply be too onerous when forced to
compete against all other activities in which teachers must engage. Seiffert (1986, p 37) expands
on this point by noting that
" ... there are limitations to the amount and nature
of the evaluative role that a teacher may take. First,
a teacher's life is a busy one, and time constraints
will limit the amount of effort that most teachers
may put into evaluation. Second, because a teacher
is a teacher, and thus a significant person in the
learning process, her roles as evaluator will be
limited. It is possible to be too closely involved in a
situation, politically and emotionally, to ask
questions that might challenge one's own
interests."
We then need to be able to discuss our findings in such a way that individuals do not feel
threatened, so that positive and constructive evaluation can be made. Twenty models have been
selected for review in this paper: the Educational Connoisseurship and Criticism Model, the
Logic Model, Tyler’s Model, Stufflebeam’s CIPP Model, Stake’s Model, Kaufman Roger’s Model,
the Goal-Free Evaluation Model, Kirkpatrick’s Four Levels Model, the Product Model, the
Process Model, ADDIE, Bloom’s Taxonomy Model, Taba’s Model, Biggs’ Model of Constructive
Alignment, Wheeler’s Model, the Curriculum-Instruction-Assessment (CIA) Triad Model, Lawton’s
Cultural Analysis Model, Davis’ Process Model, the Systems Design Model and Stake’s Responsive
Model. Each model is reviewed in turn, with its merits and demerits considered and comments
offered on each.
CURRICULUM EVALUATION MODELS
1. EDUCATIONAL CONNOISSEURSHIP AND CRITICISM MODEL
Proponent: Elliot Eisner (1975)
Purpose: To describe, critically appraise, and illuminate a particular program’s merits.
Discuss: The Educational Connoisseurship and Criticism Model was developed by Eisner on the
basis of the expertise-oriented program evaluation approach, which is grounded in the
professional expertise of the evaluators while evaluating an institution, program, product or
activity (Eisner, 1976). The approach can be used in contexts ranging from education to other
areas, in accordance with the evaluand and the expertise of the evaluator. In the context of
program evaluation, the approach is represented by the Educational Connoisseurship and
Criticism Model. "Educational connoisseurship" and "educational criticism" are the two basic
concepts of the model, and both are related to art. According to Eisner (1976, 1985), the aim
of connoisseurship, which he defined as "the art of appreciation and evaluation", is to reveal
awareness of the qualities composing a process or an object and to emphasize them. With this
model applied in education, the quality of the program, students’ activities, the quality of
teaching, learning processes, equipment and so forth can be focused on and perceived in the
connoisseurship process. Criticism, on the other hand, was defined by Eisner (1976, 1985) as
"the art of disclosing the quality of events or objects that connoisseurship perceives"; in
order to share the connoisseurship, criticism is required. Emphasizing that teaching requires
artistic skills, Eisner stated that education is a cultural art and a process differing from
one individual to another and from one environment to another (1985). In this context, he
defined the aim of educational evaluation as not only reviewing products or evaluating
activities within the process but also increasing the skill that a teacher would gain.
Contrary to the common meaning of criticism, rather than making negative comments, criticism
here refers to reproducing the perception of the object. Like an art critic, who attempts to
provide different viewpoints on a sculpture or painting and to make them comprehensible, an
education critic wants to reveal the events in the class, such as class rules, the quality of
teaching, and changes in students’ behaviours. According to Eisner (1976), the expert does not
merely play the role of a critic; he evaluates and appreciates the works.
In Eisner’s (1976) model, the program evaluator resembles the art expert and the
evaluation process resembles art criticism. In this context, while doing educational
criticism on a program, class or school, the evaluator first describes what he sees, then
interprets, and lastly evaluates (Eisner, 1976, 1985). Eisner’s model develops in three dimensions:
Descriptive Dimension: According to Eisner (1976), the descriptive dimension of educational
criticism is related to describing the current state of program, class and school etc.
Interpretative Dimension: Eisner (1985) stated that the interpretative dimension of the
educational criticism is related to the attempt to understand the meaning and significance of many
activities in social environment. This dimension reveals the expert’s knowledge of using multiple
theories, viewpoint and models while interpreting the activities at education environments
(Koetting, 1988). For instance, a critic should answer interpretive questions such as how the
teacher and students interpret the raising of hands in class, or what the class environment
means for all participants.
Evaluative Dimension: The last dimension of educational criticism is the evaluation. In this
dimension, the educational significance and effect of the interpreted experience/activities are
evaluated. During this process, there should be educational criteria with which to judge the
experience. According to Koetting (1988), this points to the normative feature of
educational criticism.
Merit
 It exploits the particular expertise and finely developed insights of persons who have
devoted much time and effort to the study of a precise area.
 Specific and detailed steps of evaluation
 Promotes teacher development unlike others that focus on learners and learning content.
Demerit
 It is dependent on the expertise and qualifications of the particular expert doing the
program evaluation, leaving room for much subjectivity.
 Marginalizes science-oriented educational processes, as emphasis is laid on art-oriented
ones.
 Requires experts throughout the process.
 The model is cumbersome: evaluation proceeds in stages, and even at points where expertise
is not strictly needed to carry out evaluation, the model still demands a high level of
expertise for process effectiveness.
Comment: Eisner’s Connoisseurship and Criticism Model upholds evaluation of educational
processes solely by professionals, with the promise of keeping a trained eye on the content
and applying the requisite measures to yield the expected results. By its process, this
evaluation model touches every aspect of education, not limited to the classroom situation
alone. With Eisner’s emphasis on expertise, it gives the teacher room to evaluate even
himself, thereby developing his knowledge, skills and methods. The model does not focus on the
learners or the learning content alone but can be applied to every facet of the teaching and
learning process.
2. THE LOGIC MODEL
Purpose: To precisely describe the mechanisms behind the program’s effects.
Discuss: A Logic evaluation model is a structured description of how a specific program achieves
an intended learning outcome. The influence of system theory on the Logic Model approach to
evaluation can be seen in its careful attention to the relationships between program components
and the components’ relationships to the program’s context. (Frechtling, 2007).
In this model, evaluation starts with inputs. The existence of inputs guarantees the occurrence
of activities; the presence and occurrence of activities determine outputs, and outputs yield
results, which are the outcomes (Linlin & Rachel, 2017).
Inputs: A Logic Model’s Inputs comprise all relevant resources, both material and intellectual,
expected to be or actually available to an educational program. Inputs may include funding
sources (already on hand or to be acquired), facilities, faculty skills, faculty time, staff time, staff
skills, educational technology, and relevant elements of institutional culture.
INPUTS → ACTIVITIES → OUTPUTS → OUTCOMES
Activities: The second component of a Logic Model details the Activities, the set of
'treatments', strategies, innovations or changes planned for the educational program.
Outputs: Outputs, the Logic Model’s third component, are defined as indicators that one of the
program’s activities or parts of an activity is underway or completed and that something
happened. The Logic Model structure dictates that each Activity must have at least one Output,
though a single Output may be linked to more than one Activity.
Outcomes: Outcomes define the short-term, medium-term, and longer-range changes intended
as a result of the program’s activities. A program’s Outcomes may include learners’
demonstration of knowledge or skill acquisition (e.g., meeting a performance standard on a
relevant knowledge test or demonstrating specified skills), program participants’ implementation
of new knowledge or skills in practice, or changes in health status of program participants’
patients.
Merits of Logic Model
 Logic Model can be very useful during the planning phases of a new educational project
or innovation or when a program is being revised. Because it requires that educational
planners explicitly define the intended links between the program resources (Inputs),
program strategies or treatments (Activities), the immediate results of program activities
(Outputs), and the desired program accomplishments (Outcomes), using the Logic Model
can assure that the educational program, once implemented, actually focuses on the
intended outcomes.
 It takes into account the elements surrounding the planned change (the program’s
context), how those elements are related to each other, and how the program’s social,
cultural, and political context is related to the planned educational program or innovation.
Demerit of Logic Model
 It will not generate evidence for causal linkages of program activities to outcomes.
 It will not allow the testing of competing hypotheses for the causes of observed
outcomes.
Comment: The content and processes outlined in this model can be used as a map to measure a
program’s achievement of its expected goals and objectives. The model also indicates the kind
of data required to ascertain, while the program is in process, whether it is achieving the
desired goals and objectives. If carefully implemented, it can generate ample descriptive data
about the program and its subsequent outcomes.
3. TYLER’S MODEL (1949)
Purpose: To measure students’ progress towards objectives
Proponent: Ralph Tyler
Discuss: Tyler’s goal attainment model or sometimes called the objectives-centered model is the
basis for most common models in curriculum design, development and evaluation. The Tyler
model is comprised of four major parts. These are: 1) defining objectives of the learning
experience; 2) identifying learning activities for meeting the defined objectives; 3) organizing
the learning activities for attaining the defined objectives; and 4) evaluating and assessing the
learning experiences.
Tyler’s Model begins by defining the objectives of the learning experience. These objectives
must have relevancy to the field of study and to the overall curriculum (Keating, 2006).
Tyler’s model obtains the curriculum objectives from three sources: 1) the student, 2) the
society, and 3) the subject matter. When defining the objectives of a learning experience,
Tyler gives emphasis to the input of students, the community and the subject content. Tyler
believes that curriculum objectives that do not address the needs and interests of students,
the community and the subject matter will not make the best curriculum. The second part of
Tyler’s model involves the identification of learning activities that will allow students to
meet the defined objectives. To emphasize the importance of identifying learning activities
that meet defined objectives, Tyler states that "the important thing is for students to
discover content that is useful and meaningful to them" (Meek, 1993, p. 83). In this way Tyler
is a strong supporter of the student-centered approach to learning.
Tyler’s Planning Model
1. What educational goals should the school seek to attain?
2. How can learning experiences be selected which are likely to be useful in attaining these
objectives?
3. How can learning experiences be organised for effective instruction?
4. How can the effectiveness of learning experiences be evaluated?
Merits
1. It evaluates the degree to which the pre-defined goals and objectives have been attained.
2. It is learner-centered, and the learner is the reason for the curriculum and for every other
process of education.
3. Tyler’s model is product focused.
Demerits
1. Ignores process
2. It is difficult and time consuming to construct behavioral objectives.
3. It is a cumbersome process, and it is difficult to arrive easily at consensus among the
various stakeholder groups.
4. It is too restrictive and covers a small range of student skills and knowledge.
5. It is too dependent on behavioral objectives, and it is difficult to state plainly, in
behavioral terms, objectives that cover non-specific skills such as critical thinking and
problem solving, or objectives related to value-acquiring processes (Prideaux, 2003).
6. The objectives in the Tyler’s model are too student centered and therefore the teachers are not
given any opportunity to manipulate the learning experiences as they see fit to evoke the kind of
learning outcome desired.
Tyler’s model: Objectives → Selecting learning experiences → Organising learning experiences →
Evaluation of students’ performance
Comment: Tyler’s objective-centered model, having come a long way in practice by individuals
and institutions over the years, stands out as a significant evaluation model in the
educational process. The demerits posed by this model are not out of place, for just as no man
is said to be an island, Tyler’s model with its observed limitations gave rise to other forms
of evaluation models for educational purposes. Also, since the learner forms the base of every
educational activity, Tyler’s objective model centers on what the curriculum is poised to
achieve, what the learner should receive, and the expected manifestations of the desired
outcomes. Meanwhile, this model cannot be effective without the teacher: even though the model
circles around the learner and his activities, it is the teacher who activates these
activities.
4. STUFFLEBEAM’S CIPP MODEL (1983)
Proponent: Daniel Stufflebeam
Context, Input, Process and Product (CIPP) evaluation
Purpose: Decision-making.
Discuss: This model supports the facilitation of rational and continuing decision-making:
through evaluation it identifies potential alternatives and sets up quality control systems
for activities. One very useful model for educational evaluation is the CIPP approach,
developed by Stufflebeam (1983). The model provides a systematic way of looking at many
different aspects of the evaluation process. The concept of evaluation underlying the CIPP
Model is that "evaluations should assess and report an entity’s merit (its quality), worth (in
meeting needs of targeted beneficiaries), probity (its integrity, honesty, and freedom from
graft, fraud, and abuse), and significance (its importance beyond the entity’s setting or time
frame), and should also present lessons learned" (Stufflebeam, 2007).
The CIPP evaluation model can be used in both formative and summative modes (Stufflebeam,
2003b). In the formative type, based on the four core concepts, the model guides an evaluator to
ask (a) what needs to be done? (b) how should it be done? (c) is it being done? (d) is it
succeeding? In the summative form the evaluator uses the already collected information to
address the following retrospective questions (a) were important needs addressed? (b) was the
effort guided by a defensible plan and budget? (c) was the service design executed competently
and modified as needed? (d) did the effort succeed? Overall, the purpose of the CIPP evaluation
is not to prove but to improve (Stufflebeam, 2003b) and hence is a useful evaluation tool in
curriculum or project evaluation. It has been used to evaluate the effectiveness of introducing
academic advisors for student guidance and to evaluate programmes at masters and undergraduate
levels (Allahvirdiyani, 2011).
Merits
 The model was not designed with any specific program or solution in mind; thus, it can
be easily applied to multiple evaluation situations.
 Its comprehensive approach to evaluation can be applied from program planning to
program outcomes and fulfillment of core values.
 Rational decision making among alternatives
 The model is well established and has a long history of applicability.
Demerits
 The model could be said to blur the line between evaluation and other investigative
processes such as needs assessment.
 It is not as widely known and applied in the performance improvement field as other models.
 It undervalues students’ aims.
Comment: It gives room for multiple observers and for the collection of information throughout
the program. There is high independence and interdependence of each stage, with feedback as
the process is ongoing. The CIPP model is thus a widely used model in programme and curriculum
evaluation.
5. STAKE’s MODEL (1967)
Proponent: Robert Stake
Purpose: The countenance model aims to capture the complexity of an educational innovation.
Discuss: The countenance model aims to capture the complexity of an educational innovation or
change by comparing intended and observed outcomes at varying levels of operation. It uses
three sets of data:
1. Antecedents: conditions existing before implementation.
2. Transactions: activities occurring during implementation.
3. Outcomes: results after implementation.
The evaluator describes the program fully and judges the outcomes against external standards.
Stake divides descriptive acts according to whether they refer to what was intended or what
was actually observed. He argues that both intentions and what actually took place must be
fully described. He then divides judgmental acts according to whether they refer to the
standards used in reaching judgements or to the actual judgements themselves. He assumes the
existence of a rationale for guiding the design of a curriculum. Stake wrote that greater
emphasis should be placed on description, and that judgement was essentially the collection of
data.
Antecedent is any condition existing prior to teaching and learning which may relate to
outcomes.
Transactions are the countless encounters of students with teacher, student with student,
author with reader, parent with counsellor.
Outcomes include measurements of the impact of instruction on learners and others.
ANTECEDENTS
• Conditions existing prior to curriculum evaluation: students’ interests or prior learning,
the learning environment in the institution, and the traditions and values of the institution.
TRANSACTIONS
• Interactions that occur between teachers and students, students and students, students and
curricular materials, and students and the educational environment. Transactions constitute
the process of education.
OUTCOMES
• Learning outcomes; the impact of curriculum implementation on students, teachers,
administrators and the community; immediate outcomes versus long-range outcomes.
(Stake’s countenance framework: for each of antecedents, transactions and outcomes, a
description matrix of intents and observations and a judgement matrix of standards and
judgments, organised under the program rationale.)
Merits:
 This model of evaluation supports the application of various theories of education in its
implementation.
 Communication between the players in the curriculum process is good, notwithstanding who or
what they are.
Demerits:
 Stirs up value conflicts.
 Ignores causes.
 The disconnection between the stages, and the independent communication of each stage, makes
progress and cohesion difficult.
Comment: Stake’s model of curriculum evaluation is more than just an evaluation process.
Stake’s model also looks at the development of the curriculum. When using this model, it is
necessary to compare the developed curriculum with what actually happens in the classroom.
6. KIRKPATRICK'S FOUR LEVELS MODEL
Purpose: Assessing Training Effectiveness often entails using the four-level model
Proponent: Donald Kirkpatrick (1994).
Discuss: In Kirkpatrick's four-level model, each successive evaluation level is built on
information provided by the lower level.
• Kirkpatrick’s model, which is also known as the hierarchy model, holds that evaluation
should always begin with level one and then, as time and budget allow, move sequentially
through levels two, three, and four. Information from each prior level serves as a base for
the next level's evaluation.
Level 1 - Reaction
 Evaluation at this level measures how participants in a training program react to it.
 It attempts to answer questions regarding the participants' perceptions - Was the material
relevant to their work? This type of evaluation is often called a "smile sheet".
 According to Kirkpatrick, every program should at least be evaluated at this level to provide
for the improvement of a training program.
Level 2 - Learning
 Assessing at this level moves the evaluation beyond learner satisfaction and attempts to assess
the extent students have advanced in skills, knowledge, or attitude.
To assess the amount of learning that has occurred due to a training program, level two
evaluations often use tests conducted before training (pretest) and after training (posttest).
Level 3 - Transfer
 This level measures the transfer that has occurred in learners' behavior due to the training
program.
 Are the newly acquired skills, knowledge, or attitude being used in the everyday
environment of the learner?
Level 4 - Results
This level measures the success of the program in terms that the school and teachers can understand.
Merits:
 The model can be used to evaluate classroom training as well as eLearning.
 The model provides a logical structure and process to measure learning
 When used in its entirety, it can give organizations an overall perspective of their training
program and of the changes that must be made.
 It has been used to gain a deep understanding of how eLearning affects learning, and if there
is a significant difference in the way learners learn.
Demerits:
 Despite its popularity, Kirkpatrick’s model is not without its critics. Some argue that the model
is too simple conceptually and does not take into account the wide range of organisational,
individual, and design and delivery factors that can influence training effectiveness before,
during, or after training.
 As Bates (2004) points out, contextual factors, such as organisational learning cultures and
values, support in the workplace for skill acquisition and behavioural change, and the adequacy
of tools, equipment and supplies can greatly influence the effectiveness of both the process and
outcomes of training. Other detractors criticise the model’s assumptions of linear causality,
which assumes that positive reactions lead to greater learning, which in turn, increases the
likelihood of better transfer and, ultimately, more positive organisational results (Alliger et al,
1997).
 Training professionals also criticise the simplicity of the Kirkpatrick model on a practical
level. Bersin (2006) observes how practitioners struggle routinely to apply the model fully. Since
it offers no guidance about how to measure its levels and concepts, users often find it difficult to
translate the model’s different initiatives.
Comment: Kirkpatrick’s model of curriculum evaluation, with its system of evaluating each
process or stage before proceeding, is commendable, as evaluation is a continuous process at
the beginning, middle and end of a program. But the nature of this model poses a challenge to
actualizing set goals and objectives in record time, since every curriculum process has a
timeline and target objectives. Also, evaluating each stage with the upward movement of the
process, as shown in the model diagram, is difficult and liable to misuse, as the result of a
particular stage may not by itself be sufficient grounds to approve continuation.
7. KAUFMAN ROGER’S MODEL
Proponents: Roger Kaufman and John M. Keller
Purpose:
Discuss: Roger Kaufman and John M. Keller published Levels of evaluation: Beyond
Kirkpatrick in the winter 1994 edition of Human Resource Development Quarterly. This work
became known as Kaufman’s Five Levels of Evaluation and is commonly referred to as
Kaufman’s Model of Learning Evaluation. Kaufman’s model is one of a number of learning
evaluation models that build on the Kirkpatrick Model, one of the most popular and widely-used
training evaluation models of all time. Kaufman’s Five Levels of Evaluation is a response, or
reaction, to Kirkpatrick’s model and aims to improve upon it in various ways. With Levels of
evaluation: Beyond Kirkpatrick, Kaufman and Keller were aiming to develop a "more effective
approach to evaluation" by using an "expanded concept of evaluation". Their aim was to develop
Kirkpatrick’s Model to include result-related questions. They believed that this approach
would "contribute to continuous improvement by comparing intentions with results" (Kaufman &
Keller, 1994).
Merits:
 Kaufman’s model clearly illustrates the value of separating quality standards for your
materials from standards for your delivery method.
 The other useful lesson to draw from Kaufman’s model is the value of data besides the
input you receive from learners.
Demerits
Comment: This model, intended to close the lacuna observed in Kirkpatrick’s model, keeps only
the process of evaluation in check without considering the main actors of the process, the
learner and the teacher. Though in the graphic display of this process every stage is
connected via a hose that serves as the link and channel through which each stage’s result is
passed, that does not solve the problem of independent stages observed in Kirkpatrick’s model.
Determining the interdependence of the stages and including the curriculum actors and
activities in the process should be considered to make this model more befitting, specific and
goal-oriented.
8. GOAL FREE EVALUATION (1973)
Proponent: Michael Scriven
Purpose: Goals are only a subset of anticipated effects
Discuss: In the goal-free evaluation model, the evaluation looks at a program's actual effects
on identified needs. In other words, program goals are not the criteria on which the
evaluation is based. This is because actual effects reveal the true worth of the program,
rather than simply its intended worth.
Effects: intended effects and unintended effects.
Scriven became increasingly uneasy about the separation between goals and unintended
outcomes, which he calls ‘side-effects’. What should concern us is what side effects the
program had, and evaluating those, whether or not they were intended. Scriven goes further to
argue that considering the stated goals of a program when evaluating it can be considered not
only unnecessary but also dangerous. He suggests an alternative approach: evaluating the
actual effects against a profile of demonstrated need. This is goal-free evaluation. Intended
goals are different from real goals.
Merits:
 The goal-free model works best for qualitative evaluation because the evaluator is looking
at actual effects rather than anticipated effects.
 Scriven suggests using two goal-free evaluators, each working independently. In this way,
the evaluation does not rely solely on the observations and interpretations of one person.
Demerits:
 Goal-free evaluation simply substitutes its own goals for those of the project.
 There is a chance that some of the most important effects will be missed.
Comment: Scriven broadened the perspective of curriculum evaluation and suggested that
evaluators should not know the educational program’s goals and objectives, so that the
evaluation is not influenced by them. His goal-free method holds that evaluators should be
free to look at processes and procedures, outcomes and unanticipated effects; evaluators
should therefore be totally independent.
9. PRODUCT MODEL Also known as Behavioural Objectives Model
Proponents: Tyler (1949), Bloom (1965)
Purpose: Model interested in product of curriculum
Discuss: This model asks such questions as: What are the aims and objectives of the
curriculum? Which learning experiences meet these aims and objectives? How can the extent to
which these aims and objectives have been met be evaluated? How can these learning experiences
be organised? These questions, with the right answers and applicable measures given, will not
only make this a contending and globally applicable model; they will birth the drive for more
research and innovation in curriculum evaluation (adapted from Tyler, 1949).
Merits of Product Model
 Avoidance of vague general statements of intent
 Makes assessment more precise
 Helps to select and structure content
 Makes teachers aware of different types and levels of learning involved in particular subjects
 Guidance for teachers and learners about skills to be mastered
Demerits of Product Model
 At lower levels, behavioural objectives may be trite and unnecessary
 Difficult to write satisfactory behavioural objectives for higher levels of learning.
 Specific behaviours not appropriate for affective domain
 Discourages creativity for learner and teacher
 Enshrines psychology and philosophy of behaviourism
 Curriculum too subject and exam bound
10. PROCESS MODEL
Proponent: Lawrence Stenhouse (1975)
Purpose: Focuses on teacher activities and the teacher’s role, student and learner activities (perhaps the most important feature), and the conditions in which learning takes place.
Discuss: The Process Model emphasizes means rather than ends. It proposes that the learner should have a part in deciding the nature of learning activities, provided in an individualised atmosphere, on the assumption that each learner makes a unique response to learning experiences.
Merits of Process Model
 Emphasis on active roles of teachers and learners
 Emphasis on learning skills
 Emphasis on certain activities as important in themselves and for ā€œlifeā€
Demerits of Process Model
 Neglect of considerations of appropriate content
 Difficulty in applying approach in some areas
Comment: This model concentrates on the center of every curriculum process, but it has a limitation: a curriculum does not depend entirely on the learners’ experiences, choices and outcomes or on the teachers’ methods and competence. Extending the model to consider the evaluation of the environment, instructional materials, etc. would improve and expand the realization of this style of curriculum evaluation.
11. ADDIE
Originally developed for the U.S. Army by the Centre for Educational Technology at Florida
State University, ADDIE was later implemented across all branches of the U.S. Armed Forces.
The concept of Instructional Design can be traced back to as early as the 1950s. But it wasn’t
until 1975 that ADDIE was designed.
The ADDIE model is a process used by training developers and instructional designers to plan
and create effective learning experiences.
ā€œADDIEā€ stands for Analyze, Design, Develop, Implement, and Evaluate. This sequence,
however, does not impose a strict linear progression through the steps. Educators, instructional
designers and training developers find this approach very useful because having stages clearly
defined facilitates implementation of effective training tools.
The ADDIE model is generic enough that it can be used to create any type of learning experience
for any learner.
1. Analysis
Before you start developing, the ADDIE model recommends first analyzing the current situation: get a clear picture of where everything currently stands to understand the gaps that need to be filled. A quality analysis helps identify learning goals and objectives. It also helps gather information about what your audience already knows and what they still need to learn. How do you perform a good analysis? Ask good questions – who, what, why, where, when, and how?
2. Design
In the Design phase, information from the Analysis phase is reviewed to make informed decisions about creating the learning program. This phase is often time-intensive and requires attention to detail. The Design phase helps to decide specific learning objectives, the structure of the content, the mental processes required of participants, the knowledge or skills participants need to retain, the best tools to use, the videos or graphics to create, and the length of each lesson, to name a few of the essentials. In a nutshell, this is where all of one’s expertise as an instructional designer comes into play.
3. Development
The Development phase is where one actually begins creating the program. The content ideas should already have been decided in the previous Design phase; Development brings those ideas to life. This means laying out the content and everything involved in creating the actual end-product for the learners. One major part of the Development phase is testing, which checks the degree and quality of the content to ascertain whether it will satisfy the intended learning objectives.
4. Implementation
With the objectives set, the design made, and the content selected, tested, and approved, learners are exposed to the content at this stage. During implementation, close attention is required to see whether any issues arise, at what point, with what content, and why.
5. Evaluation
The Evaluation phase is all about gathering important information to see if the course needs to
be revised and improved.
Merits of ADDIE Model
 commonly used and widely accepted model
 proven to be effective for human learning
 foundation for other learning models
 easy to measure time and cost
Demerits of ADDIE Model
 rigid linear process that must be followed in order
 time-consuming and costly
 inflexible to adapt to unforeseen project changes
 does not readily allow for iteration
Comment: Although the model is commonly presented as a sequence to be completed in order, from Analysis to Evaluation, ADDIE can also be treated as a flexible, continuous process of improvement and iteration.
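The cyclical reading of ADDIE described in the comment above can be pictured in a short sketch. This is an illustration only, not part of the model itself: the phase loop and the `needs_revision` callback are invented here to show how the five phases can repeat, with Evaluate triggering another pass, rather than running strictly once.

```python
# Toy sketch: ADDIE as a repeatable cycle rather than a one-shot sequence.
# All function and variable names are invented for this illustration.

PHASES = ["Analyze", "Design", "Develop", "Implement", "Evaluate"]

def run_addie(needs_revision):
    """Run the five phases in order; if the Evaluate phase flags
    problems (needs_revision returns True), run another full cycle."""
    history = []
    iteration = 0
    while True:
        for phase in PHASES:
            history.append(f"{phase}#{iteration}")  # record each phase visit
        if not needs_revision(iteration):
            return history
        iteration += 1

# One revision pass, then the course is accepted: 10 phase visits in all.
log = run_addie(lambda i: i < 1)
print(len(log), log[-1])
```

The callback stands in for the Evaluation phase's judgment; in a real project that decision would come from learner data and stakeholder feedback, not a lambda.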
12. BLOOM’S TAXONOMY MODEL
Proponent: Benjamin Bloom (1956)
Purpose: Categorizing educational goals.
Discuss: Taxonomy of Educational Objectives, familiarly known as Bloom’s Taxonomy, this
framework has been applied by generations of K-12 teachers and college instructors in their
teaching.
Knowledge ā€œinvolves the recall of specifics and universals, the recall of methods and processes,
or the recall of a pattern, structure, or setting.ā€
Comprehension ā€œrefers to a type of understanding or apprehension such that the individual
knows what is being communicated and can make use of the material or idea being
communicated without necessarily relating it to other material or seeing its fullest implications.ā€
Application refers to the ā€œuse of abstractions in particular and concrete situations.ā€
Analysis represents the ā€œbreakdown of a communication into its constituent elements or parts
such that the relative hierarchy of ideas is made clear and/or the relations between ideas expressed
are made explicit.ā€
Synthesis involves the ā€œputting together of elements and parts so as to form a whole.ā€
Evaluation engenders
ā€œjudgments about the value
of material and methods for
given purposes.ā€
Merits
 This model is good for cognitive development.
 The application of analysis is a good center for the translation of knowledge to skill development.
 This model takes into consideration the three domains of educational objectives, i.e. the cognitive, affective and psychomotor domains.
Demerits
 Because of its pyramidal form, the model provides no internal evaluation of each stage and its progress; evaluation should be flexible and present throughout the process to allow innovation and correction.
Comments: This model is good for theoretical and practical learning outcomes, though its pyramidal nature limits the free flow of evaluative measures. Since no evaluative model is perfect, there is room for extensive reconstruction of this model to accommodate flexibility, and both the interdependence and the independence of each stage, aiming at the successful outcome of the curriculum.
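The six levels quoted above are often used operationally to tag learning objectives by their action verb. The sketch below shows this idea; note that the verb lists are abridged examples chosen for the illustration, not Bloom's official lists, and real objectives need human judgment rather than simple word matching.

```python
# Illustrative sketch: tagging a learning objective with one of the six
# levels of Bloom's original taxonomy by its leading action verb.
# The verb sets here are small, invented examples.

BLOOM_VERBS = {
    "Knowledge":     {"define", "list", "recall", "state"},
    "Comprehension": {"explain", "summarise", "describe"},
    "Application":   {"apply", "use", "demonstrate"},
    "Analysis":      {"analyse", "compare", "differentiate"},
    "Synthesis":     {"design", "construct", "compose"},
    "Evaluation":    {"judge", "critique", "appraise"},
}

def classify_objective(objective):
    """Return the taxonomy level suggested by the objective's first verb."""
    first_word = objective.lower().split()[0]
    for level, verbs in BLOOM_VERBS.items():
        if first_word in verbs:
            return level
    return "Unclassified"

print(classify_objective("Compare two evaluation models"))  # Analysis
print(classify_objective("Design a unit plan"))             # Synthesis
```

A curriculum evaluator could run such a tagger over a syllabus to see whether objectives cluster at the lower levels, one practical reading of the demerit noted above about the pyramid's rigidity.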
13. TABA’S MODEL
Proponent: Hilda Taba (1902–1967), an architect of curriculum, a curriculum theorist, a curriculum reformer, and a teacher educator, born in the small village of Kooraste, Estonia. Taba believed that there has to be a definite order in creating a curriculum.
Purpose: Adequate Development and Implementation of the curriculum.
Discuss: Hilda Taba is the developer of the Taba Model of learning. This model is used to
enhance the thinking skills of students. Hilda Taba believed that there must be a process for
evaluating student achievement of content after the content standards have been established and
implemented.
Merits of Taba’s Model
 Gifted students begin thinking of a concept, then dive deeper into that concept.
 Focuses on open-ended questions rather than right/wrong questions.
 The open-endedness requires more abstract thinking, a benefit to gifted students.
 The questions and answers lend themselves to rich classroom discussion.
 Easy to assess student learning.
Demerits of Taba’s Model
 Can be difficult for non-gifted students to grasp
 Difficult for heterogeneous classrooms
 Works well for fiction and non-fiction, may be difficult to easily use in all subjects
Comment: Taba’s Model is one of the models that takes gifted and special-needs learners into consideration. It concentrates on learners, on the premise that for the evaluated outcome to be successful, learners must be given the proper attention and content.
14. BIGGS' MODEL OF CONSTRUCTIVE ALIGNMENT
Proponent: John Biggs (2003)
Purpose: The main theoretical
underpinning of the outcomes-based
curriculum is provided by Biggs (2003).
Discuss: He calls the
model constructive alignment which he
defines as the coherence between
evaluation, teaching strategies and
intended learning outcomes in an
educational programme (McMahon &
Thakore 2006). The model requires
alignment between the three key areas of
the curriculum, namely, the intended
learning outcomes, what the student
does in order to learn, how the student is
assessed. This is expressed in Figure 1
with a concrete example given as
Figure 2.
Merits
 Critical reflection on teaching/learning activities.
 Monitoring evaluation and curriculum outcomes in stages.
 Observation cycle of teaching and learning activities.
Demerits
 Too much concentration on teaching and learning.
 Model is ambiguous.
 Little emphasis on evaluation and outcome.
Comment: Judging from the graphical representation above, this model is precise with respect to the activities of teachers and learners in determining the outcome of the evaluation process. In relation to evaluation itself, however, it is ambiguous, which makes concentration on specific outcomes difficult and tedious.
Figure 1: A Basic Model of an Aligned Curriculum.
Figure 2: An Example of Constructive Alignment in a Curriculum.
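The alignment requirement described above can be pictured as a simple audit: every intended learning outcome (ILO) should be addressed by at least one teaching/learning activity and at least one assessment task. The sketch below uses an invented course description (the course data and function names are illustrative, not from Biggs) to show the idea.

```python
# Hedged sketch of a constructive-alignment audit over an invented course.

course = {
    "ILOs": ["explain evaluation models", "apply an evaluation model"],
    "activities": {
        "lecture": ["explain evaluation models"],
        "case study": ["apply an evaluation model"],
    },
    "assessments": {
        "essay": ["explain evaluation models"],
    },
}

def misaligned_ilos(course):
    """Return ILOs lacking an aligned activity or an aligned assessment."""
    covered = lambda mapping: {ilo for ilos in mapping.values() for ilo in ilos}
    taught = covered(course["activities"])     # ILOs some activity addresses
    assessed = covered(course["assessments"])  # ILOs some assessment addresses
    return [ilo for ilo in course["ILOs"]
            if ilo not in taught or ilo not in assessed]

print(misaligned_ilos(course))  # the 'apply' ILO is taught but never assessed
```

In the example, the second ILO is covered by an activity but by no assessment, so the audit flags it; closing such gaps is exactly what constructive alignment asks of curriculum designers.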
15. WHEELER’S MODEL
Proponent: D. K. Wheeler (1967)
Purpose: Curriculum should be a
continuous cycle which is
responsive to changes in the
education sector and makes
appropriate adjustments to
account for these changes.
Discuss: This model is an improvement upon Tyler’s model. Instead of a linear model, Wheeler developed a cyclical model. Evaluation in Wheeler’s model is not terminal, as feedback is obtained and supports situation analysis as implementation progresses. Findings from the evaluation are fed back into the objectives and goals, which influence the other stages. This model illustrates the dynamic nature of the process of curriculum development: it goes on as the needs and interests of society change, and the objectives change with them.
Step 1: Selecting aims, goals and objectives. Selection must be relevant to the specific content
area. This helps the planner to be focused about the direction of educational development.
Step 2: Selecting learning experiences. These occur in the classroom; the step concerns the learner and their learning environment, so as to achieve the desired changes in pupils’ behaviour.
Step 3: Selecting content. This refers to the subject matter of teaching/learning, considering aspects such as significance, interest and learnability.
Step 4: Organising and integrating experiences. This step is important, as the experiences are connected to the teaching/learning process; learning activities are organised based on pupils’ experiences.
Step 5: Evaluating on different phases and an examination of whether the goals have been
attained through formative and summative assessment.
Merits:
 Gives room for feedback and situation analysis as the curriculum implementation progresses.
 It provides a logical sequence.
 By this model, curriculum is a continuous activity and is open to change as new
information or practice becomes available.
Demerits:
 Evaluation may be too frequent, which may lead to distraction from the aims and objectives of the curriculum.
 The flexible nature of this model can lead to the collection of irrelevant content.
 Evaluation only takes place when the cycle completes, which means unwanted content cannot be detected for exclusion as the curriculum progresses.
Comment: With its flexible nature, this model will be easily adoptable in any society. Curriculum experts have to be conscious of the various stages of the model and check for content validity, since openness to new information and innovation is one of its characteristics. It presents the curriculum process as a continuing activity, constantly in a state of change as new information or practices become available. This flexibility to accommodate and align with change is a strength, since no society or culture is static.
16. THE CURRICULUM-INSTRUCTION-ASSESSMENT (CIA) TRIAD MODEL.
Purpose: To understand how effectively knowledge is transferred from the instructor to the student in engineering theoretical and practical lessons.
Discussion: The Curriculum-Instruction-Assessment triad, as shown in figure 1, is used as a core model for understanding how effectively knowledge is transferred from the instructor to the student. The answer to ā€œHow People Learnā€ can differ based on how each part of the triad is conceptualized; in other words, the type of curriculum, method of instruction, and form of assessment are critical. Each element of the triad is explained here based on the approach designed for engineering design courses at university level. Instruction refers to the methods of teaching, as well as the learning activities used to help students develop their understanding of the course content and the curriculum objectives; the instruction is delivered to the students by the instructor and teaching assistants.
Merits
 With people as the central focus of this model, one can ascertain first-hand, during the program, the success or failure of any of its processes.
 The interconnectivity of the various stages shows the model’s flexibility at different points for correction and innovation.
 This model is advantageous for practical, skill-oriented learning and outcomes.
Demerits
 The application of this model was tested only in higher institutions.
 It is limited, and neglects other levels of education, such as the primary and secondary levels, which are the base and foundation on which the tertiary level is built.
 Though the people will exhibit the expected outcomes of the program, the measures or methods for carrying out the evaluation to determine such outcomes were not considered.
Comment: Unlike other pyramidal concepts that go either upwards or downwards in depicting how learning and evaluation should proceed, this model adopts the interdependence of each stage, with the learners at the center. It may be difficult to interpret and implement in certain societies or cultures, as there will be conflict over which stage should go first, and when and how each should be applied. With progressive innovation and trials, this model can go a long way toward being a competitive curriculum evaluation model amongst scholars.
17. LAWTON’S CULTURAL ANALYSIS MODEL
Proponent: Denis Lawton
Purpose: A reaction against what he sees as the danger of the Behavioural Objectives Model/Tyler Model.
Discuss: It provides a five-stage flow-chart for curriculum planning. The first and second stages of the model deal with the need to achieve clarity about the aims of education, and with questions about knowledge and values, which should be the concern of education irrespective of the kind of society; these questions need to be considered by curriculum planners. Lawton’s contribution was to demonstrate how wider cultural issues and political ideologies shape curriculum thinking.
1. (Philosophical Aims) Reality, knowledge and core values.
2. (Sociology and Culture) Adjustment in society, rapid social change, and compatibility with society and knowledge.
3. (Culture) Selection of data (content) from culture.
4. (Psychology and Method) Delivery of content (teaching methodology) and the needs and level of the students.
5. (Evaluation) Feedback.
Comment: Lawton’s Cultural Analysis Model (LCAM), with its focus on cultural systems, does help to develop the full extent of possible curriculum space and indicates what may be missed through over-specialization.
18. DAVIS' PROCESS MODEL
This model provides a simple overview of the processes involved in curriculum evaluation. It is suitable for use by individual teachers. The first stage of this model involves what Davis (1980) calls the delineating sub-process. No investigation of classrooms or curricula will be able to capture the total picture, so decisions must be made which structure and focus the evaluation. Evaluators should begin by asking for whom the evaluation is intended and what that audience wants to find out. Examples of prospective audiences might include:
 teachers
 senior administrators (senior masters/mistresses, deputies, principals)
 Ministry of Education officials
 parent and community groups
 commercial organizations
The type of information will also vary and could include:
 teacher attitudes
 student performance
 community perceptions
 organizational structures
 curriculum performance
 strategy selection
Such decisions need to be made in consultation. The types of questions which need to be asked have been comprehensively documented by Hughes et al. as part of their work on the Teachers as Evaluators Project (CDC 1982, pp. 39–42).
19. SYSTEMS DESIGN MODEL
Proponents: Dick, W., & Carey, L (1996)
Purpose: Systematic Design of Instruction
Discuss: The Dick & Carey model is quite popular in the e-learning literature and within academic "instructional design shops". It can be used at two different levels: (1) as a general guideline and starting point for thinking about one’s own design rules, and (2) in full detail, including its behaviourist/cognitivist background. Dick and Carey’s detailed model is based on the idea that instruction can be broken down into smaller components that, in pedagogical terms, can be described as measurable knowledge and skills. The Dick and Carey model, which has been published in several versions, contains about ten elements, as seen in the chart above. According to Steven J. McGriff (2001), some of the key instructional design terms are interpreted as follows in the Dick and Carey model:
 performance objectives: a statement of what the learners will be expected to do when they have completed a specified course of instruction, stated in terms of observable performances (see also Mager)
 instructional analysis: the procedures applied to an instructional goal in order to identify the relevant skills, their subordinate skills, and the information required for a student to achieve the goal
 instructional strategy: an overall plan of activities to achieve an instructional goal; includes the sequence of intermediate objectives and the learning activities leading to the instructional goal
 hierarchical analysis: a technique used with goals in the intellectual skills domain to identify the critical subordinate skills needed to achieve the goal, and their inter-relationships
Figure taken from Hee-Sun Lee & Soo-Young Lee's presentation of the Dick and Carey Model (2004)
Merits
 Its branches of analysis and breakdown make this model easy to interpret.
 Contents are decongested.
Demerits
 The model was developed with e-learning studies and content in mind, which poses a limitation on this evaluation model’s efficacy in the classroom situation.
 There may be difficulty of implementation.
 It requires professionals to interpret.
Comment: The model’s branches and breakdown make it look simple to implement, but it is ambiguous, and some of its contents are closely related.
20. STAKE’S RESPONSIVE MODEL
Proponent: Robert Stake (1970)
Purpose: A system for carrying out evaluation in education in the 1970s (Popham, 1995).
Discuss: Stake’s responsive model is the model that ā€œsacrifices some precision in measurement,
hopefully to increase the usefulness of findings to persons in and around the programā€ (Stake,
2011, p.8). The evaluation is considered to be responsive ā€œif it orients more directly to program
activities than to program intents; responds to audience requirement for information; and if the
different value-perspectives present are referred to in reporting the success and failure of the
programā€ (Stake, 1975, p.14). In responsive model, the evaluator is a full, subjective partner in
the educational program who is really highly involved and interactive. The evaluator’s role is to
provide an avenue for continued communication and feedback during the evaluation process
(Stake, 1975). According to Stake, there is no single true value to anything, but the value is in
the eye of the beholder. It means that there may be many valid interpretations of the same events,
based on a person’s point of view, interest, and beliefs. The duty of the evaluator is collecting
the views, the opinions of people in and around the program (Stake, 1983).
Merits
 In responsive evaluations, questions are allowed to emerge during the evaluation process rather than being formulated in advance.
 This model helps evaluators to acquire a rapid understanding of the program and to
determine which issues and concerns are the most important.
 The responsive evaluation uses content-rich information to describe the program in the
way that is readily accessible to audiences (Stake, 1983; Hurteau & Nadeau, 1985).
 Furthermore, the responsive evaluation provides audiences with the chance to react to the
evaluator’s feedback and interact with the evaluator regarding their issues and concerns
(Paolucci-Whitcomb, Bright, Carlson, & Meyers, 1987). In other words, the values and
perspectives held by different audiences are explicitly recognized, which provides a
context to examine different concerns.
 It produces evaluation accessible to a large variety of stakeholders.
Demerits
 The application of the model requires much time as the process of evaluation using the
model takes a long time (Popham, 1995).
 It is not easy to apply the model to evaluate educational programs if the evaluator is not
an experienced one (Hurteau & Nadeau, 1985).
 The role of the evaluator is ambiguous and in this case the evaluator ā€œserves as a resource
person rather than a researcherā€ (Popham, 1995, p. 3).
 Finally, the model is very flexible; as a result, it may not be easy to maintain the focus of the evaluation, which may result in a failure to answer specific questions.
Comment: This model of curriculum evaluation is responsive to stakeholders’ concerns and yields accessible, content-rich findings; its flexibility and time demands, however, mean that an experienced evaluator is needed to keep the evaluation focused.
CONCLUSION
Educational programs are inherently about change: changing learners’ knowledge, skills, or
attitudes; changing educational structures; developing educational leaders; and so forth. The
educators who design and implement those programs know better than most just how complex
the programs are, and such complexity poses a considerable challenge to effective program
evaluation.
The proponents of the reviewed models are professionals in their respective fields, proffering solutions as they apply to evaluation and its processes. Since no curriculum evaluation model is perfect, and given the growing needs of a dynamic society, existing models of curriculum evaluation should be reviewed and new ones created, just as evaluation and curriculum themselves evolve. Through this, diverse measures and ways will be developed to address the global challenges facing curriculum evaluation presented by the dynamism of nature.
REFERENCE
Alliger, G. M., Tannenbaum, S. I., Bennett, W. Jr., Traver, H., & Shotland, A. (1997). A meta-analysis on the relations among training criteria. Personnel Psychology, 50, 341–358.
Armstrong, P. (2010). Bloom’s taxonomy. Vanderbilt University Center for Teaching. Retrieved 22/02/2021 from https://cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/
Anderson, L. W., Krathwohl, D. R., & Bloom, B. S. (2001). A taxonomy for learning, teaching
and assessing: A revision of Bloom’s taxonomy of educational objectives. New York:
Longman
Bates, R. (2004) A critical analysis of evaluation practice: The Kirkpatrick model and the
principle of beneficence Evaluation and Program Planning 27, 341-347
Bersin J., (2006) High-Impact Learning Measurement: Best Practices, Models, and Business-
Driven Solutions for the Measurement and Evaluation of Corporate Training. [Online]
Available from http://www.bersin.com
Biggs, J. (2003). Aligning teaching and assessing to course objectives. Teaching and Learning in Higher Education: New Trends and Innovations, University of Aveiro, April 2003.
Bloom, B. S., Englehart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy
of educational objectives: Handbook I: Cognitive domain. New York: David McKay.
Bloom, B. S. (1965). Taxonomy of educational objectives: The classification of educational goals: Handbook I, Cognitive domain. In Alan C. (2006), Bloom’s taxonomy of learning domains. Retrieved 20/02/2021 from https://www.businessballs.com/self-awareness/blooms-taxonomy/
Bloom, B. S. (1969). Taxonomy of educational objectives: The classification of educational goals: Handbook I, Cognitive domain. New York: McKay.
Curriculum Development Centre (1982). Curriculum evaluation: How it can be done. Canberra: CDC.
Pusca, D., & Eiliat, H. (2012). Study on implementation of tablet computers in engineering. The 6th Regional Conference on Engineering and Business Education, Sibiu, Romania.
Davis, E. (1980). Teachers as curriculum evaluators. Sydney: George Allen and Unwin.
Dick, W., & Carey, L. (1996). The Systematic Design of Instruction, (4th Ed.). New York: Haper
Collins College Publishers.
Dick, W., Carey, L., & Carey, J. O., (2001). The systematic design of instruction (5th ed.). New
York: Addison-Wesley, Longman.
Dick, W., and Carey, L. (2004). The Systematic Design of Instruction. Allyn & Bacon; 6th
edition. ISBN 0205412742
Essays, UK. (November 2013). A continuum of curriculum development models: The Wheeler model. Retrieved from https://www.ukessays.com/essays/education/a-continuum-of-curriculum-development-models.php?cref=1
Eisner, E. W. (1975, March). The perceptive eye: Toward a reformation of educational
evaluation. Invited address, Division B, Curriculum and Objectives, American
Educational Research Association, Washington, DC.
Eisner, E. W. (1976). Educational connoisseurship and criticism: Their form and functions in
educational evaluation. Journal of Aesthetic Education, 10(3/4), 135-150.
Eisner, E. (1979), The Educational Imagination, MacMillan Publishing Company, New York.
Eisner, E. W. (1983). Educational connoisseurship and criticism: Their form and functions in
educational evaluation. In G. F. Madaus, M. Scriven, & D. L. Stufflebeam (Eds.),
Evaluation models. Boston: Kluwer-Nijhoff.
Eisner, E. W. (1985). The Art of educational evaluation: A Personal view. Philadelphia: The
Falmer Press, Taylor & Francis Inc.
Flinders, D. J., & Eisner, E. W. (2000). Educational criticism as a form of qualitative inquiry.
Hunkins, F. (1980). Curriculum development: Program improvement. Columbus, Ohio: Merrill.
Chaudhary, G. K., & Kalia, R. (2015). Development curriculum and teaching models of curriculum design for teaching institutes. International Journal of Physical Education, Sports and Health, 1(4), 57–59.
Kaufman, R. A. (1969). Toward educational system planning: Alice in Educational and
Audiovisual Instructor, 14, 47–48.
Keating, S. (2006). Curriculum development and evaluation in nursing. Philadelphia,
Pennsylvania: Lippincott Williams & Wilkins.
Kirkpatrick, D. L., Kirkpatrick, J. D. & Kirkpatrick, W.K. (2009). The Kirkpatrick model.
Kirkpatrick partners
Koetting, J. R. (1988). Educational connoisseurship and educational criticism: Pushing beyond
information and effectiveness. Fifth Annual Open Forum: The Foundational Issues of
the Field (pp. 442-457). Paper presented at the Annual Meeting of the Association for
Educational Communications and Technology. New Orleans, LA.
Kurt, S. (2018). ADDIE Model: instructional design. Retrieved September 7, 2018, from
https://educationaltechnology.net/the-addie-model-instructional-design/
Li, L., & Tripathy, R. (2017). Logic models for curriculum evaluation. Retrieved February 25, 2021 from https://www.evalu-ate.org/blog/tripathy-li-2017-6/
McMahon & Thakore (2006) Coherence between assessment, strategies and intended learning
outcomes in an educational programme. Retrieved from
http://www.ucdoer.ie/index.php/Using_Biggs%27_model_of_constructive_alignment_
in_curriculum_design/introduction
Meek, A. (1993). On setting the highest standards: A conversation with Ralph Tyler. Educational
Leadership, 50, 83-86.
Newlyn, D. (2016). Cyclical curriculum theory: Its place in the development of contemporary
law units. International Journal of Humanities and Social Science Research, 2, 59-62.
Prideaux, D. (2003). Curriculum design: ABC of learning and teaching in medicine. British
Medical Journal, 326 (7383): 268-270
Provus, M. M. (1971). Educational evaluation and decision making. Itasca, IL: Peacock.
Seiffert, M. (1986), "Outsiders Helping Teachers", Curriculum Perspectives 6, 2, 37-40.
Stake, R. E. (1967). The countenance of educational evaluation. Teachers College Record, 68,
523–540.
Stamm, Randy L. & Howlett Bernadette (2001), Creating Effective Course Content in WebCT,
An Instructional Design Model, PDF (retrieved 16:07, 25 February 2021).
Taba, H. (1962). Curriculum development: Theory and practice. New York: Harcourt, Brace & World.
Stenhouse, L (1975). An introduction to curriculum research and development. London:
Heinemann.
Stufflebeam, D. L. (2007). CIPP evaluation model checklist. Second Edition. Western Michigan
University Evaluation Center.
Tyler, R. W. (1950). Basic principles of curriculum and instruction. Chicago: University of
Chicago Press.
Vallance, E. (1973). Aesthetic criticism and curriculum description. Ph.D. dissertation, Stanford
University.
Woods, J. D. (1988). Curriculum Evaluation Models: Practical Applications for Teachers.
Australian Journal of Teacher Education, 13(1).
http://dx.doi.org/10.14221/ajte.1988v13n2.1

A REVIEW MODELS OF CURRICULUM EVALUATION

  • 1.
    A REVIEW: MODELS OFCURRICULUM EVALUATION Presented by OKPARAUGO OBINNA JOSEPH CURRICULUM EVALUATION DEPARTMENT OF EDUCATION FOUNDATION FACULTY OF EDUCATION SCHOOL OF POST GRADUATE STUDIES FEDERAL UNIVERSITY, DUTSIN-MA KATSINA STATE FEBRUARY, 2021
  • 2.
    INTRODUCTION With the dynamicnature of the global sphere and the place of education as the yardstick adopted to achieving global and national goals and objectives, it is a good time for evaluators to critically appraise their program evaluation approaches and decide which ones are worthiest of continued application and further development. It is equally important to decide which approaches are best abandoned. In this spirit, this essay document identifies and reviews twenty models often employed to evaluate educational programs. These models, in varying degrees, are unique and cover most educational program evaluation efforts. These models of evaluation are efforts to have education placed on the right path for global uniformity. Evaluation is a systematic investigation of the value of a program. More specifically an evaluation is a process of delineating, obtaining, reporting, and applying descriptive and judgmental information about some object’s merit, worth, probity, and significance. A sound evaluation model provides a link to evaluation theory, a structure for planning evaluations, a framework for collaboration, a common evaluation language, a procedural guide, and standards for judging evaluation. The scope and focus of evaluation generally, and of curriculum evaluation in particular, has changed markedly over' recent' times. With the move towards school-based curriculum development attention has shifted away from measurement and testing alone. More emphasis is now being placed upon a growing number of facets of curriculum development, reflecting the need to collect information and make judgements about all aspects of curriculum activities from planning to implementation. While curriculum theorists and some administrators have realized the significance of this shift many teachers still appear to feel that curriculum evaluation activities are something which do not directly concern them. 
However, the general public, as well as the authorities, expect teachers to know about the effectiveness of their teaching processes and programmes. Given the need, why is it, then, that teachers may not become as involved in evaluation as we might like? Hunkins (1980, p. 297) suggests that it might be because the teacher has to be: "the doer, the person who reflects on his own behaviour during the planning and implementation phases; the observer of the students and the resources used during the implementation; the judge who receives and interprets the data collected; and the actor who acts upon and makes informed decisions based upon the data collected". Expressed this way, it does appear that this task may simply be too onerous when forced to compete against all the other activities in which teachers must engage. Seiffert (1986, p. 37) expands on this point by noting that "... there are limitations to the amount and nature of the evaluative role that a teacher may take. First, a teacher's life is a busy one, and time constraints
will limit the amount of effort that most teachers may put into evaluation. Second, because a teacher is a teacher, and thus a significant person in the learning process, her role as evaluator will be limited. It is possible to be too closely involved in a situation, politically and emotionally, to ask questions that might challenge one's own interests." We then need to be able to discuss our findings in such a way that individuals do not feel threatened, so that positive and constructive evaluation can be made.

Twenty models have been selected for review in this paper: the Educational Connoisseurship and Criticism Model, the Logic Model, Tyler's Model, Stufflebeam's CIPP Model, Stake's Model, Kaufman Roger's Model, the Goal-free Evaluation Model, Kirkpatrick's Four Levels Model, the Product Model, the Process Model, ADDIE, Bloom's Taxonomy Model, Taba's Model, Biggs' Model of Constructive Alignment, Wheeler's Model, the Curriculum-Instruction-Assessment (CIA) Triad Model, Lawton's Cultural Analysis Model, Davis' Process Model, the Systems Design Model and Stake's Responsive Model. Each of the twenty selected curriculum evaluation models is reviewed in turn, considering its merits and demerits, followed by a comment on the model.

CURRICULUM EVALUATION MODELS

1. EDUCATIONAL CONNOISSEURSHIP AND CRITICISM MODEL
Proponent: Elliot Eisner (1975)
Purpose: To describe, critically appraise, and illuminate a particular program's merits.
Discuss: The Educational Connoisseurship and Criticism Model was developed by Eisner on the basis of the expertise-oriented program evaluation approach, which is grounded in the professional expertise of the evaluators who appraise an institution, program, product or activity (Eisner, 1976). This approach can be used in contexts ranging from education to other fields, in accordance with the evaluand and the expertise of the evaluator.
In the context of program evaluation, this approach is represented by the Educational Connoisseurship and Criticism Model. "Educational Connoisseurship" and "Educational Criticism" are the two basic concepts of the model, and both are related to art. According to Eisner (1976, 1985), the aim of the expertise, which he defines as "the art of appreciation and evaluation", is to reveal awareness of the qualities composing a process or an object and to emphasize them. When this model is applied
in education, the quality of the program, students' activities, the quality of education, learning processes, equipment and so forth can be focused upon and perceived in the connoisseurship process. Criticism, on the other hand, was defined by Eisner (1976, 1985) as "the art of disclosing the quality of events or objects that connoisseurship perceives". In order to share the connoisseurship, criticism is required. By emphasizing that teaching requires artistic skills, Eisner stated that education is a cultural art and a process that differs from one individual to another and from one environment to another (1985). In this context, he defined the aim of educational evaluation not only as reviewing products or evaluating activities within the process, but also as increasing the skill that a teacher would gain. Contrary to the common meaning of criticism, rather than making negative comments, criticism here refers to reproducing the perception of the object. Like an art critic, who attempts to provide different viewpoints on a sculpture or painting and to make them comprehensible, an education critic wants to reveal the events in the class, such as class rules, the quality of education, and changes in students' behaviours. According to Eisner (1976), an expert does not take on the role of a critic; rather, he evaluates and appreciates the works. In Eisner's (1976) model, the program evaluator resembles the art expert and the evaluation process resembles art criticism. In this context, while an evaluator is doing educational criticism on a program, class or school, he first describes what he sees, then interprets, and lastly evaluates (Eisner, 1976, 1985). Eisner's model develops in three dimensions:
Descriptive Dimension: According to Eisner (1976), the descriptive dimension of educational criticism is concerned with describing the current state of the program, class, school, etc.
Interpretative Dimension: Eisner (1985) stated that the interpretative dimension of educational criticism concerns the attempt to understand the meaning and significance of the many activities in a social environment. This dimension draws on the expert's ability to use multiple theories, viewpoints and models while interpreting activities in educational environments (Koetting, 1988). For instance, a critic should answer interpretive questions such as how the teacher and students interpret the raising of hands in class, or what the class environment means for each participant.
Evaluative Dimension: The last dimension of educational criticism is evaluation. In this dimension, the educational significance and effect of the interpreted experiences and activities are evaluated. During this process, there should be educational criteria for judging the experience. According to Koetting (1988), this points to the normative feature of educational criticism.
Merits:
• It exploits the particular expertise and finely developed insights of persons who have devoted much time and effort to the study of a precise area.
• It prescribes specific and detailed steps of evaluation.
• It promotes teacher development, unlike others that focus on learners and learning content.
Demerits:
• It is dependent on the expertise and qualifications of the particular expert doing the program evaluation, leaving room for much subjectivity.
• It segregates science-oriented educational processes, as emphasis is laid on art-oriented educational processes.
• It requires experts throughout the process.
• The model is cumbersome. Since evaluation proceeds in stages, some of which would not otherwise require expertise, the model is limited in that expertise is required throughout for the process to be effective.
Comment: Eisner's Connoisseurship and Criticism Model upholds the evaluation of educational processes solely by professionals, with the promise of keeping a trained eye on the content and applying the requisite measures to yield the expected results. By its process, this evaluation model touches every aspect of education and is not limited to the classroom situation alone. Eisner's emphasis on expertise gives the teacher room to evaluate even himself, thereby developing his knowledge, skills and methods. The model does not focus on the learners or the learning content alone, but is applicable to every facet of the educational teaching and learning process.

2. THE LOGIC MODEL
Purpose: To precisely describe the mechanisms behind a program's effects.
Discuss: A Logic Model is a structured description of how a specific program achieves an intended learning outcome. The influence of systems theory on the Logic Model approach to evaluation can be seen in its careful attention to the relationships between program components and the components' relationships to the program's context (Frechtling, 2007). In this model, evaluation starts with inputs. The existence of inputs guarantees the occurrence of activities; the presence and occurrence of activities determine outputs; and outputs yield results, which are the outcomes (Linlin and Rachel, 2017).
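The Inputs → Activities → Outputs → Outcomes chain lends itself to a simple data structure. The sketch below is only an illustration in Python, with invented program entries; it also checks the Logic Model's structural rule that each Activity must have at least one Output:

```python
from dataclasses import dataclass, field

# Hedged sketch of the Logic Model chain: Inputs feed Activities,
# Activities produce Outputs, and Outputs lead toward Outcomes.
# All example entries below are invented, not taken from the review.
@dataclass
class LogicModel:
    inputs: list = field(default_factory=list)
    activities: dict = field(default_factory=dict)   # activity -> list of outputs
    outcomes: list = field(default_factory=list)

    def unlinked_activities(self):
        """Return activities violating the rule that each Activity
        must have at least one Output."""
        return [name for name, outputs in self.activities.items() if not outputs]

program = LogicModel(
    inputs=["funding", "faculty time", "educational technology"],
    activities={
        "skills workshop": ["sessions delivered", "attendance records"],
        "online module": [],   # no Output recorded yet, so it gets flagged
    },
    outcomes=["learners meet the performance standard on the skills test"],
)
print(program.unlinked_activities())  # -> ['online module']
```

A planner could use such a check during the planning phase to spot activities whose immediate results were never defined.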
Inputs: A Logic Model's Inputs comprise all relevant resources, both material and intellectual, expected to be or actually available to an educational program. Inputs may include funding sources (already on hand or to be acquired), facilities, faculty skills, faculty time, staff time, staff skills, educational technology, and relevant elements of institutional culture.

(Diagram: Inputs → Activities → Outputs → Outcomes)
Activities: The second component of a Logic Model details the Activities: the set of 'treatments', strategies, innovations or changes planned for the educational program.
Outputs: Outputs, the Logic Model's third component, are defined as indicators that one of the program's activities, or part of an activity, is underway or completed and that something happened. The Logic Model structure dictates that each Activity must have at least one Output, though a single Output may be linked to more than one Activity.
Outcomes: Outcomes define the short-term, medium-term, and longer-range changes intended as a result of the program's activities. A program's Outcomes may include learners' demonstration of knowledge or skill acquisition (e.g., meeting a performance standard on a relevant knowledge test or demonstrating specified skills), program participants' implementation of new knowledge or skills in practice, or changes in the health status of program participants' patients.
Merits of Logic Model:
• The Logic Model can be very useful during the planning phases of a new educational project or innovation, or when a program is being revised. Because it requires that educational planners explicitly define the intended links between the program resources (Inputs), program strategies or treatments (Activities), the immediate results of program activities (Outputs), and the desired program accomplishments (Outcomes), using the Logic Model can assure that the educational program, once implemented, actually focuses on the intended outcomes.
• It takes into account the elements surrounding the planned change (the program's context), how those elements are related to each other, and how the program's social, cultural, and political context is related to the planned educational program or innovation.
Demerits of Logic Model:
• It will not generate evidence for causal linkages of program activities to outcomes.
• It will not allow the testing of competing hypotheses for the causes of observed outcomes.
Comment: The content and processes outlined in this model can be used as a map to measure a program's achievement of its expected goals and objectives. The model also indicates the kind of data required to ascertain whether the program is achieving the desired goals and objectives while still in progress. If carefully implemented, it can generate ample descriptive data about the program and its subsequent outcomes.

3. TYLER'S MODEL (1949)
Purpose: To measure students' progress towards objectives.
Proponent: Ralph Tyler
Discuss: Tyler's goal attainment model, sometimes called the objectives-centered model, is the basis for most common models in curriculum design, development and evaluation. The Tyler model comprises four major parts: 1) defining the objectives of the learning experience; 2) identifying learning activities for meeting the defined objectives; 3) organizing the learning activities for attaining the defined objectives; and 4) evaluating and assessing the learning experiences. Tyler's model begins by defining the objectives of the learning experience. These objectives must have relevance to the field of study and to the overall curriculum (Keating, 2006). Tyler's model obtains the curriculum objectives from three sources: 1) the student, 2) the society, and 3) the subject matter. When defining the objectives of a learning experience, Tyler emphasizes the input of students, the community and the subject content, believing that curriculum objectives which do not address the needs and interests of all three will not make the best curriculum. The second part of Tyler's model involves identifying learning activities that will allow students to meet the defined objectives. To emphasize the importance of identifying learning activities that meet the defined objectives, Tyler states that "the important thing is for students to discover content that is useful and meaningful to them" (Meek, 1993, p. 83). In this way Tyler is a strong supporter of the student-centered approach to learning.

Tyler's Planning Model
• What educational goals should the school seek to attain?
• How can learning experiences be selected which are likely to be useful in attaining these objectives?
• How can learning experiences be organised for effective instruction?
• How can the effectiveness of learning experiences be evaluated?
Merits:
1. It evaluates the degree to which the pre-defined goals and objectives have been attained.
2. It is learner-centered, and the learner happens to be the reason for the curriculum and every other process of education.
3. Tyler's model is product-focused.
Demerits:
1. It ignores process.
2. It is difficult and time-consuming to construct behavioural objectives.
3. It is a cumbersome process, and it is difficult to reach consensus easily among the various stakeholder groups.
4. It is too restrictive and covers a small range of student skills and knowledge.
5. It is too dependent on behavioural objectives, and it is difficult to state plainly, as behavioural objectives, those objectives that cover non-specific skills such as critical thinking and problem solving, or objectives related to value-acquiring processes (Prideaux, 2003).
6. The objectives in Tyler's model are too student-centered, and therefore the teachers are not given any opportunity to manipulate the learning experiences as they see fit to evoke the kind of learning outcome desired.
Comment: Tyler's objectives-centered model, having come a long way in practice by individuals and institutions over the years, stands out as a significant evaluation model in the educational process. The demerits posed by this model are not out of place, for no man is an island; the observed limitations gave rise to other forms of evaluative models for educational purposes. Also, since the learner forms the base of every educational activity, Tyler's objectives model, which centers on the learner, is poised to achieve what the learner should receive and the expected manifestations of desired outcomes.
Meanwhile, this model cannot be effective without the teacher: even though the model circles around the learner and his activities, it is the teacher who activates those activities.

(Diagram: Objectives → Selecting learning experiences → Organising learning experiences → Evaluation of students' performance)

4. STUFFLEBEAM'S CIPP MODEL (1983)
Proponent: Daniel Stufflebeam
Context, Input, Process and Product evaluation.
Purpose: Decision-making.
Discuss: This model supports the facilitation of rational and continuing decision-making; through evaluation, it identifies potential alternatives and sets up activity quality-control systems. One very useful model for educational evaluation is the CIPP approach, developed by Stufflebeam (1983). The model provides a systematic way of looking at many different aspects of the evaluation process. The concept of evaluation underlying the CIPP Model is that "evaluations should assess and report an entity's merit (its quality), worth (in meeting needs of targeted beneficiaries), probity (its integrity, honesty, and freedom from graft, fraud, and abuse), and significance (its importance beyond the entity's setting or time frame), and should also present lessons learned" (Stufflebeam, 2007). The CIPP evaluation model can be used in both formative and summative modes (Stufflebeam, 2003b). In the formative mode, based on the four core concepts, the model guides an evaluator to ask: (a) What needs to be done? (b) How should it be done? (c) Is it being done? (d) Is it succeeding? In the summative mode, the evaluator uses the already collected information to address the following retrospective questions: (a) Were important needs addressed? (b) Was the effort guided by a defensible plan and budget? (c) Was the service design executed competently and modified as needed? (d) Did the effort succeed? Overall, the purpose of CIPP evaluation is not to prove but to improve (Stufflebeam, 2003b), and hence it is a useful tool in curriculum or project evaluation.
It has been used to evaluate the effectiveness of introducing academic advisors for student guidance and to evaluate programmes at master's and undergraduate levels (Allahvirdiyani, 2011).
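The paired formative and summative questions lend themselves to a simple checklist. The sketch below (Python, purely illustrative) stores each CIPP component with its formative/summative question pair as quoted from Stufflebeam (2003b):

```python
# The four CIPP components paired with their formative and summative
# questions (after Stufflebeam, 2003b). A lookup table for building an
# evaluator's checklist, not an implementation of the model itself.
CIPP = {
    "Context": ("What needs to be done?", "Were important needs addressed?"),
    "Input":   ("How should it be done?",
                "Was the effort guided by a defensible plan and budget?"),
    "Process": ("Is it being done?",
                "Was the service design executed competently and modified as needed?"),
    "Product": ("Is it succeeding?", "Did the effort succeed?"),
}

def checklist(mode):
    """Return the evaluator's questions for a 'formative' or 'summative' pass."""
    index = 0 if mode == "formative" else 1
    return [questions[index] for questions in CIPP.values()]

print(checklist("formative"))  # the four prospective questions, in order
```

Pairing the two modes per component makes the model's formative-to-summative symmetry explicit: each retrospective question mirrors a prospective one.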
Merits:
• The model was not designed with any specific program or solution in mind; thus, it can easily be applied to multiple evaluation situations.
• Its comprehensive approach to evaluation can be applied from program planning to program outcomes and the fulfilment of core values.
• It supports rational decision-making among alternatives.
• The model is well established and has a long history of applicability.
Demerits:
• The model could be said to blur the line between evaluation and other investigative processes such as needs assessment.
• It is not as widely known and applied in the performance improvement field as other models.
• It undervalues students' aims.
Comment: It gives room for multiple observers and for the collection of information throughout the program. There is high independence and interdependence of each stage, with feedback while the process is ongoing. The CIPP model is thus widely used in programme and curriculum evaluation.

5. STAKE'S MODEL (1967)
Proponent: Robert Stake
Purpose: The countenance model aims to capture the complexity of an educational innovation.
Discuss: The countenance model aims to capture the complexity of an educational innovation or change by comparing intended and observed outcomes at varying levels of operation. It gathers three sets of data:
1. Antecedents: conditions existing before implementation.
2. Transactions: activities occurring during implementation.
3. Outcomes: results after implementation.
The evaluator must describe the program fully and judge the outcomes against external standards. Stake divides descriptive acts according to whether they refer to what was intended or what was actually observed. He argues that both intentions and what actually took place must be fully described. He then divides judgmental acts according to whether they refer to the standards used in reaching judgements or to the actual judgements themselves. He assumes the existence of a rationale for guiding the
design of a curriculum. Stake wrote that greater emphasis should be placed on description, and that judgement was essentially the collection of data. An antecedent is any condition existing prior to teaching and learning which may relate to outcomes. Transactions are the countless encounters of student with teacher, student with student, author with reader, parent with counsellor. Outcomes include measurements of the impact of instruction on learners and others.

(Diagram: Antecedents are conditions existing prior to curriculum evaluation, such as students' interests or prior learning, the learning environment in the institution, and the traditions and values of the institution. Transactions, the process of education, are the interactions that occur between teachers and students, students and students, students and curricular materials, and students and the educational environment. For each of Antecedents, Transactions and Outcomes, a Description Matrix records Intents and Observations, and a Judgement Matrix records Standards and Judgments, all guided by the Rationale.)
(The diagram's Outcomes cell covers learning outcomes, the impact of curriculum implementation on students, teachers, administrators and the community, and immediate versus long-range outcomes.)

Merits:
• This model of evaluation supports the application of various theories of education in its implementation.
• Communication between all of the players in the curriculum process is good, notwithstanding who or what they are.
Demerits:
• It stirs up value conflicts.
• It ignores causes.
• The disconnection between the stages, and the independent communication of each stage, make progress and cohesion difficult.
Comment: Stake's model of curriculum evaluation is more than just an evaluation process; it also looks at the development of the curriculum. When using this model, it is necessary to compare the developed curriculum with what actually happens in the classroom.

6. KIRKPATRICK'S FOUR LEVELS MODEL
Purpose: Assessing training effectiveness, often by using the four-level model.
Proponent: Donald Kirkpatrick (1994).
Discuss: In Kirkpatrick's four-level model, each successive evaluation level is built on information provided by the lower level. Kirkpatrick's model, also known as the hierarchy model, holds that evaluation should always begin with level one and then, as time and budget allow, move sequentially through levels two, three, and four. Information from each prior level serves as a base for the next level's evaluation.
Level 1 - Reaction
• Evaluation at this level measures how participants in a training program react to it.
• It attempts to answer questions regarding the participants' perceptions: Was the material relevant to their work? This type of evaluation is often called a "smile sheet".
• According to Kirkpatrick, every program should at least be evaluated at this level to provide for the improvement of the training program.
Level 2 - Learning
• Assessment at this level moves the evaluation beyond learner satisfaction and attempts to assess the extent to which students have advanced in skills, knowledge, or attitude. To assess the amount of learning that has occurred due to a training program, level-two evaluations often use tests conducted before training (a pretest) and after training (a posttest).
Level 3 - Transfer
• This level measures the transfer that has occurred in learners' behaviour due to the training program.
• Are the newly acquired skills, knowledge, or attitudes being used in the everyday environment of the learner?
Level 4 - Results
• This level measures the success of the program in terms that school and teacher can understand.
Merits:
• The model can be used to evaluate classroom training as well as eLearning.
• The model provides a logical structure and process to measure learning.
• When used in its entirety, it can give organizations an overall perspective of their training program and of the changes that must be made.
• It has been used to gain a deep understanding of how eLearning affects learning, and whether there is a significant difference in the way learners learn.
Demerits:
• Despite its popularity, Kirkpatrick's model is not without its critics. Some argue that the model is too simple conceptually and does not take into account the wide range of organisational, individual, and design and delivery factors that can influence training effectiveness before, during, or after training.
• As Bates (2004) points out, contextual factors such as organisational learning cultures and values, support in the workplace for skill acquisition and behavioural change, and the adequacy of tools, equipment and supplies can greatly influence the effectiveness of both the process and the outcomes of training. Other detractors criticise the model's assumption of linear causality,
which assumes that positive reactions lead to greater learning, which in turn increases the likelihood of better transfer and, ultimately, more positive organisational results (Alliger et al., 1997).
• Training professionals also criticise the simplicity of the Kirkpatrick model on a practical level. Bersin (2006) observes how practitioners struggle routinely to apply the model fully. Since it offers no guidance about how to measure its levels and concepts, users often find it difficult to translate the model's different initiatives.
Comment: Kirkpatrick's model of curriculum evaluation, with its system of evaluating each process or stage before proceeding, is commendable, as evaluation is a continuous process at the beginning, in the middle, and at the end of a program. But the nature of this model will pose a challenge to the actualization of set goals and objectives in record time, as every curriculum process has a timeline and target objectives. Also, the evaluation of each stage, with the upward movement of the process as shown in the model's diagram, will be more difficult and is liable to be misapplied, as the result of a particular stage may not, on its own, be sufficient grounds to approve proceeding.

7. KAUFMAN ROGER'S MODEL
Proponents: Roger Kaufman and John M. Keller
Discuss: Roger Kaufman and John M. Keller published "Levels of evaluation: Beyond Kirkpatrick" in the winter 1994 edition of Human Resource Development Quarterly. This work became known as Kaufman's Five Levels of Evaluation and is commonly referred to as Kaufman's Model of Learning Evaluation. Kaufman's model is one of a number of learning evaluation models that build on the Kirkpatrick Model, one of the most popular and widely-used training evaluation models of all time.
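The hierarchical structure that Kaufman builds upon, in which each Kirkpatrick level can only be interpreted on top of data from the level below it, can be sketched as follows (an illustrative Python sketch, not part of either author's formulation; the data sets are invented):

```python
# Sketch of the four-level hierarchy: each level builds on information
# provided by the lower level, so a level is only usable if data exists
# for every level beneath it. Illustrative only.
LEVELS = ["Reaction", "Learning", "Transfer", "Results"]

def highest_usable_level(levels_with_data):
    """Walk up the hierarchy and stop at the first level lacking data."""
    reached = None
    for level in LEVELS:
        if level not in levels_with_data:
            break
        reached = level
    return reached

# Transfer data is missing, so the Results data has no base to build on.
print(highest_usable_level({"Reaction", "Learning", "Results"}))  # -> Learning
```

The sketch makes the critics' point concrete as well: the model encodes a strictly linear dependency, which is exactly the causal assumption Alliger et al. (1997) dispute.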
Kaufman's Five Levels of Evaluation is a response, or reaction, to Kirkpatrick's model and aims to improve upon it in various ways. With "Levels of evaluation: Beyond Kirkpatrick", Kaufman and Keller were aiming to develop a "more effective approach to evaluation" by using an "expanded concept of evaluation". Their aim was to develop Kirkpatrick's model to include result-related questions. They believed that this approach would
"contribute to continuous improvement by comparing intentions with results" (Kaufman and Keller, 1994).
Merits:
• Kaufman's model clearly illustrates the value of separating quality standards for your materials from standards for your delivery method.
• The other useful lesson to draw from Kaufman's model is the value of data besides the input you receive from learners.
Comment: This model, intended to close the lacuna observed in Kirkpatrick's model, keeps only the process of evaluation in check, without considering the main actors of the process: the learner and the teacher. Though in the graphic display of this process every stage is connected via a channel through which each stage's results are passed, that does not solve the problem of independent stages observed in Kirkpatrick's model. Improving the determination of the interdependence of the stages, and including the curriculum actors and activities in the process, should be considered to make this model more fitting, specific and goal-oriented.

8. GOAL-FREE EVALUATION (1973)
Proponent: Michael Scriven
Purpose: Goals are only a subset of anticipated effects.
Discuss: In the goal-free evaluation model, the evaluation looks at a program's actual effects on identified needs. In other words, program goals are not the criteria on which the evaluation is based. This is because actual effects reveal the true worth of the program, rather than simply its intended worth.

(Diagram: Effects = intended effects + unintended effects)
Scriven has become increasingly uneasy about the separation between goals and unintended outcomes, which he calls 'side-effects'. What should concern us is what side effects the program had, and evaluating those, whether or not they were intended. Scriven goes further to argue that considering the stated goals of a program when evaluating it can be not only unnecessary, but also dangerous. He suggests an alternative approach: evaluating the actual effects against a profile of demonstrated need. This is goal-free evaluation. Intended goals are different from real goals.
Merits:
• The goal-free model works best for qualitative evaluation, because the evaluator is looking at actual effects rather than anticipated effects.
• Scriven suggests using two goal-free evaluators, each working independently. In this way, the evaluation does not rely solely on the observations and interpretations of one person.
Demerits:
• Goal-free evaluation simply substitutes its own goals for those of the project.
• There is a chance that some of the most important effects will be missed.
Comment: Scriven broadened the perspective of curriculum evaluation and suggested that evaluators should not know the educational program's goals and objectives, in order for the evaluation not to be influenced by them. His goal-free method holds that evaluators should be free to look at processes and procedures, outcomes and unanticipated effects; evaluators should therefore be totally independent.

9. PRODUCT MODEL
Also known as the Behavioural Objectives Model.
Proponents: Tyler (1949), Bloom (1965)
Purpose: A model interested in the product of the curriculum.
Discuss: This model asks questions such as: What are the aims and objectives of the curriculum? Which learning experiences meet these aims and objectives? How can the extent to which these aims and objectives have been met be evaluated? How can these learning experiences be organised? These questions, answered rightly and with applicable measures, not only make for a contending and globally applicable model; they also drive further research and innovation in curriculum evaluation (adapted from Tyler, 1949).
Merits of Product Model:
• Avoidance of vague general statements of intent.
• Makes assessment more precise.
• Helps to select and structure content.
• Makes teachers aware of the different types and levels of learning involved in particular subjects.
• Provides guidance for teachers and learners about the skills to be mastered.
Demerits of Product Model:
• At lower levels, behavioural objectives may be trite and unnecessary.
• It is difficult to write satisfactory behavioural objectives for higher levels of learning.
• Specific behaviours are not appropriate for the affective domain.
• It discourages creativity for learner and teacher.
• It enshrines the psychology and philosophy of behaviourism.
• It makes the curriculum too subject- and exam-bound.

10. PROCESS MODEL
Proponent: Lawrence Stenhouse (1975)
Purpose: Focuses on teacher activities and the teacher's role, on student and learner activities (perhaps the most important feature), and on the conditions in which learning takes place.
Discuss: The Process Model emphasises means rather than ends. It proposes that the learner should have a part in deciding the nature of the learning activities, in an individualised atmosphere, on the assumption that each learner makes a unique response to learning experiences.
Merits of Process Model:
• Emphasis on the active roles of teachers and learners.
• Emphasis on learning skills.
• Emphasis on certain activities as important in themselves and for "life".
Demerits of Process Model:
• Neglect of considerations of appropriate content.
• Difficulty in applying the approach in some areas.
Comment: This model, having the centre of every curriculum process as its concentration, has a limitation: the curriculum does not depend entirely on the learner's experiences, choices and outcomes and the teacher's methods and competence. Including the evaluation of the environment, instructional materials, etc. in this model would improve and expand this model's style of curriculum evaluation.

11. ADDIE
Originally developed for the U.S. Army by the Center for Educational Technology at Florida State University, ADDIE was later implemented across all branches of the U.S. Armed Forces. The concept of instructional design can be traced back to as early as the 1950s, but it wasn't until 1975 that ADDIE was designed. The ADDIE model is a process used by training developers and instructional designers to plan and create effective learning experiences. "ADDIE" stands for Analyze, Design, Develop, Implement, and Evaluate. This sequence, however, does not impose a strict linear progression through the steps. Educators, instructional designers and training developers find this approach very useful because having clearly defined stages facilitates the implementation of effective training tools.
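Since the five ADDIE phases are usually listed in order yet do not impose a strict linear progression, the process can be pictured as a cycle. A hedged Python sketch follows; the `needs_revision` callback is an invented stand-in for the verdict of the Evaluate phase, not part of the model itself:

```python
# Hedged sketch of ADDIE as a cycle rather than a strict line: after
# Evaluate, a course may loop back through the phases for revision.
PHASES = ["Analyze", "Design", "Develop", "Implement", "Evaluate"]

def run_addie(needs_revision, max_cycles=3):
    """Walk the five phases, repeating the whole cycle while the
    Evaluate phase (modelled by needs_revision) calls for a revision."""
    log = []
    for cycle in range(1, max_cycles + 1):
        for phase in PHASES:           # the five phases, in order
            log.append(f"{cycle}:{phase}")
        if not needs_revision(cycle):  # Evaluate accepts the course
            break
    return log

# One revision pass: the course is revised once, then accepted.
history = run_addie(lambda cycle: cycle < 2)
print(history[-1])  # -> 2:Evaluate
```

The sketch shows why critics call strictly sequential ADDIE rigid: iteration only happens if the surrounding process explicitly feeds Evaluation results back into a new cycle.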
    The ADDIE model is generic enough that it can be used to create any type of learning experience for any learner. ADDIE is an acronym: Analysis, Design, Development, Implementation, Evaluation.
    1. Analysis: Before development starts, the ADDIE model recommends first analysing the current situation: get a clear picture of where everything currently stands in order to understand the gaps that need to be filled. A quality analysis helps identify learning goals and objectives. It also helps gather information about what the audience already knows and what they still need to learn. How is a good analysis performed? By asking good questions: who, what, why, where, when, and how?
    2. Design: In the Design phase, information from the Analysis phase is reviewed in order to make informed decisions about creating the learning program. This phase is often time-intensive and requires attention to detail. The Design phase helps to decide the specific learning objectives, the structure of the content, the mental processes required of participants, the knowledge or skills participants need to retain, the best tools to use, the videos or graphics to create, and the length of time for each lesson, to name a few of the essentials. In a nutshell, this is where all one's expertise as an instructional designer comes into play.
    3. Development: The Development phase is where one actually begins creating, or developing, the program. The content ideas should already have been decided in the previous Design phase; Development brings those content ideas to life. This means laying out the content and anything else that has to do with creating the actual end product for the learners. One major part of the Development phase is testing, which checks the degree and quality of the content to ascertain whether it will satisfy the intended learning objectives.
    4. Implementation
    With the objectives set, the design made, and the content selected, tested, and approved, learners are at this stage exposed to the content. During implementation, close attention is required to see whether any issues arise, and if so at what point, with what content, and why.
    5. Evaluation: The Evaluation phase is all about gathering important information to see whether the course needs to be revised and improved.
    Merits of the ADDIE Model
    • Commonly used and widely accepted
    • Proven to be effective for human learning
    • A foundation for other learning models
    • Time and cost are easy to measure
    Demerits of the ADDIE Model
    • A rigid linear process that must be followed in order
    • Time-consuming and costly
    • Inflexible in adapting to unforeseen project changes
    • Does not allow for iterative design
    Comment: The model is meant to be completed in sequential order, from Analysis to Evaluation. However, ADDIE is designed to be a flexible, continuous process of improvements and iterations.
    12. BLOOM'S TAXONOMY MODEL
    Proponent: Benjamin Bloom (1956)
    Purpose: Categorizing educational goals.
    Discuss: The Taxonomy of Educational Objectives, familiarly known as Bloom's Taxonomy, is a framework that has been applied by generations of K-12 teachers and college instructors in their teaching. Knowledge "involves the recall of specifics and universals, the recall of methods and processes, or the recall of a pattern, structure, or setting." Comprehension "refers to a type of understanding or apprehension such that the individual knows what is being communicated and can make use of the material or idea being communicated without necessarily relating it to other material or seeing its fullest implications." Application refers to the "use of abstractions in particular and concrete situations." Analysis represents the "breakdown of a communication into its constituent elements or parts such that the relative hierarchy of ideas is made clear and/or the relations between ideas expressed are made explicit."
    Synthesis involves the "putting together of elements and parts so as to form a whole." Evaluation engenders "judgments about the value of material and methods for given purposes."
    Merits
    • This model is good for cognitive development.
    • The application of analysis is a good centre for the translation of knowledge into skill development.
    • This model takes into consideration the three taxonomies of education, i.e. the cognitive, affective and psychomotor domains.
    Demerits
    • This model, having a pyramidal form, allows no internal evaluation of each stage and its progress. Evaluation should be flexible and present throughout the process to allow for innovations and corrections.
    Comments: This model is good for theoretical and practical learning outcomes, though its pyramidal nature poses a limitation to the free flow of evaluative measures. Since no evaluative model is perfect, there is room for extensive reconstruction of this model to accommodate flexibility, and the interdependence and yet independence of each stage, aiming at the successful outcome of the curriculum.
    13. TABA'S MODEL
    Proponent: Hilda Taba (1902-1967), an architect, curriculum theorist, curriculum reformer, and teacher educator. She was born in the small village of Kooraste, Estonia. Taba believed that there has to be a definite order in creating a curriculum.
    Purpose: Adequate development and implementation of the curriculum.
    Discuss: Hilda Taba is the developer of the Taba Model of learning. This model is used to enhance the thinking skills of students. Taba believed that there must be a process for evaluating student achievement of content after the content standards have been established and implemented.
    Merits of Taba's Model
    • Gifted students begin thinking of a concept, then dive deeper into that concept.
    • Focuses on open-ended questions rather than right/wrong questions.
    • The open-endedness requires more abstract thinking, a benefit to gifted students.
    • The questions and answers lend themselves to rich classroom discussion.
    • Student learning is easy to assess.
    Demerits of Taba's Model
    • Can be difficult for non-gifted students to grasp.
    • Difficult for heterogeneous classrooms.
    • Works well for fiction and non-fiction, but may be difficult to use in all subjects.
    Comment: Taba's Model is one of the models that take gifted and special-needs learners into consideration. Its concentration is on learners, with the aim that, for the outcome of evaluation to be successful, learners must be given the proper attention and content.
    14. BIGGS' MODEL OF CONSTRUCTIVE ALIGNMENT
    Proponent: John Biggs (2003)
    Purpose: The main theoretical underpinning of the outcomes-based curriculum is provided by Biggs (2003).
    Discuss: He calls the model constructive alignment, which he defines as the coherence between evaluation, teaching strategies and intended learning outcomes in an educational programme (McMahon & Thakore, 2006). The model requires alignment between the three key areas of the curriculum, namely the intended learning outcomes, what the student does in order to learn, and how the student is assessed. This is expressed in Figure 1, with a concrete example given as Figure 2.
    Merits
    • Critical reflection on teaching/learning activities.
    • Monitoring of evaluation and curriculum outcomes in stages.
    • An observation cycle of teaching and learning activities.
    Demerits
    • Too much concentration on teaching and learning.
    • The model is ambiguous.
    • Little emphasis on evaluation and outcome.
    Comment: Judging from the graphical representation above, this model is precise with respect to the activities of the teacher and learners in determining the outcome of the evaluation process. But in relation to evaluation it becomes ambiguous, which makes concentration on specific outcomes difficult and tedious.
    Figure 1: A Basic Model of an Aligned Curriculum. Figure 2: An Example of Constructive Alignment in a Curriculum.
    15. WHEELER'S MODEL
    Proponent: Wheeler
    Purpose: The curriculum should be a continuous cycle which is responsive to changes in the education sector and makes appropriate adjustments to account for these changes.
    Discuss: This model is an improvement upon Tyler's model. Instead of a linear model, Wheeler developed a cyclical one. Evaluation in Wheeler's model is not terminal, as feedback is obtained and helps with situation analysis as implementation progresses. Findings from the evaluation are fed back into the objectives and the goals, which influence the other stages. The model illustrates the dynamic nature of the process of curriculum development: it goes on as the needs and interests of society change, and the objectives change with them.
    Step 1: Selecting aims, goals and objectives. Selection must be relevant to the specific content area. This helps the planner to be focused about the direction of educational development.
    Step 2: Selecting learning experiences. These occur in the classroom; this step concerns the learner and their learning environment, so as to achieve the desired changes in pupils' behaviour.
    Step 3: Selecting content. This refers to the subject matter of teaching/learning, considering aspects such as significance, interest, learnability, etc.
    Step 4: Organising and integrating experiences. This step is important as these are connected to the teaching/learning process: organising learning activities based on pupils' experiences.
    Step 5: Evaluating the different phases, and examining whether the goals have been attained, through formative and summative assessment.
    Merits:
    • Gives room for feedback and situation analysis as curriculum implementation progresses.
    • Provides a logical sequence.
    Figure: The Wheeler Model, a cycle running from (1) selection of aims, goals and objectives, through (2) selection of learning experiences, (3) selection of content, and (4) organisation and integration of learning experiences, to (5) evaluation, and back to (1).
    • By this model, the curriculum is a continuous activity and is open to change as new information or practice becomes available.
    Demerits:
    • Evaluation may be too frequent, which may lead to distraction from the aims and objectives of the curriculum.
    • The flexible nature of this model can lead to the collection of irrelevant content.
    • Evaluation only takes place when the cycle completes, which means unwanted content cannot be detected for exclusion as the curriculum progresses.
    Comment: This model, with its flexible nature, will be easily adoptable in any society. Curriculum experts have to be conscious of the various stages of the model and check for content validity, since room for new information and innovation is a characteristic of this model. The model presents the curriculum process as a continuing activity which is constantly in a state of change as new information or practices become available. This flexibility to accommodate and align with change is good, since no society or culture is static.
    16. THE CURRICULUM-INSTRUCTION-ASSESSMENT (CIA) TRIAD MODEL
    Purpose: To understand how effectively knowledge is transferred from the instructor to the student in engineering theoretical and practical lessons.
    Discussion: The Curriculum-Instruction-Assessment triad, as shown in Figure 1, is used as a core model for understanding how effectively knowledge is transferred from the instructor to the student. The answer to "How People Learn" can differ based on how each part of the triad is conceptualized; in other words, the type of curriculum, the method of instruction, and the form of assessment are critical. Each element of the triad is explained based on the approach designed for engineering design courses at university level.
    Instruction: Refers to the methods of teaching, as well as the learning activities used to help students develop their understanding of the course content and the curriculum objectives. The instructions were transferred to the students by the instructor and teaching assistants.
    Merits
    • The people at the centre of this model give a first-hand means of ascertaining, during the program, the success or failure of any of its processes.
    • The interconnectivity of the various stages in this model shows its flexibility at different points for correction and innovation.
    • This model is advantageous for practical and skill-oriented learning and outcomes.
    Demerits
    • The model was tested only in higher institutions.
    • It is limited, and discriminates against other levels of education, such as the primary and secondary levels, which are the base and foundation on which the tertiary level is built.
    • Though the people involved will exhibit the expected outcome of the program, the measures or methods for carrying out the evaluation to determine this were not considered.
    Comment: Unlike other pyramidal concepts that go either upwards or downwards to depict how learning and evaluation should proceed, this model adopts the interdependence of the stages, each at the same time dependent on the learners at the centre. The model will be difficult to interpret and implement in certain societies or cultures, as there will be conflicting views on what should go first, when each part should be applied, and how. With progressive innovation and trials, this model will go a long way towards being a competitive curriculum evaluation model among scholars.
    17. LAWTON'S CULTURAL ANALYSIS MODEL
    Proponent: Denis Lawton
    Purpose: A reaction against what he sees as the danger of the Behavioural Objectives Model (the Tyler Model).
    Discuss: Lawton provides a five-stage flow chart for curriculum planning. The first and second stages of his model deal with the need to achieve clarity about the aims of education, and with questions about knowledge and values, which should be the concern of education irrespective of the kind of society; these are the questions that curriculum planners need to consider. His contribution was to demonstrate how wider cultural issues and political ideologies shape curriculum thinking.
    1. (Philosophical aims) Reality, knowledge and core values.
    2. (Sociology and culture) Adjustment in society, rapid social change, and compatibility with society and knowledge.
    3. (Culture) Selection of data (content) from the culture.
    4. (Psychology/method) Delivery of content (teaching methodology) and the needs and level of the students.
    5. (Evaluation) Feedback.
    Comment: Lawton's Cultural Analysis Model (LCAM), with its focus on cultural systems, does help to develop the full extent of possible curriculum space and to indicate what may be missed through over-specialization.
    18. DAVIS' PROCESS MODEL
    This model provides a simple overview of the processes involved in curriculum evaluation. It is suitable for use by individual teachers. The first stage of this model involves what Davis (1980) calls the delineating sub-process. No investigation of classrooms or curricula will be able to capture the total picture, so decisions must be made which structure and focus the evaluation. Evaluators should begin by asking for whom the evaluation is intended and what the audience wants to find out. Examples of prospective audiences might include:
    • teachers
    • senior administrators (senior masters/mistresses, deputies, principals)
    • Ministry of Education officials
    • parent and community groups
    • commercial organizations
    The type of information will also vary and could include:
    • teacher attitudes
    • student performance
    • community perceptions
    • organizational structures
    • curriculum performance
    • strategy selection
    Such decisions need to be made in consultation. The types of questions which need to be asked have been comprehensively documented by Hughes et al. as part of their work on the Teachers as Evaluators Project (CDC, 1982, pp. 39-42).
    19. SYSTEMS DESIGN MODEL
    Proponents: Dick, W., & Carey, L. (1996)
    Purpose: Systematic design of instruction.
    Discuss: The Dick and Carey model is quite popular in the e-learning literature and within academic "instructional design shops". It can be used at two different levels: (1) as a general guideline, a starting point for thinking about one's own design rules, and (2) in all its detail, including its behaviourist/cognitivist background. Dick and Carey's detailed model is based on the idea that instruction can be broken down into smaller components that, in pedagogical terms, can be described as measurable knowledge and skills. The model, which has been published in several versions, contains about ten elements, as seen in the chart above. According to Steven J. McGriff (2001), some of the key instructional design terms are interpreted as follows in the Dick and Carey model:
    • performance objectives: a statement of what the learners will be expected to do when they have completed a specified course of instruction, stated in terms of observable performances (see also Mager)
    • instructional analysis: the procedures applied to an instructional goal in order to identify the relevant skills, their subordinate skills, and the information required for a student to achieve the goal
    • instructional strategy: an overall plan of activities to achieve an instructional goal; includes the sequence of intermediate objectives and the learning activities leading to the instructional goal
    • hierarchical analysis: a technique used with goals in the intellectual-skills domain to identify the critical subordinate skills needed to achieve the goal, and their inter-relationships
    Figure taken from Hee-Sun Lee & Soo-Young Lee's presentation of the Dick and Carey Model (2004).
    Merits
    • Its branches of analysis and breakdown give this model ease of interpretation.
    • Content is decongested.
    Demerits
    • The model was formed with electronics studies and content in mind, which poses a limitation on its efficacy in other classroom situations.
    • There may be difficulty of implementation.
    • It requires professionals to interpret.
    Comment: This model, with its branches and breakdown, looks simplified for implementation, but it is ambiguous, and some of its contents are closely related.
    20. STAKE'S RESPONSIVE MODEL
    Proponent: Robert Stake (1970)
    Purpose: A system for carrying out evaluation in education in the 1970s (Popham, 1995).
    Discuss: Stake's responsive model is the model that "sacrifices some precision in measurement, hopefully to increase the usefulness of findings to persons in and around the program" (Stake, 2011, p. 8). The evaluation is considered to be responsive "if it orients more directly to program
    activities than to program intents; responds to audience requirements for information; and if the different value perspectives present are referred to in reporting the success and failure of the program" (Stake, 1975, p. 14). In the responsive model, the evaluator is a full, subjective partner in the educational program, highly involved and interactive. The evaluator's role is to provide an avenue for continued communication and feedback during the evaluation process (Stake, 1975). According to Stake, there is no single true value to anything; value is in the eye of the beholder. There may thus be many valid interpretations of the same events, based on a person's point of view, interests, and beliefs. The duty of the evaluator is to collect the views and opinions of people in and around the program (Stake, 1983).
    Merits
    • In responsive evaluations, questions are allowed to emerge during the evaluation process rather than being formulated in advance.
    • The model helps evaluators to acquire a rapid understanding of the program and to determine which issues and concerns are the most important.
    • The responsive evaluation uses content-rich information to describe the program in a way that is readily accessible to audiences (Stake, 1983; Hurteau & Nadeau, 1985).
    • Furthermore, the responsive evaluation provides audiences with the chance to react to the evaluator's feedback and to interact with the evaluator regarding their issues and concerns (Paolucci-Whitcomb, Bright, Carlson, & Meyers, 1987). In other words, the values and perspectives held by different audiences are explicitly recognized, which provides a context in which to examine different concerns.
    • It produces evaluations accessible to a large variety of stakeholders.
    Demerits
    • The application of the model requires much time, as the process of evaluation using the model takes long (Popham, 1995).
    • It is not easy to apply the model to evaluate educational programs if the evaluator is not experienced (Hurteau & Nadeau, 1985).
    • The role of the evaluator is ambiguous, and in this case the evaluator "serves as a resource person rather than a researcher" (Popham, 1995, p. 3).
    • Finally, the model is very flexible; as a result, it may not be easy to maintain the focus of the evaluation, which may result in a failure to answer specific questions.
    Comment: This model of curriculum evaluation is flexible and responsive to the concerns of its stakeholders, though its open-ended nature demands an experienced evaluator and considerable time.
    CONCLUSION
    Educational programs are inherently about change: changing learners' knowledge, skills, or attitudes; changing educational structures; developing educational leaders; and so forth. The educators who design and implement those programs know better than most just how complex the programs are, and such complexity poses a considerable challenge to effective program evaluation. The proponents of the reviewed models are professionals in their respective fields, and thereby proffer solutions as they apply to evaluation and its processes. Since no model of curriculum evaluation is perfect, and given the growing needs of a dynamic society, existing models of curriculum evaluation should, just like evaluation and the curriculum themselves, be reviewed and new ones created. Through this, diverse measures and ways will be developed to solve the global challenges facing curriculum evaluation that are presented by the dynamism of nature.
    REFERENCES
    Alliger, G. M., Tannenbaum, S. I., Bennett, W. Jr., Traver, H., & Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-358.
    Armstrong, P. (2010). Bloom's Taxonomy. Vanderbilt University Center for Teaching. Retrieved 22/02/2021 from https://cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/
    Anderson, L. W., Krathwohl, D. R., & Bloom, B. S. (2001). A taxonomy for learning, teaching and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Longman.
    Bates, R. (2004). A critical analysis of evaluation practice: The Kirkpatrick model and the principle of beneficence. Evaluation and Program Planning, 27, 341-347.
    Bersin, J. (2006). High-Impact Learning Measurement: Best Practices, Models, and Business-Driven Solutions for the Measurement and Evaluation of Corporate Training. Available from http://www.bersin.com
    Biggs, J. (2003). Aligning Teaching and Assessing to Course Objectives. Teaching and Learning in Higher Education: New Trends and Innovations, University of Aveiro, April 2003.
    Bloom, B. S., Englehart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: Handbook I: Cognitive domain. New York: David McKay.
    Bloom, B. S. (1965). Taxonomy of educational objectives: The classification of educational goals: Handbook I, Cognitive domain. In Alan C. (2006), Bloom's taxonomy of learning domains. Retrieved 20/02/2021 from https://businssballs.com/self-awarewness/blooms-taxonomy//
    Bloom, B. S. (1969). Taxonomy of educational objectives: The classification of educational goals: Handbook I, Cognitive domain. New York: McKay.
    Curriculum Development Centre (1982). Curriculum Evaluation: How It Can Be Done. Canberra: CDC.
    Pusca, D., & Eiliat, H. (2012). Study on Implementation of Tablet Computers in Engineering. The 6th Regional Conference on Engineering and Business Education, Sibiu, Romania.
    Davis, E. (1980). Teachers as Curriculum Evaluators. Sydney: George Allen and Unwin.
    Dick, W., & Carey, L. (1996). The Systematic Design of Instruction (4th ed.). New York: Harper Collins College Publishers.
    Dick, W., Carey, L., & Carey, J. O. (2001). The systematic design of instruction (5th ed.). New York: Addison-Wesley, Longman.
    Dick, W., & Carey, L. (2004). The Systematic Design of Instruction (6th ed.). Allyn & Bacon. ISBN 0205412742.
    Essays, UK. (November 2013). A Continuum of Curriculum Development Models: The Wheeler Model. Retrieved from https://www.ukessays.com/essays/education/a-continuum-of-curriculum-development-models.php?cref=1
    Eisner, E. W. (1975, March). The perceptive eye: Toward a reformation of educational evaluation. Invited address, Division B, Curriculum and Objectives, American Educational Research Association, Washington, DC.
    Eisner, E. W. (1976). Educational connoisseurship and criticism: Their form and functions in educational evaluation. Journal of Aesthetic Education, 10(3/4), 135-150.
    Eisner, E. (1979). The Educational Imagination. New York: MacMillan Publishing Company.
    Eisner, E. W. (1983). Educational connoisseurship and criticism: Their form and functions in educational evaluation. In G. F. Madaus, M. Scriven, & D. L. Stufflebeam (Eds.), Evaluation models. Boston: Kluwer-Nijhoff.
    Eisner, E. W. (1985). The Art of educational evaluation: A Personal view. Philadelphia: The Falmer Press, Taylor & Francis Inc.
    Flinders, D. J., & Eisner, E. W. (2000). Educational criticism as a form of qualitative inquiry.
    Hunkins, F. (1980). Curriculum Development: Program Improvement. Columbus, Ohio: Merrill.
    Chaudhary, G. K., & Kalia, R. (2015). Development curriculum and teaching models of curriculum design for teaching institutes. International Journal of Physical Education, Sports and Health, 1(4), 57-59.
    Kaufman, R. A. (1969). Toward educational system planning: Alice in Educationland. Audiovisual Instructor, 14, 47-48.
    Keating, S. (2006). Curriculum development and evaluation in nursing. Philadelphia, Pennsylvania: Lippincott Williams & Wilkins.
    Kirkpatrick, D. L., Kirkpatrick, J. D., & Kirkpatrick, W. K. (2009). The Kirkpatrick model. Kirkpatrick Partners.
    Koetting, J. R. (1988). Educational connoisseurship and educational criticism: Pushing beyond information and effectiveness. Fifth Annual Open Forum: The Foundational Issues of the Field (pp. 442-457). Paper presented at the Annual Meeting of the Association for Educational Communications and Technology, New Orleans, LA.
    Kurt, S. (2018). ADDIE Model: instructional design. Retrieved September 7, 2018, from https://educationaltechnology.net/the-addie-model-instructional-design/
    Linlin, L., & Rachel, T. (2017). Logic Models for Curriculum Evaluation. Retrieved February 25, 2021, from https://www.evalu-ate.org/blog/tripathy-li-2017-6/
    McMahon & Thakore (2006). Coherence between assessment, strategies and intended learning outcomes in an educational programme. Retrieved from http://www.ucdoer.ie/index.php/Using_Biggs%27_model_of_constructive_alignment_in_curriculum_design/introduction
    Meek, A. (1993). On setting the highest standards: A conversation with Ralph Tyler. Educational Leadership, 50, 83-86.
    Newlyn, D. (2016). Cyclical curriculum theory: Its place in the development of contemporary law units. International Journal of Humanities and Social Science Research, 2, 59-62.
    Prideaux, D. (2003). Curriculum design: ABC of learning and teaching in medicine. British Medical Journal, 326(7383), 268-270.
    Provus, M. M. (1971). Educational evaluation and decision making. Itasca, IL: Peacock.
    Seiffert, M. (1986). "Outsiders Helping Teachers". Curriculum Perspectives, 6(2), 37-40.
    Stake, R. E. (1967). The countenance of educational evaluation. Teachers College Record, 68, 523-540.
    Stamm, R. L., & Howlett, B. (2001). Creating Effective Course Content in WebCT: An Instructional Design Model. PDF (retrieved 16:07, 25 February 2021).
    Stenhouse, L. (1975). An introduction to curriculum research and development. London: Heinemann.
    Stufflebeam, D. L. (2007). CIPP evaluation model checklist (2nd ed.). Western Michigan University Evaluation Center.
    Taba, H. (1962). Curriculum: Theory and Practice. New York: Harcourt, Brace.
    Tyler, R. W. (1950). Basic principles of curriculum and instruction. Chicago: University of Chicago Press.
    Vallance, E. (1973). Aesthetic criticism and curriculum description. Ph.D. dissertation, Stanford University.
    Woods, J. D. (1988). Curriculum Evaluation Models: Practical Applications for Teachers. Australian Journal of Teacher Education, 13(1). http://dx.doi.org/10.14221/ajte.1988v13n2.1