Giancola, Program Evaluation 1e
SAGE Publishing, 2021
Lecture Notes
Complete the chapter note guide by filling in the blank or missing content.
Underline, highlight, or use a different font color or style to distinguish
your answers.
Chapter 1: Evaluation Matters
Learning Objectives
1.1 Define evaluation.
Evaluation is a systematic process used to determine the value, merit, or worth of a
program, policy, or intervention by comparing its operations and outcomes to explicit or
implicit standards. For example, historical evaluations like the Cambridge-Somerville
Youth Study assessed the effectiveness of delinquency prevention programs, while
modern evaluations measure the impact of policies like the Elementary and Secondary
Education Act.
1.2 Identify programs and policies that might be evaluated.
Programs and policies that might be evaluated include:
Social programs: Head Start, Job Training Programs (e.g., Manpower
Development and Training Act), Community Mental Health Centers.
Education policies: Elementary and Secondary Education Act, standardized
testing systems (e.g., Horace Mann’s student assessments in Boston).
Healthcare initiatives: The Tuskegee syphilis experiments (as a cautionary
example of unethical evaluation).
Government policies: Civil Rights Acts, NASA’s post-Sputnik initiatives.
1.3 Describe the purpose of evaluation and its relationship to research.
Purpose of evaluation: To improve program effectiveness, allocate resources
efficiently, and ensure accountability (e.g., the Eight-Year Study aimed to
refine educational practices in participating schools).
Relationship to research: Both use scientific methods (e.g., randomized controlled
trials in the What Works Clearinghouse), but evaluation is action-oriented
(focused on practical improvements), while research seeks generalizable
knowledge (e.g., Milgram’s obedience experiments explored psychological
theories).
1.4 Distinguish between formative and summative evaluation.
Formative evaluation: Conducted during program implementation to provide
feedback for improvement. Example: The Cambridge-Somerville Youth
Study included formative assessments of counseling fidelity and participant
engagement.
Summative evaluation: Conducted after program completion to judge overall
effectiveness. Example: The same study’s summative analysis found no long-term
reduction in delinquency, leading to program discontinuation.
1.5 Compare and contrast internal and external evaluation.
Internal evaluation: Conducted by staff within the organization (e.g., Horace
Mann's early student assessments in Boston). Pros: deep contextual knowledge.
Cons: risk of bias (e.g., the Holmesburg Prison experiments lacked
impartiality).
External evaluation: Conducted by independent evaluators (e.g., the CIA's
external review of the MK-Ultra experiments). Pros: higher credibility and
objectivity. Cons: limited understanding of program nuances.
1.6 Discuss the embedded evaluation model.
The embedded evaluation model is a dynamic, cyclical approach integrated into all
stages of a program’s lifecycle. Key features include:
Continuous improvement: Refines processes iteratively (e.g., the What Works
Clearinghouse updates evidence-based practices).
Theory-based: Aligns with program logic (e.g., Ralph Tyler’s logic models in the
Eight-Year Study).
Utilization-focused: Prioritizes stakeholder needs (e.g., AEA’s emphasis on
ethical use of findings).
Five steps: Understand, Plan, Implement, Analyze, Report/Revise (as seen in
modern frameworks like the Campbell Collaboration).
1.7 Explain the first step in embedded evaluation.
The first step in embedded evaluation is “Understand: What is the program?” This
involves:
Defining the program’s goals, logic model, and stakeholders (e.g., the
Cambridge-Somerville Study began by clarifying its aim to reduce delinquency
through counseling).
Identifying contextual factors (e.g., historical evaluations of Sputnik-era
education reforms started by understanding gaps in STEM training).
Aligning with the AEA Guiding Principles to ensure ethical and systematic
inquiry.
Chapter Summary
Chapter 1 introduces the reader to evaluation and the fundamental terminology associated with
the different types of evaluation. Additionally, the chapter serves to introduce the reader to the
embedded evaluation model and its initial steps.
Annotated Chapter Outline
I. Introduction
A. Evaluation is a method that is used to determine the value or worth of a program.
B. Embedded evaluation is an evaluation approach based on continuous
improvement, in which program processes and practices are examined and
refined in order to improve outcomes.
II. What Is Evaluation?
A. Evaluation
i. Evaluation is a method used to determine the value of something.
ii. Examples of evaluation include:
a. Estimating whether a product is worth buying.
b. Judging whether spending extra time on a homework assignment is
worth a higher grade.
c. Rating a professor.
d. Appraising the work ethic and work quality of a coworker or
fellow student.
e. Determining whether to rent or purchase a textbook.
f. Deciding whether one can afford to rent an apartment alone or
whether a roommate is necessary.
iii. Program evaluation is an evaluation used to determine the merit or worth of
a program.
iv. A program is broadly defined to include a group of activities ranging from
a small intervention to a national or international policy.
B. What Is the Purpose of Evaluation?
i. Evaluation intends to determine merit and worth.
ii. Evaluation is a systematic examination of the operations and/or the
outcomes of a program or policy, compared to a set of explicit or implicit
standards, as a means of contributing to the improvement of the program or
policy.
iii. Evaluation is a systematic examination of a program that uses the scientific
method.
iv. The term systematic refers to something that is planned and organized,
and that is undertaken according to that plan.
v. The operations of a program include both what is implemented as part of
the program and the processes by which it is implemented; operations are
the activities involved in implementing a program.
vi. Evaluation also examines the "what" of a program: its outcomes.
vii. Outcomes are the results that occur during and after implementing a
program.
viii. Evaluation evidence is compared to a standard.
ix. A standard is a target or yardstick that informs us of the ideal state.
x. Standards are used, implicitly or explicitly, to judge the merit or worth of a
program.
xi. Evaluation is intended to provide information that will help to improve
programs and policies, making them more effective and efficient.
C. How Is Evaluation Different from Research?
i. Research is a systematic investigation in a field of study.
ii. Evaluation and research use the same methods and designs.
iii. Evaluation is more action oriented than research.
iv. Evaluation findings are intended for use within a program or policy to
effect change.
v. Evaluators have an obligation to disseminate their research.
III. Why Evaluate?
A. Evaluation Is an Ethical Obligation
i. When presented with information regarding the cost of ineffective
programs, it is not a leap to conclude that it is an ethical obligation of those
who implement programs and policies to also have those programs and
policies evaluated.
ii. According to Nielson, using policy and program resources to collect the
data necessary to evaluate effectiveness is the only way to "live in the
light."
B. Evaluation Fosters Quality
i. The very nature of evaluation increases accountability, and this
knowledge can be used to improve programs.
ii. Evaluation provides the necessary information to strategically improve a
program, allocate resources in ways that can maximize effectiveness, and
refine program strategies for greater impact.
C. Evaluation Is a Viable Career
i. Russ-Eft and Preskill (2009) include among their reasons to evaluate
that evaluation is increasingly regarded as a highly marketable skill.
ii. There are many opportunities for evaluators internationally.
iii. The emphasis on accountability means that programs must show evidence
of impact in order to receive continued funding.
IV. Values and Standards in Evaluation
A. How Is Data Used?
i. The people who use evaluation findings are called stakeholders.
ii. A stakeholder is anyone who has an interest in or is involved with the
operation or success of a program.
iii. Key stakeholder groups often include program staff, program participants,
community members, and policymakers.
iv. The valuing that is part of evaluation is influenced by context.
v. It is important that we are clear about the values and standards upon
which our evaluative judgments are based.
vi. A value is a principle or quality used to estimate importance; the term
also refers to an estimate of importance itself.
vii. The worth of an evaluation is dependent upon context and who is making
the judgment of worth.
viii. In order to design an evaluation that is useful to stakeholders, it is
important for an evaluator to understand stakeholders’ values.
B. Guiding Principles for Evaluators
i. The American Evaluation Association (AEA) Guiding Principles for
Evaluators is a set of five principles that embody the values of the
American Evaluation Association, an international professional association
of evaluators.
ii. The guiding principles are intended to guide the professional behavior and
conduct of evaluators, and address ideals that cross disciplinary boundaries,
such as an evaluator’s obligation to be professional and culturally
competent.
iii. These principles provide guidance around the following domains:
a. Systematic inquiry,
b. Competence,
c. Integrity,
d. Respect for people, and
e. Common good and equity.
C. Program Evaluation Standards
i. The Joint Committee on Standards for Educational Evaluation is a group of
representatives from multiple professional organizations that have an
interest in improving evaluation quality.
ii. The Joint Committee issues the Joint Committee's Program Evaluation
Standards, a set of 30 standards intended to guide evaluators in the areas
of utility, feasibility, propriety, accuracy, and evaluation accountability.
iii. The standards provide practical guidance on how to conduct effective
and equitable evaluations that produce credible findings and promote
accountability.
iv. In addition to the AEA Guiding Principles and the Joint Committee’s
Program Evaluation Standards, experienced evaluators have also provided
resources to guide evaluators in the appropriate consideration of values and
standards in their evaluation work; among these are a checklist provided by
Stufflebeam (2001) and House and Howe’s (1999) Values in Evaluation
and Social Research.
v. One’s own biases affect all aspects of evaluation.
V. Types of Evaluation
A. Important Evaluation Terms
i. A Request for Proposals (RFP) is a solicitation for organizations to
submit a proposal on how they would complete a specified project.
ii. An RFP will often use language indicating that formative and summative
evaluation are required or that an external evaluator is preferred.
iii. A formative evaluation is an evaluation aimed at providing information to
improve a program while it is in operation.
iv. A summative evaluation is an evaluation aimed at providing information
regarding effectiveness in order to make decisions about whether to
continue or discontinue a program.
B. Formative Evaluation
i. Formative evaluation is performed to provide ongoing feedback to
program staff for continuous improvement.
ii. A process evaluation is an aspect of formative evaluation that is aimed at
understanding the implementation of the program.
iii. Implementation assessment is an aspect of formative evaluation that
examines the degree to which a program is implemented with fidelity.
iv. Prior to implementing a new program or restructuring an existing
program, a needs assessment can be used to shape the program by examining
the needs of proposed participants, the needs of stakeholders, and how to
meet the needs of both.
v. Evaluation assessment is used to determine whether an evaluation is
feasible and what role stakeholders might take in shaping the evaluation
design.
C. Summative Evaluation
i. The primary purpose of summative evaluation is to inform decisions about
a program, such as whether to continue or discontinue it.
ii. Outcome evaluation is summative evaluation focused on how well the
program met its specified long-term goals.
iii. Impact evaluation is a summative evaluation that measures both the
intended and unintended outcomes of a program.
iv. Cost-benefit/cost-effectiveness analysis is a summative evaluation that
focuses on estimating the efficiency of a program in terms of dollars saved
or outcomes observed relative to cost (see the first sketch following this
list).
v. Meta-analysis is a form of summative evaluation that integrates the
effects of multiple studies to estimate the overall effect of a program
(see the second sketch following this list).
vi. A meta-evaluation is an evaluation of an evaluation.
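The chapter defines cost-benefit and cost-effectiveness analysis conceptually
rather than prescribing formulas. As a minimal sketch of the underlying
arithmetic, assuming a hypothetical job-training program whose dollar amounts
and placement counts are invented purely for illustration:

# Hypothetical illustration of the ratios behind cost-benefit and
# cost-effectiveness analysis; all figures are invented for this sketch.

def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
    # Dollars of benefit produced per dollar spent; > 1.0 suggests net benefit.
    return total_benefits / total_costs

def cost_per_outcome(total_costs: float, outcome_units: float) -> float:
    # Cost-effectiveness: dollars spent per unit of outcome achieved.
    return total_costs / outcome_units

# A hypothetical job-training program: $200,000 spent, an estimated $260,000
# in benefits (e.g., increased earnings), and 80 participants placed in jobs.
print(benefit_cost_ratio(260_000, 200_000))  # 1.3 -> $1.30 returned per $1
print(cost_per_outcome(200_000, 80))         # 2500.0 -> $2,500 per placement

A ratio above 1.0 suggests the program returns more than it costs, though the
result is only as credible as the assumptions used to monetize benefits, which
is one reason the standards discussed earlier stress transparency about values.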
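Similarly, the text defines meta-analysis without specifying a computation.
One standard approach, assumed here rather than drawn from the chapter, is
inverse-variance weighting under a fixed-effect model, in which more precise
studies count more toward the pooled estimate; the effect sizes and variances
below are hypothetical.

# One common meta-analytic computation (fixed-effect, inverse-variance
# weighting); the effect sizes and variances below are hypothetical.

def fixed_effect_mean(effect_sizes: list[float], variances: list[float]) -> float:
    # Weight each study's effect by 1/variance, so more precise studies
    # contribute more to the pooled estimate.
    weights = [1.0 / v for v in variances]
    weighted_sum = sum(w * e for w, e in zip(weights, effect_sizes))
    return weighted_sum / sum(weights)

# Three hypothetical studies of the same program (standardized mean differences).
print(fixed_effect_mean([0.30, 0.10, 0.25], [0.02, 0.05, 0.04]))  # ~0.245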
VI. Internal and External Evaluation
A. Evaluation Relationships
i. An evaluation can be conducted by someone internal to the organization
within which a program operates or someone external to the organization.
ii. An external evaluator is an evaluator who is employed outside of the
organization in which the program operates.
iii. Credibility, in terms of evaluation, is the degree of confidence someone
has that findings are reported accurately and should be believed.
iv. In evaluation, objectivity is the degree to which an evaluator can put aside
any biases and impartially interpret and report findings.
v. The choice of who conducts your evaluation should depend upon the
anticipated use of the results and the intended audience.
vi. If resources are not available for an external evaluator and there is no
office or department in your organization that is unaffected by your
program, you may want to consider other potentially affordable evaluation
options, such as:
a. Community members with evaluation experience
b. Experienced faculty or staff at local community colleges or
universities, who may be willing to work with you at a reduced rate
c. Doctoral students at local universities
d. Grant opportunities that fund evaluation activities
VII. Embedding Evaluation Into Programs
A. Grounded in Continuous Improvement
i. Embedded evaluation is an evaluation approach based on continuous
improvement, in which program processes and practices are examined and
refined in order to improve outcomes.
B. Theory Based and Utilization Focused
i. Embedded evaluation combines elements from several common evaluation
approaches, including theory-based evaluation, logic modeling, stakeholder
evaluation, participatory evaluation, and utilization-focused evaluation.
ii. Theory-based evaluation focuses on indicators related to the logic
underlying a program to guide the evaluation.
iii. Utilization-focused evaluation is based on the premise that an evaluation’s
worth rests in how useful it is to the program’s stakeholders.
C. Dynamic and Cyclical
i. Embedded evaluation steps build on each other and depend upon decisions
made in prior steps, and information learned in one step may lead to
revisions in a previous step.
ii. The focus of embedded evaluation is to enable program staff to build and
implement high-quality programs that are continuously improving, as well
as to determine when programs are not working and need to be redesigned.
D. Program Specific
i. The first step in conducting an evaluation is to understand what you want to
evaluate.
ii. Understanding the program enables you to develop evaluation questions
and define criteria that are meaningful and useful to stakeholders.
E. A Framework for Evaluation
i. Embedded evaluation is based on five steps, which include:
a. Step 1: Understand: What is the program?
b. Step 2: Plan: How do I plan the evaluation?
c. Step 3: Implement: How do I evaluate the program?
d. Step 4: Analyze: How do I interpret the results?
e. Step 5: (a) Report and (b) Revise: How do I use the results?
ii. Whether the program is a new program or one that has been in operation
for many years, the process of embedding evaluation into your program is
the same.
iii. For a new program, embedding evaluation into the program development
process allows data collection to be built into all future cycles of the
program, providing the opportunity for information to be the foundation of
the program's operation.
iv. Existing programs with good documentation and established data
management systems may find embedding evaluation into the program a
relatively straightforward and educational process.
v. The process of embedding evaluation into existing programs can also aid in
developing a common understanding of program goals and help to foster
collaboration among stakeholders.
vi. All programs should routinely examine the theory underlying the program
and refine that logic as necessary as lessons are learned and results are
measured.