Program Evaluation
FIVE PHASES OF EVALUATION
1. Supporting collaborative planning
2. Documenting community implementation, action, and change
3. Assessing community adaptation, institutionalization, and capacity
4. Evaluating more distal outcomes
5. Promoting dissemination
1. Identify key stakeholders and what they care about (i.e.,
people or organizations that have something to gain or lose
from the evaluation).
• Include:
• Those involved in operating the program or initiative (e.g., staff,
volunteers, community members, sponsors, and collaborators)
• Prioritized groups served or affected by the effort (e.g., those
experiencing the problem, public officials)
• Primary intended users of the evaluation (e.g., program or initiative
staff, community members, outside researchers, funders).
TYPES OF STAKEHOLDERS:
1. Community groups
2. Grant makers and funders
3. University-based researchers
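To keep step 1 actionable, here is a minimal sketch of a stakeholder registry in Python; the class name, fields, and all entries are hypothetical, and a simple spreadsheet would serve equally well:

```python
from dataclasses import dataclass, field

@dataclass
class Stakeholder:
    name: str                 # person or organization
    group: str                # e.g., community group, funder, researcher
    role: str                 # operator, prioritized group, or intended user
    interests: list[str] = field(default_factory=list)  # what they stand to gain or lose

# Hypothetical entries for illustration only.
registry = [
    Stakeholder("Neighborhood coalition", "community group",
                "prioritized group", ["Is the program reaching us?"]),
    Stakeholder("Regional health foundation", "grant maker/funder",
                "intended user", ["Were objectives met?", "Should funding continue?"]),
    Stakeholder("State university team", "university-based researcher",
                "intended user", ["Can changes be attributed to the program?"]),
]

for s in registry:
    print(f"{s.name} ({s.group}): {'; '.join(s.interests)}")
```

Recording interests up front makes it easier to check, in later steps, that every evaluation question traces back to a stakeholder who cares about the answer.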
2. Describe the program or initiative’s framework or logic
model (e.g., what the program or effort is trying to
accomplish and how it is doing so).
• Include information about:
• Purpose or mission (e.g., the problem or goal that the program, effort, or initiative addresses)
• Context or conditions (e.g., the situation in which the effort will take place; factors that may affect
outcomes)
• Inputs: resources and barriers (e.g., resources may include time, talent, equipment, information, money,
etc.). Barriers may include history of conflict, environmental factors, economic conditions, etc.
• Activities or interventions (i.e., what the initiative will do to effect change and improvement) (e.g.,
providing information and enhancing skills; enhancing services and support; modifying access, barriers
and opportunities; changing the consequences; modifying policies and broader systems)
• Outputs (i.e., direct evidence of having performed the activities) (e.g., number of services provided)
• Intended effects or outcomes
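The framework above can also be captured as a simple data structure, so that later steps (indicators, methods, reporting) can reference the same components. A minimal sketch, with hypothetical field names and illustrative contents:

```python
from dataclasses import dataclass

@dataclass
class LogicModel:
    purpose: str           # problem or goal the effort addresses
    context: list[str]     # conditions that may affect outcomes
    inputs: list[str]      # resources and barriers
    activities: list[str]  # interventions intended to effect change
    outputs: list[str]     # direct evidence the activities happened
    outcomes: list[str]    # intended effects

# Hypothetical contents for illustration only.
model = LogicModel(
    purpose="Reduce youth tobacco use in the county",
    context=["rural county", "active school partners"],
    inputs=["2 staff positions", "state grant", "barrier: retailer opposition"],
    activities=["school-based skills training", "retailer compliance checks"],
    outputs=["number of training sessions delivered", "number of compliance checks"],
    outcomes=["fewer illegal sales to minors", "lower 30-day smoking prevalence"],
)
```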
3. Develop evaluation questions and methods.
• Planning and implementation issues: How well was the initiative planned and implemented?
Did those most affected contribute to the planning, implementation and evaluation of the
effort? How satisfied are participants with the program?
• Outcome measures
• Attainment of objectives (e.g., How well has the program or initiative met its stated
objectives? See the sketch after this list.)
• Impact on participants (e.g., How much and what kind of a difference has the program or
initiative made for its prioritized groups?)
• Impact on community (e.g., How much and what kind of a difference has the program or
initiative made on the community? Were there any unintended consequences, either
positive or negative?)
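The attainment-of-objectives question above can be made concrete with a simple calculation; the objectives and their met/unmet statuses below are hypothetical:

```python
# Each stated objective is marked True (met) or False (not met).
objectives = {
    "train 200 participants": True,
    "pass a smoke-free workplace policy": False,
    "enroll 10 partner sites": True,
}
attained = sum(objectives.values())
print(f"{attained}/{len(objectives)} objectives met "
      f"({100 * attained / len(objectives):.0f}%)")
```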
• Methods: what type of measurement and study design should be used to evaluate the effects of the
program or initiative? Typical designs include case studies and more controlled experiments. By what
methods will data be gathered to help answer the evaluation questions? Note appropriate methods to
be used including:
• Surveys about satisfaction and importance of the initiative
• Goal attainment reports
• Behavioral surveys
• Interviews with key participants
• Archival records
• Observations of behavior and environmental conditions
• Self-reporting, logs, or diaries
• Documentation system and analysis of the initiative's contribution
• Community-level indicators of impact (e.g., rates of HIV; see the rate sketch after this list)
• Case studies and experiments
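Community-level indicators such as the HIV rates mentioned above are usually expressed per unit of population. A minimal sketch, with made-up counts for illustration only:

```python
def rate_per_100k(event_count: int, population: int) -> float:
    """Express a community-level indicator as a rate per 100,000 population."""
    return 100_000 * event_count / population

# Hypothetical counts, for illustration only.
baseline = rate_per_100k(event_count=48, population=250_000)   # 19.2 per 100,000
followup = rate_per_100k(event_count=41, population=252_000)   # ~16.3 per 100,000
print(f"baseline {baseline:.1f} vs. follow-up {followup:.1f} per 100,000")
```

Using rates rather than raw counts keeps the indicator comparable when the population changes between measurements.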
4. Gather credible evidence - decide what counts as evidence, and
what features affect the credibility of the evaluation, including:
• Indicators of success - specify criteria used to judge the success of the program or initiative, and translate
these into measures or indicators of success, including:
• Program outputs
• Participation rates
• Levels of satisfaction
• Changes in behavior
• Community or system changes (i.e., new programs, policies, and practices)
• Improvements in community-level indicators
• Sources of evidence (e.g., interviews, surveys, observation, review of records). Indicate how evidence of
your success will be gathered
• Quality – estimate the appropriateness and integrity of information gathered, including its consistency
(reliability) and accuracy (validity). Indicate how the quality of measures will be assured (a reliability
sketch follows this list).
• Quantity – estimate what amount of data (or time) is required to evaluate effectiveness.
• Logistics – indicate who will gather the data, by when, from what sources, and what precautions and
permissions will be needed.
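For the quality criterion above, the reliability of a multi-item measure (e.g., a satisfaction survey) is often estimated with Cronbach's alpha. A minimal sketch; the survey responses are hypothetical:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score);
    rows are respondents, columns are survey items."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1).sum()
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical responses: 5 respondents x 4 items, each rated 1-5.
scores = np.array([[4, 5, 4, 4],
                   [2, 3, 2, 3],
                   [5, 5, 4, 5],
                   [3, 3, 3, 2],
                   [4, 4, 5, 4]])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```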
5. Outline and implement an evaluation
plan. Indicate how you will:
• Involve all key stakeholders (e.g., members of prioritized groups, program implementers,
grantmakers) in identifying indicators of success, documenting evidence of success, and
sensemaking about the effects of the overall initiative and how it can be improved.
• Track implementation of the initiative’s intervention components
• Assess exposure to the intervention
• Assess ongoing changes in specific behavioral objectives
• Assess ongoing changes in specific population-level outcomes
• Examine the contribution of intervention components (e.g., a program or policy) and
possible improvements in behavior and outcomes at the level of the whole
community/population
• Consider the ethical implications of the initiative (e.g., Do the expected benefits
outweigh the potential risks?)
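Tracking implementation and exposure, as called for above, can start with nothing more than a dated event log. A minimal sketch; the field names and records are hypothetical:

```python
from collections import Counter
from datetime import date

# One record per delivered intervention component, with participants reached.
event_log = [
    {"date": date(2024, 3, 4),  "component": "skills training", "reached": 25},
    {"date": date(2024, 3, 11), "component": "skills training", "reached": 18},
    {"date": date(2024, 3, 15), "component": "policy advocacy", "reached": 7},
]

sessions_by_component = Counter(e["component"] for e in event_log)
total_exposure = sum(e["reached"] for e in event_log)
print("Sessions delivered:", dict(sessions_by_component))
print("Total participant exposures:", total_exposure)
```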
6. Make sense of the data and justify conclusions.
Indicate how each aspect of the evaluation will be addressed.
• Standards – the values held by stakeholders about the evaluation.
Indicate how each key standard will be assured:
• Propriety standards: to ensure that the evaluation is conducted ethically,
with regard for the rights and welfare of those involved and affected, including:
• Service orientation: evaluations should be designed to help organizations effectively serve the
needs of all participants.
• Formal agreements: the responsibilities in an evaluation (what is to be done, how, by whom,
when) should be agreed to in writing, so that those involved are obligated to follow all
conditions of the agreement, or to formally renegotiate it.
• Rights of participants: evaluation should be designed and conducted to respect and protect
the rights and welfare of all participants in the study.
• Complete and fair assessment: the evaluation should be complete and fair in its examination,
recording both strengths and weaknesses of the program being evaluated.
• Conflict of interest: conflict of interest should be dealt with openly and honestly, so that it
does not compromise the evaluation processes and results.
• Accuracy standards: to ensure that the evaluation findings are considered correct. Indicate how
the accuracy standards will be met, including:
• Program documentation: the intervention should be described and documented clearly and accurately, so
that what is being evaluated is clearly identified.
• Context analysis: the context in which the initiative exists should be thoroughly examined so that likely
influences on the program’s effects can be identified.
• Valid information: the information gathering procedures should be chosen or developed and then
implemented in such a way that they will assure that the interpretation arrived at is valid.
• Reliable information: the information gathering procedures should be chosen or developed and then
implemented so that they will assure that the information obtained is sufficiently reliable.
• Analysis of quantitative and qualitative information: quantitative information (i.e., data from observations
or surveys) and qualitative information (e.g., from interviews) should be appropriately and systematically
analyzed so that evaluation questions are effectively answered (see the sketch after this list).
• Justified conclusions: the conclusions reached in an evaluation should be explicitly justified, so that
stakeholders can understand their worth.
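As one illustration of systematically analyzing quantitative information, here is a sketch of a paired pre/post comparison using SciPy; the scores are hypothetical, and the appropriate test depends on the actual study design:

```python
from scipy import stats

# Hypothetical pre/post behavioral survey scores for the same 8 participants.
pre  = [12, 15, 11, 14, 13, 16, 12, 15]
post = [14, 17, 13, 15, 16, 18, 13, 17]

mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
t_stat, p_value = stats.ttest_rel(post, pre)  # paired-samples t-test
print(f"mean change = {mean_change:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```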
• Analysis and synthesis – indicate how the evaluation report will analyze
and summarize the findings.
• Sensemaking and interpretation – how will the evaluation report
communicate what the findings mean? How will the stakeholders use
the information to help answer the evaluation questions? How will the
group communicate what the findings suggest?
• Judgments – statements of worth or merit, compared to selected
standards. How will the group communicate what the findings suggest
about the value added by the effort?
• Recommendations – how will the group identify recommendations
based on the results of the evaluation?
7. Use the information to celebrate, make
adjustments, and communicate lessons learned
• Take steps to ensure that the findings will be used appropriately, including:
• Design – communicate how questions, methods, and findings are constructed
to address agreed-upon uses
• Preparation – anticipate future uses of findings; how to translate knowledge
into practice
• Feedback and sense-making – how communication and shared interpretation
will be facilitated among all users
• Follow-up – support users’ needs during evaluation and after receiving
findings, including to celebrate accomplishments and make adjustments
• Dissemination – communicating lessons learned to all relevant audiences in a
timely manner
STEPS TO DEVELOPING AN
EVALUATION PLAN
1. Clarify program objectives and goals - what are the main things you want to accomplish and how have you set out to
accomplish them?
2. Develop evaluation questions -
a. How well was the program/initiative planned out and put into practice?
b. How well has the program/initiative met its stated objectives?
c. How much/what kind of difference has the program/ initiative made for its targets of change?
d. How much/what kind of difference has the program/initiative made on the community as a whole?
3. Develop evaluation methods - various forms
a. Monitoring and feedback systems
b. Member surveys about the initiative
c. Goal attainment report
d. Behavioral surveys
e. Interviews with key participants
f. Community-level indicators of impact
4. Set up a timeline for evaluation activities - create a table listing:
a. Key evaluation questions (four above with specific Qs under each)
b. Type of evaluation measures to be used to answer them
c. Type of data collection
d. Experimental design chosen
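The timeline table can be drafted directly from these four columns. A minimal sketch that emits it as CSV; every row below is an invented example:

```python
import csv
import sys

# Hypothetical rows pairing each evaluation question with measures,
# data collection, and design, per the table described above.
timeline = [
    {"evaluation question": "How well was the initiative planned and implemented?",
     "measures": "participant satisfaction survey",
     "data collection": "survey, quarterly",
     "design": "case study"},
    {"evaluation question": "How well were stated objectives met?",
     "measures": "goal attainment report",
     "data collection": "document review, annually",
     "design": "case study"},
    {"evaluation question": "What difference was made for targets of change?",
     "measures": "behavioral survey",
     "data collection": "pre/post survey",
     "design": "comparison group"},
]

writer = csv.DictWriter(sys.stdout, fieldnames=timeline[0].keys())
writer.writeheader()
writer.writerows(timeline)
```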
PRODUCTS EXPECTED FROM EVALUATION: WHAT
SHOULD YOU BE ABLE TO USE FROM YOUR
EVALUATION?
• Effects expected by stakeholders - anticipate what people will want to
know so you can make sure to find it out
• Differences in key behaviors - choose which behaviors of targets and
agents of change to measure
• Differences in conditions in the community - public awareness,
support, cooperation, initiative