3.5. Monitoring and Evaluation
Introduction
A well-designed monitoring and evaluation process provides information to program managers and implementers that is critical to judging the effectiveness of
particular interventions so that modifications can be made to optimize project impact. The goal of a monitoring and evaluation system is to increase the density and
quality of information flow to improve decision-making at all levels, from the field through managers to donors and other stakeholders. Since those changes will be
most helpful during a project rather than after it, monitoring and evaluation should be an ongoing feedback mechanism used throughout the project's implementation.
Review basic information on monitoring and evaluation.
What Constitutes Good Performance Monitoring?
Performance monitoring consists of a number of related tasks. Chief among them is the selection of "key performance indicators" that allow managers to monitor
project performance over time.
Monitoring key performance indicators alone does not provide sufficient information to evaluate and assess project performance. It typically needs to be supplemented with other quantitative and qualitative data collection methods to understand the drivers behind the trends and results revealed by the key performance indicators. Useful data collection methods include key informant interviews, focus group discussions, small-scale targeted surveys, market scanning, secondary research, and rapid assessments. By utilizing a "tool box" of quantitative and qualitative data-gathering methodologies that complement and mutually reinforce each other, projects can "triangulate" to gain a greater understanding of their effectiveness.
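As a minimal illustration of triangulation (the data sources, figures, and tolerance below are all hypothetical), a project might cross-check the trend in a key performance indicator measured through a small-scale survey against the same indicator drawn from secondary records, flagging any divergence for qualitative follow-up:

# Hypothetical triangulation check: the same KPI is measured through two
# independent methods and any divergence is flagged for qualitative follow-up.
# All figures and the tolerance are illustrative assumptions.

def percent_change(baseline: float, followup: float) -> float:
    """Percent change in an indicator between two measurement rounds."""
    return (followup - baseline) / baseline * 100.0

# Average monthly sales per assisted enterprise (USD), two data sources.
survey_trend = percent_change(baseline=210.0, followup=260.0)   # targeted survey
records_trend = percent_change(baseline=200.0, followup=215.0)  # secondary records

TOLERANCE = 10.0  # assumed acceptable gap, in percentage points
if abs(survey_trend - records_trend) > TOLERANCE:
    print(f"Divergence: survey {survey_trend:+.1f}% vs. records "
          f"{records_trend:+.1f}%; follow up with interviews or focus groups")
else:
    print("Both methods show a consistent trend; finding corroborated")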
Performance monitoring, however, entails more than data collection. Data collection needs to be embedded within a "system." Implied by the word "system" is a
process for transforming data into useful information. Included in this process are a number of tasks that must be performed if the system is to operate efficiently
and effectively; tasks that include, among other things, the reporting, management, analysis, dissemination, verification, and use of data. A breakdown in any of
these tasks will compromise the validity and usefulness of the performance monitoring system.
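To make the idea of a "system" concrete, the sketch below traces one indicator through collection, verification, analysis, and reporting; the record layout and plausibility check are illustrative assumptions, not a prescribed design:

# Sketch of the data-to-information flow: collection -> verification ->
# analysis -> reporting. Record layout and checks are illustrative assumptions.

raw_reports = [
    {"site": "A", "period": "2024-Q1", "enterprises_assisted": 42},
    {"site": "B", "period": "2024-Q1", "enterprises_assisted": -3},  # entry error
    {"site": "C", "period": "2024-Q1", "enterprises_assisted": 57},
]

def verify(report: dict) -> bool:
    """Verification task: reject records that fail a basic plausibility check."""
    return report["enterprises_assisted"] >= 0

valid = [r for r in raw_reports if verify(r)]
rejected = [r for r in raw_reports if not verify(r)]

# Analysis task: aggregate the indicator across reporting sites.
total = sum(r["enterprises_assisted"] for r in valid)

# Reporting task: this is the figure managers, donors, and other stakeholders
# see; a breakdown in any earlier task would compromise it.
print(f"Enterprises assisted, 2024-Q1: {total} "
      f"({len(rejected)} record(s) returned to the field for correction)")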
In developing a performance monitoring system, value chain projects can follow a set of widely validated best practices that includes matching system design to
resources and technical capacities, training, participation, pilot testing, and oversight and monitoring.
Once the performance monitoring system has been finalized, the details should be captured in a series of Performance Indicator Reference Sheets (PIRS) for each
key performance indicator. The PIRS is a summary resource that describes how the performance monitoring system is operationalized.
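One way to operationalize a PIRS is to store each sheet as a structured record alongside the monitoring data. The field set in this sketch is an assumption modeled on the description above, not an official template:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PerformanceIndicatorReferenceSheet:
    """Illustrative PIRS record; the field set is an assumption based on the
    description above, not an official USAID template."""
    indicator_name: str
    precise_definition: str
    unit_of_measure: str
    data_source: str
    collection_method: str
    collection_frequency: str
    responsible_party: str
    baseline_value: Optional[float] = None
    targets: dict = field(default_factory=dict)  # e.g., {"Year 1": 500}

# Hypothetical entry for one key performance indicator.
pirs = PerformanceIndicatorReferenceSheet(
    indicator_name="Number of enterprises receiving project assistance",
    precise_definition="Unique enterprises receiving at least one service",
    unit_of_measure="Enterprises (count)",
    data_source="Implementing partner service logs",
    collection_method="Administrative records, verified against site visits",
    collection_frequency="Quarterly",
    responsible_party="Project M&E officer",
    baseline_value=0,
    targets={"Year 1": 500, "Year 2": 1200},
)
print(pirs.indicator_name, pirs.targets)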
What Constitutes a Rigorous Impact Assessment?
Impact assessment rigor is determined by the following four criteria: internal validity, external validity, construct validity, and statistical conclusion validity.
1. Internal validity is the extent to which the impact assessment establishes a credible counterfactual. Internal validity can be suspect when certain types of
biases in the design or conduct of the impact assessment could have affected observed results, thereby obscuring the true direction, magnitude, or certainty
of the treatment effect. A primary source is selection bias, which occurs when there are systematic differences in observable (e.g., gender,
education, climate, market access) and unobservable (e.g., ambition, risk orientation, entrepreneurial spirit) characteristics between the treatment and
control groups.
2. External validity is the extent to which the impact assessment findings are generalizable to other value chain projects.
3. Construct validity is the extent to which the impact assessment design and data collection instruments accurately measure the project's causal model.
4. Statistical conclusion validity means that the researchers have correctly applied statistical methods and identified the statistical strength and certainty of the results (see the sketch after this list).
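To ground internal validity and statistical conclusion validity, the sketch below (using simulated data) computes a simple difference-in-differences estimate, in which the control group's change between rounds stands in for the counterfactual, and applies a two-sample t-test to the gain scores. It does not address selection bias, which must be handled in the assessment design:

# Difference-in-differences sketch on simulated data. The control group's
# change between rounds approximates the counterfactual (internal validity);
# the t-test on gain scores speaks to statistical conclusion validity.
# Selection bias is NOT handled here; it must be addressed in the design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated enterprise incomes (USD/month) at baseline and follow-up.
treat_base = rng.normal(200, 30, 150)
treat_follow = treat_base + rng.normal(45, 20, 150)  # assumed effect + trend
ctrl_base = rng.normal(200, 30, 150)
ctrl_follow = ctrl_base + rng.normal(15, 20, 150)    # secular trend only

treat_gain = treat_follow - treat_base
ctrl_gain = ctrl_follow - ctrl_base

# DiD estimate: treatment gain net of the gain the group would have seen anyway.
did = treat_gain.mean() - ctrl_gain.mean()
t_stat, p_value = stats.ttest_ind(treat_gain, ctrl_gain)
print(f"DiD estimate: {did:.1f} USD/month (t = {t_stat:.2f}, p = {p_value:.4g})")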
Impact assessment rigor further depends on a variety of other factors that need to be incorporated into the assessment design, implementation, and analysis,
including triangulation, methodological transparency, sound data collection methods, and methodological appropriateness. Learn more about what constitutes a
rigorous impact assessment.
For more on impact assessment methodologies, see Impact Assessment Primer Series article #2, Methodological Issues in Conducting Impact Assessments of Private Sector Development Programs, and Primer Series article #3, Collecting and Using Data for Impact Assessment.
What Are the Steps in Implementing an Impact Assessment?
Conducting a good impact assessment of a value chain project involves the following steps (the steps assume two research rounds, a baseline and a follow-up):
1. Select the Project(s) to be Assessed.
2. Conduct an Evaluability Assessment.
3. Prepare a Research Plan.
4. Contract and Staff the Impact Assessment.
5. Carry out the Field Research and Analyze its Results.
6. Disseminate the Impact Assessment Findings.
The Private Sector Development Impact Assessment Initiative (PSD-IAI) team conducted a series of impact assessments on four USAID private sector development
projects to demonstrate and refine this approach to impact assessment:
Brazil: Micro and Small Enterprise Trade-Led Growth Program
India: Growth-oriented Micro Enterprise Development (GMED) Program
Kenya: Kenya BDS and Kenya Horticultural Development Programs
Zambia: Production, Finance and Improved Technology (PROFIT) Program
Another impact assessment of the USAID-funded Development of a BDS Market in Rural Himalayas project was conducted in 2007.
Resources
The PSD-IAI developed, tested, and published guidelines for credible impact assessments, including an Impact Assessment Primer Series that explains many of the
key concepts and operational steps summarized above:
1. IA Primer Number 1: Assessing the Impact of New Generation Private Sector Development Programs
2. IA Primer Number 2: Methodological Issues in Conducting Impact Assessments of Private Sector Development Programs
3. IA Primer Number 3: Collecting and Using Data for Impact Assessment
4. IA Primer Number 4: Developing a Causal Model for Private Sector Development Programs
5. IA Primer Number 5: Causal Models as a Useful Program Management Tool: Case Study of PROFIT Zambia
6. IA Primer Number 6: Planning for Cost Effective Evaluation with Evaluability Assessment
7. IA Primer Number 7: Common Problems in Impact Assessment Research
The PSD-IAI published two additional papers, with their associated Breakfast Seminar events and screencasts available below:
PDF of Assessing the Effectiveness of Economic Growth Programs
Breakfast Seminar and Screencast of Assessing the Effectiveness of Economic Growth Programs
PDF of Time to Learn: An Evaluation Strategy for Revitalized Foreign Assistance
Breakfast Seminar and Screencast of Using Evaluation Findings to Drive USAID Learning