
DIPLOMA IN PROJECT MANAGEMENT –YEAR TWO

TECHNICAL, INDUSTRIAL, VOCATIONAL AND ENTREPRENEURSHIP


TRAINING

PROJECT MONITORING AND EVALUATION –KNEC CODE 2922/203

This unit is intended to equip the trainees with knowledge, skills and attitudes
essential in monitoring and evaluation of a project

GENERAL OBJECTIVES
By the end of the unit, the trainee should be able to:
a) Undertake established techniques of monitoring and evaluating projects
b) Understand the performance of established projects
c) Understand problems in individual projects
d) Appreciate the importance of project audits and reports
e) Appreciate the role of legal documents in a project

COURSE TOPICS
1. Project Monitoring
2. Project Evaluation
3. Process of Monitoring and Evaluation
4. Indicators in Project Monitoring and Evaluation
5. Project Audits
6. Project Monitoring and Evaluation Reports
7. Emerging Issues

PREPARED BY ANNE MUSAU

TOPIC 1: PROJECT MONITORING
Sub-topics
 Meaning of Project Monitoring
 Importance of Project Monitoring
 Types of Monitoring
 Steps in Monitoring
 Characteristics of a good Monitoring System
 Challenges faced in Monitoring of projects

BASIC CONCEPTS IN MONITORING

 Monitoring procedure
This is a group of activities done by people (monitors) who follow a plan to check
program implementation
 A monitor
This is a person who uses a monitoring procedure. The person may or may not have
another program responsibility

 Monitoring plan
This is a description of which specific components from a monitoring system will be used,
when they will be used, and how the results will be reported

 Monitoring system
This is a group of components related to some aspect of program implementation.
Each component includes a description of a standard, a group of indicators and a
description of the adjustments to make when the situation deviates from the standard

 Indicators
Indicators are units of measurement that each give a partial view of reality; when looked
at in total, they give the full picture. They are parts of a whole.

 Adjustment procedures
This is a description of what to do when an indicator shows that the situation deviates
from the expected standard of performance or conditions

MEANING OF PROJECT MONITORING


 Refers to the continuous tracking of a project's progress with a view to ensuring efficiency. It
is the systematic and continuous collection, analysis and interpretation of data with a view to
ensuring that everything is moving on as planned.

 Project monitoring is an integral part of day-to-day management. It provides information
by which management can identify and solve implementation problems and assess
progress.

 Monitoring also involves giving feedback about the progress of the project to the donors,
implementers and beneficiaries of the project. It enables the gathering of information used in
making decisions for improving project performance. The data acquired through monitoring
is also used for evaluation.

 Monitoring usually focuses on processes, such as when and where activities occur, who
delivers them, how, and how many people or entities they reach.

 Monitoring is conducted after a program has begun and continues throughout the
program implementation period.
 Monitoring clarifies program objectives, links resources to objectives, translates
objectives into performance indicators and sets targets; it routinely collects data on these
indicators and compares actual results with targets.

 Monitoring gives information on where a policy, program or project is at any given time (or
over time) relative to its respective targets and outcomes. Monitoring focuses in particular on
efficiency and the use of resources.
IMPORTANCE OF PROJECT MONITORING

The main purposes for carrying out Monitoring and Evaluation include;

 Ensuring that planned results are achieved,


 Supporting and improving project management,
 Generating shared understanding,
 Generating new knowledge and support learning,
 Building capacities of those involved,
 Motivating stakeholders and
 Fostering public and political support
The following benefits accrue from monitoring:

 Improved performance of all activities through timely feedback to stakeholders


 Means of ensuring that performance takes place in accordance with work-plans
 Improved coordination and communication through readily available information.
 Provision of greater transparency expected by all stakeholders
 Improved awareness about programme activities among all stakeholders
 Enhanced external/Governments support due to accurate and timely reporting on use
of funds
 Confirmation of whether the project addresses the needs of special groups like the
poor, disabled, children etc.
 Assessment of whether the project is on track in meeting the programme goals
 Informed contribution to future programme designs
 Help make decisions and recommendations about future directions
 Identify the strengths and weaknesses of a project
 Feed data back to support programs and policies
 Assess and determine stakeholder and target group satisfaction
 Determine whether the project is meeting its objectives
 Meet demands for accountability to funding bodies
 Develop the skills and understanding of people involved in a project
 Promote a project to the wider community.

TYPES OF MONITORING

1. Impact monitoring
Impact monitoring is a type of monitoring which continually assesses the impact of project
activities on the target population. Impact refers to the long-term effects of a project.

However, for projects with a long lifespan and for programs (programs have no defined
timelines), there emerges a need to measure impact change in order to show whether the
general conditions of the intended beneficiaries are improving or otherwise.

The manager monitors impact against a predetermined set of impact indicators. Monitoring
both the positive and negative impacts of the project/program, intended and unintended,
becomes imperative.
For example: in a water and sanitation program there may be a need to monitor the change in
under-5 mortality in the program area over time. In this case, rather than being identified as an
impact evaluation, this would be identified as impact monitoring.

2. Performance monitoring

Projects are mainly designed and funded to achieve desired outcomes. Assessing those
outcomes and changes is a key function of the M&E unit. The "value for money" of a project is
assessed through an assessment of performance indicators.

Performance or outcome indicators are usually outlined in the project proposal and are
inserted into an M&E plan.

To assess progress on performance indicators, a baseline is important. Baseline data shows the
pre-project status of the performance/outcome indicators.

Baseline studies are conducted through systematic process and methodologies


Once baseline data is available, realistic targets are set together with the project implementing staff.
Once targets for the outcome/performance indicators are set, the sources of data and methods of
data collection for the indicators are identified.

This enables the M&E staff to be aware of the data collection sources and the sampling to be
followed. The M&E plan also provides information about the frequency/timeline at which each
performance indicator is to be assessed over a period of time.
Periodic assessments are conducted by drafting a concept note.
The concept note briefly outlines
 Purpose of assessment
 Scope and indicators to be tracked
 Methodology tools and sampling size
 Types of tools to be used in the data collection
 Who will participate in the data collection
 Who and where the data will be analyzed and managed
 Reporting and timeline

3) Technical monitoring
Technical monitoring involves assessing the strategy that is being used in project
implementation to establish whether it is achieving the required results. It involves the technical
aspects of the project, such as the activities to be conducted. In a safe water project, for example,
physical progress monitoring may show that there is little or no uptake of chlorine as a water
treatment strategy.
Technical monitoring may establish that this could be a result of installing chlorine dispensers
at the water source, where women are so time-constrained that they have no time to line up and
get chlorine from the dispensers. This may prompt a change of strategy where the project might
opt for household distribution of bottled chlorine.

4) Financial monitoring
Just like the name suggests, financial monitoring simply refers to monitoring project/program
expenditure and comparing it with the budget prepared at the planning stage.
Tracking the use of the funds at the disposal of the program/project is crucial for ensuring there are no
excesses or wastages.

Financial monitoring is also important for accountability and reporting purposes, as well as for
measuring financial efficiency (maximizing outputs with minimal inputs).
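
To make the idea concrete, here is a minimal illustrative sketch (not part of the original notes) of how financial monitoring could compare planned budget lines against actual expenditure; the budget lines and figures are hypothetical:

# Hypothetical sketch: compare planned budget against actual expenditure per budget line.
# All budget lines and figures are invented for illustration only.
budget = {"training": 500_000, "equipment": 300_000, "transport": 120_000}   # planned amounts
actual = {"training": 430_000, "equipment": 350_000, "transport": 118_000}   # spent to date

for line, planned in budget.items():
    spent = actual.get(line, 0)
    variance = planned - spent              # positive = under budget, negative = overspent
    utilisation = spent / planned * 100     # percentage of the budget line used
    flag = "OVERSPENT" if spent > planned else "ok"
    print(f"{line:10s} planned={planned:>9,} spent={spent:>9,} "
          f"variance={variance:>9,} used={utilisation:5.1f}% {flag}")

Such a simple comparison is enough to flag excesses or wastages early and to support the accountability and efficiency reporting described above.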

5) Process monitoring
 Process monitoring is a key component of any M&E system. Process monitoring informs
management and the donor about the actual implementation of project activities in the
field
 At the same time, process monitoring lets the project staff on the ground know how well
they are implementing the project and what improvements they can bring to the work they
are doing in the field
 Process monitoring is conducted using checklists and guidelines. These checklists are
developed jointly with the project staff
 The same checklists and guidelines are used by field staff while implementing project
activities. Following the same checklists/guidelines by both the monitoring staff and the
field staff helps the M&E staff to identify and share gaps that are found during
process monitoring. Participants share a sample of monitoring guidelines.
 In order to undertake process monitoring, a monitoring tool is required that captures the
following information:
 Purpose of the monitoring visits
 Which activities does the visit cover
 Methodology adopted for the visit
 Key findings for the field
 Feedback by the field staff
 The breaking points agreed
 Deadlines and responsibilities

STEPS IN PROJECT MONITORING


1. Conducting a readiness assessment
How monitoring will support attainment of objectives, Reaction to results

2. Agreeing on what to monitor

Stakeholder identification and involvement

Identify stakeholder’s major concerns

Disaggregate data to capture key desired outcomes (gender, age, economic status, rural/urban)

3. Selecting the key performance indicators to monitor outcomes

-translate outcomes to a set of measurable performance indicators

 Indicators are signs that show changes in certain conditions; an indicator is simply a
measurement which is compared over time in order to assess change.
4. Setting baselines and collecting data on the indicators

The baseline is information collected before the monitoring period (a critical measurement of the indicator). Identify the
sources of data, collection methods, who collects the data, frequency, cost, reporting and use

5. Select result targets


Consider the baseline indicator level and set the desired level of improvement spread over a
specified time

6. Decide on the method of monitoring to use

Activity based-activity implemented on schedule


Result based –focuses on impact

7. Monitor the project

Mechanisms include:

- use of a filing system to organize all communications, reports, minutes of meetings and
any other documents that can be used to keep track of project activities
- use of document logs, such as activity logs to track events and the progress of project activities,
and contact logs to record the time and details of contacts (a simple sketch of such logs follows this list)
- tracking software for project documents, or for recording websites and other technology-related
activities

8. Document and disseminate results
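
As a minimal sketch (an assumption for illustration, not the original author's tooling), the document logs mentioned in step 7 could be kept as simple structured records, for example:

# Hypothetical sketch of the document logs mentioned in step 7: an activity log to
# track events and progress, and a contact log to record the time and details of contacts.
# All example dates and entries are invented.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ActivityLogEntry:
    when: date
    activity: str
    status: str        # e.g. "planned", "in progress", "done"
    notes: str = ""

@dataclass
class ContactLogEntry:
    when: date
    person: str
    details: str

@dataclass
class ProjectLog:
    activities: list = field(default_factory=list)
    contacts: list = field(default_factory=list)

log = ProjectLog()
log.activities.append(ActivityLogEntry(date(2024, 3, 1), "Community training session", "done",
                                        "32 of 40 invited participants attended"))
log.contacts.append(ContactLogEntry(date(2024, 3, 2), "County water officer",
                                    "Agreed on a date for a joint site visit"))

for entry in log.activities:
    print(entry.when, "-", entry.activity, "-", entry.status)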

CHARACTERISTICS OF APPROPRIATE MONITORING SYSTEM


 Focuses on results and follow up
What is going as planned and what’s not
Report and recommend

 Depends largely on good policy/program design


Review assumptions
Treat risks explicitly

 Requires regularity: visits by program staff and analysis of reports


Data management of feedback from stakeholders

Monitoring institutionalized –culture


 Focuses on participation to ensure passion, commitment and ownership
Stakeholder identification and full involvement

 Based on a clear criteria and indicators


Identify success criteria for scope, quality, cost and time objectives

Assess and review relevance of project

 Develop quality indicators- Monitoring is not about the quantity of indicators, it is about
their quality
 Necessity of electronic media for memory and sharing lessons
 Proper documentation
CHALLENGES FACED IN MONITORING OF PROJECTS
Organizational
 Absence of a separate monitoring unit; lack of prior planning for M&E as an important
component (it is treated as an afterthought)
 Existence of weak unit, lack of coordination,
 Lack of ownership of the M&E process and results
 Lack of stakeholder involvement
 Corruption –influence by interested parties
 Poor data management skills
Financial
 Inadequate financial resources and/or liquidity, etc.
 Different funders –different reporting systems
Staff and Training
 Lack of adequate trained/qualified staff, transfer of experienced staff.
 Lack of commitment
Information
 Frequency of flows, content, quality
Intangible elements
 Negative attitude toward monitoring –a fault finding activity
Delays
 Procedures, sanctions, late action, etc.

TOPIC 2: PROJECT EVALUATION


Sub-Topics
Meaning of Evaluation
Purpose of Evaluation
Types of Evaluation
Steps in Evaluation
Key Project Evaluation Criteria

MEANING OF EVALUATION
 Project evaluation is the process of determining the extent to which objectives have
been achieved
 It is a set of procedures to appraise a project's merits and provide information about its
goals, objectives, activities, outcomes and inputs
 Project evaluation is a systematic and objective assessment of an ongoing or completed
project. The purpose of carrying out evaluation is to determine the relevance and level of
achievement of project objectives, development efficiency, effectiveness, impact and
sustainability
 An evaluation can be done during implementation, at its end or afterwards
PURPOSE OF AN EVALUATION
 To determine efficiency of a project
 To assess the effectiveness of the project
 To measure sustainability of the project
 To determine relevance of a project
 To measure the impact of an intervention or project
KEY EVALUATION CRITERIA
This gives the scope of evaluation. It’s the criteria that a project team uses when undertaking
an evaluation exercise. The following are addressed
1. Efficiency-
This is the measure of the relationship between input and output
The evaluation involves assessing whether the stocks of items were available on time and in
the right qualities and quantities
It also assesses whether the activities were implemented on schedule and within budget
2. Effectiveness
It is the measure of the relationship between project outputs and objectives. It is the extent
to which the development intervention's objectives are achieved
The evaluation involves determining whether the project leads to the intended outcomes
3. Relevance
It measures the relationship of project outputs to the needs that were identified. It
investigates the extent to which the outputs of a project have met the needs of beneficiaries.
4. Sustainability
It estimates the extent to which the project will continue after external funding has
terminated
It determines whether the benefits are likely to be maintained for an extended period
after the assistance ends
5. Impact
Assesses the changes the intervention brought about and whether the changes were
planned or unplanned, intended or unintended
TYPES OF EVALUATION
 Needs assessment / ex-ante evaluation / baseline survey
 Formative/ ongoing evaluation
 Summative / final evaluation
 Impact evaluation

A. Needs Assessment / Ex-Ante Evaluation /Baseline Survey


 This is carried out before project implementation
 It is about gathering baseline information
 It helps in addressing:
 what the project sets out to achieve
 what the objectives of the project are
 who the beneficiaries are and how they relate to the intended objectives
 what the proposed approaches or alternative methods of achieving project objectives are
 what the planned activities are
Primary users of the evaluation findings are: donors, implementing agencies, beneficiary communities,
researchers and institutions of higher learning

B.Formative / On-Going Evaluation


 It can be done at any time during the implementation period of the project
 It assesses the ongoing project activities and then provides information to monitor and
improve the project
 It enables the implementing personnel to check on different aspects of the project and
their effects, as well as detect problems or shortcomings in good time to make the necessary
changes
Components of Formative Evaluation
Implementation evaluation
Its main purpose is to assess whether the project is being conducted as planned. It is also called
process evaluation and occurs several times during the project implementation period
Progress evaluation
It assesses the progress of the project in relation to achieving the goals. Information is
collected to determine the impact of activities/strategies on beneficiaries
The users of the report are;
Donors, Project management team, Target group

C.Summative /Final Evaluation
 This is done at the end of the project implementation
 It is used to examine the projects effectiveness in achieving its objectives and its
contribution to the development of the area
 Final evaluation is concerned with all aspects of a project
 The main purpose is to find out ;
1. The extent the project activities and strategies correspond to those stated in the plan

2. Whether the activities are carried out by the right personnel


3. To what extent the project is moving towards the anticipated goals and objectives
4. Which of the projects activities and strategies are the most effective?
5. What barriers have been identified and how they can be dealt with
6. What the main strengths and weaknesses of the project are
7. To what extent are the project beneficiaries active in decision making
8. To what extent the project beneficiaries are satisfied with the project
9. The results and effects of the intervention
10. Lessons that can be learnt
Users of Summative Evaluation Report Are;
 Donors
 Government
 Target groups
 External evaluators
 Internal evaluators
 Project staff
 Beneficiary communities
Impact Evaluation
 It is done 1-10 years after project implementation
 Its main purpose is to establish the sustainability of the results of the project
 It is used to measure direct and indirect changes and to draw lessons from the project
Main users of the report are;
Donors
Planners
Government
Researchers
Academics
Evaluation Involves:
A. Measuring

Evaluation will first measure what has been done in relation to what should have
been done. If the evaluating agency observes the work continuously, performance
evaluation can be done
B. Reviewing
The purpose of the review is basically participative problem solving. It also keeps
the implementing team informed and alert, since their performance is being closely
checked
C. Reporting
All information related to or collected during the evaluation must be presented in the form of
a report that is presented to the respective bodies, e.g. the donor agency
D. Deciding and taking corrective measures
This involves determining the possible course of action to take in order to correct any
deviation or undesired course of action
STEPS IN EVALUATION
Phase A: Planning the Evaluation
 Determine the purpose of the evaluation.
 Decide on type of evaluation.
 Decide on who conducts evaluation (evaluation team)
 Review existing information in programme documents including monitoring
information.
 List the relevant information source
 Describe the programme.

Phase B: Selecting Appropriate Evaluation Methods


 Identify evaluation goals and objectives.
 Formulate evaluation questions and sub-questions
 Decide on the appropriate evaluation design
 Identify measurement standards
 Identify measurement indicators
 Develop an evaluation schedule
Develop a budget for the evaluation.
Phase C: Collecting and Analyzing Information
• Develop data collection instruments
• Pre-test data collection instruments
• Undertake data collection activities
• Analyze data
• Interpret the data

Phase D: Reporting Findings


• Write the evaluation report.
• Decide on the method of sharing the evaluation results and on communication strategies.
• Share the draft report with stakeholders, revise as needed and follow up.
• Disseminate evaluation report.

Phase E: Implementing Evaluation Recommendations


• Develop a new/revised implementation plan in partnership with stakeholders.
• Monitor the implementation of evaluation recommendations and report regularly on the implementation.
• Plan the next evaluation

Challenges Faced During Evaluation


 Lack of skill and knowledge
 Lack of first-hand information and knowledge, especially for external evaluations
 Inadequate financial resources
 It is expensive with concern to external evaluation
 Lack of personal commitment to the project

TOPIC 3: PROCESS OF MONITORING AND EVALUATION


Sub –Topics

 Difference between Monitoring and Evaluation


 Monitoring and Evaluation Methods
 Monitoring and Evaluation Tools
 Internal Monitoring and Evaluation
 Participatory Monitoring and Evaluation
 Qualities of a good Evaluator
 Response to Monitoring and Evaluation Results
 Communication of Monitoring and Evaluation Result

DIFFERENCE BETWEEN MONITORING AND EVALUATION

Timing
Monitoring: Continuous throughout the project.
Evaluation: Periodic review at significant points in project progress: end of project, change of phase.

Scope
Monitoring: Day-to-day activities, outputs, indicators of progress.
Evaluation: Assesses the overall delivery of outputs and progress towards objectives and impacts.

Main participants
Monitoring: Project staff and partners, stakeholders.
Evaluation: External evaluators/facilitators, project staff, donors, stakeholders.

Process
Monitoring: Regular meetings, interviews, monthly/quarterly reviews, etc.
Evaluation: Extraordinary meetings, additional data collection exercises, etc.

Written outputs
Monitoring: Regular reports and updates to project management, partners, stakeholders and donors.
Evaluation: Written report with recommendations for changes to the project, presented in workshops to different stakeholders.

Definition
Monitoring: Ongoing analysis of project progress towards achieving planned results, with the purpose of improving management decision making.
Evaluation: Assessment of the efficiency, impact, relevance and sustainability of the project actions.

Who?
Monitoring: Internal management responsibility.
Evaluation: Usually incorporates external agencies.

When?
Monitoring: Ongoing.
Evaluation: Usually at completion, but also at mid-term, ex-post and ongoing.

Why?
Monitoring: Check progress, take remedial actions and update plans.
Evaluation: Learn broad lessons applicable to other programs and projects; provides accountability.

Function
Monitoring: Monitoring keeps track of the progress of implementation.
Evaluation: Evaluation consists in estimating the value of something; it involves the process of finding facts.

Aim
Monitoring: Monitoring aims at periodically checking the progress made in the conduct of the project against the targets and goals laid down.
Evaluation: Evaluation aims at making a study of the effectiveness of the project.

Purpose
Monitoring: The purpose of monitoring lies in providing constructive suggestions.
Evaluation: The purpose of evaluation lies in bringing about accountability and thus perfection.

MONITORING AND EVALUATION METHODS

1. Monitoring and Evaluation Systems


Uses performance indicators to measure progress, particularly actual results against
expected results.

2. Extant Reports and Documents


Existing documentation, including quantitative and descriptive information about the
initiative, its outputs and outcomes, such as documentation from capacity development
activities, donor reports, and other evidence.
3. Questionnaires
Provides a standardized approach to obtaining information on a wide range of topics
from a large number or diversity of stakeholders (usually employing sampling
techniques) to obtain information on their attitudes, beliefs, opinions, perceptions, level
of satisfaction, etc. concerning the operations, inputs, outputs and contextual factors
of a project/programme
4. Interviews
Solicit person-to-person responses to predetermined questions designed to obtain in-
depth information about a person’s impressions or experiences, or to learn more about
their answers to questionnaires or surveys.
5. On-Site Observation

Entails use of a detailed observation form to record accurate information on-site about
how a programme operates (ongoing activities, processes, discussions, social
interactions and observable results as directly observed during the course of an
initiative).
6. Group Interviews / Focus Group Discussions (FGDs)
A small group (6 to 8 people) are interviewed together to explore in-depth stakeholder
opinions, similar or divergent points of view, or judgements about a development
initiative or policy, as well as information about their behaviors, understanding and
perceptions of an initiative or to collect information around tangible and non-tangible
changes resulting from an initiative.
7. Key Informants

Qualitative in-depth interviews, often one-on-one, with a wide range of stakeholders
who have first-hand knowledge about the initiative operations and context. These
community experts can provide particular knowledge and understanding of problems
and recommend solutions
8. Expert Panels
A peer review, or reference group, composed of external experts to provide input on
technical or other substance topics covered by the evaluation
9. Case Studies
Involves comprehensive examination through cross comparison of cases to obtain in-
depth information with the goal to fully understand the operational dynamics, activities,
outputs, outcomes and interactions of a development project or programme

MONITORING AND EVALUATION TOOLS

Techniques for monitoring and evaluating a project vary from one situation to another.
Whichever technique is used, the main aim is to measure the efficiency and effectiveness of
the system and the competencies that exist in the individuals overseeing the system, and to ensure
conformity to project goals, objectives and proposals.
The decision on which tools to use depends on the kind of information that one is gathering.
The information is classified into:
 Quantitative data
 Qualitative data

Quantitative data

Quantitative information is obtained by counting or measuring


The data is usually presented in whole numbers, fractions, decimal points, percentages, ratios,
proportions etc.
Example;
How many students passed module {II} examinations?
What percentage of the class passed?
What was the ratio of men to women?
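
As a minimal worked sketch (with invented figures, not data from the notes), the example questions above could be answered from raw counts as follows:

# Hypothetical worked example: turning raw counts into the percentages and ratios
# used as quantitative data. All figures are invented for illustration.
total_students = 50
passed = 38
men, women = 20, 30

pass_percentage = passed / total_students * 100    # percentage of the class that passed
ratio_men_to_women = men / women                   # ratio of men to women

print(f"Passed Module II: {passed} of {total_students} students ({pass_percentage:.0f}%)")
print(f"Ratio of men to women: {men}:{women} (about {ratio_men_to_women:.2f} men per woman)")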

Qualitative data
This tell us about how people feel about a situation, how things are done and how people
behave
The data is obtained by observing and interpreting
Some of the tools used in data collection include;
 Questionnaires
 Interviewers guide
 Observation guide

Questionnaires
 A questionnaire is a tool used in gathering information from individuals or
organizations
 They can be used to measure opinions, attitudes, behavior and perceptions on specific
topics
 Well-designed questionnaires are essential for collecting valid and reliable data
 Questionnaire items refer to the individual questions that appear in a questionnaire

Format of a questionnaire item


 The items can either be open ended or close ended
 Open ended items are used to gather qualitative data and the respondents are asked to
respond using their own words
 The information gathered is usually bulky and requires to be categorized in terms of
patterns
 Closed-ended items require the respondent to select an answer from a list of provided
alternatives
 The response options could be simple categories or a rating scale (Strongly Agree, Agree, Disagree, Strongly Disagree)

Wording and order of item in a questionnaire


 It should be easy to read and understand
 Each item should explore one piece of information only
 All items should be related to the others on the questionnaire
 Items should be free from bias
 The information asked for should be precise and solicit accurate answers

How to design a questionnaire


1. Research the questionnaire items
2. Identify the topics or areas you want to research on in relation to the objectives of the
project
 Find out if others have undertaken the same research before, i.e. evaluation research. Note
that projects and evaluation assignments require a lot of reading

When undertaking evaluation the client provides;


 Project plan – containing the project objectives, the scope and the results framework. The results
framework contains the project indicators, set targets, baseline, timeline and budget
 Project operations manual – it contains details of all activities of the project
 Project proposals – legal framework, implementation status report

3. Determine the format of the items


Identify the closed-ended items and develop response options or a scale for them
Identify the items to be open-ended and decide how to analyse the responses

4. Test and finalize the questionnaire (piloting)

One way of testing is to administer the questionnaire, create a mock data set based on the
responses, and then revise the items
Interview Schedule / Guide
Refers to the set of questions that the interviewer plans to ask when interviewing
It is used to standardize the interview so that interviewers can ask the same questions in the same
manner to all the respondents
There are two types of interview schedules/guides:
 Personal interviews
 Telephone interviews
They can be categorized as either:
1. Structured or unstructured interviews
2. Focused or non-directive interviews
Structured Interviews
Structured interviews make use of a set of pre-determined questions and a highly standardized
technique of recording. In unstructured interviews, by contrast, the interviewer is allowed the
freedom to ask supplementary questions or to omit some questions.
This flexibility results in a lack of comparability of one interview with another.
Analysis of the responses is more difficult and time consuming.

Focused interviews
It is meant to focus attention on a given experience of the respondent and its effects
The main task of the interviewer is to confine the respondent to the topic
Such interviews are mainly unstructured

Non- directed interview


The interviewer's role is simply to encourage the respondent to talk about the topic with a
minimum of direct questions. The interviewer often acts as a catalyst for a comprehensive
expression of the respondent's feelings and beliefs.

Advantages of interviews
1) They provide in-depth data which is not possible to get using questionnaires
2) Unlike questionnaires, the interviewer can clarify the questions, resulting in more
relevant responses
3) They are more flexible because the interviewer can adapt to the situation
4) Sensitive and personal information can be extracted from the respondent through honest
and personal interaction
5) Unlike questionnaires, the interviewer can get more information by asking probing
questions
6) Interviews yield higher response rates since it is difficult for a respondent to completely
refuse to answer a question
Disadvantages of interviews
1) They are more expensive since the interviewer has to travel to meet the respondents
2) Interviewing requires a high level of skill. It requires communication interpersonal skills
3) Responses may be influenced by the respondent's reaction to the interviewer.

Observation Guide
A list of questions that guide on what to observe

Tools used when evaluating a project


i. Evaluation plan – it outlines how the project should be evaluated and may include a
tracking system on the implementation of evaluation follow-up
ii. Project evaluation information sheet – a report or questionnaire presenting the
project evaluation with the evaluators' ratings
iii. Annual project report – assessment of a project during a given year by the target group,
project management, government and donors
iv. Terminal report – prepared by implementing organizations and includes lessons learnt
v. Field visit report – it involves visiting project sites and reporting immediately after the
visit
vi. Minutes of annual review – an annual meeting which generates annual reports to
assess annual outputs (results)
vii. Project status reports – help one to understand the current status, performance,
schedule, costs, hold-ups and deviations from the original schedules
viii. Project schedule chart – indicates the time schedule for implementation of the project.
From this, one can see how any delay leads to ultimate loss.
ix. Project Financial status report – The evaluation team is able to tell whether or not
the project is being implemented within the budget.

INTERNAL MONITORING AND EVALUATION


Internal evaluation (self-evaluation) is one in which people within a program sponsor, conduct
and control the evaluation

Advantages of Internal Evaluation:
• Knows the implementing organization, its programme and operations
• Understands and can interpret the behavior and attitudes of members of the organization
• May possess important informal information
• Known to staff, so less threat of anxiety or disruption
• More easily accepts and promotes the use of evaluation results
• Less costly
• Does not require time-consuming recruitment negotiations
• Contributes to strengthening national capacity

Disadvantages of Internal Evaluation:
• May lack objectivity and thus reduce the credibility of findings
• Tends to accept the position of the organization
• Usually too busy to participate fully
• Part of the authority structure and may be constrained by organizational role conflict
• May not be sufficiently knowledgeable or experienced to design and implement an evaluation
• May not have special subject matter expertise

External evaluation is one in which someone from outside the program acts as the evaluator and
controls the evaluation
Advantages of External Evaluation:
• May be more objective and find it easier to formulate recommendations
• May be free from organizational bias
• May offer new perspectives and additional insights
• May have greater evaluation skills and expertise in conducting an evaluation
• May provide greater technical expertise
• Able to dedicate him/herself full time to the evaluation
• Can serve as an arbitrator or facilitator between parties
• Can bring the organization into contact with additional technical resources

Disadvantages of External Evaluation:
• May tend to produce overly theoretical evaluation results
• May be perceived as an adversary, arousing unnecessary anxiety
• May be costly
• Requires more time for contracting, negotiations, orientation and monitoring

PARTICIPATORY MONITORING AND EVALUATION


An ongoing and regular process which actively involves stakeholders in all the stages of
collecting, analyzing and using information on an intervention with a view to assessing the
processes and results and making recommendations (provide information about decision-
making).
It is a process to support the implementation of a development project/programme by
grassroots communities and stakeholders which strengthens appropriation, mutual
responsibility, transparency and knowledge of interrelations between the results,
implementation factors and the environment.

Advantages of Participatory Monitoring and Evaluation
Empowers beneficiaries to analyze and act on their own situation (as “active participants”
rather than “passive recipients”)
▪ Builds local capacity to manage, own, and sustain the project. People are likely to accept and
internalize findings and recommendations that they provide.
▪ Builds collaboration and consensus at different levels—between beneficiaries, local staff
and partners, and senior management

▪ Reinforces beneficiary accountability, preventing one perspective from dominating the


M&E process
▪ Saves money and time in data collection compared with the cost of using project staff or
hiring outside support
▪ Provides timely and relevant information directly from the field for management decision
making to execute corrective actions
Potential disadvantages of Participatory Monitoring and Evaluation

Requires more time and cost to train and manage local staff and community members
▪ Requires skilled facilitators to ensure that everyone understands the process and is equally
involved
▪ Can jeopardize the quality of collected data due to local politics. Data analysis and decision
making can be dominated by the more powerful voices in the community (related to gender,
ethnic, or religious factors).
▪ Demands the genuine commitment of local people and the support of donors, since the
project may not use the traditional indicators or formats for reporting findings

Stages of the PME process


1. Decide to set up the system
2. Identify the Actors
3. Define expectations and objectives
4. Identify the criteria and indicators
5. Choose information collection methods and tools
6. Collect and analyze information
7. Implement actions for change

Who are the actors?
The actors are those who have an influence on or are affected by the project or programme
concerned by the participatory monitoring-evaluation.
The identification and analysis of actors is an important phase in the establishment of the
PME system.
The PME is set up to meet the concerns of these actors, particularly those directly affected
by the intervention of the project or programme (direct and indirect beneficiaries).
Some actors are more visible because of the positions they occupy and the roles they play in
the community or project. But there are also less visible actors who generally belong to
so-called vulnerable groups and are most affected by the activities undertaken. These groups,
which constitute the primary beneficiaries of the project, should play a central role in the
design and management of the PME system.
It is therefore important to have an appropriate approach and tools to identify them and
examine their interests and what they expect from the project, the type of influence they can
exert on the project activities and the arrangements to be made.
How to conduct the actors’ analysis?
There are several tools for that purpose. They include:
The power-interest grid:
It is used to make a simple mapping of actors by taking into consideration the interest that
each of them could have for the PME system to be set up, as well as the influences (positive
or negative) that he/she could have on the system. To apply it, one should first, identify all
the project actors and second, prepare a typology by placing each actor in one of the 4 spaces
of the grid, corresponding to its interest and the importance of his/her influence. Of course
such a classification should be justified. At the end of the classification, one should examine
the actions to be undertaken for each of the 4 categories of actors identified for the success
of the PME system to be set up.
High power, high interest – important actors/project beneficiaries; maintain them
High power, low interest – keep them informed
Low power, low interest – people or groups living in the area but with no interest in the project
Low power, high interest – set up their level of information and training needs; need for
negotiating capacity
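
As a minimal sketch (using hypothetical actors and high/low ratings that are assumptions, not part of the original notes), the power-interest mapping above could be tabulated like this; the recommended actions paraphrase the four categories just described:

# Hypothetical sketch of a power-interest mapping of project actors.
# Actor names and high/low ratings are invented for illustration.
actions = {
    ("high", "high"): "important actors / project beneficiaries - maintain and involve them",
    ("high", "low"):  "keep them informed",
    ("low",  "high"): "build their information level and negotiating capacity",
    ("low",  "low"):  "groups in the area with no interest in the project - no specific action",
}

actors = [
    ("Donor agency",            "high", "high"),
    ("County government",       "high", "low"),
    ("Women's water committee", "low",  "high"),
    ("Neighbouring villages",   "low",  "low"),
]

for name, power, interest in actors:
    print(f"{name:25s} power={power:4s} interest={interest:4s} -> {actions[(power, interest)]}")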

The Actors’ Analysis Grid:


It facilitates the identification of the different actors, their interest for the system, their
influence on the system, and the actions to be taken to improve their participation. As can be
observed, this grid makes it possible to produce the same type of information as those

generated by the power-interest grid, the only difference being that the former is used to
classify actors.
The two grids are also complementary since the actors’ analysis grid can be used as a back-
up for the organization of information generated by the reflection which accompanies the
preparation of the power-interest grid. But, for reasons of simplicity, one can choose to use
only one of them. One should not lose sight of the fact that this process is, first and foremost,
meant for the local populations and that, as a result, one should avoid using a variety of tools
which would complicate the process even more.
The Actors' Analysis Grid (column headings):
• Actors
• What interest does this group have in the PME system?
• What positive interest does this group have in the PME system?
• What negative interest does this group have in the PME system?
• How to step up the participation of these actors

QUALITIES OF A GOOD EVALUATOR

 Detail-oriented –with good mastery of the knowledge on the project


 Strives for accuracy in his/her work
 Thorough and persistent, following through on issues that seem to develop slowly or have
some dead-ends
 Inquisitive, curious about how and why things work/operate the way they do
 Organized mentally (but his/her desk could be cluttered!)
 Strong planning skills to guide analyses and to decide on tasks and priorities
 Quick learner of complex issues
 Quick on their feet in interviews and meetings
 Intuitive sense of which issues are important to explore
 Sees the "big picture"
 Works well with people in a team setting-relates well with agency/program staff and gains
their confidence
 Works well under pressure and/or with tight time deadlines
 Analytical approach to issues and creative (in considering analytical approaches and
solutions to problems)
 Flexibility to adapt to changing situations

RESPONSE TO MONITORING AND EVALUATION RESULTS
One of the most direct ways of using knowledge gained from monitoring and evaluation is to
inform ongoing and future planning and programming.
Lessons from evaluations of programmes, projects and initiatives and management responses
should be available when new outcomes are being formulated or projects or programmes are
identified, designed and appraised.
At the time of revising or developing new programmes, projects, policies, strategies and other
initiatives organisations should call for a consultative meeting with key partners and
stakeholders to review and share evaluative knowledge in a systematic and substantive manner.

Users of monitoring and evaluation results are:


 Donors
 Government
 Target groups
 External evaluators
 Internal evaluators
 Project staff
 Beneficiary communities

COMMUNICATION OF MONITORING AND EVALUATION RESULT

The data collected is used in generating monitoring and evaluation reports, and the reports can be
disseminated through newsletters, websites, seminars or press releases

Why disseminate Monitoring and Evaluation reports


M&E results help improve your program interventions

Using M&E results keeps you and your staff in a “learning mode” as you gain understanding
about how and why your program is working. M&E results also help you make decisions
about the best use of resources. For example, outcome and impact evaluations may provide
further insight on certain risk and protective factors, thus shaping your future efforts. As
staff use results to reflect on the program’s implementation and make necessary
improvements, they are more likely to feel supported by the M&E process.

M&E results strengthen your program institutionally.


M&E results can help stakeholders and the community understand what the program is
doing, how well it is meeting its objectives and whether there are ways that progress can be
improved. Sharing results can help ensure social, financial and political support and help
your program establish or strengthen the network of individuals and organizations with
similar goals of working with young people. By publicizing positive results, you give public

recognition to stakeholders and volunteers who have worked to make the program a
success, and you may attract new volunteers.

M&E results can be used to advocate for additional resources


. Disseminating M&E results can raise awareness of your program among the general public
and help build positive perceptions about young people and youth programs. M&E results
often shape donors’ decisions about resources in terms of what and how many to allocate to
youth programs. Results can also be used to lobby for policy or legislative changes that relate
to youth by pointing out unmet needs or barriers to program success.

M&E results should also lead to decisions about changes in program implementation.

Periodic staff meetings devoted to discussing M&E results can engage staff in collectively
making program adjustments. If you identify problems early in implementation, you can
respond promptly by modifying your program strategy, reassigning staff or shifting financial
resources to improve the chances of meeting your program goals and objectives. If you used
a participatory evaluation approach, you should ensure that participants are involved in
reviewing results and determining how to use them.

M&E results contribute to the global understanding of “what works.”

By sharing M&E results, you allow others to learn from your experience. The dissemination
of M&E results—both those that show how your program is working and those that find
that some strategies are not having the intended impact— contributes to our global
understanding of what works and what doesn’t work

Dissemination should ensure that the information reaches the target audience, who include the funding
organization, the project manager and staff, the board of directors, partner organizations,
interested community groups, the general public, beneficiaries and other
stakeholders (researchers, consultants and professional agencies).

M&E results can help you design new or follow-on activities.

Programs often begin on a small scale in order to test their feasibility. Evaluation results
document the strengths, limitations, successes or failures of these initial efforts and allow
program planners to make objective decisions about which elements of a program to continue,
modify, expand or discontinue. Elements that are not very successful but show promise can
be modified for improvement. Successful elements can be expanded by: ➤ increasing their
scale or scope, ➤ changing the administrative structure or staff patterns, ➤ expanding the
audience and/or targeting new audiences, or ➤ spinning off separate programs

Factors considered when preparing Monitoring and Evaluation Reports


 Medium of communication
 Language barriers
 Size of the report
 Sponsors or donor requirements
 Recipients, e.g. board members and donor agencies
 Budget

Importance of Monitoring and Evaluation Reports


• To improve project/program performance, e.g. lessons learnt are used as a basis for
improvement of subsequent projects

• For policy development- Policy planners or makers make use of M&E reports for
decision making

• Well done evaluations recommend policy changes

• Advocacy to increase funding for the project

• M&E is a demonstration of accountability and transparency of an institution which


improves its image and therefore attracts funding from donors

• Used for development of new projects and are used in the next planning phase

• Provides baselines for future projects

• The M&E results enhance project sustainability

• M&E results and reports are used to make evidence-based organizational decisions

TOPIC 4: INDICATORS
Sub-Topics
1) Meaning of Indicators
2) Importance of Monitoring and Evaluation Indicators
3) Types of Indicators
4) Characteristics of a good Indicator
5) Process of selecting Indicators
6) Process and change in Indicators

MEANING OF INDICATORS

What is an indicator?
 Indicators are signs that show changes in certain conditions
 They are eye openers, markers, units of measure, descriptors, reducers
 Five types: Input, Process, output, outcome and impact
 An indicator is simply a measurement
 Indicators are compared over time in order to assess change

 An operation is broken down into design elements such as inputs, activities, outputs,
outcomes and impacts; the measures of these elements are referred to as "performance indicators".
 We express indicators as numbers, percentages, ratios or proportions

IMPORTANCE OF MONITORING AND EVALUATION INDICATORS


1. At the initial phase of a project:
Indicators are important for the purpose of defining how the intervention will be
measured.
They help to pre-determine how effectiveness will be evaluated in a precise and clear
manner.

2. During project implementation:


Indicators serve the purpose of helping program managers assess project progress
and highlight areas for possible improvement.
When indicators are measured against project goals, managers are able to measure
progress towards the goals and identify the need for corrective measures against potential
catastrophes (disasters).

3.At the evaluation phase:-


Indicators provide the basis for which the evaluators will assess the project impact.
Without the indicators, evaluation becomes an uncertain responsibility.

Performance Indicators are used to:


 Help organizations to understand their performance levels.
 Help setting realistic performance goals.
 Help aligning daily work to strategic goals.
 Help monitoring progress on a real-time basis.
 Help understanding the weaknesses and establishing improvement priorities.
 Determine whether an improvement is being made and
maintained.
 Help benchmark internally and externally.
 Identify if staff are doing well and to help them if they are not.
 Provide a basis for recognizing team and individual
performance.

TYPES OF INDICATORS
Resource Indicators
These concern the budget allocated at every level of the intervention, the human resources
mobilized, etc. The measurement of these indicators provides information about the efficiency
of the project’s intervention. Example: Grants, Annual Budget, Borrowings, Labour, Input
costs, Etc.

Activities Indicators
To achieve the project objectives, a certain number of activities are implemented. The
monitoring system may focus on the level of achievement of these activities. Example:
Meetings, Training, Visits, Etc.

Process Indicators:
They relate to the decision-making process and makes it possible to determine, for example,
the inclusion and/or participation of actors, the regularity of meetings, the regularity of
reports, etc. Example: Existence of reporting rules, Degree of women’s participation in
decisions, Frequency of debriefing meetings, Etc.

Product Indicators
When an activity is carried out, it generally generates a tangible or non-tangible product. For
example, at the end of a training, the number of people trained (product) can be assessed. The
product is the immediate and tangible result of the combination of a resource (expenditure)
and an activity (training): product: number of trainees. Example: Trainees, Number of text
books produced, Number of credit received,Etc.

The Results Indicators


These reflect behavior. For example, a few months after the training, one can assess how
many participants have applied what they learnt. Or if a project distributes equipment
(product) one can assess how many beneficiaries use this equipment. Example: Number of
people who applied the training received, Use of credit obtained, Number of people who use
the text books,Etc.

Categories of Indicators
Goal level indicators – These are the program or sector objective indicators to which the
program is directed. They include targets beyond the scope of the project. It is important to
scrutinize the project appraisal documents and the project operations manual, which clearly
stipulate the project development objectives from which the goal indicators would be derived.
Purpose level indicators – These are used to define the changes in the behavior of project
beneficiaries and the way institutions function as a result of the project outputs.
Output level indicators – These indicators define deliverables and are used to establish
the terms of reference of a project.

Activity level indicators – At this level, there are usually the inputs or budgets. This can be
established using standard categories like commodities, technical services etc. The budget
statements are usually a summary of resources.

Quantitative and Qualitative Indicators

Quantitative indicators
Expressed in terms of specific numbers, percentages, proportions and rates of something.

Numbers on their own may not be sufficient to indicate, for example, the range of success and failure;
likewise, a percentage by itself does not indicate the size of the success. Therefore, to show
the significance of an outcome, we typically require data on both the number and the
percentage.

Qualitative indicators
 They provide insights into changes in institutional processes, attitudes, beliefs,
motives and behaviors of individuals
 A qualitative indicator might measure perception, such as the level of
empowerment that employees feel to adequately do their jobs.
 Qualitative indicators might also include a description of a behavior, such as
the level of mastery of a newly learned skill.
 Although there is a role for qualitative data, it is more time consuming to collect,
measure, and distill, especially in the early stages.
 Qualitative indicators are harder to verify because they often involve subjective
judgments about circumstances at a given time.
 Qualitative indicators should be used with caution. It is not just about
documenting perceptions of progress.
 It is about obtaining objective information on actual progress that will aid
managers in making more well-informed strategic decisions, aligning budgets,
and managing resources.

Proxy Indicators
 Sometimes it is difficult to measure the type of indicators directly so we use proxy
indicators.
 Proxy indicators are used to give at least approximate evidence on performance, e.g.
they are used when direct indicators are not available or measurable regularly. If it is difficult
to conduct periodic household surveys in dangerous housing areas, one could use the
number of iron sheets or television antennae as a proxy measure of increased household
income.
 Proxy indicators may correctly track the desired outcome, but there could be other
contributing factors.

Pre-designed indicators
 Pre-designed indicators are indicators established independently of an individual
country/organization, program or sector context
 For example, a number of development institutions have created indicators to track
development goals, e.g. the Sustainable Development Goals indicators, the United Nations
Development Programme (UNDP), the World Bank national development handbook
and the International Monetary Fund (IMF).

(KNEC question: What is the importance of indicators?)

CHARACTERISTICS OF A GOOD INDICATOR (CREAM)

Clear: performance indicators should be as clear, direct and unambiguous as possible.

Relevant: performance indicators should be relevant to the desired outcome, and not affected
by other issues tangential to the outcome.
Economic: the cost of setting indicators should be considered. This means that
indicators should be set with an understanding of the likely expense of collecting and
analyzing the data.
Adequate: indicators ought to be adequate. They should not be too indirect, too much of a proxy,
or so abstract that assessing performance becomes complicated and problematic.
Monitorable: indicators should be monitorable, meaning that they can be independently validated
or verified.

Normally we state indicators in terms of quantity, quality and time (and sometimes place and
cost); putting numbers and dates on indicators is called targeting.
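
As a minimal sketch (with a hypothetical indicator, baseline and target, none of which come from the notes), targeting an indicator and then checking actual progress against the target might look like this:

# Hypothetical sketch of "targeting": attaching numbers and dates to an indicator and
# checking measured progress against the target. All names and figures are invented.
indicator = {
    "name": "Households with access to safe drinking water",
    "unit": "% of households",
    "baseline": 42.0,            # pre-project level
    "target": 70.0,              # desired level of improvement
    "target_date": "2025-12-31", # date by which the target should be reached
}

actual = 58.5   # latest measured value

progress = (actual - indicator["baseline"]) / (indicator["target"] - indicator["baseline"]) * 100
print(f"{indicator['name']} ({indicator['unit']})")
print(f"baseline={indicator['baseline']}  actual={actual}  "
      f"target={indicator['target']} by {indicator['target_date']}")
print(f"Progress: {progress:.0f}% of the planned improvement achieved")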

Variables that affect Indicators


a) Size of the project -The bigger the project the more indicators you need.
b) Availability of resources-Lack of adequate resources affect the choice of indicators
c) The duration of the project- Some projects take longer to complete than others
d) The availability of technical capacity- Indicators need knowledge and technical know
how to set
e) Support from top management or institution

PROCESS OF SELECTING INDICATORS

The process of formulating indicators should begin with the following questions:
 How can we measure that the expected results are being achieved?
What type of information can demonstrate a positive change?
 What can be feasibly monitored with given resource and capacity
constraints?

 Will timely information be available for the different monitoring
exercises?
 What will the system of data collection be and who will be
responsible?
 Can national systems be used or augmented?
 Can government indicators be used?

Factors to consider when selecting project indicators


Any appropriate M&E indicator must meet particular thresholds. They must be:
1. Precise/Well defined: Probably the most important characteristic of indicators is that
they should be precise or well defined. In other words, indicators must not be ambiguous;
otherwise, different interpretations of indicators by different people imply different
results for each.
2. Reliable: Reliability here implies that the indicator yields the same results on repeated
trials/ attempts when used to measure outcomes. If an indicator doesn’t yield consistent
results, then it is not a good indicator.
3. Valid: Validity here implies that the indicator actually measures what it intends to
measure. For example, if you intend to measure impact of a project on access to safe
drinking water, it must measure exactly that and nothing else.
4. Measurable: Needless to say, an indicator must be measurable. If an indicator
cannot be measured, then it must not be used as an indicator.
5. Practicable: In other cases, although an indicator can be measured, it is impracticable
to do so due to cost or process constraints. An indicator must be able to utilize
locally available resources while at the same time being cost effective.

Plan Canada (46) has described the process of indicator development as involving
the following elements:
• Definition of the characteristics to be measured
• Identification of the target audience and the purpose of the indicator
• Choosing a framework (i.e. one based on goals, issues, sectors or stress condition-
response)
• Definition of criteria for selecting indicators
• Identification and evaluation of a potential indicator on the basis of the selection criteria
• Pilot-testing of the indicator
• Choosing the final set and reviewing the indicator periodically

Many organizations have attempted to define criteria for the construction and selection of
indicators, depending on whether they apply to policy, analytical soundness or measurability.
They may also be assessed in relation to factors such as transparency, scientific validity,
robustness, sensitivity and the extent to which they are linkable, or according to whether
they are relevant to the issue they are intended to describe, whether they relate to changes in
policy and practice or whether they “strike a chord” with their intended audience .

Criteria for Indicators of use for International Purposes


These indicators should be:
• Linked to broadly identified common problems and global priorities
• Appropriate for inter-country comparisons
• Relevant to international initiatives such as SDGs, Agenda 2063 or to international
conventions and treaties
• Attractive to a range of sectors, partners and institutions
• Ideally usable for decision-making at different tiers of government
• Based on sound, internationally comparable data that are readily available or easily and
relatively inexpensively collected

Criteria for Indicators of use for Local Purposes


These indicators should:
• Be relevant both to individual citizens and to local government (e.g. Vision 2030)
• Reflect local circumstances
• Be based on information that can be readily collected
• Show trends over a reasonable period of time
• Be meaningful both in their own right and in conjunction with other indicators
• Be clear and easy to understand, in order to educate and inform
• Provoke change (for example in policies, services or lifestyles)
• Lead to the setting of targets or thresholds

Indicators are more likely to be objective if they include elements of quality, quantity, time and
location.

 Indicators should be constructed to meet specific needs. They need to be direct.

 When developing indicators we start with the basic indicator, e.g. construct classrooms
at Thika Technical.
 Add the quantity, e.g. increase learning space by 80%. It is required that baseline data be
collected, i.e. the situation as it is before the intervention.
 Add quality, e.g. 100% completed and operational classrooms.
 Add time, e.g. 100% of classrooms complete and in use by the end of year 2.
 Add location, e.g. 100% of classrooms complete and in use by the end of year 2 at Thika
Technical.

Example: Basic Sanitation

Indicator: percentage of the population with adequate excreta disposal facilities.


Definition of indicator: proportion of the population with access to a sanitary facility for
human excreta disposal in the dwelling or in its immediate vicinity.

Unit of measurement: a percentage.
Measurement variables: the term “sanitary facility” should be defined, for instance as “a
unit for the disposal of human excreta which isolates faeces from contact with people,
animals, crops or water sources”. The facilities could range from simple, protected pit
latrines to flush toilets with sewerage. The population covered could be defined as that
served by connections to sewers, household systems (pit latrines, septic tanks) or communal
toilets. The term “immediate vicinity” should also be defined, perhaps as any sanitary facility
within 50 meters of a dwelling.
Purpose: the purpose of this indicator is to monitor progress in the access of a population
to sanitary facilities. It is important to assess access to adequate excreta disposal facilities, as
this is linked fundamentally to the risk for faecal contamination and disease and ill-health
among the population. When disaggregated by geographical area or by socioeconomic
status, it also provides evidence of inequalities. Users would include sanitary engineers,
planners, public health officials, nongovernmental organizations and others.
Linkages: the indicator could be linked to other indicators, such as the proportion of the
population with access to adequate and safe drinking-water, or to a health effects indicator
such as mortality and morbidity from diarrheal diseases.
Data requirements: data could be obtained from censuses or special surveys and should
be disaggregated by (for example) geographical area or urban-rural divide.
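The indicator above is essentially a ratio computed from household-level records and then
disaggregated. As a minimal illustration only (the record fields and figures below are
hypothetical, not taken from any standard survey instrument), the following Python sketch
computes the percentage of the population with access, overall and by geographical area:

from collections import defaultdict

# Hypothetical household survey records: area, household size and whether the
# household has access to a sanitary facility within the defined vicinity.
households = [
    {"area": "urban", "household_size": 5, "has_sanitary_facility": True},
    {"area": "urban", "household_size": 3, "has_sanitary_facility": True},
    {"area": "rural", "household_size": 6, "has_sanitary_facility": False},
    {"area": "rural", "household_size": 4, "has_sanitary_facility": True},
]

def access_percentage(records):
    """Percentage of the population living in households with access."""
    total = sum(r["household_size"] for r in records)
    with_access = sum(r["household_size"] for r in records if r["has_sanitary_facility"])
    return 100.0 * with_access / total if total else 0.0

print(f"Overall access: {access_percentage(households):.1f}%")

# Disaggregation by geographical area (e.g. urban-rural divide)
by_area = defaultdict(list)
for r in households:
    by_area[r["area"]].append(r)
for area, records in by_area.items():
    print(f"{area}: {access_percentage(records):.1f}%")

The same pattern applies to any percentage-type indicator: define the measurement variables
precisely, compute the ratio, and report it both overall and disaggregated.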
1. Outline factors a project team should consider when selecting an indicator in
the process of Monitoring and Evaluation.
2. Give 5 reasons a project team would give to justify the need for developing
indicators before conducting M&E.
3. Designing project monitoring and evaluation is taken as a challenging task for
project teams. Justify this statement.
4. Describe the qualities of an indicator that an M&E officer would avoid when
establishing the outcome of a given activity.
5. Types of indicators
6. Challenges of indicators
7. Importance of indicators

TOPIC 5: PROJECTS AUDITS

Sub –Topics
1. Meaning of Project Audits
2. Purpose of Project Audits
3. Types of Project Audits
4. Types of Project Audit Tools
5. The Project Audit Process
6. Terms of Reference for Project Audits

MEANING OF PROJECT AUDITS


Project auditing refers to the systematic evaluation of the way project management ideals are
applied to the project. It involves a thorough review process to establish best practices and serves
as a pillar to support the management decisions needed for the project.

There is no 'best' time to conduct an audit; however, most project audits are conducted at the
start of the project, later in the project and after the project is completed, depending on the
following:
 The nature and size of the project.
 The need to ensure that all necessary technical issues are resolved before proceeding with the project.
This actually reinforces the need for the early audit.
 The need to provide the parent organization with the required details as to whether the project is
conforming to the planned schedule, budget, scope and quality constraints. This refers to the
late audit, usually conducted during the project.
 Follow-up on the regulations agreed together with the client. Most clients demand such an audit,
and it is usually called the post-project audit.

Project auditing serves as one of the primary vehicles for the evaluation of projects. It focuses
on a thorough assessment of the project in order for senior management to establish how well the
project is doing in terms of performance. The auditors compare the present status of the project
with its planned activities to establish whether the project is delivering its deliverables on
schedule and within cost, scope and quality.
PURPOSE OF PROJECT AUDITS

An audit is a monitoring system that uses quantitative and qualitative assessment tools to
measure performance outcomes. Risk management is built into the audit process in that it
enables project managers to identify and evaluate concerns, problems and challenges that
may have surfaced during the course of the project. When inefficiencies are identified, root
cause analysis can be performed, and corrective or preventive recommendations can be
included in audit reports for future reference.
A project audit checklist serves as a pivotal tool in a project risk management process. It
helps senior leadership and project managers appraise internal elements and external factors
affecting the completion of the project. An auditor must comply with Generally Accepted
Auditing Standards (GAAS) when using a project audit checklist.

Projects are audited to:


Control Environment

An auditor learns about a project's control environment to become familiar with factors
affecting the project's progress and completion dates. These factors may be internal or
external. External elements that may affect a project completion date are laws and
regulations. Internal factors may relate to corporate policies and guidelines, top
management's ethical qualities and staff members' skill set.

Test Internal Controls

A project auditor tests internal controls and procedures to ensure that such controls are
adequate and functional. A control is a set of directives that a project manager puts into
place to prevent operating losses resulting from technological breakdowns, error, fraud or
theft. An adequate control instructs project staff members on how to perform tasks,
highlight problems and make decisions. A functional control provides proper solutions to
project breakdowns.

Change Management

The project management function is used to drive enterprise change. A company's goals and
objectives might be pursued through a series of strategic projects designed to facilitate
systemic changes. Audits of strategic projects assess whether they have succeeded in meeting
specific and measurable goals and objectives. For example, an audit evaluation might reveal
that a goal related to sales projections was not met and the deficiency was due to insufficient
training of project team members in skills required to perform core project duties. This
information might be used to drive change in employee development initiatives.

Time Management

Audits are used to evaluate project schedules and timetables established for a project, as well
as its tasks and activities. This generally includes a comparison of timetable and schedule
estimates against actual performance. Milestone reports may reveal overestimations or
underestimations on specific tasks and activities during the course of the project. External
or internal factors might be identified as the cause of the delay. For example, supplier delays
are one type of external factor that can impact project schedules.

Resource Guidance

Project audits might identify excesses or shortfalls in resource allocations associated with a
project. For example, project audits may reveal whether project performance deficiencies
were tied to insufficient resource allocations. It might also reveal over budgeting in allocating
resources in certain areas for a project -- assessments that are important when developing
future project budgets.

Vendor Assessments

Project management includes the use of third-party suppliers and vendors for certain
products or services. While supplier performance is generally audited as an independent
assessment, it can also be performed as part of a project management audit. The results
might impact future contracting and procurement decisions.

Regulatory Compliance

A project audit might be required to satisfy regulatory requirements. Companies that must
comply with such regulations might gain a significant amount of data through the auditing
process. Consult with legal counsel to determine your company's governmental reporting
requirements.

Benefits of Project Audits

 Improved project performance


 Reduced cost
 Evaluates risks and protects assets; the earlier risks are discovered, the better the opportunity to
avoid or mitigate them effectively
 Identify and avoid scope creep
 Avoid wasted time (& cost) on wild goose chases
 Better clarity and focus amongst project team members
 Improved relationship with the client
 Consolidate learning and carry that forward onto future projects
 Provides objective insight
 Improves efficiency of operation
 Assesses control
 Ensures compliance with laws and regulations
 Promotes accountability

TYPES OF PROJECT AUDITS

Compliance Audit

A compliance audit is an examination of the policies and procedures of an entity or


department, to see if it is in compliance with internal or regulatory standards. This audit
is most commonly used in regulated industries or educational institutions.
Construction Audit

This is an analysis of the costs incurred for a specific construction project. Activities may
include an analysis of the contracts granted to contractors, prices paid, overhead costs
allowed for reimbursement, change orders, and the timeliness of completion. The intent
is to ensure that the costs incurred for a project were reasonable.

Financial Audit

A financial audit is an analysis of the fairness of the information contained within an


entity's financial statements. It is conducted by a CPA firm, which is independent of the
entity under review. This is the most commonly conducted type of audit.

Information Systems Audit

An information systems audit involves a review of the controls over software


development, data processing, and access to computer systems. The intent is to spot any
issues that could impair the ability of IT systems to provide accurate information to users,
as well as to ensure that unauthorized parties do not have access to the data.

Internal Audit

An internal audit is usually conducted by an in-house audit team, and is focused on control
assessments, process assessments, legal compliance, and the safeguarding of assets . The
team’s reports are sent to management and the organization’s audit committee, and may
result in recommended changes being implemented.

Investigative Audit

An investigative audit is an investigation of a specific area or individual when there is a


suspicion of inappropriate or fraudulent activity. The intent is to locate and remedy
control breaches, as well as to collect evidence in case charges are to be brought against
someone.

Operational Audit

An operational audit is a detailed analysis of the goals, planning processes, procedures,


and results of the operations of a business. The audit may be conducted internally or by
an external entity. The intended result is an evaluation of operations, likely with
recommendations for improvement.
Tax Audit

A tax audit is an analysis of the tax returns submitted by an individual or business entity,
to see if the tax information and any resulting income tax payment is valid. These audits
are usually targeted at returns that result in excessively low tax payments, to see if an
additional assessment can be made. If the taxpayer disagrees with the outcome of a tax
audit, there is an appeal process that may overturn the initial finding.

Process Audit – This type of audit verifies that processes are working within established limits.
It examines requirements such as time, accuracy, pressure, etc., and also examines the resources
used and checks the effectiveness of the process.
Product Audit – It is an examination of a particular product or service, such as hardware or
software, to evaluate whether it conforms to requirements.
System Audit- It is conducted on management systems. It is an activity performed to verify,
by examination of objective evidence, that the systems are appropriate and effective and have
been developed in accordance with specified requirements.
Quality Management System Audit-- evaluates an existing quality management program to
determine its conformance to company policies, contractual and regulatory requirements.

Environmental System Audits- examine the environmental management system


Project Auditing - is a formal type of project review, most often designed to evaluate the
extent to which project management standards are being followed.

THE PROJECT AUDIT PROCESS

1. Audit initiation:
This is the first phase that signals the start of the entire audit process and it involves the
following activities:
a) Defines the what, where, when and how the auditing process should go. The purpose of
the auditing should be clearly delineated for the audit team to know their target.

b) The scope of the audit should be established. Whether the audit is focusing on specific
areas of the project or the entire project. Whether the audit is probing into deeper aspects or
just superficial areas of the project. Auditing deeper areas of the project requires technical and
highly experienced project auditors. The narrower the audit scope, the fewer the challenges
faced by the auditors, whereas broader audit scopes are trickier and more tedious to manage.
c) Data collection is an integral component of the whole process and takes place at this
phase of the audit lifecycle.
d) Auditing methods and practices suitable for auditing the project are established in this
phase of the process.
e) Rules, guidelines and other auditing protocols are fundamental at this stage. Hence,
members of the audit team should be fully aware of the ground rules governing
the process to ensure compliance and an effective audit exercise.

2. Baseline definition of the project:

This phase focuses on the establishment of concrete benchmarking parameters used to assess
the project.
These parameters can be previous project implemented, standards set by project management
bodies like PMBOK, PRINCE2 etc. or agreed standards set by the parent organization and
the audit team. The output of the project will be evaluated based on these parameters
established at this phase. Infact that is more the reason why the parent organization should be
deeply involved at this phase of the audit life cycle
3. Audit database: Once the yardsticks for the assessment of the project are established,
a database should be created to enable the audit team to discharge its duties.
The information gathered in the initial stage of the audit exercise should be stored in this
database for the evaluation of:
a) The management of the project

b) The current and future status of the project


c) The project schedule, budget and quality in terms of its performance and meeting client’s
requirements
d) This database can serve as an important repository that the parent organization can use to
manage similar and future projects.
4. Initial analysis of the project:

When parameters are set and valuable data collected, decisions can be made from an
informed position.
Auditors are not the ones making decisions but rather present the facts for senior management
to make decisions regarding the project.

This preliminary assessment done by the auditors should be presented to the project manager
and team for their inputs before making it available to senior management.
The project manager and team should accommodate this initial analysis and see it as a supportive
move to make the project better rather than seeing it as a dubious ploy to humiliate them. Such a
spirit is good for the success of the project.

5) Preparation of the audit report:


This involves collating the facts about the project and putting them in a format called the audit
report. This format should be approved by the parent organization before being used by the project
audit team. The report should contain recommendations made by the team and possible
remedies, if any, concerning the project. However, the decisions that should be made using
this report must come from senior management, and it is their responsibility to publish the
audit report.
6) Termination of the audit:

This involves bringing to a close the entire audit process. However, it can only be terminated
when the report has been properly reviewed, recommendations addressed and released by
management. The report is reviewed to enhance the methods used throughout the audit
process. This phase closes with the dismissal of the entire audit team.

The auditing process in brief


1. Audit planning and preparation.
2. Audit execution, i.e. fieldwork and data gathering; this covers the time period from arrival at
the audit location.
3. Audit report. The purpose of the audit report is to communicate the results of the
investigation, providing correct and clear data.
4. Audit follow-up and closure.
5. Correcting nonconformities (corrective action). This is the action taken to eliminate
the cause of an existing nonconformity, defect or other undesirable situation in order
to prevent recurrence.

Principles of Project Audit


1) Should be independent, and supported in this by the organization's board
2) Should be accountable within a governance and reporting system
3) Should be planned and coordinated as part of the organization
4) Should be risk based, against an independent risk evaluation
5) Should be able to allow the impact of identified weaknesses to be identified and addressed by
follow-up and escalation

TYPES OF PROJECT AUDIT TOOLS
i. Vouching - The auditor verifies accounting transactions with documentary evidence.
ii. Confirmation - Technique in which third parties such as creditors validate the correctness
of a transaction.
iii. Reconciliation - Technique used by the auditor to establish the reasons for differences in
balances.
iv. Testing - Technique of selecting representative transactions from the whole
accounting data to draw a conclusion about all items.
v. Physical examination - Requires verification and confirmation of the physical
existence of tangible assets as they appear in the business balance sheet, e.g. cash in hand,
land and buildings, plant and machinery, etc.
vi. Analysis - Technique used by an auditor to segregate important facts for further
study.
vii. Scanning - By scanning the books of accounts, an experienced auditor may identify those
entries which would require his attention.
viii. Inquiry - Method used to collect in-depth information about any transaction.
ix. Observation - Through observation an auditor gets an idea about the reliability of the
processes and procedures of an organization.
Different methods of obtaining audit evidence
i. Inspection - The examination of records and documents or the inspection of tangible assets.
ii. Observation - Looking at a process or procedure being performed by others.
iii. Inquiry and confirmation - Seeking appropriate information from knowledgeable
persons inside or outside the entity.
iv. Computation - The checking of the arithmetical accuracy of source documents and
accounting records, or performing independent calculations.
v. Analytical review - The study of significant ratios and trends and the investigation of
unusual fluctuations and items.
TERMS OF REFERENCE (TORs) FOR PROJECT AUDITING

Terms of Reference is a document that explains the objectives, scope of work, activities, tasks
to be performed, respective responsibilities of the Employer and the Consultant, and expected
results and deliverables of the Assignment/job.

Terms of Reference for Auditing


i. Audit mission statement - the audit mission statement clearly defines
the goals and objectives as well as the types of audits to be conducted.
ii. Audit skills specification - a detailed specification of an auditor's skills, experience
and technical expertise.
iii. Stakeholders' roles and responsibilities - a detailed specification of all audit-related
roles and responsibilities for both audit staff and project staff.
iv. Audit criteria - a full listing of all criteria by which projects will be selected for
audit. You cannot audit every project; it would be too costly and time consuming.
v. Audit initiation procedures - a detailed specification of audit initiation procedures,
including the process by which individual project managers are notified of a pending
audit and the related preparation requirements.
vi. Audit execution procedures - a detailed listing of audit execution procedures
covering the methods and procedures to be employed during the audit itself.
vii. Audit reporting procedures - covering the manner and method in which audit results
will be reported and reviewed.

I. Background - describes the project in context, states the general role of stakeholders
in the project, and provides an overview of the history behind the project.
II. Objectives - these are the desired accomplishments that can reasonably be expected
upon project completion, with the consumption of available resources and within an
expected timeframe.
III. Scope/issues - a project involves a number of issues and problematic areas that need
to be addressed in order for the project to be implemented smoothly.
General issue evaluation criteria for projects

a. Efficiency - how well the given activity transforms available resources into desired
outputs.
b. Relevance - analyses whether a given activity is delivering the desired benefits.
c. Impact - the extent to which the project's benefits are received by the target audience.
d. Sustainability - this criterion identifies whether the project's positive outcomes will
continue after funding ends.
e. Methodology - how to carry out the project in a cost-effective way.
f. Expertise - the expertise needed for doing a project defines a set of professional
requirements for the individuals and teams involved in project implementation.
g. Reporting - reporting provides valid information about a project's performance over
a certain period.
h. Work plan - a kind of strategy that aims to help solve problems through a project
and boost employee drive and focus.

Core constituents of projects mostly analyzed during an audit

a) Time management: duration of activities.

b) Resource management: allocation of resources, criteria on distribution, analysis of
consumption, and measures to control resource abuse.
c) Personnel management: allocation of staff, establishment of recruiting policies,
division of responsibilities regarding team duties, and training methods.
d) Information management: policies regarding preparing and collecting information,
and methods used in filing, updating and retrieving information.

Grading system to rank each audited project management constituent


a) Critically deficient – suggests a serious inability to match project guidelines
b) Weak – unable to entirely comply with the project's objectives
c) Satisfactory – basic project management principles are followed
d) Good – compatibility with the project goals and effectiveness of most tools
e) Very good – the process defines ideal project performance and adheres to planning /
monitoring expectations

The Components of an Audit Status Report


This is a significant document that provides management and other stakeholders of the project
with the required knowledge about the progress made so far on the project. Such information
can help bring back the project on track in cases of deviation. It also evaluates the project with
respect to schedule and budget. Therefore, this report should consist of the following
components in order to provide the necessary project details:

 An introductory aspect: This focuses on the general overview of the project including the
goal and objectives of the project. It should be simple and clear for all to understand. This
aspect should be well written to provide readers with the purpose for which the project was
initiated.

 The present status of the project:

Four important parameters are considered here; these are cost, schedule, progress and quality.
Every project has a budget, hence making project cost a significant constraint in project
management. Actual costs should be compared to what was budgeted at the start of the project
to see if the project is doing fine in terms of cost. All project costs should be clearly stated in
this report. The time required to complete project activities should be recorded and compared
to the planned schedule in order to establish the percentage completion of the project. Tasks that
could not be finished at the stated time should also be captured, as this can help reinforce
learning for future projects. The progress made can be established by relating the amount of
work completed as against the project resources used. In addition, the quality of the project
should be recorded in this report. Quality measures the degree to which the project meets its
initial specifications. In actual fact, the success of the project hinges on its ability to meet the
client's requirements, thus making such information vital for management action.

 Status of future project: This points to the auditor’s take on what should be done in the
project. The previous approach used that led to several uncompleted tasks should be modified
or changed in order to achieve success within schedule and budget. The auditor has the right
to recommend changes to the project approach especially when the project is lagging behind
in critical areas.

 Management issues: All project issues that require the attention of senior management
especially those primary to the success of the project. In addition, decisions affecting the
schedule, cost and performance of the project should be made by management.

 Risk management: Assessment of risks associated with the project and how such should
be mitigated in order to save the project.

 Project assumptions and limitations: Stating the conditions under which this report is
deemed true. It is the duty of the auditor to prove the accuracy of this document by affirming
the circumstances or assumptions made in this report. It is worth noting that the
interpretation of this report lies exclusively with senior management and not the auditor. The
auditor presents the facts for onward submission to management for proper decisions.

TOPIC 6: PROJECT MONITORING AND EVALUATION REPORT


Sub-Topics

1. Importance of report writing


2. Types of reports
3. Factors to consider in report writing
4. Format of Monitoring and Evaluation Reports
5. Dissemination of Reports

IMPORTANCE OF REPORT WRITING


A Report is
 A tool that tracks work, helps identify risks, provides documentation and keeps stakeholders
informed about projects.
 Any informational work made with an intention to relay information or recounting certain
events in a presentable manner.
 An administrative necessity.
 Most official forms of information or work are completed via reports.
 Reports are often conveyed in writing, speech, television, or film.
TYPES OF REPORTS
Reports can be referenced as:

Routine – occurring on a regular basis


Special – those that are required to cover a specific subject or task
Technical – these cover complex technical issues
The type of report you will write will be determined by the subject you are writing on. Based
on the subject, there are lots of different kinds of reports, such as:

Annual reports- An annual report is a comprehensive report on a project's, programme's,
company's or organization's activities throughout the preceding year. Annual reports are
intended to give shareholders and other interested people information about the project's,
programme's, company's or organization's activities and financial performance.

Budget reports- A report detailing a company's planned expenditures and allowing
comparison to what they actually were. The budget report helps the project team, donor,
organization or company determine how closely its budgets mirror reality and how well it
manages its costs. The difference between the budgeted and actual amount is called
the budget variance.
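As a simple, hedged illustration of the budget variance idea (the line items and figures below
are made up, not a prescribed budget-report format), the following Python sketch compares
budgeted and actual amounts per line item:

# Hypothetical budget report lines: (line item, budgeted amount, actual amount)
budget_lines = [
    ("Training workshops", 250_000, 268_500),
    ("Transport",          120_000, 104_200),
    ("Materials",           80_000,  80_000),
]

print(f"{'Line item':<20}{'Budgeted':>12}{'Actual':>12}{'Variance':>12}")
for item, budgeted, actual in budget_lines:
    variance = budgeted - actual  # positive = under budget, negative = over budget
    print(f"{item:<20}{budgeted:>12,}{actual:>12,}{variance:>12,}")

total_budgeted = sum(b for _, b, _ in budget_lines)
total_actual = sum(a for _, _, a in budget_lines)
print(f"{'TOTAL':<20}{total_budgeted:>12,}{total_actual:>12,}{total_budgeted - total_actual:>12,}")

A report laid out this way shows at a glance which line items drive the overall variance.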

Project reports- Project reporting is the formalized recording of project progress and
(interim) project results. On the basis of a target/actual comparison of the individual
controlling aspects, project status reports are created and presented to a defined target group.

Recommendation reports- A recommendation report is written to propose or recommend


the options available to solve a problem or fill a need. The goal of the report is to compare
options, recommend one option, and support that recommendation.

Project Appraisal reports-Project appraisal is the process of assessing, in a structured way,


the case for proceeding with a project or proposal, or the project's viability. It often involves
comparing various options, using economic appraisal or some other decision analysis
technique.

Evaluation reports-The evaluation report is the key product of the evaluation process.
Its purpose is to provide a transparent basis for accountability for results, for decision-making
on policies and programmes, for learning, for drawing lessons and for improvement.

Research reports- Research reports are recorded data prepared by researchers or statisticians
after analyzing information gathered by conducting organized research, typically in the form
of surveys or qualitative methods.

Contractual reports- Reports on the legal, regulatory, and other compliance standards
throughout the contract lifecycle.

FACTORS TO CONSIDER IN REPORT WRITING

Audience: When you choose words to express your ideas, you have to think not only about
what makes sense and sounds good to you, but what will make sense and sound best to your

readers. Thinking about your audience and their expectations will help you make decisions
about word choice.
Purpose: Why do you write? Is it to educate or entertain? Is it to convince or inform?
Identifying the reason(s) behind the piece of writing you’re working on − be it a blog, a short
story or an academic essay− is essential to avoid missing the mark. After all, you wouldn’t use
the same language if you were sharing your findings with your fellow students as opposed to
writing about your research to educate a mass audience.
Context: Any topic exists within a larger context. Understanding the big picture is key to
writing effectively no matter what your purpose is.
Media: The medium will very much influence the style and format of the report.
Language: The language you use in your writing depends very much on how you’ve defined
the elements above. Always have a mental image of your readers, keep the purpose and context
in mind, and remember what medium your writing is intended for. Thinking about language
also means thinking about what not to write, or choosing your words carefully and sensitively.
Information: Ensure that the report gives all the information required by the terms of
reference.
Structure: The subject should be arranged in the appropriate logical sequence and the
sentence structure should be clear.
Display: There should be a reasonable economy of paper and expression. The language of the
report should be free of grammatical errors and the vocabulary should not be abstract.

Features of a good Report

Proper Form: A report must be in the proper form. Sometimes there are statutory forms to
follow.

Presentation: A report needs an attractive presentation. It depends on the quality of typing


or printing as well as quality of paper used.

Readability: The keynote of a report is readability. The style of presentation and the diction
(use of words) shall be such that readers find it attractive and are compelled to read the
report from beginning to end.

Logical Sequence: The points in a report shall be arranged in a logical sequence, step by
step and not in a haphazard manner. Planning is necessary before a report is prepared.

Positivity: As far as possible positive statements should be made instead of negative ones.
For example, it is better to say what should be done and not what should not be done.

Impersonal Approach-When a report is prepared as a source of information and when it is


merely factual (e.g. a report on a meeting), the approach shall be impersonal and the sentences
should be in the third person and in indirect speech.

Simplicity: The language shall be as simple as possible so that a report is easily
understandable. Even in a technical report there shall be restricted use of technical terms if it
has to be presented to laymen.

Clarity: The language shall be lucid and straight, clearly expressing what is intended to be
expressed.

FORMAT FOR MONITORING AND EVALUATION REPORT

It is worth noting that different organizations have and give reporting templates to project
implementers; however, the following parts are common to the said templates:

Title page: Includes the Name of organization, Name of the project, the period of the project
and the year the report was written. One can also include the author of the report.

Abbreviations: All abbreviations used in the report should be listed here. This should be in
alphabetical order

Acknowledgements: In this page, acknowledge all those who have been instrumental in
making the project a success, including donors, any experts, project staff, the government
where necessary and all project partners. This should be kept to not more than one page.
Shorter is better.
Executive summary: The executive summary should contain a brief of the whole report,
encompassing the most important details that you would require any reader to take home.
Most people, especially the donors, will only read this section, therefore, and ensure it reflects
the whole report. A good way is to dedicate a paragraph each to the main sections of the
report including the background, the methodology, the results, conclusions, lessons learnt and
recommendations. It is recommended to keep this strictly between one and two pages
Table of contents: Contains a list of the items discussed in the report and their page
numbers (for reports of 10 pages and more).

Introduction: In this section, introduce the project and provide some background
information. It should also include information on the project goal, objectives,
indicators, partners, and baseline study information if applicable. Necessary subsections can
be created where necessary for easier reading.

Methodology: Here, information on how the evaluation was carried out is included. Issues
to consider include the research methodology, sources of information and how it was
gathered, sample size and sampling techniques, research instruments, validity and reliability
issues among others.

Results: This section includes a detailed analysis of the findings of the evaluation study of the
project. Results are compared against the objectives of the project, or against the project
indicators. Consider having sub-sections.
Conclusions, Lessons Learnt and Recommendations: What conclusions can be made
from the results shared? Were the objectives and the general purpose of the project met? What
lessons can be drawn from the project? What works and what does not? What
recommendations can be made for future similar projects? What should be maintained and
what should be changed for these future similar projects?

Annexes: This section includes all the relevant documents necessary for interested persons.
Each annex should be put on a separate page.

DISSEMINATION OF MONITORING AND EVALUATION REPORTS

Disseminating M&E results to those outside your program is often complex because
different audiences will have different information needs. You will have more success
disseminating results if you involve major stakeholders, budget adequate resources and
develop a dissemination plan before results are finalized.

Considerations:

Determine the audience for M&E results and why you want to share them.

Many different audiences will be interested in evaluation results. Locally, there may be
interest among community organizations, the media, government officials and social service
agencies. At the regional or national level, professional colleagues, policymakers and funding
agencies may need to learn of your results.

Share both positive and negative findings.

While every program wants to highlight positive findings, sharing results about what didn’t
work is also important. Stakeholders also need to understand what is and isn’t working, to
guide their support toward the most effective youth strategies. Further, most donors
appreciate a program’s willingness to critically review its work and admit what hasn’t worked
well.

Tailoring Dissemination of Results to Different Audiences

Many possible channels exist for presenting evaluation results. For some audiences, one
approach may be sufficient (e.g., an all-day retreat with program staff). In other cases, you
may want to disseminate results via numerous channels to ensure that the message you are
trying to communicate reaches your targeted audience. For example, to reach community
members, you might prepare a newspaper story and hold an evening meeting.

In order to plan your dissemination, you must also assess:

➤ What budget is available,

➤ The cost of preparing and producing dissemination activities, and

➤ Who is capable of carrying out the activities.

Common Dissemination Formats

The most commonly used formats are written reports, oral presentations, press releases, fact
sheets and slide or computer presentations.

While these formats differ in length, detail and the amount of technical information, some
common elements are:

➤ logical organization,

➤ direct and concise language,

➤ use of appropriate illustrations and examples.

A written report combined with visual aids is an effective means of disseminating M&E
results. Written reports can be used to provide an update on the program’s progress;
document evaluation procedures, findings and recommendations; maintain an internal record
of evaluation findings for program staff; and publicize important program information and
experiences. To write informative reports that people are likely to read, you should:

➤ use clear, simple language in the active voice,

➤ be brief and to the point,

➤ use attractive layouts, including headings, sub-headings and white space,

➤ use boxes, bullets, italics and bold fonts to emphasize important points, and

➤ use quotes, anecdotes and case studies to put a human face on the statistics you present.

Visual aids such as maps, tables and charts, graphs, and photographs can be used
effectively to summarize information and add “life” to a written report:

➤ Maps can illustrate areas, e.g. with high rates of adolescent births, low birthweight babies,
many school dropouts or high youth unemployment. They can also be used to show the
program location and the projected impact of activities on the target population.

➤ Tables and charts are often used to show comparisons—e.g., local statistics in relation to
state and national figures—or other information, such as a breakdown of teenage births
according to the age of the mother.

➤ Line graphs can be used to illustrate change over a number of years, such as the number
of adolescent births over the past 10 years in a community.

➤ Bar graphs can also illustrate change over time or changes among subgroups of the target
population.

➤ Photographs can show your program in action, putting a face on the numbers you are
presenting and making readers feel more connected to your project. They can also be used to
document community participation in program activities.

An evaluation report should emphasize only the most important and useful findings,
highlighting information that you think will shape the decisions made by staff, donors,
policymakers, communities and youth. Keep descriptive information, such as the background
of the program, to a minimum, as many readers will be familiar with the program. Include an
executive summary—i.e., an overview of your main findings. This summary should be written
so that it can be distributed independently, for example, to policymakers who may be less
likely to read a full report.

Oral presentations are another means of disseminating program results.

Oral presentations provide a direct, concise overview of your findings and allow for
discussion. You or your staff can give presentations at national meetings, in one on-one
meetings with your board of directors or donors or to community forums. Successful
presentations are direct and concise and feature visual aids such as slides or transparencies.
Call attention to the most important points and fill in the details when your audience has a
chance to ask questions. Visual aids help maintain the audience’s attention during
presentations.

Slides, overheads and posters—whether computer generated or hand-drawn—emphasize


important points by presenting information in an abbreviated form. Offer an appealing mix
of text (words) and graphics (images).

Press releases can generate media coverage of your findings.

As more people gain access to newspapers, radio, television and the Internet, media coverage
of your findings is gaining in importance. Many programs find that the most effective way to
reach policymakers is to encourage media coverage of their evaluation results. A press release
is a concise statement that presents an overview of your evaluation findings, which you give
to the media. The media will usually use the release to develop a story, and it may prompt
them to seek additional information about your program’s activities.

Tips for Writing a Press Release

• Keep the information simple and clear.

• State the most important information in the first lines.

• Present at most three key findings.

• Limit the release to two double-spaced pages.

• Avoid technical or statistical terms.

• Include the date and information about whom to contact for more information.

• Send to multiple newspapers or radio stations at the same time.

• Send to producers and editors, as well as their staff reporters

Fact sheets convey findings in a short, concise format.

Fact sheets are especially effective for advocacy, conveying information to policymakers and
others who do not have the time to read longer reports. A fact sheet can also be used as a
presentation handout or mailed to program stakeholders. Supply bulleted lists of major
findings, keeping the list to under two pages in length.

Common methods of dissemination include:

 Publishing program or policy briefs


 Publishing project findings in national journals and state wide publications
 Presenting at national conferences and meetings of professional associations
 Presenting program results to local community groups and other local stakeholders
 Creating and distributing program materials, such as flyers, guides, pamphlets and
DVDs
 Creating toolkits of training materials and curricula for other communities
 Sharing information through social media or on an organization's website
 Summarizing findings in progress reports for funders
 Disseminating information on an organization's website
 Discussing project activities on the local radio
 Publishing information in the local newspaper
 Issuing a press release
 Hosting promotion events at fairs and social functions

SUPPLEMENTARY NOTES

AREAS OF PROJECT MONITORING


1. Resource utilization -- resource acquisition, utilization and consumption are
critical components for ensuring efficient and effective implementation of the project.
To ensure that project activities are carried out, there must be a constant and regular flow of
resources. The resources must be utilized for the intended purposes.
They should also be sourced and supplied as per the set specifications in terms of cost, quantity
and quality.
Adherence to the time schedule is a significant element in project monitoring.
2. Benefits flow analysis -- project monitoring is done to determine the flow of project benefits
directly to the intended beneficiaries. These benefits must be shared and distributed equally
and equitably. The beneficiaries must participate in the sharing of benefits and losses incurred
from the project.
3. Community participation and engagement -- any project must ensure active, genuine,
voluntary and popular participation and involvement of not only the project beneficiaries but
also, indirectly, the community.
Genuine and popular participation will be monitored on account of the following aspects:
All the members of the community must participate and be actively involved and engaged.
All members of the community must participate at all levels, i.e. project idea, implementation
and management.
Participation in terms of sharing profits and losses incurred from the project.
Participation must not only be active but also voluntary, purposeful, genuine and
objective-oriented.

ELEMENTS OF PROJECT MONITORING FRAMEWORK


 These are issues that need to be put in place in project monitoring:
 We need to specify the people/offices/agencies that are going to use the information;
who will use the information determines how you will present it
 Specify who will participate in report writing
 What are the key objectives of the project?
 Specify indicators to measure the progress
 What methods are you going to use to gather information?
 Specify when monitoring will take place
 Specify how the monitoring system is going to be managed

 Specify who is going to manage the information

AN EXAMPLE OF A SHORT MONITORING PLAN

TIME PERIOD:
Sep to Oct, 2000/2001
MONITORS:
Project officers/coordinators/communities/village committee
STANDARDS:
Community meetings where program decisions are made shall have at least half men and
half women present who live in the community
ADJUSTMENT:
If the percentage of men or women present is less than 40%, then the project
manager helps the project committee create a plan to increase attendance at the next
meeting
PROCEDURES:

REPORTING:
The recorded attendance figures (the monitor's own counts of the people attending the
meeting) will be included in the monthly report of the project

Process of Monitoring and Evaluation


Comparison /relationship between monitoring and evaluation
I. Monitoring and evaluation (M&E) is a process that helps improve performance
and achieve results.
II. Its goals are to improve current and future management of outputs, outcomes
and impact.
III. Monitoring and evaluation when carried out correctly and at the right time and
place are two of the most important aspects of ensuring the success of many
projects.
IV. There is a positive relationship between monitoring and evaluation and the success
of NGO projects, among others.
V. It is mainly used to assess the performance of projects, institutions and
programmes set up by the government, international organizations and NGOs.
VI. It establishes links between the past, present and future actions in project
management.
VII. M&E is used to assess development; thus many international organizations such
as the United Nations, the World Bank Group and the Organization of American
States use M&E systems to assess their development projects.
VIII. The common ground for monitoring and evaluation is that they are both
management tools.
IX. Monitoring and evaluation are complementary. During an evaluation, as much
use as possible is made of information from previous monitoring. In contrast to
monitoring, where emphasis is on the process (activities) and results (outputs),
evaluation is used to provide insight into the relationship between results/outputs
(for example the strengthened capacity of an organization) and outcomes
(for example improved living conditions for the ultimate target group).

THE RESULT CHAIN IN PROJECT MANAGEMENT

The results chain is a description of the various stages of an intervention that lead to the
changes that are intended – from the inputs at the start, to the end effects at a societal level
for the beneficiaries

The first three links in the chain are under the control of the implementer and present no
major issues for monitoring and quality management.

(a) Inputs: Funding, staff, vehicles, etc. Easily monitored by administrative, accounting and
audit procedures.

(b) Activities: What the intervention does. Easy to monitor by standard record keeping of
activities.

(c) Outputs: Anything that we make, do or buy as a result of inputs and activities. The output
of landmine clearance is safe land, the output of a training session is people with more skills
and knowledge, the output of risk education is people with more knowledge about safe
behaviour. Outputs are often straightforward for monitoring and QM.

The last three links of the chain are what the donor, implementer and/or beneficiaries want
to achieve. They are less and less under the control of the implementer as we move along the
chain, and more difficult for monitoring and quality management. These results links are all
based on behavior change where the definition of “behavior” also includes attitudes and
decision making.

(d) Immediate outcomes: Behavior changes by people other than the donor and
implementer, usually a direct result of the outputs. Usually behaviour change by people who
are stakeholders. The outcome of landmine clearance is when cleared land is used
productively (a behavior change from avoiding the land due to mines to making use of the
land), the outcome of training is when people who have been trained start to use their new
skills and knowledge, the outcome of risk education is when people demonstrate reduced risk
behavior in their everyday lives.

(e) Medium-term outcomes or intermediate outcomes: Downstream behavior changes by


people who have little or no direct involvement in the intervention. Measurement and QM of
medium-term outcomes can present challenges. One medium-term outcome of land that has
been cleared of mines and then used for agriculture is increased food supply within the
community. People not directly involved in the project may benefit if there is more food
available in the local marketplace. A medium-term outcome of training people how to develop
better plans is when the better plans are adopted and implemented, a medium-term outcome
of risk education is when people who did not attend the sessions adopt safe behavior because
they see and learn from the safe behavior of friends, neighbours and family members who
received training.

(f) Impacts: Usually defined as societal-level changes (rather than individual changes) that
eventually result from an intervention, and can include both direct and indirect effects as well
as both positive and negative effects (this is a very similar definition to the way “impacts” is
defined in evaluation criteria). Improved nutritional status of children in a village may be the
long-term result of clearing mines from farmland, this improvement may happen more quickly
if more people get their land cleared sooner due to the impact of better planning. Measuring
impacts is difficult, and requires a long-term commitment beyond the end of the project for
the effects to be realized. Very few interventions make any real provision to continue learning
after the end of a project and instead it is common to incorrectly label a few immediate
outcomes as impacts in order to supply data. Impacts frequently fall into four broad categories:
health (including nutrition, etc.), wealth (economic benefits of any type), wellbeing (social,
educational, emotional) and compliance with legal or political commitments (e.g. a national
poverty reduction strategy).
