ST. MARY’S UNIVERSITY
SCHOOL OF GRADUATE STUDIES
SCHOOL OF BUSINESS
BY
SAMRAWIT MZENGIA
JUNE, 2021
ADDIS ABABA, ETHIOPIA
ST. MARY’S UNIVERSITY
SCHOOL OF GRADUATE STUDIES
SCHOOL OF BUSINESS
BY
SAMRAWIT MZENGIA
ID No: SGS/2206/2012A
JUNE, 2021
ADDIS ABABA, ETHIOPIA
ST. MARY’S UNIVERSITY
SCHOOL OF GRADUATE STUDIES
SCHOOL OF BUSINESS
BY
SAMRAWIT MAZEGIA
ID No: SGS/0226/2012A
JUNE, 2021
ADDIS ABABA, ETHIOPIA
ASSESSING THE PRACTICES AND CHALLENGES OF PROJECT
MONITORING AND EVALUATION SYSTEM OF LOCAL NGOs IN
ADDIS ABABA
BY
SAMRAWIT MAZEGIA
__________________ __________________
Dean, Graduate Studies Signature
__________________ __________________
Advisor Signature
__________________ __________________
External Examiner Signature
__________________ __________________
Internal Examiner Signature
DECLARATION
ENDORSEMENT
This thesis has been submitted to St. Mary’s University for examination with my
approval as a university advisor.
ACKNOWLEDGEMENT
First, I thank the Almighty God for giving me courage throughout my life. I wish to thank my
advisor, Dr. Dejene Mamo, for his continuous support, patience, and professional guidance in
writing this research project. I would also like to acknowledge my mother in heaven and all my
family, friends and the special people who assisted me in every aspect.
Last but not least, I would like to thank the managers and staff of all 12 local NGOs who greatly
assisted me with the data and sources for this research, although they may not agree with all of
the interpretations and conclusions of this paper. I am grateful for their participation in the
survey, which supported my work and yielded results of better quality.
ABSTRACT
A good monitoring and evaluation system is a key ingredient of good project performance. It
provides for accountability and signals transparency to stakeholders. It also supports
organizational learning by documenting lessons gained during project execution and using them
in subsequent project planning and implementation, or by sharing the experience earned with
other implementers. This research project set out to identify the practices and challenges of the
monitoring and evaluation system in 12 selected local NGOs in Addis Ababa, Ethiopia,
implementing youth and youth-related projects. To achieve the study objective, a descriptive
design with a qualitative approach was employed. The primary data were collected through a
survey questionnaire and interviews of M&E experts, project managers, coordinators and officers
in the 12 selected NGOs. The findings of the study show that the M&E practice of the NGOs
under study is hindered by inadequate funds allocated to M&E, absence of sufficient and skilled
M&E experts, poor usage of ICT, undefined roles and responsibilities of M&E experts, poor
recognition and involvement of management, absence of capacity-building trainings,
unfamiliarity with M&E tools and techniques, strict use of donor guidelines and procedures,
non-involvement of stakeholders, specifically beneficiaries, in the M&E process, failure to
document lessons learned, and selective dissemination of M&E findings. Although the expert
group has a good educational background and work experience, the findings indicate that the
experts have poor M&E experience and practice. The gap between the actual M&E practice and
what are considered best practices is wide. Recommendations are given both for improving the
practice and for what future studies should focus on.
TABLE OF CONTENTS
Contents Page
Acknowledgements …………………………………………………………………………………...… i
Abstract ………………………………………………………………………………………….…….... ii
Table of content ……………………………………………………………………………………….... iii
List of tables …………………………………………………………………………………………......vii
List of Figures ………………………………………………………………………………………......viii
Operational definition of terms ………………………………………………………………... ix
List of Acronyms ……………………………………………………………………………………….... x
CHAPTER ONE ......................................................................................................................................... 1
INTRODUCTION....................................................................................................................................... 1
1.1. Background of the Study ..................................................................................................................... 1
1.2. Statement of the Problem ................................................................................................... 2
1.3 Research Questions ............................................................................................................... 4
1.4 Research Objectives .............................................................................................................................. 4
1.4.1 General Objective .............................................................................................................................. 4
1.4.2 Specific Objectives ............................................................................................................................. 4
1.5 Significance of the Study ...................................................................................................................... 5
1.6 Scope of the Study ................................................................................................................................. 5
1.7 Limitations of the study ........................................................................................................................ 6
1.8. Organization of the study .................................................................................................................... 6
CHAPTER TWO ........................................................................................................................................ 7
2. REVIEW OF LITERATURE ................................................................................................... 7
2.1 Introduction ........................................................................................................................................... 7
2.2. Theoretical Concept of monitoring and evaluations system ............................................................ 7
2.2.1. Monitoring and Evaluation practice ............................................................................................... 8
2.2.1.1. Monitoring practice ....................................................................................................................... 8
2.2.1.2. Evaluation practice ........................................................................................................................ 9
2.2.2. Disparities & complementary features of monitoring & evaluation .......................................... 11
2.2.3. The need for monitoring & evaluation in project Management ................................................. 11
2.2.4. Approaches of monitoring and evaluation.................................................................................... 12
2.2.4.1. Traditional Approach .................................................................................................................. 12
2.2.4.2. Participatory Approach............................................................................................................... 12
2.2.5. Designing monitoring & evaluation systems................................................................................. 13
2.2.5.1. Establishing log frame ................................................................................................................. 13
2.2.5.2. Methods of monitoring and Evaluation Facts Collection ......................................................... 15
2.2.6 Features of Best M&E Practices of NGOs ..................................................................................... 16
2.2.6.1 M&E plan .................................................................................................................................. 16
2.2.6.2 Coherent framework................................................................................................................. 16
2.2.6.3 M&E budget .............................................................................................................................. 17
2.2.6.4 Personnel assigned for M&E activities ....................................................................................... 17
2.2.6.5 Specification of the frequency of data collection ....................................................................... 18
2.2.6.6 Stakeholder involvement .............................................................................................................. 18
2.2.6.7 Capture and documentation of lessons learned ............................................................ 19
2.2.6.8 Dissemination of M&E findings................................................................................................... 19
2.2.7 Policy guidelines for conducting monitoring and evaluation ....................................................... 19
2.2.8 Challenges of NGOs in Monitoring and Evaluation ...................................................... 20
2.2.9 Monitoring and evaluation of development projects in Ethiopia ................................. 21
2.2.9.1 Overview of NGOs ....................................................................................................... 23
2.2.9.2 Code of Conduct and Regulatory Framework for NGOs in Ethiopia...................... 23
2.3. Empirical Literature .......................................................................................................................... 25
2.4 Literature Gap .................................................................................................................................... 27
2.5. Conceptual Framework ..................................................................................................................... 27
CHAPTER THREE .................................................................................................................................. 29
3. RESEARCH METHODOLOGY ........................................................................................................ 29
3.1 Introduction ......................................................................................................................................... 29
3.2 Research Design .................................................................................................................................. 29
3.3 Research Approach ............................................................................................................................. 29
3.4. Population of the study ...................................................................................................................... 30
3.5. Sampling Techniques and Sampling Procedures ............................................................................ 30
3.5.1. Sample Techniques ......................................................................................................................... 30
3.5.2. Sample Size ...................................................................................................................................... 30
3.6. Data and Data Source ........................................................................................................................ 31
3.6.1 Methods of Data Collection ............................................................................................................. 31
3.7. Validity and reliability Test .............................................................................................................. 31
3.9. Ethical Consideration ........................................................................................................................ 33
CHAPTER FOUR..................................................................................................................................... 34
PRESENTATION, ANALYSIS AND INTERPRETATION OF FINDING ....................................... 34
4.1 Introduction ......................................................................................................................................... 34
4.2 Demographic Characteristics ............................................................................................................ 34
4.2.2 Distribution of respondents by age group...................................................................................... 36
4.2.3 Educational level of respondents .................................................................................................... 36
4.2.4 Work experience of respondents .................................................................................................... 37
4.2.5 Employees work experience regarding M&E................................................................................ 38
4.3 Selection of Monitoring and Evaluation tools and techniques ........................................................ 38
4.3.1 The expected results clearly defined from program design and result chain ............................. 38
4.3.2 The indicators are (SMART) and have indicative targets with baselines ................................... 39
4.3.3 M&E information usage to assist in decision-making and planning ........................................... 39
4.3.4 The risks and assumptions in M&E activities have been defined ............................................... 40
4.3.5 There is an M&E strategic and work plan for the project ........................................................... 41
4.3.6 Usage of ICT enabled tools.............................................................................................................. 41
4.3.7 Ongoing supervision of record keeping and reporting ................................................................. 42
4.3.8 Donor procedures and guidelines usage on M&E system ............................................................ 42
4.4 Technical expertise on a particular M&E tools and techniques ..................................................... 43
4.4.3 M&E team training and skills ......................................................................................... 44
4.4.4 Use of external consultants in monitoring and evaluation ........................................................... 44
4.4.5 M&E unit roles and responsibilities ............................................................................................... 45
4.4.6 Assessment and evaluation of compatibility of the M&E tools and techniques ......................... 45
4.4.7 M&E unit document and record lessons learnt ............................................................................ 46
4.5 Role of management on M&E practices ........................................................................................... 47
4.5.1 Management recognizes and supports the role of M&E .............................................................. 47
4.5.2 Sufficiency of stakeholders’ involvement on M&E process ......................................................... 48
4.5.3 The management ensures that M&E staff are trained .................................................. 49
4.5.4 Supportive supervision and guidance from leaders ...................................................................... 49
4.6 Interview Analysis ............................................................................................................................... 49
4.6.1 Allocation of enough budget for M&E ........................................................................................... 50
4.6.2 M&E tools and techniques know/used so far ................................................................................ 50
4.6.3 The availability of M&E Manual and policy ................................................................................. 50
4.6.4 Dissemination of M&E reports/Information ................................................................................. 51
4.6.5 Evaluation tool used so far in local NGOs ..................................................................................... 51
4.6.6 Challenges in practicing M&E........................................................................................................ 52
CHAPTER FIVE ...................................................................................................................................... 53
SUMMARY, CONCLUSION AND RECOMMENDATION ............................................................... 53
5.1 Introduction ......................................................................................................................................... 53
5.2 Summary of findings........................................................................................................................... 53
5.3 Selection of Monitoring and Evaluation tools and techniques influence ....................................... 53
5.4 Technical expertise on a particular M&E tools and techniques ..................................................... 54
5.5 Role of management on M&E Practices ........................................................................................... 55
5.6 Conclusion ........................................................................................................................................... 55
5.7 Recommendation................................................................................................................................. 56
5.8 Recommendation for future studies .................................................................................................. 57
REFERENCES ........................................................................................................................................... 58
Appendix I ................................................................................................................................................. 67
Appendix II ................................................................................................................................................ 71
List of Tables
4.1 Response Rate…………………………………………………………………………...…...32
4.2 Distribution of respondents by sex .................................................................................................. 34
4.3 Distribution of respondents by age group……………………………………………… …….36
4.4 Work experience of respondents ........................................................................................................... 37
4.5 Work experience of respondents as Project M&E Expert and Manager………………...…...38
4.6 Selection of Monitoring and Evaluation tools and techniques............................................................. 38
4.7 The expected results clearly defined from program design and result chain ............................. 38
4.8 M&E information provided to assist in decision-making and planning ..................................... 39
4.9 The risks, assumptions and effects in carrying out M&E activities have been defined ........... 40
4.10 There is an M&E strategic and work plan for the project .......................................................... 41
4.11 Organization use ICT enabled tools to collect, manage and analyze data ............................... 41
4.12 There are ongoing supervision of record keeping and reporting............................................... 42
4.13 There is a dominant use of donor procedures and guidelines by M&E system…………….42
4.14 Carryout needs assessment & baseline survey for all granted projects .................................... 43
4.15 The M&E team is trained in skills like leadership ........................................................... 44
4.16 There is a use of external consultants in monitoring and evaluation ........................................ 44
4.17 The M&E unit has well-defined roles and responsibilities ............................................ 45
4.18 The M&E team regularly assess and evaluate compatibility of the M&E tools ..................... 45
4.19 M&E unit document and record lessons learnt for future use ……………………………...46
LIST OF FIGURES
2.1. Conceptual Framework Source ………………………………………………………….... 28
4.1 Educational level of respondents ……………………………………………………………36
4.2 Senior management recognizes and supports the M&E process ………………………….47
4.3 There is insufficient stakeholders’ involvement on M&E process ......................................... 48
4.4 The management ensures that M&E staff are trained regularly ............................................ 48
4.5 Supportive supervision and guidance from leaders to M&E expert/staff ............................... 49
OPERATIONAL DEFINITION OF TERMS
M&E System – this is a set of components which are related to each other within a structure and
serve a common purpose of tracking the implementation and results of a project
M&E Training – this is the acquisition of practical tools that enhance result-based management
by strengthening awareness in
Monitoring and Evaluation (M&E) – this is the process of systematically collecting and
analysing information on an ongoing project and comparing the project outcome/impact against
the project intentions
Project - this is a specific activity to be carried out, which consumes resources and has a
beginning and an end
Tools and Techniques – these are methods and procedures used to meet the project’s M&E
needs
List of Acronyms
CHAPTER ONE
INTRODUCTION
PATH (2013) further states that, while M&E practices and approaches undoubtedly cut across
the academic social-science domains, M&E purposes and techniques are usefully distinguished
as a variety of information collection, processing and use. Monitoring and evaluation has been a
key performance management tool for planning, decision making and economic policy
management (IFRCS, 2011). This includes decisions to improve, reorient or discontinue the
evaluated intervention or policy. It could also include decisions that involve changing an
organization's strategic plans or management structures.
National and international policy makers and funding agencies also use this to inform as well as
challenge the decision-making process (UNICEF, 2016). Many international organizations such
as the United Nations, the World Bank and the Organization of American States have been
utilizing this process for many years (USAID, 2012). The process is also growing in popularity
in the developing countries where the governments have created their own national M&E
systems. The main focus of implementing this is to assess the development projects, resource
management and the government activities or administration.
Similarly, Chikati (2009) emphasizes that M&E of development projects is progressively being
recognized as an essential management function. This is because M&E strengthens the
performance of the project, since it enables stakeholders to make swift decisions on matters
relating to the projects.
Monitoring and evaluation (M&E) is a powerful public management tool that can be used to
improve the way governments and organizations achieve results (Jozy and Ray, 2014:5).
Implementation and planning failures are among the most widespread M&E challenges that
affect the proper functioning of M&E schemes in projects. A further challenge in project M&E
implementation is that it involves longer-term changes, and it may take months or years for such
changes to become apparent. Furthermore, it can be difficult to attribute observed changes to an
intervention versus other factors (the “attribution” problem). Despite these challenges, there is an
increasing demand for accountability among organizations working in humanitarian aid and
development. Therefore, careful consideration should be given to its measurement, including the
required time period, resources and specialized skills. An M&E structure is crucial not only for
projects and programs but also for a country in shaping its socio-economic and political system.
Learning from these M&E efforts then leads to a clearer understanding of the existing M&E
initiatives, the overall sector environment, its institutional arrangements, and opportunities for
strengthening and improving existing M&E activities, along with putting M&E information to
use for the intended stakeholders. The significance of M&E information lies in its use for
managers' roles such as budget decision making and keeping ongoing programs or project
activities on track toward their goals. More importantly, this study will help key persons in the
donor community to recognize the strengths and weaknesses of M&E in addition to the
institutional arrangements (Seotesberg, 2011). Therefore, this research aims to assess the project
monitoring, evaluation and controlling practices and challenges of local NGOs in Addis Ababa,
Ethiopia.
Consequently, with the growing global movement to demonstrate accountability and tangible
results, many developing countries will be expected to adopt results-based M&E systems in the
future, given international donors' focus on development impact (USAID, 2020). The above
shows that the M&E systems are not performing satisfactorily. There are therefore two key
reasons for undertaking research on this topic. The first is to examine the current practice of
monitoring and evaluation tools and the challenges in the organizations, and the second is to
describe the monitoring and evaluation practices of M&E experts together with management and
to provide empirical evidence that will inform an improved system.
1. How familiar are local NGOs in Addis Ababa with M&E tools, and how do they use them
in their projects?
2. What is the role of management in the project M&E practice of local NGOs in Ethiopia?
3. What is the technical competence of M&E officers/experts in the project M&E practice of
local NGOs in Addis Ababa?
4. What challenges do local NGOs commonly face in M&E practice?
The general objective of the study is to describe the project monitoring, evaluation and
implementation practices and challenges of local NGOs implementing youth and youth-related
development projects in Addis Ababa, Ethiopia.
1.7 Limitations of the study
Due to the limited number of organizations implementing youth development projects, the
research addressed only 12 local nongovernmental organizations. As a result, the research
findings may not be generalizable to all local nongovernmental organizations executing youth
development projects in Addis Ababa, Ethiopia. Other major constraints of the work included the
lack of research findings specifically on the monitoring and evaluation of NGO-supported
projects; the lack of adequate and formally organized available data; delays in responses (time)
and budget limitations; and the Covid-19 pandemic, which made it difficult to conduct the
interviews.
The study has shown a number of relevant issues about M&E that the projects had not explored
before, and it may therefore be important for further research investigating M&E practices and
challenges. This study was conducted in 12 selected local NGOs implementing youth
development projects in Addis Ababa, Ethiopia. Other studies should cover other projects in
order to obtain more complete information on these challenges.
CHAPTER TWO
2. REVIEW OF LITERATURE
2.1 Introduction
Monitoring and evaluation systems have been in existence since ancient times (Nigel and
Rachel, 2010). Today, however, the requirement for monitoring and evaluation systems as a
management tool to show performance has grown with stakeholders' demand for accountability
and transparency through the application of monitoring and evaluation by NGOs and other
institutions, including the government (Gorgens et al., 2010). Development banks and bilateral
aid agencies also regularly apply monitoring and evaluation systems to measure development
effectiveness as well as to demonstrate transparency.
2.2.1. Monitoring and Evaluation practice
The experience of development projects is fraught with problems; their implementation has
frequently run into serious difficulties and their results are far from always having met the hopes
placed in them (Magnen, 1991). Studying past experience, specialists have become aware that
the lack of reliable information on the implementation conditions and results of programmes and
projects was often at the heart of repeated problems and failures. Specialists noted that in the
absence of appropriate information:
➢ Managers can neither detect improper functioning, nor of course take early decisions.
➢ Decision-makers can neither analyze the causes of problems, nor choose more appropriate
objectives and implementation strategies on the basis of good understanding.
The need to develop and apply a practical M&E system is increasingly recognized; it is an
essential tool for programme/project management, both to support implementation and to obtain
feedback for the design of new initiatives (EMI, 2014).
Monitoring and evaluation of development interventions provides government officials, funders,
and civil society with better means for learning from past experience, improving service
delivery, planning and allocating resources, and demonstrating results as part of accountability to
key stakeholders. Monitoring and evaluation is a critical, and often donor-required, means of
determining whether or not development assistance programs are achieving their planned targets
(USAID, 2012).
Although different scholars have made efforts to define monitoring and evaluation, there is often
confusion about what it entails. The next subsection defines the terms monitoring and evaluation
separately, explains why we engage in monitoring and evaluation, and clarifies what both entail.
Monitoring can be defined as a continuing function that aims primarily to provide
the management and main stakeholders of an ongoing intervention with early indications of
progress, or lack thereof, in the achievement of results (UNDP, 2009, 16).
This research adopts EMI's definition of monitoring, which reads: 'monitoring is a tool for project
managers to use in judging and influencing the progress of implementation; it is a management
activity carried out at different levels aimed at ensuring that the progress of a project conforms to
its plan' (EMI, 2014).
This research argues that the following are among the major items that have to be closely
monitored by local nongovernmental organizations while executing their development projects:
physical progress, the work plan, resource utilization, and project outputs against the plan. To
communicate monitoring and evaluation results, the most widely used means employed in
projects are reports, meetings, and site visits (EMI, 2014).
In terms of the period of evaluation, four types of evaluation are commonly distinguished:
ex-ante evaluation, mid-term evaluation, terminal evaluation and ex-post evaluation, with details
of each presented below:
1. Ex-ante evaluation (Start-up evaluation): A form of evaluation conducted prior to startup of
implementation of a project/program. It is carried out in order to determine the needs and
potentials of the target group and its environment, and to assess the feasibility, potential effects
and impacts of the proposed programme/project. At a later stage the effects and impacts of the
programme/project can be compared with this baseline data (EMI, 2014).
2. Mid-term evaluation: This type of evaluation takes place while the implementation of the
planned project is in progress. Such evaluations are conducted around the midpoint of the project
life and are usually external assessments. What distinguishes them from terminal and ex-post
evaluations is that corrections to the current project can still be made on the basis of findings and
recommendations (EMI, 2014).
3. Terminal/Summative evaluation: It is conducted when the funding for the intervention or
the whole project activity comes to an end. But this may not mean that the services and inputs
being supplied by the programme/project terminate. In the terminal evaluation, in addition to the
existing records, documents and outputs, an inquiry should be made for secondary data that are
relevant for comparison. Recommendations from terminal evaluation are primarily directed to
improve the planning and design of future projects (EMI, 2014).
4. Ex-post/Impact evaluation: It is designed as an in-depth study of the sustainable impact of a
programme/project that has already been executed. It is carried out some time (in most cases 3-5
years) after the programme/project activity has been terminated in order to determine its impact
on the target group and the local area. However, it is rarely done, owing to a lack of willingness
on the part of the financers of the program/project to fund it. On the other hand, based on who
conducts the evaluation, scholars classify evaluation into two types: internal and external (EMI, 2014).
1. Internal evaluation: It is performed by persons who have a direct role in the
programme/project. On-going or formative evaluation can be done by the management team or
persons assigned from the implementing agency. The majority of local nongovernmental
organizations engage in this type of evaluation because it cuts expenses (EMI, 2014).
2. External evaluation: The type of evaluation carried out by persons from outside the
program/project. Terminal and ex-post evaluations are often conducted by external evaluators. In
most cases in local NGOs such evaluation is conducted by the funding agencies. Donors often
prefer external evaluators because it is believed that they can bring a range of expertise and
experience that might not be available within the organization, and they may have more
independence and credibility than an internal evaluator (EMI, 2014).
In general, monitoring and evaluation is a management tool that helps to judge if work was
going on in the right direction, whether progress and success could be claimed, and how future
efforts might be improved. It assists organizations to extract, from past and ongoing activities,
relevant information that can subsequently be used as the basis for programmatic fine-tuning, re-
orientation and planning (UNDAF, 2011, 58).
With regard to the purposes of M&E, the Ethiopian Management Institute summarizes the
multiple purposes of monitoring and evaluation as: to forecast performance, gather information
for early warning, identify lessons, assess beneficiaries, assess outputs/results, track progress,
check schedules, assess project activities, assess objectives, enhance teamwork, mobilize
stakeholders, plan program improvement, practice benchmarking, ensure accountability, and
ensure quality management (EMI, 2014).
In development interventions, current trends employ monitoring and evaluation as an integral
part of project management. However, contrary to this, some development partners pay little or
no attention to it while planning (World Bank, 2004). A monitoring and evaluation plan, as an
integral part of the overall project plan and depending on the size of the project, could include:
the parties responsible for M&E, the issues to monitor and evaluate, the methods employed, the
resources required, and a plan for dissemination of findings (MA, 2013).
For any organization, whether governmental or nongovernmental, having a clearly defined
monitoring and evaluation plan prior to execution helps to ensure resource allocation and
scheduling, determine M&E staff roles and responsibilities, identify information-sharing tools,
identify salient stakeholders, decide on data collection tools, forecast possible M&E-related
challenges and set coping mechanisms in advance.
The traditional approach to monitoring and evaluation, according to the World Bank (2004), is an
approach in which specially trained experts are involved. This approach is criticized by planners
on the grounds that it understates the central idea that projects belong to beneficiaries.
2.2.4.2. Participatory Approach
Sustainability of any project ultimately depends on how far the concerned beneficiaries and
stakeholders are able to monitor and evaluate the process and performance of any interventions
carried out, because it is accepted that projects belong to the targeted beneficiaries. Jerry & Anne
(2008) define participatory M & E as a process in which primary and other stakeholders
collaborate and take an active part in assessing and evaluating the performance and achievement
of a project or an intervention. In this approach, ideally all the stakeholders are involved in
identifying the project, setting objectives, and identifying the indicators that will be used in
monitoring and evaluation. Participation can be enhanced if monitoring and evaluation systems
are simple and easy for stakeholders to apply. Thus, during the project formulation stage,
organizations need to give adequate attention to designing simple and locally applicable systems
and tools.
Despite a requirement by the donor community to pay attention to the log frame, the majority of
local nongovernmental organizations in Ethiopia usually fail to use it because of a lack of
expertise (Samuel, 2010). In establishing a log frame, though the terms used vary between
organizations, the basic four-by-four matrix is a common pattern (EMI, 2014).
As shown in the literature, the vertical logic clarifies the causal relationships between the different
levels of objectives and specifies the important assumptions and uncertainties beyond the activity
manager's control. The horizontal logic defines how the activity objectives specified in the first
column of the log frame will be measured and the means by which the measurement will be verified.
Although the logical framework approach has become widely accepted as a useful tool for project
planning, monitoring and evaluation, it does have weaknesses, including: focusing too much on
problems rather than opportunities and vision; leading people into a 'blueprint' approach if used
too rigidly; limited attention to problems of uncertainty, where learning and an adaptive approach
to project design and management are required; and skipping key elements of the analytical
process.
In spite of these limitations, provided that due attention is given to the participation of
stakeholders and it is not used rigidly, the logical framework approach remains a very valuable
tool for project monitoring and evaluation. In the process of setting up a log frame, once the
objectives of a project are established, what often causes confusion for monitoring and
evaluation, particularly among less experienced local nongovernmental organizations, is the
identification and application of appropriate indicators: the key tools that enable managers to
track progress, demonstrate results and take corrective actions to improve project/program
performance (EMI, 2014).
2. Service recording: - This method entails recording the attendance of participants in project
activities, for instance health service beneficiaries, awareness creation campaign participants,
etc. It helps to determine how many beneficiaries have been reached by the services of the project.
3. Questionnaires: - This method is very handy in determining the perceptions of the project
stakeholders about the implementation and can be used in monitoring and evaluating the progress
and impacts of the project. Addis Ababa University (2009) defines a questionnaire as a formal set
of statements designed to gather information from respondents.
Monitoring and evaluation data collection methods can generate better results if they are
simple, clear, short and focused. Hence, appropriate methods have to be identified and used based
on the extent and type of information expected. The next subsection highlights the importance of
policy backing for undertaking project monitoring and evaluation.
The project should have an M&E plan. The plan should be prepared as an integral part of the
project plan and design (PASSIA, 2004; McCoy et al., 2005). Planning for M&E must start at the
time of project design, and the two must be planned together (UNDP, 2009). The integration
allows clear identification of the project objectives against which performance can be measured.
Effective and timely decision making requires information from regular and planned M&E
activities.
A framework is an essential guide to monitoring and evaluation, as it explains how the project
should work by laying out the steps needed to achieve the desired results. A framework therefore
increases the understanding of the project goals and objectives by defining the relationships
between the factors key to implementation, as well as articulating the internal and external
elements that could affect the project's success. A good M&E framework can assist in thinking
through the project strategies and objectives and whether they are the most appropriate to
implement.
One of the best practices that has been adopted because of its structured approach is the use of
the LFA as a tool to aid both the planning and the M&E functions during implementation (Aune,
2000; FHI, 2004). This gives it great leverage in that, from the beginning, the project design and
hence its implementation are integrated with performance measurement through the identification
of indicators that will demonstrate how the project is performing during implementation.
The project budget should provide a clear and adequate provision for M&E activities. An M&E
budget can be clearly delineated within the overall project budget to give the M&E function due
recognition for the role it plays in project management (Gyorkos, 2003; McCoy et al., 2005).
Some authors argue for an M&E budget of about 5 to 10 percent of the total project budget (Kelly
and Magongo, 2004; IFRC, 2001). The intention of this practice is not to prescribe what
percentage is adequate, but to set aside sufficient funds to facilitate the M&E activities. Provision
of a budget for M&E ensures that the M&E activities take place when they are due. It also
ensures that M&E is not treated as a peripheral function.
Human capital, with proper training and experience, is vital for the production of M&E results.
There is a need for effective M&E human resource capacity in terms of quantity and quality;
hence, M&E human resource management is required in order to maintain and retain stable M&E
staff (World Bank, 2011). This is also because the availability of competent employees is a major
constraint in setting up M&E systems (Koffi-Tessio, 2002). M&E being a new professional field,
it faces challenges in the effective delivery of results. There is therefore a great demand for skilled
professionals, capacity building of M&E systems, and harmonization of training courses as well
as technical advice (Gorgens and Kusek, 2009). There should also be an individual who is
directly in charge of M&E as a main function (Kelly and Magongo, 2004) and an identification of
different personnel for the different activities of M&E, such as data collection, analysis, report
writing and dissemination of the monitoring and evaluation findings (AUSAID, 2006; Gyorkos,
2003; McCoy et al., 2005). Having staff clearly designated with M&E roles and responsibilities
ensures that somebody is available to do M&E activities, and staff appreciate that the project
managers value M&E not as compliance with the funding agency but as a tool for project
management, learning and improving the performance of the project.
There should be a clear specification of how often M&E data is to be collected and from whom.
There should also be a specification of a schedule for M&E reports to be written (Gyorkos,
2003).
2.2.6.7 Capture and documentation of lessons learned
Lessons learned from the implementation should be captured and documented for
incorporation into the subsequent projects and sharing with other stakeholders. The lessons
would include what went right in implementation and what went wrong and why so that the
mistakes are not repeated in the subsequent projects (PASSIA, 2004; Uitto, 2004). These lessons
should be shared with the implementing staff. M&E can only play a significant role in the
accountability process if measures to enhance learning are put in place. Through regular
exchange of information, reporting, knowledge products, learning sessions and the evaluation
management response system, information from M&E can be fed back into the learning process
and planning (UNDP, 2009).
There should be an M&E findings dissemination plan. Only an efficient system of dissemination
will ensure that the target recipients receive the M&E feedback that is relevant to their specific
needs. M&E findings should be disseminated to the stakeholders by way of a report to the donor,
depending on the donor's requirements; communication or a report to the community and
beneficiaries; and feedback to the implementing staff to improve their implementation practices
and strategies (Gyorkos, 2003; McCoy et al., 2005).
supported unanimously and approved by many states was that of the UN Human rights council
on freedom of peaceful assembly and association passed in September 2010 (TECS, 2013).
In line with this international recognition of the roles of NGOs, Ethiopia has set clear legislative
and constitutional frameworks for the sector. Thus, after analyzing the pros and cons, in 2009 the
government established the Federal Charities and Societies Agency and enacted Proclamation
621/2009 under the logic of decreasing dependency on foreign funds and ensuring NGO
accountability.
The Proclamation specifies that no more than 30% of the project budget should be used for
administration, the component to which the monitoring and evaluation budget belongs (Chasa,
2011). The impact of such a provision is that M&E gets reinforced at different levels and
becomes accepted as a politically, administratively and socially acceptable approach to promote.
Carrying out appropriate monitoring and evaluation is a key requirement set by the city
government.
However, the Addis Ababa Finance and Economic Development Bureau and the relevant
government offices reported that they were challenged by the poor performance of
nongovernmental organization projects in monitoring and evaluation. In relation to this, in the
first half of the year the Federal Charities and Societies Agency, in collaboration with the bureau,
was obliged to revoke the licenses of seven nongovernmental organizations (Chasa, 2014).
There is too little or no monitoring of 'other influencers' that affect movement along the
results chain and, ultimately, the attainment of success. Recognition of such 'influencers' may
bring to light the non-linear relationships inherent in a project's theory of change and the true
complexity of the initiative. According to Gudda (2011), exogenous indicators are those that
cover factors outside the control of the project but which might affect its outcome, including risks
(parameters identified during economic, social, or technical analysis that might compromise
project benefits) and the performance of the sector in which the project operates.
The performance measurement strategy in general tends to have serious gaps, in particular a lack
of relevant data/information sources and feasible measurement strategies. In general, when
performance information is collected, it tends to serve more of an administrative purpose, for
example being used by a program manager to report on activities and expenditures so as to justify
or release funds for further project activities. Broader use of results information is limited.
Furthermore, it needs to be recognized that “growing” evaluators requires far more technically
oriented M&E training and development than can usually be obtained with one or two
workshops. Both formal training and on-the-job experience are important in developing
evaluators with various options for training and development opportunities which include: the
public sector, the private sector, universities, professional associations, job assignment, and
mentoring programs (Acevedo et al., 2010). Monitoring and evaluation carried out by untrained
and inexperienced people is bound to be time-consuming and costly, and the results generated
could be impractical and irrelevant. This will therefore definitely affect the success of projects
(Nabris, 2012).
During the Derg regime, under the centrally planned command economy, the Central Planning
Commission was responsible for the overall monitoring and evaluation of development sector
project activities. Quarterly, bi-annual and annual progress reports, field inspection interviews
and discussions held with development sector project implementers were used as the basic
tools of data gathering for project monitoring and evaluation (MoFED, 2008).
As the Ministry of Finance and Economic Development indicates, the overall development
sector project monitoring and evaluation of the past system suffered from the following basic
limitations. The development sector project monitoring and evaluation system was too rigid and
lacked dynamism, and project managers had limited autonomy in decision making. There were
also delays in monitoring and evaluation feedback to both managers and implementers. As a
result, the projects incurred high costs in executing project monitoring and evaluation activities,
and outcome evaluation did not get attention.
In the early 1990s, the responsibility of coordinating and consolidating development sector
project monitoring and evaluation was given to the Ministry of Planning and Economic
Development. During this period, the Ministry developed the standard formats that were used for
both financial and physical project performance data collection and communication. Minimal
field trips to conduct project monitoring and evaluation and a poor feedback system were some of
the weaknesses of the development sector project monitoring and evaluation system of the period
(MoFED, 2008).
MoFED (2008) adds that during the early 1990s, the responsibility of conducting monitoring and
evaluation of externally financed projects was given to the Ministry of External Economic
Cooperation. The ministry had no project monitoring and evaluation system of its own and relied
only on adopting donor-driven project monitoring and evaluation practices such as field visits,
review meetings and periodic monitoring. The major challenges observed were that review
meetings were conducted only on an annual basis, which created long intervals for taking
corrective measures on time; monitoring activities were dependent only on progress reports
obtained from project implementing sectors; and monitoring and evaluation lacked comparative
analysis of what was planned against what was achieved.
Following the decentralization process in the country, under the Federal Democratic Republic of
Ethiopia, the development sector project monitoring and evaluation system has begun to be
conducted at both city and federal levels. As a result, the planning and program departments at
both the Federal Ministry of Finance and Economic Development and the Addis Ababa Bureau
of Finance and Economic Development are mandated to play the role of coordinating and
consolidating project monitoring and evaluation (MoFED, 2008). At the federal level, MoFED
has developed standard guidelines and formats for federal development sectors to conduct
development project monitoring and evaluation accordingly. In addition, Proclamation No.
41/1993 vested power and responsibility in the Ministry of Finance and Economic Development
to follow up and evaluate the implementation of the capital budget, external assistance, loans and
Federal subsidies granted to the regional states.
Ethiopia was hit by two devastating famines within roughly a decade. The first famine occurred
in 1973/74 and the second, more devastating one occurred in 1984/85, causing involuntary mass
migration and huge loss of lives and property. These famines have largely contributed to the
influx and emergence of NGOs in Ethiopia (CCRDA, 2009).
The first indigenous organizations that functioned in a manner apparently similar to present-day
NGOs were traditional self-help groups. Iddir and Equb are the most common indigenous ones,
having existed for generations in Ethiopia serving as funeral and saving associations, respectively.
These organizations are today known as community-based organizations (Addis Finance, 2011).
In the study period, Ethiopia was hosting about 3,056 re-registered civil society organizations, of
which 2,650 were local and the remaining 406 were international organizations operating in
different parts of the country (ChSa, 2014). Likewise, the Addis Ababa city government in the
same period hosted about 700 NGOs, of which 224 were local ones that had signed formal project
operational agreements with the respective bureaus of the city government (AABoFED, 2014).
statement of principles by the sector and serves as a symbol that it is capable of self-regulation,
monitoring, and evaluation (Jeffrey, 2007).
The code of conduct for NGOs in Ethiopia was formally adopted in March 1999, when the
overwhelming majority of NGOs operating in the country swore to uphold its principles; its
formation is considered one of the major achievements of the sector since the onset of the
contemporary era for NGOs in 1991 (Debebe, 2012).
The regulatory framework for CSOs/NGOs in Ethiopia is in a state of transformation. The
provisions of the 1960 Civil Code and a 1966 Internal Security Act issued by the then Ministry
of Interior were used to govern the establishment and operation of the whole range of civil
society organizations. On January 6, 2009, the Charities and Societies Proclamation No.
621/2009 of Ethiopia was enacted; it defines two categories of formal CSOs in Ethiopia:
Charities and Societies (Debebe, 2010).
Charities are institutions established exclusively for charitable purposes that provide public
benefit. Societies, on the other hand, are associations of persons organized on a non-profit-making
and voluntary basis for the promotion of the rights and interests of their members, to undertake
other similar lawful purposes, and to coordinate with institutions of similar objectives. Charities
and Societies are given one of three legal designations, Ethiopian Charities or Societies,
Ethiopian Resident Charities or Societies, or Foreign Charities, based on where the organization
was established, its source of income, the composition of its membership, and the residential
status of its members (Chasa, 2014).
Ethiopian Charities or Societies are institutions formed under the laws of Ethiopia whose
members are all Ethiopians, which generate income from Ethiopia and are wholly controlled by
Ethiopians. These organizations may not use foreign funds to cover more than 10% of their
operational expenses. Similar institutions that receive more than 10% of their resources from
foreign sources, or whose members include Ethiopian residents, are designated Ethiopian
Resident Charities or Societies. Foreign Charities, on the other hand, are those formed under the
laws of foreign countries, or whose membership includes foreigners, or which are controlled by
foreigners, or which receive funds from foreign sources (Chasa, 2011).
The provisions of the Proclamation are applicable to Charities or Societies that operate in more
than one regional state, or Societies whose members are from more than one regional state;
Foreign Charities and Ethiopian Resident Charities and Societies, even if they operate only in one
regional state; and Charities or Societies operating in the City Administration of Addis Ababa or
Dire Dawa. This research identified and targeted Ethiopian local NGOs operating in Addis Ababa.
Accordingly, PATH (2013) identified lack of baseline data, budget, little time available for
evaluation, and weak political will to support comprehensive evaluation as challenges for M&E,
and listed practical, field-tested ideas to overcome them.
Peersman (2014) agrees that common challenges in data collection and analysis can relate to
poor choices of methods as well as poor implementation of methods.
Another case study was conducted in Tanzania by Emel et al. (2012) under the title 'Problems
with reporting and evaluating mining community development projects'. They raised questions
about the reporting and evaluation of a community development project undertaken by the
AngloGold Ashanti company in the communities of Nyakabale and Nyamalembo, Geita District,
a mining project in the Lake Victoria goldfield of Tanzania. They employed a descriptive
research design, obtained data through field visits, interviews, questionnaires and the use of
archival material, and applied both quantitative and qualitative analysis approaches. Their
findings revealed that the corporate reporting was misleading, ambiguous and omissive. They
proposed the following remedies: increasing government inspections and fines, and citizen
involvement in the monitoring and reporting process.
Different factors can affect the performance of monitoring and evaluation of projects in different ways. According to Gitahi Kenneth (2015), financial resources are central in determining the future and the success of M&E processes, and M&E needs a budget separate from that of the project itself. Financial resources, the availability of expertise in monitoring and evaluation of projects, management commitment and the involvement of different stakeholders in the M&E system all influence the practice of monitoring and evaluation of projects.
Another study (ECPE, 2010) investigated the main challenges of the Ethiopia Country Program Evaluation. These include the constraints in time and resources given for such an evaluation; inconsistencies and limitations in the quality and comparability of the data available with regard to coding and disbursements, which did not give a clear understanding of resource use; and the limited evaluative data available.
In "Assessing the Practice of Project Monitoring and Evaluation: The Case of Commercial Bank of Ethiopia Projects", Tegbar Worku (2018) assessed monitoring and evaluation planning, the implementation of the monitoring and evaluation process, and the challenges in applying the M&E system. Concerning the planning of M&E, there are good practices except for the absence of a separate budget for monitoring and evaluation. In the M&E process, finance, activities and schedule are checked at least once a month. The five main challenges in M&E are lack of the right performance indicators, lack of expertise, inaccuracy in data collection, failure to prepare appropriate data collection, and failure to process and analyze data.
2.3 Literature Gap
Yet these studies have repeatedly said little beyond the more immediate effects of M&E. The literature draws attention to problems relating to monitoring and evaluation. Efforts to assess accurately the impact of individual projects have often been hindered by the cumulative effect of a number of common weaknesses, including: lack of clarity concerning the precise objectives of projects and how they might best be assessed; poor or non-existent baseline data; inadequate monitoring and project completion reports; and the low priority given to assessment and the related problem of inadequate in-house skills (NORAD, 2013).
At the case-study level, a gap is evident. There is no continuous examination of progress during the implementation of an undertaking to track compliance with the plan and to take the decisions needed to improve performance. Likewise, the systematic and impartial assessment of the distinct yet complementary features of monitoring and evaluation, the approaches to monitoring and evaluation, the design of M&E systems with logframes and policy guidelines for conducting monitoring and evaluation, and the further challenges NGOs face in monitoring and evaluation in order to understand achievements remain insufficiently examined.
Figure 2.1 Conceptual Framework. Components: M&E plan, M&E budget, M&E policy, logical framework, documentation of lessons learned, dissemination of reports, stakeholders' involvement, level of M&E, and trainings, contributing to effective M&E practice. Source: Mark (2012) and USAID (2020), with some modification by the author, 2021.
CHAPTER THREE
3. RESEARCH METHODOLOGY
3.1 Introduction
An important part of the research activity is to develop an effective research design that shows the logical link between the data collected and the analysis and conclusions to be drawn. This determines the most suitable methods of investigation, the nature of the research instruments, the sampling plan and the types of data (De Wet, 1997). This section describes the research design, the sampling type and the research methods used in the study.
A qualitative approach was also used by conducting interviews with heads of management, project officers and M&E experts of the NGOs under study. Notwithstanding its advantages, a qualitative research design has its own demerits: the lack of standardized rules reduces the objectivity of the findings, the personal views and stance of the researcher may introduce bias in the interpretation of the data, and the findings cannot be statistically generalized to a broader population (Creswell, 2003).
3.4. Population of the study
According to Keller (2009), "a population is the group of all items of interest to a statistics practitioner". The target population is the total group of people from whom the researcher may obtain information to meet the research objectives (McDaniel, 2001).
The target population therefore consists of employees, M&E experts and managers of project operations in the 12 selected local NGOs implementing youth and youth-related development projects and operating in Addis Ababa. According to the human resource data for the month of March 2021, the total number of permanent employees working in the 12 selected NGOs is 364.
The employees are considered to be homogeneous in nature and are all influenced by the operation of the system. For this study, the researcher took as the target population the 364 employees of the selected local NGOs implementing youth and youth-related development projects in Addis Ababa, Ethiopia, who work in the development project operation area. From this population, the sample size is 192 respondents (project managers, coordinators and M&E officers) implementing youth and youth-related projects in Addis Ababa, Ethiopia.
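The sample-size formula used to arrive at 192 respondents from the population of 364 is not stated in the text. Purely as an illustration (an assumption, not the author's method), a finite-population calculation such as Yamane's simplified formula with a 5% margin of error yields a figure of a similar magnitude:

    # Illustrative only: the thesis does not state the formula actually used.
    # Yamane's simplified formula for finite populations: n = N / (1 + N * e^2)
    def yamane_sample_size(population: int, margin_of_error: float = 0.05) -> int:
        return round(population / (1 + population * margin_of_error ** 2))

    print(yamane_sample_size(364))  # -> 191, comparable to the 192 respondents reported above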
Regarding data and data sources, the study used both primary and secondary sources. The primary data were collected through a questionnaire administered to employees of selected departments in the 12 selected local NGOs implementing youth and youth-related projects in Addis Ababa.
Secondary data were gathered from organizational manuals, website reports, published and unpublished theoretical literature, empirical studies and other relevant documents pertaining to the research under consideration, in order to support the analysis of the data from each source.
The study used both open-ended and closed-ended questionnaire items. The primary data were obtained from the questionnaires, while secondary data were obtained from various office documents, evaluation manuals, monitoring reports and other references related to M&E.
Source: Stephanie G., Cronbach's Alpha (2021)
While increasing the value of alpha depends partly on the number of items in the scale, this has diminishing returns. An alpha of 0.8 is probably a reasonable goal. It should also be noted that while a high value of Cronbach's alpha indicates good internal consistency of the items in the scale, it does not mean that the scale is unidimensional.
The researcher developed the questionnaire and the semi-structured interview questions based on elements of the relevant academic literature. Additional questions were adapted from published papers reporting similar M&E practice assessments. The reliability of the instrument, measured with Cronbach's alpha, was greater than 0.825, which indicates that the items in each of the domains are well understood by the respondents and measure what they were designed to measure.
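The reliability coefficient reported above can also be reproduced outside SPSS. The following is a minimal sketch, assuming the Likert responses are coded numerically in a pandas DataFrame with one column per item (the column names are hypothetical); it computes Cronbach's alpha from the item variances and the variance of the total score:

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical responses: 1 = strongly disagree ... 5 = strongly agree
    responses = pd.DataFrame({
        "item1": [4, 5, 3, 4, 5, 2],
        "item2": [4, 4, 3, 5, 5, 2],
        "item3": [5, 4, 2, 4, 4, 3],
        "item4": [4, 5, 3, 4, 5, 2],
    })
    print(round(cronbach_alpha(responses), 3))

A value above 0.8, as reported for this instrument, would indicate good internal consistency, with the caveat noted above that a high alpha does not establish unidimensionality.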
For this study, both qualitative and quantitative data analysis techniques were used. The supplementary data collected through the interviews and some open-ended items of the questionnaires were analyzed qualitatively.
The data collected via questionnaires were edited and cleaned to reduce ambiguity, then coded into SPSS 22 and analyzed using descriptive statistics, presented as frequencies, percentages and means. For the qualitative analysis, based on the evidence collected from the different sources, an effort was made to understand and interpret the information and to use it together with the quantitative data.
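As a minimal sketch of the descriptive statistics described above (frequencies, percentages and means), and assuming the coded questionnaire data are loaded into a pandas DataFrame with a hypothetical column name, the summaries produced in SPSS 22 could be reproduced as follows:

    import pandas as pd

    # Hypothetical Likert coding: 1 = strongly disagree ... 5 = strongly agree
    df = pd.DataFrame({"results_clearly_defined": [5, 4, 4, 3, 4, 1, 5, 4]})

    freq = df["results_clearly_defined"].value_counts().sort_index()
    summary = pd.DataFrame({
        "Frequency": freq,
        "Percent": (freq / len(df) * 100).round(1),
        "Cumulative Percent": (freq / len(df) * 100).cumsum().round(1),
    })
    print(summary)
    print("Mean:", round(df["results_clearly_defined"].mean(), 2))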
By explaining these important details, the respondents were able to understand the importance of their role in the completion of the study. Participants were not forced to take part in the research. The confidentiality of the participants was also ensured by not disclosing their names or personal information in the research report; only details relevant to answering the research questions were included. Generally, the study avoided any harm to the organizations and kept the confidentiality of the participants.
CHAPTER FOUR
PRESENTATION, ANALYSIS AND INTERPRETATION OF FINDING
4.1 Introduction
This chapter deals with the presentation, interpretation and discussion of the data obtained through the questionnaire and the interviews. The results are depicted in the form of figures, tables and frequencies. The findings are presented and discussed with reference to the specific objectives of the study, which are to explore the project monitoring and evaluation practices and challenges of local NGOs implementing youth and youth-related projects in Addis Ababa, Ethiopia.
Table 4.1 Response Rate
Of the 192 questionnaires distributed, 188 were returned, a response rate of about 98%. The data from the questionnaires were statistically analyzed using SPSS. The findings are discussed below.
Findings from the interviews were used as additional information to clarify relevant subject matters of the assessment and to address any variable affecting the assessment that would otherwise be left unaddressed. Much of this information came from the interviews, as the semi-structured interview provided in-depth and rich data while at the same time eliciting data that are comparable from one subject to the next.
The researcher collected the data and divided it into parts based on the demographic information gathered from the questionnaire and interviews, and then cross-tabulated the responses to compare the data across multiple demographic groups, including sex, age, current academic qualification, and work and M&E experience in the organization. The researcher did not add more demographic questions because respondents may become concerned or aggravated by having to answer a large number of them; additionally, they might feel that doing so would compromise their confidentiality, and others might perceive the questions as an invasion of privacy. The table below shows the number and percentage of the respondents by sex.
Sex      Frequency   Percent   Valid Percent   Cumulative Percent
Male     128         68.1      68.1            68.1
Female   60          31.9      31.9            100.0
Total    188         100.0     100.0
Source: Survey data, 2021
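A short illustration of the cross-tabulation by demographic group described above, with hypothetical column names standing in for the survey variables (the actual analysis was done in SPSS):

    import pandas as pd

    # Hypothetical extract of the coded survey data: one row per respondent
    data = pd.DataFrame({
        "sex": ["Male", "Female", "Male", "Male", "Female", "Male"],
        "qualification": ["MA/MSC", "BA/BSC", "MA/MSC", "PhD", "MA/MSC", "BA/BSC"],
    })

    counts = pd.crosstab(data["sex"], data["qualification"])          # counts per cell
    row_pct = pd.crosstab(data["sex"], data["qualification"],
                          normalize="index") * 100                    # row percentages
    print(counts)
    print(row_pct.round(1))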
4.2.2 Distribution of respondents by age group
The age of the respondents is one of the most important characteristics for understanding their views about particular problems; by and large, age indicates the level of maturity and experience of individuals, and in that sense it is important to examine the responses by age. Concerning the age of the respondents, 85 (45.2%) are in the age range 31-40, while 53 (28.2%) are in the range 41-50, 36 (19.1%) are from 20-30 years of age, and the remaining 14 (7.4%) are aged 50 and above. Accordingly, the majority of the respondents are in the age range 31 to 40, followed by the range 41-50, and can be considered mature and experienced enough to practice and implement M&E.
The study sorted the respondents by level of education in order to ascertain whether they were well equipped with the necessary knowledge and skills in their respective areas of specialization. The findings show that the majority, 97 (51.6%), had an MA/MSC degree, followed by 83 (44.5%) of the respondents with a BA/BSC qualification, while only a small number, 4 (2.13%) in equal proportions, held a PhD or a Diploma. The findings therefore indicate that the respondents have the capacity and skills to conduct M&E activities successfully in their organizations.
Years of experience   Frequency   Percent
1-2 years             5           2.7
2-3 years             25          13.3
3-4 years             23          12.2
4-5 years             40          21.3
More than 5 years     95          50.5
Source: Survey data, 2021
Table 4.5 Work experience of respondents as Project M&E Expert and Manager
Years of experience   Frequency   Percent
Less than 1 year      8           4.3
1-2 years             26          13.8
2-3 years             58          30.9
3-4 years             56          29.8
4-5 years             32          17.0
More than 5 years     8           4.3
Source: Survey data, 2021
4.2.5 Employees' work experience regarding M&E
Based on the findings, the majority, 58 (30.9%), of the respondents had worked in M&E and in managing or coordinating projects for between 2 and 3 years, followed by 56 (29.8%) with M&E work experience of 3 to 4 years. While 32 (17%) of the respondents had worked in M&E for between 4 and 5 years, 26 (13.8%) had 1 to 2 years of experience, and 8 (4.3%) each had less than one year and more than 5 years of experience. This indicates that the respondents were experienced enough to provide valuable responses about effective monitoring and evaluation systems in local NGOs.
Table 4.6 The expected results are clearly defined from the program design and result chain
Response                     Frequency   Percent   Valid Percent   Cumulative Percent
Strongly agree               50          26.6      26.6            26.6
Strongly disagree            6           3.2       3.2             29.8
Neither agree nor disagree   29          15.4      15.4            45.2
Agree                        97          51.6      51.6            96.8
Disagree                     6           3.2       3.2             100.0
Source: Survey data, 2021
4.3.1 The expected results are clearly defined from the program design and result chain
The respondents were asked their level of agreement on whether the expected results (outcomes and outputs) have been clearly defined from the program design and result chain. Based on the findings, 97 (51.6%) of the respondents agreed that the expected outcomes and outputs have been clearly defined from the program design, 50 (26.6%) indicated that they strongly agree, 29 (15.4%) were neutral, and 6 (3.2%) indicated that the outcomes and outputs have not been clearly defined. A further 6 (3.2%) of the respondents strongly disagreed, holding that the outcomes and outputs are not clearly defined. This implies that the expected results (outcomes and outputs) have been adequately defined, which is a good practice.
Table 4.7 The indicators are SMART and have indicative targets with baselines
4.3.2 The indicators are SMART and have indicative targets with baselines
Respondents were asked whether the indicators are Specific, Measurable, Attainable, Relevant and Time-bound (SMART) and have indicative targets with baselines. The findings showed that 90 (47.9%) of respondents were neutral on whether the indicators are SMART and have indicative targets with baselines, followed by 60 (31.9%) who agreed, 25 (13.3%) who strongly agreed, 7 (3.7%) who strongly disagreed and 6 (3.2%) who disagreed. The majority of the respondents replied neutral; even though they were followed by those who agreed on the specificity, measurability, attainability, timeliness and relevance of the indicators, the result shows that this is not at an adequate or sufficient level, which strongly affects the M&E practice and leads to poor performance.
The respondents were asked whether M&E information is provided to program managers/officers to assist in decision making and planning, that is, whether they make use of the M&E system for their decision making. Based on the findings, 74 (39.9%) of respondents strongly agreed that they use the information and findings of the M&E system for decision making, 21 (11.2%) were neutral, 11 (5.9%) strongly disagreed, and 5 (2.7%) disagreed, with the remainder agreeing. From this analysis it is possible to deduce that the information and findings of M&E are used for decision making, which is encouraging.
This shows that the information from the M&E system is widely consumed, although it does not actually show the level of satisfaction in its consumption, nor does it identify the authorities responsible for the performance of the project and M&E activities.
Table 4.9 The risks, assumptions and effects in carrying out M&E activities have been defined
Response                     Frequency   Percent   Valid Percent   Cumulative Percent
Strongly agree               41          21.8      21.8            21.8
Strongly disagree            14          7.4       7.4             29.3
Neither agree nor disagree   69          36.7      36.7            66.0
Agree                        30          16.0      16.0            81.9
Disagree                     34          18.1      18.1            100.0
Source: Survey data, 2021
4.3.4 The risks and assumptions in M&E activities have been defined
The respondents were asked whether the risks and assumptions in carrying out the planned M&E activities have been defined, together with how they might affect the planned M&E events and the quality of the data. The findings showed that 69 (36.7%) of respondents neither agreed nor disagreed that the risks and assumptions in carrying out the planned M&E activities have been defined, 41 (21.8%) strongly agreed, followed by 34 (18.1%) who disagreed, 30 (16%) who agreed and 14 (7.4%) who strongly disagreed. This means that M&E risk assessment is inadequate, which highly affects the planned M&E events and activities and the quality of the data. It also indicates that risk assumption and planning in local NGOs is poor or unsatisfactory.
Table 4.10 There is an M&E strategic and work plan for the project
Response                     Frequency   Percent   Valid Percent
Strongly agree               70          37.2      37.2
Strongly disagree            9           4.8       4.8
Neither agree nor disagree   12          6.4       6.4
Agree                        77          41.0      41.0
Disagree                     20          10.6      10.6
Source: Survey data, 2021
4.3.5 There is an M&E strategic and work plan for the project
Respondents were asked whether there is an M&E strategic and work plan for the project. The findings show that 77 (41%) of respondents agreed that there is an M&E strategic and work plan for the project, followed by 70 (37.2%) who strongly agreed, while 20 (10.6%) disagreed, 12 (6.4%) were neutral and 9 (4.8%) strongly disagreed. This confirms that there is an M&E strategic and work plan for the projects, which supports the efficient use of work plans in local NGOs.
Table 4.11 Organization uses ICT-enabled tools to collect, manage and analyze data
Response                     Frequency   Percent   Valid Percent   Cumulative Percent
Strongly agree               24          12.8      12.8            12.8
Strongly disagree            44          22.9      22.9            35.6
Neither agree nor disagree   18          9.6       9.6             45.2
Agree                        42          22.0      22.9            68.1
Disagree                     60          31.9      31.9            100.0
Source: Survey data, 2021
4.13 There is a dominant use of donor procedures and guidelines by M&E system
Response                     Frequency   Percent   Valid Percent   Cumulative Percent
Strongly agree               66          35.1      35.1            35.1
Strongly disagree            4           2.1       2.1             37.2
Neither agree nor disagree   31          16.5      16.5            53.7
Agree                        86          45.7      45.7            99.5
Disagree                     1           0.5       0.5             100.0
Source: Survey data, 2021
4.4.3 The M&E team are trained and skilled
Based on the findings, 57 (30.3%) of the respondents were neutral on whether the M&E team are trained in skills like leadership, financial management, facilitation, supervision, advocacy and communication, followed by 53 (28.2%) who strongly disagreed. The findings also showed that 28 (14.9%) disagreed while 26 (13.8%) agreed that the M&E staff were capacitated, and a small proportion of the respondents, 24 (12.8%), strongly agreed about the skillfulness of the people involved in M&E. The findings therefore imply that the majority of staff working in local NGOs lack skills and the opportunity to build up their capacity through training, which leads to poor M&E performance and project outcomes.
Table 4.17 The M&E unit has well-defined roles and responsibilities
Response                     Frequency   Percent   Valid Percent   Cumulative Percent
Strongly agree               16          8.5       8.5             8.5
Strongly disagree            43          22.9      22.9            31.4
Neither agree nor disagree   52          27.7      27.7            59.0
Agree                        50          26.6      26.6            85.6
Disagree                     27          14.4      14.4            100.0
Source: Survey data, 2021
Table 4.18 The M&E team regularly assess and evaluate compatibility of the M&E tools
Response                     Frequency   Percent   Valid Percent   Cumulative Percent
Strongly agree               16          8.5       8.5             8.5
Strongly disagree            65          34.6      34.6            43.1
Neither agree nor disagree   38          20.2      20.2            63.3
Agree                        35          18.6      18.6            81.9
Disagree                     34          18.1      18.1            100.0
Source: Survey data, 2021
4.4.6 Assessment and evaluation of the compatibility of the M&E tools and techniques
The study asked the respondents to indicate the extent to which they agree or disagree that the M&E team/experts regularly assess and evaluate the compatibility of the M&E tools and techniques in use. Of the responses, 65 (34.6%) strongly disagreed that the M&E team/experts regularly assess and evaluate the compatibility of the M&E tools and techniques in use, followed by 38 (20.2%) who neither agreed nor disagreed. On the other hand, 35 (18.6%) of respondents agreed that the M&E experts regularly assess and evaluate the compatibility of M&E tools, while an almost equal number, 34 (18.1%), disagreed and 16 (8.5%) strongly agreed. Therefore, based on the responses, the M&E teams do not regularly assess and evaluate the compatibility of the M&E tools and techniques in use, which indicates inadequate use of compatible M&E tools and techniques in local NGOs; this strongly affects M&E practice and makes it challenging to manage and implement projects and programs effectively.
Table 4.19 M&E unit document and record lessons learnt for future use
Response                     Frequency   Percent   Valid Percent   Cumulative Percent
Strongly agree               17          9.0       9.0             9.0
Strongly disagree            67          35.6      35.6            44.7
Neither agree nor disagree   7           3.7       3.7             48.4
Agree                        44          23.4      23.4            71.8
Disagree                     53          28.2      28.2            100.0
Source: Survey data, 2021
4.5 Role of management on M&E practices
Figure 4.2 Senior management recognizes and supports the role of M&E
Figure 4.3 There is insufficient stakeholders’ involvement on M&E process
Figure 4.4 The management ensure that staff are trained on M&E regularly
4.5.3 The management ensures that M&E staff are trained
As Figure 4.4 shows, the majority of respondents (31.91%) disagreed that the management ensures that staff are trained on M&E regularly, followed by 30.32% who strongly disagreed and 18.09% who agreed; the remaining 10.64% and 9.04% strongly agreed and were neutral, respectively. This implies that the management does not ensure that the M&E staff are trained.
Figure 4.5 Supportive supervision and guidance from leaders to M&E staff
Semi-structured interviews were conducted to provide further investigation of the M&E practice of the local NGOs. The number of participants was 18, drawn from 9 of the 12 selected NGOs, comprising heads of management, project officers and M&E experts.
“Does the organization allocate enough budget for monitoring and evaluation activities?”
Budget is an important input for carrying out every type of task, and M&E activities likewise require a sufficient budget. From this perspective, respondents were asked whether their organization allocated a sufficient budget for M&E activities. More than half of the respondents (11 of them) said that the budget allocated is not sufficient for M&E activities. Another 4 respondents did not know or had no information about the budget allocated for M&E activities, while the remaining 3 participants said that the budget allocated for M&E is enough.
Respondents were asked to list the M&E tools and techniques they know or have used so far in their M&E systems. The majority (14) of the respondents said that they know a number of M&E tools, and only 4 respondents replied that they were not sure whether they knew the correct tools. However, from the M&E tools they listed, about half of the respondents included items that are not actually M&E tools.
The tools used by most of the organizations were found to be the logical framework, theory of change, work plan, baseline and needs assessment, external consultants for M&E, site visits, and strategic planning frameworks.
“Does the organization have an M&E policy? If yes, were the staff introduced to the policy?”
Respondents were also asked whether their organization has an M&E manual and policy. According to their responses, 4 of the respondents did not know whether their organization has an M&E manual and had not been introduced to one, while another 8 respondents knew that their organization has an M&E manual. Those respondents who said that their organizations have an M&E manual were then asked whether they had been introduced to the manual and policy; 5 of them had been introduced to and are familiar with the manual and policy.
However, as the discussion above indicates, the majority of respondents do not have information about the M&E policy and manual, perhaps because the organizations did not give them an orientation on these two important documents.
The other important issue in the utilization of information produced by the M&E system is the accessibility of that information to relevant stakeholders, donors, employees, beneficiaries and other concerned bodies. Since M&E officers and other pertinent employees will improve their activities if they have access to the information produced by M&E, the availability and accessibility of reports plays a big role. With respect to accessibility of information, respondents were asked to describe when and how the organization disseminates M&E reports. From the total of 18 respondents, 13 concluded, expressing it in different ways, that only limited M&E information is disseminated: firstly, for the fulfillment of government or donor requests and requirements; secondly, for management bodies, project/program officers and stakeholders. However, the reliability of the information provided is in question, since most of the organizations' management and staff see M&E as a means of control and investigation, which leads to wrong conclusions, and most of them end up disseminating a doctored report. The remaining 5 respondents confirmed that regular, good-quality M&E reports were disseminated. This implies that M&E information is not available or accessible to all employees, and the organizations should work hard to improve the dissemination of M&E information to all staff. Since not all stakeholders received the M&E findings, the projects missed the full benefits of such practices.
Of the local NGOs under study, 8 focused on a mixture of process/implementation evaluation (focusing on inputs/outputs) and outcome assessment (i.e., the identification of change in individuals, in the larger community/environment and in the staff carrying out the program). On the other hand, no NGO conducted evaluation at the highest level, which is impact assessment. Regarding the level of M&E, it is clear that all twelve (12) NGOs conducted M&E at the input/output level because it is the basic level for any evaluation and all the NGOs have a responsibility towards their donors to be transparent by showing how the funds have been spent. In addition, donors are more interested in hearing about the numbers of beneficiaries served and services provided.
Most of the NGOs agreed that they would like to conduct impact assessment rather than only process and outcome assessment, but they lack the required resources, such as expertise and money. This means that the NGOs share an interest in conducting impact assessment; what they lack is the resources.
The challenges identified in practicing M&E include: poor practice of M&E report dissemination, insufficient budget allocation, lack of sufficient M&E experts in the organization, poor familiarity with or absence of an M&E policy and manual, lack of or limited training on M&E, low involvement of stakeholders, strict use of donor guidelines and procedures, poor risk identification and lack of a mitigation plan, failure to select compatible and user-friendly M&E tools and techniques, lack of supportive supervision and guidance from leaders, and poor management involvement in some of the M&E activities.
CHAPTER FIVE
SUMMARY, CONCLUSION AND RECOMMENDATION
5.1 Introduction
The aim of this study is to assess the monitoring and evaluation practices of local NGOs implementing youth and youth-related projects in Addis Ababa, Ethiopia. The study reviewed various sources of information written and presented by different scholars about monitoring and evaluation. The review of related literature drew on previous research findings, annual and quarterly reports, manuals, and other journal and internet sources. All these sources provided the necessary background to the study and revealed the research gap to the researcher. The study involved 188 respondents, and sampling techniques and methods of collecting both primary and secondary data were used. Data analysis was done with SPSS 22, whereby tables and narratives were drawn from the questionnaire and from the semi-structured interview assessments conducted with project personnel by thematic area.
This chapter presents the summary of the findings presented in chapter four according to the
study objectives. This chapter also presents the conclusions and the recommendations of the
study.
The objectives of the study were to find out: the practice of using M&E tools in projects; the role of management in project M&E practice; the technical competence of M&E officers/experts in project M&E practice; and the challenges that local NGOs commonly face in M&E practice.
Likewise, Table 4.13 shows that 75% of the respondents agree or strongly agree that needs assessment and baseline surveys are sufficiently conducted. Similarly, Table 4.14, on whether the M&E team are trained in skills like leadership, financial management, facilitation, supervision, advocacy and communication, shows that 39.9% of the respondents were neutral and 36.7% of the respondents disagreed or strongly disagreed, indicating poor training provision; overall, 43.1% of respondents disagreed or strongly disagreed that the M&E team are well trained and skilled. Table 4.15 shows that there is a use of external consultants in M&E, since 83% of the respondents agree or strongly agree.
Also, Table 4.16 indicates that the M&E units/experts do not have well-defined roles and responsibilities, as 37.3% of the respondents disagreed or strongly disagreed. Table 4.17 shows that there is poor assessment and evaluation of the compatibility of the M&E tools and techniques in use, since 52.7% of participants disagreed or strongly disagreed. Table 4.18 shows that there is poor recording of documents and lessons learned for future use in other implemented programs, as 63.8% of respondents disagreed or strongly disagreed.
5.5 Role of management on M&E Practices
The findings from Figure 4.1 indicate that there is a lack of recognition and support for the role of M&E from senior management, as 73.4% of respondents disagree or strongly disagree. Figure 4.2 indicates that there is insufficient stakeholders' involvement in the M&E system, as 44.68% of respondents agree or strongly agree. Similarly, Figure 4.3 shows that 62.23% of respondents disagree or strongly disagree that the management ensures that staff are trained on M&E regularly. Also, Figure 4.4 shows that 72.87% of respondents concluded that there is a lack of supportive guidance and supervision from leaders.
5.6 Conclusion
The intent of this research was to examine the monitoring and evaluation experiences and practices of local NGOs implementing youth and youth-related development projects in Addis Ababa, Ethiopia. Based on the findings discussed in the previous chapter, the following conclusions are made in line with the objectives and research questions.
The current M&E practices applied in the local NGOs are needs assessment, logical frameworks, work plans, field visits, and outcome and process evaluation; external consultants are also engaged, M&E provides information for management, and risks and assumptions are partially identified.
Firstly, even though the NGOs have M&E plans and are aided by logframes, their M&E practices are loose compared with best practices. According to the experience drawn from the USAID Turkey M&E plan, best practice not only includes linking M&E to strategic plans and work plans, but also focuses on efficiency and cost effectiveness, employs a participatory approach to monitoring progress, utilizes both international and local expertise, disseminates results widely, uses data from multiple sources, and facilitates the use of data for program improvement (Mathis et al., 2001). As per the findings, the M&E practice of these organizations does not meet accepted best practice; this is mainly due to insufficient budget allocation for M&E, lack of ICT tools to collect, manage and analyze M&E data, M&E experts who lack skills in the area and do not get the necessary training to boost their capacity, undefined roles and responsibilities that weaken accountability, poor practice in recording lessons learned for future use, lack of recognition and support from management, insufficient stakeholders' involvement, absence of or lack of familiarity with the M&E policy and manual, the fact that no impact evaluation has been done so far, and strict use of donor guidelines and procedures.
Secondly, the findings showed that the first major driver for adopting M&E is to establish accountability and transparency towards donors; to a much lesser extent, NGOs conduct M&E for their communities. In addition to being accountable towards donors, they use M&E to generate knowledge and enhance organizational learning through the documentation of M&E findings and lessons learnt. It is concluded that there is a clear gap in downward accountability, as only one NGO systematically involves beneficiaries.
5.7 Recommendation
Generally, the study makes the following recommendations.
Organizations should allocate a sufficient budget and recruit enough M&E experts in order to see the benefits of M&E practices; the costs incurred for M&E are an asset for the organization. Stakeholders, and specifically the beneficiaries, should be involved adequately in M&E activities; their participation should cover both lower- and higher-level activities and range from initial planning through to opinion-giving and decision making, from the first stage to the last. This will ensure ownership of M&E findings and ensure that projects are relevant to the beneficiaries' needs. Organization leaders should take an active part in designing the M&E system, offer timely support and guidance to project staff, and ensure that M&E activities are well executed and that results and findings are communicated and used in decision making and planning. Based on the findings of this study, lessons learned from the development projects implemented were not documented adequately; an effective lessons-learned process should prevent a project from repeating mistakes and help it repeat its successes. Therefore, projects should document lessons learned for the continuous improvement of project implementation in the future. Organizations should also give staff an induction and introduce them to the organization's policies and manuals. Finally, as this study is not conclusive regarding the effectiveness of development project monitoring and evaluation, further related research covering a wider scope, more areas and a larger sample size, and taking more time, appears worthwhile.
REFERENCES
AABoFED. (2011). NGOs Partnership Joint Forum Report. Addis Finance, 36-38: Mega
Printing P. L.C
AABoFED. (2013). Addis Ababa City Government Annual physical and Financial Report.
AABOFED. (2014). Addis Ababa city Government Growth and Transformation Plan (Revised):
AACGCB. (2014). Addis Ababa City 2005 EFY year book. Addis Ababa: Brana Printing Press.
AAHDP. (2013). Addis Ababa City Government Housing sector development annual Report for
AAULGDP. (2014). Addis Ababa City Government Capital Investment Plan. Addis Ababa:
Abiy Z., Alemayehu, W.,Daniel, T., Melese, G., Yilma, S. (2009). Introduction to research
Methods: preparatory Modules for Addis Ababa University Graduate Programs. AAU:
Unpublished.
Acevedo, G. L., Rivera, K., Lima., L, & Hwang., H. (Eds.). (2010). Challenges in monitoring
and evaluation: An opportunity to institutionalize M &E systems. Fifth conference of the Latin
America and the Caribbean Monitoring and Evaluation Network. Washington DC, World Bank.
Action aid Ethiopia, (2009). Human Resource policy manual. Addis Ababa: unpublished.
Andrew, P., Rebecca, H., Georgina, M. (2009). Monitoring and evaluation of UN-assisted
Aune B. (2000). Logical framework approach and PRA – mutually exclusive or complementary
CCRDA. (2009). Good practices of NGO’s Urban Development Interventions. Addis Ababa:
CCRDA. (2011). 2011 Annual CSOs' Contribution to development. Addis Ababa: Master
printing press.
http://www.csa.gov.et/reports/recent.htm.
Charities and Societies agency. (2011). Charities and Societies Proclamation No.621/2009.
Addis
Charities and Societies agency. (2012). Annual Report for 2004EFY.Retrived from
http://www.chasa.gov.et/guideline/recent.htm.
Charities and Societies agency. (2014). Annual Report for 2004EFY.Retrived from
http://www.chasa.gov.et/guideline/recent.htm.
Debebe,H.(2012). Contributions of Charities and Societies for the achievement of the MDGs&
Expectations and Practices: The Case of the Ministry of Mining and Geological Survey
of Ethiopia: MA thesis.
ESAPII. (2013). Ethiopian Protection of Basic services social accountability Program II, SAIP
Ethiopia Country Program Evaluation (ECPE). (2020). Synthesis Report. Retrieved from http://
oecd.org/countries/Ethiopia/45875541.pdf.
Ethiopian Management Institute. (2014). Program Monitoring and evaluation under Pfor R
Fekadu Sisay. (2011). The role of nongovernmental organizations in promoting the development
GAITANO, S. 2011. The Design of M&E Systems: A Case of East Africa Dairy Development
George, D., & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and
Gorgens, M. and Kusek, J. Z. (2009). Making Monitoring and Evaluation Systems Work. World Bank.
Guijt, I., Randwijk and Woodhill, J. (2002). A Guide for project M&E: Managing for Impact in
Gyorkos T. (2003). Monitoring and Evaluation of large scale Helminth control programmes.
Acta Tropic.
http://fhi360.org/.../Monitoring%20HIV- AIDS%20programs%20/facilitator/%20
%20module%206.pdf.
International Federation of Red Cross and Red Crescent Societies (IFRCS). (2011).
Jack,E.(2014). Response rate and Responsiveness for surveys: Standards and the
Jerry, A. & Anne, G. (2008). Participatoty Monitoring and Evaluation in Practice. INTRAC-
Kelly K and Magongo B. (2004). Report on assessment of monitoring and evaluation capacity of
Luis,C. Lawrence M, Kaith M.(2007). Research Methods in Education (6th ed.). New York:
McCoy L, Ngari P and Krumpe E. (2005). Building Monitoring, Evaluations and Reporting
Organizations in Addis Ababa: The Assessment of Gaps between Expectations and Experience.
Michael, A. (2007). A hand Book of Employee Reward Management and practice,2nd edition.
London: Bell&Bain,Glasgow.
MLYAM. (2011). Meh lewetatoch Yebegoadragot Mahiber Strategic Plan Manual. Addis
Ababa: unpublished.
MoFED. (2012). Assessing Progress towards the Millennium Development Goals: Ethiopia
MDGs
MoFED. (2012). Ethiopian Progress toward Eradicating Poverity: Aninterim Report on poverty
Nabris, K. (2012). Monitoring and Evaluation. Palestinian Academic Society for the Study of
NGOs report (2018), Monitoring and Evaluating Behavior Change Communication Program.
Retrieved from;
Nigel, S. and Rachel, S. (2010). Monitoring and Evaluation Capacity Building: Is it Really that
Owur, O., Chepkuto, P., Tubey, R. and Kuto, L., (2011). Effectiveness of Monitoring and
PATH. (2013). Guide to Monitoring and Evaluation of Advocacy, Communication, and Social
Rachel, H., Thoms, L., Angela, C., Tlina, K., Joan, O., Brain, P. (2013). Legal frameworks and
Unpublished.
Samuel T., Biraj, S., Merga, A., Gadissa, B. (2010). Evaluation and design of Social
Unpublished.
Samuel, J., Mantel, Jr., Jack, M., Margaret M. (2001). Core Concepts of Project Management.
Sephanie,G. Cronbach’s Alpha: Simple definition, use and interpretation. Retrieved from
https://www.statisticshowto.com/probability-and-statistics/statistics-definitions/cronbachs-alpha-
TECS. (2013, April). Guideline to determine operational and administrative (&0/30): Early
evidence of impact. Tracking Trends in Ethiopia’s Civil society, Policy brief 5, pp. 2-5.
Tegbar Worku. (2018). Assessing the Practice of Project Monitoring and Evaluation: The Case of
The World Conservation. (2000). Planning, Monitoring and Evaluation of Programms and
UNDAF. (2011). Implementation Manual for UN agencies assisted program in Ethiopia. Addis
Ababa: Unpublished.
UNDP. (2009). Hand book on planning, Monitoring and Evaluation for development results.
UNECA. (2007). Documenting Successful NGOs Experience, A case of Ethiopia. Addis Ababa:
Unpublished.
UNEP (2019). Documenting Successful NGOs Experience, A case of Ethiopia. Addis Ababa:
Unpublished.
Evaluation Network.
USAID. (2012). Ethiopia: United Nations Development Assistance Framework (2012-2015).
USAID. (2020). Ethiopia: United Nations Development Assistance Framework (2018-2019).
World Bank. (2011). Monitoring & Evaluation: Some Tools, Methods and Approaches. The World Bank.
World Bank. (2004). Ethiopia General Education Quality Improvement Program: Project
Appendix I
ST. MARY’S UNIVERSITY
SCHOOL OF GRADUATE STUDIES
SCHOOL OF BUSINESS
Questionnaire
Code_____________
Section1: Introduction
Section2: Demographic characteristics
Put an “X” mark in the appropriate space or circle the choice you select whenever
necessary
1. Sex:  Male  /  Female
2. Age:  Below 20 years  /  20-30 years  /  30-40 years  /  40-50 years  /  Above 50 years
4. Your work experience for the organization (in years):  Less than 1  /  1-2  /  2-3  /  3-4  /  4-5  /  More than 5
SA = Strongly Agree, SD = Strongly Disagree, NN = Neither Agree nor Disagree, AG = Agree, DA = Disagree
Put an “X” mark in the appropriate space or circle the choice you select whenever
necessary
SN Statement SA SD NN AG DA
1 The expected results (outcome and output) have been clearly defined from the program design and result chain.
2 The logical framework (logframe) indicators are specific, measurable, attainable, relevant and time-bound, and have indicative targets with baselines.
3 M&E information provided to program managers/officers to
assist in decision-making and planning.
4 The risks and assumptions in carrying out the planned M&E
activities have been defined and how they might affect the
planned M&E events and the quality of the data.
5 There is an M&E strategic and work plan for the project
6 Organization use information and communication technology
enabled tools to collect, manage and analyze data for M&E
purpose
7 There are ongoing supervision of record keeping and reporting
8 There is a dominant use of donor procedures and guidelines on
M&E system
Section 4: Technical expertise on a particular M&E tools and techniques
SN Statement SA SD NN AG DA
1 Carryout needs assessment & baseline survey for all guaranteed
projects
2 M&E team are trained in skills like leadership, financial
management, facilitation, supervision, advocacy and
communication.
3 The people involved in the M&E system are highly skilled.
4 There is a dominant use of external consultants in monitoring and
evaluation
5 The M&E unit/staff has a well-defined roles and responsibilities
6 The M&E team regularly assess and evaluate compatibility of the
M&E tools and techniques in use.
7 M&E staff/expert document and record lessons learnt for future
use in other implemented programs.
Section 5: Role of management on M&E Practices
1 Senior management recognizes and supports the role of M&E
2 There is insufficient stakeholders’ involvement on M&E system
3 The management ensure that M&E staff are trained regularly
4 There is supportive supervision and guidance from leaders
Appendix II
Interview Questions
1. Does the organization allocate enough budget for monitoring and evaluation activities?
Yes [ ] No [ ]
2. What M&E tools and techniques do you know or have you used so far in your M&E system?
3. Does the organization have an M&E manual? If yes, were the staff introduced to the manual?
4. Does the organization have an M&E policy? If yes, were the staff introduced to the policy?
5. When and how do the organization disseminate M&E reports?
6. Which level of evaluation have you used so far in your organization?
• Process evaluation (Input/Resource and output)
• Outcome evaluation (Outcomes/Results)
• Impact evaluation (Measurable and tangible)
7. What are the challenges in practicing M&E?
8. What recommendation/suggestion would you give to improve M&E practice of Local
NGOs?