
KWAME NKRUMAH UNIVERSITY OF SCIENCE AND TECHNOLOGY, KUMASI,

GHANA.

Assessment of Project Monitoring and Evaluation Practices on Construction Projects in

Ghana.

By

Martin Asiaw Budu (BSc. Mathematics and Statistics)

A Thesis Submitted to the Department of Construction Technology and Management,

College of Art and Built Environment in Partial Fulfilment of the Requirements for the

Degree of

MASTER OF SCIENCE

NOVEMBER, 2018.
DECLARATION

I hereby declare that this thesis submission is my own work towards the MSc. Project
Management and that, to the best of my knowledge and belief, it contains no material
previously published or written by another person nor material which to a substantial extent
has been accepted for the award of any other degree or diploma at Kwame Nkrumah
University of Science and Technology, Kumasi or any other educational institution, except
where due acknowledgment is made in the thesis.

MARTIN ASIAW BUDU (PG 1148617)


Student Name & ID

………………….…………………

Signature

………..……………………………

Date

Certified by:
DR. (MRS.) THERESA Y. BAAH-ENNUMH
Supervisor’s name

……………………………………..

Signature

…………………………………….

Date
Certified by:
PROF. BENARD K. BAIDEN
Head of Department

……………………………………….

Signature

…………………………………………..

Date

ABSTRACT

The aim of project delivery is to ensure that the primary objectives are accomplished, and this is strongly influenced by project monitoring and evaluation, which play an important role in achieving project success. However, their execution in the local construction industry, that is, in Ghana, has faced a great deal of hindrance, difficulty and objection, which has contributed to the poor performance of construction firms across the country. The purpose of this study is to identify and evaluate the practices used, the barriers faced by projects, and the drivers that push for the implementation of monitoring and evaluation practices on construction projects in the construction industry. The study used a quantitative research methodology and a field survey design, together with a literature review. A structured questionnaire was developed and administered to the major stakeholders in the Ghanaian construction industry in Accra using purposive sampling. The quantitative data was analysed using the one-sample t-test and mean score ranking in SPSS. The results indicated that the practices most used in the construction industry are the participatory monitoring approach, which is important for guiding decision making, and project mapping, which communicates what task needs to be done, which resource will be allocated to complete it, and in what timeframe. The results also indicated that the barriers to monitoring and evaluation are: the dominant use of donor procedures and guidelines in monitoring; failure to incorporate lessons learned, a cost-effective project management tool for carrying insights gained during one project into future projects; PM&E targets that are not in line with the requirements and expectations of the proposed beneficiaries; sustainability often not being considered; and the lack of a thorough national database and PM&E framework. The results further indicated that the project budget, which is the total amount of money allocated for the project, the extent of participation and capacity for M&E, the project scope and size, and the project duration are the main drivers for monitoring and evaluation. The study contributes to the body of knowledge on the challenges to effective monitoring and evaluation of construction projects. The drivers identified are important in driving the application of project monitoring and evaluation and should be taken note of. It is therefore recommended that a best-practice framework be developed for the implementation of project monitoring and evaluation practices.
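The statistical treatment mentioned above (one-sample t-test and mean score ranking) can be illustrated with a brief sketch. The fragment below is an illustrative reconstruction in Python rather than the SPSS procedure actually used in the study; the practice names, ratings and the five-point Likert scale with a 3.5 test value are assumptions made purely for demonstration.

from scipy import stats
import numpy as np

# Hypothetical Likert-scale responses (1-5) for three practices; the real
# study used SPSS and its own questionnaire data.
ratings = {
    "Participatory monitoring approach": [5, 4, 5, 4, 4, 5, 3, 4],
    "Project mapping": [4, 4, 5, 4, 3, 5, 4, 4],
    "Stochastic monitoring technique": [3, 2, 4, 3, 3, 2, 3, 4],
}

results = []
for practice, scores in ratings.items():
    mean_score = np.mean(scores)
    t_stat, p_value = stats.ttest_1samp(scores, popmean=3.5)  # one-sample t-test
    results.append((practice, mean_score, t_stat, p_value))

# Mean score ranking: practices sorted by mean rating, highest first.
for practice, mean_score, t_stat, p_value in sorted(results, key=lambda r: r[1], reverse=True):
    print(f"{practice}: mean={mean_score:.2f}, t={t_stat:.2f}, p={p_value:.3f}")

Under this convention, a practice whose mean exceeds the test value with a small p-value would be reported as significant and ranked by its mean score.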

TABLE OF CONTENTS

DECLARATION......................................................................................................................ii

ABSTRACT ............................................................................................................................ iii

TABLE OF CONTENTS ......................................................................................... iv

LIST OF TABLES ............................................................................................................... viii

LIST OF ACRONYMS .......................................................................................................... ix

DEDICATION......................................................................................................................... ix

ACKNOWLEDGMENT ......................................................................................................... x

CHAPTER ONE ...................................................................................................................... 1

GENERAL INTRODUCTION ............................................................................................... 1

1.1 BACKGROUND OF THE STUDY .................................................................................... 1

1.2 STATEMENT OF THE PROBLEM ................................................................................... 3

1.3 RESEARCH QUESTIONS ................................................................................................. 4

1.4 RESEARCH AIM AND OBJECTIVES .............................................................................. 4

1.4.1 Aim ................................................................................................................................... 4

1.4.2 Objectives ......................................................................................................................... 5

1.5 SCOPE OF THE STUDY .................................................................................................... 5

1.6 JUSTIFICATION OF THE RESEARCH ............................................................................ 5

1.7 STRUCTURE OF THE STUDY ......................................................................................... 6

CHAPTER TWO ..................................................................................................................... 7

LITERATURE REVIEW ....................................................................................................... 7

2.1 INTRODUCTION ............................................................................................................... 7

2.2 MONITORING AND EVALUATION DEVELOPMENT................................................. 7

2.3 VARIOUS KINDS OF MONITORING AND EVALUATION ......................................... 8

2.4 MONITORING AND EVALUATING ACTIVITIES ........................................................ 9

2.4.1 Planning Performance of Monitoring and Evaluation .................................................... 11

2.4.2 Monitoring and Evaluation Training and Project Performance ...................................... 12

2.4.3 Baseline Surveys and Project Performance .................................................................... 15

2.4.4 Information Systems and Project Performance............................................................... 18

2.4.5 Monitoring Planning ....................................................................................................... 20

2.4.6 Monitoring Tools ............................................................................................................ 21

2.4.7 Monitoring Techniques/Strategies .................................................................................. 21

2.4.8 Adoption of Monitoring Practices .................................................................................. 21

2.5 DRIVERS OF PROJECT MONITORING AND EVALUATION ................................... 22

2.6 BARRIERS TO THE EXECUTION OF PROJECT MONITORING AND

EVALUATION........................................................................................................................ 23

CHAPTER THREE ............................................................................................................... 28

RESEARCH METHODOLOGY ......................................................................................... 28

3.1 INTRODUCTION ............................................................................................................. 28

3.2 RESEARCH APPROACH ................................................................................................ 28

3.3 RESEARCH DESIGN ....................................................................................................... 29

3.4 RESEARCH STRATEGY ................................................................................................. 29

3.4.1 Quantitative Strategy ...................................................................................................... 30

3.4.2 Qualitative Strategy ........................................................................................................ 30

3.4.3 Mixed Strategy ................................................................................................................ 30

3.5 DATA COLLECTION AND SOURCE OF INFORMATION ......................................... 30

3.5.1 Primary Data Source ....................................................................................................... 31

3.5.2 Secondary Sources of Information.................................................................................. 31

3.6 RESEARCH POPULATION AND SAMPLING TECHNIQUE...................................... 32

3.6.1 Research Population........................................................................................................ 32

3.6.2 Sampling Procedure and Sample Size ............................................................................ 32

3.7 DATA COLLECTION INSTRUMENT............................................................................ 33

3.7.1 Questionnaire .................................................................................................................. 33

3.7.2 Content of Questionnaire ................................................................................................ 33

3.7.3 Questionnaire Administration ......................................................................................... 34

3.7.4 Pilot Testing .................................................................................................................... 34

3.8 ANALYSIS OF THE DATA ............................................................................................. 34

3.8.1 Mean score ...................................................................................................................... 35

CHAPTER FOUR .................................................................................................................. 36

DATA ANALYSIS AND DISCUSSION .............................................................................. 36

4.1 INTRODUCTION ............................................................................................................. 36

4.2 RESPONDENT CHARACTERISTICS ............................................................................ 36

4.4 BARRIERS IN PRACTICING PROJECT MONITORING AND EVALUATION ........ 42

4.5 DRIVERS FOR THE IMPLEMENTATION OF MONITORING AND EVALUATION

PRACTICES ............................................................................................................................ 44

CHAPTER FIVE ................................................................................................................... 47

SUMMARY OF FINDINGS, CONCLUSION AND RECOMMENDATION................. 47

5.1 INTRODUCTION ............................................................................................................. 47

5.2 SUMMARY OF THE RESEARCH OBJECTIVES.......................................................... 47

5.2.1 The first objective; To examine the project monitoring and evaluation practices on

construction projects in Ghana................................................................................................. 47

5.2.2 The second objective; To identify the barriers to monitoring and evaluation on

construction projects in Ghana................................................................................................. 48

5.2.3 The third objective; To identify the drivers that push for the implementation of

monitoring and evaluation practices on construction projects in Ghana. ................................ 48

5.3 FINDINGS ......................................................................................................................... 49

5.4 CONCLUSION .................................................................................................................. 50

5.5 RECOMMENDATION ..................................................................................................... 51

5.7 RECOMMENDATION FOR FUTURE RESEARCH ...................................................... 51

REFERENCES ....................................................................................................................... 52

APPENDIX ............................................................................................................................. 61

LIST OF TABLES

Table 4.1: Demographic Data of Respondent .......................................................................... 37

Table 4.2: Project Monitoring and Evaluation Practices ......................................................... 39

Table 4.3: Barriers in Practicing Project Monitoring and Evaluation ..................................... 43

Table 4.4: Drivers in the Implementation of M/E Practices .................................................... 45

LIST OF ACRONYMS

CBPP Construction Best Practice Program

GDP Gross Domestic Product

IBM Implementation-Based Monitoring and Evaluation

IFAD International Fund for Agricultural Development

KPI Key Performance Indicator

KPO Key Performance Outcome

M&E Monitoring and Evaluation

PM&E Project Monitoring and Evaluation

RBM Results-Based Monitoring and Evaluation

DEDICATION

This thesis is dedicated to my ever loving and supportive family and friends.

ACKNOWLEDGMENT

I acknowledge the Almighty God for His mercies and grace that have seen me come this far. My heartfelt gratitude also goes to my supervisor, Dr. (Mrs.) Theresa Y. Baah-Ennumh, for her support, patience and guidance, and to my friends and all who supported me in this study.

CHAPTER ONE

GENERAL INTRODUCTION

1.1 BACKGROUND OF THE STUDY

Ghana has seen a massive boost in its economic development, and one major industry that has contributed to this growth, and has therefore been hailed in recent times, is the construction industry (Osei, 2013). The industry affects the social and economic standing of the country by providing employment for citizens, especially those within the informal working class, amongst many other contributions such as the provision of infrastructure (Amoah et al., 2011; Tengan et al., 2014). It has also played a significant role in generating revenue that contributes to the country's Gross Domestic Product (GDP) (Agbodjah, 2009; Laryea and Mensah, 2010; Ofori, 1980) and has helped to bridge the gap in infrastructural development by providing educational, recreational, social, economic and health infrastructure and amenities that support economic growth (Ofori, 1980). Despite these benefits, numerous problems have been encountered and documented in the literature which have had a negative impact on the performance of the local Ghanaian construction industry (Ofori, 2012). This downturn, and the damaging effect of non-performance by construction professionals, has given rise to calls to strengthen the monitoring and evaluation of works before, during and after their execution (Williams, 2015).

This has led to a recent increase in the attention given by key players in the construction industry to the role of monitoring and evaluation (M&E) in attaining consistent project success. An M&E system comprises a variety of management processes, indicator structures, plans and standards that ensure the monitoring and analysis functions of a project are implemented effectively. In describing such a system, Hardlife and Zhou (2013) compared it to a 'management toolkit' which can be employed to monitor the progress of works. In recent years, it has become necessary to hand over completed projects in a state that meets the aims, specifications and standards set for them, and this has heightened awareness among construction industry experts of the importance of the M&E system. Its relevance cannot be overlooked: it helps in tracking the various stages of the project, ensures appropriate and effective use of the available materials, labour and other resources, and strengthens the ties between the project members and the members of the M&E team (Hardlife and Zhou, 2013). During the monitoring and evaluation of construction projects, laid-down structures ensure that the various stages and procedures are inspected and cross-checked frequently so as to achieve the expected results and effects relative to the applicable standards. The system is also designed to make sure the due processes (i.e., monitoring and evaluation) are carried out comprehensively and in accordance with the schedule of works, exposing the loopholes and bottlenecks that may occur during the implementation stage (Kerzner, 2017).

M&E provides guiding principles and facilitates data gathering: the framework serves as a guide for collecting data, scrutinising it and reporting it against the standards and specifications already established (Omonyo, 2015). It also addresses the overarching accountability challenge in most project implementation. Monitoring and evaluation systems are necessary to supply data to measure and guide the project plan, certify processes, meet internal and external reporting requirements, and inform future programming (Chaplowe, 2008). Monitoring and evaluation is not only important to projects but is part and parcel of project design (PMBOK, 2001). It has been used globally over the last several decades as a tool in project management, and is an integral part of the project cycle and of good management practice (Olive, 2002). For a project to achieve overall success, attention must be paid to monitoring and evaluation, as Olive (2002) notes; M&E generally makes a project more efficient at every stage. According to UNDP (2002), the overall purpose of monitoring and evaluation is the measurement and assessment of performance in order to manage more effectively the outcomes and outputs known as development results. It helps improve performance and achieve results.

Monitoring and evaluation has been identified as a major factor contributing to the success of projects (Prabhakar, 2008). In a similar work, Papke-Shields et al. (2010) established that, all other things being equal, there is a very high chance that projects will be successful if the individual construction stages are carefully and repeatedly monitored. Time, risk, project scope, cost, human resources, communication and quality are the important factors that, according to their study, can be effectively managed through monitoring and evaluation.

1.2 STATEMENT OF THE PROBLEM

One of the major factors that contributes to the achievement of organizational growth

and development is the success in the various projects undertaken by the

organisation. A large majority of project managers appreciate that monitoring and

evaluation of projects are important if the project objectives and success are to be

achieved. Project monitoring and evaluation exercise adds value to the overall

efficiency of project planning, management and implementation by offering

corrective action to the variances from the expected standard. “Project managers are

3
required to undertake more rigorous monitoring and evaluation of the projects and

develop frameworks and guidelines for measuring impact” (Kahilu, 2010). By so

doing they will achieve greater value creation for the organization through project

success (Kamau and Mohamed, 2015).

The construction industry is, however, faced with a number of challenges that retard its performance and its contribution to the national economy, even though it provides many jobs for the citizenry and feeds the nation's coffers through its contribution to the Gross Domestic Product (GDP). A major cause of these challenges is poor monitoring and evaluation strategies. Even though counsel and strategies have been sought from construction consultants (project supervisors), critical questions have still not been addressed, such as how these factors or indicators affect the success or failure of projects, and which of the indicators will allow works to be completed and handed over within the stated aims and objectives (Tengan et al., 2014). There is therefore the need to assess the project monitoring and evaluation processes of ongoing construction projects in Ghana.

1.3 RESEARCH QUESTIONS

1. What are the monitoring and evaluation approaches practiced on construction projects in Ghana?

2. What are the barriers to monitoring and evaluation on construction projects in Ghana?

3. What drives the implementation of monitoring and evaluation practices on construction projects in Ghana?

1.4 RESEARCH AIM AND OBJECTIVES

1.4.1 Aim

The general objective of the study is to assess Monitoring and Evaluation Practices

on Construction Projects in Ghana.

1.4.2 Objectives

The specific objectives of the research study are to:

1. identify the project monitoring and evaluation approaches or practices on construction projects in Ghana.

2. identify the barriers to monitoring and evaluation on construction projects in Ghana.

3. identify the drivers that push for the implementation of monitoring and evaluation practices on construction projects in Ghana.

1.5 SCOPE OF THE STUDY

For the purpose of this research, the investigation focused on building construction projects in the Greater Accra Region, where most construction projects are undertaken. Data collection was based on information provided by D1K1 contractors because of the kind of projects they undertake and the orderliness of their construction practice. Geographically, the study is limited to the Greater Accra Region of the country, where most of these construction companies are situated.

Contextually, the study examined how project monitoring and evaluation procedures apply to the Ghanaian construction industry in general and narrowed this down to building works for the data collection and analysis. Though the broader meaning of monitoring and evaluation is considered, the study focuses on the construction industry.

1.6 JUSTIFICATION OF THE RESEARCH

Monitoring and evaluation of projects can be of great importance to various players, including project sponsors, as it helps ensure that successful projects can be replicated elsewhere, as witnessed in various projects undertaken by the financial sector which revolve around a few core areas (Kamau and Mohamed, 2015; Marangu, 2012). The study is expected to help researchers and policy makers improve project M&E procedures. Overall, the study's recommendations may improve the effectiveness of M&E in projects and programmes and provide guidance on how to set up and implement a monitoring and evaluation system while avoiding the pitfalls that may lead to its failure. The study also identified areas related to the monitoring and evaluation field that might require more research, and hence provides a basis for further study.

1.7 STRUCTURE OF THE STUDY

The study is structured into five main chapters. Chapter One is the introduction and comprises the background of the study, the research problem, the research objectives and questions, the significance of the study, the scope and limitations, and the organisation of the thesis. Chapter Two is the literature review and comprises both theoretical reviews and empirical findings. Chapter Three is the research methodology and sets out the approaches and methods used to gather and analyse the data, including the research design, the research population and sample size, the sampling technique, the sources of data and the data collection instrument. Chapter Four presents the analysis and discussion of the data drawn from the research. Chapter Five presents the summary of findings, conclusions and recommendations.

CHAPTER TWO

LITERATURE REVIEW

2.1 INTRODUCTION

This chapter presents the literature on monitoring and evaluation and its impact on project management. The chapter is divided into several parts. The first section considers the history of M&E in project management and examines the various classifications of M&E. This is followed by a review of M&E activities, namely planning, training, baseline surveys and information systems, and their impact on project management. Building on this discussion of M&E activities, the theoretical framework of the study is then presented and compared with the conceptual structure. The gaps in knowledge addressed by this study are then outlined, and the chapter closes with a brief summary.

2.2 MONITORING AND EVALUATION DEVELOPMENT

Over time, major logical and conceptual advancements have been introduced to M&E. These reflect the changes in perspective that have occurred in the management of projects, with M&E practice in the mid-1950s being dominated by a strong emphasis on the judicious use of resources, reflecting the social scientific trend of the period (Rogers and Williams, 2006). The increased interest in the monitoring and evaluation of projects reflected a period during the late 1950s when people grew unhappy and weary of project management practice, which was thought to be a separate field driven from the top by executives (Cleland and Ireland, 2007).

It is also important to address the question frequently asked about whether M&E should be classed as a discipline, an approach or a field. It is the very particular way in which M&E has developed that has led Scriven (2010) to describe the M&E field as "trans-disciplinary", an idea often used lately to portray M&E, instead of describing it as a field or discipline. An equally important conceptual issue, alongside how to classify M&E, is "what is M&E?". The various writings reviewed demonstrate that there is no single, undisputed answer to what M&E is. This can be ascribed to the fact that there is still no agreement about its purpose (Kohli and Chitkara, 2008; Wysocki and McGary, 2003; Shapiro, 2001; Khan, 2001). The purpose in turn shapes the "what is?" question: the purpose ranges from advancing accountability, to transparency, to organisational learning, and depending on the specific purpose, the approach adopted would differ (Binnendijk, 1989). There would likewise be variations on these, depending on the particular context and subject matter. As a result, M&E can at times be a fluid concept. This diversity can be seen in the techniques used and in the subject matter considered, including the kinds of M&E (Jones, 2011) examined in Section 2.3.

2.3 VARIOUS KINDS OF MONITORING AND EVALUATION

Reviews of the classifications of M&E offered by various researchers indicate notable similarities. For the purposes of this study, two sorts of M&E are of interest, namely Implementation-Based Monitoring and Evaluation (IBM) and Results-Based Monitoring and Evaluation (RBM). RBM was created to give feedback on the actual results and objectives of undertakings, which it still does (Kusek and Rist, 2004). Similarly, Parks et al. (2012) add that RBM is ordinarily carried out together with key partners and involves systematic reporting on progress toward results. RBM therefore helps in knowing whether outcomes are being met, or indeed will be met, as the project advances (Naidoo, 2011).

On the other hand, Implementation-Based Monitoring and Evaluation (IBM) centres on inputs, project activities and outputs, promotes joint learning among partners at different levels, and catalyses commitment to taking remedial action where necessary (Kusek and Rist, 2004; Neubert, 2010). This again underscores the part M&E plays in project management. It may therefore be presumed that the present role of monitoring and evaluating projects rotates between RBM and IBM as far as the area of focus is concerned.

In addition, Nyonje et al. (2012), in the book "Monitoring and Evaluation of Projects and Programs", distinguish classifications from methods of M&E and establish that three kinds of evaluation exist: (i) formative evaluation, which reviews current project activities; (ii) summative evaluation, whose purpose is to assess a mature project's success in achieving its stated objectives; and (iii) ex-ante evaluation, or needs assessment, which is pre-project evaluation. Black (1993) adds that summative evaluation is a kind of project evaluation which gathers data about results and the linked procedures, systems and activities that have produced them.

2.4 MONITORING AND EVALUATING ACTIVITIES

Considering the discussion above about the types of M&E, it is crucial to recognise the different perspectives on what M&E implies and what it ought to accomplish. The most recognisable perspective within this range comes from those who consider M&E to support a purely accountability function. This would incorporate the proper management of budgets and personnel and legal and regulatory compliance with procedures and systems; deviation from any of the standards invites sanction (Naidoo, 2013). In this situation, M&E is viewed as supporting an administrative function, which Cook (2006) notes "envelops the whole administration, working frameworks and culture of a firm".

Aside from fulfilling the vital need for accountability, for the reasons specified previously, M&E is likewise intended to advance the "knowledge organisation" (PMI, 2006); this could occur at the stage of M&E itself, or come to fruition after results have been exhibited. The assumption was that firms would become more open and self-reflective if subjected to evaluative data; however this is not necessarily the case, as organisational learning is not simple, given the unpredictable cluster of conventions and management culture that has to be navigated (PMI, 2006). It has been demonstrated that while M&E should in principle prompt learning and reflection, this might not be the situation, because the manner in which firms integrate data may be complex, and not as straightforward as proposed in classic M&E (Preskill, 2004).

According to Kennerly and Neely (2003), using evaluation in firms is difficult and is affected by a few variables: contextual (political), technical (methodological) and bureaucratic (psychological). These elements combine, but what is clear is that unless "every one of the components are arranged, organisational learning is troublesome". Schwartz and Mayne (2005) assess this grouping in terms of how M&E adds to learning and reflection, and note that in this mode M&E is viewed as one tool that supports management by enhancing the quality of information available for leadership. While the greater part of the research has concentrated on NGOs, there is growing interest in seeing how M&E builds learning organisations in other kinds of organisation (Roper and Petitt, 2002; Hamer and Komenan, 2004). Organisational learning can be prompted by evaluation, not simply the accountability emphasis represented by Gray (2009). It is clear that the goal set for M&E is vital, as different goals could prompt different results, which is the focus of this study. It should be remembered that M&E has assumed different characters depending on the setting and, relying upon this, it may be used for accountability, for advancing a conduct or practice, or for learning, as discussed above (Bamberger, 2008).

It is vital to recognise that M&E is not a magic potion that can be poured into a situation to make problems vanish, to fix them, or to miraculously roll out improvements without a considerable amount of hard work being put in by the project or organisation. In themselves, M&E activities are not an answer, but they are significant instruments (Verma, 2005). There are different procedures associated with M&E activities which, when done effectively, can prompt change and the good delivery of projects in the future (Msila and Setlhako, 2013).

M&E can help to recognise issues and their roots and then recommend possible solutions (Shapiro, 2001). At the same time, the impact monitoring and evaluation has on project management is not fully documented, as there is insufficient data on it (Singh and Nyandemo, 2004). Which activities, then, are involved in monitoring and evaluation? According to UNDP (2009), conducting M&E involves various complementary activities, of which the most important is to formulate a plan for M&E, which governs the rest of the exercise. Shapiro (2001) urged that monitoring and evaluation ought to be part of the project planning procedure, where there is a need to start collecting data about project performance in relation to targets right from the beginning.

2.4.1 Planning Performance of Monitoring and Evaluation

Many researchers of project M&E contend that planning for monitoring and evaluation must be carried out alongside overall project planning (Kohli and Chitkara, 2008), whilst others argue that it ought to be done when the planning stage ends but prior to the design phase of a project (Nyonje et al., 2012). In spite of this divergence of views, researchers agree that the plan has to set out how a project evaluation should be carried out (Cleland and Ireland, 2007).

Studies also show that there are certain imperative considerations concerning an M&E design. Brignal and Model (2012) sort these considerations into: resources, i.e. how much money and time will be required to carry out the task; and capacity, i.e. whether the project has the internal ability to carry out the proposed M&E tasks, including analysis of the information gathered. Further considerations highlighted in the work of Armstrong and Baron (2012) are: feasibility, i.e. are the proposed tasks sensible, and can they be realised in practice; timeline, i.e. is the available time frame practical to work with in terms of directing the work activities; and ethics, i.e. are there any ethical considerations and complexities that should be taken into account, and is there any mechanism which can be put in place to cater for them?

2.4.2 Monitoring and Evaluation Training and Project Performance

Concerning M&E training, an M&E resource and capacity assessment should be done before project planning. This identifies initial capacity gaps in M&E and the resources required to direct M&E training. From that point, training could be undertaken informally, building on what staff learn through their day-to-day encounters, or it could be a formal procedure (Pfohl and Jacob, 2009). Knowing which path to select depends on the size of the plan as well as the ability to determine whether or not the plan can be put in place. In the case of much larger contracts involving a large workforce, it is imperative to ensure that the training is tailored to staff capacity gaps, because there are limited opportunities to engage with individual staff members. As training needs are identified, an M&E training and capacity-building arrangement has to be created so that the relevant themes are covered and individuals are well equipped (Alcock, 2009). However, it should be noted that not every worker needs training in every single subject that will be taught.

Essentially, some training will be carried out occasionally, amounting to initial training for administrators and workers on the M&E framework and in-service training over the life of the project, with the specific aim of improving practice and preparation (Gray, 2009). This approach has been shown to have a positive impact on project performance. The perspectives gained in M&E training are essential in guiding the work throughout the data collection process. At a minimum, the M&E framework will set out the main implementation indicators for the project, the data collection strategies, including instruments, and the approach to data analysis (UNDP, 2006). Such training materially changes how the implementation team carries out M&E; data gathering, which improves understanding of how a project is progressing at every point in time, can thus be positively affected.

According to UPWARD (2011), the subjects covered in M&E training are of real interest to implementers and other information users. Those responsible will seek to answer questions such as: who are we gathering information for, how do we believe they will use the data, and why have we chosen to gather the information in the ways we have. This is critical, particularly for the persons whose job it is to gather and disseminate information for the M&E framework, so that they understand the reasoning behind the framework and their part in it (UPWARD, 2011). Here again is an additional indication of the ways M&E could affect the progress of a project, which has been the motivation behind this research.

As suggested earlier, monitoring and evaluation training must also include a survey of the important implementation indicators that are to be collected: the meaning of every indicator, how to estimate it, how information on it will be collected, the purpose of collecting and displaying it, and the ways in which it satisfies the expectations of clients (Alcock, 2009). Principally, this kind of information gives implementers a clear view of how M&E improves and adds to project performance.

A considerable amount of research on M&E training also finds that data collection methods and tools are a critical element (Wysocki and McGary, 2003; Preskill, 2004; Acharya et al., 2006). Matters canvassed in the literature include the purpose of each technique and tool and the justification for adding it to the M&E framework (Kusek and Rist, 2004), the ways each technique or tool fulfils the expectations of the clients, its claim to validity, and the concerns that bear on its execution (Ward and Pene, 2009).

Woodhill et al. (2012) showed that M&E training must include themes on roles and obligations. By the end of the training, the administrative body and staff need to have a solid understanding of: (1) the roles they play in ensuring effective use of the M&E framework; and (2) how their roles link with the roles played by other individuals and staff.

On the sequencing of M&E training, researchers have observed that it is typically adapted to the requirements of the work and to its complexity, and can therefore change from one project to another (Reviere et al., 1996). The most important piece of the training is the development of M&E tools using the project logframe matrix, which is critically examined and responded to so that new users can be brought on board (Narayan-Parker and Nagel, 2009). Developing M&E tools in this participatory way improves assimilation of the project determinants and their significance for project execution and use (Marsden, David and Oakley, 2001). This assimilation is important because it improves the chances of gathering M&E information in a way that allows mistakes to be detected and remedied in good time (PAMFORK, 2007), eventually promoting improvement in project management.

From the above, it can be concluded that training in M&E is crucial. Assigning unqualified employees to collect data on results and effects can compromise the validity of the information, rendering it consistently unusable. The best way to begin is to start the training on the segments of the framework, putting effort into evaluating the pieces and the capacities that are supposed to be built within the team.

2.4.3 Baseline Surveys and Project Performance

When M&E planning is executed accurately and adequate information about the situation has been collected before starting an intervention, this is what is termed having baseline data (Hogger et al., 2011). A baseline survey, fundamentally, is a study undertaken at the start of a project to establish the status quo before work is carried out (Estrella and Gaven, 2010). In a baseline study, figures for the recognised performance indicators are also gathered. The baseline survey, which concentrates on collating standard data about a situation, is the starting point of a monitoring and evaluation plan: its information is used to assess the situation at which the project begins (Frankel and Gage, 2007). This provides the foundation for subsequent assessment of how productively the project is being implemented and of the eventual results attained (Armstrong and Baron, 2013), a major contribution to project management. A baseline survey accumulates key data early in a project so that later judgments can be made about the quality and improvement of results accomplished by the project.

The impact on project management is an essential area of focus, especially in relation to the baseline survey, and numerous writers on M&E have provided data on the significance of baseline surveys. According to Action Aid (2008), baseline surveys are vital for the following reasons. They serve as the inception stage for a project: one important procedure for beginning a project is to complete a baseline study, whose findings become a yardstick for every future action that project managers can refer to when making project management decisions. They also establish the areas of need: baseline studies are critical in establishing zones of need for a project, particularly where the project has several targets, and the results of a baseline study can show that one part of the project requires far more attention than others (Action Aid, 2008).

On the question of attribution, Krzysztof et al. (2011) argued strongly that the lack of a benchmark makes any claim about the effect of a project unconvincing. A benchmark study effectively tells managers what effect the project has had on its object. The authors add that the monitoring and evaluation tools used during a baseline study are typically comparable to those used during evaluation, since it is vital to ensure that project administrators compare like with like (Krzysztof et al., 2011). All things considered, conducting a baseline means that the time and other resources spent on designing evaluation instruments are not wasted or discarded altogether, and there is a genuine chance to recognise along the way whether or not the project is on track.

A different motive for conducting the survey is that it is a donor requirement and a feature of the project procedure (Abeyama et al., 2008). Since monitoring and evaluation is basic to establishing a project's eventual accomplishment, donors usually oblige implementing bodies to do baseline studies. Generally, this helps them later to appreciate the results as the project advances. Rather surprisingly, for quite a number of bodies the main aim of M&E has become satisfying the benefactor's requirement, losing sight of the original motives for which the obligation to ensure monitoring and evaluation was created (Nyonje et al., 2012).

As with other M&E activities, a number of matters ought to be considered before conducting a baseline survey. In the study "Monitoring and evaluating urban development programs, a handbook for project managers and researchers", Bamberger et al. (2008) call attention to the fact that, as the title suggests, a baseline survey should be done at the very start of a project, and for an obvious reason: every manager must ensure that every notable effort or piece of work done is put on record for assessment. Where a benchmark study is conducted only after work has begun, the precise picture of the initial position cannot be mirrored, because the project has already had certain effects. It is therefore always best practice to conduct a baseline before project implementation (Bamberger, 2008).

A further imperative issue to be settled before a baseline study is conducted is the identification of indicators, which can be simply quantified measures or substantive signs proving that a task has been achieved (UNDP, 2009). They help in organising the questionnaire and in determining the appraisal queries, guiding the kind of data to collect and analyse. Another issue to be considered is the target population (Gosling et al., 2009). As with any other activity in project implementation, resources are required to complete a benchmark study. Every M&E professional knows that funds are mandatory for directing a survey, and the resources available will determine the scope and depth of the research. Generous funding may allow both quantitative and qualitative components, whereas restricted resources could mean that an organisation opts for quantitative techniques only (Amonia et al., 2006).

Once the study is done, subsequent monitoring of project progress assembles and examines information using the same logical framework and tools to analyse the progress made in accomplishing the project's intended results. In this way, baseline surveys contribute to project performance when the project manager can interpret the results of M&E effectively.
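To make the role of the baseline concrete, the short sketch below (with hypothetical indicator names and figures, not data from this study) shows how progress would be judged against baseline values rather than against zero.

# Hypothetical indicators: value at baseline, target value and latest observed value.
indicators = {
    "blocks_completed": {"baseline": 0, "target": 20, "current": 9},
    "budget_disbursed_ghs": {"baseline": 0.0, "target": 5_000_000.0, "current": 2_100_000.0},
    "workers_trained": {"baseline": 15, "target": 60, "current": 40},
}

for name, v in indicators.items():
    # Progress is the share of the planned change (target minus baseline) achieved so far.
    achieved = (v["current"] - v["baseline"]) / (v["target"] - v["baseline"])
    print(f"{name}: {achieved:.0%} of the planned change achieved")

Without the baseline figure of 15 already-trained workers, for example, the last indicator would be overstated at 67% rather than the roughly 56% of planned change actually achieved.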

2.4.4 Information Systems and Project Performance

Collecting information on project performance in the process of monitoring and evaluation ultimately amasses a quantity of data that depends on how complex the project is. For this substantial amount of information to raise the value of project management, there is a need to decide how to understand and simplify it. Shapiro (2001) describes data analysis as a method aimed at transforming systematic information into an understanding of current situations and matters, changes and patterns. At the inception of the work, the analysis of a specific task has to do with sorting out information, hence the issue of the information system as an M&E activity (Technopedia, 2013).

Principally, an Information System (IS) controls and directs data and provides information that helps to control projects in a productive and profitable manner (Beynon-Davies, 2008). Information systems consist of three major resources: people, technology and data, which together support decision making on how to interpret M&E information. The system becomes useless when M&E data is used by project management personnel merely to save, retrieve and simplify data. Considering the findings from the literature, an M&E information system is a major factor in the impact on project management, since it is the instrument used to organise the important information collected about the project. Hailey and Sorgenfrei (2009) suggest that the importance of developing an information system is for it to serve as an easily accessible point for imperative information at every stage of project management, against which implementation can be evaluated. Information in the system additionally helps to identify the primary elements needed for the efficient working of the project (Cheng et al., 2007).

A feature of an information system that makes it a profitable segment of M&E is that it is management oriented: developments in IS should start from examining the administrative requirements and the project objectives, and should proceed from the most important to the least. According to Olive (2002), it must be guaranteed that whatever information is drawn from the system is trustworthy information which will, in the long run, be useful in informing project management. Another feature of information systems is that they are integrative, all-encompassing in approach: they cover every functional area of the project and blend data from all parts of a task. These features make an information system the backbone of M&E that holds its data.

Considering merits, the most important merit of owning an information system is that it serves, in its own right, as a communication, planning and re-planning instrument. Information systems support the retrieval, organisation, recording and dissemination of information, including reports, practices, capabilities, methodologies and archives (Beynon-Davies, 2008). For the purpose of this research, it can be argued that a database with such characteristics is a fountain of essential information that could be used to advise on the implementation of the project.
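As a minimal sketch of the kind of record such an integrative M&E information system might hold (the fields, names and values below are assumptions for illustration only, not a prescription from the literature cited above):

from dataclasses import dataclass
from datetime import date

@dataclass
class MonitoringRecord:
    """One monitoring observation kept in a simple M&E information system."""
    project_id: str
    indicator: str        # e.g. percentage of works completed
    value: float
    target: float
    observed_on: date
    source: str           # who collected the data and how
    remarks: str = ""

    def variance(self) -> float:
        # Deviation of the observed value from its target.
        return self.value - self.target

# Records from every functional area of the project can be stored together and
# retrieved for reporting, which is what makes the system integrative.
record = MonitoringRecord("PRJ-001", "works_completed_pct", 45.0, 50.0,
                          date(2018, 6, 30), "monthly site inspection")
print(record.variance())  # -5.0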

2.4.5 Monitoring Planning

Monitoring planning has been identified as one of the effective PM&E practices. Throughout the literature, monitoring planning activities have been identified to include: monitoring plans are well aligned with the firm's activities; workers are well trained in effective monitoring planning practices within the firm's projects; system charts and structures are used as part of planning the firm's projects; the firm conducts stakeholder analysis studies of its assets before it designs; staff roles relate to their experience and capabilities in the firm; the firm uses project management software for monitoring plans; and rapid appraisal is carried out on the monitoring plans used in activities (Nyonje et al., 2012).

2.4.6 Monitoring Tools

Various monitoring tools have been identified by Gyadu-Asiedu (2009) as effective means of PM&E. These include: monitoring devices are thoroughly assessed for their relevance to the firm's tasks; workers are well trained in the monitoring mechanisms used in the firm's projects; the firm regularly seeks counsel on the best monitoring strategies to be used; the firm uses monitoring strategies which are universally recognised; the firm reviews its budgetary tools in controlling its project cost; metrics are used to check risks in the firm; and investigation checklists are used to standardise the firm's monitoring practices.

2.4.7 Monitoring Techniques/Strategies

Monitoring strategies are techniques put in place to ensure PM&E in firms. Various techniques have been identified throughout the literature and are summarised as follows: the firm conducts monthly project appraisals; there is an appropriate system for anticipating project tasks; variances in the execution, timetable and cost of project tasks are tracked; changes requested are properly handled and recorded in the firm; a participatory monitoring approach is used to determine execution; stochastic techniques are used as part of monitoring practices; and project mapping is carried out in project tasks (Chaplowe, 2008; IFAD, 2002).

2.4.8 Adoption of Monitoring Practices

This involves making sure that: the firm gives feedback on the monitoring practices carried out; formal systems for selecting monitoring practices are provided in project usage; there is proper awareness among staff of the practices adopted by the firm; the firm benchmarks its monitoring practices against those of other firms; the organisation has sound procedures for adopting monitoring practices; the techniques for adopting monitoring practices are complete, clear and easily understood in the project; and staff are happy with the policies in place, which give the opportunity to adopt best monitoring practices (Beynon-Davies, 2008).

2.5 DRIVERS OF PROJECT MONITORING AND EVALUATION

The review of existing literature revealed that time is a driver of project monitoring and evaluation. In the construction sector all over the world, experts have stated that the best indicator of success and achievement in a project is the timely completion of significant works. Significant works are those sections of a project which require a considerable amount of resources and energy to complete and which must be completed to allow progress on the other sections of the project to continue; a typical example is the foundation of a construction project (Gyadu-Asiedu, 2009). These are vital tasks and are relied on for the accomplishment of the project under execution. One major reason is that these significant tasks are milestones, points at which periodic certificates are raised, and professionals therefore attach exceptional significance to them. The control, that is, the reduction or extension, of this key indicator falls within the responsibilities of the project manager or the consulting group or project team to the extent that they can guarantee sound PM&E. In the Ghanaian construction industry, the period within which evaluated work done is remunerated is also paramount in determining the duration of the project; in extreme situations this results in contractors pausing work until they receive payment (Gyadu-Asiedu, 2009).

The overall goal or desired change/impact of a project is also a key driver of PM&E. The objective of IFAD, for instance, following the 1995 World Summit for Social Development, was to undertake projects to reduce poverty. The important areas for monitoring and evaluating progress in this regard include: the poor improving those aspects of their livelihoods that they consider most essential; the rural poor adopting improved livelihood strategies, gaining greater access to productive resources and greater influence over the policies that affect them; IFAD, in collaboration with borrowers and stakeholders, establishing and strengthening enabling conditions for effective poverty reduction; and IFAD improving its internal operations and processes in the areas of investment and policy interventions, and enhancing its capacity to be regarded as a "learning organisation" that promotes and supports development (Chaplowe, 2008; IFAD, 2002).

The principal beneficiaries, that is, the group the project seeks to benefit, are also a major driver of monitoring and evaluation. A valid example is the International Fund for Agricultural Development (IFAD), which seeks to benefit individuals whose incomes are less than one US dollar per day and individuals who suffer from starvation and hunger. Monitoring progress towards such objectives is therefore the task of the whole United Nations system, coordinated by the Department of Economic and Social Affairs of the United Nations Secretariat and the United Nations Development Programme, in close collaboration with the World Bank, the International Monetary Fund and the Organisation for Economic Co-operation and Development (IFAD, 2002).

2.6 BARRIERS TO THE EXECUTION OF PROJECT MONITORING AND

EVALUATION

Many projects encounter various hindrances during their execution, and project monitoring and evaluation are crucial components in improving project execution. These hindrances are largely influenced by the types of measures used and the minimal attention given to the practice. The effectiveness and success of every monitoring practice depend to a great extent on the capacity of the institution or individual mandated to undertake the activity. The execution of project monitoring and evaluation is therefore challenged by weak institutional capacity. Capacity building of firms is important, not only for the prompt correction of poor performance, but also for responses based on broader goal and outcome analysis (Al-Najjar et al. 2012).

Monitoring and evaluation are processes, and there is therefore a need for synergy with other activities in the project cycle, such as budgeting and planning. Weak linkage between budgeting and planning on the one hand and project monitoring and evaluation on the other will adversely affect the ultimate aim of PM&E. A critical consideration in preparing for data collection and analysis is to identify any limitations, biases and threats to the accuracy of the data and the analysis (Enshassi et al. 2007). It is likewise essential to plan carefully for the data management of the system, which reduces time and resource wastage (Enshassi et al. 2007). Budgeting for PM&E activities and the related obligations must be documented and analysed where necessary. The items associated with each task have to be determined, including their cost, and there must be a budget for staffing, including full-time staff, external consultants, capacity building/training, and other human resource costs. Moreover, the budget should include all capital costs, including office costs, office equipment and supplies, travel and accommodation, computer hardware and software, and other expenses. Budgeting must also determine whether all activities are included in the overall project budget, for example support for an information management system, field transportation, vehicle maintenance, translation, and printing and publishing of M&E documents and tools.

Ineffective communication between the key steps in project monitoring and evaluation ultimately presents a challenge (Lewis et al. 2007). The type of measures used in assessing project monitoring and evaluation constrains its successful implementation. IFAD (2002) posits that a problem with the various monitoring and evaluation models is that most of the measures are only capable of reporting on project events after they have occurred. According to Ahadzie (2007), a meeting of top representatives from a group of design and construction organisations noted that major issues with the key performance indicators (KPIs) of the Construction Best Practice Programme (CBPP) were that they do not offer the opportunity for change and that they are designed as post-results KPIs. An examination of alternative KPIs reveals a similar situation (Chaplowe, 2008). Ahadzie (2007) describes two alternative types of KPIs as assessment measures under "lagging" or "leading" measures: key performance outcomes (KPOs) and perception measures. KPOs could be used to evaluate a sub-process and provide indications for improvement in the next sub-process.

The project scope and size are also drivers of monitoring and evaluation. In the construction industry, they form an important criterion for assessing project monitoring and evaluation, with the following indicators: efficiency of the project team, management of the contractor, the decision-making process, communication and reporting, assessment and approval of tasks, and consistency of site meetings. The success or failure of these determinants will directly affect the quality of the project and, in turn, of monitoring and evaluation (Gyadu-Asiedu, 2009).

Contract duration, or the length of a project, is a key influencing factor of project monitoring and evaluation (Chaplowe, 2008). The extent of participation in and capacity for the processes of monitoring and evaluation is indirectly influenced by the duration of the project.

The overall project budget drives the task of monitoring and evaluation. The costs involved in a project include variation costs, administrative costs, environmental and social costs, incidental costs and legal costs. Contingency cost is an important part of the overall cost of a project at any stage, and it also gives a good indication of how project expenditure is influenced by the "project external environment". The administrative cost, which comprises the cost of employing administrators on the project and the project team, is essentially a fixed one (a percentage of the contract sum) and can differ from changes in that sum owing to alterations in the particular factors of the project and its environment, including time, scope and price variations, among others. Environmental and social costs depend on the extent to which the project affects society and the environment and on the level of expenditure by the client in mitigating those impacts; they generally form a small proportion of the cost of government building projects because of their shapes, dimensions, sizes and complexities, and because there are relatively few enforceable laws in these respects. Incidental costs (costs relating to accidents, severe weather and industrial actions) and legal costs usually represent the smallest share of the overall cost of projects. Incidental costs relating to accidents and damage are covered by insurance, towards which a periodic contribution is made by the contractor to indemnify the client, except where those incidents are caused by the negligence of the client (clause 15, Articles of Agreement and Conditions of Contract for Building Works, 1988); the other aspect concerns the losses due to the time spent in dealing with these events (Gyadu-Asiedu, 2009).

The various barriers to the implementation of project monitoring and evaluation can be summarised as follows: weak participation in and use of monitoring and evaluation results; weak institutional capacity; weak linkage between budgeting, planning and monitoring and evaluation; limited resources and budgetary allocations for monitoring and evaluation; non-compliance with planning, monitoring and evaluation guidelines; poor data quality, data gaps and inconsistencies; the absence of a comprehensive national PM&E database framework; the development of PM&E goals that may not be measurable and therefore cannot be used to assess project management and achievements or to communicate project outcomes; the development of PM&E targets that may not be consistent with the needs and values of intended beneficiaries; and project activities that do not deliver the desired results economically and do not have the desired impact.

CHAPTER THREE

RESEARCH METHODOLOGY

3.1 INTRODUCTION

This chapter describes the methodology employed in attaining the research aim and objectives, with the intention of adopting the best possible method. Respondents were expected to answer the questions that were raised. The chapter places emphasis on the ontological and epistemological considerations, deductive and inductive reasoning, the research strategy, design, sampling technique and approach, the data collection instrument, and the analysis of the data.

Christou et al. (2008) defined methodology as gaining information about the world and discovering the procedures by which we can find the things we believe to be true, making it an all-encompassing approach to the research design process from the beginning through the data-gathering stage to the analysis employed for the research.

3.2 RESEARCH APPROACH

A research approach can be classified in terms of whether it is deductive or inductive. The deductive approach, also referred to as testing a theory, is one in which the investigator develops a theory or hypothesis and designs the research procedure to test it. Explaining deductive research, Perry (2001) added that it is research in which a conceptual and theoretical framework is developed and then tested by empirical observation; thus, particular instances are deduced from general inferences. The deductive approach is described as moving from the general to the particular and often requires considerable data, which this study uses (Hussey et al., 1997).

The inductive approach, known as building a theory, involves the researcher collecting data in an attempt to develop a theory (Saunders et al., 2000). Hussey et al. (1997) added that inductive research is a study in which theory is "developed from the observation of empirical reality; thus general inferences are induced from particular instances, which is the reverse of the deductive method since it involves moving from individual observation to statements of general patterns or laws".

3.3 RESEARCH DESIGN

A study's design is a set of guidelines or instructions for data collection (Ogoe, 1993). It corresponds to the framework for collecting and analysing data; the framework shapes the method used in gathering and analysing information and links the empirical data and the conclusions, in a logical manner, to the principal research question of the investigation (Yin, 2009; Baiden, 2006). With positivism adopted as the paradigm underpinning this study, the epistemological and ontological assumptions dictated that case studies, surveys or experiments would be the most suitable research methods. A survey was used for this research, as data was collected from D1K1 contractors in Ghana.

3.4 RESEARCH STRATEGY

Naoum (1998) describes research strategy as the way in which the research objectives are pursued. According to Bouma et al. (1995), a research strategy is the way in which the research objectives are questioned, and Baiden (2006) stated that the three main types of research strategies are the quantitative strategy, the qualitative strategy and triangulation. The decision on which research strategy to use depends on the purpose of the study and on the type and availability of the information required.

3.4.1 Quantitative Strategy

Quantitative research is an inquiry into a social or human problem, based on testing a hypothesis or a theory composed of variables, measured with numbers and analysed with statistical procedures, in order to determine whether the hypothesis or theory holds true (Creswell, 2005). This strategy was adopted for the study.

3.4.2 Qualitative Strategy

Qualitative studies emphasise the process of uncovering how social meaning is formed and highlight the relationship between the researcher and the subject studied (Denzin et al., 1998). Berg (2001) established that qualitative studies seek to describe the definitions, metaphors, descriptions, meanings, symbols, concepts and characteristics of things.

3.4.3 Mixed Strategy

This approach was not adopted for this study because of the limited duration of the study. The use of both procedures provides a richer understanding of the phenomena observed, offers an explanatory account through triangulation and illuminates the important study conclusions. The mixed method is a research design with philosophical assumptions and scientific methods of inquiry; it involves philosophical assumptions that guide the processes of data collection and analysis and a combination of qualitative and quantitative (statistical) approaches in many phases of the research process (Tashakkori & Teddlie, 2003).

3.5 DATA COLLECTION AND SOURCE OF INFORMATION

According to Bernard (2002), data collection is paramount in research, since the data help provide a much clearer understanding of the theoretical context. This makes it essential to select the method of obtaining the data and the source of the data carefully, particularly because no amount of analysis can compensate for data that was gathered poorly (Tongoco, 2007). Both primary and secondary data were collected for the purpose of this study. The use of more than one data collection instrument strengthens the study and enhances its integrity (Patton, 1990).

3.5.1 Primary Data Source

Primary data is empirical data gathered in the field, mostly through on-the-ground surveys. Naoum (2002) described fieldwork as having three practical approaches: the case study approach, the problem-solving approach and the survey approach. The survey approach was selected for this research, with primary data collected from construction professionals in the Greater Accra Region, as it was the most economical and convenient for the study (Hagget and Frey, 1977).

3.5.2 Secondary Sources of Information

Data gathered from books, articles, databases and technical journals are referred to as secondary sources of data. This is an extremely important part of the research work because it sets the pace for the creation of the instruments used for the field survey through interviews and questionnaires (Owusu-Manu, 2008). This study made use of secondary data from two major sources, namely internal and external sources. Internal sources are those published by organisations or companies themselves, and include magazines, financial reports, financial information memoranda, plant and equipment registers, brochures and annual reports. External secondary sources include magazines, newspapers, textbooks, internet sources and technical journals. Although primary sources are the most accurate sources of data, since they provide the investigator with original research information, secondary sources are also very important.

3.6 RESEARCH POPULATION AND SAMPLING TECHNIQUE

3.6.1 Research Population

The target population for the study is D1K1 contractors in the Greater Accra Region who have knowledge of monitoring and evaluation practices in the industry.

3.6.2 Sampling Procedure and Sample Size

According to Punch (1998), one cannot study everyone, everywhere, doing everything, and so sampling decisions are required not only about which people to interview or which events to observe, but also about settings and processes. In view of this, purposive and snowball sampling methods were adopted for the study. Purposive sampling was used to select contractors knowledgeable in project monitoring and evaluation, that is, the highest class of contractors in Ghana, D1K1 contractors. As a result, top-ranking members of staff were approached and questioned; the selected professionals had roles that involved decision making with regard to monitoring and evaluation in their firms. The snowball sampling technique was used to select contractors through referrals from their colleague contractors, since a database of the contractors was hard to obtain. From a review of the literature, a survey questionnaire was developed to collect data for the study. The questionnaires were hand-delivered to participants in their offices and on site, filled out by the participants and returned to the researcher.

3.7 DATA COLLECTION INSTRUMENT

The main instrument used to collect data for the study was a survey questionnaire. The questionnaire consisted of closed-ended and open-ended questions in order to elicit feedback from participants. Most of the questions centred on monitoring and evaluation practices, which were the main areas around which the data gathered from respondents were analysed.

3.7.1 Questionnaire

A questionnaire is a printed instrument designed to collect information through the written responses of the subject. A structured questionnaire was designed and self-administered to the selected construction professionals. The questionnaire used mainly closed-ended and open-ended questions focusing on the subject matter extracted from the literature review. The questionnaires were mainly administered face to face to the selected respondents to ensure that the objectives of the study could be achieved. There was less opportunity for bias as the questions were presented in a consistent manner, and most of the items were closed-ended, which made it easier to compare the responses to each item.

3.7.2 Content of Questionnaire

The questionnaire was developed to collect data from construction professionals and was grouped into sections to collect data on the monitoring and evaluation practices of construction firms. Section A solicited the characteristics of the respondents using objective questions. Section B solicited information on project monitoring and evaluation practices using a Likert scale from 1 to 5, ranging from strongly disagree to strongly agree. Section C solicited information on the barriers to practising project monitoring and evaluation, rated from 1 to 5 (strongly disagree, disagree, neutral, agree, strongly agree), with space provided for respondents to state other barriers they knew of. Lastly, Section D covered the drivers that push for the implementation of monitoring and evaluation practices, also scaled from 1 to 5 (strongly disagree, disagree, neutral, agree, strongly agree).

3.7.3 Questionnaire Administration

Primary data was collected through a field survey of professionals within D1K1 construction firms. Data was collected from sixty (60) respondents through a designed questionnaire administered to participants in their offices and on their sites. The questionnaires were filled out by the participants and collected by the researcher after three days.

3.7.4 Pilot Testing

The pilot questionnaire was given to ten (10) professionals in the construction industry in order to identify and correct errors such as repeated questions, typographical mistakes and double-barrelled questions.

3.8 ANALYSIS OF THE DATA

The raw data obtained from a study is of little use unless it is transformed into information for the purpose of decision making (Emery and Couper, 2003). The data analysis involved reducing the raw data to a manageable size, developing summaries and applying statistical inference. Consequently, the following steps were taken to analyse the data for the study. The data was first edited to detect and correct possible errors and omissions and to ensure consistency across respondents. The quantifiable data from the questionnaire was then coded and analysed using the SPSS 17.0 (Statistical Package for the Social Sciences) software, and the statistical tools employed were the mean score method and the Relative Importance Index (RII), which was used to rank the various ratings in order of importance. Descriptive and inferential statistics such as frequency tables, percentages, the one sample t-test and charts were used in the data analysis and summaries.

3.8.1 Mean score

The relative importance index was used to analyse some of the data, with rankings computed as shown below. Data was also analysed by ranking, for example whether the respondents agreed or disagreed with a statement. The ratings of the statements by the respondents were placed on a five-point scale and were combined and converted to deduce the Mean Score (MS) by the formula:

MS = Σ(f × s) / N

Where:
MS = Mean Score
s = the score given to each factor by a respondent
f = the frequency of responses for each rating
N = the total number of respondents

The factor with the highest mean score was ranked first, followed by the factor with the next highest mean score, and so on.
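
To make the computation concrete, the short sketch below shows how mean scores and ranks can be derived from Likert ratings. It is a minimal illustration in Python using hypothetical response frequencies and item names, not the actual survey data.

# Minimal sketch of the mean score (MS) ranking described above.
# The item names and frequencies below are hypothetical, not the survey data.

likert_scores = [1, 2, 3, 4, 5]  # strongly disagree ... strongly agree

# factor -> frequency of responses for each rating (f); N = sum of frequencies
responses = {
    "Participatory monitoring approach": [0, 1, 2, 10, 47],
    "Project mapping on project tasks":  [1, 2, 5, 15, 37],
    "Monthly project appraisals":        [2, 4, 10, 25, 19],
}

def mean_score(frequencies):
    """MS = sum(f * s) / N for a five-point Likert item."""
    n = sum(frequencies)
    return sum(f * s for f, s in zip(frequencies, likert_scores)) / n

scores = {factor: mean_score(freqs) for factor, freqs in responses.items()}

# Rank factors from highest to lowest mean score (rank 1 = highest).
for rank, (factor, ms) in enumerate(
        sorted(scores.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"{rank}. {factor}: MS = {ms:.2f}")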

CHAPTER FOUR

DATA ANALYSIS AND DISCUSSION

4.1 INTRODUCTION

This chapter analyses the data collected from the respondents using analytical software, namely IBM SPSS 23 and Excel spreadsheets, to make the data easy to understand and interpret. A total of sixty questionnaires were collected from construction industry professionals on the monitoring and evaluation practices in their various places of work. The data was subjected to mean score ranking, the one sample t-test, descriptive statistics and the relative importance index in order to make meaning of the data. The results of the analysis are discussed below.

4.2 RESPONDENT CHARACTERISTICS

Every piece of research must be credible and reliable. For this reason, the researcher solicited information on the characteristics of the respondents, such as their sex, educational level, occupation, working experience and professional affiliation. These characteristics gave the researcher an idea of the professionals who answered the questions and of their qualifications. The result of the analysis is illustrated in Table 4.1.
Table 4.1: Demographic Data of Respondent
Frequency Percent
Sex Male 57 95.0
Female 3 5.0
Educational Level Diploma / Professional Certificate 4 6.7
Bachelor’s Degree 43 71.7
Masters / Postgraduate Degree 12 20.0
PhD 1 1.7
Respondents Architect 12 20.0
Occupation Civil/Structural Engineer 17 28.3
Project Manager 10 16.7
Quantity Surveyor 21 35.0
Working Experience 1 – 5 years 10 16.7
6 - 10 years 39 65.0
11 – 15 years 11 18.3
Professional Affiliation Ghana Institute of Architects 10 16.7
Ghana Institution of Surveyors 25 41.7
Ghana Institution of Engineers 20 33.3
Project Management Professional 5 8.3
Total 60 100.0

Source: Field Survey August 2018

Table 4.1 presents the characteristics of the survey respondents. Of the sixty questionnaires returned, fifty-seven respondents (95%) were male while three (5%) were female, reflecting the male-dominated nature of the Ghanaian construction industry. Regarding educational level, four respondents (6.7%) held a Diploma or Professional Certificate, forty-three (71.7%) held a Bachelor's degree, which was the most common qualification, twelve (20%) held a Master's or Postgraduate degree, and one respondent (1.7%) held a doctorate. This gave the researcher assurance that the respondents understood the subject under consideration. With respect to occupation, twelve respondents (20%) were architects, seventeen (28.3%) were civil/structural engineers, ten (16.7%) were project managers and twenty-one (35%) were quantity surveyors, indicating that the questionnaires were answered by qualified people in the construction industry whose answers can be considered credible and reliable. The respondents' experience in the industry was also important to the researcher: the analysis revealed that most of the respondents had six or more years of experience, which further supports the credibility and reliability of the responses. Ten respondents (16.7%) had 1 to 5 years of experience, thirty-nine (65%) had 6 to 10 years, and eleven (18.3%) had 11 to 15 years. The last question concerned professional affiliation, and all the professionals belonged to one body or another: ten respondents (16.7%) belonged to the Ghana Institute of Architects, twenty-five (41.7%) to the Ghana Institution of Surveyors, twenty (33.3%) to the Ghana Institution of Engineers, and five (8.3%) were Project Management Professionals. These results demonstrate the experience and qualifications of the respondents in answering the questions posed, and the data can therefore be considered credible and reliable.
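
As an illustration of how the frequency and percentage columns in Table 4.1 can be produced, the short sketch below uses pandas on a small hypothetical set of responses; the column name and records are assumptions, not the actual dataset.

# Minimal sketch of a frequency/percentage breakdown like Table 4.1.
# The records below are hypothetical, not the actual survey data.
import pandas as pd

respondents = pd.DataFrame({
    "occupation": ["Quantity Surveyor", "Architect", "Civil/Structural Engineer",
                   "Quantity Surveyor", "Project Manager", "Quantity Surveyor"],
})

freq = respondents["occupation"].value_counts()        # counts per category
percent = (freq / len(respondents) * 100).round(1)     # share of respondents

print(pd.DataFrame({"Frequency": freq, "Percent": percent}))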

4.3 PROJECT MONITORING AND EVALUATION PRACTICES

To achieve the first objective, which was to examine the project monitoring and evaluation practices on construction projects in Ghana, questions were posed to the respondents to elicit information on this objective. The questionnaire was designed, using a Likert scale from 1 to 5, so that the respondents would understand what was required of them. The data collected from the respondents went through data screening to make sure it was suitable for analysis. Mean score ranking and the one sample t-test were used in the analysis. Table 4.2 gives a detailed description of the results.

Table 4.2: Project Monitoring and Evaluation Practices


Practices  Mean  Std. Deviation  t  df  Sig. (2-tailed)  Rank
Participatory monitoring approaches are used to determine performance  4.82  5.177  2.718  59  0.009**  1
Project mapping is conducted on project tasks  4.52  5.196  2.261  59  0.027**  2
Monitoring planning  4.08  0.424  19.813  59  0.000**  3
Stochastic techniques are used as part of monitoring practices  3.92  0.381  18.616  59  0.000**  4
Variance analyses are conducted on the performance, schedule and cost of project tasks  3.85  0.515  12.784  59  0.000**  5
Lack of adequate supervisory skills to monitor contracts  3.80  0.480  12.907  59  0.000**  6
Approved procedures in place for contractor monitoring  3.78  0.415  14.605  59  0.000**  7
Contract performance appraisal is done during project implementation  3.77  0.563  10.539  59  0.000**  8
Technical audits are conducted during project implementation  3.73  0.482  11.774  59  0.000**  9
There is an appropriate system for forecasting project tasks  3.72  0.555  10.000  59  0.000**  10
The firm conducts monthly project appraisals  3.71  0.524  10.599  59  0.000**  11
Contract supervisors do not prepare monitoring plans  3.65  0.515  9.776  59  0.000**  12
There is poor record management on projects  3.43  0.500  6.717  59  0.000**  13
No regular site inspections on road projects  3.35  0.515  5.264  59  0.000**  14
No clear feedback mechanism between the contractor and employer on projects  3.15  0.917  1.267  59  0.210  15
No clear dispute resolution procedures for projects  3.07  0.634  0.814  59  0.419  16
There is no timely payment of contractors  3.03  0.843  0.306  59  0.760  17
Project expectations are not clearly communicated to contractors  2.78  1.121  -1.497  59  0.140  18
**Significant
Source: Field survey August 2018

From Table 4.2, a comparison of the means shows that "participatory monitoring approaches are used to determine performance" ranked first with a mean of 4.82 (standard deviation 5.177). "Project mapping is conducted on project tasks" ranked second with a mean of 4.52 (5.196), followed by monitoring planning with a mean of 4.08 (0.424) in third place. "Stochastic techniques are used as part of monitoring practices" ranked fourth with a mean of 3.92 (0.381), and "variance analyses are conducted on the performance, schedule and cost of project tasks" ranked fifth with a mean of 3.85 (0.515). These were followed by lack of adequate supervisory skills to monitor contracts (3.80, 0.480), approved procedures in place for contractor monitoring (3.78, 0.415), contract performance appraisal during project implementation (3.77, 0.563), technical audits during project implementation (3.73, 0.482) and an appropriate system for forecasting project tasks (3.72, 0.555) in sixth to tenth place respectively. The firm conducts monthly project appraisals (3.71, 0.524) ranked eleventh, contract supervisors do not prepare monitoring plans (3.65, 0.515) twelfth, poor record management on projects (3.43, 0.500) thirteenth, no regular site inspections on road projects (3.35, 0.515) fourteenth, no clear feedback mechanism between the contractor and employer on projects (3.15, 0.917) fifteenth, no clear dispute resolution procedures for projects (3.07, 0.634) sixteenth and no timely payment of contractors (3.03, 0.843) seventeenth. Project expectations are not clearly communicated to contractors, with a mean of 2.78 (1.121), was the lowest-ranked practice.

The significance of the proposed practices was further tested using the one sample t-test. At a 95% confidence level and with a test value of 3.0, practices with p > 0.05 were deemed not statistically significant, while those with p < 0.05 were deemed significant. After the test, all the practices recorded p-values less than 0.05 except: no clear feedback mechanism between the contractor and employer on projects, no clear dispute resolution procedures for projects, no timely payment of contractors, and project expectations are not clearly communicated to contractors, which were ranked fifteenth to eighteenth. This shows that the practices ranked first to fourteenth are statistically significant and important monitoring and evaluation practices.
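
To illustrate the test used here, the sketch below runs a one-sample t-test against the neutral test value of 3.0, as described above. It is a minimal example with hypothetical Likert ratings (not the survey responses), using scipy.

# Minimal sketch of the one-sample t-test used above (test value = 3.0,
# two-tailed, alpha = 0.05). The ratings below are hypothetical, not the
# actual survey responses.
from scipy import stats

ratings = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]  # Likert ratings for one practice

t_stat, p_value = stats.ttest_1samp(ratings, popmean=3.0)

print(f"t = {t_stat:.3f}, df = {len(ratings) - 1}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Mean rating differs significantly from the neutral value of 3.0")
else:
    print("No statistically significant difference from 3.0")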

There are different procedures associated with M&E activities which, when carried out effectively, can lead to improvement and better delivery of projects in the future (Msila and Setlhako, 2013). Many researchers of project M&E contend that planning for monitoring and evaluation should be done at the same time as project planning (Kohli and Chitkara, 2008), while a few argue that it should be done when the planning stage ends but before the design phase of a project (Nyonje et al., 2012). Despite this difference in view, however, all researchers agree that the plan should include information on how a project is to be evaluated (Cleland and Ireland, 2007). The results affirm the practices reported by various researchers on monitoring and evaluation in the Ghanaian construction industry.

4.4 BARRIERS IN PRACTICING PROJECT MONITORING AND

EVALUATION

The second objective was aimed at identifying the barriers to monitoring and evaluation on construction projects in Ghana. To ascertain the relevance of the data collected from the respondents on this topic, the data was analysed using the relative importance index, which ranked the barriers from the most predominant to the least. Respondents were asked to rate barriers identified from the literature on monitoring and evaluation practices using a Likert scale from 1 to 5. After screening of the data, IBM SPSS was used in conjunction with an Excel spreadsheet to compute the relative importance index of the barriers. Table 4.3 gives a clear description of the analysis conducted on the data.

Table 4.3: Barriers in Practicing Project Monitoring and Evaluation
NO  Barriers  Mean  ΣW  RII = ΣW/(5*N)  Rank
1  There is a dominant use of donor procedures and guidelines in monitoring  4.267  256  0.853  1
2  Lessons learned are not incorporated  4.133  248  0.827  2
3  The development of PM&E targets that are not consistent with the needs and values of intended beneficiaries  4.050  243  0.810  3
4  Sustainability is often not considered  4.000  240  0.800  4
5  Absence of a comprehensive national PM&E database framework  3.983  239  0.797  5
6  Poor data quality, data gaps and inconsistencies  3.950  237  0.790  6
7  Limited resources and budgetary allocations for monitoring and evaluation  3.883  233  0.777  7
8  Non-compliance with planning, monitoring and evaluation guidelines  3.867  232  0.773  8
9  The development of PM&E goals that are not measurable and therefore cannot be used to assess projects  3.833  230  0.767  9
10  Weak institutional capacity  3.800  228  0.760  10
11  Weak participation in and use of monitoring and evaluation results  3.733  224  0.747  11
12  Weak linkage between budgeting, planning and monitoring and evaluation  3.650  219  0.730  12
Source: Field survey (2018)

Table 4.3 presents the results for the barriers to monitoring and evaluation practices in the construction industry, ranked from one to twelve. The first-ranked barrier is the dominant use of donor procedures and guidelines in monitoring, with a mean of 4.267 and an RII of 0.853. The second-ranked barrier is that lessons learned are not incorporated, with a mean of 4.133 and an RII of 0.827. The third-ranked barrier is the development of PM&E targets that are not consistent with the needs and values of intended beneficiaries, with a mean of 4.050 and an RII of 0.810. The fourth-ranked barrier is that sustainability is often not considered, with a mean of 4.000 and an RII of 0.800. The fifth-ranked barrier is the absence of a comprehensive national PM&E database framework, with a mean of 3.983 and an RII of 0.797. The sixth-ranked barrier is poor data quality, data gaps and inconsistencies, with a mean of 3.950 and an RII of 0.790. The seventh-ranked barrier is limited resources and budgetary allocations for monitoring and evaluation, with a mean of 3.883 and an RII of 0.777. The eighth-ranked barrier is non-compliance with planning, monitoring and evaluation guidelines, with a mean of 3.867 and an RII of 0.773. The ninth-ranked barrier is the development of PM&E goals that are not measurable and therefore cannot be used to assess projects, with a mean of 3.833 and an RII of 0.767. The tenth-ranked barrier is weak institutional capacity, with a mean of 3.800 and an RII of 0.760. The eleventh-ranked barrier is weak participation in and use of monitoring and evaluation results, with a mean of 3.733 and an RII of 0.747, and the last-ranked barrier is weak linkage between budgeting, planning and monitoring and evaluation, with a mean of 3.650 and an RII of 0.730. These are the challenges preventing the construction industry from implementing proper monitoring and evaluation practices.
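
For reference, the RII values in Tables 4.3 and 4.4 follow directly from the formula shown in the table headers, RII = ΣW/(5 × N), where W is the rating given by each respondent and N is the number of respondents (60). The sketch below is a minimal Python illustration; the rating counts are hypothetical (chosen only so that their totals match the ΣW values reported for the first two barriers) and do not represent the actual distribution of responses.

# Minimal sketch of the Relative Importance Index (RII) ranking used in
# Tables 4.3 and 4.4: RII = sum(W) / (5 * N). The counts below are hypothetical.

N = 60  # number of respondents

# item -> number of respondents choosing ratings 1..5 (hypothetical counts)
rating_counts = {
    "Dominant use of donor procedures and guidelines": [0, 2, 6, 26, 26],
    "Lessons learned are not incorporated":            [1, 3, 8, 23, 25],
}

def rii(counts):
    """Relative Importance Index for one item."""
    sum_w = sum(rating * count for rating, count in enumerate(counts, start=1))
    return sum_w / (5 * N)

ranked = sorted(rating_counts.items(), key=lambda kv: rii(kv[1]), reverse=True)
for rank, (item, counts) in enumerate(ranked, start=1):
    print(f"{rank}. {item}: RII = {rii(counts):.3f}")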

4.5 DRIVERS FOR THE IMPLEMENTATION OF MONITORING AND

EVALUATION PRACTICES

The last objective was aimed at identifying the drivers that push for the implementation of monitoring and evaluation practices on construction projects in Ghana. Through a review of related literature, a questionnaire was developed using the factors identified, designed on a Likert scale to allow the respondents to rate the various factors. After data collection, the data was screened for mistakes and errors, which were corrected, and the results were analysed using the relative importance index to rank the drivers. Table 4.4 shows the ranking of the various drivers using the RII.

Table 4.4: Drivers in the Implementation of M/E Practices


NO  Drivers  Mean  ΣW  RII = ΣW/(5*N)  Rank
1  The overall project budget  4.050  243  0.810  1
2  The extent of participation in and capacity for M&E  3.933  236  0.787  2
3  The assumptions that link the project objectives to specific interventions or activities  3.900  234  0.780  3
4  The project scope and size  3.883  233  0.777  4
5  The project duration  3.817  229  0.763  5
6  The main beneficiaries or audience that the project seeks to benefit  3.800  228  0.760  6
7  The overall goal or desired change/effect of the project  3.783  227  0.757  7
Source: Field survey August 2018

In Table 4.4 the drivers are ranked from highest to lowest. The overall project budget is ranked first, with a mean of 4.050 and the highest RII value of 0.810. The extent of participation in and capacity for M&E is ranked second, with a mean of 3.933 and an RII value of 0.787. The assumptions that link the project objectives to specific interventions or activities are ranked third, with a mean of 3.900 and an RII value of 0.780. The project scope and size is ranked fourth, with a mean of 3.883 and an RII value of 0.777. The project duration is ranked fifth, with a mean of 3.817 and an RII value of 0.763. The main beneficiaries or audience that the project seeks to benefit is ranked sixth, with a mean of 3.800 and an RII value of 0.760, and the overall goal or desired change/effect of the project is the last-ranked driver, with a mean of 3.783 and an RII value of 0.757.

According to Chaplowe (2008), project duration is a key influencing factor of project monitoring and evaluation: the extent of participation in and capacity for monitoring and evaluation is indirectly influenced by the duration of the project. The principal beneficiaries, or the group the project seeks to benefit, are likewise another driver of project monitoring and evaluation (IFAD, 2002). A substantial body of literature backs these drivers and assertions on monitoring and evaluation, and they should be regarded as important and significant if monitoring and evaluation is to be implemented.

CHAPTER FIVE

SUMMARY OF FINDINGS, CONCLUSION AND RECOMMENDATION

5.1 INTRODUCTION

This chapter summarises the objectives and states the findings from the data, which was analysed using the one sample t-test, mean score ranking and the relative importance index. The conclusion of the study is then drawn and several recommendations are stated. The limitations are then elaborated and recommendations for further research are made. These are discussed below.

5.2 SUMMARY OF THE RESEARCH OBJECTIVES

The study aimed at assessing monitoring and evaluation practices on construction projects in Ghana. The objectives set to realise the aim of the study were as follows: to examine the project monitoring and evaluation practices on construction projects in Ghana; to identify the barriers to monitoring and evaluation on construction projects in Ghana; and to identify the drivers that push for the implementation of monitoring and evaluation practices on construction projects in Ghana. The objectives of the study are summarised below.

5.2.1 The first objective: To examine the project monitoring and evaluation practices on construction projects in Ghana

A questionnaire was developed in the form of a Likert scale rated from one to five, and mean score ranking and the one sample t-test were used to analyse the data. After the analysis, the ten highest-ranked practices were: participatory monitoring approaches are used to determine performance; project mapping is conducted on project tasks; monitoring planning; stochastic techniques are used as part of monitoring practices; variance analyses are conducted on the performance, schedule and cost of project tasks; lack of adequate supervisory skills to monitor contracts; approved procedures in place for contractor monitoring; contract performance appraisal is done during project implementation; technical audits are conducted during project implementation; and there is an appropriate system for forecasting project tasks. All ten of the highest-ranked factors were also significant according to the one sample t-test analysis.

5.2.2 The second objective: To identify the barriers to monitoring and evaluation on construction projects in Ghana

To achieve this objective, existing literature on the barriers to monitoring and evaluation on construction projects was reviewed and a questionnaire was developed to collect data from respondents. The analysis was done using the relative importance index. The following barriers were ranked from first to fifth: there is a dominant use of donor procedures and guidelines in monitoring; lessons learned are not incorporated; the development of PM&E targets that are not consistent with the needs and values of intended beneficiaries; sustainability is often not considered; and the absence of a comprehensive national PM&E database framework.

5.2.3 The third objective: To identify the drivers that push for the implementation of monitoring and evaluation practices on construction projects in Ghana

The last objective was to identify the drivers that push for the implementation of monitoring and evaluation practices, and extant literature was reviewed to gather information for the development of the questionnaire. The relative importance index was also used for this objective to rank the drivers. The overall project budget, the extent of participation in and capacity for M&E, the assumptions that link the project objectives to specific interventions or activities, the project scope and size, and the project duration were ranked as the five highest drivers of monitoring and evaluation.

5.3 FINDINGS

Many findings were extracted from the analysis of the data. The relative importance index was the tool used to achieve the second and third objectives, while the mean score and one sample t-test were used in achieving the first objective of the research. The following are the findings of the study:

 The characteristics of the respondents proved credible and reliable, as the respondents were more than qualified to answer the questionnaire.

 The highest-ranked practices, from first to tenth, are: participatory monitoring approaches are used to determine performance; project mapping is conducted on project tasks; monitoring planning; stochastic techniques are used as part of monitoring practices; variance analyses are conducted on the performance, schedule and cost of project tasks; lack of adequate supervisory skills to monitor contracts; approved procedures in place for contractor monitoring; contract performance appraisal is done during project implementation; technical audits are conducted during project implementation; and there is an appropriate system for forecasting project tasks. All ten of the highest-ranked factors were also significant according to the one sample t-test analysis.

 The barriers arrived at after the relative importance index analysis, from the first-ranked to the fifth, are: there is a dominant use of donor procedures and guidelines in monitoring; lessons learned are not incorporated; the development of PM&E targets that are not consistent with the needs and values of intended beneficiaries; sustainability is often not considered; and the absence of a comprehensive national PM&E database framework.

 The overall project budget, the extent of participation in and capacity for M&E, the assumptions that link the project objectives to specific interventions or activities, the project scope and size, and the project duration are the five highest-ranked drivers of monitoring and evaluation according to the relative importance index.

5.4 CONCLUSION

A monitoring and evaluation system describes a set of organisational structures, management processes, plans, indicators and standards that ensure that the monitoring and evaluation functions of a project are implemented effectively. The study aimed at assessing the monitoring and evaluation practices on construction projects in Ghana. Using purposive and snowball sampling, sixty questionnaires were received for analysis, and the analysis employed the one sample t-test, the relative importance index and mean score ranking. The top-ranked practices identified were: participatory monitoring approaches are used to determine performance; project mapping is conducted on project tasks; monitoring planning; stochastic techniques are used as part of monitoring practices; and variance analyses are conducted on the performance, schedule and cost of project tasks. The first five barriers were: there is a dominant use of donor procedures and guidelines in monitoring; lessons learned are not incorporated; the development of PM&E targets that are not consistent with the needs and values of intended beneficiaries; sustainability is often not considered; and the absence of a comprehensive national PM&E database framework. The overall project budget, the extent of participation in and capacity for M&E, the assumptions that link the project objectives to specific interventions or activities, the project scope and size, and the project duration were the five highest drivers of monitoring and evaluation.

5.5 RECOMMENDATION

The study recommends that practitioners take note of the barriers identified, as awareness of them will go a long way towards preventing them from occurring. The significant monitoring and evaluation practices identified are also important for all practitioners, and it is recommended that academics use them as an additional source of literature and information. The drivers identified are important in driving the implementation of project monitoring and evaluation and should also be taken note of.

5.6 LIMITATION

The location of the contractors was a huge challenge to this research, and the non-responsiveness of some of the respondents was also a major limitation of the study. The work was also limited to only a number of contractors in the Ghanaian construction industry. These limitations, however, did not undermine the study and its findings.

5.7 RECOMMENDATION FOR FUTURE RESEARCH

Project monitoring and evaluation are well known in academia, but it becomes very challenging when practitioners do not apply what they read. It is therefore recommended that a best-practice framework be developed for the implementation of project monitoring and evaluation practices. Further exploration is also required in the area of determining contractors' perspectives on project performance, which is expected to give the assessment process a broader outlook.

REFERENCES

Adinyira, E. and Ayarkwa, J. (2010) Potential critical challenges to


internationalization by Ghanaian contractors,The Ghana Surveyor Journal,
Vol.4No.1p.59-62
Agbodjah, L. S. (2009). A Human Resource Management Policy Development
(HRMPD) Framework for Large Construction Companies Operating in
Ghana. Kumasi:Kwame Nkrumah University of Science and Technology.
Ahadzie, D. K. (2007). A model for predicting the performance of project managers in mass house building projects in Ghana. Unpublished thesis (PhD), University of Wolverhampton, UK.
Al-Najjar, J., Enshassi, A. and Kumarawamy, M (2009) Delays and cost overruns in
the construction projects in the Gaza Strip, Journal of Financial Management
of Property and Construction, Vol .14 No. 2, pp. 126-127
Amoah, P., Ahadzie, D. K. and Dansoh. A. (2011). “The Factors Affecting
Construction Performance in Ghana: The Perspective of Small-Scale
Building Contractors.” The Ghana Surveyor 4 (1): 41–48.
Antoniadis, D.N., Edum-Fotwe, F.T& Thorpe, A., (2006). Project reporting and
Complexity In: Boyd D (ed) Proceedings of 22nd annual conference
ARCOM, UCE Birmingham.

Arthur, W.B., Durlauf, S. & Lane D., (1997). Introduction: The Economy as a
Complex Evolving System. In (Eds.) The Economy as a Complex Evolving
System II. Santa Fe Institute, Santa Fe and Reading, MA: Addison-Wesley.

Asare, O. E. (2010). ‘Utilization of the multiple aspects of my IMDP learning to


improve upon delays in the implementation of capital projects directly linked
to production sustainability of the Obuasi Mine’. IMDP Thesis, Graduate
School of Business University of Cape Town.

AUSAID, (2006) M & E Framework Good Practice Guide. Sydney. AUSAID.


Avert.Org. 2005: HIV/AIDS in Botswana.

Babbie, E. (2002). The basics of social research. Belmont, CA: Wadsworth


Publishing.

Baccarini, D., (1996). The Concept of Project Complexity – a review. International
Journal of Project Management.

Baguley, T. (2012). Serious stats: A guide to advanced statistics for the behavioral
sciences. London: Palgrave Macmillan.

Baiden, B.K., (2006), “Framework for the integration of the project delivery team”,
unpublished Doctoral Thesis submitted in partial fulfilment of the
requirement for the award of Doctor of Philosophy at Loughborough
University, Loughborough United Kingdom.
Beatham, S., Anumba, C., Thorpe, T., Hedges, I. (2004) KPIs: a critical appraisal of
their use in construction, Benchmarking, An International Journal, 11, 1, pp.
93-117.

Bernard, H.R. (2002) Research Methods in Anthropology: Qualitative and


quantitative methods. 3rd ed. California: AltaMira Press, Walnut Creek.
Bhagavan, M. R., Virgin, I. Generic aspects of institutional capacity development in
developing countries, Stockholm Environment Institute, 2004.

Blomquist, T., & Müller, R., (2006). Practices, Roles, and Responsibilities of Middle
Managers in Program and Portfolio Management. Project Management
Journal, 37(2), 52–66.

Bohn, S. J. (2009). Benefits and barriers of construction project monitoring using hi-
resolution automated cameras. Unpublished thesis (MSc), Georgia Institute of
Technology.

BQP/CPN (2001) KPIs – Drivers of improvement or a measurement nightmare,


Members’ Report 1149, Royal Academy of Engineering, British Quality
Foundation/Construction Productivity Network, London.

Brace, N., Kemp, R., &Snelgar, R. (2012). SPSS for Psychologists: A Guide to Data
Analysis, (revised and expanded).

Brau, J. C., &Woller, G. M. (2004). Microfinance: A comprehensive review of the


existing literature. The Journal of Entrepreneurial Finance, 9(1), 1.ent.
Economic Development Quarterly, 16(4), 360-369.

Bryant, R., Hooper, P., & Mann, C. (Eds.). (2010). Evaluating policy regimes: new
research in empirical macroeconomics. Brookings Institution Press.

Bryman, A., & Bell, E. (2011). Business Research Methods (3 ed.). New York,
United States: Oxford University Press Inc.

Buchanan, T., & Smith, J. L. (1999). Using the Internet for psychological research:
Personality testing on the World Wide Web. British journal of Psychology,

Castillo, E. (2009). Process optimization: A statistical approach. Journal of Quality


Technology, 40(2), 117-135.

Chaplowe, S. G. (2008). Monitoring and Evaluation Planning, American Red Cross/CRS M&E Module Series. Washington, DC and Baltimore, MD: American Red Cross and Catholic Relief Services (CRS).

Chesos R. (2010). Automated M&E systems for NGO’s. The coordinator, Issue no.5.

Cheung, S.O.; Suen, H. C. H.; & Cheung, K. W., (2004).PPMS: a Web-based


construction project performance monitoring system, Automation in
Construction.

Christou, N. V., Lieberman, M., Sampalis, F., &Sampalis, J. S. (2008). Bariatric


surgery reduces cancer risk in morbidly obese patients. Surgery for Obesity
and Related Diseases, 4(6), 691-695.
Cleland, D.I. & King, W.R., (1983).Systems analysis and project management ,
McGraw Hill, New York.

Cleland, D.I., & Ireland, I. R. (2007). Project management: Strategic design and
implementation (5thed.). Ney York, NY: McGraw-Hill.

Cochran, W. G., & Cox, G. M. (2002). Experimental Designs. New York, NY: Wiley.

Coco, G. (2000). On the use of collateral. Journal of Economic Surveys, 14(2), 191-214.

Cohen, H. H., & Cleveland, R. J. (2013). Safety program practices in record-holding plants. Professional Safety, 28(3), 26-33.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112, pp. 155-159.

Cohen, J., & Manion, N. (2007). Statistical Power Analysis for the Behavioural Sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

Creswell, J.W. (1994). Research Design: Qualitative and Quantitative Approaches. Thousand Oaks, CA: Sage Publications.

Creswell, J.W. (2003). Research Design: Qualitative, Quantitative and Mixed Methods Approaches (2nd ed.). Thousand Oaks, CA: Sage Publications.

Creswell, J. W. (2005). Educational Research: Planning, Conducting, and Evaluating Quantitative and Qualitative Research (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Dvir, D., Raz, T., & Shenhar, A. J. (2003). An empirical analysis of the relationship between project planning and project success. International Journal of Project Management.

Elonen, S., & Artto, K. A. (2003). Problems in managing internal development projects in multi-project environments. International Journal of Project Management.

Enshassi, A., Mohammed, S., Abu Mustafa, Z., & Mayer, P. E. (2007). Factors affecting labour productivity in building projects in the Gaza Strip. Journal of Civil Engineering and Management, 13(4), pp. 245-254.

Fadhley, S. A. (1991). A Study of Project Finance Banking with Special Reference to the Determinants of Investment Strategy. Unpublished Thesis (PhD), Loughborough University.
Faniran, O. O, Love, P. E. D., & Smith, J., (2000). Effective Front –End Project
Management – A key Element in Achieving Project Success in Developing
Countries, 2nd International Conference on construction in Developing
Countries: Challenges facing the construction industry in developing
countries.

Faniran, O.O., Oluwoye, J.O., & Lenard, D. (1998). Interactions between construction planning and influence factors. Journal of Construction Engineering and Management, 124(4), 245-256.

Faridi, A & El-Sayegh, S, (2006). Significant factors causing delay in the UAE
construction industry, Construction Management and Economics 24(11):
1167–1176.

FHI (2004). Monitoring and Evaluation of Behavioral Change Communication Programmes. Washington, D.C.: FHI.

Field, A. (2005). Discovering Statistics using SPSS for Windows, London: Sage
Publication.
Ghana National Development Planning Commission (2010).

Ghasemi, A., & Zahediasl, S. (2012). Normality tests for statistical analysis: a guide for non-statisticians. International Journal of Endocrinology and Metabolism, 10(2), 486-489.

Gyorkos, T. (2003). Monitoring and evaluation of large scale helminth control programmes. Acta Tropica.

Gujarati, D. N. (2003). Basic Econometrics. 4th ed. New York: McGraw-Hill.

Gyadu-Asiedu, W. (2009). Assessing Construction Project Performance in Ghana: Modelling Practitioners' and Clients' Perspectives. Eindhoven: Technische Universiteit Eindhoven.

Hair, J.F., Anderson, R.E., Tatham, R.L., & Black, W. C. (1998). Multivariate Data Analysis. Upper Saddle River, NJ: Prentice Hall.
Ministry of Education, Science and Technology (2000). Handbook for Inspection of Education Institutions.

Hardlife, Z., and G. Zhou. (2013). “Utilisation of Monitoring and Evaluation
Systems by Development Agencies: The Case of the UNDP in Zimbabwe.”
American International Journal of Contemporary Research, 3 (3): 70–83.
Harish, D. (2010). Performance Management Workshop. Retrieved from http://www.authorstream.com.

Hussey, J., et al. (1997). Business Research. Basingstoke: Palgrave.

Hyvari, I. (2006). Success of projects in different organizational conditions. Project Management Journal, Vol. 37.

IFAD (International Fund for Agricultural Development) (2002). A Guide for Project M&E. Rome: IFAD. Available at: http://www.ifad.org/evaluation/guide/toc.htm [Accessed 7 June 2013].

Kahilu, D. (2010). Monitoring and evaluation report of "the impact of information and communication technology service (ICTs) among end users in the ministry of agriculture and cooperatives in Zambia". Journal of Development and Agricultural Economics, 3(7), 302-31.
Kamau, C. G. and Mohamed, H. B. (2015). Efficacy of Monitoring and Evaluation
Function in Achieving Project Success in Kenya: A Conceptual Framework.
Science Journal of Business and Management. Vol.3, No.3, pp. 82-94. doi:
10.11648/j.sjbm.20150303.14.
Kerzner, H. (2017). Project Management: A Systems Approach to Planning,
Scheduling, and Controlling. Hoboken, New Jersey: John Wiley & Sons.
Laryea, S., and Mensah, S. (2010). “The Evolution of Indigenous Contractors in
Ghana.” In Presented at the West Africa Built Environment Research
(WABER) Conference, edited by S. Laryea, R. Leiringer, and W. Hughes,
579–588. Accra: West Africa Built Environment Research.
Lewis, J.L., & Sheppard, S.R.J. (2006). Culture and communication: can landscape visualization improve forest management consultation with indigenous communities? Landscape and Urban Planning, 77, pp. 291–313.

Ling, R. (2002). The social juxtaposition of mobile telephone conversations and
public spaces, Conference on Social Consequences of Mobile Telephones.
Chunchon, Korea.

Marangu, E. M. (2012). Factors influencing implementation of community-based projects undertaken by the banking industry in Kenya: A case of Barclays Bank of Kenya (Master's dissertation). Kenyatta University, Nairobi, Kenya.

Marsh, D., & Furlong, E. (2002). Ontology and epistemology in political science. In Marsh, D., & Stoker, G. (Eds.), Theory and Methods in Political Science (2nd ed.). Basingstoke: Palgrave.

Marsh, D., & Stoker, G. (Eds.) (2002). Theory and Methods in Political Science. Basingstoke: Palgrave.

Müller, R., & Turner, J.R. (2005). The impact of principal-agent relationship and contract type on communication between project owner and manager. International Journal of Project Management, 23(5), pp. 398-403.

Müller, R., & Turner, R. (2007). Matching the project manager's leadership style to project type. International Journal of Project Management, 25(4), pp. 21-32.

Naoum, S. G. (1991). Procurement and project performance: A comparison of management and traditional contracting. CIOB Occasional Paper No. 45.

Naoum, S., Fong, D., & Walker, G. (2004). Critical success factors in project management. In Proceedings of the International Symposium on Globalization and Construction, Thailand.

Naoum, S.G. (1998). Dissertation Research and Writing for Construction Students. Oxford: Butterworth-Heinemann.

Naoum, S.G. (2002). Dissertation Research and Writing for Construction Students. Oxford: Butterworth-Heinemann.
Navon, R. (2005). Automated project performance control of construction projects. Automation in Construction, Vol. 14, pp. 467–476.

Neuman, W. L. (2006). Social Research Methods: Quantitative and Qualitative Approaches (Vol. 13, pp. 26-28). Boston, MA: Allyn and Bacon.

Njiru, E. (2008). The Role of State Corporations in a Developmental State: The Kenyan Experience. A paper presented to the 30th African Association for Public Administration and Management Annual Round Table Conference, Accra, Ghana, 6th–10th October.

Ofori, G. (1980). The Construction Industries of Developing Countries: The
Applicability of Existing Theories and Strategies for Their Improvement and
Lessons for the Future; the Case of Ghana. Singapore: University of London.
Ogbonna, E. & Harris, L., (2000). Leadership style, organizational culture and
performance: Empirical evidence from UK companies. International Journal
of Human Resources Management, 11(4), 766-788.

Ogoe, E. K. (1993). Decentralisation and Local Government Reforms in Ghana: GNDC's Decentralisation Policies: The Case of Ahanta West District Assembly.

Omonyo, A. B. (2015). Lectures in Project Monitoring & Evaluation for Professional Practitioners. Deutschland: Lambert Academic Publishing.

Osei, V. (2013). The construction industry and its linkages to the Ghanaian economy: Policies to improve the sector's performance. International Journal of Development and Economic Sustainability, 1(1), 56-72.
Osei-Hwedie, M., (2010), “Strategic Issues of Innovative Financing of Infrastructure
Project Delivery”, Unpublished Thesis (MSc), Kwame Nkrumah University
of Science and Technology, Kumasi- Ghana.
Otieno, F. A. O. (2000). The roles of monitoring and evaluation in projects.
Available at: http://www.irbnet.de/daten/iconda/CIB8942.pdf [Accessed on 5
February 2016].

Owusu-Manu, D. (2008). Equipment Investment Finance Strategy for Large Construction Firms in Ghana. PhD Dissertation, Department of Building Technology, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana.
Papke-Shields, K. E., Beise, C., & Quan, J. (2010). Do project managers practice
what they preach, and does it matter to project success? International Journal
of Project Management, 28(7), 650-662.
PASSIA (2004). Civil Society Empowerment: Monitoring and Evaluation. A Guide to Project Management. Available at: www.passia.org/seminars/2002/monitoring.htm.

Patton, M.Q. (1990). Qualitative Evaluation and Research Methods (2nd ed.). Newbury Park, CA: Sage Publications.

Patton, M.Q., (2008). Qualitative Research and Evaluation Methods. Thousand Oaks,
CA: Sage Publications.

Prabhakar, G. P. (2008). What is Project Success: A Literature Review? International Journal of Business and Management, 3(9), 1-10.
Reymont, R., & Joreskog, K.G. (1993). Applied factor analysis in the natural
sciences. New York: Cambridge University Press.
Saunders, M., Lewis, P., & Thornhill, A., (2000), “Research Methods for Business
Students”, 2nd Ed, London: Pearson Education Limited
Senaratne, S. and Sexton, M.G. (2009) Role of knowledge in managing construction
project change, Journal of Engineering, Construction and Architectural
Management, Vol. 16 No. 2 pp. 186-7
Stevens, J. (1996). Applied multivariate statistics for the social sciences (3rd ed.).
Mahwah, NJ: Lawrence Erlbaum Associates
Tashakkori, A., & Teddlie, C. (2003). Handbook of Mixed Methods in Social & Behavioral Research. Thousand Oaks, CA: Sage Publications.
Tengan, C., Appiah-Kubi, E., Anzagira, L. F., Balaara, S., & Kissi, E. (2014). Assessing driving factors to the implementation of project monitoring and evaluation (PME) practices in the Ghanaian construction industry. International Journal of Engineering Research & Technology, 3(2), pp. 173-177.
Tengan, C., L. F. Anzagira, E. Kissi, S. Balaara, and C. A. Anzagira. (2014).
“Factors Affecting Quality Performance of Construction Firms in Ghana:
Evidence from Small–Scale Contractors.” Civil and Environmental Research,
6 (5): 18–23.
Tongco, D.C. (2007). Purposive sampling as a tool for informant selection. Ethnobotany Research & Applications, 5, pp. 147-158.
Williams, M. (2015). Project Delivery and Unfinished Infrastructure in Ghana’s
Local governments (Policy Brief No. 89105). International Growth Center.
Yin, R.K., (2009), “Case study research: Design and methods”, 4th Ed, London:
Sage Publications.
Zwikael, O. (2009) Critical planning processes in construction processes, Journal of
Construction Innovation, Vol. 9 No.4 pp. 372-375.

APPENDIX

KWAME NKRUMAH UNIVERSITY OF SCIENCE AND TECHNOLOGY


SCHOOL OF GRADUATE STUDIES
INSTITUTE OF DISTANCE LEARNING

RESEARCH QUESTIONNAIRE

Assessment of Project Monitoring and Evaluation Practices on Construction

Projects in Ghana

This set of questions is intended for the research work on Assessment of Project Monitoring and Evaluation Practices on Construction Projects in Ghana. The aim of the study is to assess project monitoring and evaluation practices on construction projects in Ghana. The work will be submitted to the Institute of Distance Learning, Kwame Nkrumah University of Science and Technology, in partial fulfilment of the requirements for the award of a Master's Degree in Project Management. All information will be used solely for academic purposes and will be treated as confidential.

Section A: Respondents' Characteristics


Please tick [√] where appropriate and provide brief answers where necessary.
1. Sex : Male [ ] Female [ ]
2. What is your educational level?
Diploma / Professional Certificate [ ] Bachelor’s Degree [ ]
Masters / Postgraduate Degree [ ] PhD [ ]
Others, specify …………………………….
3. What is your occupation?
Architect [ ] Civil/Structural Engineer [ ]
Project Manager [ ] Quantity Surveyor [ ]
Others, (please specify) ………………………………………
4. How many years of working experience do you have in the field of
construction?
1 – 5 years [ ] 6 - 10 years [ ]

11 – 15 years [ ] 16 years and above [ ]
5. Which professional body are you affiliated to?
Ghana Institute of Architects [ ] Ghana Institution of Surveyors [ ]
Ghana Institution of Engineers [ ] Project Management Professional [ ]
Others (please specify) ……………………………

Section B: Project Monitoring and Evaluation Practices


1. Rate the statements below: strongly disagree-1, disagree-2, neutral-3, agree-4, strongly agree-5.

Practices (please tick [√] under your choice of rating: 1, 2, 3, 4 or 5)
1. No clear feedback mechanism between the contractor and employer on projects
2. Monitoring planning
3. Technical audits are conducted during project implementation
4. No regular site inspections on road projects
5. No clear dispute resolution procedures for projects
6. There is no timely payment of contractors
7. There is poor record management on projects
8. Contract performance appraisal is done during project implementation
9. Project expectations are not clearly communicated to contractors
10. Contract supervisors do not prepare monitoring plans
11. Approved procedures are in place for contractor monitoring
12. Lack of adequate supervisory skills to monitor contracts
13. Project mapping is carried out on project tasks
14. Stochastic techniques are used as part of monitoring practices
15. A participatory monitoring approach is used to determine performance
16. Variance analysis is conducted on the performance, schedule and cost of project tasks
17. There is an appropriate system for forecasting project tasks
18. The firm conducts monthly project appraisals
19. Other, please specify: ……………………………………
20. ……………………………………
21. ……………………………………

Section C: Barriers in Practicing Project Monitoring and Evaluation
2. Rate the statements below: strongly disagree-1, disagree-2, neutral-3, agree-4, strongly agree-5.

Barriers (please tick [√] under your choice of rating: 1, 2, 3, 4 or 5)
1. Sustainability is often not considered
2. Lessons learned are not incorporated
3. There is a dominant use of donor procedures and guidelines in monitoring
4. The development of PM&E objectives that are not consistent with the needs and values of intended beneficiaries
5. The development of PM&E goals that are not measurable and therefore cannot be used to evaluate projects
6. Absence of a comprehensive national PM&E database and framework
7. Poor data quality, data gaps and inconsistencies
8. Non-compliance with planning and monitoring and evaluation guidelines
9. Limited resources and budgetary allocations for monitoring and evaluation
10. Weak linkage between budgeting, planning, and monitoring and evaluation
11. Weak institutional capacity
12. Weak interest in and utilization of monitoring and evaluation results
13. Others, please specify: ……………………………………
14. ……………………………………
15. ……………………………………
Section D: Drivers that push for the Implementation of Monitoring and
Evaluation Practices
3. Rate the statements below: strongly disagree-1, disagree-2, neutral-3, agree-4, strongly agree-5.

Drivers (please tick [√] under your choice of rating: 1, 2, 3, 4 or 5)
1. The overall budget of the project
2. The project duration
3. The extent of participation in and capacity for M&E
4. The project scope and size
5. The assumptions that link the project objectives to specific interventions or activities
6. The main beneficiaries or audience that the project seeks to benefit
7. The overall goal or desired change or effect of the project
8. Others, please specify: ……………………………………
9. ……………………………………

THANK YOU

