Performance Information Handbook
April 2011
National Treasury
Department: National Treasury
Republic of South Africa
Table of contents
1 CHAPTER 1 INTRODUCTION 1
1.1 Introduction 2
1.2 Legal and policy framework 3
1.3 Handbook content and approach 5
1.3.1 Content 5
1.3.2 Approach 6
1.4 Key definitions and distinctions 6
2 CHAPTER 2 DEVELOPING A PI FRAMEWORK 8
2.1 Introduction 8
2.2 Techniques/tools to systemise the PI Framework 9
2.3 Step 1: Indicator gathering 10
2.3.1 PI sources 10
2.4 Step 2: Prepare Performance Dimension (PD) template 10
Strategy Maps: clarifying inputs, outputs, outcomes for PI selection 10
2.4.1 Strategy Mapping 11
2.4.2 The Public Entity decision 12
2.4.3 Developing a Performance Dimension (PD) template 12
2.5 Step 3: Indicator filtering and selection 14
2.5.1 Classify indicators using the Performance Dimensions 14
2.5.2 Filtering and rating indicators 16
2.5.3 Reviewing and selecting indicators 20
2.6 Step 4: Additional decisions 21
2.6.1 Sustainability PI 21
2.6.2 Weighted PI index 21
2.7 Step 5: Validation and dissemination of PI Framework 22
2.8 Regulatory and administrative function challenges 22
3 CHAPTER 3 ENSURING QUALITY PI DATA 26
3.1 Introduction 26
3.2 Meeting minimum PI data standards 27
3.3 PI data assessment and improvement tool 27
3.3.1 Step 1: Identify and classify PI source datasets; undertake PI source data audit 29
3.3.2 Step 2: Ensure quality data from external agencies 30
3.3.3 Step 3: Assess the accuracy of internally collected data 30
i
3.3.4 Step 4: Assessing survey and administrative data for timeliness, interpretability and accessibility, and coherence and integrity 33
3.3.5 Step 5: Design and undertake data verification process 34
3.4 Verifying PI source data and the role of internal audit 34
3.4.1 Verifying PI Data 35
3.4.2 Step 6: Develop remedial strategies to address data quality risks and include in
PI plan 36
3.5 Developing strategies to store PI 36
3.6 The development of electronic record and PI systems 37
4 CHAPTER 4 ANALYSIS AND REPORTING OF PI DATA 39
4.1 Introduction 39
4.2 Analysis tools/techniques 39
4.2.1 Basic comparative analysis 39
4.2.2 Benchmarking 40
4.2.3 Scoring & rating 40
4.2.4 PI integration 42
4.3 Using PI in the annual budget cycle 42
4.3.1 Setting targets 43
4.3.2 Using PI in budget preparation 44
4.3.3 Budget Implementation and PI Reporting 46
5 CHAPTER 5 ASSESSING AND BUILDING PI CAPACITY 48
5.1 Introduction 48
5.2 Capacity requirement checklist 48
5.3 Guidance on priority capacity building activities 49
6 CHAPTER 6 DOCUMENTING PI SYSTEMS 50
6.1 Introduction 50
6.1.1 Performance Information Plans 50
6.1.2 The organisational PI Manual 51
6.2 Developing a PI Plan 51
6.2.1 Step 1: Develop a PI Improvement Sequencing Strategy 52
6.2.2 Step 2: Who is responsible for PI Organisational Arrangements? 53
6.2.3 Step 3: Develop PI Framework 54
6.2.4 Step 4: Describing and targeting improvements to data quality 55
6.2.5 Step 5: Setting up systems to analyse, report and use PI 56
6.2.6 Step 6: Capacity building and training 56
6.2.7 Step 7: Compile the annexures 56
ii
6.3 Organisational PI Manuals 57
Bibliography 61
Appendix A: The PI System at a Glance i
Appendix B: PI Framework Decision making Flowchart ii
Decision flowchart steps iii
Appendix C: Approaches to measuring environmental sustainability iv
Global Reporting Initiative (GRI) iv
Accounting for Sustainability Initiative iv
Appendix D: Reporting example vi
Appendix E: SASQAF data standards vii
NARSSA Standards vii
Appendix F: Correctional Services Centre Level Monitoring Tool ix
iii
Acronyms & glossary
Accounting Officer: The administrative head of a government department, constitutional institution or entity
Plan period: The five financial years to which the development of the performance plan relates
Presentational performance budgeting: PI is presented in budget documents but there is no link, or expectation of a link, between these PI and allocations
CHAPTER 1
INTRODUCTION
This Handbook provides descriptions of approaches and tools that national and provincial departments, public entities and constitutional institutions can use to implement the Regulations on Programme Performance Information developed by the National Treasury, as outlined in chapter 5 of the Treasury Regulations.
The PI Plan ensures the development of quality PI Systems over time. Organisations
are required by the Regulations on Programme Performance Information to submit PI
Plans to Parliament or the provincial legislatures and to report on their implementation.
A PI Manual is an internal guide within a department or entity to organisational PI
practice. It sets out the organisational PI Framework and clarifies roles and
responsibilities for the management and use of PI. A PI Manual is not required by the
new Regulations, but this Handbook advises organisations to compile one in order to
support the quality and effective use of PI.
1.1 Introduction
Different departments, institutions and entities are at varying stages with regards to
developing effective PI Systems. Some organisations have been developing their PI
Systems for decades. These organisations often use sophisticated electronic systems
to extract information from their electronic records (administrative, financial, human
resource and other) to PI datasets and then calculate indicator values. They apply
various target setting, rating and scoring techniques to interpret and analyse
performance data and have effective institutional systems to use the information in
organisational decision-making.
Other organisations at national and provincial level have PI Frameworks and Systems that are still rudimentary. Their only explicit programme performance indicators are those selected to comply with the Public Finance Management Act requirement to submit measurable objectives with budgets (in other words, for use in the Estimates of National Expenditure (ENE)), or to comply with the National Treasury Regulations on Strategic Plans and with the requirements of the Department of Performance Monitoring and Evaluation, the Department of Public Service and Administration (DPSA) and Statistics South Africa (StatsSA). The quality of these indicators is sometimes poor, often because they were selected without a proper organisational process and because the data used to calculate them for baseline and reporting purposes is not available or is unreliable. These organisations have weak systems to collect and store performance data, and PI receives little attention in organisational decision-making processes.
Organisations that have progressed in their development of PI Systems will confirm that
the development of a robust management system is an iterative process of trial and
error, even when a lot of effort is put in initially to design a good PI Framework. They
will also confirm that after more than a decade, the iterations, trial and error and
improvements still continue.
This chapter shares the lessons learned from the experiences of some South African
departments and entities in developing their PI Systems and provides guidance on how
best to sequence the process from the position of weak PI.
A core system design step is to decide for each indicator how the indicator will be used
in organisational planning, budgeting, execution and reporting processes. Key
questions are:
Who is responsible for managing the indicator and related target?
For the collection of data from source data systems?
For calculating the indicator?
For interpreting the results (chapter 4)?
How will targets for the indicator be identified and validated with affected parties?
When in organisational decision-making and review cycles will the indicator be
used?
How will the indicator be reported on, how frequently, by whom and to whom?
Will performance against the indicator be benchmarked against any target, against
previous performance or against performance of other units undertaking the same
work?
When will the indicator be reviewed to confirm its continued usefulness?
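By way of illustration only (this structure is not prescribed by the Regulations or by the Excel Workbook that accompanies the Handbook), the answers to the questions above can be captured per indicator in a simple structured record. The field names and example values in the Python sketch below are assumptions made for the illustration.

from dataclasses import dataclass

@dataclass
class IndicatorManagementRecord:
    """Illustrative record of the management decisions taken for one indicator."""
    name: str                    # short name of the indicator
    manager: str                 # who manages the indicator and the related target
    data_collector: str          # who collects data from source data systems
    calculator: str              # who calculates the indicator value
    interpreter: str             # who interprets the results (see chapter 4)
    target_setting: str          # how targets are identified and validated with affected parties
    decision_points: str         # when in decision-making and review cycles the indicator is used
    reporting: str               # how, how frequently, by whom and to whom it is reported
    benchmark: str               # target, previous performance or comparison with other units
    review_cycle: str            # when the indicator's continued usefulness will be reviewed

# Hypothetical example:
example = IndicatorManagementRecord(
    name="Average number of days taken to issue an ID book",
    manager="Chief Director: Civic Services",
    data_collector="Regional offices (application tracking records)",
    calculator="M&E unit",
    interpreter="Programme manager",
    target_setting="Validated annually with regional managers",
    decision_points="Quarterly performance reviews",
    reporting="Quarterly, by the M&E unit to the executive committee",
    benchmark="Previous year's performance",
    review_cycle="Annually, during strategic planning",
)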
This Handbook assists departments (national and provincial), public entities and
constitutional institutions to achieve the government’s aspirations to manage
performance through the development of robust Performance Information (PI)
Frameworks and Systems.
The intended users of the Handbook are the organisational units and individuals
designated as responsible for the determination of a PI Framework and the
development and management of the resulting PI System. The Handbook will also be
useful for programme and unit managers who are providing input into the organisational
PI Framework and System, or who want to develop more comprehensive sub-
frameworks for their specific programmes, sub-programmes, units or projects.
This chapter outlines the legal and policy framework, the Handbook’s content and approach, and key definitions and distinctions.
Accounting officers are responsible for setting performance targets and managing PI. In terms of Section 27(4) of the PFMA, national departments’ accounting officers must submit measurable objectives with their draft budgets to Parliament, and provincial accounting officers must submit theirs to provincial legislatures. In terms of Section 40(1) and (3)
accounting officers must provide information on departments’ achievements against
their predetermined objectives in the Annual Report; and in terms of Section 55(1) and
(3) accounting authorities of public entities should do the same. Furthermore, in Section
38(1)(b) accounting officers of departments and constitutional institutions are
responsible for the transparent, effective, efficient, and economical use of resources of
the department or constitutional institution.
In terms of the Public Service Act (1994) Section 7A(4)(c) executive authorities
determine the reporting requirements of the heads of government components,
including public entities, to the head of the principal department to enable oversight of
the component in respect of policy implementation, performance, integrated planning,
budgeting and service delivery.
The Policy Framework for the Government-Wide Monitoring and Evaluation (GWM&E) System, published in 2007 by the Presidency, emphasised the importance of monitoring and evaluation in realising a more effective government. It identified three data terrains that together comprise the sources of information on government performance: (i) evaluations, (ii) programme PI and (iii) social, economic and demographic statistics (SEDS). It assigned to accounting officers accountability for the frequency and quality of monitoring and evaluation information and for the integrity of the systems responsible for the production and utilisation of the information, and it requires prompt managerial action in relation to M&E findings.
The GWM&E Policy Framework identifies the National Treasury as the lead institution responsible for programme PI. This is in line with its constitutional authority over performance information and its responsibility for prescribing measures to ensure transparency and expenditure control in each sphere of government, as outlined in sections 215 and 216 of the Constitution.
In 2007 the National Treasury issued the Framework for Managing Programme
Performance Information (FMPPI). The aims of the FMPPI are to:
define roles and responsibilities for PI,
promote accountability to Parliament, provincial legislatures and municipal councils
and the public through timely, accessible and accurate publication of performance
information,
clarify standards for PI, supporting regular audits of non-financial information where
appropriate,
improve the structures, systems and processes required to manage PI.
The document outlines key concepts in the design and implementation of management
systems to define, collect, report and utilise PI in the public sector.
The National Treasury in accordance with the PFMA must promote and enforce
transparency and effective management in respect of revenue, expenditure, assets and
liabilities of departments, entities and constitutional institutions.
The Department of Performance Monitoring and Evaluation will collaborate with the National Treasury in supporting departments to develop Performance Information Plans and Performance Information Systems. The department is currently developing a monitoring and evaluation information technology system that will support the development of monitoring and evaluation systems by the various departments.
In 2009 government re-affirmed its intention to shift its high-level management focus
from inputs (budgets, personnel and equipment) and outputs to managing for outcomes.
The Department of Performance Monitoring and Evaluation (PME) has recently
announced the adoption of 12 measurable outcomes that will become the focus of
government policy and implementation. Specific departmental performance targets will
be finalised once service delivery agreements are concluded in support of the identified
outcomes.
With the renewed outcome focus, accountability will also shift from just being about
compliance with regulation, to include accountability for service delivery outputs and
outcomes. This accountability will be at the political level, through mechanisms
developed by PME1, and at a managerial level between Ministers and accounting
officers. The Minister in the Presidency: National Planning emphasised that the central
planning instruments such as the Medium Term Strategic Framework and the National
Plan of Action will focus much more on measurable objectives and timelines.
1.3.1 Content
Most organisations already have some form of PI in place, namely the indicators
identified in their Strategic Plans and reported on in their Annual Reports, and a system
to manage them. The structure of the Handbook is built around a series of tools that
enable PI managers in national and provincial departments, public entities and
constitutional institutions to assess and improve their PI Systems; from the choice of
indicators to assessing and improving the human resource and system capacity to
manage PI.
Performance indicators in different departments and entities are often associated with quite different sets of approaches and tools, for example logical frameworks, results-based management (RBM) techniques, and the balanced scorecard (BSC) approach. The Handbook therefore takes a broad approach that accommodates the different methodologies adopted and provides tools to:
Map out organisations’ existing policies, strategies and plans,
1 At the time of compiling this Handbook the proposal was that Ministers and MECs would have performance agreements with the President, followed by six-monthly reporting on progress, while sector institutions would commit to achieving performance, measured by selected performance indicators, through sector forums and sector delivery agreements.
Test whether the performance indicators proposed in them are adequate against key
FMPPI criteria,
Encourage the addition or improvement of indicators.
1.3.2 Approach
This Handbook introduces useful approaches and tools; explains key concepts; pools
information regarding various regulations, policies and guidelines in the South African
public sector relevant to the management of programme PI; and provides examples of
the application of key concepts, approaches and tools.
The tools provided in this Handbook can be used by all organisations. The appropriate application of the tools, however, requires organisations to understand their functions and structures. This might mean adjusting some tools to fit the organisation’s requirements, or leaving out steps of the tools that are not applicable to its specific environment.
A Microsoft Excel Workbook is provided with this Handbook (see
www.treasury.gov.za/publications/other). The Workbook includes the PI Framework and
data assessment databases, various worksheets and a help function, all of which can
be accessed from a central worksheet. It is expected that organisations may adjust and
apply the tools in an organisation-relevant way. The text therefore frequently refers to
the possibility of adjusting the content of the tools to sector or organisation-specific
imperatives, values and structures.
The Performance Information Plan, the Strategic Plan and the Annual
Performance Plan: The Strategic Plan and the Annual Performance Plan (APP) are
required in terms of the Treasury Regulations. These plans set out the organisation’s
goals and objectives, the strategies to achieve these objectives and the annual
performance targeted by programmes to achieve the identified goals. These plans
would set the targets attached to the indicators selected to measure organisational
performance. The PI Plan will set out the organisation’s strategy to manage
performance information that is required to construct the indicators and report against
the targets set in the Strategic Plan and the APP, amongst others.
CHAPTER 2
DEVELOPING A PI FRAMEWORK
2.1 Introduction
The ‘Improving Government Performance: Our Approach’ (PME) proposal of 2009
demands that:
“…we need to focus more on outcomes as we use our time, money and management…
This requires a shift of focus from inputs - budgets, personnel and equipment - to
managing for outcomes”. (PME, 2009, p3)
[Diagram: the outcomes approach]
The Medium Term Strategic Framework and other key strategy documents are translated into 12 main outcome indicators. They form a clear, simple expression of Government’s mandate. Indicators and targets are set for these outcomes.
The most valuable outputs that influence each outcome are defined, and the essential inputs that form part of the delivery chain for the outputs are identified.
A list of activities required to achieve each of the outputs is created.
The President appoints Ministers accountable for the delivery of each outcome. A delivery forum is created comprising the institutions or agencies with a role in delivering; the delivery forum negotiates terms of delivery and procedures in a Delivery Agreement.
A Cabinet cluster is established for each outcome. The cluster agrees on how best to implement the delivery agreement and produces six-monthly reports to the President.
The key aim of the PI Framework is to add ‘context’ to any PI System by integrating “performance indicators with long term goals of the community, perhaps as stated in a strategic plan” (Theurer, 1998, p22). Quality indicators against national and organisational objectives emanate from progressive improvement, experience and adaptation to changing circumstances, and regular review.
The selection of a few critical indicators, which will measure service delivery on the key mandate of the organisation for strategic and high-level operational management purposes, will ensure that executive management is not overwhelmed with too many indicators (while providing appropriate and strategic coverage of the major service delivery demands). Organisations should apply the Pareto principle: 20 per cent of the indicators will satisfy 80 per cent of the PI demands. This does not obviate the need to measure and monitor a vast array of other additional information operationally, for which responsibility is assigned at various lower levels through a PI Framework.
The diagram below sets out in broad terms the methodology proposed here.
[Diagram: the five steps of the proposed methodology, from indicator gathering and preparation of the Performance Dimension template, through indicator filtering and selection and finalisation of the Performance Dimension allocation, to validation and dissemination of the PI Framework.]
2 This Handbook is not a comprehensive manual for all the tools and techniques referred to; sometimes it only provides a reference to further information that can be accessed. This is deliberate, to limit the size of this guide and to refrain from unnecessary detail, especially where organisations have already built topic capacity.
While this diagram describes a linear process, the indicator selection process is not
linear and will invariably require a return to earlier steps, on account of gaps identified,
weak data, problems in setting targets or as a result of benchmarking. Appendix B
provides a more detailed decision flowchart of the steps and illustrates the necessity of
returning to earlier steps to strengthen the resulting PI Framework.
2.3.1 PI sources
Prior to final selection, all existing and potential indicators should be assembled for
entry into the Performance Dimension Template.
Outputs: the final products, or goods and services, produced for delivery through organisational processes. Outputs may be defined as "what we produce or deliver".
Outcomes: the medium-term results for specific beneficiaries that are the consequence of achieving specific outputs. Outcomes should relate clearly to an institution's strategic goals and objectives set out in its plans. Outcomes are "what we wish to achieve".
The PI Framework should include indicators that are within the control of the
organisation and those that are important to track from a policy management
perspective. There needs to be a balance between organisation specific operational
indicators and policy-oriented indicators.
Thus the sorting of indicators into Performance Dimensions will require the organisation
to be clear about the relationship between inputs, outputs and outcomes against
organisational objectives, even if many outputs from different organisations contribute to
the achievement of an outcome (see Box 2.1 below for a proposal to manage these
outcomes across the public sector).
Although there is recognition that the PI demands of public sector departments and
entities can be quite different and sometimes more complex when compared to the
private sector for which the Balanced Scorecard (BSC) originally evolved, it is
suggested here that strategy mapping can be applied independently to assist indicator identification.
The compilation of ‘strategy maps’, which makes these relationships clear, is useful in
the development of a PI Framework. It identifies what ‘needs’ to be measured and
enables the organisation to compare the results with existing PI. However,
consideration might also be given to using the ‘Logical Model’ or Results-Based
Management (RBM) based techniques to organise indicators from the above sources
into a hierarchy of inputs, outputs and outcome levels. A ‘step-by-step’ guide to compile
strategy maps is not provided, although additional readings are available. We
recommend using the PD tool.
What is a PI Hierarchy?
Classifying indicators into a ‘hierarchy’ mainly enables PI management to be arranged
and responsibility assigned to the appropriate level within the organisation, so that any
one level is not overwhelmed by the magnitude of the PI being managed by it.
It is suggested that any layer of the management hierarchy can only reasonably manage approximately 20 indicators on a regular basis. When the number of indicators exceeds this, it is time to consider whether responsibility for the excess can be assigned to another level within the organisational structure.
3 The ‘strategy’ level is illustrated by KRAs from the Department of Agriculture’s Strategic Plan, 2009.
Diagram 2.3 Performance Dimensions and the Performance Cube

It is also especially important to note that any one specific indicator can satisfy multiple objectives and values, and that an indicator that meets multiple criteria would be preferred to one that meets limited or few criteria. Where this is the case, apply the convention of assigning all criteria that apply to that indicator.
The first dimension is the organisation’s strategy or objectives, in other words what it wants to achieve. These objective statements would normally be found in the strategic plan.
The ‘side’ dimension is the organisation’s structure.
The top dimension is the indicator values or characteristics selected by the organisation to classify the ‘type’ of indicator used.
The PD can be adapted to suit the circumstances of the organisational structure and
values being considered. For example, ‘Sectors’ are included in the PD diagram (see
Diagram 2.4 below) to recognise that some indicators may be classified as they relate
to an overall sector with multi-department involvement, most likely crucial to national
priorities. But another structure such as a small public entity may not have a need for
this level and would also replace the ‘Department’ level with the label ‘Public Entity’.
Similarly, organisations can either use the suggested set of values or include other
values described in their strategic plan. There may be suggested values, e.g.
‘technology’ and ‘innovation’ that may not be applicable to all organisations.
The concepts here are presented in a graphic format, considered the best way to
understand the requirements. However, in practice and especially when dealing with a
large number of indicator proposals, the techniques described in this chapter (as well as
techniques described in chapter 4) are best implemented in a simple database format.
NT has developed Excel spreadsheet tools to assist
(www.treasury.gov.za/publications/other).
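The sketch below illustrates, under stated assumptions, what such a simple database format might look like: each indicator is stored as a row tagged with the three PD aspects, and the ‘blocks’ of the cube are scanned for gaps and over-measurement. The field names and sample data are invented for illustration and do not reproduce the NT spreadsheet tools.

from collections import defaultdict
from itertools import product

# Each row: (indicator name, objective/KRA, structure level, value/characteristic) -- hypothetical data.
indicators = [
    ("Hectares of land redistributed", "KRA 1", "Department", "Productivity"),
    ("Cost per hectare redistributed", "KRA 1", "Department", "Efficiency"),
    ("Training sessions delivered", "KRA 2", "Programme 1", "Productivity"),
]

objectives = ["KRA 1", "KRA 2"]
levels = ["Department", "Programme 1"]
values = ["Productivity", "Efficiency"]

# Group indicators into PD 'blocks'.
blocks = defaultdict(list)
for name, kra, level, value in indicators:
    blocks[(kra, level, value)].append(name)

# Blocks with no indicator (potential measurement gaps).
gaps = [b for b in product(objectives, levels, values) if b not in blocks]
# Blocks with more than one indicator (potential over-measurement).
over_measured = {b: names for b, names in blocks.items() if len(names) > 1}

print("Gaps:", gaps)
print("Over-measured blocks:", over_measured)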
Diagram 2.4 Performance Dimension ‘slices’

Following this, the examiner should inspect the ‘blocks’ that have multiple ‘●’s. There are a number of considerations:
Over-measurement of that PD factor, displayed by multiple indicators
Indicators may be considered for higher or lower level use in the organisational PD structure
Potential to select the best indicator for that PD factor and drop the other indicators, subject to further assessment to be discussed below.

A classification gap may simply exist because the desired government outcome cannot be measured directly or the data is not immediately available. For example, it may be decided that a survey is needed to develop a measurement index. But before becoming relevant such an indicator needs to be measured at least twice, and if this is done by annual survey there will obviously be a two-year delay before the indicator can be used.

4 A PI classifier and/or examiner should have a good knowledge of the strategic plan.
Until then, consideration would need to be given to elevating a lower level operational
indicator to achieve ‘proxy’ KRA coverage.
The other advantage of the PD classification criteria is that it provides a simple thought
provoker to identify an aspect of government activity that may not have been measured
previously but should be. For example, ‘innovative’ practices may not have featured in a
previous PI Framework but when the gap is highlighted it is realised that there is an
excellent innovative practice occurring that is not being measured.
Due to the nature of public sector organisations, the usual outcome of a ‘first round’ classification is a heavy concentration of quantitative ‘productivity’-type indicators. This suggests either that some KRAs are inadequately managed, or that too many indicators are selected for political structures or executive management to manage rather than being assigned operationally within the organisation.
The 2007 FMPPI explained the concept of SMART: the acronym for performance
targets being specific, measurable, achievable, relevant and time-bound. The rating
system in the PI Framework tool extends the classification criteria to include other
FMPPI indicator criteria considerations. The tool also provides a rating and scoring
method to help decide whether a specific indicator should be used or developed.
Focus on the factors that are crucial to success and measure "what is important,
not make important what you can measure” (Evans and Richardson, 2009, p16)
The following illustration shows the header labels copied from the selection criteria
‘matrix’, a simple Excel spreadsheet template. This includes the PD classification and
some of the classification factors, to become the basis of rating and indicator selection.
Diagram 2.5 PI Selection matrix
[Example extract from indicator selection tool]

Determine the name of the indicator: Ensure that the name truly describes the indicator and is kept as short as possible without corrupting the meaning.

Indicator objective: Description of what the indicator is intended to achieve, e.g. measurement of the number of staff attending a training session would be described as ‘capacity building’. Departments must ensure that indicator objectives are not too broad, in order to avoid a situation where very many indicators are linked to any one objective.

Indicator interpretation: Description of how the result of the indicator should be interpreted for a positive or ‘good’ outcome. Often this can be obvious, e.g. a high attendance rate at a capacity building training programme would be considered a favourable outcome compared to a low level of attendance. But for some complex economic and financial indicators it is not always obvious, e.g. an increase in the general level of interest rates is viewed favourably by investors, but not borrowers.

Performance Dimension classification: This criterion matches to the graphical representation of the technique. There are three aspects: Strategy/Objective (most commonly the abbreviated KRA from the strategic plan), Structure (a first ‘slice’ proposal of the organisation level where the indicator proposer believes the indicator belongs), and Value/Characteristic.
Data availability: Is the data necessary for calculating the indicator currently
available from an existing system, or is data collection design required? The
preference for ease of indicator implementation would be using existing data. The
regularity of data updating as well as the reliability and credibility of the data should
also be taken into consideration.
Leading/Lagging classification: A leading indicator provides evidence of a future
outcome in advance of its occurrence, while a lagging indicator explains past
outcomes. Stock market returns are a leading indicator for economic performance:
the stock market usually begins to decline before the economy as a whole declines
and usually begins to improve before the general economy begins to recover from a
downturn. The unemployment rate is a lagging indicator of general economic
performance: employment tends to increase two or three quarters after an upturn in
the general economy. Frequently the focus is on lagging indicators as these are the
easiest to compute, but lagging indicators by themselves promote a philosophy of
identifying and correcting past divergence from plans rather than avoiding future
problems. A balance between leading and lagging indicator types is preferred to
alert managers to areas where corrective action is required and allow corrective
action to avoid problems before they arise.
Economy/Effectiveness/Efficiency classification: FMPPI defines economy as
exploring “whether specific inputs are acquired at the lowest cost and at the right
time; and whether the method of producing the requisite outputs is economical”;
efficiency as “how productively inputs are translated into outputs” indicating a desire
for the maximum outputs for a given level of inputs or minimum inputs for a given
level of outputs; and effectiveness is defined as “the extent to which the outputs
achieve desired outcomes”. None of these indicator types is necessarily ‘better’ than
the other, but the purpose of this classification is to encourage that a mix used5.
Triple Bottom Line classification: This was introduced as part of the ‘Balanced
Scorecard’ approach to ensure that there was a balance in the PI being applied, that
social, environmental and economic factors should be considered concurrently.
Community/Customer/Client Satisfaction Influence classification: FMPPI
requires ‘who benefits’ as an indicator selection criterion. A key consideration is
whether an improvement in the indicator outcome or achievement of the outcome
will have a ‘direct’ genuine impact on the organisation’s community/customer/client
satisfaction. A preference for indicator selection is for indicators that have a positive
or high influence on satisfaction.
Departmental 'influence' on indicator outcome: A direct relationship exists
between FMPPI ‘accountability’ requirements and an understanding of the degree to
which an organisation can influence the outcome. Without influence it is not feasible
to be held fully accountable. There will be indicators (generally outcome indicators)
that are extremely useful, but preference would be given to those over which an
organisation can exert influence and change the outcomes. Similar to customer
satisfaction this criterion will also be somewhat subjective. Achievement of the target
of each proposed indicator should be categorised as likely reflecting high, medium
or low organisational influence.
5 In practical terms it is commonly the case that the same indicator could be used as a measure of economy or efficiency, and it may be difficult to discern which. Do not be concerned with this issue; select the most likely for sorting and ranking purposes, being aware that the distinction should not be used to eliminate an indicator from consideration.
Diagram 2.6 Indicator rating illustration

What determines whether an indicator is good, average or poor? The technique proposed is a systematic consideration of selected key FMPPI criteria that enables each indicator to be automatically rated. When a performance report states “data unavailable” against a selected indicator, this calls into question how the PI was initially selected, as performance cannot be measured where the data is unavailable.
6 Users of the suggested ‘tool’ can simply adapt the suggested scoring to suit their needs.
The next step would be to review the ‘‘blocks” that do not have an indicator. The
question has to be asked whether non-measurement, especially at the organisational
level, has major implications. One alternative might be to temporarily elevate a lower
level indicator to the strategic level, even though it does not meet the importance
criteria. More directly a new indicator may need to be developed, which will take some
time.
Next consider whether some aspects of the dimension are being over measured, by
having more than one indicator for the same ‘block’. If this is the case consider
eliminating or assigning responsibility for the additional indicators with the lowest
scores. Maintain a record of indicators eliminated, and decide whether the data is to be
collected so that reporting can be continued on an ad hoc basis even though not part
the Strategic Plan or MTEF. If this is the case those documents should record the data
being collected for this purpose so that users are aware of availability.
Consider the baseline budget indicators proposed by the National Treasury and any
MTEF Guidelines issued. It is also important to consider indicators required or
Now revisit the output/outcome consideration discussed in box 2.1. Where an outcome
indicator is retained even though it may not score well in terms of the indicator rating
criteria, it should be highlighted and included within the organisation’s reportable
indicators. Due to their nature, such indicators would ordinarily have high prominence.
2.6.1 Sustainability PI
Financial, economic and environmental sustainability are common concepts of concern
to national, provincial and local governments. The inclusion of sustainability PI concepts
in PI Frameworks is therefore encouraged. Financial and economic sustainability is
commonly defined in terms of progression toward service delivery goals without the
need for large and disruptive changes in revenue policy or risk of economic shocks.
A simple and often inadequate solution has been to focus on the time element. If a
strategy document needed to be developed or a policy written, regardless of the
intended real outcome of the strategy or policy, the performance measurement often
focussed on ‘was it done by the due date’, often with little consideration as to how the
due date relates to the quality of life improvement of South African citizens or even the
quality of the document. Schacter (2006) has proposed additional objective criteria to attempt to produce a measure of performance linked to outcomes, including assessments of:
Adequate consultation undertaken
Purpose articulation
Logic of the advice or report
Accuracy and completeness of the underlying evidence
Balanced viewpoint presented
Range of viable options presented
Presentation quality
Pragmatic advice
An example assessment, assuming a target set on the basis of a weighted total score is
shown below:
Deciding on the individual indicators that will be included and their respective weights is not an easy task; it most likely needs a Minister or other senior government official who has oversight of the function, or a group of senior officials, to undertake such an assessment. A simple weighted and scored questionnaire with a target score would nevertheless be a substantial improvement on a report against a completion due date.
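A hedged illustration of such a weighted and scored questionnaire, using the criteria listed above; the weights, the scores out of five and the target score are invented for the example and are not prescribed values.

# Weights (summing to 1.0) and scores out of 5 are hypothetical.
weights = {
    "adequate consultation": 0.15,
    "purpose articulation": 0.10,
    "logic of the advice": 0.15,
    "accuracy and completeness of evidence": 0.20,
    "balanced viewpoint": 0.10,
    "range of viable options": 0.10,
    "presentation quality": 0.05,
    "pragmatic advice": 0.15,
}
scores = {   # one assessor's scores out of 5 for a single policy document
    "adequate consultation": 4, "purpose articulation": 3, "logic of the advice": 4,
    "accuracy and completeness of evidence": 3, "balanced viewpoint": 4,
    "range of viable options": 2, "presentation quality": 5, "pragmatic advice": 3,
}
weighted_total = sum(weights[c] * scores[c] for c in weights)   # out of 5
target = 3.5                                                    # illustrative target score
print(round(weighted_total, 2), "meets target" if weighted_total >= target else "below target")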
In a number of cases the service delivery function may be performed by another sphere
of government which may even be on an agency basis. This challenge has been met
elsewhere in the world. Table 2.2 indicates some OECD suggestions for consideration
in respect of the ‘regulatory’ function and table 2.3 provides some advice in respect of
the administrative function.
Measure ‘cost’: cost per output/unit rate, e.g. standard report production; rates of office equipment utilisation; administrative activity (cost per item such as invoice) (Evans and Richardson, 2009)
Measure ‘quality’: number of administrative errors; number of customer complaints (Evans and Richardson, 2009)
Measure administrative ‘flexibility’: staff availability (Evans and Richardson, 2009)
Measure ‘speed’: document turnaround time; transaction processing time (Evans and Richardson, 2009)
Measure ‘reliability’: reports issued on time (Evans and Richardson, 2009)
Table 2.3 highlights that generally the best available measure of performance for
administrative functions is ‘process’ (output) orientated rather than performance
outcomes.
Also refer to section 4.2, which includes a discussion on Data Envelopment Analysis.
[Flowchart: steps in developing the PI Framework; a more detailed decision flowchart is provided in Appendix B]

Step 1: Gather key indicators. Undertake an indicator scan and list: indicators used in the Strategic Plan and Annual Report; indicators used in the ENE; other key indicators used internally; and all indicators required by external role players.

Step 2: Classify the indicators in Performance Dimensions (PDs) (see Chapter 2 for guidance on designing a PD Template).

Step 3: For each indicator, check whether it is required by external role players; if so, include it in the draft PI Framework for review during data testing.

Step 4: Investigate all indicators in the PD cube. Check that there is at least one productivity (success) indicator that measures achievement on each organisational objective (sub-steps 4.1 and 4.2) and at least one productivity and one efficiency indicator per budget programme (sub-steps 4.3 and 4.4), and develop indicators for the objectives and programmes that are not covered. Include the identified indicators in the draft PI Framework for review during data testing.

Step 5: Compile the draft PI Framework, which includes all indicators selected in Steps 3, 4.1, 4.2, 4.3 and 4.4.

Step 6: Undertake a data availability test on each indicator: determine what financial and non-financial data is required to construct the indicator, in which format and how frequently, and whether the data is available. Where data is not available, determine a proxy indicator (informing any external role player of the use of the proxy) or develop a strategy to collect the data for the original indicator, provided the strategy is feasible and cost effective; where it is not, negotiate a new indicator for which data is available, or select a new indicator and restart at Step 6. If data collection strategies include surveys or indices, ensure that statistical expertise is used in their design.

Step 7: Compile the 2nd draft PI Framework and, for each indicator, complete the PI Rating and Scoring (including the SMART test) as set out in Chapter 2.

Step 8: Include the implementation of urgent data collection strategies in the PI Plan.

Step 9: Complete the PI Framework: the best possible coverage of PI requirements, data available for all indicators, and indicators that pass the SMART test.
CHAPTER 3
ENSURING QUALITY PI DATA
3.1 Introduction
The quality of performance information is a function of the quality and appropriateness
of the indicator selected and the quality of the data used to calculate the indicator.
Levels of PI data: The chapter acknowledges that generally PI data comprises at least
two layers of records, which exist at different levels for the purposes of PI management.
At the first level are all the PI source records and datasets. This refers to the records
that are generated in the implementation of an organisation’s programmes: patient
records, case files, logs of water quality tests, delivery receipts of school meals
delivered, the application file of an ID book applicant. These are the original records
which often comprise the evidence for verifying PI source data. These underlying
records are counted, either manually or electronically, to form source datasets (e.g.
number of malaria cases reported, number of water quality tests, number of primary
school children provided with a meal at school daily, average number of days taken
to issue an ID book).
The values in these source datasets in respect of selected indicators are recorded at
predetermined moments in time e.g. at the end of each month, to form a PI record.
PI records form the second level of PI data.
Information at both levels needs to be collected in line with data quality standards and
maintained to ensure authenticity, reliability, integrity and usefulness.
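A minimal sketch of the two levels described above, using hypothetical malaria case records: the underlying source records are counted to form a source dataset value, which is then captured as a PI record at a predetermined moment in time.

from datetime import date

# Level 1: PI source records (e.g. one record per malaria case reported at a clinic) -- hypothetical data.
source_records = [
    {"case_id": 101, "site": "Clinic A", "reported": date(2011, 3, 4)},
    {"case_id": 102, "site": "Clinic A", "reported": date(2011, 3, 18)},
    {"case_id": 103, "site": "Clinic B", "reported": date(2011, 3, 27)},
]

# Count the source records for the reporting period to form the source dataset value.
march_cases = [r for r in source_records if r["reported"].month == 3]
dataset_value = len(march_cases)

# Level 2: the PI record captured at the predetermined moment (the end of the month).
pi_record = {"indicator": "Number of malaria cases", "period": "2011-03", "value": dataset_value}
print(pi_record)   # {'indicator': 'Number of malaria cases', 'period': '2011-03', 'value': 3}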
This chapter provides tools to assess the quality of the systems that generate, collect and maintain records at both levels, and thereby the quality of the datasets themselves.
This chapter provides guidance on data quality standards, data storage principles and
approaches to verifying data for administrative PI source records and surveys. The
guidance is presented in the form of a data assessment tool, which can be applied to
both PI source data and PI datasets. Finally, key principles for the development of
electronic data storage systems will be discussed.
The Microsoft Excel Workbook provided with this Handbook includes a worksheet that
can be completed for each dataset assessed, and a database to record decisions with
regards to the dataset.
Use the identified weaknesses in data collection and storage systems to develop
corrective action and contribute to the PI Plan.
Diagram 3.1: Data quality assessment and improvement tool flowchart

Step 4: Assess survey and administrative data for timeliness, interpretability and accessibility, and coherence and integrity.
Timeliness: Is the data reference period aligned with the PI period? Is data available punctually? Can PI be updated as required?
Interpretability: Is metadata available for each dataset? Is the PI Manual available? Can and does the organisation certify the accuracy of PI data and set out features that affect its quality?
Accessibility: Can survey and administrative data be accessed in a variety of formats? Are surveys and administrative records catalogued transparently? Are access restrictions documented? Can data users access support?
Coherence: Are common frameworks, concepts, definitions, classifications and methodologies agreed, documented and used within and across datasets? Are data series consistent and reconcilable over time? Are departures from common sources identified?

Step 5: Identify risky datasets, and design and undertake data verification processes.

Step 6: Develop remedial strategies to address data quality risks and include them in the PI Plan.

Completion of the data assessment tool will provide (i) a description of PI datasets; (ii) identification of key weaknesses; (iii) a description of verification processes; and (iv) strategies for remedial action for inclusion in the PI Plan.
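As an illustrative sketch only (the dimension names follow the flowchart above, but the pass/fail scheme and threshold are assumptions, not the content of the Excel Workbook), the assessment results per dataset could be used to flag risky datasets for verification:

def flag_risky_datasets(assessments, max_failures=1):
    """Flag datasets whose quality assessment fails more dimensions than allowed.
    'assessments' maps a dataset name to {dimension: True/False (passes the check?)}."""
    risky = {}
    for dataset, result in assessments.items():
        failures = [dimension for dimension, passed in result.items() if not passed]
        if len(failures) > max_failures:
            risky[dataset] = failures
    return risky

assessments = {   # hypothetical assessment results
    "Court roll data": {"timeliness": True, "interpretability": True, "accessibility": True, "coherence": True},
    "Clinic malaria records": {"timeliness": False, "interpretability": False, "accessibility": True, "coherence": True},
}
print(flag_risky_datasets(assessments))   # {'Clinic malaria records': ['timeliness', 'interpretability']}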
Type of data: The three core types of data (administrative records, survey data and official statistics) can each be broken down into different data sub-types, each with its own particular data problems. The diagram below sets out the different data sub-types associated with each main data type under the heading record type.
[Diagram: record types associated with each data type. Official statistics (social, economic and demographic) are subject to the Data Assessment undertaken by StatsSA.]
Once the PI manager is clear on the source datasets for which the organisation is
responsible, they should undertake a PI source data record audit. This audit will
investigate all PI source data records.
A PI source data record audit will help the PI manager to develop an understanding of
the organisation’s record keeping strengths and weaknesses and particular issues with
regards to data series required to calculate PI indicators.
To qualify the accuracy of data means to alert users to any features of data collection or
storage that may affect the quality of the data. With regards to data sourced from other
organisations for use in a PI Framework, the Data Quality Assessment Tool only
requires that organisations ensure that the providing organisation has certified the data
as accurate, or note any qualifications provided when the data is utilised.
Data can either be collected by another organisation or by an external agency on behalf
of the department. However an organisation is deemed responsible for data collection
when it is finally accountable for that data. If an agency collects the data on behalf of
the organisation, the agency running the taxi recapitalisation programme on behalf of
the Department of Transport, for example, it is still deemed to be internal data to the
Department of Transport. However, if the required data is collected by another organisation not directly connected to the information, for example the Department of Tourism, then the Department of Transport cannot be held accountable for the quality of the data; it is deemed to be collected by an external “agency”.
In addition, if an organisation requires data from an external agency for its own PI
purposes it is recommended that a memorandum of understanding is entered with the
external agency specifying:
The data that is required, including metadata, which is often called “data about data”. Metadata is structured information that describes, explains, locates, or otherwise makes it easier to use or interpret an information resource.
The frequency of data provision
The format in which data will be provided
Responsibility for data provision in the providing agency
Such a memorandum of understanding will ensure that data sharing procedures and
coordination among data producing agencies are specified clearly and adhered to.
Finally, the agency using the data should ensure that there is coherence between
internal and external data (see section below on coherence of data).
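A sketch of how the items of such a memorandum of understanding might be captured as a simple metadata record; the dataset name, fields and values below are hypothetical.

external_data_agreement = {
    "dataset": "Registered taxi operators",          # the data that is required
    "metadata": {                                     # 'data about data'
        "definitions": "Operator as defined in the recapitalisation programme rules",
        "classification": "Per province",
        "collection_method": "Administrative records of the implementing agency",
    },
    "frequency": "Quarterly",                         # frequency of data provision
    "format": "CSV extract",                          # format in which data will be provided
    "responsible_official": "Data manager, providing agency",
    "accuracy_certified": True,                       # certified accurate, or qualifications noted
}
print(external_data_agreement["frequency"])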
Statistical surveys comprise the systematic collection of information about a group using
random sampling in order to describe the total target group in relation to one or more
characteristics. Often surveys used in performance information are statistical in nature
and have the following elements in common:
The information is gathered by asking people questions
Information is collected from a sub-set, in other words a sample of the total target
group to be described, rather than the entire target group.
There are different statistical survey types. Many surveys done for the purpose of
performance information are a mix of cross-sectional and longitudinal surveys.
Longitudinal surveys: Longitudinal surveys undertake the same survey more than once
in order to compare results between surveys or to strengthen the reliability of results.
There are different types of longitudinal surveys.
While samples are of the same group profile (e.g. matriculants), they are typically
not composed of the same people being interviewed in different time periods.
A cohort study tracks changes over time among the same cohort (the same people).
An example is a study that tracks the numeracy of learners who entered the
schooling system in a specific year.
Assessing survey data for accuracy: The pursuit of quality survey information is equal to
the effort to minimise errors, or deviations between the ‘true’ value of the characteristic
in the group that is being studied and the value returned through the survey. The
questions asked in the questionnaire, how the answers are collected and who answers
the questions can all affect the quality of the survey information. Survey methodology
provides a mechanism to ensure that statistical errors are minimised and sampling and
non-sampling errors are measured correctly.
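As a textbook-style illustration (not a method prescribed by this Handbook), the sampling error attached to a proportion estimated from a simple random sample can be approximated as follows; the survey figures are hypothetical.

import math

def proportion_confidence_interval(successes, sample_size, z=1.96):
    """Approximate 95% confidence interval for a proportion from a simple random sample."""
    p = successes / sample_size
    standard_error = math.sqrt(p * (1 - p) / sample_size)
    margin = z * standard_error
    return p - margin, p + margin

# Hypothetical survey: 420 of 600 respondents satisfied with a service.
p = 420 / 600
low, high = proportion_confidence_interval(420, 600)
print(f"Estimated proportion: {p:.0%} (95% CI {low:.1%} to {high:.1%})")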
The table below sets out key questions organisations can ask to ensure that their
survey data is reliable:
Key questions to assess the accuracy of administrative data for each PI source
record set:
1. Does the organisation have a data policy in place that sets the following key parameters for the PI administrative records:
- Where the records are kept
- Who has control of the records
- What format they should be kept in (e.g. paper, electronic)
- What measures should be in place for their security, particularly with regards
to control over access and removal from their designated storage, prevention
of unauthorised access, protection against alteration or deletion
- The audit trail with regards to each dataset.
2. Did the record audit reveal compliance with the data policy? What level of
compliance is in place?
3. What access do individuals have to data that are relevant to the performance of
their units or themselves? Under what conditions do they have access and are
these conditions known?
4. Are key staff members aware of the records policy and the specifications with
regards to the records that are relevant to their tasks?
5. Based on the PI Record Audit, do records reflect accurately the events and
transactions to which they relate?
Data quality will be supported if, when designing an indicator and developing its metadata for the PI Plan and PI Manual (the internal organisational guide on PI), the organisation indicates what evidence should be kept at site level to verify the existence and accuracy of the underlying records supporting the PI. However, when first drafting a PI
Framework and undergoing a data assessment, the processes of the first PI record
audit can be used to identify evidence that is kept already and identify evidence gaps.
After the completion of the record audit a list of evidence against each indicator can be
developed, listing existing evidence and new evidence that should be kept.
3.3.4 Step 4: Assessing survey and administrative data for timeliness, interpretability and accessibility, and coherence and integrity
For PI indicators include the metadata as set out in Chapter 2 of this Handbook.
For internal administrative and survey data include the concepts, definitions,
classification and methodologies used in collecting and processing data. This is
particularly important for administrative records that are organisation-specific and
not managed or controlled through transversal standards (such as public sector
accounting standards) and IT systems such as PERSAL and BAS.
Certify or alternatively qualify the accuracy of internal data and qualify indicators
using external data that was not certified as accurate.
For data received from other organisations include the name of the organisation and
a short description of the dataset. Responsibility for making the metadata on
datasets transparent rests with the collecting organisation.
Coherence: If common concepts, definitions, variables and classifications are not used
across indicators and source datasets, departures from the norm should be identified in
the metadata.
The completion of a data assessment using the questions set out above allows the PI
manager to identify datasets that are risky and that should be subjected to verification
checks and/or an internal audit.
1) The internal audit function, under direction of the audit committee, must include
in their annual audit plans reviews of the effectiveness of internal controls with
regards to records management and performance information. The annual risk
assessment undertaken by the accounting officer must identify the emerging
risks with regards to records management and performance information, which
will guide the aspects that should be included in the audit plan. This risk
assessment can draw on the results of the PI source data assessment set out
above.
3) The official and unit tasked with responsibility for PI should draw up an annual plan, within the context of the PI Plan, to schedule the verification of PI-relevant records and PI which have been identified as risky using the data assessment tool. The section below discusses approaches to data verification.
For each record type, data verification will involve techniques that are very similar to
audit techniques. Notwithstanding the type of record, the verification process involves
the selection of a sample of reported PI data points within a dataset, and for the sample
checking whether (i) the records exist and (ii) are authentic and reliable representations
of the actual transaction or event. The evidence that will be required will differ from
indicator to indicator.
For correspondence records, verification requires tracking the documentation and its proper management within organisational processes. For example, for the production of a policy document it requires checking that the document exists as purported and that there is primary evidence that the document was adopted within the organisation, through checking signatures on the document tracking system and whether the minutes of meetings have been signed off.
For non-correspondence records, verification requires that a sample of data points
are matched to records and that the records are matched with other evidence kept
by the organisation of the event or transaction or with external data sources.
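A minimal sketch of selecting a random sample of data points for verification; the sample size, seed and record identifiers are illustrative assumptions.

import random

def select_verification_sample(record_ids, sample_size=10, seed=2011):
    """Randomly select record identifiers whose existence and reliability will be verified."""
    rng = random.Random(seed)          # fixed seed so the selection is reproducible and auditable
    size = min(sample_size, len(record_ids))
    return sorted(rng.sample(record_ids, size))

# Hypothetical: case file numbers underlying a reported indicator value.
case_files = [f"CASE-{n:04d}" for n in range(1, 201)]
print(select_verification_sample(case_files))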
The table below sets out a few examples of possible data verification activities against
specific indicators:
Indicator: Number of hectares of land redistributed to land reform beneficiaries
Source records: Beneficiary records; case files
Verification activities: List of beneficiaries and identity numbers, with hectares distributed to each; existence and accuracy of an underlying sample of case files for each of the individual beneficiaries

Indicator: Outstanding cases on the court roll at the end of each financial year
Source records: Court roll data; case files
Verification activities: Consistency of flow statistics for court cases; existence of a sample of paper case files and accuracy of the electronic records of case files

Indicator: Number of malaria cases
Source records: Clinic and hospital records
Verification activities: Availability of a list of sites and malaria cases per site; match of site records against the overall number of malaria cases reported
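The sampling step described above can be supported by a simple script. The sketch below is purely illustrative and assumes hypothetical record identifiers and a flat dictionary structure for both the reported figures and the underlying records; it is not a prescribed verification method.

```python
import random

def verify_sample(reported_data, source_records, sample_size=10, seed=1):
    """Check a random sample of reported PI data points against source records.

    reported_data: dict mapping a record ID to the reported value (hypothetical structure)
    source_records: dict mapping a record ID to the value held in the underlying record
    Returns a list of findings for the sampled IDs.
    """
    random.seed(seed)  # fixed seed so a reviewer can reproduce the same sample
    ids = list(reported_data)
    sample = random.sample(ids, min(sample_size, len(ids)))
    findings = []
    for record_id in sample:
        if record_id not in source_records:
            findings.append((record_id, "record missing"))
        elif source_records[record_id] != reported_data[record_id]:
            findings.append((record_id, "value does not match source record"))
        else:
            findings.append((record_id, "verified"))
    return findings

# Hypothetical example: reported hectares per beneficiary vs. the case file values
reported = {"BEN-001": 25, "BEN-002": 40, "BEN-003": 12}
case_files = {"BEN-001": 25, "BEN-003": 10}
for record_id, outcome in verify_sample(reported, case_files, sample_size=3):
    print(record_id, outcome)
```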
7 Organisations are advised to also consult the NARSSA guidelines for managing electronic records in government.
Evaluate existing record creation, collection and storage practices to ensure that
they can be applied to electronic records: The development of an electronic system
provides the opportunity to redesign business processes to increase reliability and
efficiency. Before applying current practices in an electronic environment, it is
worthwhile evaluating them and identifying opportunities for improvement.
Ensure that solutions take a long-term view: The PI managers should be driving
decisions with regards to whether the system should be an integrated solution for all
aspects of PI management, or whether PI needs should be covered by separate
systems that interlink. It is important to take a long-term view and ensure that the
system is flexible to include future PI management needs.
It is also important that the IT system can interface with other systems in the
organisation and in government generally. This is important firstly in view of the
Presidency’s establishment of a central outcomes framework and its ability to access
systems across government to extract data for monitoring purposes. Secondly, at sector
level compatible PI and source data systems will facilitate improved intergovernmental
sector management. The Minimum Interoperability Standards (MIOS), released by DPSA and set by the State Information Technology Agency (SITA), provide government's technical principles and standards for achieving interoperability and information systems coherence across the public sector. These standards are mandatory and would apply at a minimum to any PI IT system development.
Take stock of the paper-based systems that will need to migrate to the new IT
system: Not all existing PI or PI source data will need to be included. It is important that
PI managers develop a schedule of PI source data and historical information that need
to be imported into the new system and where the relevant records are held (and in
which format).
Evaluate the human skills available to collect and keep records for the new IT
system and make clear the roles and responsibilities of actors in the new system:
The quality of outputs from the IT system will only be as good as the quality of data captured into the system, whether captured at site level or transferred into a standalone PI IT system. An evaluation of the readiness of existing staff to use the new systems, a clear understanding of roles and responsibilities, and the implementation of the necessary training and capacity building programmes are all needed to ensure that the new IT system will improve the quality and efficiency of PI management.
CHAPTER 4
4
ANALYSIS AND REPORTING OF PI DATA
4.1 Introduction
The chapter is structured to include advice on different techniques to analyse PI data and on
the use of PI at different points in the budget cycle.
Provide a graphic analysis: plot performance against the indicator over periods of
time on a graph; plot deviation from target over time on a graph; plot measurements
of the change in performance between one period and another over time on a graph.
Develop ratios: many PI Framework indicators may already be expressed as ratios
or percentages (for example number of assaults per 100 000 inmates). Many
however are provided as absolute numbers (for example number of malaria cases
reported). In addition to comparing performance for these indicators against
previous time periods, targets or other organisational units, PI managers can also
make absolute number indicators more meaningful by relating them to contextually
important data, for example the number of malaria cases per 100 000 people, when
preparing reports. Section 4.2.4 below briefly discusses how integrating PI (i.e.
developing ratios using different indicators) can be useful for the interpretation.
Present data as indices: select a meaningful value for an indicator (for example
target performance; average performance; highest performance or lowest
performance) and express comparable values as an index in relation to the
meaningful value, e.g. inflation.
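The basic comparative techniques listed above can be automated in a spreadsheet or a short script. The following sketch uses hypothetical quarterly malaria figures and an assumed provincial population to show deviation from target, period-on-period change, a rate per 100 000 people and an index with the first period as the base.

```python
def deviation_from_target(actuals, targets):
    """Difference between actual and target for each period."""
    return [a - t for a, t in zip(actuals, targets)]

def period_on_period_change(actuals):
    """Change in performance between one period and the next."""
    return [b - a for a, b in zip(actuals, actuals[1:])]

def per_100_000(cases, population):
    """Express an absolute count relative to contextual data, e.g. population."""
    return cases / population * 100_000

def as_index(values, base):
    """Express values as an index relative to a meaningful base value (base = 100)."""
    return [v / base * 100 for v in values]

# Hypothetical figures: malaria cases reported per quarter, and quarterly targets
actual_cases = [1200, 1150, 980, 1010]
target_cases = [1100, 1050, 1000, 950]
print(deviation_from_target(actual_cases, target_cases))   # deviation per quarter
print(period_on_period_change(actual_cases))               # change between quarters
print(round(per_100_000(1010, 5_600_000), 1))              # cases per 100 000 people
print(as_index(actual_cases, base=actual_cases[0]))        # index with quarter 1 = 100
```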
4.2.2 Benchmarking
Benchmarking involves measuring the organisation against best practice within the industry. This is important in assessing whether the organisation's performance is on par with what is expected in the sector or area of operation. Benchmarking provides a realistic sense of the capability of the organisation.
One of the difficulties for the public sector is identifying best practice, and it has been
acknowledged that “it is difficult to produce reliable data that enable accurate
international comparisons” (OECD, 2007, p63).
But there are also public sector advantages in seeking to compare, such as the ability to
benchmark within government by identifying best practice functions in one department
that can be used as a benchmark for other departments.
Compare ‘like’ with ‘like’: only units with identical service delivery responsibilities
should be compared.
Take account of differences between units: even when units have exactly the same
service delivery responsibilities, their operating circumstances might differ. A police
station in the Northern Cape which covers an area of several hundred square
kilometres for example, cannot have the same average response time to
emergency calls as a station in Gauteng. Weighting scores should be developed,
i.e. a set of weighting principles that will allow the scores of the police station in the
Northern Cape to be compared fairly and sensibly to the scores of the police station
in Gauteng.
Use data that is reliable across units: unless data was consistently reliable across units before the introduction of the tool, systematic comparison between units creates the incentive for unit managers to 'game the system'. The introduction of such a tool therefore has to be accompanied by rigorous data assessment and routine data verification checks on units (see Chapter 3 for approaches to ensuring data quality).
Target and assess improvement in performance against the unit’s own previous
performance as well as against system-wide performance. The wider the differences
between the performances of different units the more important it is to have several
types of indicators to target and assess performance. For example, if the
assessment focuses only on improvement against previous performance, units that
routinely perform close to 100% achievement will seem comparatively worse in
effecting improvements than units that improved by 20 to 30% from a lower base (an
initially bad performance).
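As an illustration of the weighting and own-baseline principles above, the sketch below applies a hypothetical weighting factor to each unit's raw score and also reports improvement against the unit's own previous performance. The weights, scores and baselines are invented for the example; in practice the weighting principles would have to be designed and agreed with the units concerned.

```python
def weighted_score(raw_score, weight):
    """Adjust a unit's raw score by a weighting factor reflecting its circumstances."""
    return raw_score * weight

def improvement(current, baseline):
    """Improvement against the unit's own previous performance, as a percentage."""
    return (current - baseline) / baseline * 100

# Hypothetical: response-time performance scored out of 10, with weights that
# compensate units covering large, sparsely populated areas
units = {
    "Station A (large rural area)": {"raw": 5.5, "weight": 1.4, "baseline": 4.0},
    "Station B (urban area)":       {"raw": 7.0, "weight": 1.0, "baseline": 6.8},
}
for name, u in units.items():
    print(name,
          "weighted score:", round(weighted_score(u["raw"], u["weight"]), 1),
          "improvement vs own baseline:", f"{improvement(u['raw'], u['baseline']):.0f}%")
```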
The comparison of units with a scoring and rating tool can be used to identify units that
are in need of support or where corrective measures are required, or to incentivise and
award relative good performance. The tool can also be used to identify twinning
arrangements where a better performing unit can be partnered and used to improve the
performance of a lagging unit of a similar nature and in similar circumstances within
selected performance bands8.
It is important that a scoring and rating tool is well documented. A technical document that describes the tool and how the ratings should be interpreted is essential. Buy-in by the service delivery units being measured, and their involvement in its design, will prevent perceptions of unfair rating and increase the effectiveness of the tool.
8 A performance band is commonly used to stratify (distinguish) units to make comparison easier, e.g. emergency response times might be the percent of actual achievement classified within the bands of 0-15 minutes, 16-20 minutes, 21-30 minutes etc.
alternative responses of 'strongly agree', 'agree', 'neither agree nor disagree', 'disagree' or 'strongly disagree'. As advised in chapter 3, anyone intending to apply a statistical technique, such as a 'Likert' scale, in performance measurement should get expert advice (also refer to chapter 6).
4.2.4 PI integration
PI integration refers to combining financial and non-financial indicators, or combining
more than one PI indicator as part of performance evaluation, to create useful
information for decision making. PI integration can take various forms, including:
The non-financial information used for determining cost distribution, such as the number
of pupils in a provincial school system, is usually referred to as a ‘cost driver’. There are
many instances where non-financial information is useful for measuring costs,
especially for budget development purposes, but may not qualify as ‘strategic’ PI, e.g.
hours worked by staff performing administrative tasks. There are also cost drivers that
may not usually be used as ‘strategic’ PI, but because of a change in national priorities
may be elevated to a strategic level; e.g. megawatts of electricity consumed or mega
litres of water consumed may be important for sub-programme costs, but would become
strategic PI related to electricity and water conservation or environmental objectives.
This is illustrated by the diagram below. Note in particular that indicator target setting is closely associated with the budget cycle, and is therefore discussed in this chapter, as targets should generally only be developed with consideration of available resources (Section 4.3).
9 The 'level' only refers to the place in the organisational hierarchy constructed for the purpose and is not intended to assign a rating or value judgement regarding its importance.
[Diagram: PI in the planning and budget cycle]
Strategic Planning: selection of performance indicators for assessment through the PI Framework (Chapter 2; Chapter 3).
Operational Planning and Budget Preparation: performance targets set against available resources and service delivery circumstances (Section 4.3.1).
Budget implementation: performance information used to make judgements about the allocation of resources (Section 4.3.2).
Review and evaluation of current performance feeds into strategic planning (Section 4.3.3).
A service level is the 'amount' or type of service that is to be provided, often expressed in quantitative terms, e.g. immunise 500 000 individuals in a vaccination campaign.
A service standard refers to the quality of the service to be provided, benchmarked against international standards whilst taking into account South Africa's current level of development. An example of a service standard is the quality of the vaccination efforts against measles and de-worming, e.g. illness incidences reduce to fewer than 1 000 in a province.
Realism and ‘stretch’: The emphasis on ‘realistic and achievable’ targets should be
counterbalanced with an emphasis on ‘stretching’ targets, to improve performance over
time. In order to achieve the right balance between ‘realistic, achievable’ and ‘stretch’,
target setting processes should not be entirely top-down, nor entirely bottom-up. In current practice in government, targets are often set as part of the strategic plan or budgeting exercise in isolation from the units that are expected to deliver the services in line with the target, and without a good enough understanding of the baseline performance and of how quickly it can be improved.
A municipality had an internal construction team that built footpaths, commonly using
concrete, tar or pavers. The ‘paving’ gang consistently achieved the laying of paving material
covering a certain number of square metres per day over a long period of time. It was
generally agreed that this work output was ‘realistic and achievable’ as it had been proven.
This enabled the Finance department to calculate the budget based on the expected output.
However, due to additional external funding one financial year there would be far more work
undertaken than the ‘gang’ could manage, so some contract staff were engaged to assist.
The project manager formed the contract staff into a separate team that was located close to
the existing ‘gang’, effectively creating competition between permanent and contracted staff.
Consequently the rate of laying paving material for both teams far exceeded the 'historic' rate, so much so that all internal and external funds were spent that financial year and many more square metres had been completed using those funds. The project manager had stretched the targets!
Target setting must also recognise the ‘power of incremental achievement’, where small
steps each year toward an objective over time could compound into a giant step in
some future year. Avoid setting unrealistic targets in the short term.
It is clear that a shift to performance targeting linked to budgets may require a newly
structured budgeting process in some organisations: if the strategic plan and budget are
to be implemented with the use of performance information, processes to draft these
will necessarily be participative and iterative.
budgeting” (OECD, 2008, p2), it does represent an improvement on a budget only with
‘financial numbers’.
It is important to use PI to inform the allocation process through 'unit rates' and target setting, combined with an assessment of related performance issues, so that a concluding comprehensive judgement can be made.
Using PI in the allocation process requires the annual review of medium term
performance targets and setting targets for the new outer year, linked to resources. In
agreeing to targets it is essential to know:
what the current baseline performance is,
what the trend against the indicator for the past three years has been,
what circumstances will influence the demand for services or the achievement of the
target in future, and
the level of resources the organisation is prepared to commit to the service.
The calculation of unit costs against the target would be an important factor, but unit costs cannot be applied blindly. There might be opportunities for efficiency savings that allow the unit to deliver more services for less, or particular circumstances in the year ahead might mean that fewer units will be delivered for the same cost.
Total cost is defined as the total direct sub-programme resources together with an
appropriate allocation of overhead costs that support the sub-programme. Total cost
can be determined by the attribution of overheads. For example, Justice &
Constitutional Development’s (J&CD) overheads for 2009/10 were estimated as:
Corporate overhead: Original Budget (R)
Administration: 1 162 082 783
Court Services - Facilities Management: 145 384 000
Court Services - Administration of Courts: 1 110 605 600
National Prosecuting Authority - Support Services: 400 341 000
consideration should be given to including it where the capital items are regularly
recurring and of similar budget proportion.
The calculation of unit costs using total sub-programme cost (including costs that are
not overhead costs as well) then enables organisations to match total sub-programme
budget allocations to targeted performance and vice versa. This can be done initially at a broad level, for example by calculating the cost per house delivered at the current service standard using non-financial and financial performance data (see the reporting example in Appendix D). Over time organisations can become far more sophisticated by breaking programmes down into their component activities and establishing activity-based baselines.
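A minimal sketch of the unit cost calculation described above, using hypothetical figures for a housing sub-programme: total cost is taken as the direct sub-programme budget plus an attributed share of the corporate overhead pool, and the unit cost is that total divided by the number of houses targeted. The attribution share and all amounts are invented for illustration.

```python
def total_cost(direct_cost, overhead_pool, attribution_share):
    """Total sub-programme cost: direct cost plus an attributed share of overheads."""
    return direct_cost + overhead_pool * attribution_share

def unit_cost(total, units_delivered):
    """Cost per unit of output at the current service standard."""
    return total / units_delivered

# Hypothetical figures for a housing sub-programme
direct = 950_000_000          # direct sub-programme budget (R)
overheads = 180_000_000       # corporate overhead pool (R)
share = 0.12                  # share of overheads attributed to this sub-programme
houses_targeted = 15_000      # targeted houses delivered

total = total_cost(direct, overheads, share)
print("Total sub-programme cost (R):", f"{total:,.0f}")
print("Cost per house delivered (R):", f"{unit_cost(total, houses_targeted):,.0f}")
```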
Reporting formats
Appendix F contains an example of a possible reporting format. The format is also
available in a simple Excel spreadsheet template. The format achieves:
Performance budgeting ‘informed’ approach, comparing the financial resources
required with the PI targets,
Benchmarking information, calculating a unit rate,
Illustrative charts that compare budget and PI trends and monthly reporting,
Brief explanation of performance trends.
A key feature of the example is that all information on the particular PI subject is presented on one page, to speed up management comprehension.
‘Dashboard’ report
Organisations can also develop PI Dashboards at various levels of the organisation for
use in periods between the formal quarterly reports.
An accounting officer for example can be provided with a weekly (or monthly) one or
two page report providing an analysis of performance against strategically important
indicators. The indicators selected for a ‘dashboard report’ would highlight performance
issues in time for the accounting officer or senior management to institute remedial
action when performance falls below par. The report can make use of the type of
analyses described in section 4.2.1 above. Similarly, managers at lower levels can work
with PI managers to design a dashboard report relevant to their sub-programmes or
units.
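A dashboard of this kind can be generated from the underlying indicator data with very little effort. The sketch below, with invented indicator names, weekly figures and a 5 per cent tolerance, simply flags indicators whose actual performance falls below target by more than the tolerance.

```python
def dashboard(indicators, tolerance=0.05):
    """Flag indicators whose actual performance falls below target by more than the tolerance."""
    rows = []
    for name, values in indicators.items():
        ratio = values["actual"] / values["target"]
        status = "on track" if ratio >= 1 - tolerance else "attention required"
        rows.append((name, values["actual"], values["target"], status))
    return rows

# Hypothetical weekly figures for strategically important indicators
weekly = {
    "Houses delivered":      {"actual": 270, "target": 300},
    "Malaria cases treated": {"actual": 96,  "target": 100},
    "Court cases finalised": {"actual": 410, "target": 400},
}
for row in dashboard(weekly):
    print(*row)
```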
CHAPTER 5
5
ASSESSING AND BUILDING PI CAPACITY
5.1 Introduction
This chapter provides a checklist of the human resource and system capacity required
to implement an effective PI System and provides advice on locating PI capacity and
the prioritisation of PI capacity development.
Specialist skills
Access to statistical sampling, survey and other statistical analysis skills
If developing a comparative scoring and rating system to compare performance
across units, access to capacity to design and implement the system and support
management in the use of the system
Internal audit capacity to undertake audits of data collection and storage
This checklist can be used to undertake an audit of PI capacity and skills in the
organisation and develop strategies to address capacity shortfalls. To ensure accuracy
and usefulness of PI, it is important that departments build and strengthen internal
capacity when it comes to collection, analysis and interpretation of information.
Different organisations will set up their PI capacity differently: some may centralise the
capacity in an M&E Unit or in the accounting officer’s or Chief Executive’s office, others
might prefer putting M&E capacity in each division or region, or some combination of
the two. In some departments, the PI capacity is situated in the Strategic Management
units.
CHAPTER 6
6
DOCUMENTING PI SYSTEMS
6.1 Introduction
The Regulations on Programme Performance Information require departments,
constitutional institutions and Public Entities to develop PI Plans. The Plans are
required to be a medium term statement to Parliament on what PI the organisation will
collect, how it will manage and use the PI and how it intends improving PI. The purpose
of the Plans is to ensure the progressive development of PI Systems.
This chapter provides guidance on the content of and approaches to drafting these two
documents.
A PI Plan must also indicate who will manage PI, how it is to be managed and what
capacity building the organisation will undertake.
A PI Plan must be submitted every five years with an organisation’s Strategic Plan to
Parliament. Organisations can update the PI Plan during the five years if necessary, by
submitting an adjusted plan.
Systems for reviewing and monitoring PI: Frequency and nature of processes by which the organisation periodically reviews its PI Framework and PI System.
PIP implementation: Process to review implementation of the PIP.
The first step in planning for the development of organisational PI Systems is to locate
the organisation on the PI development path and to target in broad terms where the
organisation would like to be on the path in five years’ time. Diagram 8.2 below
illustrates this.
[Diagram: the PI development path]
Decide reporting strategy for locating PI functions.
Decide core record management systems for existing PI management systems.
Build reliable record creation processes and systems to verify PI data.
Develop core PI Framework.
Build capacity to manage the core system.
Validate selected indicators.
Confirm data availability and collection responsibility.
Assess data quality, add and create indicators, and build required data systems.
Upgrade and develop performance budgeting and other uses of PI.
Ensure key internal stakeholders are trained at all times.
Develop electronic record and performance reporting mechanisms and processes.
Draft PI Manual.
The organisation should first assess its current systems for managing PI by using the
various tools provided in this Handbook, and then map out its PI development in broad
phases over the medium term (for the first five year plan) and long term (for the next
five to ten years).
Part of the strategy would be to indicate the exemptions that the organisation has applied for in terms of the requirements of the Regulations, and until when the exemptions are envisaged to remain in place. For example, the organisation might wish to postpone undertaking a full assessment of data quality and the development of verification processes until the outer years of the medium term. This should be indicated and listed in the PI Plan.
Select the organisational PI Manager: When drafting the first PI Plan the organisation
must decide which unit and manager will have primary responsibility for PI. This does
not mean that the full burden of responsibility for deciding, collecting, analysing and
reporting on PI will rest with the manager and their unit. The management of PI is an
organisation-wide task involving strategic planning, monitoring and evaluation and other
business unit managers as well as the Chief Financial Officer. However, the
Regulations require that one manager, with direct access to the Accounting or
Executive Officer, must have ultimate responsibility for directing, controlling, monitoring
and reviewing the PI System.
Depending on organisational need and capacity, organisations may decide that the
responsibility for managing PI should be added to the job description of an existing
senior manager, such as the CFO or the head of the strategic planning or monitoring
unit.
Set out a strategy for how the structures to manage PI will be strengthened: If the
organisation is planning to shift the primary function for PI, develop or extend structures
to manage PI or improve the organisational standing of the PI unit over the five-year
plan period, it should (i) set out these strategies in the main document and (ii) list them
in the annexure, with an indication of who will be responsible for their implementation.
Set out how the organisation intends to review the PI Framework and PI System periodically and to monitor PI Plan implementation: The final part of step two of the PI Plan can describe organisational approaches and systems for reviewing the PI Framework and PI System and for monitoring PI Plan implementation. Key parameters are:
How frequently will the organisation review its PI Framework: annually, every two
years, once every five years? When in the organisational planning and budgeting
cycle? As the PI Framework is linked to the indicators proposed in the Strategic Plan
and the Estimates of National Expenditure, the proposal is that these should come
from the core PI Framework. An annual review of the PI Framework will be effective.
Updated PI Plans can be submitted with marginal changes.
How frequently will the other assessments be undertaken, for example of source PI datasets, capacity and so on? Will these only be done every five years, given how time-consuming they are, or will specific datasets be identified in the period between major assessment exercises for possible inclusion in internal audit and verification programmes?
Who will be responsible for the review?
How will it be done:
- will it be part of the strategic planning process,
- will an oversight review exercise involving the PI Manager and a core team be
undertaken annually supplemented by an organisation-wide thorough exercise
every five years,
- Will a thorough review accompanied by data assessments be done every year?
In writing up Step 3 of the PI Plan, organisations can choose simply to provide a short narrative on the process they followed to select their performance indicators, including the key criteria for selection, supplemented by the Annexure required in terms of the Regulations, which lists the indicators against performance dimensions and provides the required metadata.
Step 3 should then detail how the organisation will improve its PI Framework over the
five year period. Will it
increase the breadth of the Framework (i.e. measure its values/more values in
addition to measuring objectives and programmes; measure objectives in greater
detail),
increase the intensity of the Framework (add indicators against objectives,
programmes and sub-programmes or values),
replace proxy indicators with improved indicators as it develops data collection
systems, and/or
increase the depth of the framework (i.e. add indicators that measure performance
at lower levels of programme and organisational structure)?
The organisation should draw on its sequencing strategy to make these choices and
describe its selected course of action. It does not have to name the indicators that it will
add to the framework: that can be done in revised PI Plans as the additional indicators
are added and data for them is collected.
At the second level are the PI records themselves: once the PI System has collected a statistic for a certain point in time (e.g. number of meals served to primary school learners per day in the 2nd quarter of 2010) it needs to store this statistic as a record that cannot be altered unless the alteration is justified and recorded.
The PI Plan needs to provide information on how the organisation currently manages PI
and how it intends to improve its management over the medium term at both these
levels. It is important to note that for the first level it is only the records that are relevant
to the selected PI indicators that need to be assessed and included in the PI Plan.
It is therefore recommended that the organisation structures this section of its PI Plan into two sub-sections: in the first sub-section the organisation needs to list the main datasets, their metadata and plans to improve certain aspects, and in the second sub-section it needs to discuss the PI data collection and storage system. It is advised that organisations undertake a PI source data records audit and identify the datasets that require intervention. The table below provides a template for describing PI source datasets. Organisations are not required to use this template: it is provided merely as a possible way to summarise the required information on data collection, verification and storage. The example used is records on meals provided to learners.
Organisations are required to indicate how they will improve data quality.
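Where an organisation prefers to keep the dataset summaries in a structured, machine-readable form rather than, or in addition to, a table, a simple record structure can be used. The sketch below is a hypothetical illustration using the school meals example; the field names are illustrative and are not prescribed by the Regulations.

```python
from dataclasses import dataclass, field

@dataclass
class SourceDataset:
    """Hypothetical structure for summarising a PI source dataset in the PI Plan annexure."""
    name: str
    description: str
    collecting_unit: str
    collection_frequency: str
    storage_location: str
    storage_format: str
    known_quality_issues: list = field(default_factory=list)
    planned_improvements: list = field(default_factory=list)

meals_dataset = SourceDataset(
    name="School meals register",
    description="Daily records of meals provided to learners per school",
    collecting_unit="District offices",
    collection_frequency="Daily, consolidated monthly",
    storage_location="District file plan and provincial spreadsheet",
    storage_format="Paper registers; electronic monthly summaries",
    known_quality_issues=["Late submission by some districts"],
    planned_improvements=["Introduce electronic capture at school level by year three"],
)
print(meals_dataset.name, "-", len(meals_dataset.planned_improvements), "planned improvement(s)")
```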
Section five of the PI Plan will describe organisational systems to ensure quality use of
PI in strategic and operational management processes and to report on PI.
The PI Plan should list all external reporting requirements and how the organisation
intends fulfilling these.
The PI Plan should identify changes that the organisation wants to make to how it uses
PI for internal and external decision making, accountability and communication. This
may include increasing the frequency of reporting, improving PI reporting formats
(clarity, coverage, depth), using it systematically to develop flexible budgets (see
chapter 4), and using PI to manage unit performance across the organisation.
The capacity assessment tool provided in chapter 5 will assist the organisation to
describe and assess existing human resource capacity, for example
capacity in the central unit responsible for PI,
the understanding of PI throughout the organisation and the capacity to use PI in
strategic and operational management processes,
the capacity of data collection staff and their understanding of why the data is
collected.
The PI Plan would provide an organisational strategy to build capacity against these
dimensions and detail the internal and external training planned. Development of
systems (ICT and other) may be noted as part of the strategy.
ii. An annexure that lists strategies from PI sections 3, 4, 5 and 6 to improve PI,
with the responsible person for their implementation indicated.
iii. An annexure that lists PI source datasets that have been selected for
inclusion in the internal audit programme and verification by the PI Manager.
iv. An annexure that lists and describes datasets that the organisation commits
to provide to other organisations, the frequency of their provision and who is
responsible for the provision.
The PI Manual will specify how each indicator must be calculated, using which datasets, together with the specifications of those datasets and the definitions of the indicators.
Direct how changes in the indicator should be interpreted and provide any
qualifications on the accuracy, validity and completeness of the data or information
on limitations of the indicator itself. The directions should indicate whether the
indicator is a leading or a lagging indicator and what complementary data or
management information can be investigated to understand and interpret changes in
the indicator.
Provide direction on how units of the organisation should use the PI in strategic and
operational management processes.
Act as a records policy for PI data. The PI Manual will direct how PI data is to be
captured, how frequently, by whom, where it will be stored, in which format and what
the rules are with regards to accessing and amending the data.
Advise staff on available capacity building or external training that can be accessed
to build PI capacity.
In short, the PI Manual will capture at any point in time, the systems in use in the
organisation to select, collect, store, verify and use PI.
CHAPTER 7
PREPARING FOR PI AUDITS
7.1 Introduction
The Public Audit Act (PAA) requires the Auditor General to audit performance
information on an annual basis. Sections 20(2)(c) and 28(1)(c) require that the audit
report reflect at least an opinion or conclusion on the reported information relating to
performance of an institution against predetermined objectives.
The Auditor General has adopted a phasing-in approach to compliance with the Public
Audit Act with regards to expressing an audit opinion on reported performance
information. Since 2005/06 auditees have been subjected to a review of their policies,
systems, processes and procedures for the management of and reporting on
performance against their predetermined objectives and of the accuracy, validity and
completeness of the information presented. Findings in this regard have been included
in the audit reports and as from 2009/10 audits, audit opinions are included in the
management reports issued to the auditees. From 2010/11 onward the Auditor General
may decide to provide an audit opinion on reported performance information in the audit
reports.
This brief chapter is aimed at illustrating to organisations how the application of various
tools in the handbook will assist them in preparing for audits of performance
information.
Bibliography
Accounting for Sustainability Initiative: http://www.accountingforsustainability.org/reporting/ accessed 10/2009
Allen, R 2008, Reforming Fiscal Institutions: The Elusive Art of the Budget Advisor, OECD Journal on
Budgeting, 2008/3, p67-75
Department of Performance Monitoring and Evaluation (2009), Improving Government Performance: Our
Approach, p14
Evans, C and Richardson M 2009, How to Manage Effectively with Performance Indicators, The British Journal
of Administrative Management, Summer 2009, p16-17
OECD 2007b, Jacobzone, S., C. Choi and C. Miguet (2007), "Indicators of Regulatory Management
Systems", OECD Working Papers on Public Governance, 2007/4, OECD Publishing
Noman, Z, 2008, Performance Budgeting in the United Kingdom, OECD Journal on Budgeting, p75 - 90
Robinson, M & Last, D 2009, A Basic Model of Performance-Based Budgeting, International Monetary
Fund, p1-12
Rowley, J 1998, Quality Measurement in the Public Sector, Total Quality Management, May 2/3, p321-333
Schacter, M 2006, The Worth of a Garden, Performance Measurement and Policy Advice in the Public
Service, a discussion paper, p1-12
Talluri, S, 2000, Data Envelope Analysis: Models and Extensions, Decision Line, May 2000, p8-11
Westhuizen and Dollery 2009, South African Local Government Efficiency Measurement, Centre for Local Government, University of New England, April 2009, p1-19
Appendix A: The PI System at a Glance
[Diagram: the PI System at a glance, within the context of the Regulations, the MTSF, the MTEF and organisational systems]
Dataset source documents (e.g. the Strategic Plan) feed indicator gathering.
Indicator structuring, categorisation, filtering and scoring using a Performance Dimension template lead to indicator selection.
Indicator selection is followed by budget prioritisation and adjustment, target setting and data collection.
Measurement and analysis, together with benchmarking and other analysis, support reporting and publication (including the media) and periodic PI review.
Appendix B: PI Framework Decision making Flowchart
GRI has developed a public sector 'Agency' supplement to its guidelines. Whilst the public agency supplement is still in the piloting phase, the reporting framework provides useful ideas for sustainability indicators. Appendix D is one example extracted
from the public agency supplement as an illustration of the type of material presented
there. The full ‘Public Agency’ supplement is included in the additional reading pack
which can be accessed at:
http://www.globalreporting.org/ReportingFramework/SectorSupplements/PublicAgency/
“In the context of reporting, a crucial element of achieving change is for mainstream
reporting to reflect, not just the organisation's financial performance, but also its
sustainability performance, demonstrating the strategic importance of sustainability
factors and how these factors form part of the decision-making process of the business.”
(http://www.accountingforsustainability.org/reporting/)
This initiative included the establishment of an international forum for the sharing of ideas and experiences. The South African Institute of Chartered Accountants is a member of the forum's network of accounting bodies, along with 17 other international accounting bodies.
The full report, created when the initiative was launched in 2007, is available in the
separate Readings pack. (http://www.accountingforsustainability.org/output/page1.asp)
One of the techniques used by the Initiative is to highlight areas of best practice
application. In this regard there are currently 2 public sector documents promoted by
the Initiative as representing best practice. Both papers are included within the
Readings Pack (link to be provided).
England's Cabinet Office 2007/08 Annual Report: the section probably of most relevance to environmental sustainability is 'Taking a Greener Approach',
from pages 50 to 51. Especially note the inclusion of 'Climate waste and resource indicators', mainly related to the need to minimise carbon and waste production and to promote efficient water utilisation, as part of their Performance
Framework. There is a general trend internationally for these types of indicators
to become a standard requirement for all government departments.
[Chart: annual budget (Rm) compared with houses delivered, 2005/06 to 2011/12]

Month: April May June July Aug Sep Oct Nov Dec Jan Feb Mar
Target: 20 000 22 000 22 000 22 000 25 000 25 000 22 000 20 000 15 000 15 000 21 000 21 000
Actual: 19 500 19 700 21 000 21 500 23 000 27 000 25 000 22 000 20 000 18 000 22 000 28 000

[Chart: monthly target and actual performance, April to March]
Performance explanation [Insert explanation here of the variance in performance between the actual performance and the monthly
targets]
Relevance of information: The degree to which it meets the real needs of the
organisation. The PI Framework discussed in chapter 2 is the primary tool in the PI
domain for ensuring that PI is relevant to the real needs of the organisation.
The timeliness of information: This refers to the delay between the measured event
and the date on which the information becomes available for use as PI.
The accessibility of information: This refers to the ease with which data can be
obtained. This includes the ease with which the existence of information can be
ascertained, as well as the suitability of the form or medium through which the
information can be accessed10.
The interpretability (credibility) of information: The ease with which users can understand statistical information through the provision of metadata. Metadata is the description of indicators, for example the concepts, definitions and classifications used in the collection of data and who collects the data. It also covers whether information is provided by the organisation that will assist the user to assess the accuracy of the data produced.
Integrity of information: Refers to the presence of values and practices within the
organisation that ensure users’ confidence in the organisation and its information.
NARSSA Standards
The NARSSA records management policies set out key dimensions for quality administrative records, namely authenticity, reliability, integrity and usefulness, and provide instruction on how to achieve these dimensions.
10 Accessibility, however, has to be weighed up against the cost of providing it. The PI Framework in Chapter 2 applied cost as a criterion in choosing indicators.
Useability: A useable record is one that can be located, retrieved, presented and interpreted. It should be capable of subsequent presentation as directly connected to the business activity or transaction that produced it. The contextual linkages of records should carry the information needed for an understanding of the transactions that created and used them. It should be possible to identify a record within the context of broader business activities and functions. The links between records that document a sequence of activities should be maintained.
Source: NARSSA, 2007, Records Management Policy Manual.
The Correctional Centre Level Monitoring Tool had its origins in efforts to measure one of the
immediate outcomes of the Department of Correctional Services (DCS), which required
constructing an index of seven indicators. It was soon clear that the tool could be expanded to measure organisational performance more broadly, by introducing a refined scoring mechanism that assigns scores to individual centres based on quantitative performance reports.
The essence of the basic tool is an Excel spreadsheet file, consisting of two worksheets. One
is the data worksheet, in which data is imported and managed. The second is the Interface
worksheet where the comparative performance ratings are calculated, sorted and viewed.
Against each selected indicator, a centre will achieve a score somewhere between 0.0 and 10.0. A 10 should be scored when 'perfection' is attained; a zero reflects a dismal performance. The former is obviously more difficult to determine, and a decision on what constitutes 'perfect' performance is necessary for each indicator. Once values are assigned they remain fixed across years. For example, if the value achieved by the best performing centre is selected as the 10 score, that value should remain over the years notwithstanding changes in what the best performing centre actually achieves.
A common approach across the statistical measurement was that each indicator scoring equation should spread the scores. Ideally, there should be a few Centres that perform
extremely well, a few that perform dismally, with the vast bulk somewhere in the middle. This
distribution of scores will also ensure sensitivity to changes. It is important that not too many
centres actually score a zero (or a 10 for that matter) for any one indicator. The decision on
the target score was made separately for each individual indicator. As mentioned above, each
Centre is given a rating out of 10 against each indicator, based on an equation that should
provide a reasonable distribution across a histogram11 for all Centres. Once weighted, these
indicator scores combine to provide an index score out of 10.
Indicators are of three major types. There are positive, negative and parabolic indicators.
Negative indicators are those that measure performance of a Centre in trying to prevent
something happening, and ideally achieving a reported score of zero. Examples are
assaults, escapes and unnatural deaths. A problem associated with negative indicators is
11 A histogram is a bar chart that presents a frequency distribution.
that undetected non-reporting of such incidences will earn a perfect score of 10.
Positive indicators are those in which the Centre has to provide something of substance in
order to score, and the higher the result the higher the score. Examples are the provision
of resources, such as nurse attendance and education, rehabilitation programmes or
security measures.
Parabolic indicators are those for which the achievement of a percentage of 100 is ideal, and for which results 10 percentage points below 100 and 10 percentage points above 100 produce an equal score in the tool. For example, under-spending can be as bad as overspending, and operating at less than 100% of accommodation capacity can be as bad as overcrowding by 10%, as it is a waste of resources.
Source: Department of Correctional Services
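To illustrate how positive, negative and parabolic indicators can be converted into scores out of 10 and combined into a weighted index, the sketch below uses simple linear scoring equations and invented values and weights. These are not the equations used by the Department of Correctional Services; they merely show the mechanics of the three indicator types and of a weighted index.

```python
def positive_score(value, best_value):
    """Higher results score higher; best_value is the value fixed as a 10."""
    return max(0.0, min(10.0, value / best_value * 10))

def negative_score(value, worst_value):
    """Incidents to be prevented: zero incidents scores 10, worst_value or more scores 0."""
    return max(0.0, min(10.0, 10 - value / worst_value * 10))

def parabolic_score(percentage):
    """100% is ideal; equal deviations above and below 100% score equally."""
    return max(0.0, 10 - abs(percentage - 100) / 10)

def weighted_index(scores, weights):
    """Combine indicator scores into an index out of 10 using weights that sum to 1."""
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical centre: education hours delivered, assaults recorded, occupancy percentage
scores = [
    positive_score(value=80, best_value=100),   # positive indicator
    negative_score(value=3, worst_value=20),    # negative indicator
    parabolic_score(percentage=108),            # parabolic indicator
]
weights = [0.4, 0.4, 0.2]
print("Indicator scores:", [round(s, 1) for s in scores])
print("Weighted index:", round(weighted_index(scores, weights), 1))
```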