Evaluation of Training Programmes
Training Evaluation Framework
• For organisations, the question is: what would a multi-level training
evaluation framework look like, one that measures individual-level
outcomes and organisational-level outcomes jointly and
independently, considers relevant stakeholders at different levels of the
evaluation, and takes into account the longitudinal nature of the process?
• A feasible training evaluation framework must explicitly address how
individual-level learning and transfer of learning yield results at a
work group, unit and organisational level (Kozlowski, Brown,
Weissbein, Cannon-Bowers, & Salas, 2000).
Training Evaluation
Framework
• In addition, the training evaluation framework must be
relevant to a range of stakeholders within the organisation
and further include steps to cover the entire ‘training-to-
performance process’ (Brinkerhoff & Dressler, 2015).
• The overall evaluation process has to shift from ‘knee-jerk
reactions to rectify performance problems’
(Kraiger, McLinden, & Casper, 2004) towards an
organisational strategic activity addressing current and
future individual and organisational requirements.
Resistance to Training
Evaluation
• There is nothing to evaluate: training is a reward for good
performance; training is not expected to accomplish anything;
• No one really cares about evaluating training: formal
evaluation is too expensive and time consuming;
• Evaluation is a threat to my job: if time and money are spent on
training and it is identified that no learning occurred, this
does not reflect well on the training provided.
Challenges
• The absence of transfer of learning from the place of training to the workplace has
been a major perceived deficiency of corporate training and development programmes
• The Indian corporate sector faces the challenge of designing and developing more
valid, reliable, and operational measures to evaluate the effectiveness of training
• High pressure for increased quality, innovation, and productivity acts as a
major driving force for Indian corporate training and development programmes
• Organisations rely mostly on the reactions of participants to monitor the
effectiveness of training
• The majority of organisations use questionnaires as the instrument to gather
evaluation data
• In most cases, evaluation is done immediately after training
Traditional Measures vs. Impact Measures
• HR/training professionals might be more concerned with
the negative consequences of the results of their training
measurement; measuring the impact of training may be time
consuming and expensive.
• There may be limitations and difficulties in the available
measurement models
• Cooperation is required from participants and line
managers.
Benefits of Training Evaluation
• Improved effectiveness.
• Improved efficiency.
• Greater credibility for personnel, including
information on how to do a better job now or in future
programmes.
• Stronger commitment to and understanding of training
by key administrators.
Training evaluation models
• The most popular and utilised training evaluation model is
Kirkpatrick’s four-level model (Kirkpatrick, 1975; 1979).
Training evaluation models
• A subsequent model by Tannenbaum, Mathieu, Cannon-
Bowers & Salas (1993) expanded on Kirkpatrick’s
approach by refining trainee reactions into the post-training
attitudes of affective responses and perceived value of the
training.
• A key argument is that ‘trainees may leave training with
different perspectives than when they entered’
(Tannenbaum et al., 1993), which can positively affect trainees’
resource/effort allocation, self-efficacy, satisfaction
and/or commitment.
Training evaluation models
• Another evaluation strategy is Holton’s (1996, 2005) HRD
Evaluation Research and Measurement model, which
focuses on the evaluation targets of learning, transfer and
results.
• Holton (1996) argues that overall organisational
performance is a result of individual performance,
which is a direct consequence of learning.
• Trainee reactions are not included in Holton’s model.
Training evaluation models
• Another evaluation strategy is Kraiger’s (2003) Decision-Based
Evaluation Model, which aims to link training outcomes to
strategic organisational objectives.
• Strategic objectives translate in Kraiger’s model into three
evaluation target areas:
• training content and design (emphasis on design, delivery and
validity of training);
• changes in learners in terms of knowledge and behaviour
(emphasis on affective, cognitive and behavioural changes); and
• organisational pay-offs in learning transfer and results (emphasis
on transfer climate, job performance and results).
Training evaluation models
• Another model by Phillips (1997) focuses training
evaluation on Return on Investment (ROI).
• The ROI model measures the cost-benefit of an investment
in a training programme for an organisation over a set
period of time (Phillips & Phillips, 2001).
• ROI addresses an organisation’s expectation of clearer and
more tangible impacts of training programmes by
converting collected evaluation data into monetary values,
which are compared to the overall costs.
Training evaluation models
• Phillips’ model attempts to isolate the link between
training and actual monetary business objectives.
• The ROI approach undermines systems thinking and team
performance improvements (Brinkerhoff & Dressler, 2002),
as it focuses on individual contributions, leaving out
holistic organisational performance contributors.
Benefits of Training Evaluation
• A formal corrective feedback system for developing the
strengths and weaknesses of training participants, at
level 2 ‘learning’ and, to some extent, at level 3
‘behaviour’ as well
• Increased knowledge and expertise in the
development and implementation of training
programmes that produce the results for which they
were intended.
CRITERIA FOR TRAINING EVALUATION
Employee reactions: employee reactions to training are
evaluated by conducting interviews or administering
questionnaires to trainees.
Here, HR professionals are interested in:
•whether trainees liked the programme,
•whether they thought the instruction was clear and helpful,
•whether they believe that they learned the material.
CRITERIA FOR TRAINING EVALUATION
• Even though positive employee reactions are necessary for
training to be successful, they do not necessarily mean that
training will lead to changes in knowledge or performance
(Alliger and Janak, 1989).
• Trainee reactions constitute the lowest level of training
evaluation.
Level 1: Reaction
Guidelines for Evaluating Reaction
• Determine what one wants to find out.
• Design a form that will quantify reactions.
• Encourage written comments.
• Get 100 percent immediate response.
• Develop an acceptable standard.
• Measure reactions against the standard.
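As a sketch of the guidelines above: reactions are quantified on a form and the average is compared against an acceptable standard. The 5-point scale, the ratings, and the 4.0 standard below are illustrative assumptions, not prescribed values.

```python
# Sketch: quantifying Level 1 reactions and measuring them against
# an acceptable standard. Ratings and the standard are hypothetical.

def mean_reaction(ratings):
    """Average the quantified ratings from the reaction form."""
    return sum(ratings) / len(ratings)

ratings = [5, 4, 4, 3, 5, 4]   # one rating per participant, 1-5 scale
standard = 4.0                 # the organisation's acceptable standard
score = mean_reaction(ratings)
print(score >= standard)       # True when reactions meet the standard
```

In practice the form would also capture written comments; only the numeric items feed this comparison.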
Kirkpatrick Level 2: Learning
• To what extent has learning occurred? Three things can be
accomplished in a training programme:
• Understand the concepts, principles, and techniques being
taught.
• Develop and/or improve skills.
• Change attitudes.
• All programmes have the objective of increasing the
knowledge of the participants.
Level 2: Learning
• Some programmes have the objective of increasing the
technical or sales skills of the participants; others, such as
‘Diversity Training’, may be aimed at changing attitudes.
• Learning evaluations should be targeted to
the specific objectives of the programme and
should be used to evaluate all programmes.
CRITERIA FOR TRAINING EVALUATION
Employee learning :
• Employee learning criteria are used when HR
professionals wish to determine how well trainees
acquire ‘Knowledge, Skills, and Abilities’ (KSAs)
taught in training.
• Tests on the training material are commonly used for
evaluating learning and may be given both before and
after training to compare scores.
Level 2: Learning
Guidelines for Evaluating Learning
• Measure knowledge, skill, and/or attitudes before and after the
training.
• Use a paper-and-pencil test for knowledge and attitudes.
• Use a performance test for skills.
• Get 100 percent response.
• If practical, use a control group that does not receive the training to
compare with the experimental group that receives the training.
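The pre/post and control-group guidelines can be sketched as a simple comparison of score gains. All test scores below are hypothetical.

```python
# Sketch: pre/post learning gain, adjusted with a control group that
# did not receive training. All scores are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def learning_gain(pre, post):
    """Average improvement on the knowledge/skill test."""
    return mean(post) - mean(pre)

trained_gain = learning_gain(pre=[52, 48, 60], post=[78, 70, 85])
control_gain = learning_gain(pre=[50, 55, 58], post=[53, 56, 60])

# Only the gain beyond the control group's gain is credited to training.
net_gain = trained_gain - control_gain
```

Subtracting the control group's gain guards against crediting the programme for improvement that would have happened anyway (practice effects, on-the-job experience).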
Kirkpatrick Level 3: Behaviour
To what extent has on-the-job behaviour changed as
a result of the programme?
• Transfer of training could be considered effective
when behavioural or performance changes taught in
training are expressed on the job.
• Can trainees now do things they could not do before
(for example, negotiate, conduct an appraisal
interview)?
Level 3: Behaviour
To what extent has on-the-job behaviour changed as
a result of the programme?
• Do participants demonstrate new behaviours on the job?
• Has their performance improved?
• Data useful for evaluating training transfer can be
obtained through interviews of trainees and their co-
workers and observations of job performance.
Level 3: Behaviour
• Behaviour is difficult to measure, yet it is the most important level to evaluate.
• If the trainees do not apply what they learned, the
programme has been a failure even if learning has taken
place.
• Therefore, measuring behaviour change is necessary, not
only to see whether behaviour has changed, but also to determine
the reasons why change has not occurred.
Level 3: Behaviour
Guidelines for Evaluating Behaviour
• If possible, evaluate behaviour both before and after training.
• It is usually impossible to do this, so it becomes necessary to
evaluate after the programme and determine what the
participant is doing differently from what he/she was doing
before the programme.
• Allow time for the behaviour to change.
Level 3: Behaviour
• Survey and/or interview one or more of the following:
a. The trainee
b. The bosses of the trainee
c. The subordinates of the trainee
d. Others who observe the behaviour of the trainee
• Get 100 percent response or a sampling.
• Repeat at appropriate times.
• Use a control group if practical.
• Consider the cost of the evaluation versus the possible benefits.
Level 3: Behaviour
• Abraham Lincoln was once asked, “If you had five
hours to chop down a tree, how would you do it?”
• He allegedly responded, “I would spend the first
four hours sharpening my axe.”
• Thorough preparation and a firm foundation are
essential to successfully meet the challenge of
transferring learning to behaviour.
Transferring learning to
behaviour
Support
1. Executive and management sponsorship is critical
for success.
• Input from all levels of management in the design and
implementation of the programme may increase the
level of the engagement and commitment to the
process.
• Develop a framework for measuring changes in
participant behaviours, and for coaching and feedback.
Transferring learning to
behaviour
2. Executives and managers need to be closely
involved in the actual delivery of the modules and in
post-programme assessment.
• This may give greater credibility to the programme in the
eyes of the participants, and may help to drive home the
desired behaviour changes in the hearts and minds of
those senior leaders who participated.
• Managers may also be trained in follow-up techniques to
encourage new behaviours.
Transferring learning to
behaviour
• Results and behaviour change need to be
celebrated along the way.
• HR consultants may act as coaches and as
process improvement consultants during
the sessions, and may make sure positive
attitudes, efforts & the display of new
behaviours at all stages of the training are
recognized.
Transferring learning to
behaviour
• Managers and facilitators may use both formal and informal
methods to measure progress of participants.
• Pre-and post-assessments may be administered for each module to
identify levels of learning.
• Such reports and subjective feedback from facilitators may be sent
to managers to aid in further coaching.
• Individuals needing additional development support may be afforded
feedback discussions, modeling and coaching.
• Senior leaders may be given summary reports of participants during
and after training to identify specific outcomes of the programme.
Transferring learning to
behaviour
• Homework assignments may be an integral part of
each module to ensure learning transfer outside the
classroom.
• Such assignments should always be linked to participants’ jobs
to add an element of relevance to their learning.
• Participants may be allowed to complete these assignments
while at work.
Level 3: Behaviour
Top ten mistakes leaders make when trying to transfer learning
to behaviour
• Number 10: Not linking and aligning incentives to desired
behaviour and subsequent results.
• Number 9: Trying to do too much and not focusing efforts on
mission-critical behaviour.
• Number 8: Having the wrong kind of leaders, or the right kind in
the wrong positions.
• Number 7: Not providing adequate technology and system
support.
• Number 6: Not providing a balance of accountability and
support.
Level 3: Behaviour
Top ten mistakes leaders make when trying to transfer learning
to behaviour
• Number 5: Not providing clear direction— vision, strategy, and
expectations.
• Number 4: Promoting a culture in which employees are
discouraged from learning.
• Number 3: Not developing action plans from a business
consulting approach.
• Number 2: Not following up and following through.
• Number 1: Not eliciting buy-in and involvement from executives.
Kirkpatrick Level 4: Results
• To what extent have results occurred because of the training?
• Results could be determined by many factors including less
turnover, improved quantity of work, improved quality,
reduction of waste, reduction in wasted time, increased sales,
reduction in costs and increase in profits
• As in the case of evaluating behaviour, evaluation should be done only
on those programmes considered most important or most expensive.
• It has been recommended that ROI should only be attempted on about
5 percent of an organization’s programmes.
Level 4: Results
Guidelines for Evaluating Results
• Measure on before and after training.
• Allow time for possible results to take place.
• Repeat at appropriate times.
• Use a control group if practical.
• Consider the cost of the evaluation versus the possible
benefits.
Phillips & Phillips Level 5: ROI
The Ultimate Level of Evaluation
• The ROI process adds a fifth level to the four levels of evaluation,
which were developed more than 40 years ago (Kirkpatrick, 1975;
1979).
• At Level 1, Reaction and Planned Action—satisfaction from
programme participants is measured, along with a listing of how
they planned to apply what they have learned.
• At Level 2, Learning—measurements focus on what participants
learned during the programme using tests, skill practices, role plays,
simulations, group evaluations, and other assessment tools.
Level 5: ROI
• At Level 3, Application and Implementation— a variety of follow-up
methods are used to determine if participants applied on the job
what they learned.
• At Level 4, Business Impact, the measurement focuses on the
changes in the impact measures linked to the programme. Typical
Level 4 measures include output, quality, costs, time, and customer
satisfaction.
• At Level 5, Return on Investment (the ultimate level of evaluation),
the measurement compares the programme’s monetary benefits
with the programme costs; the evaluation cycle is not complete until
the Level 5 evaluation is conducted.
Level 5: ROI
• ROI must be simple, economical, and theoretically sound, based on
generally accepted practices
• ROI process must account for other factors that have influenced output
variables
• The ROI process must have the flexibility to be applied on a
pre-programme basis, for planning purposes such as the inclusion
of the various resources and costs involved, as well as on a
post-programme basis
• ROI must be applicable with all types of data including hard data
such as quality, cost and time and soft data such as job satisfaction,
customer satisfaction, turnover, grievances, complaints
Level 5: ROI
• ROI process must include costs of the programme
• Actual calculation must use an acceptable ROI formula
• ROI process must have a successful track record
• ROI (%) = (Net Programme Benefits / Programme Costs) × 100
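The ROI formula can be sketched as a short calculation; the benefit and cost figures below are hypothetical placeholders.

```python
# A minimal sketch of the ROI formula; figures are hypothetical.

def roi_percent(benefits, costs):
    """ROI (%) = (net programme benefits / programme costs) x 100."""
    net_benefits = benefits - costs
    return net_benefits / costs * 100

# A programme returning 150,000 in monetary benefits at a cost of 100,000:
print(roi_percent(150_000, 100_000))  # 50.0, i.e. a 50% return
```

Note that ROI uses *net* benefits in the numerator, so a programme that merely breaks even yields 0%, not 100%.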
Level 5: ROI
• ROI from some training programmes, such as sales,
supervisory or managerial training, is high, while ROI for other
training, say technical training, may be lower
• ROI is based on converting both hard and soft data to monetary
values;
• Intangible benefits include items such as:
• Increased job satisfaction, increased organisational commitment,
improved teamwork, improved customer service, reduced
complaints and reduced conflicts
Level 5: ROI
Among the cost components that should be included are:
• the cost to design and develop the programme, possibly
prorated over the expected life of the programme;
• the cost of all programme materials provided to each participant;
• the cost for the instructor/facilitator, including preparation time
as well as delivery time;
• the cost of the facilities for the training programme;
• travel, lodging, and meal costs for the participants, if applicable;
• salaries plus employee benefits of the participants who attend
the training; and
• administrative and overhead costs of the training function,
allocated in some convenient way.
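Totalling the fully loaded cost side is straightforward; a sketch using the components listed above, with every figure a hypothetical placeholder rather than a benchmark:

```python
# Sketch: totalling the fully loaded cost components listed above.
# Every figure is a hypothetical placeholder.

programme_costs = {
    "design_and_development_prorated": 12_000,  # prorated over programme life
    "participant_materials": 3_000,
    "facilitator_prep_and_delivery": 8_000,
    "facilities": 2_500,
    "travel_lodging_meals": 6_000,
    "participant_salaries_and_benefits": 15_000,
    "admin_and_overhead_allocation": 3_500,
}

total_cost = sum(programme_costs.values())
print(total_cost)  # 50000
```

This total is the denominator of the ROI formula; omitting components such as participant salaries understates costs and inflates the ROI.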
Barriers to ROI Implementation
Lack of Skills and Orientation for HRD Staff
• Many training and performance improvement staff members do not understand ROI,
nor do they have the basic skills necessary to apply the process within their scope of
responsibilities.
• Measurement and evaluation are not usually part of the preparation for the job.
• Also, the typical training programme does not focus on results, but more on
learning outcomes.
• Staff members attempt to measure results by measuring learning.
• Consequently, a tremendous barrier to implementation is the change needed in the
overall orientation, attitude, and skills of the HRD staff.
Barriers to ROI Implementation
Costs and Time
• The ROI process will add some additional costs and time to
the evaluation process of programmes, although the added
amount will not be excessive.
• It is possible this barrier alone stops many ROI
implementations early in the process.
• A comprehensive ROI process can be implemented for 3–5%
of the overall training budget.
Barriers to ROI Implementation
Faulty Needs Assessment
• Many of the current HRD programmes do not have an adequate needs
assessment.
• Some of these programmes have been implemented for the wrong reasons based
on management requests or efforts to chase a popular trend in the industry.
• If the programme is not needed, the benefits from the programme will be
minimal.
• An ROI calculation for an unnecessary programme will likely yield a negative
value. This is a realistic barrier for many programmes.
Barriers to ROI Implementation
Fear
• Some HRD departments do not pursue ROI because of fear
of failure or fear of the unknown.
• Fear of failure appears in many ways.
• Designers, developers, facilitators, and programme owners
may be concerned about the consequence of negative ROI.
Barriers to ROI Implementation
Discipline and Planning
• A successful ROI implementation requires much planning & a disciplined
approach to keep the process on track.
• Implementation schedules, evaluation targets, ROI analysis plans,
measurement and evaluation policies, and follow-up schedules may be
required.
• This becomes a barrier, particularly when there are no immediate pressures
to measure the return.
• If the current senior management group is not requiring ROI, the
HRD staff may not allocate time for planning and coordination.
Other pressures and priorities also often eat into the time necessary
for ROI implementation.
Barriers to ROI Implementation
False Assumptions
• Many HRD staff members have false assumptions about
the ROI process, such as:
• The impact of a training programme cannot be
accurately calculated.
• Managers do not want to see the results of training
and development expressed in monetary values.
Barriers to ROI Implementation
False Assumptions
• If the CEO does not ask for the ROI, he or she is not expecting it.
• “I have a professional, competent staff. Therefore, I do not have
to justify the effectiveness of our programmes.”
• The training process is a complex, but necessary activity;
therefore, it should not be subjected to an accountability
process.
Concerns with ROI
• For an ROI process to be useful, it must balance many issues such
as feasibility, simplicity, credibility, and soundness.
• Three major audiences must be pleased with the ROI
process to accept and use it:
• Practitioners who design, develop, and deliver programmes.
• Senior managers, sponsors, and clients who initiate and
support programmes.
• Researchers who need a credible process.
Concerns with ROI
HRD Practitioners
For years, HRD practitioners have assumed that ROI could not be
measured.
• The ROI process can appear too confusing: when practitioners
examine a typical process, they find long formulas, complicated
equations, and complex models that make the process seem impractical.
• Data collection and analysis demand effort and cost.
• HRD practitioners seek an ROI process that is simple and easy
to understand and implement, and not too expensive.
Concerns with ROI
Senior Managers/Sponsors/Clients
• Managers who approve HRD budgets, request HRD
programmes, or live with the results of programmes have
a strong interest in developing the ROI of training.
• They want a process that provides quantifiable results and
that is simple and easy to understand.
Concerns with ROI
Researchers
• Researchers usually insist that models, formulas, assumptions,
and theories are sound and based on commonly accepted
practices
• They also want a process that produces accurate values and
consistent outcomes.
• If estimates are necessary, researchers want a process that
provides the most accuracy within the constraints of the situation
Evaluation at different levels
• For example, at Level 1, where it is easy to measure reaction,
organizations achieve a high level of activity, with many
organizations requiring 100% evaluation.
• In these situations, a generic questionnaire is
administered at the end of each programme.
• Level 2, Learning, is another relatively easy area to
measure and the target is high, usually in the 50–70%
range. This target depends on the organization, based on the
nature and type of programmes.
Evaluation at different levels
• At Level 3, Application, the percentage drops because of
the time and expense of conducting follow-up evaluations.
Targets in the range of 25–35% are common.
• Targets for Level 4, Business Impact, and Level 5, ROI, are
relatively small, reflecting the challenge of comprehending any
new process. Common targets are 10% for Level 4 and 5% for Level 5.
Feasibility of conducting ROI
• An important consideration in planning the ROI impact study is
to determine the appropriate levels for evaluation.
• Some evaluations may stop at Level 3, where a detailed
report determines the extent to which participants are
using what they have learned.
• Others may be evaluated at Level 4, impact, where the
consequences of on-the-job application are monitored.
Feasibility of conducting ROI
• A Level 4 impact study will examine hard and soft
data measures directly linked to the programme. This
type of study requires that the impact of the programme
be isolated from other influences.
• Finally, if the ROI calculation is needed, two additional
steps are required: the Level 4 impact data must be
converted to monetary value, and the costs of the
programme captured, so that the ROI may be
developed.
• Only a few programmes should be taken to this level of
evaluation.
Feasibility of conducting ROI
• During the planning stage, the feasibility for a Level 4 or 5 impact
study should be examined; relevant questions that need to be
addressed are:
• What specific measures have been influenced with this programme?
• Are those measures readily available?
• Can the effect of the programme on those measures be isolated?
• Are the costs of the programme readily available?
• Will it be practical and feasible to discuss costs?
• Can the impact data be converted to monetary value?
• Is the actual ROI needed or necessary?
Collecting Data
• Data collection is central to the ROI methodology.
• Both hard data (representing output, quality, cost, and time) and
soft data (including job and customer satisfaction) are
collected.
• Data are collected using a variety of methods, including the
following:
• Surveys: used to determine the degree to which participants
are satisfied with the programme, have learned skills and
knowledge, and have used various aspects of the
programme; useful for Levels 1, 2, and 3 data.
Collecting Data
• Questionnaires are usually more detailed than surveys;
participants provide responses to a variety of open-ended and forced
response questions. Questionnaires can be used to capture Levels 1, 2,
3, and 4 data.
• Tests: conducted to measure changes in knowledge and skills
(Level 2); tests come in a wide variety of formal (criterion-referenced
tests, performance tests and simulations, and skill practices) and
informal (facilitation assessment, self-assessment, and team assessment)
methods.
Collecting Data
• Observations: on-the-job observation captures actual skill application
and use; particularly useful in customer service training and
more effective when the observer is either invisible or
transparent. Observations are appropriate for Level 3 data.
• Interviews: Conducted with participants to determine the extent
to which learning has been used on the job; allow for probing to
uncover specific applications and are usually appropriate
with Level 3 data, but can be used with Levels 1 and 2 data.
Collecting Data
• Focus groups: Conducted to determine the degree to which
a group of participants has applied the training to job
situations; usually appropriate with Level 3 data.
• Action plans and programme assignments: Developed by
participants during the training programme and are
implemented on the job after the programme is completed;
follow-ups provide evidence of training programme success;
Levels 3 and 4 data can be collected with action plans.
Isolating the Effects of Training
• An often overlooked issue in most evaluations.
• In this step of the process, specific strategies are
explored to determine the amount of output
performance directly related to the programme,
resulting in increased accuracy and credibility of ROI
calculations.
• This step is essential because many factors
may influence performance data after training.
Isolating the Effects of Training
• The following techniques have been used by organizations to
tackle this important issue:
• A control group arrangement is used to isolate training
impact; with this strategy, one group receives training while
another similar group does not; the difference in the
performance of the two groups is attributed to the training
programme.
• when properly set up and implemented, the control group
arrangement is the most effective way to isolate the effects of
training.
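A sketch of the control-group arrangement, using hypothetical post-programme output figures:

```python
# Sketch: control-group isolation of training impact. The output
# figures (e.g. units produced per employee) are hypothetical.

def isolated_impact(trained_output, control_output):
    """Performance difference attributed to the training programme."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(trained_output) - mean(control_output)

# Post-programme output of a trained group vs a similar untrained group:
print(isolated_impact([120, 130, 125], [110, 112, 114]))  # 13.0
```

The credibility of the number rests entirely on the groups being genuinely similar; selection differences show up as spurious "impact".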
Isolating the Effects of Training
• Trend lines are used to project the values of specific output variables
as if training had not been undertaken (e.g. the turnover trend for a
particular department/level/role may be compared over the last three
years without training and then with training on employee career
planning/leadership development); the difference represents the
estimate of the impact of training; under certain conditions,
this strategy can accurately isolate the training impact.
• When mathematical relationships between input and output
variables are known, a forecasting model is used to isolate the
effects of training.
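The trend-line strategy can be sketched as fitting the pre-training trend, projecting it one period forward, and comparing the projection with the observed post-training value. The monthly turnover figures below are hypothetical.

```python
# Sketch: trend-line isolation. Fit a least-squares line to the
# pre-training history, project it forward, and compare with the
# observed post-training value. All figures are hypothetical.

def linear_trend(history):
    """Least-squares fit y = a + b*t over t = 0..n-1; returns (a, b)."""
    n = len(history)
    t_mean = (n - 1) / 2
    y_mean = sum(history) / n
    b = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(history))
         / sum((t - t_mean) ** 2 for t in range(n)))
    return y_mean - b * t_mean, b

history = [20, 19, 18, 17]        # monthly turnover before training
a, b = linear_trend(history)
projected = a + b * len(history)  # expected turnover had training not occurred
actual = 12                       # observed turnover after training
impact = projected - actual       # improvement attributed to training
print(projected, impact)          # 16.0 4.0
```

As the slide notes, this only isolates the training effect "under certain conditions", namely when the pre-training trend would plausibly have continued unchanged.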
Isolating the Effects of Training
• Participants estimate the amount of improvement related
to training; with this approach, participants are provided with
the total amount of improvement, on a pre-programme and
post-programme basis, and are asked to indicate the percent of
the improvement that is actually related to the training
programme.
• Supervisors of participants estimate the impact of
training on the output variables. Supervisors of participants
are presented with the total amount of improvement and are
asked to indicate the percent related to training.
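Estimate-based strategies are often combined with a confidence adjustment, a practice associated with the Phillips ROI methodology: the reported improvement is multiplied by the percentage attributed to training and discounted by the estimator's confidence. A sketch with hypothetical figures:

```python
# Sketch: discounting an estimated improvement by the percentage
# attributed to training and by the estimator's confidence.
# All figures are hypothetical.

def attributed_improvement(total_improvement, pct_from_training, confidence):
    """Portion of the improvement credited to training, conservatively adjusted."""
    return total_improvement * pct_from_training * confidence

# A participant reports a 10-unit gain, attributes 60% of it to the
# programme, and is 80% confident in that estimate:
credited = attributed_improvement(10.0, 0.60, 0.80)
```

Discounting by confidence deliberately errs on the conservative side, which strengthens the credibility of the resulting ROI figure.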
Isolating the Effects of Training
• Senior management estimates the impact of training.
• In these cases, managers provide an estimate or “adjustment” to reflect the
portion of the improvement related to the training programme.
• Experts provide estimates of the impact of training on the
performance variable.
• When feasible, other influencing factors are identified and the impact
estimated or calculated, leaving the remaining, unexplained improvement
attributed to training.
• In some situations, customers provide input on the extent to which
training has influenced their decision to use a product or service.
Converting Data to Monetary Values
• To calculate the return on investment, data collected in a Level 4 evaluation are
converted to monetary values and compared to programme costs.
• This requires a value to be placed on each unit of data connected with the programme.
• A wide variety of techniques are available to convert data to monetary values.
• The specific techniques selected usually depend on the types of data and the situation:
• Output data are converted to profit contribution or cost savings. In this strategy,
output increases are converted to monetary value based on their unit contribution to profit or
the unit of cost reduction. Standard values for these items are readily available in most
organizations.
Converting Data to Monetary Values
• The cost of quality is calculated and quality improvements are
directly converted to cost savings. Standard values for these
items are available in many organizations.
• For programmes where employee time is saved, the
participants’ wages and employee benefits are used to develop
the value for time.
• Because a variety of programmes focus on improving the time
required to complete projects, processes, or daily activities, the value
of time becomes an important and necessary issue. This is a standard
formula in most organizations.
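Converting saved time to money, as described above, multiplies hours saved by a fully loaded wage (salary plus benefits). A sketch; the wage, benefits factor, and time savings below are hypothetical.

```python
# Sketch: valuing saved employee time at fully loaded wages.
# The hourly wage, benefits factor, and time savings are hypothetical.

def value_of_time_saved(hours_saved, hourly_wage, benefits_factor=1.35):
    """Hours saved x (wage uplifted for employee benefits)."""
    return hours_saved * hourly_wage * benefits_factor

# 2 hours saved per week by 40 participants over 48 working weeks:
annual_hours = 2 * 40 * 48   # 3840 hours
annual_value = value_of_time_saved(annual_hours, hourly_wage=20.0)
```

The benefits factor uplifts base wages to the fully loaded cost of employment; the appropriate multiplier varies by organisation and country.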
Converting Data to Monetary Values
• Historical costs, developed from cost statements, are
used when they are available for a specific variable.
• In this case, organizational cost data establishes the
specific monetary cost savings of an improvement.
Converting Data to Monetary Values
• When available, internal and external experts
may be used to estimate a value for an improvement;
in this situation, the credibility of the estimate hinges on
the expertise and reputation of the individual.
• External databases are sometimes available to
estimate the value or cost of data items; research,
government, and industry data-bases can provide
important information for these values; the difficulty lies in
finding a specific database related to the situation.
Converting Data to Monetary Values
• Participants estimate the value of the data item; they must be
capable of providing a value for the improvement.
• Supervisors and managers provide estimates
when they are both willing and capable of assigning
values to the improvement.
• HRD staff estimates may be used to determine the
value of an output data item; in such cases, it is
essential for the estimates to be provided on an unbiased
basis.