Training Evaluation and Measurement
MS. PREETI BHASKAR
ASSISTANT PROFESSOR
ICFAI BUSINESS SCHOOL, DEHRADUN
Introduction
 Training effectiveness: Benefits that the company and the trainees receive from training
 Training outcomes or criteria: Measures that the trainer and the company use to evaluate training programs
 Training evaluation: The process of collecting the outcomes needed to determine if training is effective
 Evaluation design: Collection of information, including whom, what, when, and how, for determining the effectiveness of the training program
Reasons for Evaluating Training
Companies make large investments in training and
education and view them as a strategy to be successful;
they expect the outcomes of training to be measurable
Training evaluation provides the data needed to
demonstrate that training does provide benefits to the
company
It involves formative and summative evaluation
Formative Evaluation
 Takes place during program design and development
 It helps ensure that the training program is well organized and runs smoothly
 Trainees learn and are satisfied with the program
 It provides information about how to make the program better; it involves collecting qualitative data about the program
 Pilot testing: Process of previewing the training program with potential trainees and managers or with other customers
Summative Evaluation
 Determines the extent to which trainees have changed as a result of participating in the training program
It may include measuring the monetary benefits that the company receives from the program (ROI)
It involves collecting quantitative data
 A training program should be evaluated:
To identify the program's strengths and weaknesses
To assess whether content, organization, and administration of the program contribute to learning and the use of training content on the job
To identify which trainees benefited most or least from the program
Cont.
To gather data to assist in marketing training programs
To determine the financial benefits and costs of the program
To compare the costs and benefits of:
Training versus non-training investments
Different training programs to choose the best program
Outcomes Used in the Evaluation of Training Programs
 Reaction outcomes
 Trainees' perceptions of the program, collected at the program's conclusion
 Cognitive outcomes
 Determine the degree to which trainees are familiar with the principles,
techniques, and processes emphasized in the training program
 Skill-based outcomes
 The extent to which trainees have learned skills can be evaluated by observing
their performance in work samples such as simulators
Cont.
 Affective outcomes
If trainees were asked about their attitudes on a survey, that would be considered a learning measure
 Results: Used to determine the training program's payoff for the company
 Return on investment
Direct costs: Salaries and benefits for all employees involved in training; program material and supplies; equipment or classroom rentals or purchases; and travel costs
Indirect costs: Costs not related directly to the design, development, or delivery of the training program
Benefits: Value that the company gains from the training program
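The cost and benefit categories above can be tallied before any ROI arithmetic. A minimal Python sketch, using hypothetical figures and field names (only the direct/indirect/benefits distinction comes from the slides):

```python
# Hypothetical tally of the ROI inputs defined above; all numbers are illustrative.
from dataclasses import dataclass

@dataclass
class TrainingCosts:
    direct: float    # salaries and benefits, materials, equipment or room rental, travel
    indirect: float  # costs not tied directly to design, development, or delivery

    def total(self) -> float:
        return self.direct + self.indirect

@dataclass
class TrainingProgram:
    costs: TrainingCosts
    benefits: float  # value the company gains from the program

    def net_benefit(self) -> float:
        return self.benefits - self.costs.total()

program = TrainingProgram(TrainingCosts(direct=42_000, indirect=8_000), benefits=75_000)
print(program.net_benefit())  # 25000
```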
Determining Whether Outcomes are Appropriate
Criteria
Relevance: The extent to which training outcomes are related to the learned capabilities emphasized in the training program.
Criterion contamination: The extent to which training outcomes measure inappropriate capabilities or are affected by extraneous conditions.
Criterion deficiency: The failure to measure training outcomes that were emphasized in the training objectives.
Reliability: The degree to which outcomes can be measured consistently over time.
Discrimination: The degree to which trainees' performance on the outcome actually reflects true differences in performance.
Practicality: The ease with which the outcome measures can be collected.
Evaluation Practices
• It is important to recognize the limitations of choosing to measure
only reaction and cognitive outcomes
• To ensure an adequate training evaluation, companies must collect
outcome measures related to both learning and transfer
Training Evaluation Practices
[Chart: percentage of companies using each outcome type]
Evaluation Designs
Threats to validity: Factors that will lead an
evaluator to question either the:
Internal validity: The believability of the study results
External validity: The extent to which the evaluation
results are generalizable to other groups of trainees and
situations
Threats to Validity
[Table: threats to internal and external validity]
Methods to Control for Threats to Validity
Pretests and post-tests: Comparison of the post-training
and pretraining measures can indicate the degree to
which trainees have changed as a result of training
Use of comparison groups: Group of employees who
participate in the evaluation study but do not attend the
training program
Hawthorne effect: Trainees may perform better simply because of the attention they receive from being evaluated; a comparison group helps rule this out
Cont.
Random assignment: Assigning employees to the
training or comparison group on the basis of chance
alone
It is often impractical
Analysis of covariance: Statistically adjusts post-training outcomes for any pretraining differences between the groups
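Random assignment is simple to express in code. A hedged sketch with a hypothetical employee roster, only to illustrate assignment by chance alone:

```python
# Illustrative random assignment of a hypothetical roster to training vs. comparison groups.
import random

employees = [f"emp_{i:02d}" for i in range(1, 21)]  # hypothetical roster of 20 employees
random.shuffle(employees)                            # order is now determined by chance alone

midpoint = len(employees) // 2
training_group = employees[:midpoint]
comparison_group = employees[midpoint:]

print(len(training_group), len(comparison_group))    # 10 10
```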
Typesof Evaluation Designs
Post-test only: Only post-training outcomes are collected
Appropriate when trainees can be expected to have similar levels of knowledge, behavior, or results outcomes prior to training
Pretest/post-test: Pretraining and post-training outcome
measures are collected
Used by companies that want to evaluate a training program but are uncomfortable with excluding certain employees
Comparison of Evaluation Designs
Types of Evaluation Designs
Pretest/post-test with comparison group: Includes trainees and a comparison group
Differences between each of the training conditions and the comparison group are analyzed to determine whether differences between the groups were caused by training (see the sketch after this list)
 Time series: Training outcomes are collected at periodic intervals both
before and after training
It allows an analysis of the stability of training outcomes
over time
Reversal: Time period in which participants no longer receive the
training intervention
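To make the pretest/post-test with comparison group design concrete, here is a small sketch with invented scores: the comparison group's change is subtracted from the trained group's change, so only the difference attributable to training remains. This is a simplified difference-in-differences illustration, not an example from the slides.

```python
# Hypothetical pretest/post-test scores for a trained group and a comparison group.
def mean(values):
    return sum(values) / len(values)

trained_pre,    trained_post    = [62, 58, 70, 65], [81, 77, 85, 80]
comparison_pre, comparison_post = [60, 63, 66, 59], [64, 66, 69, 61]

trained_change    = mean(trained_post) - mean(trained_pre)         # 17.0
comparison_change = mean(comparison_post) - mean(comparison_pre)   # 3.0

# Change not explained by retesting, the passage of time, or other shared influences
training_effect = trained_change - comparison_change
print(training_effect)  # 14.0
```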
Factors that Influence the Type of Evaluation Design
Determining Return on Investment
Cost-benefit analysis: Process of determining the
economic benefits of a training program using accounting
methods that look at training costs and benefits
ROI should be limited only to certain training
programs, because it can be costly
Determining costs
• Methods for comparing costs of alternative training programs include
the resource requirements model and accounting
Determining Return on Investment
 Determining benefits – Methods include:
Technical, academic, and practitioner literature
Pilot training programs
Observance of successful job performers
Estimates by trainees and their managers
 To calculate ROI:
Identify outcomes
Place a value on the outcomes
Determine the change in performance after eliminating other potential influences on training results
Obtain an annual amount of benefits
Determine the training costs
Calculate the total benefits by subtracting the training costs from benefits (operational results)
Calculate the ROI by dividing operational results by costs
• The ROI gives an estimate of the dollar return expected from each dollar invested in training
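A minimal sketch of the ROI steps above, using hypothetical annual benefits and costs (the figures are illustrative, not from the slides):

```python
# Illustrative ROI arithmetic following the steps listed above.
annual_benefits = 120_000.0  # value placed on outcomes after removing other influences
training_costs  = 40_000.0   # direct plus indirect costs

operational_results = annual_benefits - training_costs   # total benefits net of costs
roi = operational_results / training_costs               # dollar return per dollar invested

print(f"ROI = {roi:.2f}")  # ROI = 2.00 -> about $2 returned for every $1 invested
```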
Determining Costs for a Cost-Benefit Analysis
Determining Return on Investment
 Utility analysis: Cost-benefit analysis method that involves assessing the dollar value of training based on:
 Estimates of the difference in job performance between trained and untrained employees
 The number of individuals trained
 The length of time a training program is expected to influence performance
 The variability in job performance in the untrained group of employees
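A hedged sketch of a utility-analysis estimate built from the inputs listed above. It follows a common textbook formulation (dollar utility = duration of effect x number trained x performance difference in SD units x dollar value of one SD of performance, minus total cost); treat the formula and every number as an illustrative assumption rather than the slides' own method:

```python
# Illustrative utility-analysis estimate; formula and inputs are assumptions for demonstration.
def training_utility(years_of_effect, n_trained, effect_size_sd,
                     sd_performance_dollars, cost_per_trainee):
    gain = years_of_effect * n_trained * effect_size_sd * sd_performance_dollars
    total_cost = n_trained * cost_per_trainee
    return gain - total_cost

estimate = training_utility(years_of_effect=2, n_trained=50, effect_size_sd=0.4,
                            sd_performance_dollars=10_000, cost_per_trainee=1_500)
print(estimate)  # 325000.0 under these assumed inputs
```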
Practical Considerations in Determining ROI
Training programs best suited for ROI analysis:
Have clearly identified outcomes
Are not one-time events
Are highly visible in the company
Are strategically focused
Have effects that can be isolated
Cont.
Showing the link between training and market share gain
or other higher-level strategic business outcomes can be
very problematic
Outcomes can be influenced by too many other factors not
directly related to training
Business units may not be collecting the data needed to
identify the ROI of training programs
Measurement of training can be expensive
Success Cases and Return on Expectations
Return on expectations (ROE): Process through which
evaluation demonstrates to key business stakeholders
that their expectations about training have been satisfied
Success cases: Concrete examples of the impact of training
that show how learning has led to results that the company
finds worthwhile
Measuring Human Capital and Training Activity
American Society of Training and Development (ASTD):
Provides information about training hours and delivery
methods that companies can use to benchmark
Workforce analytics: Practice of using quantitative and scientific methods
to analyze data from human resource databases and other databases
to influence important company metrics
Cont.
Dashboards: Computer interface designed to receive
and analyze the data from departments within the
company to provide information to managers and
other decision makers
Useful because they can provide a visual display using charts
of the relationship between learning activities and business
performance data
Training Metrics