18.02 MSA Attribute (Advanced) Analysis
Objectives
• Understand the application of Attribute Agreement Analysis on Binary, Nominal and Ordinal data.
• Interpret Attribute Agreement Analysis from 3 approaches:
  • Assessment of Agreement
  • Kappa Statistics
  • Kendall Coefficient
Measurement System Analysis (MSA)
Why MSA?
• One concern is misclassification:
  • "Parts" inside the spec may actually measure outside the spec, leading to unnecessary scrapping or reworking.
  • "Parts" outside the spec may actually measure inside the spec, leading to shipping unacceptable product to the customer.
• Another concern is determining whether the measurement system is capable of detecting a difference between "parts".
Where to Start?
• The very first step is to know whether the difference between good and bad parts is caused by product variation or by measurement variation.
• Start with your project Y, then any other measurements that will be used to characterize the process.
When is MSA implemented?
• 1. Attribute (Discrete)
  • Data cannot be adequately described on a continuous scale.
  • Examples:
    • Binary: Pass/Fail, Good/Bad, Go/No go (analyzed with Kappa)
    • Nominal (no natural ordering): Defect type A, B, C…H / Lifted, Broken, Missing (analyzed with Kappa)
    • Ordinal (with natural ordering): Low/Med/Hi; Cold/Cool/Tepid/Warm/Hot/Scalding; satisfaction rated on a 1–5 scale (analyzed with Kappa & Kendall)
• 2. Variable (Continuous)
  • Data can be described on a continuous scale.
  • Examples: length, height, gap, thickness, torque, speed, taste, surface finish, time.
Attribute Agreement Analysis (AAA)
Measurement System Analysis
Purposes of Study
• To determine if inspectors (appraisers) across all shifts, machines, lines, etc… use
the same criteria to discriminate “good” from “bad”
• To quantify the ability of inspectors (appraisers) or gages to accurately repeat their
inspection decisions over time
• To identify how well inspectors/gages measure a known master (possibly defined by
the customer) to ensure no misclassification occurs
• To determine areas where:
  • Training is needed
  • Procedures or control plans are lacking
  • Standards are not clearly defined
  • Gage adjustment or correlation is necessary
AAA General Requirements
Per Jabil procedures, an Attribute Gauge R&R should be performed for appraisers when any of the following situations occurs:
• A new appraiser is appointed.
• A new product released for production has a major change in complexity or in customer requirements/expectations.
• At regular intervals to ensure appraiser capability is maintained. The interval
frequency should be defined based on production quality performance (bi-annual
assessment recommended but not required).
• Issues related to appraiser skills arise.
• Inspector certification / re-certification process.
Steps to perform AAA
5. Perform MSA Study
6. Analyze MSA Results in Minitab
Outputs:
• Assessment of Agreement
• Binary & Nominal – Kappa
• Ordinal – Kappa & Kendall
AAA Guidelines
• When planning to conduct an Attribute MSA, follow the Steps for AAA taught in Green Belt
training.
• Sample preparation
• Assessment process.
AAA Guidelines
• Operator Selection:
  • Minimum of 2 inspectors and 2 trials each, depending on what you are trying to learn; more is better.
  • Preferred sample size is 30 – 50 pieces; keep the maximum within 100 pieces.
AAA Guidelines
Assessment Process:
• Each inspector performs the planned number of trials on the planned set of samples.
• Randomize the samples between each trial and ensure a blind test (the inspector should not be able to recognize a sample from part identification during this assessment); see the run-order sketch after this list.
• Conduct the Attribute Gauge R&R process in a controlled/designated area.
• Inspectors follow the SOP, WI, Visual Aids or IPC-A-610 to determine the pass/fail (Nominal) or classification (Ordinal) decision for each sample.
• The time given to inspect each unit during the Attribute Gauge R&R shall align with the production takt time allotted for the inspection of each unit.
• The results are entered into the data collection worksheet.
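The randomization and blind-test requirement can be scripted. Below is a minimal sketch, assuming Python with pandas is available; the study size and column names are illustrative, not the official Jabil data collection worksheet format:

```python
import random

import pandas as pd

# Illustrative study size: 2 appraisers x 2 trials x 40 samples.
appraisers = ["Inspector A", "Inspector B"]
n_samples, n_trials = 40, 2

rows = []
for trial in range(1, n_trials + 1):
    for appraiser in appraisers:
        # Re-randomize the presentation order for every appraiser and trial so the
        # inspector cannot recognize a sample from its position or identification.
        order = random.sample(range(1, n_samples + 1), n_samples)
        for run, sample_id in enumerate(order, start=1):
            rows.append({
                "Appraiser": appraiser,
                "Trial": trial,
                "Run order": run,
                "Sample": sample_id,  # visible to the study coordinator only (blind test)
                "Rating": None,       # filled in during the assessment
            })

run_sheet = pd.DataFrame(rows)
print(run_sheet.head())
```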
Different Analysis for Attribute Data
• Binary (e.g., Pass / OK): Kappa Statistics + Assessment Agreement
• Nominal (e.g., Type H, Misaligned): Kappa Statistics + Assessment Agreement
• Ordinal (e.g., 2 – Moderate, 3 – Significant, 4 – Major): Kendall's Coefficients + Assessment Agreement + Kappa
AAA - Analysis Focus
WITHIN (Repeatability):
• Consistency – Within Appraiser: assess the consistency of responses for each appraiser.
• Correctness with standard reference – Each Appraiser vs. Standard: assess the correctness of responses for each appraiser.
BETWEEN (Reproducibility):
• Consistency – Between Appraisers: assess the consistency of responses between appraisers.
• Correctness with standard reference – All Appraisers vs. Standard: assess the correctness of responses for all appraisers.
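As a computational sketch of what the "within" and "vs. standard" cells mean, assuming long-format data in Python/pandas (the column names Appraiser, Sample, Trial, Rating and Standard are assumptions, not the Minitab worksheet layout):

```python
import pandas as pd

# Toy data: 2 appraisers x 2 samples x 2 trials, plus the known standard per sample.
df = pd.DataFrame({
    "Appraiser": ["John"] * 4 + ["Keith"] * 4,
    "Sample":    [1, 1, 2, 2] * 2,
    "Trial":     [1, 2, 1, 2] * 2,
    "Rating":    ["OK", "OK", "Fail", "Fail", "OK", "Fail", "Fail", "Fail"],
    "Standard":  ["OK", "OK", "Fail", "Fail", "OK", "OK", "Fail", "Fail"],
})

# WITHIN (repeatability): % of samples each appraiser rated identically on all trials.
within = (
    df.groupby(["Appraiser", "Sample"])["Rating"].nunique().eq(1)
      .groupby("Appraiser").mean() * 100
)

# Each Appraiser vs. Standard: % of samples where every trial matches the standard.
vs_standard = (
    df.assign(match=df["Rating"].eq(df["Standard"]))
      .groupby(["Appraiser", "Sample"])["match"].all()
      .groupby("Appraiser").mean() * 100
)

print(within)       # within-appraiser agreement (%)
print(vs_standard)  # appraiser vs. standard agreement (%)
```

The BETWEEN rows are computed the same way, except that a sample only counts as a match when every appraiser (and every trial) agrees.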
Example
Inspectors at a manufacturing plant reviewed production parts according to the standards given by the QE and thereby decided on the final disposition. To assess the consistency and correctness of the inspectors' ratings, the QE asks 5 inspectors to rate the quality of 40 samples twice. Samples were randomly presented.
18.02.2AAA Example.mtw
Minitab Attribute Measurement Systems
Expectation: require a minimum score of 90%. Obvious good and obvious bad would be expected to score 100%.

John, Ken, Mary & Rose scored 100%: they reached consistent conclusions for every sample when inspecting all 40 samples twice.

Keith, however, scored 95%, with matching conclusions on 38 of the 40 samples inspected. We are 95% confident that Keith's performance ranges between 83% and 99%.
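The 83%–99% range quoted for Keith can be reproduced with an exact binomial confidence interval for 38 matching samples out of 40. A sketch assuming SciPy is available and that the interval is a Clopper-Pearson-style exact binomial interval:

```python
from scipy.stats import beta

matched, n = 38, 40   # Keith: 38 of 40 samples rated consistently across both trials
alpha = 0.05

# Clopper-Pearson exact 95% confidence interval for a binomial proportion.
lower = beta.ppf(alpha / 2, matched, n - matched + 1)
upper = beta.ppf(1 - alpha / 2, matched + 1, n - matched)

print(f"95% CI: ({lower:.1%}, {upper:.1%})")  # roughly 83% to 99%
```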
Interpret Minitab Analysis
Step 3: Assess the correctness of responses for each appraiser
• The #OK/Fail column contains the number of samples that the appraiser passed but should have failed (a false positive).
• The #Fail/OK column contains the number of samples that the appraiser failed but should have passed (a false negative).
• The #Mixed column indicates the number of times that the appraiser made inconsistent ratings across trials.
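A rough sketch of how these three counts could be tallied from the raw ratings (Python/pandas; the long-format column names are assumptions, not the Minitab session output):

```python
import pandas as pd

def disagreement_counts(df: pd.DataFrame) -> pd.DataFrame:
    """Tally # OK/Fail, # Fail/OK and # Mixed per appraiser.

    Expects long-format columns: Appraiser, Sample, Trial, Rating, Standard.
    """
    grouped = df.groupby(["Appraiser", "Sample"])
    n_distinct = grouped["Rating"].nunique()   # 1 if all trials agree, >1 if mixed
    rating = grouped["Rating"].first()         # meaningful only when trials agree
    standard = grouped["Standard"].first()

    consistent = n_distinct.eq(1)
    flags = pd.DataFrame({
        "# OK/Fail": consistent & rating.eq("OK") & standard.eq("Fail"),  # false positive
        "# Fail/OK": consistent & rating.eq("Fail") & standard.eq("OK"),  # false negative
        "# Mixed": ~consistent,                       # inconsistent across trials
    })
    return flags.groupby("Appraiser").sum()
```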
Interpret Minitab Analysis
Step 4: Assess the consistency of responses between appraisers
The 5 appraisers achieved 82.5% agreement versus the standard. Some work is still needed, as the expectation is 90%.
Interpret Minitab Analysis
Assessment of Agreement - Summary
WITHIN (Repeatability):
• Consistency: all appraisers above 90%.
• Correctness with standard reference: Keith matches the true …
2. Kappa Statistics:
• Binary: Pass / Fail, Good / Bad, Go / No go.
• Nominal (no natural ordering): Defect type A, B, C…H / Lifted, Broken, Missing.
The general Rule of Thumb (ROT) for interpreting Kappa Statistics:
• < 0.75: Measurement system needs attention.
• 0.75 – 0.9: Generally acceptable; improvement may be needed depending on application and risk.
• > 0.9: Excellent measurement system.
Analysis focus, as in the table above: WITHIN (Repeatability) and BETWEEN (Reproducibility), each assessed for consistency and for correctness against the standard.
AAA: Kappa Statistics
Kappa compares the proportion of agreement between the inspectors after removing the agreement expected by chance:

Kappa = (P_observed − P_chance) / (1 − P_chance)

Notation:
• P_observed – the observed proportion of ratings on which the inspectors agree.
• P_chance – the proportion of agreement expected by chance.

Source: "When Quality Is a Matter of Taste, Use Reliability Indexes", D. Futrell, Quality Progress, May 1995
AAA: Kappa Statistics - Guidelines
Use Kappa statistics to assess the degree of agreement of the nominal (or ordinal)
ratings made by multiple Inspectors when the Inspectors evaluate the same samples.
• Kappa statistics treat all misclassifications equally
• Kappa values range from –1 to +1.
• The higher the value of kappa, the stronger the agreement. When:
• Kappa = 1, perfect agreement exists.
• Kappa = 0, agreement is the same as would be expected by chance.
• Kappa < 0, agreement is weaker than expected by chance; this rarely occurs.
The general rule of thumb for interpreting Kappa Statistics:
• < 0.75 Measurement System needs attention
• 0.75 – 0.9 Generally acceptable, improvement may be needed depending on
application and risk.
• > 0.9 Excellent Measurement System
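A minimal sketch of the kappa calculation for a single appraiser versus the standard on binary data (plain Python; the small rating lists are made up). Note that Minitab's Attribute Agreement Analysis uses Fleiss' kappa by default, so this Cohen's-style calculation illustrates the idea rather than reproducing Minitab's numbers exactly:

```python
def kappa(ratings, standard):
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(ratings)
    categories = set(ratings) | set(standard)

    # Observed proportion of agreement.
    p_observed = sum(r == s for r, s in zip(ratings, standard)) / n

    # Agreement expected by chance: product of the marginal proportions per category.
    p_chance = sum(
        (ratings.count(c) / n) * (standard.count(c) / n) for c in categories
    )
    return (p_observed - p_chance) / (1 - p_chance)


# Example: 10 samples, the appraiser passes one sample that should have failed.
appraiser = ["OK", "OK", "Fail", "OK", "Fail", "OK", "OK", "OK", "Fail", "OK"]
standard  = ["OK", "OK", "Fail", "OK", "Fail", "OK", "OK", "Fail", "Fail", "OK"]
print(round(kappa(appraiser, standard), 3))  # about 0.78 -> "generally acceptable" per the ROT
```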
Example – Binary (Pass / Fail)
Inspectors at a manufacturing plant reviewed production parts according to the standards given by the QE and thereby decided on the final disposition. To assess the consistency and correctness of the inspectors' ratings, the QE asks 5 inspectors to rate the quality of 40 samples twice. Samples were randomly presented.
18.02.2AAA Example.mtw
Minitab Attribute Measurement Systems
John, Ken, Mary & Rose achieved Kappa = 1 for both categories of samples; they have perfect agreement.
These results show that all the appraisers correctly matched the
standard ratings on 33 of the 40 samples. The overall kappa value
is 0.9399, which indicates strong agreement with the standard
values.
Interpret Minitab Analysis
Kappa Statistics Summary
WITHIN (Repeatability):
• Consistency: except Keith at 82.5%, all others at or above 95%.
• Correctness with standard reference: Keith matches the true result 85% of the time, while Mary's …
Example – Nominal (Style: A, B, C, D or E)
18.02.3AAAforNominalandOrdinal.xlsx
Minitab Analysis
Duncan, Haynes, Holmes & Montgomery are consistent and have correct ratings; Simpson is the least consistent and has the fewest correct ratings.
Interpret Minitab Analysis
Step 2: Assess the consistency of responses for each appraiser
WITHIN (Repeatability):
• Consistency: except Simpson at 77.5%, all others at or above 95%.
• Correctness with standard reference: Simpson matches the true result 46.67% of the time and …
When using ordinal ratings, Kendall's coefficients, which account for ordering, are usually more appropriate statistics for determining association than kappa alone. Minitab estimates Kendall's coefficient of concordance by:

W = (12 Σ Ri² − 3 k² N (N + 1)²) / (k² N (N² − 1) − k Σ Tj)

Notation:
• N – the number of subjects (samples)
• Σ Ri² – the sum of the squared sums of ranks for each of the N ranked subjects
• k – the number of appraisers
• Tj – the tie correction for appraiser j: tied observations are assigned the average of the ranks they would otherwise receive, and Tj = Σ(t³ − t) over each group of t tied ranks
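A sketch of the tie-corrected calculation in Python (NumPy/SciPy assumed; the small ratings matrix is invented for illustration, not taken from the course data file):

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings: np.ndarray) -> float:
    """Kendall's coefficient of concordance with tie correction.

    ratings: N x k matrix of ordinal scores (N subjects rated by k appraisers).
    """
    n, k = ratings.shape

    # Rank the N subjects within each appraiser's column; ties receive average ranks.
    ranks = np.column_stack([rankdata(ratings[:, j]) for j in range(k)])
    sum_ri_sq = np.sum(ranks.sum(axis=1) ** 2)   # sum of squared rank sums (Sigma Ri^2)

    # Tie correction: Tj = sum(t^3 - t) over each group of t tied ranks for appraiser j.
    tie_corr = 0.0
    for j in range(k):
        _, counts = np.unique(ratings[:, j], return_counts=True)
        tie_corr += np.sum(counts ** 3 - counts)

    numerator = 12 * sum_ri_sq - 3 * k ** 2 * n * (n + 1) ** 2
    denominator = k ** 2 * n * (n ** 2 - 1) - k * tie_corr
    return numerator / denominator


# Example: 6 samples scored on a 1-5 ordinal scale by 3 appraisers.
scores = np.array([
    [1, 1, 2],
    [2, 2, 2],
    [3, 3, 4],
    [4, 4, 3],
    [5, 5, 5],
    [5, 4, 5],
])
print(round(kendalls_w(scores), 3))  # about 0.94 -> very strong association
```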
18.02.3AAAforNominalandOrdinal.xls
Example
Many of the kappa values are 1, which indicates perfect agreement within an appraiser between trials. Some of Simpson's kappa values range between 0.314 and 0.9499; you might want to investigate why Simpson's ratings of those samples were inconsistent.

Because the data are ordinal, Minitab also provides Kendall's coefficient of concordance values. These values are > 0.9, which indicates a very strong association within the appraiser ratings.
Interpret Minitab Analysis
Step 3: Assess the correctness of responses for each appraiser
WITHIN (Repeatability):
• Consistency: except Simpson at 77.5%, all others above …
• Correctness with standard reference: Simpson matches the true result 77.5% of the time and …
BETWEEN: …
Conclusion: the Kendall statistics are acceptable across all analyses but did not meet …
Sense Multipliers
• Optical magnifiers, sound amplifiers, and other devices to expand the ability of
the unaided human to sense the defects / categories
Masks
• Used to block out the inspector's view of irrelevant characteristics so he/she can focus on real responsibilities
Templates
• These are a combination of gage, magnifier, and mask. Example: cardboard
template placed over terminal boards
Any extra or misplaced terminal will prevent the template from seating properly
Overlay
• These are visual aids in the form of transparent sheets on which guidelines or
tolerance lines are drawn
Reorganization of Work
• Suppose certain errors are due to fatigue caused by an inability to maintain concentration for long periods of time; reorganizing the work can relieve this
Source: Juran, J.M. (1988). Juran's Quality Control Handbook, Fourth Edition. McGraw-Hill, New York, p. 18.83
Attribute Agreement Analysis (AAA) - Summary
THANK YOU
18.02.4Autogage.mtw
Extra – Binary (Go / No Go)
Inspectors at a textile
manufacturing plant rate samples
of dyed fabric as go or no-go
(pass or fail) based on the
absence or presence of white
specks. To assess the consistency
and correctness of the inspectors'
ratings, a quality engineer asks
two inspectors to rate print
quality on 60 samples of fabric
twice. Fabric samples were
randomly presented.
18.02.5DyeCotton.mtw
Source: Minitab Support
Version Log
User Date Version # Changes