TQM unit III lecture notes

The document provides lecture notes on Total Quality Management (TQM), focusing on tools and techniques such as the seven traditional tools of quality, benchmarking, and Six Sigma. It details various quality management tools like flow charts, check sheets, histograms, and control charts, explaining their purposes and applications in identifying and solving quality-related issues. Additionally, it outlines the benchmarking process and the Six Sigma project methodology, emphasizing the importance of continuous improvement in organizational performance.

RAGHU ENGINEERING COLLEGE

Autonomous
(Approved by AICTE, New Delhi, Accredited by NBA (CIV, ECE, MECH, CSE), NAAC with ‘A’
grade & Permanently Affiliated to JNTU-GV Vizianagaram)
Dakamarri, Bheemunipatnam Mandal, Visakhapatnam Dist. – 531 162 (A.P.)
Ph: +91-8922-248001, 248002 Fax: + 91-8922-248011
e-mail: [email protected] website: www.raghuenggcollege.com

Total Quality Management

UNIT – III Lecture Notes

NPTEL: Link: https://archive.nptel.ac.in/courses/110/104/110104080/

Syllabus

TQM tools and techniques: The seven traditional tools of quality, new management tools; Six Sigma: concept, manufacturing and service sector; Benchmarking: reasons to benchmark, benchmarking process; FMEA – stages and types.

7 TRADITIONAL TOOLS OF QUALITY

The seven traditional tools of quality are a set of visual exercises that can help in troubleshooting issues related
to quality. They are called traditional because they were developed before the use of more advanced statistical
methods, and they are suitable for people with little formal training in statistics. They can be used to solve most
quality-related problems in various industries and processes.

The seven tools are:

1. Flow chart: A diagram that shows the sequence of steps in a process, using standard symbols. It can help identify the sources of variation, waste, and inefficiency in a process.

A flowchart is a picture of the separate steps of a process in sequential order. It is a generic tool that can be
adapted for a wide variety of purposes, and can be used to describe various processes, such as a manufacturing
process, an administrative or service process, or a project plan.
2. Check sheet: A table that records the frequency of occurrence of different categories of data. It can help
collect and organize data in a systematic way, and reveal patterns or trends in the data.

A Check Sheet refers to a document used for recording data at the time and place of the operation of
interest. Typically, a blank document is taken to start with. Then it is designed for easy, quick, and
effective recording of the required data, which can be either qualitative or quantitative.
This sheet helps identify all possible errors and has the flexibility to add more sources of error based
on practical experience. These are then used for recording data about the errors, which are eventually
used for analyzing the operational issue.

Check Sheet suggested by Roger S. Pressman in Software Engineering: A Practitioner's Approach:

A software engineering organization collects information on defects for a period of one year. Some of the
defects are uncovered as software is being developed. Others are encountered after the software has been
released to its
end-users. Although hundreds of different errors are uncovered, all can be tracked to one (or more) of the
following causes:

• incomplete or erroneous specifications (IES)


• misinterpretation of customer communication (MCC)
• intentional deviation from specifications (IDS)
• violation of programming standards (VPS)
• error in data representation (EDR)
• inconsistent component interface (ICI)
• error in design logic (EDL)
• incomplete or erroneous testing (IET)
• inaccurate or incomplete documentation (IID)
• error in programming language translation of design (PLT)
• ambiguous or inconsistent human/computer interface (HCI)
• miscellaneous (MIS)

To apply statistical SQA, Table 8.1 is built. The table indicates that IES, MCC, and EDR are the vital few causes
that account for 53 percent of all errors. It should be noted, however, that IES, EDR, PLT, and EDL would be
selected as the vital few causes if only serious errors are considered. Once the vital few causes are determined, the
software engineering organization can begin corrective action. For example, to correct MCC, the software
developer might implement facilitated application specification techniques to improve the quality of customer
communication and specifications. To improve EDR, the developer might acquire CASE tools for data modeling
and perform more stringent data design reviews.
It is important to note that corrective action focuses primarily on the vital few. As the vital few causes are
corrected, new candidates pop to the top of the stack.

Statistical quality assurance techniques for software have been shown to provide substantial quality improvement.
In some cases, software organizations have achieved a 50 percent reduction per year in defects after applying
these techniques.
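The vital-few analysis described above can be sketched in Python. This is a minimal illustration; the defect counts below are invented for demonstration and are not the actual data behind Table 8.1:

```python
from collections import Counter

# Illustrative defect log: each entry is the cause code of one uncovered error.
# These counts are made up for demonstration; they are not Pressman's Table 8.1.
defect_log = (["IES"] * 205 + ["MCC"] * 156 + ["EDR"] * 106
              + ["VPS"] * 78 + ["EDL"] * 71 + ["IET"] * 60
              + ["MIS"] * 56 + ["IID"] * 36 + ["ICI"] * 33
              + ["PLT"] * 26 + ["IDS"] * 25 + ["HCI"] * 17)

counts = Counter(defect_log)   # the check sheet: frequency per cause code
total = sum(counts.values())

# Rank causes by frequency and keep the "vital few" that together
# account for more than half of all recorded errors.
vital_few, cumulative = [], 0
for cause, n in counts.most_common():
    vital_few.append(cause)
    cumulative += n
    if cumulative / total > 0.5:
        break
```

With these invented counts, the three top-ranked causes (IES, MCC, EDR) cover roughly 53 percent of all errors, mirroring the proportion quoted in the text.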

In conjunction with the collection of defect information, software developers can calculate an error index (EI) for each major step in the software process [IEE94]. After analysis, design, coding, testing, and release, the following data are gathered:

Ei = the total number of errors uncovered during the ith step in the software engineering
process

Si = the number of serious errors


Mi = the number of moderate errors

Ti = the number of minor errors
PS = size of the product (LOC, design statements, pages of documentation) at the ith step

ws, wm, wt = weighting factors for serious, moderate, and trivial errors, where recommended values are ws = 10,
wm = 3, wt = 1. The weighting factors for each phase should become larger as development progresses. This
rewards an organization that finds errors early.
At each step in the software process, a phase index, PIi, is computed:
PIi = ws (Si/Ei) + wm (Mi/Ei) + wt (Ti/Ei)

The error index is computed by calculating the cumulative effect of each PIi, weighting errors encountered later in the software engineering process more heavily than those encountered earlier:

EI = Σ (i × PIi) / PS
   = (PI1 + 2PI2 + 3PI3 + . . . + iPIi) / PS
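A minimal Python sketch of this error-index computation follows. The error counts and product size are invented, and it is assumed that Ei = Si + Mi + Ti, i.e. every uncovered error is classified as serious, moderate, or minor:

```python
# Weighting factors recommended in the text: serious = 10, moderate = 3, minor = 1.
WS, WM, WT = 10, 3, 1

def phase_index(serious, moderate, minor):
    """PIi = ws*(Si/Ei) + wm*(Mi/Ei) + wt*(Ti/Ei), with Ei = Si + Mi + Ti."""
    e = serious + moderate + minor
    if e == 0:
        return 0.0
    return WS * serious / e + WM * moderate / e + WT * minor / e

def error_index(phase_indices, product_size):
    """EI = sum(i * PIi) / PS, weighting later phases more heavily (i is 1-based)."""
    return sum(i * pi for i, pi in enumerate(phase_indices, start=1)) / product_size

# Illustrative (serious, moderate, minor) counts for four process steps:
phases = [(2, 5, 10), (1, 4, 8), (3, 6, 12), (0, 2, 5)]
pis = [phase_index(*p) for p in phases]
ei = error_index(pis, product_size=12.5)   # e.g. product size of 12.5 KLOC
```

Because the phase position i multiplies PIi, the same mix of errors contributes more to EI when it is found in a later step, which is exactly the incentive the weighting scheme is meant to create.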

3. Histogram: A graphical display of the frequency distribution of numerical data. It can help analyze the shape, spread, and central tendency of the data, and identify outliers or abnormal values.
4. Pareto chart: A type of histogram that shows the relative frequency of different causes of a problem, arranged in descending order. It can help identify the vital few causes that account for most of the problem, based on the 80-20 rule.

A Pareto chart is a powerful tool that helps viewers understand which factors most influence outcomes.
It's based on the Pareto principle, which is that 80 percent of outcomes arise from 20 percent of causes.
The chart helps to display this principle graphically.
Difference between Pareto Chart and Histogram: The main difference between a Pareto chart and a histogram lies in the purpose they serve:

Histogram: A histogram is a type of bar chart that shows the distribution of variables or causes of problems. It represents each cause as a column, and the height of the column represents the frequency of that cause.

Pareto Chart: A Pareto chart is a special kind of histogram that displays the causes of problems based on their overall influence. It helps prioritize corrective actions by showing the errors with the greatest impact in descending order of frequency. Additionally, it includes an arc that represents the cumulative percentage of cause frequencies.
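The Pareto ordering and the cumulative-percentage line (the "arc") can be sketched in a few lines of Python; the defect counts here are illustrative:

```python
# Defect counts per cause (illustrative numbers).
causes = {"IES": 205, "MCC": 156, "EDR": 106, "VPS": 78, "EDL": 71}

# Pareto ordering: sort causes by frequency, descending.
ordered = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)
total = sum(causes.values())

# The cumulative-percentage line plotted on a Pareto chart.
cumulative_pct = []
running = 0
for cause, n in ordered:
    running += n
    cumulative_pct.append(round(100 * running / total, 1))
```

Plotting `ordered` as bars and `cumulative_pct` as a line (e.g. with matplotlib) reproduces the standard Pareto chart; the point where the line crosses 80 percent marks the vital-few cutoff.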

5. Cause and effect diagram: A diagram that shows the possible causes of a problem or an effect, using a fishbone-like structure. It can help brainstorm and organize potential causes into categories, and find the root cause of a problem.

Link to create your own fishbone diagram: https://asq.org/quality-resources/fishbone

6. Scatter diagram: A plot that shows the relationship between two variables, using dots to represent pairs of data values. It can help determine if there is a correlation between the variables, and how strong or weak it is.

A scatter diagram is a correlation chart that visually depicts the relationship between two variables. It provides insight into how two variables affect each other when they are plotted on a graph, making any correlation between them easy to see.
7. Control chart: A graph that shows how a process variable changes over time, using upper and lower control limits to indicate the acceptable range of variation. It can help monitor and control a process, and detect any special causes of variation that need corrective action.

Control charts are a key part of the management reporting process and have long been used in manufacturing, stock trading algorithms, and process improvement methodologies like Six Sigma and Total Quality Management (TQM). The purpose of a control chart is to set upper and lower bounds of acceptable performance given normal variation. In other words, they provide a great way to monitor any process you have in place, so you can learn how to correct poor performance and sustain your successes.
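A minimal sketch of computing control limits, assuming the common mean ± 3σ individuals-chart approximation (real X-bar/R charts instead derive limits from subgroup ranges and tabulated constants; the readings below are invented):

```python
import statistics

def control_limits(samples, sigma_multiplier=3):
    """Lower limit, center line, and upper limit as mean +/- k * std deviation."""
    mean = statistics.fmean(samples)
    sd = statistics.pstdev(samples)
    return mean - sigma_multiplier * sd, mean, mean + sigma_multiplier * sd

# Illustrative measurements of a process variable over time:
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 9.9]
lcl, center, ucl = control_limits(readings)

# Points outside the limits signal special causes needing corrective action.
out_of_control = [x for x in readings if x < lcl or x > ucl]
```

Here all readings fall inside the limits, so the variation is attributed to common causes and no corrective action is triggered.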

These tools can be used individually or together, depending on the nature and scope of the quality problem. They can also be combined with other quality management methods, such as Six Sigma, TQM, or Lean management.
The seven tools of quality

1) Process Flow Diagram


2) Check Sheets
3) Histogram
4) Cause and Effect Diagram
5) Pareto Diagram
6) Scatter Diagram
7) Control Charts

The new seven management tools

1) Affinity Diagram
2) Interrelationship Digraph
3) Tree Diagram
4) Matrix Diagram
5) Prioritization Matrices
6) Process Decision Program Chart
7) Activity network Diagram
BENCHMARKING

(1) What is Benchmarking?

Benchmarking is a systematic method by which organizations can measure themselves against the best industry practices. The essence of benchmarking is the process of borrowing ideas and adapting them to gain competitive advantage. It is a tool for continuous improvement.

 Benchmarking is a systematic search for the best practices, innovative ideas, and highly effective operating procedures.
BENCHMARKING CONCEPT

The benchmarking concept asks four questions: What is our performance level? How do we do it? What are others' performance levels? How did they get there? Creative adaptation of the answers to these questions leads to breakthrough performance.

REASONS TO BENCHMARK :

 It is a tool to achieve business and competitive objectives


 It can inspire managers (and Organizations) to compete
 It is time and cost effective
 It constantly scans the external environment to improve the process
 Potential and useful technological breakthroughs can be located and adopted early

PROCESS OF BENCHMARKING

The following six steps contain the core techniques of Benchmarking

1. Decide what to benchmark


 Benchmarking can be applied to any business or production process
 The strategy is usually expressed in terms of mission and vision statements
 Best to begin with the mission and critical factors
 Choosing the scope of the Benchmarking study
 Pareto analysis – what process to investigate
 Cause and Effect diagram – for tracing outputs back

2. Understand current performance


 Understand and document the current process
 Those working in the process are the most capable of identifying and correcting
problems
 While documenting, it is important to quantify
 Care should be taken when collecting accounting information
3. Plan
 A benchmarking team should be chosen
 Organizations to serve as the benchmark need to be identified
 Time frame should be agreed upon for each of the benchmarking tasks

There are three types of benchmarking

a. Internal
b. Competitive
c. Process
4. Study Others
Benchmarking studies look for two types of information

 How best the processes are practiced


 Measurable results of these practices
Three techniques for conducting the research are:

 Questionnaires
 Site visits
 Focus groups
5. Learn from the data
Answering a series of questions like

 Is there a gap between the organization's performance and the performance of the best-in-class organizations?
 What is the gap? How much is it?
 Why is there a gap? What does the best-in-class do differently that is better?
 If best-in-class practices were adopted, what would be the resulting improvement?

Benchmarking studies can reveal three different outcomes:

 Negative gap
 Parity
 Positive gap
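The three outcomes above can be sketched as a small classification of a pair of metric values; the metric name and numbers here are invented for illustration:

```python
def benchmark_gap(our_metric, best_in_class_metric, higher_is_better=True):
    """Classify the benchmarking outcome: negative gap, parity, or positive gap."""
    diff = our_metric - best_in_class_metric
    if not higher_is_better:
        diff = -diff
    if diff > 0:
        return "positive gap"   # we outperform the benchmark
    if diff < 0:
        return "negative gap"   # the benchmark outperforms us
    return "parity"

# Example: first-pass yield (%), where higher is better.
outcome = benchmark_gap(our_metric=92.5, best_in_class_metric=97.0)
```

A negative gap, as in this example, is what drives the action planning in step 6: the findings quantify how far the organization must close the distance to best-in-class.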

6. Using the findings


The objective is to close the gap. For this

 Findings must be communicated to the people within the organization


 Action plans must be developed to implement new processes

Groups that must agree on the change:

 Process owners
 Upper management
Steps for the development and execution of action plans are

1. Specify tasks
2. Sequence tasks
3. Determine resource needs
4. Establish task schedule
5. Assign responsibility for each task
6. Describe expected results
7. Specify methods for monitoring results

PITFALLS AND CRITICISMS OF BENCHMARKING :

 Idea of copying others


 It is not a cure or a business philosophy
 Some processes have to be benchmarked repeatedly
 It is not a substitute for innovation

(2) (i) Explain the relevance of 6-sigma concept in achieving quality output in a process.
(ii) Give an example of a company practicing six-sigma concept.
Six Sigma Project Methodology

 DMAIC (Define)
 Define (What is important?)
 Base-lining and benchmarking processes
 Decomposing processes into sub-processes
 Specifying customer satisfaction goals/sub-goals (requirements)
 Support tools for Define step:
 Benchmarking
 Baseline
 Voice of Customer (Win Win)
 Voice of Business (Win Win)
 Quality Function Deployment, etc.
 DMAIC (Measure)
 Measure (How are we doing?)
 Identifying relevant metrics based on engineering principles and models
 Performance measurement: throughput, quality (statistically, mean
and variation)
 Cost (currency, time, and resource)
 Other example of measurement: response times, cycle times,
transaction rates, access frequencies, and user defined thresholds
 Support tools for Measure step:
7 Basic tools : Flow chart, Check Sheets, Pareto diagrams, Cause/Effect
diagrams, Histograms, and Statistical Process Control (SPC).
Defect Metrics
Data Collection Forms, Plan, Logistics
 DMAIC (Analyze)
 Analyze (What's wrong?)
Evaluate the data/information for trends, patterns, causal relationships and
“root cause”
Example: Defect analysis, and Analysis of variance
Determine candidate improvements
 Support tools for Analyze step:
Cause/Effect diagram
Failure Modes & Effects Analysis
Decision & Risk Analysis
Statistical Inference
Control Charts
Capability Analysis, etc.
 DMAIC (Improve)
 Improve (What needs to be done?)
Making prototype or initial improvement
Measure and compare the results with the simulation results
Iterations taken between Measure-Analyze-Improve steps to achieve the
target level of performance
 Support tools for Improve step:
Design of Experiments
Modeling
Tolerancing
Robust Design
DMAIC (Control)
 Control (How do we guarantee performance?)
Ensuring measurements are put into place to maintain improvements
 Support tools for Control step:
Statistical Controls: Control Charts, Time Series methods
Non-Statistical Controls: Procedural adherence, Performance Mgmt.,
Preventive activities
Six Sigma Case Study-I
Six sigma project: web design.

Define: Design a web site that ranks in the top ten (10) on all major search engines
and directories.
Measure: Enter "six sigma" and check ranking in search engines.
Analyze: URL name, title of pages, and other factors are major ranking criteria. Reciprocal
links and other routine activities aid in search engine ranking.
Improve: Purchase URL with six sigma included, optimize each page, develop reciprocal
links, and perform other regular activities required to maintain traffic and ranking.
Control: Monitor ranking on search engines weekly. You can check on the success of this project by entering "six sigma" in the search field of your favorite search engine. The titles and descriptions may vary, but the URL link is the performance measure.

Six Sigma Case Study-II


Six sigma project: water treating.

Define: Water treating unit in 15 years had never been able to handle the nameplate
capacity. Treatment chemical costs were higher than other types of treatment units.
Measure: Confirmed flow rate through the system vs. nameplate.
Analyze: Conducted a measurement system evaluation and found many measurements that were off by over 100%. Hourly operations identified key variables in the operation of the unit and the acceptable range of each. Conducted three different designed experiments.
Improve: Corrected the measurement problems. Found set of operating variables that produced
107% of nameplate capacity at higher quality with lower chemical use. Chemical use reduced by
$180K per year.
Control: Hourly operations trained, procedures modified, process to check
measurement instituted. Model for changes in inlet water conditions.

Failure Mode and Effects Analysis (FMEA)

1. What is FMEA?

FMEA is an analytical technique that combines the technology and experience of people in
identifying foreseeable failure modes of a product or process and planning for its elimination.

FMEA is a “before-the-event” action requiring a team effort to easily and inexpensively alleviate
changes in design and production.

It is a group of activities comprising the following :

1. Recognize the potential failure of a product or process.


2. Identify actions that eliminate / reduce the potential failure.
3. Document the process.
Two important types of FMEA are:

 Design FMEA
 Process FMEA
What are the types of FMEA?
There are several types of FMEA : design FMEA, process FMEA, equipment
FMEA, maintenance FMEA, concept FMEA, service FMEA, system FMEA,
environmental FMEA, and others.

⚫ A Failure Mode is:


⚫ The way in which the component, subassembly, product, input, or process
could fail to perform its intended function

⚫ Failure modes may be the result of upstream operations or may


cause downstream operations to fail
⚫ Things that could go wrong
Why

Methodology that facilitates process improvement

⚫ Identifies and eliminates concerns early in the development of a process or design
⚫ Improves internal and external customer satisfaction
⚫ Focuses on prevention
⚫ FMEA may be a customer requirement
⚫ FMEA may be required by an
applicable Quality System Standard

INTENT OF FMEA :
 Continually measuring the reliability of a machine, product or process.
 To detect the potential product - related failure mode.
 FMEA evaluation to be conducted immediately following the design phase.
BENEFITS OF FMEA:

 Having a systematic review of components failure modes to ensure that any failure
produces minimal damage.
 Determining the effects of any failure on other items.
 Providing input data for exchange studies.
 Determining how the high-failure rate components can be adapted to high-reliability
components.
 Eliminating / minimizing the adverse effects that failures could generate.
 Helping uncover misjudgements, errors, etc.
 Reducing development time and cost of manufacturing.

(6) Explain the methodology used for FMEA.


⚫ FMEA Procedure

1. For each process input (start with high value inputs), determine the ways in which the
input can go wrong (failure mode)

2. For each failure mode, determine effects

⚫ Select a severity level for each effect

3. Identify potential causes of each failure mode

⚫ Select an occurrence level for each cause

4. List current controls for each cause


⚫ Select a detection level for each cause

5. Calculate the Risk Priority Number (RPN)

6. Develop recommended actions, assign responsible persons, and take actions

⚫ Give priority to high RPNs


⚫ MUST look at severities rated a 10

7. Assign the predicted severity, occurrence, and detection levels and compare RPNs

⚫ FMEA Inputs and Outputs

o Severity, Occurrence, and Detection

o Severity

 Importance of the effect on customer requirements

o Occurrence: Frequency with which a given cause occurs and creates failure modes.
o Detection: The ability of the current control scheme to prevent or to detect a given cause.
⚫ Rating Scales
⚫ There are a wide variety of scoring “anchors”, both quantitative and qualitative
⚫ Two types of scales are 1-5 or 1-10
⚫ The 1-5 scale makes it easier for the teams to decide on scores
⚫ The 1-10 scale may allow for better precision in estimates and a wide variation in
scores (most common)

⚫ Rating Scales
⚫ Severity
⚫ 1 = Not Severe, 10 = Very Severe

⚫ Occurrence
⚫ 1 = Not Likely, 10 = Very Likely

⚫ Detection
⚫ 1 = Easy to Detect, 10 = Not easy to Detect
⚫ Risk Priority Number (RPN)
⚫ RPN is the product of the severity, occurrence, and detection scores.
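The RPN computation and prioritization described above can be sketched as follows; the failure modes and scores are invented for illustration:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number on 1-10 scales: RPN = S x O x D (range 1 to 1000)."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores must be between 1 and 10")
    return severity * occurrence * detection

# Illustrative failure modes as (name, severity, occurrence, detection):
failure_modes = [
    ("seal leaks",       9, 4, 3),
    ("wrong torque",     6, 7, 5),
    ("label misprinted", 2, 8, 2),
]

# Prioritize corrective action by descending RPN.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
```

Note the procedure's caveat still applies: regardless of RPN ranking, any failure mode with severity 10 must be examined, since a low occurrence or easy detection can mask a catastrophic effect.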

Summary

An FMEA:

⚫ Identifies the ways in which a product or process can fail


⚫ Estimates the risk associated with specific causes
⚫ Prioritizes the actions that should be taken to reduce risk

(7) Explain with an example Process FMEA document.

The basic philosophy behind the process FMEA document is as follows. Process FMEA is an analytical technique utilized by a manufacturing Responsible Engineering Team as a means to assure that, to the extent possible, potential failure modes and their associated causes/mechanisms have been considered and addressed.
FMEA TEAM: Engineers from

- Assembly - Manufacturing - Materials - Quality - Service - Supplier - Customer
FMEA DOCUMENTATION :

The purpose of FMEA documentation is

 To allow all involved Engineers to have access to others' thoughts

 To design and manufacture using these collective thoughts (promotes team approach)

FMEA for Software development, the complete Process

FMEA, Failure Modes and Effects Analysis, is a proactive approach to defect prevention and can be applied to the software development process. Applying FMEA to software allows us to anticipate defects before they occur, thus allowing us to build quality into our software products.

Failure Modes and Effects Analysis, involves structured brainstorming to analyze potential failure modes in
software, rate and rank the risk to the software and take appropriate actions to mitigate the risk. This process is used
to improve software quality, reduce Cost of Quality (CoQ), Cost of Poor Quality, (CoPQ) and defect density.

Failure Modes and Effects Analysis can be performed at the system level and at the network element/component level. Both require planning, as detailed in Phase 1 below. Our process for conducting a failure modes analysis is documented in Phase 2 in the table below. The steps for conducting a system failure modes analysis are outlined in Phase 3, and the steps for conducting a network element/component level failure modes analysis are outlined in Phase 4.

The High level process steps for performing Software FMEA are:
1. Planning for System Software FMEA

2. Train and familiarize the team with traditional FMEA process

3. Cause and Effect Analysis

4. Identifying Potential Failure Modes

5. Assigning original RPN ratings pre-risk mitigation

6. Assigning resulting RPN ratings post-risk mitigation

7. Conduct Software System, Software Sub-system (Network Element level), or Software Component (Sub-
sub-system Level) Failure Modes Analysis, as required.

8. Collect appropriate metrics to analyze Return on Investment (ROI) on the Software FMEA effort
Conduct Software FMEA, Process Guidelines

 Once the potential failure modes are identified, they are further analyzed, by potential causes and
potential effects of the failure mode (Cause and Effects Analysis, 5 Whys, etc.).

 For each failure mode, a Risk Priority number (RPN) is assigned based on:

 Occurrence Rating, Range 1–10; the higher the occurrence probability, the higher the rating

 Severity Rating, Range 1–10; the higher the severity associated with the potential failure mode,
the higher the rating

 Detectability Rating, Range 1–10; the lower the detectability, the higher the rating

One simplification is to use a rating scale of High, Medium and Low for Occurrence, Severity and Detectability Ratings:

 High: 9

 Medium: 6

 Low: 3
 RPN = Occurrence * Severity * Detection; Maximum = 1000, Minimum = 1

 For all potential failures identified with an RPN score of 150 or greater, the FMEA team will
propose recommended actions to be completed within the phase the failure was found. These actions
can be FTR Errors.

 A resulting RPN score must be recomputed after each recommended action to show that the risk has
been significantly mitigated.
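The simplified High/Medium/Low scoring and the RPN-150 action threshold described above can be sketched as follows; the candidate failure modes are invented for illustration:

```python
# Simplified High/Medium/Low scoring from the text.
SCORE = {"High": 9, "Medium": 6, "Low": 3}
ACTION_THRESHOLD = 150   # RPN at or above this triggers recommended actions

def software_rpn(occurrence, severity, detectability):
    """RPN = Occurrence x Severity x Detection using the H/M/L ratings."""
    return SCORE[occurrence] * SCORE[severity] * SCORE[detectability]

# Illustrative potential failure modes as (occurrence, severity, detectability):
candidates = {
    "null input crashes parser":    ("High", "High", "Medium"),
    "log message misspelled":       ("Medium", "Low", "Low"),
    "race on shared config reload": ("Medium", "High", "High"),
}

# Failure modes whose RPN meets the threshold need recommended actions
# completed within the phase in which they were found.
needs_action = {name: software_rpn(*r) for name, r in candidates.items()
                if software_rpn(*r) >= ACTION_THRESHOLD}
```

After each recommended action, the team re-scores the failure mode and recomputes the RPN to confirm the risk has been significantly mitigated, as the process requires.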

Conduct System Engineering FMEA

 At the System Engineering Level, the Failure Modes Analysis, consists of:

 Complete FMEA Team Charter, get Management approval, schedule meetings.

 Identify and scope the customer critical and high risk areas.

 Front-end (top-down approach) analysis of system documentation, using the system functional parameters to identify the areas of concern for system engineers and downstream development teams.

 FMEA will then be performed on the system requirements and sub-systems identified.

Conduct Software FMEA for Component and/or Application team

 Complete FMEA Team Charter, get Management approval, schedule meetings.

 Top-Down approach, using the System Engineering FMEA results.


 Bottom-up approach, using history of previous releases to identify areas of concern in the current
software architecture.

 Perform FMEA analysis

 Requirements phase

 High Level Design phase

 Low Level Design phase

 Coding Phase (If required)

 Collect FMEA metrics and ROI (Return on Investment)


Software Failure Modes Analysis, results in significant cost savings, by detecting defects early that would have
otherwise been detected in the test phases or by the Customer. A Software Defect Cost model showed that the
later a defect is detected, the more the cost; a defect detected by the Customer can cost up to $70,000 (Per
Defect!!). However, the argument may continue as to how one can measure the benefit of Software FMEA effort.
It can be considered a “chicken and the egg” type problem because issues identified early are not looked upon as
severely as defects; defects by definition are issues identified after a test phase, so the true measure of a Failure
Modes Analysis activity would require a comparative analysis on the Software system or sub-system, comparing
typical defect density, testing costs, productivity, in a Software FMEA centric Software release versus a non
Software FMEA release.

Case studies have shown that there is an extremely high ROI (return on investment) for each Software FMEA
activity; the return ranges from 10X to 40X.
One way to look at the Software FMEA ROI is in terms of a cost avoidance factor — the amount of cost avoided
by identifying issues early in the life cycle. This is accomplished relatively easily by multiplying the number of
issues found in a phase by the Software cost value from the Software cost table.
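That cost-avoidance arithmetic can be sketched as below. The per-phase cost values, FMEA effort cost, and issue counts are assumptions for illustration; the source cites only the up-to-$70,000 figure for a customer-found defect, and the "Software cost table" it references is not reproduced here:

```python
# Assumed cost per defect by the phase it would otherwise have escaped to.
# Only the $70,000 customer-found figure comes from the text; the rest are
# invented placeholders for the Software cost table it references.
COST_IF_ESCAPED = {"requirements": 70_000, "design": 40_000, "coding": 20_000}

fmea_effort_cost = 25_000   # assumed cost of running the Software FMEA activity

# Issues found early by the FMEA, per phase (illustrative counts):
issues_found = {"requirements": 4, "design": 6, "coding": 10}

# Cost avoided = issues found in a phase x cost value for that phase.
cost_avoided = sum(issues_found[p] * COST_IF_ESCAPED[p] for p in issues_found)
roi = cost_avoided / fmea_effort_cost
```

With these assumed numbers the ROI works out to roughly 29x, which falls inside the 10x-to-40x range the case studies report.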

The main purpose of doing a Software Failure Modes Analysis is to identify software defects in the associated development phases: requirements defects in the requirements phase, design defects in the design phase, and so on. This ensures reliable software, with significant cost and schedule time savings to the organization. Earlier detection of defects is a paradigm change, but its value may not be obvious to software managers or leaders; the Software Failure Modes Analysis Subject Matter Expert may need to convince senior leaders and management to commit to this effort.

The Quantitative benefits of Software FMEA are:


• Software that is more robust and reliable
• Software testing cost is significantly reduced (measured as Cost of Poor Quality)
• Productivity of the Organization increases, in terms of developing reliable, and high quality software in a
shorter duration
• Improvement in schedule time

Conducting FMEA in software model: https://dl.acm.org/doi/abs/10.1145/1764810.1764819
