2.1 - Cec342
Unit 2: YIELD, MEASUREMENT ACCURACY, AND TEST TIME
Lecture No.: 01
Topics: Yield; Measurement Terminology

Learning Outcomes (LO): At the end of this lecture, students will be able to
LO1: Define yield (Bloom's Knowledge Level: Remembering)
LO2: Summarize various measurement terminology (Bloom's Knowledge Level: Understanding)
Yield
The primary goal of a semiconductor manufacturer is to produce large quantities of ICs for sale
to various electronic markets—that is, cell phones, iPods, HDTVs, and so on. Semiconductor factories are
highly automated, capable of producing millions of ICs over a 24-hour period, every day of the week. For
the most part, these ICs are quite similar in behavior, although some will be quite different from one
another. A well-defined means to observe the behavior of a large set of elements, such as ICs, is to
categorize their individual behavior in the form of a histogram, as shown in the figure below. Here we illustrate a
histogram of the offset voltage associated with a lot of devices. We see that 15% of devices produced in
this lot had an offset voltage between −0.129 V and −0.128 V. We can conjecture that the probability of
another lot producing devices with an offset voltage in this same range is 15%. Of course, how confident
we are with our conjecture is the basis of all things statistical; we need to capture more data to support
our claim. This we will address shortly; for now, let us consider the "goodness" of what we produced.
Figure. Histogram showing specification limits and regions of acceptance and rejection.
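To make the histogram idea concrete, here is a minimal Python sketch that bins a lot of simulated offset-voltage measurements and estimates the probability of landing in one bin. The distribution parameters are illustrative assumptions, not values taken from the figure.

import numpy as np

# Illustrative lot: 10,000 offset-voltage readings drawn from a normal
# distribution (mean and sigma are assumed for this sketch).
rng = np.random.default_rng(0)
offsets = rng.normal(loc=-0.1285, scale=0.001, size=10_000)

# Fraction of devices with offsets between -0.129 V and -0.128 V.
in_bin = np.sum((offsets >= -0.129) & (offsets < -0.128))
print(f"Estimated probability for this bin: {in_bin / offsets.size:.1%}")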
In general, the component data sheet defines the “goodness” of an analog or mixed-signal
device. As a data sheet forms the basis of any contract between a supplier and a buyer, we avoid
any subjective argument of why one measure is better or worse than another; it is simply a matter
of data sheet definition. Generally, the goodness of an analog and mixed-signal device is defined
by a range of acceptability, bounded by a lower specification limit (LSL) and an upper specification
limit (USL), as further illustrated in the figure above. These limits would be found on the device data
sheet. Any device whose behavior falls outside this range is considered a bad device.
This particular example considers a device with a two-sided limit. Similarly, the same argument
applies to a one-sided limit; just a different diagram is used.
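A minimal sketch of this screening rule, using hypothetical specification limits for an offset-voltage test; a device passes only if its measured value lies inside [LSL, USL]:

# Hypothetical two-sided specification limits, in volts.
LSL = -0.130
USL = -0.126

def device_passes(measured_value: float) -> bool:
    # Good devices fall inside the acceptance region [LSL, USL].
    return LSL <= measured_value <= USL

print(device_passes(-0.1285))  # True: inside the acceptance region
print(device_passes(-0.1250))  # False: above the USL

For a one-sided limit, only one of the two comparisons applies.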
The yield of a lot is the ratio of the number of devices that pass all tests to the total number of
devices tested. If 10,000 parts are tested and only 7,000 devices pass all tests, then the yield on that lot of
10,000 devices is 70%. Because testing is not a perfect process, mistakes are made, largely on
account of the measurement limitations of the tester, noise picked up at the test interface, and noise
produced by the DUT itself. The most critical error that can be made is one where a bad device is
declared good, because this has a direct impact on the operations of a buyer. This error is known as
an escape. As a general rule, the impact that an escape has on a manufacturing process goes up
exponentially as it moves from one assembly level to another. Hence, the cost of an escape can be
many orders of magnitude greater than the cost of a single part. Manufacturers make use of test
metrics to gauge the goodness of the component screening process. One measure is the defect level
(DL), defined as the ratio of bad devices that pass all tests (the escapes) to the total number of
devices that pass all tests:

DL = (number of bad devices that pass all tests) / (total number of devices that pass all tests)

Defect level is usually expressed in parts per million (ppm).
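As a quick worked sketch of both quantities in Python (the escape count below is an assumed example value, not data from the text):

total_tested = 10_000
total_passed = 7_000

# Yield: passing devices divided by total devices tested.
print(f"Yield: {total_passed / total_tested:.0%}")  # 70%

# Defect level: escapes (bad devices that passed) divided by all
# passing devices, commonly quoted in parts per million.
escapes = 14  # assumed example value
print(f"Defect level: {escapes / total_passed * 1e6:.0f} ppm")  # 2000 ppm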
Measurement Terminology
The cause of escapes in an analog or mixed-signal environment is largely related
to the measurement process itself. A value presented to the instrument by the DUT will be measured
with some error, largely on account of the electronic circuits that make up the instrument. These errors
manifest themselves in various forms. Below we shall outline these errors first in a qualitative way,
and then we will move on to a quantitative description in the next section of how these errors
interrelate and limit the measurement process.
Accuracy and Precision
Accuracy refers to the closeness of a measurement to the true value, whereas precision refers only
to the repeatability of a series of measurements, regardless of their closeness to that value.
This definition of precision is somewhat counterintuitive to most people, since the words
precision and accuracy are so often used synonymously. Few of us would be impressed by a
“precision” voltmeter exhibiting a consistent 2-V error! Fortunately, the word repeatability is far
more commonly used in the test-engineering field than the word precision. This textbook will use
the term accuracy to refer to the overall closeness of an averaged measurement to the true value
and repeatability to refer to the consistency with which that measurement can be made. The word
precision will be avoided.
Unfortunately, the definition of accuracy is also somewhat ambiguous. Many sources of
error can affect the accuracy of a given measurement. The accuracy of a measurement should
probably refer to all possible sources of error. However, the accuracy of an instrument (as
distinguished from the accuracy of a measurement) is often specified in the absence of
repeatability fluctuations and instrument resolution limitations. Rather than trying to decide
which of the various error sources are included in the definition of accuracy, it is probably more
useful to discuss some of the common error components that contribute to measurement
inaccuracy. It is incumbent upon the test engineer to make sure all components of error have been
accounted for in a given specification of accuracy.
Resolution (Quantization Error)
Figure. Output codes versus input voltages for an ideal 3-bit ADC.
If this ADC were part of a crude DC voltmeter, the meter would produce an output reading
of 1.25 V any time the input voltage falls between 1.0 and 1.5 V. This inherent error in ADCs and
meters is known as quantization error, a direct consequence of the instrument's finite resolution.
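A short sketch of this quantization behavior, assuming a 0-to-4-V input range so that each of the eight codes of a 3-bit ADC spans 0.5 V and the meter reports the center of each code's range:

FULL_SCALE = 4.0              # assumed input range, volts
N_CODES = 2 ** 3              # eight codes for a 3-bit ADC
STEP = FULL_SCALE / N_CODES   # 0.5 V per code

def adc_reading(v_in: float) -> float:
    # Quantize the input and report the center of its code's range.
    code = min(int(v_in / STEP), N_CODES - 1)
    return (code + 0.5) * STEP

for v in (1.00, 1.25, 1.49):
    print(f"{v:.2f} V -> reading {adc_reading(v):.2f} V")
# All three inputs produce the same 1.25-V reading.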
Repeatability
Nonrepeatable answers are a fact of life for mixed-signal test engineers. A large portion of
the time required to debug a mixed-signal test program can be spent tracking down the various
sources of poor repeatability. Since all electrical circuits generate a certain amount of random
noise, measurements such as a nominally 100-mV offset reading that varies slightly from one
execution to the next are fairly common. In fact, if a test engineer gets the same answer 10 times
in a row, it is time to start looking for a problem. Most
likely, the tester instrument’s full-scale voltage range has been set too high, resulting in a
measurement resolution problem. For example, if we configured a meter to a range having a 10-
mV resolution, then our offset measurements would be very repeatable (100 mV,
100 mV, 100 mV, 100 mV, etc.). A novice test engineer might think that this is a terrific result, but
the meter is just rounding off the answer to the nearest 10-mV increment due to an input ranging
problem. Unfortunately, a voltage of 104 mV would also have resulted in this same series of
perfectly repeatable, perfectly incorrect measurement results. Repeatability is desirable, but it does
not in itself guarantee accuracy.
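The ranging problem is easy to reproduce in simulation. In the sketch below, a true offset of 104 mV measured with a hypothetical 10-mV resolution yields perfectly repeatable, perfectly incorrect readings of 100 mV, while a 1-mV resolution reveals both the true value and the underlying noise.

import numpy as np

rng = np.random.default_rng(1)
TRUE_OFFSET_MV = 104.0

def measure(resolution_mv: float, n: int = 5):
    # Noisy readings (about 0.3 mV rms) rounded to the meter's resolution.
    noisy = TRUE_OFFSET_MV + rng.normal(0.0, 0.3, size=n)
    return [round(v / resolution_mv) * resolution_mv for v in noisy]

print(measure(10.0))  # [100.0, 100.0, 100.0, 100.0, 100.0] - repeatable but wrong
print(measure(1.0))   # e.g. [104.0, 104.0, 103.0, 104.0, 104.0] - noisy but accurate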
Stability
A measurement instrument’s performance may drift with time, temperature, and humidity.
The degree to which a series of supposedly identical measurements remains constant over time,
temperature, humidity, and all other time-varying factors is referred to as stability. Stability is an
essential requirement for accurate instrumentation.
Shifts in the electrical performance of measurement circuits can lead to errors in the tested
results. Most shifts in performance are caused by temperature variations. Testers are usually
equipped with temperature sensors that can automatically determine when a temperature shift has
occurred. The tester must be recalibrated anytime the ambient temperature has shifted by a few
degrees. The calibration process brings the tester instruments back into alignment with known
electrical standards so that measurement accuracy can be maintained at all times.
After the tester is powered up, the tester’s circuits must be allowed to stabilize to a constant
temperature before calibrations can occur. Otherwise, the measurements will drift over time as the
tester heats up. When the tester chassis is opened for maintenance or when the test head is opened
up or powered down for an extended period, the temperature of the measurement electronics will
typically drop. Calibrations then have to be rerun once the tester recovers to a stable temperature.
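The recalibration trigger described above can be sketched as a simple threshold check; the few-degree threshold here is an assumed value, and production testers implement this logic in their system software.

RECAL_THRESHOLD_C = 3.0  # assumed "few degrees" threshold

def needs_recalibration(temp_at_last_cal_c: float, current_temp_c: float) -> bool:
    # Flag recalibration once the ambient temperature drifts too far
    # from the temperature at which the last calibration was run.
    return abs(current_temp_c - temp_at_last_cal_c) >= RECAL_THRESHOLD_C

print(needs_recalibration(25.0, 26.5))  # False: still within threshold
print(needs_recalibration(25.0, 29.0))  # True: recalibrate before testing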
Correlation
Correlation is another activity that consumes a great deal of mixed-signal test program
debug time. Correlation is the ability to get the same answer using different pieces of hardware or
software. It can be extremely frustrating to try to get the same answer on two different pieces of
equipment using two different test programs. It can be even more frustrating when two
supposedly identical pieces of test equipment running the same program give two different
answers.
Of course, correlation is seldom perfect, but how good is good enough? In general, it is a
good idea to make sure that the correlation errors are less than one-tenth of the full range between
the minimum test limit and the maximum test limit. However, this is just a rule of thumb. The
exact requirements will differ from one test to the next. Whatever correlation errors exist, they
must be considered part of the measurement uncertainty, along with nonrepeatability and
systematic errors.
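The one-tenth rule of thumb translates directly into a numeric check; the test limits in the usage lines below are hypothetical.

def correlation_ok(error: float, min_limit: float, max_limit: float) -> bool:
    # Rule of thumb: correlation error should stay under one-tenth of
    # the full range between the minimum and maximum test limits.
    return abs(error) < (max_limit - min_limit) / 10.0

# Hypothetical offset test with limits of -5 mV and +5 mV (10-mV range):
print(correlation_ok(0.4e-3, -5e-3, 5e-3))  # True: 0.4 mV < 1 mV
print(correlation_ok(1.5e-3, -5e-3, 5e-3))  # False: 1.5 mV >= 1 mV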
The test engineer must consider several categories of correlation. Test results from a mixed-
signal test program cannot be fully trusted until the various types of correlation have been
verified. The more common types of correlation include tester-to-bench, tester-to-tester, program-
to-program, DIB-to-DIB, and day-to-day correlation.
Tester-to-Bench Correlation
Often, a customer will construct a test fixture using bench instruments to evaluate the
quality of the device under test. Bench equipment such as oscilloscopes and spectrum analyzers
can help validate the accuracy of the ATE tester’s measurements. Bench correlation is a good idea,
since ATE testers and test programs often produce incorrect results in the early stages of debug. In
addition, IC design engineers often build their own evaluation test setups to allow quick debug of
device problems. Each of these test setups must correlate to the answers given by the ATE tester.
Often the tester is correct and the bench is not. Other times, test program problems are uncovered
when the ATE results do not agree with a bench setup. The test engineer will often need to help
debug the bench setup to get to the bottom of correlation errors between the tester and the bench.
Tester-to-Tester Correlation
Sometimes a test program will work on one tester, but not on another presumably identical
tester. The differences between testers may be catastrophic, or they may be very
subtle. The test engineer should compare all the test results on one tester to the test results
obtained using other testers. Only after all the testers agree on all tests are the test program and test
hardware debugged and ready for production. Similar correlation problems arise when an existing
test program is ported from one tester type to another. Often, the testers are neither software
compatible nor hardware compatible with one another, which makes the porting process especially difficult.
Program-to-Program Correlation
When a test program is streamlined to reduce test time, the faster program must be
correlated to the original program to make sure no significant shifts in measurement results have
occurred. Often, the test reduction techniques cause measurement errors because of reduced DUT
settling time and other timing-related issues. These correlation errors must be resolved before the
faster program can be released into production.
DIB-to-DIB Correlation
No two DIBs are identical, and sometimes the differences cause correlation errors. The test
engineer should always check to make sure that the answers obtained on multiple DIB boards
agree. DIB correlation errors can often be corrected by focused calibration software written by the
test engineer.
Day-to-Day Correlation
Correlation of the same DIB and tester over a period of time is also important. If the tester
and DIB have been properly calibrated, there should be no drift in the answers from one day to the
next. Subtle errors in software and hardware often remain hidden until day-to-day correlation is
performed. The usual solution to this type of correlation problem is to improve the focused
calibration process.
Reproducibility
The term reproducibility is often used interchangeably with repeatability, but this is not a
correct usage of the term. The difference between reproducibility and repeatability relates to the
effects of correlation and stability on a series of supposedly identical measurements. Repeatability
is most often used to describe the ability of a single tester and DIB board to get the same answer
multiple times as the test program is repetitively executed.
Reproducibility, by contrast, is the ability to achieve the same measurement result on a
given DUT using any combination of equipment and personnel at any given time. It is defined as
the statistical deviation of a series of supposedly identical measurements taken over a period of
time. These measurements are taken using various combinations of test conditions that ideally
should not change the measurement result. For example, the choice of equipment operator, tester,
DIB board, and so on, should not affect any measurement result.
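To make the distinction concrete, the sketch below computes both quantities as sample standard deviations: repeatability from repeated measurements on one tester and DIB, reproducibility from measurements of the same DUT across different setups. All values are assumed example data.

import numpy as np

# Same DUT, same tester and DIB, program executed five times (mV):
single_setup = np.array([99.8, 100.1, 100.0, 99.9, 100.2])

# Same DUT measured across different testers, DIBs, and operators (mV):
across_setups = np.array([99.8, 101.5, 98.9, 100.7, 99.3])

print(f"Repeatability (1-sigma):   {np.std(single_setup, ddof=1):.2f} mV")
print(f"Reproducibility (1-sigma): {np.std(across_setups, ddof=1):.2f} mV")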
Qn Bloom’s
Question Answer
No Knowledge Level
1 What does the yield of a given lot of material represent?
A) The ratio of good devices to total devices tested
B) The total number of devices tested A Remembering
C) The percentage of devices with random errors
D) The measurement resolution of the tester
2 What is the theoretical concept based on a probability
argument used to estimate defect levels?
A) Quantization error
D Remembering
B) Stability
C) Correlation
D) Defect level (DL)
3 What does precision refer to in measurement terminology?
A) Measurement accuracy
B) Repeatability of a series of measurements B Remembering
C) Variation of measurement conditions
D) Quantization error
4 What is quantization error related to in measurement
instruments?
A) Stability
C Remembering
B) Accuracy
C) Resolution
D) Reproducibility
5 What does stability in measurement instruments refer to?
A) The consistency of a series of measurements
B) Drift with time, temperature, and humidity
C) The ability to get the same answer using different B Remembering
hardware
D) The statistical deviation of identical measurements over
time
Students have to prepare answers for the following questions at the end of the lecture:

1. Define the term "yield" in semiconductor manufacturing. (2 marks, CO2, Remembering)
2. Explain the concept of "defect level (DL)" in component screening. (2 marks, CO2, Understanding)