
DEPARTMENT OF ECE

CEC342 MIXED SIGNAL IC DESIGN TESTING (R 2021)

Unit 2: YIELD, MEASUREMENT ACCURACY, AND TEST TIME (Lecture No. 01)
Topics: Yield - Measurement Terminology

Learning Outcomes (LO): At the end of this lecture, students will be able to
LO1: Define Yield (Bloom's Knowledge Level: Remembering)
LO2: Summarize various Measurement Terminology (Bloom's Knowledge Level: Understanding)

Yield
The primary goal of a semiconductor manufacturer is to produce large quantities of ICs for sale
to various electronic markets: cell phones, iPods, HDTVs, and so on. Semiconductor factories are
highly automated, capable of producing millions of ICs over a 24-hour period, every day of the week. For
the most part, these ICs are quite similar in behavior, although some will differ markedly from one
another. A well-defined means to observe the behavior of a large set of elements, such as ICs, is to
categorize their individual behavior in the form of a histogram, as shown in the figure below. Here we
illustrate a histogram of the offset voltage associated with a lot of devices. We see that 15% of the
devices produced in this lot had an offset voltage between −0.129 V and −0.128 V. We can conjecture
that the probability of another lot producing devices with an offset voltage in this same range is 15%.
Of course, how confident we are in this conjecture is the basis of all things statistical; we need to
capture more data to support our claim. This we will address shortly; for now, let us consider the
"goodness" of what we produced.

Figure. Histogram showing specification limits and regions of acceptance and rejection.
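
For illustration, the short Python sketch below builds such a histogram from an array of offset-voltage measurements; the data here are synthetic stand-ins for real lot measurements, and the 1-mV bin width matches the figure.

```python
# A minimal sketch of the histogram idea described above, assuming the lot's
# offset-voltage measurements are available as an array (values are synthetic).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=0)
offsets = rng.normal(loc=-0.1285, scale=0.002, size=1000)  # synthetic lot data, in volts

# Fraction of devices falling into each 1-mV-wide bin.
bins = np.arange(-0.135, -0.121, 0.001)
counts, edges = np.histogram(offsets, bins=bins)
fractions = counts / counts.sum()

plt.bar(edges[:-1], fractions, width=0.001, align="edge", edgecolor="black")
plt.xlabel("Offset voltage (V)")
plt.ylabel("Fraction of devices")
plt.title("Offset-voltage histogram for one lot")
plt.show()
```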

In general, the component data sheet defines the “goodness” of an analog or mixed-signal
device. As a data sheet forms the basis of any contract between a supplier and a buyer, we avoid
any subjective argument of why one measure is better or worse than another; it is simply a matter
of data sheet definition. Generally, the goodness of an analog or mixed-signal device is defined
by a range of acceptability, bounded by a lower specification limit (LSL) and an upper specification
limit (USL), as further illustrated in the figure above. These limits would be found on the device data
sheet. Any device whose behavior falls outside this range is considered a bad device. This particular
example considers a device with a two-sided limit; the same argument applies to a one-sided limit,
with just a different diagram.

Testing is the process of separating good devices from the bad ones. The yield of a given lot
of material is defined as the ratio of the total good devices divided by the total devices tested:

    Yield = (number of good devices / total number of devices tested) × 100%

If 10,000 parts are tested and only 7000 devices pass all tests, then the yield on that lot of
10,000 devices is 70%. Because testing is not a perfect process, mistakes are made, largely on
account of the measurement limitations of the tester, noise picked up at the test interface, and noise
produced by the DUT itself. The most critical error that can be made is one where a bad device is
declared good, because this has a direct impact on the operations of a buyer. This error is known as
an escape. As a general rule, the impact that an escape has on a manufacturing process goes up
exponentially as it moves from one assembly level to another. Hence, the cost of an escape can be
many orders of magnitude greater than the cost of a single part. Manufacturers make use of test
metrics to gauge the goodness of the component screening process. One measure is the defect level
(DL) and it is defined as

    DL = (number of bad devices that pass the test) / (total number of devices that pass the test)

or, when written in terms of escapes, we write

    DL = escapes / (good devices + escapes)

Often DL is expressed as the number of defects per million, or, as it is more commonly stated,
parts per million (ppm).
It is important to note that a measure of defect level is a theoretical concept based on a
probability argument and one that has no empirical basis, because if we knew which devices were
escapes, then we would be able to identify them as bad and remove them from the set of good
devices. Nonetheless, companies do estimate their defect levels from various measures based on
their field returns, or by analyzing the test data during the test process using a secondary
screening procedure.
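
As a simple numerical sketch of these definitions, the Python fragment below computes yield and DL for the 10,000-device example above; the escape count is purely hypothetical, since, as just noted, real escapes cannot be identified directly.

```python
# Sketch: yield and defect level (DL) from lot counts. The tested/passed counts
# follow the example in the text; the escape count is hypothetical, because
# escapes cannot actually be identified during production test.
devices_tested = 10_000
devices_passed = 7_000        # devices declared good by the tester
escapes = 3                   # assumed bad devices mistakenly declared good

lot_yield = devices_passed / devices_tested
defect_level = escapes / devices_passed   # fraction of shipped parts that are bad

print(f"Yield: {lot_yield:.1%}")                      # 70.0%
print(f"Defect level: {defect_level * 1e6:.0f} ppm")  # about 429 ppm
```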

Measurement Terminology
The cause of escapes in an analog or mixed-signal environment is largely related
to the measurement process itself. When the DUT presents a value to an instrument, the
instrument introduces errors, largely on account of the electronic circuits from which it is
built. These errors manifest themselves in various forms. Below we outline these errors
qualitatively; in the next section we move on to a quantitative description of how these errors
interrelate and limit the measurement process.

Accuracy and Precision


In conversational English, the terms accuracy and precision are virtually identical in
meaning. Combining the definitions found in various technical references gives us an idea of the
accepted technical meaning of the words:

Accuracy: The difference between the average of measurements and a standard sample for which
the “true” value is known. The degree of conformance of a test instrument to absolute standards,
usually expressed as a percentage of reading or a percentage of measurement range (full scale).

Precision: The variation of a measurement system obtained by repeating measurements on the
same sample back-to-back using the same measurement conditions.

According to these definitions, precision refers only to the repeatability of a series of
measurements. It does not refer to consistent errors in the measurements. A series of
measurements can be incorrect by 2 V, but as long as they are consistently wrong by the same
amount, then the measurements are considered to be precise.

This definition of precision is somewhat counterintuitive to most people, since the words
precision and accuracy are so often used synonymously. Few of us would be impressed by a
“precision” voltmeter exhibiting a consistent 2-V error! Fortunately, the word repeatability is far
more commonly used in the test-engineering field than the word precision. This textbook will use
the term accuracy to refer to the overall closeness of an averaged measurement to the true value
and repeatability to refer to the consistency with which that measurement can be made. The word
precision will be avoided.
Unfortunately, the definition of accuracy is also somewhat ambiguous. Many sources of
error can affect the accuracy of a given measurement. The accuracy of a measurement should
probably refer to all possible sources of error. However, the accuracy of an instrument (as
distinguished from the accuracy of a measurement) is often specified in the absence of
repeatability fluctuations and instrument resolution limitations. Rather than trying to decide
which of the various error sources are included in the definition of accuracy, it is probably more
useful to discuss some of the common error components that contribute to measurement
inaccuracy. It is incumbent upon the test engineer to make sure all components of error have been
accounted for in a given specification of accuracy.

Systematic or Bias Errors


Systematic or bias errors are those that show up consistently from measurement to
measurement. For example, assume that an amplifier’s output exhibits an offset of 100 mV from
the ideal value of 0 V. Using a digital voltmeter (DVM), we could take multiple readings of the
offset over time and record each measurement. A typical measurement series might look like this:
101 mV, 103 mV, 102 mV, 101 mV, 102 mV, 103 mV, 103 mV, 101 mV, 102 mV . . .
This measurement series shows an average error of about 2 mV from the true value of 100 mV.
Errors like this are caused by consistent errors in the measurement instruments. The errors can
result from a combination of many things, including DC offsets, gain errors, and nonideal linearity
in the DVM’s measurement circuits. Systematic errors can often be reduced through a process
called calibration.
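
The following minimal Python sketch illustrates this idea using the reading series above; the calibration step here is just a constant offset subtraction, a simplified stand-in for a real instrument calibration procedure.

```python
# Sketch: estimating a systematic (bias) error from repeated readings, then
# removing it with a constant calibration offset (a simplified illustration).
readings_mv = [101, 103, 102, 101, 102, 103, 103, 101, 102]
true_value_mv = 100.0

bias_mv = sum(readings_mv) / len(readings_mv) - true_value_mv
print(f"Estimated systematic error: {bias_mv:+.2f} mV")  # about +2 mV

calibrated_mv = [r - bias_mv for r in readings_mv]  # random error still remains
```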

Random Errors

In the preceding example, notice that the measurements are not repeatable. The DVM gives
readings from 101 to 103 mV. Such variations do not surprise most engineers because DVMs are
relatively inexpensive. On the other hand, when a two-million-dollar piece of ATE equipment
cannot produce the same answer twice in a row, eyebrows may be raised.
Inexperienced test engineers are sometimes surprised to learn that an expensive tester
cannot give perfectly repeatable answers. They may be inclined to believe that the tester software
is defective when it fails to produce the same result every time the program is executed. However,
experienced test engineers recognize that a certain amount of random error is to be expected in
analog and mixed-signal measurements.
Random errors are usually caused by thermal noise or other noise sources in either the
DUT or the tester hardware. One of the biggest challenges in mixed-signal testing is determining
whether the random errors are caused by bad DIB design, by bad DUT design, or by the tester
itself. If the source of error is found and cannot be corrected by a design change, then averaging or
filtering of measurements may be required.
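
Where averaging is used, its benefit can be illustrated with a short simulation; the noise level and sample count below are arbitrary, and the square-root-of-N improvement assumes uncorrelated noise (a standard statistical result rather than a claim from this lecture).

```python
# Sketch: averaging repeated readings to suppress random (noise) error.
# For uncorrelated noise, an N-sample average shrinks the noise by sqrt(N).
import numpy as np

rng = np.random.default_rng(seed=1)
true_mv, noise_sigma_mv = 100.0, 1.0

single = true_mv + rng.normal(0.0, noise_sigma_mv)                      # one noisy reading
averaged = np.mean(true_mv + rng.normal(0.0, noise_sigma_mv, size=64))  # ~8x quieter
print(f"single: {single:.2f} mV, 64-sample average: {averaged:.2f} mV")
```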

Resolution (Quantization Error)


In the 100-mV measurement list, notice that the measurements are always rounded off to
the nearest millivolt. The measurement may have been rounded off by the person taking the
measurements, or perhaps the DVM was only capable of displaying three digits. ATE
measurement instruments have similar limitations in measurement resolution. Limited resolution
results from the fact that continuous analog signals must first be converted into a digital format
before the ATE computer can evaluate the test results. The tester converts analog signals into
digital form using analog-to-digital converters (ADCs).
ADCs by nature exhibit a feature called quantization error. Quantization error is a result of
the conversion from an infinitely variable input voltage (or current) to a finite set of possible
digital output results from the ADC. Figure shows the relationship between input voltages and
output codes for an ideal 3-bit ADC. Notice that an input voltage of 1.2 V results in the same ADC
output code as an input voltage of 1.3 V. In fact, any voltage from 1.0 to 1.5 V will produce an
output code of 2.

Figure. Output codes versus input voltages for an ideal 3-bit ADC.

If this ADC were part of a crude DC voltmeter, the meter would produce an output reading
of 1.25 V any time the input voltage falls between 1.0 and 1.5 V. This inherent error in ADCs and
measurement instruments is caused by quantization error. The resolution of a DC meter is often
limited by the quantization error of its ADC circuits.
If a meter has 12 bits of resolution, it means that it can resolve a voltage to one part in 2^12 − 1
(one part in 4095). If the meter's full-scale range is set to ±2 V, then a resolution of approximately
1 mV can be achieved (4 V / 4095 levels). This does not automatically mean that the meter is
accurate to 1 mV; it simply means the meter cannot resolve variations in input voltage smaller than
1 mV. An instrument's resolution can far exceed its accuracy. For example, a 23-bit voltmeter
might be able to produce a measurement with a 1-μV resolution, but it may have a systematic
error of 2 mV.
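
A minimal sketch of this resolution limit is shown below, assuming an ideal uniform quantizer over a bipolar full-scale range; the function name and rounding convention are illustrative rather than a model of any particular meter.

```python
# Sketch: quantization by an ideal N-bit meter over a bipolar full-scale range.
def quantize(v, full_scale=2.0, bits=12):
    """Return the meter's reading of voltage v, rounded to the nearest code."""
    levels = 2**bits - 1                    # 4095 levels for 12 bits
    step = 2 * full_scale / levels          # ~1 mV for a +/-2 V range
    code = round((v + full_scale) / step)   # nearest ADC output code
    code = min(max(code, 0), levels)        # clip to the valid code range
    return code * step - full_scale

print(quantize(0.10037))  # ~0.10012 V: variations below ~1 mV are lost
```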

Repeatability
Nonrepeatable answers are a fact of life for mixed-signal test engineers. A large portion of
the time required to debug a mixed-signal test program can be spent tracking down the various
sources of poor repeatability. Since all electrical circuits generate a certain amount of random
noise, measurements such as those in the 100-mV offset example are fairly common. In fact, if a
test engineer gets the same answer 10 times in a row, it is time to start looking for a problem. Most
likely, the tester instrument’s full-scale voltage range has been set too high, resulting in a
measurement resolution problem. For example, if we configured a meter to a range having a
10-mV resolution, then our measurements from the prior example would be very repeatable (100 mV,
100 mV, 100 mV, 100 mV, etc.). A novice test engineer might think that this is a terrific result, but
the meter is just rounding off the answer to the nearest 10-mV increment due to an input ranging
problem. Unfortunately, a voltage of 104 mV would also have resulted in this same series of
perfectly repeatable, perfectly incorrect measurement results. Repeatability is desirable, but it does
not in itself guarantee accuracy.
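
The range-setting pitfall described above can be reproduced in a few lines; round_to_range is a hypothetical model of a meter whose resolution step is fixed by the selected range.

```python
# Sketch: a too-coarse range makes wrong answers look perfectly repeatable.
def round_to_range(v_mv, step_mv=10.0):
    """Hypothetical meter model: reading rounded to the range's resolution step."""
    return round(v_mv / step_mv) * step_mv

readings = [round_to_range(v) for v in (104.1, 103.8, 104.3, 104.0)]
print(readings)  # [100.0, 100.0, 100.0, 100.0] -- repeatable, yet ~4 mV wrong
```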

Stability
A measurement instrument’s performance may drift with time, temperature, and humidity.
The degree to which a series of supposedly identical measurements remains constant over time,
temperature, humidity, and all other time-varying factors is referred to as stability. Stability is an
essential requirement for accurate instrumentation.
Shifts in the electrical performance of measurement circuits can lead to errors in the tested
results. Most shifts in performance are caused by temperature variations. Testers are usually
equipped with temperature sensors that can automatically determine when a temperature shift has
occurred. The tester must be recalibrated anytime the ambient temperature has shifted by a few
degrees. The calibration process brings the tester instruments back into alignment with known
electrical standards so that measurement accuracy can be maintained at all times.
After the tester is powered up, the tester’s circuits must be allowed to stabilize to a constant
temperature before calibrations can occur. Otherwise, the measurements will drift over time as the
tester heats up. When the tester chassis is opened for maintenance or when the test head is opened
up or powered down for an extended period, the temperature of the measurement electronics will
typically drop. Calibrations then have to be rerun once the tester recovers to a stable temperature.

Shifts in performance can also be caused by aging electrical components. These changes are
typically much slower than shifts due to temperature. The same calibration processes used to
account for temperature shifts can easily accommodate shifts of components caused by aging.
Shifts caused by humidity are less common, but can also be compensated for by periodic
calibrations.

Correlation
Correlation is another activity that consumes a great deal of mixed-signal test program
debug time. Correlation is the ability to get the same answer using different pieces of hardware or
software. It can be extremely frustrating to try to get the same answer on two different pieces of
equipment using two different test programs. It can be even more frustrating when two
supposedly identical pieces of test equipment running the same program give two different
answers.
Of course correlation is seldom perfect, but how good is good enough? In general, it is a
good idea to make sure that the correlation errors are less than one-tenth of the full range between
the minimum test limit and the maximum test limit. For example, if a test's limits are 90 mV and
110 mV, correlation errors should be kept below about 2 mV. However, this is just a rule of thumb. The
exact requirements will differ from one test to the next. Whatever correlation errors exist, they
must be considered part of the measurement uncertainty, along with nonrepeatability and
systematic errors.
The test engineer must consider several categories of correlation. Test results from a mixed-
signal test program cannot be fully trusted until the various types of correlation have been
verified. The more common types of correlation include tester-to-bench, tester-to-tester, program-
to-program, DIB-to-DIB, and day-to-day correlation.

Tester-to-Bench Correlation
Often, a customer will construct a test fixture using bench instruments to evaluate the
quality of the device under test. Bench equipment such as oscilloscopes and spectrum analyzers
can help validate the accuracy of the ATE tester’s measurements. Bench correlation is a good idea,
since ATE testers and test programs often produce incorrect results in the early stages of debug. In
addition, IC design engineers often build their own evaluation test setups to allow quick debug of
device problems. Each of these test setups must correlate to the answers given by the ATE tester.
Often the tester is correct and the bench is not. Other times, test program problems are uncovered
when the ATE results do not agree with a bench setup. The test engineer will often need to help
debug the bench setup to get to the bottom of correlation errors between the tester and the bench.

Tester-to-Tester Correlation
Sometimes a test program will work on one tester, but not on another presumably identical
tester. The differences between testers may be catastrophically different, or they may be very
subtle. The test engineer should compare all the test results on one tester to the test results
obtained using other testers. Only after all the testers agree on all tests are the test program and test
hardware debugged and ready for production. Similar correlation problems arise when an existing
test program is ported from one tester type to another. Often, the testers are neither software
compatible nor hardware compatible with one another. In fact, the two testers may not even be
manufactured by the same ATE vendor. A myriad of correlation problems can arise because of the
vast differences in DIB layout and tester software between different tester types. To some extent,
the architecture of each tester will determine the best test methodology for a particular
measurement. A given test may have to be executed in a very different manner on one tester
versus another. Any difference in the way a measurement is taken can affect the results. For this
reason, correlation between two different test approaches can be very difficult to achieve.
Conversion of a test program from one type of tester to another can be one of the most daunting
tasks a mixed-signal test engineer faces.

Program-to-Program Correlation
When a test program is streamlined to reduce test time, the faster program must be
correlated to the original program to make sure no significant shifts in measurement results have
occurred. Often, the test reduction techniques cause measurement errors because of reduced DUT
settling time and other timing-related issues. These correlation errors must be resolved before the
faster program can be released into production.

DIB-to-DIB Correlation
No two DIBs are identical, and sometimes the differences cause correlation errors. The test
engineer should always check to make sure that the answers obtained on multiple DIB boards
agree. DIB correlation errors can often be corrected by focused calibration software written by the
test engineer.

Day-to-Day Correlation
Correlation of the same DIB and tester over a period of time is also important. If the tester
and DIB have been properly calibrated, there should be no drift in the answers from one day to the
next. Subtle errors in software and hardware often remain hidden until day-to-day correlation is
performed. The usual solution to this type of correlation problem is to improve the focused
calibration process.

Reproducibility
The term reproducibility is often used interchangeably with repeatability, but this is not a
correct usage of the term. The difference between reproducibility and repeatability relates to the
effects of correlation and stability on a series of supposedly identical measurements. Repeatability
is most often used to describe the ability of a single tester and DIB board to get the same answer
multiple times as the test program is repetitively executed.
Reproducibility, by contrast, is the ability to achieve the same measurement result on a
given DUT using any combination of equipment and personnel at any given time. It is defined as
the statistical deviation of a series of supposedly identical measurements taken over a period of
time. These measurements are taken using various combinations of test conditions that ideally
should not change the measurement result. For example, the choice of equipment operator, tester,
DIB board, and so on, should not affect any measurement result.

Consider the case in which a measurement is highly repeatable, but not reproducible. In
such a case, the test program may consistently pass a particular DUT on a given day and yet
consistently fail the same DUT on another day or on another tester. Clearly, measurements must
be both repeatable and reproducible to be production-worthy.

Assessment questions to the lecture

1. What does the yield of a given lot of material represent?
   A) The ratio of good devices to total devices tested
   B) The total number of devices tested
   C) The percentage of devices with random errors
   D) The measurement resolution of the tester
   Answer: A (Bloom's Knowledge Level: Remembering)

2. What is the theoretical concept based on a probability argument used to estimate defect levels?
   A) Quantization error
   B) Stability
   C) Correlation
   D) Defect level (DL)
   Answer: D (Bloom's Knowledge Level: Remembering)

3. What does precision refer to in measurement terminology?
   A) Measurement accuracy
   B) Repeatability of a series of measurements
   C) Variation of measurement conditions
   D) Quantization error
   Answer: B (Bloom's Knowledge Level: Remembering)

4. What is quantization error related to in measurement instruments?
   A) Stability
   B) Accuracy
   C) Resolution
   D) Reproducibility
   Answer: C (Bloom's Knowledge Level: Remembering)

5. What does stability in measurement instruments refer to?
   A) The consistency of a series of measurements
   B) Drift with time, temperature, and humidity
   C) The ability to get the same answer using different hardware
   D) The statistical deviation of identical measurements over time
   Answer: B (Bloom's Knowledge Level: Remembering)

Students have to prepare answers for the following questions at the end of the lecture

1. Define the term "yield" in semiconductor manufacturing. (2 marks, CO2, Remembering)
2. Explain the concept of "defect level (DL)" in semiconductor testing. (2 marks, CO2, Understanding)
3. Define the term "stability" in the context of measurement instruments. Why is stability crucial for accurate measurements? (2 marks, CO2, Remembering)
4. Describe quantization error. How does it relate to measurement instruments? (2 marks, CO2, Remembering)
5. Explain the various measurement terminology used in Mixed Signal IC Design Testing. (13 marks, CO2, Understanding)

Reference Book

Gordon W. Roberts, Friedrich Taenzler, and Mark Burns, An Introduction to Mixed-Signal IC Test and Measurement, Oxford University Press, Inc., 2012, pp. 127-134.
