Diagnosis Review Questions and Short Notes

REVIEW QUESTIONS

1. C    6. A     11. D
2. B    7. D     12. A
3. A    8. C
4. D    9. B
5. A    10. C

DIAGNOSIS: SOME NOTES

Clinicians should be familiar with basic principles when interpreting diagnostic tests.

THE ACCURACY OF A TEST RESULT

Diagnosis is an imperfect process, resulting in a probability rather than a certainty of being right. The doctor’s certainty or uncertainty about a diagnosis has been expressed by using terms such as “rule out” or “possible” before a clinical diagnosis. Increasingly, clinicians express the likelihood that a patient has a disease as a probability.

The test has given the correct result when it is positive in the presence of disease (true positive) or negative in the absence of the disease (true negative). On the other hand, the test has been misleading if it is positive when the disease is absent (false positive) or negative when the disease is present (false negative).

The Gold Standard

A test’s accuracy is considered in relation to some way of knowing whether the disease is truly
present or not—a sounder indication of the truth often referred to as the gold standard (or
reference standard or criterion standard).

More often, one must turn to relatively elaborate, expensive, or risky tests to be certain whether
the disease is present or absent. Among these are biopsy, surgical exploration, imaging
procedures, and of course, autopsy.

Because it is almost always more costly, more dangerous, or both to use more accurate ways of
establishing the truth, clinicians and patients prefer simpler tests to the rigorous gold standard, at
least initially. Chest x-rays and sputum smears are used to determine the cause of pneumonia,
rather than bronchoscopy and lung biopsy for examination of the diseased lung tissue.
Electrocardiograms and blood tests are used first to investigate the possibility of acute myocardial
infarction, rather than catheterization or imaging procedures.

Definition

Sensitivity is defined as the proportion of people with the disease who have a positive test for the disease. A sensitive test will rarely miss people with the disease.

Specificity is defined as the proportion of people without the disease who have a negative test. A specific test will rarely misclassify people as having the disease when they do not.
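These two definitions can be illustrated with a short calculation on a hypothetical 2×2 table (the counts below are invented for illustration):

```python
# Sensitivity and specificity from 2x2 counts of test results
# against a gold standard. All counts are hypothetical.

def sensitivity(tp, fn):
    """Proportion of diseased people with a positive test: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of non-diseased people with a negative test: TN / (TN + FP)."""
    return tn / (tn + fp)

# 90 true positives, 10 false negatives, 80 true negatives, 20 false positives.
print(sensitivity(90, 10))  # 0.9 -- misses 10% of diseased people
print(specificity(80, 20))  # 0.8 -- mislabels 20% of healthy people
```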

Use of Sensitive Tests

A sensitive test (i.e., one that is usually positive in the presence of disease) should be chosen
when there is an important penalty for missing a disease.

Use of Specific Tests

A highly specific test is rarely positive in the absence of disease; it gives few false-positive results.

In summary, a highly specific test is most helpful when the test result is positive.

Trade-Offs between Sensitivity and Specificity

It is obviously desirable to have a test that is both highly sensitive and highly specific. Unfortunately, this is often not possible. Instead, whenever clinical data take on a range of values, there is a trade-off between the sensitivity and specificity for a given diagnostic test. In those situations, the location of a cutoff point, the point on the continuum between normal and abnormal, is an arbitrary decision. As a consequence, for any given test result expressed on a continuous scale, one characteristic, such as sensitivity, can be increased only at the expense of the other (e.g., specificity).

The Receiver Operator Characteristic (ROC) Curve

Another way to express the relationship between sensitivity and specificity for a given test is to construct a curve, called a receiver operator characteristic (ROC) curve.
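As a minimal sketch with invented measurement values, an ROC curve can be traced by sweeping the cutoff across a continuous test result and recording sensitivity against 1 − specificity at each cutoff, making the trade-off described above concrete:

```python
# Sketch: ROC curve points from a continuous test result.
# All measurement values below are hypothetical.

def roc_points(diseased, healthy, cutoffs):
    """For each cutoff, return (1 - specificity, sensitivity);
    a result counts as 'positive' when the value is >= the cutoff."""
    points = []
    for c in cutoffs:
        sens = sum(v >= c for v in diseased) / len(diseased)  # true-positive rate
        spec = sum(v < c for v in healthy) / len(healthy)     # true-negative rate
        points.append((1 - spec, sens))
    return points

# Hypothetical marker levels in diseased and healthy people.
diseased = [8, 9, 10, 12, 7]
healthy = [3, 4, 5, 6, 9]
for fpr, tpr in roc_points(diseased, healthy, cutoffs=[5, 7, 9]):
    print(round(fpr, 2), round(tpr, 2))
```

Raising the cutoff lowers the false-positive rate but eventually sacrifices sensitivity, which is exactly the trade-off the curve displays.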

ESTABLISHING SENSITIVITY AND SPECIFICITY


Sensitivity and specificity may be inaccurately described because an improper gold standard has been chosen, as discussed earlier in this chapter. In addition, two other issues related to the selection of diseased and nondiseased patients can profoundly affect the determination of sensitivity and specificity as well. They are the spectrum of patients to which the test is applied and bias in judging the test’s performance. Statistical uncertainty, related to studying a relatively small number of patients, also can lead to inaccurate estimates of sensitivity and specificity.

Spectrum of Patients

Difficulties may arise when the patients used to describe the test’s properties are different from those to whom the test will be applied in clinical practice.

Also, patients with disease often differ in severity, stage, or duration of the disease, and a test’s
sensitivity will tend to be higher in more severely affected patients.

Some people in whom disease is suspected may have other conditions that cause a positive test,
thereby increasing the false-positive rate and decreasing specificity.

For example, the specificity of ovarian cancer screening is low partly because levels of the cancer marker, CA-125, recommended by guidelines, are elevated by many diseases and conditions other than ovarian cancer. Low specificity of diagnostic and screening tests is a major problem for ovarian cancer and can lead to surgery on many women without cancer.

In theory, the sensitivity and specificity of a test are independent of the prevalence of diseased individuals in the sample in which the test is being evaluated. In practice, however, several characteristics of patients, such as stage and severity of disease, may be related to both the sensitivity and the specificity of a test and to the prevalence, because different kinds of patients are found in high- and low-prevalence situations.

BIAS

A positive test may prompt the clinician to continue pursuing the diagnosis, increasing the
likelihood that the disease will be found. On the other hand, a negative test may cause the
clinician to abandon further testing, making it more likely that the disease, if present, will be
missed.
All the biases discussed tend to increase the agreement between the test and the gold standard. That is, they tend to make the test seem more accurate than it actually is.

Chance

Values for sensitivity and specificity are usually estimated from observations on relatively small samples of people with and without the disease of interest. Because of chance (random variation) in any one sample, particularly if it is small, the true sensitivity and specificity of the test can be misrepresented, even if there is no bias in the study.

Therefore, reported values for sensitivity and specificity should not be taken too literally if a small number of patients is studied.

Predictive value

The probability of disease, given the results of a test, is called the predictive value of the test.

Predictive value is sometimes called posterior (or posttest) probability, the probability of disease
after the test result is known.

The term accuracy is sometimes used to summarize the overall value of a test. Accuracy is the proportion of all test results, both positive and negative, that is correct.
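Both predictive value and accuracy follow directly from the 2×2 counts; a minimal sketch with hypothetical numbers:

```python
# Predictive values and overall accuracy from hypothetical 2x2 counts.

def predictive_values(tp, fp, fn, tn):
    ppv = tp / (tp + fp)                         # probability of disease given a positive test
    npv = tn / (tn + fn)                         # probability of no disease given a negative test
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # proportion of all results that are correct
    return ppv, npv, accuracy

ppv, npv, acc = predictive_values(tp=90, fp=20, fn=10, tn=80)
print(round(ppv, 2), round(npv, 2), round(acc, 2))  # 0.82 0.89 0.85
```

Note that, unlike sensitivity and specificity, the predictive values shift with the prevalence of disease in the group being tested.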

Likelihood Ratios

Likelihood ratios summarize the same kind of information as sensitivity and specificity and can be used to calculate the probability of disease after a positive or negative test (positive or negative predictive value).

Odds

Odds and probability contain the same information, but they express it differently. Probability, which is used to express sensitivity, specificity, and predictive value, is the proportion of people in whom a particular characteristic, such as a positive test, is present. Odds, on the other hand, is the ratio of two probabilities: the probability of an event divided by 1 minus the probability of the event.
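The conversion between the two is a one-line calculation in each direction; for example:

```python
# Converting between probability and odds.

def prob_to_odds(p):
    """Odds = p / (1 - p)."""
    return p / (1 - p)

def odds_to_prob(odds):
    """Probability = odds / (1 + odds)."""
    return odds / (1 + odds)

print(prob_to_odds(0.75))  # 3.0  (a probability of 0.75 is odds of 3 to 1)
print(odds_to_prob(3.0))   # 0.75
```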

Why Use Likelihood Ratios?


The main advantage of likelihood ratios is that they make it possible to go beyond the simple and clumsy classification of a test result as either abnormal or normal, as is done when describing the accuracy of a diagnostic test only in terms of sensitivity and specificity at a single cutoff point.

One can define likelihood ratios for each of an entire range of possible values. In this way, information represented by the degree of abnormality is not discarded in favor of just the crude presence or absence of it.

In general, tests with LRs far from 1.0 are associated with few false positives and few false negatives (>10 for LR+ and <0.1 for LR–), whereas those with LRs close to 1.0 (2.1 to 5.0 for LR+ and 0.2 to 0.5 for LR–) give much less accurate results.

In summary, likelihood ratios can accommodate the common and reasonable clinical practice of
putting more weight on extremely high (or low) test results than on borderline ones when
estimating the probability (or odds) that a particular disease is present.
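Putting the pieces together, a likelihood ratio updates a pretest probability through the odds form of Bayes’ theorem; a minimal sketch with hypothetical numbers:

```python
# Updating the probability of disease with a likelihood ratio,
# via the odds form of Bayes' theorem. Numbers are hypothetical.

def posttest_probability(pretest_prob, lr):
    """Pretest odds x LR = posttest odds; then convert back to probability."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# A pretest probability of 0.25 combined with a strongly positive
# result (LR+ = 9) yields a much higher posttest probability.
print(round(posttest_probability(0.25, 9), 2))  # 0.75
```

The same calculation with an LR below 1.0 (a negative result) lowers the probability, which is how a single function handles both ruling in and ruling out.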
