Chapter 01: PHYSICAL QUANTITIES & MEASUREMENT
LENGTH MEASURING INSTRUMENTS
Meter Rule
A meter rule is a measuring tool used to measure length or distance, typically up to 1
meter. It is a straight, rigid stick, often made of wood, plastic, or metal, and is marked with
measurements in millimeters (mm), centimeters (cm), and meters (m). The meter rule is
commonly used in classrooms, laboratories, and workshops for measuring objects or
spaces.
The meter rule is usually divided into 100 cm, and each centimeter is further divided into 10 millimeters, so lengths can be read to the nearest millimeter. To use a
meter rule, place it along the object you are measuring, ensuring that it is aligned properly
with the edge of the object. Read the measurement by noting the value at the point where
the object ends.
The meter rule is a simple and reliable tool, but it has limitations. For example, it is not
suitable for measuring very small or very large distances with high precision. Additionally, it
may not give exact results for very precise measurements, as there can be slight human
error when reading the scale.
Vernier Calliper
A Vernier calliper is a precision measuring instrument used to measure linear dimensions
with high accuracy. It is widely employed in mechanical engineering, metalworking,
woodworking, and scientific laboratories to measure internal and external dimensions as
well as depths of objects.
The Vernier calliper was invented by Pierre Vernier in 1631. The device has evolved over
centuries to become a vital tool for precise measurement, incorporating improved
materials and manufacturing techniques to enhance accuracy.
A typical Vernier calliper consists of the following parts:
1. Main Scale: The fixed part of the instrument marked with measurements, usually
in centimeters and millimeters.
2. Vernier Scale: A sliding scale that enables fractional readings between the main
scale divisions, increasing measurement precision.
3. Fixed Jaw: Attached to the main scale, it holds one side of the object being
measured.
4. Sliding Jaw: Moves along the main scale and is connected to the Vernier scale; it
holds the other side of the object.
5. Depth Rod: A thin rod extending from the end of the caliper, used for measuring
depths.
6. Locking Screw: A screw used to fix the sliding jaw in position for accurate readings.
1|Page
F.M.Shariq Awan Al-Hussain Study Zone
BS (HONS). Naushera (S/V)
English Literature & Linguistics Physics (IX)
7. Thumb Screw (or Fine Adjustment Screw): Facilitates smooth and precise
movement of the sliding jaw.
The Vernier calliper operates on the principle of the Vernier scale, which allows
measurements with a resolution finer than the smallest main scale division. When the jaws
are closed or placed around an object, the alignment of the Vernier scale with the main
scale provides the measurement.
For example, if the main scale has divisions of 1 mm and the Vernier scale has 10 divisions, then the least count is 1 mm / 10 = 0.1 mm, so lengths can be read to the nearest 0.1 mm.
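As a rough sketch, the least-count idea can be expressed in Python. The function names and the 12.0 mm sample reading below are made up for illustration; they are not part of the notes.

```python
# Hypothetical sketch: computing a Vernier calliper reading.
# Function names and sample numbers are invented for illustration.

def vernier_least_count(main_div_mm, vernier_divisions):
    # Least count = one main-scale division / number of Vernier divisions.
    return main_div_mm / vernier_divisions

def vernier_reading(main_scale_mm, coinciding_division, least_count_mm):
    # Total reading = main-scale reading + (coinciding division x least count).
    return main_scale_mm + coinciding_division * least_count_mm

lc = vernier_least_count(1.0, 10)        # 1 mm / 10 = 0.1 mm
reading = vernier_reading(12.0, 4, lc)   # 12.0 mm + 4 x 0.1 mm = 12.4 mm
print(f"least count = {lc} mm, reading = {reading} mm")
```

Here the 4th Vernier division is assumed to coincide with a main-scale line, giving a reading of 12.4 mm.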
Screw Gauge
A screw gauge is an instrument used to measure very small lengths, such as the diameter of a wire, with greater precision than a Vernier calliper.
The main parts of a screw gauge include a U-shaped frame, an anvil, a spindle, a sleeve (or
barrel), a thimble, a ratchet, and a lock nut. The object to be measured is placed between
the anvil (fixed part) and the spindle (movable part). The sleeve has the main scale marked
on it, while the thimble has the circular scale. The ratchet ensures uniform pressure during
measurement, and the lock nut holds the spindle in place after measurement.
There are two scales on a screw gauge: the main scale (pitch scale) on the sleeve,
graduated in millimeters, and the circular scale (head scale) on the thimble, typically
divided into 100 equal parts.
Some important terms:
Pitch: The distance the spindle moves in one complete rotation of the thimble. For
example, if the spindle moves 1 mm in one full rotation, then the pitch is 1 mm.
Least Count (LC): The smallest measurement that can be accurately read with the
screw gauge. It is calculated by dividing the pitch by the number of divisions on the
circular scale. For example, if the pitch is 1 mm and there are 100 divisions on the
circular scale, then the least count is 0.01 mm.
Zero Error: Occurs when the zero of the circular scale does not align with the zero of
the main scale when the anvil and spindle are brought into contact. If the circular
scale zero lies below the reference line of the main scale, it is called positive zero
error; if it lies above, it is called negative zero error. To correct the measurement,
subtract the zero error (with its sign) from the observed reading.
Total Reading: The actual measurement obtained from the screw gauge. It is
calculated by adding the main scale reading to the product of the circular scale
reading and the least count. Then, apply the zero error correction to get the correct
measurement.
To use a screw gauge, first determine the least count. Then check for zero error by bringing
the anvil and spindle into contact and noting whether the zeros coincide. Place the object
between the anvil and spindle, and rotate the thimble until the object is gently held, using
the ratchet to ensure uniform pressure. Take the main scale reading (MSR) and the circular
scale reading (CSR), and calculate the total reading using the formula: Total Reading =
MSR + (CSR × Least Count). Finally, subtract the zero error (taking its sign into account)
from the total reading to get the correct measurement.
For example, if the pitch is 1 mm, the circular scale has 100 divisions, the least count is 0.01
mm, the main scale reading is 5 mm, the circular scale reading is 25, and the zero error is
+0.02 mm, then the total reading is 5 mm + (25 × 0.01 mm) = 5.25 mm. The correct
measurement is 5.25 mm - 0.02 mm = 5.23 mm.
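The worked example above can be checked with a short Python sketch. The function names are my own; the numbers are those from the text.

```python
# Sketch of the screw gauge calculation from the text: pitch 1 mm,
# 100 circular-scale divisions, MSR = 5 mm, CSR = 25, zero error = +0.02 mm.

def screw_gauge_least_count(pitch_mm, circular_divisions):
    # Least count = pitch / number of circular-scale divisions.
    return pitch_mm / circular_divisions

def corrected_reading(msr_mm, csr, lc_mm, zero_error_mm):
    # Total reading = MSR + (CSR x LC), then subtract the signed zero error.
    return msr_mm + csr * lc_mm - zero_error_mm

lc = screw_gauge_least_count(1.0, 100)        # 0.01 mm
value = corrected_reading(5.0, 25, lc, 0.02)  # 5.25 mm - 0.02 mm = 5.23 mm
print(f"corrected measurement = {value:.2f} mm")
```

Passing a negative zero error to the same function would add the correction instead, matching the sign convention described above.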
The screw gauge is used for measuring the diameter of thin wires and determining the
thickness of small sheets like paper, plastic, or metal. It is used in mechanical engineering
and manufacturing for precise measurements. The advantages of using a screw gauge
include high precision and accuracy, the ability to measure very small dimensions, and its
portability and ease of use.
Some precautions to take while using a screw gauge are: ensure the instrument is clean and
free from dust, do not apply excessive force—use the ratchet for uniform pressure, always
check for zero error before taking measurements, and handle the instrument carefully to
avoid damage.
The physical balance is commonly used in laboratories due to its simplicity and accuracy in
measuring mass. To ensure precise measurements, it’s important to handle the balance
carefully, ensure that it is properly calibrated, and avoid external disturbances like air
currents during the measurement process.
Stopwatch
To use a stopwatch, press the start button to begin timing, press the stop button to end
timing, and press the reset button to clear the time and prepare for the next measurement.
Some stopwatches also have a split-time function, allowing you to record intermediate
times without stopping the overall timing.
A stopwatch is a valuable tool for accurately measuring time intervals, with mechanical and
digital types offering different features to suit various needs.
Errors in Measurement
1. Random Error
Random errors are unpredictable variations that occur in every measurement. They can be
caused by slight fluctuations in instruments, environmental conditions, or even the
observer’s judgment. For example, when measuring the length of an object multiple times
with a ruler, you might get slightly different readings each time due to small, uncontrollable
factors. These errors affect the precision of measurements but can be minimized by taking
multiple readings and averaging them.
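The averaging idea can be shown in a minimal Python sketch; the readings below are invented ruler measurements in cm.

```python
# Minimal sketch: averaging repeated readings to reduce random error.
# The readings are made-up ruler measurements in cm.

readings_cm = [10.1, 9.9, 10.0, 10.2, 9.8]

# Random fluctuations tend to cancel out in the mean of many readings.
mean_cm = sum(readings_cm) / len(readings_cm)
print(f"average length = {mean_cm:.2f} cm")
```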
2. Systematic Error
Systematic errors are consistent, repeatable errors that occur due to flaws in the
measurement system. They can arise from improperly calibrated instruments, faulty
measurement techniques, or environmental factors that consistently affect measurements
in the same way. For instance, if a thermometer is not calibrated correctly, it might always
read 2°C higher than the actual temperature. These errors affect the accuracy of
measurements and can often be corrected by recalibrating instruments or improving
measurement methods.
3. Human Error
Human errors are mistakes made by the person conducting the measurement. They can
include misreading scales, recording incorrect values, or using instruments improperly. For
example, reading the wrong line on a graduated cylinder or misplacing a decimal point in a
calculation are common human errors. These errors can be reduced by careful attention,
proper training, and double-checking measurements.
Uncertainty in Measurement
Uncertainty in measurement means that there is always some doubt about the exactness of
any measurement. It recognizes that no measurement can be completely accurate because
various factors can affect the result. This idea is very important in science and engineering
because it helps estimate how much a measurement could differ from the actual value.
Every measurement has some uncertainty, which can come from different sources like the
limits of the measuring tool, environmental conditions, or how the person is using the tool.
For example, a ruler with millimeter marks can measure lengths to the nearest millimeter,
so each reading carries an uncertainty of about ±0.5 mm, half of its smallest division. A
difference smaller than this cannot be reliably read from the ruler.
To show uncertainty, we often write measurements with a number that shows the possible
difference, like “5.0 cm ± 0.5 cm.” This means the true value is likely to be between 4.5 cm
and 5.5 cm. The uncertainty helps us understand how precise the measurement is and how
reliable the results are.
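The interval implied by "5.0 cm ± 0.5 cm" can be written out in a tiny Python sketch:

```python
# Sketch of the interval implied by "5.0 cm ± 0.5 cm" from the text.

value_cm = 5.0
uncertainty_cm = 0.5

low = value_cm - uncertainty_cm    # 4.5 cm
high = value_cm + uncertainty_cm   # 5.5 cm
print(f"true value likely between {low} cm and {high} cm")
```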
Recognizing and considering uncertainty is important in experiments and design work, as it
ensures that conclusions are based on accurate measurements and that mistakes are
reduced.
Significant Figures
Significant figures are the digits in a number that give us meaningful information about
how precise a measurement is. They are important in science and math because they help
us show how accurate a measurement or calculation is. For example, if we measure the
length of a pencil as 5.34 cm, the digits 5, 3, and 4 are significant because they tell us the
exact measurement. However, if we measure it as 0.003 cm, the leading zeros are not
significant—they just help place the decimal point.
In general, non-zero digits are always significant. Zeros between non-zero digits are also
significant. However, zeros at the beginning of a number (before any non-zero digits) are
not significant. Also, trailing zeros in a decimal number are significant, but in a whole
number without a decimal point, they are usually not considered significant.
Understanding significant figures is important because it helps us communicate how precise
our measurements are. When performing calculations, we must follow certain rules for
rounding and for keeping the right number of significant figures, depending on the
operation we’re doing (like adding, subtracting, multiplying, or dividing). This ensures that
our results are as accurate as possible given the tools and methods we used.
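Rounding to a given number of significant figures can be sketched in Python using the base-10 magnitude of the value. This is a common textbook approach; `round_sig` is my own helper, not a standard library function.

```python
# Hedged sketch: rounding to a chosen number of significant figures.
# round_sig is an invented helper, not part of the standard library.
import math

def round_sig(x, sig):
    if x == 0:
        return 0.0
    # Position of the leading digit relative to the decimal point.
    magnitude = math.floor(math.log10(abs(x)))
    return round(x, sig - 1 - magnitude)

print(round_sig(5.3467, 3))    # 5.35
print(round_sig(0.003456, 2))  # 0.0035
```

Note how the leading zeros of 0.003456 do not count as significant figures: only the digits 3, 4, 5, 6 carry information.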
Precision & Accuracy
Precision and accuracy are two important ideas when measuring or performing
experiments.
Accuracy refers to how close a measurement is to the true or correct value. For example, if
you are measuring the length of a table and the true length is 100 cm, a measurement of
99.8 cm or 100.2 cm would be considered accurate because it is very close to the true
value.
Precision, on the other hand, refers to how consistent or repeatable your measurements
are. If you measure the same object several times and get 99.8 cm each time, your
measurements are precise, even if they are not exactly accurate. Precision is about getting
the same result each time, but it doesn't necessarily mean that result is correct.
In short, accuracy is about being close to the true value, while precision is about being
consistent in your measurements. Ideally, you want both: accurate and precise results.
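The distinction can be illustrated with a short Python sketch: accuracy as closeness of the mean to the true value, precision as the spread between repeated readings. All numbers are made up.

```python
# Illustrative sketch: accuracy vs precision. Data are invented.
import statistics

true_length_cm = 100.0
readings_cm = [99.8, 99.8, 99.8]   # identical readings: very precise

# Accuracy: how far the mean is from the true value.
accuracy_error = abs(statistics.mean(readings_cm) - true_length_cm)
# Precision: how much the readings scatter (zero spread = highly precise).
spread = statistics.stdev(readings_cm)

print(f"off by {accuracy_error:.1f} cm; spread = {spread:.1f} cm")
```

These readings are precise (no scatter) but slightly inaccurate (consistently 0.2 cm below the true value), matching the example in the text.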