
F.M. Shariq Awan
BS (Hons) English Literature & Linguistics
Al-Hussain Study Zone, Naushera (S/V)
Physics (IX)

Chapter 01:
PHYSICAL QUANTITIES & MEASUREMENT
LENGTH MEASURING INSTRUMENTS
Meter Rule
A meter rule is a measuring tool used to measure length or distance, typically up to 1
meter. It is a straight, rigid stick, often made of wood, plastic, or metal, and is marked with
measurements in millimeters (mm), centimeters (cm), and meters (m). The meter rule is
commonly used in classrooms, laboratories, and workshops for measuring objects or
spaces.
The meter rule is usually divided into 100 cm, and each centimeter is further divided into 10
millimeters, so its smallest division (least count) is 1 mm. This allows reasonably fine
measurements of small objects. To use a
meter rule, place it along the object you are measuring, ensuring that it is aligned properly
with the edge of the object. Read the measurement by noting the value at the point where
the object ends.
The meter rule is a simple and reliable tool, but it has limitations. For example, it is not
suitable for measuring very small or very large distances with high precision. Additionally, it
may not give exact results for very precise measurements, as there can be slight human
error when reading the scale.

Vernier Calliper
A Vernier calliper is a precision measuring instrument used to measure linear dimensions
with high accuracy. It is widely employed in mechanical engineering, metalworking,
woodworking, and scientific laboratories to measure internal and external dimensions as
well as depths of objects.
The Vernier calliper was invented by Pierre Vernier in 1631. The device has evolved over
centuries to become a vital tool for precise measurement, incorporating improved
materials and manufacturing techniques to enhance accuracy.
A typical Vernier calliper consists of the following parts:
1. Main Scale: The fixed part of the instrument marked with measurements, usually
in centimeters and millimeters.
2. Vernier Scale: A sliding scale that enables fractional readings between the main
scale divisions, increasing measurement precision.
3. Fixed Jaw: Attached to the main scale, it holds one side of the object being
measured.
4. Sliding Jaw: Moves along the main scale and is connected to the Vernier scale; it
holds the other side of the object.
5. Depth Rod: A thin rod extending from the end of the calliper, used for measuring
depths.
6. Locking Screw: A screw used to fix the sliding jaw in position for accurate readings.

7. Thumb Screw (or Fine Adjustment Screw): Facilitates smooth and precise
movement of the sliding jaw.
The Vernier calliper operates on the principle of the Vernier scale, which allows
measurements with a resolution finer than the smallest main scale division. When the jaws
are closed or placed around an object, the alignment of the Vernier scale with the main
scale provides the measurement.

How to Read a Vernier Calliper


1. Main Scale Reading: Note the measurement on the main scale just before the zero mark
of the Vernier scale.
2. Vernier Scale Reading: Find the line on the Vernier scale that exactly aligns with a line on
the main scale.
3. Calculate Total Measurement:
Measurement = Main Scale Reading + Vernier Scale Reading
Main scale divisions are usually in millimeters.
The Vernier scale allows readings to a fraction of a millimeter (e.g., 0.1 mm or 0.02 mm).
Least Count
The least count is the smallest measurement that can be accurately read using the
Vernier calliper. It is calculated as:

Least Count = Value of one main scale division / Number of divisions on the Vernier scale

For example, if the main scale has divisions of 1 mm and the Vernier scale has 10
divisions, then:

Least Count = 1 mm / 10 = 0.1 mm
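The reading rule above can also be checked numerically. The short Python sketch below is only an illustration added to these notes (the function name and sample values are made up); it combines a main scale reading with the aligned Vernier division using the least count.

```python
def vernier_reading(main_scale_mm, aligned_vernier_division,
                    main_division_mm=1.0, vernier_divisions=10):
    """Total reading = main scale reading + (aligned Vernier division x least count)."""
    least_count = main_division_mm / vernier_divisions   # e.g. 1 mm / 10 = 0.1 mm
    return main_scale_mm + aligned_vernier_division * least_count

# Example: main scale shows 12 mm and the 4th Vernier line coincides with a main scale line.
print(vernier_reading(12, 4))  # 12.4 (mm)
```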
Accuracy and Precision


Vernier callipers typically offer an accuracy of up to 0.02 mm, making them suitable for the
precise measurements required in scientific and engineering applications.
We can use a Vernier calliper for:
• Measuring the diameter of rods, wires, or cylinders.
• Measuring internal diameters of holes and cavities.
• Measuring depths of recesses or grooves.
• Quality control work in laboratories and manufacturing.
Advantages
o High accuracy and precision.
o Easy to use once familiar with the reading process.
o Versatile for various measurement tasks.
Limitations
o Sensitive to temperature variations, which can cause expansion or contraction.
o Require careful handling to avoid damage.
o Limited in measuring very small or very large dimensions.
The Vernier calliper is an essential tool in precision measurement, combining simplicity with
high accuracy. Its ability to measure internal, external and depth dimensions makes it
indispensable across multiple fields.
Micrometer Screw Gauge
A screw gauge, also known as a micrometer screw gauge, is an instrument used to measure
very small lengths, such as the thickness of a wire or a thin sheet, with high accuracy—
typically up to 0.01 mm. It works on the principle of a screw: when the screw is rotated, it
moves linearly along its axis. This linear movement allows for precise measurement of small
distances.

The main parts of a screw gauge include a U-shaped frame, an anvil, a spindle, a sleeve (or
barrel), a thimble, a ratchet, and a lock nut. The object to be measured is placed between
the anvil (fixed part) and the spindle (movable part). The sleeve has the main scale marked
on it, while the thimble has the circular scale. The ratchet ensures uniform pressure during
measurement, and the lock nut holds the spindle in place after measurement.
There are two scales on a screw gauge: the main scale (pitch scale) on the sleeve,
graduated in millimeters, and the circular scale (head scale) on the thimble, typically
divided into 100 equal parts.
Some important terms:
• Pitch: The distance the spindle moves in one complete rotation of the thimble. For example, if the spindle moves 1 mm in one full rotation, then the pitch is 1 mm.
• Least Count (LC): The smallest measurement that can be accurately read with the screw gauge. It is calculated by dividing the pitch by the number of divisions on the circular scale. For example, if the pitch is 1 mm and there are 100 divisions on the circular scale, then the least count is 0.01 mm.

• Zero Error: Occurs when the zero of the circular scale does not align with the zero of the main scale when the jaws are closed. If the circular scale zero is below the main scale line, it is called positive zero error; if it is above, it is called negative zero error. To correct the measurement, subtract the zero error from the observed reading.
• Total Reading: The actual measurement obtained from the screw gauge. It is calculated by adding the main scale reading to the product of the circular scale reading and the least count. Then, apply the zero error correction to get the correct measurement.
To use a screw gauge, first determine the least count. Then, check for zero error by closing
the jaws and noting if the zeros coincide. Place the object between the anvil and spindle,
and rotate the thimble until the object is gently held. Use the ratchet to ensure uniform
pressure. Take the main scale reading (MSR) and the circular scale reading (CSR). Calculate
the total reading using the formula: Total Reading = MSR + (CSR × Least Count), and then
subtract the zero error (with its sign) to obtain the corrected measurement.
For example, if the pitch is 1 mm, the circular scale has 100 divisions, the least count is 0.01
mm, the main scale reading is 5 mm, the circular scale reading is 25, and the zero error is
+0.02 mm, then the total reading is 5 mm + (25 × 0.01 mm) = 5.25 mm. The correct
measurement is 5.25 mm - 0.02 mm = 5.23 mm.
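The same calculation can be written as a short Python sketch. This is only an illustration added to these notes (the function and argument names are made up), assuming the formula Total Reading = MSR + (CSR × Least Count) with the zero error then subtracted, as described above.

```python
def screw_gauge_reading(msr_mm, csr_divisions, pitch_mm=1.0,
                        circular_divisions=100, zero_error_mm=0.0):
    """Corrected reading = MSR + (CSR x least count) - zero error."""
    least_count = pitch_mm / circular_divisions      # e.g. 1 mm / 100 = 0.01 mm
    observed = msr_mm + csr_divisions * least_count  # uncorrected (observed) reading
    return observed - zero_error_mm                  # subtract the zero error with its sign

# Example from the text: MSR = 5 mm, CSR = 25, zero error = +0.02 mm.
print(f"{screw_gauge_reading(5, 25, zero_error_mm=0.02):.2f} mm")  # 5.23 mm
```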
The screw gauge is used for measuring the diameter of thin wires and determining the
thickness of small sheets like paper, plastic, or metal. It is used in mechanical engineering
and manufacturing for precise measurements. The advantages of using a screw gauge
include high precision and accuracy, the ability to measure very small dimensions, and its
portability and ease of use.
Some precautions to take while using a screw gauge are: ensure the instrument is clean and
free from dust, do not apply excessive force—use the ratchet for uniform pressure, always
check for zero error before taking measurements, and handle the instrument carefully to
avoid damage.

MASS MEASURING INSTRUMENTS


Physical Balance
A physical balance, also known as a beam balance, is an instrument used to measure the
mass of objects. It operates on the principle of moments, which states that when two equal
masses are placed at equal distances from the pivot point on opposite sides of a beam, the
beam remains balanced.
The physical balance consists of a horizontal beam supported at its center by a fulcrum. At
each end of the beam, there are pans suspended where objects can be placed. When an
object of unknown mass is placed on one pan, standard weights of known mass are added
to the other pan until the beam is balanced. At this point, the mass of the unknown object
equals the total mass of the standard weights.

This instrument is commonly used in laboratories due to its simplicity and accuracy in
measuring mass. To ensure precise measurements, it’s important to handle the balance
carefully, ensure that it is properly calibrated, and avoid external disturbances like air
currents during the measurement process.

How to Calibrate a Physical (Beam) Balance


1. Ensure the Balance is on a Level Surface: Place the balance on a stable, flat surface
to prevent any tilting that could affect accuracy.
2. Zero the Balance: Before placing any object on the balance, ensure it reads zero. If it
doesn’t, adjust the zeroing mechanism (often a knob or screw) until the pointer
aligns with the zero mark.
3. Use Standard Weights: Place a known standard weight on one pan and adjust the
riders or sliding weights on the beams until the balance is level. This checks the
accuracy of the balance.
4. Check across the Scale: Repeat the process with different standard weights to ensure
the balance is accurate across its entire range.
5. Adjust as Necessary: If discrepancies are found, consult the manufacturer’s
instructions for calibration adjustments or seek professional servicing.

TIME MEASURING INSTRUMENTS


Stop Watch
A stopwatch is a device used to measure the time interval between two events. It is
commonly used in various fields such as sports, science experiments, and everyday
activities to record precise durations. There are two main types of stopwatches: mechanical
(analog) and digital.
A mechanical stopwatch operates using a spring mechanism that needs to be wound up
manually. It typically measures time intervals to the nearest 0.1 second. To use it, you press the
start button to begin timing, press it again to stop, and press a third time to reset the
needle to zero.
A digital stopwatch uses electronic components and displays the time on a digital screen. It
offers higher precision, often measuring time intervals to the nearest 0.01 second. Digital
stopwatches are more user-friendly and can include additional features like split-time
recording and memory functions.

To use a stopwatch, press the start button to begin timing, press the stop button to end
timing, and press the reset button to clear the time and prepare for the next measurement.
Some stopwatches also have a split-time function, allowing you to record intermediate
times without stopping the overall timing.
A stopwatch is a valuable tool for accurately measuring time intervals, with mechanical and
digital types offering different features to suit various needs.

VOLUME MEASURING INSTRUMENTS


Measuring Cylinder
A measuring cylinder, also known as a graduated cylinder, is a laboratory instrument used
to measure the volume of liquids accurately. It is typically made from materials like
borosilicate glass or polypropylene, which offer durability and chemical resistance.
The cylinder is marked with graduated lines along its length, each representing a specific
volume. Common sizes range from 10 mL to 1000 mL, allowing for precise measurements
across various volumes.
To use a measuring cylinder, place it on a flat surface and pour the liquid into it. Ensure that
your eye is level with the meniscus (the curve at the surface of the liquid) in order to avoid
parallax errors. Read the volume at the bottom of the meniscus for concave liquids like
water.
Measuring cylinders are essential tools in laboratories for tasks such as preparing solutions,
conducting experiments, and performing quality control in industrial processes.

Errors in Measurement
1. Random Error
Random errors are unpredictable variations that occur in every measurement. They can be
caused by slight fluctuations in instruments, environmental conditions, or even the
observer’s judgment. For example, when measuring the length of an object multiple times
with a ruler, you might get slightly different readings each time due to small, uncontrollable
factors. These errors affect the precision of measurements but can be minimized by taking
multiple readings and averaging them.
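As a simple illustration of this averaging idea, the Python sketch below uses made-up repeated readings (it is not part of the original notes):

```python
# Hypothetical repeated length readings in cm, scattered slightly by random error.
readings = [10.1, 9.9, 10.0, 10.2, 9.8]

mean = sum(readings) / len(readings)    # averaging reduces the effect of random error
spread = max(readings) - min(readings)  # a rough indication of the random scatter

print(f"Mean length: {mean:.2f} cm, spread of readings: {spread:.1f} cm")
```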
2. Systematic Error
Systematic errors are consistent, repeatable errors that occur due to flaws in the
measurement system. They can arise from improperly calibrated instruments, faulty
measurement techniques, or environmental factors that consistently affect measurements
in the same way. For instance, if a thermometer is not calibrated correctly, it might always
read 2°C higher than the actual temperature. These errors affect the accuracy of
measurements and can often be corrected by recalibrating instruments or improving
measurement methods.
3. Human Error
Human errors are mistakes made by the person conducting the measurement. They can
include misreading scales, recording incorrect values, or using instruments improperly. For
example, reading the wrong line on a graduated cylinder or misplacing a decimal point in a
calculation are common human errors. These errors can be reduced by careful attention,
proper training, and double-checking measurements.
Uncertainty in Measurement
Uncertainty in measurement means that there is always some doubt about the exactness of
any measurement. It recognizes that no measurement can be completely accurate because
various factors can affect the result. This idea is very important in science and engineering
because it helps estimate how much a measurement could differ from the actual value.
Every measurement has some uncertainty, which can come from different sources like the
limits of the measuring tool, environmental conditions, or how the person is using the tool.
For example, a ruler with millimeter marks can measure lengths only to the nearest
millimeter, so each reading carries an uncertainty of about ±0.5 mm, which is half of the
smallest division on its scale.
To show uncertainty, we often write measurements with a number that shows the possible
difference, like “5.0 cm ± 0.5 cm.” This means the true value is likely to be between 4.5 cm
and 5.5 cm. The uncertainty helps us understand how precise the measurement is and how
reliable the results are.
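To make the ± notation concrete, here is a tiny Python sketch (purely illustrative, with a made-up function name) that converts a reading and its uncertainty into the likely range:

```python
def uncertainty_range(value_cm, uncertainty_cm):
    """Return the lower and upper bounds implied by 'value ± uncertainty'."""
    return value_cm - uncertainty_cm, value_cm + uncertainty_cm

low, high = uncertainty_range(5.0, 0.5)
print(f"5.0 cm ± 0.5 cm: the true value likely lies between {low} cm and {high} cm")
# -> between 4.5 cm and 5.5 cm
```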
Recognizing and considering uncertainty is important in experiments and design work, as it
ensures that conclusions are based on accurate measurements and that mistakes are
reduced.
Significant Figures
Significant figures are the digits in a number that give us meaningful information about
how precise a measurement is. They are important in science and math because they help
us show how accurate a measurement or calculation is. For example, if we measure the
length of a pencil as 5.34 cm, the digits 5, 3, and 4 are significant because they tell us the
exact measurement. However, if we measure it as 0.003 cm, the leading zeros are not
significant—they just help place the decimal point.

In general, non-zero digits are always significant. Zeros between non-zero digits are also
significant. However, zeros at the beginning of a number (before any non-zero digits) are
not significant. Also, trailing zeros in a decimal number are significant, but in a whole
number without a decimal point, they are usually not considered significant.
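These counting rules can be written as a short Python sketch. It is a rough helper added for illustration (the function name is made up) and it only handles plain decimal strings without exponents:

```python
def count_sig_figs(value: str) -> int:
    """Count significant figures in a plain decimal string (no exponent part)."""
    digits = value.strip().lstrip("+-").replace(".", "")
    stripped = digits.lstrip("0")           # leading zeros are never significant
    if "." in value:
        return len(stripped)                # trailing zeros after a decimal point do count
    return len(stripped.rstrip("0"))        # bare trailing zeros treated as not significant

print(count_sig_figs("5.34"))   # 3
print(count_sig_figs("0.003"))  # 1
print(count_sig_figs("100"))    # 1
print(count_sig_figs("100.0"))  # 4
```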
Understanding significant figures is important because it helps us communicate how precise
our measurements are. When performing calculations, we must follow certain rules for
rounding and for keeping the right number of significant figures, depending on the
operation we’re doing (like adding, subtracting, multiplying, or dividing). This ensures that
our results are as accurate as possible given the tools and methods we used.
Precision & Accuracy
Precision and accuracy are two important ideas when measuring or performing
experiments.
Accuracy refers to how close a measurement is to the true or correct value. For example, if
you are measuring the length of a table and the true length is 100 cm, a measurement of
99.8 cm or 100.2 cm would be considered accurate because it is very close to the true
value.
Precision, on the other hand, refers to how consistent or repeatable your measurements
are. If you measure the same object several times and get 99.8 cm each time, your
measurements are precise, even if they are not exactly accurate. Precision is about getting
the same result each time, but it doesn't necessarily mean that result is correct.
In short, accuracy is about being close to the true value, while precision is about being
consistent in your measurements. Ideally, you want both: accurate and precise results.
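The distinction can be seen with two made-up sets of readings of the 100 cm table from the example above; the Python sketch below is illustrative only:

```python
true_length = 100.0  # cm, the known length of the table

precise_but_off = [99.0, 99.1, 99.0, 99.1]          # consistent readings, but all about 1 cm low
accurate_and_precise = [99.8, 100.2, 100.1, 99.9]   # readings close to the true value

for name, data in [("precise but not accurate", precise_but_off),
                   ("accurate and precise", accurate_and_precise)]:
    mean = sum(data) / len(data)       # closeness of the mean to 100 cm indicates accuracy
    spread = max(data) - min(data)     # a smaller spread means higher precision
    print(f"{name}: mean = {mean:.2f} cm, spread = {spread:.1f} cm")
```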
