
ELEN 30173

Instrumentation and Control


POLYTECHNIC UNIVERSITY OF THE PHILIPPINES – MANILA, COLLEGE OF ENGINEERING, ELECTRICAL ENGINEERING DEPARTMENT

September 23, 2024


Topic 01
Introduction to Instrumentation and Control

Agenda and Learning Objectives
• Definition
• Measurement Parameters
• History of Instrumentation
• Application of Instrumentation
• Instrumentation Engineering

Basics of Instrumentation
Instrumentation – the design, equipping, and/or use of measuring instruments to determine real-life conditions in a plant’s process, for observation, measurement, and control.

Instrumentation deals with the installation, maintenance, and calibration of devices used to automate
industrial processes.

Instrumentation devices used in industrial plants are referred to as process instrumentation.


Basics of Instrumentation
Field of Study
Instrumentation is a collective term for measuring instruments, used for indicating, measuring, and
recording physical quantities. It is also a field of study about the art and science of making measurement
instruments involving areas of metrology, automation, and control theory.
The term has its origin in the art and science of scientific instrument-making.

Analog Instruments in Diesel Generator Sets

Digital Instruments in Diesel Generator Sets
Measurement Parameters
Instrumentation is used to measure many parameters (physical values), including:

• Pressure
• Flow
• Temperature
• Levels of Liquids
• Density
• Viscosity
• Moisture
• Humidity
• Ionising Radiation
• Frequency
• Current
• Voltage
• Inductance
• Capacitance
• Resistivity
• Chemical Composition
• Chemical Properties
• Position
• Vibration
• Toxic Gases
• Weight
Brief History of Instrumentation
Timeline: 270 BCE – 1500/1663 – 1930 – 1950 – 1970 – 1982 – 2022

Pre-Industrial Era
• Time (270 BCE): water clocks; automatic control systems in clocks
• Weather Clock (1500/1663): Christopher Wren’s design of a weather clock – meteorological sensors moving pens over paper, driven by clockwork

Early-Industrial Era
• Early Systems (1930): introduction of pneumatic transmitters and automatic 3-term (PID) controllers
• Transistor (1950): commercialization of transistors by the mid-1950s
• Digitalization (1970): introduction of computerized instruments
• ANSI/ISA S50 (1982): signal compatibility of electrical instruments
Applications of Instrumentation

• Space Explorations (Guidance, Navigation and Control)

• Personal and Public Mobility (Land, Air, and Sea Transportation)

• Manufacturing Plant (Equipment Operation)

• Building System Automation

• Home Appliances and Devices

• Laboratory Instrumentation
Instrumentation Engineering
Instrumentation engineering is the engineering
specialization focused on the principle and operation of
measuring instruments that are used in design and
configuration of automated systems in areas such as
electrical and pneumatic domains, and the control of
quantities being measured.

Instrumentation engineers typically work for industries with automated processes, such as chemical or manufacturing plants, with the goal of improving system productivity, reliability, safety, optimization, and stability.

To control the parameters in a process or in a particular system, devices such as microprocessors, microcontrollers, or PLCs are used.
Instrumentation Engineering

Instrumentation Engineers are responsible for


integrating the sensors with the recorders, transmitters,
displays or control systems, and producing the piping
and instrumentation diagram for the process.

Instrumentation engineers may design or specify


installation, wiring and signal conditioning. They may be
responsible for commissioning, calibration, testing and
maintenance of the system.

Instrumentation technologists, technicians and


mechanics specialize in troubleshooting, repairing, and
maintaining instruments and instrumentation systems.
Topic 02
Instrumentation and Control General Concept

Agenda and Learning Objectives
• Process Control
• Measurements
• Methods of Measurement
• Instrumentation General Objectives
• Accuracy vs Precision
Process Control

Process control is the automatic control of an output variable by sensing the amplitude of the output
parameter from the process and comparing it to the desired or set level and feeding an error signal
back to control an input variable.

Process control is the ability to monitor and adjust a process to give a desired output. It is used in
industry to maintain quality and improve performance. An example of a simple process that is
controlled is keeping the temperature of a room at a certain temperature using a heater and a
thermostat.

Sample Diagram of a Process Control
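The thermostat example above can be sketched as a minimal on/off feedback loop. The function name, setpoint, and hysteresis band below are illustrative assumptions, not part of the lecture material.

```python
# Minimal sketch of the thermostat example: an on/off (bang-bang) feedback loop.
# The error signal (setpoint - measured) is fed back to decide the input
# variable (heater state). All numbers here are illustrative assumptions.

def thermostat_step(measured_temp, setpoint, heater_on, hysteresis=0.5):
    """Return the new heater state from the sensed temperature.

    A small hysteresis band keeps the heater from switching rapidly
    (chattering) when the temperature sits right at the setpoint.
    """
    error = setpoint - measured_temp
    if error > hysteresis:       # too cold -> turn heater on
        return True
    if error < -hysteresis:      # too warm -> turn heater off
        return False
    return heater_on             # inside the band -> keep current state

# One sensed reading per step: heater turns on when the room is cold.
print(thermostat_step(measured_temp=18.0, setpoint=22.0, heater_on=False))
```

Each call compares the sensed output against the set level and feeds the error back to the input variable, exactly as the definition above describes.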


Measurement

Measurement is the result of an opinion formed by one or more observers about the relative size or intensity of some physical quantity.

The word measurement is used to tell us the length, the weight, the temperature, the color, or a change in one of these physical entities of a material. Measurement provides us with a means for describing the various physical and chemical parameters of materials in quantitative terms. For example, a 10 cm length of an object implies that the object is 10 times as large as 1 cm, the unit employed in expressing length.

Fundamental measuring process
Measurement

There are two requirements which are to be satisfied to get good result from the measurement.

1. The standard must be accurately known and internationally accepted.

2. The apparatus and experimental procedure adopted for comparison must be provable.
Methods of Measurement

1. Direct and indirect measurement

2. Primary and secondary & tertiary measurement

3. Contact and non-contact type of measurement


1. Direct and Indirect Measurement

Measurement is a process of comparison of the physical quantity with a standard. Depending upon the requirement and the standard employed, these are the two basic methods of measurement.

1. Direct Measurement

The value of the physical parameter is determined by comparing it directly with different standards. The
physical standards like mass, length and time are measured by direct measurement.

2. Indirect Measurement

The value of the physical parameter is more generally determined by indirect comparison with the secondary standards through calibration.
The measurement is converted into an analogous signal, which is subsequently processed and fed to the end device to present the result of the measurement.
2. Primary, Secondary & Tertiary Measurement

The complexity of an instrument system depends upon the measurement being made and upon the accuracy level to which the measurement is needed. Based upon the complexity of the measurement systems, the measurements are generally grouped into three categories.

1. Primary Mode
The sought value of the physical parameter is determined by comparing it directly with reference standards; the required information is obtained through the senses of sight and touch.

Examples
a. Matching of two lengths when determining the length of an object with a ruler.
b. Estimating the temperature difference between the contents of containers by inserting fingers.
c. Use of a beam balance to measure masses.
d. Measurement of time by counting the number of strokes of a clock.
2. Primary and Secondary & Tertiary Measurement
2. Secondary and Tertiary Measurement
These are the indirect measurements: those involving one conversion are called secondary measurements, and those involving two conversions are called tertiary measurements.

Examples
a. The conversion of pressure into displacement by means of bellows, and the conversion of force into displacement.

b. Pressure measurement by manometer and temperature measurement by mercury-in-glass thermometer.

c. The measurement of static pressure by a Bourdon tube pressure gauge is a typical example of tertiary measurement.
3. Contact and Non-Contact type of Measurement
1. Contact Type

The sensing element of the measuring device has contact with the medium whose characteristics are being measured.

2. Non-Contact Type

The sensor does not communicate physically with the medium.

Example

The optical, radioactive and some of the electrical/electronic measurement belong to this category.
Objectives of Instrumentation
1. The major objective of instrumentation is to measure and control the field parameters to increase the safety and efficiency of the process.

2. To achieve good quality.

3. To achieve automation and automatic control of processes, thereby reducing human intervention.

4. To maintain the operation of the plant within the design expectations and to achieve a good quality product.
Accuracy and Precision
• Accuracy – comparison of the actual (true) value vs the measured (computed) value

Errors:

• Absolute Error
  E_A = True Value − Computed Value

• Relative Error
  E_R = (True Value − Computed Value) / True Value
Accuracy and Precision
• Precision – comparison of several computed (measured) values

Errors:

• Absolute Approximate Error
  E_A = New Value − Previous Value

• Relative Approximate Error
  E_R = (New Value − Previous Value) / New Value
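The four error definitions above can be computed directly. A minimal sketch follows; the sample values are illustrative assumptions, not from the slides.

```python
# Error measures from the Accuracy and Precision slides.
# Sample values below are illustrative assumptions.

def absolute_error(true_value, computed_value):
    # Accuracy: deviation of one computed value from the accepted true value.
    return true_value - computed_value

def relative_error(true_value, computed_value):
    return (true_value - computed_value) / true_value

def absolute_approx_error(new_value, previous_value):
    # Precision: deviation between successive computed values.
    return new_value - previous_value

def relative_approx_error(new_value, previous_value):
    return (new_value - previous_value) / new_value

# Accuracy: one measurement against the accepted true value.
print(round(absolute_error(10.0, 9.8), 6))   # 0.2
print(round(relative_error(10.0, 9.8), 6))   # 0.02

# Precision: successive computed values against each other.
print(round(absolute_approx_error(9.81, 9.80), 6))
```

Note that accuracy needs a true value to compare against, while the approximate (precision) errors only compare successive readings to each other.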
Assignment
Research the Origins of Measurements. Submit by Monday, September 30, 2024, until 11:59 am
only.

• Individual submission via Gdrive


• Submission using MS Word. Include figures and computations. Organize your paper with a letterhead on the first page, table of contents, etc. Put your name, year and section, and date of submission.
Activity
Answer the problems in Instrumentation and Control Instructional Materials, page 11. Submission is
until 5:30pm today, September 23, 2024.

• Individual submission via Gdrive


• Answer using any kind of paper/MS word. Put your name, year and section and date of
submission.
THANK YOU


Topic 03
Instrumentation and Control Elements

Agenda and Learning Objectives
• Generalized measurement system and its functional elements
• Classification of Instruments
• Input, Output Configuration of a Measuring Instrument
• Performance Characteristics of a Measuring Instrument
Generalized measurement system and its functional elements

Generalized measurement system


Generalized measurement system and its functional elements

1. Primary sensing element

2. Variable conversion / Transducer element

3. Manipulation element

4. Data transmission element

5. Data processing element

6. Data presentation element


Generalized measurement system and its functional elements

The principal functions of an instrument are the acquisition of information by sensing and perception, the processing of that information, and its final presentation to a human observer. For analysis and synthesis, instruments are considered systems (or assemblies) of interconnected components organized to perform a specified function. The different components are called elements.
1. Primary Sensing Element

An element that is sensitive to the measured variable. The sensing element senses the condition, state, or value of the process variable by extracting a small part of the energy from the measured medium and producing an output proportional to the input. Because of this energy extraction, the measured quantity is always disturbed. Good instruments are designed to minimize this loading effect.
2. Variable Conversion / Transducer Element
An element that converts the signal from one physical form to another without changing the information content of the signal.

Examples:
• Bourdon tube and bellows, which transform pressure into displacement.
• Proving ring and other elastic members, which convert force into displacement.
• Rack and pinion, which converts rotary motion to linear motion and vice versa.
• Thermocouple, which converts information about a temperature difference into information in the form of an E.M.F.
3. Manipulation Element

It modifies the direct signal by amplification, filtering etc., so that a desired output is produced.

[input] × constant = Output
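The manipulation element's relation above (output = input × constant) is a pure gain. A minimal sketch, with an illustrative gain value and signal that are assumptions, not from the slides:

```python
# Sketch of the manipulation element: output = input * constant (a pure gain).
# The gain and the sample signal are illustrative assumptions.

def amplify(signal, gain):
    """Scale each sample of the signal without changing its waveform."""
    return [gain * x for x in signal]

# A small transducer signal boosted by a gain of 10:
print(amplify([0.1, 0.2, -0.1], 10))  # [1.0, 2.0, -1.0]
```

The waveform's shape is preserved; only its amplitude changes, which is the defining property of this element.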


4. Data Transmission Element

An element that transmits the signal from one location to another without changing the information
content. Data may be transmitted over long distances (from one location to another) or short
distances
(from a test center to a nearby computer).
5. Data Processing Element

An element that modifies data before it is displayed or finally recorded. Data processing may be used
for such purposes as:

• Corrections to the measured physical variables to compensate for scaling, non-linearity, zero
offset, temperature error, etc.

• Convert the data into a useful form, e.g., calculation of engine efficiency from speed, power input, and torque developed.

• Collect information regarding average, statistical, and logarithmic values.


6. Data Presentation Element
An element that provides a record or indication of the output from the data processing element. In a
measuring system using electrical instrumentation, an exciter, and an amplifier are also incorporated
into the circuit.
6. Data Presentation Element
The display unit may be required to serve the following functions.

• Transmitting
• Signaling
• Registering
• Indicating
• Recording
Generalized measurement system and its functional elements

The generalized measurement system is classified into 3 stages:

a) Input Stage
b) Intermediate Stage
i. Signal Amplifications
ii. Signal Filtration
iii. Signal Modification
iv. Data Transmission
c) Output Stage
Generalized measurement system and its functional elements

a) Input Stage
The input stage (Detector-transducer) is acted upon by the input signal (a variable to be
measured) such as length, pressure, temperature, angle, etc., and which transforms this signal
into some other physical form. When the dimensional units for the input and output signals are the
same, this functional element/stage is referred to as the transformer.
Generalized measurement system and its functional elements

b) Intermediate Stage
Generalized measurement system and its functional elements

b) Intermediate Stage
i. Signal amplification to increase the power or amplitude of the signal without affecting its waveform. The output from the detector-transducer element is generally too small to operate an indicator or a recorder, and its amplification is necessary. Depending upon the type of transducer signal, the amplification device may be of mechanical, hydraulic/pneumatic, optical, or electrical type.

ii. Signal filtration to extract the desired information from extraneous data. Signal filtration
removes the unwanted noise signals that tend to obscure the transducer signal. Depending
upon the nature of the signal and situation, one may use mechanical, pneumatic or electrical
filters.

iii. Signal modification to provide a digital signal from an analog signal or vice versa, or change
the form of output from voltage to frequency or from voltage to current.

iv. Data transmission to telemeter the data for remote reading and recording.
Generalized measurement system and its functional elements

c) Output Stage
Generalized measurement system and its functional elements

c) Output Stage
which constitutes the data display, record, or control. The data presentation stage collects the output from the signal-conditioning element and presents it to be read or seen and noted by the experimenter for analysis. This element may be:

• a visual display type, such as the height of liquid in a manometer or the position of the pointer on a scale
• a numerical readout on an electrical instrument
• a graphic record on some kind of paper chart or a magnetic tape
Classification of Instruments
1. Automatic and Manual Instruments

2. Self Generating and Power Operated

3. Self-contained and remote indicating instruments

4. Deflection and null type

5. Analog and digital types

6. Contact and non-contact type


Classification of Instruments
1. Automatic and Manual Instruments
The manual instruments require the services of an operator while the automatic types do not. For example, temperature measurement by mercury-in-glass thermometer is automatic, as the instrument indicates the temperature without requiring any manual assistance. However, measurement of temperature by a resistance thermometer incorporating a Wheatstone bridge in its circuit is manual in operation, as it needs an operator to obtain the null position.

Manual Weighing Scale Automatic / Digital Weighing Scale


Classification of Instruments
2. Self Generating and Power Operated

Self-generating instruments are those in which the output is supplied entirely by the input signal. The instrument does not require any outside power to perform its function.

Example: mercury-in-glass thermometer, Bourdon pressure gauge, pitot tube for measuring velocity

Some instruments require an auxiliary source of power, such as compressed air, electricity, or hydraulic supply, for their operation, and hence are called externally powered instruments (or passive instruments).

Example:
• L.V.D.T. (Linear Variable Differential Transformer)
• Strain gauge load cell
• Resistance thermometer and thermistor
Classification of Instruments
3. Self-contained and remote indicating instruments

The different elements of a self-contained instrument are contained in one physical assembly. In a
remote indicating instrument, the primary sensing element may be located at a sufficiently long
distance from the secondary indicating element. In the modern instrumentation technology, there
is a trend to install remote indicating instruments where the important indications can be displayed
in the central control rooms.

Flow Indicator Sensor


Classification of Instruments
4. Deflection and null type

In null-type instruments, the physical effect caused by the quantity being measured is nullified (deflection maintained at zero) by generating an equivalent opposing effect. The equivalent null-causing effect then provides a measure of the unknown quantity. A deflection-type instrument is one in which the physical effect generated by the measured quantity (measurand) is noted and correlated to the measurand.

Deflection Instruments

Null Instruments
Classification of Instruments
5. Analog and digital types

The signals of an analog unit vary in a continuous fashion and can take on an infinite number of values in a given range. A wrist watch, the speedometer of an automobile, a fuel gauge, ammeters, and voltmeters are examples of analog instruments.
Instruments basically perform two functions:
(i) Collection of data and;
(ii) control of plant and process.

Accordingly based upon the service rendered, the instruments may also be classified as indicating
instruments, recording instruments and controlling instruments.
Classification of Instruments
6. Contact and non-contact type
Linear displacement sensors are used to measure the distance between two points or two plane
surfaces. They use various technologies, but there are two basic types: contact and non-contact.
As their names suggest, contact sensors make physical contact with the object that is being
measured and non-contact sensors do not.

Contact-based measurement is a good choice for applications with low levels of cleanliness. Contact devices are also recommended for measuring exterior features that are not visible to non-contact devices.

Simple proximity switch

Non-contact measurement is faster than contact measurement, especially for applications with high sampling rates. Non-contact systems can also measure more points at one time, and without putting pressure on the object. They are also less prone to sensor wear and won’t dampen the motion of a target.

MicroTrak3: 1D Laser Displacement Sensor
INPUT, OUTPUT CONFIGURATION OF A MEASURING INSTRUMENT

An instrument operates on an input quantity (the measured variable) to provide an output called the measurement. The input is denoted by “i” and the output by “o”. The performance of the instrument can be stated in terms of an operational transfer function (G). The input-output relationship is characterized by the operation G such that:

o = G · i
The various inputs to a measurement system can be classified into three categories:

i) Desired input
ii) Interfering input
iii) Modifying input
INPUT, OUTPUT CONFIGURATION OF A MEASURING INSTRUMENT

i) Desired input:

A quantity that the instrument is specifically intended to measure. The desired input produces an output component according to its own input-output relation; the corresponding operator represents the mathematical operation necessary to obtain the output from the input.
INPUT, OUTPUT CONFIGURATION OF A MEASURING INSTRUMENT

ii) Interfering input:

A quantity to which the instrument is unintentionally sensitive. The interfering input produces an output component according to its own input-output relation.
INPUT, OUTPUT CONFIGURATION OF A MEASURING INSTRUMENT

iii) Modifying input:

A quantity that modifies the input-output relationship for both the desired and interfering inputs. The modifying input causes a change in the desired and/or interfering input-output relations.
INPUT, OUTPUT CONFIGURATION OF A MEASURING INSTRUMENT

A block diagram of these various aspects has been illustrated in Fig.


INPUT, OUTPUT CONFIGURATION OF A MEASURING INSTRUMENT

Example:

Consider a differential manometer, which consists of a U-tube filled with mercury, with its ends connected to the two points between which the pressure differential is to be measured. The pressure differential p1 − p2 is worked out from the hydrostatic (equilibrium) equation:

p1 − p2 = g · h · (ρm − ρf)

where ρm and ρf are the mass densities of mercury and the fluid respectively, and h is the scale reading. If the fluid flowing in the pipeline is a gas, then ρf << ρm; accordingly the above identity can be re-written as:

p1 − p2 = ρm · g · h
INPUT, OUTPUT CONFIGURATION OF A MEASURING INSTRUMENT

Here the differential pressure p1 − p2 is the desired input, the scale reading h is the output, and ρm · g is the scale factor which relates the output and the input.
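The manometer relation lends itself to a quick numeric check. In this sketch the densities and the scale reading are illustrative assumptions, not values from the slides:

```python
# Numeric check of the manometer relation p1 - p2 = g*h*(rho_m - rho_f),
# and of the gas approximation p1 - p2 = rho_m*g*h.
# All values below are illustrative assumptions.

g = 9.81               # gravitational acceleration, m/s^2
rho_mercury = 13600.0  # density of mercury, kg/m^3
rho_gas = 1.2          # density of the flowing gas, kg/m^3
h = 0.10               # scale reading, m

exact = g * h * (rho_mercury - rho_gas)  # full hydrostatic equation
approx = rho_mercury * g * h             # gas form, since rho_gas << rho_mercury

print(round(exact, 1), "Pa")
print(round(approx, 1), "Pa")
```

The two results differ by less than 0.01%, which is why dropping ρf is acceptable when the pipeline fluid is a gas.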
INPUT, OUTPUT CONFIGURATION OF A MEASURING INSTRUMENT

A) The manometer is placed on a wheel which is subjected to acceleration, and the scale indicates a reading even though the pressures at the two ends are equal. The acceleration constitutes an interfering input.

B) The manometer has an angular tilt, i.e., it is not properly aligned with the direction of the gravitational force. An output will result even when there is no pressure difference. Here the angular tilt acts as an interfering input.
INPUT, OUTPUT CONFIGURATION OF A MEASURING INSTRUMENT

Here the scale factor establishes the input-output relation, and this gets modified due to:

i) Temperature variation, which changes the value of the density of mercury.

ii) Change in gravitational force due to a change in the location of the manometer.

So these two are modifying inputs.

Methods of reducing the effects of interfering and modifying inputs include:

1) Signal filtering

2) Compensation by opposing inputs

3) Output correction
Performance Characteristics of a Measuring Instrument

1. Static characteristics

2. Dynamic characteristics

The performance characteristics of an instrument system are judged by how accurately the system measures the required input and how thoroughly it rejects the undesirable inputs.

Error = measured value (Vm) − true value (Vt)
Correction = Vt − Vm
Performance Characteristics of a Measuring Instrument

1. Static characteristics

a) Range and span,


b) Accuracy, error, correction,
c) Calibration,
d) Repeatability,
e) Reproducibility
f) Precision,
g) Sensitivity,
h) Threshold,
i) Resolution,
j) Drift,
k) Hysteresis, dead zone.
Performance Characteristics of a Measuring Instrument

a) Range and span

The region between the limits within which an instrument is designed to operate for measuring, indicating, or recording a physical quantity is called the range of the instrument. The range is expressed by stating the lower and upper values. Span represents the algebraic difference between the upper and lower range values of the instrument.

Example:

Range −10 to 80          Span = 80 − (−10) = 90
Range 5 bar to 100 bar   Span = 100 − 5 = 95 bar
Range 0 V to 75 V        Span = 75 volts
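The span computation above is a one-line algebraic difference; a minimal sketch covering the slide's three cases:

```python
# Span = algebraic difference between the upper and lower range values.
# The three calls mirror the examples on the slide.

def span(lower, upper):
    return upper - lower

print(span(-10, 80))  # 90
print(span(5, 100))   # 95 (bar)
print(span(0, 75))    # 75 (volts)
```

The first case shows why the difference must be algebraic: a negative lower limit adds to the span rather than subtracting from it.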
Performance Characteristics of a Measuring Instrument

b) Accuracy, error, correction:

No instrument gives an exact value of what is being measured; there is always some uncertainty in the measured values. This uncertainty is expressed in terms of accuracy and error. Accuracy of an indicated (measured) value may be defined as its closeness to an accepted standard (true) value. The difference between the measured value (Vm) and the true value (Vt) of the quantity is expressed as the instrument error:

Error = Vm − Vt

Static correction is defined as:

Correction = Vt − Vm
Performance Characteristics of a Measuring Instrument

c) Calibration:

The magnitude of the error and consequently


the correction to be applied is determined by
making a periodic comparison of the instrument
with standards that are known to be constant.
The entire procedure laid down for making,
adjusting, or checking a scale so that readings
of an instrument or measurement system
conform to an Accepted standard is called
calibration. The graphical representation of the
calibration record is called the calibration curve
and this curve relates standard values of input
or measurand to actual values of output
throughout the operating range of the
instrument.
Performance Characteristics of a Measuring Instrument

c) Calibration:

A comparison of the instrument reading may be made with;

(i) a primary standard,

(ii) a secondary standard of accuracy


greater than the instrument to be
calibrated,

(iii) a known input source.


Performance Characteristics of a Measuring Instrument

c) Calibration:

The following points and observations need consideration while calibrating an instrument:

(a) Calibration of the instrument is carried out with the instrument in the same position (upright, horizontal, etc.) and subjected to the same temperature and other environmental conditions under which it is to operate while in service.

(b) The instrument is calibrated with values of the measurand impressed both in the increasing and in the decreasing order. The results are then expressed graphically; typically the output is plotted as the ordinate and the input or measurand as the abscissa.

(c) Output readings for a series of impressed values going up the scale may not agree with the output readings for the same input values when going down.

(d) Lines or curves plotted on the graphs may not close to form a loop.
Performance Characteristics of a Measuring Instrument

d) Repeatability

Repeatability describes the closeness of the output readings when the same input is applied repeatedly over a short period of time, with the same measurement conditions, same instrument and observer, same location, and same conditions of use maintained throughout.
Performance Characteristics of a Measuring Instrument

e) Reproducibility

Reproducibility describes the closeness of output readings for the same input when there are changes in the method of measurement, observer, measuring instrument, location, conditions of use, and time of measurement.
Performance Characteristics of a Measuring Instrument

f) Precision

The instrument’s ability to reproduce a certain group of readings with a given accuracy is known as precision, i.e., if a number of measurements are made of the same true value, then the degree of closeness of these measurements is called precision.

It refers to the ability of an instrument to give its readings again and again in the same manner for a constant input signal.
Performance Characteristics of a Measuring Instrument

g) Sensitivity:

Sensitivity of an instrument is the ratio of the magnitude of the response (output signal) to the magnitude of the quantity being measured (input signal), i.e.,

Static sensitivity = change in output signal / change in input signal
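The ratio above can be sketched directly. The thermometer figures below are illustrative assumptions, not from the slides:

```python
# Static sensitivity = change in output signal / change in input signal.
# The thermometer figures are illustrative assumptions.

def static_sensitivity(delta_output, delta_input):
    return delta_output / delta_input

# e.g. a mercury column that rises 25 mm for a 10 degC temperature change:
print(static_sensitivity(25.0, 10.0), "mm/degC")  # 2.5 mm/degC
```

Note that the sensitivity carries units (output units per input unit), so two instruments can only be compared after their units are reconciled.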
Performance Characteristics of a Measuring Instrument

h) Threshold:

Threshold defines the minimum value of input which is necessary to cause a detectable change
from zero output.
When the input to an instrument is gradually increased from zero, the input must reach a certain minimum value before the change in the output can be detected. This minimum value of input is the threshold.
Performance Characteristics of a Measuring Instrument

i) Resolution:

It is defined as the increment in the input of the instrument for which the output remains constant, i.e., when the input given to the instrument is slowly increased, the output remains the same until the increment exceeds a definite value.

Resolution is the measure to which an instrument can sense the variation of the quantity to be measured. It is the minimum incremental change in the instrument's input that produces a detectable change in its output, over any specified portion of its measuring range.

Demonstration of resolution using a target analogy
Performance Characteristics of a Measuring Instrument

j) Drift:

The slow variation of the output signal of a measuring instrument is known as drift.

The variation of the output signal is not due to any change in the input quantity but to changes in the working conditions of the components inside the measuring instrument.
Performance Characteristics of a Measuring Instrument

k) Hysteresis, Dead zone:

Hysteresis is the maximum difference, for the same measured quantity (input signal), between the up-scale and down-scale readings during a full-range traverse in each direction.

Dead zone is the largest range through which an input signal can be varied without initiating any response from the indicating instrument; it is due to friction.
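Hysteresis as defined above can be computed from a calibration run. The up-scale and down-scale readings below are illustrative assumptions, not from the slides:

```python
# Hysteresis = maximum up-scale vs down-scale difference at the same input
# points during a full-range traverse. The readings are illustrative assumptions.

def hysteresis(upscale, downscale):
    """Readings taken at identical input values, first increasing then decreasing."""
    return max(abs(u - d) for u, d in zip(upscale, downscale))

up   = [0.0, 9.8, 19.5, 29.6, 40.0]   # readings going up the scale
down = [0.4, 10.2, 20.3, 30.1, 40.0]  # readings going back down

print(round(hysteresis(up, down), 2))  # 0.8
```

This mirrors observation (c) on the calibration slide: readings going up the scale generally do not agree with readings at the same inputs going down.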
Performance Characteristics of a Measuring Instrument

2. Dynamic characteristics:

a) Speed of response and measuring lag,


b) Fidelity and dynamic error,
c) Overshoot,
d) Dead time and dead zone,
e) Frequency response.
Performance Characteristics of a Measuring Instrument

a) Speed of response and measuring lag:

In a measuring instrument the speed of response (or) responsiveness is defined as the rapidity
with which an instrument responds to a change in the value of the quantity being measured.

Measuring lag refers to the delay in the response of an instrument to a change in the input
signal. The lag is caused by conditions such as inertia, or resistance.
Performance Characteristics of a Measuring Instrument

b) Fidelity and dynamic errors:

Fidelity of an instrumentation system is defined as the degree of closeness with which the system indicates or records the signal impressed upon it. It refers to the ability of the system to reproduce the output in the same form as the input. If the input is a sine wave, then for 100% fidelity the output should also be a sine wave.
The difference between the indicated quantity and the true value of the time-varying quantity is the dynamic error. Here the static error of the instrument is assumed to be zero.
Performance Characteristics of a Measuring Instrument

c) Overshoot:

Because of mass and inertia, a moving part (i.e., the pointer of the instrument) does not
immediately come to rest at its final deflected position; it goes beyond the steady state, i.e., it
overshoots.

Overshoot is defined as the maximum amount by which the pointer moves beyond the
steady state.
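The definition can be applied directly to sampled pointer positions. A hypothetical sketch (the step-response data are invented for illustration):

```python
# Illustrative sketch: estimating pointer overshoot from sampled
# step-response readings.

def overshoot(readings, steady_state):
    """Maximum amount by which the pointer moves beyond the steady state."""
    return max(max(readings) - steady_state, 0.0)

step_response = [0.0, 6.2, 9.8, 11.3, 10.4, 9.9, 10.0]  # pointer positions vs. time
print(overshoot(step_response, steady_state=10.0))      # peak excess over 10.0
```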
Performance Characteristics of a Measuring Instrument

d) Dead time and dead zone:

Dead time is defined as the time required for an instrument to begin to respond to a change in
the measured quantity; it represents the time that elapses after the measured quantity has been
altered but before the instrument begins to respond.

Dead zone is the largest change of the measured quantity to which the instrument does not
respond; it results from friction and backlash in the instrument.
Performance Characteristics of a Measuring Instrument

e) Frequency response:

Frequency response is the maximum frequency of the measured variable that an instrument is
capable of following without error. The usual requirement is that the frequency of the measured
variable should not exceed 60% of the natural frequency of the measuring instrument.
Sources of Error

a) Calibration of Instrument

b) Instrument Reproducibility

c) Measuring Arrangement

d) Workpiece

e) Environment Condition

f) Observer Skill
1. Calibration of Instrument

For any instrument, calibration is necessary before starting the process of measurement. When the
instrument is used frequently over a long period, its calibration may get disturbed. An instrument
that is out of calibration cannot give the true measured value; therefore the output produced by such
an instrument is in error.

The error due to improper instrument calibration is known as a systematic instrumental error, and it
occurs regularly.

This error can be eliminated by properly calibrating the instrument at frequent intervals.
2. Instrument Reproducibility

Though an instrument may be calibrated perfectly under a given set of conditions, the output it
produces contains error if the instrument is used under conditions that are not identical to those
existing during calibration; i.e., the instrument should be used under the same set of conditions at
which it was calibrated. This type of error may occur systematically or accidentally.
3. Measuring Arrangement

The process of measurement itself acts as a source of error if the arrangement of different
components of a measuring instrument is not proper.

Example:

While measuring length, Abbe's principle of alignment should be followed. According to this, the true
value of a length is obtained only when the axis of the measuring instrument and the axis of the scale
are collinear; any misalignment between them gives an erroneous value. Hence this type of error can be
eliminated by a proper arrangement of the measuring instrument.
4. Work piece

The physical nature of the object (workpiece), i.e., its roughness, softness or hardness, acts
as a source of error. Many optomechanical and mechanical types of instruments contact the object
under certain fixed pressure conditions. Since the response of soft and hard objects under these
fixed conditions is different, the output of measurement will be in error.
5. Environmental Condition

Changes in the environmental conditions are also a major source of error. Environmental conditions
such as temperature, humidity, pressure, and the magnetic or electrostatic field surrounding the
instrument may affect the instrument's characteristics, so the result produced by the measurement
may contain error.

These errors are undesirable and can be reduced in the following ways:

a. Arrangements must be made to keep the conditions approximately constant.

b. Hermetically sealing certain components in the instrument, which eliminates the effects of
humidity, dust, etc.

c. Magnetic and electrostatic shields must be provided.


6. Observer’s Skill

It is a well-known fact that the output of measurement of a physical quantity differs from operator
to operator, and sometimes even for the same operator the result may vary with his or her mental and
physical state.

One example of error produced by the operator is the parallax error in reading a meter scale. To
minimize parallax errors, modern electrical instruments have a digital display of the output.
Classification of errors and elimination of errors

No measurement can be made with perfect accuracy, but it is important to find out what the accuracy
actually is and how different errors have entered into the measurement. A study of errors is a first
step in finding ways to reduce them.

Errors may arise from different sources and are usually classified as under;
a) Gross errors
b) Systematic (or) instrumental errors
c) Random (or) environmental errors
1. Gross Errors

This class of errors mainly covers human mistakes in reading instruments and in recording and
calculating measurement results. The responsibility for the mistake normally lies with the experimenter.

Ex: The temperature is 31.5° but it is recorded as 21.5°, which is an obvious error. However,
such errors can be avoided by adopting two means:

1. Great care should be taken in reading and recording the data.

2. Two, three or even more readings should be taken for the quantity under measurement.
2. Systematic Errors

These type of errors are divided into three categories.

a. Instrumental Errors

b. Environmental errors

c. Observational errors
a. Instrumental Errors

These errors occur due to three main reasons:

a. Due to inherent short comings of the instrument

b. Due to misuse of instruments

c. Due to loading effects of instruments.


b. Environmental Errors

These errors are caused by changes in the environmental conditions in the area surrounding the
instrument that may affect the instrument's characteristics, such as changes in temperature,
humidity, barometric pressure, or magnetic or electrostatic fields.

These undesirable errors can be reduced by the following ways.

i. Arrangement must be made to keep the conditions approximately constant.

ii. Hermetically sealing certain components in the instrument, which eliminates the effects of
humidity, dust, etc.

iii. Magnetic or electrostatic shields must be provided.


c. Observational Errors

These errors are produced by the experimenter. The most frequent one is the parallax error introduced
in reading a meter scale.
Group Reporting – Electrical Instrumentation

Each group reports on its assigned topic, covering: history, specifications, functionality and
operations, instruments used, methodology of usage and actual application.

Group 1: VOM / Multimeter / Ammeter / Voltmeter / Ohmmeter
Group 2: Insulation Resistance Tester
Group 3: Thermal Imaging / Thermal Scanner
Group 4: Power Quality Analyzer / Data Logger
Group 5: Instrument Transformer (Current and Potential Transformer)
Group 6: Solar Power Meter
Group 7: Work Environment Measurement Instruments (Noise Level, Air Quality, Temperature)
Group 8: Work Environment Measurement Instruments (Temperature, Light, Chemical or Biological Hazards)
THANK YOU
ELEN 30173

Instrumentation and Control


POLYTECHNIC UNIVERSITY OF THE PHILIPPINES – MANILA, COLLEGE OF ENGINEERING, ELECTRICAL ENGINEERING DEPARTMENT
Topic 04
Measurement of Pressure
Agenda and Learning Objectives

Pressure Definition

Pressure Terminologies

Pressure Measurement Groups

Classification of Pressure Measuring Devices
Pressure Definition

 The action of force against some opposite force.

 A force in the nature of thrust distributed over a surface.

 The force acting against a surface within a closed container.


Pressure Units

Some of the commonly used pressure units are:

1 bar = 10⁵ N/m²
= 1.0197 kgf/cm²
= 750.06 mm of Hg

1 micron = 1 μm of Hg
= 10⁻³ mm of Hg

1 torr = 1 mm of Hg

1 μbar = 1 dyne/cm²

1 Pa = 1 N/m²
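These conversion factors can be collected into a small helper. A sketch using the standard factors listed above (the table and function names are my own, not from the slides):

```python
# Illustrative sketch: converting the pressure units above to pascals (N/m^2).

TO_PASCAL = {
    "bar":    1.0e5,       # 1 bar = 10^5 N/m^2
    "torr":   133.322,     # 1 torr = 1 mm of Hg
    "micron": 0.133322,    # 1 micron = 10^-3 mm of Hg
    "ubar":   0.1,         # 1 ubar = 1 dyne/cm^2 = 0.1 N/m^2
    "pa":     1.0,         # 1 Pa = 1 N/m^2
}

def to_pascal(value, unit):
    """Convert a pressure reading in the given unit to pascals."""
    return value * TO_PASCAL[unit.lower()]

print(to_pascal(1, "bar"))               # 100000.0
print(to_pascal(750.06, "torr") / 1e5)   # ~1, consistent with 1 bar = 750.06 mm Hg
```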
Pressure Terminologies

Following terms are generally associated with pressure and its measurement.

1. Atmospheric pressure (P_atm)

2. Absolute pressure (P_abs)

3. Gauge pressure (P_gauge) and vacuum pressure (P_vac)

4. Static pressure (P_s) and total pressure (P_t)


1. Atmospheric pressure (P_atm)

This is the pressure exerted by the envelope of air surrounding the earth surface. Atmospheric
pressure is usually determined by a mercury column barometer shown in fig. A long clean thick glass
tube closed at one end is filled with pure mercury. The tube diameter is such that capillary effects are
minimum.
The open end is stoppered and the tube is inserted into a mercury container; the stoppered end kept
well beneath the mercury surface. When the stopper is removed, mercury runs out of the tube into the
container and eventually mercury level in the tube settles at height h above mercury level in the
container. Atmospheric pressure acts at the mercury surface in the container, and the mercury vapour
pressure exits at the top of mercury column in the tube.

From the hydrostatic equation,

P_atm − P_v = ρgh ……(i)

where P_v is the mercury vapour pressure at the top of the column.
1. Atmospheric pressure (P_atm)

Mercury has a low vapour pressure (≈1.6 × 10⁻⁶ bar at 20 °C), and thus for all intents and purposes it
can be neglected in comparison to P_atm, which is about 1.0 bar at mean sea level. Then,

P_atm = ρgh ……(ii)

Atmospheric pressure varies with altitude, because the air nearer the earth's surface is compressed by
the air above. At sea level, the value of atmospheric pressure is close to 1.01325 bar or 760 mm of Hg
column (= 10.33 m of water column).
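As a quick check of equation (ii), a 760 mm mercury column reproduces the standard atmosphere. The density and gravity values below are standard constants; the script itself is my illustration, not slide content:

```python
# Numerical check of P_atm = rho * g * h for a 760 mm mercury barometer column.

rho_hg = 13595.1   # density of mercury at 0 deg C, kg/m^3
g      = 9.80665   # standard gravity, m/s^2
h      = 0.760     # barometer column height, m

p_atm = rho_hg * g * h   # hydrostatic equation (ii)
print(p_atm)             # ~101325 N/m^2, i.e. ~1.01325 bar
```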
2. Absolute pressure (P_abs)

It is defined as the force per unit area due to the interaction of fluid particles amongst themselves. A
zero pressure intensity will occur when molecular momentum is zero. Such a situation can occur only
when there is a perfect vacuum, i.e., a vanishingly small population of gas molecules or of molecular
velocity. Pressure intensity measured from this state of vacuum or zero pressure is called absolute
pressure.
3. Gauge Pressure (P_gauge) and Vacuum Pressure (P_vac)

Instruments and gauges used to measure fluid pressure generally
measure the difference between the unknown pressure P and the
existing atmospheric pressure P_atm. When the unknown pressure is more
than the atmospheric pressure, the pressure recorded by the
instrument is called gauge pressure (P_gauge). A pressure reading below the
atmospheric pressure is known as vacuum pressure or negative
pressure (P_vac). The actual absolute pressure is the sum of the gauge pressure
indication and the atmospheric pressure:

P_abs = P_atm + P_gauge
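The gauge/vacuum/absolute relations can be sketched as two one-line helpers (function names are illustrative, not from the slides):

```python
# Sketch of P_abs = P_atm + P_gauge and, for vacuum readings,
# P_abs = P_atm - P_vac. All pressures in kPa.

P_ATM = 101.325  # kPa, standard atmosphere

def absolute_from_gauge(p_gauge, p_atm=P_ATM):
    """Absolute pressure from a gauge reading above atmospheric."""
    return p_atm + p_gauge

def absolute_from_vacuum(p_vac, p_atm=P_ATM):
    """Absolute pressure from a vacuum (below-atmospheric) reading."""
    return p_atm - p_vac

print(absolute_from_gauge(200.0))   # a 200 kPa gauge reading
print(absolute_from_vacuum(30.0))   # a 30 kPa vacuum reading
```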
4. Static Pressure (P_s) and Total Pressure (P_t)

Static pressure is defined as the force per unit area acting on a wall by a fluid at rest or flowing
parallel to the wall in a pipe line.

Static pressure of a moving fluid is measured with an instrument
which is at rest relative to the fluid. The instrument should
theoretically move with the same speed as the fluid particles
themselves. As it is not possible to move a pressure transducer along in a
flowing fluid, static pressure is measured by inserting a tube into
the pipe line at right angles to the flow path.

Velocity pressure = total pressure − static pressure. With the pressures
expressed as heads of the flowing fluid,

V²/2g = P_t − P_s
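In absolute pressure units the same velocity-pressure relation reads P_t − P_s = ρV²/2, so the flow velocity can be recovered from a pitot-static pair. A hypothetical sketch (numbers and function name are mine, for illustration):

```python
# Illustrative sketch: flow velocity from total and static pressure,
# using velocity pressure = P_t - P_s = rho * V^2 / 2.

import math

def flow_velocity(p_total, p_static, rho):
    """Velocity (m/s) from a pitot-static pressure pair (Pa) and density (kg/m^3)."""
    return math.sqrt(2.0 * (p_total - p_static) / rho)

# Air at ~1.2 kg/m^3 with a 600 Pa pitot-static difference:
print(flow_velocity(p_total=101925.0, p_static=101325.0, rho=1.2))
```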
Pressure Measurement Groups

A. Instruments for measuring low pressure (below 1 mm of Hg): manometers and low pressure
gauges

B. Instruments for medium and low pressures (1 mm of Hg to 1000 atmospheres): Bourdon
tube and diaphragm gauges

C. Instruments for measuring low vacuum to ultra high vacuum (760 torr to 10⁻⁹ torr and beyond):
McLeod, thermal conductivity and ionization gauges

D. Instruments for measuring very high pressure (1000 atmospheres and above): Bourdon tube,
diaphragm and electrical resistance pressure gauges

E. Instruments for measuring varying pressure: engine indicator and CRO (cathode ray
oscilloscope)
Classification of Pressure Measuring Devices

1. Gravitational transducer

A. Dead weight tester


B. Manometer

2. Mechanical Gauges – Elastic Pressure Transducer


A. Bourdon tube pressure gauge

B. Elastic diaphragm pressure gauge

C. Bellows gauges

3. Low Pressure Gauges


Classification of Pressure Measuring Devices

4. McLeod Gauge

5. Thermal Conductivity Gauges

A. Thermocouple gauge

B. Pirani gauge

6. Ionization Gauge
Various Principles of Measurement

1. Pressure can be measured by balancing a column of liquid against the pressure which has to be
measured. The height of the column which is balanced becomes a measure of the applied
pressure when calibrated.
Example: Manometer

2. When pressure is applied to an elastic element, its shape changes, which in turn moves a
pointer with respect to the scale. The pointer reading becomes a measure of the applied
pressure.
Example: bourdon tube pressure gauge, diaphragm, bellows.

3. When an electric current flows through a conducting wire, it gets heated. Depending upon the
thermal conductivity of the surrounding medium, heat is dissipated from the wire at different
rates; the rate of change in the temperature of the wire becomes a measure of the pressure.
Example: Pirani gauge, thermocouple gauge (thermal conductivity gauges).
1. Gravitational Transducer – Dead Weight Piston Gauge

The dead weight tester is a primary standard for pressure measurement, and it offers a good
calibration facility over a wide pressure range (700 N/m² to 70 MN/m² gauge, in steps as small as
0.01% of range, with a calibration uncertainty of 0.01–0.05% of the reading).

A typical gauge is schematically shown in Fig. It consists of an accurately machined, bored and finished piston which is inserted
into a close-fitting cylinder, both of known cross-sectional areas. A platform is attached to the top of the piston and serves to hold
standard weights of known accuracy. The chamber and the cylinder are filled with clean oil, supplied from an oil reservoir
provided with a check valve at its bottom. Oil is withdrawn from the reservoir when the pump plunger executes an outward
stroke, and is forced into the space below the piston during the inward stroke of the plunger. For calibrating a gauge, an
appropriate amount of weight is placed on the platform and fluid pressure is applied until enough upward force is developed to
lift the piston-weight combination. When this occurs, the piston-weight combination floats freely within the cylinder.
1. Gravitational Transducer – Dead Weight Piston Gauge

Under the equilibrium condition the pressure force is balanced against the gravity force on the mass
m of the calibrated masses plus the piston and platform, and a frictional force. If A is the equivalent
area of the piston-cylinder combination, then:

PA = mg + frictional drag,  so  P = (mg + frictional drag) / A

The effective or equivalent area depends on such factors as piston cylinder clearance, pressure level,
temperature, and is normally taken as the mean of the cylinder and piston areas.
1. Gravitational Transducer – Manometer

Manometers measure pressure by balancing a column of liquid against the pressure to be measured.
The height of the column so balanced is noted and then converted to the desired units. Manometers may
be vertical, inclined, open, differential or compound. The choice of type depends on the sensitivity of
measurement, ease of operation and the magnitude of the pressure being measured. Manometers can be
used to measure gauge, differential, atmospheric and absolute pressure.

Piezometer, U-Tube Manometer, Single Column Manometer
1. Gravitational Transducer – Manometer

Piezometer

It is a vertical transparent glass tube, the upper end of which is open to the atmosphere and the
lower end of which communicates with the gauge point (a point in the fluid container at which the
pressure is to be measured). The rise of fluid in the tube above a given gauge point is a measure of
the pressure at that point.
1. Gravitational Transducer – Manometer

Fluid pressure at gauge point A = atmospheric pressure at the free surface + pressure due to a liquid
column of height h_A:

P_A = P_atm + w h_A

where w is the specific weight of the liquid.

Similarly, for the gauge point B:

P_B = P_atm + w h_B
1. Gravitational Transducer – Manometer

U-Tube Manometer

This simplest and most useful pressure measuring device consists of a transparent tube bent in the
form of the letter U and filled with a liquid whose density is known. The choice of manometric
liquid depends upon the pressure range and the nature of the fluid whose pressure is sought. For high
pressures, mercury (specific gravity 13.6) is the manometric/balancing liquid. For low pressures,
liquids like carbon tetrachloride (specific gravity 1.59) or acetylene tetrabromide (specific gravity
2.59) are employed. Quite often, some colour is added to the balancing liquid so as to get clear readings.

U-Tube Manometers
1. Gravitational Transducer – Manometer

a) Measurement of pressure greater than atmospheric pressure:


Due to the greater pressure P_x in the container, the manometric liquid is forced downward in the left
limb of the U-tube and there is a corresponding rise of manometric liquid in the right limb.

Let, h1 = height of the light liquid in the left limb above the datum line

h2 = height of the manometric (heavy) liquid in the right limb above the datum line

For the right limb, the gauge pressure at point 2 is

P2 = atmospheric pressure (i.e., zero gauge pressure at the free surface) + pressure due to the head
of manometric liquid of specific weight w2:

P2 = w2 h2
1. Gravitational Transducer – Manometer

For the left limb, the gauge pressure at point 1 is

P1 = gauge pressure in the container + pressure due to the height of the liquid of specific
weight w1:

P1 = P_x + w1 h1

Points 1 and 2 are at the same horizontal plane, so P1 = P2 and therefore w2 h2 = P_x + w1 h1.

Gauge pressure in the container: P_x = w2 h2 − w1 h1
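The U-tube relation can be checked numerically. A sketch with illustrative specific weights (mercury balancing water) and heights of my own choosing:

```python
# Worked check of P_x = w2*h2 - w1*h1 for a U-tube manometer.

w_water   = 9810.0     # specific weight of light liquid (water), N/m^3
w_mercury = 133416.0   # specific weight of mercury, N/m^3 (13.6 * w_water)

h1 = 0.10  # m, light liquid above datum in the left limb
h2 = 0.25  # m, mercury above datum in the right limb

p_x = w_mercury * h2 - w_water * h1   # gauge pressure in the container
print(p_x)                            # N/m^2
```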


1. Gravitational Transducer – Manometer

b) Measurement of pressure less than atmospheric pressure:

Due to the negative pressure in the container, the manometric liquid is sucked upward in the left
limb of the U-tube and there is a corresponding fall of manometric liquid in the right limb.

Gauge pressure in the container: P_x = −(w1 h1 + w2 h2)


1. Gravitational Transducer – Manometer

Single Column Manometer

A single column manometer is a modified form of manometer in which a reservoir having a large
cross-sectional area compared to the area of the tube is connected to one of the limbs (say the left
limb) of the manometer. The other limb may be vertical or inclined. There are two types of single
column manometers:

1. Vertical single column manometer

2. Inclined single column manometer


1. Gravitational Transducer – Manometer

1. Vertical single column manometer

To start with, let both limbs of the manometer be exposed to atmospheric pressure. Then the liquid
level in the wider limb (also called the reservoir, well or basin) and in the narrow limb will
correspond to position 0-0.

Let, h1 = height of the centre of the pipe above 0-0

h2 = rise of heavy liquid in the right limb above 0-0

δh = fall of the liquid level in the reservoir

1. Gravitational Transducer – Manometer

For the left limb, the gauge pressure at point 1 is: P1 = P_x + w1 h1 + w1 δh

For the right limb, the gauge pressure at point 2 is: P2 = 0 + w2 h2 + w2 δh

Points 1 and 2 are at the same horizontal plane: P1 = P2, and therefore,

Gauge pressure in the container is: P_x = (w2 h2 − w1 h1) + δh(w2 − w1)
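The relation can be sketched as a small function; because the reservoir area is large, δh is tiny and the correction term is small. All numbers are illustrative:

```python
# Sketch of P_x = (w2*h2 - w1*h1) + dh*(w2 - w1) for the vertical single
# column manometer; dh is the small fall of level in the reservoir.

def single_column_px(w1, h1, w2, h2, dh):
    """Gauge pressure in the container, specific weights in N/m^3, heights in m."""
    return (w2 * h2 - w1 * h1) + dh * (w2 - w1)

w1, w2 = 9810.0, 133416.0   # water and mercury, N/m^3
print(single_column_px(w1, 0.10, w2, 0.25, dh=0.0005))  # with reservoir correction
print(single_column_px(w1, 0.10, w2, 0.25, dh=0.0))     # dh neglected
```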


1. Gravitational Transducer – Manometer

2. Inclined single column manometer

This manometer is more sensitive: due to the inclination, the distance moved by the heavy liquid in
the right limb is greater for the same pressure.

Let, l = length of heavy liquid moved in the right limb from 0-0

θ = inclination of the right limb with the horizontal

Then the vertical rise is h2 = l sin θ, and

P_x = w2 (l sin θ) − w1 h1 + δh(w2 − w1)
1. Gravitational Transducer – Manometer

Advantages of Manometers;

 Relatively inexpensive and easy to fabricate

 Good accuracy and sensitivity

 Requires little maintenance; are not affected by vibrations

 Particularly suitable to low pressures and low differential pressures

 Sensitivity can be altered easily by effecting a change in the quantity of manometric liquid in the
manometer
1. Gravitational Transducer – Manometer

Limitations of Manometers;

 Generally large and bulky, fragile and easily broken

 The measured medium has to be compatible with the manometric fluid used

 Readings are affected by changes in gravity, temperature and altitude

 Surface tension of the manometric fluid creates a capillary effect and possible hysteresis

 Meniscus height has to be determined by accurate means to ensure improved accuracy


2. Mechanical Gauges – Elastic Pressure Transducer

For measuring pressures in excess of two atmospheres, elastic mechanical transducers are used.
The action of these gauges is based on the deflection of a hollow tube, diaphragm or bellows
caused by the applied pressure difference. The resulting deflection directly actuates a pointer
over a scale through suitable linkages and gears, or the motion may be transmitted as an
electrical signal. Bellows and diaphragm gauges are generally suitable up to 28 to 56 bar,
whereas bourdon tubes serve as very high pressure gauges.
2. Elastic Pressure Transducer – Bellow Gauges

The bellows is a longitudinally expansible and collapsible member


consisting of several convolutions or folds. The generally accepted
methods of fabrication are:

i. turning from a solid stock of metal,

ii. soldering or welding stamped annular rings,

iii. rolling a tubing,

iv. hydraulically forming a drawn tube.

Material selection is generally based on considerations like strength for the pressure range,
hysteresis and fatigue.
2. Elastic Pressure Transducer – Bellow Gauges

In the differential pressure arrangement, two bellows are connected to the ends of an equal-arm lever.
If equal pressures are applied to the two bellows, they extend by the same amount; the connecting
lever then moves bodily without rotating, and no movement results at the motion sector. Under a
differential pressure, the deflections of the bellows are unequal, and the differential displacement
of the connecting lever is indicated by the movement of the pointer on a scale.
2. Elastic Pressure Transducer – Bellow Gauges

Advantages of Bellow Gauges

1. Simple in Construction

2. Good for low to moderate pressures

3. Available for gauge, differential and absolute pressure measurements.

4. Moderate cost
2. Elastic Pressure Transducer – Bellow Gauges

Limitations of Bellow Gauges

1. Zero shift problems.

2. Needs spring for accurate characterization.

3. Requires compensation for ambient temperature changes.


2. Elastic Pressure Transducer – Bourdon Gauges

The pressure responsive element of a bourdon gauge consists essentially of a metal tube (called the
bourdon tube or spring), oval in cross-section and bent to form a circular segment of approximately
200 to 300 degrees. The tube is fixed but open at one end, and it is through this fixed end that the
pressure to be measured is applied. The other end is closed but free to allow displacement under the
deforming action of the pressure difference across the tube walls.

When a pressure (greater than atmospheric) is applied to the
inside of the tube, its cross-section tends to become circular.
This makes the tube straighten itself out, with a consequent
increase in its radius of curvature, i.e., the free end uncoils
and moves outward.
2. Elastic Pressure Transducer – Bourdon Gauges

Tip Travel:

The motion of the free end, commonly called tip travel, is a function of the tube length, wall
thickness, cross-sectional geometry and modulus of the tube material. For a bourdon tube, the
deflection ∆a of the element tip can be expressed as

∆a = 0.05 (aP/E) × (a factor depending on the tube length, wall thickness and cross-sectional geometry)

where a is the total angle subtended by the tube before pressurization, P is the applied pressure
difference and E is the modulus of elasticity of the tube material.
2. Elastic Pressure Transducer – Bourdon Gauges

Errors and their rectification: in general, three types of error are found in bourdon gauges.

Zero error or constant error, which remains constant over the entire pressure range.

Multiplication error, wherein the gauge may tend to give a progressively higher or lower reading.

Angularity error: quite often it is seen that a one-to-one correspondence (between tip travel and
pointer rotation) does not occur.
2. Elastic Pressure Transducer – Bourdon Gauges

Bourdon tube shapes and configurations:

The C-type bourdon tube has a small tip travel, and this necessitates amplification by a lever,
quadrant, pinion and pointer arrangement. Increased sensitivity can be obtained by using a very long
length of tubing in the form of a helix or a flat spiral, as indicated in Fig.
2. Elastic Pressure Transducer – Bourdon Gauges

Bourdon Tube Materials

1. For pressures of 100 to 700 kN/m², tubes are made of phosphor bronze.

2. For high pressures, P = 7000 to 63,000 kN/m², tubes are made of alloy steel or K-monel.
2. Elastic Pressure Transducer – Bourdon Gauges

Advantages of Bourdon Gauges

1. Low cost and simple in construction.

2. Capability to measure gauge, absolute and differential pressure.

3. Availability in several ranges.


2. Elastic Pressure Transducer – Bourdon Gauges

Limitations of Bourdon Gauges

1. Slow response.

2. Susceptibility to shock and vibration.

3. A geared movement is required for amplification of the small tip travel.


2. Elastic Pressure Transducer – Diaphragm Gauges

In its elementary form, a diaphragm is a thin plate of circular shape
clamped firmly around its edges. The diaphragm gets deflected in
accordance with the pressure differential across its sides, the
deflection being towards the low pressure side.

The pressure to be measured is applied to the diaphragm, causing it
to deflect, the deflection being proportional to the applied pressure.
The movement of the diaphragm depends on its thickness and diameter.
The pressure-deflection relation for a flat diaphragm with clamped
edges is given by

P = [16 E t⁴ / (3 r⁴ (1 − μ²))] [ y/t + 0.488 (y/t)³ ]
2. Elastic Pressure Transducer – Diaphragm Gauges

Where;

P = pressure difference across the diaphragm

E = modulus of elasticity

t = diaphragm thickness

μ = Poisson's ratio

r = diaphragm radius

y = deflection at the centre of the diaphragm
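Assuming the flat-diaphragm relation above, the pressure for a given centre deflection can be evaluated directly. The material constants below are typical steel-like values chosen for illustration, not slide data:

```python
# Sketch of the flat-diaphragm pressure-deflection relation:
# P = 16*E*t^4 / (3*r^4*(1 - mu^2)) * [ y/t + 0.488*(y/t)^3 ]

def diaphragm_pressure(y, t, r, E, mu):
    """Pressure difference (Pa) producing centre deflection y (m)."""
    k = 16.0 * E * t**4 / (3.0 * r**4 * (1.0 - mu**2))
    return k * (y / t + 0.488 * (y / t) ** 3)

E  = 200e9    # Pa, modulus of elasticity (steel-like, assumed)
mu = 0.3      # Poisson's ratio (assumed)
t  = 0.5e-3   # m, diaphragm thickness
r  = 25e-3    # m, diaphragm radius

print(diaphragm_pressure(y=0.05e-3, t=t, r=r, E=E, mu=mu))  # small-deflection case
```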
2. Elastic Pressure Transducer – Diaphragm Gauges

There are two basic types of diaphragm element design:

1. Metallic diaphragm, which depends upon its own resilience for its operation.

2. Slack (non-metallic) diaphragm, used in conjunction with an external calibrated spring.
2. Elastic Pressure Transducer – Diaphragm Gauges

Diaphragm Types

The diaphragms can be in the form of flat, corrugated or dished plates; the choice depending on the
strength and amount of deflection desired. Most common types of diaphragms are shown in Fig.
2. Elastic Pressure Transducer – Diaphragm Gauges

Diaphragm material, pressure ranges and applications

Metallic diaphragms are generally fabricated from a full-hard, cold-rolled nickel, chromium or iron
alloy which can have an elastic limit up to 560 MN/m². Typical pressure ranges are 0–50 mm water
gauge, 0–2800 kN/m² pressure, and 0–50 mm water gauge vacuum.

Typical applications are low pressure and absolute pressure gauges, draft gauges, liquid level gauges
and many types of recorders and controllers operating in the low range of direct or differential pressures.

Non-metallic slack diaphragms are made from a variety of materials such as goldbeater's skin, animal
membranes, impregnated silk cloth and synthetic materials like Teflon, neoprene, polythene, etc.
2. Elastic Pressure Transducer – Diaphragm Gauges

Advantages of Diaphragm Gauges

1. Relatively small size and moderate cost.

2. Capability to withstand high overpressures and maintain good linearity over a wide range.

3. Availability of gauge for absolute and differential pressure measurement.

4. Minimum of hysteresis and no permanent zero shift.


2. Elastic Pressure Transducer – Diaphragm Gauges

Limitations of Diaphragm Gauges

1. Needs protection from shocks and vibrations.

2. Cannot be used to measure high pressure.

3. Difficult to repair.
3. Low Pressure Gauges

Gauges for pressures below 1 mm of mercury (Hg) are known as low pressure (vacuum) gauges.
Pressure ranges:

Low vacuum = 760 torr to 25 torr

Medium vacuum = 25 torr to 10⁻³ torr

High vacuum = 10⁻³ torr to 10⁻⁶ torr

Very high vacuum = 10⁻⁶ torr to 10⁻⁹ torr

Ultra high vacuum = 10⁻⁹ torr and beyond
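These ranges can be expressed as a small classification helper (the function is my own illustration of the boundaries listed above):

```python
# Illustrative helper mapping a pressure in torr onto the vacuum ranges above.

def vacuum_range(p_torr):
    """Classify a pressure (torr) into the standard vacuum ranges."""
    if p_torr > 25:
        return "low vacuum"
    if p_torr > 1e-3:
        return "medium vacuum"
    if p_torr > 1e-6:
        return "high vacuum"
    if p_torr > 1e-9:
        return "very high vacuum"
    return "ultra high vacuum"

print(vacuum_range(100))    # low vacuum
print(vacuum_range(1e-5))   # high vacuum
```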
3. Low Pressure Gauges

Low pressure gauges are grouped into two types. They are;

(i) Direct measurement: wherein the displacement or deflection caused by the pressure is measured
and correlated to the applied pressure. This principle is incorporated in the bellows, diaphragm,
bourdon tube and manometer.

(ii) Indirect (inferential) measurement: wherein the low pressure is detected through measurement of
a pressure-controlled property such as volume, thermal conductivity, etc. The inferential gauges
include thermal conductivity gauges, the ionization gauge and the McLeod gauge.
4. Thermal Conductivity Gauges

These gauges measure pressure through a change in the thermal conductivity of the gas. Their
operation is based on the principle that "at low pressures there is a relationship between the
pressure and thermal conductivity, i.e., the thermal conductivity decreases with decrease in
pressure". There are two types of thermal conductivity gauges:

A. Thermocouple gauge.

B. Pirani gauge.
A. Thermocouple Gauge

The schematic diagram of a thermocouple-type conductivity gauge
is shown in the figure. The pressure to be measured is admitted
to a chamber, and a constant current is passed through a thin metal
strip in the chamber.

Due to this current the metal strip gets heated and acts as the hot
surface. The temperature of this hot surface is sensed by a
thermocouple attached to the metal strip. The glass tube acts as the
cold surface, whose temperature is nearly equal to room
temperature. The thermal conductivity of the gas surrounding the
strip changes with the applied pressure; this changes the rate at
which heat is carried away from the strip and hence its temperature,
which is sensed by the thermocouple. The thermocouple produces an
output voltage that drives a current through a milliammeter. This
current becomes a measure of the applied pressure when calibrated.
B. Pirani Gauge

The Pirani gauge likewise exploits the pressure dependence of the
thermal conductivity of the gas, but it senses the effect electrically.
The pressure to be measured is admitted to a chamber containing a
heated filament whose electrical resistance varies with its
temperature.

Heat is conducted away from the filament by the surrounding gas; as
the pressure falls, less heat is carried away, so the filament
temperature and hence its resistance rise. The filament forms one
arm of a Wheatstone bridge, and a sealed reference (compensating)
cell in an adjacent arm corrects for changes in ambient temperature.
The bridge unbalance, indicated by a meter, becomes a measure of the
applied pressure when calibrated.
THANK YOU
