MODULE III
Vision-based sensors - Elements of a vision sensor, image acquisition, image processing, edge detection,
feature extraction, object recognition, pose estimation and visual servoing, hierarchy of a vision
system, CCD and CMOS cameras, monochrome, stereo vision, night vision cameras, still vs video
cameras, Kinect sensor; block schematic representations. Criteria for selection of sensors - range,
dynamic range, sensitivity, linearity, response time, bandwidth, accuracy, repeatability & precision,
resolution & threshold, type of output, size and weight, environmental conditions, interfacing.
The task of the camera as a vision sensor is to measure the intensity of the light reflected by an object,
as indicated in the figure above, using a photosensitive element termed a pixel (or photosite). A pixel is
capable of transforming light energy into electric energy. Sensors of different types, such as CCD and
CMOS, are available depending on the physical principle exploited to realize this energy
transformation.
1. Camera Systems
As indicated in Fig. 4.16, a camera is a complex system comprising several devices. Other than the
photosensitive sensor, there are a shutter, a lens, and analog preprocessing electronics. The lens is
responsible for focusing the light reflected by the object onto the plane where the photosensitive
sensor lies, called the image plane. Further processing and storage of the acquired images is generally
carried out by software residing inside a personal computer.
Note that there are two types of video cameras: analog and digital. Analog cameras are no longer
common. However, if one is used, a frame grabber or video capture card, usually a special
analog-to-digital converter adapted for video-signal acquisition in the form of a plug-in board installed
in the computer, is required to interface the camera to a host computer. The frame grabber stores the
image data from the camera in on-board or system memory, and performs sampling and digitization
of the analog data as necessary. In some cases, the camera outputs digital data that is directly
compatible with a standard computer, so a separate frame grabber may not be needed.
Vision software is needed to create the program which processes the image data. When an image has
been analyzed, the system must be able to communicate the result to control the process or to pass
information to a database. This requires a digital input/output interface. The human eye and brain can
identify objects and interpret scenes under a wide variety of conditions. Robot-vision systems are far
less versatile. So the creation of a successful system requires careful consideration of all elements of
the system and precise identification of the goals to be accomplished, which should be kept as simple
as possible.
Vidicon Camera
Early vision systems employed vidicon cameras, which were bulky vacuum-tube devices. They are
almost extinct today but are explained here for the sake of completeness in the development of video
cameras. Vidicons are more sensitive to electromagnetic noise interference and require higher power.
Their chief advantages are higher resolution and better light sensitivity. Figure 4.17 shows the
schematic diagram of a vidicon camera. The mosaic reacts to the varying intensity of light by varying
its resistance. As the electron gun generates and sends a continuous cathode beam to the mosaic,
passing through two pairs of orthogonal capacitors (deflectors), the beam is deflected up or down, and
left or right, based on the charge on each pair of capacitors. As the beam scans the image, at each
instant the output is proportional to the resistance of the mosaic, i.e., to the light intensity on the
mosaic. By reading the output voltage continuously, an analog representation of the image can be
obtained. Note that the analog signal of a vidicon needs to be converted to a digital signal using an
analog-to-digital converter (ADC) in order to process the image further using a PC. The ADC, which
performs the digitization of the analog signal, involves mainly three steps, i.e., sampling, quantization,
and encoding.
In sampling, a given analog signal is sampled periodically to obtain a series of discrete-time values,
as illustrated in Fig. 4.18. By setting a specified sampling rate, the analog signal can be
approximated by the sampled digital outputs. However, while reconstructing the original signal from
the sample data, one may end up with a completely different signal. This loss of information is called
aliasing, and it can be a serious problem. In order to prevent aliasing, according to the sampling
theorem, the sampling rate must be at least twice the largest frequency in the original video signal if
one wishes to reconstruct that signal exactly.
In quantization, each sampled discrete-time voltage level is assigned to one of a finite number of
defined amplitude levels. These levels correspond to the gray scale used in the system. The predefined
amplitude levels are characteristic of a particular ADC and consist of a set of discrete values of
voltage levels. The number of quantization levels is 2^n, where n is the number of bits of
the ADC. For example, a 1-bit ADC will quantize only to two values, whereas with an 8-bit ADC it
is possible to quantize to 2^8 = 256 different values. Note that a larger number of bits enables a signal
to be represented more precisely. Moreover, the sampling and quantization resolutions are completely
independent of each other.
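The three steps can be sketched in a few lines of Python. In this minimal illustration a simple sine wave stands in for the analog video signal, and the function and parameter names are only illustrative:

    import numpy as np

    def sample_and_quantize(signal_func, duration, fs, n_bits):
        """Sample an analog signal at rate fs and quantize to 2**n_bits amplitude levels."""
        t = np.arange(0, duration, 1.0 / fs)            # sampling instants
        samples = signal_func(t)                         # discrete-time analog values
        levels = 2 ** n_bits                             # number of quantization levels
        lo, hi = samples.min(), samples.max()
        # map each sample to the nearest of the 'levels' amplitude values (quantization),
        # the integer code word for each level is the encoded output
        codes = np.round((samples - lo) / (hi - lo) * (levels - 1)).astype(int)
        quantized = lo + codes * (hi - lo) / (levels - 1)
        return t, quantized, codes

    # A 50 Hz signal must be sampled above 100 Hz (sampling theorem); 500 Hz gives a wide margin.
    t, q, codes = sample_and_quantize(lambda t: np.sin(2 * np.pi * 50 * t),
                                      duration=0.1, fs=500, n_bits=8)

With n_bits = 8 the sketch uses 256 quantization levels, matching the example above.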
1. Image Acquisition
In image acquisition, an image is acquired either from a vidicon, whose output is then digitized, or
from a digital camera (CCD or CID), as explained in the previous section. The image is stored in
computer memory (also called a frame buffer) in a format such as TIFF, JPG, bitmap, etc. The buffer
may be a part of the frame-grabber card or in the computer itself. Note that image acquisition is
primarily a hardware function; however, software can be used to control light intensity, focus, camera
angle, synchronization, field of view, read times, and other functions. Image acquisition has four principal
elements, namely, a light source, either controlled or ambient, which is explained in Section 4.4.1, a
lens that focuses reflected light from the object on to the image sensor, an image sensor that converts
the light image into a stored electrical image, and the electronics to read the sensed image from the
image-sensing element and, after processing, transmit the image information to a computer
for further processing.
Types:
1. Image Acquisition using a Single Sensor
2. Image Acquisition using Sensor Strips
3. Image Acquisition using Sensor Arrays
This type of arrangement is found in digital cameras. A typical sensor for these cameras is a CCD
array, which can be manufactured with a broad range of sensing properties and can be packaged in
rugged arrays of 4000 × 4000 elements or more. CCD sensors are used widely in digital cameras and
other light sensing instruments. The response of each sensor is proportional to the integral of the light
energy projected onto the surface of the sensor, a property that is used in astronomical and other
applications requiring low noise images. The first function performed by the imaging system is to
collect the incoming energy and focus it onto an image plane. If the illumination is light, the front end
of the imaging system is a lens, which projects the viewed scene onto the lens focal plane. The sensor
array, which is coincident with the focal plane, produces outputs proportional to the integral of the
light received at each sensor.
2. Image Processing
Image-processing techniques are used to enhance, improve, or otherwise alter an image and to
prepare it for image analysis. Usually, during image processing, information is not extracted from the
image; the intention is to remove faults, trivial information, or detail that is not needed, and to
improve the image. Image processing examines the digitized data to locate and recognize an
object within the image field. It is divided into several sub-processes, which are discussed below:
Image Data Reduction
Here, the objective is to reduce the volume of data. As a preliminary step in data analysis, schemes
such as digital conversion or windowing can be applied to reduce the data. While digital
conversion reduces the number of gray levels used by the vision system, windowing involves using
only a portion of the total image stored in the frame buffer for image processing and analysis. For
example, in windowing, to inspect a circuit board, a rectangular window is selected to surround the
component of interest and only pixels within that window are analyzed.
Histogram Analysis
A histogram is a representation of the total number of pixels of an image at each gray level.
Histogram information is used in a number of different processes, including thresholding. For
example, histogram information can help in determining a cut-off point when an image is to be
transformed into binary values.
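A gray-level histogram is straightforward to compute. The sketch below assumes the image is an 8-bit grayscale numpy array; the function name is only illustrative:

    import numpy as np

    def gray_histogram(image):
        """Count how many pixels of an 8-bit grayscale image fall at each gray level."""
        hist = np.zeros(256, dtype=int)
        for value in image.ravel():
            hist[value] += 1
        return hist

    # A deep valley between two peaks of 'hist' suggests a suitable cut-off point
    # for separating object pixels from background pixels.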
Thresholding
It is the process of dividing an image into different portions or levels by picking a certain grayness
level as a threshold, comparing each pixel value with the threshold, and then assigning the pixel to
one of the portions or levels, depending on whether the pixel’s grayness level is below the threshold
(‘off’ or 0, not belonging) or above the threshold (‘on’ or 1, belonging).
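A minimal thresholding sketch in the same spirit, assuming an 8-bit grayscale numpy array (the threshold value used here is only an example):

    import numpy as np

    def threshold_image(image, threshold=128):
        """Assign 1 ('on', belonging) to pixels above the threshold and 0 ('off') otherwise."""
        return (image > threshold).astype(np.uint8)

    gray = np.array([[ 20,  20, 200],
                     [ 20, 180, 200],
                     [ 10,  20,  30]], dtype=np.uint8)
    binary = threshold_image(gray, threshold=100)   # 1 where the pixel value exceeds 100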
Masking
A mask may be used for many different purposes, e.g., filtering operations, noise reduction, and
others. It is possible to create masks that behave like a low-pass filter, such that the higher frequencies
of an image are attenuated while the lower frequencies are not changed very much, thereby reducing
the noise. This is illustrated in Example 4.8, which considers a portion of an image, shown in
Fig. 4.23(a), having all pixels at a gray value of 20 except one at a gray level of 100. Note that this
reduction of noise is achieved using what is referred to as neighborhood averaging, which also
reduces the sharpness of the image.
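The following sketch reproduces the neighborhood-averaging idea on an image like that of Fig. 4.23(a); the 5 × 5 test image and the function name are only illustrative:

    import numpy as np

    def average_mask(image):
        """Apply a 3x3 averaging mask: each interior pixel becomes the mean of its 3x3 neighbourhood."""
        out = image.astype(float).copy()
        for i in range(1, image.shape[0] - 1):
            for j in range(1, image.shape[1] - 1):
                out[i, j] = image[i-1:i+2, j-1:j+2].mean()
        return out

    # All pixels at 20 except one noisy pixel at 100, as in Fig. 4.23(a):
    img = np.full((5, 5), 20.0)
    img[2, 2] = 100.0
    smoothed = average_mask(img)   # the spike is spread out and reduced to about 28.9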
Edge Detection
Edge detection is a general name for a class of computer programs and techniques that operate on an
image and result in a line drawing of the image. The lines represent changes in values such as cross
section of planes, intersections of planes, textures, lines, etc. In many edge-detection techniques, the
resulting edges are not continuous. However, in many applications, continuous edges are preferred,
which can be obtained using the Hough transform. It is a technique used to determine the geometric
relationship between different pixels on a line, including the slope of the line. Consider a straight line
in the xy-plane, as shown in Fig. 4.24, which is expressed as
y = mx + c (4.15)
where m is the slope and c is the intercept.
Equation (4.15) can be rewritten as c = -xm + y, so a single point (x, y) of the xy-plane maps to a
straight line in the m–c (Hough) plane, while a line in the xy-plane with a particular slope and intercept
transforms into a single point in the Hough plane. Points of the xy-plane that lie on the same straight
line therefore produce lines in the Hough plane that intersect at one point; by locating such
intersections, one can decide whether a set of edge pixels belongs to a line or not, whereupon the
orientation of an object in a plane can be determined by calculating the orientation of a particular line
in the object.
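A small sketch of the m–c Hough transform described above: each edge pixel votes for all (m, c) combinations consistent with it, and the accumulator cell with the most votes gives the dominant line. The m–c form cannot represent vertical lines (practical systems usually use the rho–theta form), and the point set and bin ranges below are only illustrative:

    import numpy as np

    def hough_mc(points, m_values, c_values):
        """Accumulate votes in the m-c (slope-intercept) Hough plane for a set of edge pixels."""
        acc = np.zeros((len(m_values), len(c_values)), dtype=int)
        for x, y in points:
            for i, m in enumerate(m_values):
                c = y - m * x                        # the line y = m*x + c passing through (x, y)
                j = np.argmin(np.abs(c_values - c))  # nearest intercept bin
                acc[i, j] += 1
        return acc

    # Edge pixels that lie on y = 2x + 1:
    pts = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9)]
    m_vals = np.linspace(-5, 5, 101)
    c_vals = np.linspace(-10, 10, 201)
    acc = hough_mc(pts, m_vals, c_vals)
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    slope, intercept = m_vals[i], c_vals[j]          # approximately 2 and 1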
Segmentation
Segmentation is a generic name for a number of different techniques that divide the image into
segments of its constituents. The purpose of segmentation is to separate the information
contained in the image into smaller entities that can be used for other purposes. Segmentation
includes edge detection, as mentioned above, region growing and splitting, and others. Region
growing groups pixels based on similar attributes, such as gray-level ranges or other similarities,
and then tries to relate the regions by their average similarities, whereas region splitting is
carried out based on thresholding, in which an image is split into closed areas of neighboring
pixels by comparing them with a threshold value or range.
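Region growing can be sketched as a simple flood fill from a seed pixel, grouping 4-connected neighbours whose gray level stays within a tolerance of the seed value; the function name and tolerance below are only illustrative:

    import numpy as np
    from collections import deque

    def region_grow(image, seed, tol=10):
        """Grow a region of 4-connected pixels whose gray level is within tol of the seed pixel."""
        h, w = image.shape
        region = np.zeros((h, w), dtype=bool)
        seed_value = float(image[seed])
        queue = deque([seed])
        region[seed] = True
        while queue:
            i, j = queue.popleft()
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if (0 <= ni < h and 0 <= nj < w and not region[ni, nj]
                        and abs(float(image[ni, nj]) - seed_value) <= tol):
                    region[ni, nj] = True
                    queue.append((ni, nj))
        return region   # boolean mask of the grown segment

    # Example: mask = region_grow(img, seed=(10, 10), tol=15) for an 8-bit grayscale array 'img'.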
3. Image Analysis
Image analysis is a collection of operations and techniques that are used to extract information
from images. Among these are feature extraction; object recognition; analysis of the position,
size, orientation; extraction of depth information, etc. Some techniques can be used for multiple
purposes. For example, moment equations may be used for object recognition, as well as to
calculate the position and orientation of an object. It is assumed that image processing has
already been performed on the image and that the result is available for image analysis. Some of
the image-analysis techniques are explained below.
Feature Extraction
Objects in an image may be recognized by features that uniquely characterize them. These
include, but are not limited to, gray-level histograms, morphological features such as perimeter,
area, diameter, number of holes, etc., eccentricity, chord length, and moments. As an example,
the perimeter of an object may be found by first applying an edge-detection routine and then
counting the number of pixels on the perimeter. Similarly, the area can be calculated by region-
growing techniques, whereas the diameter of a noncircular object is obtained as the maximum
distance between any two points on any line that crosses the identified area of the object. The
thinness of an object can also be characterized using either of two ratios based on these measures.
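A rough sketch of extracting two such features, area and perimeter, from a binary (thresholded) image; here the perimeter is approximated by counting object pixels that touch at least one background pixel, which is a simplification of the edge-detection route described above:

    import numpy as np

    def area_and_perimeter(binary):
        """Area = number of object pixels; perimeter = object pixels with a background 4-neighbour."""
        padded = np.pad(binary.astype(bool), 1, constant_values=False)
        obj = padded[1:-1, 1:-1]
        area = int(obj.sum())
        has_bg_neighbour = (~padded[:-2, 1:-1] | ~padded[2:, 1:-1] |
                            ~padded[1:-1, :-2] | ~padded[1:-1, 2:])
        perimeter = int((obj & has_bg_neighbour).sum())
        return area, perimeter

    blob = np.zeros((6, 6), dtype=np.uint8)
    blob[2:5, 2:5] = 1                       # a 3x3 square object
    print(area_and_perimeter(blob))          # (9, 8)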
Object Recognition
The next step in image analysis is to identify the object that the image represents based on the
extracted features. The recognition algorithm should be powerful enough to uniquely identify the
object. Typical techniques used in industry are template matching and structural techniques.
In template matching, the features of the object in the image, e.g., its area, diameter, etc., are
compared to the corresponding stored values. These values constitute the stored template. When
a match is found, allowing for certain statistical variations in the comparison process, the object
has been properly classified.
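A toy sketch of template matching on extracted features: a measured feature set is compared against stored templates and classified to the first template whose values all agree within an allowed tolerance. The object names, feature values, and tolerance are entirely illustrative:

    def classify_by_template(features, templates, tolerance=0.1):
        """Match a measured feature dict against stored templates (relative tolerance per feature)."""
        for name, stored in templates.items():
            if all(abs(features[k] - stored[k]) <= tolerance * abs(stored[k]) for k in stored):
                return name          # match found, allowing for statistical variation
        return None                  # no template matched

    templates = {"washer": {"area": 310.0, "diameter": 25.0},
                 "bolt_head": {"area": 520.0, "diameter": 30.0}}
    print(classify_by_template({"area": 305.0, "diameter": 24.6}, templates))  # -> 'washer'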
Structural techniques of pattern recognition rely on the relationship between features or edges of
an object. For example, if the image of an object can be subdivided into four straight lines
connected at their end points, and the connected lines are at right angles, then the object is a
rectangle. This kind of technique is known as syntactic pattern recognition.
VISUAL SERVOING
Visual servoing, also known as vision-based robot control and abbreviated VS, is a technique that
uses feedback information extracted from a vision sensor (visual feedback) to control the motion of a
robot. There are two fundamental configurations of the robot end-effector (hand) and the camera:
Eye-in-hand, or end-point closed-loop control, where the camera is attached to the moving hand
and observes the relative position of the target.
Eye-to-hand, or end-point open-loop control, where the camera is fixed in the world and
observes the target and the motion of the hand.
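In image-based visual servoing, a common control law drives the camera with velocity v = -λ · L⁺ · (s - s*), where s is the measured image feature, s* its desired value, and L⁺ the pseudo-inverse of the interaction matrix L. The sketch below uses the standard interaction matrix of a single point feature at normalized image coordinates (x, y) and depth Z; the gain and numerical values are only illustrative:

    import numpy as np

    def point_interaction_matrix(x, y, Z):
        """Interaction (image Jacobian) matrix of a point feature at normalized coords (x, y), depth Z."""
        return np.array([[-1/Z,    0, x/Z,      x*y, -(1 + x*x),  y],
                         [   0, -1/Z, y/Z, 1 + y*y,       -x*y, -x]])

    def ibvs_velocity(s, s_star, Z, gain=0.5):
        """Camera velocity (vx, vy, vz, wx, wy, wz) that drives the feature s towards s_star."""
        L = point_interaction_matrix(s[0], s[1], Z)
        error = np.asarray(s) - np.asarray(s_star)
        return -gain * np.linalg.pinv(L) @ error

    v = ibvs_velocity(s=(0.10, -0.05), s_star=(0.0, 0.0), Z=1.0)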
Applications:
Some of the important applications of the machine vision system in the robots are:
Inspection
Orientation
Part Identification
Location
Research on further improvements is ongoing to provide highly developed machine vision systems
for more complicated applications.
Image data reduction:
In image data reduction, the objective is to reduce the volume of data. As a preliminary step in the
data analysis, the following two schemes have been found common for data reduction.
1.Digital conversion
2.Windowing.
1. Digital conversion
Digital conversion reduces the number of gray levels used by the machine vision system.
Example: an 8-bit register used for each pixel could have 2^8 = 256 gray levels. Depending on
the requirements of the application, digital conversion can be used to reduce the number of gray
levels by using fewer bits to represent the pixel light intensity. Four bits would reduce the
number of gray levels to 16. This kind of conversion reduces the magnitude of the image-
processing problem.
2. Windowing
Windowing involves using only a portion of the total image stored in the frame buffer for image
processing and analysis; this portion is called the window. Example: for inspection of a printed
circuit board, one may wish to inspect and analyze only one component on the board. A
rectangular window is selected to surround the component of interest and only the pixels within
that window are analyzed.
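Both data-reduction schemes are easy to express in code. The sketch below reduces 256 gray levels to 16 by keeping only the top four bits of each pixel, and windows the frame by slicing out the rectangle around the component of interest; the frame size and window coordinates are only illustrative:

    import numpy as np

    def reduce_gray_levels(image, bits=4):
        """Keep only the top 'bits' bits of each 8-bit pixel: 256 gray levels -> 2**bits levels."""
        return (image >> (8 - bits)) << (8 - bits)

    def window(image, top, left, height, width):
        """Return only the rectangular window of interest from the stored frame."""
        return image[top:top + height, left:left + width]

    frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)   # stand-in for a frame buffer
    roi = window(frame, top=100, left=200, height=64, width=64)          # analyze only this component
    roi16 = reduce_gray_levels(roi, bits=4)                              # 16 gray levels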
1. Low-level Vision
The sequence of steps from image formation to image acquisition, etc., described above, along
with the extraction of certain physical properties of the visible environment, such as depth, three-
dimensional shape, object boundaries, or surface-material properties, can be classified as a
process of low-level vision. The activity in low-level vision is to process images for feature
extraction (edges, corners, or optical flow). Operations are carried out on the pixels in the image to
extract the above properties with respect to intensity or depth at each point in the image. One
may, for example, be interested in extracting uniform regions, where the gradient of the pixels
remains constant, or first-order changes in gradient, which would correspond to straight lines, or
second-order changes which could be used to extract surface properties such as peaks, pits,
ridges, etc. A number of characteristics that are typically associated with low-level vision
processes are as follows:
They are spatially uniform and parallel, i.e., with allowance for the decrease in resolution from
the center of the visual field outwards, a similar process is applied simultaneously across the visual
field. For example, the processing involved in edge detection, motion, or stereo vision often proceeds
in parallel across the visual field, or a large part of it.
Low-level visual processes are also considered ‘bottom-up’ in nature. This means that they are
determined by the data, i.e., data driven, and are relatively independent of the task at hand or
knowledge associated with specific objects. As far as the edge detection is concerned, it will be
performed in the same manner for images of different objects, with no regard to whether the task has
to do with moving around, looking for a misplaced object, or enjoying the landscape.
2. Intermediate-level Vision
In this level, objects are recognized and 3D scenes are interpreted using the features obtained
from the low-level vision. The intermediate-level processing is fundamentally concerned with
grouping entities together. The simplest case is when one groups pixels into lines. One can then
express the line in a functional form. Similarly, if the output of the low-level information is a
depth map, one may further need to distinguish object boundaries, or other characteristics. Even
in the simple case where one is trying to extract a single sphere, it is not an easy process to go
from a surface-depth representation to a center-and-radius representation. In contrast to higher-
level vision, the process here does not depend on knowledge about specific objects.
3. High-level Vision
High-level vision, which is equivalent to image understanding, is concerned mainly with the
interpretation of a scene in terms of the objects in it, and is usually based on knowledge of specific
objects and relationships. It is concerned primarily with the interpretation and use of information
in the image rather than the direct recovery of physical properties. In high-level vision,
interpretation of a scene goes beyond the tasks of line extraction and grouping. It further requires
decisions to be made about types of boundaries, such as which are occluding, and what
information is hidden from the user. Further grouping is essential at this stage since one may still
need to be able to decide which lines group together to form an object. To do this, it is necessary
to further distinguish lines which are part of the object structure, from those which are part of a
surface texture, or caused by shadows. High-level systems are, therefore, object oriented, and
sometimes called ‘top-down’. High-level visual processes are applied to a selected portion of the
image, rather than uniformly across the entire image, as done in low- and intermediate-level
vision. They almost always require some form of knowledge about the objects of the scene to be
included.
A CMOS sensor is a digital device. CMOS stands for ‘complementary metal-oxide semiconductor.’ A
CMOS sensor converts the charge from a photosensitive pixel to a voltage at the pixel site. The signal
is then multiplexed by row and column to multiple on-chip analog-to-digital converters. CMOS
sensors offer high speed but traditionally lower sensitivity and higher fixed-pattern noise.
A CCD sensor is a “charge-coupled device.” Just like a CMOS sensor, it converts light into electrons.
Unlike a CMOS sensor, however, it is an analog device: a silicon chip that contains an array of
photosensitive sites. The voltage is read out from each site to reconstruct an image, and this analog
output is then converted to a digital signal by an external analog-to-digital converter.
For a long time, the CCD sensor was the prevalent technology for capturing high-quality, low-noise
images. But CCD sensors are expensive to manufacture, so they often come with a higher price tag.
They also consume more power than CMOS sensors, sometimes a hundred times more. Luckily,
CMOS sensor technology has advanced to the point where it is fast approaching the quality and
capabilities of CCD technology, and with a significantly lower price tag, smaller size, and lower power
consumption.
In a CMOS image sensor, the charge from the photosensitive pixel is converted to a voltage at the
pixel location, and the signal is multiplexed by row and column to on-chip analog-to-digital
converters.
A CMOS sensor is a digital device in which every site includes a photodiode and three transistors
that perform different tasks, such as activating and resetting the pixel, amplification and charge
conversion, and multiplexing or selection.
The multilayer fabrication procedure of a CMOS sensor does not allow the use of microlenses over
the chip, thus reducing the effective collection efficiency. This lower efficiency, combined with
pixel-to-pixel variation, gives a lower signal-to-noise ratio and lower overall image quality compared
to CCD sensors.
The image sensor within a camera system receives incident light that is focused through a lens.
Depending on the type of image sensor (CMOS or CCD), the camera system transmits data to the
next stage as either a digital signal or a voltage.
CMOS sensors convert photons into electrons, then into a voltage, and finally into a digital value
through an on-chip ADC (analog-to-digital converter).
Depending on the manufacturer of the digital camera, the components used in the camera system and
its design will vary. The main function of this design is to convert light into a digital signal that can
then be examined to trigger further enhancement or user-defined actions.
In a CCD device, the charge is actually transported across the chip and read at one corner of the array.
An analog-to-digital converter turns each pixel's value into a digital value. In most CMOS devices,
there are several transistors at each pixel that amplify and move the charge using more traditional
wires. The CMOS approach is more flexible because each pixel can be read individually.
CCDs use a special manufacturing process to create the ability to transport charge across the chip
without distortion. This process leads to very high-quality sensors in terms of fidelity and light
sensitivity. CMOS chips, on the other hand, use traditional manufacturing processes to create the chip
-- the same processes used to make most microprocessors. Because of the manufacturing differences,
there have been some noticeable differences between CCD and CMOS sensors.
Disadvantages
These sensors are more susceptible to noise, and images are sometimes grainy.
These sensors require more light for an enhanced image.
In this sensor, every pixel performs its own conversion.
The homogeneity and quality of the image are lower.
Monochrome
Monochrome cameras are capable of capturing images with greater detail and sensitivity than color
cameras. Unlike color cameras, monochrome cameras capture all the incoming light at each pixel,
irrespective of color. Since red, green, and blue are all absorbed simultaneously, each pixel can
receive up to three times the amount of light.
In comparison with a color camera, a monochrome camera has the following advantages:
1. Monochrome cameras perform better in low lighting conditions
2. Monochrome sensors have intrinsically higher frame rates
3. Monochrome algorithms are well-tuned
Stereo vision
Stereo vision is the process of recovering depth from camera images by comparing two or more views
of the same scene. Simple binocular stereo uses only two images, typically taken with parallel
cameras separated by a horizontal distance known as the "baseline." The output of the stereo
computation is a disparity map (which is translatable to a range image) that tells how far each point in
the physical scene is from the camera. Stereo vision thus recovers the 3D structure of a scene from
two or more images of it, each acquired from a different viewpoint in space. The images can be
obtained using multiple cameras or one moving camera; the term binocular vision is used when two
cameras are employed.
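For parallel cameras, depth follows from the disparity through Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity in pixels. A minimal sketch of converting a disparity map to a range image (the numbers are only illustrative):

    import numpy as np

    def disparity_to_depth(disparity, focal_px, baseline_m):
        """Convert a disparity map (pixels) to a depth map (metres): Z = f * B / d."""
        depth = np.full_like(disparity, np.inf, dtype=float)
        valid = disparity > 0
        depth[valid] = focal_px * baseline_m / disparity[valid]
        return depth

    # A point with 40 px disparity, f = 800 px and a 0.1 m baseline lies 2 m from the cameras.
    print(disparity_to_depth(np.array([[40.0]]), focal_px=800, baseline_m=0.1))   # [[2.]]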
Night vision (image intensification)
In an image-intensification night vision camera, light entering the camera strikes a photocathode,
which releases electrons that are accelerated towards a microchannel plate. These electrons enter the
holes of the plate and bounce off specially coated internal walls. This activity generates more
electrons and creates a denser cloud of electrons representing an intensified version of the original
image. This cloud strikes a phosphor screen, making it glow, and an image is generated on the screen
of the attached device.
Thermal imaging
Thermal imaging cameras detect objects by their infrared radiation and create an image based on that
information. The emitted infrared light is focused by a special lens and is scanned by a phased array
of infrared-detector elements. The detector creates a detailed temperature pattern called a thermogram,
which is converted into electric impulses. A signal-processing unit translates the impulses from the
elements into data for display. The combination of these impulses creates an image on the screen.
Infrared illumination
Infrared illumination cameras detect the invisible wavelengths of light and use infrared light to
illuminate images in dim lighting conditions. IR cameras have a series of infrared LEDs that transmit
infrared light in the dark. As infrared light can interfere with colour images, IR cameras have a
cut-off filter to block it during the daytime. The filter is fitted between the lens and sensor, allowing
only the visible light to pass through in the daytime. Once the light level in the area drops to a certain
point, the cut-off filter changes its mode and allows infrared light to pass through it.
STILL CAMERA
a) Fixed Lens Cameras
Lens is not interchangeable (removable)
Variable zoom, controlled by servo controller
User friendly, less manual features
Less expensive
b) Single Lens Reflex (SLR) / Digital SLR
Lens is interchangeable (removable)
Professional quality, more manual features
More expensive
Lenses often sold separately
VIDEO CAMERA
Used for field production (outside studio)
Portable and durable
Records to tape, card, or built-in hard drive
Records audio
Built-in microphone
High quality audio inputs
KINECT SENSOR
1. The camera
The camera detects the red, green, and blue color components as well as body-type and facial features.
It has a pixel resolution of 640x480 and a frame rate of 30 fps. This helps in facial recognition and
body recognition.
3. The microphone
The microphone is actually an array of four microphones that can isolate the voices of the players
from other background noise, allowing players to use their voices as an added control feature.
SENSOR SELECTION
1. Range
Range or span is the difference between the minimum and maximum values of the input (or output)
over which the sensor maintains the required level of output accuracy. For example, a strain gauge
might be able to measure values over the range from 0.1 to 10 newtons.
2. Sensitivity
Sensitivity is defined as the ratio of the change in output to the change in input. As an example, if a
movement of 0.025 mm in a linear potentiometer causes an output voltage change of 0.02 volt, then
the sensitivity is 0.8 volts per mm. The term is sometimes also used to indicate the smallest change in
input that will be observable as a change in output. Usually, the maximum sensitivity that provides a
linear and accurate signal is desired.
3. Linearity
Perfect linearity would allow output versus input to be plotted as a straight line on a graph paper.
Linearity is a measure of the constancy of the ratio of output to input. In the form of an equation,
it is
y = mx (4.27)
where x is input and y is output, and m is a constant. If m is a variable, the relationship is not
linear. For example, m may be a function of x, such as m = a + bx where the value of b would
introduce a nonlinearity. A measure of the nonlinearity could be given as the value of b.
4. Response Time
Response time is the time required for a sensor to respond completely to a change in input. The
response time of a system with sensors is the combination of the responses of all its individual
components, including the sensor. An important aspect in selecting an appropriate sensor is to
match its time response to that of the complete system.
5. Bandwidth
Bandwidth determines the maximum speed or frequency at which an instrument, associated with a
sensor or otherwise, is capable of operating. High bandwidth implies a faster speed of response. The
instrument bandwidth should be several times greater than the maximum frequency of interest in the
input signals.
6. Accuracy
Accuracy is a measure of the difference between the measured and actual values. An accuracy
of ±0.025 mm means that under all circumstances considered, the measured value will be within
0.025 mm of the actual value. In positioning a robot and its end-effector, verification of this level
of accuracy would require careful measurement of the position of the end-effector with respect to
the base reference location with an overall accuracy of 0.025 mm under all conditions of
temperature, acceleration, velocity, and loading. Accuracy describes ‘closeness to true values.’
7. Repeatability and Precision
Precision, on the other hand, means the ‘closeness of agreement’ between independent measurements
of a quantity under the
same conditions without any reference to the true value, as done above. Note that the number of
divisions on the scale of the measuring device generally affects the consistency of repeated
measurement and, therefore, the precision. In a way, precision describes ‘repeatability.’ Figure
4.36 illustrates the difference between accuracy and precision.
9. Hysteresis
It is defined as the change in the input/output curve when the direction of motion changes, as
indicated in Fig. 4.37. This behavior is common in loose components such as gears, which have
backlash, and in magnetic devices with ferromagnetic media, and others.
13. Environmental Conditions
Environmental factors such as temperature, humidity, electromagnetic fields, radioactive
environments, shock and vibration, etc., should be taken into
account while selecting a sensor or considering how to shield them.
14. Interfacing
Interfacing of sensors with signal-conditioning devices and the controller of the robot is often a
determining factor in the usefulness of sensors. Nonstandard plugs or requirements for
nonstandard voltages and currents may make a sensor too complex and expensive to use. Also,
the signals from a sensor must be compatible with other equipment being used if the system is to
work properly.
15. Others
Other aspects like initial cost, maintenance cost, cost of disposal and replacement, reputation of
manufacturers, operational simplicity, ease of availability of the sensors and their spares should
be taken into account. On many occasions, these nontechnical considerations become the ultimate
deciding factor in the selection of sensors for an application.