CCD vs CMOS Sensors Explained

CCD and CMOS sensors both accumulate and convert light into electronic charges at pixels. However, CCD sensors move charges between pixels along shift registers before conversion to voltage, while CMOS sensors convert charges to voltage directly at each pixel for parallel readout. CMOS sensors are advantageous as they allow for faster frame rates and integration into systems-on-a-chip due to leveraging of CMOS fabrication processes. Binning pixels improves low-light performance but reduces resolution, and is performed more efficiently on-chip for CCD sensors than mathematically after digitization for CMOS sensors.


A comparison of some of the key technologies and characteristics in CCD and CMOS sensors

Andrew Kirby, Technical Specialist, Atik Cameras

Historical background
The first Charge-Coupled Devices (CCDs) were conceived by Boyle and Smith [1] in the
late 1960s whilst working at Bell Laboratories. Their initial interest was to develop a method
of transferring a charge along a row of Metal Oxide Semiconductor (MOS) capacitive
devices, conceived as a semiconductor analogue of the magnetic ‘bubble memory’ then
being developed at Bell. In essence the result is a form of the analogue ‘Bucket Brigade
Delay Line’ (BBD). Imagine that a charge is introduced at one end of a row of capacitors. It
can be stimulated by a clock pulse to gradually move along the row, eventually appearing
at the other end. Simply put, the charge goes in at one end, and out at the other, at a speed
that can easily be regulated – it is a form of controllable electronic delay.
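The stepwise behaviour described above is easy to visualise with a toy simulation (illustrative Python, not a circuit model): each clock pulse shifts every charge one stage further along the line.

```python
# Minimal bucket-brigade delay line: each clock pulse moves every
# charge one capacitor (stage) further along the row.
def clock(line):
    """Shift all charges one stage to the right; an empty stage enters at the input."""
    return [0] + line[:-1]

line = [7, 0, 0, 0]        # a charge of 7 units injected at the input end
for _ in range(3):
    line = clock(line)
# after 3 clock pulses the charge has reached the output end: [0, 0, 0, 7]
```

The delay is simply the number of stages divided by the clock rate, which is why the BBD acts as a controllable electronic delay.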

Left: a simple schematic of an electronic Bucket Brigade Delay Line formed from a capacitive network – well known in analogue audio designs.

Below: A real bucket brigade in action. A line of firemen passing water down a line from bucket to bucket.

A close derivative of this idea, known as a ‘Shift Register’, would later be implemented
in CCD image sensors. This highly significant step created a mechanism for conveniently
moving the accumulated charges to different parts of the image sensor.

The Architecture of the CCD Sensor


This article will primarily focus on discussing the behaviour of ‘interline’ CCD sensors;
however, please be aware that other related technologies exist, most notably ‘full frame’
and ‘frame transfer’.

In a CCD sensor, each pixel (or its photosensitive area) accumulates and converts incoming
light into electrons – i.e. electronic charges. Referring to the diagram below, immediately
adjacent to each pixel is a transfer gate which allows controlled access to the vertical shift
register. This is a vertical column that behaves like the Bucket Brigade Line discussed above.
In operation the following sequence of events typically occurs:

- The exposure is started, and light arrives at each image pixel in the form of photons
- These photons are converted into electrons via the photoelectric effect
- At the end of the image exposure phase, the adjacent transfer gate opens
- The accumulated electrons remain as a group (i.e. a charge), then leave the image pixel and cross into the vertical shift register
- A clock signal encourages these charges to make their way, step by step, down this vertical shift register
- Eventually they reach the bottom and then cross into the horizontal shift register

Information in the form of charges, from across the entire two-dimensional array of pixels,
will also eventually end up in this horizontal register via the many other vertical shift registers.
It is therefore essential that the clock signals responsible for moving these charges within
the various registers remain in synchronisation. Clearly, if synchronisation is lost, it will not
be possible to reconstruct the final image without corruption. As the charges
emerge from the horizontal register, they are converted into a voltage and buffered. Note
that the horizontal register is a serial data system, as the charges have to wait in order to
travel down a single ‘pipe’. At this point, signal voltage leaves the sensor in order to be
converted into a digital signal ‘off-chip’. It is interesting to note that until this point, the
entire process has been purely analogue in nature.
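The readout sequence above can be sketched as a toy simulation (illustrative Python; the register structure and the exact charge ordering are simplifications, not vendor behaviour):

```python
# Sketch of interline-CCD readout: pixel charges cross into vertical
# shift registers, step down row by row, then leave serially through
# the horizontal register and a single output node.
def ccd_readout(charges):
    """charges: 2-D list [row][col] of accumulated electrons per pixel."""
    rows, cols = len(charges), len(charges[0])
    # Transfer gates open: each column of pixels enters its vertical register
    vregs = [[charges[r][c] for r in range(rows)] for c in range(cols)]
    out = []
    for _ in range(rows):
        # One vertical clock: the bottom charge of every column drops
        # into the horizontal register
        hreg = [vregs[c].pop() for c in range(cols)]
        # Horizontal clocks: charges queue up and exit one at a time --
        # this single 'pipe' is what makes CCD readout serial
        while hreg:
            out.append(hreg.pop())
    return out
```

Every charge passes through the same output node in turn, which is why a loss of clock synchronisation scrambles the reconstructed image.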

The CMOS sensor


The initial development of CMOS imaging sensors dates back to the 1970s, although they
didn’t evolve into devices of general commercial interest until around twenty years later.
Part of the attraction of pursuing the CMOS route was that it was difficult to further
reduce the production costs of existing CCD devices. Additionally, their relatively slow
frame rate precluded their use in all manner of upcoming domestic devices. The future was
perceived to lie with something faster and more cost effective.

As described previously, the charges formed on a CCD sensor are not converted into
a voltage until relatively late in the process. In contrast, in a CMOS sensor the charge is
converted into a voltage at the pixel location. This effectively allows the information to be
read out directly from each pixel, simply by specifying its coordinate location. The readout
process can also be parallelised, which results in the ability to download the image from the
sensor in a considerably shorter time, potentially offering much higher frames per second.

In summary, a key advantage is that charges no longer have to be pumped around over
relatively long distances before being converted to a voltage.

Gradual improvements at the small scale


The CMOS sensor fabrication process is similar to that used to manufacture a whole range
of other CMOS integrated circuits. This is hugely beneficial in terms of production cost and
economy of scale. Additionally, many of the important functions that would previously
need to be created on an external circuit board can now be formed on the same ‘die’
as the sensor itself. Modern CMOS designs have a surprisingly large amount of processing
power integrated onto the sensor die, effectively creating a form of ‘system on a chip’
(SOC), where the overall device is functionally accessible as a miniature computer.
Numerous features, such as signal processing, can be enabled via software as and
when necessary.

The development of CMOS image sensors benefits from the technological race to follow
‘Moore’s Law’, a metric long used to predict the evolution of CPU performance, which is
associated with the gradual reduction in the size of lithographic features. In order to follow
Moore’s Law, the miniaturisation of transistors must continue over time. As transistors
become ever smaller, their power consumption reduces, and consequently more transistors
can be squeezed onto the same CPU die, boosting performance whilst maintaining the
same power envelope. Therefore, by adopting some of these more recent lithographic
techniques, CMOS sensors can utilise ever more power efficient semiconductors.

Binning
Binning is a process of adding the contributions from neighbouring pixels together in order
to form ‘super pixels’. For example, 2x2 binning adds 4 pixels together to form one pixel with
four times the area, and hence roughly four times the light-gathering ability. Consequently,
these larger pixels collect more light, and offer a brighter image over the same exposure
duration – an extremely useful option in low-light situations. However, this is clearly at the
expense of the resolution of the camera. For example, if an imaginary 10MP sensor is binned
at 2x2, then the apparent number of pixels is reduced from 10 million to 2.5 million.

Binning is one function where CCD and CMOS sensors perform the operation very
differently. With a CCD sensor, the binning process can be conveniently performed
‘on-chip’, as the charges can be added together as they arrive in the shift registers. This is
an important feature, as when the charges are eventually read out (digitised), some noise
will inevitably be added. The same level of read noise occurs for a group of binned pixels as
for an individual pixel, which results in a highly efficient process.

For example, consider a hypothetical camera with a read noise of 3 electrons (3e-) and a
signal of 3e-. This equates to a SNR of 1 (i.e. unity). As the information signal is at the same
level as the noise, it will be impossible to reliably distinguish between the two. In fact, the
generally accepted criterion is that the information signal ought to be at least 2.7 times
larger than the noise i.e. the SNR ≥ 2.7

If 2x2 binning is employed, the combined signal from all 4 individual pixels adds up to
4 x 3e- = 12e-. However, the read process is only applied once to the group of 4 pixels, so it
only incurs 3e- worth of read noise.

Therefore, the SNR is now 12e-/3e- = 4 – an improvement by a factor of 4.

As discussed earlier, with a CMOS sensor the charges are converted to a voltage
locally. There are no shift registers with which to conveniently sum the charges. Instead
of combining them in the analogue part of the sensor, the signal must now be summed
mathematically after it has been digitised. Unfortunately, this leads to an increase in read
noise when compared to the CCD.

Once again, on applying 2x2 binning, we have a signal of 4 x 3e- = 12e-.

However, each of the 4 pixels is read out individually, and the independent noise
contributions add in quadrature, so the combined read noise is proportional to √N, where N
is the number of pixels.

So, in this case with a group of 4 pixels, the read noise is now 3e- x √4 = 6e-.

Thus, the SNR is simply 12/6 = 2.


Whilst this is an improvement by a factor of 2 compared to no binning, it is not as
impressive as what can be achieved with the CCD sensor. Additionally, as the level of
binning increases to 3×3 or 4×4 etc., the SNR difference between the two types of sensor
continues to increase in favour of the CCD.
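The two worked examples can be condensed into a small calculation (illustrative Python; the 3e- figures are the hypothetical values used above):

```python
import math

def snr_binned(signal_e, read_noise_e, n, on_chip):
    """SNR for an n-pixel binned group.
    on_chip=True  -> CCD style: charges are summed in the registers,
                     so the group incurs only one read's worth of noise.
    on_chip=False -> CMOS style: every pixel is read out first, so the
                     independent noise contributions add as sqrt(n)."""
    total_signal = n * signal_e
    noise = read_noise_e if on_chip else read_noise_e * math.sqrt(n)
    return total_signal / noise

snr_binned(3, 3, 4, on_chip=True)    # CCD 2x2:  12e- / 3e- = 4.0
snr_binned(3, 3, 4, on_chip=False)   # CMOS 2x2: 12e- / 6e- = 2.0
```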

Binning is also sometimes used as a method of temporarily decreasing the image download
time of CCD cameras. This can be useful when initially trying to focus on the subject.

Fill factor
You might assume that the imaging pixels in a sensor are formed almost edge to
edge, so that the entire surface of the sensor is photosensitive. Unfortunately, this is almost
impossible to achieve because of the presence of other necessary circuitry, such as the
voltage conversion units that form part of each CMOS pixel. The result is that, unlike a
CCD sensor, only a limited portion of each CMOS pixel is actually photosensitive. The ratio
of the useable photosensitive area to the total pixel area is known as the ‘fill factor’. The
higher the fill factor, the more sensitive the sensor is to light, because its pixels are more
efficient. Optimising the fill factor is critical in CMOS sensors, as they have a considerable
amount of additional circuitry. Typical values vary between 30% and 90%. Sensors that
feature a large number of small pixels are particularly badly affected. As we will see below,
these values can be improved by utilising micro lenses, situated above each pixel, to
redirect some of the light that might otherwise fall outside the photosensitive area. The
other available tool is the ever-smaller lithography used to fabricate the devices – another
consequence of Moore’s Law. The impact of the supplementary components is reduced
simply by making them smaller.

Full well capacity or full well depth


This is a measurement of how many electrons can be accommodated in a pixel’s
photosensitive area before saturation occurs. The ability to accumulate larger charges will
improve the SNR, and is therefore beneficial when striving to produce low noise images.
The fill factor will clearly have an impact on the maximum well capacity, as it is easier for
a large unobstructed photosensitive site to accumulate a greater charge. Early CMOS
sensors could only offer relatively small well depths, although subsequent designs have
gradually improved.

Dynamic range
A camera’s dynamic range provides an indication of the number of grey levels that
are available to reproduce the intensity variation within an image. It is usually quoted in
decibels (dB). It is possible to estimate the dynamic range from the ratio of the full well
capacity and the read noise

20 log10 Full well capacity


Read noise

The reason that this should really be considered as an estimate is that the camera’s
response may (in some regions) become non-linear; as the well becomes almost full for
example. Therefore, depending on the application, not all of the dynamic range of the
camera may be useable. Additionally, there are a variety of ways to calculate the read
noise. In summary, the quoted dynamic range can provide a useful way of comparing
different sensors, but not necessarily cameras.
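As a quick sketch of the estimate (illustrative Python; the 20,000 e- full well and 5 e- read noise are made-up example values, not a particular sensor):

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Estimated dynamic range in decibels from full well capacity and read noise."""
    return 20 * math.log10(full_well_e / read_noise_e)

dynamic_range_db(20_000, 5)   # approximately 72 dB
```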

Frontside and backside illumination
This section discusses the layer structure used to fabricate CMOS sensors. It also considers
the design evolution necessary to improve their performance in an effort to match the
excellent sensitivity of CCDs.

Left: The addition of micro lenses can improve the efficiency of the sensor.

The most basic assembly (left) starts with a supporting silicon substrate on which the
all-important photosensitive area is formed. The components (transistors, capacitors and
connectors etc) responsible for converting the accumulated charge into a voltage are
then added in a supplementary layer. Unfortunately, these important components can
obscure or scatter incoming light, so that a portion of it never reaches the photosensitive
layer. Additionally, the thickness of this layer effectively collimates the incoming light into
almost parallel rays. Light arriving even slightly off axis is not detected, so the angular
sensitivity of the detector is compromised. Consequently, this layer has a large influence on
the amount of light that is captured by the photosensitive layer, and therefore the overall
efficiency of the device. This effect is even more pronounced with sensors that have a large
number of small pixels. In order to mitigate this effect, a layer of moulded plastic micro
lenses (right) can be fabricated onto the upper layer, which can significantly improve the
sensitivity of the sensor.

In order to alleviate the problem caused by components in the supplemental layer


obscuring the photosensitive area, a different approach is required. Consequently, more
recent CMOS designs employ a modified layer structure in order to produce what is
known as a ‘Backside Illuminated’ sensor. In
this case, the rear of the supporting substrate
is chemically etched away so that the
photosensitive area can be accessed directly.
Again, an optional layer of micro lenses can
be added in order to improve performance.
When compared to the original basic assembly above, the core of the device now appears
to be flipped over, as the supplementary components no longer obscure the
photosensitive areas.

Above: Modifying the structure to create a Backside Illuminated sensor alleviates the
problem of supplementary components obscuring the photosensitive areas.

This reorganised layer structure, coupled with the addition of micro lenses, can dramatically
increase the efficiency of the device.

Shutter types and how they affect the final image

CCD cameras

Historically, the interline CCD sensors used in our Atik and QSI cameras have used either an
electronic or mechanical shutter. For example, our Atik 460ex model uses a Sony ICX694 /
695 sensor where the shutter is purely electronic. The exposure duration is controlled via
software, and every pixel can start or stop the exposure simultaneously.

Right: A sensor featuring an electronic shutter simplifies the camera design and offers the
ability to take very short exposures.

By contrast, our QSI 683wsg camera uses the large format KAF-8300 CCD ‘full frame’ sensor
originally designed by Kodak. This camera uses a mechanical shutter in the form of a
rotatable disk with a rectangular aperture. This sensor must be protected from incident
light during the readout phase, otherwise the image will become corrupted. Mechanical
shutters clearly have their limitations due to the speed at which they can react.
Consequently, extremely short exposures may display a slight shadowing in the image due
to part of the shutter still appearing in the FOV. However, for the vast majority of
applications they work extremely well.

Above: Some CCD sensors must be protected from light during readout. Here a mechanical
shutter is used, which makes the design more complicated and compromises the ability to
record very short exposures.

CMOS cameras… it’s more complicated

Some CMOS sensors utilise a system where only a portion of the image is captured at any
one time, for example, row by row. As an analogy, imagine viewing the subject through
a letterbox shaped cut-out that gradually scrolls down the image from top to bottom. This
is known as a ‘rolling shutter’. The image is then reconstructed as if the whole image was
recorded at exactly the same time. If the subject is static or slow moving, then this method
usually works well. However, if the subject is fast moving, then the final image can appear
distorted, as in the following picture.

It is possible to modify the way a CMOS sensor behaves during the exposure phase, so that
image distortions are much less likely. This can be achieved by adding some extra circuitry
to each pixel to enable them all to be triggered simultaneously. The sensor will now be
capable of recording an almost instantaneous snapshot. This is known as a ‘global shutter’.
However, adding this extra circuitry comes at a cost, since it also reduces the fill factor,
and less of each pixel is photosensitive. Consequently, the light sensitivity, and therefore the
efficiency of the sensor, is somewhat compromised.

Above: An example of the unusual way in which an image can become distorted when a
camera with a rolling shutter is used to record images of a fast-moving subject.
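The row-by-row skew can be demonstrated with a toy capture model (illustrative Python; it tracks a hypothetical one-pixel-wide vertical bar moving right at one column per row-readout time):

```python
# Why a rolling shutter skews motion: each row is exposed slightly
# later, so a moving vertical bar lands in a different column per row.
def capture(rows, cols, bar_speed, rolling):
    img = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        t = r if rolling else 0          # rolling: row r is exposed at time r
        x = (bar_speed * t) % cols       # bar position at that instant
        img[r][x] = 1
    return img

capture(4, 8, 1, rolling=False)   # global shutter: a straight vertical bar
capture(4, 8, 1, rolling=True)    # rolling shutter: the bar appears diagonal
```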

Quick comparison of rolling and global shutter types used in CMOS cameras:

Characteristic                                             Rolling Shutter   Global Shutter
Ability to image fast moving subjects without distortion   Poor              Very good
Pixel fill factor                                          Higher            Lower
Potential FPS                                              Higher            Lower
Read out noise                                             Lower             Higher

It is interesting to refer back to the exposure control of CCD sensors, as these operate in a
way that is analogous to the CMOS global shutter, although they are generally still limited
in their ability to image fast moving objects continuously due to their relatively slow image
download times. Because the sensor readout itself is slow, a CCD camera is unlikely to
experience a bottleneck when downloading the image over a regular USB2 connection.
By comparison, a CMOS camera may benefit from the faster USB3 interface, particularly if it
uses a very high-resolution sensor, as there is a considerable increase in the volume of data
to be downloaded.

Noise annoys
Read noise

The process of reading out image charges inevitably adds some noise, both when the
charges are converted into a signal voltage and when the signal is digitised by the ADC.
One of the main aims of the camera designer is to minimise this read noise. For a CCD
camera, typical values are in the range of 2 to 5 e-. In the case of a CCD sensor, it is
relatively straightforward to calculate the read noise, as the distribution of noise values for
each pixel is symmetrical. This can provide a useful indication of how suitable the camera
will be for low light imaging: the lower the read noise, the smaller the signal it is capable
of detecting.

Modern CMOS sensors can offer extremely low read noise; however, their read-out
circuitry is more complicated, which leads to greater variation in pixel-to-pixel
performance. Consequently, the noise values often follow a heavily skewed distribution,
and RMS read noise values ought to be specified.

Thermal noise – dark current and dark frame


Thermal noise is caused when electrons are thermally excited and liberated from the silicon
in the sensor. They are sometimes known as thermal electrons. These electrons give rise to a
dark current, the size of which is related to the rate at which these electrons are liberated.
Hence, the dark current is measured in electrons per pixel per second.

The amount of thermal noise present within a sensor can be highly significant. At room
temperature there can be so much thermal noise that it totally obscures very faint details
in the image. When imaging low light subjects, such as fluorescence, it is absolutely vital
to minimise thermal noise. Fortunately, it is a relatively simple task to cool the sensor down
to a level where the dark current, and therefore the thermal noise, is much reduced. As a
guide, for a typical CCD sensor the thermal noise can be approximately halved for every
5-10°C that the temperature is reduced below room temperature. This is known as the
‘doubling temperature’. However, temperatures much below -10°C do not generally have
a large effect on further reducing thermal noise. At that point the thermal noise is so low
that other forms of noise are of more concern.
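The ‘doubling temperature’ rule of thumb is simple to apply (illustrative Python; the 6 °C doubling temperature and 1 e-/pixel/s room-temperature dark current are assumed example values, and real sensors vary):

```python
def dark_current(dc_room, delta_t, doubling_t=6.0):
    """Dark current (e-/pixel/s) after cooling delta_t degrees C below
    room temperature, given the sensor's doubling temperature."""
    return dc_room * 0.5 ** (delta_t / doubling_t)

dark_current(1.0, 30)   # 30 C of cooling -> 1.0 * 0.5**5 = 0.03125 e-/pixel/s
```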

From an end user’s point of view, it is possible to quickly observe the benefit of cooling the
sensor by recording a dark frame. This is where the camera is capped to prevent any
incoming light, and an image is recorded over an extended time period. With the uncooled
sensor, there is so much thermal noise that a large gradient is present in the image.
However, when the cooling is activated, the only things present are a scattering of hot
pixels against an almost uniform black background.

Above: Picture showing how the Peltier cooler is coupled to the underside of the imaging
sensor with an aluminium coldfinger.

Left: Example dark frames. Uncooled (left) and cooled (right) demonstrate the huge
reduction in thermal noise.

CMOS sensors can also benefit from cooling. However, because they feature a larger
number of electrical connectors, it is more complicated to couple the package to the
cooler. The design has to be much more creative.

Quantum efficiency
The ability of a camera sensor to convert an incoming photon into a charge is measured
by its quantum efficiency (QE). For example, if half of the photons generate a charge, then
its QE = 50%. There are several reasons why this process is not completely lossless. Firstly,
some of the incoming photons may never reach the photosensitive site, due to scattering
or being blocked by supplementary components and connectors. Secondly, QE varies with
the wavelength of the incoming light. It is desirable to use a sensor with a very high QE for
low light applications. The specified QE can be represented either as a nominal overall
figure, or more usefully, as a graph of QE vs wavelength. This is an important consideration,
as there is little point in selecting a sensor with a high overall QE, only to find that it is
actually low over the portion of the spectrum under investigation.

Above: Although a nominal QE figure may help you gauge the sensitivity of a sensor, it may
be misleading at extreme ends of the spectrum.

Some common image artifacts


Smearing – If an interline CCD sensor is over exposed to incoming illumination, the number
of electrons accumulated in a particular pixel can exceed its maximum capacity, i.e. its
well depth. In this case, they can flow over the transfer gate and into the vertical register.
A large excess of charge in the vertical register produces an artifact in the form of a bright
line in the final reconstructed image.

Blooming – The origin of this effect is similar to smearing, except that in this case the excess
electrons overflow into neighbouring pixels. This causes a halo to appear around bright
objects in the image. Blooming can appear at the same time as smearing. Both these
effects can be minimised by choosing a CCD sensor with ‘anti-blooming’ technology. In this
case, the excess charges are deliberately drained away, and therefore do not make an
unwanted appearance in other parts of the image.

Above: Smearing and blooming are artifacts that can occur when excess charges migrate
into nearby regions of the sensor.

Hot pixels – In this case, one or more pixels appear to be unusually bright when recording
a long exposure dark field image. This is because they exhibit a particularly high dark
current. They are present in both CCD and CMOS sensors, and are usually caused by
minute amounts of contamination during the manufacturing process. Their impact on the
final image can be considerably reduced by cooling the sensor. Additionally, they can be
removed (calibrated out) by subtracting the dark frame from the image.
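Dark-frame calibration amounts to a per-pixel subtraction (illustrative Python with made-up pixel values; real calibration pipelines also match exposure time and sensor temperature between the frames):

```python
def subtract_dark(image, dark):
    """Remove the fixed-pattern dark signal, including hot pixels, by
    subtracting a dark frame taken with the same exposure settings."""
    return [[max(light - d, 0)                 # clamp at zero
             for light, d in zip(img_row, dark_row)]
            for img_row, dark_row in zip(image, dark)]

image = [[10, 10], [10, 250]]   # bottom-right pixel is 'hot'
dark  = [[ 2,  2], [ 2, 240]]   # the dark frame records the same hot pixel
subtract_dark(image, dark)      # -> [[8, 8], [8, 10]]
```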

Dark pixels – These are pixels which appear to be significantly dimmer than average, or
even completely inactive. They can become apparent when recording a bright flat field
image. As with hot pixels, the underlying cause is usually contamination during sensor
manufacture. Fortunately, by their very nature, they have much less of a visual impact on
the final image than hot pixels.

Uniformity – In a CCD sensor, all the data is usually read out through the same circuitry. This
is in complete contrast to a CMOS sensor, where each pixel has its own read out circuitry.
Small variations in the specifications of this circuitry, particularly the amplification, can
produce visible differences. When comparing flat fields of the two sensor types at various
illumination intensities, CMOS sensors tend to produce more variation than their CCD
counterparts.

Column artifacts – These artifacts are applicable to interline CCDs, and can affect an entire
column or just part of it. Their appearance is relatively common. As is the case with pixel
artifacts, they can appear to be bright, warm or dark. Their causes are numerous, such as
minor electrical shorts or leaks in the transfer gate, or obstructions in the vertical register.
They can inadvertently occur during manufacturing, as it is extremely difficult to produce an

absolutely flawless CCD sensor. However, they can also sporadically appear after a period
of use. In this latter case it is usually as a result of interactions with cosmic rays. Cosmic rays
are continuously striking the CCD sensor, whether it is in use or in storage. In fact, they can
sometimes be observed in cooled dark frames. If a cosmic ray is particularly energetic, it
can permanently disrupt a few of the silicon atoms near the surface of the sensor, leading
to this effect. As with errant pixels, a combination of cooling the sensor and subtracting the
dark frame will usually remove these artifacts. Subtracting the bias frame is also helpful.

Left: An example of a column artifact (left) and a cosmic ray strike (right).

This type of artifact is often misidentified as a ‘column defect’ or ‘full column defect’.
However, they are not quite the same thing. A true column defect is where the vertical
column appears to be saturated and it may even affect neighbouring columns, so that
the artifact appears to be 2 or 3 pixels wide. It will clearly dominate the final image as it is
very severe. In this situation, the remedy is to edit out the offending columns in software or
disable them in firmware. Fortunately, this type of artifact is extremely rare.

Glossary:
Analogue to Digital Converter (ADC) – A semiconductor device that digitises analogue
signals.
Bucket Brigade Delay line (BBD) – A form of analogue delay circuitry, commonly used
in audio designs. It is analogous to a line of firemen passing water down a line between
buckets – hence its adopted name.
Coldfinger – A metallic interface, often aluminium, that is used to couple the sensor to a
thermoelectric (Peltier) device.
Die – A single piece of silicon that houses a complete semiconductor device, such as a CPU.
Dynamic Range – The ability to show subtle differences in shading in an image.
Fill Factor – A measure of how much of an image pixel is actually available to convert
photons into useable charges.
Flat Field – An image of a uniform background that is used to reveal shading variations or
imperfections caused by dust in the optical train.
FOV – Field of view
FPS – Frames per second. An indication of how quickly a camera, or a visual display, can
process video images.
Frame Transfer – A CCD sensor type where the charges formed are transferred en masse
to an equally sized shift register. This register is immediately adjacent to the photosensitive
area and is protected from incident light.
Full Frame – This type of CCD sensor does not have vertical registers. Instead, charges are
transferred into adjacent rows until they appear in the horizontal register. It is essential that
these sensors are protected from incident light during the readout phase – usually by a
mechanical shutter.

Global shutter – An imaging mode where every pixel across a sensor, both starts and stops
the exposure simultaneously.
Interline – A type of CCD sensor where the accumulated charges are transported via
neighbouring vertical registers that are interspersed between columns of photoactive sites.
Moore’s Law – The observation that the number of transistors on a semiconductor device
approximately doubles every two years. A useful guide as to how the complexity of such
devices may evolve over time.
Peltier – A solid state thermoelectric device composed of many semiconductor elements.
Photoelectric Effect – An important physical phenomenon that describes the conversion of
light into electrons. Albert Einstein’s work in this area earned him the Nobel Prize in Physics
(1921).
Read noise – The process of ‘reading’ the image data (converting charges to a voltage)
inevitably adds a small amount of noise.
Rolling Shutter – An imaging mode where different areas of the sensor commence the
exposure at staggered intervals. For example, row by row.
SNR – Signal to Noise Ratio. A higher signal to noise ratio indicates that the information
signal stands out more clearly from the noise.
SOC – System on a chip. The majority of the components that form a compact computer,
integrated into a single semiconductor package.

References
1. W. S. Boyle; G. E. Smith (April 1970). “Charge Coupled Semiconductor Devices”. Bell Syst. Tech. J. 49 (4): 587–593

v1.2

Unit 8 Lodge Farm Barns, Norwich, Norfolk, NR9 3LZ, UK. www.atik-cameras.com

