1
Mobile & Satellite Communications (이동 및 위성 통신)
Lecture 3:
Introduction to Wireless
Coding and Modulation
Pangun Park
Chungnam National University
Information Communications Engineering
2
Overview
§ Electromagnetic Spectrum
§ Basic Signals
Ø Frequency, Wavelength, and Phase
§ Antennas (Brief)
§ Line Coding
§ Modulation
§ Channel Coding
Ø Hamming Distance, Block Codes
§ Multiple Access Methods
Ø Spread Spectrum Technology
3
Electromagnetic Spectrum
Frequencies for communication
VLF = Very Low Frequency UHF = Ultra High Frequency
LF = Low Frequency SHF = Super High Frequency
MF = Medium Frequency EHF = Extra High Frequency
HF = High Frequency UV = Ultraviolet Light
VHF = Very High Frequency
Frequency and wavelength: λ = c/f, where λ is the wavelength, c ≈ 3×10^8 m/s is the speed of light, and f is the frequency.
[Figure: the electromagnetic spectrum used for communication]
  wavelength: 1 Mm   | 10 km  | 100 m | 1 m     | 10 mm  | 100 µm | 1 µm
  frequency:  300 Hz | 30 kHz | 3 MHz | 300 MHz | 30 GHz | 3 THz  | 300 THz
  bands: VLF, LF, MF, HF, VHF, UHF, SHF, EHF, infrared, visible light, UV
  guided media over this range: twisted pair, coax cable, optical transmission
4
Electromagnetic Spectrum
§ Wireless communication uses 100 kHz to 60 GHz
(©2016 Raj Jain, http://www.cse.wustl.edu/~jain/cse574-16/, Washington University in St. Louis)
5
Frequencies for mobile communication
§ UHF-ranges for mobile cellular systems
Ø simple, small antenna for cars
Ø deterministic propagation characteristics, reliable connections
§ SHF and higher for directed radio links, satellite communication
Ø small antenna, focusing
Ø large bandwidth available
§ Wireless LANs use frequencies in UHF to SHF spectrum
Ø some systems planned up to EHF
Ø limitations due to absorption by water (>5 GHz) and oxygen (60 GHz)
molecules (resonance frequencies)
• weather dependent fading, signal loss caused by heavy rainfall etc.
6
Licensed vs Unlicensed bands
§ Mobile cellular typically uses licensed bands
Ø Spectrum licensed to operator
Ø GSM:
• 900 MHz, 1800 MHz (Europe)
• 850 MHz, 1900 MHz (US)
• other bands
Ø UMTS, LTE
Ø See e.g., http://www.frequentieland.nl/wie.htm
§ WLAN typically uses unlicensed bands
Ø 2.4 GHz Industrial, Scientific, and Medical (ISM) band:
• IEEE 802.11b/g, Bluetooth, Zigbee, microwave oven
Ø 5.8 GHz ISM band:
• IEEE 802.11a
7
Licensed bands: Korea 2016
8
9
Overview
§ Electromagnetic Spectrum
§ Basic Signals
Ø Frequency, Wavelength, and Phase
§ Antennas (Brief)
§ Line Coding
§ Modulation
§ Channel Coding
Ø Hamming Distance, Block Codes
§ Multiple Access Methods
Ø Spread Spectrum Technology
10
Basic: Decibels (Quiz)
§ Example 1: Pin = 10 mW, Pout = 5 mW. Attenuation in dB?
§ Example 2: Pin = 100 mW, Pout = 1 mW. Attenuation in dB?
11
Basic: Decibels
§ Attenuation [bel] = log10(Pin/Pout); Attenuation [decibel, dB] = 10 log10(Pin/Pout)
§ In terms of voltage: Attenuation [dB] = 20 log10(Vin/Vout)
§ Example 1: Pin = 10 mW, Pout = 5 mW ⇒ Attenuation = 10 log10(10/5) = 10 log10 2 ≈ 3 dB
§ Example 2: Pin = 100 mW, Pout = 1 mW ⇒ Attenuation = 10 log10(100/1) = 10 log10 100 = 20 dB
(©2016 Raj Jain, http://www.cse.wustl.edu/~jain/cse574-16/)
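A minimal Python check of the two examples above (added here for illustration; it is not part of the original slide):

import math

def attenuation_db(p_in_mw, p_out_mw):
    # Attenuation in dB for input/output powers given in milliwatts
    return 10 * math.log10(p_in_mw / p_out_mw)

print(attenuation_db(10, 5))    # ~3.01 dB (Example 1)
print(attenuation_db(100, 1))   # 20.0 dB (Example 2)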
12
Signals I
§ Physical representation of data
§ Function of time and location
§ Signal parameters: parameters representing the value of data
§ Classification
Ø continuous time/discrete time
Ø continuous values/discrete values
Ø analog signal = continuous time and continuous values
Ø digital signal = discrete time and discrete values
13
Frequency, Period, and Phase
§ Signal parameters of periodic signals: s(t) = A sin(2πft + φ), where A = amplitude, f = frequency, φ = phase, and period T = 1/f
§ Frequency is measured in cycles/sec, or Hertz (Hz)
[Figure: one cycle of a sine wave with amplitude 0.5 and phase 45°]
(©2016 Raj Jain, http://www.cse.wustl.edu/~jain/cse574-16/)
14
Wavelength
§ Distance occupied by one cycle
§ Distance between two points of corresponding phase in two consecutive cycles
§ Wavelength λ, assuming signal velocity v: λ = vT, i.e., λf = v
§ c = 3×10^8 m/s (speed of light in free space) = 300 m/µs
[Figure: amplitude vs. distance, with λ marking one cycle]
(©2016 Raj Jain, http://www.cse.wustl.edu/~jain/cse574-16/)
15
Example (Quiz)
§ Frequency = 2.5 GHz
§ Wavelength?
16
Example
§ Frequency = 2.5 GHz
§ Wavelength λ = c/f = (3×10^8 m/s) / (2.5×10^9 Hz) = 0.12 m = 12 cm
(©2016 Raj Jain, http://www.cse.wustl.edu/~jain/cse574-16/)
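A two-line Python check of this example (illustrative only, not from the original slide):

c = 3e8        # speed of light in free space, m/s
f = 2.5e9      # 2.5 GHz
print(c / f)   # 0.12 m, i.e., 12 cm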
17
Phase
§ Sine wave with a phase of 45°: sin(2πft + π/4)
§ It can be expressed as the sum of an in-phase component I·sin(2πft) and a quadrature component Q·cos(2πft)
[Figure: sin(2πft), cos(2πft), and sin(2πft + π/4) in the time domain, and the same signal as a point (I, Q) in the phase diagram]
(©2016 Raj Jain, http://www.cse.wustl.edu/~jain/cse574-16/)
18
Fourier series of periodic signals
Fourier representation of periodic signals:

  g(t) = (1/2)·c + Σ_{n=1}^{∞} a_n·sin(2π n f t) + Σ_{n=1}^{∞} b_n·cos(2π n f t)

[Figure: an ideal periodic signal (square wave) vs. a real composition based on a finite number of harmonics]
19
Fourier series of periodic signals
20
Fourier series of periodic signals
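The "real composition based on harmonics" in the figure can be reproduced numerically. Below is a minimal sketch (added for illustration, assuming NumPy is available) that builds a square wave from the first few odd harmonics of the Fourier series above:

import numpy as np

f = 1.0                                   # fundamental frequency, Hz
t = np.linspace(0.0, 2.0, 1000)           # two periods
ideal = np.sign(np.sin(2 * np.pi * f * t))   # ideal periodic (square) signal

def composition(N):
    # Partial Fourier sum using odd harmonics up to N: (4/pi) * sum (1/n) sin(2*pi*n*f*t)
    g = np.zeros_like(t)
    for n in range(1, N + 1, 2):
        g += (4 / np.pi) * np.sin(2 * np.pi * n * f * t) / n
    return g

for N in (1, 3, 9):
    err = np.mean(np.abs(ideal - composition(N)))
    print(N, round(err, 3))               # mean |error| shrinks as harmonics are added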
21
Signals II
§ Different representations of signals
Ø amplitude (amplitude domain)
Ø frequency spectrum (frequency domain)
Ø phase state diagram (amplitude M and phase φ in polar coordinates)
§ Composed signals transferred into frequency domain using
Fourier transformation
§ Digital signals need
Ø infinite frequencies for perfect transmission
Ø modulation with a carrier frequency for transmission (analog signal!)
[Figure: the same signal in three representations: amplitude A [V] vs. time t [s], amplitude A [V] vs. frequency f [Hz], and the phase-state diagram with I = M cos φ and Q = M sin φ]
22
Time and Frequency Domains
[Figure: a composite signal shown in the time domain and in the frequency domain: a component of amplitude A at frequency f plus a component of amplitude A/3 at frequency 3f]
(©2016 Raj Jain, http://www.cse.wustl.edu/~jain/cse574-16/)
23
Fourier Transformation
§ Generalization of the complex Fourier series
§ Decomposes a function of time into its frequency components
§ Let us try a simple exercise!
24
Example: Fourier Transformation
§ Example: the original function is an oscillation at 3 Hz
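A minimal sketch of this exercise (illustrative, assuming NumPy): sample a 3 Hz oscillation and locate its peak in the frequency domain with an FFT.

import numpy as np

fs = 100.0                          # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)       # 1 second of samples
x = np.sin(2 * np.pi * 3.0 * t)     # original function: oscillation at 3 Hz

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
print(freqs[np.argmax(spectrum)])   # 3.0, the peak sits at 3 Hz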
25
Overview
§ Electromagnetic Spectrum
§ Basic Signals
Ø Frequency, Wavelength, and Phase
§ Antennas (Brief)
§ Line Coding
§ Modulation
§ Channel Coding
Ø Hamming Distance, Block Codes
§ Multiple Access Methods
Ø Spread Spectrum Technology
26
Importance of Antenna
§ Ex1: Telecommunication
§ Ex2: IoT Sensor network and Missile control
27
Antennas: isotropic radiator
§ Radiation and reception of electromagnetic waves, coupling of
wires to space for radio transmission
§ Isotropic radiator: equal radiation in all directions (three
dimensional) - only a theoretical reference antenna
§ Real antennas always have directive effects (vertically and/or
horizontally)
§ Radiation pattern: measurement of radiation around an antenna
[Figure: an ideal isotropic radiator and its spherical radiation pattern, shown with x, y, z axes]
28
Antennas: simple dipoles
§ Real antennas are not isotropic radiators but, e.g., dipoles with
lengths λ/4 on car roofs or λ/2 as Hertzian dipole
è shape of antenna proportional to wavelength
§ Example: Radiation pattern of a simple Hertzian dipole
§ Gain: maximum power in the direction of the main lobe
compared to the power of an isotropic radiator (with the same
average power)
[Figure: radiation pattern of a simple dipole: side view (xy-plane), side view (yz-plane), top view (xz-plane); simple dipoles of length λ/4 and λ/2]
29
Antennas: diversity
§ Grouping of 2 or more antennas
Ø multi-element antenna arrays
§ Antenna diversity
Ø switched diversity, selection diversity
• receiver chooses antenna with largest output
Ø diversity combining
• combine output power to produce gain
• cophasing needed to avoid cancellation
Ø Smart antennas
• Beam forming
[Figure: antenna diversity examples: two λ/4 elements spaced λ/2 apart above a ground plane, and two λ/2 elements spaced λ/2 apart with their outputs combined (+)]
30
Overview
§ Electromagnetic Spectrum
§ Basic Signals
Ø Frequency, Wavelength, and Phase
§ Antennas (Brief)
§ Line Coding
§ Modulation
§ Channel Coding
Ø Hamming Distance, Block Codes
§ Multiple Access Methods
Ø Spread Spectrum Technology
31
Coding Terminology
§ Signal element (symbol): a pulse of constant amplitude, frequency, and phase
§ Modulation rate: 1 / (duration of the smallest signal element) = baud rate
§ Data rate: bits per second
[Figure: a ±5 V waveform showing one pulse (signal element) per bit, for bits 1 and 0]
(©2016 Raj Jain, http://www.cse.wustl.edu/~jain/cse574-16/)
32
Line Coding for Digital Communication
§ Goal is to transmit binary data (e.g., PCM encoded voice, MPEG
encoded video, financial information)
Ø Represent digital data by using digital signals
Ø Digital data stream is encoded into a sequence of pulses for transmission
through a base-band analog channel
§ Transmission distance is large enough that communication link
bandwidth is comparable to signal bandwidth.
§ Multiple links may be used, with regenerative repeaters
33
Line Code Examples
34
Data Transfer in Digital System
§ In a synchronous digital system, a common clock signal is used by all devices (data + clock).
§ Multiple data signals can be transmitted in parallel using a single clock signal.
§ Serial peripheral communication schemes (RS-232, USB,
FireWire) use various clock extraction methods
Ø RS-232 is asynchronous with (up to) 8 data bits preceded by a start bit (0) and followed
by optional parity bit and stop bit (1); clock recovery by “digital phase-locked loop”
Ø USB needs a real phase-locked loop and uses bit stuffing to ensure enough transitions
Ø FireWire has differential data and clock pairs; clock transitions only when data does not
35
Serial Communication: RS-232 Signaling
§ RS-232 is a standard for asynchronous serial communication.
Ø NRZ Encoding
§ Each transition resynchronizes the receiver’s bit clock.
§ Asynchronous here means “asynchronous at the byte level,” but
the bits are still synchronized; their durations are the same.
36
Implementation: Differential Manchester Coding
§ Microcontroller : Interrupt Service Routine
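The slide shows a microcontroller interrupt-service-routine implementation. As an illustration only (not the original ISR; the function name and the exact convention are assumptions), here is a host-side Python sketch of one common differential Manchester convention: every bit has a mid-bit transition for clocking, and a 0 adds an extra transition at the start of the bit period.

def diff_manchester_encode(bits, start_level=0):
    # Returns two half-bit signal levels per data bit
    level = start_level
    halves = []
    for b in bits:
        if b == 0:
            level ^= 1        # transition at the start of the bit period encodes a 0
        halves.append(level)
        level ^= 1            # mandatory mid-bit transition (clock)
        halves.append(level)
    return halves

print(diff_manchester_encode([1, 0, 1, 1, 0]))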
37
Overview
§ Electromagnetic Spectrum
§ Basic Signals
Ø Frequency, Wavelength, and Phase
§ Antennas (Brief)
§ Line Coding
§ Modulation
§ Channel Coding
Ø Hamming Distance, Block Codes
§ Multiple Access Methods
Ø Spread Spectrum Technology
38
Modulation and Demodulation
[Figure: modulation and demodulation. Transmitter: digital data (101101001) → digital modulation → analog baseband signal → analog modulation onto the radio carrier. Receiver: radio carrier → analog demodulation → analog baseband signal → synchronization and decision → digital data (101101001)]

Communication system block diagram (advanced):
Source → Source Encoder → Encrypt → Channel Encoder → Modulator → Channel (+ noise) → Demodulator → Channel Decoder → Decrypt → Source Decoder → Sink
§ Source encoder compresses the message to remove redundancy
§ Encryption protects against eavesdroppers and false messages
§ Channel encoder adds redundancy for error protection
§ Modulator converts digital inputs to signals suitable for the physical channel
39
Modulation
§ Digital version of modulation is called keying
§ Amplitude Shift Keying (ASK)
§ Frequency Shift Keying (FSK)
§ Phase Shift Keying (PSK): Binary PSK (BPSK)
[Figure: ASK, FSK, and BPSK waveforms for the bit sequence 0 1 1 0]
(©2016 Raj Jain, http://www.cse.wustl.edu/~jain/cse574-16/)
40
Modulation (Cont)
§ Differential BPSK: does not require the original carrier at the receiver
§ Quadrature Phase Shift Keying (QPSK): 2 bits/symbol, e.g., 11 = 45°, 10 = 135°, 00 = 225°, 01 = 315°
§ In-phase (I) and quadrature (Q, 90°-shifted) components are added
Ref: Electronic Design, “Understanding Modern Digital Modulation Techniques,” http://electronicdesign.com/communications/understanding-modern-digital-modulation-techniques
(©2016 Raj Jain, http://www.cse.wustl.edu/~jain/cse574-16/)
41
QAM
§ Quadrature Amplitude (and phase) Modulation
§ 4-QAM, 16-QAM, 64-QAM, 256-QAM
§ Used in DSL and wireless networks
§ 4-QAM: 2 bits/symbol, 16-QAM: 4 bits/symbol, … (log2 M bits/symbol for M-QAM)
[Figure: I/Q constellation diagrams for binary, 4-QAM, and 16-QAM]
(©2016 Raj Jain, http://www.cse.wustl.edu/~jain/cse574-16/)
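As an illustration of the QPSK mapping quoted above (11 = 45°, 10 = 135°, 00 = 225°, 01 = 315°), a small Python sketch that maps a dibit stream onto I/Q constellation points; the helper name is hypothetical and not from the slides.

import math

PHASES = {'11': 45, '10': 135, '00': 225, '01': 315}   # dibit -> carrier phase, degrees

def qpsk_symbols(bits):
    # Map a bit string of even length to complex I/Q constellation points
    symbols = []
    for i in range(0, len(bits), 2):
        phase = math.radians(PHASES[bits[i:i + 2]])
        symbols.append(complex(math.cos(phase), math.sin(phase)))   # I + jQ
    return symbols

print(qpsk_symbols('11100001'))   # four symbols, one per dibit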
42
Channel Capacity
§ Capacity = maximum data rate for a channel
§ Nyquist Theorem: bandwidth = B ⇒ data rate < 2B
§ Bi-level encoding: data rate = 2 × bandwidth
§ Multilevel: data rate = 2 × bandwidth × log2 M
Ø M = number of signal levels
Ø Example: M = 4, capacity = 4 × bandwidth
[Figure: worst case, a 0/5 V bi-level signal alternating every signal element]
(©2016 Raj Jain, http://www.cse.wustl.edu/~jain/cse574-16/)
43
Shannon's Theorem (Quiz)
§ Bandwidth = B Hz, signal-to-noise ratio = S/N
§ Maximum number of bits/sec = B log2(1 + S/N)
§ Example: phone wire bandwidth = 3100 Hz, S/N = 30 dB. Capacity?
44
Shannon's Theorem
§ Bandwidth = B Hz, signal-to-noise ratio = S/N
§ Maximum number of bits/sec = B log2(1 + S/N)
§ Example: phone wire bandwidth = 3100 Hz, S/N = 30 dB
  10 log10(S/N) = 30 ⇒ log10(S/N) = 3 ⇒ S/N = 10^3 = 1000
  Capacity = 3100 · log2(1 + 1000) ≈ 30,894 bps
(©2016 Raj Jain, http://www.cse.wustl.edu/~jain/cse574-16/)
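A one-function Python sketch (illustrative, not part of the original slide) of the phone-wire calculation:

import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    # Shannon capacity C = B * log2(1 + S/N), with S/N given in dB
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

print(shannon_capacity_bps(3100, 30))   # ~3.09e4 bps for B = 3100 Hz, S/N = 30 dB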
45
Overview
§ Electromagnetic Spectrum
§ Basic Signals
Ø Frequency, Wavelength, and Phase
§ Antennas (Brief)
§ Line Coding
§ Modulation
§ Channel Coding
Ø Hamming Distance, Block Codes
§ Multiple Access Methods
Ø Spread Spectrum Technology
46
Channel Coding
§ This part is about the reliable transmission of digital information over an unreliable physical medium.
Ø Shannon showed that reliable communications can be achieved by proper
coding of information to be transmitted provided that the rate of information
transmission is below the channel capacity.
Ø Coding is achieved by adding properly designed redundancy to each message
before its transmission. The added redundancy is used for error control.
[Figure: communication system block diagram (see slide 38); the channel encoder adds redundancy for error protection, which the channel decoder uses for error control]
47
Concept of Channel Coding (Block Code)
[Figure: channel coding concept: message → encoding → channel → decoding]
§ The problem with no coding is that the two valid codewords (0 and 1) also have a Hamming distance of 1, so a single-bit error changes a valid codeword into another valid codeword.
§ What is the Hamming distance of the replication code?
[Figure: a single-bit error flips 0 (“heads”) into 1 (“tails”)]
48
Simple Repetition Code
§ Replication code to reduce decoding error
§ Code: Bit “b” coded as “bb...b” (n times)
§ Channel coding
How to introduce redundancy?
Ø Repetition code: original 00 → encoded 0000, 01 → 0011, 10 → 1100, 11 → 1111
Ø Parity check code (we can save one bit!): original 00 → encoded 000, 01 → 011, 10 → 110, 11 → 101
Ø In both cases the decoder can still recover the original information when the channel erases one encoded bit.
Ø Encoding of an [n, k] block code: redundancy r = n − k
49
Simple Repetition Code
§ Prob(decoding error) over BSC with p=0.01
§ Exponential fall-off (note the log scale), but huge overhead (low code rate)
[Figure: probability of decoding error vs. replication factor n (= 1/code rate) over a BSC with p = 0.01; note the log scale]
§ We can do a lot better!
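A minimal sketch (illustrative, assuming odd n and majority-vote decoding) that reproduces the exponential fall-off of the decoding-error probability over a BSC with p = 0.01:

from math import comb

def repetition_error_prob(n, p=0.01):
    # Decoding error occurs when more than half of the n copies are flipped
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n // 2) + 1, n + 1))

for n in (1, 3, 5, 7):
    print(n, repetition_error_prob(n))   # falls off roughly exponentially in n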
50
Basic Problems in Coding Theory
§ To find a good code (e.g., capacity-achieving or capacity-approaching)
§ To find its decoding algorithm with low complexity
§ To find a way of implementing the decoding algorithm
51
Major Developments of Codes
§ Hamming codes (1950)
§ Reed-Muller codes (1954)
§ BCH codes (by Bose, Ray-Chaudhuri and Hocquenghem, 1959)
§ Reed-Solomon codes (1960)
§ Low-density parity-check codes (by Gallager in 1962, rediscovered
in 90’s)
§ Convolutional codes (by Elias, 1955)
§ Viterbi algorithm (1967)
§ Concatenated codes (by Forney, 1966)
§ Trellis-coded modulation (by Ungerboeck, 1982)
§ Turbo codes (by Berrou, 1993)
§ Space-time codes (by Vahid Tarokh, 1998)
52
Major Approaches to Coding Theory
[Figure: timeline of major approaches to coding theory, 1950–2010.
  Algebraic approach: Hamming codes, Golay codes, BCH codes, RS codes, nonbinary/Goppa codes, algebraic geometry codes.
  Probabilistic approach: convolutional codes, LDPC codes, Turbo codes, rediscovery of LDPC codes, duo-binary Turbo codes, nonbinary LDPC codes, spatially-coupled LDPC codes.
  Information-theoretic approach: Polar codes.]
(Communications and Signal Design Lab., POSTECH)
53
How Close to the Channel Capacity? (AWGN, BPSK)
[Figure: code rate R vs. power efficiency Eb/N0 (dB) for AWGN with BPSK. Classical algebraic codes (Reed-Muller, convolutional, RS + convolutional) sit several dB from the Shannon capacity curve, while modern codes (Turbo, LDPC, TPC) operate close to it; the region beyond the Shannon capacity curve is unachievable.]
(Communications and Signal Design Lab., POSTECH)
54
Binary Arithmetic
§ Computations with binary numbers in code construction will
involve Boolean algebra, or algebra in “GF(2)” (Galois field of
order 2), or modulo-2 algebra:
Ø 0 + 0 = 0, 1 + 0 = 0 + 1 = 1, 1 + 1 = 0
Ø 0·0 = 0·1 = 1·0 = 0, 1·1 = 1
55
Hamming Distance
§ Hamming Distance: The number of bit positions in which the
corresponding bits of two encodings of the same length are
different.
Ø The Hamming Distance (HD) between a valid binary codeword and the same
codeword with e errors is e.
§ The problem with no coding is that the two valid codewords (0
and 1) also have a Hamming distance of 1. So a single-bit error
changes a valid codeword into another valid codeword.
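A two-line Python illustration of the definition (added here; not part of the original slide):

def hamming_distance(a, b):
    # Number of positions in which two equal-length bit strings differ
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance('0110010', '0100011'))   # 2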
56
Embedding for Structure Separation
§ Encode so that the codewords are far enough from each other.
§ Likely error patterns shouldn’t transform one codeword to
another.
[Figure: codewords as nodes of a hypercube. With no coding (2-bit words), a single-bit error turns 00 into 10 or 01; with well-separated nodes in the 3-cube (e.g., 000 and 111), likely error patterns do not reach another codeword.]
Ø Code = nodes chosen in the hypercube + a mapping of message bits to nodes
Ø If we choose 2^k out of 2^n nodes, we can map all k-bit message strings into a space of n-bit codewords; the code rate is k/n.
57
Minimum Hamming Distance
§ Minimum Hamming distance of a code vs. its detection and correction capabilities
§ If d is the minimum Hamming distance between codewords, we can detect all patterns of up to (d − 1) bit errors.
§ If d is the minimum Hamming distance between codewords, we can correct all patterns of ⌊(d − 1)/2⌋ or fewer bit errors.
58
How to Construct Codes?
§ Want: 4-bit messages with single-error correction (min HD=3)
§ How to produce a code, i.e., a set of codewords, with this
property?
59
Example: A Simple Code - Parity Check
§ Add a parity bit to message of length k to make the total number
of 1 bits even (aka even parity).
§ If the number of 1s in the received word is odd, then there has been an error.
§ Minimum Hamming distance of parity check code is 2
Ø Can detect all single-bit errors
Ø In fact, can detect all odd number of errors
Ø But cannot detect even number of errors
Ø And cannot correct any errors
Ø Example:
  0 1 1 0 0 1 0 1 0 0 1 1 → original word with parity bit
  0 1 1 0 0 0 0 1 0 0 1 1 → single-bit error (detected)
  0 1 1 0 0 0 1 1 0 0 1 1 → 2-bit error (not detected)
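A minimal Python sketch (illustrative) of even parity, reproducing the detect/miss behaviour listed above:

def add_even_parity(bits):
    # Append one parity bit so the total number of 1s is even
    return bits + [sum(bits) % 2]

def parity_ok(word):
    # True if the received word has an even number of 1s
    return sum(word) % 2 == 0

word = add_even_parity([0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1])
print(parity_ok(word))     # True: no error
word[5] ^= 1
print(parity_ok(word))     # False: the single-bit error is detected
word[6] ^= 1
print(parity_ok(word))     # True: a 2-bit error goes undetected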
60
Example: Rectangular Parity Codes (Quiz)
§ Idea: start with rectangular
array of data bits, add parity
checks for each row and
column. Single-bit error in
data will show up as parity
errors in a particular row
and column, pinpointing the
bit that has the error.
[Figure: a 2×2 array of data bits D1 D2 / D3 D4, with parity bits P1, P2 for the rows and P3, P4 for the columns; P1 is the parity bit for row #1, P4 is the parity bit for column #2. What is (n, k, d)?]
61
Example: Rectangular Parity Codes (Quiz)
§ Idea: start with rectangular
array of data bits, add parity
checks for each row and
column. Single-bit error in
data will show up as parity
errors in a particular row
and column, pinpointing the
bit that has the error.
[Figure: the same rectangular parity code with three received examples]
Ø Parity for each row and column is correct ⇒ no errors
Ø Parity check fails for row #2 and column #2 ⇒ bit D4 is incorrect
Ø Parity check only fails for row #2 ⇒ bit P2 is incorrect
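A minimal Python sketch (illustrative; function names are hypothetical) of the 2×2 rectangular parity code without the overall parity bit, locating and fixing a single data-bit error:

def encode_rect(d):
    # d = [D1, D2, D3, D4]; append row parities P1, P2 and column parities P3, P4
    D1, D2, D3, D4 = d
    return d + [D1 ^ D2, D3 ^ D4, D1 ^ D3, D2 ^ D4]

def correct_rect(w):
    # Locate and fix a single error in the data bits of a received word
    D1, D2, D3, D4, P1, P2, P3, P4 = w
    bad_row = [D1 ^ D2 != P1, D3 ^ D4 != P2]
    bad_col = [D1 ^ D3 != P3, D2 ^ D4 != P4]
    if any(bad_row) and any(bad_col):            # a data bit is wrong
        idx = 2 * bad_row.index(True) + bad_col.index(True)
        w[idx] ^= 1
    return w[:4]

code = encode_rect([0, 1, 1, 0])
code[3] ^= 1                                     # flip D4 in the channel
print(correct_rect(code))                        # [0, 1, 1, 0] recovered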
62
Block Codes: Encoding
§ Redundancy r = n−k
§ Code rate R = k/n
[Figure: encoding of an [n, k] block code: k message bits followed by (n − k) parity bits; in systematic form the entire n-bit block is called the codeword]
63
Block Codes: Decoding
§ Decide what the transmitted information was
§ Optimum decoding rule: Minimum distance decoding in a
memoryless channel
[Figure: decoding of an [n, k] block code: the received vector = codeword + error vector; the decoder compares it with the valid codewords, corrects errors, and removes the (n − k) redundant symbols]
(Communications and Signal Design Lab., POSTECH)
64
Linear Block Codes
§ Block code: k message bits encoded to n code bits, i.e., each of the 2^k messages is encoded into a unique n-bit combination via a linear transformation C = D·G, using GF(2) operations:
Ø C is an n-element row vector containing the codeword
Ø D is a k-element row vector containing the message
Ø G is the k×n generator matrix
Ø Each codeword bit is a specified linear combination of message bits.
§ (n,k) code has rate k/n
§ Sometimes written as (n,k,d), where d is the minimum HD of the
code.
§ The “weight” of a code word is the number of 1’s in it.
§ The minimum HD of a linear code is the minimum weight found
in its nonzero codewords
65
Quiz: What are n, k, d here?
§ {000, 111}
Ø ?
§ {0000, 1100, 0011, 1111}
Ø ?
§ (a third code, shown in the figure)
Ø ?
66
Quiz: What are n, k, d here?
§ {000, 111}
Ø (3,1,3) Rate = 1/3
§ {0000, 1100, 0011, 1111}
Ø (4,2,2) Rate = 1/2
§ (the third code from the figure)
Ø (7,4,3) Rate = 4/7
Ø The minimum HD of a linear code is the number of 1s in the nonzero codeword with the smallest number of 1s. (Note: {1111, 0000, 0001} and {1111, 0000, 0010, 1100} are not linear codes!)
67
Systematic Linear Block Codes
§ Split data into k-bit blocks
§ Add (n-k) parity bits to each block using (n-k) linear equations,
making each block n bits long
§ Every linear code can be represented by an equivalent systematic
form
§ Corresponds to choosing G = [I | A], i.e., the identity matrix in the first k columns
[Figure: systematic codeword layout: k message bits followed by (n − k) parity bits; the entire n-bit block is the codeword in systematic form]
68
Summary of Matrix Form
§ Operations of the generator matrix and the parity check matrix
§ Encoding
§ Decoding
Ø Encoding: message vector D × generator matrix G → code vector C
Ø Decoding check: code vector C × parity check matrix H (transposed) → null vector 0
69
Matrix Notation: Linear Block Codes
§ Task: given k-bit message, compute n-bit codeword. We can use
standard matrix arithmetic (modulo 2) to do the job.
§ For example, here’s how we would describe the (9,4,4) rectangular
code that includes an overall parity bit.
§ The generator matrix
Ø D_{1×k} · G_{k×n} = C_{1×n}   (message vector × generator matrix = codeword vector, arithmetic modulo 2)
Ø The generator matrix: G_{k×n} = [ I_{k×k} | A_{k×(n−k)} ]
Ø For the (9,4,4) rectangular code that includes an overall parity bit:

  [D1 D2 D3 D4] · | 1 0 0 0 1 0 1 0 1 | = [D1 D2 D3 D4 P1 P2 P3 P4 P5]
                  | 0 1 0 0 1 0 0 1 1 |
                  | 0 0 1 0 0 1 1 0 1 |
                  | 0 0 0 1 0 1 0 1 1 |
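A minimal NumPy sketch (illustrative) of C = D·G over GF(2) with the (9,4,4) generator matrix above:

import numpy as np

# Generator matrix of the (9,4,4) rectangular code, G = [I | A]
G = np.array([[1, 0, 0, 0, 1, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 0, 0, 1, 1],
              [0, 0, 1, 0, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 0, 1, 0, 1, 1]])

def encode(d):
    # Codeword C = D . G with all arithmetic modulo 2
    return np.dot(np.array(d), G) % 2

print(encode([1, 1, 1, 1]))   # -> [1 1 1 1 0 0 0 0 0]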
70
Parity Check Matrix
§ Parity equation: P_j = Σ_{i=1..k} D_i·a_ij
§ Parity relation: P_j + Σ_{i=1..k} D_i·a_ij = 0 (mod 2), where A = [a_ij]
§ So entry a_ij in the i-th row, j-th column of A specifies whether data bit D_i is used in constructing parity bit P_j
§ Questions: Can two columns of A be the same? Should two columns of A be the same? How about rows?
71
Parity Check Matrix
§ Can restate the codeword generation process as a parity check or nullspace check: H_{(n−k)×n} · C^T_{n×1} = 0_{(n−k)×1}
§ The parity check matrix for the (9,4,4) example, applied to the transposed codeword C^T = [D1 D2 D3 D4 P1 P2 P3 P4 P5]^T:

  | 1 1 0 0 1 0 0 0 0 |
  | 0 0 1 1 0 1 0 0 0 |
  | 1 0 1 0 0 0 1 0 0 | · C^T = 0_{5×1}
  | 0 1 0 1 0 0 0 1 0 |
  | 1 1 1 1 0 0 0 0 1 |
72
Summary of Matrix Form
§ Operations of the generator matrix and the parity check matrix
§ Encoding
§ Decoding
Ø Encoding: message vector D × generator matrix G → code vector C
Ø Decoding check: code vector C × parity check matrix H (transposed) → null vector 0
73
Simple-minded Decoding
§ Compare the received n-bit word R = C + E against each of the valid codewords to see which one is HD 1 away
§ Doesn’t exploit the nice linear structure of the code!
Ø High computation complexity!!
74
Syndrome Decoding – Matrix Form
§ Task: given n-bit code word, compute (n-k) syndrome bits. Again
we can use matrix multiply to do the job.
§ Received word: R = C + E
§ Compute the syndrome on the received word: S = H · R^T (an (n − k) × 1 syndrome vector)
§ To figure out the relationship of syndromes to errors:
Ø H · (C + E)^T = S; using H · C^T = 0, this gives H · E^T = S ⇒ the error type can be figured out from the syndrome
Ø Knowing the error patterns we want to correct for, we can compute k syndrome vectors offline (or n, if you also want to correct errors in the parity bits, but this is not needed) and then do a lookup after the syndrome is calculated from a received word to find the error type that occurred
75
Syndrome Decoding – Steps
§ Step 1: For a given code and error patterns E_i, precompute the syndromes S_i = H · E_i^T and store them
§ Step 2: For each received word R, compute the syndrome S = H · R^T
§ Step 3: Find l such that S_l == S and apply the correction for error E_l: C = R + E_l
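A minimal NumPy sketch (illustrative; only single-bit error patterns are precomputed) of the three steps, using the (9,4,4) parity check matrix from the earlier slide:

import numpy as np

# Parity check matrix H = [A^T | I] of the (9,4,4) code
H = np.array([[1, 1, 0, 0, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0, 1, 0, 0],
              [0, 1, 0, 1, 0, 0, 0, 1, 0],
              [1, 1, 1, 1, 0, 0, 0, 0, 1]])
n = H.shape[1]

# Step 1: precompute the syndrome of every single-bit error pattern
syndromes = {}
for i in range(n):
    e = np.zeros(n, dtype=int)
    e[i] = 1
    syndromes[tuple(H @ e % 2)] = e

def correct(r):
    # Steps 2-3: compute S = H.R^T; if it matches a stored syndrome, add that error
    s = tuple(H @ np.array(r) % 2)
    if s in syndromes:
        return (np.array(r) + syndromes[s]) % 2
    return np.array(r)

print(correct([1, 0, 1, 1, 0, 0, 0, 0, 0]))   # recovers [1 1 1 1 0 0 0 0 0]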
76
Syndrome Decoding – Steps (9,4,4) example
§ Codeword generation (using the (9,4,4) generator matrix G from the previous slides):
  [1 1 1 1] · G = [1 1 1 1 0 0 0 0 0]
§ Received word in error:
  R = [1 0 1 1 0 0 0 0 0] = [1 1 1 1 0 0 0 0 0] + [0 1 0 0 0 0 0 0 0]
§ Syndrome computation for the received word:
  S = H · R^T = [1 0 0 1 1]^T
§ This matches the precomputed syndrome for the single-bit error pattern [0 1 0 0 0 0 0 0 0].
77
Syndrome Decoding – Steps (9,4,4) example
§ Correction:
§ Since received word Syndrome matches the
Syndrome of the error
apply this error to the received word to recover the original
codeword
Pangun Park (GNU)
6.02 Fall 2012 Lecture 5, Slide #10
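The whole example can be checked with a few lines of Python. This is an illustrative sketch of my own, not code from the 6.02 slides; G and H are the matrices of the (9,4,4) code shown above.

# Minimal sketch of syndrome decoding for the (9,4,4) example above.
G = [
    [1,0,0,0, 1,0,1,0,1],
    [0,1,0,0, 1,0,0,1,1],
    [0,0,1,0, 0,1,1,0,1],
    [0,0,0,1, 0,1,0,1,1],
]
H = [
    [1,1,0,0, 1,0,0,0,0],
    [0,0,1,1, 0,1,0,0,0],
    [1,0,1,0, 0,0,1,0,0],
    [0,1,0,1, 0,0,0,1,0],
    [1,1,1,1, 0,0,0,0,1],
]

def encode(d):                      # d: 4 message bits -> 9-bit codeword
    return [sum(d[i] * G[i][j] for i in range(4)) % 2 for j in range(9)]

def syndrome(r):                    # r: 9-bit received word -> 5-bit syndrome
    return [sum(H[i][j] * r[j] for j in range(9)) % 2 for i in range(5)]

# Precompute the syndrome of every single-bit error pattern.
single_bit = {tuple(syndrome([1 if j == i else 0 for j in range(9)])): i
              for i in range(9)}

def correct(r):
    s = tuple(syndrome(r))
    if any(s) and s in single_bit:  # nonzero syndrome: flip the matching bit
        r = r[:]
        r[single_bit[s]] ^= 1
    return r

c = encode([1, 1, 1, 1])            # -> [1,1,1,1,0,0,0,0,0]
r = c[:]; r[1] ^= 1                 # error in bit 2: [1,0,1,1,0,0,0,0,0]
print(syndrome(r))                  # -> [1, 0, 0, 1, 1]
print(correct(r) == c)              # -> True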
78
Burst Errors
§ Correcting single-bit errors is good
§ Similar ideas could be used to correct independent multi-bit
errors
§ But in many situations errors come in bursts: correlated multi-bit
errors (e.g., fading or burst of interference on wireless channel,
damage to storage media etc.). How does single-bit error
correction help with that?
Pangun Park (GNU)
79
Coping with Burst Errors by Interleaving
§ Well, can we think of a way to turn a B-bit error burst into B
single-bit errors?
Pangun Park (GNU)
Problem: Bits from a particular
codeword are transmitted
sequentially, so a B-bit burst
produces multi-bit errors.
Solution: interleave bits from B
different codewords. Now a B-bit
burst produces 1-bit errors in B
different codewords.
6.02 Fall 2012 Lecture 5, Slide #14
(Figure: a block of B codewords written as rows; with row-by-row transmission a B-bit burst lands in a single codeword, while with column-by-column transmission the same burst is spread over B different codewords, one bit each.)
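A small sketch of this idea (my own illustration, not from the slides), using the three (8,4,3) codewords that appear in the worked coding example later in this lecture: a 3-bit burst on the interleaved stream becomes one correctable error per codeword.

# Block interleaving: transmit column by column so a burst of up to
# B consecutive channel errors hits each codeword at most once.
codewords = [[0,1,1,0,1,1,1,1],
             [1,1,1,0,0,1,0,1],
             [1,1,0,1,0,1,1,0]]          # B = 3 codewords of length 8

def interleave(cws):
    # bit i of every codeword, then bit i+1 of every codeword, ...
    return [bit for col in zip(*cws) for bit in col]

def deinterleave(bits, B):
    return [bits[r::B] for r in range(B)]

stream = interleave(codewords)           # 011111110001100111101110
burst = stream[:]
for i in range(6, 9):                    # a 3-bit burst on the channel
    burst[i] ^= 1
rx = deinterleave(burst, 3)
# each received codeword now differs from the original in exactly one bit
print([sum(a != b for a, b in zip(c, r)) for c, r in zip(codewords, rx)])  # [1, 1, 1]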
80
Framing
§ Looking at a received bit stream, how do we know where a block
of interleaved codewords begins?
§ Physical indication (transmitter turns on, beginning of disk
sector, separate control channel)
§ Place a unique bit pattern (frame sync sequence) in the bit stream
to mark start of a block
Ø Frame = sync pattern + interleaved code word block
Ø Search for sync pattern in bit stream to find start of frame
Ø The sync pattern can't appear elsewhere in the frame (otherwise the search will get
confused), so we have to make sure no legal combination of codeword bits can
accidentally generate the sync pattern (this can be tricky...)
Ø Sync pattern can’t be protected by ECC, so errors may cause us to lose a frame
every now and then, a problem that will need to be addressed at some higher
level of the communication protocol.
Pangun Park (GNU)
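One way to realize the bit-stuffing described above (insert a 0 after every run of four 1's, sync pattern 0111110) is sketched below; the code is my own illustration, not part of the lecture material.

# Bit-stuffing keeps five consecutive 1's out of the payload, so the
# sync pattern 0111110 can only ever appear at the start of a frame.
SYNC = [0, 1, 1, 1, 1, 1, 0]

def stuff(bits):
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 4:                 # four 1's seen: force a 0 into the stream
            out.append(0)
            run = 0
    return out

def destuff(bits):
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 4:                 # the next bit is the stuffed 0: skip it
            i += 1
            run = 0
        i += 1
    return out

payload = [0,1,1,1,1,1,1,1,0,0,0,1,1,0,0,1,1,1,1,0,1,1,1,0]
frame = SYNC + stuff(payload)
assert destuff(frame[len(SYNC):]) == payload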
81
Example: Channel Coding Steps (A)
1. Break message stream into k-bit blocks.
2. Add redundant info in the form of (n-k)
parity bits to form n-bit codeword. Goal:
choose parity bits so we can correct
single-bit errors.
3. Interleave bits from a group of B
codewords to protect against B-bit burst
errors.
4. Add unique pattern of bits to start of
each interleaved codeword block so
receiver can tell how to extract blocks
from received bitstream.
5. Send new (longer) bitstream to
transmitter.
Pangun Park (GNU)
Sync pattern has five consecutive 1’s.
To prevent sync from appearing in
message, “bit-stuff” 0’s after any
sequence of four 1’s in the message.
This step is easily reversed at receiver
(just remove 0 after any sequence of
four consecutive 1’s in the message).
6.02 Fall 2012 Lecture 5, Slide #16
Summary: example channel coding steps
Message stream: 011011101101
Step 1 (k = 4): 0110, 1110, 1101
Step 2 ((8,4,3) code): 01101111, 11100101, 11010110
Step 3 (B = 3, interleave): 011111110001100111101110
Step 4 (sync = 0111110, bit-stuffed): 011111001111011100011001111001110
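The parity bits in this example are consistent with a 2x2 row/column parity construction; the slide only calls it an "(8,4,3) code", so the generator below is an assumption on my part that happens to reproduce the codewords shown in step 2.

# Step 2 only: (8,4,3) encoding assumed to be 2x2 row/column parity.
def encode843(d):                       # d = [D1, D2, D3, D4]
    D1, D2, D3, D4 = d
    return d + [(D1 + D2) % 2, (D3 + D4) % 2, (D1 + D3) % 2, (D2 + D4) % 2]

for block in ("0110", "1110", "1101"):
    print("".join(str(b) for b in encode843([int(c) for c in block])))
# -> 01101111, 11100101, 11010110   (matches the slide)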
82
Example: Channel Coding Steps (B)
1. Search through received bit stream
for sync pattern, extract
interleaved codeword block
2. De-interleave the bits to form B n-
bit codewords
3. Check parity bits in each code
word to see if an error has
occurred. If there’s a single-bit
error, correct it.
4. Extract k message bits from each
corrected codeword and
concatenate to form message
stream.
Pangun Park (GNU)
6.02 Fall 2012 Lecture 5, Slide #17
Summary: example error correction steps
Received bitstream: 011111001111011100100001111001110
Step 1 (sync = 0111110, extract and de-stuff): 011111110010000111101110
Step 2 (B = 3, n = 8, de-interleave): 01100111, 11110101, 11000110
Step 3 ((8,4,3) code, correct single-bit errors): 01101111, 11100101, 11010110
Step 4 (extract k = 4 message bits): 0110, 1110, 1101 → 011011101101
83
Overview
§ Electromagnetic Spectrum
§ Basic Signals
Ø Frequency, Wavelength, and Phase
§ Brief Antenna
§ Line Coding
§ Modulation
§ Channel Coding
Ø Hamming Distance, Block Codes
§ Multiple Access Methods
Ø Spread Spectrum Technology
Pangun Park (GNU)
84
Multiple Access Methods
§ Time Division Multiple Access
§ Frequency Division Multiple Access
§ Code Division Multiple Access
Pangun Park (GNU)
©2016 Raj Jain
http://www.cse.wustl.edu/~jain/cse574-16/
Washington University in St. Louis
85
Frequency multiplex
§ Separation of the whole spectrum into smaller frequency bands
§ A channel gets a certain band of the spectrum for the whole time
§ Advantages:
Ø No dynamic coordination necessary
Ø Works also for analog signals
§ Disadvantages:
Ø Waste of bandwidth if the traffic is distributed unevenly
Ø inflexible
Ø Guard spaces
Pangun Park (CNU)
86
Time multiplex
§ A channel gets the whole spectrum for a certain amount of time
§ Advantages:
Ø only one carrier in the medium at any time
Ø throughput high even for many users
§ Disadvantages:
Ø precise synchronization necessary
Pangun Park (CNU)
87
Time and frequency multiplex
§ Combination of both methods
§ A channel gets a certain frequency band for a certain amount of
time
§ Example: GSM
§ Advantages:
Ø better protection against tapping
Ø protection against frequency selective interference
§ but:
Ø precise coordination required
Pangun Park (CNU)
88
Code multiplex
§ Each channel has a unique code
§ All channels use the same spectrum at the same time
§ Advantages:
Ø bandwidth efficient
Ø no coordination and synchronization necessary
Ø good protection against interference and tapping
§ Disadvantages:
Ø more complex signal regeneration
§ Implemented using spread spectrum technology
Pangun Park (CNU)
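A toy sketch (my own, not from the slides) of the code-multiplexing idea: two users transmit at the same time in the same band using orthogonal +1/-1 codes, and correlating the received sum with each code separates the two data streams again.

# Two users share the channel; orthogonal codes keep them separable.
code_a = [+1, +1, -1, -1]                  # dot(code_a, code_b) = 0
code_b = [+1, -1, +1, -1]

def tx(bits, code):                        # bit 1 -> +code, bit 0 -> -code
    return [(+c if b else -c) for b in bits for c in code]

def rx(signal, code):
    out = []
    for i in range(0, len(signal), len(code)):
        corr = sum(s * c for s, c in zip(signal[i:i + len(code)], code))
        out.append(1 if corr > 0 else 0)   # correlate and decide
    return out

channel = [a + b for a, b in zip(tx([1, 0, 1], code_a), tx([0, 0, 1], code_b))]
print(rx(channel, code_a))                 # -> [1, 0, 1]
print(rx(channel, code_b))                 # -> [0, 0, 1]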
89
Spread spectrum technology
§ Problem of radio transmission: frequency-dependent fading can
wipe out narrowband signals for the duration of the interference
Pangun Park (CNU)
Spreading and frequency selective fading
(Figure: channel quality vs. frequency for narrowband channels 1-6 separated by guard spaces; a frequency-selective dip in channel quality can wipe out an entire narrowband signal.)
90
Solution: Spreading and frequency selective fading
Pangun Park (CNU)
(Figure: the same spectrum occupied by spread-spectrum channels; each signal is spread over the whole band, so a frequency-selective fade removes only a small part of each signal's power.)
91
Spread spectrum technology
§ Problem of radio transmission: frequency-dependent fading can
wipe out narrowband signals for the duration of the interference
§ Solution: spread the narrowband signal into a broadband signal using a
special code; this gives protection against narrowband interference
§ Side effects:
Ø coexistence of several signals without dynamic coordination
Ø tap-proof
§ Alternatives: Direct Sequence, Frequency Hopping
Pangun Park (CNU)
(Figure: power vs. frequency; the transmitter spreads the narrowband signal over a wide band, and detection at the receiver despreads the signal back to narrowband while spreading any narrowband interference.)
92
Spread spectrum technology
§ Protection against narrow band interference
§ Tightly coupled to CDM
Ø coexistence of several signals without dynamic coordination
Ø High security
§ Military use
§ Overlay of new spread spectrum (SS) systems on the same spectrum as existing narrowband (NB) systems
§ Civil applications
Ø IEEE802.11, Bluetooth, UMTS
§ Disadvantages
Ø High complexity
Ø Large transmission bandwidth
§ Alternatives: Direct Sequence, Frequency Hopping
Pangun Park (CNU)
93
Frequency Hopping Spread Spectrum
Frequency Hopping Spread Spectrum
§ Pseudo-random frequency hopping
§ Spreads the power over a wide spectrum
Ø Spread Spectrum
§ Developed initially for military
§ Patented by actress Hedy Lamarr
§ Narrowband interference can't jam
Pangun Park (GNU)
3-19
©2016 Raj Jain
http://www.cse.wustl.edu/~jain/cse574-16/
Washington University in St. Louis
94
FH Spectrum
Pangun Park (GNU)
3-20
©2016 Raj Jain
http://www.cse.wustl.edu/~jain/cse574-16/
Washington University in St. Louis
Spectrum
(Figure: (a) Normal: signal power concentrated in a narrow band above the noise; (b) Frequency Hopping: the same signal power spread over a wide band of hop frequencies.)
95
FHSS (Frequency Hopping Spread Spectrum)
Pangun Park (CNU)
(Block diagram: transmitter: user data → modulator → narrowband signal → second modulator driven by a frequency synthesizer following the hopping sequence → spread transmit signal; receiver: received signal → demodulator driven by a synthesizer with the same hopping sequence → narrowband signal → demodulator → data.)
96
FHSS (Frequency Hopping Spread Spectrum)
§ Example:
Ø Bluetooth (1600 hops/sec on 79 carriers)
§ Advantages
Ø frequency selective fading and interference limited to short period
Ø simple implementation
Ø uses only small portion of spectrum at any time
§ Disadvantages
Ø not as robust as DSSS
Ø simpler to detect
Pangun Park (CNU)
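As a toy illustration only (the real Bluetooth hop sequence is derived deterministically from the master's device address and clock, not from a seeded PRNG), a pseudo-random hop schedule over 79 carriers might be sketched like this:

# Toy hop schedule: 79 carriers at 2402..2480 MHz, 1600 hops/s (625 us/hop).
import random

carriers_mhz = [2402 + k for k in range(79)]      # 1 MHz spaced channels
rng = random.Random(42)                            # shared seed = hopping sequence

def hop_sequence(n_hops):
    return [rng.choice(carriers_mhz) for _ in range(n_hops)]

for t, f in enumerate(hop_sequence(5)):
    print(f"hop {t}: t = {t * 625} us, carrier = {f} MHz")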
97
Direct-Sequence Spread Spectrum
§ Spreading factor = Code bits/data bit, 10-100 commercial (Min 10
by FCC), 10,000 for military
§ Signal bandwidth 10 × data bandwidth
§ Code sequence synchronization
§ Correlation between codes determines mutual interference; orthogonal codes do not interfere
Pangun Park (GNU)
3-21
©2016 Raj Jain
http://www.cse.wustl.edu/~jain/cse574-16/
Washington University in St. Louis
98
DS Spectrum
Time Domain Frequency Domain
Pangun Park (GNU)
3-22
©2016 Raj Jain
http://www.cse.wustl.edu/~jain/cse574-16/
Washington University in St. Louis
(Figure: (a) the data signal and (b) the chipping code shown in the time domain and the frequency domain; the code varies much faster than the data and therefore occupies a much wider spectrum.)
99
DSSS (Direct Sequence Spread Spectrum)
§ XOR of the signal with pseudo-random number (chipping
sequence)
Ø many chips per bit (e.g., 128) result in higher bandwidth of the signal
§ Advantages
Ø reduces frequency selective fading
Ø in cellular networks
• base stations can use the same frequency range
• several base stations can detect and recover the signal
• soft handover
§ Disadvantages
Ø precise power control necessary
Pangun Park (CNU)
(Figure: the user data XORed with a much faster chipping sequence gives the resulting spread signal; tb = bit period, tc = chip period, with many chips per bit so tb >> tc.)
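A toy sketch (my own illustration) of the XOR spreading and correlation-based despreading, using a short 8-chip sequence; real systems use far longer codes, but the spread/correlate/decide structure is the same.

# Each data bit is XORed with an 8-chip sequence; the receiver XORs
# with the same sequence and integrates over one bit period.
chip = [0, 1, 1, 0, 1, 0, 1, 0]                  # chipping sequence (8 chips/bit)

def spread(bits):
    return [b ^ c for b in bits for c in chip]

def despread(chips):
    out = []
    for i in range(0, len(chips), len(chip)):
        block = chips[i:i + len(chip)]
        s = sum(x ^ c for x, c in zip(block, chip))   # correlator / integrator
        out.append(1 if s > len(chip) // 2 else 0)    # decision
    return out

tx = spread([0, 1, 1, 0])
tx[3] ^= 1; tx[12] ^= 1                          # a couple of chip errors
print(despread(tx))                              # -> [0, 1, 1, 0]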
100
DSSS (Direct Sequence Spread Spectrum)
Pangun Park (CNU)
(Block diagram: transmitter: user data × chipping sequence → modulator with radio carrier → spread-spectrum transmit signal; receiver: demodulator with radio carrier → × chipping sequence → integrator (correlator) → decision → data.)
101
Duplexing
Duplexing
§ Duplex = Bi-Directional Communication
§ Frequency division duplexing (FDD) (Full-Duplex)
§ Time division duplex (TDD): Half-duplex
§ Many LTE deployments will use TDD.
Ø Allows more flexible sharing of DL/UL data rate
Ø Does not require paired spectrum
Ø Easy channel estimation: simpler transceiver design
Ø Con: all neighboring base stations must be time-synchronized
Pangun Park (GNU)
3-25
©2016 Raj Jain
http://www.cse.wustl.edu/~jain/cse574-16/
Washington University in St. Louis
102
Pangun Park (GNU)

Mobile & Satellite Communications Introduction to Wireless Coding and Modulation

  • 1.
    1 Mobile & SatelliteCommunications 이동 및 위성 통신 Lecture 3: Introduction to Wireless Coding and Modulation Pangun Park Chungnam National University Information Communications Engineering
  • 2.
    2 Overview Overvie § Electromagnetic Spectrum §Basic Signals Ø Frequency, Wavelength, and Phase § Brief Antenna § Line Coding § Modulation § Channel Coding Ø Hamming Distance, Block Codes § Multiple Access Methods Ø Spread Spectrum Technology Pangun Park (GNU)
  • 3.
    3 Electromagnetic Spectrum Electromagnetic Spectrum PangunPark (GNU) Mobile and Wireless Networking 2013 / 2014 24 Frequencies for communication VLF = Very Low Frequency UHF = Ultra High Frequency LF = Low Frequency SHF = Super High Frequency MF = Medium Frequency EHF = Extra High Frequency HF = High Frequency UV = Ultraviolet Light VHF = Very High Frequency Frequency and wave length: !λ = c/f wave length λ, speed of light c ≅ 3x108m/s, frequency f 1 Mm 300 Hz 10 km 30 kHz 100 m 3 MHz 1 m 300 MHz 10 mm 30 GHz 100 µm 3 THz 1 µm 300 THz visible light VLF LF MF HF VHF UHF SHF EHF infrared UV optical transmission coax cable twisted pair
  • 4.
    4 Electromagnetic Spectrum Electromagnetic Spectrum §Wireless communication uses 100 kHz to 60 GHz Pangun Park (GNU) 3-8 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Electromagnetic Spectrum Electromagnetic Spectrum ‰ Wireless communication uses 100 kHz to 60 GHz Wireless
  • 5.
    5 Frequencies for mobilecommunication § UHF-ranges for mobile cellular systems Ø simple, small antenna for cars Ø deterministic propagation characteristics, reliable connections § SHF and higher for directed radio links, satellite communication Ø small antenna, focusing Ø large bandwidth available § Wireless LANs use frequencies in UHF to SHF spectrum Ø some systems planned up to EHF Ø limitations due to absorption by water (>5 GHz) and oxygen (60 GHz) molecules (resonance frequencies) • weather dependent fading, signal loss caused by heavy rainfall etc. Pangun Park (GNU) Frequencies for communication VLF = Very Low Frequency UHF = Ultra High Frequency LF = Low Frequency SHF = Super High Frequency MF = Medium Frequency EHF = Extra High Frequency 1 Mm 300 Hz 10 km 30 kHz 100 m 3 MHz 1 m 300 MHz 10 mm 30 GHz 100 µm 3 THz 1 µm 300 THz visible light VLF LF MF HF VHF UHF SHF EHF infrared UV optical transmission coax cable twisted pair
  • 6.
    6 Licensed vs Unlicensedbands § Mobile cellular typically uses licensed bands Ø Spectrum licensed to operator Ø GSM: • 900 MHz, 1800 MHz (Europe) • 850 MHz, 1900 MHz (US) • other bands Ø UMTS, LTE Ø See e.g., http://www.frequentieland.nl/wie.htm § WLAN typically uses unlicensed bands Ø 2.4 GHz Industrial, Scientific, and Medical (ISM) band: • IEEE 802.11b/g, Bluetooth, Zigbee, microwave oven § 5.8 GHz ISM band: Ø IEEE 802.11a Pangun Park (CNU)
  • 7.
    7 Licensed bands :Korea 2016 Pangun Park (GNU)
  • 8.
    8 Pangun Park (GNU) Mobileand Wireless Networking 25
  • 9.
    9 Overview Overvie § Electromagnetic Spectrum §Basic Signals Ø Frequency, Wavelength, and Phase § Brief Antenna § Line Coding § Modulation § Channel Coding Ø Hamming Distance, Block Codes § Multiple Access Methods Ø Spread Spectrum Technology Pangun Park (GNU)
  • 10.
    10 Basic: Decibels (Quiz) Decibels § § PangunPark (GNU) 3-9 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Decibels Decibels ‰ Attenuation = Log10 Pin Pout ‰ Example 1: Pin = 10 mW, Pout=5 mW Attenuation = 10 log 10 (10/5) = 10 log 10 2 = 3 dB ‰ Example 2: Pin = 100mW, Pout=1 mW Attenuation = 10 log 10 (100/1) = 10 log 10 100 = 20 dB Bel Pin Pout decibel ‰ Attenuation = 10 Log10 Vin Vout decibel ‰ Attenuation = 20 Log10 3-9 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Decibels Decibels ‰ Attenuation = Log10 Pin Pout ‰ Example 1: Pin = 10 mW, Pout=5 mW Attenuation = 10 log 10 (10/5) = 10 log 10 2 = 3 dB ‰ Example 2: Pin = 100mW, Pout=1 mW Attenuation = 10 log 10 (100/1) = 10 log 10 100 = 20 dB Bel Pin Pout decibel ‰ Attenuation = 10 Log10 Vin Vout decibel ‰ Attenuation = 20 Log10
  • 11.
    11 Basic: Decibels Decibels § § Pangun Park(GNU) 3-9 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Decibels Decibels ‰ Attenuation = Log10 Pin Pout ‰ Example 1: Pin = 10 mW, Pout=5 mW Attenuation = 10 log 10 (10/5) = 10 log 10 2 = 3 dB ‰ Example 2: Pin = 100mW, Pout=1 mW Attenuation = 10 log 10 (100/1) = 10 log 10 100 = 20 dB Bel Pin Pout decibel ‰ Attenuation = 10 Log10 Vin Vout decibel ‰ Attenuation = 20 Log10 3-9 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Decibels Decibels ‰ Attenuation = Log10 Pin Pout ‰ Example 1: Pin = 10 mW, Pout=5 mW Attenuation = 10 log 10 (10/5) = 10 log 10 2 = 3 dB ‰ Example 2: Pin = 100mW, Pout=1 mW Attenuation = 10 log 10 (100/1) = 10 log 10 100 = 20 dB Bel Pin Pout decibel ‰ Attenuation = 10 Log10 Vin Vout decibel ‰ Attenuation = 20 Log10
  • 12.
    12 Signals I § Physicalrepresentation of data § Function of time and location § Signal parameters: parameters representing the value of data § Classification Ø continuous time/discrete time Ø continuous values/discrete values Ø analog signal = continuous time and continuous values Ø digital signal = discrete time and discrete values Pangun Park (CNU)
  • 13.
    13 Frequency, Period, andPhase § Signal parameters of periodic signals: § Frequency is measured in Cycles/sec or Hertz Pangun Park (GNU) 3-3 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Frequency, Period, and Phase Frequency, Period, and Phase ‰ A Sin(2Sft + T), A = Amplitude, f=Frequency, T = Phase, Period T = 1/f, Frequency is measured in Cycles/sec or Hertz Cycle Amplitude = 0.5 Phase = 45° ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Frequency, Period, and Phase Frequency, Period, and Phase ‰ A Sin(2Sft + T), A = Amplitude, f=Frequency, T = Phase, Period T = 1/f, Frequency is measured in Cycles/sec or Hertz Cycle Amplitude = 0.5 Phase = 45°
  • 14.
    14 Wavelength Wavelength § Distance occupiedby one cycle § Distance between two points of corresponding phase in two consecutive cycles § Wavelength = § Assuming signal velocity v Pangun Park (GNU) 3-5 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Wavelength Wavelength ‰ Distance occupied by one cycle ‰ Distance between two points of corresponding phase in two consecutive cycles ‰ Wavelength = O ‰ Assuming signal velocity v ¾ O = vT ¾ Of = v ¾ c = 3×108 m/s (speed of light in free space) = 300 m/Ps Distance Amplitude O ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Wavelength Wavelength ‰ Distance occupied by one cycle ‰ Distance between two points of corresponding phase in two consecutive cycles ‰ Wavelength = O ‰ Assuming signal velocity v ¾ O = vT ¾ Of = v ¾ c = 3×108 m/s (speed of light in free space) = 300 m/Ps Distance Amplitude O Wavelength Wavelength ‰ Distance occupied by one cycle ‰ Distance between two points of corresponding phase in two consecutive cycles ‰ Wavelength = O ‰ Assuming signal velocity v ¾ O = vT ¾ Of = v ¾ c = 3×108 m/s (speed of light in free space) = 300 m/Ps Distance Amplitude O
  • 15.
    15 Example (Quiz) Example § Frequency= 2.5 GHz § Wavelength? Pangun Park (GNU)
  • 16.
    16 Example Example § Frequency =2.5 GHz Pangun Park (GNU) 3-6 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Example Example ‰ Frequency = 2.5 GHz
  • 17.
    17 Phase Phase § Sine wavewith a phase of 45° § In-phase component I + Quadrature component Q Pangun Park (GNU) 3-4 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Phase Phase ‰ Sine wave with a phase of 45° I=Sin(2Sft) Q=Cos(2Sft) Phase In-phase component I + Quadrature component Q Cos(2Sft) Sin(2Sft) Sin(2Sft+S/4) 3-4 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Phase Phase ‰ Sine wave with a phase of 45° I=Sin(2Sft) Q=Cos(2Sft) Phase In-phase component I + Quadrature component Q Cos(2Sft) Sin(2Sft) Sin(2Sft+S/4)
  • 18.
    18 Fourier series ofperiodic signals Pangun Park (CNU) Mobile and Wireless Networking 2013 / 2014 29 Fourier representation of periodic signals ) 2 cos( ) 2 sin( 2 1 ) ( 1 1 nft b nft a c t g n n n n ! ! " " # = # = + + = 1 0 1 0 t t ideal periodic signal real composition (based on harmonics)
  • 19.
    19 Fourier series ofperiodic signals Pangun Park (CNU) Mobile and Wireless Networking 2013 / 2014 29 Fourier representation of periodic signals ) 2 cos( ) 2 sin( 2 1 ) ( 1 1 nft b nft a c t g n n n n ! ! " " # = # = + + = 1 0 1 0 t t ideal periodic signal real composition (based on harmonics)
  • 20.
    20 Fourier series ofperiodic signals Pangun Park (CNU)
  • 21.
    21 Signals II § Differentrepresentations of signals Ø amplitude (amplitude domain) Ø frequency spectrum (frequency domain) Ø phase state diagram (amplitude M and phase φ in polar coordinates) § Composed signals transferred into frequency domain using Fourier transformation § Digital signals need Ø infinite frequencies for perfect transmission Ø modulation with a carrier frequency for transmission (analog signal!) Pangun Park (CNU) Mobile and Wireless Networking 30 ! Different representations of signals ! amplitude (amplitude domain) ! frequency spectrum (frequency domain) ! phase state diagram (amplitude M and phase ϕ in polar coordinates) ! Composed signals transferred into frequency domain using Fourier transformation ! Digital signals need ! infinite frequencies for perfect transmission ! modulation with a carrier frequency for transmission (analog signal!) Signals II f [Hz] A [V] ϕ I= M cos ϕ Q = M sin ϕ ϕ A [V] t[s]
  • 22.
    22 Time and FrequencyDomains Time and Frequency Domains Pangun Park (GNU) 3-7 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Time and Frequency Domains Time and Frequency Domains Frequency Amplitude Frequency Amplitude Frequency Amplitude f 3f A A f 3f A/3 A/3
  • 23.
    23 Fourier Transformation § Generalizationof the complex Fourier series § Decompose a function of time into the frequencies § Let us have a simple exercise!! Pangun Park (CNU)
  • 24.
    24 Example: Fourier Transformation §Example: Original function oscillation 3Hz Pangun Park (CNU)
  • 25.
    25 Overview Overvie § Electromagnetic Spectrum §Basic Signals Ø Frequency, Wavelength, and Phase § Brief Antenna § Line Coding § Modulation § Channel Coding Ø Hamming Distance, Block Codes § Multiple Access Methods Ø Spread Spectrum Technology Pangun Park (GNU)
  • 26.
    26 Importance of Antenna §Ex1: Telecommunication § Ex2: IoT Sensor network and Missile control Pangun Park (CNU)
  • 27.
    27 Antennas: isotropic radiator §Radiation and reception of electromagnetic waves, coupling of wires to space for radio transmission § Isotropic radiator: equal radiation in all directions (three dimensional) - only a theoretical reference antenna § Real antennas always have directive effects (vertically and/or horizontally) § Radiation pattern: measurement of radiation around an antenna Pangun Park (CNU) ! Radiation and reception of electromagnetic waves, coupling of wires to space for radio transmission ! Isotropic radiator: equal radiation in all directions (three dimensional) - only a theoretical reference antenna ! Real antennas always have directive effects (vertically and/or horizontally) ! Radiation pattern: measurement of radiation around an antenna Antennas: isotropic radiator z y x z y x ideal isotropic radiator
  • 28.
    28 Antennas: simple dipoles §Real antennas are not isotropic radiators but, e.g., dipoles with lengths λ/4 on car roofs or λ/2 as Hertzian dipole è shape of antenna proportional to wavelength § Example: Radiation pattern of a simple Hertzian dipole § Gain: maximum power in the direction of the main lobe compared to the power of an isotropic radiator (with the same average power) Pangun Park (CNU) side view (xy-plane) x y side view (yz-plane) z y top view (xz-plane) x z simple dipole λ/4 λ/2 Antennas: simple dipoles Real antennas are not isotropic radiators but, e.g., dipoles with lengths λ/4 on car roofs or λ/2 as Hertzian dipole # shape of antenna proportional to wavelength Example: Radiation pattern of a simple Hertzian dipole Gain: maximum power in the direction of the main lobe compared to the power of an isotropic radiator (with the same average power) 32 side view (xy-plane) x y side view (yz-plane) z y top view (xz-plane) x z simple dipole λ/4 λ/2 Antennas: simple dipoles Real antennas are not isotropic radiators but, e.g., dipoles with lengths λ/4 on car roofs or λ/2 as Hertzian dipole # shape of antenna proportional to wavelength Example: Radiation pattern of a simple Hertzian dipole Gain: maximum power in the direction of the main lobe compared to the power of an isotropic radiator (with the same average power)
  • 29.
    29 Antennas: diversity § Groupingof 2 or more antennas Ø multi-element antenna arrays § Antenna diversity Ø switched diversity, selection diversity • receiver chooses antenna with largest output Ø diversity combining • combine output power to produce gain • cophasing needed to avoid cancellation Ø Smart antennas • Beam forming Pangun Park (CNU) + λ/4 λ/2 λ/4 ground plane λ/2 λ/2 + λ/2 Antennas: diversity Grouping of 2 or more antennas ! multi-element antenna arrays Antenna diversity ! switched diversity, selection diversity " receiver chooses antenna with largest output ! diversity combining " combine output power to produce gain " cophasing needed to avoid cancellation ! Smart antennas " beam forming
  • 30.
    30 Overview Overvie § Electromagnetic Spectrum §Basic Signals Ø Frequency, Wavelength, and Phase § Brief Antenna § Line Coding § Modulation § Channel Coding Ø Hamming Distance, Block Codes § Multiple Access Methods Ø Spread Spectrum Technology Pangun Park (GNU)
  • 31.
    31 Coding Terminology § Signalelement: Pulse (of constant amplitude, frequency, phase) § Modulation Rate: 1/Duration of the smallest element =Baud rate § Data Rate: Bits per second Pangun Park (GNU) 3-10 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Coding Terminology Coding Terminology ‰ Signal element: Pulse (of constant amplitude, frequency, phase) = Symbol ‰ Modulation Rate: 1/Duration of the smallest element =Baud rate ‰ Data Rate: Bits per second Pulse Bit +5V 0 -5V +5V 0 -5V 1 0
  • 32.
    32 Line Coding forDigital Communication § Goal is to transmit binary data (e.g., PCM encoded voice, MPEG encoded video, financial information) Ø Represent digital data by using digital signals Ø Digital data stream is encoded into a sequence of pulses for transmission through a base-band analog channel § Transmission distance is large enough that communication link bandwidth is comparable to signal bandwidth. § Multiple links may be used, with regenerative repeaters Pangun Park (GNU)
  • 33.
  • 34.
    34 Data Transfer inDigital System § In a synchronous digital system, a common clock signal is used by all devices. data + clock § Multiple data signals can be transmitted in parallel using a single clock signal. § Serial peripheral communication schemes (RS-232, USB, FireWire) use various clock extraction methods Ø RS-232 is asynchronous with (up to) 8 data bits preceded by a start bit (0) and followed by optional parity bit and stop bit (1); clock recovery by “digital phase-locked loop” Ø USB needs a real phase-locked loop and uses bit stuffing to ensure enough transitions Ø FireWire has differential data and clock pairs; clock transitions only when data does not Pangun Park (GNU)
  • 35.
    35 Serial Communication: RS-232Signaling § RS-232 is a standard for asynchronous serial communication. Ø NRZ Encoding § Each transition resynchronizes the receiver’s bit clock. § Asynchronous here means “asynchronous at the byte level,” but the bits are still synchronized; their durations are the same. Pangun Park (GNU) Serial Communication: RS-232 Signaling RS-232 is a standard for asynchronous serial communication. Each transition resynchronizes the receiver’s bit clock. EE 179, May 12, 2014 Lecture 18, Page 17
  • 36.
    36 Implementation: Differential ManchesterCoding § Microcontroller : Interrupt Service Routine Pangun Park (GNU)
  • 37.
    37 Overview Overvie § Electromagnetic Spectrum §Basic Signals Ø Frequency, Wavelength, and Phase § Brief Antenna § Line Coding § Modulation § Channel Coding Ø Hamming Distance, Block Codes § Multiple Access Methods Ø Spread Spectrum Technology Pangun Park (GNU)
  • 38.
    38 Modulation and Demodulation PangunPark (CNU) 4 Modulation and demodulation synchronization decision digital data analog demodulation radio carrier analog baseband signal 101101001 radio receiver digital modulation digital data analog modulation radio carrier analog baseband signal 101101001 radio transmitter Communication System Block Diagram (Advanced) Encoder Channel Modulator Encrypt Demodulator Decrypt Decoder Channel Source Encoder Sink Source Source Decoder Noise Channel ! Source encoder compresses message to remove redundancy ! Encryption protects against eavesdroppers and false messages ! Channel encoder adds redundancy for error protection ! Modulator converts digital inputs to signals suitable for physical channel EE 179, April 2, 2014 Lecture 2, Page 12
  • 39.
    39 Modulation Modulation § Digital versionof modulation is called keying § Amplitude Shift Keying (ASK): § Frequency Shift Keying (FSK): § Phase Shift Keying (PSK): Binary PSK (BPSK) Pangun Park (GNU) 3-11 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Modulation Modulation ‰ Digital version of modulation is called keying ‰ Amplitude Shift Keying (ASK): 0 1 1 0 ‰ Frequency Shift Keying (FSK): ‰ Phase Shift Keying (PSK): Binary PSK (BPSK) 3-11 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Modulation Modulation ‰ Digital version of modulation is called keying ‰ Amplitude Shift Keying (ASK): 0 1 1 0 ‰ Frequency Shift Keying (FSK): ‰ Phase Shift Keying (PSK): Binary PSK (BPSK) 3-11 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Modulation Modulation ‰ Digital version of modulation is called keying ‰ Amplitude Shift Keying (ASK): 0 1 1 0 ‰ Frequency Shift Keying (FSK): ‰ Phase Shift Keying (PSK): Binary PSK (BPSK)
  • 40.
    40 Modulation (Cont) Modulation (Cont) §Differential BPSK: Does not require original carrier § Quadrature Phase Shift Keying (QPSK): § In-phase (I) and Quadrature (Q) or 90 ° components are added Pangun Park (GNU) 3-12 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Modulation (Cont) Modulation (Cont) ‰ Differential BPSK: Does not require original carrier 0 1 1 0 ‰ Quadrature Phase Shift Keying (QPSK): 11=45° 10=135° 00=225° 01=315° 11 10 00 01 0 1 Ref: Electronic Design, “Understanding Modern Digital Modulation Techniques,” http://electronicdesign.com/communications/understanding-modern-digital-modulation-techniques ‰ In-phase (I) and Quadrature (Q) or 90 ° components are added 3-12 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Modulation (Cont) Modulation (Cont) ‰ Differential BPSK: Does not require original carrier 0 1 1 0 ‰ Quadrature Phase Shift Keying (QPSK): 11=45° 10=135° 00=225° 01=315° 11 10 00 01 0 1 Ref: Electronic Design, “Understanding Modern Digital Modulation Techniques,” http://electronicdesign.com/communications/understanding-modern-digital-modulation-techniques ‰ In-phase (I) and Quadrature (Q) or 90 ° components are added
  • 41.
    41 QAM QAM § Quadrature Amplitudeand Phase Modulation § 4-QAM, 16-QAM, 64-QAM, 256-QAM § Used in DSL and wireless networks § 4-QAM: 2 bits/symbol, 16-QAM: 4 bits/symbol, Pangun Park (GNU) 3-13 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis QAM QAM ‰ Quadrature Amplitude and Phase Modulation ‰ 4-QAM, 16-QAM, 64-QAM, 256-QAM ‰ Used in DSL and wireless networks Binary 4-QAM 0 1 10 00 01 11 16-QAM I Q I Q I Q Amplitude ‰ 4-QAMŸ 2 bits/symbol, 16-QAM Ÿ4 bits/symbol, …
  • 42.
    42 Channel Capacity § Capacity= Maximum data rate for a channel § Nyquist Theorem: Bandwidth = B, Data rate 2 B § Bi-level Encoding: Data rate = 2 x Bandwidth § Multilevel: Data rate = 2 x Bandwidth x log2 M Ø M = Number of levels Ø Example: M=4, Capacity = 4 x Bandwidth Pangun Park (GNU) 3-14 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Channel Capacity Channel Capacity ‰ Capacity = Maximum data rate for a channel ‰ Nyquist Theorem:Bandwidth = B Data rate < 2 B ‰ Bi-level Encoding: Data rate = 2 u Bandwidth 0 5V ‰ Multilevel: Data rate = 2 u Bandwidth u log 2 M M = Number of levels Example: M=4, Capacity = 4 u Bandwidth Worst Case 3-14 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Channel Capacity Channel Capacity ‰ Capacity = Maximum data rate for a channel ‰ Nyquist Theorem:Bandwidth = B Data rate < 2 B ‰ Bi-level Encoding: Data rate = 2 u Bandwidth 0 5V ‰ Multilevel: Data rate = 2 u Bandwidth u log 2 M M = Number of levels Example: M=4, Capacity = 4 u Bandwidth Worst Case
  • 43.
    43 Shannon's Theorem (Quiz) Shannon'sTheorem § Bandwidth = B Hz, Signal-to-noise ratio = S/N § Maximum number of bits/sec = B log2 (1+S/N) Pangun Park (GNU) 3-15 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Shannon's Theorem Shannon's Theorem ‰ Bandwidth = B Hz Signal-to-noise ratio = S/N ‰ Maximum number of bits/sec = B log2 (1+S/N) ‰ Example: Phone wire bandwidth = 3100 Hz S/N = 30 dB 10 Log 10 S/N = 30 Log 10 S/N = 3 S/N = 103 = 1000 Capacity = 3100 log 2 (1+1000) = 30,894 bps Capacity?
  • 44.
    44 Shannon's Theorem Shannon's Theorem §Bandwidth = B Hz, Signal-to-noise ratio = S/N § Maximum number of bits/sec = B log2 (1+S/N) Pangun Park (GNU) 3-15 ©2016 Raj Jain http://www.cse.wustl.edu/~jain/cse574-16/ Washington University in St. Louis Shannon's Theorem Shannon's Theorem ‰ Bandwidth = B Hz Signal-to-noise ratio = S/N ‰ Maximum number of bits/sec = B log2 (1+S/N) ‰ Example: Phone wire bandwidth = 3100 Hz S/N = 30 dB 10 Log 10 S/N = 30 Log 10 S/N = 3 S/N = 103 = 1000 Capacity = 3100 log 2 (1+1000) = 30,894 bps
  • 45.
    45 Overview Overvie § Electromagnetic Spectrum §Basic Signals Ø Frequency, Wavelength, and Phase § Brief Antenna § Line Coding § Modulation § Channel Coding Ø Hamming Distance, Block Codes § Multiple Access Methods Ø Spread Spectrum Technology Pangun Park (GNU)
  • 46.
    46 Channel Coding § Thispart is about reliable transmission of this digital information over an unreliable physical medium. Ø Shannon showed that reliable communications can be achieved by proper coding of information to be transmitted provided that the rate of information transmission is below the channel capacity. Ø Coding is achieved by adding properly designed redundancy to each message before its transmission. The added redundancy is used for error control. Pangun Park (GNU) Communication System Block Diagram (Advanced) Encoder Channel Modulator Encrypt Demodulator Decrypt Decoder Channel Source Encoder Sink Source Source Decoder Noise Channel ! Source encoder compresses message to remove redundancy ! Encryption protects against eavesdroppers and false messages ! Channel encoder adds redundancy for error protection ! Modulator converts digital inputs to signals suitable for physical channel EE 179, April 2, 2014 Lecture 2, Page 12
  • 47.
    47 Concept of ChannelCoding (Block Code) Pangun Park (GNU) 3/74 ! Concept of Channel Coding Encoding Decoding Channel coding Fall 2012 Lecture 3, Slide #21 The problem with no coding is that the two valid codewords (0 and 1) also have a Hamming distance of 1. So a single-bit error changes a valid codeword into another valid codeword… What is the Hamming Distance of the replication code? 1 0 heads tails single-bit error
  • 48.
    48 Simple Repetition Code §Replication code to reduce decoding error § Code: Bit “b” coded as “bb...b” (n times) § Channel coding Pangun Park (GNU) How to Introduce Redundancy? epetition Code arity Check Code (We can save one bit!) 0 0 0 0 00 0 0 1 1 01 1 1 0 0 10 1 1 1 1 11 0 0 0 0 0 0 1 1 1 1 0 0 1 1 1 1 Original information Encoded information Channel Received information 00 01 10 11 Decoded information Erased bit Encoding Decoding 0 0 0 00 0 1 1 01 1 1 0 10 1 0 1 11 0 0 0 0 1 1 1 1 0 1 0 1 Original information Encoded information Received information 00 01 10 11 Decoded information Erased bit Channel Encoding Decoding 7/74 ! Encoding of an [n, k] Block Code • Redundancy r = n − k
  • 49.
    49 Simple Repetition Code §Prob(decoding error) over BSC with p=0.01 § Exponential fall-off (note log scale) But huge overhead (low code rate) Pangun Park (GNU) 6.02 Fall 2012 Lecture 3, Slide #11 Replication Code to reduce decoding error Replication factor, n (1/code_rate) Prob(decoding error) over BSC w/ p=0.01 Code: Bit b coded as bb…b (n times) Exponential fall-off (note log scale) But huge overhead (low code rate) We can do a lot better! Replication factor, n Prob. Of decoding error
  • 50.
    50 Basic Problems inCoding Theory § To find a good code (e.g., capacity-achieving or capacity- approaching) § To find its decoding algorithm with low complexity § To find a way of implementing the decoding algorithm Pangun Park (GNU)
  • 51.
    51 Major Developments ofCodes § Hamming codes (1950) § Reed-Muller codes (1954) § BCH codes (by Bose, Ray-Chaudhuri and Hocquenghem, 1959) § Reed-Solomon codes (1960) § Low-density parity-check codes (by Gallager in 1962, rediscovered in 90’s) § Convolutional codes (by Elias, 1955) § Viterbi algorithm (1967) § Concatenated codes (by Forney, 1966) § Trellis-coded modulation (by Ungerboeck, 1982) § Turbo codes (by Berrou , 1993) § Space-time codes (by Vahid Tarokh,1998) Pangun Park (GNU)
  • 52.
    52 Major Approaches toCoding Theory Pangun Park (GNU) ! Major Approaches to Coding Theory Golay codes 1950 1960 1980 1990 2000 1970 Hamming codes BCH codes RS codes Algebraic geometry codes Convolutional codes LDPC codes Turbo codes Rediscovery of LDPC codes 2010 Duo-binary Turbo codes Polar codes Nonbinary LDPC codes Spatially-coupled LDPC codes Algebraic approach Probabilistic approach Information -theoretic approach Nonbinary codes Goppa codes Communications and Signal Design Lab., POSTECH
  • 53.
    53 How Close tothe Channel Capacity? (AWGN, BPSK) Pangun Park (GNU) 1 2 3 4 5 6 7 8 9 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 x x x x x TPC x RS codes + Convolutional codes Convolutional Codes Power Efficiency Eb/No (dB) Unachievable Region Code Rate (R) A l g e b r a i c C o d e s . M o d e r n C o d e s Turbo Code LDPC Code Shannon Capacity Reed-Muller code x x x x ! How Close to the Channel Capacity? (AWGN, BPSK) Communications and Signal Design Lab., POSTECH
  • 54.
    54 Binary Arithmetic § Computationswith binary numbers in code construction will involve Boolean algebra, or algebra in “GF(2)” (Galois field of order 2), or modulo-2 algebra: Pangun Park (GNU) 6.02 Fall 2012 Lecture 3, Slide #26 Binary Arithmetic • Computations with binary numbers in code construction will involve Boolean algebra, or algebra in “GF(2)” (Galois field of order 2), or modulo-2 algebra: 0+0=0, 1+0=0+1=1, 1+1=0 0*0=0*1=1*0 =0, 1*1=1
  • 55.
    55 Hamming Distance § HammingDistance: The number of bit positions in which the corresponding bits of two encodings of the same length are different. Ø The Hamming Distance (HD) between a valid binary codeword and the same codeword with e errors is e. § The problem with no coding is that the two valid codewords (0 and 1) also have a Hamming distance of 1. So a single-bit error changes a valid codeword into another valid codeword. Pangun Park (GNU)
  • 56.
    56 Embedding for StructureSeparation § Encode so that the codewords are far enough from each other. § Likely error patterns shouldn’t transform one codeword to another. Pangun Park (GNU) 6.02 Fall 2012 Lecture 3, Slide #22 Idea: Embedding for Structural Separation Encode so that the codewords are far enough from each other Likely error patterns shouldn’t transform one codeword to another 11 00 0 1 01 10 single-bit error may cause 00 to be 10 (or 01) 110 000 0 1 100 010 111 001 101 011 Code: nodes chosen in hypercube + mapping of message bits to nodes If we choose 2k out of 2n nodes, it means we can map all k-bit message strings in a space of n-bit codewords. The code rate is k/n. 6.02 Fall 2012 Lecture 3, Slide #22 Idea: Embedding for Structural Separation Encode so that the codewords are far enough from each other Likely error patterns shouldn’t transform one codeword to another 11 00 0 1 01 10 single-bit error may cause 00 to be 10 (or 01) 110 000 0 1 100 010 111 001 101 011 Code: nodes chosen in hypercube + mapping of message bits to nodes If we choose 2k out of 2n nodes, it means we can map all k-bit message strings in a space of n-bit codewords. The code rate is k/n. Code: nodes chosen in hypercube + mapping of message bits to nodes If we choose out of nodes, it means we can map all k-bit message strings in a space of n-bit codewords. The code rate is k/n.
  • 57.
    57 Minimum Hamming Distance §Minimum Hamming Distance of Code vs. Detection Correction Capabilities § If d is the minimum Hamming distance between codewords, we can detect all patterns of = (d-1) bit errors. § If d is the minimum Hamming distance between codewords, we can correct all patterns of or fewer bit errors Pangun Park (GNU) Idea: Embedding for Structural Separation Encode so that the codewords are far enough from each other Likely error patterns shouldn’t transform one codeword to another 11 00 0 1 01 10 single-bit error may cause 00 to be 10 (or 01) 110 1 100 111 101 Code: nodes chosen in hypercube + mapping of message bits to nodes If we choose 2k out of 2n nodes, it means we can map all k-bit 6.02 Fall 2012 Idea: Embedding for Structural Encode so that the codewords are far eno each other Likely error patterns shouldn’t transform to another 11 00 0 1 01 10 single-bit error may cause 00 to be 10 (or 01) 110 000 0 1 100 010 111 001 101 011 Code: n hypercu of mess If we ch 2n nod we can messag space o The co
  • 58.
    58 How to ConstructCodes? § Want: 4-bit messages with single-error correction (min HD=3) § How to produce a code, i.e., a set of codewords, with this property? Pangun Park (GNU) 6.02 Fall 2012 Lecture 3, Slide #24 How to Construct Codes? Want: 4-bit messages with single-error correction (min HD=3) How to produce a code, i.e., a set of codewords, with this property?
  • 59.
    59 Example: A SimpleCode - Parity Check § Add a parity bit to message of length k to make the total number of 1 bits even (aka even parity). § If the number of 1s in the received word is odd, there there has been an error. § Minimum Hamming distance of parity check code is 2 Ø Can detect all single-bit errors Ø In fact, can detect all odd number of errors Ø But cannot detect even number of errors Ø And cannot correct any errors Pangun Park (GNU) 6.02 Fall 2012 Lecture 3, Slide #25 A Simple Code: Parity Check • Add a parity bit to message of length k to make the total number of 1 bits even (aka even parity). • If the number of 1s in the received word is odd, there there has been an error. 0 1 1 0 0 1 0 1 0 0 1 1 → original word with parity bit 0 1 1 0 0 0 0 1 0 0 1 1 → single-bit error (detected) 0 1 1 0 0 0 1 1 0 0 1 1 → 2-bit error (not detected) • Minimum Hamming distance of parity check code is 2 – Can detect all single-bit errors – In fact, can detect all odd number of errors – But cannot detect even number of errors – And cannot correct any errors
  • 60.
    60 Example: Rectangular ParityCodes (Quiz) § Idea: start with rectangular array of data bits, add parity checks for each row and column. Single-bit error in data will show up as parity errors in a particular row and column, pinpointing the bit that has the error. Pangun Park (GNU) D1 D2 D3 D4 P3 P4 P1 Idea: start with rectangular array of data bits, add parity checks for each row and column. Single-bit error in data will show up as parity P2 errors in a particular row and column, pinpointing the bit that has the error. 0 1 1 0 1 1 1 1 0 1 0 0 1 0 1 0 Parity for each row Parity check fails for Parit and column is row #2 and column #2 for ro correct ⇒ no errors ⇒ bit D4 is incorrect ⇒ bi D1 D2 D3 D4 P3 P4 P1 for row # a: start with rectangular ay of data bits, add parity cks for each row and umn. Single-bit error in a will show up as parity P2 (n,k,d) ors in a particular row column, pinpointing the P4 is pari that has the error. for colum 0 1 1 0 1 1 0 1 1 1 1 0 1 0 0 1 1 1 1 0 1 0 1 0 ty for each row Parity check fails for Parity check o column is row #2 and column #2 for row #2 ect ⇒ no errors ⇒ bit D4 is incorrect ⇒ bit P2 is inc D1 D2 D3 D4 P3 P4 P1 for row #1 with rectangular a bits, add parity each row and ngle-bit error in ow up as parity P2 (n,k,d)=? particular row n, pinpointing the P4 is parity bit the error. for column #2 0 1 1 0 1 1 1 0 0 1 1 1 1 0 1 0 ch row Parity check fails for Parity check only fails is row #2 and column #2 for row #2 errors ⇒ bit D4 is incorrect ⇒ bit P2 is incorrect Parity for each row and column is correct ⇒ no errors 6.02 Fall 2012 Lecture 4, Slide #8 Example: Rectangular Parity Codes D1 D2 D3 D4 P3 P4 P1 P1 is parity bit for row #1 Idea: start with rectangular array of data bits, add parity checks for each row and column. Single-bit error in data will show up as parity P2 (n,k,d)=? errors in a particular row and column, pinpointing the P4 is parity bit bit that has the error. for column #2 0 1 1 0 1 1 0 1 1 1 1 0 1 0 0 1 1 1 1 0 1 0 1 0 Parity for each row Parity check fails for Parity check only fails and column is row #2 and column #2 for row #2 correct ⇒ no errors ⇒ bit D4 is incorrect ⇒ bit P2 is incorrect P4 is parity bit for column #2 P1 is parity bit for row #1
  • 61.
    61 Example: Rectangular ParityCodes (Quiz) § Idea: start with rectangular array of data bits, add parity checks for each row and column. Single-bit error in data will show up as parity errors in a particular row and column, pinpointing the bit that has the error. Pangun Park (GNU) D1 D2 D3 D4 P3 P4 P1 Idea: start with rectangular array of data bits, add parity checks for each row and column. Single-bit error in data will show up as parity P2 errors in a particular row and column, pinpointing the bit that has the error. 0 1 1 0 1 1 1 1 0 1 0 0 1 0 1 0 Parity for each row Parity check fails for Parit and column is row #2 and column #2 for ro correct ⇒ no errors ⇒ bit D4 is incorrect ⇒ bi D1 D2 D3 D4 P3 P4 P1 for row # a: start with rectangular ay of data bits, add parity cks for each row and umn. Single-bit error in a will show up as parity P2 (n,k,d) ors in a particular row column, pinpointing the P4 is pari that has the error. for colum 0 1 1 0 1 1 0 1 1 1 1 0 1 0 0 1 1 1 1 0 1 0 1 0 ty for each row Parity check fails for Parity check o column is row #2 and column #2 for row #2 ect ⇒ no errors ⇒ bit D4 is incorrect ⇒ bit P2 is inc D1 D2 D3 D4 P3 P4 P1 for row #1 with rectangular a bits, add parity each row and ngle-bit error in ow up as parity P2 (n,k,d)=? particular row n, pinpointing the P4 is parity bit the error. for column #2 0 1 1 0 1 1 1 0 0 1 1 1 1 0 1 0 ch row Parity check fails for Parity check only fails is row #2 and column #2 for row #2 errors ⇒ bit D4 is incorrect ⇒ bit P2 is incorrect Parity for each row and column is correct ⇒ no errors 6.02 Fall 2012 Lecture 4, Slide #8 Example: Rectangular Parity Codes D1 D2 D3 D4 P3 P4 P1 P1 is parity bit for row #1 Idea: start with rectangular array of data bits, add parity checks for each row and column. Single-bit error in data will show up as parity P2 (n,k,d)=? errors in a particular row and column, pinpointing the P4 is parity bit bit that has the error. for column #2 0 1 1 0 1 1 0 1 1 1 1 0 1 0 0 1 1 1 1 0 1 0 1 0 Parity for each row Parity check fails for Parity check only fails and column is row #2 and column #2 for row #2 correct ⇒ no errors ⇒ bit D4 is incorrect ⇒ bit P2 is incorrect P4 is parity bit for column #2 P1 is parity bit for row #1 Parity check fails for row #2 and column #2 ⇒ bit D4 is incorrect Parity check only fails for row #2 ⇒ bit P2 is incorrect
  • 62.
    62 Block Codes: Encoding §Redundancy r = n−k § Code rate R = k/n Pangun Park (GNU) 7/74 ! Encoding of an [n, k] Block Code • Redundancy r = n − k • Code rate R = k/n (n,k) Systematic Linear Block Codes • Split data into k-bit blocks • Add (n-k) parity bits to each block using (n-k) lin equations, making each block n bits long • Every linear code can be represented by an Message bits Parity bits k n The entire block i called the code w in systematic form n-k
  • 63.
    63 Block Codes: Decoding §Decide what the transmitted information was § Optimum decoding rule: Minimum distance decoding in a memoryless channel Pangun Park (GNU) ! Decoding of an [n, k] Block Code • Decide what the transmitted information was • Optimum decoding rule: Minimum distance decoding in a memoryle codewords received vector Received data Decoded message Error vector Correct errors and remove (n–k) redundant symbols Communications and Signal Design L
  • 64.
    64 Linear Block Codes §Block code: k message bits encoded to n code bits, i.e., each of messages encoded into a unique n-bit combination via a linear transformation, using GF(2) operations: Ø C is an n-element row vector containing the codeword Ø D is a k-element row vector containing the message Ø G is the kxn generator matrix Ø Each codeword bit is a specified linear combination of message bits. § (n,k) code has rate k/n § Sometimes written as (n,k,d), where d is the minimum HD of the code. § The “weight” of a code word is the number of 1’s in it. § The minimum HD of a linear code is the minimum weight found in its nonzero codewords Pangun Park (GNU) 6.02 Fall 2012 Lectu (n,k) Systematic Linear Block Code • Split data into k-bit blocks • Add (n-k) parity bits to each block using (n-k) equations, making each block n bits long • Every linear code can be represented by an equivalent systematic form • Corresponds to choosing G = [I | A], i.e., the identity matrix in the first k columns Message bits Parity bits k n The entire bl called the c in systemati n-k
  • 65.
    65 Quiz: What aren, k, d here? § {000, 111} Ø ? § {000, 1100, 0011, 1111} Ø ? § Ø ? Pangun Park (GNU) 6.02 Fall 2012 Lecture 3, Slide #29 Examples: What are n, k, d here? {000, 111} {0000, 1100, 0011, 1111} {1111, 0000, 0001} {1111, 0000, 0010, 1100} Not linear codes! N c The HD of a linear code is the number of “1”s in the non- zero codeword with the smallest # of “1”s (3,1,3). Rate= 1/3. (4,2,2). Rate = ½. (7,4,3) code. Rate = 4/7. The HD of a linear code is the number of “1”s in the non- zero codeword with the smallest # of “1”s
66 Quiz: What are n, k, d here?
§ {000, 111}
Ø (3,1,3), Rate = 1/3
§ {0000, 1100, 0011, 1111}
Ø (4,2,2), Rate = 1/2
§ The third codeword set from the previous slide
Ø (7,4,3) code, Rate = 4/7
Pangun Park (GNU)
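As a quick way to check answers like these, here is a small Python helper (an illustrative sketch, not part of the lecture). It assumes the code is given as a list of equal-length codeword strings and that the number of codewords is a power of two, and it computes d as the minimum pairwise Hamming distance, which for a linear code equals the minimum nonzero weight.

# Sketch: derive (n, k, d) for a small code given as explicit codeword strings.
from itertools import combinations
from math import log2

def code_parameters(codewords):
    n = len(codewords[0])                      # codeword length
    k = int(log2(len(codewords)))              # assumes 2**k codewords
    d = min(sum(a != b for a, b in zip(u, v))  # minimum pairwise Hamming distance
            for u, v in combinations(codewords, 2))
    return n, k, d

print(code_parameters(["000", "111"]))                    # (3, 1, 3)
print(code_parameters(["0000", "1100", "0011", "1111"]))  # (4, 2, 2)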
67 Systematic Linear Block Codes
§ Split data into k-bit blocks
§ Add (n − k) parity bits to each block using (n − k) linear equations, making each block n bits long
§ Every linear code can be represented by an equivalent systematic form
§ Corresponds to choosing G = [I | A], i.e., the identity matrix in the first k columns
§ In systematic form the first k bits of the code word are the message bits and the remaining (n − k) bits are the parity bits
Pangun Park (GNU)
68 Summary of Matrix Form
§ Operations of the generator matrix and the parity check matrix
§ Encoding: message vector D → [generator matrix G] → code vector C
§ Decoding (validity check): code vector C → [parity check matrix H] → null vector 0
Pangun Park (GNU)
69 Matrix Notation: Linear Block Codes
§ Task: given a k-bit message, compute the n-bit codeword. We can use standard matrix arithmetic (modulo 2) to do the job: D_{1×k} · G_{k×n} = C_{1×n} (message vector × generator matrix = code word vector)
§ The generator matrix is G_{k×n} = [ I_{k×k} | A_{k×(n−k)} ]
§ For example, here is how we would describe the (9,4,4) rectangular code that includes an overall parity bit:

  [D1 D2 D3 D4] · | 1 0 0 0  1 0 1 0 1 |
                  | 0 1 0 0  1 0 0 1 1 |  = [D1 D2 D3 D4 P1 P2 P3 P4 P5]
                  | 0 0 1 0  0 1 1 0 1 |
                  | 0 0 0 1  0 1 0 1 1 |

Pangun Park (GNU)
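A minimal encoding sketch for this generator matrix (illustrative helper names, not the lecture's code): it hard-codes the G shown above and computes C = D·G with all arithmetic taken modulo 2.

# Sketch: encode with the (9,4,4) rectangular-code generator matrix G = [I | A].
G = [
    [1,0,0,0, 1,0,1,0,1],
    [0,1,0,0, 1,0,0,1,1],
    [0,0,1,0, 0,1,1,0,1],
    [0,0,0,1, 0,1,0,1,1],
]

def encode(msg, G=G):
    """msg: list of k message bits -> list of n codeword bits, C = D*G mod 2."""
    n = len(G[0])
    return [sum(msg[i] * G[i][j] for i in range(len(msg))) % 2 for j in range(n)]

print(encode([1, 1, 1, 1]))   # -> [1, 1, 1, 1, 0, 0, 0, 0, 0], as in the worked example a few slides later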
70 Parity Check Matrix
§ Parity equation: P_j = Σ_{i=1..k} D_i a_ij (mod 2)
§ Parity relation: P_j + Σ_{i=1..k} D_i a_ij = 0 (mod 2)
§ A = [a_ij]: the entry a_ij in the i-th row, j-th column of A specifies whether data bit D_i is used in constructing parity bit P_j
§ Questions: Can two columns of A be the same? Should two columns of A be the same? How about rows?
Pangun Park (GNU)
71 Parity Check Matrix
§ Can restate the codeword generation process as a parity check or nullspace check: H_{(n−k)×n} · C^T_{n×1} = 0_{(n−k)×1}
§ The parity check matrix H for the (9,4,4) example (each row checks one parity bit):

  | 1 1 0 0  1 0 0 0 0 |
  | 0 0 1 1  0 1 0 0 0 |
  | 1 0 1 0  0 0 1 0 0 |  · [D1 D2 D3 D4 P1 P2 P3 P4 P5]^T = 0  (5×1)
  | 0 1 0 1  0 0 0 1 0 |
  | 1 1 1 1  0 0 0 0 1 |

Pangun Park (GNU)
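A small continuation of the encoding sketch above (again an illustrative sketch, not the lecture's code): it hard-codes the H shown on this slide and checks that a valid (9,4,4) codeword gives an all-zero result, while a corrupted word does not.

# Sketch: verify that a valid codeword lies in the nullspace of H, i.e. H*C^T = 0 mod 2.
H = [
    [1,1,0,0, 1,0,0,0,0],
    [0,0,1,1, 0,1,0,0,0],
    [1,0,1,0, 0,0,1,0,0],
    [0,1,0,1, 0,0,0,1,0],
    [1,1,1,1, 0,0,0,0,1],
]

def parity_check(cw, H=H):
    """Return the (n-k)-bit result of H*cw^T mod 2; all zeros means a valid codeword."""
    return [sum(h * c for h, c in zip(row, cw)) % 2 for row in H]

print(parity_check([1,1,1,1,0,0,0,0,0]))   # [0, 0, 0, 0, 0] -> valid codeword
print(parity_check([1,0,1,1,0,0,0,0,0]))   # [1, 0, 0, 1, 1] -> nonzero, an error is detected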
72 Summary of Matrix Form
§ Operations of the generator matrix and the parity check matrix
§ Encoding: message vector D → [generator matrix G] → code vector C
§ Decoding (validity check): code vector C → [parity check matrix H] → null vector 0
Pangun Park (GNU)
73 Simple-minded Decoding
§ Compare the received n-bit word R = C + E against each of the valid codewords to see which one is within HD 1
§ Doesn't exploit the nice linear structure of the code!
Ø high computation complexity!!
Pangun Park (GNU)
74 Syndrome Decoding – Matrix Form
§ Task: given an n-bit code word, compute (n − k) syndrome bits. Again we can use a matrix multiply to do the job.
§ Received word: R = C + E
§ Compute the syndrome of the received word: H · R^T = S, where S is the (n − k)×1 syndrome vector
§ To figure out the relationship of syndromes to errors:
Ø H · (C + E)^T = S; using H · C^T = 0, this gives H · E^T = S, so the error type can be figured out from the syndrome
Ø Knowing the error patterns we want to correct for, we can compute k syndrome vectors offline (or n, if you also want to correct errors in the parity bits, but this is not needed) and then do a lookup after the syndrome is calculated from a received word to find the error type that occurred
Pangun Park (GNU)
75 Syndrome Decoding – Steps
§ Step 1: For a given code and error patterns E_i, precompute the syndromes and store them: H · E_i^T = S_i
§ Step 2: For each received word, compute the syndrome: H · R^T = S
§ Step 3: Find l such that S_l == S and apply the correction for error E_l: C = R + E_l
Pangun Park (GNU)
76 Syndrome Decoding – Steps, (9,4,4) example
§ Codeword generation: [1 1 1 1] · G = [1 1 1 1 0 0 0 0 0]
§ Received word in error: [1 0 1 1 0 0 0 0 0] = [1 1 1 1 0 0 0 0 0] + [0 1 0 0 0 0 0 0 0]
§ Syndrome computation for the received word: H · [1 0 1 1 0 0 0 0 0]^T = [1 0 0 1 1]^T
§ This equals the precomputed syndrome for the error pattern [0 1 0 0 0 0 0 0 0]
Pangun Park (GNU)
77 Syndrome Decoding – Steps, (9,4,4) example
§ Correction: since the received word syndrome [1 0 0 1 1]^T matches the syndrome of the error [0 1 0 0 0 0 0 0 0], apply this error to the received word to recover the original codeword:
Ø [1 1 1 1 0 0 0 0 0] (corrected codeword) = [1 0 1 1 0 0 0 0 0] (received word) + [0 1 0 0 0 0 0 0 0] (error pattern from the matching syndrome)
Pangun Park (GNU)
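Putting the three steps together, here is a compact syndrome-decoder sketch for the (9,4,4) example. It reuses the hypothetical parity_check() helper and the H matrix from the earlier sketch, and it only handles single-bit error patterns; it is an illustration under those assumptions, not the lecture's implementation.

n = 9

def single_bit_error(pos, n=n):
    e = [0] * n
    e[pos] = 1
    return e

# Step 1: precompute the syndrome of every correctable (single-bit) error pattern.
syndrome_table = {tuple(parity_check(single_bit_error(p))): p for p in range(n)}

def correct(received):
    """Step 2: compute the syndrome; Step 3: look it up and flip the matching bit."""
    s = tuple(parity_check(received))
    if all(b == 0 for b in s):
        return received                      # syndrome is zero: no detectable error
    pos = syndrome_table.get(s)
    if pos is None:
        raise ValueError("uncorrectable error pattern")
    corrected = received[:]
    corrected[pos] ^= 1
    return corrected

r = [1, 0, 1, 1, 0, 0, 0, 0, 0]              # the received word from the slide
print(correct(r))                            # -> [1, 1, 1, 1, 0, 0, 0, 0, 0], the corrected codeword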
78 Burst Errors
§ Correcting single-bit errors is good
§ Similar ideas could be used to correct independent multi-bit errors
§ But in many situations errors come in bursts: correlated multi-bit errors (e.g., fading or a burst of interference on a wireless channel, damage to storage media, etc.). How does single-bit error correction help with that?
Pangun Park (GNU)
79 Coping with Burst Errors by Interleaving
§ Well, can we think of a way to turn a B-bit error burst into B single-bit errors?
§ Problem: bits from a particular codeword are transmitted sequentially (row-by-row transmission order), so a B-bit burst produces multi-bit errors
§ Solution: interleave bits from B different codewords (column-by-column transmission order). Now a B-bit burst produces 1-bit errors in B different codewords
Pangun Park (GNU)
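A minimal block-interleaver sketch of this idea (illustrative helper names, not the lecture's code): B codewords are written as rows and transmitted column by column, so a burst of up to B consecutive channel errors lands on at most one bit of each codeword.

def interleave(codewords):
    """codewords: B equal-length bit lists -> one transmit stream, read out column by column."""
    return [bit for column in zip(*codewords) for bit in column]

def deinterleave(stream, B):
    """Inverse operation at the receiver: rebuild the B codewords."""
    return [stream[i::B] for i in range(B)]

cws = [[0,1,1,0,1,1,1,1],
       [1,1,1,0,0,1,0,1],
       [1,1,0,1,0,1,1,0]]          # B = 3 example 8-bit codewords
tx = interleave(cws)
for i in range(6, 9):              # a 3-bit burst hits three consecutive transmitted bits
    tx[i] ^= 1
rx = deinterleave(tx, B=3)
print([sum(a != b for a, b in zip(c, r)) for c, r in zip(cws, rx)])   # -> [1, 1, 1]: one error per codeword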
80 Framing
§ Looking at a received bit stream, how do we know where a block of interleaved codewords begins?
§ Physical indication (transmitter turns on, beginning of a disk sector, separate control channel)
§ Place a unique bit pattern (frame sync sequence) in the bit stream to mark the start of a block
Ø Frame = sync pattern + interleaved code word block
Ø Search for the sync pattern in the bit stream to find the start of the frame
Ø The bit pattern can't appear elsewhere in the frame (otherwise our search will get confused), so we have to make sure no legal combination of codeword bits can accidentally generate the sync pattern (can be tricky...)
Ø The sync pattern can't be protected by ECC, so errors may cause us to lose a frame every now and then, a problem that will need to be addressed at some higher level of the communication protocol
Pangun Park (GNU)
81 Example: Channel Coding Steps (A)
1. Break the message stream into k-bit blocks.
2. Add redundant info in the form of (n − k) parity bits to form an n-bit codeword. Goal: choose the parity bits so we can correct single-bit errors.
3. Interleave bits from a group of B codewords to protect against B-bit burst errors.
4. Add a unique pattern of bits to the start of each interleaved codeword block so the receiver can tell how to extract blocks from the received bitstream.
5. Send the new (longer) bitstream to the transmitter.
§ Worked example from the original slide: message stream 011011101101; Step 1 (k = 4): blocks 0110, 1110, 1101; Step 2 ((8,4,3) code): codewords 01101111, 11100101, 11010110; Step 3 (B = 3): interleaved block 011111110001100111101110; Step 4 (sync = 0111110, with bit stuffing): transmitted frame 011111001111011100011001111001110.
§ The sync pattern has five consecutive 1's. To prevent the sync pattern from appearing in the message, "bit-stuff" a 0 after any sequence of four 1's in the message. This step is easily reversed at the receiver (just remove the 0 after any sequence of four consecutive 1's in the message).
Pangun Park (GNU)
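The bit-stuffing step can be sketched as follows (hypothetical helper names; a simplified illustration of the rule stated above, not the lecture's code). A 0 is stuffed after every run of four 1's so that five consecutive 1's, and hence the 0111110 sync pattern, can never occur inside the payload; the inverse operation at the receiver removes the stuffed bits.

SYNC = "0111110"

def bit_stuff(bits):
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 4:          # four 1's in a row: insert a 0 and reset the run
            out.append("0")
            run = 0
    return "".join(out)

def bit_unstuff(bits):
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 4:          # the next bit is a stuffed 0: drop it
            i += 1
            run = 0
        i += 1
    return "".join(out)

block = "011111110001100111101110"            # interleaved block from the slide example
frame = SYNC + bit_stuff(block)
print(frame)                                   # -> 011111001111011100011001111001110
assert bit_unstuff(frame[len(SYNC):]) == block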
82 Example: Channel Coding Steps (B)
1. Search through the received bit stream for the sync pattern and extract the interleaved codeword block.
2. De-interleave the bits to form B n-bit codewords.
3. Check the parity bits in each code word to see if an error has occurred. If there's a single-bit error, correct it.
4. Extract the k message bits from each corrected codeword and concatenate them to form the message stream.
§ Example from the original slide: sync = 0111110, B = 3, n = 8, (8,4,3) code; after correcting the single-bit errors, the recovered message blocks are 0110, 1110, 1101.
Pangun Park (GNU)
83 Overview
§ Electromagnetic Spectrum
§ Basic Signals
Ø Frequency, Wavelength, and Phase
§ Brief Antenna
§ Line Coding
§ Modulation
§ Channel Coding
Ø Hamming Distance, Block Codes
§ Multiple Access Methods
Ø Spread Spectrum Technology
Pangun Park (GNU)
84 Multiple Access Methods
§ Time Division Multiple Access
§ Frequency Division Multiple Access
§ Code Division Multiple Access
Pangun Park (GNU)
85 Frequency multiplex
§ Separation of the whole spectrum into smaller frequency bands
§ A channel gets a certain band of the spectrum for the whole time
§ Advantages:
Ø no dynamic coordination necessary
Ø works also for analog signals
§ Disadvantages:
Ø waste of bandwidth if the traffic is distributed unevenly
Ø inflexible
Ø guard spaces needed between bands
Pangun Park (CNU)
86 Time multiplex
§ A channel gets the whole spectrum for a certain amount of time
§ Advantages:
Ø only one carrier in the medium at any time
Ø throughput high even for many users
§ Disadvantages:
Ø precise synchronization necessary
Pangun Park (CNU)
87 Time and frequency multiplex
§ Combination of both methods
§ A channel gets a certain frequency band for a certain amount of time
§ Example: GSM
§ Advantages:
Ø better protection against tapping
Ø protection against frequency selective interference
§ But:
Ø precise coordination required
Pangun Park (CNU)
88 Code multiplex
§ Each channel has a unique code
§ All channels use the same spectrum at the same time
§ Advantages:
Ø bandwidth efficient
Ø no coordination and synchronization necessary
Ø good protection against interference and tapping
§ Disadvantages:
Ø more complex signal regeneration
§ Implemented using spread spectrum technology
Pangun Park (CNU)
89 Spread spectrum technology
§ Problem of radio transmission: frequency dependent fading can wipe out narrow band signals for the duration of the interference
§ (Figure: channel quality vs. frequency for narrowband channels with guard spaces, compared with spread spectrum channels.)
Pangun Park (CNU)
90 Solution: Spreading and frequency selective fading
§ (Figure: the narrow band signal, which fits into one narrowband channel with guard spaces, is spread over a much wider band of spread spectrum channels, so frequency selective fading affects only a small part of the spread signal.)
Pangun Park (CNU)
91 Spread spectrum technology
§ Problem of radio transmission: frequency dependent fading can wipe out narrow band signals for the duration of the interference
§ Solution: spread the narrow band signal into a broad band signal using a special code; this gives protection against narrow band interference (at the receiver, de-spreading recovers the signal while narrowband interference is itself spread out)
§ Side effects:
Ø coexistence of several signals without dynamic coordination
Ø tap-proof
§ Alternatives: Direct Sequence, Frequency Hopping
Pangun Park (CNU)
92 Spread spectrum technology
§ Protection against narrow band interference
§ Tightly coupled to CDM
Ø coexistence of several signals without dynamic coordination
Ø high security
§ Military use
§ Overlay of new SS technologies on the same spectrum as old narrowband (NB) systems
§ Civil applications
Ø IEEE 802.11, Bluetooth, UMTS
§ Disadvantages
Ø high complexity
Ø large transmission bandwidth
§ Alternatives: Direct Sequence, Frequency Hopping
Pangun Park (CNU)
93 Frequency Hopping Spread Spectrum
§ Pseudo-random frequency hopping
§ Spreads the power over a wide spectrum
Ø Spread Spectrum
§ Developed initially for the military
§ Patented by actress Hedy Lamarr
§ Narrowband interference can't jam it
§ (Figure: carrier frequency vs. time, hopping to a new carrier at regular intervals of about 50 ms.)
Pangun Park (GNU)
94 FH Spectrum
§ (Figure: signal and noise spectra for (a) normal narrowband transmission and (b) frequency hopping.)
Pangun Park (GNU)
95 FHSS (Frequency Hopping Spread Spectrum)
§ Transmitter: user data → modulator (narrowband signal) → second modulator driven by a frequency synthesizer following the hopping sequence → spread transmit signal
§ Receiver: received signal → demodulator driven by a frequency synthesizer following the same hopping sequence (narrowband signal) → demodulator → data
Pangun Park (CNU)
96 FHSS (Frequency Hopping Spread Spectrum)
§ Example:
Ø Bluetooth (1600 hops/sec on 79 carriers)
§ Advantages
Ø frequency selective fading and interference limited to short periods
Ø simple implementation
Ø uses only a small portion of the spectrum at any time
§ Disadvantages
Ø not as robust as DSSS
Ø simpler to detect
Pangun Park (CNU)
97 Direct-Sequence Spread Spectrum
§ Spreading factor = code bits per data bit; 10-100 for commercial systems (min 10 required by the FCC), 10,000 for military
§ Signal bandwidth: 10 × data bandwidth (for a spreading factor of 10)
§ Code sequence synchronization required
§ Correlation between codes determines interference; orthogonal codes do not interfere
Pangun Park (GNU)
98 DS Spectrum
§ (Figure: time-domain and frequency-domain views of (a) the data signal and (b) the code (chip) signal.)
Pangun Park (GNU)
99 DSSS (Direct Sequence Spread Spectrum)
§ XOR of the signal with a pseudo-random number sequence (chipping sequence)
Ø many chips per bit (e.g., 128) result in a higher bandwidth of the signal
Ø (Figure: user data XOR chipping sequence = resulting signal; tb: bit period, tc: chip period.)
§ Advantages
Ø reduces frequency selective fading
Ø in cellular networks
• base stations can use the same frequency range
• several base stations can detect and recover the signal
• soft handover
§ Disadvantages
Ø precise power control necessary
Pangun Park (CNU)
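Here is a minimal DSSS spread/despread sketch (illustrative only; the 8-chip sequence and the majority-vote correlator are assumptions, not the lecture's design). Each data bit is XORed with the whole chipping sequence, and the receiver decides each bit by counting how many received chips agree with the sequence, so an isolated chip error from narrowband interference is absorbed by the spreading gain.

CHIPS = [0, 1, 1, 0, 1, 0, 1, 1]          # assumed 8-chip pseudo-random sequence

def spread(bits, chips=CHIPS):
    """Each data bit becomes len(chips) chips: bit XOR chip."""
    return [b ^ c for b in bits for c in chips]

def despread(chip_stream, chips=CHIPS):
    """Correlate: if most chips match the sequence the bit was 0, else 1."""
    out = []
    for i in range(0, len(chip_stream), len(chips)):
        block = chip_stream[i:i + len(chips)]
        agreements = sum(r == c for r, c in zip(block, chips))
        out.append(0 if agreements > len(chips) // 2 else 1)
    return out

tx = spread([0, 1])
tx[3] ^= 1                                 # a single chip error from narrowband interference
print(despread(tx))                        # -> [0, 1]; the data bits are still recovered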
100 DSSS (Direct Sequence Spread Spectrum)
§ Transmitter: user data is XORed with the chipping sequence to form the spread spectrum signal, which is then modulated onto the radio carrier to produce the transmit signal
§ Receiver: the received signal is demodulated from the radio carrier, XORed with the same chipping sequence, and passed through a correlator (integrator, sampled sums, decision) to recover the data
Pangun Park (CNU)
101 Duplexing
§ Duplex = bi-directional communication
§ Frequency division duplexing (FDD): full-duplex, with uplink and downlink on paired frequencies (Frequency 1 / Frequency 2)
§ Time division duplexing (TDD): half-duplex, with uplink and downlink sharing one frequency in time
§ Many LTE deployments will use TDD.
Ø Allows more flexible sharing of DL/UL data rate
Ø Does not require paired spectrum
Ø Easy channel estimation: simpler transceiver design
Ø Con: all neighboring BSs should be time-synchronized
Pangun Park (GNU)
  • 102.