
CHANNEL CODING THEOREM

The noisy-channel coding theorem (sometimes called Shannon's theorem) establishes
that, for any given degree of noise contamination of a communication channel, it
is possible to communicate discrete data (digital information) nearly error-free
up to a computable maximum rate through the channel. This result was
presented by Claude Shannon in 1948 and was based in part on earlier work and
ideas of Harry Nyquist and Ralph Hartley. The Shannon limit or Shannon capacity of a
communications channel is the theoretical maximum information transfer rate
of the channel for a particular noise level.
The theorem describes the maximum possible efficiency of error-correcting
methods versus levels of noise interference and data corruption. Shannon's
theorem has wide-ranging applications in both communications and data
storage. This theorem is of foundational importance to the modern field of
information theory. Shannon only gave an outline of the proof. The first
rigorous proof for the discrete case is due to Amiel Feinstein in 1954.
The Shannon theorem states that, given a noisy channel with channel capacity
C and information transmitted at a rate R, if R < C there exist codes that
allow the probability of error at the receiver to be made arbitrarily small. This
means that, theoretically, it is possible to transmit information nearly without
error at any rate below the limiting rate C.
The converse is also important. If R > C, an arbitrarily small probability of
error is not achievable. All codes will have a probability of error greater than a
certain positive minimal level, and this level increases as the rate increases. So,
information cannot be guaranteed to be transmitted reliably across a channel at
rates beyond the channel capacity. The theorem does not address the rare
situation in which rate and capacity are equal.
The channel capacity C can be calculated from the physical properties of a
channel; for a band-limited channel with Gaussian noise, it is given by the
Shannon–Hartley theorem.
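As an illustration of the Shannon–Hartley relation C = B log2(1 + S/N), the short Python sketch below computes the capacity of a band-limited Gaussian channel. The bandwidth and signal-to-noise values are assumed example figures, not taken from the text.

import math

def shannon_hartley_capacity(bandwidth_hz, snr_linear):
    # Channel capacity C = B * log2(1 + S/N), in bits per second.
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Assumed example values: a 3 kHz telephone-grade channel with an SNR of
# 30 dB (linear ratio 10**(30/10) = 1000).
B = 3000.0
snr = 10 ** (30.0 / 10.0)
C = shannon_hartley_capacity(B, snr)
print(f"Capacity is about {C:.0f} bit/s")  # roughly 29,900 bit/s
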
1. For every discrete memoryless channel, the channel capacity has the following
property. For any ε > 0 and R < C, for large enough N, there exists a code of
length N and rate ≥ R and a decoding algorithm, such that the maximal
probability of block error is ≤ ε.
2. If a probability of bit error pb is acceptable, rates up to R(pb) are achievable,
where
R(pb) = C / (1 − H2(pb)),
and H2(pb) = −pb log2 pb − (1 − pb) log2(1 − pb) is the binary entropy function.
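A minimal sketch of this trade-off, assuming the capacity C is already known; the channel capacity and bit-error figures below are illustrative, not from the text.

import math

def binary_entropy(p):
    # H2(p) = -p*log2(p) - (1-p)*log2(1-p), with H2(0) = H2(1) = 0.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def achievable_rate(capacity, p_bit_error):
    # R(pb) = C / (1 - H2(pb)): rate achievable when a bit-error
    # probability of p_bit_error is tolerated.
    return capacity / (1.0 - binary_entropy(p_bit_error))

# Assumed example: capacity 0.5 bit per channel use, tolerated bit-error
# probability of 1%.
print(achievable_rate(0.5, 0.01))  # about 0.54 bit per channel use
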
SOURCE CODING

A code is defined as an n-tuple of elements drawn from an alphabet q.
• 1001: n = 4, q = {0, 1}
• 2389047298738904: n = 16, q = {0,1,2,3,4,5,6,7,8,9}
• (a,b,c,d,e): n = 5, q = {a,b,c,d,e,…,y,z}
The most common code is the one with q = {0, 1}, known as a binary code.
The purpose of coding: a message can become distorted through a wide
range of unpredictable errors, such as:
• Humans
• Equipment failure
• Lightning interference
• Scratches in a magnetic tape
Error-correcting code:
An error-correcting code adds redundancy to a message so that the original
message can be recovered if it has been garbled.
e.g. message = 10, code = 1010101010
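The example above is a simple repetition code: the whole message is repeated, and a majority vote at the receiver undoes a limited number of bit flips. A minimal sketch follows; the five-fold repetition factor is inferred from the example, not stated explicitly in the text.

def encode_repetition(message, copies=5):
    # Repeat the whole message, e.g. "10" -> "1010101010".
    return message * copies

def decode_repetition(code, msg_len, copies=5):
    # Recover each message bit by majority vote over its copies.
    decoded = []
    for i in range(msg_len):
        bits = [code[i + k * msg_len] for k in range(copies)]
        decoded.append("1" if bits.count("1") > copies // 2 else "0")
    return "".join(decoded)

sent = encode_repetition("10")          # '1010101010'
garbled = "1010001010"                  # one bit flipped in transit
print(decode_repetition(garbled, 2))    # '10' -- original message recovered
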

Send a message:

Fig 1.3 Block Diagram of Transmission


(Source:https://www.google.com/search?q=Block+Diagram+of+a+typical+communication+s
ystem)

Source Coding loss:

Lossy source coding may take the semantics of the data into account and depends
on the characteristics of the data, e.g. DCT, DPCM, ADPCM, color model transforms.

A code is distinct if each code word can be distinguished from every other (the
mapping is one-to-one). It is uniquely decodable if every code word is identifiable
when immersed in a sequence of code words (e.g., with the previous table, the
message 11 could be decoded as either ddddd or bbbbbb).

Measure of Information:
Consider symbols si and the probability of occurrence of each symbol p(si). The
entropy is
H = − Σ p(si) log2 p(si)
Example: Alphabet = {A, B}, p(A) = 0.4, p(B) = 0.6.
H = −0.4 log2 0.4 − 0.6 log2 0.6 ≈ 0.97 bits
Maximum uncertainty (the largest H) occurs when all probabilities are equal.

Redundancy:
The difference between the average code word length (L) and the average
information content (H). If H is constant, then L alone can be used, relative to
the optimal value.
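
To make the entropy calculation above concrete, here is a small sketch that reproduces the H ≈ 0.97 bit figure for the two-symbol alphabet; the function name is chosen for illustration.

import math

def entropy(probabilities):
    # H = -sum(p * log2(p)) over the symbol probabilities (0-probability
    # symbols contribute nothing).
    return -sum(p * math.log2(p) for p in probabilities if p > 0.0)

# Alphabet {A, B} with p(A) = 0.4 and p(B) = 0.6, as in the example above.
H = entropy([0.4, 0.6])
print(f"H is about {H:.2f} bits")  # about 0.97 bits

# Maximum uncertainty: equal probabilities give the largest entropy.
print(entropy([0.5, 0.5]))  # 1.0 bit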
