Class5 2019

Gaussian Basics Random Processes Filtering of Random Processes Signal Space Concepts

Signal Space Concepts — Why we Care


▶ Signal space concepts are a powerful tool for the analysis of communication systems and for the design of optimum receivers.
▶ Key concepts:
  ▶ Orthonormal basis functions — tailored to the signals of interest — span the signal space.
  ▶ Representation theorem: allows any signal to be represented as a (usually finite-dimensional) vector.
  ▶ Signals are interpreted as points in signal space.
  ▶ For random processes, the representation theorem leads to random signals being described by random vectors with uncorrelated components.
  ▶ The theorem of irrelevance allows us to disregard nearly all components of the noise in the receiver.
▶ We will briefly review the key ideas that provide the underpinning for signal spaces.
© 2018, B.-P. Paris ECE 630: Statistical Communication Theory 77

Linear Vector Spaces

▶ The basic structure needed by our signal spaces is the idea of a linear vector space.
▶ Definition: A linear vector space S is a collection of elements (“vectors”) with the following properties:
  ▶ Addition of vectors is defined and satisfies the following conditions for any x, y, z ∈ S:
    1. x + y ∈ S (closed under addition)
    2. x + y = y + x (commutative)
    3. (x + y) + z = x + (y + z) (associative)
    4. The zero vector 0 exists and 0 ∈ S; x + 0 = x for all x ∈ S.
    5. For each x ∈ S, a unique vector (−x) is also in S, and x + (−x) = 0.


Linear Vector Spaces — continued

▶ Definition — continued:
  ▶ Associated with the set of vectors in S is a set of scalars. If a, b are scalars, then for any x, y ∈ S the following properties hold:
    1. a · x is defined and a · x ∈ S.
    2. a · (b · x) = (a · b) · x
    3. Let 1 and 0 denote the multiplicative and additive identities of the field of scalars; then 1 · x = x and 0 · x = 0 for all x ∈ S.
    4. Distributive properties:
         a · (x + y) = a · x + a · y
         (a + b) · x = a · x + b · x


Running Examples
▶ The space of length-N vectors R^N:
    (x_1, …, x_N)^T + (y_1, …, y_N)^T = (x_1 + y_1, …, x_N + y_N)^T
    a · (x_1, …, x_N)^T = (a · x_1, …, a · x_N)^T
▶ The collection of all square-integrable signals over [T_a, T_b], i.e., all signals x(t) satisfying
    ∫_{T_a}^{T_b} |x(t)|² dt < ∞.
▶ Verifying that this is a linear vector space is easy.
▶ This space is called L²(T_a, T_b) (pronounced: ell-two).


Inner Product
▶ To be truly useful, we need linear vector spaces to provide
  ▶ a means to measure the length of vectors and
  ▶ a means to measure the distance between vectors.
▶ Both of these can be achieved with the help of inner products.
▶ Definition: The inner product of two vectors x, y ∈ S is denoted by ⟨x, y⟩. The inner product is a scalar assigned to x and y so that the following conditions are satisfied:
  1. ⟨x, y⟩ = ⟨y, x⟩ (for complex vectors, ⟨x, y⟩ = ⟨y, x⟩*)
  2. ⟨a · x, y⟩ = a · ⟨x, y⟩, with scalar a
  3. ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩, with vector z
  4. ⟨x, x⟩ > 0, except when x = 0; then ⟨x, x⟩ = 0.


Exercise: Valid Inner Products?

▶ x, y ∈ R^N with
    ⟨x, y⟩ = Σ_{n=1}^N x_n y_n
▶ Answer: Yes; this is the standard dot product.
▶ x, y ∈ R^N with
    ⟨x, y⟩ = (Σ_{n=1}^N x_n) · (Σ_{n=1}^N y_n)
▶ Answer: No; the last condition does not hold, which makes this inner product useless for measuring distances.
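A minimal numeric sketch of the point above (not from the slides; the function names are illustrative): the standard dot product satisfies the positivity condition ⟨x, x⟩ > 0 for x ≠ 0, while the sum-of-components candidate can return zero for a nonzero vector.

```python
import numpy as np

def dot(x, y):
    """Standard dot product <x, y> = sum x_n * y_n."""
    return float(np.sum(x * y))

def sum_product(x, y):
    """Candidate <x, y> = (sum x_n) * (sum y_n) -- not a valid inner product."""
    return float(np.sum(x) * np.sum(y))

# A nonzero vector whose components sum to zero exposes the failure.
x = np.array([1.0, -1.0])
print(dot(x, x))           # 2.0 > 0, positivity holds
print(sum_product(x, x))   # 0.0 although x != 0, so positivity fails
```

Because positivity fails, the "norm" this candidate would induce assigns length zero to nonzero vectors, so it cannot measure distances.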


Exercise: Valid Inner Products?


▶ x(t), y(t) ∈ L²(a, b) with
    ⟨x(t), y(t)⟩ = ∫_a^b x(t) y(t) dt
▶ Answer: Yes; this is the continuous-time equivalent of the dot product.
▶ x, y ∈ C^N with
    ⟨x, y⟩ = Σ_{n=1}^N x_n y_n*
▶ Answer: Yes; the complex conjugate is critical to meet the last condition (without it, e.g., ⟨j, j⟩ = −1 < 0).
▶ x, y ∈ R^N with
    ⟨x, y⟩ = x^T K y = Σ_{n=1}^N Σ_{m=1}^N x_n K_{n,m} y_m,
  with K an N × N matrix.
▶ Answer: Only if K is positive definite (i.e., x^T K x > 0 for all x ≠ 0).
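A hedged numeric probe of this condition (illustrative code, not from the slides): ⟨x, y⟩ = x^T K y is a valid inner product on R^N exactly when K is symmetric positive definite, which can be checked via the eigenvalues of K.

```python
import numpy as np

def is_positive_definite(K):
    """Check that K is symmetric with strictly positive eigenvalues,
    i.e., x^T K x > 0 for all x != 0."""
    K = np.asarray(K, dtype=float)
    if not np.allclose(K, K.T):
        return False
    return bool(np.all(np.linalg.eigvalsh(K) > 0))

print(is_positive_definite([[2.0, 1.0], [1.0, 2.0]]))   # True  (eigenvalues 1, 3)
print(is_positive_definite([[1.0, 2.0], [2.0, 1.0]]))   # False (eigenvalue -1)
```

With K = I this reduces to the standard dot product; an indefinite K would assign negative "squared length" to some vectors.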


Norm of a Vector
▶ Definition: The norm of a vector x ∈ S is denoted by ‖x‖ and is defined via the inner product as
    ‖x‖ = √⟨x, x⟩.
▶ Notice that ‖x‖ > 0 unless x = 0; then ‖x‖ = 0.
▶ The norm of a vector measures the length of the vector.
▶ For signals, ‖x(t)‖² measures the energy of the signal.
▶ Example: For x ∈ R^N, the norm is the Cartesian length of the vector:
    ‖x‖ = √( Σ_{n=1}^N |x_n|² )
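Both interpretations can be sketched numerically (illustrative code, not from the slides): the induced norm on R^N is the Cartesian length, and ‖x(t)‖² approximates signal energy when the integral is discretized by a Riemann sum.

```python
import numpy as np

# Cartesian length of a vector in R^2.
x = np.array([3.0, 4.0])
norm = np.sqrt(np.dot(x, x))
print(norm)   # 5.0

# Energy of x(t) = cos(2*pi*t) on [0, 1): the integral of cos^2 is 1/2.
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
xt = np.cos(2 * np.pi * t)
energy = np.sum(np.abs(xt) ** 2) * (t[1] - t[0])   # Riemann-sum approximation
print(round(energy, 3))   # 0.5
```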


Norm of a Vector — continued

▶ Illustration:
    ‖a · x‖ = √⟨a · x, a · x⟩ = |a| ‖x‖
▶ Scaling the vector by a scales its length by |a|.


Inner Product Space

▶ We call a linear vector space with an associated, valid inner product an inner product space.
▶ Definition: An inner product space is a linear vector space in which an inner product is defined for all elements of the space and the norm is given by ‖x‖ = √⟨x, x⟩.
▶ Standard examples:
  1. R^N with ⟨x, y⟩ = Σ_{n=1}^N x_n y_n.
  2. L²(a, b) with ⟨x(t), y(t)⟩ = ∫_a^b x(t) y(t) dt.


Schwartz Inequality

▶ The following relationship between norms and inner products holds in all inner product spaces.
▶ Schwartz inequality: For any x, y ∈ S, where S is an inner product space,
    |⟨x, y⟩| ≤ ‖x‖ · ‖y‖
  with equality if and only if x = c · y for some scalar c.
▶ The proof follows from ‖x − a · y‖² ≥ 0 with a = ⟨x, y⟩ / ‖y‖².
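A quick numeric illustration of the inequality and its equality case (illustrative code, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
y = rng.standard_normal(5)

# |<x, y>| <= ||x|| * ||y|| for arbitrary vectors.
lhs = abs(np.dot(x, y))
rhs = np.linalg.norm(x) * np.linalg.norm(y)
print(lhs <= rhs + 1e-12)   # True

# Equality holds when one vector is a scalar multiple of the other.
y_parallel = 3.0 * x
print(np.isclose(abs(np.dot(x, y_parallel)),
                 np.linalg.norm(x) * np.linalg.norm(y_parallel)))   # True
```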


Orthogonality
▶ Definition: Two vectors are orthogonal if the inner product of the vectors is zero, i.e.,
    ⟨x, y⟩ = 0.
▶ Example: The standard basis vectors e_m in R^N are orthogonal; recall
    e_m = (0, …, 0, 1, 0, …, 0)^T, where the 1 occurs in the m-th row.


Orthogonality

▶ Example: The basis functions for the Fourier series expansion, w_m(t) ∈ L²(0, T), are orthogonal; recall
    w_m(t) = (1/√T) e^{j2πmt/T}.
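This orthogonality (in fact orthonormality) can be checked numerically with the complex inner product ⟨x, y⟩ = ∫ x(t) y*(t) dt, approximated by a Riemann sum (illustrative code, not from the slides):

```python
import numpy as np

T = 2.0
t = np.linspace(0.0, T, 200_000, endpoint=False)
dt = t[1] - t[0]

def w(m):
    """Fourier basis function w_m(t) = exp(j*2*pi*m*t/T) / sqrt(T)."""
    return np.exp(1j * 2 * np.pi * m * t / T) / np.sqrt(T)

def inner(x, y):
    """Complex inner product on L2(0, T), Riemann-sum approximation."""
    return np.sum(x * np.conj(y)) * dt

print(round(abs(inner(w(1), w(3))), 6))   # 0.0 -> distinct m are orthogonal
print(round(abs(inner(w(2), w(2))), 6))   # 1.0 -> each w_m has unit norm
```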


Distance between Vectors


▶ Definition: The distance d between two vectors is defined as the norm of their difference, i.e.,
    d(x, y) = ‖x − y‖.
▶ Example: The Cartesian (or Euclidean) distance between vectors in R^N:
    d(x, y) = ‖x − y‖ = √( Σ_{n=1}^N |x_n − y_n|² ).
▶ Example: The root-mean-squared error (RMSE) between two signals in L²(a, b) is
    d(x(t), y(t)) = ‖x(t) − y(t)‖ = √( ∫_a^b |x(t) − y(t)|² dt ).
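Both distances can be sketched in a few lines (illustrative code, not from the slides); the signal-space version uses a Riemann-sum approximation of the integral:

```python
import numpy as np

# Euclidean distance in R^3.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 6.0, 3.0])
print(np.linalg.norm(x - y))   # 5.0

# Distance between x(t) = sin(2*pi*t) and y(t) = 0 on [0, 1):
# the integral of sin^2 over one period is 1/2, so d = sqrt(1/2).
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
xt = np.sin(2 * np.pi * t)
yt = np.zeros_like(t)
dist = np.sqrt(np.sum(np.abs(xt - yt) ** 2) * (t[1] - t[0]))
print(round(dist, 3))   # 0.707
```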


Properties of Distances

▶ Distance measures defined by the norm of the difference between vectors x, y have the following properties:
  1. d(x, y) = d(y, x)
  2. d(x, y) = 0 if and only if x = y
  3. d(x, y) ≤ d(x, z) + d(y, z) for all vectors z (triangle inequality)


Exercise: Prove the Triangle Inequality


▶ Begin like this:
    d²(x, y) = ‖x − y‖²
             = ‖(x − z) + (z − y)‖²
             = ⟨(x − z) + (z − y), (x − z) + (z − y)⟩
▶ Expanding the inner product:
    d²(x, y) = ⟨x − z, x − z⟩ + 2⟨x − z, z − y⟩ + ⟨z − y, z − y⟩
             ≤ ⟨x − z, x − z⟩ + 2|⟨x − z, z − y⟩| + ⟨z − y, z − y⟩
(Schwartz):  ≤ ⟨x − z, x − z⟩ + 2‖x − z‖ · ‖z − y‖ + ⟨z − y, z − y⟩
             = d(x, z)² + 2d(x, z) · d(y, z) + d(y, z)²
             = (d(x, z) + d(y, z))²


Hilbert Spaces — Why we Care


▶ We would like our vector spaces to have one more property.
▶ We say the sequence of vectors {x_n} converges to vector x if
    lim_{n→∞} ‖x_n − x‖ = 0.
▶ We would like the limit point x of any sequence {x_n} to be in our vector space.
▶ Integrals and derivatives are fundamentally limits; we want derivatives and integrals to stay in the vector space.
▶ A vector space is said to be closed if it contains all of its limit points.
▶ Definition: A closed inner product space is a Hilbert space.


Hilbert Spaces — Examples

▶ Examples: Both R^N and L²(a, b) are Hilbert spaces.
▶ Counterexample: The space of rational numbers Q is not closed (i.e., not a Hilbert space).
▶ E.g.,
    Σ_{n=0}^∞ 1/n! = e ∉ Q,
  even though all 1/n! ∈ Q.


Subspaces
▶ Definition: Let S be a linear vector space. The space L is a subspace of S if
  1. L is a subset of S and
  2. L is closed.
▶ If x, y ∈ L then also x, y ∈ S.
▶ And a · x + b · y ∈ L for all scalars a, b.
▶ Example: Let S be L²(T_a, T_b). Define L as the set of all sinusoids of frequency f₀, i.e., signals of the form x(t) = A cos(2πf₀t + φ), with 0 ≤ A < ∞ and 0 ≤ φ < 2π.
  1. All such sinusoids are square integrable.
  2. A linear combination of two sinusoids of frequency f₀ is a sinusoid of the same frequency.
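The closure property in item 2 can be illustrated numerically with phasors (illustrative code, not from the slides): A cos(2πf₀t + φ) corresponds to the phasor A e^{jφ}, and adding phasors gives the amplitude and phase of the combined sinusoid.

```python
import numpy as np

f0 = 5.0
t = np.linspace(0.0, 1.0, 1000, endpoint=False)

def sinusoid(A, phi):
    """A * cos(2*pi*f0*t + phi) sampled on t."""
    return A * np.cos(2 * np.pi * f0 * t + phi)

# A linear combination of two sinusoids at frequency f0 ...
s = 2.0 * sinusoid(1.5, 0.3) + 0.7 * sinusoid(2.0, -1.1)

# ... equals a single sinusoid at f0 whose phasor is the combined phasor.
p = 2.0 * 1.5 * np.exp(1j * 0.3) + 0.7 * 2.0 * np.exp(-1j * 1.1)
s_check = sinusoid(abs(p), np.angle(p))
print(np.allclose(s, s_check))   # True -> the combination stays in L
```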


Projection Theorem
▶ Definition: Let L be a subspace of the Hilbert space H. The vector x ∈ H (and x ∉ L) is orthogonal to the subspace L if ⟨x, y⟩ = 0 for every y ∈ L.
▶ Projection theorem: Let H be a Hilbert space and L a subspace of H. Every vector x ∈ H has a unique decomposition
    x = y + z
  with y ∈ L and z orthogonal to L. Furthermore,
    ‖z‖ = ‖x − y‖ = min_{v∈L} ‖x − v‖.
▶ y is called the projection of x onto L.
▶ The distance from x to all elements of L is minimized by y.

Exercise: Fourier Series


▶ Let x(t) be a signal in the Hilbert space L²(0, T).
▶ Define the subspace L of signals ν_n(t) = A_n cos(2πnt/T) for a fixed n and T.
▶ Find the signal y(t) ∈ L that minimizes
    min_{y(t)∈L} ‖x(t) − y(t)‖².
▶ Answer: y(t) is the sinusoid with amplitude
    A_n = (2/T) ∫₀^T x(t) cos(2πnt/T) dt = (2/T) ⟨x(t), cos(2πnt/T)⟩.
▶ Note that this is (part of the trigonometric form of) the Fourier series expansion.
▶ Note that the inner product involves the projection of x(t) onto an element of L.
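A numeric check of this answer (illustrative code, not from the slides): for a test signal containing a cos(2πnt/T) component of known amplitude, the projection formula recovers that amplitude, with the integral approximated by a Riemann sum.

```python
import numpy as np

T, n = 1.0, 3
t = np.linspace(0.0, T, 200_000, endpoint=False)
dt = t[1] - t[0]

# Test signal: amplitude 4 at the target frequency plus an orthogonal term.
x = 4.0 * np.cos(2 * np.pi * n * t / T) + np.sin(2 * np.pi * 7 * t / T)
c = np.cos(2 * np.pi * n * t / T)

# A_n = (2/T) * <x(t), cos(2*pi*n*t/T)>, approximated by a Riemann sum.
A_n = (2.0 / T) * np.sum(x * c) * dt
print(round(A_n, 3))   # 4.0 -> recovers the cosine component's amplitude
```

The orthogonal sin term contributes nothing to the inner product, which is exactly why the projection isolates the cos(2πnt/T) component.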

Projection Theorem

▶ The projection theorem is most useful when the subspace L has certain structural properties.
▶ In particular, we will be interested in the case when L is spanned by a set of orthonormal vectors.
▶ Let’s define what that means.


Separable Vector Spaces


▶ Definition: A Hilbert space H is said to be separable if there exists a set of vectors {Φ_n}, n = 1, 2, …, that are elements of H and such that every element x ∈ H can be expressed as
    x = Σ_{n=1}^∞ X_n Φ_n.
▶ The coefficients X_n are scalars associated with the vectors Φ_n.
▶ Equality is taken to mean
    lim_{N→∞} ‖x − Σ_{n=1}^N X_n Φ_n‖² = 0.


Representation of a Vector

▶ The set of vectors {Φ_n} is said to be complete if the above is valid for every x ∈ H.
▶ A complete set of vectors {Φ_n} is said to form a basis for H.
▶ Definition: The representation of the vector x (with respect to the basis {Φ_n}) is the sequence of coefficients {X_n}.
▶ Definition: The number of vectors Φ_n required to express every element x of a separable vector space is called the dimension of the space.


Example: Length-N column Vectors

▶ The space R^N is separable and has dimension N.
▶ Basis vectors (m = 1, …, N):
    Φ_m = e_m = (0, …, 0, 1, 0, …, 0)^T, where the 1 occurs in the m-th row.
▶ There are N basis vectors; the dimension is N.


Example: Length-N column Vectors — continued

▶ (con’t)
▶ For any vector x ∈ R^N:
    x = (x_1, x_2, …, x_N)^T = Σ_{m=1}^N x_m e_m


Examples: L2

▶ Fourier bases: The following is a complete basis for L²(0, T):
    Φ_{2n}(t)   = √(2/T) cos(2πnt/T)
    Φ_{2n+1}(t) = √(2/T) sin(2πnt/T)
  for n = 0, 1, 2, …
▶ This implies that L²(0, T) is a separable vector space.
▶ L²(0, T) is infinite-dimensional.


Examples: L2

▶ Piecewise constant signals: The set of vectors (signals)
    Φ_n(t) = 1/√T for (n − 1)T ≤ t < nT, and 0 otherwise,
  is not a basis for L²(0, ∞).
▶ Only piecewise constant signals can be represented.
▶ But this set is a basis for the subspace of L² consisting of piecewise constant signals.
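A small sketch of both points (illustrative code, not from the slides): projecting a smooth signal onto the indicator functions Φ_n yields only its piecewise-constant (staircase) approximation, and on each interval the projection coefficient reproduces the segment mean.

```python
import numpy as np

T = 1.0
n_intervals, samples_per = 8, 1000
t = np.linspace(0.0, n_intervals * T, n_intervals * samples_per, endpoint=False)
dt = t[1] - t[0]
x = np.sqrt(t)   # smooth test signal, not piecewise constant

approx = np.zeros_like(x)
for n in range(n_intervals):
    seg = slice(n * samples_per, (n + 1) * samples_per)
    phi = 1.0 / np.sqrt(T)               # value of Phi_n on its interval
    X_n = np.sum(x[seg] * phi) * dt      # <x, Phi_n>, Riemann-sum approximation
    approx[seg] = X_n * phi              # contribution X_n * Phi_n

# Each segment of the approximation equals the mean of x on that segment;
# the smooth variation of x within a segment cannot be represented.
print(np.allclose(approx[:samples_per], np.mean(x[:samples_per])))   # True
```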


Orthonormal Bases

▶ Definition: A basis for a separable vector space is an orthonormal basis if the vectors that constitute the basis satisfy
  1. ⟨Φ_n, Φ_m⟩ = 0 for all n ≠ m (orthogonal)
  2. ‖Φ_n‖ = 1 for all n = 1, 2, … (normalized)
▶ Note:
  ▶ Not every basis is orthonormal.
  ▶ As we will see shortly, every basis can be turned into an orthonormal basis.
  ▶ Not every set of orthonormal vectors constitutes a basis.
    ▶ Example: the piecewise constant signals above.


Representation with Orthonormal Basis


I An orthonormal basis is much prefered over an arbitrary
basis because the representation of vector x is very easy
to compute.
I The representation {Xn } of a vector x

x= Â Xn Fn
n =1
with respect to an orthonormal basis {Fn } is computed
using
Xn = hx, Fn i.
The representation Xn is obtained by projecting x onto the
basis vector Fn !
I In contrast, when bases are not orthonormal, finding the
representation of x requires solving a system of linear
equations.
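The contrast can be demonstrated on R² (illustrative code, not from the slides): with an orthonormal basis, the coefficients are plain projections; with a general basis, the same projections fail and a linear system must be solved instead.

```python
import numpy as np

x = np.array([3.0, 1.0])

# Orthonormal basis (rotated standard basis): coefficients by projection.
c, s = np.cos(0.4), np.sin(0.4)
Phi = [np.array([c, s]), np.array([-s, c])]
X = np.array([np.dot(x, p) for p in Phi])
print(np.allclose(X[0] * Phi[0] + X[1] * Phi[1], x))   # True

# Non-orthonormal basis (columns of B): projections do not reconstruct x ...
B = np.column_stack([[1.0, 0.0], [1.0, 1.0]])
X_wrong = np.array([np.dot(x, B[:, 0]), np.dot(x, B[:, 1])])
print(np.allclose(B @ X_wrong, x))                     # False

# ... instead one must solve the linear system B X = x.
X_solve = np.linalg.solve(B, x)
print(np.allclose(B @ X_solve, x))                     # True
```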