MODULE -5
INNER PRODUCT SPACES
An inner product on a vector space V is a function that associates with each pair of vectors x and y a
scalar, denoted by ⟨x, y⟩, such that, if x, y, and z are vectors and c is a scalar, then
(a) ⟨x, y⟩ = ⟨y, x⟩;
(b) ⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩;
(c) ⟨cx, y⟩ = c⟨x, y⟩;
(d) ⟨x, x⟩ ≥ 0; ⟨x, x⟩ = 0 if and only if x = 0.
Thus, for vectors x and y in ℝⁿ, the dot product
x ⋅ y = ⟨x, y⟩ = xᵀy
defines an inner product.
Cauchy-Schwarz Inequality: |x ⋅ y| ≤ ‖x‖ ‖y‖.
If the vectors x and y are orthogonal, then expanding ‖x + y‖² = (x + y) ⋅ (x + y) = ‖x‖² + 2 x ⋅ y + ‖y‖² yields the Pythagorean formula:
‖x + y‖² = ‖x‖² + ‖y‖²
Exercise: Is the converse of the above statement true? Justify your answer.
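As a quick numerical illustration (added here; the vectors are made up for the example), the following Python/NumPy sketch checks the Cauchy-Schwarz inequality for a generic pair of vectors and the Pythagorean formula for an orthogonal pair.

# Numerical check of the Cauchy-Schwarz inequality and the Pythagorean formula
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, 0.0, -4.0])
# Cauchy-Schwarz: |x . y| <= ||x|| ||y||
print(abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y))    # True

u = np.array([1.0, 1.0, 0.0])
v = np.array([1.0, -1.0, 3.0])    # u . v = 0, so u and v are orthogonal
lhs = np.linalg.norm(u + v) ** 2
rhs = np.linalg.norm(u) ** 2 + np.linalg.norm(v) ** 2
print(np.isclose(lhs, rhs))       # True: ||u + v||^2 = ||u||^2 + ||v||^2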
Note: Any set of n mutually orthogonal nonzero vectors in ℝⁿ constitutes a basis for ℝⁿ, called an orthogonal basis.
Note: The vector x in ℝⁿ is a solution vector of Ax = 0 if and only if x is orthogonal to each vector in the row space Row(A).
Theorem: Let A be an m × n matrix. Then the row space Row(A) and the null space Null(A) are orthogonal complements in ℝⁿ. That is, Null(A) = (Row(A))⊥ and Row(A) = (Null(A))⊥.
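To illustrate the theorem, here is a small NumPy sketch (an added illustration; the matrix A and the null-space vector x are chosen arbitrarily): a vector in the null space of A is orthogonal to every row of A.

# Row(A) and Null(A) are orthogonal: a null-space vector is orthogonal to each row
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
x = np.array([1.0, -2.0, 1.0])   # solves Ax = 0 (found by back substitution)
print(np.allclose(A @ x, 0))     # True: x is in Null(A)
print(A @ x)                     # each entry is a row of A dotted with x; all zero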
The linear system Ax = b is called overdetermined if A is m × n with m > n, i.e., the system has more equations than variables.
Assumption in the following discussion: The matrix A has rank n, which means that its n column vectors a₁, a₂, ..., aₙ are linearly independent and hence form a basis for Col(A).
Our discussion here concerns the situation in which the system Ax = b is inconsistent. Observe that the system may be written as
b = x₁a₁ + x₂a₂ + ⋯ + xₙaₙ,
and inconsistency of the system simply means that b is not in the n-dimensional subspace Col(A) of ℝᵐ. Motivated by this situation, we want to find the orthogonal projection p of b into Col(A); i.e., given a vector b in ℝᵐ that does not lie in the subspace V of ℝᵐ with basis vectors a₁, a₂, ..., aₙ, we want to find vectors p in V and q in V⊥ such that
b = p + q.
The unique vector p is called the orthogonal projection of the vector b into the subspace V = Col(A).
To find p, proceed as follows:
1. Write p = Ay for some y in ℝⁿ; the requirement that q = b − Ay be orthogonal to Col(A) means that Aᵀ(b − Ay) = 0;
2. Equivalently, y satisfies the normal system
AᵀAy = Aᵀb
(note that AᵀA is nonsingular);
3. Consequently, the normal system has a unique solution
y = (AᵀA)⁻¹Aᵀb;
4. The orthogonal projection p of b into Col(A) is given by
p = Ay = A(AᵀA)⁻¹Aᵀb.
Definition: Let the m × n matrix A have rank n. Then by the least squares solution of the system Ax = b is meant the solution y of the corresponding normal system AᵀAy = Aᵀb.
Note:
1. If the system Ax = b is inconsistent, then its least squares solution is as close as possible to being
a solution to the inconsistent system;
2. If the system is consistent, then its least squares solution is the actual (unique) solution.
3. The 'least sum of squares of errors' is given by ‖p − b‖².
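The steps above translate directly into code. The following NumPy sketch (added for illustration, with a made-up matrix A and vector b) forms the normal system AᵀAy = Aᵀb, solves it, and computes the projection p = Ay; it also compares the answer with numpy.linalg.lstsq.

# Least squares via the normal equations, for an overdetermined system Ax = b
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])        # 3 x 2, rank 2
b = np.array([6.0, 0.0, 0.0])

AtA = A.T @ A
Atb = A.T @ b
y = np.linalg.solve(AtA, Atb)     # solution of the normal system (A^T A is nonsingular)
p = A @ y                         # orthogonal projection of b into Col(A)

print(y)                                                        # least squares solution
print(np.linalg.norm(p - b) ** 2)                               # least sum of squares of errors
print(np.allclose(y, np.linalg.lstsq(A, b, rcond=None)[0]))     # True: agrees with lstsq

In practice, library routines such as lstsq use QR or SVD rather than forming AᵀA explicitly, which is numerically more robust, but the normal-equation route matches the derivation above.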
Example: The orthogonal projection of the vector b into the line through the origin in ℝᵐ determined by the single nonzero vector a is given by
p = ay = (aᵀb / aᵀa) a = ((a ⋅ b) / (a ⋅ a)) a.
Example: To find the straight line y = a + bx that best fits the data points
( x1 , y1 ), ( x2 , y2 ),..., ( xn , yn ) , the least squares solution x = (a, b) is obtained by solving the normal
equations:
na + (∑ xᵢ) b = ∑ yᵢ
(∑ xᵢ) a + (∑ xᵢ²) b = ∑ xᵢyᵢ
(all sums running from i = 1 to n).
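For the straight-line fit, the two normal equations form a 2 × 2 linear system in a and b. The sketch below (an added illustration; the data points are invented) builds that system from the sums and solves it.

# Fit y = a + b x by solving the 2 x 2 normal equations built from the sums
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.1, 1.9, 3.2, 3.8])
n = len(xs)

M = np.array([[n,        xs.sum()],
              [xs.sum(), (xs ** 2).sum()]])
rhs = np.array([ys.sum(), (xs * ys).sum()])
a, b = np.linalg.solve(M, rhs)
print(a, b)   # intercept and slope of the best-fit line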
Inner Product:
A vector space V along with a specified inner product on V is called an inner product space.
Orthogonality: Let V be an inner product space. Two vectors u, v ∈ V are said to be orthogonal (and u is said to be orthogonal to v) if
⟨u, v⟩ = 0.
Example: On C[0, 1] with the inner product ⟨f, g⟩ = ∫₀¹ f(t) g(t) dt, let f(t) = 3t − 5 and g(t) = t². Then
⟨f, g⟩ = ∫₀¹ (3t − 5) t² dt
      = ∫₀¹ (3t³ − 5t²) dt
      = [3t⁴/4 − 5t³/3] from t = 0 to t = 1
      = 3/4 − 5/3
      = −11/12.
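As a sanity check on this value (an added illustration, not part of the original notes), −11/12 can be reproduced by approximating the integral ∫₀¹ (3t − 5)t² dt numerically, e.g. with a simple midpoint rule.

# Numerical check that <f, g> = integral_0^1 (3t - 5) t^2 dt = -11/12
import numpy as np

N = 200000
t = (np.arange(N) + 0.5) / N          # midpoints of N subintervals of [0, 1]
integrand = (3 * t - 5) * t ** 2
approx = integrand.mean()             # midpoint rule: mean value times interval length 1
print(approx, -11 / 12)               # approx ≈ -0.9166666...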
The angle θ between two nonzero vectors u and v in an inner product space is given by
cos θ = ⟨u, v⟩ / (‖u‖ ‖v‖),  0 ≤ θ ≤ π.
Proof sketch: if ∑ λᵣvᵣ = 0, then taking the inner product of both sides with vᵣ gives λᵣ = 0, since vᵣ is orthogonal to the other vectors in the set and vᵣ is a unit vector. Hence each λᵣ = 0, so an orthonormal set of vectors is linearly independent.
Because of this result, if we want to show that a set of vectors is an orthonormal basis, we need only show that it is orthonormal and that it spans the space. Linear independence comes for free.
Another important consequence of the above theorem is that it is very easy to find the coordinates of
a vector relative to an orthonormal basis.
Orthogonal Projections- R2:
• Let u and v be vectors in the plane. If v is nonzero, then u can be orthogonally projected onto
v. This projection is denoted by projvu.
Then, projᵥ u = ((u ⋅ v) / (v ⋅ v)) v.
projᵥu in R2:
• Writing projᵥ u = a v with a = (u ⋅ v)/(v ⋅ v): if the angle θ between u and v is obtuse, then a < 0 and cos θ < 0, but the orthogonal projection of u onto v is still given by the same formula.
• The orthogonal projection of u = (4, 2) onto v = (3, 4) is given by
projᵥ u = ((u ⋅ v) / (v ⋅ v)) v = (20/25)(3, 4) = (12/5, 16/5).
Orthogonal Projection:
• Let u and v be vectors in an inner product space V. Then the orthogonal projection of u onto
v is given by
projᵥ u = (⟨u, v⟩ / ⟨v, v⟩) v,  v ≠ 0.
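A small helper makes the projection formula concrete. The sketch below (added here; the function name proj and the vectors are only for illustration, and the standard dot product on ℝⁿ is assumed) reproduces the R2 example above and checks that the residual is orthogonal to v.

# Orthogonal projection of u onto v with the standard dot product
import numpy as np

def proj(u, v):
    # proj_v(u) = (<u, v> / <v, v>) v, requires v != 0
    return (u @ v) / (v @ v) * v

u = np.array([4.0, 2.0])
v = np.array([3.0, 4.0])
print(proj(u, v))              # [2.4 3.2] = (12/5, 16/5)
print((u - proj(u, v)) @ v)    # ≈ 0: the residual u - proj_v(u) is orthogonal to v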
If c₁v₁ + ⋯ + cₙvₙ = 0 for an orthogonal set {v₁, …, vₙ} of nonzero vectors, then taking the inner product of both sides with each vᵢ gives cᵢ⟨vᵢ, vᵢ⟩ = 0; hence every cᵢ must be zero and the set must be linearly independent.
Corollary
If V is an inner product space of dimension n, then any orthogonal set of n nonzero vectors is a
basis for V.
Example: Show that S = {(2, 3, 2, −2), (1, 0, 0, 1), (−1, 0, 2, 1), (−1, 2, −1, 1)} is a basis for ℝ⁴.
Sol: Writing S = {v₁, v₂, v₃, v₄}, we have
v₁ ⋅ v₂ = 0, v₁ ⋅ v₃ = 0, v₁ ⋅ v₄ = 0, v₂ ⋅ v₃ = 0, v₂ ⋅ v₄ = 0, v₃ ⋅ v₄ = 0.
Thus S is an orthogonal set of four nonzero vectors in the four-dimensional space ℝ⁴, and by the corollary it is a basis for ℝ⁴.
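The pairwise dot products above are easy to verify by machine; this brief sketch (an added check, not part of the original solution) confirms that every pair of distinct vectors in S is orthogonal.

# Verify that the four vectors of S are mutually orthogonal
import numpy as np

S = np.array([[ 2.0, 3.0,  2.0, -2.0],
              [ 1.0, 0.0,  0.0,  1.0],
              [-1.0, 0.0,  2.0,  1.0],
              [-1.0, 2.0, -1.0,  1.0]])
G = S @ S.T                                       # Gram matrix: entry (i, j) is v_i . v_j
print(np.allclose(G - np.diag(np.diag(G)), 0))    # True: all off-diagonal dot products are zero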
1)Apply the Gram-Schmidt orthonormal process to the following basis for R2: B = {(1, 1), (0, 1)}.
Sol:
w₁ = v₁ = (1, 1) ⇒ u₁ = w₁ / ‖w₁‖ = (√2/2, √2/2)
w₂ = v₂ − ((v₂ ⋅ w₁) / (w₁ ⋅ w₁)) w₁ = (0, 1) − (1/2)(1, 1) = (−1/2, 1/2) ⇒ u₂ = w₂ / ‖w₂‖ = (−√2/2, √2/2)
2)Apply the Gram-Schmidt orthonormal process to the following basis for R3: B = {(1, 1, 0), (1, 2, 0),
(0, 1, 2)}.
Sol:
w₁ = v₁ = (1, 1, 0) ⇒ u₁ = w₁ / ‖w₁‖ = (√2/2, √2/2, 0)
w₂ = v₂ − ((v₂ ⋅ w₁) / (w₁ ⋅ w₁)) w₁ = (1, 2, 0) − (3/2)(1, 1, 0) = (−1/2, 1/2, 0) ⇒ u₂ = w₂ / ‖w₂‖ = (−√2/2, √2/2, 0)
w₃ = v₃ − ((v₃ ⋅ w₁) / (w₁ ⋅ w₁)) w₁ − ((v₃ ⋅ w₂) / (w₂ ⋅ w₂)) w₂ = (0, 1, 2) − (1/2)(1, 1, 0) − ((1/2)/(1/2))(−1/2, 1/2, 0) = (0, 0, 2) ⇒ u₃ = w₃ / ‖w₃‖ = (0, 0, 1)
3) The vectors v1 = (0, 1, 0) and v2 = (1, 1, 1) span a plane in R3. Find an orthonormal
basis for this subspace.
Sol: w₁ = v₁ = (0, 1, 0) ⇒ u₁ = w₁ / ‖w₁‖ = (0, 1, 0)
w₂ = v₂ − ((v₂ ⋅ w₁) / (w₁ ⋅ w₁)) w₁ = (1, 1, 1) − (1/1)(0, 1, 0) = (1, 0, 1) ⇒ u₂ = w₂ / ‖w₂‖ = (√2/2, 0, √2/2)
4) Find an orthonormal basis for the solution space of the following homogeneous
system of linear equations
x₁ + x₂ + 7x₄ = 0,  2x₁ + x₂ + 2x₃ + 6x₄ = 0
Sol: Let x₃ = s and x₄ = t. Row reducing the augmented matrix,
[1 1 0 7 | 0]        [1 0  2 −1 | 0]
[2 1 2 6 | 0]   ⇒   [0 1 −2  8 | 0],
so x₁ = −2s + t and x₂ = 2s − 8t, and
(x₁, x₂, x₃, x₄) = s(−2, 2, 1, 0) + t(1, −8, 0, 1) = s v₁ + t v₂.
The spanning vectors v₁ = (−2, 2, 1, 0) and v₂ = (1, −8, 0, 1) are not orthogonal (v₁ ⋅ v₂ = −18), so the Gram-Schmidt process is applied to {v₁, v₂} to obtain an orthonormal basis for the solution space, as in the sketch below.
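The remaining step can be carried out as follows; this NumPy sketch (an added illustration of the computation, not part of the original solution) applies Gram-Schmidt to v₁ and v₂ and confirms that the resulting unit vectors still satisfy both equations of the system.

# Orthonormal basis for the solution space: Gram-Schmidt on v1, v2, then check A u = 0
import numpy as np

A = np.array([[1.0, 1.0, 0.0, 7.0],
              [2.0, 1.0, 2.0, 6.0]])
v1 = np.array([-2.0, 2.0, 1.0, 0.0])
v2 = np.array([1.0, -8.0, 0.0, 1.0])

w1 = v1
w2 = v2 - (v2 @ w1) / (w1 @ w1) * w1      # subtract the projection of v2 onto w1
u1 = w1 / np.linalg.norm(w1)              # (-2/3, 2/3, 1/3, 0)
u2 = w2 / np.linalg.norm(w2)              # (-3, -4, 2, 1) / sqrt(30)

print(u1, u2)
print(np.allclose(A @ u1, 0), np.allclose(A @ u2, 0), np.isclose(u1 @ u2, 0))   # all True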
GRAM-SCHMIDT PROCESS:
Every finite-dimensional inner product space V has an orthogonal basis.
Proof: We prove this by induction on the dimension of V.
If dim(V) = 0, then the empty set is an orthogonal basis.
Suppose that every inner product space of dimension n has an orthogonal basis, and suppose that V is an inner product space of dimension n + 1.
Let (v₁, …, vₙ₊₁) be a basis for V and let U = span(v₁, …, vₙ).
By the induction hypothesis, U has an orthogonal basis {u₁, …, uₙ}; subtracting from vₙ₊₁ its projection onto each uᵢ produces a nonzero vector orthogonal to U, which together with u₁, …, uₙ gives an orthogonal basis for V.
Carried out step by step on a basis v₁, …, vₙ, this construction is the Gram-Schmidt process:
w₁ = v₁,
w₂ = v₂ − (⟨v₂, w₁⟩ / ⟨w₁, w₁⟩) w₁,
w₃ = v₃ − (⟨v₃, w₁⟩ / ⟨w₁, w₁⟩) w₁ − (⟨v₃, w₂⟩ / ⟨w₂, w₂⟩) w₂,
…,
wₙ = vₙ − (⟨vₙ, w₁⟩ / ⟨w₁, w₁⟩) w₁ − (⟨vₙ, w₂⟩ / ⟨w₂, w₂⟩) w₂ − ⋯ − (⟨vₙ, wₙ₋₁⟩ / ⟨wₙ₋₁, wₙ₋₁⟩) wₙ₋₁.
The set {w₁, …, wₙ} is orthogonal; normalizing each wᵢ gives an orthonormal basis uᵢ = wᵢ / ‖wᵢ‖.
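The construction is easy to implement. The sketch below (an added illustration; the helper gram_schmidt is only a name chosen here, and the standard dot product on ℝⁿ is assumed) carries out the process on a list of basis vectors, normalizes, and reproduces example 2 above.

# Gram-Schmidt process with the standard dot product, followed by normalization
import numpy as np

def gram_schmidt(vectors):
    ws = []
    for v in vectors:
        w = v.astype(float)
        for prev in ws:
            w -= (v @ prev) / (prev @ prev) * prev   # subtract projection onto each earlier w
        ws.append(w)
    return [w / np.linalg.norm(w) for w in ws]       # orthonormal vectors u_i = w_i / ||w_i||

B = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 2.0, 0.0]),
     np.array([0.0, 1.0, 2.0])]
for u in gram_schmidt(B):
    print(u)    # (√2/2, √2/2, 0), (-√2/2, √2/2, 0), (0, 0, 1)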
QR Factorization:
QR Factorization (or QR Decomposition) is a technique in linear algebra used to decompose a given
matrix A into a product of two matrices:
Q is an orthogonal (or unitary) matrix, meaning QᵀQ = I (or QᴴQ = I for complex matrices).
R is an upper triangular matrix.
When Q = [q₁ … qₙ] is obtained by applying the Gram-Schmidt process to the columns a₁, …, aₙ of A and normalizing, the entries of R are rᵢⱼ = qᵢᵀaⱼ for i ≤ j (for example, r₁₂ = q₁ᵀa₂) and rᵢⱼ = 0 for i > j, so that R = QᵀA is upper triangular.
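The following sketch (added for illustration, with an arbitrarily chosen full-rank matrix) builds Q by Gram-Schmidt on the columns of A, forms R = QᵀA, and checks the factorization.

# QR factorization: Q from Gram-Schmidt on the columns of A, R = Q^T A (upper triangular)
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

m, n = A.shape
Q = np.zeros((m, n))
for j in range(n):
    w = A[:, j].copy()
    for i in range(j):
        w -= (A[:, j] @ Q[:, i]) * Q[:, i]   # subtract projection onto earlier orthonormal columns
    Q[:, j] = w / np.linalg.norm(w)

R = Q.T @ A                                  # entries r_ij = q_i^T a_j, zero below the diagonal
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(n)))   # True True

In practice, numpy.linalg.qr computes the same factorization (up to signs of the columns of Q) using Householder reflections, which is numerically more robust than classical Gram-Schmidt.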