MATH 110: LINEAR ALGEBRA
HOMEWORK #3
FARMER SCHLUTZENBERG
Date: September 22.

2.1: Linear Transformations, Null Spaces, and Ranges
Problem 1. Here V and W are vector spaces over a field F and T : V → W (but T may not be linear).
(a) True. I can't find it defined in the book, but "T preserves sums and scalar products" is just a short way of saying that T satisfies conditions (a) and (b) in the definition of linear transformation.
(b) False. (Note firstly that the statement is to be read "if ∀x, y [T(x + y) = T(x) + T(y)] then T is linear".) If the base field F = Z_p then the statement is true. (To see T preserves scalar multiplication, let c ∈ F. Then there is some n < p such that c = 1 + ... + 1, where there are n terms on the right side. Then use distribution and preservation of addition to get T(cv) = cT(v).) One can argue similarly to show the statement is also true if F = Q.
To get a counter-example, we need F to be less trivial. For some motivation, consider Theorem 2.1. It tells us that if Rg(T) is not a subspace of W, then T is not linear. So we can look for T like this. Let V = W = Q(√2) over F = Q(√2). (Here we use the fact that any field is a vector space over itself.) Note that the only subspaces of W (or V) are {0} and Q(√2) (why is Q not a subspace?). So T will be non-linear if its range is between these two sets - i.e. {0} ⊊ Rg(T) ⊊ Q(√2). But we can get Rg(T) = Q by defining T by:
T(q + r√2) = q,
where q, r ∈ Q. There are a few things to check.
Firstly we need to know that this really defines a function. That is, given v ∈ V, there must be exactly one value that we're telling T to send v to. If we had v = q_1 + r_1√2 = q_2 + r_2√2 (where the q's and r's are in Q), but q_1 ≠ q_2, then we'd be giving conflicting instructions to T: both T(v) = q_1 and T(v) = q_2. But considering Q(√2) as a vector space over Q, the set {1, √2} is a basis (linear independence comes down to the fact that √2 ∉ Q). So if v = q_1 + r_1√2 = q_2 + r_2√2, then by uniqueness of representation of v as a linear combination of the basis, q_1 = q_2 and r_1 = r_2. So there is no such problem. Also, we have defined T on all elements of V, as every v ∈ V is of the form q + r√2 as above. Thus T is a function.


It's easy to check T preserves addition.
It's also easy to check that Rg(T) = Q. So T cannot be linear, as discussed above. Thus T does not preserve scalar multiplication. However, we can check this directly. T will preserve scalar multiplication for scalars in Q, so we should check √2. Note that T(1) = T(1 + 0·√2) = 1, so √2·T(1) = √2. But T(√2·1) = T(0 + 1·√2) = 0. Thus T does not preserve scalar multiplication.
Note also that the nullspace of T is not a subspace of V.
Verifying that T is a function can also be done by appealing to Theorem 2.6 (the argument above is similar to that proof). For we can let V′ = W′ = Q(√2) be vector spaces over Q, so {1, √2} is a basis for V′. By Theorem 2.6, there is a unique linear T′ : V′ → W′ such that T′(1) = 1 and T′(√2) = 0. Then by linearity, T′ satisfies the equation given above for T. This is not a contradiction, as T′ is only guaranteed to be linear in terms of V′ and W′ - i.e., with field of scalars Q. When we extend the field of scalars to Q(√2), scalar multiplication is no longer preserved.
(c) False. E.g. V = W = R, T(0) = 0, T(x) = 1 if x ≠ 0. (If T is linear it is true.)
(d) True. T(0_V) = T(0_F · 0_V) = 0_F · T(0_V) = 0_W, using linearity for the second equality.
(e) False. If dim(V) ≠ dim(W), then by the dimension theorem, nullity(T) + rank(T) = dim(V) ≠ dim(W).
(f) False. E.g. the zero map, where dim(V) > 0.
(g) True. Corollary to Theorem 2.6.
(h) False. E.g. if x_1, x_2 are dependent but y_1, y_2 are independent.
For a few of the following problems, I'll use the following simple fact, which sits naturally with Theorems 2.4 and 2.5:
Fact 1. Suppose V and W are finite dimensional vector spaces over a field F and T : V → W is linear. Then T is 1-1 iff nullity(T) = 0, and T is onto iff rank(T) = dim(W).
Proof. The first statement is simply a rephrasing of Theorem 2.4, as nullity(T) = 0 ⇔ N(T) = {0}.
For the second, if T is onto then Rg(T) = W, so rank(T) = dim(Rg(T)) = dim(W). On the other hand, Rg(T) is a subspace of W, so if dim(Rg(T)) = dim(W) then by Theorem 1.11, we must have Rg(T) = W, i.e., T is onto.
Problem 3. Let V = R² and W = R³; here T(a_1, a_2) = (a_1 + a_2, 0, 2a_1 - a_2).
T is linear.
T(a + b) = T(a_1 + b_1, a_2 + b_2) = ((a_1 + b_1) + (a_2 + b_2), 0, 2(a_1 + b_1) - (a_2 + b_2)) = ((a_1 + a_2) + (b_1 + b_2), 0, (2a_1 - a_2) + (2b_1 - b_2)) = T(a) + T(b).
T(ra) = T(ra_1, ra_2) = (ra_1 + ra_2, 0, 2ra_1 - ra_2) = (r(a_1 + a_2), 0, r(2a_1 - a_2)) = rT(a).
Nullspace.
a ∈ N(T) ⇔ (a_1 + a_2, 0, 2a_1 - a_2) = 0 ⇔ a_1 = -a_2 & a_1 = a_2/2 ⇔ a_1 = a_2 = 0 ⇔ a = 0.
So N(T) = {0}. Therefore ∅ is a (the) basis for N(T).
Range.
Rg(T) = {T(a) | a ∈ R²} = {(a_1 + a_2, 0, 2a_1 - a_2) | (a_1, a_2) ∈ R²} = {a_1(1, 0, 2) + a_2(1, 0, -1) | a_1, a_2 ∈ R} = span((1, 0, 2), (1, 0, -1)).
The last equality is by definition of span: notice that the set on the left is the set of all linear combinations of the vectors (1, 0, 2) and (1, 0, -1). The set B = {(1, 0, 2), (1, 0, -1)} is independent (the computation is done in showing N(T) = {0}). Since B also spans Rg(T), it is a basis. (If we didn't have to verify the dimension theorem, we could actually apply the dimension theorem here to conclude that it is independent without checking it, for the dimension theorem tells us that there must be 2 vectors in a basis for Rg(T).)
Dimension Theorem.
From the above, nullity(T) = 0 and rank(T) = 2, and 0 + 2 = 2 = dim(R²), agreeing with the dimension theorem.
1-1/onto.
As nullity(T) = 0 and rank(T) = 2 < dim(W) = 3, the fact following Problem 1 tells us that T is 1-1 but not onto.
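(An aside, not part of the solution: since T is given by a matrix, the rank and nullity are easy to sanity-check numerically. The encoding of T as a 3×2 matrix below is my own, and I'm assuming numpy is available.)

```python
import numpy as np

# T(a1, a2) = (a1 + a2, 0, 2*a1 - a2), encoded as a 3x2 matrix
# acting on column vectors.
A = np.array([[1.0, 1.0],
              [0.0, 0.0],
              [2.0, -1.0]])

rank = np.linalg.matrix_rank(A)   # dim Rg(T)
nullity = A.shape[1] - rank       # dim N(T), via the dimension theorem
print(rank, nullity)              # 2 0 -> T is 1-1 but not onto R^3
```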
Problem 4. Let V = M_{2×3}(F) and W = M_{2×2}(F); here
T(A) = \begin{pmatrix} 2A_{11} - A_{12} & A_{13} + 2A_{12} \\ 0 & 0 \end{pmatrix}.
T is linear.
I will use the criterion specified in remark 2 following the definition of linearity in the textbook. Let A, B ∈ V and c ∈ F. Then
T(A + cB) = T\left( \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \end{pmatrix} + \begin{pmatrix} cB_{11} & cB_{12} & cB_{13} \\ cB_{21} & cB_{22} & cB_{23} \end{pmatrix} \right)
= T \begin{pmatrix} A_{11} + cB_{11} & A_{12} + cB_{12} & A_{13} + cB_{13} \\ A_{21} + cB_{21} & A_{22} + cB_{22} & A_{23} + cB_{23} \end{pmatrix}
= \begin{pmatrix} 2(A_{11} + cB_{11}) - (A_{12} + cB_{12}) & (A_{13} + cB_{13}) + 2(A_{12} + cB_{12}) \\ 0 & 0 \end{pmatrix}
= \begin{pmatrix} (2A_{11} - A_{12}) + (2cB_{11} - cB_{12}) & (A_{13} + 2A_{12}) + (cB_{13} + 2cB_{12}) \\ 0 & 0 \end{pmatrix}
= \begin{pmatrix} (2A_{11} - A_{12}) + c(2B_{11} - B_{12}) & (A_{13} + 2A_{12}) + c(B_{13} + 2B_{12}) \\ 0 & 0 \end{pmatrix}
= \begin{pmatrix} 2A_{11} - A_{12} & A_{13} + 2A_{12} \\ 0 & 0 \end{pmatrix} + c \begin{pmatrix} 2B_{11} - B_{12} & B_{13} + 2B_{12} \\ 0 & 0 \end{pmatrix}
= T(A) + cT(B).
Nullspace.
T(A) = 0 ⇔ \begin{pmatrix} 2A_{11} - A_{12} & A_{13} + 2A_{12} \\ 0 & 0 \end{pmatrix} = 0 ⇔ A_{11} = A_{12}/2 & A_{13} = -2A_{12},
where I'm assuming char(F) ≠ 2. So
N(T) = {A ∈ V | A_{11} = A_{12}/2 & A_{13} = -2A_{12}}
= { \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix} : a_{ij} ∈ F, a_{11} = a_{12}/2 & a_{13} = -2a_{12} }
= { \begin{pmatrix} a/2 & a & -2a \\ b_1 & b_2 & b_3 \end{pmatrix} : a, b_i ∈ F }
(1) = { aF + b_1 E^(21) + b_2 E^(22) + b_3 E^(23) : a, b_i ∈ F },
where
F = \begin{pmatrix} 1/2 & 1 & -2 \\ 0 & 0 & 0 \end{pmatrix},
and the E^(kl) are the standard basis matrices for V (so E^(kl) has a 1 in the k-th row, l-th column, and 0 elsewhere).
So we have that N(T) = span(B), where B = {F, E^(21), E^(22), E^(23)} (by the form of N(T) in (1), similarly to the range in the previous problem). It's also clear that B is linearly independent, so it is a basis for N(T).
Range.
Let
B′ = { \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} }.
Then Rg(T) = span(B′). To see this, we check that each side is a subset of the other. From the definition of T, we clearly have Rg(T) ⊆ span(B′). To see ⊇, note that
T \begin{pmatrix} 1/2 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},
and similarly, we can get the second matrix in B′ to be hit by T (meaning in Rg(T)). So B′ ⊆ Rg(T). But Rg(T) is a subspace of W, so by Theorem 1.5, span(B′) ⊆ Rg(T). As B′ is also independent, it is a basis for Rg(T).
Dimensions.
Inspecting the bases we found, B and B′, for the nullspace and range respectively, we have nullity(T) = 4 and rank(T) = 2, and dim(V) = 6 = 4 + 2.
1-1/onto.
By Theorem 2.4, T is not 1-1 (its nullspace is non-trivial). rank(T) = 2 < dim(W) = 4, so by the fact following Problem 1, T is not onto.
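(Again as an aside: flattening M_{2×3}(F) to F^6 and M_{2×2}(F) to F^4 turns T into a 4×6 matrix, so over F = R we can check rank(T) = 2 and nullity(T) = 4 by machine. The flattening convention below is my own choice.)

```python
import numpy as np

# Flatten A in M_{2x3} to (A11, A12, A13, A21, A22, A23); then
# T(A) flattens to (2*A11 - A12, A13 + 2*A12, 0, 0) in F^4.
M = np.array([[2.0, -1.0, 0.0, 0.0, 0.0, 0.0],
              [0.0,  2.0, 1.0, 0.0, 0.0, 0.0],
              [0.0,  0.0, 0.0, 0.0, 0.0, 0.0],
              [0.0,  0.0, 0.0, 0.0, 0.0, 0.0]])

rank = np.linalg.matrix_rank(M)
print(rank, M.shape[1] - rank)    # 2 4, matching rank(T) and nullity(T)
```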
Problem 5. Let V = P_2(R) and W = P_3(R); here T(f) = xf + f′.
T is linear.
Let f, g ∈ V and c ∈ R. Then
T(f + cg) = x(f + cg) + (f + cg)′ = xf + xcg + f′ + cg′ = (xf + f′) + c(xg + g′) = T(f) + cT(g).
Note that the linearity of the derivative has been used to obtain the second equality.
Nullspace.
I will use the standard basis to represent functions in P_2(R). That basis is {x², x, 1}. Here x ∉ R, but x ∈ V = P_2(R), i.e. x is a polynomial function - the identity function, defined by x(r) = r for r ∈ R. x² denotes the square function - for r ∈ R, x²(r) = r². 1 denotes the constant function taking value 1. Also, x³ ∈ W = P_3(R) will denote the cube function.
For f ∈ V, let f_2, f_1, f_0 ∈ R be f's coefficients with respect to this basis, so f = f_2x² + f_1x + f_0 (it should read f_0·1, but one can identify some a ∈ R with the constant function a·1, which I'll do).
A digression: note that in the definition of T, T(f) = xf + f′, we are multiplying two vectors, x and f, and producing a vector xf ∈ W. In some situations, such as this, we can make sense of vector multiplication (but note that V isn't closed under this multiplication).
Anyway, the nullspace:
T(f) = 0 ⇔ xf + f′ = 0 ⇔ x(f_2x² + f_1x + f_0) + 2f_2x + f_1 = 0
⇔ f_2x³ + f_1x² + (f_0 + 2f_2)x + f_1 = 0 ⇔ f_2 = f_1 = (f_0 + 2f_2) = 0
⇔ f_2 = f_1 = f_0 = 0 ⇔ f = 0.
Note we have used the fact that {x³, x², x, 1} is a linearly independent subset of W to obtain the equivalence of the 4th and 5th statements. Therefore N(T) = {0}, so ∅ is the basis for the nullspace.
Range.
Using some of the calculations just done,
Rg(T) = {T(f) | f ∈ V} = {ax³ + bx² + (c + 2a)x + b | a, b, c ∈ R}
= {a(x³ + 2x) + b(x² + 1) + cx | a, b, c ∈ R}
= span(x³ + 2x, x² + 1, x) = span(x³, x² + 1, x).
Note that the second to last equality is by definition of span. For the last equality it is enough to see that x³ is in span(x³ + 2x, x² + 1, x) and that x³ + 2x is in span(x³, x² + 1, x). Finally, the set {x³, x² + 1, x} is clearly linearly independent (remember the 0 polynomial must have all coefficients 0). As this set also spans the range, it is a basis for the range. (Again, we could use the dimension theorem here to conclude linear independence of the set.)
Dimension Theorem.
From the bases given, we have that rank(T) = 3 and nullity(T) = 0, and indeed 0 + 3 = 3 = dim(P_2(R)).
1-1/onto.
nullity(T) = 0 and dim(P_3(R)) = 4 > rank(T), so by the fact following Problem 1, T is 1-1, but not onto.
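(Aside: sympy can redo the nullspace computation symbolically; the symbol names below are my own.)

```python
import sympy as sp

x = sp.symbols('x')
f2, f1, f0 = sp.symbols('f2 f1 f0')

f = f2*x**2 + f1*x + f0
Tf = sp.expand(x*f + sp.diff(f, x))       # T(f) = xf + f'
print(sp.Poly(Tf, x).all_coeffs())        # [f2, f1, f0 + 2*f2, f1]
# These all vanish only when f2 = f1 = f0 = 0, so N(T) = {0}.
```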
Problem 6. Let V = M_{n×n}(F) and W = F; here T = tr, the trace.
If you couldn't figure this problem out, before reading the solution (if you're about to), you should try doing it in the n = 2 case, as it is simpler, then the n = 3 case, which has one more idea. The general case is really no different to the latter. Also, it's easy to see what rank(tr) is, so using the dimension theorem, this can give you a hint about the nullspace. So stop reading now.
Linearity.
Let A, B ∈ V and c ∈ F. Then
tr(A + cB) = Σ_{i=1}^n (A + cB)_{ii} = Σ_{i=1}^n (A_{ii} + cB_{ii}) = Σ_{i=1}^n A_{ii} + c Σ_{i=1}^n B_{ii} = tr(A) + c·tr(B).
Nullspace.
This is the most difficult part. The point is that the condition tr(A) = 0 is just one solvable linear equation in (some of) the entries in A, so we can allow all entries to vary freely except one on the diagonal, and then set that one to satisfy the equation. Another way to see something like this will happen is to note first that rank(tr) = 1 (see below), and as dim(V) = n², we must have nullity(tr) = n² - 1 by the dimension theorem. So there will be n² - 1 elements in the basis for the nullspace, which is why we can allow all but one (n² - 1) of the entries to vary freely. Other than that we just need good notation.
A ∈ N(tr) ⇔ tr(A) = 0 ⇔ Σ_{i=1}^n A_{ii} = 0 ⇔ A_{nn} = -Σ_{i=1}^{n-1} A_{ii}.
As we've reduced membership in N(tr) to A_{nn} = -Σ_{i=1}^{n-1} A_{ii} (where the sum is 0 if n = 1), we have
N(tr) = { A ∈ V | A_{nn} = -Σ_{i=1}^{n-1} A_{ii} }
= { \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} : a_{ij} ∈ F, a_{nn} = -Σ_{i=1}^{n-1} a_{ii} }
= { \begin{pmatrix} a_{11} & \cdots & a_{1,n-1} & a_{1n} \\ \vdots & \ddots & \vdots & \vdots \\ a_{n-1,1} & \cdots & a_{n-1,n-1} & a_{n-1,n} \\ a_{n1} & \cdots & a_{n,n-1} & -Σ_{i=1}^{n-1} a_{ii} \end{pmatrix} : a_{ij} ∈ F }.
Now let E^(kl), for 1 ≤ k, l ≤ n, be the standard basis vectors for V. (So E^(kl) has a 1 in the k-th row, l-th column, and 0 elsewhere.) For 1 ≤ k ≤ n-1, let D^(k) = E^(kk) - E^(nn). Then
\begin{pmatrix} a_{11} & \cdots & a_{1,n-1} & a_{1n} \\ \vdots & \ddots & \vdots & \vdots \\ a_{n-1,1} & \cdots & a_{n-1,n-1} & a_{n-1,n} \\ a_{n1} & \cdots & a_{n,n-1} & -Σ_{i=1}^{n-1} a_{ii} \end{pmatrix} = Σ_{1≤i,j≤n, i≠j} a_{ij} E^(ij) + Σ_{i=1}^{n-1} a_{ii} D^(i).
To see this equation holds, consider the n = 2 and n = 3 cases first. Write the matrix as a sum of matrices, one for each a_{ij} (where i ≠ n or j ≠ n). Combining this equation and the description of the nullspace above, we have N(tr) = span(B), where
B = {E^(kl) | 1 ≤ k, l ≤ n, k ≠ l} ∪ {D^(k) | 1 ≤ k ≤ n-1}.
It's easy to check directly that B is linearly independent. So we've constructed a basis for N(tr).
Range.
tr is linear, so its range must be a subspace of F. F is 1-dimensional, so its only subspaces are {0} and F. Clearly 1 ∈ Rg(tr), as tr(E^(nn)) = 1, for example. So Rg(tr) = F and a basis for it is {1}.
Dimension Theorem.
We had n² - 1 elements in our basis for N(tr), so nullity(tr) = n² - 1, and rank(tr) = 1, and dim(V) = n².
1-1/onto.
We already saw Rg(tr) = F, so tr is onto. Clearly it's not 1-1?? Not quite - using that nullity(tr) = n² - 1 from above, and the fact following Problem 1: if n > 1, then tr is not one-one, but if n = 1, then tr is one-one. (This makes sense, as in the n = 1 case tr(A) = A_{11}, the single entry of A.)
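(Aside: for a concrete n, say n = 3, it's easy to check by machine that the proposed basis B consists of traceless matrices and is independent of the right size, n² - 1 = 8. The encoding below is mine.)

```python
import numpy as np

n = 3
basis = []
for k in range(n):                # off-diagonal E^(kl), k != l
    for l in range(n):
        if k != l:
            E = np.zeros((n, n)); E[k, l] = 1.0
            basis.append(E)
for k in range(n - 1):            # D^(k) = E^(kk) - E^(nn)
    D = np.zeros((n, n)); D[k, k] = 1.0; D[n - 1, n - 1] = -1.0
    basis.append(D)

assert all(np.trace(B) == 0 for B in basis)    # all lie in N(tr)
rows = np.array([B.flatten() for B in basis])
print(len(basis), np.linalg.matrix_rank(rows)) # 8 8 -> independent
```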
Problem 7. Let T : V → W where V and W are vector spaces over the field F. (T is not assumed to be linear.)
1. This is done in Problem 1(d).
2. This property easily holds if T is linear. So let us assume the property holds, and show that T is linear. We know that
(2) ∀x, y ∈ V ∀c ∈ F [T(cx + y) = cT(x) + T(y)].
So let x, y ∈ V. We have T(x + y) = T(1·x + y) = 1·T(x) + T(y) = T(x) + T(y), where the middle equality holds by (2). So T preserves sums. To see T preserves scalar multiplication, first note that T(0) = T(0 + 0) = T(0) + T(0), so (by cancellation) T(0) = 0. Now let x ∈ V and c ∈ F. Then
T(cx) = T(cx + 0) = cT(x) + T(0) = cT(x),
where we have again used (2) for the second equality, and the fact that T(0) = 0 for the third.
3. Suppose T is linear. Then T(x - y) = T(x + (-1)y) = T(x) + (-1)T(y) = T(x) - T(y), where condition (2) has been used for the second equality.
4. For n ≥ 1 an integer, let us denote by L_n ("linearity-n") the property
L_n: ∀x_i ∈ V ∀a_i ∈ F [ T(Σ_{i=1}^n a_i x_i) = Σ_{i=1}^n a_i T(x_i) ].
Translating a little, L_2 says
∀x_1, x_2 ∈ V ∀a_1, a_2 ∈ F [T(a_1x_1 + a_2x_2) = a_1T(x_1) + a_2T(x_2)].
First, setting a_2 = 1, we get that L_2 implies condition (2), and therefore implies T is linear.
So suppose T is linear. We want to prove that L_n is true for every n. Given some particular n, it's easy to prove - you just keep applying T's linearity to separate all the terms in the sum and pull all the coefficients through. But since we have to prove it for infinitely many cases, we'll use induction.
Firstly notice that L_1 is just the statement that T preserves scalar multiplication, which is true by linearity.
Now assume L_m for some positive m. We want to prove L_{m+1}. Let x_i ∈ V and a_i ∈ F for 1 ≤ i ≤ m + 1. Then
T(Σ_{i=1}^{m+1} a_i x_i) = T(Σ_{i=1}^m a_i x_i + a_{m+1} x_{m+1})
= T(Σ_{i=1}^m a_i x_i) + T(a_{m+1} x_{m+1}) = T(Σ_{i=1}^m a_i x_i) + a_{m+1} T(x_{m+1}).
Here we have used the linearity of T for the last two equalities. Now by inductive hypothesis, L_m, so the T(Σ a_i x_i) term is equal to Σ a_i T(x_i), i.e. the above is
= Σ_{i=1}^m a_i T(x_i) + a_{m+1} T(x_{m+1}) = Σ_{i=1}^{m+1} a_i T(x_i).
Thus we have shown L_{m+1} is true. So by induction, ∀n L_n.
Problem 13. Suppose c_1v_1 + ... + c_kv_k = 0. Then
0 = T(0) = T(Σ c_iv_i) = Σ c_iT(v_i) = Σ c_iw_i.
(Here we have used property 4 from Problem 7.) The w_i's form an independent set, so (as the above equation begins with 0), c_i = 0 for each i, and therefore the v_i's form an independent set.
Problem 14.
(a) Suppose T is 1-1 and S is an independent subset of V. We need to show T(S) is independent.
Let w_i ∈ T(S), 1 ≤ i ≤ n, where the w_i's are distinct, and suppose Σ c_iw_i = 0 for some c_i ∈ F.
Let v_i be such that T(v_i) = w_i (the v_i's exist by definition of T(S)). Notice that the v_i's are distinct: if not, we have some k < j such that v_k = v_j. But then T(v_k) = T(v_j), so w_k = w_j. But the w_i's were chosen distinct, which means k = j, a contradiction. Now using property 4 of Problem 7,
T(Σ c_iv_i) = Σ c_iT(v_i) = Σ c_iw_i = 0.
But T is 1-1, so N(T) = {0}, so we must have Σ c_iv_i = 0. As the v_i's are distinct elements of S, an independent set, we get c_i = 0 (for each i). Thus we have shown that T(S) is an independent set.
Now suppose T is not 1-1. Then N(T) ≠ {0}. Let v ∈ N(T), v ≠ 0. Then {v} is an independent set, but T({v}) = {0}, which is a dependent set. Therefore it is not true that T carries all independent sets onto independent sets.
(b) Suppose T is 1-1. If S ⊆ V is independent, then by (a), T(S) is independent. So suppose S is dependent. Let v_1, ..., v_k ∈ S be distinct, mutually dependent vectors, and c_1, ..., c_k ∈ F be such that c_1v_1 + ... + c_kv_k = 0 non-trivially. Let w_i = T(v_i). Then the w_i's are distinct as T is 1-1 (if w_k = w_j then T(v_k) = T(v_j), but T is 1-1, so v_k = v_j, but the v_i's are distinct, so k = j). Applying T to the linear combination,
0 = T(0) = T(c_1v_1 + ... + c_kv_k) = c_1w_1 + ... + c_kw_k.
The c_i's were chosen to be non-trivial (not all 0), so they provide a non-trivial solution for the w_i's. The w_i ∈ T(S), and as noted above, they are distinct, so T(S) is a dependent set.
Note that it really is important to show the distinctness of the vectors above. Consider the example of T : R² → R², T projecting onto the x-axis. T is not 1-1, and there are dependent sets S such that T(S) is independent. Take S to be the line parallel to the y-axis, through (1, 0). S is dependent as it has more than 2 elements. But T(S) = {(1, 0)}, an independent set. Ignoring the issue of distinctness, the above proof (that S is dependent implies T(S) is dependent) goes through. So make sure you don't ignore it.
(c) As T is 1-1 and β is independent, by part (b), T(β) is independent. We need span(T(β)) = W. But T is onto, so
W = T(V) = T(span(β)) = T({Σ c_iv_i | c_i ∈ F}) = {T(Σ c_iv_i) | c_i ∈ F} = {Σ c_iT(v_i) | c_i ∈ F} = span(T(β)).
Again we've used property 4 from Problem 7.
Problem 15. Let V = P(R). First I'll find a representation of T that is easier to deal with. Let f ∈ V. We can express f in terms of the standard basis (as in Problem 5), i.e.
f = a_nx^n + a_{n-1}x^{n-1} + ... + a_1x + a_0,
where a_i ∈ R. Note that the x^i's are basis elements. Now
T(f)(r) = ∫_0^r f(t) dt = ∫_{t=0}^{t=r} (a_nt^n + ... + a_1t + a_0) dt = [ (a_n/(n+1))t^{n+1} + ... + a_0t ]_{t=0}^{t=r} = (a_n/(n+1))r^{n+1} + ... + a_0r.
Thus we have
(3) T(a_nx^n + a_{n-1}x^{n-1} + ... + a_1x + a_0) = (a_n/(n+1))x^{n+1} + ... + a_0x.
From now on we can use this as our definition of T. Using this, linearity of T is then easy to show.
Let us consider 1-1ness. It suffices to prove that N(T) = {0}, by Theorem 2.4. Suppose T(f) = 0, where f is as above. So the polynomial on the right side of (3) is the 0 function. Hence the coefficients a_i/(i+1) = 0 (as the x^i's are independent, or in calculus language, the only polynomial that is constantly 0 is the polynomial with all coefficients 0), and so a_i = 0 for all i. Therefore f = 0. So we have shown N(T) is trivial, as required.
Now to see T is not onto. Inspecting (3), we can see that the constant term (the coefficient of 1) is 0. As this is true for all elements of Rg(T), we know 1 ∉ Rg(T), so T is not onto.
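(Aside: the representation (3) is easy to confirm with sympy for a degree-2 example; the symbol names are mine.)

```python
import sympy as sp

x, t = sp.symbols('x t')
a2, a1, a0 = sp.symbols('a2 a1 a0')

f = a2*t**2 + a1*t + a0
Tf = sp.integrate(f, (t, 0, x))   # T(f)(x) = integral of f from 0 to x
print(sp.expand(Tf))              # a2*x**3/3 + a1*x**2/2 + a0*x
print(Tf.subs(x, 0))              # 0: the constant term is always 0
```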
Problem 16. Good-o, here we can assume T is linear. It's easy to see T is not 1-1, as the derivative of any constant function is 0. So we just need onto-ness. Here we can use the correspondence between integral and derivative. Let T′ be the transformation from Problem 15 (the integral). Then for any f ∈ P(R), T(T′(f)) = f. This is easily seen using the form of T′(f) in (3), and differentiating that function. All the a_i/(i+1) terms return to a_i's when we bring down the power, and we get f back. Thus f ∈ Rg(T), and as f was arbitrary, Rg(T) = P(R).
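(Aside: the identity T(T′(f)) = f can likewise be checked on a sample polynomial; the test polynomial below is an arbitrary choice of mine.)

```python
import sympy as sp

x, t = sp.symbols('x t')
f = 5*t**3 - 2*t + 7                    # an arbitrary test polynomial
Tprime_f = sp.integrate(f, (t, 0, x))   # T' from Problem 15
back = sp.diff(Tprime_f, x).subs(x, t)  # then differentiate
print(sp.simplify(back - f))            # 0, i.e. T(T'(f)) = f
```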
Problem 17.
(a) We have
rank(T) ≤ rank(T) + nullity(T) = dim(V) < dim(W),
where the first inequality is because dimension is non-negative, the equality is the Dimension Theorem, and the last inequality is by hypothesis. Putting it together, rank(T) < dim(W), so by the fact following Problem 1, T is not onto.
(b) If T were 1-1, nullity(T) = 0 (by the fact again), so dim(V) = rank(T) by the Dimension Theorem. But rank(T) ≤ dim(W) as Rg(T) is a subspace of W, which contradicts dim(V) > dim(W).
Problem 18. To invent an example satisfying some particular property, one can often be motivated by experimenting with transformations in R^n, particularly in R² and R³. This problem actually requires us to have T : R² → R² anyway. Linear transformations in R² can be constructed from projections onto lines, reflections about lines, rotations and scalings. Any combination of these can be made (one following the other), and the resulting transformation will still be linear (how does one prove that?). In this case, also notice that N(T) = Rg(T) implies nullity(T) = rank(T), and combining this with the Dimension Theorem and V = R², we're forced to have nullity(T) = rank(T) = 1. The 1-dimensional subspaces of R² are lines through the origin. So we need some line which is both the nullspace and range of T. As all such lines are just a rotation away from one another, it won't matter what line we work with, so let's aim for the x-axis.
Projection onto the y-axis has nullspace the x-axis. But its range is the y-axis, so it doesn't work. But as I mentioned, we can compose it with another map. There are maps which swap the x-axis and y-axis, such as reflection about the line x = y, or rotation by 90°. If we first project onto the y-axis, and then rotate (say clockwise, by 90°), the resulting range will be the x-axis. Also, rotation is 1-1, so it won't change the nullspace. So we have an example.
Let Proj_y : R² → R² be projection onto the y-axis (so Proj_y(a, b) = (0, b)) and Rot : R² → R² be clockwise rotation by 90° (so Rot(a, b) = (b, -a)). Let T = Rot ∘ Proj_y (this notation means T(a, b) = Rot(Proj_y(a, b))). So T(a, b) = (b, 0). Then N(T) = Rg(T), the x-axis.
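(Aside: composing the two matrices confirms the formula T(a, b) = (b, 0); the matrix encodings are mine.)

```python
import numpy as np

proj_y = np.array([[0.0, 0.0],
                   [0.0, 1.0]])    # Proj_y(a, b) = (0, b)
rot = np.array([[0.0, 1.0],
                [-1.0, 0.0]])      # clockwise 90 deg: (a, b) -> (b, -a)

T = rot @ proj_y
print(T)                           # [[0. 1.] [0. 0.]], i.e. T(a, b) = (b, 0)
```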
Problem 19. Considering the previous example, we also could have rotated counter-clockwise. This would have resulted in U(a, b) = (-b, 0). The T above and this U suffice. Notice that we just have U = -T (i.e. U(x) = -T(x) for all x). We can get more examples by taking any linear transformation T, letting a ≠ 0, a ≠ 1 (this needs char(F) ≠ 2), and letting U = aT (defined similarly to -T).
There are ways where T and U seem less similar, though. For instance, say T : R³ → R³, and Rg(T) is a plane P. Suppose U′ : P → P is linear, and is 1-1 and onto (say reflection about some line in the plane, or rotation about the origin). Then letting U = U′ ∘ T, we have a more interesting example.
Problem 20. First we show T(V_1) is a subspace of W. As T : V → W, T(V_1) ⊆ W. As 0 ∈ V_1 and T(0) = 0 by linearity, we have 0 ∈ T(V_1).
Let w_1, w_2 ∈ T(V_1) and c ∈ F. We need to show cw_1 + w_2 ∈ T(V_1). Let v_1, v_2 ∈ V_1 be such that T(v_1) = w_1 and T(v_2) = w_2 (these exist by definition of T(V_1)). By linearity, we have
(4) T(cv_1 + v_2) = cT(v_1) + T(v_2) = cw_1 + w_2.
But V_1 is a subspace, so cv_1 + v_2 ∈ V_1, so T(cv_1 + v_2) ∈ T(V_1). So by (4), cw_1 + w_2 ∈ T(V_1), as desired.
Now we show T^{-1}(W_1) = {v ∈ V | T(v) ∈ W_1} is a subspace of V. Firstly, T(0) = 0 ∈ W_1, so 0 ∈ T^{-1}(W_1).
Now let x, y ∈ T^{-1}(W_1) and c ∈ F. We need cx + y ∈ T^{-1}(W_1), which is equivalent to T(cx + y) ∈ W_1. But T(cx + y) = cT(x) + T(y), and we know T(x), T(y) ∈ W_1 because x, y ∈ T^{-1}(W_1). As W_1 is a subspace, we have cT(x) + T(y) ∈ W_1, so T(cx + y) ∈ W_1, as required.
Problem 21. Here T is the left shift, T(y_1, y_2, ...) = (y_2, y_3, ...), and U is the right shift, U(y_1, y_2, ...) = (0, y_1, y_2, ...).
(a) Let y, z ∈ V and b ∈ F. Then
T(y + bz) = T(y_1 + bz_1, y_2 + bz_2, ...) = (y_2 + bz_2, y_3 + bz_3, ...) = (y_2, y_3, ...) + b(z_2, z_3, ...) = T(y) + bT(z).
Showing U is linear is almost the same.
(b) Clearly T is not 1-1, as two sequences y, z may differ only in their first entry. T is onto because given y ∈ V, right-shifting then left-shifting the result gives y back, i.e. T(U(y)) = y. Thus y ∈ Rg(T).
(c) U is not onto because it doesn't produce the sequence (1, 0, 0, ...). It is 1-1 because if U(y) = U(z) then T(U(y)) = T(U(z)), and as mentioned above, T(U(y)) = y and likewise for z, so y = z.
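(Aside: a finite truncation of the sequences already illustrates T ∘ U = identity; the tuple encoding below is my own sketch, not literal elements of V.)

```python
def T(seq):            # left shift: (y1, y2, ...) -> (y2, y3, ...)
    return seq[1:]

def U(seq):            # right shift: (y1, y2, ...) -> (0, y1, y2, ...)
    return (0,) + seq

y = (3, 1, 4, 1, 5)
print(T(U(y)) == y)                  # True: T o U = identity
print(T((9,) + y[1:]) == T(y))       # True: T is not 1-1
```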
Problem 22. Given v = (x, y, z) ∈ R³, we have v = x(1, 0, 0) + y(0, 1, 0) + z(0, 0, 1). Therefore
T(v) = T(x(1, 0, 0) + y(0, 1, 0) + z(0, 0, 1)) = xT(1, 0, 0) + yT(0, 1, 0) + zT(0, 0, 1),
using linearity for the second equality. So we have to set a = T(1, 0, 0), b = T(0, 1, 0), c = T(0, 0, 1), and then we get T(x, y, z) = ax + by + cz, as required.
Notice we had no choice about what a, b, c were - they were chosen by T. Also notice we can also write the action of T as matrix multiplication:
T(x, y, z) = \begin{pmatrix} a & b & c \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}.
For the linear T : F^n → F case, we do the same thing, and get some a_i ∈ F such that
T(x_1, ..., x_n) = a_1x_1 + ... + a_nx_n.
Now suppose T : F^n → F^m. Let {e^(1), ..., e^(n)} be the standard basis for F^n (so e^(2) = (0, 1, 0, 0, ..., 0), etc). We can do exactly the same thing as in the R³ → R case, and express any input vector as a linear combination of the e^(i)'s. So we'll need to know the values T takes on these vectors. T(e^(i)) ∈ F^m for each i, so it is some m-tuple. Let's denote it a^(i), so a^(i) = T(e^(i)). (This could be a little deceptive, though - remember the a^(i)'s are not scalars in this case, but m-tuples of scalars.) Now by linearity, we get
T(x_1, ..., x_n) = T(x_1e^(1) + ... + x_ne^(n)) = x_1a^(1) + ... + x_na^(n).
So we've found a generalization of the above - instead of scalars, we just have vectors. Again, T forced the choice of the a^(i)'s on us. But let's try writing out all the vectors explicitly. Say
a^(i) = (a_{1i}, ..., a_{mi}) (remember they were m-tuples). Then, writing the vectors as columns,
T \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = x_1 \begin{pmatrix} a_{11} \\ \vdots \\ a_{m1} \end{pmatrix} + ... + x_n \begin{pmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{pmatrix} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix},
where at the bottom we have the multiplication of a matrix and a vector. So any linear transformation T : F^n → F^m can be represented in the form T(x) = Ax, where A is the matrix determined as above. As the choice of the a^(i)'s was determined by T, and the matrix is composed of the components of the a^(i)'s, the matrix is completely determined by T.
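(Aside: the recipe "columns of A are the images of the standard basis vectors" is easy to test on a sample linear map; the particular map below is my own invented example.)

```python
import numpy as np

def T(v):
    # a sample linear map R^3 -> R^2 (an invented example)
    x, y, z = v
    return np.array([x + 2*y, 3*z - x])

# Columns of A are the images of the standard basis vectors e^(i).
A = np.column_stack([T(e) for e in np.eye(3)])

v = np.array([1.0, -2.0, 4.0])
print(np.allclose(A @ v, T(v)))      # True: T(x) = Ax
```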
Problem 25. To do this problem, make sure you read the definition on the previous page, before Problem 24. When a vector space V = W_1 ⊕ W_2, every vector in V has a unique representation as v = w_1 + w_2, where w_i ∈ W_i. Then the projection onto W_1 along W_2 just sends v to w_1. You might think of the vectors in V as being represented with two components (though those components are vectors, not scalars), and we're projecting onto one of the components. This definition is abstract, though, and you can also think about it as follows. Consider the xy-plane in R³ and a line L through the origin, not lying in the xy-plane. We can define projection Pr onto the xy-plane along L by sending any point p to the xy-plane along the line through p, parallel to L. This line intersects the xy-plane at some unique point, and we define Pr(p) to be that point of intersection. This is a case of the general definition.
(a) First we need to check that this makes sense, i.e. that V = R³ = xy-plane ⊕ z-axis. But by definition, this is just that R³ = xy-plane + z-axis and xy-plane ∩ z-axis = {0}. The first of these comes from noting that (x, y, z) = (x, y, 0) + (0, 0, z) ∈ xy-plane + z-axis. The second is easy too.
Now, given v = (x, y, z) ∈ R³, v = (x, y, 0) + (0, 0, z), and (x, y, 0) ∈ xy-plane and (0, 0, z) ∈ z-axis. By definition, T(x, y, z) = (x, y, 0), so T is the projection on the xy-plane along the z-axis.
(b) We already know R³ = z-axis ⊕ xy-plane from part (a) (by commutativity of + and ∩, the definition is symmetric, so V = W_1 ⊕ W_2 ⇔ V = W_2 ⊕ W_1). Given v = (x, y, z) ∈ R³, we have v = (0, 0, z) + (x, y, 0) as the z-axis + xy-plane representation of v, so we need T(x, y, z) = (0, 0, z).
(c) As the line L does not lie in the xy-plane, it's easy to show xy-plane ⊕ L = R³. Given (a, b, c) ∈ R³, T(a, b, c) = (a - c, b, 0), which is certainly in the xy-plane, so that's fine. Now consider (a, b, c) - T(a, b, c) = (a, b, c) - (a - c, b, 0) = (c, 0, c). This is in L. So (a, b, c) = v + w where v = T(a, b, c) ∈ xy-plane and w = (c, 0, c) ∈ L, as required.
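(Aside: writing T from part (c) as a matrix P, we can check that P is idempotent, as a projection should be, and that it kills L = span{(1, 0, 1)}, the line found above.)

```python
import numpy as np

P = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0,  0.0],
              [0.0, 0.0,  0.0]])     # T(a, b, c) = (a - c, b, 0)

print(np.allclose(P @ P, P))           # True: projections are idempotent
print(P @ np.array([1.0, 0.0, 1.0]))   # [0. 0. 0.]: L = span{(1,0,1)} is killed
```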
Problem 35.
(a) We just need to verify Rg(T) ∩ N(T) = {0}. By exercise 29 of 1.6 (together with the hypothesis that V = Rg(T) + N(T)),
dim(V) = dim(Rg(T)) + dim(N(T)) - dim(Rg(T) ∩ N(T)) = rank(T) + nullity(T) - dim(Rg(T) ∩ N(T)).
So by the Dimension Theorem, dim(Rg(T) ∩ N(T)) = 0, so Rg(T) ∩ N(T) = {0}. Finite-dimensionality is required by both the Dimension Theorem and exercise 29.
(b) By the same result,
dim(Rg(T) + N(T)) = dim(Rg(T)) + dim(N(T)) - dim(Rg(T) ∩ N(T)),
but the last term is 0 by hypothesis. Combining this with the Dimension Theorem,
dim(Rg(T) + N(T)) = dim(V).
But Rg(T) + N(T) is a subspace of V, so by Theorem 1.11, V = Rg(T) + N(T), which is all we needed. Finite-dimensionality was used in the same way here.
Problem 38. Let r + si, x + yi ∈ C. Then
Conj((r + si) + (x + yi)) = Conj((r + x) + (s + y)i) = (r + x) - (s + y)i = (r - si) + (x - yi) = Conj(r + si) + Conj(x + yi).
However, i·Conj(1) = i·1 = i, but Conj(i·1) = Conj(i) = -i. So Conj does not preserve multiplication by complex scalars.
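(Aside: Python's built-in complex numbers show the same behaviour.)

```python
z, w = complex(2, 3), complex(-1, 4)
print((z + w).conjugate() == z.conjugate() + w.conjugate())   # True
print((1j * z).conjugate(), 1j * z.conjugate())               # (-3-2j) vs (3+2j)
```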
