Math 110: Linear Algebra Homework #3: Farmer Schlutzenberg
HOMEWORK #3
FARMER SCHLUTZENBERG
2.1: Linear Transformations, Null Spaces, and Ranges
Problem 1. Here V and W are vector spaces over a field F and T : V → W (but T may not be linear).
(a) True. I can't find it defined in the book, but "T preserves sums and scalar products" is just a short way of saying that T satisfies conditions (a) and (b) in the definition of linear transformation.
(b) False. (Note firstly that the statement is to be read "if ∀x, y [T(x + y) = T(x) + T(y)] then T is linear".) If the base field F = Z_p then the statement is true. (To see T preserves scalar multiplication, let c ∈ F. Then there is some n < p such that c = 1 + … + 1, where there are n terms on the right side. Then use distribution and preservation of addition to get T(cv) = cT(v).) One can argue similarly to show the statement is also true if F = Q.
To get a counter-example, we need F to be less trivial. For some motivation, consider Theorem 2.1. It tells us that if Rg(T) is not a subspace of W, then T is not linear. So we can look for T like this. Let V = W = Q(√2) over F = Q(√2), and define T(q + r√2) = q, where q, r ∈ Q. There are a few things to check.
Firstly we need to know that this really defines a function. That is, given v ∈ V, there must be exactly one value that we're telling T to send v to. If we had v = q₁ + r₁√2 = q₂ + r₂√2 with (q₁, r₁) ≠ (q₂, r₂), the definition would be ambiguous. But {1, √2} is a basis for Q(√2) over Q (as √2 ∉ Q). So if v = q₁ + r₁√2 = q₂ + r₂√2, then by uniqueness of representation of v as a linear combination of the basis, q₁ = q₂ and r₁ = r₂. So there is no such problem. Also, we have defined T on all elements of V, as every v ∈ V is of the form q + r√2. Note
Date: September 22.
that T(1) = T(1 + 0·√2) = 1, so √2·T(1) = √2. But T(√2·1) = T(0 + 1·√2) = 0. Thus T does not preserve scalar multiplication.
Note also that the nullspace of T is not a subspace of V.
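To see the failure concretely, here is a small sketch in Python, representing an element q + r√2 of Q(√2) as a pair of rationals (q, r) (this encoding is my own, not from the text):

```python
from fractions import Fraction

# Elements of Q(√2) as pairs (q, r), meaning q + r·√2 with q, r ∈ Q.
def mul(a, b):
    # (q1 + r1√2)(q2 + r2√2) = (q1q2 + 2r1r2) + (q1r2 + r1q2)√2
    return (a[0]*b[0] + 2*a[1]*b[1], a[0]*b[1] + a[1]*b[0])

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def T(v):
    # T(q + r√2) = q, viewed as the element q + 0·√2
    return (v[0], Fraction(0))

one = (Fraction(1), Fraction(0))
sqrt2 = (Fraction(0), Fraction(1))

# T preserves sums:
u, v = (Fraction(1, 2), Fraction(3)), (Fraction(2), Fraction(-1, 4))
assert T(add(u, v)) == add(T(u), T(v))

# ...but not scalar multiplication by the scalar √2 ∈ F:
assert T(mul(sqrt2, one)) == (0, 0)   # T(√2·1) = T(√2) = 0
assert mul(sqrt2, T(one)) == (0, 1)   # √2·T(1) = √2·1 = √2
```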
Verifying that T is a function can also be done by appealing to Theorem 2.6 (the argument above is similar to that proof). For we can let V′ = W′ = Q(√2) be vector spaces over Q, so {1, √2} is a basis for V′, and Theorem 2.6 gives a linear T′ : V′ → W′ with T′(1) = 1 and T′(√2) = 0. Then by linearity, T′ satisfies the equation given above for T. This is not a contradiction as T′ is only guaranteed to be linear in terms of V′ and W′.

… Let
\[
B' = \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \right\}.
\]
Then Rg(T) = span(B′). To see this, we check that each side is a subset of the other. From the definition of T, we clearly have Rg(T) ⊆ span(B′) ⊆ Rg(T). As B′ is also independent, it is a basis for Rg(T).
Dimensions.
Inspecting the bases we found, B and B′, …

Problem 5. Here V = P₂(R), W = P₃(R), and T(f) = xf + f′.

Linearity.
Let f, g ∈ V and c ∈ R. Then
\[
T(f + cg) = x(f + cg) + (f + cg)' = xf + xcg + f' + cg' = (xf + f') + c(xg + g') = T(f) + cT(g).
\]
Note that the linearity of the derivative has been used to obtain the second equality.
Nullspace.
I will use the standard basis to represent functions in P₂(R). That basis is {x², x, 1}. Here x ∉ R, but x ∈ V = P₂(R), i.e. x is a polynomial function - the identity function, defined by x(r) = r for r ∈ R. x² denotes the square function - for r ∈ R, x²(r) = r². 1 denotes the constant function taking value 1. Also, x³ ∈ W = P₃(R) will denote the cube function.
For f ∈ V, let f₂, f₁, f₀ ∈ R be f's coefficients with respect to this basis, so f = f₂x² + f₁x + f₀ (it should read f₀·1, but one can identify some a ∈ R with the constant function a·1, which I'll do).
A digression: note that in the definition of T, T(f) = xf + f′, and f′ = 2f₂x + f₁. Now
\[
\begin{aligned}
T(f) = 0 &\iff xf + f' = 0 \\
&\iff x(f_2x^2 + f_1x + f_0) + 2f_2x + f_1 = 0 \\
&\iff f_2x^3 + f_1x^2 + (f_0 + 2f_2)x + f_1 = 0 \\
&\iff f_2 = f_1 = (f_0 + 2f_2) = 0 \\
&\iff f_2 = f_1 = f_0 = 0 \iff f = 0.
\end{aligned}
\]
Note we have used the fact that {x³, x², x, 1} is a linearly independent subset of W, to obtain the equivalence of the 4th and 5th statements. Therefore N(T) = {0}, so ∅ is the basis for the nullspace.
Range.
Using some of the calculations just done,
\[
\begin{aligned}
Rg(T) = \{T(f) \mid f \in V\} &= \{ax^3 + bx^2 + (c + 2a)x + b \mid a, b, c \in R\} \\
&= \{a(x^3 + 2x) + b(x^2 + 1) + cx \mid a, b, c \in R\} \\
&= \mathrm{span}(x^3 + 2x,\ x^2 + 1,\ x) = \mathrm{span}(x^3,\ x^2 + 1,\ x).
\end{aligned}
\]
Note that the second to last equality is by definition of span. For the last equality it is enough to see that x³ is in span(x³ + 2x, x² + 1, x) and that x³ + 2x is in span(x³, x² + 1, x). Finally, the set {x³, x² + 1, x} is clearly linearly independent (remember the 0 polynomial must have all coefficients 0). As this set also spans the range, it is a basis for the range. (Again, we could use the dimension theorem here to conclude linear independence of the set.)

Dimension Theorem.
From the bases given, we have that rank(T) = 3 and nullity(T) = 0. Verifying: dim(P₂(R)) = 3 = 3 + 0.
1-1/onto.
nullity(T) = 0 and dim(P₃(R)) = 4 > rank(T) = 3, so by the fact following problem 1, T is 1-1, but not onto.
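As a sanity check of these computations, here is a short sketch with sympy (sympy's symbol x stands in for the identity function above):

```python
import sympy as sp

x = sp.symbols('x')
a, b, c = sp.symbols('a b c')

# T(f) = x*f + f' on P2(R)
def T(f):
    return sp.expand(x*f + sp.diff(f, x))

f = a*x**2 + b*x + c

# The image of a general f, as computed above: a x^3 + b x^2 + (c + 2a) x + b
assert sp.expand(T(f) - (a*x**3 + b*x**2 + (c + 2*a)*x + b)) == 0

# T(f) = 0 forces every coefficient to vanish, so N(T) = {0}
coeffs = sp.Poly(T(f), x).all_coeffs()
assert sp.solve(coeffs, [a, b, c]) == {a: 0, b: 0, c: 0}
```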
Problem 6. Let V = M_{n×n}(F) and W = F.
If you couldn't figure this problem out, before reading the solution (if you're about to), you should try doing it in the n = 2 case, as it is simpler, then the n = 3 case, which has one more idea. The general case is really no different to the latter. Also, it's easy to see what rank(tr) is, so using the dimension theorem, this can give you a hint about the nullspace. So stop reading now.
Linearity.
Let A, B ∈ V and c ∈ F. Then
\[
\mathrm{tr}(A + cB) = \sum_{i=1}^{n}(A + cB)_{ii} = \sum_{i=1}^{n}(A_{ii} + cB_{ii}) = \sum_{i=1}^{n}A_{ii} + c\sum_{i=1}^{n}B_{ii} = \mathrm{tr}(A) + c\,\mathrm{tr}(B).
\]
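A quick numerical spot-check of this identity with numpy (the particular matrices and scalar are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (4, 4))
B = rng.integers(-5, 5, (4, 4))
c = 3

# tr(A + cB) = tr(A) + c·tr(B)
assert np.trace(A + c * B) == np.trace(A) + c * np.trace(B)
# the middle step of the derivation: (A + cB)_ii = A_ii + c·B_ii
assert np.all(np.diag(A + c * B) == np.diag(A) + c * np.diag(B))
```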
Nullspace.
This is the most difficult part. The point is that the condition tr(A) = 0 is just one solvable linear equation in (some of) the entries of A, so we can allow all entries to vary freely except one on the diagonal, and then set that one to satisfy the equation. Another way to see something like this will happen, is to note first that rank(tr) = 1 (see below), and as dim(V) = n², we must have nullity(tr) = n² − 1 by the dimension theorem. So there will be n² − 1 elements in the basis for the nullspace, which is why we can allow all but one (n² − 1) of the entries to vary freely. Other than that we just need good notation.
\[
A \in N(\mathrm{tr}) \iff \mathrm{tr}(A) = 0 \iff \sum_{i=1}^{n} A_{ii} = 0 \iff A_{nn} = -\sum_{i=1}^{n-1} A_{ii}.
\]
As we've reduced membership in N(tr) to this last equation (where the sum is 0 if n = 1), we have
\[
N(\mathrm{tr}) = \left\{ A \in V \,\middle|\, A_{nn} = -\sum_{i=1}^{n-1} A_{ii} \right\}
= \left\{ \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \,\middle|\, a_{ij} \in F,\ a_{nn} = -\sum_{i=1}^{n-1} a_{ii} \right\}
\]
\[
= \left\{ \begin{pmatrix} a_{11} & \cdots & a_{1,n-1} & a_{1n} \\ \vdots & \ddots & \vdots & \vdots \\ a_{n-1,1} & \cdots & a_{n-1,n-1} & a_{n-1,n} \\ a_{n1} & \cdots & a_{n,n-1} & -\sum_{i=1}^{n-1} a_{ii} \end{pmatrix} \,\middle|\, a_{ij} \in F \right\}.
\]
Now let E^(kl), for 1 ≤ k, l ≤ n, be the standard basis vectors for V. (So E^(kl) has a 1 in the k-th row, l-th column, and 0 elsewhere.) For 1 ≤ k ≤ n − 1, let D^(k) = E^(kk) − E^(nn). Then
\[
\begin{pmatrix} a_{11} & \cdots & a_{1,n-1} & a_{1n} \\ \vdots & \ddots & \vdots & \vdots \\ a_{n-1,1} & \cdots & a_{n-1,n-1} & a_{n-1,n} \\ a_{n1} & \cdots & a_{n,n-1} & -\sum_{i=1}^{n-1} a_{ii} \end{pmatrix}
= \sum_{\substack{1 \le i,j \le n \\ i \ne j}} a_{ij} E^{(ij)} + \sum_{i=1}^{n-1} a_{ii} D^{(i)}.
\]
To see this equation holds, consider the n = 2 and n = 3 cases first. Write the matrix as a sum of matrices, one for each a_{ij} (where i ≠ n or j ≠ n). Combining this equation and the description of the nullspace above, we have N(tr) = span(B), where
\[
B = \{E^{(kl)} \mid 1 \le k, l \le n,\ k \ne l\} \cup \{D^{(k)} \mid 1 \le k \le n-1\}.
\]
It's easy to check directly that B is linearly independent. So we've constructed a basis for N(tr).
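A quick check of this basis in the n = 3 case (a sketch with numpy; indices are 0-based here, so E(n−1, n−1) plays the role of E^(nn)):

```python
import numpy as np

n = 3

def E(k, l):
    # standard basis matrix: a 1 in row k, column l, and 0 elsewhere
    M = np.zeros((n, n))
    M[k, l] = 1.0
    return M

# B = {E(k,l) : k != l} ∪ {D(k) = E(k,k) - E(n-1,n-1) : k = 0, ..., n-2}
B = [E(k, l) for k in range(n) for l in range(n) if k != l]
B += [E(k, k) - E(n - 1, n - 1) for k in range(n - 1)]

assert len(B) == n**2 - 1                 # nullity(tr) = n^2 - 1
assert all(np.trace(M) == 0 for M in B)   # every element lies in N(tr)

# independence: flatten each matrix to a vector and check the rank
stacked = np.array([M.ravel() for M in B])
assert np.linalg.matrix_rank(stacked) == n**2 - 1
```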
Range.
tr is linear, so its range must be a subspace of F. F is 1-dimensional, so its only subspaces are {0} and F. Clearly 1 ∈ Rg(tr), as tr(E^(nn)) = 1, for example. So Rg(tr) = F and a basis for it is {1}.
Demented Theorem.
We had n² − 1 elements in our basis for N(tr), so nullity(tr) = n² − 1, and rank(tr) = 1, and indeed (n² − 1) + 1 = n² = dim(V).
1-1/onto.
We already saw Rg(tr) = F, so tr is onto. Clearly its not 1-1?? Note quite - using that
nullity(tr) = n
2
1 from above, and the fact following problem 1, if n > 1, then tr is not
one-one, but if n = 1, then tr is one-one. (This makes sense as in the n = 1 case tr(A) = A
11
,
the single entry of A.)
Problem 7. Let T : V → W where V and W are vector spaces over the field F. (T is not assumed to be linear.)
1. This is done in problem 1(d).
2. This property easily holds if T is linear. So let us assume the property holds, and show that T is linear. We know that
(2) ∀x, y ∈ V ∀c ∈ F [T(cx + y) = cT(x) + T(y)].
So let x, y ∈ V. We have T(x + y) = T(1x + y) = 1T(x) + T(y) = T(x) + T(y) where the middle equality holds by (2). So T preserves sums. To see T preserves scalar multiplication, first note that T(0) = T(0 + 0) = T(0) + T(0), so (by cancellation) T(0) = 0. Now let x ∈ V and c ∈ F. Then
T(cx) = T(cx + 0) = cT(x) + T(0) = cT(x),
where we have again used (2) for the second equality, and the fact that T(0) = 0 for the third.
3. Suppose T is linear. Then T(x − y) = T(x + (−1)y) = T(x) + (−1)T(y) = T(x) − T(y), where condition (2) has been used for the second equality.
4. For n ≥ 1 an integer, let us denote by L_n (linearity-n) the property
\[
(L_n) \qquad \forall x_i \in V\ \forall a_i \in F\ \left[ T\left(\sum_{i=1}^{n} a_i x_i\right) = \sum_{i=1}^{n} a_i T(x_i) \right].
\]
Translating a little, L₂ says
\[
\forall x_1, x_2 \in V\ \forall a_1, a_2 \in F\ [T(a_1 x_1 + a_2 x_2) = a_1 T(x_1) + a_2 T(x_2)].
\]
First, setting a₂ = 1, we get that L₂ implies condition (2), and therefore implies T is linear.
So suppose T is linear. We want to prove that L_n is true for every n. Given some particular n, it's easy to prove - you just keep applying T's linearity to separate all the terms in the sum and pull all the coefficients through. But since we have to prove it for infinitely many cases, we'll use induction.
Firstly notice that L₁ is just the statement that T preserves scalar multiplication, which is true by linearity.
Now assume L_m for some positive m. We want to prove L_{m+1}. Let xᵢ ∈ V and aᵢ ∈ F for 1 ≤ i ≤ m + 1. Then
\[
T\left(\sum_{i=1}^{m+1} a_i x_i\right) = T\left(\sum_{i=1}^{m} a_i x_i + a_{m+1} x_{m+1}\right) = T\left(\sum_{i=1}^{m} a_i x_i\right) + T(a_{m+1} x_{m+1}) = T\left(\sum_{i=1}^{m} a_i x_i\right) + a_{m+1} T(x_{m+1}).
\]
Here we have used the linearity of T for the last two equalities. Now by inductive hypothesis, L_m, so the above equals
\[
\sum_{i=1}^{m} a_i T(x_i) + a_{m+1} T(x_{m+1}) = \sum_{i=1}^{m+1} a_i T(x_i).
\]
Thus we have shown L_{m+1} is true. So by induction, ∀n L_n.
Problem 13. Suppose c₁v₁ + … + cₖvₖ = 0. Then
\[
0 = T(0) = T\Big(\sum c_i v_i\Big) = \sum c_i T(v_i) = \sum c_i w_i.
\]
(Here we have used property 4 from problem 7.) The wᵢ's form an independent set, so (as the above equation begins with 0), cᵢ = 0 for each i, and therefore the vᵢ's form an independent set.
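A concrete instance of this (a sketch; the particular matrix A below is an illustrative choice of a linear T with T(vᵢ) = wᵢ):

```python
import numpy as np

# A linear map T(v) = A v from R^2 to R^3 (an arbitrary illustrative A)
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.0]])

V = np.column_stack([(1.0, 1.0), (0.0, 1.0)])  # columns v1, v2
W = A @ V                                      # columns w1 = T(v1), w2 = T(v2)

# The images w1, w2 are independent...
assert np.linalg.matrix_rank(W) == 2
# ...and, as problem 13 asserts, so then must be v1, v2.
assert np.linalg.matrix_rank(V) == 2
```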
Problem 14.
(a) Suppose T is 1-1 and S is an independent subset of V. We need to show T(S) is independent.
Let wᵢ ∈ T(S), 1 ≤ i ≤ n, where the wᵢ's are distinct, and suppose Σ cᵢwᵢ = 0 for some cᵢ ∈ F.
Let vᵢ be such that T(vᵢ) = wᵢ (the vᵢ's exist by definition of T(S)). Notice that the vᵢ's are distinct: if not, we have some k < j such that vₖ = vⱼ. But then T(vₖ) = T(vⱼ) so wₖ = wⱼ. But the wᵢ's were chosen distinct, which means k = j, a contradiction. Now using property 4 of problem 7,
\[
T\Big(\sum c_i v_i\Big) = \sum c_i T(v_i) = \sum c_i w_i = 0.
\]
But T is 1-1, so N(T) = {0}, so we must have Σ cᵢvᵢ = 0. As the vᵢ's are distinct elements of S, an independent set, we get cᵢ = 0 (for each i). Thus we have shown that T(S) is an independent set.
Now suppose T is not 1-1. Then N(T) ≠ {0}. Let v ∈ N(T), v ≠ 0. Then {v} is an independent set, but T({v}) = {0}, which is a dependent set. Therefore it is not true that T carries all independent sets onto independent sets.
(b) Suppose T is 1-1. If S ⊆ V is independent, then by (a), T(S) is independent. So suppose S is dependent. Let v₁, …, vₖ ∈ S be distinct, mutually dependent vectors, and c₁, …, cₖ ∈ F be such that c₁v₁ + … + cₖvₖ = 0 non-trivially. Let wᵢ = T(vᵢ). Then the wᵢ's are distinct as T is 1-1 (if wₖ = wⱼ then T(vₖ) = T(vⱼ), but T is 1-1, so vₖ = vⱼ, but the vᵢ's are distinct, so k = j). Applying T to the linear combination,
\[
0 = T(0) = T(c_1 v_1 + \ldots + c_k v_k) = c_1 w_1 + \ldots + c_k w_k.
\]
The cᵢ's were chosen to be non-trivial (not all 0), so they provide a non-trivial solution for the wᵢ's. The wᵢ's are in T(S), and as noted above, they are distinct, so T(S) is a dependent set.
Note that it really is important to show the distinctness of the vectors above. Consider the example of T : R² → R², T projecting onto the x-axis. T is not 1-1, and there are dependent sets S such that T(S) is independent. Take S to be the line parallel to the y-axis, through (1, 0). S is dependent as it has more than 2 elements. But T(S) = {(1, 0)}, an independent set. Ignoring the issue of distinctness, the above proof (that S is dependent implies T(S) is dependent) goes through. So make sure you don't ignore it.
(c) As T is 1-1 and β is independent, by part (b), T(β) is independent. We need span(T(β)) = W. But T is onto, so
\[
W = T(V) = T(\mathrm{span}(\beta)) = T\left(\Big\{\sum c_i v_i \,\Big|\, c_i \in F\Big\}\right) = \Big\{T\Big(\sum c_i v_i\Big) \,\Big|\, c_i \in F\Big\} = \Big\{\sum c_i T(v_i) \,\Big|\, c_i \in F\Big\} = \mathrm{span}(T(\beta)),
\]
where the vᵢ's range over β. Again we've used property 4 from problem 7.
Problem 15. Let V = P(R). First I'll find a representation of T that is easier to deal with. Let f ∈ V. We can express f in terms of the standard basis (as in problem 5), i.e.
\[
f = a_n x^n + a_{n-1} x^{n-1} + \ldots + a_1 x + a_0,
\]
where aᵢ ∈ R. Note that the xⁱ's are basis elements. Now
\[
T(f)(r) = \int_0^r f(t)\,dt = \int_{t=0}^{t=r} (a_n t^n + \ldots + a_1 t + a_0)\,dt = \left.\frac{a_n}{n+1}t^{n+1} + \ldots + a_0 t + c\,\right|_{t=0}^{t=r} = \frac{a_n}{n+1}r^{n+1} + \ldots + a_0 r.
\]
Thus we have
\[
(3) \qquad T(a_n x^n + a_{n-1} x^{n-1} + \ldots + a_1 x + a_0) = \frac{a_n}{n+1}x^{n+1} + \ldots + a_0 x.
\]
From now on we can use this as our definition of T. Using this, linearity of T is then easy to show.
Let us consider 1-1ness. It suffices to prove that N(T) = {0}, by Theorem 2.4. Suppose T(f) = 0, where f is as above. So the polynomial on the right side of (3) is the 0 function. Hence the coefficients aᵢ/(i + 1) = 0 (as the xⁱ's are independent, or in calculus language, the only polynomial that is constantly 0 is the polynomial with all coefficients 0), and so aᵢ = 0 for all i. Therefore f = 0. So we have shown N(T) is trivial, as required.
Now to see T is not onto. Inspecting (3), we can see that the constant term (the coefficient of 1) is 0. As this is true for all elements of Rg(T), we know 1 ∉ Rg(T), so T is not onto.
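These facts can be checked with sympy (a sketch; T is implemented directly as the definite integral, and the general quadratic is just a sample input):

```python
import sympy as sp

x, t = sp.symbols('x t')

def T(f):
    # T(f)(x) = integral of f(t) dt from 0 to x
    return sp.integrate(f.subs(x, t), (t, 0, x))

# Matches equation (3): x^2 + 3x + 5 maps to x^3/3 + 3x^2/2 + 5x
assert sp.expand(T(x**2 + 3*x + 5)) == x**3/3 + sp.Rational(3, 2)*x**2 + 5*x

# The constant term of any image is 0, so the polynomial 1 is never attained
assert T(x**2 + 3*x + 5).subs(x, 0) == 0

# N(T) is trivial: T(f) = 0 forces f = 0 (checked on a general quadratic)
a, b, c = sp.symbols('a b c')
coeffs = sp.Poly(T(a*x**2 + b*x + c), x).coeffs()
assert sp.solve(coeffs, [a, b, c]) == {a: 0, b: 0, c: 0}
```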
Problem 16. Good-o, here we can assume T is linear. It's easy to see T is not 1-1, as the derivative of any constant function is 0. So we just need ontoness. Here we can use the correspondence between integral and derivative: letting T′ be the integral transformation of problem 15, for any g ∈ V we have T(T′(g)) = g, so T is onto.
… If we first project onto the y-axis, and then rotate (say clockwise, by 90°) … R² … clockwise rotation by 90° … T, we have a more interesting example.
Problem 20. First we show T(V₁) is a subspace of W. As T : V → W, T(V₁) ⊆ W. As 0 ∈ V₁ and T(0) = 0 by linearity, we have 0 ∈ T(V₁).
Let w₁, w₂ ∈ T(V₁) and c ∈ F. We need to show cw₁ + w₂ ∈ T(V₁). Let v₁, v₂ ∈ V₁ be such that T(v₁) = w₁ and T(v₂) = w₂ (these exist by definition of T(V₁)). By linearity, we have
\[
(4) \qquad T(cv_1 + v_2) = cT(v_1) + T(v_2) = cw_1 + w_2.
\]
But V₁ is a subspace, so cv₁ + v₂ ∈ V₁, so T(cv₁ + v₂) ∈ T(V₁). So by (4), cw₁ + w₂ ∈ T(V₁), as desired.
Now we show T⁻¹(W₁) = {v ∈ V | T(v) ∈ W₁} is a subspace of V. Firstly, T(0) = 0 ∈ W₁, so 0 ∈ T⁻¹(W₁).
Now let x, y ∈ T⁻¹(W₁) and c ∈ F. We need cx + y ∈ T⁻¹(W₁), which is equivalent to T(cx + y) ∈ W₁. But T(cx + y) = cT(x) + T(y) and we know T(x), T(y) ∈ W₁ because x, y ∈ T⁻¹(W₁). As W₁ is a subspace, we have cT(x) + T(y) ∈ W₁, so T(cx + y) ∈ W₁, as required.
Problem 21.
(a) Let y, z ∈ V and b ∈ F. Then
\[
T(y + bz) = T(y_1 + bz_1,\ y_2 + bz_2,\ \ldots) = (y_2 + bz_2,\ y_3 + bz_3,\ \ldots) = (y_2, y_3, \ldots) + b(z_2, z_3, \ldots) = T(y) + bT(z).
\]
Showing U is linear is almost the same.
(b) Clearly T is not 1-1 as two sequences y, z may differ only on their first entry. T is onto because given y ∈ V, right-shifting then left-shifting the result gives y back, i.e. T(U(y)) = y. Thus y ∈ Rg(T).
(c) U is not onto because it doesn't produce the sequence (1, 0, 0, …). It is 1-1 because if U(y) = U(z) then T(U(y)) = T(U(z)), and as mentioned above, T(U(y)) = y and likewise for z, so y = z.
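The shift operators can be sketched in Python, with finite tuples standing in for the (infinite) sequences of problem 21:

```python
# Left shift T and right shift U on (finite prefixes of) sequences.
def T(seq):
    # T(y1, y2, y3, ...) = (y2, y3, ...)
    return seq[1:]

def U(seq):
    # U(y1, y2, y3, ...) = (0, y1, y2, ...)
    return (0,) + seq

y = (5, 7, 11, 13)
assert T(U(y)) == y                  # left shift undoes right shift: T is onto
assert T((1, 2, 3)) == T((9, 2, 3))  # T is not 1-1: inputs differ in entry 1
assert U(y)[0] == 0                  # U never produces (1, 0, 0, ...): not onto
```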
Problem 22. Given v = (x, y, z) ∈ R³, we have v = x(1, 0, 0) + y(0, 1, 0) + z(0, 0, 1). Therefore
\[
T(v) = T(x(1, 0, 0) + y(0, 1, 0) + z(0, 0, 1)) = xT(1, 0, 0) + yT(0, 1, 0) + zT(0, 0, 1),
\]
using linearity for the second equality. So we have to set a = T(1, 0, 0), b = T(0, 1, 0), c = T(0, 0, 1), and then we get T(x, y, z) = ax + by + cz as required.
Notice we had no choice about what a, b, c were - they were chosen by T. Also notice we can also write the action of T as matrix multiplication:
\[
T(x, y, z) = \begin{pmatrix} a & b & c \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}.
\]
For the linear T : Fⁿ → F case, we do the same thing, and get some aᵢ ∈ F, such that
\[
T(x_1, \ldots, x_n) = a_1 x_1 + \ldots + a_n x_n.
\]
Now suppose T : Fⁿ → Fᵐ. Let {e^(1), …, e^(n)} be the standard basis for Fⁿ (so e^(2) = (0, 1, 0, 0, …, 0), etc). We can do exactly the same thing as the R³ → R case, and express any input vector as a linear combination of the e^(i)'s. So we'll need to know the value T takes on these vectors. T(e^(i)) ∈ Fᵐ for each i, so it is some m-tuple. Let's denote it a^(i), so a^(i) = T(e^(i)). (This could be a little deceptive though - remember the a^(i)'s are not scalars in this case, but m-tuples of scalars.) Now by linearity, we get
\[
T(x_1, \ldots, x_n) = T(x_1 e^{(1)} + \ldots + x_n e^{(n)}) = x_1 a^{(1)} + \ldots + x_n a^{(n)}.
\]
So we've found a generalization of the above - instead of scalars, we just have vectors. Again, T forced the choice of the a^(i)'s on us. But let's try writing out all the vectors explicitly. Say
a^(i) = (a_{1i}, …, a_{mi}) (remember they were m-tuples). Then, writing the vectors as columns,
\[
T\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = x_1 \begin{pmatrix} a_{11} \\ \vdots \\ a_{m1} \end{pmatrix} + \ldots + x_n \begin{pmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{pmatrix} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix},
\]
where in the last expression we have the multiplication of a matrix and a vector. So any linear transformation T : Fⁿ → Fᵐ can be represented in the form T(x) = Ax, where A is the matrix determined as above. As the choice of the a^(i)'s was determined by T, and the matrix is composed of the components of the a^(i)'s, the matrix is completely determined by T.
Problem 25. To do this problem, make sure you read the definition on the previous page, before problem 24. When a vector space V = W₁ ⊕ W₂, every vector in V has a unique representation as v = w₁ + w₂, where wᵢ ∈ Wᵢ. Then the projection onto W₁ along W₂ just sends v to w₁. You might think of the vectors in V being represented with two components (though those components are vectors, not scalars), and we're projecting onto one of the components. This definition is abstract, though, and you can also think about it as follows. Consider the xy-plane in R³ and a line L through the origin, not lying in the xy-plane. We can define projection Pr onto the xy-plane along L by sending any point p to the xy-plane along the line through p, parallel to L. This line intersects the xy-plane at some unique point, and we define Pr(p) to be that point of intersection. This is a case of the general definition.
(a) First we need to check that this makes sense, i.e. that V = R³ = xy-plane ⊕ z-axis. But by definition, this is just that R³ = xy-plane + z-axis and xy-plane ∩ z-axis = {0}. The first of these comes from noting that (x, y, z) = (x, y, 0) + (0, 0, z) ∈ xy-plane + z-axis. The second is easy too.
Now, given v = (x, y, z) ∈ R³, v = (x, y, 0) + (0, 0, z), and (x, y, 0) ∈ xy-plane and (0, 0, z) ∈ z-axis. By definition, T(x, y, z) = (x, y, 0), so T is the projection on the xy-plane along the z-axis.
(b) We already know R³ = z-axis ⊕ xy-plane from part (a) (by commutativity of + and ∩, the definition is symmetric, so V = W₁ ⊕ W₂ ⟺ V = W₂ ⊕ W₁). Given v = (x, y, z) ∈ R³, we have v = (0, 0, z) + (x, y, 0) is the z-axis + xy-plane representation of v, so we need T(x, y, z) = (0, 0, z).
(c) As the line L does not lie in the xy-plane, it's easy to show xy-plane ⊕ L = R³. Given (a, b, c) ∈ R³, T(a, b, c) = (a − c, b, 0), which is certainly in the xy-plane, so that's fine. Now consider (a, b, c) − T(a, b, c) = (a, b, c) − (a − c, b, 0) = (c, 0, c). This is in L. So (a, b, c) = v + w where v = T(a, b, c) ∈ xy-plane and w = (c, 0, c) ∈ L, as required.
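A quick numerical check of part (c) (a sketch; as the computation of w = (c, 0, c) suggests, L is taken here to be the line through the origin and (1, 0, 1)):

```python
import numpy as np

def T(v):
    # The projection of problem 25(c): T(a, b, c) = (a - c, b, 0)
    a, b, c = v
    return np.array([a - c, b, 0.0])

v = np.array([2.0, -1.0, 5.0])
w = v - T(v)

assert T(v)[2] == 0                                      # T(v) is in the xy-plane
assert np.allclose(w, v[2] * np.array([1.0, 0.0, 1.0]))  # v - T(v) lies on L
assert np.allclose(T(T(v)), T(v))                        # projections satisfy T∘T = T
```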
Problem 35.
(a) We just need to verify Rg(T) ∩ N(T) = {0}. By exercise 29 of 1.6,
\[
\begin{aligned}
\dim(V) &= \dim(Rg(T)) + \dim(N(T)) - \dim(Rg(T) \cap N(T)) \\
&= \mathrm{rank}(T) + \mathrm{nullity}(T) - \dim(Rg(T) \cap N(T)).
\end{aligned}
\]
So by the Dimension Theorem, dim(Rg(T) ∩ N(T)) = 0, so Rg(T) ∩ N(T) = {0}. Finite-dimensionality is required by both the Dimension Theorem and exercise 29.
(b) By the same result,
\[
\dim(Rg(T) + N(T)) = \dim(Rg(T)) + \dim(N(T)) - \dim(Rg(T) \cap N(T)),
\]
but the last term is 0 by hypothesis. Combining this with the Dimension Theorem, dim(Rg(T) + N(T)) = dim(V). But Rg(T) + N(T) is a subspace of V, so by Theorem 1.11, V = Rg(T) + N(T), which is all we needed. Finite-dimensionality was used in the same way here.
Problem 38. Let r + si, x + yi ∈ C. Then
\[
\begin{aligned}
\mathrm{Conj}((r + si) + (x + yi)) &= \mathrm{Conj}((r + x) + (s + y)i) = (r + x) - (s + y)i \\
&= (r - si) + (x - yi) = \mathrm{Conj}(r + si) + \mathrm{Conj}(x + yi).
\end{aligned}
\]
However, i·Conj(1) = i·1 = i, but Conj(i·1) = Conj(i) = −i. So Conj does not preserve multiplication by complex scalars.
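This is immediate to check with Python's built-in complex numbers (the sample values are arbitrary):

```python
# Complex conjugation is additive but not C-linear (problem 38).
z, w = complex(2, 3), complex(-1, 4)

def conj(u):
    return u.conjugate()

assert conj(z + w) == conj(z) + conj(w)  # preserves sums
assert conj(1j * 1) != 1j * conj(1)      # i·Conj(1) = i, but Conj(i·1) = -i
assert conj(5 * z) == 5 * conj(z)        # it IS linear over the real scalars
```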