
Quantum Theory, Groups and Representations:

An Introduction
(under construction)

Peter Woit
Department of Mathematics, Columbia University
[email protected]

February 25, 2015


© 2015 Peter Woit
All rights reserved.

Contents

Preface xi

1 Introduction and Overview 1


1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Basic principles of quantum mechanics . . . . . . . . . . . . . . . 2
1.2.1 Fundamental axioms of quantum mechanics . . . . . . . . 3
1.2.2 Principles of measurement theory . . . . . . . . . . . . . . 4
1.3 Unitary group representations . . . . . . . . . . . . . . . . . . . . 5
1.4 Representations and quantum mechanics . . . . . . . . . . . . . . 7
1.5 Symmetry groups and their representations on function spaces . 8
1.6 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2 The Group U (1) and its Representations 13


2.1 Some representation theory . . . . . . . . . . . . . . . . . . . . . 14
2.2 The group U (1) and its representations . . . . . . . . . . . . . . 16
2.3 The charge operator . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4 Conservation of charge and U (1) symmetry . . . . . . . . . . . . 21
2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.6 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3 Two-state Systems and SU (2) 23


3.1 The two-state quantum system . . . . . . . . . . . . . . . . . . . 24
3.1.1 The Pauli matrices: observables of the two-state quantum
system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.1.2 Exponentials of Pauli matrices: unitary transformations
of the two-state system . . . . . . . . . . . . . . . . . . . 26
3.2 Commutation relations for Pauli matrices . . . . . . . . . . . . . 29
3.3 Dynamics of a two-state system . . . . . . . . . . . . . . . . . . . 31
3.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 32

4 Linear Algebra Review, Unitary and Orthogonal Groups 33


4.1 Vector spaces and linear maps . . . . . . . . . . . . . . . . . . . . 33
4.2 Dual vector spaces . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.3 Change of basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

4.4 Inner products . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.5 Adjoint operators . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.6 Orthogonal and unitary transformations . . . . . . . . . . . . . . 40
4.6.1 Orthogonal groups . . . . . . . . . . . . . . . . . . . . . . 41
4.6.2 Unitary groups . . . . . . . . . . . . . . . . . . . . . . . . 42
4.7 Eigenvalues and eigenvectors . . . . . . . . . . . . . . . . . . . . 43
4.8 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 44

5 Lie Algebras and Lie Algebra Representations 45


5.1 Lie algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.2 Lie algebras of the orthogonal and unitary groups . . . . . . . . . 48
5.2.1 Lie algebra of the orthogonal group . . . . . . . . . . . . . 49
5.2.2 Lie algebra of the unitary group . . . . . . . . . . . . . . 50
5.3 Lie algebra representations . . . . . . . . . . . . . . . . . . . . . 51
5.4 Complexification . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.5 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 56

6 The Rotation and Spin Groups in 3 and 4 Dimensions 59


6.1 The rotation group in three dimensions . . . . . . . . . . . . . . 59
6.2 Spin groups in three and four dimensions . . . . . . . . . . . . . 62
6.2.1 Quaternions . . . . . . . . . . . . . . . . . . . . . . . . . . 63
6.2.2 Rotations and spin groups in four dimensions . . . . . . . 64
6.2.3 Rotations and spin groups in three dimensions . . . . . . 64
6.2.4 The spin group and SU (2) . . . . . . . . . . . . . . . . . 68
6.3 A summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
6.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 71

7 Rotations and the Spin 1/2 Particle in a Magnetic Field 73


7.1 The spinor representation . . . . . . . . . . . . . . . . . . . . . . 73
7.2 The spin 1/2 particle in a magnetic field . . . . . . . . . . . . . . 74
7.3 The Heisenberg picture . . . . . . . . . . . . . . . . . . . . . . . 78
7.4 The Bloch sphere and complex projective space . . . . . . . . . . 79
7.5 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 84

8 Representations of SU (2) and SO(3) 85


8.1 Representations of SU (2): classification . . . . . . . . . . . . . . 86
8.1.1 Weight decomposition . . . . . . . . . . . . . . . . . . . . 86
8.1.2 Lie algebra representations: raising and lowering operators 88
8.2 Representations of SU (2): construction . . . . . . . . . . . . . . 92
8.3 Representations of SO(3) and spherical harmonics . . . . . . . . 95
8.4 The Casimir operator . . . . . . . . . . . . . . . . . . . . . . . . 101
8.5 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 103

9 Tensor Products, Entanglement, and Addition of Spin 105
9.1 Tensor products . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
9.2 Composite quantum systems and tensor products . . . . . . . . . 107
9.3 Indecomposable vectors and entanglement . . . . . . . . . . . . . 109
9.4 Tensor products of representations . . . . . . . . . . . . . . . . . 109
9.4.1 Tensor products of SU (2) representations . . . . . . . . . 110
9.4.2 Characters of representations . . . . . . . . . . . . . . . . 111
9.4.3 Some examples . . . . . . . . . . . . . . . . . . . . . . . . 112
9.5 Bilinear forms and tensor products . . . . . . . . . . . . . . . . . 113
9.6 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 115

10 Energy, Momentum and Translation Groups 117


10.1 Energy, momentum and space-time translations . . . . . . . . . . 118
10.2 Periodic boundary conditions and the group U (1) . . . . . . . . . 123
10.3 The group R and the Fourier transform . . . . . . . . . . . . . . 126
10.3.1 Delta functions . . . . . . . . . . . . . . . . . . . . . . . . 129
10.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 131

11 The Heisenberg group and the Schrödinger Representation 133


11.1 The position operator and the Heisenberg Lie algebra . . . . . . 134
11.1.1 Position space representation . . . . . . . . . . . . . . . . 134
11.1.2 Momentum space representation . . . . . . . . . . . . . . 135
11.1.3 Physical interpretation . . . . . . . . . . . . . . . . . . . . 136
11.2 The Heisenberg Lie algebra . . . . . . . . . . . . . . . . . . . . . 137
11.3 The Heisenberg group . . . . . . . . . . . . . . . . . . . . . . . . 138
11.4 The Schrödinger representation . . . . . . . . . . . . . . . . . . . 139
11.5 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 142

12 The Poisson Bracket and Symplectic Geometry 143


12.1 Classical mechanics and the Poisson bracket . . . . . . . . . . . . 143
12.2 The Poisson bracket and the Heisenberg Lie algebra . . . . . . . 146
12.3 Symplectic geometry . . . . . . . . . . . . . . . . . . . . . . . . . 148
12.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 151

13 Hamiltonian Vector Fields and the Moment Map 153


13.1 Vector fields and the exponential map . . . . . . . . . . . . . . . 153
13.2 Hamiltonian vector fields and canonical transformations . . . . . 155
13.3 Group actions on M and the moment map . . . . . . . . . . . . . 160
13.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 164

14 Quadratic Polynomials and the Symplectic Group 165


14.1 The symplectic group . . . . . . . . . . . . . . . . . . . . . . . . 165
14.1.1 The symplectic group for d = 1 . . . . . . . . . . . . . . . 166
14.1.2 The symplectic group for arbitrary d . . . . . . . . . . . . 169
14.2 The symplectic group and automorphisms of the Heisenberg group . 170
14.3 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 175

15 Quantization 177
15.1 Canonical quantization . . . . . . . . . . . . . . . . . . . . . . . . 177
15.2 The Groenewold-van Hove no-go theorem . . . . . . . . . . . . . 179
15.3 Canonical quantization in d dimensions . . . . . . . . . . . . . . 180
15.4 Quantization and symmetries . . . . . . . . . . . . . . . . . . . . 181
15.5 More general notions of quantization . . . . . . . . . . . . . . . . 182
15.6 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 183

16 Semi-direct Products 185


16.1 An example: the Euclidean group . . . . . . . . . . . . . . . . . . 185
16.2 Semi-direct product groups . . . . . . . . . . . . . . . . . . . . . 186
16.3 Semi-direct product Lie algebras . . . . . . . . . . . . . . . . . . 188
16.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 190

17 The Quantum Free Particle as a Representation of the Euclidean Group 191
17.1 The quantum free particle and representations of E(2) . . . . . . 192
17.2 The case of E(3) . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
17.3 Other representations of E(3) . . . . . . . . . . . . . . . . . . . . 199
17.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 201

18 Representations of Semi-direct Products 203


18.1 Intertwining operators and the metaplectic representation . . . . 204
18.2 Some examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
18.2.1 The SO(2) action on the d = 1 phase space . . . . . . . . 206
18.2.2 The SO(2) action by rotations of the plane for d = 2 . . . 208
18.3 Representations of N o K, N commutative . . . . . . . . . . . . 209
18.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 212

19 Central Potentials and the Hydrogen Atom 213


19.1 Quantum particle in a central potential . . . . . . . . . . . . . . 213
19.2 so(4) symmetry and the Coulomb potential . . . . . . . . . . . . 217
19.3 The hydrogen atom . . . . . . . . . . . . . . . . . . . . . . . . . . 221
19.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 222

20 The Harmonic Oscillator 223


20.1 The harmonic oscillator with one degree of freedom . . . . . . . . 224
20.2 Creation and annihilation operators . . . . . . . . . . . . . . . . 226
20.3 The Bargmann-Fock representation . . . . . . . . . . . . . . . . . 229
20.4 The Bargmann transform . . . . . . . . . . . . . . . . . . . . . . 231
20.5 Multiple degrees of freedom . . . . . . . . . . . . . . . . . . . . . 232
20.6 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 234

21 The Harmonic Oscillator as a Representation of the Heisenberg
Group 235
21.1 Complex structures and phase space . . . . . . . . . . . . . . . . 236
21.2 Complex structures and quantization . . . . . . . . . . . . . . . . 238
21.3 The positivity condition on J . . . . . . . . . . . . . . . . . . . . 240
21.4 Complex structures for d = 1 and squeezed states . . . . . . . . . 243
21.5 Coherent states and the Heisenberg group action . . . . . . . . . 245
21.6 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 247

22 The Harmonic Oscillator and the Metaplectic Representation, d = 1 249
22.1 The metaplectic representation for d = 1 . . . . . . . . . . . . . . 249
22.2 Complex structures and the SL(2, R) action on M . . . . . . . . 253
22.3 Normal Ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
22.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 257

23 The Harmonic Oscillator as a Representation of U (d) 259


23.1 Complex structures and the Sp(2d, R) action on M . . . . . . . 260
23.2 The metaplectic representation and U (d) ⊂ Sp(2d, R) . . . . . . 263
23.3 Examples in d = 2 and 3 . . . . . . . . . . . . . . . . . . . . . . . 265
23.3.1 Two degrees of freedom and SU (2) . . . . . . . . . . . . . 265
23.3.2 Three degrees of freedom and SO(3) . . . . . . . . . . . . 268
23.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 269

24 The Fermionic Oscillator 271


24.1 Canonical anticommutation relations and the fermionic oscillator 271
24.2 Multiple degrees of freedom . . . . . . . . . . . . . . . . . . . . . 273
24.3 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 276

25 Weyl and Clifford Algebras 277


25.1 The Complex Weyl and Clifford algebras . . . . . . . . . . . . . . 277
25.1.1 One degree of freedom, bosonic case . . . . . . . . . . . . 277
25.1.2 One degree of freedom, fermionic case . . . . . . . . . . . 278
25.1.3 Multiple degrees of freedom . . . . . . . . . . . . . . . . . 280
25.2 Real Clifford algebras . . . . . . . . . . . . . . . . . . . . . . . . 281
25.3 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 283

26 Clifford Algebras and Geometry 285


26.1 Non-degenerate bilinear forms . . . . . . . . . . . . . . . . . . . . 285
26.2 Clifford algebras and geometry . . . . . . . . . . . . . . . . . . . 287
26.2.1 Rotations as iterated orthogonal reflections . . . . . . . . 289
26.2.2 The Lie algebra of the rotation group and quadratic ele-
ments of the Clifford algebra . . . . . . . . . . . . . . . . 290
26.3 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 292

27 Anticommuting Variables and Pseudo-classical Mechanics 293
27.1 The Grassmann algebra of polynomials on anticommuting gener-
ators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
27.2 Pseudo-classical mechanics and the fermionic Poisson bracket . . 296
27.3 Examples of pseudo-classical mechanics . . . . . . . . . . . . . . 299
27.3.1 The pseudo-classical spin degree of freedom . . . . . . . . 300
27.3.2 The pseudo-classical fermionic oscillator . . . . . . . . . . 301
27.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 302

28 Fermionic Quantization and Spinors 303


28.1 Quantization of pseudo-classical systems . . . . . . . . . . . . . . 303
28.1.1 Quantization of the pseudo-classical spin . . . . . . . . . . 307
28.2 The Schrödinger representation for fermions: ghosts . . . . . . . 307
28.3 Spinors and the Bargmann-Fock construction . . . . . . . . . . . 309
28.4 Complex structures, U (d) ⊂ SO(2d) and the spinor representation 311
28.5 An example: spinors for SO(4) . . . . . . . . . . . . . . . . . . . 314
28.6 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 316

29 A Summary: Parallels Between Bosonic and Fermionic Quantization 317

30 Supersymmetry, Some Simple Examples 319


30.1 The supersymmetric oscillator . . . . . . . . . . . . . . . . . . . . 319
30.2 Supersymmetric quantum mechanics with a superpotential . . . . 322
30.3 Supersymmetric quantum mechanics and differential forms . . . . 325
30.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 325

31 The Pauli Equation and the Dirac Operator 327


31.1 The Pauli operator and free spin 1/2 particles in d = 3 . . . . . . . 327
31.2 The Dirac operator . . . . . . . . . . . . . . . . . . . . . . . . . . 333
31.3 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 333

32 Lagrangian Methods and the Path Integral 335


32.1 Lagrangian mechanics . . . . . . . . . . . . . . . . . . . . . . . . 335
32.2 Path integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
32.3 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 343

33 Quantization of Infinite-dimensional Phase Spaces 345


33.1 Inequivalent irreducible representations . . . . . . . . . . . . . . 346
33.2 The anomaly and the Schwinger term . . . . . . . . . . . . . . . 347
33.3 Higher order operators and renormalization . . . . . . . . . . . . 349
33.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 350

34 Multi-particle Systems and Non-relativistic Quantum Fields 351
34.1 Multi-particle quantum systems as quanta of a harmonic oscillator . 352
34.1.1 Bosons and the quantum harmonic oscillator . . . . . . . 352
34.1.2 Fermions and the fermionic oscillator . . . . . . . . . . . . 354
34.2 Solutions to the free particle Schrödinger equation . . . . . . . . 354
34.2.1 Box normalization . . . . . . . . . . . . . . . . . . . . . . 355
34.2.2 Continuum normalization . . . . . . . . . . . . . . . . . . 358
34.3 Quantum field operators . . . . . . . . . . . . . . . . . . . . . . . 359
34.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 364

35 Field Quantization and Dynamics for Non-relativistic Quantum Fields 365
35.1 Quantization of classical fields . . . . . . . . . . . . . . . . . . . . 365
35.2 Dynamics of the free quantum field . . . . . . . . . . . . . . . . . 368
35.3 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 372

36 Symmetries and Non-relativistic Quantum Fields 373


36.1 Internal symmetries . . . . . . . . . . . . . . . . . . . . . . . . . 373
36.1.1 U (1) symmetry . . . . . . . . . . . . . . . . . . . . . . . . 374
36.1.2 U (m) symmetry . . . . . . . . . . . . . . . . . . . . . . . 376
36.2 Spatial symmetries . . . . . . . . . . . . . . . . . . . . . . . . . . 378
36.3 Fermions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
36.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 381

37 Minkowski Space and the Lorentz Group 383


37.1 Minkowski space . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
37.2 The Lorentz group and its Lie algebra . . . . . . . . . . . . . . . 387
37.3 Spin and the Lorentz group . . . . . . . . . . . . . . . . . . . . . 389
37.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 392

38 Representations of the Lorentz Group 393


38.1 Representations of the Lorentz group . . . . . . . . . . . . . . . . 393
38.2 Dirac γ matrices and Cliff(3, 1) . . . . . . . . . . . . . . . . . . . 398
38.3 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 402

39 The Poincaré Group and its Representations 403


39.1 The Poincaré group and its Lie algebra . . . . . . . . . . . . . . . 403
39.2 Representations of the Poincaré group . . . . . . . . . . . . . . . 405
39.2.1 Positive energy time-like orbits . . . . . . . . . . . . . . . 407
39.2.2 Negative energy time-like orbits . . . . . . . . . . . . . . . 408
39.2.3 Space-like orbits . . . . . . . . . . . . . . . . . . . . . . . 408
39.2.4 The zero orbit . . . . . . . . . . . . . . . . . . . . . . . . 408
39.2.5 Positive energy null orbits . . . . . . . . . . . . . . . . . . 409
39.2.6 Negative energy null orbits . . . . . . . . . . . . . . . . . 409
39.3 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 410

40 The Klein-Gordon Equation and Scalar Quantum Fields 411
40.1 The Klein-Gordon equation and its solutions . . . . . . . . . . . 412
40.2 Classical relativistic scalar field theory . . . . . . . . . . . . . . . 415
40.3 The complex structure on the space of Klein-Gordon solutions . 417
40.4 Quantization of the real scalar field . . . . . . . . . . . . . . . . . 419
40.5 The propagator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
40.6 Fermionic scalars . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
40.7 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 422

41 Symmetries and Relativistic Scalar Quantum Fields 423


41.1 Internal symmetries . . . . . . . . . . . . . . . . . . . . . . . . . 423
41.1.1 SO(m) symmetry and real scalar fields . . . . . . . . . . . 424
41.1.2 U (1) symmetry and complex scalar fields . . . . . . . . . 426
41.2 Poincaré symmetry and scalar fields . . . . . . . . . . . . . . . . 429
41.3 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 429

42 U (1) Gauge Symmetry and Coupling to the Electromagnetic Field 431
42.1 U (1) gauge symmetry . . . . . . . . . . . . . . . . . . . . . . . . 431
42.2 Electric and magnetic fields . . . . . . . . . . . . . . . . . . . . . 433
42.3 The Pauli-Schrödinger equation in an electromagnetic field . . . 434
42.4 Non-abelian gauge symmetry . . . . . . . . . . . . . . . . . . . . 434
42.5 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 434

43 Quantization of the Electromagnetic Field: the Photon 435


43.1 Maxwell’s equations . . . . . . . . . . . . . . . . . . . . . . . . . 435
43.2 Hamiltonian formalism for electromagnetic fields . . . . . . . . . 436
43.3 Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
43.4 Field operators for the vector potential . . . . . . . . . . . . . . . 436
43.5 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 436

44 The Dirac Equation and Spin-1/2 Fields 437


44.1 The Dirac and Weyl Equations . . . . . . . . . . . . . . . . . . . 437
44.2 Quantum Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
44.3 Symmetries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
44.4 The propagator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
44.5 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 440

45 An Introduction to the Standard Model 441


45.1 Non-Abelian gauge fields . . . . . . . . . . . . . . . . . . . . . . . 441
45.2 Fundamental fermions . . . . . . . . . . . . . . . . . . . . . . . . 441
45.3 Spontaneous symmetry breaking . . . . . . . . . . . . . . . . . . 441
45.4 For further reading . . . . . . . . . . . . . . . . . . . . . . . . . . 441

46 Further Topics 443

A Conventions 445

Preface

This document began as course notes prepared for a class taught at Columbia
during the 2012-13 academic year. The intent was to cover the basics of quantum
mechanics, up to and including basic material on relativistic quantum field
theory, from a point of view emphasizing the role of unitary representations of
Lie groups in the foundations of the subject. It has been significantly rewritten
and extended during the past year and the intent is to continue this process
based upon experience teaching the same material during 2014-15. The current
state of the document is that of a first draft of a book. As changes are made,
the latest version will be available at
http://www.math.columbia.edu/~woit/QM/qmbook.pdf
Corrections, comments, criticism, and suggestions for improvement are
encouraged; the best way to contact me is by email at [email protected]
The approach to this material is simultaneously rather advanced, using cru-
cially some fundamental mathematical structures normally only discussed in
graduate mathematics courses, while at the same time trying to do this in as
elementary terms as possible. The Lie groups needed are relatively simple ones
that can be described purely in terms of small matrices. Much of the represen-
tation theory will just use standard manipulations of such matrices. The only
prerequisite for the course as taught was linear algebra and multi-variable cal-
culus. My hope is that this level of presentation will simultaneously be useful to
mathematics students trying to learn something about both quantum mechan-
ics and representation theory, as well as to physics students who already have
seen some quantum mechanics, but would like to know more about the mathe-
matics underlying the subject, especially that relevant to exploiting symmetry
principles.
The topics covered often intentionally avoid overlap with the material of
standard physics courses in quantum mechanics and quantum field theory, for
which many excellent textbooks are available. This document is best read in
conjunction with such a text. Some of the main differences with standard physics
presentations include:

• The role of Lie groups, Lie algebras, and their unitary representations is
systematically emphasized, including not just the standard use of these to
derive consequences for the theory of a “symmetry” generated by operators
commuting with the Hamiltonian.

• Symplectic geometry and the role of the Lie algebra of functions on phase
space in Hamiltonian mechanics is emphasized, with quantization just the
passage to a unitary representation of (a subalgebra of) this Lie algebra.
• The role of the metaplectic representation and the subtleties of the pro-
jective factor involved are described in detail.

• The parallel role of the Clifford algebra and spinor representation are
extensively investigated.
• Some topics usually first encountered in the context of relativistic quan-
tum field theory are instead first developed in simpler non-relativistic or
finite-dimensional contexts. Non-relativistic quantum field theory based
on the Schrödinger equation is described in detail before moving on to
the relativistic case. The topic of irreducible representations of space-
time symmetry groups is first encountered with the case of the Euclidean
group, where the implications for the non-relativistic theory are explained.
The analogous problem for the relativistic case, that of the irreducible rep-
resentations of the Poincaré group, is then worked out later on.
• The emphasis is on the Hamiltonian formalism and its representation-
theoretical implications, with the Lagrangian formalism de-emphasized.
In particular, the operators generating symmetry transformations are de-
rived using the moment map for the action of such transformations on
phase space, not by invoking Noether’s theorem for transformations that
leave invariant a Lagrangian.
• Care is taken to keep track of the distinction between vector spaces and
their duals, as well as the distinction between real and complex vector
spaces, making clear exactly where complexification and the choice of a
complex structure enters the theory.
• A fully rigorous treatment of the subject is beyond the scope of what is
covered here, but an attempt is made to keep clear the difference between
where a rigorous treatment could be pursued relatively straight-forwardly,
and where there are serious problems of principle making a rigorous treat-
ment very hard to achieve.

Chapter 1

Introduction and Overview

1.1 Introduction
A famous quote from Richard Feynman goes “I think it is safe to say that no one
understands quantum mechanics” [17]. In this book we’ll pursue one possible
route to such an understanding, emphasizing the deep connections of quantum
mechanics to fundamental ideas and powerful techniques of modern mathemat-
ics. The strangeness inherent in quantum theory that Feynman was referring
to has two rather different sources. One of them is the inherent disjunction and
incommensurability between the conceptual framework of the classical physics
which governs our everyday experience of the physical world, and the very dif-
ferent framework which governs physical reality at the atomic scale. Familiarity
with the powerful formalisms of classical mechanics and electromagnetism pro-
vides deep understanding of the world at the distance scales familiar to us.
Supplementing these with the more modern (but still “classical” in the sense
of “not quantum”) subjects of special and general relativity extends our under-
standing into other less accessible regimes, while still leaving atomic physics a
mystery.
Read in context though, Feynman was pointing to a second source of diffi-
culty, contrasting the mathematical formalism of quantum mechanics with that
of the theory of general relativity, a supposedly equally hard to understand
subject. General relativity can be a difficult subject to master, but its math-
ematical and conceptual structure involves a fairly straight-forward extension
of structures that characterize 19th century physics. The fundamental physical
laws (Einstein’s equations for general relativity) are expressed as partial differ-
ential equations, a familiar if difficult mathematical subject. The state of the
system is determined by the set of fields satisfying these equations, and observ-
able quantities are functionals of these fields. The mathematics is just that of
the usual calculus: differential equations and their real-valued solutions.
In quantum mechanics, the state of a system is best thought of as a different
sort of mathematical object: a vector in a complex vector space, the so-called

state space. One can sometimes interpret this vector as a function, the wave-
function, although this comes with the non-classical feature that wavefunctions
are complex-valued. What’s truly completely different is the treatment of ob-
servable quantities, which correspond to self-adjoint linear operators on the state
space. This has no parallel in classical physics, and violates our intuitions about
how physics should work, with observables now often no longer commuting.
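As a concrete illustration (a minimal NumPy sketch, not from the text), the spin observables of the two-state system studied in chapter 3 are given by the Pauli matrices, which are self-adjoint but do not commute:

```python
import numpy as np

# Pauli matrices: self-adjoint operators on a two-dimensional state space
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Self-adjointness: each matrix equals its own conjugate transpose
assert np.allclose(sigma_x, sigma_x.conj().T)
assert np.allclose(sigma_y, sigma_y.conj().T)

# Non-commutativity: the commutator [sigma_x, sigma_y] is non-zero
commutator = sigma_x @ sigma_y - sigma_y @ sigma_x
print(commutator)  # equals 2i*sigma_z, not the zero matrix
```

No classical observables (functions on phase space) behave this way: multiplication of functions always commutes.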
During the earliest days of quantum mechanics, the mathematician Hermann
Weyl quickly recognized that the mathematical structures being used were ones
he was quite familiar with from his work in the field of representation theory.
From the point of view that takes representation theory as a fundamental struc-
ture, the framework of quantum mechanics looks perfectly natural. Weyl soon
wrote a book expounding such ideas [72], but this got a mixed reaction from
physicists unhappy with the penetration of unfamiliar mathematical structures
into their subject (with some of them characterizing the situation as the
“Gruppenpest”, the group theory plague). One goal of this course will be to try to
make some of this mathematics as accessible as possible, boiling down Weyl’s
exposition to its essentials while updating it in the light of many decades of
progress and better understanding of the subject.
Weyl’s insight that quantum mechanics crucially involves understanding the
Lie groups that act on the phase space of a physical system and the unitary rep-
resentations of these groups has been vindicated by later developments which
dramatically expanded the scope of these ideas. The use of representation the-
ory to exploit the symmetries of a problem has become a powerful tool that has
found uses in many areas of science, not just quantum mechanics. I hope that
readers whose main interest is physics will learn to appreciate the mathematical
structures that lie behind the calculations of standard textbooks, helping them
understand how to effectively exploit them in other contexts. Those whose main
interest is mathematics will hopefully gain some understanding of fundamen-
tal physics, at the same time as seeing some crucial examples of groups and
representations. These should provide a good grounding for appreciating more
abstract presentations of the subject that are part of the standard mathemat-
ical curriculum. Anyone curious about the relation of fundamental physics to
mathematics, and what Eugene Wigner described as “The Unreasonable Ef-
fectiveness of Mathematics in the Natural Sciences”[73] should benefit from an
exposure to this remarkable story at the intersection of the two subjects.
The following sections give an overview of the fundamental ideas behind
much of the material to follow. In this sketchy and abstract form they will
likely seem rather mystifying to those meeting them for the first time. As we
work through basic examples in coming chapters, a better understanding of the
overall picture described here should start to emerge.

1.2 Basic principles of quantum mechanics


We’ll divide the conventional list of basic principles of quantum mechanics into
two parts, with the first covering the fundamental mathematical structures.

1.2.1 Fundamental axioms of quantum mechanics
In classical physics, the state of a system is given by a point in a “phase space”,
which one can think of equivalently as the space of solutions of an equation
of motion, or as (parametrizing solutions by initial value data) the space of
coordinates and momenta. Observable quantities are just functions on this space
(i.e. functions of the coordinates and momenta). There is one distinguished
observable, the energy or Hamiltonian, and it determines how states evolve in
time through Hamilton’s equations.
The basic structure of quantum mechanics is quite different, with the for-
malism built on the following simple axioms:
Axiom (States). The state of a quantum mechanical system is given by a non-
zero vector in a complex vector space H with Hermitian inner product h·, ·i.
We’ll review in chapter 4 some linear algebra, including the properties of in-
ner products on complex vector spaces. H may be finite or infinite dimensional,
with further restrictions required in the infinite-dimensional case (e.g. we may
want to require H to be a Hilbert space). Note two very important differences
with classical mechanical states:
• The state space is always linear: a linear combination of states is also a
state.
• The state space is a complex vector space: these linear combinations can
and do crucially involve complex numbers, in an inescapable way. In the
classical case only real numbers appear, with complex numbers used only
as an inessential calculational tool.
In this course we will sometimes use the notation introduced by Dirac for
vectors in the state space H: such a vector with a label ψ is denoted

|ψi

Axiom (Observables). The observables of a quantum mechanical system are given by self-adjoint linear operators on H.
We’ll also review the notion of self-adjointness in our review of linear algebra.
When H is infinite-dimensional, further restrictions will be needed on the class
of linear operators to be used.
Axiom (Dynamics). There is a distinguished observable, the Hamiltonian H.
Time evolution of states |ψ(t)i ∈ H is given by the Schrödinger equation

d/dt |ψ(t)i = −(i/~) H|ψ(t)i
The Hamiltonian observable H will have a physical interpretation in terms
of energy, and one may also want to specify some sort of positivity property on
H, in order to assure the existence of a stable lowest energy state.

~ is a dimensional constant, the value of which depends on what units one
uses for time and for energy. It has the dimensions [energy] · [time] and its
experimental value is

1.054571726(47) × 10⁻³⁴ Joule · seconds = 6.58211928(15) × 10⁻¹⁶ eV · seconds
(eV is the unit of “electron-Volt”, the energy acquired by an electron moving
through a one-Volt electric potential). The most natural units to use for quan-
tum mechanical problems would be energy and time units chosen so that ~ = 1.
For instance one could use seconds for time and measure energies in the very
small units of 6.6 × 10⁻¹⁶ eV, or use eV for energies, and then the very small
units of 6.6 × 10⁻¹⁶ seconds for time. Schrödinger’s equation implies that if one
is looking at a system where the typical energy scale is an eV, one’s state-vector
will be changing on the very short time scale of 6.6 × 10⁻¹⁶ seconds. When
we do computations, usually we will just set ~ = 1, implicitly going to a unit
system natural for quantum mechanics. When we get our final result, we can
insert appropriate factors of ~ to allow one to get answers in more conventional
unit systems.
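To see concretely what the Schrödinger equation says, here is a toy numerical sketch (my own example, not from the text): in a basis of eigenstates of H, each component of the state just rotates by a phase, and the evolution is manifestly unitary. Units with ~ = 1 are used, and the energy values are hypothetical.

```python
import cmath

# Toy sketch (my own example): solving the Schrodinger equation for a
# Hamiltonian that is diagonal in the chosen basis, in units with hbar = 1.
# Then |psi(t)> = e^{-iHt}|psi(0)>, so each component rotates by a phase.
def evolve(psi, energies, t):
    """Apply exp(-iHt) to psi, for H diagonal with the given eigenvalues."""
    return [cmath.exp(-1j * E * t) * c for E, c in zip(energies, psi)]

def norm_squared(psi):
    return sum(abs(c) ** 2 for c in psi)

# Equal superposition of two (hypothetical) energy eigenstates:
psi0 = [1 / 2 ** 0.5, 1 / 2 ** 0.5]
psi_t = evolve(psi0, energies=[1.0, 2.0], t=0.7)

# Time evolution is unitary: the norm (total probability) is conserved.
assert abs(norm_squared(psi_t) - norm_squared(psi0)) < 1e-12
```

Note that only the relative phase between the two components changes with time; this is the relative phase that, by the principles of the next section, can affect measurement probabilities.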
It is sometimes convenient however to carry along factors of ~, since this
can help make clear which terms correspond to classical physics behavior, and
which ones are purely quantum mechanical in nature. Typically classical physics
comes about in the limit where

(energy scale) · (time scale) / ~
is large. This is true for the energy and time scales encountered in everyday
life, but it can also always be achieved by taking ~ → 0, and this is what will
often be referred to as the “classical limit”.

1.2.2 Principles of measurement theory


The above axioms characterize the mathematical structure of a quantum theory,
but they don’t address the “measurement problem”. This is the question of
how to apply this structure to a physical system interacting with some sort
of macroscopic, human-scale experimental apparatus that “measures” what is
going on. This is a highly thorny issue, requiring in principle the study of two
interacting quantum systems (the one being measured, and the measurement
apparatus) in an overall state that is not just the product of the two states,
but is highly “entangled” (for the meaning of this term, see chapter 9). Since a
macroscopic apparatus will involve something like 10²³ degrees of freedom, this
question is extremely hard to analyze purely within the quantum mechanical
framework (for one thing, one would need to solve a Schrödinger equation in
10²³ variables).
Instead of trying to resolve in general this problem of how classical physics
behavior emerges for macroscopic objects, one can adopt the following two prin-
ciples as describing what will happen, and these allow one to make precise sta-
tistical predictions using quantum theory:

Principle (Observables). States where the value of an observable can be char-
acterized by a well-defined number are the states that are eigenvectors for the
corresponding self-adjoint operator. The value of the observable in such a state
will be a real number, the eigenvalue of the operator.
This principle identifies the states we have some hope of sensibly associating
a label to (the eigenvalue), a label which in some contexts corresponds to an
observable quantity characterizing states in classical mechanics. The observables
of most use will turn out to correspond to some group action on the physical
system (for instance the energy, momentum, angular momentum, or charge).
Principle (The Born rule). Given an observable O and two unit-norm states
|ψ1 i and |ψ2 i that are eigenvectors of O with eigenvalues λ1 and λ2 (i.e. O|ψ1 i =
λ1 |ψ1 i and O|ψ2 i = λ2 |ψ2 i), the complex linear combination state

c1 |ψ1 i + c2 |ψ2 i

may not have a well-defined value for the observable O. If one attempts to
measure this observable, one will get either λ1 or λ2 , with probabilities

|c1 |²/(|c1 |² + |c2 |²)  and  |c2 |²/(|c1 |² + |c2 |²)
respectively.
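The probabilities above are simple to compute. The following sketch (my own illustration, with hypothetical coefficients) also checks the point made below, that an overall phase of the state changes none of the probabilities:

```python
import cmath

# Illustration (my own example) of the Born rule for c1|psi1> + c2|psi2>,
# where |psi1>, |psi2> are eigenstates of an observable O.
def born_probabilities(c1, c2):
    n = abs(c1) ** 2 + abs(c2) ** 2      # normalization of the state
    return abs(c1) ** 2 / n, abs(c2) ** 2 / n

p1, p2 = born_probabilities(1 + 1j, 2.0)   # unnormalized coefficients are fine
assert abs(p1 + p2 - 1) < 1e-12            # the two outcomes exhaust all cases

# Multiplying the whole state by a phase e^{i theta} changes no probabilities:
phase = cmath.exp(1j * 0.3)
q1, q2 = born_probabilities(phase * (1 + 1j), phase * 2.0)
assert abs(p1 - q1) < 1e-12 and abs(p2 - q2) < 1e-12
```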
The Born rule is sometimes raised to the level of an axiom of the theory, but
it is plausible to expect that, given a full understanding of how measurements
work, it can be derived from the more fundamental axioms of the previous
section. Such an understanding though of how classical behavior emerges in
experiments is a very challenging topic, with the notion of “decoherence” playing
an important role. See the end of this chapter for some references that discuss
these issues in detail.
Note that the state c|ψi will have the same eigenvalues and probabilities as
the state |ψi, for any complex number c. It is conventional to work with states
of norm fixed to the value 1, which fixes the amplitude of c, leaving a remaining
ambiguity which is a phase eiθ . By the above principles this phase will not
contribute to the calculated probabilities of measurements. We will however
not at all take the point of view that this phase information can be ignored. It
plays an important role in the mathematical structure, and the relative phase
of two different states certainly does affect measurement probabilities.

1.3 Unitary group representations


The mathematical framework of quantum mechanics is closely related to what
mathematicians describe as the theory of “unitary group representations”. We

will be examining this notion in great detail and working through many examples
in coming chapters, but here’s a quick summary of the relevant definitions, as
well as an indication of the relationship to the quantum theory formalism.
Definition (Group). A group G is a set with an associative multiplication, such
that the set contains an identity element, as well as the multiplicative inverse
of each element.
Many different kinds of groups are of interest in mathematics, with an ex-
ample of the sort that we will be interested in the group of all rotations about
a point in 3-dimensional space. Most of the groups we will consider are “matrix
groups”, i.e. subgroups of the group of n by n invertible matrices (with real or
complex coefficients).
Definition (Representation). A (complex) representation (π, V ) of a group G
is a homomorphism
π : g ∈ G → π(g) ∈ GL(V )
where GL(V ) is the group of invertible linear maps V → V , with V a complex
vector space.
Saying the map π is a homomorphism means

π(g1 )π(g2 ) = π(g1 g2 )

for all g1 , g2 ∈ G. When V is finite dimensional and we have chosen a basis of V , then we have an identification of linear maps and matrices

GL(V ) ' GL(n, C)

where GL(n, C) is the group of invertible n by n complex matrices. We will begin by studying representations that are finite dimensional and will try to
make rigorous statements. Later on we will get to representations on function
spaces, which are infinite dimensional, and from then on will need to consider the
serious analytical difficulties that arise when one tries to make mathematically
precise statements in the infinite-dimensional case.
One source of confusion is that representations (π, V ) are sometimes referred
to by the map π, leaving implicit the vector space V that the matrices π(g) act
on, but at other times referred to by specifying the vector space V , leaving
implicit the map π. One reason for this is that the map π may be the identity
map: often G is a matrix group, so a subgroup of GL(n, C), acting on V ' Cn
by the standard action of matrices on vectors. One should keep in mind though
that just specifying V is generally not enough to specify the representation,
since it may not be the standard one. For example, it could very well carry the
trivial representation, where
π(g) = 1n
i.e. each element of G acts on V as the identity.
It turns out that in mathematics the most interesting classes of complex
representations are “unitary”, i.e. preserving the notion of length given by

the standard Hermitian inner product in a complex vector space. In physical
applications, the group representations under consideration typically correspond
to physical symmetries, and will preserve lengths in H, since these correspond
to probabilities of various observations. We have the definition
Definition (Unitary representation). A representation (π, V ) on a complex vec-
tor space V with Hermitian inner product h·, ·i is a unitary representation if it
preserves the inner product, i.e.

hπ(g)v1 , π(g)v2 i = hv1 , v2 i

for all g ∈ G and v1 , v2 ∈ V .


For a unitary representation, the matrices π(g) take values in a subgroup
U (n) ⊂ GL(n, C). In our review of linear algebra we will see that U (n) can be
characterized as the group of n by n complex matrices U such that

U −1 = U †

where U † is the conjugate-transpose of U . Note that we’ll be using the notation “† ” to mean the “adjoint” or conjugate-transpose matrix. This notation is pretty universal in physics, whereas mathematicians prefer to use “∗ ” instead of “† ”.
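As a quick sanity check of the unitarity condition U⁻¹ = U† (my own toy example, using a 2 by 2 rotation matrix, which is real orthogonal and hence unitary):

```python
import math

# Toy check (my own example) that a sample matrix is unitary:
# its conjugate-transpose times itself gives the identity.
def dagger(M):
    """Conjugate-transpose: entry (i, j) of the result is conj(M[j][i])."""
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

theta = 0.4
U = [[complex(math.cos(theta)), complex(-math.sin(theta))],
     [complex(math.sin(theta)), complex(math.cos(theta))]]

I = matmul(dagger(U), U)   # should be the 2x2 identity matrix
assert all(abs(I[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```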

1.4 Representations and quantum mechanics


The fundamental relationship between quantum mechanics and representation
theory is that whenever we have a physical quantum system with a group G
acting on it, the space of states H will carry a unitary representation of G (at
least up to a phase factor). For physicists working with quantum mechanics,
this implies that representation theory provides information about quantum
mechanical state spaces. For mathematicians studying representation theory,
this means that physics is a very fruitful source of unitary representations to
study: any physical system with a symmetry group G will provide one.
For a representation π and group elements g that are close to the identity,
one can use exponentiation to write π(g) ∈ GL(n, C) as

π(g) = eA

where A is also a matrix, close to the zero matrix.


We will study this situation in much more detail and work extensively with
examples, showing in particular that if π(g) is unitary (i.e. in the subgroup
U (n) ⊂ GL(n, C)), then A will be skew-adjoint:

A† = −A

where A† is the conjugate-transpose matrix. Defining B = iA, we find that B is self-adjoint

B† = B
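The claim that exponentiating a skew-adjoint matrix produces a unitary matrix can be spot-checked numerically. The sketch below (my own, with a hypothetical 2 by 2 skew-adjoint A) computes e^A by its power series and verifies that the result satisfies the unitarity condition:

```python
# Numerical check (my own sketch): for skew-adjoint A (A^dagger = -A),
# the power series e^A = 1 + A + A^2/2! + ... gives a unitary matrix.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def expm(A, terms=30):
    """Matrix exponential by truncated power series (fine for small matrices)."""
    n = len(A)
    result = [[complex(i == j) for j in range(n)] for i in range(n)]   # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[term[i][j] / k for j in range(n)] for i in range(n)]
        term = matmul(term, A)      # term is now A^k / k!
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# A hypothetical skew-adjoint matrix: conj(A[j][i]) == -A[i][j].
A = [[0.3j, 0.5 + 0.2j],
     [-0.5 + 0.2j, -0.1j]]
U = expm(A)

# U^dagger U should be the identity, confirming that U is unitary.
UdU = matmul([[U[j][i].conjugate() for j in range(2)] for i in range(2)], U)
assert all(abs(UdU[i][j] - (i == j)) < 1e-9 for i in range(2) for j in range(2))
```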

We thus see that, at least in the case of finite-dimensional H, the unitary
representation π of G on H coming from a symmetry G of our physical sys-
tem gives us not just unitary matrices π(g), but also corresponding self-adjoint
operators B on H. Symmetries thus give us quantum mechanical observables,
with the fact that these are self-adjoint linear operators corresponding to the
fact that symmetries are realized as unitary representations on the state space.
In the following chapters we’ll see many examples of this phenomenon. A
fundamental example that we will study in detail is that of time-translation
symmetry. Here the group G = R and we get a unitary representation of
R on the space of states H. The corresponding self-adjoint operator is the
Hamiltonian operator H. This unitary representation gives the dynamics of the
theory, with the Schrödinger equation just the statement that −(i/~)H∆t is the skew-adjoint operator that gets exponentiated to give the unitary transformation that moves states ψ(t) ahead in time by an amount ∆t.

1.5 Symmetry groups and their representations on function spaces
It is conventional to refer to the groups that appear in this subject as “symmetry
groups”, which emphasizes the phenomenon of invariance of properties of objects
under sets of transformations that form a group. This is a bit misleading though,
since we are interested in not just invariance, but the more general phenomenon
of groups acting on sets, according to the following definition:

Definition (Group action on a set). An action of a group G on a set M is given by a map
(g, x) ∈ G × M → g · x ∈ M
such that
g1 · (g2 · x) = (g1 g2 ) · x
and
e·x=x
where e is the identity element of G.

A good example to keep in mind is that of 3-dimensional space M = R3 with the standard inner product. This comes with an action of the group G = R3 on M by translations, and of the group G0 = O(3) of 3-dimensional orthogonal transformations (by rotations about the origin). Note that order matters: we will often be interested in non-commutative groups like G0 where g1 g2 ≠ g2 g1 for some group elements g1 , g2 .
A fundamental principle of modern mathematics is that the way to understand a space M , given as some set of points, is to look at F un(M ), the set of functions on this space. This “linearizes” the problem, since the function space
is a vector space, no matter what the geometrical structure of the original set
is. If our original set has a finite number of elements, the function space will be

a finite dimensional vector space. In general though it will be infinite dimen-
sional and we will need to further specify the space of functions (i.e. continuous
functions, differentiable functions, functions with finite integral, etc.).
Given a group action of G on M , taking complex functions on M provides
a representation (π, F un(M )) of G, with π defined on functions f by

(π(g)f )(x) = f (g −1 · x)

Note the inverse that is needed to get the group homomorphism property to
work since one has

(π(g1 )π(g2 )f )(x) = (π(g2 )f )(g1−1 · x)


= f (g2−1 · (g1−1 · x))
= f ((g2−1 g1−1 ) · x)
= f ((g1 g2 )−1 · x)
= (π(g1 g2 )f )(x)

This calculation would not work out properly for non-commutative G if one
defined (π(g)f )(x) = f (g · x).
One way to construct quantum mechanical state spaces H is as “wavefunc-
tions”, meaning complex-valued functions on space-time. The above shows that
given any group action on space-time, we get a representation π on the state
space H of such wavefunctions.
Note that only in the case of M a finite set of points will we get a finite-dimensional representation this way, since only then will F un(M ) be a finite-dimensional vector space (Cn , with n the number of points in M ). A good example to consider to understand this construction is the following:
• Take M to be a set of 3 elements x1 , x2 , x3 . So F un(M ) = C3 . For
f ∈ F un(M ), f is a vector in C3 , with components (f (x1 ), f (x2 ), f (x3 )).
• Take G = S3 , the group of permutations of 3 elements. This group has
3! = 6 elements.
• Take G to act on M by permuting the 3 elements.

(g, xi ) → g · xi

where i = 1, 2, 3 gives the three elements of M .


• Find the representation matrices π(g) for the representation of G on
F un(M ) as above
(π(g)f )(xi ) = f (g −1 · xi )

This construction gives six 3 by 3 complex matrices, which under multiplication of matrices satisfy the same relations as the elements of the group under group multiplication. In this particular case, all the entries of the matrices will be 0 or 1, but that is special to permutation representations.
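Carrying out the final step explicitly, here is a short sketch (the conventions are my own: a permutation g is stored as a tuple with g · x_i = x_{g[i]}) that builds the six matrices and confirms they multiply like the group elements:

```python
from itertools import permutations

# Representation of S3 on Fun(M) for M = {x_0, x_1, x_2}, via
# (pi(g)f)(x_i) = f(g^{-1} . x_i).  In matrix form this means
# pi(g)[i][j] = 1 exactly when g sends x_j to x_i.
def compose(g1, g2):
    """(g1 g2) . x = g1 . (g2 . x), with g stored so that g . x_i = x_{g[i]}."""
    return tuple(g1[g2[i]] for i in range(3))

def pi(g):
    return [[1 if g[j] == i else 0 for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

reps = {g: pi(g) for g in permutations(range(3))}
assert len(reps) == 6      # six 3x3 matrices, one per group element

# Homomorphism property: pi(g1) pi(g2) = pi(g1 g2) for all pairs.
for g1 in reps:
    for g2 in reps:
        assert matmul(reps[g1], reps[g2]) == reps[compose(g1, g2)]
```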

The discussion here has been just a quick sketch of some of the ideas behind
the material we will cover in later chapters. These ideas will be examined in
much greater detail, beginning with the next two chapters, where they will
appear very concretely when we discuss the simplest possible quantum systems,
those with one and two-complex dimensional state spaces.

1.6 For further reading


We will be approaching the subject of quantum theory from a different direc-
tion than the conventional one, starting with the role of symmetry and with the
simplest possible finite-dimensional quantum systems, systems which are purely
quantum mechanical, with no classical analog. This means that the early dis-
cussion one finds in most physics textbooks is rather different than the one
here. They will generally include the same fundamental principles described
here, but often begin with the theory of motion of a quantized particle, trying
to motivate it from classical mechanics. The state space is then a space of wave-
functions, which is infinite-dimensional and necessarily brings some analytical
difficulties. Quantum mechanics is inherently a quite different conceptual struc-
ture than classical mechanics. The relationship of the two subjects is rather
complicated, but it is clear that quantum mechanics cannot be derived from
classical mechanics, so attempts to motivate it that way are of necessity un-
convincing, although they correspond to the very interesting historical story of
how the subject evolved. We will come to the topic of the quantized motion of
a particle only in chapter 10, at which point it should become much easier to
follow the standard books.
There are many good physics quantum mechanics textbooks available, aimed
at a wide variety of backgrounds, and a reader of this book should look for one
at an appropriate level to supplement the discussions here. One example would
be [57], which is not really an introductory text, but it includes the physicist’s
version of many of the standard calculations we will also be considering. Some
useful textbooks on the subject aimed at mathematicians are [13], [29], [30], [40],
and [64]. The first few chapters of [20] provide an excellent though very concise
summary of both basic physics and quantum mechanics. One important topic
we won’t discuss is that of the application of the representation theory of finite
groups in quantum mechanics. For this as well as a discussion that overlaps
quite a bit with the point of view of this course while emphasizing different
areas, see [59].
For the difficult issue of how measurements work and how classical physics
emerges from quantum theory, an important part of the story is the notion of
“decoherence”. Good places to read about this are Wojciech Zurek’s updated
version of his 1991 Physics Today article [80], as well as his more recent work
on “quantum Darwinism” [81]. There is an excellent book on the subject by
Schlosshauer [52] and for the details of what happens in real experimental setups,
see the book by Haroche and Raimond [31]. For a review of how classical
physics emerges from quantum theory, written from the mathematical point of view,

see Landsman [37]. Finally, to get an idea of the wide variety of points of view
available on the topic of the “interpretation” of quantum mechanics, there’s a
volume of interviews [53] with experts on the topic.

Chapter 2

The Group U (1) and its Representations

The simplest example of a Lie group is the group of rotations of the plane,
with elements parametrized by a single number, the angle of rotation θ. It is
useful to identify such group elements with unit vectors in the complex plane,
given by eiθ . The group is then denoted U (1), since such complex numbers
can be thought of as 1 by 1 unitary matrices. We will see in this chapter how
the general picture described in chapter 1 works out in this simple case. State
spaces will be unitary representations of the group U (1), and we will see that any
such representation decomposes into a sum of one-dimensional representations.
These one-dimensional representations will be characterized by an integer q, and
such integers are the eigenvalues of a self-adjoint operator we will call Q, which
is an observable of the quantum theory.
One motivation for the notation Q is that this is the conventional physics
notation for electric charge, and this is one of the places where a U (1) group
occurs in physics. Examples of U (1) groups acting on physical systems include:
• Quantum particles can be described by a complex-valued “wavefunction”,
and U (1) acts on such wavefunctions by phase transformations of the
value of the function. This phenomenon can be used to understand how
particles interact with electromagnetic fields, and in this case the physical
interpretation of the eigenvalues of the Q operator will be the electric
charge of the particle. We will discuss this in detail in chapter 42.
• If one chooses a particular direction in three-dimensional space, then the
group of rotations about that axis can be identified with the group U (1).
The eigenvalues of Q will have a physical interpretation as the quantum
version of angular momentum in the chosen direction. The fact that such
eigenvalues are not continuous, but integral, shows that quantum angular
momentum has quite different behavior than classical angular momentum.
• When we study the harmonic oscillator we will find that it has a U (1) sym-

metry (rotations in the position-momentum plane), and that the Hamil-
tonian operator is a multiple of the operator Q for this case. This im-
plies that the eigenvalues of the Hamiltonian (which give the energy of
the system) will be integers times some fixed value. When one describes
multi-particle systems in terms of quantum fields one finds a harmonic
oscillator for each momentum mode, and then the Q for that mode counts
the number of particles with that momentum.

We will sometimes refer to the operator Q as a “charge” operator, assigning


a much more general meaning to the term than that of the specific example of
electric charge. U (1) representations are also ubiquitous in mathematics, where
often the integral eigenvalues of the Q operator will be called “weights”.
In a very real sense, the reason for the “quantum” in “quantum mechanics”
is precisely because of the role of U (1) symmetries. Such symmetries imply
observables that characterize states by an integer eigenvalue of an operator Q,
and it is this “quantization” of observables that motivates the name of the
subject.

2.1 Some representation theory


Recall the definition of a group representation:

Definition (Representation). A (complex) representation (π, V ) of a group G on a complex vector space V (with a chosen basis identifying V ' Cn ) is a
homomorphism
π : G → GL(n, C)

This is just a set of n by n matrices, one for each group element, satisfying
the multiplication rules of the group elements. n is called the dimension of the
representation.
The groups G we are interested in will be examples of what mathematicians
call “Lie groups”. For those familiar with differential geometry, such groups
are examples of smooth manifolds. This means one can define derivatives of
functions on G and more generally the derivative of maps between Lie groups.
We will assume that our representations are given by differentiable maps π.
Some difficult general theory shows that considering the more general case of
continuous maps gives nothing new since the homomorphism property of these
maps is highly constraining. In any case, our goal in this course will be to study
quite explicitly certain specific groups and representations which are central in
quantum mechanics, and these representations will always be easily seen to be
differentiable.
Given two representations one can form their direct sum:

Definition (Direct sum representation). Given representations π1 and π2 of dimensions n1 and n2 , one can define another representation, of dimension n1 + n2 , called the direct sum of the two representations, denoted by π1 ⊕ π2 .

This representation is given by the homomorphism

(π1 ⊕ π2 ) : g ∈ G → ( π1 (g)    0
                          0    π2 (g) )

In other words, one just takes as representation matrices block-diagonal matrices with π1 and π2 giving the blocks.
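A minimal sketch of the construction (my own helper; the two one-dimensional representations of the additive group R used as input are hypothetical examples):

```python
import cmath

# Direct sum of two representations: place pi1(g) and pi2(g) as diagonal
# blocks.  Illustrated here (my own example) with two one-dimensional
# representations of the additive group (R, +).
def block_diag(A, B):
    n, m = len(A), len(B)
    top = [row + [0] * m for row in A]
    bottom = [[0] * n + row for row in B]
    return top + bottom

def pi1(t):
    return [[cmath.exp(1j * t)]]      # 1-dim representation t -> e^{it}

def pi2(t):
    return [[cmath.exp(2j * t)]]      # 1-dim representation t -> e^{2it}

def direct_sum(t):
    return block_diag(pi1(t), pi2(t))

M = direct_sum(0.5)
assert M[0][1] == 0 and M[1][0] == 0  # off-diagonal blocks vanish

# The homomorphism property survives: (pi1 + pi2)(s)(pi1 + pi2)(t) = (pi1 + pi2)(s + t)
s, t = 0.3, 0.5
prod = [[sum(direct_sum(s)[i][k] * direct_sum(t)[k][j] for k in range(2))
         for j in range(2)] for i in range(2)]
assert all(abs(prod[i][j] - direct_sum(s + t)[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```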
To understand the representations of a group G, one proceeds by first iden-
tifying the irreducible ones, those that cannot be decomposed into two repre-
sentations of lower dimension:
Definition (Irreducible representation). A representation π is called irreducible
if it is not of the form π1 ⊕π2 , for π1 and π2 representations of dimension greater
than zero.
This criterion is not so easy to check, and the decomposition of an arbitrary
reducible representation into irreducible components can be a very non-trivial
problem. Recall that one gets explicit matrices for the π(g) of a representation
(π, V ) only when a basis for V is chosen. To see if the representation is reducible,
one can’t just look to see if the π(g) are all in block-diagonal form. One needs
to find out whether there is some basis for V with respect to which they are
all in such form, something very non-obvious from just looking at the matrices
themselves.
Digression. Another approach to this would be to check to see if the represen-
tation has no proper non-trivial sub-representations (subspaces of V preserved
by the π(g)). This is not necessarily equivalent to our definition of irreducibil-
ity (which is often called “indecomposability”), since a sub-representation may
have no complement that is also a sub-representation. A simple example of
this occurs for the action of upper triangular matrices on column vectors. Such
representations are however non-unitary. In the unitary case indecomposability
and irreducibility are equivalent. In these notes unless otherwise specified, one
should assume that all representations are unitary, so the distinction between
irreducibility and indecomposability will generally not arise.
The following theorem provides a criterion for determining if a representation
is irreducible or not:
Theorem (Schur’s lemma). If a complex representation (π, V ) is irreducible,
then the only linear maps M : V → V commuting with all the π(g) are λ1,
multiplication by a scalar λ ∈ C.
Proof. Since we are working over the field C (this doesn’t work for R), we can
always solve the eigenvalue equation

det(M − λ1) = 0

to find the eigenvalues λ of M . The eigenspaces

Vλ = {v ∈ V : M v = λv}

are non-zero vector subspaces of V and can also be described as ker(M − λ1),
the kernel of the operator M −λ1. Since this operator and all the π(g) commute,
we have
v ∈ ker(M − λ1) =⇒ π(g)v ∈ ker(M − λ1)

so ker(M − λ1) ⊂ V is a representation of G. If V is irreducible, we must have either ker(M − λ1) = V or ker(M − λ1) = 0. Since λ is an eigenvalue, ker(M − λ1) ≠ 0, so ker(M − λ1) = V and thus M = λ1 as a linear operator
on V .

More concretely, Schur’s lemma says that for an irreducible representation, if a matrix M commutes with all the representation matrices π(g), then M must be a scalar multiple of the unit matrix.
Note that the proof crucially uses the fact that one can solve the eigenvalue
equation. This will only be true in general if one works with C and thus with
complex representations. For the theory of representations on real vector spaces,
Schur’s lemma is no longer true.
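The contrapositive gives a practical reducibility test: exhibit a non-scalar matrix commuting with every π(g). For the 3-dimensional permutation representation of S3 from section 1.5, the all-ones matrix works, as the following check (my own) shows:

```python
from itertools import permutations

# Illustration (my own) with the 3-dimensional permutation representation
# of S3 from section 1.5: the all-ones matrix J commutes with every pi(g)
# but is not a scalar multiple of the identity, so by Schur's lemma this
# representation cannot be irreducible.
def pi(g):
    """Permutation matrix: pi(g)[i][j] = 1 when g sends x_j to x_i."""
    return [[1 if g[j] == i else 0 for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

J = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # non-scalar, yet commutes with all pi(g)
assert all(matmul(pi(g), J) == matmul(J, pi(g)) for g in permutations(range(3)))
```

(Indeed, this representation decomposes into the one-dimensional trivial representation, spanned by the constant functions, and a two-dimensional complement.)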
An important corollary of Schur’s lemma is the following characterization of
irreducible representations of G when G is commutative.

Theorem. If G is commutative, all of its irreducible representations are one-dimensional.

Proof. For G commutative and any g ∈ G, a representation will satisfy

π(g)π(h) = π(h)π(g)

for all h ∈ G. If π is irreducible, Schur’s lemma implies that, since they commute
with all the π(g), the matrices π(h) are all scalar matrices, i.e. π(h) = λh 1 for
some λh ∈ C. But then every subspace of V is preserved by all the π(h), so π can only be irreducible if V is one-dimensional, given by π(h) = λh .

2.2 The group U (1) and its representations


One can think of the group U (1) as the unit circle, with the multiplication rule
on its points given by addition of angles. More explicitly:

Definition (The group U (1)). The elements of the group U (1) are points on
the unit circle, which can be labeled by the unit complex number eiθ , for θ ∈ R.
Note that θ and θ +N 2π label the same group element for N ∈ Z. Multiplication
of group elements is just complex multiplication, which by the properties of the
exponential satisfies
eiθ1 eiθ2 = ei(θ1 +θ2 )

so in terms of angles the group law is just addition (mod 2π).

By our theorem from the last section, since U (1) is a commutative group,
all irreducible representations will be one-dimensional. Such an irreducible rep-
resentation will be given by a map

π : U (1) → GL(1, C)

but an invertible 1 by 1 matrix is just an invertible complex number, and we will denote the group of these as C∗ . We will always assume that our representations
are given by differentiable maps, since we will often want to study them in terms
of their derivatives. A differentiable map π that is a representation of U (1) must
satisfy homomorphism and periodicity properties which can be used to show:

Theorem 2.1. All irreducible representations of the group U (1) are unitary,
and given by

πk : θ ∈ U (1) → πk (θ) = eikθ ∈ U (1) ⊂ GL(1, C) ' C∗

for k ∈ Z.

Proof. The given πk satisfy the homomorphism property

πk (θ1 + θ2 ) = πk (θ1 )πk (θ2 )

and periodicity property


πk (2π) = πk (0) = 1
We just need to show that any differentiable map

f : U (1) → C∗

satisfying the homomorphism and periodicity properties is of this form. Computing the derivative f ′ (θ) = df /dθ, we find

f ′ (θ) = lim∆θ→0 (f (θ + ∆θ) − f (θ))/∆θ
        = f (θ) lim∆θ→0 (f (∆θ) − 1)/∆θ    (using the homomorphism property)
        = f (θ)f ′ (0)

Denoting the constant f ′ (0) by C, the only solutions to this differential equation satisfying f (0) = 1 are

f (θ) = eCθ
Requiring periodicity we find

f(2π) = e^{2πC} = f(0) = 1

which implies C = ik for k ∈ Z, and f = πk for some integer k.

The representations we have found are all unitary, with πk taking values not
just in C∗ , but in U (1) ⊂ C∗ . One can check that the complex numbers eikθ
satisfy the condition to be a unitary 1 by 1 matrix, since

(e^{ikθ})^{−1} = e^{−ikθ} = \overline{e^{ikθ}}

These representations are restrictions to the unit circle U (1) of the irre-
ducible representations of the group C∗ , which are given by

πk : z ∈ C∗ → πk (z) = z k ∈ C∗

Such representations are not unitary, but they have an extremely simple form,
so it sometimes is convenient to work with them, later restricting to the unit
circle, where the representation is unitary.
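Since the π_k are just complex exponentials, the homomorphism, periodicity, and unitarity properties in the theorem can be checked numerically in a few lines of Python (an illustration, not part of the proof):

```python
import cmath
import math

def pi_k(k, theta):
    # the irreducible representation pi_k of U(1), a 1 by 1 matrix,
    # i.e. just the complex number e^{i k theta}
    return cmath.exp(1j * k * theta)

k, t1, t2 = 3, 0.7, 1.9
# homomorphism property: pi_k(t1 + t2) = pi_k(t1) pi_k(t2)
assert abs(pi_k(k, t1 + t2) - pi_k(k, t1) * pi_k(k, t2)) < 1e-12
# periodicity: pi_k(2 pi) = pi_k(0) = 1 requires k to be an integer
assert abs(pi_k(k, 2 * math.pi) - 1) < 1e-12
# unitarity: |pi_k(theta)| = 1, so the inverse is the complex conjugate
assert abs(abs(pi_k(k, t1)) - 1) < 1e-12
```

The periodicity check is exactly what fails for non-integer k, which is why the irreducible representations are labeled by Z.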
Digression (Fourier analysis of periodic functions). We’ll discuss Fourier anal-
ysis more seriously in chapter 10 when we come to the case of the translation
groups and of state-spaces that are spaces of “wavefunctions” on space-time. For
now though, it might be worth pointing out an important example of a repre-
sentation of U (1): the space F un(S 1 ) of complex-valued functions on the circle
S 1 . We will evade discussion here of the very non-trivial analysis involved, by
not specifying what class of functions we are talking about (e.g. continuous,
integrable, differentiable, etc.). Periodic functions can be studied by rescaling
the period to 2π, thus looking at complex-valued functions of a real variable φ
satisfying
f (φ + N 2π) = f (φ)
for integer N , which we can think of as functions on a circle, parametrized by
angle φ. We have an action of the group U (1) on the circle by rotation, with
the group element eiθ acting as:

φ→φ+θ

where φ is the angle parametrizing the circle S 1 .


In chapter 1 we saw that given an action of a group on a space X, we can
“linearize” and get a representation (π, F un(X)) of the group on the functions
on the space, by taking
(π(g)f )(x) = f (g −1 · x)
for f ∈ F un(X), x ∈ X. Here X = S 1 , the action is the rotation action and we
find
(π(θ)f )(φ) = f (φ − θ)
since the inverse of a rotation by θ is a rotation by −θ.
This representation (π, F un(S 1 )) is infinite-dimensional, but one can still
ask how it decomposes into the one-dimensional irreducible representations (πk , C)
of U (1). What we learn from the subject of Fourier analysis is that each (πk , C)
occurs exactly once in the decomposition of F un(S 1 ) into irreducibles, i.e.
(π, Fun(S^1)) = \widehat{\bigoplus}_{k∈Z} (π_k, C)

where we have matched the sin of not specifying the class of functions in Fun(S^1)
on the left-hand side with the sin of not explaining how to handle the infinite
direct sum \widehat{\bigoplus} on the right-hand side. What can be specified precisely is how
the irreducible sub-representation (πk , C) sits inside F un(S 1 ). It is the set of
functions f satisfying

(π(θ)f )(φ) = f (φ − θ) = eikθ f (φ)

so explicitly given by the one-complex dimensional space of functions propor-


tional to e−ikφ .
One part of the relevance of representation theory to Fourier analysis is
that the representation theory of U (1) picks out a distinguished basis of the
infinite-dimensional space of periodic functions by using the decomposition of
the function space into irreducible representations. One can then effectively
study functions by expanding them in terms of their components in this special
basis, writing an f ∈ F un(S 1 ) as
f(φ) = \sum_{k∈Z} c_k e^{ikφ}

for some complex coefficients ck , with analytical difficulties then appearing as


questions about the convergence of this series.
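As a concrete numerical illustration, one can recover the coefficients c_k of a trigonometric polynomial by approximating the integral c_k = (1/2π)∫₀^{2π} f(φ)e^{−ikφ} dφ with a Riemann sum; the sample function below is an arbitrary choice, not from the text:

```python
import cmath
import math

def fourier_coeff(f, k, samples=1024):
    # Riemann-sum approximation of c_k = (1/2pi) * integral_0^{2pi} f(phi) e^{-ik phi} dphi;
    # for trigonometric polynomials the equispaced sum is exact up to rounding
    h = 2 * math.pi / samples
    total = sum(f(n * h) * cmath.exp(-1j * k * n * h) for n in range(samples))
    return total * h / (2 * math.pi)

# an example trigonometric polynomial: f(phi) = 2 e^{i phi} + 0.5 e^{-3i phi}
f = lambda phi: 2 * cmath.exp(1j * phi) + 0.5 * cmath.exp(-3j * phi)

assert abs(fourier_coeff(f, 1) - 2) < 1e-9     # c_1 = 2
assert abs(fourier_coeff(f, -3) - 0.5) < 1e-9  # c_{-3} = 0.5
assert abs(fourier_coeff(f, 0)) < 1e-9         # all other components vanish
```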

2.3 The charge operator


Recall from chapter 1 the claim of a general principle that, since the state space
H is a unitary representation of a Lie group, we get an associated self-adjoint
operator on H. We’ll now illustrate this for the simple case of G = U (1). For H
irreducible, the representation is one-dimensional, of the form (πq , C) for some
q ∈ Z, and the self-adjoint operator will just be multiplication by the integer q.
In general, we have
H = Hq1 ⊕ Hq2 ⊕ · · · ⊕ Hqn
for some set of integers q1 , q2 , . . . , qn (n is the dimension of H, the qi may not
be distinct) and can define:

Definition. The charge operator Q for the U (1) representation (π, H) is the
self-adjoint linear operator on H that acts by multiplication by qj on the irre-
ducible representation Hqj . Taking basis elements in Hqj it acts on H as the
matrix

\begin{pmatrix} q_1 & 0 & \cdots & 0 \\ 0 & q_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & q_n \end{pmatrix}

Q is our first example of a quantum mechanical observable, a self-adjoint


operator on H. States in the subspaces Hqj will be eigenvectors for Q and will

have a well-defined numerical value for this observable, the integer qj . A general
state will be a linear superposition of state vectors from different Hqj and there
will not be a well-defined numerical value for the observable Q on such a state.
From the action of Q on H, one can recover the representation, i.e. the action
of the symmetry group U (1) on H, by multiplying by i and exponentiating, to
get
π(θ) = e^{iQθ} = \begin{pmatrix} e^{iq_1θ} & 0 & \cdots & 0 \\ 0 & e^{iq_2θ} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{iq_nθ} \end{pmatrix} ∈ U(n) ⊂ GL(n, C)

The standard physics terminology is that “Q generates the U (1) symmetry


transformations”.
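Since Q is diagonal, the exponentiation can be carried out entry by entry; a short Python sketch (the charges q_j chosen here are arbitrary, not from the text) confirms that e^{iQθ} satisfies the homomorphism and unitarity properties:

```python
import cmath

qs = [1, -1, 2]   # hypothetical charges q_j for a 3-dimensional H

def rep(theta):
    # pi(theta) = e^{i Q theta}: Q is diagonal, so we keep just the diagonal entries
    return [cmath.exp(1j * q * theta) for q in qs]

t1, t2 = 0.3, 1.1
# homomorphism property: pi(t1) pi(t2) = pi(t1 + t2), entry by entry
assert all(abs(x * y - z) < 1e-12 for x, y, z in zip(rep(t1), rep(t2), rep(t1 + t2)))
# unitarity: every diagonal entry has modulus 1, so pi(theta) is in U(n)
assert all(abs(abs(z) - 1) < 1e-12 for z in rep(t1))
```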
The general abstract mathematical point of view (which we will discuss in
more detail later) is that the representation π is a map between manifolds,
from the Lie group U (1) to the Lie group GL(n, C) that takes the identity of
U (1) to the identity of GL(n, C). As such it has a differential π′, which is a
map from the tangent space at the identity of U (1) (which here is iR) to the
tangent space at the identity of GL(n, C) (which is the space M (n, C) of n by
n complex matrices). The tangent space at the identity of a Lie group is called
a “Lie algebra”. We will later study these in detail in many examples to come,
including their role in specifying a representation.
Here the relation between the differential of π and the operator Q is

π′ : iθ ∈ iR → π′(iθ) = iQθ

One can sketch the situation like this:

[Figure: the representation π as a map from the circle U(1) into GL(n, C), taking identity to identity, with derivative π′ = iQ at the identity]
The right-hand side of the picture is supposed to somehow represent GL(n, C),
which is the 2n2 dimensional real vector space of n by n complex matrices, mi-
nus the locus of matrices with zero determinant, which are those that can’t be
inverted. It has a distinguished point, the identity. The derivative π 0 of the
representation map π is the linear operator iQ.
In this very simple example, this abstract picture is over-kill and likely con-
fusing. We will see the same picture though occurring in many other examples
in later chapters, examples where the abstract technology is increasingly useful.
Keep in mind that, just like in this U (1) case, the maps π will just be exponen-
tial maps in the examples we care about, with very concrete incarnations given
by exponentiating matrices.

2.4 Conservation of charge and U (1) symmetry


The way we have defined observable operators in terms of a group representation
on H, the action of these operators has nothing to do with the dynamics. If
we start at time t = 0 in a state in Hqj , with definite numerical value qj for
the observable, there is no reason that time evolution should preserve this.
Recall from one of our basic axioms that time evolution of states is given by the
Schrödinger equation
\frac{d}{dt}|ψ(t)⟩ = −iH|ψ(t)⟩
(we have set ~ = 1). We will later more carefully study the relation of this
equation to the symmetry of time translation (basically the Hamiltonian op-
erator H generates an action of the group R of time translations, just as the
operator Q generates an action of the group U (1)). For now though, note that
for time-independent Hamiltonian operators H, the solution to this equation is
given by exponentiating H, with

|ψ(t)i = U (t)|ψ(0)i

for
U (t) = e−itH
The commutator of two operators O1 , O2 is defined by

[O1 , O2 ] := O1 O2 − O2 O1

and such operators are said to commute if [O1 , O2 ] = 0. If the Hamiltonian


operator H and the charge operator Q commute then Q will also commute with
all powers of H
[H k , Q] = 0
and thus with the exponential of H, so

[U (t), Q] = 0

This condition
U (t)Q = QU (t) (2.1)
implies that if a state has a well-defined value qj for the observable Q at time
t = 0, it will continue to have the same value at any other time t, since

Q|ψ(t)i = QU (t)|ψ(0)i = U (t)Q|ψ(0)i = U (t)qj |ψ(0)i = qj |ψ(t)i

This will be a general phenomenon: if an observable commutes with the Hamil-


tonian observable, we get a conservation law. This conservation law says that
if we start in a state with a well-defined numerical value for the observable (an
eigenvector for the observable operator), we will remain in such a state, with
the value not changing, i.e. “conserved”.
In this situation, the group U (1) is said to act as a “symmetry group” of the
system. Equation 2.1 implies that

U (t)eiQθ = eiQθ U (t)

so the action of the U (1) group on the state space of the system commutes with
the time evolution law determined by the choice of Hamiltonian. We see that
this notion of symmetry implies a corresponding conservation law.
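A minimal numerical illustration of this conservation law, using hypothetical values for the energies and charges, and working in a basis where both H and Q are diagonal (so that they automatically commute):

```python
import cmath

# In a basis where both are diagonal, H and Q commute automatically.
energies = [1.5, -0.2]   # hypothetical eigenvalues of H on H_{q_1}, H_{q_2}
charges = [3, -1]        # hypothetical charges q_1, q_2

def U(t):
    # time evolution U(t) = e^{-itH}, diagonal entries only
    return [cmath.exp(-1j * e * t) for e in energies]

psi0 = [1, 0]            # an eigenvector of Q, with charge q_1 = 3
psit = [u * c for u, c in zip(U(2.7), psi0)]

# Q psi(t) = q_1 psi(t): the state is still a charge-3 eigenvector at time t,
# so the value of the observable Q is conserved
Qpsit = [q * c for q, c in zip(charges, psit)]
assert all(abs(x - 3 * c) < 1e-12 for x, c in zip(Qpsit, psit))
```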

2.5 Summary
To summarize the situation for G = U (1), we have found
• Irreducible representations π are one-dimensional and characterized by
their derivative π′ at the identity. If G = R, π′ could be any complex
number. If G = U (1), periodicity requires that π′ must be iq, q ∈ Z, so
irreducible representations are labeled by an integer.
• An arbitrary representation π of U (1) is of the form

π(eiθ ) = eiθQ

where Q is a matrix with eigenvalues a set of integers qj . For a quan-


tum system, Q is the self-adjoint observable corresponding to the U (1)
symmetry of the system, and is said to be a “generator” of the symmetry.
• If [Q, H] = 0, the U (1) group acts on the state space as “symmetries”. In
this case the qj will be “conserved quantities”, numbers that characterize
the quantum states, and do not change as the states evolve in time.

2.6 For further reading


I’ve had trouble finding another source that covers the material here. Most
quantum mechanics books consider it somehow too trivial to mention, starting
their discussion of symmetries with more complicated examples.

Chapter 3

Two-state Systems and


SU (2)

The simplest truly non-trivial quantum systems have state spaces that are in-
herently two-complex dimensional. This provides a great deal more structure
than that seen in chapter 2, which could be analyzed by breaking up the space
of states into one-dimensional subspaces of given charge. We’ll study these two-
state systems in this section, encountering for the first time the implications of
working with representations of non-commutative groups. Since they give the
simplest non-trivial realization of many quantum phenomena, such systems are
the fundamental objects of quantum information theory (the “qubit”) and the
focus of attempts to build a quantum computer (which would be built out of
multiple copies of this sort of fundamental object). Many different possible two-
state quantum systems could potentially be used as the physical implementation
of a qubit.
One of the simplest possibilities to take would be the idealized situation
of a single electron, somehow fixed so that its spatial motion could be ignored,
leaving its quantum state described just by its so-called “spin degree of freedom”,
which takes values in H = C2 . The term “spin” is supposed to call to mind
the angular momentum of an object spinning about some axis, but such
classical physics has nothing to do with the qubit, which is a purely quantum
system.
In this chapter we will analyze what happens for general quantum systems
with H = C2 by first finding the possible observables. Exponentiating these
will give the group U (2) of unitary 2 by 2 matrices acting on H = C2 . This
is a specific representation of U (2), the “defining” representation. By restrict-
ing to the subgroup SU (2) ⊂ U (2) of elements of determinant one, we get a
representation of SU (2) on C2 often called the “spin 1/2” representation.
Later on, in chapter 8, we will find all the irreducible representations of
SU (2). These are labeled by a natural number
N = 0, 1, 2, 3, . . .

and have dimension N + 1. The corresponding quantum systems are said to have
“spin N/2”. The case N = 0 is the trivial representation on C and the case
N = 1 is the case of this chapter. In the limit N → ∞ one can make contact
with classical notions of spinning objects and angular momentum, but the spin
1/2 case is at the other limit, where the behavior is purely quantum-mechanical.

3.1 The two-state quantum system


3.1.1 The Pauli matrices: observables of the two-state
quantum system
For a quantum system with two-dimensional state space H = C2 , observables
are self-adjoint linear operators on C2 . With respect to a chosen basis of C2 ,
these are 2 by 2 complex matrices M satisfying the condition M = M † (M † is the
conjugate transpose of M ). Any such matrix will be a (real) linear combination
of four matrices:
M = c0 1 + c1 σ1 + c2 σ2 + c3 σ3
with c_j ∈ R and the standard choice of basis elements given by

1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},  σ_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},  σ_2 = \begin{pmatrix} 0 & −i \\ i & 0 \end{pmatrix},  σ_3 = \begin{pmatrix} 1 & 0 \\ 0 & −1 \end{pmatrix}

The σj are called the “Pauli matrices” and are a pretty universal choice of basis
in this subject. This choice of basis is a convention, with one aspect of this
convention that of taking the basis element in the 3-direction to be diagonal.
In common physical situations and conventions, the third direction is the dis-
tinguished “up-down” direction in space, so often chosen when a distinguished
direction in R3 is needed.
Recall that the basic principle of how measurements are supposed to work
in quantum theory says that the only states that have well-defined values for
these four observables are the eigenvectors for these matrices. The first matrix
gives a trivial observable (the identity on every state), whereas the last one, σ3 ,
has the two eigenvectors

σ_3\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}  and  σ_3\begin{pmatrix} 0 \\ 1 \end{pmatrix} = −\begin{pmatrix} 0 \\ 1 \end{pmatrix}
with eigenvalues +1 and −1. In quantum information theory, where this is
the qubit system, these two eigenstates are labeled |0i and |1i because of the
analogy with a classical bit of information. Later on when we get to the theory of
spin, we will see that \frac{1}{2}σ_3 is the observable corresponding to the SO(2) = U(1)
symmetry group of rotations about the third spatial axis, and the eigenvalues
−1/2, +1/2 of this operator will be used to label the two eigenstates

|+1/2⟩ = \begin{pmatrix} 1 \\ 0 \end{pmatrix}  and  |−1/2⟩ = \begin{pmatrix} 0 \\ 1 \end{pmatrix}

The two eigenstates |+1/2⟩ and |−1/2⟩ provide a basis for C^2, so an arbitrary
vector in H can be written as

|ψ⟩ = α|+1/2⟩ + β|−1/2⟩

for α, β ∈ C. Only if α or β is 0 does the observable σ_3 correspond to a well-defined
number that characterizes the state and can be measured. This will be
either +1/2 (if β = 0, so the state is the eigenvector |+1/2⟩), or −1/2 (if α = 0, so the
state is the eigenvector |−1/2⟩).
An easy to check fact is that |+1/2⟩ and |−1/2⟩ are NOT eigenvectors for the
operators σ1 and σ2 . One can also check that no pair of the three σj commute,
which implies that one cannot find vectors that are simultaneous eigenvectors for
more than one σj . This non-commutativity of the operators is responsible for the
characteristic classically paradoxical property of quantum observables: one can
find states with a well defined number for the measured value of one observable
σj , but such states will not have a well-defined number for the measured value
of the other two non-commuting observables. The physical description of this
phenomenon in the realization of this system as a spin 1/2 particle is that if one
prepares states with a well-defined spin component in the j-direction, the two
other components of the spin can’t be assigned a numerical value in such a
state. Any attempt to prepare states that simultaneously have specific chosen
numerical values for the 3 observables corresponding to the σj is doomed. So is
any attempt to simultaneously measure such values: if one measures the value
for a particular observable σj , then going on to measure one of the other two
will ensure that the first measurement is no longer valid (repeating it will not
necessarily give the same thing). There are many subtleties in the theory of
measurement for quantum systems, but this simple two-state example already
shows some of the main features of how the behavior of observables is quite
different than in classical physics.
The choice we have made for the σj corresponds to a choice of basis for H
such that the basis vectors are eigenvectors of σ3 . σ1 and σ2 take these basis
vectors to non-trivial linear combinations of basis vectors. It turns out that
there are two specific linear combinations of σ1 and σ2 that do something very
simple to the basis vectors, since

(σ_1 + iσ_2) = \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix}  and  (σ_1 − iσ_2) = \begin{pmatrix} 0 & 0 \\ 2 & 0 \end{pmatrix}

we have

(σ_1 + iσ_2)\begin{pmatrix} 0 \\ 1 \end{pmatrix} = 2\begin{pmatrix} 1 \\ 0 \end{pmatrix},  (σ_1 + iσ_2)\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}

and

(σ_1 − iσ_2)\begin{pmatrix} 1 \\ 0 \end{pmatrix} = 2\begin{pmatrix} 0 \\ 1 \end{pmatrix},  (σ_1 − iσ_2)\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}

(σ1 + iσ2 ) is called a “raising operator”: on eigenvectors of σ3 it either


increases the eigenvalue by 2, or annihilates the vector. (σ1 − iσ2 ) is called
a “lowering operator”: on eigenvectors of σ3 it either decreases the eigenvalue
by 2, or annihilates the vector. Note that these linear combinations are not
self-adjoint, (σ1 + iσ2 ) is the adjoint of (σ1 − iσ2 ) and vice-versa.

3.1.2 Exponentials of Pauli matrices: unitary transforma-


tions of the two-state system
We saw in chapter 2 that in the U (1) case, knowing the observable operator Q on
H determined the representation of U (1), with the representation matrices found
by exponentiating iθQ. Here we will find the representation corresponding to
the two-state system observables by exponentiating the observables in a similar
way.
Taking the identity matrix first, multiplication by iθ and exponentiation
gives the diagonal unitary matrix

e^{iθ1} = \begin{pmatrix} e^{iθ} & 0 \\ 0 & e^{iθ} \end{pmatrix}

This is just exactly the case studied in chapter 2, for a U(1) group acting on
H = C^2, with

Q = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
This matrix commutes with any other 2 by 2 matrix, so we can treat its action
on H independently of the action of the σj .
Turning to the other three basis elements of the space of observables, the
Pauli matrices, it turns out that since all the σj satisfy σj2 = 1, their exponentials
also take a simple form.

e^{iθσ_j} = 1 + iθσ_j + \frac{1}{2}(iθ)^2σ_j^2 + \frac{1}{3!}(iθ)^3σ_j^3 + · · ·
          = 1 + iθσ_j − \frac{1}{2}θ^2 1 − i\frac{1}{3!}θ^3σ_j + · · ·
          = (1 − \frac{1}{2!}θ^2 + · · ·)1 + i(θ − \frac{1}{3!}θ^3 + · · ·)σ_j
          = (cos θ)1 + iσ_j(sin θ)     (3.1)

As θ goes from θ = 0 to θ = 2π, this exponential traces out a circle in the


space of unitary 2 by 2 matrices, starting and ending at the unit matrix. This
circle is a group, isomorphic to U (1). So, we have found three different U (1)

subgroups inside the unitary 2 by 2 matrices, but only one of them (the case
j = 3) will act diagonally on H, with the U (1) representation determined by

Q = \begin{pmatrix} 1 & 0 \\ 0 & −1 \end{pmatrix}

For the other two cases j = 1 and j = 2, by a change of basis one could put
either one in the same diagonal form, but doing this for one value of j makes
the other two no longer diagonal. All three values of j need to be treated
simultaneously, and one needs to consider not just the U (1)s but the group one
gets by exponentiating general linear combinations of Pauli matrices.
To compute such exponentials, one can check that these matrices satisfy the
following relations, useful in general for doing calculations with them instead of
multiplying out explicitly the 2 by 2 matrices:

[σj , σk ]+ = σj σk + σk σj = 2δjk 1

Here [·, ·]+ is the anticommutator. This relation says that all σj satisfy σj2 = 1
and distinct σj anticommute (e.g. σj σk = −σk σj for j 6= k).
Notice that the anticommutation relations imply that, if we take a vector
v = (v_1, v_2, v_3) ∈ R^3 and define a 2 by 2 matrix by

v · σ = v_1σ_1 + v_2σ_2 + v_3σ_3 = \begin{pmatrix} v_3 & v_1 − iv_2 \\ v_1 + iv_2 & −v_3 \end{pmatrix}

then taking powers of this matrix we find

(v · σ)^2 = (v_1^2 + v_2^2 + v_3^2)1 = |v|^2 1

If v is a unit vector, we have


(v · σ)^n = \begin{cases} 1 & n \text{ even} \\ v · σ & n \text{ odd} \end{cases}

Replacing σj by v · σ, the same calculation as for equation 3.1 gives (for v


a unit vector)
eiθv·σ = (cos θ)1 + i(sin θ)v · σ
Notice that one can easily compute the inverse of this matrix:

(eiθv·σ )−1 = (cos θ)1 − i(sin θ)v · σ

since

((cos θ)1 + i(sin θ)v · σ)((cos θ)1 − i(sin θ)v · σ) = (cos2 θ + sin2 θ)1 = 1

We’ll review linear algebra and the notion of a unitary matrix in chapter 4, but
one form of the condition for a matrix M to be unitary is

M † = M −1

so the self-adjointness of the σj implies unitarity of eiθv·σ since

(e^{iθv·σ})^† = ((cos θ)1 + i(sin θ)v · σ)^†
            = (cos θ)1 − i(sin θ)(v · σ)^†
            = (cos θ)1 − i(sin θ)v · σ
            = (e^{iθv·σ})^{−1}

One can also easily compute the determinant of eiθv·σ , finding

det(e^{iθv·σ}) = det((cos θ)1 + i(sin θ)v · σ)
             = det\begin{pmatrix} cos θ + i sin θ\, v_3 & i sin θ(v_1 − iv_2) \\ i sin θ(v_1 + iv_2) & cos θ − i sin θ\, v_3 \end{pmatrix}
             = cos^2 θ + sin^2 θ(v_1^2 + v_2^2 + v_3^2)
             = 1

So, we see that by exponentiating i times linear combinations of the self-


adjoint Pauli matrices (which all have trace zero), we get unitary matrices of
determinant one. These are invertible, and form the group named SU (2), the
group of unitary 2 by 2 matrices of determinant one. If we exponentiated not
just iθv · σ, but i(φ1 + θv · σ) for some real constant φ (such matrices will not
have trace zero unless φ = 0), we would get a unitary matrix with determinant
ei2φ . The group of unitary 2 by 2 matrices with arbitrary determinant is called
U (2). It contains as subgroups SU (2) as well as the U (1) described at the
beginning of this section. U (2) is slightly different than the product of these
two subgroups, since the group element

\begin{pmatrix} −1 & 0 \\ 0 & −1 \end{pmatrix}
is in both subgroups. In our review of linear algebra to come we will encounter
the generalization to SU (n) and U (n), groups of unitary n by n complex ma-
trices.
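One can confirm both the closed form for the exponential and the determinant computation numerically, comparing against a truncated power series; the unit vector v and the angle θ below are arbitrary choices:

```python
import cmath
import math

I2 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

def mul(A, B):
    # 2 by 2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scal(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def expm(M, terms=40):
    # power-series exponential: sum of M^n / n!, truncated
    result, term = I2, I2
    for n in range(1, terms):
        term = scal(1 / n, mul(term, M))
        result = add(result, term)
    return result

v = (0.6, 0.0, 0.8)   # a unit vector in R^3
theta = 0.7
vsigma = add(add(scal(v[0], s1), scal(v[1], s2)), scal(v[2], s3))

# closed form (cos theta) 1 + i (sin theta) v.sigma against the series for e^{i theta v.sigma}
closed = add(scal(math.cos(theta), I2), scal(1j * math.sin(theta), vsigma))
series = expm(scal(1j * theta, vsigma))
assert all(abs(closed[i][j] - series[i][j]) < 1e-10 for i in range(2) for j in range(2))

# determinant one: the exponential lands in SU(2)
det = closed[0][0] * closed[1][1] - closed[0][1] * closed[1][0]
assert abs(det - 1) < 1e-10
```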
To get some more insight into the structure of the group SU (2), consider an
arbitrary 2 by 2 complex matrix

\begin{pmatrix} α & β \\ γ & δ \end{pmatrix}

Unitarity implies that the rows are orthonormal. One can see this explicitly
from the condition that the matrix times its conjugate-transpose is the identity

\begin{pmatrix} α & β \\ γ & δ \end{pmatrix} \begin{pmatrix} \bar{α} & \bar{γ} \\ \bar{β} & \bar{δ} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}

Orthogonality of the two rows gives the relation

γ\bar{α} + δ\bar{β} = 0 =⇒ δ = −\frac{γ\bar{α}}{\bar{β}}

The condition that the first row has length one gives

α\bar{α} + β\bar{β} = |α|^2 + |β|^2 = 1

Using these two relations and computing the determinant (which has to be 1)
gives

αδ − βγ = −\frac{α\bar{α}γ}{\bar{β}} − βγ = −\frac{γ}{\bar{β}}(α\bar{α} + β\bar{β}) = −\frac{γ}{\bar{β}} = 1

so one must have

γ = −\bar{β},  δ = \bar{α}

and an SU(2) matrix will have the form

\begin{pmatrix} α & β \\ −\bar{β} & \bar{α} \end{pmatrix}

where (α, β) ∈ C^2 and

|α|^2 + |β|^2 = 1
So, the elements of SU (2) are parametrized by two complex numbers, with
the sum of their length-squareds equal to one. Identifying C2 = R4 , these are
just vectors of length one in R^4. Just as U(1) could be identified with
the unit circle S^1 in C = R^2, SU(2) can be identified with the unit three-sphere
S 3 in R4 .

3.2 Commutation relations for Pauli matrices


An important set of relations satisfied by Pauli matrices are their commutation
relations:
[σ_j, σ_k] = σ_jσ_k − σ_kσ_j = 2i \sum_{l=1}^{3} ε_{jkl}σ_l

where ε_{jkl} satisfies ε_{123} = 1, is antisymmetric under permutation of two of its
subscripts, and vanishes if two of the subscripts take the same value. More
explicitly, this says:

[σ_1, σ_2] = 2iσ_3,  [σ_2, σ_3] = 2iσ_1,  [σ_3, σ_1] = 2iσ_2
One can easily check these relations by explicitly computing with the matrices.
Putting together the anticommutation and commutation relations, one gets a
formula for the product of two Pauli matrices:
σ_jσ_k = δ_{jk}1 + i \sum_{l=1}^{3} ε_{jkl}σ_l

While physicists prefer to work with self-adjoint Pauli matrices and their
real eigenvalues, one can work instead with the following skew-adjoint matrices
X_j = −i\frac{σ_j}{2}

which satisfy the slightly simpler commutation relations

[X_j, X_k] = \sum_{l=1}^{3} ε_{jkl}X_l

or more explicitly

[X1 , X2 ] = X3 , [X2 , X3 ] = X1 , [X3 , X1 ] = X2

If these commutators were zero, the SU (2) elements one gets by exponentiat-
ing linear combinations of the Xj would be commuting group elements. The
non-triviality of the commutators reflects the non-commutativity of the group.
Group elements U ∈ SU (2) near the identity satisfy

U ≃ 1 + ε_1X_1 + ε_2X_2 + ε_3X_3

for ε_j small and real, just as group elements z ∈ U(1) near the identity satisfy

z ≃ 1 + iε

One can think of the Xj and their commutation relations as an infinites-


imal version of the full group and its group multiplication law, valid near
the identity. In terms of the geometry of manifolds, recall that SU (2) is the
space S 3 . The Xj give a basis of the tangent space R3 to the identity of
SU (2), just as i gives a basis of the tangent space to the identity of U (1).
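Both sets of commutation relations can be verified by direct matrix arithmetic, for example:

```python
# Check [sigma_1, sigma_2] = 2i sigma_3 and [X_1, X_2] = X_3 directly.
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

def mul(A, B):
    # 2 by 2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def comm(A, B):
    # the commutator [A, B] = AB - BA
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def scal(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

assert comm(s1, s2) == scal(2j, s3)   # [sigma_1, sigma_2] = 2i sigma_3

X1, X2, X3 = (scal(-0.5j, s) for s in (s1, s2, s3))
assert comm(X1, X2) == X3             # [X_1, X_2] = X_3
```

All the entries involved are exact halves in floating point, so the equality checks here are exact.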

3.3 Dynamics of a two-state system
Recall that the time dependence of states in quantum mechanics is given by the
Schrödinger equation
\frac{d}{dt}|ψ(t)⟩ = −iH|ψ(t)⟩
where H is a particular self-adjoint linear operator on H, the Hamiltonian op-
erator. The most general such operator on C2 will be given by

H = h0 1 + h1 σ1 + h2 σ2 + h3 σ3

for four real parameters h0 , h1 , h2 , h3 . The solution to the Schrödinger equation


is just given by exponentiation:

|ψ(t)i = U (t)|ψ(0)i

where
U (t) = e−itH
The h0 1 term in H just contributes an overall phase factor e−ih0 t , with the
remaining factor of U (t) an element of the group SU (2) rather than the larger
group U (2) of all 2 by 2 unitaries.
Using our earlier equation

e^{iθv·σ} = (cos θ)1 + i(sin θ)v · σ

valid for a unit vector v, our U(t) is given by taking h = (h_1, h_2, h_3), v = \frac{h}{|h|}
and θ = −t|h|, so we find

U(t) = e^{−ih_0t}\left(cos(−t|h|)1 + i sin(−t|h|)\frac{h_1σ_1 + h_2σ_2 + h_3σ_3}{|h|}\right)
     = e^{−ih_0t}\left(cos(t|h|)1 − i sin(t|h|)\frac{h_1σ_1 + h_2σ_2 + h_3σ_3}{|h|}\right)
     = e^{−ih_0t}\begin{pmatrix} cos(t|h|) − i\frac{h_3}{|h|} sin(t|h|) & −i sin(t|h|)\frac{h_1 − ih_2}{|h|} \\ −i sin(t|h|)\frac{h_1 + ih_2}{|h|} & cos(t|h|) + i\frac{h_3}{|h|} sin(t|h|) \end{pmatrix}

In the special case h = (0, 0, h3 ) we have


U(t) = \begin{pmatrix} e^{−it(h_0 + h_3)} & 0 \\ 0 & e^{−it(h_0 − h_3)} \end{pmatrix}

so if our initial state is

|ψ(0)⟩ = α|+1/2⟩ + β|−1/2⟩

for α, β ∈ C, at later times the state will be

|ψ(t)⟩ = αe^{−it(h_0 + h_3)}|+1/2⟩ + βe^{−it(h_0 − h_3)}|−1/2⟩

In this special case, one can see that the eigenvalues of the Hamiltonian are
h0 ± h3 .
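A short numerical sketch of this special case (the values for h_0, h_3 and the initial state are arbitrary hypothetical choices) confirms that time evolution is unitary and that the σ_3 probabilities |α|^2, |β|^2 do not change:

```python
import cmath

h0, h3 = 0.4, 1.25       # hypothetical parameters, field along the 3-direction
alpha, beta = 0.6, 0.8j  # |alpha|^2 + |beta|^2 = 1

def psi(t):
    # |psi(t)> = alpha e^{-it(h0+h3)} |+1/2> + beta e^{-it(h0-h3)} |-1/2>
    return (alpha * cmath.exp(-1j * (h0 + h3) * t),
            beta * cmath.exp(-1j * (h0 - h3) * t))

a, b = psi(3.1)
# time evolution is unitary: the norm of the state is preserved
assert abs(abs(a)**2 + abs(b)**2 - 1) < 1e-12
# [sigma_3, H] = 0 here, so the probability of measuring +1/2 is conserved
assert abs(abs(a)**2 - abs(alpha)**2) < 1e-12
```

The two components only pick up phases, advancing at the energies h_0 + h_3 and h_0 − h_3; their difference 2h_3 is what shows up physically in the Zeeman splitting discussed next.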
In the physical realization of this system by a spin 1/2 particle (ignoring its
spatial motion), the Hamiltonian is given by
H = \frac{ge}{4mc}(B_1σ_1 + B_2σ_2 + B_3σ_3)

where the B_j are the components of the magnetic field, and the physical constants
are the gyromagnetic ratio (g), the electric charge (e), the mass (m) and
the speed of light (c), so we have solved the problem of the time evolution of
such a system, setting h_j = \frac{ge}{4mc}B_j. For magnetic fields of size |B| in the 3-direction,
we see that the two different states with well-defined energy (|+1/2⟩
and |−1/2⟩) will have an energy difference between them of

\frac{ge}{2mc}|B|
This is known as the Zeeman effect and is readily visible in the spectra of atoms
subjected to a magnetic field. We will consider this example in more detail in
chapter 7, seeing how the group of rotations of R3 appears. Much later, in
chapter 42, we will derive this Hamiltonian term from general principles of how
electromagnetic fields couple to such spin 1/2 particles.

3.4 For further reading


Many quantum mechanics textbooks now begin with the two-state system, giv-
ing a much more detailed treatment than the one given here, including much
more about the physical interpretation of such systems (see for example [68]).
Volume III of Feynman’s Lectures on Physics [16] is a quantum mechanics text
with much of the first half devoted to two-state systems. The field of “Quantum
Information Theory” gives a perspective on quantum theory that puts such sys-
tems (in this context called the “qubit”) front and center. One possible reference
for this material is John Preskill’s notes on quantum computation [48].

Chapter 4

Linear Algebra Review,


Unitary and Orthogonal
Groups

A significant background in linear algebra will be assumed in later chapters,


and we’ll need a range of specific facts from that subject. These will include
some aspects of linear algebra not emphasized in a typical linear algebra course,
such as the role of the dual space and the consideration of various classes of
invertible matrices as defining a group. For now our vector spaces will be finite-
dimensional. Later on we will come to state spaces that are infinite dimensional,
and will address the various issues that this raises at that time.

4.1 Vector spaces and linear maps


A vector space V over a field k is just a set such that one can consistently take
linear combinations of elements with coefficients in k. We will only be using
the cases k = R and k = C, so such finite-dimensional V will just be Rn or
Cn . Choosing a basis (set of n linearly independent vectors) {ej }, an arbitrary
vector v ∈ V can be written as

v = v1 e1 + v2 e2 + · · · + vn en

giving an explicit identification of V with n-tuples vj of real or complex numbers


which we will usually write as column vectors

v = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}

The choice of a basis {ej } also allows us to express the action of a linear
operator Ω on V
Ω : v ∈ V → Ωv ∈ V
as multiplication by an n by n matrix:

\begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} → \begin{pmatrix} Ω_{11} & Ω_{12} & \cdots & Ω_{1n} \\ Ω_{21} & Ω_{22} & \cdots & Ω_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ Ω_{n1} & Ω_{n2} & \cdots & Ω_{nn} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}
The invertible linear operators on V form a group under composition, a
group we will sometimes denote GL(V ). Choosing a basis identifies this group
with the group of invertible matrices, with group law matrix multiplication.
For V n-dimensional, we will denote this group by GL(n, R) in the real case,
GL(n, C) in the complex case.
Note that when working with vectors as linear combinations of basis vectors,
we can use matrix notation to write a linear transformation as

v → Ωv = \begin{pmatrix} e_1 & \cdots & e_n \end{pmatrix} \begin{pmatrix} Ω_{11} & Ω_{12} & \cdots & Ω_{1n} \\ Ω_{21} & Ω_{22} & \cdots & Ω_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ Ω_{n1} & Ω_{n2} & \cdots & Ω_{nn} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}
One sees from this that we can think of the transformed vector as we did
above in terms of transformed coefficients vj with respect to fixed basis vectors,
but also could leave the vj unchanged and transform the basis vectors. At times
we will want to use matrix notation to write formulas for how the basis vectors
transform in this way, and then will write

\begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix} → \begin{pmatrix} Ω_{11} & Ω_{21} & \cdots & Ω_{n1} \\ Ω_{12} & Ω_{22} & \cdots & Ω_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ Ω_{1n} & Ω_{2n} & \cdots & Ω_{nn} \end{pmatrix} \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix}
Note that putting the basis vectors ej in a column vector like this causes the
matrix for Ω to act on them by the transposed matrix. This is not a group action
since in general the product of two transposed matrices is not the transpose of
the product.

4.2 Dual vector spaces


To any vector space V one can associate a new vector space, its dual:
Definition (Dual vector space). Given a vector space V over a field k, the dual
vector space V ∗ is the set of all linear maps V → k, i.e.
V ∗ = {l : V → k such that l(αv + βw) = αl(v) + βl(w)}

for α, β ∈ k, v, w ∈ V .
Given a linear transformation Ω acting on V , one can define:
Definition (Transpose transformation). The transpose of Ω is the linear trans-
formation
ΩT : V ∗ → V ∗
that satisfies
(ΩT l)(v) = l(Ωv)
for l ∈ V ∗ , v ∈ V .
For any representation (π, V ) of a group G on V , one can define a corre-
sponding representation on V ∗
Definition (Dual or contragredient representation). The dual or contragredient
representation on V ∗ is given by taking as linear operators
(π T )−1 (g) : V ∗ → V ∗
These satisfy the homomorphism property since
(π T (g1 ))−1 (π T (g2 ))−1 = (π T (g2 )π T (g1 ))−1 = ((π(g1 )π(g2 ))T )−1
For any choice of basis {ej } of V , one has a dual basis {e∗j } of V ∗ that
satisfies
e∗j (ek ) = δjk
Coordinates on V with respect to a basis are linear functions, and thus elements
of V ∗ . One can identify the coordinate function vj with the dual basis vector
e∗j since
e∗j (v1 e1 + v2 e2 + · · · + vn en ) = vj
One can easily show that the elements of the matrix for Ω in the basis ej
are given by
Ωjk = e∗j (Ωek )
and that the matrix for the transpose map (with respect to the dual basis) is
just the matrix transpose
(ΩT )jk = Ωkj
One can use matrix notation to write elements
l = l1 e∗1 + l2 e∗2 + · · · + ln e∗n ∈ V ∗
of V ∗ as row vectors

$$\begin{pmatrix} l_1 & l_2 & \cdots & l_n \end{pmatrix}$$

of coordinates on V ∗ . Then evaluation of l on a vector v is given by matrix
multiplication

$$l(v) = \begin{pmatrix} l_1 & l_2 & \cdots & l_n \end{pmatrix}\begin{pmatrix} v_1\\ v_2\\ \vdots\\ v_n\end{pmatrix} = l_1 v_1 + l_2 v_2 + \cdots + l_n v_n$$

4.3 Change of basis
Any invertible transformation A on V can be used to change the basis ej of V
to a new basis e′j by taking
ej → e′j = Aej
The matrix for a linear transformation Ω transforms under this change of basis
as

$$\begin{aligned}\Omega_{jk} = e_j^*(\Omega e_k) \to (e'_j)^*(\Omega e'_k) &= (Ae_j)^*(\Omega A e_k)\\ &= ((A^T)^{-1}(e_j^*))(\Omega A e_k)\\ &= e_j^*(A^{-1}\Omega A e_k)\\ &= (A^{-1}\Omega A)_{jk}\end{aligned}$$

In the second step we are using the fact that elements of the dual basis transform
as the dual representation. One can check that this is what is needed to ensure
the relation
$(e'_j)^*(e'_k) = \delta_{jk}$
The change of basis formula shows that if two matrices Ω1 and Ω2 are related
by conjugation by a third matrix A

Ω2 = A−1 Ω1 A

then one can think of them as both representing the same linear transforma-
tion, just with respect to two different choices of basis. Recall that a finite-
dimensional representation is given by a set of matrices π(g), one for each group
element. If two representations are related by

π2 (g) = A−1 π1 (g)A

(for all g, A does not depend on g), then we can think of them as being the
same representation, with different choices of basis. In such a case the represen-
tations π1 and π2 are called “equivalent”, and we will often implicitly identify
representations that are equivalent.

4.4 Inner products


An inner product on a vector space V is an additional structure that provides
a notion of length for vectors, of angle between vectors, and identifies V ∗ ' V .
One has, in the real case:

Definition (Inner Product, real case). An inner product on a real vector space
V is a map
⟨·, ·⟩ : V × V → R
that is linear in both variables and symmetric (⟨v, w⟩ = ⟨w, v⟩).

Our inner products will usually be positive-definite (⟨v, v⟩ ≥ 0 and ⟨v, v⟩ =
0 =⇒ v = 0), with indefinite inner products only appearing in the con-
text of special or general relativity, where an indefinite inner product on four-
dimensional space-time is used.
In the complex case, one has

Definition (Inner Product, complex case). An Hermitian inner product on a
complex vector space V is a map

⟨·, ·⟩ : V × V → C

that is conjugate symmetric

$$\langle v, w \rangle = \overline{\langle w, v \rangle}$$

as well as linear in the second variable, and antilinear in the first variable: for
α ∈ C and u, v, w ∈ V

$$\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle, \qquad \langle \alpha u, v \rangle = \bar{\alpha}\langle u, v \rangle$$

An inner product gives a notion of length || · || for vectors, with

||v||² = ⟨v, v⟩

Note that whether to specify antilinearity in the first or second variable is a


matter of convention. The choice we are making is universal among physicists,
with the opposite choice common among mathematicians.
An inner product also provides an (antilinear in the complex case) isomorphism
V ≅ V ∗ by the map

v ∈ V → lv ∈ V ∗

where lv is defined by
lv (w) = ⟨v, w⟩
Physicists have a useful notation for elements of vector spaces and their duals,
for the case when V is a complex vector space with an Hermitian inner product
(such as the state space for a quantum theory). An element of such a vector
space V is written as a “ket vector”

|v⟩

where v is a label for a vector. An element of the dual vector space V ∗ is written
as a “bra vector”
⟨l|
Evaluating l ∈ V ∗ on v ∈ V gives an element of C, written

⟨l|v⟩

If Ω : V → V is a linear map
⟨l|Ω|v⟩ = ⟨l|Ωv⟩ = l(Ωv)
In the bra-ket notation, one denotes the dual vector lv by ⟨v|. Note that in
the inner product the angle bracket notation means something different than in
the bra-ket notation. The similarity is intentional though, since in the bra-ket
notation one has
⟨v|w⟩ = ⟨v, w⟩
Note that our convention of linearity in the second variable of the inner product,
antilinearity in the first, implies

$$|\alpha v\rangle = \alpha|v\rangle, \qquad \langle \alpha v| = \bar{\alpha}\langle v|$$
for α ∈ C.
For a choice of orthonormal basis {ej }, i.e. satisfying

⟨ej , ek ⟩ = δjk

a useful notation is
|j⟩ = ej
Because of orthonormality, coefficients of vectors can be calculated as

vj = ⟨ej , v⟩

In bra-ket notation we have
vj = ⟨j|v⟩
and

$$|v\rangle = \sum_{j=1}^{n} |j\rangle\langle j|v\rangle$$

For corresponding elements of V ∗ , one has (using antilinearity)

$$\langle v| = \sum_{j=1}^{n} \bar{v}_j \langle j| = \sum_{j=1}^{n} \langle v|j\rangle\langle j|$$

With respect to the chosen orthonormal basis {ej }, one can represent vectors
v as column vectors, and the operation of taking a vector |v⟩ to a dual vector ⟨v|
corresponds to taking a column vector to the row vector that is its conjugate-
transpose

$$\langle v| = \begin{pmatrix} \bar{v}_1 & \bar{v}_2 & \cdots & \bar{v}_n \end{pmatrix}$$

Then one has

$$\langle v|w\rangle = \begin{pmatrix} \bar{v}_1 & \bar{v}_2 & \cdots & \bar{v}_n \end{pmatrix}\begin{pmatrix} w_1\\ w_2\\ \vdots\\ w_n\end{pmatrix} = \bar{v}_1 w_1 + \bar{v}_2 w_2 + \cdots + \bar{v}_n w_n$$

If Ω is a linear operator Ω : V → V , then with respect to the chosen basis it
becomes a matrix with matrix elements

Ωkj = ⟨k|Ωj⟩

The decomposition of a vector v in terms of coefficients

$$|v\rangle = \sum_{j=1}^{n} |j\rangle\langle j|v\rangle$$

can be interpreted as a matrix multiplication by the identity matrix

$$\mathbf 1 = \sum_{j=1}^{n} |j\rangle\langle j|$$

and this kind of expression is referred to by physicists as a “completeness rela-
tion”, since it requires that the set of |j⟩ be a basis with no missing elements.
The operator
Pj = |j⟩⟨j|
is called the projection operator onto the j’th basis vector; it corresponds to the
matrix that has 0s everywhere except in the jj component.
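A small numerical sketch (not from the text) makes the completeness relation concrete: in the standard orthonormal basis of C², the projection operators Pj sum to the identity, each Pj has a single 1 in the jj entry, and the coefficients vj = ⟨j|v⟩ reassemble v.

```python
def outer(ket, bra):
    # the matrix |ket><bra|; the bra entries enter complex-conjugated
    return [[ket[i] * bra[j].conjugate() for j in range(2)] for i in range(2)]

e = [[1, 0], [0, 1]]                       # basis vectors |1>, |2> of C^2
P = [outer(e[j], e[j]) for j in range(2)]  # projection operators P_j

# completeness relation: P_1 + P_2 = 1
identity = [[P[0][i][j] + P[1][i][j] for j in range(2)] for i in range(2)]
assert identity == [[1, 0], [0, 1]]

# P_j is zero except for a 1 in the jj component
assert P[0] == [[1, 0], [0, 0]] and P[1] == [[0, 0], [0, 1]]

# coefficients v_j = <j|v> recover the vector
v = [3 + 1j, -2j]
coeffs = [sum(e[j][k].conjugate() * v[k] for k in range(2)) for j in range(2)]
assert coeffs == v
```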
Digression. In this course, all our indices will be lower indices. One way to
keep straight the difference between vectors and dual vectors is to use upper
indices for components of vectors, lower indices for components of dual vectors.
This is quite useful in Riemannian geometry and general relativity, where the
inner product is given by a metric that can vary from point to point, causing
the isomorphism between vectors and dual vectors to also vary. For quantum
mechanical state spaces, we will be using a single, standard, fixed inner product,
so there will be a single isomorphism between vectors and dual vectors. The
bra-ket notation will take care of the notational distinction between vectors and
dual vectors as necessary.

4.5 Adjoint operators


When V is a vector space with inner product, one can define the adjoint of Ω
by
Definition (Adjoint Operator). The adjoint of a linear operator Ω : V → V is
the operator Ω† satisfying

⟨Ωv, w⟩ = ⟨v, Ω† w⟩

or, in bra-ket notation


⟨Ωv|w⟩ = ⟨v|Ω† w⟩
for all v, w ∈ V .

39
Generalizing the fact that

$$\langle \alpha v| = \bar{\alpha}\langle v|$$

for α ∈ C, one can write
⟨Ωv| = ⟨v|Ω†
Note that mathematicians tend to favor Ω∗ as notation for the adjoint of Ω,
as opposed to the physicist’s notation Ω† that we are using.
In terms of explicit matrices, since ⟨Ωv| is the conjugate-transpose of |Ωv⟩,
the matrix for Ω† will be given by the conjugate transpose $\overline{\Omega}^T$ of the matrix for
Ω:
$$\Omega^\dagger_{jk} = \overline{\Omega}_{kj}$$
In the real case, the matrix for the adjoint is just the transpose matrix. We
will say that a linear transformation is self-adjoint if Ω† = Ω, skew-adjoint if
Ω† = −Ω.
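The defining property of the adjoint can be checked directly. This sketch (not from the text; the matrix and vectors are arbitrary examples) verifies ⟨Ωv, w⟩ = ⟨v, Ω†w⟩ with Ω† taken as the conjugate-transpose, using the physicists' convention of antilinearity in the first slot:

```python
def inner(v, w):
    # Hermitian inner product, conjugate-linear in the first argument
    return sum(v[k].conjugate() * w[k] for k in range(2))

def apply(m, v):
    return [sum(m[i][k] * v[k] for k in range(2)) for i in range(2)]

def dagger(m):
    # adjoint: conjugate-transpose
    return [[m[j][i].conjugate() for j in range(2)] for i in range(2)]

Omega = [[1 + 2j, 3j], [0, -1j]]
v = [1j, 2]
w = [3, 1 - 1j]

# <Omega v, w> = <v, Omega† w>
assert abs(inner(apply(Omega, v), w) - inner(v, apply(dagger(Omega), w))) < 1e-12
```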

4.6 Orthogonal and unitary transformations


A special class of linear transformations will be invertible transformations that
preserve the inner product, i.e. satisfying

⟨Ωv, Ωw⟩ = ⟨Ωv|Ωw⟩ = ⟨v, w⟩ = ⟨v|w⟩

for all v, w ∈ V . Such transformations take orthonormal bases to orthonormal
bases, so they will appear in one role as change of basis transformations.
In terms of adjoints, this condition becomes

⟨Ωv, Ωw⟩ = ⟨v, Ω† Ωw⟩ = ⟨v, w⟩

so
Ω† Ω = 1
or equivalently
Ω† = Ω−1
In matrix notation this first condition becomes

$$\sum_{k=1}^{n} (\Omega^\dagger)_{jk}\Omega_{kl} = \sum_{k=1}^{n} \overline{\Omega}_{kj}\Omega_{kl} = \delta_{jl}$$

which says that the column vectors of the matrix for Ω are orthonormal vectors.
Using instead the equivalent condition
ΩΩ† = 1
one finds that the row vectors of the matrix for Ω are also orthonormal.
Since such linear transformations preserving the inner product can be com-
posed and are invertible, they form a group, and some of the basic examples of
Lie groups are given by these groups for the cases of real and complex vector
spaces.

4.6.1 Orthogonal groups
We’ll begin with the real case, where these groups are called orthogonal groups:
Definition (Orthogonal group). The orthogonal group O(n) in n-dimensions
is the group of invertible transformations preserving an inner product on a real
n-dimensional vector space V . This is isomorphic to the group of n by n real
invertible matrices Ω satisfying

Ω−1 = ΩT

The subgroup of O(n) of matrices with determinant 1 (equivalently, the subgroup


preserving orientation of orthonormal bases) is called SO(n).
Recall that for a representation π of a group G on V , one has a dual repre-
sentation on V ∗ given by taking the transpose-inverse of π. If G is an orthogonal
group, then π and its dual are the same matrices, with V identified with V ∗ by
the inner product.
Since the determinant of the transpose of a matrix is the same as the deter-
minant of the matrix, we have

$$\Omega^{-1}\Omega = \mathbf 1 \implies \det(\Omega^{-1})\det(\Omega) = \det(\Omega^T)\det(\Omega) = (\det(\Omega))^2 = 1$$

so
det(Ω) = ±1
O(n) is a continuous Lie group, with two components: SO(n), the subgroup of
orientation-preserving transformations, which include the identity, and a com-
ponent of orientation-changing transformations.
The simplest non-trivial example is for n = 2, where all elements of SO(2)
are given by matrices of the form

$$\begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{pmatrix}$$

These matrices give counter-clockwise rotations in R2 by an angle θ. The other
component of O(2) will be given by matrices of the form

$$\begin{pmatrix} \cos\theta & \sin\theta\\ \sin\theta & -\cos\theta \end{pmatrix}$$

Note that the group SO(2) is isomorphic to the group U (1) by

$$\begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{pmatrix} \leftrightarrow e^{i\theta}$$

so the representation theory of SO(2) is just as for U (1), with irreducible com-
plex representations one-dimensional and classified by an integer.
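This isomorphism can be checked numerically: composing two rotations adds their angles, just as multiplying two phases in U(1) adds theirs. A sketch (not from the text; the angles are arbitrary):

```python
import cmath
import math

def rot(theta):
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta), math.cos(theta)]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t1, t2 = 0.7, 1.9
prod = mul(rot(t2), rot(t1))        # rotate by t1, then by t2
expected = rot(t1 + t2)
assert all(abs(prod[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))

# the same group law on the U(1) side
assert abs(cmath.exp(1j * t1) * cmath.exp(1j * t2) - cmath.exp(1j * (t1 + t2))) < 1e-12
```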
In chapter 6 we will consider in detail the case of SO(3), which is crucial
for physical applications because it is the group of rotations in the physical
three-dimensional space.

4.6.2 Unitary groups
In the complex case, groups of invertible transformations preserving the Hermi-
tian inner product are called unitary groups:
Definition (Unitary group). The unitary group U (n) in n-dimensions is the
group of invertible transformations preserving an Hermitian inner product on a
complex n-dimensional vector space V . This is isomorphic to the group of n by
n complex invertible matrices satisfying

$$\Omega^{-1} = \overline{\Omega}^T = \Omega^\dagger$$

The subgroup of U (n) of matrices with determinant 1 is called SU (n).


In the unitary case, the dual of a representation π has representation matrices
that are transpose-inverses of those for π, but

$$(\pi(g)^T)^{-1} = \overline{\pi(g)}$$

so the dual representation is given by conjugating all elements of the matrix.


The same calculation as in the real case here gives

$$\det(\Omega^{-1})\det(\Omega) = \det(\Omega^\dagger)\det(\Omega) = \overline{\det(\Omega)}\,\det(\Omega) = |\det(\Omega)|^2 = 1$$

so det(Ω) is a complex number of modulus one. The map

Ω ∈ U (n) → det(Ω) ∈ U (1)

is a group homomorphism.
We have already seen the examples U (1), U (2) and SU (2). For general
values of n, the case of U (n) can be split into the study of its determinant,
which lies in U (1) so is easy to deal with, and the subgroup SU (n), which is a
much more complicated story.
Digression. Note that it is not quite true that the group U (n) is the product
group SU (n) × U (1). If one tries to identify the U (1) as the subgroup of U (n)
of elements of the form eiθ 1, then matrices of the form
$$e^{i\frac{2\pi m}{n}}\,\mathbf 1$$

for m an integer will lie in both SU (n) and U (1), so U (n) is not a product of
those two groups.
We saw at the end of section 3.1.2 that SU (2) can be identified with the three-
sphere S 3 , since an arbitrary group element can be constructed by specifying one
row (or one column), which must be a vector of length one in C2 . For the case
n = 3, the same sort of construction starts by picking a row of length one in C3 ,
which will be a point in S 5 . The second row must be orthonormal, and one can
show that the possibilities lie in a three-sphere S 3 . Once the first two rows are
specified, the third row is uniquely determined. So as a manifold, SU (3) is eight-
dimensional, and one might think it could be identified with S 5 ×S 3 . It turns out

that this is not the case, since the S 3 varies in a topologically non-trivial way
as one varies the point in S 5 . As spaces, the SU (n) are topologically “twisted”
products of odd-dimensional spheres, providing some of the basic examples of
quite non-trivial topological manifolds.

4.7 Eigenvalues and eigenvectors


We have seen that the matrix for a linear transformation Ω of a vector
space V changes by conjugation when we change our choice of basis of V . To
get basis-independent information about Ω, one considers the eigenvalues of the
matrix. Complex matrices behave in a much simpler fashion than real matrices,
since in the complex case the eigenvalue equation

det(λ1 − Ω) = 0

can always be factored into linear factors, and solved for the eigenvalues λ. For
an arbitrary n by n complex matrix there will be n solutions (counting repeated
eigenvalues with multiplicity). One can always find a basis for which the matrix
will be in upper triangular form.
The case of self-adjoint matrices Ω is much more constrained, since transpo-
sition relates matrix elements. One has:
Theorem (Spectral theorem for self-adjoint matrices). Given a self-adjoint
complex n by n matrix Ω, one can always find a unitary matrix U such that

U ΩU −1 = D

where D is a diagonal matrix with entries Djj = λj , λj ∈ R.


Given Ω, one finds the eigenvalues λj by solving the eigenvalue equation. One
can then go on to solve for the eigenvectors and use these to find U . For distinct
eigenvalues one finds that the corresponding eigenvectors are orthogonal.
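A worked instance (a sketch, not from the text): the self-adjoint matrix Ω below has eigenvalues 1 and −1 with orthonormal eigenvectors (1, 1)/√2 and (1, −1)/√2; the unitary U built from these eigenvectors as rows diagonalizes Ω, with real diagonal entries as the theorem requires.

```python
import math

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s = 1 / math.sqrt(2)
Omega = [[0.0, 1.0], [1.0, 0.0]]
U = [[s, s], [s, -s]]       # rows are normalized eigenvectors; this U is its own inverse

D = mul(mul(U, Omega), U)   # U Omega U^{-1}
expected = [[1.0, 0.0], [0.0, -1.0]]
assert all(abs(D[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```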
This theorem is of crucial importance in quantum mechanics, where for Ω an
observable, the eigenvectors are the states in the state space with well-defined
numerical values characterizing the state, and these numerical values are the
eigenvalues. The theorem also tells us that given an observable, we can use
it to choose distinguished orthonormal bases for the state space by picking a
basis of eigenvectors, normalized to length one. This is a theorem about finite-
dimensional vector spaces, but later on in the course we will see that something
similar will be true even in the case of infinite-dimensional state spaces.
One can also diagonalize unitary matrices themselves by conjugation by
another unitary. The diagonal entries will all be complex numbers of unit length,
so of the form eiλj , λj ∈ R.
For the simplest examples, consider the cases of the groups SU (2) and U (2).
Any matrix in U (2) can be conjugated by a unitary matrix to the diagonal
matrix

$$\begin{pmatrix} e^{i\lambda_1} & 0\\ 0 & e^{i\lambda_2} \end{pmatrix}$$

which is the exponential of a corresponding diagonalized skew-adjoint matrix

$$\begin{pmatrix} i\lambda_1 & 0\\ 0 & i\lambda_2 \end{pmatrix}$$

For matrices in the subgroup SU (2), one has λ2 = −λ1 = −λ, so in diagonal
form an SU (2) matrix will be

$$\begin{pmatrix} e^{i\lambda} & 0\\ 0 & e^{-i\lambda} \end{pmatrix}$$

which is the exponential of a corresponding diagonalized skew-adjoint matrix
that has trace zero

$$\begin{pmatrix} i\lambda & 0\\ 0 & -i\lambda \end{pmatrix}$$

4.8 For further reading


Almost any of the more advanced linear algebra textbooks should cover the
material of this chapter.

Chapter 5

Lie Algebras and Lie


Algebra Representations

For a group G we have defined unitary representations (π, V ) for finite-dimensional


vector spaces V of complex dimension n as homomorphisms

π : G → U (n)

Recall that in the case of G = U (1), we could use the homomorphism property
of π to determine π in terms of its derivative at the identity. This turns out to
be a general phenomenon for Lie groups G: we can study their representations
by considering the derivative of π at the identity, which we will call π ′ . Because
of the homomorphism property, knowing π ′ is often sufficient to characterize
the representation π it comes from. π ′ is a linear map from the tangent space
to G at the identity to the tangent space of U (n) at the identity. The tangent
space to G at the identity will carry some extra structure coming from the group
multiplication, and this vector space with this structure will be called the Lie
algebra of G.
The subject of differential geometry gives many equivalent ways of defining
the tangent space at a point of manifolds like G, but we do not want to enter
here into the subject of differential geometry in general. One of the standard
definitions of the tangent space is as the space of tangent vectors, with tangent
vectors defined as the possible velocity vectors of parametrized curves g(t) in
the group G.
More advanced treatments of Lie group theory develop this point of view
(see for example [70]) which applies to arbitrary Lie groups, whether or not
they are groups of matrices. In our case though, since we are interested in
specific groups that are explicitly given as groups of matrices, we can give a
more concrete definition, just using the exponential map on matrices. For a
more detailed exposition of this subject, using the same concrete definition of
the Lie algebra in terms of matrices, see Brian Hall’s book [27] or the abbreviated
on-line version [28].

Note that the material of this chapter is quite general, and may be hard
to make sense of until one has some experience with basic examples. The next
chapter will discuss in detail the groups SU (2) and SO(3) and their Lie algebras,
as well as giving some examples of their representations, and this may be helpful
in making sense of the general theory of this chapter.

5.1 Lie algebras


We’ll work with the following definition of a Lie algebra:
Definition (Lie algebra). For G a Lie group of n by n invertible matrices, the
Lie algebra of G (written Lie(G) or g) is the space of n by n matrices X such
that etX ∈ G for t ∈ R.
Notice that while the group G determines the Lie algebra g, the Lie algebra
does not determine the group. For example, O(n) and SO(n) have the same
tangent space at the identity, and thus the same Lie algebra, but elements in
O(n) not in the component of the identity can’t be written in the form etX
(since then you could make a path of matrices connecting such an element to
the identity by shrinking t to zero). Note also that, for a given X, different
values of t may give the same group element, and this may happen in different
ways for different groups sharing the same Lie algebra. For example, consider
G = U (1) and G = (R, +), which both have the same Lie algebra g = R. In the
first case an infinity of values of t give the same group element, in the second,
only one does. In the next chapter we’ll see a more subtle example of this:
SU (2) and SO(3) are different groups with the same Lie algebra.
We have G ⊂ GL(n, C), and X ∈ M (n, C), the space of n by n complex
matrices. For all t ∈ R, the exponential etX is an invertible matrix (with inverse
e−tX ), so in GL(n, C). For each X, we thus have a path of elements of GL(n, C)
going through the identity matrix at t = 0, with velocity vector
$$\frac{d}{dt} e^{tX} = X e^{tX}$$

which takes the value X at t = 0:

$$\frac{d}{dt}(e^{tX})\Big|_{t=0} = X$$
To calculate this derivative, just use the power series expansion for the expo-
nential, and differentiate term-by-term.
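The truncated power series is easy to compute explicitly. The following sketch (not from the text; X is an arbitrary skew-symmetric example) builds e^{tX} from the series, checks that e^{tX} e^{−tX} = 1, and recovers the velocity vector X at t = 0 by a central finite-difference quotient:

```python
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def scale(c, m):
    return [[c * m[i][j] for j in range(2)] for i in range(2)]

def expm(m, terms=25):
    # truncated power series 1 + m + m^2/2! + ...
    total = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = scale(1.0 / k, mul(term, m))
        total = add(total, term)
    return total

X = [[0.0, 1.0], [-1.0, 0.0]]

# e^{tX} has inverse e^{-tX}
t = 0.8
prod = mul(expm(scale(t, X)), expm(scale(-t, X)))
assert all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-10
           for i in range(2) for j in range(2))

# (e^{hX} - e^{-hX}) / 2h approximates the velocity vector X at t = 0
h = 1e-6
diff = add(expm(scale(h, X)), scale(-1.0, expm(scale(-h, X))))
deriv = scale(1.0 / (2 * h), diff)
assert all(abs(deriv[i][j] - X[i][j]) < 1e-6 for i in range(2) for j in range(2))
```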
For the case G = GL(n, C), we just have gl(n, C) = M (n, C), which is a
linear space of the right dimension to be the tangent space to G at the identity,
so this definition is consistent with our general motivation. For subgroups G ⊂
GL(n, C) given by some condition (for example that of preserving an inner
product), we will need to identify the corresponding condition on X ∈ M (n, C)
and check that this defines a linear space.
The existence of such a linear space g ⊂ M (n, C) will provide us with a
distinguished representation, called the “adjoint representation”

Definition (Adjoint representation). The adjoint representation (Ad, g) is given
by the homomorphism
Ad : g ∈ G → Ad(g) ∈ GL(g)
where Ad(g) acts on X ∈ g by
(Ad(g))(X) = gXg −1
To show that this is well-defined, one needs to check that gXg −1 ∈ g when
X ∈ g, but this can be shown using the identity
$$e^{tgXg^{-1}} = g e^{tX} g^{-1}$$

which implies that $e^{tgXg^{-1}} \in G$ if $e^{tX} \in G$. To check this, just expand the
exponential and use

$$(gXg^{-1})^k = (gXg^{-1})(gXg^{-1}) \cdots (gXg^{-1}) = g X^k g^{-1}$$
It is also easy to check that this is a homomorphism, with
Ad(g1 )Ad(g2 ) = Ad(g1 g2 )
A Lie algebra g is not just a real vector space, but comes with an extra
structure on the vector space
Definition (Lie bracket). The Lie bracket operation on g is the bilinear anti-
symmetric map given by the commutator of matrices
[·, ·] : (X, Y ) ∈ g × g → [X, Y ] = XY − Y X ∈ g
We need to check that this is well-defined, i.e. that it takes values in g.
Theorem. If X, Y ∈ g, [X, Y ] = XY − Y X ∈ g.
Proof. Since X ∈ g, we have etX ∈ G and we can act on Y ∈ g by the adjoint
representation
Ad(etX )Y = etX Y e−tX ∈ g
As t varies this gives us a parametrized curve in g. Its velocity vector will also
be in g, so
$$\frac{d}{dt}(e^{tX} Y e^{-tX}) \in g$$
One has (by the product rule, which can easily be shown to apply in this case)
$$\frac{d}{dt}(e^{tX} Y e^{-tX}) = \left(\frac{d}{dt}(e^{tX} Y)\right) e^{-tX} + e^{tX} Y \left(\frac{d}{dt} e^{-tX}\right) = X e^{tX} Y e^{-tX} - e^{tX} Y X e^{-tX}$$
Evaluating this at t = 0 gives
XY − Y X
which is thus shown to be in g.

The relation
$$\frac{d}{dt}(e^{tX} Y e^{-tX})\Big|_{t=0} = [X, Y] \tag{5.1}$$
used in this proof will be continually useful in relating Lie groups and Lie alge-
bras.
To do calculations with a Lie algebra, one can just choose a basis X1 , X2 , . . . , Xn
for the vector space g, and use the fact that the Lie bracket can be written in
terms of this basis as

$$[X_j, X_k] = \sum_{l=1}^{n} c_{jkl} X_l$$

where cjkl is a set of constants known as the “structure constants” of the Lie
algebra. For example, in the case of su(2), the Lie algebra of SU (2), one has a
basis X1 , X2 , X3 satisfying

$$[X_j, X_k] = \sum_{l=1}^{3} \epsilon_{jkl} X_l$$

so the structure constants of su(2) are just the totally antisymmetric ϵjkl .
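An explicit basis realizing these relations (an assumption for illustration, not given at this point in the text) is Xj = −iσj/2, with σj the Pauli matrices. The sketch below checks [X1, X2] = X3 and its cyclic permutations:

```python
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(a, b):
    # matrix commutator [a, b] = ab - ba
    ab, ba = mul(a, b), mul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

def close(a, b):
    return all(abs(a[i][j] - b[i][j]) < 1e-12 for i in range(2) for j in range(2))

sigma = [
    [[0, 1], [1, 0]],       # sigma_1
    [[0, -1j], [1j, 0]],    # sigma_2
    [[1, 0], [0, -1]],      # sigma_3
]
# basis X_j = -i sigma_j / 2 (skew-adjoint, trace zero)
X = [[[-0.5j * entry for entry in row] for row in s] for s in sigma]

assert close(bracket(X[0], X[1]), X[2])
assert close(bracket(X[1], X[2]), X[0])
assert close(bracket(X[2], X[0]), X[1])
```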

5.2 Lie algebras of the orthogonal and unitary


groups
The groups we are most interested in are the groups of linear transformations
preserving an inner product: the orthogonal and unitary groups. We have seen
that these are subgroups of GL(n, R) or GL(n, C), consisting of those elements
Ω satisfying the condition
ΩΩ† = 1
In order to see what this condition becomes on the Lie algebra, write Ω = etX ,
for some parameter t, and X a matrix in the Lie algebra. Since the transpose of
a product of matrices is the product (order-reversed) of the transposed matrices,
i.e.
(XY )T = Y T X T
and the complex conjugate of a product of matrices is the product of the complex
conjugates of the matrices, one has

$$(e^{tX})^\dagger = e^{tX^\dagger}$$

The condition
ΩΩ† = 1
thus becomes

$$e^{tX}(e^{tX})^\dagger = e^{tX} e^{tX^\dagger} = \mathbf 1$$

Taking the derivative of this equation gives
$$e^{tX} X^\dagger e^{tX^\dagger} + X e^{tX} e^{tX^\dagger} = 0$$

Evaluating this at t = 0 one finds

X + X† = 0

so the matrices we want to exponentiate are skew-adjoint, satisfying

X † = −X

Note that physicists often choose to define the Lie algebra in these cases
as self-adjoint matrices, then multiplying by i before exponentiating to get a
group element. We will not use this definition, with one reason that we want to
think of the Lie algebra as a real vector space, so want to avoid an unnecessary
introduction of complex numbers at this point.
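The claim that skew-adjoint matrices exponentiate into the unitary group can be checked numerically. A sketch (not from the text; X is an arbitrary skew-adjoint example):

```python
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(m):
    # conjugate-transpose
    return [[m[j][i].conjugate() for j in range(2)] for i in range(2)]

def expm(m, terms=25):
    # truncated power series 1 + m + m^2/2! + ...
    total = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[sum(term[i][l] * m[l][j] for l in range(2)) / k for j in range(2)]
                for i in range(2)]
        total = [[total[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return total

X = [[0.5j, 0.3 + 0.2j], [-0.3 + 0.2j, -0.1j]]
# skew-adjointness: X† = -X
assert all(dagger(X)[i][j] == -X[i][j] for i in range(2) for j in range(2))

Omega = expm(X)
prod = mul(Omega, dagger(Omega))   # Omega Omega† should be 1
assert all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-10
           for i in range(2) for j in range(2))
```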

5.2.1 Lie algebra of the orthogonal group


Recall that the orthogonal group O(n) is the subgroup of GL(n, R) of matrices
Ω satisfying ΩT = Ω−1 . We will restrict attention to the subgroup SO(n) of
matrices with determinant 1 which is the component of the group containing
the identity, and thus elements that can be written as

Ω = etX

These give a path connecting Ω to the identity (taking esX , s ∈ [0, t]). We
saw above that the condition ΩT = Ω−1 corresponds to skew-symmetry of the
matrix X
X T = −X
So in the case of G = SO(n), we see that the Lie algebra so(n) is the space of
skew-symmetric (X T = −X) n by n real matrices, together with the bilinear,
antisymmetric product given by the commutator:

(X, Y ) ∈ so(n) × so(n) → [X, Y ] ∈ so(n)

The dimension of the space of such matrices will be

$$1 + 2 + \cdots + (n-1) = \frac{n^2 - n}{2}$$

and a basis will be given by the matrices ϵjk , with j, k = 1, . . . , n, j < k, defined
as

$$(\epsilon_{jk})_{lm} = \begin{cases} -1 & \text{if } j = l,\ k = m\\ +1 & \text{if } j = m,\ k = l\\ 0 & \text{otherwise} \end{cases} \tag{5.2}$$

In chapter 6 we will examine in detail the n = 3 case, where the Lie algebra
so(3) is R3 , realized as the space of antisymmetric real 3 by 3 matrices.

5.2.2 Lie algebra of the unitary group
For the case of the group U (n), the group is connected and one can write all
group elements as etX , where now X is a complex n by n matrix. The unitarity
condition implies that X is skew-adjoint (also called skew-Hermitian), satisfying
X † = −X
So the Lie algebra u(n) is the space of skew-adjoint n by n complex matrices,
together with the bilinear, antisymmetric product given by the commutator:
(X, Y ) ∈ u(n) × u(n) → [X, Y ] ∈ u(n)
Note that these matrices form a subspace of $\mathbf C^{n^2}$ of half the dimension,
so of real dimension n2 . u(n) is a real vector space of dimension n2 , but it
is NOT a space of real n by n matrices. It is the space of skew-Hermitian
matrices, which in general are complex. While the matrices are complex, only
real linear combinations of skew-Hermitian matrices are skew-Hermitian (recall
that multiplication by i changes a skew-Hermitian matrix into a Hermitian
matrix). Within this space of complex matrices, if one looks at the subspace of
real matrices one gets the sub-Lie algebra so(n) of antisymmetric matrices (the
Lie algebra of SO(n) ⊂ U (n)).
Given any complex matrix Z ∈ M (n, C), one can write it as a sum

$$Z = \frac{1}{2}(Z + Z^\dagger) + \frac{1}{2}(Z - Z^\dagger)$$

where the first term is self-adjoint, the second skew-Hermitian. This second
term can also be written as i times a self-adjoint matrix

$$\frac{1}{2}(Z - Z^\dagger) = i\left(\frac{1}{2i}(Z - Z^\dagger)\right)$$

so we see that we can get all of M (n, C) by taking all complex linear combina-
tions of self-adjoint matrices.
There is an identity relating the determinant and the trace of a matrix
det(eX ) = etrace(X)
which can be proved by conjugating the matrix to upper-triangular form and
using the fact that the trace and the determinant of a matrix are conjugation-
invariant. Since the determinant of an SU (n) matrix is 1, this shows that the
Lie algebra su(n) of SU (n) will consist of matrices that are not only skew-
Hermitian, but also of trace zero. So in this case su(n) is again a real vector
space, of dimension n2 − 1.
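The determinant-trace identity and the trace-zero condition can both be checked numerically. A sketch (not from the text; the sample matrices are arbitrary, Y a generic complex matrix and X a traceless skew-Hermitian one):

```python
import cmath

def expm(m, terms=25):
    # truncated power series 1 + m + m^2/2! + ...
    total = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[sum(term[i][l] * m[l][j] for l in range(2)) / k for j in range(2)]
                for i in range(2)]
        total = [[total[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return total

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# det(e^Y) = e^{trace(Y)} for a generic complex matrix
Y = [[0.1 + 0.2j, 0.3], [0.0, 0.5j]]
assert abs(det(expm(Y)) - cmath.exp(Y[0][0] + Y[1][1])) < 1e-10

# a traceless skew-Hermitian matrix exponentiates to determinant 1, i.e. into SU(2)
X = [[0.2j, 0.4], [-0.4, -0.2j]]
assert abs(det(expm(X)) - 1) < 1e-10
```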
One can show that U (n) and u(n) matrices can be diagonalized by conju-
gation by a unitary matrix to show that any U (n) matrix can be written as an
exponential of something in the Lie algebra. The corresponding theorem is also
true for SO(n) but requires looking at diagonalization into 2 by 2 blocks. It is
not true for O(n) (you can’t reach the disconnected component of the identity
by exponentiation). It also turns out to not be true for the groups GL(n, R)
and GL(n, C) for n ≥ 2.

5.3 Lie algebra representations
We have defined a group representation as a homomorphism (a map of groups
preserving group multiplication)

π : G → GL(n, C)

We can similarly define a Lie algebra representation as a map of Lie algebras


preserving the Lie bracket:

Definition (Lie algebra representation). A (complex) Lie algebra representation


(φ, V ) of a Lie algebra g on an n-dimensional complex vector space V is given
by a linear map

φ : X ∈ g → φ(X) ∈ gl(n, C) = M (n, C)

satisfying
φ([X, Y ]) = [φ(X), φ(Y )]

Such a representation is called unitary if its image is in u(n), i.e. it satisfies

φ(X)† = −φ(X)

More concretely, given a basis X1 , X2 , . . . , Xd of a Lie algebra g of dimension
d with structure constants cjkl , a representation is given by a choice of d complex
n-dimensional matrices φ(Xj ) satisfying the commutation relations

$$[\phi(X_j), \phi(X_k)] = \sum_{l=1}^{d} c_{jkl}\, \phi(X_l)$$

The representation is unitary when the matrices are skew-adjoint.


The notion of a Lie algebra is motivated by the fact that the homomorphism
property causes the map π to be largely determined by its behavior infinitesi-
mally near the identity, and thus by the derivative π ′ . One way to define the
derivative of such a map is in terms of velocity vectors of paths, and this sort of
definition in this case associates to a representation π : G → GL(n, C) a linear
map

$$\pi' : g \to M(n, \mathbf C)$$

where

$$\pi'(X) = \frac{d}{dt}\big(\pi(e^{tX})\big)\Big|_{t=0}$$

In the case of U (1) we classified all irreducible representations (homomor-
phisms U (1) → GL(1, C) = C∗ ) by looking at the derivative of the map at
the identity. For general Lie groups G, one can do something similar, show-
ing that a representation π of G gives a representation of the Lie algebra (by
taking the derivative at the identity), and then trying to classify Lie algebra
representations.

Theorem. If π : G → GL(n, C) is a group homomorphism, then

$$\pi' : X \in g \to \pi'(X) = \frac{d}{dt}\big(\pi(e^{tX})\big)\Big|_{t=0} \in gl(n, \mathbf C) = M(n, \mathbf C)$$

satisfies

1. $\pi(e^{tX}) = e^{t\pi'(X)}$

2. For g ∈ G
$$\pi'(gXg^{-1}) = \pi(g)\pi'(X)(\pi(g))^{-1}$$

3. π ′ is a Lie algebra homomorphism:
$$\pi'([X, Y]) = [\pi'(X), \pi'(Y)]$$

Proof. 1. We have

$$\frac{d}{dt}\pi(e^{tX}) = \frac{d}{ds}\pi(e^{(t+s)X})\Big|_{s=0} = \frac{d}{ds}\pi(e^{tX}e^{sX})\Big|_{s=0} = \pi(e^{tX})\frac{d}{ds}\pi(e^{sX})\Big|_{s=0} = \pi(e^{tX})\pi'(X)$$

So $f(t) = \pi(e^{tX})$ satisfies the differential equation $\frac{d}{dt}f = f\pi'(X)$ with
initial condition f (0) = 1. This has the unique solution $f(t) = e^{t\pi'(X)}$.

2. We have

$$e^{t\pi'(gXg^{-1})} = \pi(e^{tgXg^{-1}}) = \pi(ge^{tX}g^{-1}) = \pi(g)\pi(e^{tX})\pi(g)^{-1} = \pi(g)e^{t\pi'(X)}\pi(g)^{-1}$$

Differentiating with respect to t at t = 0 gives

$$\pi'(gXg^{-1}) = \pi(g)\pi'(X)(\pi(g))^{-1}$$

3. Recall that (5.1)

$$[X, Y] = \frac{d}{dt}(e^{tX} Y e^{-tX})\Big|_{t=0}$$

so

$$\begin{aligned}\pi'([X, Y]) &= \pi'\left(\frac{d}{dt}(e^{tX} Y e^{-tX})\Big|_{t=0}\right)\\ &= \frac{d}{dt}\,\pi'(e^{tX} Y e^{-tX})\Big|_{t=0} \quad \text{(by linearity)}\\ &= \frac{d}{dt}\left(\pi(e^{tX})\pi'(Y)\pi(e^{-tX})\right)\Big|_{t=0} \quad \text{(by 2.)}\\ &= \frac{d}{dt}\left(e^{t\pi'(X)}\pi'(Y)e^{-t\pi'(X)}\right)\Big|_{t=0} \quad \text{(by 1.)}\\ &= [\pi'(X), \pi'(Y)]\end{aligned}$$

This theorem shows that we can study Lie group representations (π, V )
by studying the corresponding Lie algebra representation (π ′ , V ). This will
generally be much easier since the π ′ (X) are just linear maps. We will proceed
in this manner in chapter 8 when we construct and classify all SU (2) and SO(3)
representations, finding that the corresponding Lie algebra representations are
much simpler to analyze.

For any Lie group G, we have seen that there is a distinguished representa-
tion, the adjoint representation (Ad, g). The corresponding Lie algebra represen-
tation is also called the adjoint representation, but written as (Ad′ , g) = (ad, g).
From the fact that
$$Ad(e^{tX})(Y) = e^{tX} Y e^{-tX}$$
we can differentiate with respect to t to get the Lie algebra representation

$$ad(X)(Y) = \frac{d}{dt}(e^{tX} Y e^{-tX})\Big|_{t=0} = [X, Y] \tag{5.3}$$
From this we see that one can define

Definition (Adjoint Lie algebra representation). ad is the Lie algebra repre-


sentation given by
X ∈ g → ad(X)
where ad(X) is defined as the linear map from g to itself given by

Y → [X, Y ]

Note that this linear map ad(X), which one can write as [X, ·], can be
thought of as the infinitesimal version of the conjugation action

(·) → etX (·)e−tX

The Lie algebra homomorphism property of ad says that

ad([X, Y ]) = ad(X) ◦ ad(Y ) − ad(Y ) ◦ ad(X)

where these are linear maps on g, with ◦ composition of linear maps, so operating
on Z ∈ g we have

ad([X, Y ])(Z) = (ad(X) ◦ ad(Y ))(Z) − (ad(Y ) ◦ ad(X))(Z)

Using our expression for ad as a commutator, we find

[[X, Y ], Z] = [X, [Y, Z]] − [Y, [X, Z]]

This is called the Jacobi identity. It could have been more simply derived as
an identity about matrix multiplication, but here we see that it is true for a
more abstract reason, reflecting the existence of the adjoint representation. It
can be written in other forms, rearranging terms using antisymmetry of the
commutator, with one example the sum of cyclic permutations

[[X, Y ], Z] + [[Z, X], Y ] + [[Y, Z], X] = 0
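For matrix commutators the Jacobi identity can be verified directly; with integer matrices it holds exactly. A sketch (not from the text; the matrices are arbitrary examples):

```python
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(a, b):
    # matrix commutator [a, b] = ab - ba
    ab, ba = mul(a, b), mul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

X = [[1, 2], [3, 4]]
Y = [[0, 1], [-1, 0]]
Z = [[2, 0], [1, -1]]

# sum of cyclic permutations [[X,Y],Z] + [[Z,X],Y] + [[Y,Z],X]
terms = [bracket(bracket(X, Y), Z),
         bracket(bracket(Z, X), Y),
         bracket(bracket(Y, Z), X)]
total = [[sum(t[i][j] for t in terms) for j in range(2)] for i in range(2)]
assert total == [[0, 0], [0, 0]]
```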

One can define Lie algebras much more abstractly as follows
Definition (Abstract Lie algebra). An abstract Lie algebra over a field k is a
vector space A over k, with a bilinear operation

[·, ·] : (X, Y ) ∈ A × A → [X, Y ] ∈ A

satisfying
1. Antisymmetry:
[X, Y ] = −[Y, X]

2. Jacobi identity:

[[X, Y ], Z] + [[Z, X], Y ] + [[Y, Z], X] = 0

Such Lie algebras do not need to be defined as matrices, and their Lie bracket
operation does not need to be defined in terms of a matrix commutator (al-
though the same notation continues to be used). Later on in this course we
will encounter important examples of Lie algebras that are defined in this more
abstract way.
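Both forms of the Jacobi identity above can be checked numerically for the matrix commutator. The following is a minimal sketch (not from the text), using randomly chosen 3 by 3 real matrices:

```python
import random

# Check the Jacobi identity for the matrix commutator, in both the
# cyclic form and the "ad is a homomorphism" form.

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def bracket(A, B):  # matrix commutator [A, B] = AB - BA
    return sub(mul(A, B), mul(B, A))

random.seed(0)
X, Y, Z = ([[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
           for _ in range(3))

# Sum of cyclic permutations: [[X,Y],Z] + [[Z,X],Y] + [[Y,Z],X] = 0
cyclic = add(add(bracket(bracket(X, Y), Z),
                 bracket(bracket(Z, X), Y)),
             bracket(bracket(Y, Z), X))
assert all(abs(e) < 1e-12 for row in cyclic for e in row)

# Equivalent form: [[X,Y],Z] = [X,[Y,Z]] - [Y,[X,Z]]
diff = sub(bracket(bracket(X, Y), Z),
           sub(bracket(X, bracket(Y, Z)), bracket(Y, bracket(X, Z))))
assert all(abs(e) < 1e-12 for row in diff for e in row)
```

The same check works for matrices of any size, since only bilinearity and associativity of matrix multiplication are used.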

5.4 Complexification
The way we have defined a Lie algebra g, it is a real vector space, not a complex
vector space. Even if G is a group of complex matrices, when it is not GL(n, C)
itself but some subgroup, its tangent space at the identity will not necessarily
be a complex vector space. Consider for example the cases G = U (1) and
G = SU (2), where u(1) = R and su(2) = R3 . While the tangent space to the
group of all invertible complex matrices is a complex vector space, imposing
some condition such as unitarity picks out a subspace which generally is just a
real vector space, not a complex one. So the adjoint representation (Ad, g) is in
general not a complex representation, but a real representation, with

Ad(g) ∈ GL(g) = GL(dim g, R)

The derivative of this is the Lie algebra representation

ad : X ∈ g → ad(X) ∈ gl(dim g, R)

and once we pick a basis of g, we can identify gl(dim g, R) = M (dim g, R). So,
for each X ∈ g we get a real linear operator on a real vector space.
We would however often like to work with not real representations, but
complex representations, since it is for these that Schur’s lemma applies, and
representation operators can be diagonalized. To get from a real Lie algebra
representation to a complex one, we can “complexify”, extending the action of
real scalars to complex scalars. If we are working with real matrices, complex-
ification is nothing but allowing complex entries and using the same rules for
multiplying scalars as before.
More generally, for any real vector space we can define:
Definition. The complexification VC of a real vector space V is the space of
pairs (v1 , v2 ) of elements of V with multiplication by a + bi ∈ C given by

(a + ib)(v1 , v2 ) = (av1 − bv2 , av2 + bv1 )

One should think of the complexification of V as

VC = V + iV

with v1 in the first copy of V , v2 in the second copy. Then the rule for mul-
tiplication by a complex number comes from the standard rules for complex
multiplication. In the cases we will be interested in this level of abstraction is
not really needed, since V will be given as a subspace of a complex space, and VC
will just be the larger subspace you get by taking complex linear combinations
of elements of V .
Given a real Lie algebra g, the complexification gC is pairs of elements
(X1 , X2 ) of g, with the above rule for multiplication by complex scalars. The
Lie bracket on g extends to a Lie bracket on gC by the rule

[(X1 , X2 ), (Y1 , Y2 )] = ([X1 , Y1 ] − [X2 , Y2 ], [X1 , Y2 ] + [X2 , Y1 ])

and gC is a Lie algebra over the complex numbers. In many cases this definition
is isomorphic to something just defined in terms of complex matrices, with the
simplest case
gl(n, R)C = gl(n, C)
Recalling our discussion from section 5.2.2 of u(n), a real Lie algebra, with
elements certain (skew-Hermitian) complex matrices, one can see that complex-
ifying will just give all complex matrices so

u(n)C = gl(n, C)

This example shows that two different real Lie algebras may have the same com-
plexification. For yet another example, since so(n) is the Lie algebra of all real
antisymmetric matrices, so(n)C is the Lie algebra of all complex antisymmetric
matrices.
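The claim u(n)_C = gl(n, C) can be made concrete: any complex matrix M splits as M = A + iB with both A = (M − M†)/2 and B = (M + M†)/2i skew-Hermitian. A small illustrative sketch (plain nested lists, n = 2):

```python
import random

# Split a random complex 2 by 2 matrix into skew-Hermitian pieces,
# showing that complex combinations of u(n) elements give all of gl(n, C).

def dagger(M):  # conjugate transpose M†
    n = len(M)
    return [[M[j][i].conjugate() for j in range(n)] for i in range(n)]

random.seed(1)
n = 2
M = [[complex(random.uniform(-1, 1), random.uniform(-1, 1))
      for _ in range(n)] for _ in range(n)]
Md = dagger(M)

A = [[(M[i][j] - Md[i][j]) / 2 for j in range(n)] for i in range(n)]
B = [[(M[i][j] + Md[i][j]) / complex(0, 2) for j in range(n)] for i in range(n)]

# Both pieces are skew-Hermitian (X† = -X), i.e. lie in u(n) ...
for X in (A, B):
    Xd = dagger(X)
    assert all(abs(Xd[i][j] + X[i][j]) < 1e-12
               for i in range(n) for j in range(n))

# ... and the complex linear combination A + iB reconstructs M.
assert all(abs(A[i][j] + 1j * B[i][j] - M[i][j]) < 1e-12
           for i in range(n) for j in range(n))
```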
We can extend the operators ad(X) on g by complex linearity to turn ad
into a complex representation of gC on the vector space gC itself

ad : Z ∈ gC → ad(Z)

Here Z is now a complex linear combination of elements Xj ∈ g, and ad(Z) is the corresponding complex linear combination of the real matrices ad(Xj ).

5.5 For further reading


The material of this section is quite conventional mathematics, with many good
expositions, although most aimed at a higher level than this course. An example
at the level of this course is the book Naive Lie Theory [60]. It covers basics of
Lie groups and Lie algebras, but without representations. The notes [28] and
book [27] of Brian Hall are a good source to study from. Some parts of the
proofs given here are drawn from those notes.
Chapter 6

The Rotation and Spin Groups in 3 and 4 Dimensions

Among the basic symmetry groups of the physical world is the orthogonal group
SO(3) of rotations about a point in three-dimensional space. The observables
one gets from this group are the components of angular momentum, and under-
standing how the state space of a quantum system behaves as a representation
of this group is a crucial part of the analysis of atomic physics examples and
many others. This is a topic one will find in some version or other in every
quantum mechanics textbook.
Remarkably, it turns out that the quantum systems in nature are often
representations not of SO(3), but of a larger group called Spin(3), one that has
two elements corresponding to every element of SO(3). Such a group exists in
any dimension n, always as a “doubled” version of the orthogonal group SO(n),
one that is needed to understand some of the more subtle aspects of geometry
in n dimensions. In the n = 3 case it turns out that Spin(3) ' SU (2) and we
will study in detail the relationship of SO(3) and SU (2). This appearance of
the unitary group SU (2) is special to geometry in 3 and 4 dimensions, and we
will see that quaternions provide an explanation for this.

6.1 The rotation group in three dimensions


In R2 rotations about the origin are given by elements of SO(2), with a counter-
clockwise rotation by an angle θ given by the matrix
R(θ) = ( cos θ   −sin θ )
       ( sin θ    cos θ )

This can be written as an exponential, R(θ) = e^{θL} = (cos θ)1 + (sin θ)L, for

L = ( 0   −1 )
    ( 1    0 )


Here SO(2) is a commutative Lie group with Lie algebra so(2) = R (it is one-
dimensional, with trivial Lie bracket, all elements of the Lie algebra commute).
Note that we have a representation on V = R2 here, but it is a real representa-
tion, not one of the complex ones we have when we have a representation on a
quantum mechanical state space.
In three dimensions the group SO(3) is 3-dimensional and non-commutative.
Choosing a unit vector w and angle θ, one gets an element R(θ, w) of SO(3),
rotation by θ about the w axis. Using standard basis vectors ej , rotations about
the coordinate axes are given by
R(θ, e1) = ( 1     0        0     )      R(θ, e2) = (  cos θ   0   sin θ )
           ( 0   cos θ   −sin θ  )                  (    0     1     0   )
           ( 0   sin θ    cos θ  )                  ( −sin θ   0   cos θ )

R(θ, e3) = ( cos θ   −sin θ   0 )
           ( sin θ    cos θ   0 )
           (   0        0     1 )
A standard parametrization for elements of SO(3) is in terms of 3 “Euler angles”
φ, θ, ψ with a general rotation given by

R(φ, θ, ψ) = R(ψ, e3 )R(θ, e1 )R(φ, e3 ) (6.1)

i.e. first a rotation about the z-axis by an angle φ, then a rotation by an angle
θ about the new x-axis, followed by a rotation by ψ about the new z-axis.
Multiplying out the matrices gives a rather complicated expression for a rotation
in terms of the three angles, and one needs to figure out what range to choose
for the angles to avoid multiple counting.
The infinitesimal picture near the identity of the group, given by the Lie
algebra structure on so(3), is much easier to understand. Recall that for orthog-
onal groups the Lie algebra can be identified with the space of antisymmetric
matrices, so one in this case has a basis
     ( 0   0    0 )         (  0   0   1 )         ( 0   −1   0 )
l1 = ( 0   0   −1 )    l2 = (  0   0   0 )    l3 = ( 1    0   0 )
     ( 0   1    0 )         ( −1   0   0 )         ( 0    0   0 )

which satisfy the commutation relations

[l1 , l2 ] = l3 , [l2 , l3 ] = l1 , [l3 , l1 ] = l2

Note that these are exactly the same commutation relations satisfied by
the basis vectors X1 , X2 , X3 of the Lie algebra su(2), so so(3) and su(2) are

isomorphic Lie algebras. They both are the vector space R3 with the same Lie
bracket operation on pairs of vectors. This operation is familiar in yet another
context, that of the cross-product of standard basis vectors ej in R3 :

e1 × e2 = e3 , e2 × e3 = e1 , e3 × e1 = e2

We see that the Lie bracket operation

(X, Y ) ∈ R3 × R3 → [X, Y ] ∈ R3

that makes R3 a Lie algebra so(3) is just the cross-product on vectors in R3 .


So far we have three different isomorphic ways of putting a Lie bracket on
R3 , making it into a Lie algebra:

1. Identify R³ with antisymmetric real 3 by 3 matrices and take the matrix commutator as Lie bracket.

2. Identify R³ with skew-adjoint, traceless, complex 2 by 2 matrices and take the matrix commutator as Lie bracket.

3. Use the vector cross-product on R³ to get a Lie bracket, i.e. define

   [v, w] = v × w
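That these three brackets agree can be verified exactly in integer arithmetic. A small sketch (not part of the text) checking that v → v1 l1 + v2 l2 + v3 l3 takes the cross-product to the matrix commutator:

```python
# Verify that the map v -> v1 l1 + v2 l2 + v3 l3 intertwines the
# cross-product on R^3 with the commutator of antisymmetric matrices.

l1 = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
l2 = [[0, 0, 1], [0, 0, 0], [-1, 0, 0]]
l3 = [[0, -1, 0], [1, 0, 0], [0, 0, 0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def bracket(A, B):  # matrix commutator [A, B] = AB - BA
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

def cross(v, w):
    return [v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0]]

def to_matrix(v):  # v -> v1 l1 + v2 l2 + v3 l3
    return [[v[0] * l1[i][j] + v[1] * l2[i][j] + v[2] * l3[i][j]
             for j in range(3)] for i in range(3)]

v, w = [1, 2, 3], [4, 5, 6]
assert bracket(to_matrix(v), to_matrix(w)) == to_matrix(cross(v, w))
print(cross(v, w))  # prints [-3, 6, -3]
```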

Something very special that happens for orthogonal groups only in three di-
mensions is that the vector representation (the defining representation of SO(n)
matrices on Rn ) is isomorphic to the adjoint representation. Recall that any Lie
group G has a representation (Ad, g) on its Lie algebra g. so(n) can be identified with the antisymmetric n by n matrices, so is of (real) dimension (n² − n)/2. Only
for n = 3 is this equal to n, the dimension of the representation on vectors in
Rn . This corresponds to the geometrical fact that only in 3 dimensions is a
plane (in all dimensions rotations are built out of rotations in various planes)
determined uniquely by a vector (the vector perpendicular to the plane). Equiv-
alently, only in 3 dimensions is there a cross-product v × w which takes two
vectors determining a plane to a unique vector perpendicular to the plane.
The isomorphism between the vector representation (πvector , R3 ) on column
vectors and the adjoint representation (Ad, so(3)) on antisymmetric matrices is
given by

( v1 )                               (  0   −v3    v2 )
( v2 )  ↔  v1 l1 + v2 l2 + v3 l3 =   (  v3    0   −v1 )
( v3 )                               ( −v2   v1     0 )
or in terms of bases by
ej ↔ lj
For the vector representation on column vectors, πvector(g) = g and π′vector(X) = X, where X is an antisymmetric 3 by 3 matrix, and g = e^X is an orthogonal 3
by 3 matrix. Both act on column vectors by the usual multiplication.

For the adjoint representation on antisymmetric matrices, one has

        (  0   −v3    v2 )       (  0   −v3    v2 )
Ad(g)   (  v3    0   −v1 )  = g  (  v3    0   −v1 )  g^{−1}
        ( −v2   v1     0 )       ( −v2   v1     0 )

The corresponding Lie algebra representation is given by

        (  0   −v3    v2 )         (  0   −v3    v2 )
ad(X)   (  v3    0   −v1 )  = [X,  (  v3    0   −v1 )  ]
        ( −v2   v1     0 )         ( −v2   v1     0 )
where X is a 3 by 3 antisymmetric matrix.
One can explicitly check that these representations are isomorphic, for in-
stance by calculating how basis elements lj ∈ so(3) act. On vectors, these lj act
by matrix multiplication, giving for instance, for j = 1
l1 e1 = 0, l1 e2 = e3 , l1 e3 = −e2
On antisymmetric matrices one has instead the isomorphic relations
(ad(l1 ))(l1 ) = 0, (ad(l1 ))(l2 ) = l3 , (ad(l1 ))(l3 ) = −l2
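The check on all basis elements can be automated; a short illustrative sketch comparing the action of each lj on column vectors with ad(lj) on antisymmetric matrices:

```python
# Check that e_j <-> l_j intertwines the vector and adjoint representations:
# the coefficients of l_j e_k in the basis {e_i} agree with the coefficients
# of [l_j, l_k] in the basis {l_i}.

l = [
    [[0, 0, 0], [0, 0, -1], [0, 1, 0]],   # l1
    [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],   # l2
    [[0, -1, 0], [1, 0, 0], [0, 0, 0]],   # l3
]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def coeffs(M):
    # read off (v1, v2, v3) from v1 l1 + v2 l2 + v3 l3
    return [M[2][1], M[0][2], M[1][0]]

for j in range(3):
    for k in range(3):
        ek = [[1 if i == k else 0] for i in range(3)]   # column vector e_k
        vec_image = [sum(l[j][i][m] * ek[m][0] for m in range(3))
                     for i in range(3)]                  # l_j e_k
        AB, BA = mul(l[j], l[k]), mul(l[k], l[j])
        comm = [[AB[a][b] - BA[a][b] for b in range(3)] for a in range(3)]
        assert vec_image == coeffs(comm)                 # [l_j, l_k]
```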

6.2 Spin groups in three and four dimensions


A remarkable property of the orthogonal groups SO(n) is that they come with an
associated group, called Spin(n), with every element of SO(n) corresponding
to two distinct elements of Spin(n). If you have seen some topology, what
is at work here is that (for n > 2) the fundamental group of SO(n) is non-
trivial, with π1 (SO(n)) = Z2 (this means there is a non-contractible loop in
SO(n), contractible if you go around it twice). Spin(n) is topologically the
simply-connected double-cover of SO(n), and one can choose the covering map
Φ : Spin(n) → SO(n) to be a group homomorphism. Spin(n) is a Lie group of
the same dimension as SO(n), with an isomorphic tangent space at the identity,
so the Lie algebras of the two groups are isomorphic: so(n) ' spin(n).
In chapter 26 we will explicitly construct the groups Spin(n) for any n but
here we will just do this for n = 3 and n = 4, using methods specific to these two
cases. In the cases n = 5 (where Spin(5) = Sp(2), the 2 by 2 norm-preserving
quaternionic matrices) and n = 6 (where Spin(6) = SU (4)) one can also use
special methods to identify Spin(n) with other matrix groups. For n > 6 the
group Spin(n) will be something truly distinct.
Given such a construction of Spin(n), we also need to explicitly construct
the homomorphism Φ, and show that its derivative Φ′ is an isomorphism of
Lie algebras. We will see that the simplest construction of the spin groups
here uses the group Sp(1) of unit-length quaternions, with Spin(3) = Sp(1)
and Spin(4) = Sp(1) × Sp(1). By identifying quaternions and pairs of complex
numbers, we can show that Sp(1) = SU (2) and thus work with these spin groups
as either 2 by 2 complex matrices (for Spin(3)), or pairs of such matrices (for
Spin(4)).

6.2.1 Quaternions
The quaternions are a number system (denoted by H) generalizing the complex
number system, with elements q ∈ H that can be written as

q = q0 + q1 i + q2 j + q3 k, qi ∈ R

with i, j, k ∈ H satisfying

i2 = j2 = k2 = −1, ij = −ji = k, ki = −ik = j, jk = −kj = i

and a conjugation operation that takes

q → q̄ = q0 − q1 i − q2 j − q3 k

This operation satisfies (for u, v ∈ H)

uv = v̄ū

As a vector space over R, H is isomorphic with R⁴. The length-squared function on this R⁴ can be written in terms of quaternions as

    |q|² = q q̄ = q0² + q1² + q2² + q3²

and is multiplicative since

|uv|² = (uv)(v̄ ū) = u |v|² ū = |u|² |v|²

Using
    q q̄ / |q|² = 1
one has a formula for the inverse of a quaternion

    q^{−1} = q̄ / |q|²

The length one quaternions thus form a group under multiplication, called
Sp(1). There are also Lie groups called Sp(n) for larger values of n, consisting
of invertible matrices with quaternionic entries that act on quaternionic vectors
preserving the quaternionic length-squared, but these play no significant role in
quantum mechanics so we won’t study them further. Sp(1) can be identified
with the three-dimensional sphere since the length one condition on q is

q0² + q1² + q2² + q3² = 1

the equation of the unit sphere S 3 ⊂ R4 .
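Quaternion arithmetic is easy to experiment with. Here is an illustrative Python sketch representing q = q0 + q1 i + q2 j + q3 k as a 4-tuple and checking the multiplication rules, the multiplicativity of the length-squared, and the inverse formula:

```python
# Quaternions as 4-tuples (q0, q1, q2, q3); qmul implements the Hamilton
# product determined by i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j.

def qmul(p, q):
    a, b, c, d = p
    w, x, y, z = q
    return (a*w - b*x - c*y - d*z,
            a*x + b*w + c*z - d*y,
            a*y - b*z + c*w + d*x,
            a*z + b*y - c*x + d*w)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def qnorm2(q):
    return sum(c * c for c in q)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == (-1, 0, 0, 0)
assert qmul(i, j) == k and qmul(j, k) == i and qmul(k, i) == j

u, v = (1, 2, 3, 4), (5, -6, 7, -8)
assert qnorm2(qmul(u, v)) == qnorm2(u) * qnorm2(v)    # |uv|^2 = |u|^2 |v|^2

qinv = tuple(c / qnorm2(u) for c in qconj(u))         # qbar / |q|^2
assert all(abs(a - b) < 1e-12 for a, b in zip(qmul(u, qinv), (1, 0, 0, 0)))
```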

6.2.2 Rotations and spin groups in four dimensions

Pairs (u, v) of unit quaternions give the product group Sp(1) × Sp(1). An
element of this group acts on H = R4 by

q → uqv

This action preserves lengths of vectors and is linear in q, so it must correspond to an element of the group SO(4). One can easily see that pairs (u, v) and
(−u, −v) give the same linear transformation of R4 , so the same element of
SO(4). One can show that SO(4) is the group Sp(1) × Sp(1), with the two
elements (u, v) and (−u, −v) identified. The name Spin(4) is given to the Lie
group Sp(1) × Sp(1) that “double covers” SO(4) in this manner.

6.2.3 Rotations and spin groups in three dimensions

Later on in the course we’ll encounter Spin(4) and SO(4) again, but for now
we’re interested in the subgroup Spin(3) that only acts non-trivially on 3 of the
dimensions, and double-covers not SO(4) but SO(3). To find this, consider the
subgroup of Spin(4) consisting of pairs (u, v) of the form (u, u−1 ) (a subgroup
isomorphic to Sp(1), since elements correspond to a single unit length quaternion
u). This subgroup acts on quaternions by conjugation

q → uqu−1

an action which is trivial on the real quaternions, but nontrivial on the “pure
imaginary” quaternions of the form

q = ~v = v1 i + v2 j + v3 k

An element u ∈ Sp(1) acts on ~v ∈ R3 ⊂ H as

~v → u~v u−1

This is a linear action, preserving the length |~v|, so corresponds to an element of SO(3). We thus have a map (which can easily be checked to be a homomorphism)

Φ : u ∈ Sp(1) → {~v → u~v u−1 } ∈ SO(3)

Both u and −u act in the same way on ~v , so we have two elements in
Sp(1) corresponding to the same element in SO(3). One can show that Φ is a
surjective map (one can get any element of SO(3) this way), so it is what is called
a “covering” map, specifically a two-fold cover. It makes Sp(1) a double-cover of
SO(3), and we give this the name “Spin(3)”. This also allows us to characterize
more simply SO(3) as a geometrical space. It is S 3 = Sp(1) = Spin(3) with
opposite points on the three-sphere identified. This space is known as RP(3),
real projective 3-space, which can also be thought of as the space of lines through
the origin in R4 (each such line intersects S 3 in two opposite points).

For those who have seen some topology, note that the covering map Φ is
an example of a topologically non-trivial cover. It is just not true that topologically S³ ' RP³ × {+1, −1}: S³ is a connected space, not two disconnected pieces. This topological non-triviality implies that globally there is no possible
homomorphism going in the opposite direction from Φ (i.e. SO(3) → Spin(3)).
One can do this locally, picking a local patch in SO(3) and taking the inverse
of Φ to a local patch in Spin(3), but this won’t work if we try and extend it
globally to all of SO(3).
The identification R2 = C allowed us to represent elements of the unit circle
group U(1) as exponentials e^{iθ}, where iθ was in the Lie algebra u(1) = R of
U (1). For Sp(1) one can do much the same thing, with the Lie algebra sp(1)
now the space of all pure imaginary quaternions, which one can identify with
R³ by

    ( w1 )
w = ( w2 ) ∈ R³   ↔   ~w = w1 i + w2 j + w3 k ∈ H
    ( w3 )
Unlike the U (1) case, there’s a non-trivial Lie bracket, just the commutator of
quaternions.
Elements of the group Sp(1) are given by exponentiating such Lie algebra
elements, which we will write in the form

    u(θ, w) = e^{θ ~w} = cos θ + ~w sin θ

where θ ∈ R and ~w is a purely imaginary quaternion of unit length. Taking θ as a parameter, these give paths in Sp(1) going through the identity at θ = 0, with velocity vector ~w, since

    (d/dθ) u(θ, w)|_{θ=0} = (−sin θ + ~w cos θ)|_{θ=0} = ~w

We can explicitly evaluate the homomorphism Φ on such elements u(θ, w) ∈ Sp(1), with the result that Φ takes u(θ, w) to a rotation by an angle 2θ around the axis w:

Theorem 6.1.
    Φ(u(θ, w)) = R(2θ, w)

Proof. First consider the special case w = e3 of rotations about the 3-axis.

    u(θ, e3) = e^{θk} = cos θ + k sin θ

and

    u(θ, e3)^{−1} = e^{−θk} = cos θ − k sin θ
so Φ(u(θ, e3 )) is the rotation that takes v (identified with the quaternion ~v =
v1 i + v2 j + v3 k) to

u(θ, e3) ~v u(θ, e3)^{−1} = (cos θ + k sin θ)(v1 i + v2 j + v3 k)(cos θ − k sin θ)
                          = (v1 (cos² θ − sin² θ) − v2 (2 sin θ cos θ)) i
                            + (2 v1 sin θ cos θ + v2 (cos² θ − sin² θ)) j + v3 k
                          = (v1 cos 2θ − v2 sin 2θ) i + (v1 sin 2θ + v2 cos 2θ) j + v3 k

This is the orthogonal transformation of R³ given by

    ( v1 )     ( cos 2θ   −sin 2θ   0 ) ( v1 )
v = ( v2 )  →  ( sin 2θ    cos 2θ   0 ) ( v2 )               (6.2)
    ( v3 )     (    0         0     1 ) ( v3 )

One can readily do the same calculation for the case of e1 , then use the
Euler angle parametrization of equation 6.1 to show that a general u(θ, w) can
be written as a product of the cases already worked out.
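The w = e3 case of the theorem can be confirmed numerically with the quaternion arithmetic of section 6.2.1; a sketch with illustrative values:

```python
import math

# Conjugation by u(theta, e3) = cos(theta) + k sin(theta) rotates a pure
# imaginary quaternion by the angle 2*theta about the 3-axis (theorem 6.1).

def qmul(p, q):
    a, b, c, d = p
    w, x, y, z = q
    return (a*w - b*x - c*y - d*z,
            a*x + b*w + c*z - d*y,
            a*y - b*z + c*w + d*x,
            a*z + b*y - c*x + d*w)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

theta = 0.7
u = (math.cos(theta), 0.0, 0.0, math.sin(theta))  # unit length, so u^{-1} = ubar
v = (0.0, 0.3, -1.2, 0.5)                         # ~v = v1 i + v2 j + v3 k

rotated = qmul(qmul(u, v), qconj(u))              # u ~v u^{-1}

c, s = math.cos(2 * theta), math.sin(2 * theta)
expected = (0.0,
            c * 0.3 - s * (-1.2),                 # equation 6.2, angle 2*theta
            s * 0.3 + c * (-1.2),
            0.5)
assert all(abs(a - b) < 1e-12 for a, b in zip(rotated, expected))

# u and -u give the same rotation: the double cover
mu = tuple(-x for x in u)
assert all(abs(a - b) < 1e-12
           for a, b in zip(qmul(qmul(mu, v), qconj(mu)), expected))
```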

Notice that as θ goes from 0 to 2π, u(θ, w) traces out a circle in Sp(1). The
homomorphism Φ takes this to a circle in SO(3), one that gets traced out twice
as θ goes from 0 to 2π, explicitly showing the nature of the double covering
above that particular circle in SO(3).
The derivative of the map Φ will be a Lie algebra homomorphism, a linear
map
Φ′ : sp(1) → so(3)

It takes the Lie algebra sp(1) of pure imaginary quaternions to the Lie algebra
so(3) of 3 by 3 antisymmetric real matrices. One can compute it easily on basis vectors, using for instance equation 6.2 above to find, for the case ~w = k,
Φ′(k) = (d/dθ) Φ(cos θ + k sin θ)|_{θ=0}

         ( −2 sin 2θ   −2 cos 2θ   0 )
       = (  2 cos 2θ   −2 sin 2θ   0 )
         (     0           0       0 ) |_{θ=0}

         ( 0   −2   0 )
       = ( 2    0   0 )  = 2 l3
         ( 0    0   0 )

Repeating this on other basis vectors one finds that

Φ′(i) = 2l1 ,   Φ′(j) = 2l2 ,   Φ′(k) = 2l3

Thus Φ′ is an isomorphism of sp(1) and so(3) identifying the bases

    i/2, j/2, k/2   and   l1 , l2 , l3

Note that it is the i/2, j/2, k/2 that satisfy simple commutation relations

    [i/2, j/2] = k/2,   [j/2, k/2] = i/2,   [k/2, i/2] = j/2

6.2.4 The spin group and SU (2)


Instead of doing calculations using quaternions with their non-commutativity
and special multiplication laws, it is more conventional to choose an isomorphism
between quaternions H and a space of 2 by 2 complex matrices, and work just
with matrix multiplication and complex numbers. The Pauli matrices can be
used to give such an isomorphism, taking

1 → 1 = ( 1  0 ),    i → −iσ1 = (  0  −i ),    j → −iσ2 = ( 0  −1 )
        ( 0  1 )                ( −i   0 )                 ( 1   0 )

k → −iσ3 = ( −i  0 )
           (  0  i )

The correspondence between H and 2 by 2 complex matrices is then given by

q = q0 + q1 i + q2 j + q3 k   ↔   ( q0 − iq3    −q2 − iq1 )
                                  ( q2 − iq1     q0 + iq3 )

Since

det ( q0 − iq3    −q2 − iq1 ) = q0² + q1² + q2² + q3²
    ( q2 − iq1     q0 + iq3 )

we see that the length-squared function on quaternions corresponds to the de-
terminant function on 2 by 2 complex matrices. Taking q ∈ Sp(1), so of length
one, the corresponding complex matrix is in SU (2).
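This identification can be tested numerically: the matrix of a product of quaternions should be the product of their matrices, and the determinant should reproduce the length-squared. An illustrative sketch, exact for integer quaternions:

```python
# Verify that q -> 2 by 2 complex matrix is an algebra homomorphism
# and that det corresponds to the length-squared function |q|^2.

def qmul(p, q):
    a, b, c, d = p
    w, x, y, z = q
    return (a*w - b*x - c*y - d*z,
            a*x + b*w + c*z - d*y,
            a*y - b*z + c*w + d*x,
            a*z + b*y - c*x + d*w)

def to_mat(q):
    q0, q1, q2, q3 = q
    return [[complex(q0, -q3), complex(-q2, -q1)],
            [complex(q2, -q1), complex(q0, q3)]]

def mmul(A, B):
    return [[A[i][0] * B[0][j] + A[i][1] * B[1][j] for j in range(2)]
            for i in range(2)]

p, q = (1, 2, 3, 4), (5, -6, 7, -8)

# matrix of the product = product of the matrices
lhs, rhs = to_mat(qmul(p, q)), mmul(to_mat(p), to_mat(q))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(2) for j in range(2))

# det reproduces the length-squared
M = to_mat(q)
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert abs(det - sum(c * c for c in q)) < 1e-9
```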
Under this identification of H with 2 by 2 complex matrices, we have an
identification of Lie algebras sp(1) = su(2) between pure imaginary quaternions
and skew-Hermitian trace-zero 2 by 2 complex matrices
~w = w1 i + w2 j + w3 k   ↔   −i w · σ = ( −iw3         −w2 − iw1 )
                                         ( w2 − iw1      iw3      )

The basis i/2, j/2, k/2 gets identified with a basis for the Lie algebra su(2) which written in terms of the Pauli matrices is

    Xj = −i σj/2
with the Xj satisfying the commutation relations

[X1 , X2 ] = X3 , [X2 , X3 ] = X1 , [X3 , X1 ] = X2

which are precisely the same commutation relations as for so(3)

[l1 , l2 ] = l3 , [l2 , l3 ] = l1 , [l3 , l1 ] = l2
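The su(2) relations are quick to confirm directly; a short sketch with Xj = −iσj/2 as 2 by 2 complex matrices:

```python
# Check the su(2) commutation relations for X_j = -i sigma_j / 2.

s = [
    [[0, 1], [1, 0]],        # sigma_1
    [[0, -1j], [1j, 0]],     # sigma_2
    [[1, 0], [0, -1]],       # sigma_3
]
X = [[[(-0.5j) * e for e in row] for row in sj] for sj in s]

def mmul(A, B):
    return [[A[i][0] * B[0][j] + A[i][1] * B[1][j] for j in range(2)]
            for i in range(2)]

def comm(A, B):
    AB, BA = mmul(A, B), mmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12
               for i in range(2) for j in range(2))

assert close(comm(X[0], X[1]), X[2])   # [X1, X2] = X3
assert close(comm(X[1], X[2]), X[0])   # [X2, X3] = X1
assert close(comm(X[2], X[0]), X[1])   # [X3, X1] = X2
```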

We now have no less than three isomorphic Lie algebras sp(1) = su(2) =
so(3), with elements that get identified as follows
w1 (i/2) + w2 (j/2) + w3 (k/2)  ↔  −(i/2) (  w3         w1 − iw2 )  ↔  (  0   −w3    w2 )
                                          ( w1 + iw2   −w3      )      (  w3    0   −w1 )
                                                                       ( −w2   w1     0 )

This isomorphism identifies basis vectors by

    i/2  ↔  −i σ1/2  ↔  l1
etc. The first of these identifications comes from the way we chose to identify
H with 2 by 2 complex matrices. The second identification is Φ′, the derivative
at the identity of the covering map Φ.
On each of these isomorphic Lie algebras we have adjoint Lie group (Ad)
and Lie algebra (ad) representations. Ad is given by conjugation with the corresponding group elements in Sp(1), SU (2) and SO(3). ad is given by taking
commutators in the respective Lie algebras of pure imaginary quaternions, skew-
Hermitian trace-zero 2 by 2 complex matrices and 3 by 3 real antisymmetric
matrices.
Note that these three Lie algebras are all three-dimensional real vector
spaces, so these are real representations. If one wants a complex representa-
tion, one can complexify and take complex linear combinations of elements.
This is less confusing in the case of su(2) than for sp(1) since taking complex

linear combinations of skew-Hermitian trace-zero 2 by 2 complex matrices just
gives all trace-zero 2 by 2 matrices (the Lie algebra sl(2, C)).
In addition, recall that there is a fourth isomorphic version of this repre-
sentation, the representation of SO(3) on column vectors. This is also a real
representation, but can straightforwardly be complexified. Since so(3) and su(2)
are isomorphic Lie algebras, their complexifications so(3)C and sl(2, C) will also
be isomorphic.
In terms of 2 by 2 complex matrices, one can exponentiate Lie algebra ele-
ments to get group elements in SU (2) and define
θ
Ω(θ, w) = eθ(w1 X1 +w2 X2 +w3 X3 ) = e−i 2 w·σ (6.3)
θ θ
= cos( )1 − i(w · σ) sin( ) (6.4)
2 2
Transposing the argument of theorem 6.1 from H to complex matrices, one finds
that, identifying

    v  ↔  v · σ = (  v3          v1 − iv2 )
                  ( v1 + iv2    −v3       )

one has
Φ(Ω(θ, w)) = R(θ, w)

with Ω(θ, w) acting by conjugation, taking

v · σ → Ω(θ, w)(v · σ)Ω(θ, w)−1 = (R(θ, w)v) · σ (6.5)

Note that in changing from the quaternionic to complex case, we are treating
the factor of 2 differently, since in the future we will want to use Ω(θ, w) to
perform rotations by an angle θ. In terms of the identification SU(2) = Sp(1), we have Ω(θ, w) = u(θ/2, w).
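Equation 6.5 can be checked numerically for w = e3, where Ω(θ, e3) is diagonal; a sketch with illustrative values, confirming that conjugation rotates v by the angle θ (not 2θ) in this convention:

```python
import cmath
import math

# Conjugate v . sigma by Omega(theta, e3) = diag(e^{-i theta/2}, e^{i theta/2})
# and compare with the rotation R(theta, e3) applied to v.

def mmul(A, B):
    return [[A[i][0] * B[0][j] + A[i][1] * B[1][j] for j in range(2)]
            for i in range(2)]

theta = 0.9
v = (0.4, -0.7, 1.1)

Omega = [[cmath.exp(-1j * theta / 2), 0], [0, cmath.exp(1j * theta / 2)]]
OmegaInv = [[cmath.exp(1j * theta / 2), 0], [0, cmath.exp(-1j * theta / 2)]]

vsigma = [[v[2], complex(v[0], -v[1])],
          [complex(v[0], v[1]), -v[2]]]

M = mmul(mmul(Omega, vsigma), OmegaInv)

# R(theta, e3) v, read back off the matrix entries of (R v) . sigma
c, s = math.cos(theta), math.sin(theta)
rv = (c * v[0] - s * v[1], s * v[0] + c * v[1], v[2])
assert abs(M[0][0] - rv[2]) < 1e-12
assert abs(M[1][0] - complex(rv[0], rv[1])) < 1e-12
```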
Recall that any SU(2) matrix can be written in the form

    (  α    β )
    ( −β̄    ᾱ )

    α = q0 − iq3 ,    β = −q2 − iq1

with α, β ∈ C arbitrary complex numbers satisfying |α|2 +|β|2 = 1. One can also
write down a somewhat unenlightening formula for the map Φ : SU (2) → SO(3)
in terms of such explicit SU (2) matrices, getting

    (  α    β )       (  Re(α² − β²)    Im(α² + β²)    −2 Re(αβ)   )
Φ(  ( −β̄    ᾱ )  )  =  ( −Im(α² − β²)    Re(α² + β²)     2 Im(αβ)   )
                      (  2 Re(αβ̄)       2 Im(αβ̄)       |α|² − |β|² )

See [58], page 123-4, for a derivation.
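Independently of the explicit entries, one can verify numerically that conjugation as in equation 6.5 always lands in SO(3); a sketch with a randomly chosen unit quaternion (illustrative only):

```python
import math
import random

# Build a random SU(2) element from a unit quaternion, compute its image
# under Phi column by column via conjugation of the sigma_k, and check the
# resulting 3 by 3 matrix is orthogonal with determinant 1.

def mmul(A, B):
    return [[A[i][0] * B[0][j] + A[i][1] * B[1][j] for j in range(2)]
            for i in range(2)]

random.seed(2)
q = [random.gauss(0, 1) for _ in range(4)]
n = math.sqrt(sum(x * x for x in q))
q0, q1, q2, q3 = (x / n for x in q)
alpha, beta = complex(q0, -q3), complex(-q2, -q1)   # |alpha|^2 + |beta|^2 = 1

g = [[alpha, beta], [-beta.conjugate(), alpha.conjugate()]]
ginv = [[alpha.conjugate(), -beta], [beta.conjugate(), alpha]]   # g^{-1} = g†

def phi_column(k):
    # image of e_k: conjugate sigma_k and read off the vector components
    sigma = [[[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]][k]
    M = mmul(mmul(g, sigma), ginv)
    return [M[1][0].real, M[1][0].imag, M[0][0].real]

R = [[phi_column(k)[i] for k in range(3)] for i in range(3)]

# columns are orthonormal ...
for a in range(3):
    for b in range(3):
        dot = sum(R[i][a] * R[i][b] for i in range(3))
        assert abs(dot - (1 if a == b else 0)) < 1e-12
# ... and det R = 1
det = (R[0][0] * (R[1][1] * R[2][2] - R[1][2] * R[2][1])
       - R[0][1] * (R[1][0] * R[2][2] - R[1][2] * R[2][0])
       + R[0][2] * (R[1][0] * R[2][1] - R[1][1] * R[2][0]))
assert abs(det - 1) < 1e-12
```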

6.3 A summary
To summarize, we have shown that for three dimensions we have two distinct
Lie groups:
• Spin(3), which geometrically is the space S 3 . Its Lie algebra is R3 with
Lie bracket the cross-product. We have seen two different explicit con-
structions of Spin(3), in terms of unit quaternions (Sp(1)), and in terms
of 2 by 2 unitary matrices of determinant 1 (SU (2)).
• SO(3), with the same Lie algebra R3 with the same Lie bracket.
There is a group homomorphism Φ that takes the first group to the second,
which is a two-fold covering map. Its derivative Φ′ is an isomorphism of the Lie
algebras of the two groups.
We can see from these constructions two interesting irreducible representa-
tions of these groups:
• A representation on R3 which can be constructed in two different ways: as
the adjoint representation of either of the two groups, or as the defining
representation of SO(3). This is known to physicists as the “spin 1”
representation.
• A representation of the first group on C2 , which is most easily seen as
the defining representation of SU (2). It is not a representation of SO(3),
since going once around a non-contractible loop starting at the identity
takes one to minus the identity, not back to the identity as required. This
is called the “spin 1/2” or “spinor” representation and will be studied in
more detail in chapter 7.

6.4 For further reading


For another discussion of the relationship of SO(3) and SU (2) as well as a
construction of the map Φ, see [58], sections 4.2 and 4.3, as well as [3], chapter
8, and [60] Chapters 2 and 4.

Chapter 7

Rotations and the Spin 1/2 Particle in a Magnetic Field

The existence of a non-trivial double-cover Spin(3) of the three-dimensional rotation group may seem to be a somewhat obscure mathematical fact. Remarkably though, the existence of fundamental spin-1/2 particles shows that it
is Spin(3) rather than SO(3) that is the symmetry group corresponding to
rotations of fundamental quantum systems. Ignoring the degrees of freedom de-
scribing their motion in space, which we will examine in later chapters, states of
elementary particles such as the electron are described by a state space H = C2 ,
with rotations acting on this space by the two-dimensional irreducible represen-
tation of SU (2) = Spin(3).
This is the same two-state system studied in chapter 3, with the SU (2)
action found there now acquiring an interpretation as corresponding to the
double-cover of rotations of physical space. In this chapter we will revisit that
example, emphasizing the relation to rotations.

7.1 The spinor representation


In chapter 6 we examined in great detail various ways of looking at a particular
three-dimensional irreducible real representation of the groups SO(3), SU (2)
and Sp(1). This was the adjoint representation for those three groups, and
isomorphic to the vector representation for SO(3). In the SU (2) and Sp(1)
cases, there is an even simpler non-trivial irreducible representation than the
adjoint: the representation of 2 by 2 complex matrices in SU (2) on column
vectors C2 by matrix multiplication or the representation of unit quaternions in
Sp(1) on H by scalar multiplication. Choosing an identification C2 = H these
are isomorphic representations on C2 of isomorphic groups, and for calculational
convenience we will use SU (2) and its complex matrices rather than dealing with
quaternions. This irreducible representation is known as the “spinor” or “spin”

representation of Spin(3) = SU (2). The homomorphism πspinor defining the
representation is just the identity map from SU (2) to itself.
The spin representation of SU (2) is not a representation of SO(3). The
double cover map Φ : SU (2) → SO(3) is a homomorphism, so given a rep-
resentation (π, V ) of SO(3) one gets a representation (π ◦ Φ, V ) of SU (2) by
composition. One cannot go in the other direction: there is no homomorphism
SO(3) → SU (2) that would allow us to make the standard representation of
SU (2) on C2 into an SO(3) representation.
One could try and define a representation of SO(3) by
π : g ∈ SO(3) → π(g) = πspinor (g̃) ∈ SU (2)
where g̃ is some choice of one of the elements g̃ ∈ SU (2) satisfying Φ(g̃) = g.
The problem with this is that we won’t quite get a homomorphism. Changing
our choice of g̃ will introduce a minus sign, so π will only be a homomorphism
up to sign
π(g1 )π(g2 ) = ±π(g1 g2 )
The nontrivial nature of the double-covering ensures that there is no way to
completely eliminate all minus signs, no matter how we choose g̃. Examples
like this, which satisfy the representation property only up to a sign ambiguity, are known as “projective representations”. So, the spinor representation
of SU (2) = Spin(3) is only a projective representation of SO(3), not a true
representation of SO(3).
Quantum mechanics texts sometimes deal with this phenomenon by noting
that physically there is an ambiguity in how one specifies the space of states H,
with multiplication by an overall scalar not changing the eigenvalues of operators
or the relative probabilities of observing these eigenvalues. As a result, the sign
ambiguity has no physical effect. It seems more straightforward though to not
try and work with projective representations, but just use the larger group
Spin(3), accepting that this is the correct symmetry group reflecting the action
of rotations on three-dimensional quantum systems.
The spin representation is more fundamental than the vector representation,
in the sense that the spin representation cannot be found just knowing the
vector representation, but the vector representation of SO(3) can be constructed
knowing the spin representation of SU (2). We have seen this in the identification
of R3 with 2 by 2 complex matrices, where rotations become conjugation by spin
representation matrices. Another way of seeing this uses the tensor product, and
is explained in section 9.4.3. Note that taking spinors as fundamental entails
abandoning the descriptions of three-dimensional geometry purely in terms of
real numbers. While the vector representation is a real representation of SO(3)
or Spin(3), the spinor representation is a complex representation.

7.2 The spin 1/2 particle in a magnetic field


In chapter 3 we saw that a general quantum system with H = C2 could be
understood in terms of the action of U (2) on C2 . The self-adjoint observables

correspond (up to a factor of i) to the corresponding Lie algebra representation.
The U (1) ⊂ U (2) subgroup commutes with everything else and can be analyzed
separately, so we will just consider the SU (2) subgroup. For an arbitrary such
system, the group SU (2) has no particular geometric significance. When it
occurs in its role as double-cover of the rotational group, the quantum system
is said to carry “spin”, in particular “spin one-half” (in chapter 8 we will discuss
state spaces of higher spin values).
As before, we take as a standard basis for the Lie algebra su(2) the operators
Xj , j = 1, 2, 3, where

    Xj = −i σj/2
which satisfy the commutation relations

[X1 , X2 ] = X3 , [X2 , X3 ] = X1 , [X3 , X1 ] = X2

To make contact with the physics formalism, we’ll define self-adjoint operators

    Sj = i Xj = σj/2
We could have chosen the other sign, but this is the standard convention of
the physics literature. In general, to a skew-adjoint operator (which is what
one gets from a unitary Lie algebra representation and what exponentiates to
unitary operators) we will associate a self-adjoint operator by multiplying by
i. These self-adjoint operators have real eigenvalues (in this case ±1/2), so are
favored by physicists as observables since such eigenvalues will be related to
experimental results. In the other direction, given a physicist’s observable self-
adjoint operator, we will multiply by −i to get a skew-adjoint operator that can
be exponentiated to get a unitary representation.
Note that the conventional definition of these operators in physics texts
includes a factor of ℏ:

Sj^phys = iℏXj = ℏσj /2

A compensating factor of 1/ℏ is then introduced when exponentiating to get
group elements

Ω(θ, w) = e^{−i(θ/ℏ) w·S^phys} ∈ SU (2)
which do not depend on ℏ. The reason for this convention has to do with the
action of rotations on functions on R3 (see chapter 17) and the appearance of
ℏ in the definition of the momentum operator. Our definitions of Sj and of
rotations using (see equation 6.3)

Ω(θ, w) = e^{−iθ w·S} = e^{θ w·X}

will not include these factors of ℏ, but in any case they will be equivalent to
the physics text definitions when we make our standard choice of working with
units such that ℏ = 1.
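These conventions are easy to verify numerically. The following sketch (Python with NumPy; not part of the text, the names are ours) builds the Pauli matrices and checks the commutation relations and the ±1/2 eigenvalues:

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# su(2) basis X_j = -i sigma_j/2 and the self-adjoint S_j = i X_j = sigma_j/2
X = [-0.5j * s for s in (s1, s2, s3)]
S = [0.5 * s for s in (s1, s2, s3)]

def comm(a, b):
    return a @ b - b @ a

assert np.allclose(comm(X[0], X[1]), X[2])       # [X1, X2] = X3
assert np.allclose(comm(S[0], S[1]), 1j * S[2])  # [S1, S2] = i S3

# each S_j has real eigenvalues +1/2 and -1/2
for Sj in S:
    assert np.allclose(sorted(np.linalg.eigvalsh(Sj)), [-0.5, 0.5])
```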

States in H = C2 that have a well-defined value of the observable Sj will be
the eigenvectors of Sj , with value for the observable the corresponding eigenvalue,
which will be ±1/2. Measurement theory postulates that if we perform the
measurement corresponding to Sj on an arbitrary state |ψ⟩, then we will

• with probability c+ get a value of +1/2 and leave the state in an eigenvector
|j, +1/2⟩ of Sj with eigenvalue +1/2

• with probability c− get a value of −1/2 and leave the state in an eigenvector
|j, −1/2⟩ of Sj with eigenvalue −1/2

where if
|ψ⟩ = α|j, +1/2⟩ + β|j, −1/2⟩
we have
c+ = |α|²/(|α|² + |β|²),   c− = |β|²/(|α|² + |β|²)

After such a measurement, any attempt to measure another Sk , k ≠ j will give
±1/2 with equal probability and put the system in a corresponding eigenvector
of Sk .
If a quantum system is in an arbitrary state |ψ⟩ it may not have a well-defined
value for some observable A, but one can calculate the “expected value”
of A. This is the sum over a basis of H consisting of eigenvectors (which will
all be orthogonal) of the corresponding eigenvalues, weighted by the probability
of their occurrence. The calculation of this sum in this case (A = Sj ) using
expansion in eigenvectors of Sj gives

⟨ψ|A|ψ⟩/⟨ψ|ψ⟩ = (ᾱ⟨j, +1/2| + β̄⟨j, −1/2|) A (α|j, +1/2⟩ + β|j, −1/2⟩) /
                ((ᾱ⟨j, +1/2| + β̄⟨j, −1/2|)(α|j, +1/2⟩ + β|j, −1/2⟩))
              = (|α|²(+1/2) + |β|²(−1/2)) / (|α|² + |β|²)
              = c+ (+1/2) + c− (−1/2)
One often chooses to simplify such calculations by normalizing states so that
the denominator ⟨ψ|ψ⟩ is 1. Note that the same calculation works in general
for the probability of measuring the various eigenvalues of an observable A, as
long as one has orthogonality and completeness of eigenvectors.
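As a check on the formulas for c± and the expectation value, one can work a small example numerically (Python/NumPy; the amplitudes below are arbitrary and deliberately unnormalized):

```python
import numpy as np

S3 = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# arbitrary (unnormalized) amplitudes in the S3 eigenbasis: psi = alpha|+> + beta|->
alpha, beta = 2.0 + 1.0j, 1.0 - 3.0j
psi = np.array([alpha, beta])

norm2 = abs(alpha)**2 + abs(beta)**2
c_plus = abs(alpha)**2 / norm2
c_minus = abs(beta)**2 / norm2

# expectation value <psi|S3|psi> / <psi|psi>
expect = (psi.conj() @ S3 @ psi).real / (psi.conj() @ psi).real

assert np.isclose(c_plus + c_minus, 1.0)
assert np.isclose(expect, 0.5 * c_plus - 0.5 * c_minus)
```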
In the case of a spin one-half particle, the group Spin(3) = SU (2) acts on
states by the spinor representation with the element Ω(θ, w) ∈ SU (2) acting as

|ψ⟩ → Ω(θ, w)|ψ⟩

As we saw in chapter 6, the Ω(θ, w) also act on self-adjoint matrices by conju-


gation, an action that corresponds to rotation of vectors when one makes the
identification
v ↔v·σ

(see equation 6.5). Under this identification the Sj correspond (up to a factor
of 2) to the basis vectors ej . One can write their transformation rule as

Sj → Sj′ = Ω(θ, w)Sj Ω(θ, w)−1

and

(S1′, S2′, S3′)T = R(θ, w)T (S1, S2, S3)T
Note that, recalling the discussion in section 4.1, rotations on sets of basis
vectors like this involve the transpose R(θ, w)T of the matrix R(θ, w) that acts
on coordinates.
In chapter 42 we will get to the physics of electromagnetic fields and how
particles interact with them in quantum mechanics, but for now all we need to
know is that for a spin one-half particle, the spin degree of freedom that we are
describing by H = C2 has a dynamics described by the Hamiltonian

H = −µ · B (7.1)

Here B is the vector describing the magnetic field, and


µ = g (−e/2mc) S
is an operator called the magnetic moment operator. The constants that appear
are: −e the electric charge, c the speed of light, m the mass of the particle, and g,
a dimensionless number called the “gyromagnetic ratio”, which is approximately
2 for an electron, about 5.6 for a proton.
The Schrödinger equation is

d/dt |ψ(t)⟩ = −i(−µ · B)|ψ(t)⟩
with solution
|ψ(t)⟩ = U (t)|ψ(0)⟩
where
U (t) = e^{itµ·B} = e^{−it(ge/2mc) S·B} = e^{t(ge/2mc) X·B} = e^{t(ge|B|/2mc) X·B/|B|}

The time evolution of a state is thus given at time t by the same SU (2) element
that, acting on vectors, gives a rotation about the axis w = B/|B| by an angle

ge|B|t/2mc

so is a rotation about w taking place with angular velocity ge|B|/2mc.
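This precession can be confirmed numerically. In the sketch below (Python/NumPy, not from the text) the constants are lumped into a single illustrative frequency omega standing in for ge|B|/2mc (units with ℏ = 1), and U(t) is built from the standard closed form e^{−i(θ/2) w·σ} = cos(θ/2) 1 − i sin(θ/2) w·σ:

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
S = [0.5 * s for s in sig]

omega = 0.7                        # stands in for ge|B|/2mc (illustrative value)
B_hat = np.array([0.0, 0.0, 1.0])  # field direction, here the z-axis
H = omega * sum(b * Sj for b, Sj in zip(B_hat, S))   # H = -mu.B with these constants

psi0 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)  # S1 eigenvector: spin along x

def spin_vector(psi):
    return np.array([(psi.conj() @ Sj @ psi).real for Sj in S])

# U(t) = e^{-itH} via the closed form for exponentiating spin 1/2 generators
t = 1.3
theta = omega * t
sigma_n = sum(b * s for b, s in zip(B_hat, sig))
U = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sigma_n
psi_t = U @ psi0

# the expectation vector precesses about B_hat by the angle omega*t
v0, vt = spin_vector(psi0), spin_vector(psi_t)
assert np.allclose(v0, [0.5, 0.0, 0.0])
assert np.allclose(vt, [0.5 * np.cos(theta), 0.5 * np.sin(theta), 0.0])
```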
The amount of non-trivial physics that is described by this simple system is
impressive, including:

• The Zeeman effect: this is the splitting of atomic energy levels that occurs
when an atom is put in a constant magnetic field. With respect to the
energy levels for no magnetic field, where both states in H = C2 have the
same energy, the term in the Hamiltonian given above adds

±ge|B|/4mc
to the two energy levels, giving a splitting between them proportional to
the size of the magnetic field.

• The Stern-Gerlach experiment: here one passes a beam of spin one-half


quantum systems through an inhomogeneous magnetic field. We have not
yet discussed particle motion, so more is involved here than the simple
two-state system. However, it turns out that one can arrange this in such
a way as to pick out a specific direction w, and split the beam into two
components, of eigenvalue +1/2 and −1/2 for the operator w · S.

• Nuclear magnetic resonance spectroscopy: one can subject a spin one-half


system to a time-varying magnetic field B(t), and such a system will be
described by the same Schrödinger equation, although now the solution
cannot be found just by exponentiating a matrix. Nuclei of atoms provide
spin one-half systems that can be probed with time and space-varying
magnetic fields, allowing imaging of the material that they make up.

• Quantum computing: attempts to build a quantum computer involve try-


ing to put together multiple systems of this kind (qubits), keeping them
isolated from perturbations by the environment, but still allowing inter-
action with the system in a way that preserves its quantum behavior.
The 2012 Physics Nobel prize was awarded for experimental work making
progress in this direction.

7.3 The Heisenberg picture


So far in this course we’ve been describing what is known as the Schrödinger
picture of quantum mechanics. States in H are functions of time, obeying
the Schrödinger equation determined by a Hamiltonian observable H, while
observable self-adjoint operators A are time-independent. Time evolution is
given by a unitary transformation

U (t) = e−itH ,   |ψ(t)⟩ = U (t)|ψ(0)⟩

One can instead use U (t) to make a unitary transformation that puts the
time-dependence in the observables, removing it from the states, as follows:

|ψ(t)⟩ → |ψ(t)⟩H = U −1 (t)|ψ(t)⟩ = |ψ(0)⟩,   A → AH (t) = U −1 (t)AU (t)

where the “H” subscripts for “Heisenberg” indicate that we are dealing with
“Heisenberg picture” observables and states. One can easily see that the physi-
cally observable quantities given by eigenvalues and expectations values remain
the same:

H⟨ψ(t)| AH (t) |ψ(t)⟩H = ⟨ψ(t)|U (t)(U −1 (t)AU (t))U −1 (t)|ψ(t)⟩ = ⟨ψ(t)|A|ψ(t)⟩

In the Heisenberg picture the dynamics is given by a differential equation


not for the states but for the operators. Recall from our discussion of the adjoint
representation (see equation 5.1) the formula
d/dt (etX Y e−tX ) = ((d/dt)(etX Y ))e−tX + etX Y ((d/dt) e−tX )
                   = X etX Y e−tX − etX Y e−tX X

Using this with


Y = A, X = iH
we find
d/dt AH (t) = [iH, AH (t)] = i[H, AH (t)]
and this equation determines the time evolution of the observables in the Heisen-
berg picture.
Applying this to the case of the spin one-half system in a magnetic field, and
taking for our observable S (the Sj , taken together as a column vector) we find

d/dt SH (t) = i[H, SH (t)] = i (eg/2mc) [SH (t) · B, SH (t)]     (7.2)
We know from the discussion above that the solution will be

SH (t) = U (t)−1 SH (0)U (t)

for

U (t) = e^{−it(ge|B|/2mc) S·B/|B|}

and thus the spin vector observable evolves in the Heisenberg picture by rotating
about the magnetic field vector with angular velocity ge|B|/2mc.
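A small numerical check (Python/NumPy; the Hamiltonian and state are arbitrary choices, not from the text) that the two pictures give the same expectation values, and that AH(t) obeys the evolution equation above:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

omega, t = 0.7, 1.3
H = 0.5 * omega * s3                         # an illustrative Hamiltonian, omega*S3
U = np.diag(np.exp(-1j * t * np.diag(H)))    # U(t) = e^{-itH} (H is diagonal here)

A = 0.5 * s1                                 # the observable S1
A_H = U.conj().T @ A @ U                     # Heisenberg picture: A_H(t) = U^{-1} A U

psi0 = np.array([0.6, 0.8], dtype=complex)
psi_t = U @ psi0

# both pictures give the same expectation value
assert np.isclose((psi_t.conj() @ A @ psi_t).real,
                  (psi0.conj() @ A_H @ psi0).real)

# d/dt A_H(t) = i[H, A_H(t)], checked by a finite difference
dt = 1e-6
U2 = np.diag(np.exp(-1j * (t + dt) * np.diag(H)))
A_H2 = U2.conj().T @ A @ U2
deriv = (A_H2 - A_H) / dt
assert np.allclose(deriv, 1j * (H @ A_H - A_H @ H), atol=1e-5)
```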

7.4 The Bloch sphere and complex projective space
There is a different approach one can take to characterizing states of a quantum
system with H = C2 . Multiplication of vectors in H by a non-zero complex
number does not change eigenvectors, eigenvalues or expectation values, so ar-
guably has no physical effect. Multiplication by a real scalar just corresponds
to a change in normalization of the state, and we will often use this freedom
to work with normalized states, those satisfying ⟨ψ|ψ⟩ = 1. With normalized

states, one still has the freedom to multiply states by a phase eiθ without chang-
ing eigenvectors, eigenvalues or expectation values. In terms of group theory,
the overall U (1) in the unitary group U (2) acts on H by a representation of
U (1), which can be characterized by an integer, the corresponding “charge”,
but this decouples from the rest of the observables and is not of much interest.
One is mainly interested in the SU (2) part of the U (2), and the observables
that correspond to its Lie algebra.
Working with normalized states in this case corresponds to working with
unit-length vectors in C2 , which are given by points on the unit sphere S 3 . If
we don’t care about the overall U (1) action, we can imagine identifying all states
that are related by a phase transformation. Using this equivalence relation we
can define a new set, whose elements are the “cosets”, elements of S 3 ⊂ C2 ,
with elements that differ just by multiplication by eiθ identified. The set of these
elements forms a new geometrical space, called the “coset space”, often written
S 3 /U (1). This structure is called a “fibering” of S 3 by circles, and is known
as the “Hopf fibration”. Try an internet search for various visualizations of the
geometrical structure involved, a surprising decomposition of three-dimensional
space into non-intersecting curves.
The same space can be represented in a different way, as C2 /C∗ , by taking
all elements of C2 and identifying those related by multiplication by a non-zero
complex number. If we were just using real numbers, R2 /R∗ can be thought of
as the space of all lines in the plane going through the origin.

One sees that each such line hits the unit circle in two opposite points, so

this set could be parametrized by a semi-circle, identifying the points at the
two ends. This space is given the name RP 1 , the “real projective line”, and
the analog space of lines through the origin in Rn is called RP n−1 . What we
are interested in is the complex analog CP 1 , which is often called the “complex
projective line”.
To better understand CP 1 , one would like to put coordinates on it. A
standard way to choose such a coordinate is to associate to the vector

(z1 , z2 ) ∈ C2

the complex number z1 /z2 . Overall multiplication by a complex number will


drop out in this ratio, so one gets different values for the coordinate z1 /z2 for
each different coset element, and it appears that elements of CP 1 correspond to
points on the complex plane. There is however one problem with this coordinate:
the coset of (1, 0)
does not have a well-defined value: as one approaches this point one moves off
to infinity in the complex plane. In some sense the space CP 1 is the complex
plane, but with a “point at infinity” added.
It turns out that CP 1 is best thought of not as a plane together with a
point, but as a sphere, with the relation to the plane and the point at infinity
given by stereographic projection. Here one creates a one-to-one mapping by
considering the lines that go from a point on the sphere to the north pole of
the sphere. Such lines will intersect the plane in a point, and give a one-to-one
mapping between points on the plane and points on the sphere, except for the
north pole. Now, one can identify the north pole with the “point at infinity”,
and thus the space CP 1 can be identified with the space S 2 . The picture looks
like this

and the equations relating coordinates (X1 , X2 , X3 ) on the sphere and the complex
coordinate z1 /z2 = z = x + iy on the plane are given by

x = X1 /(1 − X3 ),   y = X2 /(1 − X3 )

and

X1 = 2x/(x² + y² + 1),   X2 = 2y/(x² + y² + 1),   X3 = (x² + y² − 1)/(x² + y² + 1)
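These formulas are easy to test; the sketch below (Python/NumPy, with hypothetical helper names) checks that the image of a point lies on the unit sphere and that the two maps are inverse to each other:

```python
import numpy as np

def plane_to_sphere(z):
    """Inverse stereographic projection: z = x + iy to (X1, X2, X3) on S^2."""
    x, y = z.real, z.imag
    d = x * x + y * y + 1
    return np.array([2 * x / d, 2 * y / d, (x * x + y * y - 1) / d])

def sphere_to_plane(X):
    """Stereographic projection from the north pole (0, 0, 1)."""
    X1, X2, X3 = X
    return complex(X1 / (1 - X3), X2 / (1 - X3))

z = complex(0.3, -1.2)        # an arbitrary point in the plane
Xs = plane_to_sphere(z)

assert np.isclose(np.linalg.norm(Xs), 1.0)       # image lies on the unit sphere
w = sphere_to_plane(Xs)
assert np.isclose(w.real, z.real) and np.isclose(w.imag, z.imag)
```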

The action of SU (2) on H by

(z1 , z2 ) → (αz1 + βz2 , −β̄z1 + ᾱz2 )

takes

z = z1 /z2 → (αz + β)/(−β̄z + ᾱ)
Such transformations of the complex plane are conformal (angle-preserving)
transformations known as “Möbius transformations”. One can check that the
corresponding transformation on the sphere is the rotation of the sphere in R3
corresponding to this SU (2) = Spin(3) transformation.
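One can carry out this check numerically. The sketch below (Python/NumPy; sample values arbitrary) verifies that acting with g ∈ SU (2) on a spinor induces the Möbius map on z = z1/z2, and that conjugation in the Pauli basis gives a rotation R ∈ SO(3) that rotates the vector of σ-expectation values (the identification from chapter 6):

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

alpha, beta = 0.6 + 0.48j, 0.4 - 0.5j
scale = np.sqrt(abs(alpha)**2 + abs(beta)**2)   # normalize: |alpha|^2 + |beta|^2 = 1
alpha, beta = alpha / scale, beta / scale
g = np.array([[alpha, beta], [-np.conj(beta), np.conj(alpha)]])

# acting with g on a spinor and taking the ratio gives the Mobius map on z = z1/z2
z1, z2 = 0.7 - 0.2j, 1.0 + 0.3j
w1, w2 = g @ np.array([z1, z2])
z = z1 / z2
assert np.isclose(w1 / w2, (alpha * z + beta) / (-np.conj(beta) * z + np.conj(alpha)))

# the rotation Phi(g): g sigma_k g^dagger = sum_j R_jk sigma_j
R = np.array([[0.5 * np.trace(si @ g @ sj @ g.conj().T).real for sj in sig]
              for si in sig])
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)

def bloch(psi):
    nrm = (psi.conj() @ psi).real
    return np.array([(psi.conj() @ s @ psi).real / nrm for s in sig])

# the vector of sigma-expectation values rotates by R
psi = np.array([z1, z2])
assert np.allclose(bloch(g @ psi), R @ bloch(psi))
```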
To mathematicians, this sphere identified with CP 1 is known as the “Rie-
mann sphere”, whereas physicists often instead use the terminology of “Bloch
sphere”. It provides a useful parametrization of the states of the qubit system,
up to scalar multiplication, which is supposed to be physically irrelevant. The

North pole is the “spin-up” state, the South pole is the “spin-down” state, and
along the equator one finds the two states that have definite values for S1 , as
well as the two that have definite values for S2 .

Notice that the inner product on vectors in H does not correspond at all
to the inner product of unit vectors in R3 . The North and South poles of the
Bloch sphere correspond to orthogonal vectors in H, but they are not at all
orthogonal thinking of the corresponding points on the Bloch sphere as vectors
in R3 . Similarly, eigenvectors for S1 and S2 are orthogonal on the Bloch sphere,
but not at all orthogonal in H.

Many of the properties of the Bloch sphere parametrization of states in H are
special to the fact that H = C2 . In the next class we will study systems of spin
n/2, where H = Cn+1 . In these cases there is still a two-dimensional Bloch
sphere, but only certain states in H are parametrized by it. We will see other
examples of systems with “coherent states” analogous to the states parametrized
by the Bloch sphere, but the case H = C2 has the special property that all states
(up to scalar multiplication) are such “coherent states”.

7.5 For further reading
Just about every quantum mechanics textbook works out this example of a spin
1/2 particle in a magnetic field. For one example, see chapter 14 of [57]. For
an inspirational discussion of spin and quantum mechanics, together with more
about the Bloch sphere, see chapter 22 of [45].

Chapter 8

Representations of SU (2)
and SO(3)

For the case of G = U (1), in chapter 2 we were able to classify all complex
irreducible representations by an element of Z and explicitly construct each
irreducible representation. We would like to do the same thing here for repre-
sentations of SU (2) and SO(3). The end result will be that irreducible repre-
sentations of SU (2) are classified by a non-negative integer n = 0, 1, 2, 3, · · · ,
and have dimension n + 1, so we’ll (hoping for no confusion with the irreducible
representations (πn , C) of U (1)) denote them (πn , Cn+1 ). For even n these will
also be irreducible representations of SO(3), but this will not be true for odd
n. It is common in physics to label these representations by s = n2 = 0, 21 , 1, · · ·
and call the representation labeled by s the “spin s representation”. We already
know the first three examples:
• Spin 0: (π0 , C) is the trivial representation on C, with
π0 (g) = 1 ∀g ∈ SU (2)
This is also a representation of SO(3). In physics, this is sometimes called
the “scalar representation”. Saying that something transforms under ro-
tations as the “scalar representation” just means that it is invariant under
rotations.
• Spin 1/2: Taking
π1 (g) = g ∈ SU (2) ⊂ U (2)
gives the defining representation on C2 . This is the spinor representation
discussed in chapter 7. It is not a representation of SO(3).
• Spin 1: Since SO(3) is a group of 3 by 3 matrices, it acts on vectors in R3 .
This is just the standard action on vectors by rotation. In other words,
the representation is (ρ, R3 ), with ρ the identity homomorphism
g ∈ SO(3) → ρ(g) = g ∈ SO(3)

One can complexify to get a representation on C3 , which in this case just
means acting with SO(3) matrices on column vectors, replacing the real
coordinates of vectors by complex coordinates. This is sometimes called
the “vector representation”, and we saw in chapter 6 that it is isomorphic
to the adjoint representation.
One gets a representation (π2 , C3 ) of SU (2) by just composing the homo-
morphisms Φ and ρ:

π2 = ρ ◦ Φ : SU (2) → SO(3)

This is the adjoint representation of SU (2).

8.1 Representations of SU (2): classification


8.1.1 Weight decomposition
If we make a choice of a U (1) ⊂ SU (2), then given any representation (π, V ) of
SU (2) of dimension m, we get a representation (π|U (1) , V ) of U (1) by restriction
to the U (1) subgroup. Since we know the classification of irreducibles of U (1),
we know that
(π|U (1) , V ) = Cq1 ⊕ Cq2 ⊕ · · · ⊕ Cqm
for q1 , q2 , · · · , qm ∈ Z, where Cq denotes the one-dimensional representation
of U (1) corresponding to the integer q. These are called the “weights” of the
representation V . They are exactly the same thing we discussed earlier as
“charges”, but here we’ll favor the mathematician’s terminology since the U (1)
here occurs in a context far removed from that of electromagnetism and its
electric charges.
Since our standard choice of coordinates (the Pauli matrices) picks out the
z-direction and diagonalizes the action of the U (1) subgroup corresponding to
rotation about this axis, this is the U (1) subgroup we will choose to define the
weights of the SU (2) representation V . This is the subgroup of elements of
SU (2) of the form

diag(eiθ , e−iθ )
Our decomposition of an SU (2) representation (π, V ) into irreducible represen-
tations of this U (1) subgroup equivalently means that we can choose a basis of
V so that

π(diag(eiθ , e−iθ )) = diag(eiθq1 , eiθq2 , · · · , eiθqm )
An important property of the set of integers qj is the following:

Theorem. If q is in the set {qj }, so is −q.

Proof. Recall that if we diagonalize a unitary matrix, the diagonal entries are
the eigenvalues, but their order is undetermined: acting by permutations on
these eigenvalues we get different diagonalizations of the same matrix. In the
case of SU (2) the matrix

P = [[0, 1], [−1, 0]]
has the property that conjugation by it permutes the diagonal elements, in
particular

P diag(eiθ , e−iθ ) P −1 = diag(e−iθ , eiθ )

So

π(P ) π(diag(eiθ , e−iθ )) π(P )−1 = π(diag(e−iθ , eiθ ))
and we see that π(P ) gives a change of basis of V such that the representation
matrices on the U (1) subgroup are as before, with θ → −θ. Changing θ → −θ
in the representation matrices is equivalent to changing the sign of the weights
qj . The elements of the set {qj } are independent of the basis, so the additional
symmetry under sign change implies that for each non-zero element in the set
there is another one with the opposite sign.
Looking at our three examples so far, we see that the scalar or spin 0 repre-
sentation of course is one-dimensional of weight 0

(π0 , C) = C0
and the spinor or spin 1/2 representation decomposes into U (1) irreducibles of
weights −1, +1:
(π1 , C2 ) = C−1 ⊕ C+1
For the spin 1 representation, recall that our double-cover homomorphism
Φ takes

diag(eiθ , e−iθ ) ∈ SU (2) →
( cos 2θ   − sin 2θ   0 )
( sin 2θ     cos 2θ   0 )   ∈ SO(3)
(   0          0      1 )

Acting with the SO(3) matrix on the right on C3 will give a unitary transforma-
tion of C3 , so in the group U (3). One can show that the upper left diagonal 2 by
2 block acts on C2 with weights −2, +2, whereas the bottom right element acts
trivially on the remaining part of C3 , which is a one-dimensional representation
of weight 0. So, the spin 1 representation decomposes as

(π2 , C3 ) = C−2 ⊕ C0 ⊕ C+2

Recall that the spin 1 representation of SU (2) is often called the “vector” rep-
resentation, since it factors in this way through the representation of SO(3) by
rotations on three-dimensional vectors.
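The weight decomposition of the spin 1 representation can be read off numerically from the eigenvalues of the rotation matrix above; a sketch (Python/NumPy, with an arbitrary value of θ):

```python
import numpy as np

theta = 0.3   # arbitrary rotation parameter

# the image of diag(e^{i theta}, e^{-i theta}) under the double cover Phi
M = np.array([[np.cos(2 * theta), -np.sin(2 * theta), 0.0],
              [np.sin(2 * theta),  np.cos(2 * theta), 0.0],
              [0.0, 0.0, 1.0]]).astype(complex)

# each eigenvalue is e^{i q theta}; recover the integer weights q
evals = np.linalg.eigvals(M)
weights = sorted(int(round(q)) for q in np.angle(evals) / theta)
assert weights == [-2, 0, 2]
```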

8.1.2 Lie algebra representations: raising and lowering operators
To proceed further in characterizing a representation (π, V ) of SU (2) we need
to use not just the action of the chosen U (1) subgroup, but the action of
group elements in the other two directions away from the identity. The non-
commutativity of the group keeps us from simultaneously diagonalizing those
actions and assigning weights to them. We can however work instead with the
corresponding Lie algebra representation (π′, V ) of su(2). As in the U (1) case,
the group representation is determined by the Lie algebra representation. We
will see that for the Lie algebra representation, we can exploit the complexifica-
tion (recall section 5.4) sl(2, C) of su(2) to further analyze the possible patterns
of weights.
Recall that the Lie algebra su(2) can be thought of as the tangent space R3
to SU (2) at the identity element, with a basis given by the three skew-adjoint
2 by 2 matrices
Xj = −i σj /2
which satisfy the commutation relations

[X1 , X2 ] = X3 , [X2 , X3 ] = X1 , [X3 , X1 ] = X2

We will often use the self-adjoint versions Sj = iXj that satisfy

[S1 , S2 ] = iS3 , [S2 , S3 ] = iS1 , [S3 , S1 ] = iS2

A unitary representation (π, V ) of SU (2) of dimension m is given by a homo-


morphism
π : SU (2) → U (m)
and we can take the derivative of this to get a map between the tangent spaces
of SU (2) and of U (m), at the identity of both groups, and thus a Lie algebra
representation
π′ : su(2) → u(m)
which takes skew-adjoint 2 by 2 matrices to skew-adjoint m by m matrices,
preserving the commutation relations.
We have seen in section 8.1.1 that restricting the representation (π, V ) to
the diagonal U (1) subgroup of SU (2) and decomposing into irreducibles tells us
that we can choose a basis of V so that

(π, V ) = (πq1 , C) ⊕ (πq2 , C) ⊕ · · · ⊕ (πqm , C)

For our choice of U (1) as all matrices of the form

ei2θS3 = diag(eiθ , e−iθ )

with eiθ going around U (1) once as θ goes from 0 to 2π, this means we can
choose a basis of V so that

π(ei2θS3 ) = diag(eiθq1 , eiθq2 , · · · , eiθqm )
Taking the derivative of this representation to get a Lie algebra representation,
using
π′(X) = (d/dθ) π(eθX )|θ=0

we find for X = i2S3

π′(i2S3 ) = (d/dθ) diag(eiθq1 , · · · , eiθqm )|θ=0 = diag(iq1 , iq2 , · · · , iqm )

Recall that π′ is a real-linear map from a real vector space (su(2) = R3 ) to
another real vector space (u(m), the skew-Hermitian m by m complex matrices).
We can use complex linearity to extend any such map to a complex-linear map
from su(2)C (the complexification of su(2)) to u(m)C (the complexification of
u(m)). su(2)C is all complex linear combinations of the skew-adjoint, trace-
free 2 by 2 matrices: the Lie algebra sl(2, C) of all complex, trace-free 2 by 2
matrices. u(m)C is M (m, C) = gl(m, C), the Lie algebra of all complex m by
m matrices.
As an example, multiplying X = i2S3 ∈ su(2) by −i/2, we have S3 ∈ sl(2, C),
and the diagonal elements in the matrix π′(i2S3 ) also get multiplied by −i/2
(since π′ is a linear map), giving

π′(S3 ) = diag(q1 /2, q2 /2, · · · , qm /2)
We see that π′(S3 ) will have half-integral eigenvalues, and make the following
definitions

Definition (Weights and Weight Spaces). If π′(S3 ) has an eigenvalue k/2, we
say that k is a weight of the representation (π, V ).
The subspace Vk ⊂ V of the representation V satisfying

v ∈ Vk =⇒ π′(S3 )v = (k/2)v

is called the k’th weight space of the representation. All vectors in it are
eigenvectors of π′(S3 ) with eigenvalue k/2.
The dimension dim Vk is called the multiplicity of the weight k in the rep-
resentation (π, V ).

S1 and S2 don’t commute with S3 , so they may not preserve the subspaces
Vk and we can’t diagonalize them simultaneously with S3 . We can however
exploit the fact that we are in the complexification sl(2, C) to construct two
complex linear combinations of S1 and S2 that do something interesting:

Definition (Raising and lowering operators). Let

S+ = S1 + iS2 = [[0, 1], [0, 0]],   S− = S1 − iS2 = [[0, 0], [1, 0]]

We have S+ , S− ∈ sl(2, C). These are neither self-adjoint nor skew-adjoint, but
satisfy
(S± )† = S∓
and similarly we have
π′(S± )† = π′(S∓ )
We call π′(S+ ) a “raising operator” for the representation (π, V ), and π′(S− )
a “lowering operator”.

The reason for this terminology is the following calculation:

[S3 , S+ ] = [S3 , S1 + iS2 ] = iS2 + i(−iS1 ) = S1 + iS2 = S+

which implies (since π′ is a Lie algebra homomorphism)

π′(S3 )π′(S+ ) − π′(S+ )π′(S3 ) = π′([S3 , S+ ]) = π′(S+ )

For any v ∈ Vk , we have

π′(S3 )π′(S+ )v = π′(S+ )π′(S3 )v + π′(S+ )v = (k/2 + 1) π′(S+ )v

so

v ∈ Vk =⇒ π′(S+ )v ∈ Vk+2

The linear operator π′(S+ ) takes vectors with a well-defined weight to vectors
with the same weight, plus 2 (thus the terminology “raising operator”). A
similar calculation shows that π′(S− ) takes Vk to Vk−2 , lowering the weight by 2.
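All of the identities used here can be confirmed directly with the explicit 2 by 2 matrices; a sketch (Python/NumPy, not from the text):

```python
import numpy as np

S3 = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
Sp = np.array([[0, 1], [0, 0]], dtype=complex)   # S+ = S1 + i S2
Sm = np.array([[0, 0], [1, 0]], dtype=complex)   # S- = S1 - i S2

def comm(a, b):
    return a @ b - b @ a

assert np.allclose(Sp.conj().T, Sm)        # (S+)^dagger = S-
assert np.allclose(comm(S3, Sp), Sp)       # [S3, S+] = S+, raises the weight by 2
assert np.allclose(comm(S3, Sm), -Sm)      # [S3, S-] = -S-, lowers the weight by 2
assert np.allclose(comm(Sp, Sm), 2 * S3)   # [S+, S-] = 2 S3

# on C^2: S+ takes the weight -1 eigenvector to the weight +1 eigenvector
v_minus = np.array([0, 1], dtype=complex)
assert np.allclose(Sp @ v_minus, [1, 0])
```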
We’re now ready to classify all finite dimensional irreducible unitary repre-
sentations (π, V ) of SU (2). We define

Definition (Highest weights and highest weight vectors). A non-zero vector


v ∈ Vn ⊂ V such that
π′(S+ )v = 0
is called a highest weight vector, with highest weight n.

Irreducible representations will be characterized by a highest weight vector,


as follows

Theorem (Highest weight theorem). Finite dimensional irreducible represen-
tations of SU (2) have weights of the form

−n, −n + 2, · · · , n − 2, n

for n a non-negative integer, each with multiplicity 1, with n a highest weight.


Proof. Finite dimensionality implies there is a highest weight n, and we can
choose any highest weight vector vn ∈ Vn . Repeatedly applying π′(S− ) to vn
will give new vectors

vn−2j = π′(S− )j vn ∈ Vn−2j

with weights n − 2j.
Consider the span of the vn−2j , j ≥ 0. To show that this is a representation
one needs to show that π′(S3 ) and π′(S+ ) leave it invariant. For π′(S3 ) this
is obvious, for π′(S+ ) one can show that

π′(S+ )vn−2j = j(n − j + 1)vn−2(j−1)     (8.1)

by an induction argument. For j = 0 this is just the highest weight condition
on vn . Assuming validity for j, one can check validity for j + 1 by

π′(S+ )vn−2(j+1) = π′(S+ )π′(S− )vn−2j
                = (π′([S+ , S− ]) + π′(S− )π′(S+ ))vn−2j
                = (π′(2S3 ) + π′(S− )π′(S+ ))vn−2j
                = (n − 2j)vn−2j + π′(S− ) j(n − j + 1)vn−2(j−1)
                = ((n − 2j) + j(n − j + 1))vn−2j
                = (j + 1)(n − (j + 1) + 1)vn−2((j+1)−1)

where we have used the commutation relation

[S+ , S− ] = 2S3

The span of the vn−2j is not just a representation, but an irreducible one,
since all the non-zero vn−2j arise by repeated application of π′(S− ) to vn and
equation 8.1 shows that (up to a constant) π′(S+ ) is an inverse to π′(S− ) for
all j up to the value j = n + 1. In the sequence of vn−2j for increasing j, finite-
dimensionality of V implies that at some point one must hit a “lowest weight
vector”, one annihilated by π′(S− ). From that point on, the vn−2j for higher j
will be zero. Taking into account the fact that the pattern of weights is invariant
under change of sign, one finds that the only possible pattern of weights is

−n, −n + 2, · · · , n − 2, n

This is consistent with equation 8.1, which shows that it is at j = n that π′(S− )
will act on vn−2j without having an inverse proportional to π′(S+ ) (which would
act on vn−2(j+1) ).

Since we saw in section 8.1.1 that representations can be studied by looking
at the set of their weights under the action of our chosen U (1) ⊂ SU (2), we
can label irreducible representations of SU (2) by a non-negative integer n, the
highest weight. Such a representation will be of dimension n + 1, with weights

−n, −n + 2, · · · , n − 2, n

Each weight occurs with multiplicity one, and we have

(π, V ) = C−n ⊕ C−n+2 ⊕ · · · ⊕ Cn−2 ⊕ Cn

Starting with a highest-weight or lowest-weight vector, one can generate a


basis for the representation by repeatedly applying raising or lowering operators.
The picture to keep in mind is this

where all the vector spaces are copies of C, and all the maps are isomorphisms
(multiplications by various numbers).
In summary, we see that all irreducible finite dimensional unitary SU (2)
representations can be labeled by a non-negative integer, the highest weight n.
These representations have dimension n + 1 and we will denote them (πn , V n =
Cn+1 ). Note that Vn is the n’th weight space, while V n is the representation with
highest weight n. The physicist’s terminology for this uses not n, but n/2, and
calls this number the “spin” of the representation. We have so far seen the lowest
three examples n = 0, 1, 2, or spin s = n/2 = 0, 1/2, 1, but there is an infinite
class of larger irreducibles, with dim V = n + 1 = 2s + 1.
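Equation 8.1 and the weight pattern determine explicit matrices for π′(S3), π′(S±) on each V n. The sketch below (Python/NumPy, not from the text) builds them in the basis obtained from a highest weight vector; this basis is not orthonormal, so the matrices are not skew-adjoint, but they do satisfy the sl(2, C) commutation relations:

```python
import numpy as np

def irrep(n):
    """pi'(S3), pi'(S+), pi'(S-) on the irreducible of highest weight n, in the
    basis v_n, v_{n-2}, ..., v_{-n} generated from a highest weight vector."""
    dim = n + 1
    S3 = np.diag([(n - 2 * j) / 2 for j in range(dim)]).astype(complex)
    Sm = np.zeros((dim, dim), dtype=complex)
    Sp = np.zeros((dim, dim), dtype=complex)
    for j in range(dim - 1):
        Sm[j + 1, j] = 1.0                # pi'(S-) v_{n-2j} = v_{n-2(j+1)}
    for j in range(1, dim):
        Sp[j - 1, j] = j * (n - j + 1)    # equation 8.1
    return S3, Sp, Sm

def comm(a, b):
    return a @ b - b @ a

for n in range(6):
    S3, Sp, Sm = irrep(n)
    assert np.allclose(comm(S3, Sp), Sp)
    assert np.allclose(comm(S3, Sm), -Sm)
    assert np.allclose(comm(Sp, Sm), 2 * S3)
```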

8.2 Representations of SU (2): construction


The argument of the previous section only tells us what properties possible
finite dimensional irreducible representations of SU (2) must have. It shows
how to construct such representations given a highest-weight vector, but does
not provide any way to construct such highest weight vectors. We would like to
find some method to explicitly construct an irreducible (πn , V n ) for each highest
weight n. There are several possible constructions, but perhaps the simplest one
is the following, which gives a representation of highest weight n by looking at
polynomials in two complex variables, homogeneous of degree n.

Recall from our early discussion of representations that if one has an action
of a group on a space M , one can get a representation on functions f on M by
taking
(π(g)f )(x) = f (g −1 · x)
For SU (2), we have an obvious action of the group on M = C2 (by matrices
acting on column vectors), and we look at a specific class of functions on this
space, the polynomials. We can break up the infinite-dimensional space of
polynomials on C2 into finite-dimensional subspaces as follows:
Definition (Homogeneous polynomials). The complex vector space of homogeneous
polynomials of degree n in two complex variables z1 , z2 is the space of
functions on C2 of the form

f (z1 , z2 ) = a0 z1^n + a1 z1^{n−1} z2 + · · · + an−1 z1 z2^{n−1} + an z2^n

The space of such functions is a complex vector space of dimension n + 1.


Using the action of SU (2) on C2 , we will see that this space of functions is
exactly the representation space V n that we need. More explicitly, for

g = [[α, β], [−β̄, ᾱ]],   g−1 = [[ᾱ, −β], [β̄, α]]

we can construct the representation as follows:

(πn (g)f )(z1 , z2 ) = f (g−1 (z1 , z2 ))
                    = f (ᾱz1 − βz2 , β̄z1 + αz2 )
                    = Σ_{k=0}^{n} ak (ᾱz1 − βz2 )^{n−k} (β̄z1 + αz2 )^k

Taking the derivative, the Lie algebra representation is given by

πn′(X)f = (d/dt) (πn (etX )f )|t=0 = (d/dt) f (e−tX (z1 , z2 ))|t=0

By the chain rule this is

πn′(X)f = (∂f /∂z1 , ∂f /∂z2 ) · ((d/dt) e−tX (z1 , z2 ))|t=0
        = −(∂f /∂z1 )(X11 z1 + X12 z2 ) − (∂f /∂z2 )(X21 z1 + X22 z2 )
where the Xij are the components of the matrix X.
Computing what happens for X = S3 , S+ , S− , we get

(πn′(S3 )f )(z1 , z2 ) = (1/2)(−z1 ∂f /∂z1 + z2 ∂f /∂z2 )

so

πn′(S3 ) = (1/2)(−z1 ∂/∂z1 + z2 ∂/∂z2 )

and similarly

πn′(S+ ) = −z2 ∂/∂z1 ,   πn′(S− ) = −z1 ∂/∂z2
The z1^k z2^{n−k} are eigenvectors for S3 with eigenvalue (1/2)(n − 2k), since

πn′(S3 ) z1^k z2^{n−k} = (1/2)(−k z1^k z2^{n−k} + (n − k) z1^k z2^{n−k}) = (1/2)(n − 2k) z1^k z2^{n−k}

z2^n will be an explicit highest weight vector for the representation (πn , V n ).
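Writing these differential operators as matrices in the monomial basis z1^k z2^{n−k} gives a concrete check of the weights and the highest weight vector; a sketch (Python/NumPy, with n = 4 chosen arbitrarily):

```python
import numpy as np

n = 4          # degree, chosen arbitrarily; the space has dimension n + 1
dim = n + 1

# basis z1^k z2^{n-k}, k = 0 .. n; each operator as a matrix in this basis
S3 = np.diag([0.5 * (n - 2 * k) for k in range(dim)]).astype(complex)

Sp = np.zeros((dim, dim), dtype=complex)   # pi_n'(S+) = -z2 d/dz1
Sm = np.zeros((dim, dim), dtype=complex)   # pi_n'(S-) = -z1 d/dz2
for k in range(dim):
    if k >= 1:
        Sp[k - 1, k] = -k           # z1^k z2^{n-k} -> -k z1^{k-1} z2^{n-k+1}
    if k <= n - 1:
        Sm[k + 1, k] = -(n - k)     # z1^k z2^{n-k} -> -(n-k) z1^{k+1} z2^{n-k-1}

def comm(a, b):
    return a @ b - b @ a

assert np.allclose(comm(S3, Sp), Sp)
assert np.allclose(comm(S3, Sm), -Sm)
assert np.allclose(comm(Sp, Sm), 2 * S3)

# z2^n (k = 0) is a highest weight vector: weight n, annihilated by pi_n'(S+)
v = np.zeros(dim, dtype=complex)
v[0] = 1.0
assert np.allclose(S3 @ v, (n / 2) * v)
assert np.allclose(Sp @ v, 0)
```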
An important thing to note here is that the formulas we have found for πn′
are not in terms of matrices. Instead we have seen that when we construct our
representations using functions on C2 , for any X ∈ su(2) (or its complexification
sl(2, C)), πn′(X) is given by a differential operator. Note that these differential
operators are independent of n: one gets the same operator π′(X) on all the
V n . This is because the original definition of the representation

(π(g)f )(x) = f (g −1 · x)

is on the full infinite dimensional space of polynomials on C2 . While this space


is infinite-dimensional, issues of analysis don’t really come into play here, since
polynomial functions are essentially an algebraic construction. Later on in the
course we will need to work with function spaces that require much more serious
consideration of issues in analysis.
Restricting the differential operators π 0 (X) to a finite dimensional irreducible
subspace V n , the homogeneous polynomials of degree n, if one chooses a basis
of V n , then the linear operator π 0 (X) will be given by an (n + 1) by (n + 1) matrix.
Clearly though, the expression as a simple first-order differential operator is
much easier to work with. In the examples we will be studying in much of the
rest of the course, the representations under consideration will also be on func-
tion spaces, with Lie algebra representations appearing as differential operators.
Instead of using linear algebra techniques to find eigenvalues and eigenvectors,
the eigenvector equation will be a partial differential equation, with our focus
on using Lie groups and their representation theory to solve such equations.
One issue we haven’t addressed yet is that of unitarity of the representation.
We need Hermitian inner products on the spaces V n , inner products that will
be preserved by the action of SU (2) that we have defined on these spaces. A
standard way to define a Hermitian inner product on functions on a space M
is to define them using an integral: for f , g functions on M , take their inner
product to be

⟨f, g⟩ = ∫_M f̄ g

While for M = C2 this gives an SU (2) invariant inner product on functions, it


is useless for f, g polynomial, since such integrals diverge. What one can do in

this case is define an inner product on polynomial functions on C2 by
⟨f, g⟩ = (1/π²) ∫_{C2} f̄(z1 , z2 ) g(z1 , z2 ) e^{−(|z1|² + |z2|²)} dx1 dy1 dx2 dy2        (8.2)
Here z1 = x1 + iy1 , z2 = x2 + iy2 . One can do integrals of this kind fairly easily
since they factorize into separate integrals over z1 and z2 , each of which can be
treated using polar coordinates and standard calculus methods. One can check
by explicit computation that the polynomials

z1^j z2^k / √(j! k!)

will be an orthonormal basis of the space of polynomial functions with respect
to this inner product, and the operators π 0 (X), X ∈ su(2) will be skew-adjoint.
Working out what happens for the first few examples of irreducible SU (2)
representations, one finds orthonormal bases for the representation spaces V n
of homogeneous polynomials as follows
• For n = s = 0
      1

• For n = 1, s = 1/2
      z1 , z2

• For n = 2, s = 1
      (1/√2) z1^2 ,  z1 z2 ,  (1/√2) z2^2

• For n = 3, s = 3/2
      (1/√6) z1^3 ,  (1/√2) z1^2 z2 ,  (1/√2) z1 z2^2 ,  (1/√6) z2^3
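The claimed normalizations can be checked numerically. The sketch below (my own; it relies on the factorization into separate z1 and z2 integrals described above) verifies by midpoint-rule quadrature that the one-variable Gaussian integrals produce exactly the factorials j!:

```python
import math

def radial_moment(m, steps=100000, rmax=12.0):
    # midpoint-rule approximation of  integral from 0 to infinity of r^m e^{-r^2} dr
    h = rmax / steps
    return h * sum(((i + 0.5) * h) ** m * math.exp(-((i + 0.5) * h) ** 2)
                   for i in range(steps))

# After factorizing equation 8.2 and passing to polar coordinates, the angular
# integral of e^{i(k-j) theta} forces j = k, and what remains for j = k is
#   <z^j, z^j> = 2 * integral_0^infinity r^(2j+1) e^{-r^2} dr = j!
for j in range(5):
    assert abs(2 * radial_moment(2 * j + 1) - math.factorial(j)) < 1e-4
```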

8.3 Representations of SO(3) and spherical harmonics
We would like to now use the classification and construction of representations of
SU (2) to study the representations of the closely related group SO(3). For any
representation (ρ, V ) of SO(3), we can use the double-covering homomorphism
Φ : SU (2) → SO(3) to get a representation

π =ρ◦Φ

of SU (2). It can be shown that if ρ is irreducible, π will be too, so we must


have π = ρ ◦ Φ = πn , one of the irreducible representations of SU (2) found in
the last section. Using the fact that Φ(−1) = 1, we see that

πn (−1) = ρ ◦ Φ(−1) = 1

From knowing that the weights of πn are −n, −n + 2, · · · , n − 2, n, we know that
πn (−1) = πn (diag(eiπ , e−iπ )) = diag(einπ , ei(n−2)π , . . . , e−inπ ) = 1

which will only be true for n even, not for n odd. Since the Lie algebra of SO(3)
is isomorphic to the Lie algebra of SU (2), the same Lie algebra argument using
raising and lowering operators as in the last section also applies. The irreducible
representations of SO(3) will be (ρl , V = C2l+1 ) for l = 0, 1, 2, · · · , of dimension
2l + 1 and satisfying
ρl ◦ Φ = π2l
Just like in the case of SU (2), we can explicitly construct these representa-
tions using functions on a space with an SO(3) action. The obvious space to
choose is R3 , with SO(3) matrices acting on x ∈ R3 as column vectors, by the
formula we have repeatedly used
 
(ρ(g)f )(x) = f (g −1 · x) = f (g −1 (x1 , x2 , x3 )^T )

Taking the derivative, the Lie algebra representation is given by


 
ρ0 (X)f = (d/dt) ρ(etX )f |t=0 = (d/dt) f (e−tX (x1 , x2 , x3 )^T )|t=0

where X ∈ so(3). Recall that a basis for so(3) is given by


     
0 0 0 0 0 1 0 −1 0
l1 = 0 0 −1 l2 =  0 0 0 l3 = 1 0 0
0 1 0 −1 0 0 0 0 0

which satisfy the commutation relations

[l1 , l2 ] = l3 , [l2 , l3 ] = l1 , [l3 , l1 ] = l2

Digression. A Note on Conventions


We’re using the notation lj for the real basis of the Lie algebra so(3) = su(2).
For a unitary representation ρ, the ρ0 (lj ) will be skew-adjoint linear operators.
For consistency with the physics literature, we’ll use the notation Lj = iρ0 (lj )
for the self-adjoint version of the linear operator corresponding to lj in this
representation on functions. The Lj satisfy the commutation relations

[L1 , L2 ] = iL3 , [L2 , L3 ] = iL1 , [L3 , L1 ] = iL2

We’ll also use elements l± = l1 ± il2 of the complexified Lie algebra to create
raising and lowering operators L± = iρ0 (l± ).

As with the SU (2) case, we won’t include a factor of ~ as is usual in physics
(e.g. the usual convention is Lj = i~ρ0 (lj )), since for considerations of the action
of the rotation group it would just cancel out (physicists define rotations using
e^{(i/~)θLj} ). The factor of ~ is only of significance when Lj is expressed in terms of
the momentum operator, a topic discussed in chapter 17.
In the SU (2) case, the π 0 (Sj ) had half-integral eigenvalues, with the eigen-
values of π 0 (2S3 ) the integral weights of the representation. Here the Lj will
have integer eigenvalues, the weights will be the eigenvalues of 2L3 , which will
be even integers.

Computing ρ0 (l1 ) we find

 
ρ0 (l1 )f = (d/dt) f (e−t l1 (x1 , x2 , x3 )^T )|t=0                          (8.3)

             ( 1    0       0     ) ( x1 )
  = (d/dt) f ( 0    cos t   sin t ) ( x2 ) |t=0                              (8.4)
             ( 0   −sin t   cos t ) ( x3 )

  = (d/dt) f ((x1 , x2 cos t + x3 sin t, −x2 sin t + x3 cos t)^T )|t=0       (8.5)

  = (∂f /∂x1 , ∂f /∂x2 , ∂f /∂x3 ) · (0, x3 , −x2 )^T                        (8.6)

  = x3 ∂f /∂x2 − x2 ∂f /∂x3                                                  (8.7)

so
∂ ∂
ρ0 (l1 ) = x3 − x2
∂x2 ∂x3

and similar calculations give

∂ ∂ ∂ ∂
ρ0 (l2 ) = x1 − x3 , ρ0 (l3 ) = x2 − x1
∂x3 ∂x1 ∂x1 ∂x2

The space of all functions on R3 is much too big: it will give us an infinity of
copies of each finite dimensional representation that we want. Notice that when
SO(3) acts on R3 , it leaves the distance to the origin invariant. If we work in
spherical coordinates (r, θ, φ) (see picture)

we will have

x1 =r sin θ cos φ
x2 =r sin θ sin φ
x3 =r cos θ

Acting on f (r, θ, φ), SO(3) will leave r invariant, only acting non-trivially on
θ, φ. It turns out that we can cut down the space of functions to something
that will only contain one copy of the representation we want in various ways.
One way to do this is to restrict our functions to the unit sphere, i.e. just look
at functions f (θ, φ). We will see that the representations we are looking for can
be found in simple trigonometric functions of these two angular variables.
We can construct our irreducible representations ρ0l by explicitly constructing
a function we will call Yll (θ, φ) that will be a highest weight vector of weight
l. The weight l condition and the highest weight condition give two differential
equations for Yll (θ, φ):
L3 Yll = lYll , L+ Yll = 0
These will turn out to have a unique solution (up to scalars).
We first need to change coordinates from rectangular to spherical in our
expressions for L3 , L± . Using the chain rule to compute expressions like


(∂/∂r) f (x1 (r, θ, φ), x2 (r, θ, φ), x3 (r, θ, φ))

we find

( ∂/∂r )   (  sin θ cos φ     sin θ sin φ     cos θ    ) ( ∂/∂x1 )
( ∂/∂θ ) = (  r cos θ cos φ   r cos θ sin φ   −r sin θ ) ( ∂/∂x2 )
( ∂/∂φ )   ( −r sin θ sin φ   r sin θ cos φ    0       ) ( ∂/∂x3 )

so

( ∂/∂r )               (  sin θ cos φ   sin θ sin φ    cos θ  ) ( ∂/∂x1 )
( (1/r) ∂/∂θ )       = (  cos θ cos φ   cos θ sin φ   −sin θ  ) ( ∂/∂x2 )
( (1/(r sin θ)) ∂/∂φ ) ( −sin φ         cos φ          0      ) ( ∂/∂x3 )

This is an orthogonal matrix, so one can invert it by taking its transpose, to get

( ∂/∂x1 )   ( sin θ cos φ   cos θ cos φ   −sin φ ) ( ∂/∂r )
( ∂/∂x2 ) = ( sin θ sin φ   cos θ sin φ    cos φ ) ( (1/r) ∂/∂θ )
( ∂/∂x3 )   ( cos θ        −sin θ          0     ) ( (1/(r sin θ)) ∂/∂φ )

So we finally have
L1 = iρ0 (l1 ) = i(x3 ∂/∂x2 − x2 ∂/∂x3 ) = i(sin φ ∂/∂θ + cot θ cos φ ∂/∂φ)
L2 = iρ0 (l2 ) = i(x1 ∂/∂x3 − x3 ∂/∂x1 ) = i(− cos φ ∂/∂θ + cot θ sin φ ∂/∂φ)
L3 = iρ0 (l3 ) = i(x2 ∂/∂x1 − x1 ∂/∂x2 ) = −i ∂/∂φ

and

L+ = iρ0 (l+ ) = e^{iφ} (∂/∂θ + i cot θ ∂/∂φ),   L− = iρ0 (l− ) = e^{−iφ} (−∂/∂θ + i cot θ ∂/∂φ)
Now that we have expressions for the action of the Lie algebra on functions in
spherical coordinates, our two differential equations saying our function Yll (θ, φ)
is of weight l and in the highest-weight space are
L3 Yll (θ, φ) = −i (∂/∂φ) Yll (θ, φ) = l Yll (θ, φ)
and
L+ Yll (θ, φ) = e^{iφ} (∂/∂θ + i cot θ ∂/∂φ) Yll (θ, φ) = 0
The first of these tells us that

Yll (θ, φ) = eilφ Fl (θ)

for some function Fl (θ), and using the second we get



(∂/∂θ − l cot θ) Fl (θ) = 0
with solution
Fl (θ) = Cll sinl θ
for an arbitrary constant Cll . Finally

Yll (θ, φ) = Cll eilφ sinl θ

This is a function on the sphere, which is also a highest weight vector in a
2l + 1 dimensional irreducible representation of SO(3). To get functions which
give vectors spanning the rest of the weight spaces, one just repeatedly applies
the lowering operator L− , getting functions
Ylm (θ, φ) = Clm (L− )^{l−m} Yll (θ, φ)
           = Clm (e^{−iφ} (−∂/∂θ + i cot θ ∂/∂φ))^{l−m} e^{ilφ} sin^l θ

for m = l, l − 1, l − 2, · · · , −l + 1, −l.
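One can confirm numerically that Yll = e^{ilφ} sin^l θ satisfies both defining conditions. The following sketch (my own, not part of the text) implements L3 and L+ in their spherical-coordinate form via central finite differences:

```python
import cmath
import math

def Yll(l, th, ph):
    # the highest weight function found above, up to the constant C_l^l
    return cmath.exp(1j * l * ph) * math.sin(th) ** l

def L3(f, th, ph, h=1e-5):
    # L3 = -i d/dphi, applied via a central difference
    return -1j * (f(th, ph + h) - f(th, ph - h)) / (2 * h)

def Lplus(f, th, ph, h=1e-5):
    # L+ = e^{i phi} (d/dtheta + i cot(theta) d/dphi)
    fth = (f(th + h, ph) - f(th - h, ph)) / (2 * h)
    fph = (f(th, ph + h) - f(th, ph - h)) / (2 * h)
    return cmath.exp(1j * ph) * (fth + 1j * fph / math.tan(th))

for l in range(1, 4):
    f = lambda th, ph, l=l: Yll(l, th, ph)
    th, ph = 0.9, 0.4
    assert abs(L3(f, th, ph) - l * f(th, ph)) < 1e-6   # weight l eigenvector
    assert abs(Lplus(f, th, ph)) < 1e-6                # killed by the raising operator
```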
The functions Ylm (θ, φ) are called “spherical harmonics”, and they span the
space of complex functions on the sphere in much the same way that the einθ
span the space of complex valued functions on the circle. Unlike the case of
polynomials on C2 , for functions on the sphere, one gets finite numbers by
integrating such functions over the sphere. So one can define an inner product
on these representations for which they are unitary by simply setting
⟨f, g⟩ = ∫_{S²} f̄ g sin θ dθ dφ = ∫_{φ=0}^{2π} ∫_{θ=0}^{π} f̄(θ, φ) g(θ, φ) sin θ dθ dφ

We will not try and show this here, but for the allowable values of l, m the
Ylm (θ, φ) are mutually orthogonal with respect to this inner product.
One can derive various general formulas for the Ylm (θ, φ) in terms of Leg-
endre polynomials, but here we’ll just compute the first few examples, with
the proper constants that give them norm 1 with respect to the chosen inner
product.
• For the l = 0 representation

      Y0^0 (θ, φ) = √(1/4π)

• For the l = 1 representation

      Y1^1 = −√(3/8π) sin θ e^{iφ} ,   Y1^0 = √(3/4π) cos θ ,   Y1^{−1} = √(3/8π) sin θ e^{−iφ}

  (one can easily see that these have the correct eigenvalues for L3 = −i ∂/∂φ).

• For the l = 2 representation one has

      Y2^2 = √(15/32π) sin² θ e^{i2φ} ,   Y2^1 = −√(15/8π) sin θ cos θ e^{iφ}

      Y2^0 = √(5/16π) (3 cos² θ − 1)

      Y2^{−1} = √(15/8π) sin θ cos θ e^{−iφ} ,   Y2^{−2} = √(15/32π) sin² θ e^{−i2φ}
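As a numerical sanity check (my own, not from the text), the l = 1 harmonics listed above can be verified to be orthonormal for this inner product by direct quadrature over the sphere:

```python
import cmath
import math

def inner(f, g, n=200):
    # midpoint quadrature for <f, g> = integral over S^2 of conj(f) g sin(theta) dtheta dphi
    dt, dp = math.pi / n, 2 * math.pi / n
    total = 0j
    for i in range(n):
        th = (i + 0.5) * dt
        w = math.sin(th) * dt * dp
        for j in range(n):
            ph = (j + 0.5) * dp
            total += f(th, ph).conjugate() * g(th, ph) * w
    return total

Y1 = {  # the l = 1 spherical harmonics written out above
    1:  lambda th, ph: -math.sqrt(3 / (8 * math.pi)) * math.sin(th) * cmath.exp(1j * ph),
    0:  lambda th, ph: math.sqrt(3 / (4 * math.pi)) * math.cos(th) + 0j,
    -1: lambda th, ph: math.sqrt(3 / (8 * math.pi)) * math.sin(th) * cmath.exp(-1j * ph),
}
for m1 in Y1:
    for m2 in Y1:
        expected = 1.0 if m1 == m2 else 0.0
        assert abs(inner(Y1[m1], Y1[m2]) - expected) < 1e-3
```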

We will see later that these functions of the angular variables in spheri-
cal coordinates are exactly the functions that give the angular dependence of
wavefunctions for the physical system of a particle in a spherically symmetric
potential. In such a case the SO(3) symmetry of the system implies that the
state space (the wavefunctions) will provide a unitary representation π of SO(3),
and the action of the Hamiltonian operator H will commute with the action of
the operators L3 , L± . As a result all of the states in an irreducible representa-
tion component of π will have the same energy. States are thus organized into
“orbitals”, with singlet states called “s” orbitals (l = 0), triplet states called
“p” orbitals (l = 1), multiplicity 5 states called “d” orbitals (l = 2), etc.

8.4 The Casimir operator


For both SU (2) and SO(3), we have found that all representations can be
constructed out of function spaces, with the Lie algebra acting as first-order
differential operators. It turns out that there is also a very interesting second-
order differential operator that comes from these Lie algebra representations,
known as the Casimir operator. For the case of SO(3)

Definition (Casimir operator for SO(3)). The Casimir operator for the repre-
sentation of SO(3) on functions on S 2 is the second-order differential operator

L2 ≡ L21 + L22 + L23

(the symbol L2 is not intended to mean that this is the square of an operator L)

A straightforward calculation using the commutation relations satisfied by


the Lj shows that
[L2 , ρ0 (X)] = 0
for any X ∈ so(3). Knowing this, a version of Schur’s lemma says that L2 will act
on an irreducible representation as a scalar (i.e. all vectors in the representation
are eigenvectors of L2 , with the same eigenvalue). This eigenvalue can be used
to characterize the irreducible representation.
The easiest way to compute this eigenvalue turns out to be to act with L2 on
a highest weight vector. First one rewrites L2 in terms of raising and lowering
operators.

L− L+ =(L1 − iL2 )(L1 + iL2 )


=L21 + L22 + i[L1 , L2 ]
=L21 + L22 − L3

so
L2 = L21 + L22 + L23 = L− L+ + L3 + L23
For the representation ρ of SO(3) on functions on S 2 constructed above,
we know that on a highest weight vector of the irreducible representation ρl

(restriction of ρ to the 2l + 1 dimensional irreducible subspace of functions that
are linear combinations of the Ylm (θ, φ)), we have the two eigenvalue equations

L+ f = 0, L3 f = lf

with solution the functions proportional to Yll (θ, φ). Just from these conditions
and our expression for L2 we can immediately find the scalar eigenvalue of L2
since
L2 f = L− L+ f + (L3 + L23 )f = (0 + l + l2 )f = l(l + 1)f
We have thus shown that our irreducible representation ρl can be characterized
as the representation on which L2 acts by the scalar l(l + 1).
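A quick finite-difference check (my own sketch, not from the text) confirms that the angular operator L², in the spherical-coordinate form worked out in equation 8.8 of this section, really acts on Yll = e^{ilφ} sin^l θ as the scalar l(l + 1):

```python
import cmath
import math

def Yll(l, th, ph):
    # highest weight vector, up to normalization
    return cmath.exp(1j * l * ph) * math.sin(th) ** l

def casimir(f, th, ph, h=1e-3):
    # L^2 f = -((1/sin th) d/dth (sin th df/dth) + (1/sin^2 th) d^2 f/dph^2),
    # expanded as -(f_thth + cot(th) f_th + f_phph / sin^2 th), via central differences
    f0 = f(th, ph)
    f_th = (f(th + h, ph) - f(th - h, ph)) / (2 * h)
    f_thth = (f(th + h, ph) - 2 * f0 + f(th - h, ph)) / h ** 2
    f_phph = (f(th, ph + h) - 2 * f0 + f(th, ph - h)) / h ** 2
    return -(f_thth + f_th / math.tan(th) + f_phph / math.sin(th) ** 2)

for l in range(4):
    f = lambda th, ph, l=l: Yll(l, th, ph)
    th, ph = 1.1, 0.7
    assert abs(casimir(f, th, ph) - l * (l + 1) * f(th, ph)) < 1e-3
```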
In summary, we have two different sets of partial differential equations whose
solutions provide a highest weight vector for and thus determine the irreducible
representation ρl :

L+ f = 0, L3 f = lf
which are first order equations, with the first using complexification and
something like a Cauchy-Riemann equation, and

L2 f = l(l + 1)f, L3 f = lf
where the first equation is a second order equation, something like a
Laplace equation.
That a solution of the first set of equations gives a solution of the second set
is obvious. Much harder to show is that a solution of the second set gives a
solution of the first set. The space of solutions to

L2 f = l(l + 1)f

for l a non-negative integer includes as we have seen the 2l + 1-dimensional


vector space of linear combinations of the Ylm (θ, φ) (there are no other solu-
tions, although we will not show that). Since the action of SO(3) on functions
commutes with the operator L2 , this 2l + 1-dimensional space will provide a
representation, the irreducible one of spin l.
One can compute the explicit second-order differential operator L2 in the ρ
representation on functions; it is

L2 = L21 + L22 + L23
   = (i(sin φ ∂/∂θ + cot θ cos φ ∂/∂φ))² + (i(− cos φ ∂/∂θ + cot θ sin φ ∂/∂φ))² + (−i ∂/∂φ)²
   = −((1/sin θ) (∂/∂θ)(sin θ ∂/∂θ) + (1/sin² θ) ∂²/∂φ²)                     (8.8)
We will re-encounter this operator later on in the course as the angular part of
the Laplace operator on R3 .

For the group SU (2) we can also find irreducible representations as solution
spaces of differential equations on functions on C2 . In that case, the differential
equation point of view is much less useful, since the solutions we are looking for
are just the homogeneous polynomials, which are more easily studied by purely
algebraic methods.

8.5 For further reading


The classification of SU (2) representations is a standard topic in all textbooks
that deal with Lie group representations. A good example is [28], which covers
this material well, and from which the discussion here of the construction of
representations as homogeneous polynomials is drawn (see pages 77-79). The
calculation of the Lj and the derivation of expressions for spherical harmonics
as Lie algebra representations of so(3) appears in most quantum mechanics
textbooks in one form or another (for example, see Chapter 12 of [57]). Another
source used here for the explicit constructions of representations is [13], Chapters
27-30.

Chapter 9

Tensor Products,
Entanglement, and
Addition of Spin

If one has two independent quantum systems, with state spaces H1 and H2 ,
the combined quantum system has a description that exploits the mathematical
notion of a “tensor product”, with the combined state space the tensor product
H1 ⊗ H2 . Because of the ability to take linear combinations of states, this
combined state space will contain much more than just products of independent
states, including states that are described as “entangled”, and responsible for
some of the most counter-intuitive behavior of quantum physical systems.
This same tensor product construction is a basic one in representation the-
ory, allowing one to construct a new representation (πW1 ⊗W2 , W1 ⊗ W2 ) out of
representations (πW1 , W1 ) and (πW2 , W2 ). When we take the tensor product of
states corresponding to two irreducible representations of SU (2) of spins s1 , s2 ,
we will get a new representation (πV 2s1 ⊗V 2s2 , V 2s1 ⊗ V 2s2 ). It will be reducible,
a direct sum of representations of various spins, a situation we will analyze in
detail.
Starting with a quantum system with state space H that describes a single
particle, one can describe a system of N particles by taking an N -fold tensor
product H⊗N = H ⊗ H ⊗ · · · ⊗ H. A deep fact about the physical world
is that for identical particles, we don’t get the full tensor product space, but
only the subspaces either symmetric or antisymmetric under the action of the
permutation group by permutations of the factors, depending on whether our
particles are “bosons” or “fermions”. An even deeper fact is that elementary
particles of half-integral spin s must behave as fermions, those of integral spin,
bosons.

Digression. When physicists refer to “tensors”, they generally mean the “ten-
sor fields” used in general relativity or other geometry-based parts of physics,

not tensor products of state spaces. A tensor field is a function on a manifold,
taking values in some tensor product of copies of the tangent space and its dual
space. The simplest tensor fields are just vector fields, functions taking values
in the tangent space. A more non-trivial example is the metric tensor, which
takes values in the dual of the tensor product of two copies of the tangent space.

9.1 Tensor products


Given two vector spaces V and W (over R or C), one can easily construct
the direct sum vector space V ⊕ W , just by taking pairs of elements (v, w) for
v ∈ V, w ∈ W , and giving them a vector space structure by the obvious addition
and multiplication by scalars. This space will have dimension

dim(V ⊕ W ) = dim V + dim W

If {e1 , e2 , . . . , edim V } is a basis of V , and {f1 , f2 , . . . , fdim W } a basis of W , the

{e1 , e2 , . . . , edim V , f1 , f2 , . . . , fdim W }

will be a basis of V ⊕ W .
A less trivial construction is the tensor product of the vector spaces V and
W . This will be a new vector space called V ⊗ W , of dimension

dim(V ⊗ W ) = (dim V )(dim W )

One way to motivate the tensor product is to think of vector spaces as vector
spaces of functions. Elements

v = v1 e1 + v2 e2 + · · · + vdim V edim V ∈ V

can be thought of as functions on the dim V points ei , taking values vi at ei . If


one takes functions on the union of the sets {ei } and {fj } one gets elements of
V ⊕ W . The tensor product V ⊗ W will be what one gets by taking all functions
on not the union, but the product of the sets {ei } and {fj }. This will be the
set with (dim V )(dim W ) elements, which we will write ei ⊗ fj , and elements
of V ⊗ W will be functions on this set, or equivalently, linear combinations of
these basis vectors.
This sort of definition is less than satisfactory, since it is tied to an explicit
choice of bases for V and W . We won’t however pursue more details of this
question or a better definition here. For this, one can consult pretty much any
advanced undergraduate text in abstract algebra, but here we will take as given
the following properties of the tensor product that we will need:
• Given vectors v ∈ V, w ∈ W we get an element v ⊗ w ∈ V ⊗ W , satisfying
bilinearity conditions (for c1 , c2 constants)

v ⊗ (c1 w1 + c2 w2 ) = c1 (v ⊗ w1 ) + c2 (v ⊗ w2 )

(c1 v1 + c2 v2 ) ⊗ w = c1 (v1 ⊗ w) + c2 (v2 ⊗ w)

• There are natural isomorphisms

C ⊗ V ' V, V ⊗ W ' W ⊗ V

and
U ⊗ (V ⊗ W ) ' (U ⊗ V ) ⊗ W
for vector spaces U, V, W
• Given a linear operator A on V and another linear operator B on W , we
can define a linear operator A ⊗ B on V ⊗ W by

(A ⊗ B)(v ⊗ w) = Av ⊗ Bw

for v ∈ V, w ∈ W .
With respect to the bases ei , fj of V and W , A will be a (dim V ) by
(dim V ) matrix, B will be a (dim W ) by (dim W ) matrix and A ⊗ B will
be a (dim V )(dim W ) by (dim V )(dim W ) matrix (which one can think of
as a (dim V ) by (dim V ) matrix of blocks of size (dim W )).
• One often wants to consider tensor products of vector spaces and dual vec-
tor spaces. An important fact is that there is an isomorphism between the
tensor product V ∗ ⊗ W and linear maps from V to W given by identifying
l ⊗ w (l ∈ V ∗ ) with the linear map

v ∈ V → l(v)w ∈ W
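The operator tensor product A ⊗ B can be made concrete as the Kronecker product of matrices mentioned above. Here is a small sketch in pure Python (my own, not from the text; `kron` builds the (dim V )(dim W ) by (dim V )(dim W ) matrix in the basis ei ⊗ fj ):

```python
def kron(A, B):
    # matrix of A (x) B in the basis e_i (x) f_j, as a list of rows
    return [[A[i][k] * B[j][l] for k in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for j in range(len(B))]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def tensor(v, w):
    # coordinates of v (x) w in the basis e_i (x) f_j
    return [vi * wj for vi in v for wj in w]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
v, w = [1, -1], [2, 5]
# the defining property (A (x) B)(v (x) w) = Av (x) Bw
assert matvec(kron(A, B), tensor(v, w)) == tensor(matvec(A, v), matvec(B, w))
# dimensions multiply: a 2x2 matrix tensored with a 2x2 matrix gives a 4x4 matrix
assert len(kron(A, B)) == 4 and len(kron(A, B)[0]) == 4
```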

For V a real vector space, its complexification VC (the vector space one
gets by allowing multiplication by both real and imaginary numbers) can be
identified with the tensor product

V C = V ⊗R C

Here the notation ⊗R indicates a tensor product of two real vector spaces: V
of dimension dim V with basis {e1 , e2 , . . . , edim V } and C = R2 of dimension 2
with basis {1, i}.

9.2 Composite quantum systems and tensor products
Consider two quantum systems, one defined by a state space H1 and a set of
operators O1 on it, the second given by a state space H2 and set of operators O2 .
One can describe the composite quantum system corresponding to considering
the two quantum systems as a single one, with no interaction between them, by
just taking as a new state space

HT = H1 ⊗ H 2

with operators of the form
A ⊗ Id + Id ⊗ B
with A ∈ O1 , B ∈ O2 . To describe an interacting quantum system, one can use
the state space HT , but with a more general class of operators.
If H is the state space of a quantum system, one can think of this as de-
scribing a single particle, and then to describe a system of N such particles, one
uses the multiple tensor product
H⊗N = H ⊗ H ⊗ · · · ⊗ H ⊗ H   (N factors)

The symmetric group SN acts on this state space, and one has a repre-
sentation (π, H⊗N ) of SN as follows. For σ ∈ SN a permutation of the set
{1, 2, . . . , N } of N elements, on a tensor product of vectors one has
π(σ)(v1 ⊗ v2 ⊗ · · · ⊗ vN ) = vσ(1) ⊗ vσ(2) ⊗ · · · ⊗ vσ(N )
The representation of SN that this gives is in general reducible, containing
various components with different irreducible representations of the group SN .
A fundamental axiom of quantum mechanics is that if H⊗N describes N iden-
tical particles, then all physical states occur as one-dimensional representations
of SN , which are either symmetric (“bosons”) or antisymmetric (“fermions”)
where
Definition. A state v ∈ H⊗N is called
• symmetric, or bosonic if ∀σ ∈ SN
π(σ)v = v
The space of such states is denoted S N (H).
• antisymmetric, or fermionic if ∀σ ∈ SN
π(σ)v = (−1)|σ| v
The space of such states is denoted ΛN (H). Here |σ| is the minimal num-
ber of transpositions that by composition give σ.
Note that in the fermionic case, for σ a transposition interchanging two
particles, the antisymmetric representation π acts on the factor H ⊗ H by in-
terchanging vectors, taking
w⊗w ∈H⊗H
to itself. Antisymmetry requires that this state go to its negative, so the state
cannot be non-zero. So one cannot have non-zero states in H⊗N describing two
identical particles in the same state w ∈ H, a fact that is known as the “Pauli
Principle”.
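The Pauli principle argument above can be illustrated with a tiny computation (my own sketch, for H = C² and N = 2, with states encoded as dictionaries of coordinates):

```python
def tensor2(v, w):
    # coordinates of v (x) w in the basis e_i (x) e_j of H (x) H, H = C^2
    return {(i, j): v[i] * w[j] for i in range(2) for j in range(2)}

def antisymmetrize(state):
    # projection onto Lambda^2(H): (1/2)(1 - pi(swap))
    return {(i, j): (state[(i, j)] - state[(j, i)]) / 2 for (i, j) in state}

def symmetrize(state):
    # projection onto S^2(H): (1/2)(1 + pi(swap))
    return {(i, j): (state[(i, j)] + state[(j, i)]) / 2 for (i, j) in state}

w = [3, 5]
v = [1, 2]
# Pauli principle: the antisymmetric part of w (x) w vanishes identically
assert all(c == 0 for c in antisymmetrize(tensor2(w, w)).values())
# while for v != w the antisymmetric part is a non-zero state
assert any(c != 0 for c in antisymmetrize(tensor2(v, w)).values())
```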
While the symmetry or antisymmetry of states of multiple identical particles
is a separate axiom when such particles are described in this way as tensor
products, we will see later on (chapter 34) that this phenomenon instead finds
a natural explanation when particles are described in terms of quantum fields.

9.3 Indecomposable vectors and entanglement
If one is given a function f on a space X and a function g on a space Y , one
can form a product function f g on the product space X × Y by taking (for
x ∈ X, y ∈ Y )
(f g)(x, y) = f (x)g(y)
However, most functions on X × Y are not decomposable in this manner. Sim-
ilarly, for a tensor product of vector spaces, one has:

Definition (Decomposable and indecomposable vectors). A vector in V ⊗ W


is called decomposable if it is of the form v ⊗ w for some v ∈ V, w ∈ W . If it
cannot be put in this form it is called indecomposable.

Note that our basis vectors of V ⊗ W are all decomposable since they are
products of basis vectors of V and W . Linear combinations of these basis vectors
however are in general indecomposable. If we think of an element of V ⊗ W
as a dim V by dim W matrix, with entries the coordinates with respect to our
basis vectors for V ⊗ W , then for decomposable vectors we get a special class
of matrices, those of rank one.
In the physics context, the language used is:

Definition (Entanglement). An indecomposable state in the tensor product


state space HT = H1 ⊗ H2 is called an entangled state.

The phenomenon of entanglement is responsible for some of the most surpris-


ing and subtle aspects of quantum mechanical systems. The Einstein-Podolsky-
Rosen paradox concerns the behavior of an entangled state of two quantum
systems, when one moves them far apart. Then performing a measurement on
one system can give one information about what will happen if one performs
a measurement on the far removed system, introducing a sort of unexpected
non-locality.
Measurement theory itself involves crucially an entanglement between the
state of a system being measured, thought of as in a state space Hsystem , and
the state of the measurement apparatus, thought of as lying in a state space
Happaratus . The laws of quantum mechanics presumably apply to the total
system Hsystem ⊗Happaratus , with the counter-intuitive nature of measurements
appearing due to this decomposition of the world into two entangled parts: the
one under study, and a much larger one for which only an approximate description
in classical terms is possible. For much more about this, a recommended reading
is Chapter 2 of [52].

9.4 Tensor products of representations


Given two representations of a group, one can define a new representation, the
tensor product representation, by

Definition (Tensor product representation of a group). For (πV , V ) and (πW , W )
representations of a group G, one has a tensor product representation (πV ⊗W , V ⊗
W ) defined by
(πV ⊗W (g))(v ⊗ w) = πV (g)v ⊗ πW (g)w
One can easily check that πV ⊗W is a homomorphism.
To see what happens for the corresponding Lie algebra representation, one
computes (for X in the Lie algebra)
πV0 ⊗W (X)(v ⊗ w) = (d/dt) πV ⊗W (etX )(v ⊗ w)|t=0
                  = (d/dt) (πV (etX )v ⊗ πW (etX )w)|t=0
                  = (((d/dt) πV (etX )v) ⊗ πW (etX )w)|t=0 + (πV (etX )v ⊗ ((d/dt) πW (etX )w))|t=0
                  = (πV0 (X)v) ⊗ w + v ⊗ (πW0 (X)w)

which could also be written

πV0 ⊗W (X) = (πV0 (X) ⊗ 1W ) + (1V ⊗ πW0 (X))

9.4.1 Tensor products of SU (2) representations


Given two representations (πV , V ) and (πW , W ) of a group G, we can decom-
pose each into irreducibles. To do the same for the tensor product of the two
representations, we need to know how to decompose the tensor product of two
irreducibles. This is a fundamental non-trivial problem for a group G, with the
answer for G = SU (2) as follows:
Theorem 9.1 (Clebsch-Gordan decomposition).
The tensor product (πV n1 ⊗V n2 , V n1 ⊗ V n2 ) decomposes into irreducibles as
(πn1 +n2 , V n1 +n2 ) ⊕ (πn1 +n2 −2 , V n1 +n2 −2 ) ⊕ · · · ⊕ (π|n1 −n2 | , V |n1 −n2 | )
Proof. One way to prove this result is to use highest-weight theory, raising and
lowering operators, and the formula for the Casimir operator. We will not try
and show the details of how this works out, but in the next section give a
simpler argument using characters. However, in outline (for more details, see
for instance section 5.2 of [50]), here’s how one could proceed:
One starts by noting that if vn1 ∈ Vn1 , vn2 ∈ Vn2 are highest weight vectors
for the two representations, vn1 ⊗vn2 will be a highest weight vector in the tensor
product representation (i.e. annihilated by πn0 1 +n2 (S+ )), of weight n1 + n2 .
So (πn1 +n2 , V n1 +n2 ) will occur in the decomposition. Applying πn0 1 +n2 (S− ) to
vn1 ⊗vn2 one gets a basis of the rest of the vectors in (πn1 +n2 , V n1 +n2 ). However,
at weight n1 +n2 −2 one can find another kind of vector, a highest-weight vector
orthogonal to the vectors in (πn1 +n2 , V n1 +n2 ). Applying the lowering operator
to this gives (πn1 +n2 −2 , V n1 +n2 −2 ). As before, at weight n1 + n2 − 4 one finds
another, orthogonal highest weight vector, and gets another representation, with
this process only terminating at weight |n1 − n2 |.

9.4.2 Characters of representations
A standard tool for dealing with representations that we have ignored so far is
that of associating to a representation an invariant called its character. This
will be a conjugation-invariant function on the group that only depends on the
equivalence class of the representation. Given two representations constructed
in very different ways, one can often check whether they are isomorphic just by
seeing if their character functions match. The problem of identifying the possible
irreducible representations of a group can be attacked by analyzing the possible
character functions of irreducible representations. We will not try and enter
into the general theory of characters here, but will just see what the characters
of irreducible representations are for the case of G = SU (2). These can be used
to give a simple argument for the Clebsch-Gordan decomposition of the tensor
product of SU (2) representations. For this we don’t need general theorems
about the relations of characters and representations, but can directly check
that the irreducible representations of SU (2) correspond to distinct character
functions which are easily evaluated.

Definition (Character). The character of a representation (π, V ) of a group G


is the function on G given by

χV (g) = T r(π(g))

Since the trace of a matrix is invariant under conjugation, χV in general will


be a complex valued, conjugation-invariant function on G. One can easily check
that it will satisfy the relations

χV ⊕W = χV + χW , χV ⊗W = χV χW

For the case of G = SU (2), any element can be conjugated to be in the


subgroup U (1) of diagonal matrices. Knowing the weights of the irreducible
representations (πn , V n ) of SU (2), we know the characters to be the functions
χV n (diag(eiθ , e−iθ )) = einθ + ei(n−2)θ + · · · + e−i(n−2)θ + e−inθ           (9.1)

As n gets large, this becomes an unwieldy expression, but one has

Theorem (Weyl character formula).

χV n (diag(eiθ , e−iθ )) = (ei(n+1)θ − e−i(n+1)θ )/(eiθ − e−iθ ) = sin((n + 1)θ)/ sin(θ)

Proof. One just needs to use the identity

(einθ + ei(n−2)θ + · · · + e−i(n−2)θ + e−inθ )(eiθ − e−iθ ) = ei(n+1)θ − e−i(n+1)θ

and equation 9.1 for the character.
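Both the weight-sum formula 9.1 and the Weyl character formula are easy to confirm numerically; the following sketch (my own, not from the text) compares them at a few sample values of θ:

```python
import cmath
import math

def chi(n, theta):
    # character of (pi_n, V^n) on diag(e^{i theta}, e^{-i theta}): the sum of weights
    return sum(cmath.exp(1j * (n - 2 * k) * theta) for k in range(n + 1))

# Weyl character formula: chi_{V^n} = sin((n+1) theta) / sin(theta)
for n in range(8):
    for theta in (0.3, 1.1, 2.5):
        assert abs(chi(n, theta) - math.sin((n + 1) * theta) / math.sin(theta)) < 1e-10
```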

To get a proof of theorem 9.1, one can compute the character of the tensor product
on the diagonal matrices using the Weyl character formula for the second factor
(ordering things so that n2 > n1 )

χV n1 ⊗V n2 = χV n1 χV n2
           = (ein1 θ + ei(n1 −2)θ + · · · + e−i(n1 −2)θ + e−in1 θ ) (ei(n2 +1)θ − e−i(n2 +1)θ )/(eiθ − e−iθ )
           = ((ei(n1 +n2 +1)θ − e−i(n1 +n2 +1)θ ) + · · · + (ei(n2 −n1 +1)θ − e−i(n2 −n1 +1)θ ))/(eiθ − e−iθ )
           = χV n1 +n2 + χV n1 +n2 −2 + · · · + χV n2 −n1

So, when we decompose the tensor product of irreducibles into a direct sum of
irreducibles, the ones that must occur are exactly those of theorem 9.1.
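This character identity is also easy to verify numerically. The following Python sketch checks it for sample values of n1 and n2 (chosen arbitrarily):

```python
import cmath

def chi(n, theta):
    """Character of the SU(2) irreducible V^n on diag(e^{i theta}, e^{-i theta})."""
    return sum(cmath.exp(1j * (n - 2 * j) * theta) for j in range(n + 1))

n1, n2 = 2, 3                           # arbitrary sample values with n2 >= n1
theta = 0.4
lhs = chi(n1, theta) * chi(n2, theta)
# The Clebsch-Gordan decomposition: characters of V^{n2-n1}, ..., V^{n1+n2}
rhs = sum(chi(n, theta) for n in range(n2 - n1, n1 + n2 + 1, 2))
assert abs(lhs - rhs) < 1e-12
```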

9.4.3 Some examples


Some simple examples of how this works are:

• Tensor product of two spinors:

V1⊗V1 =V2⊕V0

This says that the four complex dimensional tensor product of two spinor
representations (which are each two complex dimensional) decomposes
into irreducibles as the sum of a three dimensional vector representation
and a one dimensional trivial (scalar) representation.

Using the basis $\begin{pmatrix}1\\0\end{pmatrix}$, $\begin{pmatrix}0\\1\end{pmatrix}$ for $V^1$, the tensor product $V^1 \otimes V^1$ has a basis

$$\begin{pmatrix}1\\0\end{pmatrix}\otimes\begin{pmatrix}1\\0\end{pmatrix},\quad \begin{pmatrix}1\\0\end{pmatrix}\otimes\begin{pmatrix}0\\1\end{pmatrix},\quad \begin{pmatrix}0\\1\end{pmatrix}\otimes\begin{pmatrix}1\\0\end{pmatrix},\quad \begin{pmatrix}0\\1\end{pmatrix}\otimes\begin{pmatrix}0\\1\end{pmatrix}$$

The vector

$$\frac{1}{\sqrt{2}}\left(\begin{pmatrix}1\\0\end{pmatrix}\otimes\begin{pmatrix}0\\1\end{pmatrix} - \begin{pmatrix}0\\1\end{pmatrix}\otimes\begin{pmatrix}1\\0\end{pmatrix}\right) \in V^1 \otimes V^1$$

is clearly antisymmetric under permutation of the two factors of V 1 ⊗ V 1 .


One can show that this vector is invariant under SU (2), by computing
either the action of SU (2) or of its Lie algebra su(2). So, this vector
is a basis for the component V 0 in the decomposition of V 1 ⊗ V 1 into
irreducibles.
The other component, $V^2$, is three dimensional, and has a basis

$$\begin{pmatrix}1\\0\end{pmatrix}\otimes\begin{pmatrix}1\\0\end{pmatrix},\quad \frac{1}{\sqrt{2}}\left(\begin{pmatrix}1\\0\end{pmatrix}\otimes\begin{pmatrix}0\\1\end{pmatrix} + \begin{pmatrix}0\\1\end{pmatrix}\otimes\begin{pmatrix}1\\0\end{pmatrix}\right),\quad \begin{pmatrix}0\\1\end{pmatrix}\otimes\begin{pmatrix}0\\1\end{pmatrix}$$

These three vectors span one-dimensional complex subspaces of weights
q = 2, 0, −2 under the U (1) ⊂ SU (2) subgroup of matrices

$$\begin{pmatrix} e^{i\theta} & 0\\ 0 & e^{-i\theta}\end{pmatrix}$$

They are symmetric under permutation of the two factors of V 1 ⊗ V 1 .


We see that if we take two identical quantum systems with H = V 1 = C2
and make a composite system out of them, if they were bosons we would
get a three dimensional state space V 2 = S 2 (V 1 ), transforming as a vector
(spin one) under SU (2). If they were fermions, we would get a one-
dimensional state space V 0 = Λ2 (V 1 ) of spin zero (invariant under SU (2)).
Note that in this second case we automatically get an entangled state, one
that cannot be written as a decomposable product.
• Tensor product of three or more spinors:

V 1 ⊗ V 1 ⊗ V 1 = (V 2 ⊕ V 0 ) ⊗ V 1 = (V 2 ⊗ V 1 ) ⊕ (V 0 ⊗ V 1 ) = V 3 ⊕ V 1 ⊕ V 1

This says that the tensor product of three spinor representations decom-
poses as a four dimensional (“spin 3/2”) representation plus two copies of
the spinor representation.
One can clearly generalize this and consider N -fold tensor products (V 1 )⊗N
of the spinor representation. Taking N high enough one can get any ir-
reducible representation of SU (2) that one wants this way, giving an al-
ternative to our construction using homogeneous polynomials. Doing this
however gives the irreducible as just one component of something larger,
and one needs a method to project out the component one wants. One
can do this using the action of the symmetric group SN on (V 1 )⊗N and
an understanding of the irreducible representations of SN . This relation-
ship between irreducible representations of SU (2) and those of SN coming
from looking at how both groups act on (V 1 )⊗N is known as “Schur-Weyl
duality”, and generalizes to the case of SU (n), where one looks at N -fold
tensor products of the defining representation of SU (n) matrices on Cn .
For SU (n) this provides perhaps the most straight-forward construction
of all irreducible representations of the group.
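As an illustrative check of the first two examples (a sketch, not part of the text's construction): the antisymmetric vector in V 1 ⊗ V 1 should be fixed by U ⊗ U for any SU (2) matrix U, and the characters should confirm the decomposition of the triple tensor product:

```python
import cmath, math

def kron(A, B):
    """Kronecker product of two 2x2 matrices: the action of A ⊗ B on C^2 ⊗ C^2."""
    return [[A[i][j] * B[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

# A sample SU(2) element: columns (a, b) and (-conj b, conj a), |a|^2 + |b|^2 = 1
a = cmath.exp(0.3j) * math.cos(0.5)
b = cmath.exp(-0.7j) * math.sin(0.5)
U = [[a, -b.conjugate()], [b, a.conjugate()]]

# The antisymmetric (spin zero) vector spanning V^0 is invariant under U ⊗ U
s = 1 / math.sqrt(2)
singlet = [0, s, -s, 0]
out = matvec(kron(U, U), singlet)
assert all(abs(x - y) < 1e-12 for x, y in zip(out, singlet))

# Characters also confirm V^1 ⊗ V^1 ⊗ V^1 = V^3 ⊕ V^1 ⊕ V^1
def chi(n, theta):
    return sum(cmath.exp(1j * (n - 2 * j) * theta) for j in range(n + 1))

theta = 0.3
assert abs(chi(1, theta) ** 3 - (chi(3, theta) + 2 * chi(1, theta))) < 1e-12
```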

9.5 Bilinear forms and tensor products


A different sort of application of tensor products that will turn out to be im-
portant is to the description of bilinear forms, which generalize the dual space
V ∗ of linear forms on V . We have
Definition (Bilinear forms). A bilinear form B on a vector space V over a field
k (for us, k = R or C) is a map

$$B : (u, u') \in V \times V \to B(u, u') \in k$$

that is bilinear in both entries, i.e.

$$B(u + u', u'') = B(u, u'') + B(u', u''), \quad B(cu, u') = cB(u, u')$$

$$B(u, u' + u'') = B(u, u') + B(u, u''), \quad B(u, cu') = cB(u, u')$$


where c ∈ k.
If $B(u', u) = B(u, u')$ the bilinear form is called symmetric, if $B(u', u) = -B(u, u')$ it is antisymmetric.

The relation to tensor products is

Theorem 9.2. The space of bilinear forms on V is isomorphic to V ∗ ⊗ V ∗ .

Proof. Given two linear forms α ∈ V ∗ , β ∈ V ∗ , one has a map

$$\alpha \otimes \beta \in V^* \otimes V^* \to B : B(u, u') = \alpha(u)\beta(u')$$

Choosing a basis ej of V , the coordinate functions vj = e∗j provide a basis of


V ∗ , so the vj ⊗ vk will be a basis of V ∗ ⊗ V ∗ . The map above takes linear
combinations of these to bilinear forms, and is easily seen to be one-to-one and
surjective for such linear combinations.

Given a basis ej of V and dual basis vj of V ∗ (the coordinates), one can


write the element of V ∗ ⊗ V ∗ corresponding to B as the sum
$$\sum_{j,k} B_{jk}\, v_j \otimes v_k$$

This expresses the bilinear form B in terms of a matrix B with entries Bjk ,
which can be computed as
Bjk = B(ej , ek )
In terms of the matrix B, the bilinear form is computed as

$$B(u, u') = \begin{pmatrix} u_1 & \cdots & u_d\end{pmatrix}\begin{pmatrix} B_{11} & \cdots & B_{1d}\\ \vdots & \ddots & \vdots\\ B_{d1} & \cdots & B_{dd}\end{pmatrix}\begin{pmatrix} u'_1\\ \vdots\\ u'_d\end{pmatrix} = u \cdot Bu'$$

The symmetric bilinear forms lie in S 2 (V ∗ ) ⊂ V ∗ ⊗ V ∗ and correspond to


symmetric matrices. Elements of V ∗ give linear functions on V , and one can
get quadratic functions on V from elements B ∈ S 2 (V ∗ ) by taking

u ∈ V → B(u, u) = u · Bu

That one gets quadratic functions by multiplying two linear functions corre-
sponds in terms of tensor products to
$$(\alpha, \beta) \in V^* \times V^* \to \frac{1}{2}(\alpha \otimes \beta + \beta \otimes \alpha) \in S^2(V^*)$$

We will not give the details here, but one can generalize the above from
bilinear forms (isomorphic to V ∗ ⊗ V ∗ ) to multi-linear forms with N arguments
(isomorphic to (V ∗ )⊗N ). Evaluating such a multi-linear form with all argu-
ments set to u ∈ V gives a homogeneous polynomial of degree N , and one
has an isomorphism between symmetric multi-linear forms in S N (V ∗ ) and such
polynomials.
Antisymmetric bilinear forms lie in Λ2 (V ∗ ) ⊂ V ∗ ⊗ V ∗ and correspond to
antisymmetric matrices. One can define a multiplication (called the “wedge
product”) on V ∗ that takes values in Λ2 (V ∗ ) by
$$(\alpha, \beta) \in V^* \times V^* \to \alpha \wedge \beta = \frac{1}{2}(\alpha \otimes \beta - \beta \otimes \alpha) \in \Lambda^2(V^*)$$
One can use this to get a product on the space of antisymmetric multilinear
forms of different degrees, giving something in many ways analogous to the
algebra of polynomials. This plays a role in the description of fermions and will
be considered in more detail in chapter 27.
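The decomposition of a bilinear form into its symmetric and antisymmetric parts, mirroring V ∗ ⊗ V ∗ = S 2 (V ∗ ) ⊕ Λ2 (V ∗ ), can be illustrated concretely. In the sketch below the matrix entries and vectors are arbitrary sample values:

```python
# Decomposing a bilinear form B into symmetric and antisymmetric parts,
# mirroring V* ⊗ V* = S^2(V*) ⊕ Λ^2(V*).

def bilinear(Bmat, u, v):
    """B(u, u') = u · B u', with B_{jk} = B(e_j, e_k) in a chosen basis."""
    return sum(u[j] * Bmat[j][k] * v[k]
               for j in range(len(u)) for k in range(len(v)))

B = [[1.0, 2.0], [5.0, -3.0]]          # arbitrary coefficients B_{jk}
S = [[(B[j][k] + B[k][j]) / 2 for k in range(2)] for j in range(2)]
A = [[(B[j][k] - B[k][j]) / 2 for k in range(2)] for j in range(2)]

u, v = [1.0, 2.0], [3.0, -1.0]
# B = S + A, S is symmetric, and A vanishes on the diagonal (A(u, u) = 0)
assert abs(bilinear(B, u, v) - (bilinear(S, u, v) + bilinear(A, u, v))) < 1e-12
assert bilinear(S, u, v) == bilinear(S, v, u)
assert bilinear(A, u, u) == 0
```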

9.6 For further reading


For more about the tensor product and tensor product of representations, see
section 6 of [67], or appendix B of [59]. Almost every quantum mechanics text-
book will contain an extensive discussion of the Clebsch-Gordan decomposition
for the tensor product of two irreducible SU (2) representations.

Chapter 10

Energy, Momentum and Translation Groups

We’ll now turn to the problem that conventional quantum mechanics courses
generally begin with: that of the quantum system describing a free particle
moving in physical space R3 . This is something quite different than the classical
mechanical description of a free particle, which will be reviewed in chapter 12.
A common way of motivating this is to begin with the 1924 suggestion by de
Broglie that, just as photons may behave like particles or waves, the same should
be true for matter particles. Photons carry an energy given by E = ~ω, where
ω is the angular frequency, and de Broglie’s proposal was that matter particles
behave like a wave with spatial dependence

eik·x

where x is the spatial position, and the momentum of the particle is p = ~k.
This proposal was realized in Schrödinger’s early 1926 discovery of a version
of quantum mechanics, in which the state space H is a space of complex-valued
functions on R3 , called “wavefunctions”. The operator

P = −i~∇

will have eigenvalues ~k, the de Broglie momentum, so it can be identified as


the momentum operator.
In this chapter our discussion will emphasize the central role of the momen-
tum operator. This operator will have the same relationship to spatial trans-
lations as the Hamiltonian operator does to time translations. In both cases,
the operators are given by the Lie algebra representation corresponding to a
unitary representation on the quantum state space H of groups of translations
(translation in the three space and one time directions respectively).
One way to motivate the quantum theory of a free particle is that, whatever
it is, it should have the same sort of behavior as in the classical case under
the translational and rotational symmetries of space-time. In chapter 12 we

will see that in the Hamiltonian form of classical mechanics, the components of
the momentum vector give a basis of the Lie algebra of the spatial translation
group R3 , the energy a basis of the Lie algebra of the time translation group
R. Invoking the classical relationship between energy and momentum

$$E = \frac{|p|^2}{2m}$$
used in non-relativistic mechanics relates the Hamiltonian and momentum oper-
ators, giving the conventional Schrödinger differential equation for the wavefunc-
tion of a free particle. We will examine the solutions to this equation, beginning
with the case of periodic boundary conditions, where spatial translations in each
direction are given by the compact group U (1) (whose representations we have
already studied in detail).

10.1 Energy, momentum and space-time translations
We have seen that it is a basic axiom of quantum mechanics that the observ-
able operator responsible for infinitesimal time translations is the Hamiltonian
operator H, a fact that is expressed as the Schrödinger equation
$$i\hbar\frac{d}{dt}|\psi\rangle = H|\psi\rangle$$
When H is time-independent, one can understand this equation as reflecting the
existence of a unitary representation (U (t), H) of the group R of time transla-
tions on the state space H. For the case of H infinite-dimensional, this is known
as Stone’s theorem for one-parameter unitary groups, see for instance chapter
10.2 of [29] for details.
When H is finite-dimensional, the fact that a differentiable unitary repre-
sentation U (t) of R on H is of the form
$$U(t) = e^{-\frac{i}{\hbar}tH}$$

for H a self-adjoint matrix follows from the same sort of argument as in theorem
2.1. Such a U (t) provides solutions of the Schrödinger equation by

|ψ(t)i = U (t)|ψ(0)i

The Lie algebra of R is also R and we get a Lie algebra representation of R


by taking the time derivative of U (t), which gives us
$$\hbar\frac{d}{dt}U(t)\Big|_{t=0} = -iH$$
Since this Lie algebra representation comes from taking the derivative of a uni-
tary representation, −iH will be skew-adjoint, so H will be self-adjoint. The

minus sign is a convention, for reasons that will be explained in the discussion
of momentum to come later.
Note that if one wants to treat the additive group R as a matrix group,
related to its Lie algebra R by exponentiation of matrices, one can describe the
group as the group of matrices of the form

$$\begin{pmatrix} 1 & a\\ 0 & 1\end{pmatrix}$$

since

$$\begin{pmatrix} 1 & a\\ 0 & 1\end{pmatrix}\begin{pmatrix} 1 & b\\ 0 & 1\end{pmatrix} = \begin{pmatrix} 1 & a+b\\ 0 & 1\end{pmatrix}$$

Since

$$e^{\begin{pmatrix} 0 & a\\ 0 & 0\end{pmatrix}} = \begin{pmatrix} 1 & a\\ 0 & 1\end{pmatrix}$$

the Lie algebra is just matrices of the form

$$\begin{pmatrix} 0 & a\\ 0 & 0\end{pmatrix}$$

We will mostly though write the group law in additive form. We are inter-
ested in the group R as a group of translations acting on a linear space, and the
corresponding infinite dimensional representation induced on functions on the
space. The simplest case is when R acts on itself by translation. Here a ∈ R
acts on q ∈ R (where q is a coordinate on R) by

q →a·q =q+a

and the induced representation π on functions uses

π(g)f (q) = f (g −1 · q)

to get
π(a)f (q) = f (q − a)
In the Lie algebra version of this representation, we will have
$$\pi'(a) = -a\frac{d}{dq}$$

since

$$\pi(a)f = e^{\pi'(a)}f = e^{-a\frac{d}{dq}}f(q) = f(q) - a\frac{df}{dq} + \frac{a^2}{2!}\frac{d^2f}{dq^2} - \cdots = f(q-a)$$
which for functions with appropriate properties is just Taylor’s formula. Note
that here the same a labels points of the Lie algebra and of the group. We are
not treating the group R as a matrix group, since we want an additive group

law. So Lie algebra elements are not defined as for matrix groups (things one
exponentiates to get group elements). Instead, we think of the Lie algebra as
the tangent space to the group at the identity, and then simply identify R as the
tangent space at 0 (the Lie algebra) and R as the additive group. Note however
that the representation obeys a multiplicative law, with the homomorphism
property
π(a + b) = π(a)π(b)
so there is an exponential in the relation between $\pi$ and $\pi'$.
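For a polynomial the Taylor series terminates, so the relation between π and π′ can be checked exactly in a few lines of Python (an illustration, with arbitrary sample coefficients): applying the exponentiated derivative operator really does translate the function.

```python
import math

def poly_deriv(c):
    """Derivative of a polynomial given by coefficients c[k] of q^k."""
    return [k * c[k] for k in range(1, len(c))]

def poly_eval(c, q):
    return sum(ck * q ** k for k, ck in enumerate(c))

f = [1.0, 0.0, -2.0, 4.0]              # f(q) = 1 - 2 q^2 + 4 q^3 (a sample)
a, q = 0.7, 1.3

# (exp(pi'(a)) f)(q) = sum_m (-a)^m / m! (d/dq)^m f(q), a finite sum here
total, d = 0.0, f
for m in range(len(f)):
    total += (-a) ** m / math.factorial(m) * poly_eval(d, q)
    d = poly_deriv(d)

# The result is exactly the translated function f(q - a)
assert abs(total - poly_eval(f, q - a)) < 1e-12
```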
Since we now want to describe quantum systems that depend not just on
time, but on space variables q = (q1 , q2 , q3 ), we will have an action by unitary
transformations of not just the group R of time translations, but also the group
R3 of spatial translations. We will define the corresponding Lie algebra rep-
resentations using self-adjoint operators P1 , P2 , P3 that play the same role for
spatial translations that the Hamiltonian plays for time translations:

Definition (Momentum operators). For a quantum system with state space


H given by complex valued functions of position variables q1 , q2 , q3 , momentum
operators P1 , P2 , P3 are defined by

$$P_1 = -i\hbar\frac{\partial}{\partial q_1}, \quad P_2 = -i\hbar\frac{\partial}{\partial q_2}, \quad P_3 = -i\hbar\frac{\partial}{\partial q_3}$$

These are given the name “momentum operators” since we will see that their
eigenvalues have an interpretation as the components of the momentum vector
for the system, just as the eigenvalues of the Hamiltonian have an interpretation
as the energy. Note that while in the case of the Hamiltonian the factor of ~ kept
track of the relative normalization of energy and time units, here it plays the
same role for momentum and length units. It can be set to one if appropriate
choices of units of momentum and length are made.
The differentiation operator is skew-adjoint since, using integration by parts
one has for ψ ∈ H
$$\int_{-\infty}^{+\infty} \overline{\psi}\Big(\frac{d}{dq}\psi\Big)dq = \int_{-\infty}^{+\infty}\Big(\frac{d}{dq}(\overline{\psi}\psi) - \Big(\frac{d}{dq}\overline{\psi}\Big)\psi\Big)dq = -\int_{-\infty}^{+\infty}\Big(\frac{d}{dq}\overline{\psi}\Big)\psi\, dq$$

The Pj are thus self-adjoint operators, with real eigenvalues as expected for an
observable operator. Multiplying by −i to get the corresponding skew-adjoint
operator of a unitary Lie algebra representation we find


$$-iP_j = -\hbar\frac{\partial}{\partial q_j}$$

Up to the ~ factor that depends on units, these are exactly the Lie algebra
representation operators on basis elements for the action of R3 on functions on
R3 induced from translation:

π(a1 , a2 , a3 )f (q1 , q2 , q3 ) = f (q1 − a1 , q2 − a2 , q3 − a3 )

$$\pi'(a_1, a_2, a_3) = a_1(-iP_1) + a_2(-iP_2) + a_3(-iP_3) = -\hbar\Big(a_1\frac{\partial}{\partial q_1} + a_2\frac{\partial}{\partial q_2} + a_3\frac{\partial}{\partial q_3}\Big)$$
Note that the convention for the sign choice here is the opposite from the
case of the Hamiltonian ($-iP = -\hbar\frac{d}{dq}$ vs. $-iH = \hbar\frac{d}{dt}$). This means that the
conventional sign choice we have been using for the Hamiltonian makes it minus
the generator of translations in the time direction. The reason for this comes
from considerations of special relativity, where the inner product on space-time
has opposite signs for the space and time dimensions. We will review this
subject in chapter 37 but for now we just need the relationship special relativity
gives between energy and momentum. Space and time are put together in
“Minkowski space”, which is R4 with indefinite inner product

< (u0 , u1 , u2 , u3 ), (v0 , v1 , v2 , v3 ) >= −u0 v0 + u1 v1 + u2 v2 + u3 v3

Energy and momentum are the components of a Minkowski space vector (p0 =
E, p1 , p2 , p3 ) with norm-squared given by minus the mass-squared:

< (E, p1 , p2 , p3 ), (E, p1 , p2 , p3 ) >= −E 2 + |p|2 = −m2

This is the formula for a choice of space and time units such that the speed of
light is 1. Putting in factors of the speed of light c to get the units right one
has
E 2 − |p|2 c2 = m2 c4
Two special cases of this are:

• For photons, m = 0, and one has the energy momentum relation E = |p|c

• For velocities v small compared to c (and thus momenta |p| small com-
pared to mc), one has

$$E = \sqrt{|p|^2c^2 + m^2c^4} = c\sqrt{|p|^2 + m^2c^2} \approx \frac{c|p|^2}{2mc} + mc^2 = \frac{|p|^2}{2m} + mc^2$$
In the non-relativistic limit, we use this energy-momentum relation to
describe particles with velocities small compared to c, typically dropping
the momentum-independent constant term mc2 .
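A quick numeric illustration of this non-relativistic expansion (the mass below is the electron mass; the momentum is an arbitrary value small compared to mc):

```python
import math

# E = sqrt(|p|^2 c^2 + m^2 c^4) versus its expansion mc^2 + |p|^2/(2m)
c = 3.0e8                   # speed of light, m/s (approximate)
m = 9.1e-31                 # electron mass, kg (approximate)
p = 1.0e-25                 # arbitrary momentum, well below mc ≈ 2.7e-22

E_exact = math.sqrt(p ** 2 * c ** 2 + m ** 2 * c ** 4)
E_approx = m * c ** 2 + p ** 2 / (2 * m)

# For |p| << mc the two agree to very high relative accuracy
assert abs(E_exact - E_approx) / E_exact < 1e-10
```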

In later chapters we will discuss quantum systems that describe photons,


as well as other possible ways of constructing quantum systems for relativistic
particles. For now though, we will stick to the non-relativistic case. To describe
a quantum non-relativistic particle we choose a Hamiltonian operator H such
that its eigenvalues (the energies) will be related to the momentum operator
eigenvalues (the momenta) by the classical energy-momentum relation $E = \frac{|p|^2}{2m}$:

$$H = \frac{1}{2m}(P_1^2 + P_2^2 + P_3^2) = \frac{1}{2m}|P|^2 = \frac{-\hbar^2}{2m}\Big(\frac{\partial^2}{\partial q_1^2} + \frac{\partial^2}{\partial q_2^2} + \frac{\partial^2}{\partial q_3^2}\Big)$$

The Schrödinger equation then becomes:

$$i\hbar\frac{\partial}{\partial t}\psi(q, t) = \frac{-\hbar^2}{2m}\Big(\frac{\partial^2}{\partial q_1^2} + \frac{\partial^2}{\partial q_2^2} + \frac{\partial^2}{\partial q_3^2}\Big)\psi(q, t) = \frac{-\hbar^2}{2m}\nabla^2\psi(q, t)$$

This is an easily solved simple constant coefficient second-order partial differ-


ential equation. One method of solution is to separate out the time-dependence,
by first finding solutions ψE to the time-independent equation

$$H\psi_E(q) = \frac{-\hbar^2}{2m}\nabla^2\psi_E(q) = E\psi_E(q)$$
with eigenvalue E for the Hamiltonian operator and then use the fact that
$$\psi(q, t) = \psi_E(q)e^{-\frac{i}{\hbar}tE}$$

will give solutions to the full-time dependent equation


$$i\hbar\frac{\partial}{\partial t}\psi(q, t) = H\psi(q, t)$$
The solutions ψE (q) to the time-independent equation are just complex expo-
nentials proportional to

$$e^{i(k_1q_1 + k_2q_2 + k_3q_3)} = e^{ik\cdot q}$$

satisfying
$$\frac{-\hbar^2}{2m}(-i)^2|k|^2 = \frac{\hbar^2|k|^2}{2m} = E$$
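A finite-difference check of this eigenvalue property in one spatial dimension, with ℏ = m = 1 (the values of k, q, and the step size are illustrative choices):

```python
import cmath

# psi(q) = e^{ikq} should satisfy -(hbar^2/2m) psi'' = E psi
# with E = hbar^2 k^2 / (2m); here hbar = m = 1.
k, q, h = 1.7, 0.4, 1e-4
psi = lambda x: cmath.exp(1j * k * x)

# Second-order central difference approximation to psi''(q)
second_deriv = (psi(q + h) - 2 * psi(q) + psi(q - h)) / h ** 2
lhs = -0.5 * second_deriv              # -(hbar^2/2m) psi''
rhs = 0.5 * k ** 2 * psi(q)            # E psi, E = hbar^2 k^2 / (2m)

assert abs(lhs - rhs) < 1e-6
```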
We have found that solutions to the Schrödinger equation are given by linear
combinations of states |ki labeled by a vector k, which are eigenstates of the
momentum and Hamiltonian operators with

$$P_j|k\rangle = \hbar k_j|k\rangle, \quad H|k\rangle = \frac{\hbar^2}{2m}|k|^2|k\rangle$$
These are states with well-defined momentum and energy

$$p_j = \hbar k_j, \quad E = \frac{|p|^2}{2m}$$
so they satisfy exactly the same energy-momentum relations as those for a clas-
sical non-relativistic particle.
While the quantum mechanical state space H contains states with the clas-
sical energy-momentum relation, it also contains much, much more since it
includes linear combinations of such states. At t = 0 one has
$$|\psi\rangle = \sum_k c_k e^{ik\cdot q}$$

where ck are complex numbers, and the general time-dependent state will be
$$|\psi(t)\rangle = \sum_k c_k e^{ik\cdot q} e^{-it\hbar\frac{|k|^2}{2m}}$$

or, equivalently in terms of momenta $p = \hbar k$

$$|\psi(t)\rangle = \sum_p c_p e^{\frac{i}{\hbar}p\cdot q} e^{-\frac{i}{\hbar}\frac{|p|^2}{2m}t}$$

10.2 Periodic boundary conditions and the group U (1)
We have not yet discussed the inner product on our space of states when they
are given as wavefunctions on R3 , and there is a significant problem with doing
this. To get unitary representations of translations, we need to use a translation
invariant, Hermitian inner product on wavefunctions, and this will have to be
of the form

$$\langle\psi_1, \psi_2\rangle = C\int_{\mathbf{R}^3} \overline{\psi_1(q)}\psi_2(q)\, d^3q$$

for some constant C. But if we try and compute the norm-squared of one of
our basis states |ki we find
$$\langle k|k\rangle = C\int_{\mathbf{R}^3}(e^{-ik\cdot q})(e^{ik\cdot q})d^3q = C\int_{\mathbf{R}^3} 1\, d^3q = \infty$$

As a result there is no value of C which will give these states a unit norm.
In the finite dimensional case, a linear algebra theorem assures us that given a
self-adjoint operator, we can find an orthonormal basis of its eigenvectors. In this
infinite dimensional case this is no longer true, and a much more sophisticated
formalism (the “spectral theorem for self-adjoint operators”) is needed to replace
the linear algebra theorem. This is a standard topic in treatments of quantum
mechanics aimed at mathematicians emphasizing analysis, but we will not try
and enter into this here. One place to find such a discussion is section 2.1 of
[64].
One way to deal with the normalization problem is to replace the non-
compact space by one of finite volume. We’ll consider first the simplified case of
a single spatial dimension, since once one sees how this works for one dimension,
treating the others the same way is straight-forward. In this one dimensional
case, one replaces R by the circle S 1 . This is equivalent to the physicist’s method
of imposing “periodic boundary conditions”, meaning to define the theory on
an interval, and then identify the ends of the interval. One can then think of
the position variable q as an angle φ and define the inner product as
$$\langle\psi_1, \psi_2\rangle = \frac{1}{2\pi}\int_0^{2\pi}\overline{\psi_1(\phi)}\psi_2(\phi)d\phi$$

The state space is then
H = L2 (S 1 )
the space of complex-valued square-integrable functions on the circle.
Instead of the translation group R, we have the standard action of the
group SO(2) on the circle. Elements g(θ) of the group are rotations of the circle
counterclockwise by an angle θ, or if we parametrize the circle by an angle φ,
just shifts
φ→φ+θ
Recall that in general we can construct a representation on functions from a
group action on a space by

π(g)f (x) = f (g −1 · x)

so we see that this rotation action on the circle gives a representation on H

π(g(θ))ψ(φ) = ψ(φ − θ)

If X is a basis of the Lie algebra so(2) (for instance, taking the circle as the
unit circle in $\mathbf{R}^2$, rotations 2 by 2 matrices, $X = \begin{pmatrix} 0 & -1\\ 1 & 0\end{pmatrix}$, $g(\theta) = e^{\theta X}$) then the
Lie algebra representation is given by taking the derivative
$$\pi'(X)f(\phi) = \frac{d}{d\theta}f(\phi - \theta)\Big|_{\theta=0} = -f'(\phi)$$

so we have

$$\pi'(X) = -\frac{d}{d\phi}$$
This operator is defined on a dense subspace of H = L2 (S 1 ) and is skew-adjoint,
since (using integration by parts)
$$\begin{aligned}
\langle\psi_1, \psi_2'\rangle &= \frac{1}{2\pi}\int_0^{2\pi}\overline{\psi_1}\frac{d}{d\phi}\psi_2\, d\phi\\
&= \frac{1}{2\pi}\int_0^{2\pi}\Big(\frac{d}{d\phi}(\overline{\psi_1}\psi_2) - \Big(\frac{d}{d\phi}\overline{\psi_1}\Big)\psi_2\Big)d\phi\\
&= -\langle\psi_1', \psi_2\rangle
\end{aligned}$$

The eigenfunctions of $\pi'(X)$ are just the $e^{in\phi}$, for $n \in \mathbf{Z}$, which we will also
write as state vectors $|n\rangle$. These are orthonormal

hn|mi = δnm

and provide a basis for the space L2 (S 1 ), a basis that corresponds to the de-
composition into irreducibles of

L2 (S 1 )

as a representation of SO(2) described above. One has

(π, L2 (S 1 )) = ⊕n∈Z (πn , C)

where πn are the irreducible one-dimensional representations given by

πn (g(θ)) = einθ

The theory of Fourier series for functions on S 1 says that one can expand any
function ψ ∈ L2 (S 1 ) in terms of this basis, i.e.
$$|\psi\rangle = \psi(\phi) = \sum_{n=-\infty}^{+\infty} c_n e^{in\phi} = \sum_{n=-\infty}^{+\infty} c_n|n\rangle$$

where cn ∈ C. The condition that ψ ∈ L2 (S 1 ) corresponds to the condition


$$\sum_{n=-\infty}^{+\infty}|c_n|^2 < \infty$$

on the coefficients cn . Using orthonormality of the |ni we find


$$c_n = \langle n|\psi\rangle = \frac{1}{2\pi}\int_0^{2\pi} e^{-in\phi}\psi(\phi)d\phi$$

The Lie algebra of the group S 1 is the same as that of the group (R, +),
and the π 0 (X) we have found for the S 1 action on functions is related to the
momentum operator in the same way as in the R case. So, we can use the same
momentum operator
$$P = -i\hbar\frac{d}{d\phi}$$

which satisfies

$$P|n\rangle = \hbar n|n\rangle$$
By changing space to the compact S 1 we now have momenta that instead of
taking on any real value, can only be integral numbers times ~. Solving the
Schrödinger equation

∂ P2 −~2 ∂ 2
i~ ψ(φ, t) = ψ(φ, t) = ψ(φ, t)
∂t 2m 2m ∂φ2
as before, we find
−~2 d2
EψE (φ) = ψE (φ)
2m dφ2
an eigenvector equation, which has solutions |ni, with

~2 n 2
E=
2m

Writing a solution to the Schrödinger equation as
$$\psi(\phi, t) = \sum_{n=-\infty}^{+\infty} c_n e^{in\phi} e^{-i\frac{\hbar n^2}{2m}t}$$

the cn will be determined from the initial condition of knowing the wavefunction
at time t = 0, according to the Fourier coefficient formula
$$c_n = \frac{1}{2\pi}\int_0^{2\pi} e^{-in\phi}\psi(\phi, 0)d\phi$$
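The Fourier coefficient formula and the expansion in the |n⟩ basis can be illustrated numerically. The sketch below builds a sample wavefunction from two modes (arbitrary choices) and recovers its coefficients by a Riemann-sum version of the integral:

```python
import cmath, math

# A sample wavefunction on the circle built from the modes n = 2 and n = -3
N = 512
phis = [2 * math.pi * j / N for j in range(N)]
psi = lambda phi: cmath.exp(2j * phi) + 0.5 * cmath.exp(-3j * phi)

def c(n):
    """c_n = (1/2 pi) ∫ e^{-in phi} psi(phi) d phi, as a discrete sum."""
    return sum(cmath.exp(-1j * n * phi) * psi(phi) for phi in phis) / N

# The coefficients of the two modes come back, all others vanish
assert abs(c(2) - 1.0) < 1e-12 and abs(c(-3) - 0.5) < 1e-12 and abs(c(1)) < 1e-12

# Reconstruction: psi(phi) = sum_n c_n e^{in phi}
phi0 = 0.9
recon = sum(c(n) * cmath.exp(1j * n * phi0) for n in range(-5, 6))
assert abs(recon - psi(phi0)) < 1e-12
```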
To get something more realistic, we need to take our circle to have an ar-
bitrary circumference L, and we can study our original problem by considering
the limit L → ∞. To do this, we just need to change variables from φ to φL ,
where
$$\phi_L = \frac{L}{2\pi}\phi$$
The momentum operator will now be
$$P = -i\hbar\frac{d}{d\phi_L}$$

and its eigenvalues will be quantized in units of $\frac{2\pi\hbar}{L}$. The energy eigenvalues
will be

$$E = \frac{2\pi^2\hbar^2 n^2}{mL^2}$$

10.3 The group R and the Fourier transform


In the previous section, we imposed periodic boundary conditions, replacing the
group R of translations by a compact group S 1 , and then used the fact that
unitary representations of this group are labeled by integers. This made the

analysis rather easy, with H = L2 (S 1 ) and the self-adjoint operator P = −i~ ∂φ
behaving much the same as in the finite-dimensional case: the eigenvectors of
P give a countable orthonormal basis of H. If one wants to, one can think of P
as an infinite-dimensional matrix.
Unfortunately, in order to understand many aspects of quantum mechanics,
we can’t get away with this trick, but need to work with R itself. One reason
for this is that the unitary representations of R are labeled by the same group,
R, and we will find it very important to exploit this and treat positions and
momenta on the same footing (see the discussion of the Heisenberg group in
chapter 11). What plays the role here of |ni = einφ , n ∈ Z will be the |ki = eikq ,
k ∈ R. These are functions on R that are irreducible representations under the
translation action (π(a) acts on functions of q by taking q → q − a)
π(a)eikq = eik(q−a) = e−ika eikq
We can try and mimic the Fourier series decomposition, with the coefficients
cn that depend on the labels of the irreducibles replaced by a function fe(k)
depending on the label k of the irreducible representation of R.

Definition (Fourier transform). The Fourier transform of a function ψ is given
by

$$\mathcal{F}\psi = \widetilde{\psi}(k) \equiv \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-ikq}\psi(q)dq$$
The definition makes sense for ψ ∈ L1 (R), Lebesgue integrable functions on
R. For the following, it is convenient to instead restrict to the Schwartz space
S(R) of functions ψ such that the function and its derivatives fall off faster than
any power at infinity (which is a dense subspace of L2 (R)). For more details
about the analysis and proofs of the theorems quoted here, one can refer to a
standard textbook such as [63].
Given the Fourier transform of ψ, one can recover ψ itself:
Theorem (Fourier Inversion). For $\widetilde{\psi} \in \mathcal{S}(\mathbf{R})$ the Fourier transform of a function $\psi \in \mathcal{S}(\mathbf{R})$, one has

$$\psi(q) = \widetilde{\mathcal{F}}\widetilde{\psi} = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} e^{ikq}\widetilde{\psi}(k)dk$$

Note that $\widetilde{\mathcal{F}}$ is the same linear operator as $\mathcal{F}$, with a change in sign of the
argument of the function it is applied to. Note also that we are choosing one
of various popular ways of normalizing the definition of the Fourier transform.
In others, the factor of $2\pi$ may appear instead in the exponent of the complex
exponential, or just in one of $\mathcal{F}$ or $\widetilde{\mathcal{F}}$ and not the other.

The operators $\mathcal{F}$ and $\widetilde{\mathcal{F}}$ are thus inverses of each other on $\mathcal{S}(\mathbf{R})$. One has

Theorem (Plancherel). $\mathcal{F}$ and $\widetilde{\mathcal{F}}$ extend to unitary isomorphisms of $L^2(\mathbf{R})$
with itself. In other words

$$\int_{-\infty}^{\infty}|\psi(q)|^2 dq = \int_{-\infty}^{\infty}|\widetilde{\psi}(k)|^2 dk$$

Note that we will be using the same inner product on functions on R


$$\langle\psi_1, \psi_2\rangle = \int_{-\infty}^{\infty}\overline{\psi_1(q)}\psi_2(q)dq$$

both for functions of q and their Fourier transforms, functions of k.


An important example is the case of Gaussian functions where
$$\begin{aligned}
\mathcal{F}e^{-\alpha\frac{q^2}{2}} &= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} e^{-ikq}e^{-\alpha\frac{q^2}{2}}dq\\
&= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} e^{-\frac{\alpha}{2}\left(\left(q + i\frac{k}{\alpha}\right)^2 - \left(\frac{ik}{\alpha}\right)^2\right)}dq\\
&= \frac{1}{\sqrt{2\pi}}e^{-\frac{k^2}{2\alpha}}\int_{-\infty}^{+\infty} e^{-\frac{\alpha}{2}q'^2}dq'\\
&= \frac{1}{\sqrt{\alpha}}e^{-\frac{k^2}{2\alpha}}
\end{aligned}$$
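This closed form can be checked by doing the Fourier integral numerically (a sketch; the window, step count, and sample values of α and k are arbitrary choices, and the imaginary part of the integral vanishes by symmetry):

```python
import math

# Riemann-sum check of F[e^{-alpha q^2/2}](k) = (1/sqrt(alpha)) e^{-k^2/(2 alpha)}
alpha, k = 2.0, 1.3                     # arbitrary sample values
L, N = 20.0, 20000                      # integration window [-L/2, L/2] and steps
dq = L / N

total = 0.0
for j in range(N):
    q = -L / 2 + (j + 0.5) * dq
    total += math.cos(k * q) * math.exp(-alpha * q * q / 2) * dq
ft = total / math.sqrt(2 * math.pi)     # the odd (sin) part integrates to zero

expected = math.exp(-k * k / (2 * alpha)) / math.sqrt(alpha)
assert abs(ft - expected) < 1e-6
```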

A crucial property of the unitary operator F on H is that it diagonalizes
the differentiation operator and thus the momentum operator P . Under Fourier
transform, differential operators become just multiplication by a polynomial,
giving a powerful technique for solving differential equations. Computing the
Fourier transform of the differentiation operator using integration by parts, we
find
$$\begin{aligned}
\widetilde{\frac{d\psi}{dq}} &= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} e^{-ikq}\frac{d\psi}{dq}dq\\
&= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\Big(\frac{d}{dq}(e^{-ikq}\psi) - \Big(\frac{d}{dq}e^{-ikq}\Big)\psi\Big)dq\\
&= ik\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} e^{-ikq}\psi\, dq\\
&= ik\widetilde{\psi}(k)
\end{aligned}$$

So under Fourier transform, differentiation by q becomes multiplication by ik.


This is the infinitesimal version of the fact that translation becomes multiplica-
tion by a phase under Fourier transform. If ψa (q) = ψ(q + a), one has
$$\begin{aligned}
\widetilde{\psi_a}(k) &= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} e^{-ikq}\psi(q + a)dq\\
&= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} e^{-ik(q' - a)}\psi(q')dq'\\
&= e^{ika}\widetilde{\psi}(k)
\end{aligned}$$
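The fact that Fourier transform turns differentiation by q into multiplication by ik can also be checked numerically, here for a Gaussian, whose derivative is known in closed form (the parameters and integration window are illustrative choices):

```python
import cmath, math

# Numerical check that F[d psi/dq](k) = ik F[psi](k), for psi(q) = e^{-q^2/2},
# whose derivative is psi'(q) = -q e^{-q^2/2}.
k = 0.8
L, N = 20.0, 20000
dq = L / N

ft_psi, ft_dpsi = 0j, 0j
for j in range(N):
    q = -L / 2 + (j + 0.5) * dq
    g = math.exp(-q * q / 2)
    ft_psi += cmath.exp(-1j * k * q) * g * dq
    ft_dpsi += cmath.exp(-1j * k * q) * (-q * g) * dq

ft_psi /= math.sqrt(2 * math.pi)
ft_dpsi /= math.sqrt(2 * math.pi)

assert abs(ft_dpsi - 1j * k * ft_psi) < 1e-8
```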

Since p = ~k, we can easily change variables and work with p instead of k,
and often will do this from now on. As with the factors of 2π, there’s a choice
of where to put the factors of ~ in the normalization of the Fourier transform.
We’ll make the following choices, to preserve symmetry between the formulas
for Fourier transform and inverse Fourier transform:
$$\widetilde{\psi}(p) = \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{+\infty} e^{-i\frac{pq}{\hbar}}\psi(q)dq$$

$$\psi(q) = \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{+\infty} e^{i\frac{pq}{\hbar}}\widetilde{\psi}(p)dp$$
Note that in this case we have lost an important property that we had for
finite dimensional H and had managed to preserve by using S 1 rather than
R as our space. If we take H = L2 (R), the eigenvectors for the operator P
(the functions eikq ) are not square-integrable, so not in H. The operator P
is an unbounded operator and we no longer have a theorem saying that its
eigenvectors give an orthonormal basis of H. As mentioned earlier, one way
to deal with this uses a general spectral theorem for self-adjoint operators on a
Hilbert space, for more details see Chapter 2 of [64].

10.3.1 Delta functions
One would like to think of the eigenvectors of the operator P as in some sense
continuing to provide an orthonormal basis for H. One problem is that these
eigenvectors are not square-integrable, so one needs to expand one’s notion of
state space H beyond a space like L2 (R). Another problem is that Fourier
transforms of such eigenvectors (which will be eigenvectors of the position op-
erator) gives something that is not a function but a distribution. The proper
general formalism for handling state spaces H which include eigenvectors of both
position and momentum operators seems to be that of “rigged Hilbert spaces”
which this author confesses to never have mastered (the standard reference is
[21]). As a result we won’t here give a rigorous discussion, but will use non-
normalizable functions and distributions in the non-rigorous form in which they
are used in physics. The physics formalism is set up to work as if H was finite
dimensional and allows easy manipulations which don’t obviously make sense.
Our interest though is not in the general theory, but in very specific quantum
systems, where everything is determined by their properties as unitary group
representations. For such systems, the general theory of rigged Hilbert spaces
is not needed, since for the statements we are interested in various ways can
be found to make them precise (although we will generally not enter into the
complexities needed to do so).
Given any function g(q) on R, one can try and define an element of the dual
space of the space of functions on R by integration, i.e. by the linear operator

$$f \to \int_{-\infty}^{+\infty} g(q)f(q)dq$$

(we won’t try and specify which condition on functions f or g is chosen to make
sense of this). There are however some other very obvious linear functionals on
such a function space, for instance the one given by evaluating the function at
q = c:
f → f (c)
Such linear functionals correspond to generalized functions, objects which when
fed into the formula for integration over R give the desired linear functional.
The most well-known of these is the one that gives this evaluation at q = c, it
is known as the “delta function” and written as δ(q − c). It is the object which,
if it were a function, would satisfy
$$\int_{-\infty}^{+\infty}\delta(q - c)f(q)dq = f(c)$$

To make sense of such an object, one can take it to be a limit of actual functions.
For the δ-function, consider the limit as $\epsilon \to 0$ of the Gaussians

$$g_\epsilon = \frac{1}{\sqrt{2\pi\epsilon}}e^{-\frac{(q-c)^2}{2\epsilon}}$$

which satisfy
$$\int_{-\infty}^{+\infty} g_\epsilon(q)dq = 1$$

for all $\epsilon > 0$ (one way to see this is to use the formula given earlier for the
Fourier transform of a Gaussian).
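That these Gaussians act like a delta function in the limit can be seen numerically: smearing a sample function against them for decreasing values of ε approaches evaluation at q = c (a sketch, with an arbitrary smooth f):

```python
import math

# Smearing a sample function against normalized Gaussians concentrated at
# q = c: the result approaches f(c) as the width parameter shrinks.
c = 0.5
f = lambda q: math.cos(q) + q ** 2      # an arbitrary smooth sample function

def smeared(eps, N=40000, L=20.0):
    """Riemann sum for ∫ g_eps(q) f(q) dq, g_eps a Gaussian of variance eps."""
    dq = L / N
    total = 0.0
    for j in range(N):
        q = -L / 2 + (j + 0.5) * dq
        total += math.exp(-(q - c) ** 2 / (2 * eps)) * f(q) * dq
    return total / math.sqrt(2 * math.pi * eps)

# Successively smaller eps give successively better approximations to f(c)
errs = [abs(smeared(eps) - f(c)) for eps in (0.1, 0.01, 0.001)]
assert errs[0] > errs[1] > errs[2] and errs[2] < 1e-3
```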
Heuristically (ignoring obvious problems of interchange of integrals that
don’t make sense), one can write the Fourier inversion formula as follows
$$\begin{aligned}
\psi(q) &= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} e^{ikq}\widetilde{\psi}(k)dk\\
&= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} e^{ikq}\Big(\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} e^{-ikq'}\psi(q')dq'\Big)dk\\
&= \frac{1}{2\pi}\int_{-\infty}^{+\infty}\Big(\int_{-\infty}^{+\infty} e^{ik(q-q')}\psi(q')dk\Big)dq'\\
&= \int_{-\infty}^{+\infty}\delta(q' - q)\psi(q')dq'
\end{aligned}$$

Taking the delta function to be an even function (so $\delta(q' - q) = \delta(q - q')$),
one can interpret the above calculation as justifying the formula
$$\delta(q - q') = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{ik(q-q')}dk$$

One then goes on to consider the eigenvectors

$$|k\rangle = \frac{1}{\sqrt{2\pi}}e^{ikq}$$
of the momentum operator as satisfying a replacement for the finite-dimensional


orthonormality relation, with the δ-function replacing the δnm :
$$\langle k'|k\rangle = \int_{-\infty}^{+\infty}\overline{\Big(\frac{1}{\sqrt{2\pi}}e^{ik'q}\Big)}\Big(\frac{1}{\sqrt{2\pi}}e^{ikq}\Big)dq = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{i(k-k')q}dq = \delta(k - k')$$

As mentioned before, we will usually work with the variable p = ~k, in which
case we have
$$|p\rangle = \frac{1}{\sqrt{2\pi\hbar}}e^{i\frac{pq}{\hbar}}$$

and

$$\delta(p - p') = \frac{1}{2\pi\hbar}\int_{-\infty}^{+\infty} e^{i\frac{(p-p')q}{\hbar}}dq$$

For a more mathematically legitimate version of this calculation, one place


to look is Lecture 6 in the notes on physics by Dolgachev [12].
10.4 For further reading
Every book about quantum mechanics covers this example of the free quantum
particle somewhere very early on, in detail. Our discussion here is unusual
just in emphasizing the role of the spatial translation groups and its unitary
representations. Discussions of quantum mechanics for mathematicians (such
as [64]) typically emphasize the development of the functional analysis needed
for a proper description of the Hilbert space H and of the properties of general
self-adjoint operators on this state space. In this class we’re restricting attention
to a quite limited set of such operators coming from Lie algebra representations,
so will avoid the general theory.
Chapter 11

The Heisenberg group and the Schrödinger Representation

In our discussion of the free particle, we used just the actions of the groups R3 of
spatial translations and the group R of time translations, finding corresponding
observables, the self-adjoint momentum (P ) and Hamiltonian (H) operators.
We’ve seen though that the Fourier transform involves a perfectly symmetrical
treatment of position and momentum variables. This allows us to introduce a
position operator Q acting on our state space H. We will analyze in detail in
this chapter the implications of extending the algebra of observable operators in
this way, most of the time restricting to the case of a single spatial dimension,
since the physical case of three dimensions is an easy generalization.
The P and Q operators generate an algebra usually called the Heisenberg
algebra, since Werner Heisenberg and collaborators used it in the earliest work
on a full quantum-mechanical formalism in 1925. It was quickly recognized by
Hermann Weyl that this algebra comes from a Lie algebra representation, with
a corresponding group (called the Heisenberg group by mathematicians, the
Weyl group by physicists). The state space of a quantum particle, either free or
moving in a potential, will be a unitary representation of this group, with the
group of spatial translations a subgroup. Note that this particular use of a group
and its representation theory in quantum mechanics is both at the core of the
standard axioms and much more general than the usual characterization of the
significance of groups as “symmetry groups”. The Heisenberg group does not in
any sense correspond to a group of invariances of the physical situation (there
are no states invariant under the group), and its action does not commute with
any non-zero Hamiltonian operator. Instead it plays a much deeper role, with
its unique unitary representation determining much of the structure of quantum
mechanics.
Note: beginning with this chapter, we will always assume units for position and momentum chosen so that $\hbar = 1$ and no longer keep track of how this dimensional constant appears in equations.

11.1 The position operator and the Heisenberg Lie algebra
In the description of the state space H as functions of a position variable q, the momentum operator is
$$P = -i\frac{d}{dq}$$
The Fourier transform $\mathcal{F}$ provides a unitary transformation to a description of H as functions of a momentum variable p in which the momentum operator P is just multiplication by p. Exchanging the role of p and q, one gets a position operator Q that acts as
$$Q = i\frac{d}{dp}$$
when states are functions of p (the sign difference comes from the sign change in $\widetilde{\mathcal{F}}$ vs. $\mathcal{F}$), or as multiplication by q when states are functions of q.

11.1.1 Position space representation
In the position space representation, taking as position variable q′, one has normalized eigenfunctions describing a free particle of momentum p
$$|p\rangle = \frac{1}{\sqrt{2\pi}}\, e^{ipq'}$$
which satisfy
$$P|p\rangle = -i\frac{d}{dq'}\left(\frac{1}{\sqrt{2\pi}}e^{ipq'}\right) = p\left(\frac{1}{\sqrt{2\pi}}e^{ipq'}\right) = p|p\rangle$$
The operator Q in this representation is just the multiplication operator
$$Q\psi(q') = q'\psi(q')$$
that multiplies a function of the position variable q′ by q′. The eigenvectors $|q\rangle$ of this operator will be the δ-functions $\delta(q'-q)$ since
$$Q|q\rangle = q'\delta(q'-q) = q\delta(q'-q)$$
A standard convention in physics is to think of a state written in the notation $|\psi\rangle$ as being representation independent. The wavefunction in the position space representation can then be found by taking the coefficient of $|\psi\rangle$ in the expansion of a state in Q eigenfunctions $|q\rangle$, so
$$\langle q|\psi\rangle = \int_{-\infty}^{+\infty}\delta(q-q')\psi(q')\,dq' = \psi(q)$$
and in particular
$$\langle q|p\rangle = \frac{1}{\sqrt{2\pi}}\, e^{ipq}$$

11.1.2 Momentum space representation
In the momentum space description of H as functions of p′, the state is the Fourier transform of the state in the position space representation, so the state $|p\rangle$ will be the function (actually, the distribution) on momentum space
$$\mathcal{F}\left(\frac{1}{\sqrt{2\pi}}e^{ipq'}\right) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-ip'q'}e^{ipq'}\,dq' = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{i(p-p')q'}\,dq' = \delta(p-p')$$
These are eigenfunctions of the operator P, which is a multiplication operator in this representation
$$P|p\rangle = p'\delta(p'-p) = p\delta(p'-p)$$
The position eigenfunctions are also given by Fourier transform
$$|q\rangle = \mathcal{F}(\delta(q-q')) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty} e^{-ip'q'}\delta(q-q')\,dq' = \frac{1}{\sqrt{2\pi}}\, e^{-ip'q}$$

The position operator is
$$Q = i\frac{d}{dp}$$
and $|q\rangle$ is an eigenvector with eigenvalue q
$$Q|q\rangle = i\frac{d}{dp'}\left(\frac{1}{\sqrt{2\pi}}e^{-ip'q}\right) = q\left(\frac{1}{\sqrt{2\pi}}e^{-ip'q}\right) = q|q\rangle$$
Another way to see that this is the correct operator is to use the unitary transformation $\mathcal{F}$ and its inverse $\widetilde{\mathcal{F}}$ that relate the position and momentum space representations. Going from position space to momentum space one has
$$Q \to \mathcal{F}Q\widetilde{\mathcal{F}}$$
and one can check that this transformed Q operator will act as $i\frac{d}{dp'}$ on functions of p′.
One can express momentum space wavefunctions as coefficients of the expansion of a state ψ in terms of momentum eigenvectors
$$\langle p|\psi\rangle = \int_{-\infty}^{+\infty}\left(\frac{1}{\sqrt{2\pi}}e^{-ipq'}\right)\psi(q')\,dq' = \mathcal{F}(\psi(q)) = \widetilde{\psi}(p)$$
11.1.3 Physical interpretation
With now both momentum and position operators on H, we have the standard
set-up for describing a non-relativistic quantum particle that is discussed exten-
sively early on in any quantum mechanics textbook, and one of these should be
consulted for more details and for explanations of the physical interpretation
of this quantum system. The classically observable quantity corresponding to
the operator P is the momentum, and eigenvectors of P are the states that
have well-defined values for this (the eigenvalue). The momentum eigenvalues
and energy eigenvalues will have the correct non-relativistic energy momentum
relationship. Note that for the free particle P commutes with the Hamiltonian $H = \frac{P^2}{2m}$ so there is a conservation law: states with a well-defined momentum at one time always have the same momentum. This corresponds to an obvious physical symmetry, the symmetry under spatial translations.
The operator Q on the other hand does not correspond to a physical sym-
metry, since it does not commute with the Hamiltonian. We will see that it
does generate a group action, and from the momentum space picture we can
see that this is a shift in the momentum, but such shifts are not symmetries of
the physics and there is no conservation law for Q. The states in which Q has
a well-defined numerical value are the ones such that the position wavefunction
is a delta-function. If one prepares such a state at a given time, it will not
remain a delta-function, but quickly evolve into a wavefunction that spreads
out in space.
Since the eigenfunctions of P and Q are non-normalizable, one needs a slightly different formulation of the measurement theory principle used for finite dimensional H. In this case, the probability of observing a position of a particle with wavefunction ψ(q) in the interval $[q_1, q_2]$ will be
$$\frac{\int_{q_1}^{q_2}\overline{\psi(q)}\psi(q)\,dq}{\int_{-\infty}^{+\infty}\overline{\psi(q)}\psi(q)\,dq}$$
This will make sense for states $|\psi\rangle \in L^2(\mathbf{R})$, which we will normalize to have norm-squared one when discussing their physical interpretation. Then the statistical expectation value for the measured position variable will be
$$\langle\psi|Q|\psi\rangle$$
which can be computed in either the position or momentum space representation.
Similarly, the probability of observing a momentum of a particle with momentum space wavefunction $\widetilde{\psi}(p)$ in the interval $[p_1, p_2]$ will be
$$\frac{\int_{p_1}^{p_2}\overline{\widetilde{\psi}(p)}\widetilde{\psi}(p)\,dp}{\int_{-\infty}^{+\infty}\overline{\widetilde{\psi}(p)}\widetilde{\psi}(p)\,dp}$$
and for normalized states the statistical expectation value of the measured momentum is
$$\langle\psi|P|\psi\rangle$$
Note that states with a well-defined position (the delta-function states in
the position-space representation) are equally likely to have any momentum
whatsoever. Physically this is why such states quickly spread out. States with
a well-defined momentum are equally likely to have any possible position. The
properties of the Fourier transform imply the so-called “Heisenberg uncertainty
principle” that gives a lower bound on the product of a measure of uncertainty
in position times the same measure of uncertainty in momentum. Examples
of this that take on the lower bound are the Gaussian shaped functions whose
Fourier transforms were computed earlier.
For much more about these questions, again most quantum mechanics text-
books will contain an extensive discussion.

11.2 The Heisenberg Lie algebra

In either the position or momentum space representation the operators P and Q satisfy the relation
$$[Q, P] = i\mathbf{1}$$
Soon after this commutation relation appeared in early work on quantum me-
chanics, Weyl realized that it can be interpreted as the relation between oper-
ators one would get from a representation of a three-dimensional Lie algebra,
now called the Heisenberg Lie algebra.

Definition (Heisenberg Lie algebra, d = 1). The Heisenberg Lie algebra h3 is the vector space $\mathbf{R}^3$ with the Lie bracket defined by its values on a basis (X, Y, Z) by
$$[X, Y] = Z,\qquad [X, Z] = [Y, Z] = 0$$

Writing a general element of h3 in terms of this basis as $xX + yY + zZ$, and grouping the x, y coordinates together (we will see that it is useful to think of the vector space h3 as $\mathbf{R}^2 \oplus \mathbf{R}$), the Lie bracket is given in terms of the coordinates by
$$\left[\left(\begin{pmatrix}x\\y\end{pmatrix}, z\right), \left(\begin{pmatrix}x'\\y'\end{pmatrix}, z'\right)\right] = \left(\begin{pmatrix}0\\0\end{pmatrix},\; xy' - yx'\right)$$
Note that this is a non-trivial Lie algebra, but only minimally so. All Lie
brackets of Z with anything else are zero. All Lie brackets of Lie brackets are
also zero (as a result, this is an example of what is known as a “nilpotent” Lie
algebra).
The Heisenberg Lie algebra is isomorphic to the Lie algebra of 3 by 3 strictly upper triangular real matrices, with Lie bracket the matrix commutator, by the following isomorphism:
$$X \leftrightarrow \begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix},\quad Y \leftrightarrow \begin{pmatrix}0&0&0\\0&0&1\\0&0&0\end{pmatrix},\quad Z \leftrightarrow \begin{pmatrix}0&0&1\\0&0&0\\0&0&0\end{pmatrix}$$
$$xX + yY + zZ \leftrightarrow \begin{pmatrix}0&x&z\\0&0&y\\0&0&0\end{pmatrix}$$
and one has
$$\left[\begin{pmatrix}0&x&z\\0&0&y\\0&0&0\end{pmatrix}, \begin{pmatrix}0&x'&z'\\0&0&y'\\0&0&0\end{pmatrix}\right] = \begin{pmatrix}0&0&xy'-x'y\\0&0&0\\0&0&0\end{pmatrix}$$
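This isomorphism is easy to verify numerically; the following sketch (our own, using NumPy) checks the commutation relations for the matrix realization above:

```python
import numpy as np

# basis of h3 realized as strictly upper triangular 3x3 matrices
X = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
Y = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
Z = np.array([[0., 0., 1.], [0., 0., 0.], [0., 0., 0.]])

def comm(A, B):
    # matrix commutator [A, B] = AB - BA
    return A @ B - B @ A

print(np.array_equal(comm(X, Y), Z))                 # True: [X, Y] = Z
print(not comm(X, Z).any(), not comm(Y, Z).any())    # True True: [X, Z] = [Y, Z] = 0

# the commutator of two general elements picks out the coefficient xy' - x'y of Z
x, y, z, xp, yp, zp = 1.0, 2.0, 3.0, 4.0, 5.0, 6.0
A = x * X + y * Y + z * Z
B = xp * X + yp * Y + zp * Z
print(np.allclose(comm(A, B), (x * yp - xp * y) * Z))  # True
```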
The generalization of this to higher dimensions is

Definition (Heisenberg Lie algebra). The Heisenberg Lie algebra h2d+1 is the vector space $\mathbf{R}^{2d+1} = \mathbf{R}^{2d} \oplus \mathbf{R}$ with the Lie bracket defined by its values on a basis $X_j, Y_j, Z$ ($j = 1, \ldots, d$) by
$$[X_j, Y_k] = \delta_{jk}Z,\qquad [X_j, Z] = [Y_j, Z] = 0$$
Writing a general element as $\sum_{j=1}^d x_jX_j + \sum_{k=1}^d y_kY_k + zZ$, in terms of coordinates the Lie bracket is
$$\left[\left(\begin{pmatrix}\mathbf{x}\\\mathbf{y}\end{pmatrix}, z\right), \left(\begin{pmatrix}\mathbf{x}'\\\mathbf{y}'\end{pmatrix}, z'\right)\right] = \left(\begin{pmatrix}\mathbf{0}\\\mathbf{0}\end{pmatrix},\; \mathbf{x}\cdot\mathbf{y}' - \mathbf{y}\cdot\mathbf{x}'\right)$$

One can write this Lie algebra as a Lie algebra of matrices for any d. For instance, in the physical case of d = 3, elements of the Heisenberg Lie algebra can be written
$$\begin{pmatrix}0&x_1&x_2&x_3&z\\0&0&0&0&y_1\\0&0&0&0&y_2\\0&0&0&0&y_3\\0&0&0&0&0\end{pmatrix}$$
11.3 The Heisenberg group

One can easily see that exponentiating matrices in h3 gives
$$\exp\begin{pmatrix}0&x&z\\0&0&y\\0&0&0\end{pmatrix} = \begin{pmatrix}1&x&z+\frac{1}{2}xy\\0&1&y\\0&0&1\end{pmatrix}$$

so the group with Lie algebra h3 will be the group of upper triangular 3 by 3 real
matrices with ones on the diagonal, and this group will be the Heisenberg group
H3 . For our purposes though, it is better to work in exponential coordinates
(i.e. labeling a group element with the Lie algebra element that exponentiates
to it).
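One can verify this exponential formula symbolically; the following sketch (our own, using SymPy) exploits the fact that the series terminates because the matrices are nilpotent:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([[0, x, z], [0, 0, y], [0, 0, 0]])

# A is nilpotent, so the exponential series terminates after the A**2 term
assert A**3 == sp.zeros(3, 3)
expA = sp.eye(3) + A + A**2 / 2

expected = sp.Matrix([[1, x, z + x*y/2], [0, 1, y], [0, 0, 1]])
print(sp.simplify(expA - expected) == sp.zeros(3, 3))  # True
```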
Matrix exponentials in general satisfy the Baker-Campbell-Hausdorff formula, which says
$$e^Ae^B = e^{A+B+\frac{1}{2}[A,B]+\frac{1}{12}[A,[A,B]]-\frac{1}{12}[B,[A,B]]+\cdots}$$
where the higher terms can all be expressed as repeated commutators. This
provides one way of showing that the Lie group structure is determined (for
group elements expressible as exponentials) by knowing the Lie bracket. For
the full formula and a detailed proof, see chapter 3 of [27]. One can easily
check the first few terms in this formula by expanding the exponentials, but the
difficulty of the proof is that it is not at all obvious why all the terms can be
organized in terms of commutators.
For the case of the Heisenberg Lie algebra, since all multiple commutators vanish, the Baker-Campbell-Hausdorff formula implies for exponentials of elements of h3
$$e^Ae^B = e^{A+B+\frac{1}{2}[A,B]}$$
(a proof of this special case of Baker-Campbell-Hausdorff is in section 3.1 of [27]). We can use this to explicitly write the group law in exponential coordinates:

Definition (Heisenberg group, d = 1). The Heisenberg group H3 is the space $\mathbf{R}^3$ with the group law
$$\left(\begin{pmatrix}x\\y\end{pmatrix}, z\right)\left(\begin{pmatrix}x'\\y'\end{pmatrix}, z'\right) = \left(\begin{pmatrix}x+x'\\y+y'\end{pmatrix},\; z+z'+\frac{1}{2}(xy'-yx')\right) \qquad (11.1)$$
Note that the Lie algebra basis elements X, Y, Z each generate subgroups
of H3 isomorphic to R. Elements of the first two of these subgroups generate
the full group, and elements of the third subgroup are “central”, meaning they
commute with all group elements. Also notice that the non-commutative nature of the Lie algebra or group depends purely on the factor $xy' - yx'$.
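As a consistency check (our own sketch, using NumPy), the exponential-coordinate group law 11.1 should reproduce multiplication of the corresponding matrix exponentials:

```python
import numpy as np

def expm_h3(x, y, z):
    # exponential of xX + yY + zZ; the series terminates since A @ A @ A = 0
    A = np.array([[0., x, z], [0., 0., y], [0., 0., 0.]])
    return np.eye(3) + A + A @ A / 2

def product(a, b):
    # exponential-coordinate group law of equation 11.1
    (x, y, z), (xp, yp, zp) = a, b
    return (x + xp, y + yp, z + zp + 0.5 * (x * yp - y * xp))

a, b = (1.0, 2.0, 0.5), (-0.7, 0.3, 1.1)
print(np.allclose(expm_h3(*a) @ expm_h3(*b), expm_h3(*product(a, b))))  # True
print(product(a, b) == product(b, a))  # False: the group is non-commutative
```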
The generalization of this to higher dimensions is:

Definition (Heisenberg group). The Heisenberg group H2d+1 is the space $\mathbf{R}^{2d+1}$ with the group law
$$\left(\begin{pmatrix}\mathbf{x}\\\mathbf{y}\end{pmatrix}, z\right)\left(\begin{pmatrix}\mathbf{x}'\\\mathbf{y}'\end{pmatrix}, z'\right) = \left(\begin{pmatrix}\mathbf{x}+\mathbf{x}'\\\mathbf{y}+\mathbf{y}'\end{pmatrix},\; z+z'+\frac{1}{2}(\mathbf{x}\cdot\mathbf{y}' - \mathbf{y}\cdot\mathbf{x}')\right)$$
where the vectors here all live in $\mathbf{R}^d$.

Note that in these exponential coordinates the exponential map relating the
Heisenberg Lie algebra h2d+1 and the Heisenberg Lie group H2d+1 is just the
identity map.

11.4 The Schrödinger representation
Since it can be defined in terms of 3 by 3 matrices, the Heisenberg group H3
has an obvious representation on C3 , but this representation is not unitary and
not of physical interest. What is of great interest is the infinite dimensional
representation on functions of q for which the Lie algebra version is given by
the Q, P and unit operators:
Definition (Schrödinger representation, Lie algebra version). The Schrödinger representation of the Heisenberg Lie algebra h3 is the representation $(\Gamma'_S, L^2(\mathbf{R}))$ satisfying
$$\Gamma'_S(X)\psi(q) = -iQ\psi(q) = -iq\psi(q),\qquad \Gamma'_S(Y)\psi(q) = -iP\psi(q) = -\frac{d}{dq}\psi(q)$$
$$\Gamma'_S(Z)\psi(q) = -i\psi(q)$$
Factors of i have been chosen to make these operators skew-adjoint and the representation thus unitary. They can be exponentiated, giving in the exponential coordinates on H3 of equation 11.1
$$\Gamma_S\left(\begin{pmatrix}x\\0\end{pmatrix}, 0\right)\psi(q) = e^{-ixQ}\psi(q) = e^{-ixq}\psi(q)$$
$$\Gamma_S\left(\begin{pmatrix}0\\y\end{pmatrix}, 0\right)\psi(q) = e^{-iyP}\psi(q) = e^{-y\frac{d}{dq}}\psi(q) = \psi(q-y)$$
$$\Gamma_S\left(\begin{pmatrix}0\\0\end{pmatrix}, z\right)\psi(q) = e^{-iz}\psi(q)$$
For general group elements of H3 one has

Definition (Schrödinger representation, Lie group version). The Schrödinger representation of the Heisenberg Lie group H3 is the representation $(\Gamma_S, L^2(\mathbf{R}))$ satisfying
$$\Gamma_S\left(\begin{pmatrix}x\\y\end{pmatrix}, z\right)\psi(q) = e^{-iz}e^{i\frac{xy}{2}}e^{-ixq}\psi(q-y)$$
To check that this defines a representation, one computes
$$\Gamma_S\left(\begin{pmatrix}x\\y\end{pmatrix}, z\right)\Gamma_S\left(\begin{pmatrix}x'\\y'\end{pmatrix}, z'\right)\psi(q) = \Gamma_S\left(\begin{pmatrix}x\\y\end{pmatrix}, z\right)e^{-iz'}e^{i\frac{x'y'}{2}}e^{-ix'q}\psi(q-y')$$
$$= e^{-i(z+z')}e^{i\frac{xy+x'y'}{2}}e^{-ixq}e^{-ix'(q-y)}\psi(q-y-y')$$
$$= e^{-i(z+z'+\frac{1}{2}(xy'-yx'))}e^{i\frac{(x+x')(y+y')}{2}}e^{-i(x+x')q}\psi(q-(y+y'))$$
$$= \Gamma_S\left(\begin{pmatrix}x+x'\\y+y'\end{pmatrix},\; z+z'+\frac{1}{2}(xy'-yx')\right)\psi(q)$$
The group analog of the Heisenberg commutation relations (often called the “Weyl form” of the commutation relations) is the relation
$$e^{-ixQ}e^{-iyP} = e^{-ixy}e^{-iyP}e^{-ixQ}$$
One can derive this by calculating (using the Baker-Campbell-Hausdorff formula)
$$e^{-ixQ}e^{-iyP} = e^{-i(xQ+yP)+\frac{1}{2}[-ixQ,-iyP]} = e^{-i\frac{xy}{2}}e^{-i(xQ+yP)}$$
as well as the same product in the opposite order, and then comparing the
results.
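The Weyl form can also be checked directly in the Schrödinger representation, where $e^{-ixQ}$ is multiplication by $e^{-ixq}$ and $e^{-iyP}$ is translation by y. A symbolic sketch (our own, using SymPy):

```python
import sympy as sp

q, x, y = sp.symbols('q x y', real=True)
psi = sp.Function('psi')

def U(a, f):
    # e^{-iaQ} in the Schrodinger representation: multiplication by e^{-iaq}
    return sp.exp(-sp.I * a * q) * f

def V(b, f):
    # e^{-ibP} in the Schrodinger representation: translation q -> q - b
    return f.subs(q, q - b)

lhs = U(x, V(y, psi(q)))                          # e^{-ixQ} e^{-iyP} psi
rhs = sp.exp(-sp.I * x * y) * V(y, U(x, psi(q)))  # e^{-ixy} e^{-iyP} e^{-ixQ} psi
print(sp.simplify(lhs - rhs) == 0)  # True: the Weyl form of the relations
```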
Note that, for the Schrödinger representation, we have
$$\Gamma_S\left(\begin{pmatrix}0\\0\end{pmatrix}, z+2\pi\right) = \Gamma_S\left(\begin{pmatrix}0\\0\end{pmatrix}, z\right)$$
so the representation operators are periodic with period 2π in the z-coordinate. Some authors choose to define the Heisenberg group H3 as not $\mathbf{R}^2 \oplus \mathbf{R}$, but $\mathbf{R}^2 \oplus S^1$, building this periodicity automatically into the definition of the group, rather than the representation.
We have seen that the Fourier transform $\mathcal{F}$ takes the Schrödinger representation to a unitarily equivalent representation of H3, in terms of functions of p (the momentum space representation). The operators change as
$$\Gamma_S(g) \to \mathcal{F}\,\Gamma_S(g)\,\widetilde{\mathcal{F}}$$
when one makes the unitary transformation to the momentum space representation.
In typical physics quantum mechanics textbooks, one often sees calculations
made just using the Heisenberg commutation relations, without picking a spe-
cific representation of the operators that satisfy these relations. This turns out
to be justified by the remarkable fact that, for the Heisenberg group, once one
picks the constant with which Z acts, all irreducible representations are uni-
tarily equivalent. By unitarity this constant is −ic, c ∈ R. We have chosen
c = 1, but other values of c would correspond to different choices of units. In a
sense, the representation theory of the Heisenberg group is very simple: there’s
just one irreducible representation. This is very different than the theory for
even the simplest compact Lie groups (U (1) and SU (2)) which have an infinity
of inequivalent irreducibles labeled by weight or by spin. Representations of a
Heisenberg group will appear in different guises (we’ve seen two, will see an-
other in the discussion of the harmonic oscillator, and there are yet others that
appear in the theory of theta-functions), but they are all unitarily equivalent.
This statement is known as the Stone-von Neumann theorem.
So far we’ve been modestly cavalier about the rigorous analysis needed to
make precise statements about the Schrödinger representation. In order to prove
a theorem like the Stone-von Neumann theorem, which tries to say something
about all possible representations of a group, one needs to invoke a great deal
of analysis. Much of this part of analysis was developed precisely to be able to
deal with general quantum mechanical systems and prove theorems about them.
The Heisenberg group, Lie algebra and its representations are treated in detail
in many expositions of quantum mechanics for mathematicians. Some good
references for this material are [64], and [29]. In depth discussions devoted to
the mathematics of the Heisenberg group and its representations can be found
in [35], [19] and [66].
In these references can be found a proof of the (not difficult)
Theorem. The Schrödinger representation ΓS described above is irreducible.
and the much more difficult
Theorem (Stone-von Neumann). Any irreducible representation π of the group H3 on a Hilbert space, satisfying
$$\pi'(Z) = -i\mathbf{1}$$
is unitarily equivalent to the Schrödinger representation $(\Gamma_S, L^2(\mathbf{R}))$.


Note that all of this can easily be generalized to the case of d spatial di-
mensions, for d finite, with the Heisenberg group now H2d+1 and the Stone-von
Neumann theorem still true. In the case of an infinite number of degrees of
freedom, which is the case of interest in quantum field theory, the Stone-von
Neumann theorem no longer holds and one has an infinity of inequivalent irre-
ducible representations, leading to quite different phenomena. For more on this
topic see chapter 33.
It is also important to note that the Stone-von Neumann theorem is for-
mulated for Heisenberg group representations, not for Heisenberg Lie algebra
representations. For infinite-dimensional representations in cases like this, there
are representations of the Lie algebra that are “non-integrable”: they aren’t
the derivatives of Lie group representations. For general representations of
the Heisenberg Lie algebra, i.e. the Heisenberg commutator relations, there are
counter-examples to the analog of the Stone-von Neumann theorem. It is only
for integrable representations that the theorem holds and one has a unique sort
of irreducible representation.

11.5 For further reading
For a lot more detail about the mathematics of the Heisenberg group, its Lie
algebra and the Schrödinger representation, see [7], [35], [19] and [66]. An ex-
cellent historical overview of the Stone-von Neumann theorem [51] by Jonathan
Rosenberg is well worth reading.
Chapter 12

The Poisson Bracket and Symplectic Geometry

We have seen that the quantum theory of a free particle corresponds to the con-
struction of a representation of the Heisenberg Lie algebra in terms of operators
Q and P . One would like to use this to produce quantum systems with a similar
relation to more non-trivial classical mechanical systems than the free particle.
During the earliest days of quantum mechanics it was recognized by Dirac that
the commutation relations of the Q and P operators somehow corresponded
to the Poisson bracket relations between the position and momentum coordi-
nates on phase space in the Hamiltonian formalism for classical mechanics. In
this chapter we’ll give an outline of the topic of Hamiltonian mechanics and
the Poisson bracket, including an introduction to the symplectic geometry that
characterizes phase space.
The Heisenberg Lie algebra h2d+1 is usually thought of as quintessentially
quantum in nature, but it is already present in classical mechanics, as the Lie
algebra of degree zero and one polynomials on phase space, with Lie bracket
the Poisson bracket. The full Lie algebra of all functions on phase space (with
Lie bracket the Poisson bracket) is infinite dimensional, so not the sort of finite
dimensional Lie algebra given by matrices that we have studied so far (although,
historically, it is this kind of infinite dimensional Lie algebra that motivated the
discovery of the theory of Lie groups and Lie algebras by Sophus Lie during the
1870s). In chapter 14 we will see that degree two polynomials on phase space
also provide an important finite-dimensional Lie algebra.

12.1 Classical mechanics and the Poisson bracket
In classical mechanics in the Hamiltonian formalism, the space $M = \mathbf{R}^{2d}$ that one gets by putting together positions and the corresponding momenta is known as “phase space”. Points in phase space can be thought of as uniquely parametrizing possible initial conditions for classical trajectories, so another interpretation of phase space is that it is the space that uniquely parametrizes
solutions of the equations of motion of a given classical mechanical system. The
basic axioms of Hamiltonian mechanics can be stated in a way that parallels
the ones for quantum mechanics.

Axiom (States). The state of a classical mechanical system is given by a point in the phase space $M = \mathbf{R}^{2d}$, with coordinates $q_j, p_j$, for $j = 1, \ldots, d$.

Axiom (Observables). The observables of a classical mechanical system are the functions on phase space.

Axiom (Dynamics). There is a distinguished observable, the Hamiltonian function h, and states evolve according to Hamilton’s equations
$$\dot{q}_j = \frac{\partial h}{\partial p_j},\qquad \dot{p}_j = -\frac{\partial h}{\partial q_j}$$
Specializing to the case d = 1, for any observable function f, Hamilton’s equations imply
$$\frac{df}{dt} = \frac{\partial f}{\partial q}\frac{dq}{dt} + \frac{\partial f}{\partial p}\frac{dp}{dt} = \frac{\partial f}{\partial q}\frac{\partial h}{\partial p} - \frac{\partial f}{\partial p}\frac{\partial h}{\partial q}$$

We can define

Definition (Poisson bracket). There is a bilinear operation on functions on the phase space $M = \mathbf{R}^2$ (with coordinates (q, p)) called the Poisson bracket, given by
$$(f_1, f_2) \to \{f_1, f_2\} = \frac{\partial f_1}{\partial q}\frac{\partial f_2}{\partial p} - \frac{\partial f_1}{\partial p}\frac{\partial f_2}{\partial q}$$
An observable f then evolves in time according to
$$\frac{df}{dt} = \{f, h\}$$
This relation is equivalent to Hamilton’s equations since it implies them by taking f = q and f = p
$$\dot{q} = \{q, h\} = \frac{\partial h}{\partial p},\qquad \dot{p} = \{p, h\} = -\frac{\partial h}{\partial q}$$
For a non-relativistic free particle, $h = \frac{p^2}{2m}$ and these equations become
$$\dot{q} = \frac{p}{m},\qquad \dot{p} = 0$$
which just says that the momentum is the mass times the velocity, and is conserved. For a particle subject to a potential V(q) one has
$$h = \frac{p^2}{2m} + V(q)$$
and the trajectories are the solutions to
$$\dot{q} = \frac{p}{m},\qquad \dot{p} = -\frac{\partial V}{\partial q}$$
which adds Newton’s second law
$$F = -\frac{\partial V}{\partial q} = ma = m\ddot{q}$$
to the definition of momentum in terms of velocity.
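Hamilton's equations can be integrated numerically; the sketch below (our own, using a standard leapfrog/velocity-Verlet step, with the assumptions m = 1 and V(q) = q²/2) checks that the Hamiltonian stays conserved along the numerical trajectory:

```python
import numpy as np

def leapfrog_step(q, p, dt, dV):
    # one velocity-Verlet step for qdot = p/m, pdot = -dV/dq, with m = 1
    p_half = p - 0.5 * dt * dV(q)
    q_new = q + dt * p_half
    p_new = p_half - 0.5 * dt * dV(q_new)
    return q_new, p_new

dV = lambda q: q                      # V(q) = q**2 / 2, the harmonic oscillator
h = lambda q, p: 0.5 * p * p + 0.5 * q * q

q, p = 1.0, 0.0
h0 = h(q, p)
for _ in range(10_000):
    q, p = leapfrog_step(q, p, 1e-3, dV)
print(abs(h(q, p) - h0) < 1e-5)  # True: energy conserved to high accuracy
```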
One can easily check that the Poisson bracket has the properties
• Antisymmetry
$$\{f_1, f_2\} = -\{f_2, f_1\}$$
• Jacobi identity
$$\{\{f_1, f_2\}, f_3\} + \{\{f_3, f_1\}, f_2\} + \{\{f_2, f_3\}, f_1\} = 0$$

These two properties, together with the bilinearity, show that the Poisson
bracket fits the definition of a Lie bracket, making the space of functions on
phase space into an infinite dimensional Lie algebra. This Lie algebra is respon-
sible for much of the structure of the subject of Hamiltonian mechanics, and it
was historically the first sort of Lie algebra to be studied.
The conservation laws of classical mechanics are best understood using this
Lie algebra. From the fundamental dynamical equation
$$\frac{df}{dt} = \{f, h\}$$
we see that
$$\{f, h\} = 0 \implies \frac{df}{dt} = 0$$
and in this case the function f is called a “conserved quantity”, since it does not change under time evolution. Note that if we have two functions $f_1$ and $f_2$ on phase space such that
$$\{f_1, h\} = 0,\qquad \{f_2, h\} = 0$$
then using the Jacobi identity we have
$$\{\{f_1, f_2\}, h\} = -\{\{h, f_1\}, f_2\} - \{\{f_2, h\}, f_1\} = 0$$
This shows that if f1 and f2 are conserved quantities, so is {f1 , f2 }, so functions
f such that {f, h} = 0 make up a Lie subalgebra. It is this Lie subalgebra
that corresponds to “symmetries” of the physics, commuting with the time
translation determined by the dynamical law given by h.
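The antisymmetry and Jacobi identity claims are easy to verify symbolically for sample functions; a sketch (our own, using SymPy):

```python
import sympy as sp

q, p = sp.symbols('q p', real=True)

def pb(f1, f2):
    # Poisson bracket for one degree of freedom
    return sp.diff(f1, q) * sp.diff(f2, p) - sp.diff(f1, p) * sp.diff(f2, q)

f1, f2, f3 = q**2 * p, p**3 + q, sp.sin(q) * p

print(sp.simplify(pb(f1, f2) + pb(f2, f1)) == 0)  # True: antisymmetry
jacobi = pb(pb(f1, f2), f3) + pb(pb(f3, f1), f2) + pb(pb(f2, f3), f1)
print(sp.simplify(jacobi) == 0)                   # True: Jacobi identity
```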

12.2 The Poisson bracket and the Heisenberg Lie algebra
A third fundamental property of the Poisson bracket that can easily be checked is the
• Leibniz rule
$$\{f_1f_2, f\} = \{f_1, f\}f_2 + f_1\{f_2, f\},\qquad \{f, f_1f_2\} = \{f, f_1\}f_2 + f_1\{f, f_2\}$$

This property says that taking Poisson bracket with a function f acts on a
product of functions in a way that satisfies the Leibniz rule for what happens
when you take the derivative of a product. Unlike antisymmetry and the Ja-
cobi identity, which reflect the Lie algebra structure on functions, the Leibniz
property describes the relation of the Lie algebra structure to multiplication of
functions. At least for polynomial functions, it allows one to inductively reduce
the calculation of Poisson brackets to the special case of Poisson brackets of the
coordinate functions q and p, for instance:

$$\{q^2, qp\} = q\{q^2, p\} + \{q^2, q\}p = q^2\{q, p\} + q\{q, p\}q = 2q^2\{q, p\} = 2q^2$$
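This calculation, and the Leibniz rule itself, can be checked symbolically (our own sketch, using SymPy):

```python
import sympy as sp

q, p = sp.symbols('q p', real=True)

def pb(f1, f2):
    # Poisson bracket for one degree of freedom
    return sp.diff(f1, q) * sp.diff(f2, p) - sp.diff(f1, p) * sp.diff(f2, q)

print(sp.expand(pb(q**2, q*p)))  # 2*q**2, as in the calculation above

# Leibniz rule on a sample: {f1 f2, f} = {f1, f} f2 + f1 {f2, f}
f1, f2, f = q * p, p**2, q + p
print(sp.expand(pb(f1 * f2, f) - (pb(f1, f) * f2 + f1 * pb(f2, f))) == 0)  # True
```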
The Poisson bracket is thus determined by its values on linear functions. We will define

Definition. $\Omega(\cdot,\cdot)$ is the restriction of the Poisson bracket to $M^*$, the linear functions on M. Taking as basis vectors of $M^*$ the coordinate functions q and p, Ω is given on basis vectors by
$$\Omega(q, q) = \Omega(p, p) = 0,\qquad \Omega(q, p) = -\Omega(p, q) = 1$$
A general element of $M^*$ will be a linear combination $c_qq + c_pp$ for some constants $c_q, c_p$. For general pairs of elements in $M^*$, Ω will be given by
$$\Omega(c_qq + c_pp,\; c'_qq + c'_pp) = c_qc'_p - c_pc'_q \qquad (12.1)$$
We will often write elements of $M^*$ as the column vector of their coefficients $c_q, c_p$, identifying
$$c_qq + c_pp \leftrightarrow \begin{pmatrix}c_q\\c_p\end{pmatrix}$$
Then one has
$$\Omega\left(\begin{pmatrix}c_q\\c_p\end{pmatrix}, \begin{pmatrix}c'_q\\c'_p\end{pmatrix}\right) = c_qc'_p - c_pc'_q$$
Taking together linear functions on M and the constant function, one gets
a three dimensional space with basis elements q, p, 1, and this space is closed
under Poisson bracket. This space is thus a Lie algebra, and is isomorphic to
the Heisenberg Lie algebra h3 (see section 11.2), with the isomorphism given on
basis elements by
X ↔ q, Y ↔ p, Z ↔ 1
This isomorphism preserves the Lie bracket relations since
[X, Y ] = Z ↔ {q, p} = 1
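One can check this directly: the Poisson bracket of two affine functions is the constant given by Ω of their coefficient vectors. A sketch of ours, using SymPy with sample coefficients:

```python
import sympy as sp

q, p = sp.symbols('q p', real=True)

def pb(f1, f2):
    # Poisson bracket for one degree of freedom
    return sp.diff(f1, q) * sp.diff(f2, p) - sp.diff(f1, p) * sp.diff(f2, q)

def omega(u, v):
    # Omega on coefficient vectors (c_q, c_p), as in equation 12.1
    return u[0] * v[1] - u[1] * v[0]

cq, cp, dq, dp = 2, -3, 5, 7     # sample coefficients
f = cq * q + cp * p + 4          # an element c_q q + c_p p + c of h3
g = dq * q + dp * p - 1
print(sp.expand(pb(f, g)) == omega((cq, cp), (dq, dp)))  # True: {f, g} = Omega(...)
```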
It is convenient to introduce a separate notation for the dual phase space, so we will often write $M^* = \mathcal{M}$. The three dimensional space we have identified with the Heisenberg Lie algebra is then
$$\mathcal{M} \oplus \mathbf{R}$$
We will denote elements of this space either by functions $c_qq + c_pp + c$, or as
$$\left(\begin{pmatrix}c_q\\c_p\end{pmatrix}, c\right)$$
In this second notation, the Lie bracket is
$$\left[\left(\begin{pmatrix}c_q\\c_p\end{pmatrix}, c\right), \left(\begin{pmatrix}c'_q\\c'_p\end{pmatrix}, c'\right)\right] = \left(\begin{pmatrix}0\\0\end{pmatrix},\; \Omega\left(\begin{pmatrix}c_q\\c_p\end{pmatrix}, \begin{pmatrix}c'_q\\c'_p\end{pmatrix}\right)\right)$$
Notice that the non-trivial part of the Lie bracket structure is determined by Ω.
In higher dimensions, coordinate functions $q_1, \ldots, q_d, p_1, \ldots, p_d$ on M provide a basis for the dual space $\mathcal{M}$ consisting of the linear coefficient functions of vectors in M. Taking as an additional basis element the constant function 1, we have a 2d + 1 dimensional space with basis $q_1, \ldots, q_d, p_1, \ldots, p_d, 1$. The Poisson bracket relations
$$\{q_j, q_k\} = \{p_j, p_k\} = 0,\qquad \{q_j, p_k\} = \delta_{jk}$$
turn this space into a Lie algebra, isomorphic to the Heisenberg Lie algebra h2d+1. On general functions, the Poisson bracket will be given by the obvious generalization of the d = 1 case
$$\{f_1, f_2\} = \sum_{j=1}^d\left(\frac{\partial f_1}{\partial q_j}\frac{\partial f_2}{\partial p_j} - \frac{\partial f_1}{\partial p_j}\frac{\partial f_2}{\partial q_j}\right) \qquad (12.2)$$
Elements of h2d+1 are functions on $M = \mathbf{R}^{2d}$ of the form
$$c_{q_1}q_1 + \cdots + c_{q_d}q_d + c_{p_1}p_1 + \cdots + c_{p_d}p_d + c = \mathbf{c}_q\cdot\mathbf{q} + \mathbf{c}_p\cdot\mathbf{p} + c$$
(using the notation $\mathbf{c}_q = (c_{q_1}, \ldots, c_{q_d})$, $\mathbf{c}_p = (c_{p_1}, \ldots, c_{p_d})$). We will often denote these by
$$\left(\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix}, c\right)$$
This Lie bracket on h2d+1 is given by
$$\left[\left(\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix}, c\right), \left(\begin{pmatrix}\mathbf{c}'_q\\\mathbf{c}'_p\end{pmatrix}, c'\right)\right] = \left(\begin{pmatrix}\mathbf{0}\\\mathbf{0}\end{pmatrix},\; \Omega\left(\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix}, \begin{pmatrix}\mathbf{c}'_q\\\mathbf{c}'_p\end{pmatrix}\right)\right)$$
which depends just on the antisymmetric bilinear form
$$\Omega\left(\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix}, \begin{pmatrix}\mathbf{c}'_q\\\mathbf{c}'_p\end{pmatrix}\right) = \mathbf{c}_q\cdot\mathbf{c}'_p - \mathbf{c}_p\cdot\mathbf{c}'_q \qquad (12.3)$$
Digression. We have been careful here to keep track of the difference between phase space $M = \mathbf{R}^{2d}$ and its dual $\mathcal{M} = M^*$, since it is $\mathcal{M} \oplus \mathbf{R}$ that is given the structure of a Lie algebra (in this case h2d+1) by the Poisson bracket, and it is this Lie algebra we want to use in chapter 15 when we define a quantization of the classical system. Taking duals, we find an isomorphism
$$M \oplus \mathbf{R} \leftrightarrow \mathfrak{h}^*_{2d+1}$$
It is a general phenomenon that one can define a version of the Poisson bracket on functions on $\mathfrak{g}^*$, the dual of any Lie algebra $\mathfrak{g}$. This is because the Leibniz property ensures that the Poisson bracket only depends on Ω, its restriction to linear functions, and linear functions on $\mathfrak{g}^*$ are just elements of $\mathfrak{g}$. So one can define a Poisson bracket on functions on $\mathfrak{g}^*$ by first defining
$$\Omega(X, X') = [X, X'] \qquad (12.4)$$
for $X, X' \in \mathfrak{g} = (\mathfrak{g}^*)^*$, and then extending this to all functions on $\mathfrak{g}^*$ by the Leibniz property.

12.3 Symplectic geometry
We saw in chapter 4 that given a basis $e_j$ of a vector space V, a dual basis $e^*_j$ of $V^*$ is given by taking $e^*_j = v_j$, where $v_j$ are the coordinate functions. If one instead is initially given the coordinate functions $v_j$, one can construct a dual basis of $V = (V^*)^*$ by taking as basis vectors the first order linear differential operators given by differentiation with respect to the $v_j$, in other words by taking
$$e_j = \frac{\partial}{\partial v_j}$$
Elements of V are then identified with linear combinations of these operators. In effect, one is identifying vectors v with the directional derivative along the vector
$$v \leftrightarrow v\cdot\nabla$$
We also saw in chapter 4 that an inner product $\langle\cdot,\cdot\rangle$ on V provides an isomorphism of V and $V^*$ by
$$v \in V \leftrightarrow l_v(\cdot) = \langle v, \cdot\rangle \in V^* \qquad (12.5)$$

Such an inner product is the fundamental structure in Euclidean geometry,


giving a notion of length of a vector and angle between two vectors, as well as a group, the orthogonal group of linear transformations preserving the inner
product. It is a symmetric, non-degenerate bilinear form on V .
A phase space M does not usually come with a choice of inner product. We
have seen that the Poisson bracket gives us not a symmetric bilinear form, but
an antisymmetric bilinear form Ω, defined on the dual space M. We will define
an analog of an inner product, with symmetry replaced by antisymmetry:

Definition (Symplectic form). A symplectic form ω on a vector space V is a bilinear map
$$\omega: V\times V \to \mathbf{R}$$
such that

• ω is antisymmetric: $\omega(v, v') = -\omega(v', v)$

• ω is nondegenerate: if $v \neq 0$, then $\omega(v, \cdot) \in V^*$ is non-zero.

A vector space V with a symplectic form ω is called a symplectic vector


space. The analog of Euclidean geometry, replacing the inner product by a
symplectic form, is called symplectic geometry. In this sort of geometry, there
is no notion of length (since antisymmetry implies ω(v, v) = 0). There is an
analog of the orthogonal group, called the symplectic group, which consists of
linear transformations preserving ω, a group we will study in detail in chapter
14.
Just as an inner product gives an identification of V and V ∗ , a symplectic
form can be used in a similar way, giving an identification of M∗ and M . Using
the symplectic form Ω on M∗, we can define an isomorphism by identifying basis
vectors by


qj ∈ M∗ ↔ Ω(·, qj ) = −Ω(qj , ·) = −∂/∂pj ∈ M

pj ∈ M∗ ↔ Ω(·, pj ) = −Ω(pj , ·) = ∂/∂qj ∈ M

and in general
u ∈ M∗ ↔ Ω(·, u) ∈ M (12.6)
Note that, unlike the inner product case, a choice of sign convention must be
made here, and one has been made.
Recalling the discussion of bilinear forms from section 9.5, a bilinear form on
a vector space V can be identified with an element of V ∗ ⊗ V ∗ . Taking V = M ∗
we have V ∗ = (M ∗ )∗ = M , and the bilinear form Ω on M ∗ is an element of
M ⊗ M given by
Ω = Σ_{j=1}^d (∂/∂qj ⊗ ∂/∂pj − ∂/∂pj ⊗ ∂/∂qj )

Under the identification 12.6 of M and M ∗ , Ω ∈ M ⊗ M corresponds to


ω = Σ_{j=1}^d (qj ⊗ pj − pj ⊗ qj ) ∈ M ∗ ⊗ M ∗ (12.7)

Another version of the identification of M and M∗ is then given by

v ∈ M → ω(v, ·) ∈ M∗

In the case of Euclidean geometry, one can show by Gram-Schmidt orthog-
onalization that a basis ej can always be found that puts the inner product
(which is a symmetric element of V ∗ ⊗ V ∗ ) in the standard form
Σ_{j=1}^n vj ⊗ vj

in terms of basis elements of V ∗ , the coordinate functions vj . There is an


analogous theorem in symplectic geometry (for a proof, see for instance Propo-
sition 1.1 of [7]), which says that a basis of a symplectic vector space V can
always be found so that the dual basis coordinate functions come in pairs qj , pj ,
with the symplectic form ω the same one we have found based on the Poisson
bracket, that given by equation 12.7. Note that one difference between Eu-
clidean and symplectic geometry is that a symplectic vector space will always
be even-dimensional.
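As a quick numerical illustration of these properties (an added sketch, not part of the text), one can realize the standard symplectic form as a 2d by 2d matrix in the ordered basis q1, ..., qd, p1, ..., pd and check antisymmetry and nondegeneracy:

```python
import numpy as np

d = 3  # number of position (and of momentum) coordinates
# Matrix of the standard symplectic form of equation 12.7:
# it pairs the j-th q direction with the j-th p direction
omega = np.block([[np.zeros((d, d)), np.eye(d)],
                  [-np.eye(d), np.zeros((d, d))]])

# Antisymmetry: omega^T = -omega
assert np.array_equal(omega.T, -omega)
# Nondegeneracy: omega is invertible (here omega squared is minus the identity)
assert np.allclose(omega @ omega, -np.eye(2 * d))
```

For an antisymmetric n by n matrix, det(ω) = (−1)ⁿ det(ω), so nondegeneracy (det ω ≠ 0) forces n to be even, matching the statement above.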

Digression. For those familiar with differential manifolds, vector fields and
differential forms, the notion of a symplectic vector space can be extended to:

Definition (Symplectic manifold). A symplectic manifold M is a manifold with


a differential two-form ω(·, ·) (called a symplectic two-form) satisfying the con-
ditions

• ω is non-degenerate, i.e. for a nowhere zero vector field X, ω(X, ·) is a


nowhere zero one-form.

• dω = 0, in which case ω is said to be closed.

The cotangent bundle T ∗ N of a manifold N (i.e. the space of pairs of a


point on N together with a linear function on the tangent space at that point)
provides one class of symplectic manifolds, generalizing the linear case N = Rd ,
and corresponding physically to a particle moving on N . A simple example that
is neither linear nor a cotangent bundle is the sphere M = S 2 , with ω the area
two-form. The Darboux theorem says that, by an appropriate choice of local
coordinates, symplectic two-forms ω can always be put in the form
ω = Σ_{j=1}^d dqj ∧ dpj

Unlike the linear case though, there will in general be no global choice of
coordinates for which this is true. Later on, our discussion of quantization will
rely crucially on having a linear structure on phase space, so it will not apply
to general symplectic manifolds.
Note that there is no assumption here that M has a metric (i.e. it may
not be a Riemannian manifold). A symplectic two-form ω is a structure on a
manifold analogous to a metric but with opposite symmetry properties. Whereas
a metric is a symmetric non-degenerate bilinear form on the tangent space at

each point, a symplectic form is an antisymmetric non-degenerate bilinear form
on the tangent space.
Returning to vector spaces V , one can generalize the notion of a symplectic
structure by dropping the non-degeneracy condition on ω. The Leibniz property
can still be used to extend this to a Poisson bracket on functions on V , which
is then called a Poisson structure on V . In particular, one can do this for
V = g∗ for any Lie algebra g, using the choice of ω discussed at the end of
section 12.2. So g∗ always has a Poisson structure, and on subspaces where ω
is non-degenerate, a symplectic structure.
For instance, for
V = h∗2d+1 = M ⊕ R
one has a Poisson bracket on functions on V , but only on the subspace M is
this Poisson bracket non-degenerate on linear functions, making M a symplectic
vector space. For another example one can take g = so(3), which is just R3 ,
with antisymmetric bilinear form ω given by the vector cross-product. In this
case it turns out that if one considers spheres of fixed radius in R3 , ω provides
a non-degenerate symplectic form on their tangent spaces and such spheres are
symplectic manifolds with symplectic two-form their area two-form.

12.4 For further reading


Some good sources for discussions of symplectic geometry and the geometrical
formulation of Hamiltonian mechanics are [2], [7] and [9].

Chapter 13

Hamiltonian Vector Fields and the Moment Map

A basic feature of Hamiltonian mechanics is that, for any function f on phase


space M , one gets trajectories in phase space that solve Hamilton’s equations,
and the velocity vectors of these trajectories provide a vector field on phase
space. Such vector fields are called Hamiltonian vector fields. When a Lie group
G acts on phase space, the infinitesimal action of the group also associates to
each element of g a vector field on phase space. When these are Hamiltonian
vector fields, we get (up to a constant) for each L ∈ g a function µL that
corresponds to that vector field. This map from g to functions on M is called the
moment map, and it characterizes the action of G on phase space. Quantization
of such functions will give us the quantum observables corresponding to the
group action. For the case of the action of G = R3 on M = R6 by translations,
the moment map gives the momentum; for the action of G = SO(3) by rotations,
it gives the angular momentum.

13.1 Vector fields and the exponential map


One can think of a vector field on M = R2 as a choice of a two-dimensional
vector at each point in R2 , so given by a vector-valued function
 
F(q, p) = (Fq (q, p), Fp (q, p))

Such a vector field determines a system of differential equations


dq/dt = Fq ,  dp/dt = Fp
Once we specify initial conditions

q(0) = q0 , p(0) = p0

if Fq and Fp are differentiable functions these differential equations have a unique
solution q(t), p(t), at least for some neighborhood of t = 0 (from the existence
and uniqueness theorem that can be found for instance in [34]). These solutions
q(t), p(t) describe trajectories in R2 with velocity vector F(q(t), p(t)) and such
trajectories can be used to define the “flow” of the vector field: for each t this is
the map that takes the initial point (q(0), p(0)) ∈ R2 to the point (q(t), p(t)) ∈
R2 .
Another equivalent way to define vector fields on R2 is to use instead the
directional derivative along the vector field, identifying
F(q, p) ↔ F(q, p) · ∇ = Fq (q, p) ∂/∂q + Fp (q, p) ∂/∂p
The case of F a constant vector is just our previous identification of the vector
space M with linear combinations of ∂/∂q and ∂/∂p.
An advantage of defining vector fields in this way as first order linear differ-
ential operators is that it shows that vector fields form a Lie algebra, where one
takes as Lie bracket the commutator of the differential operators. The commu-
tator of two first-order differential operators is another first-order differential
operator since higher order derivatives will cancel, using equality of mixed par-
tial derivatives. In addition, such a commutator will satisfy the Jacobi identity.
Not only do we get a Lie algebra, but also a representation of the Lie algebra,
on functions on R2 .
Given this Lie algebra of vector fields, one can ask what the corresponding
group might be. This is not a finite dimensional matrix Lie algebra, so expo-
nentiation of matrices will not give the group. One can however use the flow of
the vector field X to define an analog of the exponential of a parameter t times
X:
Definition (Exponential map of a vector field). An exponential map for the
vector field X on M is a map

exp(tX) : M → M

that depends on a parameter t ∈ R, is the identity map at t = 0, and satisfies


d
exp(tX)(m) = X(exp(tX)(m))
dt
for m ∈ M .
The flow of the vector field X is the map

ΦX : (t, m) ∈ R × M → ΦX (t, m) = exp(tX)(m)

If the vector field X is differentiable, exp(tX) will be a well-defined map for


some neighborhood of t = 0, and satisfy

exp(t1 X)exp(t2 X) = exp((t1 + t2 )X)

thus providing a one-parameter group with derivative X at the identity.

Digression. For any manifold M , one has a Lie algebra of differentiable vector
fields with an associated Lie bracket. One also has an infinite dimensional Lie
group, the group of invertible maps from M to itself, such that the maps and
their inverses are both differentiable. This group is called the diffeomorphism
group of M and written Diff(M). Its Lie algebra is the Lie algebra of vector
fields.
The representation of the Lie algebra of vector fields on functions is the
differential of the representation of Diff(M) on functions induced in the usual
way from the action of Diff(M) on the space M .

13.2 Hamiltonian vector fields and canonical transformations
Our interest is not in general vector fields, but in vector fields corresponding to
Hamilton’s equations for some Hamiltonian function f , i.e. the case

Fq = ∂f/∂p,  Fp = −∂f/∂q

We call such vector fields Hamiltonian vector fields, defining:

Definition (Hamiltonian Vector Field). A vector field on M = R2 given by

(∂f/∂p) ∂/∂q − (∂f/∂q) ∂/∂p = −{f, ·}

for some function f on M = R2 is called a Hamiltonian vector field and will be


denoted by Xf . In higher dimensions, Hamiltonian vector fields will be those of
the form
Xf = Σ_{j=1}^d ((∂f/∂pj ) ∂/∂qj − (∂f/∂qj ) ∂/∂pj ) = −{f, ·} (13.1)

for some function f on M = R2d .

The simplest non-zero Hamiltonian vector fields are those for f a linear
function. For cq , cp constants, if

f = cq q + cp p

then
Xf = cp ∂/∂q − cq ∂/∂p
and the map
f → Xf
is just the isomorphism of M and M of equation 12.6.


For example, taking f = p, we have Xp = ∂/∂q. The exponential map for this
vector field is

exp(tXp )(q0 , p0 ) = (q0 + t, p0 ) (13.2)


Similarly, for f = q one has Xq = −∂/∂p and

exp(tXq )(q0 , p0 ) = (q0 , p0 − t) (13.3)
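These two flows can be checked symbolically. The following sketch (an added illustration, not part of the text) implements equation 13.1 for d = 1 in sympy and verifies that Xp = ∂/∂q and Xq = −∂/∂p:

```python
import sympy as sp

q, p = sp.symbols('q p')

def X(f, g):
    """Apply the Hamiltonian vector field X_f of equation 13.1 to g:
    X_f(g) = (df/dp)(dg/dq) - (df/dq)(dg/dp)."""
    return sp.diff(f, p) * sp.diff(g, q) - sp.diff(f, q) * sp.diff(g, p)

g = sp.Function('g')(q, p)  # an arbitrary function on phase space
# X_p = d/dq generates translation in q (equation 13.2)
assert sp.simplify(X(p, g) - sp.diff(g, q)) == 0
# X_q = -d/dp generates translation in the -p direction (equation 13.3)
assert sp.simplify(X(q, g) + sp.diff(g, p)) == 0
```

The same function X can be reused for any polynomial Hamiltonian f, which is how the quadratic examples below can also be checked.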

For quadratic functions f one gets vector fields Xf with components linear in
the coordinates. An important example, which describes a harmonic oscillator
and will be treated in much more detail in chapter 20, is the case

f = (1/2)(q² + p²)

for which

Xf = p ∂/∂q − q ∂/∂p

The trajectories satisfy

dq/dt = p,  dp/dt = −q

and are given by

q(t) = q(0) cos t + p(0) sin t, p(t) = p(0) cos t − q(0) sin t

The exponential map is given by clockwise rotation through an angle t

exp(tXf )(q0 , p0 ) = (q0 cos t + p0 sin t, p0 cos t − q0 sin t)
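This flow can be sanity-checked numerically. The sketch below (an added illustration, not from the text) verifies conservation of f, Hamilton's equations, and the one-parameter group property:

```python
import numpy as np

def flow(t, q0, p0):
    """exp(t X_f) for f = (q^2 + p^2)/2: clockwise rotation by angle t."""
    return (q0 * np.cos(t) + p0 * np.sin(t),
            p0 * np.cos(t) - q0 * np.sin(t))

q0, p0 = 0.7, -1.3
q1, p1 = flow(0.4, q0, p0)
# The Hamiltonian f is conserved along trajectories
assert np.isclose(q1**2 + p1**2, q0**2 + p0**2)
# One-parameter group: exp(t1 X_f) exp(t2 X_f) = exp((t1 + t2) X_f)
a, b = 0.3, 1.1
assert np.allclose(flow(a, *flow(b, q0, p0)), flow(a + b, q0, p0))
# Velocities match Hamilton's equations dq/dt = p, dp/dt = -q
eps = 1e-6
q2, p2 = flow(0.4 + eps, q0, p0)
assert np.isclose((q2 - q1) / eps, p1, atol=1e-5)
assert np.isclose((p2 - p1) / eps, -q1, atol=1e-5)
```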

The vector field Xf is everywhere tangent to circles centered at the origin of
the q − p plane, and the trajectories traverse these circles clockwise.
The relation of vector fields to the Poisson bracket is given by

{f1 , f2 } = Xf2 (f1 ) = −Xf1 (f2 )

so one has in particular


{q, f } = ∂f/∂p,  {p, f } = −∂f/∂q

The definition we have given here of Xf (equation 13.1) carries with it a


choice of how to deal with a confusing sign issue. Recall that vector fields on M
form a Lie algebra with Lie bracket the commutator of differential operators. A
natural question is that of how this Lie algebra is related to the Lie algebra of
functions on M (with Lie bracket the Poisson bracket).
The Jacobi identity implies

{f, {f1 , f2 }} = {{f, f1 }, f2 } + {{f2 , f }, f1 } = {{f, f1 }, f2 } − {{f, f2 }, f1 }

so
X{f1 ,f2 } = Xf2 Xf1 − Xf1 Xf2 = −[Xf1 , Xf2 ] (13.4)
This shows that the map f → Xf of equation 13.1 that we defined between
these Lie algebras is not quite a Lie algebra homomorphism because of the minus
sign in equation 13.4 (it is called a Lie algebra “antihomomorphism”). The map
that is a Lie algebra homomorphism is

f → −Xf

To keep track of the minus sign here, one needs to keep straight the difference
between
• The functions on phase space M are a Lie algebra, with adjoint action

ad(f )(·) = {f, ·}

and
• The functions f provide vector fields Xf acting on functions on M , where

Xf (·) = {·, f } = −{f, ·}

The first of these is what will be most relevant to us later when we quantize
functions on M to get operators, preserving the Lie algebra structure. The
second is what one naturally gets from the geometrical action of a Lie group G
on the phase space M (see section 13.3). As a simple example, the function p
satisfies
{p, F (q)} = −∂F/∂q

so

{p, ·} = −∂/∂q

is the infinitesimal action of translation in q on functions, whereas ∂/∂q is the
vector field on M corresponding to infinitesimal translation in the position co-
ordinate.
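Equation 13.4 can be verified symbolically. Here is a sketch (an added sympy illustration, not part of the text) checking it for the sample pair f1 = q²/2, f2 = p²/2 against an arbitrary test function:

```python
import sympy as sp

q, p = sp.symbols('q p')

def pb(f, g):
    """Poisson bracket {f, g} for d = 1."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

def X(f, g):
    """Hamiltonian vector field X_f applied to g: X_f(g) = {g, f}."""
    return pb(g, f)

f1, f2 = q**2 / 2, p**2 / 2
h = sp.Function('h')(q, p)  # arbitrary test function on phase space
lhs = X(pb(f1, f2), h)                      # X_{f1,f2} acting on h
rhs = -(X(f1, X(f2, h)) - X(f2, X(f1, h)))  # -[X_f1, X_f2] acting on h
assert sp.simplify(lhs - rhs) == 0          # equation 13.4
```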
It is important to note that the Lie algebra homomorphism

f → Xf

is not an isomorphism, for two reasons:


• It is not injective (one-to-one), since functions f and f +C for any constant
C correspond to the same Xf .
• It is not surjective since not all vector fields are Hamiltonian vector fields
(i.e. of the form Xf for some f ). One property that a vector field must
satisfy in order to possibly be a Hamiltonian vector field (this condition
is sufficient but we will not show this) is

X{g1 , g2 } = {Xg1 , g2 } + {g1 , Xg2 } (13.5)

for g1 and g2 on M . This is just the Jacobi identity for f, g1 , g2 , when


X = Xf .

Digression. For a general symplectic manifold M , the symplectic two-form ω
gives one an analog of Hamilton’s equations. This is the following equality of
one-forms, relating a Hamiltonian function h and a vector field Xh determining
time evolution of trajectories in M

iXh ω = ω(Xh , ·) = dh

(here iX is interior product with the vector field X). The Poisson bracket in
this context can be defined as

{f1 , f2 } = ω(Xf1 , Xf2 )

Recall that a symplectic two-form is defined to be closed, satisfying the equa-


tion dω = 0, which is then a condition on a three-form dω. Standard differential
form computations allow one to express dω(Xf1 , Xf2 , Xf3 ) in terms of Poisson
brackets of functions f1 , f2 , f3 , and one finds that dω = 0 is the Jacobi identity
for the Poisson bracket.
The theory of “prequantization” (see [36], [29]) enlarges the phase space M
to a U (1) bundle with connection, where the curvature of the connection is the
symplectic form ω. Then the problem of lack of injectivity of the Lie algebra
homomorphism
f → −Xf
is resolved by instead using the map

f → −∇Xf + if (13.6)

where ∇X is the covariant derivative with respect to the connection. For details
of this, see [36] or [29].
In our treatment of functions on phase space M , we have always been tak-
ing such functions to be time-independent. One can abstractly interpret M
as the space of trajectories of a classical mechanical system, with coordinates
q, p having the interpretation of initial conditions q(0), p(0) of the trajectories.
The exponential maps exp(tXh ) give an action on the space of trajectories for
Hamiltonian function h, taking the trajectory with initial conditions given by
m ∈ M to the time-translated one with initial conditions given by exp(tXh )(m).
One should really interpret the formula for Hamilton’s equations
df/dt = {f, h}
as meaning
d/dt f (exp(tXh )(m))|t=0 = {f (m), h(m)}
for each m ∈ M .
Given a Hamiltonian vector field Xf , the maps

exp(tXf ) : M → M

are known to physicists as “canonical transformations”, and to mathematicians
as “symplectomorphisms”. We will not try to work out in any more detail
how the exponential map behaves in general. In chapter 14 we will see what
happens for f an order-two homogeneous polynomial in the qj , pk . In that case
the vector field Xf will take linear functions on M to linear functions, thus
acting on M, in which case its behavior can be studied using the matrix for the
linear transformation with respect to the basis elements qj , pk .
Digression. The exponential map exp(tX) can be defined as above on a general
manifold. For a symplectic manifold M , Hamiltonian vector fields Xf will have
the property that
exp(tXf )∗ ω = ω
This is because

LXf ω = (diXf + iXf d)ω = diXf ω = dω(Xf , ·) = ddf = 0

13.3 Group actions on M and the moment map


Whenever one has an action of a Lie group G on a space M , an infinitesimal
version of this action is the map

L ∈ g → XL

from g to vector fields on M that takes L to the vector field XL which acts on
functions on M by
XL F (m) = d/dt F (etL · m)|t=0
This map however is not a homomorphism (for the Lie bracket on vector fields
the commutator of derivatives), but an anti-homomorphism. To see why this
is, recall that when a group G acts on a space, we get a representation π on
functions F on the space by

π(g)F (m) = F (g −1 · m)

The derivative of this representation will be the Lie algebra representation


π′(L)F (m) = d/dt F (e−tL · m)|t=0 = −XL F (m)
so we see that it is the map

L → π′(L) = −XL

that will be a homomorphism.


Given an action of a group G on M , if for L ∈ g the vector field XL is a Hamil-
tonian vector field, we would like to find the function f such that XL = Xf .
This will allow us to study infinitesimal group actions on M by using functions

on M and the Poisson bracket. Quantization will turn functions on M into
operators, turning the function f into an operator which will be the observable
corresponding to the infinitesimal group action by a Lie algebra element L.
Only for certain actions of G on M will the XL be Hamiltonian vector fields.
A necessary condition is that XL satisfy equation 13.5

XL {g1 , g2 } = {XL g1 , g2 } + {g1 , XL g2 }

which is a property of Hamiltonian vector fields. Even when a function f exists


such that Xf = XL , it is only unique up to a constant, since f and f + C will
give the same vector field. We would like to choose these constants in such a
way that the map
L→f
is a Lie algebra homomorphism from g to the Lie algebra of functions on M .
When this is possible, the G-action is said to be a Hamiltonian G-action. When
it is not possible, the G-symmetry of the classical phase space is said to have
an “anomaly”.
We can define

Definition (Moment map). Given a Hamiltonian action of a Lie group G on


M , a Lie algebra homomorphism

L → µL

from g to functions on M is said to be a moment map if

XL = XµL

Equivalently, for functions F on M , µL satisfies

XµL F = −{µL , F } = XL F

Note what we are calling a moment map is sometimes called a “co-moment


map”, with the term “moment map” referring to a repackaged form of the same
information, the map
µ : M → g∗
where
(µ(m))(L) = µL (m)
For G = H3 , the moment map is just our identification of the Heisenberg
Lie algebra with functions on phase space:

µL = L = cq q + cp p + c ∈ h3

Here
XµL = −cq ∂/∂p + cp ∂/∂q

The action of H3 on M for which this is the moment map will have XL = XµL
and one can check that this action is
 
((x, y), z) · (q0 , p0 ) = (q0 + y, p0 − x) (13.7)

See equations 13.2 and 13.3, which show that this action corresponds to the
exponential map for vector fields associated to Lie algebra basis elements q and
p. For central elements of the Lie algebra (the constant functions), the vector
field is zero, and the exponential map takes these to the identity map on M .
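As a minimal check (added here, not in the text) that 13.7 defines a group action, note that the central coordinate z acts trivially, so composing two actions only requires the vector parts to add:

```python
def act(x, y, q0, p0):
    """Action 13.7 of the Heisenberg group element ((x, y), z) on (q0, p0);
    the central coordinate z acts trivially, so it is omitted."""
    return (q0 + y, p0 - x)

m = (2.0, 5.0)
x1, y1, x2, y2 = 1.0, -2.0, 0.5, 3.0
# Compatibility with the group law: the product of two group elements has
# vector part (x1 + x2, y1 + y2); the Omega term only shifts z
assert act(x1, y1, *act(x2, y2, *m)) == act(x1 + x2, y1 + y2, *m)
```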
For d = 3 the translation group G = R3 is a subgroup of H7 of elements of
the form ((0, a), 0)
since it acts on the phase space by translation in the position coordinates

(q0 , p0 ) → (q0 + a, p0 )

Taking a to be the corresponding element in the Lie algebra of G = R3 , the


vector field on M is
Xa = a · ∇
and the moment map is given by

µa (q0 , p0 ) = a · p0

(a function on M for each element a of the Lie algebra) or

µ(q0 , p0 )(a) = a · p0

(an element of the dual of the Lie algebra for each point m = (q0 , p0 ) in phase
space).
For another example, consider the action of the group G = SO(3) of rota-
tions on phase space M = R6 , which gives a map from so(3) to vector fields on
R6 , taking for example
l1 ∈ so(3) → Xl1 = −q3 ∂/∂q2 + q2 ∂/∂q3 − p3 ∂/∂p2 + p2 ∂/∂p3

(this is the vector field for an infinitesimal clockwise rotation in the q2 − q3 and
p2 − p3 planes, in the opposite direction to the case of the vector field X_{(q²+p²)/2}
in the q − p plane of section 13.2). The moment map here gives the usual
expression for the 1-component of the angular momentum

µl1 = q2 p3 − q3 p2

since one can check from equation 13.1 that Xl1 = Xµl1 . On basis elements of
so(3) one has
µlj (q0 , p0 ) = (q0 × p0 )j

Formulated as a map from M to so(3)∗ , the moment map is

µ(q0 , p0 )(l) = (q0 × p0 ) · l

where l ∈ so(3).
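The claim Xl1 = Xµl1 can be confirmed symbolically. The following sketch (an added sympy illustration, not part of the text) applies equation 13.1 to µl1 and compares with the vector field stated above:

```python
import sympy as sp

qs = sp.symbols('q1 q2 q3')
ps = sp.symbols('p1 p2 p3')
q1, q2, q3 = qs
p1, p2, p3 = ps

def X(f, g):
    """X_f(g) per equation 13.1, for d = 3."""
    return sum(sp.diff(f, ps[j]) * sp.diff(g, qs[j])
               - sp.diff(f, qs[j]) * sp.diff(g, ps[j]) for j in range(3))

mu = q2 * p3 - q3 * p2  # angular momentum about the 1-axis
g = sp.Function('g')(*qs, *ps)
# The vector field X_l1 stated above, applied to g
Xl1_g = (-q3 * sp.diff(g, q2) + q2 * sp.diff(g, q3)
         - p3 * sp.diff(g, p2) + p2 * sp.diff(g, p3))
assert sp.simplify(X(mu, g) - Xl1_g) == 0  # X_{mu_l1} = X_l1
```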

Digression. For the case of M a general symplectic manifold, one can still de-
fine the moment map, whenever one has a Lie group G acting on M , preserving
the symplectic form ω. The infinitesimal condition for such a G action is that

LX ω = 0

where LX is the Lie derivative along the vector field X. Using the formula

LX = (d + iX )2 = diX + iX d

for the Lie derivative acting on differential forms (iX is contraction with the
vector field X), one has
(diX + iX d)ω = 0
and since dω = 0 we have
diX ω = 0
When M is simply-connected, one-forms iX ω whose differential is 0 (called
“closed”) will be the differential of a function (and then called “exact”). So there
will be a function µ such that

iX ω(·, ·) = ω(X, ·) = dµ(·)

although such a µ is only unique up to a constant.


Given an element L ∈ g, the G action on M gives a vector field XL . When
we can choose the constants appropriately and find functions µL satisfying

iXL ω(·, ·) = dµL (·)

such that the map


L → µL
taking Lie algebra elements to functions on M (with Lie bracket the Poisson
bracket) is a Lie algebra homomorphism, then this is called the moment map.
One can equivalently work with

µ : M → g∗

by defining
(µ(m))(L) = µL (m)
An important class of symplectic manifolds M with an action of a Lie group
G, preserving the symplectic form, are the co-adjoint orbits Ol . These are the

manifolds one gets by acting on a chosen l ∈ g∗ by the co-adjoint action Ad∗ ,
meaning the action of G on g∗ satisfying

(Ad∗ (g) · l)(X) = l(Ad(g)X)

where X ∈ g, and Ad(g) is the usual adjoint action on g. For these cases, the
moment map
µ : Ol → g∗
is just the inclusion map. Two simple examples are
• For g = h3 : fixing a choice of c, elements of g are linear functions on
M = R2 , so l ∈ g∗ is a point in M (evaluation of the function at that
point). The co-adjoint action is the action of H3 on M of equation 13.7.
• For g = so(3) the non-zero co-adjoint orbits are spheres, with radius the
length of l.

13.4 For further reading


For a general discussion of vector fields on Rn , see [34]. See [2], [7] and [9] for
more on Hamiltonian vector fields and the moment map. For more on the duals
of Lie algebras and co-adjoint orbits, see [8] and [35].

Chapter 14

Quadratic Polynomials and the Symplectic Group

The Poisson bracket on functions on phase space M = R2d is determined by an


antisymmetric bilinear form Ω on the dual phase space M = M ∗ . Just as there
is a group of linear transformations (the orthogonal group) leaving invariant an
inner product, which is a symmetric bilinear form, here there is a group leaving
invariant Ω, the symplectic group Sp(2d, R). The Lie algebra sp(2d, R) of this
group can be identified with the Lie algebra of order-two polynomials on M , with
Lie bracket the Poisson bracket. Elements of Sp(2d, R) act on M, preserving
Ω, and so provide a map of the Heisenberg Lie algebra h2d+1 = M ⊕ R to itself,
preserving the Lie bracket (which is defined using Ω). The symplectic group
thus acts by automorphisms on h2d+1 . This action has an infinitesimal version,
reflected in the non-trivial Poisson brackets between order two and order one
polynomials on M .
The identification of elements L of the Lie algebra sp(2d, R) with order-two
polynomials µL on M is just the moment map for the action of the symplectic
group Sp(2d, R) on phase space. The corresponding vector fields will have linear
coefficient functions. Quantization of these polynomial functions will provide
quantum observables corresponding to any Lie subgroup G ⊂ Sp(2d, R) (any
Lie group G that acts on M preserving the symplectic form). Such group actions
will give rise to quantum observables, but they may or may not be “symmetries”,
with the term “symmetry” usually meaning that one has {µL , h} = 0 for h the
Hamiltonian function.

14.1 The symplectic group


Recall that the orthogonal group can be defined as the group of linear transfor-
mations preserving an inner product, which is a symmetric bilinear form. We
now want to study the analog of the orthogonal group one gets by replacing

the inner product by the antisymmetric bilinear form Ω that determines the
symplectic geometry of phase space. We will define
Definition (Symplectic Group). The symplectic group Sp(2d, R) is the sub-
group of linear transformations g of M ∗ = R2d that satisfy

Ω(gv1 , gv2 ) = Ω(v1 , v2 )

for v1 , v2 ∈ M ∗ .
While this definition uses the dual phase space M ∗ and Ω, it would have been
equivalent to have made the definition using M and ω, since these transforma-
tions preserve the isomorphism between M and M ∗ given by Ω (see equation
12.6). For an action on M ∗

u ∈ M ∗ → gu ∈ M ∗

the action on elements of M , which correspond to linear functions Ω(u, ·) on


M ∗ is given by

Ω(u, ·) ∈ M → g · Ω(u, ·) = Ω(u, g −1 (·)) = Ω(gu, ·) ∈ M

Here the first equality uses the definition of the dual representation (see 4.2)
and the second uses the invariance of Ω.

14.1.1 The symplectic group for d = 1


In order to study symplectic groups as groups of matrices, we’ll begin with the
case d = 1 and the group Sp(2, R). We can write Ω as

Ω((cq , cp ), (c′q , c′p )) = cq c′p − cp c′q = (cq  cp ) [[0, 1], [−1, 0]] (c′q , c′p )ᵀ (14.1)

Note that for the analogous case of the inner product, the same formula holds
with the elementary antisymmetric matrix replaced by the identity matrix.
A linear transformation g of M ∗ will be given by

(cq , cp )ᵀ → [[α, β], [γ, δ]] (cq , cp )ᵀ

The condition for Ω to be invariant under such a transformation is

[[α, β], [γ, δ]]ᵀ [[0, 1], [−1, 0]] [[α, β], [γ, δ]] = [[0, 1], [−1, 0]] (14.2)

or

[[0, αδ − βγ], [−αδ + βγ, 0]] = [[0, 1], [−1, 0]]

so

det [[α, β], [γ, δ]] = αδ − βγ = 1

This says that we can have any linear transformation with unit determinant.
In other words, we find that Sp(2, R) = SL(2, R). We will see later that this
isomorphism with a special linear group only happens for d = 1.
For the analog of equation 14.2 in the inner product case, replace the el-
ementary antisymmetric matrices by unit matrices, giving the condition that
defines orthogonal matrices, g T g = 1.
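This characterization can be illustrated numerically (an added sketch, not from the text), using the fact that for 2 by 2 matrices gᵀΩg = det(g) Ω:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # matrix of Omega from equation 14.1

def preserves_omega(g):
    """Test the defining condition g^T J g = J of Sp(2, R)."""
    return np.allclose(g.T @ J @ g, J)

rng = np.random.default_rng(0)
for _ in range(50):
    g = rng.normal(size=(2, 2))
    det = np.linalg.det(g)
    # For 2x2 matrices g^T J g = det(g) J, so invariance of Omega
    # is exactly the condition det(g) = 1
    assert preserves_omega(g) == np.isclose(det, 1.0)
    gn = g / np.sqrt(abs(det))  # rescale so that det(gn) is +1 or -1
    assert preserves_omega(gn) == (det > 0)
```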
Now turning to the Lie algebra, for group elements g ∈ GL(2, R) near the
identity, one can write g in the form g = etL where L is in the Lie algebra
gl(2, R). The condition that g acts on M∗ preserving Ω implies that (differentiating 14.2)

d/dt ((etL )ᵀ [[0, 1], [−1, 0]] etL ) = (etL )ᵀ (Lᵀ [[0, 1], [−1, 0]] + [[0, 1], [−1, 0]] L) etL = 0

Setting t = 0, the condition on L is

Lᵀ [[0, 1], [−1, 0]] + [[0, 1], [−1, 0]] L = 0 (14.3)

This requires that L must be of the form

L = [[a, b], [c, −a]]

which is what one expects: L is in the Lie algebra sl(2, R) of 2 by 2 real matrices
with zero trace. The analog in the inner product case is just the condition
defining elements of the Lie algebra of the orthogonal group, LT + L = 0.
The homogeneous degree two polynomials in p and q form a three-dimensional
sub-Lie algebra of the Lie algebra of functions on phase space, since the non-zero
Poisson bracket relations between them on the basis q²/2, p²/2, qp are

{q²/2, p²/2} = qp  {qp, p²} = 2p²  {qp, q²} = −2q²
This Lie algebra is isomorphic to sl(2, R), with an explicit isomorphism given
by identifying basis elements as follows:

q2 p2
     
0 1 0 0 1 0
↔E= − ↔F = − qp ↔ G = (14.4)
2 0 0 2 1 0 0 −1

The commutation relations amongst these matrices are

[E, F ] = G [G, E] = 2E [G, F ] = −2F

which are the same as the Poisson bracket relations between the corresponding
quadratic polynomials.
We thus see that we have an isomorphism between the Lie algebra of degree
two homogeneous polynomials with the Poisson bracket and the Lie algebra

of 2 by 2 trace-zero real matrices with the commutator as Lie bracket. The
isomorphism on general elements of these Lie algebras is

µL = −aqp + bq²/2 − cp²/2 = (1/2) (q  p) [[b, −a], [−a, −c]] (q, p)ᵀ ↔ L = [[a, b], [c, −a]] (14.5)

We have seen that this is a Lie algebra isomorphism on basis elements, but one
can also explicitly check that

{µL , µL′ } = µ[L,L′ ]

The use of the notation µL for these quadratic functions reflects the fact
that
L ∈ sl(2, R) → µL
is a moment map. This is for the SL(2, R) action on phase space M = R2
corresponding to the above SL(2, R) action on M (under the identification
between M and M given by Ω). One can check the condition XL = XµL on
vector fields on M , but we will not do this here, since for our purposes it is the
action on M that is important.
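The identity {µL , µL′ } = µ[L,L′ ] can be verified for general coefficients; here is a sketch (an added sympy illustration, not part of the text):

```python
import sympy as sp

q, p = sp.symbols('q p')
a1, b1, c1, a2, b2, c2 = sp.symbols('a1 b1 c1 a2 b2 c2')

def pb(f, g):
    """Poisson bracket {f, g} for d = 1."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

def L(a, b, c):
    """Trace-zero matrix of equation 14.5."""
    return sp.Matrix([[a, b], [c, -a]])

def mu(M):
    """Quadratic polynomial mu_L of equation 14.5, read off from M."""
    a, b, c = M[0, 0], M[0, 1], M[1, 0]
    return -a * q * p + b * q**2 / 2 - c * p**2 / 2

L1, L2 = L(a1, b1, c1), L(a2, b2, c2)
# {mu_L, mu_L'} = mu_[L, L'] for arbitrary coefficients
assert sp.expand(pb(mu(L1), mu(L2)) - mu(L1 * L2 - L2 * L1)) == 0
```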
Two important subgroups of SL(2, R) are

• The subgroup of elements one gets by exponentiating G, which is isomor-


phic to the multiplicative group of positive real numbers

etG = [[et , 0], [0, e−t ]]

Here one can explicitly see that this group has elements going off to infinity.

• Exponentiating the Lie algebra element E − F gives rotations of the plane

eθ(E−F ) = [[cos θ, sin θ], [− sin θ, cos θ]]

Note that the Lie algebra element being exponentiated here is

E − F ↔ (1/2)(p² + q²)
which we will later re-encounter as the Hamiltonian function for the har-
monic oscillator.
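Both exponentials can be checked numerically (an added sketch, not from the text; it uses scipy.linalg.expm for the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

E = np.array([[0.0, 1.0], [0.0, 0.0]])
F = np.array([[0.0, 0.0], [1.0, 0.0]])
G = np.array([[1.0, 0.0], [0.0, -1.0]])

t, theta = 0.7, 1.2
# exp(tG) is the diagonal one-parameter subgroup
assert np.allclose(expm(t * G), np.diag([np.exp(t), np.exp(-t)]))
# exp(theta(E - F)) is clockwise rotation by theta
assert np.allclose(expm(theta * (E - F)),
                   [[np.cos(theta), np.sin(theta)],
                    [-np.sin(theta), np.cos(theta)]])
```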

The group SL(2, R) is non-compact and thus its representation theory is


quite unlike the case of SU (2). In particular, all of its non-trivial irreducible
unitary representations are infinite-dimensional, forming an important topic in
mathematics, but one that is beyond our scope. We will be studying just one
such irreducible representation, and it is a representation only of a double-cover
of SL(2, R), not SL(2, R) itself.

14.1.2 The symplectic group for arbitrary d
For general d, the symplectic group Sp(2d, R) is the group of linear
transformations g of M∗ that leave Ω (see 12.3) invariant, i.e. satisfy

Ω(g (cq , cp )ᵀ, g (c′q , c′p )ᵀ) = Ω((cq , cp )ᵀ, (c′q , c′p )ᵀ)

where cq , cp are d-dimensional vectors. By essentially the same calculation as in


the d = 1 case, we find the d-dimensional generalization of equation 14.2. This
says that Sp(2d, R) is the group of real 2d by 2d matrices g satisfying

gᵀ [[0, 1], [−1, 0]] g = [[0, 1], [−1, 0]]

where 0 is the d by d zero matrix, 1 the d by d unit matrix.


Again by a similar argument to the d = 1 case where the Lie algebra sp(2, R)
was determined by the condition 14.3, sp(2d, R) is the Lie algebra of 2d by 2d
matrices L satisfying

Lᵀ [[0, 1], [−1, 0]] + [[0, 1], [−1, 0]] L = 0

Such matrices will be those with the block form

L = [[A, B], [C, −Aᵀ]] (14.6)

where A, B, C are d by d real matrices, with B and C symmetric, i.e.

B = BT , C = C T
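A numeric sketch of this block-form condition (an added illustration, not from the text):

```python
import numpy as np

d = 3
J = np.block([[np.zeros((d, d)), np.eye(d)],
              [-np.eye(d), np.zeros((d, d))]])

def in_sp(L):
    """Check the defining condition L^T J + J L = 0 of sp(2d, R)."""
    return np.allclose(L.T @ J + J @ L, 0.0)

def random_sp_element(rng):
    """Random L of the block form 14.6, with B and C symmetric."""
    A = rng.normal(size=(d, d))
    B = rng.normal(size=(d, d)); B = B + B.T
    C = rng.normal(size=(d, d)); C = C + C.T
    return np.block([[A, B], [C, -A.T]])

rng = np.random.default_rng(1)
L1, L2 = random_sp_element(rng), random_sp_element(rng)
assert in_sp(L1) and in_sp(L2)
# sp(2d, R) is a Lie algebra: it is closed under the commutator
assert in_sp(L1 @ L2 - L2 @ L1)
```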

The generalization of 14.5 is

Theorem 14.1. The Lie algebra sp(2d, R) is isomorphic to the Lie algebra of
order two homogeneous polynomials on M = R2d by the isomorphism (using a
vector notation for the coefficient functions q1 , · · · , qd , p1 , · · · , pd )

L ↔ µL

where

µL = (1/2) (q  p) L [[0, −1], [1, 0]] (q, p)ᵀ
= (1/2) (q  p) [[B, −A], [−Aᵀ, −C]] (q, p)ᵀ
= (1/2) (q · Bq − 2q · Ap − p · Cp) (14.7)

We will postpone the proof of this theorem until section 14.2, since it is easier
to first study Poisson brackets between order two and order one polynomials,
then use this to prove the theorem about Poisson brackets between order two
polynomials.
The Lie algebra sp(2d, R) has a subalgebra gl(d, R) consisting of matrices
of the form

L = [[A, 0], [0, −Aᵀ]]

or, in terms of polynomials, the polynomials

−q · Ap = −(Aᵀ p) · q

Here A is any real d by d matrix. This shows that one way to get symplectic
transformations is to take any linear transformation of the position coordinates,
together with the dual linear transformation on momentum coordinates. In this
way, any linear group of symmetries of the position space becomes a group of
symmetries of phase-space. An example of this is the group SO(d) of spatial
rotations, with Lie algebra so(d) ⊂ gl(d, R), the antisymmetric d by d matrices.
In the case d = 3, $\mu_L$ gives the standard expression for the angular momentum as a function of the $q_j,p_k$ coordinates on phase space. For example, taking $L=l_1$, one has
$$\mu_{l_1}=q_2p_3-q_3p_2$$
the standard expression for angular momentum about the 1-axis.
Another important subgroup comes from taking A = 0, B = 1, C = −1, which gives
$$\mu_L=\frac{1}{2}(|\mathbf{q}|^2+|\mathbf{p}|^2)$$
which will be the Hamiltonian function for a d-dimensional harmonic oscillator. Exponentiating, one gets an SO(2) subgroup, one that acts on phase space in a way that mixes position and momentum coordinates, so cannot be understood just in terms of configuration space.

14.2 The symplectic group and automorphisms


of the Heisenberg group
Returning to the d = 1 case, we have found two three-dimensional Lie algebras (h3 and sl(2, R)) as subalgebras of the infinite dimensional Lie algebra of functions on phase space:

• h3, the Lie algebra of linear polynomials on M, with basis 1, q, p.

• sl(2, R), the Lie algebra of order two homogeneous polynomials on M, with basis $q^2,p^2,qp$.

Taking all quadratic polynomials, we get a six-dimensional Lie algebra with basis elements $1,q,p,qp,q^2,p^2$. This is not the direct product of h3 and sl(2, R) since there are nonzero Poisson brackets
$$\{qp,q\}=-q,\quad\{qp,p\}=p$$
$$\left\{\frac{p^2}{2},q\right\}=-p,\quad\left\{\frac{q^2}{2},p\right\}=q\tag{14.8}$$
These relations show that operating on a basis of linear functions on M by
taking the Poisson bracket with something in sl(2, R) (a quadratic function)
provides a linear transformation on M ∗ . In this section we will see that this
is a reflection of the fact that SL(2, R) acts on the Heisenberg group H3 by
automorphisms.
Recall the definition 11.1 of the Heisenberg group H3 as the set of elements
$$\left(\begin{pmatrix}x\\y\end{pmatrix},z\right)\in M\oplus\mathbf{R}$$
with the group law
$$\left(\begin{pmatrix}x\\y\end{pmatrix},z\right)\left(\begin{pmatrix}x'\\y'\end{pmatrix},z'\right)=\left(\begin{pmatrix}x+x'\\y+y'\end{pmatrix},z+z'+\frac{1}{2}\Omega\left(\begin{pmatrix}x\\y\end{pmatrix},\begin{pmatrix}x'\\y'\end{pmatrix}\right)\right)$$
Elements g ∈ SL(2, R) act on H3 by
$$\left(\begin{pmatrix}x\\y\end{pmatrix},z\right)\rightarrow\varphi_g\left(\left(\begin{pmatrix}x\\y\end{pmatrix},z\right)\right)=\left(g\begin{pmatrix}x\\y\end{pmatrix},z\right)\tag{14.9}$$
This is an example of

Definition (Group automorphisms). If an action of elements g of a group G on a group H
$$h\in H\rightarrow\varphi_g(h)\in H$$
satisfies
$$\varphi_g(h_1)\varphi_g(h_2)=\varphi_g(h_1h_2)$$
for all g ∈ G and $h_1,h_2\in H$, the group G is said to act on H by automorphisms. Each map $\varphi_g$ is an automorphism of H. Note that since $\varphi$ is an action of G, we have $\varphi_{g_1g_2}=\varphi_{g_1}\varphi_{g_2}$.
Here G = SL(2, R), H = H3, and the $\varphi_g$ given above is an action by automorphisms since
$$\varphi_g\left(\begin{pmatrix}x\\y\end{pmatrix},z\right)\varphi_g\left(\begin{pmatrix}x'\\y'\end{pmatrix},z'\right)=\left(g\begin{pmatrix}x\\y\end{pmatrix},z\right)\left(g\begin{pmatrix}x'\\y'\end{pmatrix},z'\right)$$
$$=\left(g\begin{pmatrix}x+x'\\y+y'\end{pmatrix},z+z'+\frac{1}{2}\Omega\left(g\begin{pmatrix}x\\y\end{pmatrix},g\begin{pmatrix}x'\\y'\end{pmatrix}\right)\right)$$
$$=\left(g\begin{pmatrix}x+x'\\y+y'\end{pmatrix},z+z'+\frac{1}{2}\Omega\left(\begin{pmatrix}x\\y\end{pmatrix},\begin{pmatrix}x'\\y'\end{pmatrix}\right)\right)$$
$$=\varphi_g\left(\left(\begin{pmatrix}x\\y\end{pmatrix},z\right)\left(\begin{pmatrix}x'\\y'\end{pmatrix},z'\right)\right)\tag{14.10}$$
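The automorphism property 14.10 can also be confirmed with a short computation in code. This sketch is an added illustration in plain Python; the particular g ∈ SL(2, R) and Heisenberg elements are arbitrary sample values. It implements the H3 group law and the action 14.9:

```python
def omega(v, w):
    # Omega((x, y), (x', y')) = x y' - y x'
    return v[0]*w[1] - v[1]*w[0]

def h3_mult(h1, h2):
    # Heisenberg group law on pairs ((x, y), z)
    (v1, z1), (v2, z2) = h1, h2
    v = (v1[0] + v2[0], v1[1] + v2[1])
    return (v, z1 + z2 + omega(v1, v2)/2)

def phi(g, h):
    # SL(2,R) action (14.9): g acts on the R^2 part, leaves z alone
    (x, y), z = h
    (a, b), (c, d) = g
    return ((a*x + b*y, c*x + d*y), z)

a, b, c = 1.5, -0.7, 2.0
d = (1 + b*c)/a          # forces det g = ad - bc = 1, i.e. g in SL(2,R)
g = ((a, b), (c, d))

h1 = ((0.3, 1.2), 0.5)
h2 = ((-2.0, 0.7), 1.1)

lhs = h3_mult(phi(g, h1), phi(g, h2))
rhs = phi(g, h3_mult(h1, h2))
assert abs(lhs[0][0] - rhs[0][0]) < 1e-12
assert abs(lhs[0][1] - rhs[0][1]) < 1e-12
assert abs(lhs[1] - rhs[1]) < 1e-12
```

Note that the check works precisely because det g = 1, so that the action of g preserves Ω.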

One can consider the Lie algebra h3 instead of the group H3, and there will again be an action of SL(2, R) by automorphisms. Denoting elements $c_qq+c_pp+c$ of the Lie algebra $\mathfrak{h}_3=M\oplus\mathbf{R}$ (see section 12.2) by
$$\left(\begin{pmatrix}c_q\\c_p\end{pmatrix},c\right)$$
the Lie bracket is
$$\left[\left(\begin{pmatrix}c_q\\c_p\end{pmatrix},c\right),\left(\begin{pmatrix}c'_q\\c'_p\end{pmatrix},c'\right)\right]=\left(\begin{pmatrix}0\\0\end{pmatrix},c_qc'_p-c_pc'_q\right)=\left(\begin{pmatrix}0\\0\end{pmatrix},\Omega\left(\begin{pmatrix}c_q\\c_p\end{pmatrix},\begin{pmatrix}c'_q\\c'_p\end{pmatrix}\right)\right)$$
This Lie bracket depends only on Ω, so acting on M by g ∈ SL(2, R) will give a map of h3 to itself preserving the Lie bracket. More explicitly, an SL(2, R) group element acts on the Lie algebra h3 by
$$X=\left(\begin{pmatrix}c_q\\c_p\end{pmatrix},c\right)\in\mathfrak{h}_3\rightarrow\varphi_g(X)=\left(g\begin{pmatrix}c_q\\c_p\end{pmatrix},c\right)$$
These $\varphi_g$ are just the same maps $\varphi_g$ that give automorphisms of the group structure of H3, since the exponential map relating h3 and H3 in these coordinates is just the identification
$$\left(\begin{pmatrix}c_q\\c_p\end{pmatrix},c\right)\leftrightarrow\left(\begin{pmatrix}x\\y\end{pmatrix},z\right)$$

The $\varphi_g$ provide an example of

Definition (Lie algebra automorphisms). If an action of elements g of a group G on a Lie algebra h
$$X\in\mathfrak{h}\rightarrow\varphi_g(X)\in\mathfrak{h}$$
satisfies
$$[\varphi_g(X),\varphi_g(Y)]=\varphi_g([X,Y])$$
for all g ∈ G and X, Y ∈ h, the group is said to act on h by automorphisms. The action of $\varphi_g$ on h is an automorphism of h and we have the relation $\varphi_{g_1g_2}=\varphi_{g_1}\varphi_{g_2}$.
Two examples are
• In the case discussed above, SL(2, R) acts on h3 by automorphisms. With
respect to the decomposition

h3 = M ⊕ R

SL(2, R) acts just on M, by linear transformations preserving Ω.


• If h is the Lie algebra of a Lie group H, the adjoint representation (Ad, h) gives an action
$$X\in\mathfrak{h}\rightarrow Ad(h)(X)=hXh^{-1}$$
of H on h by automorphisms $\varphi_h=Ad(h)$. For the case of the Heisenberg group, one can check that the adjoint representation of H3 on h3 leaves invariant the M component, only acting on the R component. Note that this is opposite behavior to the co-adjoint action of H3 on $\mathfrak{h}_3^*$, which acts on M by translations (as in equation 13.7).
The SL(2, R) action on h3 by Lie algebra automorphisms has an infinitesimal version (i.e. for group elements infinitesimally close to the identity), an action of the Lie algebra of SL(2, R) on h3. This is defined for L ∈ sl(2, R) and X ∈ h3 by
$$L\cdot X=\frac{d}{dt}(e^{tL}\cdot X)|_{t=0}\tag{14.11}$$
Computing this, one finds
$$L\cdot\left(\begin{pmatrix}c_q\\c_p\end{pmatrix},c\right)=\left(L\begin{pmatrix}c_q\\c_p\end{pmatrix},0\right)\tag{14.12}$$
so L acts on $\mathfrak{h}_3=M\oplus\mathbf{R}$ just by matrix multiplication on vectors in M.
More generally, one has
Definition 14.1 (Lie algebra derivations). If an action of a Lie algebra g on a
Lie algebra h
X ∈h→Z ·X ∈h
satisfies
[Z · X, Y ] + [X, Z · Y ] = Z · [X, Y ]
for all Z ∈ g and X, Y ∈ h, the Lie algebra g is said to act on h by derivations.
The action of an element Z on h is a derivation of h.
Given an action of a Lie group G on a Lie algebra h by automorphisms, taking the derivative as in 14.11 gives an action of g on h by derivations, since
$$Z\cdot[X,Y]=\frac{d}{dt}(\varphi_{e^{tZ}}([X,Y]))|_{t=0}=\frac{d}{dt}([\varphi_{e^{tZ}}X,\varphi_{e^{tZ}}Y])|_{t=0}=[Z\cdot X,Y]+[X,Z\cdot Y]$$
We will often refer to this action of g on h as the infinitesimal version of the action of G on h.
Two examples are
• The case above, where sl(2, R) acts on h3 by derivations.
• The adjoint representation of a Lie algebra h on itself gives an action of h
on itself by derivations, with
X ∈ h → Z · X = ad(Z)(X) = [Z, X]

The Poisson brackets between degree two and degree one polynomials discussed at the beginning of this section give explicitly the action of sl(2, R) on h3 by derivations. For a general L ∈ sl(2, R) and $c_qq+c_pp+c\in\mathfrak{h}_3$ we have
$$\{\mu_L,c_qq+c_pp+c\}=c'_qq+c'_pp,\quad\begin{pmatrix}c'_q\\c'_p\end{pmatrix}=\begin{pmatrix}ac_q+bc_p\\cc_q-ac_p\end{pmatrix}=L\begin{pmatrix}c_q\\c_p\end{pmatrix}\tag{14.13}$$
(here $\mu_L$ is given by 14.5). We see that this is just the action of sl(2, R) by derivations on h3 of equation 14.12, the infinitesimal version of the action of SL(2, R) on h3 by automorphisms.
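Equation 14.13 can be verified symbolically for a general $L=\begin{pmatrix}a&b\\c&-a\end{pmatrix}$. The following added check (assuming sympy is available) uses $\mu_L$ from 14.7 with A = a, B = b, C = c:

```python
import sympy as sp

q, p, a, b, c, cq, cp = sp.symbols('q p a b c c_q c_p')

def poisson(f, g):
    # {f, g} = df/dq dg/dp - df/dp dg/dq  (d = 1)
    return sp.expand(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q))

# mu_L for L = [[a, b], [c, -a]]: equation 14.7 with A = a, B = b, C = c
mu = sp.Rational(1, 2)*(b*q**2 - 2*a*q*p - c*p**2)

result = poisson(mu, cq*q + cp*p)

# expected: c'_q q + c'_p p with (c'_q, c'_p) = L (c_q, c_p), as in (14.13)
expected = (a*cq + b*cp)*q + (c*cq - a*cp)*p
assert sp.expand(result - expected) == 0
```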
Note that in the larger Lie algebra of all polynomials on M of order two or
less, the action of sl(2, R) on h3 by derivations is part of the adjoint action of
the Lie algebra on itself, since it is given by the Poisson bracket (which is the
Lie bracket), between order two and order one polynomials.
The generalization to the case of arbitrary d is

Theorem. The sp(2d, R) action on $\mathfrak{h}_{2d+1}=M\oplus\mathbf{R}$ by derivations is
$$L\cdot(\mathbf{c}_q\cdot\mathbf{q}+\mathbf{c}_p\cdot\mathbf{p}+c)=\{\mu_L,\mathbf{c}_q\cdot\mathbf{q}+\mathbf{c}_p\cdot\mathbf{p}+c\}=\mathbf{c}'_q\cdot\mathbf{q}+\mathbf{c}'_p\cdot\mathbf{p}\tag{14.14}$$
where
$$\begin{pmatrix}\mathbf{c}'_q\\\mathbf{c}'_p\end{pmatrix}=L\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix}$$
or, equivalently (see section 4.1), on coordinate function basis vectors of M one has
$$\left\{\mu_L,\begin{pmatrix}\mathbf{q}\\\mathbf{p}\end{pmatrix}\right\}=L^T\begin{pmatrix}\mathbf{q}\\\mathbf{p}\end{pmatrix}$$
Proof. One can first prove 14.14 for the cases when only one of A, B, C is non-zero; the general case then follows by linearity. For instance, taking the special case
$$L=\begin{pmatrix}0&B\\0&0\end{pmatrix},\quad\mu_L=\frac{1}{2}\mathbf{q}\cdot B\mathbf{q}$$
one can show that the action on coordinate functions (the basis vectors of M) is
$$\left\{\frac{1}{2}\mathbf{q}\cdot B\mathbf{q},\begin{pmatrix}\mathbf{q}\\\mathbf{p}\end{pmatrix}\right\}=L^T\begin{pmatrix}\mathbf{q}\\\mathbf{p}\end{pmatrix}=\begin{pmatrix}0\\B\mathbf{q}\end{pmatrix}$$
by computing
$$\left\{\frac{1}{2}\sum_{j,k}q_jB_{jk}q_k,\ p_l\right\}=\frac{1}{2}\sum_{j,k}\left(q_jB_{jk}\{q_k,p_l\}+\{q_j,p_l\}B_{jk}q_k\right)=\frac{1}{2}\left(\sum_jq_jB_{jl}+\sum_kB_{lk}q_k\right)=\sum_jB_{lj}q_j$$
(since $B=B^T$). Repeating for A and C, one finds that for general L one has
$$\left\{\mu_L,\begin{pmatrix}\mathbf{q}\\\mathbf{p}\end{pmatrix}\right\}=L^T\begin{pmatrix}\mathbf{q}\\\mathbf{p}\end{pmatrix}$$
Since a linear function on M can be written as
$$\mathbf{c}_q\cdot\mathbf{q}+\mathbf{c}_p\cdot\mathbf{p}=\begin{pmatrix}\mathbf{c}_q&\mathbf{c}_p\end{pmatrix}\begin{pmatrix}\mathbf{q}\\\mathbf{p}\end{pmatrix}$$
and
$$\begin{pmatrix}\mathbf{c}_q&\mathbf{c}_p\end{pmatrix}L^T\begin{pmatrix}\mathbf{q}\\\mathbf{p}\end{pmatrix}=\left(L\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix}\right)^T\begin{pmatrix}\mathbf{q}\\\mathbf{p}\end{pmatrix}$$
we have
$$\begin{pmatrix}\mathbf{c}'_q\\\mathbf{c}'_p\end{pmatrix}=L\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix}$$
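The conclusion of this proof can be confirmed symbolically in a sample case. This added sketch (assuming sympy is available; the A, B, C below are arbitrary, with B and C symmetric) checks $\{\mu_L,(\mathbf{q}\ \mathbf{p})^T\}=L^T(\mathbf{q}\ \mathbf{p})^T$ for d = 2:

```python
import sympy as sp

d = 2
qs = sp.Matrix(sp.symbols('q1 q2'))
ps = sp.Matrix(sp.symbols('p1 p2'))

def poisson(f, g):
    return sp.expand(sum(sp.diff(f, qs[j])*sp.diff(g, ps[j])
                         - sp.diff(f, ps[j])*sp.diff(g, qs[j]) for j in range(d)))

A = sp.Matrix([[1, 2], [3, 4]])
B = sp.Matrix([[5, 1], [1, 6]])   # symmetric
C = sp.Matrix([[2, 0], [0, -1]])  # symmetric
L = sp.Matrix(sp.BlockMatrix([[A, B], [C, -A.T]]))

# the quadratic polynomial of equation 14.7
mu = sp.expand(((qs.T*B*qs - 2*qs.T*A*ps - ps.T*C*ps)[0, 0])/2)

# {mu_L, (q p)^T} should equal L^T (q p)^T
v = sp.Matrix.vstack(qs, ps)
lhs = sp.Matrix([poisson(mu, v[i]) for i in range(2*d)])
rhs = sp.expand(L.T*v)
assert sp.expand(lhs - rhs) == sp.zeros(2*d, 1)
```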

We can now prove theorem 14.1 as follows:

Proof. The map
$$L\rightarrow\mu_L$$
is clearly a vector space isomorphism between a space of matrices and a space of quadratic polynomials. To show that it is a Lie algebra isomorphism, one can use the Jacobi identity for the Poisson bracket to show
$$\{\mu_L,\{\mu_{L'},\mathbf{c}_q\cdot\mathbf{q}+\mathbf{c}_p\cdot\mathbf{p}\}\}-\{\mu_{L'},\{\mu_L,\mathbf{c}_q\cdot\mathbf{q}+\mathbf{c}_p\cdot\mathbf{p}\}\}=\{\{\mu_L,\mu_{L'}\},\mathbf{c}_q\cdot\mathbf{q}+\mathbf{c}_p\cdot\mathbf{p}\}$$
By equation 14.14, the left-hand side of this equation is $\mathbf{c}''_q\cdot\mathbf{q}+\mathbf{c}''_p\cdot\mathbf{p}$, where
$$\begin{pmatrix}\mathbf{c}''_q\\\mathbf{c}''_p\end{pmatrix}=(LL'-L'L)\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix}$$
As a result, the right-hand side shows that $\{\mu_L,\mu_{L'}\}$, a homogeneous quadratic polynomial, acts on linear polynomials in the same way as $\mu_{[L,L']}$. Since a homogeneous quadratic polynomial is determined by its Poisson brackets with the linear polynomials, this gives
$$\{\mu_L,\mu_{L'}\}=\mu_{[L,L']}$$

14.3 For further reading


For more on symplectic groups and the isomorphism between sp(2d, R) and
homogeneous degree two polynomials, see chapter 14 of [25] or chapter 4 of [19].
Chapter 15 of [25] and chapter 1 of [19] discuss the action of the symplectic
group on the Heisenberg group and Lie algebra by automorphisms.

Chapter 15

Quantization

Given any Hamiltonian classical mechanical system with phase space $\mathbf{R}^{2d}$, physics textbooks have a standard recipe for producing a quantum system, by a method known as "canonical quantization". We will see that for linear functions on phase space, this is just the construction we have already seen of a unitary representation $\Gamma'_S$ of the Heisenberg Lie algebra, the Schrödinger representation, and the Stone-von Neumann theorem assures us that this is the unique such construction, up to unitary equivalence. We will also see that this recipe can only ever be partially successful, with the Schrödinger representation extending to give us a representation of a subalgebra of the algebra of all functions on phase space (the polynomials of degree two and below), and a no-go theorem showing that this cannot be extended to a representation of the full infinite dimensional Lie algebra. Recipes for quantizing higher-order polynomials will always suffer from a lack of uniqueness, a phenomenon known to physicists as the existence of "operator ordering ambiguities".

In later chapters we will see that this quantization prescription does give unique quantum systems corresponding to some Hamiltonian systems (in particular the harmonic oscillator and the hydrogen atom), and does so in a manner that allows a description of the quantum system purely in terms of representation theory.

15.1 Canonical quantization


Very early on in the history of quantum mechanics, when Dirac first saw the
Heisenberg commutation relations, he noticed an analogy with the Poisson
bracket. One has
$$\{q,p\}=1\quad\text{and}\quad-\frac{i}{\hbar}[Q,P]=1$$
as well as
$$\frac{df}{dt}=\{f,h\}\quad\text{and}\quad\frac{d}{dt}O(t)=-\frac{i}{\hbar}[O,H]$$

where the last of these equations is the equation for the time dependence of a Heisenberg picture observable O(t) in quantum mechanics. Dirac's suggestion was that given any classical Hamiltonian system, one could "quantize" it by finding a rule that associates to a function f on phase space a self-adjoint operator $O_f$ (in particular $O_h=H$) acting on a state space H such that
$$O_{\{f,g\}}=-\frac{i}{\hbar}[O_f,O_g]$$
This is completely equivalent to asking for a unitary representation $(\pi',\mathcal H)$ of the infinite dimensional Lie algebra of functions on phase space (with the Poisson bracket as Lie bracket). To see this, note that one can choose units for momentum p and position q such that ħ = 1. Then, as usual getting a skew-adjoint Lie algebra representation operator by multiplying a self-adjoint operator by −i, setting
$$\pi'(f)=-iO_f$$
the Lie algebra homomorphism property
$$\pi'(\{f,g\})=[\pi'(f),\pi'(g)]$$
corresponds to
$$-iO_{\{f,g\}}=[-iO_f,-iO_g]=-[O_f,O_g]$$
so one has Dirac's suggested relation.
Recall that the Heisenberg Lie algebra is isomorphic to the three-dimensional subalgebra of functions on phase space given by linear combinations of the constant function, the function q and the function p. The Schrödinger representation $\Gamma_S$ provides a unitary representation not of the Lie algebra of all functions on phase space, but of these polynomials of degree at most one, as follows
$$O_1=\mathbf{1},\quad O_q=Q,\quad O_p=P$$
so
$$\Gamma'_S(1)=-i\mathbf{1},\quad\Gamma'_S(q)=-iQ=-iq,\quad\Gamma'_S(p)=-iP=-\frac{d}{dq}$$
Moving on to quadratic polynomials, these can also be quantized, as follows
$$O_{\frac{p^2}{2}}=\frac{P^2}{2},\quad O_{\frac{q^2}{2}}=\frac{Q^2}{2}$$
For the function pq one can no longer just replace p by P and q by Q, since the operators P and Q don't commute, and neither PQ nor QP is self-adjoint. What does work, satisfying all the conditions to give a Lie algebra homomorphism, is
$$O_{pq}=\frac{1}{2}(PQ+QP)$$
This shows that the Schrödinger representation $\Gamma'_S$ that was defined as a representation of the Heisenberg Lie algebra h3 extends to a unitary Lie algebra representation of a larger Lie algebra, that of all quadratic polynomials on phase space, a representation that we will continue to denote by $\Gamma'_S$ and refer to as the Schrödinger representation. On a basis of homogeneous order two polynomials we have
$$\Gamma'_S\left(\frac{p^2}{2}\right)=-i\frac{P^2}{2}=\frac{i}{2}\frac{d^2}{dq^2}$$
$$\Gamma'_S\left(\frac{q^2}{2}\right)=-i\frac{Q^2}{2}=-\frac{i}{2}q^2$$
$$\Gamma'_S(pq)=\frac{-i}{2}(PQ+QP)$$
Restricting $\Gamma'_S$ to just linear combinations of these homogeneous order two polynomials (which give the Lie algebra sl(2, R), recall equation 14.4), we get a Lie algebra representation of sl(2, R) called the metaplectic representation.
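One can confirm on an example pair of quadratic polynomials that these operator assignments satisfy Dirac's relation. This added sketch (assuming sympy is available, with ħ = 1) realizes P and Q as operators on functions of q and checks that $O_{qp}=-i[O_{q^2/2},O_{p^2/2}]$, matching the classical bracket $\{q^2/2,p^2/2\}=qp$:

```python
import sympy as sp

q = sp.symbols('q')
psi = sp.Function('psi')(q)

def P(f):  # P = -i d/dq  (hbar = 1)
    return -sp.I*sp.diff(f, q)

def Q(f):  # Q = multiplication by q
    return q*f

# quantizations of q^2/2, p^2/2 and pq
O_q2 = lambda f: Q(Q(f))/2
O_p2 = lambda f: P(P(f))/2
O_pq = lambda f: (P(Q(f)) + Q(P(f)))/2

# classically {q^2/2, p^2/2} = qp, so Dirac's relation requires
# O_{qp} = -i [O_{q^2/2}, O_{p^2/2}]
lhs = O_pq(psi)
rhs = -sp.I*(O_q2(O_p2(psi)) - O_p2(O_q2(psi)))
assert sp.simplify(lhs - rhs) == 0
```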
Restricted to the Heisenberg Lie algebra, the Schrödinger representation $\Gamma'_S$ exponentiates to give a representation $\Gamma_S$ of the corresponding Heisenberg Lie group (see 11.4). As an sl(2, R) representation however, one can show that $\Gamma'_S$ has the same sort of problem as the spinor representation of su(2) = so(3), which was not a representation of SO(3), but only of its double cover SU(2) = Spin(3). To get a group representation, one must go to a double cover of the group SL(2, R), which will be called the metaplectic group and denoted Mp(2, R).

The source of the problem is the subgroup of SL(2, R) generated by exponentiating the Lie algebra element
$$\frac{1}{2}(p^2+q^2)\leftrightarrow E-F=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$$
When we study the Schrödinger representation using its action on the quantum harmonic oscillator state space H in chapter 20, we will see that the Hamiltonian is the operator
$$\frac{1}{2}(P^2+Q^2)$$
and this has half-integer eigenvalues. As a result, trying to exponentiate $\Gamma'_S$ gives a representation of SL(2, R) only up to a sign, and one needs to go to the double cover Mp(2, R) to get a true representation.
One should keep in mind though that, since SL(2, R) acts non-trivially by
automorphisms on H3 , elements of these two groups do not commute. The
Schrödinger representation is a representation not of the product group, but of
something called a “semi-direct product” which will be discussed in more detail
in chapter 16.

15.2 The Groenewold-van Hove no-go theorem


If one wants to quantize polynomial functions on phase space of degree greater than two, it quickly becomes clear that the problem of "operator ordering ambiguities" is a significant one. Different prescriptions involving different ways of ordering the P and Q operators lead to different $O_f$ for the same function f, with physically different observables (although the differences involve the commutator of P and Q, so higher-order terms in ħ).
When physicists first tried to find a consistent prescription for producing an operator $O_f$ corresponding to a polynomial function on phase space of degree greater than two, they found that there was no possible way to do this consistent with the relation
$$O_{\{f,g\}}=-\frac{i}{\hbar}[O_f,O_g]$$
for polynomials of degree greater than two. Whatever method one devises for
quantizing higher degree polynomials, it can only satisfy that relation to lowest
order in ~, and there will be higher order corrections, which depend upon one’s
choice of quantization scheme. Equivalently, it is only for the six-dimensional Lie
algebra of polynomials of degree up to two that the Schrödinger representation
gives one a Lie algebra representation, and this cannot be consistently extended
to a representation of a larger subalgebra of the functions on phase space. This
problem is made precise by the following no-go theorem
Theorem (Groenewold-van Hove). There is no map $f\rightarrow O_f$ from polynomials on $\mathbf{R}^2$ to self-adjoint operators on $L^2(\mathbf{R})$ satisfying
$$O_{\{f,g\}}=-\frac{i}{\hbar}[O_f,O_g]$$
and
$$O_p=P,\quad O_q=Q$$
for any Lie subalgebra of the functions on $\mathbf{R}^2$ larger than the subalgebra of polynomials of degree less than or equal to two.
Proof. For a detailed proof, see section 5.4 of [7], section 4.4 of [19], or chapter 16 of [25]. In outline, the proof begins by showing that taking Poisson brackets of polynomials of degree three leads to higher order polynomials, and that furthermore for degree three and above there will be no finite-dimensional subalgebras of polynomials of bounded degree. The assumptions of the theorem force certain specific operator ordering choices in degree three. These are then used to get a contradiction in degree four, using the fact that the same degree four polynomial has two different expressions as a Poisson bracket:
$$q^2p^2=\frac{1}{3}\{q^2p,p^2q\}=\frac{1}{9}\{q^3,p^3\}$$
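The degree four identity used here is easy to confirm symbolically (an added check, assuming sympy is available):

```python
import sympy as sp

q, p = sp.symbols('q p')

def poisson(f, g):
    # {f, g} = df/dq dg/dp - df/dp dg/dq
    return sp.expand(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q))

assert poisson(q**2*p, p**2*q) == 3*q**2*p**2
assert poisson(q**3, p**3) == 9*q**2*p**2
```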
3 9

15.3 Canonical quantization in d dimensions


One can easily generalize the above to the case of d dimensions, with the Schrödinger representation $\Gamma_S$ now giving a unitary representation of the Heisenberg group $H_{2d+1}$, with the corresponding Lie algebra representation given by
$$\Gamma'_S(q_j)=-iQ_j,\quad\Gamma'_S(p_j)=-iP_j$$
which satisfy the Heisenberg relations
$$[Q_j,P_k]=i\delta_{jk}$$
Generalizing to quadratic polynomials in the phase space coordinate functions, we have
$$\Gamma'_S(q_jq_k)=-iQ_jQ_k,\quad\Gamma'_S(p_jp_k)=-iP_jP_k,\quad\Gamma'_S(q_jp_k)=-\frac{i}{2}(Q_jP_k+P_kQ_j)\tag{15.1}$$
One can exponentiate these operators to get a representation on the same H of Mp(2d, R), a double cover of the symplectic group Sp(2d, R). This phenomenon will be examined carefully in later chapters, starting with chapter 18 and the calculation in section 18.2.1, followed by discussion in later chapters using a different (but unitarily equivalent) representation that appears in the quantization of the harmonic oscillator. The Groenewold-van Hove theorem implies that we cannot find a unitary representation of a larger group of canonical transformations extending this one on the Heisenberg and metaplectic groups.

15.4 Quantization and symmetries


The Schrödinger representation is thus a representation of the groups $H_{2d+1}$ and Mp(2d, R), with the Lie algebra representation providing observables corresponding to elements of the Lie algebras $\mathfrak{h}_{2d+1}$ (linear combinations of $Q_j$ and $P_k$) and sp(2d, R) (linear combinations of order-two combinations of $Q_j$ and $P_k$). The observables that commute with the Hamiltonian operator H will make up a Lie algebra of symmetries of the quantum system, and will take energy eigenstates to energy eigenstates of the same energy. Some examples for the physical case of d = 3 are:
• The group $\mathbf{R}^3$ of translations in coordinate space is a subgroup of the Heisenberg group and has a Lie algebra representation as linear combinations of the operators $-iP_j$. If the Hamiltonian is position-independent, for instance the free particle case of
$$H=\frac{1}{2m}(P_1^2+P_2^2+P_3^2)$$
then the momentum operators correspond to symmetries. Note that the position operators $Q_j$ do not commute with this Hamiltonian, and so do not correspond to a symmetry of the dynamics.
• The group SO(3) of spatial rotations is a subgroup of Sp(6, R) and the operators
$$-i(Q_2P_3-Q_3P_2),\quad-i(Q_3P_1-Q_1P_3),\quad-i(Q_1P_2-Q_2P_1)$$
are a basis for a Lie algebra representation of so(3). These are the same operators that were studied in chapter 8 under the name $\rho'(l_j)$. They will be symmetries of rotationally invariant Hamiltonians, for instance the free particle as above, or the particle in a potential
$$H=\frac{1}{2m}(P_1^2+P_2^2+P_3^2)+V(Q_1,Q_2,Q_3)$$
when the potential only depends on the combination $Q_1^2+Q_2^2+Q_3^2$.

15.5 More general notions of quantization


The definition given here of quantization using the Schrödinger representation
of h2d+1 only allows the construction of a quantum system based on a classical
phase space for the linear case of M = R2d . For other sorts of classical systems
one needs other methods to get a corresponding quantum system. One possible
approach is the path integral method, which starts with a choice of configuration
space and Lagrangian, and will be discussed in chapter 32.
Digression. The name "geometric quantization" refers to the attempt to generalize quantization to the case of any symplectic manifold M, starting with the idea of prequantization (see equation 13.6). This gives a representation of the Lie algebra of functions on M on a space of sections of a line bundle with connection ∇, where ∇ is a connection with curvature ω, the symplectic form on M. One then has to deal with two problems:

• The space of all functions on M is far too big, allowing states localized in both position and momentum variables in the case $M=\mathbf{R}^{2d}$. One needs some way to cut down this space to something like a space of functions depending on only half the variables (e.g. just the positions, or just the momenta). This requires finding an appropriate choice of a so-called "polarization" that will accomplish this.

• To get an inner product on the space of states, one needs to introduce a twist by a "square root" of a certain line bundle, something called the "metaplectic correction".

For more details, see for instance [29] or [75].
Geometric quantization focuses on finding an appropriate state space. An-
other general method, the method of “deformation quantization” focuses instead
on the algebra of operators, with a quantization given by finding an appropriate
non-commutative algebra that is in some sense a deformation of a commuta-
tive algebra of functions. To first order the deformation in the product law is
determined by the Poisson bracket.
Starting with any Lie algebra g, one can in principle use 12.4 to get a Pois-
son bracket on functions on the dual space g∗ , and then take the quantization
of this to be the algebra of operators known as the universal enveloping alge-
bra U (g). This will in general have many different irreducible representations
and corresponding possible quantum state spaces. The co-adjoint orbit philoso-
phy posits an approximate matching between orbits in g∗ under the dual of the

adjoint representation (which are symplectic manifolds) and irreducible repre-
sentations. Geometric quantization provides one possible method for trying to
associate representations to orbits. For more details, see [35].
None of the general methods of quantization is fully satisfactory, with each
running into problems in certain cases, or not providing a construction with all
the properties that one would want.

15.6 For further reading


Just about all quantum mechanics textbooks contain some version of the discus-
sion here of canonical quantization starting with classical mechanical systems
in Hamiltonian form. For discussions of quantization from the point of view of
representation theory, see [7] and chapters 14-16 of [25]. For a detailed discus-
sion of the Heisenberg group and Lie algebra, together with their representation
theory, also see chapter 2 of [35].

Chapter 16

Semi-direct Products

The theory of a free particle depends crucially on the group of symmetries


of three-dimensional space, a group which includes a subgroup R3 of spatial
translations, and a subgroup SO(3) of rotations. The second subgroup acts non-
trivially on the first, since the direction of a translation is rotated by an element
of SO(3). In later chapters dealing with special relativity, these symmetry
groups get enlarged to include a fourth dimension, time, and the theory of a
free particle will again be determined by these symmetry groups. In chapters 13
and 14 we saw that there are two groups acting on phase space: the Heisenberg
group H2d+1 and the symplectic group Sp(2d, R). In this situation also, the
second group acts non-trivially on the first by automorphisms (see 14.10).
This situation of two groups, with one acting on the other by automor-
phisms, allows one to construct a new sort of product of the two groups, called
the semi-direct product, and this will be the topic for this chapter. We’ll also
begin the study of representations of such groups, outlining what happens when
the first group is commutative. Chapter 18 will describe how the Schrödinger
representation of H2d+1 extends to become a representation (up to a sign am-
biguity) of the semi-direct product of H2d+1 and Sp(2d, R). In chapter 17 we’ll
consider the cases of the semi-direct product of translations and rotations in
two and three dimensions, and there see how the irreducible representations are
provided by the quantum state space of a free particle.

16.1 An example: the Euclidean group


Given two groups G′ and G′′, one can form the product group by taking pairs of elements (g′, g′′) ∈ G′ × G′′. However, when the two groups act on the same space, but elements of G′ and G′′ don't commute, a different sort of product group is needed. As an example, consider the case of pairs $(\mathbf{a}_2,R_2)$ of elements $\mathbf{a}_2\in\mathbf{R}^3$ and $R_2\in SO(3)$, acting on $\mathbf{R}^3$ by translation and rotation
$$\mathbf{v}\rightarrow(\mathbf{a}_2,R_2)\cdot\mathbf{v}=\mathbf{a}_2+R_2\mathbf{v}$$

If we then act on the result with $(\mathbf{a}_1,R_1)$ we get
$$(\mathbf{a}_1,R_1)\cdot((\mathbf{a}_2,R_2)\cdot\mathbf{v})=(\mathbf{a}_1,R_1)\cdot(\mathbf{a}_2+R_2\mathbf{v})=\mathbf{a}_1+R_1\mathbf{a}_2+R_1R_2\mathbf{v}$$
Note that this is not what we would get if we took the product group law on $\mathbf{R}^3\times SO(3)$, since then the action of $(\mathbf{a}_1,R_1)(\mathbf{a}_2,R_2)$ on $\mathbf{R}^3$ would be
$$\mathbf{v}\rightarrow\mathbf{a}_1+\mathbf{a}_2+R_1R_2\mathbf{v}$$

To get the correct group action on $\mathbf{R}^3$, we need to take $\mathbf{R}^3\times SO(3)$ not with the product group law, but instead with the group law
$$(\mathbf{a}_1,R_1)(\mathbf{a}_2,R_2)=(\mathbf{a}_1+R_1\mathbf{a}_2,R_1R_2)$$
This group law differs from the standard product law by a term $R_1\mathbf{a}_2$, which is the result of $R_1\in SO(3)$ acting non-trivially on $\mathbf{a}_2\in\mathbf{R}^3$. We will denote the set $\mathbf{R}^3\times SO(3)$ with this group law by
$$\mathbf{R}^3\rtimes SO(3)$$
This is the group of transformations of $\mathbf{R}^3$ preserving the standard inner product.
The same construction works in arbitrary dimensions, where one has
Definition (Euclidean group). The Euclidean group E(d) (sometimes written ISO(d) for "inhomogeneous" rotation group) in dimension d is the product of the translation and rotation groups of $\mathbf{R}^d$ as a set, with multiplication law
$$(\mathbf{a}_1,R_1)(\mathbf{a}_2,R_2)=(\mathbf{a}_1+R_1\mathbf{a}_2,R_1R_2)$$
(where $\mathbf{a}_j\in\mathbf{R}^d$, $R_j\in SO(d)$) and can be denoted by
$$\mathbf{R}^d\rtimes SO(d)$$
E(d) can also be written as a matrix group, taking it to be the subgroup of GL(d + 1, R) of matrices of the form (R is a d by d orthogonal matrix, a a d-dimensional column vector)
$$\begin{pmatrix}R&\mathbf{a}\\0&1\end{pmatrix}$$
One gets the multiplication law for E(d) from matrix multiplication since
$$\begin{pmatrix}R_1&\mathbf{a}_1\\0&1\end{pmatrix}\begin{pmatrix}R_2&\mathbf{a}_2\\0&1\end{pmatrix}=\begin{pmatrix}R_1R_2&\mathbf{a}_1+R_1\mathbf{a}_2\\0&1\end{pmatrix}$$
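This correspondence between the semi-direct product law and matrix multiplication can be checked numerically. The following added sketch (assuming numpy is available; the rotations and translations are arbitrary sample values in d = 2) embeds E(2) elements as 3 by 3 matrices:

```python
import numpy as np
from math import cos, sin

def embed(a, R):
    # E(d) element (a, R) as a (d+1) x (d+1) matrix (R a; 0 1)
    d = len(a)
    M = np.eye(d + 1)
    M[:d, :d] = R
    M[:d, d] = a
    return M

theta1, theta2 = 0.4, -1.1
R1 = np.array([[cos(theta1), -sin(theta1)], [sin(theta1), cos(theta1)]])
R2 = np.array([[cos(theta2), -sin(theta2)], [sin(theta2), cos(theta2)]])
a1 = np.array([1.0, 2.0])
a2 = np.array([-0.5, 3.0])

# matrix product reproduces the semi-direct product law
lhs = embed(a1, R1) @ embed(a2, R2)
rhs = embed(a1 + R1 @ a2, R1 @ R2)
assert np.allclose(lhs, rhs)
```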

16.2 Semi-direct product groups


The Euclidean group example of the previous section can be generalized to the
following

Definition (Semi-direct product group). Given a group K, a group N, and an action φ of K on N by automorphisms
$$\varphi_k:n\in N\rightarrow\varphi_k(n)\in N$$
the semi-direct product $N\rtimes K$ is the set of pairs (n, k) ∈ N × K with group law
$$(n_1,k_1)(n_2,k_2)=(n_1\varphi_{k_1}(n_2),k_1k_2)$$
One can easily check that this satisfies the group axioms. The inverse is
$$(n,k)^{-1}=(\varphi_{k^{-1}}(n^{-1}),k^{-1})$$
Checking associativity, one finds
$$((n_1,k_1)(n_2,k_2))(n_3,k_3)=(n_1\varphi_{k_1}(n_2),k_1k_2)(n_3,k_3)$$
$$=(n_1\varphi_{k_1}(n_2)\varphi_{k_1k_2}(n_3),k_1k_2k_3)$$
$$=(n_1\varphi_{k_1}(n_2)\varphi_{k_1}(\varphi_{k_2}(n_3)),k_1k_2k_3)$$
$$=(n_1\varphi_{k_1}(n_2\varphi_{k_2}(n_3)),k_1k_2k_3)$$
$$=(n_1,k_1)(n_2\varphi_{k_2}(n_3),k_2k_3)$$
$$=(n_1,k_1)((n_2,k_2)(n_3,k_3))$$
The notation $N\rtimes K$ for this construction has the weakness of not explicitly indicating the automorphism φ on which it depends. There may be multiple possible choices for φ, and these will always include the trivial choice $\varphi_k=1$ for all k ∈ K, which gives the standard product of groups.
Digression. For those familiar with the notion of a normal subgroup, N is a normal subgroup of $N\rtimes K$. A standard notation for "N is a normal subgroup of G" is $N\triangleleft G$. The symbol $\rtimes$ is supposed to be a mixture of the $\times$ and $\triangleleft$ symbols (note that some authors define it to point in the other direction).

The Euclidean group E(d) is an example with $N=\mathbf{R}^d$, K = SO(d). For $\mathbf{a}\in\mathbf{R}^d$, R ∈ SO(d) one has
$$\varphi_R(\mathbf{a})=R\mathbf{a}$$
In chapter 39 we will see another important example, the Poincaré group, which generalizes E(3) to include a time dimension, treating space and time according to the principles of special relativity.
The most important example for quantum theory is

Definition (Jacobi group). The Jacobi group in d dimensions is the semi-direct product group
$$G^J(d)=H_{2d+1}\rtimes Sp(2d,\mathbf{R})$$
If we write elements of the group as
$$\left(\left(\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix},c\right),k\right)$$
where k ∈ Sp(2d, R), then the automorphism $\varphi_k$ that defines the Jacobi group is given by
$$\varphi_k\left(\left(\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix},c\right)\right)=\left(k\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix},c\right)\tag{16.1}$$
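The Jacobi group law can be implemented directly from the definitions. This added sketch (assuming numpy is available; the sample k's are arbitrary determinant one matrices) realizes $G^J(1)$ as triples (v, z, k) with the semi-direct product multiplication, and checks associativity:

```python
import numpy as np

def omega(v, w):
    # the symplectic form on M = R^2
    return v[0]*w[1] - v[1]*w[0]

def jacobi_mult(g1, g2):
    # semi-direct product law (n1, k1)(n2, k2) = (n1 phi_{k1}(n2), k1 k2),
    # with phi_k acting on the R^2 part of H3 as in equation 16.1
    (v1, z1, k1), (v2, z2, k2) = g1, g2
    w = k1 @ v2                        # phi_{k1} applied to n2
    return (v1 + w, z1 + z2 + omega(v1, w)/2, k1 @ k2)

k1 = np.array([[1.0, 0.3], [0.0, 1.0]])    # det = 1
k2 = np.array([[2.0, 0.0], [0.5, 0.5]])    # det = 1
k3 = np.array([[0.0, 1.0], [-1.0, 0.0]])   # det = 1
g1 = (np.array([1.0, -2.0]), 0.7, k1)
g2 = (np.array([0.4, 1.5]), -0.2, k2)
g3 = (np.array([-1.0, 0.8]), 1.1, k3)

lhs = jacobi_mult(jacobi_mult(g1, g2), g3)
rhs = jacobi_mult(g1, jacobi_mult(g2, g3))
assert np.allclose(lhs[0], rhs[0])
assert np.allclose(lhs[1], rhs[1])
assert np.allclose(lhs[2], rhs[2])
```

Associativity relies on the k's preserving Ω (here they lie in SL(2, R) = Sp(2, R)), which is why the determinant one condition matters.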
Note that the Euclidean group E(d) is a subgroup of the Jacobi group $G^J(d)$, the subgroup of elements of the form
$$\left(\left(\begin{pmatrix}0\\\mathbf{c}_p\end{pmatrix},0\right),\begin{pmatrix}R&0\\0&R\end{pmatrix}\right)$$
where R ∈ SO(d). The
$$\left(\begin{pmatrix}0\\\mathbf{c}_p\end{pmatrix},0\right)\subset H_{2d+1}$$
make up the group $\mathbf{R}^d$ of translations in the $q_j$ coordinates, and the
$$k=\begin{pmatrix}R&0\\0&R\end{pmatrix}\subset Sp(2d,\mathbf{R})$$
are symplectic transformations since
$$\Omega\left(k\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix},k\begin{pmatrix}\mathbf{c}'_q\\\mathbf{c}'_p\end{pmatrix}\right)=R\mathbf{c}_q\cdot R\mathbf{c}'_p-R\mathbf{c}_p\cdot R\mathbf{c}'_q=\mathbf{c}_q\cdot\mathbf{c}'_p-\mathbf{c}_p\cdot\mathbf{c}'_q=\Omega\left(\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix},\begin{pmatrix}\mathbf{c}'_q\\\mathbf{c}'_p\end{pmatrix}\right)$$
(R is orthogonal, so preserves dot products).

16.3 Semi-direct product Lie algebras


We have seen that semi-direct product Lie groups can be constructed by taking a product N × K of Lie groups as a set, and imposing a group multiplication law that uses an action of K on N by automorphisms. In a similar manner, one can construct semi-direct product Lie algebras $\mathfrak{n}\rtimes\mathfrak{k}$ by taking the direct sum of n and k as vector spaces, and defining a Lie bracket that uses an action of k on n by derivations (the infinitesimal version of automorphisms, see definition 14.1).

Considering first the example $E(d)=\mathbf{R}^d\rtimes SO(d)$, recall that elements of E(d) can be written in the form
$$\begin{pmatrix}R&\mathbf{a}\\0&1\end{pmatrix}$$
for R ∈ SO(d) and $\mathbf{a}\in\mathbf{R}^d$. The tangent space to this group at the identity will be given by matrices of the form
$$\begin{pmatrix}X&\mathbf{a}\\0&0\end{pmatrix}$$
where X is an antisymmetric d by d matrix and $\mathbf{a}\in\mathbf{R}^d$. Exponentiating such matrices will give elements of E(d).

The Lie bracket is then given by the matrix commutator, so
$$\left[\begin{pmatrix}X_1&\mathbf{a}_1\\0&0\end{pmatrix},\begin{pmatrix}X_2&\mathbf{a}_2\\0&0\end{pmatrix}\right]=\begin{pmatrix}[X_1,X_2]&X_1\mathbf{a}_2-X_2\mathbf{a}_1\\0&0\end{pmatrix}\tag{16.2}$$
So the Lie algebra of E(d) will be given by taking the sum of $\mathbf{R}^d$ (the Lie algebra of $\mathbf{R}^d$) and so(d), with elements pairs $(\mathbf{a},X)$ with $\mathbf{a}\in\mathbf{R}^d$ and X an antisymmetric d by d matrix. The infinitesimal version of the rotation action of SO(d) on $\mathbf{R}^d$ by automorphisms
$$\varphi_R(\mathbf{a})=R\mathbf{a}$$
is
$$\frac{d}{dt}\varphi_{e^{tX}}(\mathbf{a})|_{t=0}=\frac{d}{dt}(e^{tX}\mathbf{a})|_{t=0}=X\mathbf{a}$$
Just in terms of such pairs, the Lie bracket can be written
$$[(\mathbf{a}_1,X_1),(\mathbf{a}_2,X_2)]=(X_1\mathbf{a}_2-X_2\mathbf{a}_1,[X_1,X_2])$$

We can define in general

Definition (Semi-direct product Lie algebra). Given Lie algebras k and n, and an action of elements Y ∈ k on n by derivations
$$X\in\mathfrak{n}\rightarrow Y\cdot X\in\mathfrak{n}$$
the semi-direct product $\mathfrak{n}\rtimes\mathfrak{k}$ is the set of pairs $(X,Y)\in\mathfrak{n}\oplus\mathfrak{k}$ with the Lie bracket
$$[(X_1,Y_1),(X_2,Y_2)]=([X_1,X_2]+Y_1\cdot X_2-Y_2\cdot X_1,[Y_1,Y_2])$$
One can easily see that in the special case of the Lie algebra of E(d) this agrees with the construction above.
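As an added numerical check (assuming numpy is available), one can verify the Jacobi identity for this bracket in the case $\mathfrak{n}=\mathbf{R}^3$, $\mathfrak{k}=\mathfrak{so}(3)$, i.e. for the Lie algebra of E(3), with random sample elements:

```python
import numpy as np

def bracket(u, v):
    # semi-direct product Lie bracket on pairs (a, X) in R^3 (+) so(3):
    # [(a1, X1), (a2, X2)] = (X1 a2 - X2 a1, [X1, X2])
    a1, X1 = u
    a2, X2 = v
    return (X1 @ a2 - X2 @ a1, X1 @ X2 - X2 @ X1)

def skew(M):
    # project onto antisymmetric matrices, i.e. so(3)
    return M - M.T

rng = np.random.default_rng(3)
u, v, w = [(rng.standard_normal(3), skew(rng.standard_normal((3, 3)))) for _ in range(3)]

# Jacobi identity: [u,[v,w]] + [v,[w,u]] + [w,[u,v]] = 0
s_vec = bracket(u, bracket(v, w))[0] + bracket(v, bracket(w, u))[0] + bracket(w, bracket(u, v))[0]
s_mat = bracket(u, bracket(v, w))[1] + bracket(v, bracket(w, u))[1] + bracket(w, bracket(u, v))[1]
assert np.allclose(s_vec, 0)
assert np.allclose(s_mat, 0)
```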
In section 14.1.2 we studied the Lie algebra of all polynomials of degree
at most two in d-dimensional phase space coordinates qj , pj , with the Poisson
bracket as Lie bracket. There we found two Lie subalgebras, the degree zero
and one polynomials (isomorphic to h2d+1 ), and the homogeneous degree two
polynomials (isomorphic to sp(2d, R)) with the second subalgebra acting on the
first by derivations as in equation 14.14.
Recall from chapter 14 that elements of this Lie algebra can also be written as pairs
$$\left(\left(\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix},c\right),L\right)$$
of elements in $\mathfrak{h}_{2d+1}$ and sp(2d, R), with this pair corresponding to the polynomial
$$\mu_L+\mathbf{c}_q\cdot\mathbf{q}+\mathbf{c}_p\cdot\mathbf{p}+c$$
In terms of such pairs, the Lie bracket is given by
$$\left[\left(\left(\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix},c\right),L\right),\left(\left(\begin{pmatrix}\mathbf{c}'_q\\\mathbf{c}'_p\end{pmatrix},c'\right),L'\right)\right]=\left(\left(L\begin{pmatrix}\mathbf{c}'_q\\\mathbf{c}'_p\end{pmatrix}-L'\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix},\Omega\left(\begin{pmatrix}\mathbf{c}_q\\\mathbf{c}_p\end{pmatrix},\begin{pmatrix}\mathbf{c}'_q\\\mathbf{c}'_p\end{pmatrix}\right)\right),[L,L']\right)$$
which satisfies the definition above.

So one example of a semi-direct product Lie algebra is
$$\mathfrak{h}_{2d+1}\rtimes\mathfrak{sp}(2d,\mathbf{R})$$
and from the discussion in section 14.2 one can see that this is the Lie algebra of the semi-direct product group
$$G^J(d)=H_{2d+1}\rtimes Sp(2d,\mathbf{R})$$
The Lie algebra of E(d) will be a sub-Lie algebra of this, consisting of elements of the form
$$\left(\left(\begin{pmatrix}0\\\mathbf{c}_p\end{pmatrix},0\right),\begin{pmatrix}X&0\\0&X\end{pmatrix}\right)$$
where X is an antisymmetric d by d matrix.

Digression. Just as E(d) can be identified with a group of d+1 by d+1 matrices,
the Jacobi group GJ (d) is also a matrix group and one can in principle work with
it and its Lie algebra using usual matrix methods. The construction is slightly
complicated and represents elements of GJ (d) as matrices in Sp(2d+1, R) . See
section 8.5 of [8] for details of the d = 1 case.

16.4 For further reading


Semi-direct products are not commonly covered in detail in either physics or
mathematics textbooks, with the exception of the case of the Poincaré group of
special relativity, which will be discussed in chapter 39. Some textbooks that
do cover the subject include section 3.8 of [59], chapter 6 of [26] and [8].

Chapter 17

The Quantum Free Particle as a Representation of the Euclidean Group

In this chapter we will explicitly construct unitary representations of the Euclidean groups E(2) and E(3) of spatial symmetries in two and three dimensions. The actions of these groups commute with the Hamiltonian of the free particle, and their irreducible representations will be given just by the quantum state space of a free particle (of fixed energy) in either two or three spatial dimensions. The momentum operators Pj will provide the infinitesimal action of translations on the state space, while angular momentum operators Lk will provide the infinitesimal rotation action (there will be only one of these in two dimensions, three in three dimensions).
The Hamiltonian of the free particle is proportional to the operator |P|2 .
This is a quadratic operator that commutes with the action of all the elements
of the Lie algebra of the Euclidean group, and so is a Casimir operator play-
ing an analogous role to that of the SO(3) Casimir operator |L|2 of section
8.4. Irreducible representations will be labeled by the eigenvalue of this oper-
ator, which in this case will be proportional to the energy. In the Schrödinger
representation where the Pj are differentiation operators, this will be a second-
order differential operator, and the eigenvalue equation will be a second-order
differential equation (the time-independent Schrödinger equation).
Using the Fourier transform, the space of solutions of the Schrödinger equa-
tion of fixed energy becomes something much easier to analyze, the space of
functions on momentum space supported only on the subspace of momenta of
a fixed length. In the case of E(2) this is just a circle, whereas for E(3) it is a
sphere. In both cases, for each radius one gets an irreducible representation.
In the case of E(3) other classes of irreducible representations can be con-
structed. This can be done by introducing multi-component wavefunctions,
with a new action of the rotation group SO(3). A second Casimir operator is

available in this case, and irreducible representations are eigenfunctions of this
operator in the space of wavefunctions of fixed energy. The eigenvalue of the
second Casimir operator turns out to be an integer, known to physicists as the
“helicity”.

17.1 The quantum free particle and representations of E(2)
We’ll begin with the case of two spatial dimensions, partly for simplicity, partly
because physical systems that are translationally invariant in one direction can
often be treated as effectively two dimensional. A basis for the Lie algebra of
E(2) is given by the functions

l = q1 p2 − q2 p1 , p1 , p2

on the d = 2 phase space M = R4 . The non-zero Lie bracket relations are given
by the Poisson brackets

{l, p1 } = p2 , {l, p2 } = −p1

and there is an isomorphism of this Lie algebra with a matrix Lie algebra of 3
by 3 matrices given by (listing the rows of each matrix)
l ↔ ((0, −1, 0), (1, 0, 0), (0, 0, 0)), p1 ↔ ((0, 0, 1), (0, 0, 0), (0, 0, 0)), p2 ↔ ((0, 0, 0), (0, 0, 1), (0, 0, 0))
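Both sides of this isomorphism can be verified directly. The following sketch (a quick check, not part of the text's argument) uses SymPy for the Poisson brackets and NumPy for the matrix commutators, and confirms that the two brackets agree:

```python
import numpy as np
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')

def pb(f, g):
    # Poisson bracket on the d = 2 phase space
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in [(q1, p1), (q2, p2)])

l = q1 * p2 - q2 * p1
assert sp.simplify(pb(l, p1) - p2) == 0
assert sp.simplify(pb(l, p2) + p1) == 0
assert pb(p1, p2) == 0

# the 3 by 3 matrices obey the same relations under the matrix commutator
L  = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]])
P1 = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]])
P2 = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]])
comm = lambda A, B: A @ B - B @ A
assert np.array_equal(comm(L, P1), P2)
assert np.array_equal(comm(L, P2), -P1)
assert np.array_equal(comm(P1, P2), np.zeros((3, 3)))
```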
Writing this Lie algebra in terms of linear and quadratic functions on the
phase space shows that it can be realized as a sub-Lie algebra of the Jacobi
Lie algebra gJ (2). Quantization via the Schrödinger representation Γ0S then
provides a unitary representation of the Lie algebra of E(2) on the state space
H of functions of the position variables q1 , q2 , in terms of operators
∂ ∂
Γ0S (p1 ) = −iP1 = − , Γ0S (p2 ) = −iP1 = − (17.1)
∂q1 ∂q2
and
∂ ∂
Γ0S (l) = −iL = −i(Q1 P2 − Q2 P1 ) = −(q1 − q2 ) (17.2)
∂q2 ∂q1
The Hamiltonian operator for the free particle is
1 1 ∂2 ∂2
H= (P12 + P22 ) = − ( 2 + 2)
2m 2m ∂q1 ∂q2
and solutions to the Schrödinger equation can be found by solving the eigenvalue
equation
1 ∂2 ∂2
Hψ(q1 , q2 ) = − ( 2 + 2 )ψ(q1 , q2 ) = Eψ(q1 , q2 )
2m ∂q1 ∂q2

The operators L, P1 , P2 commute with H and so provide a representation of the
Lie algebra of E(2) on the space of wavefunctions of energy E.
This construction of irreducible representations of E(2) is similar in spirit
to the construction of irreducible representations of SO(3) in section 8.4. There
the Casimir operator L2 commuted with the SO(3) action, and gave a differ-
ential operator on functions on the sphere whose eigenfunctions were spaces
of dimension 2l + 1 with eigenvalue l(l + 1), for l non-negative and integral.
For E(2) the quadratic function p21 + p22 Poisson commutes with l, p1 , p2 . After
quantization,
|P|2 = P12 + P22

is a second-order differential operator which commutes with L, P1 , P2 . This operator has infinite-dimensional eigenspaces that each carry an irreducible representation of E(2). They are characterized by a non-negative eigenvalue that has physical interpretation as 2mE where m, E are the mass and energy of a free quantum particle moving in two spatial dimensions.
Recall from our discussion of the free particle in chapter 10 that

ψp (q) = eip·q = |pi

is a solution of the time-independent Schrödinger equation with energy

E = |p|2 /2m > 0

and such |pi give a sort of continuous basis of H, even though these are not
square-integrable functions. The formalism for working with them uses distri-
butions and the orthonormality relation

hp|p0 i = δ(p − p0 )

An arbitrary ψ(q) ∈ H can be written as a continuous linear combination of the |pi, i.e. as an inverse Fourier transform of a function ψ̃(p) on momentum space as
ψ(q) = (1/2π) ∫∫ eip·q ψ̃(p)d2 p

In momentum space the time-independent Schrödinger equation becomes

(|p|2 /2m − E)ψ̃(p) = 0

so we get a solution for any choice of ψ̃(p) that is non-zero only on the circle |p|2 = 2mE (we won’t try to characterize which class of such functions to consider, which would determine which class of functions solving the Schrödinger equation we end up with after Fourier transform).

Going to polar coordinates p = (p cos θ, p sin θ), the space of solutions to the time-independent Schrödinger equation at energy E is given by ψ̃(p) of the form
ψ̃(p) = ψ̃E (θ)δ(p2 − 2mE)

To put this delta-function in a more useful form, note that for p ≈ √2mE one has the linear approximation
p2 − 2mE ≈ 2√2mE (p − √2mE)
so one has the equality of distributions
δ(p2 − 2mE) = (1/(2√2mE))δ(p − √2mE)

It is this space of functions ψ̃E (θ) on the circle of radius √2mE that will provide an infinite-dimensional representation of the group E(2), one that turns out to be irreducible, although we will not show that here. The position space wavefunction corresponding to ψ̃E (θ) will be
ψ(q) = (1/2π) ∫∫ eip·q ψ̃E (θ)δ(p2 − 2mE)p dp dθ
= (1/2π) ∫∫ eip·q ψ̃E (θ)(1/(2√2mE))δ(p − √2mE)p dp dθ
= (1/4π) ∫0^2π ei√2mE(q1 cos θ+q2 sin θ) ψ̃E (θ)dθ

Functions ψ̃E (θ) with simple behavior in θ will correspond to wavefunctions with more complicated behavior in position space. For instance, taking ψ̃E (θ) = e−inθ one finds that the wavefunction along the q2 direction is given by
ψ(0, q) = (1/4π) ∫0^2π ei√2mE(q sin θ) e−inθ dθ = (1/2)Jn (√2mE q)
where Jn is the n’th Bessel function.
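This Bessel function identity can be checked numerically. The sketch below (the sample values of n and of √2mE q are arbitrary) evaluates the angular integral with a simple rectangle rule, which converges very rapidly for smooth periodic integrands, and compares with SciPy's Jn :

```python
import numpy as np
from scipy.special import jv

n, x = 3, 2.7        # sample values: the integer n and x = sqrt(2mE) q
N = 4096
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

# psi(0, q) = (1/4 pi) Int_0^{2 pi} exp(i x sin(theta)) exp(-i n theta) dtheta
integrand = np.exp(1j * (x * np.sin(theta) - n * theta))
val = integrand.sum() * (2.0 * np.pi / N) / (4.0 * np.pi)

assert abs(val - 0.5 * jv(n, x)) < 1e-10
```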
Equations 17.1 and 17.2 give the representation of the Lie algebra of E(2) on wavefunctions ψ(q). The representation of this Lie algebra on the ψ̃E (θ) is just given by the Fourier transform, and we’ll denote this Γ̃0S . Using the formula for the Fourier transform we find that the transforms of −∂/∂q1 and −∂/∂q2 ,
Γ̃0S (p1 ) = −ip1 = −i√2mE cos θ, Γ̃0S (p2 ) = −ip2 = −i√2mE sin θ
are multiplication operators and, taking the Fourier transform of 17.2 gives the differentiation operator
Γ̃0S (l) = −(p1 ∂/∂p2 − p2 ∂/∂p1 ) = −∂/∂θ
(use integration by parts to show that qj acts on momentum space functions as i∂/∂pj and thus the first equality, then the chain rule for functions f (p1 (θ), p2 (θ)) for the second).
This construction of a representation of E(2) starting with the Schrödinger
representation gives the same result as starting with the action of E(2) on
configuration space, and taking the induced action on functions on R2 (the
wavefunctions). To see this, note that E(2) has elements (a, R(φ)) which can
be written as a product (a, R(φ)) = (a, 1)(0, R(φ)) or, in terms of matrices
    
cos φ − sin φ a1 1 0 a1 cos φ − sin φ 0
 sin φ cos φ a2  = 0 1 a2   sin φ cos φ 0
0 0 1 0 0 1 0 0 1

The group has a unitary representation

(a, R(φ)) → u(a, R(φ))

on the position space wavefunctions ψ(q), given by the induced action on func-
tions from the action of E(2) on position space R2

u(a, R(φ))ψ(q) = ψ((a, R(φ))−1 · q) = ψ((−R(−φ)a, R(−φ)) · q) = ψ(R(−φ)(q − a))

This is just the Schrödinger representation ΓS of the Jacobi group GJ (2), re-
stricted to the subgroup E(2) of transformations of phase space that are transla-
tions in q and rotations in both q and p vectors, preserving their inner product
(and thus the symplectic form). One can see this by considering the action
of translations as the exponential of the Lie algebra representation operators
Γ0S (pj ) = −iPj

u(a, 1)ψ(q) = e−i(a1 P1 +a2 P2 ) ψ(q) = ψ(q − a)

and the action of rotations as the exponential of the Γ0S (l) = −iL

u(0, R(φ))ψ(q) = e−iφL ψ(q) = ψ(R(−φ)q)
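The exponentiated action of the momentum operators can be illustrated numerically: multiplying the Fourier transform of a wavefunction by e−ia·p and transforming back translates it by a. A one dimensional NumPy sketch (grid parameters and the sample Gaussian are arbitrary choices; the shift is taken to be an integer number of grid points so the periodic FFT comparison is essentially exact):

```python
import numpy as np

Ngrid, Lbox = 512, 20.0
dq = Lbox / Ngrid
q = np.linspace(-Lbox / 2, Lbox / 2, Ngrid, endpoint=False)
p = 2.0 * np.pi * np.fft.fftfreq(Ngrid, d=dq)

psi = np.exp(-(q - 1.0) ** 2)      # sample wavefunction
a = 3 * dq                          # translate by 3 grid points

# multiply the Fourier transform by exp(-i a p), then transform back
shifted = np.fft.ifft(np.exp(-1j * a * p) * np.fft.fft(psi))

# this reproduces psi(q - a) (up to periodic wrap-around, negligible here)
assert np.allclose(shifted.real, np.exp(-((q - a) - 1.0) ** 2), atol=1e-10)
assert np.allclose(shifted.imag, 0.0, atol=1e-10)
```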

One also has a Fourier-transformed version ũ of this representation, with translations now acting by multiplication operators on the ψ̃E
ũ(a, 1)ψ̃E (θ) = e−i(a·p) ψ̃E (θ) = e−i√2mE(a1 cos θ+a2 sin θ) ψ̃E (θ) (17.3)

and rotations acting by rotation in momentum space

ũ(0, R(φ))ψ̃E (θ) = ψ̃E (θ − φ) (17.4)

Although we won’t prove it here, the representations constructed this way provide essentially all the unitary irreducible representations of E(2), parametrized by a real number E > 0. The only other ones are those on which the translations act trivially, and SO(2) acts as an irreducible representation. We have seen that such SO(2) representations are one-dimensional, and characterized by an integer, the weight. These representations in some sense correspond to the case E = 0, but note that for non-zero weight they are something different from just constant wavefunctions, using a non-trivial action of SO(2) on the wavefunction value.

17.2 The case of E(3)


In the physical case of three spatial dimensions, the state space of the theory of a
quantum free particle is again a Euclidean group representation, with the same

relationship to the Schrödinger representation as in two spatial dimensions. The
main difference is that the rotation group is now three dimensional and non-
commutative, so instead of the single Lie algebra basis element l we have three
of them, satisfying Poisson bracket relations that are the Lie algebra relations
of so(3)

{l1 , l2 } = l3 , {l2 , l3 } = l1 , {l3 , l1 } = l2


The pj give the other three basis elements of the Lie algebra of E(3). They
commute amongst themselves and the action of rotations on vectors provides
the rest of the non-trivial Poisson bracket relations

{l1 , p2 } = p3 , {l1 , p3 } = −p2

{l2 , p1 } = −p3 , {l2 , p3 } = p1


{l3 , p1 } = p2 , {l3 , p2 } = −p1
An isomorphism of this Lie algebra with a Lie algebra of 4 by 4 matrices is given by (listing the rows of each matrix)
l1 ↔ ((0, 0, 0, 0), (0, 0, −1, 0), (0, 1, 0, 0), (0, 0, 0, 0))
l2 ↔ ((0, 0, 1, 0), (0, 0, 0, 0), (−1, 0, 0, 0), (0, 0, 0, 0))
l3 ↔ ((0, −1, 0, 0), (1, 0, 0, 0), (0, 0, 0, 0), (0, 0, 0, 0))
p1 ↔ ((0, 0, 0, 1), (0, 0, 0, 0), (0, 0, 0, 0), (0, 0, 0, 0))
p2 ↔ ((0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 0, 0), (0, 0, 0, 0))
p3 ↔ ((0, 0, 0, 0), (0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 0, 0))
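The commutation relations of these six matrices can be verified mechanically; the following NumPy sketch checks the so(3) relations and a sample of the mixed relations:

```python
import numpy as np

def comm(A, B):
    return A @ B - B @ A

# 4 by 4 realization: rotation generator in the upper-left 3 by 3 block,
# translation generator in the last column
l1 = np.zeros((4, 4)); l1[1, 2], l1[2, 1] = -1, 1
l2 = np.zeros((4, 4)); l2[0, 2], l2[2, 0] = 1, -1
l3 = np.zeros((4, 4)); l3[0, 1], l3[1, 0] = -1, 1
p1 = np.zeros((4, 4)); p1[0, 3] = 1
p2 = np.zeros((4, 4)); p2[1, 3] = 1
p3 = np.zeros((4, 4)); p3[2, 3] = 1

# so(3) relations
assert np.array_equal(comm(l1, l2), l3)
assert np.array_equal(comm(l2, l3), l1)
assert np.array_equal(comm(l3, l1), l2)
# mixed relations and commuting translations
assert np.array_equal(comm(l1, p2), p3)
assert np.array_equal(comm(l1, p3), -p2)
assert np.array_equal(comm(p1, p2), np.zeros((4, 4)))
```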
The lj are quadratic functions in the qj , pj , given by the classical mechanical
expression for the angular momentum

l=q×p

or, in components

l1 = q2 p3 − q3 p2 , l2 = q3 p1 − q1 p3 , l3 = q1 p2 − q2 p1

The Euclidean group E(3) is a subgroup of the Jacobi group GJ (3) in the
same way as in two dimensions, and the Schrödinger representation ΓS provides
a representation of E(3) with Lie algebra version

Γ0S (l1 ) = −iL1 = −i(Q2 P3 − Q3 P2 ) = −(q2 ∂/∂q3 − q3 ∂/∂q2 )
Γ0S (l2 ) = −iL2 = −i(Q3 P1 − Q1 P3 ) = −(q3 ∂/∂q1 − q1 ∂/∂q3 )
Γ0S (l3 ) = −iL3 = −i(Q1 P2 − Q2 P1 ) = −(q1 ∂/∂q2 − q2 ∂/∂q1 )
Γ0S (pj ) = −iPj = −∂/∂qj
These are just the infinitesimal versions of the action of E(3) on functions
induced from its action on position space R3 . Given an element g = (a, R) ∈
E(3) ⊂ GJ (3) we have a unitary transformation on wavefunctions

u(a, R)ψ(q) = ΓS (g)ψ(q) = ψ(g −1 · q) = ψ(R−1 (q − a))


These group elements will be a product of a translation and a rotation, and
the unitary transformations u are exponentials of the Lie algebra actions above,
with
u(a, 1)ψ(q) = e−i(a1 P1 +a2 P2 +a3 P3 ) ψ(q) = ψ(q − a)
for a translation by a, and

u(0, R(φ, ej ))ψ(q) = e−iφLj ψ(q) = ψ(R(−φ, ej )q)

for R(φ, ej ) a rotation about the j-axis by angle φ.
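The homomorphism property of this induced action, u(a1 , R1 )u(a2 , R2 ) = u(a1 + R1 a2 , R1 R2 ), follows from the group law of E(3); a numerical spot-check is easy to do (the sample wavefunction and group elements below are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)

def rot(phi, j):
    # rotation by angle phi about the coordinate axis j (j = 0, 1, 2)
    c, s = np.cos(phi), np.sin(phi)
    R = np.eye(3)
    i, k = (j + 1) % 3, (j + 2) % 3
    R[i, i], R[i, k], R[k, i], R[k, k] = c, -s, s, c
    return R

def u(a, R, psi):
    # induced action on wavefunctions: (u(a, R) psi)(q) = psi(R^{-1}(q - a))
    return lambda q: psi(R.T @ (q - a))

psi = lambda q: np.exp(-np.dot(q, q)) * (1 + q[0])   # sample wavefunction
a1, a2 = rng.standard_normal(3), rng.standard_normal(3)
R1, R2 = rot(0.7, 0), rot(-1.2, 2)

# homomorphism property: u(a1, R1) u(a2, R2) = u(a1 + R1 a2, R1 R2)
lhs = u(a1, R1, u(a2, R2, psi))
rhs = u(a1 + R1 @ a2, R1 @ R2, psi)
for _ in range(5):
    q = rng.standard_normal(3)
    assert np.isclose(lhs(q), rhs(q))
```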


This representation of E(3) on wavefunctions is reducible, since in terms of
momentum eigenstates, rotations will only take eigenstates with one value of the
momentum to those with another value of the same length-squared. We can get
an irreducible representation by using the Casimir operator P12 +P22 +P32 , which
commutes with all elements in the Lie algebra of E(3). The Casimir operator
will act on an irreducible representation as a scalar, and the representation will
be characterized by that scalar. The Casimir operator is just 2m times the
Hamiltonian
H = (1/2m)(P12 + P22 + P32 )
and so the constant characterizing an irreducible representation will just be 2mE, proportional to the energy.
Our irreducible representation will be on the space of solutions of the time-
independent Schrödinger equation
(1/2m)(P12 + P22 + P32 )ψ(q) = −(1/2m)(∂ 2 /∂q12 + ∂ 2 /∂q22 + ∂ 2 /∂q32 )ψ(q) = Eψ(q)
Using the Fourier transform
ψ(q) = (1/(2π)3/2 ) ∫R3 eip·q ψ̃(p)d3 p

the time-independent Schrödinger equation becomes


(|p|2 /2m − E)ψ̃(p) = 0
and we have distributional solutions
ψ̃(p) = ψ̃E (p)δ(|p|2 − 2mE)

characterized by functions ψ̃E (p) defined on the sphere |p|2 = 2mE.
Such complex-valued functions on the sphere of radius √2mE provide a Fourier-transformed version ũ of the irreducible representation of E(3). Here the action of the group E(3) is by
ũ(a, 1)ψ̃E (p) = e−i(a·p) ψ̃E (p)
for translations, by
ũ(0, R)ψ̃E (p) = ψ̃E (R−1 p)
for rotations, and by
ũ(a, R)ψ̃E (p) = ũ(a, 1)ũ(0, R)ψ̃E (p) = e−i(a·p) ψ̃E (R−1 p)
for a general element.

17.3 Other representations of E(3)


For the case of E(3), besides the representations parametrized by E > 0 con-
structed above, as in the E(2) case there are finite-dimensional representations
where the translation subgroup of E(3) acts trivially. Such irreducible represen-
tations are just the spin-s representations (ρs , C2s+1 ) of SO(3) for s = 0, 1, 2, . . ..
E(3) has some structure not seen in the E(2) case, which can be used to
construct new classes of infinite-dimensional irreducible representations. This
can be seen from two different points of view:

• There is a second Casimir operator which one can show commutes with
the E(3) action, given by

L · P = L1 P1 + L2 P2 + L3 P3

• The group SO(3) acts on momentum vectors by rotation, with orbit of the group action the sphere of momentum vectors of fixed energy E > 0.
This is the sphere on which the Fourier transform of the wavefunctions
in the representation is supported. Unlike the corresponding circle in
the E(2) case, here there is a non-trivial subgroup of the rotation group
SO(3) which leaves a given momentum vector invariant. This is just the
SO(2) ⊂ SO(3) subgroup of rotations about the axis determined by the
momentum vector, and it is different for different points in momentum
space.

For single-component wavefunctions, a straightforward computation shows
that the second Casimir operator L · P acts as zero. By introducing wavefunc-
tions with several components, together with an action of SO(3) that mixes the
components, it turns out that one can get new irreducible representations, with
a non-zero value of the second Casimir corresponding to a non-trivial weight of
the action of the SO(2) of rotations about the momentum vector.
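The first of these points can be checked symbolically: on a single-component wavefunction the operator L · P vanishes identically, by equality of mixed partial derivatives. A SymPy sketch of the computation:

```python
import sympy as sp

q1, q2, q3 = sp.symbols('q1 q2 q3')
qs = [q1, q2, q3]
psi = sp.Function('psi')(q1, q2, q3)

def P(j, f):
    return -sp.I * sp.diff(f, qs[j])

def L(j, f):
    # L_j = Q_k P_l - Q_l P_k for (j, k, l) a cyclic permutation (0-based)
    k, l = (j + 1) % 3, (j + 2) % 3
    return qs[k] * P(l, f) - qs[l] * P(k, f)

# L . P acts as zero on any single-component wavefunction
LdotP = sum(L(j, P(j, psi)) for j in range(3))
assert sp.simplify(LdotP) == 0
```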
One can construct such multiple-component wavefunctions as representa-
tions of E(3) by taking the tensor product of our irreducible representation on
wavefunctions of energy E (call this HE ) and the finite dimensional irreducible
representation C2s+1
HE ⊗ C2s+1
The Lie algebra representation operators for the translation part of E(3) act as
momentum operators on HE and as 0 on C2s+1 . For the SO(3) part of E(3),
we get operators we can write as

Jj = Lj + Sj

where Lj acts on HE and Sj = ρ′s (lj ) acts on C2s+1 .


This tensor product representation will not be irreducible, but its irreducible
components can be found by taking the eigenspaces of the second Casimir operator, which will now be
J · P
We will not work out the details of this here (although details can be found in chapter 31 for the case s = 1/2, where SO(3) is replaced by Spin(3)). What happens is that the tensor product breaks up into irreducibles as
HE ⊗ C2s+1 = HE,−s ⊕ · · · ⊕ HE,s

where n is an integer taking values from −s to s that is called the “helicity”. HE,n is the subspace of the tensor product on which the first Casimir |P|2 takes the value 2mE, and the second Casimir J · P takes the value np, where p = √2mE. The physical interpretation of the helicity is that it is the component of
angular momentum along the axis given by the momentum vector. The helicity
can also be thought of as the weight of the action of the SO(2) subgroup of
SO(3) corresponding to rotations about the axis of the momentum vector.
Choosing E > 0 and n ∈ Z, the representations on HE,n (which we have constructed using some s such that s ≥ |n|) give all possible irreducible representations of E(3). The representation spaces have a physical interpretation as
the state space for a free quantum particle of energy E which carries an “inter-
nal” quantized angular momentum about its direction of motion, given by the
helicity.

17.4 For further reading


The angular momentum operators are a standard topic in every quantum me-
chanics textbook, see for example chapter 12 of [57]. The characterization here
of free-particle wavefunctions at fixed energy as giving irreducible representa-
tions of the Euclidean group is not so conventional, but it is just an example of
a non-relativistic version of the conventional description of relativistic quantum
particles in terms of representations of the Poincaré group (see chapter 39). In
the Poincaré group case the analog of the E(3) irreducible representations of
non-zero helicity considered here will be irreducible representations labeled by
a non-zero mass and an irreducible representation of SO(3) (the spin). In that
case for massless particles one will again see representations labeled by a helicity
(an irreducible representation of SO(2)), but there is no analog of such massless
particles in the E(3) case.
For more details about representations of E(2) and E(3), see [66] or [69]
(which is based on [65]).

Chapter 18

Representations of Semi-direct Products

In this chapter we will examine some aspects of representations of semi-direct products, in particular for the case of the Jacobi group and its Lie algebra, as well as the case of N o K, for N commutative.
The Schrödinger representation provides a unitary representation of the
Heisenberg group, one that carries extra structure arising from the fact that
the symplectic group acts on the Heisenberg group by automorphisms. Each
element of the symplectic group takes a given construction of the Schrödinger
representation to a unitarily equivalent one, providing an operator on the state
space called an “intertwining operator”. These intertwining operators will give
(up to a phase factor) a representation of the symplectic group. Up to the
problem of the phase factor, the Schrödinger representation in this way extends
to a representation of the full Jacobi group. To explicitly find the phase factor,
one can start with the Lie algebra representation, where the sp(2d, R) action is
given by quantizing quadratic functions on phase space. It turns out that, for a
finite dimensional phase space, this gives a representation up to sign, which can
be turned into a true representation by taking a double cover (called M p(2d, R))
of Sp(2d, R).
In later chapters, we will find that many actions of groups on quantum systems can be understood in terms of subgroups of this M p(2d, R), with the corresponding observables arising as the quadratic combinations of momentum and position operators determined by the moment map.
The Euclidean group E(d) is a subgroup of the Jacobi group and we saw in
chapter 17 how some of its representations can be understood by restricting the
Schrödinger representation to this subgroup. More generally, this is an example
of a semi-direct product N o K with N commutative. In such cases irreducible
representations can be characterized in terms of the action of K on irreducible
representations of N , and the irreducible representations of certain subgroups
of K.

The reader should be warned that much of the material included in this
chapter is not well-motivated by its applications to non-relativistic quantum
mechanics, where it is not obviously needed. The motivation is rather provided
by the more complicated case of relativistic quantum field theory, but it seems
worthwhile to first see how things work in a simpler context. In particular, the
discussion of representations of N o K for N commutative is motivated by the
case of the Poincaré group (see chapter 39), and that of intertwining operators
by the case of symmetry groups acting on quantum fields (see chapter 36).

18.1 Intertwining operators and the metaplectic representation
For a general semi-direct product N o K with non-commutative N , the repre-
sentation theory can be quite complicated. For the Jacobi group case though, it
turns out that things simplify dramatically because of the Stone-von Neumann
theorem which says that, up to unitary equivalence, we only have one irreducible
representation of N = H2d+1 .
In the general case, recall that for each k ∈ K the definition of the semi-direct
product comes with an automorphism φk : N → N satisfying φk1 k2 = φk1 φk2 .
Given a representation π of N , for each k we can define a new representation
πk of N by first acting with φk :

πk (n) = π(φk (n))

In the special case of the Heisenberg group and Schrödinger representation ΓS ,


we can do this for each k ∈ K = Sp(2d, R), defining a new representation by

ΓS,k (n) = ΓS (φk (n))

The Stone-von Neumann theorem assures us that these must all be unitarily
equivalent, so there must exist unitary operators Uk satisfying

ΓS,k (n) = Uk ΓS (n)Uk−1 = ΓS (φk (n)) (18.1)

Operators like this that relate two representations are called “intertwining
operators”.

Definition (Intertwining operator). If (π1 , V1 ), (π2 , V2 ) are two representations of a group G, an intertwining operator between these two representations is an
operator U such that
π2 (g)U = U π1 (g) ∀g ∈ G

In our case V1 = V2 is the Schrödinger representation state space H and Uk :


H → H is an intertwining operator between ΓS and ΓS,k for each k ∈ Sp(2d, R).
Since
ΓS,k1 k2 = Uk1 k2 ΓS (Uk1 k2 )−1

the Uk should satisfy the group homomorphism property

Uk1 k2 = Uk1 Uk2

and give us a representation of the group Sp(2d, R) on H. This is what we expect on general principles: a group action on the classical phase space after
quantization becomes a unitary representation on the quantum state space.
The problem with this argument is that the Uk are not uniquely defined.
Schur’s lemma tells us that since the representation on H is irreducible, the
operators commuting with the representation operators are just the complex
scalars. These give a phase ambiguity in the definition of the unitary operators
Uk , which then give a representation of Sp(2d, R) on H only up to a phase, i.e.

Uk1 k2 = Uk1 Uk2 eiϕ(k1 ,k2 )

for some real-valued function ϕ of pairs of group elements. In terms of corresponding Lie algebra representation operators UL0 , this ambiguity appears as an
unknown constant times the identity operator.
The question then arises whether the phases of the Uk can be chosen so
as to satisfy the homomorphism property (i.e. can one choose phases so that
ϕ(k1 , k2 ) = 2πN for N integral?). It turns out that one cannot quite do this,
needing to allow N to be half-integral, so one gets the homomorphism property
up to a sign. Just as in the SO(d) case where a similar sign ambiguity showed
the need to go to a double-cover Spin(d) to get a true representation, here one
also needs to go to a double cover of Sp(2d, R), called the metaplectic group
M p(2d, R). The nature of this sign ambiguity and double cover is subtle, for
details see [39] or [25]. In section 18.2.1 we will show by computation one aspect
of the double cover.
Since this is just a sign ambiguity, it does not appear infinitesimally: one can
choose the ambiguous constants in the Lie algebra representation operators so
that the Lie algebra homomorphism property is satisfied. However, this will no
longer be true for infinite dimensional phase spaces, a situation that is described
as an “anomaly” in the symmetry. This phenomenon will be examined in more
detail in chapter 33.
In the finite dimensional case, we can construct the Uk explicitly, by expo-
nentiating the Lie algebra representation operators UL0 , which are constructed
by quantization of the homogeneous order-two polynomials that give the Lie
algebra sp(2d, R).
This quantization takes

µL ∈ sp(2d, R) → UL0
where µL is the polynomial corresponding to the matrix L, and UL0 is the corresponding polynomial in the operators Qj , Pj , as determined by equation 15.1.
This choice will satisfy the Lie algebra homomorphism property

[UL0 1 , UL0 2 ] = U[L0 1 ,L2 ] (18.2)

The Lie algebra relation (see 14.14)
{µL , (q, p)} = LT (q, p) (18.3)
becomes after quantization
[UL0 , (Q, P)] = LT (Q, P) (18.4)

Exponentiating this UL0 will give us our Uk , and thus the operators we want.
Note that if we only needed to satisfy equation 18.4, the UL0 could be changed by a constant times the identity operator, but such a change would be inconsistent with equation 18.2 (an inconsistency of this kind is called an “anomaly”). Equation 18.4 could be
written
[UL0 , Γ0S (X)] = Γ0S (L · X) (18.5)
where
L · X = (d/dt) φetL (X)|t=0
It is the infinitesimal expression of the intertwining property 18.1.

18.2 Some examples


As a balance to the abstract discussion so far in this chapter, in this section
we’ll work out a couple of the simplest possible examples in great detail. These
examples will also make clear the conventions being chosen, and show the basic
structure of what the quadratic operators corresponding to symmetries look like,
a structure that will reappear in the much more complicated infinite-dimensional
quantum field theory examples we will come to later.

18.2.1 The SO(2) action on the d = 1 phase space


In the case d = 1 one has elements g ∈ SO(2) ⊂ Sp(2, R) acting on cq q + cp p ∈ M by
(cq , cp ) → g(cq , cp ) = ((cos θ, sin θ), (− sin θ, cos θ))(cq , cp )
so
g = eθL
where
L = ((0, 1), (−1, 0))
To find the intertwining operators, we first find the quadratic function µL in q, p that satisfies
{µL , (q, p)} = LT (q, p) = (−p, q)

By equation 14.5 this is
µL = (1/2)(q, p)((1, 0), (0, 1))(q, p) = (1/2)(q 2 + p2 )

Quantizing µL using the Schrödinger representation Γ0S , one has a unitary Lie algebra representation U 0 of so(2) with
UL0 = −(i/2)(Q2 + P 2 )
satisfying
[UL0 , (Q, P)] = (−P, Q) (18.6)
and intertwining operators
Ug = eθUL0 = e−i(θ/2)(Q2 +P 2 )

These give a representation of SO(2) only up to a sign. To see the problem, consider the state ψ(q) ∈ H = L2 (R) given by
ψ(q) = e−q2 /2

One has
(Q2 + P 2 )ψ(q) = (q 2 − d2 /dq 2 )ψ(q) = ψ(q)
so ψ(q) is an eigenvector of Q2 + P 2 with eigenvalue 1, and Ug acts on it as the phase e−iθ/2 . As one goes around the group SO(2) once (taking θ from 0 to 2π), this phase only goes half-way around the unit circle, ending at −1 rather than returning to 1, demonstrating the same problem that occurs in the case of the spinor representation.
Conjugating the Heisenberg Lie algebra representation operators by the uni-
tary operators Ug intertwines the representations corresponding to rotations of
the phase space plane by an angle θ.
    
e−i(θ/2)(Q2 +P 2 ) (Q, P) ei(θ/2)(Q2 +P 2 ) = ((cos θ, − sin θ), (sin θ, cos θ))(Q, P) (18.7)

Note that this is a quite different calculation than the one in the spin case, where we also constructed a double cover of SO(2). Despite the quite different context (rotations acting on an infinite dimensional state space), again one sees the double cover here, as either Ug or −Ug will give the same rotation.
This example will be studied in much greater detail when we get to the theory of the quantum harmonic oscillator in chapter 20. Note that the SO(2) group action here inherently requires using both coordinate and momentum variables; it is not a symmetry that can be seen just by looking at the problem in configuration space.
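Equation 18.7 can be checked numerically in a truncated harmonic oscillator basis, anticipating chapter 20. The sketch below uses the fact that in the number basis Q2 + P 2 is diagonal with entries 2n + 1, so the intertwining operator can be written down directly as a diagonal matrix (the truncation dimension D is an arbitrary choice; with U diagonal, the conjugation of the truncated Q and P matrices is then exact):

```python
import numpy as np

D = 12
n = np.arange(D)
adag = np.diag(np.sqrt(n[1:].astype(float)), -1)   # creation operator
a = adag.T.copy()                                   # annihilation operator
Q = (a + adag) / np.sqrt(2)
P = (a - adag) / (1j * np.sqrt(2))

theta = 0.9
# U = exp(-i (theta/2)(Q^2 + P^2)), written as a diagonal matrix in the
# number basis (entries exp(-i theta (n + 1/2))) to avoid truncation error
U = np.diag(np.exp(-1j * theta * (n + 0.5)))

# conjugation rotates (Q, P) as in equation 18.7
assert np.allclose(U @ Q @ U.conj().T, np.cos(theta) * Q - np.sin(theta) * P)
assert np.allclose(U @ P @ U.conj().T, np.sin(theta) * Q + np.cos(theta) * P)
# replacing theta by theta + 2 pi changes U by an overall sign: the double cover
assert np.allclose(np.diag(np.exp(-1j * (theta + 2 * np.pi) * (n + 0.5))), -U)
```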

18.2.2 The SO(2) action by rotations of the plane for d = 2
In the case d = 2 there is another example of an SO(2) group which is a
subgroup of the symplectic group, here Sp(4, R). This is the group of rotations
of the configuration space R2 , with a simultaneous rotation of the momentum
space, leaving invariant the Poisson bracket. The group SO(2) acts on cq1 q1 +
cq2 q2 + cp1 p1 + cp2 p2 ∈ M by
      
cq1 cq1 cos θ − sin θ 0 0 cq1
cq2  cq2   sin θ cos θ 0 0  cq2 
  → g  =   
cp1  cp1   0 0 cos θ − sin θ cp1 
c p2 cp2 0 0 sin θ cos θ cp2

so g = eθL where L ∈ sp(4, R) is given by


 
L = ((0, −1, 0, 0), (1, 0, 0, 0), (0, 0, 0, −1), (0, 0, 1, 0))

L acts on phase space coordinate functions by
(q1 , q2 , p1 , p2 ) → LT (q1 , q2 , p1 , p2 ) = (q2 , −q1 , p2 , −p1 )

By equation 14.14, with
A = ((0, −1), (1, 0)), B = C = 0

the quadratic function µL that satisfies
{µL , (q1 , q2 , p1 , p2 )} = LT (q1 , q2 , p1 , p2 ) = (q2 , −q1 , p2 , −p1 )
is
µL = −q · ((0, −1), (1, 0))p = q1 p2 − q2 p1
This is just the formula for the angular momentum corresponding to rotation
about an axis perpendicular to the q1 − q2 plane

l = q1 p2 − q2 p1
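The matrix side of this example can be verified directly; the following sketch checks that L lies in sp(4, R) (for one standard choice of the symplectic form Ω) and exponentiates to the simultaneous rotation of the position and momentum planes, using SciPy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

L = np.array([[0., -1, 0, 0],
              [1.,  0, 0, 0],
              [0.,  0, 0, -1],
              [0.,  0, 1, 0]])

# L lies in sp(4, R): L^T Omega + Omega L = 0 for the symplectic form Omega
Omega = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.eye(2), np.zeros((2, 2))]])
assert np.allclose(L.T @ Omega + Omega @ L, 0)

# exp(theta L) rotates the q plane and the p plane by the same angle
theta = 0.4
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])
g = np.block([[R, np.zeros((2, 2))], [np.zeros((2, 2)), R]])
assert np.allclose(expm(theta * L), g)
```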

Quantization gives a representation of the Lie algebra so(2) with

UL0 = −i(Q1 P2 − Q2 P1 )

satisfying
[UL0 , (Q1 , Q2 )] = (Q2 , −Q1 ), [UL0 , (P1 , P2 )] = (P2 , −P1 )
Exponentiating gives a representation of SO(2)

UeθL = e−iθ(Q1 P2 −Q2 P1 )

with conjugation by UeθL rotating linear combinations of the Q1 , Q2 (or the P1 , P2 ) each by an angle θ:
UeθL (cq1 Q1 + cq2 Q2 )(UeθL )−1 = c′q1 Q1 + c′q2 Q2
where
(c′q1 , c′q2 ) = ((cos θ, − sin θ), (sin θ, cos θ))(cq1 , cq2 )
Note that for this SO(2) the double cover is trivial. As far as this subgroup
of Sp(4, R) is concerned, there is no need to consider the double cover M p(4, R)
to get a well-defined representation.
Replacing the matrix L by
((A, 0), (0, −AT ))
(in the same block form as in equation 14.14, with B = C = 0), for A any real 2 by 2 matrix
A = ((a11 , a12 ), (a21 , a22 ))
we get an action of the group GL(2, R) ⊂ Sp(4, R) on M, and after quantization a Lie algebra representation
UA0 = i(Q1 , Q2 )A(P1 , P2 )
which will satisfy
[UA0 , (Q1 , Q2 )] = AT (Q1 , Q2 ), [UA0 , (P1 , P2 )] = −A(P1 , P2 )

Note that the action of A on the momentum operators is the dual of the action
on the position operators. Only in the case of an orthogonal action (the SO(2)
earlier) are these the same, with AT = −A.

18.3 Representations of N o K, N commutative


The representation theory of semi-direct products $N \rtimes K$ will in general be rather complicated. However, when $N$ is commutative things simplify considerably, and in this section we'll survey some of the general features of this case. The special cases of the Euclidean groups in 2 and 3 dimensions were covered in chapter 17 and the Poincaré group case will be discussed in chapter 39.
For a general commutative group $N$, one does not have the simplifying feature of the Heisenberg group, the uniqueness of its irreducible representation. For commutative groups, on the other hand, while there are many irreducible representations, they are all one-dimensional. As a result, the set of representations of $N$ acquires its own group structure, also commutative, and one can define
Definition (Character group). For $N$ a commutative group, let $\widehat{N}$ be the set of characters of $N$, i.e. functions

$$\alpha : N \rightarrow \mathbf{C}$$

that satisfy the homomorphism property

$$\alpha(n_1n_2) = \alpha(n_1)\alpha(n_2)$$

The elements of $\widehat{N}$ form a group, with multiplication

$$(\alpha_1\alpha_2)(n) = \alpha_1(n)\alpha_2(n)$$

We will only actually need the case $N = \mathbf{R}^d$, where we have already seen that the differentiable irreducible representations are one-dimensional and given by
$$\alpha_p(a) = e^{ip\cdot a}$$
where $a \in N$. So the character group in this case is $\widehat{N} = \mathbf{R}^d$, with elements labeled by the vector $p$.
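As a toy illustration of the definition, one can check the homomorphism and group properties of characters for a finite commutative group such as $\mathbf{Z}_n$ (a hypothetical example of ours, not the $\mathbf{R}^d$ case needed in the text):

```python
import cmath

# Characters of the finite commutative group N = Z_n: alpha_k(m) = e^{2 pi i k m / n}
n = 6

def alpha(k):
    return lambda m: cmath.exp(2j * cmath.pi * k * m / n)

# Homomorphism property: alpha(m1 + m2) = alpha(m1) alpha(m2)
a2 = alpha(2)
for m1 in range(n):
    for m2 in range(n):
        assert abs(a2((m1 + m2) % n) - a2(m1) * a2(m2)) < 1e-12

# The characters themselves form a group under pointwise multiplication:
# alpha_2 * alpha_3 = alpha_5
a3, a5 = alpha(3), alpha(5)
for m in range(n):
    assert abs(a2(m) * a3(m) - a5(m)) < 1e-12
```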
For a semi-direct product $N \rtimes K$, we will have an automorphism $\phi_k$ of $N$ for each $k \in K$. From this action on $N$, we get an induced action on functions on $N$, in particular on elements of $\widehat{N}$, by

$$\widehat{\phi}_k : \alpha \in \widehat{N} \rightarrow \widehat{\phi}_k(\alpha) \in \widehat{N}$$

where $\widehat{\phi}_k(\alpha)$ is the element of $\widehat{N}$ satisfying

$$\widehat{\phi}_k(\alpha)(n) = \alpha(\phi_k^{-1}(n))$$

For the case of $N = \mathbf{R}^d$, we have

$$\widehat{\phi}_k(\alpha_p)(a) = e^{ip\cdot\phi_k^{-1}(a)} = e^{i((\phi_k^{-1})^T(p))\cdot a}$$

so
$$\widehat{\phi}_k(\alpha_p) = \alpha_{(\phi_k^{-1})^T(p)}$$

When $K$ acts by orthogonal transformations on $N = \mathbf{R}^d$, $\phi_k^T = \phi_k^{-1}$, so

$$\widehat{\phi}_k(\alpha_p) = \alpha_{\phi_k(p)}$$
To analyze representations $(\pi, V)$ of $N \rtimes K$, one can begin by restricting attention to the $N$ action, decomposing $V$ into subspaces $V_\alpha$ where $N$ acts according to $\alpha$. $v \in V$ is in the subspace $V_\alpha$ when

$$\pi(n,1)v = \alpha(n)v$$

Acting by $K$ will take this subspace to another one according to

Theorem.
$$v \in V_\alpha \implies \pi(0,k)v \in V_{\widehat{\phi}_k(\alpha)}$$

Proof. Using the definition of the semi-direct product in chapter 16 one can show that the group multiplication satisfies

$$(0,k^{-1})(n,1)(0,k) = (\phi_{k^{-1}}(n), 1)$$

Using this, one has

$$\begin{aligned}\pi(n,1)\pi(0,k)v &= \pi(0,k)\pi(0,k^{-1})\pi(n,1)\pi(0,k)v\\ &= \pi(0,k)\pi(\phi_{k^{-1}}(n),1)v\\ &= \pi(0,k)\alpha(\phi_{k^{-1}}(n))v\\ &= \widehat{\phi}_k(\alpha)(n)\pi(0,k)v\end{aligned}$$

For each $\alpha \in \widehat{N}$ one can look at its orbit under the action of $K$ by $\widehat{\phi}_k$, which will give a subset $\mathcal{O}_\alpha \subset \widehat{N}$. From the above theorem, we see that if $V_\alpha \neq 0$, then we will also have $V_\beta \neq 0$ for $\beta \in \mathcal{O}_\alpha$, so one piece of information that characterizes a representation $V$ is the set of orbits one gets in this way.
$\alpha$ also defines a subgroup $K_\alpha \subset K$ consisting of group elements whose action on $\widehat{N}$ leaves $\alpha$ invariant:

Definition (Stabilizer group or little group). The subgroup $K_\alpha \subset K$ of elements $k \in K$ such that
$$\widehat{\phi}_k(\alpha) = \alpha$$
for a given $\alpha \in \widehat{N}$ is called the stabilizer subgroup (by mathematicians) or little group (by physicists).

The group $K_\alpha$ will act on the subspace $V_\alpha$, and this representation of $K_\alpha$ is a second piece of information one can use to characterize a representation.
In the case of the Euclidean group E(2) we found that the non-zero orbits
Oα were circles and the groups Kα were trivial. For E(3), the non-zero orbits
were spheres, with Kα an SO(2) subgroup of SO(3) (one that varies with α). In
these cases we found that our construction of representations of E(2) or E(3) on
spaces of solutions of the single-component Schrödinger equation corresponded
under Fourier transform to a representation on functions on the orbits Oα .
We also found in the E(3) case that using multiple-component wavefunctions

gave new representations corresponding to a choice of orbit Oα and a choice
of irreducible representation of Kα = SO(2). We did not show this, but this
construction gives an irreducible representation when a single orbit Oα occurs
(with a transitive K action), with an irreducible representation of Kα on Vα .
We will not pursue the general theory further here, but one can show that distinct irreducible representations of $N \rtimes K$ will occur for each choice of an orbit $\mathcal{O}_\alpha$ and an irreducible representation of $K_\alpha$. One way to construct these representations is as the solution space of an appropriate wave equation, with the wave equation corresponding to the eigenvalue equation for a Casimir operator. In general, other "subsidiary conditions" then need to be imposed to pick out a subspace of solutions that gives a representation of $N \rtimes K$; this corresponds to the existence of other Casimir operators.

18.4 For further reading


For more on representations of semi-direct products, see section 3.8 of [59],
chapter 5 of [66], [8], and [26]. The general theory was developed by Mackey
during the late 1940s and 1950s, and his lecture notes on representation theory
[41] are a good source for the details of this. The point of view taken here,
that emphasizes constructing representations as solution spaces of differential
equations, where the differential operators are Casimir operators, is explained
in more detail in [33].
The conventional derivation found in most physics textbooks of the opera-
tors UL0 coming from an infinitesimal group action uses Lagrangian methods and
Noether’s theorem. The purely Hamiltonian method used here treats configu-
ration and momentum variables on the same footing, and is useful especially
in the case of group actions that mix them. For another treatment of these
operators along the lines of this chapter, see section 14 of [25].
The issue of the phase factor in the intertwining operators and the metaplec-
tic double cover will be discussed in later chapters using a different realization
of the Heisenberg Lie algebra representation. For a discussion of this in terms
of the Schrödinger representation, see part I of [39].

Chapter 19

Central Potentials and the Hydrogen Atom

When the Hamiltonian function is invariant under rotations, we then expect


eigenspaces of the corresponding Hamiltonian operator to carry representations
of SO(3). These spaces of eigenfunctions of a given energy break up into ir-
reducible representations of SO(3), and we have seen that these are labeled
by an integer l = 0, 1, 2, . . . and have dimension 2l + 1. One can use this to
find properties of the solutions of the Schrödinger equation whenever one has
a rotation-invariant potential energy. We will work out what happens for the
case of the Coulomb potential describing the hydrogen atom. This specific case
is exactly solvable because it has a second not-so-obvious SO(3) symmetry, in
addition to the one coming from rotations of R3 .

19.1 Quantum particle in a central potential


In classical physics, to describe not free particles, but particles experiencing
some sort of force, one just needs to add a “potential energy” term to the
kinetic energy term in the expression for the energy (the Hamiltonian function).
In one dimension, for potential energies that just depend on position, one has

$$h = \frac{p^2}{2m} + V(q)$$

for some function $V(q)$. In the physical case of three dimensions, this will be

$$h = \frac{1}{2m}(p_1^2 + p_2^2 + p_3^2) + V(q_1, q_2, q_3)$$

Quantizing and using the Schrödinger representation, the Hamiltonian operator for a particle moving in a potential $V(q_1, q_2, q_3)$ will be

$$H = \frac{1}{2m}(P_1^2 + P_2^2 + P_3^2) + V(Q_1, Q_2, Q_3) = \frac{-\hbar^2}{2m}\left(\frac{\partial^2}{\partial q_1^2} + \frac{\partial^2}{\partial q_2^2} + \frac{\partial^2}{\partial q_3^2}\right) + V(q_1, q_2, q_3) = \frac{-\hbar^2}{2m}\Delta + V(q_1, q_2, q_3)$$
We will be interested in so-called “central potentials”, potential functions that
are functions only of q12 + q22 + q32 , and thus only depend upon r, the radial
distance to the origin. For such V , both terms in the Hamiltonian will be
SO(3) invariant, and eigenspaces of H will be representations of SO(3).
Using the expressions for the angular momentum operators in spherical coordinates derived in chapter 8 (including equation 8.8 for the Casimir operator $L^2$), one can show that the Laplacian has the following expression in spherical coordinates
$$\Delta = \frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r} - \frac{1}{r^2}L^2$$
The Casimir operator $L^2$ has eigenvalues $l(l+1)$ on irreducible representations of dimension $2l+1$ (integral spin $l$). So, restricted to such an irreducible representation, we have
$$\Delta = \frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r} - \frac{l(l+1)}{r^2}$$
To solve the Schrödinger equation, we want to find the eigenfunctions of $H$. The space of eigenfunctions of energy $E$ will be a sum of irreducible representations of $SO(3)$, with the $SO(3)$ acting on the angular coordinates of the wavefunctions, leaving the radial coordinate invariant. We have seen in chapter 8 that such representations on functions of angular coordinates can be explicitly expressed in terms of the spherical harmonic functions $Y_l^m(\theta,\phi)$. So, to find eigenfunctions of the Hamiltonian

$$H = -\frac{\hbar^2}{2m}\Delta + V(r)$$

we want to find functions $g_{lE}(r)$ depending on $l = 0, 1, 2, \dots$ and the energy eigenvalue $E$ satisfying

$$\left(-\frac{\hbar^2}{2m}\left(\frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr} - \frac{l(l+1)}{r^2}\right) + V(r)\right)g_{lE}(r) = Eg_{lE}(r)$$

Given such a $g_{lE}(r)$ we will have

$$Hg_{lE}(r)Y_l^m(\theta,\phi) = Eg_{lE}(r)Y_l^m(\theta,\phi)$$

and the
$$\psi(r,\theta,\phi) = g_{lE}(r)Y_l^m(\theta,\phi)$$

will span a $2l+1$ dimensional (since $m = -l, -l+1, \dots, l-1, l$) space of energy eigenfunctions for $H$ of eigenvalue $E$.
For a general potential function $V(r)$, exact solutions for the eigenvalues $E$ and corresponding functions $g_{lE}(r)$ cannot be found in closed form. One special case where we can find such solutions is the three-dimensional harmonic oscillator, where $V(r) = \frac{1}{2}m\omega^2r^2$. These are much more easily found, though, using the creation and annihilation operator techniques to be discussed in chapter 20.
The other well-known and physically very important case is that of a $\frac{1}{r}$ potential, called the Coulomb potential. This describes a light charged particle moving in the potential due to the electric field of a much heavier charged particle, a situation that corresponds closely to that of a hydrogen atom. In this case we have
$$V = -\frac{e^2}{r}$$
where $e$ is the charge of the electron, so we are looking for solutions to
$$\left(-\frac{\hbar^2}{2m}\left(\frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr} - \frac{l(l+1)}{r^2}\right) - \frac{e^2}{r}\right)g_{lE}(r) = Eg_{lE}(r)$$

Since having
$$\frac{d^2}{dr^2}(rg) = Erg$$
is equivalent to
$$\left(\frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr}\right)g = Eg$$
for any function $g$, $g_{lE}(r)$ will satisfy
$$\left(-\frac{\hbar^2}{2m}\left(\frac{d^2}{dr^2} - \frac{l(l+1)}{r^2}\right) - \frac{e^2}{r}\right)rg_{lE}(r) = E\,rg_{lE}(r)$$

The solutions to this equation can be found through a rather elaborate pro-
cess described in most quantum mechanics textbooks, which involves looking
for a power series solution. For E ≥ 0 there are non-normalizable solutions that
describe scattering phenomena that we won’t study here. For E < 0 solutions
correspond to an integer n = 1, 2, 3, . . ., with n ≥ l + 1. So, for each n we get n
solutions, with l = 0, 1, 2, . . . , n − 1, all with the same energy

$$E_n = -\frac{me^4}{2\hbar^2n^2}$$
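As a numerical aside (ours, not from the text): since $\alpha = e^2/\hbar c$ is the fine-structure constant, this formula can be rewritten as $E_n = -\frac{1}{2}mc^2\alpha^2/n^2$, giving the familiar $-13.6$ eV for the ground state:

```python
# E_1 = -m e^4 / (2 hbar^2) = -(1/2) m c^2 alpha^2, about -13.6 eV
mc2_eV = 510998.95           # electron rest energy m c^2 in eV
alpha = 1 / 137.035999       # fine-structure constant e^2 / (hbar c)

E1 = -0.5 * mc2_eV * alpha**2
assert abs(E1 - (-13.6057)) < 1e-3  # hydrogen ground-state energy in eV
```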

[Figure omitted: a plot of the different energy eigenstates, showing the levels $E_n$ with the $n$ degenerate values $l = 0, 1, \dots, n-1$ occurring at each energy.]
The degeneracy in the energy values leads one to suspect that there is some
extra group action in the problem commuting with the Hamiltonian. If so, the
eigenspaces of energy eigenfunctions will come in irreducible representations
of some larger group than SO(3). If the representation of the larger group
is reducible when one restricts to the SO(3) subgroup, giving n copies of our
SO(3) representation of spin l, that would explain the pattern observed here.
In the next section we will see that this is the case, and there use representation
theory to derive the above formula for En .
We won't go through the process of showing how to explicitly find the functions $g_{lE_n}(r)$ but just quote the result. Setting
$$a_0 = \frac{\hbar^2}{me^2}$$

(this has dimensions of length and is known as the "Bohr radius"), and defining $g_{nl}(r) = g_{lE_n}(r)$, the solutions are of the form
$$g_{nl}(r) \propto e^{-\frac{r}{na_0}}\left(\frac{2r}{na_0}\right)^l L_{n+l}^{2l+1}\left(\frac{2r}{na_0}\right)$$
where the $L_{n+l}^{2l+1}$ are certain polynomials known as associated Laguerre polynomials.
So, finally, we have found energy eigenfunctions

ψnlm (r, θ, φ) = gnl (r)Ylm (θ, φ)

for
n = 1, 2, . . .
l = 0, 1, . . . , n − 1
m = −l, −l + 1, . . . , l − 1, l
The first few of these, properly normalized, are
$$\psi_{100} = \frac{1}{\sqrt{\pi a_0^3}}e^{-\frac{r}{a_0}}$$
(called the 1S state, "S" meaning $l = 0$),
$$\psi_{200} = \frac{1}{\sqrt{8\pi a_0^3}}\left(1 - \frac{r}{2a_0}\right)e^{-\frac{r}{2a_0}}$$
(called the 2S state), and the three dimensional $l = 1$ (called 2P, "P" meaning $l = 1$) states with basis elements
$$\psi_{211} = -\frac{1}{8\sqrt{\pi a_0^3}}\frac{r}{a_0}e^{-\frac{r}{2a_0}}\sin\theta\, e^{i\phi}$$
$$\psi_{210} = -\frac{1}{4\sqrt{2\pi a_0^3}}\frac{r}{a_0}e^{-\frac{r}{2a_0}}\cos\theta$$
$$\psi_{21-1} = \frac{1}{8\sqrt{\pi a_0^3}}\frac{r}{a_0}e^{-\frac{r}{2a_0}}\sin\theta\, e^{-i\phi}$$
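As a quick sanity check (a sketch of ours, using scipy), one can verify numerically that $\psi_{100}$ is normalized when $|\psi|^2$ is integrated over $\mathbf{R}^3$ in spherical coordinates:

```python
import math
from scipy.integrate import quad

# Check that psi_100 = e^{-r/a0} / sqrt(pi a0^3) has unit norm (a0 = 1 here)
a0 = 1.0

def radial_density(r):
    psi = math.exp(-r / a0) / math.sqrt(math.pi * a0**3)
    return abs(psi)**2 * r**2   # the angular integral supplies a factor 4*pi

integral, _ = quad(radial_density, 0, 50)
norm = 4 * math.pi * integral
assert abs(norm - 1.0) < 1e-8
```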

19.2 so(4) symmetry and the Coulomb potential


The Coulomb potential problem is very special in that it has an additional
symmetry, of a non-obvious kind. This symmetry appears even in the classi-
cal problem, where it is responsible for the relatively simple solution one can
find to the essentially identical Kepler problem. This is the problem of finding
the classical trajectories for bodies orbiting around a central object exerting a
gravitational force, which also has a $\frac{1}{r}$ potential. Kepler’s second law for such

motion comes from conservation of angular momentum, which corresponds to
the Poisson bracket relation
{lj , h} = 0
Here we’ll take the Coulomb version of the Hamiltonian that we need for the
hydrogen atom problem
$$h = \frac{1}{2m}|\mathbf{p}|^2 - \frac{e^2}{r}$$
One can read the relation {lj , h} = 0 in two ways:

• The Hamiltonian h is invariant under the action of the group (SO(3))


whose infinitesimal generators are lj .

• The components of the angular momentum (lj ) are invariant under the
action of the group (R of time translations) whose infinitesimal generator
is h, so the angular momentum is a conserved quantity.

Kepler’s first and third laws have a different origin, coming from the existence
of a new conserved quantity for this special choice of Hamiltonian. This quantity
is, like the angular momentum, a vector, often called the Lenz (or sometimes
Runge-Lenz, or even Laplace-Runge-Lenz) vector.

Definition (Lenz vector). The Lenz vector is the vector-valued function on the
phase space R6 given by

$$\mathbf{w} = \frac{1}{m}(\mathbf{l}\times\mathbf{p}) + e^2\frac{\mathbf{q}}{|\mathbf{q}|}$$

Simple manipulations of the cross-product show that one has

l·w =0

We won’t here explicitly calculate the various Poisson brackets involving the
components wj of w, since this is a long and unilluminating calculation, but
will just quote the results, which are


{wj , h} = 0
This says that, like the angular momentum, the vector with components
wj is a conserved quantity under time evolution of the system, and its
components generate symmetries of the classical system.


$$\{l_j, w_k\} = \epsilon_{jkl}w_l$$
These relations say that the generators of the SO(3) symmetry act on wj
the way one would expect, since wj is a vector.

$$\{w_j, w_k\} = \epsilon_{jkl}l_l\left(\frac{-2h}{m}\right)$$
This is the most surprising relation, and it has no simple geometrical
explanation (although one can change variables in the problem to try and
give it one). It expresses a highly non-trivial relationship between the two
sets of symmetries generated by the vectors l, w and the Hamiltonian h.
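The relations $\{w_j, h\} = 0$ and $\mathbf{l}\cdot\mathbf{w} = 0$ can be machine-checked. A sketch of ours using sympy, with $m = e = 1$ (the variable names are ours):

```python
import sympy as sp

# Phase space coordinates and the Lenz vector, with m = e = 1
qs = sp.symbols('q1 q2 q3')
ps = sp.symbols('p1 p2 p3')
r = sp.sqrt(qs[0]**2 + qs[1]**2 + qs[2]**2)

l = [qs[1]*ps[2] - qs[2]*ps[1],
     qs[2]*ps[0] - qs[0]*ps[2],
     qs[0]*ps[1] - qs[1]*ps[0]]
w = [l[1]*ps[2] - l[2]*ps[1] + qs[0]/r,   # w = (l x p) + q/|q|
     l[2]*ps[0] - l[0]*ps[2] + qs[1]/r,
     l[0]*ps[1] - l[1]*ps[0] + qs[2]/r]
h = (ps[0]**2 + ps[1]**2 + ps[2]**2)/2 - 1/r

def pb(f, g):
    """Poisson bracket {f, g} on R^6."""
    return sum(sp.diff(f, qs[i])*sp.diff(g, ps[i])
               - sp.diff(f, ps[i])*sp.diff(g, qs[i]) for i in range(3))

# l . w = 0 holds identically
assert sp.simplify(sum(l[i]*w[i] for i in range(3))) == 0

# {w_j, h} = 0, checked numerically at a sample phase-space point
point = dict(zip(qs + ps, [0.3, -0.2, 0.5, 0.1, 0.4, -0.3]))
for wj in w:
    assert abs(float(pb(wj, h).subs(point))) < 1e-9
```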

The wj are cubic in the q and p variables, so one would expect that the
Groenewold-van Hove no-go theorem would tell one that there is no consistent
way to quantize this system by finding operators Wj corresponding to the wj
that would satisfy the commutation relations corresponding to these Poisson
brackets. It turns out though that this can be done, although not for functions
defined over the entire phase-space. One gets around the no-go theorem by
doing something that only works when the Hamiltonian h is negative (we’ll be
taking a square root of −h).
The choice of operators Wj that works is

1 Q
W= (L × P − P × L) + e2
2m |Q|2

where the last term is the operator of multiplication by e2 qj /|q|2 . By elaborate


and unenlightening computations the Wj can be shown to satisfy the commu-
tation relations corresponding to the Poisson bracket relations of the wj :

$$[W_j, H] = 0$$
$$[L_j, W_k] = i\hbar\epsilon_{jkl}W_l$$
$$[W_j, W_k] = i\hbar\epsilon_{jkl}L_l\left(-\frac{2}{m}H\right)$$
as well as
L·W =W·L=0
The first of these shows that energy eigenstates will be preserved not just by the angular momentum operators $L_j$, but by a new set of non-trivial operators, the $W_j$, so will be representations of a larger Lie algebra than $so(3)$.
In addition, one has the following relation between $W^2$, $H$ and the Casimir operator $L^2$
$$W^2 = e^4\mathbf{1} + \frac{2}{m}H(L^2 + \hbar^2\mathbf{1})$$
and it is this which will allow us to find the eigenvalues of $H$, since we know those for $L^2$, and can find those of $W^2$ by changing variables to identify a second $so(3)$ Lie algebra.
To do this, first change normalization by defining
$$\mathbf{K} = \sqrt{\frac{-m}{2E}}\,\mathbf{W}$$

where E is the eigenvalue of the Hamiltonian that we are trying to solve for.
Note that it is at this point that we violate the conditions of the no-go theorem,
since we must have E < 0 to get a K with the right properties, and this restricts
the validity of our calculations to a subset of the energy spectrum. For E > 0
one can proceed in a similar way, but the Lie algebra one gets is different (so(3, 1)
instead of so(4)).
One then has the following relation between operators
$$2H(K^2 + L^2 + \hbar^2\mathbf{1}) = -me^4\mathbf{1}$$
and the following commutation relations
$$[L_j, L_k] = i\hbar\epsilon_{jkl}L_l$$
$$[L_j, K_k] = i\hbar\epsilon_{jkl}K_l$$
$$[K_j, K_k] = i\hbar\epsilon_{jkl}L_l$$
Defining
$$\mathbf{M} = \frac{1}{2}(\mathbf{L} + \mathbf{K}),\quad \mathbf{N} = \frac{1}{2}(\mathbf{L} - \mathbf{K})$$
one has
$$[M_j, M_k] = i\hbar\epsilon_{jkl}M_l$$
$$[N_j, N_k] = i\hbar\epsilon_{jkl}N_l$$
$$[M_j, N_k] = 0$$
This shows that we have two commuting copies of $so(3)$ acting on states, spanned respectively by the $M_j$ and $N_j$, with two corresponding Casimir operators $M^2$ and $N^2$.
Using the fact that
$$\mathbf{L}\cdot\mathbf{K} = \mathbf{K}\cdot\mathbf{L} = 0$$
one finds that
$$M^2 = N^2$$
Recall from our discussion of rotations in three dimensions that representations of $so(3) = su(2)$ correspond to representations of $Spin(3) = SU(2)$, the double cover of $SO(3)$, and that the irreducible ones have dimension $2l+1$, with $l$ integral or half-integral. Only for $l$ integral does one get representations of $SO(3)$, and it
is these that occur in the SO(3) representation on functions on R3 . For four di-
mensions, we found that Spin(4), the double cover of SO(4), is SU (2) × SU (2),
and one thus has spin(4) = so(4) = su(2)×su(2) = so(3)×so(3). This is exactly
the Lie algebra we have found here, so one can think of the Coulomb problem as
having an so(4) symmetry. The representations that will occur can include the
half-integral ones, since neither so(3) is the so(3) of physical rotations in 3-space
(those are generated by L = M + N, which will have integral eigenvalues of l).
The relation between the Hamiltonian and the Casimir operators $M^2$ and $N^2$ is
$$2H(K^2 + L^2 + \hbar^2\mathbf{1}) = 2H(2M^2 + 2N^2 + \hbar^2\mathbf{1}) = 2H(4M^2 + \hbar^2\mathbf{1}) = -me^4\mathbf{1}$$

On irreducible representations of $so(3)$ of spin $\mu$, we will have
$$M^2 = \mu(\mu+1)\hbar^2\mathbf{1}$$
for some half-integral $\mu$, so we get the following equation for the energy eigenvalues
$$E = -\frac{me^4}{2\hbar^2(4\mu(\mu+1)+1)} = -\frac{me^4}{2\hbar^2(2\mu+1)^2}$$
Letting $n = 2\mu+1$, for $\mu = 0, \frac{1}{2}, 1, \dots$ we get $n = 1, 2, 3, \dots$ and precisely the same equation for the eigenvalues described earlier
$$E_n = -\frac{me^4}{2\hbar^2n^2}$$
It is not hard to show that the irreducible representations of a product like
so(3) × so(3) are just tensor products of irreducibles, and in this case the two
factors of the product are identical due to the equality of the Casimirs M 2 = N 2 .
The dimension of the so(3)×so(3) irreducibles is thus (2µ+1)2 = n2 , explaining
the multiplicity of states one finds at energy eigenvalue En .
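The two ways of counting states at energy $E_n$ must agree: the $n^2$ states of the $so(3)\times so(3)$ multiplet versus the sum of the dimensions $2l+1$ over $l = 0, \dots, n-1$. A one-line check (ours):

```python
# Degeneracy of the hydrogen level E_n: sum of (2l+1) for l = 0..n-1 equals n^2
for n in range(1, 10):
    assert sum(2 * l + 1 for l in range(n)) == n**2
```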

19.3 The hydrogen atom


The Coulomb potential problem provides a good description of the quantum
physics of the hydrogen atom, but it is missing an important feature of that
system, the fact that electrons are spin $\frac{1}{2}$ systems. To describe this, one really needs to take as space of states two-component wavefunctions
$$|\psi\rangle = \begin{pmatrix}\psi_1(q)\\ \psi_2(q)\end{pmatrix}$$
(or, equivalently, replace our state space $\mathcal{H}$ of wavefunctions by the tensor product $\mathcal{H}\otimes\mathbf{C}^2$) in a way that we will examine in detail in chapter 31.
The Hamiltonian operator for the hydrogen atom acts trivially on the C2
factor, so the only effect of the additional wavefunction component is to double
the number of energy eigenstates at each energy. Electrons are fermions, so
antisymmetry of multi-particle wavefunctions implies the Pauli principle that
states can only be occupied by a single particle. As a result, one finds that when
adding electrons to an atom described by the Coulomb potential problem, the
first two fill up the lowest Coulomb energy eigenstate (the ψ100 or 1S state at n =
1), the next eight fill up the n = 2 states ( two each for ψ200 , ψ211 , ψ210 , ψ21−1 ),
etc. This goes a long way towards explaining the structure of the periodic table
of elements.
When one puts a hydrogen atom in a constant magnetic field $\mathbf{B}$, the Hamiltonian acquires a term that acts only on the $\mathbf{C}^2$ factor, of the form
$$\frac{2e}{mc}\mathbf{B}\cdot\boldsymbol{\sigma}$$

This is exactly the sort of Hamiltonian we began our study of quantum mechan-
ics with for a simple two-state system. It causes a shift in energy eigenvalues
proportional to ±|B| for the two different components of the wavefunction, and
the observation of this energy splitting makes clear the necessity of treating the
electron using the two-component formalism.
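The eigenvalues of a term proportional to $\mathbf{B}\cdot\boldsymbol{\sigma}$ are $\pm|\mathbf{B}|$ times the coefficient, which is the splitting described above. A small numerical check (ours, with the coefficient $2e/mc$ set to 1):

```python
import numpy as np

# The two-level term B . sigma on C^2 has eigenvalues +/- |B|
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

B = np.array([0.3, -0.2, 0.6])
H_B = B[0] * sigma_x + B[1] * sigma_y + B[2] * sigma_z

evals = np.linalg.eigvalsh(H_B)   # ascending order
Bnorm = np.linalg.norm(B)
assert np.allclose(evals, [-Bnorm, Bnorm])
```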

19.4 For further reading


This is a standard topic in all quantum mechanics books. For example, see
chapters 12 and 13 of [57]. The so(4) calculation is not in [57], but is in some
of the other such textbooks, a good example is chapter 7 of [4]. For extensive
discussion of the symmetries of the 1r potential problem, see [24] or [26].

Chapter 20

The Harmonic Oscillator

In this chapter we’ll begin the study of the most important exactly solvable
physical system, the harmonic oscillator. Later chapters will discuss extensions
of the methods developed here to the case of fermionic oscillators, as well as free
quantum field theories, which are harmonic oscillator systems with an infinite
number of degrees of freedom.

For a finite number of degrees of freedom, the Stone-von Neumann theorem tells us that there is essentially just one way to non-trivially represent the
(exponentiated) Heisenberg commutation relations as operators on a quantum
mechanical state space. We have seen two unitarily equivalent constructions
of these operators: the Schrödinger representation in terms of functions on ei-
ther coordinate space or momentum space. It turns out that there is another
class of quite different constructions of these operators, one that depends upon
introducing complex coordinates on phase space and then using properties of
holomorphic functions. We’ll refer to this as the Bargmann-Fock representation,
although quite a few mathematicians have had their name attached to it for one
good reason or another (some of the other names one sees are Friedrichs, Segal,
Shale, Weil, as well as the descriptive terms “holomorphic” and “oscillator”).

Physically the importance of this representation is that it diagonalizes the


Hamiltonian operator for a fundamental sort of quantum system: the harmonic
oscillator. In the Bargmann-Fock representation the energy eigenstates of such
a system are the simplest states and energy eigenvalues are just integers. These
integers label the irreducible representations of the U (1) symmetry generated
by the Hamiltonian, and they can be interpreted as counting the number of
“quanta” in the system. It is the ubiquity of this example that justifies the
“quantum” in “quantum mechanics”. The operators on the state space can be
simply understood in terms of basic operators called annihilation and creation
operators which increase or decrease by one the number of quanta.

20.1 The harmonic oscillator with one degree of freedom
An even simpler case of a particle in a potential than the Coulomb potential of
the last chapter is the case of V (q) quadratic in q. This is also the lowest-order
approximation when one studies motion near a local minimum of an arbitrary
V (q), expanding V (q) in a power series around this point. We’ll write this as

$$h = \frac{p^2}{2m} + \frac{1}{2}m\omega^2q^2$$
with coefficients chosen so as to make ω the angular frequency of periodic motion
of the classical trajectories. These satisfy Hamilton’s equations

$$\dot p = -\frac{\partial V}{\partial q} = -m\omega^2q,\quad \dot q = \frac{p}{m}$$
so
$$\ddot q = -\omega^2q$$
which will have solutions with periodic motion of angular frequency ω. These
solutions can be written as

$$q(t) = c_+e^{i\omega t} + c_-e^{-i\omega t}$$

for $c_+, c_- \in \mathbf{C}$ where, since $q(t)$ must be real, we have $c_- = \overline{c_+}$. The space of solutions of the equation of motion is thus two real-dimensional, and abstractly one can think of this as the phase space of the system.
More conventionally, one can parametrize the phase space by initial values that determine the classical trajectories, for instance by the position $q(0)$ and momentum $p(0)$ at an initial time $t = 0$. Since

$$p(t) = m\dot q = im\omega c_+e^{i\omega t} - im\omega\overline{c_+}e^{-i\omega t}$$

we have

$$q(0) = c_+ + \overline{c_+} = 2\,\mathrm{Re}(c_+),\quad p(0) = im\omega(c_+ - \overline{c_+}) = -2m\omega\,\mathrm{Im}(c_+)$$

so
$$c_+ = \frac{1}{2}q(0) - i\frac{1}{2m\omega}p(0)$$
The classical phase space trajectories are

$$q(t) = \left(\frac{1}{2}q(0) - i\frac{1}{2m\omega}p(0)\right)e^{i\omega t} + \left(\frac{1}{2}q(0) + i\frac{1}{2m\omega}p(0)\right)e^{-i\omega t}$$
$$p(t) = \left(\frac{im\omega}{2}q(0) + \frac{1}{2}p(0)\right)e^{i\omega t} + \left(-\frac{im\omega}{2}q(0) + \frac{1}{2}p(0)\right)e^{-i\omega t}$$

Instead of using two real coordinates to describe points in the phase space (and having to introduce a reality condition when using complex exponentials), one can instead use a single complex coordinate
$$z(t) = \frac{1}{\sqrt{2}}\left(q(t) - \frac{i}{m\omega}p(t)\right)$$
Then the equation of motion is a first-order rather than second-order differential equation
$$\dot z = i\omega z$$
with solutions
$$z(t) = z(0)e^{i\omega t} \tag{20.1}$$
The classical trajectories are then realized as complex functions of $t$, and parametrized by the complex number
$$z(0) = \frac{1}{\sqrt{2}}\left(q(0) - \frac{i}{m\omega}p(0)\right)$$
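One can confirm numerically that the single complex trajectory $z(t) = z(0)e^{i\omega t}$ encodes a solution of Hamilton's equations $\dot q = p/m$, $\dot p = -m\omega^2q$. A sketch of ours ($m = \omega = 1$):

```python
import math
import cmath

# z = (q - i p/(m omega)) / sqrt(2), so q = sqrt(2) Re z, p = -m omega sqrt(2) Im z
m = omega = 1.0
q0, p0 = 1.3, -0.4
z0 = (q0 - 1j * p0 / (m * omega)) / math.sqrt(2)

def traj(t):
    z = z0 * cmath.exp(1j * omega * t)
    return math.sqrt(2) * z.real, -m * omega * math.sqrt(2) * z.imag

# Check Hamilton's equations via central finite differences
dt = 1e-6
for t in (0.0, 0.9, 2.2):
    qm, pm = traj(t)
    q1, p1 = traj(t - dt)
    q2, p2 = traj(t + dt)
    assert abs((q2 - q1) / (2 * dt) - pm / m) < 1e-6      # qdot = p/m
    assert abs((p2 - p1) / (2 * dt) + m * omega**2 * qm) < 1e-6  # pdot = -m w^2 q
```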
Since the Hamiltonian is just quadratic in the $p$ and $q$, we have seen that we can construct the corresponding quantum operator uniquely using the Schrödinger representation. For $\mathcal{H} = L^2(\mathbf{R})$ we have a Hamiltonian operator
$$H = \frac{P^2}{2m} + \frac{1}{2}m\omega^2Q^2 = -\frac{\hbar^2}{2m}\frac{d^2}{dq^2} + \frac{1}{2}m\omega^2q^2$$
To find solutions of the Schrödinger equation, as with the free particle, one proceeds by first solving for eigenvectors of $H$ with eigenvalue $E$, which means finding solutions to
$$H\psi_E = \left(-\frac{\hbar^2}{2m}\frac{d^2}{dq^2} + \frac{1}{2}m\omega^2q^2\right)\psi_E = E\psi_E$$
Solutions to the Schrödinger equation will then be linear combinations of the functions
$$\psi_E(q)e^{-\frac{i}{\hbar}Et}$$
Standard but somewhat intricate methods for solving differential equations like this show that one gets solutions for $E = E_n = (n + \frac{1}{2})\hbar\omega$, $n$ a non-negative integer, and the normalized solution for a given $n$ (which we'll denote $\psi_n$) will be
$$\psi_n(q) = \left(\frac{m\omega}{\pi\hbar\,2^{2n}(n!)^2}\right)^{\frac{1}{4}}H_n\left(\sqrt{\frac{m\omega}{\hbar}}\,q\right)e^{-\frac{m\omega}{2\hbar}q^2} \tag{20.2}$$
where $H_n$ is a family of polynomials called the Hermite polynomials. The $\psi_n$ provide an orthonormal basis for $\mathcal{H}$ (one does not need to consider non-normalizable wavefunctions as in the free particle case), so any initial wavefunction $\psi(q,0)$ can be written in the form

$$\psi(q,0) = \sum_{n=0}^{\infty}c_n\psi_n(q)$$

with
$$c_n = \int_{-\infty}^{+\infty}\psi_n(q)\psi(q,0)dq$$

(note that the $\psi_n$ are real-valued). At later times, the wavefunction will be
$$\psi(q,t) = \sum_{n=0}^{\infty}c_n\psi_n(q)e^{-\frac{i}{\hbar}E_nt} = \sum_{n=0}^{\infty}c_n\psi_n(q)e^{-i(n+\frac{1}{2})\omega t}$$
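Equation 20.2 can be verified symbolically for small $n$: with $\hbar = m = \omega = 1$ the claim is that $\psi_n = \pi^{-1/4}(2^nn!)^{-1/2}H_n(q)e^{-q^2/2}$ satisfies $H\psi_n = (n+\frac{1}{2})\psi_n$. A sympy sketch (ours):

```python
import sympy as sp

# Check H psi_n = (n + 1/2) psi_n for the first few Hermite eigenfunctions
q = sp.symbols('q', real=True)
for n in range(4):
    psi = (sp.pi**sp.Rational(-1, 4) / sp.sqrt(2**n * sp.factorial(n))
           * sp.hermite(n, q) * sp.exp(-q**2 / 2))
    H_psi = -sp.diff(psi, q, 2) / 2 + q**2 * psi / 2
    assert sp.simplify(H_psi - (n + sp.Rational(1, 2)) * psi) == 0
```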

20.2 Creation and annihilation operators


It turns out that there is a quite easy method which allows one to explicitly find
eigenfunctions and eigenvalues of the harmonic oscillator Hamiltonian (although
it’s harder to show it gives all of them). This also leads to a new representation
of the Heisenberg group (of course unitarily equivalent to the Schrödinger one
by the Stone-von Neumann theorem). Instead of working with the self-adjoint
operators Q and P that satisfy the commutation relation

$$[Q, P] = i\hbar\mathbf{1}$$

we define
$$a = \sqrt{\frac{m\omega}{2\hbar}}Q + i\sqrt{\frac{1}{2m\omega\hbar}}P,\quad a^\dagger = \sqrt{\frac{m\omega}{2\hbar}}Q - i\sqrt{\frac{1}{2m\omega\hbar}}P$$
which satisfy the commutation relation

[a, a† ] = 1

To simplify calculations, from now on we will set ~ = m = ω = 1. This


corresponds to a specific choice of units for energy, distance and time, and the
general case of arbitrary constants can be recovered by rescaling the results of
our calculations. So, now
$$a = \frac{1}{\sqrt{2}}(Q + iP),\quad a^\dagger = \frac{1}{\sqrt{2}}(Q - iP)$$
and
$$Q = \frac{1}{\sqrt{2}}(a + a^\dagger),\quad P = \frac{1}{i\sqrt{2}}(a - a^\dagger)$$
The Hamiltonian operator is
$$H = \frac{1}{2}(Q^2 + P^2) = \frac{1}{2}\left(\frac{1}{2}(a + a^\dagger)^2 - \frac{1}{2}(a - a^\dagger)^2\right) = \frac{1}{2}(aa^\dagger + a^\dagger a) = a^\dagger a + \frac{1}{2}$$

Up to the constant $\frac{1}{2}$, $H$ is given by the operator

N = a† a

which satisfies the commutation relations

[N, a] = [a† a, a] = a† [a, a] + [a† , a]a = −a

and
[N, a† ] = a†
If $|c\rangle$ is a normalized eigenvector of $N$ with eigenvalue $c$, one has
$$c = \langle c|a^\dagger a|c\rangle = |a|c\rangle|^2 \geq 0$$
so eigenvalues of $N$ must be non-negative. Using the commutation relations of $N, a, a^\dagger$ gives
$$Na|c\rangle = ([N,a] + aN)|c\rangle = a(N-1)|c\rangle = (c-1)a|c\rangle$$
and
$$Na^\dagger|c\rangle = ([N,a^\dagger] + a^\dagger N)|c\rangle = a^\dagger(N+1)|c\rangle = (c+1)a^\dagger|c\rangle$$
This shows that $a|c\rangle$ will have eigenvalue $c-1$ for $N$, and a normalized eigenfunction for $N$ will be
$$|c-1\rangle = \frac{1}{\sqrt{c}}a|c\rangle$$
Similarly, since
$$|a^\dagger|c\rangle|^2 = \langle c|aa^\dagger|c\rangle = \langle c|(N+1)|c\rangle = c+1$$
we have
$$|c+1\rangle = \frac{1}{\sqrt{c+1}}a^\dagger|c\rangle$$
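The commutation relations and the ladder structure can be checked on finite matrix truncations of $a$ and $a^\dagger$ in the number basis (a sketch of ours; $[a,a^\dagger] = 1$ fails only in the last entry, an artifact of the truncation):

```python
import numpy as np

d = 8  # truncation dimension: states |0>, ..., |d-1>

# Annihilation operator in the number basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, d)), k=1)
adag = a.T
N = adag @ a  # equals diag(0, 1, ..., d-1) exactly in this truncation
assert np.allclose(N, np.diag(np.arange(d)))

# [N, a] = -a and [N, a^dagger] = a^dagger hold exactly
assert np.allclose(N @ a - a @ N, -a)
assert np.allclose(N @ adag - adag @ N, adag)

# [a, a^dagger] = 1 holds except in the last (truncation) diagonal entry
comm = a @ adag - adag @ a
assert np.allclose(comm[:-1, :-1], np.eye(d - 1))
```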
We can find eigenfunctions for $H$ by first solving
$$a|0\rangle = 0$$
for $|0\rangle$ (the lowest energy or "vacuum" state), which will have energy eigenvalue $\frac{1}{2}$, then acting by $a^\dagger$ $n$ times on $|0\rangle$ to get states with energy eigenvalue $n + \frac{1}{2}$.
The equation for $|0\rangle$ is thus
$$a|0\rangle = \frac{1}{\sqrt{2}}(Q + iP)\psi_0(q) = \frac{1}{\sqrt{2}}\left(q + \frac{d}{dq}\right)\psi_0(q) = 0$$
One can check that solutions to this are all of the form
$$\psi_0(q) = Ce^{-\frac{q^2}{2}}$$

so there is a unique normalized lowest-energy eigenfunction
$$\psi_0(q) = \frac{1}{\pi^{\frac{1}{4}}}e^{-\frac{q^2}{2}}$$

The rest of the energy eigenfunctions can be found by computing
$$|n\rangle = \frac{a^\dagger}{\sqrt{n}}\cdots\frac{a^\dagger}{\sqrt{2}}\frac{a^\dagger}{\sqrt{1}}|0\rangle = \frac{1}{\pi^{\frac{1}{4}}}\frac{1}{\sqrt{2^nn!}}\left(q - \frac{d}{dq}\right)^ne^{-\frac{q^2}{2}}$$
which (after putting back in constants and consulting the definition of a Hermite polynomial) can be shown to give the eigenfunctions claimed earlier in equation 20.2.

In the physical interpretation of this quantum system, the state $|n\rangle$, with energy $\hbar\omega(n + \frac{1}{2})$, is thought of as a state describing $n$ "quanta". The state $|0\rangle$ is the "vacuum state" with zero quanta, but still carrying a "zero-point" energy of $\frac{1}{2}\hbar\omega$. The operators $a^\dagger$ and $a$ have somewhat similar properties to the raising and lowering operators we used for $SU(2)$, but their commutator is different (just the identity operator), leading to simpler behavior. In this case they are called "creation" and "annihilation" operators respectively, due to the way they change the number of quanta. The relation of such quanta to physical particles like the photon is that quantization of the electromagnetic field involves quantization of an infinite collection of oscillators, with the quantum of an oscillator corresponding physically to a photon with a specific momentum and polarization. This leads to a well-known problem of how to handle the infinite vacuum energy corresponding to adding up $\frac{1}{2}\hbar\omega$ for each oscillator.

The first few eigenfunctions are plotted below. The lowest energy eigenstate
is a Gaussian centered at q = 0, with a Fourier transform that is also a Gaussian
centered at p = 0. Classically the lowest energy solution is an oscillator at rest at
its equilibrium point (q = p = 0), but for a quantum oscillator one cannot have
such a state with a well-defined position and momentum. Note that the plot
gives the wavefunctions, which in this case are real and can be negative. The
square of this function is what has an intepretation as the probability density
for measuring a given position.

20.3 The Bargmann-Fock representation
Working with the operators a and a† and their commutation relation

[a, a† ] = 1

makes it clear that there is a simpler way to represent these operators than
the Schrödinger representation as operators on position space functions that we
have been using, while the Stone-von Neumann theorem assures us that this will
be unitarily equivalent to the Schrödinger representation. This representation
appears in the literature under a large number of different names, depending on
the context, all of which refer to the same representation:

Definition (Bargmann-Fock or oscillator or holomorphic or Segal-Shale-Weil representation). The Bargmann-Fock (etc.) representation is given by taking as state space $\mathcal H = \mathcal F$, where $\mathcal F$ is the space of holomorphic functions on $\mathbf C$ with finite norm in the inner product
$$\langle\psi_1|\psi_2\rangle = \frac{1}{\pi}\int_{\mathbf C}\overline{\psi_1(w)}\psi_2(w)\,e^{-|w|^2}\,du\,dv \qquad (20.3)$$

where $w = u + iv$. We define the following two operators acting on this space:
$$a = \frac{d}{dw},\qquad a^\dagger = w$$
One has
$$[a, a^\dagger]w^n = \frac{d}{dw}(w\,w^n) - w\frac{d}{dw}w^n = (n+1-n)w^n = w^n$$
so this commutator is the identity operator on polynomials
$$[a, a^\dagger] = \mathbf 1$$
and

Theorem. The Bargmann-Fock representation has the following properties:

• The elements
$$\frac{w^n}{\sqrt{n!}}$$
of $\mathcal F$ for $n = 0, 1, 2, \dots$ are orthonormal.

• The operators $a$ and $a^\dagger$ are adjoints with respect to the given inner product on $\mathcal F$.

• The basis
$$\frac{w^n}{\sqrt{n!}}$$
of $\mathcal F$ for $n = 0, 1, 2, \dots$ is complete.
Proof. The proofs of the above statements are not difficult; in outline they are:

• For orthonormality one can just compute the integrals
$$\int_{\mathbf C} \overline{w^m}\,w^n e^{-|w|^2}\,du\,dv$$
in polar coordinates.

• To show that $w$ and $\frac{d}{dw}$ are adjoint operators, use integration by parts.

• For completeness, assume $\langle n|\psi\rangle = 0$ for all $n$. The expression for the $|n\rangle$ as Hermite polynomials times a Gaussian implies that
$$\int F(q)e^{-\frac{q^2}{2}}\psi(q)\,dq = 0$$
for all polynomials $F(q)$. Computing the Fourier transform of $\psi(q)e^{-\frac{q^2}{2}}$ gives
$$\int e^{-ikq}e^{-\frac{q^2}{2}}\psi(q)\,dq = \int\left(\sum_{j=0}^{\infty}\frac{(-ikq)^j}{j!}\right)e^{-\frac{q^2}{2}}\psi(q)\,dq = 0$$
So $\psi(q)e^{-\frac{q^2}{2}}$ has Fourier transform $0$ and must be $0$ itself. Alternatively, one can invoke the spectral theorem for the self-adjoint operator $H$, which guarantees that its eigenvectors form a complete and orthonormal set.

Since in this representation the number operator $N = a^\dagger a$ satisfies
$$N w^n = w\frac{d}{dw}w^n = nw^n$$
the monomials in $w$ diagonalize the number and energy operators, so one has
$$|n\rangle = \frac{w^n}{\sqrt{n!}}$$
for the normalized energy eigenstate of energy $\hbar\omega(n + \frac{1}{2})$.


Note that we are here taking the state space $\mathcal F$ to include infinite linear combinations of the states $|n\rangle$, as long as the Bargmann-Fock norm is finite. We will sometimes want to restrict to the subspace of finite linear combinations of the $|n\rangle$, which we will denote $\mathcal F^{fin}$. This is just the space $\mathbf C[w]$ of polynomials.
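Though not part of the text, these algebraic facts are easy to check with a computer algebra system. The following sympy sketch verifies that $[a, a^\dagger] = 1$ on monomials and that the Bargmann-Fock norms are $\|w^n\|^2 = n!$, computing the inner product 20.3 in polar coordinates:

```python
import sympy as sp

w = sp.symbols('w')

def a(f):
    # annihilation operator in the Bargmann-Fock representation: d/dw
    return sp.diff(f, w)

def adag(f):
    # creation operator: multiplication by w
    return w * f

# [a, a†] acts as the identity on each monomial w^n
comm_ok = all(sp.expand(a(adag(w**n)) - adag(a(w**n))) == w**n
              for n in range(6))

# Bargmann-Fock norms: (1/pi) ∫_C |w|^(2n) e^{-|w|^2} du dv = n!,
# computed in polar coordinates w = r e^{i theta}
r, th = sp.symbols('r theta', positive=True)
norms = [sp.simplify(sp.integrate(r**(2 * n) * sp.exp(-r**2) * r,
                                  (r, 0, sp.oo), (th, 0, 2 * sp.pi)) / sp.pi)
         for n in range(4)]
```

The norms come out as $1, 1, 2, 6$, matching $n!$, which is why the normalized eigenstates carry the $\frac{1}{\sqrt{n!}}$ factor.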

20.4 The Bargmann transform


The Stone-von Neumann theorem implies the existence of:

Definition (Bargmann transform). There is a unitary map, called the Bargmann transform,
$$B : \mathcal H_S \rightarrow \mathcal H_{BF}$$
between the Schrödinger and Bargmann-Fock representations, with operators satisfying the relation
$$\Gamma'_{BF}(X) = B\,\Gamma'_S(X)\,B^{-1}$$
for $X \in \mathfrak h_{2d+1}$.
In practice, knowing $B$ explicitly is often not needed, since one can use the representation independent relation
$$a_j = \frac{1}{\sqrt 2}(Q_j + iP_j)$$
to express operators either purely in terms of $a_j$ and $a_j^\dagger$, which have the simple expression
$$a_j = \frac{\partial}{\partial w_j},\qquad a_j^\dagger = w_j$$
in the Bargmann-Fock representation, or purely in terms of $Q_j$ and $P_j$, which have the simple expression
$$Q_j = q_j,\qquad P_j = -i\frac{\partial}{\partial q_j}$$
in the Schrödinger representation.
To give an idea of what the Bargmann transform looks like explicitly, we'll just give the formula for the $d = 1$ case here, without proof. If $\psi(q)$ is a state in $\mathcal H_S = L^2(\mathbf R)$, then
$$(B\psi)(z) = \frac{1}{\pi^{\frac14}}\int_{-\infty}^{+\infty} e^{-z^2 - \frac{q^2}{2} + 2qz}\psi(q)\,dq$$
One can check this equation for the case of the lowest energy state in the Schrödinger representation, where $|0\rangle$ has coordinate space representation
$$\psi(q) = \frac{1}{\pi^{\frac14}}e^{-\frac{q^2}{2}}$$
and
$$\begin{aligned}(B\psi)(z) &= \frac{1}{\pi^{\frac14}}\int_{-\infty}^{+\infty} e^{-z^2-\frac{q^2}{2}+2qz}\frac{1}{\pi^{\frac14}}e^{-\frac{q^2}{2}}\,dq\\
&= \frac{1}{\pi^{\frac12}}\int_{-\infty}^{+\infty} e^{-z^2-q^2+2qz}\,dq\\
&= \frac{1}{\pi^{\frac12}}\int_{-\infty}^{+\infty} e^{-(q-z)^2}\,dq\\
&= \frac{1}{\pi^{\frac12}}\int_{-\infty}^{+\infty} e^{-q^2}\,dq\\
&= 1\end{aligned}$$
which is the expression for the state $|0\rangle$ in the Bargmann-Fock representation.
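One can also check the formula numerically. The following sketch (an illustration, not from the text) applies the $d = 1$ transform above to the Schrödinger ground state on a grid and confirms that the result is the constant function $1$ at several complex values of $z$:

```python
import numpy as np

# grid fine enough that a simple Riemann sum is accurate for these
# rapidly decaying integrands
q = np.linspace(-12.0, 12.0, 200001)
dq = q[1] - q[0]
psi0 = np.pi**-0.25 * np.exp(-q**2 / 2)   # Schrodinger ground state

def bargmann(z):
    # (B psi)(z) with the d = 1 kernel given above
    kernel = np.pi**-0.25 * np.exp(-z**2 - q**2 / 2 + 2 * q * z)
    return np.sum(kernel * psi0) * dq

vals = [bargmann(z) for z in (0.0, 0.5, 0.3 + 0.4j, -1.0 + 0.2j)]
all_one = all(abs(v - 1.0) < 1e-8 for v in vals)
```

The $z$-independence of the result is exactly the statement that the ground state maps to the constant function.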

20.5 Multiple degrees of freedom


Up until now we have been working with the simple case of one physical degree of freedom, i.e. one pair $(Q, P)$ of position and momentum operators satisfying the Heisenberg relation $[Q, P] = i\mathbf 1$, or one pair of adjoint operators $a, a^\dagger$ satisfying $[a, a^\dagger] = \mathbf 1$. We can easily extend this to any number $d$ of degrees of freedom by taking tensor products of our state space $\mathcal F$, with $d$ copies of our operators, each acting on one factor of the tensor product. Our new state space will be
$$\mathcal H = \mathcal F^d = \underbrace{\mathcal F\otimes\cdots\otimes\mathcal F}_{d\text{ times}}$$

and we will have operators
$$Q_j,\ P_j\qquad j = 1,\dots,d$$
satisfying
$$[Q_j, P_k] = i\delta_{jk}\mathbf 1,\qquad [Q_j, Q_k] = [P_j, P_k] = 0$$

where Qj and Pj just act on the j’th term of the tensor product in the usual
way.
We can now define annihilation and creation operators in the general case:
Definition (Annihilation and creation operators). The $2d$ operators
$$a_j = \frac{1}{\sqrt 2}(Q_j + iP_j),\qquad a_j^\dagger = \frac{1}{\sqrt 2}(Q_j - iP_j),\qquad j = 1,\dots,d$$
are called annihilation (the $a_j$) and creation (the $a_j^\dagger$) operators.

One can easily check that these satisfy:

Definition (Canonical commutation relations). The canonical commutation relations (often abbreviated CCR) are
$$[a_j, a_k^\dagger] = \delta_{jk}\mathbf 1,\qquad [a_j, a_k] = [a_j^\dagger, a_k^\dagger] = 0$$
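As a quick matrix sanity check (an illustration, not from the text), one can truncate the Fock space to $N$ levels. The CCR then hold exactly except in the last diagonal entry, an unavoidable artifact of the truncation:

```python
import numpy as np

N = 8
# a|n> = sqrt(n)|n-1>: sqrt entries on the first superdiagonal
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
adag = a.T.copy()

comm = a @ adag - adag @ a
expected = np.eye(N)
expected[-1, -1] = 1 - N     # corner entry spoiled by the truncation
ccr_ok = np.allclose(comm, expected)

# the number operator a†a is diagonal with eigenvalues 0, 1, ..., N-1
num_ok = np.allclose(adag @ a, np.diag(np.arange(N)))
```

The bad corner entry reflects the fact that $[a, a^\dagger] = \mathbf 1$ cannot hold for finite dimensional matrices (the trace of a commutator vanishes), which is one reason the state space must be infinite dimensional.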

Using the fact that tensor products of function spaces correspond to functions on the product space, in the Schrödinger representation we have
$$\mathcal H = L^2(\mathbf R^d)$$
and in the Bargmann-Fock representation $\mathcal H = \mathcal F^d$ is the space of holomorphic functions in $d$ complex variables (with finite norm in the $d$-dimensional version of 20.3).
The harmonic oscillator Hamiltonian for $d$ degrees of freedom will be
$$H = \frac12\sum_{j=1}^d (P_j^2 + Q_j^2) = \sum_{j=1}^d\left(a_j^\dagger a_j + \frac12\right)$$

where one should keep in mind that one can rescale each degree of freedom
separately, allowing different parameters ωj for the different degrees of freedom.
The energy and number operator eigenstates will be written
$$|n_1,\dots,n_d\rangle$$
where
$$a_j^\dagger a_j|n_1,\dots,n_d\rangle = N_j|n_1,\dots,n_d\rangle = n_j|n_1,\dots,n_d\rangle$$
Note that for d = 3 the harmonic oscillator problem is an example of the cen-
tral potential problems described in chapter 19. It has an SO(3) symmetry, with
angular momentum operators that commute with the Hamiltonian, and space
of energy eigenstates that can be organized into irreducible SO(3) representa-
tions. In the Schrödinger representation states are in $\mathcal H = L^2(\mathbf R^3)$, described
by wavefunctions that can be written in rectangular or spherical coordinates,
and the Hamiltonian is a second order differential operator. In the Bargmann-
Fock representation, states in F3 are described by holomorphic functions of 3
complex variables, with operators given in terms of products of annihilation

and creation operators. The Hamiltonian is, up to a constant, just the number
operator, with energy eigenstates homogeneous polynomials (with eigenvalue of
the number operator their degree).
Either the Pj , Qk or the aj , a†k together with the identity operator will give
a representation of the Heisenberg Lie algebra h2d+1 on H, and by exponentia-
tion a representation of the Heisenberg group H2d+1 . Quadratic combinations
of these operators will give a representation of sp(2d, R), the Lie algebra of
Sp(2d, R). In the next chapters we will study these and other aspects of the
quantum harmonic oscillator as a unitary representation.

20.6 For further reading


All quantum mechanics books should have a similar discussion of the harmonic
oscillator, with a good example the detailed one in chapter 7 of Shankar [57].
One source for a detailed treatment of the Bargmann-Fock representation and
Bargmann transform is [19].

Chapter 21

The Harmonic Oscillator as a Representation of the Heisenberg Group

The quantum harmonic oscillator explicitly constructed in the previous chapter


provides new insight into the representation theory of the Heisenberg and meta-
plectic groups, using the existence of not just the Schrödinger representation
ΓS , but the unitarily equivalent Bargmann-Fock version. In this chapter we’ll
examine various aspects of the Heisenberg group H2d+1 part of this story that
the formalism of annihilation and creation operators illuminates, going beyond
the understanding of this representation one gets from the use of position space
wavefunctions and the Schrödinger representation.
The Schrödinger representation ΓS of H2d+1 uses a specific choice of ex-
tra structure on classical phase space: a decomposition of its coordinates into
positions qj and momenta pj . For the unitarily equivalent Bargmann-Fock rep-
resentation a different sort of extra structure is needed, a decomposition of
coordinates on phase space into complex coordinates zj and their complex con-
jugates z j . Such a decomposition is called a “complex structure” J, and will
correspond after quantization to a choice that distinguishes annihilation and
creation operators. This choice has physical significance, it is equivalent to a
specification of the lowest energy state |0i ∈ H (since this by definition is the
state satisfying aj |0i = 0 for all annihilation operators aj ). In chapter 20 we
used one particular standard choice of J. For other choices one gets different
eigenstates of the number operator, known to physicists as “squeezed states”.
In later chapters on relativistic quantum field theory, we will see that the phe-
nomenon of anti-particles is best understood in terms of a new possibility for
the choice of J that appears in that case.
The Heisenberg group representation constructed in this way is denoted ΓJ .
It does not commute with the Hamiltonian H, so it is not a symmetry, which
would take states to other states with the same energy. While it doesn't commute with the Hamiltonian, it does have physically important aspects. In par-
ticular it takes the state |0i to a distinguished set of states known as “coherent
states”. These states are labeled by points of the phase space R2d and provide
the closest analog possible in the quantum system of classical states (i.e. those
with a well-defined value of position and momentum variables).

21.1 Complex structures and phase space


Quantization of phase space $M = \mathbf R^{2d}$ using the Schrödinger representation gives a unitary Lie algebra representation $\Gamma'_S$ of the Heisenberg Lie algebra $\mathfrak h_{2d+1}$ which takes the $q_j$ and $p_j$ coordinate functions on phase space to operators $Q_j$ and $P_j$ on $\mathcal H_S = L^2(\mathbf R^d)$. This involves a choice, that of taking states to be
functions of the qj , or (using the Fourier transform) of the pj . It turns out to be
a general phenomenon that quantization involves choosing some extra structure
on phase space, beyond the Poisson bracket.
For the case of the harmonic oscillator, we found in chapter 20 that quantiza-
tion was most conveniently performed using annihilation and creation operators,
which involve a different sort of choice of extra structure on phase space. There
we introduced complex coordinates on phase space, making the choice
$$z_j = \frac{1}{\sqrt 2}(q_j - ip_j),\qquad \bar z_j = \frac{1}{\sqrt 2}(q_j + ip_j)$$

The $z_j$ were then quantized using creation operators $a_j^\dagger$, the $\bar z_j$ using annihilation
operators aj . In the Bargmann-Fock representation, where the state space is a
space of functions of complex variables wj , we have


$$a_j = \frac{\partial}{\partial w_j},\qquad a_j^\dagger = w_j$$

and there is a distinguished state, the constant function, which is annihilated


by all the aj .
In this section we’ll introduce the notion of a complex structure on a real vec-
tor space, with such structures characterizing the possible ways of introducing
complex coordinates $z_j, \bar z_j$ and thus annihilation and creation operators. The
abstract notion of a complex structure can be formalized as follows. Given any
real vector space V = Rn , we have seen that taking complex linear combinations
of vectors in V gives a complex vector space V ⊗ C, the complexification of V ,
and this can be identified with Cn , a real vector space of twice the dimension.
When n = 2d is even, one can get a complex vector space out of V = R2d
in a different way, but to do this one needs the following additional piece of
information:

Definition (Complex structure). A complex structure on a real vector space V


is a linear operator
$$J : V \rightarrow V$$
such that
$$J^2 = -\mathbf 1$$
Given such a pair (V = R2d , J), one can break up complex linear combina-
tions of vectors in V into those on which J acts as i and those on which it acts
as −i (since J 2 = −1, its eigenvalues must be ±i). Note that we have extended
the action of J on V to an action on V ⊗ C using complex linearity. One has

$$V\otimes\mathbf C = V_J^+\oplus V_J^-$$

where $V_J^+$ is the $+i$ eigenspace of the operator $J$ on $V\otimes\mathbf C$ and $V_J^-$ is the $-i$ eigenspace. Complex conjugation takes elements of $V_J^+$ to $V_J^-$ and vice-versa. The choice of $J$ has thus given us two complex vector spaces of complex dimension $d$, $V_J^+$ and $V_J^-$, related by this complex conjugation.
Since
$$J(v - iJv) = i(v - iJv)$$
for any $v\in V$, one can identify the real vector space $V$ with the complex vector space $V_J^+$ by the map
$$v\in V\rightarrow \frac{1}{\sqrt 2}(v - iJv)\in V_J^+\qquad (21.1)$$
This allows one to think of the pair $(V, J)$ as giving $V$ the structure of a complex vector space, with $J$ providing multiplication by $i$. Similarly, taking
$$v\in V\rightarrow \frac{1}{\sqrt 2}(v + iJv)\in V_J^-$$
identifies $V$ with $V_J^-$.


The real vector space we want to choose a complex structure on is the dual phase space $\mathcal M = M^*$. There will then be a decomposition
$$\mathcal M\otimes\mathbf C = \mathcal M_J^+\oplus\mathcal M_J^-$$
and quantization will take elements of $\mathcal M_J^+$ to linear combinations of creation operators, elements of $\mathcal M_J^-$ to linear combinations of annihilation operators. Note that, as in earlier chapters, it is elements not of $M$ but of the dual phase space $\mathcal M$ that we want to work with, since it is these that have a Lie algebra structure, and become operators after quantization.
The standard choice of complex structure is to take $J = J_0$, where $J_0$ is the linear operator that acts on basis vectors $q_j, p_j$ of $\mathcal M$ by
$$J_0 q_j = p_j,\qquad J_0 p_j = -q_j$$
One can take as basis elements of $\mathcal M_{J_0}^+$, the $+i$ eigenspace of $J_0$, the complex coordinate functions in $\mathcal M\otimes\mathbf C$ given by
$$z_j = \frac{1}{\sqrt 2}(q_j - ip_j)$$
since one has
$$J_0 z_j = \frac{1}{\sqrt 2}(p_j + iq_j) = iz_j$$
Basis elements of $\mathcal M_{J_0}^-$ are the complex conjugates
$$\bar z_j = \frac{1}{\sqrt 2}(q_j + ip_j)$$
With respect to the chosen basis $q_j, p_j$, one can write a complex structure as a matrix. For the case of $J_0$ and for $d = 1$, on an arbitrary element of $\mathcal M$ one has
$$J_0(c_q q + c_p p) = c_q p - c_p q$$
so $J_0$ in matrix form with respect to the basis $(q, p)$ is
$$J_0\begin{pmatrix}c_q\\ c_p\end{pmatrix} = \begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\begin{pmatrix}c_q\\ c_p\end{pmatrix} = \begin{pmatrix}-c_p\\ c_q\end{pmatrix}\qquad (21.2)$$
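These matrix identities are easy to verify numerically. The sketch below (not from the text) checks $J_0^2 = -\mathbf 1$, that the eigenvalues of $J_0$ on the complexification are $\pm i$, and the relation $J_0^T\Omega J_0 = \Omega$ that will reappear as the compatibility condition in the next section:

```python
import numpy as np

J0 = np.array([[0, -1],
               [1,  0]])
Omega = np.array([[0, 1],
                  [-1, 0]])    # matrix of the symplectic form in the basis (q, p)

sq_ok = np.array_equal(J0 @ J0, -np.eye(2, dtype=int))       # J0^2 = -1
eig_ok = np.allclose(np.sort_complex(np.linalg.eigvals(J0)), [-1j, 1j])
symp_ok = np.array_equal(J0.T @ Omega @ J0, Omega)           # J0 preserves Omega
```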

21.2 Complex structures and quantization


Recall that the Heisenberg Lie algebra is just the Lie algebra of linear and constant functions on $M$, so can be thought of as
$$\mathfrak h_{2d+1} = \mathcal M\oplus\mathbf R$$
where the $\mathbf R$ component is the constant functions. The Lie bracket is just the Poisson bracket. Complexifying this, one has
$$\mathfrak h_{2d+1}\otimes\mathbf C = (\mathcal M\oplus\mathbf R)\otimes\mathbf C = (\mathcal M\otimes\mathbf C)\oplus\mathbf C = \mathcal M_J^+\oplus\mathcal M_J^-\oplus\mathbf C$$
so one can write elements of $\mathfrak h_{2d+1}\otimes\mathbf C$ as pairs $(u, c) = (u^+ + u^-, c)$ where
$$u\in\mathcal M\otimes\mathbf C,\quad u^+\in\mathcal M_J^+,\quad u^-\in\mathcal M_J^-,\quad c\in\mathbf C$$

This complexified Lie algebra is still a Lie algebra, with the Lie bracket relations extended from the real Lie algebra by complex linearity. One has
$$[(u_1, c_1), (u_2, c_2)] = (0, \Omega(u_1, u_2))$$
where $\Omega$ is the antisymmetric bilinear form on $\mathcal M$ given by the Poisson bracket on linear functions (extended by complex linearity to a bilinear form on $\mathcal M\otimes\mathbf C$).
For each $J$, we would like to find a quantization that takes elements of $\mathcal M_J^+$ to linear combinations of creation operators, elements of $\mathcal M_J^-$ to linear combinations of annihilation operators. This will give a representation of the complexified Lie algebra
$$\Gamma'_J : (u, c)\in\mathfrak h_{2d+1}\otimes\mathbf C\rightarrow \Gamma'_J(u, c)$$
which must satisfy the Lie algebra homomorphism property
$$[\Gamma'_J(u_1, c_1), \Gamma'_J(u_2, c_2)] = \Gamma'_J([(u_1, c_1), (u_2, c_2)]) = \Gamma'_J(0, \Omega(u_1, u_2))\qquad (21.3)$$
Note that $\Gamma'_J$ will only be a unitary representation (with $\Gamma'_J(u, c)$ skew-adjoint operators) for $(u, c)$ in the real Lie subalgebra $\mathfrak h_{2d+1}$ (meaning $u\in\mathcal M$, $c\in\mathbf R$).
Since we can write
$$(u, c) = (u^+, 0) + (u^-, 0) + (0, c)$$
where $u^+\in\mathcal M_J^+$ and $u^-\in\mathcal M_J^-$, we have
$$\Gamma'_J(u, c) = \Gamma'_J(u^+, 0) + \Gamma'_J(u^-, 0) + \Gamma'_J(0, c)$$
The last term must be
$$\Gamma'_J(0, c) = -ic\mathbf 1$$
which is chosen so that for $c$ real one has a skew-adjoint transformation and thus a unitary representation.
We would like to construct $\Gamma'_J(u^+, 0)$ as a linear combination of creation operators and $\Gamma'_J(u^-, 0)$ as a linear combination of annihilation operators. For this to be possible, we need two conditions on $J$. The first is a compatibility condition between $\Omega$ and $J$: for all $v_1, v_2\in\mathcal M$
$$\Omega(Jv_1, Jv_2) = \Omega(v_1, v_2)\qquad (21.4)$$
From the definition of the symplectic group in chapter 14, this condition just says that $J\in Sp(2d,\mathbf R)$. Since we are extending the action of $J$ to $\mathcal M\otimes\mathbf C$ by complex linearity, this condition will remain true for $u_1, u_2\in\mathcal M\otimes\mathbf C$. Given this condition, the $\Gamma'_J(u^+, 0)$ will commute, since if $u_1^+, u_2^+\in\mathcal M_J^+$, by 21.3 we have
$$[\Gamma'_J(u_1^+, 0), \Gamma'_J(u_2^+, 0)] = \Gamma'_J(0, \Omega(u_1^+, u_2^+))$$
and
$$\Omega(u_1^+, u_2^+) = \Omega(Ju_1^+, Ju_2^+) = \Omega(iu_1^+, iu_2^+) = -\Omega(u_1^+, u_2^+) = 0$$
The $\Gamma'_J(u^-, 0)$ will commute with each other by essentially the same argument.
For the case of the standard complex structure $J = J_0$, one can check this compatibility condition by computing (treating the $d = 1$ case, which generalizes easily, and using equations 14.1 and 21.2)
$$\begin{aligned}\Omega(J_0(c_q q + c_p p), J_0(c'_q q + c'_p p)) &= \left(\begin{pmatrix}0&-1\\1&0\end{pmatrix}\begin{pmatrix}c_q\\c_p\end{pmatrix}\right)^T\begin{pmatrix}0&1\\-1&0\end{pmatrix}\left(\begin{pmatrix}0&-1\\1&0\end{pmatrix}\begin{pmatrix}c'_q\\c'_p\end{pmatrix}\right)\\
&= \begin{pmatrix}c_q&c_p\end{pmatrix}\begin{pmatrix}0&1\\-1&0\end{pmatrix}\begin{pmatrix}0&1\\-1&0\end{pmatrix}\begin{pmatrix}0&-1\\1&0\end{pmatrix}\begin{pmatrix}c'_q\\c'_p\end{pmatrix}\\
&= \begin{pmatrix}c_q&c_p\end{pmatrix}\begin{pmatrix}0&1\\-1&0\end{pmatrix}\begin{pmatrix}c'_q\\c'_p\end{pmatrix}\\
&= \Omega(c_q q + c_p p, c'_q q + c'_p p)\end{aligned}$$
More simply of course, one could just note that $J_0\in SL(2,\mathbf R) = Sp(2,\mathbf R)$.
For the case of $J = J_0$ and arbitrary values of $d$, one can write out $\Omega$ explicitly on basis elements
$$z_j\in\mathcal M_{J_0}^+,\qquad \bar z_j\in\mathcal M_{J_0}^-$$
as
$$\Omega(z_j, z_k) = \{z_j, z_k\} = \left\{\frac{1}{\sqrt 2}(q_j - ip_j), \frac{1}{\sqrt 2}(q_k - ip_k)\right\} = 0$$
$$\Omega(\bar z_j, \bar z_k) = \{\bar z_j, \bar z_k\} = \left\{\frac{1}{\sqrt 2}(q_j + ip_j), \frac{1}{\sqrt 2}(q_k + ip_k)\right\} = 0$$
$$\Omega(z_j, \bar z_k) = \{z_j, \bar z_k\} = \left\{\frac{1}{\sqrt 2}(q_j - ip_j), \frac{1}{\sqrt 2}(q_k + ip_k)\right\} = i\delta_{jk}$$
or
$$\{z_j, -i\bar z_k\} = \delta_{jk}$$
Note that here the conjugate variable to $z_j$ with respect to the Poisson bracket is $-i\bar z_j$.
The Lie algebra representation is given in this case by
$$\Gamma'_{J_0}(z_j, 0) = -ia_j^\dagger = -iw_j,\qquad \Gamma'_{J_0}(\bar z_j, 0) = -ia_j = -i\frac{\partial}{\partial w_j},\qquad \Gamma'_{J_0}(0, 1) = -i\mathbf 1$$
Note that this is precisely the Bargmann-Fock representation, i.e. we have shown that $\Gamma'_{J_0} = \Gamma'_{BF}$.
To check that $\Gamma'_{J_0}$ is a Lie algebra homomorphism, compute
$$[\Gamma'_{J_0}(z_j, 0), \Gamma'_{J_0}(\bar z_k, 0)] = [-ia_j^\dagger, -ia_k] = -[a_j^\dagger, a_k] = \delta_{jk}\mathbf 1 = \delta_{jk}\Gamma'_{J_0}(0, i)$$
which is correct since
$$[(z_j, 0), (\bar z_k, 0)] = (0, \{z_j, \bar z_k\}) = (0, i\delta_{jk})$$
Note that the operators $a_j$ and $a_j^\dagger$ are not skew-adjoint, so $\Gamma'_{J_0}$ is not unitary on the full Lie algebra $\mathfrak h_{2d+1}\otimes\mathbf C$, but only on the real subspace $\mathfrak h_{2d+1}$ of real linear combinations of $q_j, p_j, 1$.

21.3 The positivity condition on J


We still need a second condition on $J$, a positivity condition. Recall that the annihilation and creation operators satisfy (for $d = 1$)
$$[a, a^\dagger] = \mathbf 1$$
This condition on the commutator corresponds to the following condition on the representation, for $J = J_0$:
$$[\Gamma'_{J_0}(z, 0), \Gamma'_{J_0}(\bar z, 0)] = [-ia^\dagger, -ia] = \mathbf 1$$
By the homomorphism property 21.3 we have
$$[\Gamma'_{J_0}(z, 0), \Gamma'_{J_0}(\bar z, 0)] = \Gamma'_{J_0}(0, \Omega(z, \bar z)) = -i\Omega(z, \bar z)\mathbf 1$$
so positivity here corresponds to the fact that
$$-i\Omega(z, \bar z) = 1 > 0$$

Use of the opposite sign for the commutator
$$[a, a^\dagger] = -\mathbf 1$$
would correspond to interchanging the roles of $a$ and $a^\dagger$, with the lowest energy state now satisfying $a^\dagger|0\rangle = 0$ and no state in the state space satisfying $a|0\rangle = 0$. This is equivalent to a change of sign of $J$, interchanging $\mathcal M_J^+$ and $\mathcal M_J^-$. For the $d = 1$ case with the wrong sign, one can just make this interchange to construct the state space, but in higher dimensions one needs to have the same sign for all $[a_j, a_j^\dagger]$. In order to have a state $|0\rangle$ that is annihilated by all annihilation operators, we need all the commutators $[a_j, a_j^\dagger]$ to have the positive sign.
For the case of general $J$ and arbitrary $d$, to get operators $\Gamma'_J(u, 0)$ with the right positivity properties, we will need the condition
$$-i\Omega(u, \bar u) > 0$$
for non-zero $u\in\mathcal M_J^+$. Only if this condition is satisfied will we be able to construct $\Gamma'_J$ in terms of conventional annihilation and creation operators for basis elements of $\mathcal M_J^+$ and $\mathcal M_J^-$. Using the identification 21.1 of $\mathcal M$ and $\mathcal M_J^+$, $u\in\mathcal M_J^+$ can be written as
$$u = \frac{1}{\sqrt 2}(v - iJv)$$
for some non-zero $v\in\mathcal M$. So another way to write the positivity condition is as
$$\begin{aligned}-i\Omega(u, \bar u) &= -\frac i2\Omega(v - iJv, v + iJv)\\
&= \frac12\left(-i\Omega(v, v) - i\Omega(Jv, Jv) + \Omega(v, Jv) - \Omega(Jv, v)\right)\\
&= \Omega(v, Jv) > 0\end{aligned}$$

For the standard complex structure $J_0$, we can check this using the matrix expressions for $\Omega$ and $J_0$:
$$\begin{aligned}\Omega(c_q q + c_p p, J_0(c_q q + c_p p)) &= \begin{pmatrix}c_q&c_p\end{pmatrix}\begin{pmatrix}0&1\\-1&0\end{pmatrix}\begin{pmatrix}0&-1\\1&0\end{pmatrix}\begin{pmatrix}c_q\\c_p\end{pmatrix}\\
&= \begin{pmatrix}c_q&c_p\end{pmatrix}\begin{pmatrix}1&0\\0&1\end{pmatrix}\begin{pmatrix}c_q\\c_p\end{pmatrix}\\
&= c_q^2 + c_p^2\end{aligned}$$
Note that $\Omega(v, Jv)$ thus gives a positive-definite quadratic function on $\mathcal M$. Using the isomorphism of $\mathcal M$ and $M$ provided by $\Omega$, this corresponds to a positive-definite quadratic function on the phase space itself. For $J_0$ this is just (twice) the standard harmonic oscillator Hamiltonian function. One application of more general $J$ is to the case of more general quadratic (but still positive-definite) Hamiltonian functions.
We can give a name to the class of J for which we will have a formalism of
annihilation and creation operators:

Definition (Positive compatible complex structures). A complex structure $J$ on $\mathcal M$ is said to be positive and compatible with $\Omega$ if it satisfies the compatibility condition 21.4 (i.e. is in $Sp(2d,\mathbf R)$) and either of the equivalent positivity conditions
$$-i\Omega(u, \bar u) > 0$$
for non-zero $u\in\mathcal M_J^+$, or
$$\Omega(v, Jv) > 0$$
for non-zero $v\in\mathcal M$.

Given such a complex structure, we have:

Theorem. Given a positive compatible complex structure $J$ on $\mathcal M$, one can find a basis $z_j^J$ of $\mathcal M_J^+$ such that a representation of $\mathfrak h_{2d+1}\otimes\mathbf C$, unitary for the real subalgebra $\mathfrak h_{2d+1}$, is given by
$$\Gamma'_J(z_j^J, 0) = -ia_j^\dagger,\qquad \Gamma'_J(\bar z_j^J, 0) = -ia_j,\qquad \Gamma'_J(0, c) = -ic\mathbf 1$$
where the $a_j, a_k^\dagger$ satisfy the conventional commutation relations, and $\bar z_j^J$ is the complex conjugate of $z_j^J$.

Proof. An outline of the construction goes as follows:

1. Define a norm on $\mathcal M$ by $|v|^2 = \Omega(v, Jv)$. One then has a positive inner product $\langle\cdot,\cdot\rangle_J$ on $\mathcal M$ and by Gram-Schmidt orthonormalization can find a basis of $\text{span}\{q_j\}\subset\mathcal M$ consisting of $d$ vectors $q_j^J$ satisfying
$$\langle q_j^J, q_k^J\rangle_J = \delta_{jk}$$

2. The vectors $Jq_j^J$ will also be orthonormal since
$$\langle Jq_j^J, Jq_k^J\rangle_J = \Omega(Jq_j^J, J^2 q_k^J) = \Omega(q_j^J, Jq_k^J) = \langle q_j^J, q_k^J\rangle_J$$
They will be orthogonal to the $q_j^J$ since
$$\langle q_j^J, Jq_k^J\rangle_J = \Omega(q_j^J, J^2 q_k^J) = -\Omega(q_j^J, q_k^J) = 0$$

3. Define
$$z_j^J = \frac{1}{\sqrt 2}(q_j^J - iJq_j^J)$$
The $z_j^J$ give a complex basis of $\mathcal M_J^+$, their complex conjugates $\bar z_j^J$ a complex basis of $\mathcal M_J^-$.

4. One can check that
$$\Gamma'_J(z_j^J, 0) = -ia_j^\dagger = -iw_j,\qquad \Gamma'_J(\bar z_j^J, 0) = -ia_j = -i\frac{\partial}{\partial w_j}$$
satisfy the desired commutation relations and give a unitary representation on linear combinations of the $(z_j^J, 0)$ and $(\bar z_j^J, 0)$ in the real subalgebra $\mathfrak h_{2d+1}$.
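The construction in steps 1-3 is concrete enough to run numerically. In the sketch below (an illustration, not from the text), a positive compatible $J$ on $\mathbf R^4$ ($d = 2$) is manufactured as $SJ_0S^{-1}$ for a symplectic $S$ chosen just for the demo, and Gram-Schmidt in $\langle u, v\rangle_J = \Omega(u, Jv)$ then produces the claimed orthonormal basis $q_j^J, Jq_j^J$:

```python
import numpy as np

d = 2
I2 = np.eye(d)
Z2 = np.zeros((d, d))
Omega = np.block([[Z2, I2], [-I2, Z2]])    # Ω in the basis (q1, q2, p1, p2)
J0 = np.block([[Z2, -I2], [I2, Z2]])       # J0 qj = pj, J0 pj = -qj

# a symplectic S: diagonal rescaling times a shear with symmetric block;
# J = S J0 S^{-1} is then positive and compatible with Ω
B = np.array([[0.3, 0.1], [0.1, -0.2]])
S = np.block([[I2, B], [Z2, I2]]) @ np.diag([2.0, 0.7, 0.5, 1 / 0.7])
J = S @ J0 @ np.linalg.inv(S)

def inner(u, v):                           # <u, v>_J = Ω(u, Jv)
    return u @ Omega @ (J @ v)

# Gram-Schmidt on span{q1, q2} (the first two standard basis vectors)
qJ = []
for j in range(d):
    v = np.eye(2 * d)[j].copy()
    for w in qJ:
        v = v - inner(w, v) * w
    qJ.append(v / np.sqrt(inner(v, v)))

# the combined basis q_j^J, Jq_j^J should be orthonormal for <.,.>_J
basis = qJ + [J @ w for w in qJ]
G = np.array([[inner(u, v) for v in basis] for u in basis])
ortho_ok = np.allclose(G, np.eye(2 * d))
cx_ok = np.allclose(J @ J, -np.eye(2 * d))
```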

21.4 Complex structures for d = 1 and squeezed states
To get a better understanding of what happens for complex structures other than $J_0$, in this section we'll see explicitly what happens when $d = 1$. We can generalize the case $J = J_0$, where a basis of $\mathcal M_{J_0}^+$ is given by
$$z = \frac{1}{\sqrt 2}(q - ip)$$
by replacing the $i$ by an arbitrary complex number $\tau$. Then the condition that $q - \tau p$ be in $\mathcal M_J^+$ and its conjugate in $\mathcal M_J^-$ is
$$J(q - \tau p) = J(q) - \tau J(p) = i(q - \tau p)$$
$$J(q - \bar\tau p) = J(q) - \bar\tau J(p) = -i(q - \bar\tau p)$$


Subtracting the two equations one finds
$$J(p) = -\frac{1}{\operatorname{Im}(\tau)}q + \frac{\operatorname{Re}(\tau)}{\operatorname{Im}(\tau)}p$$
and adding them gives
$$J(q) = -\frac{\operatorname{Re}(\tau)}{\operatorname{Im}(\tau)}q + \left(\operatorname{Im}(\tau) + \frac{(\operatorname{Re}(\tau))^2}{\operatorname{Im}(\tau)}\right)p$$

With respect to the basis $(q, p)$ the matrix for $J$ is
$$J = \begin{pmatrix}-\frac{\operatorname{Re}(\tau)}{\operatorname{Im}(\tau)} & -\frac{1}{\operatorname{Im}(\tau)}\\ \operatorname{Im}(\tau)+\frac{(\operatorname{Re}(\tau))^2}{\operatorname{Im}(\tau)} & \frac{\operatorname{Re}(\tau)}{\operatorname{Im}(\tau)}\end{pmatrix} = \frac{1}{\operatorname{Im}(\tau)}\begin{pmatrix}-\operatorname{Re}(\tau)&-1\\ |\tau|^2&\operatorname{Re}(\tau)\end{pmatrix}\qquad (21.5)$$
One can easily check that $\det J = 1$, so $J\in SL(2,\mathbf R)$ and is compatible with $\Omega$. The positivity condition here is that the matrix
$$\begin{pmatrix}0&1\\-1&0\end{pmatrix}J = \frac{1}{\operatorname{Im}(\tau)}\begin{pmatrix}|\tau|^2&\operatorname{Re}(\tau)\\ \operatorname{Re}(\tau)&1\end{pmatrix}$$
gives a positive quadratic form. This will be the case when $\operatorname{Im}(\tau) > 0$. We have thus constructed a set of $J$ that are positive, compatible with $\Omega$, and parametrized by an element $\tau$ of the upper half-plane, with $J_0$ corresponding to $\tau = i$.
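A numeric spot-check of these claims (not from the text), running over several $\tau$ in the upper half-plane:

```python
import numpy as np

Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])

def Jtau(tau):
    # the matrix of equation 21.5
    x, y = tau.real, tau.imag
    return np.array([[-x, -1.0],
                     [abs(tau)**2, x]]) / y

rng = np.random.default_rng(0)
ok = True
for tau in (1j, 0.5 + 2.0j, -1.3 + 0.7j):
    J = Jtau(tau)
    ok &= np.allclose(J @ J, -np.eye(2))          # a complex structure
    ok &= np.isclose(np.linalg.det(J), 1.0)       # J in SL(2, R)
    v = rng.standard_normal(2)
    ok &= (v @ Omega @ (J @ v)) > 0               # positivity Ω(v, Jv) > 0
ok = bool(ok)
```

Note that $\tau = i$ reproduces the matrix of $J_0$ from equation 21.2.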
To construct annihilation and creation operators satisfying the standard commutation relations
$$[a_\tau, a_\tau] = [a_\tau^\dagger, a_\tau^\dagger] = 0,\qquad [a_\tau, a_\tau^\dagger] = \mathbf 1$$
set
$$a_\tau = \frac{1}{\sqrt{2\operatorname{Im}(\tau)}}(Q - \bar\tau P),\qquad a_\tau^\dagger = \frac{1}{\sqrt{2\operatorname{Im}(\tau)}}(Q - \tau P)$$
The Hamiltonian with eigenvalues $n + \frac12$ for $n = 0, 1, 2, \dots$ will be
$$H_\tau = \frac12(a_\tau a_\tau^\dagger + a_\tau^\dagger a_\tau) = \frac{1}{2\operatorname{Im}(\tau)}\left(Q^2 + |\tau|^2P^2 - \operatorname{Re}(\tau)(QP + PQ)\right)\qquad (21.6)$$
The lowest energy state will satisfy
$$a_\tau|0\rangle_\tau = 0$$
which in the Schrödinger representation is the differential equation
$$(Q - \bar\tau P)\psi(q) = \left(q + i\bar\tau\frac{d}{dq}\right)\psi(q) = 0$$
which has as solutions
$$\psi(q)\propto e^{\frac{i\tau}{2|\tau|^2}q^2}\qquad (21.7)$$
This will be a normalizable state for $\operatorname{Im}(\tau) > 0$, again showing the necessity of the positivity condition.
Eigenstates of $H_\tau$ for general $\tau$ are known as "squeezed states" in physics. Note that for $\tau$ pure imaginary, they correspond just to a rescaling of variables
$$q\rightarrow\frac{1}{\sqrt{\operatorname{Im}(\tau)}}q,\qquad p\rightarrow\sqrt{\operatorname{Im}(\tau)}\,p$$
Such states are "squeezed" in the sense that for $\operatorname{Im}(\tau)$ large the position uncertainty in the state $|0\rangle_\tau$ will become small (while the momentum uncertainty becomes large).
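One way to see that $H_\tau$ really has spectrum $n + \frac12$ (an illustration, not from the text) is to build 21.6 from truncated $Q, P$ matrices in the number basis and diagonalize; the truncation size $N$ below is an assumption chosen large enough that the low-lying eigenvalues are converged:

```python
import numpy as np

N = 120                                   # truncation size (an assumption)
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
Q = (a + a.T) / np.sqrt(2)                # Q = (a + a†)/sqrt(2)
P = (a - a.T) / (1j * np.sqrt(2))         # P = (a - a†)/(i sqrt(2))

tau = 0.4 + 1.5j
x, y = tau.real, tau.imag
# the Hamiltonian of equation 21.6
H = (Q @ Q + abs(tau)**2 * (P @ P) - x * (Q @ P + P @ Q)) / (2 * y)

evals = np.sort(np.linalg.eigvalsh(H))
spectrum_ok = np.allclose(evals[:6], np.arange(6) + 0.5, atol=1e-6)
```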
The construction for d = 1 can be generalized to arbitrary d, with complex
structures J now parametrized by a d-dimensional complex matrix τ which must
be symmetric (to be compatible with Ω), and such that Im(τ ) is a positive-
definite matrix. The space of such complex structures is known as the Siegel
upper half-space.

21.5 Coherent states and the Heisenberg group action
Since the Hamiltonian for the harmonic oscillator does not commute with the
operators aj or a†j which give the representation of the Lie algebra h2d+1 on
the state space HBF , the Heisenberg Lie group and its Lie algebra are not
symmetries of the system. Energy eigenstates do not break up into irreducible
representations of the group but rather the entire state space makes up such
an irreducible representation. The state space for the harmonic oscillator does
however have a distinguished state, the lowest energy state |0i, and one can ask
what happens to this state under the Heisenberg group action. We’ll study this
question for the simplest case of d = 1 and the standard complex structure J0 .
Considering the basis $1, a, a^\dagger$ of operators for the Lie algebra representation, we see that the first acts as a constant on $|0\rangle$, generating a phase transformation of the state, while the second annihilates $|0\rangle$, so generates group transformations that leave the state invariant. It is only the third operator $a^\dagger$ that takes $|0\rangle$ to other non-zero states, and one could consider the family of states
$$e^{\alpha a^\dagger}|0\rangle$$
for $\alpha\in\mathbf C$. The transformations $e^{\alpha a^\dagger}$ are not unitary since $\alpha a^\dagger$ is not skew-adjoint. It is better to fix this by replacing $\alpha a^\dagger$ with the skew-adjoint combination $\alpha a^\dagger - \bar\alpha a$, defining

Definition (Coherent states). The coherent states in $\mathcal H$ are the states
$$|\alpha\rangle = e^{\alpha a^\dagger - \bar\alpha a}|0\rangle$$
where $\alpha\in\mathbf C$.

Since $e^{\alpha a^\dagger - \bar\alpha a}$ is unitary, the $|\alpha\rangle$ will be a family of distinct normalized states in $\mathcal H$, with $\alpha = 0$ corresponding to the lowest energy state $|0\rangle$. These are, up to phase transformation, precisely the states one gets by acting on $|0\rangle$ with arbitrary elements of the Heisenberg group $H_3$.
Using the Baker-Campbell-Hausdorff formula gives
$$|\alpha\rangle = e^{\alpha a^\dagger - \bar\alpha a}|0\rangle = e^{\alpha a^\dagger}e^{-\bar\alpha a}e^{-\frac{|\alpha|^2}{2}}|0\rangle$$
and since $a|0\rangle = 0$ one has
$$|\alpha\rangle = e^{-\frac{|\alpha|^2}{2}}e^{\alpha a^\dagger}|0\rangle = e^{-\frac{|\alpha|^2}{2}}\sum_{n=0}^{\infty}\frac{\alpha^n}{\sqrt{n!}}|n\rangle\qquad (21.8)$$
Since $a|n\rangle = \sqrt n|n-1\rangle$ one finds
$$a|\alpha\rangle = e^{-\frac{|\alpha|^2}{2}}\sum_{n=1}^{\infty}\frac{\alpha^n}{\sqrt{(n-1)!}}|n-1\rangle = \alpha|\alpha\rangle$$
and this property could be used as an equivalent definition of coherent states.
Note that coherent states are superpositions of different states |ni, so are
not eigenvectors of the number operator $N$. They are eigenvectors of
$$a = \frac{1}{\sqrt 2}(Q + iP)$$
with eigenvalue $\alpha$, so one can try and think of $\alpha$ as a complex number whose real part gives the position and imaginary part the momentum. This does not lead to a violation of the Heisenberg uncertainty principle since $a$ is not a self-adjoint operator, and thus not an observable. Such states are however very useful for describing certain sorts of physical phenomena, for instance the state of a laser beam, where (for each momentum component of the electromagnetic field) one does not have a definite number of photons, but does have a definite amplitude and phase.
One thing coherent states do provide is an alternate complete set of norm one vectors in $\mathcal H$, so any state can be written in terms of them. However, these states are not orthogonal (they are eigenvectors of a non-self-adjoint operator, so the spectral theorem for self-adjoint operators does not apply). One can easily compute that
$$|\langle\beta|\alpha\rangle|^2 = e^{-|\alpha-\beta|^2}$$
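These properties are easy to confirm in a truncated number basis (a sketch, not from the text; the truncation size $N$ is an assumption, large enough that the neglected components are negligible for the chosen $\alpha, \beta$):

```python
import numpy as np
from math import factorial

N = 60                                    # truncation size (an assumption)
n = np.arange(N)
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)

def coherent(alpha):
    # components of the coherent state expansion 21.8 in the number basis
    v = alpha**n / np.sqrt([float(factorial(k)) for k in n])
    return np.exp(-abs(alpha)**2 / 2) * v

alpha, beta = 0.8 + 0.5j, -0.3 + 1.1j
ka, kb = coherent(alpha), coherent(beta)

norm_ok = np.isclose(np.vdot(ka, ka).real, 1.0)               # normalized
eig_ok = np.allclose(a @ ka, alpha * ka, atol=1e-10)          # a|α> = α|α>
overlap_ok = np.isclose(abs(np.vdot(kb, ka))**2,
                        np.exp(-abs(alpha - beta)**2))        # overlap formula
```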
One possible reason these states are given the name "coherent" is that they remain coherent states as they evolve in time (for the harmonic oscillator Hamiltonian), with $\alpha$ evolving in time along a classical phase space trajectory. If the state at $t = 0$ is a coherent state labeled by $\alpha_0$ ($|\psi(0)\rangle = |\alpha_0\rangle$), by 21.8, at later times one has (here $\hbar = \omega = 1$)
$$\begin{aligned}|\psi(t)\rangle &= e^{-iHt}|\alpha_0\rangle\\
&= e^{-iHt}e^{-\frac{|\alpha_0|^2}{2}}\sum_{n=0}^{\infty}\frac{\alpha_0^n}{\sqrt{n!}}|n\rangle\\
&= e^{-\frac{|\alpha_0|^2}{2}}\sum_{n=0}^{\infty}e^{-i(n+\frac12)t}\frac{\alpha_0^n}{\sqrt{n!}}|n\rangle\\
&= e^{-i\frac t2}e^{-\frac{|e^{-it}\alpha_0|^2}{2}}\sum_{n=0}^{\infty}\frac{(e^{-it}\alpha_0)^n}{\sqrt{n!}}|n\rangle\\
&= e^{-i\frac t2}|e^{-it}\alpha_0\rangle\end{aligned}$$
Up to the phase factor $e^{-i\frac t2}$, this remains a coherent state, with label $\alpha$ given by the classical time dependence of the complex coordinate $z(t) = \frac{1}{\sqrt 2}(q(t) + ip(t))$ for the harmonic oscillator (see 20.1) with $z(0) = \alpha_0$.
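The time-evolution claim can be checked componentwise in a truncated number basis (a sketch, not from the text):

```python
import numpy as np
from math import factorial

N = 60                                    # truncation size (an assumption)
n = np.arange(N)

def coherent(alpha):
    # components of the coherent state expansion 21.8 in the number basis
    v = alpha**n / np.sqrt([float(factorial(k)) for k in n])
    return np.exp(-abs(alpha)**2 / 2) * v

alpha0, t = 1.1 - 0.4j, 0.7
# e^{-iHt} acts diagonally with phases e^{-i(n + 1/2)t}
evolved = np.exp(-1j * (n + 0.5) * t) * coherent(alpha0)
expected = np.exp(-1j * t / 2) * coherent(np.exp(-1j * t) * alpha0)
evolution_ok = np.allclose(evolved, expected)
```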

Digression (Spin coherent states). One can perform a similar construction replacing the group $H_3$ by the group $SU(2)$, and the state $|0\rangle$ by a highest weight vector of an irreducible representation $(\pi_n, V^n = \mathbf C^{n+1})$ of spin $\frac n2$. Writing $|\frac n2\rangle$ for a highest weight vector, we have
$$\pi_n'(S_3)|\tfrac n2\rangle = \tfrac n2|\tfrac n2\rangle,\qquad \pi_n'(S_+)|\tfrac n2\rangle = 0$$
and we can create a family of spin coherent states by acting on $|\frac n2\rangle$ by elements of $SU(2)$. If we identify states in this family that differ just by a phase, the states are parametrized by a sphere.
By analogy with the Heisenberg group coherent states, with $\pi_n'(S_+)$ playing the role of the annihilation operator $a$ and $\pi_n'(S_-)$ playing the role of the creation operator $a^\dagger$, we can define a skew-adjoint transformation
$$\frac12\theta e^{i\phi}\pi_n'(S_-) - \frac12\theta e^{-i\phi}\pi_n'(S_+)$$
and exponentiate to get a family of unitary transformations parametrized by $(\theta, \phi)$. Acting on the highest weight state we get a definition of the family of spin coherent states as
$$|\theta, \phi\rangle = e^{\frac12\theta e^{i\phi}\pi_n'(S_-) - \frac12\theta e^{-i\phi}\pi_n'(S_+)}|\tfrac n2\rangle$$
One can show that the $SU(2)$ group element used here corresponds, in terms of its action on vectors, to a rotation by an angle $\theta$ about the axis $(\sin\phi, -\cos\phi, 0)$, so one can associate the state $|\theta, \phi\rangle$ to the unit vector along the $z$-axis, rotated by this transformation.

21.6 For further reading


For more about complex structures on symplectic vector spaces, including a discussion of the Siegel upper half-space (the space of positive compatible complex structures for arbitrary d), see chapter 1.4 of [7]. Coherent states and spin coherent states are discussed in chapter 21 of [57]. Few quantum mechanics textbooks discuss squeezed states; for one that does, see chapter 12 of [78].

Chapter 22

The Harmonic Oscillator and the Metaplectic Representation, d = 1

In the last chapter we examined those aspects of the harmonic oscillator quan-
tum system and the Bargmann-Fock representation that correspond to quan-
tization of phase space functions of order less than or equal to one, finding a
unitary representation ΓJ of the Heisenberg group H2d+1 for each positive com-
patible complex structure J on the dual phase space M. We’ll now turn to what
happens for order two functions, which will give a representation of M p(2d, R)
on the harmonic oscillator state space, extending ΓJ to a representation of the
full Jacobi group. In this chapter we will see what happens in some detail for
the d = 1 case, where the symplectic group is just SL(2, R).
The choice of complex structure J corresponds not only to a choice of |0iJ
(since J determines which operators are annihilation operators), but also to
that of a specific subgroup U (1) ⊂ SL(2, R). The nature of the double-cover
needed for ΓJ to be a true representation (not just a representation up to sign)
is best seen by considering the action of this U (1) on the harmonic oscillator
state space. The Lie algebra of this $U(1)$ acts on energy eigenstates with an extra $\frac12$ term, well-known to physicists as the non-zero energy of the vacuum state, and this shows the need for the double-cover.

22.1 The metaplectic representation for d = 1


In the last chapter we saw that choosing a complex structure J on the dual
phase space M = R2d allows one to break up complex linear combinations of
the Qj , Pj into annihilation and creation operators, giving a unitary represen-
tation Γ0J of h2d+1 on the Fock space Fd . For the standard choice of complex
structure J = J0 we have Γ0J0 = Γ0BF and this representation is the one used in

the standard description in terms of annihilation and creation operators of the
quantum harmonic oscillator system with d degrees of freedom.
Recall from our discussion of the Schrödinger representation Γ0S in section
18 that we can extend that representation from h2d+1 to include quadratic
combinations of the qj , pj , getting a unitary representation of the semi-direct
product h2d+1 ⋊ sp(2d, R). Restricting attention to the sp(2d, R) factor, we get
the metaplectic representation, and it is this that we will construct explicitly
using Γ0BF instead of Γ0S . In this chapter, we’ll start with the case d = 1, where
sp(2, R) = sl(2, R).
One can readily compute the Poisson brackets of order two combinations of z and z̄ using the basic relation {z, z̄} = i and the Leibniz rule, finding the following for the non-zero cases
\[ \{z\bar z, z^2\} = -2iz^2,\quad \{z\bar z, \bar z^2\} = 2i\bar z^2,\quad \{z^2, \bar z^2\} = 4iz\bar z \]

In the case of the Schrödinger representation, our quadratic combinations of p and q were real, and we could identify the Lie algebra they generated with the Lie algebra sl(2, R) of traceless 2 by 2 real matrices with basis
\[ E = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},\quad F = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},\quad G = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \]

Since we have complexified, our quadratic combinations of z and z̄ are in the complexification of sl(2, R), the Lie algebra sl(2, C) of traceless 2 by 2 complex matrices. We can take as a basis of sl(2, C) over the complex numbers
\[ Z = E - F,\quad X_\pm = \frac{1}{2}(G \pm i(E + F)) \]
which satisfy

[Z, X− ] = −2iX− , [Z, X+ ] = 2iX+ , [X+ , X− ] = −iZ

and then use as our isomorphism between quadratics in z, z̄ and sl(2, C)
\[ \frac{z^2}{2} \leftrightarrow X_-,\quad \frac{\bar z^2}{2} \leftrightarrow X_+,\quad z\bar z \leftrightarrow Z \]
The element
\[ z\bar z = \frac{1}{2}(q^2 + p^2) \leftrightarrow Z = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \]
exponentiates to give an SO(2) = U(1) subgroup of SL(2, R) with elements of the form
\[ e^{\theta Z} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \]
Note that h = ½(q² + p²) = zz̄ is the classical Hamiltonian function for the harmonic oscillator.
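These matrix identities can be verified numerically. A minimal sketch (assuming numpy is available) checks the sl(2, C) commutation relations for Z, X± and the rotation form of e^{θZ}; since Z² = −1, the exponential can be computed in closed form as cos θ·1 + sin θ·Z.

```python
import numpy as np

# Basis of sl(2,R) used in the text
E = np.array([[0, 1], [0, 0]], dtype=complex)
F = np.array([[0, 0], [1, 0]], dtype=complex)
G = np.array([[1, 0], [0, -1]], dtype=complex)

Z = E - F
Xp = 0.5 * (G + 1j * (E + F))   # X_+
Xm = 0.5 * (G - 1j * (E + F))   # X_-

comm = lambda A, B: A @ B - B @ A

# The sl(2,C) commutation relations quoted above:
assert np.allclose(comm(Z, Xm), -2j * Xm)
assert np.allclose(comm(Z, Xp), 2j * Xp)
assert np.allclose(comm(Xp, Xm), -1j * Z)

# Z^2 = -1, so exp(theta Z) = cos(theta) 1 + sin(theta) Z is the SO(2) rotation:
theta = 0.7
R = np.cos(theta) * np.eye(2) + np.sin(theta) * Z
assert np.allclose(R, [[np.cos(theta), np.sin(theta)],
                       [-np.sin(theta), np.cos(theta)]])
```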

We can now quantize quadratics in z and z̄ using annihilation and creation operators acting on the Fock space F. There is no operator ordering ambiguity for
\[ z^2 \rightarrow (a^\dagger)^2 = w^2,\quad \bar z^2 \rightarrow a^2 = \frac{d^2}{dw^2} \]
For the case of zz̄ (which is real), in order to get the sl(2, R) commutation relations to come out right (in particular, the Poisson bracket {z², z̄²} = 4izz̄), we must take the symmetric combination
\[ z\bar z \rightarrow \frac{1}{2}(aa^\dagger + a^\dagger a) = a^\dagger a + \frac{1}{2} = w\frac{d}{dw} + \frac{1}{2} \]
(which of course is just the standard Hamiltonian for the quantum harmonic oscillator).
Multiplying as usual by −i one can now define an extension of the Bargmann-Fock representation to an sl(2, C) representation by taking
\[ \Gamma'_{BF}(X_+) = -\frac{i}{2}a^2,\quad \Gamma'_{BF}(X_-) = -\frac{i}{2}(a^\dagger)^2,\quad \Gamma'_{BF}(Z) = -\frac{i}{2}(a^\dagger a + aa^\dagger) \]
One can check that we have made the right choice of Γ′_BF(Z) to get an sl(2, C) representation by computing
\[ [\Gamma'_{BF}(X_+), \Gamma'_{BF}(X_-)] = \left[-\frac{i}{2}a^2, -\frac{i}{2}(a^\dagger)^2\right] = -\frac{1}{2}(aa^\dagger + a^\dagger a) = -i\Gamma'_{BF}(Z) = \Gamma'_{BF}([X_+, X_-]) \]

As a representation of the real sub-Lie algebra sl(2, R) of sl(2, C), one has (using the fact that G, E + F, E − F is a real basis of sl(2, R)):

Definition (Metaplectic representation of sl(2, R)). The representation Γ′_BF on F given by
\[ \Gamma'_{BF}(G) = \Gamma'_{BF}(X_+ + X_-) = -\frac{i}{2}((a^\dagger)^2 + a^2) \]
\[ \Gamma'_{BF}(E + F) = \Gamma'_{BF}(-i(X_+ - X_-)) = \frac{1}{2}((a^\dagger)^2 - a^2) \]  (22.1)
\[ \Gamma'_{BF}(E - F) = \Gamma'_{BF}(Z) = -\frac{i}{2}(a^\dagger a + aa^\dagger) \]
is a representation of sl(2, R) called the metaplectic representation.

Note that one can explicitly see from these expressions that this is a unitary
representation, since all the operators are skew-adjoint (using the fact that a
and a† are each other’s adjoints).
This representation Γ0BF will be unitarily equivalent to the Schrödinger ver-
sion Γ0S found earlier when quantizing q 2 , p2 , pq as operators on H = L2 (R).
It is however much easier to work with since it can be studied as the state

space of the quantum harmonic oscillator, with the Lie algebra acting simply
by quadratic expressions in the annihilation and creation operators.
One thing that can now easily be seen is that this representation Γ0BF does
not integrate to give a representation of the group SL(2, R). If the Lie algebra
representation Γ0BF comes from a Lie group representation ΓBF of SL(2, R), we
have
\[ \Gamma_{BF}(e^{\theta Z}) = e^{\theta\Gamma'_{BF}(Z)} \]
where
\[ \Gamma'_{BF}(Z) = -i\left(a^\dagger a + \frac{1}{2}\right) = -i\left(N + \frac{1}{2}\right) \]
so
\[ \Gamma_{BF}(e^{\theta Z})|n\rangle = e^{-i\theta(n + \frac{1}{2})}|n\rangle \]

Taking θ = 2π, this gives an inconsistency
\[ \Gamma_{BF}(\mathbf 1)|n\rangle = -|n\rangle \]
which has its origin in the physical phenomenon that the energy of the lowest energy eigenstate |0⟩ is 1/2 rather than 0, so not an integer.
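The sign inconsistency can be made concrete on a truncated state space, since Γ_BF(e^{θZ}) acts diagonally on number eigenstates. A small numpy sketch (the truncation size nmax is arbitrary):

```python
import numpy as np

nmax = 8
n = np.arange(nmax + 1)

def Gamma(theta):
    # Gamma_BF(e^{theta Z}) acts on |n> as the phase e^{-i theta (n + 1/2)}
    return np.diag(np.exp(-1j * theta * (n + 0.5)))

# One full circle in the U(1) subgroup gives -1, not the identity ...
assert np.allclose(Gamma(2 * np.pi), -np.eye(nmax + 1))
# ... and only after going around twice does one return to the identity,
# exactly the behavior expected of a representation of a double cover:
assert np.allclose(Gamma(4 * np.pi), np.eye(nmax + 1))
```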
This is precisely the same sort of problem we found when studying the
spinor representation of the Lie algebra so(3). Just as in that case, the problem
indicates that we need to consider not the group SL(2, R), but a double cover,
the metaplectic group M p(2, R). The behavior here is quite a bit more subtle
than in the Spin(3) double cover case, where Spin(3) was just the group SU (2),
and topologically the only non-trivial cover of SO(3) was the Spin(3) one since π₁(SO(3)) = Z₂. Here one has π₁(SL(2, R)) = Z, and each extra time one goes around the U(1) subgroup we are looking at one gets a topologically different non-contractible loop in the group. As a result, SL(2, R) has lots of non-trivial covering groups, of which only one interests us, the double cover M p(2, R). In particular, there is an infinite-sheeted universal cover \(\widetilde{SL}(2, \mathbf R)\), but that plays no role here.

Digression. This group M p(2, R) is quite unusual in that it is a finite-dimensional Lie group, but does not have any sort of description as a group of
finite-dimensional matrices. This is related to the fact that its only interesting
irreducible representation is the infinite-dimensional one we are studying. The
lack of any significant irreducible finite-dimensional representations corresponds
to its not having a matrix description, which would give such a representation.
Note that the lack of a matrix description means that this is a case where the
definition we gave of a Lie algebra in terms of the matrix exponential does not
apply. The more general geometric definition of the Lie algebra of a group in
terms of the tangent space at the identity of the group does apply, although to do
this one really needs a construction of the double cover M p(2, R), which is quite
non-trivial. This is not actually a problem for purely Lie algebra calculations,
since the Lie algebras of M p(2, R) and SL(2, R) can be identified.

Another aspect of the metaplectic representation that is relatively easy to
see in the Bargmann-Fock construction is that the state space F is not an
irreducible representation, but is the sum of two irreducible representations

F = Feven ⊕ Fodd

where Feven consists of the even functions, Fodd of odd functions. On the sub-
space F f in ⊂ F of finite sums of the number eigenstates, these are just the
even and odd degree polynomials. Since the generators of the Lie algebra rep-
resentation are degree two combinations of annihilation and creation operators,
they will take even functions to even functions and odd to odd. The separate irreducibility of these two pieces is due to the fact that, when n ≡ m mod 2, one can get from state |n⟩ to any other |m⟩ by repeated application of the Lie algebra representation operators.
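The parity structure can be checked directly on a truncated Fock space (a numpy sketch): quadratics in a, a† shift the number eigenvalue by 0 or ±2, so their matrix elements between states of opposite parity all vanish, while (a†)² links |n⟩ to |n+2⟩ within each parity class.

```python
import numpy as np

nmax = 10
a = np.diag(np.sqrt(np.arange(1, nmax + 1)), k=1).astype(complex)  # a|n> = sqrt(n)|n-1>
ad = a.conj().T

parity = (-1) ** np.arange(nmax + 1)   # +1 on even states, -1 on odd states

# Quadratic generators never connect F_even to F_odd:
mask = np.not_equal.outer(parity, parity)   # index pairs of opposite parity
for Q in (a @ a, ad @ ad, ad @ a + a @ ad):
    assert np.allclose(Q[mask], 0)

# (a†)^2 takes |n> to |n+2>, so repeated application of the generators
# reaches every basis state of a given parity:
assert abs((ad @ ad)[2, 0]) > 0 and abs((ad @ ad)[3, 1]) > 0
```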

22.2 Complex structures and the SL(2, R) action on M
Recall from the discussion in chapter 18 that the existence of the metaplectic
representation can be understood in terms of the fact that the action of SL(2, R)
on M implies an action by automorphisms on the Heisenberg group H3 , together
with the uniqueness (up to unitary equivalence) of the irreducible representation of
H3 . This SL(2, R) action on H3 was studied in section 14.2 where we saw that
it was given infinitesimally by Poisson brackets between order two (sl(2, R))
and linear (h3 ) polynomials (see equation 14.8).
The Bargmann-Fock construction above of the metaplectic representation
depends on an extra choice, one that is not invariant under the SL(2, R) action.
This is the choice of a complex structure J₀ to provide the splitting M ⊗ C = M⁺_{J₀} ⊕ M⁻_{J₀} into ±i eigenspaces of J₀. Recall that the group SL(2, R) acts on M by matrices
\[ \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \]
satisfying αδ − βγ = 1 (with respect to the choice of a basis {q, p}). Only a subgroup of SL(2, R) will respect the splitting provided by J₀: the subgroup of matrices commuting with J₀. It is this subgroup of SL(2, R) that acts on M ⊗ C (extending the action on M by complex linearity) taking M⁺_{J₀} to M⁺_{J₀} and M⁻_{J₀} to M⁻_{J₀}.
This commutativity condition is explicitly
\[ \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \]
so
\[ \begin{pmatrix} \beta & -\alpha \\ \delta & -\gamma \end{pmatrix} = \begin{pmatrix} -\gamma & -\delta \\ \alpha & \beta \end{pmatrix} \]
which implies β = −γ and α = δ. The elements of SL(2, R) that we want will be of the form
\[ \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix} \]
with unit determinant, so α² + β² = 1. This is the U(1) = SO(2) subgroup of SL(2, R) of matrices of the form
\[ \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} = e^{\theta Z} \]
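The commutant computation can be delegated to a computer algebra system; a sympy sketch of the condition that g commute with the standard complex structure:

```python
from sympy import symbols, Matrix, solve

a, b, c, d = symbols('a b c d', real=True)
g = Matrix([[a, b], [c, d]])
J0 = Matrix([[0, -1], [1, 0]])   # standard complex structure, J0**2 = -1

# Entries of g*J0 - J0*g give the commutation condition as equations
eqs = list(g * J0 - J0 * g)
sol = solve(eqs, [c, d], dict=True)[0]

# The condition forces c = -b and d = a, i.e. g = [[a, b], [-b, a]];
# adding det g = a^2 + b^2 = 1 then gives the SO(2) = U(1) subgroup.
assert sol[c] == -b and sol[d] == a

# The resulting matrices do commute with J0:
gc = g.subs(sol)
assert gc * J0 == J0 * gc
```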

The metaplectic representation restricted to this subgroup was studied using


the Schrödinger representation in 18.2.1, and its role in showing the necessity of
the metaplectic double cover was seen in the last section. Here we’ll study the
representation on the same subgroup, but in the Bargmann-Fock version using
annihilation and creation operators.
Introducing the complex structure J₀ and thus coordinates z, z̄ on the complexification M ⊗ C, the Poisson bracket relations 14.8 after complexification break up into two sorts: those that preserve the complex structure and those that don't. The ones that preserve the complex structure will be

{zz̄, z} = −iz,  {zz̄, z̄} = iz̄   (22.2)

since it is zz̄ that corresponds to Z according to our isomorphism of order two polynomials and sl(2, R). Note that this could be written as
\[ \left\{z\bar z, \begin{pmatrix} z \\ \bar z \end{pmatrix}\right\} = \begin{pmatrix} -i & 0 \\ 0 & i \end{pmatrix}\begin{pmatrix} z \\ \bar z \end{pmatrix} \]
which is a (complexified) example of equation 14.14, since µ_Z = zz̄ and Z = −J₀ acts as −i on the coordinate function z, and i on the coordinate function z̄.
Quantization by the metaplectic representation gives (see 22.1)
\[ \Gamma'_{BF}(z\bar z) = \Gamma'_{BF}(Z) = -\frac{i}{2}(aa^\dagger + a^\dagger a) \]
and the quantized analogs of 22.2 are
\[ \left[-\frac{i}{2}(aa^\dagger + a^\dagger a), a^\dagger\right] = -ia^\dagger,\quad \left[-\frac{i}{2}(aa^\dagger + a^\dagger a), a\right] = ia \]  (22.3)
2 2

Exponentiating, one has g = e^{θZ} ∈ U(1) ⊂ SL(2, R) and unitary operators
\[ U_g = \Gamma_{BF}(e^{\theta Z}) = e^{-i\frac{\theta}{2}(aa^\dagger + a^\dagger a)} \]
which satisfy
\[ U_g a^\dagger U_g^{-1} = e^{-i\theta}a^\dagger,\quad U_g a U_g^{-1} = e^{i\theta}a \]  (22.4)

Note that, using equation 5.1, one has
\[ \frac{d}{d\theta}\left(U_g a^\dagger U_g^{-1}\right)\Big|_{\theta = 0} = \left[-\frac{i}{2}(aa^\dagger + a^\dagger a), a^\dagger\right] \]
so equation 22.3 is just the derivative at the identity of equation 22.4.
We see that, on operators, conjugation by the action of the U(1) subgroup of SL(2, R) does not mix creation and annihilation operators. On the distinguished state |0⟩, U_g acts as the phase transformation
\[ U_g|0\rangle = e^{-i\frac{\theta}{2}}|0\rangle \]
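Both conjugation relations 22.4 and the phase on |0⟩ can be checked exactly with truncated matrices, since U_g is diagonal in the number basis, with the diagonal entries e^{−iθ(n + 1/2)}. A numpy sketch:

```python
import numpy as np

nmax = 12
n = np.arange(nmax + 1)
a = np.diag(np.sqrt(n[1:]), k=1).astype(complex)   # a|n> = sqrt(n)|n-1>
ad = a.conj().T                                    # creation operator

theta = 0.4
# U_g = exp(-i (theta/2)(a a† + a† a)) acts diagonally as exp(-i theta (n + 1/2))
U = np.diag(np.exp(-1j * theta * (n + 0.5)))
Uinv = U.conj().T

# Conjugation by U_g multiplies a† and a by opposite phases (equation 22.4):
assert np.allclose(U @ ad @ Uinv, np.exp(-1j * theta) * ad)
assert np.allclose(U @ a @ Uinv, np.exp(1j * theta) * a)

# On the distinguished state |0>, U_g is the phase e^{-i theta/2}:
vac = np.zeros(nmax + 1, dtype=complex)
vac[0] = 1
assert np.allclose(U @ vac, np.exp(-1j * theta / 2) * vac)
```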

Besides 22.2, there are also Poisson bracket relations corresponding to in-
finitesimal sl(2, R) transformations that do not preserve the complex structure.
They are

{z², z̄} = 2iz,  {z², z} = 0,  {z̄², z} = −2iz̄,  {z̄², z̄} = 0   (22.5)

As an example of SL(2, R) transformations that change the complex structure, consider the subgroup of elements of the form
\[ g_\alpha = \begin{pmatrix} e^\alpha & 0 \\ 0 & e^{-\alpha} \end{pmatrix} \]
For α > 0 these are "squeezing" transformations which expand vectors in the q direction in M, and contract them in the p direction. One can think of these as a change of basis in M, with the complex structure in the new basis
\[ J_\alpha = g_\alpha J_0 g_\alpha^{-1} = \begin{pmatrix} e^\alpha & 0 \\ 0 & e^{-\alpha} \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} e^{-\alpha} & 0 \\ 0 & e^\alpha \end{pmatrix} = \begin{pmatrix} 0 & -e^{2\alpha} \\ e^{-2\alpha} & 0 \end{pmatrix} \]
Comparing to equation 21.5 we find that J_α is the positive compatible complex structure with parameter τ = ie^{2α}.
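A quick numerical check of this change of complex structure (a numpy sketch, using the convention J₀ = −Z from earlier in the chapter, i.e. J₀ = [[0, −1], [1, 0]]):

```python
import numpy as np

alpha = 0.3
g = np.diag([np.exp(alpha), np.exp(-alpha)])   # the squeezing transformation g_alpha
J0 = np.array([[0., -1.], [1., 0.]])           # standard complex structure, J0 = -Z

Jalpha = g @ J0 @ np.linalg.inv(g)

# J_alpha is again a complex structure: it squares to -1 ...
assert np.allclose(Jalpha @ Jalpha, -np.eye(2))
# ... and works out to [[0, -e^{2 alpha}], [e^{-2 alpha}, 0]]
assert np.allclose(Jalpha, [[0., -np.exp(2 * alpha)],
                            [np.exp(-2 * alpha), 0.]])
```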
The relations 22.5 correspond to the following relations of annihilation and
creation operators:

[(a† )2 , a] = −2a† , [(a† )2 , a† ] = 0, [a2 , a† ] = 2a, [a2 , a] = 0

and one sees that conjugation by exponentials of linear combinations of a2 and


(a† )2 will mix annihilation and creation operators (unlike the U (1) ⊂ SL(2, R)
case of equations 22.3 and 22.4). For the example above

gα = eαG

and we find
\[ \Gamma_{BF}(g_\alpha) = \Gamma_{BF}(e^{\alpha G}) = e^{\alpha\Gamma'_{BF}(G)} = e^{-\frac{i\alpha}{2}((a^\dagger)^2 + a^2)} \]

by 22.1.

The Γ_BF(g_α) are unitary operators (since they are exponentials of skew-adjoint operators) on the Fock space F that take |0⟩ to a different state |0⟩_α, not proportional to |0⟩. This state |0⟩_α can be used to characterize the complex structure J_α, as the one corresponding to the change of variables that takes the annihilation operator a to the linear combination of annihilation and creation operators that annihilates |0⟩_α. One can generalize this to arbitrary positive compatible complex structures, getting a map that takes τ in the upper half-plane to vectors in F (modulo scalar multiplication of the vectors). This map turns out to be quite useful in algebraic geometry, providing an embedding of the upper half-plane in complex projective space (the same holds true for d > 1 for the Siegel upper half-space).

22.3 Normal Ordering


Whenever one has a product involving both z and z̄ that one would like to quantize, the non-trivial commutation relation of a and a† means that one has different inequivalent possibilities, depending on the order one chooses for the a and a†. In this case, we chose to quantize h = zz̄ using the symmetric choice
\[ H = \frac{1}{2}(aa^\dagger + a^\dagger a) = a^\dagger a + \frac{1}{2} \]
because quantization of order two polynomials then gave a representation of
sl(2, R). We could instead have chosen to use a† a, which is an example of a
“normal-ordered” product.
Definition. Normal-ordered product
Given any product P of the a and a† operators, the normal ordered product
of P , written :P : is given by re-ordering the product so that all factors a† are
on the left, all factors a on the right, for example

:a2 a† a(a† )3 : = (a† )4 a3

The advantage of working with the normal-ordered choice
\[ a^\dagger a = \ :\frac{1}{2}(aa^\dagger + a^\dagger a): \]
is that it acts trivially on |0i and has integer rather than half-integer eigenvalues
on F. There is then no need to invoke a double-covering. The disadvantage
is that one gets a representation of u(1) that does not extend to sl(2, R). One
also needs to keep in mind that the definition of normal-ordering depends upon
the choice of J, or equivalently, the choice of distinguished state |0i annihilated
by the annihilation operators. A better notation would be something like :P :J
rather than just :P :.
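The difference between the two choices is already visible in their spectra. A small numpy sketch: the normal-ordered generator has integer eigenvalues and exponentiates to a true representation of U(1), while the symmetric choice picks up a sign after one full circle.

```python
import numpy as np

nmax = 7
n = np.arange(nmax + 1)

# Eigenvalues of the two quantizations of h = z zbar on number eigenstates:
sym = n + 0.5          # (1/2)(a a† + a† a), the metaplectic choice
nrm = n.astype(float)  # a† a = :(1/2)(a a† + a† a):, the normal-ordered choice

# The normal-ordered generator integrates to a true U(1) representation:
assert np.allclose(np.exp(-2j * np.pi * nrm), np.ones(nmax + 1))
# The symmetric choice returns to -1 after one full circle (double cover):
assert np.allclose(np.exp(-2j * np.pi * sym), -np.ones(nmax + 1))
```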
We have now seen that the necessary choice of a complex structure J in
a Bargmann-Fock type quantization shows up in the following distinguished
features of the representation:


• The choice of the decomposition M ⊗ C = M⁺_J ⊕ M⁻_J. After quantization this determines which operators are linear combinations of annihilation operators and which are linear combinations of creation operators.
• The choice of normal-ordering prescription. SL(2, R) transformations
that mix annihilation and creation operators change the definition of the
normal ordering symbol : :.
• The choice (up to a scalar factor) of a distinguished state |0i ∈ H, the
state annihilated by annihilation operators. Equation 21.7 gives this state
explicitly in the Schrödinger representation, showing how it depends on
the complex structure τ .

• The choice of quadratic Hamiltonian as half the symmetrized product


of annihilation and creation operators, and thus with energy eigenstates
given by applying creation operators to |0i. See equation 21.6 for the
explicit form of this as a function of τ .
• The choice of a distinguished subgroup U (1) ⊂ SL(2, R), the subgroup
that commutes with J.
We saw in chapter 21 that the space of positive, compatible complex struc-
tures can be parametrized by the upper half-plane. An alternate way of charac-
terizing this space is as the quotient space SL(2, R)/U (1), since SL(2, R) acts
transitively on the complex structures (by conjugation of the matrix for J), with
the stabilizer of J a subgroup U (1) ⊂ SL(2, R).
SL(2, R) transformations that do not commute with the complex structure
are known in the physics literature as “Bogoliubov transformations”. Instead
of describing such transformations as we have done, in terms of real matrices
and the basis q, p of M, one can instead use complex matrices and the basis
z, z̄ of M ⊗ C. Transformations that preserve the Poisson brackets (and, after
quantization, the commutation relations of annihilation and creation operators)
are given by complex matrices of the form
\[ \begin{pmatrix} \alpha & \beta \\ \bar\beta & \bar\alpha \end{pmatrix},\quad |\alpha|^2 - |\beta|^2 = 1 \]

The group of such matrices is called SU (1, 1) and is isomorphic to SL(2, R).
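The isomorphism can be exhibited concretely by conjugating with the change of basis from (q, p) to (z, z̄); the matrix T below implements z = (q − ip)/√2, z̄ = (q + ip)/√2, with this normalization our own choice rather than something fixed by the text. A numpy sketch, checking that a random element of SL(2, R) conjugates into SU(1, 1) form:

```python
import numpy as np

# Change of basis from (q, p) to (z, zbar)
T = np.array([[1, -1j], [1, 1j]]) / np.sqrt(2)
Tinv = np.linalg.inv(T)

rng = np.random.default_rng(0)
g = rng.normal(size=(2, 2))
g[1, 1] = (1 + g[0, 1] * g[1, 0]) / g[0, 0]    # adjust one entry so det g = 1
assert np.isclose(np.linalg.det(g), 1)          # g is now in SL(2,R)

h = T @ g @ Tinv
a, b = h[0, 0], h[0, 1]
# h has the SU(1,1) form [[a, b], [conj(b), conj(a)]] with |a|^2 - |b|^2 = 1
assert np.allclose(h[1, 0], np.conj(b))
assert np.allclose(h[1, 1], np.conj(a))
assert np.isclose(abs(a) ** 2 - abs(b) ** 2, 1)
```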

22.4 For further reading


The metaplectic representation is not usually mentioned in the physics litera-
ture, and the discussions in the mathematical literature tend to be aimed at
an advanced audience. Two good examples of such detailed discussions can
be found in [19] and chapters 1 and 11 of [66]. For an example of the use
of Bogoliubov transformations (change of complex structure) in the theory of
superfluidity, see chapter 10.3 of [61].

Chapter 23

The Harmonic Oscillator as


a Representation of U (d)

As a representation of M p(2d, R), Γ_BF is unitarily equivalent (by the Bargmann transform) to Γ_S, the Schrödinger representation studied earlier. The
Bargmann-Fock version though makes some aspects of the representation easier
to study; in particular it makes clear the nature of the double-cover that appears.
A subtle aspect of the Bargmann-Fock construction is that it depends on a
specific choice of complex structure J, a choice which corresponds to a choice of a
distinguished state |0i. This same choice picks out a subgroup U (d) ⊂ Sp(2d, R)
of transformations which commute with J, and the harmonic oscillator state
space gives one a representation of a double-cover of this group. We will see
that normal-ordering the products of annihilation and creation operators turns
this into a representation of U (d) itself. In this way, a U (d) action on the finite-
dimensional phase space gives operators that provide an infinite dimensional
representation of U (d) on the harmonic oscillator state space H. This method
for turning symmetries of the classical phase space into unitary representations
of the symmetry group on a quantum state space is elaborated in great detail
here not just because of its application to these simple quantum systems, but
because it will turn out to be fundamental in our later study of quantum field
theories.
We will see in detail how in the case d = 2 the group U (2) ⊂ Sp(4, R) com-
mutes with the Hamiltonian, so acts as symmetries preserving energy eigenspaces
on the harmonic oscillator state space. This gives the same construction of all
SU (2) ⊂ U (2) irreducible representations that we studied in chapter 8. The
case d = 3 corresponds to the physical example of a quadratic central potential
in three dimensions, with the rotation group acting on the state space as an
SO(3) subgroup of the subgroup U (3) ⊂ Sp(6, R) of symmetries commuting
with the Hamiltonian.

23.1 Complex structures and the Sp(2d, R) action on M
We saw in chapter 21 that a generalization of the Bargmann-Fock representation
can be defined for any choice of a positive compatible complex structure J on
M = R2d . J provides a decomposition

M ⊗ C = M⁺_J ⊕ M⁻_J

One can choose complex coordinates z_j, j = 1, · · · , d, which will be basis elements in M⁺_J, while the z̄_j, j = 1, · · · , d are basis elements in M⁻_J. The standard choice of complex structure J = J₀ corresponds to the choice
\[ z_j = \frac{1}{\sqrt 2}(q_j - ip_j),\quad \bar z_j = \frac{1}{\sqrt 2}(q_j + ip_j) \]
2 2
with more general choices of complex structures given by the linear combinations
of qj , pj described in detail for the d = 1 case in section 21.4. For the calculations
of this chapter, one can assume unless otherwise stated that either the choice
J = J0 has been made, or the choice of complex structure does not matter.
The choice of complexified coordinate functions z_j, z̄_j gives a decomposition
of the complexified Lie algebra sp(2d, C) into three sub-algebras as follows:

• A Lie subalgebra with basis elements z_j z_k (as usual, the Lie bracket is the Poisson bracket). There are ½(d² + d) distinct such basis elements. This is a commutative Lie subalgebra, since the Poisson bracket of any two basis elements is zero.

• A Lie subalgebra with basis elements z̄_j z̄_k. Again, it has dimension ½(d² + d) and is a commutative Lie subalgebra.

• A Lie subalgebra with basis elements z_j z̄_k, which has dimension d². Computing Poisson brackets one finds
\[ \{z_j\bar z_k, z_l\bar z_m\} = z_j\{\bar z_k, z_l\bar z_m\} + \bar z_k\{z_j, z_l\bar z_m\} = -iz_j\bar z_m\delta_{kl} + iz_l\bar z_k\delta_{jm} \]  (23.1)

The first two subalgebras correspond to complexified infinitesimal Sp(2d, R) transformations that do not preserve the decomposition

M ⊗ C = M⁺_{J₀} ⊕ M⁻_{J₀}

These are the analogs for arbitrary d of the Bogoliubov transformations studied
in the case d = 1, but we will not further discuss their properties in the general
case. The last subalgebra is the one we will mostly be interested in since it turns
out that quantization of elements of this subalgebra produces the operators of
most physical interest.

Taking all complex linear combinations, this subalgebra can be identified with the Lie algebra gl(d, C) of all d by d complex matrices, since if E_jk is the matrix with 1 at the j-th row and k-th column, zeros elsewhere, one has
\[ [E_{jk}, E_{lm}] = E_{jm}\delta_{kl} - E_{lk}\delta_{jm} \]
and these provide a basis of gl(d, C). Identifying bases by
\[ iz_j\bar z_k \leftrightarrow E_{jk} \]
gives the isomorphism of Lie algebras. This gl(d, C) is the complexification of u(d), the Lie algebra of the unitary group U(d). Elements of u(d) will correspond to skew-adjoint matrices, so to real linear combinations of the real quadratic functions
\[ z_j\bar z_k + \bar z_j z_k,\quad i(z_j\bar z_k - \bar z_j z_k) \]
on M.
The moment map here is
\[ A \in \mathfrak{gl}(d, \mathbf C) \rightarrow \mu_A = i\sum_{j,k} z_j A_{jk}\bar z_k \]

and we have
Theorem 23.1. One has the Poisson bracket relation
\[ \{\mu_A, \mu_{A'}\} = \mu_{[A,A']} \]
so the moment map is a Lie algebra homomorphism.
One also has (for column vectors z with components z₁, . . . , z_d)
\[ \{\mu_A, z\} = A^T z,\quad \{\mu_A, \bar z\} = -A\bar z \]  (23.2)
Proof. Using 23.1 one has
\[ \{\mu_A, \mu_{A'}\} = -\sum_{j,k,l,m}\{z_j A_{jk}\bar z_k, z_l A'_{lm}\bar z_m\} = -\sum_{j,k,l,m}A_{jk}A'_{lm}\{z_j\bar z_k, z_l\bar z_m\} \]
\[ = i\sum_{j,k,l,m}A_{jk}A'_{lm}(z_j\bar z_m\delta_{kl} - z_l\bar z_k\delta_{jm}) = i\sum_{j,k}z_j[A, A']_{jk}\bar z_k = \mu_{[A,A']} \]
To show 23.2, compute
\[ \{\mu_A, z_l\} = \left\{i\sum_{j,k}z_j A_{jk}\bar z_k, z_l\right\} = i\sum_{j,k}z_j A_{jk}\{\bar z_k, z_l\} = \sum_j z_j A_{jl} \]
and
\[ \{\mu_A, \bar z_l\} = \left\{i\sum_{j,k}z_j A_{jk}\bar z_k, \bar z_l\right\} = i\sum_{j,k}A_{jk}\{z_j, \bar z_l\}\bar z_k = -\sum_k A_{lk}\bar z_k \]
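Theorem 23.1 can be verified symbolically for small d. The sketch below (sympy assumed) encodes the Poisson bracket determined by {z_j, z̄_k} = iδ_jk as a bidifferential operator, and checks the homomorphism property and the relations 23.2 for a pair of arbitrary integer matrices with d = 2:

```python
import sympy as sp

d = 2
z = sp.symbols('z0 z1')
zb = sp.symbols('zb0 zb1')   # zb_j stands for z-bar_j

def pb(f, g):
    # Poisson bracket with {z_j, zbar_k} = i delta_{jk}, all other brackets zero
    return sp.expand(sp.I * sum(sp.diff(f, z[j]) * sp.diff(g, zb[j])
                                - sp.diff(f, zb[j]) * sp.diff(g, z[j])
                                for j in range(d)))

def mu(A):
    # moment map mu_A = i sum_{j,k} z_j A_{jk} zbar_k
    return sp.expand(sp.I * sum(z[j] * A[j, k] * zb[k]
                                for j in range(d) for k in range(d)))

A = sp.Matrix([[1, 2], [3, 4]])
B = sp.Matrix([[0, 1], [-1, 2]])

# Sanity check of the bracket convention: {z, zbar} = i
assert pb(z[0], zb[0]) == sp.I

# {mu_A, mu_B} = mu_[A,B]: the moment map is a Lie algebra homomorphism
assert sp.expand(pb(mu(A), mu(B)) - mu(A * B - B * A)) == 0

# {mu_A, z} = A^T z and {mu_A, zbar} = -A zbar (equation 23.2)
for l in range(d):
    assert sp.expand(pb(mu(A), z[l]) - sum(A[j, l] * z[j] for j in range(d))) == 0
    assert sp.expand(pb(mu(A), zb[l]) + sum(A[l, k] * zb[k] for k in range(d))) == 0
```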

Note that here we have written formulas for A ∈ gl(d, C), an arbitrary complex d by d matrix. It is only for A ∈ u(d), the skew-adjoint (Ā^T = −A) matrices, that µ_A will be a real-valued moment map, lying in the real Lie algebra sp(2d, R), and giving a unitary representation on the state space after quantization. For such A we can write the relations 23.2 as a (complexified) example of 14.14
\[ \left\{\mu_A, \begin{pmatrix} z \\ \bar z \end{pmatrix}\right\} = \begin{pmatrix} A^T & 0 \\ 0 & \bar A^T \end{pmatrix}\begin{pmatrix} z \\ \bar z \end{pmatrix} \]

Recall that in chapter 14 we found the moment map µ_L = −q · Ap for elements L ∈ sp(2d, R) of the block-diagonal form
\[ \begin{pmatrix} A & 0 \\ 0 & -A^T \end{pmatrix} \]
where A is a real d by d matrix and so in gl(d, R). That block decomposition corresponded to the decomposition of basis vectors of M into the sets q_j and p_j. Here we have complexified, and are working with respect to a different decomposition, that of M ⊗ C = M⁺_J ⊕ M⁻_J. The matrices A in this case are complex, skew-adjoint, and in a different Lie subalgebra, u(d) ⊂ sp(2d, R).
The standard Hamiltonian
\[ h = \sum_{j=1}^d z_j\bar z_j \]

lies in this sub-algebra (it is the case A = −i1), and one can show that its
Poisson brackets with the rest of the sub-algebra are zero. It gives a basis
element of the one-dimensional u(1) subalgebra that commutes with the rest of
the u(d) subalgebra.
In section 22.2, for the case d = 1 and J = J₀, we found that there was a U(1) ⊂ SL(2, R) group acting on M preserving Ω, and commuting with J₀. Complexifying M, this U(1) acted separately on M⁺_{J₀} and M⁻_{J₀}, and there was a moment map taking Z = −J₀ to the function µ_Z = zz̄ on M. Here we have a U(d) ⊂ Sp(2d, R), again acting on M preserving Ω, and commuting with J, so also acting separately on M⁺_J and M⁻_J after complexification.

23.2 The metaplectic representation and U(d) ⊂ Sp(2d, R)
Turning to the quantization problem, we would like to extend the quantization of linear functions of z_j, z̄_j of chapter 21 to quadratic functions, using annihilation and creation operators. For any j, k one can take
\[ z_j z_k \rightarrow -ia_j^\dagger a_k^\dagger,\quad \bar z_j\bar z_k \rightarrow -ia_j a_k \]

There is no ambiguity in the quantization of the two subalgebras given by pairs of the z coordinates or pairs of the z̄ coordinates, since creation operators commute with each other, and annihilation operators commute with each other.
If j ≠ k one can take
\[ z_j\bar z_k \rightarrow -ia_j^\dagger a_k = -ia_k a_j^\dagger \]

and there is again no ordering ambiguity. If j = k, as in the d = 1 case there is a choice to be made. One possibility is to take
\[ z_j\bar z_j \rightarrow -\frac{i}{2}(a_j a_j^\dagger + a_j^\dagger a_j) = -i\left(a_j^\dagger a_j + \frac{1}{2}\right) \]
which will have the proper sp(2d, R) commutation relations (in particular for commutators of a_j² with (a_j†)²), but require going to a double cover to get a true representation of the group. The Bargmann-Fock construction thus gives us a unitary representation of u(d) on Fock space F_d, but after exponentiation this is a representation not of the group U(d), but of a double cover we call Ũ(d).
One could instead quantize using normal-ordered operators, taking
\[ z_j\bar z_j \rightarrow -ia_j^\dagger a_j \]

The definition of normal ordering generalizes simply, since the order of anni-
hilation and creation operators with different values of j is immaterial. If one
uses this normal-ordered choice, one has shifted the usual quantized operators
of the Bargmann-Fock representation by a scalar 1/2 for each j, and after expo-
nentiation the state space H = Fd provides a representation of U (d), with no
need for a double cover. As a u(d) representation however, this does not extend
to a representation of sp(2d, R), since commutation of a2j with (a†j )2 can land
one on the unshifted operators.
We saw above that the infinitesimal action of u(d) ⊂ sp(2d, R) preserves
the decomposition of M ⊗ C = M+ ⊕ M− , and this will be true after ex-
ponentiating for U (d) ⊂ Sp(2d, R). We won’t show this here, but U (d) is
the maximal subgroup that preserves this decomposition. The analog of the
d = 1 parametrization of possible distinguished states |0i by SL(2, R)/U (1)
here would be a parametrization of such states (or, equivalently, of possible
choices of J) by the space Sp(2d, R)/U (d), the Siegel upper half-space.
Since the normal-ordering doesn’t change the commutation relations obeyed
by products of the form a†j ak , one can quantize the quadratic expression for

µA and get quadratic combinations of the aj , a†k with the same commutation
relations as in theorem 23.1. Letting
\[ U'_A = \sum_{j,k} a_j^\dagger A_{jk}a_k \]

we have
Theorem 23.2. For A ∈ gl(d, C) a d by d complex matrix one has
\[ [U'_A, U'_{A'}] = U'_{[A,A']} \]
So
\[ A \in \mathfrak{gl}(d, \mathbf C) \rightarrow U'_A \]
is a Lie algebra representation of gl(d, C) on H = C[w₁, . . . , w_d], the harmonic oscillator state space in d degrees of freedom.
One also has (for column vectors a with components a₁, . . . , a_d)
\[ [U'_A, a^\dagger] = A^T a^\dagger,\quad [U'_A, a] = -Aa \]  (23.3)
Proof. Essentially the same proof as for 23.1.


For A ∈ u(d) the Lie algebra representation U'_A of u(d) exponentiates to give a representation of U(d) on H = C[w₁, . . . , w_d] by operators
\[ U_{e^A} = e^{U'_A} \]
These satisfy
\[ U_{e^A}a^\dagger(U_{e^A})^{-1} = e^{A^T}a^\dagger,\quad U_{e^A}a(U_{e^A})^{-1} = e^{\bar A^T}a \]  (23.4)

(the relations 23.3 are the derivative of these). This shows that the U_{e^A} are
intertwining operators for a U (d) action on annihilation and creation operators
that preserves the canonical commutation relations (the relations that say the
aj , a†j give a representation of the complexified Heisenberg Lie algebra). Here
the use of normal-ordered operators means that UA0 is a representation of u(d)
that differs by a constant from the metaplectic representation, and UeA differs
by a phase-factor. This does not affect the commutation relations with UA0 or
the conjugation action of UeA . The representation one gets this way differs
in two ways from the metaplectic representation. It acts on the same space
H = Fd , but it is a true representation of U (d), no double-cover is needed. It
also does not extend to a representation of the larger group Sp(2d, R).
The operators UA0 and UeA commute with the Hamiltonian operator. From
the physics point of view, this is useful, as it provides a decomposition of en-
ergy eigenstates into irreducible representations of U (d). From the mathematics
point of view, the quantum harmonic oscillator state space provides a construc-
tion of a large class of irreducible representations of U (d) (the energy eigenstates
of a given energy).

23.3 Examples in d = 2 and 3
23.3.1 Two degrees of freedom and SU (2)
In the case d = 2, the group U (2) ⊂ Sp(4, R) preserving the complex structure
J0 commutes with the standard harmonic oscillator Hamiltonian and so acts
as symmetries on the quantum state space, preserving energy eigenspaces. Re-
stricting to the subgroup SU (2) ⊂ U (2), we can recover our earlier construction
of SU (2) representations in terms of homogeneous polynomials, in a new con-
text. This use of the energy eigenstates of a two-dimensional harmonic oscillator
appears in the physics literature as the “Schwinger boson method” for studying
representations of SU (2).
The state space for the d = 2 Bargmann-Fock representation, restricting to
finite linear combinations of energy eigenstates, is

H = F2f in = C[w1 , w2 ]

the polynomials in two complex variables w1 , w2 . Recall from our SU (2) dis-
cussion that it was useful to organize these polynomials into finite dimensional
sets of homogeneous polynomials of degree n for n = 0, 1, 2, . . .

H = H0 ⊕ H1 ⊕ H2 ⊕ · · ·

There are four annihilation or creation operators
\[ a_1^\dagger = w_1,\quad a_2^\dagger = w_2,\quad a_1 = \frac{\partial}{\partial w_1},\quad a_2 = \frac{\partial}{\partial w_2} \]
acting on H. These are the quantizations of complexified phase space coordinates z₁, z₂, z̄₁, z̄₂, with quantization just the Bargmann-Fock construction of the representation Γ′_BF of h₅
\[ \Gamma'_{BF}(1) = -i\mathbf 1,\quad \Gamma'_{BF}(z_j) = -ia_j^\dagger,\quad \Gamma'_{BF}(\bar z_j) = -ia_j \]

Our original dual phase space was M = R4 , with a group Sp(4, R) acting on
it, preserving the antisymmetric bilinear form Ω. When picking the coordinates
z1 , z2 , we made a standard choice of complex structure J0 on M. Complexifying,
we have

M ⊗ C = M⁺_{J₀} ⊕ M⁻_{J₀} = C² ⊕ C²

where z₁, z₂ are coordinates on M⁺_{J₀} and z̄₁, z̄₂ are coordinates on M⁻_{J₀}. This choice of J = J₀ picks out a distinguished subgroup U(2) ⊂ Sp(4, R).
The quadratic combinations of the creation and annihilation operators give
representations on H of three subalgebras of the complexification sp(4, C) of
sp(4, R):
• A three dimensional commutative Lie sub-algebra spanned by z₁z₂, z₁², z₂², with quantization
\[ \Gamma'_{BF}(z_1z_2) = -ia_1^\dagger a_2^\dagger,\quad \Gamma'_{BF}(z_1^2) = -i(a_1^\dagger)^2,\quad \Gamma'_{BF}(z_2^2) = -i(a_2^\dagger)^2 \]

• A three dimensional commutative Lie sub-algebra spanned by z̄₁z̄₂, z̄₁², z̄₂², with quantization
\[ \Gamma'_{BF}(\bar z_1\bar z_2) = -ia_1a_2,\quad \Gamma'_{BF}(\bar z_1^2) = -ia_1^2,\quad \Gamma'_{BF}(\bar z_2^2) = -ia_2^2 \]

• A four dimensional Lie subalgebra isomorphic to gl(2, C) with basis
\[ z_1\bar z_1,\quad z_2\bar z_2,\quad z_1\bar z_2,\quad \bar z_1 z_2 \]
and quantization
\[ \Gamma'_{BF}(z_1\bar z_1) = -\frac{i}{2}(a_1^\dagger a_1 + a_1a_1^\dagger),\quad \Gamma'_{BF}(z_2\bar z_2) = -\frac{i}{2}(a_2^\dagger a_2 + a_2a_2^\dagger) \]
\[ \Gamma'_{BF}(\bar z_1 z_2) = -ia_1a_2^\dagger,\quad \Gamma'_{BF}(z_1\bar z_2) = -ia_2a_1^\dagger \]
Real linear combinations of
\[ z_1\bar z_1,\quad z_2\bar z_2,\quad z_1\bar z_2 + z_2\bar z_1,\quad i(z_1\bar z_2 - z_2\bar z_1) \]
span the Lie algebra u(2) ⊂ sp(4, R), and Γ′_BF applied to these gives a unitary Lie algebra representation by skew-adjoint operators.
Inside this last subalgebra, there is a distinguished element h = z₁z̄₁ + z₂z̄₂
that Poisson-commutes with the rest of the subalgebra (but not with elements
in the first two subalgebras). Quantization of h gives the Hamiltonian operator

\[ H = \frac{1}{2}(a_1a_1^\dagger + a_1^\dagger a_1 + a_2a_2^\dagger + a_2^\dagger a_2) = N_1 + \frac{1}{2} + N_2 + \frac{1}{2} = w_1\frac{\partial}{\partial w_1} + w_2\frac{\partial}{\partial w_2} + 1 \]
This operator will just multiply a homogeneous polynomial by its degree plus
one, so it acts just by multiplication by n + 1 on Hn . Exponentiating this
operator (multiplied by −i) one gets a representation of a U (1) subgroup of the
metaplectic cover M p(4, R). Taking instead the normal-ordered version

:H: = a†1 a1 + a†2 a2 = N1 + N2 = w1 ∂/∂w1 + w2 ∂/∂w2
one gets a representation of a U (1) subgroup of Sp(4, R). Neither H nor :H:
commutes with operators coming from quantization of the first two subalgebras.
These change the eigenvalue of H or :H: by ±2 so take

Hn → Hn±2

in particular taking |0i to either 0 or a state in H2 .


h is a basis element for the u(1) in u(2) = u(1) ⊕ su(2). For the su(2) part, a correspondence to our basis Xj = −iσj /2 in terms of 2 by 2 matrices is

X1 ↔ (1/2)(z1 z 2 + z2 z 1 ), X2 ↔ (i/2)(z2 z 1 − z1 z 2 ), X3 ↔ (1/2)(z1 z 1 − z2 z 2 )

This relates two different but isomorphic ways of describing su(2): as 2 by 2
matrices with Lie bracket the commutator, or as quadratic polynomials, with
Lie bracket the Poisson bracket.
Quantizing using normal-ordering of operators gives a representation of su(2) on H

Γ0 (X1 ) = −(i/2)(a†1 a2 + a†2 a1 ), Γ0 (X2 ) = (1/2)(a†2 a1 − a†1 a2 ), Γ0 (X3 ) = −(i/2)(a†1 a1 − a†2 a2 )
Comparing this to the representation π 0 of su(2) on homogeneous polynomials
discussed in chapter 8, one finds that they are isomorphic, although they act on
dual spaces, so Γ0 (X) = π 0 (−X T ) for all X ∈ su(2).
We see that, up to this change from a vector space to its dual, and the
normal-ordering (which only affects the u(1) factor, shifting the Hamiltonian by
a constant), the Bargmann-Fock representation on polynomials and the SU (2)
representation on homogeneous polynomials are identical. The inner product
that makes the representation unitary is the one of equation 8.2. The Bargmann-
Fock representation extends this SU (2) representation as a unitary representation to a much larger group (H5 ⋊ M p(4, R)), with all polynomials in w1 , w2
now making up a single irreducible representation.
The fact that we have an SU (2) group acting on the state space of the d = 2
harmonic oscillator and commuting with the action of the Hamiltonian H means
that energy eigenstates can be organized as irreducible representations of SU (2).
In particular, one sees that the space Hn of energy eigenstates of energy n + 1
will be a single irreducible SU (2) representation, the spin n2 representation of
dimension n + 1 (so n + 1 will be the multiplicity of energy eigenstates of that
energy).
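To make this concrete, here is a small numerical sketch (my own illustration, not from the text), using NumPy matrices for the annihilation operators on a truncated Fock space; the cutoff T and all variable names are choices made here. The identities checked involve only number-conserving quadratic operators, for which the truncation introduces no error:

```python
import numpy as np

T = 6  # Fock space cutoff per mode (an arbitrary choice made here)
a = np.diag(np.sqrt(np.arange(1, T)), 1)   # truncated annihilation operator
I = np.eye(T)

a1, a2 = np.kron(a, I), np.kron(I, a)      # the two modes
ad1, ad2 = a1.conj().T, a2.conj().T

H = ad1 @ a1 + ad2 @ a2 + np.eye(T * T)    # H = N1 + N2 + 1

# the normal-ordered su(2) generators Gamma'(X_j)
G1 = -0.5j * (ad1 @ a2 + ad2 @ a1)
G2 = 0.5 * (ad2 @ a1 - ad1 @ a2)
G3 = -0.5j * (ad1 @ a1 - ad2 @ a2)

# each commutes with H, so the su(2) action preserves energy eigenspaces
for G in (G1, G2, G3):
    assert np.allclose(H @ G - G @ H, 0)

# the eigenspace of energy n + 1 has dimension n + 1
energies = np.real(np.diag(H))
for n in range(T):
    assert np.count_nonzero(np.isclose(energies, n + 1)) == n + 1
```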
Another physically interesting subgroup here is the SO(2) ⊂ SU (2) ⊂
Sp(4, R) consisting of simultaneous rotations in the position and momentum
planes, which was studied in detail using the coordinates q1 , q2 , p1 , p2 in section
18.2.2. There we found that the moment map was given by

µL = l = q1 p2 − q2 p1

and quantization by the Schrödinger representation gave a representation of the


Lie algebra so(2) with
UL0 = −i(Q1 P2 − Q2 P1 )

Note that this is a different SO(2) action than the one whose moment map is the Hamiltonian; it acts separately on positions and momenta rather than mixing them.
To see what happens if one instead uses the Bargmann-Fock representation,
note that
qj = (1/√2)(zj + z j ), pj = (i/√2)(zj − z j )

so the moment map is

µL = (i/2)((z1 + z 1 )(z2 − z 2 ) − (z2 + z 2 )(z1 − z 1 )) = i(z2 z 1 − z1 z 2 )
Quantizing, Bargmann-Fock gives a unitary representation of so(2)
UL0 = a†2 a1 − a†1 a2
which is Γ0 (2X2 ). The factor of two here reflects the fact that exponentiation
gives a representation of SO(2) ⊂ Sp(4, R), with no need for a double cover.
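The equality of the Schrödinger and Bargmann-Fock forms of this operator can be checked numerically. The sketch below is my own illustration (T is an arbitrary Fock space cutoff); the identity holds exactly for truncated matrices, since only cross-mode products appear:

```python
import numpy as np

T = 6  # an arbitrary Fock space cutoff per mode
a = np.diag(np.sqrt(np.arange(1, T)), 1)
ad = a.conj().T
Q = (a + ad) / np.sqrt(2)
P = -1j * (a - ad) / np.sqrt(2)
I = np.eye(T)

Q1, P1 = np.kron(Q, I), np.kron(P, I)
Q2, P2 = np.kron(I, Q), np.kron(I, P)
a1, a2 = np.kron(a, I), np.kron(I, a)
ad1, ad2 = a1.conj().T, a2.conj().T

# U'_L in Schrodinger form equals its Bargmann-Fock form
UL = -1j * (Q1 @ P2 - Q2 @ P1)
assert np.allclose(UL, ad2 @ a1 - ad1 @ a2)
```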

23.3.2 Three degrees of freedom and SO(3)


The case d = 3 corresponds physically to the so-called isotropic quantum har-
monic oscillator system, and it is an example of the sort of central poten-
tial problem we studied in chapter 19 (since the potential just depends on
r2 = q12 + q22 + q32 ). For such problems, we saw that since the classical Hamilto-
nian is rotationally invariant, the quantum Hamiltonian will commute with the
action of SO(3) on wavefunctions and energy eigenstates can be decomposed
into irreducible representations of SO(3).
Here the Bargmann-Fock representation gives an action of H7 ⋊ M p(6, R) on
the state space, with a U (3) subgroup commuting with the Hamiltonian (more
precisely one has a double cover of U (3), but by normal-ordering one can get an
actual U (3)). The eigenvalue of the U (1) corresponding to the Hamiltonian gives
the energy of a state, and states of a given energy will be sums of irreducible
representations of SU (3). This works much like in the d = 2 case, although here
our irreducible representations are the spaces Hn of homogeneous polynomials
of degree n in three variables rather than two. These spaces have dimension
(n + 1)(n + 2)/2. A difference with the SU (2) case is that one does not get all
irreducible representations of SU (3) this way.
The rotation group SO(3) will be a subgroup of this U (3) and one can ask
how the SU (3) irreducible Hn decomposes into a sum of irreducibles of the
subgroup (which will be characterized by an integral spin l = 0, 1, 2, · · · ). One
can show that for even n one gets all even values of l from 0 to n, and for odd
n one gets all odd values of l from 1 to n. A derivation can be found in some
quantum mechanics textbooks, see for example pgs. 456-460 of [43].
To construct the angular momentum operators in the Bargmann-Fock rep-
resentation, recall that in the Schrödinger representation these were
L1 = Q2 P3 − Q3 P2 , L2 = Q3 P1 − Q1 P3 , L3 = Q1 P2 − Q2 P1
and one can rewrite these operators in terms of annihilation and creation op-
erators. Alternatively, one can use theorem 23.2, for Lie algebra basis elements
lj ∈ so(3) ⊂ u(3) ⊂ gl(3, C) which are (see chapter 6)
     
$$l_1 = \begin{pmatrix} 0&0&0\\ 0&0&-1\\ 0&1&0 \end{pmatrix},\quad l_2 = \begin{pmatrix} 0&0&1\\ 0&0&0\\ -1&0&0 \end{pmatrix},\quad l_3 = \begin{pmatrix} 0&-1&0\\ 1&0&0\\ 0&0&0 \end{pmatrix}$$

to calculate

−iLj = Ul0j = Σ_{m,n=1}^{3} a†m (lj )mn an

This gives

Ul01 = a†3 a2 − a†2 a3 , Ul02 = a†1 a3 − a†3 a1 , Ul03 = a†2 a1 − a†1 a2

Exponentiating these operators gives a representation of the rotation group


SO(3) on the state space F3 , commuting with the Hamiltonian, so acting on
energy eigenspaces (which will be the homogeneous polynomials of fixed degree).
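As a hedged numerical sketch (mine, not the author's; T is an arbitrary cutoff and the helper `mode` is my own), one can check the formula of theorem 23.2 against the explicit expressions for the Ul0j above, and check that each commutes with the total number operator, hence with the Hamiltonian:

```python
import numpy as np

T = 5  # an arbitrary Fock space cutoff per mode
a = np.diag(np.sqrt(np.arange(1, T)), 1)
I = np.eye(T)

def mode(op, j):
    # embed a single-mode operator as mode j of three
    ops = [I, I, I]
    ops[j] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

A = [mode(a, j) for j in range(3)]
Ad = [x.conj().T for x in A]

# the so(3) basis matrices l_1, l_2, l_3
l = [np.zeros((3, 3)) for _ in range(3)]
l[0][1, 2], l[0][2, 1] = -1, 1
l[1][0, 2], l[1][2, 0] = 1, -1
l[2][0, 1], l[2][1, 0] = -1, 1

# U'_{l_j} = sum_{m,n} a†_m (l_j)_{mn} a_n
U = [sum(l[j][m, n] * Ad[m] @ A[n] for m in range(3) for n in range(3))
     for j in range(3)]

assert np.allclose(U[0], Ad[2] @ A[1] - Ad[1] @ A[2])  # a†3 a2 - a†2 a3
assert np.allclose(U[2], Ad[1] @ A[0] - Ad[0] @ A[1])  # a†2 a1 - a†1 a2

# each commutes with N1 + N2 + N3, hence with the Hamiltonian
N = sum(Ad[j] @ A[j] for j in range(3))
for Uj in U:
    assert np.allclose(N @ Uj - Uj @ N, 0)
```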

23.4 For further reading


The references from chapter 22 ([19], [66]) also contain the general case dis-
cussed here. The construction of the metaplectic representation restricted to
U (d) ⊂ Sp(2d, R) using annihilation and creation operators is a standard topic
in quantum field theory textbooks, although there in an infinite rather than
finite-dimensional context (and not explicitly in the language used here). We
will encounter the quantum field theory version in later chapters.

Chapter 24

The Fermionic Oscillator

In this chapter we’ll introduce a new quantum system by using a simple varia-
tion on techniques we used to study the harmonic oscillator, that of replacing
commutators by anticommutators. This variant of the harmonic oscillator will
be called a “fermionic oscillator”, with the original sometimes called a “bosonic
oscillator”. The terminology of “boson” and “fermion” refers to the principle
enunciated in chapter 9 that multiple identical particles are described by tensor
product states that are either symmetric (bosons) or antisymmetric (fermions).
The bosonic and fermionic oscillator systems are single-particle systems, de-
scribing the energy states of a single particle, so the usage of the bosonic/fermion-
ic terminology is not obviously relevant. In later chapters we will study quantum
field theories, which can be treated as infinite-dimensional oscillator systems.
In that context, multiple particle states will automatically be symmetric or an-
tisymmetric, depending on whether the field theory is treated as a bosonic or
fermionic oscillator system, thus justifying the terminology.

24.1 Canonical anticommutation relations and


the fermionic oscillator
Recall that the Hamiltonian for the quantum harmonic oscillator system in d
degrees of freedom (setting ~ = m = ω = 1) is

H = Σ_{j=1}^{d} (1/2)(Q2j + Pj2 )

and that it can be diagonalized by introducing number operators Nj = a†j aj


defined in terms of operators

aj = (1/√2)(Qj + iPj ), a†j = (1/√2)(Qj − iPj )

that satisfy the so-called canonical commutation relations (CCR)

[aj , a†k ] = δjk 1, [aj , ak ] = [a†j , a†k ] = 0

The simple change in the harmonic oscillator problem that takes one from
bosons to fermions is the replacement of the bosonic annihilation and creation
operators (which we’ll now denote aB and aB † ) by fermionic annihilation and
creation operators called aF and aF † , and replacement of the commutator

[A, B] ≡ AB − BA

of operators by the anticommutator

[A, B]+ ≡ AB + BA

The commutation relations are now

[aF , a†F ]+ = 1, [aF , aF ]+ = 0, [a†F , a†F ]+ = 0

with the last two relations implying that a2F = 0 and (a†F )2 = 0
The fermionic number operator

NF = a†F aF

now satisfies
NF2 = a†F aF a†F aF = a†F (1 − a†F aF )aF = NF − (a†F )2 a2F = NF

(using the fact that a2F = (a†F )2 = 0). So one has

NF2 − NF = NF (NF − 1) = 0

which implies that the eigenvalues of NF are just 0 and 1. We’ll denote eigen-
vectors with such eigenvalues by |0i and |1i. The simplest representation of the
operators aF and a†F on a complex vector space HF will be on C2 , and choosing
the basis
$$|0\rangle = \begin{pmatrix} 0\\1 \end{pmatrix},\quad |1\rangle = \begin{pmatrix} 1\\0 \end{pmatrix}$$
the operators are represented as
$$a_F = \begin{pmatrix} 0&0\\1&0 \end{pmatrix},\quad a_F^\dagger = \begin{pmatrix} 0&1\\0&0 \end{pmatrix},\quad N_F = \begin{pmatrix} 1&0\\0&0 \end{pmatrix}$$
Since

H = (1/2)(a†F aF + aF a†F )

is just 1/2 times the identity operator, to get a non-trivial quantum system, instead we make a sign change and set
$$H = \frac{1}{2}(a_F^\dagger a_F - a_F a_F^\dagger) = N_F - \frac{1}{2}\mathbf{1} = \begin{pmatrix} \frac{1}{2}&0\\0&-\frac{1}{2} \end{pmatrix}$$

The energies of the energy eigenstates |0i and |1i will then be ±1/2 since

H|0i = −(1/2)|0i, H|1i = (1/2)|1i
Note that the quantum system we have constructed here is nothing but our
old friend the two-state system of chapter 3. Taking complex linear combinations
of the operators
aF , a†F , NF , 1
we get all linear transformations of HF = C2 (so this is an irreducible represen-
tation of the algebra of these operators). The relation to the Pauli matrices is
just
a†F = (1/2)(σ1 + iσ2 ), aF = (1/2)(σ1 − iσ2 ), H = (1/2)σ3
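These 2 by 2 relations are easy to verify directly; the following sketch (my own, using NumPy) checks the CAR, the eigenvalues of NF and H, and the Pauli matrix identifications:

```python
import numpy as np

aF = np.array([[0., 0.], [1., 0.]])   # a_F in the basis |1>, |0>
aFd = aF.T                            # a_F†
anti = lambda A, B: A @ B + B @ A

assert np.allclose(anti(aF, aFd), np.eye(2))   # [a_F, a_F†]_+ = 1
assert np.allclose(aF @ aF, 0)                 # a_F^2 = 0

NF = aFd @ aF
assert np.allclose(NF @ NF, NF)                # so its eigenvalues are 0, 1

H = 0.5 * (aFd @ aF - aF @ aFd)
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), [-0.5, 0.5])

# relation to the Pauli matrices
s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]])
assert np.allclose(aFd, 0.5 * (s1 + 1j * s2))
assert np.allclose(aF, 0.5 * (s1 - 1j * s2))
assert np.allclose(H, 0.5 * s3)
```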

24.2 Multiple degrees of freedom


For the case of d degrees of freedom, one has this variant of the canonical
commutation relations (CCR) amongst the bosonic annihilation and creation
operators aB j and aB †j :
Definition (Canonical anticommutation relations). A set of 2d operators

aF j , aF †j , j = 1, . . . , d

is said to satisfy the canonical anticommutation relations (CAR) when one has

[aF j , aF †k ]+ = δjk 1, [aF j , aF k ]+ = 0, [aF †j , aF †k ]+ = 0

In this case one may choose as the state space the tensor product of d copies of the single fermionic oscillator state space

HF = (C2 )⊗d = C2 ⊗ C2 ⊗ · · · ⊗ C2 (d times)

The dimension of HF will be 2^d . On this space the operators aF j and aF †j can


be explicitly given by
 
$$a_{Fj} = \underbrace{\sigma_3 \otimes \sigma_3 \otimes \cdots \otimes \sigma_3}_{j-1\text{ times}} \otimes \begin{pmatrix} 0&0\\1&0 \end{pmatrix} \otimes 1 \otimes \cdots \otimes 1$$
$$a_{Fj}^\dagger = \underbrace{\sigma_3 \otimes \sigma_3 \otimes \cdots \otimes \sigma_3}_{j-1\text{ times}} \otimes \begin{pmatrix} 0&1\\0&0 \end{pmatrix} \otimes 1 \otimes \cdots \otimes 1$$

The factors of σ3 ensure that the canonical anticommutation relations

[aF j , aF k ]+ = [aF †j , aF †k ]+ = [aF j , aF †k ]+ = 0

are satisfied for j ≠ k since in these cases one will get in the tensor product factors
$$\left[\sigma_3, \begin{pmatrix} 0&0\\1&0 \end{pmatrix}\right]_+ = 0 \quad\text{or}\quad \left[\sigma_3, \begin{pmatrix} 0&1\\0&0 \end{pmatrix}\right]_+ = 0$$
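A short check of this tensor product construction for d = 3 (my own illustration; the names `low` and `kron` usage are mine):

```python
import numpy as np
from functools import reduce

d = 3
s3 = np.diag([1., -1.])
low = np.array([[0., 0.], [1., 0.]])   # the single-mode a_F
I2 = np.eye(2)

# a_{F j}: j-1 factors of sigma_3, then a_F, then identity factors
aF = [reduce(np.kron, [s3] * j + [low] + [I2] * (d - j - 1))
      for j in range(d)]
aFd = [x.T for x in aF]
anti = lambda A, B: A @ B + B @ A

# the canonical anticommutation relations hold for all j, k
for j in range(d):
    for k in range(d):
        assert np.allclose(anti(aF[j], aFd[k]), (j == k) * np.eye(2 ** d))
        assert np.allclose(anti(aF[j], aF[k]), 0)
        assert np.allclose(anti(aFd[j], aFd[k]), 0)
```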
While this sort of tensor product construction is useful for discussing the
physics of multiple qubits, in general it is easier to not work with large tensor
products, and the Clifford algebra formalism we will describe in chapter 25
avoids this.
The number operators will be
NF j = aF †j aF j
These will commute with each other, so can be simultaneously diagonalized,
with eigenvalues nj = 0, 1. One can take as an orthonormal basis of HF the 2^d states
|n1 , n2 , · · · , nd i
As an example, for the case d = 3 the pattern of states and their energy
levels for the bosonic and fermionic cases looks like this

In the bosonic case the lowest energy state is at positive energy and there are
an infinite number of states of ever increasing energy. In the fermionic case the
lowest energy state is at negative energy, with the pattern of energy eigenvalues
of the finite number of states symmetric about the zero energy level.
Just as in the bosonic case, we can consider quadratic combinations of cre-
ation and annihilation operators of the form
UA0 = Σ_{j,k} aF †j Ajk aF k

and we have
Theorem 24.1. For A ∈ gl(d, C) a d by d complex matrix one has
[UA0 , UA0 0 ] = U[A,A0 ]
So
A ∈ gl(d, C) → UA0
is a Lie algebra representation of gl(d, C) on HF
One also has (for column vectors aF with components aF 1 , . . . , aF d )
[UA0 , a†F ] = AT a†F , [UA0 , aF ] = −AaF (24.1)
Proof. The proof is similar to that of 23.1, except besides the relation
[AB, C] = A[B, C] + [A, B]C
we also use the relation
[AB, C] = A[B, C]+ − [A, B]+ C
For example

[UA0 , aF †l ] = Σ_{j,k} [aF †j Ajk aF k , aF †l ] = Σ_{j,k} aF †j Ajk [aF k , aF †l ]+ = Σ_j aF †j Ajl

The Hamiltonian is

H = Σ_j (NF j − 1/2)

which (up to the constant 1/2 that doesn’t contribute to commutation relations) is
just UB0 for the case B = 1. Since this commutes with all other d by d matrices,
we have
[H, UA0 ] = 0
for all A ∈ gl(d, C), so these are symmetries and we have a representation of
the Lie algebra gl(d, C) on each energy eigenspace. Only for A ∈ u(d) (A a
skew-adjoint matrix) will the representation turn out to be unitary.
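Theorem 24.1 can be spot-checked numerically. The sketch below is mine, not the author's: it rebuilds the d = 3 operators from the tensor product construction above and draws random matrices A, B, verifying an identity that is exact in this finite-dimensional setting:

```python
import numpy as np
from functools import reduce

d = 3
s3 = np.diag([1., -1.])
low = np.array([[0., 0.], [1., 0.]])
I2 = np.eye(2)
aF = [reduce(np.kron, [s3] * j + [low] + [I2] * (d - j - 1))
      for j in range(d)]
aFd = [x.T for x in aF]

def U(A):
    # U'_A = sum_{j,k} a_F†_j A_{jk} a_F_k
    return sum(A[j, k] * aFd[j] @ aF[k]
               for j in range(d) for k in range(d))

rng = np.random.default_rng(0)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

comm = lambda X, Y: X @ Y - Y @ X
# theorem 24.1: [U'_A, U'_B] = U'_{[A, B]}
assert np.allclose(comm(U(A), U(B)), U(comm(A, B)))
```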

24.3 For further reading
Most quantum field theory books and a few quantum mechanics books contain
some sort of discussion of the fermionic oscillator, see for example Chapter
21.3 of [57] or Chapter 5 of [10]. The standard discussion often starts with
considering a form of classical analog using anticommuting “fermionic” variables
and then quantization to get the fermionic oscillator. Here we are doing things
in the opposite order, starting in this chapter with the quantized oscillator, then
considering the classical analog in a later chapter.

Chapter 25

Weyl and Clifford Algebras

We have seen that just changing commutators to anticommutators takes the


harmonic oscillator quantum system to a very different one (the fermionic os-
cillator), with this new system having in many ways a parallel structure. It
turns out that this parallelism goes much deeper, with every aspect of the har-
monic oscillator story having a fermionic analog. We’ll begin in this chapter by
studying the operators of the corresponding quantum systems.

25.1 The Complex Weyl and Clifford algebras


In mathematics, a “ring” is a set with addition and multiplication laws that are
associative and distributive (but not necessarily commutative), and an “algebra”
is a ring that is also a vector space over some field of scalars. The canonical
commutation and anticommutation relations define interesting algebras, called
the Weyl and Clifford algebras respectively. The case of complex numbers as
scalars is simplest, so we’ll start with that, before moving on to the real number
case.

25.1.1 One degree of freedom, bosonic case


Starting with the one degree of freedom case (corresponding to two operators
Q, P , which is why the notation will have a 2) we can define
Definition (Complex Weyl algebra, one degree of freedom). The complex Weyl
algebra in the one degree of freedom case is the algebra Weyl(2, C) generated by
the elements 1, aB , a†B , satisfying the canonical commutation relations:

[aB , a†B ] = 1, [aB , aB ] = [a†B , a†B ] = 0


In other words, Weyl(2, C) is the algebra one gets by taking arbitrary prod-
ucts and complex linear combinations of the generators. By repeated use of the
commutation relation
aB a†B = 1 + a†B aB

any element of this algebra can be written as a sum of elements in normal order,
of the form
cl,m (a†B )l (aB )m

with all annihilation operators aB on the right, for some complex constants cl,m .
As a vector space over C, Weyl(2, C) is infinite-dimensional, with a basis

1, aB , a†B , a2B , a†B aB , (a†B )2 , a3B , a†B a2B , (a†B )2 aB , (a†B )3 , . . .

This algebra is isomorphic to a more familiar one. Setting


a†B = w, aB = ∂/∂w
one sees that Weyl(2, C) can be identified with the algebra of polynomial coeffi-
cient differential operators on functions of a complex variable w. As a complex
vector space, the algebra is infinite dimensional, with a basis of elements

wl ∂m /∂wm
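The identification of aB , a†B with ∂/∂w and multiplication by w can be tested symbolically; here is a minimal SymPy sketch (my own illustration, not from the text) verifying the commutation relation [aB , a†B ] = 1 on a sample polynomial:

```python
import sympy as sp

w = sp.symbols('w')
aB = lambda f: sp.diff(f, w)    # a_B acts as d/dw
aBd = lambda f: w * f           # a_B† acts as multiplication by w

f = 1 + 3*w + 5*w**2 + 2*w**7   # an arbitrary test polynomial

# [a_B, a_B†] f = f, i.e. the canonical commutation relation
assert sp.expand(aB(aBd(f)) - aBd(aB(f)) - f) == 0
```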
In our study of quantization and the harmonic oscillator we saw that the
subset of such operators consisting of complex linear combinations of

1, w, ∂/∂w, w2 , ∂2 /∂w2 , w ∂/∂w
is closed under commutators, so it forms a Lie algebra of complex dimension 6.
This Lie algebra includes as subalgebras the Heisenberg Lie algebra h3 ⊗ C (first
three elements) and the Lie algebra sl(2, C) = sl(2, R)⊗C (last three elements).
Note that here we are allowing complex linear combinations, so we are getting
the complexification of the real six-dimensional Lie algebra that appeared in
our study of quantization.
Since the aB and a†B are defined in terms of P and Q, one could of course
also define the Weyl algebra as the one generated by 1, P, Q, with the Heisenberg
commutation relations, taking complex linear combinations of all products of
these operators.

25.1.2 One degree of freedom, fermionic case


Changing commutators to anticommutators, one gets a different algebra, the
Clifford algebra

Definition (Complex Clifford algebra, one degree of freedom). The complex


Clifford algebra in the one degree of freedom case is the algebra Cliff(2, C) gen-
erated by the elements 1, aF , a†F , subject to the canonical anticommutation rela-
tions
[aF , a†F ]+ = 1, [aF , aF ]+ = [a†F , a†F ]+ = 0

This algebra is a four dimensional algebra over C, with basis

1, aF , a†F , a†F aF

since higher powers of the operators vanish, and one can use the anticommutation relation between aF and a†F to normal order and put factors of aF on
the right. We saw in the last chapter that this algebra is isomorphic with the
algebra M (2, C) of 2 by 2 complex matrices, using
       
$$1 \leftrightarrow \begin{pmatrix} 1&0\\0&1 \end{pmatrix},\quad a_F \leftrightarrow \begin{pmatrix} 0&0\\1&0 \end{pmatrix},\quad a_F^\dagger \leftrightarrow \begin{pmatrix} 0&1\\0&0 \end{pmatrix},\quad a_F^\dagger a_F \leftrightarrow \begin{pmatrix} 1&0\\0&0 \end{pmatrix}$$

We will see later on that there is also a way of identifying this algebra with
“differential operators in fermionic variables”, analogous to what happens in
the bosonic (Weyl algebra) case.
Recall that the bosonic annihilation and creation operators were originally
defined in terms of the P and Q operators by

aB = (1/√2)(Q + iP ), a†B = (1/√2)(Q − iP )
Looking for the fermionic analogs of the operators Q and P , we use a slightly
different normalization, and set
aF = (1/2)(γ1 + iγ2 ), a†F = (1/2)(γ1 − iγ2 )

so

γ1 = aF + a†F , γ2 = (1/i)(aF − a†F )
and the CAR imply that the operators γj satisfy the anticommutation relations

[γ1 , γ1 ]+ = [aF + a†F , aF + a†F ]+ = 2

[γ2 , γ2 ]+ = −[aF − a†F , aF − a†F ]+ = 2


[γ1 , γ2 ]+ = (1/i)[aF + a†F , aF − a†F ]+ = 0
From this we see that
• One could alternatively have defined Cliff(2, C) as the algebra generated
by 1, γ1 , γ2 , subject to the relations

[γj , γk ]+ = 2δjk

• Using just the generators 1 and γ1 , one gets an algebra Cliff(1, C), gener-
ated by 1, γ1 , with the relation

γ12 = 1

This is a two-dimensional complex algebra, isomorphic to C ⊕ C.
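A quick check of these γ relations in the 2 by 2 matrix realization (my own sketch, built from the aF , a†F matrices of the last chapter):

```python
import numpy as np

aF = np.array([[0., 0.], [1., 0.]])
aFd = aF.T

g1 = aF + aFd              # gamma_1
g2 = (aF - aFd) / 1j       # gamma_2
anti = lambda A, B: A @ B + B @ A

assert np.allclose(anti(g1, g1), 2 * np.eye(2))   # gamma_1^2 = 1
assert np.allclose(anti(g2, g2), 2 * np.eye(2))   # gamma_2^2 = 1
assert np.allclose(anti(g1, g2), 0)               # they anticommute
```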

25.1.3 Multiple degrees of freedom
For a larger number of degrees of freedom, one can generalize the above and
define Weyl and Clifford algebras as follows:

Definition (Complex Weyl algebras). The complex Weyl algebra for d degrees
of freedom is the algebra Weyl(2d, C) generated by the elements 1, aB j , aB †j ,
j = 1, . . . , d satisfying the CCR

[aB j , aB †k ] = δjk 1, [aB j , aB k ] = [aB †j , aB †k ] = 0

Weyl(2d, C) can be identified with the algebra of polynomial coefficient dif-


ferential operators in d complex variables w1 , w2 , . . . , wd . The subspace of
complex linear combinations of the elements

1, wj , ∂/∂wj , wj wk , ∂2 /∂wj ∂wk , wj ∂/∂wk

is closed under commutators and is isomorphic to the complexification of the Lie


algebra h2d+1 ⋊ sp(2d, R) built out of the Heisenberg Lie algebra in 2d variables
and the Lie algebra of the symplectic group Sp(2d, R). Recall that this is the
Lie algebra of polynomials of degree at most 2 on the phase space R2d , with the
Poisson bracket as Lie bracket.
One could also define the complex Weyl algebra by taking complex linear
combinations of products of generators 1, Pj , Qj , subject to the Heisenberg com-
mutation relations.
For Clifford algebras one has

Definition (Complex Clifford algebras, using annihilation and creation op-


erators). The complex Clifford algebra for d degrees of freedom is the algebra
Cliff(2d, C) generated by 1, aF j , aF †j for j = 1, 2, . . . , d satisfying the CAR

[aF j , aF †k ]+ = δjk 1, [aF j , aF k ]+ = [aF †j , aF †k ]+ = 0

or, alternatively, one has the following more general definition that also works
in the odd-dimensional case

Definition (Complex Clifford algebras). The complex Clifford algebra in n vari-


ables is the algebra Cliff(n, C) generated by 1, γj for j = 1, 2, . . . , n satisfying
the relations
[γj , γk ]+ = 2δjk

We won’t try and prove this here, but one can show that, abstractly as
algebras, the complex Clifford algebras are something well-known. Generalizing
the case d = 1 where we saw that Cliff(2, C) was isomorphic to the algebra of 2
by 2 complex matrices, one has isomorphisms

Cliff(2d, C) ↔ M (2^d , C)

in the even-dimensional case, and in the odd-dimensional case

Cliff(2d + 1, C) ↔ M (2^d , C) ⊕ M (2^d , C)

Two properties of Cliff(n, C) are

• As a vector space over C, a basis of Cliff(n, C) is the set of elements

1, γj , γj γk , γj γk γl , . . . , γ1 γ2 γ3 · · · γn−1 γn

for indices j, k, l, · · · ∈ 1, 2, . . . , n, with j < k < l < · · · . To show this,


consider all products of the generators, and use the commutation relations
for the γj to identify any such product with an element of this basis. The
relation γj2 = 1 shows that one can remove repeated occurrences of a
γj . The relation γj γk = −γk γj can then be used to put elements of the
product in the order of a basis element as above.

• As a vector space over C, Cliff(n, C) has dimension 2^n . One way to see


this is to consider the product

(1 + γ1 )(1 + γ2 ) · · · (1 + γn )

which will have 2^n terms that are exactly those of the basis listed above.

25.2 Real Clifford algebras


We can define real Clifford algebras Cliff(n, R) just as for the complex case, by
taking only real linear combinations:

Definition (Real Clifford algebras). The real Clifford algebra in n variables is


the algebra Cliff(n, R) generated over the real numbers by 1, γj for j = 1, 2, . . . , n
satisfying the relations
[γj , γk ]+ = 2δjk

For reasons that will be explained in the next chapter, it turns out that a
more general definition is useful. We write the number of variables as n = r + s,
for r, s non-negative integers, and now vary not just r + s, but also r − s, the
so-called “signature”.

Definition (Real Clifford algebras, arbitrary signature). The real Clifford al-
gebra in n = r + s variables is the algebra Cliff(r, s, R) over the real numbers
generated by 1, γj for j = 1, 2, . . . , n satisfying the relations

[γj , γk ]+ = ±2δjk 1

where we choose the + sign when j = k = 1, . . . , r and the − sign when j = k =


r + 1, . . . , n.

In other words, as in the complex case different γj anticommute, but only
the first r of them satisfy γj2 = 1, with the other s of them satisfying γj2 = −1.
Working out some of the low-dimensional examples, one finds:
• Cliff(0, 1, R). This has generators 1 and γ1 , satisfying

γ12 = −1

Taking real linear combinations of these two generators, the algebra one gets is just the algebra C of complex numbers, with γ1 playing the role of i = √−1.
• Cliff(0, 2, R). This has generators 1, γ1 , γ2 and a basis

1, γ1 , γ2 , γ1 γ2

with

γ12 = −1, γ22 = −1, (γ1 γ2 )2 = γ1 γ2 γ1 γ2 = −γ12 γ22 = −1

This four-dimensional algebra over the real numbers can be identified with
the algebra H of quaternions by taking

γ1 ↔ i, γ2 ↔ j, γ1 γ2 ↔ k

• Cliff(1, 1, R). This is the algebra M (2, R) of real 2 by 2 matrices, with


one possible identification as follows
       
$$1 \leftrightarrow \begin{pmatrix} 1&0\\0&1 \end{pmatrix},\quad \gamma_1 \leftrightarrow \begin{pmatrix} 0&1\\1&0 \end{pmatrix},\quad \gamma_2 \leftrightarrow \begin{pmatrix} 0&-1\\1&0 \end{pmatrix},\quad \gamma_1\gamma_2 \leftrightarrow \begin{pmatrix} 1&0\\0&-1 \end{pmatrix}$$

Note that one can construct this using the aF , a†F for the complex case
Cliff(2, C)
γ1 = aF + a†F , γ2 = aF − a†F
since these are represented as real matrices.
• Cliff(3, 0, R). This is the algebra M (2, C) of complex 2 by 2 matrices,
with one possible identification using Pauli matrices given by
 
$$1 \leftrightarrow \begin{pmatrix} 1&0\\0&1 \end{pmatrix}$$
$$\gamma_1 \leftrightarrow \sigma_1 = \begin{pmatrix} 0&1\\1&0 \end{pmatrix},\quad \gamma_2 \leftrightarrow \sigma_2 = \begin{pmatrix} 0&-i\\i&0 \end{pmatrix},\quad \gamma_3 \leftrightarrow \sigma_3 = \begin{pmatrix} 1&0\\0&-1 \end{pmatrix}$$
$$\gamma_1\gamma_2 \leftrightarrow i\sigma_3 = \begin{pmatrix} i&0\\0&-i \end{pmatrix},\quad \gamma_2\gamma_3 \leftrightarrow i\sigma_1 = \begin{pmatrix} 0&i\\i&0 \end{pmatrix},\quad \gamma_1\gamma_3 \leftrightarrow -i\sigma_2 = \begin{pmatrix} 0&-1\\1&0 \end{pmatrix}$$
$$\gamma_1\gamma_2\gamma_3 \leftrightarrow \begin{pmatrix} i&0\\0&i \end{pmatrix}$$
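The low-dimensional identifications above can all be verified by direct matrix computation. The following sketch is mine (in particular the quaternionic matrices iσ1, iσ2 are one conventional choice, not necessarily the text's); it checks the defining relations in each signature:

```python
import numpy as np

anti = lambda A, B: A @ B + B @ A
I2 = np.eye(2)

# Cliff(1,1,R) inside M(2,R): gamma_1^2 = +1, gamma_2^2 = -1
g1 = np.array([[0., 1.], [1., 0.]])
g2 = np.array([[0., -1.], [1., 0.]])
assert np.allclose(g1 @ g1, I2) and np.allclose(g2 @ g2, -I2)
assert np.allclose(anti(g1, g2), 0)

# Cliff(0,2,R) and the quaternions: both generators square to -1
q1 = 1j * np.array([[0, 1], [1, 0]])        # a choice for gamma_1
q2 = 1j * np.array([[0, -1j], [1j, 0]])     # a choice for gamma_2
assert np.allclose(q1 @ q1, -I2) and np.allclose(q2 @ q2, -I2)
assert np.allclose(anti(q1, q2), 0)
assert np.allclose((q1 @ q2) @ (q1 @ q2), -I2)  # (gamma_1 gamma_2)^2 = -1

# Cliff(3,0,R) inside M(2,C): the Pauli matrices
s = [np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]])]
for j in range(3):
    for k in range(3):
        assert np.allclose(anti(s[j], s[k]), 2 * (j == k) * I2)
```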

It turns out that Cliff(r, s, R) is always one or two copies of matrices of real,
complex or quaternionic elements, of dimension a power of 2, but this requires
a rather intricate algebraic argument that we will not enter into here. For the
details of this and the resulting pattern of algebras one gets, see for instance
[38]. One special case where the pattern is relatively simple is when one has
r = s. Then n = 2r is even-dimensional and one finds

Cliff(r, r, R) = M (2^r , R)

We will see in the next chapter that just as quadratic elements of the Weyl
algebra give a basis of the Lie algebra of the symplectic group, quadratic ele-
ments of the Clifford algebra give a basis of the Lie algebra of the orthogonal
group.

25.3 For further reading


A good source for more details about Clifford algebras and spinors is Chapter
12 of the representation theory textbook [66]. For the details of what happens
for all Cliff(r, s, R), another good source is Chapter 1 of [38].

Chapter 26

Clifford Algebras and


Geometry

The definitions given in last chapter of Weyl and Clifford algebras were purely
algebraic, based on a choice of generators. These definitions do though have a
more geometrical formulation, with the definition in terms of generators corre-
sponding to a specific choice of coordinates. For the Weyl algebra, the geometry
involved is known as symplectic geometry, and we have already seen that in the
bosonic case quantization of a phase space R2d depends on the choice of a non-
degenerate antisymmetric bilinear form Ω which determines the Poisson brack-
ets and thus the Heisenberg commutation relations. Such a Ω also determines a
group Sp(2d, R), which is the group of linear transformations of R2d preserving
Ω. The Clifford algebra also has a coordinate invariant definition, based on a
more well-known structure on a vector space Rn , that of a non-degenerate sym-
metric bilinear form, i.e. an inner product. In this case the group that preserves
the inner product is an orthogonal group. In the symplectic case antisymmetric
forms require an even number of dimensions, but this is not true for symmetric
forms, which also exist in odd dimensions.

26.1 Non-degenerate bilinear forms


In the case of M = R2d , the dual phase space, for two vectors u, u0 ∈ M

u = cq1 q1 + cp1 p1 + · · · + cqd qd + cpd pd ∈ M

u0 = c0q1 q1 + c0p1 p1 + · · · + c0qd qd + c0pd pd ∈ M

the Poisson bracket determines an antisymmetric bilinear form on M, given


explicitly by

$$\Omega(u, u') = c_{q_1} c'_{p_1} - c_{p_1} c'_{q_1} + \cdots + c_{q_d} c'_{p_d} - c_{p_d} c'_{q_d}$$
$$= \begin{pmatrix} c_{q_1} & c_{p_1} & \cdots & c_{q_d} & c_{p_d} \end{pmatrix} \begin{pmatrix} 0&1&\cdots&0&0\\ -1&0&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&0&1\\ 0&0&\cdots&-1&0 \end{pmatrix} \begin{pmatrix} c'_{q_1}\\ c'_{p_1}\\ \vdots\\ c'_{q_d}\\ c'_{p_d} \end{pmatrix}$$
Matrices g ∈ M (2d, R) such that
   
$$g^T \begin{pmatrix} 0&1&\cdots&0&0\\ -1&0&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&0&1\\ 0&0&\cdots&-1&0 \end{pmatrix} g = \begin{pmatrix} 0&1&\cdots&0&0\\ -1&0&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&0&1\\ 0&0&\cdots&-1&0 \end{pmatrix}$$
make up the group Sp(2d, R) and preserve Ω, satisfying
Ω(gu, gu0 ) = Ω(u, u0 )
This choice of Ω is much less arbitrary than it looks. One can show that
given any non-degenerate antisymmetric bilinear form on R2d a basis can be
found with respect to which it will be the Ω given here (for a proof, see [7]).
This is also true if one complexifies, taking (q, p) ∈ C2d and using the same
formula for Ω, which is now a bilinear form on C2d . In the real case the group
that preserves Ω is called Sp(2d, R), in the complex case Sp(2d, C).
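As a concrete d = 1 check (my own sketch; in this dimension Sp(2, R) coincides with SL(2, R)), determinant-one matrices such as shears and rotations preserve Ω:

```python
import numpy as np

# the d = 1 symplectic form Omega
Omega = np.array([[0., 1.], [-1., 0.]])

theta = 0.3
shear = np.array([[1., 0.7], [0., 1.]])            # det = 1
rot = np.array([[np.cos(theta), np.sin(theta)],
                [-np.sin(theta), np.cos(theta)]])  # det = 1

# determinant-one matrices lie in Sp(2, R), i.e. preserve Omega
for g in (shear, rot, shear @ rot):
    assert np.isclose(np.linalg.det(g), 1)
    assert np.allclose(g.T @ Omega @ g, Omega)
```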
To get a fermionic analog of this, it turns out that all we need to do is replace
“non-degenerate antisymmetric bilinear form Ω(·, ·)” with “non-degenerate sym-
metric bilinear form h·, ·i”. Such a symmetric bilinear form is actually something
much more familiar from geometry than the antisymmetric case analog: it is
just a notion of inner product. Two things are different in the symmetric case:
• The underlying vector space does not have to be even dimensional, one
can take M = Rn for any n, including n odd. To get a detailed analog of
the bosonic case though, we will mostly consider the even case n = 2d.
• For a given dimension n, there is not just one possible choice of h·, ·i up to
change of basis, but one possible choice for each pair of integers r, s such
that r + s = n. Given r, s, any choice of h·, ·i can be put in the form
$$\langle u, u'\rangle = u_1 u'_1 + u_2 u'_2 + \cdots + u_r u'_r - u_{r+1} u'_{r+1} - \cdots - u_n u'_n$$
$$= \begin{pmatrix} u_1 & \cdots & u_n \end{pmatrix} \underbrace{\begin{pmatrix} 1&0&\cdots&0&0\\ 0&1&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&-1&0\\ 0&0&\cdots&0&-1 \end{pmatrix}}_{r\ \text{+ signs},\ s\ \text{− signs}} \begin{pmatrix} u'_1\\ u'_2\\ \vdots\\ u'_{n-1}\\ u'_n \end{pmatrix}$$

For a proof by Gram-Schmidt orthogonalization, see [7].
We can thus extend our definition of the orthogonal group as the group of
transformations g preserving an inner product
hgu, gu0 i = hu, u0 i
to the case r, s arbitrary by
Definition (Orthogonal group O(r, s, R)). The group O(r, s, R) is the group of
real r + s by r + s matrices g that satisfy
   
$$g^T \underbrace{\begin{pmatrix} 1&0&\cdots&0&0\\ 0&1&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&-1&0\\ 0&0&\cdots&0&-1 \end{pmatrix}}_{r\ \text{+ signs},\ s\ \text{− signs}} g = \underbrace{\begin{pmatrix} 1&0&\cdots&0&0\\ 0&1&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&-1&0\\ 0&0&\cdots&0&-1 \end{pmatrix}}_{r\ \text{+ signs},\ s\ \text{− signs}}$$

SO(r, s, R) ⊂ O(r, s, R) is the subgroup of matrices of determinant +1.
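A small numerical illustration (mine, not from the text): in signature r = s = 1 the hyperbolic analogs of rotations, i.e. boosts, preserve the indefinite form:

```python
import numpy as np

# the symmetric bilinear form of signature r = s = 1
eta = np.diag([1., -1.])

t = 0.8
g = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])   # a hyperbolic "boost"

assert np.allclose(g.T @ eta @ g, eta)     # g is in O(1, 1, R)
assert np.isclose(np.linalg.det(g), 1)     # in fact in SO(1, 1, R)
```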


If one complexifies, taking components of vectors to be in Cn , using the
same formula for h·, ·i, one can change basis by multiplying the s basis elements
by a factor of i, and in this new basis all basis vectors ej satisfy hej , ej i = 1.
One thus sees that on Cn , as in the symplectic case, up to change of basis there
is only one non-degenerate bilinear form. The group preserving this is called
O(n, C). Note that on Cn h·, ·i is not the Hermitian inner product (which is
antilinear on the first variable), and it is not positive definite.

26.2 Clifford algebras and geometry


As defined by generators in the last chapter, Clifford algebras have no obvious
geometrical significance. It turns out however that they are powerful tools in
the study of the geometry of linear spaces with an inner product, including
especially the study of linear transformations that preserve the inner product,
i.e. rotations. To see the relation between Clifford algebras and geometry,
consider first the positive definite case Cliff(n, R). To an arbitrary vector
v = (v1 , v2 , . . . , vn ) ∈ Rn
we associate the Clifford algebra element v̸ = γ(v), where γ is the map
v ∈ Rn → γ(v) = v1 γ1 + v2 γ2 + · · · + vn γn ∈ Cliff(n, R)
Using the Clifford algebra relations for the γj , given two vectors v, w the
product of their associated Clifford algebra elements satisfies
v̸ w̸ + w̸ v̸ = [v1 γ1 + v2 γ2 + · · · + vn γn , w1 γ1 + w2 γ2 + · · · + wn γn ]+ = 2(v1 w1 + v2 w2 + · · · + vn wn ) = 2hv, wi

where h·, ·i is the symmetric bilinear form on Rn corresponding to the standard
inner product of vectors. Note that taking v = w one has

v̸2 = hv, vi = ||v||2

The Clifford algebra Cliff(n, R) contains Rn as the subspace of linear combi-


nations of the generators γj , and one can think of it as a sort of enhancement of
the vector space Rn that encodes information about the inner product. In this
larger structure one can multiply as well as add vectors, with the multiplication
determined by the inner product.
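The relation v̸ w̸ + w̸ v̸ = 2hv, wi can be checked in the n = 3 Pauli matrix realization; here is a short sketch of mine:

```python
import numpy as np

# gamma_j for the positive definite case n = 3, realized as Pauli matrices
gam = [np.array([[0, 1], [1, 0]]),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]

def slash(v):
    # the map v -> v-slash = v_1 gamma_1 + v_2 gamma_2 + v_3 gamma_3
    return sum(vj * g for vj, g in zip(v, gam))

v = np.array([1.0, 2.0, 3.0])
w = np.array([-0.5, 4.0, 1.5])

# v-slash w-slash + w-slash v-slash = 2 <v, w> 1
assert np.allclose(slash(v) @ slash(w) + slash(w) @ slash(v),
                   2 * np.dot(v, w) * np.eye(2))
# and v-slash squared is ||v||^2 times the identity
assert np.allclose(slash(v) @ slash(v), np.dot(v, v) * np.eye(2))
```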
In general one can define a Clifford algebra whenever one has a vector space
with a symmetric bilinear form:

Definition (Clifford algebra of a symmetric bilinear form). Given a vector space
V with a symmetric bilinear form ⟨·, ·⟩, the Clifford algebra Cliff(V, ⟨·, ·⟩) is the
algebra generated by 1 and elements of V , with the relations

v̸w̸ + w̸v̸ = 2⟨v, w⟩

Note that different people use different conventions here, with

v̸w̸ + w̸v̸ = −2⟨v, w⟩

another common choice. One also sees variants without the factor of 2.
For n dimensional vector spaces over C, we have seen that for any non-
degenerate symmetric bilinear form a basis can be found such that ⟨·, ·⟩ has the
standard form

⟨z, w⟩ = z1 w1 + z2 w2 + · · · + zn wn

As a result, there is just one complex Clifford algebra in dimension n, the one
we defined as Cliff(n, C).
For n dimensional vector spaces over R with a non-degenerate symmetric
bilinear form of type r, s such that r + s = n, the corresponding Clifford algebras
Cliff(r, s, R) are the ones defined in terms of generators in the last chapter.
In special relativity, space-time is a real 4-dimensional vector space with an
indefinite inner product corresponding to (depending on one’s choice of conven-
tion) either the case r = 1, s = 3 or the case s = 1, r = 3. The group of linear
transformations preserving this inner product is called the Lorentz group, and
its orientation preserving component is written as SO(3, 1) or SO(1, 3) depend-
ing on the choice of convention. In later chapters we will consider what happens
to quantum mechanics in the relativistic case, and there encounter the corre-
sponding Clifford algebras Cliff(3, 1, R) or Cliff(1, 3, R). The generators γj of
such a Clifford algebra are well known in the subject as the “Dirac γ-matrices”.
For now though, we will restrict attention to the positive definite case, so will
just be considering Cliff(n, R) and seeing how it is used to study the group
O(n) of rotations in Rn .

26.2.1 Rotations as iterated orthogonal reflections
We’ll consider two different ways of seeing the relationship between the Clifford
algebra Cliff(n, R) and the group O(n) of rotations in Rn . The first is based
upon the geometrical fact (known as the Cartan-Dieudonné theorem) that one
can get any rotation by doing multiple orthogonal reflections in different hy-
perplanes. Orthogonal reflection in the hyperplane perpendicular to a vector w
takes a vector v to the vector
hv, wi
v0 = v − 2 w
hw, wi
something that can easily be seen from the following picture

From now on we identify vectors v, v′ , w with the corresponding Clifford
algebra elements by the map γ. The linear transformation given by reflection
in w is

v̸ → v̸′ = v̸ − 2 (⟨v, w⟩/⟨w, w⟩) w̸
       = v̸ − (v̸w̸ + w̸v̸) w̸/⟨w, w⟩

Since

w̸ (w̸/⟨w, w⟩) = ⟨w, w⟩/⟨w, w⟩ = 1

we have (for non-zero vectors w)

w̸⁻¹ = w̸/⟨w, w⟩

and the reflection transformation is just conjugation by w̸ times a minus sign

v̸ → v̸′ = v̸ − v̸ − w̸v̸w̸⁻¹ = −w̸v̸w̸⁻¹

So, thinking of vectors as lying in the Clifford algebra, the orthogonal trans-
formation that is the result of one reflection is just a conjugation (with a minus

sign). These lie in the group O(n), but not in the subgroup SO(n), since they
change orientation. The result of two reflections in hyperplanes orthogonal to
w1 , w2 will be a conjugation by w̸2 w̸1

v̸ → v̸′ = −w̸2 (−w̸1 v̸ w̸1⁻¹) w̸2⁻¹ = (w̸2 w̸1 ) v̸ (w̸2 w̸1 )⁻¹

This will be a rotation preserving the orientation, so of determinant one and in
the group SO(n).
This construction not only gives an efficient way of representing rotations
(as conjugations in the Clifford algebra), but it also provides a construction of
the group Spin(n) in arbitrary dimension n. One can define
Definition (Spin(n)). The group Spin(n) is the set of invertible elements of
the Clifford algebra Cliff(n, R) of the form

w̸1 w̸2 · · · w̸k

where the wj for j = 1, · · · , k are vectors in Rn satisfying |wj |² = 1 and k is
even. Group multiplication is just Clifford algebra multiplication.

The action of Spin(n) on vectors v ∈ Rn will be given by conjugation

v̸ → (w̸1 w̸2 · · · w̸k ) v̸ (w̸1 w̸2 · · · w̸k )⁻¹

and this will correspond to a rotation of the vector v. This construction gen-
eralizes to arbitrary n the one we gave in chapter 6 of Spin(3) in terms of unit
length elements of the quaternion algebra H. One can see here the characteristic
fact that there are two elements of the Spin(n) group giving the same rotation
in SO(n) by noticing that changing the sign of the Clifford algebra element
w̸1 w̸2 · · · w̸k does not change the conjugation action, since the two sign changes
cancel.
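These formulas can be checked numerically in three dimensions. The sketch below is our own (the Pauli matrices stand in for γ1 , γ2 , γ3 , and `reflect` is a hypothetical helper name); it verifies that −w̸v̸w̸⁻¹ implements reflection in the hyperplane perpendicular to w, and that two reflections compose to an element of SO(3):

```python
import numpy as np

# Pauli matrices as gamma_1, gamma_2, gamma_3 (representation of Cliff(3,0,R))
gammas = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def slash(v):
    return sum(vj * g for vj, g in zip(v, gammas))

def reflect(w, v):
    """v' = -slash(w) slash(v) slash(w)^(-1), read back off as a vector
    using v_j = (1/2) tr(gamma_j slash(v)) for the Pauli matrices."""
    W = slash(w)
    Vp = -W @ slash(v) @ np.linalg.inv(W)
    return np.array([np.trace(g @ Vp).real / 2 for g in gammas])

w = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
v = np.array([0.3, -2.0, 1.5])
# agrees with the geometric reflection formula v - 2 <v,w>/<w,w> w  (|w| = 1)
assert np.allclose(reflect(w, v), v - 2 * np.dot(v, w) * w)

# two reflections give a rotation: orthogonal with determinant +1
w1, w2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
R = np.column_stack([reflect(w2, reflect(w1, e)) for e in np.eye(3)])
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```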

26.2.2 The Lie algebra of the rotation group and quadratic elements of the Clifford algebra
For a second approach to understanding rotations in arbitrary dimension, one
can use the fact that these are generated by taking products of rotations in the
coordinate planes. A rotation by an angle θ in the j −k coordinate plane (j < k)
will be given by
v → eθjk v
where jk is an n by n matrix with only two non-zero entries: jk entry −1 and
kj entry +1 (see equation 5.2.1). Restricting attention to the j − k plane, eθjk
acts as the standard rotation matrix in the plane
    
vj cos θ − sin θ vj

vk sin θ cos θ vk

In the SO(3) case we saw that there were three of these matrices

ε23 = l1 ,   ε13 = −l2 ,   ε12 = l3

providing a basis of the Lie algebra so(3). In n dimensions there will be ½(n² − n)
of them, providing a basis of the Lie algebra so(n).
Just as in the case of SO(3) where unit length quaternions were used, we can
use elements of the Clifford algebra to get these same rotation transformations,
but as conjugations in the Clifford algebra. To see how this works, consider the
quadratic Clifford algebra element γj γk for j 6= k and notice that

(γj γk )2 = γj γk γj γk = −γj γj γk γk = −1

so one has
e^{(θ/2)γj γk} = (1 − (θ/2)²/2! + · · · ) + γj γk ((θ/2) − (θ/2)³/3! + · · · )
             = cos(θ/2) + γj γk sin(θ/2)
Conjugating a vector vj γj + vk γk in the j − k plane by this, one can show
that
e^{−(θ/2)γj γk} (vj γj + vk γk ) e^{(θ/2)γj γk} = (vj cos θ − vk sin θ)γj + (vj sin θ + vk cos θ)γk

which is just a rotation by θ in the j − k plane. Such a conjugation will also
leave invariant the γl for l ≠ j, k. Thus one has

e^{−(θ/2)γj γk} γ(v) e^{(θ/2)γj γk} = γ(e^{θεjk} v)   (26.1)

and the infinitesimal version


[−½ γj γk , γ(v)] = γ(εjk v)   (26.2)
Note that these relations are closely analogous to equations 18.7 and 18.6, which
in the symplectic case show that a rotation in the Q, P plane is given by con-
jugation by the exponential of an operator quadratic in Q, P . We will examine
this analogy in greater detail in chapter 28.
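The formulas above can be verified numerically (our own sketch, with the Pauli matrices σ1 , σ2 standing in for γ1 , γ2 ), including the double cover phenomenon discussed next: θ = 2π gives the trivial rotation but the Clifford algebra element −1:

```python
import numpy as np

g1 = np.array([[0, 1], [1, 0]], dtype=complex)
g2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
g12 = g1 @ g2
# (g1 g2)^2 = -1, so the exponential series collapses to cos and sin terms
assert np.allclose(g12 @ g12, -np.eye(2))

def U(theta):
    """exp((theta/2) g1 g2) = cos(theta/2) 1 + sin(theta/2) g1 g2"""
    return np.cos(theta / 2) * np.eye(2) + np.sin(theta / 2) * g12

theta, v1, v2 = 0.7, 1.3, -0.4
V = v1 * g1 + v2 * g2
# conjugation e^{-(theta/2) g1 g2} V e^{(theta/2) g1 g2} rotates by theta
Vrot = U(-theta) @ V @ U(theta)
expected = (v1 * np.cos(theta) - v2 * np.sin(theta)) * g1 + \
           (v1 * np.sin(theta) + v2 * np.cos(theta)) * g2
assert np.allclose(Vrot, expected)
# double cover: theta = 2 pi is the trivial rotation, but U(2 pi) = -1
assert np.allclose(U(2 * np.pi), -np.eye(2))
```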
One can also see that, just as in our earlier calculations in three dimensions,
one gets a double cover of the group of rotations, with here the elements e^{(θ/2)γj γk}
of the Clifford algebra giving a double cover of the group of rotations in the
j − k plane (as θ goes from 0 to 2π). General elements of the spin group can
be constructed by multiplying these for different angles in different coordinate
planes. One sees that the Lie algebra spin(n) can be identified with the Lie
algebra so(n) by
εjk ↔ −½ γj γk
Yet another way to see this would be to compute the commutators of the −½ γj γk
for different values of j, k and show that they satisfy the same commutation
relations as the corresponding matrices εjk .
Recall that in the bosonic case we found that quadratic combinations of the
Qj , Pk (or of the aBj , a†Bj ) gave operators satisfying the commutation relations

of the Lie algebra sp(2n, R). This is the Lie algebra of the group Sp(2n, R),
the group preserving the non-degenerate antisymmetric bilinear form Ω(·, ·) on
the phase space R2n . The fermionic case is precisely analogous, with the role of
the antisymmetric bilinear form Ω(·, ·) replaced by the symmetric bilinear form
h·, ·i and the Lie algebra sp(2n, R) replaced by so(n) = spin(n).
In the bosonic case the linear functions of the Qj , Pj satisfied the commuta-
tion relations of another Lie algebra, the Heisenberg algebra, but in the fermionic
case this is not true for the γj . In chapter 27 we will see that one can define a
notion of a “Lie superalgebra” that restores the parallelism.

26.3 For further reading


Some more detail about the relationship between geometry and Clifford algebras
can be found in [38], and an exhaustive reference is [49].

Chapter 27

Anticommuting Variables
and Pseudo-classical
Mechanics

The analogy between the algebras of operators in the bosonic (Weyl algebra) and
fermionic (Clifford algebra) cases can be extended by introducing a fermionic
analog of phase space and the Poisson bracket. This gives a fermionic ana-
log of classical mechanics, sometimes called “pseudo-classical mechanics”, the
quantization of which gives the Clifford algebra as operators, and spinors as
state spaces. In this chapter we'll introduce “anticommuting variables” ξj that
will be the fermionic analogs of the variables qj , pj . These objects will become
generators of the Clifford algebra under quantization, and will later be used in
the construction of fermionic state spaces, by analogy with the Schrödinger and
Bargmann-Fock constructions in the bosonic case.

27.1 The Grassmann algebra of polynomials on


anticommuting generators
Given a phase space M = R2d , one gets classical observables by taking poly-
nomial functions on M . These are generated by the linear functions qj , pj , j =
1, . . . , d, which lie in the dual space M = M ∗ . One can instead start with a
real vector space V = Rn with n not necessarily even, and again consider the
space V ∗ of linear functions on V , but with a different notion of multiplication,
one that is anticommutative on elements of V ∗ . Using such a multiplication,
one can generate an anticommuting analog of the algebra of polynomials on V
in the following manner, beginning with a choice of basis elements ξj of V ∗ :

Definition (Grassmann algebra). The algebra over the real numbers generated

by ξj , j = 1, . . . , n, satisfying the relations

ξj ξk + ξk ξj = 0

is called the Grassmann algebra, and denoted Λ∗ (Rn ).

Note that these relations imply that the generators satisfy ξj² = 0. Also note that
sometimes the product in the Grassmann algebra is called the “wedge product”
and the product of ξj and ξk is denoted ξj ∧ ξk . We will not use a different
symbol for the product in the Grassmann algebra, relying on the notation for
generators to keep straight what is a generator of a conventional polynomial
algebra (e.g. qj or pj ) and what is a generator of a Grassmann algebra (e.g. ξj ).
Recall from section 9.5 that the algebra of polynomial functions on V could
be thought of as S ∗ (V ∗ ), the symmetric part of the tensor algebra on V ∗ , with
multiplication of two linear functions l1 , l2 in V ∗ given by
l1 l2 = ½ (l1 ⊗ l2 + l2 ⊗ l1 ) ∈ S²(V ∗ )

This corresponds to a polynomial on V by taking its value at the point v ∈ V
to be

l1 l2 (v) = ½ (l1 ⊗ l2 + l2 ⊗ l1 )(v ⊗ v)
Similarly, the Grassmann algebra on V is the antisymmetric part of the tensor
algebra on V ∗ , with the wedge product of two linear functions l1 , l2 in V ∗ given
by

l1 ∧ l2 = ½ (l1 ⊗ l2 − l2 ⊗ l1 ) ∈ Λ²(V ∗ )

If one tries to think of these as functions on V , one cannot ask for their value
at a point v, since

½ (l1 ⊗ l2 − l2 ⊗ l1 )(v ⊗ v) = 0
The Grassmann algebra behaves in many ways like the polynomial algebra
on Rn , but it is finite dimensional, with basis

1, ξj , ξj ξk , ξj ξk ξl , · · · , ξ1 ξ2 · · · ξn

for indices j < k < l < · · · taking values 1, 2, . . . , n. As with polynomials,


monomials are characterized by a degree (number of generators in the product),
which takes values from 0 to n. Λk (Rn ) is the subspace of Λ∗ (Rn ) of linear
combinations of monomials of degree k.
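As a concrete illustration (our own toy encoding, not from the text), a Grassmann algebra element can be represented as a dictionary mapping strictly increasing index tuples to coefficients; the product then merges index lists, with a sign for each transposition and zero whenever a generator repeats:

```python
# Toy model of Lambda^*(R^n): {tuple of strictly increasing indices: coefficient}.
# gmul and xi are our own helper names, not notation from the text.
def gmul(a, b):
    """Product of two Grassmann elements."""
    out = {}
    for I, ca in a.items():
        for J, cb in b.items():
            if set(I) & set(J):            # repeated generator => product is 0
                continue
            merged = list(I) + list(J)
            # bubble-sort the indices, flipping the sign on each transposition
            sign, idx = 1, merged[:]
            for i in range(len(idx)):
                for j in range(len(idx) - 1 - i):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def xi(j):                                  # the generator xi_j
    return {(j,): 1}

# anticommutativity:  xi_1 xi_2 = - xi_2 xi_1,  and  xi_1^2 = 0
assert gmul(xi(1), xi(2)) == {(1, 2): 1}
assert gmul(xi(2), xi(1)) == {(1, 2): -1}
assert gmul(xi(1), xi(1)) == {}
```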

Digression (Differential forms). The Grassmann algebra is also known as the


exterior algebra, and readers may have already seen it in the context of differ-
ential forms on Rn . These are known to physicists as “antisymmetric tensor
fields”, and given by taking elements of the exterior algebra Λ∗ (Rn ) with coef-
ficients not constants, but functions on Rn . This construction is important in
the theory of manifolds, where at a point x in a manifold M , one has a tangent

space Tx M and its dual space (Tx M )∗ . A set of local coordinates xj on M gives
basis elements of (Tx M )∗ denoted by dxj and differential forms locally can be
written as sums of terms of the form

f (x1 , x2 , · · · , xn )dxj ∧ · · · ∧ dxk ∧ · · · ∧ dxl

where the indices j, k, l satisfy 1 ≤ j < k < l ≤ n.


A fundamental principle of mathematics is that a good way to understand
a space is in terms of the functions on it. One can think of what we have done
here as creating a new kind of space out of Rn , where the algebra of functions
on the space is Λ∗ (Rn ), generated by coordinate functions ξj with respect to a
basis of Rn . The enlargement of conventional geometry to include new kinds
of spaces such that this makes sense is known as “supergeometry”, but we will
not attempt to pursue this subject here. Spaces with this new kind of geometry
have functions on them, but do not have conventional points since we have seen
that one can’t ask what the value of an anticommuting function at a point is.
Remarkably, one can do calculus on such unconventional spaces, introducing
analogs of the derivative and integral for anticommuting functions. For the case
n = 1, an arbitrary function is

F (ξ) = c0 + c1 ξ

and one can take

∂F/∂ξ = c1
For larger values of n, an arbitrary function can be written as

F (ξ1 , ξ2 , . . . , ξn ) = FA + ξj FB

where FA , FB are functions that do not depend on the chosen ξj (one gets FB
by using the anticommutation relations to move ξj all the way to the left). Then
one can define

∂F/∂ξj = FB
This derivative operator has many of the same properties as the conventional
derivative, although there are unconventional signs one must keep track of. An
unusual property of this derivative that is easy to see is that one has
(∂/∂ξj )(∂/∂ξj ) = 0
Taking the derivative of a product one finds this version of the Leibniz rule
for monomials F and G
∂(F G)/∂ξj = (∂F/∂ξj )G + (−1)^|F| F (∂G/∂ξj )

where |F | is the degree of the monomial F .

A notion of integration (often called the “Berezin integral”) with many of the
usual properties of an integral can also be defined. It has the peculiar feature
of being the same operation as differentiation, defined in the n = 1 case by
∫ (c0 + c1 ξ) dξ = c1

and for larger n by


∫ F (ξ1 , ξ2 , · · · , ξn ) dξ1 dξ2 · · · dξn = (∂/∂ξn )(∂/∂ξn−1 ) · · · (∂/∂ξ1 ) F = cn

where cn is the coefficient of the basis element ξ1 ξ2 · · · ξn in the expression of F


in terms of basis elements.
This notion of integration is a linear operator on functions, and it satisfies
an analog of integration by parts, since if one has


F = ∂G/∂ξj

then

∫ F dξj = ∂F/∂ξj = (∂/∂ξj )(∂/∂ξj ) G = 0
using the fact that repeated derivatives give zero.
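The derivative and Berezin integral are easy to realize in the same toy encoding of Grassmann elements as dictionaries from strictly increasing index tuples to coefficients (our own sketch; `dxi` and `berezin` are hypothetical helper names):

```python
# Grassmann elements encoded as {tuple of strictly increasing indices: coefficient}.
def dxi(j, F):
    """d/dxi_j: anticommute xi_j to the front (sign (-1)^position) and drop it."""
    out = {}
    for I, c in F.items():
        if j not in I:
            continue
        pos = I.index(j)
        K = I[:pos] + I[pos + 1:]
        out[K] = out.get(K, 0) + ((-1) ** pos) * c
    return {k: v for k, v in out.items() if v != 0}

def berezin(F, n):
    """Integral of F dxi_1 ... dxi_n = coefficient of xi_1 xi_2 ... xi_n."""
    return F.get(tuple(range(1, n + 1)), 0)

# F = 1 + 2 xi_1 + 3 xi_1 xi_2   (n = 2)
F = {(): 1, (1,): 2, (1, 2): 3}
assert dxi(1, F) == {(): 2, (2,): 3}
assert dxi(2, dxi(2, F)) == {}           # repeated derivatives give zero
assert berezin(F, 2) == 3
# analog of integration by parts: the integral of any derivative vanishes
assert berezin(dxi(1, F), 2) == 0
```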

27.2 Pseudo-classical mechanics and the fermionic


Poisson bracket
The basic structure of Hamiltonian classical mechanics depends on an even
dimensional phase space M = R2d with a Poisson bracket {·, ·} on functions on
this space. Time evolution of a function f on phase space is determined by

df/dt = {f, h}
for some Hamiltonian function h. This says that taking the derivative of any
function in the direction of the velocity vector of a classical trajectory is the
linear map
f → {f, h}
on functions. As we saw in chapter 12, since this linear map is a derivative, the
Poisson bracket will have the derivation property, satisfying the Leibniz rule

{f1 , f2 f3 } = f2 {f1 , f3 } + {f1 , f2 }f3

for arbitrary functions f1 , f2 , f3 on phase space. Using the Leibniz rule and an-
tisymmetry, one can calculate Poisson brackets for any polynomials, just from

knowing the Poisson bracket on generators qj , pj (or, equivalently, the antisym-
metric bilinear form Ω(·, ·)), which we chose to be

{qj , qk } = {pj , pk } = 0, {qj , pk } = −{pk , qj } = δjk

Notice that we have a symmetric multiplication on generators, while the Poisson


bracket is antisymmetric.
To get pseudo-classical mechanics, we think of the Grassmann algebra Λ∗ (Rn )
as our algebra of classical observables, an algebra we can think of as functions
on a “fermionic” phase space V = Rn (note that in the fermionic case, the phase
space does not need to be even dimensional). We want to find an appropriate
notion of fermionic Poisson bracket operation on this algebra, and it turns out
that this can be done. While the standard Poisson bracket is an antisymmetric
bilinear form Ω(·, ·) on linear functions, the fermionic Poisson bracket will be
based on a choice of symmetric bilinear form on linear functions, equivalently,
a notion of inner product h·, ·i.
Denoting the fermionic Poisson bracket by {·, ·}+ , for a multiplication anti-
commutative on generators one has to adjust signs in the Leibniz rule, and the
derivation property analogous to the derivation property of the usual Poisson
bracket is

{F1 F2 , F3 }+ = F1 {F2 , F3 }+ + (−1)^{|F2||F3|} {F1 , F3 }+ F2

where |F2 | and |F3 | are the degrees of F2 and F3 . It will also have the symmetry
property
{F1 , F2 }+ = −(−1)^{|F1||F2|} {F2 , F1 }+
and one can use these properties to compute the fermionic Poisson bracket for
arbitrary functions in terms of the relations for generators.
One can think of the ξj as the “anti-commuting coordinate functions” with
respect to a basis ei of V = Rn . We have seen that the symmetric bilinear
forms on Rn are classified by a choice of positive signs for some basis vectors,
negative signs for the others. So, on generators ξj one can choose

{ξj , ξk }+ = ±δjk

with a plus sign for j = k = 1, · · · , r and a minus sign for j = k = r + 1, · · · , n,


corresponding to the possible inequivalent choices of non-degenerate symmetric
bilinear forms.
Taking the case of a positive-definite inner product for simplicity, one can
calculate explicitly the fermionic Poisson brackets for linear and quadratic com-
binations of the generators. One finds

{ξj ξk , ξl }+ = ξj {ξk , ξl }+ − {ξj , ξl }+ ξk = δkl ξj − δjl ξk (27.1)

and

{ξj ξk , ξl ξm }+ ={ξj ξk , ξl }+ ξm + ξl {ξj ξk , ξm }+


=δkl ξj ξm − δjl ξk ξm + δkm ξl ξj − δjm ξl ξk (27.2)

The second of these equations shows that the quadratic combinations of the
generators ξj satisfy the relations of the Lie algebra of the group of rotations in
n dimensions (so(n) = spin(n)). The first shows that the ξk ξl acts on the ξj as
infinitesimal rotations in the k − l plane.
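The claim that equation 27.2 reproduces so(n) can be tested numerically: identifying ξj ξk with the antisymmetric matrix having jk entry +1 and kj entry −1, the right hand side of 27.2 should match the matrix commutator. A sketch (our own check, with zero-based indices; `E`, `L`, `bracket` are hypothetical helper names):

```python
import numpy as np

n = 4

def E(j, k):
    """Elementary matrix with a single 1 in entry (j, k)."""
    M = np.zeros((n, n))
    M[j, k] = 1.0
    return M

def L(j, k):                 # matrix corresponding to the quadratic xi_j xi_k
    return E(j, k) - E(k, j)

def bracket(j, k, l, m):     # right hand side of equation 27.2, as a matrix
    d = lambda a, b: 1.0 if a == b else 0.0
    return (d(k, l) * L(j, m) - d(j, l) * L(k, m)
            + d(k, m) * L(l, j) - d(j, m) * L(l, k))

# fermionic Poisson bracket of quadratics = matrix commutator in so(n)
for (j, k, l, m) in [(0, 1, 1, 2), (0, 1, 2, 3), (0, 2, 2, 1), (1, 3, 3, 0)]:
    assert np.allclose(bracket(j, k, l, m), L(j, k) @ L(l, m) - L(l, m) @ L(j, k))
```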
In the case of the conventional Poisson bracket, the antisymmetry of the
bracket and the fact that it satisfies the Jacobi identity implies that it is a
Lie bracket determining a Lie algebra (the infinite dimensional Lie algebra of
functions on a phase space R2d ). The fermionic Poisson bracket provides an
example of something called a Lie superalgebra. These can be defined for vector
spaces with some usual and some fermionic coordinates:
Definition (Lie superalgebra). A Lie superalgebra structure on a real or com-
plex vector space V is given by a Lie superbracket [·, ·]± . This is a bilinear map
on V which on generators X, Y, Z (which may be usual coordinates or fermionic
ones) satisfies
[X, Y ]± = −(−1)^{|X||Y|} [Y, X]±
and a super-Jacobi identity

[X, [Y, Z]± ]± = [[X, Y ]± , Z]± + (−1)^{|X||Y|} [Y, [X, Z]± ]±

where |X| takes value 0 for a usual generator, 1 for a fermionic generator.
Analogously to the bosonic case, on polynomials in the generators of order less
than or equal to two, the fermionic Poisson bracket {·, ·}+ is a
Lie superbracket, giving a Lie superalgebra of dimension 1 + n + ½(n² − n) (since
there is one constant, n linear terms ξj and ½(n² − n) quadratic terms ξj ξk ).
On functions of order two this Lie superalgebra is a Lie algebra, so(n). We will
see in chapter 28 that one can generalize the definition of a representation to
Lie superalgebras, and quantization will give a distinguished representation of
this Lie superalgebra, in a manner quite parallel to that of the Schrödinger or
Bargmann-Fock constructions of a representation in the bosonic case.
The relation between between the quadratic and linear polynomials in the
generators is parallel to what happens in the bosonic case. Here we have the
fermionic analog of the bosonic theorem 14.1:
Theorem 27.1. The Lie algebra so(n, R) is isomorphic to the Lie algebra
Λ²(V ∗ ) (with Lie bracket {·, ·}+ ) of order two anticommuting polynomials on
V = Rn , by the isomorphism

L ↔ µL

where L ∈ so(n, R) is an antisymmetric n by n real matrix, and

µL = ½ ξ · Lξ = ½ Σj,k Ljk ξj ξk

The so(n, R) action on anticommuting coordinate functions is

{µL , ξk }+ = Σj Ljk ξj

or
{µL , ξ}+ = Lᵀ ξ

Proof. The theorem follows from equations 27.1 and 27.2, or one can proceed
by analogy with the proof of theorem 14.1 as follows. First prove the second
part of the theorem by computing

{½ Σj,k ξj Ljk ξk , ξl }+ = ½ Σj,k Ljk (ξj {ξk , ξl }+ − {ξj , ξl }+ ξk )
                         = ½ (Σj Ljl ξj − Σk Llk ξk )
                         = Σj Ljl ξj   (since L = −Lᵀ)

For the first part of the theorem, the map

L → µL

is a vector space isomorphism of the space of antisymmetric matrices and


Λ2 (Rn ). To show that it is a Lie algebra isomorphism, one can use an analogous
argument to that of the proof of 14.1. Here one considers the action

ξ → {µL , ξ}+

of µL ∈ so(n, R) on an arbitrary

ξ = Σj cj ξj

and uses the super-Jacobi identity relating the fermionic Poisson brackets of
µL , µL0 , ξ.

27.3 Examples of pseudo-classical mechanics


In pseudo-classical mechanics, the dynamics will be determined by choosing a
Hamiltonian h in Λ∗ (Rn ). Observables will be other functions F ∈ Λ∗ (Rn ),
and they will satisfy the analog of Hamilton’s equations

dF/dt = {F, h}+

We’ll consider two of the simplest possible examples.

27.3.1 The pseudo-classical spin degree of freedom
Using pseudo-classical mechanics, one can find a “classical” analog of something
that is quintessentially quantum: the degree of freedom that appears in the
qubit or spin 1/2 system that we have seen repeatedly in this course. Taking
V = R3 with the standard inner product as fermionic phase space, we have
three generators ξ1 , ξ2 , ξ3 ∈ V ∗ satisfying the relations

{ξj , ξk }+ = δjk

and an 8 dimensional space of functions with basis

1, ξ1 , ξ2 , ξ3 , ξ1 ξ2 , ξ1 ξ3 , ξ2 ξ3 , ξ1 ξ2 ξ3

If we want the Hamiltonian function to be non-trivial and of even degree, it


will have to be a linear combination

h = B12 ξ1 ξ2 + B13 ξ1 ξ3 + B23 ξ2 ξ3

for some constants B12 , B13 , B23 . This can be written


h = ½ Σ_{j,k=1}^{3} Ljk ξj ξk

where Ljk are the entries of the matrix


 
     (  0      B12    B13 )
L =  ( −B12     0     B23 )
     ( −B13   −B23     0  )

The equations of motion on generators will be

dξj (t)/dt = {ξj , h}+ = −{h, ξj }+

which, since L = −Lᵀ, by theorem 27.1 can be written

dξ(t)/dt = Lξ(t)

(where ξ is the column vector with components ξj ), with solution

ξ(t) = e^{tL} ξ(0)
This will be a time-dependent rotation of the ξj in the plane perpendicular to

B = (B23 , −B13 , B12 )

at a constant speed proportional to |B|.
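A quick numerical check of this picture (our own sketch; `expm_series` is a hypothetical helper computing the matrix exponential by its Taylor series): the matrix L built from B12 , B13 , B23 is antisymmetric, so e^{tL} is orthogonal, and it fixes the vector B = (B23 , −B13 , B12 ):

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential via a plain Taylor series (fine for small norms)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

B12, B13, B23 = 0.4, -0.7, 1.1
L = np.array([[0, B12, B13], [-B12, 0, B23], [-B13, -B23, 0]])
B = np.array([B23, -B13, B12])

R = expm_series(0.8 * L)
assert np.allclose(R @ R.T, np.eye(3))   # orthogonal, since L is antisymmetric
assert np.allclose(R @ B, B)             # the axis B is left fixed: L B = 0
```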

27.3.2 The pseudo-classical fermionic oscillator
We have already studied the fermionic oscillator as a quantum system, and one
can ask whether there is a corresponding pseudo-classical system. Such a system
is given by taking an even dimensional fermionic phase space V = R2d , with a
basis of coordinate functions ξ1 , · · · , ξ2d that generate Λ∗ (R2d ). On generators
the fermionic Poisson bracket relations come from the standard choice of positive
definite symmetric bilinear form

{ξj , ξk }+ = δjk

As shown in theorem 27.1, quadratic products ξj ξk act on the generators by


infinitesimal rotations in the j − k plane, and satisfy the commutation relations
of so(2d).
To get a pseudo-classical system corresponding to the fermionic oscillator
one makes the choice
h = ½ Σ_{j=1}^{d} (ξ2j ξ2j−1 − ξ2j−1 ξ2j ) = Σ_{j=1}^{d} ξ2j ξ2j−1

This makes h the moment map for a simultaneous rotation in the 2j − 1, 2j
planes, corresponding to a matrix in so(2d) given by

L = Σ_{j=1}^{d} ε_{2j−1,2j}

As in the bosonic case, we can make the standard choice of complex structure
J = J0 on R2d and get a decomposition

V ∗ ⊗ C = R2d ⊗ C = Cd ⊕ Cd

into eigenspaces of J of eigenvalue ±i. This is done by defining


θj = (1/√2)(ξ2j−1 + iξ2j ),   θ̄j = (1/√2)(ξ2j−1 − iξ2j )

for j = 1, . . . , d. These satisfy the fermionic Poisson bracket relations

{θj , θk }+ = {θ̄j , θ̄k }+ = 0,   {θj , θ̄k }+ = δjk

In terms of the θj , the Hamiltonian is


h = −(i/2) Σ_{j=1}^{d} (θj θ̄j − θ̄j θj ) = −i Σ_{j=1}^{d} θj θ̄j

Using the derivation property of {·, ·}+ one finds


{h, θj }+ = −i Σ_{k=1}^{d} (θk {θ̄k , θj }+ − {θk , θj }+ θ̄k ) = −iθj

and, similarly,
{h, θ̄j }+ = iθ̄j
so one sees that h is just the generator of U (1) ⊂ U (d) phase rotations on the
variables θj . The equations of motion are

dθj /dt = {θj , h}+ = iθj ,   dθ̄j /dt = {θ̄j , h}+ = −iθ̄j

with solutions

θj (t) = e^{it} θj (0),   θ̄j (t) = e^{−it} θ̄j (0)

27.4 For further reading


For more details on pseudo-classical mechanics, a very readable original ref-
erence is [6], and there is a detailed discussion in the textbook [64], chapter
7.

Chapter 28

Fermionic Quantization and


Spinors

In this chapter we’ll begin by investigating the fermionic analog of the notion
of quantization, which takes functions of anticommuting variables on a phase
space with symmetric bilinear form h·, ·i and gives an algebra of operators with
generators satisfying the relations of the corresponding Clifford algebra. We
will then consider analogs of the constructions used in the bosonic case which
there gave us the Schrödinger and Bargmann-Fock representations of the Weyl
algebra on a space of states.
We know that for a fermionic oscillator with d degrees of freedom, the alge-
bra of operators will be Cliff(2d, C), the algebra generated by annihilation and
creation operators aFj , a†Fj . These operators will act on HF , a complex vector
space of dimension 2^d, and this will be our fermionic analog of the bosonic Γ0 .
Since the spin group consists of invertible elements of the Clifford algebra, it
has a representation on HF . This is known as the “spinor representation”, and
it can be constructed by analogy with the construction of the metaplectic representation in
the bosonic case. We’ll also consider the analog in the fermionic case of the
Schrödinger representation, which turns out to have a problem with unitarity,
but finds a use in physics as “ghost” degrees of freedom.

28.1 Quantization of pseudo-classical systems


In the bosonic case, quantization was based on finding a representation of the
Heisenberg Lie algebra of linear functions on phase space, or more explicitly,
for basis elements qj , pj of this Lie algebra finding operators Qj , Pj satisfying
the Heisenberg commutation relations. In the fermionic case, the analog of
the Heisenberg Lie algebra is not a Lie algebra, but a Lie superalgebra, with
basis elements 1, ξj , j = 1, . . . , n and a Lie superbracket given by the fermionic

Poisson bracket, which on basis elements is

{ξj , ξk }+ = ±δjk , {ξj , 1}+ = 0, {1, 1}+ = 0

Quantization is given by finding a representation of this Lie superalgebra. One


can generalize the definition of a Lie algebra representation to that of a Lie
superalgebra representation by
Definition (Representation of a Lie superalgebra). A representation of a Lie
superalgebra is a homomorphism Φ preserving the superbracket

[Φ(X), Φ(Y )]± = Φ([X, Y ]± )

and taking values in a Lie superalgebra of linear operators, with |Φ(X)| = |X|
and
[Φ(X), Φ(Y )]± = Φ(X)Φ(Y ) − (−1)^{|X||Y|} Φ(Y )Φ(X)
A representation of the pseudo-classical Lie superalgebra (and thus a quan-
tization of the pseudo-classical system) will be given by finding a linear map Γ+
that takes basis elements ξj to operators Γ+ (ξj ) satisfying the anticommutation
relations

[Γ+ (ξj ), Γ+ (ξk )]+ = ±δjk Γ+ (1), [Γ+ (ξj ), Γ+ (1)] = [Γ+ (1), Γ+ (1)] = 0

One can satisfy these relations by taking


Γ+ (ξj ) = (1/√2) γj ,   Γ+ (1) = 1
since then
[Γ+ (ξj ), Γ+ (ξk )]+ = ½ [γj , γk ]+ = ±δjk
are exactly the Clifford algebra relations. This can be extended to a represen-
tation of the functions of the ξj of order two or less by
Theorem. A representation of the Lie superalgebra of anticommuting functions
of coordinates ξj on Rn of order two or less is given by
Γ+ (1) = 1,   Γ+ (ξj ) = (1/√2) γj ,   Γ+ (ξj ξk ) = ½ γj γk
Proof. We have already seen that this is a representation on polynomials of
degree zero and one. For simplicity just considering the case s = 0, in degree two
the fermionic Poisson bracket relations are given by equations 27.1 and 27.2.
For 27.1, one can show that the products of Clifford algebra generators
Γ+ (ξj ξk ) = ½ γj γk

satisfy

[½ γj γk , γl ] = δkl γj − δjl γk

304
by using the Clifford algebra relations, or by noting that this is the special case
of equation 26.2 for v = el . That equation shows that commuting by −½ γj γk
acts by the infinitesimal rotation εjk in the j − k coordinate plane.
For 27.2, one can again just use the Clifford algebra relations to show

[½ γj γk , ½ γl γm ] = ½ δkl γj γm − ½ δjl γk γm + ½ δkm γl γj − ½ δjm γl γk

One could also instead use the commutation relations for the so(n) Lie algebra
satisfied by the basis elements εjk corresponding to infinitesimal rotations. One
must get identical commutation relations for the −½ γj γk , and can show that
these are the relations needed for commutators of Γ+ (ξj ξk ) and Γ+ (ξl ξm ).
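All three relations used above can be verified numerically in the case n = 3, s = 0, using the Pauli matrices for the γj (our own check, not part of the text's proof):

```python
import numpy as np

# Pauli matrices as gamma_1, gamma_2, gamma_3; check:
#  (i)   [Gamma_+(xi_j), Gamma_+(xi_k)]_+ = delta_jk   with Gamma_+(xi_j) = gamma_j/sqrt(2)
#  (ii)  [(1/2) g_j g_k, g_l] = delta_kl g_j - delta_jl g_k
#  (iii) the quadratic commutator matching equation 27.2 under Gamma_+
g = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
d = lambda a, b: 1.0 if a == b else 0.0
G = [m / np.sqrt(2) for m in g]                      # Gamma_+(xi_j)

for j in range(3):
    for k in range(3):
        assert np.allclose(G[j] @ G[k] + G[k] @ G[j], d(j, k) * np.eye(2))  # (i)
        if j == k:
            continue
        Q = g[j] @ g[k] / 2
        for l in range(3):
            assert np.allclose(Q @ g[l] - g[l] @ Q,
                               d(k, l) * g[j] - d(j, l) * g[k])             # (ii)
            for m in range(3):
                if l == m:
                    continue
                Q2 = g[l] @ g[m] / 2
                rhs = (d(k, l) * g[j] @ g[m] - d(j, l) * g[k] @ g[m]
                       + d(k, m) * g[l] @ g[j] - d(j, m) * g[l] @ g[k]) / 2
                assert np.allclose(Q @ Q2 - Q2 @ Q, rhs)                    # (iii)
```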
Note that here we are not introducing the factors of i into the definition of
quantization that in the bosonic case were necessary to get a unitary represen-
tation of the Lie group corresponding to the real Heisenberg Lie algebra h2d+1 .
In the bosonic case we worked with all complex linear combinations of powers
of the Qj , Pj (the complex Weyl algebra Weyl(2d, C)), and thus had to identify
the specific complex linear combinations of these that gave unitary represen-
tations of the Lie algebra h2d+1 ⋊ sp(2d, R). Here we are not complexifying
for now, but working with the real Clifford algebra Cliff(r, s, R), and it is the
irreducible representations of this algebra that provide an analog of the unique
interesting irreducible representation of h2d+1 . In the Clifford algebra case, the
representations of interest are not Lie algebra representations and may be on
real vector spaces. There is no analog of the unitarity property of the h2d+1
representation.
In the bosonic case we found that Sp(2d, R) acted on the bosonic dual phase
space, preserving the antisymmetric bilinear form Ω that determined the Lie al-
gebra h2d+1 , so it acted on this Lie algebra by automorphisms. We saw (see
chapter 18) that intertwining operators there gave us a representation of the
double cover of Sp(2d, R) (the metaplectic representation), with the Lie alge-
bra representation given by the quantization of quadratic functions of the qj , pj
phase space coordinates. There is a closely analogous story in the fermionic
case, where SO(r, s, R) acts on the fermionic phase space V , preserving the
symmetric bilinear form h·, ·i that determines the Clifford algebra relations.
Here one constructs a representation of the spin group Spin(r, s, R) double
covering SO(r, s, R) using intertwining operators, with the Lie algebra repre-
sentation given by quadratic combinations of the quantizations of the fermionic
coordinates ξj .
The fermionic analog of 18.1 is

Uk Γ+ (ξ)Uk−1 = Γ+ (φk0 (ξ)) (28.1)

Here k0 ∈ SO(r, s, R), ξ ∈ V ∗ = Rn (n = r + s), φk0 is the action of k0 on V ∗ .


The Uk for k = Φ⁻¹(k0 ) ∈ Spin(r, s) (Φ is the 2-fold covering map) are the
intertwining operators we are looking for. The fermionic analog of 18.5 is

[U′L , Γ+ (ξ)] = Γ+ (L · ξ)

where L ∈ so(r, s, R) and L acts on V ∗ as an infinitesimal orthogonal transfor-
mation. In terms of basis vectors of V ∗

ξ = (ξ1 , . . . , ξn )^T

this says
[U′L , Γ+ (ξ)] = Γ+ (L^T ξ)
Just as in the bosonic case, the U′L can be found by looking first at the
pseudo-classical case, where one has theorem 27.1, which says

{µL , ξ}+ = L^T ξ

where

µL = (1/2) ξ · Lξ = (1/2) Σ_{j,k} Ljk ξj ξk

One then takes

U′L = Γ+ (µL ) = (1/4) Σ_{j,k} Ljk γj γk

For the case s = 0 and a rotation in the j−k plane, with L = ε_{jk} one
recovers formulas 26.1 and 26.2 from chapter 26, with

[−(1/2) γj γk , γ(v)] = γ(ε_{jk} v)

the infinitesimal action of a rotation on the γ matrices, and

γ(v) → e^{−(θ/2) γj γk } γ(v) e^{(θ/2) γj γk } = γ(e^{θ ε_{jk}} v)

the group version. Just as in the symplectic case, exponentiating the UL0 only
gives a representation up to sign, and one needs to go to the double cover of
SO(n) to get a true representation. As in that case, the necessity of the double
cover is best seen by use of a complex structure and an analog of the Bargmann-
Fock construction, a topic we will address in a later section.
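One can already see the double cover concretely in the smallest positive definite case, using the identification of Cliff(3, 0, R) with 2 by 2 complex matrices (γj = σj) from section 25.2. The following numpy sketch (an illustration, not part of the text) checks that conjugation by e^{−(θ/2)γ1γ2} rotates γ(v) in the 1−2 plane, while a rotation by 2π gives U = −1:

```python
import numpy as np

def expm_normal(A):
    # matrix exponential via eigendecomposition (adequate for these normal matrices)
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

# gamma_j realized as the Pauli matrices sigma_j
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])

def U(theta):
    # exp(-(theta/2) gamma_1 gamma_2)
    return expm_normal(-(theta / 2) * (s1 @ s2))

theta = 0.7
# conjugation rotates gamma(v) in the 1-2 plane
lhs = U(theta) @ s1 @ np.linalg.inv(U(theta))
assert np.allclose(lhs, np.cos(theta) * s1 + np.sin(theta) * s2)

# a rotation by 2*pi gives U = -1, not 1
assert np.allclose(U(2 * np.pi), -np.eye(2))
```

The −1 at θ = 2π is exactly the sign ambiguity that forces passage to the double cover Spin(3).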
In order to have a full construction of a quantization of a pseudo-classical
system, we need not just the abstract Clifford algebra elements given by the
map Γ+ , but also a realization of the Clifford algebra as linear operators on a
state space. As mentioned in chapter 25, it can be shown that the real Clifford
algebras Cliff(r, s, R) are isomorphic to either one or two copies of the matrix
algebras M(2^l , R), M(2^l , C), or M(2^l , H), with the power l depending on r, s.
The irreducible representations of such a matrix algebra are just the column
vectors of dimension 2^l , and there will be either one or two such irreducible
representations for Cliff(r, s, R) depending on the number of copies of the matrix
algebra. This is the fermionic analog of the Stone-von Neumann uniqueness
result in the bosonic case.

28.1.1 Quantization of the pseudo-classical spin
As an example, one can consider the quantization of the pseudo-classical spin
degree of freedom of section 27.3.1. In that case Γ+ takes values in Cliff(3, 0, R),
for which an explicit identification with the algebra M (2, C) of two by two
complex matrices was given in section 25.2. One has
Γ+ (ξj ) = (1/√2) γj = (1/√2) σj
and the Hamiltonian operator is

−iH = Γ+ (h) = Γ+ (B12 ξ1 ξ2 + B13 ξ1 ξ3 + B23 ξ2 ξ3 )
    = (1/2)(B12 σ1 σ2 + B13 σ1 σ3 + B23 σ2 σ3 )
    = (i/2)(B1 σ1 + B2 σ2 + B3 σ3 )
This is nothing but our old example from chapter 7 of a fixed spin one-half
particle in a magnetic field.
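The last equality above can be verified numerically with the Pauli matrices. In this sketch the identification B1 = B23, B2 = −B13, B3 = B12 of the coefficients Bjk with magnetic field components is an assumption made explicit here (it follows from σ1σ2 = iσ3 and its analogs):

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

B12, B13, B23 = 0.3, -1.1, 0.7  # arbitrary real coefficients

# (1/2)(B12 s1 s2 + B13 s1 s3 + B23 s2 s3)
lhs = 0.5 * (B12 * s1 @ s2 + B13 * s1 @ s3 + B23 * s2 @ s3)

# identification of the B_jk with field components B_j (assumed convention)
B1, B2, B3 = B23, -B13, B12
rhs = 0.5j * (B1 * s1 + B2 * s2 + B3 * s3)

assert np.allclose(lhs, rhs)
```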
The pseudo-classical equation of motion
d/dt ξj (t) = −{h, ξj }
after quantization becomes the Heisenberg picture equation of motion for the
spin operators (see equation 7.2)
d/dt SH (t) = −i[SH · B, SH ]
for the case of Hamiltonian
H = −µ · B
(see equation 7.1) and magnetic moment operator

µ=S

Here the state space is H = C2 , with an explicit choice of basis given by


our chosen identification of Cliff(3, 0, R) with two by two complex matrices. In
the next sections we will just consider the case of an even-dimensional fermionic
phase space, but there provide a basis-independent construction of the state
space and the action of the Clifford algebra on it.

28.2 The Schrödinger representation for fermions:


ghosts
We would like to construct representations of Cliff(r, s, R) and thus fermionic
state spaces by using analogous constructions to the Schrödinger and Bargmann-
Fock ones in the bosonic case. The Schrödinger construction took the state

space H to be a space of functions on a subspace of the classical phase space
which had the property that the basis coordinate functions Poisson-commuted.
Two examples of this are the position coordinates qj , since {qj , qk } = 0, or the
momentum coordinates pj , since {pj , pk } = 0. Unfortunately, for symmetric
bilinear forms h·, ·i of definite sign, such as the positive definite case Cliff(n, R),
the only subspace the bilinear form is zero on is the zero subspace.
To get an analog of the bosonic situation, one needs to take the case of
signature (d, d). The fermionic phase space will then be 2d dimensional, with
d-dimensional subspaces on which h·, ·i and thus the fermionic Poisson bracket
is zero. Quantization will give the Clifford algebra
Cliff(d, d, R) = M(2^d , R)

which has just one irreducible representation, R^(2^d). One can complexify this to
get a complex state space

HF = C^(2^d)
This state space will come with a representation of Spin(d, d, R) from expo-
nentiating quadratic combinations of the generators of Cliff(d, d, R). However,
this is a non-compact group, and one can show that on general grounds it can-
not have unitary finite-dimensional representations, so there must be a problem
with unitarity.
To see what happens explicitly, consider the simplest case d = 1 of one degree
of freedom. In the bosonic case the classical phase space is R2 , and quantization
gives operators Q, P which in the Schrödinger representation act on functions

of q, with Q = q and P = −i d/dq. In the fermionic case with signature (1, 1),
basis coordinate functions on phase space are ξ1 , ξ2 , with
{ξ1 , ξ1 }+ = 1, {ξ2 , ξ2 }+ = −1, {ξ1 , ξ2 }+ = 0
Defining
η = (1/√2)(ξ1 + ξ2 ),  π = (1/√2)(ξ1 − ξ2 )
we get objects with fermionic Poisson bracket analogous to those of q and p
{η, η}+ = {π, π}+ = 0, {η, π}+ = 1
Quantizing, we get analogs of the Q, P operators
η̂ = Γ+ (η) = (1/√2)(Γ+ (ξ1 ) + Γ+ (ξ2 )),  π̂ = Γ+ (π) = (1/√2)(Γ+ (ξ1 ) − Γ+ (ξ2 ))
which satisfy anticommutation relations
η̂ 2 = π̂ 2 = 0, η̂π̂ + π̂ η̂ = 1
and can be realized as operators on the space of functions of one fermionic
variable η as

η̂ = multiplication by η,  π̂ = ∂/∂η

This state space is two complex dimensional, with an arbitrary state

f (η) = c1 1 + c2 η

with cj complex numbers. The inner product on this space is given by the
fermionic integral

(f1 (η), f2 (η)) = ∫ f1∗ (η) f2 (η) dη

with
f ∗ (η) = c̄1 1 + c̄2 η
With respect to this inner product, one has

(1, 1) = (η, η) = 0, (1, η) = (η, 1) = 1

This inner product is indefinite and can take on negative values, since

(1 − η, 1 − η) = −2

Having such negative-norm states ruins any standard interpretation of this


as a physical system, since this negative number is supposed to be the probability of
finding the system in this state. Such quantum systems are called “ghosts”, and
do have applications in the description of various quantum systems, but only
when a mechanism exists for the negative-norm states to cancel or otherwise be
removed from the physical state space of the theory.
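Since the state space here is only two complex dimensional, everything can be checked explicitly in the basis {1, η}; the following sketch (not from the text) verifies the anticommutation relations and exhibits the negative-norm state 1 − η:

```python
import numpy as np

# matrices of eta-hat and pi-hat in the basis {1, eta}
eta_hat = np.array([[0, 0], [1, 0]], dtype=complex)  # multiplication by eta
pi_hat  = np.array([[0, 1], [0, 0]], dtype=complex)  # d/d(eta)

# anticommutation relations
assert np.allclose(eta_hat @ eta_hat, np.zeros((2, 2)))
assert np.allclose(pi_hat @ pi_hat, np.zeros((2, 2)))
assert np.allclose(eta_hat @ pi_hat + pi_hat @ eta_hat, np.eye(2))

# Gram matrix of the fermionic-integral inner product:
# (1,1) = (eta,eta) = 0, (1,eta) = (eta,1) = 1
G = np.array([[0, 1], [1, 0]], dtype=complex)
v = np.array([1, -1], dtype=complex)  # coefficients of the state 1 - eta
norm = (np.conj(v) @ G @ v).real
assert np.isclose(norm, -2)  # a negative-norm "ghost" state
```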

28.3 Spinors and the Bargmann-Fock construc-


tion
While the fermionic analog of the Schrödinger construction does not give a uni-
tary representation of the spin group, it turns out that the fermionic analog of
the Bargmann-Fock construction does, on the fermionic oscillator state space
discussed in chapter 24. This will work for the case of a positive definite sym-
metric bilinear form h·, ·i. Note though that n must be even since this will
require choosing a complex structure on the fermionic phase space Rn .
The corresponding pseudo-classical system will be the classical fermionic
oscillator studied in section 27.3.2. Recall that this uses a choice of complex
structure J on the fermionic phase space R2d , with the standard choice J = J0
giving the relations
θj = (1/√2)(ξ2j−1 − iξ2j ),  θ̄j = (1/√2)(ξ2j−1 + iξ2j )
for j = 1, . . . , d between real and complex coordinates. Here h·, ·i is positive-
definite, and the ξj are coordinates with respect to an orthonormal basis, so we
have the standard relation {ξj , ξk }+ = δjk and the θj , θ̄j satisfy

{θj , θk }+ = {θ̄j , θ̄k }+ = 0,  {θj , θ̄k }+ = δjk

To quantize this system we need to find operators Γ+ (θj ) and Γ+ (θ̄j ) that
satisfy
[Γ+ (θj ), Γ+ (θk )]+ = [Γ+ (θ̄j ), Γ+ (θ̄k )]+ = 0
[Γ+ (θj ), Γ+ (θ̄k )]+ = δjk 1
but these are just the CAR satisfied by fermionic annihilation and creation
operators. We can choose

Γ+ (θ̄j ) = a†Fj ,  Γ+ (θj ) = aFj

and realize these operators as


aFj = ∂/∂χj ,  a†Fj = multiplication by χj

on the state space Λ∗ Cd of polynomials in the anticommuting variables χj . This


is a complex vector space of dimension 2^d , isomorphic with the state space HF
of the fermionic oscillator in d degrees of freedom, with the isomorphism given
by

1 ↔ |0iF
χj ↔ aF †j |0iF
χj χk ↔ aF †j aF †k |0iF
···
χ1 . . . χd ↔ aF †1 aF †2 · · · aF †d |0iF

where the indices j, k, . . . take values 1, 2, . . . , d and satisfy j < k < · · · .


If one defines a Hermitian inner product (·, ·) on HF by taking these basis
elements to be orthonormal, the operators aF j and a†F j will be adjoints with
respect to this inner product. This same inner product can also be defined
using fermionic integration by analogy with the Bargmann-Fock definition in
the bosonic case as
(f1 (χ1 , · · · , χd ), f2 (χ1 , · · · , χd )) = ∫ e^{−Σ_{j=1}^d χ̄j χj } f̄1 f2 dχ̄1 dχ1 · · · dχ̄d dχd

where f1 and f2 are complex linear combinations of the powers of the anticom-
muting variables χj . For the details of the construction of this inner product,
see chapter 7.2 of [64] or chapters 7.5 and 7.6 of [79].
The quantization using fermionic annihilation and creation operators given
here provides an explicit realization of a representation of the Clifford algebra
Cliff(2d, R) on the complex vector space HF . The generators of the Clifford
algebra are identified as operators on HF by
γ2j−1 = √2 Γ+ (ξ2j−1 ) = √2 Γ+ ((1/√2)(θj + θ̄j )) = aFj + a†Fj

γ2j = √2 Γ+ (ξ2j ) = √2 Γ+ ((i/√2)(θj − θ̄j )) = i(aFj − a†Fj )
Quantization of the pseudo-classical fermionic oscillator Hamiltonian h of
section 27.3.2 gives
Γ+ (h) = Γ+ (−(i/2) Σ_{j=1}^d (θ̄j θj − θj θ̄j )) = −(i/2) Σ_{j=1}^d (a†Fj aFj − aFj a†Fj ) = −iH    (28.2)

where H is the Hamiltonian operator for the fermionic oscillator used in chapter
24.
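One concrete way to realize all of this as matrices is a Jordan-Wigner style tensor-product construction of the aFj , a†Fj on (C²)⊗d (an implementation choice, not the text's construction), after which the Clifford relations for the resulting γ's can be checked directly:

```python
import numpy as np
from functools import reduce

def kron(*ms):
    return reduce(np.kron, ms)

d = 3
I2 = np.eye(2)
s3 = np.diag([1.0, -1.0])
a = np.array([[0, 1], [0, 0]], dtype=complex)  # single-mode annihilation

def a_F(j):
    # Jordan-Wigner: a string of s3 factors guarantees the CAR across modes
    return kron(*([s3] * j + [a] + [I2] * (d - j - 1)))

gammas = []
for j in range(d):
    aj = a_F(j)
    gammas += [aj + aj.conj().T, 1j * (aj - aj.conj().T)]

# Clifford algebra relations: gamma_j gamma_k + gamma_k gamma_j = 2 delta_jk
for j, gj in enumerate(gammas):
    for k, gk in enumerate(gammas):
        assert np.allclose(gj @ gk + gk @ gj, 2 * (j == k) * np.eye(2**d))
```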
Taking quadratic combinations of the operators γj , γk provides a represen-
tation of the Lie algebra so(2d) = spin(2d). This representation exponentiates
to a representation up to sign of the group SO(2d), and a true representation
of its double-cover Spin(2d). The representation that we have constructed here
on the fermionic oscillator state space HF is called the spinor representation of
Spin(2d), and we will sometimes denote HF with this group action as S.
In the bosonic case, H = Fd is an irreducible representation of the Heisenberg
group, but as a representation of Mp(2d, R), it has two irreducible components,
corresponding to even and odd polynomials. The fermionic analog is that HF
is irreducible under the action of the Clifford algebra Cliff(2d, C). One way
to show this is to show that Cliff(2d, C) is isomorphic to the matrix algebra
M(2^d , C) and its action on HF = C^(2^d) is isomorphic to the action of matrices
on column vectors.
While HF is irreducible as a representation of the Clifford algebra, it is the
sum of two irreducible representations of Spin(2d), the so-called “half-spinor”
representations. Spin(2d) is generated by quadratic combinations of the Clifford
algebra generators, so these will preserve the subspaces

S+ = span{|0iF , aF †j aF †k |0iF , · · · } ⊂ S = HF

and
S− = span{aF †j |0iF , aF †j aF †k aF †l |0iF , · · · } ⊂ S = HF
corresponding to the action of an even or odd number of creation operators on
|0iF . This is because quadratic combinations of the aF j , aF †j preserve the parity
of the number of creation operators used to get an element of S by action on
|0iF .

28.4 Complex structures, U (d) ⊂ SO(2d) and the


spinor representation
The construction of the spinor representation here has involved making a specific
choice of relation between the Clifford algebra generators and the fermionic
annihilation and creation operators. This corresponds to a standard choice of
complex structure J0 , which appears in a manner closely parallel to that of

the Bargmann-Fock case of section 21.1. The difference here is that for the
analogous construction of spinors a complex structure J must be chosen to
preserve not an antisymmetric bilinear form Ω, but the inner product, so one
has
hJ(·), J(·)i = h·, ·i
We will here restrict to the case of h·, ·i positive definite, and unlike in the
bosonic case, no additional positivity condition on J will be required.
J splits the complexification of the real dual phase space V ∗ = V = R2d with
its coordinates ξj into a d-dimensional complex vector space and a conjugate
complex vector space. As in the bosonic case one has

V ⊗ C = VJ+ ⊕ VJ−

and quantization of vectors in VJ+ gives linear combinations of creation op-


erators, while vectors in VJ− are taken to linear combinations of annihilation
operators. The choice of J is reflected in the existence of a distinguished di-
rection |0iF in the spinor space S = HF which is determined (up to phase) by
the condition that it is annihilated by all linear combinations of annihilation
operators.
The choice of J also picks out a subgroup U (d) ⊂ SO(2d) of those orthogonal
transformations that commute with J. Just as in the bosonic case, two different
representations of the Lie algebra u(d) of U (d) are used:

• The restriction to u(d) ⊂ so(2d) of the spinor representation described


above. This exponentiates to give a representation not of U (d), but of a
double cover of U (d) that is a subgroup of Spin(2d).

• By normal-ordering operators, one shifts the spinor representation of u(d)


by a constant and gets a representation that exponentiates to a true rep-
resentation of U (d). This representation is reducible, with irreducible
components the Λk (Cd ) for k = 0, 1, . . . , d.

In both cases the representation of u(d) is constructed using quadratic combina-


tions of annihilation and creation operators involving one annihilation operator
and one creation operator. Non-zero pairs of two creation operators will give
“Bogoliubov transformations”, changing |0iF .
Given any group element

g0 = e^A ∈ U (d)

acting on the fermionic dual phase space preserving J and the inner product, we
can use exactly the same method as in theorems 23.1 and 23.2 to construct its
action on the fermionic state space by the second of the above representations.
For A a skew-adjoint matrix we have a fermionic moment map
A ∈ u(d) → µA = Σ_{j,k} θ̄j Ajk θk

satisfying
{µA , µA′ }+ = µ[A,A′]
and
{µA , θ̄}+ = A^T θ̄ ,  {µA , θ}+ = Ā^T θ
The Lie algebra representation operators are the
U′A = Σ_{j,k} a†Fj Ajk aFk

which satisfy (see theorem 24.1)

[U′A , U′A′ ] = U′[A,A′]

and
[U′A , a†F ] = A^T a†F ,  [U′A , aF ] = Ā^T aF
Exponentiating these gives the intertwining operators, which act on the an-
nihilation and creation operators as
UeA a†F (UeA )^{−1} = e^{A^T} a†F ,  UeA aF (UeA )^{−1} = e^{Ā^T} aF

For the simplest example, consider the U (1) ⊂ U (d) ⊂ SO(2d) that acts by

θj → e^{−iφ} θj ,  θ̄j → e^{iφ} θ̄j

corresponding to A = −iφ1. The moment map will be

µA = φh

where
h = −i Σ_{j=1}^d θ̄j θj

is the Hamiltonian for the classical fermionic oscillator. Quantizing h (see equa-
tion 28.2) will give the Hamiltonian operator
H = (1/2) Σ_{j=1}^d (a†Fj aFj − aFj a†Fj ) = Σ_{j=1}^d (a†Fj aFj − 1/2)

and a Lie algebra representation of u(1) with half-integral eigenvalues (±i/2).


Exponentiation will give a representation of a double cover of U (1) ⊂ U (d).
Quantizing h instead using normal-ordering
:H: = Σ_{j=1}^d a†Fj aFj

will give a true representation of U (1) ⊂ U (d), with
U′A = −iφ Σ_{j=1}^d a†Fj aFj

satisfying
[U′A , a†F ] = −iφ a†F ,  [U′A , aF ] = iφ aF

Exponentiating, the action on annihilation and creation operators is

e^{−iφ Σj a†Fj aFj } a†F e^{iφ Σj a†Fj aFj } = e^{−iφ} a†F

e^{−iφ Σj a†Fj aFj } aF e^{iφ Σj a†Fj aFj } = e^{iφ} aF
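For a single fermionic mode these conjugation relations reduce to a small matrix computation; the 2 by 2 realization below is a sketch:

```python
import numpy as np

def expm_normal(A):
    # matrix exponential via eigendecomposition (fine for normal matrices)
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

aF = np.array([[0, 1], [0, 0]], dtype=complex)  # one fermionic mode
N = aF.conj().T @ aF                            # normal-ordered number operator

phi = 1.3
U = expm_normal(-1j * phi * N)
Uinv = expm_normal(1j * phi * N)

assert np.allclose(U @ aF.conj().T @ Uinv, np.exp(-1j * phi) * aF.conj().T)
assert np.allclose(U @ aF @ Uinv, np.exp(1j * phi) * aF)
```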

28.5 An example: spinors for SO(4)


We saw in chapter 6 that the spin group Spin(4) was isomorphic to Sp(1) ×
Sp(1) = SU (2)×SU (2). Its action on R4 was then given by identifying R4 = H
and acting by unit quaternions on the left and the right (thus the two copies of
Sp(1)). While this constructs the representation of Spin(4) on R4 , it does not
provide the spin representation of Spin(4).
A conventional way of defining the spin representation is to choose an explicit
matrix representation of the Clifford algebra (in this case Cliff(4, 0, R)), for
instance

γ0 = [0 1; 1 0],  γ1 = −i[0 σ1 ; −σ1 0],  γ2 = −i[0 σ2 ; −σ2 0],  γ3 = −i[0 σ3 ; −σ3 0]

where we have written the matrices in 2 by 2 block form, and are indexing the
four dimensions from 0 to 3. One can easily check that these satisfy the Clifford
algebra relations: they anticommute with each other and

γ0² = γ1² = γ2² = γ3² = 1

The quadratic Clifford algebra elements −(1/2)γj γk for j < k satisfy the com-
mutation relations of so(4) = spin(4). These are explicitly

−(1/2)γ0 γ1 = −(i/2)[σ1 0; 0 −σ1 ],   −(1/2)γ2 γ3 = −(i/2)[σ1 0; 0 σ1 ]

−(1/2)γ0 γ2 = −(i/2)[σ2 0; 0 −σ2 ],   −(1/2)γ1 γ3 = +(i/2)[σ2 0; 0 σ2 ]

−(1/2)γ0 γ3 = −(i/2)[σ3 0; 0 −σ3 ],   −(1/2)γ1 γ2 = −(i/2)[σ3 0; 0 σ3 ]

(note the opposite sign in the γ1 γ3 case, which comes from σ1 σ3 = −iσ2 ).
The Lie algebra spin representation is just matrix multiplication on S = C4 ,
and it is obviously a reducible representation on two copies of C2 (the upper

and lower two components). One can also see that the Lie algebra spin(4) =
su(2) + su(2), with the two su(2) Lie algebras having bases
−(1/4)(γ0 γ1 + γ2 γ3 ),  −(1/4)(γ0 γ2 − γ1 γ3 ),  −(1/4)(γ0 γ3 + γ1 γ2 )

and

−(1/4)(γ0 γ1 − γ2 γ3 ),  −(1/4)(γ0 γ2 + γ1 γ3 ),  −(1/4)(γ0 γ3 − γ1 γ2 )

(the relative signs on the middle elements are such that each triple acts on a
single C2 factor).
The irreducible spin representations of Spin(4) are just the spin one-half repre-
sentations of the two copies of SU (2).
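The claims above are easy to confirm numerically from the explicit γ matrices (a verification sketch; the relative signs on the middle elements of the two triples are taken so that each triple acts on a single C² factor):

```python
import numpy as np

s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
Z = np.zeros((2, 2), dtype=complex)
I = np.eye(2, dtype=complex)

g = [np.block([[Z, I], [I, Z]])] + \
    [-1j * np.block([[Z, sk], [-sk, Z]]) for sk in s]

# Clifford algebra relations for Cliff(4,0,R)
for j in range(4):
    for k in range(4):
        assert np.allclose(g[j] @ g[k] + g[k] @ g[j], 2 * (j == k) * np.eye(4))

def comm(X, Y):
    return X @ Y - Y @ X

plus = [-(g[0] @ g[1] + g[2] @ g[3]) / 4,
        -(g[0] @ g[2] - g[1] @ g[3]) / 4,
        -(g[0] @ g[3] + g[1] @ g[2]) / 4]
minus = [-(g[0] @ g[1] - g[2] @ g[3]) / 4,
         -(g[0] @ g[2] + g[1] @ g[3]) / 4,
         -(g[0] @ g[3] - g[1] @ g[2]) / 4]

# the two su(2) triples commute with each other and each closes under the bracket
for X in plus:
    for Y in minus:
        assert np.allclose(comm(X, Y), np.zeros((4, 4)))
assert np.allclose(comm(plus[0], plus[1]), plus[2])
assert np.allclose(comm(minus[0], minus[1]), -minus[2])
```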
In the fermionic oscillator construction, we have

S = S + + S − , S + = span{1, η1 η2 }, S − = span{η1 , η2 }

and the Clifford algebra action on S is given for the generators as (indexing
dimensions from 1 to 4)
γ1 = ∂/∂η1 + η1 ,  γ2 = i(∂/∂η1 − η1 )
γ3 = ∂/∂η2 + η2 ,  γ4 = i(∂/∂η2 − η2 )
Note that in this construction there is a choice of complex structure J = J0 .
This gives a distinguished vector |0i = 1 ∈ S + , as well as a distinguished sub-Lie
algebra u(2) ⊂ so(4) of transformations that act trivially on |0i, given by linear
combinations of
η1 ∂/∂η1 ,  η2 ∂/∂η2 ,  η1 ∂/∂η2 ,  η2 ∂/∂η1
There is also a distinguished sub-Lie algebra u(1) ⊂ u(2) given by
η1 ∂/∂η1 + η2 ∂/∂η2
Bogoliubov transformations that are unitary transformations on the spinor
state space, but change |0i and correspond to a change in complex structure,
are given by exponentiating the Lie algebra representation operators

i(aF †1 aF †2 + aF 2 aF 1 ), aF †1 aF †2 − aF 2 aF 1

These act on |0i by multiplication by a complex number times η1 η2 . The possible


choices of complex structure are parametrized by SO(4)/U (2), which can be
identified with the complex projective line CP 1 , i.e. the sphere S 2 .
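On the four dimensional space Λ∗ (C²) this Bogoliubov transformation can be exhibited explicitly (a sketch; the Jordan-Wigner matrices are an implementation choice):

```python
import numpy as np

def expm_normal(A):
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

I2, s3 = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0, 1], [0, 0]], dtype=complex)

# two fermionic modes; the s3 string gives the CAR
a1, a2 = np.kron(a, I2), np.kron(s3, a)
ad1, ad2 = a1.conj().T, a2.conj().T

X = 1j * (ad1 @ ad2 + a2 @ a1)  # skew-adjoint Bogoliubov generator
U = expm_normal(0.6 * X)

vac = np.zeros(4, dtype=complex)
vac[0] = 1.0                    # |0>, annihilated by a1 and a2
assert np.allclose(a1 @ vac, 0) and np.allclose(a2 @ vac, 0)

out = U @ vac
assert np.allclose(U.conj().T @ U, np.eye(4))  # unitary
assert np.isclose(abs(out[0]), np.cos(0.6))    # |0> is rotated away...
assert abs(out[3]) > 0.5                       # ...into the a1† a2† |0> direction
```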
The construction in terms of matrices is well-suited to calculations, but
it is inherently dependent on a choice of coordinates. The fermionic version
of Bargmann-Fock is given here in terms of a choice of basis, but, like the
closely analogous bosonic construction, only actually depends on a choice of
inner product and a choice of compatible complex structure J, producing a
representation on the coordinate-independent object HF = Λ∗ VJ+ .

In chapter 38 we will consider explicit matrix representations of the Clifford
algebra for the case of Spin(3, 1). One could also use the fermionic oscillator
construction, complexifying to get a representation of

so(4) ⊗ C = sl(2, C) × sl(2, C)

and then restricting to the subalgebra

so(3, 1) ⊂ so(3, 1) ⊗ C = so(4) ⊗ C

This will give a representation of Spin(3, 1) in terms of quadratic combinations


of Clifford algebra generators, but unlike the case of Spin(4), it will not be
unitary. The lack of positivity for the inner product causes the same sort of
wrong-sign problems with the CAR that were found in the bosonic case for the
CCR when J and Ω gave a non-positive symmetric bilinear form. In the fermion
case the wrong-sign problem does not stop one from constructing a non-trivial
representation, but it will not be a unitary representation.

28.6 For further reading


For more about pseudo-classical mechanics and quantization, see [64] Chapter 7.
The fermionic quantization map, Clifford algebras, and the spinor representation
are discussed in detail in [42]. For another discussion of the spinor representation
from a similar point of view to the one here, see chapter 12 of [66]. Chapter 12 of
[47] contains an extensive discussion of the role of different complex structures
in the construction of the spinor representation.

Chapter 29

A Summary: Parallels
Between Bosonic and
Fermionic Quantization

To summarize much of the material we have covered, it may be useful to con-


sider the following table, which explicitly gives the correspondence between the
parallel constructions we have studied in the bosonic and fermionic cases.

Bosonic | Fermionic

Dual phase space M = R2d | Dual phase space V = Rn
Non-degenerate antisymmetric bilinear form Ω(·, ·) on M | Non-degenerate symmetric bilinear form h·, ·i on V
Poisson bracket {·, ·} on functions on M = R2d | Poisson bracket {·, ·}+ on anticommuting functions on V = Rn
Lie algebra of polynomials of degree 0, 1, 2 | Lie superalgebra of anticommuting polynomials of degree 0, 1, 2
Coordinates qj , pj , basis of M | Coordinates ξj , basis of V
Quadratics in qj , pj , basis for sp(2d, R) | Quadratics in ξj , basis for so(n)
Sp(2d, R) preserves Ω(·, ·) | SO(n, R) preserves h·, ·i
Weyl algebra Weyl(2d, C) | Clifford algebra Cliff(n, C)
Momentum, position operators Pj , Qj | Clifford algebra generators γj
Quadratics in Pj , Qj provide representation of sp(2d, R) | Quadratics in γj provide representation of so(2d)
Metaplectic representation | Spinor representation
Stone-von Neumann: uniqueness of the h2d+1 representation | Uniqueness of the Cliff(2d, C) representation
Mp(2d, R) double-cover of Sp(2d, R) | Spin(n) double-cover of SO(n)
J : J² = −1, Ω(Ju, Jv) = Ω(u, v) | J : J² = −1, hJu, Jvi = hu, vi
M ⊗ C = MJ+ ⊕ MJ− | V ⊗ C = VJ+ ⊕ VJ−
U (d) ⊂ Sp(2d, R) commutes with J | U (d) ⊂ SO(2d, R) commutes with J
Compatible J ∈ Sp(2d, R)/U (d) | Compatible J ∈ O(2d)/U (d)
aj , a†j satisfying CCR | aFj , a†Fj satisfying CAR
aj |0i = 0, |0i depends on J | aFj |0i = 0, |0i depends on J
Bogoliubov transformations generated by symmetric quadratics in a†j , a†k | Bogoliubov transformations generated by antisymmetric quadratics in a†Fj , a†Fk
H = Fd^fin = C[w1 , . . . , wd ] = S∗(Cd ) | H = HF = Λ∗(Cd )

Chapter 30

Supersymmetry, Some
Simple Examples

If one considers a fermionic and a bosonic quantum system that each separately
have operators coming from Lie algebra or superalgebra representations on their
state spaces, then when one combines the systems by taking the tensor product,
these operators will continue to act on the combined system. In certain special cases
new operators with remarkable properties will appear that mix the fermionic
and bosonic systems and commute with the Hamiltonian (often by giving some
sort of “square root” of the Hamiltonian). These are generically known as
“supersymmetries” and provide new information about energy eigenspaces. In
this chapter we’ll examine in detail some of the simplest such quantum systems,
examples of “supersymmetric quantum mechanics”.

30.1 The supersymmetric oscillator


In the previous chapters we discussed in detail

• The bosonic harmonic oscillator in d degrees of freedom, with state space


HB generated by applying the d creation operators a†Bj an arbitrary number
of times to a lowest energy state |0iB . The Hamiltonian is

H = (1/2) ℏω Σ_{j=1}^d (a†Bj aBj + aBj a†Bj ) = Σ_{j=1}^d (NBj + 1/2) ℏω

where NB j is the number operator for the j’th degree of freedom, with
eigenvalues nB j = 0, 1, 2, · · · .

• The fermionic oscillator in d degrees of freedom, with state space HF


generated by applying the d creation operators a†Fj to a lowest energy state
|0iF . The Hamiltonian is

H = (1/2) ℏω Σ_{j=1}^d (a†Fj aFj − aFj a†Fj ) = Σ_{j=1}^d (NFj − 1/2) ℏω

where NF j is the number operator for the j’th degree of freedom, with
eigenvalues nF j = 0, 1.

Putting these two systems together we get a new quantum system with state
space
H = HB ⊗ HF

and Hamiltonian
H = Σ_{j=1}^d (NBj + NFj ) ℏω

Notice that the lowest energy state |0i for the combined system has energy 0,
due to cancellation between the bosonic and fermionic degrees of freedom.
For now, taking for simplicity the case d = 1 of one degree of freedom, the
Hamiltonian is
H = (NB + NF ) ℏω

with eigenvectors |nB , nF i satisfying

H|nB , nF i = (nB + nF ) ℏω |nB , nF i

Notice that while there is a unique lowest energy state |0, 0i of zero energy, all
non-zero energy states come in pairs, with two states

|n, 0i and |n − 1, 1i

both having energy nℏω.


This kind of degeneracy of energy eigenvalues usually indicates the existence
of some new symmetry operators commuting with the Hamiltonian operator.
We are looking for operators that will take |n, 0i to |n − 1, 1i and vice-versa,
and the obvious choice is the two operators

Q+ = aB a†F , Q− = a†B aF

which are not self-adjoint, but are each other’s adjoints ((Q− )† = Q+ ).
The pattern of energy eigenstates is thus: a unique zero-energy state |0, 0i,
and at each positive energy nℏω (n = 1, 2, . . .) the degenerate pair |n, 0i and
|n − 1, 1i.
Computing anticommutators using the CCR and CAR for the bosonic and
fermionic operators (and the fact that the bosonic operators commute with the
fermionic ones since they act on different factors of the tensor product), one
finds that
Q+² = Q−² = 0

and
(Q+ + Q− )² = [Q+ , Q− ]+ = H

One could instead work with self-adjoint combinations

Q1 = Q+ + Q− ,  Q2 = (1/i)(Q+ − Q− )
which satisfy
[Q1 , Q2 ]+ = 0,  Q1² = Q2² = H

Notice that the Hamiltonian H is a square of the self-adjoint operator Q+ +


Q− , and this fact alone tells us that the energy eigenvalues will be non-negative.
It also tells us that energy eigenstates of non-zero energy will come in pairs

|ψi, (Q+ + Q− )|ψi

with the same energy. To find states of zero energy, instead of trying to solve
the equation H|0i = 0 for |0i, one can look for solutions to
Q1 |0i = 0 or Q2 |0i = 0
These operators don’t correspond to a Lie algebra representation as H does,
but do come from a Lie superalgebra representation, so are described as gen-
erators of a “supersymmetry” transformation. In more general theories with
operators like this with the same relation to the Hamiltonian, one may or may
not have solutions to
Q1 |0i = 0 or Q2 |0i = 0
If such solutions exist, the lowest energy state has zero energy and is described
as invariant under the supersymmetry. If no such solutions exist, the lowest
energy state will have a non-zero, positive energy, and satisfy
Q1 |0i ≠ 0 or Q2 |0i ≠ 0
In this case one says that the supersymmetry is “spontaneously broken”, since
the lowest energy state is not invariant under supersymmetry.
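These algebraic facts can be checked numerically by truncating the bosonic Fock space (the truncation is an artifact of the sketch, so the identity (Q+ + Q−)² = H is verified away from the truncation edge):

```python
import numpy as np

nmax = 6  # bosonic truncation (sketch artifact)
aB = np.diag(np.sqrt(np.arange(1, nmax + 1)), 1).astype(complex)
aF = np.array([[0, 1], [0, 0]], dtype=complex)
IB, IF = np.eye(nmax + 1), np.eye(2)

Qp = np.kron(aB, aF.conj().T)  # Q+ = a_B a_F-dagger
Qm = np.kron(aB.conj().T, aF)  # Q- = a_B-dagger a_F

# H = N_B + N_F (units hbar*omega = 1)
H = np.kron(aB.conj().T @ aB, IF) + np.kron(IB, aF.conj().T @ aF)

assert np.allclose(Qp @ Qp, np.zeros_like(Qp))
assert np.allclose(Qm @ Qm, np.zeros_like(Qm))

# (Q+ + Q-)^2 = H, checked on states with n_B < nmax
P = np.kron(np.diag([1.0] * nmax + [0.0]), IF)
square = (Qp + Qm) @ (Qp + Qm)
assert np.allclose(P @ square @ P, P @ H @ P)
```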
There is an example of a physical quantum mechanical system that has ex-
actly the behavior of this supersymmetric oscillator. A charged particle confined
to a plane, coupled to a magnetic field perpendicular to the plane, can be de-
scribed by a Hamiltonian that can be put in the bosonic oscillator form (to
show this, we need to know how to couple quantum systems to electromagnetic
fields, which we will come to later in the course). The equally spaced energy
levels are known as “Landau levels”. If the particle has spin one-half, there will
be an additional term in the Hamiltonian coupling the spin and the magnetic
field, exactly the one we have seen in our study of the two-state system. This
additional term is precisely the Hamiltonian of a fermionic oscillator. For the
case of gyromagnetic ratio g = 2, the coefficients match up so that we have
exactly the supersymmetric oscillator described above, with exactly the pattern
of energy levels seen there.

30.2 Supersymmetric quantum mechanics with


a superpotential
The supersymmetric oscillator system can be generalized to a much wider class
of potentials, while still preserving the supersymmetry of the system. In this
section we’ll introduce a so-called “superpotential” W (q), with the harmonic
oscillator the special case
W (q) = q²/2

For simplicity, we will here choose constants ℏ = ω = 1.
Recall that our bosonic annihilation and creation operators were defined by
aB = (1/√2)(Q + iP ),  a†B = (1/√2)(Q − iP )

Introducing an arbitrary superpotential W (q) with derivative W′(q), we can
define new annihilation and creation operators:

aB = (1/√2)(W′(Q) + iP ),  a†B = (1/√2)(W′(Q) − iP )

Here W′(Q) is the multiplication operator W′(q) in the Schrödinger representa-


tion on functions of q, defined by conjugation with a unitary operator in other
unitarily equivalent representations. We keep our definition of the operators

Q+ = aB a†F , Q− = a†B aF

These satisfy
Q+² = Q−² = 0

for the same reason as in the oscillator case: repeated factors of aF or a†F vanish.
Taking as the Hamiltonian the same square as before, we find

H = (Q+ + Q− )²
  = (1/2)(W′(Q) + iP )(W′(Q) − iP ) a†F aF + (1/2)(W′(Q) − iP )(W′(Q) + iP ) aF a†F
  = (1/2)(W′(Q)² + P ²)(a†F aF + aF a†F ) + (1/2)(i[P, W′(Q)])(a†F aF − aF a†F )
  = (1/2)(W′(Q)² + P ²) + (1/2)(i[P, W′(Q)])σ3
2 2
But iP is the operator corresponding to infinitesimal translations in Q, so we
have
i[P, W 0 (Q)] = W 00 (Q)
and
1 1 00
H= (W 0 (Q)2 + P 2 ) + W (Q)σ3
2 2
which gives a large class of quantum systems, all with state space

H = HB ⊗ HF = L2 (R) ⊗ C2

(using the Schrödinger representation for the bosonic factor).


The energy eigenvalues will be non-negative, and energy eigenvectors with
positive energy will occur in pairs

|ψi, (Q+ + Q− )|ψi
There may or may not be a state with zero energy, depending on whether
or not one can find a solution to the equation

(Q+ + Q− )|0i = Q1 |0i = 0

If such a solution does exist, thinking in terms of super Lie algebras, one calls
Q1 the generator of the action of a supersymmetry on the state space, and
describes the ground state |0i as invariant under supersymmetry. If no such
solution exists, one has a theory with a Hamiltonian that is invariant under su-
persymmetry, but with a ground state that isn’t. In this situation one describes
the supersymmetry as “spontaneously broken”. The question of whether a given
supersymmetric theory has its supersymmetry spontaneously broken or not is
one that has become of great interest in the case of much more sophisticated su-
persymmetric quantum field theories. There, hopes (so far unrealized) of making
contact with the real world rely on finding theories where the supersymmetry
is spontaneously broken.
In this simple quantum mechanical system, one can try and explicitly solve
the equation Q1 |ψi = 0. States can be written as two-component complex
functions

|ψi = [ψ+ (q); ψ− (q)]

and the equation to be solved is

(Q+ + Q− )|ψi = (1/√2)((W′(Q) + iP )a†F + (W′(Q) − iP )aF ) [ψ+ (q); ψ− (q)]
  = (1/√2)((W′(Q) + d/dq)[0 1; 0 0] + (W′(Q) − d/dq)[0 0; 1 0]) [ψ+ (q); ψ− (q)]
  = (1/√2)(W′(Q)[0 1; 1 0] + (d/dq)[0 1; −1 0]) [ψ+ (q); ψ− (q)]
  = (1/√2)[0 1; −1 0](d/dq − W′(Q)σ3 ) [ψ+ (q); ψ− (q)] = 0

which has general solution

[ψ+ (q); ψ− (q)] = e^{W (q)σ3 } [c+ ; c− ] = [c+ e^{W (q)} ; c− e^{−W (q)} ]

for complex constants c+ , c− . Such solutions will only be normalizable if

c+ = 0,  lim_{q→±∞} W (q) = +∞

or

c− = 0,  lim_{q→±∞} W (q) = −∞

If, for example, W (q) is an odd polynomial, one will not be able to satisfy either
of these conditions, so there will be no solution, and the supersymmetry will be
spontaneously broken.
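Both behaviors can be seen in a quick finite-difference diagonalization (a numerical sketch, not from the text): for W(q) = q²/2 the ground-state energy of H = (1/2)(W′(Q)² + P²) + (1/2)W″(Q)σ3 is zero (supersymmetry unbroken), while for the odd superpotential W(q) = q³/3 it is strictly positive (supersymmetry spontaneously broken):

```python
import numpy as np

def susy_ground_energy(Wp, Wpp, q):
    # diagonalize H = (1/2)(P^2 + W'(Q)^2) + (1/2) W''(Q) sigma_3 on a grid,
    # one sigma_3 eigenspace at a time; P^2 by second-order finite differences
    h, n = q[1] - q[0], len(q)
    P2 = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    e = []
    for sign in (+1, -1):
        H = 0.5 * P2 + np.diag(0.5 * Wp(q)**2 + 0.5 * sign * Wpp(q))
        e.append(np.linalg.eigvalsh(H)[0])
    return min(e)

q = np.linspace(-8, 8, 801)

# W(q) = q^2/2: a normalizable zero mode exists, supersymmetry unbroken
e_unbroken = susy_ground_energy(lambda q: q, lambda q: np.ones_like(q), q)
assert abs(e_unbroken) < 1e-2

# W(q) = q^3/3 (odd): no normalizable zero mode, supersymmetry broken
e_broken = susy_ground_energy(lambda q: q**2, lambda q: 2 * q, q)
assert e_broken > 1e-2
```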

30.3 Supersymmetric quantum mechanics and
differential forms
If one considers supersymmetric quantum mechanics in the case of d degrees of
freedom and in the Schrödinger representation, one has

H = L2 (Rd ) ⊗ Λ∗ (Rd )

the tensor product of complex-valued functions on Rd (acted on by the Weyl


algebra Weyl(2d, C)) and anticommuting functions on Rd (acted on by the
Clifford algebra Cliff(2d, C)). There are two operators Q+ and Q− , adjoints
of each other and of square zero. If one has studied differential forms, this
should look familiar. This space H is well-known to mathematicians, as the
complex-valued differential forms on Rd , often written Ω∗(Rd ), where here the
∗ denotes an index taking values from 0 (the 0-forms, or functions) to d (the
d-forms). In the theory of differential forms, it is well known that one has an
operator d on Ω∗ (Rd ) with square zero, called the de Rham differential. Using
the inner product on Rd , one can put a Hermitian inner product on Ω∗ (Rd )
by integration, and then d has an adjoint δ, also of square zero. The Laplacian
operator on differential forms is

∆ = (d + δ)^2

The supersymmetric quantum system we have been considering corresponds


precisely to this, once one conjugates d, δ as follows

Q_+ = e^{−W(q)}\, d\, e^{W(q)},   Q_- = e^{W(q)}\, δ\, e^{−W(q)}

In mathematics, the interest in differential forms mainly comes from the


fact that one can construct them not just on Rd , but on a general differentiable
manifold M , with a corresponding construction of d, δ,  operators. In Hodge
theory, one studies solutions of
∆ψ = 0
(these are called “harmonic forms”) and finds that the dimension of the space
of such solutions can be used to get topological invariants of the manifold M .

30.4 For further reading


For a reference at the level of these notes, see [22]. For more details about
supersymmetric quantum mechanics see the quantum mechanics textbook of
Tahktajan [64], and lectures by Orlando Alvarez [1]. These references also
describe the relation of these systems to the calculation of topological invariants,
a topic pioneered in Witten’s 1982 paper on supersymmetry and Morse theory
[74].

Chapter 31

The Pauli Equation and the Dirac Operator

In chapter 30 we considered supersymmetric quantum mechanical systems where


both the bosonic and fermionic variables that get quantized take values in an
even dimensional space R2d . There are then two supersymmetry operators Q1
and Q2 , so this is sometimes called N = 2 supersymmetry (in an alternate nor-
malization, counting complex variables, it is called N = 1). It turns out however
that there are very interesting quantum mechanics systems that one can get by
quantizing bosonic variables in phase space R2d , but fermionic variables in Rd .
The operators appearing in such a theory will be given by the tensor product
of the Weyl algebra in 2d variables and the Clifford algebra in d variables.
In such a theory the operator −|P|^2 will have a square root, the Dirac operator ∂/.
This existence of a square root of the Casimir operator provides a new
way to construct irreducible representations of the group of spatial symmetries,
using a new sort of quantum free particle, one carrying an internal “spin” de-
gree of freedom. Remarkably, fundamental matter particles are well-described
in exactly this way.

31.1 The Pauli operator and free spin 1/2 particles in d = 3
We have so far seen two quite different quantum systems based on three dimen-
sional space:

• The free particle of chapter 17. This had classical phase space R^6 with
coordinates q_1, q_2, q_3, p_1, p_2, p_3 and Hamiltonian \frac{1}{2m}|p|^2. Quantization us-
ing the Schrödinger representation gave operators Q_1, Q_2, Q_3, P_1, P_2, P_3
on the space H_B = L^2(R^3) of square-integrable functions of the position

coordinates. The Hamiltonian operator is

H = \frac{1}{2m}|P|^2 = −\frac{1}{2m}\left(\frac{∂^2}{∂q_1^2} + \frac{∂^2}{∂q_2^2} + \frac{∂^2}{∂q_3^2}\right)

• The spin 1/2 quantum system, discussed first in chapter 7 and later in


section 28.1.1. This had a pseudo-classical fermionic phase space R3 with
coordinates ξ1 , ξ2 , ξ3 which after quantization became the operators
\frac{1}{\sqrt{2}}σ_1,   \frac{1}{\sqrt{2}}σ_2,   \frac{1}{\sqrt{2}}σ_3
on the state space HF = C2 . For this system we considered the Hamilto-
nian describing its interaction with a constant background magnetic field
H = −\frac{1}{2}(B_1σ_1 + B_2σ_2 + B_3σ_3)

It turns out to be an experimental fact that fundamental fermionic particles


are described by a quantum system that is the tensor product of these two
systems, with state space

H = HB ⊗ HF = L2 (R3 ) ⊗ C2

which one can think of as two-component complex wavefunctions. This sys-


tem has a pseudo-classical description using a phase space with six conventional
coordinates qj , pj and three fermionic coordinates ξj . On functions of these
coordinates one has a generalized Poisson bracket which provides a Lie superal-
gebra structure on such functions. On generators, the non-zero bracket relations
are just
{qj , pk } = δjk , {ξj , ξk } = δjk
For now we will take the background magnetic field B = 0. In chapter 42
we will see how to generalize the free particle to the case of a free particle in a
general background electromagnetic field, and then the Hamiltonian term above
involving the B field will appear. In the absence of electromagnetic fields the
classical Hamiltonian function will still just be
h = \frac{1}{2m}(p_1^2 + p_2^2 + p_3^2)
but now this can be written in the following form (using the Leibniz rule for a
Lie superbracket)
h = \frac{1}{2m}\left\{\sum_{j=1}^3 p_jξ_j, \sum_{k=1}^3 p_kξ_k\right\} = \frac{1}{2m}\sum_{j,k=1}^3 p_j\{ξ_j, ξ_k\}p_k = \frac{1}{2m}\sum_{j=1}^3 p_j^2

Note the appearance of the function p1 ξ1 + p2 ξ2 + p3 ξ3 which now plays a role


even more fundamental than that of the Hamiltonian (which can be expressed

in terms of it). In this pseudo-classical theory p1 ξ1 + p2 ξ2 + p3 ξ3 is a “super-
symmetry”, Poisson commuting with the Hamiltonian, while at the same time
playing the role of a sort of “square root” of the Hamiltonian, providing a new
sort of symmetry that can be thought of as a “square root” of an infinitesimal
time translation.
Quantization takes
p_1ξ_1 + p_2ξ_2 + p_3ξ_3 → \frac{1}{\sqrt{2}}σ·P
and the Hamiltonian operator can now be written as an anticommutator and a
square
H = \frac{1}{2m}\left[\frac{1}{\sqrt{2}}σ·P, \frac{1}{\sqrt{2}}σ·P\right]_+ = \frac{1}{2m}(σ·P)^2 = \frac{1}{2m}(P_1^2 + P_2^2 + P_3^2)
(using the fact that the σj satisfy the Clifford algebra relations for Cliff(3, 0, R)).
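The identity (σ·P)² = |P|² rests only on the Clifford algebra relations σ_jσ_k + σ_kσ_j = 2δ_jk. A small numerical sanity check (ours, not the text's; the numerical vector p standing in for P is an arbitrary choice):

```python
import numpy as np

# Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

p = np.array([0.7, -1.3, 0.4])        # arbitrary numerical stand-in for P
sigma_p = sum(pj * sj for pj, sj in zip(p, sigma))

# the Clifford relations imply (sigma . p)^2 = |p|^2 times the identity
assert np.allclose(sigma_p @ sigma_p, np.dot(p, p) * np.eye(2))
```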
We will define the three-dimensional Dirac operator as
∂/ = σ_1\frac{∂}{∂q_1} + σ_2\frac{∂}{∂q_2} + σ_3\frac{∂}{∂q_3} = σ·∇
It operates on two-component wavefunctions

\begin{pmatrix} ψ_1(q) \\ ψ_2(q) \end{pmatrix}
Using this Dirac operator (often called in this context the “Pauli operator”) we
can write a two-component version of the Schrödinger equation (often called the
“Pauli equation” or “Schrödinger-Pauli equation”)
   
i\frac{∂}{∂t}\begin{pmatrix} ψ_1(q) \\ ψ_2(q) \end{pmatrix} = −\frac{1}{2m}\left(σ_1\frac{∂}{∂q_1} + σ_2\frac{∂}{∂q_2} + σ_3\frac{∂}{∂q_3}\right)^2\begin{pmatrix} ψ_1(q) \\ ψ_2(q) \end{pmatrix}   (31.1)

= −\frac{1}{2m}\left(\frac{∂^2}{∂q_1^2} + \frac{∂^2}{∂q_2^2} + \frac{∂^2}{∂q_3^2}\right)\begin{pmatrix} ψ_1(q) \\ ψ_2(q) \end{pmatrix}
This free-particle version of the equation is just two copies of the standard free-
particle Schrödinger equation, so physically just corresponds to two independent
quantum free particles. It becomes much more non-trivial when a coupling to
an electromagnetic field is introduced, as will be seen in chapter 42.
The introduction of a two-component wavefunction does allow us to find
more interesting irreducible representations of the group E(3), beyond the ones
studied in chapter 17. These have eigenvalue ±p/2 (where p^2 is the eigenvalue
of the first Casimir operator |P|2 ) for the second Casimir operator J · P, as
opposed to the zero eigenvalue case of single wavefunctions.
These representations will as before be on the space of solutions of the time-
independent equation, and irreducible for fixed choice of the energy E. The
equation for the energy eigenfunctions of energy eigenvalue E will be
   
\frac{1}{2m}(σ·P)^2\begin{pmatrix} ψ_1(q) \\ ψ_2(q) \end{pmatrix} = E\begin{pmatrix} ψ_1(q) \\ ψ_2(q) \end{pmatrix}

In terms of the inverse Fourier transform
ψ_{1,2}(q) = \frac{1}{(2π)^{3/2}}\iiint e^{ip·q}ψ̃_{1,2}(p)\,d^3p

this equation becomes


((σ·p)^2 − 2mE)\begin{pmatrix} ψ̃_1(p) \\ ψ̃_2(p) \end{pmatrix} = (|p|^2 − 2mE)\begin{pmatrix} ψ̃_1(p) \\ ψ̃_2(p) \end{pmatrix} = 0   (31.2)

and as in chapter 17 our solution space is given by functions ψ̃_{E,1,2}(p) on the
sphere of radius \sqrt{2mE} = |p| in momentum space (although now, two such
functions).
Another way to find solutions to this equation is to look for solutions to
a pair of first-order equations involving the three-dimensional Dirac operator.
Solutions to

σ·p\begin{pmatrix} ψ̃_1(p) \\ ψ̃_2(p) \end{pmatrix} = ±\sqrt{2mE}\begin{pmatrix} ψ̃_1(p) \\ ψ̃_2(p) \end{pmatrix}

will give solutions to 31.2, for either sign. One can rewrite this as

\frac{σ·p}{|p|}\begin{pmatrix} ψ̃_1(p) \\ ψ̃_2(p) \end{pmatrix} = ±\begin{pmatrix} ψ̃_1(p) \\ ψ̃_2(p) \end{pmatrix}

and we will write solutions to this equation with the + sign as ψ̃_{E,+}(p), those for
the − sign as ψ̃_{E,−}(p). Note that ψ̃_{E,+}(p) and ψ̃_{E,−}(p) are each two-component
complex functions of the momentum, supported on the sphere |p| = \sqrt{2mE}.
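Since (σ·p/|p|)² = 1, at each point p this operator has eigenvalues exactly ±1, which is what splits the solution space into ψ̃_{E,+} and ψ̃_{E,−}. A quick numerical illustration (ours, not the text's; the momentum vector is an arbitrary choice):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

p = np.array([0.3, -1.2, 0.5])        # arbitrary momentum vector
helicity = sum(pj * sj for pj, sj in zip(p, sigma)) / np.linalg.norm(p)

# (sigma.p/|p|)^2 = 1, so its eigenvalues are +1 and -1
evals = np.sort(np.linalg.eigvalsh(helicity))
assert np.allclose(evals, [-1.0, 1.0])
```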
In chapter 16 we saw that R ∈ SO(3) acts on single-component momentum
space solutions of the Schrödinger equation by

ψ̃_E(p) → ũ(0, R)ψ̃_E(p) ≡ ψ̃_E(R^{−1}p)

This takes solutions to solutions since the operator ũ(0, R) commutes with the
Casimir operator |P|^2

ũ(0, R)|P|^2 = |P|^2ũ(0, R) ⟺ ũ(0, R)|P|^2ũ(0, R)^{−1} = |P|^2

This is true since

ũ(0, R)|P|^2ũ(0, R)^{−1}ψ̃(p) = ũ(0, R)|P|^2ψ̃(Rp) = |R^{−1}P|^2ψ̃(R^{−1}Rp) = |P|^2ψ̃(p)

For two-component wavefunctions, we could try to just take the representation to be

ũ(0, R)\begin{pmatrix} ψ̃_1(p) \\ ψ̃_2(p) \end{pmatrix} = \begin{pmatrix} ψ̃_1(R^{−1}p) \\ ψ̃_2(R^{−1}p) \end{pmatrix}

(or, equivalently, thinking of two component wavefunctions as the tensor product
of the space of single component wavefunction with C2 , just acting on the
first factor). If we do this, the operator σ · P does not commute with the
representation ũ(0, R) because

ũ(0, R)(σ·P)ũ(0, R)^{−1} = σ·R^{−1}P ≠ σ·P
Then rotations do not act separately on the spaces ψ̃_{E,+}(p) and ψ̃_{E,−}(p).
If we want rotations to act separately on these spaces, we need to change
the action of rotations to

ψ̃_{E,±}(p) → ũ_S(0, R)ψ̃_{E,±}(p) = Ωψ̃_{E,±}(R^{−1}p)
where Ω is one of the two elements of SU (2) corresponding to R ∈ SO(3) (or,
in terms of tensor products, action by SU (2) on the C2 factor). Such an Ω can
be constructed using equation 6.3
Ω = Ω(φ, w) = e^{−i\frac{φ}{2}w·σ}
Equation 6.5 shows that Ω is the SU (2) matrix corresponding to a rotation R
by an angle φ about the axis given by a unit vector w.
With this action on solutions we have

ũ_S(0, R)(σ·P)ũ_S(0, R)^{−1}ψ̃_{E,±}(p) = ũ_S(0, R)(σ·P)Ω^{−1}ψ̃_{E,±}(Rp)
= Ω(σ·R^{−1}P)Ω^{−1}ψ̃_{E,±}(R^{−1}Rp)
= (σ·P)ψ̃_{E,±}(p)
where we have used equation 6.5 to show
Ω(σ·R^{−1}P)Ω^{−1} = (σ·RR^{−1}P) = (σ·P)
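This conjugation identity is easy to verify numerically. The sketch below is ours, not the text's: it takes w along the z-axis, builds Ω = cos(φ/2)1 − i sin(φ/2)w·σ in closed form together with the corresponding SO(3) rotation R, and checks that Ω(σ·R⁻¹v)Ω⁻¹ = σ·v for an arbitrarily chosen vector v.

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def sigma_dot(v):
    return sum(vj * sj for vj, sj in zip(v, sigma))

phi = 0.7                                   # arbitrary rotation angle
# Omega = exp(-i (phi/2) w.sigma) for w = e_3, in closed form;
# since Omega is in SU(2), its inverse is its conjugate transpose
Omega = np.cos(phi / 2) * np.eye(2) - 1j * np.sin(phi / 2) * sigma[2]
# R = rotation by phi about the z-axis
R = np.array([[np.cos(phi), -np.sin(phi), 0.0],
              [np.sin(phi),  np.cos(phi), 0.0],
              [0.0, 0.0, 1.0]])

v = np.array([0.2, -0.5, 1.1])              # arbitrary vector
lhs = Omega @ sigma_dot(np.linalg.inv(R) @ v) @ Omega.conj().T
assert np.allclose(lhs, sigma_dot(v))
```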
Note that the two representations we get this way are representations not
of the rotation group SO(3) but of its double cover Spin(3) = SU (2) (because
otherwise there is a sign ambiguity since we don’t know whether to choose Ω
or −Ω). The translation part of the spatial symmetry group is easily seen to
commute with σ · P, so we have constructed representations of E(3), or rather,
of its double cover
Ẽ(3) = R^3 ⋊ SU(2)
on the two spaces of solutions ψ̃_{E,±}(p). We will see that these two representa-
tions are the E(3) representations described in section 17.3, the ones labeled by
the helicity ±1/2 representations of the stabilizer group SO(2).
The translation part of the group acts as in the one-component case, by the
multiplication operator
ũ_S(a, 1)ψ̃_{E,±}(p) = e^{−i(a·p)}ψ̃_{E,±}(p)

and

ũ_S(a, 1) = e^{−ia·P}
so the Lie algebra representation is given by the usual P operator. The SU (2)
part of the group acts by a product of two commuting different actions

1. The same action on the momentum coordinates as in the one-component
case, just using R = Φ(Ω), the SO(3) rotation corresponding to the SU(2)
group element Ω. For example, for a rotation about the x-axis by angle φ
we have

ψ̃_{E,±}(p) → ψ̃_{E,±}(R(φ, e_1)^{−1}p)

Recall that the operator that does this is e^{−iφL_1} where

−iL_1 = −i(Q_2P_3 − Q_3P_2) = −\left(q_2\frac{∂}{∂q_3} − q_3\frac{∂}{∂q_2}\right)
and in general we have operators

−iL = −iQ × P

that provide the Lie algebra version of the representation (recall that at
the Lie algebra level, SO(3) and Spin(3) are isomorphic).
2. The action of the matrix Ω ∈ SU(2) on the two-component wavefunction
by

ψ̃_{E,±}(p) → Ωψ̃_{E,±}(p)

For a rotation by angle φ about the x-axis we have

Ω = e^{−iφσ_1/2}

and the operators that provide the Lie algebra version of the representation
are

−iS = −i\frac{1}{2}σ
The Lie algebra representation corresponding to the action of both of these
transformations is given by the operator

−iJ = −i(L + S)

and the standard terminology is to call L the “orbital” angular momentum, S


the “spin” angular momentum, and J the “total” angular momentum.
The second Casimir operator for this case is

J·P

and as in the one-component case the L·P part of this acts trivially on our
solutions ψ̃_{E,±}(p). The spin component acts non-trivially and we have

(J·P)ψ̃_{E,±}(p) = \left(\frac{1}{2}σ·p\right)ψ̃_{E,±}(p) = ±\frac{1}{2}|p|ψ̃_{E,±}(p)
so we see that our solutions have helicity (eigenvalue of J·P divided by the
square root of the eigenvalue of |P|^2) values ±1/2, as opposed to the integral
helicity values discussed in chapter 17, where E(3) appeared and not its double
cover.

31.2 The Dirac operator
One can generalize the above construction to the case of any dimension d as
follows. Recall from chapter 26 that associated to Rd with a standard inner
product, but of a general signature (r, s) (where r + s = d, r is the number
of + signs, s the number of − signs) we have a Clifford algebra Cliff(r, s) with
generators γj satisfying
γ_jγ_k = −γ_kγ_j,   j ≠ k

γ_j^2 = +1 for j = 1, ..., r,   γ_j^2 = −1 for j = r + 1, ..., d
To any vector v ∈ R^d with components v_j recall that we can associate a corresponding element v/ in the Clifford algebra by

v ∈ R^d → v/ = \sum_{j=1}^d γ_jv_j ∈ Cliff(r, s)

Multiplying this Clifford algebra element by itself and using the relations above,
we get a scalar, the length-squared of the vector
v/^2 = v_1^2 + v_2^2 + ··· + v_r^2 − v_{r+1}^2 − ··· − v_d^2 = |v|^2
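For a concrete check, Cliff(1, 1) can be realized with 2×2 matrices: γ₁ = σ₁ and γ₂ = iσ₂ satisfy the relations above with r = s = 1. The following numerical verification is ours, not the text's (the vector v is an arbitrary choice):

```python
import numpy as np

# generators for Cliff(1,1): gamma_1^2 = +1, gamma_2^2 = -1, anticommuting
g1 = np.array([[0, 1], [1, 0]], dtype=complex)      # sigma_1
g2 = 1j * np.array([[0, -1j], [1j, 0]])             # i * sigma_2
assert np.allclose(g1 @ g1, np.eye(2))
assert np.allclose(g2 @ g2, -np.eye(2))
assert np.allclose(g1 @ g2 + g2 @ g1, 0)

v = np.array([1.3, 0.4])
vslash = v[0] * g1 + v[1] * g2
# vslash^2 = (v_1^2 - v_2^2) * identity, the signature-(1,1) length-squared
assert np.allclose(vslash @ vslash, (v[0]**2 - v[1]**2) * np.eye(2))
```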

This shows that by introducing a Clifford algebra, we can find an interesting
new sort of square root for expressions like |v|^2. We can define

Definition (Dirac operator). The Dirac operator is the operator

∂/ = \sum_{j=1}^d γ_j\frac{∂}{∂q_j}

This will be a first-order differential operator with the property that its
square is the Laplacian
∂/^2 = \frac{∂^2}{∂q_1^2} + ··· + \frac{∂^2}{∂q_r^2} − \frac{∂^2}{∂q_{r+1}^2} − ··· − \frac{∂^2}{∂q_d^2}

The Dirac operator ∂/ acts not on functions but on functions taking values
in the spinor vector space S that the Clifford algebra acts on. Picking a matrix
representation of the γj , the Dirac operator will be a constant coefficient first
order differential operator acting on wavefunctions with dim S components. In
chapter 44 we will study in detail what happens for the case of r = 3, s = 1 and
see how the Dirac operator there provides an appropriate wave-equation with
the symmetries of special relativistic space-time.

31.3 For further reading


The point of view here in terms of representations of Ẽ(3) is not very conventional, but the material here about spin and the Pauli equation can be found

in any quantum mechanics book, see for example chapter 14 of [57]. For more
details about supersymmetric quantum mechanics and the appearance of the
Dirac operator as the generator of a supersymmetry in the quantization of a
pseudo-classical system, see [64] and [1].

Chapter 32

Lagrangian Methods and the Path Integral

In this chapter we’ll give a rapid survey of a different starting point for devel-
oping quantum mechanics, based on the Lagrangian rather than Hamiltonian
classical formalism. Lagrangian methods have quite different strengths and
weaknesses than those of the Hamiltonian formalism, and we’ll try and point
these out, while referring to standard physics texts for more detail about these
methods.
The Lagrangian formalism leads naturally to an apparently very different
notion of quantization, one based upon formulating quantum theory in terms of
infinite-dimensional integrals known as path integrals. A serious investigation
of these would require another and very different volume, so again we’ll have to
restrict ourselves to outlining how path integrals work, describing their strengths
and weaknesses, and giving references to standard texts for the details.

32.1 Lagrangian mechanics


In the Lagrangian formalism, instead of a phase space R2d of positions qj and
momenta pj , one considers just the position space Rd . Instead of a Hamiltonian
function h(q, p), one has
Definition (Lagrangian). The Lagrangian L for a classical mechanical system
with configuration space Rd is a function
L : (q, v) ∈ Rd × Rd → L(q, v) ∈ R
Given differentiable paths in the configuration space defined by functions
γ : t ∈ [t1 , t2 ] → Rd
which we will write in terms of their position and velocity vectors as
γ(t) = (q(t), q̇(t))

one can define a functional on the space of such paths
Definition. Action
The action S for a path γ is
S[γ] = \int_{t_1}^{t_2} L(q(t), q̇(t))\,dt

The fundamental principle of classical mechanics in the Lagrangian formal-


ism is that classical trajectories are given by critical points of the action func-
tional. These may correspond to minima of the action (so this is sometimes
called the “principle of least action”), but one gets classical trajectories also for
critical points that are not minima of the action. One can define the appropriate
notion of critical point as follows
Definition. Critical point for S
A path γ is a critical point of the functional S[γ] if
δS(γ) ≡ \frac{d}{ds}S(γ_s)\Big|_{s=0} = 0
when
γs : [t1 , t2 ] → Rd
is a smooth family of paths parametrized by an interval s ∈ (−ε, ε), with γ_0 = γ.
We’ll now ignore analytical details and adopt the physicist’s interpretation
of this as the first-order change in S due to an infinitesimal change δγ =
(δq(t), δ q̇(t)) in the path.
When (q(t), q̇(t)) satisfy a certain differential equation, the path γ will be a
critical point and thus a classical trajectory:
Theorem. Euler-Lagrange equations
One has
δS[γ] = 0
for all variations of γ with endpoints γ(t1 ) and γ(t2 ) fixed if
\frac{∂L}{∂q_j}(q(t), q̇(t)) − \frac{d}{dt}\left(\frac{∂L}{∂q̇_j}(q(t), q̇(t))\right) = 0
for j = 1, · · · , d. These are called the Euler-Lagrange equations.
Proof. Ignoring analytical details, the Euler-Lagrange equations follow from the
following calculations, which we'll just do for d = 1, with the generalization to
higher d straightforward. We are calculating the first-order change in S due to
an infinitesimal change δγ = (δq(t), δq̇(t))

δS[γ] = \int_{t_1}^{t_2} δL(q(t), q̇(t))\,dt = \int_{t_1}^{t_2}\left(\frac{∂L}{∂q}(q(t), q̇(t))δq(t) + \frac{∂L}{∂q̇}(q(t), q̇(t))δq̇(t)\right)dt

But

δq̇(t) = \frac{d}{dt}δq(t)

and, using integration by parts

\frac{∂L}{∂q̇}δq̇(t) = \frac{d}{dt}\left(\frac{∂L}{∂q̇}δq\right) − \frac{d}{dt}\left(\frac{∂L}{∂q̇}\right)δq

so

δS[γ] = \int_{t_1}^{t_2}\left(\left(\frac{∂L}{∂q} − \frac{d}{dt}\frac{∂L}{∂q̇}\right)δq + \frac{d}{dt}\left(\frac{∂L}{∂q̇}δq\right)\right)dt
= \int_{t_1}^{t_2}\left(\frac{∂L}{∂q} − \frac{d}{dt}\frac{∂L}{∂q̇}\right)δq\,dt + \left(\frac{∂L}{∂q̇}δq\right)(t_2) − \left(\frac{∂L}{∂q̇}δq\right)(t_1)

If we keep the endpoints fixed so δq(t_1) = δq(t_2) = 0, then for solutions to

\frac{∂L}{∂q}(q(t), q̇(t)) − \frac{d}{dt}\left(\frac{∂L}{∂q̇}(q(t), q̇(t))\right) = 0

the integral will be zero for arbitrary variations δq.
As an example, a particle moving in a potential V (q) will be described by a
Lagrangian
L(q, q̇) = \frac{1}{2}m\sum_{j=1}^d q̇_j^2 − V(q)

for which the Euler-Lagrange equations will be

−\frac{∂V}{∂q_j} = \frac{d}{dt}(mq̇_j)
This is just Newton’s second law which says that the force coming from the
potential is equal to the mass times the acceleration of the particle.
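This computation is easy to reproduce with a computer algebra system. A sympy sketch (our illustration, not the text's; the potential V(q) = kq² is an arbitrary choice):

```python
import sympy as sp

t, m, k = sp.symbols('t m k')
q = sp.Function('q')(t)

# L = (1/2) m qdot^2 - V(q), with the illustrative choice V(q) = k*q^2
L = sp.Rational(1, 2) * m * q.diff(t)**2 - k * q**2

# Euler-Lagrange expression: dL/dq - d/dt (dL/d qdot)
EL = L.diff(q) - L.diff(q.diff(t)).diff(t)

# this is -V'(q) - m*qddot, i.e. Newton's second law set to zero
assert sp.simplify(EL - (-2 * k * q - m * q.diff(t, 2))) == 0
```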
The derivation of the Euler-Lagrange equations can also be used to study
the implications of Lie group symmetries of a Lagrangian system. When a Lie
group G acts on the space of paths, preserving the action S, it will take classical
trajectories to classical trajectories, so we have a Lie group action on the space
of solutions to the equations of motion (the Euler-Lagrange equations). In
good cases, this space of solutions is just the phase space of the Hamiltonian
formalism. On this space of solutions, we have, from the calculation above
δS[γ] = \left(\frac{∂L}{∂q̇}δq(X)\right)(t_2) − \left(\frac{∂L}{∂q̇}δq(X)\right)(t_1)

where now δq(X) is the infinitesimal change in a classical trajectory coming
from the infinitesimal group action by an element X in the Lie algebra of G.
From invariance of the action S under G we must have δS = 0, so

\left(\frac{∂L}{∂q̇}δq(X)\right)(t_2) = \left(\frac{∂L}{∂q̇}δq(X)\right)(t_1)

This is an example of a more general result known as “Noether’s theorem”.
It says that given a Lie group action on a Lagrangian system that leaves the
action invariant, for each element X of the Lie algebra we will have a conserved
quantity

\frac{∂L}{∂q̇}δq(X)
which is independent of time along the trajectory. A basic example is when
the Lagrangian is independent of the position variables qj , depending only on
the velocities q̇j , for example in the case of a free particle, when V (q) = 0. In
such a case one has invariance under the Lie group Rd of space-translations.
An infinitesimal transformation in the j-direction is given by

δq_j(t) = ε

and the conserved quantity is

\frac{∂L}{∂q̇_j}
For the case of the free particle, this will be

\frac{∂L}{∂q̇_j} = mq̇_j

and the conservation law is conservation of momentum.


Given a Lagrangian classical mechanical system, one would like to be able
to find a corresponding Hamiltonian system that will give the same equations
of motion. To do this, we proceed by defining the momenta pj as above, as the
conserved quantities corresponding to space-translations, so

p_j = \frac{∂L}{∂q̇_j}

Then, instead of working with trajectories characterized at time t by

(q(t), q̇(t)) ∈ R2d

we would like to instead use

(q(t), p(t)) ∈ R2d

where p_j = \frac{∂L}{∂q̇_j} and this R^{2d} is the Hamiltonian phase space with the conventional Poisson bracket.
This transformation between position-velocity and phase space is known as
the Legendre transform, and in good cases (for instance when L is quadratic
in all the velocities) it is an isomorphism. In general though, this is not an
isomorphism, with the Legendre transform often taking position-velocity space
to a lower-dimensional subspace of phase space. Such cases require a much more
elaborate version of Hamiltonian formalism, known as “constrained Hamiltonian

dynamics” and are not unusual: one example we will see later is that of the
equations of motion of a free electromagnetic field (Maxwell’s equations).
Besides a phase space, for a Hamiltonian system one needs a Hamiltonian
function. Choosing
h = \sum_{j=1}^d p_jq̇_j − L(q, q̇)

will work, provided one can use the relation

p_j = \frac{∂L}{∂q̇_j}

to solve for the velocities q̇j and express them in terms of the momentum vari-
ables. In that case, computing the differential of h one finds (for d = 1, the
generalization to higher d is straightforward)
dh = p\,dq̇ + q̇\,dp − \frac{∂L}{∂q}dq − \frac{∂L}{∂q̇}dq̇ = q̇\,dp − \frac{∂L}{∂q}dq

So one has

\frac{∂h}{∂p} = q̇,   \frac{∂h}{∂q} = −\frac{∂L}{∂q}
but these are precisely Hamilton’s equations since the Euler-Lagrange equations
imply
\frac{∂L}{∂q} = \frac{d}{dt}\frac{∂L}{∂q̇} = ṗ
The Lagrangian formalism has the advantage of depending concisely just on
the choice of action functional, which does not distinguish time in the same
way that the Hamiltonian formalism does by its dependence on a choice of
Hamiltonian function h. This makes the Lagrangian formalism quite useful in
the case of relativistic quantum field theories, where one would like to exploit the
full set of space-time symmetries, which can mix space and time directions. On
the other hand, one loses the infinite dimensional group of symmetries of phase
space for which the Poisson bracket is the Lie bracket (the so-called “canonical
transformations”). In the Hamiltonian formalism we saw that the harmonic
oscillator could be best understood using such symmetries, in particular the
U (1) symmetry generated by the Hamiltonian function. The harmonic oscillator
is a more difficult problem in the Lagrangian formalism, where this symmetry
is not manifest.

32.2 Path integrals


After the Legendre transform of a Lagrangian classical system to phase space
and thus to Hamiltonian form, one can then apply the method of quantization

we have discussed extensively earlier (known to physicists as “canonical quan-
tization”). There is however a very different approach to quantization, which
completely bypasses the Hamiltonian formalism. This is the path integral for-
malism, which is based upon a method for calculating matrix elements of the
time-evolution operator
⟨q_T|e^{−\frac{i}{ℏ}HT}|q_0⟩
in the position eigenstate basis in terms of an integral over the space of paths
that go from q_0 to q_T in time T. Here |q_0⟩ is an eigenstate of Q with eigenvalue
q_0 (a delta-function at q_0 in the position space representation), and |q_T⟩ has Q
eigenvalue q_T (as in many cases, we'll stick to d = 1 for this discussion). This
matrix element has a physical interpretation as the amplitude for a particle
starting at q0 at t = 0 to have position qT at time T , with its norm-squared
giving the probability density for observing the particle at position qT .
To try and derive a path-integral expression for this, one breaks up the
interval [0, T ] into N equal-sized sub-intervals and calculates
⟨q_T|(e^{−\frac{i}{Nℏ}HT})^N|q_0⟩

If the Hamiltonian breaks up as H = K + V, the Trotter product formula shows that

⟨q_T|e^{−\frac{i}{ℏ}HT}|q_0⟩ = \lim_{N→∞} ⟨q_T|(e^{−\frac{i}{Nℏ}KT}e^{−\frac{i}{Nℏ}VT})^N|q_0⟩
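The Trotter product formula can be illustrated in finite dimensions, where no analytical subtleties arise. The following numpy sketch is ours, not the text's: randomly chosen Hermitian matrices stand in for K and V, and the error of the N-step product is seen to shrink as N grows.

```python
import numpy as np

def evolve(A, t):
    # e^{-i A t} for a Hermitian matrix A, via its eigendecomposition
    w, U = np.linalg.eigh(A)
    return (U * np.exp(-1j * w * t)) @ U.conj().T

rng = np.random.default_rng(0)
def rand_herm(n):
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (X + X.conj().T) / 2

K, V = rand_herm(4), rand_herm(4)
T = 1.0
exact = evolve(K + V, T)

def trotter(N):
    step = evolve(K, T / N) @ evolve(V, T / N)
    return np.linalg.matrix_power(step, N)

errs = [np.linalg.norm(trotter(N) - exact) for N in (10, 100, 1000)]
# the error decreases roughly like 1/N
assert errs[0] > errs[1] > errs[2]
```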

If K(P ) can be chosen to depend only on the momentum operator P and V (Q)
depends only on the operator Q then one can insert alternate copies of the
identity operator in the forms
\int_{−∞}^{∞}|q⟩⟨q|\,dq = 1,   \int_{−∞}^{∞}|p⟩⟨p|\,dp = 1

This gives a product of terms that looks like

⟨q_j|e^{−\frac{i}{Nℏ}K(P)T}|p_j⟩⟨p_j|e^{−\frac{i}{Nℏ}V(Q)T}|q_{j−1}⟩

where the index j goes from 1 to N and the p_j, q_j variables will be integrated
over. Such a term can be evaluated as
⟨q_j|p_j⟩⟨p_j|q_{j−1}⟩e^{−\frac{i}{Nℏ}K(p_j)T}e^{−\frac{i}{Nℏ}V(q_{j−1})T}
= \frac{1}{\sqrt{2πℏ}}e^{\frac{i}{ℏ}q_jp_j}\frac{1}{\sqrt{2πℏ}}e^{−\frac{i}{ℏ}q_{j−1}p_j}e^{−\frac{i}{Nℏ}K(p_j)T}e^{−\frac{i}{Nℏ}V(q_{j−1})T}
= \frac{1}{2πℏ}e^{\frac{i}{ℏ}p_j(q_j−q_{j−1})}e^{−\frac{i}{Nℏ}(K(p_j)+V(q_{j−1}))T}
The N factors of this kind give an overall factor of \left(\frac{1}{2πℏ}\right)^N times something
which is a discretized approximation to

e^{\frac{i}{ℏ}\int_0^T(pq̇ − h(q(t),p(t)))dt}

where the phase in the exponential is just the action. Taking into account the
integrations over qj and pj one should have something like

⟨q_T|e^{−\frac{i}{ℏ}HT}|q_0⟩ = \lim_{N→∞}\left(\frac{1}{2πℏ}\right)^N\prod_{j=1}^N\int_{−∞}^{∞}\int_{−∞}^{∞}dp_j\,dq_j\, e^{\frac{i}{ℏ}\int_0^T(pq̇ − h(q(t),p(t)))dt}

although one should not do the first and last integrals over q but fix the first
value of q to q0 and the last one to qT . One can try and interpret this sort of
integration in the limit as an integral over the space of paths in phase space,
thus a “phase space path integral”.
This is an extremely simple and seductive expression, apparently saying that,
once the action S is specified, a quantum system is defined just by considering
integrals
Z
i
Dγ e ~ S[γ]

over paths γ in phase space, where Dγ is some sort of measure on this space of
paths. Since the integration just involves factors of dpdq and the exponential
just pdq and h, this formalism seems to share the same sort of invariance under
the infinite-dimensional group of canonical transformations (transformations of
the phase space preserving the Poisson bracket) as the classical Hamiltonian
formalism. It also appears to solve our problem with operator ordering ambiguities, since introducing products of P's and Q's at various times will just
give a phase space path integral with the corresponding p and q factors in the
integrand, but these commute.
Unfortunately, we know from the Groenewold-van Hove theorem that this
is too good to be true. This expression cannot give a unitary representation of
the full group of canonical transformations, at least not one that is irreducible
and restricts to what we want on transformations generated by linear functions
q and p. Another way to see the problem is that a simple argument shows
that by canonical transformations one can transform any Hamiltonian into a
free-particle Hamiltonian, so all quantum systems would just be free particles
in some choice of variables. For the details of these arguments and a careful
examination of what goes wrong, see chapter 31 of [54]. One aspect of the
problem is that, as a measure on the discrete sets of points qj , pj , points in
phase space for successive values of j are not likely to be close together, so
thinking of the integral as an integral over paths is not justified.
When the Hamiltonian h is quadratic in the momentum p, the pj integrals
will be Gaussian integrals that can be performed exactly. Equivalently, the
kinetic energy part K of the Hamiltonian operator will have a kernel in position
space that can be computed exactly. Using one of these, the pj integrals can be
eliminated, leaving just integrals over the qj that one might hope to interpret as
a path integral over paths not in phase space, but in position space. One finds,
if K = \frac{P^2}{2m}

⟨q_T|e^{−\frac{i}{ℏ}HT}|q_0⟩ =

\lim_{N→∞}\left(\frac{Nm}{i2πℏT}\right)^{N/2}\prod_{j=1}^N\int_{−∞}^{∞}dq_j\, e^{\frac{i}{ℏ}\sum_{j=1}^N\left(\frac{m(q_j−q_{j−1})^2}{2T/N} − V(q_j)\frac{T}{N}\right)}

In the limit N → ∞ the phase of the exponential becomes


S(γ) = \int_0^T dt\left(\frac{1}{2}mq̇^2 − V(q(t))\right)
One can try and properly normalize things so that this limit becomes an integral
\int Dγ\, e^{\frac{i}{ℏ}S[γ]}

where now the paths γ(t) are paths in the position space.
An especially attractive aspect of this expression is that it provides a simple
understanding of how classical behavior emerges in the classical limit as ℏ → 0.
The stationary phase approximation method for oscillatory integrals says that,
for a function f with a single critical point at x = xc (i.e. f 0 (xc ) = 0) and for a
small parameter , one has
Z +∞
1 1
√ dx eif / = p eif (xc )/ (1 + O())
i2π −∞ f 00 (c)

Using the same principle for the infinite-dimensional path integral, with f = S
the action functional on paths, and ε = ℏ, one finds that for ℏ → 0 the path
integral will simplify to something that just depends on the classical trajectory,
since by the principle of least action, this is the critical point of S.
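The finite-dimensional version of this statement is easy to test numerically. The sketch below is ours, not the text's (the function f and the value of ε are arbitrary choices); it compares a brute-force evaluation of the oscillatory integral on a fine grid to the stationary phase prediction.

```python
import numpy as np

# f has a single critical point at x = 0, with f(0) = 0 and f''(0) = 1
def f(x):
    return x**2 / 2 + x**4 / 10

eps = 0.05                                   # small parameter
x = np.linspace(-10.0, 10.0, 400_001)
dx = x[1] - x[0]
integral = np.sum(np.exp(1j * f(x) / eps)) * dx

# stationary phase predicts integral ~ sqrt(i 2 pi eps / f''(0)), so the
# normalized ratio below should approach 1 as eps -> 0
ratio = integral / np.sqrt(2j * np.pi * eps)
assert abs(ratio - 1) < 0.1
```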
Such position-space path integrals do not have the problems of principle
of phase space path integrals coming from the Groenewold-van Hove theorem,
but they still have serious analytical problems since they involve an attempt to
integrate a wildly oscillating phase over an infinite-dimensional space. One does
not naturally get a unitary result for the time evolution operator, and it is not
clear that whatever results one gets will be independent of the details of how
one takes the limit to define the infinite-dimensional integral.
Such path integrals though are closely related to integrals that are known
to make sense, ones that occur in the theory of random walks. There, a well-
defined measure on paths does exist, Wiener measure. In some sense Wiener
measure is what one gets in the case of the path integral for a free particle, but
taking the time variable t to be complex and analytically continuing

t → it

So, one can use Wiener measure techniques to define the path integral, getting
results that need to be analytically continued back to the physical time variable.
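As an illustration of what the analytically continued (Euclidean) integral can compute, one imaginary-time step becomes a transfer matrix after discretization, and its largest eigenvalue recovers the ground state energy. The sketch below is ours, not the text's, for the harmonic oscillator H = p²/2 + q²/2 with m = ω = ℏ = 1; all discretization parameters are arbitrary choices.

```python
import numpy as np

# one imaginary-time step of size eps, symmetric-split kernel:
# T(x, x') = sqrt(1/(2 pi eps)) exp(-(x-x')^2/(2 eps)) exp(-eps (V(x)+V(x'))/2)
# the largest eigenvalue of T (with the grid measure dx) is ~ e^{-eps E_0}
eps = 0.1
x = np.linspace(-6.0, 6.0, 241)
dx = x[1] - x[0]
V = x**2 / 2

gaussian = np.sqrt(1 / (2 * np.pi * eps)) * \
    np.exp(-(x[:, None] - x[None, :])**2 / (2 * eps))
T = gaussian * np.exp(-eps * (V[:, None] + V[None, :]) / 2) * dx

E0 = -np.log(np.max(np.linalg.eigvalsh(T))) / eps
assert abs(E0 - 0.5) < 0.02      # ground state energy omega/2 = 0.5
```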
In summary, the path integral method has the following advantages:
• Study of the classical limit and “semi-classical” effects (quantum effects
at small ℏ) is straightforward.

• Calculations for free particles and for series expansions about the free
particle limit can be done just using Gaussian integrals, and these are rela-
tively easy to evaluate and make sense of, despite the infinite-dimensionality
of the space of paths.
• After analytical continuation, path integrals can be rigorously defined us-
ing Wiener measure techniques, and often evaluated numerically even in
cases where no exact solution is known.
On the other hand, there are disadvantages:
• Some path integrals such as phase space path integrals do not at all have
the properties one might expect, so great care is required in any use of
them.
• How to get unitary results can be quite unclear. The analytic continua-
tion necessary to make path integrals well-defined can make their physical
interpretation obscure.

• Symmetries with their origin in symmetries of phase space that aren’t


just symmetries of configuration space are difficult to see using the con-
figuration space path integral, with the harmonic oscillator providing a
good example. One can see such symmetries using the phase-space path
integral, but this is not reliable.

Path integrals for anticommuting variables can also be defined by analogy


with the bosonic case, using the notion of fermionic integration discussed earlier.

32.3 For further reading


For much more about Lagrangian mechanics and its relation to the Hamiltonian
formalism, see [2]. More along the lines of the discussion here can be found in
most quantum mechanics and quantum field theory textbooks. For the path
integral, Feynman’s original paper [14] or his book [15] are quite readable. A
typical textbook discussion is the one in chapter 8 of Shankar [57]. The book by
Schulman [54] has quite a bit more detail, both about applications and about
the problems of phase-space path integrals. Yet another fairly comprehensive
treatment, including the fermionic case, is the book by Zinn-Justin [79].

Chapter 33

Quantization of Infinite-dimensional Phase Spaces

Up until this point we have been dealing with finite-dimensional phase spaces
and their quantization in terms of Weyl and Clifford algebras. We will now turn
to the study of quantum systems (both bosonic and fermionic) corresponding
to infinite-dimensional phase spaces. The phase spaces of interest are spaces
of solutions of some partial differential equation, so these solutions are classi-
cal fields. The corresponding quantum theory is thus called a “quantum field
theory”. In this chapter we’ll just make some general comments about the new
phenomena that appear when one deals with such infinite-dimensional exam-
ples, without going into any detail at all. Formulating quantum field theories in
a mathematically rigorous way is a major and ongoing project in mathematical
physics research, one far beyond the scope of this text. We will treat this subject
at a physicist’s level of rigor, while trying to give some hint of how one might
proceed with precise mathematical constructions when they exist. We will also
try and indicate where there are issues that require much deeper or even still
unknown ideas, as opposed to those where the needed mathematical techniques
are of a conventional nature.

While finite-dimensional Lie groups and their representations are rather well-
understood mathematical objects, this is not at all true for infinite-dimensional
Lie groups, where mathematical results are rather fragmentary. For the case
of infinite-dimensional phase spaces, bosonic or fermionic, the symplectic or or-
thogonal groups acting on these spaces will be infinite-dimensional. One would
like to find infinite-dimensional analogs of the role these groups and their rep-
resentations play in quantum theory in the finite-dimensional case.

33.1 Inequivalent irreducible representations
In our discussion of the Weyl and Clifford algebras in finite dimensions, an im-
portant part of this story was the Stone-von Neumann theorem and its fermionic
analog, which say that these algebras each have only one interesting irreducible
representation (the Schrödinger representation in the bosonic case, the spinor
representation in the fermionic case). Once we go to infinite dimensions, this is
no longer true: there will be an infinite number of inequivalent irreducible rep-
resentations, with no known complete classification of the possibilities. Before
one can even begin to compute things like expectation values of observables,
one needs to find an appropriate choice of representation, adding a new layer of
difficulty to the problem that goes beyond that of just increasing the number of
degrees of freedom.
To get some idea of how the Stone-von Neumann theorem can fail, one
can consider the Bargmann-Fock quantization of the harmonic oscillator with d
degrees of freedom, and note that it necessarily depends upon making a choice
of an appropriate complex structure J (see chapter 22), with the conventional
choice denoted J0 . Changing from J0 to a different J corresponds to changing
the definition of annihilation and creation operators (but in a manner that
preserves their commutation relations). Physically, this entails a change in the
Hamiltonian and a change in the lowest-energy or vacuum state:

|0iJ0 → |0iJ

But |0iJ is still an element of the same state space H as |0iJ0 , and one gets
the same H by acting with annihilation and creation operators on |0iJ0 or on
|0iJ . The two constructions of the same H correspond to unitarily equivalent
representations of the Heisenberg group H2d+1 .
For a phase space with d = ∞, what can happen is that there can be choices
of J such that acting with annihilation and creation operators on |0iJ0 and
|0iJ gives two different state spaces HJ0 and HJ , providing two inequivalent
representations of the Heisenberg group. For quantum systems with an infinite
number of degrees of freedom, one can thus have the same algebra of operators,
but a different choice of Hamiltonian can give both a different vacuum state and
a different state space H on which the operators act. This same phenomenon
occurs both in the bosonic and fermionic cases, as one goes to infinite-dimensional
Weyl or Clifford algebras.
It turns out though that if one restricts the class of complex structures J to
ones not that different from J0 , then one can recover a version of the Stone-von
Neumann theorem and have much the same behavior as in the finite-dimensional
case. Note that for an invertible linear map g on phase space, g acts on the complex
structure, taking
J0 → Jg = g · J0

One can define subgroups of the infinite-dimensional symplectic or orthogonal groups as follows:

Definition (Restricted symplectic and orthogonal groups). The group of linear
transformations g of an infinite-dimensional symplectic vector space preserving
the symplectic structure and also satisfying the condition

tr(A† A) < ∞

on the operator
A = [Jg , J0 ]
is called the restricted symplectic group and denoted Spres . The group of linear
transformations g of an infinite-dimensional inner-product space preserving the
inner-product and satisfying the same condition as above on [Jg , J0 ] is called the
restricted orthogonal group and denoted Ores .
An operator A satisfying tr(A† A) < ∞ is said to be a Hilbert-Schmidt
operator.
One then has the following replacement for the Stone-von Neumann theorem:
Theorem. Given two complex structures J1 , J2 on a Hilbert space such that
[J1 , J2 ] is Hilbert-Schmidt, acting on the states

|0iJ1 , |0iJ2

by annihilation and creation operators will give unitarily equivalent representations of the Weyl algebra (in the bosonic case), or the Clifford algebra (in the
fermionic case).
The standard reference for the proof of this statement is the original papers
of Shale [55] and Shale-Stinespring [56]. A detailed discussion of the theorem
can be found in [44].
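The role played here by mixing of annihilation and creation operators can be illustrated in the simplest possible case with a small numerical sketch (an editorial illustration, not from the text; the truncation size N and the parameter t are arbitrary choices). A one-mode Bogoliubov transformation is the kind of change of annihilation and creation operators produced by changing the complex structure J, and it preserves the canonical commutation relations:

```python
import numpy as np

# Annihilation operator a, truncated to the first N harmonic oscillator states.
N = 60
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T

# One-mode Bogoliubov transformation b = cosh(t) a + sinh(t) a†: it mixes
# annihilation and creation operators but preserves [b, b†] = 1, since
# cosh² t − sinh² t = 1.
t = 0.3
b = np.cosh(t) * a + np.sinh(t) * adag
bdag = b.conj().T
comm = b @ bdag - bdag @ b

# The truncation is only faithful on the low-lying states, so check there:
print(np.allclose(comm[:20, :20], np.eye(20)))  # True
```

In infinite dimensions, the analogous transformation gives a unitarily equivalent representation only when the mixing is Hilbert-Schmidt, which is the content of the theorem above.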
When g ∈ Spres one can construct an action of this group by automorphisms
on the algebra generated by annihilation and creation operators, by much the
same method as in the finite-dimensional case (see section 23.2). Elements of
the Lie algebra of the group are represented as quadratic combinations of an-
nihilation and creation operators, with the Hilbert-Schmidt condition ensuring
that these quadratic operators have well-defined commutation relations. This
also holds true for Ores and the fermionic annihilation and creation operators.
Add to this section a calculation of an example giving an explicit free field
theory example of a family of J. See for instance the example on page 100 of
Pressley-Segal.

33.2 The anomaly and the Schwinger term


The groups Spres and Ores each have a subgroup of elements that commute
with J0 exactly, not just up to a Hilbert-Schmidt operator. One can construct
a unitary representation of this group (which we’ll call U (∞)) using quadratic
combinations of annihilation and creation operators to get a representation of
the Lie algebra, in much the same manner as in the finite-dimensional case of

section 23.2. The group U (∞) will act trivially on the vacuum state |0iJ0 , and
the finite-dimensional groups of symmetries of quantum field theories (coming
from, for example, the action of the rotation group on physical space) will be
subgroups of this group. For these, the problem to be discussed in this section
will not occur.
Recall though that knowing the action of the symplectic group as automorphisms of the Weyl algebra only determines its representation on the state space
up to a phase factor. In the finite-dimensional case it turned out that this phase
factor could be chosen to be just a sign, giving a representation (the metaplectic
representation) that was a representation up to sign (and a true representation
of a double cover of the symplectic group). In the infinite dimensional case of
Spres , it turns out that the phase factors cannot be reduced to signs, and the
analog of the metaplectic representation is a representation of Spres only up to
a phase. To get a true representation, one needs to extend Spres to a larger
group Sp̃res that is not just a cover, but has an extra dimension.
In terms of Lie algebras one has

sp̃res = spres ⊕ R

with elements non-zero only in the R direction commuting with everything else.
For all commutation relations, there are now possible scalar terms to keep track
of. In the finite dimensional case we saw that such terms would occur when we
used normal-ordered operators (an example is the shift by the scalar ½ in the
Hamiltonian operator for a harmonic oscillator with one degree of freedom), but
without normal-ordering no such terms were needed. In the infinite-dimensional
case normal-ordering is needed to avoid having a representation that acts on
states like |0iJ0 by an infinite phase change, and one can not eliminate the
effect of normal-ordering by making a well-defined finite phase change on the
way the operators act.
Commuting two elements of the Lie sub-algebra u(∞) will not give a scalar
factor, but such factors can occur when one commutes the action of elements of
spres not in u(∞), in which case they are known as “Schwinger terms”. Already
in finite dimensions, we saw that commuting the action of a² with that of (a†)² gave a scalar term relative to the normal-ordered a†a operator (see section 22.3), and in infinite dimensions it is this scalar that cannot be redefined away.
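The finite-dimensional scalar term can be checked directly in a truncated Fock space (an editorial sketch, not from the text; the truncation size is an arbitrary choice): the commutator of the operators a²/2 and (a†)²/2 equals the normal-ordered operator a†a shifted by the scalar ½.

```python
import numpy as np

# Truncated annihilation and creation operators.
N = 80
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T

# [a²/2, (a†)²/2] should equal the normal-ordered a†a plus the scalar 1/2.
lhs = (a @ a / 2) @ (ad @ ad / 2) - (ad @ ad / 2) @ (a @ a / 2)
rhs = ad @ a + np.eye(N) / 2

# Truncation corrupts the top of the Fock space; compare low-lying entries only.
print(np.allclose(lhs[:40, :40], rhs[:40, :40]))  # True
```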
This phenomenon of new scalar terms in the commutation relations of the
operators in a quantum theory coming from a Lie algebra representation is
known as an “anomaly”, and while we have described it for the bosonic case,
much the same thing happens in the fermionic case for the Lie algebra ores .
This is normally considered to be something that happens due to quantiza-
tion, with the “anomaly” the extra scalar terms in commutation relations not
there in the corresponding classical Poisson bracket relations. From another
point of view this is a phenomenon coming not from quantization, but from
infinite-dimensionality, already visible in Poisson brackets when one makes a
choice of complex structure on the phase space. It is the occurrence for infinite-
dimensional phase spaces of certain inherently different ways of choosing the

complex structure that is relevant. We will see in later chapters that in quan-
tum field theories one often wants to choose a complex structure on an infinite
dimensional space of solutions of a classical field equation by taking positive
energy complexified solutions to be eigenvectors of the complex structure with
eigenvalue +i, those with negative energy to have eigenvalue −i, and it is this
choice of complex structure that introduces the anomaly phenomenon.

33.3 Higher order operators and renormalization
We have generally restricted ourselves to considering only products of the fun-
damental position and momentum operators of degree less than or equal to two,
since it is these that have an interpretation as the operators of a Lie algebra
representation. By the Groenewold-van Hove theorem, higher-order products of
position and momentum variables have no unique quantization (the operator-
ordering problem). In the finite dimensional case one can of course consider
higher-order products of operators, for instance systems with Hamiltonian op-
erators of higher order than quadratic. Unlike the quadratic case, typically
no exact solution for eigenvectors and eigenvalues will exist, but various ap-
proximation methods may be available. In particular, for Hamiltonians that are
quadratic plus a term with a small parameter, perturbation theory methods can
be used to compute a power-series approximation in the small parameter. This
is an important topic in physics, covered in detail in the standard textbooks.
The standard approach to quantization of infinite-dimensional systems is
to begin with “regularization”, somehow modifying the system to only have
a finite-dimensional phase space. One quantizes this theory, taking advantage
of the uniqueness of its representation, then tries to take a limit that recovers
the infinite-dimensional system. Such a limit will generally be quite singular,
leading to an infinite result, and the process of manipulating these potential
infinities is called “renormalization”. Techniques for taking limits of this kind
in a manner that leads to a consistent and physically sensible result typically
take up a large part of standard quantum field theory textbooks. For many
theories, no appropriate such techniques are known, and conjecturally none are
possible. For others there is good evidence that such a limit can be successfully
taken, but the details of how to do this remain unknown (with, for instance, a $1 million Millennium Prize offered for rigorously showing this is possible in the case of Yang-Mills gauge theory).
In succeeding chapters we will go on to study a range of quantum field
theories, but due to the great complexity of the issues involved, will not address
what happens for non-quadratic Hamiltonians. Physically this means that we
will just be able to study theories of free particles, although with methods that
generalize to particles moving in non-trivial background classical fields. For
a treatment of the subject that includes interacting quantized multi-particle
systems, one of the conventional textbooks will be needed to supplement what

is here.

33.4 For further reading


Berezin’s The Method of Second Quantization [5] develops in detail the infinite-
dimensional version of the Bargmann-Fock construction, both in the bosonic
and fermionic cases. Infinite-dimensional versions of the metaplectic and spinor
representations are given there in terms of operators defined by integral kernels.
For a discussion of the infinite-dimensional Weyl and Clifford algebras, together
with a realization of their automorphism groups Spres and Ores (and the corre-
sponding Lie algebras) in terms of annihilation and creation operators acting on
the infinite-dimensional metaplectic and spinor representations, see [44]. The
book [47] contains an extensive discussion of the groups Spres and Ores and
the infinite-dimensional version of their metaplectic and spinor representations.
It emphasizes the origin of novel infinite-dimensional phenomena here in the
nature of the complex structures of interest in infinite dimensional examples.

Chapter 34

Multi-particle Systems and Non-relativistic Quantum Fields

The quantum mechanical systems we have studied so far describe a finite num-
ber of degrees of freedom, which may be of a bosonic or fermionic nature. In
particular we have seen how to describe a quantized free particle moving in
three-dimensional space. By use of the notion of tensor product, we can then
describe any particular fixed number of such particles. We would, however, like
a formalism capable of conveniently describing an arbitrary number of parti-
cles. From very early on in the history of quantum mechanics, it was clear
that at least certain kinds of particles, photons, were most naturally described
not one by one, but by thinking of them as quantized excitations of a classical
system with an infinite number of degrees of freedom: the electromagnetic field.
In our modern understanding of fundamental physics not just photons, but all
elementary particles are best described in this way.
Conventional textbooks on quantum field theory often begin with relativis-
tic systems, but we’ll start instead with the non-relativistic case. We’ll study a
simple quantum field theory that extends the conventional single-particle quan-
tum systems we have dealt with so far to deal with multi-particle systems. This
version of quantum field theory is what gets used in condensed matter physics,
and is in many ways simpler than the relativistic case, which we’ll take up in a
later chapter.
Quantum field theory is a large and complicated subject, suitable for a full-
year course at an advanced level. We’ll be giving only a very basic introduc-
tion, mostly just considering free fields, which correspond to systems of non-
interacting particles. Most of the complexity of the subject only appears when
one tries to construct quantum field theories of interacting particles. A remark-
able aspect of the theory of free quantum fields is that in many ways it is little
more than something we have already discussed in great detail, the quantum

harmonic oscillator problem. However, the classical harmonic oscillator phase
space that is getting quantized in this case is an infinite dimensional one, the
space of solutions to the free particle Schrödinger equation. To describe multiple
non-interacting fermions, we just need to use fermionic oscillators.
For simplicity we’ll set ~ = 1 and start with the case of a single spatial
dimension. We’ll also begin using x to denote the spatial variable, instead of the q conventional for a coordinate variable on a finite-dimensional phase space.

34.1 Multi-particle quantum systems as quanta of a harmonic oscillator
It turns out that quantum systems of identical particles are best understood by
thinking of such particles as quanta of a harmonic oscillator system. We will
begin with the bosonic case, then later consider the fermionic case, which uses
the fermionic oscillator system.

34.1.1 Bosons and the quantum harmonic oscillator


A fundamental postulate of quantum mechanics is that given a space of states
H1 describing a bosonic single particle, a collection of N particles is described
by
(H1 ⊗ · · · ⊗ H1 )S   (N factors)

where the superscript S means we take elements of the tensor product invariant
under the action of the group SN by permutation of the factors. We want to
consider state spaces containing an arbitrary number of particles, so we define
Definition (Bosonic Fock space, the symmetric algebra). Given a complex vec-
tor space V , the symmetric Fock space is defined as

F S (V ) = C ⊕ V ⊕ (V ⊗ V )S ⊕ (V ⊗ V ⊗ V )S ⊕ · · ·

This is known to mathematicians as the “symmetric algebra” S ∗ (V ), with

S N (V ) = (V ⊗ · · · ⊗ V )S   (N factors)

(recall chapter 9)
A quantum harmonic oscillator with d degrees of freedom has a state space
consisting of linear combinations of states with N “quanta” (i.e. states one gets
by applying N creation operators to the lowest energy state), for N = 0, 1, 2, . . ..
We have seen in our discussion of the quantization of the harmonic oscillator that
in the Bargmann-Fock representation, the state space is just C[z1 , z2 , . . . , zd ],
the space of polynomials in d complex variables.

The part of the state space with N quanta has dimension given by the binomial coefficient

(N + d − 1 choose N) = (N + d − 1)!/(N !(d − 1)!)

which grows with N. This is just the number of d-variable monomials of degree N. The quanta of a harmonic oscillator are
indistinguishable, which corresponds to the fact that the space of states with
N quanta can be identified with the symmetric part of the tensor product of N
copies of Cd .
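As a quick check of this dimension count (an editorial sketch, with d = 3 and N = 4 as arbitrary choices):

```python
from math import comb
from itertools import combinations_with_replacement

# Dimension of the N-quanta subspace for d degrees of freedom: the number of
# degree-N monomials in z_1, ..., z_d, i.e. the binomial coefficient C(N+d-1, N).
d, N = 3, 4
print(comb(N + d - 1, N))  # 15

# Cross-check by listing the monomials directly, as multisets of N variables:
monomials = list(combinations_with_replacement(range(d), N))
print(len(monomials))  # 15
```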
More precisely, what one has is

Theorem. Given a vector space V , there is an isomorphism of algebras between the symmetric algebra S ∗ (V ∗ ) (where V ∗ is the dual vector space to V ) and the
algebra C[V ] of polynomial functions on V .

We won’t try and give a detailed proof of this here, but one can exhibit the
isomorphism explicitly on generators. If zj ∈ V ∗ are the coordinate functions
with respect to a basis ej of V (i.e. zj (ek ) = δjk ), they give a basis of V ∗ which
is also a basis of the linear polynomial functions on V . For higher degrees, one
makes the identification

zj ⊗ · · · ⊗ zj ∈ S N (V ∗ ) ↔ zj^N   (N factors)

between N -fold symmetric tensor products and monomials in the polynomial algebra.
Besides describing states of multiple identical quantum systems as polyno-
mials or as symmetric parts of tensor products, a third description is useful.
This is the so-called “occupation number representation”, where states are la-
beled by d non-negative integers nj = 0, 1, 2, · · · , with an identification with the
polynomial description as follows:
|n1 , n2 , . . . , nj , . . . , nd ⟩ ↔ z1^{n1} z2^{n2} · · · zj^{nj} · · · zd^{nd}
So, starting with a single particle state space H1 , we have three equivalent
descriptions of the Fock space describing multi-particle bosonic states

• As a symmetric algebra S ∗ (H1∗ ) defined in terms of tensor products of the H1∗ , the dual vector space to H1 .

• As polynomial functions on H1 .

• As linear combinations of states labeled by occupation numbers nj = 0, 1, 2, . . .

For each of these descriptions, one can define d annihilation or creation


operators aj , a†j , satisfying the canonical commutation relations, and these will
generate an algebra of operators on the Fock space.

34.1.2 Fermions and the fermionic oscillator
For the case of fermionic particles, there’s an analogous Fock space construction
using tensor products:

Definition (Fermionic Fock space, exterior algebra). Given a complex vector space V , the fermionic Fock space is

F A (V ) = C ⊕ V ⊕ (V ⊗ V )A ⊕ (V ⊗ V ⊗ V )A ⊕ · · ·

where the superscript A means the subspace of the tensor product that just
changes sign under interchange of two factors. This is known to mathemati-
cians as the “exterior algebra” Λ∗ (V ), with

ΛN (V ) = (V ⊗ · · · ⊗ V )A   (N factors)

As in the bosonic case, one can interpret Λ∗ (V ∗ ) as polynomials on V , but now polynomials in anticommuting variables.
For the case of fermionic particles with single particle state space H1 , one
can again define the multi-particle state space in three equivalent ways:

• Using tensor products and antisymmetry, as the exterior algebra Λ∗ (H1∗ ).

• As polynomial functions in anticommuting coordinates on H1 . One can also think of such polynomials as antisymmetric multilinear functions on the product of copies of H1 .

• In terms of occupation numbers nj , where now the only possibilities are nj = 0, 1.

For each of these descriptions, one can define d annihilation or creation operators
aj , a†j satisfying the canonical anticommutation relations, and these will generate
an algebra of operators on the Fock space.
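For a single fermionic degree of freedom the whole structure can be made completely explicit with 2 × 2 matrices (an editorial sketch, not from the text):

```python
import numpy as np

# One fermionic mode: occupation number 0 or 1, so the state space is
# two-dimensional, with basis |0> (first column) and |1> (second column).
a = np.array([[0., 1.],
              [0., 0.]])   # annihilation: a|1> = |0>, a|0> = 0
ad = a.T                   # creation: a†|0> = |1>, a†|1> = 0

# Canonical anticommutation relation {a, a†} = 1:
print(np.allclose(a @ ad + ad @ a, np.eye(2)))  # True
# (a†)² = 0: no occupation number can exceed 1 (the Pauli principle).
print(np.allclose(ad @ ad, np.zeros((2, 2))))   # True
```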

34.2 Solutions to the free particle Schrödinger equation
To describe multi-particle quantum systems we will take as our single-particle
state space H1 the space of solutions of the free-particle Schrödinger equation,
as already studied in chapter 10. As we saw in that chapter, one can use a “finite
box” normalization, which gives a discrete, countable basis for this space and
then try and take the limit of infinite box size. To fully exploit the symmetries of
phase space though, we will need the “continuum normalization”, which requires
considering not just functions but distributions.

34.2.1 Box normalization
Recall that for a free particle in one dimension the state space H consists of
complex-valued functions on R, with observables the self-adjoint operators for
momentum
P = −i d/dx

and energy (the Hamiltonian)

H = P²/2m = −(1/2m) d²/dx²
Eigenfunctions for both P and H are the functions of the form

ψp (x) ∝ e^{ipx}

for p ∈ R, with eigenvalues p for P and p²/2m for H.
Note that these eigenfunctions are not normalizable, and thus not in the
conventional choice of state space as L2 (R). One way to deal with this issue is
to do what physicists sometimes refer to as “putting the system in a box”, by
imposing periodic boundary conditions

ψ(x + L) = ψ(x)

for some number L, effectively restricting the relevant values of x to be considered to those on an interval of length L. For our eigenfunctions, this condition
is
eip(x+L) = eipx
so we must have
eipL = 1
which implies that

p = (2π/L) l
for l an integer. Then p will take on a countable number of discrete values
corresponding to the l ∈ Z, and
|l⟩ = ψl (x) = (1/√L) e^{ipx} = (1/√L) e^{i(2πl/L)x}
will be orthonormal eigenfunctions satisfying

⟨l′|l⟩ = δll′

This “box” normalization is one form of what physicists call an “infrared cutoff”,
a way of removing degrees of freedom that correspond to arbitrarily large sizes,
in order to make a problem well-defined. To get a well-defined problem one
starts with a fixed value of L, then one studies the limit L → ∞.
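The orthonormality relation above is easy to verify numerically (an editorial sketch; L = 2 and the grid size are arbitrary choices). A uniform grid over one period is exact for these exponentials:

```python
import numpy as np

# Box-normalized eigenfunctions ψ_l(x) = (1/√L) exp(i 2πl x / L), integrated
# over one period [0, L) on a uniform grid.
L = 2.0
n = 2000
x = np.linspace(0, L, n, endpoint=False)
dx = L / n

def psi(l):
    return np.exp(2j * np.pi * l * x / L) / np.sqrt(L)

inner_same = np.sum(np.conj(psi(3)) * psi(3)) * dx   # <3|3>, should be 1
inner_diff = np.sum(np.conj(psi(5)) * psi(3)) * dx   # <5|3>, should be 0
print(abs(inner_same), abs(inner_diff))  # approximately 1 and 0
```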

The number of degrees of freedom is now countable, but still infinite. In order
to get a completely well-defined problem, one typically needs to first make the
number of degrees of freedom finite. This can be done with an additional cutoff,
an “ultraviolet cutoff”, which means restricting attention to |p| ≤ Λ for some
finite Λ, or equivalently |l| < ΛL
2π . This makes the state space finite dimensional
and one then studies the Λ → ∞ limit.
For finite L and Λ our single-particle state space H1 is finite dimensional,
with orthonormal basis elements ψl (x). An arbitrary solution to the Schrödinger
equation is then given by
ψ(x, t) = Σ_{l=−ΛL/2π}^{+ΛL/2π} αl e^{i(2πl/L)x} e^{−i(4π²l²/2mL²)t}

for arbitrary complex coefficients αl , and can be completely characterized by its initial value at t = 0

ψ(x, 0) = Σ_{l=−ΛL/2π}^{+ΛL/2π} αl e^{i(2πl/L)x}

Vectors in H1 have coordinates αl ∈ C with respect to our chosen basis, and these coordinates are in the dual space H1∗ .
Multi-particle states are now described by the Fock spaces F S (H1∗ ) or F A (H1∗ ),
depending on whether the particles are bosons or fermions. In the occupation
number representation of the Fock space, orthonormal basis elements are

|· · · , npj−1 , npj , npj+1 , · · ·⟩

where the subscript j indexes the possible values of the momentum p (which are
discretized in units of 2π/L, and lie in the interval [−Λ, Λ]). The occupation number
npj is the number of particles in the state with momentum pj . In the bosonic
case it takes values 0, 1, 2, · · · , ∞, in the fermionic case it takes values 0 or 1.
The state with all occupation numbers equal to zero is denoted

|· · · , 0, 0, 0, · · ·⟩ = |0⟩

and called the “vacuum” state.


For each pj we can define annihilation and creation operators apj and a†pj .
These satisfy the commutation relations

[apj , a†pk ] = δjk

and act on states in the occupation number representation as



apj |· · · , npj−1 , npj , npj+1 , · · ·⟩ = √npj |· · · , npj−1 , npj − 1, npj+1 , · · ·⟩

a†pj |· · · , npj−1 , npj , npj+1 , · · ·⟩ = √(npj + 1) |· · · , npj−1 , npj + 1, npj+1 , · · ·⟩
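These rules can be sketched directly in code (an editorial illustration, not from the text), representing a basis state by its occupation numbers:

```python
import math

# A basis state is a map from a momentum label j to its occupation number n_{p_j}.

def annihilate(j, state):
    """Apply a_{p_j}; returns (coefficient, new basis state or None)."""
    n = state.get(j, 0)
    if n == 0:
        return 0.0, None                  # a_{p_j} annihilates the state
    new = dict(state)
    new[j] = n - 1
    return math.sqrt(n), new

def create(j, state):
    """Apply a†_{p_j}; returns (coefficient, new basis state)."""
    n = state.get(j, 0)
    new = dict(state)
    new[j] = n + 1
    return math.sqrt(n + 1), new

# Check [a_{p_j}, a†_{p_j}] = 1 on a sample state with n_{p_0} = 2, n_{p_1} = 1:
state = {0: 2, 1: 1}
c1, s1 = create(0, state); c2, s2 = annihilate(0, s1)   # a a† |state>
c3, s3 = annihilate(0, state); c4, s4 = create(0, s3)   # a† a |state>
print(round(c1 * c2 - c3 * c4, 9), s2 == s4)  # 1.0 True
```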
Observables one can build out of these operators include

• The number operator

N̂ = Σk a†pk apk

which will have as eigenvalues the total number of particles

N̂ |· · · , npj−1 , npj , npj+1 , · · ·⟩ = (Σk npk ) |· · · , npj−1 , npj , npj+1 , · · ·⟩

• The momentum operator

P̂ = Σk pk a†pk apk

with eigenvalues the total momentum of the multi-particle system

P̂ |· · · , npj−1 , npj , npj+1 , · · ·⟩ = (Σk npk pk ) |· · · , npj−1 , npj , npj+1 , · · ·⟩

• The Hamiltonian

Ĥ = Σk (pk²/2m) a†pk apk

which has eigenvalues the total energy

Ĥ |· · · , npj−1 , npj , npj+1 , · · ·⟩ = (Σk npk pk²/2m) |· · · , npj−1 , npj , npj+1 , · · ·⟩

With ultraviolet and infrared cutoffs in place, the possible values of pj are
finite in number, H1 is finite dimensional and this is nothing but the standard
quantized harmonic oscillator (with a Hamiltonian that has different frequencies

ω(pj ) = pj²/2m

for different values of j). In the limit as one or both cutoffs are removed,
H1 becomes infinite dimensional, the Stone-von Neumann theorem no longer
applies, and we are in the situation discussed in chapter 33. State spaces with
different choices of vacuum state |0i can be unitarily inequivalent, with not just
the dynamics of states in the state space dependent on the Hamiltonian, but the
state space itself depending on the Hamiltonian (through the characterization
of |0i as lowest energy state). Even for the free particle, we have here defined
the Hamiltonian as the normal-ordered version, which for finite dimensional H1
differs from the non-normal-ordered one just by a constant, but as cut-offs are
removed this constant becomes infinite, requiring careful treatment of the limit.

34.2.2 Continuum normalization
A significant problem introduced by using cutoffs such as the box normalization
is that these ruin some of the space-time symmetries of the system. The one-
particle space with an infrared cutoff is a space of functions on a discrete set of
points, and this set of points will not have the same symmetries as the usual
continuous momentum space (for instance in three dimensions it will not carry
an action of the rotation group SO(3)). In our study of quantum field theory
we would like to exploit the action of space-time symmetry groups on the state
space of the theory, so need a formalism that preserves such symmetries.
In our earlier discussion of the free particle, we saw that physicists often
work with a “continuum normalization” such that
|p⟩ = ψp (x) = (1/√2π) e^{ipx} ,   ⟨p′|p⟩ = δ(p − p′)

where formulas such as the second one need to be interpreted in terms of dis-
tributions. In quantum field theory we want to be able to think of each value
of p as corresponding to a classical degree of freedom that gets quantized, and
this “continuum” normalization will then correspond to an uncountable number
of degrees of freedom, requiring great care when working in such a formalism.
This will however allow us to readily see the action of space-time symmetries
on the states of the quantum field theory and to exploit the duality between
position and momentum space embodied in the Fourier transform.
In the continuum normalization, an arbitrary solution to the free-particle
Schrödinger equation is given by
ψ(x, t) = (1/√2π) ∫_{−∞}^{∞} α(p) e^{ipx} e^{−i(p²/2m)t} dp

for some complex-valued function α(p) on momentum space. Such solutions are in one-to-one correspondence with initial data

ψ(x, 0) = (1/√2π) ∫_{−∞}^{∞} α(p) e^{ipx} dp
This is exactly the Fourier inversion formula, expressing a function ψ(x, 0) in
terms of its Fourier transform ψ̂(x, 0)(p) = α(p). Note that we want to consider not just square integrable functions α(p), but non-integrable functions
like α(p) = 1 (which corresponds to ψ(x, 0) = δ(x)), and distributions such as
α(p) = δ(p), which corresponds to ψ(x, 0) = 1.
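The momentum-space description of the time evolution can be sketched numerically with the discrete Fourier transform (an editorial illustration, not from the text; the grid, wavepacket, mass and time are arbitrary choices):

```python
import numpy as np

# Free-particle evolution (ħ = 1): multiply each Fourier mode α(p) by the
# phase exp(-i p² t / 2m), then transform back to position space.
n, Lbox, m, t = 1024, 40.0, 1.0, 2.0
x = np.linspace(-Lbox / 2, Lbox / 2, n, endpoint=False)
dx = Lbox / n
p = 2 * np.pi * np.fft.fftfreq(n, d=dx)

psi0 = np.exp(-x**2) * np.exp(2j * x)   # Gaussian packet with mean momentum ~2
alpha = np.fft.fft(psi0)                # the coefficients α(p)
psi_t = np.fft.ifft(alpha * np.exp(-1j * p**2 * t / (2 * m)))

# The evolution is unitary, so the norm is conserved:
norm0 = np.sum(np.abs(psi0)**2) * dx
norm_t = np.sum(np.abs(psi_t)**2) * dx
print(np.isclose(norm0, norm_t))  # True
```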
We will generally work with this continuum normalization, taking as our
single-particle space H1 the space of complex-valued functions ψ(x, 0) on R. One can think of the |p⟩ as an orthonormal basis of H1 , with α(p) the coordinate function for the |p⟩ basis vector. α(p) is then an element of H1∗ , the linear function on H1 given by taking the coefficient of the |p⟩ basis vector.
Quantization should take α(p) ∈ H1∗ to corresponding annihilation and cre-
ation operators a(p), a† (p). Such operators though need to be thought of as

operator-valued distributions: what is really a well-defined operator is not a(p),
but

∫_{−∞}^{+∞} f(p) a(p) dp

for sufficiently well-behaved functions f (p). From the point of view of quanti-
zation of H1∗ , it is vectors in H1∗ that one can write in the form
α(f ) = ∫_{−∞}^{+∞} f(p) α(p) dp

that have well-defined quantizations as operators.


Typically though we will follow the standard physicist’s practice of writing formulas which are straightforward generalizations of the finite-dimensional
case, with sums becoming integrals. Some examples are
N̂ = ∫_{−∞}^{+∞} a†(p) a(p) dp

for the number operator,


P̂ = ∫_{−∞}^{+∞} p a†(p) a(p) dp

for the momentum operator, and


Ĥ = ∫_{−∞}^{+∞} (p²/2m) a†(p) a(p) dp

for the Hamiltonian operator.


To properly manipulate these formulas one must be aware that they should
be interpreted as formulas for distributions, and that in particular products of
distributions need to be treated with care. Expressions like a(p)† a(p) will only
give sensible operators when integrated against well-behaved functions. What
we really have is for instance an operator N̂(f ) which we will write formally as

N̂(f ) = ∫_{−∞}^{+∞} f(p) a†(p) a(p) dp

but which is only defined for some particularly well-behaved f (and in particular
is not defined for the non-integrable choice of f = 1).
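As a finite-dimensional illustration of why $a^\dagger a$ is a sensible number operator (an aside of mine, not from the text; the truncation size is arbitrary), one can truncate a single oscillator mode to its first n occupation-number states and represent a, a† as matrices:

```python
import numpy as np

def truncated_mode(n):
    """Annihilation operator on the first n occupation-number states."""
    return np.diag(np.sqrt(np.arange(1, n)), 1)

a = truncated_mode(8)
N = a.conj().T @ a                       # number operator a†a

# a†a is diagonal with eigenvalues 0, 1, ..., n-1
assert np.allclose(N, np.diag(np.arange(8)))

# [a, a†] = 1 holds except in the last row/column, an artifact of the cutoff
comm = a @ a.conj().T - a.conj().T @ a
assert np.allclose(comm[:-1, :-1], np.eye(7))
```

In the field-theory setting one such mode appears for each momentum p, and the smeared operator $\widehat{N}(f)$ is a weighted combination of the per-mode number operators.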

34.3 Quantum field operators


The formalism developed so far works well to describe states of multiple free
particles, but does so purely in terms of states with well-defined momenta, with
no information at all about their position. To get operators that know about
position, one can Fourier transform the annihilation and creation operators for
momentum eigenstates as follows (we’ll begin with the box normalization):

Definition (Quantum field operator). The quantum field operators for the free particle system are
$$\widehat{\psi}(x)=\sum_p\psi_p(x)a_p=\sum_p\frac{1}{\sqrt{L}}e^{ipx}a_p$$
and its adjoint
$$\widehat{\psi}^\dagger(x)=\sum_p\overline{\psi_p(x)}a^\dagger_p=\sum_p\frac{1}{\sqrt{L}}e^{-ipx}a^\dagger_p$$
(where p takes the discrete values $p_j=\frac{2\pi j}{L}$).

Note that these are not self-adjoint operators, and thus not themselves observables. To get some idea of their behavior, one can calculate what they do to the vacuum state. One has
$$\widehat{\psi}(x)|0\rangle=0$$
$$\widehat{\psi}^\dagger(x)|0\rangle=\frac{1}{\sqrt{L}}\sum_p e^{-ipx}|\cdots,0,n_p=1,0,\cdots\rangle$$
While this sum makes sense as long as it is finite, when cutoffs are removed it is clear that $\widehat{\psi}^\dagger(x)$ will have a rather singular limit as an infinite sum of operators. It can be in some vague sense thought of as the (ill-defined) operator that creates a particle localized precisely at x.
The field operators allow one to recover conventional wavefunctions, for single and multiple-particle states. One sees by orthonormality of the occupation number basis states that
$$\langle\cdots,0,n_p=1,0,\cdots|\widehat{\psi}^\dagger(x)|0\rangle=\frac{1}{\sqrt{L}}e^{-ipx}=\overline{\psi_p(x)}$$
the complex conjugate of the wavefunction of the single-particle state of momentum p. An arbitrary one-particle state $|\Psi_1\rangle$ with wavefunction ψ(x) is a linear combination of such states, and taking complex conjugates one finds
$$\langle 0|\widehat{\psi}(x)|\Psi_1\rangle=\psi(x)$$

Similarly, for a two-particle state of identical particles with momenta $p_{j_1}$ and $p_{j_2}$ one finds
$$\langle 0|\widehat{\psi}(x_1)\widehat{\psi}(x_2)|\cdots,0,n_{p_{j_1}}=1,0,\cdots,0,n_{p_{j_2}}=1,0,\cdots\rangle=\psi_{p_{j_1},p_{j_2}}(x_1,x_2)$$
where $\psi_{p_{j_1},p_{j_2}}(x_1,x_2)$ is the wavefunction (symmetric under interchange of $x_1$ and $x_2$ for bosons) for this two-particle state. For a general two-particle state $|\Psi_2\rangle$ with wavefunction $\psi(x_1,x_2)$ one has
$$\langle 0|\widehat{\psi}(x_1)\widehat{\psi}(x_2)|\Psi_2\rangle=\psi(x_1,x_2)$$
and one can easily generalize this to see how field operators are related to
wavefunctions for an arbitrary number of particles.
Cutoffs ruin translational invariance and calculations with them quickly be-
come difficult. We’ll now adopt the physicist’s convention of working directly
in the continuous case with no cutoff, at the price of having formulas that only
make sense as distributions. One needs to be aware that the correct interpreta-
tion of such formulas may require going back to the cutoff version.
In the continuum normalization we take as normalized eigenfunctions for the free particle
$$|p\rangle=\psi_p(x)=\frac{1}{\sqrt{2\pi}}e^{ipx}$$
with
$$\langle p'|p\rangle=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{i(p-p')x}dx=\delta(p-p')$$
The annihilation and creation operators satisfy
$$[a(p),a^\dagger(p')]=\delta(p-p')$$

The field operators are then
$$\widehat{\psi}(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{ipx}a(p)\,dp$$
$$\widehat{\psi}^\dagger(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-ipx}a^\dagger(p)\,dp$$

and one can compute the commutators
$$[\widehat{\psi}(x),\widehat{\psi}(x')]=[\widehat{\psi}^\dagger(x),\widehat{\psi}^\dagger(x')]=0$$
$$\begin{aligned}
[\widehat{\psi}(x),\widehat{\psi}^\dagger(x')]&=\frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{ipx}e^{-ip'x'}[a(p),a^\dagger(p')]\,dp\,dp'\\
&=\frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{ipx}e^{-ip'x'}\delta(p-p')\,dp\,dp'\\
&=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{ip(x-x')}dp\\
&=\delta(x-x')
\end{aligned}$$

One should really think of such continuum-normalized field operators as operator-valued distributions, with the distributions defined on some appropriately chosen function space, for instance the Schwartz space of functions such that the function and its derivatives fall off faster than any power at ±∞. Given such functions f, g, one gets operators
$$\widehat{\psi}(f)=\int_{-\infty}^{\infty}f(x)\widehat{\psi}(x)dx,\qquad \widehat{\psi}^\dagger(g)=\int_{-\infty}^{\infty}g(x)\widehat{\psi}^\dagger(x)dx$$
and the commutator relation above means
$$[\widehat{\psi}(f),\widehat{\psi}^\dagger(g)]=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(x)g(x')\delta(x-x')dx\,dx'=\int_{-\infty}^{\infty}f(x)g(x)dx$$

A major source of difficulties when manipulating quantum fields is that powers of distributions are not necessarily defined, so one has trouble making sense of rather innocuous looking expressions like
$$(\widehat{\psi}(x))^4$$

There are observables that one can define simply using field operators. These include:
• The number operator $\widehat{N}$. One can define a number density operator
$$\widehat{n}(x)=\widehat{\psi}^\dagger(x)\widehat{\psi}(x)$$
and integrate it to get an operator whose eigenvalues are the total number of particles in a state
$$\begin{aligned}
\widehat{N}&=\int_{-\infty}^{\infty}\widehat{n}(x)dx\\
&=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-ip'x}a^\dagger(p')\frac{1}{\sqrt{2\pi}}e^{ipx}a(p)\,dp\,dp'\,dx\\
&=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\delta(p-p')a^\dagger(p')a(p)\,dp\,dp'\\
&=\int_{-\infty}^{\infty}a^\dagger(p)a(p)\,dp
\end{aligned}$$

• The total momentum operator $\widehat{P}$. This can be defined in terms of field operators as
$$\begin{aligned}
\widehat{P}&=\int_{-\infty}^{\infty}\widehat{\psi}^\dagger(x)\left(-i\frac{d}{dx}\right)\widehat{\psi}(x)dx\\
&=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-ip'x}a^\dagger(p')(-i)(ip)\frac{1}{\sqrt{2\pi}}e^{ipx}a(p)\,dp\,dp'\,dx\\
&=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\delta(p-p')\,p\,a^\dagger(p')a(p)\,dp\,dp'\\
&=\int_{-\infty}^{\infty}p\,a^\dagger(p)a(p)\,dp
\end{aligned}$$

• The Hamiltonian $\widehat{H}$. This can be defined much like the momentum, just changing
$$-i\frac{d}{dx}\to-\frac{1}{2m}\frac{d^2}{dx^2}$$
to find
$$\widehat{H}=\int_{-\infty}^{\infty}\widehat{\psi}^\dagger(x)\left(-\frac{1}{2m}\frac{d^2}{dx^2}\right)\widehat{\psi}(x)dx=\int_{-\infty}^{\infty}\frac{p^2}{2m}a^\dagger(p)a(p)\,dp$$

All of these formulas really need to be interpreted as operator-valued distributions: what really makes sense is not $\widehat{N}$, but $\widehat{N}(f)$ for some class of functions f, which can formally be written as
$$\widehat{N}(f)=\int_{-\infty}^{\infty}f(x)\widehat{\psi}^\dagger(x)\widehat{\psi}(x)dx$$

We will see that one can more generally use quadratic expressions in field operators to define an observable $\widehat{O}$ corresponding to a one-particle quantum mechanical observable O by
$$\widehat{O}=\int_{-\infty}^{\infty}\widehat{\psi}^\dagger(x)\,O\,\widehat{\psi}(x)dx$$
In particular, to describe an arbitrary number of particles moving in an external potential V(x), one takes the Hamiltonian to be
$$\widehat{H}=\int_{-\infty}^{\infty}\widehat{\psi}^\dagger(x)\left(-\frac{1}{2m}\frac{d^2}{dx^2}+V(x)\right)\widehat{\psi}(x)dx$$
If one can solve the one-particle Schrödinger equation for a complete set of orthonormal wavefunctions $\psi_n(x)$, one can describe this quantum system using the same techniques as for the free particle. A creation-annihilation operator pair $a_n, a^\dagger_n$ is associated to each eigenfunction, and quantum fields are defined by
$$\widehat{\psi}(x)=\sum_n\psi_n(x)a_n,\qquad \widehat{\psi}^\dagger(x)=\sum_n\overline{\psi_n(x)}a^\dagger_n$$

For Hamiltonians just quadratic in the quantum fields, quantum field theories are quite tractable objects. They are in some sense just free quantum oscillator systems, with all of their symmetry structure intact, but with the number of degrees of freedom taken to infinity. Higher order terms though make quantum field theory a difficult and complicated subject, one that requires a year-long graduate level course to master basic computational techniques, and one that to this day resists mathematicians' attempts to prove that many examples of such theories have even the basic expected properties. In the theory of charged particles interacting with an electromagnetic field, when the electromagnetic field is treated classically one still has a Hamiltonian quadratic in the field operators for the particles. But if the electromagnetic field is treated as a quantum system, it acquires its own field operators, and the Hamiltonian is no longer quadratic in the fields, a vastly more complicated situation described as an "interacting quantum field theory".
Even if one restricts attention to the quantum fields describing one kind of
particles, there may be interactions between particles that add terms to the

Hamiltonian, and these will be of higher order than quadratic. For instance, if there is an interaction between such particles described by an interaction energy v(y − x), this can be described by adding the following quartic term to the Hamiltonian
$$\frac{1}{2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\widehat{\psi}^\dagger(x)\widehat{\psi}^\dagger(y)v(y-x)\widehat{\psi}(y)\widehat{\psi}(x)\,dx\,dy$$
The study of "many-body" quantum systems with interactions of this kind is a major topic in condensed matter physics.
One can easily extend the above to three spatial dimensions, getting field operators $\widehat{\psi}(\mathbf{x})$ and $\widehat{\psi}^\dagger(\mathbf{x})$, defined by integrals over three-dimensional momentum space. For instance, in the continuum normalization
$$\widehat{\psi}(\mathbf{x})=\left(\frac{1}{2\pi}\right)^{\frac{3}{2}}\int_{\mathbf{R}^3}e^{i\mathbf{p}\cdot\mathbf{x}}a(\mathbf{p})\,d^3p$$
and the Hamiltonian for the free field is
$$\widehat{H}=\int_{\mathbf{R}^3}\widehat{\psi}^\dagger(\mathbf{x})\left(-\frac{1}{2m}\nabla^2\right)\widehat{\psi}(\mathbf{x})\,d^3x$$

More remarkably, one can also very easily write down theories of quantum systems with an arbitrary number of fermionic particles, just by changing commutators to anticommutators for the creation-annihilation operators and using fermionic instead of bosonic oscillators. One gets fermionic fields that satisfy the anticommutation relation
$$[\widehat{\psi}(x),\widehat{\psi}^\dagger(x')]_+=\delta(x-x')$$
and states that in the occupation number representation have $n_p=0,1$.

34.4 For further reading


This material is discussed in essentially any quantum field theory textbook.
Many do not explicitly discuss the non-relativistic case, two that do are [23]
and [32]. Two books aimed at mathematicians that cover the subject much
more carefully than those for physicists are [20] and [11].
A good source for learning about quantum field theory from the point of view
of non-relativistic many-body theory is Feynman’s lecture notes on statistical
mechanics [18].

Chapter 35

Field Quantization and Dynamics for Non-relativistic Quantum Fields

In finite dimensions, we saw that we could think of phase space M as the space
parametrizing solutions to the equations of motion of a classical system, that
linear functions on this space carried the structure of a Lie algebra (the Heisen-
berg Lie algebra), and that quantization was given by finding an irreducible
unitary representation of this Lie algebra.
Instead of motivating the definition of quantum fields by starting with an-
nihilation and creation operators for a free particle of fixed momentum, one
can more simply just define them as what one gets by taking the space H1 of
solutions of the free single particle Schrödinger equation as a classical phase
space, and quantizing to get a unitary representation (of a Heisenberg algebra
that is now infinite-dimensional). This procedure is sometimes called “second
quantization”, with “first quantization” what was done when one started with
the classical phase space for a single particle and quantized to get the space H1
of wavefunctions.
In this chapter we’ll consider the properties of quantum fields from this point
of view, including seeing how quantization of classical Hamiltonian dynamics
gives the dynamics of quantum fields.

35.1 Quantization of classical fields


The Schrödinger equation is first-order in time, so solutions are determined by
the initial values ψ(x) = ψ(x, 0) of the wavefunction at t = 0 and elements
of the space $H_1$ of solutions can be identified with their initial-value data, the wavefunction at t = 0. Note that this is unlike typical finite-dimensional classical
mechanical systems, where the equation of motion is second-order in time, with
solutions determined by two pieces of initial-value data, the coordinates and
momenta (since one needs initial velocities as well as positions). Taking M = H1
as a classical phase space, it has the property that there is no natural splitting of
coordinates into position-like variables and momentum-like variables, and thus
no natural way of setting up an infinite-dimensional Schrödinger representation
where states would be functionals of position-like variables.
On the other hand, since wavefunctions are complex valued, H1 is already
a complex vector space, and we can quantize by the Bargmann-Fock method
using this complex structure. This is quite unlike our previous examples of
quantization, where we started with a real phase space and needed to choose a
complex structure (to get annihilation and creation operators).
Using Fourier transforms we can think of $H_1$ either as a space of functions ψ of position x, or as a space of functions $\widetilde{\psi}$ of momentum p. This corresponds to two possible choices of orthonormal bases of the function space $H_1$: the $|p\rangle$ (plane waves of momentum p) or the $|x\rangle$ (delta-functions at position x). In the finite dimensional case it is the coordinate functions $q_j, p_j$ on phase space, which lie in the dual phase space $\mathcal{M}=M^*$, that get mapped to operators $Q_j, P_j$ under quantization. Here what corresponds to the $q_j, p_j$ is either the α(p) (coordinates with respect to the $|p\rangle$ basis), which quantize to annihilation operators a(p), or the ψ(x) (field value at x, coordinates with respect to the $|x\rangle$ basis), which quantize to field operators $\widehat{\psi}(x)$.
As described in chapter 34 though, what is really well-defined is not the quantization of ψ(x), but of ψ(f) for some class of functions f. ψ(x) is the linear function on the space of solutions $H_1$ given by
$$\psi(x):\psi\in H_1\to\psi(x,0)$$
but to get a well-defined operator, one wants the quantization not of this, but of elements of $H_1^*$ of the form
$$\psi(f):\psi\in H_1\to\int_{-\infty}^{\infty}f(x)\psi(x,0)dx$$
To get all elements of $H_1^*$ we will also need the $\overline{\psi}(x)$ and
$$\overline{\psi}(g):\psi\in H_1\to\int_{-\infty}^{\infty}g(x)\overline{\psi(x,0)}dx$$

Despite the potential for confusion, we will write ψ(x) for the distribution given
by evaluation at x, which corresponds to taking f to be the delta-function
δ(x − x0 ). This convenient notational choice means that one needs to be aware
that ψ(x) may be a complex number, or may be the “evaluation at x” linear
function on H1 .
So quantum fields should be "operator-valued distributions" and the proper mathematical treatment of this situation becomes quite challenging, with one class of problems coming from the theory of distributions. What class of functions should appear in the space $H_1$? What class of linear functionals on this space should be used? What properties should the operators $\widehat{\psi}(f)$ satisfy?
These issues are far beyond what we can discuss here, and they are not purely
mathematical, with the fact that the product of two distributions does not have
an unambiguous sense one indication of the difficulties of quantum field theory.
To understand the Poisson bracket structure on functions on $H_1$, one should recall that in the Bargmann-Fock quantization we found that, choosing a complex structure and complex coordinates $z_j$ on phase space, the non-zero Poisson bracket relations were
$$\{z_j,i\overline{z}_l\}=\delta_{jl}$$
If we use the $|p\rangle$ basis for $H_1$, our complex coordinates will be $\alpha(p),\overline{\alpha}(p)$, with Poisson brackets
$$\{\alpha(p),\alpha(p')\}=\{\overline{\alpha}(p),\overline{\alpha}(p')\}=0,\qquad \{\alpha(p),i\overline{\alpha}(p')\}=\delta(p-p')$$
and under quantization we have
$$\alpha(p)\to a(p),\qquad \overline{\alpha}(p)\to a^\dagger(p),\qquad 1\to\mathbf{1}$$
Multiplying these operators by −i gives a Heisenberg Lie algebra representation Γ′ (unitary on the real and imaginary parts of α(p)) with
$$\Gamma'(\alpha(p))=-ia(p),\qquad \Gamma'(\overline{\alpha}(p))=-ia^\dagger(p)$$
and the commutator relations
$$[a(p),a(p')]=[a^\dagger(p),a^\dagger(p')]=0,\qquad [a(p),a^\dagger(p')]=\delta(p-p')\mathbf{1}$$

Using instead the $|x\rangle$ basis for $H_1$, our complex coordinates will be $\psi(x),\overline{\psi}(x)$, with Poisson brackets
$$\{\psi(x),\psi(x')\}=\{\overline{\psi}(x),\overline{\psi}(x')\}=0,\qquad \{\psi(x),i\overline{\psi}(x')\}=\delta(x-x')$$
and quantization takes
$$\psi(x)\to-i\widehat{\psi}(x),\qquad \overline{\psi}(x)\to-i\widehat{\psi}^\dagger(x),\qquad 1\to-i\mathbf{1}$$
This gives a Heisenberg Lie algebra representation Γ′ (unitary on the real and imaginary parts of ψ(x)) with commutator relations
$$[\widehat{\psi}(x),\widehat{\psi}(x')]=[\widehat{\psi}^\dagger(x),\widehat{\psi}^\dagger(x')]=0,\qquad [\widehat{\psi}(x),\widehat{\psi}^\dagger(x')]=\delta(x-x')\mathbf{1}$$

To get well-defined operators, these formulas need to be interpreted in terms of distributions. What we should really consider is, for functions f, g in some properly chosen class, elements
$$\psi(f)+\overline{\psi}(g)=\int_{-\infty}^{\infty}(f(x)\psi(x)+g(x)\overline{\psi}(x))dx$$
of $H_1^*$, with Poisson bracket relations
$$\{\psi(f_1)+\overline{\psi}(g_1),\psi(f_2)+\overline{\psi}(g_2)\}=-i\int_{-\infty}^{\infty}(f_1(x)g_2(x)-f_2(x)g_1(x))dx$$
Here the right-hand side of the equation is the symplectic form Ω on the dual phase space $\mathcal{M}=H_1^*$, and this should be thought of as an infinite-dimensional version of the corresponding finite-dimensional formula. This is the Lie bracket relation for an infinite-dimensional Heisenberg Lie algebra, with quantization giving a representation of the Lie algebra, with commutation relations for field operators
$$[\widehat{\psi}(f_1)+\widehat{\psi}^\dagger(g_1),\widehat{\psi}(f_2)+\widehat{\psi}^\dagger(g_2)]=\int_{-\infty}^{\infty}(f_1(x)g_2(x)-f_2(x)g_1(x))dx\cdot\mathbf{1}$$

Pretty much exactly the same formalism works to describe fermions, with the same $H_1$ and the same choice of bases. The only difference is that the coordinate functions are now taken to be anticommuting, satisfying the fermionic Poisson bracket relations of a super Lie algebra rather than a Lie algebra. After quantization, the fields $\widehat{\psi}(x),\widehat{\psi}^\dagger(x)$ or the annihilation and creation operators $a(p),a^\dagger(p)$ satisfy anticommutation relations and generate an infinite-dimensional Clifford algebra, rather than the Weyl algebra of the bosonic case.

35.2 Dynamics of the free quantum field

In classical Hamiltonian mechanics, the Hamiltonian function h determines how an observable f evolves in time by the differential equation
$$\frac{d}{dt}f=\{f,h\}$$
Quantization takes f to an operator $\widehat{f}$, and h to a self-adjoint operator H. Multiplying this by −i gives a skew-adjoint operator that exponentiates (we'll assume here that H is time-independent) to the unitary operator
$$U(t)=e^{-iHt}$$
that determines how states (in the Schrödinger picture) evolve under time translation. In the Heisenberg picture states stay the same and operators evolve, with their time evolution given by
$$\widehat{f}(t)=e^{iHt}\widehat{f}(0)e^{-iHt}$$
Such operators satisfy
$$\frac{d}{dt}\widehat{f}=[\widehat{f},-iH]$$
which is the quantization of the classical dynamical equation.
To describe the time evolution of a quantum field theory system, it is generally easier to work with the Heisenberg picture (in which the time dependence is in the quantum field operators) than the Schrödinger picture (in which the time dependence is in the states). This is especially true in relativistic systems, where one wants as much as possible to treat space and time on the same footing. It is however also true in non-relativistic cases, due to the complexity of the description of the states (inherent, since one is trying to describe arbitrary numbers of particles) versus the description of the operators, which are built simply out of the quantum fields.
The classical phase space to be quantized is the space $H_1$ of solutions of the free particle Schrödinger equation, parametrized by the initial data of a complex-valued wavefunction ψ(x, 0) ≡ ψ(x), with Poisson bracket
$$\{\psi(x),i\overline{\psi}(x')\}=\delta(x-x')$$
Time translation on this space is given by the Schrödinger equation, which says that wavefunctions will evolve with time dependence given by
$$\frac{\partial}{\partial t}\psi(x,t)=\frac{i}{2m}\frac{\partial^2}{\partial x^2}\psi(x,t)$$
If we take our Hamiltonian function on $H_1$ to be
$$h=\int_{-\infty}^{+\infty}\overline{\psi}(x)\left(\frac{-1}{2m}\right)\frac{\partial^2}{\partial x^2}\psi(x)dx$$
then we will get the single-particle Schrödinger equation from the Hamiltonian dynamics, since
$$\begin{aligned}
\frac{\partial}{\partial t}\psi(x,t)&=\{\psi(x,t),h\}\\
&=\left\{\psi(x,t),\int_{-\infty}^{+\infty}\overline{\psi}(x',t)\frac{-1}{2m}\frac{\partial^2}{\partial x'^2}\psi(x',t)dx'\right\}\\
&=\frac{-1}{2m}\int_{-\infty}^{+\infty}\left(\{\psi(x,t),\overline{\psi}(x',t)\}\frac{\partial^2}{\partial x'^2}\psi(x',t)+\overline{\psi}(x',t)\left\{\psi(x,t),\frac{\partial^2}{\partial x'^2}\psi(x',t)\right\}\right)dx'\\
&=\frac{-1}{2m}\int_{-\infty}^{+\infty}\left(-i\delta(x-x')\frac{\partial^2}{\partial x'^2}\psi(x',t)+\overline{\psi}(x',t)\frac{\partial^2}{\partial x'^2}\{\psi(x,t),\psi(x',t)\}\right)dx'\\
&=\frac{i}{2m}\frac{\partial^2}{\partial x^2}\psi(x,t)
\end{aligned}$$
Here we have used the derivation property of the Poisson bracket and the linearity of the operator $\frac{\partial^2}{\partial x'^2}$.
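As a quick numerical illustration of this free Schrödinger dynamics (an aside with grid parameters of my own choosing, not from the text), one can implement the time evolution by multiplying each Fourier mode by $e^{-ip^2t/2m}$ and check it against the standard closed-form spreading Gaussian wave packet:

```python
import numpy as np

m, sigma, t = 1.0, 1.0, 2.0
Npts, Lbox = 2048, 80.0
dx = Lbox / Npts
x = (np.arange(Npts) - Npts // 2) * dx           # grid centered at 0
p = 2 * np.pi * np.fft.fftfreq(Npts, d=dx)

# Initial Gaussian packet psi(x,0) = (pi sigma^2)^(-1/4) exp(-x^2 / 2 sigma^2)
psi0 = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

# Evolve: each momentum mode picks up the phase e^{-i p^2 t / 2m}
psi_t = np.fft.ifft(np.exp(-1j * p**2 * t / (2 * m)) * np.fft.fft(psi0))

# Closed-form spreading Gaussian, sigma^2 -> sigma^2 (1 + i t / m sigma^2)
z = 1 + 1j * t / (m * sigma**2)
exact = (np.pi * sigma**2) ** -0.25 * z**-0.5 * np.exp(-x**2 / (2 * sigma**2 * z))

assert np.max(np.abs(psi_t - exact)) < 1e-8
assert abs(np.sum(np.abs(psi_t) ** 2) * dx - 1.0) < 1e-8   # norm conserved
```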

Note that there are other forms of the same Hamiltonian function, related to the one we chose by integration by parts. One has
$$\overline{\psi}(x)\frac{d^2}{dx^2}\psi(x)=\frac{d}{dx}\left(\overline{\psi}(x)\frac{d}{dx}\psi(x)\right)-\left|\frac{d}{dx}\psi(x)\right|^2$$
$$=\frac{d}{dx}\left(\overline{\psi}(x)\frac{d}{dx}\psi(x)-\left(\frac{d}{dx}\overline{\psi}(x)\right)\psi(x)\right)+\left(\frac{d^2}{dx^2}\overline{\psi}(x)\right)\psi(x)$$
so, neglecting integrals of derivatives (assuming boundary terms go to zero at infinity), one could have used
$$h=\frac{1}{2m}\int_{-\infty}^{+\infty}\left|\frac{d}{dx}\psi(x)\right|^2dx\quad\text{or}\quad h=\int_{-\infty}^{+\infty}\left(\frac{-1}{2m}\frac{d^2}{dx^2}\overline{\psi}(x)\right)\psi(x)dx$$

Instead of working with position space fields ψ(x, t) we could work with their momentum space components. Recall that we can write solutions to the Schrödinger equation as
$$\psi(x,t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\alpha(p,t)e^{ipx}dp$$
where
$$\alpha(p,t)=\alpha(p)e^{-i\frac{p^2}{2m}t}$$
Using these as our coordinates on $H_1$, the dynamics is given by
$$\frac{\partial}{\partial t}\alpha(p,t)=\{\alpha(p,t),h\}=-i\frac{p^2}{2m}\alpha(p,t)$$
and one can easily see that one can choose
$$h=\int_{-\infty}^{\infty}\frac{p^2}{2m}|\alpha(p)|^2dp$$

as Hamiltonian function in momentum space coordinates. This is the same expression one would get by substituting the expression for ψ in terms of α and calculating h from its formula as a quadratic polynomial in the fields.
In momentum space, quantization is simply given by
$$\alpha(p)\to a(p),\qquad h\to\widehat{H}=\int_{-\infty}^{\infty}\frac{p^2}{2m}a^\dagger(p)a(p)dp$$
where we have normal-ordered $\widehat{H}$ so that the vacuum energy is zero.
In position space the expression for the Hamiltonian operator (again normal-ordered) will be
$$\widehat{H}=\int_{-\infty}^{+\infty}\widehat{\psi}^\dagger(x)\left(\frac{-1}{2m}\right)\frac{\partial^2}{\partial x^2}\widehat{\psi}(x)dx$$
Using this quantized form, essentially the same calculation as before (now with operators and commutators instead of functions and Poisson brackets) shows that the quantum field dynamical equation
$$\frac{d}{dt}\widehat{\psi}(x,t)=-i[\widehat{\psi}(x,t),\widehat{H}]$$
becomes
$$\frac{\partial}{\partial t}\widehat{\psi}(x,t)=\frac{i}{2m}\frac{\partial^2}{\partial x^2}\widehat{\psi}(x,t)$$

The field operator $\widehat{\psi}(x,t)$ satisfies the Schrödinger equation, which now appears as a differential equation for operators rather than for wavefunctions. One can explicitly solve such a differential equation just as for wavefunctions, by Fourier transforming and turning differentiation into multiplication. If the operator $\widehat{\psi}(x,t)$ is related to the operator a(p, t) by
$$\widehat{\psi}(x,t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{ipx}a(p,t)dp$$
then the Schrödinger equation for the a(p, t) will be
$$\frac{\partial}{\partial t}a(p,t)=\frac{-ip^2}{2m}a(p,t)$$
with solution
$$a(p,t)=e^{-i\frac{p^2}{2m}t}a(p,0)$$
The solution for the field will then be
$$\widehat{\psi}(x,t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{ipx}e^{-i\frac{p^2}{2m}t}a(p)dp$$
where the operators a(p) ≡ a(p, 0) are the initial values.
We will not enter into the important topic of how to compute observables
in quantum field theory that can be connected to experimentally important
quantities such as scattering cross-sections. A crucial role in such calculations
is played by the following observables:
Definition (Green's function or propagator). The Green's function or propagator for a quantum field theory is the amplitude, for t > t′,
$$G(x,t,x',t')=\langle 0|\widehat{\psi}(x,t)\widehat{\psi}^\dagger(x',t')|0\rangle$$
The physical interpretation of these functions is that they describe the amplitude for a process in which a one-particle state localized at x′ is created at time t′, propagates for a time t − t′, and then its wavefunction is compared to that of a one-particle state localized at x. Using the solution for the time-dependent field operator given earlier we find
$$\begin{aligned}
G(x,t,x',t')&=\frac{1}{2\pi}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\langle 0|e^{ipx}e^{-i\frac{p^2}{2m}t}a(p)e^{-ip'x'}e^{i\frac{p'^2}{2m}t'}a^\dagger(p')|0\rangle\,dp\,dp'\\
&=\frac{1}{2\pi}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}e^{ipx}e^{-i\frac{p^2}{2m}t}e^{-ip'x'}e^{i\frac{p'^2}{2m}t'}\delta(p-p')\,dp\,dp'\\
&=\frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{ip(x-x')}e^{-i\frac{p^2}{2m}(t-t')}dp
\end{aligned}$$

One can evaluate this Gaussian integral, finding
$$G(x,t,x',t')=\left(\frac{-im}{2\pi(t-t')}\right)^{\frac{1}{2}}e^{i\frac{m(x-x')^2}{2(t-t')}}$$
and that
$$\lim_{t\to t'}G(x,t,x',t')=\delta(x-x')$$
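This evaluation can be sanity-checked numerically (an aside of mine, not from the text): giving t − t′ a small negative imaginary part makes the oscillatory momentum integral absolutely convergent, and direct quadrature then matches the closed form:

```python
import numpy as np

m, dx_ = 1.0, 0.7              # mass and separation x - x'
tau = 1.0 - 0.05j              # t - t' with a small negative imaginary part

# Direct quadrature of (1/2pi) ∫ e^{ip(x-x')} e^{-i p^2 tau / 2m} dp
dp = 0.01
p = np.arange(-60, 60, dp)
integrand = np.exp(1j * p * dx_ - 1j * p**2 * tau / (2 * m))
numeric = integrand.sum() * dp / (2 * np.pi)

# Closed form (-im / 2pi tau)^{1/2} e^{i m (x-x')^2 / 2 tau}
closed = np.sqrt(-1j * m / (2 * np.pi * tau)) * np.exp(1j * m * dx_**2 / (2 * tau))

assert abs(numeric - closed) / abs(closed) < 1e-6
```

Letting the imaginary part of τ shrink toward zero recovers the distributional t → t′ limit.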

While we have worked purely in the Hamiltonian formalism, one could instead start with a Lagrangian for this system. A Lagrangian that will give the Schrödinger equation as an Euler-Lagrange equation is
$$L=i\overline{\psi}\frac{\partial}{\partial t}\psi-h=i\overline{\psi}\frac{\partial}{\partial t}\psi+\overline{\psi}\frac{1}{2m}\frac{\partial^2}{\partial x^2}\psi$$
or, using integration by parts to get an alternate form of h mentioned earlier,
$$L=i\overline{\psi}\frac{\partial}{\partial t}\psi-\frac{1}{2m}\left|\frac{\partial}{\partial x}\psi\right|^2$$
If one tries to define a canonical momentum for ψ as $\frac{\partial L}{\partial\dot\psi}$ one just gets $i\overline{\psi}$. This justifies the Poisson bracket relation
$$\{\psi(x),i\overline{\psi}(x')\}=\delta(x-x')$$
but, as expected for a case where the equation of motion is first-order in time, such a canonical momentum is not independent of ψ, and the space of the wavefunctions ψ is already a phase space. One could try to quantize this system by path integral methods, for instance computing the propagator by doing the integral
$$\int D\gamma\; e^{\frac{i}{\hbar}S[\gamma]}$$
over paths γ from (x, t) to (x′, t′). However, one needs to keep in mind the warnings given earlier about path integrals over phase space, since that is what one has here.

35.3 For further reading


As with the last chapter, the material here is discussed in essentially any quan-
tum field theory textbook, with two that explicitly discuss the non-relativistic
case [23] and [32]. For a serious mathematical treatment of quantum fields as
distribution-valued operators, a standard reference is [62].

Chapter 36

Symmetries and Non-relativistic Quantum Fields

In our study of the harmonic oscillator (chapter 22) we found that the sym-
metries of the system could be studied using quadratic functions on the phase
space. Classically these gave a Lie algebra under the Poisson bracket, and quan-
tization provided a unitary representation Γ0 of the Lie algebra, with quadratic
functions becoming quadratic operators. In the case of fields, the same pattern
holds, with the phase space now an infinite dimensional space, the single particle
Hilbert space H1 . Certain specific quadratic functions of the fields will provide
a Lie algebra under the Poisson bracket, with quantization then providing a
unitary representation of the Lie algebra in terms of quadratic field operators.
In chapter 35 we saw how this works for time translation symmetry, which
determines the dynamics of the theory. For the case of a free particle, the
field theory Hamiltonian is a quadratic function of the fields, providing a basic
example of how such functions generate a unitary representation on the states
of the quantum theory by use of a quadratic combination of the quantum field
operators. In this chapter we will see how other group actions on the space
H1 also lead to quadratic operators and unitary transformations on the full
quantum field theory. We would like to find a formula to these, something
that will be simplest to do in the case that the group acts on phase space as
unitary transformations, preserving the complex structure used in Bargmann-
Fock quantization.

36.1 Internal symmetries


Since the phase space $H_1$ is a space of complex functions, there is a group that acts unitarily on this space: the group U(1) of phase transformations of the complex values of the function. Such a group action, which acts trivially on the spatial coordinates but non-trivially on the values of ψ(x), is called an "internal symmetry". If the fields ψ have multiple components, taking values in $\mathbf{C}^m$, there will be a unitary action of the larger group U(m).

36.1.1 U(1) symmetry

In chapter 2 we saw that the fact that irreducible representations of U(1) are labeled by integers is what is responsible for the term "quantization": since quantum states are representations of this group, they break up into states characterized by integers, with these integers counting the number of "quanta". In the non-relativistic quantum field theory, the integer will just be the total particle number. Such a theory can be thought of as a harmonic oscillator with an infinite number of degrees of freedom, and the total particle number is just the total occupation number, summed over all degrees of freedom.
The U(1) action on the fields ψ(x), which provide coordinates on $H_1$, is given by
$$\psi(x)\to e^{i\theta}\psi(x),\qquad \overline{\psi}(x)\to e^{-i\theta}\overline{\psi}(x)$$
(recall that the fields are in the dual space to $H_1$, so the action is the inverse of the action of U(1) on $H_1$ itself by multiplication by $e^{-i\theta}$).
To understand the infinitesimal generator of this symmetry, first recall the simple case of a harmonic oscillator in one variable, identifying the phase space $\mathbf{R}^2$ with $\mathbf{C}$ so that the coordinates are $z,\overline{z}$, with a U(1) action
$$z\to e^{i\theta}z,\qquad \overline{z}\to e^{-i\theta}\overline{z}$$
The Poisson bracket is
$$\{z,\overline{z}\}=-i$$
which implies
$$\{z\overline{z},z\}=iz,\qquad \{z\overline{z},\overline{z}\}=-i\overline{z}$$
Quantizing takes $z\to a,\ \overline{z}\to a^\dagger$, and we saw in chapter 22 that we have a choice to make for the operator that will be the quantization of $z\overline{z}$: either
$$z\overline{z}\to-\frac{i}{2}(a^\dagger a+aa^\dagger)$$
which will have eigenvalues $-i(n+\frac{1}{2})$, n = 0, 1, 2, ..., or the normal-ordered form
$$z\overline{z}\to-ia^\dagger a$$
with eigenvalues −in. With either choice, we get a number operator
$$N=\frac{1}{2}(a^\dagger a+aa^\dagger)\quad\text{or}\quad N=\;:\frac{1}{2}(a^\dagger a+aa^\dagger):\;=a^\dagger a$$

In both cases we have
$$[N,a]=-a,\qquad [N,a^\dagger]=a^\dagger$$
so
$$e^{-i\theta N}ae^{i\theta N}=e^{i\theta}a,\qquad e^{-i\theta N}a^\dagger e^{i\theta N}=e^{-i\theta}a^\dagger$$
Either choice of N will give the same action on operators. However, on states only the normal-ordered one will have the desirable feature that
$$N|0\rangle=0,\qquad e^{-i\theta N}|0\rangle=|0\rangle$$
Since we now want to treat fields, adding together an infinite number of such oscillator degrees of freedom, we will need the normal-ordered version in order not to get $\infty\cdot\frac{1}{2}$ as the number eigenvalue for the vacuum state.
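These single-oscillator relations are easy to verify in a finite truncation (an illustrative aside, not from the text; the truncation size and angle are arbitrary), and the check is exact because $[N,a]$ and the conjugation by $e^{-i\theta N}$ do not mix in the states cut off:

```python
import numpy as np

n = 10
a = np.diag(np.sqrt(np.arange(1, n)), 1)   # truncated annihilation operator
N = a.conj().T @ a                         # normal-ordered number operator

# [N, a] = -a and [N, a†] = a† hold exactly in the truncation
assert np.allclose(N @ a - a @ N, -a)
assert np.allclose(N @ a.conj().T - a.conj().T @ N, a.conj().T)

# e^{-i theta N} a e^{i theta N} = e^{i theta} a  (N is diagonal, so its
# exponential is just a diagonal of phases)
theta = 0.37
U = np.diag(np.exp(-1j * theta * np.arange(n)))
assert np.allclose(U @ a @ U.conj().T, np.exp(1j * theta) * a)
```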
In momentum space, we simply do the above for each value of p and sum, getting
$$\widehat{N}=\int_{-\infty}^{+\infty}a^\dagger(p)a(p)dp$$
where one needs to keep in mind that this is really an operator-valued distribution, which must be integrated against some weighting function on momentum space to get a well-defined operator. What really makes sense is
$$\widehat{N}(f)=\int_{-\infty}^{+\infty}a^\dagger(p)a(p)f(p)dp$$
for a suitable class of functions f.


Instead of working with a(p), the quantization of the Fourier transform of ψ(x), one could work with ψ(x) itself, and write
$$\widehat{N}=\int_{-\infty}^{+\infty}\widehat{\psi}^\dagger(x)\widehat{\psi}(x)dx$$
with the Fourier transform relating the two formulas for $\widehat{N}$. $\widehat{\psi}^\dagger(x)\widehat{\psi}(x)$ is also an operator-valued distribution, with the interpretation of measuring the number density at x.
On field operators, $\widehat{N}$ satisfies
$$[\widehat{N},\widehat{\psi}]=-\widehat{\psi},\qquad [\widehat{N},\widehat{\psi}^\dagger]=\widehat{\psi}^\dagger$$
so $\widehat{\psi}$ acts on states by reducing the eigenvalue of $\widehat{N}$ by one, while $\widehat{\psi}^\dagger$ acts on states by increasing the eigenvalue of $\widehat{N}$ by one. Exponentiating, one has
$$e^{-i\theta\widehat{N}}\widehat{\psi}e^{i\theta\widehat{N}}=e^{i\theta}\widehat{\psi},\qquad e^{-i\theta\widehat{N}}\widehat{\psi}^\dagger e^{i\theta\widehat{N}}=e^{-i\theta}\widehat{\psi}^\dagger$$
which are the quantized versions of the U(1) action on the phase space coordinates
$$\psi(x)\to e^{i\theta}\psi(x),\qquad \overline{\psi}(x)\to e^{-i\theta}\overline{\psi}(x)$$
that we began our discussion with.
An important property of $\widehat{N}$ that can be straightforwardly checked is that
$$[\widehat{N},\widehat{H}]=\left[\widehat{N},\int_{-\infty}^{+\infty}\widehat{\psi}^\dagger(x)\left(\frac{-1}{2m}\right)\frac{\partial^2}{\partial x^2}\widehat{\psi}(x)dx\right]=0$$

This implies that particle number is a conserved quantity: if we start out with a state with a definite particle number, this will remain constant. Note that the origin of this conservation law comes from the fact that $\widehat{N}$ is the quantized generator of the U(1) symmetry of phase transformations on complex-valued fields ψ. If we start with any Hamiltonian function h on $H_1$ that is invariant under the U(1) (i.e., built out of terms with an equal number of ψs and $\overline{\psi}$s), then for such a theory $\widehat{N}$ will commute with $\widehat{H}$ and particle number will be conserved. Note though that one needs to take some care with arguments like this, which assume that symmetries of the classical phase space give rise to unitary representations in the quantum theory. The need to normal-order operator products, working with operators that differ from the most straightforward quantization by an infinite constant, can cause a failure of symmetries to be realized as expected in the quantum theory, a phenomenon known as an "anomaly" in the symmetry. In quantum field theories, due to the infinite number of degrees of freedom, the Stone-von Neumann theorem does not apply, and one can have unitarily inequivalent representations of the algebra generated by the field operators, leading to new kinds of behavior not seen in finite dimensional quantum systems.
In particular, one can have a space of states where the lowest energy state $|0\rangle$ does not have the property
$$\widehat{N}|0\rangle=0,\qquad e^{-i\theta\widehat{N}}|0\rangle=|0\rangle$$
but instead gets taken by $e^{-i\theta\widehat{N}}$ to some other state, with
$$\widehat{N}|0\rangle\neq 0,\qquad e^{-i\theta\widehat{N}}|0\rangle\equiv|\theta\rangle\neq|0\rangle\quad(\text{for }\theta\neq 0)$$
In this case, the vacuum state is not an eigenstate of $\widehat{N}$, so does not have a well-defined particle number. If $[\widehat{N},\widehat{H}]=0$, the states $|\theta\rangle$ will all have the same energy as $|0\rangle$ and there will be a multiplicity of different vacuum states, labeled by θ. In such a case the U(1) symmetry is said to be "spontaneously broken". This phenomenon occurs when non-relativistic quantum field theory is used to describe a superconductor. There the lowest energy state will be a state without a definite particle number, with electrons pairing up in a way that allows them to lower their energy, "condensing" in the lowest energy state.

36.1.2 U (m) symmetry


By taking fields with values in Cm , or, equivalently, m different species of
complex-valued field ψj , j = 1, 2, . . . , m, one can easily construct quantum field

376
theories with larger internal symmetry groups than U (1). Taking as Hamilto-
nian function Z +∞ Xm
−1 ∂ 2
h= ψ j (x) ψj (x)dx
−∞ j=1 2m ∂x2

one can see that this will be invariant not just under U (1) phase transformations,
but also under transformations

   (ψ1, ψ2, . . . , ψm)ᵀ → U (ψ1, ψ2, . . . , ψm)ᵀ
where U is an m by m unitary matrix. The Poisson brackets will be

   {ψj(x), ψ̄k(x′)} = −iδ(x − x′)δjk

and are also invariant under such transformations by U ∈ U(m).
As in the U(1) case, one can begin by considering the case of one particular
value of p or of x, for which the phase space is Cm, with coordinates zj, z̄j. As we
saw in section 23.1, the m² quadratic combinations zj z̄k for j = 1, . . . , m, k =
1, . . . , m will generalize the role played by z z̄ in the m = 1 case, with their
Poisson bracket relations exactly the Lie bracket relations of the Lie algebra
u(m) (or, considering all complex linear combinations, gl(m, C)).
After quantization, these quadratic combinations become quadratic combi-
nations in annihilation and creation operators aj , a†j satisfying

[aj , a†k ] = δjk


Recall (theorem 23.2) that for m by m matrices X and Y one will have
   [ Σ_{j,k=1}^{m} a†j Xjk ak , Σ_{j,k=1}^{m} a†j Yjk ak ] = Σ_{j,k=1}^{m} a†j [X, Y]jk ak

So, for each X in the Lie algebra gl(m, C), quantization will give us a represen-
tation of gl(m, C) where X acts as the operator
   Σ_{j,k=1}^{m} a†j Xjk ak

When the matrices X are chosen to be skew-adjoint (Xjk = −X̄kj) this
construction will give us a unitary representation of u(m).
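The commutator identity of theorem 23.2 can be spot-checked numerically. The sketch below (all helper names are ours, and the Fock-space truncation is an artifact of the check, not of the theory) builds truncated annihilation and creation operators for m = 2 modes and verifies [Σ a†j Xjk ak, Σ a†j Yjk ak] = Σ a†j [X,Y]jk ak on a low-occupation state, where truncation introduces no error since these operators conserve total particle number:

```python
from itertools import product

# Spot-check of [sum a_j^dag X_jk a_k, sum a_j^dag Y_jk a_k]
# = sum a_j^dag [X,Y]_jk a_k on a truncated 2-mode bosonic Fock space
# (occupations 0..3 per mode). A sketch only; all names are ours.
N = 4                                   # states |0>,|1>,|2>,|3> per mode
DIM = N * N

def idx(n1, n2):                        # basis label for |n1, n2>
    return n1 * N + n2

def zeros():
    return [[0j] * DIM for _ in range(DIM)]

def lower(mode):                        # truncated annihilation operator
    a = zeros()
    for n1, n2 in product(range(N), repeat=2):
        n = (n1, n2)[mode]
        if n > 0:
            src, dst = [n1, n2], [n1, n2]
            dst[mode] -= 1
            a[idx(*dst)][idx(*src)] = n ** 0.5
    return a

def dag(A):
    return [[A[j][i].conjugate() for j in range(DIM)] for i in range(DIM)]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(DIM)) for j in range(DIM)]
            for i in range(DIM)]

def add(A, B, s=1):
    return [[A[i][j] + s * B[i][j] for j in range(DIM)] for i in range(DIM)]

a = [lower(0), lower(1)]
ad = [dag(x) for x in a]

def quad(X):                            # sum_jk a_j^dag X[j][k] a_k
    Q = zeros()
    for j, k in product(range(2), repeat=2):
        Q = add(Q, mm(ad[j], a[k]), X[j][k])
    return Q

X = [[0, 1], [-1, 0]]                   # two elements of u(2)
Y = [[1j, 0], [0, -1j]]
XY = [[sum(X[i][k] * Y[k][j] - Y[i][k] * X[k][j] for k in range(2))
       for j in range(2)] for i in range(2)]

qx, qy = quad(X), quad(Y)
lhs = add(mm(qx, qy), mm(qy, qx), -1)   # the commutator [quad(X), quad(Y)]
rhs = quad(XY)
v = [0j] * DIM
v[idx(1, 1)] = 1                        # low-occupation state |1,1>
lv = [sum(lhs[i][j] * v[j] for j in range(DIM)) for i in range(DIM)]
rv = [sum(rhs[i][j] * v[j] for j in range(DIM)) for i in range(DIM)]
assert all(abs(x - y) < 1e-12 for x, y in zip(lv, rv))
```

Since both quadratic operators conserve total particle number, acting twice on |1,1⟩ never reaches the truncated top of the ladder, so the check is exact up to floating-point roundoff.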
As in the U (1) case, one gets an operator in the quantum field theory just by
summing over either the a(p) in momentum space, or the fields in configuration
space, finding for each X ∈ u(m) an operator
   X̂ = ∫_{−∞}^{+∞} Σ_{j,k=1}^{m} ψ̂j†(x) Xjk ψ̂k(x) dx

that provides a representation of u(m) and U (m) on the quantum field theory
state space. This representation takes
   e^X ∈ U(m) → U(e^X) = e^{X̂} = exp( ∫_{−∞}^{+∞} Σ_{j,k=1}^{m} ψ̂j†(x) Xjk ψ̂k(x) dx )

When, as for the free-particle h we chose, the Hamiltonian is invariant under
U(m) transformations of the fields ψj, then we will have

   [X̂, Ĥ] = 0

In this case, if |0⟩ is invariant under the U (m) symmetry, then energy eigenstates
of the quantum field theory will break up into irreducible representations of
U (m) and can be labeled accordingly. As in the U (1) case, the U (m) symmetry
may be spontaneously broken, with

   X̂|0⟩ ≠ 0

for some directions X in u(m). When this happens, just as states in the U(1) case
did not have a well-defined particle number, now they will not carry well-defined
irreducible U(m) representation labels.

36.2 Spatial symmetries


We saw in chapter 17 that the action of the group E(3) on physical space
R3 induces a unitary action on the space H1 of solutions to the free-particle
Schrödinger equation. Quantization of this phase space with this group action
produces a quantum field theory state space carrying a unitary representation
of the group E(3). There are three different actions of the group E(3) that one
needs to keep straight here. Given an element (a, R) ∈ E(3) one has:

1. An action on R3 , preserving the inner product on R3

x → Rx + a

2. A unitary action on H1 given by

ψ(x) → u(a, R)ψ(x) = ψ(R−1 (x − a))

on wavefunctions, or, on Fourier transforms by


   ψ̃(p) → ũ(a, R)ψ̃(p) = e^{−ia·p} ψ̃(R⁻¹p)

Recall that this is not an irreducible representation of E(3), but one can
get an irreducible representation by taking distributional wavefunctions
ψ̃E with support on the sphere |p|² = 2mE.

For the case of two-component wavefunctions ψ = (ψ1, ψ2)ᵀ satisfying the
Pauli equation (see chapter 31), one has to use the double cover of E(3),
with elements (a, Ω), Ω ∈ SU (2) and on these the action is

ψ(x) → u(a, Ω)ψ(x) = Ωψ(R−1 (x − a))

and
   ψ̃(p) → ũ(a, Ω)ψ̃(p) = e^{−ia·p} Ω ψ̃(R⁻¹p)

where R = Φ(Ω) is the SO(3) group element corresponding to Ω.


The infinitesimal version of this unitary action is given by the operators
−iP and −iL (in the two-component case, instead of −iL, one needs
−iJ = −i(L + S)).

3. The quantum field theory one gets by treating H1 as a classical phase


space, and quantizing using an appropriate infinite-dimensional version of
the Bargmann-Fock representation comes with another unitary represen-
tation U (a, R), on the quantum field theory state space H. This is be-
cause the representation u(a, R) preserves the symplectic structure on H1 ,
and the Bargmann-Fock construction gives a representation of the group
of such symplectic transformations, of which E(3) is a finite-dimensional
subgroup.

It is the last of these that we want to understand here, and as usual for
quantum field theory, we don’t want to try and explicitly construct the state
space H and see the E(3) action on that construction, but instead want to use
the analog of the Heisenberg picture in the time-translation case, taking the
group to act on operators. For each (a, R) ∈ E(3) we want to find operators
U (a, R) that will be built out of the field operators, and act on the field operators
as
   ψ̂(x) → U(a, R)ψ̂(x)U(a, R)⁻¹ = ψ̂(Rx + a)    (36.1)

Note that here the way the group acts on the argument of the operator-valued
distribution is opposite to the way that it acts on the argument of a solution
in H1. This is because ψ̂(x) is an operator associated not to an element of H1,
but to a distribution on this space, in particular the distribution ψ(x), here
meaning “evaluation of the solution ψ at x”. The group will act oppositely on
such linear functions on H1 to its action on elements of H1 . For a more general
distribution of the form
   ψ(f) = ∫_{R3} f(x)ψ(x) d³x

E(3) will act on f by

f → (a, R) · f (x) = f (R−1 (x − a))

and on ψ(f) by

   ψ(f) → (a, R) · ψ(f) = ∫_{R3} f(R⁻¹(x − a))ψ(x) d³x

The distribution ψ(x) corresponds to taking f as the delta-function, and it will
transform as

   ψ(x) → (a, R) · ψ(x) = ψ(Rx + a)
For spatial translations, we want to construct momentum operators −iP̂ that
give a Lie algebra representation of the translation group, and the operators

   U(a, 1) = e^{−ia·P̂}

after exponentiation. Note that these are not the momentum operators P that
act on H1 , but are operators in the quantum field theory that will be built out
of quadratic combinations of the field operators. By equation 36.1 we want

   e^{−ia·P̂} ψ̂(x) e^{ia·P̂} = ψ̂(x + a)

Such an operator P̂ can be constructed in terms of quadratic operators in the
fields in the same way as the Hamiltonian Ĥ was in section 35.2, although there
is an opposite choice of sign for time versus space translations (−iH = ∂/∂t, and
−iP = −∂/∂x) for reasons that appear when we combine space and time later in
special relativity. The calculation proceeds by just replacing the single-particle
Hamiltonian operator by the single-particle momentum operator P = −i∇. So
one has

   P̂ = ∫_{R3} ψ̂†(x)(−i∇)ψ̂(x) d³x
In the last chapter we saw that, in terms of annihilation and creation operators,
this operator is just

   P̂ = ∫_{R3} p a†(p)a(p) d³p
which is just the integral over momentum space of the momentum times the
number-density operator in momentum space.
For spatial rotations, we found in chapter 17 that these had generators the
angular momentum operators
L = X × P = X × (−i∇)
acting on H1 . Just as for energy and momentum, we can construct angular
momentum operators in the quantum field theory as quadratic field operators
by

   L̂ = ∫_{R3} ψ̂†(x)(x × (−i∇))ψ̂(x) d³x
These will generate the action of rotations on the field operators. For instance,
if R(θ) is a rotation about the x3 axis by angle θ, we will have

   ψ̂(R(θ)x) = e^{−iθL̂₃} ψ̂(x) e^{iθL̂₃}
Note that these constructions are infinite-dimensional examples of theorem
23.2 which showed how to take an action of the unitary group on phase space
(preserving Ω) and produce a representation of this group on the state space
of the quantum theory. In our study of quantum field theory, we will be con-
tinually exploiting this construction, for groups acting unitarily on the infinite-
dimensional phase space H1 of solutions of some linear field equations.

36.3 Fermions
Everything that was done in this chapter carries over straightforwardly to the
case of a fermionic non-relativistic quantum field theory of free particles. Field
operators will in this case generate an infinite-dimensional Clifford algebra and
the quantum state space will be an infinite-dimensional version of the spinor
representation. All the symmetries considered in this chapter also appear in the
fermionic case, and have Lie algebra representations constructed using quadratic
combinations of the field operators in just the same way as in the bosonic case.
In section 28.3 we saw in finite dimensions how unitary group actions on the
fermionic phase space gave a unitary representation on the fermionic oscillator
space, by the same method of annihilation and creation operators as in the
bosonic case. The construction of the Lie algebra representation operators in
the fermionic case is an infinite-dimensional example of that method.

36.4 For further reading


The material of this chapter is often developed in conventional quantum field
theory texts in the context of relativistic rather than non-relativistic quantum
field theory. Symmetry generators are also more often derived via Lagrangian
methods (Noether’s theorem) rather than the Hamiltonian methods used here.
For an example of a detailed discussion relatively close to this one, see [23].

Chapter 37

Minkowski Space and the


Lorentz Group

For the case of non-relativistic quantum mechanics, we saw that systems with
an arbitrary number of particles, bosons or fermions, could be described by
taking as the Hamiltonian phase space the state space H1 of the single-particle
quantum theory (e.g. the space of complex-valued wavefunctions on R3 in the
bosonic case). This phase space is infinite-dimensional, but it is linear and it
can be quantized using the same techniques that work for the finite-dimensional
harmonic oscillator. This is an example of a quantum field theory since it is a
space of functions (fields, to physicists) that is being quantized.
We would like to find some similar way to proceed for the case of rela-
tivistic systems, finding relativistic quantum field theories capable of describ-
ing arbitrary numbers of particles, with the energy-momentum relationship
E 2 = |p|2 c2 + m2 c4 characteristic of special relativity, not the non-relativistic
limit |p| ≪ mc where E = |p|²/2m.
the space of initial conditions for an equation of motion, or equivalently, as the
space of solutions of the equation of motion. In the non-relativistic field theory,
the equation of motion is the first-order in time Schrödinger equation, and the
phase space is the space of fields (wavefunctions) at a specified initial time, say
t = 0. This space carries a representation of the time-translation group R, the
space-translation group R3 and the rotation group SO(3). To construct a rela-
tivistic quantum field theory, we want to find an analog of this space. It will be
some sort of linear space of functions satisfying an equation of motion, and we
will then quantize by applying harmonic oscillator methods.
Just as in the non-relativistic case, the space of solutions to the equation
of motion provides a representation of the group of space-time symmetries of
the theory. This group will now be the Poincaré group, a ten-dimensional
group which includes a four-dimensional subgroup of translations in space-time,
and a six-dimensional subgroup (the Lorentz group), which combines spatial
rotations and “boosts” (transformations mixing spatial and time coordinates).

The representation of the Poincaré group on the solutions to the relativistic
wave equation will in general be reducible. Irreducible such representations will
be the objects corresponding to elementary particles. Our first goal will be to
understand the Lorentz group, in later sections we will find representations of
this group, then move on to the Poincaré group and its representations.

37.1 Minkowski space


Special relativity is based on the principle that one should consider space and
time together, and take them to be a four-dimensional space R4 with an indef-
inite inner product:

Definition (Minkowski space). Minkowski space M 4 is the vector space R4


with an indefinite inner product given by

(x, y) ≡ x · y = −x0 y0 + x1 y1 + x2 y2 + x3 y3

where (x0 , x1 , x2 , x3 ) are the coordinates of x ∈ R4 , (y0 , y1 , y2 , y3 ) the coordi-


nates of y ∈ R4 .

Digression. We have chosen to use the − + ++ instead of the more common


+ − −− sign convention for the following reasons:

• Analytically continuing the time variable x0 to ix0 gives a positive definite


inner product.

• Restricting to spatial components, there is no change from our previous


formulas for the symmetries of Euclidean space E(3).

• Only for this choice will we have a purely real spinor representation (since
Cliff(3, 1) = M(4, R) ≠ Cliff(1, 3)).

• Weinberg’s quantum field theory textbook [71] uses this convention (al-
though, unlike him, we’ll put the 0 component first).

This inner product will also sometimes be written using the matrix
 
   ηµν =  [ −1  0  0  0 ]
          [  0  1  0  0 ]
          [  0  0  1  0 ]
          [  0  0  0  1 ]

as

   x · y = Σ_{µ,ν=0}^{3} ηµν xµ yν
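As a small concrete illustration (a Python sketch; the function and variable names are ours), the inner product is the η-weighted dot product:

```python
# Minkowski inner product with signature (- + + +), as a plain-Python sketch.
ETA = [-1, 1, 1, 1]  # diagonal of the matrix eta_{mu nu}

def minkowski(x, y):
    """x . y = -x0*y0 + x1*y1 + x2*y2 + x3*y3"""
    return sum(e * a * b for e, a, b in zip(ETA, x, y))

t = (1, 0, 0, 0)       # a time-like vector: |t|^2 < 0
s = (0, 1, 0, 0)       # a space-like vector: |s|^2 > 0
l = (1, 1, 0, 0)       # a light-like vector: |l|^2 = 0

print(minkowski(t, t), minkowski(s, s), minkowski(l, l))  # prints: -1 1 0
```

The three sample vectors anticipate the space-like/time-like/light-cone classification discussed below.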

Digression (Upper and lower indices). In many physics texts it is conventional
in discussions of special relativity to write formulas using both upper and lower
indices, related by

   xµ = Σ_{ν=0}^{3} ηµν x^ν = ηµν x^ν

with the last form of this using the Einstein summation convention.

One motivation for introducing both upper and lower indices is that special
relativity is a limiting case of general relativity, which is a fully geometrical
theory based on taking space-time to be a manifold M with a metric g that
varies from point to point. In such a theory it is important to distinguish between
elements of the tangent space Tx (M ) at a point x ∈ M and elements of its dual,
the co-tangent space Tx∗ (M ), while using the fact that the metric g provides an
inner product on Tx (M ) and thus an isomorphism Tx (M ) ' Tx∗ (M ). In the
special relativity case, this distinction between Tx (M ) and Tx∗ (M ) just comes
down to an issue of signs, but the upper and lower index notation is useful for
keeping track of those.

A second motivation is that position and momenta naturally live in dual


vector spaces, so one would like to distinguish between the vector space M 4 of
positions and the dual vector space of momenta. In the case though of a vector
space like M 4 which comes with a fixed inner product ηµν , this inner product
gives a fixed identification of M 4 and its dual, an identification that is also an
identification as representations of the Lorentz group. For simplicity, we will
not here try and distinguish by notation whether a vector is in M 4 or its dual,
so will just use lower indices, not both upper and lower indices.

The coordinates x1 , x2 , x3 are interpreted as spatial coordinates, and the


coordinate x0 is a time coordinate, related to the conventional time coordinate
t with respect to chosen units of time and distance by x0 = ct where c is the
speed of light. Mostly we will assume units of time and distance have been
chosen so that c = 1.

Vectors v ∈ M 4 such that |v|2 = v · v > 0 are called “spacelike”, those with
|v|2 < 0 “time-like” and those with |v|2 = 0 are said to lie on the “light-cone”.
Suppressing one space dimension, the picture to keep in mind of Minkowski
space looks like this:

   [Figure: the light-cone |v|² = 0, with time-like vectors inside and space-like
   vectors outside; one spatial dimension suppressed.]
We can take Fourier transforms with respect to the four space-time variables,
which will take functions of x0 , x1 , x2 , x3 to functions of the Fourier transform
variables p0 , p1 , p2 , p3 . The definition we will use for this Fourier transform will
be
   f̃(p) = (1/(2π)²) ∫_{M4} e^{−ip·x} f(x) d⁴x
         = (1/(2π)²) ∫_{M4} e^{−i(−p0x0+p1x1+p2x2+p3x3)} f(x) dx0 d³x

and the Fourier inversion formula is

   f(x) = (1/(2π)²) ∫_{M4} e^{ip·x} f̃(p) d⁴p

Note that our definition puts one factor of 1/√(2π) with each Fourier (or inverse
Fourier) transform with respect to a single variable. A common alternate con-
vention among physicists is to put all factors of 2π with the p integrals (and
thus in the inverse Fourier transform), none in the definition of f̃(p), the Fourier
transform itself.

The reason why one conventionally defines the Hamiltonian operator as i∂/∂t
but the momentum operator with components −i∂/∂xj is due to the sign change
between the time and space variables that occurs in this Fourier transform in
the exponent of the exponential.

37.2 The Lorentz group and its Lie algebra
Recall that in 3 dimensions the group of linear transformations of R3 pre-
serving the standard inner product was the group O(3) of 3 by 3 orthogonal
matrices. This group has two disconnected components: SO(3), the subgroup
of orientation preserving (determinant +1) transformations, and a component
of orientation reversing (determinant −1) transformations. In Minkowski space,
one has

Definition (Lorentz group). The Lorentz group O(3, 1) is the group of linear
transformations preserving the Minkowski space inner product on R4 .

In terms of matrices, the condition for a 4 by 4 matrix Λ to be in O(3, 1)
will be

   Λᵀ [ −1  0  0  0 ] Λ  =  [ −1  0  0  0 ]
      [  0  1  0  0 ]       [  0  1  0  0 ]
      [  0  0  1  0 ]       [  0  0  1  0 ]
      [  0  0  0  1 ]       [  0  0  0  1 ]

The Lorentz group has four components, with the component of the iden-
tity a subgroup called SO(3, 1) (which some call SO+ (3, 1)). The other three
components arise by multiplication of elements in SO(3, 1) by P, T, P T where
 
   P = [ 1   0   0   0 ]
       [ 0  −1   0   0 ]
       [ 0   0  −1   0 ]
       [ 0   0   0  −1 ]

is called the “parity” transformation, reversing the orientation of the spatial
variables, and

   T = [ −1  0  0  0 ]
       [  0  1  0  0 ]
       [  0  0  1  0 ]
       [  0  0  0  1 ]
reverses the time orientation.
The Lorentz group has a subgroup SO(3) of transformations that just act
on the spatial components, given by matrices of the form
 
   Λ = [ 1  0 ]
       [ 0  R ]

(in 1 + 3 block form)

where R is in SO(3). For each pair j, k of spatial directions one has the usual
SO(2) subgroup of rotations in the j −k plane, but now in addition for each pair
0, j of the time direction with a spatial direction, one has SO(1, 1) subgroups

of matrices of transformations called “boosts” in the j direction. For example,
for j = 1, one has the subgroup of SO(3, 1) of matrices of the form
 
       [ cosh φ   sinh φ   0   0 ]
   Λ = [ sinh φ   cosh φ   0   0 ]
       [   0        0      1   0 ]
       [   0        0      0   1 ]

for φ ∈ R.
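One can check directly that such a boost satisfies the defining condition ΛᵀηΛ = η of the Lorentz group (a Python sketch, with helper names of our choosing):

```python
import math

# Check that the boost matrix Lambda(phi) satisfies Lambda^T eta Lambda = eta,
# i.e. lies in SO(3,1). Helper names here are ours, for illustration only.
ETA = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def boost(phi):
    c, s = math.cosh(phi), math.sinh(phi)
    return [[c, s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

L = boost(0.7)
LT = [list(row) for row in zip(*L)]          # transpose
lhs = matmul(matmul(LT, ETA), L)
assert all(abs(lhs[i][j] - ETA[i][j]) < 1e-12
           for i in range(4) for j in range(4))
```

The check reduces to cosh²φ − sinh²φ = 1 in the 0–1 block, which is why the hyperbolic functions appear.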
The Lorentz group is six-dimensional. For a basis of its Lie algebra one can
take the six matrices Mµν for µ, ν ∈ {0, 1, 2, 3}, µ < ν. For the spatial indices,
these are

         [ 0  0  0  0 ]        [ 0  0  0  0 ]        [ 0  0  0  0 ]
   M12 = [ 0  0 −1  0 ]  M13 = [ 0  0  0  1 ]  M23 = [ 0  0  0  0 ]
         [ 0  1  0  0 ]        [ 0  0  0  0 ]        [ 0  0  0 −1 ]
         [ 0  0  0  0 ]        [ 0 −1  0  0 ]        [ 0  0  1  0 ]

which correspond to the basis elements of the Lie algebra of SO(3) that we saw
in an earlier chapter. One can rename these using the same names as earlier

l1 = M23 , l2 = M13 , l3 = M12

and recall that these satisfy the so(3) commutation relations

[l1 , l2 ] = l3 , [l2 , l3 ] = l1 , [l3 , l1 ] = l2

and correspond to infinitesimal rotations about the three spatial axes.


Taking the first index 0, one gets three elements corresponding to infinitesi-
mal boosts in the three spatial directions
     
         [ 0  1  0  0 ]        [ 0  0  1  0 ]        [ 0  0  0  1 ]
   M01 = [ 1  0  0  0 ]  M02 = [ 0  0  0  0 ]  M03 = [ 0  0  0  0 ]
         [ 0  0  0  0 ]        [ 1  0  0  0 ]        [ 0  0  0  0 ]
         [ 0  0  0  0 ]        [ 0  0  0  0 ]        [ 1  0  0  0 ]

These can be renamed as

k1 = M01 , k2 = M02 , k3 = M03

One can easily calculate the commutation relations between the kj and lj , which
show that the kj transform as a vector under infinitesimal rotations. For in-
stance, for infinitesimal rotations about the x1 axis, one finds

[l1 , k1 ] = 0, [l1 , k2 ] = k3 , [l1 , k3 ] = −k2

Commuting infinitesimal boosts, one gets infinitesimal spatial rotations

[k1 , k2 ] = −l3 , [k3 , k1 ] = −l2 , [k2 , k3 ] = −l1
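These commutation relations can be spot-checked directly from the explicit matrices (a Python sketch; all helper names are ours):

```python
# Spot-check of the so(3,1) commutation relations, using the explicit
# 4x4 generators M_{mu nu} given above. Helper names are ours.
def mat(entries):  # build a 4x4 matrix from a dict {(row, col): value}
    return [[entries.get((i, j), 0) for j in range(4)] for i in range(4)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def comm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(4)] for i in range(4)]

l1 = mat({(3, 2): 1, (2, 3): -1})   # M23
l2 = mat({(1, 3): 1, (3, 1): -1})   # M13
l3 = mat({(2, 1): 1, (1, 2): -1})   # M12
k1 = mat({(0, 1): 1, (1, 0): 1})    # M01
k2 = mat({(0, 2): 1, (2, 0): 1})    # M02
k3 = mat({(0, 3): 1, (3, 0): 1})    # M03

assert comm(l1, l2) == l3           # rotations close among themselves
assert comm(l1, k2) == k3           # the k_j transform as a vector
assert comm(l1, k1) == mat({})      # [l1, k1] = 0
neg_l3 = [[-x for x in row] for row in l3]
assert comm(k1, k2) == neg_l3       # commuting boosts gives a rotation
```

Since all entries are integers, the comparisons here are exact.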

Digression. A more conventional notation in physics is to use Jj = ilj for
infinitesimal rotations, and Kj = ikj for infinitesimal boosts. The intention of
the different notation used here is to start with basis elements of the real Lie
algebra so(3, 1), (the lj and kj ) which are purely real objects, before complexifying
and considering representations of the Lie algebra.
Taking the following complex linear combinations of the lj and kj
   Aj = (1/2)(lj + ikj),   Bj = (1/2)(lj − ikj)
one finds
[A1 , A2 ] = A3 , [A3 , A1 ] = A2 , [A2 , A3 ] = A1
and
[B1 , B2 ] = B3 , [B3 , B1 ] = B2 , [B2 , B3 ] = B1
This construction of the Aj , Bj requires that we complexify (allow complex
linear combinations of basis elements) the Lie algebra so(3, 1) of SO(3, 1) and
work with the complex Lie algebra so(3, 1) ⊗ C. It shows that this Lie alge-
bra splits into a product of two sub-Lie algebras, which are each copies of the
(complexified) Lie algebra of SO(3), so(3) ⊗ C. Since
so(3) ⊗ C = su(2) ⊗ C = sl(2, C)
we have
so(3, 1) ⊗ C = sl(2, C) × sl(2, C)
In the next section we’ll see the origin of this phenomenon at the group level.
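A numerical spot-check of this splitting (Python sketch; the helper functions and the rebuilt generator matrices are ours) confirms that the Aj and Bj each satisfy the so(3) relations and commute with each other:

```python
# Check that A_j = (l_j + i k_j)/2 and B_j = (l_j - i k_j)/2 generate two
# commuting sl(2,C) subalgebras of so(3,1) (x) C. Helper names are ours.
def mat(entries):
    return [[entries.get((i, j), 0j) for j in range(4)] for i in range(4)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def comm(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(4)] for i in range(4)]

def lincomb(c1, M1, c2, M2):
    return [[c1 * M1[i][j] + c2 * M2[i][j] for j in range(4)] for i in range(4)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12
               for i in range(4) for j in range(4))

ls = [mat({(3, 2): 1, (2, 3): -1}), mat({(1, 3): 1, (3, 1): -1}),
      mat({(2, 1): 1, (1, 2): -1})]                 # l1, l2, l3
ks = [mat({(0, j + 1): 1, (j + 1, 0): 1}) for j in range(3)]  # k1, k2, k3
A = [lincomb(0.5, l, 0.5j, k) for l, k in zip(ls, ks)]
B = [lincomb(0.5, l, -0.5j, k) for l, k in zip(ls, ks)]

assert close(comm(A[0], A[1]), A[2])     # [A1, A2] = A3
assert close(comm(B[0], B[1]), B[2])     # [B1, B2] = B3
assert close(comm(A[0], B[1]), mat({}))  # the two subalgebras commute
```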

37.3 Spin and the Lorentz group


Just as the groups SO(n) have double covers Spin(n), the group SO(3, 1) has a
double cover, which we will show can be identified with the group SL(2, C) of
2 by 2 complex matrices with unit determinant. This group will have the same
Lie algebra as the SO(3, 1), and we will sometimes refer to either group as the
“Lorentz group”.
Recall that for SO(3) the spin double cover Spin(3) can be identified with
either Sp(1) (the unit quaternions) or SU (2), and then the action of Spin(3)
as SO(3) rotations of R3 was given by conjugation of imaginary quaternions or
certain 2 by 2 complex matrices respectively. In the SU (2) case this was done
explicitly by identifying
 
   (x1, x2, x3) ↔ [ x3         x1 − ix2 ]
                  [ x1 + ix2   −x3      ]

and then showing that conjugating this matrix by an element of SU(2) was a
linear map leaving invariant

   det [ x3         x1 − ix2 ] = −(x1² + x2² + x3²)
       [ x1 + ix2   −x3      ]

and thus a rotation in SO(3).
The same sort of thing works for the Lorentz group case. Now we identify
R4 with the space of 2 by 2 complex self-adjoint matrices by
 
   (x0, x1, x2, x3) ↔ [ x0 + x3    x1 − ix2 ]
                      [ x1 + ix2   x0 − x3  ]

and observe that

   det [ x0 + x3    x1 − ix2 ] = x0² − x1² − x2² − x3²
       [ x1 + ix2   x0 − x3  ]

This provides a very useful way to think of Minkowski space: as complex self-
adjoint 2 by 2 matrices, with norm-squared minus the determinant of the matrix.
The linear transformation
   
   [ x0 + x3    x1 − ix2 ]  →  Λ̃ [ x0 + x3    x1 − ix2 ] Λ̃†
   [ x1 + ix2   x0 − x3  ]       [ x1 + ix2   x0 − x3  ]

for Λ̃ ∈ SL(2, C) preserves the determinant and thus the inner product, since

   det( Λ̃ [ x0 + x3    x1 − ix2 ] Λ̃† ) = (det Λ̃) det [ x0 + x3    x1 − ix2 ] (det Λ̃†)
           [ x1 + ix2   x0 − x3  ]                     [ x1 + ix2   x0 − x3  ]
                                        = x0² − x1² − x2² − x3²

It also takes self-adjoint matrices to self-adjoint matrices, and thus R4 to R4, since

   ( Λ̃ [ x0 + x3    x1 − ix2 ] Λ̃† )† = (Λ̃†)† [ x0 + x3    x1 − ix2 ]† Λ̃†
       [ x1 + ix2   x0 − x3  ]                [ x1 + ix2   x0 − x3  ]

                                     = Λ̃ [ x0 + x3    x1 − ix2 ] Λ̃†
                                         [ x1 + ix2   x0 − x3  ]

Note that both Λ̃ and −Λ̃ give the same linear transformation when they act by
conjugation like this. One can show that all elements of SO(3, 1) arise as such
conjugation maps, by finding appropriate Λ̃ that give rotations or boosts in the
µ − ν planes, since these generate the group.
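This conjugation action can be spot-checked numerically (Python sketch; the particular Λ̃ below is an arbitrary determinant-one choice of ours):

```python
import cmath

# Conjugating the self-adjoint matrix built from (x0, x1, x2, x3) by an
# SL(2,C) element should preserve x0^2 - x1^2 - x2^2 - x3^2 and
# self-adjointness. A sketch only; the sample matrix is arbitrary.
def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

x0, x1, x2, x3 = 2.0, 0.3, -1.1, 0.7
M = [[x0 + x3, x1 - 1j * x2], [x1 + 1j * x2, x0 - x3]]

L = [[2, 1j], [1, 1 + 0.5j]]
d = cmath.sqrt(L[0][0] * L[1][1] - L[0][1] * L[1][0])
L = [[e / d for e in row] for row in L]        # normalize so det(L) = 1

Mp = mm(mm(L, M), dagger(L))
assert all(abs(Mp[i][j] - dagger(Mp)[i][j]) < 1e-12
           for i in range(2) for j in range(2))   # still self-adjoint
y0 = (Mp[0][0] + Mp[1][1]).real / 2               # read off the new x's
y3 = (Mp[0][0] - Mp[1][1]).real / 2
y1 = (Mp[0][1] + Mp[1][0]).real / 2
y2 = (Mp[1][0] - Mp[0][1]).imag / 2
lhs = x0**2 - x1**2 - x2**2 - x3**2
rhs = y0**2 - y1**2 - y2**2 - y3**2
assert abs(lhs - rhs) < 1e-9                      # Minkowski norm preserved
```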
Recall that the double covering map

Φ : SU (2) → SO(3)

was given for Ω ∈ SU(2) by taking Φ(Ω) to be the linear transformation in SO(3)

   [ x3         x1 − ix2 ]  →  Ω [ x3         x1 − ix2 ] Ω⁻¹
   [ x1 + ix2   −x3      ]       [ x1 + ix2   −x3      ]
We have found an extension of this map to a double covering map from SL(2, C)
to SO(3, 1). This restricts to Φ on the subgroup SU (2) of SL(2, C) matrices
satisfying Λ̃† = Λ̃−1 .

Digression (The complex group Spin(4, C) and its real forms). Recall that
we found that Spin(4) = Sp(1) × Sp(1), with the corresponding SO(4) trans-
formation given by identifying R4 with the quaternions H and taking not just
conjugations by unit quaternions, but both left and right multiplication by dis-
tinct unit quaternions. Rewriting this in terms of complex matrices instead of
quaternions, we have Spin(4) = SU (2) × SU (2), and a pair Ω1 , Ω2 of SU (2)
matrices acts as an SO(4) rotation by
   
   [ x0 − ix3   −x2 − ix1 ]  →  Ω1 [ x0 − ix3   −x2 − ix1 ] Ω2
   [ x2 − ix1   x0 + ix3  ]        [ x2 − ix1   x0 + ix3  ]

preserving the determinant x0² + x1² + x2² + x3².


For another example, consider the identification of R4 with 2 by 2 real matrices
given by

   (x0, x1, x2, x3) ↔ [ x0 + x3   x2 + x1 ]
                      [ x2 − x1   x0 − x3 ]

Given a pair of matrices Ω1, Ω2 in SL(2, R), the linear transformation

   [ x0 + x3   x2 + x1 ]  →  Ω1 [ x0 + x3   x2 + x1 ] Ω2
   [ x2 − x1   x0 − x3 ]        [ x2 − x1   x0 − x3 ]

preserves the reality condition on the matrix, and preserves


 
   det [ x0 + x3   x2 + x1 ] = x0² + x1² − x2² − x3²
       [ x2 − x1   x0 − x3 ]

so gives an element of SO(2, 2) and we see that Spin(2, 2) = SL(2, R) ×


SL(2, R).
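The preservation of this split-signature quadratic form can be spot-checked numerically (Python sketch; the sample SL(2, R) matrices are arbitrary choices of ours):

```python
# Two independent SL(2,R) elements acting by M -> O1 M O2 preserve the
# determinant x0^2 + x1^2 - x2^2 - x3^2. Sample matrices are arbitrary
# real determinant-one choices of ours.
def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

x0, x1, x2, x3 = 1.5, -0.4, 0.9, 2.0
M = [[x0 + x3, x2 + x1], [x2 - x1, x0 - x3]]
assert abs(det(M) - (x0**2 + x1**2 - x2**2 - x3**2)) < 1e-12

O1 = [[1.0, 3.0], [0.0, 1.0]]           # det = 1
O2 = [[2.0, 1.0], [1.0, 1.0]]           # det = 1
Mp = mm(mm(O1, M), O2)
assert abs(det(Mp) - det(M)) < 1e-12    # the quadratic form is preserved
```

Since O1 and O2 are real, the transformed matrix stays real, which is the analog of the self-adjointness condition in the Lorentz case.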
The three different examples

Spin(4) = SU (2) × SU (2), Spin(3, 1) = SL(2, C)

and
Spin(2, 2) = SL(2, R) × SL(2, R)
that we have seen are all so-called “real forms” of a fact about complex groups
that one can get by complexifying any of the examples, i.e. considering elements
(x0 , x1 , x2 , x3 ) ∈ C4 , not just in R4 . For instance, in the Spin(4) case, taking
the x0 , x1 , x2 , x3 in the matrix
 
   [ x0 − ix3   −x2 − ix1 ]
   [ x2 − ix1   x0 + ix3  ]

to have arbitrary complex values z0 , z1 , z2 , z3 one gets arbitrary 2 by 2 complex


matrices, and the transformation
   
   [ z0 − iz3   −z2 − iz1 ]  →  Ω1 [ z0 − iz3   −z2 − iz1 ] Ω2
   [ z2 − iz1   z0 + iz3  ]        [ z2 − iz1   z0 + iz3  ]

preserves this space as well as the determinant (z0² + z1² + z2² + z3²) for Ω1 and Ω2
not just in SU (2), but in the larger group SL(2, C). So we find that the group
SO(4, C) of complex orthogonal transformations of C4 has spin double cover

Spin(4, C) = SL(2, C) × SL(2, C)

Since spin(4, C) = so(3, 1) ⊗ C, this relation between complex Lie groups corre-
sponds to the Lie algebra relation

so(3, 1) ⊗ C = sl(2, C) × sl(2, C)

we found explicitly earlier when we showed that by taking complex coefficients


of generators lj and kj of so(3, 1) we could find generators Aj and Bj of two
different sl(2, C) sub-algebras.

37.4 For further reading


Those not familiar with special relativity should consult a textbook on the
subject for the physics background necessary to appreciate the significance of
Minkowski space and its Lorentz group of invariances. An example of a suitable
such book aimed at mathematics students is Woodhouse’s Special Relativity [76].
Most quantum field theory textbooks have some sort of discussion of the
Lorentz group and its Lie algebra, although the issue of its complexification is
often not treated. A typical example is Peskin-Schroeder [46], see the beginning
of Chapter 3. Another example is Quantum Field Theory in a Nutshell by
Tony Zee, see Chapter II.3 [77] (and test your understanding by interpreting
properly some of the statements included there such as “The mathematically
sophisticated say the algebra SO(3, 1) is isomorphic to SU (2) ⊗ SU (2)”).

Chapter 38

Representations of the
Lorentz Group

Having seen the importance in quantum mechanics of understanding the repre-


sentations of the rotation group SO(3) and its double cover Spin(3) = SU (2)
one would like to also understand the representations of the Lorentz group. We’ll
consider this question for the double cover SL(2, C). As in the three-dimensional
case, only some of these will also be representations of SO(3, 1). One difference
from the SO(3) case is that these will be non-unitary representations, so do not
by themselves provide physically sensible state spaces. All finite-dimensional
irreducible representations of the Lorentz group are non-unitary (except for the
trivial representation). The Lorentz group does have unitary irreducible repre-
sentations, but these are infinite-dimensional and a topic we will not cover.

38.1 Representations of the Lorentz group


In the SU (2) case we found irreducible unitary representations (πn , V n ) of di-
mension n+1 for n = 0, 1, 2, . . .. These could also be labeled by s = n/2, called the
“spin” of the representation, and we will do that from now on. These represen-
tations can be realized explicitly as homogeneous polynomials of degree n = 2s
in two complex variables z1 , z2 . For the case of Spin(4) = SU (2) × SU (2), the
irreducible representation will just be tensor products
V s1 ⊗ V s2
of SU (2) irreducibles, with the first SU (2) acting on the first factor, the second
on the second factor. The case s1 = s2 = 0 is the trivial representation, s1 = 1/2,
s2 = 0 is one of the half-spinor representations of Spin(4) on C2, s1 = 0, s2 = 1/2
is the other, and s1 = s2 = 1/2 is the representation on four-dimensional
(complexified) vectors.
Turning now to Spin(3, 1) = SL(2, C), one can take the SU (2) matrices
acting on homogeneous polynomials to instead be SL(2, C) matrices and still

have irreducible representations of dimension 2s + 1 for s = 0, 1/2, 1, . . .. These
will now be representations (πs , V s ) of SL(2, C). There are several things that
are different though about these representations:
• They are not unitary (except in the case of the trivial representation).
For example, for the defining representation V^{1/2} on C2 and the Hermitian
inner product < ·, · >

   < (ψ1, ψ2)ᵀ , (ψ1′, ψ2′)ᵀ > = (ψ̄1  ψ̄2) · (ψ1′, ψ2′)ᵀ = ψ̄1ψ1′ + ψ̄2ψ2′

is invariant under SU (2) transformations Ω since


   < Ω(ψ1, ψ2)ᵀ , Ω(ψ1′, ψ2′)ᵀ > = (ψ̄1  ψ̄2) Ω† Ω (ψ1′, ψ2′)ᵀ

and Ω† Ω = 1 by unitarity. This is no longer true for Ω ∈ SL(2, C).


• The condition that matrices Ω ∈ SL(2, C) do satisfy is that they have
determinant 1. It turns out that Ω having determinant one is equivalent
to the condition that Ω preserves the antisymmetric bilinear form on C2 ,
i.e.

   Ωᵀ [  0  1 ] Ω = [  0  1 ]
      [ −1  0 ]     [ −1  0 ]

To see this, take

   Ω = [ α  β ]
       [ γ  δ ]

and calculate

   Ωᵀ [  0  1 ] Ω = [    0       αδ − βγ ] = (det Ω) [  0  1 ]
      [ −1  0 ]     [ βγ − αδ       0    ]           [ −1  0 ]
As a result, on representations V^{1/2} of SL(2, C) we do have a non-degenerate
bilinear form

   ( (ψ1, ψ2)ᵀ , (ψ1′, ψ2′)ᵀ ) = (ψ1  ψ2) [  0  1 ] (ψ1′, ψ2′)ᵀ = ψ1ψ2′ − ψ2ψ1′
                                          [ −1  0 ]
that is invariant under the SL(2, C) action on V^{1/2} and can be used to
identify the representation and its dual.
Such a non-degenerate bilinear form is called a “symplectic form”, and we
have already made extensive use of these as a fundamental structure that
occurs on a Hamiltonian phase space. For the simplest case of the phase
space R2 for one degree of freedom, such a form is given with respect to
a basis as the same matrix
 
   ε = [  0  1 ]
       [ −1  0 ]
that we find here giving the symplectic form on a representation space C2
of SL(2, C). In the phase space case, everything was real, and the invariance
group of ε was the real symplectic group Sp(2, R) = SL(2, R). What
occurs here is just the complexification of this story, with the symplectic
form now on C2 , and the invariance group now SL(2, C).
• In the case of SU(2) representations, the complex conjugate representation
one gets by taking as representation matrices π̄(g) instead of π(g) is
equivalent to the original representation (the same representation, with a
different basis choice, so matrices changed by a conjugation). To see this
for the spin-1/2 representation, note that SU (2) matrices are of the form
 
   Ω = [  α   β ]
       [ −β̄   ᾱ ]

and one has

   [  0  1 ] [  ᾱ   β̄ ] [  0  1 ]⁻¹ = [  α   β ]
   [ −1  0 ] [ −β   α ] [ −1  0 ]      [ −β̄   ᾱ ]

so the matrix

   [  0  1 ]
   [ −1  0 ]
is the change of basis matrix relating the representation and its complex
conjugate.
This is no longer true for SL(2, C). One cannot complex conjugate arbi-
trary 2 by 2 complex matrices of unit determinant by a change of basis,
and representations πs will not be equivalent to their complex conjugates
π̄s.
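Both computations above lend themselves to a quick numerical check (Python sketch; the sample matrices are ours): the first assertion verifies ΩᵀεΩ = (det Ω)ε for a generic complex matrix, the second that conjugation by ε relates an SU(2) matrix to its complex conjugate.

```python
# (i) Omega^T eps Omega = det(Omega) eps for any 2x2 complex Omega;
# (ii) for Omega in SU(2), eps Omega-bar eps^-1 = Omega.
# Sample matrices are arbitrary choices of ours.
EPS = [[0, 1], [-1, 0]]
EPS_INV = [[0, -1], [1, 0]]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12
               for i in range(2) for j in range(2))

Om = [[1 + 2j, 0.5j], [3.0, 1 - 1j]]                 # generic complex matrix
d = Om[0][0] * Om[1][1] - Om[0][1] * Om[1][0]
OmT = [[Om[j][i] for j in range(2)] for i in range(2)]
assert close(mm(mm(OmT, EPS), Om), [[d * e for e in row] for row in EPS])

al, be = (1 + 1j) / 2, (1 - 1j) / 2                  # |al|^2 + |be|^2 = 1
U = [[al, be], [-be.conjugate(), al.conjugate()]]    # an SU(2) element
Ubar = [[e.conjugate() for e in row] for row in U]
assert close(mm(mm(EPS, Ubar), EPS_INV), U)          # eps Ubar eps^-1 = U
```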
The classification of irreducible finite dimensional SU (2) representations was
done earlier in this course by considering its Lie algebra su(2), complexified to
give us raising and lowering operators, and this complexification is sl(2, C). If
you take a look at that argument, you see that it mostly also applies to irre-
ducible finite-dimensional sl(2, C) representations. There is a difference though:
now flipping positive to negative weights (which corresponds to change of sign
of the Lie algebra representation matrices, or conjugation of the Lie group rep-
resentation matrices) no longer takes one to an equivalent representation. It
turns out that to get all irreducibles, one must take both the representations
we already know about and their complex conjugates. Using the fact that the
tensor product of one of each type of irreducible is still an irreducible, one can
show (we won’t do this here) that the complete list of irreducible representations
of sl(2, C) is given by
Theorem (Classification of finite dimensional sl(2, C) representations). The irreducible representations of sl(2, C) are labeled by (s1, s2) for s_j = 0, 1/2, 1, .... These representations are built out of the representations (π_s, V^s), with the irreducible (s1, s2) given by

$$(\pi_{s_1} \otimes \overline{\pi}_{s_2},\ V^{s_1} \otimes \overline{V}^{s_2})$$

and having dimension (2s1 + 1)(2s2 + 1).
All these representations are also representations of the group SL(2, C) and
one has the same classification theorem for the group, although we will not try
and prove this. We will also not try and study these representations in general,
but will restrict attention to the four cases of most physical interest.
• (0, 0): The trivial representation on C, also called the “spin 0” or scalar
representation.
• (1/2, 0): These are called left-handed (for reasons we will see later on) “Weyl
spinors”. We will often denote the representation space C2 in this case as
SL , and write an element of it as ψL .
• (0, 1/2): These are called right-handed Weyl spinors. We will often denote
the representation space C2 in this case as SR , and write an element of it
as ψR .
Note that the representations of SL(2, C) on SL and SR are described
explicitly below.
• (1/2, 1/2): This is called the “vector” representation since it is the complexifi-
cation of the action of SL(2, C) as SO(3, 1) transformations of space-time
vectors that we saw earlier. Recall that for Ω ∈ SL(2, C) this action was

$$\begin{pmatrix} x_0 + x_3 & x_1 - ix_2 \\ x_1 + ix_2 & x_0 - x_3 \end{pmatrix} \to \Omega \begin{pmatrix} x_0 + x_3 & x_1 - ix_2 \\ x_1 + ix_2 & x_0 - x_3 \end{pmatrix} \Omega^\dagger$$
Since Ω† is the conjugate transpose this is the action of SL(2, C) on the
representation SL ⊗ SR .
add an explicit identification of matrices and the tensor product
This representation is on a vector space C4 = M (2, C), but preserves
the subspace of self-adjoint matrices that we have identified with the
Minkowski space R4 .
The reducible 4 complex dimensional representation (1/2, 0) ⊕ (0, 1/2) is known as
the representation on “Dirac spinors”. As explained earlier, of these representa-
tions, only the trivial one is unitary. Only the trivial and vector representations
are representations of SO(3, 1) as well as SL(2, C).
One can manipulate these Weyl spinor representations (1/2, 0) and (0, 1/2) in
a similar way to the treatment of tangent vectors and their duals in tensor
analysis. Just like in that formalism, one can distinguish between a represen-
tation space and its dual by upper and lower indices, in this case using not the metric but the SL(2, C) invariant bilinear form $\epsilon$ to raise and lower indices. With complex conjugates and duals, there are four kinds of irreducible SL(2, C) representations on C² to keep track of:
• S_L: This is the standard defining representation of SL(2, C) on C², with Ω ∈ SL(2, C) acting on ψ_L ∈ S_L by

$$\psi_L \to \Omega\psi_L$$

A standard index notation for such things is called the “van der Waerden notation”. It uses a lower index α, taking values 1, 2, to label the components

$$\psi_L = \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} = \psi_\alpha$$

and in this notation Ω acts by

$$\psi_\alpha \to \Omega_\alpha^{\ \beta}\psi_\beta$$

For instance, the element

$$\Omega = e^{-i\frac{\theta}{2}\sigma_3}$$

that acts on vectors by a rotation by an angle θ around the z-axis acts on S_L by

$$\begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} \to e^{-i\frac{\theta}{2}\sigma_3}\begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix}$$

• S_L^*: This is the dual of the defining representation, with Ω ∈ SL(2, C) acting on ψ ∈ S_L^* by

$$\psi \to (\Omega^{-1})^T\psi$$

This is a general property of representations: given any finite-dimensional representation (π(g), V), the pairing between V and its dual V^* is preserved by acting on V^* by matrices (π(g)^{-1})^T, and these provide a representation ((π(g)^{-1})^T, V^*). In van der Waerden notation, one uses upper indices and writes

$$\psi^\alpha \to ((\Omega^{-1})^T)^\alpha_{\ \beta}\psi^\beta$$

Writing elements of the dual as row vectors, our example above of a particular Ω acts by

$$\begin{pmatrix} \psi^1 & \psi^2 \end{pmatrix} \to \begin{pmatrix} \psi^1 & \psi^2 \end{pmatrix} e^{i\frac{\theta}{2}\sigma_3}$$

Note that the matrix $\epsilon$ gives an isomorphism of representations between S_L and S_L^*, given in index notation by

$$\psi^\alpha = \epsilon^{\alpha\beta}\psi_\beta$$

where

$$\epsilon^{\alpha\beta} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$
• S_R: This is the complex conjugate representation to S_L, with Ω ∈ SL(2, C) acting on ψ_R ∈ S_R by

$$\psi_R \to \overline{\Omega}\psi_R$$

The van der Waerden notation uses a separate set of dotted indices for these, writing this as

$$\psi_{\dot\alpha} \to \overline{\Omega}_{\dot\alpha}^{\ \dot\beta}\psi_{\dot\beta}$$

Another common notation among physicists puts a bar over the ψ to denote that the vector is in this representation, but we’ll reserve that notation for complex conjugation. The Ω corresponding to a rotation about the z-axis acts as

$$\begin{pmatrix} \psi_{\dot 1} \\ \psi_{\dot 2} \end{pmatrix} \to e^{i\frac{\theta}{2}\sigma_3}\begin{pmatrix} \psi_{\dot 1} \\ \psi_{\dot 2} \end{pmatrix}$$


• S_R^*: This is the dual representation to S_R, with Ω ∈ SL(2, C) acting on ψ ∈ S_R^* by

$$\psi \to (\overline{\Omega}^{-1})^T\psi$$

and the index notation uses raised dotted indices

$$\psi^{\dot\alpha} \to ((\overline{\Omega}^{-1})^T)^{\dot\alpha}_{\ \dot\beta}\psi^{\dot\beta}$$

Our standard example of an Ω acts by

$$\begin{pmatrix} \psi^{\dot 1} & \psi^{\dot 2} \end{pmatrix} \to \begin{pmatrix} \psi^{\dot 1} & \psi^{\dot 2} \end{pmatrix} e^{-i\frac{\theta}{2}\sigma_3}$$

Another copy of $\epsilon$

$$\epsilon^{\dot\alpha\dot\beta} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$$

gives the isomorphism of S_R and S_R^* as representations, by

$$\psi^{\dot\alpha} = \epsilon^{\dot\alpha\dot\beta}\psi_{\dot\beta}$$

Restricting to the SU (2) subgroup of SL(2, C), all these representations


are unitary, and equivalent. As SL(2, C) representations, they are not unitary,
and while the representations are equivalent to their duals, SL and SR are
inequivalent (since as we have seen one cannot complex conjugate SL(2, C)
matrices by a conjugation).
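The role of $\epsilon$ as an intertwiner between the defining representation and its dual can also be checked directly: for any unit determinant 2 by 2 matrix, conjugating by $\epsilon$ produces the matrix $(\Omega^{-1})^T$ of the dual action. A numerical sketch (sample entries arbitrary):

```python
import numpy as np

eps = np.array([[0, 1], [-1, 0]])

# an arbitrary SL(2,C) element (rescaled to unit determinant)
Omega = np.array([[0.8 + 0.2j, -0.4], [0.3j, 1.1 - 0.1j]])
Omega = Omega / np.sqrt(np.linalg.det(Omega))

# changing basis by eps turns the defining action Omega into the dual
# action (Omega^{-1})^T, i.e. raising the index with eps is an intertwiner
lhs = eps @ Omega @ np.linalg.inv(eps)
rhs = np.linalg.inv(Omega).T
print(np.allclose(lhs, rhs))  # True
```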

38.2 Dirac γ matrices and Cliff(3, 1)


In our discussion of the fermionic version of the harmonic oscillator, we defined
the Clifford algebra Cliff(r, s) and found that elements quadratic in its genera-
tors gave a basis for the Lie algebra of so(r, s) = spin(r, s). Exponentiating these
gave an explicit construction of the group Spin(r, s). We can apply that general

theory to the case of Cliff(3, 1) and this will give us explicitly the representations
(1/2, 0) and (0, 1/2).
If we complexify our R4 , then its Clifford algebra becomes just the algebra
of 4 by 4 complex matrices

Cliff(3, 1) ⊗ C = Cliff(4, C) = M (4, C)

We will represent elements of Cliff(3, 1) as such 4 by 4 matrices, but should


keep in mind that we are working in the complexification of the Clifford algebra
that corresponds to the Lorentz group, so there is some sort of condition on
the matrices that should be kept track of. There are several different choices of
how to explicitly represent these matrices, and for different purposes, different
ones are most convenient. The one we will begin with and mostly use is some-
times called the chiral or Weyl representation, and is the most convenient for
discussing massless charged particles. We will try and follow the conventions
used for this representation in [71].

Digression. Note that the Aj and Bj we constructed using the lj and kj were
also complex 4 by 4 matrices, but they were acting on complex vectors (the com-
plexification of the vector representation (1/2, 1/2)). Now we want 4 by 4 matrices
for something different, putting together the spinor representations (1/2, 0) and
(0, 1/2).

Writing 4 by 4 matrices in 2 by 2 block form and using the Pauli matrices


σ_j we assign the following matrices to Clifford algebra generators

$$\gamma_0 = -i\begin{pmatrix} 0 & \mathbf{1} \\ \mathbf{1} & 0 \end{pmatrix},\quad \gamma_1 = -i\begin{pmatrix} 0 & \sigma_1 \\ -\sigma_1 & 0 \end{pmatrix},\quad \gamma_2 = -i\begin{pmatrix} 0 & \sigma_2 \\ -\sigma_2 & 0 \end{pmatrix},\quad \gamma_3 = -i\begin{pmatrix} 0 & \sigma_3 \\ -\sigma_3 & 0 \end{pmatrix}$$

One can easily check that these satisfy the Clifford algebra relations for generators of Cliff(3, 1): they anticommute with each other and

$$\gamma_0^2 = -\mathbf{1},\quad \gamma_1^2 = \gamma_2^2 = \gamma_3^2 = \mathbf{1}$$
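These relations are mechanical to verify. The following sketch builds the four matrices and checks the anticommutation relations $\{\gamma_a, \gamma_b\} = 2\eta_{ab}\mathbf{1}$ with $\eta = \mathrm{diag}(-1, 1, 1, 1)$, along with the diagonal form of the element $\gamma_5$ that appears below:

```python
import numpy as np

one, Z = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

gamma = [-1j * np.block([[Z, one], [one, Z]])]               # gamma_0
gamma += [-1j * np.block([[Z, s], [-s, Z]]) for s in sigma]  # gamma_1..3

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
for a in range(4):
    for b in range(4):
        anti = gamma[a] @ gamma[b] + gamma[b] @ gamma[a]
        assert np.allclose(anti, 2 * eta[a, b] * np.eye(4))

gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
print(np.allclose(gamma5, np.diag([-1, -1, 1, 1])))  # True
```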

The quadratic Clifford algebra elements $-\frac{1}{2}\gamma_j\gamma_k$ (for $j \neq k$) satisfy the commutation relations of so(3, 1). These are explicitly

$$-\frac{1}{2}\gamma_1\gamma_2 = -\frac{i}{2}\begin{pmatrix} \sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix},\quad -\frac{1}{2}\gamma_3\gamma_1 = -\frac{i}{2}\begin{pmatrix} \sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix},\quad -\frac{1}{2}\gamma_2\gamma_3 = -\frac{i}{2}\begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_1 \end{pmatrix}$$

and

$$-\frac{1}{2}\gamma_0\gamma_1 = \frac{1}{2}\begin{pmatrix} -\sigma_1 & 0 \\ 0 & \sigma_1 \end{pmatrix},\quad -\frac{1}{2}\gamma_0\gamma_2 = \frac{1}{2}\begin{pmatrix} -\sigma_2 & 0 \\ 0 & \sigma_2 \end{pmatrix},\quad -\frac{1}{2}\gamma_0\gamma_3 = \frac{1}{2}\begin{pmatrix} -\sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}$$

They provide a representation $(\pi', \mathbf{C}^4)$ of the Lie algebra so(3, 1) with

$$\pi'(l_1) = -\frac{1}{2}\gamma_2\gamma_3,\quad \pi'(l_2) = -\frac{1}{2}\gamma_3\gamma_1,\quad \pi'(l_3) = -\frac{1}{2}\gamma_1\gamma_2$$

and

$$\pi'(k_1) = -\frac{1}{2}\gamma_0\gamma_1,\quad \pi'(k_2) = -\frac{1}{2}\gamma_0\gamma_2,\quad \pi'(k_3) = -\frac{1}{2}\gamma_0\gamma_3$$
Note that the $\pi'(l_j)$ are skew-adjoint, since this representation of the so(3) ⊂ so(3, 1) sub-algebra is unitary. The $\pi'(k_j)$ are self-adjoint and this representation $\pi'$ of so(3, 1) is not unitary.
On the two commuting sl(2, C) subalgebras of so(3, 1) ⊗ C with bases

$$A_j = \frac{1}{2}(l_j + ik_j),\quad B_j = \frac{1}{2}(l_j - ik_j)$$

this representation is

$$\pi'(A_1) = -\frac{i}{2}\begin{pmatrix} \sigma_1 & 0 \\ 0 & 0 \end{pmatrix},\quad \pi'(A_2) = -\frac{i}{2}\begin{pmatrix} \sigma_2 & 0 \\ 0 & 0 \end{pmatrix},\quad \pi'(A_3) = -\frac{i}{2}\begin{pmatrix} \sigma_3 & 0 \\ 0 & 0 \end{pmatrix}$$

and

$$\pi'(B_1) = -\frac{i}{2}\begin{pmatrix} 0 & 0 \\ 0 & \sigma_1 \end{pmatrix},\quad \pi'(B_2) = -\frac{i}{2}\begin{pmatrix} 0 & 0 \\ 0 & \sigma_2 \end{pmatrix},\quad \pi'(B_3) = -\frac{i}{2}\begin{pmatrix} 0 & 0 \\ 0 & \sigma_3 \end{pmatrix}$$
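A numerical check of this block structure (a sketch; note that it takes $\pi'(l_2) = -\frac{1}{2}\gamma_3\gamma_1$, the ordering consistent with the su(2) commutation conventions):

```python
import numpy as np

one, Z = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
gamma = [-1j * np.block([[Z, one], [one, Z]])]
gamma += [-1j * np.block([[Z, s], [-s, Z]]) for s in sigma]

# spinor representation of rotations and boosts
l = [-0.5 * gamma[2] @ gamma[3],      # pi'(l1)
     -0.5 * gamma[3] @ gamma[1],      # pi'(l2)
     -0.5 * gamma[1] @ gamma[2]]      # pi'(l3)
k = [-0.5 * gamma[0] @ gamma[j] for j in (1, 2, 3)]

for j in range(3):
    A = 0.5 * (l[j] + 1j * k[j])
    B = 0.5 * (l[j] - 1j * k[j])
    # A_j acts only on the upper (S_L) block, B_j only on the lower one
    assert np.allclose(A, np.block([[-0.5j * sigma[j], Z], [Z, Z]]))
    assert np.allclose(B, np.block([[Z, Z], [Z, -0.5j * sigma[j]]]))
print("A_j and B_j act on complementary 2-component blocks")
```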

We see explicitly that the action of the quadratic elements of the Clifford algebra on the spinor representation C⁴ is reducible, decomposing as the direct sum S_L ⊕ S_R of two inequivalent representations on C²

$$\Psi = \begin{pmatrix} \psi_L \\ \psi_R^* \end{pmatrix}$$

with complex conjugation (interchange of A_j and B_j) relating the sl(2, C) actions on the components. The A_j act just on S_L, the B_j just on S_R. An alternative standard notation to the two-component van der Waerden notation is to use the four components of C⁴ with the action of the γ matrices. The relation between the two notations is given by

$$\Psi_a \leftrightarrow \begin{pmatrix} \psi_\alpha \\ \phi^{\dot\alpha} \end{pmatrix}$$

where the index a on the left takes values 1, 2, 3, 4 and the indices α, α̇ on the right each take values 1, 2.
An important element of the Clifford algebra is constructed by multiplying all of the basis elements together. Physicists traditionally multiply this by i to make it self-adjoint and define

$$\gamma_5 = i\gamma_0\gamma_1\gamma_2\gamma_3 = \begin{pmatrix} -\mathbf{1} & 0 \\ 0 & \mathbf{1} \end{pmatrix}$$

This can be used to produce projection operators from the Dirac spinors onto the left and right-handed Weyl spinors

$$\frac{1}{2}(\mathbf{1} - \gamma_5)\Psi = \psi_L,\quad \frac{1}{2}(\mathbf{1} + \gamma_5)\Psi = \psi_R^*$$
There are two other commonly used representations of the Clifford algebra relations, related to the one above by a change of basis. The Dirac representation is useful to describe massive charged particles, especially in the non-relativistic limit. Generators are given by

$$\gamma_0^D = -i\begin{pmatrix} \mathbf{1} & 0 \\ 0 & -\mathbf{1} \end{pmatrix},\quad \gamma_1^D = -i\begin{pmatrix} 0 & \sigma_1 \\ -\sigma_1 & 0 \end{pmatrix},\quad \gamma_2^D = -i\begin{pmatrix} 0 & \sigma_2 \\ -\sigma_2 & 0 \end{pmatrix},\quad \gamma_3^D = -i\begin{pmatrix} 0 & \sigma_3 \\ -\sigma_3 & 0 \end{pmatrix}$$

and the projection operators for Weyl spinors are no longer diagonal, since

$$\gamma_5^D = \begin{pmatrix} 0 & \mathbf{1} \\ \mathbf{1} & 0 \end{pmatrix}$$

A third representation, the Majorana representation, is given by (now no longer writing in 2 by 2 block form, but as 4 by 4 matrices)

$$\gamma_0^M = \begin{pmatrix} 0&0&0&-1 \\ 0&0&1&0 \\ 0&-1&0&0 \\ 1&0&0&0 \end{pmatrix},\quad \gamma_1^M = \begin{pmatrix} 1&0&0&0 \\ 0&-1&0&0 \\ 0&0&1&0 \\ 0&0&0&-1 \end{pmatrix}$$

$$\gamma_2^M = \begin{pmatrix} 0&0&0&1 \\ 0&0&-1&0 \\ 0&-1&0&0 \\ 1&0&0&0 \end{pmatrix},\quad \gamma_3^M = \begin{pmatrix} 0&-1&0&0 \\ -1&0&0&0 \\ 0&0&0&-1 \\ 0&0&-1&0 \end{pmatrix}$$

with

$$\gamma_5^M = i\begin{pmatrix} 0&-1&0&0 \\ 1&0&0&0 \\ 0&0&0&1 \\ 0&0&-1&0 \end{pmatrix}$$
The importance of the Majorana representation is that it shows the interesting
possibility of having (in signature (3, 1)) a spinor representation on a real vector
space R4 , since one sees that the Clifford algebra matrices can be chosen to be
real. One has

$$\gamma_0\gamma_1\gamma_2\gamma_3 = \begin{pmatrix} 0&-1&0&0 \\ 1&0&0&0 \\ 0&0&0&1 \\ 0&0&-1&0 \end{pmatrix}$$

and

$$(\gamma_0\gamma_1\gamma_2\gamma_3)^2 = -\mathbf{1}$$
The Majorana spinor representation is on SM = R4 , with γ0 γ1 γ2 γ3 a real
operator on this space with square −1, so it provides a complex structure on
SM . Recall that a complex structure on a real vector space gives a splitting of

the complexification of the real vector space into a sum of two complex vector
spaces, related by complex conjugation. In this case this corresponds to

SM ⊗ C = SL ⊕ SR

the fact that complexifying Majorana spinors gives the two kinds of Weyl
spinors.
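As a check, one can verify numerically that these real matrices satisfy the Cliff(3, 1) relations and that the product of all four generators squares to −1, so is a complex structure:

```python
import numpy as np

g0 = np.array([[0, 0, 0, -1], [0, 0, 1, 0], [0, -1, 0, 0], [1, 0, 0, 0]])
g1 = np.diag([1, -1, 1, -1])
g2 = np.array([[0, 0, 0, 1], [0, 0, -1, 0], [0, -1, 0, 0], [1, 0, 0, 0]])
g3 = np.array([[0, -1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, -1], [0, 0, -1, 0]])
gamma = [g0, g1, g2, g3]

# all four generators are real and satisfy the Cliff(3,1) relations
eta = np.diag([-1, 1, 1, 1])
for a in range(4):
    for b in range(4):
        anti = gamma[a] @ gamma[b] + gamma[b] @ gamma[a]
        assert np.allclose(anti, 2 * eta[a, b] * np.eye(4))

# the product of all four generators gives a complex structure on R^4
J = g0 @ g1 @ g2 @ g3
print(np.allclose(J @ J, -np.eye(4)))  # True
```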

38.3 For further reading


Most quantum field theory textbooks have extensive discussions of spinor rep-
resentations of the Lorentz group and gamma matrices, although most use the
opposite convention for the signature of the Minkowski metric. Typical exam-
ples are Peskin-Schroeder [46] and Quantum Field Theory in a Nutshell by Tony
Zee, see Chapter II.3 and Appendix E [77].

402
Chapter 39

The Poincaré Group and its


Representations

In the previous chapter we saw that one can take the semi-direct product of
spatial translations and rotations and that the resulting group has infinite-
dimensional unitary representations on the state space of a quantum free parti-
cle. The free particle Hamiltonian plays the role of a Casimir operator: to get
irreducible representations one fixes the eigenvalue of the Hamiltonian (the en-
ergy), and then the representation is on the space of solutions to the Schrödinger
equation with this energy. This is a non-relativistic procedure, treating time and
space (and correspondingly the Hamiltonian and the momenta) differently. For
a relativistic analog, we will use instead the semi-direct product of space-time
translations and Lorentz transformations. Irreducible representations of this
group will be labeled by a continuous parameter (the mass) and a discrete pa-
rameter (the spin or helicity), and these will correspond to possible relativistic
elementary particles.
In the non-relativistic case, the representation occurred as a space of solu-
tions to a differential equation, the Schrödinger equation. There is an analogous
description of the irreducible Poincaré group representations as spaces of solu-
tions of relativistic wave equations, but we will put off that story until succeeding
chapters.

39.1 The Poincaré group and its Lie algebra


Definition (Poincaré group). The Poincaré group is the semi-direct product

$$P = \mathbf{R}^4 \rtimes SO(3, 1)$$

with double-cover

$$\tilde{P} = \mathbf{R}^4 \rtimes SL(2, \mathbf{C})$$

The action of SO(3, 1) or SL(2, C) on R⁴ is the action of the Lorentz group on Minkowski space.

We will refer to both of these groups as the “Poincaré group”, meaning by
this the double-cover only when we need it because spinor representations of the
Lorentz group are involved. The two groups have the same Lie algebra, so the
distinction is not needed in discussions that only need the Lie algebra. Elements
of the group P will be written as pairs (a, Λ), with a ∈ R4 and Λ ∈ SO(3, 1).
The group law is

(a1 , Λ1 )(a2 , Λ2 ) = (a1 + Λ1 a2 , Λ1 Λ2 )

The Lie algebra LieP = LieP̃ has dimension 10, with basis

t0 , t1 , t2 , t3 , l1 , l2 , l3 , k1 , k2 , k3

where the first four elements are a basis of the Lie algebra of the translation
group, and the next six are a basis of so(3, 1), with the lj giving the subgroup of
spatial rotations, the kj the boosts. We already know the commutation relations
for the translation subgroup, which is commutative so

[tj , tk ] = 0

We have seen that the commutation relations for so(3, 1) are

[l1 , l2 ] = l3 , [l2 , l3 ] = l1 , [l3 , l1 ] = l2

[k1 , k2 ] = −l3 , [k3 , k1 ] = −l2 , [k2 , k3 ] = −l1


and that the commutation relations between the lj and kj correspond to the
fact that the kj transform as a vector under spatial rotations, so for example
commuting the kj with l1 gives an infinitesimal rotation about the 1-axis and

[l1 , k1 ] = 0, [l1 , k2 ] = k3 , [l1 , k3 ] = −k2

The Poincaré group is a semi-direct product group of the sort discussed in


chapter 16 and it can be represented as a group of 5 by 5 matrices in much the
same way as elements of the Euclidean group E(3) could be represented by 4
by 4 matrices (see chapter 17). Writing out this isomorphism explicitly for a
basis of the Lie algebra, we have

$$l_1 \leftrightarrow \begin{pmatrix} 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&-1&0 \\ 0&0&1&0&0 \\ 0&0&0&0&0 \end{pmatrix},\quad l_2 \leftrightarrow \begin{pmatrix} 0&0&0&0&0 \\ 0&0&0&1&0 \\ 0&0&0&0&0 \\ 0&-1&0&0&0 \\ 0&0&0&0&0 \end{pmatrix},\quad l_3 \leftrightarrow \begin{pmatrix} 0&0&0&0&0 \\ 0&0&-1&0&0 \\ 0&1&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \end{pmatrix}$$

$$k_1 \leftrightarrow \begin{pmatrix} 0&1&0&0&0 \\ 1&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \end{pmatrix},\quad k_2 \leftrightarrow \begin{pmatrix} 0&0&1&0&0 \\ 0&0&0&0&0 \\ 1&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \end{pmatrix},\quad k_3 \leftrightarrow \begin{pmatrix} 0&0&0&1&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \\ 1&0&0&0&0 \\ 0&0&0&0&0 \end{pmatrix}$$

$$t_0 \leftrightarrow \begin{pmatrix} 0&0&0&0&1 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \end{pmatrix},\quad t_1 \leftrightarrow \begin{pmatrix} 0&0&0&0&0 \\ 0&0&0&0&1 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \end{pmatrix},\quad t_2 \leftrightarrow \begin{pmatrix} 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&1 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \end{pmatrix},\quad t_3 \leftrightarrow \begin{pmatrix} 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&1 \\ 0&0&0&0&0 \end{pmatrix}$$
We can use this explicit matrix representation to compute the commutators
of the infinitesimal translations t_j with the infinitesimal rotations and boosts (l_j, k_j): t_0 commutes with the l_j, and t_1, t_2, t_3 transform as a vector under rotations. For instance, for infinitesimal rotations about the 1-axis

$$[l_1, t_1] = 0,\quad [l_1, t_2] = t_3,\quad [l_1, t_3] = -t_2$$

with similar relations for the other axes.


For boosts one has

$$[k_j, t_0] = t_j,\quad [k_j, t_j] = t_0,\quad [k_j, t_k] = 0 \ \text{ if } j \neq k,\ k \neq 0$$

Note that infinitesimal boosts do not commute with infinitesimal time transla-
tion, so after quantization boost will not commute with the Hamiltonian and
thus are not the sort of symmetries which act on spaces of energy eigenstates,
preserving the energy.
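All of the commutation relations above can be verified mechanically from the 5 by 5 matrices. A sketch:

```python
import numpy as np

def E(i, j):
    """5x5 matrix with a single 1 in row i, column j (0-indexed)."""
    m = np.zeros((5, 5))
    m[i, j] = 1.0
    return m

# rotations, boosts and translations in the 5 by 5 representation
l1, l2, l3 = E(3, 2) - E(2, 3), E(1, 3) - E(3, 1), E(2, 1) - E(1, 2)
k1, k2, k3 = E(0, 1) + E(1, 0), E(0, 2) + E(2, 0), E(0, 3) + E(3, 0)
t0, t1, t2, t3 = E(0, 4), E(1, 4), E(2, 4), E(3, 4)

def comm(a, b):
    return a @ b - b @ a

assert np.allclose(comm(l1, l2), l3)                # rotations close into so(3)
assert np.allclose(comm(k1, k2), -l3)               # two boosts give a rotation
assert np.allclose(comm(l1, t2), t3)                # t_j transforms as a vector
assert np.allclose(comm(l1, t0), np.zeros((5, 5)))  # t0 is rotation invariant
assert np.allclose(comm(k1, t0), t1)                # boosts mix t0 and t1
assert np.allclose(comm(k1, t1), t0)
print("commutation relations verified")
```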

39.2 Representations of the Poincaré group


We want to find unitary irreducible representations of the Poincaré group. These
will be infinite dimensional, so given by operators π(g) on a Hilbert space H,
which will have an interpretation as a single-particle relativistic quantum state
space. The standard physics notation for the operators giving the representation
is U (a, Λ), with the U emphasizing their unitarity. To classify these representa-
tions, we recall from chapter 18 that irreducible representations of semi-direct
products N o K are associated with pairs of a K-orbit Oα in the space N̂ and
an irreducible representation of the corresponding little group Kα .
For the Poincaré group, N̂ = R4 is the space of characters (one-dimensional
representations) of the translation group of Minkowski space. These are labeled
by an element p = (p0 , p1 , p2 , p3 ) that has a physical interpretation as the energy-
momentum vector of the state such that π(x) (for x in the translation group
N ) acts as multiplication by

$$e^{i(-p_0x_0 + p_1x_1 + p_2x_2 + p_3x_3)}$$
Equivalently, the p0 , p1 , p2 , p3 are the eigenvalues of the energy and momentum
operators

P0 = −iπ 0 (t0 ), P1 = iπ 0 (t1 ), P2 = iπ 0 (t2 ), P3 = iπ 0 (t3 )

that give the representation of the translation part of the Poincaré group Lie
algebra on the states.
The Lorentz group acts on this R4 by

p → Λp

and, restricting attention to the p0–p3 plane, the picture of the orbits looks like this:

[Figure: Lorentz group orbits in the p0–p3 plane — the positive and negative energy sheets of the mass hyperboloids, the null cone, the one-sheeted space-like hyperboloids, and the origin]
Unlike the Euclidean group case, here there are several different kinds of
orbits Oα . We’ll examine them and the corresponding stabilizer groups Kα
each in turn, and see what can be said about the associated representations.
One way to understand the equations describing these orbits is to note that
the different orbits correspond to different eigenvalues of the Poincaré group
Casimir operator
$$P^2 = -P_0^2 + P_1^2 + P_2^2 + P_3^2$$
This operator commutes with all the generators of the Lie algebra of the Poincaré
group, so by Schur’s lemma it must act as a scalar times the identity on an
irreducible representation (recall that the same phenomenon occurs for SU (2)
representations, which can be characterized by the eigenvalue j(j + 1) of the Casimir
operator J² for SU(2)). At a point p = (p0, p1, p2, p3) in energy-momentum
space, the P_j operators are diagonalized and P² will act by the scalar

$$-p_0^2 + p_1^2 + p_2^2 + p_3^2$$

which can be positive, negative, or zero, so given by m², −m², 0 for various m. The value of the scalar will be the same everywhere on the orbit, so in energy-momentum space orbits will satisfy one of the three equations

$$-p_0^2 + p_1^2 + p_2^2 + p_3^2 = \begin{cases} -m^2 \\ m^2 \\ 0 \end{cases}$$

Note that in this chapter we are just classifying Poincaré group representa-
tions, not actually constructing them. It is possible to construct these represen-
tations using the data we will find that classifies them, but this would require
introducing some techniques (for so-called “induced representations”) that go
beyond the scope of this course. In later chapters we will explicitly construct
these representations in certain specific cases as solutions to certain relativistic
wave equations.

39.2.1 Positive energy time-like orbits


One way to get negative values −m² of the Casimir P² is to take the vector p = (m, 0, 0, 0), m > 0, and generate an orbit O_{m,0,0,0} by acting on it with the Lorentz group. This orbit is the upper, positive energy sheet of the hyperboloid of two sheets

$$-p_0^2 + p_1^2 + p_2^2 + p_3^2 = -m^2$$

so

$$p_0 = \sqrt{p_1^2 + p_2^2 + p_3^2 + m^2}$$

The stabilizer group K_{m,0,0,0} is the subgroup of SO(3, 1) of elements of the form

$$\begin{pmatrix} 1 & 0 \\ 0 & \Omega \end{pmatrix}$$

where Ω ∈ SO(3), so K_{m,0,0,0} = SO(3). Irreducible representations of this
group are classified by the spin. For spin 0, points on the hyperboloid can
be identified with positive energy solutions to a wave equation called the Klein-
Gordon equation and functions on the hyperboloid both correspond to the space
of all solutions of this equation and carry an irreducible representation of the
Poincaré group. In the next chapter we will study the Klein-Gordon equation,

as well as the quantization of the space of its solutions by quantum field theory
methods.
We will later study the case of spin 1/2, where one must use the double cover
SU (2) of SO(3). The Poincaré group representation will be on functions on
the orbit that take values in two copies of the spinor representation of SU (2).
These will correspond to solutions of a wave equation called the massive Dirac
equation.
For choices of higher spin representations of the stabilizer group, one can
again find appropriate wave equations and construct Poincaré group represen-
tations on their space of solutions, but we will not enter into this topic.

39.2.2 Negative energy time-like orbits


Starting instead with the energy-momentum vector p = (−m, 0, 0, 0), m > 0, the orbit O_{−m,0,0,0} one gets is the lower, negative energy component of the hyperboloid

$$-p_0^2 + p_1^2 + p_2^2 + p_3^2 = -m^2$$

satisfying

$$p_0 = -\sqrt{p_1^2 + p_2^2 + p_3^2 + m^2}$$

Again, one has the same stabilizer group K_{−m,0,0,0} = SO(3) and the same constructions of wave equations of various spins and Poincaré group representations
on their solution spaces as in the positive energy case. Since negative energies
lead to unstable, unphysical theories, we will see that these representations are
treated differently under quantization, corresponding physically not to particles,
but to antiparticles.

39.2.3 Space-like orbits


One can get positive values m² of the Casimir P² by considering the orbit O_{0,0,0,m} of the vector p = (0, 0, 0, m). This is a hyperboloid of one sheet, satisfying the equation

$$-p_0^2 + p_1^2 + p_2^2 + p_3^2 = m^2$$

It is not too difficult to see that the stabilizer group of the orbit is K_{0,0,0,m} = SO(2, 1). This is the group whose double cover is SL(2, R), and neither group has any non-trivial finite-dimensional unitary representations. These orbits correspond physically to “tachyons”, particles that move faster than the speed of light, and there is no known way to consistently incorporate them in a conventional theory.

39.2.4 The zero orbit


The simplest case where the Casimir P 2 is zero is the trivial case of a point
p = (0, 0, 0, 0). This is invariant under the full Lorentz group, so the orbit
O0,0,0,0 is just a single point and the stabilizer group K0,0,0,0 is the entire Lorentz

group SO(3, 1). For each finite-dimensional representation of SO(3, 1), one gets
a corresponding finite dimensional representation of the Poincaré group, with
translations acting trivially. These representations are not unitary, so not usable
for our purposes.

39.2.5 Positive energy null orbits


One has P 2 = 0 not only for the zero-vector in momentum space, but for a
three-dimensional set of energy-momentum vectors, called the null-cone. By
the term “cone” one means that if a vector is in the space, so are all products
of the vector times a positive number. Vectors p = (p0 , p1 , p2 , p3 ) are called
“light-like” or “null” when they satisfy

$$|p|^2 = -p_0^2 + p_1^2 + p_2^2 + p_3^2 = 0$$

One such vector is p = (1, 0, 0, 1), and the orbit of this vector under the action of the Lorentz group will be the upper half of the full null-cone, the half with energy p0 > 0, satisfying

$$p_0 = \sqrt{p_1^2 + p_2^2 + p_3^2}$$
The stabilizer group K1,0,0,1 of p = (1, 0, 0, 1) includes rotations about the
x3 axis, but also boosts in the other two directions. It is isomorphic to the
Euclidean group E(2). Recall that this is a semi-direct product group, and it
has two sorts of irreducible representations
• Representations such that the two translations act trivially. These are
irreducible representations of SO(2), so one-dimensional and characterized
by an integer n (half-integers when one uses the Poincaré group double
cover).
• Infinite dimensional irreducible representations on a space of functions on
a circle of radius r
The first of these two gives irreducible representations of the Poincaré group
on certain functions on the positive energy null-cone, labeled by the integer n,
which is called the “helicity” of the representation. We will in later chapters consider the cases n = 0 (massless scalars, wave equation the Klein-Gordon equation), n = ±1/2 (Weyl spinors, wave equation the Weyl equation), and n = ±1 (photons, wave equation the Maxwell equations).
The second sort of representation of E(2) gives representations of the Poincaré
group known as “continuous spin” representations, but these seem not to cor-
respond to any known physical phenomena.
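The stabilizer of the null vector can be verified explicitly with the 4 by 4 Lorentz generators of section 39.1. In this sketch, the particular combinations l3, k1 − l2, k2 + l1 are one consistent choice of basis for the three-dimensional stabilizer Lie algebra, matching the matrix conventions above:

```python
import numpy as np

def E(i, j):
    m = np.zeros((4, 4))
    m[i, j] = 1.0
    return m

# Lorentz generators acting on 4-vectors (x0, x1, x2, x3)
l1, l2, l3 = E(3, 2) - E(2, 3), E(1, 3) - E(3, 1), E(2, 1) - E(1, 2)
k1, k2, k3 = E(0, 1) + E(1, 0), E(0, 2) + E(2, 0), E(0, 3) + E(3, 0)

p = np.array([1.0, 0.0, 0.0, 1.0])   # a positive energy null vector

# three independent combinations annihilate p: a rotation about the
# 3-axis plus two rotation-boost combinations, spanning a copy of the
# Lie algebra of E(2)
for X in (l3, k1 - l2, k2 + l1):
    assert np.allclose(X @ p, np.zeros(4))
print("l3, k1 - l2, k2 + l1 stabilize p = (1, 0, 0, 1)")
```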

39.2.6 Negative energy null orbits


Looking instead at the orbit of p = (−1, 0, 0, 1), one gets the negative energy part
of the null-cone. As with the time-like hyperboloids of non-zero mass m, these
will correspond to antiparticles instead of particles, with the same classification
as in the positive energy case.

39.3 For further reading
The Poincaré group and its Lie algebra is discussed in pretty much any quantum
field theory textbook. Weinberg [71] (Chapter 2) has some discussion of the
representations of the Poincaré group on single particle state spaces that we have
classified here. Folland [20] (Chapter 4.4) and Berndt [7] (Chapter 7.5) discuss
the actual construction of these representations using the induced representation
methods that we have chosen not to try and explain here.

Chapter 40

The Klein-Gordon Equation and Scalar Quantum Fields

In the non-relativistic case we found that it was possible to build a quantum


theory describing arbitrary numbers of particles by “second quantization” of the
standard quantum theory of a free particle. This was done by taking as classical
phase space the space of solutions to the free particle Schrödinger equation,
a space which carries a unitary representation of the Euclidean group E(3).
This is an infinite dimensional space of functions (the space of solutions can be
identified with the space of initial conditions, which is the space of wavefunctions
at a fixed time), but one can quantize it using analogous methods to the case of
the finite-dimensional harmonic oscillator (annihilation and creation operators).
After such quantization we get a quantum field theory, with a state space that
describes an arbitrary number of particles. Such a state space provides a unitary
representation of the E(3) group and we saw how to construct the momentum
and angular momentum operators that generate it.

To make the same sort of construction for relativistic systems, we want to


start with an irreducible unitary representation not of E(3), but of the Poincaré
group P. In the last chapter we saw that such things were classified by orbits
Oα of the Lorentz group on momentum space, together with a choice of rep-
resentation of the stabilizer group Kα of the orbit. The simplest case will be
the orbits Om,0,0,0 , and the choice of the trivial spin-zero representation of the
stabilizer group Km,0,0,0 = SO(3). These orbits are characterized by a positive
real number m, and are hyperboloids in energy-momentum space. Points on
these orbits correspond to solutions of a relativistic analog of the Schrödinger
representation, the Klein-Gordon equation, so we will begin by studying this
equation and its solutions.

40.1 The Klein-Gordon equation and its solutions
Recall that a condition characterizing the orbit in momentum space that we
want to study was that the Casimir operator P 2 of the Poincaré group acts on
the representation corresponding to the orbit as the scalar m2 . So, we have the
operator equation

P 2 = −P02 + P12 + P22 + P32 = −m2

characterizing the Poincaré group representation we are interested in. Interpreting the P_j as the standard differentiation operators $-i\frac{\partial}{\partial x_j}$ on a space of wavefunctions, generating the infinitesimal action of the translation group on such wavefunctions, we get the following differential equation for wavefunctions:

Definition (Klein-Gordon equation). The Klein-Gordon equation is the second-order partial differential equation

$$\left(-\frac{\partial^2}{\partial t^2} + \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} + \frac{\partial^2}{\partial x_3^2}\right)\phi = m^2\phi$$

or

$$\left(-\frac{\partial^2}{\partial t^2} + \Delta - m^2\right)\phi = 0$$

for functions φ(x) on Minkowski space (which may be real or complex valued).
This equation is the simplest Lorentz-invariant wave equation to try, and
historically was the one Schrödinger first tried (he then realized it could not
account for atomic spectra and instead used the non-relativistic equation that
bears his name). Taking Fourier transforms

$$\tilde{\phi}(p) = \frac{1}{(2\pi)^2}\int d^4x\, e^{-i(-p_0x_0 + \mathbf{p}\cdot\mathbf{x})}\phi(x)$$

the Klein-Gordon equation becomes

$$(p_0^2 - p_1^2 - p_2^2 - p_3^2 - m^2)\tilde{\phi}(p) = 0$$

Solutions to this will be functions $\tilde{\phi}(p)$ that are non-zero only on the hyperboloid

$$p_0^2 - p_1^2 - p_2^2 - p_3^2 - m^2 = 0$$

in energy-momentum space R⁴. This hyperboloid has two components, with positive and negative energy

$$p_0 = \pm\omega_{\mathbf{p}}$$

where

$$\omega_{\mathbf{p}} = \sqrt{p_1^2 + p_2^2 + p_3^2 + m^2}$$

Ignoring one dimension these look like this:

[Figure: the positive energy and negative energy sheets of the hyperboloid, p0 = ±ω_p]
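As a quick symbolic check, one can verify that plane waves with p0 = ±ω_p solve the Klein-Gordon equation:

```python
import sympy as sp

t, x1, x2, x3 = sp.symbols('t x1 x2 x3', real=True)
p1, p2, p3 = sp.symbols('p1 p2 p3', real=True)
m = sp.symbols('m', positive=True)

omega = sp.sqrt(p1**2 + p2**2 + p3**2 + m**2)

for sign in (+1, -1):   # positive and negative energy components
    phi = sp.exp(sp.I * (p1*x1 + p2*x2 + p3*x3 - sign * omega * t))
    kg = (-sp.diff(phi, t, 2) + sp.diff(phi, x1, 2)
          + sp.diff(phi, x2, 2) + sp.diff(phi, x3, 2) - m**2 * phi)
    assert sp.simplify(kg) == 0
print("plane waves with p0 = ±ω_p solve the Klein-Gordon equation")
```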
In the non-relativistic case, a continuous basis of solutions of the Schrödinger equation labeled by p ∈ R³ was given by the functions

$$e^{i\mathbf{p}\cdot\mathbf{x}}e^{-i\frac{|\mathbf{p}|^2}{2m}t}$$

with a general solution a superposition of these, with coefficients $\tilde{\psi}(\mathbf{p})$ given by the Fourier inversion formula

$$\psi(\mathbf{x}, t) = \frac{1}{(2\pi)^{3/2}}\int_{\mathbf{R}^3}\tilde{\psi}(\mathbf{p})\,e^{i\mathbf{p}\cdot\mathbf{x}}e^{-i\frac{|\mathbf{p}|^2}{2m}t}\,d^3p$$

The complex values $\tilde{\psi}(\mathbf{p})$ gave coordinates on our single-particle space H₁, and we had actions on this of the group of time translations (generated by the Hamiltonian) and the Euclidean group E(3) (generated by momentum and angular momentum).

In the relativistic case we want to study the corresponding single-particle space H₁ of solutions to the Klein-Gordon equation, but parametrized in a way that makes clear the action of the Poincaré group on this space. Coordinates on the space of such solutions will now be given by complex-valued functions $\tilde{\phi}(p)$ on the energy-momentum space R⁴, supported on the two-component hyperboloid (here p = (p₀, **p**)). The Fourier inversion formula giving a general solution in terms of these coordinates will be

$$\phi(\mathbf{x}, t) = \frac{1}{(2\pi)^{3/2}}\int_{\mathbf{R}^4}\delta(p_0^2 - \omega_{\mathbf{p}}^2)\tilde{\phi}(p)\,e^{i(\mathbf{p}\cdot\mathbf{x} - p_0t)}\,d^4p$$
with the integral over the 3d hyperboloid expressed as a 4d integral over R⁴ with a delta-function on the hyperboloid in the argument.

The delta-function distribution with argument a function f(x) depends only on the zeros of f, and if f′ ≠ 0 at such zeros, one has

$$\delta(f(x)) = \sum_{x_j : f(x_j) = 0} \delta(f'(x_j)(x - x_j)) = \sum_{x_j : f(x_j) = 0} \frac{1}{|f'(x_j)|}\delta(x - x_j)$$
xj :f (xj )=0

For each **p**, one can apply this to the case of the function of p₀ given by

$$f = p_0^2 - \omega_{\mathbf{p}}^2$$

on R⁴, and using

$$\frac{d}{dp_0}(p_0^2 - \omega_{\mathbf{p}}^2) = 2p_0 = \pm 2\omega_{\mathbf{p}}$$

one finds

$$\phi(\mathbf{x}, t) = \frac{1}{(2\pi)^{3/2}}\int_{\mathbf{R}^4}\frac{1}{2\omega_{\mathbf{p}}}\left(\delta(p_0 - \omega_{\mathbf{p}}) + \delta(p_0 + \omega_{\mathbf{p}})\right)\tilde{\phi}(p)\,e^{i(\mathbf{p}\cdot\mathbf{x} - p_0t)}\,dp_0\,d^3p$$
$$= \frac{1}{(2\pi)^{3/2}}\int_{\mathbf{R}^3}\left(\tilde{\phi}_+(\mathbf{p})e^{-i\omega_{\mathbf{p}}t} + \tilde{\phi}_-(\mathbf{p})e^{i\omega_{\mathbf{p}}t}\right)e^{i\mathbf{p}\cdot\mathbf{x}}\,\frac{d^3p}{2\omega_{\mathbf{p}}}$$

Here

$$\tilde{\phi}_+(\mathbf{p}) = \tilde{\phi}(\omega_{\mathbf{p}}, \mathbf{p}),\quad \tilde{\phi}_-(\mathbf{p}) = \tilde{\phi}(-\omega_{\mathbf{p}}, \mathbf{p})$$

are the values of φe on the positive and negative energy hyperboloids. We see
that instead of thinking of the Fourier transforms of solutions as taking values
on energy-momentum hyperboloids, we can think of them as taking values just
on the space R3 of momenta (just as in the non-relativistic case), but we do
have to use both positive and negative energy Fourier components, and to get
a Lorentz invariant measure need to use
$$\frac{d^3p}{2\omega_{\mathbf{p}}}$$

instead of d³p.

A general complex-valued solution to the Klein-Gordon equation will be given by the two complex-valued functions $\tilde{\phi}_+, \tilde{\phi}_-$, but we can impose the condition that the solution be real-valued, in which case one can check that the pair of functions must satisfy the condition

$$\tilde{\phi}_-(\mathbf{p}) = \overline{\tilde{\phi}_+(-\mathbf{p})}$$

Real-valued solutions of the Klein-Gordon equation thus correspond to arbitrary


complex-valued functions φe+ defined on the positive energy hyperboloid, which
fixes the value of the other function φe− .

40.2 Classical relativistic scalar field theory
We would like to set up the Hamiltonian formalism, finding a phase space H1
and a Hamiltonian function h on it such that Hamilton’s equations will give us
the Klein-Gordon equation as equation of motion. Such a phase space will be
an infinite-dimensional function space, and the Hamiltonian will be a functional.
We will here blithely ignore the analytic difficulties of working with such spaces,
and use physicists' methods, with formulas that can be given a legitimate interpretation
by being more careful and using distributions. Note that we will now
take the fields $\phi$ to be real-valued; this is the so-called real scalar field.
Since the Klein-Gordon equation is second order in time, solutions will be
parametrized by initial data which, unlike the non-relativistic case, now requires
the specification at $t = 0$ of not one but two functions
$$\phi(\mathbf{x}) = \phi(\mathbf{x}, 0), \quad \dot{\phi}(\mathbf{x}) = \frac{\partial}{\partial t}\phi(\mathbf{x}, t)\Big|_{t=0}$$
the values of the field and its first time derivative.
We will take as our phase space $\mathcal{H}_1$ the space of pairs of functions $(\phi, \pi)$,
with coordinates $\phi(\mathbf{x}), \pi(\mathbf{x})$ and Poisson brackets
$$\{\phi(\mathbf{x}), \pi(\mathbf{x}')\} = \delta^3(\mathbf{x} - \mathbf{x}'), \quad \{\phi(\mathbf{x}), \phi(\mathbf{x}')\} = \{\pi(\mathbf{x}), \pi(\mathbf{x}')\} = 0$$
We want to get the Klein-Gordon equation for $\phi(\mathbf{x}, t)$ as the following pair of
first order equations
$$\frac{\partial}{\partial t}\phi = \pi, \quad \frac{\partial}{\partial t}\pi = (\Delta - m^2)\phi$$
which together imply
$$\frac{\partial^2}{\partial t^2}\phi = (\Delta - m^2)\phi$$
To get these as equations of motion, we just need to find a Hamiltonian
function $h$ on the phase space $\mathcal{H}_1$ such that
$$\frac{\partial}{\partial t}\phi = \{\phi, h\} = \pi$$
$$\frac{\partial}{\partial t}\pi = \{\pi, h\} = (\Delta - m^2)\phi$$
One can check that two choices of Hamiltonian function that will have this
property are
$$h = \int_{\mathbf{R}^3} \mathcal{H}(\mathbf{x})\, d^3x$$
where
$$\mathcal{H} = \frac{1}{2}(\pi^2 - \phi\Delta\phi + m^2\phi^2) \quad \text{or} \quad \mathcal{H} = \frac{1}{2}(\pi^2 + (\nabla\phi)^2 + m^2\phi^2)$$
Here the two different integrands $\mathcal{H}(\mathbf{x})$ are related (as in the non-relativistic
case) by integration by parts, so they differ only by boundary terms that are
assumed to vanish.

To be added: work out one of the two above Poisson brackets
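For instance, using the bracket $\{\phi(\mathbf{x}), \pi(\mathbf{x}')\} = \delta^3(\mathbf{x} - \mathbf{x}')$ and noting that the purely $\phi$-dependent terms in $h$ have vanishing bracket with $\phi(\mathbf{x})$, the first bracket can be sketched as follows (the second works the same way, after an integration by parts):

```latex
\{\phi(\mathbf{x}), h\}
 = \int_{\mathbf{R}^3} \frac{1}{2}\{\phi(\mathbf{x}), \pi(\mathbf{x}')^2\}\, d^3x'
 = \int_{\mathbf{R}^3} \pi(\mathbf{x}')\,\{\phi(\mathbf{x}), \pi(\mathbf{x}')\}\, d^3x'
 = \int_{\mathbf{R}^3} \pi(\mathbf{x}')\,\delta^3(\mathbf{x} - \mathbf{x}')\, d^3x'
 = \pi(\mathbf{x})
```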
One could instead have taken as starting point the Lagrangian formalism,
with an action
$$S = \int_{M^4} \mathcal{L}\, d^4x$$
where
$$\mathcal{L} = \frac{1}{2}\left(\left(\frac{\partial}{\partial t}\phi\right)^2 - (\nabla\phi)^2 - m^2\phi^2\right)$$
This action is a functional of fields on Minkowski space $M^4$ and is Lorentz
invariant. The Euler-Lagrange equations give as equation of motion the Klein-Gordon
equation
$$\left(\frac{\partial^2}{\partial t^2} - \Delta + m^2\right)\phi = 0$$
One recovers the Hamiltonian formalism by seeing that the canonical momentum
for $\phi$ is
$$\pi = \frac{\partial \mathcal{L}}{\partial \dot{\phi}} = \dot{\phi}$$
and the Hamiltonian density is
$$\mathcal{H} = \pi\dot{\phi} - \mathcal{L} = \frac{1}{2}(\pi^2 + (\nabla\phi)^2 + m^2\phi^2)$$
Besides the position-space Hamiltonian formalism, we would like to have one
for the momentum space components of the field, since for a free field it is these
that will decouple into an infinite collection of harmonic oscillators. For a real
solution to the Klein-Gordon equation we have
$$\phi(\mathbf{x}, t) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} (\tilde{\phi}_+(\mathbf{p}) e^{-i\omega_{\mathbf{p}} t} + \overline{\tilde{\phi}_+(-\mathbf{p})}\, e^{i\omega_{\mathbf{p}} t})\, e^{i\mathbf{p}\cdot\mathbf{x}}\, \frac{d^3p}{2\omega_{\mathbf{p}}}$$
$$= \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} (\tilde{\phi}_+(\mathbf{p}) e^{-i\omega_{\mathbf{p}} t} e^{i\mathbf{p}\cdot\mathbf{x}} + \overline{\tilde{\phi}_+(\mathbf{p})}\, e^{i\omega_{\mathbf{p}} t} e^{-i\mathbf{p}\cdot\mathbf{x}})\, \frac{d^3p}{2\omega_{\mathbf{p}}}$$
where we have used the symmetry of the integration over $\mathbf{p}$ to integrate over
$-\mathbf{p}$ instead of $\mathbf{p}$ in the second term.
We can choose a new way of normalizing the Fourier coefficients, one that reflects
the fact that the Lorentz-invariant notion is that of integrating over the energy-momentum
hyperboloid rather than over momentum space
$$\alpha(\mathbf{p}) = \frac{\tilde{\phi}_+(\mathbf{p})}{\sqrt{2\omega_{\mathbf{p}}}}$$
and in terms of these we have
$$\phi(\mathbf{x}, t) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} (\alpha(\mathbf{p}) e^{-i\omega_{\mathbf{p}} t} e^{i\mathbf{p}\cdot\mathbf{x}} + \overline{\alpha(\mathbf{p})}\, e^{i\omega_{\mathbf{p}} t} e^{-i\mathbf{p}\cdot\mathbf{x}})\, \frac{d^3p}{\sqrt{2\omega_{\mathbf{p}}}}$$
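As a numerical illustration (a toy one-dimensional analog in Python, not from the text: a finite set of momentum modes stands in for the integral, and all names are our own), one can check that such a mode sum is real-valued and solves the Klein-Gordon equation:

```python
import numpy as np

m = 1.0
rng = np.random.default_rng(0)
p = np.linspace(-5.0, 5.0, 41)            # toy 1d momentum grid
omega = np.sqrt(p**2 + m**2)              # omega_p = sqrt(p^2 + m^2)
alpha = rng.normal(size=p.size) + 1j * rng.normal(size=p.size)

def phi(x, t):
    # alpha(p) e^{-i omega t} e^{ipx} plus the conjugate term, each
    # weighted by 1/sqrt(2 omega_p), summed over the momentum grid
    modes = (alpha * np.exp(1j * (p * x - omega * t))
             + np.conj(alpha) * np.exp(-1j * (p * x - omega * t))) / np.sqrt(2 * omega)
    return modes.sum()

x0, t0, h = 0.3, 0.7, 1e-4
assert abs(phi(x0, t0).imag) < 1e-10       # the field is real-valued

# finite-difference check of  d^2 phi/dt^2 = (d^2/dx^2 - m^2) phi
d2t = (phi(x0, t0 + h) - 2 * phi(x0, t0) + phi(x0, t0 - h)) / h**2
d2x = (phi(x0 + h, t0) - 2 * phi(x0, t0) + phi(x0 - h, t0)) / h**2
assert abs(d2t - (d2x - m**2 * phi(x0, t0))) < 1e-3
```

Each momentum mode oscillates at its own frequency $\omega_p$, which is the decoupling into harmonic oscillators discussed below.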

The $\alpha(\mathbf{p}), \overline{\alpha(\mathbf{p})}$ will have the same sort of Poisson bracket relations as the
$z, \overline{z}$ for a single harmonic oscillator, or the $\alpha(\mathbf{p}), \overline{\alpha(\mathbf{p})}$ Fourier coefficients in the
case of the non-relativistic field:
$$\{\alpha(\mathbf{p}), \overline{\alpha(\mathbf{p}')}\} = -i\delta^3(\mathbf{p} - \mathbf{p}'), \quad \{\alpha(\mathbf{p}), \alpha(\mathbf{p}')\} = \{\overline{\alpha(\mathbf{p})}, \overline{\alpha(\mathbf{p}')}\} = 0$$
To see this, one can compute the Poisson brackets for the fields as follows. We
have
$$\pi(\mathbf{x}) = \frac{\partial}{\partial t}\phi(\mathbf{x}, t)\Big|_{t=0} = \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} (-i\omega_{\mathbf{p}})(\alpha(\mathbf{p}) e^{i\mathbf{p}\cdot\mathbf{x}} - \overline{\alpha(\mathbf{p})}\, e^{-i\mathbf{p}\cdot\mathbf{x}})\, \frac{d^3p}{\sqrt{2\omega_{\mathbf{p}}}}$$
and
$$\phi(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} (\alpha(\mathbf{p}) e^{i\mathbf{p}\cdot\mathbf{x}} + \overline{\alpha(\mathbf{p})}\, e^{-i\mathbf{p}\cdot\mathbf{x}})\, \frac{d^3p}{\sqrt{2\omega_{\mathbf{p}}}}$$
so
$$\{\phi(\mathbf{x}), \pi(\mathbf{x}')\} = \frac{1}{2(2\pi)^3} \int_{\mathbf{R}^3 \times \mathbf{R}^3} \left(\{\alpha(\mathbf{p}), i\overline{\alpha(\mathbf{p}')}\}\, e^{i(\mathbf{p}\cdot\mathbf{x} - \mathbf{p}'\cdot\mathbf{x}')} - \{i\overline{\alpha(\mathbf{p})}, \alpha(\mathbf{p}')\}\, e^{i(-\mathbf{p}\cdot\mathbf{x} + \mathbf{p}'\cdot\mathbf{x}')}\right) d^3p\, d^3p'$$
$$= \frac{1}{2(2\pi)^3} \int_{\mathbf{R}^3 \times \mathbf{R}^3} \delta^3(\mathbf{p} - \mathbf{p}')\left(e^{i(\mathbf{p}\cdot\mathbf{x} - \mathbf{p}'\cdot\mathbf{x}')} + e^{i(-\mathbf{p}\cdot\mathbf{x} + \mathbf{p}'\cdot\mathbf{x}')}\right) d^3p\, d^3p'$$
$$= \frac{1}{2(2\pi)^3} \int_{\mathbf{R}^3} \left(e^{i\mathbf{p}\cdot(\mathbf{x} - \mathbf{x}')} + e^{-i\mathbf{p}\cdot(\mathbf{x} - \mathbf{x}')}\right) d^3p$$
$$= \delta^3(\mathbf{x} - \mathbf{x}')$$
As in the non-relativistic case, one really should work with elements of $\mathcal{H}_1^*$ of
the form (for an appropriately chosen class of functions $f, g$)
$$\phi(f) + \pi(g) = \int_{\mathbf{R}^3} (f(\mathbf{x})\phi(\mathbf{x}) + g(\mathbf{x})\pi(\mathbf{x}))\, d^3x$$
getting Poisson bracket relations
$$\{\phi(f_1) + \pi(g_1), \phi(f_2) + \pi(g_2)\} = \int_{\mathbf{R}^3} (f_1(\mathbf{x})g_2(\mathbf{x}) - f_2(\mathbf{x})g_1(\mathbf{x}))\, d^3x$$
This is just the infinite-dimensional analog of the Poisson bracket of two linear
combinations of the $q_j, p_j$, with the right-hand side the symplectic form $\Omega$ on
$\mathcal{H}_1^*$.

40.3 The complex structure on the space of Klein-Gordon solutions

Recall from chapter 21 that if we intend to quantize a classical phase space $M$
by the Bargmann-Fock method, we need to choose a complex structure $J$ on
that phase space (or on the dual phase space $\mathcal{M} = M^*$). Then
$$\mathcal{M} \otimes \mathbf{C} = \mathcal{M}_J^+ \oplus \mathcal{M}_J^-$$
where $\mathcal{M}_J^+$ is the $+i$ eigenspace of $J$, $\mathcal{M}_J^-$ the $-i$ eigenspace. The quantum
state space will be the space of polynomials on the dual of $\mathcal{M}_J^+$. The choice of
$J$ corresponds to a choice of distinguished state $|0\rangle_J \in \mathcal{H}$, the Bargmann-Fock
state given by the constant polynomial function 1.
In the non-relativistic quantum field theory case we saw that basis elements
of $\mathcal{M}$ could be taken to be either the linear functionals $\psi(\mathbf{x})$ and their conjugates
$\overline{\psi(\mathbf{x})}$ or, Fourier transforming, the linear functionals $\alpha(\mathbf{p})$ and their conjugates
$\overline{\alpha(\mathbf{p})}$. These coordinates are not real-valued but complex-valued, and as a result
$\mathcal{M}$ came with a distinguished natural complex structure $J$, which is $+i$ on the
$\psi(\mathbf{x})$ or the $\alpha(\mathbf{p})$, and $-i$ on their conjugates.
In the relativistic scalar field theory, we must do something very different.
The solutions to the Klein-Gordon equation we are considering are real-valued,
not complex-valued, functions, and give a real phase space $M$ to be quantized
(what happens when we consider a theory with configuration space complex-valued
fields will be discussed in chapter 41). When we complexify and look at the
space $M \otimes \mathbf{C}$, it naturally decomposes as a representation of the Poincaré group
into two pieces: $\mathcal{M}^+$, the complex functions on the positive energy hyperboloid,
and $\mathcal{M}^-$, the complex functions on the negative energy hyperboloid. More
explicitly, we can decompose a complexified solution $\phi(\mathbf{x}, t)$ of the Klein-Gordon
equation as $\phi = \phi_+ + \phi_-$, where
$$\phi_+(\mathbf{x}, t) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} \alpha(\mathbf{p})\, e^{-i\omega_{\mathbf{p}} t} e^{i\mathbf{p}\cdot\mathbf{x}}\, \frac{d^3p}{\sqrt{2\omega_{\mathbf{p}}}}$$
and
$$\phi_-(\mathbf{x}, t) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} \overline{\alpha(\mathbf{p})}\, e^{i\omega_{\mathbf{p}} t} e^{-i\mathbf{p}\cdot\mathbf{x}}\, \frac{d^3p}{\sqrt{2\omega_{\mathbf{p}}}}$$
We will take as complex structure the operator $J$ that is $+i$ on positive
energy wavefunctions and $-i$ on negative energy wavefunctions. Complexified
classical fields in $\mathcal{M}^+$ get quantized as annihilation operators, those in $\mathcal{M}^-$
as creation operators. Since conjugation interchanges $\mathcal{M}^+$ and $\mathcal{M}^-$, non-zero
real-valued classical fields have components in both $\mathcal{M}^+$ and $\mathcal{M}^-$, since they
are their own conjugates.
One motivation for this particular choice of J is that it leads to a state space
with states of non-negative energy. Theories with states of arbitrarily negative
energy are considered undesirable since they will tend to have no stable vacuum
state (since any supposed vacuum state could potentially decay into states of
large positive and large negative energy, while preserving total energy). To see
the mechanism for non-negative energy, first consider again the non-relativistic
case, where the Hamiltonian is (for $d = 1$)
$$h = \int_{-\infty}^{+\infty} \frac{1}{2m}\left|\frac{d}{dx}\psi(x)\right|^2 dx = \int_{-\infty}^{\infty} \frac{p^2}{2m}\,|\alpha(p)|^2\, dp$$

This is positive definite, either on $\mathcal{M}^+$ (the $\psi$) or $\mathcal{M}^-$ (the $\overline{\psi}$). Quantization
takes
$$h \rightarrow \hat{H} = \int_{-\infty}^{\infty} \frac{p^2}{2m}\, a^\dagger(p) a(p)\, dp$$

which is an operator with non-negative eigenvalues (this is in normal-ordered form,
but the non-normal-ordered version is still positive, although it adds an infinite
positive constant).
WARNING: HAVEN’T FINISHED REWRITING REST OF THIS SEC-
TION.
The Hamiltonian function $h$ is the quadratic polynomial function of the
coordinates $\phi(\mathbf{x}), \pi(\mathbf{x})$
$$h = \int_{\mathbf{R}^3} \frac{1}{2}(\pi^2 + (\nabla\phi)^2 + m^2\phi^2)\, d^3x$$
and by laborious calculation one can substitute the above expressions for $\phi, \pi$
in terms of $\alpha(\mathbf{p}), \overline{\alpha(\mathbf{p})}$ to find $h$ as a quadratic polynomial in these coordinates
on the momentum space fields. A quicker way to find the correct expression is
to use the fact that different momentum components of the field decouple, and
we know the time-dependence of such components, so we just need to find the
$h$ that generates this.
If, as in the non-relativistic case, we interpret $\phi$ as a single-particle wavefunction,
Hamilton's equation of motion says
$$\{\phi, h\} = \frac{\partial}{\partial t}\phi$$
and applying this to the component of $\phi_+$ with momentum $\mathbf{p}$, we just get
multiplication by $-i\omega_{\mathbf{p}}$. The energy of such a wavefunction would be $\omega_{\mathbf{p}}$, the
eigenvalue of $i\frac{\partial}{\partial t}$. These are called "positive frequency" or "positive energy"
wavefunctions. In the case of momentum components of $\phi_-$, the eigenvalue is
$-\omega_{\mathbf{p}}$, and one has "negative frequency" or "negative energy" wavefunctions.
An expression for $h$ in terms of momentum space field coordinates that will
have the right Poisson brackets on $\phi_+, \phi_-$ is
$$h = \int_{\mathbf{R}^3} \omega_{\mathbf{p}}\, \alpha(\mathbf{p})\,\overline{\alpha(\mathbf{p})}\, d^3p$$
and this is the same expression one could have gotten by a long direct calculation.

In the non-relativistic case, the eigenvalues of the action of $i\frac{\partial}{\partial t}$ on the
wavefunctions $\psi$ were non-negative ($\frac{|\mathbf{p}|^2}{2m}$), so the single-particle states had non-negative
energy. Here we find instead eigenvalues $\pm\omega_{\mathbf{p}}$ of both signs, so single-particle
states can have arbitrarily negative energies. This makes a physically
sensible interpretation of $\mathcal{H}_1$ as a space of wavefunctions describing a single
relativistic particle difficult if not impossible. We will however see in the next
section that there is a way to quantize this $\mathcal{H}_1$ as a phase space, getting a
sensible multi-particle theory with a stable ground state.

40.4 Quantization of the real scalar field


Given the description we have found in momentum space of a real scalar field
satisfying the Klein-Gordon equation, it is clear that one can proceed to quantize
the theory in exactly the same way as was done with the non-relativistic
Schrödinger equation, taking momentum components of fields to operators by
replacing
$$\alpha(\mathbf{p}) \rightarrow a(\mathbf{p}), \quad \overline{\alpha(\mathbf{p})} \rightarrow a^\dagger(\mathbf{p})$$
where $a(\mathbf{p}), a^\dagger(\mathbf{p})$ are operator-valued distributions satisfying the commutation
relations
$$[a(\mathbf{p}), a^\dagger(\mathbf{p}')] = \delta^3(\mathbf{p} - \mathbf{p}')$$
For the Hamiltonian we take the normal-ordered form
$$\hat{H} = \int_{\mathbf{R}^3} \omega_{\mathbf{p}}\, a^\dagger(\mathbf{p}) a(\mathbf{p})\, d^3p$$
Starting with a vacuum state $|0\rangle$, by applying creation operators one can create
arbitrary positive energy multiparticle states of free relativistic particles, with
single-particle states having the energy-momentum relation
$$E(\mathbf{p}) = \omega_{\mathbf{p}} = \sqrt{|\mathbf{p}|^2 + m^2}$$

If we try to consider not momentum eigenstates but position eigenstates:
in the non-relativistic case we could define a position space complex-valued field
operator by
$$\hat{\psi}(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} a(\mathbf{p})\, e^{i\mathbf{p}\cdot\mathbf{x}}\, d^3p$$
which has an interpretation as an annihilation operator for a particle localized
at $\mathbf{x}$. Solving the dynamics of the theory gave the time-dependence of the field
operator
$$\hat{\psi}(\mathbf{x}, t) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} a(\mathbf{p})\, e^{-i\frac{|\mathbf{p}|^2}{2m} t} e^{i\mathbf{p}\cdot\mathbf{x}}\, d^3p$$
For the relativistic case we must do something somewhat different, defining

Definition (Real scalar quantum field). The real scalar quantum field operators
are the operator-valued distributions defined by
$$\hat{\phi}(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} (a(\mathbf{p})\, e^{i\mathbf{p}\cdot\mathbf{x}} + a^\dagger(\mathbf{p})\, e^{-i\mathbf{p}\cdot\mathbf{x}})\, \frac{d^3p}{\sqrt{2\omega_{\mathbf{p}}}} \quad (40.1)$$
$$\hat{\pi}(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} (-i\omega_{\mathbf{p}})(a(\mathbf{p})\, e^{i\mathbf{p}\cdot\mathbf{x}} - a^\dagger(\mathbf{p})\, e^{-i\mathbf{p}\cdot\mathbf{x}})\, \frac{d^3p}{\sqrt{2\omega_{\mathbf{p}}}} \quad (40.2)$$
By essentially the same computation as for the Poisson brackets, one can
compute commutators, finding
$$[\hat{\phi}(\mathbf{x}), \hat{\pi}(\mathbf{x}')] = i\delta^3(\mathbf{x} - \mathbf{x}'), \quad [\hat{\phi}(\mathbf{x}), \hat{\phi}(\mathbf{x}')] = [\hat{\pi}(\mathbf{x}), \hat{\pi}(\mathbf{x}')] = 0$$
These can be interpreted as the relations of a unitary representation of a Heisenberg
Lie algebra, now the infinite dimensional Lie algebra corresponding to the
phase space $\mathcal{H}_1$ of solutions of the Klein-Gordon equation.
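These relations can be illustrated in a finite toy model (a construction of ours, not the text's): put the field on $M$ lattice sites with $M$ momentum modes, truncate each oscillator to a finite-dimensional space, and check that vacuum expectation values of the commutators come out as $i\delta_{jj'}$:

```python
import numpy as np

M, N = 3, 2                        # M lattice sites, N-dim truncation per mode
m_mass = 1.0
a1 = np.diag(np.sqrt(np.arange(1, N)), k=1)   # truncated annihilation operator

def mode_op(k):
    # annihilation operator for momentum mode k on the M-mode tensor product
    ops = [a1 if j == k else np.eye(N) for j in range(M)]
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

p = 2 * np.pi * np.arange(M) / M
omega = np.sqrt(m_mass**2 + p**2)
a = [mode_op(k) for k in range(M)]

def phi(j):   # lattice analog of the field operator at site j
    return sum((a[k] * np.exp(1j * p[k] * j)
                + a[k].conj().T * np.exp(-1j * p[k] * j)) / np.sqrt(2 * omega[k])
               for k in range(M)) / np.sqrt(M)

def pi(j):    # lattice analog of the conjugate momentum operator
    return sum(-1j * omega[k] * (a[k] * np.exp(1j * p[k] * j)
               - a[k].conj().T * np.exp(-1j * p[k] * j)) / np.sqrt(2 * omega[k])
               for k in range(M)) / np.sqrt(M)

vac = np.zeros(N**M); vac[0] = 1.0     # Fock vacuum |0>
for j in range(M):
    for jp in range(M):
        C = phi(j) @ pi(jp) - pi(jp) @ phi(j)
        assert np.allclose(vac @ C @ vac, 1j * (1.0 if j == jp else 0.0))
```

The discrete sum over modes plays the role of the momentum integral, and the vacuum expectation values are insensitive to the oscillator truncation.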

The Hamiltonian operator will be quadratic in the field operators and can
be chosen to be
$$\hat{H} = \int_{\mathbf{R}^3} \frac{1}{2} : \left(\hat{\pi}(\mathbf{x})^2 + (\nabla\hat{\phi}(\mathbf{x}))^2 + m^2\hat{\phi}(\mathbf{x})^2\right) :\, d^3x$$
This operator is normal ordered, and a computation (see for instance [10]) shows
that in terms of momentum space operators this is just
$$\hat{H} = \int_{\mathbf{R}^3} \omega_{\mathbf{p}}\, a^\dagger(\mathbf{p}) a(\mathbf{p})\, d^3p$$
the Hamiltonian operator discussed earlier.


The dynamical equations of the quantum field theory are now
$$\frac{\partial}{\partial t}\hat{\phi} = [\hat{\phi}, -i\hat{H}] = \hat{\pi}$$
$$\frac{\partial}{\partial t}\hat{\pi} = [\hat{\pi}, -i\hat{H}] = (\Delta - m^2)\hat{\phi}$$
which have as solution the following equation for the time-dependent field operator:
$$\hat{\phi}(\mathbf{x}, t) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} (a(\mathbf{p})\, e^{-i\omega_{\mathbf{p}} t} e^{i\mathbf{p}\cdot\mathbf{x}} + a^\dagger(\mathbf{p})\, e^{i\omega_{\mathbf{p}} t} e^{-i\mathbf{p}\cdot\mathbf{x}})\, \frac{d^3p}{\sqrt{2\omega_{\mathbf{p}}}}$$

This is a superposition of annihilation operators for momentum eigenstates
of positive energy and creation operators for momentum eigenstates of negative
energy. Note that, unlike the non-relativistic case, here the quantum field operator
is self-adjoint. In the next chapter we will see what happens in the case of
a complex scalar quantum field, where the operator and its adjoint are distinct.

It is a characteristic feature of relativistic field theory that what one is quantizing
is not just a space of positive energy wavefunctions, but a space that
includes both positive and negative energy wavefunctions, assigning creation
operators to one sign of the energy, annihilation operators to the other, and by
this mechanism getting a Hamiltonian operator with spectrum bounded below.
In more complicated quantum field theories there will be other operators (for
instance, a charge operator) that can distinguish between "particle" states that
correspond to wavefunctions of positive energy and "antiparticle" states that
correspond to wavefunctions of negative energy. The real scalar field case is
rather special in that there are no such operators, and one says that here "a
particle is its own antiparticle."

40.5 The propagator


As explained in the non-relativistic case, in quantum field theory explicitly
dealing with states and their time-dependence is awkward, so we work in the
Heisenberg picture, expressing everything in terms of a fixed, unchangeable
state $|0\rangle$ and time-dependent operators. For the free scalar field theory, we have
explicitly solved for the time-dependence of the field operators. A basic quantity
needed for describing the propagation of quanta of a quantum field theory is
the propagator:
Definition (Green’s function or propagator, scalar field theory). The Green’s
function or propagator for a scalar field theory is the amplitude, for t > t0

G(x, t, x0 , t0 ) = h0|φ(x, b 0 , t0 )|0i


b t)φ(x

By translation invariance, the propagator will only depend on $t - t'$ and
$\mathbf{x} - \mathbf{x}'$, so we can just evaluate the case $(\mathbf{x}', t') = (\mathbf{0}, 0)$, using the formula for
the time-dependent field to get
$$G(\mathbf{x}, t, \mathbf{0}, 0) = \frac{1}{(2\pi)^3} \int_{\mathbf{R}^3 \times \mathbf{R}^3} \langle 0|(a(\mathbf{p})\, e^{-i\omega_{\mathbf{p}} t} e^{i\mathbf{p}\cdot\mathbf{x}} + a^\dagger(\mathbf{p})\, e^{i\omega_{\mathbf{p}} t} e^{-i\mathbf{p}\cdot\mathbf{x}})(a(\mathbf{p}') + a^\dagger(\mathbf{p}'))|0\rangle\, \frac{d^3p}{\sqrt{2\omega_{\mathbf{p}}}}\, \frac{d^3p'}{\sqrt{2\omega_{\mathbf{p}'}}}$$
$$= \int_{\mathbf{R}^3 \times \mathbf{R}^3} \delta^3(\mathbf{p} - \mathbf{p}')\, e^{-i\omega_{\mathbf{p}} t} e^{i\mathbf{p}\cdot\mathbf{x}}\, \frac{d^3p}{\sqrt{2\omega_{\mathbf{p}}}}\, \frac{d^3p'}{\sqrt{2\omega_{\mathbf{p}'}}}$$
$$= \int_{\mathbf{R}^3} e^{-i\omega_{\mathbf{p}} t} e^{i\mathbf{p}\cdot\mathbf{x}}\, \frac{d^3p}{2\omega_{\mathbf{p}}}$$
For $t > 0$, this gives the amplitude for propagation of a particle in time $t$
from the origin to the point $\mathbf{x}$.
Plan to expand this section. Compare to non-relativistic propagator. Compute
commutator of fields at arbitrary space-time separations, show that commutator
of fields at space-like separations vanishes.

40.6 Fermionic scalars


Explain nature of problems. Non-positivity of the Hamiltonian. Commutator
at space-like separations does not vanish.

40.7 For further reading


Pretty much every quantum field theory textbook has a treatment of the relativistic
scalar field with more details than here, and significantly more physical
motivation. A good example with some detailed versions of the calculations
done here is chapter 5 of [10]. See Folland [20], chapter 5, for a mathematically
more careful treatment of the distributional nature of the scalar field operators.

Chapter 41

Symmetries and Relativistic Scalar Quantum Fields

Just as for non-relativistic quantum fields, the theory of free relativistic scalar
quantum fields starts by taking as phase space an infinite dimensional space of
solutions of an equation of motion. Quantization of this phase space involves
constructing field operators which provide a representation of the corresponding
Heisenberg Lie algebra, by an infinite dimensional version of the Bargmann-Fock
construction. The equation of motion has its own representation-theoretical
significance: it is an eigenvalue equation for a Casimir operator of a group of
space-time symmetries, picking out an irreducible representation of that group.
In this case the Casimir operator is the Klein-Gordon operator, and the space-
time symmetry group is the Poincaré group. The Poincaré group acts on the
phase space of solutions to the Klein-Gordon equation, preserving the Poisson
bracket. One can thus use the same methods as in the finite-dimensional case
to get a representation of the Poincaré group by intertwining operators for
the Heisenberg Lie algebra representation (that representation is given by the
field operators). These methods give a representation of the Lie algebra of the
Poincaré group in terms of quadratic combinations of the field operators.
We’ll begin with the case of an even simpler group action on the phase space,
that coming from an “internal symmetry” one gets if one takes multi-component
scalar fields, with an orthogonal group or unitary group acting on the real or
complex vector space in which the classical fields take their values.

41.1 Internal symmetries


The real scalar field theory of chapter 40 lacks one feature of the non-relativistic
theory, which is an action of the group U (1) by phase changes on complex fields.
This is needed to provide a notion of “charge” and allow the introduction of
electromagnetism into the theory. In the real scalar field theory there is no dis-
tinction between states describing particles and states describing antiparticles.

To get a theory with such a distinction we need to introduce fields with more
components. Two possibilities are to consider real fields with $m$ components, in
which case we will have a theory with $SO(m)$ symmetry, with $U(1) = SO(2)$ as the
$m = 2$ special case, or to consider complex fields with $m$ components, in which
case we have theories with $U(m)$ symmetry, with $m = 1$ the $U(1)$ special case.

41.1.1 SO(m) symmetry and real scalar fields


Taking as single particle phase space $\mathcal{H}_1$ the space of pairs $\phi_1, \phi_2$ of real solutions
to the Klein-Gordon equation, elements $g(\theta)$ of the group $SO(2)$ will act on the
dual phase space $\mathcal{H}_1^*$ of coordinates on such solutions by
$$\begin{pmatrix} \phi_1(\mathbf{x}) \\ \phi_2(\mathbf{x}) \end{pmatrix} \rightarrow g(\theta) \cdot \begin{pmatrix} \phi_1(\mathbf{x}) \\ \phi_2(\mathbf{x}) \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} \phi_1(\mathbf{x}) \\ \phi_2(\mathbf{x}) \end{pmatrix}$$
$$\begin{pmatrix} \pi_1(\mathbf{x}) \\ \pi_2(\mathbf{x}) \end{pmatrix} \rightarrow g(\theta) \cdot \begin{pmatrix} \pi_1(\mathbf{x}) \\ \pi_2(\mathbf{x}) \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} \pi_1(\mathbf{x}) \\ \pi_2(\mathbf{x}) \end{pmatrix}$$
Here $\phi_1(\mathbf{x}), \phi_2(\mathbf{x}), \pi_1(\mathbf{x}), \pi_2(\mathbf{x})$ are the coordinates for initial values at $t = 0$ of a
Klein-Gordon solution. The Fourier transforms of solutions behave in the same
manner.
This group action on $\mathcal{H}_1$ breaks up into a direct sum of an infinite number
(one for each value of $\mathbf{x}$) of identical cases of rotations in a configuration space
plane, as discussed in section 18.2.2. We will use the calculation there, where we
found that for a basis element $L$ of the Lie algebra of $SO(2)$ the corresponding
quadratic function on the phase space with coordinates $q_1, q_2, p_1, p_2$ was
$$\mu_L = q_1 p_2 - q_2 p_1$$
For the case here, we just take
$$q_1, q_2, p_1, p_2 \rightarrow \phi_1(\mathbf{x}), \phi_2(\mathbf{x}), \pi_1(\mathbf{x}), \pi_2(\mathbf{x})$$

To get a quadratic functional on the fields that will have the desired Poisson
bracket with the fields for each value of $\mathbf{x}$, we need just integrate the analog
of $\mu_L$ over $\mathbf{R}^3$. We will denote the result by $Q$, since it is an observable that
will have a physical interpretation as electric charge when this theory is coupled
to the electromagnetic field (see chapter 42):
$$Q = \int_{\mathbf{R}^3} (\pi_2(\mathbf{x})\phi_1(\mathbf{x}) - \pi_1(\mathbf{x})\phi_2(\mathbf{x}))\, d^3x$$
One can use the field Poisson bracket relations
$$\{\phi_j(\mathbf{x}), \pi_k(\mathbf{x}')\} = \delta_{jk}\,\delta^3(\mathbf{x} - \mathbf{x}')$$
to check that
$$\left\{Q, \begin{pmatrix} \phi_1(\mathbf{x}) \\ \phi_2(\mathbf{x}) \end{pmatrix}\right\} = \begin{pmatrix} -\phi_2(\mathbf{x}) \\ \phi_1(\mathbf{x}) \end{pmatrix}, \quad \left\{Q, \begin{pmatrix} \pi_1(\mathbf{x}) \\ \pi_2(\mathbf{x}) \end{pmatrix}\right\} = \begin{pmatrix} -\pi_2(\mathbf{x}) \\ \pi_1(\mathbf{x}) \end{pmatrix}$$

Quantization of the classical field theory gives us a unitary representation $U$
of $SO(2)$, with
$$U'(L) = -i\hat{Q} = -i\int_{\mathbf{R}^3} (\hat{\pi}_2(\mathbf{x})\hat{\phi}_1(\mathbf{x}) - \hat{\pi}_1(\mathbf{x})\hat{\phi}_2(\mathbf{x}))\, d^3x$$
The operator
$$U(\theta) = e^{-i\theta\hat{Q}}$$
will act by conjugation on the fields:
$$U(\theta)\begin{pmatrix} \hat{\phi}_1(\mathbf{x}) \\ \hat{\phi}_2(\mathbf{x}) \end{pmatrix} U(\theta)^{-1} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} \hat{\phi}_1(\mathbf{x}) \\ \hat{\phi}_2(\mathbf{x}) \end{pmatrix}$$
$$U(\theta)\begin{pmatrix} \hat{\pi}_1(\mathbf{x}) \\ \hat{\pi}_2(\mathbf{x}) \end{pmatrix} U(\theta)^{-1} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} \hat{\pi}_1(\mathbf{x}) \\ \hat{\pi}_2(\mathbf{x}) \end{pmatrix}$$
It will also give a representation of $SO(2)$ on states, with the state space decomposing
into sectors each labeled by the integer eigenvalue of the operator $\hat{Q}$
(which will be called the "charge" of the state).
Using the definitions of $\hat{\phi}$ and $\hat{\pi}$ (40.1 and 40.2) one can compute $\hat{Q}$ in terms
of annihilation and creation operators, with the result
$$\hat{Q} = i\int_{\mathbf{R}^3} (a_2^\dagger(\mathbf{p}) a_1(\mathbf{p}) - a_1^\dagger(\mathbf{p}) a_2(\mathbf{p}))\, d^3p \quad (41.1)$$

One expects that, since the time evolution action on the classical field space
commutes with the $SO(2)$ action, the operator $\hat{Q}$ should commute with the
Hamiltonian operator $\hat{H}$. This can readily be checked by computing $[\hat{H}, \hat{Q}]$
using
$$\hat{H} = \int_{\mathbf{R}^3} \omega_{\mathbf{p}}(a_1^\dagger(\mathbf{p}) a_1(\mathbf{p}) + a_2^\dagger(\mathbf{p}) a_2(\mathbf{p}))\, d^3p$$
Note that the vacuum state $|0\rangle$ is an eigenvector for $\hat{Q}$ and $\hat{H}$ with eigenvalue
0: it has zero energy and zero charge. States $a_1^\dagger(\mathbf{p})|0\rangle$ and $a_2^\dagger(\mathbf{p})|0\rangle$ are eigenvectors
of $\hat{H}$ with eigenvalue, and thus energy, $\omega_{\mathbf{p}}$, but these are not eigenvectors
of $\hat{Q}$, so do not have a well-defined charge.
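For a single momentum mode this can be checked numerically with truncated oscillator matrices (a toy sketch of ours: finite 10-level oscillators for the two field components, with $\omega_{\mathbf{p}}$ set to 1):

```python
import numpy as np

N = 10
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # truncated annihilation operator
I = np.eye(N)
A1, A2 = np.kron(a, I), np.kron(I, a)           # modes of the two field components

H = A1.conj().T @ A1 + A2.conj().T @ A2         # single-mode Hamiltonian, omega = 1
Q = 1j * (A2.conj().T @ A1 - A1.conj().T @ A2)  # single-mode analog of the charge

# Q is self-adjoint and commutes with H, even after truncation
assert np.allclose(Q, Q.conj().T)
assert np.allclose(H @ Q - Q @ H, 0)
```

The commutation $[\hat H, \hat Q] = 0$ survives the truncation exactly because it only uses $[a^\dagger a, a] = -a$ and $[a^\dagger a, a^\dagger] = a^\dagger$, which hold for the truncated matrices.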
All of this can be generalized to the case of $m > 2$ real scalar fields, with a
larger group $SO(m)$ now acting instead of the group $SO(2)$. The Lie algebra is
now multi-dimensional, with a basis the elementary antisymmetric matrices $\epsilon_{jk}$,
with $j, k = 1, 2, \cdots, m$ and $j < k$, which correspond to infinitesimal rotations in
the $j$-$k$ planes. Group elements can be constructed by multiplying rotations
$e^{\theta\epsilon_{jk}}$ in different planes. Instead of a single operator $\hat{Q}$, we get multiple operators
$$-i\hat{Q}_{jk} = -i\int_{\mathbf{R}^3} (\hat{\pi}_k(\mathbf{x})\hat{\phi}_j(\mathbf{x}) - \hat{\pi}_j(\mathbf{x})\hat{\phi}_k(\mathbf{x}))\, d^3x$$
and conjugation by
$$U_{jk}(\theta) = e^{-i\theta\hat{Q}_{jk}}$$

rotates the field operators in the $j$-$k$ plane. These also provide unitary operators
on the state space and, taking appropriate products of them, a unitary
representation of the full group $SO(m)$ on the state space. The $\hat{Q}_{jk}$ commute
with the Hamiltonian, so the energy eigenstates of the theory break up into irreducible
representations of $SO(m)$ (a subject we haven't discussed for $m > 3$).

41.1.2 U (1) symmetry and complex scalar fields


Instead of describing a scalar field system with $SO(2)$ symmetry using a pair
$\phi_1, \phi_2$ of real fields, it is more convenient to identify the $\mathbf{R}^2$ that the fields take
values in with $\mathbf{C}$, and work with complex scalar fields and a $U(1)$ symmetry.
This will allow us to work with a set of annihilation and creation operators
for particle states with a definite value of the charge observable. Note that we
were already forced to introduce a complex structure (given by the splitting of
complexified solutions of the Klein-Gordon equation into positive and negative
energy solutions) as part of the Bargmann-Fock quantization. This is a second
and independent source of complex numbers in the theory.
We express a pair of real-valued fields as complex-valued fields using
$$\phi = \frac{1}{\sqrt{2}}(\phi_1 + i\phi_2), \quad \pi = \frac{1}{\sqrt{2}}(\pi_1 - i\pi_2)$$
These can be thought of as initial-value data parametrizing complex solutions
of the Klein-Gordon equation, giving a phase space that is infinite-dimensional,
with four real dimensions for each value of $\mathbf{x}$. Instead of
$$\phi_1(\mathbf{x}), \phi_2(\mathbf{x}), \pi_1(\mathbf{x}), \pi_2(\mathbf{x})$$
we can think of the complex-valued fields and their complex conjugates
$$\phi(\mathbf{x}), \overline{\phi(\mathbf{x})}, \pi(\mathbf{x}), \overline{\pi(\mathbf{x})}$$
as providing a basis of the coordinates on phase space.


The Poisson bracket relations on such complex fields will be
$$\{\phi(\mathbf{x}), \phi(\mathbf{x}')\} = \{\pi(\mathbf{x}), \pi(\mathbf{x}')\} = \{\phi(\mathbf{x}), \overline{\pi(\mathbf{x}')}\} = \{\overline{\phi(\mathbf{x})}, \pi(\mathbf{x}')\} = 0$$
$$\{\phi(\mathbf{x}), \pi(\mathbf{x}')\} = \{\overline{\phi(\mathbf{x})}, \overline{\pi(\mathbf{x}')}\} = \delta^3(\mathbf{x} - \mathbf{x}')$$
and the classical Hamiltonian is
$$h = \int_{\mathbf{R}^3} (|\pi|^2 + |\nabla\phi|^2 + m^2|\phi|^2)\, d^3x$$

Note that introducing complex fields in a theory like this, with field equations
that are second-order in time, means that for each $\mathbf{x}$ we have a phase space
with two complex dimensions ($\phi(\mathbf{x})$ and $\pi(\mathbf{x})$). Using Bargmann-Fock methods
requires complexifying one's phase space, which is a bit confusing here since
the phase space is already given in terms of complex fields. We can however
proceed to find the operator that generates the $U(1)$ symmetry as follows.

In terms of complex fields, the $SO(2)$ transformations on the pair $\phi_1, \phi_2$ of
real fields become $U(1)$ phase transformations, with $Q$ now given by
$$Q = -i\int_{\mathbf{R}^3} (\pi(\mathbf{x})\phi(\mathbf{x}) - \overline{\pi(\mathbf{x})}\,\overline{\phi(\mathbf{x})})\, d^3x$$
satisfying
$$\{Q, \phi(\mathbf{x})\} = i\phi(\mathbf{x}), \quad \{Q, \overline{\phi(\mathbf{x})}\} = -i\overline{\phi(\mathbf{x})}$$
Quantization of the classical field theory gives a representation of the infinite
dimensional Heisenberg algebra with commutation relations
$$[\hat{\phi}(\mathbf{x}), \hat{\phi}(\mathbf{x}')] = [\hat{\pi}(\mathbf{x}), \hat{\pi}(\mathbf{x}')] = [\hat{\phi}^\dagger(\mathbf{x}), \hat{\phi}^\dagger(\mathbf{x}')] = [\hat{\pi}^\dagger(\mathbf{x}), \hat{\pi}^\dagger(\mathbf{x}')] = 0$$
$$[\hat{\phi}(\mathbf{x}), \hat{\pi}(\mathbf{x}')] = [\hat{\phi}^\dagger(\mathbf{x}), \hat{\pi}^\dagger(\mathbf{x}')] = i\delta^3(\mathbf{x} - \mathbf{x}')$$
Quantization of the quadratic functional $Q$ of the fields is done with the normal-ordering
prescription, to get
$$\hat{Q} = -i\int_{\mathbf{R}^3} : (\hat{\pi}(\mathbf{x})\hat{\phi}(\mathbf{x}) - \hat{\pi}^\dagger(\mathbf{x})\hat{\phi}^\dagger(\mathbf{x})) :\, d^3x$$
R3

Taking $L = i$ as a basis element for $\mathfrak{u}(1)$, one gets a unitary representation
$U$ of $U(1)$ using
$$U'(L) = -i\hat{Q}$$
and
$$U(\theta) = e^{-i\theta\hat{Q}}$$
$U$ acts by conjugation on the fields:
$$U(\theta)\hat{\phi}\,U(\theta)^{-1} = e^{-i\theta}\hat{\phi}, \quad U(\theta)\hat{\phi}^\dagger\, U(\theta)^{-1} = e^{i\theta}\hat{\phi}^\dagger$$
$$U(\theta)\hat{\pi}\,U(\theta)^{-1} = e^{i\theta}\hat{\pi}, \quad U(\theta)\hat{\pi}^\dagger\, U(\theta)^{-1} = e^{-i\theta}\hat{\pi}^\dagger$$
It will also give a representation of $U(1)$ on states, with the state space decomposing
into sectors each labeled by the integer eigenvalue of the operator
$\hat{Q}$.
In the Bargmann-Fock quantization of this theory, we can express the quantum
fields in terms of a different set of two annihilation and creation operators
$$a(\mathbf{p}) = \frac{1}{\sqrt{2}}(a_1(\mathbf{p}) + ia_2(\mathbf{p})), \quad a^\dagger(\mathbf{p}) = \frac{1}{\sqrt{2}}(a_1^\dagger(\mathbf{p}) - ia_2^\dagger(\mathbf{p}))$$
$$b(\mathbf{p}) = \frac{1}{\sqrt{2}}(a_1(\mathbf{p}) - ia_2(\mathbf{p})), \quad b^\dagger(\mathbf{p}) = \frac{1}{\sqrt{2}}(a_1^\dagger(\mathbf{p}) + ia_2^\dagger(\mathbf{p}))$$
The only non-zero commutation relations between these operators will be
$$[a(\mathbf{p}), a^\dagger(\mathbf{p}')] = \delta^3(\mathbf{p} - \mathbf{p}'), \quad [b(\mathbf{p}), b^\dagger(\mathbf{p}')] = \delta^3(\mathbf{p} - \mathbf{p}')$$
so we see that we have, for each $\mathbf{p}$, two independent sets of standard annihilation
and creation operators, which will act on a tensor product of two standard
harmonic oscillator state spaces. The states created and annihilated by the
$a^\dagger(\mathbf{p})$ and $a(\mathbf{p})$ operators will have an interpretation as particles of momentum
$\mathbf{p}$, whereas those created and annihilated by the $b^\dagger(\mathbf{p})$ and $b(\mathbf{p})$ operators will
be antiparticles of momentum $\mathbf{p}$. The vacuum state will satisfy
$$a(\mathbf{p})|0\rangle = b(\mathbf{p})|0\rangle = 0$$

Using these creation and annihilation operators, the definition of the complex
field operators is

Definition (Complex scalar quantum field). The complex scalar quantum field
operators are the operator-valued distributions defined by
$$\hat{\phi}(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} (a(\mathbf{p})\, e^{i\mathbf{p}\cdot\mathbf{x}} + b^\dagger(\mathbf{p})\, e^{-i\mathbf{p}\cdot\mathbf{x}})\, \frac{d^3p}{\sqrt{2\omega_{\mathbf{p}}}}$$
$$\hat{\phi}^\dagger(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} (b(\mathbf{p})\, e^{i\mathbf{p}\cdot\mathbf{x}} + a^\dagger(\mathbf{p})\, e^{-i\mathbf{p}\cdot\mathbf{x}})\, \frac{d^3p}{\sqrt{2\omega_{\mathbf{p}}}}$$
$$\hat{\pi}(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} (-i\omega_{\mathbf{p}})(a(\mathbf{p})\, e^{i\mathbf{p}\cdot\mathbf{x}} - b^\dagger(\mathbf{p})\, e^{-i\mathbf{p}\cdot\mathbf{x}})\, \frac{d^3p}{\sqrt{2\omega_{\mathbf{p}}}}$$
$$\hat{\pi}^\dagger(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}} \int_{\mathbf{R}^3} (-i\omega_{\mathbf{p}})(b(\mathbf{p})\, e^{i\mathbf{p}\cdot\mathbf{x}} - a^\dagger(\mathbf{p})\, e^{-i\mathbf{p}\cdot\mathbf{x}})\, \frac{d^3p}{\sqrt{2\omega_{\mathbf{p}}}}$$

These operators provide a representation of the infinite-dimensional Heisenberg
algebra given by the linear functions on the phase space of solutions to
the complexified Klein-Gordon equation. This representation will be on a state
space describing both particles and antiparticles. The commutation relations
are
$$[\hat{\phi}(\mathbf{x}), \hat{\phi}(\mathbf{x}')] = [\hat{\pi}(\mathbf{x}), \hat{\pi}(\mathbf{x}')] = [\hat{\phi}^\dagger(\mathbf{x}), \hat{\phi}^\dagger(\mathbf{x}')] = [\hat{\pi}^\dagger(\mathbf{x}), \hat{\pi}^\dagger(\mathbf{x}')] = 0$$
$$[\hat{\phi}^\dagger(\mathbf{x}), \hat{\pi}(\mathbf{x}')] = [\hat{\phi}(\mathbf{x}), \hat{\pi}^\dagger(\mathbf{x}')] = i\delta^3(\mathbf{x} - \mathbf{x}')$$
The Hamiltonian operator will be
$$\hat{H} = \int_{\mathbf{R}^3} : \left(\hat{\pi}^\dagger(\mathbf{x})\hat{\pi}(\mathbf{x}) + (\nabla\hat{\phi}^\dagger(\mathbf{x}))\cdot(\nabla\hat{\phi}(\mathbf{x})) + m^2\hat{\phi}^\dagger(\mathbf{x})\hat{\phi}(\mathbf{x})\right) :\, d^3x$$
$$= \int_{\mathbf{R}^3} \omega_{\mathbf{p}}(a^\dagger(\mathbf{p}) a(\mathbf{p}) + b^\dagger(\mathbf{p}) b(\mathbf{p}))\, d^3p$$

Note that the classical solutions to the Klein-Gordon equation have both
positive and negative energy, but the quantization is chosen so that negative
energy solutions correspond to antiparticle annihilation and creation operators,
and all states of the quantum theory have non-negative energy.

41.2 Poincaré symmetry and scalar fields
Momentum and energy operators, angular momentum operators. Discuss action
of Lorentz boosts.
The Poincaré group action on the coordinates $\tilde{\phi}(p)$ on $\mathcal{H}_1$ will be given by
$$u(a, \Lambda)\tilde{\phi}(p) = e^{-ip\cdot a}\,\tilde{\phi}(\Lambda^{-1}p)$$

41.3 For further reading

Chapter 42

U(1) Gauge Symmetry and Coupling to the Electromagnetic Field

We have now constructed both relativistic and non-relativistic quantum field
theories for free scalar particles. In the non-relativistic case we had to use
complex-valued fields, and found that the theory came with an action of a $U(1)$
group, the group of phase transformations on the fields. In the relativistic case
real-valued fields could be used, but if we took complex-valued ones (or used
pairs of real-valued fields), again there was an action of a $U(1)$ group of phase
transformations. This is the simplest example of a so-called "internal symmetry",
and it is reflected in the existence of an operator $Q$ called the "charge".

In this chapter we'll see how to go beyond the theory of free particles by
introducing classical electromagnetic forces acting on quantized particles. We
will see that this can be done using the $U(1)$ group action, with $Q$ now having
the interpretation of "electric charge": the strength of the coupling between the
particle and the electric field. This requires a new sort of space-time dependent
field, called the "vector potential", and by using this field one finds quantum
theories with a large, infinite-dimensional group of symmetries, a group called
the "gauge group". In this chapter we'll study this new symmetry and see
how to use it and the vector potential to get quantum field theories describing
particles interacting with electromagnetic fields. In the next chapter we'll go on
to study the question of how to quantize the vector potential field, leading to a
quantum field-theoretic description of photons.

42.1 U (1) gauge symmetry


In sections 36.1.1 and 41.1 we saw that the existence of a $U(1)$ group action
by overall phase transformations on the field values led to the existence of an
operator with certain commutation relations with the field operators, acting
with integral eigenvalues on the space of states. Instead of just multiplying
fields by a constant phase $e^{i\varphi}$, one can imagine multiplying by a phase that
varies with the coordinates $x$, so
$$\psi(x) \rightarrow e^{i\varphi(x)}\psi(x)$$
(so $\varphi$ will be a function, taking values in $\mathbf{R}/2\pi$). By doing this, we are making
a huge group of transformations act on the theory. Elements of this group are
called gauge transformations:

Definition (Gauge group). The group $\mathcal{G}$ of functions on $\mathbf{R}^4$ with values in the
unit circle $U(1)$, with group law given by point-wise multiplication
$$e^{i\varphi_1(x)} \cdot e^{i\varphi_2(x)} = e^{i(\varphi_1(x) + \varphi_2(x))}$$
is called the $U(1)$ gauge group, or group of $U(1)$ gauge transformations.


This is an infinite-dimensional group, and new methods are needed to study
its representations (although we will mainly be interested in invariant states, so
just the trivial representation of the group).
Terms in the Hamiltonian that just involve $\overline{\psi(x)}\psi(x)$ will be invariant under
the group $\mathcal{G}$, but terms with derivatives such as
$$|\nabla\psi|^2$$
will not, since when
$$\psi \rightarrow e^{i\varphi(x)}\psi(x)$$
one has the inhomogeneous behavior
$$\partial_\mu\psi(x) \rightarrow \partial_\mu(e^{i\varphi(x)}\psi(x)) = e^{i\varphi(x)}(i\partial_\mu\varphi(x) + \partial_\mu)\psi(x)$$

To deal with this problem, one introduces a new degree of freedom:

Definition (Connection or vector potential). A $U(1)$ connection (mathematician's
terminology) or vector potential (physicist's terminology) is a function $A$
on space-time $\mathbf{R}^4$ taking values in $\mathbf{R}^4$, with its components denoted
$$A_\mu(x)$$
and such that the gauge group $\mathcal{G}$ acts on the space of $U(1)$ connections by
$$A_\mu(x) \rightarrow A_\mu(x) + \partial_\mu\varphi(x)$$
With this new object one can define a new sort of derivative which will have
homogeneous transformation properties:

Definition (Covariant derivative). Given a connection $A$, the associated covariant
derivative in the $\mu$ direction is the operator
$$(D_A)_\mu = \partial_\mu - iA_\mu(x)$$

Note that under a gauge tranformation, one has
(DA )µ ψ → eiϕ(x) (DA )µ ψ
and terms in a Hamiltonian such as
3
X
((DA )j ψ)((DA )j ψ)
j=1

will be invariant under the infinite-dimensional group G.
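The homogeneous transformation of the covariant derivative can be checked symbolically; here is a minimal one-dimensional sketch (using sympy; the function names are illustrative, not from the text):

```python
# Symbolic check: under psi -> e^{i phi} psi and A -> A + phi',
# the covariant derivative (d/dx - i A) psi transforms homogeneously.
import sympy as sp

x = sp.symbols("x", real=True)
psi = sp.Function("psi")(x)
A = sp.Function("A", real=True)(x)
phi = sp.Function("phi", real=True)(x)

def D(A, f):
    """Covariant derivative in one dimension: (d/dx - i A) f."""
    return sp.diff(f, x) - sp.I * A * f

psi_new = sp.exp(sp.I * phi) * psi       # gauge-transformed field
A_new = A + sp.diff(phi, x)              # gauge-transformed connection

lhs = D(A_new, psi_new)
rhs = sp.exp(sp.I * phi) * D(A, psi)
assert sp.expand(lhs - rhs) == 0
print("covariant derivative transforms homogeneously")
```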


Write out Hamiltonian version of Schrödinger and KG, coupled to a vector
potential
Digression. Explain the path-integral formalism, weighting of paths. Lagrangian
form, just minimal coupling.

42.2 Electric and magnetic fields


While the connection A is the fundamental geometrical quantity needed to con-
struct theories with gauge symmetry, one often wants to work instead with
quantities derived from A which describe the information contained in A that
does not change when one acts by a gauge transformation. To a mathemati-
cian, this is the curvature of a connection, to a physicist, it is the field strengths
derived from a vector potential.
Digression. If one is familiar with differential forms, the definition of the curvature F of a connection A is most simply made by thinking of A as an element of Ω¹(R⁴), the space of 1-forms on space-time R⁴. Then the curvature of A is simply the 2-form F = dA, where d is the de Rham differential. The gauge group acts on connections by

A → A + dϕ
The curvature or field strength of a vector potential is defined by:
Definition (Curvature or field strength). To a connection A_µ(x) one can associate the curvature

F_µν = ∂_µ A_ν − ∂_ν A_µ

This is a set of functions on R⁴ depending on two indices µ, ν that can each take values 0, 1, 2, 3.
The 6 independent components of F_µν are often written in terms of two vectors E, B with components

E_j = −F_{0j} = −∂A_j/∂t + ∂A_0/∂x_j,   or   E = −∂A/∂t + ∇A_0

B_j = (1/2) ε_{jkl} (∂A_l/∂x_k − ∂A_k/∂x_l),   or   B = ∇×A

E is called the electric field, B the magnetic field.
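Gauge invariance of the field strength follows from the symmetry of second partial derivatives; this can be checked symbolically (a sympy sketch with illustrative names, not part of the text):

```python
# Check that F_{mu nu} = d_mu A_nu - d_nu A_mu is unchanged under
# the gauge transformation A_mu -> A_mu + d_mu phi.
import sympy as sp

xs = sp.symbols("x0 x1 x2 x3", real=True)
A = [sp.Function(f"A{mu}", real=True)(*xs) for mu in range(4)]
phi = sp.Function("phi", real=True)(*xs)

def F(A, mu, nu):
    """Curvature/field strength F_{mu nu}."""
    return sp.diff(A[nu], xs[mu]) - sp.diff(A[mu], xs[nu])

A_new = [A[mu] + sp.diff(phi, xs[mu]) for mu in range(4)]

for mu in range(4):
    for nu in range(4):
        assert sp.simplify(F(A_new, mu, nu) - F(A, mu, nu)) == 0
print("F_{mu nu} is gauge invariant")
```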

42.3 The Pauli-Schrödinger equation in an electromagnetic field
The Pauli-Schrödinger equation (31.1) describes a free spin-half non-relativistic
quantum particle. One can couple it to a vector potential by the “minimal
coupling” prescription of replacing derivatives by covariant derivatives, with the result

i (∂/∂t − iA_0) ψ = − (1/2m) (σ·(∇ − iA))² ψ

acting on the two-component wavefunction ψ(q) = (ψ_1(q), ψ_2(q)).
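Expanding the square (σ·(∇ − iA))² uses the Pauli identity (σ·a)(σ·b) = (a·b)1 + i σ·(a×b), which is what produces the magnetic coupling term σ·B. A quick numerical sketch of this identity (numpy, not from the text):

```python
# Verify (sigma.a)(sigma.b) = (a.b) 1 + i sigma.(a x b) for random vectors.
import numpy as np

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])

def sdot(v):
    """sigma . v as a 2x2 matrix."""
    return np.tensordot(v, sigma, axes=(0, 0))

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)
lhs = sdot(a) @ sdot(b)
rhs = np.dot(a, b) * np.eye(2) + 1j * sdot(np.cross(a, b))
assert np.allclose(lhs, rhs)
print("(sigma.a)(sigma.b) = a.b + i sigma.(a x b) verified")
```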

Give examples of an external magnetic field, and of a Coulomb potential.

42.4 Non-abelian gauge symmetry


42.5 For further reading

Chapter 43

Quantization of the
Electromagnetic Field: the
Photon

Understanding the classical field theory of coupled scalar fields and vector po-
tentials is rather difficult, with the quantized theory even more so, due to the
fact that the Hamiltonian is no longer quadratic in the field variables. If one
simplifies the problem by ignoring the scalar fields and just considering the
vector potentials, one does get a theory with quadratic Hamiltonian that can
be readily understood and quantized. The classical equations of motion are
the Maxwell equations in a vacuum, whose solutions are electromagnetic waves. The
quantization will be a relativistic theory of free, massless particles of helicity
±1, the photons.
To get a sensible, unitary theory of photons, one must take into account
the infinite dimensional gauge group G that acts on the classical phase space of
solutions to the Maxwell equations. We will see that there are various ways of
doing this, each with its own subtleties.

43.1 Maxwell’s equations


First using differential forms:

dF = 0, d ∗ F = 0

Then in components.
Show that gauge transform of a solution is a solution, gauge group acts on
the solution space.

43.2 Hamiltonian formalism for electromagnetic
fields
Equations in Hamiltonian form. Hamiltonian is E² + B².
First problem: data at fixed t does not give a unique solution. Deal with
this by going to temporal gauge A0 = 0.
Second problem: no Gauss’s law. Have remaining symmetry under time-
independent gauge transformations. Compute moment map for time-independent
gauge transformation.

43.3 Quantization
Two general philosophies: impose constraints on states, or on the space one
quantizes.

43.4 Field operators for the vector potential


Relate to the helicity one irreps of Poincare.

43.5 For further reading

Chapter 44

The Dirac Equation and Spin-1/2 Fields

The space of solutions to the Klein-Gordon equation gives an irreducible representation of the Poincaré group corresponding to a relativistic particle of mass
m and spin zero. Elementary matter particles (quarks and leptons) are spin 1/2
particles, and we would like to have a relativistic wave equation that describes
them, suitable for building a quantum field theory. This is provided by a remarkable construction that uses the Clifford algebra and its action on spinors to find a square root of the Klein-Gordon equation. The result, the Dirac equation, requires that our fields take not just real or complex values, but values in a spinor (1/2, 0) ⊕ (0, 1/2) representation of the Lorentz group. In the massless case, the Dirac equation decouples into two separate Weyl equations, for left-handed ((1/2, 0) representation of the Lorentz group) and right-handed ((0, 1/2) representation) Weyl spinor fields.
we are just considering free quantum fields, which describe non-interacting par-
ticles. In following chapters we will see how to couple these fields to U (1) gauge
fields, allowing a description of electromagnetic particle interactions.

44.1 The Dirac and Weyl Equations


Recall from our discussion of supersymmetric quantum mechanics that we found that in n dimensions the operator

Σ_{j=1}^{n} P_j²

has a square root, given by

Σ_{j=1}^{n} γ_j P_j
where the γ_j are generators of the Clifford algebra Cliff(n). The same thing is true for Minkowski space, where one takes the Clifford algebra Cliff(3, 1), which is generated by elements γ_0, γ_1, γ_2, γ_3 satisfying

γ_0² = −1,   γ_1² = γ_2² = γ_3² = +1,   γ_j γ_k + γ_k γ_j = 0 for j ≠ k

which we have seen can be realized in various ways as an algebra of 4 by 4 matrices.
We have seen that −P_0² + P_1² + P_2² + P_3² is a Casimir operator for the Poincaré group, and that we get irreducible representations of the Poincaré group by using the condition that this acts as a scalar −m²

−P_0² + P_1² + P_2² + P_3² = −m²

Using the Clifford algebra generators, we find that this Casimir operator has a square root

±(−γ_0 P_0 + γ_1 P_1 + γ_2 P_2 + γ_3 P_3)

so one could instead look for solutions to

±(−γ_0 P_0 + γ_1 P_1 + γ_2 P_2 + γ_3 P_3) = im

If the operators P_j continue to act on functions φ(x) on Minkowski space as −i ∂/∂x_j, then this square root of the Casimir acts on the tensor product of the spinor space S ≅ C⁴ and a space of functions on Minkowski space. We will denote such objects by Ψ; they are C⁴-valued functions on Minkowski space. Taking the negative square root, we have

Definition (Dirac operator and the Dirac equation). The Dirac operator is the operator

D̸ = −γ_0 ∂/∂x_0 + γ_1 ∂/∂x_1 + γ_2 ∂/∂x_2 + γ_3 ∂/∂x_3

and the Dirac equation is the equation

D̸ Ψ = mΨ

This rather simple looking equation contains an immense amount of non-trivial mathematics and physics, providing a spectacularly successful description of the behavior of physical elementary particles. Writing it more explicitly requires a choice of a specific representation of the γ_j as complex matrices, and we will use the chiral or Weyl representation, here written in 2 by 2 blocks

γ_0 = −i (0   1)      γ_j = −i (  0    σ_j)    for j = 1, 2, 3
         (1   0),              (−σ_j    0 )

which act on

Ψ = (ψ_L)
    (ψ_R)

(recall that using γ-matrices this way, ψ_L transforms under the Lorentz group as the S_L representation, ψ_R as the dual of the S_R representation).
The Dirac equation is then

(      0         ∂/∂x_0 − σ·∇) (ψ_L)     (ψ_L)
(∂/∂x_0 + σ·∇         0     ) (ψ_R) = m (ψ_R)

or, in components

(∂/∂x_0 − σ·∇) ψ_R = m ψ_L

(∂/∂x_0 + σ·∇) ψ_L = m ψ_R
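As a sanity check, the chiral representation of the γ-matrices given above can be verified numerically to satisfy the Cliff(3, 1) relations; a minimal numpy sketch (not part of the text):

```python
# Check gamma_0^2 = -1, gamma_j^2 = +1, and pairwise anticommutation
# for the chiral (Weyl) representation.
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

gamma = [-1j * np.block([[Z, I2], [I2, Z]])]
gamma += [-1j * np.block([[Z, sj], [-sj, Z]]) for sj in s]

I4 = np.eye(4)
assert np.allclose(gamma[0] @ gamma[0], -I4)
for j in range(1, 4):
    assert np.allclose(gamma[j] @ gamma[j], I4)
for j in range(4):
    for k in range(j + 1, 4):
        assert np.allclose(gamma[j] @ gamma[k] + gamma[k] @ gamma[j], 0 * I4)
print("Cliff(3,1) relations hold in the chiral representation")
```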
In the case that m = 0, these equations decouple and we get

Definition (Weyl equations). The Weyl wave equations for two-component spinors are

(∂/∂x_0 − σ·∇) ψ_R = 0,   (∂/∂x_0 + σ·∇) ψ_L = 0

To find solutions to these equations we Fourier transform, using

ψ(x_0, x) = 1/(2π)² ∫ d⁴p e^{i(−p_0 x_0 + p·x)} ψ̃(p_0, p)

and see that the Weyl equations are

(p_0 + σ·p) ψ̃_R = 0

(p_0 − σ·p) ψ̃_L = 0

Since

(p_0 + σ·p)(p_0 − σ·p) = p_0² − (σ·p)² = p_0² − |p|²

both ψ̃_R and ψ̃_L satisfy

(p_0² − |p|²) ψ̃ = 0

so are functions with support on the positive (p_0 = |p|) and negative (p_0 = −|p|) energy null-cone. These are Fourier transforms of solutions to the massless Klein-Gordon equation

(−∂²/∂x_0² + ∂²/∂x_1² + ∂²/∂x_2² + ∂²/∂x_3²) ψ = 0
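The key algebraic step here, (p_0 + σ·p)(p_0 − σ·p) = (p_0² − |p|²)1, is easy to verify numerically; an illustrative numpy sketch (not from the text):

```python
# Check the dispersion identity behind the Weyl equations for a random p.
import numpy as np

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])
rng = np.random.default_rng(1)
p = rng.standard_normal(3)
p0 = rng.standard_normal()
sp_ = np.tensordot(p, sigma, axes=(0, 0))   # sigma . p as a 2x2 matrix
prod = (p0 * np.eye(2) + sp_) @ (p0 * np.eye(2) - sp_)
assert np.allclose(prod, (p0**2 - p @ p) * np.eye(2))
print("dispersion identity verified")
```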

Recall that our general analysis of irreducible representations of the Poincaré group showed that we expected to find such representations by looking at functions on the positive and negative energy null-cones, with values in representations of SO(2), the group of rotations preserving the vector p. Acting on the
spin-1/2 representation, the generator of this group is given by the following
operator:

Definition (Helicity). The operator

h = (1/2) (σ·p)/|p|

is called the helicity operator. It has eigenvalues ±1/2, and its eigenstates are said to have helicity ±1/2. States with helicity +1/2 are called “left-handed”, those with helicity −1/2 are called “right-handed”.
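For the eigenvalues to be ±1/2 as stated, the operator carries a factor of 1/2, i.e. h = (1/2) σ·p/|p|; this can be checked numerically for any direction p (numpy sketch, illustrative):

```python
# The helicity operator (1/2) sigma.p/|p| has eigenvalues +1/2 and -1/2
# for an arbitrary momentum direction.
import numpy as np

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])
rng = np.random.default_rng(2)
p = rng.standard_normal(3)
h = np.tensordot(p, sigma, axes=(0, 0)) / (2 * np.linalg.norm(p))
eigs = np.sort(np.linalg.eigvalsh(h))   # h is Hermitian for real p
assert np.allclose(eigs, [-0.5, 0.5])
print("helicity eigenvalues are +-1/2")
```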

We see that a continuous basis of solutions to the Weyl equation for ψ_L is given by the positive energy (p_0 = |p|) solutions

u_L(p) e^{i(−p_0 x_0 + p·x)}

where u_L ∈ C² satisfies

h u_L = +(1/2) u_L

so the helicity is +1/2, and negative energy (p_0 = −|p|) solutions

u_L(p) e^{i(−p_0 x_0 + p·x)}

with helicity −1/2. After quantization, this wave equation gives a field theory describing massless left-handed particles and right-handed antiparticles. The Weyl equation for ψ_R will give a description of massless right-handed particles and left-handed antiparticles.
Show Lorentz covariance.
Canonical formalism Hamiltonian, Lagrangian
Separate sub-sections for the Dirac and Majorana cases

44.2 Quantum Fields


Start by recalling fermionic harmonic oscillator
Write down expressions for the quantum fields in the various cases

44.3 Symmetries
Write down formulas for Poincaré and internal symmetry actions.

44.4 The propagator


44.5 For further reading
Maggiore

Chapter 45

An Introduction to the
Standard Model

45.1 Non-Abelian gauge fields


Explain non-abelian case.
Note no longer quadratic.
Asymptotic freedom.
Why these groups and couplings?

45.2 Fundamental fermions


mention the possibility of a right-handed neutrino
SO(10) spinor.
Problem, why these representations? Why three generations

45.3 Spontaneous symmetry breaking


Explain general idea with U (1) case. Mention superconductors.
Higgs potential.
Mass terms. Kobayashi-Maskawa.

45.4 For further reading

Chapter 46

Further Topics

There’s a long list of topics that should be covered in a quantum mechanics


course that aren’t discussed here due to lack of time in the class and lack of
energy of the author. Two important ones are:
• Scattering theory. Here one studies solutions to Schrödinger’s equation
that in the far past and future correspond to free-particle solutions, with
a localized interaction with a potential occurring at finite time. This is
exactly the situation analyzed experimentally through the study of scat-
tering processes. Use of the representation theory of the Euclidean group,
the semi-direct product of rotations and translations in R3 provides in-
sight into this problem and the various functions that occur, including
spherical Bessel functions.
• Perturbation methods. Rarely can one find exact solutions to quantum
mechanical problems, so one needs to have at hand an array of approxima-
tion techniques. The most important is perturbation theory, the study of
how to construct series expansions about exact solutions. This technique
can be applied to a wide variety of situations, as long as the system in
question is not too dramatically of a different nature than one for which
an exact solution exists.

More advanced topics where representation theory is important:


• Conformal geometry. For theories with massless particles, conformal group
as extension of Poincaré group. Twistors.
• Infinite dimensional non-abelian groups: loop groups and affine Lie alge-
bras, the Virasoro algebra. Conformal quantum field theories.

Appendix A

Conventions

I’ve attempted to stay close to the conventions used in the physics literature,
leading to the choices listed here. Units have been chosen so that ~ = 1.
To get from the self-adjoint operators used by physicists as generators of
symmetries, multiply by −i to get a skew-adjoint operator in a unitary repre-
sentation of the Lie algebra, for example
• The Lie bracket on the space of functions on phase space M is given by
the Poisson bracket, determined by

{q, p} = 1

Quantization takes 1, q, p to self-adjoint operators 1, Q, P . To make this


a unitary representation of the Heisenberg Lie algebra h3 , multiply the
self-adjoint operators by −i, so they satisfy

[−iQ, −iP ] = −i1, or [Q, P ] = i1

In other words, our quantization map is the unitary representation of h3


that satisfies

Γ′(q) = −iQ,   Γ′(p) = −iP,   Γ′(1) = −i1
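In the Schrödinger representation, this is the statement that Q (multiplication by q) and P = −i d/dq satisfy [Q, P] = i1 on wavefunctions; a symbolic sympy sketch (not from the text):

```python
# Realize Q as multiplication by q and P as -i d/dq, then check [Q, P] = i 1.
import sympy as sp

q = sp.symbols("q", real=True)
psi = sp.Function("psi")(q)

Q = lambda f: q * f
P = lambda f: -sp.I * sp.diff(f, q)

comm = Q(P(psi)) - P(Q(psi))
assert sp.simplify(comm - sp.I * psi) == 0
print("[Q, P] = i 1 on wavefunctions")
```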

• The classical expressions for angular momentum quadratic in qj , pj , for


example
l1 = q2 p3 − q3 p2
under quantization go to the self-adjoint operator

L1 = Q2 P3 − Q3 P2

and −iL1 will be the skew-adjoint operator giving a unitary representation


of the Lie algebra so(3). The three such operators will satisfy the Lie
bracket relations of so(3), for instance

[−iL1 , −iL2 ] = −iL3

For the spin 1/2 representation, the self-adjoint operators are S_j = σ_j/2, and the X_j = −i σ_j/2 give the Lie algebra representation. Unlike the integer spin representations, this representation does not come from the bosonic quantization map Γ.
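The so(3) bracket relation above can be checked directly in the spin 1/2 representation; a numerical sketch (numpy, not from the text):

```python
# In the spin-1/2 representation X_j = -i sigma_j / 2, verify
# [X_1, X_2] = X_3, i.e. [-iL_1, -iL_2] = -iL_3.
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
X = [-1j * s / 2 for s in sigma]

comm = X[0] @ X[1] - X[1] @ X[0]
assert np.allclose(comm, X[2])
print("[-i sigma_1/2, -i sigma_2/2] = -i sigma_3/2")
```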

Given a unitary Lie algebra representation π′(X), the unitary group action on states is given by

|ψ⟩ → π(e^{θX}) |ψ⟩ = e^{θπ′(X)} |ψ⟩

Instead of considering the action on states, one can consider the action on operators by conjugation

O → O(θ) = e^{−θπ′(X)} O e^{θπ′(X)}

or the infinitesimal version of this

d/dθ O(θ) = [O, π′(X)]

If a group G acts on a space M , the representation one gets on functions on
M is given by
π(g)(f (x)) = f (g −1 · x)
Examples include

• Space translation (q → q + a). On states one has

  |ψ⟩ → e^{−iaP} |ψ⟩

  which in the Schrödinger representation is

  e^{−ia(−i d/dq)} ψ(q) = e^{−a d/dq} ψ(q) = ψ(q − a)

  So, the Lie algebra action is given by the operator −iP = −d/dq. On operators one has

  O(a) = e^{iaP} O e^{−iaP}

  or infinitesimally

  d/da O(a) = [O, −iP]
• Time translation (t → t − a). The convention for the Hamiltonian H is opposite that for the momentum P, with the Schrödinger equation saying that

  −iH = d/dt

  On states, time evolution is translation in the positive time direction, so states evolve as

  |ψ(t)⟩ = e^{−itH} |ψ(0)⟩

  Operators in the Heisenberg picture satisfy

  O(t) = e^{itH} O e^{−itH}

  or infinitesimally

  d/dt O(t) = [O, −iH]

  which is the quantization of the Poisson bracket relation in Hamiltonian mechanics

  d/dt f = {f, h}
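The shift-operator identity in the space-translation example, e^{−a d/dq} ψ(q) = ψ(q − a), can be checked symbolically on a polynomial, where the exponential series terminates; a sympy sketch (illustrative, not from the text):

```python
# Apply the (terminating) Taylor series of e^{-a d/dq} to a polynomial
# and compare with the shifted function psi(q - a).
import sympy as sp

q, a = sp.symbols("q a", real=True)
psi = q**3 + 2 * q                       # sample polynomial state

shifted = sum((-a)**n / sp.factorial(n) * sp.diff(psi, q, n) for n in range(5))
assert sp.expand(shifted - psi.subs(q, q - a)) == 0
print("e^{-a d/dq} psi(q) = psi(q - a)")
```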
Conventions for special relativity.
Conventions for representations on field operators.
Conventions for anticommuting variables. for unitary and odd super Lie
algebra actions.

Bibliography

[1] Alvarez, O., Lectures on quantum mechanics and the index theorem, in
Geometry and Quantum Field Theory, Freed, D, and Uhlenbeck, K., eds.,
American Mathematical Society, 1995.

[2] Arnold, V., Mathematical Methods of Classical Mechanics, Springer-Verlag,


1978.

[3] Artin, M., Algebra, Prentice-Hall, 1991.

[4] Baym, G., Lectures on Quantum Mechanics, Benjamin, 1969.

[5] Berezin, F., The Method of Second Quantization, Academic Press, 1966.

[6] Berezin, F., and Marinov, M., Particle Spin Dynamics as the Grassmann
Variant of Classical Mechanics, Annals of Physics 104 (1977) 336-362.

[7] Berndt, R., An Introduction to Symplectic Geometry, AMS, 2001.

[8] Berndt, R., Representations of Linear Groups, Vieweg, 2007.

[9] Cannas da Silva, A., Lectures on Symplectic Geometry, Lecture Notes in


Mathematics 1764, Springer-Verlag, 2006.

[10] Das, A., Field Theory, a Path Integral Approach, World Scientific, 1993.

[11] Dimock, J., Quantum Mechanics and Quantum Field Theory, Cambridge
University Press, 2011.

[12] Dolgachev, I., Introduction to Physics, Lecture Notes, http://www.math.lsa.umich.edu/~idolga/lecturenotes.html

[13] Faddeev, L.D. and Yakubovskii, O.A., Lectures on Quantum Mechanics for
Mathematics Students, AMS, 2009.

[14] Feynman, R., Space-time approach to non-relativistic quantum mechanics,


Reviews of Modern Physics 20 (1948) 367-387.

[15] Feynman, R. and Hibbs, A., Quantum Mechanics and Path Integrals,
McGraw-Hill, 1965.

[16] Feynman, R., Feynman Lectures on Physics, Volume 3, Addison-Wesley,
1965. Online at http://feynmanlectures.caltech.edu

[17] Feynman, R., The Character of Physical Law, page 129, MIT Press, 1967.

[18] Feynman, R., Statistical Mechanics: A set of lectures, Benjamin, 1972.

[19] Folland, G., Harmonic Analysis in Phase Space, Princeton, 1989.

[20] Folland, G., Quantum Field Theory: A tourist guide for mathematicians,
AMS, 2008.

[21] Gelfand, I., and Vilenkin, N. Ya., Generalized Functions, Volume 4, Aca-
demic Press, 1964.

[22] Gendenshtein, L., and Krive, I., Supersymmetry in quantum mechanics,


Sov. Phys. Usp. 28 (1985) 645-666.

[23] Greiner, W. and Reinhardt, J., Field Quantization, Springer, 1996.

[24] Guillemin, V. and Sternberg, S., Variations on a Theme of Kepler, AMS,


1990.

[25] Guillemin, V. and Sternberg, S., Symplectic Techniques in Physics, Cam-


bridge University Press, 1984.

[26] Gurarie, D., Symmetries and Laplacians, Dover, 2008.

[27] Hall, B., Lie Groups, Lie Algebras, and Representations: An Elementary
Introduction, Springer-Verlag, 2003.

[28] Hall, B., An Elementary Introduction to Groups and Representations, http://arxiv.org/abs/math-ph/0005032

[29] Hall, B., Quantum Theory for Mathematicians, Springer-Verlag, 2013.

[30] Hannabuss, K., An Introduction to Quantum Theory, Oxford University


Press, 1997.

[31] Haroche, S. and Raimond, J-M., Exploring the Quantum: Atoms, Cavities
and Photons, Oxford University Press, 2006.

[32] Hatfield, B., Quantum Field Theory of Point Particles and Strings,
Addison-Wesley, 1992.

[33] Hermann, R., Lie Groups for Physicists, W. A. Benjamin, 1966.

[34] Hirsch, M., and Smale, S., Differential Equations, Dynamical Systems, and
Linear Algebra, Academic Press, 1974.

[35] Kirillov, A., Lectures on the Orbit Method, AMS, 2004.

[36] Kostant, B., Quantization and unitary representations: Part I, Prequanti-
zation, in Lecture Notes in Mathematics, 170 (1970) 87-208.

[37] Landsman, N. P., Between classical and quantum, in Philosophy of Physics,


North-Holland, 2006. http://arxiv.org/abs/quant-ph/0506082

[38] Lawson, H. B. and Michelsohn, M-L., Spin Geometry, Princeton University


Press, 1989.

[39] Lion, G., and Vergne, M., The Weil Representation, Maslov index and
Theta series, Birkhäuser, 1980.

[40] Mackey, G., Mathematical Foundations of Quantum Mechanics, Ben-


jamin/Cummings, 1963.

[41] Mackey, G., Unitary Group Representations in Physics, Probability and


Number Theory, Benjamin/Cummings, 1978.

[42] Meinrenken, E., Clifford Algebras and Lie Theory, Springer-Verlag, 2013.

[43] Messiah, A., Quantum Mechanics, Volumes 1 and 2, North-Holland, 1961-


2.

[44] Ottesen, J., Infinite Dimensional Groups and Algebra in Quantum Physics,
Springer, 1995.

[45] Penrose, R., The Road to Reality, Jonathan Cape, 2004.

[46] Peskin, M., and Schroeder, D., An Introduction to Quantum Field Theory,
Westview Press, 1995.

[47] Pressley, A., and Segal, G., Loop Groups, Oxford University Press, 1986.

[48] Preskill, J., Quantum Computing Course Notes, http://www.theory.caltech.edu/people/preskill/ph229/#lecture

[49] Porteous, I., Clifford Algebras and the Classical Groups, Cambridge University Press, 1995.

[50] Ramond, P., Group Theory: A physicist’s survey, Cambridge University


Press, 2010.

[51] Rosenberg, J., A Selective History of the Stone-von Neumann Theorem,


in Operator Algebras, Quantization and Noncommutative Geometry, AMS,
2004.

[52] Schlosshauer, M., Decoherence and the Quantum-to-Classical Transition,


Springer, 2007.

[53] Schlosshauer, M., (Ed.) Elegance and Engima, Springer-Verlag, 2011.

[54] Schulman, L., Techniques and Applications of Path Integration, John Wiley
and Sons, 1981.
[55] Shale, D., Linear symmetries of free boson fields, Trans. Amer. Math. Soc.
103 (1962) 149-167.

[56] Shale, D. and Stinespring, W., States of the Clifford algebra, Ann. of Math.
(2) 80 (1964) 365-381.
[57] Shankar, R., Principles of Quantum Mechanics, 2nd Ed., Springer, 1994.
[58] Singer, S., Linearity, Symmetry, and Prediction in the Hydrogen Atom,
Springer-Verlag, 2005.

[59] Sternberg, S., Group Theory and Physics, Cambridge University Press,
1994.
[60] Stillwell, J., Naive Lie Theory, Springer-Verlag, 2010. http://www.springerlink.com/content/978-0-387-78214-0

[61] Stone, M., The Physics of Quantum Fields, Springer-Verlag, 2000.


[62] Streater, R. and Wightman, A., PCT, Spin and Statistics, and All That,
Benjamin, 1964.
[63] Strichartz, R., A Guide to Distribution Theory and Fourier Transforms,
World Scientific, 2003.
[64] Takhtajan, L., Quantum Mechanics for Mathematicians, AMS, 2008.
[65] Talman, J., Special Functions: A Group Theoretical Approach (based on
lectures of Eugene P. Wigner), Benjamin, 1968.

[66] Taylor, M., Noncommutative Harmonic Analysis, AMS, 1986.
[67] Teleman, C., Representation Theory Course Notes, http://math.berkeley.edu/~teleman/math/RepThry.pdf
[68] Townsend, J., A Modern Approach to Quantum Mechanics, University Sci-
ence Books, 2000.
[69] Tung, W-K., Group Theory in Physics, World Scientific Publishing, 1985.
[70] Warner, F., Foundations of Differentiable Manifolds and Lie Groups,
Springer-Verlag, 1983.

[71] Weinberg, S., The Quantum Theory of Fields I, Cambridge University


Press, 1995.
[72] Weyl, H., The Theory of Groups and Quantum Mechanics, Dover, 1950.
(First edition in German, 1929)

[73] Wigner, E., The Unreasonable Effectiveness of Mathematics in the Natural
Sciences, Comm. Pure Appl. Math. 13 (1960) 1-14.
[74] Witten, E., Supersymmetry and Morse Theory, J. Differential Geometry
17 (1982) 661-692.

[75] Woodhouse, N. M. J., Geometric Quantization, 2nd edition, Springer-


Verlag, 1997.
[76] Woodhouse, N. M. J., Special Relativity, Springer-Verlag, 2002.
[77] Zee, A., Quantum Field Theory in a Nutshell, Princeton University Press,
2003.

[78] Zelevinsky, V., Quantum Physics, Volume 1: From Basics to Symmetries
and Perturbations, Wiley, 2010.
[79] Zinn-Justin, J., Path Integrals in Quantum Mechanics, Oxford University
Press, 2005.

[80] Zurek, W., Decoherence and the Transition from Quantum to Classical –
Revisited, http://arxiv.org/abs/quant-ph/0306072
[81] Zurek, W., Quantum Darwinism, Nature Physics 5 (2009) 181.

