
Gaussian Integrals

H. Sonoda
12 May 2019 (last update 21 June 2019)

1 Real symmetric matrices and hermitian matrices


1.1 Real symmetric matrices
Let $S$ be an $N$-by-$N$ real symmetric matrix:

$$S_{ij} = S_{ji} \in \mathbb{R} \qquad (i, j = 1, \dots, N)$$

$S$ has $N$ real eigenvectors $v^{(i)}$ ($i = 1, \dots, N$) with real eigenvalues $\lambda_i$:

$$S v^{(i)} = \lambda_i v^{(i)} \iff \sum_{k=1}^{N} S_{jk} v^{(i)}_k = \lambda_i v^{(i)}_j$$

We call S positive when all the eigenvalues are positive:

$$(\forall i)\ \lambda_i > 0$$

We normalize the eigenvectors so that

$$v^{(i)T} v^{(j)} = \delta_{ij} \iff \sum_{k=1}^{N} v^{(i)}_k v^{(j)}_k = \delta_{ij}$$

Completeness of the basis $v^{(i)}$ ($i = 1, \dots, N$) implies

$$\sum_{i=1}^{N} v^{(i)} v^{(i)T} = 1_N \iff \sum_{i=1}^{N} v^{(i)}_j v^{(i)}_k = \delta_{jk}$$

This means that the $N$-by-$N$ matrix defined by

$$O = \left( v^{(1)}, v^{(2)}, \dots, v^{(N)} \right) \iff O_{ij} = v^{(j)}_i$$

is a real orthogonal matrix:

$$O^T O = \begin{pmatrix} v^{(1)T} \\ v^{(2)T} \\ \vdots \\ v^{(N)T} \end{pmatrix} \left( v^{(1)}, v^{(2)}, \dots, v^{(N)} \right) = 1_N$$

We wish to show that $O$ diagonalizes $S$:

$$O^T S O = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_N) = \begin{pmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_N \end{pmatrix}$$

(Proof)

$$S O = S \left( v^{(1)}, \dots, v^{(N)} \right) = \left( \lambda_1 v^{(1)}, \dots, \lambda_N v^{(N)} \right)$$

Hence,

$$O^T S O = \begin{pmatrix} v^{(1)T} \\ \vdots \\ v^{(N)T} \end{pmatrix} \left( \lambda_1 v^{(1)}, \dots, \lambda_N v^{(N)} \right) = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_N \end{pmatrix}$$

(QED)
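As a quick numerical illustration of this statement (a sketch that is not part of the notes; the matrix size and random seed are arbitrary), `numpy.linalg.eigh` returns the normalized eigenvectors of a real symmetric matrix as the columns of an orthogonal matrix $O$, and $O^T S O$ comes out diagonal:

```python
import numpy as np

# Build a random real symmetric N-by-N matrix S (illustrative choice of N).
rng = np.random.default_rng(0)
N = 5
M = rng.normal(size=(N, N))
S = M + M.T                       # S_ij = S_ji

# eigh returns the eigenvalues and an orthogonal matrix O whose
# columns are the normalized eigenvectors v^(i).
lam, O = np.linalg.eigh(S)

assert np.allclose(O.T @ O, np.eye(N))            # O^T O = 1_N
assert np.allclose(O.T @ S @ O, np.diag(lam))     # O^T S O = diag(lambda_1, ..., lambda_N)
```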

1.2 Hermitian matrices


Let $H$ be a hermitian $N$-by-$N$ matrix satisfying

$$H^\dagger = H \iff H_{ji}^* = H_{ij}$$

It has $N$ eigenvectors:

$$H v^{(i)} = \lambda_i v^{(i)} \qquad (i = 1, \dots, N)$$

with real eigenvalues $\lambda_i$. We call $H$ positive if $(\forall i)\ \lambda_i > 0$. We normalize the eigenvectors as

$$v^{(i)\dagger} v^{(j)} = \delta_{ij}$$

so that

$$\sum_{i=1}^{N} v^{(i)} v^{(i)\dagger} = 1_N$$

Let

$$U = \left( v^{(1)}, \dots, v^{(N)} \right)$$

so that

$$U^\dagger = \begin{pmatrix} v^{(1)\dagger} \\ \vdots \\ v^{(N)\dagger} \end{pmatrix}$$

$U$ is unitary since

$$\left( U^\dagger U \right)_{ij} = v^{(i)\dagger} v^{(j)} = \delta_{ij}$$

Let us show that $U$ diagonalizes $H$:

$$U^\dagger H U = \mathrm{diag}(\lambda_1, \dots, \lambda_N)$$

(Proof)

$$H U = H \left( v^{(1)}, \dots, v^{(N)} \right) = \left( \lambda_1 v^{(1)}, \dots, \lambda_N v^{(N)} \right)$$

Hence,

$$U^\dagger H U = \begin{pmatrix} v^{(1)\dagger} \\ \vdots \\ v^{(N)\dagger} \end{pmatrix} \left( \lambda_1 v^{(1)}, \dots, \lambda_N v^{(N)} \right) = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_N \end{pmatrix}$$
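The same kind of check works for the hermitian case (again a sketch outside the notes, with arbitrary size and seed); `numpy.linalg.eigh` also accepts complex hermitian matrices and returns real eigenvalues together with a unitary eigenvector matrix $U$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = M + M.conj().T                # H^dagger = H

lam, U = np.linalg.eigh(H)        # lam is real; columns of U are the v^(i)

assert np.allclose(U.conj().T @ U, np.eye(N))          # U^dagger U = 1_N
assert np.allclose(U.conj().T @ H @ U, np.diag(lam))   # U^dagger H U = diag(lambda_1, ..., lambda_N)
```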

2 Gaussian integrals
2.1 Real Gaussian integrals
We now consider the Gaussian integral

$$I(A) = \int d^N x \, \exp\left( -\frac{1}{2} \sum_{i,j=1}^{N} x_i A_{ij} x_j \right) = \int d^N x \, \exp\left( -\frac{1}{2} x^T A x \right)$$

where A is a positive symmetric matrix.


Let O be a real orthogonal matrix that diagonalizes A:

$$O^T A O = \mathrm{diag}(a_1, \dots, a_N) \qquad (a_i > 0)$$

We change variables from x to y so that

$$O y = x \iff y = O^T x$$

Under the orthogonal change of variables, the volume element is invariant:

$$d^N x = d^N y$$

(The determinant of O is ±1.)


Since

$$x^T A x = y^T O^T A O y = \sum_{i=1}^{N} a_i y_i^2$$

we obtain

$$I(A) = \int d^N y \, \exp\left( -\frac{1}{2} \sum_{i=1}^{N} a_i y_i^2 \right) = \prod_{i=1}^{N} \int_{-\infty}^{\infty} dy_i \, \exp\left( -\frac{1}{2} a_i y_i^2 \right) = \prod_{i=1}^{N} \left( \frac{2\pi}{a_i} \right)^{1/2} = \frac{(2\pi)^{N/2}}{\prod_{i=1}^{N} \sqrt{a_i}} = \frac{(2\pi)^{N/2}}{\sqrt{\det A}}$$
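As a sanity check of this formula (a sketch not contained in the notes; the matrix $A$, the cutoff, and the grid are illustrative choices), for $N = 2$ the integral can be approximated by a plain Riemann sum and compared with $(2\pi)^{N/2}/\sqrt{\det A}$:

```python
import numpy as np

# A positive symmetric 2-by-2 matrix (illustrative values).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Riemann sum of exp(-x^T A x / 2) over a large but finite grid.
L, n = 10.0, 2001
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")
quad = A[0, 0] * X1**2 + 2 * A[0, 1] * X1 * X2 + A[1, 1] * X2**2
I_numeric = np.exp(-0.5 * quad).sum() * dx**2

I_exact = (2 * np.pi) / np.sqrt(np.linalg.det(A))    # (2 pi)^{N/2} / sqrt(det A) for N = 2
print(I_numeric, I_exact)                            # the two values agree closely
```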

2.2 Complex Gaussian integrals
We next compute the complex Gaussian integral

$$J(B) = \int d^{2N} z \, \exp\left( -\frac{1}{2} \sum_{i,j=1}^{N} z_i^* B_{ij} z_j \right) = \int d^{2N} z \, \exp\left( -\frac{1}{2} z^\dagger B z \right)$$

where $B$ is a positive hermitian matrix, and $d^2 z = d(\mathrm{Re}\, z)\, d(\mathrm{Im}\, z)$.


This can be calculated analogously. Let U be a unitary matrix that diagonalizes B:
$$U^\dagger B U = \mathrm{diag}(b_1, \dots, b_N) \qquad (b_i > 0)$$

We then introduce new complex variables by

$$w = U^\dagger z \iff z = U w$$

so that

$$z^\dagger B z = w^\dagger U^\dagger B U w = \sum_{i=1}^{N} b_i |w_i|^2$$

Since the volume element $d^{2N} w$ is invariant under the unitary transformation, we obtain

$$J(B) = \int d^{2N} z \, \exp\left( -\frac{1}{2} z^\dagger B z \right) = \int d^{2N} w \, \exp\left( -\frac{1}{2} \sum_{i=1}^{N} b_i |w_i|^2 \right) = \prod_{i=1}^{N} \frac{2\pi}{b_i} = \frac{(2\pi)^N}{\prod_{i=1}^{N} b_i} = \frac{(2\pi)^N}{\det B}$$
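A way to check this result numerically (a sketch, not part of the notes): writing $z = x + iy$ turns $z^\dagger B z$ into the real quadratic form $v^T M v$ with $v = (x, y)$ and $M = \begin{pmatrix} \mathrm{Re}\,B & -\mathrm{Im}\,B \\ \mathrm{Im}\,B & \mathrm{Re}\,B \end{pmatrix}$, so the real formula gives $(2\pi)^N/\sqrt{\det M}$; since $\det M = (\det B)^2$ for a positive hermitian $B$, this reproduces $(2\pi)^N/\det B$. The size and seed below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 3
M0 = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
B = M0 @ M0.conj().T + np.eye(N)          # positive hermitian

X, Y = B.real, B.imag
M = np.block([[X, -Y],
              [Y,  X]])                   # real symmetric 2N-by-2N matrix

# z^dagger B z = v^T M v with v = (Re z, Im z): spot-check on a random z.
z = rng.normal(size=N) + 1j * rng.normal(size=N)
v = np.concatenate([z.real, z.imag])
assert np.allclose((z.conj() @ B @ z).real, v @ M @ v)

# Hence J(B) = (2 pi)^N / sqrt(det M) = (2 pi)^N / det B.
print((2 * np.pi) ** N / np.sqrt(np.linalg.det(M)),
      (2 * np.pi) ** N / np.linalg.det(B).real)
```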

3 Exercises
1. Compute

$$\int d^N x \, \exp\left( -\frac{1}{2} x^T A x + j^T x \right)$$

(Solution) We complete the square:

$$-\frac{1}{2} x^T A x + j^T x = -\frac{1}{2} \left( x - A^{-1} j \right)^T A \left( x - A^{-1} j \right) + \frac{1}{2} j^T A^{-1} j$$

Shifting the integration variables, we obtain

$$\begin{aligned}
\int d^N x \, \exp\left( -\frac{1}{2} x^T A x + j^T x \right)
&= \int d^N x \, \exp\left( -\frac{1}{2} \left( x - A^{-1} j \right)^T A \left( x - A^{-1} j \right) + \frac{1}{2} j^T A^{-1} j \right) \\
&= \exp\left( \frac{1}{2} j^T A^{-1} j \right) \int d^N x \, \exp\left( -\frac{1}{2} x^T A x \right) \\
&= \exp\left( \frac{1}{2} j^T A^{-1} j \right) \frac{(2\pi)^{N/2}}{\sqrt{\det A}}
\end{aligned}$$
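This result can be tested numerically (a sketch with illustrative $A$ and $j$): dividing by the $j = 0$ integral, the claim is that the average of $\exp(j^T x)$ over $x$ drawn from the Gaussian with covariance $A^{-1}$ equals $\exp\left(\frac{1}{2} j^T A^{-1} j\right)$, which is easy to estimate by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[2.0, 0.5],
              [0.5, 1.5]])               # positive symmetric (illustrative)
j = np.array([0.3, -0.4])

cov = np.linalg.inv(A)                   # exp(-x^T A x / 2) has covariance A^{-1}
x = rng.multivariate_normal(np.zeros(2), cov, size=2_000_000)

mc = np.exp(x @ j).mean()                # Monte Carlo estimate of <exp(j^T x)>
exact = np.exp(0.5 * j @ cov @ j)        # exp(j^T A^{-1} j / 2)
print(mc, exact)                         # agree up to Monte Carlo noise
```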

2. Compute

$$\int d^{2N} z \, \exp\left( -\frac{1}{2} z^\dagger B z + j^\dagger z + z^\dagger j \right)$$

(Solution) We complete the square:

$$-\frac{1}{2} z^\dagger B z + j^\dagger z + z^\dagger j = -\frac{1}{2} \left( z - 2 B^{-1} j \right)^\dagger B \left( z - 2 B^{-1} j \right) + 2\, j^\dagger B^{-1} j$$

Shifting the integration variables, we obtain

$$\begin{aligned}
\int d^{2N} z \, \exp\left( -\frac{1}{2} z^\dagger B z + j^\dagger z + z^\dagger j \right)
&= \int d^{2N} z \, \exp\left( -\frac{1}{2} \left( z - 2 B^{-1} j \right)^\dagger B \left( z - 2 B^{-1} j \right) + 2\, j^\dagger B^{-1} j \right) \\
&= \exp\left( 2\, j^\dagger B^{-1} j \right) \int d^{2N} z \, \exp\left( -\frac{1}{2} z^\dagger B z \right) \\
&= \exp\left( 2\, j^\dagger B^{-1} j \right) \frac{(2\pi)^N}{\det B}
\end{aligned}$$
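For $N = 1$ this result can be checked directly by a Riemann sum (a sketch; the values of $b$ and $j$ are arbitrary): with $B = (b)$, $b > 0$, the claim reads $\int d^2 z\, \exp\left(-\frac{b}{2}|z|^2 + j^* z + z^* j\right) = \exp\left(2|j|^2/b\right) \frac{2\pi}{b}$.

```python
import numpy as np

# N = 1 check: b > 0 and a complex source j (arbitrary values).
b = 1.3
j = 0.4 - 0.2j

L, n = 12.0, 1501
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")         # z = X + iY, d^2 z = dX dY

# Exponent: -b|z|^2/2 + j^* z + z^* j = -b(X^2 + Y^2)/2 + 2(Re j * X + Im j * Y)
expo = -0.5 * b * (X**2 + Y**2) + 2 * (j.real * X + j.imag * Y)
numeric = np.exp(expo).sum() * dx**2

exact = np.exp(2 * abs(j) ** 2 / b) * 2 * np.pi / b
print(numeric, exact)                           # the two values agree closely
```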

4 Application to path integrals


We first compute the euclidean path integral of a free particle:

$$\int [dx]_{\substack{x(0)=0 \\ x(T)=0}} \exp\left( -\frac{1}{2} \int_0^T dt\, \dot{x}(t)^2 \right) = \frac{1}{\sqrt{2\pi T}}$$
(Derivation by the operator formalism)

$$\int [dx]_{\substack{x(0)=0 \\ x(T)=0}} \exp\left( -\frac{1}{2} \int_0^T dt\, \dot{x}(t)^2 \right) = \langle x=0 |\, e^{-T H} \,| x=0 \rangle$$

where

$$H = \frac{1}{2} p^2$$

Hence, using $e^{-T H} | p \rangle = e^{-T \frac{p^2}{2}} | p \rangle$ and $\langle 0 | p \rangle \langle p | 0 \rangle = 1$,

$$\langle 0 |\, e^{-T H} \,| 0 \rangle = \int \frac{dp}{2\pi} \, \langle 0 |\, e^{-T H} \,| p \rangle \langle p | 0 \rangle = \int \frac{dp}{2\pi} \, e^{-T \frac{p^2}{2}} = \frac{1}{\sqrt{2\pi T}} \qquad \text{(QED)}$$
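A time-sliced cross-check of this result (a sketch; the slicing below is one standard discretization, not the mode expansion used next): with $n$ steps of size $\varepsilon = T/n$ and the short-time propagator $(2\pi\varepsilon)^{-1/2} e^{-(\Delta x)^2/2\varepsilon}$, the amplitude reduces to a finite Gaussian integral equal to $1/\sqrt{2\pi\varepsilon \det K}$, where $K$ is the $(n-1)\times(n-1)$ tridiagonal matrix with $2$ on the diagonal and $-1$ next to it; since $\det K = n$, this is exactly $1/\sqrt{2\pi T}$.

```python
import numpy as np

# Time-sliced check of <x=0| e^{-T H} |x=0> = 1/sqrt(2 pi T) for H = p^2/2.
T = 2.7          # illustrative value
n = 1000         # number of time slices
eps = T / n

# Quadratic form of the discretized action on the interior points x_1..x_{n-1}
# is (1/eps) K with K = tridiag(-1, 2, -1).
K = 2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)

sign, logdet = np.linalg.slogdet(K)                       # det K = n
amplitude = np.exp(-0.5 * (np.log(2 * np.pi * eps) + logdet))
print(amplitude, 1 / np.sqrt(2 * np.pi * T))              # both equal 1/sqrt(2 pi T)
```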
Using the above result, let us calculate the path integral for a harmonic oscillator:

$$Z_\omega \equiv \int [dx]_{\substack{x(0)=0 \\ x(T)=0}} \exp\left( -\frac{1}{2} \int_0^T dt \left( \dot{x}(t)^2 + \omega^2 x(t)^2 \right) \right) = \int [dx]_{\substack{x(0)=0 \\ x(T)=0}} \exp\left( -\frac{1}{2} \int_0^T dt\, x(t) \left( -\frac{d^2}{dt^2} + \omega^2 \right) x(t) \right)$$

The differential operator

$$-\frac{d^2}{dt^2} + \omega^2$$

plays the role of a symmetric matrix $A$. We expand

$$x(t) = \sqrt{\frac{2}{T}} \sum_{n=1}^{\infty} x_n \sin\frac{\pi n t}{T}$$

in terms of the normalized sine functions:

$$\int_0^T dt\, \frac{2}{T} \sin\frac{\pi n t}{T} \sin\frac{\pi n' t}{T} = \delta_{n,n'}$$

Using

$$[dx] = A \prod_{n=1}^{\infty} \frac{dx_n}{\sqrt{2\pi}}$$

we obtain

$$\begin{aligned}
Z_\omega &= \int [dx]_{\substack{x(0)=0 \\ x(T)=0}} \exp\left( -\frac{1}{2} \int_0^T dt\, x(t) \left( -\frac{d^2}{dt^2} + \omega^2 \right) x(t) \right) \\
&= A \int \prod_{n=1}^{\infty} \frac{dx_n}{\sqrt{2\pi}} \, \exp\left( -\frac{1}{2} \sum_{n=1}^{\infty} \left( \left(\frac{\pi n}{T}\right)^2 + \omega^2 \right) x_n^2 \right) \\
&= A \prod_{n=1}^{\infty} \frac{1}{\sqrt{\left(\frac{\pi n}{T}\right)^2 + \omega^2}}
\end{aligned}$$

Hence, we obtain the ratio

$$\frac{Z_\omega}{Z_0} = \prod_{n=1}^{\infty} \sqrt{\frac{\left(\frac{\pi n}{T}\right)^2}{\left(\frac{\pi n}{T}\right)^2 + \omega^2}} = \prod_{n=1}^{\infty} \left( \frac{1}{1 + \frac{\omega^2 T^2}{\pi^2 n^2}} \right)^{1/2} = \sqrt{\frac{\omega T}{\sinh \omega T}}$$

Using

$$Z_0 = \frac{1}{\sqrt{2\pi T}}$$

we obtain

$$Z_\omega = \sqrt{\frac{\omega}{2\pi \sinh \omega T}}$$
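The same time-slicing can be used to check $Z_\omega$ (again a sketch with one particular discretization and arbitrary $T$, $\omega$): adding $\varepsilon^2 \omega^2$ to the diagonal of the tridiagonal matrix, the discrete amplitude $1/\sqrt{2\pi\varepsilon \det K_\omega}$ approaches $\sqrt{\omega/(2\pi\sinh\omega T)}$ as the number of slices grows.

```python
import numpy as np

T, omega = 2.0, 1.5      # illustrative values
n = 1000                 # time slices; larger n gets closer to the continuum
eps = T / n

# Discretized (-d^2/dt^2 + omega^2): (1/eps) * tridiag(-1, 2 + eps^2 omega^2, -1).
K = ((2 + (eps * omega) ** 2) * np.eye(n - 1)
     - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1))

sign, logdet = np.linalg.slogdet(K)
Z_lattice = np.exp(-0.5 * (np.log(2 * np.pi * eps) + logdet))
Z_exact = np.sqrt(omega / (2 * np.pi * np.sinh(omega * T)))
print(Z_lattice, Z_exact)        # agree up to discretization error
```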
Let us now consider a euclidean path integral:

$$Z[j] \equiv \int [dx]_{\substack{x(0)=0 \\ x(T)=0}} \exp\left( -\int_0^T dt \left( \frac{1}{2} \dot{x}(t)^2 + \frac{1}{2} \omega^2 x(t)^2 - j(t) x(t) \right) \right)$$

where

$$Z[0] = Z_\omega = \sqrt{\frac{\omega}{2\pi \sinh \omega T}}$$

Integration by parts gives

$$Z[j] = \int [dx]_{\substack{x(0)=0 \\ x(T)=0}} \exp\left( -\int_0^T dt \left( \frac{1}{2} x(t) \left( -\frac{d^2}{dt^2} + \omega^2 \right) x(t) - j(t) x(t) \right) \right)$$

As an analog of $A^{-1}$, we define $G(t, t')$ by

$$\left( -\frac{\partial^2}{\partial t^2} + \omega^2 \right) G(t, t') = \delta(t - t')$$

and

$$G(0, t') = G(T, t') = 0$$

This gives

$$G(t, t') = \frac{1}{\omega \sinh \omega T} \Big( \theta(t' - t) \sinh \omega t \, \sinh \omega (T - t') + \theta(t - t') \sinh \omega (T - t) \, \sinh \omega t' \Big)$$
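As a numerical cross-check of this closed form (a sketch with illustrative $T$, $\omega$, and grid): discretizing $-d^2/dt^2 + \omega^2$ with the boundary conditions above, the inverse of the finite-difference matrix, divided by the grid spacing, approximates $G(t, t')$:

```python
import numpy as np

T, omega = 2.0, 1.5
n = 2000
h = T / n
t = np.linspace(0, T, n + 1)[1:-1]                # interior grid points

# Finite-difference (-d^2/dt^2 + omega^2) with G(0, .) = G(T, .) = 0.
K = ((2 / h**2 + omega**2) * np.eye(n - 1)
     - np.eye(n - 1, k=1) / h**2 - np.eye(n - 1, k=-1) / h**2)
G_fd = np.linalg.inv(K)                           # approximately h * G(t_i, t_j)

def G_exact(t1, t2):
    lo, hi = min(t1, t2), max(t1, t2)
    return np.sinh(omega * lo) * np.sinh(omega * (T - hi)) / (omega * np.sinh(omega * T))

i, k = 300, 1200                                  # two arbitrary interior points
print(G_fd[i, k] / h, G_exact(t[i], t[k]))        # agree to O(h^2)
```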
It is legitimate to change variables as

$$x(t) \longrightarrow x(t) - \int_0^T dt'\, G(t, t') j(t')$$

which satisfies the boundary conditions at $t = 0, T$. Since

$$\begin{aligned}
&\int_0^T dt\, \frac{1}{2} \left( x(t) + \int_0^T dt'\, G(t, t') j(t') \right) \left( -\frac{d^2}{dt^2} + \omega^2 \right) \left( x(t) + \int_0^T dt'\, G(t, t') j(t') \right) \\
&\qquad = \int_0^T dt\, x(t) \left( \frac{1}{2} \left( -\frac{d^2}{dt^2} + \omega^2 \right) x(t) + j(t) \right) + \frac{1}{2} \int_0^T \!\! \int_0^T dt\, dt'\, j(t) G(t, t') j(t')
\end{aligned}$$

we obtain

$$Z[j] = Z[0] \exp\left( \frac{1}{2} \int_0^T \!\! \int_0^T dt\, dt'\, j(t) G(t, t') j(t') \right) = \sqrt{\frac{\omega}{2\pi \sinh \omega T}} \, \exp\left( \frac{1}{2} \int_0^T \!\! \int_0^T dt\, dt'\, j(t) G(t, t') j(t') \right)$$
