Quantum Scattering Theory and Applications
A thesis presented by Adam Lupu-Sax to The Department of Physics in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the subject of Physics. Harvard University, Cambridge, Massachusetts, September 1998.
Abstract
Scattering theory provides a convenient framework for the solution of a variety of problems. In this thesis we focus on the combination of boundary conditions and scattering potentials and the combination of non-overlapping scattering potentials within the context of scattering theory. Using a scattering t-matrix approach, we derive a useful relationship between the scattering t-matrix of the scattering potential, the Green function of the boundary, and the t-matrix of the combined system, effectively renormalizing the scattering t-matrix to account for the boundaries. In the case of the combination of scattering potentials, the combination of t-matrix operators is achieved via multiple scattering theory. We also derive methods, primarily for numerical use, for finding the Green function of arbitrarily shaped boundaries of various sorts. These methods can be applied to both open and closed systems.

In this thesis, we consider single and multiple scatterers in two dimensional strips (regions which are infinite in one direction and bounded in the other) as well as two dimensional rectangles. In 2D strips, both the renormalization of the single scatterer strength and the conductance of disordered many-scatterer systems are studied. For the case of the single scatterer we see non-trivial renormalization effects in the narrow wire limit. In the many scatterer case, we numerically observe suppression of the conductance beyond that which is explained by weak localization.

In closed systems, we focus primarily on the eigenstates of disordered many-scatterer systems. There has been substantial investigation and calculation of properties of the eigenstate intensities of these systems. We have, for the first time, been able to investigate these questions numerically. Since there is little experimental work in this regime, these numerics provide the first test of various theoretical models.
Our observations indicate that the probability of large fluctuations of the intensity of the wavefunction is explained qualitatively by various field-theoretic models. However, quantitatively, no existing theory accurately predicts the probability of these fluctuations.
Acknowledgments
Doing the work which appears in this thesis has been a largely delightful way to spend the last five years. The financial support for my graduate studies was provided by a National Science Foundation Fellowship, Harvard University and the Harvard/Smithsonian Institute for Theoretical Atomic and Molecular Physics (ITAMP). Together, all of these sources provided me with the wonderful opportunity to study without being concerned about my finances.

My advisor, Rick Heller, is a wonderful source of ideas and insights. I began working with Rick four years ago and I have learned an immense amount from him in that time. From the very first time we spoke I have felt not only challenged but respected. One particularly nice aspect of having Rick as an advisor is his ready availability. More than one tricky part of this thesis has been sorted out in a marathon conversation in Rick's office. I cannot thank him enough for all of his time and energy.

In the last five years I have had the great pleasure of working not only with Rick himself but also with his post-docs and other students. Maurizio Carioli was a post-doc when I began working with Rick. There is much I cannot imagine having learned so quickly or so well without him, particularly about numerical methods. Lev Kaplan, a student and then post-doc in the group, is an invaluable source of clear thinking and uncanny insight. He has also demonstrated a nearly infinite patience in discussing our work. My class-mate Neepa Maitra and I began working with Rick at nearly the same time and have been partners in this journey. Neepa's emotional support and perceptive comments and questions about my work have made my last five years substantially easier. Alex Barnett, Bill Bies, Greg Fiete, Jesse Hersch, Bill Hosten and Areez Mody, all graduate students in Rick's group, have given me wonderful feedback on this and other work.
The substantial post-doc contingent in the group, Michael Haggerty, Martin Naraschewski and Doron Cohen have been equally helpful and provided very useful guidance along the way. At the time I began graduate school I was pleasantly surprised by the cooperative spirit among my classmates. Many of us spent countless hours discussing physics and sorting out problem sets. Among this crowd I must particularly thank Martin Bazant, Brian Busch, Sheila Kannappan, Carla Levy, Carol Livermore, Neepa Maitra, Ron Rubin and Glenn Wong for making many late nights bearable and, oftentimes, fun. I must particularly thank Martin, Carla and Neepa for remaining great friends and colleagues in the years that followed. I have had the great fortune to make good friends at various stages in my life and I am honored to count these three among them.
It is hard to imagine how I would have done all of this without my fiancée, Kiersten Conner. Our upcoming marriage has been a singular source of joy during the process of writing this thesis. Her unflagging support and boundless sense of humor have kept me centered throughout graduate school. My parents, Chip Lupu and Jana Sax, have both been a great source of support and encouragement throughout my life and the last five years have been no exception. The rest of my family has also been very supportive, particularly my grandmothers, Sara Lupu and Pauline Sax, and my step-mother Nancy Altman. It saddens me that neither of my grandfathers, Dave Lupu or N. Irving Sax, are alive to see this moment in my life but I thank them both for teaching me things that have helped bring me this far.
Contents
Title Page
Abstract
Acknowledgments
Citations to Previously Published Work
Table of Contents
List of Figures
List of Tables

Chapter 1
  1.1 Introduction
  1.2 Outline of the Thesis

Chapter 2
  2.1 Cross-Sections
  2.2 Unitarity and the Optical Theorem
  2.3 Green Functions
  2.4 Zero Range Interactions
  2.5 Scattering in two dimensions

Chapter 3
  3.1 Multiple Scattering
  3.2 Renormalized t-matrices

Chapter 4
  4.1 Introduction
  4.2 Boundary Wall Method I
  4.3 Boundary Wall Method II
  4.4 Periodic Boundary Conditions
  4.5 Green Function Interfaces
  4.6 Numerical Considerations and Analysis
  4.7 From Wavefunctions to Green Functions
  4.8 Eigenstates

Chapter 5
  One Scatterer in a Wide Wire
  The Green function of an empty periodic wire
  Renormalization of the ZRI Scattering Strength
  From the Green function to Conductance
  Computing the channel-to-channel Green function
  One Scatterer in a Narrow Wire

Chapter 6
  6.1 Dirichlet boundaries
  6.2 Periodic boundaries

Chapter 7
  Disorder Averages
  Mean Free Path
  Properties of Randomly Placed ZRI's as a Disordered Potential
  Eigenstate Intensities and the Porter-Thomas Distribution
  Weak Localization
  Strong Localization
  Anomalous Wavefunctions in Two Dimensions
  Conclusions

Chapter 8
  8.1 Transport in Disordered Systems

Chapter 9
  Extracting eigenstates from t-matrices
  Intensity Statistics in Small Disordered Dirichlet Bounded Rectangles
  Intensity Statistics in Disordered Periodic Rectangles
  Algorithms

Appendices
  Definitions
  Scaling L
  Integration of Energy Green functions
  Green functions of separable systems
  Examples
  The Gor'kov (bulk superconductor) Green Function
  C.1 Standard Linear Solvers
  C.2 Orthogonalization and the QR Decomposition
  C.3 The Singular Value Decomposition
  D.1 Identities from xn
  D.2 Convergence of Green Function Sums
  Polar Coordinates
  Bessel Expansions
  Asymptotics as kr → ∞
  Limiting Form for Small Arguments (kr → 0)
List of Figures
4.1 Transmission (at normal incidence) through a flat wall via the Boundary Wall method. 62
5.1 A periodic wire with one scatterer and an incident particle. 66
5.2 "Experimental" setup for a conductance measurement. The wire is connected to ideal contacts and the voltage drop at fixed current is measured. 67
5.3 Reflection coefficient of a single scatterer in a wide periodic wire. 68
5.4 Number of scattering channels blocked by one scatterer in a periodic wire of varying width. 69
5.5 Transmission coefficient of a single scatterer in a narrow periodic wire. 77
5.6 Cross-section of a single scatterer in a narrow periodic wire. 78
6.1 Comparison of Dressed-t Theory with Numerical Simulation. 90
7.1 Porter-Thomas and exponential localization distributions compared. 108
7.2 Porter-Thomas, exponential localization and log-normal distributions compared. 110
7.3 Comparison of log-normal coefficients for the DOFM and SSSM. 111
8.1 The wire used in open system scattering calculations. 112
8.2 Numerically observed mean free path and the classical expectation. 116
8.3 Numerically observed mean free path after first-order coherent back-scattering correction and the classical expectation. 117
8.4 Transmission versus disordered region length for (a) diffusive and (b) localized wires. 119
9.1 Typical low energy wavefunctions (|ψ|² is plotted) for 72 scatterers in a 1×1 Dirichlet bounded square. Black is high intensity, white is low. The scatterers are shown as black dots. For the top left wavefunction ℓ = .12, λ = .57 whereas ℓ = .23, λ = .11 for the bottom wavefunction. ℓ increases from left to right and top to bottom whereas λ decreases in the same order. 126
9.2 Typical medium energy wavefunctions (|ψ|² is plotted) for 72 scatterers in a 1×1 Dirichlet bounded square. Black is high intensity, white is low. The scatterers are shown as black dots. For the top left wavefunction ℓ = .25, λ = .09 whereas ℓ = .48, λ = .051 for the bottom wavefunction. ℓ increases from left to right and top to bottom whereas λ decreases in the same order. 127
9.3 Intensity statistics gathered in various parts of a Dirichlet bounded square. Clearly, larger fluctuations are more likely at the sides and corners than in the center. The (statistical) error bars are different sizes because four times as much data was gathered in the sides and corners than in the center. 128
9.4 Intensity statistics gathered in various parts of a periodic square (torus). Larger fluctuations are more likely for larger λ/ℓ. The erratic nature of the smallest wavelength data is due to poor statistics. 133
9.5 Illustrations of the fitting procedure. We look at the reduced χ² as a function of the starting value of t in the fit (top, notice the log-scale on the y-axis) then choose the C₂ with smallest confidence interval (bottom) and stable reduced χ². In this case we would choose the C₂ from the fit starting at t = 10. 134
9.6 Numerically observed log-normal coefficients (fitted from numerical data) and fitted theoretical expectations plotted (top) as a function of wavenumber k at fixed ℓ = .081 and (bottom) as a function of ℓ at fixed k = 200. 136
9.7 Typical wavefunctions (|ψ|² is plotted) for 500 scatterers in a 1×1 periodic square (torus) with ℓ = .081, λ = .061. The density of |ψ|² is shown. 137
9.8 Anomalous wavefunctions (|ψ|² is plotted) for 500 scatterers in a 1×1 periodic square (torus) with ℓ = .081, λ = .061. The density of |ψ|² is shown. We note that the scale here is different from the typical states plotted previously. 138
9.9 The average radial intensity centered on two typical peaks (top) and two anomalous peaks (bottom). 139
9.10 Wavefunction deviation under small perturbation for 500 scatterers in a 1×1 periodic square (torus). ℓ = .081, λ = .061. 142
List of Tables
9.1 Comparison of log-normal tails of P(t) for different maximum allowed singular value. 141
9.2 Comparison of log-normal tails of P(t) for strong and weak scatterers at fixed λ and ℓ. 143
C.1 Matrix decompositions and computation time. 170
Chapter 1
is meaningless, and therefore so is the impact parameter. The purpose of the theory is here only to calculate the probability that, as a result of the collision, the particles will deviate (or, as we say, be scattered) through any given angle.
division between perturbation and unperturbed motion is one of definition, not of physics. Much of the art in using perturbation theory comes from recognizing just what division of the problem will give a solvable unperturbed motion and a convergent perturbation series. In scattering, the division between free motion and collision seems much more natural and less flexible. However, many of the methods developed in this thesis take advantage of what little flexibility there is in order to solve some problems not traditionally in the purview of scattering theory as well as attack some which are practically intractable by other means.
above. Multiple scattering theory takes this split and some very clever book-keeping and solves a very complex problem. Our treatment differs somewhat from Faddeev's in order to emphasize similarities with the techniques introduced in section 3.2. A separation between free propagation and collision and its attendant book-keeping have more applications than multiple scattering. In section 3.2 we develop the central new theoretical tool of this work, the renormalized t-matrix. In multiple scattering theory, we used the separation between propagation and collision to piece together the scattering from multiple targets, in essence complicating the collision phase. With appropriate renormalization, we can also change what we mean by propagation. We derive the relevant equations and spend some time exploring the consequences of the transformation of propagation. The sort of change we have in mind will become clearer as we discuss the applications. Both of the methods explained in chapter 3 involve combining solved problems and thus solving a more complicated problem. The techniques discussed in chapter 4 are used to solve some problems from scratch. In their simplest form they have been applied to mesoscopic devices and it is hoped that the more complex versions might be applied to look at dirty and clean superconductor normal metal junctions. We begin working on applications in chapter 5 where we explore our first non-trivial example of scatterer renormalization, the change in scatterer strength of a scatterer placed in a wire. We begin with a fixed two-dimensional zero range interaction of known scattering amplitude. We place this scatterer in an infinite straight wire (channel of finite width). Both the scatterer in free space and the wire without scatterer are solved problems. Their combination is more subtle and brings to bear the techniques developed in 3.2.
Much of the chapter is spent on necessary applied mathematics, but it concludes with the interesting case of a wire which is narrower than the cross-section of the scatterer (which has zero range so can fit in any finite-width wire). This calculation could be applied to a variety of systems, hydrogen confined on the surface of liquid helium for one. Next, in chapter 6 we treat the case of the same scatterer placed in a completely closed box. While a wire is still open and so a scattering problem, it is at first hard to imagine how a closed system could be. After all, the differential cross-section makes no sense in a closed system. Wonderfully, the equations developed for scattering in open systems are still valid in a closed one and give, in some cases, very useful methods for examining properties of the closed system. As with the previous chapter, much of the work in this chapter is preliminary but necessary applied mathematics. Here, we first confront the oddity of using
the equations of scattering theory to find the energies of discrete stationary states. With only one scatterer and renormalization, this turns out to be mathematically straightforward. Still, this idea is important enough to the sequel that we do numerical computations on the case of ground state energies of a single scatterer in a rectangle with perfectly reflective walls. Using the methods presented here, this is simply a question of solving one non-linear equation. We compare the energies so calculated to numerically calculated ground state energies of hard disks in rectangles computed with a standard numerical technique. This is intended both as confirmation that we can extract discrete energies from these methods and as an illustration of the similarity between isolated zero range interactions and hard disks. Having spent a substantial amount of time on examples of renormalization, we return multiple scattering to the picture as well. We will consider in particular disordered sets of fixed scatterers, motivated, for example, by quenched impurities in a metal. Before we apply these techniques to disordered systems, we consider disordered systems themselves in chapter 7. Here we define and explain some important concepts which are relevant to disordered systems as well as discuss some theoretical predictions about various properties of disordered systems. We return to scattering in a wire in chapter 8. Instead of the single scatterer of chapter 5 we now place many scatterers in the same wire and consider the conductance of the disordered region of the wire. We use this to examine weak localization, a quantum effect present only in the presence of time-reversal symmetry. In the final chapter we use the calculations of this chapter as evidence that our disorder potential has the properties we would predict from a hard disk model, as we explored for the one scatterer case in chapters 5 and 6. Our final application is presented in chapter 9.
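The idea that a single scatterer in a perfectly reflective rectangle reduces to one non-linear equation can be illustrated with a small numerical sketch. The sketch below is not the method of this thesis: it uses a crudely truncated eigenfunction expansion of the rectangle Green function (the exact diagonal Green function diverges logarithmically in 2D, so the truncation acts as an ad hoc regulator), and the scatterer strength s and position are hypothetical inputs. Units are chosen so that ħ²/2m = 1 on a unit square.

```python
import math

# Dirichlet eigenmodes of the unit square (hbar^2/2m = 1):
#   phi_mn(x, y) = 2 sin(m pi x) sin(n pi y),  E_mn = pi^2 (m^2 + n^2)
NMAX = 30
X0, Y0 = 0.3, 0.4          # hypothetical scatterer position
modes = []
for m in range(1, NMAX + 1):
    for n in range(1, NMAX + 1):
        w = (2 * math.sin(m * math.pi * X0) * math.sin(n * math.pi * Y0)) ** 2
        modes.append((math.pi ** 2 * (m * m + n * n), w))

def g_trunc(E):
    """Truncated mode sum for G(r0, r0; E). The full 2D diagonal sum
    diverges logarithmically, so NMAX plays the role of a crude regulator."""
    return sum(w / (E - e) for e, w in modes)

def perturbed_level(s, e_lo, e_hi, tol=1e-10):
    """Solve g_trunc(E) = 1/s by bisection between two adjacent poles.
    g_trunc decreases monotonically from +inf to -inf on (e_lo, e_hi),
    so there is exactly one root for any strength s != 0."""
    a, b = e_lo + 1e-9, e_hi - 1e-9
    while b - a > tol:
        mid = 0.5 * (a + b)
        if g_trunc(mid) > 1.0 / s:
            a = mid
        else:
            b = mid
    return 0.5 * (a + b)

E1 = 2 * math.pi ** 2       # unperturbed ground state E_11
E2 = 5 * math.pi ** 2       # next level, E_12 = E_21
root_weak = perturbed_level(0.5, E1, E2)
root_strong = perturbed_level(5.0, E1, E2)
print(root_weak, root_strong)
```

For s → 0 the root collapses onto the unperturbed level, and increasing the strength pushes it further across the gap, which is the qualitative behavior compared against hard-disk calculations in chapter 6.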
Here we examine some very specific properties of disordered scatterers in a rectangle. These calculations were in some sense the original inspiration for this work and are its most distinctive achievement. Here calculations are performed which are, apparently, out of reach of other numerical methods. These calculations both confirm some theoretical expectations and confound others, leaving a rich set of new questions. At the same time, it is also the most specialized application we consider, and not one with the broad applicability of the previous applications. In chapter 10 we present some conclusions and ideas for future extensions of the ideas in this work. This is followed (after the bibliography) by a variety of technical appendices.
Chapter 2
scattering cross-section. We then make the somewhat lengthy calculation which relates the differential cross-section to the potential of the scatterer. We perform this calculation for arbitrary spatial dimension. At first, this may seem like more work than necessary to review scattering theory. However, in what follows we will frequently use two dimensional scattering theory. While we could have derived everything in two dimensions, we would then have lost the reassuring feeling of seeing familiar three dimensional results. The arbitrary dimension derivation gives us both. We proceed to consider the consequences of particle conservation, or unitarity, and derive the d-dimensional optical theorem. It is interesting to note that for both this calculation and the previous one, the dimensional dependence enters only through the asymptotic expansion of the plane wave. Once we have this machinery in hand, we proceed to discuss point scatterers or "zero range interactions" as they will play a large role in various applications which follow. In the final section we focus briefly on two dimensions since two dimensional scattering theory is the stage on which all the applications play out.
2.1 Cross-Sections
At first, we will generalize to arbitrary spatial dimension a calculation from [28] (pp. 803-5) relating the scattering cross-section to matrix elements of the potential, $\hat V$. We consider a domain in which the stationary solutions of the Schrodinger equation are known, and we label these by $\phi_k$. For example, in free space,

$$\phi_k(r) = e^{ik\cdot r}. \qquad (2.1)$$
In the presence of a potential there will be new stationary solutions, labeled $\psi_k^{(\pm)}$, defined in terms of $\phi_k$, where the superscript plus or minus labels the asymptotic behavior of the $d$-dimensional spherical waves. In particular,

$$\hat H\,\psi_k^{(\pm)} = E\,\psi_k^{(\pm)} \qquad (2.2)$$

and

$$\psi_k^{(\pm)}(r) \;\xrightarrow{\;r\to\infty\;}\; \phi_k(r) + f_k^{(\pm)}(\Omega)\,\frac{e^{\pm ikr}}{r^{(d-1)/2}}. \qquad (2.3)$$
We assume the plane wave $\phi_k(r)$ is wide but finite so we may always go far enough away that scattering at any angle but $\theta = 0$ involves only the scattered part of the wave. Since the flux of the scattered wave is $\mathbf{j} = \mathrm{Im}\{\psi_{\rm scatt}^{*}\nabla\psi_{\rm scatt}\} = \hat r\,|f_k(\Omega)|^2/r^{d-1}$, the probability per unit time of a scattered particle to cross a surface element $da$ is

$$v\,\frac{|f_k(\Omega)|^2}{r^{d-1}}\,da = v\,|f_k(\Omega)|^2\,d\Omega. \qquad (2.4)$$
But the current density (flux per unit area) in the incident wave is $v$, so

$$\frac{d\sigma_{a\to b}}{d\Omega} = \big|f_{k_a}^{+}(\Omega_b)\big|^2. \qquad (2.5)$$

When unambiguous, we will replace $k_a$ and $k_b$ by $a$ and $b$ respectively. We proceed to develop a relationship between the scattering function $f$ and matrix elements of the potential, $\hat V$. This will lead us to the definition of the so-called scattering t-matrix.

Consider two potentials, $U(r)$ and $\tilde U(r)$ (both of which fall off faster than $1/r$). We will show that

$$\big\langle\tilde\psi_b^{-}\big|\,\hat U - \hat{\tilde U}\,\big|\psi_a^{+}\big\rangle = \frac{i\hbar^{2}k}{m}\left(\frac{2\pi}{ik}\right)^{\frac{d-1}{2}}\Big[\tilde f^{-}(-\Omega_a)^{*} - f^{+}(\Omega_b)\Big] \qquad (2.6,\ 2.7)$$

where $\psi_a^{+}$ and $\tilde\psi_b^{-}$ satisfy

$$\Big[\nabla^2 + k^2 - \tfrac{2m}{\hbar^2}\,U(r)\Big]\,\psi_a^{+} = 0 \qquad (2.8)$$

$$\Big[\nabla^2 + k^2 - \tfrac{2m}{\hbar^2}\,\tilde U(r)\Big]\,\tilde\psi_b^{-} = 0. \qquad (2.9)$$
We multiply the complex conjugate of (2.9) by $\psi_a^{+}$ and (2.8) by $\tilde\psi_b^{-\,*}$ and then subtract the former equation from the latter. Since $U(r)$ and $\tilde U(r)$ are real, we have (dropping the $a$'s and $b$'s when unambiguous)

$$-\frac{\hbar^2}{2m}\Big[\tilde\psi^{*}\,\nabla^2\psi^{(+)} - \psi^{(+)}\,\nabla^2\tilde\psi^{*}\Big] + \tilde\psi^{*}\big(U - \tilde U\big)\psi^{(+)} = 0 \qquad (2.10)$$

so that

$$\big\langle\tilde\psi_b^{-}\big|\,\hat U - \hat{\tilde U}\,\big|\psi_a^{+}\big\rangle = \frac{\hbar^2}{2m}\,\lim_{R\to\infty}\int_{r<R}\Big[\tilde\psi^{*}\,\nabla^2\psi^{+} - \psi^{+}\,\nabla^2\tilde\psi^{*}\Big]\,dr. \qquad (2.11)$$
We define

$$\{\psi_1,\psi_2\}_R = R^{\,d-1}\int\left[\psi_1\frac{\partial\psi_2}{\partial r} - \psi_2\frac{\partial\psi_1}{\partial r}\right]_{r=R} d\Omega. \qquad (2.12)$$

Green's theorem implies

$$\{\psi_1,\psi_2\}_R = \int_{r<R}\Big[\psi_1\,\nabla^2\psi_2 - \psi_2\,\nabla^2\psi_1\Big]\,dr \qquad (2.13)$$

and therefore

$$\big\langle\tilde\psi_b^{-}\big|\,\hat U - \hat{\tilde U}\,\big|\psi_a^{+}\big\rangle = \frac{\hbar^2}{2m}\,\lim_{R\to\infty}\big\{\tilde\psi^{*},\psi^{+}\big\}_R. \qquad (2.14)$$
To evaluate the surface integral, we substitute the asymptotic forms of the $\psi$'s. Since $\tilde\psi^{*} \sim e^{-ik_b\cdot r} + \tilde f^{-}(\Omega)^{*}\,e^{ikr}/r^{(d-1)/2}$ and $\psi^{+} \sim e^{ik_a\cdot r} + f^{+}(\Omega)\,e^{ikr}/r^{(d-1)/2}$, the bracket splits into four integrals:

$$\lim_{R\to\infty}\{\tilde\psi^{*},\psi^{+}\}_R = \lim_{R\to\infty}\Big\{e^{-ik_b\cdot r},\,e^{ik_a\cdot r}\Big\}_R + \lim_{R\to\infty}\Big\{e^{-ik_b\cdot r},\,f^{+}(\Omega)\tfrac{e^{ikr}}{r^{(d-1)/2}}\Big\}_R + \lim_{R\to\infty}\Big\{\tilde f^{-}(\Omega)^{*}\tfrac{e^{ikr}}{r^{(d-1)/2}},\,e^{ik_a\cdot r}\Big\}_R + \lim_{R\to\infty}\Big\{\tilde f^{-}(\Omega)^{*}\tfrac{e^{ikr}}{r^{(d-1)/2}},\,f^{+}(\Omega)\tfrac{e^{ikr}}{r^{(d-1)/2}}\Big\}_R. \qquad (2.15)$$
Since we are performing these integrals at large $r$, we require only the asymptotic form of the plane wave, and only in a form suitable for integration. We find this form by doing a stationary phase integral [41] of an arbitrary function of solid angle against a plane wave at large $r$. That is,

$$I = \lim_{r\to\infty}\int e^{ik\cdot r}\,f(\Omega_r)\,d\Omega_r. \qquad (2.16)$$

We find the points where the exponential varies most slowly as a function of the integration variables, in this case the angles in $\Omega_r$. Since $k\cdot r = kr\cos(\theta_{kr})$, the stationary phase points occur at $\theta_{kr} = 0,\,\pi$. Expanding the exponential around each of these points yields

$$I \;\approx\; \int \exp\Big[ikr\Big(1 - \tfrac12\textstyle\sum_{i=1}^{d-1}(\Omega_{k,i}-\Omega_{r,i})^2\Big)\Big]\,f(\Omega_r)\,d\Omega_r + \int \exp\Big[ikr\Big(-1 + \tfrac12\textstyle\sum_{i=1}^{d-1}(\Omega_{k,i}-\Omega_{r,i})^2\Big)\Big]\,f(\Omega_r)\,d\Omega_r,$$

where the angles in the second integral are measured from $-\Omega_k$.
We perform all the integrals using complex Gaussian integration to yield an asymptotic form for the plane wave (to be used only inside an integral):

$$e^{ik\cdot r}\;\widehat{=}\;\left(\frac{2\pi}{ikr}\right)^{\frac{d-1}{2}}\Big[\delta(\Omega_r - \Omega_k)\,e^{ikr} + i^{\,d-1}\,\delta(\Omega_r + \Omega_k)\,e^{-ikr}\Big] \qquad (2.17)$$

where $\Omega_r + \Omega_k = 0$ (enforced by the second $\delta$-function) means that $\hat r = -\hat k$.

We'll attack the integrals in equation (2.15) one at a time, beginning with $\{e^{-ik_b\cdot r}, e^{ik_a\cdot r}\}_R$. Since $k_a$ and $k_b$ have the same length, $k_a + k_b$ is orthogonal to $k_a - k_b$. Thus, we can always choose our angular integrals such that our innermost integral is exactly zero:

$$\int_0^{2\pi} e^{ia\sin\theta}\cos\theta\,d\theta = \frac{1}{ia}\,e^{ia\sin\theta}\Big|_0^{2\pi} = 0. \qquad (2.18)$$
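The $d = 2$ case of the asymptotic form (2.17) is easy to test numerically: the exact angular integral $\int_0^{2\pi} e^{ikr\cos\theta} f(\theta)\,d\theta$ should approach $(2\pi/ikr)^{1/2}[f(0)\,e^{ikr} + i\,f(\pi)\,e^{-ikr}]$ as $kr\to\infty$. A small sketch (the smooth test function $f$ is an arbitrary choice, not from the derivation):

```python
import cmath
import math

def exact_integral(kr, f, n=4096):
    # rectangle rule is spectrally accurate for smooth periodic integrands
    dth = 2 * math.pi / n
    return sum(cmath.exp(1j * kr * math.cos(i * dth)) * f(i * dth)
               for i in range(n)) * dth

def stationary_phase(kr, f):
    # d = 2 version of (2.17): the delta functions pick out theta = 0 and pi,
    # and i^(d-1) = i
    pref = cmath.sqrt(2 * math.pi / (1j * kr))
    return pref * (f(0.0) * cmath.exp(1j * kr) +
                   1j * f(math.pi) * cmath.exp(-1j * kr))

f = lambda th: 1.0 + 0.3 * math.cos(th) + 0.2 * math.sin(th)
for kr in (50.0, 200.0):
    ex, sp = exact_integral(kr, f), stationary_phase(kr, f)
    print(kr, abs(ex - sp) / abs(ex))
```

With $f \equiv 1$ this reduces to the familiar large-argument asymptotics of $2\pi J_0(kr)$, i.e. $2\sqrt{2\pi/kr}\,\cos(kr - \pi/4)$, which is a reassuring cross-check on the phases in (2.17).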
Thus $\lim_{R\to\infty}\{e^{-ik_b\cdot r}, e^{ik_a\cdot r}\}_R = 0$. We can do the second integral using the asymptotic form of the plane wave. The only contribution comes from the incoming part of the plane wave:

$$\lim_{R\to\infty}\Big\{e^{-ik_b\cdot r},\,f^{+}(\Omega)\frac{e^{ikr}}{r^{(d-1)/2}}\Big\}_R = -2ik\left(\frac{2\pi}{ik}\right)^{\frac{d-1}{2}} f^{+}(\Omega_b). \qquad (2.19,\ 2.20)$$
We can do the third integral exactly the same way. Again, only the incoming part of the plane wave contributes:

$$\lim_{R\to\infty}\Big\{\tilde f^{-}(\Omega)^{*}\frac{e^{ikr}}{r^{(d-1)/2}},\,e^{ik_a\cdot r}\Big\}_R = 2ik\left(\frac{2\pi}{ik}\right)^{\frac{d-1}{2}}\tilde f^{-}(-\Omega_a)^{*}. \qquad (2.21)$$

The fourth integral is zero since both waves are purely outgoing. Thus

$$\lim_{R\to\infty}\{\tilde\psi^{*},\psi^{+}\}_R = 2ik\left(\frac{2\pi}{ik}\right)^{\frac{d-1}{2}}\Big[\tilde f^{-}(-\Omega_a)^{*} - f^{+}(\Omega_b)\Big] \qquad (2.22)$$
which, when substituted into equation (2.14), gives the desired result.

Let's apply the result (2.7) to the case $\hat U = \hat V$, $\hat{\tilde U} = 0$. We have

$$\big\langle\phi_b\big|\hat V\big|\psi_a^{+}\big\rangle = -\,\frac{i\hbar^{2}k}{m}\left(\frac{2\pi}{ik}\right)^{\frac{d-1}{2}} f_a^{+}(\Omega_b). \qquad (2.23)$$

Applying it instead with $\hat U = 0$, $\hat{\tilde U} = \hat V$ gives the analogous expressions (2.24)-(2.25) for $f$ in terms of $\big\langle\psi_b^{-}\big|\hat V\big|\phi_a\big\rangle$.
We now have

$$\frac{d\sigma_{a\to b}}{d\Omega} = \big|f_a^{+}(\Omega_b)\big|^2 = \frac{m^2}{\hbar^4}\,(2\pi)^{1-d}\,k^{\,d-3}\,\Big|\big\langle\phi_b\big|\hat V\big|\psi_a^{+}\big\rangle\Big|^2. \qquad (2.26)$$

Since the $d$-dimensional density of states per unit energy is

$$\varrho_d(E) = \frac{1}{(2\pi\hbar)^d}\,p^{\,d-1}\,\frac{dp}{dE} = \frac{1}{(2\pi\hbar)^d}\,(\hbar k)^{d-1}\,\frac{m}{\hbar k} = \frac{m}{(2\pi\hbar)^d}\,(\hbar k)^{d-2} \qquad (2.27)$$

and the initial velocity is $\hbar k/m$, we can write our final result for the cross-section in a more useful form,

$$\frac{d\sigma_{a\to b}}{d\Omega} = \frac{2\pi}{\hbar v}\,\Big|\big\langle\phi_b\big|\hat V\big|\psi_a^{+}\big\rangle\Big|^2\,\varrho_d(E) \qquad (2.28)$$
where all of the dimensional dependence is in the density of states and the matrix element. For purposes which will become clear later, it is useful to define the so-called "t-matrix" operator, $\hat t(E)$, such that

$$\hat t(E)\,|\phi_a\rangle = \hat V\,|\psi_a^{+}\rangle \qquad (2.29)$$

and our result may be re-written as

$$\frac{d\sigma_{a\to b}}{d\Omega} = \frac{2\pi}{\hbar v}\,\Big|\big\langle\phi_b\big|\hat t(E)\big|\phi_a\big\rangle\Big|^2\,\varrho_d(E). \qquad (2.30)$$
2.2 Unitarity and the Optical Theorem

Consider an incident wave built as a superposition of plane waves of fixed energy,

$$\psi_{\rm inc}(r) = \int F(\Omega')\,e^{ik_{\Omega'}\cdot r}\,d\Omega' \qquad (2.31)$$

where $k_{\Omega'}$ is the wavevector of magnitude $k$ in the direction $\Omega'$. So the asymptotic outgoing form of $\psi$ is (where $f(\Omega_a\to\Omega_b) = f_a^{(+)}(\Omega_b)$ is simply a more symmetric notation than that used in the previous section)

$$\psi(r) \;\sim\; \int F(\Omega')\,e^{ik_{\Omega'}\cdot r}\,d\Omega' + \frac{e^{ikr}}{r^{(d-1)/2}}\int F(\Omega')\,f(\Omega'\to\Omega_r)\,d\Omega'. \qquad (2.32)$$
For large $r$ we can use the asymptotic form of the plane wave (2.17) to perform the first integral. We then get

$$\psi(r) \;\sim\; \left(\frac{2\pi}{ikr}\right)^{\frac{d-1}{2}}\Big[F(\Omega_r)\,e^{ikr} + i^{\,d-1}\,F(-\Omega_r)\,e^{-ikr}\Big] + \frac{e^{ikr}}{r^{(d-1)/2}}\int F(\Omega')\,f(\Omega'\to\Omega_r)\,d\Omega'. \qquad (2.33)$$
We can write this more simply (dropping the common factor $(2\pi/ik)^{(d-1)/2}$) as

$$\psi(r) \;\sim\; i^{\,d-1}\,F(-\Omega_r)\,\frac{e^{-ikr}}{r^{(d-1)/2}} + \hat S F(\Omega_r)\,\frac{e^{ikr}}{r^{(d-1)/2}} \qquad (2.34)$$

where

$$\hat S = \hat 1 + \left(\frac{ik}{2\pi}\right)^{\frac{d-1}{2}}\hat f \qquad (2.35)$$

and $\hat f$ acts on functions of solid angle via

$$\hat f F(\Omega) = \int F(\Omega')\,f(\Omega'\to\Omega)\,d\Omega'. \qquad (2.36)$$
Because the scattering is elastic, we must have as many particles going into the center as there are going out of the center, and the normalization of these two waves must be the same. So $\hat S$ is unitary:

$$\hat S^{\dagger}\hat S = \hat 1. \qquad (2.37)$$
Substituting (2.35) into (2.37) gives

$$\Big[\hat 1 + \Big(\tfrac{ik}{2\pi}\Big)^{\frac{d-1}{2}}\hat f\Big]^{\dagger}\Big[\hat 1 + \Big(\tfrac{ik}{2\pi}\Big)^{\frac{d-1}{2}}\hat f\Big] = \hat 1 \qquad (2.38)$$

which, after multiplying out, leaves

$$i^{\frac{d-1}{2}}\,\hat f + i^{-\frac{d-1}{2}}\,\hat f^{\dagger} = -\left(\frac{k}{2\pi}\right)^{\frac{d-1}{2}}\hat f^{\dagger}\hat f. \qquad (2.39)$$

In terms of the scattering function $f$ this reads

$$i^{\frac{d-1}{2}}\,f(\Omega'\to\Omega) + i^{-\frac{d-1}{2}}\,f^{*}(\Omega\to\Omega') = -\left(\frac{k}{2\pi}\right)^{\frac{d-1}{2}}\int f^{*}(\Omega\to\Omega'')\,f(\Omega'\to\Omega'')\,d\Omega''. \qquad (2.40)$$

Setting $\Omega' = \Omega$, the left hand side becomes $2\,\mathrm{Re}\{i^{\frac{d-1}{2}} f(\Omega\to\Omega)\} = -2\,\mathrm{Im}\{i^{\frac{d-3}{2}} f(\Omega\to\Omega)\}$, so that

$$\mathrm{Im}\Big\{i^{\frac{d-3}{2}}\,f(\Omega\to\Omega)\Big\} = \frac{1}{2}\left(\frac{k}{2\pi}\right)^{\frac{d-1}{2}}\int\big|f(\Omega\to\Omega'')\big|^{2}\,d\Omega'' \qquad (2.41)$$

which is the optical theorem. Invariance under time reversal (interchanging initial and final states and the direction of motion of each wave) implies

$$S(\Omega\to\Omega') = S(-\Omega'\to-\Omega), \qquad f(\Omega\to\Omega') = f(-\Omega'\to-\Omega).$$
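In $d = 2$ the optical theorem (2.41) reads $\mathrm{Im}\{i^{-1/2} f(\Omega\to\Omega)\} = \tfrac12(k/2\pi)^{1/2}\sigma$, which reduces to the familiar $\mathrm{Im}\{e^{-i\pi/4}f(0)\} = \sqrt{k/8\pi}\,\sigma$. A numerical sanity check, using a scattering function built from unitary S-matrix eigenvalues $e^{2i\delta_m}$ on the circular harmonics via (2.35)-(2.36); the phase shifts $\delta_m$ and $k$ are arbitrary illustrative inputs:

```python
import cmath
import math

k = 2.5
deltas = {0: 0.7, 1: 0.3, -1: 0.3, 2: 0.1, -2: 0.1}   # arbitrary phase shifts

# On e^{i m theta}, S has eigenvalue e^{2 i delta_m}; from (2.35),
# S = 1 + (ik/2pi)^{1/2} f_hat, so the eigenvalue of f_hat is
# g_m = (2pi/ik)^{1/2} (e^{2 i delta_m} - 1).
pref = cmath.sqrt(2 * math.pi / (1j * k))
g = {m: pref * (cmath.exp(2j * d) - 1) for m, d in deltas.items()}

def f(dtheta):
    # f(theta' -> theta) depends only on theta - theta'
    return sum(gm * cmath.exp(1j * m * dtheta) for m, gm in g.items()) / (2 * math.pi)

# total cross-section sigma = integral of |f|^2 over angle (rectangle rule,
# spectrally accurate for this trigonometric polynomial)
n = 2048
sigma = sum(abs(f(i * 2 * math.pi / n)) ** 2 for i in range(n)) * 2 * math.pi / n

lhs = (cmath.exp(-1j * math.pi / 4) * f(0.0)).imag    # Im{ i^{-1/2} f(0) }
rhs = 0.5 * math.sqrt(k / (2 * math.pi)) * sigma
print(lhs, rhs)
```

Analytically both sides equal $\sqrt{2/\pi k}\,\sum_m \sin^2\delta_m$, so the agreement here is exact up to quadrature error.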
2.3 Green Functions

The Green function operator is defined by

$$\big(z - \hat H \pm i\epsilon\big)\,\hat G^{(\pm)}(z) = \hat 1 \qquad (2.42)$$

where $\hat 1$ is the identity operator. The $i\epsilon$ is used to avoid difficulties when $z$ is equal to an eigenvalue of $\hat H$; $\epsilon$ is always taken to zero at the end of a calculation. We frequently use the Green function operator $\hat G_0^{(\pm)}(z)$ corresponding to $\hat H = \hat H_0 = -\frac{\hbar^2}{2m}\nabla^2$.

Consider the Hamiltonian $\hat H = \hat H_0 + \hat V$. As in the previous section, we denote the eigenstates of $\hat H_0$ by $|\phi_a\rangle$ and the eigenstates of $\hat H$ by $|\psi_a\rangle$. These satisfy

$$\hat H_0\,|\phi_a\rangle = E_a\,|\phi_a\rangle \qquad (2.43)$$

$$\hat H\,|\psi_a\rangle = E_a\,|\psi_a\rangle. \qquad (2.44)$$
We claim

$$|\psi_a\rangle = |\phi_a\rangle + \hat G_0^{(\pm)}(E_a)\,\hat V\,|\psi_a\rangle. \qquad (2.45)$$

The claim is easily proved by applying the operator $(E_a - \hat H_0 \pm i\epsilon)$ to the left of both sides of the equation, since $(E_a - \hat H_0 \pm i\epsilon)\,|\psi_a\rangle = \hat V|\psi_a\rangle$, $(E_a - \hat H_0 \pm i\epsilon)\,|\phi_a\rangle = 0$ and $(E_a - \hat H_0 \pm i\epsilon)\,\hat G_0^{(\pm)}(E_a) = \hat 1$. Using the t-matrix, we can re-write this as

$$|\psi_a\rangle = |\phi_a\rangle + \hat G_0(E_a)\,\hat t(E_a)\,|\phi_a\rangle \qquad (2.46)$$

but we can also re-write (2.45) by iterating it (inserting the right hand side into itself as $|\psi_a\rangle$) to give

$$|\psi_a\rangle = |\phi_a\rangle + \hat G_0(E_a)\hat V\Big[|\phi_a\rangle + \hat G_0\hat V\,|\phi_a\rangle + \cdots\Big]. \qquad (2.47)$$

From (2.46, 2.47) we get a useful expression for the t-matrix:

$$\hat t(z) = \hat V + \hat V\hat G_0(z)\hat V + \hat V\hat G_0(z)\hat V\hat G_0(z)\hat V + \cdots. \qquad (2.48)$$

We factor out the first $\hat V$ in each term and sum the geometric series to yield

$$\hat t(z) = \hat V\Big[\hat 1 - \hat G_0(z)\hat V\Big]^{-1} = \hat V\Big[z - \hat H_0 - \hat V \pm i\epsilon\Big]^{-1}\Big[z - \hat H_0 \pm i\epsilon\Big] = \hat V\,\hat G^{(\pm)}(z)\,\Big[\hat G_0^{(\pm)}(z)\Big]^{-1} \qquad (2.49)$$
where (2.49) is frequently used as the definition of $\hat t^{(\pm)}(z)$. We now proceed to develop an equation for the Green function itself:

$$\hat G^{(\pm)}(z) = \Big[z - \hat H_0 - \hat V \pm i\epsilon\Big]^{-1} = \Big[\big(z - \hat H_0 \pm i\epsilon\big)\Big(\hat 1 - \big(z - \hat H_0 \pm i\epsilon\big)^{-1}\hat V\Big)\Big]^{-1} = \Big[\hat 1 - \hat G_0(z)\hat V\Big]^{-1}\hat G_0(z).$$

We expand $\big[\hat 1 - \hat G_0(z)\hat V\big]^{-1}$ in a power series to get

$$\hat G(z) = \hat G_0(z) + \hat G_0(z)\,\hat V\,\hat G_0(z) + \hat G_0(z)\,\hat V\,\hat G_0(z)\,\hat V\,\hat G_0(z) + \cdots \qquad (2.55)$$

which may be resummed as

$$\hat G(z) = \hat G_0(z) + \hat G_0(z)\,\hat V\,\hat G(z) \qquad (2.56)$$
and, using (2.49) and the definition of $\hat G_0^{(\pm)}(z)$, we get a t-matrix version of this equation, namely

$$\hat G(z) = \hat G_0(z) + \hat G_0(z)\,\hat t(z)\,\hat G_0(z). \qquad (2.57)$$
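The operator identities (2.48), (2.49), (2.56) and (2.57) are easy to verify in a finite-dimensional toy model: take any Hermitian $\hat H_0$ and a small $\hat V$, and choose $z$ off the real axis so that no $i\epsilon$ prescription is needed. A sketch with random matrices (sizes and scales are arbitrary; the Born series converges because $\|\hat G_0\hat V\| < 1$ here):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
H0 = np.diag(rng.uniform(0.0, 5.0, N))            # "free" Hamiltonian
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
V = 0.05 * (A + A.conj().T)                        # small Hermitian perturbation
z = 2.0 + 0.7j                                     # complex energy, no i*eps needed
I = np.eye(N)

G0 = np.linalg.inv(z * I - H0)
G = np.linalg.inv(z * I - H0 - V)

# (2.49): t = V G G0^{-1}; equivalently V (1 - G0 V)^{-1}
t = V @ G @ (z * I - H0)
t_alt = V @ np.linalg.inv(I - G0 @ V)

# (2.48): Born series for t, truncated at 60 terms
t_born = sum(V @ np.linalg.matrix_power(G0 @ V, j) for j in range(60))

# (2.56): Dyson equation, and (2.57): t-matrix form of G
assert np.allclose(t, t_alt)
assert np.allclose(t, t_born)
assert np.allclose(G, G0 + G0 @ V @ G)
assert np.allclose(G, G0 + G0 @ t @ G0)
```

The same four-line check is a useful regression test whenever a numerical t-matrix implementation is rewritten, since it exercises every identity used later in the multiple-scattering formalism.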
In position space, the Green function satisfies the differential equation

$$\left[z + \frac{\hbar^2}{2m}\nabla_r^2 - V(r)\right]G^\pm(r, r'; z) = \delta(r - r'). \tag{2.58}$$

We begin by considering an arbitrary point $r_o$ and a small ball of radius $\epsilon$ around it, $B_\epsilon(r_o)$. We can move the origin to $r_o$ and then integrate both sides of (2.58) over this volume:

$$\int_{B_\epsilon(0)}\left[z + \frac{\hbar^2}{2m}\nabla_r^2 - V(r_o - r)\right]G^\pm(r_o - r, 0; z)\,dr = 1. \tag{2.59}$$

We now consider the $\epsilon \to 0$ limit of this equation. We assume that the potential is finite and continuous at $r = r_o$, so $V(r_o - r)$ can be replaced by $V(r_o)$ in the integrand. We can safely assume that

$$\lim_{\epsilon\to 0}\int_{B_\epsilon(0)} G^\pm(r_o - r, 0; z)\,dr = 0 \tag{2.60}$$

since, if it weren't, the integral of the $\nabla^2 G$ term would be infinite. We are left with

$$\lim_{\epsilon\to 0}\int_{B_\epsilon(0)}\nabla^2 G^\pm(r_o - r, 0; z)\,dr = \frac{2m}{\hbar^2}. \tag{2.61}$$
Applying the divergence theorem,

$$\lim_{\epsilon\to 0}\oint_{\partial B_\epsilon(0)}\frac{\partial}{\partial r}G^\pm(r_o - r, 0; z)\,\epsilon^{d-1}\,d\Omega = \frac{2m}{\hbar^2}. \tag{2.62}$$

So we have a first order differential equation for $G^\pm(r, r'; z)$ for small $\epsilon = |r - r_o|$:

$$\frac{\partial}{\partial\epsilon}G^\pm(\epsilon; z) = \frac{2m}{\hbar^2}\,\frac{1}{S_d\,\epsilon^{d-1}} \tag{2.63}$$
where

$$S_d = \frac{2\pi^{d/2}}{\Gamma(d/2)} \tag{2.64}$$

is the surface area of the unit sphere in $d$ dimensions (this is easily derived by taking the product of $d$ Gaussian integrals and then performing the integral in radial coordinates, see e.g., [32], pp. 501-2). In particular, in two dimensions the Green function has a logarithmic singularity "on the diagonal," where $r \to r'$. In $d > 2$ dimensions, the diagonal singularity goes as $r^{2-d}$. It is worth noting that in one dimension there is no diagonal singularity in $G$. As a consequence of our derivation of the form of the singularity in $G$, we have proved that, as long as $V(r)$ is finite and continuous in the neighborhood of $r'$, then

$$\lim_{r\to r'}\left[G^\pm(r, r'; z) - G_o^\pm(r, r'; z)\right] < \infty. \tag{2.65}$$
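Equation (2.64) is easy to sanity-check against the familiar low-dimensional cases:

```python
from math import pi, gamma, isclose

def S(d):
    """Surface area of the unit sphere in d dimensions, Eq. (2.64)."""
    return 2 * pi ** (d / 2) / gamma(d / 2)

assert isclose(S(1), 2.0)        # the 1D "sphere" is two points
assert isclose(S(2), 2 * pi)     # circumference of the unit circle
assert isclose(S(3), 4 * pi)     # area of the unit sphere
```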
interaction. Choosing the wave function to be zero at the interaction point leads to the mathematical formalism of "self-adjoint extension theory," so named because the restriction of the Hamiltonian operator to the space of functions which are zero at a point leaves a non-self-adjoint Hamiltonian. The family of possible extensions which would make the Hamiltonian self-adjoint corresponds to various scattering strengths [2]. Much of this complication arises because of an attempt to write the Hamiltonian explicitly or to make sure that every possible zero range interaction is included in the formalism. To avoid these details, we consider a very limited class of zero-range interactions, namely zero-range s-wave scatterers.

Consider a scatterer placed at the origin in two dimensions. We assume the physical scatterer being modeled is small compared to the wavelength, $\lambda = 2\pi/\sqrt E$, and thus scatters only s-waves. So we can write the t-matrix (for a general discussion of t-matrices see, e.g., [35])

$$\hat t(z) = |0\rangle\,s(z)\,\langle 0|. \tag{2.66}$$

If, at energy $E$, $|\phi\rangle$ is incident on the scatterer, we write the full wave (incident plus scattered) as

$$|\psi\rangle = |\phi\rangle + \hat G_o(E)\,\hat t(E)\,|\phi\rangle \tag{2.67}$$
$$\psi(r) = \phi(r) + G_o(r, 0; E)\,s(E)\,\phi(0). \tag{2.68}$$
At this point the scatterer strength, $s(z)$, is simply a complex constant. We can consider $s(E)$ as it relates to the cross-section. From equation (2.26) with $\hat V|\psi_a\rangle$ replaced by $\hat t|\phi_a\rangle$, we have (since $\langle\phi_b|\hat t(z)|\phi_a\rangle = s(z)$)

$$\sigma(E) = S_d\,\frac{m^2}{\hbar^4}\,(2\pi)^{1-d}\,k^{d-3}\,|s(E)|^2 \tag{2.69}$$

where $S_d$, the surface area of the unit sphere in $d$ dimensions, is given by (2.64).

We also consider another length scale, akin to the three-dimensional scattering length. Instead of looking at the asymptotic form of the wave function, we look at the s-wave component of the wave function by using $R_o(r)$, the regular part of the s-wave solution to the Schrodinger equation, as an incident wave. We then have

$$\psi^+(r) = R_o(r) + G_o^+(r, 0; E)\,s^+(E)\,R_o(0). \tag{2.70}$$
We define the effective radius, $a_e$, as the first node of this wave function:

$$R_o(a_e) + G_o^+(a_e; E)\,s^+(E) = 0. \tag{2.71}$$

We can reverse this line of argument and find $s^+(E)$ for a particular $a_e$:

$$s^+(E) = -\frac{R_o(a_e)}{G_o^+(a_e; E)}. \tag{2.72}$$

The point interaction accounts for the s-wave part of the scattering from a hard disk of radius $a_e$. From equation (2.69), the cross section of a point interaction with effective radius $a_e$ is

$$\sigma(E) = S_d\,\frac{m^2}{\hbar^4}\,(2\pi)^{1-d}\,k^{d-3}\,\left|\frac{R_o(a_e)}{G_o^+(a_e; E)}\right|^2 \tag{2.73}$$

but this is exactly the s-wave part of the cross-section of a hard disk in $d$ dimensions. Though zero range interactions have the cross-section of hard disks, depending on the dimension and the value of $s^+(E)$, the point interaction can be attractive or repulsive. In three dimensions, the $E \to 0$ limit of $a_e$ exists and is the scattering length as defined in the modern sense [36]. It is interesting to note that other authors, e.g., Landau and Lifshitz in their classic quantum mechanics text [26], define the scattering length as we have defined the effective radius, namely as the first node in the s-wave part of the wave function. These definitions are equivalent in three dimensions but quite different in two, where the modern scattering length is not well defined but, for any finite energy, the effective radius is.

In two dimensions, the cross-section is related to the scattering amplitude by

$$\sigma_{a\to b} = |f_a(\theta_b)|^2 = \frac{m^2}{\hbar^4}\,\frac{1}{2\pi k}\,\left|\langle\phi_b|\hat V|\psi_a^+\rangle\right|^2 \tag{2.74}$$
implying that as $E \to 0$, $\sigma(E) \to \infty$, which is very different from three dimensions. Also, the optical theorem in two dimensions is different than its three-dimensional counterpart:

$$\mathrm{Im}\left[e^{-i\pi/4}\,f(0)\right] = \sqrt{\frac{k}{8\pi}}\,\sigma. \tag{2.75}$$

We have already mentioned the difference in the diagonal behavior of the two and three dimensional Green functions. The logarithmic singularity in $G$ prevents a simple idea of scattering length from making sense in two dimensions. This singularity comes from the form of the free-scattering solutions in two dimensions (the equivalent of the three-dimensional spherical harmonics). In two dimensions the free scattering solutions are Bessel and Neumann functions of integer order. The small argument and asymptotic properties of these functions are summarized in appendix E.

In two dimensions, we can write the specific form of the causal t-matrix for a zero range interaction with effective radius $a_e$ located at $r_s$: $\hat t^+(E) = |r_s\rangle\,s^+(E)\,\langle r_s|$ with

$$s^+(E) = -4i\,\frac{J_o(\sqrt E\,a_e)}{H_o^{(1)}(\sqrt E\,a_e)} \tag{2.76}$$

where $J_o(x)$ is the Bessel function of zeroth order and $H_o^{(1)}(x)$ is the Hankel function of zeroth order.
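In these units ($\hbar^2/2m = 1$, so $k = \sqrt E$), equation (2.76) is straightforward to evaluate with standard Bessel-function routines. The sketch below checks the node property implied by (2.71)-(2.72): when $\sqrt E\,a_e$ sits at a zero of $J_o$, the scatterer strength vanishes and the point interaction becomes invisible. The effective radius and test energies are illustrative choices:

```python
import numpy as np
from scipy.special import j0, hankel1

def s_plus(E, a_e):
    """Zero-range scatterer strength, Eq. (2.76), units hbar^2/(2m) = 1."""
    k = np.sqrt(E)
    return -4j * j0(k * a_e) / hankel1(0, k * a_e)

a_e = 1.0
# at the first zero of J0 (x ~ 2.4048) the effective radius lies on a node
# of the regular s-wave solution, so the scatterer strength vanishes
E_node = 2.404825557695773 ** 2
assert abs(s_plus(E_node, a_e)) < 1e-9

# away from the node the strength is finite and complex
assert abs(s_plus(1.0, a_e)) > 0.1
```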
Chapter 3
$$\psi_i^+(r) = \phi(r) + \sum_{j\ne i} G_B^+(r, r_j; E)\,s_j^+(E)\,\psi_j^+(r_j). \tag{3.1}$$

The number $\psi_i^+(r_i)$ represents the amplitude of the wave that hits scatterer $i$ last. That is, $\psi_i^+(r)$ is determined by all the other $\psi_j^+(r)$ ($j \ne i$). The full solution can be written in terms of the $\psi_i^+(r_i)$:

$$\psi^+(r) = \phi(r) + \sum_i G_B^+(r, r_i; E)\,s_i^+(E)\,\psi_i^+(r_i). \tag{3.2}$$
The expression (3.1) gives a set of linear equations for the $\psi_i^+(r_i)$. This can be seen more simply from the following substitution and rearrangement:

$$\psi_i^+(r_i) - \sum_{j\ne i} G_B^+(r_i, r_j; E)\,s_j^+(E)\,\psi_j^+(r_j) = \phi(r_i). \tag{3.3}$$

We define the $N$-vectors $a$ and $b$ via $a_i = \psi_i^+(r_i)$ and $b_i = \phi(r_i)$ and rewrite (3.3) as a matrix equation

$$\left[1 - t^+(E)\,G_B^+(E)\right]a = b \tag{3.4}$$

where $1$ is the $N \times N$ identity matrix, $t(E)$ is a diagonal $N \times N$ matrix defined by $(t)_{ii} = s_i(E)$ and $G_B^+(E)$ is an off-diagonal propagation matrix given by

$$\left[G_B^+(E)\right]_{ij} = \begin{cases} G_B^+(r_i, r_j; E) & \text{for } i \ne j \\ 0 & \text{for } i = j. \end{cases} \tag{3.5}$$

More explicitly, $1 - t^+(E)G_B^+(E)$ is given by (suppressing the "$E$" and "$+$"):

$$1 - tG_B = \begin{pmatrix} 1 & -s_1G_B(r_1, r_2) & \cdots & -s_1G_B(r_1, r_N) \\ -s_2G_B(r_2, r_1) & 1 & \cdots & -s_2G_B(r_2, r_N) \\ \vdots & \vdots & \ddots & \vdots \\ -s_NG_B(r_N, r_1) & -s_NG_B(r_N, r_2) & \cdots & 1 \end{pmatrix}. \tag{3.6}$$
The off-diagonal propagator is required since the individual t-matrices account for the diagonal propagation. That is, the scattering events where the incident wave hits scatterer $i$, propagates freely and then hits scatterer $i$ again are already counted in $\hat t_i$.

We can look at this diagrammatically. We use a solid line to indicate causal propagation and a dashed line ending with an "$i$" to indicate scattering from the $i$th scatterer. With this "dictionary," we can write the infinite series form of $\hat t_i$ as

[diagrammatic expansion of $\hat t_i$] (3.7)

[diagram] (3.8)

Now we consider multiple scattering from two scatterers. The Green function has the direct term, terms from just scatterer 1, terms from just scatterer 2 and terms involving both, i.e.,

[diagrammatic expansion of $G^+$ for two scatterers] (3.9)
The off-diagonal propagator appearing in multiple scattering theory allows us to add only the terms involving more than one scatterer, since the one-scatterer terms are already accounted for in each $\hat t_i^+$.

If, at energy $E$, $1 - t^+G_B^+$ is invertible, we can solve the matrix equation (3.4) for $a$:

$$a = \left[1 - tG_B\right]^{-1}b \tag{3.10}$$

where the inverse here is just ordinary matrix inversion. We substitute (3.10) into (3.2) to get

$$\psi^+(r) = \phi(r) + \sum_{ij} G_B^+(r, r_i; E)\,s_i^+(E)\left[\left(1 - tG_B\right)^{-1}\right]_{ij}\phi(r_j) \tag{3.11}$$

corresponding to the many-scatterer t-matrix operator

$$\hat t^+ = \sum_{ij}|r_i\rangle\,s_i^+(E)\left[\left(1 - tG_B\right)^{-1}\right]_{ij}\langle r_j|. \tag{3.12}$$
An analogous solution can be constructed for $|\psi^-\rangle$ by replacing all the outgoing solutions in the above with incoming solutions (superscript "+" goes to superscript "−"). We have shown that scattering from $N$ zero range interactions is solved by the inversion of an $N \times N$ matrix. As we will see below, generalized multiple scattering theory is not so simple. It does, however, rely on the inversion of an operator on a smaller space than that in which the problem is posed.
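The entire zero-range multiple scattering calculation of equations (3.1)-(3.11) fits in a few lines of numerical code. The sketch below solves the linear system obtained by evaluating (3.1) at the scatterer positions for three scatterers in free two-dimensional space (units $\hbar^2/2m = 1$, so $G_o^+(r, r'; E) = -(i/4)H_o^{(1)}(k|r-r'|)$); the positions and strengths are arbitrary illustrative values, and the direct solve is cross-checked against the iterated (Born-like) series for (3.1):

```python
import numpy as np
from scipy.special import hankel1

k = 1.0
E = k ** 2
positions = np.array([[0.0, 0.0], [1.5, 0.3], [0.7, -1.1]])
s = np.array([0.5 - 0.2j, 0.3 - 0.1j, 0.4 - 0.3j])     # strengths s_i^+(E)
N = len(s)

def G0(r1, r2):
    """Free 2D causal Green function, units hbar^2/(2m) = 1."""
    return -0.25j * hankel1(0, k * np.linalg.norm(r1 - r2))

# off-diagonal propagation matrix, Eq. (3.5)
GB = np.zeros((N, N), dtype=complex)
for i in range(N):
    for j in range(N):
        if i != j:
            GB[i, j] = G0(positions[i], positions[j])

phi = lambda r: np.exp(1j * k * r[0])                  # incident plane wave
b = np.array([phi(p) for p in positions])

# a_i = psi_i^+(r_i): Eq. (3.1) evaluated at the scatterer positions
a = np.linalg.solve(np.eye(N) - GB @ np.diag(s), b)

def psi(r):
    """Full wave, Eq. (3.2)."""
    return phi(r) + sum(G0(r, positions[i]) * s[i] * a[i] for i in range(N))

# cross-check: iterate Eq. (3.1) as a geometric series
a_series, term = b.astype(complex), b.astype(complex)
K = GB @ np.diag(s)
for _ in range(200):
    term = K @ term
    a_series = a_series + term
assert np.allclose(a, a_series)
```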
$$|\psi\rangle = |\phi\rangle + \hat G_B\sum_i\hat t_i\,|\psi_i\rangle. \tag{3.15}$$

The derivation begins to get complicated here. Since the scattering space is not necessarily discrete, we cannot map our problem onto a finite matrix. We now begin to create a framework in which the results of the previous section can be generalized. We define the projection operators, $\hat P_i$, which are projectors onto the $i$th scatterer, that is

$$\left\langle r\middle|\hat P_i f\right\rangle = \begin{cases} f(r) & \text{if } r \in C_i \\ 0 & \text{if } r \notin C_i. \end{cases} \tag{3.16}$$

Also we define a projection operator for the whole scattering space, $\hat P = \sum_{i=1}^N\hat P_i$.
We can project our equations for the $\psi_i(r)$ onto each scatterer in order to get equations analogous to the matrix equation we had for $\psi_i(r_i)$ in the previous section:

$$\hat P_i\,|\psi_i\rangle = \hat P_i\,|\phi\rangle + \hat P_i\hat G_B\sum_{j\ne i}\hat t_j\,|\psi_j\rangle \tag{3.17}$$

and, for purely formal reasons, we define a quantity analogous to the vector $a$ in the zero range scatterer case:

$$|\Psi\rangle = \sum_i\hat P_i\,|\psi_i\rangle. \tag{3.18}$$

We note that $\Psi(r)$ is non-zero on the scattering space only. With these definitions, we can develop a linear equation for $|\Psi\rangle$. We begin by summing (3.17) over the $N$ scatterers:

$$|\Psi\rangle = \hat P\,|\phi\rangle + \sum_i\sum_{j\ne i}\hat P_i\hat G_B\hat t_j\,|\psi_j\rangle. \tag{3.19}$$

Since $\hat t_i$ is unaffected by multiplication by $\hat P_i$, we have $\hat t_i = \hat P_i\hat t_i$ and $\hat t_i\,|\psi_i\rangle = \hat t_i\hat P_i\,|\psi_i\rangle = \hat t_i\,|\Psi\rangle$. Thus we can re-write (3.19) as

$$|\Psi\rangle = \hat P\,|\phi\rangle + \sum_i\sum_{j\ne i}\hat P_i\hat G_B\hat P_j\hat t_j\,|\psi_j\rangle \tag{3.20}$$

or

$$|\Psi\rangle = \hat P\,|\phi\rangle + \sum_i\sum_{j\ne i}\hat P_i\hat G_B\hat P_j\hat t_j\,|\Psi\rangle. \tag{3.21}$$
We can simplify this equation if, as in the zero range scatterer case, we define an off-diagonal background Green function operator,

$$\mathbf G_B = \sum_{i=1}^N\sum_{j\ne i}\hat P_i\hat G_B\hat P_j \tag{3.22}$$

and the full t-matrix

$$\hat t = \sum_m\hat t_m \tag{3.23}$$

which, since $\hat t_m = \hat P_m\hat t_m$, satisfy

$$\mathbf G_B\,\hat t = \sum_i\sum_{j\ne i}\hat P_i\hat G_B\hat P_j\hat t_j. \tag{3.24}$$

With these definitions, (3.21) becomes

$$|\Psi\rangle = \hat P\,|\phi\rangle + \mathbf G_B\,\hat t\,|\Psi\rangle \tag{3.25}$$

which has the formal solution

$$|\Psi\rangle = \left[\hat P - \mathbf G_B\,\hat t\right]^{-\mathbf 1}\hat P\,|\phi\rangle. \tag{3.26}$$

The operator $\hat P - \mathbf G_B\,\hat t$ is an operator on functions on the scattering space, $S$, and the boldface $-\mathbf 1$ superscript indicates inversion with respect to the scattering space only. In the case of zero range interactions the scattering space is a discrete set and the inverse is just ordinary matrix inversion. In general, finding this inverse involves solving a set of coupled linear integral equations.

We note that the projector, $\hat P$, is just the identity operator on the scattering space, so

$$\left[\hat P - \mathbf G_B\,\hat t\right]^{-\mathbf 1} = \left[1 - \mathbf G_B\,\hat t\right]^{-\mathbf 1}. \tag{3.27}$$
We can re-write (3.15), yielding

$$|\psi\rangle = |\phi\rangle + \hat G_B\,\hat t\,|\Psi\rangle. \tag{3.28}$$

Substituting (3.26) into (3.28) gives

$$|\psi\rangle = |\phi\rangle + \hat G_B\,\hat t\left[1 - \mathbf G_B\,\hat t\right]^{-\mathbf 1}\hat P\,|\phi\rangle. \tag{3.29}$$

The identity

$$\hat A\left(1 - \hat B\hat A\right)^{-1} = \left(1 - \hat A\hat B\right)^{-1}\hat A \tag{3.30}$$

implies

$$\hat t\left[1 - \mathbf G_B\,\hat t\right]^{-\mathbf 1} = \left[1 - \hat t\,\mathbf G_B\right]^{-\mathbf 1}\hat t \tag{3.31}$$

so we may define

$$\hat{\mathbf T} = \left[1 - \hat t\,\mathbf G_B\right]^{-\mathbf 1}\hat t \tag{3.32}$$

which is zero outside the scattering space. Our wavefunction can now be written (dropping the $\hat P$, since $\hat t\hat P = \hat t$)

$$|\psi\rangle = |\phi\rangle + \hat G_B\,\hat{\mathbf T}\,|\phi\rangle. \tag{3.33}$$
This derivation seems much more complicated than the special case presented first. While this is true, the underlying concepts are exactly the same. The complications arise from the more complicated nature of the individual scatterers. Each scatterer now leads to a linear integral equation, rather than a linear algebraic equation; what was simply a set of linear equations easily solved by matrix techniques becomes a set of linear integral equations which are difficult to solve except in special cases.

The techniques in this section are also useful formally. We will use them later in this chapter to effect an alternate proof of the scatterer renormalization discussed in the next section.
like to find a t-matrix, $\hat T^+(z)$, for the scatterer such that the full Green function, $\hat G^+(z)$, may be written

$$\hat G^+(z) = \hat G_B^+(z) + \hat G_B^+(z)\,\hat T^+(z)\,\hat G_B^+(z). \tag{3.36}$$

We'll call $\hat T^+(z)$ the "renormalized" t-matrix. This name will become clearer below.

Let's start with a guess. What can happen in our rectangle that couldn't happen in free space? The answer is simple: amplitude may scatter off of the scatterer, hit the walls and return to the scatterer again. That is, there are multiple scattering events between the background and the scatterer. Diagrammatically, this is just like the two-scatterer multiple scattering theory considered in the previous section where scatterer 1, instead of being another scatterer, is the background (see equation 3.9). Naively we would expect to add up all the scattering events (dropping the $z$'s):

$$\hat T^+ = \hat t^+ + \hat t^+\hat G_B^+\hat t^+ + \hat t^+\hat G_B^+\hat t^+\hat G_B^+\hat t^+ + \cdots = \left[\sum_{n=0}^\infty\left(\hat t^+\hat G_B^+\right)^n\right]\hat t^+ \tag{3.37}$$
$$= \frac{1}{1 - \hat t^+\hat G_B^+}\,\hat t^+. \tag{3.38}$$
For a zero range interaction at $r_s$, the t-matrix is

$$t^+(r, r') = s^+\,\delta(r - r_s)\,\delta(r' - r_s) \tag{3.39}$$

and, since every term in (3.37) begins and ends at $r_s$, the renormalized t-matrix has the same form,

$$T^+(r, r') = S^+\,\delta(r - r_s)\,\delta(r' - r_s). \tag{3.40}$$

Summing the series for this case gives

$$T^+(r, r') = \frac{s^+}{1 - s^+G_B^+(r_s, r_s)}\,\delta(r - r_s)\,\delta(r' - r_s) \tag{3.41}$$

which we can also write as an operator equation:

$$\hat T^+ = \frac{1}{1 - \hat t^+\hat G_B^+}\,\hat t^+. \tag{3.42}$$

This is not quite right. With multiple scattering we had to define an off-diagonal Green function since the diagonal part was already accounted for by the individual t-matrices. Something similar is needed here or we will be double counting terms which scatter, propagate without hitting the boundary, and then scatter again.
We can correct this by subtracting $\hat G_o^+$ from $\hat G_B^+$ in the previous derivation:

$$\hat T^+ = \hat t^+ + \hat t^+\left(\hat G_B^+ - \hat G_o^+\right)\hat t^+ + \hat t^+\left(\hat G_B^+ - \hat G_o^+\right)\hat t^+\left(\hat G_B^+ - \hat G_o^+\right)\hat t^+ + \cdots = \frac{1}{1 - \hat t^+\left(\hat G_B^+ - \hat G_o^+\right)}\,\hat t^+. \tag{3.43}$$

For future reference, we'll state the final result:

$$\hat T^+(z) = \left[\frac{1}{1 - \hat t^+(z)\left(\hat G_B^+(z) - \hat G_o^+(z)\right)}\right]\hat t^+(z). \tag{3.44}$$

We haven't proven this, but we can derive the same result more rigorously in at least two ways, both of which are shown below. The first proof follows from the expression (2.49) for the t-matrix derived in section 2.3. This is a purely formal derivation but it has the advantage of being relatively compact. Our second derivation uses the generalized multiple scattering theory of section 3.1.2. While this derivation is algebraically quite tedious, it emphasizes the arbitrariness of the split between scatterer and background by treating them on a completely equal footing.

We note that the free-space Green function operator, $\hat G_o^+(z)$, could be replaced by any Green function for which the t-matrix of the scatterer is known. That should be clear from the derivations below.
3.2.1 Derivation
Formal Derivation
Suppose we have $\hat H = \hat H_o + \hat H_B + \hat H_s$ where $\hat H_B$ is the Hamiltonian of the "background," and $\hat H_s$ is the "scatterer" Hamiltonian, which may be any reasonable potential. There is a t-matrix for the scatterer without the background:

$$\hat t^\pm(z) = \hat H_s\,\hat G_s^\pm(z)\left(z - \hat H_o \pm i\epsilon\right) \tag{3.45}$$
and for the scatterer in the presence of the background, where the background is treated as part of the propagator:

$$\hat T^\pm(z) = \hat H_s\,\hat G^\pm(z)\left(z - \hat H_o - \hat H_B \pm i\epsilon\right). \tag{3.46}$$

This yields an expression for the full Green function operator, $\hat G^\pm(z)$:

$$\hat G^\pm(z) = \hat G_B^\pm(z) + \hat G_B^\pm(z)\,\hat T^\pm(z)\,\hat G_B^\pm(z) \tag{3.47}$$
where $\hat G_B^\pm(z)$ solves

$$\left(z - \hat H_o - \hat H_B \pm i\epsilon\right)\hat G_B^\pm(z) = \hat 1. \tag{3.48}$$

We wish to find $\hat T^\pm(z)$ in terms of $\hat t^\pm(z)$, $\hat G_o^\pm(z)$, and $\hat G_B^\pm(z)$. Formally, we can use (3.45) to solve for $\hat H_s$:

$$\hat H_s = \hat t^\pm(z)\left(z - \hat H_o \pm i\epsilon\right)^{-1}\left[\hat G_s^\pm(z)\right]^{-1} = \hat t^\pm(z)\,\hat G_o^\pm(z)\left(z - \hat H_o - \hat H_s \pm i\epsilon\right). \tag{3.49}$$

Substituting this into (3.46) gives

$$\hat T^\pm(z) = \hat t^\pm(z)\,\hat G_o^\pm(z)\left[\hat G_s^\pm(z)\right]^{-1}\hat G^\pm(z)\left[\hat G_B^\pm(z)\right]^{-1} \tag{3.50}$$

which we can re-write, using $\hat G_s = \hat G_o + \hat G_o\hat t\hat G_o$ and $\hat G = \hat G_B + \hat G_B\hat T\hat G_B$, as

$$\hat T^\pm(z) = \hat t^\pm(z)\,\hat G_o^\pm(z)\left[\hat G_o^\pm(z) + \hat G_o^\pm(z)\,\hat t^\pm(z)\,\hat G_o^\pm(z)\right]^{-1}\left[\hat G_B^\pm(z) + \hat G_B^\pm(z)\,\hat T^\pm(z)\,\hat G_B^\pm(z)\right]\left[\hat G_B^\pm(z)\right]^{-1} \tag{3.51}$$
$$= \left[1 + \hat t^\pm(z)\,\hat G_o^\pm(z)\right]^{-1}\hat t^\pm(z)\left[1 + \hat G_B^\pm(z)\,\hat T^\pm(z)\right]. \tag{3.52}$$

Solving this equation for $\hat T^\pm(z)$:

$$\hat T^\pm(z) = \left[1 - \left(1 + \hat t^\pm(z)\,\hat G_o^\pm(z)\right)^{-1}\hat t^\pm(z)\,\hat G_B^\pm(z)\right]^{-1}\left(1 + \hat t^\pm(z)\,\hat G_o^\pm(z)\right)^{-1}\hat t^\pm(z) \tag{3.53}$$
$$= \left[1 + \hat t^\pm(z)\,\hat G_o^\pm(z) - \hat t^\pm(z)\,\hat G_B^\pm(z)\right]^{-1}\hat t^\pm(z) \tag{3.54}$$

where we have made repeated use of the operator identity

$$\hat A\left(1 - \hat B\hat A\right)^{-1} = \left(1 - \hat A\hat B\right)^{-1}\hat A. \tag{3.55}$$

Thus

$$\hat T^\pm(z) = \left[\frac{1}{1 - \hat t^\pm(z)\left(\hat G_B^\pm(z) - \hat G_o^\pm(z)\right)}\right]\hat t^\pm(z). \tag{3.56}$$
According to our derivation of section 3.1.2, we collect the individual t-matrices and propagators into the matrices

$$\left[\hat t\right]_{ij} = \hat t_i\,\delta_{ij} \tag{3.57}$$
$$\left[\mathbf G_o\right]_{ij} = \begin{cases} \hat 0 & i = j \\ \hat G_o & i \ne j \end{cases} \tag{3.58}$$

so that the full t-matrix may be written

$$\hat{\mathbf t} = \left[1 - \hat t\,\mathbf G_o\right]^{-\mathbf 1}\hat t. \tag{3.59}$$

Painful as it is, let's write out all of the terms in the above expression for $\hat{\mathbf t}$. We drop the "hats" on all the operators since everything in sight is an operator. First we have to invert $1 - \hat t\,\mathbf G_o$, and we have to do it carefully because none of these operators necessarily commute:

$$1 - \hat t\,\mathbf G_o = \begin{pmatrix} 1 & -t_1G_o \\ -t_2G_o & 1 \end{pmatrix} \tag{3.60}$$

so

$$\left[1 - \hat t\,\mathbf G_o\right]^{-1} = \begin{pmatrix} \left(1 - t_1G_ot_2G_o\right)^{-1} & t_1G_o\left(1 - t_2G_ot_1G_o\right)^{-1} \\ t_2G_o\left(1 - t_1G_ot_2G_o\right)^{-1} & \left(1 - t_2G_ot_1G_o\right)^{-1} \end{pmatrix} \tag{3.61}$$

and thus

$$\hat{\mathbf t} = \begin{pmatrix} \left(1 - t_1G_ot_2G_o\right)^{-1}t_1 & t_1G_o\left(1 - t_2G_ot_1G_o\right)^{-1}t_2 \\ t_2G_o\left(1 - t_1G_ot_2G_o\right)^{-1}t_1 & \left(1 - t_2G_ot_1G_o\right)^{-1}t_2 \end{pmatrix}. \tag{3.62}$$
So, in detail,

$$G = G_o + G_o\left[\left(1 - t_1G_ot_2G_o\right)^{-1}t_1 + t_1G_o\left(1 - t_2G_ot_1G_o\right)^{-1}t_2 + t_2G_o\left(1 - t_1G_ot_2G_o\right)^{-1}t_1 + \left(1 - t_2G_ot_1G_o\right)^{-1}t_2\right]G_o. \tag{3.63}$$

We would like to re-write this in the form

$$G = G_1 + G_1T_2G_1 \tag{3.64}$$

where $T_2$ is the renormalized t-matrix operator for scatterer 2 in the presence of scatterer 1. First we add and subtract $G_ot_1G_o$:

$$G = G_o + G_ot_1G_o + G_o\left[\left(1 - t_1G_ot_2G_o\right)^{-1} - 1\right]t_1G_o + G_o\left(1 - t_2G_ot_1G_o\right)^{-1}t_2G_o + G_ot_1G_o\left(1 - t_2G_ot_1G_o\right)^{-1}t_2G_o + G_ot_2G_o\left(1 - t_1G_ot_2G_o\right)^{-1}t_1G_o \tag{3.66}$$

but

$$\left(1 - t_1G_ot_2G_o\right)^{-1} - 1 = \left(1 - t_1G_ot_2G_o\right)^{-1}t_1G_ot_2G_o. \tag{3.67}$$

So

$$G = G_o + G_ot_1G_o + G_o\left(1 - t_1G_ot_2G_o\right)^{-1}t_1G_ot_2G_ot_1G_o + G_o\left(1 - t_2G_ot_1G_o\right)^{-1}t_2G_o + G_ot_1G_o\left(1 - t_2G_ot_1G_o\right)^{-1}t_2G_o + G_ot_2G_o\left(1 - t_1G_ot_2G_o\right)^{-1}t_1G_o. \tag{3.68}$$

We use the operator identity (3.55) to rewrite $G$:

$$G = G_o + G_ot_1G_o + G_ot_1G_o\left(1 - t_2G_ot_1G_o\right)^{-1}t_2G_ot_1G_o + G_o\left(1 - t_2G_ot_1G_o\right)^{-1}t_2G_o + G_ot_1G_o\left(1 - t_2G_ot_1G_o\right)^{-1}t_2G_o + G_o\left(1 - t_2G_ot_1G_o\right)^{-1}t_2G_ot_1G_o. \tag{3.69}$$

Several terms now have the common factor $\left(1 - t_2G_ot_1G_o\right)^{-1}t_2$. This allows us to collapse several terms:

$$G = G_o + G_ot_1G_o + \left(G_o + G_ot_1G_o\right)\left(1 - t_2G_ot_1G_o\right)^{-1}t_2\left(G_o + G_ot_1G_o\right) \tag{3.70}$$
but $G_o + G_ot_1G_o = G_1$ is the Green function for scatterer 1 alone. Also, $t_2G_ot_1G_o = t_2\left(G_1 - G_o\right)$, so we have

$$G = G_1 + G_1\,\frac{1}{1 - t_2\left(G_1 - G_o\right)}\,t_2\,G_1. \tag{3.71}$$

Now we see that equation 3.71 has exactly the same form as equation 2.49. So we have derived (restoring the "hats" and the "$z$")

$$\hat T_2(z) = \left[\frac{1}{1 - \hat t_2(z)\left(\hat G_1(z) - \hat G_o(z)\right)}\right]\hat t_2(z) \tag{3.72}$$

which is identical to equation 3.44, as was to be shown. In the notation of the previous derivation, $T_2(z) = T(z)$, $t_2(z) = t(z)$ and $G_1(z) = G_B(z)$.
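Because the steps above are pure operator algebra, they can be checked directly with matrices. The sketch below builds random Hermitian stand-ins for $H_o$, $V_1$ and $V_2$ (all numerical values are illustrative assumptions) and verifies that the renormalized t-matrix (3.72) reproduces the exact two-potential Green function through (3.64):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

def herm(scale):
    A = rng.normal(size=(n, n))
    return scale * (A + A.T)

H0 = np.diag(np.arange(1.0, n + 1))
V1, V2 = herm(0.2), herm(0.2)
z = 1.7 + 1e-3j
I = np.eye(n)

G0 = np.linalg.inv(z * I - H0)                 # free Green function
G1 = np.linalg.inv(z * I - H0 - V1)            # "background" Green function
G = np.linalg.inv(z * I - H0 - V1 - V2)        # full Green function
t2 = V2 @ np.linalg.inv(I - G0 @ V2)           # free-space t-matrix of scatterer 2

T2 = np.linalg.inv(I - t2 @ (G1 - G0)) @ t2    # renormalized t-matrix, Eq. (3.72)
assert np.allclose(G, G1 + G1 @ T2 @ G1)       # Eq. (3.64)
```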
3.2.2 Consequences
Free Space Background
What happens if $\hat G_B(z) = \hat G_o(z)$? Our formula should reduce to $\hat T(z) = \hat t(z)$. And it does:

$$\hat T(z) = \frac{1}{1 - \hat t(z)\left[\hat G_o(z) - \hat G_o(z)\right]}\,\hat t(z) = \hat t(z). \tag{3.73}$$
Closed Systems
Suppose $G_B$ comes from a finite domain, e.g., a rectangular domain in two dimensions. Then we have

$$\hat T = \frac{1}{1 - \hat t\left(\hat G_B - \hat G_o\right)}\,\hat t = \frac{1}{1 - \hat t\left(\hat G_o\,\hat t_B\,\hat G_o\right)}\,\hat t. \tag{3.74}$$

It is not obvious from this that $\hat T$ doesn't depend on the choice of incoming or outgoing solutions in the above equation. However, it is clear from physical considerations that a closed system only has one class of solutions. In fact, the above equation is independent of the choice of incoming or outgoing solutions for the free space quantities. We can show this in a non-rigorous way by observing that

$$\hat T = \frac{1}{\hat t^{-1} - \left[\hat G_B - \hat G_o\right]} \tag{3.75}$$

and that

$$\hat t^\pm = \hat H_s\left[1 - \hat G_o^\pm\hat H_s\right]^{-1} \tag{3.76}$$
so

$$\hat T^{-1} = \hat H_s^{-1} - \hat G_B. \tag{3.77}$$

To show this rigorously would require a careful definition of what these various inverses mean, since many of the operators can be singular. We would need to properly define the space on which these operators act. This would be similar to the definition of the scattering space used in section 3.1.2.

The Green function operator of a closed system has poles at the eigenenergies. That is, $\hat t_B(z)$ and $\hat G_B(z)$ have poles at $z = E_n^o$ for $n \in \{1, \ldots, \infty\}$. For $z$ near $E_n^o$,

$$\hat t_B(z) \approx \frac{\hat R_n}{z - E_n^o} \tag{3.78}$$

and $\hat G_o(z)$ has no poles (though it may have other sorts of singularities). So we define

$$\hat R'_n = \hat G_o(E_n^o)\,\hat R_n\,\hat G_o(E_n^o) \tag{3.79}$$

so that

$$\hat G_B(z) \approx \hat G_o(z) + \frac{\hat R'_n}{z - E_n^o}. \tag{3.80}$$

Since we have added a scatterer, in general none of the poles of $\hat G_B(z)$ should be poles of $\hat G(z)$. This is something we can check explicitly. Suppose $z = E_n^o + \delta$ where $|\delta| \ll 1$. Then

$$\hat G(z) = \hat G(E_n^o + \delta) \approx \frac{\hat R'_n}{\delta} + \frac{\hat R'_n}{\delta}\left[\frac{1}{1 - \hat t(z)\frac{\hat R'_n}{\delta}}\right]\hat t(z)\,\frac{\hat R'_n}{\delta} \tag{3.81}$$

which is easily simplified:

$$\hat G(z) \approx \frac{\hat R'_n}{\delta}\left[\frac{1}{1 - \hat t(z)\frac{\hat R'_n}{\delta}}\right]. \tag{3.82}$$

We assume that $\hat t(E_n^o) \ne 0$ and we know that $\hat R'_n \ne 0$ because $E_n^o$ is a simple pole of $\hat G_B(z)$. So, for small enough $\delta$, the inverse of $\hat t(z)\hat R'_n/\delta$ is small, and expanding (3.82) in this small quantity gives

$$\hat G(z) = -\frac{1}{\hat t(z)} + O(\delta).$$

Therefore, the poles of $\hat G_B(z)$ are not poles of $\hat G(z)$ unless $\hat t(E_n^o) = 0$.
So Where are the Poles in $\hat G(z)$?

As we expect, the analytic structures of $\hat G(z)$ and $\hat T(z)$ coincide (at least for the most part, see [10], p. 52). More simply, since the poles of $\hat G_B(z)$ are not poles of $\hat G(z)$, only poles of $\hat T(z)$ will contribute poles to $\hat G(z)$. Recall

$$\hat T(z) = \left[\frac{1}{1 - \hat t(z)\left(\hat G_B(z) - \hat G_o(z)\right)}\right]\hat t(z) \tag{3.86}$$

so poles of $\hat T(z)$ occur at energies $E_n$ ($\ne E_n^o$) satisfying

$$\hat t(E_n)\left[\hat G_B(E_n) - \hat G_o(E_n)\right] = 1 \tag{3.87}$$

or

$$\hat t(E_n) = \frac{1}{\hat G_B(E_n) - \hat G_o(E_n)} = \frac{1}{\hat G_o(E_n)\,\hat t_B(E_n)\,\hat G_o(E_n)}. \tag{3.88}$$

This is a simple equation to use when $\hat G_B$ comes from scatterers added to free space, so that $\hat t_B$ is known. When $\hat G_B$ is a Green function given a priori, e.g., the Green function of an infinite wire in 2 dimensions, the above equation becomes somewhat more difficult to evaluate. We'll address this issue in a later chapter (5) about scattering in 2-dimensional wires.
For a zero range interaction at $r_s$, where $\hat t(z) = |r_s\rangle\,s(z)\,\langle r_s|$, the renormalized t-matrix takes the simple form

$$\hat T(z) = \left[\frac{1}{\frac{1}{s(z)} - \left[G_B(r_s, r_s; z) - G_o(r_s, r_s; z)\right]}\right]|r_s\rangle\langle r_s|. \tag{3.90}$$
But, as we have already discussed in section 2.3, in more than one dimension the Green function, $G^\pm(r, r'; z)$, for the Schrodinger equation is singular in the $r \to r'$ limit. So we have to define $T(z)$ a bit more formally:

$$\hat T(z) = \lim_{r'\to r_s}\left[\frac{1}{\frac{1}{s(z)} - \left[G_B(r_s, r'; z) - G_o(r_s, r'; z)\right]}\right]|r_s\rangle\langle r_s|. \tag{3.91}$$

This limit can be quite difficult to evaluate. Four particular cases are dealt with in chapters 5 and 6.
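For one simple background the limit in (3.91) can be taken in closed form: a Dirichlet wall along $y = 0$ in two dimensions, where the method of images gives $G_B(r, r') = G_o(r, r') - G_o(r, r'^*)$ with $r'^*$ the mirror image of $r'$. The image term is smooth at coincidence, so the diagonal difference is just $-G_o(r_s, r_s^*)$. The sketch below (units $\hbar^2/2m = 1$; the bare strength $s$ is a hypothetical value) renormalizes a point scatterer near the wall and checks that the correction disappears as the scatterer moves far away:

```python
import numpy as np
from scipy.special import hankel1

k = 1.0
E = k ** 2

def G0(r1, r2):
    """Free 2D causal Green function, units hbar^2/(2m) = 1."""
    return -0.25j * hankel1(0, k * np.linalg.norm(np.asarray(r1) - np.asarray(r2)))

def renormalized_strength(s, d):
    """Point scatterer a distance d from a Dirichlet wall at y = 0."""
    rs, rs_image = np.array([0.0, d]), np.array([0.0, -d])
    dG = -G0(rs, rs_image)      # lim_{r'->rs} [GB(rs, r'; E) - G0(rs, r'; E)]
    return 1.0 / (1.0 / s - dG)  # scalar version of Eq. (3.91)

s = 0.5 - 0.2j                   # hypothetical bare (free-space) strength
S_near = renormalized_strength(s, 0.8)
S_far = renormalized_strength(s, 400.0)

assert abs(S_near - s) > 0.01 * abs(s)   # the wall visibly renormalizes the scatterer
assert abs(S_far - s) < 0.01 * abs(s)    # far from the wall the correction vanishes
```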
Chapter 4
We next consider scattering from a boundary between two regions with different known Green functions. This cannot be handled as a boundary condition but, nonetheless, all the scattering takes place at the interface. This method can be used to scatter from a potential barrier of fixed height; that was the original motivation for its development. It could also be used to scatter from a superconductor embedded in a normal metal or vice-versa, since each has its own known Green function (see A.6). This idea is being actively pursued.
A Dirichlet boundary condition,

$$\psi(r(s)) = 0, \tag{4.2}$$

emerges as the limit of the potential's parameters ($\gamma \to \infty$). For finite $\gamma$, the potential has the effect of a penetrable or "leaky" wall. A similar idea has been used to incorporate Dirichlet boundary conditions into certain classes of solvable potentials in the context of the path integral formalism [19]. Here we use the delta wall more generally, resulting in a widely applicable and accurate procedure to solve boundary condition problems for arbitrary shapes.

Consider the Schrodinger equation for a $d$-dimensional system, $H(r)\psi(r) = E\psi(r)$, with $H = H_0 + V$. As is well known, the solution for $\psi(r)$ is given by

$$\psi(r) = \phi(r) + \int dr'\,G_0^E(r, r')\,V(r')\,\psi(r') \tag{4.3}$$

where $\phi(r)$ solves $H_0(r)\phi(r) = E\phi(r)$ and $G_0^E(r, r')$ is the Green function for $H_0$. Hereafter, for notational simplicity, we will suppress the superscript $E$ in $G_0^E$. Now, we introduce a $\delta$-type potential

$$V(r) = \gamma\int ds\,\delta(r - r(s)) \tag{4.4}$$
where the integral is over $C$, a connected or disconnected surface. $r(s)$ is the vector position of the point $s$ on $C$ (we will call the set of all such vectors $S$), and $\gamma$ is the potential's strength. Clearly, $V(r) = 0$ for $r \notin S$. In the limit $\gamma \to \infty$, the wavefunction will satisfy (4.2), as shown below. For finite $\gamma$, a wave function subject to the potential (4.4) will satisfy a "leaky" form of the boundary condition.

Inserting the potential (4.4) into (4.3), the volume integral is trivially performed with the delta function, yielding

$$\psi(r) = \phi(r) + \gamma\int ds\,G_0(r, r(s))\,\psi(r(s)). \tag{4.5}$$

Thus, if $\psi(r(s))$ is known for all $s$, the wave function everywhere is obtained from (4.5) by a single definite integral. For $r = r(s'')$, some point of $S$,

$$\psi(r(s'')) = \phi(r(s'')) + \gamma\int ds'\,G_0(s'', s')\,\psi(s') \tag{4.6}$$

or, in operator form on the boundary,

$$\tilde\psi = \tilde\phi + \gamma\tilde G_0\tilde\psi \tag{4.7}$$

which we can solve formally:

$$\tilde\psi = \left[\tilde I - \gamma\tilde G_0\right]^{-1}\tilde\phi \tag{4.8}$$

where $\tilde\psi$, $\tilde\phi$ stand for the vectors of $\psi(s)$'s and $\phi(s)$'s on the boundary, and $\tilde I$ for the identity operator. The tildes remind us that the free Green function operator and the wave-vectors are evaluated only on the boundary. We define

$$\tilde T = \left[\tilde I - \gamma\tilde G_0\right]^{-1} \tag{4.9}$$

so that

$$\psi(r(s')) = \int ds\,T(s', s)\,\phi(r(s)). \tag{4.10}$$

In order to make contact with the standard t-matrix formalism in scattering theory [35], we note that a $T$ operator for the whole space may be written as

$$\hat T = \gamma\int ds\,ds'\,|r(s)\rangle\,T(s, s')\,\langle r(s')| \tag{4.11}$$
so that, on the boundary,

$$\tilde\psi = \tilde\phi + \gamma\tilde G_0\tilde T\tilde\phi. \tag{4.12}$$

For $\gamma \to \infty$, the operator $\gamma\tilde T$ converges to $-\tilde G_0^{-1}$. Inserting this into (4.12), we have

$$\tilde\psi = \left[\tilde I - \tilde G_0\tilde G_0^{-1}\right]\tilde\phi = 0. \tag{4.13}$$

So $\psi$ satisfies a Dirichlet boundary condition on the surface $C$ for $\gamma = \infty$.

In position space,

$$\psi(r) = \phi(r) + \int dr'\,dr''\,G_o(r, r'; E)\,t(r', r''; E)\,\phi(r'') \tag{4.14}$$

which we can simplify to

$$\psi(r) = \phi(r) + \int ds'\,G_o(r, r(s'); E)\,[t\phi](s') \tag{4.15}$$

where

$$[t\phi](s) = \gamma\int ds'\,T(s, s'; E)\,\phi(r(s')). \tag{4.16}$$

Now we enforce the Dirichlet boundary condition $\psi(r(s)) = 0$. That gives us a Fredholm integral equation of the first kind:

$$\int ds'\,G_o(r(s), r(s'); E)\,[t\phi](s') = -\phi(r(s)) \tag{4.19}$$

which we may solve for $[t\phi](s)$ (e.g., using standard numerical methods). Formally, we can write

$$[t\phi](s) = -\int ds'\,G_o^{-\mathbf 1}(s, s'; E)\,\phi(s') \tag{4.20}$$

where the new notation reminds us that the inverse is calculated only on the boundary. That is, $G_o^{-\mathbf 1}(s, s')$ satisfies

$$\int ds'\,G_o(r(s), r(s'); E)\,G_o^{-\mathbf 1}(s', s''; E) = \delta(s - s''). \tag{4.21}$$
where $\partial_{n(r(s))}$ denotes the partial derivative in the direction normal to the curve $r(s)$. We note that different parameterizations of the surfaces may yield different solutions. For example, consider the unit square in two dimensions. Standard periodic boundary conditions specify that

$$\psi(0, y) = \psi(1, y) \quad \forall y \in [0, 1]$$
$$\frac{\partial\psi}{\partial x}(0, y) = \frac{\partial\psi}{\partial x}(1, y) \quad \forall y \in [0, 1]$$
$$\psi(x, 0) = \psi(x, 1) \quad \forall x \in [0, 1]$$
$$\frac{\partial\psi}{\partial y}(x, 0) = \frac{\partial\psi}{\partial y}(x, 1) \quad \forall x \in [0, 1]. \tag{4.24}$$

We choose the surface $C_1$ as the left side and top of the square and $C_2$ as the right side and bottom of the box. We may choose a real parameter, $s \in [0, 2]$, where $s \in [0, 1]$ parameterizes the sides of the box from top to bottom and $s \in (1, 2]$ parameterizes the top and bottom from right to left. However, twisted periodic boundaries may also be considered:

$$\psi(0, y) = \psi(1, 1 - y) \quad \forall y \in [0, 1]$$
$$\frac{\partial\psi}{\partial x}(0, y) = \frac{\partial\psi}{\partial x}(1, 1 - y) \quad \forall y \in [0, 1]$$
$$\psi(x, 0) = \psi(1 - x, 1) \quad \forall x \in [0, 1]$$
$$\frac{\partial\psi}{\partial y}(x, 0) = \frac{\partial\psi}{\partial y}(1 - x, 1) \quad \forall x \in [0, 1]. \tag{4.25}$$
In our language, this simply means choosing a different parameterization of the two pieces of the box. To solve this problem, we now expand $\psi$ in terms of the free-space Green function on the boundary:

$$\psi(r) = \phi(r) + \int ds'\left[G(r, r_1(s'); E)\,f_1(s') + G(r, r_2(s'); E)\,f_2(s')\right]. \tag{4.26}$$

Imposing the boundary conditions on $\psi$ and its normal derivative gives

$$\phi(r_1(s)) + \int\left[G(r_1(s), r_1(s'); E)\,f_1(s') + G(r_1(s), r_2(s'); E)\,f_2(s')\right]ds' = \phi(r_2(s)) + \int\left[G(r_2(s), r_1(s'); E)\,f_1(s') + G(r_2(s), r_2(s'); E)\,f_2(s')\right]ds' \tag{4.27}$$

and

$$\partial_{n(r_1(s))}\phi(r_1(s)) + \int\left[\partial_{n(r_1(s))}G(r_1(s), r_1(s'); E)\,f_1(s') + \partial_{n(r_1(s))}G(r_1(s), r_2(s'); E)\,f_2(s')\right]ds' = \partial_{n(r_2(s))}\phi(r_2(s)) + \int\left[\partial_{n(r_2(s))}G(r_2(s), r_1(s'); E)\,f_1(s') + \partial_{n(r_2(s))}G(r_2(s), r_2(s'); E)\,f_2(s')\right]ds'. \tag{4.28}$$

This is a set of coupled Fredholm equations of the first type. To make this clearer we define:

$$a(s) = \phi(r_2(s)) - \phi(r_1(s))$$
$$a'(s) = \partial_{n(r_2(s))}\phi(r_2(s)) - \partial_{n(r_1(s))}\phi(r_1(s)) \tag{4.30}$$

and

$$G_1(s, s'; E) = G_o(r_1(s), r_1(s'); E) - G_o(r_2(s), r_1(s'); E)$$
$$G_2(s, s'; E) = G_o(r_1(s), r_2(s'); E) - G_o(r_2(s), r_2(s'); E)$$
$$G'_1(s, s'; E) = \partial_{n(r_1(s))}G_o(r_1(s), r_1(s'); E) - \partial_{n(r_2(s))}G_o(r_2(s), r_1(s'); E)$$
$$G'_2(s, s'; E) = \partial_{n(r_1(s))}G_o(r_1(s), r_2(s'); E) - \partial_{n(r_2(s))}G_o(r_2(s), r_2(s'); E) \tag{4.31}$$

so that the boundary conditions read

$$G_1f_1 + G_2f_2 = a, \qquad G'_1f_1 + G'_2f_2 = a'. \tag{4.32}$$
We may write this compactly as

$$\begin{pmatrix} G_1 & G_2 \\ G'_1 & G'_2 \end{pmatrix}\begin{pmatrix} f_1 \\ f_2 \end{pmatrix} = \begin{pmatrix} a \\ a' \end{pmatrix} \tag{4.33}$$

where the $G$'s are linear integral operators in the space of functions on the boundary and the $f$'s and $a$'s are vectors (functions) in that space. Formally, we can solve this equation:

$$\begin{pmatrix} f_1 \\ f_2 \end{pmatrix} = \begin{pmatrix} G_1 & G_2 \\ G'_1 & G'_2 \end{pmatrix}^{-1}\begin{pmatrix} a \\ a' \end{pmatrix}. \tag{4.34}$$
While we cannot usually invert this operator analytically, we can sample our boundary at a discrete set of points. We then construct and invert this operator in this nite dimensional space and, using this nite basis, construct an approximate solution to our scattering problem.
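Schematically, the discretization amounts to replacing each integral operator in the block equation with an $N \times N$ quadrature matrix, stacking the four blocks, and solving one $2N \times 2N$ linear system. The kernels and right-hand sides below are smooth placeholder functions (with identity terms added to keep the toy system well conditioned), not the actual Green-function combinations defined above:

```python
import numpy as np

N = 40
s = (np.arange(N) + 0.5) / N       # midpoints of N boundary pieces
ds = 1.0 / N

def op(kernel):
    """Nystrom (midpoint-rule) matrix of a kernel k(s, s')."""
    return kernel(s[:, None], s[None, :]) * ds

# placeholder kernels standing in for G1, G2, G1', G2'
G1 = op(lambda u, v: np.exp(-(u - v) ** 2)) + np.eye(N)
G2 = op(lambda u, v: np.cos(u - v))
G1p = op(lambda u, v: np.sin(u + v))
G2p = op(lambda u, v: np.exp(-(u + v))) + np.eye(N)

# stack the four operator blocks and solve the discrete analogue of the
# formal solution in one shot
A = np.block([[G1, G2], [G1p, G2p]])
rhs = np.concatenate([np.cos(2 * np.pi * s), np.sin(2 * np.pi * s)])
f = np.linalg.solve(A, rhs)
f1, f2 = f[:N], f[N:]

assert np.linalg.norm(A @ f - rhs) < 1e-8   # both coupled equations satisfied
```

The same pattern, with the placeholder kernels replaced by sampled Green functions and normal derivatives, gives the approximate solution described in the text.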
We now split the solution into a piece outside the step and a piece inside,

$$\psi_{out}(r) = \phi_{out}(r) + \int ds\,G_{out}(r, r(s); E)\,[t_{out}\phi](s) \tag{4.35}$$
$$\psi_{in}(r) = \phi_{in}(r) + \int ds\,G_{in}(r, r(s); E)\,[t_{in}\phi](s). \tag{4.36}$$

The incoming wave inside the potential step is puzzling at first. Because the basis in which we are expanding the solutions is not orthogonal, we have some freedom in choosing $\phi_{in}(r)$.

i) We can choose $\phi_{in}(r) = 0$, which corresponds to the entire inside wavefunction being produced at the boundary sources. This is mathematically correct but a little awkward when the potential step is small compared to the incident energy. When the step is small, the wavefunction inside and outside will be much like the incoming wave, which leads us to

ii) A more physical but harder to define choice: we take $\phi_{in}(r)$ to be a solution to the wave equation inside the step which has the property

$$\lim_{V_o\to 0}\phi_{in}(r) = \phi(r). \tag{4.37}$$

Though this seems ad hoc, it is mathematically as valid as choice (i) and has the nice property that

$$\lim_{V_o\to 0}[t_{in,out}\phi](s) = 0 \tag{4.38}$$

which is appealing.

Now we write our boundary conditions:

$$\psi_{out}(r(s)) = \psi_{in}(r(s))$$
$$\partial_{n(s)}\psi_{out}(r(s)) = \partial_{n(s)}\psi_{in}(r(s)) \tag{4.39}$$

or

$$\phi_{out}(r(s)) + \int G_{out}(r(s), r(s'); E)\,[t_{out}\phi](s')\,ds' = \phi_{in}(r(s)) + \int G_{in}(r(s), r(s'); E)\,[t_{in}\phi](s')\,ds' \tag{4.40}$$

$$\partial_{n(s)}\phi_{out}(r(s)) + \int\partial_{n(s)}G_{out}(r(s), r(s'); E)\,[t_{out}\phi](s')\,ds' = \partial_{n(s)}\phi_{in}(r(s)) + \int\partial_{n(s)}G_{in}(r(s), r(s'); E)\,[t_{in}\phi](s')\,ds'. \tag{4.41}$$
The above is a set of coupled Fredholm equations of the first type. To make this clearer we define:

$$a(s) = \phi_{in}(r(s)) - \phi_{out}(r(s))$$
$$a'(s) = \partial_{n(s)}\phi_{in}(r(s)) - \partial_{n(s)}\phi_{out}(r(s)) \tag{4.42}$$

and

$$G_o(s, s'; E) = G_{out}(r(s), r(s'); E)$$
$$G_i(s, s'; E) = G_{in}(r(s), r(s'); E)$$
$$G'_o(s, s'; E) = \partial_{n(s)}G_{out}(r(s), r(s'); E)$$
$$G'_i(s, s'; E) = \partial_{n(s)}G_{in}(r(s), r(s'); E) \tag{4.43}$$

with all the $G$'s and $\phi$'s given. We may schematically represent this as a matrix equation:

$$\begin{pmatrix} G_o & -G_i \\ G'_o & -G'_i \end{pmatrix}\begin{pmatrix} v \\ w \end{pmatrix} = \begin{pmatrix} a \\ a' \end{pmatrix} \tag{4.44}$$

with the formal solution

$$\begin{pmatrix} v \\ w \end{pmatrix} = \begin{pmatrix} G_o & -G_i \\ G'_o & -G'_i \end{pmatrix}^{-1}\begin{pmatrix} a \\ a' \end{pmatrix}. \tag{4.45}$$

This formal solution is not much use except perhaps in a special geometry. However, it does lead directly to a numerical scheme. Simply discretize the boundary by breaking it into $N$ pieces $\{C_i\}$ of length $\Delta$. Label the center of each piece by $s_i$ and change all the integrals in the integral equations to sums over $i$. Now the schematic matrix equation actually becomes a $2N \times 2N$ matrix equation, which we can solve however we like.
We might also worry about multiple step edges or different steps inside each other. All this will work as well, but we will get a set of equations for each interface, so the problem may get quite costly. This would not be a sensible way to handle a smoothly varying potential. However, as noted at the beginning, the formalism here works for any known $G_{in}$ and $G_{out}$, and so certain smooth potentials may be handled if their Green functions are known.
We approximate the boundary integral as a sum over the boundary pieces:

$$\psi(r) = \phi(r) + \gamma\int_C ds\,G_0(r, r(s))\,\psi(r(s)) \approx \phi(r) + \gamma\sum_j\psi(r(s_j))\int_{C_j}ds\,G_0(r, r(s)) \tag{4.46}$$

with $s_j$ the middle point of $C_j$ and $r_j = r(s_j)$. Now, considering $r = r_i$, we write $\psi(r_i) = \phi(r_i) + \sum_{j=1}^N M_{ij}\psi(r_j)$ (for $\mathbf M$, see discussion below). If $\psi = (\psi(r_1), \ldots, \psi(r_N))$ and $\phi = (\phi(r_1), \ldots, \phi(r_N))$, we have $\psi = \phi + \mathbf M\psi$, and thus $\psi = \mathbf T\phi$, with $\mathbf T = (\mathbf I - \mathbf M)^{-1}$, which is the discrete $\mathbf T$ matrix. So

$$\psi_i = (\mathbf T\phi)_i = \sum_{j=1}^N\left[(\mathbf I - \mathbf M)^{-1}\right]_{ij}\phi_j \tag{4.47}$$

and

$$\psi(r) \approx \phi(r) + \gamma\sum_{j=1}^N G_0(r, r_j)\,\Delta_j\,(\mathbf T\phi)_j \tag{4.48}$$

where we have used a mean value approximation to the last integral in (4.46) and defined $\Delta_j$, the volume of $C_j$.
Here

$$M_{ij} = \gamma\int_{C_j}ds\,G_0(r_i, r(s)). \tag{4.49}$$

We can approximate

$$M_{ij} \approx \gamma\,G_0(r_i, r_j)\,\Delta_j. \tag{4.50}$$

However, $G_0(r_i, r_j)$ may diverge for $i = j$ (e.g., the free particle Green functions in two or more dimensions). We discuss these approximations in detail in Section 4.6.3. If we consider $\gamma \to \infty$, it is easy to show from the above results that

$$\psi(r) \approx \phi(r) - \gamma\sum_{j=1}^N G_0(r, r_j)\,\Delta_j\,(\mathbf M^{-1}\phi)_j \tag{4.51}$$

where the combination $\gamma\,\mathbf M^{-1}$ is independent of $\gamma$.
Equation (4.51) is then the approximated wave function of a particle under H0 interacting with an impenetrable region C .
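The $\gamma \to \infty$ recipe above is easy to exercise numerically. The sketch below (units $\hbar^2/2m = 1$, so $G_0^+(r, r') = -(i/4)H_0^{(1)}(k|r - r'|)$) builds the $\gamma$-stripped matrix $\mathbf M$ for a flat wall, integrating the divergent diagonal pieces analytically via the small-argument expansion of $H_0^{(1)}$, and checks that the resulting Dirichlet wall casts a deep shadow on a normally incident plane wave. The wall size, discretization and detector placement are illustrative choices:

```python
import numpy as np
from scipy.special import hankel1

k = 2 * np.pi                        # wavelength 1
l, tau = 10.0, 8                     # wall length; pieces per wavelength
N = int(l * tau)
delta = l / N
x = -l / 2 + (np.arange(N) + 0.5) * delta
wall = np.stack([x, np.zeros(N)], axis=1)     # flat wall along the x-axis

def G0(r1, r2):
    return -0.25j * hankel1(0, k * np.linalg.norm(r1 - r2))

# M with gamma stripped out: off-diagonal by the midpoint rule, diagonal by
# integrating the small-argument form of H0^(1) over one piece
M = np.zeros((N, N), dtype=complex)
for i in range(N):
    for j in range(N):
        if i != j:
            M[i, j] = G0(wall[i], wall[j]) * delta
euler = 0.5772156649015329
M[np.diag_indices(N)] = -0.25j * delta * (
    1 + (2j / np.pi) * (np.log(k * delta / 4) - 1 + euler))

phi = lambda r: np.exp(1j * k * r[1])         # plane wave incident along +y
src = np.linalg.solve(M, np.array([phi(p) for p in wall]))

def psi(r):
    """Eq. (4.51): the gamma -> infinity (Dirichlet) wave function."""
    return phi(r) - sum(G0(r, wall[j]) * delta * src[j] for j in range(N))

# the wall should cast a deep shadow: intensity well below the incident |phi|^2 = 1
shadow = np.mean([abs(psi(np.array([xd, 2.0]))) ** 2
                  for xd in np.linspace(-1.0, 1.0, 9)])
assert shadow < 0.1
```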
For finite $\gamma$, the solution takes the analogous form

$$\psi(r) \approx \phi(r) + \sum_{j=1}^N G_0(r, r_j)\,(\mathbf B^{-1}\phi)_j \tag{4.53}$$

and we notice that $\mathbf B$ has the same off-diagonal elements as the multiple scattering matrix. The diagonal elements of $\mathbf B$ have the form (where $k = \sqrt E$)

$$B_{ii} = \frac{1}{\gamma l} - \frac{1}{l}\int_{-l/2}^{l/2}G(r_i, r(s_i + x); E)\,dx \tag{4.54}$$

where $l$ is the length of a boundary piece.
If we want to identify this with a multiple scattering problem, we must have $1/T_i(E)$ given by the low energy form of the point interaction t-matrix discussed in section 2.4 for a scatterer of scattering length $l/2e$. Thus, in the many scatterer limit ($kl \ll 1$), the Dirichlet boundary wall method becomes the multiple scattering of many pointlike scatterers along the boundaries, where each scatterer has scattering length $l/2e$.

The numerical solution (4.53) approaches the solution (4.5) as $N \to \infty$. In practice, we choose $N$ to be some finite but large number. In this section we explain how to choose $N$ for a given problem and how the approximation (4.50) affects this choice.

In order to analyze the performance of the numerical solution, we must define some measure of the quality of the solution. We measure how well a Dirichlet boundary blocks the flow of current directed at it. Thus we measure the current,

$$\mathbf j = \Im\left\{\psi^*(r)\nabla\psi(r)\right\} \tag{4.55}$$
behind a straight wall of length l. To simplify the analysis we integrate j n over a \detector" located on one side of a wall with a normally incident plane wave on the other side. We divide this integrated current by the current which would have been incident on the detector without the wall present. We call this ratio, T , the transmission coe cient of the wall. Instead of T as a function of N , we consider T vs. , where = 2 N=(lk) is the number of boundary pieces per wavelength. We consider three methods of constructing the matrix M for each value of . The rst is the simplest approximation:
M_ij = ∫_{C_i} ds G0(r_i, r(s))   for i = j
M_ij = G0(r_i, r_j)               for i ≠ j      (4.56)

which we call the "fully-approximated" M. The next is a more sophisticated approximation with

M_ij = ∫_{C_j} ds G0(r_i, r(s))   for |s_i − s_j| < β/k
M_ij = G0(r_i, r_j)               for |s_i − s_j| ≥ β/k      (4.57)

which we call the "band-integrated" M because we perform the integrals only inside a band of β/(2π) wavelengths. Finally, we consider

M_ij = ∫_{C_j} ds G0(r_i, r(s))   for all i, j      (4.58)

which we call the "integrated" M. Numerically, the "band-integrated" and "integrated" M require far more computational work than the "fully-approximated" M, which requires the fewest integrals. All three methods of calculating M scale as O(N²). The calculation of T from M scales as O(N³) and the calculation of ψ(r) for a particular r from a given T scales as O(N). Which of these various calculations dominates the computation time depends on what sort of computation is being performed. When computing wavefunctions, computation time is typically dominated by the large number of O(N) vector multiplications. However, when calculating ψ(r) in only a small number of places, e.g., when performing a flux calculation, computation time is often dominated by the O(N³) construction of T. In Figure 4.1 we plot log₁₀ T vs. β for the three methods above and 2 ≤ β ≤ 30. We see that all three methods block more than 99% of the current for β > 5. However, it is clear from the Figure that the "integrated" M, and to a lesser extent the "band-integrated" M, strongly outperform the "fully-approximated" M for all β plotted.
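The "fully-approximated" scheme above can be sketched in a few lines of code. This is an illustrative sketch under stated assumptions, not the thesis's actual code: the wavenumber, wall length, and piece density are hypothetical choices, the diagonal self-integral uses the small-argument expansion of the Hankel function, and the off-diagonal elements carry an explicit piece-length weight.

```python
import numpy as np
from scipy.special import hankel1

k = 2 * np.pi                 # wavenumber (wavelength = 1), hypothetical
l = 4.0                       # wall length: four wavelengths
beta = 8                      # boundary pieces per wavelength, hypothetical
N = int(beta * l)             # number of boundary pieces
d = l / N                     # piece length

# wall lies on the y-axis: x = 0, y in [-l/2, l/2]
yj = (np.arange(N) + 0.5) * d - l / 2
pts = np.column_stack([np.zeros(N), yj])

def G0(r, rp):
    """2D free-space Green function, -(i/4) H0^(1)(k |r - rp|)."""
    rr = np.linalg.norm(np.atleast_2d(rp) - np.asarray(r), axis=1)
    return -0.25j * hankel1(0, k * rr)

# "fully-approximated" M: bare G0 (times the piece length) off the
# diagonal; on the diagonal, the self-integral of the log singularity
# evaluated with the small-argument Hankel expansion (an assumption).
dist = np.abs(yj[:, None] - yj[None, :])
np.fill_diagonal(dist, 1.0)               # dummy values, replaced below
M = -0.25j * hankel1(0, k * dist) * d
gamma = 0.5772156649015329                # Euler-Mascheroni constant
np.fill_diagonal(M, -0.25j * d * (1 + (2j / np.pi)
                                  * (np.log(k * d / 4) - 1 + gamma)))

phi = np.exp(1j * k * pts[:, 0])          # unit plane wave along +x
c = np.linalg.solve(M, phi)               # c = M^{-1} phi

def psi(r):
    return np.exp(1j * k * r[0]) - d * np.sum(G0(r, pts) * c)

shadow = abs(psi((1.0, 0.0))) ** 2        # one wavelength behind the wall
on_wall = abs(psi((0.0, yj[3] + d / 2)))  # on the wall, between two nodes
print(shadow, on_wall)                    # both should be small
```

The solve for c is the O(N³) step; evaluating ψ at a single point is the O(N) step discussed above.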
Figure 4.1: Transmission (at normal incidence) through a flat wall via the Boundary Wall method.
where T(s, s′) = −[G^o(s, s′)]⁻¹. So

( f₁ )   ( G₁   G₂  )⁻¹ ( a  )
( f₂ ) = ( G₁′  G₂′ )   ( a′ )

If we define

μ(r) = G^o(r, r′; E),   ν(r) = G^o(r, r″; E)

we have

G(r, r′; E) = G^o(r, r′; E) + ⋯
Though this looks like we have to solve many more equations than just to get the wavefunction, we note that the operator inverse which we need to get the wavefunction is sufficient to get the Green function, just as in the Dirichlet case. We simply apply that inverse to more vectors. Thus for all boundary conditions, the Green function requires extra matrix-vector multiplication work but the same amount of matrix inversion work.
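Numerically, "applying the same inverse to more vectors" means factoring the boundary matrix once and reusing the factorization. A minimal sketch, with a random complex matrix standing in for the boundary operator:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
N = 200
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

# Factor once: the O(N^3) step shared by wavefunction and Green
# function calculations.
lu, piv = lu_factor(M)

# Wavefunction: one right-hand side (the incident wave on the boundary).
phi = rng.standard_normal(N) + 1j * rng.standard_normal(N)
c_wave = lu_solve((lu, piv), phi)

# Green function: the same factorization applied to many more
# right-hand sides (one column per source point), each only O(N^2).
sources = rng.standard_normal((N, 50)) + 1j * rng.standard_normal((N, 50))
c_green = lu_solve((lu, piv), sources)

assert np.allclose(M @ c_wave, phi)
assert np.allclose(M @ c_green, sources)
```

The matrix here is a stand-in; in the thesis's setting M would be the discretized boundary Green function operator.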
4.8 Eigenstates
It is also useful to be able to use the above methods to identify eigenenergies and eigenstates (if they exist) of the above boundary conditions. This is actually quite simple. All of the various cases involved inverting some sort of generalized Green function operator on the boundary. This inverse is a generalized t-matrix and its poles correspond to eigenstates. Poles of t correspond to linear zeroes of G, and so we may use standard techniques to check for a singular operator. If the operator we are inverting is singular, its nullspace holds the coefficients required to form the eigenstate. A more concrete explanation of this can be found in section 9.1.
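One standard technique for the singularity check is the singular value decomposition: a vanishing smallest singular value signals a singular operator, and the corresponding right-singular vector spans the nullspace. A minimal sketch, with a small matrix constructed to have a known null vector standing in for the boundary operator at an eigenenergy:

```python
import numpy as np

def eigenstate_from_boundary_operator(G, tol=1e-8):
    """If the (discretized) boundary operator G is numerically singular,
    return a nullspace vector; its entries are the coefficients that
    build the eigenstate.  Otherwise return None."""
    u, s, vh = np.linalg.svd(G)
    if s[-1] / s[0] < tol:       # smallest singular value ~ 0
        return vh[-1].conj()     # right-singular vector of the null direction
    return None

# toy check: force a known vector v into the nullspace of a 3x3 matrix
B = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
v = np.array([1.0, -2.0, 1.0]) / np.sqrt(6.0)
A = B - np.outer(B @ v, v)       # now A v = 0 by construction

null = eigenstate_from_boundary_operator(A)
assert null is not None
assert np.allclose(A @ null, 0, atol=1e-10)
```

In practice one scans the energy, watching the smallest singular value of the boundary operator dip toward zero at each eigenenergy.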
Chapter 5
Figure 5.1: A periodic wire with one scatterer and an incident particle.

We begin with a simple classical argument. Suppose a particle in the wire is incident at angle θ with respect to the walls, as in figure 5.1. What is the probability that such a particle scatters? For a small scatterer, the probability is approximately P(θ) = σ/(W cos θ). Of course, this must break down before P(θ) > 1, but for σ ≪ W this will be only a small range of θ. We now need to know how the various incident angles are populated in a particular scattering process. For this, we must think more carefully about the physical system we have in mind. In our case, this is a "two probe" conductance measurement on our wire, as pictured in figure 5.2. Our physical system involves connecting our wire to contacts which serve as reservoirs of scattering particles (e.g., electrons), running a fixed current, I, through the system and then measuring the voltage, V. Theoretically, it is the ratio of current to voltage which is interesting, thus we define the unitless conductance of this system, g = (h/e²) I/V.
Figure 5.2: "Experimental" setup for a conductance measurement. The wire is connected to ideal contacts and the voltage drop at fixed current is measured.

In such a setup all of the transverse quantum channels are populated with equal probability. Since the quantum channels are uniformly distributed in momentum, we have for the probability density of finding a particular transverse wavenumber ρ(k_y) dk_y = dk_y/(2√E). We also know that k_y = √E sin θ, and together these give the density of incoming angles in the plane of the scatterer, ρ(θ) dθ = (1/2) cos θ dθ. We'll also assume the scattering is isotropic: half of the scattered wave scatters backward. So, we have

R = (1/2) ∫_{−π/2}^{π/2} P(θ) ρ(θ) dθ = πσ/(4W).  (5.1)

The maximum cross section of a zero range interaction in two dimensions is 4/√E (see 2.4) and corresponds to scattering all incoming s-waves. If we put this into (5.1), we get R = π/(√E W). Interestingly, this is exactly one quantized channel of reflection. The wire has as many channels as half wavelengths fit across it, N_c = √E W/π, and the reflection coefficient is simply 1/N_c, indicating that one channel of reflection in free space (the s-wave channel) is also one channel of reflection in the wire, though no longer one specific channel. We can check this conclusion numerically (the numerical techniques will be discussed later in the chapter) as a check on the renormalization technique and the numerical method. In figure 5.3 we plot the numerically computed reflection coefficient and the theoretical value of 1/N_c for wires of varying widths, from 15 half wavelengths to 150 half wavelengths. The expected behavior is nicely confirmed, although there is a noticeable discrete jump at a couple of widths where new channels have just opened.
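The identity between the quasi-classical reflection and one quantized channel can be checked in a couple of lines. The widths below are hypothetical sample values; the check is that πσ/(4W) with the maximal cross-section σ = 4/√E equals 1/N_c exactly:

```python
import numpy as np

E = 100.0                       # energy, in units where hbar^2/2m = 1
sigma = 4.0 / np.sqrt(E)        # maximal 2D zero-range cross-section

for W in (5.0, 10.0, 20.0):     # hypothetical wire widths
    R = np.pi * sigma / (4 * W)         # quasi-classical reflection (5.1)
    Nc = np.sqrt(E) * W / np.pi         # number of open channels
    print(W, R, 1 / Nc)
    assert np.isclose(R, 1 / Nc)        # exactly one channel of reflection
```

The equality is algebraically exact; the numerics in figure 5.3 test how well the full quantum calculation reproduces it.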
Figure 5.3: Reflection coefficient of a single scatterer in a wide periodic wire (numerical data vs. the quasi-classical value).

As the wire becomes narrower (at the left of the figure) we see that the agreement between the measured value and the quasi-classical theory is poorer. This is no surprise since our quasi-classical argument is bound to break down as the height of the wire becomes comparable to the wavelength and scatterer size. This is a hint of what we will see in section 5.6 where the limit of the narrow wire is considered. Before we consider that problem, we develop some necessary machinery. First we compute the Green function of the empty periodic wire; we then consider the renormalization of the scattering amplitude in a wire and the connection between Green functions and transmission coefficients. It is interesting to watch the transition from wide to narrow in terms of scattering channels. Above, we saw that the scattering from one scatterer in a wide wire can be understood classically. As we will see later in the chapter, scattering in the narrow wire (W < λ) is much more complex. If we shrink the wire from wide to narrow we can watch this transition occur. This is shown in figure 5.4.
Figure 5.4: Number of scattering channels blocked by one scatterer in a periodic wire of varying width.
⟨x y|k a⟩ = e^{ikx} χ_a(y)  (5.2)

where χ_a(y) satisfies

−(d²/dy²) χ_a(y) = ε_a χ_a(y)  (5.3)

χ_a(0) = χ_a(W)  (5.4)

χ_a′(0) = χ_a′(W)  (5.5)

∫₀^W |χ_a(y)|² dy = 1  (5.6)
is an eigenstate of the infinite ordered periodic wire. We can write the Green function of the infinite ordered wire as (see, e.g., [10], or appendix A)

Ĝ_B⁺(z) = Σ_a ∫_{−∞}^{∞} dk |k a⟩⟨k a| / (z − ε_a − k² + iε).  (5.7)
In order to perform the diagonal subtraction required to renormalize the single scatterer t-matrices (see section 3.2) we need to compute the Green function in position representation. Equations 5.3-5.6 are satisfied by
χ₀(y) = 1/√W
χ_a^{(0)}(y) = √(2/W) sin(2πay/W)
χ_a^{(1)}(y) = √(2/W) cos(2πay/W)  (5.8)

with ε_a = 4π²a²/W²,
where the cos and sin solutions are degenerate for each a. Since the eigenbasis of the wire is a product basis (the system is separable) we can apply the result of appendix A, section A.4, and we have:

Ĝ_B⁺(z) = Σ_a |a⟩⟨a| ĝ_o⁺(E − ε_a)  (5.9)
or, in the position representation (we will switch between the vector r and the pair x, y frequently in what follows),

G_B⁺(r, r′; E) = G_B⁺(x, y, x′, y′; E) = Σ_a χ_a(y) χ_a(y′) g_o⁺(x, x′; E − ε_a)  (5.10)
where

g_o⁺(x, x′; z) = (1/2π) ∫_{−∞}^{∞} e^{ik(x−x′)} / (z − k² + iε) dk
             = −(i/(2√z)) exp(i√z |x − x′|)        for z > 0
             = −(1/(2√|z|)) exp(−√|z| |x − x′|)    for z < 0.
When doing the Green function sum, we have to sum over all of the degenerate states at each energy. Thus, for all but the lowest energy mode (which is non-degenerate), the y-part of the sum looks like:

sin(2πay/W) sin(2πay′/W) + cos(2πay/W) cos(2πay′/W) = cos(2πa(y − y′)/W)  (5.11)

which is sensible since the Green function of the periodic wire can depend only on y − y′. So, at this point, we have

G_B⁺(x, y, x′, y′; E) = (1/W) g_o⁺(x, x′; E) + (2/W) Σ_{a=1}^{∞} cos(2πa(y − y′)/W) g_o⁺(x, x′; E − 4π²a²/W²).  (5.12)
As nice as this form for GB is, we need to do some more work. To renormalize free space scattering matrices we need to perform the diagonal subtraction discussed in section 3.2.2. In order for that subtraction to yield a nite result, GB must have a logarithmic diagonal singularity. The next bit of work is to make this singularity explicit. + It is easy to see where the singularity will come from. Since go (x x ;j j) 1 for E real there exists an M such that,
p
G+ (x y x y E ) const. + 21 B
a=M a
X1
1
(5.13)
which diverges. We now proceed to extract the singularity more systematically. We begin by substituting the de nition of go and explicitly splitting the sum into two parts. The rst part is a nite sum and includes energetically allowed transverse modes (open channels). For these modes the energy argument to go is positive and waves can propagate down the wire. The rest of the sum is over energetically forbidden transverse modes (closed channels). For these modes, waves in the x direction are evanescent. We p de ne N , the greatest integer such that E ; 4 2 N 2 =W > 0, ka = E ; 4 2 a2 =W 2 and p a = 4 2 a2 =W 2 ; E and have
G_B⁺(x, y, x′, y′; E) = −(i e^{i√E|x−x′|}) / (2W√E)
  − (i/W) Σ_{a=1}^{N} cos(2πa(y − y′)/W) e^{i k_a |x−x′|} / k_a
  − (1/W) Σ_{a>N} cos(2πa(y − y′)/W) e^{−κ_a |x−x′|} / κ_a.  (5.14)
In order to extract the singularity, we add and subtract a simpler infinite sum (see D.1),

(1/2π) Σ_{a=1}^{∞} cos(2πa(y − y′)/W) exp(−2πa|x − x′|/W) / a
  = (1/4π) ln [ exp(2π|x − x′|/W) / (2 cosh(2π(x − x′)/W) − 2 cos(2π(y − y′)/W)) ]  (5.15)
to G_B. This gives

G_B⁺(x, y, x′, y′; E) = −(i e^{i√E|x−x′|}) / (2W√E)
  − (1/W) Σ_{a=1}^{N} cos(2πa(y − y′)/W) [ (i/k_a) e^{i k_a |x−x′|} − (W/2πa) e^{−2πa|x−x′|/W} ]
  − (1/W) Σ_{a>N} cos(2πa(y − y′)/W) [ (1/κ_a) e^{−κ_a |x−x′|} − (W/2πa) e^{−2πa|x−x′|/W} ]
  − (1/4π) ln [ exp(2π|x − x′|/W) / (2 cosh(2π(x − x′)/W) − 2 cos(2π(y − y′)/W)) ].  (5.16)

This is an extremely useful form for numerical work. For x ≠ x′ or y ≠ y′ we have transformed a slowly converging sum into a much more quickly converging sum. This is dealt with in detail in D.2.2. In this form, the singular part of the sum is in the logarithm term and the rest of the expression is convergent for all x − x′, y − y′. In fact, the remaining infinite sum is uniformly convergent for all x − x′, as shown in (D.2.2). We can now perform the diagonal subtraction of G_o. We begin by considering the x′ → x, y′ → y limit of G:

lim_{x′→x, y′→y} G_B⁺(x, y, x′, y′; E) = −i/(2W√E) − (1/W) Σ_{a=1}^{N} [ i/k_a − W/(2πa) ]
  − (1/W) Σ_{a>N} [ 1/κ_a − W/(2πa) ]  (5.17)
  − (1/4π) lim_{x′→x, y′→y} ln [ exp(2π|x − x′|/W) / (2 cosh(2π(x − x′)/W) − 2 cos(2π(y − y′)/W)) ].

We can use equation D.6 to simplify the limit of the logarithm:

−(1/4π) lim_{x′→x, y′→y} ln [ exp(2π|x − x′|/W) / (2 cosh(2π(x − x′)/W) − 2 cos(2π(y − y′)/W)) ]
  = (1/4π) lim_{x′→x, y′→y} ln [ (2π/W)² ((x − x′)² + (y − y′)²) ]  (5.18)
  = (1/2π) ln(2π/W) + (1/2π) lim_{r′→r} ln |r − r′|.  (5.19)
we have

(5.20)

which is independent of x and y, as it must be for a translationally invariant system, and finite, as proved in section 2.3.1. The case of a Dirichlet bounded wire is very similar and so there is no need to repeat the calculation. For the sake of later calculations, we state the results here. We have

(5.21)

and

(5.22)
G⁺(E) = G_B⁺(E) + G_B⁺(E) T⁺(E) G_B⁺(E)  (5.23)

where T⁺(E) is computed via the techniques in chapter 3, namely renormalization of the free space t-matrices.
We begin with a single scatterer in free space at x = y = 0. The t-matrix of that scatterer is (see section 2.4)

t⁺(E) = s⁺(E) |0⟩⟨0|  (5.24)

which is renormalized by scattering from the boundaries of the wire:

t̄⁺(E) = s̄⁺(E) |0⟩⟨0|  (5.25)

where

1/s̄⁺(E) = 1/s⁺(E) − G_B⁺(0, 0; E).  (5.26)
Explicitly, for the periodic wire,

1/s̄⁺(E) = i/(2W√E) + (i/W) Σ_{a=1}^{N} [ 1/k_a − W/(2πia) ] + (1/W) Σ_{a>N} [ 1/κ_a − W/(2πa) ]
  − (1/2π) ln [ 2π/(W√E) ] + (1/4) Y_o^{(R)}(0)  (5.27)

and, for the Dirichlet bounded wire with the scatterer at transverse position y,

1/s̄⁺(E) = (2i/W) Σ_{a=1}^{N} sin²(πay/W) / k_a + (2/W) Σ_{a>N} sin²(πay/W) [ 1/κ_a − W/(2πa) ]
  + (1/4π) ln sin²(πy/W) − (1/2π) ln [ √E W/π ] + (1/4) Y_o^{(R)}(0).  (5.28)
g = Tr(T†T)  (5.30)
where T is the transmission matrix, i.e., (T)_ab is the amplitude for transmitting from channel a in the left lead to channel b in the right lead. T is constructed from G⁺(r, r′; E) via

(T)_ab = −i v_a √(k_a/k_b) G_ab⁺(x, x′; E) exp[ −i(k_b x′ − k_a x) ]  (5.31)

where

G_ab⁺(x, x′; E) = ⟨x a| Ĝ⁺(E) |x′ b⟩ = ∫∫ χ_a(y) G⁺(x, y, x′, y′; E) χ_b(y′) dy dy′  (5.32)

is the Green function projected onto the channels of the leads. Since the choices of x and x′ are arbitrary, we can choose them large enough that all the evanescent modes are arbitrarily small and thus we can ignore the closed channels. So there are only a finite number of propagating modes and the trace in the Fisher-Lee relation is a finite sum. We note that the prefactor v_a √(k_a/k_b) is there simply because we have normalized our channels via ∫₀^W |χ_a(y)|² dy = 1 rather than to unit flux. More detail on this is presented in [16] and even more in the review [40].
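Once the finite transmission matrix is assembled from the projected Green function, the conductance is a single trace. A minimal sketch of that last step, with a hypothetical channel-diagonal T whose per-channel transmission probabilities are made-up values:

```python
import numpy as np

def conductance(Tmat):
    """Dimensionless two-probe conductance g = Tr(T^dagger T) (5.30)."""
    return np.real(np.trace(Tmat.conj().T @ Tmat))

# toy example: three open channels with assumed |t_a|^2 and phases
probs = np.array([1.0, 0.5, 0.1])
phases = np.exp(1j * np.array([0.3, 1.1, 2.0]))
Tmat = np.diag(np.sqrt(probs) * phases)   # channel-diagonal transmission

g = conductance(Tmat)
print(g)
assert np.isclose(g, probs.sum())         # g = sum of channel transmissions
```

In the thesis's setting Tmat would instead be filled channel by channel from (5.31); the phases drop out of the trace, so g depends only on the transmission probabilities.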
Recall

Ĝ_B⁺(z) = Σ_a ∫_{−∞}^{∞} dk |k a⟩⟨k a| / (z − ε_a − k² + iε)

so

⟨x a| Ĝ_B⁺(z) |x′ b⟩ = δ_ab (1/2π) ∫_{−∞}^{∞} e^{ik(x−x′)} / (z − ε_a − k² + iε) dk = δ_ab g_o⁺(x, x′; z − ε_a).

Since a single scatterer at r_s = (x_s, y_s) enters through its renormalized strength S⁺(z),

G_ab⁺(x, x′; z) = δ_ab g_o⁺(x, x′; z − ε_a) + g_o⁺(x, x_s; z − ε_a) χ_a(y_s) S⁺(z) χ_b(y_s) g_o⁺(x_s, x′; z − ε_b).
If the t-matrix comes from the multiple scattering of many zero range interactions, the t-matrix can be written:

t̂(z) = Σ_{ij} |r_i⟩ S_ij⁺(z) ⟨r_j|.  (5.36)

In that case we will have a slightly more complicated expression for the channel-to-channel Green function:

G_ab⁺(x, x′; z) = δ_ab g_o⁺(x, x′; z − ε_a) + Σ_{ij} g_o⁺(x, x_i; z − ε_a) χ_a(y_i) S_ij⁺(z) χ_b(y_j) g_o⁺(x_j, x′; z − ε_b).
Figure 5.5: Transmission coefficient of a single scatterer in a narrow periodic wire.

It is possible to define a sort of cross-section in one dimension, for instance via the d = 1 optical theorem from section 2.2. In figure 5.6, we plot the cross-section of the scatterer rather than the transmission coefficient.
Figure 5.6: Cross-section of a single scatterer in a narrow periodic wire.

There are various unexpected features of this transmission. In particular, why should the transmission go to 0 at some finite wire width? Also, why should the transmission go to 0 for a zero width wire? A less obvious feature, but one which is perhaps more interesting, is that for very small widths the behavior of the transmission coefficient is independent of the original scattering strength. These features are consequences of the renormalization of the scattering amplitude. To see how this transmission occurs, we compute the full channel-to-channel Green function for one scatterer in the center (x = 0) of a narrow (W → 0) periodic wire. Since the wire is narrow, there is only one open channel, so the channel-to-channel Green function has only one term:

lim_{W→0} G₀₀⁺(x, y, x′, y′; E) = g_o⁺(x, x′; E) + g_o⁺(x, 0; E) (1/√W) [ lim_{W→0} S⁺(E) ] (1/√W) g_o⁺(0, x′; E).
Here

S⁺(E) = [ 1/s⁺(E) − G_B⁺(0, 0; E) ]⁻¹.  (5.38)-(5.39)

For small W,

1/κ_a ≈ (W/2πa) [ 1 + EW²/(8π²a²) ].  (5.40)
So

1/S⁺(E) ≈ i/(2W√E) − (1/2π) ln(2πa/W) + 1.2 EW²/(16π³).  (5.41)

Using the definition of g_o and factoring out the large i/(2W√E) in 1/S⁺ we have

lim_{W→0} G₀₀⁺(x, y, x′, y′; E) ≈ −(i/(2√E)) e^{i√E|x−x′|}
  + (i/(2√E)) e^{i√E(|x|+|x′|)} [ 1 + (iW√E/π) ( ln(2πa/W) − 1.2 E^{3/2} W³/(16π³) ) ]⁻¹.  (5.42)

Since we are interested only in transmission, we can assume x < 0 and x′ > 0 so |x − x′| = |x| + |x′|. With this caveat, we have

lim_{W→0} G₀₀⁺(x, y, x′, y′; E) ≈ (W/2π) e^{i√E(|x|+|x′|)} [ ln(2πa/W) − 1.2 E^{3/2} W³/(16π³) ]  (5.44)

and so

(T)₀₀ = −i (√E W/π) [ ln(2πa/W) − 1.2 E^{3/2} W³/(16π³) ]  (5.45)

since, in units where ħ²/2m = 1, v = 2√E. We have plotted this small-W approximation |(T)₀₀|² in figure 5.5. We see that there is good quantitative agreement for widths of fewer than 1/4 wavelength. What is perhaps more surprising is the reasonably good qualitative agreement for the entire range of one wavelength.
Chapter 6
(6.1)
[ (ħ²/2m)∇² + E ] ψ(r) = 0  (6.2)

in the domain D, subject to the boundary condition

r ∈ ∂D ⇒ ψ(r) = 0  (6.3)

where ∂D is the boundary of D. As in the previous chapter we set ħ²/2m = 1. Our equation reads

( ∂²/∂x² + ∂²/∂y² + E ) ψ(x, y) = 0.  (6.4)
The eigenfunctions of L = −(∂²/∂x² + ∂²/∂y²) in the above domain with the given boundary condition are

ψ_nm(x, y) = (2/√(lW)) sin(nπx/l) sin(mπy/W)  (6.5)

which satisfy

−( ∂²/∂x² + ∂²/∂y² ) ψ_nm(x, y) = ( n²π²/l² + m²π²/W² ) ψ_nm(x, y).  (6.6)
We can thus write down a box Green function in the position representation

G_B(x, y, x′, y′; E) = (4/lW) Σ_{n,m=1}^{∞} sin(nπx/l) sin(nπx′/l) sin(mπy/W) sin(mπy′/W) / (E − n²π²/l² − m²π²/W²)  (6.7)

which satisfies

[ z + ∂²/∂x² + ∂²/∂y² ] G_B(x, y, x′, y′; z) = δ(x − x′) δ(y − y′).  (6.8)

The product of transverse sines can be rewritten as

sin(mπy/W) sin(mπy′/W) = (1/2) [ cos(mπ(y − y′)/W) − cos(mπ(y + y′)/W) ].  (6.9)
We define

k_n = √(E − n²π²/l²),  κ_n = √(n²π²/l² − E),  N = [ l√E/π ]

where [x] is the greatest integer less than or equal to x, and then apply a standard trigonometric identity to the product of sines in the inner sum, namely

Σ_{m=1}^{∞} cos(m v) / (m² − α²) = 1/(2α²) − (π/(2α)) cos(α(π − v)) / sin(απ)  (6.13)-(6.14)

for 0 ≤ v ≤ 2π. So we have (taking y′ < y)

G_B(x, y, x′, y′; E) = −(2/l) Σ_{n=1}^{∞} sin(nπx/l) sin(nπx′/l) sin(k_n(W − y)) sin(k_n y′) / (k_n sin(k_n W)).  (6.15)
As we will show below, the sum in (6.15) is not uniformly convergent as y′ → y. However, this limit is essential for the calculation of renormalized t-matrices. We can, however, add and subtract something from each term so that we are left with convergent sums and singular sums with limits we understand. Since the sum is symmetric under y ↔ y′ (this is obvious from the physical symmetry as well as the original double sum, equation 6.7), we may choose y′ < y. We define Δ = y − y′ > 0 and re-write the sum (6.15) as follows:
G_B(x, y, x′, y′; E) = −(2/l) Σ_{n=1}^{N} sin(nπx/l) sin(nπx′/l) sin(k_n(W − y)) sin(k_n y′) / (k_n sin(k_n W))
  − (1/l) Σ_{n=N+1}^{∞} [ g_n(Δ) + g_n(2W − Δ) − g_n(2y − Δ) − g_n(2W − 2y + Δ) ]  (6.16)-(6.17)

where

g_n(ξ) = sin(nπx/l) sin(nπx′/l) e^{−κ_n ξ} / ( κ_n (1 − e^{−2κ_n W}) ).  (6.18)-(6.19)

For n > N the factor (1 − e^{−2κ_n W})⁻¹ is bounded by (1 − e^{−2κ_{N+1} W})⁻¹,  (6.20)
and therefore the tail of the sum is bounded by

(1/l) · 1 / (1 − exp(−πΔ/l))  (6.21)

which converges for Δ > 0 but not uniformly. Thus we cannot take the Δ → 0 limit inside the sum. However, if we can subtract off the diverging part, we may be able to find a uniformly converging sum as a remainder. With that in mind, we define

h_n(Δ) = sin(nπx/l) sin(nπx′/l) (l/nπ) exp(−nπΔ/l).  (6.22)-(6.23)

We first note that the sum Σ_{n=1}^{∞} h_n(Δ) may be performed exactly (see appendix D), yielding:

(1/l) Σ_{n=1}^{∞} h_n(Δ) = (1/4π) ln [ ( sin²(π(x + x′)/2l) + sinh²(πΔ/2l) ) / ( sin²(π(x − x′)/2l) + sinh²(πΔ/2l) ) ].  (6.24)

As we show in appendix D, we can place a rigorous upper bound on Σ_{n=M}^{∞} [ g_n(Δ) − h_n(Δ) ] for sufficiently large M; given such an M, the truncated remainder is controlled.  (6.25)-(6.26)
G_B(x, y, x′, y′; E) = −(2/l) Σ_{n=1}^{N} sin(nπx/l) sin(nπx′/l) sin(k_n(W − y)) sin(k_n y′) / (k_n sin(k_n W))
  + (1/l) Σ_{n=1}^{N} [ h_n(Δ) + h_n(2W − Δ) − h_n(2y − Δ) − h_n(2W − 2y + Δ) ]
  − (1/l) Σ_{n=N+1}^{∞} [ g_n(Δ) − h_n(Δ) ]
  − (1/l) Σ_{n=N+1}^{∞} [ g_n(2W − Δ) − h_n(2W − Δ) ]
  + (1/l) Σ_{n=N+1}^{∞} [ g_n(2y − Δ) − h_n(2y − Δ) ]
  + (1/l) Σ_{n=N+1}^{∞} [ g_n(2W − 2y + Δ) − h_n(2W − 2y + Δ) ]
  − (1/4π) ln [ ( sin²(π(x + x′)/2l) + sinh²(πΔ/2l) ) / ( sin²(π(x − x′)/2l) + sinh²(πΔ/2l) ) ]
  − (1/4π) ln [ ( sin²(π(x + x′)/2l) + sinh²(π(2W − Δ)/2l) ) / ( sin²(π(x − x′)/2l) + sinh²(π(2W − Δ)/2l) ) ]
  + (1/4π) ln [ ( sin²(π(x + x′)/2l) + sinh²(π(2y − Δ)/2l) ) / ( sin²(π(x − x′)/2l) + sinh²(π(2y − Δ)/2l) ) ]
  + (1/4π) ln [ ( sin²(π(x + x′)/2l) + sinh²(π(2W − 2y + Δ)/2l) ) / ( sin²(π(x − x′)/2l) + sinh²(π(2W − 2y + Δ)/2l) ) ].  (6.27)-(6.29)
When using these expressions we truncate the infinite sums at some finite M. The analysis in appendix D allows us to bound the truncation error. With this in mind we define
S_M(x, y, x′, y′; E), the expression (6.27)-(6.29) with each infinite sum Σ_{n=N+1}^{∞} truncated at n = M, the finite sums and logarithm terms kept in closed form.  (6.30)-(6.31)
We now have an approximate form for G_B which involves only finite sums and straightforward function evaluations. This will be analytically useful when we subtract G_o from G_B. It will also prove numerically useful in computing G_B.
Within the dressed-t formalism, we often need to calculate G_B(r, r; z) − G_o(r, r; z). This quantity deserves special attention because it involves a very sensitive cancellation of infinities. Recall that

lim_{r′→r} G_o(r, r′; k²) = lim_{r′→r} (1/2π) ln(k|r − r′|) + (1/4) Y_o^{(R)}(0) − i/4  (6.32)

where Y_o^{(R)} is the regular part of Y_o as defined in equation A.48 of section A.5.1. If G_B − G_o is to be finite, we need a canceling logarithmic singularity in G_B. Equation 6.31 makes it apparent that just such a singularity is also present in G_B. The logarithm term in that equation has a denominator which goes to 0 as r′ → r. We carefully manipulate the logarithms and write an explicitly finite expression for G_B − G_o:
ΔG_B(r; E) = S_M(x, y, x, y; E) + (1/π) Σ_{n=1}^{M} (1/n) sin²(nπx/l)
  + i/4 − (1/2π) ln(kl/2π) − (1/4) Y_o^{(R)}(0) − (1/4π) ln( sin²(πx/l) ).  (6.33)
6.1.5 Ground State Energies

One nice consequence of equation 3.44 is that it predicts a simple formula for the ground state energy of one scatterer with a known Green function background. Specifically, (3.44) implies that poles of T(E) occur when

1/s(E) = ΔG_B(r_s; E).  (6.34)

From section 2.4 we have

1/s(E) = (1/4) [ i − Y_o(ka)/J_o(ka) ]  (6.35)

and for ka ≪ 1 we have

1/s(E) = i/4 − (1/2π) ln(ka) − (1/4) Y_o^{(R)}(ka) = i/4 − (1/2π) ln(kl) − (1/2π) ln(a/l) − (1/4) Y_o^{(R)}(ka).  (6.36)

We define F(E) = 1/s(E) − ΔG_B(r_s; E). We can use the ka ≪ 1 expansion of the Neumann function to simplify F a bit:

F(E) = −(1/2π) ln(a/l) − (1/4) [ Y_o^{(R)}(ka) − Y_o^{(R)}(0) ] − S_M(x, y, x, y; E) − (1/π) Σ_{n=1}^{M} (1/n) sin²(nπx/l) − (1/2π) ln(2π) + (1/4π) ln( sin²(πx/l) ).

In figure 6.1 we compare the numerical solution of the equation F(E) = 0 with a numerical simulation performed with a standard method (Successive Over-Relaxation [33]).
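Numerically, solving F(E) = 0 is a one-dimensional root search: bracket a sign change of F, then refine. A minimal sketch with a smooth stand-in function; the real F would be assembled from the dressed-t expressions above, and the bracketing interval is a hypothetical choice:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical stand-in for F(E) = 1/s(E) - DeltaG_B(r_s; E).  Like the
# physical F near a ground state, it is smooth and changes sign once in
# the bracketing interval.
def F(E):
    return 1.0 / np.log(E) - 0.3

# bracket a sign change, then locate the pole of T(E), i.e. the
# ground state energy
E0 = brentq(F, 2.0, 100.0)
print(E0)
assert np.isclose(E0, np.exp(1.0 / 0.3))
```

In practice one scans E on a grid first, since F can have several roots (one per bound state) in a wide energy window.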
[ (ħ²/2m)∇² + E ] ψ(r) = 0  (6.38)

in the domain, subject to the periodic boundary conditions

ψ(0, y) = ψ(l, y),  ∂ψ/∂x|_{x=0} = ∂ψ/∂x|_{x=l},
ψ(x, 0) = ψ(x, W),  ∂ψ/∂y|_{y=0} = ∂ψ/∂y|_{y=W}.

Our equation again reads

( ∂²/∂x² + ∂²/∂y² + E ) ψ(x, y) = 0.  (6.39)

The eigenfunctions of L = −(∂²/∂x² + ∂²/∂y²) in the above domain with the given boundary conditions are

ψ_nm^{(1)}(x, y) = (2/√(lW)) sin(2nπx/l) sin(2mπy/W)  (6.40)
ψ_nm^{(2)}(x, y) = (2/√(lW)) cos(2nπx/l) cos(2mπy/W)  (6.41)
Figure 6.1: Numerical simulation and dressed-t theory for ground state energy shifts.
ψ_nm^{(3)}(x, y) = (2/√(lW)) sin(2nπx/l) cos(2mπy/W)  (6.42)
ψ_nm^{(4)}(x, y) = (2/√(lW)) cos(2nπx/l) sin(2mπy/W)  (6.43)

which satisfy

−( ∂²/∂x² + ∂²/∂y² ) ψ_nm(x, y) = ( 4n²π²/l² + 4m²π²/W² ) ψ_nm(x, y).  (6.45)

Note that m = 0 and n = 0 is permissible, but only the cosine state survives, so there is no degeneracy there. We can thus write down a box Green function in the position representation

G_B(x, y, x′, y′; E) = 2/(lWE) + (4/lW) Σ_{n,m=1}^{∞} cos(2nπ|x − x′|/l) cos(2mπ|y − y′|/W) / (E − 4n²π²/l² − 4m²π²/W²)  (6.46)

where trigonometric identities have been applied to collapse the sines and cosines into just two cosines. We note that this Green function depends only on |x − x′| and |y − y′|, as it must.
6.2.2 Re-summing GB

As with the Dirichlet case, we'd like to re-sum this Green function to make it a single sum and to create an easier form for numerical use and the Go − GB subtraction. We begin by reorganizing GB as follows:

G_B(x, y, x′, y′; E) = 2/(lWE) + (4/lW) Σ_{n=1}^{∞} cos(2nπ|y − y′|/W) Σ_{m=1}^{∞} cos(2mπ|x − x′|/l) / (E − 4n²π²/W² − 4m²π²/l²)  (6.47)

and then apply the following Fourier series identity (see, e.g., [18]) to the inner sum:

Σ_{m=1}^{∞} cos(m v) / (m² − α²) = 1/(2α²) − (π/(2α)) cos(α(π − v)) / sin(απ)  (6.48)
for 0 ≤ v ≤ 2π. We define

N = [ (W/2π) √E ],  k_n = √( E − 4π²n²/W² ),  κ_n = √( 4π²n²/W² − E )  (6.49)-(6.50)

and, writing X = |x − x′| and Y = |y − y′|,

G_B(X, Y; E) = cos[ √E (l/2 − X) ] / ( W√E sin(√E l/2) )
  + (2/W) Σ_{n=1}^{N} cos(2πnY/W) cos[ k_n (l/2 − X) ] / ( k_n sin(k_n l/2) )
  − (2/W) Σ_{n=N+1}^{∞} cos(2πnY/W) cosh[ κ_n (l/2 − X) ] / ( κ_n sinh(κ_n l/2) ).  (6.54)
1
We now follow a similar derivation to the one for Dirichlet boundaries. We choose an M > N such that we may approximate n 2 l n . We can then approximate GB by a nite sum plus a logarithm term arising from the highest energy terms in the sum. That sum looks like 2 1 X cos 2l nY e l nX (6.55) 2 n=M +1 n
1 ;
X xk
1
k=1
1 = ln 1 ; x : k
(6.56)
hp l i h l i N cos E 2 ; X 1 X cos 2 nY cos kn 2 ; X W ( SpM ) (X Y E ) = p p l + W n=1 l 2h E sin E 2 kn sin kn 2 h l i M 1 X cos 2 nY cosh n 2 ; X : W ;W (6.57) l n sinh n 2 n=N +1
and write our approximate GB as
We de ne
GB (X Y E )
M 1 X cos 2 nY e W +2 n n=1
( SpM ) (X Y E ) + 41 ln 1 ; 2e
;
WX
WX
cos 2 Y + e W
WX
(6.58)
93
We define

ΔG_B(r; E) = G_B(r, r; E) − G_o(r, r; E)  (6.59)

though both G_B(r, r; E) and G_o(r, r; E) are (logarithmically) infinite. As with the Dirichlet case, all we need to do is carefully manipulate the logarithm in G_B in the X, Y → 0 limit. Our answer is

ΔG_B(r; E) = S_p^{(M)}(0, 0; E) + (1/2π) Σ_{n=1}^{M} 1/n + (1/2π) ln [ 2π/(√E W) ] − (1/4) Y_o^{(R)}(0) + i/4.  (6.60)
Chapter 7
Disordered Systems
7.1 Disorder Averages
7.1.1 Averaging
When working with disordered systems we are rarely interested in a particular realization of the disorder but instead in average properties. The art in calculating quantities in such systems is cleverly approximating these averages in ways which are appropriate for specific questions. In this section we will consider only the average Green function of a disordered system, ⟨G⟩ (rather than, for instance, higher moments of G). Good treatments of these approximations and more powerful approximation schemes may be found in [10, 34, 9]. Suppose, at fixed energy, we have N ZRI's with individual t-matrices t̂_i(E) = s_i |r_i⟩⟨r_i|. Any property of the system depends, at least in principle, on all the variables s_i, r_i. We imagine that there are no correlations between different scatterers (location or strength) and that each has the same distribution of locations and strengths. Thus we can define the ensemble average of the Green function operator G(E):
⟨G⟩ = [ Π_{i=1}^{N} ∫_V dr_i ρ_r(r_i) ∫ ds_i ρ_s(s_i) ] G(E; {r_i, s_i}).  (7.1)
We will typically use uniformly distributed scatterers and fixed scattering strengths. In this case ρ_r(r_i) = 1/V, where V is the volume of the system, and ρ_s(s_i) = δ(s_i − s_o), where s_o is the fixed scatterer strength. We can write G as G_o + G_o T G_o and, since G_o is independent of the disorder,

⟨G⟩ = G_o + G_o ⟨T⟩ G_o.  (7.2)
Thus we must compute ⟨T⟩. This is such a useful quantity that there is quite a bit of machinery developed just for this computation.

7.1.2 Self-Energy
⟨G⟩ = G_o + G_o Σ ⟨G⟩  (7.3)

and thus

⟨G⟩⁻¹ = G_o⁻¹ − Σ = [E − Σ] − H_o.  (7.4)-(7.5)

Within the first two approximations we discuss, the self-energy is just proportional to the identity operator, so it can be thought of as just shifting the energy. We can also use (7.3) to find Σ in terms of ⟨T⟩:

Σ = ⟨T⟩ (1 + G_o ⟨T⟩)⁻¹  (7.6)

or ⟨T⟩ in terms of Σ:

⟨T⟩ = Σ (1 − G_o Σ)⁻¹.  (7.7)

Thus knowledge of either ⟨T⟩ or Σ is equivalent. Recall that G = G_o + G_o T G_o = G_o + G_o V G_o + G_o V G_o V G_o + ⋯ means that the amplitude for a particle to propagate from one point to another is the sum of the amplitude for it to propagate from the initial point to the final point without interacting with the potential and the amplitude for it to propagate from the initial point to the potential, interact with the potential one or more times, and then propagate to the final point. We can illustrate this diagrammatically:

G = [diagrammatic expansion]  (7.8)
where solid lines represent free propagation (G_o) and a dashed line ending in an "×" indicates an interaction with the impurity potential (V). Each different "×" represents an interaction with the impurity potential at a different impurity. When multiple lines connect to the same interaction vertex, the particle has interacted with the same impurity multiple times. An irreducible diagram is one which cannot be divided into two sub-diagrams just by cutting a solid line (a free propagator). The self-energy, Σ, is equivalent to a sum over only irreducible diagrams (with the incoming and outgoing free propagators removed):

Σ = [sum of irreducible diagrams]  (7.9)

which is enough to evaluate G, since we can build all the diagrams from the irreducible ones by adding free propagators:

⟨G⟩ = G_o + G_o Σ G_o + G_o Σ G_o Σ G_o + ⋯ = G_o (1 − Σ G_o)⁻¹.  (7.10)

There are a variety of standard techniques for evaluating the self-energy. The simplest approximation used is known as the "Virtual Crystal Approximation" (VCA). This is equivalent to replacing the sum over irreducible diagrams by the first diagram in the sum, i.e., Σ = ⟨V⟩. Since we don't use the potential itself, this approximation is actually more complicated to apply than the more accurate "average t-matrix approximation" (ATA). We note that, in a system where the impurity potential is known, ⟨V⟩ is just a real number and so the VCA just shifts the energy by the average value of the potential. The ATA is a more sophisticated approximation that replaces the sum (7.9) by a sum of terms that involve a single impurity,

Σ ≈ [single-impurity diagrams]  (7.11)
but this is, up to averaging, the same as the single scatterer t-matrix, t_i. Thus the ATA is equivalent to Σ = ⟨Σ_i t_i⟩. This approximation neglects diagrams which involve scattering from two or more impurities. We note that scattering from two or more impurities is included in G, just not in Σ. Of course, while scattering from several impurities is accounted for in G, interference between scattering from various impurities is neglected, since diagrams which scatter from one impurity, then other impurities, and then the first impurity again are neglected. At low concentrations, such terms are quite small. However, as the concentration increases, these diagrams contribute important corrections to G. One such correction comes from coherent backscattering, which we'll discuss in greater detail in section 7.5. We will use the ATA below to show that the classical limit of the quantum mean free path is equal to the classical mean free path. For N uniformly distributed fixed strength scatterers, the average is straightforward:
⟨Σ_{i=1}^{N} t̂_i⟩ = Σ_{i=1}^{N} [ Π_j ∫_V (dr_j/V) ∫ ds_j δ(s_j − s_o) ] s_i |r_i⟩⟨r_i|.  (7.12)

For each term in the sum, the r delta function will do one of the volume integrals and the rest will simply integrate to 1, canceling the factors of 1/V out front. The s delta functions will do all of the s integrals, leaving

⟨Σ_{i=1}^{N} t̂_i⟩ = (N/V) s_o = n s_o.  (7.13)

Thus the self-energy is simply proportional to the scattering strength multiplied by the concentration. We note that s_o is in general complex and this will make the poles of the Green function complex, implying an exponential decay of amplitude as a wave propagates. We will interpret this decay as the wave-mechanical mean free time.
we have a point particle that has just scattered off of one of the scattering centers. It now points in a random direction. What is the probability that it can travel a distance ℓ without scattering again? If the particle travels a distance x without scattering, then there must be a tube of volume σx which is empty of scattering centers. The probability of that is given by the product of the chances that each of the N scatterers is not in the volume σx (without the reflective walls this would be N − 1, but since we will take N large and we do have reflective walls, we'll leave it as N). That chance is 1 − σx/V, so

P^(N)(x) = (1 − σx/V)^N.   (7.14)

More precisely, 1 − P^(N)(x) is the probability that the free path-length is less than or equal to x. We define n = N/V, the concentration, so

P^(N)(x) = (1 − σxn/N)^N.   (7.15)

We take the N → ∞ while n = const. limit, which is valid for infinite systems and a good approximation when the mean free path is smaller than the system size. We have

P(x) = e^{−nσx}.   (7.16)

The quasi-classical mean free path is then

ℓ_qc = ⟨x⟩ = ∫_0^∞ x (∂/∂x)(1 − P(x)) dx = ∫_0^∞ P(x) dx = 1/(nσ).   (7.17)
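This limiting form is easy to verify numerically. The sketch below is illustrative only (the volume, scatterer number, and cross-section are assumed values, not ones from the thesis): it samples free paths from the exact finite-N survival law by inverse-transform sampling and compares the sample mean with 1/(nσ).

```python
import numpy as np

# Illustrative Monte Carlo check of the quasi-classical free path (not from the
# thesis). The survival probability after distance x is P(x) = (1 - sigma*x/V)**N,
# which tends to exp(-n*sigma*x); the mean free path should approach 1/(n*sigma).
rng = np.random.default_rng(0)
V, N, sigma = 1.0, 2000, 0.002       # volume, scatterer count, cross-section (assumed)
n = N / V                            # concentration

# Inverse-transform sampling from the exact finite-N survival law:
# solve (1 - sigma*x/V)**N = u for x with u uniform on (0, 1).
u = rng.random(200_000)
x = (V / sigma) * (1.0 - u ** (1.0 / N))

ell_qc = 1.0 / (n * sigma)           # quasi-classical prediction, eq. (7.17)
print(x.mean(), ell_qc)              # agree to well under a percent
```

The finite-N mean is (V/σ)/(N + 1), so the agreement improves as N grows at fixed n.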
Quasi-classical (indicated by the subscript "qc") here means that the transport between scattering events is classical but the cross-section of each scatterer is computed from quantum mechanics.
mean free path). In what follows we'll show that this is equivalent to the low-density weak-scattering approximation to the self-energy discussed above. We begin by noting that the free Green function takes a particularly simple form in the momentum representation:

G_o(p, p', E) = δ(p − p') / (E − E_p).   (7.18)

If we write Σ = Δ − iΓ, we have

G(p, p', E) = δ(p − p') / (E − Δ + iΓ − E_p).
Now we consider the Fourier transform of this Green function with respect to energy which gives us the time-domain Green function in the momentum representation (we are ignoring the energy dependence of Σ only for simplicity):
G(p, p', t) = ∫_{−∞}^{∞} dE e^{−iEt} G(p, p', E)
           = ∫_{−∞}^{∞} dE e^{−iEt} δ(p − p') / (E − Δ + iΓ − E_p)
           = −2πi δ(p − p') e^{−i(E_p + Δ)t} e^{−Γt}

which implies an exponential attenuation of the wave if −Γ = Im{Σ} is negative. For the ATA, we have Σ = n s_o which, for a two dimensional ZRI with scattering length a, is Σ = −4in J_o(√E a)/H_o(√E a), and thus

Im Σ = −4n J_o²(√E a) / |H_o(√E a)|²   (7.21)

which is manifestly negative. We can associate the damping with a mean free time, τ, via Γ = 1/(2τ). Since, at fixed energy, the velocity (in units where ℏ²/2m = 1) is v = 2√E, we have for the mean free path, ℓ,

ℓ = vτ = √E |H_o(√E a)|² / (4n J_o²(√E a)) = 1/(nσ)   (7.22)

reproducing the quasi-classical result.
⟨k|T|k'⟩ = T(k, k') = ∑_{i=1}^{N} s_i e^{−i(k−k')·r_i}

so that, averaging over the scatterer positions,

⟨T(k, k')⟩ = N ⟨s⟩ f(k − k')

where f(q) = (1/V) ∫_Ω e^{−iq·r} dr. The function f has two important properties. Firstly, f(0) = 1, which implies, as we saw in 7.1.1, that ⟨T⟩ = N⟨s⟩. Also, when the bounding region, Ω, is all of space, we have

f(q) = δ_{q,0}.   (7.27)
Together, these properties imply that the average ATA t-matrix cannot change the momentum of a scattered particle except insofar as the system is finite. A finite system will give a region of low momentum where momentum transfer can occur but for momenta larger than 1/L_o, momentum transfer will still be suppressed. We now consider the second moment of T or, more specifically,
⟨ |T(k, k')|² ⟩ − |⟨ T(k, k') ⟩|².   (7.28)

Since we have

|T(k, k')|² = ∑_{i,j=1}^{N} s_i s_j* e^{−i(k−k')·r_i} e^{i(k−k')·r_j}   (7.29)

the average is

⟨ |T(k, k')|² ⟩ = ∑_{i=1}^{N} ⟨|s|²⟩ + ∑_{i≠j} |⟨s⟩|² |f(k − k')|² = N⟨|s|²⟩ + N(N − 1)|⟨s⟩|² |f(k − k')|².   (7.30)
Thus

⟨ |T(k, k')|² ⟩ − |⟨ T(k, k') ⟩|² = N [ ⟨|s|²⟩ − |⟨s⟩|² |f(k − k')|² ].

In the case of fixed-strength scatterers (s_i = s_o), this becomes

⟨ |T(k, k')|² ⟩ − |⟨ T(k, k') ⟩|² = N |s_o|² [ 1 − |f(k − k')|² ].
At this point it is worth considering our geometry and computing f explicitly. We'll assume we are placing scatterers in a rectangle with length (x-direction) a and width (y-direction) b. Then we have, up to an arbitrary phase,

f(q) = sinc(q_x a/2) sinc(q_y b/2)   (7.34)

where

sinc x = (sin x)/x.   (7.35)

Thus 1 − |f(k − k')|² is zero for k = k' and then grows to 1 for larger momentum transfer. The zero momentum transfer "hole" in the second moment of T is an artifact of a potential made up of a fixed number of fixed size scatterers. To make contact with the standard condensed matter theory of disordered potentials, we should allow those fixed numbers to vary, thus making a more nearly constant second moment of T. We can do this easily enough by allowing the size of the scatterers to vary as well. Then ⟨|s|²⟩ − |⟨s⟩|² ≠ 0. In fact we should choose a distribution of scatterer sizes such that ⟨|s|²⟩ − |⟨s⟩|² ≈ ⟨|s|²⟩. Of course, the scatterer strength, s, is not directly proportional to the scattering length. For example, if the scattering length varies uniformly over a small range δa, it is straightforward to show that, for small δa,

⟨|s|²⟩ − |⟨s⟩|² ≈ ((k δa)² / 12) |ds/d(ka)|².   (7.36)
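Equation (7.34) is straightforward to check by Monte Carlo. In the sketch below (illustrative; the rectangle dimensions and momentum transfer are assumed values), uniform positions in an a × b rectangle centered at the origin are averaged against e^{iq·r} and compared with the sinc product:

```python
import numpy as np

# Monte Carlo check of eq. (7.34): for uniform positions in an a x b rectangle
# centered at the origin, <exp(i q.r)> = sinc(qx a/2) * sinc(qy b/2).
rng = np.random.default_rng(1)
a, b = 2.0, 1.0                                  # rectangle dimensions (assumed)
x = rng.uniform(-a / 2, a / 2, 1_000_000)
y = rng.uniform(-b / 2, b / 2, 1_000_000)

def f_exact(qx, qy):
    # note np.sinc(z) = sin(pi z)/(pi z), hence the division by pi
    return np.sinc(qx * a / 2 / np.pi) * np.sinc(qy * b / 2 / np.pi)

qx, qy = 3.0, 5.0                                # momentum transfer (assumed)
f_mc = np.exp(1j * (qx * x + qy * y)).mean()
print(abs(f_mc - f_exact(qx, qy)))               # ~ 1/sqrt(samples)
```

The discrepancy is set by the Monte Carlo noise, ~1/√(number of samples).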
P(t) = Δ ⟨ ∑_n δ(t − |ψ_n(r_o)|²) δ(E − E_n) ⟩   (7.37)

where Δ is the mean level spacing and r_o is a specific point in the system. Since P(t) is a probability distribution, we have ∫_0^∞ P(t) dt = 1 and, since the integrated square wavefunction is normalized to 1, we also have ∫_0^∞ t P(t) dt = 1/V. Though computing this quantity in general is quite hard, we can, as a baseline, compute it in one special case. As a naive guess at the form of the wavefunction in a disordered system, we conjecture that the wavefunction is a Gaussian random variable at each point in the system. That is, we assume that the distribution of values of ψ, ρ(ψ), is
ρ(ψ) = a e^{−bψ²}   (7.38)
where a and b are constants to be determined. We note that in a bounded system, we can make all the wavefunctions real. We proceed to determine the constants a and b. First, we use the fact that ρ is a probability distribution, so ∫_{−∞}^{∞} ρ(x) dx = 1, which gives

1 = a ∫_{−∞}^{∞} e^{−bx²} dx = a √(π/b)   (7.39)

implying a = √(b/π). Second, ∫_{−∞}^{∞} x² ρ(x) dx = 1/V fixes b = V/2; taking V = 1 (appropriate to the unit-area systems considered below), we have

ρ(ψ) = (1/√(2π)) e^{−ψ²/2}.   (7.40)

From this, we can compute P(t) via

P(t) = ∫_{−∞}^{∞} δ(t − x²) ρ(x) dx = ∫_{−∞}^{∞} [δ(x − √t) + δ(x + √t)] / (2|x|) ρ(x) dx.

We use the delta functions to do the integral and have

P(t) = ρ(√t)/√t

so that

P(t) = (1/√(2πt)) e^{−t/2}   (7.44)
which is known as the "Porter-Thomas" distribution. If time-reversal symmetry is broken, e.g., by a magnetic field, the wavefunction will, in general, be complex. In that case we can use the same argument where the real part and imaginary part of ψ are each Gaussian random variables. In that case we get
P(t) = e^{−t}.   (7.45)
This difference between time-reversal symmetric systems and their non-symmetric counterparts is a recurring motif in disordered quantum systems. A derivation of the above results from Random Matrix Theory (using the assumption that the Hamiltonian is a random matrix with the symmetries of the system) is available many places, for example [14]. In this language, the Hamiltonians of time-reversal invariant systems are part of the "Gaussian Orthogonal Ensemble" (GOE) whereas Hamiltonians for systems without time-reversal symmetry are part of the "Gaussian Unitary Ensemble" (GUE). We will adopt this bit of terminology in what follows. Of course, most systems do not behave exactly as the appropriate random matrix ensemble would indicate. These differences manifest themselves in a variety of properties of the system. In the numerical simulations which follow in chapters 8 and 9 we will see these departures quite clearly. For disordered systems, GOE behavior is expected when there are many weak scattering events in every path which traverses the disordered region, guaranteeing diffusive transport without significant quantum effects from scattering phases. More precisely, to see GOE behavior, we expect to need a mean free path which is much smaller than the system size (ℓ ≪ L) but much larger than the wavelength (ℓ ≫ λ). We will see this limit emerge in wavefunction statistics in chapter 9.
chaotic (but not disordered) systems, wavefunction scarring [20] is the best known form of weak localization. In disordered systems, the most important consequence of weak localization is the reduction of conductance due to coherent back-scattering. It is not difficult to estimate the coherent back-scattering correction to the conductance. We begin by noting the conductance we expect for a wire with no coherent backscattering. Specifically, when L ≫ ℓ ≫ λ we expect the DC conductivity of a disordered wire to satisfy the Einstein relation

σ = e² ρ_d D   (7.46)

where σ is the conductivity, e is the charge of the electron, ρ_d is the d-dimensional density of states per unit volume and D is the classical diffusion constant. The DC conductivity, σ, is proportional to P(r_1, r_2), the probability that a particle starting at point r_1 on one side of the system reaches r_2 on the other side. Quantum mechanically, this quantity can be evaluated semiclassically by a sum over classical paths, p,

P(r_1, r_2) = | ∑_p A_p |²   (7.47)

where A_p = |A_p| e^{iS_p} and S_p is the integral of the classical action over the path. The quantum probability differs from the classical in the interference terms:

∑_{p≠p'} A_p A_{p'}*.   (7.48)
Typically, disorder averaging washes out the interference term. However, when r1 = r2 , the terms arising from paths which are time-reversed partners will have strong interference even after averaging since they will always have canceling phases. Since every path has a time reversed partner, we have
⟨P(r, r)⟩ = 2 P(r, r)_classical.   (7.49)

But this enhanced return probability implies a suppressed conductance since ∫ P(r, r') dr' = 1 by conservation of probability. Thus σ must be smaller by an amount proportional to P(r, r)_classical due to this interference effect. But P(r, r)_classical is something we can compute straightforwardly. If we define R(t) to be the probability that a particle which left the point r at time t = 0 returns at
time t, we have

P(r, r)_classical = ∫_τ^{t_c} R(t) dt.   (7.50)
The lower cutoff, τ = ℓ/v, is there since our particle must scatter at least once to return and that takes a time of order the mean free time. The upper cutoff is present since we have only a finite disordered region and so, after a time t_c = L²/D the particle has diffused out and will not return. For a square sample, L_o is ambiguous up to a factor of √2. The upper cutoff can also be provided by a phase coherence time τ_φ. If particles lose phase coherence, for instance by interaction with a finite temperature heat bath, only paths which take less time than τ_φ will interfere. In this case the expression for the classical return probability is slightly modified:

P(r, r)_classical = ∫_τ^∞ e^{−t/τ_φ} R(t) dt.   (7.51)

The return probability, R(t)dt, can be estimated for a diffusive system. Of all the trajectories that scatter, only those that pass within a volume σv dt of the origin contribute. The probability that a scattered particle falls within that volume is just the ratio of it to the total volume of diffusing trajectories, (Dt)^{d/2} V_d, where d is the effective number of dimensions (the number of dimensions of the disordered sample ≫ ℓ) and V_d = π^{d/2}/(d/2)! is the volume of the unit sphere in d dimensions (this is easily calculated using products of Gaussian integrals, see e.g., [32], pp. 501-2) and D = vℓ/d. So
R(t) dt = σv dt / ((Dt)^{d/2} V_d).   (7.52)
With this expression for R(t)dt in hand, we can do the integral (7.50) and get
P(r, r)_classical = (σv / (V_d D^{d/2})) ∫_τ^{t_c} t^{−d/2} dt.   (7.53)
For future reference we state the specific results for one and two dimensions. Rather than state the result of the estimation above, we give the correct leading order results (computed by diagrammatic perturbation theory, see, e.g., [4, 12]). These results have the same dependence on ℓ and L_o but slightly different prefactors than our estimate:

δσ = −(e²/πh) × { √2 L_o − ℓ,     d = 1
                  2 ln(2L_o/ℓ),   d = 2 }.   (7.54)
However, the wavepacket has been diffusing for a time t and so we can use its autocorrelation function, A(t), defined by

A(t) = ∑_n |⟨ψ(0)|n⟩|² e^{−iE_n t}   (7.56)
to look at the spectrum of the wavepacket. If we can completely resolve the spectrum, no more dynamics occurs except the phase evolution of the eigenstates of the box. Thus the wavepacket has localized. After a time t, we can resolve levels with spacing δE = h/t. We define the "Thouless Conductance" g via
g = δE/ΔE = h ρ_d (Dt)^{d/2} / t ∝ D^{d/2} E^{d/2−1} t^{d/2−1}   (7.57)

where ΔE is the level spacing of the states in the diffusively explored volume (Dt)^{d/2}.
If g < 1 we can resolve the levels of the wavepacket and it localizes. Conversely, if g > 1, we cannot resolve the eigenstates making up the wavepacket and diffusion continues. The first conclusion we can draw from this argument is the dimensional dependence of the t → ∞ limit of g. In one dimension, it is apparent that lim_{t→∞} g = 0 and thus all states in a weakly disordered one dimensional system localize. In two dimensions, this argument is inconclusive and seems to depend on the strength of the disorder. In fact, it is believed that all states in weakly disordered two dimensional systems localize as well but
with exponentially large localization lengths. For d > 2, lim_{t→∞} g = ∞ and we expect the states to be extended. When measuring conductance, the difference between localized and extended states in the disordered region is dramatic. If the state is exponentially localized in the disordered region, it will not couple well to the leads and the conductance will be suppressed. We can look for this effect by looking at the conductance of a disordered wire as a function of the length of the disordered region. If the states are extended, we expect the conductance to vary as 1/L whereas if the states are localized we expect the conductance to vary as e^{−L/ξ} where ξ is the localization length. The effect of exponential localization on wavefunction statistics is equally dramatic. For instance, in two dimensions, since we know that

|ψ(r)|² = (2/(πξ²)) e^{−2|r − r_o|/ξ}   (7.58)

we have

P(t) = ∫ 2πr dr δ(t − |ψ(r)|²) = ∫ 2πr dr δ(t − (2/(πξ²)) e^{−2r/ξ}) = (πξ²/(2t)) ln(2/(πξ²t)).   (7.59)

In figure 7.1 we plot the Porter-Thomas distribution and the exponential localization distribution for localization lengths ξ = L/10 and ξ = L/100 so we can see just how stark this effect is.
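A quick numeric check of the strong-localization statistics is possible without the delta-function algebra (illustrative; the localization length is an assumed value): sampling points uniformly over a unit-area region centered on r_o, the fraction with intensity above a threshold T should equal the area of the disc on which the intensity (7.58) exceeds T.

```python
import numpy as np

# Check of the intensity statistics implied by (7.58): for
# |psi(r)|^2 = A*exp(-2r/xi) with A = 2/(pi*xi^2), the fraction of a unit-area
# sample with intensity above T is the area of the disc where the intensity
# exceeds T, i.e. pi * r_T^2 with r_T = (xi/2)*ln(A/T).
rng = np.random.default_rng(3)
xi = 0.02                                    # localization length (assumed value)
A = 2.0 / (np.pi * xi ** 2)

# sample points uniformly over a unit-area disc centered on r_o
r = np.sqrt(rng.random(2_000_000) / np.pi)   # unit-area disc has radius 1/sqrt(pi)
t = A * np.exp(-2.0 * r / xi)

T = 1.0
emp = (t > T).mean()                         # empirical tail fraction
exact = np.pi * (0.5 * xi * np.log(A / T)) ** 2
print(emp, exact)
```

The exact tail here is just the integral of the density (7.59) from T up to the peak intensity A.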
[Figure 7.1: P(t) on a log scale (10^-8 to 1) versus t (0 to 30): the Porter-Thomas distribution compared with the strong-localization distribution for localization lengths L/10 and L/100.]
P(t) ∼ e^{−C₂ ln² t}   (7.60)
Various workers have argued for different forms for C₂, which depends on the energy E, the mean free path, ℓ, the system size, L_o, and the symmetry class of the system (existence of time-reversal symmetry). The log-normal distribution looks strikingly different from either the Porter-Thomas distribution or the strong localization distribution. In figure 7.2 we plot all of these distributions for an example choice of parameters for t > 20. We only consider large values of t because for small values of t, P(t) will have a different form. For small enough values of t these calculations predict that P(t) has the Porter-Thomas form. This allows them to be trivially distinguished from the strong-localization form which is very different from Porter-Thomas for small t. We will focus in particular on two different calculations. The first, appearing in [39], uses the direct optimal fluctuation method (DOFM) and predicts

C₂^(1) = βkℓ / (4 ln(D₁kℓ))   (7.61)

where D₁ is an O(1) constant. Another calculation, appearing in [15] and using the supersymmetric sigma model (SSSM), predicts

C₂^(2) = βkℓ / (4 ln(D₂L_o/ℓ))   (7.62)

where β = 1 for time-reversal invariant systems and β = 2 for systems without time-reversal symmetry and D₂ is an O(1) constant. In order to see the differences between C₂^(1) and C₂^(2), in figure 7.3 we plot both of these coefficients versus both wavelength and mean free path for various values of D₁ and D₂.
7.8 Conclusions
In this chapter we reviewed material on quenched disorder in open and closed metallic systems. In the chapters that follow we will often compare to these results or try to verify them with numerical calculations.
[Figure 7.2: P(t) on a log scale versus t (20 to 100): Porter-Thomas, strong localization (localization length = L/100), and the log-normal form.]
[Two panels: the log-normal coefficient C₂ plotted versus mean free path ℓ (top) and versus wavelength (bottom) for the DOFM and SSSM predictions.]
Figure 7.3: Comparison of log-normal coefficients for the DOFM and SSSM.
Chapter 8
[Schematic: a disordered region of length L flanked by semi-infinite leads.]
Figure 8.1: The wire used in open system scattering calculations.
To connect with the language of mesoscopic systems, we may think of the disordered region as a mesoscopic sample and the semi-infinite ordered regions on each side as perfectly conducting leads. For example, one can imagine realizing this system with an AlGaAs "quantum dot" [27]. We can measure many properties of this system. For instance, we have used renormalized scattering techniques to model a particular quantum dot [23]. Typical quantum
dot experiments involve measuring the conductance of the quantum dot as a function of various system parameters (e.g., magnetic field, dot shape or temperature). Thus we should consider how to extract conductance from a Green function. In this section we discuss numerically computed transport coefficients. This allows us to verify that our disorder potential has the properties that we expect classically. Since we are interested in intensity statistics and how they depend on transport properties, it is important to compute these properties in the same model we use to gather statistics. For instance, as discussed in the previous chapter, the ATA breaks down when coherent backscattering contributes a significant weak localization correction to the diffusion coefficient. In this regime it is useful to verify that the corrections to the transport are still small enough to use an expansion in λ/ℓ. When the disorder is strong enough, strong localization occurs and a different approximation is appropriate. Of course, transport in disordered systems is interesting in its own right. Our method allows a direct exploration of the theory of weak localization in disordered two dimensional systems.
us a different relationship between the reflection coefficient and the cross-section of each obstacle. It is instructive to compute the expected reflection coefficient as a function of the concentration and cross-section under the diffusion assumption and compare that result, R_D, to R_B = πNσ/(4W) computed under the ballistic assumption. We begin from the relation between the intensity transmission coefficient, T_D, the conductance, Γ, and the number of open channels, N_c:

T_D = (h/e²) Γ/N_c   (8.1)

i.e., the transmission coefficient is just the unitless conductance per channel. From this we see that Γ does not go to infinity for an empty wire as we might expect. Only a finite amount of flux can be carried in a wire with a finite number of open channels and this gives rise to a so-called "contact resistance" [8]. We thus split the unitless conductance into a disordered region dependent part (Γ_s) and a contact part:

(h/e²) Γ = ( (e²/h)/Γ_s + 1/N_c )^{−1}   (8.2)

where the contact part is chosen so that lim_{T_D→1} Γ_s = ∞. At this point we invoke the assumption of diffusive transport. This allows us to use the Einstein relation [8] to relate the conductivity of the sample, (L/W)Γ_s, to the diffusion constant via

(L/W) Γ_s = e² ρ D   (8.3)

where ρ is the density of states per unit volume (ρ = 1/(4π) in two dimensions). The L/W in front of Γ_s relates conductivity to conductance in two dimensions. We now have

(h/e²) Γ = ( L/(ρhWD) + 1/N_c )^{−1} = ρhWD N_c / (L N_c + ρhWD).   (8.4)

If we substitute this into (8.1) we have

T_D = ρhWD / (L N_c + ρhWD).   (8.5)

As in the previous chapters, we choose units where ℏ = 1 and m = 1/2, so ℏ²/(2m) = 1. We also choose units where the electron charge, e = 1. We now use D = vℓ/d = kℓ [37] and N_c ≈ kW/π and get

T_D = (πℓ/2) / (L + πℓ/2).   (8.6)
and

R_D = 1 − T_D = L / (L + πℓ/2)   (8.7)

which, for L ≪ πℓ/2, gives

R_D ≈ 2L/(πℓ).   (8.8)
For very few scatterers we usually have Nσ ≪ πW/2 and thus

R_D = 1 / (1 + πW/(2Nσ)) ≈ 2Nσ/(πW).   (8.9)

Compare this to the ballistic result and we see that they are related by R_B/R_D = π²/8. This factor arises from the different distributions of the incoming angles. In practice, there can be a large crossover region between these two behaviors, when the non-uniform distribution of the incoming angles can make a significant difference in the observed conductance. We note that (8.7) can be rearranged to yield:

ℓ = (2L/π)(1/R_D − 1) = (2L/π)(T_D/R_D)   (8.10)

which we will use as a way to compare numerical results with the assumption of quasi-classical diffusion.
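The algebra connecting (8.6)-(8.10), as reconstructed here, closes on itself: starting from an assumed mean free path, the diffusive T_D and R_D let (8.10) recover that mean free path exactly. A minimal sketch (all parameter values are assumed):

```python
from math import pi

# Round-trip check of eqs. (8.6)-(8.10): starting from a mean free path ell,
# the diffusive transmission T_D determines R_D, and (8.10) recovers ell.
L = 1.0                                      # disordered-region length (assumed)
ell = 0.2                                    # mean free path (assumed)

T_D = (pi * ell / 2) / (L + pi * ell / 2)    # eq. (8.6)
R_D = 1.0 - T_D                              # eq. (8.7)
ell_back = (2 * L / pi) * (T_D / R_D)        # eq. (8.10)
print(ell_back)                              # recovers 0.2
```

This is the inversion used below to turn a numerically observed conductance into an effective mean free path.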
Figure 8.2: Numerically observed mean free path and the classical expectation.
[Plot: mean free path (0.05 to 0.55) versus concentration (100 to 600).]
Figure 8.3: Numerically observed mean free path after first-order coherent back-scattering correction and the classical expectation.
[Plot: mean free path (0.05 to 0.55) versus concentration (100 to 600).]
is not completely diffusive. As we saw at the beginning of this chapter, ballistic scattering leads to a larger reflection coefficient than diffusive scattering. This leads to an apparently smaller mean free path. More interestingly, there is coherent back-scattering at all concentrations (see section 7.5), though its effect is larger at higher concentration since the change in conductance due to weak localization is proportional to λ/ℓ. We correct the conductance, via (7.54), to first order in λ/ℓ and then plot the corrected mean free path and the classical expectation in figure 8.3. The agreement is clearly better, though there is clearly some other source of reduced conductance. At the lowest concentrations there is still a ballistic correction as noted above but this cannot account for the lower than expected conductance at higher concentrations where the motion is clearly diffusive. As λ/ℓ increases, the difference between the classical and quantum behavior does as well. For large enough λ/ℓ this will lead to localization. In order to verify that the transport is still not localized, we compute the transmission coefficient vs. the length of the disordered region for fixed concentration. If the transport is diffusive, T will satisfy (8.5) which predicts T ∝ 1/L for large L. If instead the wavefunctions in the disordered region are exponentially localized, T will fall exponentially with distance, i.e., T ∝ e^{−L/ξ}. In figure 8.4 we plot T versus L for two different concentrations and energies. In both plots, 5 realizations of the disorder potential are averaged at each point. In figure 8.4a there are 35 wavelengths across the width of the wire, as in the previous plot, and the concentration is 250 scatterers per unit area. T is clearly more consistent with the diffusive expectation than the strong localization prediction. We compare this to figure 8.4b where the wavelength and mean free path are comparable and the wire is only a few wavelengths wide.
What we see is probably quasi-one-dimensional strong localization. Consequently, T does not satisfy (8.5) but rather has an exponential form. We note that the data in figure 8.4b is rather erratic but still much more strongly consistent with strong localization than diffusion. Numerical observation of exponential localization in a true two dimensional system would be very difficult since the two dimensional localization length is exponentially long in the mean free path.
[Two panels: transmission T versus disordered region length L (0 to 1.5) for cases (a) and (b).]
Figure 8.4: Transmission versus disordered region length for (a) diffusive and (b) localized wires.
1. Compute the random locations of N scatterers.
2. Compute the renormalized t-matrix of each scatterer.
3. Compute the scatterer-scatterer Green functions for all pairs of scatterers.
4. Construct the inverse multiple scattering matrix, T⁻¹.
5. Invert T⁻¹ (we use the SVD).
6. Use formula (5.37) to compute the channel-to-channel Green function.
7. Use (5.31) and the Fisher-Lee formula to find the conductance Γ.
8. Repeat for as many realizations as required to get ⟨Γ⟩.

The bottleneck in this computation can be either the O(N³) SVD or the O(N²N_c²) application of (5.37) depending on the concentration and the energy. The computations appearing above were done in one to four hours on a fast desktop workstation (DEC Alpha 500/500). There are faster matrix inversion techniques than the SVD but few are as stable when operating on near-singular matrices. Though that is crucial for the finite system eigenstate calculations of the next chapter, it is non-essential here. If one were to switch to an LU decomposition or a QR decomposition (see appendix C) we could speed up the inversion stage by a factor of four or two respectively. For most of the calculations we performed, the computation of the channel-to-channel Green function was more time consuming and so such a switch was never warranted.
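Steps 4 and 5 can be sketched as follows. This is a minimal stand-in (the matrix here is random, not a physical multiple scattering matrix); the point is the numerically stable SVD-based pseudo-inversion, which is what makes the SVD preferable to LU or QR near singular energies.

```python
import numpy as np

# Sketch of steps 4-5: invert a (possibly near-singular) multiple scattering
# matrix via the SVD. Singular values below a tolerance are dropped rather than
# divided by, which is what keeps the inversion stable.
rng = np.random.default_rng(4)
N = 50
Tinv = rng.normal(size=(N, N))            # stand-in for the inverse t-matrix
U, s, Vt = np.linalg.svd(Tinv)

tol = s.max() * N * np.finfo(float).eps   # rank-detection tolerance
s_inv = np.where(s > tol, 1.0 / s, 0.0)
T = (Vt.T * s_inv) @ U.T                  # pseudo-inverse of Tinv

print(np.allclose(T @ Tinv, np.eye(N)))   # True for a well-conditioned draw
```

For a genuinely rank-deficient matrix the same code returns the pseudo-inverse on the non-null subspace.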
Chapter 9
ever, experimental limitations make it difficult to consider enough ensembles to do proper averaging. Instead, the statistics are gathered from states at several different energies. Since the mean free path and wavelength both depend on the energy, the mixing of different distributions makes analysis of the data difficult. Still, the data does suggest that large fluctuations in wavefunction intensities are possible in two dimensional weakly disordered systems. Numerical methods, which would seem a natural way to do rapid ensemble averaging, have not been applied or, at least, not been successful at reaching the necessary parameter regimes and speed requirements to consider the tails of the intensity distribution. There are some results for the tight-binding Anderson model [30]. However, the nature of the computations required for the Anderson model makes it similarly difficult to gather sufficient intensity statistics to fit the tails of the distribution. It is the purpose of this section to illustrate how the techniques discussed in this thesis can provide the data which is so sorely needed to move the theory (see section 7.7) forward. We begin with an abstract treatment of the extraction of discrete eigen-energies and corresponding eigenstates from t-matrices. This is a topic worth considering carefully since it is the basis for all the calculations which follow. The t-matrix has a pole at the eigen-energies so the inverse of the t-matrix is nearly singular. Once the preliminaries are out of the way, we discuss the difficulties of studying intensity statistics in the parameter regime and with the methods of [24]. In particular, we consider the impact of Dirichlet boundaries on systems of this size. We consider the possibility that dynamics in the sides and corners of a Dirichlet rectangle can have a strong effect on the intensity statistics and perhaps mask the effects predicted by the field theory.
We also discuss eigenstate intensity statistics for a periodic rectangle (a torus) with various choices of disorder potential (i.e., various ℓ and λ). We'll briefly discuss the fitting procedure used to show that the distributions are well described by a log-normal form and extract coefficients. We'll then compare these coefficients to the predictions of field theory. The computation of intensity statistics in the diffusive weak disorder regime is the most difficult numerical work in this thesis. It also produces results which are at odds with existing theory. Thus we spend some time exploring the stability of the numerics and possible explanations for the differences between the results and existing theory. Finally, as in the last chapter, we'll discuss the numerical techniques more specifically and outline the algorithm. Here, some less than obvious ideas are necessary to gather
|v⟩ = P̂_R |v⟩ + P̂_N |v⟩   (9.1)

B̂_R ≡ ( P̂_R t̂⁻¹(E_n) P̂_R )⁻¹   (9.2)

which exists since t̂⁻¹(E_n) is explicitly non-null on P̂_R |v⟩. We can now invert t̂⁻¹(z) in the neighborhood of z = E_n:

t̂(E_n + ε) |v⟩ ≈ B̂_R P̂_R |v⟩ + (1/ε) Ĉ P̂_N |v⟩.   (9.3)

Thus, the residue of t̂(z) at z = E_n projects any vector onto the null-space of t̂⁻¹(E_n). Recall that the full wavefunction is written |ψ⟩ = |φ⟩ + Ĝ_B t̂ |φ⟩, so

|ψ⟩ ∝ lim_{ε→0} ε Ĝ_B t̂(E_n + ε) |φ⟩ = Ĝ_B Ĉ P̂_N |φ⟩.   (9.4)
Since the t-matrix term has a pole, the incident wave term is irrelevant and the wavefunction is (up to a constant) Ĝ_B P̂_N |φ⟩. When the state at E_n is non-degenerate (which is generic in disordered systems), there exists a vector |α⟩ such that the projector may be written P̂_N = |α⟩⟨α| and thus

|ψ_n⟩ = N Ĝ_B(E_n) |α⟩   (9.5)

where N is a normalization constant. In position representation,

ψ_n(r) = N ∫ G_B(r, r'; E_n) α(r') dr'.   (9.6)
If the state is m-fold degenerate, P̂_N = ∑_{j=1}^{m} |α_j⟩⟨α_j| and we have the solutions |ψ_n^(j)⟩ = N_j Ĝ_B |α_j⟩.

Thus, the task of finding eigenenergies of a multiple scattering system is equivalent to finding E_n such that t̂⁻¹(E_n) has a non-trivial null space. Finding the corresponding eigenstates is done by finding a basis for that null space. Suppose that our t-matrix is generated by the multiple scattering of N zero range scatterers (see 3.1). In this case, the procedure outlined above can be done numerically using the Singular Value Decomposition (SVD). We have

t̂⁻¹ = ∑_{ij} A_{ij} |r_i⟩⟨r_j|.   (9.7)
We can decompose A via A = UΣVᵀ where U and V are orthogonal and Σ is diagonal. The elements of Σ are called the "singular values" of A. We can detect rank deficiency (the existence of a null-space) in A by looking for zero singular values. Far more detail on numerical detection of rank-deficiency is available in [17]. Once zero singular values are found, the vector α (where |α⟩ = ∑_i α_i |r_i⟩) needed to apply (9.5) sits in the corresponding column of V. It is important to note that we have extracted the eigenstate without ever actually inverting A, which would incur tremendous numerical error. Thus our wavefunction is written

ψ(r) = N ∑_i G_B(r, r_i; E_n) α_i.   (9.8)
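The null-space extraction can be sketched with a toy symmetric matrix that has a known null direction (a stand-in for t̂⁻¹(E_n) at an eigenenergy, not a physical t-matrix):

```python
import numpy as np

# Sketch of null-space detection via the SVD: build a symmetric A with a
# one-dimensional null space, then read the null vector from the row of V^T
# corresponding to the smallest singular value.
rng = np.random.default_rng(5)
N = 40
B = rng.normal(size=(N, N))
S = (B + B.T) / 2
evals, evecs = np.linalg.eigh(S)
evals[0] = 0.0                            # force a one-dimensional null space
A = (evecs * evals) @ evecs.T

U, sigma, Vt = np.linalg.svd(A)
print(sigma[-1])                          # smallest singular value, ~ 0
alpha = Vt[-1]                            # null vector: last row of V^T
print(np.linalg.norm(A @ alpha))          # ~ 0: alpha spans the null space
```

With a physical t-matrix, alpha would be the vector α inserted into (9.8) to build the eigenstate.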
We normalize the wavefunction by sampling it at many points and determining N numerically. In practice, we may use this procedure in a variety of ways. Perhaps the simplest is looking at parts of the spectra of specific configurations of scatterers. We define S_N(E) as the smallest singular value of t̂⁻¹(E) (standard numerical techniques for computing the SVD give the smallest singular value as (Σ)_NN). Computing S_N(E) is O(N³). Then we use standard numerical techniques (e.g., Brent's method, see [33]) to minimize S_N²(E). We then check that the minimum found is actually zero (within our numerical tolerance of zero, to be precise). These standard numerical techniques are more efficient when the minima are quadratic which is why we square the smallest singular value. We have to be careful to consider S_N(E) at many energies per average level spacing so we catch all the levels in the spectrum. Since we know the average level spacing (see 2.27) is

Δ = 1/(V ρ₂(E)) = (ℏ²/2m)(4π/V)   (9.9)
we know approximately how densely to search in energy. The generic level repulsion in disordered systems [14] helps here, since the possibility of nearby levels is smaller.
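The spectral search can be illustrated on a toy model where the answer is known: for A(E) = H − E·1 with H real symmetric, the smallest singular value S(E) vanishes exactly at the eigenvalues of H. The sketch below scans a grid finer than the level spacing and refines one minimum of S²(E) by parabolic interpolation (Brent's method, as cited above, would be the production choice); the matrix and grid are assumed, illustrative values.

```python
import numpy as np

# Toy version of the spectral search: S(E) is the smallest singular value of
# H - E*I, which vanishes at the eigenvalues of the symmetric matrix H.
rng = np.random.default_rng(6)
H = rng.normal(size=(8, 8))
H = (H + H.T) / 2

def S2(E):
    return np.linalg.svd(H - E * np.eye(8), compute_uv=False)[-1] ** 2

E_grid = np.linspace(-6, 6, 2000)            # many points per level spacing
vals = np.array([S2(E) for E in E_grid])
i = int(np.argmin(vals))                     # coarse location of one root

# one parabolic refinement step around the grid minimum (S^2 is quadratic there)
a, b, c = E_grid[i - 1], E_grid[i], E_grid[i + 1]
fa, fb, fc = vals[i - 1], vals[i], vals[i + 1]
E_min = b - 0.5 * ((b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)) / \
        ((b - a) * (fb - fc) - (b - c) * (fb - fa))

eigs = np.linalg.eigvalsh(H)
print(min(abs(eigs - E_min)))                # ~ 0: E_min sits on an eigenvalue
```

Squaring S makes the minimum quadratic, which is exactly why the parabolic step (and Brent's method) converges so quickly.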
Figure 9.1: Typical low energy wavefunctions (|ψ|² is plotted) for 72 scatterers in a 1 × 1 Dirichlet bounded square. Black is high intensity, white is low. The scatterers are shown as black dots. For the top left wavefunction ℓ = .12, λ = .57 whereas ℓ = .23, λ = .11 for the bottom wavefunction. ℓ increases from left to right and top to bottom whereas λ decreases in the same order.
Figure 9.2: Typical medium energy wavefunctions (|ψ|² is plotted) for 72 scatterers in a 1 × 1 Dirichlet bounded square. Black is high intensity, white is low. The scatterers are shown as black dots. For the top left wavefunction ℓ = .25, λ = .09 whereas ℓ = .48, λ = .051 for the bottom wavefunction. ℓ increases from left to right and top to bottom whereas λ decreases in the same order.
Figure 9.3: Intensity statistics gathered in various parts of a Dirichlet bounded square. Clearly, larger fluctuations are more likely at the sides and corners than in the center. The (statistical) error bars are different sizes because four times as much data was gathered in the sides and corners than in the center.
bin by the total number of counts and the width of each bin. As is clear from the figure, anomalous peaks are more likely near the sides and most likely near the corners. This large boundary effect makes it unlikely that existing theory can be fit to data which is, effectively, an average over all regions of the rectangle.
(1 − S(E) G_B) w = 0.
(1 − S(E) G_B) w = ε w  ⟹  G_B w = ((1 − ε)/S(E)) w  ⟹  (1 − (S(E)/(1 − ε)) G_B) w = 0.
Thus there exists a nearby S̃(E) = S(E)/(1 − ε) such that we do have an eigenstate at E. Since S(E) parameterizes the renormalized scatterer strength, S̃(E) corresponds to a nearby scatterer strength. Thus, in order to include states corresponding to small eigenvalues we must choose an initial scattering strength such that S(E)/(1 − ε) is still a possible scattering strength. As discussed in section C.3, for a symmetric matrix, the singular values are, up to a sign, the eigenvalues. Thus we may use the columns of V corresponding to small singular values to compute eigenstates. This provides a huge gain in efficiency. We pick a particular energy, choose a realization of the potential, find the SVD of the t-matrix and then compute a state from every vector corresponding to a small singular value. Small here means small enough that the resultant scatterer strength is in some specified range. We frequently get more than one state per realization. With this technique in place, there is little we can do to further optimize each use of the SVD. The other bottleneck in state production is the computation of the background Green function. Each Green function requires performing a large sum of trigonometric and hyperbolic trigonometric functions. Thus for moderate size systems (e.g., 500 scatterers), filling the inverse multiple scattering matrix takes far longer than inverting it. Green function computation time is also a bottleneck when computing the wavefunction from the multiple scattering t-matrix. We could try to improve the convergence of the sums and thus require fewer terms per Green function. That would bring a marginal improvement in performance. Of course, when function computation time is a bottleneck a frequent approach is to tabulate function values and then use lookups into the table to get function values later.
A naive application of this is of no use here since our scatterers move in every realization and thus tabulated Green functions for one realization are useless for the next. There is a simple way to get
E 1 ; S (; ) GB w = 0: 1
131
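The selection rule just described — keep any right singular vector whose singular value corresponds to a physically acceptable renormalized strength — can be sketched with numpy. This is a minimal illustration, not the thesis code: the matrix M = 1 − S(E)G_B, the bare strength S, and the acceptance window are schematic stand-ins for the actual renormalized quantities:

```python
import numpy as np

def states_from_realization(M, S, s_min, s_max):
    """Candidate eigenstates from one disorder realization.

    M            : the (symmetric) matrix 1 - S(E) G_B at energy E
    S            : bare scatterer strength (schematic stand-in)
    s_min, s_max : window of physically acceptable renormalized
                   strengths S / (1 - lambda)
    """
    U, sigma, Vh = np.linalg.svd(M)
    states = []
    for sv, v in zip(sigma, Vh):        # rows of Vh = right singular vectors
        # For a symmetric matrix a singular value equals an eigenvalue
        # up to sign, so try both signs.
        for lam in (sv, -sv):
            if abs(1.0 - lam) < 1e-12:
                continue
            S_tilde = S / (1.0 - lam)
            if s_min <= S_tilde <= s_max:
                states.append((S_tilde, v))
                break
    return states
```

Every vector that passes the window test yields one candidate state, so a single realization can contribute several states, exactly as described above.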
the advantages of tabulated functions and changing realizations: instead of tabulating the Green functions for one realization of scatterers, we tabulate them for a larger number of scatterers and then choose realizations from the large pool of precomputed locations. For example, if we need 500 scatterers per realization, we pre-compute the Green functions for 1000 scatterers and choose 500 of them at a time for each realization. In order to check that this doesn't lead to any significant probability of getting physically similar ensembles, we sketch here an argument from [4]. Consider a random potential of size $L$ with mean free path $\ell$. A particle diffusing through this system typically undergoes $L^2/\ell^2$ collisions. The probability that a particular scatterer is involved in one of these collisions is roughly this number divided by the total number of scatterers, $nL^d$, where $n$ is the concentration and $d$ is the dimension of the system. Thus a shift of one scatterer can, e.g., shift an energy level by about a level spacing when
\[ \frac{\delta E}{\Delta} \approx \frac{L^2}{\ell^2}\,\frac{1}{nL^d} = \frac{1}{n\ell^2 L^{d-2}} \tag{9.14} \]

is of order one. That is, we must move $n\ell^2 L^{d-2}$ scatterers. In particular, in two dimensions we must move $n_o = n\ell^2$ scatterers to completely change a level. There are $\binom{M}{N}$ ways to choose $N$ scatterers from $M$ possible locations and $\binom{M}{N}^2$ ways to independently choose two sets. Of those pairs of sets,

\[ \binom{M}{N-n_o}\binom{M-N+n_o}{n_o}^2 \tag{9.15} \]

have $N - n_o$ or more scatterers in common. The first factor is the number of ways to choose the common $N - n_o$ scatterers and the second is the number of ways to choose the rest independently. Thus the probability, $p_2$, that two independently chosen sets of scatterers have $N - n_o$ or more in common is

\[ p_2 = \frac{\binom{M}{N-n_o}\binom{M-N+n_o}{n_o}^2}{\binom{M}{N}^2}. \tag{9.16} \]
We will be looking at thousands or millions of realizations. The probability that at least one pair among R realizations shares $N - n_o$ or more scatterers is
\[ p_R = 1 - (1 - p_2)^{R(R-1)}. \tag{9.17} \]
For large $M$ and $N$, this probability is extremely small. For example, the fewest scatterers we use will be 500, which we'll choose from 1000 possible locations. Our simulations are in a 1 × 1 square and the mean free path is 0.08, implying that $n_o < 4$. The chance that all but 4 scatterers are common between any pair of one million realizations of 500 scatterers chosen from a pool of 1000 is less than $10^{-268}$. Thus we need not really concern ourselves with the possibility of oversampling a particular configuration of the disorder potential. The combination of getting one or more states from nearly every realization and pre-computing the Green functions leads to an improvement of more than three orders of magnitude over the method used for the smaller systems of section 9.2. This allows us to consider systems with more scatterers and thus get closer to the limit $\ell \ll L_o$. With this improvement we can proceed to look at the intensity distribution for various values of $\ell$ and $\lambda$ in an $L_o = 1$ square.
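The overlap estimate (9.16)-(9.17) is easy to check numerically using log-binomials to avoid overflow. A quick sketch, with the numbers M = 1000, N = 500, n_o = 4 quoted above (the small-p2 approximation 1 − (1 − p2)^x ≈ x·p2 is used for pR):

```python
import math

def log_binom(n, k):
    """log of the binomial coefficient C(n, k)."""
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def log_p2(M, N, no):
    """log10 of p2, eq. (9.16): probability that two independently
    chosen N-subsets of M locations share N - no or more members."""
    lp = (log_binom(M, N - no) + 2 * log_binom(M - N + no, no)
          - 2 * log_binom(M, N))
    return lp / math.log(10)

# M = 1000 locations, N = 500 per realization, no = 4 movable scatterers,
# R = one million realizations (so roughly R*(R-1) pairs).
R = 1.0e6
log_pR = log_p2(1000, 500, 4) + math.log10(R * (R - 1))
print(log_pR)   # enormously negative: oversampling is negligible
```

The result is consistent with the bound quoted in the text: the probability is below $10^{-268}$.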
We fit the logarithm of the numerically computed distribution to the log-normal form

\[ \ln P(t) = C_o - C_1 \ln t - C_2 \ln^2 t \tag{9.18} \]
where $C_o$ is just a normalization constant, $C_1$ is the power in the power-law and $C_2$ is the log-normal coefficient. We note that the expectation is that log-normal behavior will occur in the tail of the distribution. In order to account for this we fit each numerically computed
Figure 9.4: Intensity statistics P(t) gathered in various parts of a periodic square (torus). Larger fluctuations are more likely for larger λ/ℓ. The erratic nature of the smallest-wavelength data is due to poor statistics.
Figure 9.5: Illustrations of the fitting procedure. We look at the reduced χ² as a function of the starting value of t in the fit (top; note the log scale on the y-axis), then choose the C₂ with the smallest confidence interval (bottom) and stable reduced χ². In this case we would choose the C₂ from the fit starting at t = 10.
P(t) beginning at various values of t. We then look at the reduced χ², χ²ᵣ = χ²/(N_d − D) (where there are D fitting parameters and N_d data points), for each fit. A plot of a typical sequence of χ²ᵣ values is shown in figure 9.5 (top). Once χ²ᵣ settles down to a near-constant value, we choose the fit with the smallest confidence interval for the fitted C₂. A typical sequence of C₂'s and confidence intervals for one fit is plotted in figure 9.5 (bottom). The behavior of χ²ᵣ is consistent with the assumption that P(t) does not take the form (9.18) until we reach the tail of the distribution.

As discussed in section 7.7, there are two field-theoretic computations which give two different forms for C₂, denoted C₂⁽¹⁾ (9.19) and C₂⁽²⁾ (9.20). We can attempt to fit our observed C₂'s to these two forms. We find that neither form works well at all. In figure 9.6 we compare these fits to the observed values of C₂ as we vary k at fixed ℓ = 0.081 (top) and vary ℓ at fixed k = 200 (λ = 0.031, bottom). Thus, while the numerically computed intensity statistics are well fitted by a log-normal distribution as predicted by theory, the coefficients of the log-normal do not seem to be explained by existing theory.
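The fitting procedure above — fit from a running minimum t and watch the reduced χ² — can be sketched as follows. This is a minimal illustration, not the thesis code: the data here are synthetic, and an unweighted least-squares quadratic in ln t stands in for the full weighted fit:

```python
import numpy as np

def tail_fit(t, P, t_min):
    """Least-squares fit of ln P = C0 - C1*ln t - C2*ln^2 t for
    t >= t_min; returns (C0, C1, C2, reduced chi^2).  Residuals are
    unweighted here; a real fit would weight by bin uncertainties."""
    m = t >= t_min
    x, y = np.log(t[m]), np.log(P[m])
    # polyfit returns [a2, a1, a0] for a2*x^2 + a1*x + a0
    coeffs, residuals, *_ = np.polyfit(x, y, 2, full=True)
    a2, a1, a0 = coeffs
    C0, C1, C2 = a0, -a1, -a2
    dof = max(len(x) - 3, 1)          # Nd - D, with D = 3 parameters
    chi2_red = (residuals[0] if len(residuals) else 0.0) / dof
    return C0, C1, C2, chi2_red

# Synthetic log-normal tail for illustration only.
t = np.linspace(2, 20, 200)
P = np.exp(1.0 - 0.9 * np.log(t) - 1.2 * np.log(t) ** 2)
for t_min in (2, 6, 10):
    print(t_min, tail_fit(t, P, t_min))
```

Scanning t_min and keeping the fit where the reduced χ² stabilizes mimics the procedure illustrated in figure 9.5.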
Figure 9.6: Numerically observed log-normal coefficients C₂ (fitted from numerical data) and the fitted theoretical expectations C₂⁽¹⁾ and C₂⁽²⁾, plotted (top) as a function of wavenumber k at fixed ℓ = 0.081 and (bottom) as a function of ℓ at fixed k = 200.
Figure 9.7: Typical wavefunctions (|ψ|² is plotted) for 500 scatterers in a 1 × 1 periodic square (torus) with ℓ = 0.081, λ = 0.061. The density of |ψ|² is shown.
Figure 9.8: Anomalous wavefunctions (|ψ|² is plotted) for 500 scatterers in a 1 × 1 periodic square (torus) with ℓ = 0.081, λ = 0.061. The density of |ψ|² is shown. We note that the scale here is different from the typical states plotted previously.
Figure 9.9: The average radial intensity ⟨|ψ|²⟩, plotted against kr/2π, centered on two typical peaks (top) and two anomalous peaks (bottom; scaled average radial wavefunctions with peak heights 34.1 and 60.1).
In figure 9.9 we plot R(r) for two typical peaks (one from each wavefunction in figure 9.7) and the two anomalous peaks from the wavefunctions in figure 9.8. Here we see that each set of peaks has very similar behavior in its average decay and oscillation. The anomalous peaks have a more quickly decaying envelope, as they must in order to reach the same Gaussian random background value. This is predicted in [39], although we have not yet confirmed the quantitative prediction of those authors. Again we note
\[ \sqrt{\frac{\max\{(\delta\psi)^2\}}{\max\{|\psi|^2\}}}. \tag{9.23} \]

The former is a standard measure of the difference of functions and the latter we expect to be more sensitive to changes in anomalously large peaks.
In order to see if a small perturbation in scatterer locations produces a small change in the wavefunction, we need an appropriate scale for the scatterer perturbation. If Random Matrix Theory applies, we know that a single state can be shifted by one level (and thus completely de-correlated) by moving one scatterer with cross-section approximately the wavelength by one wavelength. From that argument, and the fact that the motion of each scatterer is uncorrelated with the motion of the others, we can see that the appropriate parameter is approximately

\[ \epsilon = \sqrt{N}\,\delta x \tag{9.24} \]

where $\delta x$ is the average distance moved by a single scatterer. That is, if the error in the wavefunction is comparable to $\epsilon$ we can assume it comes from physical changes in the wavefunction, not numerical error. In figure 9.10 we plot our two measures of wavefunction deviation against $\epsilon$ (on a log-log scale) for several $\epsilon$ between $3.6 \times 10^{-7}$ and 3 for a state with an anomalously large peak. Since our deviations are no larger than expected from Random Matrix Theory, we can assume that the numerical stability is sufficiently high for our purposes. Though not exactly a source of numerical error, we might worry that including states that result from small but non-zero singular values has an influence on the statistics. If this were the case, we would need to carefully choose our singular value cutoff in order to match the field theory. However, the influence on the statistics is minimal, as summarized in table 9.1.
Table 9.1: Comparison of the log-normal tail coefficients C₁ and C₂ of P(t) for different maximum allowed singular values.
9.3.5 Conclusions
In contrast to the well-understood phenomena observed in disordered wires, we have observed some rather more surprising things in disordered squares. While the various theoretical computations of the expected intensity distribution appear to correctly predict the shape of the tail of the distribution, none of them seem to correctly predict the dependence of that shape on wavelength or mean free path.
Figure 9.10: Wavefunction deviation under small perturbation for 500 scatterers in a 1 × 1 periodic square (torus), ℓ = 0.081, λ = 0.061. Both √(V⟨|δψ|²⟩) and √(max|δψ|²/max|ψ|²) are plotted against ε on a log-log scale.
We have considered a variety of explanations for the discrepancies between the field theory and our numerical observations. There is one way in which our potential differs drastically from the potential which the field theory assumes: we have a finite number of discrete scatterers, whereas the field theory takes the concentration to infinity while taking the scatterer strength to zero, holding the mean free path constant. Thus it seems possible that there are processes which can cause large fluctuations in |ψ|² which depend on there being finite-size scatterers. In order to explore this possibility we consider the dependence of P(t) on the scatterer strength at constant wavelength and mean free path. Specifically, at two different wavelengths (λ = 0.06 and λ = 0.03) and fixed ℓ = 0.08, we double the concentration and halve the cross-section of the scatterers. The results are summarized in table 9.2. While changes in concentration and scatterer strength do influence the coefficients of the distribution, they do not do so enough to explain the discrepancy with the field theory.
Table 9.2: Comparison of the log-normal tail coefficients C₁ and C₂ of P(t) for strong and weak scatterers at fixed ℓ and two wavelengths, λ = 0.061 and λ = 0.031.
9.4 Algorithms
We have used several different algorithms in different parts of this chapter. We have gathered spectral information about particular realizations of scatterers, gathered statistics in small systems where we only used realizations with a state in a particular energy window, gathered statistics from nearly every realization by allowing the scatterer size to change, and computed wavefunctions for particular realizations and energies. Below, we sketch the algorithms used to perform these various tasks. We will frequently use the smallest singular value of a particular realization at a particular energy, which we denote S_N^{(i)}(E), where i labels the realization. When only one realization is involved, the i will be suppressed. To compute S_N^{(i)}(E) we do the following:
1. Compute the renormalized t-matrix of each scatterer.

2. Compute the scatterer-scatterer Green functions for all pairs of scatterers.

3. Construct the inverse multiple scattering matrix, T⁻¹.

4. Find the singular value decomposition of T⁻¹ and take its smallest singular value.

In order to find spectra from E_i to E_f for a particular realization of scatterers:

1. Load the scatterer locations and sizes.

2. Choose a ΔE less than the average level spacing.

3. Set E = E_i.

(a) If E < E_f, find S_N(E), S_N(E + ΔE) and S_N(E + 2ΔE); otherwise end.

(b) If the smallest singular value at E + ΔE is not smaller than at both E and E + 2ΔE, increase E by ΔE and repeat from (a).

(c) Otherwise, apply a minimization algorithm to S_N(E) in the region near E + ΔE. Typically, minimization algorithms will begin from a triplet as we have calculated above.

(d) If the minimum is not near zero, increment E and repeat from (a).

(e) If the minimum coincides with an energy where the renormalized t-matrices are extremely small, it is probably a spurious zero brought on by a state of the empty background. Increment E and repeat from (a).

(f) The energy E_o at which the minimum was found is an eigenenergy. Save it, increment E by ΔE and repeat from (a).

The bottleneck in this computation is the filling of the inverse multiple scattering matrix and computation of the O(N³) SVD. Performance can be improved by a clever choice of ΔE, but too large a ΔE can lead to states being missed altogether. The optimal ΔE can only be chosen by experience or by understanding the width of the minima in S_N(E). The origin of the width of these minima is not clear.

The simpler method for computing intensity statistics, by looking for states in an energy window of size 2ΔE about an energy E, goes as follows:
1. Choose a realization of scatterer locations.

2. Use the method above to check if there is an eigenstate in the desired energy window about E. If not, choose a new realization and repeat.

3. If there is an eigenstate, use the singular vector corresponding to the small singular value to compute the eigenstate. Make a histogram of the values of |ψ|² sampled on a grid in position space with spacing approximately one half-wavelength in each direction.

4. Combine this histogram with previously collected data and repeat with a new realization.

The bottlenecks in this computation are the same as in the previous computation. In this case, a clever choice of window size can improve performance.

The more complex method for computing intensity statistics by looking only at energy E is a bit different:

1. Randomly choose a number of locations (approximately twice the number of scatterers).

2. Compute and store all renormalized t-matrices and the Green functions from each scatterer to each other scatterer.

3. Choose a grid on which to compute the wavefunction.

4. Compute and store the Green function from each scatterer to each location on the wavefunction grid.

5. Choose a subset of the precomputed locations, construct the inverse multiple scattering matrix, T⁻¹, and find the SVD of T⁻¹.

6. For each singular value smaller than some cutoff (we've tried both 0.1 and 0.01), compute the associated eigenstate on the grid, count the values of |ψ|², and combine with previous data. Choose a new subset and repeat.

For this computation the bottlenecks are somewhat different. The O(N³) SVD is one bottleneck, as is computation of individual wavefunctions, which is O(SN²) where S is the number of points on the wavefunction grid.
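Step 6 can be sketched as follows, assuming the per-subset quantities are already in hand. Here `G_grid` holds the precomputed scatterer-to-grid Green functions, and the reconstruction ψ(r_p) = Σ_j G(r_p, r_j) a_j from a near-null singular vector a, together with the unit-mean normalization, are simplified stand-ins for the full expressions:

```python
import numpy as np

def accumulate_intensity(Tinv, G_grid, bins, counts, cutoff=0.1):
    """Update a histogram of normalized |psi|^2 with every state found
    in one subset.  Tinv is the inverse multiple scattering matrix for
    the subset; G_grid[p, j] is the Green function from scatterer j to
    grid point p; counts is the running histogram (modified in place)."""
    U, sigma, Vh = np.linalg.svd(Tinv)
    n_states = 0
    for s, a in zip(sigma, Vh):
        if s < cutoff:                    # near-null vector => eigenstate
            psi = G_grid @ a              # psi(r_p) = sum_j G(r_p, r_j) a_j
            t = np.abs(psi) ** 2
            t /= t.mean()                 # normalize to unit mean intensity
            counts += np.histogram(t, bins=bins)[0]
            n_states += 1
    return n_states
```

A single SVD can thus contribute several states to the statistics, and the expensive G_grid array is reused across subsets.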
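Returning to the spectral search (steps (a)-(f) above), the scan-and-bracket loop can be sketched as follows. This is a minimal illustration: `smallest_sv` stands in for the full S_N(E) computation, and golden-section search stands in for "a minimization algorithm beginning from a triplet":

```python
import numpy as np

def golden_min(f, a, b, tol=1e-10):
    """Golden-section minimization of f on [a, b]."""
    g = (np.sqrt(5.0) - 1.0) / 2.0
    x1, x2 = b - g * (b - a), a + g * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:
            b, x2, f2 = x2, x1, f1
            x1 = b - g * (b - a); f1 = f(x1)
        else:
            a, x1, f1 = x1, x2, f2
            x2 = a + g * (b - a); f2 = f(x2)
    return 0.5 * (a + b)

def find_spectrum(smallest_sv, E_i, E_f, dE, zero_tol=1e-6):
    """Scan [E_i, E_f] in steps dE; wherever S_N(E) has a bracketed
    local minimum, refine it and keep only near-zero minima."""
    eigenenergies = []
    E = E_i
    while E + 2 * dE <= E_f:
        s0, s1, s2 = (smallest_sv(E + k * dE) for k in range(3))
        if s1 < s0 and s1 < s2:                 # triplet brackets a minimum
            E_o = golden_min(smallest_sv, E, E + 2 * dE)
            if smallest_sv(E_o) < zero_tol:     # genuine zero, not spurious
                eigenenergies.append(E_o)
        E += dE
    return eigenenergies
```

As a toy usage, `find_spectrum(lambda E: abs(np.sin(E)), 0.5, 7.0, 0.5)` locates the zeros of |sin E| near π and 2π; too large a dE would miss closely spaced minima, exactly as noted above.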
For all of these methods, near-singular matrices are either sought, frequently encountered, or both. This requires a numerical decomposition which is stable in the presence of small eigenvalues; the SVD is an ideal choice. The SVD is usually computed via transformation to a symmetric form and then a symmetric eigendecomposition. Since the matrix we are decomposing can be chosen to be symmetric, we could use the symmetric eigendecomposition directly. We imagine some marginal improvement might result from the substitution of such a decomposition for the SVD.
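For a real symmetric matrix, the relationship invoked here (and in section C.3) is easy to check numerically: the singular values are the absolute values of the eigenvalues, so an eigenvector with a near-zero eigenvalue is exactly a singular vector with a near-zero singular value. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A = 0.5 * (A + A.T)                     # make it symmetric

evals = np.linalg.eigvalsh(A)           # symmetric eigendecomposition
svals = np.linalg.svd(A, compute_uv=False)

# Singular values equal eigenvalues up to sign.
print(np.allclose(np.sort(np.abs(evals)), np.sort(svals)))  # True
```

Since `eigvalsh` exploits symmetry, it is typically somewhat cheaper than a full SVD, which is the marginal improvement suggested above.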
Chapter 10
Conclusions
Scattering theory can be applied to some unusual problems and in some unexpected ways. Several ideas of this sort have been developed and applied in this work. All the methods developed here are related to the fundamental idea of scattering theory, namely the separation between propagation and collision. The applications range from the disarmingly simple single scatterer in a wire to the obviously complex problem of intensity statistics in weakly disordered two dimensional systems. These ideas allow calculation of some quantities which are difficult to compute in other ways, for example the scattering strength of a scatterer in a narrow two dimensional wire as discussed in section 5.6. They also allow simpler calculation of some quantities which have been computed in other ways, e.g., the eigenstates of one zero range interaction in a rectangle, also known as the "Seba billiard." The methods developed here also lead to vast numerical improvements in calculations which are possible but difficult in other ways, for example the calculation of intensity statistics in closed weakly disordered two dimensional systems as demonstrated in section 9.3. The results of these intensity statistics calculations are themselves quite interesting. They appear to contradict previous theoretical predictions about the likelihood of large fluctuations in the wavefunctions of such systems. At the same time, some qualitative features of these theoretical predictions have been verified for the first time. There are a variety of foreseeable applications of these techniques. One of the most exciting is the possible application to superconductor normal-metal junctions. For instance, a disordered normal metal region with a superconducting wall will have different
dynamics because of the Andreev reflection from the superconductor. The superconductor energy gap can be used to probe various features of the dynamics in the normal metal. Also, the field of quantum chaos has, so far, been focused on systems with an obvious chaotic classical limit. Systems with purely quantum features very likely have different and interesting behavior. A particle-hole system like the superconductor is only one such example. A different sort of application is to renormalized scattering in atomic traps. The zero range interaction is a frequently used model for atom-atom interactions in such traps. The trap itself renormalizes the scattering strengths, as does the presence of other scatterers. Some of this can be handled, at least in an average sense, with the techniques developed here. We would also like to extend some of the successes with point scatterers to other shapes. There is an obvious simplicity to the zero range interaction which will not be shared by any extended shape. However, other shapes, e.g., finite-length line segments, have simple scattering properties which can be combined in much the way we have combined single scatterers in this work.
Bibliography

[1] M. Abramowitz and I.A. Stegun, editors. Handbook of Mathematical Functions. Dover, London, 1965.

[2] S. Albeverio, F. Gesztesy, R. Høegh-Krohn, and H. Holden. Solvable Models in Quantum Mechanics. Springer-Verlag, Berlin, 1988.

[3] S. Albeverio and P. Seba. Wave chaos in quantum systems with point interaction. J. Stat. Phys., 64(1/2):369-83, 1991.

[4] Boris L. Altshuler and B.D. Simons. Universalities: From Anderson localization to quantum chaos. In E. Akkermans, G. Montambaux, J.-L. Pichard, and J. Zinn-Justin, editors, Les Houches 1994: Mesoscopic Quantum Physics (LXI), pages 1-98. Elsevier Science B.V., Amsterdam, 1994.

[5] G.E. Blonder, M. Tinkham, and T.M. Klapwijk. Transition from metallic to tunneling regimes in superconducting microconstrictions: Excess current, charge imbalance, and supercurrent conversion. Phys. Rev. B, 25(7):4515-32, April 1982.

[6] M. Born. Zur Quantenmechanik der Stoßvorgänge. Zeitschrift für Physik, 37:863-67, 1926. Translated into English by J.A. Wheeler and W.H. Zurek (1981) and reprinted in Quantum Theory and Measurement, J.A. Wheeler and W.H. Zurek, eds., Princeton University Press (1983).

[7] A. Cabo, J.L. Lucio, and H. Mercado. On scale invariance and anomalies in quantum mechanics. Am. J. Phys., 66(6):240-6, March 1998.

[8] S. Datta. Electronic Transport in Mesoscopic Systems. Number 3 in Cambridge Studies in Semiconductor Physics and Microelectronic Engineering. Cambridge University Press, Cambridge, 1995.

[9] S. Doniach and E.H. Sondheimer. Green's Functions for Solid State Physicists. Number 44 in Frontiers in Physics Lecture Note Series. The Benjamin/Cummings Publishing Company, Inc., 1982.

[10] E.N. Economou. Green's Functions in Quantum Physics. Number 7 in Solid-State Sciences. Springer-Verlag, New York, 2nd edition, 1992.

[11] J.D. Edwards, A.S. Lupu-Sax, and E.J. Heller. Imaging the single electron wavefunction in mesoscopic structures. To be published, 1998.

[12] F. Evers, D. Belitz, and Wansoo Park. Density expansion for transport coefficients: Long-wavelength versus Fermi surface nonanalyticities. Phys. Rev. Lett., 78(14):2768-2771, April 1997.

[13] P. Exner and P. Seba. Point interactions in two and three dimensions as models of small scatterers. Phys. Lett. A, 222:1-4, October 1996.

[14] F. Haake. Quantum Signatures of Chaos, volume 54 of Springer Series in Synergetics. Springer-Verlag, 1992.

[15] V.I. Fal'ko and K.B. Efetov. Statistics of prelocalized states in disordered conductors. Phys. Rev. B, 52(24):17413-29, December 1995.

[16] Daniel S. Fisher and Patrick A. Lee. Relation between conductivity and transmission matrix. Phys. Rev. B, 23(12):6851-4, June 1981.

[17] Gene H. Golub and Charles F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 3rd edition, 1996.

[18] I.S. Gradshteyn and I.M. Ryzhik. Table of Integrals, Series, and Products. Academic Press, San Diego, 5th edition, 1994.

[19] C. Grosche. Path integration via summation of perturbation expansions and applications to totally reflecting boundaries, and potential steps. Phys. Rev. Lett., 71(1), 1993.

[20] E.J. Heller. Bound-state eigenfunctions of classically chaotic Hamiltonian systems: scars of periodic orbits. Phys. Rev. Lett., 53:1515-8, 1984.

[21] E.J. Heller, M.F. Crommie, C.P. Lutz, and D.M. Eigler. Scattering and absorption of surface electron waves in quantum corrals. Nature, 369:464, 1994.

[22] K. Huang. Statistical Mechanics. Wiley, New York, 1987.

[23] J.A. Katine, M.A. Eriksson, A.S. Adourian, R.M. Westervelt, J.D. Edwards, A.S. Lupu-Sax, E.J. Heller, K.L. Campman, and A.C. Gossard. Point contact conductance of an open resonator. Phys. Rev. Lett., 79(24):4806-9, December 1997.

[24] A. Kudrolli, V. Kidambi, and S. Sridhar. Experimental studies of chaos and localization in quantum wave functions. Phys. Rev. Lett., 75(5):822-5, July 1995.

[25] L.D. Landau and E.M. Lifshitz. Mechanics. Pergamon Press, Oxford, 3rd edition, 1976.

[26] L.D. Landau and E.M. Lifshitz. Quantum Mechanics. Pergamon Press, Oxford, 3rd edition, 1977.

[27] C. Livermore. Coulomb Blockade Spectroscopy of Tunnel-Coupled Quantum Dots. PhD thesis, Harvard University, May 1998.

[28] A. Messiah. Quantum Mechanics, volume II. John Wiley & Sons, 1963.

[29] A.D. Mirlin. Spatial structure of anomalously localized states in disordered conductors. J. Math. Phys., 38(4):1888-917, April 1997.

[30] K. Müller, B. Mehlig, F. Milde, and M. Schreiber. Statistics of wave functions in disordered and in classically chaotic systems. Phys. Rev. Lett., 78(2):215-8, January 1997.

[31] M. Olshanii. Atomic scattering in the presence of an external confinement and a gas of impenetrable bosons. Phys. Rev. Lett., 81(5):938-41, August 1998.

[32] R.K. Pathria. Statistical Mechanics, volume 45 of International Series in Natural Philosophy. Pergamon Press, 1972.

[33] W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, Cambridge, 2nd edition, 1992.

[34] G. Rickayzen. Green's Functions and Condensed Matter. Number 5 in Techniques of Physics. Academic Press, London, 1980.

[35] L.S. Rodberg and R.M. Thaler. Introduction to the Quantum Theory of Scattering. Academic Press, New York, 1st edition, 1970.

[36] J.J. Sakurai. Modern Quantum Mechanics. Addison-Wesley, 1st edition, 1985.

[37] P. Sheng. Introduction to Wave Scattering, Localization, and Mesoscopic Phenomena. Academic Press, San Diego, 1995.

[38] T. Shigehara. Conditions for the appearance of wave chaos in quantum singular systems with a pointlike scatterer. Phys. Rev. E, 50(6):4357-70, December 1994.

[39] I.E. Smolyarenko and B.L. Altshuler. Statistics of rare events in disordered conductors. Phys. Rev. B, 55(16):10451-10466, April 1997.

[40] A. Douglas Stone and Aaron Szafer. What is measured when you measure a resistance? The Landauer formula revisited. IBM Journal of Research and Development, 32(3):384-412, May 1988.

[41] D. Zwillinger. Handbook of Integration. Jones & Bartlett, Boston, 1992.
Appendix A
Green Functions
A.1 Definitions
Green functions are solutions to a particular class of inhomogeneous differential equations of the form

\[ \left[z - L(\mathbf{r})\right] G(\mathbf{r},\mathbf{r}';z) = \delta(\mathbf{r} - \mathbf{r}'). \tag{A.1} \]

$G$ is determined by (A.1) and boundary conditions for $\mathbf{r}$ and $\mathbf{r}'$ lying on the surface $S$ of the domain. Here $z$ is a complex variable while $L(\mathbf{r})$ is a differential operator which is time-independent, linear and Hermitian. $L(\mathbf{r})$ has a complete set of eigenfunctions $\{\psi_n(\mathbf{r})\}$ which satisfy

\[ L(\mathbf{r})\,\psi_n(\mathbf{r}) = \lambda_n\,\psi_n(\mathbf{r}). \tag{A.2} \]

Each of the $\psi_n(\mathbf{r})$ satisfies the same boundary conditions as $G(\mathbf{r},\mathbf{r}';z)$. The functions $\{\psi_n(\mathbf{r})\}$ are orthonormal,

\[ \int \psi_n^*(\mathbf{r})\,\psi_m(\mathbf{r})\, d\mathbf{r} = \delta_{nm} \tag{A.3} \]

and complete,

\[ \sum_n \psi_n(\mathbf{r})\,\psi_n^*(\mathbf{r}') = \delta(\mathbf{r} - \mathbf{r}'). \tag{A.4} \]

In abstract (Dirac) notation these statements read

\[ (z - \hat L)\,\hat G(z) = 1 \tag{A.5} \]

\[ \hat L\,|\psi_n\rangle = \lambda_n\,|\psi_n\rangle \tag{A.6} \]

\[ \langle\psi_n|\psi_m\rangle = \delta_{nm} \tag{A.7} \]

\[ \sum_n |\psi_n\rangle\langle\psi_n| = 1. \tag{A.8} \]
In all of the above, sums over n may be integrals in continuous parts of the spectrum. For $z \neq \lambda_n$ we can formally solve equation A.5 to get

\[ \hat G(z) = \frac{1}{z - \hat L}. \tag{A.9} \]

Multiplying by A.8 we get

\[ \hat G(z) = \frac{1}{z - \hat L}\sum_n |\psi_n\rangle\langle\psi_n| = \sum_n \frac{1}{z - \hat L}\,|\psi_n\rangle\langle\psi_n| = \sum_n \frac{|\psi_n\rangle\langle\psi_n|}{z - \lambda_n}. \tag{A.10} \]

We recover the r-representation by multiplying on the left by $\langle\mathbf{r}|$ and on the right by $|\mathbf{r}'\rangle$:

\[ G(\mathbf{r},\mathbf{r}';z) = \sum_n \frac{\psi_n(\mathbf{r})\,\psi_n^*(\mathbf{r}')}{z - \lambda_n}. \tag{A.11} \]

In order to find (A.11) we had to assume that $z \neq \lambda_n$. When $z = \lambda_n$ we can write a limiting form for $\hat G(z)$:

\[ \hat G^{\pm}(z) = \lim_{\epsilon \to 0^+} \frac{1}{z - \hat L \pm i\epsilon} \tag{A.12} \]

and its corresponding form in position space,

\[ G^{\pm}(\mathbf{r},\mathbf{r}';z) = \lim_{\epsilon \to 0^+} \sum_n \frac{\psi_n(\mathbf{r})\,\psi_n^*(\mathbf{r}')}{z - \lambda_n \pm i\epsilon} \tag{A.13} \]

where $\hat G^{+}(z)$ is called the "retarded" or "causal" Green function and $\hat G^{-}(z)$ is called the "advanced" Green function. These names are reflections of properties of the corresponding time-domain Green functions. Consider the Fourier transform of (A.13) with respect to $z$:

\[ G^{\pm}(\mathbf{r},\mathbf{r}';\tau = t - t') = \lim_{\epsilon \to 0^+} \frac{1}{2\pi} \int_{-\infty}^{\infty} \sum_n \frac{\psi_n(\mathbf{r})\,\psi_n^*(\mathbf{r}')}{E - \lambda_n \pm i\epsilon}\, e^{-iE\tau/\hbar}\, dE. \tag{A.14} \]

We switch the order of the energy sums (integrals) and have

\[ G^{\pm}(\mathbf{r},\mathbf{r}';\tau) = \frac{1}{2\pi} \lim_{\epsilon \to 0^+} \sum_n \psi_n(\mathbf{r})\,\psi_n^*(\mathbf{r}') \int_{-\infty}^{\infty} \frac{e^{-iE\tau/\hbar}}{E - \lambda_n \pm i\epsilon}\, dE. \tag{A.15} \]

We can perform the inner integral with contour integration by closing the contour in either the upper or lower half plane. We are forced to choose the upper or lower half plane by the sign of $\tau$. For $\tau > 0$ we must close the contour in the lower half-plane so that the exponential forces the integrand to zero on the part of the contour not in the original integral. However, if the contour is closed in the lower half plane, only poles in the lower half plane will be picked up by the integral. Thus $G^{+}(\mathbf{r},\mathbf{r}';\tau)$ is zero for $\tau < 0$ and therefore corresponds only to propagation forward in time. $G^{-}(\mathbf{r},\mathbf{r}';\tau)$, on the other hand, is zero for $\tau > 0$ and corresponds to propagation backwards in time. $\hat G^{-}$ is frequently useful in formal calculations.
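Carrying out the inner contour integral explicitly gives the compact time-domain forms — a standard result stated here for completeness, using the $1/2\pi$ convention of (A.14):

```latex
G^{\pm}(\mathbf{r},\mathbf{r}';\tau)
  \;=\; \mp\, i\,\theta(\pm\tau)\,
  \sum_n \psi_n(\mathbf{r})\,\psi_n^{*}(\mathbf{r}')\,
  e^{-i\lambda_n \tau/\hbar}
```

where $\theta$ is the unit step function; the prefactor makes explicit that $G^{+}$ vanishes for $\tau < 0$ and $G^{-}$ for $\tau > 0$.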
A.2 Scaling $\hat L$

We will find it useful to relate the Green function of the operator $\hat L$ to the Green function of $\alpha\hat L$, where $\alpha$ is a complex constant. Suppose

\[ \left[z - \alpha\hat L\right] \hat G_\alpha(z) = 1. \tag{A.16} \]

We note that $\hat L$ and $\alpha\hat L$ have the same eigenfunctions but different eigenvalues, i.e.,

\[ \alpha\hat L\,|\psi_n\rangle = \alpha\lambda_n\,|\psi_n\rangle \]

so

\[ \hat G_\alpha(z) = \sum_n \frac{|\psi_n\rangle\langle\psi_n|}{z - \alpha\lambda_n} = \frac{1}{\alpha} \sum_n \frac{|\psi_n\rangle\langle\psi_n|}{z/\alpha - \lambda_n}. \]

So we have

\[ \hat G_\alpha(z) = \frac{1}{\alpha}\,\hat G_1\!\left(\frac{z}{\alpha}\right). \]
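The scaling relation is easy to verify numerically for a finite-dimensional $\hat L$ — a sketch; any Hermitian matrix will do as a stand-in for the operator:

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.standard_normal((4, 4))
L = 0.5 * (L + L.T)                     # Hermitian "operator" L
alpha, z = 0.5 + 0.2j, 1.3 + 0.7j       # arbitrary complex constants

def G(w, A):
    """Resolvent (w - A)^(-1)."""
    return np.linalg.inv(w * np.eye(4) - A)

lhs = G(z, alpha * L)                   # G_alpha(z)
rhs = G(z / alpha, L) / alpha           # (1/alpha) G_1(z/alpha)
print(np.allclose(lhs, rhs))  # True
```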
We will also need integrals of products of Green functions, for example

\[ I = \int G(\mathbf{r}_1,\mathbf{r};z)\,G(\mathbf{r},\mathbf{r}_2;z)\, d\mathbf{r} = \langle\mathbf{r}_1|\,\hat G(z)\left[\int |\mathbf{r}\rangle\langle\mathbf{r}|\, d\mathbf{r}\right]\hat G(z)\,|\mathbf{r}_2\rangle. \]

Since $\int |\mathbf{r}\rangle\langle\mathbf{r}|\, d\mathbf{r} = 1$ and $\hat G(z) = \sum_n |n\rangle\langle n|/(z - E_n)$, we have

\[ I = \langle\mathbf{r}_1|\sum_n \frac{|n\rangle\langle n|}{z - E_n}\sum_m \frac{|m\rangle\langle m|}{z - E_m}|\mathbf{r}_2\rangle. \tag{A.26} \]

But, since $\langle n|m\rangle = \delta_{nm}$,

\[ I = \langle\mathbf{r}_1|\sum_n \frac{|n\rangle\langle n|}{(z - E_n)^2}|\mathbf{r}_2\rangle \tag{A.27} \]

which is very like the Green function except that the denominator is squared. We can get around this by taking a derivative:

\[ I = \langle\mathbf{r}_1|\sum_n \left(-\frac{d}{dz}\right)\frac{|n\rangle\langle n|}{z - E_n}|\mathbf{r}_2\rangle. \tag{A.28} \]

We move the derivative outside the sum and matrix element to get

\[ I = -\frac{d}{dz}\, G(\mathbf{r}_1,\mathbf{r}_2;z). \tag{A.29} \]
When the system is separable, the Green function can be re-summed in terms of lower-dimensional Green functions. Expanding in the product eigenbasis $\{|\chi_j\rangle|\phi_i\rangle\}$, with eigenvalues $\varepsilon_j$ and $\varepsilon_i$ for the two factors,

\[ \hat G(z) = \sum_j \sum_i \frac{|\chi_j\rangle|\phi_i\rangle\langle\phi_i|\langle\chi_j|}{z - \varepsilon_j - \varepsilon_i} \tag{A.30} \]

\[ = \sum_j |\chi_j\rangle\langle\chi_j| \sum_i \frac{|\phi_i\rangle\langle\phi_i|}{z - \varepsilon_j - \varepsilon_i} \tag{A.31} \]

but now the inner sum is just another Green function (albeit of lower dimension), so we have

\[ \hat G(z) = \sum_j |\chi_j\rangle\langle\chi_j|\,\hat G^{(\parallel)}(z - \varepsilon_j) = \sum_i |\phi_i\rangle\langle\phi_i|\,\hat G^{(\perp)}(z - \varepsilon_i). \tag{A.32} \]
Consider the example mentioned at the beginning of this section. We'll label the eigenstates in the x-direction by their wavenumber, k, and label the transverse modes by a channel index, n. So we have

\[ \hat G^{(2)}(z) = \int_{-\infty}^{\infty} |k\rangle\langle k|\,\hat G^{(\perp)}(E - \varepsilon(k))\, dk = \sum_n |n\rangle\langle n|\,\hat g_o^{(1)}(E - \varepsilon_n) \tag{A.33} \]

where $\hat g_o^{(1)}(z)$ is the one-dimensional free Green function. In the position representation this latter equality is rewritten as

\[ G^{(2)}(x,y,x',y';z) = \sum_n \chi_n(y)\,\chi_n^*(y')\, g^{(1)}(x,x'; E - \varepsilon_n). \tag{A.34} \]
A.5 Examples
Below we examine two examples of the explicit computation of Green functions. We begin with the mundane but useful free-space Green function in two dimensions. Then we consider the more esoteric Gor'kov Green function, the Green function of a single electron in a superconductor.
The free-space Green function in two dimensions satisfies

\[ \left[z + \nabla^2\right] G_o(\mathbf{r},\mathbf{r}';z) = \delta(\mathbf{r} - \mathbf{r}'). \tag{A.35} \]
By translational symmetry of $L$, $G_o(\mathbf{r},\mathbf{r}';z)$ is a function only of $\rho = |\mathbf{r} - \mathbf{r}'|$. For $\rho \neq 0$, $G(\rho;z)$ satisfies the homogeneous differential equation
\[ \left[z + \nabla^2\right] G(\rho;z) = 0. \tag{A.36} \]

Recall that

\[ \delta(\mathbf{r} - \mathbf{r}') = \frac{1}{2\pi\rho}\,\delta(\rho) \tag{A.37} \]

so we may re-write (A.35), integrated over a disk of radius $\rho'$, as

\[ 2\pi \int_0^{\rho'} z\, G_o\, \rho\, d\rho + 2\pi \int_0^{\rho'} \nabla^2 G_o(\rho;z)\, \rho\, d\rho = 1. \tag{A.38} \]

In polar coordinates,

\[ \nabla^2 = \frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\rho\,\frac{\partial}{\partial\rho}\right) + \frac{1}{\rho^2}\frac{\partial^2}{\partial\phi^2} \tag{A.39} \]

so, for a function of $\rho$ alone,

\[ \nabla^2 f(\rho) = \frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\rho\,\frac{\partial f}{\partial\rho}\right). \tag{A.40} \]

Thus

\[ 2\pi \int_0^{\rho'} \nabla^2 G_o(\rho;z)\, \rho\, d\rho = 2\pi\rho'\,\frac{\partial G_o}{\partial\rho}\bigg|_{\rho'} \tag{A.41} \]

where the last equality follows from Gauss' Theorem (in this case, the fundamental theorem of calculus). So we have

\[ 2\pi \int_0^{\rho'} z\, G_o\, \rho\, d\rho + 2\pi\rho'\,\frac{\partial G_o}{\partial\rho}\bigg|_{\rho'} = 1 \tag{A.42} \]

which, as $\rho' \to 0$, gives

\[ \lim_{\rho \to 0}\, \rho\,\frac{\partial G_o}{\partial\rho} = \frac{1}{2\pi}. \tag{A.43} \]

Also, we require that the solution remain bounded as $\rho \to \infty$. \hfill (A.44)
General solutions of (A.35) are linear combinations of Hankel functions of the first and second kind, of the form (see, e.g., [1])

\[ G(\rho,\phi;z) = \left[A\,H_n^{(1)}(\sqrt{z}\,\rho) + B\,H_n^{(2)}(\sqrt{z}\,\rho)\right] e^{\pm in\phi}. \tag{A.45} \]

Since we are looking for a $\phi$-independent solution we must have $n = 0$. Since $H_0^{(2)}(\sqrt{z}\,\rho)$ blows up as $\rho \to \infty$ (for $\mathrm{Im}\,\sqrt{z} > 0$), we must have $B = 0$. The boundary condition (A.43) then fixes $A = -\frac{i}{4}$. So

\[ G_o(\mathbf{r},\mathbf{r}';z) = -\frac{i}{4}\, H_0^{(1)}(\sqrt{z}\,|\mathbf{r} - \mathbf{r}'|) \tag{A.46} \]

where $H_0^{(1)}$ is the Hankel function of zero order of the first kind:

\[ H_0^{(1)}(x) = J_0(x) + iY_0(x) \tag{A.47} \]
where $J_0(x)$ is the zero-order Bessel function and $Y_0(x)$ is the Neumann function of zero order. It will be useful to identify some properties of $Y_0(x)$ for use elsewhere. We will often be interested in $Y_0(x)$ for small $x$. As $x \to 0$,

\[ Y_0(x) = Y_0^{(R)}(x) + \frac{2}{\pi}\, J_0(x)\ln(x) \tag{A.48} \]

where $Y_0^{(R)}(x)$ is called the "regular" part of $Y_0(x)$. We note that $Y_0^{(R)}(0) \neq 0$. Ordinarily, the specific value of this constant is irrelevant since it is overwhelmed by the logarithm. However, we will have occasion to subtract the singular part of $G_o$, and the constant $Y_0^{(R)}(0)$ will be important.
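The constant can be computed directly from the standard small-argument series for $J_0$ and $Y_0$. A pure-Python sketch, truncated at a fixed order; the limiting value $(2/\pi)(\gamma - \ln 2) \approx -0.0738$ used in the check is the standard result for $Y_0^{(R)}(0)$:

```python
import math

def J0(x, terms=20):
    """Series J0(x) = sum_k (-1)^k (x/2)^(2k) / (k!)^2."""
    return sum((-1) ** k * (x / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def Y0(x, terms=20):
    """Standard small-x expansion of the zero-order Neumann function."""
    gamma = 0.5772156649015329            # Euler-Mascheroni constant
    s = sum((-1) ** (k + 1)
            * sum(1.0 / j for j in range(1, k + 1))   # harmonic number H_k
            * (x / 2) ** (2 * k) / math.factorial(k) ** 2
            for k in range(1, terms))
    return (2 / math.pi) * ((math.log(x / 2) + gamma) * J0(x, terms) + s)

# Regular part Y0^R(x) = Y0(x) - (2/pi) J0(x) ln(x); as x -> 0 this
# approaches the nonzero constant that matters when subtracting the
# singular part of Go.
x = 1e-8
Y0R = Y0(x) - (2 / math.pi) * J0(x) * math.log(x)
print(Y0R)
```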
where $\hat H_o$ is the single-particle Hamiltonian, $\hat\mu = \int d\mathbf{r}\, |\mathbf{r}\rangle\,\mu(\mathbf{r})\,\langle\mathbf{r}|$ is the (possibly position dependent) chemical potential and $\hat\Delta = \int d\mathbf{r}\, |\mathbf{r}\rangle\,\Delta(\mathbf{r})\,\langle\mathbf{r}|$ is the (possibly position dependent) superconductor energy gap. In the $\hat\Delta = 0$ case, we have the Schrödinger equation for $|f\rangle$ (the electron state) and the time-reversed Schrödinger equation for $|g\rangle$ (the hole state). If we form the spinor

\[ |\psi\rangle = \begin{pmatrix} |f\rangle \\ |g\rangle \end{pmatrix} \tag{A.52} \]

we can write the pair of equations compactly in terms of

\[ \begin{pmatrix} \hat H_o - \hat\mu & \hat\Delta \\ \hat\Delta^\dagger & -(\hat H_o - \hat\mu) \end{pmatrix} |\psi\rangle = \mathcal{H}\,|\psi\rangle. \tag{A.53} \]
In order to compute the Green function operator, $\hat G(z)$, for this system we form

\[ \hat G^{-1}(z) = z - \mathcal{H} = \begin{pmatrix} z - \hat H_o + \hat\mu & -\hat\Delta \\ -\hat\Delta^\dagger & z + \hat H_o - \hat\mu \end{pmatrix} \tag{A.54} \]
which, at first, looks difficult to invert. However, there are some nice techniques we can apply to a matrix of this form. To understand them, we need a brief review of 2×2 quantum mechanics. Recall that the Pauli spin matrices are defined

\[ \sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \]
and that the set fI 1 2 3 g is a basis for the vector space of complex 2 2 matrices. The Pauli matrices satisfy
  [σᵢ, σⱼ] = σᵢσⱼ − σⱼσᵢ = 2i εᵢⱼₖ σₖ   (A.55)
  {σᵢ, σⱼ} = σᵢσⱼ + σⱼσᵢ = 2 δᵢⱼ I   (A.56)

where εᵢⱼₖ is the Levi-Civita symbol (equal to 1 if ijk is a cyclic permutation of 123, −1 if ijk is an anti-cyclic permutation of 123 and 0 otherwise) and δᵢⱼ is the Kronecker delta. To simplify the later manipulations we rewrite Ĝ⁻¹:
  Ĝ⁻¹(z) = ĝ⁻¹(z) σ̂₃  ⟹  Ĝ(z) = σ̂₃ ĝ(z)   (A.57)

where (using σ̂₃² = I)

  ĝ⁻¹(z) = Ĝ⁻¹(z) σ̂₃ = ( z − Ĥ₀ + μ̂ , Δ̂ ; −Δ̂† , −(z + Ĥ₀ − μ̂) ).   (A.58)
We now expand this operator in the Pauli basis,

  ĝ⁻¹(z) = â I + i b̂·σ̂   (A.59)

where â = −(Ĥ₀ − μ̂) and the b̂ᵢ are read off by matching (A.58). The manipulations below assume

  [â, b̂ᵢ] = 0 ∀i,   [b̂ᵢ, b̂ⱼ] = 0 ∀i, j.

For our problem, as long as Δ̂ = ΔI with Δ constant, we satisfy these assumptions. That is, we are in a uniform superconductor. So

  ĝ⁻¹(z) = ( â − ib̂₃ , i(b̂₁ − ib̂₂) ; i(b̂₁ + ib̂₂) , â + ib̂₃ ).   (A.60)

Since all the operators commute, we can invert this like a 2×2 matrix of scalars:

  ĝ(z) = (â I − i b̂·σ̂)/(â² + b̂²).   (A.61)

To verify this, compute

  ĝ(z) ĝ⁻¹(z) = (â I − i b̂·σ̂)(â I + i b̂·σ̂)/(â² + b̂²)   (A.62)

and note that, by (A.55)–(A.56), the cross terms in the expansion of (b̂·σ̂)² cancel:

  (b̂·σ̂)² = Σ_{i=1}³ b̂ᵢ² I + Σ_{i<j} b̂ᵢ b̂ⱼ {σᵢ, σⱼ} = b̂² I   (A.63)

since {σᵢ, σⱼ} = 0 for i ≠ j, where b̂² ≡ b̂₁² + b̂₂² + b̂₃².
So we have

  ĝ(z) ĝ⁻¹(z) = (â² I + b̂² I)/(â² + b̂²) = I.   (A.64)
At this point we have an expression for ĝ(z) but it's not obvious how we evaluate 1/(â² + b̂²). We use another trick, and factor ĝ(z) as follows (defining b̂ = |b̂|):

  ĝ(z) = (â I − i b̂·σ̂)/(â² + b̂²) = (1/2) Σ_{s=±1} (I + s b̂·σ̂/b̂)/(â + isb̂).   (A.65)
Why does this help? We've replaced the problem of inverting â² + b̂² with the problem of inverting â ± ib̂. We recall that â = −(Ĥ₀ − μ̂) and b̂ = (b̂₁² + b̂₂² + b̂₃²)^{1/2} = √(|Δ|² − z²). So

  â ± ib̂ = −(Ĥ₀ − μ̂ ∓ i√(|Δ|² − z²))   (A.66)

which means

  (â ± ib̂)⁻¹ = Ĝ(∓ω),  where ω ≡ √(z² − |Δ|²) and Ĝ(ω) = [ω − (Ĥ₀ − μ̂)]⁻¹.   (A.67)

We define

  f±(z) = 1/2 ± z/(2√(z² − |Δ|²)) = (1/2)[1 ± z/√(z² − |Δ|²)]   (A.68)

and then write

  ĝ(z) = ( Ĝ(ω)f₊(z) + Ĝ(−ω)f₋(z) , (Δ̂/2ω)[Ĝ(ω) − Ĝ(−ω)] ; −(Δ̂†/2ω)[Ĝ(ω) − Ĝ(−ω)] , Ĝ(−ω)f₊(z) + Ĝ(ω)f₋(z) ).   (A.69)

So, finally, we have a simple closed form expression for Ĝ(z):

  Ĝ(z) = σ̂₃ ĝ(z) = ( Ĝ(ω)f₊(z) + Ĝ(−ω)f₋(z) , (Δ̂/2ω)[Ĝ(ω) − Ĝ(−ω)] ; (Δ̂†/2ω)[Ĝ(ω) − Ĝ(−ω)] , −Ĝ(−ω)f₊(z) − Ĝ(ω)f₋(z) ).   (A.70)
Various Limits

Δ = 0

When Δ = 0 we have

  f₊(z) = 1,  f₋(z) = 0

and thus, since ω = z when Δ = 0,

  Ĝ(z) = ( Ĝ(z) , 0 ; 0 , −Ĝ(−z) ).   (A.71)
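The closed form above is easy to check in a finite-dimensional model. The sketch below (assuming NumPy, a random Hermitian Ĥ₀, and a uniform real Δ, so that Δ̂† = Δ̂) compares the formula against brute-force inversion of z − H:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
H0 = rng.normal(size=(N, N)); H0 = (H0 + H0.T) / 2   # Hermitian single-particle Hamiltonian
mu, Delta = 0.3, 0.7                                  # uniform chemical potential and gap
z = 1.5 + 0.2j                                        # complex energy
xi = H0 - mu * np.eye(N)

# brute force: invert the full Bogoliubov-de Gennes resolvent
H = np.block([[xi, Delta * np.eye(N)], [Delta * np.eye(N), -xi]])
G_direct = np.linalg.inv(z * np.eye(2 * N) - H)

# closed form: every block is a function of xi, so all blocks commute
w = np.sqrt(z**2 - Delta**2)               # omega = sqrt(z^2 - |Delta|^2)
Gp = np.linalg.inv(w * np.eye(N) - xi)     # G(+omega)
Gm = np.linalg.inv(-w * np.eye(N) - xi)    # G(-omega)
fp, fm = 0.5 * (1 + z / w), 0.5 * (1 - z / w)
off = (Delta / (2 * w)) * (Gp - Gm)
G_formula = np.block([[fp * Gp + fm * Gm, off],
                      [off, -fp * Gm - fm * Gp]])

print(np.max(np.abs(G_direct - G_formula)))  # machine-precision agreement
```

Because Δ here is a scalar, the block algebra reduces exactly to the commuting-operator manipulations of the text.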
Appendix B

Here we consider a boundary C on which the wavefunction satisfies the mixed boundary condition

  α(s) ψ(r(s)) + [1 − α(s)] ∂_{n(s)} ψ(r(s)) = 0   (B.1)

which interpolates between the Dirichlet (α = 1) and Neumann (α = 0) cases. The wall now couples to the combination

  ψ_B(s) = α(s) ψ(r(s)) + [1 − α(s)] ∂_{n(s)} ψ(r(s))   (B.2)

and thus is somewhat more difficult than the case of Dirichlet boundary conditions considered in section 4.2. First, we assume n(s) is a unit vector normal to C at each point s, and define

  ∂_{n(s)} f(r(s)) = n(s)·∇f(r(s)).   (B.3)

Second, we insert (B.2) into (4.3) to get

  ψ(r) = φ(r) + ∫_C ds′ σ(s′) G₀(r, r(s′)) {α(s′) + [1 − α(s′)] ∂_{n(s′)}} ψ(r(s′))   (B.4)

which we then consider at a point r(s″) on C (with the same notational abbreviation used in section 4.2):

  ψ(s″) = φ(s″) + ∫ ds′ σ(s′) G₀(s″, s′) {α(s′) + [1 − α(s′)] ∂_{n(s′)}} ψ(s′).   (B.5)

As it stands, (B.5) is not a linear equation in ψ_B. To fix this, we multiply both sides by α(s″) + [1 − α(s″)] ∂_{n(s″)} and define

  ψ_B(s″) = {α(s″) + [1 − α(s″)] ∂_{n(s″)}} ψ(s″)
  φ_B(s″) = {α(s″) + [1 − α(s″)] ∂_{n(s″)}} φ(s″)   (B.6)
  G₀^B(s″, s′) = {α(s″) + [1 − α(s″)] ∂_{n(s″)}} G₀(s″, s′).

This yields

  ψ̃_B = φ̃_B + G̃₀^B σ̃ ψ̃_B   (B.7)

where again the tildes emphasize that the equation is defined only on C. The diagonal operator σ̃ is

  σ̃ f(s) = σ(s) f(s).   (B.9)

We define

  T^B = σ̃ [Ĩ − G̃₀^B σ̃]⁻¹   (B.10)

that solves the original problem,

  ψ(r) = φ(r) + ∫_C ds′ G₀(r, r(s′)) [T^B φ_B](r(s′))   (B.11)

for

  [T^B φ_B](r(s′)) = ∫ ds T^B(s′, s) φ_B(s).   (B.12)

As in section 4.2, in the limit σ(s) → ∞, T^B converges to

  −[G̃₀^B]⁻¹   (B.13)

which, when inserted into (B.7) (via σ̃ ψ̃_B = T^B φ̃_B), gives

  ψ̃_B = φ̃_B + G̃₀^B T^B φ̃_B = {Ĩ − G̃₀^B [G̃₀^B]⁻¹} φ̃_B = 0,

the desired boundary condition (B.1). For completeness, we expand T^B in a power series,

  T^B = σ̃ + σ̃ Σ_{j=1}^∞ [G̃₀^B σ̃]^j

so

  T^B(s, s″) = σ(s) δ(s − s″) + σ(s) Σ_{j=1}^∞ [T^B]^{(j)}(s, s″)

where [T^B]^{(j)}(s, s″) is the kernel of [G̃₀^B σ̃]^j, allowing one, at least in principle, to compute T^B(s, s″), and thus the wavefunction everywhere.
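On a discretized boundary these operator identities become plain matrix algebra. A minimal sketch (assuming NumPy; the random matrices are stand-ins for the discretized G̃₀^B and σ̃, not an actual boundary geometry):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
G = rng.normal(size=(n, n))             # stands in for the discretized G0^B
sigma = np.diag(rng.uniform(1, 2, n))   # diagonal "strength" operator sigma~
phi = rng.normal(size=n)                # stands in for phi_B on the boundary

# T^B = sigma (I - G0^B sigma)^(-1), the discrete analogue of the definition above
T = sigma @ np.linalg.inv(np.eye(n) - G @ sigma)

# psi_B solves psi_B = phi_B + G0^B sigma psi_B; then sigma psi_B = T^B phi_B
psi = np.linalg.solve(np.eye(n) - G @ sigma, phi)
print(np.max(np.abs(sigma @ psi - T @ phi)))       # ~ 0

# in the sigma -> infinity limit, T^B -> -(G0^B)^(-1)
s = 1e9
T_big = s * np.linalg.inv(np.eye(n) - s * G)
print(np.max(np.abs(T_big + np.linalg.inv(G))))    # small, O(1/s)
```

The second check illustrates how the large-strength limit removes all dependence on σ, leaving only the boundary Green function.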
Appendix C

Consider a set of N coupled linear equations in N unknowns,

  Σ_{j=1}^N aᵢⱼ xⱼ = bᵢ   (C.1)

or, in matrix form,

  A x = b   (C.2)

where (A)ᵢⱼ = aᵢⱼ. This implies that a formal solution is available if the matrix inverse A⁻¹ exists. Namely,

  x = A⁻¹ b.   (C.3)
Most techniques for solving (C.2) do not actually invert A but rather "decompose" A in a form where we can compute A⁻¹b efficiently for a given b. One such form is the LU decomposition,

  A = LU   (C.4)

where L is a lower triangular matrix and U is an upper triangular matrix. Since it is simple to solve a triangular system (see [17], section 3.1) we can solve our original equations in a two step process. We find a y which solves Ly = b and then find the x which solves Ux = y. This is an abstract picture of the familiar process of Gaussian elimination. Essentially, there exists a product of (unit diagonal) lower triangular matrices which makes A upper triangular. Each of these lower triangular matrices is a Gauss transformation which zeroes all the elements below the diagonal in A, one column at a time. The LU factorization returns the inverse of the product of the lower triangular matrices as L and the resulting upper triangular matrix as U. It is easy to show that a product of lower (upper) triangular matrices is lower (upper) triangular, and the same for the inverse. That is, L represents a sequence of Gauss transformations and U represents the result of those transformations. For large matrices, the LUD requires approximately 2N³/3 flops (floating point operations) to compute.

The LUD has several drawbacks. The computation of the Gauss transformations involves division by aᵢᵢ as the ith column is zeroed. This means that if aᵢᵢ is zero for any i the computation will fail. This can happen in two ways. If a leading principal submatrix of A is singular, i.e., det(A(1:i, 1:i)) = 0, then aᵢᵢ will be zero when the ith column is zeroed. If A is non-singular, pivoting techniques can successfully find an LUD of a row-permuted version of A. Row permutation of A is harmless in terms of finding the solution. However, if A is singular then we can only chase the small pivots of A down r columns, where r = rank(A). At that point we will encounter small pivots and numerical errors will destroy the solution. Thus we are led to look for methods which are stable even when A is singular.
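In practice one calls a library routine that performs the factorization with partial pivoting and then does the two triangular solves; a sketch assuming SciPy:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))
b = rng.normal(size=5)

# factor once (with partial pivoting), then solve cheaply for each new b
lu, piv = lu_factor(A)       # packed L and U, plus the row permutation
x = lu_solve((lu, piv), b)   # triangular solve with L, then with U

print(np.max(np.abs(A @ x - b)))  # residual ~ machine precision
```

Factoring once and reusing the decomposition is what makes the O(N³) cost tolerable when many right-hand sides share the same A.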
Provided we pivot the columns of A as we do the QRD, then, though we still cannot solve an ill-posed problem, we can extract a least squares solution from this column pivoted QRD (QRD CP). For large N, the QRD requires approximately 4N³/3 flops (twice as many as the LUD) and the QRD CP requires 4N³ flops to compute.
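A sketch of how the column pivoted QRD exposes rank, assuming SciPy's `qr` with `pivoting=True` (the rank-3 test matrix is constructed by hand):

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 3)) @ rng.normal(size=(3, 5))  # exactly rank 3

Q, R, piv = qr(A, pivoting=True)
# with column pivoting, |diag(R)| is non-increasing, so tiny trailing
# entries reveal the numerical rank
d = np.abs(np.diag(R))
rank = int(np.sum(d > 1e-10 * d[0]))
print(rank)  # 3
```

The small trailing diagonal entries of R play the role of the "small pivots" discussed above, but here they appear harmlessly at the end of the factorization.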
The singular value decomposition (SVD) writes

  A = U Σ Vᵀ   (C.6)

where U and V are orthogonal and Σ = diag(σ₁, …, σ_N). In this formulation, zero singular values correspond to zero eigenvalues and the corresponding vectors may be extracted from V. Since the SVD is computed entirely with orthogonal transformations, it is stable even when applied to singular matrices.

The SVD of a symmetric matrix is an eigendecomposition. That is, the singular values of a symmetric matrix are the absolute values of the eigenvalues and the singular vectors are the eigenvectors. If a symmetric matrix has two eigenvalues with the same absolute value but opposite sign, the SVD cannot distinguish these eigenvalues and may mix them. For non-degenerate eigenvalues, the sign information is encoded in UᵀV, which is a diagonal matrix made up of ±1. All of this follows from a careful treatment of the uniqueness of the decomposition.

The SVD can be applied to computing only the singular values (SVD S), the singular values and the matrix V (SVD SV), or the singular values, the matrix V, and the matrix U (SVD USV). The flop count is different for these three versions (we include the LUD and QRD for comparison) and we summarize this in table C.1.
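The symmetric-matrix statements are easy to verify numerically; a sketch assuming NumPy (for a generic symmetric matrix with non-degenerate |eigenvalues|, the sign matrix UᵀV comes out diagonal with entries ±1):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 5)); A = (A + A.T) / 2   # symmetric test matrix

U, s, Vt = np.linalg.svd(A)
evals = np.linalg.eigvalsh(A)

# singular values are the absolute values of the eigenvalues
print(np.allclose(np.sort(s), np.sort(np.abs(evals))))

# the sign information sits in U^T V, a diagonal matrix of +/-1
S = U.T @ Vt.T
print(np.allclose(S, np.diag(np.diag(S)), atol=1e-8),
      np.allclose(np.abs(np.diag(S)), 1.0))
```

For each non-degenerate singular value the left and right singular vectors agree up to the sign of the corresponding eigenvalue, which is exactly what the diagonal of UᵀV records.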
  Decomposition   flops    seconds
  SVD USV         20N³     13.51
  SVD SV          12N³      9.55
  SVD S            4N³      2.13
  QRD CP           4N³      1.73
  QRD              4N³/3    0.43
  LUD              2N³/3    N/A

Table C.1: Comparison of flop counts and timing for various matrix decompositions. The timing test was the decomposition of a 500×500 matrix on a DEC Alpha 500/500 workstation. We include this table to point out that the choice of algorithm can have a dramatic effect on computation time. For instance, when looking for a t such that A(t) is singular, we may use either the SVD S or the QRD to examine the rank of A(t). However, using the QRD will be at least 4 times faster than using the SVD S.
Appendix D

We will make frequent use of the identity

  Σ_{n=1}^∞ xⁿ/n = ln[1/(1 − x)].   (D.1)

In particular, for real ε > 0,

  Σ_{n=1}^∞ e^{inθ} e^{−nε}/n = ln[1/(1 − e^{iθ−ε})]   (D.2)

so

  Σ_{n=1}^∞ cos(nθ) e^{−nε}/n = Re ln[1/(1 − exp(iθ − ε))] = −(1/2) ln[(2 cosh ε − 2 cos θ) e^{−ε}]   (D.3)

and

  Σ_{n=1}^∞ sin(nθ) e^{−nε}/n = Im ln[1/(1 − exp(iθ − ε))]   (D.4)
    = arctan[sin θ/(e^ε − cos θ)].   (D.5)
Since sin(nθ) sin(nθ′) = (1/2)[cos n(θ − θ′) − cos n(θ + θ′)], we also have

  Σ_{n=1}^∞ sin(nθ) sin(nθ′) e^{−nε}/n
    = (1/4) ln[(2 cosh ε − 2 cos(θ + θ′)) e^{−ε}] − (1/4) ln[(2 cosh ε − 2 cos(θ − θ′)) e^{−ε}]
    = (1/4) ln[(cosh ε − cos(θ + θ′))/(cosh ε − cos(θ − θ′))]
    = (1/4) ln[(sinh²(ε/2) + sin²((θ + θ′)/2))/(sinh²(ε/2) + sin²((θ − θ′)/2))]

where the last line uses cosh ε − cos φ = 2[sinh²(ε/2) + sin²(φ/2)]. We also note that, for ε << 1 and θ << 1,

  −(1/2) ln[(2 cosh ε − 2 cos θ) e^{−ε}] = −(1/2) ln[1 + e^{−2ε} − e^{iθ−ε} − e^{−iθ−ε}] ≈ −(1/2) ln(ε² + θ²)   (D.6)

since, in the small argument expansion of the exponentials, the constant and linear terms cancel. Similarly, for |θ − θ′| << 1,

  (1/4) ln[(sinh²(ε/2) + sin²((θ + θ′)/2))/(sinh²(ε/2) + sin²((θ − θ′)/2))]
    ≈ (1/4) ln[sin²((θ + θ′)/2)] − (1/4) ln[(ε² + (θ − θ′)²)/4].   (D.7)
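These closed forms can be validated against direct partial sums; a sketch assuming NumPy (the parameter values are arbitrary):

```python
import numpy as np

theta, thetap, eps = 0.7, 0.4, 0.3
n = np.arange(1, 500)

# sum cos(n theta) e^(-n eps)/n  vs  -1/2 ln[(2 cosh(eps) - 2 cos(theta)) e^(-eps)]
lhs = np.sum(np.cos(n * theta) * np.exp(-n * eps) / n)
rhs = -0.5 * np.log((2 * np.cosh(eps) - 2 * np.cos(theta)) * np.exp(-eps))
print(abs(lhs - rhs))    # ~ 0

# sum sin(n theta) sin(n theta') e^(-n eps)/n  vs  the 1/4 ln ratio form
lhs2 = np.sum(np.sin(n * theta) * np.sin(n * thetap) * np.exp(-n * eps) / n)
rhs2 = 0.25 * np.log((np.cosh(eps) - np.cos(theta + thetap)) /
                     (np.cosh(eps) - np.cos(theta - thetap)))
print(abs(lhs2 - rhs2))  # ~ 0
```

With ε = 0.3 the series terms fall off geometrically, so truncation at n = 500 leaves an error far below roundoff.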
where γₙ = (n²π²/l² − E)^{1/2}, and

  aₙ(ε) = sin(nπx/l) sin(nπx′/l) (n²π²/l² − E)^{−1/2} exp(−γₙ ε)   (D.8)
  bₙ(ε) = sin(nπx/l) sin(nπx′/l) (l/nπ) exp(−nπε/l).   (D.9)

In this case, we may perform Σ_{n=1}^∞ bₙ(ε) for ε ≥ 0 using the identities above (section D.1). We need to show that Σ_{n=1}^∞ [aₙ(ε) − bₙ(ε)] converges and that it converges uniformly with respect to ε for all ε ≥ 0. Since |sin(nπx/l) sin(nπx′/l)| ≤ 1, we have

  |aₙ(ε) − bₙ(ε)| ≤ (l/nπ)[(1 − El²/n²π²)^{−1/2} exp(−(nπε/l)(1 − El²/n²π²)^{1/2}) − exp(−nπε/l)].

Since

  1. n > l√(2E)/π implies exp[−(nπε/l)(1 − El²/n²π²)^{1/2}] < exp(−nπε/2l),   (D.10)
  2. n > l√(2E)/π implies (1 − El²/n²π²)^{−1/2} < 1 + El²/n²π²,   (D.11)
  3. n > l ln 2/(2πh) implies (1 − e^{−2πnh/l})⁻¹ < 1 + 2e^{−2πnh/l},   (D.12)
  4. x ≥ 0 implies e^{−x} ≥ 1 − x,

we can, for n > M ≡ max(l√(2E)/π, l ln 2/(2πh)), bound the summand by elementary terms of order (l/nπ)(El²/n²π²), and summing these bounds shows that Σ_{n=M}^∞ [aₙ(ε) − bₙ(ε)] converges for all ε ≥ 0. Further, since (Elε/nπ) exp(−nπε/l) < El²/n²π² for all ε ≥ 0, the summand admits the ε-independent bound

  |aₙ(ε) − bₙ(ε)| < (7/3) El³/n³π³ ≡ fₙ   (D.17)

and Σ_{n=M}^∞ fₙ converges. Therefore Σ_{n=M}^∞ [aₙ(ε) − bₙ(ε)] converges uniformly with respect to ε for ε ≥ 0.
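The point of the aₙ/bₙ split is numerical: Σ bₙ is known in closed form from section D.1, while Σ (aₙ − bₙ) converges like n⁻³ even at ε = 0, where the bare sum converges only slowly. A sketch assuming NumPy (l, E, x, x′ are arbitrary, with E < π²/l² so every γₙ is real):

```python
import numpy as np

l, E, x, xp = 1.0, 5.0, 0.3, 0.55   # strip width, energy, two transverse points
eps = 0.0                           # works even at eps = 0

def a(n):
    g = np.sqrt(n**2 * np.pi**2 / l**2 - E)   # gamma_n
    return np.sin(n*np.pi*x/l) * np.sin(n*np.pi*xp/l) * np.exp(-g * eps) / g

def b(n):
    return np.sin(n*np.pi*x/l) * np.sin(n*np.pi*xp/l) \
        * np.exp(-n*np.pi*eps/l) * l / (n*np.pi)

# closed form for sum b_n via the sin*sin identity of section D.1
t1, t2, ee = np.pi*(x - xp)/l, np.pi*(x + xp)/l, np.pi*eps/l
sum_b = (l/np.pi) * 0.25 * np.log((np.cosh(ee) - np.cos(t2)) /
                                  (np.cosh(ee) - np.cos(t1)))

def accelerated(N):
    n = np.arange(1, N + 1)
    return sum_b + np.sum(a(n) - b(n))

print(accelerated(100), accelerated(10000))  # agree to ~ 1e-6
```

The residual between the two truncations shrinks like N⁻², consistent with the n⁻³ bound on the summand.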
and

  aₙ(ε) = cos(nπ|x − x′|/l) (n²π²/l² − E)^{−1/2} exp(−γₙ ε)   (D.18)
  bₙ(ε) = cos(nπ|x − x′|/l) (l/nπ) exp(−nπε/l).   (D.19)

In this case, we may perform Σ_{n=1}^∞ bₙ(ε) for ε ≥ 0 using the identities above (section D.1). We need to show that Σ_{n=1}^∞ [aₙ(ε) − bₙ(ε)] converges and that it converges uniformly with respect to ε for all ε ≥ 0. Since |cos(nπ|x − x′|/l)| ≤ 1, we have

  |aₙ(ε) − bₙ(ε)| ≤ (l/nπ)[(1 − El²/n²π²)^{−1/2} exp(−(nπε/l)(1 − El²/n²π²)^{1/2}) − exp(−nπε/l)].

For 0 < x < 1 we have (1 − x)^{1/2} ≥ 1 − x, and thus exp(−a(1 − x)^{1/2}) ≤ exp(−a(1 − x)), so the first exponential is bounded by exp(−nπε/l) exp(Elε/nπ). Since

  1. n > l√(2E)/π implies exp[−(nπε/l)(1 − El²/n²π²)^{1/2}] < exp(−nπε/2l),   (D.20)
  2. n > l√(2E)/π implies (1 − El²/n²π²)^{−1/2} < 1 + El²/n²π²,   (D.21)
  3. x ≥ 0 implies e^{−x} ≥ 1 − x,

we can, for n > M ≡ l√(2E)/π, bound the summand as before, and Σ_{n=M}^∞ [aₙ(ε) − bₙ(ε)] converges for all ε ≥ 0. Further, since (Elε/nπ) exp(−nπε/l) < El²/n²π² for all ε ≥ 0, the summand admits the ε-independent bound

  |aₙ(ε) − bₙ(ε)| < (4/3) El³/n³π³ ≡ fₙ   (D.26)

and Σ_{n=M}^∞ fₙ converges. Therefore Σ_{n=M}^∞ [aₙ(ε) − bₙ(ε)] converges uniformly with respect to ε for ε ≥ 0.
Appendix E

The plane wave can be expanded in cylindrical harmonics as

  e^{ikr cos θ} = Σ_{l=−∞}^∞ (i)^l J_l(kr) e^{ilθ} = Σ_{l=−∞}^∞ (i)^l J_l(kr) cos(lθ)

where the second equality follows since J_{−l}(kr) = (−1)^l J_l(kr), so the sin(lθ) parts of the l and −l terms cancel.
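A direct numerical check of this expansion (a sketch assuming SciPy's `jv`; truncating at |l| ≤ 40 is ample for kr = 3):

```python
import numpy as np
from scipy.special import jv

kr, theta = 3.0, 0.8
l = np.arange(-40, 41)
series = np.sum(1j**l * jv(l, kr) * np.exp(1j * l * theta))
print(abs(series - np.exp(1j * kr * np.cos(theta))))  # ~ 0
```

Because J_l(kr) decays super-exponentially once |l| exceeds kr, the truncation error is negligible here.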
E.3 Asymptotics as kr → ∞

For kr >> 1 (and kr >> n²),

  Jₙ(kr) ≈ √(2/πkr) cos(kr − nπ/2 − π/4)
  Yₙ(kr) ≈ √(2/πkr) sin(kr − nπ/2 − π/4)
  Hₙ⁽¹⁾(kr) ≈ √(2/πkr) e^{i(kr − nπ/2 − π/4)}
  Hₙ⁽²⁾(kr) = [Hₙ⁽¹⁾(kr)]* ≈ √(2/πkr) e^{−i(kr − nπ/2 − π/4)}.