College: College of Science for Women     Department: Department of Laser Physics     Stage: 5
Course instructor: Enas Mohammed Salman Al-Rubaie       07/12/2016 09:50:55
Quantum Mechanical Background
To make this book self-contained and accessible to a broader audience, we
begin with an outline of the mathematical framework of quantum theory,
introducing vector spaces and linear operators, the postulates of quantum
mechanics, the Schrödinger equation, and the density operator. In order to
illustrate as well as motivate much of the discussion, we review the properties
of the simplest, yet very important quantum mechanical system—the quantized
harmonic oscillator, which is encountered repeatedly in the sections that
follow.
Our presentation of the fundamental principles of quantum theory follows
the traditional approach based on the standard set of postulates found
in most textbooks. It could be viewed as a “pragmatic” approach in which
quantum mechanics is accepted as an operational theory geared to predicting
the outcomes of measurements on physical systems under well defined conditions.
We have deliberately steered clear of what, depending on disposition, may be
called semi-philosophical issues pertaining to the relation between quantum
theory and some of its counterintuitive notions vis-à-vis our macroscopic experience. Thus issues
such as the collapse of the wavefunction upon measurement; the quantum
correlations—entanglement—between spatially separated systems, or else, the
non-local character of such correlations; the transition from the quantum to
the classical world; etc., are treated according to the rules of the theory, without
any excursion into philosophical implications, as they would be beyond
the scope, as well as the needs, of this book. Discussions pertaining to such
issues can be found in the relevant literature cited at the end of this book
under the title Further Reading.
1.1 The Mathematical Framework
The language of physics is mathematics; for classical physics, in particular, it
is analysis. The fundamental laws of mechanics, electromagnetism and even
relativity are formulated in terms of differential and/or integral equations,
and so are their applications to specific problems. Quantum mechanics,
initially called wave mechanics, was also formulated in terms of differential
equations: the Schrödinger equation, often still called the wave equation, being a
case in point. Eventually, however, it was realized that the essence and natural
language of quantum mechanics is that of vector spaces, which means linear
algebra and functional analysis. This does not mean that one does not have
to solve differential equations in some specific applications of the theory. But
it does mean that a thorough picture of the structure of quantum systems,
their states and eigenvalues, as well as their interaction with other systems,
can be obtained by studying the algebra of a suitably chosen set of operators
and the corresponding vector spaces. Thus we begin with a brief summary of
the algebra of vector spaces and linear operators.
1.1.1 Complex Vector Spaces
A complex vector space is a set $V$ of elements $\psi_i$, called vectors, on which
an operation of summation $\psi_i + \psi_j$, as well as multiplication $a\psi_i$ by a complex
number (c-number) $a$, can be defined. For a vector space, the following
properties are assumed to hold true:
(a) If $\psi_i, \psi_j \in V$ then $\psi_i + \psi_j \in V$
(a1) $\psi_i + \psi_j = \psi_j + \psi_i$
(a2) $(\psi_i + \psi_j) + \psi_k = \psi_i + (\psi_j + \psi_k)$
(b) There exists in $V$ a zero element $0$ such that $\psi_i + 0 = \psi_i$ for all $\psi_i \in V$
(c) If $\psi_i \in V$ then $a\psi_i \in V$
(c1) $(ab)\psi_i = a(b\psi_i)$
(c2) $1 \cdot \psi_i = \psi_i$
(c3) $0 \cdot \psi_i = 0$, which means that $\psi_i$ multiplied by the number 0 gives the
zero element of $V$
(d1) $a(\psi_i + \psi_j) = a\psi_i + a\psi_j$
(d2) $(a + b)\psi_i = a\psi_i + b\psi_i$
The element resulting from the operation $(-1)\psi_i$ is denoted by $-\psi_i$, and using
the above properties we have
$$\psi_i + (-\psi_i) = (1 + (-1))\psi_i = 0 \cdot \psi_i = 0 .$$
A subset $S$ of the elements of $V$ is called a subspace of $V$ if all of the above
properties hold true with respect to $S$, i.e., if for all $\psi_i, \psi_j \in S$ it
follows that $\psi_i + \psi_j \in S$, $a\psi_i \in S$, etc.
An expression of the form $\sum_{i=1}^{n} c_i \psi_i$, with $c_i$ complex numbers, is referred
to as a linear combination of the vectors $\psi_1, \psi_2, \ldots, \psi_n$. A set of vectors
$\psi_1, \psi_2, \ldots, \psi_n$ is said to be linearly independent if the relation $\sum_{i=1}^{n} c_i \psi_i = 0$
is satisfied only for all $c_i = 0$.
A vector space $V$ is $N$-dimensional if there are $N$ and not more linearly
independent vectors in $V$, for which the notation $V^{(N)}$ shall be used when
necessary. If the number of linearly independent vectors in a space can be arbitrarily
large, the space is called infinite-dimensional. Every set of $N$ linearly
independent vectors in an $N$-dimensional space is a basis. If $e_1, e_2, \ldots, e_N$ are
the vectors of a basis, every vector $\psi$ of the space can be expressed as a linear
combination of the form
$$\psi = c_1 e_1 + c_2 e_2 + \ldots + c_N e_N , \tag{1.1}$$
with the coefficients $c_j$ ($j = 1, 2, \ldots, N$) referred to as the coordinates of $\psi$
in that particular basis. Obviously, when vectors are added or multiplied by
a c-number $a$, their coordinates are added or multiplied by that number,
respectively.
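The linear-independence criterion above is easy to test numerically: a set of vectors is linearly independent exactly when the matrix having them as columns has full column rank. A minimal sketch with NumPy (an illustration, not part of the original text):

```python
import numpy as np

def linearly_independent(vectors):
    """True iff sum_i c_i v_i = 0 only for all c_i = 0,
    i.e. the matrix of column vectors has full column rank."""
    m = np.column_stack(vectors)
    return np.linalg.matrix_rank(m) == len(vectors)

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
v = e1 + 2.0 * e2            # a linear combination of e1 and e2

print(linearly_independent([e1, e2]))     # independent pair
print(linearly_independent([e1, e2, v]))  # dependent triple
```

Adding any linear combination of existing vectors leaves the rank unchanged, so the second test fails, as the definition requires.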
From two (or more) different vector spaces $V_A^{(N_A)}$ and $V_B^{(N_B)}$, with the
corresponding dimensions $N_A$ and $N_B$, one can construct a new vector space
$V^{(N)} = V_A^{(N_A)} \otimes V_B^{(N_B)}$, called the tensor-product space, whose dimension is given
by $N = N_A N_B$. If $\psi_A$ is a vector in space $V_A^{(N_A)}$ and $\psi_B$ is a vector in space
$V_B^{(N_B)}$, the vector $\psi = \psi_A \otimes \psi_B$ is called the tensor product of $\psi_A$ and $\psi_B$ and
it belongs to $V^{(N)}$. For the tensor product vectors, the following properties
are satisfied:
(a) If $\psi_A \in V_A^{(N_A)}$ and $\psi_B \in V_B^{(N_B)}$ and $a$ is any c-number, then
$a(\psi_A \otimes \psi_B) = (a\psi_A) \otimes \psi_B = \psi_A \otimes (a\psi_B) \in V^{(N)}$
(b1) If $\psi_{A_i}, \psi_{A_j} \in V_A^{(N_A)}$ and $\psi_B \in V_B^{(N_B)}$, then
$(\psi_{A_i} + \psi_{A_j}) \otimes \psi_B = \psi_{A_i} \otimes \psi_B + \psi_{A_j} \otimes \psi_B \in V^{(N)}$
(b2) If $\psi_A \in V_A^{(N_A)}$ and $\psi_{B_i}, \psi_{B_j} \in V_B^{(N_B)}$, then
$\psi_A \otimes (\psi_{B_i} + \psi_{B_j}) = \psi_A \otimes \psi_{B_i} + \psi_A \otimes \psi_{B_j} \in V^{(N)}$
Any vector $\psi \in V^{(N)}$ can be expressed as a linear superposition
$$\psi = \sum_{i=1}^{N_A} \sum_{j=1}^{N_B} c_{ij}\, e_{ij} , \tag{1.2}$$
where $e_{ij} \equiv e_{A_i} \otimes e_{B_j}$, with $e_{A_i}$ and $e_{B_j}$ being the basis vectors of spaces $V_A^{(N_A)}$
and $V_B^{(N_B)}$, respectively. All of the above properties for vector spaces hold for
the tensor product vector space $V^{(N)} = V_A^{(N_A)} \otimes V_B^{(N_B)}$, which is thus a vector
space itself.
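For coordinate vectors, the tensor product is realized by the Kronecker product, and the properties (a) and (b1) above can be checked directly. A small sketch with NumPy's `np.kron` (an illustration added here, not from the original text):

```python
import numpy as np

psi_A = np.array([1.0, 2.0])         # vector in V_A, N_A = 2
psi_B = np.array([3.0, 0.0, 1.0])    # vector in V_B, N_B = 3
psi = np.kron(psi_A, psi_B)          # psi_A (x) psi_B lives in V = V_A (x) V_B

print(psi.size)                      # dimension N = N_A * N_B = 6

a = 2.5
# property (a): a(psi_A (x) psi_B) = (a psi_A) (x) psi_B = psi_A (x) (a psi_B)
assert np.allclose(a * psi, np.kron(a * psi_A, psi_B))
assert np.allclose(a * psi, np.kron(psi_A, a * psi_B))

# property (b1): distributivity over addition in the first factor
phi_A = np.array([0.0, 1.0])
assert np.allclose(np.kron(psi_A + phi_A, psi_B),
                   np.kron(psi_A, psi_B) + np.kron(phi_A, psi_B))
```

Note that a generic vector of $V^{(N)}$ is a sum of such products, as in (1.2), and is in general not itself a single Kronecker product.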
A vector space is called a scalar product space if a function $(\psi_i, \psi_j)$ can
be defined on it, which has the properties
$$(\psi, \psi) \geq 0 \quad \text{with} \quad (\psi, \psi) = 0 \ \text{iff} \ \psi = 0 , \tag{1.3}$$
$$(\psi_i, \psi_j) = (\psi_j, \psi_i)^* , \tag{1.4}$$
where $(\ldots)^*$ denotes the complex conjugate of $(\ldots)$,
$$(\psi_i + \psi_j, \psi_k) = (\psi_i, \psi_k) + (\psi_j, \psi_k) , \tag{1.5}$$
and for any c-number $a$
$$(\psi_i, a\psi_j) = a(\psi_i, \psi_j) , \tag{1.6}$$
which in combination with (1.4) yields $(a\psi_i, \psi_j) = a^*(\psi_i, \psi_j)$. The function
$(\psi_i, \psi_j)$ is called the scalar product of the elements $\psi_i$ and $\psi_j$, and the two
elements are said to be orthogonal if
$$(\psi_i, \psi_j) = 0 . \tag{1.7}$$
Given the notion of the scalar product, the norm of the vector $\psi$ is defined as
$$\|\psi\| \equiv +\sqrt{(\psi, \psi)} . \tag{1.8}$$
From the definition of the scalar product in (1.3) it follows that $(\psi, \psi)$ is a real
and non-negative number. The vector $\hat\psi \equiv \psi/\|\psi\|$ is said to be normalized, since
$\|\hat\psi\| = 1$.
With the above properties of the scalar product, it is easy to prove the
Cauchy–Schwarz inequality
$$|(\psi_i, \psi_j)|^2 \leq (\psi_i, \psi_i)(\psi_j, \psi_j) , \tag{1.9}$$
which holds for any two vectors $\psi_i, \psi_j \in V^{(N)}$. To this end, consider
$$(\psi_i - a\psi_j, \psi_i - a\psi_j) \geq 0 ,$$
which can be expanded as
$$(\psi_i, \psi_i) - a(\psi_i, \psi_j) - a^*(\psi_j, \psi_i) + a a^*(\psi_j, \psi_j) \geq 0 . \tag{1.10}$$
Choosing $a = (\psi_j, \psi_i)/(\psi_j, \psi_j)$, from (1.10) we obtain the Cauchy–Schwarz
inequality (1.9).
The above notions and notation in connection with the scalar product and
its properties represent a generalization of what is usually called the scalar (or
inner) product in the 3-dimensional space of traditional vector calculus. From
now on, we will adopt the Dirac notation, usual in quantum mechanics: $|\psi\rangle$
for vectors of the space and $\langle\psi_i|\psi_j\rangle$ for the scalar product, while $|\psi\rangle$ and $\langle\psi|$
are also referred to, respectively, as ket and bra vectors. It can be shown that
the bra vectors belong to the space dual to that of the kets.
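The Cauchy–Schwarz inequality (1.9), and the equality case for proportional vectors, can be verified numerically for complex vectors with the standard inner product $(\psi, \phi) = \sum_i \psi_i^* \phi_i$. A short sketch with NumPy, where `np.vdot` conjugates its first argument (an added illustration, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(0)

# two random complex vectors; (psi, phi) = psi† · phi
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)

lhs = abs(np.vdot(psi, phi)) ** 2                      # |(psi, phi)|^2
rhs = np.vdot(psi, psi).real * np.vdot(phi, phi).real  # (psi,psi)(phi,phi)
assert lhs <= rhs                                      # inequality (1.9)

# equality holds when the vectors are proportional
chi = (2.0 - 1.0j) * psi
assert np.isclose(abs(np.vdot(psi, chi)) ** 2,
                  np.vdot(psi, psi).real * np.vdot(chi, chi).real)
print("Cauchy-Schwarz verified")
```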
1.1.2 Bases and Vector Decomposition
In an $N$-dimensional scalar product space $V^{(N)}$, as in a 3-dimensional space,
one can always choose a set of $N$ orthonormal, and therefore linearly independent,
vectors $|e_i\rangle$,
$$\langle e_i|e_j\rangle = \delta_{ij} , \quad i, j = 1, 2, \ldots, N ,$$
where
$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}$$
is the Kronecker delta. This set of vectors is said to form a basis $\{|e_i\rangle\}$ in
terms of which any vector $|\psi\rangle \in V^{(N)}$ can be expressed as a linear combination
of the basis unit vectors
$$|\psi\rangle = \sum_{i=1}^{N} c_i\, |e_i\rangle , \tag{1.11}$$
with the c-number coefficients $c_i = \langle e_i|\psi\rangle$, which is an immediate consequence
of the orthonormality and completeness of the basis. Thus any $|\psi\rangle$ can be
written as
$$|\psi\rangle = \sum_{i=1}^{N} \langle e_i|\psi\rangle\, |e_i\rangle , \tag{1.12}$$
which is said to be a decomposition of the vector $|\psi\rangle$ in terms of the basis $\{|e_i\rangle\}$,
with the decomposition coefficients being a generalization of the components
of a vector in 3-dimensional space on the axes of the basis chosen for the
description of the system under consideration, whether it be a point mass, an
extended rigid body, etc.
Any nonzero vector $|\psi\rangle$ in a finite scalar product space can, as is often
desirable in quantum mechanics, be normalized. In that case, we have $\langle\psi|\psi\rangle = 1$,
which implies
$$\sum_{i=1}^{N} |c_i|^2 = \sum_{i=1}^{N} |\langle e_i|\psi\rangle|^2 = 1 . \tag{1.13}$$
This again follows from the orthonormality of the basis vectors, and we can
state that, for any vector $|\psi\rangle$, normalized or not, the decomposition
$$|\hat\psi\rangle = \frac{1}{\|\psi\|} \sum_{i=1}^{N} \langle e_i|\psi\rangle\, |e_i\rangle , \tag{1.14}$$
with $\|\psi\| \equiv +\sqrt{\langle\psi|\psi\rangle} = \big(\sum_i |\langle e_i|\psi\rangle|^2\big)^{1/2}$, represents a normalized vector.
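The decomposition (1.12) and the normalization statements (1.13)-(1.14) can be checked concretely: build an orthonormal basis, compute the coefficients $c_i = \langle e_i|\psi\rangle$, reconstruct the vector, and normalize. A sketch with NumPy, using the unitary factor of a QR decomposition as the basis (an added illustration, not from the original text):

```python
import numpy as np

N = 4
rng = np.random.default_rng(1)

# orthonormal basis {|e_i>}: columns of the unitary Q from a QR decomposition
Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
basis = [Q[:, i] for i in range(N)]

psi = rng.normal(size=N) + 1j * rng.normal(size=N)  # an arbitrary vector

# coefficients c_i = <e_i|psi> and the reconstruction (1.12)
c = np.array([np.vdot(e, psi) for e in basis])
reconstructed = sum(ci * e for ci, e in zip(c, basis))
assert np.allclose(reconstructed, psi)

# (1.13)-(1.14): sum_i |c_i|^2 = <psi|psi>, and psi/||psi|| has unit norm
norm = np.sqrt(np.vdot(psi, psi).real)
assert np.isclose(np.sum(np.abs(c) ** 2), norm ** 2)
psi_hat = psi / norm
assert np.isclose(np.vdot(psi_hat, psi_hat).real, 1.0)
print("decomposition and normalization verified")
```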
Considering now a two-component tensor product vector space $V^{(N)} = V_A^{(N_A)} \otimes V_B^{(N_B)}$,
we state without proof an important theorem of linear algebra,
known as the Schmidt (or polar) decomposition: For any vector $|\psi\rangle \in V^{(N)}$,
it is possible to construct orthonormal sets of vectors $|\psi_{A_i}\rangle \in V_A^{(N_A)}$ and
$|\psi_{B_i}\rangle \in V_B^{(N_B)}$, where $i = 1, 2, \ldots, \min(N_A, N_B)$, in terms of which $|\psi\rangle$ can be
represented as
$$|\psi\rangle = \sum_i s_i\, |\psi_{A_i}\rangle \otimes |\psi_{B_i}\rangle , \tag{1.15}$$
where the Schmidt coefficients $s_i$ are real non-negative numbers. Comparing
this with the expansion (1.2), which involves a double summation over
$i = 1, 2, \ldots, N_A$ and $j = 1, 2, \ldots, N_B$, we see that the Schmidt decomposition
allows one to represent a vector $|\psi\rangle$ through a single sum over
$i = 1, 2, \ldots, \min(N_A, N_B)$.
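In coordinates, the Schmidt decomposition of (1.15) is the singular value decomposition of the coefficient matrix $c_{ij}$ of (1.2): the singular values are the Schmidt coefficients, and the singular vectors furnish the two orthonormal sets. A sketch with NumPy (an added illustration, not from the original text):

```python
import numpy as np

NA, NB = 2, 3
rng = np.random.default_rng(2)

# a generic state in V_A (x) V_B, stored as the coefficient matrix c_ij of (1.2)
c = rng.normal(size=(NA, NB)) + 1j * rng.normal(size=(NA, NB))
c /= np.linalg.norm(c)                       # normalize the state

# SVD c = U diag(s) V†: columns of U give |psi_Ai>, rows of Vh give <psi_Bi|
U, s, Vh = np.linalg.svd(c)

k = min(NA, NB)
rebuilt = sum(s[i] * np.outer(U[:, i], Vh[i, :]) for i in range(k))
assert np.allclose(rebuilt, c)               # a single sum over i suffices
assert np.all(s >= 0)                        # Schmidt coefficients are real, >= 0
print("Schmidt coefficients:", np.round(s, 4))
```

More than one nonzero Schmidt coefficient signals that the state is entangled, i.e., not a single product $\psi_A \otimes \psi_B$.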
Most of the above features and properties can be generalized to the case of
infinite-dimensional ($N \to \infty$) discrete spaces, for which the summations over
$i$ extend from 1 to $\infty$, and the respective quantities, as for example the
square of the norm $\|\psi\|^2 = \sum_{i=1}^{\infty} |\langle e_i|\psi\rangle|^2$, are to be understood as the limit
of the infinite series. A further generalization, having to do with the transition
from a discrete to a continuous vector decomposition, is discussed below, after
we introduce the notion of linear operators in a vector space and review their
properties. Infinite-dimensional vector spaces, discrete as well as continuous,
represent the most basic tool in the formulation of quantum theory, as the
possible states of any physical system correspond to the vectors of suitably
chosen and constructed vector spaces.
The vector spaces describing quantum systems are habitually said to be
Hilbert spaces, the term being most meaningful for infinite-dimensional vector
spaces. Strictly speaking, a Hilbert space $\mathcal{H}$ is a vector space with a scalar
product, a metric generated through that scalar product, and which is complete
with respect to that metric. What is meant by metric is the distance between
two vectors $\psi_i$ and $\psi_j$, given in this case by the norm of their difference, i.e.,
$\|\psi_i - \psi_j\|$. The space is complete if every Cauchy sequence $\psi_i$ of vectors in
the space converges to some vector $\psi$ in that space, in the sense that
$$\lim_{i\to\infty} \|\psi_i - \psi\| = 0 .$$
An infinite sequence of vectors is said to be Cauchy if $\|\psi_i - \psi_j\| \to 0$ as
$i, j \to \infty$. For all practical purposes, it is justified to use the term Hilbert
space for all vector spaces encountered in this book.
1.1.3 Linear Operators
Let $A$ be a function (an operation) that maps any vector of a linear space $S$
into another vector of the same space; symbolically,
$$A\psi_i = \psi_j . \tag{1.16}$$
Such a function is called a linear operator if it satisfies the conditions
$$A(\psi_i + \psi_j) = A\psi_i + A\psi_j , \tag{1.17a}$$
$$A(c\psi_i) = cA\psi_i , \tag{1.17b}$$
for $\psi_i, \psi_j \in S$ and $c$ a c-number.
The multiplication of an operator by a c-number, the addition $A + B$, and
the product $AB$ of two operators are defined via
$$(cA)\psi = c(A\psi) , \tag{1.18a}$$
$$(A + B)\psi = A\psi + B\psi , \tag{1.18b}$$
$$(AB)\psi = A(B\psi) , \tag{1.18c}$$
the order of $A$ and $B$ on the two sides of the last equation being an essential
part of the definition. It is easy to show that if $A$ and $B$ are linear operators,
then $A + B$ and $AB$ are also linear operators.
In any vector space $S$ there is the zero operator, defined via $0\psi = 0$, where the $0$
on the left side represents the operator, while on the right side it denotes the
zero vector of the space $S$. The identity operator $I$ is defined via $I\psi = \psi$. Both
definitions are valid for any $\psi \in S$. Linear operators are fully defined only
when the vector space on which they operate, so to speak, is also defined.
To every linear operator $A$ on $S$, its adjoint operator $A^\dagger$ can also be defined
by the relation
$$\langle A^\dagger \psi_i|\psi_j\rangle = \langle\psi_i|A\psi_j\rangle . \tag{1.19}$$
If it so happens that $A^\dagger = A$, then the operator $A$ is said to be self-adjoint,
which for our purposes in this book is equivalent to $A$ being Hermitian, in the
standard sense of quantum mechanics texts.
The operator $B$, denoted by $A^{-1}$, is the inverse of an operator $A$ if
$$AB = BA = I . \tag{1.20}$$
If the inverse of a linear operator $U$ is its Hermitian adjoint $U^\dagger$, in which
case
$$U^\dagger U = I , \tag{1.21}$$
then $U$ is said to be unitary. Equivalently, $U$ is unitary if $U^\dagger = U^{-1}$.
Given two operators $A$ and $B$, their commutator is defined as
$$[A, B] = AB - BA , \tag{1.22}$$
and their anticommutator as
$$[A, B]_+ = AB + BA . \tag{1.23}$$
The operators are said to commute (anticommute) if their commutator (anticommutator)
is zero.
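The definitions (1.22)-(1.23) are immediate to compute for matrix representations of operators. A sketch using the Pauli matrices, which anticommute pairwise and satisfy $[\sigma_x, \sigma_y] = 2i\sigma_z$ (an added illustration, not from the original text):

```python
import numpy as np

def comm(A, B):
    return A @ B - B @ A           # commutator (1.22)

def acomm(A, B):
    return A @ B + B @ A           # anticommutator (1.23)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

assert np.allclose(comm(sx, sy), 2j * sz)          # [sx, sy] = 2i sz
assert np.allclose(acomm(sx, sy), np.zeros((2, 2)))  # sx, sy anticommute

I2 = np.eye(2)
assert np.allclose(comm(sx, I2), np.zeros((2, 2)))  # everything commutes with I
print("commutator identities verified")
```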
A vector $|\psi\rangle$, other than the zero vector, is said to be an eigenvector of the
operator $A$, with eigenvalue the c-number $a$, if it satisfies the relation
$$A|\psi\rangle = a\, |\psi\rangle . \tag{1.24}$$
If the operator is Hermitian, its eigenvalues and eigenvectors have two important
properties:
(i) All eigenvalues are real.
(ii) If $|\psi_i\rangle$ and $|\psi_j\rangle$ are two eigenvectors of $A$, with respective eigenvalues
$a_i$ and $a_j$ which are not equal ($a_i \neq a_j$), then the two eigenvectors are
orthogonal to each other, $\langle\psi_i|\psi_j\rangle = 0$.
These two properties follow from the definitions of Hermiticity and the scalar
product.
A special case of the operator product is an operator $A$ raised to some
integer power $p$, i.e., $A^p$, whose eigenvectors obviously coincide with the eigenvectors
of $A$, while the eigenvalues are given by $a_i^p$. In general, any function
$f(A)$ of an operator $A$ is defined through the Taylor series (assuming it exists)
$$f(A) = \sum_{p=0}^{\infty} \frac{f^{(p)}(0)}{p!}\, A^p , \tag{1.25}$$
which is thus given by the action of the powers of $A$. Using the series expansion,
one can prove the operator (Baker–Hausdorff) relation
$$e^{A+B} = e^{-\frac{1}{2}[A,B]}\, e^{A}\, e^{B} , \tag{1.26}$$
which holds when $[A, [A, B]] = [[A, B], B] = 0$ (see Prob. 1.1).
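Relation (1.26) can be checked numerically on a pair of matrices whose commutator commutes with both of them, e.g., the nilpotent generators of the 3x3 Heisenberg algebra. The sketch below evaluates matrix exponentials directly from the Taylor series (1.25); it is an added illustration, not from the original text:

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential from its Taylor series (adequate for small matrices)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for p in range(1, terms):
        term = term @ M / p
        out = out + term
    return out

# nilpotent 3x3 matrices with [A,[A,B]] = [[A,B],B] = 0, so (1.26) applies
A = np.zeros((3, 3)); A[0, 1] = 1.0
B = np.zeros((3, 3)); B[1, 2] = 1.0
C = A @ B - B @ A                  # here [A, B] commutes with both A and B
assert np.allclose(A @ C - C @ A, 0) and np.allclose(C @ B - B @ C, 0)

lhs = expm_series(A + B)
rhs = expm_series(-0.5 * C) @ expm_series(A) @ expm_series(B)
assert np.allclose(lhs, rhs)       # e^{A+B} = e^{-[A,B]/2} e^A e^B
print("Baker-Hausdorff relation verified")
```

For generic operators whose commutator is not central, (1.26) acquires higher-order correction terms and the simple product form fails.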
In quantum mechanics, it is convenient and useful to adopt as a basis the
normalized eigenvectors of Hermitian operators. The set of eigenvalues $a_i$ of
a linear operator $A$, which are real for a Hermitian operator, is called the
spectrum of $A$, and the expansion of a vector $|\psi\rangle$ in the basis
$\{|e_i\rangle\}$ of eigenvectors of $A$, i.e., $|\psi\rangle = \sum_i c_i\, |e_i\rangle$, is also referred to as a spectral
decomposition.
In a finite-dimensional space, for every Hermitian operator there exists a
set of eigenvectors which can serve as a basis for the spectral decomposition
of any vector of the space. The spaces needed for the description of
physical systems, however, are more often than not infinite-dimensional and, in
many cases, at least part of the spectrum of eigenvalues is continuous. This is
indeed the case for the energy operator, or Hamiltonian, of the simplest atomic
system, the hydrogen atom. On the other hand, the Hamiltonian of another
basic and simple system studied below, namely the harmonic oscillator, has
an infinite but discrete spectrum. Thus, a basis must in general include a discrete as well
as a continuous part, which is accomplished through a generalization of the
finite-dimensional case.
Let then $K$ be a Hermitian operator that has a discrete infinite-dimensional
spectrum with eigenvalues $\kappa_n$, i.e., $K|\kappa_n\rangle = \kappa_n |\kappa_n\rangle$ ($n = 0, 1, 2, \ldots$), and a
continuous part with eigenvalues $\kappa$, i.e., $K|\kappa\rangle = \kappa |\kappa\rangle$, where the values of $\kappa$
range from some lower value to, in principle, infinity. In such a case, any vector
$|\psi\rangle$ can be decomposed as follows,
$$|\psi\rangle = \sum_{n=0}^{\infty} |\kappa_n\rangle\langle\kappa_n|\psi\rangle + \int_{\kappa_l}^{\kappa_u} d\kappa\, |\kappa\rangle\langle\kappa|\psi\rangle , \tag{1.27}$$
where $\kappa_l$ and $\kappa_u$ denote the lower and upper limits of the integration. The
condition of normalization of the vector $|\psi\rangle$ decomposed as above now reads
$$\sum_{n=0}^{\infty} |\langle\kappa_n|\psi\rangle|^2 + \int_{\kappa_l}^{\kappa_u} d\kappa\, |\langle\kappa|\psi\rangle|^2 < \infty ,$$
which means that the series must be summable and the integral must converge
when $\kappa_u$ is not finite.
Dirac delta function
The case of continuous spectra in quantum optics appears not so much through
the atomic continuum as through the interaction of a small system (few degrees
of freedom) with the outside world (environment), which is the cause
of dissipation and decoherence. The mathematical treatments of the continua
in those cases are discussed as they arise. In that context, we will often encounter
the Dirac delta function $\delta(x)$ of a real variable $x$. This may be a good
place to introduce and discuss some of its properties. Despite its name, $\delta(x)$
is not really a function, in the sense of a pointwise interpretation of its value
for every value of the variable $x$; although it is often said that $\delta(x)$ can be
thought of as being zero for every $x \neq 0$, while it tends to infinity at $x = 0$,
so that
$$\int_{-\infty}^{\infty} \delta(x)\, dx = 1 . \tag{1.28}$$
Strictly speaking, however, such an object cannot be a function, hence the
term generalized function, or distribution, although its properties can be defined
rigorously through sequences of bona fide functions. It can, for example,
be shown that, if $F(x)$ is continuous and bounded on $x \in (-\infty, +\infty)$, then
$$\lim_{\epsilon\to 0^+} \frac{1}{\sqrt{\pi\epsilon}} \int_{-\infty}^{\infty} e^{-x^2/\epsilon}\, F(x)\, dx = F(0) ,$$
which is exactly what $\delta(x)$ does. Representations of $\delta(x)$ through a variety of
alternative sequences are given at the end of this section. It has been proven
that all such sequences are equivalent, in the sense that they lead to the same
action on the function $F(x)$.
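The Gaussian sequence above is easy to probe numerically: integrate it against a smooth bounded test function for shrinking $\epsilon$ and watch the result approach $F(0)$. A sketch using a simple Riemann sum (an added illustration, not from the original text):

```python
import numpy as np

def gaussian_delta_action(F, eps):
    """Numerically evaluate (1/sqrt(pi*eps)) * integral of exp(-x^2/eps) F(x) dx."""
    x = np.linspace(-10.0, 10.0, 200001)
    dx = x[1] - x[0]
    w = np.exp(-x**2 / eps) / np.sqrt(np.pi * eps)   # the delta sequence
    return np.sum(w * F(x)) * dx

F = np.cos                      # a bounded continuous test function, F(0) = 1

for eps in (1.0, 0.1, 0.001):
    print(eps, gaussian_delta_action(F, eps))
# as eps -> 0+ the integral approaches F(0), as the delta function requires
assert abs(gaussian_delta_action(F, 1e-4) - 1.0) < 1e-3
```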
The delta function owes its origin to the need for dealing with the derivative
$\Theta'(x)$ of the Heaviside step function
$$\Theta(x) = \begin{cases} 0 & \text{if } x < 0 \\ 1 & \text{if } x > 0 \end{cases} , \tag{1.29}$$
with a discontinuity at $x = 0$, where the derivative does not exist in the
ordinary sense. Properly speaking, the delta function is a linear functional (or
operator) which, to every complex-valued function $F(x)$ of the real variable
$x$, assigns the value $F(0)$, symbolically written as $\langle\delta, F\rangle = \int \delta(x)F(x)\, dx = F(0)$.
The class of complex functions relevant to the definition must be locally
integrable, which means that $\int_{x_l}^{x_u} F(x)\, dx$ exists on every bounded interval
$x \in [x_l, x_u]$.
From the above definition and a change of variable, it is obvious that
$$\int_{x_l}^{x_u} \delta(x - x_0)\, F(x)\, dx = F(x_0) , \quad x_l < x_0 < x_u .$$
It is also instructive to show that, if the rules of integration by parts apply,
then $\langle\Theta', F\rangle = -\langle\Theta, F'\rangle = \langle\delta, F\rangle$, under the assumption that the functions
$F(x)$ vanish at $\pm\infty$. Pursuing this argument further, one can define the
derivatives of a generalized function such as $\delta(x)$ through $\langle\delta', F\rangle = -\langle\delta, F'\rangle = -F'(0)$,
and in general
$$\langle\delta^{(n)}, F\rangle = (-1)^n \langle\delta, F^{(n)}\rangle = (-1)^n F^{(n)}(0) ,$$
assuming of course that the functions $F(x)$ are differentiable to arbitrary order
$n$. It should be evident by now that the properties of generalized functions, of
which $\delta(x)$ is one example, are determined by the properties of the class of
functions on which they operate, also referred to as test functions.
It is possible to represent functions on the real axis by analytic functions
in the complex plane through the following theorem, which we state without
proof: If $F(x)$ is a bounded continuous function on the real axis, then there
exists a function $F(z)$, analytic in the whole $z$-plane except on the real $x$-axis,
such that
$$\lim_{\epsilon\to 0^+} \big[ F(x + i\epsilon) - F(x - i\epsilon) \big] = F(x) \quad \text{for all } x .$$
The difference inside the square brackets is the "jump" that $F(z)$ makes as we
cross the real axis from above. Therefore, although it is impossible to represent
an arbitrary $F(x)$ (notably one with a discontinuity) as the restriction of
an analytic function, any $F(x)$ can be represented by such a jump.
The above theorem has an immediate application to the representation of
generalized functions. It can be stated as follows: If $G$ is a generalized function,
then there exists a function $g(z)$, analytic everywhere except possibly on the
real axis, such that
$$\lim_{\epsilon\to 0^+} \int \big[ g(x + i\epsilon) - g(x - i\epsilon) \big]\, F(x)\, dx = \langle G, F\rangle$$
for any test function of the appropriate class; $g(z)$ is called the analytic representation
of $G$. Through the use of such analytic representations of generalized
functions, one can show that
$$\lim_{\epsilon\to 0^+} \frac{1}{x \pm i\epsilon} = \mathrm{P}\,\frac{1}{x} \mp i\pi\delta(x) , \tag{1.30}$$
where $\mathrm{P}$ indicates the principal value part in an integration over $x$. The above
relation will prove necessary in later sections, where we deal with the coupling
of a system with a discrete spectrum to a continuum.
Finally, the following alternative expressions involving the delta function
are often useful,
$$\delta(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk\, e^{ikx} , \tag{1.31a}$$
$$\delta(x) = \lim_{\epsilon\to 0^+} \frac{1}{\pi}\, \frac{\epsilon}{x^2 + \epsilon^2} , \tag{1.31b}$$
$$\delta(x) = \lim_{\epsilon\to 0^+} \frac{1}{\sqrt{\pi\epsilon}} \exp\!\left(-\frac{x^2}{\epsilon}\right) , \tag{1.31c}$$
$$\delta(x) = \lim_{\epsilon\to 0^+} \frac{1}{\pi}\, \frac{\sin(x/\epsilon)}{x} , \tag{1.31d}$$
$$\delta(x) = \lim_{\epsilon\to 0^+} \frac{\epsilon}{\pi}\, \frac{\sin^2(x/\epsilon)}{x^2} , \tag{1.31e}$$
using which one can, in particular, prove that
$$\delta(\alpha x) = \frac{1}{|\alpha|}\, \delta(x) , \tag{1.32a}$$
$$\delta(x^2 - \alpha^2) = \frac{1}{2|\alpha|} \big[\delta(x - \alpha) + \delta(x + \alpha)\big] , \tag{1.32b}$$
for $\alpha \neq 0$ (see Prob. 1.3).
1.1.4 Matrix Representation of Operators
Let us for the moment consider the case of a discrete spectrum of a Hermitian
operator $K$, assuming its eigenvectors $|\kappa_n\rangle$ are normalized. Then for any vector
$|\psi\rangle$ we have
$$|\psi\rangle = \sum_{n=0}^{\infty} |\kappa_n\rangle\langle\kappa_n|\psi\rangle . \tag{1.33}$$
Since this is valid for any $|\psi\rangle$, it must be that
$$\sum_{n=0}^{\infty} |\kappa_n\rangle\langle\kappa_n| = I . \tag{1.34}$$
This is valid for any orthonormal basis and is referred to as the spectral
resolution of the identity operator $I$. From this resolution, and using the
definition of the basis as eigenvectors of $K$, i.e., $K|\kappa_n\rangle = \kappa_n |\kappa_n\rangle$, we obtain
$$K = \sum_{n=0}^{\infty} \kappa_n\, |\kappa_n\rangle\langle\kappa_n| , \tag{1.35}$$
referred to as the spectral resolution of the Hermitian operator $K$. The object
$\Pi_n \equiv |\kappa_n\rangle\langle\kappa_n|$ is called a projection operator, and in fact the expansion of the
vector $|\psi\rangle$ in (1.33) can be viewed as the vector sum of the projections of $|\psi\rangle$
on all vectors of the basis, because $\Pi_n|\psi\rangle = |\kappa_n\rangle\langle\kappa_n|\psi\rangle$ indeed represents the
$|\kappa_n\rangle$ component of $|\psi\rangle$. Generally, a Hermitian operator having the property
$\Pi^2 = \Pi$ is called a projection operator.
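In a finite-dimensional matrix representation, the resolution of the identity (1.34), the spectral resolution (1.35), and the projector property all follow from a single Hermitian eigendecomposition. A sketch with NumPy's `eigh` (an added illustration, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
K = M + M.conj().T                       # a Hermitian operator

vals, vecs = np.linalg.eigh(K)           # eigenvalues and eigenvector columns

# resolution of the identity (1.34) and spectral resolution (1.35)
I = sum(np.outer(vecs[:, n], vecs[:, n].conj()) for n in range(4))
K_rebuilt = sum(vals[n] * np.outer(vecs[:, n], vecs[:, n].conj())
                for n in range(4))
assert np.allclose(I, np.eye(4))
assert np.allclose(K_rebuilt, K)
assert np.allclose(vals.imag, 0)         # Hermitian: eigenvalues are real

# each |kappa_n><kappa_n| is a projector: Pi^2 = Pi
P0 = np.outer(vecs[:, 0], vecs[:, 0].conj())
assert np.allclose(P0 @ P0, P0)
print("spectral resolution verified")
```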
Let now $A$ be an arbitrary linear operator in the space and consider its
action on $|\psi\rangle$,
$$A|\psi\rangle = \sum_{n=0}^{\infty} A|\kappa_n\rangle\langle\kappa_n|\psi\rangle = \sum_{m,n=0}^{\infty} |\kappa_m\rangle\langle\kappa_m| A |\kappa_n\rangle\langle\kappa_n|\psi\rangle , \tag{1.36}$$
where in each step we have used the resolution of the identity operator in
(1.34). Since the above equation is valid for all $|\psi\rangle$, we conclude that
$$A = \sum_{m,n=0}^{\infty} |\kappa_m\rangle\langle\kappa_m| A |\kappa_n\rangle\langle\kappa_n| = \sum_{m,n=0}^{\infty} \langle\kappa_m| A |\kappa_n\rangle\, |\kappa_m\rangle\langle\kappa_n| , \tag{1.37}$$
which is called the representation of the operator $A$ in the basis $\{|\kappa_n\rangle\}$. It becomes
the spectral resolution only when $A$ is Hermitian and the basis is that of its
eigenvectors. The above representation is quite general, being valid even for
operators that do not have eigenvectors. The quantities $\langle\kappa_m| A |\kappa_n\rangle \equiv A_{mn}$,
which in general are complex numbers, form an infinite matrix (with $N^2$ elements in an
$N$-dimensional space), often called the matrix realization or representation of an
operator. The generalization of the matrix representation of an operator for
the case of a discrete and continuous spectrum has the form
$$A = \sum_{m,n=0}^{\infty} \langle\kappa_m| A |\kappa_n\rangle\, |\kappa_m\rangle\langle\kappa_n| + \int\!\!\int d\kappa\, d\kappa'\, \langle\kappa| A |\kappa'\rangle\, |\kappa\rangle\langle\kappa'| . \tag{1.38}$$
Clearly, the representation of an operator refers to a specific (chosen) basis
(e.g., $\{|\kappa_n\rangle\}$). But as in the usual three-dimensional space, one may wish to
change the basis. Let $\{|\chi_n\rangle\}$ be another orthonormal basis which represents
the eigenvectors of another Hermitian operator $X$ having a discrete spectrum,
i.e.,
$$X|\chi_n\rangle = \chi_n |\chi_n\rangle , \quad n = 0, 1, 2, \ldots \tag{1.39}$$
Again, for any arbitrary vector of the space we can write
$$|\psi\rangle = \sum_{n=0}^{\infty} |\chi_n\rangle\langle\chi_n|\psi\rangle . \tag{1.40}$$
But the vectors $|\kappa_m\rangle$ can also be decomposed in the new basis as
$$|\kappa_m\rangle = \sum_{n=0}^{\infty} |\chi_n\rangle\langle\chi_n|\kappa_m\rangle . \tag{1.41}$$
Using this decomposition for any two vectors $|\kappa_m\rangle$ and $|\kappa_{m'}\rangle$, we have
$$\langle\kappa_{m'}|\kappa_m\rangle = \delta_{mm'} = \sum_{n,n'} \langle\kappa_{m'}|\chi_{n'}\rangle\langle\chi_{n'}|\chi_n\rangle\langle\chi_n|\kappa_m\rangle . \tag{1.42}$$
Since $\langle\chi_{n'}|\chi_n\rangle = \delta_{nn'}$, we obtain
$$\sum_n \langle\chi_n|\kappa_{m'}\rangle^* \langle\chi_n|\kappa_m\rangle = \delta_{mm'} . \tag{1.43}$$
Similarly, one can show that
$$\sum_m \langle\chi_{n'}|\kappa_m\rangle \langle\chi_n|\kappa_m\rangle^* = \delta_{nn'} . \tag{1.44}$$
The quantities $\langle\chi_n|\kappa_m\rangle \equiv T_{nm}$ are the matrix elements of a matrix $T$ that
transforms the coefficients of the decomposition of any vector $|\psi\rangle$ in one basis
to the coefficients of its decomposition in the other basis. It is referred to as the
transformation (from one basis to another) matrix. The inverse transformation
is realized by the matrix $T^\dagger$, whose elements are $(T^\dagger)_{mn} \equiv \langle\kappa_m|\chi_n\rangle = T^*_{nm}$.
The components $\langle\chi_n|\psi\rangle$ of any vector $|\psi\rangle$ on the basis $\{|\chi_n\rangle\}$ can be viewed
as a column matrix, as can its components in another basis. The two column
matrices are obtained one from the other through multiplication by the
transformation matrix.
A matrix $U$ whose matrix elements satisfy the condition $\sum_n U^*_{nm} U_{nm'} = \delta_{mm'}$
is said to be unitary, consistently with the definition of the unitary
operator in (1.21). It is called orthogonal if, in addition, its matrix elements
happen to be real, $U^*_{nm} = U_{nm}$. Obviously, the transformation matrix $T$ is
unitary, $T^\dagger T = T T^\dagger = I$.
The matrix representations of an operator in different bases are also related
through the transformation matrix. This is easily seen if we consider the
matrix element $\langle\chi_{m'}| A |\chi_m\rangle$ of $A$ and express the vectors $|\chi_m\rangle$ in terms of
$\{|\kappa_n\rangle\}$. We then have
$$\langle\chi_{m'}| A |\chi_m\rangle = \sum_{n,n'} \langle\chi_{m'}|\kappa_{n'}\rangle\langle\kappa_{n'}| A |\kappa_n\rangle\langle\kappa_n|\chi_m\rangle , \tag{1.45}$$
where the right side represents a typical expression found in the multiplication
of matrices. Let us define $A_{n'n}(\kappa) \equiv \langle\kappa_{n'}| A |\kappa_n\rangle$ and $A_{m'm}(\chi) \equiv \langle\chi_{m'}| A |\chi_m\rangle$,
where by $A(\kappa)$ or $A(\chi)$ we denote the matrix representing the
operator $A$ in the respective basis. Then, since $T$ is unitary, (1.45) can be written in matrix form
as
$$A(\kappa) = T^\dagger A(\chi)\, T , \tag{1.46}$$
which also shows explicitly that changing bases in the representation of an
operator involves, or is equivalent to, a unitary transformation. An operator
is thus completely defined if its representation in some basis is known.
The careful reader may notice that the above relations emerge from the repeated
application of the resolution of the identity operator in terms of orthonormal
bases, $I = \sum_n |\chi_n\rangle\langle\chi_n|$.
The trace of an operator is defined as the sum of the diagonal elements of
its matrix representation,
$$\mathrm{Tr}(A) \equiv \sum_n A_{nn} . \tag{1.47}$$
It has the following obvious properties:
(i) $\mathrm{Tr}(A + B) = \mathrm{Tr}(A) + \mathrm{Tr}(B)$.
(ii) $\mathrm{Tr}(cA) = c\,\mathrm{Tr}(A)$, with $c$ a c-number.
(iii) $\mathrm{Tr}(AB) \equiv \sum_n \sum_m A_{nm} B_{mn} = \sum_m \sum_n B_{mn} A_{nm} = \mathrm{Tr}(BA)$.
It then follows that for any unitary operator $U$ we have
$$\mathrm{Tr}(U^\dagger A U) = \mathrm{Tr}(U U^\dagger A) = \mathrm{Tr}(A) . \tag{1.48}$$
An important consequence of this is that the trace is invariant under the basis
transformations (1.46), since the transformation matrix $T$ is unitary.
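The cyclic property (iii) and the basis invariance (1.48) are quick to verify on random complex matrices, with a unitary drawn from a QR decomposition. A sketch (an added illustration, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# cyclic property (iii): Tr(AB) = Tr(BA)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# a random unitary from the QR decomposition of a complex matrix
U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
assert np.allclose(U.conj().T @ U, np.eye(3))

# invariance (1.48): the trace is the same in any orthonormal basis
assert np.isclose(np.trace(U.conj().T @ A @ U), np.trace(A))
print("trace invariance verified")
```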
1.2 Description of Quantum Systems
Having outlined the most relevant properties of vector spaces and linear operators,
we turn now to the formulation of quantum theory which rests upon
the fundamental postulates stated below.
1.2.1 Physical Observables
The first postulate of quantum theory can be formulated as follows. Any physical
observable is represented by a Hermitian operator in the Hilbert space
associated with the system’s degrees of freedom. The complete description
of a system may require more than one physical variable and therefore the
respective operators. The possible physical states of a system are represented
by vectors in the space spanned by the eigenstates of all necessary operators.
These operators obey commutation relations which are related to the
measurement procedure, as detailed in Sect. 1.2.4.
The system under consideration may be inseparable or composite; in the
latter case, the complete vector space spanned by the degrees of freedom of
the system is given by the tensor product of the vector spaces corresponding to
its constituent subsystems.
The most fundamental property of an isolated physical system is its energy,
which is constant. In quantum theory, the energy is represented by the Hamiltonian
$H$, a Hermitian operator. One or more other observables (operators)
may enter in the expression of the Hamiltonian.
Linear Harmonic Oscillator
To give a brief example, we consider the one-dimensional harmonic oscillator.
Classically, it corresponds to a point mass $M$ on a straight line $x$ subject to a
restoring force $-\kappa x$, where $\kappa$ is a constant. The potential energy is $\frac{1}{2}\kappa x^2$, while
the kinetic energy is $\frac{1}{2M}p^2$, with $p$ being the momentum of the particle. Then
the total energy is
$$E = \frac{1}{2M}\, p^2 + \frac{M\omega^2}{2}\, x^2 , \tag{1.49}$$
where $\omega = \sqrt{\kappa/M}$ is the frequency of the oscillator.
In the transition to quantum theory, the coordinate $x$ and momentum
$p$ variables become operators $Q$ and $P$. In particular, $P$ is the differential
operator $-i\hbar\partial_x$, while $Q = x$. As a consequence, their commutator is
$$[P, Q] = PQ - QP = -i\hbar I , \tag{1.50}$$
which is a c-number multiplying the identity operator $I$. The crucial point is
that the commutator is nonzero; otherwise stated, $Q$ and $P$ do not commute,
which is what sets apart quantum mechanics from the classical counterpart
cast in terms of the same set of variables (physical observables). The c-number
on the right-hand side of the commutator (1.50) involves the universal constant $\hbar = h/2\pi$,
with $h$ known as Planck's constant ($h = 6.626 \times 10^{-34}$ J s). The roots of this
constant and its value reach back to the origins of the quantum theory in the
early 20th century. The discussion of how this came about is outside the scope
of this book, but can of course be found in most books on quantum theory.
The Hamiltonian operator of the harmonic oscillator therefore is
$$H = \frac{1}{2M}\, P^2 + \frac{M\omega^2}{2}\, Q^2 , \tag{1.51}$$
where $H$, $P$ and $Q$ are all linear Hermitian operators.
The quantum description of the harmonic oscillator, as of any system,
requires the complete specification of the vector space spanned by the vectors
corresponding to all possible states of the system. The initial approach to
this problem, although not cast in this language, was through the stationary
Schr¨odinger equation. This means that one considers the differential equation
H?(x) =

? _2
2M
?2
?x2 +
M?2
2
x2

?(x) = E?(x) , (1.52)
where ?(x) are functions of x, and E the respective eigenvalues of energy
which is represented by the differential operator H. The variable x takes values
in the continuum ?? ? x ? ?, as dictated by the nature of the physical
system. If the function ?(x) is to correspond to a physically permissible state
denoted by |?_, it must be normalizable. The space thus consists of c-valued
one variable functions ?(x). If the scalar product is defined via
20 1 Quantum Mechanical Background
_?|?
__ ?
_ ?
??
dx?
?
(x)?
_
(x) , (1.53)
the square of the norm of $\psi(x)$ is

    \langle\psi|\psi\rangle = \int_{-\infty}^{\infty} dx\, |\psi(x)|^2 .    (1.54)

For the norm to be finite, the permissible functions $\psi(x)$ must be square integrable. Typically, one uses a power series expansion for the solutions $\psi(x)$ with the requirement that they approach zero sufficiently fast as $|x| \to \infty$ for the square $|\psi(x)|^2$ to be integrable. This leads to a discrete set of infinitely many solutions $\psi_n(x)$ with the corresponding states $|\psi_n\rangle$ indexed by $n = 0, 1, 2, \ldots$ having the respective energy eigenvalues

    E_n = \hbar\omega \left( n + \tfrac{1}{2} \right) .    (1.55)
The normalized functions can be expressed in terms of the Hermite polynomials $H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n} e^{-x^2}$ as

    \psi_n(x) = \left( \frac{M\omega}{\pi\hbar} \right)^{1/4} \frac{2^{-n/2}}{\sqrt{n!}} \exp\left( -\frac{M\omega}{2\hbar} x^2 \right) H_n\!\left( \sqrt{\frac{M\omega}{\hbar}}\, x \right) .    (1.56)
In particular, for the lowest $n = 0$ (or ground) state $|\psi_0\rangle$ with energy $E_0 = \tfrac{1}{2}\hbar\omega$, one has

    \psi_0(x) = \left( \frac{M\omega}{\pi\hbar} \right)^{1/4} \exp\left( -\frac{M\omega}{2\hbar} x^2 \right) .    (1.57)
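The closed-form eigenfunctions (1.56)-(1.57) are easy to check numerically. The following sketch (not from the book; it assumes units with $M = \omega = \hbar = 1$) builds $\psi_n(x)$ from the Hermite recurrence and verifies orthonormality by direct integration:

```python
import numpy as np
from math import factorial, pi, sqrt

def hermite(n, x):
    """Physicists' Hermite polynomial H_n via the standard recurrence
    H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x)."""
    hkm1, hk = np.ones_like(x), 2.0 * x   # H_0, H_1
    if n == 0:
        return hkm1
    for k in range(1, n):
        hkm1, hk = hk, 2.0 * x * hk - 2.0 * k * hkm1
    return hk

def psi(n, x):
    # psi_n(x) = (1/pi)^{1/4} 2^{-n/2} (n!)^{-1/2} e^{-x^2/2} H_n(x)
    norm = (1.0 / pi) ** 0.25 * 2.0 ** (-n / 2) / sqrt(factorial(n))
    return norm * np.exp(-x ** 2 / 2) * hermite(n, x)

x = np.linspace(-10.0, 10.0, 4001)
def overlap(m, n):
    return np.trapz(psi(m, x) * psi(n, x), x)

print(overlap(0, 0), overlap(3, 3), overlap(0, 2))  # ~1, ~1, ~0
```

The grid extent and resolution are arbitrary; any window wide enough to contain the Gaussian tails gives the same result.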
Let us look back at what we have done. Starting with the classical Hamiltonian for the system, we have replaced the momentum $p$ by the differential operator $-i\hbar\partial_x$, thus converting the Hamiltonian to an operator. Then we sought solutions to the differential equation $H\psi(x) = E\psi(x)$ under the appropriate boundary conditions, which defined an eigenvalue problem. The resulting eigenvalues $E_n$ are the values of the energy dictated by quantum theory, with the respective eigenfunctions $\psi_n(x)$ representing the state of the system $|\psi_n\rangle$ with that energy. Thus a system, which in classical physics can have any energy from zero to infinity, in quantum theory has energy that is restricted to discrete values determined by the eigenvalues of a differential operator, and its state (physical properties) is characterized by the respective eigenfunctions $\psi_n(x)$ (or eigenvectors $|\psi_n\rangle$) in the space spanned by these eigenvectors. Moreover, the lowest possible energy is not zero, but the finite quantity $\tfrac{1}{2}\hbar\omega$, referred to as the zero-point energy. This is directly related to the non-commutativity of $P$ and $Q$, as discussed later on.
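The quantization of the spectrum can also be seen by brute force: a sketch (assuming $\hbar = M = \omega = 1$) that discretizes the differential operator of (1.52) on a grid and diagonalizes the resulting matrix; the lowest eigenvalues approach $n + 1/2$:

```python
import numpy as np

# Finite-difference version of H = -(1/2) d^2/dx^2 + x^2/2 on a uniform grid.
N, L = 1500, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
# 3-point stencil for the second derivative
D2 = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / dx ** 2
H = -0.5 * D2 + np.diag(0.5 * x ** 2)
E = np.linalg.eigvalsh(H)[:4]
print(np.round(E, 3))  # approaches [0.5 1.5 2.5 3.5]
```

The grid size and box length are arbitrary choices; the discretization error shrinks as $O(dx^2)$.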
1.2.2 Quantum Mechanical Hamiltonian
In the approach just outlined, we have solved a differential equation, which is what one does in classical physics. Where does quantum mechanics come in, then? It is in the interpretation based on the postulates that relate the vectors to the observation (measurement) of the physical variable, and of course in the replacement of the dynamical variables by operators.
The procedure for the quantization of the Hamiltonian of the harmonic oscillator represents a special case of a more general scheme. If we have a system with $k$ degrees of freedom represented by the coordinates $q_1, q_2, \ldots, q_k$ and their canonically conjugate momenta $p_1, p_2, \ldots, p_k$, the Hamiltonian of the system will, in general, be a function of the $q$'s and $p$'s, i.e.,

    H = H(p_1, p_2, \ldots, p_k; q_1, q_2, \ldots, q_k) .    (1.58)
Classically, Hamilton's equations

    \frac{dq_i}{dt} = \frac{\partial H}{\partial p_i} \quad \text{and} \quad \frac{dp_i}{dt} = -\frac{\partial H}{\partial q_i} , \qquad i = 1, 2, \ldots, k ,    (1.59)

determine the equations of motion of the system. The transition to quantum mechanics is accomplished by identifying $p_i$ with the differential operator $-i\hbar\partial_{q_i}$ and solving the eigenvalue problem defined by the partial differential equation

    H\psi(q_1, q_2, \ldots, q_k) = E\psi(q_1, q_2, \ldots, q_k) ,    (1.60)
under the appropriate boundary conditions, to determine the eigenfunctions $\psi_n(q_1, q_2, \ldots, q_k)$ and the corresponding energy eigenvalues $E_n$, with $n$ running over an infinite set of discrete and/or continuous values. In general, $n$ can, and usually does, represent a group of indices, each running over the appropriate range of values. The scalar product is now given as

    \langle\psi_n|\psi_{n'}\rangle = \int dq_1 dq_2 \ldots dq_k\, \psi_n^*(q_1, q_2, \ldots, q_k)\, \psi_{n'}(q_1, q_2, \ldots, q_k) .    (1.61)

The set of eigenstates $|\psi_n\rangle$ forms an orthonormal basis for the space of the physical system described by $H$. Identifying $q_i$ and $p_i$ with the operators $Q_i$ and $P_i$, and imposing the commutation relations

    [Q_j, P_i] = i\hbar\,\delta_{ij} I , \quad [Q_i, Q_j] = [P_i, P_j] = 0 , \qquad i, j = 1, 2, \ldots, k ,    (1.62)
we have what is referred to as canonical quantization of the system. The
route to quantization in non-relativistic quantum theory, which is the context
of this book, is to obtain the Hamiltonian appropriate to the system under
consideration and proceed with the canonical quantization. If the system does
not have a classical Hamiltonian, e.g., quantum mechanical spin, we must
define for it a suitable vector space.
1.2.3 Algebraic Approach for the Harmonic Oscillator
Having outlined an example of the wavefunction approach through the explicit solution of the differential (Schrödinger) equation, it is instructive to show how, for the same quantum system, the eigenvectors and eigenvalues of $H$ can be obtained from the algebraic properties of the operators of the system, without solving any differential equation. This approach not only underscores the fundamental relation between vector spaces and quantum theory, but it also provides a powerful and elegant tool.
To this end, we introduce the operators $a$ and $a^\dagger$, defined as

    a = \sqrt{\frac{1}{2M\hbar\omega}}\, (M\omega Q + iP) , \qquad a^\dagger = \sqrt{\frac{1}{2M\hbar\omega}}\, (M\omega Q - iP) .    (1.63)

Clearly they are non-Hermitian, with $a^\dagger$ being the Hermitian adjoint of $a$. Their commutator is

    [a, a^\dagger] = 1 ,    (1.64)

which follows directly from the commutator of $Q$ and $P$ in (1.50). We introduce in addition a Hermitian operator $N = a^\dagger a$, which upon substitution of $a$ and $a^\dagger$ from (1.63) is found to be

    N = \frac{1}{\hbar\omega} \left( H - \frac{\hbar\omega}{2} I \right) .
We can then write

    H = \hbar\omega \left( N + \frac{1}{2} \right) = \hbar\omega \left( a^\dagger a + \frac{1}{2} \right) .    (1.65)

Since $H$ is the energy of the harmonic oscillator, evidently $N$ is also a measure of energy in units of $\hbar\omega$. Being a Hermitian operator, $N$ possesses eigenstates with real eigenvalues, if it possesses any at all. A purely algebraic approach can lead us to the answer as follows.
Assume there is an eigenvector $\phi_\nu$ with eigenvalue $\nu$, which means

    N\phi_\nu = \nu\phi_\nu .    (1.66)

Calculate $Na\phi_\nu$ using (1.64):

    Na\phi_\nu = a^\dagger a a \phi_\nu = (a a^\dagger - 1) a \phi_\nu = a (a^\dagger a - 1) \phi_\nu = (\nu - 1)(a\phi_\nu) .

Therefore $a\phi_\nu$ is also an eigenvector of $N$ with eigenvalue $(\nu - 1)$, $a\phi_\nu = \phi_{\nu-1}$. Repeating the procedure $m$ times, we see that $a^m \phi_\nu = \phi_{\nu-m}$, provided it is not the zero eigenvector. Similarly, we find that

    N a^\dagger \phi_\nu = (\nu + 1)(a^\dagger \phi_\nu) ,

which means that $a^\dagger \phi_\nu = \phi_{\nu+1}$, i.e., an eigenvector of $N$ with eigenvalue $\nu + 1$, unless it is the zero vector. We now show that it can not be the zero vector. If it were, i.e., if $a^\dagger \phi_\nu = 0$, we would have $\langle a^\dagger\phi_\nu | a^\dagger\phi_\nu \rangle = 0$. But from the definition of the Hermitian adjoint we have

    \langle a^\dagger\phi_\nu | a^\dagger\phi_\nu \rangle = \langle \phi_\nu | a a^\dagger \phi_\nu \rangle = \langle \phi_\nu | (a^\dagger a + 1) \phi_\nu \rangle = \langle a\phi_\nu | a\phi_\nu \rangle + \langle \phi_\nu | \phi_\nu \rangle \ne 0 ,

because by hypothesis $\phi_\nu \ne 0$. Repeating the procedure $m$ times, we find that $(a^\dagger)^m \phi_\nu = \phi_{\nu+m}$. Therefore, if we have an eigenvector $\phi_\nu$ of the operator $N$, the sequence of vectors $\phi_{\nu+m}$ generated as $(a^\dagger)^m \phi_\nu$ for $m = 0, 1, 2, \ldots$ constitutes a sequence of eigenvectors of $N$, which are non-zero, with respective eigenvalues $\nu + m$.
It remains to determine whether the sequence $a^m \phi_\nu = \phi_{\nu-m}$ leads to the zero eigenvector. Note that

    \langle \phi_{\nu-m} | N \phi_{\nu-m} \rangle = (\nu - m) \langle \phi_{\nu-m} | \phi_{\nu-m} \rangle ,

and also

    \langle \phi_{\nu-m} | N \phi_{\nu-m} \rangle = \langle \phi_{\nu-m} | a^\dagger a \phi_{\nu-m} \rangle = \langle a\phi_{\nu-m} | a\phi_{\nu-m} \rangle .

Combining the two results, we obtain

    \nu - m = \frac{\langle a\phi_{\nu-m} | a\phi_{\nu-m} \rangle}{\langle \phi_{\nu-m} | \phi_{\nu-m} \rangle} \ge 0 ,

because both numerator and denominator, being the norms of two vectors, are non-negative. Consequently, the sequence of eigenvectors $\phi_{\nu-m}$ must eventually terminate at an eigenvector $\phi_0$, such that $a\phi_0 = 0$, with $\phi_0$ being the eigenvector of $N$ with eigenvalue 0,

    N\phi_0 = a^\dagger a \phi_0 = a^\dagger 0 = 0 .    (1.67)
This is the eigenvector with the lowest eigenvalue. Assume it is normalized, $\langle\phi_0|\phi_0\rangle = 1$. All other eigenvectors can now be produced (or constructed) from $\phi_0$ through the repeated application of $a^\dagger$. Thus

    \phi_n = k_n (a^\dagger)^n \phi_0 ,    (1.68)

with $k_n$ a normalization constant, to be determined from the requirement $\langle\phi_n|\phi_n\rangle = 1$. We thus have $N\phi_n = n\phi_n$ and the condition

    |k_n|^2 \langle (a^\dagger)^n \phi_0 | (a^\dagger)^n \phi_0 \rangle = 1 .
One now seeks a relation between $k_n$ and $k_{n-1}$, obtained as follows,

    \phi_{n-1} = k_{n-1} (a^\dagger)^{n-1} \phi_0 ,
    a^\dagger \phi_{n-1} = k_{n-1} (a^\dagger)^n \phi_0 = \frac{k_{n-1}}{k_n} \phi_n , \quad \text{or} \quad \phi_n = \frac{k_n}{k_{n-1}}\, a^\dagger \phi_{n-1} ,    (1.69)

where both $\phi_n$ and $\phi_{n-1}$ must be normalized. In terms of (1.69), this normalization implies
    \langle\phi_n|\phi_n\rangle = 1 = \frac{|k_n|^2}{|k_{n-1}|^2} \langle a^\dagger\phi_{n-1} | a^\dagger\phi_{n-1} \rangle = \frac{|k_n|^2}{|k_{n-1}|^2} \langle \phi_{n-1} | a a^\dagger \phi_{n-1} \rangle
        = \frac{|k_n|^2}{|k_{n-1}|^2} \langle \phi_{n-1} | (a^\dagger a + 1) \phi_{n-1} \rangle
        = \frac{|k_n|^2}{|k_{n-1}|^2} (n - 1 + 1) \langle \phi_{n-1} | \phi_{n-1} \rangle
        = \frac{|k_n|^2}{|k_{n-1}|^2}\, n ,    (1.70)

from which we obtain the recursion relation

    |k_n|^2 = \frac{1}{n} |k_{n-1}|^2 .    (1.71)
Since $\phi_0$ is assumed normalized, we have $k_0 = 1$, from which repeated application of the recursion relation (1.71) leads to

    k_n = \sqrt{\frac{1}{n!}} .    (1.72)

The derivation above is compatible with the more general choice $k_n = e^{i\theta}\sqrt{1/n!}$, with $\theta$ any real number. This phase will not be significant in our considerations, as far as determining the eigenvectors is concerned, and is set equal to 0.
We have thus determined the eigenvectors of operator $N$ to be

    |\phi_n\rangle = \frac{1}{\sqrt{n!}} (a^\dagger)^n |\phi_0\rangle ,    (1.73)

with eigenvalues $n = 0, 1, 2, \ldots$ They are orthogonal to each other, $\langle\phi_n|\phi_{n'}\rangle = \delta_{nn'}$, because they all have distinct eigenvalues. Since the Hamiltonian $H$ is given by (1.65), it is evident that

    H|\phi_n\rangle = \hbar\omega \left( n + \frac{1}{2} \right) |\phi_n\rangle ,    (1.74)

which shows that the vectors $|\phi_n\rangle$ are the eigenvectors of the Hamiltonian, with eigenvalues $\hbar\omega(n + \tfrac{1}{2})$, in agreement with the results obtained above through the solution of the differential (Schrödinger) equation. Here, however, the eigenvectors and eigenvalues of $H$ were obtained from the algebraic properties of the operators of the system without ever solving a differential equation.
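The ladder-operator algebra above can be realized concretely with finite matrices in a truncated number basis (a sketch assuming $\hbar = \omega = 1$; the dimension 12 is an arbitrary cutoff):

```python
import numpy as np

# a|n> = sqrt(n)|n-1> puts sqrt(1..dim-1) on the first superdiagonal.
dim = 12
a = np.diag(np.sqrt(np.arange(1.0, dim)), 1)   # annihilation operator
adag = a.conj().T                              # creation operator
N = adag @ a
H = N + 0.5 * np.eye(dim)                      # (1.65) in matrix form
comm = a @ adag - adag @ a                     # [a, a†]
print(np.diag(N)[:4])        # number eigenvalues 0, 1, 2, 3
print(np.diag(H)[:4])        # energies n + 1/2
# [a, a†] = 1 holds everywhere except the last row/column,
# an artifact of the truncation.
print(comm[:3, :3])
```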
If we want to have the analytic expressions for $\phi_n$ as functions of $x$, all we need is the expression for $\phi_0$. According to (1.57), it is given by

    \phi_0 = \psi_0(x) = \left( \frac{M\omega}{\pi\hbar} \right)^{1/4} \exp\left( -\frac{M\omega}{2\hbar} x^2 \right) .    (1.75)
Then the eigenfunctions $\phi_n$ are obtained by successive application of the differential operator

    a^\dagger = \frac{1}{\sqrt{2}} \left( \sqrt{\frac{M\omega}{\hbar}}\, x - \sqrt{\frac{\hbar}{M\omega}}\, \frac{\partial}{\partial x} \right) ,    (1.76)

with the result $\phi_n = \psi_n(x)$ given by (1.56). Using the properties of $|\phi_n\rangle$, it is straightforward to show that

    a\,|\phi_n\rangle = \sqrt{n}\, |\phi_{n-1}\rangle \quad \text{and} \quad a^\dagger |\phi_n\rangle = \sqrt{n+1}\, |\phi_{n+1}\rangle .    (1.77)
Although for reasons of expediency, in deriving the above results we invoked the expression for $\phi_0 = \psi_0(x)$ obtained through the differential equation (1.52), that was not necessary. One can obtain $\phi_n$ as functions of $x$ within the algebraic procedure, using only the properties of the operators $a$ and $a^\dagger$. Briefly, to this end, one seeks the eigenvectors $|x\rangle$ of the operator $Q$ with the (continuous) eigenvalues $x$. Formally, one seeks solutions of

    Q|x\rangle = x\,|x\rangle ,    (1.78)

and uses the basis $\{|\phi_n\rangle\}$ to obtain the coefficients $\langle\phi_n|x\rangle$ transforming one set of eigenvectors into the other. One expresses $Q$ in terms of $a$ and $a^\dagger$, and exploits their action on $|\phi_n\rangle$, defining in the process

    f_n(y) = \sqrt{2^n n!}\, \frac{\langle\phi_n|x\rangle}{\langle\phi_0|x\rangle}

with $y = \sqrt{M\omega/\hbar}\, x$. One then arrives at the difference equations (or recursion relations)

    f_n(y) = 2y f_{n-1}(y) - 2(n-1) f_{n-2}(y)    (1.79)

with $f_0(y) = 1$ and $f_1(y) = 2y f_0(y)$. The above recursion relations are known to be those of the Hermite polynomials, which for real $y$ are given by

    f_n(y) = H_n(y) = (-1)^n e^{y^2} \frac{d^n e^{-y^2}}{dy^n} ,

yielding finally

    \langle\phi_n|x\rangle = \sqrt{\frac{1}{2^n n!}}\, \langle\phi_0|x\rangle\, H_n\!\left( \sqrt{\frac{M\omega}{\hbar}}\, x \right) \quad \text{for} \quad -\infty < x < \infty .    (1.80)
Having established that according to the fundamental structure of quantum theory the harmonic oscillator can have only the energies $E_n = \hbar\omega(n + \tfrac{1}{2})$, it is said that its energy is quantized in units of $\hbar\omega$, which is referred to as one quantum. Since the action of the operator $a$ or $a^\dagger$ on $|\phi_n\rangle$ results, respectively, in the decrease or increase of the energy by one quantum $\hbar\omega$, they are called, respectively, the annihilation (lowering) or creation (raising) operators. Both are said to be ladder operators.
The infinite set $\{|\phi_n\rangle\}$ of eigenvectors of $H$ constitutes an orthonormal basis for the infinite-dimensional space that contains all possible states (vectors) of the harmonic oscillator allowed by quantum theory. An arbitrary state $|\psi\rangle$ of the system can therefore be written as

    |\psi\rangle = \sum_{n=0}^{\infty} c_n |\phi_n\rangle ,    (1.81)

with the condition $\sum_{n=0}^{\infty} |c_n|^2 < \infty$. The space of all $|\psi\rangle$ that satisfy this condition, with the scalar product defined via (1.53), i.e.

    \langle\psi|\psi'\rangle \equiv \int_{-\infty}^{\infty} dx\, \psi^*(x)\, \psi'(x) ,

constitutes a Hilbert space.
More generally, the eigenvectors of the harmonic oscillator constitute an orthonormal set of vectors that can be, and often is, used as a basis for the description of the states of any system in one dimension in the real interval $(-\infty, \infty)$, under the condition that its states are square integrable in that interval. Given the equivalence of the two approaches that led to the eigenvectors of $H$, we can condense the notation by using $|n\rangle$ for the eigenvectors of $N$, called the number states, i.e.,

    N|n\rangle \equiv a^\dagger a\,|n\rangle = n\,|n\rangle ,    (1.82)

as is very common in the literature. Clearly,

    H|n\rangle = \hbar\omega \left( n + \frac{1}{2} \right) |n\rangle .    (1.83)
Coherent States of the Harmonic Oscillator
In addition to providing a fundamental quantum system for the elaboration
and illustration of the principles of quantum theory, the harmonic oscillator
represents a building block in the description of many physical systems, such
as the electromagnetic field, vibrations of nuclei in crystals or molecules, etc.
It is thus of interest to explore several of its properties as they will be found
useful in the subsequent chapters.
A very special feature of the harmonic oscillator is the existence of a set of states quite different from the eigenstates $|n\rangle$ of the Hamiltonian $H$. They are known as coherent states and can be obtained as eigenstates of the (non-Hermitian) annihilation operator $a$. Clearly, if such eigenstates exist at all, their eigenvalues can not be expected to be necessarily real. Let $|\alpha\rangle$ be such a state, with the complex number $\alpha$ denoting its eigenvalue. By definition, we thus have
    a\,|\alpha\rangle = \alpha\,|\alpha\rangle .    (1.84)

But whatever $|\alpha\rangle$ is, it has to be a state of the harmonic oscillator and can therefore be decomposed in the basis $\{|n\rangle\}$, which means

    |\alpha\rangle = \sum_{n=0}^{\infty} \langle n|\alpha\rangle\, |n\rangle .    (1.85)

Using the definition of $|\alpha\rangle$ in (1.84) we can write

    \langle n|\, a\, |\alpha\rangle = \alpha \langle n|\alpha\rangle .

On the other hand, we have

    \langle n|\, a\, |\alpha\rangle = \langle\alpha|\, a^\dagger\, |n\rangle^* = \sqrt{n+1}\, \langle\alpha|n+1\rangle^* .

Equating these two expressions for $\langle n|\, a\, |\alpha\rangle$, we obtain

    \langle n+1|\alpha\rangle = \frac{\alpha}{\sqrt{n+1}}\, \langle n|\alpha\rangle .    (1.86)

Starting from the lowest $n = 0$, by induction we find

    \langle n|\alpha\rangle = \frac{\alpha^n}{\sqrt{n!}}\, \langle 0|\alpha\rangle ,    (1.87)
which enables us to represent $|\alpha\rangle$ as

    |\alpha\rangle = \langle 0|\alpha\rangle \sum_{n=0}^{\infty} \frac{\alpha^n}{\sqrt{n!}}\, |n\rangle ,    (1.88)

where the only unknown is the multiplicative constant $\langle 0|\alpha\rangle$, a c-number. Requiring that $|\alpha\rangle$ be normalized, i.e. $\langle\alpha|\alpha\rangle = 1$, we have

    \langle\alpha|\alpha\rangle = |\langle 0|\alpha\rangle|^2 \sum_{n=0}^{\infty} \frac{|\alpha|^{2n}}{n!} = |\langle 0|\alpha\rangle|^2 \exp(|\alpha|^2) = 1 ,

from where $\langle 0|\alpha\rangle = e^{i\theta} e^{-\frac{1}{2}|\alpha|^2}$, with $\theta$ being a real number representing the phase of the state vector. Unless one has a reason to expect the phase to play a role in a particular context, it can be set equal to zero. Thus a well-defined eigenstate of the annihilation operator $a$ is

    |\alpha\rangle = e^{-\frac{1}{2}|\alpha|^2} \sum_{n=0}^{\infty} \frac{\alpha^n}{\sqrt{n!}}\, |n\rangle ,    (1.89)

which exists for any c-number $\alpha$.
Let us also express the coherent state $|\alpha\rangle$ through the lowest energy state of the harmonic oscillator $|0\rangle$ as

    |\alpha\rangle = e^{-\frac{1}{2}|\alpha|^2} \sum_{n=0}^{\infty} \frac{(\alpha a^\dagger)^n}{n!}\, |0\rangle = e^{-\frac{1}{2}|\alpha|^2} e^{\alpha a^\dagger} |0\rangle .    (1.90)

Using the fact that $e^{-\alpha^* a} |0\rangle = |0\rangle$ (since $a|0\rangle = 0$) and the operator relation (1.26), we can rewrite (1.90) as

    |\alpha\rangle = D(\alpha)\,|0\rangle , \qquad D(\alpha) \equiv e^{\alpha a^\dagger - \alpha^* a} .    (1.91)

Thus, formally, the coherent state $|\alpha\rangle$ can be generated from the lowest (ground) state $|0\rangle$ by acting upon it with the operator $D(\alpha)$, known as the displacement operator.
Consider now the scalar product of two coherent states $|\alpha\rangle$ and $|\beta\rangle$,

    \langle\beta|\alpha\rangle = e^{-\frac{1}{2}(|\alpha|^2 + |\beta|^2)} \sum_n \frac{(\beta^*)^n \alpha^n}{n!} = \exp\left[ \beta^* \alpha - \frac{1}{2} \left( |\alpha|^2 + |\beta|^2 \right) \right] .

We thus have

    |\langle\beta|\alpha\rangle|^2 = e^{-|\alpha - \beta|^2} ,    (1.92)

which is small for $|\alpha - \beta|^2 \gg 1$, but never zero, i.e., the coherent states are non-orthogonal. However, the farther from each other (on the complex plane) the eigenvalues $\alpha$ and $\beta$ are, the more "orthogonal" (the smaller the overlap of) the two states, but each state still contains all of the others. Thus the set of coherent states $\{|\alpha\rangle\}$ is continuous, normalized, non-orthogonal and overcomplete. It can nevertheless be used as a basis, which is particularly useful in calculating the expectation values of correlation functions of $a^\dagger$ and $a$, as will be illustrated in the next chapter.
1.2.4 Operators and Measurement
In the beginning of Sect. 1.2, we formulated the first postulate of quantum theory, which relates physical observables with Hermitian operators in the Hilbert space in which the possible state vectors of the system are defined. The second postulate of quantum theory states that the measurement of a physical observable corresponds to an action of the respective operator on the state of the system. In addition, the result of the measurement can only be one of the eigenvalues of the operator, with a probability determined by the respective coefficient in the expansion of the state vector in terms of eigenvectors of that operator, which is often referred to as the Born rule.

To state this postulate formally, let $A$ be the Hermitian operator, $|a_i\rangle$ its eigenvectors with eigenvalues $a_i$, and $|\psi\rangle$ the state of the system. Then we can write

    |\psi\rangle = \sum_i \langle a_i|\psi\rangle\, |a_i\rangle ,    (1.93)

from which

    A|\psi\rangle = \sum_i \langle a_i|\psi\rangle\, a_i\, |a_i\rangle .    (1.94)

The probability $P(a_i)$ for obtaining the result $a_i$ in one given measurement is

    P(a_i) = |\langle a_i|\psi\rangle|^2 = \langle\psi|\, \Pi_i\, |\psi\rangle ,    (1.95)

where $\Pi_i \equiv |a_i\rangle\langle a_i|$ is the corresponding projection operator. Since we always assume the state vector $|\psi\rangle$ to be normalized, we have

    \sum_i P(a_i) = \sum_i |\langle a_i|\psi\rangle|^2 = 1 ,    (1.96)

as it should be if the $P(a_i)$ are to fulfill the necessary conditions for probabilities.
The average value to be expected from the measurement of $A$ on an ensemble of systems identically prepared in state $|\psi\rangle$ will obviously be

    \sum_i a_i P(a_i) = \sum_i a_i |\langle a_i|\psi\rangle|^2 = \langle\psi|\, A\, |\psi\rangle \equiv \langle A\rangle ,    (1.97)

and is called the expectation (or expected) value of $A$ for a system in state $|\psi\rangle$. It simply represents in ket notation the scalar product of $|\psi\rangle$ with the vector $A|\psi\rangle$ resulting from the action of the operator $A$ on $|\psi\rangle$.

In general, there is a variance $\sigma_A^2 \equiv (\Delta A)^2$ associated with the measurement of $A$,

    (\Delta A)^2 \equiv \langle A^2\rangle - \langle A\rangle^2 = \langle\psi|\, A^2\, |\psi\rangle - |\langle\psi|\, A\, |\psi\rangle|^2 ,    (1.98)

which is consistent with the definition of the variance in probability theory, since (1.98) also represents the quantity

    \sum_i a_i^2 P(a_i) - \left( \sum_i a_i P(a_i) \right)^2 = \sigma_A^2 .

Clearly, if $|\psi\rangle$ coincides with one of the eigenstates of $A$, the result of the measurement will be its corresponding eigenvalue with zero variance. It is only in that case that the expectation value of the physical observable can be determined with no uncertainty whatsoever.
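The Born-rule bookkeeping above can be checked mechanically. A sketch with a hypothetical random $4\times 4$ Hermitian "observable" and a random normalized state (both made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2                  # Hermitian observable
evals, evecs = np.linalg.eigh(A)          # columns of evecs are |a_i>
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)                # normalized state
P = np.abs(evecs.conj().T @ psi) ** 2     # P(a_i) = |<a_i|psi>|^2, eq. (1.95)
expA = (psi.conj() @ A @ psi).real        # <psi|A|psi>, eq. (1.97)
var = (psi.conj() @ A @ A @ psi).real - expA ** 2   # eq. (1.98)
print(P.sum())                            # probabilities sum to 1, eq. (1.96)
print(expA - (evals * P).sum())           # ~0: both forms of <A> agree
print(var - ((evals ** 2 * P).sum() - (evals * P).sum() ** 2))  # ~0
```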
Post-measurement State
As stated above, the measurement of a physical observable described by an operator $A$ yields one of its eigenvalues $a_i$. In turn, the state of the system immediately after the measurement is given by the eigenvector corresponding to the measured eigenvalue of $A$. Thus, for an arbitrary state $|\psi\rangle$, if the outcome of the measurement is $a_i$, immediately after the measurement the system is in state $|a_i\rangle$. This is often referred to as the collapse of the state vector $|\psi\rangle$ to $|a_i\rangle$. The outcome of the measurement can therefore be associated with the corresponding projection operator $\Pi_i \equiv |a_i\rangle\langle a_i|$ acting on the system initially in state $|\psi\rangle$ to generate the post-measurement state $|\psi_{\rm pm}\rangle$ according to

    |\psi_{\rm pm}\rangle = \frac{\Pi_i\, |\psi\rangle}{\sqrt{P(a_i)}} = |a_i\rangle ,    (1.99)

where the denominator, with $P(a_i)$ given by (1.95), ensures the renormalization of $|\psi_{\rm pm}\rangle$.
Often in quantum mechanics one encounters more general types of measurement, in which the detection process involves an intermediate system (or environment) which is correlated with the interrogated system through an interaction. In general, this interaction may result in transitions between the eigenstates $|a_i\rangle$ of the measured system. Examples of such measurements will be given in Sect. 2.3 and Sect. 4.2. Then, with each outcome of the measurement performed on the intermediate system (environment) we can associate an operator $\tilde{\Pi}_i = |a_j\rangle\langle a_i|$, where $|a_j\rangle$ and $|a_i\rangle$ may represent different eigenstates. The so-called quantum jump operator $\tilde{\Pi}_i$ acts on the system initially in state $|\psi\rangle$ to generate the post-measurement state $|\psi_{\rm pm}\rangle$ given by

    |\psi_{\rm pm}\rangle = \frac{\tilde{\Pi}_i\, |\psi\rangle}{\sqrt{\langle\psi|\, \tilde{\Pi}_i^\dagger \tilde{\Pi}_i\, |\psi\rangle}} = \frac{|a_j\rangle\langle a_i|\psi\rangle}{\sqrt{P(a_i)}} = |a_j\rangle ,    (1.100)

where we have used $\langle\psi|\, \tilde{\Pi}_i^\dagger \tilde{\Pi}_i\, |\psi\rangle = \langle\psi|a_i\rangle\langle a_i|\psi\rangle = P(a_i)$. In the special case of $\tilde{\Pi}_i = |a_i\rangle\langle a_i| = \Pi_i$ we recover the post-measurement state (1.99). The measurement schemes resulting in the post-measurement states (1.99) and (1.100) are sometimes called measurements of the first and second kind, respectively.
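The collapse rule (1.99) amounts to "project, then renormalize". A sketch with a hypothetical random orthonormal basis and state (all inputs made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# QR of a random complex matrix gives a unitary; its columns are the |a_i>.
basis, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
a0 = basis[:, 0]                          # the measured eigenvector |a_0>
Pi = np.outer(a0, a0.conj())              # projector |a_0><a_0|
p = (psi.conj() @ Pi @ psi).real          # P(a_0), eq. (1.95)
post = Pi @ psi / np.sqrt(p)              # post-measurement state, eq. (1.99)
print(np.linalg.norm(post))               # renormalized: 1.0
print(abs(post.conj() @ a0))              # equals |a_0> up to a phase: 1.0
```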
1.2.5 Heisenberg Uncertainty Principle
A quantum system is determined fully by the necessary operators, whose number depends on the nature of the system and its degrees of freedom. In the simple case of the linear harmonic oscillator, with one degree of freedom, we have the position $Q$ and momentum $P$ operators, which do not commute. On the other hand, we have seen that for $k$ degrees of freedom we have $k$ operators $Q_i$ and $P_i$, each of which commutes with all others as per (1.62). This often means that the Hamiltonian $H$ can be written in terms of the sum of operators acting on different non-overlapping subspaces whose union constitutes the complete space of the system. These partial Hamiltonian operators commute with each other, as they involve different degrees of freedom represented by commuting operators. Thus we are led to the following very useful property (theorem): If two operators $A$ and $B$ have the same set of eigenvectors, they must commute, $[A, B] = 0$. The converse is also true: If two operators commute, their eigenvectors coincide.
Let $\{|\phi_i\rangle\}$ be the common set of eigenvectors, i.e., $A|\phi_i\rangle = a_i|\phi_i\rangle$ and $B|\phi_i\rangle = b_i|\phi_i\rangle$, with $a_i$ and $b_i$ the respective eigenvalues and $i = 0, 1, 2, \ldots, N$ or $\infty$. Obviously

    AB|\phi_i\rangle = A b_i |\phi_i\rangle = b_i A |\phi_i\rangle = b_i a_i |\phi_i\rangle = BA|\phi_i\rangle .    (1.101)

Therefore $AB - BA = 0$, because (1.101) holds for all $|\phi_i\rangle$. To prove the converse, assume $AB = BA$ and let $\{|a_i\rangle\}$ be the complete set of eigenvectors of $A$, i.e., $A|a_i\rangle = a_i|a_i\rangle$. Consider now $AB|a_i\rangle = BA|a_i\rangle = a_i B|a_i\rangle$. This implies that $B|a_i\rangle$ is an eigenvector of $A$ with the eigenvalue $a_i$. Consequently $B|a_i\rangle = \beta_i |a_i\rangle$, where $\beta_i$ is a c-number coefficient. But this means that $|a_i\rangle$ is an eigenvector of $B$ as well.
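The theorem is easy to probe numerically: take a hypothetical Hermitian matrix $A$ and build $B$ as a polynomial in $A$, so that $[A, B] = 0$ by construction; $B$ then comes out diagonal in the eigenbasis of $A$ (a sketch, assuming a generic non-degenerate spectrum):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
A = (M + M.T) / 2                      # Hermitian, generically non-degenerate
B = A @ A + 3.0 * A                    # commutes with A by construction
commutes = np.allclose(A @ B, B @ A)
evals, V = np.linalg.eigh(A)           # columns of V are eigenvectors of A
B_diag = V.T @ B @ V                   # B expressed in the eigenbasis of A
off = B_diag - np.diag(np.diag(B_diag))
print(commutes, np.abs(off).max())     # True, ~0
```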
Thus, if two operators $A$ and $B$ commute, they possess a common set of eigenvectors $\{|\phi_i\rangle\}$ with corresponding eigenvalues $\{a_i\}$ and $\{b_i\}$. It is now evident that if the system is in one of those common eigenstates, say $|\phi_i\rangle$, measurement of both $A$ and $B$ will yield the values $a_i$ and $b_i$ with no uncertainty. Otherwise, if $A$ and $B$ do not commute, each of them has its own distinct set of eigenvectors, $\{|a_i\rangle\}$ and $\{|b_i\rangle\}$, respectively. Then if the system is in one of the eigenstates of $A$, say $|a_i\rangle$, the measurement of the observable associated with $A$ will yield $a_i$ with no uncertainty. But because $|a_i\rangle$ is not an eigenstate of $B$, the result of a measurement of $B$ will be uncertain. This fundamental feature of quantum theory was first articulated by Heisenberg in what is now known as the Heisenberg uncertainty principle, which states that the product of the uncertainties of two variables represented by operators $A$ and $B$ must be no smaller than half the absolute value of the expectation value of their commutator,

    \Delta A\, \Delta B \ge \frac{1}{2}\, |\langle [A, B] \rangle| .    (1.102)

For two canonically conjugate variables, represented by two non-commuting operators, such as position $Q$ and momentum $P$, we thus have

    \Delta Q\, \Delta P \ge \hbar/2 ,    (1.103)

which is a consequence of the commutation relation (1.50).
To illustrate the foregoing discussion, let us revert again to the harmonic oscillator in the coherent state $|\alpha\rangle$. Since $|\alpha\rangle$, as given by (1.89), is a linear combination of the energy eigenstates $|n\rangle$, the energy of the system in such a state involves an uncertainty. Clearly, upon measurement, the probability of obtaining the value $n\hbar\omega$ is

    P(n) = |\langle n|\alpha\rangle|^2 = \frac{|\alpha|^{2n}}{n!}\, e^{-|\alpha|^2} ,    (1.104)

while the average value of $n$ is

    \langle\alpha|\, a^\dagger a\, |\alpha\rangle = \sum_n n P(n) = |\alpha|^2 \equiv \bar{n} .    (1.105)
This implies that $P(n)$ can also be expressed as

    P(n) = \frac{\bar{n}^n e^{-\bar{n}}}{n!} ,    (1.106)

which demonstrates that the probability of obtaining $n$ quanta is given by the Poisson distribution. The variance $(\Delta n)^2 = \langle N^2\rangle - \langle N\rangle^2$ can easily be calculated, by noting that

    \langle\alpha|\, N^2\, |\alpha\rangle = \langle\alpha|\, a^\dagger a a^\dagger a\, |\alpha\rangle = \langle\alpha|\, a^\dagger (a^\dagger a + 1) a\, |\alpha\rangle
        = \langle\alpha|\, a^\dagger a^\dagger a a\, |\alpha\rangle + \langle\alpha|\, a^\dagger a\, |\alpha\rangle = |\alpha|^4 + |\alpha|^2 ,

where we have used the fact that $\langle\alpha|\, a^\dagger = (a\,|\alpha\rangle)^\dagger = \alpha^* \langle\alpha|$, i.e., $a^\dagger$ operates to the left (on the bra), while $a$ operates to the right (on the ket). We thus have $(\Delta n)^2 = |\alpha|^4 + |\alpha|^2 - |\alpha|^4 = |\alpha|^2$, or

    \Delta n = |\alpha| = \sqrt{\bar{n}} ,    (1.107)

as also expected from the Poisson distribution (1.106).
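A sketch confirming the Poissonian photon-number statistics (1.104)-(1.107) in a truncated number basis; the value of $\alpha$ and the cutoff dimension are arbitrary choices:

```python
import numpy as np
from math import factorial

alpha, dim = 1.5, 60
n = np.arange(dim)
# coefficients <n|alpha> from (1.89)
c = np.exp(-abs(alpha) ** 2 / 2) * alpha ** n \
    / np.sqrt(np.array([float(factorial(k)) for k in n]))
P = np.abs(c) ** 2                     # photon-number distribution, (1.104)
mean = (n * P).sum()                   # <n> = |alpha|^2, (1.105)
var = (n ** 2 * P).sum() - mean ** 2   # (Delta n)^2 = |alpha|^2, (1.107)
print(mean, var)                       # both ~ |alpha|^2 = 2.25
```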
Let us also calculate the uncertainties in the position $Q$ and momentum $P$ measurement of the harmonic oscillator in the coherent state $|\alpha\rangle$. Using

    Q = \sqrt{\frac{\hbar}{2M\omega}}\, (a^\dagger + a) , \qquad P = i \sqrt{\frac{M\hbar\omega}{2}}\, (a^\dagger - a) ,    (1.108)

we have

    \langle Q\rangle = \sqrt{\frac{2\hbar}{M\omega}}\, {\rm Re}(\alpha) , \qquad \langle P\rangle = \sqrt{2M\hbar\omega}\, {\rm Im}(\alpha) ,

and

    \langle Q^2\rangle = \frac{\hbar}{2M\omega} \left[ \big( 2\,{\rm Re}(\alpha) \big)^2 + 1 \right] , \qquad \langle P^2\rangle = \frac{M\hbar\omega}{2} \left[ \big( 2\,{\rm Im}(\alpha) \big)^2 + 1 \right] .

The variances are thus given by

    (\Delta Q)^2 = \langle Q^2\rangle - \langle Q\rangle^2 = \frac{\hbar}{2M\omega} , \qquad (\Delta P)^2 = \langle P^2\rangle - \langle P\rangle^2 = \frac{M\hbar\omega}{2} ,    (1.109)

from which

    \Delta Q\, \Delta P = \hbar/2 ,    (1.110)

i.e., the coherent state of the harmonic oscillator is a minimum uncertainty state, in the sense that $\Delta Q\, \Delta P$ is the minimum allowed. In addition, the uncertainties of the position $Q$ and momentum $P$ are equal (except for the dimensional factor $\sqrt{M\omega}$ in the denominator and numerator, respectively), as shown in Fig. 1.1(a).
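The minimum-uncertainty property (1.109)-(1.110) can be checked with the truncated ladder matrices (a sketch assuming $\hbar = M = \omega = 1$, so $Q = (a^\dagger + a)/\sqrt{2}$ and $P = i(a^\dagger - a)/\sqrt{2}$; $\alpha$ is an arbitrary complex number):

```python
import numpy as np
from math import factorial

dim, alpha = 40, 0.8 + 0.3j
a = np.diag(np.sqrt(np.arange(1.0, dim)), 1)
ad = a.conj().T
Q = (ad + a) / np.sqrt(2)
P = 1j * (ad - a) / np.sqrt(2)
n = np.arange(dim)
coh = np.exp(-abs(alpha) ** 2 / 2) * alpha ** n \
      / np.sqrt(np.array([float(factorial(k)) for k in n]))
ev = lambda O: (coh.conj() @ O @ coh).real   # expectation value in |alpha>
varQ = ev(Q @ Q) - ev(Q) ** 2
varP = ev(P @ P) - ev(P) ** 2
print(varQ, varP, np.sqrt(varQ * varP))      # 0.5, 0.5, 0.5 = hbar/2
```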
[Fig. 1.1. Amplitudes ($\langle Q\rangle \propto {\rm Re}(\alpha)$ and $\langle P\rangle \propto {\rm Im}(\alpha)$) and uncertainties (shaded error contours) for the (a) coherent and (b) squeezed states.]

A more general class of minimum uncertainty states are the squeezed coherent states, for which the uncertainty of one of the variables is smaller than that of the coherent state, $\sqrt{\hbar/2}$, with the uncertainty of the conjugate variable larger than $\sqrt{\hbar/2}$, so that their product is equal to $\hbar/2$, as required by the uncertainty principle. A squeezed state, denoted by $|\alpha, \zeta\rangle$, can formally be generated by acting with the unitary squeezing operator $S(\zeta)$ upon the coherent state $|\alpha\rangle$,

    |\alpha, \zeta\rangle = S(\zeta)\,|\alpha\rangle , \qquad S(\zeta) \equiv \exp\left( \tfrac{1}{2} \zeta^* a^2 - \tfrac{1}{2} \zeta a^{\dagger 2} \right) .    (1.111)
Given a harmonic oscillator in the squeezed state $|\alpha, \zeta\rangle$, where $\zeta = r e^{i\theta}$ is a complex number, it is convenient to define the Hermitian amplitude operators

    Y_1(\theta) = a^\dagger e^{i\theta/2} + a e^{-i\theta/2} , \qquad Y_2(\theta) = i \left( a^\dagger e^{i\theta/2} - a e^{-i\theta/2} \right) ,    (1.112)

which are proportional to the position $Q$ and momentum $P$ operators rotated by the angle $\theta/2$ in the complex $\alpha$ plane. Using the following properties of the squeezing operator (see Prob. 1.5),

    S^\dagger(\zeta)\, a\, S(\zeta) = a \cosh r - a^\dagger e^{i\theta} \sinh r ,    (1.113a)
    S^\dagger(\zeta)\, a^\dagger S(\zeta) = a^\dagger \cosh r - a e^{-i\theta} \sinh r ,    (1.113b)

it is a simple exercise to obtain

    \langle Y_1\rangle = \langle\alpha|\, S^\dagger(\zeta)\, Y_1\, S(\zeta)\, |\alpha\rangle = \left( \alpha^* e^{i\theta/2} + \alpha e^{-i\theta/2} \right) e^{-r} ,
    \langle Y_2\rangle = \langle\alpha|\, S^\dagger(\zeta)\, Y_2\, S(\zeta)\, |\alpha\rangle = i \left( \alpha^* e^{i\theta/2} - \alpha e^{-i\theta/2} \right) e^{r} ,
and

    \langle Y_1^2\rangle = \left[ 1 + \left( \alpha^* e^{i\theta/2} + \alpha e^{-i\theta/2} \right)^2 \right] e^{-2r} ,
    \langle Y_2^2\rangle = \left[ 1 - \left( \alpha^* e^{i\theta/2} - \alpha e^{-i\theta/2} \right)^2 \right] e^{2r} .

The variances of $Y_1$ and $Y_2$ are therefore

    (\Delta Y_1)^2 = e^{-2r} , \qquad (\Delta Y_2)^2 = e^{2r} ,    (1.114)

which yields

    \Delta Y_1\, \Delta Y_2 = 1 .    (1.115)

Thus the parameter $r$ determines the amount of squeezing of the uncertainty of one of the amplitudes, while the uncertainty of the other (conjugate) amplitude is stretched by the corresponding amount, such that their product is a constant equal to 1. When $\theta = 0$, the amplitudes $Y_1$ and $Y_2$ correspond, respectively, to the position $Q$ and momentum $P$ operators, for which we readily obtain

    \Delta Q = \sqrt{\frac{\hbar}{2M\omega}}\, e^{-r} , \qquad \Delta P = \sqrt{\frac{M\hbar\omega}{2}}\, e^{r} .    (1.116)

For $r > 0$ we have a position-squeezed state, meaning that the uncertainty in the position measurement is smaller than that for the coherent state (see Fig. 1.1(b)), while for $r < 0$ the harmonic oscillator is in a momentum-squeezed state.
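The squeezed variances (1.114)-(1.116) can be reproduced by exponentiating the generator of $S(\zeta)$ in a truncated basis (a sketch with $\zeta = r$ real, i.e. $\theta = 0$, and $\hbar = M = \omega = 1$; $r$ and the cutoff are arbitrary choices; the exponent is anti-Hermitian, so its exponential is computed by diagonalizing $iX$):

```python
import numpy as np

dim, r = 60, 0.3
a = np.diag(np.sqrt(np.arange(1.0, dim)), 1)
ad = a.conj().T
X = 0.5 * r * (a @ a - ad @ ad)           # exponent of S(zeta) in (1.111)
lam, V = np.linalg.eigh(1j * X)           # X anti-Hermitian => i*X Hermitian
S = V @ np.diag(np.exp(-1j * lam)) @ V.conj().T   # S = exp(X)
vac = np.zeros(dim, complex); vac[0] = 1.0
sq = S @ vac                              # squeezed vacuum S(r)|0>
Q = (ad + a) / np.sqrt(2)
P = 1j * (ad - a) / np.sqrt(2)
ev = lambda O: (sq.conj() @ O @ sq).real
varQ = ev(Q @ Q) - ev(Q) ** 2             # -> exp(-2r)/2  (position squeezed)
varP = ev(P @ P) - ev(P) ** 2             # -> exp(+2r)/2  (momentum stretched)
print(varQ, varP, varQ * varP)            # product stays (hbar/2)^2 = 0.25
```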
1.2.6 Time Evolution: The Schrödinger Equation
The third postulate of non-relativistic quantum theory states that the time evolution of a system is governed by the Schrödinger equation

    i\hbar \frac{\partial}{\partial t} |\psi(t)\rangle = H |\psi(t)\rangle .    (1.117)

For an isolated system, the Hamiltonian $H = H_0$ is time-independent and the total energy of the system is conserved. We can then expand the state of the system $|\psi\rangle$ in terms of the eigenstates $|E_n\rangle$ of the Hamiltonian, $H_0 |E_n\rangle = E_n |E_n\rangle$ with $E_n$ the corresponding energy eigenvalues, as

    |\psi\rangle = \sum_n c_n |E_n\rangle .

Substituting this into the Schrödinger equation (1.117) we have

    \sum_n \dot{c}_n |E_n\rangle = -\frac{i}{\hbar} \sum_n E_n c_n |E_n\rangle ,    (1.118)

with the solution $c_n(t) = c_n(0)\, e^{-i\omega_n t}$, where $\omega_n = E_n/\hbar$. Thus the amplitudes $c_n$ of the decomposition of the state vector, while preserving their absolute value $|c_n(t)| = |c_n(0)|$, oscillate in time with the frequencies $\omega_n$ determined by the corresponding energy eigenvalues $E_n$, with the result

    |\psi(t)\rangle = \sum_n c_n(0)\, e^{-i\omega_n t} |E_n\rangle .    (1.119)
Consider now the case of a system under an external (in general time-dependent) perturbation $V(t)$ which acts through some dynamic variable (operator) of the system contained in $V$. The total Hamiltonian now is $H = H_0 + V$. It is often convenient to expand the state of the system in terms of the eigenstates $|E_n\rangle$ of the unperturbed Hamiltonian $H_0$ as

    |\psi(t)\rangle = \sum_n c_n(t) |E_n\rangle = \sum_n \tilde{c}_n(t)\, e^{-i\omega_n t} |E_n\rangle .

Then, from the Schrödinger equation (1.117) we obtain the following set of differential equations for the slowly-varying coefficients $\tilde{c}_n(t)$,

    \frac{\partial}{\partial t} \tilde{c}_n(t) = -\frac{i}{\hbar} \sum_m \tilde{V}_{nm}(t)\, \tilde{c}_m(t) , \qquad \tilde{V}_{nm}(t) \equiv e^{i\omega_{nm} t} \langle E_n|\, V(t)\, |E_m\rangle ,    (1.120)

where $\omega_{nm} = \omega_n - \omega_m$. These equations govern the time evolution of the system in the so-called interaction picture, in which the rapid oscillations of the coefficients of the state vector expansion in terms of the energy eigenstates $|E_n\rangle$ are removed via the transformation

    |\tilde{\psi}(t)\rangle = e^{\frac{i}{\hbar} H_0 t} |\psi(t)\rangle = \sum_n \tilde{c}_n(t) |E_n\rangle .    (1.121)
The Schrödinger equation for $|\tilde{\psi}(t)\rangle$ then reads

    i\hbar \frac{\partial}{\partial t} |\tilde{\psi}(t)\rangle = \tilde{V}(t)\, |\tilde{\psi}(t)\rangle , \qquad \tilde{V}(t) \equiv e^{\frac{i}{\hbar} H_0 t}\, V(t)\, e^{-\frac{i}{\hbar} H_0 t} ,    (1.122)

which is an equivalent way of writing (1.120). Thus, to determine the state of the system at time $t > 0$, given its state at time $t = 0$, one has to solve the set of coupled differential equations (1.120) with the corresponding initial conditions defined by the amplitudes $c_n(0) = \tilde{c}_n(0)$.
When the interaction $V$ is time independent (or its time-dependence is harmonic), the time evolution of the state vector $|\psi(t)\rangle$ can, in principle, be determined through the solution of the eigenvalue problem $H|\psi\rangle = \hbar\nu\,|\psi\rangle$. The roots of the determinant $\det(H - \hbar\nu I)$ give the eigenfrequencies $\nu_n$, or the eigenenergies $\hbar\nu_n$, of the total Hamiltonian $H$. The corresponding eigenvectors $|\nu_n\rangle$, which satisfy $H|\nu_n\rangle = \hbar\nu_n\,|\nu_n\rangle$, are the eigenstates of the system "dressed" by the interaction $V$. Then, the state of the system at any time $t \ge 0$ is given by

    |\psi(t)\rangle = \sum_n e^{-i\nu_n t}\, |\nu_n\rangle \langle\nu_n|\psi(0)\rangle ,    (1.123)

where $|\psi(0)\rangle$ is the initial state. For an isolated system, $V = 0$, we obviously have $\hbar\nu_n = E_n$ and $|\nu_n\rangle = |E_n\rangle$, where $E_n$ and $|E_n\rangle$ are the energy eigenvalues and the corresponding eigenstates of $H_0$. Then the above equation reduces to (1.119).
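The dressed-state solution (1.123) can be cross-checked against a direct numerical integration of the Schrödinger equation (1.117). A sketch for a hypothetical two-level system with a constant coupling $g$ (all numbers made up; $\hbar = 1$):

```python
import numpy as np

g = 0.2
H = np.array([[0.0, g], [g, 1.0]])        # H0 = diag(0, 1) plus coupling V
w, U = np.linalg.eigh(H)                  # dressed frequencies nu_n and states
psi0 = np.array([1.0, 0.0], complex)      # start in the lower bare state

def psi_dressed(t):
    # |psi(t)> = sum_n exp(-i nu_n t) |nu_n><nu_n|psi(0)>, eq. (1.123)
    return U @ (np.exp(-1j * w * t) * (U.conj().T @ psi0))

dt, steps = 1e-3, 5000                    # integrate (1.117) to t = 5
phi = psi0.copy()
for _ in range(steps):                    # midpoint (RK2) step
    k1 = -1j * H @ phi
    k2 = -1j * H @ (phi + 0.5 * dt * k1)
    phi = phi + dt * k2
diff = np.abs(phi - psi_dressed(dt * steps)).max()
print(diff)   # small: the two solutions agree
```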