Chapter 9
Eigenvalue Bounds
9.1 Introduction and Notation
9.1.1 Eigenvalues and Singular Values
9.1.2 Vector and Matrix Norms
9.1.3 Some Examples of Induced Matrix Norms
9.2 Overview
9.3 Cimmino’s Algorithm
9.4 The Landweber Algorithms
9.4.1 Finding the Optimum γ
9.4.2 The Projected Landweber Algorithm
9.5 Some Upper Bounds for L
9.5.1 Earlier Work
9.5.2 Our Basic Eigenvalue Inequality
9.5.3 Another Upper Bound for L
9.6 Simultaneous Iterative Algorithms
9.6.1 The General Simultaneous Iterative Scheme
9.6.2 The SIRT Algorithm
9.6.3 The CAV Algorithm
9.6.4 The Landweber Algorithm
9.6.5 The Simultaneous DROP Algorithm
9.7 Block-Iterative Algorithms
9.7.1 The Block-Iterative Landweber Algorithm
9.7.2 The BICAV Algorithm
9.7.3 A Block-Iterative CARP1
9.7.4 Using Sparseness
9.8 Exercises
As we discussed previously, a number of iterative methods that involve a matrix A place upper bounds on the step-length parameter in terms of the spectral radius of the matrix A†A. Since A is often quite large, finding decent estimates of ρ(A†A) without having to calculate A†A becomes important. In this chapter we obtain upper bounds on the spectral radius of positive-definite matrices and use these bounds in the selection of parameters in several iterative methods.
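The point of estimating ρ(A†A) without calculating A†A can be illustrated with power iteration, which needs only matrix-vector products with A and A†. This is a sketch under my own assumptions (NumPy available, a small test matrix of my choosing), not one of the bounds developed later in this chapter:

```python
import numpy as np

def spectral_radius_AtA(A, iters=200, seed=0):
    """Estimate rho(A^dagger A) by power iteration, using only
    matrix-vector products with A and its conjugate transpose,
    so the product A^dagger A is never formed explicitly."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    for _ in range(iters):
        y = A.conj().T @ (A @ x)   # one application of A^dagger A
        x = y / np.linalg.norm(y)
    # Rayleigh quotient of the unit vector x estimates the largest eigenvalue
    return x @ (A.conj().T @ (A @ x))

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
est = spectral_radius_AtA(A)
exact = max(np.linalg.eigvalsh(A.T @ A))
print(abs(est - exact) < 1e-8)  # True
```

Each iteration costs two matrix-vector products, so the N by N matrix A†A is never stored; this is the reason such estimates remain practical when A is large.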
124 Iterative Optimization in Inverse Problems
9.1 Introduction and Notation
In this section we review basic definitions and properties of eigenvalues and matrix norms.
9.1.1 Eigenvalues and Singular Values
Let A be a complex M by N matrix. It is often helpful to know how large the two-norm ‖Ax‖₂ can be, relative to ‖x‖₂; that is, we want to find a constant a so that

‖Ax‖₂/‖x‖₂ ≤ a,

for all x ≠ 0. We can reformulate the problem by asking how large ‖Au‖₂² can be, subject to ‖u‖₂ = 1. Using Lagrange multipliers, we discover that a unit vector u that maximizes ‖Au‖₂² has the property that

A†Au = λu,

for some constant λ. This leads to the more general problem discussed in this section.
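To make the Lagrange-multiplier conclusion concrete, here is a small numerical check (assuming NumPy; the matrix is a hypothetical example of mine): the maximizing unit vector can be read off from the eigendecomposition of the Hermitian matrix A†A, and the maximum value of ‖Au‖₂² equals the corresponding eigenvalue λ.

```python
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 1.0]])

# Eigendecomposition of the Hermitian matrix A^dagger A;
# np.linalg.eigh returns eigenvalues in ascending order.
vals, vecs = np.linalg.eigh(A.T @ A)
lam, u = vals[-1], vecs[:, -1]      # largest eigenpair

# The unit vector u achieving the maximum of ||Au||_2^2
# satisfies A^dagger A u = lam * u, and the maximum value is lam.
print(np.allclose(A.T @ A @ u, lam * u))          # True
print(np.isclose(np.linalg.norm(A @ u)**2, lam))  # True
```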
Definition 9.1 Given an N by N complex matrix S, we say that a complex number λ is an eigenvalue of S if there is a nonzero vector u with Su = λu. The column vector u is then called an eigenvector of S associated with eigenvalue λ.
Clearly, if u is an eigenvector of S, then so is cu, for any constant c ≠ 0; therefore, it is common to choose eigenvectors to have norm equal to one.
Definition 9.2 The spectral radius of S, denoted ρ(S), is the largest value of |λ|, where λ denotes an eigenvalue of S.
Definition 9.3 A Hermitian matrix Q is said to be nonnegative-definite if all the eigenvalues of Q are nonnegative, and positive-definite if all the eigenvalues are positive.
Definition 9.4 Let A be an arbitrary matrix. A nonnegative number γ is a singular value for A if γ² is an eigenvalue of both A†A and AA†.
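Definition 9.4 can be checked numerically on a small non-square example (a sketch assuming NumPy; the matrix is my own): the squared singular values appear among the eigenvalues of both products, with A†A carrying an extra zero eigenvalue when A has more columns than rows.

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0], [0.0, 3.0, 0.0]])  # 2 by 3, not square

gammas = np.linalg.svd(A, compute_uv=False)       # singular values of A

eig_AtA = np.sort(np.linalg.eigvalsh(A.T @ A))    # eigenvalues of A^dagger A
eig_AAt = np.sort(np.linalg.eigvalsh(A @ A.T))    # eigenvalues of A A^dagger

# gamma^2 appears among the eigenvalues of both products;
# A^dagger A has one extra zero eigenvalue here (3 columns vs 2 rows).
print(np.allclose(np.sort(gammas**2), eig_AAt))                  # True
print(np.allclose(eig_AtA, np.concatenate(([0.0], eig_AAt))))    # True
```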
We present now a number of assertions without proof. Details can be
found in almost any text on applied or computational linear algebra; see,
for example, [174].
• For any square matrix S, the trace of S is the sum of its eigenvalues.
• For any square matrix S we have ρ(S)² = ρ(S²).
• The eigenvalues of a Hermitian matrix H are real.
• A Hermitian matrix Q is a nonnegative-definite matrix if and only if there is another matrix C, not necessarily square, such that Q = C†C.
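The assertions above are easy to confirm numerically on small examples (a sketch assuming NumPy; the matrices are my own choices, not examples from the text):

```python
import numpy as np

S = np.array([[2.0, 1.0], [0.0, 3.0]])                 # square, eigenvalues 2 and 3
H = np.array([[2.0, 1.0 - 1.0j], [1.0 + 1.0j, 0.0]])   # Hermitian
C = np.array([[1.0, 2.0], [0.0, 1.0], [3.0, 0.0]])     # not square

# trace(S) equals the sum of the eigenvalues of S
print(np.isclose(np.trace(S), np.sum(np.linalg.eigvals(S))))

# rho(S)^2 = rho(S^2)
rho = lambda M: max(abs(np.linalg.eigvals(M)))
print(np.isclose(rho(S)**2, rho(S @ S)))

# eigenvalues of a Hermitian matrix are real
print(np.allclose(np.linalg.eigvals(H).imag, 0.0))

# Q = C^dagger C is nonnegative-definite: all eigenvalues >= 0
print(all(np.linalg.eigvalsh(C.T @ C) >= -1e-12))
```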
9.1.2 Vector and Matrix Norms
We consider now the most common norms on the space C
J
.These
notions apply equally to R
J
.
The 1norm on C
J
is deﬁned by
x
1
=
J
j=1
x
j
. (9.1)
The ∞norm on C
J
is deﬁned by
x
∞
=max{x
j
j =1, ..., J}. (9.2)
For any p ≥ 1, the pnorm is deﬁned by
x
p
=
J
j=1
x
j

p
1/p
. (9.3)
The 2norm, also called the Euclidean norm, is the most commonly used
norm on C
J
.Itisthepnorm for p = 2 and is the one that comes from the
inner product:
x
2
=
J
j=1
x
j

2
=
x, x =
√
x
†
x. (9.4)
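These norm definitions translate directly into code; the following pure-Python sketch of Equations (9.1) through (9.3) uses function names of my own choosing:

```python
import math  # not strictly needed here; abs() and ** suffice

def pnorm(x, p):
    """The p-norm of Equation (9.3); p = 1 and p = 2 recover
    the 1-norm (9.1) and the Euclidean norm (9.4)."""
    return sum(abs(xj)**p for xj in x)**(1.0 / p)

def infnorm(x):
    """The infinity-norm of Equation (9.2)."""
    return max(abs(xj) for xj in x)

x = [3.0, -4.0]
print(pnorm(x, 1))   # 7.0  (1-norm)
print(pnorm(x, 2))   # 5.0  (2-norm, Euclidean)
print(infnorm(x))    # 4.0
```

Note that pnorm(x, p) approaches infnorm(x) as p grows, which is one way to motivate the ∞-norm notation.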
Any matrix can be turned into a vector by vectorization. Therefore, we can define a norm for any matrix by simply vectorizing the matrix and taking a norm of the resulting vector; the 2-norm of the vectorized matrix is the Frobenius norm of the matrix itself. Such norms for matrices may not be compatible with the role of a matrix as representing a linear transformation. For that reason, we consider norms on matrices that are induced by the norms of the vectors on which the matrices operate.
Definition 9.5 Let A be an M by N complex matrix. A norm on A, denoted ‖A‖, is said to be compatible with given norms on C^N and C^M if ‖Ax‖ ≤ ‖A‖ ‖x‖, for every x in C^N.
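As an example of compatibility, the Frobenius norm mentioned above is compatible with the Euclidean norms on C^N and C^M, since applying the Cauchy-Schwarz inequality to each row of A gives ‖Ax‖₂ ≤ ‖A‖_F ‖x‖₂. A quick numerical check (assuming NumPy; the random matrix and vectors are my own test data):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))

# The Frobenius norm is compatible with the Euclidean vector norms:
# ||Ax||_2 <= ||A||_F * ||x||_2 for every x.
fro = np.linalg.norm(A, 'fro')
ok = all(
    np.linalg.norm(A @ x) <= fro * np.linalg.norm(x) + 1e-12
    for x in rng.standard_normal((100, 4))
)
print(ok)  # True
```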