Frobenius Norm.
Take a matrix A and compute its singular values.
When you add up the squares of each entry of A, and then separately add up
the squares of the singular values, you get the same number.
Accident?
In math there are no accidents!
The Frobenius norm of a matrix A, written $\|A\|_F$,
is defined as the square root of the sum of the squares of all its entries:
$\|A\|_F = \sqrt{\sum_{i,j} a_{ij}^2}$.
E.g. for a 2x2 matrix, $\|A\|_F = \sqrt{a_{11}^2 + a_{12}^2 + a_{21}^2 + a_{22}^2}$.
Theorem: If A has singular values $\sigma_1, \sigma_2, \ldots, \sigma_r$,
then $\|A\|_F = \sqrt{\sigma_1^2 + \sigma_2^2 + \cdots + \sigma_r^2}$.
Proof:
Let
$A = U \Sigma V^T$ (hanger)(stretcher)(aligner)
be an SVD of A.
First note that for any matrix
$M = [\, m_1 \; m_2 \; \cdots \; m_n \,]$
given in terms of its columns,
$\|M\|_F^2 = \|m_1\|^2 + \|m_2\|^2 + \cdots + \|m_n\|^2$,
since adding up the squares of all the entries column by column just adds up the
squared lengths of the columns. Adding them up row by row instead shows
$\|M\|_F = \|M^T\|_F$.
Now,
$\|A\|_F = \|U \Sigma V^T\|_F \overset{(1)}{=} \|\Sigma V^T\|_F = \|V \Sigma^T\|_F \overset{(2)}{=} \|\Sigma^T\|_F = \sqrt{\sigma_1^2 + \cdots + \sigma_r^2}$.
(1) Let
$m_1, m_2, \ldots, m_n$
be the columns of (stretcher)(aligner), that is, of $\Sigma V^T$.
Since the hanger matrix $U$ simply rotates the columns of $\Sigma V^T$
without changing their lengths, the two sides are equal.
(2) The
$V$ matrix simply rotates the columns of $\Sigma^T$
without changing their lengths, and those columns have lengths
$\sigma_1, \ldots, \sigma_r$ (and 0).
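Here's a quick numerical check of the theorem. This is a sketch using NumPy; the matrix below is just an illustrative choice, not the example from the text:

```python
import numpy as np

# An arbitrary matrix chosen for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Frobenius norm straight from the definition:
# the square root of the sum of the squares of all the entries.
frob_from_entries = np.sqrt(np.sum(A**2))

# Frobenius norm from the singular values.
sigma = np.linalg.svd(A, compute_uv=False)
frob_from_sigmas = np.sqrt(np.sum(sigma**2))

print(frob_from_entries, frob_from_sigmas)
assert np.isclose(frob_from_entries, frob_from_sigmas)
```

The two numbers agree, just as the theorem promises.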
Rank One Decomposition
Suppose the SVD for a matrix A
is
$A = U \Sigma V^T$.
You've seen that the reduced SVD is
$A = U_r \Sigma_r V_r^T$, keeping only the first r columns of U and V and the top-left $r \times r$ block of $\Sigma$.
Here's one more way to write A, called the rank one decomposition:
$A = \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T + \cdots + \sigma_r u_r v_r^T$.
How do you know this equals A?
As usual, it's agreement on a basis: check that both sides do the same thing
to the orthonormal basis $v_1, v_2, \ldots, v_n$.
Here's what happens when the rank one decomposition hits $v_j$:
$(\sigma_1 u_1 v_1^T + \cdots + \sigma_r u_r v_r^T)\, v_j = \sigma_1 u_1 (v_1^T v_j) + \cdots + \sigma_r u_r (v_r^T v_j)$ (Linearity)
$= \sigma_j u_j$ (Since
$v_1, \ldots, v_r$
is orthonormal, $v_i^T v_j = 0$ for $i \neq j$ and $v_j^T v_j = 1$)
$= A v_j$ (Fundamental Equation: $A v_j = \sigma_j u_j$)
Since the rank one decomposition agrees with A on the basis $v_1, \ldots, v_n$,
it must equal A.
The rank one decomposition gets its name from the fact that each of
the summands
$\sigma_i u_i v_i^T$
is a rank one matrix: every one of its columns is a multiple of the single vector $u_i$.
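You can watch the rank one decomposition rebuild a matrix numerically. A sketch in NumPy, with an illustrative matrix (not one from the text):

```python
import numpy as np

# Illustrative matrix; any matrix works.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

U, sigma, Vt = np.linalg.svd(A, full_matrices=False)

# Each summand sigma_i * u_i * v_i^T is a rank one matrix.
summands = [sigma[i] * np.outer(U[:, i], Vt[i, :])
            for i in range(len(sigma))]
for S in summands:
    assert np.linalg.matrix_rank(S) == 1

# The summands add back up to A.
assert np.allclose(sum(summands), A)
```

Each outer product $u_i v_i^T$ has every column a multiple of $u_i$, which is exactly why each summand has rank one.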
Lower Rank Approximations
Here's a 3x3 matrix .
When you compute the determinant of A you find Det[A]=16, so you know
that A has rank three.
This big fat juicy determinant tells you that for every y, the
system A x = y has the unique solution $x = A^{-1} y$.
Now check out the matrix B,
and note that A is approximately equal to B.
In fact you can compute $\|A - B\|_F = 0.01$.
But when you check the determinant of B you find Det[B]=0.
A zero determinant tells you that for some y's, B x = y, will
have infinitely many solutions and for other y's there'll be no
solutions at all.
Is there a way to predict that the seemingly well-behaved matrix A has
such a nasty neighbor B so nearby?
You bet.
Here's an SVD for A
.
The three non-zero singular values tell you that the matrix has rank
3.
But the value 0.01 is so small that A is nearly a rank two matrix.
In fact the matrix B was created by setting that last singular value
to zero.
Now the rank one decomposition of A is
$A = \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T + \sigma_3 u_3 v_3^T$
and the rank one decomposition of B is
$B = \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T$.
So
$A - B = \sigma_3 u_3 v_3^T$
and $\|A - B\|_F = \sigma_3 = 0.01$.
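You can replay this construction numerically. A sketch in NumPy, using an illustrative invertible 3x3 matrix as a stand-in for the A in the text: zero out the smallest singular value and watch an invertible matrix acquire a singular neighbor at distance exactly $\sigma_3$.

```python
import numpy as np

# Illustrative invertible 3x3 matrix (a stand-in for the A in the text).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

U, sigma, Vt = np.linalg.svd(A)

# Build B by setting the smallest singular value to zero.
sigma_b = sigma.copy()
sigma_b[-1] = 0.0
B = U @ np.diag(sigma_b) @ Vt

# B is singular, and A - B = sigma_3 * u_3 * v_3^T,
# so the distance from A to B is exactly sigma_3.
assert abs(np.linalg.det(B)) < 1e-10
assert np.isclose(np.linalg.norm(A - B, 'fro'), sigma[-1])
```
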
So you see that if A has a small singular value, then you can
get a lower rank matrix B close to A by setting the small
singular value to zero.
The next theorem says that this is the best way to find nearby neighbors
of lower rank.
Theorem: Suppose A is an mxn matrix with singular
values $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > 0$.
If A has rank r, then for any k<r, a best rank k approximation
to A is $A_k = \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T + \cdots + \sigma_k u_k v_k^T$.
For a proof of this theorem see page 414 of Steven J. Leon's excellent
book, Linear Algebra with Applications, 5th Edition.
Comments:
Best means that
$\|A - A_k\|_F$ is the minimum of $\|A - M\|_F$ over all rank k matrices M.
Note that
$\|A - A_k\|_F = \sqrt{\sigma_{k+1}^2 + \sigma_{k+2}^2 + \cdots + \sigma_r^2}$.
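Here's a sketch of the theorem in action, using NumPy and an illustrative matrix (not the 3x3 example above): build $A_k$ by truncating the SVD, then check that the error is exactly the tail of the singular values.

```python
import numpy as np

def best_rank_k(A, k):
    """Best rank k approximation: keep the k largest singular
    values and their singular vectors, drop the rest."""
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(sigma[:k]) @ Vt[:k, :]

# Illustrative matrix chosen for this sketch.
A = np.array([[4.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])

sigma = np.linalg.svd(A, compute_uv=False)
k = 2
A_k = best_rank_k(A, k)

# The approximation error is exactly the tail of the singular values.
err = np.linalg.norm(A - A_k, 'fro')
tail = np.sqrt(np.sum(sigma[k:]**2))
assert np.isclose(err, tail)
assert np.linalg.matrix_rank(A_k) == k
```
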
Exercises
1. Compute
2. The first two singular values of
are and .
What are and ?