Todd Will
UW-La Crosse

Matrix Norm and Rank One Decomposition

Frobenius Norm

Start with a matrix A and compute its singular values.

When you add up the squares of the entries of A, you get the same number as when you add up the squares of the singular values.

In math there are no accidents!

The Frobenius norm of a matrix A, written ||A||_F, is defined as the square root of the sum of the squares of all its entries.

E.g. for the 2x2 matrix with rows (1, 2) and (3, 4), the Frobenius norm is sqrt(1 + 4 + 9 + 16) = sqrt(30).

Theorem: If A has singular values σ1, σ2, ..., σn, then ||A||_F = sqrt(σ1^2 + σ2^2 + ... + σn^2).


Let A = (hanger)(stretcher)(aligner) be an SVD of A.

First note that for any matrix M = [w1 w2 ... wn] given in terms of its columns,

||M||_F^2 = |w1|^2 + |w2|^2 + ... + |wn|^2

since adding up the squares of the entries of M column by column just adds up the squared lengths of the columns.

Now compute

||A||_F = ||(hanger)(stretcher)(aligner)||_F
        = ||(stretcher)(aligner)||_F      (1)
        = ||(stretcher)||_F               (2)
        = sqrt(σ1^2 + σ2^2 + ... + σn^2)

The last equality holds because the only nonzero entries of the stretcher are the singular values on its diagonal.

(1) Let w1, ..., wn be the columns of (stretcher)(aligner). Since the hanger matrix simply rotates the columns of (stretcher)(aligner) without changing their lengths, the two sides are equal.

(2) Transposing a matrix doesn't change its Frobenius norm, and the (aligner)^T matrix simply rotates the columns of (stretcher)^T without changing their lengths.
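As a quick numerical check of the theorem (using a made-up 3x3 matrix, since the text's example lived in the lost figures), numpy lets you compute the Frobenius norm both ways:

```python
import numpy as np

# A hypothetical 3x3 matrix, not the one from the text.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Frobenius norm straight from the definition:
# the square root of the sum of the squares of all the entries.
frob_from_entries = np.sqrt((A**2).sum())

# The same number from the singular values.
sigma = np.linalg.svd(A, compute_uv=False)
frob_from_sigmas = np.sqrt((sigma**2).sum())

print(frob_from_entries, frob_from_sigmas)
assert np.isclose(frob_from_entries, frob_from_sigmas)
```

No accidents: the two computations agree to machine precision, which is exactly what the proof above promises.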

Rank One Decomposition

Suppose the SVD for a matrix A is

A = [u1 u2 ... un] diag(σ1, σ2, ..., σn) [v1 v2 ... vn]^T

You've seen that the reduced SVD keeps only the ui, σi, and vi that go with the nonzero singular values.

Here's one more way to write A, called the rank one decomposition:

A = σ1 u1 v1^T + σ2 u2 v2^T + ... + σn un vn^T
How do you know this sum equals A?

As usual, it's the agreement on the basis v1, v2, ..., vn.

Here's what happens when the rank one decomposition hits vj:

(σ1 u1 v1^T + ... + σn un vn^T) vj

= σ1 u1 (v1^T vj) + ... + σn un (vn^T vj)    (Linearity)

= σj uj    (Since v1, ..., vn is orthonormal)

= A vj    (Fundamental Equation)

Since the rank one decomposition agrees with A on v1, ..., vn, it must equal A.

The rank one decomposition gets its name from the fact that each of the summands

σi ui vi^T

is a rank one matrix.
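The decomposition is easy to check numerically. Here's a sketch with a hypothetical matrix, using numpy's convention A = U diag(σ) Vt, so the ui are the columns of U and the vi^T are the rows of Vt:

```python
import numpy as np

# A hypothetical invertible 3x3 matrix (the text's example was in the lost figures).
A = np.array([[4.0, 0.0, 1.0],
              [2.0, 3.0, 0.0],
              [1.0, 1.0, 2.0]])

U, sigma, Vt = np.linalg.svd(A)

# Add up the rank one pieces sigma_i * u_i * v_i^T.
rank_one_sum = sum(sigma[i] * np.outer(U[:, i], Vt[i])
                   for i in range(len(sigma)))

# The sum rebuilds A, and each piece really is a rank one matrix.
assert np.allclose(rank_one_sum, A)
assert all(np.linalg.matrix_rank(np.outer(U[:, i], Vt[i])) == 1
           for i in range(len(sigma)))
```

Each `np.outer(U[:, i], Vt[i])` is a matrix whose columns are all multiples of ui, which is why every summand has rank one.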

Lower Rank Approximations

Here's a 3x3 matrix A.

When you compute the determinant of A you find Det[A]=16, so you know that A has rank three. 

This big fat juicy determinant tells you that for every y, the system A x = y has the unique solution x = A^(-1) y.

Now check out the matrix B, and note that A is approximately equal to B.

In fact you can compute ||A - B||_F = 0.01.

But when you check the determinant of B you find Det[B]=0. 

A zero determinant tells you that for some y's the system B x = y will have infinitely many solutions, and for other y's there'll be no solutions at all.

Is there a way to predict that the seemingly well-behaved matrix A has such a nasty neighbor B so nearby? 

You bet. 

Here's an SVD for A.


The three non-zero singular values tell you that the matrix has rank 3. 

But the value 0.01 is so small that A is nearly a rank two matrix. 

In fact the matrix B was created by setting that last singular value to zero. 


Now the rank one decomposition of A is

σ1 u1 v1^T + σ2 u2 v2^T + 0.01 u3 v3^T

and the rank one decomposition of B is

σ1 u1 v1^T + σ2 u2 v2^T

So A - B = 0.01 u3 v3^T and ||A - B||_F = 0.01.

So you see that if A has a small singular value, then you can get a lower rank matrix B close to A by setting the small singular value to zero. 
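Here's the recipe sketched in numpy, with a made-up matrix built to have singular values 5, 2, and 0.01 (the text's A and B were in the lost figures, so these orthogonal factors and values are assumptions for illustration):

```python
import numpy as np

# Build a hypothetical full-rank matrix with one deliberately tiny singular value.
U, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))
V, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(3, 3)))
sigma = np.array([5.0, 2.0, 0.01])
A = U @ np.diag(sigma) @ V.T

# Zero out the tiny singular value to get a nearby rank two neighbor B.
sigma_b = sigma.copy()
sigma_b[2] = 0.0
B = U @ np.diag(sigma_b) @ V.T

print(np.linalg.matrix_rank(A))      # 3: all three singular values are nonzero
print(np.linalg.matrix_rank(B))      # 2: one singular value was zeroed out
print(np.linalg.norm(A - B, 'fro'))  # the dropped singular value, 0.01
```

The distance from A to its nasty neighbor B is exactly the singular value you threw away.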

The next theorem says that this is the best way to find nearby neighbors of lower rank. 

Theorem: Suppose A is an mxn matrix with singular values σ1 ≥ σ2 ≥ ... ≥ σn. If A has rank r, then for any k < r, a best rank k approximation to A is A_k = σ1 u1 v1^T + σ2 u2 v2^T + ... + σk uk vk^T.

For a proof of this theorem see page 414 of Steven J. Leon's excellent book, Linear Algebra with Applications, 5th Edition. 


Best means that ||A - A_k||_F is minimized over all rank k matrices.

Note that

||A - A_k||_F = ||σ(k+1) u(k+1) v(k+1)^T + ... + σr ur vr^T||_F = sqrt(σ(k+1)^2 + ... + σr^2)
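A quick numerical check of this error formula, on a hypothetical random 4x4 matrix:

```python
import numpy as np

# A hypothetical 4x4 matrix, assumed here just to test the error formula.
rng = np.random.default_rng(42)
A = rng.normal(size=(4, 4))
U, sigma, Vt = np.linalg.svd(A)

k = 2
# Best rank k approximation: keep the first k rank one pieces.
A_k = U[:, :k] @ np.diag(sigma[:k]) @ Vt[:k, :]

# The approximation error is the norm of the discarded singular values.
err = np.linalg.norm(A - A_k, 'fro')
tail = np.sqrt((sigma[k:]**2).sum())
assert np.isclose(err, tail)
```

This is just the Frobenius norm theorem applied to A - A_k, whose rank one decomposition consists of exactly the discarded pieces.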

1. Compute [Graphics:normgr51.gif] 

2. The first two singular values of [Graphics:normgr52.gif] are [Graphics:normgr53.gif] and [Graphics:normgr54.gif]. What are [Graphics:normgr55.gif] and [Graphics:normgr56.gif]?

© 1999 Todd Will
Last Modified: 05-Mar-1999