


Gram-Schmidt Orthogonalization



Theorem: Given a set of $N$ linearly independent vectors $\underline{s}_0, \underline{s}_1, \ldots, \underline{s}_{N-1}$ from ${\bf C}^N$, we can construct an orthonormal set $\underline{\tilde{s}}_0, \underline{\tilde{s}}_1, \ldots, \underline{\tilde{s}}_{N-1}$ which are linear combinations of the original set and which span the same space.

Proof: We prove the theorem by constructing the desired orthonormal set $\{\underline{\tilde{s}}_k\}$ sequentially from the original set $\{\underline{s}_k\}$. This procedure is known as Gram-Schmidt orthogonalization.

  1. Set $\underline{\tilde{s}}_0 \triangleq \frac{\underline{s}_0}{\left\Vert\underline{s}_0\right\Vert}$.
  2. Define $\underline{y}_1$ as $\underline{s}_1$ minus the projection of $\underline{s}_1$ onto $\underline{\tilde{s}}_0$:

    $$\underline{y}_1 \triangleq \underline{s}_1 - \left<\underline{s}_1,\underline{\tilde{s}}_0\right>\underline{\tilde{s}}_0$$

    The vector $\underline{y}_1$ is orthogonal to $\underline{\tilde{s}}_0$ by construction. (We subtracted out the part of $\underline{s}_1$ that wasn't orthogonal to $\underline{\tilde{s}}_0$.)
  3. Set $\underline{\tilde{s}}_1 \triangleq \frac{\underline{y}_1}{\left\Vert\underline{y}_1\right\Vert}$ (i.e., normalize the result of the preceding step).
  4. Define $\underline{y}_2$ as $\underline{s}_2$ minus the projection of $\underline{s}_2$ onto $\underline{\tilde{s}}_0$ and $\underline{\tilde{s}}_1$:

    $$\underline{y}_2 \triangleq \underline{s}_2 - \left<\underline{s}_2,\underline{\tilde{s}}_0\right>\underline{\tilde{s}}_0 - \left<\underline{s}_2,\underline{\tilde{s}}_1\right>\underline{\tilde{s}}_1$$

  5. Normalize: $\underline{\tilde{s}}_2 \triangleq \frac{\underline{y}_2}{\left\Vert\underline{y}_2\right\Vert}$.
  6. Continue this process until $\underline{\tilde{s}}_{N-1}$ has been defined. (A minimal code sketch of the complete procedure follows this list.)
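As an illustration of the steps above (not part of the original text), the following minimal NumPy sketch orthonormalizes the columns of a matrix whose columns are assumed to be the linearly independent vectors $\underline{s}_0, \ldots, \underline{s}_{N-1}$; the function name gram_schmidt and the column-vector convention are choices made for this example.

    import numpy as np

    def gram_schmidt(S):
        """Orthonormalize the (assumed linearly independent) columns of S.

        S is an (M, N) real or complex array whose columns play the role of
        s_0, ..., s_{N-1} in the text.  Returns an (M, N) array whose columns
        are orthonormal and span the same space.
        """
        S = np.asarray(S, dtype=complex)
        M, N = S.shape
        Q = np.zeros((M, N), dtype=complex)
        for k in range(N):
            # Start from the next linearly independent vector s_k ...
            y = S[:, k].copy()
            # ... subtract its projection onto every basis vector accepted so far
            # (np.vdot conjugates its first argument, giving <s_k, q_j> here) ...
            for j in range(k):
                y -= np.vdot(Q[:, j], S[:, k]) * Q[:, j]
            # ... and normalize the remainder to obtain the next orthonormal vector.
            Q[:, k] = y / np.linalg.norm(y)
        return Q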

The Gram-Schmidt orthogonalization procedure will construct an orthonormal basis from any set of $N$ linearly independent vectors. Obviously, by skipping the normalization step, we could also form simply an orthogonal basis. The key ingredient of this procedure is that each new orthonormal basis vector is obtained by subtracting out the projection of the next linearly independent vector onto the vectors accepted so far in the set. We may say that each new linearly independent vector $\underline{s}_k$ is projected onto the subspace spanned by the vectors $\{\underline{\tilde{s}}_0, \ldots, \underline{\tilde{s}}_{k-1}\}$, and any nonzero projection in that subspace is subtracted out of $\underline{s}_k$ to make the new vector orthogonal to the entire subspace. In other words, we retain only that portion of each new vector $\underline{s}_k$ which points along a new dimension. The first direction is arbitrary and is determined by whichever vector we choose first ($\underline{s}_0$ here). The next vector is forced to be orthogonal to the first, the third is forced to be orthogonal to the first two, and so on.
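Continuing the hypothetical sketch above, one quick way to confirm the two claimed properties (orthonormality and an unchanged span) is a numerical check; the random test vectors below are purely illustrative.

    # Three random, almost surely linearly independent vectors in C^4.
    rng = np.random.default_rng(0)
    S = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
    Q = gram_schmidt(S)
    # Orthonormality: Q^H Q should equal the identity matrix.
    print(np.allclose(Q.conj().T @ Q, np.eye(3)))         # expected: True
    # Same span: stacking S and Q side by side should not increase the rank.
    print(np.linalg.matrix_rank(np.hstack([S, Q])) == 3)  # expected: True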

This chapter can be considered an introduction to some of the most important concepts from linear algebra. The student is invited to pursue further reading in any textbook on linear algebra, such as [3].

