An Example of Changing Coordinates in 2D

NOTE: THIS DOCUMENT IS OBSOLETE, PLEASE CHECK THE NEW VERSION: "Mathematics of the Discrete Fourier Transform (DFT), with Audio Applications --- Second Edition", by Julius O. Smith III, W3K Publishing, 2007, ISBN 978-0-9745607-4-8. - Copyright © 2017-09-28 by Julius O. Smith III - Center for Computer Research in Music and Acoustics (CCRMA), Stanford University

As a simple example, let's pick the following pair of new coordinate vectors in 2D:

$$ s_0 \triangleq [1, 1], \qquad s_1 \triangleq [1, -1] $$

These happen to be the DFT sinusoids for $N=2$ having frequencies $f_0=0$ (''dc'') and $f_1=f_s/2$ (half the sampling rate). (The sampled complex sinusoids of the DFT reduce to real numbers only for $N=1$ and $N=2$.) We already showed in an earlier example that these vectors are orthogonal. However, they are not orthonormal, since the norm is $\sqrt{2}$ in each case. Let's try projecting $x = [x_0, x_1]$ onto these vectors and see if we can reconstruct $x$ by summing the projections.
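
As a quick numerical check (a sketch using NumPy, not part of the original text), the sampled DFT sinusoids $s_k(n) = e^{j 2\pi k n / N}$ for $N=2$ do come out real and equal to the two vectors above:

    import numpy as np

    N = 2
    n = np.arange(N)
    for k in range(N):
        s_k = np.exp(2j * np.pi * k * n / N)   # DFT sinusoid for bin k
        print(k, s_k.real)                     # imaginary parts are ~0
    # 0 [1. 1.]
    # 1 [ 1. -1.]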

The projection of $x$ onto $s_0$ is by definition

$$ \mathcal{P}_{s_0}(x) \triangleq \frac{\langle x, s_0 \rangle}{\Vert s_0 \Vert^2}\, s_0 = \frac{x_0 + x_1}{2}\,[1, 1] = \left[\frac{x_0+x_1}{2},\; \frac{x_0+x_1}{2}\right] $$

Similarly, the projection of $x$ onto $s_1$ is

$$ \mathcal{P}_{s_1}(x) \triangleq \frac{\langle x, s_1 \rangle}{\Vert s_1 \Vert^2}\, s_1 = \frac{x_0 - x_1}{2}\,[1, -1] = \left[\frac{x_0-x_1}{2},\; \frac{x_1-x_0}{2}\right] $$

The sum of these projections is then

$$ \mathcal{P}_{s_0}(x) + \mathcal{P}_{s_1}(x) = \left[\frac{x_0+x_1}{2} + \frac{x_0-x_1}{2},\; \frac{x_0+x_1}{2} + \frac{x_1-x_0}{2}\right] = [x_0, x_1] = x $$


It worked!
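
As a numerical sanity check, here is a minimal sketch in NumPy (the `project` helper and the test vector are our own illustration, not from the text) showing that the two projections sum back to $x$:

    import numpy as np

    def project(x, s):
        # Projection of x onto s: (<x, s> / ||s||^2) s
        return (np.dot(x, s) / np.dot(s, s)) * s

    s0 = np.array([1.0,  1.0])
    s1 = np.array([1.0, -1.0])
    x  = np.array([2.0,  3.0])               # arbitrary test vector

    print(project(x, s0) + project(x, s1))   # [2. 3.] -- equals x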

Now consider another example:

$$ s_0 \triangleq [1, 1], \qquad s_1 \triangleq [-1, -1] $$


The projections of $x=[x_0,x_1]$ onto these vectors are

$$ \mathcal{P}_{s_0}(x) = \left[\frac{x_0+x_1}{2},\; \frac{x_0+x_1}{2}\right], \qquad \mathcal{P}_{s_1}(x) = \left[\frac{x_0+x_1}{2},\; \frac{x_0+x_1}{2}\right] $$


The sum of the projections is

$$ \mathcal{P}_{s_0}(x) + \mathcal{P}_{s_1}(x) = \left[x_0+x_1,\; x_0+x_1\right] \;\neq\; [x_0, x_1] $$


Something went wrong, but what? It turns out that a set of $N$ vectors can be used to reconstruct an arbitrary vector in $\mathbb{C}^N$ from its projections only if they are linearly independent. In general, a set of vectors is linearly independent if none of them can be expressed as a linear combination of the others in the set. What this means intuitively is that they must ''point in different directions'' in $N$-space. In this example, $s_1 = -s_0$, so that they lie along the same line in $N$-space. As a result, they are linearly dependent: one is a linear combination of the other.
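
Concretely, a small sketch (same hypothetical `project` helper as above) exposes the dependence: the matrix $[s_0\;\, s_1]$ has rank 1, both projections coincide, and their sum overshoots:

    import numpy as np

    def project(x, s):
        return (np.dot(x, s) / np.dot(s, s)) * s

    s0 = np.array([ 1.0,  1.0])
    s1 = np.array([-1.0, -1.0])                  # s1 = -s0
    x  = np.array([ 2.0,  3.0])

    print(np.linalg.matrix_rank(np.column_stack([s0, s1])))  # 1 (not 2)
    print(project(x, s0) + project(x, s1))       # [5. 5.] != [2. 3.]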

Consider this example:

$$ s_0 \triangleq [1, 1], \qquad s_1 \triangleq [0, 1] $$


These point in different directions, but they are not orthogonal. What happens now? The projections are

$$ \mathcal{P}_{s_0}(x) = \left[\frac{x_0+x_1}{2},\; \frac{x_0+x_1}{2}\right], \qquad \mathcal{P}_{s_1}(x) = [0,\; x_1] $$


The sum of the projections is

$$ \mathcal{P}_{s_0}(x) + \mathcal{P}_{s_1}(x) = \left[\frac{x_0+x_1}{2},\; \frac{x_0+x_1}{2} + x_1\right] = \left[\frac{x_0+x_1}{2},\; \frac{x_0+3x_1}{2}\right] \;\neq\; [x_0, x_1] $$


So, even though the vectors are linearly independent, the sum of projections onto them does not reconstruct the original vector. Since the sum of projections worked in the orthogonal case, and since orthogonality implies linear independence, we might conjecture at this point that the sum of projections onto a set of $N$ vectors reconstructs the original vector only when the vector set is orthogonal, and this is true, as we will show.
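
The same sketch, applied to this linearly independent but non-orthogonal pair, reproduces the mismatch numerically:

    import numpy as np

    def project(x, s):
        return (np.dot(x, s) / np.dot(s, s)) * s

    s0 = np.array([1.0, 1.0])
    s1 = np.array([0.0, 1.0])                    # <s0, s1> = 1, not 0
    x  = np.array([2.0, 3.0])

    print(project(x, s0) + project(x, s1))       # [2.5 5.5] != [2. 3.]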

It turns out that one can apply an orthogonalizing process, called Gram-Schmidt orthogonalization, to any $N$ linearly independent vectors in $\mathbb{C}^N$ so as to form an orthogonal set which will always work. This will be derived in Section 6.7.3.
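
As an illustrative sketch of the idea (a generic classical Gram-Schmidt loop, not the derivation given in Section 6.7.3), orthogonalizing the non-orthogonal pair from the last example makes the sum of projections work again:

    import numpy as np

    def project(x, s):
        return (np.dot(x, s) / np.dot(s, s)) * s

    def gram_schmidt(vectors):
        # Subtract from each vector its projections onto the
        # previously orthogonalized vectors.
        basis = []
        for v in vectors:
            for b in basis:
                v = v - project(v, b)
            basis.append(v)
        return basis

    s0 = np.array([1.0, 1.0])
    s1 = np.array([0.0, 1.0])
    t0, t1 = gram_schmidt([s0, s1])   # t0 = [1, 1], t1 = [-0.5, 0.5]

    x = np.array([2.0, 3.0])
    print(project(x, t0) + project(x, t1))   # [2. 3.] -- reconstruction works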

Obviously, there must be at least $N$ vectors in the set. Otherwise, there would be too few degrees of freedom to represent an arbitrary $x \in \mathbb{C}^N$. That is, given the $N$ coordinates $x_0, x_1, \ldots, x_{N-1}$ of $x$ (which are scale factors relative to the standard coordinate vectors $e_n$ in $\mathbb{C}^N$), we have to find at least $N$ coefficients of projection (which we may think of as coordinates relative to the new coordinate vectors $s_k$). If we compute only $M<N$ coefficients, then we would be mapping a set of $N$ complex numbers to $M<N$ numbers. Such a mapping cannot be invertible in general. It also turns out that $N$ linearly independent vectors is always sufficient. The next section will summarize the general results along these lines.
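
For example (a hypothetical two-point illustration with $M = 1 < N = 2$), projecting onto $s_0 = [1,1]$ alone maps the distinct vectors $[2,3]$ and $[3,2]$ to the same single coefficient, so no inverse mapping can exist:

    import numpy as np

    s0 = np.array([1.0, 1.0])
    # Two different vectors yield the identical projection coefficient:
    for x in (np.array([2.0, 3.0]), np.array([3.0, 2.0])):
        print(np.dot(x, s0) / np.dot(s0, s0))    # 2.5 both times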
