Derivation of Taylor Series Expansion with Remainder

GUIDE: Mathematics of the Discrete Fourier Transform (DFT) - Julius O. Smith III. Derivation of Taylor Series Expansion with Remainder


NOTE: THIS DOCUMENT IS OBSOLETE, PLEASE CHECK THE NEW VERSION: "Mathematics of the Discrete Fourier Transform (DFT), with Audio Applications --- Second Edition", by Julius O. Smith III, W3K Publishing, 2007, ISBN 978-0-9745607-4-8. - Copyright © 2017-09-28 by Julius O. Smith III - Center for Computer Research in Music and Acoustics (CCRMA), Stanford University




We repeat the derivation of the preceding section, but this time we treat the error term more carefully.

Again we want to approximate $f(x)$ with an $n$th-order polynomial:

$$ f(x) = f_0 + f_1 x + f_2 x^2 + \cdots + f_n x^n + R_{n+1}(x) $$

$R_{n+1}(x)$ is the ''remainder term'' which we will no longer assume is zero.

Our problem is to find $f_0, f_1, \ldots, f_n$ so as to minimize $R_{n+1}(x)$ over some interval $I$ containing $x$. There are many ''optimality criteria'' we could choose. The one that falls out naturally here is called ''Padé'' approximation. Padé approximation sets the error value and its first $n$ derivatives to zero at a single chosen point, which we take to be $x=0$. Since all $n+1$ ''degrees of freedom'' in the polynomial coefficients $f_i$ are used to set derivatives to zero at one point, the approximation is termed ''maximally flat'' at that point. Padé approximation comes up often in signal processing. For example, it is the sense in which Butterworth lowpass filters are optimal. (Their frequency responses are maximally flat at dc.) Also, Lagrange interpolation filters can be shown to be maximally flat at dc in the frequency domain.
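Maximal flatness at $x=0$ means the remainder vanishes to order $n$ there, i.e., $R_{n+1}(x) = O(x^{n+1})$, so halving $x$ should shrink the error by roughly $2^{n+1}$. A small numerical sketch of this (assuming, for illustration, $f(x) = e^x$ with $n=3$; the function choice is ours, not from the text):

```python
import math

def remainder(x, n=3):
    """R_{n+1}(x): f(x) minus its n-th order Taylor polynomial, for f = exp.

    Since every derivative of exp at 0 equals 1, the coefficients are
    f_k = 1/k!.
    """
    poly = sum(x**k / math.factorial(k) for k in range(n + 1))
    return math.exp(x) - poly

# R_{n+1}(x) = O(x^{n+1}) with n = 3, so halving x should shrink the
# error by roughly 2**4 = 16.
ratio = remainder(0.2) / remainder(0.1)
```

The observed ratio is close to 16, consistent with the error being dominated by the $x^4$ term near the expansion point.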

Setting $x=0$ in the above polynomial approximation produces

$$ f(0) = f_0 $$

where we have used the fact that the error is to be exactly zero at $x=0$.

Differentiating the polynomial approximation and setting $x=0$ gives

$$ f'(0) = f_1 $$

where we have used the fact that we also want the slope of the error to be exactly zero at $x=0$.

In the same way, we find

$$ f^{(k)}(0) = k!\, f_k $$

for $k=2,3,4,\ldots,n$, and the first $n$ derivatives of the remainder term are all zero. Solving these relations for the desired constants yields the $n$th-order Taylor series expansion of $f(x)$ about the point $x=0$,
$$ f(x) = f(0) + f'(0)x + \frac{f''(0)}{2!}x^2 + \cdots + \frac{f^{(n)}(0)}{n!}x^n + R_{n+1}(x) $$

as before, but now we better understand the remainder term.

From this derivation, it is clear that the approximation error (remainder term) is smallest in the vicinity of $x=0$. All degrees of freedom in the polynomial coefficients were devoted to minimizing the approximation error and its derivatives at $x=0$. As you might expect, the approximation error generally worsens as $x$ gets farther away from $0$.
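This growth of the error away from the expansion point is easy to observe numerically. A minimal sketch (again assuming $f(x) = e^x$, for which $f^{(k)}(0) = 1$ and hence $f_k = 1/k!$; the test points are arbitrary):

```python
import math

def taylor_exp(x, n):
    """n-th order Taylor polynomial of exp(x) about x = 0."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# The remainder is tiny near x = 0 and much larger farther away.
err_near = abs(math.exp(0.1) - taylor_exp(0.1, 4))
err_far  = abs(math.exp(1.0) - taylor_exp(1.0, 4))
```

With a 4th-order polynomial, the error at $x=0.1$ is several orders of magnitude smaller than at $x=1$.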

To obtain a more uniform approximation over some interval $I$ in $x$, other kinds of error criteria may be employed. This is classically called ''economization of series,'' but nowadays we may simply call it polynomial approximation under different error criteria. In Matlab, the function polyfit(x,y,n) will find the coefficients of a polynomial $p(x)$ of degree n that fits the data y over the points x in a least-squares sense. That is, it minimizes

$$ \sum_{i=1}^{n_x} \left| p(x_i) - y_i \right|^2 $$

where $n_x$ denotes the length of the data vectors x and y.
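The contrast between the two criteria can be sketched in Python, where numpy.polyfit plays the role of Matlab's polyfit (the interval, degree, and test function $e^x$ are our choices for illustration):

```python
import numpy as np

# Sample f(x) = e^x over an interval I = [-1, 1].
x = np.linspace(-1.0, 1.0, 50)
y = np.exp(x)

# Degree-4 least-squares fit over the whole interval,
# analogous to Matlab's polyfit(x, y, 4).
p = np.polyfit(x, y, 4)

# Taylor (maximally flat) coefficients 1/k!, highest power first,
# as required by np.polyval.
t = np.array([1/24, 1/6, 1/2, 1.0, 1.0])

# Worst-case error over the interval for each approximation.
err_ls = np.max(np.abs(np.polyval(p, x) - y))
err_taylor = np.max(np.abs(np.polyval(t, x) - y))
```

The Taylor polynomial wins near $x=0$ but loses badly at the interval endpoints; the least-squares fit spreads the error more uniformly and achieves a much smaller worst-case error over $I$.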
