
## Signal Metrics

This section defines some useful functions of signals.

The *mean* of a signal $x$ (more precisely the *sample mean*) is defined as its average value:

$$
\mu_x \triangleq \frac{1}{N}\sum_{n=0}^{N-1} x_n
$$

The *total energy* of a signal is defined as the sum of squared moduli:

$$
\mathcal{E}_x \triangleq \sum_{n=0}^{N-1} \left|x_n\right|^2
$$
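As a quick numerical sketch of the two definitions so far (the signal values below are arbitrary illustration data, not from the text):

```python
# Arbitrary illustration signal (not from the text).
x = [1.0, -2.0, 3.0, -4.0]
N = len(x)

mean_x = sum(x) / N                      # sample mean: average value
energy_x = sum(abs(v) ** 2 for v in x)   # total energy: sum of squared moduli

print(mean_x)    # -0.5
print(energy_x)  # 30.0
```

Using `abs(v) ** 2` rather than `v ** 2` makes the same code work for complex-valued signals, where the summand is the squared modulus.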

Energy is the “ability to do work.” In physics, energy and work are in units of “force times distance,” “mass times velocity squared,” or other equivalent combinations of units. The energy of a *pressure wave* is the integral over time of the squared pressure divided by the wave impedance of the medium in which the wave is traveling. The energy of a *velocity wave* is the integral over time of the squared velocity times the wave impedance. In audio work, a signal is typically a list of *pressure samples* derived from a microphone signal, or it might be samples of *force* from a piezoelectric transducer, *velocity* from a magnetic guitar pickup, and so on. In all of these cases, the total physical energy associated with the signal is proportional to the sum of squared signal samples. (Physical connections in signal processing are explored more deeply in Music 421.)

The *average power* of a signal is defined as the energy per sample:

$$
\mathcal{P}_x \triangleq \frac{\mathcal{E}_x}{N} = \frac{1}{N}\sum_{n=0}^{N-1} \left|x_n\right|^2
$$

Another common description, when $x$ is real, is the “mean square.” When $x$ is a complex sinusoid, i.e., $x(n) = A e^{j(\omega n T + \phi)}$, then $\mathcal{P}_x = |A|^2$; in other words, for complex sinusoids, the average power equals the *instantaneous power*, which is the amplitude squared. Power is always in physical units of energy per unit time. It therefore makes sense to define the average signal power as the total signal energy divided by its length. We normally work with signals which are functions of time. However, if the signal happens instead to be a function of distance (e.g., samples of displacement along a vibrating string), then the “power” as defined here still has the interpretation of a *spatial energy density*. Power, in contrast, is a temporal energy density.

The *root mean square* (RMS) level of a signal $x$ is simply $\sqrt{\mathcal{P}_x}$. However, note that in practice (especially in audio work) an RMS level may be computed after subtracting out the mean value. Here, we call that the *variance*.

The *variance* (more precisely the *sample variance*) of the signal $x$ is defined as the power of the signal with its sample mean removed:

$$
\sigma_x^2 \triangleq \frac{1}{N}\sum_{n=0}^{N-1} \left|x_n - \mu_x\right|^2
$$
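Continuing the same sketch (again with arbitrary example data), the average power, RMS level, and sample variance follow directly from their definitions, and the variance can be cross-checked against the “mean square minus the mean squared” identity for real signals:

```python
# Arbitrary illustration signal (not from the text).
x = [1.0, -2.0, 3.0, -4.0]
N = len(x)

mean_x = sum(x) / N
power_x = sum(abs(v) ** 2 for v in x) / N          # average power: energy per sample
rms_x = power_x ** 0.5                             # RMS level (mean not removed)
var_x = sum(abs(v - mean_x) ** 2 for v in x) / N   # sample variance: power with mean removed

print(power_x)                # 7.5
print(var_x)                  # 7.25
print(power_x - mean_x ** 2)  # 7.25 (mean square minus the mean squared)
```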

It is quick to show that, for real signals, we have

$$
\sigma_x^2 = \mathcal{P}_x - \mu_x^2,
$$

which is the “mean square minus the mean squared.” We think of the variance as the power of the non-constant signal components (i.e., everything but dc). The terms “sample mean” and “sample variance” come from the field of *statistics*, particularly the theory of *stochastic processes*. The field of *statistical signal processing* [16] is firmly rooted in statistical topics such as “probability,” “random variables,” “stochastic processes,” and “time series analysis.” In this course, we will only touch lightly on a few elements of statistical signal processing in a self-contained way.

The *norm* of a signal $x$ is defined as the square root of its total energy:

$$
\|x\| \triangleq \sqrt{\mathcal{E}_x} = \sqrt{\sum_{n=0}^{N-1} \left|x_n\right|^2}
$$

We think of $\|x\|$ as the *length* of $x$ in $N$-space. Furthermore, $\|x - y\|$ is regarded as the *distance* between $x$ and $y$. The norm can also be thought of as the “absolute value” or “radius” of a vector.^{6.2}
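A small sketch of the norm as length and as distance (the 2D vectors here are made up for illustration):

```python
def norm(x):
    # Square root of the total energy of x (Euclidean norm).
    return sum(abs(v) ** 2 for v in x) ** 0.5

# Made-up 2D vectors for illustration.
x = [2.0, 3.0]
y = [4.0, 1.0]

length_x = norm(x)                             # "length" of x in N-space
dist_xy = norm([a - b for a, b in zip(x, y)])  # distance between x and y

print(round(length_x, 3))  # 3.606 = sqrt(13)
print(round(dist_xy, 3))   # 2.828 = sqrt(8)
```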

**Example:** Going back to our simple 2D example $x = (x_1, x_2)$, we can compute its norm as $\|x\| = \sqrt{x_1^2 + x_2^2}$. The physical interpretation of the norm as a distance measure is shown in Fig. 6.5.

**Example:** Let’s also look again at the vector-sum example, redrawn in Fig. 6.6. The norm of the vector sum $x + y$ is

$$
\|x + y\| = \sqrt{(x_1 + y_1)^2 + (x_2 + y_2)^2},
$$

while the norms of $x$ and $y$ are $\sqrt{x_1^2 + x_2^2}$ and $\sqrt{y_1^2 + y_2^2}$, respectively. We find that $\|x + y\| \le \|x\| + \|y\|$, which is an example of the *triangle inequality*. (Equality occurs only when $x$ and $y$ are collinear, as can be seen geometrically from studying Fig. 6.6.)
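The triangle inequality is easy to verify numerically. In this sketch the vectors are invented for illustration, including a collinear pair to show the equality case:

```python
def norm(x):
    # Euclidean norm: square root of the sum of squared moduli.
    return sum(abs(v) ** 2 for v in x) ** 0.5

# Made-up vectors for illustration.
x = [2.0, 3.0]
y = [4.0, 1.0]
s = [a + b for a, b in zip(x, y)]

# Triangle inequality: ||x + y|| <= ||x|| + ||y||.
assert norm(s) <= norm(x) + norm(y)

# Equality holds only when x and y are collinear:
u = [1.0, 2.0]
v = [3.0, 6.0]  # v = 3 * u, so u and v are collinear
t = [a + b for a, b in zip(u, v)]
assert abs(norm(t) - (norm(u) + norm(v))) < 1e-12
```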

**Example:** Consider the vector-difference example diagrammed in Fig. 6.7. The norm of the difference vector is

$$
\|x - y\| = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2}.
$$
