Matrix Multiplication

GUIDE: Mathematics of the Discrete Fourier Transform (DFT). Matrix Multiplication




Let $A$ be a general $M\times L$ matrix and let $B$ denote a general $L\times N$ matrix. Denote the matrix product by $C = AB$ or $C = A \cdot B$. Then matrix multiplication is carried out by computing the inner product of every row of $A$ with every column of $B$. Let the $i$th row of $A$ be denoted by $\underline{a}_i^T$, $i=1,2,\ldots,M$, and the $j$th column of $B$ by $\underline{b}_j$, $j=1,2,\ldots,N$. Then the matrix product $C=AB$ is defined as

\begin{displaymath}
C = AB = \left[\begin{array}{cccc}
\left<\underline{a}_1,\underline{b}_1\right> & \left<\underline{a}_1,\underline{b}_2\right> & \cdots & \left<\underline{a}_1,\underline{b}_N\right> \\
\left<\underline{a}_2,\underline{b}_1\right> & \left<\underline{a}_2,\underline{b}_2\right> & \cdots & \left<\underline{a}_2,\underline{b}_N\right> \\
\vdots & \vdots & & \vdots \\
\left<\underline{a}_M,\underline{b}_1\right> & \left<\underline{a}_M,\underline{b}_2\right> & \cdots & \left<\underline{a}_M,\underline{b}_N\right>
\end{array}\right]
\end{displaymath}

so that the $(i,j)$th entry is $c_{ij} = \left<\underline{a}_i,\underline{b}_j\right> = \sum_{k=1}^{L} a_{ik}\,b_{kj}$.

This definition can be extended to complex matrices by using a definition of the inner product which does not conjugate its second argument.
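The row-times-column definition above can be sketched directly in code. This is an illustrative sketch, not part of the original text; the helper name `matmul` and the example matrices are our own.

```python
def matmul(A, B):
    """Multiply an M x L matrix A by an L x N matrix B (matrices as lists of rows).

    Entry c[i][j] is the inner product of row i of A with column j of B,
    i.e. the sum over k of A[i][k] * B[k][j].
    """
    M, L, N = len(A), len(B), len(B[0])
    assert all(len(row) == L for row in A), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(L)) for j in range(N)]
            for i in range(M)]

A = [[1, 2], [3, 4], [5, 6]]   # 3 x 2
B = [[7, 8], [9, 10]]          # 2 x 2
C = matmul(A, B)               # 3 x 2 result
```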


For example,

\begin{displaymath}
\left[\begin{array}{cc} a & b \\ c & d \\ e & f \end{array}\right] \cdot
\left[\begin{array}{cc} \alpha & \beta \\ \gamma & \delta \end{array}\right]
=
\left[\begin{array}{cc}
a\alpha+b\gamma & a\beta+b\delta \\
c\alpha+d\gamma & c\beta+d\delta \\
e\alpha+f\gamma & e\beta+f\delta
\end{array}\right]
\end{displaymath}

\begin{displaymath}
\left[\begin{array}{cc} \alpha & \beta \\ \gamma & \delta \end{array}\right] \cdot
\left[\begin{array}{ccc} a & c & e \\ b & d & f \end{array}\right]
=
\left[\begin{array}{ccc}
\alpha a + \beta b & \alpha c + \beta d & \alpha e + \beta f \\
\gamma a + \delta b & \gamma c + \delta d & \gamma e + \delta f
\end{array}\right]
\end{displaymath}

\begin{displaymath}
\left[\begin{array}{c} \alpha \\ \beta \end{array}\right] \cdot
\left[\begin{array}{ccc} a & b & c \end{array}\right]
=
\left[\begin{array}{ccc}
\alpha a & \alpha b & \alpha c \\
\beta a & \beta b & \beta c
\end{array}\right]
\end{displaymath}
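The last example above (a column vector times a row vector, an outer product) can be checked with concrete numbers; the values here are arbitrary illustrations, not from the text.

```python
# Outer product: a 2x1 column vector times a 1x3 row vector gives a 2x3 matrix.
col = [2, 3]          # plays the role of [alpha, beta]^T
row = [5, 7, 11]      # plays the role of [a, b, c]
outer = [[u * v for v in row] for u in col]
# Each entry is simply col[i] * row[j].
```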


An $M\times L$ matrix $A$ can only be multiplied on the right by an $L\times N$ matrix, where $N$ is any positive integer. An $L\times N$ matrix $A$ can only be multiplied on the left by an $M\times L$ matrix, where $M$ is any positive integer. Thus, the number of columns in the matrix on the left must equal the number of rows in the matrix on the right.
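The shape rule just stated reduces to a single comparison; a minimal sketch (the function name is ours, not the text's):

```python
def shapes_compatible(shape_a, shape_b):
    """The product (M x L) times (L2 x N) is defined iff L == L2."""
    return shape_a[1] == shape_b[0]

ok = shapes_compatible((3, 2), (2, 4))    # 3x2 times 2x4 is defined
bad = shapes_compatible((3, 2), (3, 2))   # inner dimensions 2 and 3 differ
```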

Matrix multiplication is non-commutative, in general. That is, normally $AB \neq BA$, even when both products are defined (such as when the matrices are square).
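A minimal numeric counterexample to commutativity (our own choice of matrices):

```python
def matmul2(A, B):
    """2x2 matrix product, matrices as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
AB = matmul2(A, B)
BA = matmul2(B, A)
# AB != BA even though both products are defined (both matrices are square).
```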

The transpose of a matrix product is the product of the transposes in reverse order:

\begin{displaymath}
\left(AB\right)^{T} = B^{T} A^{T}
\end{displaymath}
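The reverse-order transpose identity can be spot-checked numerically; this is an illustrative sketch, and the helper names and matrices are ours.

```python
def matmul(A, B):
    """Matrix product, matrices as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    """Swap rows and columns."""
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3], [4, 5, 6]]      # 2 x 3
B = [[1, 0], [2, 1], [0, 3]]    # 3 x 2
lhs = transpose(matmul(A, B))                 # (AB)^T
rhs = matmul(transpose(B), transpose(A))      # B^T A^T
```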

The identity matrix is denoted by $I$ and is defined as

\begin{displaymath}
I \triangleq \left[\begin{array}{ccccc}
1 & 0 & 0 & \cdots & 0 \\
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1
\end{array}\right]
\end{displaymath}
Identity matrices are always square. The $N\times N$ identity matrix $I$, sometimes denoted $I_N$, satisfies $A \cdot I_N = A$ for every $M\times N$ matrix $A$. Similarly, $I_M \cdot A = A$ for every $M\times N$ matrix $A$.
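A quick check of $A \cdot I_N = A$ and $I_M \cdot A = A$ (a sketch; helper names are ours):

```python
def identity(n):
    """The n x n identity: ones on the main diagonal, zeros elsewhere."""
    return [[int(i == j) for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Matrix product via row-by-column inner products."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3], [4, 5, 6]]    # M x N with M = 2, N = 3
right = matmul(A, identity(3))   # A I_N
left = matmul(identity(2), A)    # I_M A
```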

As a special case, a matrix $A$ times a vector $\underline{x}$ produces a new vector $\underline{y} = A\underline{x}$ which consists of the inner product of every row of $A$ with $\underline{x}$:

\begin{displaymath}
A\underline{x} = \left[\begin{array}{c}
\left<\underline{a}_1,\underline{x}\right> \\
\left<\underline{a}_2,\underline{x}\right> \\
\vdots \\
\left<\underline{a}_M,\underline{x}\right>
\end{array}\right]
\end{displaymath}
A matrix $A$ times a vector $\underline{x}$ defines a linear transformation of $\underline{x}$. In fact, every linear function of a vector $\underline{x}$ can be expressed as a matrix multiply. In particular, every linear filtering operation can be expressed as a matrix multiply applied to the input signal. As a special case, every linear, time-invariant (LTI) filtering operation can be expressed as a matrix multiply in which the matrix is Toeplitz, i.e., $A[i,j] = A[i-j]$ (constant along all diagonals).
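The Toeplitz/LTI remark can be illustrated by building the filtering (convolution) matrix of an FIR impulse response; `convolution_matrix` is a hypothetical helper, and the signals are arbitrary examples.

```python
def convolution_matrix(h, n):
    """Toeplitz matrix T such that T times a length-n input x equals the full
    convolution h * x.  Entry T[i][j] = h[i - j]: constant along diagonals."""
    m = len(h) + n - 1                    # length of the full convolution
    return [[h[i - j] if 0 <= i - j < len(h) else 0 for j in range(n)]
            for i in range(m)]

h = [1, 2, 3]                             # FIR impulse response
x = [4, 5, 6]                             # input signal
T = convolution_matrix(h, len(x))
y = [sum(T[i][j] * x[j] for j in range(len(x))) for i in range(len(T))]
# y is the convolution of h with x, computed as a matrix multiply.
```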

As a further special case, a row vector on the left may be multiplied by a column vector on the right to form a single inner product:

\begin{displaymath}
\underline{y}^{\ast}\,\underline{x} = \left<\underline{x},\underline{y}\right>
\end{displaymath}

where the alternate transpose notation ``$\ast$'' is defined to include complex conjugation, so that the above result holds also for complex vectors. Using this result, we may rewrite the general matrix multiply as

\begin{displaymath}
c_{ij} = \underline{a}_i^{\ast}\,\underline{b}_j = \left<\underline{b}_j,\underline{a}_i\right>,
\qquad i = 1,2,\ldots,M,\quad j = 1,2,\ldots,N.
\end{displaymath}
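The conjugating inner product $\underline{y}^{\ast}\underline{x}$ can be sketched for complex vectors (the vectors below are arbitrary illustrations):

```python
def inner(x, y):
    """<x, y> = y^* x: conjugate the second argument, per the text's convention."""
    return sum(a * b.conjugate() for a, b in zip(x, y))

x = [1 + 2j, 3 + 0j]
y = [2 - 1j, 0 + 1j]
val = inner(x, y)   # (1+2j)(2+1j) + 3*(-1j)
```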



© 1998-2017 Nicola Asuni. All rights reserved.