One way to solve (4.6) is to decompose \(f(t)\) as a sum of cosines (and sines) and then solve many problems of the form (4.7). We then use the principle of superposition to sum all these solutions and obtain a solution to (4.6).
Before we proceed, let us talk in a little more detail about periodic functions. A function is said to be periodic with period \(P\) if \(f(t) = f(t+P)\) for all \(t\text{.}\) For brevity we say \(f(t)\) is \(P\)-periodic. Note that a \(P\)-periodic function is also \(2P\)-periodic, \(3P\)-periodic, and so on. For example, \(\cos (t)\) and \(\sin (t)\) are \(2\pi\)-periodic. So are \(\cos (kt)\) and \(\sin (kt)\) for all integers \(k\text{.}\) The constant functions are an extreme example. They are periodic for any period (exercise).
Normally we start with a function \(f(t)\) defined on some interval \([-L,L]\text{,}\) and we want to extend \(f(t)\) periodically to make it a \(2L\)-periodic function. We do this extension by defining a new function \(F(t)\) such that for \(t\) in \([-L,L]\text{,}\) \(F(t) = f(t)\text{.}\) For \(t\) in \([L,3L]\text{,}\) we define \(F(t) = f(t-2L)\text{,}\) for \(t\) in \([-3L,-L]\text{,}\) \(F(t) = f(t+2L)\text{,}\) and so on. To make that work we needed \(f(-L) = f(L)\text{.}\) We could have also started with \(f\) defined only on the half-open interval \((-L,L]\) and then define \(f(-L) = f(L)\text{.}\)
Example 4.2.1.
Define \(f(t) = 1-t^2\) on \([-1,1]\text{.}\) Now extend \(f(t)\) periodically to a 2-periodic function. See Figure 4.3.
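To make the extension concrete, here is a minimal numerical sketch in Python (our own, not part of the text; the helper name `periodic_extension` is an arbitrary choice). It wraps any \(t\) into \([-L,L)\) with the modulo operation and evaluates \(f\) there, using the function of Example 4.2.1:

```python
import numpy as np

def periodic_extension(f, L):
    """Return the 2L-periodic extension F of a function f given on [-L, L)."""
    def F(t):
        # Shift t by L, wrap into [0, 2L) with mod, shift back into [-L, L).
        return f((t + L) % (2 * L) - L)
    return F

# Example 4.2.1: f(t) = 1 - t^2 on [-1, 1], extended to a 2-periodic function.
f = lambda t: 1 - t**2
F = periodic_extension(f, L=1.0)
print(F(0.5), F(2.5), F(-1.5))  # all equal f(0.5) = 0.75 by periodicity
```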
You should be careful to distinguish between \(f(t)\) and its extension. A common mistake is to assume that a formula for \(f(t)\) holds for its extension. It can be confusing when the formula for \(f(t)\) is periodic, but with perhaps a different period.
Exercise 4.2.1.
Define \(f(t) = \cos t\) on \([\nicefrac{-\pi}{2},\nicefrac{\pi}{2}]\text{.}\) Take the \(\pi\)-periodic extension and sketch its graph. How does it compare to the graph of \(\cos t\text{?}\)
Subsection 4.2.2 Inner product and eigenvector decomposition
Suppose we have a symmetric matrix, that is \(A^T = A\text{.}\) As we remarked before, eigenvectors of \(A\) are then orthogonal. Here the word orthogonal means that if \(\vec{v}\) and \(\vec{w}\) are two eigenvectors of \(A\) for distinct eigenvalues, then \(\langle \vec{v} , \vec{w} \rangle = 0\text{.}\) In this case the inner product \(\langle \vec{v} , \vec{w} \rangle\) is the dot product, which can be computed as \(\vec{v}^T\vec{w}\text{.}\)
To decompose a vector \(\vec{v}\) in terms of mutually orthogonal vectors \(\vec{w}_1\) and \(\vec{w}_2\) we write
\begin{equation*}
\vec{v} = a_1 \vec{w}_1 + a_2 \vec{w}_2 .
\end{equation*}
Taking the inner product with \(\vec{w}_1\) and using orthogonality gives \(\langle \vec{v} , \vec{w}_1 \rangle = a_1 \langle \vec{w}_1 , \vec{w}_1 \rangle\text{,}\) and similarly for \(\vec{w}_2\text{.}\) Hence
\begin{equation*}
a_1 = \frac{\langle \vec{v} , \vec{w}_1 \rangle}{\langle \vec{w}_1 , \vec{w}_1 \rangle} ,
\qquad
a_2 = \frac{\langle \vec{v} , \vec{w}_2 \rangle}{\langle \vec{w}_2 , \vec{w}_2 \rangle} .
\end{equation*}
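As a quick numerical illustration (a sketch of our own, not from the text), the following verifies this decomposition for a small symmetric matrix with NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # symmetric, so eigenvectors are orthogonal
eigvals, eigvecs = np.linalg.eigh(A)
w1, w2 = eigvecs[:, 0], eigvecs[:, 1]

v = np.array([3.0, -1.0])
a1 = np.dot(v, w1) / np.dot(w1, w1)   # projection coefficients
a2 = np.dot(v, w2) / np.dot(w2, w2)
print(np.allclose(v, a1 * w1 + a2 * w2))  # True
```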
Instead of decomposing a vector in terms of eigenvectors of a matrix, we decompose a function in terms of eigenfunctions of a certain eigenvalue problem. The eigenvalue problem we use for the Fourier series is
\begin{equation*}
x'' + \lambda x = 0 , \quad x(-\pi) = x(\pi) , \quad x'(-\pi) = x'(\pi) .
\end{equation*}
We previously computed that the eigenfunctions are 1, \(\cos (k t)\text{,}\) \(\sin (k t)\text{.}\) That is, we want to find a representation of a \(2\pi\)-periodic function \(f(t)\) as
\begin{equation*}
f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos (nt) + b_n \sin (nt) .
\end{equation*}
This series is called the Fourier series
or the trigonometric series for \(f(t)\text{.}\) We write the coefficient of the eigenfunction 1 as \(\frac{a_0}{2}\) for convenience. We could also think of \(1 = \cos (0t)\text{,}\) so that we only need to look at \(\cos (kt)\) and \(\sin (kt)\text{.}\)
As for matrices, we want to find a projection of \(f(t)\) onto the subspaces given by the eigenfunctions. So we want to define an inner product of functions. For example, to find \(a_n\) we want to compute \(\langle \, f(t) \, , \, \cos (nt) \, \rangle\text{.}\) We define the inner product as
\begin{equation*}
\langle \, f(t) \, , \, g(t) \, \rangle = \int_{-\pi}^{\pi} f(t) \, g(t) \, dt .
\end{equation*}
With this definition of the inner product, we saw in the previous section that the eigenfunctions \(\cos (kt)\) (including the constant eigenfunction) and \(\sin (kt)\) are orthogonal in the sense that
\begin{equation*}
\begin{aligned}
\langle \, \cos (mt)\, , \, \cos (nt) \, \rangle = 0 & \qquad \text{for } m \not= n , \\
\langle \, \sin (mt)\, , \, \sin (nt) \, \rangle = 0 & \qquad \text{for } m \not= n , \\
\langle \, \sin (mt)\, , \, \cos (nt) \, \rangle = 0 & \qquad \text{for all } m \text{ and } n .
\end{aligned}
\end{equation*}
By elementary calculus, \(\langle \, \cos (nt) \, , \, \cos (nt) \, \rangle = \pi\) and \(\langle \, \sin (nt) \, , \, \sin (nt) \, \rangle = \pi\) for \(n \geq 1\text{,}\) while \(\langle \, 1 \, , \, 1 \, \rangle = 2\pi\text{.}\) Just as for vectors, the coefficients are found by projection:
\begin{equation*}
a_n = \frac{\langle \, f(t) \, , \, \cos (nt) \, \rangle}{\langle \, \cos (nt) \, , \, \cos (nt) \, \rangle}
= \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \cos (nt) \, dt ,
\qquad
b_n = \frac{\langle \, f(t) \, , \, \sin (nt) \, \rangle}{\langle \, \sin (nt) \, , \, \sin (nt) \, \rangle}
= \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \sin (nt) \, dt .
\end{equation*}
The formula for \(a_n\) also holds for \(n = 0\text{:}\) writing the constant term as \(\frac{a_0}{2}\) gives \(a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \, dt\text{.}\)
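These orthogonality relations (and the norms just computed) are easy to check numerically; here is a small sketch of our own using SciPy:

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    """Inner product <f, g> = integral of f*g over [-pi, pi]."""
    val, _ = quad(lambda t: f(t) * g(t), -np.pi, np.pi)
    return val

print(inner(lambda t: np.cos(2*t), lambda t: np.cos(3*t)))  # ~0
print(inner(lambda t: np.sin(2*t), lambda t: np.sin(5*t)))  # ~0
print(inner(lambda t: np.sin(2*t), lambda t: np.cos(2*t)))  # ~0
print(inner(lambda t: np.cos(2*t), lambda t: np.cos(2*t)))  # ~pi
```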
Example 4.2.2.
Take the function \(f(t) = t\) defined on \([-\pi,\pi)\) and extend it periodically; let us compute its Fourier series. We will often use the result from calculus that says that the integral of an odd function over a symmetric interval is zero. Recall that an odd function is a function \(\varphi(t)\) such that \(\varphi(-t) = -\varphi(t)\text{.}\) For example the functions \(t\text{,}\) \(\sin t\text{,}\) or (importantly for us) \(t \cos (nt)\) are all odd functions. Thus
\begin{equation*}
a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} t \cos (nt) \, dt = 0
\qquad \text{for all } n \geq 0 .
\end{equation*}
Let us move to \(b_n\text{.}\) Another useful fact from calculus is that the integral of an even function over a symmetric interval is twice the integral of the same function over half the interval. Recall an even function is a function \(\varphi(t)\) such that \(\varphi(-t) = \varphi(t)\text{.}\) For example \(t \sin (nt)\) is even. Hence
\begin{equation*}
b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} t \sin (nt) \, dt
= \frac{2}{\pi} \int_{0}^{\pi} t \sin (nt) \, dt
= \frac{2}{\pi} \left( \left[ \frac{-t \cos (nt)}{n} \right]_{t=0}^{\pi} + \frac{1}{n} \int_{0}^{\pi} \cos (nt) \, dt \right)
= \frac{2 \, {(-1)}^{n+1}}{n} .
\end{equation*}
The series is therefore
\begin{equation*}
\sum_{n=1}^{\infty} \frac{2 \, {(-1)}^{n+1}}{n} \sin (nt) .
\end{equation*}
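As a sanity check (a sketch of our own, not part of the text), one can evaluate these integrals numerically with SciPy and compare against the closed form \(\frac{2 \, {(-1)}^{n+1}}{n}\text{:}\)

```python
import numpy as np
from scipy.integrate import quad

for n in range(1, 6):
    bn, _ = quad(lambda t: t * np.sin(n * t), -np.pi, np.pi)
    bn /= np.pi                              # b_n = (1/pi) * integral
    print(n, bn, 2 * (-1) ** (n + 1) / n)    # the two columns agree
```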
Example 4.2.3.
Take the function
\begin{equation*}
f(t) =
\begin{cases}
0 & \text{if } \; {-\pi} < t \leq 0 , \\
\pi & \text{if } \; \phantom{-}0 < t \leq \pi .
\end{cases}
\end{equation*}
Extend \(f(t)\) periodically and write it as a Fourier series. This function or its variants appear often in applications and the function is called the square wave.
The plot of the extended periodic function is given in Figure 4.6. Now we compute the coefficients. We start with \(a_0\text{,}\)
\begin{equation*}
a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \, dt = \frac{1}{\pi} \int_{0}^{\pi} \pi \, dt = \pi .
\end{equation*}
Next, for \(n \geq 1\text{,}\)
\begin{equation*}
a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \cos (nt) \, dt = \int_{0}^{\pi} \cos (nt) \, dt = 0 ,
\end{equation*}
and
\begin{equation*}
b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \sin (nt) \, dt = \int_{0}^{\pi} \sin (nt) \, dt = \frac{1 - \cos (\pi n)}{n} =
\begin{cases}
\frac{2}{n} & \text{if } n \text{ odd} , \\
0 & \text{if } n \text{ even} .
\end{cases}
\end{equation*}
The Fourier series is
\begin{equation*}
f(t) = \frac{\pi}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{2}{n} \sin (nt) .
\end{equation*}
This equality, however, only holds for such \(t\) where \(f(t)\) is continuous. We do not get an equality for \(t=-\pi,0,\pi\) and all the other discontinuities of \(f(t)\text{.}\) It is not hard to see that when \(t\) is an integer multiple of \(\pi\) (which gives all the discontinuities), every sine term vanishes and the series evaluates to \(\nicefrac{\pi}{2}\text{.}\) Redefine \(f(t)\) on \([-\pi,\pi]\) as
\begin{equation*}
f(t) =
\begin{cases}
0 & \text{if } \; {-\pi} < t < 0 , \\
\pi & \text{if } \; \phantom{-}0 < t < \pi , \\
\nicefrac{\pi}{2} & \text{if } \; \phantom{-}t = -\pi,
t = 0,\text{ or }
t = \pi,
\end{cases}
\end{equation*}
and extend periodically. The series equals this new extended \(f(t)\) everywhere, including the discontinuities. We will generally not worry about changing the function values at several (finitely many) points.
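A short numerical sketch (our own; the function name `square_wave_partial_sum` is an arbitrary choice) illustrates both claims: away from the jumps the partial sums approach the function values, while at a jump every partial sum is exactly \(\nicefrac{\pi}{2}\text{:}\)

```python
import numpy as np

def square_wave_partial_sum(t, N):
    """Partial sum pi/2 + sum over odd n <= N of (2/n) sin(nt)."""
    s = np.pi / 2
    for n in range(1, N + 1, 2):   # odd n only
        s += (2 / n) * np.sin(n * t)
    return s

# At a point of continuity the partial sums approach f(t) = pi:
print(square_wave_partial_sum(np.pi / 2, 201))   # close to pi
# At the discontinuity t = 0 every partial sum is exactly pi/2:
print(square_wave_partial_sum(0.0, 201))         # exactly pi/2
```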
We will say more about convergence in the next section. Let us, however, briefly mention an effect of the discontinuity. Zoom in near the discontinuity in the square wave and plot the first 100 harmonics; see Figure 4.8. While the series is a very good approximation away from the discontinuities, the error (the overshoot) near the discontinuity at \(t=\pi\) does not seem to be getting any smaller as we take more and more harmonics. This behavior is known as the Gibbs phenomenon. The region where the error is large does, however, get smaller as we take more terms in the series.
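The overshoot is easy to measure numerically. The following sketch (our own, under the assumption that sampling just left of \(t = \pi\) captures the peak of each partial sum) shows it leveling off at roughly 9% of the jump rather than shrinking to zero:

```python
import numpy as np

def partial_sum(t, N):
    # Partial sum of the square wave series: pi/2 + sum over odd n of (2/n) sin(nt).
    return np.pi / 2 + sum((2 / n) * np.sin(n * t) for n in range(1, N + 1, 2))

t = np.linspace(np.pi - 0.5, np.pi, 2000)   # window just left of the jump at t = pi
for N in (10, 100, 1000):
    overshoot = partial_sum(t, N).max() - np.pi
    print(N, overshoot)   # stays near 0.28 (about 9% of the jump size pi)
```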
We can think of a periodic function as a “signal” being a superposition of many signals of pure frequency. For example, we could think of the square wave as a tone of certain base frequency. This base frequency is called the fundamental frequency. The square wave will be a superposition of many different pure tones of frequencies that are multiples of the fundamental frequency. In music, the higher frequencies are called the overtones. All the frequencies that appear are called the spectrum of the signal. On the other hand a simple sine wave is only the pure tone (no overtones). The simplest way to make sound using a computer is the square wave, and the sound is very different from a pure tone. If you ever played video games from the 1980s or so, then you heard what square waves sound like.
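If you want to hear this for yourself, here is a minimal sketch (our own construction, not from the text) that writes a one-second square-wave tone to a WAV file using only the Python standard library; the 440 Hz fundamental, the amplitude, and the file name are arbitrary choices:

```python
import math, struct, wave

RATE, FREQ, SECONDS = 44100, 440, 1            # sample rate, fundamental, duration
with wave.open("square.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)                          # 16-bit samples
    w.setframerate(RATE)
    for i in range(RATE * SECONDS):
        t = i / RATE
        # The sign of sin gives a square wave at the fundamental frequency.
        sample = 12000 if math.sin(2 * math.pi * FREQ * t) >= 0 else -12000
        w.writeframes(struct.pack("<h", sample))
```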
Exercises 4.2.4
4.2.3.
Suppose \(f(t)\) is defined on \([-\pi,\pi]\) as \(\sin (5t) + \cos (3t)\text{.}\) Extend periodically and compute the Fourier series of \(f(t)\text{.}\)
4.2.4.
Suppose \(f(t)\) is defined on \([-\pi,\pi]\) as \(\lvert t \rvert\text{.}\) Extend periodically and compute the Fourier series of \(f(t)\text{.}\)
4.2.5.
Suppose \(f(t)\) is defined on \([-\pi,\pi]\) as \(\lvert t \rvert^3\text{.}\) Extend periodically and compute the Fourier series of \(f(t)\text{.}\)
4.2.6.
Extend periodically and compute the Fourier series of \(f(t)\text{.}\)
4.2.7.
Suppose \(f(t)\) is defined on \((-\pi,\pi]\) as \(t^3\text{.}\) Extend periodically and compute the Fourier series of \(f(t)\text{.}\)
4.2.8.
Suppose \(f(t)\) is defined on \([-\pi,\pi]\) as \(t^2\text{.}\) Extend periodically and compute the Fourier series of \(f(t)\text{.}\)
There is another form of the Fourier series using complex exponentials \(e^{int}\) for \(n=\ldots,-2,-1,0,1,2,\ldots\) instead of \(\cos(nt)\) and \(\sin(nt)\) for positive \(n\text{.}\) This form may be easier to work with sometimes. It is certainly more compact to write, and there is only one formula for the coefficients. On the downside, the coefficients are complex numbers. That is, we look for a series of the form
\begin{equation*}
f(t) = \sum_{n=-\infty}^{\infty} c_n e^{int} .
\end{equation*}
Note that the sum now ranges over all the integers including negative ones. Do not worry about convergence in this calculation. Hint: It may be better to start from the complex exponential form and write the series as
\begin{equation*}
c_0 + \sum_{n=1}^{\infty} \left( c_n e^{int} + c_{-n} e^{-int} \right) .
\end{equation*}
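For a numerical illustration (our own sketch; the coefficient formula \(c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(t) \, e^{-int} \, dt\) used below is the standard one and is stated here as an assumption, since deriving it is the point of the exercise), the complex form reproduces the function of Exercise 4.2.3:

```python
import numpy as np
from scipy.integrate import quad

def c(f, n):
    """Complex Fourier coefficient c_n = (1/(2 pi)) * integral of f(t) e^{-int}."""
    re, _ = quad(lambda t: f(t) * np.cos(n * t), -np.pi, np.pi)
    im, _ = quad(lambda t: -f(t) * np.sin(n * t), -np.pi, np.pi)
    return (re + 1j * im) / (2 * np.pi)

f = lambda t: np.sin(5 * t) + np.cos(3 * t)   # the function from Exercise 4.2.3
t0 = 0.7
approx = sum(c(f, n) * np.exp(1j * n * t0) for n in range(-6, 7))
print(approx.real, f(t0))                     # the two values agree
```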