
Section 11.8 Fourier series

Note: 3–4 lectures
Fourier series (named after the French mathematician Jean-Baptiste Joseph Fourier, 1768–1830) is perhaps the most important (and the most difficult) of the series that we cover in this book. We saw a few examples already, but let us start at the beginning.

Subsection 11.8.1 Trigonometric polynomials

A trigonometric polynomial is an expression of the form
\begin{equation*} a_0 + \sum_{n=1}^N \bigl(a_n \cos(nx) + b_n \sin(nx) \bigr), \end{equation*}
or equivalently, thanks to Euler’s formula (\(e^{i\theta} = \cos(\theta) + i \sin(\theta)\)):
\begin{equation*} \sum_{n=-N}^N c_n e^{inx} . \end{equation*}
The second form is usually more convenient. If \(z \in \C\) with \(\sabs{z}=1,\) we write \(z = e^{ix}\text{,}\) and so
\begin{equation*} \sum_{n=-N}^N c_n e^{inx} = \sum_{n=-N}^N c_n z^n . \end{equation*}
So a trigonometric polynomial is really a rational function of the complex variable \(z\) (we are allowing negative powers) evaluated on the unit circle. There is a wonderful connection between power series (actually Laurent series because of the negative powers) and Fourier series because of this observation, but we will not investigate this further.
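To make the identification concrete, here is a quick numerical sanity check, a sketch in Python (assuming NumPy is available; the coefficients and the sample point are arbitrary choices, not anything from the text):

```python
import numpy as np

N = 3
rng = np.random.default_rng(0)
# random coefficients c_{-N}, ..., c_N, stored so that c[n + N] is c_n
c = rng.standard_normal(2 * N + 1) + 1j * rng.standard_normal(2 * N + 1)

x = 0.7
z = np.exp(1j * x)  # the corresponding point on the unit circle

# sum_{n=-N}^{N} c_n e^{inx}
as_exponentials = sum(c[n + N] * np.exp(1j * n * x) for n in range(-N, N + 1))
# the same expression as a polynomial in z and 1/z
as_laurent = sum(c[n + N] * z ** n for n in range(-N, N + 1))

print(abs(as_exponentials - as_laurent))  # ~1e-16: the two forms agree
```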
Another reason why Fourier series is important and comes up in so many applications is that the functions \(e^{inx}\) are eigenfunctions (an eigenfunction is like an eigenvector for a matrix, but for a linear operator on a vector space of functions) of various differential operators. For example,
\begin{equation*} \frac{d}{dx} \bigl[ e^{inx} \bigr] = (in) e^{inx}, \qquad \frac{d^2}{dx^2} \bigl[ e^{inx} \bigr] = (-n^2) e^{inx} . \end{equation*}
That is, they are the functions whose derivative is a scalar (the eigenvalue) times the function itself. Just as eigenvalues and eigenvectors are important in studying matrices, eigenvalues and eigenfunctions are important when studying linear differential equations.
The functions \(\cos (nx)\text{,}\) \(\sin (nx)\text{,}\) and \(e^{inx}\) are \(2\pi\)-periodic, and hence trigonometric polynomials are also \(2\pi\)-periodic. We could rescale \(x\) to make the period different, but the theory is the same, so we stick with the period \(2\pi\text{.}\) For \(n \not= 0\text{,}\) the antiderivative of \(e^{inx}\) is \(\frac{e^{inx}}{in}\text{,}\) and so
\begin{equation*} \int_{-\pi}^\pi e^{inx} \, dx = \begin{cases} 2\pi & \text{if } n=0, \\ 0 & \text{otherwise.} \end{cases} \end{equation*}
Consider
\begin{equation*} f(x) \coloneqq \sum_{n=-N}^N c_n e^{inx} , \end{equation*}
and for \(m=-N,\ldots,N\) compute
\begin{equation*} \frac{1}{2\pi} \int_{-\pi}^\pi f(x) e^{-imx} \, dx = \frac{1}{2\pi} \int_{-\pi}^\pi \left(\sum_{n=-N}^N c_n e^{i(n-m)x}\right) \, dx = \sum_{n=-N}^N c_n \frac{1}{2\pi} \int_{-\pi}^\pi e^{i(n-m)x} \, dx = c_m . \end{equation*}
We just found a way of computing the coefficients \(c_m\) using an integral of \(f\text{.}\) If \(\sabs{m} > N\text{,}\) the integral is 0, so we might as well have padded the sum with enough zero coefficients to ensure \(\sabs{m} \leq N\text{.}\)
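Here is a short numerical illustration of the coefficient formula (again a Python sketch assuming NumPy; the grid size is an arbitrary choice): a uniform Riemann sum for the integral recovers each \(c_m\).

```python
import numpy as np

N = 4
rng = np.random.default_rng(1)
c = rng.standard_normal(2 * N + 1) + 1j * rng.standard_normal(2 * N + 1)

M = 256  # number of quadrature points; any M > 2N works here
x = -np.pi + 2 * np.pi * np.arange(M) / M  # uniform grid on [-pi, pi)
f = sum(c[n + N] * np.exp(1j * n * x) for n in range(-N, N + 1))

for m in range(-N, N + 1):
    # Riemann sum for (1/2pi) * integral of f(x) e^{-imx} dx; it is exact
    # up to roundoff, since grid sums of e^{ikx} vanish for 0 < |k| < M
    c_m = np.mean(f * np.exp(-1j * m * x))
    assert abs(c_m - c[m + N]) < 1e-12
print("all coefficients recovered")
```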

Proposition 11.8.1.

The trigonometric polynomial \(f(x) = \sum_{n=-N}^N c_n e^{inx}\) is real-valued for all \(x \in \R\) if and only if \(c_{-m} = \overline{c_m}\) for \(m = -N, \ldots, N\text{.}\)

Proof.

If \(f(x)\) is real-valued, that is \(\overline{f(x)} = f(x)\text{,}\) then
\begin{equation*} \overline{c_m} = \overline{ \frac{1}{2\pi} \int_{-\pi}^\pi f(x) e^{-imx} \, dx } = \frac{1}{2\pi} \int_{-\pi}^\pi \overline{ f(x) e^{-imx} } \, dx = \frac{1}{2\pi} \int_{-\pi}^\pi f(x) e^{imx} \, dx = c_{-m} . \end{equation*}
The complex conjugate goes inside the integral because the integral is done on real and imaginary parts separately.
On the other hand, if \(c_{-m} = \overline{c_m}\text{,}\) then
\begin{equation*} \overline{c_{-m}\, e^{-imx}+ c_{m}\, e^{imx}} = \overline{c_{-m}}\, e^{imx}+ \overline{c_{m}}\, e^{-imx} = c_{m}\, e^{imx}+ c_{-m}\, e^{-imx} , \end{equation*}
which is real valued. Also \(c_0 = \overline{c_0}\text{,}\) so \(c_0\) is real. By pairing up the terms, we obtain that \(f\) has to be real-valued.
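A quick numerical illustration of the proposition just proved (a Python sketch, assuming NumPy; the random coefficients are arbitrary):

```python
import numpy as np

N = 3
rng = np.random.default_rng(2)
c_pos = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # c_1, ..., c_N
c0 = rng.standard_normal()  # c_0 must be real

x = np.linspace(-np.pi, np.pi, 1001)
# build f with c_{-n} = conj(c_n)
f = c0 + sum(c_pos[n - 1] * np.exp(1j * n * x)
             + np.conj(c_pos[n - 1]) * np.exp(-1j * n * x)
             for n in range(1, N + 1))
print(np.max(np.abs(f.imag)))  # ~1e-16: f is real-valued
```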
The functions \(e^{inx}\) are also linearly independent.

Proposition 11.8.2.

If \(\sum_{n=-N}^N c_n e^{inx} = 0\) for all \(x \in \R\text{,}\) then \(c_n = 0\) for all \(n\text{.}\)

Proof.

The result follows immediately from the integral formula for \(c_n\text{.}\)

Subsection 11.8.2 Fourier series

We now take limits. The series
\begin{equation*} \sum_{n=-\infty}^\infty c_n \, e^{inx} \end{equation*}
is called the Fourier series and the numbers \(c_n\) the Fourier coefficients. Using Euler’s formula \(e^{i\theta} = \cos(\theta) + i \sin (\theta)\text{,}\) we could also develop everything with sines and cosines, that is, as the series \(a_0 + \sum_{n=1}^\infty \bigl( a_n \cos(nx) + b_n \sin(nx) \bigr)\text{.}\) The two forms are equivalent, but the latter is slightly messier.
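For reference, matching terms via Euler’s formula gives the dictionary between the two forms: for \(n \geq 1\text{,}\)
\begin{equation*} a_n = c_n + c_{-n}, \qquad b_n = i \, ( c_n - c_{-n} ), \qquad c_{\pm n} = \frac{a_n \mp i b_n}{2}, \qquad a_0 = c_0 . \end{equation*}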
Several questions arise. Which functions are expressible as Fourier series? Obviously, they must be \(2\pi\)-periodic, but not every periodic function is expressible with the series. Furthermore, if we do have a Fourier series, where does it converge, if it converges at all? Does it converge absolutely? Uniformly? Note also that the series is doubly infinite, so its convergence involves two limits. When talking about Fourier series convergence, we usually mean the following limit:
\begin{equation*} \lim_{N\to\infty} \sum_{n=-N}^N c_n e^{inx} . \end{equation*}
There are other ways we can sum the series to get convergence in more situations, but we refrain from discussing those. In light of this, define the symmetric partial sums
\begin{equation*} s_N(f;x) \coloneqq \sum_{n=-N}^N c_n \,e^{inx} . \end{equation*}
Conversely, for an integrable function \(f \colon [-\pi,\pi] \to \C\text{,}\) call the numbers
\begin{equation*} c_n \coloneqq \frac{1}{2\pi} \int_{-\pi}^\pi f(x) e^{-inx} \, dx \end{equation*}
its Fourier coefficients. To emphasize the function the coefficients belong to, we write \(\hat{f}(n)\text{.}\) (The notation should seem familiar to those readers who have seen the Fourier transform; the similarity is not just coincidental, as we are taking a type of Fourier transform here.) We then formally write down a Fourier series:
\begin{equation*} f(x) \sim \sum_{n=-\infty}^\infty c_n \, e^{inx} . \end{equation*}
As you might imagine, such a series need not converge. The \(\sim\) does not imply that the two sides are equal in any way; it simply records that we created a formal series using the formula for the coefficients. We will see that when the functions are “nice enough,” we do get convergence.

Example 11.8.3.

Consider the step function \(h(x)\) so that \(h(x) \coloneqq 1\) on \([0,\pi]\) and \(h(x) \coloneqq -1\) on \((-\pi,0)\text{,}\) extended periodically to a \(2\pi\)-periodic function. With a little bit of calculus, we compute the coefficients:
\begin{equation*} \hat{h}(0) = \frac{1}{2\pi} \int_{-\pi}^\pi h(x) \, dx = 0, \qquad \hat{h}(n) = \frac{1}{2\pi} \int_{-\pi}^\pi h(x) e^{-inx} \, dx = \frac{i\bigl( (-1)^n-1 \bigr)}{\pi n} \quad \text{for } n \not= 0 . \end{equation*}
A little bit of simplification leads to
\begin{equation*} s_N(h;x) = \sum_{n=-N}^N \hat{h}(n) \,e^{inx} = \sum_{n=1}^N \frac{2\bigl(1-(-1)^n\bigr)}{\pi n} \sin(n x) . \end{equation*}
See the left-hand graph in Figure 11.11 for a graph of \(h\) and several symmetric partial sums.
For a second example, consider the function \(g(x) \coloneqq \sabs{x}\) on \([-\pi,\pi]\) and then extended to a \(2\pi\)-periodic function. Computing the coefficients, we find
\begin{equation*} \hat{g}(0) = \frac{1}{2\pi} \int_{-\pi}^\pi g(x) \, dx = \frac{\pi}{2}, \qquad \hat{g}(n) = \frac{1}{2\pi} \int_{-\pi}^\pi g(x) e^{-inx} \, dx = \frac{(-1)^n-1}{\pi n^2} \quad \text{for } n \not= 0 . \end{equation*}
A little simplification yields
\begin{equation*} s_N(g;x) = \sum_{n=-N}^N \hat{g}(n) \,e^{inx} = \frac{\pi}{2} + \sum_{n=1}^N \frac{2\bigl((-1)^n-1\bigr)}{\pi n^2} \cos(n x) . \end{equation*}
See the right-hand graph in Figure 11.11.

Figure 11.11. The functions \(h\) and \(g\) in bold, with several symmetric partial sums in gray.
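The closed forms above can be cross-checked numerically; the following Python sketch (assuming NumPy; the grid size is arbitrary) compares a Riemann sum for \(\hat{h}(n)\) with the formula:

```python
import numpy as np

M = 4096
x = -np.pi + 2 * np.pi * (np.arange(M) + 0.5) / M  # midpoints; avoids the jump at 0
h = np.where(x >= 0, 1.0, -1.0)

for n in range(1, 6):
    numeric = np.mean(h * np.exp(-1j * n * x))   # Riemann sum for h-hat(n)
    closed = 1j * ((-1) ** n - 1) / (np.pi * n)  # the formula above
    print(n, abs(numeric - closed))              # small, and shrinking as M grows
```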

Note that for both \(h\) and \(g\text{,}\) the even coefficients (except \(\hat{g}(0)\)) happen to vanish, but that is not really important. What is important is convergence. First, at the discontinuity at \(x=0\text{,}\) we find \(s_N(h;0) = 0\) for all \(N\text{,}\) so \(s_N(h;0)\) converges to a number different from \(h(0) = 1\) (at a nice enough jump discontinuity, the limit is the average of the two one-sided limits, see the exercises). That should not be surprising; the coefficients are computed by an integral, and integration does not notice if the value of a function changes at a single point. We should remark, however, that in general we are not guaranteed that the Fourier series converges to the function even at a point where the function is continuous. We will prove convergence when the function is at least Lipschitz at the point in question.
What is really important is how fast the coefficients go to zero. For the discontinuous \(h\text{,}\) the coefficients \(\hat{h}(n)\) go to zero approximately like \(\nicefrac{1}{n}\text{.}\) On the other hand, for the continuous \(g\text{,}\) the coefficients \(\hat{g}(n)\) go to zero approximately like \(\nicefrac{1}{n^2}\text{.}\) The Fourier coefficients “see” the discontinuity in some sense.
Do note that continuity in this setting is the continuity of the periodic extension, that is, we include the endpoints \(\pm \pi\text{.}\) So the function \(f(x) = x\) defined on \((-\pi,\pi]\) and extended periodically would be discontinuous at the endpoints \(\pm\pi\text{.}\)
In general, the relationship between the regularity of the function and the rate of decay of its Fourier coefficients is somewhat more complicated than the examples above might make it seem, but there are some quick conclusions we can draw. Let us forget about finding a series for a given function for a moment, and consider simply the limit of some given series. A few sections ago, we proved that the Fourier series
\begin{equation*} \sum_{n=1}^\infty \frac{\sin(nx)}{n^2} \end{equation*}
converges uniformly and hence converges to a continuous function. This example and its proof can be extended to a more general criterion.

Proposition 11.8.4.

Suppose there exist \(C\) and \(\alpha > 1\) such that for all nonzero integers \(n\text{,}\)
\begin{equation*} \sabs{c_n} \leq \frac{C}{\sabs{n}^\alpha} . \end{equation*}
Then \(\sum_{n=-\infty}^\infty c_n \, e^{inx}\) converges uniformly to a continuous function.

The proof is to apply the Weierstrass \(M\)-test (Theorem 11.2.4) and the \(p\)-series test to find that the series converges uniformly, and hence converges to a continuous function (Corollary 11.2.8). We can also take derivatives.

Proposition 11.8.5.

Suppose there exist \(C\) and \(\alpha > 2\) such that for all nonzero integers \(n\text{,}\)
\begin{equation*} \sabs{c_n} \leq \frac{C}{\sabs{n}^\alpha} . \end{equation*}
Then \(\sum_{n=-\infty}^\infty c_n \, e^{inx}\) converges uniformly to a continuously differentiable function.
The proof is to note that the series converges to a continuous function by the previous proposition. In particular, it converges at some point. Then differentiate the partial sums
\begin{equation*} \sum_{n=-N}^{N} i n c_n \,e^{inx} \end{equation*}
and notice that for all nonzero \(n\)
\begin{equation*} \sabs{i n c_n} \leq \frac{C}{\sabs{n}^{\alpha-1}} . \end{equation*}
The differentiated series converges uniformly by the \(M\)-test again. Since the differentiated series converges uniformly, we find that the original series \(\sum_{n=-\infty}^\infty c_n\,e^{inx}\) converges to a continuously differentiable function, whose derivative is the differentiated series (see Theorem 11.2.14).
We can iterate this reasoning. Suppose there is some \(C\) and \(\alpha > k+1\) (\(k \in \N\)) such that for all nonzero integers \(n\text{,}\)
\begin{equation*} \sabs{c_n} \leq \frac{C}{\sabs{n}^\alpha} . \end{equation*}
Then the Fourier series converges to a \(k\)-times continuously differentiable function. Therefore, the faster the coefficients go to zero, the more regular the limit is.
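The contrast between the two examples can also be seen numerically. The following Python sketch (assuming NumPy; grid and cutoffs arbitrary) estimates the sup error of the partial sums for \(h\) and for \(g\text{:}\)

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 20001)
h = np.where(x >= 0, 1.0, -1.0)
g = np.abs(x)

def s_h(N):  # symmetric partial sum for h, from the formula above
    return sum(2 * (1 - (-1) ** n) / (np.pi * n) * np.sin(n * x)
               for n in range(1, N + 1))

def s_g(N):  # symmetric partial sum for g
    return np.pi / 2 + sum(2 * ((-1) ** n - 1) / (np.pi * n ** 2) * np.cos(n * x)
                           for n in range(1, N + 1))

for N in (8, 32, 128):
    print(N, np.max(np.abs(h - s_h(N))), np.max(np.abs(g - s_g(N))))
# the sup error for g shrinks toward 0 (uniform convergence), while the sup
# error for h stays near 1 because of the jump at x = 0
```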

Subsection 11.8.3 Orthonormal systems

Let us abstract away the exponentials, and study a more general series for a function. One fundamental property of the exponentials that makes Fourier series work is that the exponentials are a so-called orthonormal system. Fix an interval \([a,b]\text{.}\) We define an inner product for the space of functions. We restrict our attention to Riemann integrable functions as we do not have the Lebesgue integral, which would be the natural choice. Let \(f\) and \(g\) be complex-valued Riemann integrable functions on \([a,b]\) and define the inner product
\begin{equation*} \langle f , g \rangle \coloneqq \int_a^b f(x) \overline{g(x)} \, dx . \end{equation*}
If you have seen Hermitian inner products in linear algebra, this is precisely such a product. We must include the conjugate, as we are working with complex numbers. We then define the “size” of \(f\text{,}\) that is, the \(L^2\) norm \(\snorm{f}_2\text{,}\) by defining its square:
\begin{equation*} \snorm{f}_2^2 \coloneqq \langle f , f \rangle = \int_a^b \sabs{f(x)}^2 \, dx . \end{equation*}
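In code, the inner product and the norm are one-liners; here is a minimal Python sketch (assuming NumPy; the function names and the midpoint quadrature are our choices, not anything from the text):

```python
import numpy as np

def inner(f, g, a, b, M=4096):
    """Midpoint-rule approximation of the inner product <f, g> on [a, b]."""
    x = a + (b - a) * (np.arange(M) + 0.5) / M
    return np.sum(f(x) * np.conj(g(x))) * (b - a) / M

def l2_norm(f, a, b):
    return np.sqrt(inner(f, f, a, b).real)

# integral of sin^2 over [-pi, pi] is pi, so the norm is sqrt(pi) = 1.7724...
print(l2_norm(np.sin, -np.pi, np.pi))
```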

Remark 11.8.6.

Note the similarity to finite dimensions. For \(z = (z_1,z_2,\ldots,z_d) \in \C^d\text{,}\) one defines
\begin{equation*} \langle z , w \rangle \coloneqq \sum_{n=1}^d z_n \overline{w_n} . \end{equation*}
Then the norm is (usually denoted simply by \(\snorm{z}\) in \(\C^d\) rather than by \(\snorm{z}_2\))
\begin{equation*} \snorm{z}^2 = \langle z , z \rangle = \sum_{n=1}^d \sabs{z_n}^2 . \end{equation*}
This is just the Euclidean distance to the origin in \(\C^d\) (same as \(\R^{2d}\)).
In what follows, we will assume all functions are Riemann integrable.

Definition 11.8.7.

Let \(\{ \varphi_n \}_{n=1}^\infty\) be a sequence of integrable complex-valued functions on \([a,b]\text{.}\) We say that this is an orthonormal system if
\begin{equation*} \langle \varphi_n , \varphi_m \rangle = \int_a^b \varphi_n(x) \, \overline{\varphi_m(x)} \, dx = \begin{cases} 1 & \text{if } n=m, \\ 0 & \text{otherwise.} \end{cases} \end{equation*}
In particular, \(\snorm{\varphi_n}_2 = 1\) for all \(n\text{.}\) If we only require that \(\langle \varphi_n , \varphi_m \rangle = 0\) for \(m\not= n\text{,}\) then the system would be called an orthogonal system.
We noticed above that
\begin{equation*} {\left\{ \frac{1}{\sqrt{2\pi}} \, e^{inx} \right\}}_{n=-\infty}^\infty \end{equation*}
is an orthonormal system on \([-\pi,\pi]\) (the doubly infinite sequence can be reindexed by the natural numbers if desired). The factor in front is there to make the norm equal to 1.
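A numerical check of this orthonormality (a Python sketch assuming NumPy; on a uniform grid over one period the relevant sums are exact up to roundoff):

```python
import numpy as np

M = 1024
x = -np.pi + 2 * np.pi * np.arange(M) / M  # uniform grid over one period

def phi(n):
    return np.exp(1j * n * x) / np.sqrt(2 * np.pi)

for n in range(-2, 3):
    for m in range(-2, 3):
        ip = np.sum(phi(n) * np.conj(phi(m))) * (2 * np.pi / M)
        assert abs(ip - (1.0 if n == m else 0.0)) < 1e-12
print("orthonormal (numerically)")
```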
Having an orthonormal system \(\{ \varphi_n \}_{n=1}^\infty\) on \([a,b]\) and an integrable function \(f\) on \([a,b]\text{,}\) we can write a Fourier series relative to \(\{ \varphi_n \}_{n=1}^\infty\text{.}\) Let
\begin{equation*} c_n \coloneqq \langle f , \varphi_n \rangle = \int_a^b f(x) \overline{\varphi_n(x)} \, dx , \end{equation*}
and write
\begin{equation*} f(x) \sim \sum_{n=1}^\infty c_n \varphi_n . \end{equation*}
In other words, the series is
\begin{equation*} \sum_{n=1}^\infty \langle f , \varphi_n \rangle \varphi_n(x) . \end{equation*}
Notice the similarity to the expression for the orthogonal projection of a vector onto a subspace from linear algebra. We are in fact doing just that, but in a space of functions.
Theorem 11.8.8.

Let \(\{ \varphi_n \}_{n=1}^\infty\) be an orthonormal system on \([a,b]\) and \(f\) an integrable function on \([a,b]\text{.}\) Let \(c_n \coloneqq \langle f , \varphi_n \rangle\text{,}\) and write \(s_k \coloneqq \sum_{n=1}^k c_n \varphi_n\) for the partial sums of the Fourier series of \(f\text{.}\) If \(p_k \coloneqq \sum_{n=1}^k d_n \varphi_n\) for arbitrary \(d_1, d_2, \ldots, d_k \in \C\text{,}\) then
\begin{equation*} \snorm{f - s_k}_2 \leq \snorm{f - p_k}_2 , \end{equation*}
with equality if and only if \(d_n = c_n\) for all \(n = 1, \ldots, k\text{.}\) In other words, the partial sums of the Fourier series are the best approximation with respect to the \(L^2\) norm.

Proof.

Let us write
\begin{equation*} \int_a^b \sabs{f-p_k}^2 = \int_a^b \sabs{f}^2 - \int_a^b f \widebar{p_k} - \int_a^b \widebar{f} p_k + \int_a^b \sabs{p_k}^2 . \end{equation*}
Now
\begin{equation*} \int_a^b f \widebar{p_k} = \int_a^b f \sum_{n=1}^k \overline{d_n} \overline{\varphi_n} = \sum_{n=1}^k \overline{d_n} \int_a^b f \, \overline{\varphi_n} = \sum_{n=1}^k \overline{d_n} c_n , \end{equation*}
and
\begin{equation*} \int_a^b \sabs{p_k}^2 = \int_a^b \sum_{n=1}^k d_n \varphi_n \sum_{m=1}^k \overline{d_m} \overline{\varphi_m} = \sum_{n=1}^k \sum_{m=1}^k d_n \overline{d_m} \int_a^b \varphi_n \overline{\varphi_m} = \sum_{n=1}^k \sabs{d_n}^2 . \end{equation*}
So
\begin{equation*} \begin{split} \int_a^b \sabs{f-p_k}^2 & = \int_a^b \sabs{f}^2 - \sum_{n=1}^k \overline{d_n} c_n - \sum_{n=1}^k d_n \overline{c_n} + \sum_{n=1}^k \sabs{d_n}^2 \\ & = \int_a^b \sabs{f}^2 - \sum_{n=1}^k \sabs{c_n}^2 + \sum_{n=1}^k \sabs{d_n-c_n}^2 . \end{split} \end{equation*}
This is minimized precisely when \(d_n = c_n\text{.}\)
When we do plug in \(d_n = c_n\text{,}\) then
\begin{equation*} \int_a^b \sabs{f-s_k}^2 = \int_a^b \sabs{f}^2 - \sum_{n=1}^k \sabs{c_n}^2 , \end{equation*}
and so for all \(k\text{,}\)
\begin{equation*} \sum_{n=1}^k \sabs{c_n}^2 \leq \int_a^b \sabs{f}^2 . \end{equation*}
Note that
\begin{equation*} \sum_{n=1}^k \sabs{c_n}^2 = \snorm{s_k}_2^2 \end{equation*}
by the calculation above. We take a limit to obtain the so-called Bessel’s inequality.

Theorem 11.8.9. (Bessel’s inequality)

Let \(\{ \varphi_n \}_{n=1}^\infty\) be an orthonormal system on \([a,b]\text{,}\) \(f\) an integrable function on \([a,b]\text{,}\) and \(c_n \coloneqq \langle f , \varphi_n \rangle\text{.}\) Then
\begin{equation*} \sum_{n=1}^\infty \sabs{c_n}^2 \leq \int_a^b \sabs{f}^2 . \end{equation*}
In particular, \(\int_a^b \sabs{f}^2 < \infty\) implies the series converges and hence
\begin{equation*} \lim_{k \to \infty} c_k = 0 . \end{equation*}
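The identity \(\int_a^b \sabs{f-s_k}^2 = \int_a^b \sabs{f}^2 - \sum_{n=1}^k \sabs{c_n}^2\) and the best-approximation property can both be observed numerically. A Python sketch (assuming NumPy; grid size and cutoffs arbitrary), using the trigonometric system on \([-\pi,\pi]\) and \(f(x) = \sabs{x}\text{:}\)

```python
import numpy as np

M = 4096
x = -np.pi + 2 * np.pi * (np.arange(M) + 0.5) / M  # midpoint grid on (-pi, pi)
w = 2 * np.pi / M                                   # quadrature weight

f = np.abs(x)

def phi(n):
    return np.exp(1j * n * x) / np.sqrt(2 * np.pi)

ns = range(-5, 6)
c = {n: np.sum(f * np.conj(phi(n))) * w for n in ns}  # c_n = <f, phi_n>
s = sum(c[n] * phi(n) for n in ns)                    # the partial sum

err2 = np.sum(np.abs(f - s) ** 2) * w
gap = np.sum(np.abs(f) ** 2) * w - sum(abs(c[n]) ** 2 for n in ns)
print(err2, gap)  # the two agree, and both are nonnegative (Bessel)

p = s + 0.1 * phi(0)  # tamper with one coefficient
print(np.sum(np.abs(f - p) ** 2) * w > err2)  # True: s is the best approximation
```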

Subsection 11.8.4 The Dirichlet kernel and approximate delta functions

We return to the trigonometric Fourier series. The system \(\{ e^{inx} \}_{n=-\infty}^\infty\) is orthogonal, but not orthonormal, if we simply integrate over \([-\pi,\pi]\text{.}\) We can rescale the integral, and hence the inner product, to make \(\{ e^{inx} \}_{n=-\infty}^\infty\) orthonormal. That is, if we replace
\begin{equation*} \int_a^b \qquad \text{with} \qquad \frac{1}{2\pi} \int_{-\pi}^\pi, \end{equation*}
(we are really just rescaling the \(dx\text{;}\) mathematicians in this field sometimes simplify matters with the tongue-in-cheek definition \(1=2\pi\)), then everything works, and we obtain that the system \(\{ e^{inx} \}_{n=-\infty}^\infty\) is orthonormal with respect to the inner product
\begin{equation*} \langle f , g \rangle = \frac{1}{2\pi} \int_{-\pi}^\pi f(x) \, \overline{g(x)} \, dx . \end{equation*}
Suppose \(f \colon \R \to \C\) is \(2\pi\)-periodic and integrable on \([-\pi,\pi]\text{.}\) Write
\begin{equation*} f(x) \sim \sum_{n=-\infty}^\infty c_n \,e^{inx} , \qquad \text{where} \quad c_n \coloneqq \frac{1}{2\pi} \int_{-\pi}^\pi f(x) e^{-inx} \, dx . \end{equation*}
Recall the notation for the symmetric partial sums, \(s_N(f;x) \coloneqq \sum_{n=-N}^N c_n \,e^{inx}\text{.}\) The inequality leading up to Bessel now reads:
\begin{equation*} \frac{1}{2\pi} \int_{-\pi}^\pi \sabs{s_N(f;x)}^2 \, dx = \sum_{n=-N}^N \sabs{c_n}^2 \leq \frac{1}{2\pi} \int_{-\pi}^\pi \sabs{f(x)}^2 \, dx . \end{equation*}
Let the Dirichlet kernel be
\begin{equation*} D_N(x) \coloneqq \sum_{n=-N}^N e^{inx} . \end{equation*}
We claim that
\begin{equation*} D_N(x) = \frac{\sin\bigl( (N+\nicefrac{1}{2})x \bigr)}{\sin(\nicefrac{x}{2})} , \end{equation*}
for \(x\) such that \(\sin(\nicefrac{x}{2}) \not= 0\text{.}\) The left-hand side is continuous on \(\R\text{,}\) and hence the right-hand side extends continuously to all of \(\R\text{.}\) To show the claim, we use a familiar trick:
\begin{equation*} (e^{ix}-1) D_N(x) = e^{i(N+1)x} - e^{-iNx} . \end{equation*}
Multiply through by \(e^{-ix/2}\) to obtain
\begin{equation*} (e^{ix/2}-e^{-ix/2}) D_N(x) = e^{i(N+\nicefrac{1}{2})x} - e^{-i(N+\nicefrac{1}{2})x} . \end{equation*}
The claim follows.
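A quick numerical check of the closed form (a Python sketch assuming NumPy; the grid and \(N\) are arbitrary):

```python
import numpy as np

N = 10
x = np.linspace(-np.pi, np.pi, 2001)
x = x[np.abs(np.sin(x / 2)) > 1e-6]  # avoid the removable singularity at x = 0

direct = sum(np.exp(1j * n * x) for n in range(-N, N + 1)).real
closed = np.sin((N + 0.5) * x) / np.sin(x / 2)
print(np.max(np.abs(direct - closed)))  # ~1e-12: the closed form checks out
```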
Expand the definition of \(s_N\)
\begin{multline*} s_N(f;x) = \sum_{n=-N}^N \frac{1}{2\pi} \int_{-\pi}^\pi f(t) e^{-int} \, dt ~ e^{inx} \\ = \frac{1}{2\pi} \int_{-\pi}^\pi f(t) \sum_{n=-N}^N e^{in(x-t)} \, dt = \frac{1}{2\pi} \int_{-\pi}^\pi f(t) D_N(x-t) \, dt . \end{multline*}
Convolution strikes again! As \(D_N\) and \(f\) are \(2\pi\)-periodic, we may also change variables and write
\begin{equation*} s_N(f;x) = \frac{1}{2\pi} \int_{x-\pi}^{x+\pi} f(x-t) D_N(t) \, dt = \frac{1}{2\pi} \int_{-\pi}^\pi f(x-t) D_N(t) \, dt . \end{equation*}
See Figure 11.12 for a plot of \(D_N\) for \(N=5\) and \(N=20\text{.}\)

Figure 11.12. Plot of \(D_N(x)\) for \(N=5\) (gray) and \(N=20\) (black).
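The convolution formula can be verified numerically as well; the following Python sketch (assuming NumPy; grid, \(N\text{,}\) and the evaluation point are arbitrary) computes \(s_N(g;x_0)\) for the function \(g\) of Example 11.8.3 both ways:

```python
import numpy as np

M = 8192
t = -np.pi + 2 * np.pi * (np.arange(M) + 0.5) / M  # midpoint grid; avoids t = 0

def g(u):  # the 2pi-periodic extension of |x|
    return np.abs(np.mod(u + np.pi, 2 * np.pi) - np.pi)

N, x0 = 7, 1.3

# s_N(g; x0) directly from the coefficients of Example 11.8.3:
direct = np.pi / 2 + sum(2 * ((-1) ** n - 1) / (np.pi * n ** 2) * np.cos(n * x0)
                         for n in range(1, N + 1))

# the same number via (1/2pi) * integral of g(x0 - t) D_N(t) dt:
D = np.sin((N + 0.5) * t) / np.sin(t / 2)
conv = np.sum(g(x0 - t) * D) / M
print(abs(direct - conv))  # small: the convolution formula reproduces s_N
```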

The central peak gets taller and taller as \(N\) gets larger, while the side peaks stay small. We are convolving (again) with approximate delta functions, although these kernels oscillate away from zero. The oscillations on the sides do not go away, but they eventually become so fast that we expect the integral to essentially cancel itself out there. Overall, we expect that \(s_N(f)\) goes to \(f\text{.}\) Things are not always so simple, but under some conditions on \(f\text{,}\) such a conclusion holds. For this reason, people write
\begin{equation*} 2\pi \, \delta(x) \sim \sum_{n=-\infty}^\infty e^{inx} , \end{equation*}
where \(\delta\) is the “delta function” (not really a function), an object that gives something like “\(\int_{-\pi}^{\pi} f(x-t) \delta(t) \, dt = f(x)\text{.}\)” We can think of \(D_N(x)\) as converging in some sense to \(2 \pi\, \delta(x)\text{.}\) However, we have not defined (and will not define) what the delta function is, nor what it means for it to be a limit of \(D_N\) or to have a Fourier series.

Subsection 11.8.5 Localization

If \(f\) satisfies a Lipschitz condition at a point, then the Fourier series converges at that point.

Theorem 11.8.10.

Let \(f \colon \R \to \C\) be a \(2\pi\)-periodic function, Riemann integrable on \([-\pi,\pi]\text{,}\) and let \(x \in \R\text{.}\) Suppose there exist \(\delta > 0\) and \(M\) such that
\begin{equation*} \sabs{f(x)-f(x-t)} \leq M \sabs{t} \qquad \text{for all } t \in (-\delta,\delta) . \end{equation*}
Then
\begin{equation*} \lim_{N\to\infty} s_N(f;x) = f(x) . \end{equation*}
In particular, if \(f\) is continuously differentiable at \(x\text{,}\) then we obtain convergence at \(x\) (exercise). A function \(f \colon [a,b] \to \C\) is continuous piecewise smooth if it is continuous and there exist points \(x_0 = a < x_1 < x_2 < \cdots < x_k = b\) such that for every \(j\text{,}\) \(f\) restricted to \([x_j,x_{j+1}]\) is continuously differentiable (up to the endpoints).

Corollary 11.8.11.

Suppose \(f \colon \R \to \C\) is a \(2\pi\)-periodic function, Riemann integrable on \([-\pi,\pi]\text{,}\) that is continuous piecewise smooth on some closed interval \([x-\delta,x+\delta]\text{,}\) \(\delta > 0\text{.}\) Then \(\lim\limits_{N\to\infty} s_N(f;x) = f(x)\text{.}\)
The proof of the corollary is left as an exercise. Let us prove the theorem.

Proof.

(Proof of Theorem 11.8.10) For all \(N\text{,}\)
\begin{equation*} \frac{1}{2\pi} \int_{-\pi}^\pi D_N = 1 . \end{equation*}
Write
\begin{equation*} \begin{split} s_N(f;x)-f(x) & = \frac{1}{2\pi} \int_{-\pi}^\pi f(x-t) D_N(t) \, dt - f(x) \frac{1}{2\pi} \int_{-\pi}^\pi D_N(t) \, dt \\ & = \frac{1}{2\pi} \int_{-\pi}^\pi \bigl( f(x-t) - f(x) \bigr) D_N(t) \, dt \\ & = \frac{1}{2\pi} \int_{-\pi}^\pi \frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})} \sin\bigl( (N+\nicefrac{1}{2})t \bigr) \, dt . \end{split} \end{equation*}
By the hypotheses, for small nonzero \(t\text{,}\)
\begin{equation*} \abs{ \frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})} } \leq \frac{M\sabs{t}}{\sabs{\sin(\nicefrac{t}{2})}} . \end{equation*}
As \(\sin(\theta) = \theta + h(\theta)\) where \(\frac{h(\theta)}{\theta} \to 0\) as \(\theta \to 0\text{,}\) we notice that \(\frac{M\sabs{t}}{\sabs{\sin(\nicefrac{t}{2})}}\) extends continuously to the origin (with limit \(2M\)). Hence, \(\frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})}\text{,}\) as a function of \(t\text{,}\) is bounded near the origin. As \(t=0\) is the only place on \([-\pi,\pi]\) where the denominator vanishes, it is the only place where there could be a problem. So the function is bounded near \(t=0\) and clearly Riemann integrable on any interval not including \(0\text{,}\) and thus it is Riemann integrable on \([-\pi,\pi]\text{.}\) We use the trigonometric identity
\begin{equation*} \sin\bigl( (N+\nicefrac{1}{2})t \bigr) = \cos(\nicefrac{t}{2}) \sin(Nt) + \sin(\nicefrac{t}{2}) \cos(Nt) , \end{equation*}
to compute
\begin{multline*} \frac{1}{2\pi} \int_{-\pi}^\pi \frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})} \sin\bigl( (N+\nicefrac{1}{2})t \bigr) \, dt = \\ \frac{1}{2\pi} \int_{-\pi}^\pi \left( \frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})} \cos (\nicefrac{t}{2}) \right) \sin (Nt) \, dt + \frac{1}{2\pi} \int_{-\pi}^\pi \bigl( f(x-t) - f(x) \bigr) \cos (Nt) \, dt . \end{multline*}
As functions of \(t\text{,}\) \(\frac{f(x-t) - f(x)}{\sin(\nicefrac{t}{2})} \cos (\nicefrac{t}{2})\) and \(\bigl( f(x-t) - f(x) \bigr)\) are bounded Riemann integrable functions, and so their Fourier coefficients go to zero by Theorem 11.8.9. Hence the two integrals on the right-hand side, which compute the Fourier coefficients for the real version of the Fourier series, go to 0 as \(N\) goes to infinity; this is because, suitably normalized, the functions \(\sin(Nt)\) and \(\cos(Nt)\) also form orthonormal systems with respect to the same inner product. Hence \(s_N(f;x)-f(x)\) goes to 0, that is, \(s_N(f;x)\) goes to \(f(x)\text{.}\)
The theorem also says that convergence depends only on local behavior. That is, to understand the convergence of \(s_N(f;x)\text{,}\) we only need to know \(f\) in some neighborhood of \(x\text{.}\)

Corollary 11.8.12.

Suppose \(f \colon \R \to \C\) is a \(2\pi\)-periodic function, Riemann integrable on \([-\pi,\pi]\text{,}\) and \(J \subset \R\) is an open interval such that \(f(t) = 0\) for all \(t \in J\text{.}\) Then \(\lim\limits_{N\to\infty} s_N(f;x) = 0\) for every \(x \in J\text{.}\) In particular, if \(g\) is another such function and \(f(t) = g(t)\) for all \(t \in J\text{,}\) then for every \(x \in J\text{,}\) \(s_N(f;x)\) converges if and only if \(s_N(g;x)\) converges, and the limits are then equal.

The first claim follows by taking \(M=0\) in the theorem. The “In particular” follows by considering \(f-g\text{,}\) which is zero on \(J\text{,}\) and noting that \(s_N(f-g) = s_N(f) - s_N(g)\text{.}\) So convergence at \(x\) depends only on the values of the function near \(x\text{.}\) However, we saw that the rate of convergence, that is, how fast \(s_N(f)\) converges to \(f\text{,}\) depends on the global behavior of \(f\text{.}\)
Note a subtle difference between the results above and what the Stone–Weierstrass theorem gives. Any continuous function on \([-\pi,\pi]\) can be uniformly approximated by trigonometric polynomials, but these trigonometric polynomials need not be the partial sums \(s_N\text{.}\)

Subsection 11.8.6 Parseval’s theorem

Finally, convergence always happens in the \(L^2\) sense, and operations on the (infinite) vectors of Fourier coefficients are the same as the operations using the integral inner product.

Theorem 11.8.13. (Parseval)

Let \(f\) and \(g\) be \(2\pi\)-periodic functions, Riemann integrable on \([-\pi,\pi]\text{,}\) with
\begin{equation*} f(x) \sim \sum_{n=-\infty}^\infty c_n \, e^{inx} \qquad \text{and} \qquad g(x) \sim \sum_{n=-\infty}^\infty d_n \, e^{inx} . \end{equation*}
Then
\begin{equation*} \lim_{N\to\infty} \snorm{f - s_N(f)}_2 = 0 , \qquad \langle f , g \rangle = \sum_{n=-\infty}^\infty c_n \overline{d_n} , \qquad \text{and} \qquad \snorm{f}_2^2 = \sum_{n=-\infty}^\infty \sabs{c_n}^2 . \end{equation*}

Proof.

Let \(\epsilon > 0\) be given. There exists (exercise) a continuous \(2\pi\)-periodic function \(h\) such that
\begin{equation*} \snorm{f-h}_2 < \epsilon . \end{equation*}
Via Stone–Weierstrass, approximate \(h\) with a trigonometric polynomial uniformly. That is, there is a trigonometric polynomial \(P(x)\) such that \(\sabs{h(x) - P(x)} < \epsilon\) for all \(x\text{.}\) Hence
\begin{equation*} \snorm{h-P}_2 = \sqrt{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \sabs{h(x)-P(x)}^2 \, dx } \leq \epsilon. \end{equation*}
If \(P\) is of degree \(N_0\text{,}\) then for all \(N \geq N_0\text{,}\)
\begin{equation*} \snorm{h-s_N(h)}_2 \leq \snorm{h-P}_2 \leq \epsilon , \end{equation*}
as \(s_N(h)\) is the best approximation for \(h\) in \(L^2\) (Theorem 11.8.8). By the inequality leading up to Bessel,
\begin{equation*} \snorm{s_N(h)-s_N(f)}_2 = \snorm{s_N(h-f)}_2 \leq \snorm{h-f}_2 \leq \epsilon . \end{equation*}
The \(L^2\) norm satisfies the triangle inequality (exercise). Thus, for all \(N \geq N_0\text{,}\)
\begin{equation*} \snorm{f-s_N(f)}_2 \leq \snorm{f-h}_2 + \snorm{h-s_N(h)}_2 + \snorm{s_N(h)-s_N(f)}_2 \leq 3\epsilon . \end{equation*}
Hence, the first claim follows.
Next,
\begin{equation*} \langle s_N(f) , g \rangle = \frac{1}{2\pi} \int_{-\pi}^\pi s_N(f;x) \overline{g(x)} \, dx = \sum_{n=-N}^N c_n \frac{1}{2\pi} \int_{-\pi}^\pi e^{inx} \overline{g(x)} \, dx = \sum_{n=-N}^N c_n \overline{d_n} . \end{equation*}
We need the Schwarz (or Cauchy–Schwarz or Cauchy–Bunyakovsky–Schwarz) inequality for \(L^2\text{,}\) that is,
\begin{equation*} {\abs{\int_a^b f\bar{g}}}^2 \leq \left( \int_a^b \sabs{f}^2 \right) \left( \int_a^b \sabs{g}^2 \right) . \end{equation*}
Its proof is left as an exercise; it is not much different from the finite-dimensional version. So
\begin{equation*} \begin{split} \abs{\int_{-\pi}^\pi f\bar{g} - \int_{-\pi}^\pi s_N(f)\bar{g}} & = \abs{\int_{-\pi}^\pi (f- s_N(f))\bar{g}} \\ & \leq {\left(\int_{-\pi}^\pi \sabs{f- s_N(f)}^2 \right)}^{1/2} {\left( \int_{-\pi}^\pi \sabs{g}^2 \right)}^{1/2} . \end{split} \end{equation*}
The right-hand side goes to 0 as \(N\) goes to infinity by the first claim of the theorem. That is, as \(N\) goes to infinity, \(\langle s_N(f),g \rangle\) goes to \(\langle f,g \rangle\text{,}\) and the second claim is proved. The last claim in the theorem follows by using \(g=f\text{.}\)
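As a sanity check of Parseval for the step function \(h\) of Example 11.8.3: \(\snorm{h}_2^2 = 1\text{,}\) and \(\sum \sabs{\hat{h}(n)}^2\) indeed sums to 1 (a Python sketch, assuming NumPy; the truncation point is arbitrary):

```python
import numpy as np

# for h: c_0 = 0 and |c_n|^2 = ((-1)^n - 1)^2 / (pi n)^2 for n != 0,
# with the factor 2 below accounting for n and -n
lhs = 1.0  # (1/2pi) * integral of |h|^2 over [-pi, pi] is 1, since |h| = 1
rhs = sum(2 * (((-1) ** n - 1) / (np.pi * n)) ** 2 for n in range(1, 100001))
print(lhs, rhs)  # rhs = 0.99999..., approaching lhs as more terms are summed
```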

Exercises 11.8.7 Exercises

11.8.1.

Consider the Fourier series
\begin{equation*} \sum_{k=1}^\infty \frac{1}{2^k} \sin(2^k x) . \end{equation*}
Show that the series converges uniformly and absolutely to a continuous function. Remark: This is another example of a nowhere differentiable function (you do not have to prove that); see G. H. Hardy, Weierstrass’s Non-Differentiable Function, Transactions of the American Mathematical Society, 17, No. 3 (Jul., 1916), pp. 301–325. A thing to notice here is that the \(n\)th Fourier coefficient is \(\nicefrac{1}{n}\) if \(n=2^k\) and zero otherwise, so the coefficients go to zero like \(\nicefrac{1}{n}\text{.}\) See Figure 11.13.

Figure 11.13. Plot of \(\sum_{n=1}^\infty \frac{1}{2^n} \sin(2^n x)\text{.}\)

11.8.2.

Suppose \(f\) is a \(2\pi\)-periodic function that is Riemann integrable on \([-\pi,\pi]\) and continuously differentiable on some open interval \((a,b)\text{.}\) Prove that for every \(x \in (a,b)\text{,}\) we have \(\lim\limits_{N\to\infty} s_N(f;x) = f(x)\text{.}\)

11.8.3.

Prove Corollary 11.8.11. That is, suppose \(f\) is a \(2\pi\)-periodic function that is continuous piecewise smooth near a point \(x\text{;}\) prove that \(\lim\limits_{N\to\infty} s_N(f;x) = f(x)\text{.}\) Hint: See the previous exercise.

11.8.4.

Given a \(2\pi\)-periodic function \(f \colon \R \to \C\text{,}\) Riemann integrable on \([-\pi,\pi]\text{,}\) and \(\epsilon > 0\text{,}\) show that there exists a continuous \(2\pi\)-periodic function \(g \colon \R \to \C\) such that \(\snorm{f-g}_2 < \epsilon\text{.}\)

11.8.5.

Prove the Cauchy–Bunyakovsky–Schwarz inequality for Riemann integrable functions:
\begin{equation*} {\abs{\int_a^b f\bar{g}}}^2 \leq \left( \int_a^b \sabs{f}^2 \right) \left( \int_a^b \sabs{g}^2 \right) . \end{equation*}

11.8.6.

Prove the \(L^2\) triangle inequality for Riemann integrable functions on \([-\pi,\pi]\text{:}\)
\begin{equation*} \snorm{f+g}_2 \leq \snorm{f}_2 + \snorm{g}_2 . \end{equation*}

11.8.7.

Suppose for some \(C\) and \(\alpha > 1\text{,}\) we have a real sequence \(\{ a_n \}_{n=1}^\infty\) with \(\abs{a_n} \leq \frac{C}{n^\alpha}\) for all \(n\text{.}\) Let
\begin{equation*} g(x) \coloneqq \sum_{n=1}^\infty a_n \sin(n x) . \end{equation*}
  1. Show that \(g\) is continuous.
  2. Formally (that is, suppose you can differentiate under the sum) find a solution (formal solution, that is, do not yet worry about convergence) to the differential equation
    \begin{equation*} y''+ 2 y = g(x) \end{equation*}
    of the form
    \begin{equation*} y(x) = \sum_{n=1}^\infty b_n \sin(n x) . \end{equation*}
  3. Then show that this solution \(y\) is twice continuously differentiable, and in fact solves the equation.

11.8.8.

Let \(f\) be a \(2\pi\)-periodic function such that \(f(x) = x\) for \(0 < x < 2\pi\text{.}\) Use Parseval’s theorem to show that
\begin{equation*} \sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6} . \end{equation*}

11.8.9.

Suppose that \(c_n = 0\) for all \(n < 0\) and \(\sum_{n=0}^\infty \sabs{c_n}\) converges. Let \(\D \coloneqq B(0,1) \subset \C\) be the unit disc, and \(\overline{\D} = C(0,1)\) be the closed unit disc. Show that there exists a continuous function \(f \colon \overline{\D} \to \C\) that is analytic on \(\D\) and such that on the boundary of \(\D\) we have \(f(e^{i\theta}) = \sum_{n=0}^\infty c_n e^{in\theta}\text{.}\)
Hint: If \(z=re^{i\theta}\text{,}\) then \(z^n = r^n e^{in\theta}\text{.}\)

11.8.10.

Show that
\begin{equation*} \sum_{n=1}^\infty e^{-1/n} \sin(n x) \end{equation*}
converges to an infinitely differentiable function.

11.8.11.

Let \(f\) be a \(2\pi\)-periodic function such that \(f(x) = f(0) + \int_0^x g\) for a function \(g\) that is Riemann integrable on every interval. Suppose
\begin{equation*} f(x) \sim \sum_{n=-\infty}^\infty c_n \,e^{inx} . \end{equation*}
Show that there exists a \(C > 0\) such that \(\sabs{c_n} \leq \frac{C}{\sabs{n}}\) for all nonzero \(n\text{.}\)

11.8.12.

  1. Let \(\varphi\) be the \(2\pi\)-periodic function defined by \(\varphi(x) \coloneqq 0\) if \(x \in (-\pi,0)\text{,}\) and \(\varphi(x) \coloneqq 1\) if \(x \in (0,\pi)\text{,}\) letting \(\varphi(0)\) and \(\varphi(\pi)\) be arbitrary. Show that \(\lim\limits_{N \to \infty} s_N(\varphi;0) = \nicefrac{1}{2}\text{.}\)
  2. Let \(f\) be a \(2\pi\)-periodic function Riemann integrable on \([-\pi,\pi]\text{,}\) \(x \in \R\text{,}\) \(\delta > 0\text{,}\) and there are continuously differentiable \(g \colon [x-\delta,x] \to \C\) and \(h \colon [x,x+\delta] \to \C\) where \(f(t) = g(t)\) for all \(t \in [x-\delta,x)\) and where \(f(t) = h(t)\) for all \(t \in (x,x+\delta]\text{.}\) Then \(\lim\limits_{N\to\infty} s_N(f;x) = \frac{g(x)+h(x)}{2}\text{,}\) or in other words,
    \begin{equation*} \lim_{N \to \infty} s_N(f;x) = \frac{1}{2} \left( \lim_{t \to x^-} f(t) + \lim_{t \to x^+} f(t) \right) . \end{equation*}

11.8.13.

Let \(\{ a_n \}_{n=1}^\infty\) be such that \(\lim_{n\to \infty} a_n = 0\text{.}\) Show that there is a continuous \(2\pi\)-periodic function \(f\) whose Fourier coefficients \(c_{n}\) satisfy that for each \(N\) there is a \(k \geq N\) where \(\sabs{c_k} \geq a_k\text{.}\)
Remark: The exercise says that if \(f\) is only continuous, there is no “minimum rate of decay” of the coefficients. Compare with Exercise 11.8.11.
Hint: Look at Exercise 11.8.1 for inspiration.