An Elementary Proof of a Classical Information-Theoretic Formula

A renowned information-theoretic formula due to Shannon expresses the mutual information rate of a white Gaussian channel with a stationary Gaussian input as an integral of a simple function of the power spectral density of the channel input. In this paper we give a rigorous yet elementary proof of this classical formula. In contrast to the conventional approaches, which either rely on heavy mathematical machinery or resort to "external" results, our proof, which hinges on a recently proven sampling theorem, is elementary and self-contained, using only well-known facts from basic calculus and matrix theory.


Introduction
Consider the following continuous-time white Gaussian channel:

$$Y(t) = \int_0^t X(s)\, ds + B(t), \qquad t \geq 0, \tag{1}$$

where $\{B(t) : t \in \mathbb{R}_+\}$ denotes the standard Brownian motion, and the channel input $\{X(s) : s \in \mathbb{R}\}$ is an independent stationary Gaussian process with power spectral density $f(\lambda)$. This paper gives an elementary proof of the following classical information-theoretic formula (see, e.g., Theorem 6.7.1 of [11]):

$$\lim_{T \to \infty} \frac{1}{T}\, I\big(X_{[0,T]}; Y_{[0,T]}\big) = \frac{1}{2} \int_{-\infty}^{\infty} \log\big(1 + f(\lambda)\big)\, d\lambda. \tag{2}$$

This renowned formula was first established by Shannon in his seminal work [16] through a heuristic yet rather convincing spectrum-splitting argument, and was later treated more rigorously by numerous authors, predominantly using alternative channel formulations obtained via orthogonal expansion representations in the relevant Hilbert space. Representative work in this direction includes [9, 10, 8, 3, 11], and at the heart of all the approaches therein lies a continuous-time version of the famed Szegő theorem (see, e.g., the theorem on page 139 of [5]). In a different direction, efforts have been devoted to analyzing continuous-time Gaussian channels with tools and techniques from stochastic calculus [2, 12, 6, 7], where the channel mutual information has been found to be linked to an optimal linear filter. These links, together with well-known results from filtering theory [17, 18], can conceivably recover (2).
It appears to us that all existing treatments either rely on heavy mathematical machinery or resort to some "external" results. By comparison, our proof, which hinges on a recently proven sampling theorem (Theorem 3.2 in [15]), is elementary and self-contained, using only well-known facts from basic calculus and matrix theory: the aforementioned sampling theorem enables us to sidestep numerous complications that are otherwise present in the continuous-time regime, and allows us to employ a spectral analysis of finite-dimensional matrices, rather than of the infinite-dimensional operators that some previous approaches have to deal with. Moreover, as elaborated in Section 4, our approach gives rise to a "scalable" version of Szegő's theorem and naturally connects a continuous-time Gaussian channel to its sampled discrete-time versions, thereby promising further applications in more general settings.

A Heuristic Proof
We first explain the aforementioned sampling theorem. For any given $T > 0$ and $n \in \mathbb{N}$, choose evenly spaced sampling times $t_i$, $i = 0, 1, \ldots, n$, such that $t_i = iT/n$, and let $\Delta_{T,n} \triangleq \{t_0, t_1, \ldots, t_n\}$.
Sampling the channel (1) over the time interval $[0, T]$ with respect to $\Delta_{T,n}$, we obtain its sampled discrete-time version:

$$Y(t_i) = \int_0^{t_i} X(s)\, ds + B(t_i), \qquad i = 1, 2, \ldots, n. \tag{3}$$

Loosely speaking, Theorem 2.1 in [15] says that as the above sampling gets increasingly finer, the mutual information of the discrete-time channel (3) converges to that of the original continuous-time channel (1). Note that the mutual information of the channel (3) can be computed as

$$I\big(X_{[0,T]}; Y(t_1), \ldots, Y(t_n)\big) = \frac{1}{2} \log \det(I_n + A_{T,n}), \tag{4}$$

where $I_n$ is the $n \times n$ identity matrix and $A_{T,n}$ is an $n \times n$ matrix whose $(i,j)$-th entry is determined by the covariance structure of the channel input over the sampling intervals. It then follows from the stationarity of $\{X(s)\}$ that the $(i,j)$-th entry of $A_{T,n}$ depends only on $i - j$, where, setting $t_{-i} = -iT/n$ for $i = 1, 2, \ldots, n-1$, we have defined the sequence $\{\gamma_l\}$ so that the $(i,j)$-th entry of $A_{T,n}$ equals $\gamma_{i-j}$. Noting that $A_{T,n}$ is a Hermitian (and Toeplitz) matrix and letting $\psi_1, \psi_2, \ldots, \psi_n$ denote all its eigenvalues, we have

$$\frac{1}{2} \log \det(I_n + A_{T,n}) = \frac{1}{2} \sum_{m=1}^{n} \log(1 + \psi_m). \tag{5}$$

Now, for large $n$, we use the approximations $\psi_m \approx f(m/T)$ for $0 \leq m \leq n/2$, and $\psi_m \approx f((m-n)/T)$ for $n/2 < m < n$, where we have used the definition of

$$R(\tau) \triangleq \int_{-\infty}^{\infty} f(\lambda) e^{2\pi i \lambda \tau}\, d\lambda,$$

the autocorrelation function of $\{X(s)\}$. Adapting some well-known arguments for establishing asymptotic equivalence (see, e.g., [5] or [4]), we can prove that, for large $T$ and large $n$,

$$\frac{1}{2T} \sum_{m=1}^{n} \log(1 + \psi_m) \approx \frac{1}{2} \int_{-\infty}^{\infty} \log\big(1 + f(\lambda)\big)\, d\lambda.$$

Now, collecting all the results above, namely, chaining (a) Theorem 3.3, (b) the identity (4), (c) the identity (5), (d) the approximation (9), (e) the approximation (8), and (f) the definition of the integral, we conclude that, for appropriately chosen large $T$ and large $n$, $\frac{1}{T} I(X_{[0,T]}; Y_{[0,T]})$ is close to the right-hand side of (2), establishing the formula (2). The above proof is by no means rigorous, but, as elaborated in the next section, a refinement with some elementary ε-δ arguments and Fourier-analytic arguments will certainly make it so, which yields a rigorous proof of the classical formula.
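The heuristic computation above can be sanity-checked numerically. The sketch below is illustrative and not from the paper: it assumes the sampled matrix takes the Toeplitz form with $(i,j)$-th entry $(T/n)\, R(t_i - t_j)$, and uses the flat band-limited spectrum $f(\lambda) = P$ for $|\lambda| \leq W$ (so that $R(\tau) = 2WP\,\mathrm{sinc}(2W\tau)$ and the right-hand side of (2) equals $W \log(1+P)$):

```python
import numpy as np

# Flat band-limited spectrum: f(lambda) = P on [-W, W], zero elsewhere.
# Its autocorrelation is R(tau) = 2*W*P*sinc(2*W*tau), with numpy's
# normalized sinc(x) = sin(pi x)/(pi x).
P, W = 1.0, 1.0
T, n = 50.0, 2000                      # observation window and number of samples

t = np.arange(n) * T / n               # evenly spaced sampling times t_i = i*T/n
# assumed Hermitian Toeplitz matrix for the sampled channel: (T/n) * R(t_i - t_j)
A = (T / n) * 2 * W * P * np.sinc(2 * W * (t[:, None] - t[None, :]))

# discrete mutual information per unit time: (1/2T) * log det(I_n + A_{T,n})
sign, logdet = np.linalg.slogdet(np.eye(n) + A)
rate = 0.5 * logdet / T

# claimed limit: (1/2) * integral of log(1 + f) = W * log(1 + P)
limit = W * np.log(1 + P)
print(rate, limit)                     # the two should be close for large T and n
```

For large $T$ and $n$ the two printed values agree up to boundary effects of order $\log T / T$, in line with the Szegő-type approximation of the eigenvalues $\psi_m$ by samples of $f$.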

A Rigorous Proof
First of all, we rigorously state our theorem.
Theorem 3.1. Suppose that both $f(\lambda)$ and $R(\tau)$ are integrable over $\mathbb{R}$. Then the formula (2) holds.
Remark 3.2. It is well known that $f(\lambda)$ and $R(\tau)$ form a Fourier transform pair, and the integrability of either implies that the other is uniformly bounded and uniformly continuous over $\mathbb{R}$. Moreover, it is easy to verify that $f(\lambda)$ is non-negative, and that both $f(\lambda)$ and $R(\tau)$ are symmetric.
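For a concrete instance of the remark (illustrative, not from the paper), take the Gaussian pair $f(\lambda) = e^{-\pi \lambda^2}$ and $R(\tau) = e^{-\pi \tau^2}$ under the convention $R(\tau) = \int f(\lambda) e^{2\pi i \lambda \tau} d\lambda$; the snippet below verifies the transform numerically, along with the bound $|R(\tau)| \leq R(0) = \int f(\lambda)\, d\lambda$:

```python
import numpy as np

# Gaussian Fourier pair: f(lambda) = exp(-pi lambda^2)  <-->  R(tau) = exp(-pi tau^2)
lam = np.linspace(-10.0, 10.0, 40001)
dlam = lam[1] - lam[0]
f = np.exp(-np.pi * lam**2)

def R_num(tau):
    # numerical inverse Fourier transform of f at lag tau
    # (f is symmetric, so only the cosine part contributes)
    return np.sum(f * np.cos(2 * np.pi * lam * tau)) * dlam

taus = np.linspace(-3.0, 3.0, 25)
R_vals = np.array([R_num(u) for u in taus])

# compare against the closed form exp(-pi tau^2)
err = np.max(np.abs(R_vals - np.exp(-np.pi * taus**2)))
print(err)
```

The error is negligible, and the computed $|R(\tau)|$ never exceeds $R(0)$, illustrating the uniform boundedness asserted in the remark.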
We next state the sampling theorem that will be used in our proof, which is a weakened version of Theorem 2.1 in [15]; the latter holds in a more general setting where the sampling times may not be evenly spaced, and moreover, feedback and memory are possibly involved.
Theorem 3.3. For any fixed $T > 0$ and any sequence $\{\Delta_{T,n_k} : k \in \mathbb{N}\}$ satisfying $\Delta_{T,n_k} \subset \Delta_{T,n_{k+1}}$ for any feasible $k$, we have

$$\lim_{k \to \infty} I\big(X_{[0,T]}; \{Y(t) : t \in \Delta_{T,n_k}\}\big) = I\big(X_{[0,T]}; Y_{[0,T]}\big).$$

We are now ready to give the proof of our main result.
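Underlying Theorem 3.3 is the fact that refining a sampling set cannot decrease mutual information, since the coarser samples are a function of the finer ones. A small numerical sketch of this monotonicity, assuming a simplified discrete Gaussian model $Y_i = X_i + Z_i$ with unit-variance noise and an illustrative input covariance $K$, for which $I(X; Y_S) = \frac{1}{2} \log\det(I_{|S|} + K_{S,S})$:

```python
import numpy as np

def mi(K, S):
    # I(X; Y_S) = 0.5 * log det(I + K_{S,S}) for Y_i = X_i + Z_i, Z ~ N(0, I)
    Ks = K[np.ix_(S, S)]
    return 0.5 * np.linalg.slogdet(np.eye(len(S)) + Ks)[1]

n = 64
idx = np.arange(n)
K = 0.9 ** np.abs(idx[:, None] - idx[None, :])   # an illustrative Toeplitz covariance

coarse = list(range(0, n, 4))    # coarse sampling set
fine = list(range(0, n, 2))      # a refinement: coarse is a subset of fine
full = list(range(n))            # all sampling times
assert set(coarse) <= set(fine) <= set(full)

# mutual information is monotone along the nested sampling sets
print(mi(K, coarse), mi(K, fine), mi(K, full))
```

The three printed values are nondecreasing, mirroring the monotone convergence behind the nested-partition hypothesis of Theorem 3.3.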
Proof of Theorem 3.1.Our proof consists of the following several steps.
Step 1. In this step, we show that both $\|A_{T,n}\|_2$ and $\|\hat{A}_{T,n}\|_2$ are bounded from above uniformly over all $T > 0$ and $n \in \mathbb{N}$; namely, there exists $C > 0$ such that for all $T > 0$ and $n \in \mathbb{N}$, $\|A_{T,n}\|_2, \|\hat{A}_{T,n}\|_2 \leq C$. Here $\|\cdot\|_2$ denotes the operator norm induced by the $L^2$-norm for vectors.
It is straightforward to verify (cf. the proof of Lemma 4.1 in [4]) that $\|A_{T,n}\|_2 \leq \max_{\theta} |g_{T,n}(\theta)|$, where $g_{T,n}(\theta) \triangleq \sum_{l} \gamma_l e^{i l \theta}$.
So, to establish the uniform boundedness of $\|A_{T,n}\|_2$, it suffices to prove that $|g_{T,n}(\theta)|$ is bounded from above uniformly over all $\theta$, $T$ and $n$. Towards this end, we note that a direct estimate, valid for any feasible $l_1, l_2$, together with (6), further implies a uniform bound for any $m$. A similar argument then yields that, for all $\theta$, $T$ and $n$, $|g_{T,n}(\theta)|$ is bounded, which implies the uniform boundedness of $\|A_{T,n}\|_2$, and moreover, together with (6), that of $\|\hat{A}_{T,n}\|_2$.
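The Toeplitz estimate behind Step 1, namely that the operator norm of a Hermitian Toeplitz matrix is dominated by the supremum of its symbol $g(\theta) = \sum_l \gamma_l e^{il\theta}$, can be checked numerically. A minimal sketch with the illustrative (assumed) summable coefficients $\gamma_l = \rho^{|l|}$:

```python
import numpy as np

n, rho = 200, 0.6
idx = np.arange(n)
A = rho ** np.abs(idx[:, None] - idx[None, :])    # Hermitian Toeplitz, gamma_l = rho^|l|

# operator (spectral) norm via the eigenvalues of the Hermitian matrix
op_norm = np.max(np.abs(np.linalg.eigvalsh(A)))

# symbol g(theta) = sum_{|l| < n} gamma_l e^{i l theta}, evaluated on a fine grid
theta = np.linspace(-np.pi, np.pi, 4001)
l = np.arange(-(n - 1), n)
gam = rho ** np.abs(l)
g = gam @ np.exp(1j * np.outer(l, theta))
sup_g = np.max(np.abs(g))

print(op_norm, sup_g)   # op_norm is bounded by sup_g (and close to it for large n)
```

The bound $\|A\|_2 \leq \sup_\theta |g(\theta)|$ holds exactly, and the two quantities become close as $n$ grows, which is the discrete-time Szegő phenomenon exploited in the proof.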
Step 2. In this step, we show that both $\|A_{T,n}\|_F^2 / T$ and $\|\hat{A}_{T,n}\|_F^2 / T$ are bounded from above uniformly over all $T > 0$ and $n \in \mathbb{N}$. Here $\|\cdot\|_F$ denotes the Frobenius norm.
To prove the uniform boundedness of $\|A_{T,n}\|_F^2 / T$, we perform a direct estimate in which, for the last inequality, we have used (12).
A similar argument can be used to establish the uniform boundedness of $\|\hat{A}_{T,n}\|_F^2 / T$.
Step 3. In this step, we show that one can first fix a large enough $T$ and then choose a large enough $n$ such that $\|A_{T,n} - \hat{A}_{T,n}\|_F^2 / T$ is arbitrarily small; more precisely, for any $\varepsilon > 0$, there exists $T_0 > 0$ such that for any $T \geq T_0$, there exists $n_0 > 0$ such that for all $n \geq n_0$, $\|A_{T,n} - \hat{A}_{T,n}\|_F^2 / T \leq \varepsilon$. Towards this goal, we first expand the squared Frobenius norm entrywise. In light of the integrability of $R(\tau)$, for any given $\varepsilon' > 0$, there exists $\tau_0 > 0$ beyond which the tail of the corresponding integral is at most $\varepsilon'$. Now, it can be easily verified that for any given $\varepsilon' > 0$, we can first fix a large enough $T$ and then choose a large enough $n > 0$ such that $t_{\lfloor \varepsilon' n \rfloor} \geq \tau_0$, and furthermore such that the ensuing estimate, in which we have used (11) in deriving (a), holds. It then follows that for $T$ and $n$ as above, the desired bound holds.
Step 4. In this step, fixing a polynomial $p(x)$, we show that for any $\varepsilon > 0$, there exists $T_0 > 0$ such that for any $T \geq T_0$, there exists $n_0 > 0$ such that for all $n \geq n_0$, the traces of $p(A_{T,n})$ and $p(\hat{A}_{T,n})$, normalized by $T$, differ by at most $\varepsilon$. To achieve this goal, it suffices to prove that, given any fixed $k$, for any $\varepsilon > 0$, one can first fix a large enough $T$ and then choose a large enough $n$ such that the normalized traces of $A_{T,n}^k$ and $\hat{A}_{T,n}^k$ differ by at most $\varepsilon$. First of all, we note that $A_{T,n}^k - \hat{A}_{T,n}^k$ can be expanded as a sum of cross terms. For the first term, using the well-known fact that for any two compatible matrices $A$ and $B$, $\|AB\|_F \leq \|A\|_2 \|B\|_F$, we obtain the corresponding bound normalized by $T$.
It then follows from Steps 1, 2 and 3 that for any $\varepsilon' > 0$, one can first fix a large enough $T$ and then choose a large enough $n$ such that the first term is at most $\varepsilon'$. A completely parallel argument can be used to establish the same statement for the other terms, which in turn implies our goal in this step.
Step 5. In this step, we finish the proof of the theorem. First of all, let $\{\varepsilon_k\}$ be a monotone decreasing sequence of positive real numbers converging to 0. For each $\varepsilon_k > 0$, we first arbitrarily choose a monotone increasing sequence $\{T_k\}$ of positive real numbers diverging to infinity, and then, applying Theorem 3.3, choose $n_k$ for each $T_k$ such that (16) holds. Then, applying the Weierstrass approximation theorem to the continuous function $\log(1+x)/x$, we choose two polynomials $p_-$ and $p_+$ sandwiching $\log(1+x)/x$ to within $\varepsilon_k$ on the relevant bounded interval, which obviously leads to (17). Re-choosing a larger $T_k$ first and then a larger $n_k$ if necessary, we have, by Step 4, (18). Now, as elaborated in Appendix A, one can show (19) (again re-choosing $n_k$ for each $T_k$ if necessary). Moreover, from (15), we deduce the corresponding estimate for the polynomials applied to $\hat{A}_{T_k,n_k}$; this, together with the integrability of $f(\cdot)$, implies (20), which, together with (16), (17), (18) and (19), implies (21). Finally, similarly as in the derivation of (20), using (15) and the integrability of $f(\cdot)$, we conclude the corresponding limit, which, together with (21), (5) and (14), implies the desired convergence along $\{T_k\}$. The theorem then immediately follows from a typical subsequence argument, as desired.
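The Weierstrass step above can be realized concretely. A sketch under illustrative assumptions (eigenvalues confined to $[0, C]$ with $C = 5$; numpy's `Polynomial.fit` as the approximant; the sandwich obtained by shifting the fit up and down by its maximum error on a grid):

```python
import numpy as np
from numpy.polynomial import Polynomial

C = 5.0                                 # assumed bound on the eigenvalues involved
x = np.linspace(0.0, C, 5001)
# h(x) = log(1+x)/x, extended continuously by h(0) = 1
h = np.where(x > 0, np.log1p(x) / np.maximum(x, 1e-300), 1.0)

p = Polynomial.fit(x, h, deg=12)        # least-squares polynomial approximant
err = float(np.max(np.abs(p(x) - h)))   # uniform error on the grid

p_minus = p - err                       # p_minus <= h <= p_plus on the grid,
p_plus = p + err                        # with gap at most 2*err

print(err)
```

Since $\log(1+x)/x$ is continuous on $[0, C]$, the Weierstrass theorem guarantees such sandwiching polynomials with arbitrarily small gap; the printed error shows that a modest degree already suffices here.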

Concluding Remarks
Some remarks about the approach employed in this work are in order. First, echoing [15], we emphasize that time sampling, which is the key ingredient in our approach, preserves causality in converting a continuous-time Gaussian channel to its discrete-time versions; this stands in contrast to the orthogonal expansion representations in some previous approaches, which destroy temporal causality in the conversion process.
More technically, we note that our proof of Theorem 3.1 has actually established that one can appropriately "scale" $\{T_k\}$ and $\{n_k\}$, with $T_k/n_k$ shrinking to 0 as $k$ tends to infinity (i.e., the sampling gets finer), such that

$$\lim_{k \to \infty} \frac{1}{T_k} \log \det\big(I_{n_k} + A_{T_k,n_k}\big) = \int_{-\infty}^{\infty} \log\big(1 + f(\lambda)\big)\, d\lambda.$$

Such a result can be regarded as a "scalable" version of Szegő's theorem, which seems to serve as a bridge connecting the discrete-time and continuous-time Szegő theorems. As argued above, we believe that, other than recovering a classical information-theoretic formula with an elementary proof, our approach promises further applications in more general settings, including possible extensions of the formula (2) to continuous-time Gaussian channels with feedback and memory [11], with multiple users [15], or with multiple inputs and multiple outputs [1].
Acknowledgement. We would like to thank Professor Shunsuke Ihara for insightful discussions and for pointing out relevant references.

A Proof of (19)
To prove (19), it suffices to prove that, for any $q = 1, 2, \ldots$,

$$\lim_{k \to \infty} \frac{1}{T_k} \mathrm{tr}\big(\hat{A}_{T_k,n_k}^{\,q}\big) = \int_{-\infty}^{\infty} f^q(\lambda)\, d\lambda. \tag{22}$$

For illustrative purposes, we now prove (22) in great detail for the case $q = 2$. First of all, by (6), we have, for any $m = 1, 2, \ldots$, an explicit expression for the corresponding eigenvalue, and furthermore, for (a) in the ensuing computation, we have used the easily verifiable fact that $\sum_{m=1}^{n_k} e^{-2\pi i m (l+j)/n_k}$ is equal to $n_k$ if $l + j$ equals 0 or $n_k$, and is equal to 0 otherwise. Noting this, with a routine continuity argument using the definition of the integral, we arrive at (24), where we have used the uniform boundedness and uniform continuity of $R(\cdot)$ for the first equality, and the last equality follows from the fact that $f^2(\cdot)$ and $R * R(\cdot)$ are a Fourier transform pair. Moreover, using the absolute summability of $\{\gamma_l\}$ and the fact that $\lim_{\tau \to \infty} R(\tau) = 0$ (which follows from the Riemann–Lebesgue lemma), we obtain (25).

We next prove (22) for a general $q \geq 2$. Since the arguments are more tedious than, yet completely parallel to, the case $q = 2$, we only outline the major steps below. In a parallel manner as above, we have, for any $q \geq 2$, a decomposition (26) in which $S_1$ is the summation of all terms of the form $\gamma_{j_1} \gamma_{j_2} \cdots \gamma_{j_q}$ satisfying $j_1 + j_2 + \cdots + j_{q-1} = j_q$, and $S_2$ is the summation of all the "remaining" terms. Then, similarly as in deriving (24), we deduce (27), and similarly as in deriving (25), we deduce (28). Finally, (22) then follows from (26), (27) and (28), which in turn implies (19), as desired.
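The $q = 2$ case of (22) is, in essence, Parseval's identity $\int R^2(\tau)\, d\tau = \int f^2(\lambda)\, d\lambda$, and can be sanity-checked numerically. The sketch below is illustrative, again assuming the Toeplitz model with entries $(T/n) R(t_i - t_j)$ and using the Gaussian pair $f(\lambda) = e^{-\pi \lambda^2}$, $R(\tau) = e^{-\pi \tau^2}$, for which $\int f^2(\lambda)\, d\lambda = 1/\sqrt{2}$:

```python
import numpy as np

T, n = 30.0, 1500
t = np.arange(n) * T / n
# assumed Toeplitz model for the sampled matrix: (T/n) * R(t_i - t_j)
A = (T / n) * np.exp(-np.pi * (t[:, None] - t[None, :]) ** 2)

# tr(A^2)/T equals the squared Frobenius norm over T for a symmetric matrix
val = np.sum(A * A) / T

target = 1.0 / np.sqrt(2.0)   # \int f^2(lambda) d lambda for f = exp(-pi lambda^2)
print(val, target)
```

The two printed values agree up to an edge effect of order $1/T$, consistent with the limit asserted in (22) for $q = 2$.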