Matched Multiuser Gaussian Source-Channel Communications via Uncoded Schemes

We investigate whether uncoded schemes are optimal for Gaussian sources on multiuser Gaussian channels. In particular, we consider two problems: the first is to send correlated Gaussian sources over a Gaussian broadcast channel, where each receiver is interested in reconstructing only one source component (or one specific linear function of the sources) under the mean squared error distortion measure; the second is to send vector Gaussian sources over a Gaussian multiple-access channel, where each transmitter observes a noisy combination of the source, and the receiver wishes to reconstruct the individual source components (or individual linear functions) under the mean squared error distortion measure. It is shown that when the channel parameters satisfy certain general matching conditions, the induced distortion tuples are on the boundary of the achievable distortion region, and thus optimal. Instead of following the conventional approach of attempting to characterize the achievable distortion region, we ask whether and how a match can be effectively determined. This decision-problem formulation helps to circumvent the difficult optimization problem often embedded in region-characterization problems, and also leads us to focus on the critical conditions in the outer bounds that make the inequalities hold with equality, which effectively decouples the overall problem into several simpler sub-problems.


Introduction
Although the source-channel separation architecture is asymptotically optimal in the point-to-point communication setting [1] as well as in several classes of multiuser communication settings (see e.g., [2] and references therein), uncoded schemes have several particularly attractive properties. Firstly, they have very simple encoders and decoders; secondly, they belong to the so-called zero-delay codes, which avoid the long delay required to approach the asymptotic performance of separation-based schemes; lastly, they are in fact optimal in some settings where the separation-based schemes are not (see e.g., [3]).
It was shown in [4] that uncoded schemes are optimal when certain matching conditions involving the source probability distribution, the channel transition probability distribution, the channel cost function and the distortion measure are satisfied. Though the focus in [4] was mainly on the point-to-point setting, recent results [5][6][7][8] suggest that the concept of matching indeed carries over to the multiuser case. In fact, in multiuser settings, matching may occur naturally when the distortion measure, the channel cost function and the source distribution are all fixed, and the channel parameters, which represent physically meaningful quantities, satisfy certain conditions. In this work, we consider such matching; particularly, the sources and the channels are Gaussian, the channel constraints are on the expected average signal power, the distortion measure is the mean squared error (MSE), and only the channel parameters, such as the channel amplification factors and the additive noise powers, are allowed to vary.
In this context, of interest is whether, for a fixed source and fixed coding parameters, the distortion vector thus induced is on the boundary of the achievable distortion region (and thus optimal). More specifically, we seek to answer the following questions:
• Is there a set of (explicitly) computable conditions that can be used to certify a fixed uncoded scheme to be optimal for a given source and channel pair?
• If so, is there a non-trivial set of channels that satisfy such conditions for a given source and uncoded scheme pair?
We shall refer to such channels as "matched channels"; a dual question is to ask for "matched sources", but in the context of the problems considered here the dual question is notationally more involved, and thus we choose to investigate the problems from the perspective of "matched channels". One can also ask for "matched distortion measures", similar to the approach taken in [4]; however, in the Gaussian setting we consider here, fixing the MSE distortion is practically more important and well motivated. The set of matched channels should be distinguished from the complete set of channels for which the given uncoded scheme is optimal. The former may be a strict subset of the latter, since the matching conditions are only sufficient for an uncoded scheme to be optimal, and usually depend on the specific outer-bounding technique employed. Characterizing the latter region is naturally more difficult than answering the questions we posed above.
The two questions posed above are in essence the two facets of the same question. Since we only provide conditions for matching, or in other words, sufficient conditions for the scheme to be optimal, the set of matched channels may in fact be empty. A trivial condition to answer the first question is simply an impossible one such that we would never be able to certify a channel to be matched. Thus the second question is important, and we show indeed for the two problems considered here, there are non-trivial channels that match the source and the uncoded scheme.
Traditionally, research in information theory asks for the characterization of a certain achievable region, for which one first derives an expression for an outer bound and an expression for an inner bound, and then compares the two. This approach can be challenging because it usually involves optimization over a set of parameters, and solving such an optimization problem explicitly can be difficult. It is not clear whether the obstacle mainly stems from the intractable nature of the underlying communication problem, or is mainly caused by the embedded optimization problem.
The aforementioned difficulty motivates the formulation of the first question, which is a decision problem instead of an optimization problem. An analogy of this situation can be found in algorithm research in computer science, where instead of asking whether an optimization problem can be solved in polynomial time, one asks whether a decision (e.g., whether a solution value is above a threshold) can be made in polynomial time. Our problem formulation naturally leads to a different approach in the investigation. Instead of comparing the inner and outer bounds through their expressions, we focus on the necessary conditions for the outer bound to become tight, i.e., the conditions under which the information inequalities hold with equality. With a fixed source and fixed coding parameters, the coding vector can be substituted into these conditions, and the necessary and sufficient conditions for such equality can be derived. The outer bounds naturally provide certain "decoupled" conditions, which significantly simplifies the overall task. Though this approach may have been used implicitly by many researchers in the past, its effectiveness becomes particularly evident in our investigation of uncoded schemes in the joint source-channel communication setting.
In the rest of the paper, we focus specifically on two joint source-channel coding problems using the approach outlined above. The first problem is to send correlated Gaussian sources over a Gaussian broadcast channel, where each receiver is interested in reconstructing only one source component (or equivalently one specific linear function of the source) under the MSE distortion measure. The second problem is to send vector Gaussian sources over a Gaussian multiple-access channel, where each transmitter observes a noisy combination of the source, i.e., a case of the vector CEO problem, and the receiver wishes to reconstruct the source components (or equivalently linear functions of the source components) under the MSE distortion measure. General conditions for matching are derived, which either include or generalize well-known existing results on the optimality of uncoded schemes in the multiuser setting. Particularly notable are the following cases:
• The first problem generalizes the two-user case considered in [5] and [6] to the M-user case, for which we show that the uncoded scheme is optimal for a large set of sources and channels; our results reveal that the uncoded scheme can still be optimal when some source components are negatively correlated.
• The results on the second problem include as special cases the symmetric scalar Gaussian CEO problem [7], the problem of sending bivariate Gaussian sources on a Gaussian multiple-access channel [8], and that of sending remote (noisy) bivariate Gaussians on a Gaussian multiple-access channel [9]. Our results reveal that, in addition to the symmetric case considered in [7], the uncoded scheme is also optimal when the sensor observation quality is proportional to the channel quality. These results also allow the sensor observations to have a more general correlation structure and to be noisy, thus extending the results in [8] and [9]. When viewed from the perspective of computation, our results also provide new characterizations for the problem of computing linear functions of Gaussian random variables on Gaussian multiple-access channels, considered in [10] and [11].
Notationally, we write a source S at time n as S[n], and a length-N vector as S^N. For a set of coefficients (α_1, α_2, . . . , α_M), we sometimes write it in (column) vector form as ᾱ. For a real matrix Σ, we write its transpose as Σ^t. The positive semidefinite order is denoted by ⪰.

Correlated Gaussian Sources on a Gaussian Broadcast Channel
In this section we consider the problem of broadcasting correlated Gaussian sources on a Gaussian broadcast channel, which can be described as follows; see also Fig. 1 for an illustration. Let the zero-mean Gaussian source be (S_1[n], S_2[n], . . . , S_M[n]) with covariance matrix Σ_{S_1,S_2,...,S_M}, which is assumed to be full rank. The channel is given by

Y_m[n] = X[n] + Z_m[n], m = 1, 2, . . . , M,

where (Z_1, Z_2, . . . , Z_M) are zero-mean additive noises which are mutually independent, with variances σ²_{Z_1} ≤ σ²_{Z_2} ≤ . . . ≤ σ²_{Z_M}, respectively; the channel input must satisfy an average power constraint (1/N) Σ_{n=1}^N E X²[n] ≤ P. The transmitter encodes the length-N source vectors (S^N_1, S^N_2, . . . , S^N_M) into a length-N channel input vector X^N, and the m-th receiver reconstructs from the channel output vector Y^N_m the source vector S^N_m as Ŝ^N_m, resulting in a distortion D_m = (1/N) Σ_{n=1}^N E(S_m[n] − Ŝ_m[n])². We omit the formal problem definition using encoding and decoding functions, which is standard and can be obtained by extending that in, for example, [6].
The uncoded scheme of interest has the form

X[n] = Σ_{m=1}^M α_m S_m[n], with E X²[n] = P.

In other words, at each time instance, the channel input is simply a linear combination of the source components with coefficients (α_1, α_2, . . . , α_M), such that the resulting signal has an expected variance equal to the power constraint P. We shall assume α_m ≠ 0. The decoders simply compute the MMSE estimates Ŝ_m[n] = E[S_m[n] | Y_m[n]]. The main result is summarized in the following theorem, which gives a matching condition in a positive semidefinite form.
Theorem 1. A Gaussian broadcast channel is said to be matched to a given source and an uncoded scheme with non-zero parameters ᾱ, and the distortion vector induced by the given scheme is on the boundary of the achievable distortion region and thus optimal, if the matrix Σ^(0) is positive semidefinite, where Σ^(0) is built from the entries of the symmetric matrix Σ_{V_1,V_2,...,V_M} specified in Section 2.2.

This theorem establishes a condition that is sufficient to guarantee that a distortion vector induced by an uncoded scheme is on the boundary of the achievable distortion region, and is thus an optimal solution. The matrix Σ_{V_1,V_2,...,V_M} may seem mysterious at first sight; however, the reason for introducing this matrix will become clear shortly.
This theorem clearly answers our first question regarding conditions that can be used to certify whether a given uncoded scheme is optimal. In fact, it also provides clues to the second question, regarding whether there exist non-trivial channels for which such matching is possible. Indeed, in Sections 2.3 and 2.4 we establish several properties of matched channels, through which an answer to the second question is given. Before presenting those results, the proof of this theorem is presented in two parts: the critical conditions in a novel outer bound are outlined in Section 2.1, and then the conditions for the bound to hold with equality under the uncoded scheme are analyzed in Section 2.2. The proof details for the outer bound are relegated to the Appendix.
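Before turning to the proof, the uncoded scheme and the per-receiver scalar MMSE decoding can be sketched numerically. This is a minimal illustration, not the paper's example: the covariance matrix, coefficients, power and noise levels below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper's examples).
Sigma_S = np.array([[1.0, 0.5, 0.2],
                    [0.5, 1.0, 0.5],
                    [0.2, 0.5, 1.0]])   # source covariance, full rank
alpha = np.array([1.0, 1.0, 1.0])       # combining coefficients (unscaled)
P = 2.0                                 # channel power constraint
sigma2_Z = [0.5, 1.0, 2.0]              # per-receiver noise powers

# Scale alpha so that E[X^2] = alpha^t Sigma_S alpha = P.
alpha *= np.sqrt(P / (alpha @ Sigma_S @ alpha))

N = 200_000
S = rng.multivariate_normal(np.zeros(3), Sigma_S, size=N)
X = S @ alpha                           # uncoded channel input, one per time

# Receiver m sees Y_m = X + Z_m and forms the scalar MMSE estimate
#   S_hat_m = Cov(S_m, X) / (P + sigma2_Zm) * Y_m.
cov_SX = Sigma_S @ alpha                # Cov(S_m, X) for each m
for m in range(3):
    Y = X + rng.normal(scale=np.sqrt(sigma2_Z[m]), size=N)
    S_hat = cov_SX[m] / (P + sigma2_Z[m]) * Y
    D_emp = np.mean((S[:, m] - S_hat) ** 2)
    D_mmse = Sigma_S[m, m] - cov_SX[m] ** 2 / (P + sigma2_Z[m])
    print(f"receiver {m + 1}: empirical {D_emp:.4f} vs MMSE {D_mmse:.4f}")
```

The empirical distortions should agree with the closed-form MMSE values up to simulation noise.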

Extracting the Critical Conditions from the Outer Bound
In order to obtain the matching condition, we first derive a novel outer bound for this problem. An important technique in the derivation of this outer bound is the introduction of certain appropriate random variables outside of the original problem. This approach is partly motivated by our previous work [12][13][14][15], and can further be traced back to Ozarow [16]. Consider M zero-mean Gaussian random variables (W_1, W_2, . . . , W_M), independent of everything else, with covariance matrix Σ_{W_1,W_2,...,W_M}, and write U_m[n] = S_m[n] + W_m[n]. The outer bound will be written as a necessary condition that any distortion vector has to satisfy. For this purpose, we bound the quantity E(Σ_{W_1,W_2,...,W_M}) for any encoding and decoding functions, where we set σ²_{Z_0} = 0 for notational simplicity. An almost identical quantity was used in [12] to obtain an approximate characterization of the distortion region of the Gaussian broadcast problem with bandwidth mismatch. We shall upper-bound this quantity using the channel properties and lower-bound it using the source reconstruction requirements, then combine the two to obtain an eventual outer bound.
This quantity can be upper-bounded as given in the Appendix as (10), with equality if and only if (11) and (12) hold, and the conditions (13) stemming from the entropy power inequality hold with equality. The conditions in (13) are standard, as Bergmans [17] also used the entropy power inequality to establish the Gaussian broadcast channel capacity, and in general a Gaussian codebook suffices to make them equalities. The condition (12) intuitively requires that the power is fully utilized. The condition (11) is, however, rather peculiar: it essentially requires the noisy source (U^N_1, U^N_2, . . . , U^N_M) to be as useful as the real source (S^N_1, S^N_2, . . . , S^N_M) in determining the channel output Y^N_M. The quantity E(Σ_{W_1,W_2,...,W_M}) can also be lower-bounded as given in the Appendix, where its individual summands are bounded as in (14), with equality if and only if (15) and (16) hold. The conditions in (15) are standard, and can be viewed as requiring the codes to achieve the given distortions with equality; the conditions in (16) are, however, peculiar. Combining (10) and (14), we obtain an outer bound, or rather outer bounds, since each choice of the auxiliary random variables (W_1, W_2, . . . , W_M) provides one specific outer bound. In the approach we shall take, the precise form of this outer bound is less important than the extracted matching conditions (11), (12), (13), (15) and (16). In fact, the conditions (12), (13) and (15) can be satisfied simply by choosing a proper jointly Gaussian coding scheme, while the conditions (11) and (16) are the effectual non-trivial conditions. Note that, since physical degradedness is equivalent to stochastic degradedness in the broadcast setting, we have the Markov string

X^N → Y^N_1 → Y^N_2 → · · · → Y^N_M.
This Markov string is not sufficient to guarantee (11) and (16), and thus they require special attention.

The Forward Matching Condition
We first introduce some additional notation and make a few observations. Notice that, due to the power constraint, the coefficient vector ᾱ ≜ (α_1, α_2, . . . , α_M)^t should satisfy ᾱ^t Σ_{S_1,S_2,...,S_M} ᾱ = P, and it follows that E[S_m|X] = β_m X with β_m = E[S_m X]/P. Due to the jointly Gaussian distribution of the uncoded scheme, we can write

U_m = β_m X + (S_m − β_m X) + W_m,

where the three components are mutually independent, since β_m X = E[S_m|X]; we have also omitted the time index [n] to simplify the notation. It follows that the covariance matrix of (U_1, U_2, . . . , U_M) given Y_m can be decomposed accordingly; denote by γ_{m,j} the entries of the matrix Σ_{V_1,V_2,...,V_M} appearing in this decomposition. With the above observations, we now return to the derivation of the forward matching conditions. As mentioned earlier, we need to substitute the random variables specified by the uncoded scheme, i.e., assigning X[n] = Σ_{m=1}^M α_m S_m[n], into the critical conditions (11), (12), (13), (15) and (16) in order to identify the matching conditions. It is straightforward to see that (12), (13) and (15) indeed hold with equality, due to the jointly Gaussian distribution of the uncoded scheme and the chosen coefficients. Thus we only need to focus on (11) and (16), which in the context of the uncoded scheme are equivalent to the single-letter forms (23) and (24). To satisfy the condition (24) with the jointly Gaussian uncoded scheme, the off-diagonal entries γ_{m,j} are determined for any m = 2, 3, . . . , M. It remains to determine the diagonal entries: in order to satisfy the condition (23) with equality, where we have taken into account the fact that Σ_{S_1,S_2,...,S_M} is full rank, the diagonal entry γ_{m,m} = σ²_{V_m} can be determined for any m = 1, 2, . . . , M, since α_m ≠ 0. Thus the conditions (23) and (24) being equalities uniquely specify the matrix Σ_{V_1,V_2,...,V_M}. Conversely, as long as the matrix Σ^(0) is positive semidefinite, the conditions (23) and (24) hold with equality and the corresponding auxiliary random variables (W_1, W_2, . . . , W_M) can be found, so the outer bound derived previously is tight.
This is exactly the matching condition given in Theorem 1.
Remark: The outer bound conditions (11) and (16), in the context of the uncoded scheme, provide two constraints on the matrix Σ_{V_1,V_2,...,V_M}. Their effects on the matrix are largely decoupled: the condition that (16) holds with equality determines the off-diagonal entries of Σ_{V_1,V_2,...,V_M}, while the condition (11) determines its diagonal entries. This decoupling effect is particularly helpful in deriving the matching conditions. In the second problem we consider in the next section, i.e., the multiple-access channel problem, this decoupling effect is even more pronounced.

Cholesky Factorization and a Necessary Condition For Matching
The condition given in Theorem 1 is in a positive semidefinite form; however, due to the specific problem structure, it can also be represented as a set of recursive conditions, which is discussed in this section. This alternative representation also leads to a necessary condition for matching to hold, which plays an instrumental role in several results given in Section 2.4, where we answer the second question regarding the existence of a non-trivial set of matched channels.
Determining whether a matrix is positive semidefinite is equivalent to computing its LDL decomposition and checking whether the resulting diagonal matrix has only non-negative entries; in particular, this is the case for the matrix Σ^(0). Computationally, this can be accomplished with the Cholesky factorization [19] on the matrix Σ^(0). Here we provide an intuitive description of the Cholesky factorization in the context of the problem being considered, and its conceptual interpretation as a recursive threshold determination for the channel to yield a matching.
In the first step of the Cholesky factorization, we use symmetric column and row Gaussian elimination to eliminate all the entries of the M-th column and the M-th row, except the diagonal entry. Denote the resulting upper-left (M − 1) × (M − 1) matrix after this first step as Σ^(1). A necessary condition for the matrix Σ^(0) to be positive semidefinite is that either its lower-right entry is strictly positive, or all the entries on its last column are zero. Notice that this condition only involves σ²_{X|Y_M}, or equivalently the channel noise power σ²_{Z_M}, which yields a necessary condition on σ²_{Z_M}. Continuing the Cholesky factorization on Σ^(1), a similar necessary condition is that either its lower-right entry is strictly positive, or the entries on the (M − 1)-th row of Σ^(1) are zero; as in the previous step, this yields a condition on σ²_{Z_{M−1}}. Continuing this process yields a set of threshold conditions (with f^(0)(P, ᾱ) denoting the threshold function of the first step), and the matrix Σ^(0) is positive semidefinite if and only if all such threshold conditions are satisfied. This can be viewed as a recursive threshold-checking (or determination) procedure: in every step, the channel noise power σ²_{Z_m} needs to be larger than the threshold determined by (σ²_{Z_M}, σ²_{Z_{M−1}}, . . . , σ²_{Z_{m+1}}) to yield a matching. Given the above observation, it is natural to speculate that if a channel is matched, then any more noisy channel also induces a match. This intuition is in fact correct, and the statement is made rigorous in the next section as Corollary 1.
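The recursive elimination described above can be implemented directly. The sketch below is a generic LDL-style sweep (our own helper, eliminating the last row and column first, as in the text) that returns the pivots, whose non-negativity plays the role of the recursive threshold conditions.

```python
import numpy as np

def ldl_psd_check(A, tol=1e-12):
    """Recursively eliminate the last row/column, as in the Cholesky/LDL
    sweep described above, and report whether A is positive semidefinite.

    Returns (pivots, is_psd): A is PSD iff every pivot is non-negative,
    with a zero pivot allowed only on an (effectively) zero row/column.
    """
    A = np.array(A, dtype=float, copy=True)
    pivots = []
    for k in range(A.shape[0] - 1, -1, -1):
        d = A[k, k]
        pivots.append(d)
        if d > tol:
            # Symmetric elimination of row/column k.
            A[:k, :k] -= np.outer(A[:k, k], A[k, :k]) / d
        elif np.any(np.abs(A[k, :k]) > tol) or d < -tol:
            # Negative pivot, or zero pivot with nonzero off-diagonals.
            return pivots, False
    return pivots, True
```

For instance, `ldl_psd_check([[2, 1], [1, 2]])` reports positive semidefinite, while `ldl_psd_check([[1, 2], [2, 1]])` does not.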
One necessary condition for a matching to exist is that the matrix Σ V 1 ,V 2 ,...,V M is positive semidefinite. We can thus apply the Cholesky factorization technique on this particular matrix to obtain a necessary condition for matching to exist.
Note that this condition is essentially independent of the channel, as long as the channel is not perfect. This lemma is proved in the Appendix.

Properties and Existence of Matched Channels
With Lemma 1, we can establish several properties of the set of matched channels, given next as corollaries to Theorem 1. Their proofs are provided in the Appendix. These properties essentially provide an answer to the second question posed earlier, and we shall further illustrate such sources and channels using an example.

Corollary 1. If an uncoded scheme is matched on a channel with noise powers (σ²_{Z_1}, σ²_{Z_2}, . . . , σ²_{Z_M}), then it is matched, and thus optimal, on any channel with noise powers σ̃²_{Z_m} ≥ σ²_{Z_m}, m = 1, 2, . . . , M.
The corollary reveals a property of matched channels: once a channel is matched, any channel with more noise is also a matched channel and thus the uncoded scheme is optimal. The next corollary states, from the perspective of only the source and the uncoded scheme parameters, the necessary and sufficient condition for matching to exist.
Corollary 2. Matching (on a broadcast channel with finite noise powers) exists if and only if α_i β_i > 0 for all i and the matrix ΠΣ_{S_1,S_2,...,S_M}Π has its largest eigenvalue equal to 1 with multiplicity 1, where Π is a diagonal matrix whose diagonal entries are determined by the source and the uncoded-scheme parameters. Moreover, if the above condition holds, then any channel with sufficiently large noise powers σ²_{Z_m} is matched; in this case the matrix ΠΣ_{S_1,S_2,...,S_M}Π has a positive eigenvector associated with the eigenvalue 1, such that 1 is its largest eigenvalue with multiplicity 1 (by the Perron-Frobenius theorem [20]).
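The spectral condition in Corollary 2 is easy to test numerically. In the sketch below the diagonal entries of Π are taken as an input, since their exact expression is given in the corollary; the scaling used to construct a test case is hypothetical.

```python
import numpy as np

def matching_exists(Pi_diag, Sigma_S, tol=1e-9):
    # Corollary 2's spectral test: Pi @ Sigma_S @ Pi must have largest
    # eigenvalue exactly 1, with multiplicity 1.
    Pi = np.diag(Pi_diag)
    w = np.linalg.eigvalsh(Pi @ Sigma_S @ Pi)   # ascending order
    return abs(w[-1] - 1.0) < tol and w[-2] < 1.0 - tol

# Example: scale a 2x2 covariance so its top eigenvalue maps to 1.
Sigma_S = np.array([[1.0, 0.5],
                    [0.5, 1.0]])                 # eigenvalues 1.5 and 0.5
Pi_diag = np.full(2, 1.0 / np.sqrt(1.5))         # maps them to 1 and 1/3
print(matching_exists(Pi_diag, Sigma_S))         # True
```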
Different from the case discussed in the previous remark, the next corollary gives another sufficient condition for matching to occur when the sources and the coding parameters satisfy the same positive correlation condition.
Any channel with noise powers σ²_{Z_m} ≥ σ²_{Z*_m}, m = 1, 2, . . . , M, is matched.

Remark: σ²_{Z*_m} as defined above may in fact be negative. However, this does not cause any discrepancy, due to the requirement σ²_{Z_m} ≥ 0. Note that β²_M P < σ²_{S_M} unless α_1 = α_2 = . . . = α_{M−1} = 0, which would contradict our assumption; it thus follows that σ²_{Z*_M} > 0 always holds under the condition in the corollary.

Remark: Consider the symmetric case where σ²_{S_i} = σ²_S, σ_{S_iS_j} = ρσ²_S, α_i = α and E[S_i|X] = βX, for i = 1, 2, . . . , M. In this case, a necessary and sufficient condition for matching is given by (35). To see this, notice that β can be computed explicitly, and by checking the first condition in the Cholesky factorization, it is easily verified that (35) is a necessary condition for matching. Conversely, from Corollary 3, it is seen that it suffices to choose the noise powers accordingly for m = 1, 2, . . . , M, which is exactly condition (35).

An Example: A Source with Three Components
Let us consider a source with three components, whose covariance matrix takes one of two forms, both parameterized by correlation coefficients ρ_1 and ρ_2, and further assume that the coefficients are chosen as α_1 = α_2 = α_3 = 1 in the uncoded scheme.
In addition to the constraint that the matrix Σ_{S_1,S_2,S_3} must be positive definite, for a matching to exist, the condition in Corollary 2 must be satisfied. It can be shown that, among the eigenvalues of ΠΣ_{S_1,S_2,S_3}Π, we must have λ_2 < 1 and λ_3 < 1. In the Appendix, we show that the valid choices are the (ρ_1, ρ_2) pairs satisfying the conditions given there. The corresponding region is plotted in Fig. 2. Notice that the two matrices are equivalent for the purpose of determining whether matching is possible; thus the region in Fig. 2 is valid for both cases. Next, let us fix a (ρ_1, ρ_2) pair, and consider the region of (σ²_{Z_2}, σ²_{Z_3}) pairs for which matching occurs. The tradeoff can be computed explicitly, and is illustrated in Fig. 3 for (ρ_1, ρ_2) = (1/2, 1/6). The circles in the plots give the channels specified by Corollary 2. The channels given by Corollary 3 can be computed directly (shown as the dots), and are loose in the first case, but on the lower boundary (in fact an extreme point) in the second case. Since σ²_{Z_3} ≥ σ²_{Z_2}, we also include this boundary in the plot. For the first case, the boundary σ²_{Z_2} < P is also shown, while for the second, the lower bound y ≥ 16/15 required by the function f^(0)(P, ᾱ) in the first step of the Cholesky factorization is shown. The channels for which matching occurs are those inside the "fan" regions. Note that there is a tension between the noise powers σ²_{Z_2} and σ²_{Z_3} for matching to occur for the fixed source and uncoded scheme.

Vector Gaussian CEO on a Gaussian Multiple-Access Channel
In this section we consider the problem of sending correlated Gaussian sources on a Gaussian multiple-access channel, where the transmitters observe noisy linear combinations of the source components; the channel is given by

Y[n] = Σ_{l=1}^L δ_l X_l[n] + Z[n],

where δ_l > 0. The receiver wishes to reconstruct (S_1, S_2, . . . , S_M) under the MSE distortion measure. The parameters γ_{m,l} can be conveniently written in a matrix form Γ, and computed as

Γ = Σ_{(S_1,S_2,...,S_M),(T_1,T_2,...,T_L)} Σ⁻¹_{T_1,T_2,...,T_L},

where Σ_{(S_1,S_2,...,S_M),(T_1,T_2,...,T_L)} is the cross-covariance matrix between the vectors (S_1, S_2, . . . , S_M) and (T_1, T_2, . . . , T_L).
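Under the standard jointly Gaussian linear-MMSE formula, the matrix Γ above can be computed directly from the cross-covariance and the observation covariance; the numeric values in this sketch are hypothetical.

```python
import numpy as np

# Linear-MMSE matrix: for zero-mean jointly Gaussian S and T,
# E[S | T] = Gamma @ T with Gamma = Sigma_{S,T} @ Sigma_T^{-1}.
Sigma_T = np.array([[1.0, 0.3],
                    [0.3, 1.0]])       # covariance of (T_1, T_2)
Sigma_ST = np.array([[0.8, 0.2],
                     [0.1, 0.6]])      # cross-covariance of (S_1, S_2) with T

Gamma = Sigma_ST @ np.linalg.inv(Sigma_T)

# Sanity check: the residual S - Gamma @ T is uncorrelated with T,
# i.e., Sigma_ST - Gamma @ Sigma_T vanishes.
print(np.allclose(Gamma @ Sigma_T, Sigma_ST))   # True
```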
Notice that the problem can be equivalently formulated as computation of linear functions of Gaussian sources on the multiple-access channel. In this alternative setting, the functions to be computed are (S 1 , S 2 , . . . , S M ), which can be represented as noisy linear functions of the sensor observations (T 1 , T 2 , . . . , T L ). This alternative formulation is notationally more involved in the current problem setting, but we shall explore this connection in a separate work.
We assume M ≤ L, and consider the case where the matrices Σ_{T_1,T_2,...,T_L} and Σ_{(S_1,...,S_M),(T_1,...,T_L)} are of full rank. The uncoded scheme of interest is

X_l[n] = η_l √(P_l/ψ_{l,l}) T_l[n], l = 1, 2, . . . , L,

where η_l is either +1 or −1, to be specified next. In other words, each sensor sends its noisy observation directly using the full power, but it can choose whether to negate its observations. The induced coefficients ᾱ follow accordingly, and we assume α_m ≠ 0, m = 1, 2, . . . , M, which is true in general except in certain degenerate cases. Our main result on this problem is summarized in the following theorem.

Theorem 2.
A Gaussian multiple-access channel is said to be matched to a given Gaussian source and an uncoded scheme with parameters η̄, and the distortion vector induced by the given scheme is on the boundary of the achievable distortion region and thus optimal, if:
1. η_l η_k ψ_{l,k} ≥ 0, 1 ≤ l < k ≤ L;
2. the vector (δ_1 η_1 √(P_1/ψ_{1,1}), δ_2 η_2 √(P_2/ψ_{2,2}), . . . , δ_L η_L √(P_L/ψ_{L,L})) Σ_{T_1,T_2,...,T_L} is in the row space of the cross-covariance matrix Σ_{(S_1,...,S_M),(T_1,...,T_L)};
3. the channel noise power σ²_Z is above a threshold determined by the ρ_{m,j}'s, which are the entries of the matrix ΠΣ_{S̃_1,S̃_2,...,S̃_M}Π.
These conditions can be intuitively explained as follows: condition one guarantees that the channel inputs from all transmitters add up coherently; condition two stems from the requirement that the noisy observations should serve the same role as the underlying source for the chosen power constraints and amplification factors, i.e., as if the observation noise did not exist; condition three is similar to the effect in the previous problem, where once a channel is matched, a more noisy channel also induces a match.
When all ψ_{l,k} ≥ 0, we can simply choose η_l = +1 (or −1) for all l to satisfy the first condition. However, when some of the terms ψ_{l,k} are negative, a simple algorithmic approach can be used to determine whether there exists a valid assignment of the η_l's. In fact, this condition is completely source dependent, and the choice of {η_l, l = 1, 2, . . . , L} is unique up to a global negation (assuming no component T_l is completely independent of the others), and thus can be considered fixed for a given source observation covariance matrix.
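The simple algorithmic approach mentioned above can be realized as a two-coloring of the sign pattern of the observation covariance: propagate signs along nonzero entries and report failure on any inconsistency. The helper below is our own sketch of this idea.

```python
import numpy as np

def assign_signs(Psi, tol=1e-12):
    """Find eta_l in {+1, -1} with eta_l * eta_k * Psi[l, k] >= 0 for all
    l < k (condition one of Theorem 2), by propagating signs along the
    nonzero entries of Psi; returns None if no valid assignment exists."""
    L = Psi.shape[0]
    eta = np.zeros(L, dtype=int)       # 0 marks "not yet assigned"
    for seed in range(L):
        if eta[seed] != 0:
            continue
        eta[seed] = 1                  # free choice per connected component
        stack = [seed]
        while stack:
            l = stack.pop()
            for k in range(L):
                if k == l or abs(Psi[l, k]) <= tol:
                    continue
                need = eta[l] * (1 if Psi[l, k] > 0 else -1)
                if eta[k] == 0:
                    eta[k] = need
                    stack.append(k)
                elif eta[k] != need:
                    return None        # inconsistent signs: no assignment
    return eta
```

For Psi = [[1, .5, -.3], [.5, 1, -.2], [-.3, -.2, 1]] it returns [1, 1, -1], which is unique up to a global negation.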
The proof of this theorem also has two parts, given in Section 3.1 and Section 3.2. This theorem answers the first question, regarding the conditions to certify whether an uncoded scheme is optimal in this communication problem. The answer to the second question for this problem turns out to be simpler than in the broadcast case, and in Section 3.3 we discuss, as special-case examples, several problems previously considered in the literature.

Extracting the Critical Conditions from the Outer Bound
Define the equivalent sources S̃_m ≜ E[S_m | T_1, T_2, . . . , T_L] = Σ_{l=1}^L γ_{m,l} T_l and ∆_m ≜ E(S_m − S̃_m)², and thus D_m ≥ ∆_m. The reason to introduce the S̃_m's is that, in the remote coding setting, the distortion can be decomposed into two independent parts: the first part is due to encoding the observable part of the underlying sources, i.e., the S̃_m random variables, with some distortion, and the second is due to the inherent noisy nature of the observations, which induces a fixed distortion ∆_m. Thus encoding the source S_m to distortion D_m is equivalent to encoding the equivalent source S̃_m to distortion D_m − ∆_m. We can now derive an outer bound by combining the approach used in the broadcast problem with a technique based on Witsenhausen's bound [21]. Again consider M auxiliary zero-mean Gaussian random variables (W_1, W_2, . . . , W_M); using the data processing inequality, we can write the bound (53), where equality holds if and only if (54) holds. Following exactly the steps in [7] (see also [8]) and applying Witsenhausen's bound [21], we can obtain (55), where ρ*_{l,k} = |ψ_{l,k}(ψ_{l,l}ψ_{k,k})^{−1/2}|. This inequality intuitively says that the mutual information between the channel inputs and the output is upper-bounded by the capacity of a point-to-point channel whose power constraint equals the resultant signal power when all the inputs on the multiple-access channel are coherently added. We will not attempt to further simplify this condition at this point, since in the context of the uncoded scheme it has a particularly simple form.
The right-hand side of (53) can be bounded similarly as in the broadcast problem. Here the equivalent sources are (S̃_1, S̃_2, . . . , S̃_M), the distortion vector is (D_1 − ∆_1, D_2 − ∆_2, . . . , D_M − ∆_M), and moreover σ²_{Z_m} = σ²_Z for m = 1, 2, . . . , M. We thus arrive at (57), where equality holds if and only if (58) and (59) hold. An outer bound is then obtained by combining (53), (55) and (57). Again, the precise form of this outer bound is less important than the extracted matching conditions: (54), (55) holding with equality, (58) and (59). The condition that (55) holds with equality and the condition (58) can be satisfied simply by choosing a proper jointly Gaussian coding scheme, while the conditions (54) and (59) are almost identical to (11) and (16) in the broadcast case.
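The distortion decomposition used above (encoding S_m to distortion D_m is equivalent to encoding S̃_m to D_m − ∆_m) can be checked numerically in a scalar toy case: since S − S̃ is uncorrelated with the observations, any estimator built from T incurs the fixed penalty ∆ plus its distortion against S̃. All numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar toy model: S observed as T = S + noise; S_tilde = E[S | T].
sigma2_S, sigma2_N = 1.0, 0.5
N = 400_000
S = rng.normal(scale=np.sqrt(sigma2_S), size=N)
T = S + rng.normal(scale=np.sqrt(sigma2_N), size=N)

c = sigma2_S / (sigma2_S + sigma2_N)           # MMSE coefficient
S_tilde = c * T                                # observable part of S
Delta = sigma2_S * sigma2_N / (sigma2_S + sigma2_N)

S_hat = 0.5 * T                                # an arbitrary estimator from T
lhs = np.mean((S - S_hat) ** 2)                # total distortion D
rhs = Delta + np.mean((S_tilde - S_hat) ** 2)  # Delta + distortion vs S_tilde
print(lhs, rhs)                                # agree up to simulation noise
```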

The Forward Matching Conditions
Since the uncoded scheme uses a single-letter encoding function, (55) holding with equality reduces to a single-letter condition. Because in the uncoded scheme the channel input X_l is given in (45), the equality holds as long as η_l η_k ψ_{l,k} ≥ 0 for all pairs (l, k). This yields the first condition stated in Theorem 2. The conditions (54) and (59), in the context of the uncoded scheme, are equivalent to the single-letter forms (62) and (63). For (62) to hold with equality, the two conditions (65) and (66) must hold. Let us consider the first condition, (65). Due to the jointly Gaussian distribution, there exists a set of coefficients (α_1, α_2, . . . , α_M) satisfying the required relation. Notice, however, that X̃ is itself a linear combination of (T_1, T_2, . . . , T_L); thus the condition (65) is equivalent to a certain vector being in the row space of the matrix Γ. Equivalently, the vector (δ_1 η_1 √(P_1/ψ_{1,1}), δ_2 η_2 √(P_2/ψ_{2,2}), . . . , δ_L η_L √(P_L/ψ_{L,L})) Σ_{T_1,T_2,...,T_L} needs to be in the row space of the matrix Σ_{(S_1,S_2,...,S_M),(T_1,T_2,...,T_L)}. This leads to the second condition stated in Theorem 2. When this condition is satisfied, the coefficients ᾱ can be determined exactly as in (46). The conditions (63) and (66) are now identical to those in the broadcast case, with S̃_1, S̃_2, . . . , S̃_M being the sources and X̃ being the channel input. By Corollary 2, such a channel is matched when the second largest eigenvalue of the matrix ΠΣ_{S̃_1,S̃_2,...,S̃_M}Π is sufficiently small; in other words, the noise power must be above or equal to the threshold stated in Theorem 2.
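The row-space membership required by condition two can be tested generically with a least-squares solve; the helper below is ours, not from the paper.

```python
import numpy as np

def in_row_space(v, A, tol=1e-9):
    """Check whether vector v lies in the row space of matrix A, by
    solving the least-squares problem x @ A ~ v and inspecting the
    residual (a generic linear-algebra test for condition two)."""
    x, *_ = np.linalg.lstsq(A.T, v, rcond=None)
    return bool(np.linalg.norm(A.T @ x - v) < tol)
```

For example, with A = [[1, 0, 1], [0, 1, 1]], the vector [1, 1, 2] is in the row space while [1, 0, 0] is not.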
Remark: The first condition in Theorem 2 generally has a unique solution if it can be satisfied, up to a simultaneous negation of the signs of all the channel input signals. The second condition can almost always be satisfied by choosing an appropriate vector $(\delta_1, \delta_2, \ldots, \delta_L)$, except in a few special cases where an all-positive solution does not exist (recall that we have assumed $\delta_\ell > 0$, and thus only all-positive solutions are valid). If the third condition is satisfied for a certain source-channel-code triple, then it is satisfied for any noisier channel. It is seen that the critical conditions in the outer bound derivation essentially decouple the matching problem into several simpler ones, leading to the three largely independent conditions given in Theorem 2.

Matched Channels in Special Case Scenarios
In the multiple-access setting, the matching conditions in Theorem 2 are already rather simple, and there is no need to further investigate the properties of matched channels as in the broadcast case. Next we consider two special cases of the general problem setting, which extend those considered in [7] and [8], respectively.

The Scalar CEO Problem
Consider a zero-mean scalar Gaussian source $S[n]$ with variance $\sigma_S^2$. There are a total of $L$ sensors, whose observations are $T_\ell[n] = d_\ell S[n] + Z_\ell[n]$, where $d_\ell \geq 0$ (without loss of generality) and the $Z_\ell[n]$'s are zero-mean independent additive noises with variance $\sigma_Z^2$. This special case is depicted in Fig. 5. It is clear that the first condition in Theorem 2 is satisfied by $\eta_\ell = 1$ for all $\ell = 1, 2, \ldots, L$. The second condition in this case is equivalent to a proportionality relation, where $\propto$ means a component-wise proportional relation. In other words, the uncoded scheme is optimal if (74) holds. The left-hand side of this condition can be simplified into two terms: the second term is proportional to $d_\ell$, and the first term is proportional to $d_\ell$ if and only if $P_\ell \delta_\ell^2 (d_\ell^2 \sigma_S^2 + \sigma_Z^2)/d_\ell^2 = \text{const}$, $\ell = 1, 2, \ldots, L$. Moreover, since the source is scalar, the second largest eigenvalue of the matrix $\Pi \Sigma_{S_1, S_2, \ldots, S_M} \Pi$ can be viewed as zero, and thus any noise power $\sigma_Z^2$ will allow a matching. Summarizing the above analysis, it is seen that for the scalar CEO problem on a Gaussian multiple-access channel, as long as the condition (74) holds, the uncoded scheme is optimal. Conversely, for any noisy observation qualities, there always exists a matched channel obtained by choosing the values of $\delta_\ell$ properly.
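The last claim can be illustrated numerically. Assuming the constancy condition takes the form $q_\ell = P_\ell \delta_\ell^2 (d_\ell^2 \sigma_S^2 + \sigma_Z^2)/d_\ell^2$ (one reading of the condition in the text), the sketch below picks hypothetical sensor parameters and then solves for $\delta_\ell$ so that $q_\ell$ is the same for every sensor:

```python
import numpy as np

# Hypothetical sensor parameters for the scalar CEO sketch.
sigma_S2, sigma_Z2 = 1.0, 0.5
P = np.array([1.0, 2.0, 4.0])   # per-sensor transmit powers
d = np.array([0.8, 1.0, 1.5])   # per-sensor observation gains

# Choose delta_l so that q_l = P_l delta_l^2 (d_l^2 sigma_S^2 + sigma_Z^2) / d_l^2
# equals 1 for every sensor, i.e., the proportional-quality condition holds.
delta = d / np.sqrt(P * (d**2 * sigma_S2 + sigma_Z2))

q = P * delta**2 * (d**2 * sigma_S2 + sigma_Z2) / d**2
print(np.allclose(q, q[0]))  # True
```

This mirrors the statement that for any observation qualities a matched channel exists once the $\delta_\ell$'s are chosen properly.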
The condition (74) corresponds to a proportional quality requirement: the quality of the observations needs to match the transmission powers and the transmission amplification factors. Gastpar [7] showed that when all the sensors have the same observation quality, the same power, and the same amplification factor, the uncoded scheme is optimal. Our result thus generalizes his to the proportional case.

Correlated Gaussian on a Gaussian Multiple-Access Channel
Consider the case when $M = L$, and we shall assume that the first condition in Theorem 2 can be satisfied. The second condition is also satisfied trivially, since the matrix $\Sigma_{(S_1, S_2, \ldots, S_M),(T_1, T_2, \ldots, T_L)}$ is full rank in our problem setting. Thus only the last condition needs to be checked in this case. Equivalently, when $\lambda_2$ is strictly less than 1, there always exists a noise power $\sigma_Z^2$ such that the channel is matched, and thus the uncoded scheme is optimal.
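The eigenvalue test is straightforward to carry out. The sketch below uses a hypothetical unit-variance source covariance matrix and, for simplicity, omits the projection $\Pi$; it only illustrates checking the second-largest eigenvalue against 1:

```python
import numpy as np

# Hypothetical source covariance matrix with unit variances
# (the projection matrix Pi is omitted in this simplified sketch).
Sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])

# eigvalsh returns the eigenvalues of a symmetric matrix in ascending order.
eigs = np.sort(np.linalg.eigvalsh(Sigma))[::-1]
lambda2 = eigs[1]
print(lambda2 < 1.0)  # True: a matching noise power sigma_Z^2 exists
```

For this example $\lambda_2 \approx 0.73 < 1$, so by the discussion above a matching noise power exists.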
Lapidoth and Tinguely [8] previously considered the case that $T_m = S_m$, $m = 1, 2, \ldots, M$; see Fig. 6. It was shown that for a covariance matrix $\Sigma_{S_1, S_2, \ldots, S_M}$ with strictly positive entries, there always exists a noise power $\sigma_Z^2$ such that the uncoded scheme is optimal. Our result generalizes theirs to the case where the observations can be noisy linear combinations, and the covariance matrix $\Sigma_{S_1, S_2, \ldots, S_M}$ need not have all strictly positive entries.

Conclusion
We considered the problem of determining whether a given uncoded scheme is optimal for multiuser joint source-channel coding. It was shown that for both the broadcast and the multiple-access problems in the Gaussian setting, matching occurs naturally under certain general conditions. Our approach differs from the more conventional one in that, instead of attempting to find explicit outer and inner bounds and then comparing them, we focus on the critical conditions that make the outer bound hold with equality. This approach has a decoupling effect which tremendously simplifies the overall task. As future work, we plan to extend and generalize this approach to explore matching in other channel networks.
Since physical degradedness is equivalent to stochastic degradedness in the broadcast setting, i.e., $Z_j$ can be assumed to be decomposable into two independent components as $Z_{j+1} + \Delta Z_j$, we can apply the entropy power inequality [18] for $j = 1, 2, \ldots, M - 1$. For $j = M$, it is clear that the corresponding bound holds, with equality if and only if the stated condition is satisfied. Continuing this line of reduction, we finally arrive at (82) when $m = 1$, where the last inequality is by the concavity of the $\log(\cdot)$ function and the given power constraint. The chain of inequalities in (82) holds with equality if and only if the stated conditions, as well as (79) and the entropy power inequalities, hold with equality.

We next lower bound $E(\Sigma_{W_1, W_2, \ldots, W_M})$ by the rate-distortion theorem [18], with equality if and only if the stated condition holds. Furthermore, (86) holds because conditioning reduces entropy, and (87) holds because the Gaussian distribution maximizes the entropy among random variables with the same variance, together with the concavity of the log function. For (86) to hold with equality, we must have the stated condition, and (87) holds with equality only under the corresponding condition. It follows that (90) holds, and combining (82) and (90), we reach an outer bound.

Proof of Lemma 1. For simplicity, let us define the quantities $B$ as above, and the matrices constructed for the two channels as $\Sigma_{V_1, V_2, \ldots, V_M}$ and $\Sigma^*_{V_1, V_2, \ldots, V_M}$, respectively. The difference of the two matrices is easily seen to be positive semidefinite, since the first $m^* - 1$ diagonal terms are non-negative, and we can remove all the other terms through symmetric elimination; the resulting matrix is positive semidefinite since it is a summation of two positive semidefinite matrices.
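The entropy power inequality step in the derivation above can be illustrated numerically: for independent Gaussians, $N(X+Y) \geq N(X) + N(Y)$ holds with equality, where $N(X) = e^{2h(X)}/(2\pi e)$ is the entropy power. The variances below are arbitrary illustrative values:

```python
import math

def diff_entropy_gauss(var):
    """Differential entropy (in nats) of a Gaussian with the given variance."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

def entropy_power(h):
    """Entropy power N = exp(2h) / (2*pi*e); equals the variance for Gaussians."""
    return math.exp(2 * h) / (2 * math.pi * math.e)

# Z_{j+1} and the degradation component Delta Z_j (illustrative variances).
var_z, var_dz = 1.5, 0.7
h_sum = diff_entropy_gauss(var_z + var_dz)

# EPI: N(Z_{j+1} + Delta Z_j) >= N(Z_{j+1}) + N(Delta Z_j),
# with equality for independent Gaussians.
lhs = entropy_power(h_sum)
rhs = entropy_power(diff_entropy_gauss(var_z)) + entropy_power(diff_entropy_gauss(var_dz))
print(abs(lhs - rhs) < 1e-9)  # True
```

This equality case is exactly what forces the Gaussian structure in the outer bound derivation.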
If all the off-diagonal entries of $\mathrm{diag}(\bar{\alpha}) \Sigma_{W_1, W_2, \ldots, W_M} \mathrm{diag}(\bar{\alpha})$ are non-positive, then the matrix in question is diagonally dominant, and its diagonal entries are all positive, which implies that it is a positive semidefinite matrix. Thus as long as $\alpha_j \beta_j \alpha_m \beta_m \frac{P^2}{P + \sigma_{Z_m}^2} \leq \alpha_j \alpha_m \rho_{j,m}$ for $j < m$, together with the symmetric counterpart condition, Corollary 1 implies that the statement given in the corollary is indeed true.
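The diagonal-dominance argument can be checked numerically: a symmetric, diagonally dominant matrix with non-negative diagonal entries is positive semidefinite (Gershgorin's circle theorem). The matrix below is a hypothetical example of this structure:

```python
import numpy as np

# A hypothetical symmetric matrix with non-positive off-diagonal entries
# that is diagonally dominant with positive diagonal entries.
A = np.array([[ 2.0, -0.5, -0.8],
              [-0.5,  1.5, -0.6],
              [-0.8, -0.6,  2.5]])

# Diagonal dominance: each diagonal entry dominates the absolute row sum
# of its off-diagonal entries.
dominant = all(A[i, i] >= np.sum(np.abs(A[i])) - np.abs(A[i, i])
               for i in range(len(A)))

# Positive semidefiniteness via the eigenvalues of the symmetric matrix.
psd = np.all(np.linalg.eigvalsh(A) >= -1e-12)
print(dominant, psd)  # True True
```

By Gershgorin, every eigenvalue lies within a radius of the diagonal entry no larger than that entry itself, so dominance with a non-negative diagonal forces all eigenvalues to be non-negative.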