Performance of a Dual Channel Matched Filter System



I. INTRODUCTION
A critical step in the optimal detection of a known signal in the presence of additive interference and noise is defining a filter that maximizes the output signal-to-interference plus noise ratio (SINR). Let w denote an arbitrary filter (weight vector); then the output SINR of w is defined as

SINR(w) = |w^H s|² / E{|w^H n|²}    (1)

where s is the desired (known) signal vector, n is the interference plus noise signal vector, {·}^H denotes complex conjugate transpose, and E{·} denotes the expectation operator. Note that the signal and weight vectors are column vectors. The filter that maximizes (1) is referred to as the matched filter and is given by [1]

w_opt = R⁻¹ s    (2)

where R = E{nn^H} is the correlation matrix of the interference plus noise. Thus, to compute the optimum weight vector (matched filter), the interference plus noise correlation matrix must be known. Typically, this is not the case, and an estimate of the interference plus noise correlation matrix is used in place of the true one, leading to the following filter:

ŵ = R̂⁻¹ s    (3)

where R̂ is the estimated interference plus noise correlation matrix. Equations (2) and (3) are referred to as the direct matrix inversion (DMI) method and the sample matrix inversion (SMI) method, respectively. The use of an estimated interference plus noise correlation matrix in the SMI method results in a random weight vector and hence a random output SINR. Since the output SINR is a random variable, the SINR performance of the SMI method must be expressed in statistical terms. The average SINR performance of the SMI method is less than the optimum (DMI) SINR performance. The expected loss in SINR performance depends on the quality of the estimated interference plus noise correlation matrix. Typically, the correlation matrix is estimated from a set of signal vectors, referred to as secondary data vectors, that contain only the interference plus noise signal. The quality of R̂ improves, and the expected SINR loss decreases,

0018-9251/99/$10.00 IEEE
as the number of secondary data vectors used in the estimate increases. Because of various constraints, the number of available secondary data vectors is limited, and a certain amount of SINR loss must be accepted. Additionally, observe that solving for a new N × 1 weight vector using the SMI (or DMI) method is computationally expensive, requiring on the order of N³ operations. As N becomes large, the computational cost of solving for the weight vector each time a new estimate of the correlation matrix is generated could become prohibitive. Thus, one of the key research challenges is to develop alternative methods that reduce the computational cost and provide acceptable performance with minimal secondary data support.
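As a concrete illustration of the DMI and SMI filters in (2) and (3), the following sketch computes both weight vectors and the resulting normalized SINR. It is a minimal example, not the paper's simulation: the real-valued Gaussian data, the illustrative covariance, and all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 32  # weight-vector dimension, number of secondary data vectors

# Illustrative interference-plus-noise covariance R and desired signal s.
idx = np.arange(N)
R = np.eye(N) + 0.9 ** np.abs(np.subtract.outer(idx, idx))
s = np.ones(N)

# DMI: true R known; w = R^{-1} s via a linear solve (order N^3 operations).
w_dmi = np.linalg.solve(R, s)

# SMI: R estimated from K secondary vectors containing only interference plus noise.
L = np.linalg.cholesky(R)
X = L @ rng.standard_normal((N, K))   # columns ~ N(0, R)
R_hat = (X @ X.T) / K                 # maximum-likelihood estimate of R
w_smi = np.linalg.solve(R_hat, s)

def sinr(w, R, s):
    """Output SINR of filter w, as in (1), for real data."""
    return np.abs(w @ s) ** 2 / (w @ R @ w)

rho = sinr(w_smi, R, s) / sinr(w_dmi, R, s)   # normalized SINR, 0 < rho <= 1
```

Because w_opt maximizes (1), the ratio rho never exceeds one; rerunning with larger K drives it toward one.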
In this paper, we analyze the SINR performance of the dual channel matched filter system shown in Fig. 1 when the weight vectors are computed using the SMI method and under the assumption that the input interference plus noise signals are uncorrelated. That is, the input interference plus noise signal for channel 1 is uncorrelated with the input interference plus noise signal for channel 2. We derive an exact expression for the normalized SINR in terms of random variables with known distributions. The resulting expression is complex, involving the product, sum, and ratio of non-Gaussian random variables, which prevented the derivation of exact closed-form expressions for the probability density function (pdf), mean, and variance of the normalized SINR. Therefore, we resorted to a Taylor series expansion to develop approximate expressions for the mean and variance of the normalized SINR. The resulting approximate expressions have simple forms and are suitable for determining the normalized SINR performance as a function of the number of secondary data vectors used in estimating the correlation matrices. One can easily show that the dual channel matched filter system is equivalent (in a SINR sense) to a single channel matched filter system (see Fig. 2) designed to process both channels when the correlation matrices are known and the inputs are uncorrelated. When the correlation matrices are estimated, we show that the dual channel matched filter system achieves nearly the same normalized SINR performance as the single channel matched filter with half the secondary data support. These results suggest that, with proper preprocessing to decorrelate the input signal, a single matched filter could be replaced by two smaller matched filters, leading to a reduction in the secondary data requirements and, potentially, to a reduction in the computational cost.
The remainder of this paper is organized as follows. In Section II, we first develop an exact expression for the normalized SINR of the dual channel matched filter system in terms of random variables with known distributions, and then we develop approximations for the mean and variance of the normalized SINR. The reduced secondary data requirements of the dual channel matched filter system are demonstrated in Section III. In Section IV, we discuss the practical aspects of replacing a single channel matched filter system with a dual channel matched filter system. A simulation example is presented in Section V, and concluding remarks are provided in Section VI.

II. DUAL CHANNEL NORMALIZED SINR
As noted earlier, the SINR performance of the single channel SMI method must be expressed in statistical terms since the resulting weight vector is random. To assess the performance of the SMI method, Reed et al. [1] defined two statistics: the SINR conditioned on ŵ (or the conditioned SINR) and the normalized SINR. The SINR conditioned on ŵ is defined as

SINR(ŵ) = |ŵ^H s|² / E{|ŵ^H n|²} = |ŵ^H s|² / (ŵ^H R ŵ)    (4)

where the expectation is taken with respect to n. The normalized SINR is defined as

ρ_SMI = SINR(ŵ) / SINR_max    (5)

where SINR_max = s^H R⁻¹ s is the maximum SINR.
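For the single channel case, Reed et al.'s beta-distribution result yields a simple closed form for the mean of ρ_SMI in the complex case, E{ρ_SMI} = (K + 2 − N)/(K + 1), where K is the number of secondary data vectors. A quick numerical check of the resulting 2N rule of thumb (the function name is ours):

```python
import numpy as np

def mean_rho_smi(K, N):
    """Mean normalized SINR of the single channel SMI method, complex case:
    E{rho_SMI} = (K + 2 - N)/(K + 1), from Reed et al.'s beta result."""
    return (K + 2 - N) / (K + 1)

# Average SINR loss in dB with K = 2N secondary data vectors: close to 3 dB.
losses = {N: -10 * np.log10(mean_rho_smi(2 * N, N)) for N in (16, 64, 256)}
```

As N grows, the loss at K = 2N approaches 3 dB from below, which is the rule of thumb quoted in the text.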
The normalized SINR indicates the loss in SINR performance of the SMI method relative to the optimum (DMI) method. Reed et al. [1] showed that ρ_SMI is a beta random variable, from which we get the common rule of thumb that 2N secondary data vectors are needed for an average SINR loss of 3 dB (i.e., E{ρ_SMI} = 0.5), where N is the dimension of the weight vector. Our objective is to perform a similar analysis for the dual channel matched filter system. Although the focus of our analysis is on complex random processes, we also state the results for the real case. We begin by deriving an exact expression for the dual channel normalized SINR, denoted ρ_dual, assuming that the input interference plus noise signals are uncorrelated and that the weight vectors are computed using the SMI method. Further, we assume that the interference plus noise signal vectors are zero-mean normal random vectors and that the correlation matrices are estimated using the maximum likelihood estimator. After establishing the exact expression for ρ_dual, we derive approximate expressions for the mean and variance of ρ_dual using a Taylor series expansion of the exact expression.

Before beginning the derivations, we first comment on the notation and highlight previous results used in the derivations. A complex p-variate normal distribution with mean vector μ and covariance matrix Σ is denoted by Ñ_p(μ, Σ), and by N_p(μ, Σ) for the real case. The symbols B(α, β), χ²_m, and γ(α, β) denote a beta random variable with parameters α and β, a chi-square random variable with m degrees of freedom, and a gamma random variable with parameters α and β, respectively. The symbol =_d denotes that two random variables have the same distribution, and ~ denotes "is distributed as." Let {w_i}_{i=1}^{K} be a set of independent and identically distributed (IID) N × 1 random vectors with w_i ~ N_N(0, Σ), and let W denote the N × K matrix W = [w_1 w_2 ··· w_K]. Then the N × N matrix Σ̂ = WW^T ({·}^T denotes transpose) has a Wishart distribution with K degrees of freedom, which is denoted by W_N(K, Σ) [2]. When each w_i ~ Ñ_N(0, Σ), then Σ̂ = WW^H has a complex Wishart distribution with K degrees of freedom, which is denoted by W̃_N(K, Σ) [2]. Using Theorem 1 of Khatri and Rao [2], we can derive the following two results. Let R̂ ~ W_N(K, R) and let s be an N × 1 vector; then the representation in (6)-(8) holds, where ξ and ρ are independently distributed. Let R̂ ~ W̃_N(K, R) and let s be an N × 1 vector; then the analogous representation in (9)-(11) holds, where ξ̃ and ρ̃ are independently distributed. Note that (9) is the same result derived by Reed et al. [1]. With regard to ξ and ξ̃, further distributional results can be shown [3].

The first step in deriving an expression for ρ_dual in terms of random variables with known distributions is to derive such an expression for the dual channel conditioned SINR. Let n_i and s_i denote the N × 1 interference plus noise signal vectors and desired signal vectors for each channel (i = 1, 2), respectively, and let R_i = E{n_i n_i^H} denote the true correlation matrices of the interference plus noise signals. Further, assume that n_1 and n_2 are uncorrelated (i.e., E{n_1 n_2^H} = 0 = E{n_2 n_1^H}). When the correlation matrices R_1 and R_2 are known, the weight vectors w_1 and w_2 are computed using the DMI method (i.e., w_i = R_i⁻¹ s_i). Let X_i denote the N × K secondary data matrix for each channel, where the columns of X_i are IID Ñ_N(0, R_i). The maximum likelihood estimates of the interference plus noise correlation matrices are

R̂_i = (1/K) X_i X_i^H,    i = 1, 2.

We can drop the 1/K term in the subsequent analysis, since it appears in both the numerator and denominator of the conditioned SINR. After dropping the 1/K term, R̂_1 ~ W̃_N(K, R_1) and R̂_2 ~ W̃_N(K, R_2); note that R̂_1 and R̂_2 are independently distributed [3]. Similarly, for the real case, R̂_1 ~ W_N(K, R_1) and R̂_2 ~ W_N(K, R_2), and they are independently distributed. When the true correlation matrices are unknown, the weight vectors are computed using the SMI method and are given by ŵ_i = R̂_i⁻¹ s_i. The output of the dual channel matched filter system given the input vector x = [x_1^T x_2^T]^T is ŵ_1^H x_1 + ŵ_2^H x_2, and thus we can write the dual channel conditioned SINR as

SINR(ŵ_1, ŵ_2) = |ŵ_1^H s_1 + ŵ_2^H s_2|² / (ŵ_1^H R_1 ŵ_1 + ŵ_2^H R_2 ŵ_2).    (16)

Note that the dual channel conditioned SINR for the real case is the same as (16) with the complex conjugate transpose replaced by the transpose. Observe that we can write the denominator of (16) as a quadratic form in the block diagonal matrix diag(R_1, R_2) and the numerator as a quadratic form in the block diagonal matrix diag(R̂_1⁻¹, R̂_2⁻¹). Notice that the off-diagonal blocks of the matrices involving the estimated correlation matrices are zero matrices. This represents one of the major differences between the single and dual channel derivations. In a single channel system, only a single correlation matrix is estimated; although the off-diagonal blocks of the true correlation matrix are zero matrices when n_1 and n_2 are uncorrelated, this does not guarantee that the off-diagonal blocks of the estimated correlation matrix will be zero. Now, notice the similarity between the terms in (16) and the earlier results presented in (6)-(11). To write the dual channel conditioned SINR in terms of random variables with known distributions, we arrange the terms in the numerator of (16) into a form similar to (11) and the terms in the denominator into a form similar to (9). Each term in the numerator has the form s_i^H R̂_i⁻¹ s_i and can be rewritten as

s_i^H R̂_i⁻¹ s_i =_d α_i u_i,    α_i = s_i^H R_i⁻¹ s_i

with the last result following from (11) and (10). Each term in the denominator has the form s_i^H R̂_i⁻¹ R_i R̂_i⁻¹ s_i and can be rewritten as

s_i^H R̂_i⁻¹ R_i R̂_i⁻¹ s_i =_d α_i u_i²/q_i

with the last result following from (9) and (7). Recall that u_i and q_i are independent for a fixed i and that R̂_1 and R̂_2 are independent; hence, u_1, u_2, q_1, and q_2 are independent. Finally, using these results, we can write the dual channel conditioned SINR in terms of random variables with known distributions as

SINR(ŵ_1, ŵ_2) =_d (α_1 u_1 + α_2 u_2)² / (α_1 u_1²/q_1 + α_2 u_2²/q_2).    (24)

With the dual channel conditioned SINR established, we can achieve our initial objective of expressing the dual channel normalized SINR as a function of random variables with known distributions by dividing (24) by the maximum SINR. Under the assumption that n_1 and n_2 are uncorrelated, the maximum SINR is

SINR_max = s_1^H R_1⁻¹ s_1 + s_2^H R_2⁻¹ s_2 = α_1 + α_2.    (25)

Thus, the dual channel normalized SINR in terms of random variables with known distributions is

ρ_dual =_d (α_1 u_1 + α_2 u_2)² / [(α_1 + α_2)(α_1 u_1²/q_1 + α_2 u_2²/q_2)]    (26)

which we can rewrite as

ρ_dual =_d (k u_1 + (1 − k) u_2)² / (k u_1²/q_1 + (1 − k) u_2²/q_2) = h(u_1, u_2, q_1, q_2)    (27)

where k = α_1/(α_1 + α_2), since R_1 and R_2 are positive definite and thus α_1 and α_2 are greater than zero. Ideally, we would like to develop an expression for the pdf of ρ_dual to fully characterize its behavior. Although we can express ρ_dual as a function of random variables with known pdfs, developing the pdf of ρ_dual represents a formidable task, as does developing closed-form expressions for the mean and variance. Thus, we resort to a Taylor series expansion of h(u_1, u_2, q_1, q_2) to derive approximate expressions for the mean and variance.
Let g(x_1, x_2) be a function of the continuous random variables x_1 and x_2. Papoulis [4] provides the following approximation for the expected value of g(x_1, x_2):

E{g(x_1, x_2)} ≈ g(μ_1, μ_2) + ½[σ_1² ∂²g/∂x_1² + 2ξσ_1σ_2 ∂²g/∂x_1∂x_2 + σ_2² ∂²g/∂x_2²]    (28)

where g(x_1, x_2) and the partial derivatives are evaluated at the point (μ_1, μ_2); μ_1 and μ_2 denote the means of x_1 and x_2; σ_1² and σ_2² denote the variances of x_1 and x_2; ξ is the correlation coefficient of x_1 and x_2; and g(x_1, x_2) is assumed to be sufficiently smooth about the point (μ_1, μ_2). Equation (28) is derived by taking the Taylor series of g(x_1, x_2) about the point (μ_1, μ_2) and substituting the terms up to second order into the standard formula for the expected value. The approximation for the variance of g(x_1, x_2) is given by [4]

V{g(x_1, x_2)} ≈ σ_1² (∂g/∂x_1)² + 2ξσ_1σ_2 (∂g/∂x_1)(∂g/∂x_2) + σ_2² (∂g/∂x_2)².    (29)

The extension of these approximations to an arbitrary n is straightforward. Let g(x_1, …, x_n) be a function of the continuous random variables x_1, …, x_n. Then the approximations for the mean and variance of g(x_1, …, x_n) are

E{g} ≈ g(p) + ½ Σ_{i=1}^{n} σ_i² (∂²g/∂x_i²)|_p + Σ_{i<j} ξ_ij σ_i σ_j (∂²g/∂x_i∂x_j)|_p    (30)

and

V{g} ≈ Σ_{i=1}^{n} σ_i² (∂g/∂x_i|_p)² + 2 Σ_{i<j} ξ_ij σ_i σ_j (∂g/∂x_i|_p)(∂g/∂x_j|_p)    (31)

where ∂g/∂x_i|_p, ∂²g/∂x_i²|_p, and ∂²g/∂x_i∂x_j|_p denote the partial derivatives of g(x_1, …, x_n) evaluated at the point p = (μ_1, …, μ_n) and ξ_ij is the correlation coefficient of x_i and x_j.
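The second-order approximations (30) and (31) are easy to evaluate numerically. The helper below is our own sketch: it assumes independent inputs (so the ξ_ij cross terms vanish) and uses central finite differences in place of analytic derivatives, applied to the simple ratio g(x_1, x_2) = x_1/x_2.

```python
import numpy as np

def taylor_mean_var(g, mu, var, h=1e-4):
    """Second-order Taylor approximations of E{g(x)} and V{g(x)} for
    independent inputs, as in (30)-(31) with zero correlation coefficients.
    Central finite differences stand in for the analytic derivatives."""
    mu = np.asarray(mu, dtype=float)
    var = np.asarray(var, dtype=float)
    g0 = g(mu)
    grad = np.zeros_like(mu)
    curv = np.zeros_like(mu)
    for i in range(mu.size):
        e = np.zeros_like(mu)
        e[i] = h
        grad[i] = (g(mu + e) - g(mu - e)) / (2 * h)        # dg/dx_i at p
        curv[i] = (g(mu + e) - 2 * g0 + g(mu - e)) / h**2  # d2g/dx_i^2 at p
    mean = g0 + 0.5 * np.sum(var * curv)   # (30)
    variance = np.sum(var * grad**2)       # (31)
    return mean, variance

# Example: g(x1, x2) = x1/x2 about (1, 2), with variances 0.01 each.
m, v = taylor_mean_var(lambda x: x[0] / x[1], [1.0, 2.0], [0.01, 0.01])
```

For this example the analytic second-order values are E ≈ 0.50125 and V ≈ 0.003125, which the numerical version reproduces.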
Using (30) and (31), we can now derive approximate expressions for the mean and variance of the dual channel normalized SINR ρ_dual. Recall that u_1 and u_2 are IID and that q_1 and q_2 are IID. Thus, the mean (μ_u) and variance (σ_u²) of u_i and the mean (μ_q) and variance (σ_q²) of q_i for i = 1, 2 are given in (32)-(35) [5]. Further, observe that all the partial derivatives of h(u_1, u_2, q_1, q_2) exist at any point p that does not cause the denominator of (27) to equal zero, since h(u_1, u_2, q_1, q_2) is a rational function. Thus, the Taylor series of h(u_1, u_2, q_1, q_2) about the point p = (μ_u, μ_u, μ_q, μ_q) exists if μ_u and μ_q do not equal zero. Note that μ_u and μ_q are greater than zero if K > N. Using the independence of u_1, u_2, q_1, and q_2 (i.e., the correlation coefficients ξ_ij in (30) and (31) are zero) and setting p = (μ_u, μ_u, μ_q, μ_q), the approximations for the mean E{ρ_dual} and variance V{ρ_dual} are given in (36) and (37). Performing the partial differentiations in (36) and (37) and substituting the point p yields (38) and (39). Substituting (32)-(35) into (38) and (39) yields (40) and (41), where K is the number of secondary data vectors used to estimate the interference plus noise correlation matrices, N is the dimension of the weight vectors ŵ_1 and ŵ_2, and k = α_1/(α_1 + α_2), which indicates the relative SINR between the two channels. Observe that the approximations are quadratic functions of k, with maxima at k = 0, 1 and a minimum at k = 0.5. Thus, an increase in the mean (less SINR loss) comes at the expense of an increase in the variance. This trade-off between the mean and variance will undoubtedly have implications for the probability of detection performance of the dual channel system. An examination of (40) reveals that the approximation for the mean has characteristics that match one's intuition: it is asymptotically correct, it is an increasing function of K, and it is exact when k = 0 or k = 1. As K approaches infinity, the estimated interference plus noise correlation matrix approaches the true interference plus noise correlation matrix, implying that 1) the mean of the conditioned SINR approaches the maximum SINR or, equivalently, the mean of the normalized SINR approaches one, and 2) the mean of the normalized SINR is an increasing function of K. In the limit as K goes to infinity, the approximation of the mean of ρ_dual given in (40) is one, indicating that the asymptotic properties of the approximation are correct. An examination of (40) also reveals that the approximation is an increasing function of K when K > N with N and k held constant. Next, observe that if k = 0 (or k = 1), the approximation reduces to μ_q and h(u_1, u_2, q_1, q_2) = q_2 (or q_1). The mean of q_1 or q_2 is μ_q; thus, the approximation is exact when k = 0 or k = 1. Similar observations hold for the approximate expression for the variance (i.e., the variance approaches zero as K approaches infinity, is a decreasing function of K, and is exact when k = 0 or 1).
To further verify the validity of the approximations given in (38) and (39), we conducted a series of Monte Carlo simulations with K = 2N for N between 20 and 500 in steps of 20 and for 20 values of k uniformly distributed on the interval 0 ≤ k ≤ 1. For each N and k, 10,000 samples of ρ_dual were used to compute a sample mean (i.e., an estimate of E{ρ_dual}), an approximate 99.5% confidence interval for the sample mean, and a sample variance. The confidence intervals are termed approximate because the sample standard deviation was used in place of the population (known) standard deviation [4]. We present only the results for the complex case; the results for the real case exhibit the same behavior. Fig. 3 shows the sample mean of ρ_dual for N = 20, 260, and 500 as k varies between 0 and 1, along with the approximate 99.5% confidence intervals, which are indicated by the error bars. Also plotted in Fig. 3 is the approximate mean of ρ_dual from (40). An examination of Fig. 3 shows excellent agreement between the sample mean and the mean approximation, except when N = 20 and near k = 0.5. This problem area can be eliminated by including another term in the Taylor series expansion. Plotted in Fig. 4 is the sample variance overlaid with the approximate variance from (41) for the same cases as Fig. 3. Again, with the exception of N = 20 and near k = 0.5, there is excellent agreement between the sample variance and the variance approximation. This problem area can be eliminated by retaining the higher order moments (i.e., not discarding moments of order greater than two).
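The construction of the approximate confidence intervals can be sketched as follows. This is our own minimal version of the procedure: a beta variate stands in for samples of ρ_dual, and the z value and seed are ours.

```python
import numpy as np

def mean_with_ci(samples, z=2.807):
    """Sample mean and approximate two-sided 99.5% confidence interval
    (z = 2.807); the sample standard deviation is used in place of the
    unknown population value, so the interval is termed approximate."""
    m = samples.mean()
    half = z * samples.std(ddof=1) / np.sqrt(samples.size)
    return m, (m - half, m + half)

rng = np.random.default_rng(1)
rho = rng.beta(33, 31.5, size=10_000)   # stand-in for samples of rho_dual
m, (lo, hi) = mean_with_ci(rho)
```

With 10,000 samples the half-width is small, which is why the error bars in Fig. 3 are tight.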

III. REDUCED SECONDARY DATA REQUIREMENTS
In this section, we address our earlier claim that the dual channel matched filter system requires half the secondary data of the equivalent single channel matched filter system to achieve nearly the same normalized SINR performance. We discuss only the complex case, but one can easily show that the results also hold for the real case. This claim is examined under the assumption that the input interference plus noise signals are uncorrelated. That is, if n_1 and n_2 denote the input interference plus noise signal vectors in their respective channels, then E{n_1 n_2^H} = 0 = E{n_2 n_1^H}. Let n_1 and n_2 be N × 1 vectors; then the equivalent single channel system must process a 2N × 1 input signal vector. Thus, the single channel system requires a 2N × 1 weight vector, and approximately 4N (K ≈ 4N) secondary data vectors are required to achieve an average normalized SINR of 0.5 (i.e., an average SINR loss of 3 dB). With K = 4N, the variance of the single channel normalized SINR is

V{ρ_single} ≈ 1/(16N)    (42)

for large N.
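Both figures follow from the beta distribution of the single channel normalized SINR, ρ ~ B(K + 2 − M, M − 1) for an M-dimensional weight vector in the complex case [1]. The sketch below (function names are ours) checks numerically that with M = 2N and K = 4N the mean is near 0.5 and the variance tends to 1/(16N):

```python
def mean_rho(K, M):
    """Complex-case mean of the normalized SINR for an M-dimensional SMI
    weight vector computed from K secondary data vectors."""
    return (K + 2 - M) / (K + 1)

def var_rho(K, M):
    """Variance of the corresponding Beta(K + 2 - M, M - 1) normalized SINR."""
    a, b = K + 2 - M, M - 1
    return a * b / ((a + b) ** 2 * (a + b + 1))

# Single channel system processing both inputs: M = 2N, K = 4N.
for N in (50, 500, 5000):
    K = 4 * N
    assert abs(mean_rho(K, 2 * N) - 0.5) < 0.01        # E{rho} ~ 0.5
    assert abs(var_rho(K, 2 * N) * 16 * N - 1) < 0.05  # V{rho} ~ 1/(16N)
```

Setting the mean exactly to 0.5 gives K = 2M − 3 = 4N − 3, which is the K ≈ 4N figure used above.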
Thus, to support our claim, we must show that the dual channel system has an average normalized SINR of approximately 0.5 (E{ρ_dual} ≈ 0.5), with a variance approximately equal to (42), when K = 2N. For the single channel system, we could set the expression for the mean of the normalized SINR equal to 0.5 and solve directly for the number of secondary data vectors needed in terms of the dimension of the weight vector. We cannot apply this same approach to (40) because of its form. Instead, we set K = 2N and k = 0.5, since (40) is a quadratic function of k with a minimum at k = 0.5, and then show that (40) is greater than or equal to 0.5 for all N ≥ 1. Substituting k = 0.5 and K = 2N into (40) yields (43). For N ≥ 1, (43) is greater than or equal to 0.5 if (44) holds, which is true for all N ≥ 1. Thus, the approximate E{ρ_dual} ≥ 0.5 if K ≥ 2N. The approximate variance (41) of the dual channel normalized SINR with k = 0.5 and K = 2N is, for large N, approximately equal to (42). These results support our claim that the dual channel system requires half the secondary data vectors of the single channel system to achieve nearly the same normalized SINR performance.

IV. DISCUSSION
We must address two issues before we can take advantage of the reduced secondary data requirements of the dual channel matched filter system to replace a single channel system. First, we must decorrelate the two halves of the interference plus noise vector to meet the conditions of the preceding development. That is, we need to preprocess (transform) the interference plus noise vector such that the correlation matrix of the resulting vector is a block diagonal matrix. Note that we can introduce a unitary transformation (preprocessing) without affecting the maximum SINR. After preprocessing by a unitary transformation matrix V, the interference plus noise vector has correlation matrix V^H R V. If V is partitioned as V = [V_1 V_2] and chosen so that V^H R V is block diagonal, then the two halves of the preprocessed interference plus noise vector are decorrelated. In light of this decorrelation preprocessing requirement, the block diagram of the dual channel system is redrawn as shown in Fig. 5. The second issue that we must address is computational cost. Although the computational cost of computing the weight vectors is less with the dual channel system, one must be concerned with the additional computational cost associated with the preprocessing. The additional matrix-vector multiplication introduced by the preprocessing, coupled with the fact that every secondary data vector must be preprocessed, will significantly reduce any computational savings achieved by reducing the dimension of the weight vector. Clearly, the preprocessing (transformation) must have an efficient implementation. In general, we can construct a block diagonalizing transformation for any particular correlation matrix from its eigenvectors, but this requires a computationally expensive eigendecomposition and will not, in general, yield an efficient transformation matrix. Thus, we seek a fixed (environment independent) block diagonalizing transformation with an efficient implementation. The possibility of finding such a transformation will depend on the class of correlation matrices of interest.
One class of matrices that can be block diagonalized by a fixed and efficient transformation is the class of centrosymmetric matrices. An N × N matrix C is a centrosymmetric matrix if [6]

c_{N+1−m, N+1−n} = c_{m,n},    m, n = 1, …, N

where c_{m,n} denotes the element of C in the mth row and nth column. Depending on whether N is even (N = 2M) or odd (N = 2M + 1), we can write the centrosymmetric matrix C in one of the following forms [6]:

C = [  A    BJ ]                C = [  A    Jx    BJ  ]
    [ JB   JAJ ]                    [ z^T   β   z^T J ]
      (N = 2M)                      [ JB    x    JAJ  ]
                                        (N = 2M + 1)

where A and B are M × M matrices, x and z are M × 1 vectors, β is a scalar, and J is the M × M reverse diagonal (exchange) matrix. One can verify that even (N = 2M) and odd (N = 2M + 1) centrosymmetric matrices are block diagonalized by the unitary (orthonormal) matrices

Q = (1/√2) [ I    I ]           Q = (1/√2) [ I   0    I ]
           [ J   −J ]                      [ 0   √2   0 ]
             (N = 2M)                      [ J   0   −J ]
                                             (N = 2M + 1)    (50)

where I and J are M × M matrices and I is the identity matrix. Clearly, the transformation matrices defined in (50) can be implemented efficiently, requiring only the simple operations of addressing and addition.
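A quick numerical check of this block diagonalization, for the even case, can be sketched as follows (an illustrative symmetric Toeplitz correlation matrix is assumed):

```python
import numpy as np

M = 4
N = 2 * M
I = np.eye(M)
J = np.fliplr(np.eye(M))   # reverse diagonal (exchange) matrix

# Unitary block-diagonalizing transformation for even N = 2M; applying it
# requires only additions and index reversals.
Q = np.block([[I, I], [J, -J]]) / np.sqrt(2)

# A real, symmetric Toeplitz matrix (hence centrosymmetric), e.g. an
# AR(1)-type correlation matrix.
idx = np.arange(N)
C = 0.8 ** np.abs(np.subtract.outer(idx, idx))

D = Q.T @ C @ Q
assert np.allclose(Q.T @ Q, np.eye(N))   # Q is orthonormal
assert np.allclose(D[:M, M:], 0)         # off-diagonal blocks vanish
assert np.allclose(D[M:, :M], 0)
```

The two diagonal blocks of D are the correlation matrices seen by the two smaller matched filters after preprocessing.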
Observe that real, symmetric Toeplitz matrices and real, symmetric Toeplitz-block-Toeplitz matrices are subclasses of centrosymmetric matrices. Recall that the correlation matrix of a real, wide-sense stationary random process is a real, symmetric Toeplitz matrix [8]. Thus, we can replace a single channel system with a dual channel system, reducing the secondary data requirements and easing the computational cost, in any matched filter application involving real, wide-sense stationary random processes.
In applications where the correlation matrix is not centrosymmetric, the problem of block diagonalizing a family of correlation matrices with a fixed, efficient transformation becomes difficult. First, one must answer the question of whether or not a fixed transformation exists for the correlation matrices of interest. Basically, to block diagonalize a family of correlation matrices, we must find two independent subspaces that span the N-dimensional vector space (i.e., the vector space is the direct sum of the two subspaces), and the subspaces must be invariant to every member of the family. For example, in the case of centrosymmetric matrices, the vector space is the direct sum of the symmetric (i.e., x = Jx) subspace and the skew-symmetric (i.e., x = −Jx) subspace. Mathematical machinery is available for examining the existence issue (see [9, 10]). Second, assuming that one can find two invariant subspaces, we are still left with the problem of controlling computational cost. That is, can we select basis vectors spanning the two subspaces such that the resulting transformation has an efficient implementation? The answer to this question appears to be an open problem. In the absence of two invariant subspaces, we can attack the problem in an approximate sense. For example, the correlation matrix of a complex, stationary random process is a Hermitian Toeplitz matrix, which is not a subclass of the centrosymmetric matrices. However, we can approximately decorrelate (block diagonalize the correlation matrix of) a real or complex stationary random process using a filter bank consisting of a high-pass filter and a low-pass filter [11]. The use of a filter bank to decorrelate the signal is a central concept in subband image compression and subband adaptive filtering, where efficiency is also a key issue. The decorrelation properties of a filter bank will depend on the transition regions and stopband attenuation of the filters and on the characteristics of the interference plus noise. The price paid for only approximately decorrelating the interference plus noise is a loss in performance that cannot be regained by increased secondary data support, since the dual channel system is no longer equivalent to the optimal system even when the correlation matrix is known.

V. SIMULATION EXAMPLE
In this section, we present the results of a simulation example to demonstrate the reduced secondary data requirements of the dual channel matched filter system. The signal of interest is defined for n = 0, 1, …, 63, where u(n) is the unit step function. The interference plus noise environment consists of three uncorrelated interference sources and receiver noise. The signal from each interference source is a random-phase sinusoid at frequency f_i, where φ_i is a random variable uniformly distributed over the interval [0, 2π] and {f_1 = 0.051, f_2 = 0.23, f_3 = 0.41}. The correlation matrix of each interference source is a real, symmetric Toeplitz (centrosymmetric) matrix T_i. The receiver noise is modeled as white noise with a variance of 1 and is assumed to be uncorrelated with the interference. Thus, the interference plus noise correlation matrix is simply the sum of the identity matrix (receiver noise) and three real, symmetric Toeplitz matrices and hence is itself a real, symmetric Toeplitz matrix. The dual channel transformations V_1 and V_2 are constructed in accordance with (50). The dual channel system used only 64 secondary data vectors to compute the two 32 × 1 weight vectors. In contrast, the single channel system used 128 secondary data vectors to compute the 64 × 1 weight vector. Thus, the expected loss in SINR performance was 3 dB for both systems simulated (i.e., E{ρ_single} ≈ 0.5 ≈ E{ρ_dual}). The simulation results are summarized in Table I and are based on 20,000 Monte Carlo runs for each system. The predicted values in Table I were computed from a beta distribution with parameters 33 and 31.5 for the single channel system and from (40) and (41) with K = 64, N = 32, and k = 0.4934 for the dual channel system. Note the good agreement between the predicted and simulated values in Table I, further verifying the utility of the approximations given in (40) and (41). Fig. 6 shows the normalized SINR cumulative probability distribution curves for each system. The results in Table I and Fig. 6 show that the dual channel system achieves nearly the same performance as the single channel system with half the secondary data support.
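The interference plus noise correlation matrix of this example can be assembled directly. The sketch below assumes unit-amplitude sinusoids (the amplitudes are not specified here), for which the autocorrelation of a random-phase sinusoid at frequency f is 0.5 cos(2πf(m − n)), and confirms that the resulting matrix is symmetric Toeplitz and hence centrosymmetric:

```python
import numpy as np

N = 64
idx = np.arange(N)
diff = np.subtract.outer(idx, idx)   # lag m - n for each element

# Receiver noise (identity) plus three random-phase sinusoidal interferers;
# unit amplitudes are assumed here for illustration.
R = np.eye(N)
for f in (0.051, 0.23, 0.41):
    R += 0.5 * np.cos(2 * np.pi * f * diff)   # T_i for source i

J = np.fliplr(np.eye(N))
assert np.allclose(R, R.T)         # real, symmetric
assert np.allclose(J @ R @ J, R)   # centrosymmetric
```

Because R is centrosymmetric, the fixed transformation of (50) block diagonalizes it exactly, which is what allows the two 32 × 1 filters to replace the single 64 × 1 filter.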

VI. CONCLUSION
In this paper, we analyzed the SINR performance of a dual channel matched filter system under the assumption that the interference plus noise in one channel is uncorrelated with the interference plus noise in the other channel. We derived approximations for the mean and variance of the dual channel normalized SINR from an exact expression for the normalized SINR as a function of random variables with known distributions. Using the approximations, we showed that a dual channel system delivers nearly the same normalized SINR performance, with half the secondary data, as a single channel system designed to process both inputs. These results suggest the possibility of replacing a single channel system with a dual channel system using smaller weight vectors, leading to a reduction in the secondary data support and, potentially, a reduction in the computational cost. A key element in replacing a single channel system with a dual channel system is the decorrelation preprocessing, which requires the introduction of a transformation that block diagonalizes the correlation matrix. This preprocessing requirement introduces a new challenge: finding a fixed transformation that block diagonalizes the family of correlation matrices of interest and that has an efficient implementation. Depending on the family of correlation matrices, such a transformation may or may not exist. A family of matrices that can be efficiently block diagonalized with a fixed transformation is the family of centrosymmetric matrices, which includes the family of real, symmetric Toeplitz matrices. The correlation matrix of a real, wide-sense stationary random process is a real, symmetric Toeplitz matrix. Thus, in matched filter applications involving real, wide-sense stationary random processes, we can replace a single channel system with a dual channel system to reduce the secondary data requirements by approximately 50% and ease the computational cost.

Fig. 4. Sample variance of ρ_dual (i.e., V{ρ_dual}) based on 10,000 samples for each N and K, overlaid with the approximate V{ρ_dual} computed using (41) and K = 2N.

Fig. 5. Block diagram of the dual channel matched filter system with preprocessing (decorrelation) transformations V_1 and V_2.

Fig. 6. Cumulative probability distribution of the normalized SINR for the single channel and dual channel systems.