Noise Statistics Estimation Techniques for Robust GNSS Carrier Tracking



INTRODUCTION
Synchronization is a key stage at the core of any Global Navigation Satellite System (GNSS) receiver. It is typically performed via a two-step procedure: acquisition and tracking. First, the acquisition stage detects the presence of the desired signal and provides a coarse estimate of the synchronization parameters, that is, timing and frequency; then, the tracking stage refines those estimates and tracks their eventual time variations. GNSS were originally designed to operate under clear-sky conditions. Under this rather benign propagation scenario, traditional tracking techniques based on well-established delay and phase-locked loops (DLL/PLL) provide good estimation performance [1]. In general, however, those standard techniques are prone to fail in non-nominal propagation conditions such as high dynamics, shadowing, strong fading, multipath or ionospheric scintillation [2,3]. In these situations, synchronization becomes a very challenging task and robust tracking techniques must be considered. Focusing on the carrier phase tracking problem, Kalman filter (KF)-based architectures have been shown to overcome some of the limitations of PLL-based approaches, and appear as a serious alternative in the design of modern GNSS receivers [4]. In this contribution, we are interested in the derivation of a theoretically founded, fully adaptive, KF-based robust carrier phase tracking method. Hereafter, we summarize the main advantages of KFs over PLLs:
• The KF is formulated from an optimal filtering standpoint.
• The recursive Kalman gain provides an implicit adaptive and optimal filter bandwidth, computed from the Gaussian noise statistics and the prediction/estimation error covariances, in contrast to the heuristically adjusted PLL bandwidth (i.e., loop filter coefficients).
• A nonlinear KF formulation allows the filter to operate directly on the received signal samples, avoiding the problems associated with the use of phase discriminators, i.e., nonlinearities, loss of Gaussianity and possible saturation.
In their standard form, KF carrier phase tracking methods have two main problems: 1) as in legacy PLL solutions, the filter still relies on a discriminator-based architecture, and 2) it assumes complete knowledge of the system conditions, that is, of the process and measurement noise parameters affecting the system (i.e., the Gaussian noise covariance matrices). These assumptions do not hold, in general, in real-life applications, and lead to poor performance in unknown time-varying scenarios. The first point can be solved by using an extended KF (EKF) directly operating with the complex samples at the output of the prompt correlator, thus avoiding the discriminator. The theoretical solution to the second point is given by the so-called adaptive KFs (AKFs), which sequentially adapt the Gaussian noise parameters to the actual working conditions. Although several discriminator-based AKF implementations have been proposed in the literature, they typically only adapt the measurement noise variance or rely on somewhat heuristic adjustment rules. We provided a comprehensive discussion on the identifiability of noise statistics and general AKF design rules for GNSS carrier tracking in [5]. Simulation results showed that not only does the measurement noise variance play an important role in the filter performance (it is typically adjusted in standard AKFs from the C/N_0 estimator already available at the receiver), but a correct recursive estimation of the process noise covariance matrix, Q_k, is a key point in robust filtering design. In [5] we pointed out some alternatives to perform such estimation, but a performance comparison and an efficient robust/adaptive EKF design (i.e., avoiding traditional discriminator-based architectures) are still missing. Note that some traditional techniques have been tested using a standard discriminator-based KF in [6].
The main goal of this article is to test the estimation capabilities of covariance estimation techniques within the GNSS carrier tracking problem, providing a general robust filter formulation. Several alternatives have been proposed in the literature to estimate the noise covariances within the KF framework, with correlation methods being the most popular techniques. The idea behind those approaches is that the correlation function of the innovation sequence, provided by the filter, is directly related to the unknown parameters. Within this family, a very appealing solution, the Autocovariance Least Squares (ALS) method, has been shown to generally outperform the other correlation methods, being the most promising approach derived in the last decade [7]. To summarize, we list the main contributions:
• Discussion on the impact of unknown noise parameters on the filter performance, the identifiability of such noise statistics and standard AKF design rules.
• Up-to-date state-of-the-art on Gaussian noise covariance estimation techniques within the KF framework.
• Performance comparison of the most representative covariance estimation techniques for GNSS carrier tracking, in particular in challenging, time-varying scenarios such as those encountered under high dynamics, ionospheric scintillation or fast fading.
• New adaptive EKF (AEKF) framework to address the robust carrier tracking problem.

Kalman Filtering Background
In general, the optimal filtering problem involves the recursive estimation of the unknown states of a system at time k, x_k, from the available measurements, y_{1:k}. The Gaussian state-space models (SSM) of interest can be expressed as

x_k = f_{k−1}(x_{k−1}) + v_k,
y_k = h_k(x_k) + n_k,

where x_k ∈ R^{n_x} and y_k ∈ R^{n_y} are the hidden states of the system and the measurements at time k; f_{k−1}(·) and h_k(·) are known, possibly nonlinear functions; and v_k and n_k are referred to as the process and measurement noises (assumed mutually independent stochastic processes), with v_k ∼ N(0, Q_k) and n_k ∼ N(0, R_k). The optimal Bayesian filtering solution [8] is given by the marginal posterior distribution p(x_k|y_{1:k}), which gathers all the information about the system contained in the available observations. The multidimensional integrals in the prediction and update steps [9] needed to compute the posterior distribution are analytically intractable in the general case. Actually, there are few cases where the optimal Bayesian recursion can be analytically solved. This is the case of linear/Gaussian models, where the KF yields the optimal solution [10]. The Gaussian predictive and posterior distributions are then approximated as

p(x_k|y_{1:k−1}) ≈ N(x̂_{k|k−1}, P_{k|k−1}), p(x_k|y_{1:k}) ≈ N(x̂_{k|k}, P_{k|k}),

and, in the linear case f_{k−1}(x_{k−1}) = F_{k−1}x_{k−1} and h_k(x_k) = H_k x_k, the recursive KF equations are given by
• Prediction:
x̂_{k|k−1} = F_{k−1} x̂_{k−1|k−1}, P_{k|k−1} = F_{k−1} P_{k−1|k−1} F_{k−1}' + Q_{k−1}.
• Innovation & Measurement Update:
z_k = y_k − H_k x̂_{k|k−1}, K_k = P_{k|k−1} H_k' (H_k P_{k|k−1} H_k' + R_k)^{−1},
x̂_{k|k} = x̂_{k|k−1} + K_k z_k, P_{k|k} = (I − K_k H_k) P_{k|k−1}.

If the system is nonlinear, the traditional solution is given by the extended KF (EKF) [8], which linearizes the nonlinear process and measurement functions around the estimated and predicted states, respectively, and directly applies the KF equations, i.e.,

F_{k−1} = ∇_x f_{k−1}(x)|_{x = x̂_{k−1|k−1}}, H_k = ∇_x h_k(x)|_{x = x̂_{k|k−1}},

with the vector differential operator defined as ∇ = [∂/∂x_1, ..., ∂/∂x_{n_x}]. Even if more powerful filtering techniques such as sigma-point KFs [11,12] can be used to deal with nonlinear/Gaussian systems, they are not considered in this contribution because the carrier tracking problem is only mildly nonlinear. Notice that the optimal KF equations depend on both noise statistics, i.e., Q_k and R_k.
In real-life applications, where these parameters are unknown to a certain extent, the filter is suboptimal and its performance mainly depends on the quality of the covariance estimates used in the filter implementation, Q̂_k and R̂_k.
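As an illustration, the prediction and innovation/update steps above can be sketched in a few lines of code (a minimal sketch; the function name and the small example system below are ours, not from the paper):

```python
import numpy as np

def kf_step(x_est, P_est, y, F, H, Q, R):
    """One prediction + innovation/update step of the linear Kalman filter."""
    # Prediction
    x_pred = F @ x_est
    P_pred = F @ P_est @ F.T + Q
    # Innovation and measurement update
    z = y - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ z
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new, z
```

Note that the gain K is recomputed from Q and R at every step, which is exactly where misspecified noise covariances degrade the filter.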

GNSS Carrier Tracking SSM Signal Formulation
In GNSS receivers, the carrier tracking stage is in charge of recursively estimating the line-of-sight (LOS) phase variations due to the relative movement between the satellite and the receiver. The phase evolution is typically modeled using a 2nd or 3rd order Taylor approximation, depending on the expected system dynamics,

θ_k = θ_{k−1} + f_{k−1} T_s + (1/2) ḟ_{k−1} T_s², f_k = f_{k−1} + ḟ_{k−1} T_s, ḟ_k = ḟ_{k−1},

where k refers to the discrete-time instants; T_s is the sampling period (i.e., the correlator output rate); and θ_k, f_k (Hz) and ḟ_k (Hz/s) refer to the carrier phase, Doppler shift and Doppler frequency rate, respectively. Then, the state to be tracked can be defined as x_{2,k} ≜ [θ_k, f_k]' (2nd order) or x_{3,k} ≜ [θ_k, f_k, ḟ_k]' (3rd order), with the corresponding transition matrices

F_2 = [1 T_s; 0 1], F_3 = [1 T_s T_s²/2; 0 1 T_s; 0 0 1],

where the phase is expressed in cycles ([rad/2π]). The process noise v_k in (1) stands for any possible modeling mismatch. After the acquisition stage, the samples at the output of the prompt correlator can be modeled as [13]

y_k = A_k e^{jθ_k} + n_k, n_k ∼ CN(0, σ²_{n,k}),

where A_k and θ_k may include any non-nominal harsh propagation disturbance. These equations define the general SSM formulation of the GNSS carrier tracking problem.
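The 3rd order transition matrix and the prompt correlator measurement model can be sketched as follows (a hedged illustration; the value T_s = 10 ms and the amplitude/noise defaults are assumptions, and the phase is kept in cycles, hence the 2π factor in the complex exponential):

```python
import numpy as np

Ts = 0.01  # assumed correlator output period (10 ms)

# 3rd order transition matrix; state x = [theta (cycles), f (Hz), fdot (Hz/s)]
F3 = np.array([[1.0, Ts, Ts**2 / 2.0],
               [0.0, 1.0, Ts],
               [0.0, 0.0, 1.0]])

def prompt_correlator_output(x, A=1.0, sigma_n=0.0, rng=None):
    """Prompt correlator sample y_k = A e^{j 2 pi theta_k} + n_k (theta in
    cycles, hence the 2*pi factor); sigma_n is the total complex-noise std."""
    rng = rng if rng is not None else np.random.default_rng()
    n = sigma_n * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2.0)
    return A * np.exp(2j * np.pi * x[0]) + n
```

For instance, propagating x = [0 cycles, 50 Hz, 100 Hz/s] one step gives a phase of 50·T_s + 100·T_s²/2 = 0.505 cycles and a Doppler of 51 Hz.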

Equivalence Between Standard PLLs and KFs
A standard PLL is built up from three main blocks: i) first, the incoming signal goes through a phase discriminator, which produces an error signal proportional to the phase estimation error; ii) then, a loop filter is in charge of filtering out noise and driving this error to zero; and finally, iii) a numerically controlled oscillator (NCO) integrates the error to produce the predicted phase used to generate the local replica. The block diagram of a 2nd order PLL is shown in Figure 1 [4]. For instance, the predicted phase (i.e., the output of the NCO) in a standard 2nd order PLL can be expressed in terms of the loop filter coefficients, α_1 and α_2, and the discriminator output, ε_k, at time k. This recursion can be easily written in SSM form as

x̂^{PLL}_{k+1} = F x̂^{PLL}_k + [α_1, α_2]' ε_k,

with x̂^{PLL}_k = [θ̂_k, f̂_k]' and ε_k given in cycles. This is strictly equivalent to a second order linear KF with constant gain K (i.e., the block diagram shown in Figure 2 [4]),

x̂_{k+1|k} = F x̂_{k|k−1} + K ε_k,

where the error signal, ε_k = θ_k − θ̂_{k|k−1}, is again the output of the KF discriminator, and the first component of the KF state vector is the predicted phase θ̂_{k|k−1}. The equivalence between a 3rd order PLL and the corresponding KF is straightforward [14]. It is important to notice that the PLL and the KF are equivalent only if: i) the KF gain coefficients are constant, and ii) we are able to compute the optimal PLL loop filter coefficients. In general, while the KF gain is optimal and time-varying, the PLL coefficients are constant and set somewhat heuristically; therefore, the PLL must be seen as a suboptimal KF implementation. A comprehensive discussion on PLLs versus KFs for carrier phase synchronization is given in [4].
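The constant-gain SSM view of the 2nd order PLL can be sketched as below (an illustrative sketch; the coefficient convention and variable names are ours):

```python
import numpy as np

def pll2_as_kf(x_pred, eps, Ts, a1, a2):
    """One step of a 2nd order PLL written as a constant-gain KF.
    x_pred = [theta (cycles), f (Hz)] is the current prediction, eps is the
    discriminator output (cycles); a1, a2 play the role of a fixed KF gain."""
    F = np.array([[1.0, Ts], [0.0, 1.0]])
    K = np.array([a1, a2])
    return F @ (x_pred + K * eps)   # correct with eps, then predict ahead
```

With a zero discriminator output the state simply propagates through F, which matches the NCO integration step.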

Impact of Unknown System Gaussian Noise Parameters
It is well known that the KF is only optimal when both noise covariance matrices are perfectly known. In practice, these parameters are unknown to a certain extent, so the filter is in general suboptimal. The final filter performance mainly depends on how far our covariance estimates are from their optimal values. In this section we briefly show the impact of a wrong initialization of these noise covariances, using a simple scalar linear time-invariant SSM,

x_k = F x_{k−1} + v_k, y_k = H x_k + n_k.

The root mean square error (RMSE) under a wrong initialization of one of the two noise variances (i.e., with the other one fixed to its correct value) is shown in Fig. 3. It is clear that the lowest RMSE is obtained with the true system variances (i.e., σ²_v = 10 and σ²_n = 1). As these values deviate from the correct ones, the filtering error increases. It is worth pointing out that: i) if both noise variances are misspecified, the filter performance is even worse than the results shown in Fig. 3, and ii) it is always better to overestimate the noise variances, because underestimated noise parameters may lead to divergence of the filter.
Fig. 3. Impact on the filter mean RMSE of a wrong process noise variance σ²_v (left) and measurement noise variance σ²_n (right).
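This experiment is easy to reproduce; the sketch below runs a scalar KF with possibly misspecified noise variances and returns the state RMSE (a minimal sketch assuming a random-walk state, F = H = 1, which may differ from the exact system used in the paper):

```python
import numpy as np

def scalar_kf_rmse(q_used, r_used, q_true=10.0, r_true=1.0, N=2000, seed=0):
    """RMSE of a scalar KF on x_k = x_{k-1} + v_k, y_k = x_k + n_k,
    run with possibly misspecified noise variances q_used, r_used."""
    rng = np.random.default_rng(seed)
    x, x_est, P = 0.0, 0.0, 1.0
    se = 0.0
    for _ in range(N):
        x = x + rng.normal(0.0, np.sqrt(q_true))   # true state evolution
        y = x + rng.normal(0.0, np.sqrt(r_true))   # measurement
        P_pred = P + q_used                        # prediction (F = 1)
        K = P_pred / (P_pred + r_used)             # gain with assumed noise
        x_est = x_est + K * (y - x_est)            # update (H = 1)
        P = (1.0 - K) * P_pred
        se += (x - x_est) ** 2
    return np.sqrt(se / N)
```

Running it with an underestimated process noise (e.g., q_used = 0.01 against q_true = 10) produces a far larger RMSE than either the true or an overestimated value, consistent with observation ii) above.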

On the identifiability of Gaussian Noise Statistics and standard AKF Design
In the signal processing literature, the identifiability of the noise statistics parameters within the KF is still an open problem. To the best of our knowledge, it is still not clear whether both noise variances can be jointly estimated correctly. We discussed some ideas and the standard (discriminator-based) AKF design rules for GNSS carrier tracking in [5]. For the sake of completeness, we summarize the main ideas in the sequel. An intuitive approach to estimate the system Gaussian noise covariances is to first define an extended state-space which gathers all the unknowns (i.e., the original states and the noise covariances), x̃_k = {x_k, Q_k, R_k}, and then use a single KF to track the full state. Consider that, at a given time k, the state and noise parameters are independent and jointly Gaussian. After the KF prediction step, the states and noise parameters are dependent and no longer jointly Gaussian. As the standard KF formulation relies on the Gaussian assumption and the propagation of the first two moments of the distribution of interest, the interconnection between the state and parameter estimates is lost (i.e., the third-order cross-conditional moment is not null, but it is discarded by the KF). Moreover, the nonlinear dependency among variables is not taken into account in the filter formulation. To sum up, the noise parameters are not identifiable in this setting [7], hence a sound AKF design rule is to use separate (interacting) methods to estimate the states and the noise parameters.
Another point of critical importance is the ability to correctly estimate both covariances. In [15], the author states that "it is impossible to distinguish between overspecification of the model error and underspecification of the measurement error, and vice versa". This happens because all the information used to infer these parameters must be obtained from the filter itself, and since the KF prediction and update steps are always interconnected, all the computations within the filter depend on both noise covariances. Hence, we suggested in [5] to fix (or estimate using another method) one of the two covariance matrices and only estimate the other one. In practice, for the GNSS carrier tracking problem at hand, one possibility is to compute the measurement noise variance at the output of the discriminator (standard AKF architecture) from the C/N_0 estimate already available at the receiver. For a complete discussion of these AKF design rules refer to [5]. Notice that in this contribution we are interested in adaptive EKF architectures avoiding the discriminator, which in general provide better tracking capabilities.

State-of-the-art
In the literature we find a plethora of methods and different approaches to face the noise statistics estimation problem. In the early '70s, Mehra [16] classified the existing methods into four categories: Bayesian, maximum likelihood (ML), covariance matching and correlation methods. A more general classification (vis-à-vis Mehra's paper) is to consider two main groups: on-line and off-line methods [17]. The four groups introduced by Mehra (adaptive methods) fall into the on-line category. In the second group we can include subspace and prediction error estimation methods. To solve the recursive Bayesian estimation problem we need on-line noise statistics estimation methods that can be embedded into the filter structure, and consequently in this contribution we only consider on-line approaches.
• Bayesian approach: the solution is given by the recursive computation of the joint posterior distribution of the states and parameters (which may include parameters from both the noise statistics and process/measurement functions). Typically, the main problem is the high computational complexity.
• ML methods: these solutions find the estimates that maximize the parameters' likelihood function [18,19,20], which may not admit a closed-form expression, in which case one resorts to iterative tools such as the Expectation-Maximization (EM) algorithm [21]. The algorithm proposed in [20] was modified in [22], showing the similarities with the ML solution in [19] and the covariance matching method [23].
• Covariance matching: this method computes the process and measurement noise covariances from the residuals of the state estimation problem (i.e., prediction minus estimation). The idea is to make these residuals consistent with their theoretical covariances [23,24]. This is equivalent to the ML solution [19] if the system noise is zero-mean [22].
• Correlation methods: pioneered by Mehra [16] and Bélanger [25] in the '60s and '70s, these are the most popular techniques for Gaussian noise covariance estimation within the KF framework. The idea behind those approaches is that the correlation function of the innovation sequence, provided by the filter, is directly related to the unknown parameters [26]. Within this family, a very appealing solution, the Autocovariance Least Squares (ALS) method [27,28,29], has been recently proposed and shown to generally outperform the other correlation methods, being the most promising approach derived in the last decade [7].
In the following we detail the mathematical formulation of the widely used covariance matching and traditional correlation-based methods, and of the recently proposed ALS algorithm.

Estimation of the measurement noise variance
The measurement model is given by (2), which for a linear (or linearized) system is

y_k = H_k x_k + n_k,

with n_k ∼ N(0, R_k). In the following we first consider a constant noise covariance, R. Within the KF, an approximation of the measurement noise at time k may be obtained from the post-fit residual

r_k = y_k − H_k x̂_{k|k}.

If N such residuals are available, an unbiased estimator of R is obtained as [23]

R̂ = (1/N) Σ_{j=1}^{N} ( r_j r_j' + H_j P_{j|j} H_j' ),

where the second term compensates for the correlation between the state estimate and the measurement noise. If the measurement noise covariance is time-varying, [23] proposes a recursive estimation of R_k over a sliding window of the last L_r residuals, r_j (j = k − L_r + 1, ..., k), computed in the same way for k > L_r.
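A minimal sliding-window sketch of this covariance-matching estimate of R (class and variable names are ours; the bias-compensation term follows the Myers-Tapley idea described above):

```python
import numpy as np
from collections import deque

class MeasNoiseCM:
    """Sliding-window covariance-matching estimate of R (a simplified
    sketch). Uses post-fit residuals r_k = y_k - H x_est and adds back the
    estimated-state contribution H P_est H'."""
    def __init__(self, window=100):
        self.buf = deque(maxlen=window)

    def update(self, r, H, P_est):
        # each sample contributes r r' + H P H' (unbiased for R in expectation)
        self.buf.append(np.outer(r, r) + H @ P_est @ H.T)
        return sum(self.buf) / len(self.buf)
```

The deque with a fixed maxlen implements the L_r-sample sliding window, so the estimate naturally tracks slow variations of R_k.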

Estimation of the process noise variance
The process model is given by (1), which for a linear (or linearized) system is

x_k = F_{k−1} x_{k−1} + v_k,

with v_k ∼ N(0, Q_k). Again, we first consider a constant noise covariance, Q. In this case, an approximation of the process noise at time k is given by the state residual

v̂_k = x̂_{k|k} − F_{k−1} x̂_{k−1|k−1}.

Considering N such residuals, an unbiased estimator of the process noise covariance is computed as [24]

Q̂ = (1/N) Σ_{j=1}^{N} ( v̂_j v̂_j' + P_{j|j} − F_{j−1} P_{j−1|j−1} F_{j−1}' ).

If the process noise covariance is time-varying, it can be recursively computed over a sliding window of the last L_q residuals. The main problem of this method is that it does not guarantee the positive definiteness of the covariance matrices, so one must consider some heuristic modification, such as taking the absolute value of the elements of the main diagonal at each iteration.
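A corresponding sketch for the process noise covariance, including the covariance compensation term and the heuristic diagonal fix mentioned above (names are ours):

```python
import numpy as np
from collections import deque

class ProcNoiseCM:
    """Sliding-window covariance-matching sketch for Q. Uses state residuals
    v_hat = x_est - F x_prev with a covariance compensation term; the
    diagonal is forced non-negative as the heuristic safeguard."""
    def __init__(self, window=100):
        self.buf = deque(maxlen=window)

    def update(self, x_est, P_est, x_prev, P_prev, F):
        v_hat = x_est - F @ x_prev
        # E[v_hat v_hat'] = F P_prev F' + Q - P_est for the optimal filter
        self.buf.append(np.outer(v_hat, v_hat) - F @ P_prev @ F.T + P_est)
        Q_hat = sum(self.buf) / len(self.buf)
        np.fill_diagonal(Q_hat, np.abs(np.diag(Q_hat)))  # heuristic PD fix
        return Q_hat
```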

Traditional Correlation Method (CC)
In optimal filtering, the so-called innovation property [30] states that the innovation sequence,

z_k = y_k − H_k x̂_{k|k−1},

is a zero-mean Gaussian white noise sequence. If the filter is suboptimal, that is, if Q_k and R_k are unknown to a certain extent, the innovation sequence is correlated, and it can be used to obtain unbiased estimates of the system noise covariance matrices. In the following, we consider a time-invariant system (i.e., F_{k−1} = F, H_k = H, Q_k = Q, R_k = R). If the filter has reached its steady-state regime, the prediction error covariance Σ_x is given by the Riccati-type equation

Σ_x = G Σ_x G' + Q + F K R K' F',

with G = F − FKH. Then, the lag-p innovation autocovariance, C_p = E{z_k z_{k−p}'}, is given by

C_0 = H Σ_x H' + R, C_p = H G^{p−1} (F Σ_x H' − F K C_0), p ≥ 1.

Notice that for the optimal Kalman gain, K = Σ_x H'(H Σ_x H' + R)^{−1}, we have C_p = 0 for all p ≠ 0, which is the innovation property stated above. We can easily obtain an estimate of the autocovariance from N_s innovation samples as

Ĉ_p = (1/(N_s − p)) Σ_{k=p+1}^{N_s} z_k z_{k−p}'.

The method to estimate Q and R proposed by [30] is a three-step procedure: i) first, solve a least-squares problem to estimate Σ_x H'; then ii) solve the least-squares problem to obtain R using the previous estimate; and finally iii) use these two estimates to solve the problem for Q. Stacking the innovation autocovariance expression for lags p = 1, ..., N yields the linear system

[Ĉ_1 + HFKĈ_0; Ĉ_2 + HGFKĈ_0; ...; Ĉ_N + HG^{N−1}FKĈ_0] = [HF; HGF; ...; HG^{N−1}F] Σ_x H',

which is solved in the least-squares sense as Σ̂_xH' = A^# b, with A^# the pseudo-inverse of the stacked matrix A on the right-hand side. Then, we obtain the measurement noise covariance matrix as

R̂ = Ĉ_0 − H (Σ̂_x H').

Finally, for the estimation of the process noise covariance matrix, we may build a set of equations from the steady-state Riccati equation above, rewritten as

Σ_x − F Σ_x F' − Q = Ω, with Ω = F(−K H Σ_x − Σ_x H' K' + K C_0 K') F'.

Notice that the right-hand term Ω can be computed using Σ̂_xH' and Ĉ_0, because (Σ_xH')' = HΣ_x; therefore, only the left-hand term depends on Q. It is known that the estimation of the process noise covariance is problematic, because the resulting set of equations is not linearly independent, and thus one must choose a linearly independent subset of these equations [30]. A unique solution can be obtained only if the number of unknown elements of Q is less than or equal to n_x × n_y. In [16] it is suggested to directly estimate the Kalman gain K and, if needed, estimate the process noise covariance matrix Q using the maximum likelihood solution. In the direct gain estimation, the optimal Kalman gain is written in terms of the observation autocorrelation, C_{y,p} = E{y_k y_{k−p}'}, and an auxiliary matrix Υ obtained from a companion linear system of equations [16]; an estimate of the gain then follows from the estimate Σ̂_x H'. The main disadvantage of this method is that it is designed to deal with time-invariant systems (i.e., constant F, H, Q and R). The estimation of time-varying noise covariances can be somewhat tackled by using a windowing procedure, that is, processing only the last L samples of the innovation sequence.
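The first ingredient of any correlation method, the sample innovation autocovariance Ĉ_p, can be sketched as follows (a minimal illustration; the function name is ours):

```python
import numpy as np

def innovation_autocov(z, max_lag):
    """Sample autocovariance C_p of an innovation sequence, p = 0..max_lag.
    z has one row per time step. For an optimal filter, C_p ≈ 0 for p > 0
    (the innovation whiteness property)."""
    N, ny = z.shape
    C = []
    for p in range(max_lag + 1):
        C.append(sum(np.outer(z[k], z[k - p]) for k in range(p, N)) / (N - p))
    return C
```

Checking how far Ĉ_p (p > 0) is from zero is also a cheap whiteness test of the running filter.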

Autocovariance Least Squares (ALS) Method
Considering again a linear time-invariant system, the state prediction error, e_k = x_k − x̂_{k|k−1}, evolves according to

e_{k+1} = G e_k + v_k − F K n_k, G = F − FKH,

and the steady-state prediction error covariance matrix can be written as

Σ_x = G Σ_x G' + Q + F K R K' F'.

The ALS is a single-step procedure that obtains unbiased estimates of both the process and measurement noise covariance matrices from the autocovariance of the innovations [27,29,31]. Hereafter we detail the one-column version of the ALS [29,31]. We use ⊗ for the Kronecker product, and A^s stands for the column-wise stacking of matrix A into a vector. If we define the vector of parameters to be estimated as θ = [(Q^s)', (R^s)']', the ALS solves the least-squares problem ∆θ = b as

θ̂ = arg min_θ ||∆θ − b||² = ∆^# b,

where ∆ is a known matrix built from F, H and the filter gain K (its construction involves Kronecker products and the inverse (I − G ⊗ G)^{−1}; see [29,31] for the explicit expressions), and b̂ stacks the autocovariance unbiased estimates Ĉ_p obtained from N_s innovation samples. Again, this method is designed to deal with time-invariant systems (i.e., constant F, H, Q and R), so in its standard form we must consider a windowing procedure to deal with time-varying noise covariances. An extension to deal with a time-varying measurement transition matrix H_k was proposed in [31].
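For a scalar system, the joint least-squares idea reduces to a small fit of (q, r) to the innovation autocovariances, using the steady-state closed-form expressions of C_0 and C_p for a fixed-gain filter. The sketch below is an illustrative scalar specialization written by us, not the general matrix ALS of [29]:

```python
import numpy as np

def als_scalar(z, F, H, K, max_lag=5):
    """Scalar ALS-style fit: least squares of theta = (q, r) against the
    estimated innovation autocovariances C_0..C_max_lag, for a scalar LTI
    system run with a fixed (possibly suboptimal) gain K."""
    N = len(z)
    C_hat = np.array([z[p:] @ z[:N - p] / (N - p) for p in range(max_lag + 1)])
    G = F - F * K * H                 # closed-loop prediction factor
    s = 1.0 / (1.0 - G**2)            # Sigma_x = s * (q + (F*K)**2 * r)
    A = np.zeros((max_lag + 1, 2))
    A[0] = [H**2 * s, H**2 * (F * K)**2 * s + 1.0]           # C_0 row
    for p in range(1, max_lag + 1):
        A[p, 0] = H**2 * G**p * s
        A[p, 1] = H**2 * G**p * (F * K)**2 * s - H * G**(p - 1) * F * K
    theta, *_ = np.linalg.lstsq(A, C_hat, rcond=None)
    return theta                      # (q_hat, r_hat)
```

Both unknowns are recovered in a single least-squares solve, which is the defining feature of the ALS compared to the three-step correlation procedure.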

Illustrative Example #1: Variance Estimation in a Scalar Linear Time-invariant System
In this section we assess the estimation performance of the three methods previously introduced for a scalar linear time-invariant system, with true noise variances σ²_v = 10 and σ²_n = 1. The three methods are initialized to σ̂²_v = 3σ²_v = 30 and σ̂²_n = 10σ²_n = 10, and the estimation starts after k = 100 time steps. Figure 4 (left) shows the estimation performance obtained with the CM method for three different scenarios: i) estimation of the measurement noise σ²_n, considering a known σ²_v; ii) estimation of the process noise σ²_v, considering a known σ²_n; and iii) joint estimation of both noise variances. Notice that while the CM standalone estimation of one noise variance works well, the joint estimation is not able to correctly identify both variances. It is clear from the figure that the method underestimates σ²_v and overestimates σ²_n, which confirms the identifiability problem of the two noise variances [5]. Figure 4 (middle) shows the estimation performance of the correlation-based CC method. In this case, even if the estimates are less smooth than with the CM, the joint estimation of both noise variances provides a better identification, although the overestimation/underestimation effect is still present. Similar performance is obtained with the ALS for the joint estimation of the process and measurement noise variances, shown in Figure 4 (right), but in this case the identification of both noise variances is even better than with the CC. Notice that these figures show one realization of each algorithm, so we cannot unequivocally conclude that the correlation-based methods are always able to correctly identify the noise variances. We also computed the mean RMSE of the steady-state state estimation for the different noise variance estimation methods, obtained from 100 Monte Carlo runs, shown in Table 1.
We can see that, even with the overestimation/underestimation incurred by the joint estimation of both noise variances, the filters provide a good estimation performance. These results confirm that what matters most is to have a Kalman gain close to the optimal one (which depends on both noise variances), and that errors in the individual noise variances may compensate each other in the gain computation.

Illustrative Example #2: Covariance Q Estimation in a Multivariate Linear Time-invariant System
In this section, we consider the CM and CC estimation for a multivariate linear time-invariant system where Q is a 2×2 matrix. Both methods are initialized to Q̂ = 5Q and σ̂²_n = 5σ²_n = 10, and the estimation again starts after k = 100 time steps. The results obtained with the CM are shown in Figure 5, where we can see, for the joint estimation of σ²_n and Q, that the estimation of the measurement noise variance is not correct. Again, this confirms the possible difficulties in the identifiability of both noise parameters. In the case of the CC method, in Figure 6, the estimates are much noisier but their mean values tend to the correct quantities. The filter tries to split at every iteration the contribution of each noise term, but it has some difficulty clearly identifying the different parameters. Notice that we do not plot the CC standalone estimation of Q because the method estimates the process noise covariance after the estimation of σ²_n. We also computed the mean RMSE of the steady-state state estimation for the different noise variance estimation methods, shown in Table 2. The conclusions are the same as in the scalar example #1.

Illustrative Example #3: Covariance R Estimation in a Multivariate Linear Time-invariant System
Finally, to complete the initial analysis, we consider the ALS estimation for a multivariate linear time-invariant system where, instead of the state, the measurement is now multidimensional (R is a 2×2 matrix). The ALS is initialized to R̂ = 10R and σ̂²_v = 10σ²_v = 50, and the estimation again starts after k = 100 time steps. In this case, we have the joint estimation of both noise parameters. The results are shown in Figure 7, where we can see that in this example the ALS provides a good estimation of both the process noise variance and the measurement noise covariance elements. Notice that the ALS has a faster convergence when compared to the CM, and a smoother estimate compared to the CC.
Overall, we conclude from this analysis that these methods have good noise covariance estimation capabilities, each with its own advantages and disadvantages. In the sequel, we assess their performance in the GNSS carrier tracking case of interest.
Fig. 7. Noise covariance estimation using the ALS method: estimation of the diagonal elements of R (right) and of σ²_v (left), for illustrative example #3.

Robust Carrier Tracking Filter Design
We propose to use a nonlinear KF implementation which directly operates with the received signal complex samples, namely, an EKF-type solution. The EKF is known to provide better performance than the standard carrier tracking KFs because the filter avoids the use of a discriminator: the discriminator is a nonlinear operator, so it breaks the Gaussianity of the signal and may saturate in weak-signal, harsh propagation scenarios. We consider a real-valued signal model and a 3rd order Taylor approximation of the phase evolution. In the sequel we recall the SSM formulation for completeness. The measurement model is the real-valued stacking of the complex prompt correlator output,

y_k = [y_{i,k}, y_{q,k}]' = A_k [cos(θ_k), sin(θ_k)]' + [n_{i,k}, n_{q,k}]' = h(x_k) + n_k,

where y_k = y_{i,k} + j y_{q,k} and n_k = n_{i,k} + j n_{q,k} in complex notation. The LOS carrier phase evolution, the state to be tracked, x_k = [θ_k, f_k, ḟ_k]', and its transition matrix F_3 are those given in the SSM signal formulation section. The key idea behind the EKF is to linearize the nonlinear measurement function around the predicted state, in order to apply the standard linear KF equations. In this case,

H_k = ∇_x h(x)|_{x = x̂_{k|k−1}} = A_k [−sin(θ̂_{k|k−1}), 0, 0; cos(θ̂_{k|k−1}), 0, 0].

From these equations it is straightforward to build a carrier tracking EKF. On top of the KF-based core we consider a Gaussian noise covariance estimation method (i.e., CM, CC or ALS), which interacts with the filter at each prediction and update step. The goal is to obtain an iterative estimation of the noise parameters and robustify the filter, which must be able to adapt to time-varying scenarios and propagation conditions.
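The measurement function and its Jacobian (the EKF linearization step) can be sketched as follows (phase in radians here, for simplicity; names are ours):

```python
import numpy as np

def h(x, A=1.0):
    """Real-valued stacking of the prompt correlator output (phase in rad)."""
    return np.array([A * np.cos(x[0]), A * np.sin(x[0])])

def ekf_meas_jacobian(x_pred, A=1.0):
    """Jacobian of h w.r.t. x = [theta, f, fdot], evaluated at the predicted
    state: the EKF linearization step (only theta enters the measurement)."""
    return np.array([[-A * np.sin(x_pred[0]), 0.0, 0.0],
                     [ A * np.cos(x_pred[0]), 0.0, 0.0]])
```

Only the first column of H_k is nonzero: the Doppler terms affect the measurement only through their contribution to the predicted phase.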

Computer Simulations
The performance of the new AEKF method is analyzed in representative GNSS carrier tracking scenarios. The results are compared to the current discriminator-based state-of-the-art techniques: i) a 3rd order PLL with an equivalent bandwidth B_w = 10 Hz; ii) a standard KF considering a nominal (initial) C/N_0; iii) an adaptive KF (AKF), adjusting the measurement noise variance from the C/N_0 estimate; and iv) the discriminator-free EKF with known covariance matrices, which serves as the benchmark for the new robust AEKF performance. We consider a time-varying Doppler profile given by an initial random phase in [−π, π], an initial Doppler f_0 = 50 Hz and a constant rate ḟ_0 = 100 Hz/s. The signal is sampled at T_s = 10 ms. The measure of performance is the carrier phase RMSE averaged over 100 runs.

Estimation of the measurement noise covariance R k
To assess the performance of the measurement noise covariance estimation, we consider an initial C/N_0 = 30 dB-Hz, with a sudden drop to C/N_0 = 25 dB-Hz at the middle of the sequence. The results obtained for the different standard methods and for the AEKF using the CM method to estimate the measurement noise covariance (AEKF-CM-R) are shown in Fig. 8. The AEKF is initialized to R̂_k = 10R_k. The left plot shows the carrier phase RMSE, where we can see that the AEKF performs optimally, in the sense that its performance matches that of the EKF with known covariances. It is worth noting the impact of using a discriminator: the PLL, KF and AKF always perform worse than the EKF-type solutions.
The right plot shows one realization of the CM estimation of the diagonal elements of R_k. The CM waits L = 100 samples to start the estimation, and then uses a sliding window of the same size. The method converges quickly to the true value and is able to correctly track the C/N_0 drop at the middle of the sequence. The same results, but using the CC correlation-based method (AEKF-CC-R), are shown in Fig. 9. In this case, we considered different window lengths, because with L = 100 samples the AEKF does not perform well (i.e., the covariance estimate is too noisy). The results show that increasing the window to L = 200 samples is enough to obtain results close to the optimal ones. We also consider L = 500 samples, which yields even smoother results at the price of an increased computational complexity. Notice that increasing the window length has a direct impact on the convergence time after a sudden change. This effect is clear at t = 25 seconds, and can also be seen in the RMSE plot on the left. Regarding the covariance estimation, as already observed in the previous illustrative examples, the estimates obtained with the CC have a larger variance (i.e., they are noisier) than the ones obtained with the CM for the same window length.
Impact of a misspecified process noise covariance Q_k on the estimation of R_k

The previous results were obtained considering a perfect knowledge of the process noise covariance Q_k, which is not realistic in real-life applications. Hence, it is of interest to assess the impact of a misspecified Q_k on the estimation of R_k. We show the results for both CM and CC methods in Fig. 10, considering both overestimated and underestimated process noise covariances; the underestimated cases are Q̂_k = 0.001Q_k, Q̂_k = 0.01Q_k and Q̂_k = 0.1Q_k.
The conclusion is the same for both methods. If we overestimate the process noise covariance Q_k, we still obtain a correct estimation of the measurement noise covariance R_k and an AEKF performance close to the optimal one, only with a slightly larger variance. If we underestimate the process noise covariance Q_k, the estimation of the measurement noise covariance R_k is no longer correct, and the filter performance is clearly degraded. This confirms that: i) it is always better to overestimate the noise covariance matrices, and ii) if we are able to provide an (overestimated) guess of Q_k, which depends on the system dynamics, the filter is robust and we only need to estimate R_k.
Joint estimation of the process noise covariance Q_k and measurement noise covariance R_k

The previous illustrative examples showed that the different noise covariance estimation methods were able to estimate both noise covariances reasonably well. In the GNSS carrier phase tracking scenario considered in this contribution, however, the standard CM, CC and ALS were not able to correctly estimate the process noise covariance, because of numerical instabilities and ill-conditioned matrices. This also implies that we were not able to use the ALS for the estimation of R_k, because it performs the joint estimation of both noise covariances in a single step. The main problem here is that Q_k has values on the order of 10^{−14} (i.e., T_s = 10 ms), so a direct estimation of the full matrix is not a good approach. In this case, a possible solution could be to reformulate these methods to take into account the underlying model structure. For instance, instead of considering Q_k as a full matrix, we could decompose it as Q_k = σ²_{v,k} G G', with G = [T_s³/6, T_s²/2, T_s]'. Then, instead of estimating a 3 × 3 matrix, we reduce the problem to the estimation of the scalar variance σ²_{v,k}. The formal reformulation and the corresponding computer simulations remain as future work.
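The structured decomposition can be sketched as below, including a possible least-squares projection of an unstructured estimate Q̂ back onto the structure (the projection step is our suggestion for illustration, not a result from the paper):

```python
import numpy as np

Ts = 0.01  # assumed correlator output period (10 ms)
G = np.array([Ts**3 / 6.0, Ts**2 / 2.0, Ts]).reshape(-1, 1)

def q_from_sigma(sigma2_v):
    """Structured process covariance Q = sigma2_v * G G': only the scalar
    variance sigma2_v needs to be estimated, not the full 3x3 matrix."""
    return sigma2_v * (G @ G.T)

def sigma_from_qhat(Q_hat):
    """Least-squares projection of an unstructured estimate Q_hat onto the
    structure Q = sigma2_v * G G' (our suggested stabilization)."""
    M = G @ G.T
    return float(np.sum(M * Q_hat) / np.sum(M * M))
```

The resulting Q is rank-one by construction, which makes explicit why estimating all nine entries of a matrix with values near 10^{−14} is poorly conditioned compared to estimating the single scalar σ²_v.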

CONCLUSIONS
In the context of GNSS carrier-phase tracking, this paper discussed the need for appropriately estimating the noise covariance matrices in Kalman-type tracking loops. It was highlighted that an inaccurate specification might lead to worse results than conventional PLL schemes or, even worse, to filter divergence. The paper described several covariance estimation alternatives and compared them in scenarios relevant to the GNSS community. Summing up, PLLs are the scheme of choice for carrier tracking in GNSS receivers, but their performance can be notably improved in harsh scenarios by employing Kalman-type schemes. The latter require an accurate adjustment of the model covariances, and this paper discussed the main options available to date.