Fiber-Based UTC Dissemination Supporting 5G Telecommunications Networks



Abstract
Any mobile telecommunications network requires syntonization among its various elements. The need to exploit the available radio spectrum efficiently and to provide new kinds of services renders frequency syntonization insufficient. Phase and time-of-day synchronization will be necessary in the future. Thus, network operators are searching for efficient and reliable synchronization architectures. In this article, an example of such an architecture, applicable to fifth generation mobile networks, is presented, based on the requirements published in the latest ITU-T Recommendations. For supervision of the performance of the synchronization network, the technology of optical time transfer (OTT) is proposed, which allows dissemination of an accurate timescale such as a UTC realization to selected nodes of a coherent network primary reference time clock. The OTT realized so far and its current and future role in the network of Deutsche Telekom are discussed, and representative measurement results are shown.

Introduction
Synchronization among network elements is a key ingredient of the operation of a mobile telecommunication network. Currently, most mobile networks in Europe, such as second/third generation (2G/3G) and Long Term Evolution-Advanced (LTE-A), use frequency-division duplex (FDD) operation. For their basic functions (e.g., to allow proper handover between different base stations), only frequency synchronization (also called syntonization [1]) is required. For the new 3.6 GHz band provisioned for 5G networks in Europe, however, time-division duplex (TDD) has already been mandated in European regulations, and network operators are on the way to rolling out this technology. Even for basic TDD functions, time synchronization of uplink and downlink is needed to avoid interference.
Furthermore, for more efficient spectrum use, in European regulations it has been decided not to implement dedicated frequency guard bands between the TDD frequency sub-blocks. A mobile phone thus may also receive a part of signal from neighboring sub-blocks (due to insufficient electrical filtering inside phone circuitry). As the sub-blocks may belong to different operators, inter-operator synchronization is necessary between networks operating in overlapping areas to avoid interference. Therefore, the uplink and downlink schemes, and the start time of radio frames need to be properly time aligned. The International Telecommunication Union -Radiocommunication Standardization Sector (ITU-R) laid down in the Radio Regulations, clause 1.14 [2] that Coordinated Universal Time (UTC) shall be used as the common time basis.
According to ITU-T Recommendation G.8271.1 [3], the timing accuracy needed locally at the base station input, expressed as the maximum absolute time error (max|TE|), is 1.1 µs. This is the maximum absolute value of the time error between the reference timing signal at the input of the base station and the time of the primary reference time clock (PRTC), which is the reference time generator for the network that provides a reference timing signal traceable to an internationally recognized time standard (e.g., UTC) [4,5]. This allows reaching the specified air interface performance with a max|TE| of 1.5 µs, as specified by the 3rd Generation Partnership Project (3GPP) for TDD operation.
Additional features inside mobile networks, such as inter-site carrier aggregation (CA), have been introduced with the LTE-A standard and further specified by 3GPP, and will be used with 5G for better spectrum efficiency and/or better coverage. They require a feature-specific maximum time alignment error (TAE) between cooperating base stations and respective remote radio unit antennas inside a cooperation cluster of 260 ns, 130 ns, or even 65 ns [6]. (TAE is defined as the largest timing difference between any of the included clock signals.) Other 5G features, like ultra-reliable low-latency communication (URLLC) or accurate geo-positioning for emergency calls, are expected to require the same order of max|TE|. For the design of synchronization networks, the ITU Telecommunication Standardization Sector (ITU-T) has specified different types of primary reference clocks with max|TE| values between 30 ns and 100 ns, and boundary clocks with max|TE| values between 5 ns and 100 ns. While TAE can be seen as a relative time error between cooperating base stations, the maximum time error for TDD and several other features is to be understood with respect to UTC, which is an a priori standard external to the telecommunications world. Timing laboratories "k," mostly in national metrology institutes, realize physical timescales UTC(k) as local realizations of UTC [5].
To guarantee the needed synchronization performance, network operators are currently on the way to migrating their synchronization networks toward 5G demands with coherent frequency, phase, and time-of-day synchronization supply. In this article, we present an example of such a future synchronization network, consisting of core, core to aggregation, and aggregation to base station synchronization levels. A coherent network primary reference time clock (cnPRTC) architecture at the core level is described, with special emphasis on the concept and role of optical time transfer (OTT).

Coherent Network Primary Reference Time Clock Architecture
The cnPRTC approach is a robust network-wide synchronization architecture and has been specified by ITU-T in Recommendation G.8275 [6]. It addresses an increasingly recognized problem: the reliance of telecom synchronization chains on time information from global navigation satellite systems (GNSS). In the case of a GNSS signal failure (due to, e.g., intentional or unintentional jamming, spoofing, or a possible satellite segment malfunction), the synchronization is impaired, which could result in reduced capabilities of the network. The use of the cnPRTC concept at the highest (core) level of a synchronization network is shown in Fig. 1. The cnPRTC concept allows an efficient network synchronization structure and exploits meshed local clock combiners in all core locations. Each combiner generates synchronization signals locally using its own primary atomic cesium-based sources and receives remote signals from neighboring nodes via appropriate time transfer links. GNSS is used to set the local clock system close to UTC, for example, based on the prediction of the difference between GPS time and UTC(USNO) or between Galileo System Time and UTC in the respective navigation messages of the GNSS.
A clock combiner function merges the available clock sources by appropriate weighting or eliminating low-quality signals (through an agreement algorithm), and generates the timescale (through a timescale algorithm) and the wanted synchronization signals (through a coordination function).
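The combiner logic described above can be sketched as follows. This is a minimal Python illustration, not the actual G.8275 agreement or timescale algorithms: the median-based outlier test, the threshold, and the weighting scheme are assumptions chosen for demonstration only.

```python
# Illustrative clock combiner sketch (hypothetical logic, not G.8275):
# discard sources that disagree with the majority, then form a
# weighted-average timescale correction from the survivors.

def combine_clocks(offsets_ns, weights, outlier_limit_ns=100.0):
    """offsets_ns: measured offsets of each source vs. the local timescale.
    weights: a priori quality weights (e.g., inverse clock variance).
    Returns the ensemble correction used to steer the local timescale."""
    # "Agreement" step: use the median as a robust reference and drop
    # sources deviating from it by more than the (assumed) limit.
    ordered = sorted(offsets_ns)
    median = ordered[len(ordered) // 2]
    kept = [(o, w) for o, w in zip(offsets_ns, weights)
            if abs(o - median) <= outlier_limit_ns]
    if not kept:
        return 0.0  # no trustworthy source: hold over on the local clock
    # "Timescale" step: weighted average of the surviving sources.
    total_w = sum(w for _, w in kept)
    return sum(o * w for o, w in kept) / total_w

# A faulty source at +500 ns is rejected; the rest are averaged.
correction = combine_clocks([10.0, 12.0, 500.0, 11.0], [1.0, 1.0, 1.0, 1.0])
```

In a real combiner, the weights would track each cesium source's recent stability, and the coordination function would convert the correction into physical output signals.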
The time transfer links necessary in the cnPRTC concept can be established using the IEEE 1588 Precision Time Protocol (PTP) with the High-Accuracy (HA) profile, or PTP with Full Timing Support from the network (PTP-FTS) according to ITU-T G.8275.1 [7] without any telecom boundary clocks (T-BC). Both protocols can be operated bidirectionally over the same fiber, providing better time transfer accuracy.
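The two-way arithmetic underlying such protocols can be shown in a short sketch. The timestamp values are invented for illustration; real PTP additionally handles correction fields, rate differences, and calibrated asymmetries.

```python
# Minimal sketch of the IEEE 1588 delay request-response arithmetic.
# The key assumption is a symmetric path delay in both directions,
# which is why bidirectional operation over the same fiber helps.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: Sync sent by master; t2: Sync received by slave;
    t3: Delay_Req sent by slave; t4: Delay_Req received by master.
    Returns (slave clock offset, mean one-way path delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, mean_path_delay

# Invented example: slave is 100 time units ahead, path delay 50 units.
offset, delay = ptp_offset_and_delay(t1=0.0, t2=150.0, t3=200.0, t4=150.0)
```

Any uncompensated asymmetry between the two directions translates directly into half its value as a time error, which motivates the single-fiber bidirectional operation mentioned above.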
In general, the cnPRTC architectural concept can be regarded as an upgrade compared to timing and synchronization via GNSS only. Here, GNSS may serve for initial synchronization with periodic updates only. In the case of overall GNSS loss, the meshed cesium-based clock combiner architecture will be able to provide the needed synchronization performance for weeks or months with quality comparable to an enhanced PRTC (ePRTC). Details are still a subject of further study, ongoing development, and specification.

An Example of a Synchronization Network Architecture
For synchronization of installations in the direction from the core toward the aggregation level (Fig. 1), the so-called horseshoe concept can be used, where every aggregation site has its west-to-east or east-to-west synchronization chain, each sourced by one of the core clocks. In the case of any problem, such as a fiber break, the synchronization direction can be changed. Therefore, the clocks can use the best master clock algorithm (BMCA) as defined in IEEE 1588-2008 and specified for telecommunication applications in ITU-T G.8275.1 [7]. For the horseshoe synchronization, separate wavelengths intended solely for the purpose of synchronization can be used. They can go over a passive optical multiplexer together with telecom traffic over the same fiber, or even via a separate fiber.

Figure 1. A 5G synchronization network with a cnPRTC architecture in the core level. In the aggregation level, a horseshoe synchronization is implemented with a separate wavelength designated for the synchronization signals and using Class D T-BC. Synchronization to the base stations is passed together with the telecom traffic using Class C T-BC.
To reach the required synchronization performance at aggregation sites, the horseshoe synchronization must use T-BCs according to ITU-T equipment specification G.8273.2 Class D [10], each with a max|TE| of 5 ns. At the aggregation site, the T-BC output can be used to supply the mobile backhaul aggregation nodes, which have fiber links toward base stations. With an aggregation node, synchronization supply could be provided in-band, together with regular telecom traffic. In such cases, demarcation devices at base station locations as well as base station T-BCs and telecom time slave clocks (TTSC) of Class C (according to ITU-T G.8273.2) are recommended.
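As a rough illustration of how such per-node limits combine, the following back-of-the-envelope sketch sums assumed worst-case contributions along a chain and compares them with the 1.1 µs limit of G.8271.1. The chain composition and the asymmetry figure are hypothetical, and the actual Recommendation allocates the budget differently; this is only a plausibility check.

```python
# Hypothetical worst-case time error budget along a synchronization
# chain; constants below are illustrative, not a normative allocation.

def chain_budget_ns(prtc_ns, tbc_class_ns, n_tbc, link_asym_ns):
    """Worst-case linear sum of: reference clock error, per-node
    boundary clock error, and uncompensated link asymmetry."""
    return prtc_ns + n_tbc * tbc_class_ns + link_asym_ns

budget = chain_budget_ns(prtc_ns=100,       # PRTC-A max|TE|
                         tbc_class_ns=5,    # Class D T-BC max|TE|
                         n_tbc=10,          # assumed chain length
                         link_asym_ns=200)  # assumed residual asymmetry
assert budget <= 1100  # within the 1.1 us limit at the base station input
```

Even this pessimistic linear sum leaves ample margin, which is consistent with the article's point that Class C/D clocks make the 1.1 µs target achievable while the much tighter TAE clusters (65 to 260 ns) drive the stricter equipment classes.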

Synchronization Network Supervision
To operate such a complex cnPRTC synchronization network architecture, which needs to function 24 hours per day and seven days per week, a suitable supervision system is needed. It should offer the capability to continuously monitor the synchronization performance, to detect possible synchronization problems early, and to facilitate GNSS-independent synchronization of the network to UTC. This requires implementing an additional time transfer system that ideally should be fully independent of the telecom synchronization chain. In addition, given the required synchronization performance, the supervision system must itself perform well enough to unambiguously verify the cnPRTC synchronization requirements given above.
The above-mentioned requirements call for a technology that avoids reliance on GNSS signals, because during a potential GNSS problem, both the local clock combiner and the supervision system itself would probably be impacted. Therefore, a separate fiber-based time transfer system is a valid option, and OTT using an electronic stabilization method (ELSTAB) is a suitable candidate. This method does not use any packet-based techniques to transfer the one pulse per second (1 PPS) reference, so there is no impact of any queuing or packet buffering.
Deutsche Telekom Supervision Network with UTC(DTAG) as a Measurement Reference

The rollout of an OTT system intended for supervision of the telecom synchronization network started in 2015 with a series of proof-of-concept experiments devoted to checking the suitability of ELSTAB technology for the specific requirements of Deutsche Telekom, and to testing the reliability and stability of ELSTAB in a real fiber optic telecom infrastructure. Representative results are shown in the experimental section of this article. The system evolved from just a single link [10], connecting Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig, Germany's National Metrology Institute, with the Deutsche Telekom Synchronization Test Center in Bremen, to its final architecture, shown in Fig. 2. Deutsche Telekom realizes UTC(DTAG) in Frankfurt/Main, Germany. For some months now, using OTT, UTC(DTAG) has been disseminated to the Deutsche Telekom UTC Hub located in Hannover, which is responsible for the allocation of UTC timing signals and their future distribution to selected core locations, at present to a facility in Bremen and also to PTB. In Bremen, the received signal serves as the reference for the assessment of the quality and performance of equipment to be deployed in quantity in the synchronization network, named system under test (SUT) in Fig. 2.
To ensure traceability to UTC, comparison of UTC(DTAG) against other UTC realizations is considered necessary. Since November 2018, after the launch of the OTT link between Frankfurt and the Hannover Hub and the reconfiguration of the entire system, UTC(DTAG) has been permanently compared against UTC(PTB) at both sites. In the final arrangement, this comparison will be made in three places: at PTB, in Frankfurt, and in Bremen, which also receives UTC(PTB).

OTT ELSTAB Technology
The implementation of OTT at Deutsche Telekom uses ELSTAB technology [11], developed at the AGH University of Science and Technology in Kraków, Poland. This technology allows stabilization of the propagation delay of the fiber optic link, to the extent that the delay value is not only constant but also known with an accuracy of a few to several dozens of picoseconds (depending on link length and the number of terminals involved). Each individual link is composed of two terminals, the local (L) and remote (R) modules, and typically also includes several bidirectional optical amplifiers necessary to compensate losses due to fiber attenuation. A simplified schematic diagram of such a time and frequency dissemination link L → R is shown in Fig. 3. The stabilization of the link propagation delay in ELSTAB is based on bidirectional transmission of intensity-modulated light within a single fiber and on compensation of any change in the fiber delay. This is possible because the delay fluctuations are practically the same in both directions of the same optical fiber.
The compensation is done using a pair of electronic variable delay lines, one each for the forward and backward directions (see the details of the delay stabilization block in Fig. 3). These delay lines must be closely matched to ensure that the amount of delay inserted for the forward and backward signals is exactly the same. To achieve this, it is essential that the delay lines are manufactured as a single-chip application-specific integrated circuit (ASIC).
Within the local terminal, the phases of the composite signal (formed inside the PPS embedder from the 1 PPS and 10 MHz signals) and of its copy received from the remote terminal are compared, and the difference (measured by the phase comparator in Fig. 3) is kept at zero by applying a control signal to the delay lines. It has been shown [11] that in such a negative-feedback system, a change of the delay of the fiber results in the exactly opposite reaction of the delay lines. This results in a constant timing position of the 1 PPS signal at the remote output (extracted from the composite signal by the PPS de-embedder).
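The negative-feedback principle can be mimicked with a toy numerical model. This is a deliberate simplification (a single effective delay line, invented gain, delays, and settling time), intended only to show how the loop holds the total delay constant as the fiber delay drifts.

```python
# Toy model of the ELSTAB stabilization loop: the electronic delay line
# is steered so that the total (line + fiber) delay stays at a fixed
# target, cancelling fiber delay fluctuations. All values are invented.

def stabilize(fiber_delays_ps, target_ps, gain=0.5, steps_per_sample=20):
    """For each fiber delay sample, iterate the negative-feedback loop:
    error = (line + fiber) - target; line -= gain * error.
    Returns the stabilized total delay seen by the remote 1 PPS output."""
    line_ps = 0.0
    outputs = []
    for fiber_ps in fiber_delays_ps:
        for _ in range(steps_per_sample):    # let the loop settle
            error = (line_ps + fiber_ps) - target_ps
            line_ps -= gain * error          # delay line counteracts the change
        outputs.append(line_ps + fiber_ps)   # delay experienced by the 1 PPS
    return outputs

# Fiber delay drifts by tens of ps (e.g., thermally); the output stays
# pinned to the 2000 ps target to well below a picosecond.
out = stabilize([1000.0, 1040.0, 990.0], target_ps=2000.0)
```

In the real system the comparison is made on the round trip at the local terminal, and matched delay lines apply the correction symmetrically to both directions; the toy model collapses this into one loop for clarity.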
To determine the link propagation delay, an initial calibration is necessary [10,12]. This process comprises a few simple steps that need to be undertaken only once, after the link installation or after a substantial change of the link length (e.g., re-routing after a fiber break). The calibration is facilitated because all necessary measurements are performed at the local terminal side only. The process is briefly outlined below.
Using a time-interval counter (or equivalent equipment), the time delay between the local terminal reference (PPS_Ref) and returned (PPS_Ret) ports can be measured, which gives the round-trip delay (i.e., the delay from L to R and back). Half of this value gives a first-order approximation of the delay between the reference port and the output in R. To get its exact value, it is necessary to consider asymmetries between the forward and backward directions, which can be done by adding three correction values [12]. The first of them is related to the hardware asymmetry of the local and remote terminals and is individually assigned to each set of the transfer equipment (it is determined at the manufacturing stage). The second one results from the different wavelengths of the optical signals transmitted in the forward and backward directions, used to overcome an increase of noise due to backscattering occurring in the optical fibers; this correction can be either calculated based on the known dispersion value of the involved fibers or measured in situ using built-in capabilities. The last correction is due to the Sagnac effect, which is related to the rotation of the Earth and can be calculated with sufficient accuracy using the known coordinates of the local and remote terminals and the amplifiers. To finish the calibration of the link, the relation to UTC(k) needs to be established, which is done by considering the delay between the UTC(k) reference point and the local terminal reference port.
In the final step, the position of the output 1 PPS signal can be aligned with UTC(k). This is necessary because the delay introduced by the fiber link is quite large; a rough estimate is 500 µs per 100 km of fiber. The alignment is realized by programming the PPS advancing block located at the output of the remote terminal (Fig. 3) with a value equal to the difference between the full second and the delay established during the link calibration.
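The calibration arithmetic of the two preceding paragraphs reduces to a few lines. All numeric values below are invented placeholders; in practice the corrections come from factory calibration, fiber dispersion data, and site coordinates [12].

```python
# Sketch of the ELSTAB link calibration arithmetic (values invented).

def one_way_delay_ns(round_trip_ns, hw_asym_ns, dispersion_asym_ns,
                     sagnac_ns):
    """First-order estimate (half the measured round trip) plus the
    three asymmetry corrections described in the text."""
    return round_trip_ns / 2.0 + hw_asym_ns + dispersion_asym_ns + sagnac_ns

def pps_advance_ns(one_way_ns):
    """Value programmed into the PPS advancing block so the remote pulse
    aligns with the full second: one second minus the link delay."""
    return 1_000_000_000 - one_way_ns

# Invented example: ~445 km of fiber gives a round trip of roughly
# 4.45 ms; corrections are tens of ps to tens of ns in practice.
delay = one_way_delay_ns(round_trip_ns=4_450_000.0,
                         hw_asym_ns=3.2,          # factory-assigned
                         dispersion_asym_ns=-12.5, # wavelength asymmetry
                         sagnac_ns=0.9)            # Earth rotation
advance = pps_advance_ns(delay)
```

Note that only the round trip is measured; the three corrections convert it into the one-way delay, and the advance value then pins the remote 1 PPS to the second boundary of UTC(k).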
After finishing the calibration, the ELSTAB dissemination system in principle does not require any additional control. In practice, however, monitoring and management of its operation is often desirable. It helps, for example, in detecting problems such as potential fiber breaks. It can also be used to perform a fully automated initial calibration (e.g., with a designated calibrator shown in Fig. 3) or to set the gains of optical amplifiers. Each component of the link (i.e., both the local and remote terminals and each optical amplifier) allows access to its internal status and settable parameters, as all are equipped with a control unit providing an Ethernet interface and supporting Simple Network Management Protocol (SNMP) version 3. Using these features, all components forming the OTT architecture can be connected in a network and conveniently supervised from a single terminal.

Evaluation of the OTT Links' Performance
As the OTT links are intended to supervise key nodes within a cnPRTC synchronization architecture, evaluating their performance and their capability for long-term uninterrupted operation (ideally with no human intervention) is an important task. One should distinguish here between the stability (i.e., the degree of constancy in time, usually expressed as time deviation [TDEV] or maximum time interval error [MTIE] [4]) and the absolute accuracy of time transfer (i.e., the degree of agreement with the time reference, which is determined through the calibration process); both are addressed in the paragraphs below.
When assessing the performance of the system itself, the parameters of the transmitted signals should ideally not influence the results. This means that a differential measurement method is a suitable approach, in which the reference signal, derived from the input, is compared against its copy that has passed through the transfer link. The result of a long-term (255 days starting from February 6, 2016) measurement performed at PTB is shown in Fig. 4. The compared signals (1 PPS pulses) were produced locally at PTB, and their copies were transmitted directly to the Telekom Test Center in Bremen and then back via the Hannover Hub. This measurement was made before connecting the site in Frankfurt, when a direct return connection from Bremen to PTB still existed and was used for verification purposes (shown as gray connections in Fig. 2). Three sets of terminals with four in-line bidirectional optical amplifiers were involved in this measurement. The total length of the fiber was over 445 km.
The data shown in the left part of Fig. 4 reveal two main components of transfer instability. One is a short-term, fast noise, related mainly to the internal noise of the time interval counter (TIC) used in the measurement, whereas the second is a slow fluctuation that can be attributed to thermal processes ongoing in the transfer system and within the optical fibers. These can be well observed in the TDEV and MTIE plots, shown in the right part of Fig. 4. For averaging times below about 1000 s, the fluctuations are limited by the TIC, but for longer averaging times, transfer system noise dominates. However, the achieved stability is in all cases well below the required telecom specifications, shown as the corresponding TDEV and MTIE masks.
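For reference, both stability measures can be computed from evenly sampled phase (time error) data with short routines like the following sketch. The definitions follow the standard forms (e.g., ITU-T G.810); the sample input used here is invented, while in practice the input would be the measured 1 PPS time error series.

```python
import math

def mtie(x, n):
    """MTIE at observation window n*tau0: the largest peak-to-peak
    excursion of the time error within any n+1 consecutive samples."""
    return max(max(x[i:i + n + 1]) - min(x[i:i + n + 1])
               for i in range(len(x) - n))

def tdev(x, n):
    """Time deviation at tau = n*tau0, from the time variance:
    TVAR = <(1/6n^2) [sum of second differences over n samples]^2>."""
    terms = [sum(x[i + 2 * n] - 2 * x[i + n] + x[i]
                 for i in range(j, j + n))
             for j in range(len(x) - 3 * n + 1)]
    tvar = sum(t * t for t in terms) / (6.0 * n * n * len(terms))
    return math.sqrt(tvar)

# Invented toy data: a linear frequency offset produces zero TDEV
# (second differences vanish), while MTIE still grows with the window.
ramp = [float(i) for i in range(6)]
```

MTIE captures the worst-case excursion (relevant for the MTIE masks mentioned above), while TDEV averages out white phase noise such as the TIC contribution, which is why the two curves in Fig. 4 separate the fast and slow noise components.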
A few gaps visible in the phase fluctuations plot (around days 10, 135, and 240) are related to temporary loss of optical signal due to fiber breaks and construction work. In addition, 21 data points were manually removed from the collected raw data, being treated as outliers caused by improper triggering of the TIC. This means that within the analyzed period exceeding 250 days, only about 3 percent of the data points were lost, and for the remaining 97 percent of the time, the entire system was able to provide valid data.

The absolute accuracy of time transfer within the entire OTT system was subject to a few calibration campaigns. Thanks to the redundant structure of interconnections (i.e., the signals to Braunschweig, Bremen, and Hannover were delivered over two physically different paths), it was possible not only to calibrate the individual links but also to check their consistency. The results showed that the inconsistency of calibration is on the order of tens of picoseconds and falls within the limits of the calibration uncertainty (consult [13] for in-depth details).
As OTT is used to compare UTC(PTB) against UTC(DTAG), it is instructive to observe the difference between this method and the routinely computed results from satellite-based time transfer using signals from the Global Positioning System (GPS). Results of 50 days of comparison data are shown in Fig. 5. The red curve represents {UTC(PTB) - UTC(DTAG)} using OTT, where each data point corresponds to the direct difference between the 1 PPS pulses derived from the corresponding timescales and measured in real time with a TIC. The apparent phase fluctuations are characteristic of the commercial cesium clocks (Microsemi 5071A) that realize UTC(DTAG). The instability of UTC(PTB) is smaller by an order of magnitude, as it is generated using an active hydrogen maser that is steered to primary cesium fountain frequency standards [14]. The blue curve shows the GPS time transfer result, where each data point is obtained by averaging 16 minutes of observation of a single satellite, and the green overlay is an average of all available satellites during the respective time interval. The noise level is typical for GPS code-based time transfer. In comparison, and as also shown before [15], OTT provides the capability of time transfer with superior stability and accuracy.

Summary

5G mobile networks require accurate, stable, and robust frequency, phase, and time synchronization. Hierarchical synchronization networks are needed that allow frequency, phase, and time coherence of various network components to be obtained. Within the network, the required level of stability and accuracy decreases from the core level through the aggregation nodes toward the base stations.
The core level can efficiently be organized using the cnPRTC approach. Cesium atomic clocks in the core nodes create the framework, with interconnecting time transfer links. To ensure synchronization to UTC as a common time basis, as recommended in the ITU Radio Regulations, OTT has been shown to be a viable solution. OTT is able to disseminate a UTC(k) realization to selected nodes of the core level with accuracy, stability, and speed of comparison superior to the commonly exploited GNSS techniques. OTT would also allow real-time supervision of the synchronization network and thereby increase its reliability by making it less dependent on GNSS. In addition, OTT allows making simple measurements with a local TIC or oscilloscope at selected cnPRTC sites, setting the local time independently of GNSS, or using the physical signal from the supervision level as a temporary fallback if the local clock combiner fails completely.
OTT in general can support any kind of network requiring synchronization with UTC for its operation. In this article we describe its use within the infrastructure of Deutsche Telekom, which is being designed to support a future 5G mobile network.