A Cloud Optical Access Network for Virtualized GNSS Receivers

This paper presents a novel system architecture and describes a proof of concept for a virtualized GNSS receiver. The main idea consists of separating the GNSS antenna from the baseband processing, thus allowing for a number of new applications and services based on GNSS signal products. After contextualizing this concept within a major trend followed by the IT industry and academia in recent years, the paper provides a snapshot of the latest industry trends and describes a vibrant ecosystem of open source projects and tools, which make possible the actual (and rapid) deployment of virtual services over software-defined optical networks that allow for the continuous transmission of GNSS signals from the antenna to a remote baseband processor. After providing a brief description of the main concepts and building blocks of the presented architecture, and identifying the fundamental technological bottlenecks, we present a proof of concept implemented with actual commercial off-the-shelf hardware components, providing evidence of the system's feasibility and releasing different versions of a free and open source software-defined, virtual GNSS receiver.


INTRODUCTION
In recent years, the Information Technologies (IT) industry has witnessed an explosion of successful new business models, services and applications based on the combination of three key concepts: the cloud, software-defined networking and network function virtualization. Namely, the cloud is a term referring to accessing computing, IT, and software applications through a network connection, often by accessing data centers using wide area networking (WAN) or Internet connectivity; software-defined networking (SDN) is an approach to computer networking that allows network administrators to programmatically initialize, control, change, and manage network behavior dynamically via open interfaces and abstraction of lower-level functionality; and network functions virtualization (NFV) is a network architecture concept that uses technologies for logically dividing a system's IT resources to implement virtual network node functions (i.e., functional entities that act as physical devices), which can then be managed as building blocks that may be connected, or chained together, to create communication services.
The application of such concepts and associated technologies allows companies to increase the efficiency and flexibility of their IT resources by managing them as logical entities instead of as physical, hardwired units dedicated to a given application or service. Examples of industry-embraced, commercial success stories are: i) the Cloud Radio Access Network (C-RAN) approach of mobile communication network operators [1], in which the baseband unit is moved away from the physical antennas, allowing the virtualization of mobile base stations; ii) commercial Platforms as a Service (PaaS) [2], in which consumers deploy their own applications using the computing platform supported by the provider, such as those offered by Amazon Web Services Elastic Beanstalk, Windows Azure, Heroku, IBM Bluemix or Google App Engine; iii) commercial Infrastructures as a Service (IaaS) [3], which offer processing, storage and fundamental computing resources to consumers so they can run their own applications, but without control of the underlying infrastructure (such as those offered by Amazon Web Services EC2 and S3, Windows Azure, Rackspace, Google Compute Engine and others); and iv) content delivery networks (CDN) [4], which serve content to end-users with high availability and high performance, such as those offered by Quantil, Limelight and Cloudflare, among others. In all those successful use cases, SDN and NFV concepts play a crucial role in their inception, design, implementation and commercial deployment.
The trend to programmatically define functions that were traditionally implemented in dedicated hardware devices has also reached Global Navigation Satellite Systems (GNSS), with a number of software-defined receivers currently available at both open source and commercial levels. A software-defined GNSS receiver is a computer program that takes raw GNSS signal samples as its input and performs all the baseband processing up to the computation of GNSS observables and the Position-Velocity-Time (PVT) solution, thus replacing dedicated integrated circuits. In general, these programs can work in post-processing mode (for instance, from raw signal samples stored in a file) or, with enough computational power, in real-time mode. The latter requires a GNSS antenna and a radio-frequency front-end performing signal amplification, frequency downshifting, filtering and conversion into the digital domain. Then, the delivered stream of raw signal samples is connected to a computer (via USB, Gigabit Ethernet, PCI Express or optical fiber cable), where the software-defined receiver executes the baseband processing chain. The first instances of software-defined GNSS receivers appeared in academia (see for instance TRIGR, from Ohio University [5]; NAMURU, from the University of New South Wales [6]; N-GENE, from Istituto Superiore Mario Boella (ISMB) and Politecnico di Torino [7]; and GSNRx, from the University of Calgary [8]), and currently there are commercial options (such as SX3 [9], Aramis [10], Piksi [11], GSN and Nusar), as well as projects released under free and open source licenses (e.g., OpenSource GPS [12], GPS-SDR [13], SoftGNSS [14], GNSS-SDRLIB [15] and GNSS-SDR [16]).
A software-defined GNSS receiver executed in the cloud appears to be the next natural step in this technology trend. However, the particularities of GNSS signal processing pose some technical challenges (described below) to an actual system deployment. In fact, other cloud-based solutions for software-defined GNSS receivers have already been proposed in the academic literature. All of them are based on a "snapshot-based GNSS receiver" (methods proposed, for instance, in [17,18,19]), in which a batch of data (that could be as short as 2 ms of signal) is sent to the software receiver, which is able to compute a PVT solution with the aid of pre-loaded ephemeris. Notable examples of such an approach are Microsoft's energy-efficient GPS sensing with cloud offloading [20,21] and the system running on Amazon Web Services presented in [22,23]. These approaches allow extremely low power consumption at the user equipment, at the expense of limited accuracy (ranging from 30 to 100 m of error for 80 percent of locations) and high latency.
In this paper, we propose a different approach consisting of a software-defined, virtualized GNSS receiver, executed in the cloud, receiving a collection of GNSS signal streams captured by a set of radio heads located elsewhere and connected to the cloud via a high-performance communication network. The proposed system architecture allows for continuous GNSS signal streaming from the antenna to the GNSS baseband unit, in addition to any arbitrary duty cycling required by a certain application or service. The possibilities of such a system architecture are enormous. For instance, one could envisage the rapid deployment of a set of low-cost radio heads sending raw GNSS signals to a software-defined receiver in the cloud, thus acting as a network of GNSS reference stations, generating high-rate pseudorange, phase and Doppler observables in real time, as well as interesting products such as differential data to provide high-accuracy services to third parties. This architecture allows for a number of interesting applications and services, outlined in Section 2.6, in addition to those already proposed for snapshot-based GNSS cloud offloading.
The main technical challenge is system availability: the transport network and the cloud infrastructure must ensure that the system is in functioning condition 100% of the time. This means high bandwidth, low latency, and possibly QoS and encryption mechanisms in the transport network, and high computational capabilities for the cloud infrastructure: the host device executing the software-defined receiver must reach real-time operation, which is not obvious in the most demanding configurations (i.e., multi-system, multi-band receivers). Another key aspect for a real deployment is scalability, that is, how such a system adapts to more users, more GNSS signals and bands, more external data sources, or to more complex signal and data processing algorithms. In some applications, other aspects such as reliability (a trusted receiver for certification / security aspects), efficiency (a power consumption trade-off between the user equipment and the cloud infrastructure), interoperability (the possibility to exchange information with other sources, devices and systems) and marketability (for instance, providing a rapid path from a source code change in the software receiver to service deployment) could be of equal importance.
Leveraging the practices and trends of the IT industry described above, this paper proposes a system architecture that addresses such technical challenges by defining a virtual network function infrastructure for a new generation of virtualized software-defined GNSS receivers and the new applications and services that such a concept makes possible. The remainder of the paper is organized as follows. Section 2 describes the proposed system architecture, Section 3 describes the proof of concept, and Section 4 concludes the paper. For the reader's convenience, two appendices have been included at the end of the paper: a list of acronyms (Appendix A) and a taxonomy of the software projects mentioned throughout the text (Appendix B).

SYSTEM DEFINITION
The proposed system is depicted in Figure 1. This section defines the functionality of the building blocks and provides an overview of existing industry-grade implementations. The automated arrangement, coordination, and management of computer systems, middleware, and services, generally referred to as orchestration, has emerged as a key concept in such a software-defined-everything environment. Orchestration is understood as the coherent coordination of heterogeneous systems, allocating diverse resources and composing functions to offer end-user services. The standardization efforts are led by NFV MANO, a working group of the European Telecommunications Standards Institute Industry Specification Group (ETSI ISG NFV), which maintains the ETSI-defined framework for the management and orchestration of all resources in the cloud data center, including computing, networking, storage, and virtual machine resources. Founded in November 2012 by seven of the world's leading telecom network operators, this large community (currently composed of more than 290 companies, including 38 of the world's major service providers) is still working intensely to develop the required standards for NFV. The most recent sets of normative documents, known as releases, are described in [24,25].
In parallel, the NFV market is evolving rapidly, as technology and service providers alike take pieces of open source software and add them to their existing solutions to provide the full complement of NFV functions. The result is a vibrant ecosystem of free and open source software projects that is constantly progressing and growing, providing a large number of options with different levels of maturity, functional coverage and adherence to the ETSI-proposed model. In September 2014, the Linux Foundation announced the Open Platform for NFV Project (OPNFV), which facilitates the development and evolution of NFV components across various open source ecosystems. OPNFV integrates components from upstream projects (such as OpenStack, OpenDaylight, ONOS, OpenContrail, Ceph, KVM and LXD in the control plane; and FD.io, Open vSwitch, DPDK and OpenDataPlane in the data plane) across compute, storage and network virtualization in order to provide a comprehensively tested, industrially-proven, end-to-end reference NFV platform. Especially relevant is OpenStack, a cloud management system consisting of interrelated components that control diverse, multi-vendor hardware pools of processing, storage, and networking resources throughout a data center, managed through a dashboard or via the OpenStack application program interface. The OpenStack community sustains a broad set of projects, each one providing specific services (e.g., Nova manages computing instances, Neutron enables network connectivity, Swift manages data storage, Keystone provides authentication and authorization services, etc.), from which the OPNFV community consumes a sub-set. It should be noted that, in general, open source projects follow roadmaps driven by service requirements, and they cannot always be clearly mapped onto a reference architecture functionality [26].
In addition to OPNFV, a plurality of solutions is already available. Notable among them is Open Source MANO (OSM) [27], an ETSI-hosted project to develop an open source NFV Management and Orchestration software stack aligned with ETSI MANO. Other examples of open source, ETSI NFV-compliant MANO frameworks are Open Baton, Open-O and Cloudify. In February 2017, the Linux Foundation announced the creation of the Open Network Automation Platform (ONAP) project, aimed at creating a harmonized and comprehensive framework for software automation of virtual network functions that expands the scope of ETSI MANO. The full picture results in an enormous number of possible combinations and configuration options, providing a wide choice of tools for the automated deployment of network services of very different nature and requirements. This trend towards the full automation of provisioning processes, along with a generalized orchestration of systems, relies on diverse initiatives combining de facto and de jure open, modular and extensible standards, including the fast prototyping, open development, user-driven focus and frequent releases of open source projects and initiatives [26]. For more details about the software tools mentioned throughout this paper, the reader is referred to Appendix B.
The ETSI NFV MANO architectural framework defines three main functional blocks [28]: the NFV orchestrator, the virtual network function manager (VNFM) and the virtualized infrastructure manager (VIM), which are depicted in the upper part of Figure 1, as well as reference points (denoted in Figure 1 as "Or-Vnfm", "Vi-Vnfm" and "Or-Vi"), which are peer-to-peer information exchange mechanisms between functional blocks, following a producer-consumer paradigm and implemented through well-defined APIs. The role of those blocks is described hereafter.

Network Function Virtualization Orchestrator
The Network Function Virtualization Orchestrator (NFVO) is a functional block within the MANO framework that is responsible for the on-boarding of new network services and virtual network function (VNF) packages, network service lifecycle management, global resource management, and the validation and authorization of network functions virtualization infrastructure (NFVI) resource requests. The NFVI is the totality of the hardware and software components that build up the environment in which VNFs are deployed; this includes the user equipment, the network elements, and all the compute, storage, and networking resources. A Virtual Network Function (VNF) is defined as a functional block that has well-defined external interfaces and well-defined functional behavior, and that can be deployed in an NFVI.
Resource orchestration is important to ensure there are adequate compute, storage, and network resources available to provide a network service. To meet that objective, the NFVO can work either with the VIM or directly with NFVI resources, depending on the requirements. It has the ability to coordinate, authorize, release, and engage NFVI resources independently of any specific VIM. It also provides governance of VNF instances sharing resources of the NFVI.
The NFVO also maintains four repositories: two catalogues that hold the information related to the creation and management of all the supported network services and VNF packages, a third repository holding information on all VNF and network service instances, and an NFVI Resources repository holding information about available/reserved/allocated NFVI resources as abstracted by the VIM.
Today, in 2017, NFV orchestrators are typically delivered as a component of an NFV MANO solution, as is the case in the aforementioned Open Source MANO, OPNFV, Open-O or Open Baton frameworks.

Virtual Network Function Manager
The Virtual Network Function Manager (VNFM) is a functional block within the MANO framework that is responsible for the lifecycle management of VNF instances. This includes operations such as VNF instantiation (that is, creating a VNF using the VNF on-boarding artifacts); VNF scaling out/in (the ability to scale by adding/removing resource instances, for instance virtual machines); scaling up/down (the ability to scale by changing the allocated resources, e.g., increasing/decreasing memory, CPU capacity or storage size); updating and/or upgrading (supporting VNF software and/or configuration changes of various complexity); and VNF termination (the release of VNF-associated NFVI resources). Other VNFM functions include VNF initial configuration (e.g., assigning IP addresses), instantiation feasibility checking, notification of changes in the VNF lifecycle, integrity monitoring, and the collection of VNF instance-related NFVI performance measurement results.
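The difference between horizontal (out/in) and vertical (up/down) scaling can be illustrated with a minimal sketch. The load thresholds and the one-instance-at-a-time policy below are purely illustrative assumptions, not part of any ETSI specification:

```python
# Illustrative VNFM scaling policy: horizontal scaling changes the number
# of resource instances (e.g., virtual machines), while vertical scaling
# would change the resources allocated to each instance. The 80%/20%
# thresholds are hypothetical.

def scaling_decision(cpu_load, n_instances, max_instances=8):
    """Return an (action, new_instance_count) pair for a toy policy."""
    if cpu_load > 0.80 and n_instances < max_instances:
        return ("scale-out", n_instances + 1)   # add one VM
    if cpu_load < 0.20 and n_instances > 1:
        return ("scale-in", n_instances - 1)    # remove one VM
    return ("no-op", n_instances)
```

A real VNFM would drive such decisions from the NFVI performance measurements mentioned above, rather than from a single CPU figure.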
Large players such as Nokia, Cisco, Ericsson, Huawei and NEC/Netcracker all have VNFM offerings, sometimes delivered as a component of an overall NFV MANO solution. In the open source community, Tacker is an OpenStack project that consists of an open NFV orchestrator with a built-in, general-purpose VNF manager to deploy and operate virtual network functions on an NFV platform.

Virtualized Infrastructure Manager (VIM)
The Virtualized Infrastructure Manager (VIM) is the functional block within the MANO framework that is responsible for controlling, managing and monitoring the NFVI compute, storage, and network resources. There can be a single VIM or multiple, specialized VIMs (e.g., compute-only, storage-only, networking-only), but the idea is to have a single abstraction layer that exposes northbound open interfaces supporting management of the NFVI, and southbound interfaces that interact with a variety of network controllers and hypervisors (that is, programs that create and run virtual machines) in order to perform the functionality exposed through its northbound interfaces. Hence, it provides VNF managers and NFV orchestrators with the ability to deploy and manage VNFs. This block keeps an inventory of the allocation of virtual resources to physical resources, which allows the VIM to orchestrate the allocation, upgrade, release, and reclamation of NFVI hardware resources (compute, storage, networking) and software resources (e.g., hypervisors). It also collects performance and fault information, enabling usage optimization.
In the diagram shown in Figure 1, we distinguish two functional sub-blocks within the VIM: a network orchestrator, in charge of managing the end-to-end connectivity (i.e., the communication network from the user equipment to the compute resources executing instances of a software-defined GNSS receiver, be it a private data center, a public cloud computing service, or a mix of both), and a cloud orchestrator, specialized in hardware and software resources.
• The network orchestrator supports the end-to-end management of VNF Forwarding Graphs, e.g., by creating and maintaining virtual links, virtual networks, sub-nets, and ports, in order to transport the GNSS signals collected by the user equipment through the communication network to a data center, in which a computer will be executing one or more virtual machines or software containers, one of them executing the software-defined GNSS receiver that will process the signals gathered by the user. The network orchestrator sends requests to the agents of the user equipment and back-end network end-points. An agent is a program continuously running as a background process (sometimes called a "daemon") at each of those elements that listens for such requests and applies the corresponding actions. The network orchestrator is also in charge of the management of security group policies to ensure network/traffic access control.
• The cloud orchestrator coordinates the server hardware, so that virtual server instances (e.g., virtual machines or software containers) can be created on the most convenient underlying physical server. It can be used to manage a range of virtual IT resources across multiple physical servers, and provides for centralized administration of virtualized resources, including creating, storing, backing up, patching and monitoring. It is also in charge of the management of software images (e.g., a virtualized GNSS receiver) as requested by the NFVO and the VNFM.
The OPNFV community consumes a sub-set of OpenStack projects to implement the VIM. In turn, OpenStack implements a sub-set of the functionalities that ETSI could end up defining for VIMs. Other open source examples are OpenNebula and OpenVIM, now integrated into Open Source MANO. As commercially available VIM tools, we can mention VMware vCloud Director and Citrix XenServer. Many other vendors supply VIM solutions as add-ons to OpenStack. If software containers are used instead of virtual machines (more details in Section 2.5.3), there are container orchestrators such as Docker Swarm, Kubernetes, Nomad and Mesosphere Enterprise DC/OS. Compute resource orchestration can also include remote cloud hosting services, such as those offered by Amazon Elastic Compute Cloud (EC2), Google Compute Engine and Microsoft Azure.

Software Defined Networking Controller
An SDN controller is the application that acts as the strategic control point of the SDN network, managing the flow control to the switches/routers "below" in Figure 1 (via the so-called southbound APIs) and the applications and business logic "above" (via northbound APIs) to deploy intelligent networks. In the northbound direction, the control plane provides a common abstracted view of the network to higher-level applications and programs using APIs. In the southbound direction, the control plane programs the forwarding behavior of the data plane, using device-level APIs of the physical network equipment distributed around the network. The SDN controller is thus in charge of managing the network elements (switches, routers, etc.) that will transport the GNSS signal streams from the user equipment to the compute resources executing instances of virtualized GNSS receivers.
The OpenFlow protocol [29], considered the first SDN standard, defines the open communications protocol that allows the SDN controller to work with the forwarding plane (also known as the data or user plane, defined as the part of the router architecture that decides what to do with packets arriving on an inbound interface) and make changes to the network. In particular, one of OpenFlow's strengths is to have identified a basic common hardware model for a data switch (with emphasis on packet switching) and, by virtue of the formal specification of tables, flows, and actions, to provide a flexible and extensible mechanism to automate networking decisions. However, it also poses some challenges in terms of scalability, security and the need for specialized hardware, so alternatives to OpenFlow are emerging, such as the Border Gateway Protocol (BGP), NETCONF, the Extensible Messaging and Presence Protocol (XMPP), the Open vSwitch Database Management Protocol (OVSDB) and the Multiprotocol Label Switching Transport Profile (MPLS-TP).
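The table/flow/action model can be sketched in a few lines. The sketch below is a heavy simplification of the actual OpenFlow match structure (no priorities, masks or counters), intended only to convey the idea of matching header fields to forwarding actions:

```python
# Toy sketch of an OpenFlow-style flow table: each entry matches a set of
# packet header fields and maps matching packets to a forwarding action;
# a table miss is punted to the controller.

class FlowTable:
    def __init__(self):
        self.entries = []  # list of (match_fields, action), in insertion order

    def add_flow(self, match, action):
        self.entries.append((match, action))

    def lookup(self, packet):
        """Return the action of the first matching entry, else defer to the controller."""
        for match, action in self.entries:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send-to-controller"  # table miss
```

For example, after `add_flow({"dst_port": 443}, "output:2")`, any packet whose `dst_port` field equals 443 is forwarded out of port 2, while unmatched packets are handed to the controller, which may in turn install a new flow entry.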
Many SDN controllers relevant to virtual environments are available today: Neutron, an OpenStack project that enables network connectivity as a service between interface devices (e.g., virtualized Network Interface Cards, VNICs) managed by other OpenStack services (e.g., Nova); OpenDaylight, the industry's de facto SDN platform, with more than 100 deployments, including Orange, China Mobile, AT&T, T-Mobile, Comcast, KT Corporation, Telefonica, TeliaSonera, China Telecom, Deutsche Telekom, and Globe Telecom; ONOS, a carrier-grade SDN network operating system; and OpenContrail, an extensible platform for SDN.

Radio heads
We distinguish two types of users:
• Type A: Static users with connectivity to an optical network. In this scenario, the user equipment (that is, the radio head) would consist of one or more GNSS antennas, each one with one Low Noise Amplifier per targeted GNSS band (see Table 1), an Electrical-to-Optical (E-O) converter, and an optical fiber connection to an optical network. This approach is usually known as radio-over-fiber (RoF) [30,31,32].
• Type B: Mobile users (or users without connection to an optical network). In this scenario, the user equipment would consist of one or more GNSS antennas, each one with an RF front-end per targeted GNSS band that converts the RF (analogue) signal to a stream of digitized signal samples, usually downconverted to baseband or to a low intermediate frequency. The analogue-to-digital conversion will deliver a minimum data bit rate of R = 2 · BW · q · N bits per second, where BW is the targeted (passband) bandwidth, q is the number of bits per sample and N is the number of antennas.
The data stream(s) must then be sent to the network through a wireless interface. Table 1 provides some figures of the required data rate for different GNSS receiver configurations. Moreover, a 20% overhead caused by PHY/MAC/Net protocols should be added to those rates. By simple inspection, it is easy to see that an LTE network (which theoretically sustains a peak upload rate of 50 Mbps, but on average allows about 1.5 Mbps [33]) cannot cover even the most basic receiver configuration for continuous signal delivery. This is precisely the bottleneck that snapshot-based cloud GNSS receivers overcome by operating on discontinuous segments of GNSS signal. Even LTE-A/4G networks (which theoretically can sustain peak upload rates of 500 Mbps, but in practice average about 12 Mbps [33]) do not meet the requirements of the most challenging configurations. Current designs of 5G networks are targeting upload rates possibly up to 10 Gbps and guaranteed data rates up to 50 Mbps [34], which would cover most of the possible GNSS frequency plans. Other possible wireless technologies for this link could be IEEE 802.11n or 802.11ac, which typically deliver an average throughput of 35 Mbps and 250 Mbps per data stream, respectively.
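Assuming the minimum data rate follows the Nyquist-rate relation R = 2 · BW · q · N implied by the definitions above (BW Hz of passband bandwidth, q bits per sample, N antennas), the required uplink rate including the 20% protocol overhead can be computed as below. The example configuration is illustrative, since Table 1 is not reproduced here:

```python
# Required uplink data rate for continuous GNSS signal streaming,
# assuming the Nyquist-rate relation R = 2 * BW * q * N plus the ~20%
# PHY/MAC/Net protocol overhead mentioned in the text.

def required_rate_mbps(bw_hz, bits_per_sample, n_antennas, overhead=0.20):
    raw_bps = 2 * bw_hz * bits_per_sample * n_antennas  # bits per second
    return raw_bps * (1 + overhead) / 1e6               # megabits per second

# Illustrative single-antenna configuration: 2 MHz of bandwidth and
# 4-bit samples already require ~19.2 Mbps, far above the ~1.5 Mbps
# average LTE uplink quoted above.
rate = required_rate_mbps(2e6, 4, 1)
```

This simple calculation makes the bottleneck explicit: even a narrowband single-band configuration exceeds average cellular uplink rates, which is why snapshot-based receivers resort to discontinuous signal segments.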

Agents
Network agents monitor network resources and make IP addresses and computer names available. They can range from simple scripts to complex, full-featured software tools, depending on the service or application requirements.
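In the simplest case, an agent is a background process that listens for orchestrator requests and applies local actions. The sketch below shows only the request-handling core of such an agent; the command names and state fields are hypothetical, not taken from any protocol in the text:

```python
# Illustrative core of a radio-head agent: a daemon would receive
# orchestrator commands (e.g., over a socket) and dispatch them to local
# actions. The command set and state fields here are hypothetical.

def handle_request(command, state):
    if command == "report-status":
        return {"ip": state["ip"], "streaming": state["streaming"]}
    if command == "start-stream":
        state["streaming"] = True   # begin pushing signal samples upstream
        return {"ok": True}
    if command == "stop-stream":
        state["streaming"] = False
        return {"ok": True}
    return {"error": "unknown command"}
```

A production agent would wrap this dispatch loop in a network listener and add authentication, but the principle of "listen for requests, apply the corresponding action" is the same.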

Network routers and switches
A router is a networking device that forwards data packets between computer networks. Routers operate at the network layer, and their role is to connect different logical sub-networks (e.g., 5G to WAN, which in Figure 1 injects GNSS signals gathered by mobile users into the optical network; or WAN to LAN, which in Figure 1 injects GNSS signals from the optical network into the local area network of the data center). Devices operating at the data link layer (for instance, connecting devices within a LAN, or nodes within an optical WAN) are usually referred to as switches.
In a classical switch based on the Ethernet protocol, the fast packet forwarding (data path) and the high-level routing decisions (control path) are managed on the same device. On the contrary, an OpenFlow-based switch separates these two functions. The data path portion still resides on the switch, while high-level routing decisions are moved to a separate controller (e.g., the SDN controller in Figure 1). The OpenFlow switch and controller communicate via the OpenFlow protocol, which defines messages such as "packet-received", "send-packet-out", "modify-forwarding-table", and "get-stats". Other alternative protocols to OpenFlow operate under the same principle.

Optical switches
The data traveling in the form of light (possibly on multiple wavelengths) through an optical network needs to be switched at the network nodes. A data stream arriving at a given node is forwarded to its final destination via the best possible path, which is determined by factors such as distance, cost, and the reliability of specific routes. While conventional optical switching consists of converting the input fiber optical signal to an electrical signal, performing the switching in the electrical domain, and then converting the electrical signal back to an optical signal that goes down the desired output fiber, new approaches such as reconfigurable optical add/drop multiplexer (ROADM) systems are able to avoid the unnecessary O-E-O conversion (and its associated expensive, bulky, and bit-rate/protocol-dependent subsystems), enabling transparent O-O-O systems that use optical switching. This involves lower capital expenditures (CAPEX), as there is no need for a large amount of expensive high-speed electronics, and lower operational expenditures (OPEX), because fewer network elements are required. The complexity reduction also allows for physically smaller optical switches.
There are four main types of ROADM: Type I, with fixed (colored) ports; Type II, which offers reconfigurable (colorless) add/drop ports; Wavelength Selective Switches (WSS), which allow for degree-N connectivity; and Optical Cross-Connects (OXC), which are used for wavelength cross-connect switching in mesh networks. Generalized Multi-Protocol Label Switching (GMPLS) [36] is the de facto control plane of wavelength-switched optical networks.

Signal ingestion
In the case of Type A users (those that inject the RF signal received at the GNSS antenna directly into a fiber via an E-O converter), the signal must be converted back to the electrical domain via an O-E converter. Then, the analogue signal stream (still at RF) must be converted down to baseband (or a low IF), filtered and converted to the digital domain by an analogue-to-digital converter (ADC), and then sent to the corresponding host computer through the data center's LAN.
In the case of Type B users (those that inject digitized GNSS signals to the network), the data stream can be directly fed to the data center's LAN by a router connected to the optical WAN.

Data Center
A data center is a resource pool for the storage, management, processing and distribution of data pertaining to a particular business or administrative domain. It is commonly understood as a (large) group of networked computer servers. Until recently, cloud management frameworks had been built assuming, in most cases, a centralized location, so servers could interchange information using Ethernet technologies within a LAN. However, the concept has now broadened to embrace distributed cloud infrastructures [26], where a set of either on-premises or remote, private or public computing clouds are all orchestrated together. This is especially interesting for services in which low latency is required, achieved for instance by placing computing resources near the end user.

Virtualized GNSS receivers
A virtualized software application is a program that can be executed regardless of the underlying computer platform (i.e., processor architecture, operating system and installed library versions) that is executing it. This can be achieved by packaging the application and all its software requirements (the operating system and all the application-required supporting libraries and programs) in a single, self-contained and isolated software entity that can then be run on any platform. Hence, for instance, using virtualization tools a complete Windows system can be run on a Linux machine, or on another version of Windows. This is a very convenient strategy for orchestration, since it allows the elastic creation, execution and destruction of application instances as requested on a per-user basis, and the intelligent spreading of the running instances over the available compute resources, regardless of what they are exactly composed of (that is, processor architecture, version of the host operating system or physical location). An instance of a software-defined GNSS receiver executed in a virtual environment can then be called a virtualized GNSS receiver. There are two main approaches to software virtualization: virtual machines and software containers.
A virtual machine (VM) is a software-based environment designed to simulate a hardware-based environment, for the sake of the applications it will host. A VM emulates a computer architecture and provides the functionality of a physical computer. Within each virtual machine runs a full operating system, so conventional software applications expecting to be managed by an operating system and executed by a set of processor cores (e.g., a software-defined GNSS receiver) can run within a VM without any change. With VMs, a software component called a hypervisor interfaces between the VM environment and the underlying hardware, providing the necessary layer of abstraction. The hypervisor is responsible for executing the virtual machines assigned to it, and it can execute several of them simultaneously. Examples of hypervisors that a cloud orchestrator could control through an API server are the Kernel-based Virtual Machine (KVM), Xen, the Quick Emulator (QEMU), VMware's ESXi, Oracle's VirtualBox and Microsoft's Hyper-V. Virtual machine technology enjoys a longstanding tradition in the IT industry, so there are plenty of commercial and open source options, and load balancing mechanisms are well understood and established.
Recently, however, software containers have been replacing VMs as the preferred supporting software stack for virtualized applications because of their faster and more lightweight nature. An application running in a container can make more efficient use of the underlying hardware than one executed on a VM (since it operates directly on the real processing units instead of against an emulated layer, avoiding its overhead [37]), and many more containers than VMs can be put onto a single server, thus optimizing the investment in compute resources. The concept of containerization was originally developed as a mechanism to segregate namespaces in a Linux operating system for security purposes, isolating process groups (a process and its possible descendant processes) from the outside world. The first approach consisted of producing partitions (sometimes called "jails") within which applications of questionable security or authenticity could be executed without risk to the kernel. The kernel was still responsible for execution, though a layer of abstraction was inserted between the kernel and the workload. Once the environment within these partitions was minimized for efficiency's sake, the concept expanded to make the contents of those partitions portable. Hence, this technology can be seen as an advanced implementation of the standard chroot mechanism in UNIX-like systems. The first container system was Linux Containers (LXC), followed by a container hypervisor (LXD), and then by other projects such as Docker or Ubuntu Snaps. These latter systems provide native environments with no hypervisor but a daemon that supplements the host kernel and maintains the compartmentalization between containers, while connecting the kernel to their workloads. Other solutions, such as Virtuozzo's, allow the creation of encrypted containers for security purposes.

Back-end services
The main services foreseen to have a natural accommodation in the system proposed in this paper are related to high-accuracy positioning (allowing sophisticated, cm-error-level algorithms such as PPP [38], Fast-PPP [39], network RTK [40] or WARTK [41]), rapid deployment of reference stations, GNSS signal authentication, and low-energy-consumption GNSS receivers. A list of services and applications that the proposed system could make possible follows:
• A network of GNSS reference stations can produce differential data, which can then be used to provide real-time corrections (and thus cm-level accuracy) to third users.
• Programmable output rate of GNSS observables, paving the way to new applications in space weather, precision agriculture, and surveying.
• Localization of interference sources.
• Rapid deployment of GNSS-related infrastructure in disaster relief scenarios.
• Convenient solution for GNSS Commercial Services (e.g., Galileo E6), GNSS Authentication, and security-related applications (GPS M code, Galileo PRS), since the encryption module remains on the service provider's premises.
• Certified "space-time-stamping". A user could grab a batch of GNSS signals, send it to the cloud, and receive back a trusted certificate of position and time.
• A low energy, cloud off-loaded GNSS receiver for the Internet of Things.

PROOF OF CONCEPT IMPLEMENTATION
In order to provide evidence demonstrating the technical feasibility of the proposed concept, the authors implemented a proof of concept for users of Type A, as defined in Section 2.3 (that is, a static antenna with a direct connection to an optical network), in the simplest configuration of a single-antenna user. The experimental setup, shown in Fig. 8, was carried out by integrating two research facilities available at CTTC: GESTALT® (a GNSS signal testbed described in [43]) and the ADRENALINE Testbed® (a facility for experimental research on high-performance and large-scale intelligent optical transport networks described in [44]). The arrangement consisted of a GNSS antenna platform on the roof of the CTTC premises (see Fig. 2), whose signals were amplified and injected into an E-O converter on the user side; an optical transport network (including 35 km of optical fiber) emulating an optical WAN, in charge of transporting the GNSS signals, in the form of light, from the user's antenna to the data center; and an O-E converter, an RF front-end and a virtualized GNSS receiver on the back-end side.
The user equipment consisted of an antenna, an amplification stage and an E-O converter. The chosen GNSS antenna was a NavXperience 3G+C, which features a low noise amplifier providing a gain of 42 dB with a noise figure of 2 dB. A 40-meter-long 1/2" coaxial cable carried the RF signal from the antenna to a patch panel cabinet located in the lab and designed to protect the RF connectors from weather effects, such as a lightning discharge. At the end of the RF cable a second GNSS amplifier was placed (model A11 by GPS Source Inc.), providing 30 dB of gain with a noise figure of 1.8 dB, and its output was connected to the E-O converter in charge of turning the received RF signals into light. The E-O converter consisted of a Tunics Reference SCL tunable laser source tuned at 1550.12 nm with 2 dBm of output power; a Photline Technologies DR-AN-40-MO single-ended driver (that is, a wideband RF non-inverting amplifier delivering a gain of 26 dB with a noise figure of 3 dB); and a Mach-Zehnder modulator which controlled the amplitude of the optical wave (see Fig. 3). The generated signal was then injected into an optical fiber that brought the signal to an optical switch acting as the entry point of a transport optical network emulated with the ADRENALINE Testbed® (see Fig. 4).
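The overall noise behaviour of the amplification chain just described can be checked with the Friis cascade formula. The sketch below is illustrative only: the loss of the 40 m coaxial run is not given in the text and is assumed here to be 4 dB (a passive attenuator contributes a noise figure equal to its loss).

```python
import math

def db_to_lin(db: float) -> float:
    """Convert a dB value to a linear power ratio."""
    return 10 ** (db / 10.0)

def cascaded_noise_figure_db(stages) -> float:
    """Friis formula: `stages` is a list of (gain_dB, noise_figure_dB)
    tuples, ordered from the antenna towards the back end."""
    f_total = 0.0
    g_acc = 1.0  # accumulated linear gain of the preceding stages
    for i, (g_db, nf_db) in enumerate(stages):
        f = db_to_lin(nf_db)
        if i == 0:
            f_total = f
        else:
            f_total += (f - 1.0) / g_acc
        g_acc *= db_to_lin(g_db)
    return 10 * math.log10(f_total)

# Stage list for the chain described above; the coaxial cable loss
# is an ASSUMPTION (4 dB), not a figure from the experiment.
chain = [
    (42.0, 2.0),   # antenna LNA: 42 dB gain, 2 dB noise figure
    (-4.0, 4.0),   # 40 m coaxial cable (assumed 4 dB loss)
    (30.0, 1.8),   # A11 amplifier: 30 dB gain, 1.8 dB noise figure
]
print(f"cascaded NF = {cascaded_noise_figure_db(chain):.2f} dB")
```

With a 42 dB first-stage gain, the cascaded noise figure is dominated by the antenna LNA, which is why the downstream cable and conversion stages barely degrade the link.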
The ADRENALINE Testbed® includes a multi-technology, software-defined control plane for multilayer (packet over optical) networks, which manages the networking resources and covers the long-haul core transport and aggregation segments, thus automating the processes involved in the provisioning of networking services, such as optical lightpaths or Ethernet/MPLS-TP/IP connectivity services. The design of ADRENALINE's control plane follows broad Software Defined Networking (SDN) principles, such as stacking components in a hierarchical setting with different levels of abstraction. Network connectivity services are provisioned by an overarching control orchestrator. In particular, at a given domain and layer, the control plane can be based on GMPLS technology and protocols (a distributed system in which a dedicated controller is autonomously responsible for each node) or follow SDN/OpenFlow principles, with a centralized controller that manages all the aspects of a network, dynamically configuring networks according to user and application needs. The facility also includes an SDN/NFV cloud computing platform, which can be deployed over multi-domain transport networks and distributed data centers [45].
In this particular experimental setup, the network management and orchestration was kept to the bare minimum. The optical network was configured with one OXC acting as the entry point, routing the optical signal through 35 km of real optical fiber (see Fig. 5), and with a ROADM Type II at the other end. The optical routing was performed by a self-developed GMPLS control plane, managed by an SDN controller implemented with OpenDaylight. The optical signal was then accessed from a port of the ROADM and converted back into the RF domain by means of an O-E converter (a Discovery Semiconductors DSC-R401HG InGaAs PIN photodetector with a transimpedance amplifier, i.e., a current-to-voltage converter, delivering a linear response to > +3 dBm optical input, 600 mVp-p of linear output voltage, 20 GHz of RF bandwidth and a conversion gain of 160 V/W), directly connected to the antenna input of an Ettus Research USRP E300 RF front-end (see Fig. 6). This front-end downconverted the received GNSS RF signals to baseband and performed the analog-to-digital conversion. The stream of raw signal samples was then injected via Gigabit Ethernet into the computing resources, which executed an instance of a virtual GNSS receiver and computed GNSS products and PVT fixes in real time. The compute resources were managed with OpenStack (using Nova for controlling compute resources, Neutron for managing network connectivity as a service and Keystone for authentication).
Fig. 7 shows the position of the antenna, as computed by the virtual GNSS receiver and plotted by Google Earth. The receiver was also able to generate GNSS products (that is, pseudorange, phase-range and pseudorange rate observables) for GPS L1 C/A and Galileo E1b/c signals, obtained without any kind of external assistance or differential system, and delivered in standard formats such as RINEX files or RTCM 10403.2 messages. Three virtualization mechanisms were tried, all based on the free and open source software-defined receiver GNSS-SDR: i) GNSS-SDR running in a virtual machine prepared with VirtualBox; ii) GNSS-SDR in a Docker container; and iii) GNSS-SDR in an Ubuntu Snap package. A full analysis of their performance is left as future work, but all of them achieved real-time operation when using a sampling frequency of 4 Msps in the RF front-end. In the tests performed with live signals, we obtained a Circular Error Probable (that is, the radius of a circle centered at the average position, containing the position estimate with a probability of 50%) of 2.16 m when using a simple least squares algorithm for the PVT computation. The creation of the GNSS virtual network function took 11.9 s, and the lightpath setup in the optical network took 4.2 s, as shown in Fig. 9. The latency measured from the antenna to the output of the O-E converter at the data center (see Fig. 10) was 177 µs, of which 116 µs correspond to the light propagation time along the 35 km.
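The Circular Error Probable quoted above can be estimated from a batch of position fixes as the median horizontal distance to the mean position. A minimal sketch, using synthetic fixes rather than the experimental data:

```python
import math

def cep50(positions):
    """Circular Error Probable: radius of the circle, centered at the
    mean position, that contains 50% of the fixes.
    `positions` is a list of (east, north) coordinates in meters."""
    n = len(positions)
    mean_e = sum(p[0] for p in positions) / n
    mean_n = sum(p[1] for p in positions) / n
    # Horizontal distance of each fix to the mean position, sorted.
    radii = sorted(math.hypot(e - mean_e, no - mean_n) for e, no in positions)
    # The CEP is the median of those radii.
    mid = n // 2
    if n % 2:
        return radii[mid]
    return 0.5 * (radii[mid - 1] + radii[mid])

# Toy example with SYNTHETIC fixes (not the data from the experiment):
fixes = [(0.0, 0.0), (1.0, 1.0), (-1.0, 2.0), (2.0, -1.0), (-2.0, -2.0)]
print(f"CEP = {cep50(fixes):.2f} m")
```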

CONCLUSIONS
This paper introduced the concept of the virtualized GNSS receiver, in the context of network function virtualization for optical access networks.After an overview of the state-of-the-art and a description of the proposed system and related existing tools, including the identification of technology bottlenecks, we report the results of a proof-of-concept that demonstrates the feasibility of the physical separation of the GNSS antenna and the virtual GNSS receiver.
The proof of concept consisted of the transmission of GNSS RF signals (gathered by an antenna and directly converted into light) over optical fiber, transported through a software-defined optical network from the antenna to a remote data center (35 km away from the antenna) in which a virtualized GNSS receiver performed the frequency downconversion, analog-to-digital conversion and all the baseband processing up to the generation of GNSS observables and PVT fixes, delivered in real time. The results obtained by this simple proof of concept show the technical feasibility of the proposed approach using commercial off-the-shelf optical and electronic components and the latest trends and available software tools from the IT industry. Other configurations (for instance, moving the RF front-end to the user side and transporting digitized data) are also possible in the presented setup, and will be explored in future contributions. A wireless user equipment is still a challenge due to the high throughput requirements of a continuous transmission of digitized GNSS signals.
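The throughput challenge of transporting digitized samples can be quantified with simple arithmetic. The sketch below assumes complex (I/Q) samples at 16 bits per component, a common over-the-wire format for USRP devices, and ignores all framing and protocol overhead.

```python
def stream_rate_mbps(sampling_rate_sps: float, bits_per_component: int,
                     complex_samples: bool = True) -> float:
    """Raw bit rate (Mbit/s) of a continuous stream of digitized samples,
    excluding any framing/protocol overhead."""
    components = 2 if complex_samples else 1  # I and Q for complex baseband
    return sampling_rate_sps * components * bits_per_component / 1e6

# 4 Msps complex baseband at 16 bits per I/Q component:
print(stream_rate_mbps(4e6, 16), "Mbit/s")
# A wideband configuration scales the rate proportionally:
print(stream_rate_mbps(25e6, 16), "Mbit/s")
```

Under the assumed 16-bit format, the 4 Msps configuration of the proof of concept yields a raw stream of 128 Mbit/s, well within Gigabit Ethernet yet already demanding for most wireless links, which illustrates why a wireless user equipment remains a challenge.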
An extensive list of existing tools for implementing management and network orchestration, network function virtualization, software-defined networking, virtual infrastructure management, and handling of virtualized GNSS receivers is provided in Appendix B, including the free and open source, virtualized GNSS receivers presented in this work.

Fig. 8. Diagram of the Proof of Concept.

Fig. 10. The latency measured from the antenna to the output of the O-E converter was 177 µs.

Table 1. GNSS signals and their frequency allocation, as transmitted by satellites. The minimum required receiver bandwidth is computed from the corresponding modulation and the Nyquist criterion (although narrower receivers are known to work, e.g., for Galileo E5a and E5b [35]), and the reference bandwidth is as defined in the corresponding Interface Control Document. Notation is as follows: OS: Open Service; SoL: Safety of Life; CS: Commercial Service; PRS: Public Regulated Service; M: Military. CDMA/FDMA: Code / Frequency Division Multiple Access. BPSK(n): Binary Phase Shift Keying with chip rate n × 1.023 Mchip/s.
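As an illustration of how such minimum bandwidths follow from the modulation, the main lobe of a BPSK(n) signal spans twice its chip rate. The sketch below covers only the BPSK case; BOC modulations, with their split spectrum, require a wider figure and are not handled here.

```python
# Fundamental chip rate unit for GNSS signals: 1.023 Mchip/s
F0 = 1.023e6

def bpsk_min_bandwidth_hz(n: int) -> float:
    """Main-lobe (null-to-null) bandwidth of a BPSK(n) signal:
    twice the chip rate, i.e. 2 * n * 1.023 MHz."""
    return 2 * n * F0

print(bpsk_min_bandwidth_hz(1) / 1e6, "MHz")   # e.g., GPS L1 C/A, BPSK(1)
print(bpsk_min_bandwidth_hz(10) / 1e6, "MHz")  # e.g., GPS L5, BPSK(10)
```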