Multitenant Transport Networks With SDN/NFV

We propose the combination of optical network virtualization and network function virtualization (NFV) for the deployment of on-demand OpenFlow-controlled virtual optical networks (VONs). Each tenant SDN controller runs in the cloud, so the tenant can control its deployed VON. This paper demonstrates the feasibility of the proposed use case and provides implementation details, in the ADRENALINE testbed, of an NFV orchestrator that provides multitenancy on top of a heterogeneous transport network by means of network orchestration and virtualization.


I. INTRODUCTION
Optical network virtualization is a key technology to optimize network infrastructure. Infrastructure multitenancy is one of the main requirements for future 5G transport networks, which will cover several network segments with heterogeneous technologies (i.e., packet, optical, wireless) and control domains (i.e., GMPLS, MPLS-TP, OpenFlow).
It is in this context that SDN orchestration [1] has been proposed as a feasible solution to handle the heterogeneity of network domains, technologies and vendors. SDN orchestration can fulfill the stringent requirements for End-to-End (E2E) connections, such as low latency and high bandwidth. It focuses on network control and abstraction across several control domains, whilst using standard protocols and modules. A network domain is understood as a set of Network Elements (NE) under a logically centralized SDN controller. Multi-domain SDN orchestration can be analyzed in several contexts, such as pure OpenFlow (OF)-enabled networks and heterogeneously-controlled networks (GMPLS/PCE and OF). Several initiatives in standardization organizations such as ONF or IETF advocate the necessity of SDN orchestration.
As an example, OIF and ONF presented the results of their Global Transport SDN Prototype Demonstration [2], where several SDN controllers and hierarchical control levels were analyzed. The Abstraction and Control of Transport Networks (ACTN) framework, proposed as a draft at IETF [3], is expected to provide capabilities such as seamless hierarchical service coordination across multiple tenants and the control of virtual and physical network domains.
Network function virtualization (NFV) aims to reduce network OpEx and CapEx. In brief, NFV [4] seeks to implement network functions that are typically deployed on specialized hardware as software instances running on commodity servers through software virtualization techniques. These virtualized functions are called virtual network functions (VNFs), and can be located in the most appropriate places (e.g., data centers), referred to as NFV Infrastructure Points of Presence (NFVI-PoPs) [4]. NFV is applicable to any data plane packet processing and control plane function in both fixed and mobile network infrastructures.
NFV and SDN have become key concepts for understanding the evolution of current networks. In this regard, the main objective pursued by telecom operators is to gradually adopt the innovations carried out in the IT industry in recent years.
It is in this view that we propose the direct handling of the allocated Virtual Optical Network (VON) resources by each tenant, independently controlled by its own Customer SDN Controller (CSC). The entity responsible for network virtualization is the Multi-domain Network Hypervisor (MNH) [13]. It interacts with a Multi-domain SDN Orchestrator (MSO) [10] in order to provision the E2E virtual links that compose the VON.
Typically, the SDN controller of each VON runs on a dedicated host. It can be deployed using several available software implementations, such as OpenDaylight or ONOS. The use of a standardized interface between the SDN controller and the MNH for network discovery and connection provisioning allows any SDN controller implementation to be used to control a VON. Thus, when a new VON is dynamically created through the MNH, it is necessary to manually install and configure an SDN controller implementation on a dedicated server, as well as to provide connectivity between the MNH and the SDN controller servers, typically located in a Network Operations Center [5].
We have also proposed to virtualize the CSC and move it into the cloud [5]. Independent SDN controller instances can be dynamically provisioned within minutes whenever new VONs are deployed. This approach offers additional advantages such as avoiding hardware maintenance downtime (i.e., a virtual SDN controller can be quickly and easily moved between physical hosts within a data center when hardware maintenance is required), along with faster recovery times in case of a network disaster or failover. This paper extends [6]: we present and detail the experimental validation of an SDN/NFV orchestration architecture for multi-tenant transport networks that dynamically deploys VONs and their corresponding virtual SDN controllers (e.g., OpenDaylight instances).

II. PROPOSED ARCHITECTURE FOR SDN/NFV TRANSPORT NETWORKS
We observe the necessity of offering different NFV services on top of multi-domain transport networks [7]. The architecture for providing such NFV services is depicted in Figure 1. The main components of this architecture are the NFV Orchestrator, the VNF Manager, and the Integrated Cloud and Network Orchestrator.
The NFV management and orchestration (MANO) architecture is defined by ETSI [4] and is responsible for handling the life cycle management of the physical and software resources that support the infrastructure virtualization, as well as the creation/destruction of the different VNFs. Inside NFV MANO, the NFV Orchestrator is responsible for handling the various VNF Managers and for offering the aforementioned services. The importance of defining a North Bound Interface (NBI) for the NFV Orchestrator is clear, as users or applications shall use the NBI to request the NFV services. A VNF Manager is responsible for the life cycle management (i.e., creation, configuration, and removal) of a VNF. Multiple VNF Managers may be deployed; a VNF Manager may be deployed for each VNF, or a single VNF Manager may serve multiple VNFs.
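As a rough illustration of how a user or application might consume such an NBI, the following Python sketch posts a service request to a hypothetical NFV Orchestrator endpoint. The URL, port and payload fields are assumptions made for illustration only, not an interface defined by ETSI or by our implementation.

```python
# Minimal sketch of a client using an NFV Orchestrator NBI over REST.
# The endpoint, port and payload schema are illustrative assumptions.
import requests

NFVO_NBI = "http://10.0.0.1:8080/nfvo/services"  # hypothetical NBI endpoint


def request_nfv_service(service_type, tenant_id, parameters):
    """Request the instantiation of an NFV service for a given tenant."""
    payload = {
        "service_type": service_type,   # e.g., "virtual-sdn-controller"
        "tenant": tenant_id,
        "parameters": parameters,
    }
    resp = requests.post(NFVO_NBI, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()  # assumed to contain a service identifier


# Example usage: request a virtual SDN controller for tenant-1.
# request_nfv_service("virtual-sdn-controller", "tenant-1",
#                     {"controller": "opendaylight"})
```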
Finally, the Virtual Infrastructure Managers (VIMs) are responsible for the control and management of the interaction of a VNF with the different IT and network resources under their authority, as well as for the virtualization of these resources. We have introduced the Integrated Cloud and Network Orchestrator [8], which acts as a VIM. The Integrated Cloud and Network Orchestrator is able to interact with both a Cloud Controller (e.g., OpenStack) and the MSO. By doing so, it can also request a VON from the MNH and use the customer (i.e., tenant) SDN Controller. The Integrated Cloud and Network Orchestrator interacts with the MSO by declaring the services that need to be established for the E2E path provisioning between several Virtual Machines (VMs).
In the next sections we follow a bottom-up approach to describe the different components of the proposed architecture.

A. Multi-domain SDN Orchestrator
The MSO (Figure 2) is introduced in order to support end-to-end connectivity by orchestrating the different network domains through per-domain SDN/OF or GMPLS/PCE controllers. It is based on the Application-Based Network Operations (ABNO) framework [9], which has been standardized by the IETF. The MSO takes into account the heterogeneous underlying network resources (e.g., multi-domain, multi-layer and multi-control network resources). It is assumed that the underlying SDN controllers are able to provide network topology information and flow programming functions. In [10], the MSO has been experimentally validated for enabling multi-layer and multi-domain network orchestration.
The Network Orchestration Controller is the component responsible for handling all the processes involved in provisioning end-to-end connectivity services. It also exposes an NBI to offer its services to applications such as the MNH.
The Topology Server is the component responsible for gathering the network topology from each control domain and building the overall network topology, which is stored in the Traffic Engineering Database (TED). The TED includes all the information about network links and nodes, and is used by the dedicated PCE to compute routes across the network.
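A minimal sketch of the TED abstraction is given below, assuming a simple node/link model; the class and field names are illustrative and do not reproduce the data model actually used by the Topology Server.

```python
# Illustrative sketch of a Traffic Engineering Database (TED) built by a
# Topology Server from per-domain topologies. Names are assumptions.
from dataclasses import dataclass, field


@dataclass
class Node:
    node_id: str
    domain: str          # controlling domain (e.g., "gmpls", "of-domain-1")
    node_type: str       # "packet" or "optical"


@dataclass
class Link:
    src: str             # source node id
    dst: str             # destination node id
    available_bw: float  # in Gb/s
    latency_ms: float


@dataclass
class TED:
    nodes: dict = field(default_factory=dict)
    links: list = field(default_factory=list)

    def merge_domain_topology(self, nodes, links):
        """Add the topology exported by one domain controller to the TED."""
        for n in nodes:
            self.nodes[n.node_id] = n
        self.links.extend(links)


# Example: merge the topology reported by one OpenFlow domain controller.
ted = TED()
ted.merge_domain_topology([Node("ovs1", "of-domain-1", "packet")],
                          [Link("ovs1", "roadm1", 10.0, 0.2)])
```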
The Virtual Network Topology Manager (VNTM) is responsible for multi-layer management. In the proposed architecture, the VNTM arranges the establishment of an optical connection, which is then offered as a logical L2 link to satisfy any incoming L2 connectivity demand.
The Provisioning Manager implements the different provisioning interfaces used to command the forwarding rules and the establishment of connectivity segments in the data plane. The Flow Server stores the connections established in the network in a Flow Database (FlowDB).
When an E2E connection is requested from the MSO, the MSO computes the necessary path taking into account the overall topology of the network domains. The MSO first requests an optical lightpath from the Active Stateful Path Computation Element (AS-PCE), which acts as an optical SDN controller. Once the lightpath has been established, the different packet flows are requested from the underlying packet SDN controllers. More details on this process can be found in [11].
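The sequence above can be summarized in a short Python sketch. The objects passed in (AS-PCE client, packet SDN controllers, flow database) are abstract placeholders with hypothetical method names, not the actual MSO modules.

```python
# Sketch of the multi-layer provisioning sequence: optical lightpath first,
# then packet flows, then storage in the FlowDB. Method names are assumptions.

def provision_e2e_connection(src, dst, as_pce, packet_controllers, flow_db):
    """Provision an E2E connection across optical and packet domains."""
    # 1. Request a lightpath from the Active Stateful PCE (optical controller).
    lightpath = as_pce.request_lightpath(src, dst)

    # 2. Once the lightpath is up, install packet flows in every traversed
    #    packet domain through its SDN controller.
    flows = [ctrl.install_flow(src, dst, lightpath)
             for ctrl in packet_controllers]

    # 3. Record the resulting connection in the Flow Database.
    flow_db.store({"src": src, "dst": dst,
                   "lightpath": lightpath, "flows": flows})
    return lightpath, flows
```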
With regard to SDN orchestration, we have proposed the Control Orchestration Protocol (COP) [12], which abstracts a common set of control plane functions used by a number of SDN controllers, allowing the interworking of heterogeneous control plane paradigms (i.e., OpenFlow, GMPLS/PCE). COP has been defined using the YANG modeling language and can be transported using RESTCONF, which is being adopted by the industry. The COP definition covers the topological information about the network, a call service for establishing E2E connections, and a path computation service. The use of COP will ease the deployment of SDN orchestration in the near future.
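As a concrete illustration, a COP call for an E2E connection could be pushed to the MSO over RESTCONF as in the following sketch. The resource path and JSON fields are illustrative assumptions; the normative definition is the YANG model in [12].

```python
# Illustrative COP call request sent over RESTCONF as JSON.
# URL and payload fields are assumptions; the YANG model in [12] is normative.
import requests

MSO_COP = "http://mso.example.net:8181/restconf/config/calls/call/call-1"

call = {
    "call-id": "call-1",
    "a-end": {"router-id": "node-A", "port": "eth1"},
    "z-end": {"router-id": "node-Z", "port": "eth3"},
    "traffic-params": {"reserved-bandwidth": "10G"},
}

resp = requests.put(MSO_COP, json=call, timeout=10)
resp.raise_for_status()
```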

B. Multi-domain Network Hypervisor
The MNH is responsible for providing the abstraction and virtualization of the underlying network resources (Figure 4). It is introduced to dynamically deploy multi-tenant virtual networks on top of networks orchestrated by the MSO, providing a network overlay.
The MNH architecture [13] is as follows. The VON Request Controller is the component responsible for providing the MNH interface used to request both virtual switches and virtual links in order to deploy a VON. To do so, the IP address of the CSC is needed. The Virtual Switch Handler provides an abstract network view of the allocated VON to the CSC (identified by the incoming IP address).
A virtual switch request includes a number of virtual Ethernet ports, which relate to underlying NEs (which might have been abstracted as nodes by the MSO). A virtual link request, on the other hand, includes the source and destination virtual switches. The Resource Allocation (RA) component is responsible for allocating the physical ports of the physical domains to the virtual switches and for requesting from the MSO (through the provisioning component) the necessary multi-domain connections to interconnect the requested virtual switches, which are related to physical domains. Once the connections have been established, the RA allocates the virtual port identifiers to which the connections are related.
For each VON, the Virtual Switch Handler provides the necessary OF datapaths with the provided IP address of the corresponding CSC, which runs as a virtual SDN controller. Each OF datapath is provided by an emulated OF virtual switch. The different emulated OF virtual switches are interconnected with virtual links, so when the CSC triggers automatic topology discovery by means of the Link Layer Discovery Protocol (LLDP) towards the emulated virtual switches, the CSC is able to retrieve the VON topology. The emulated virtual OF switches are connected to the Virtual to Physical (V2P) Interpreter, which is responsible for translating the OF commands (e.g., FLOW_MOD) received from the CSC, expressed on the abstract VON topological view, into the allocated physical resources. To this end, it consults the VON Database for the allocated physical ports and the established connections. The processed requests are sent to the provisioning module, which is responsible for requesting the provisioning of the physical resources from the MSO. This procedure is described in more detail in [13], and a minimal sketch of the translation idea is given below.
Figure 5 shows the proposed workflow for the dynamic deployment of an SDN-controlled VON and its control. The VON deployment starts with the creation of the VM which will act as the CSC. Once the CSC is running, the Integrated Cloud and Network Orchestrator requests a VON from the MNH. The MNH interacts with the MSO in order to provision the necessary virtual links, and finally the VON creation is acknowledged. The CSC is then connected to the MNH and is able to handle the allocated VON resources. The OF commands towards the VON are translated by the MNH.
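The following Python sketch illustrates only the virtual-to-physical translation idea: a FLOW_MOD expressed on the VON view is rewritten, using a port mapping such as the one stored in the VON Database, into physical resources that can be requested from the MSO. The data structures and field names are illustrative assumptions, not the MNH implementation.

```python
# Sketch of virtual-to-physical port translation in a V2P interpreter.
# The mapping and message format are illustrative assumptions.

# VON database: (virtual switch, virtual port) -> (physical domain, node, port)
von_db = {
    ("vsw1", 1): ("of-domain-1", "ovs1", 3),
    ("vsw1", 2): ("of-domain-1", "ovs1", 4),
    ("vsw2", 1): ("of-domain-2", "ovs7", 1),
}


def translate_flow_mod(virtual_switch, flow_mod):
    """Rewrite a FLOW_MOD expressed on the VON into physical resources."""
    domain, node, in_port = von_db[(virtual_switch, flow_mod["in_port"])]
    _, _, out_port = von_db[(virtual_switch, flow_mod["out_port"])]
    # The translated request is handed to the provisioning module,
    # which would forward it towards the MSO.
    return {
        "domain": domain,
        "node": node,
        "match": {"in_port": in_port, **flow_mod.get("match", {})},
        "actions": [{"output": out_port}],
    }


# Example: a flow between virtual ports 1 and 2 of virtual switch vsw1.
print(translate_flow_mod("vsw1", {"in_port": 1, "out_port": 2,
                                  "match": {"eth_type": 0x0800}}))
```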

C. Integrated Cloud and Network Orchestrator
The Integrated Cloud and Network Orchestrator is responsible for handling the different IT and network resources. Virtualization of IT resources is provided by means of a Cloud Orchestrator. The virtualization of the network resources is handled by means of the MNH, which slices the different physical network domains (e.g., packet and optical domains).
The Integrated Cloud and Network Orchestrator provides the following services: a VM Create, Read, Update, Delete (CRUD) mechanism; a VON CRUD mechanism; a network CRUD mechanism; and VM migration.
The VM CRUD mechanism allows a VM to be created, read, updated or deleted via a REST API. A VM might be requested based on its availability zone, its hardware resources (i.e., flavor), or the disk image to be loaded. A VM is also allocated inside a network. A second, management network is provided to the VM by default, giving the different applications (e.g., VNF Managers) management access to the VM.
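As an illustration, a VM create request could look as follows; the endpoint and payload fields are assumptions and do not reproduce the actual API of the Integrated Cloud and Network Orchestrator.

```python
# Sketch of a VM create request to a cloud/network orchestrator REST API.
# The endpoint and payload fields are illustrative assumptions.
import requests

ORCH_API = "http://orchestrator.example.net:8080/api/v1/vms"

vm_request = {
    "name": "customer-sdn-controller",
    "availability_zone": "dc-1",
    "flavor": "m1.medium",        # hardware resources
    "image": "opendaylight",      # disk image to be loaded
    "networks": ["tenant-net-1"], # a management network is added by default
}

resp = requests.post(ORCH_API, json=vm_request, timeout=10)
resp.raise_for_status()
vm_id = resp.json().get("vm_id")  # assumed response field
```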
The VON CRUD mechanism allows a VON, which might be controlled through a CSC, to be created, read, updated or deleted. In order to provide the necessary VONs, the Integrated Cloud and Network Orchestrator interacts with the MNH.
The network CRUD mechanism allows an L3 network to be created, read, updated or deleted; the network might include a valid IP range from which an IP address is assigned to each VM virtual network interface card (NIC). To create the network, end-to-end paths are requested between the VMs attached to the network. The process to provide an end-to-end path between two VMs is described in Fig. 5. The Integrated Cloud and Network Orchestrator issues flow requests between VM1 and VM2 to the MSO. After computing the route, the MSO knows whether the computing resources are reachable through the packet network (intra-DC) or whether an inter-DC connection is needed.
In the first case, the MSO sends the commands to the SDN controller to establish the forwarding rules in the virtual OpenFlow host switches (e.g., Open vSwitch, OVS) and in the intra-DC switches. In the second case, the MSO needs to establish an optical connection between the DCs via an AS-PCE PCInitiate message. When the optical connection has been established, the SDN controller is able to discover the new L2 link established between the DCs. A new path computation is then triggered and its results lead to the establishment of the necessary forwarding rules by the SDN controller.
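The decision between the two cases can be sketched as follows; the mso object and its methods are hypothetical placeholders for the MSO path computation and provisioning functions, not actual module names.

```python
# Sketch of the intra-DC / inter-DC decision after computing a route
# between two VMs. Helper objects and methods are placeholders.

def connect_vms(vm1, vm2, mso):
    route = mso.compute_route(vm1, vm2)
    if route.intra_dc:
        # Case 1: reachable through the packet network; push forwarding
        # rules to the host OVS switches and the intra-DC switches.
        mso.install_packet_flows(route)
    else:
        # Case 2: an inter-DC optical connection is needed; request it from
        # the AS-PCE (PCInitiate), let the new L2 link be discovered, and
        # recompute the path before installing the forwarding rules.
        mso.request_lightpath(route.src_dc, route.dst_dc)
        new_route = mso.compute_route(vm1, vm2)
        mso.install_packet_flows(new_route)
```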
Different types of VM migration exist, the most common being live and block migration. The former allows a VM to be moved without interrupting the processes running inside it. The latter involves service downtime, as the VM needs to be stopped. VMs do not run in isolation: typically, a VM is connected with other VMs in the same network to offer a joint service. If one of the VMs of a network is migrated, its connection state must be maintained; this is known as seamless VM migration and is performed by the Integrated Cloud and Network Orchestrator [8].
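A coarse sketch of the seamless-migration sequence is shown below, under the assumption that the orchestrator exposes separate cloud and network handles; the method names are illustrative placeholders.

```python
# Sketch of a seamless VM migration: live-migrate the VM and then update
# the network flows so the connections with its peers are preserved.
# Method names are illustrative placeholders.

def seamless_migration(vm, target_host, cloud, mso):
    peers = cloud.get_network_peers(vm)   # VMs sharing a network with vm
    cloud.live_migrate(vm, target_host)   # no interruption of the VM processes
    for peer in peers:
        mso.update_flows(vm, peer)        # re-point flows to the new host
```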

D. NFV Orchestrator
The NFV Orchestrator is responsible for managing the life cycle of the physical and software resources that support the infrastructure virtualization, as well as the life cycle of the different VNFs [4], as defined by ETSI. The NFV Orchestrator architecture and interfaces are being defined at standardization bodies and are still under discussion.
In order to simplify the proposed transport NFV architecture, we have included a simplified VNF Manager in the NFV Orchestrator. The VNF Manager design and definition are also being discussed at standardization bodies. We propose a simple design based on the necessary functionalities (Fig. 6). The VNF Controller offers the interface through which the NFV Orchestrator controls the VNF. The Virtual IT Resources component interacts with the Integrated Cloud and Network Orchestrator to obtain the necessary IT and network resources to deploy a VNF. In order to control the VNF, the VNF Manager also has access to the obtained IT resources (e.g., the VNF Manager has access to the allocated VM). Finally, the VNF Life Cycle component manages the life cycle of the VNF, monitors it, and notifies the VNF Controller of incidents.
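A minimal sketch of such a simplified VNF Manager is given below. The class structure and method names are illustrative assumptions that mirror the three components just described (VNF controller interface, virtual IT resources, life cycle monitoring), not the actual implementation.

```python
# Minimal sketch of a simplified VNF Manager: it obtains IT/network
# resources from the Integrated Cloud and Network Orchestrator, deploys
# the VNF, and monitors its life cycle. Names are illustrative.
import time


class VNFManager:
    def __init__(self, orchestrator):
        self.orchestrator = orchestrator  # Integrated Cloud and Network Orch.
        self.vm = None

    def deploy(self, vnf_descriptor):
        """Create the VM that hosts the VNF and start it."""
        self.vm = self.orchestrator.create_vm(vnf_descriptor["image"],
                                              vnf_descriptor["flavor"])
        return self.vm

    def monitor(self, notify):
        """Life cycle monitoring: report incidents to the VNF controller."""
        while self.vm is not None:
            if not self.orchestrator.is_alive(self.vm):
                notify("vnf-down", self.vm)
            time.sleep(5)

    def remove(self):
        """Release the allocated IT resources when the VNF is removed."""
        if self.vm is not None:
            self.orchestrator.delete_vm(self.vm)
            self.vm = None
```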

III. EXPERIMENTAL VALIDATION

A. Testbed description
The proposed architecture has been validated in the cloud computing platform and transport network of the ADRENALINE testbed [14] (Figure 7). The cloud computing platform is controlled using OpenStack (Havana release) and has been deployed on servers with 2x Intel Xeon E5-2420 processors and 32 GB of RAM each. An OpenStack controller node and four compute nodes have been set up in different network locations. Four OpenFlow switches have been deployed using COTS hardware and Open vSwitch. Three data center border switches include 10 Gb/s XFP tunable transponders in order to connect to the DWDM network as alien wavelengths. Finally, the GMPLS/PCE-controlled optical network is composed of an all-optical WSON with 2 ROADMs and 2 OXCs providing reconfigurable end-to-end lightpaths.
The implementation of the MSO has been previously reported [7] and uses Python for most of its components. Figure 9 shows the different network domains from the MSO perspective: the optical network domain (in red), the OpenFlow domain (in green), and the inter-domain links (in blue). The NFV Orchestrator is a basic set of Python scripts that requests the necessary VNF Managers depending on the proposed use case. A mechanism for providing service function chaining will be incorporated in future work.

B. Virtual SDN Controllers as VNF for controlling a Virtual Optical Network
In this section we present the experimental validation of the proposed use case of virtual SDN controllers, which run as VNFs and are responsible for controlling dynamically deployed VONs.
In the proposed validation, two virtual switches and a virtual link are deployed on top of the ADRENALINE transport network. The virtual ports of the virtual switches are allocated to physical ports on specific switches, which is known as port slicing. Other types of slicing can also be applied using the proposed architecture. Figure 10 depicts the different network domains from the MSO perspective (in green) and the deployed VON on top of the orchestrated domains (in blue); the dashed lines represent the logical relationship between the deployed virtual switches and the physical switches that have been allocated.
Figure 11 shows the deployed virtual SDN controller for the dynamically allocated VON. It can be observed how the customer OpenDaylight (ODL) receives the incoming connections from the virtual OF switches, including the information of each virtual switch. The virtual links are discovered through standard LLDP: the MNH receives the incoming OF Packet Out messages to be sent through the source port of the source virtual link (including the LLDP information) and translates each of them into the OF Packet In message it might have received in the destination port of the destination virtual switch. Through this mechanism, ODL is able to identify the virtual links.
Figure 12 shows the protocol analyzer (i.e., Wireshark) capture of the workflow from the perspective of the Integrated Cloud and Network Orchestrator. It interacts with the Cloud Controller in order to provision the virtual SDN controller (create vm) and later with the MNH for the provisioning of the VON (create virtual network). We can also observe the interaction (create service call) between the MNH and the MSO for the establishment of the VON. The related messages follow a REST HTTP interface, including JavaScript Object Notation (JSON) messages. The deployment time for a virtual SDN controller is around 69 s, and the requested virtual network setup delay, including the necessary network connections, is around 4 s. These results are in the expected order of magnitude to simplify network operations.

IV. CONCLUSIONS
We have experimentally validated, in the cloud computing platform and transport network of the ADRENALINE testbed, the combination of optical network virtualization and NFV for the deployment of on-demand OF-controlled Virtual Optical Networks (VONs). Each tenant SDN controller runs in the cloud, focusing on the deployment of control plane functions as VNFs.