Multi-Tenant Transport Networks with SDN/NFV

We propose combining optical network virtualization and NFV for the deployment of on-demand OpenFlow-controlled Virtual Optical Networks (VONs). Each tenant's SDN controller runs in the cloud, so the tenant can control its deployed VON.


Introduction
Optical network virtualization is a key technology for optimizing network infrastructure. Infrastructure multi-tenancy is one of the key requirements for future transport networks, which will span several network segments with heterogeneous technologies and control domains. In this context, SDN orchestration 1 can meet the stringent requirements of End-to-End (E2E) connections by handling the heterogeneity of network domains, technologies, and vendors. Network Function Virtualization (NFV) is expected to reduce network OPEX and CAPEX.
NFV and SDN are key concepts for understanding the evolution of current networks. In this regard, the main objective pursued by telecom operators is to gradually adopt the innovations achieved in the IT industry in recent years.
With this in mind, we propose that each tenant directly handles its allocated Virtual Optical Network (VON) resources through its own Customer SDN Controller (CSC). The Multi-domain Network Hypervisor 2 (MNH) is responsible for network virtualization; it interacts with a Multi-domain SDN Orchestrator 3 (MSO) in order to provision E2E virtual links.
Typically, the SDN controller of each VON runs in a dedicated host. It can be deployed using several available software implementations, such as OpenDaylight or ONOS. The use of a standardized interface between the SDN controller and the MNH for network discovery and connection provisioning allows any SDN controller implementation to be used to control a VON. Thus, when a new VON is dynamically created through the MNH, an SDN controller implementation must be manually installed and configured on a dedicated server, and connectivity must be provided between the MNH and the SDN controller servers, typically located in a Network Operation Center 4 .
We have also proposed virtualizing the CSC and moving it into the cloud 4 . Independent SDN controller instances can be dynamically provisioned within minutes, whenever new VONs are deployed. This approach offers additional advantages, such as eliminating hardware maintenance downtime (i.e., a virtual SDN controller can be quickly and easily moved between physical hosts within a data center when hardware maintenance is required) and decreasing recovery time in case of a network disaster or failover.
In this paper, we present the experimental validation of an SDN/NFV orchestration architecture for multi-tenant transport networks that dynamically deploys VONs and their corresponding virtual SDN controllers as Virtual Network Functions (VNFs) in data centers. All the presented results have been obtained in the Cloud Computing Platform and Transport Network of the ADRENALINE testbed.

Proposed Architecture
Different NFV services are offered on top of multi-domain transport networks 5 . The architecture for providing NFV services is depicted in Fig. 1.a. Its main components are the NFV Orchestrator, the Virtual Network Function (VNF) Manager, and the Virtual Infrastructure Manager (VIM).
The NFV MANO framework is defined by ETSI 6 as responsible for the life-cycle management of the physical and software resources that support infrastructure virtualization, as well as for the life cycle of the different VNFs. The NFV Orchestrator is responsible for handling the various VNF Managers and for offering the aforementioned services. Defining a North Bound Interface (NBI) for the NFV Orchestrator is clearly important, as users or applications shall use the NBI to request the NFV services. A VNF Manager is responsible for the life-cycle management (i.e., creation, configuration, and removal) of a VNF. Multiple VNF Managers may be deployed; a VNF Manager may be deployed for each VNF, or a single VNF Manager may serve multiple VNFs.
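As a rough illustration of the life-cycle responsibilities described above, the following Python sketch models a VNF Manager; the class and method names are our own illustrative assumptions, not part of the ETSI specification:

```python
# Minimal sketch of a VNF Manager's life-cycle duties (create,
# configure, remove). All names are illustrative assumptions only.

class VNFManager:
    def __init__(self):
        self.vnfs = {}  # vnf_id -> {"state": ..., "config": ...}

    def create(self, vnf_id):
        # Instantiate the VNF (e.g., boot a VM image through the VIM).
        self.vnfs[vnf_id] = {"state": "CREATED", "config": {}}

    def configure(self, vnf_id, **params):
        # Push run-time configuration to an instantiated VNF.
        vnf = self.vnfs[vnf_id]
        vnf["config"].update(params)
        vnf["state"] = "CONFIGURED"

    def remove(self, vnf_id):
        # Tear down the VNF and release its resources.
        del self.vnfs[vnf_id]

# An NFV Orchestrator may keep one manager per VNF, or share one:
manager = VNFManager()
manager.create("csc-1")
manager.configure("csc-1", controller_ip="10.0.0.10")
```

In this paper's setting, the VNF whose life cycle is managed this way is the tenant's virtual SDN controller.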
Finally, the VIM(s) are responsible for controlling and managing the interaction of a VNF with the different IT and network resources under their authority, as well as for the virtualization of these resources. We have introduced the SDN IT and Network Orchestrator (SINO) 5 , which acts as the VIM. The SINO is able to interact with both a Cloud Controller (e.g., OpenStack) and an MSO. By doing so, the SINO can also request a VON from the MNH to be used by the Customer (i.e., tenant) SDN Controller. The SINO interacts with the MSO by declaring the services that need to be established for the E2E path provisioning between several Virtual Machines (VMs).
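The SINO's dual role can be sketched as follows; `CloudController`, `MNHClient`, and all method names are hypothetical stand-ins for the real interfaces, not the implemented APIs:

```python
# Sketch of the SINO acting as VIM: it provisions VMs through a cloud
# controller and requests a VON through the MNH. Illustrative only.

class CloudController:
    def boot_vm(self, name):
        # Stand-in for an OpenStack-style VM boot request.
        return {"name": name, "ip": "192.0.2.10"}

class MNHClient:
    def __init__(self):
        self.requests = []

    def request_von(self, csc_ip, switches, links):
        # The MNH would in turn ask the MSO for the virtual links.
        self.requests.append((csc_ip, switches, links))
        return "von-1"

class SINO:
    def __init__(self, cloud, mnh):
        self.cloud, self.mnh = cloud, mnh

    def deploy_tenant(self, tenant):
        # 1) Boot the tenant's Customer SDN Controller as a VM.
        csc = self.cloud.boot_vm(f"csc-{tenant}")
        # 2) Request a VON from the MNH, bound to the CSC's IP address.
        von_id = self.mnh.request_von(csc["ip"],
                                      switches=["domainA", "domainB"],
                                      links=[("domainA", "domainB")])
        return csc, von_id

sino = SINO(CloudController(), MNHClient())
csc, von_id = sino.deploy_tenant("t1")
```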
The MNH is responsible for providing the abstraction and virtualization of the underlying network resources (see Fig. 1.b). It is introduced to dynamically deploy multi-tenant virtual networks on top of networks orchestrated by the MSO, providing a network overlay. The MNH architecture is as follows. The VON Request Controller provides the MNH interface for requesting the virtual switches and virtual links that make up a VON; to do so, the IP address of the CSC is required. The Virtual Switch Handler provides an abstract network view of the allocated VON to the CSC (identified by the incoming IP address). A virtual switch request includes the related physical domains (abstracted as nodes by the MSO) and a number of virtual Ethernet ports, while a virtual link request includes the source and destination virtual switches. The Resource Allocation (RA) component allocates the physical ports of the physical domains to the virtual switches and requests from the MSO (through the Provisioning component) the multi-domain connections needed to interconnect the requested virtual switches, which are mapped to physical domains. Once the connections have been established, the RA allocates the virtual port identifiers to which the connections are related.
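The RA component's port bookkeeping can be illustrated with a toy allocation routine; the data layout and function names are our own assumptions:

```python
# Sketch of Resource Allocation: assign free physical ports of a
# domain to a virtual switch and derive the inter-switch connection
# request sent to the MSO. Data structures are illustrative.

free_ports = {"domainA": [1, 2, 3], "domainB": [5, 6]}

def allocate_vswitch(domain, n_ports):
    # Take n free physical ports of the domain for the virtual switch;
    # virtual port identifiers are assigned sequentially from 1.
    phys = [free_ports[domain].pop(0) for _ in range(n_ports)]
    return {"domain": domain,
            "ports": {vp: pp for vp, pp in enumerate(phys, start=1)}}

def link_request(src_sw, dst_sw):
    # A virtual link becomes a multi-domain connection request to the MSO.
    return {"src": src_sw["domain"], "dst": dst_sw["domain"]}

vs1 = allocate_vswitch("domainA", 2)
vs2 = allocate_vswitch("domainB", 1)
conn = link_request(vs1, vs2)
```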
For each VON, the Virtual Switch Handler instantiates the necessary OpenFlow (OF) datapaths, configured with the IP address of the corresponding CSC. Each OF datapath is provided by an emulated OF virtual switch. The emulated OF virtual switches are interconnected with virtual links, so when the CSC triggers automatic topology discovery by sending LLDP messages to the emulated virtual switches, it is able to recover the VON topology. The emulated virtual OF switches are connected to the Virtual-to-Physical (V2P) Interpreter, which translates the OF commands (e.g., FLOW_MOD) received from the CSC, expressed against the abstract VON topological view, into operations on the allocated physical resources. To this end, it consults the VON Database for the allocated physical ports and the established connections. The processed requests are sent to the Provisioning module, which requests the provisioning of the physical resources from the MSO.
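The V2P translation step can be illustrated with a toy port-mapping lookup; the VON Database layout and the FLOW_MOD representation here are simplified assumptions, not the actual OpenFlow wire format:

```python
# Sketch of the Virtual-to-Physical (V2P) Interpreter: rewrite the
# ports of a FLOW_MOD expressed on the abstract VON topology into the
# physical ports recorded in the VON Database. Simplified on purpose.

von_db = {
    # (virtual_switch, virtual_port) -> (physical_domain, physical_port)
    ("vs1", 1): ("domainA", 7),
    ("vs1", 2): ("domainA", 9),
}

def translate_flow_mod(vswitch, flow_mod):
    in_dom, in_port = von_db[(vswitch, flow_mod["in_port"])]
    _, out_port = von_db[(vswitch, flow_mod["out_port"])]
    # The rewritten request is handed to the Provisioning module,
    # which contacts the MSO for the physical domain.
    return {"domain": in_dom, "in_port": in_port, "out_port": out_port}

req = translate_flow_mod("vs1", {"in_port": 1, "out_port": 2})
```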
The MSO is introduced to support end-to-end connectivity by orchestrating the different network domains through per-domain SDN/OpenFlow or GMPLS/PCE controllers. It is based on the Application-Based Network Operations (ABNO) framework. The MSO must take into account the heterogeneous underlying network resources. The NBIs of the physical SDN controllers are typically technology- and vendor-dependent, so the MSO shall implement a different plugin for each NBI. The Network Orchestration Controller (see Fig. 1.b) is the component responsible for handling all the processes involved in provisioning end-to-end connectivity services. The Topology Server is responsible for gathering the (abstracted) network topology from each control domain.

Fig. 2.a shows the proposed workflow for the dynamic deployment of an SDN-controlled VON and its control. The VON deployment starts with the creation of the VM that will act as the CSC.
Once the CSC is running, the SINO requests a VON from the MNH. The MNH interacts with the MSO in order to provision the necessary virtual links, and finally the VON creation is acknowledged. The CSC then connects to the MNH and is able to handle the allocated VON resources. The OF commands towards the VON are translated by the MNH.
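The deployment workflow described above can be summarized as an ordered sequence of steps; the step names below merely paraphrase the text:

```python
# Sketch of the VON deployment workflow: each step appends to a log so
# the ordering is explicit. Nothing here is executable infrastructure.

def deploy_von_workflow():
    log = []
    log.append("SINO: create CSC VM via Cloud Controller")
    log.append("SINO: wait until CSC is running")
    log.append("SINO: request VON from MNH (with CSC IP)")
    log.append("MNH: request virtual links from MSO")
    log.append("MNH: acknowledge VON creation")
    log.append("CSC: connect to MNH, discover topology via LLDP")
    log.append("CSC: send OF commands, translated by MNH towards MSO")
    return log

steps = deploy_von_workflow()
```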

Experimental Validation
The proposed architecture has been validated in the Cloud Computing Platform and Transport Network of the ADRENALINE testbed 7 . The OpenStack Havana release has been deployed on servers, each with 2 x Intel Xeon E5-2420 CPUs and 32 GB of RAM. Four OpenFlow switches have been deployed. Each data center border switch has a 10 Gb/s XFP tunable transponder and an OFS. Finally, the GMPLS-controlled optical network is composed of an all-optical WSON with 2 ROADMs and 2 OXCs providing reconfigurable end-to-end lightpaths.
The implementation of the MSO has been previously reported 5 ; it uses Python for most of the components and C++ for the PCE component. Several internal REST interfaces are offered between the different components. Fig. 2.b shows the Wireshark capture of the workflow from the perspective of the SINO, which interacts with the Cloud Controller to provision the CSC, and with the MNH to provision the VON. We can also observe the interaction between the MNH and the MSO for the establishment of the VON.

Conclusions
We have experimentally validated, in the Cloud Computing Platform and Transport Network of the ADRENALINE testbed, the combination of optical network virtualization and NFV for the deployment of on-demand OF-controlled Virtual Optical Networks (VONs). Each tenant's SDN controller runs in the cloud, demonstrating the deployment of control-plane functions as VNFs.

Fig. 1.c shows the different network domains from the MSO perspective (in green), and the deployed VON on top of the orchestrated domains (in blue).