Two-Level Abstraction Approach for SDN-based Service Provisioning in Open Line Systems Featuring TAPI Externalized Path Computation

Co-employment of node-level and component-level models is essential to achieve both SDN controller scalability and accurate path computation in functional block-based disaggregation networks. The proposed control system is prototyped, and optical path provisioning within 30 seconds is demonstrated on an optical network testbed.


Introduction
In the evolution towards cloud-native photonic networks and beyond-5G, the requirements on optical networks are increasingly demanding, especially in terms of flexibility and agility. To realize such future optical networks, disaggregation technologies have been widely studied, including partial disaggregation, full disaggregation, and component-level disaggregation [1]. The latter offers finer management granularity but imposes a larger number of devices to manage on an SDN controller. Partial disaggregation decouples optical transponders from the line equipment and employs a dedicated open line system (OLS) controller for provisioning network media channels. It has attracted much attention due to advantages such as timely transponder upgrades, clear design of transmission lines, and SDN controller scalability. Yet, component-level disaggregation is of increasing importance, since full visibility of the optical components is needed for an accurate path computation reflecting the actual node structures. Recently, a functional block-based disaggregation (FBD) [2] model has been developed that describes the precise network topology, including intra-node structures, and mathematically defines the routing constraints.
To address SDN controller scalability and provide an accurate path computation service at the same time, this paper demonstrates an automated optical network control/management system employing two abstraction levels, i.e., partial disaggregation and component-level disaggregation. In the developed system, the WDM orchestrator and the OLS controller (OLS-C) operate at a "node"-level abstraction, while the path computation function works at the "device" or "circuit pack" level. The path computation function is externalized in a dedicated server, relying on the ONF TAPI model [3] with a RESTCONF interface between the OLS-C and the path computation server (PCS).

FBD model and TAPI topology context
In the FBD model, individual optical components are modeled and the optical fiber connections among their ports are described. Furthermore, the switching/filtering functionalities (i.e., constraints such as input/output connectivity and spectrum availability) of the individual components are mathematically defined in a machine-readable format using ILP formulations. By solving the ILP formula, the switching functionalities of whole nodes or networks can be automatically analyzed [2]. In the TAPI topology context, by contrast, the detailed intra-node structures are abstracted, and attributes such as links, nodes, node edge points (NEPs), and service interface points (SIPs) are described. The photonic media model augments core components with information such as supportable frequency ranges. Fig. 1 illustrates the schematic mappings between the FBD model (indicated by green and red lines) and the TAPI topology context (indicated by orange circles and letters). Since the FBD model provides the precise topology information, the TAPI topology context can be generated from the FBD model-based topology description by extracting the information on nodes, links, NEPs, and SIPs. In addition, the supportable frequency at each NEP or SIP can be calculated with the ILP formula defined in the FBD model. We have developed a mapping algorithm that automatically analyzes an FBD model-based topology description and outputs a TAPI topology context. Fig. 2 shows an overview of the developed PCS based on the FBD model, which consists of two parts: the swagger-server and the FBD-based calculator module. The former provides a TAPI-compatible interface, where the server stub is generated by the open-source yang2swagger tool and the necessary functions are implemented.
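The mapping idea can be illustrated with a short sketch. The data layout below is an assumption for illustration only (the actual FBD description and mapping algorithm are far richer): every component port becomes an NEP, externally reachable ports become SIPs, and inter-component fiber connections become TAPI links.

```python
# Minimal sketch (assumed data layout, not the actual FBD schema): derive a
# TAPI-like topology context from a simplified FBD-style description.
def fbd_to_tapi_context(fbd):
    """Map a simplified FBD description to a TAPI-like topology context."""
    context = {"uuid": fbd["network-id"], "node": [], "link": [], "sip": []}
    for comp in fbd["components"]:
        node = {"uuid": comp["id"], "owned-node-edge-point": []}
        for port in comp["ports"]:
            # every component port is exposed as a node edge point (NEP)
            nep = {"uuid": f'{comp["id"]}/{port["name"]}',
                   "supported-frequency": port.get("freq-range")}
            node["owned-node-edge-point"].append(nep)
            if port.get("external"):  # externally reachable port -> SIP
                context["sip"].append({"uuid": nep["uuid"]})
        context["node"].append(node)
    # inter-component fiber connections become TAPI links
    for conn in fbd["fiber-connections"]:
        context["link"].append({"node-edge-point": [conn["a"], conn["z"]]})
    return context

fbd = {
    "network-id": "net-1",
    "components": [
        {"id": "wss-1", "ports": [
            {"name": "in1", "external": True, "freq-range": [191.3, 196.1]},
            {"name": "out1"}]},
        {"id": "amp-1", "ports": [
            {"name": "in1"},
            {"name": "out1", "external": True}]},
    ],
    "fiber-connections": [{"a": "wss-1/out1", "z": "amp-1/in1"}],
}
ctx = fbd_to_tapi_context(fbd)
```

In the real system, the supportable frequency attached to each NEP/SIP is not read from a field as above but computed with the ILP formula, and the mapping is run offline.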
In the latter, an FBD model-based topology description, a calculation model file describing the constraint and objective functions, and calculation data files describing variable and set definitions are stored. The model file, data files, and TAPI topology context file are automatically generated from the FBD model-based topology description file; the generation process is performed offline. In addition to the TAPI topology context, the PCS stores detailed resource usage to resolve network resource contention at the component level.
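The component-level resource bookkeeping could look like the following sketch. The data structure and slot granularity are assumptions for illustration; the paper does not specify how the PCS stores usage internally.

```python
# Illustrative sketch (data structures assumed): per-component-port frequency
# slot bookkeeping that a PCS could use to detect contention before admitting
# a new media channel.
class ResourceStore:
    def __init__(self):
        self.used = {}  # port-id -> set of occupied frequency-slot indices

    def is_free(self, port, slots):
        """True if none of the requested slots are occupied on this port."""
        return self.used.get(port, set()).isdisjoint(slots)

    def reserve(self, port, slots):
        """Occupy slots on a port; reject the request on contention."""
        if not self.is_free(port, slots):
            raise ValueError(f"contention on {port}")
        self.used.setdefault(port, set()).update(slots)

    def release(self, port, slots):
        """Free previously occupied slots, e.g., on service deletion."""
        self.used.get(port, set()).difference_update(slots)

store = ResourceStore()
store.reserve("wss-1/out1", {10, 11, 12})
ok = store.is_free("wss-1/out1", {13, 14})       # disjoint -> admissible
blocked = store.is_free("wss-1/out1", {12, 13})  # overlaps slot 12
```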

TAPI path computation service based on the FBD model
For a path computation, the server receives a path computation request via TAPI, obtains the SIPs corresponding to the end points, and maps them to the component port names defined in the FBD model-based topology description. The calculator module adds the end points to the data files to specify the path computation source/destination points and then sends the model and data files to the open-source GLPK optimization solver [4]. Here, the objective function is simply set to minimize the required network resources at the component level. After receiving the results from the GLPK solver, the server returns them to the client and updates the resource usage and the TAPI topology context accordingly. Note that this computation service can compute not only the inter-node paths (visible links) but also the detailed intra-node connections, i.e., the intra-node contention constraints are fully considered.
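The hand-off to the solver can be sketched as follows. The `--model`, `--data`, and `--output` options are the standard `glpsol` command-line flags; the file names and the `src`/`dst` parameter names, however, are illustrative assumptions and not taken from the actual PCS files.

```python
# Sketch of how a calculator module could hand a request to GLPK's
# command-line solver, glpsol. The endpoints are appended to the MathProg
# data file, then glpsol is run on the model/data pair (e.g., via
# subprocess.run(cmd)).
def build_glpsol_command(model_file, data_file, output_file):
    """Assemble the glpsol invocation for a MathProg model/data pair."""
    return ["glpsol", "--model", model_file, "--data", data_file,
            "--output", output_file]

def append_endpoints(data_text, src_port, dst_port):
    """Add source/destination component ports to the MathProg data section
    (parameter names are illustrative, not taken from the actual files)."""
    endpoints = f"param src := {src_port};\nparam dst := {dst_port};\n"
    return data_text.rstrip() + "\n" + endpoints

cmd = build_glpsol_command("topology.mod", "request.dat", "result.txt")
data = append_endpoints("set PORTS := wss1_in1 amp1_out1;",
                        "wss1_in1", "amp1_out1")
```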
The PCS can universally handle any optical components or node structures without software updates, since the routing constraints are defined externally in the topology description files at the component level in a general way (i.e., as ILP formulae) instead of dedicated definitions such as colored or directional constraints. Although a path computation is formulated here as a single optimization problem, the problem can be divided into several subproblems, such as inter-node and intra-node routing, which can enhance the PCS scalability. Furthermore, the ILP formulae are described in a standard machine-readable format, the GNU MathProg modeling language [4], so that the external optimization solver can be easily changed or upgraded. The path computation time based on the FBD model is discussed in [2].

Experimental setup
The validation of the approach is carried out in the ADRENALINE testbed [5], shown in Fig. 3. As per Fig. 2, the WDM orchestrator (192.168.0.82) follows a hierarchical partial disaggregation and is responsible for coordinating the functions of the OLS-C and the transponders. The PCS (IP address 172.50.0.1) starts, constructs a per-device FBD-based node model of the optical network, and abstracts it into a TAPI 2.1 photonic-media model and context. The OLS-C, responsible for setting up Media Channel (MC) connectivity services, has two IP addresses: 172.50.0.254, within the same subnet as the PCS, used to retrieve the TAPI context and topology and to request path computations, and 192.168.169.100, which is used to connect to the agents running in the nodes.
The experiment is as follows (within brackets, the corresponding item in Fig. 4). At time 0, [1] the OLS-C starts and requests the TAPI context from the PCS. Since the OLS-C needs to provision the path once computed, the IP addresses and ports of the node agents are included in the topology objects using the "value-name" and "value" mechanism of TAPI to encode additional node attributes. In [19] the PCS replies. At [20], the WDM controller gets the whole context from the OLS-C and maps transceiver line ports to OLS client ports. At [56], the WDM controller requests a TAPI photonic-media channel connectivity service between two OLS client ports, using the create-connectivity-service remote procedure call (RPC). The OLS-C [57] retrieves the most up-to-date context (this is done on a per-request basis) and requests a path computation service [79]. Approximately 20 seconds later [84], the PCS replies with the path (see Fig. 3 for an example encoding). The OLS-C instantiates the cross-connections in nodes 2 and 4 [86, 87]. The RPC reply is sent to the WDM orchestrator [94].
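Two of the TAPI constructs used above can be sketched in a few lines. The `PHOTONIC_MEDIA` layer and the TAPI name list with `value-name`/`value` pairs are standard TAPI 2.1 constructs; the SIP uuids, agent address values, and the exact request body of the testbed are illustrative assumptions and are not reproduced from the experiment.

```python
import json

# Hedged sketch of (a) a create-connectivity-service request body that a WDM
# orchestrator could POST to the OLS-C over RESTCONF, and (b) the TAPI
# "value-name"/"value" name-list mechanism used to attach node-agent
# addresses to topology objects.
def connectivity_service_request(sip_a, sip_z):
    """Build a TAPI photonic-media connectivity-service request (illustrative)."""
    def end_point(sip):
        return {
            "service-interface-point": {"service-interface-point-uuid": sip},
            "layer-protocol-name": "PHOTONIC_MEDIA",
        }
    body = {"tapi-connectivity:input": {"end-point": [end_point(sip_a),
                                                      end_point(sip_z)]}}
    return json.dumps(body)

def agent_address_name_entries(ip, port):
    """Encode a node agent's address as TAPI name-list entries
    (value-names are hypothetical examples)."""
    return [{"value-name": "agent-ip", "value": ip},
            {"value-name": "agent-port", "value": str(port)}]

payload = connectivity_service_request("sip-ols-client-2", "sip-ols-client-4")
names = agent_address_name_entries("192.168.169.101", 830)
```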
As shown, the whole process takes approximately 30 s, with the path computation being the dominant contributor. Note that this process only involves the OLS domain and does not include the additional time required by the transceivers. A similar process [95-135] is performed to release the connectivity service, which takes less time (approx. 7 s) since no path computation is involved.

Conclusions
A control architecture co-employing partial and component level disaggregation models was experimentally demonstrated.
The experiment shows the applicability of the TAPI models (i.e., the photonic-media layer) for provisioning and path computation. The results are in line with industry practice and validate the approach: latency and overhead are acceptable, with setup delays on the order of seconds, significantly lower than typical hardware configuration delays. Component-level path computation remains a concern in terms of computation time for large-scale networks; however, a component model described in a machine-readable way will be beneficial in the future disaggregation era, since it can address any node structure and simplifies the externalization of the path computation function.