TAPI-enabled SDN control for partially disaggregated multi-domain (OLS) and multi-layer (WDM over SDM) optical networks [Invited]

Network operators are facing a critical issue in their optical transport networks as they deploy 5G+ and Internet of Things services: they need to address a capacity increase by a factor of 10 while keeping a similar cost per user. Over the past years, network operators have looked at the optical disaggregated approach with great interest as a means of achieving the required efficiency and cost reduction. In particular, partially disaggregated optical networks make it possible to decouple the transponders, provided by different vendors, from the transport system (known as an open line system). On the other hand, space division multiplexing (SDM) has been proposed as the key technology to overcome the capacity crunch that standard single-mode optical fibers are facing in supporting the forecasted 10× growth. Spatial core switching is gaining interest because it makes it possible to deploy SDM networks that bypass the overloaded wavelength division multiplexing (WDM) networks by provisioning spatial media channels between WDM nodes. This paper presents, to the best of our knowledge, the first experimental demonstration of a transport-application-programming-interface-enabled software defined networking control architecture for partially disaggregated multi-domain and multi-layer (WDM over SDM) optical networks.


INTRODUCTION
Over the past decade, datacenter operators (e.g., Facebook, Google, and Amazon) have consolidated a recognized strategy for achieving efficiency and cost reduction within a datacenter by disaggregating the software from the hardware. It is based on the use of white boxes (i.e., commercial off-the-shelf hardware) with open application programming interfaces (APIs), decoupled from an operating system that can be customized. These models have been extended to support high-bandwidth interconnection between datacenters, including optical transmission and switching, as in Facebook's Telecom Infrastructure Project [1]. Transport network operators look at this disaggregated approach with great interest to replace the established aggregated model, based on non-interoperable single-vendor optical systems [2]. This closed model generates vendor islands, since the same vendor has to provide all optical systems within a domain. Disaggregation aims at providing a new degree of flexibility, allowing system migrations and upgrades without vendor lock-in.
In general, two optical disaggregation models are considered: partially or fully disaggregated. In the former, the transport system [reconfigurable optical add-drop multiplexers (ROADMs), optical amplifiers, etc.], known as an open line system (OLS), still remains a single-vendor system, but the transponders are provided by vendors different from the OLS vendor. Both the OLS and the transponders require open and standard APIs to export their programmability to an optical transport software defined networking (SDN) controller that orchestrates both the OLS and the transponders. The main goal is to provide unified, vendor-neutral information and data models using flexible data modeling languages [e.g., Yet Another Next Generation (YANG)] to describe the device capabilities, attributes, operations to be performed on a device or system, and notifications, as well as efficient transport protocols [e.g., network configuration protocol (NETCONF) or representational state transfer configuration protocol (RESTCONF)] that provide primitives to view and manipulate the data, with a suitable encoding as defined by the data model. The open and standard interfaces that are getting the most support from the industry include the Transport API (TAPI), OpenConfig, and OpenROADM.

Moreover, efficient routing, spectrum, and spatial assignment (RSSA) algorithms can be applied to weakly coupled or even strongly coupled multicore fibers (MCFs) to minimize the impact of the inter-core crosstalk. In this paper, we consider uncoupled/weakly coupled SDM networks, which can be considered not only for point-to-point transmission but also for switching. Several generic architectures have been proposed in the literature [9]: (i) independent switching, where each spatial core/mode and optical channel can be switched independently; (ii) spatial switching (i.e., spatial core/mode switching across all optical channels); (iii) joint switching (i.e., spectrum switching across all spatial cores/modes); and (iv) fractional joint switching (i.e., spectrum switching across groups of spatial modes/cores).
In this paper, we consider spatial (core) switching in order to deploy a spatial channel network (SCN) as proposed in [10]. An SCN makes it possible to bypass the overloaded WDM networks by provisioning spatial (core) paths between WDM nodes, following a similar strategy as done for Internet Protocol (IP) over WDM. It can be deployed with discrete components, such as MCF fan-in/out devices, mode mux/demux, and optical fiber switches, or with state-of-the-art technology such as core selective switches (CSSs) as proposed in [11]. The performance comparison among WDM, band division multiplexing (BDM), and SDM is out of the scope of this paper; interested readers are referred to [8,12].
In [13], we presented the first TAPI-enabled SDN control architecture for partially disaggregated multi-domain (OLS) and multi-layer (WDM over SDM) optical networks. The target scenario involved multiple WDM OLS domains and SDM OLS domains (i.e., an SCN), as well as multiple transponders. It described the multi-layer provisioning of an end-to-end connectivity service between WDM transponders across the WDM and SDM OLS domains. In this paper, we extend this work by experimentally evaluating this SDN control architecture and providing full details about all involved control entities and functionalities. The paper is organized as follows: in Section 2, we review the state of the art on SDN control architectures to conclude that this paper is the first work addressing partially disaggregated multi-domain (OLS) and multi-layer (WDM over SDM) optical networks; Section 3 describes the target software-defined partially disaggregated multi-domain and multi-layer optical network architecture; Section 4 presents the proposed SDN control system architecture and protocols, including the OLS controller and the optical SDN controller; and Section 5 presents the proposed SDN control workflow for the multi-layer management of the TAPI context and end-to-end TAPI connectivity services. Finally, we report the proof of concept carried out to validate the proposed SDN control system in a joint testbed between CTTC and KDDI Research in Section 6, and Section 7 concludes the paper.

RELATED WORK ON SDN CONTROL ARCHITECTURES AND INTERFACES
Several SDN control and orchestration architectures for disaggregated optical networks have been proposed since 2018. Reference [14] presents an SDN control architecture for fully disaggregated optical networks based on the Open Network Operating System (ONOS) SDN controller [15]. The implemented extensions on ONOS for TAPI, OpenROADM, and OpenConfig are presented. Similarly, in [16], the authors present an SDN control architecture for fully disaggregated optical networks based on container-based microservices. First experimental demonstrations of an ONOS platform for fully disaggregated optical networks with vendor-neutral NETCONF/YANG control of optical elements, including ROADMs, transponders, and amplifiers, are presented in [17,18]. Experimental demonstration of OpenConfig for sliceable bandwidth variable transponders (BVTs) using an ONOS SDN controller is also presented in [19]. Another SDN control architecture based on ONOS for fully disaggregated optical networks is presented in [20]. This paper summarizes the latest developments and results within the ONF ODTN project. The demonstration covers mainly the dynamic provisioning of data connectivity services and advanced automatic failure recovery, at both the control and data plane levels. Several other papers propose to extend the ONOS SDN controller with an in-operation planning tool (e.g., Net2Plan, GNPy) for quality-of-transmission estimation in fully disaggregated optical networks [21-23]. All the above papers target fully disaggregated optical networks for WDM. The first work dealing with fully disaggregated SDM optical networks was presented in [24]. In that work, we presented an SDN-enabled SDM network architecture with sliceable spatial-mode SDM transceivers. In particular, we addressed scaling the capacity of SDM super-channels up and down by exploiting the spatial modes.
Previously, we presented and experimentally validated in [25] the first SDN-enabled sliceable spatial-spectral transceiver architecture to support multiple and independent SDM, WDM, and hybrid super-channels controlled with YANG/NETCONF. The deployed transceiver enables the optical SDN controller to dynamically configure the optical sub-channels of the SDM super-channels, specifying the core id, mode id, frequency slot, constellation, forward error correction (FEC) and MIMO equalization parameters.
The first paper presenting an SDN control architecture for partially disaggregated optical networks involving OLS controllers is [26]. In particular, this paper presents a demonstration of an SDN controller with two OLS controllers for WDM domains from different vendors. The interface between the OLS controller and the SDN controller is based on TAPI. Another paper dealing with partially disaggregated optical networks is [27]. Unlike the previous paper, this work addresses only a single OLS controller for a WDM domain, but multiple transponders from different vendors, supporting multiple layers (e.g., digital signal rate, optical channel data unit, photonic media layer). All the above papers are focused on the integration of the OLS controller with the optical SDN controller, but no details on the OLS architecture and implementation are given. The first paper providing a detailed description of the OLS controller architecture for hybrid fixed/flexi-grid disaggregated networks with open interfaces is [28]. This paper also considers a single OLS WDM domain integrated with an ONOS SDN controller using TAPI. Thus, we can conclude that no previous work deals with partially disaggregated optical networks considering multiple OLS domains for WDM and SDM.

Figure 1 shows an example of the target partially disaggregated multi-domain (OLS) multi-layer (WDM over SDM) network. In general, it is composed of multi-vendor OLS domains and transceivers leveraging WDM and SDM technologies. WDM OLS domains are deployed using flexi-grid ROADMs (with an internal frequency granularity of 6.25 GHz) that provide photonic media channels (MCs) (i.e., a continuous optical spectrum along a spectrum path between end-points in the WDM layer).
SDM OLS domains are deployed among WDM OLS domains and make it possible to bypass the WDM OLS domains, providing spatial MCs between flexi-grid ROADMs (i.e., a continuous optical core along a spatial core path between end-points in the SDM layer). SDM MCs are intended to transport WDM MCs and optical signal tributaries. Each OLS domain is configured and managed by an OLS controller.

TARGET SOFTWARE-DEFINED PARTIALLY DISAGGREGATED MULTI-DOMAIN AND MULTI-LAYER OPTICAL NETWORK
Each WDM or SDM OLS domain may have connected one or more transponders provided by different vendors. WDM transponders are based on BVTs providing optical tributary signals with modulation format adaptability (for variable bitrate/distance data flows) transported in the photonic MCs. WDM transponders can support sliceability (known as sliceable BVT or S-BVT) to generate multiple optical tributary signals (known as WDM super-channels) that may be transported in the same or different photonic MCs. S-BVTs can be implemented by an array of BVT modules connected to programmable spectrum selective switches (SSSs), as presented by the authors in [29]. Similarly, the SDM transceivers can also generate multiple optical tributary signals but allocated in different spatial MCs. They are known as SDM super-channels.
The considered SDN control system architecture relies on an optical SDN controller that orchestrates the OLS controllers and the transceivers. The WDM and SDM transceivers provide common APIs (e.g., OpenConfig) to the optical SDN controller for the configuration and monitoring of the optical tributary signals in the WDM and SDM MCs. The interface between the OLS controllers and the optical SDN controller is based on TAPI. TAPI defines a common Unified Modeling Language/YANG data model for the control services of the transport network, with an API based on the OpenAPI specification (OAS) that provides HTTP end-points for requests using the RESTCONF protocol. The specific control services supported are context retrieval, connectivity management, notification subscription, path computation, and virtual network management. TAPI extensions for photonic MCs are already included in the official release, but extensions for spatial MCs are not.
In general, the optical SDN controller gets a TAPI context from the OLS controllers. A TAPI context is defined by a set of service interface points (SIPs), which enables the optical SDN controller to request from an OLS controller connectivity services between any pair of SIPs. Additionally, a TAPI context may also expose the internal topology. OLS controllers may expose the real topology or an abstract topology for scalability purposes. The TAPI topology is expressed in terms of nodes and links. Nodes aggregate node edge points (NEPs) acting as node ports. Links interconnect two nodes and terminate on NEPs, which are mapped to SIPs at the edge of the network. The topology information is updated for each connectivity service. In particular, the NEPs are updated with a list of connection end-points (CEPs). CEPs encapsulate information related to a connection at the ingress/egress NEPs associated with the source/destination SIPs, or at every node that the connection traverses in a topology. Finally, the optical SDN controller may generate a dedicated TAPI context for each customer [e.g., a network function virtualization (NFV) orchestrator]. Each customer TAPI context exposes an abstract topology derived from the optical SDN controller's internal context, mapping the resources assigned to that customer.
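For illustration, the context retrieval described above can be sketched as follows. The snippet builds the RESTCONF URL for a TAPI context and extracts the SIP identifiers from a decoded JSON body; the endpoint path follows the TAPI OAS convention, but the host/port and the sample payload are illustrative assumptions, not the implementation used in this work.

```python
# Illustrative sketch (not the paper's code): locating the TAPI context of
# an OLS controller over RESTCONF and listing its SIPs.
TAPI_CONTEXT_PATH = "/restconf/data/tapi-common:context"

def context_url(host: str, port: int) -> str:
    """RESTCONF URL of the TAPI context exposed by one OLS controller."""
    return f"http://{host}:{port}{TAPI_CONTEXT_PATH}"

def extract_sip_uuids(context: dict) -> list:
    """Return the UUIDs of all service interface points in a TAPI context."""
    return [sip["uuid"] for sip in context.get("service-interface-point", [])]

# A real controller would GET context_url(...) and feed the decoded JSON
# body to extract_sip_uuids(); the payload below is a minimal stand-in.
sample_context = {
    "service-interface-point": [{"uuid": "sip-W1.1"}, {"uuid": "sip-W1.2"}]
}
```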

SDN CONTROL SYSTEM ARCHITECTURE
A. OLS Controller Architecture
Each OLS domain is under the control of a dedicated controller instance (see [28]). As their respective northbound interface, a TAPI v2.1 photonic media model has been implemented, covering connectivity, topology, and path computation services. In particular, the (WDM) OLS domain is modeled as a (single layer) TAPI forwarding domain (FD) within the PHOTONIC_MEDIA layer and covering the MC protocol qualifier. Consequently, a TAPI client (in our case, the TAPI optical SDN controller spanning the overall network) is able to request, dynamically, the creation and deletion of connectivity services between end-points (TAPI SIPs), optionally specifying the optical spectrum to be provisioned. To support this, each SIP object within the OLS context is augmented with specific MC resource availability, referred to as the MC pool. The MC pool encompasses information about the available, supportable, and occupied frequency slots for that SIP (e.g., to convey tunability constraints or hardware limitations). Each frequency slot in the pool is characterized by its grid type and granularity as well as the lower and upper frequencies (in MHz); see Listing 1.
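Since Listing 1 is not reproduced here, the following sketch illustrates one plausible shape of a SIP augmented with an MC pool, together with a helper that checks whether a requested frequency range is available. The field names mirror the description above (grid type, granularity, lower/upper frequencies in MHz) but are illustrative rather than the exact TAPI leaf names.

```python
# Hypothetical SIP object with an MC pool (field names are illustrative).
sip = {
    "uuid": "sip-roadm1-port3",
    "layer-protocol-name": "PHOTONIC_MEDIA",
    "mc-pool": {
        "available-spectrum": [
            {"lower-frequency": 191_325_000,   # MHz
             "upper-frequency": 196_125_000,   # MHz
             "frequency-constraint": {
                 "grid-type": "FLEX",
                 "adjustment-granularity": "G_6_25GHZ"}},
        ],
        "occupied-spectrum": [],
    },
}

def slot_is_available(sip: dict, lower_mhz: int, upper_mhz: int) -> bool:
    """True if [lower_mhz, upper_mhz] fits inside one available slot."""
    return any(s["lower-frequency"] <= lower_mhz
               and upper_mhz <= s["upper-frequency"]
               for s in sip["mc-pool"]["available-spectrum"])
```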
When a (unidirectional) connectivity service is requested, the request operation includes the source and destination SIPs as well as (optional) constraints on the spectrum allocation, based on the aforementioned spectrum resource availability. The TAPI client may request only a certain capacity (e.g., 50 GHz) and/or directly provide the frequency range to use for the underlying connection; see Listing 2.
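A request of this kind could be assembled as in the sketch below, which acts as a simplified stand-in for Listing 2: either a required capacity or an explicit frequency range is attached as a spectrum constraint. The field names are illustrative, not the exact TAPI schema.

```python
def build_connectivity_request(src_sip, dst_sip, capacity_ghz=None,
                               lower_mhz=None, upper_mhz=None):
    """Build a simplified unidirectional TAPI-style connectivity request.

    Either a required capacity (in GHz) or an explicit frequency range
    (in MHz) may be supplied as an optional spectrum constraint.
    """
    req = {"end-point": [{"service-interface-point": {"uuid": src_sip}},
                         {"service-interface-point": {"uuid": dst_sip}}],
           "direction": "UNIDIRECTIONAL"}
    if capacity_ghz is not None:
        req["requested-capacity-ghz"] = capacity_ghz
    if lower_mhz is not None and upper_mhz is not None:
        req["frequency-constraint"] = {"lower-frequency": lower_mhz,
                                       "upper-frequency": upper_mhz}
    return req
```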
The OLS implements a routing and spectrum assignment (RSA) heuristic that takes into account hybrid fixed/flexi-grid networks with tunability constraints, based on a k-shortest-path approach (Yen's algorithm) that ensures spectrum continuity. The heuristic considers grid-spacing constraints for a potential path; for example, a request for 125 GHz with a transceiver restricted to central frequencies on the 100 GHz grid requires allocating 10 consecutive wavelength selective switch (12.5 GHz) slices, whose resulting composition may yield a nominal central frequency that is not usable on the 100 GHz grid.

The SDM domain follows the same principle. For this, an additional protocol layer qualifier (tapi-sdm:PHOTONIC_LAYER_QUALIFIER_SDM) within the PHOTONIC_MEDIA layer has been introduced. The model augments key TAPI entities and objects in support of SDM core switching. Consequently, the service-interface-point data nodes are augmented to report available, occupied, and supported core identifiers, describing the so-called core pools. This is reflected in the model by the tapi-sdm:sdm-core-service-interface-point-spec augmentation, with a core-pool field and additional data nodes. Likewise, the CEPs and connectivity service end-points (CSEPs) are augmented to reflect (and allow the specification of) a specific core id, by means of the tapi-sdm:sdm-core-connection-end-point-spec and tapi-sdm:sdm-core-connectivity-service-end-point-spec augmentations, respectively, following the TAPI model style.
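The RSA idea described above can be sketched as follows: candidate paths are tried in length order (a toy stand-in for Yen's k-shortest paths), and the first contiguous run of 12.5 GHz slices that is free on every link of a path is allocated, which enforces spectrum continuity. The data structures are assumptions for illustration, not the OLS implementation.

```python
# Toy RSA sketch: shortest candidate path with a first-fit contiguous slot
# that is free on all links (spectrum continuity constraint).

def simple_paths(graph, src, dst, path=None):
    """Enumerate all simple paths from src to dst (adjacency-list graph)."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, ()):
        if nxt not in path:
            yield from simple_paths(graph, nxt, dst, path)

def first_fit(free_per_link, path, n_slices, total_slices=8):
    """First contiguous range of n_slices free on all links, or None."""
    links = [frozenset(l) for l in zip(path, path[1:])]
    for start in range(total_slices - n_slices + 1):
        needed = set(range(start, start + n_slices))
        if all(needed <= free_per_link[link] for link in links):
            return (start, start + n_slices - 1)
    return None

def rsa(graph, free_per_link, src, dst, n_slices):
    """Route and assign spectrum: shortest paths first, ensuring continuity."""
    for path in sorted(simple_paths(graph, src, dst), key=len):
        slot = first_fit(free_per_link, path, n_slices)
        if slot is not None:
            return path, slot
    return None  # blocked: no path with a continuous free slot
```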

B. Optical SDN Controller Architecture
The optical SDN controller is based on the IETF Application-Based Network Operations (ABNO) architecture [30], developed at CTTC. It makes use of a modular architecture, where the different parts have specific functions and can communicate with each other. It comprises the following modules, shown in Fig. 2:
• Service orchestrator: responsible for the lifecycle management of TAPI services (connectivity services, context service, and path computation). It keeps the state of the TAPI connectivity services in the service call database (DB). It is also responsible for managing the whole workflows, calling the other modules to perform the actions under their responsibility, such as requesting the connection manager to provision or remove a connection, or serving the context after requesting it from the context manager.
• Connection manager: responsible for management of the connections to the WDM/SDM OLS controllers. It keeps the state of the connections requested from the WDM/SDM OLS controllers in the connection DB.
• Context manager: responsible for management of the whole internal TAPI context (i.e., initialization, update). It is based on the retrieval of the TAPI context from the SDM/WDM OLS controllers, plus additions for the inter-domain links, transceivers, etc. It offers support to all other control modules, which can request the whole topology and add/remove/modify elements (e.g., nodes, links). It can also generate abstracted views of the internal TAPI context for the path computation element (PCE), customers, etc. The TAPI context is stored in the context DB.
• Virtual network topology manager (VNTM): responsible for management of virtual links. The service orchestrator requests from the VNTM the creation of a virtual link when the computed path of a connectivity service contains OLS domains from different layers. In this case, the VNTM asks the connection manager to provision the connection within the SDM domain and creates a virtual link connecting the WDM nodes. The VNTM keeps this virtual link in the virtual DB.
• PCE: computes an end-to-end path between the end-point SIPs (i.e., source and destination) using the internal TAPI context. It can compute a full end-to-end path (including intra-domain segments) if the full TAPI context is provided, or perform just domain selection if an abstract topology is provided. It is developed as a TAPI-enabled component, which receives a TAPI path-request, requests the internal context from the context manager, calculates the path, and responds with a TAPI path-reply after finding a path within that internal context.
The connection and the context managers are extensible modules, in the sense that they support multiple plugins, enabling communication with different APIs. In this paper, as the southbound interfaces communicate only with the OLS, they will make use of only the TAPI plugin.

SDN CONTROL SYSTEM WORKFLOW
A. Composition of the Internal TAPI Context
The optical SDN controller invokes several TAPI operations to compose the internal multi-domain multi-layer (WDM over SDM) TAPI context, as shown in Fig. 3. First, the service orchestrator gets the information of the involved OLS controllers (e.g., IP address, port). This information is statically saved in a configuration file. Then, for each OLS controller, the service orchestrator requests of the context manager the initialization of the context (i.e., topology and SIPs). To this end, the context manager first registers the plugin for this specific OLS in the controller DB. Then, it requests the OLS TAPI context using the TAPI context service. After that, the context manager inserts the obtained topology and the SIPs in the context DB, and replies to the service orchestrator. It is worth highlighting that OLS controllers may provide abstract topologies instead of the real one with a fully detailed view of the resources.

Once the service orchestrator has initialized all OLS controllers, it requests the context manager to initialize the inter-domain topology. The context manager handles it as another plugin that is stored in the controller DB. Then, the context manager reads the information of the inter-domain links between OLS domains from a static file (e.g., pair of NEPs between two OLS domains), inserts them in the context DB, and replies to the service orchestrator.
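The composition steps above can be sketched as a simple merge: each OLS context contributes its nodes, links, and SIPs, and the statically configured inter-domain links (pairs of NEPs) are appended afterwards. The field names are illustrative assumptions.

```python
# Illustrative sketch of composing the internal multi-domain TAPI context
# from per-OLS contexts plus statically configured inter-domain links.
def compose_internal_context(ols_contexts, inter_domain_links):
    """Merge OLS contexts and append inter-domain links (pairs of NEPs)."""
    ctx = {"nodes": [], "links": [], "sips": []}
    for ols in ols_contexts:
        ctx["nodes"] += ols.get("nodes", [])
        ctx["links"] += ols.get("links", [])
        ctx["sips"] += ols.get("sips", [])
    for nep_a, nep_b in inter_domain_links:
        ctx["links"].append({"node-edge-points": [nep_a, nep_b],
                             "inter-domain": True})
    return ctx
```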
The next step is to initialize all the involved transceivers. First, the service orchestrator gets the static information of all transponders (e.g., IP address, port) from a static configuration file. Then, for each transponder, the service orchestrator requests of the transponder manager the initialization of the transponder. The transponder manager first registers the transponder plugin in the transponder DB. Then, it gets the transponder capabilities from a static configuration file, or dynamically requests them from the transponder (e.g., using OpenConfig). A transponder can be composed of one or several transceivers (e.g., BVTs). The transponder manager generates a new node for the transponder, and a new NEP for each transceiver or client port of the transponder. Each transceiver is modeled as an optical tributary signal (OTSi), which is already supported by TAPI; extensions for OTSi are already included in the official release. The protocol used is PHOTONIC_MEDIA, as for the optical MCs, but the qualifier is PHOTONIC_LAYER_QUALIFIER_OTSi. Examples of the supportable parameters listed in TAPI are central frequency, modulation format, and power. Then, the transponder manager asks the context manager to insert the new node with the NEPs in the internal TAPI context DB. Each client port is modeled as a digital signal rate (DSR), which is also supported by TAPI. It makes it possible to specify digital rates (e.g., 10, 40, and 100 Gb Ethernet).

Fig. 4. Internal TAPI context, connections, and connectivity services.
Once all transponders are initialized, the service orchestrator has a full view of the topology (i.e., nodes, links, and NEPs) and SIPs in its internal TAPI context DB, as shown in Fig. 4(a). Finally, the service orchestrator can generate TAPI contexts from its internal TAPI context and expose them to its customers (e.g., NFV orchestrators). The simplest approach is to expose customer TAPI contexts composed of only SIPs. These SIPs are generated by the service orchestrator based on the defined customers, following the standard TAPI DSR model, which makes it possible to request a capacity (in Gb/s) between two SIPs. The customer SIPs are associated with one or several client port NEPs in the transponders.

B. Provisioning of End-to-End WDM Connectivity Services
Next, we describe the workflow required to provision an end-to-end connectivity service between two WDM transceivers, involving two WDM OLS domains and one SDM OLS domain, as considered in the previous example. First, the optical SDN controller receives a TAPI DSR connectivity service request between two SIPs from a customer (i.e., C1 and C2), specifying a capacity (e.g., 100 Gb/s), as shown in Fig. 5. The first action that must be performed by the optical SDN controller is to provision a WDM connection (i.e., a photonic MC) between the two involved transponders. To this end, the service orchestrator requests the PCE to compute an end-to-end path in order to perform OLS domain selection. Before computing the path, the PCE requests an abstract topology from the context manager. The context manager generates an abstract topology for the PCE by representing each domain as a single node containing all the edge NEPs and mapped SIPs of the domain. This allows for an easier and faster path computation between the end-points. Once the PCE has computed the end-to-end path, the service orchestrator segments the path into OLS domain segments. In our example, the PCE computes an end-to-end path across OLS domains WDM-1, SDM, and WDM-2, since WDM-1 and WDM-2 are not directly connected. The three involved OLS domain segments are WDM-1 OLS (W1.1 and W1.2), WDM-2 OLS (W3.2 and W3.1), and SDM OLS (S1.1 and S1.3).
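Domain selection over such an abstract topology reduces to a shortest-path search on a graph with one node per OLS domain. A minimal breadth-first-search sketch (with assumed inputs, not the PCE's actual code) is:

```python
# Toy domain-selection sketch: BFS over the domain-level abstract topology,
# where each OLS domain is collapsed to a single node.
from collections import deque

def domain_path(domain_links, src_domain, dst_domain):
    """Return the shortest sequence of domains from src to dst, or None."""
    adj = {}
    for a, b in domain_links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    queue, seen = deque([[src_domain]]), {src_domain}
    while queue:
        path = queue.popleft()
        if path[-1] == dst_domain:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```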
Since the computed path is multi-layer, the service orchestrator requests of the VNTM the provisioning of the SDM connection and the generation of the virtual link between the two involved WDM domains. The VNTM first requests of the connection manager the provisioning of an SDM connectivity service (i.e., a spatial MC) between SIPs S1.2 and S1.3 to the SDM OLS controller using the TAPI connectivity service. The SDM OLS controller applies a routing and spatial assignment algorithm to select the links and allocate available cores. We consider uncoupled cores, and therefore different cores can be assigned on different links, since we are not constrained by inter-core crosstalk. Once the spatial MC is computed, the SDM OLS controller configures the spatial core switching nodes and updates the TAPI context, including the CEPs associated with the new SDM connection. CEPs provide information related to the connection; in this case, the selected cores. Once the SDM connectivity service is provisioned, the VNTM stores the information of the SDM connection and the connectivity service in the service DB. Then, it creates a virtual link associated with the SDM connection between the ingress/egress WDM NEPs, and inserts it in the internal context DB, as shown in Fig. 4(b). Finally, the VNTM notifies the service orchestrator about the creation of the virtual link.
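Because the cores are uncoupled, the spatial assignment can pick a free core independently on each link of the spatial path, with no inter-core-crosstalk constraint. A first-available policy can be sketched as follows (data structures are illustrative assumptions):

```python
# Toy spatial assignment for uncoupled cores: pick the lowest free core
# on every link of the spatial path, independently per link.
def assign_cores(path, free_cores_per_link):
    """Return a {link: core_id} assignment, or None if any link is blocked."""
    assignment = {}
    for link in zip(path, path[1:]):
        free = free_cores_per_link[frozenset(link)]
        if not free:
            return None  # blocking: no core available on this link
        assignment[link] = min(free)  # first-available core policy
    return assignment
```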
At this point, there is connectivity between the two WDM domains, and the service orchestrator requests the transponder manager to compute the spectrum required (in GHz) to serve the requested capacity between the pair of transceivers (i.e., TP1 and TP2), as shown in Fig. 6. Once the spectrum is computed, the service orchestrator requests of the PCE an available frequency slot across the involved WDM OLS domains. The main challenge for an end-to-end WDM connection across multiple OLS domains is to satisfy the end-to-end spectrum continuity constraint. If the PCE has a fully detailed view of the topology, then it can compute a continuous spectrum across the involved WDM OLS domains using the internal topology. However, if the OLS controllers provide abstract topologies, then the PCE needs to request of each OLS controller the computation of the intra-domain path using the TAPI path computation service. Therefore, the actual intra-domain path computation is delegated to the respective OLS controller. The WDM OLS domains apply an RSA algorithm to select the links and allocate the specified frequency slot. We have also extended the TAPI path computation service in order to provide the available spectrum along the computed path. It is used to compute a common frequency slot that meets the spectrum needs and is available across all WDM OLS domains. Then, the service orchestrator requests of the connection manager the provisioning of the photonic MCs on each WDM OLS domain through the TAPI connectivity service (i.e., between SIPs W1.1-W1.2 and W3.2-W3.1), specifying the path and the common frequency slot previously computed in order to guarantee the continuity constraint across domains. After the configuration of the ROADMs, the WDM OLS controllers update the TAPI context, including the new CEPs associated with the WDM connections.
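The common-slot computation described above amounts to intersecting the per-domain available spectrum and first-fitting a slot of the required width. A toy sketch over 12.5 GHz slice indices (an illustrative representation, not the PCE's internal one):

```python
# Toy sketch: find the first frequency slot of the required width that is
# available in every WDM OLS domain (end-to-end spectrum continuity).
def common_frequency_slot(available_per_domain, width_slices):
    """Intersect per-domain free slice sets and first-fit a common slot."""
    common = set.intersection(*map(set, available_per_domain))
    for start in sorted(common):
        if all(start + i in common for i in range(width_slices)):
            return (start, start + width_slices - 1)
    return None  # no common continuous slot across all domains
```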
Once completed, the connection manager notifies the service orchestrator, which stores the information of the lower WDM connections and connectivity service calls from the involved WDM OLS controllers, as well as the end-to-end WDM connection, in the service DB. Following a similar approach to the initial retrieval of the internal TAPI context, the context manager requests the OLS TAPI context (i.e., topology and SIPs) from all OLS controllers and updates the internal TAPI context.

At this point, there is an end-to-end WDM connection provisioned between the pair of transponders, and the service orchestrator can configure the associated transceivers through the transponder manager. First, the transponder manager allocates the most appropriate modulation and power for the end-to-end WDM connection, and computes the central frequency from the allocated frequency slot. Then, the transponder manager sends the configuration messages to both transponders using an open interface (e.g., OpenConfig), specifying the parameters to configure the transceivers and the client ports. Once the transponders are configured, the transponder manager augments the NEPs associated with the transceivers and client ports with the CEPs associated with the OTSi and DSR connections. Both OTSi and DSR connections are stored in the service DB. Finally, the service orchestrator also stores the DSR connectivity service in the service DB. At this point, the service orchestrator can reply to the TAPI connectivity service from the customer.
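For instance, the central frequency follows directly from the allocated frequency slot, and the resulting transceiver configuration resembles an OpenConfig-style optical-channel payload. The leaf names below follow openconfig-terminal-device conventions, but the exact payload shape and the operational-mode value (a vendor-defined profile identifier) are illustrative assumptions.

```python
# Sketch: derive the central frequency from the allocated frequency slot
# and build a hedged OpenConfig-style optical-channel configuration.
def central_frequency_mhz(lower_mhz, upper_mhz):
    """Central frequency (MHz) of the allocated frequency slot."""
    return (lower_mhz + upper_mhz) // 2

def transceiver_config(lower_mhz, upper_mhz, target_power_dbm, op_mode):
    """Illustrative OpenConfig-like payload for one optical channel."""
    return {"optical-channel": {"config": {
        "frequency": central_frequency_mhz(lower_mhz, upper_mhz),  # MHz
        "target-output-power": target_power_dbm,                   # dBm
        "operational-mode": op_mode}}}  # vendor-defined profile id
```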
Several RSSA algorithms have been proposed for optimal/efficient resource allocation with static (i.e., planning) and dynamic traffic, using integer linear programming (ILP) modeling and greedy heuristic or metaheuristic approaches. The majority of the algorithms available in the literature focus on uncoupled or weakly coupled SDM networks. For uncoupled networks, the main objective is to minimize the usage of resources or the equipment cost without dealing with impairments. For weakly coupled networks, the objective is to avoid or minimize the crosstalk by decreasing the spectrum overlap in neighboring cores, or to estimate the crosstalk during resource allocation. For more details on the available papers, the reader is referred to [31].

PROOF-OF-CONCEPT AND EXPERIMENTAL VALIDATION
A. Experimental Scenario
The experimental setup consisted of three network domains: two WDM network domains and an SDM network domain, as shown in Fig. 7. The control plane (i.e., the optical SDN controller and the WDM/SDM OLS controllers) was deployed at CTTC in Barcelona (Spain), and the data plane (WDM/SDM hardware and SDN agents) at KDDI Research in Saitama (Japan). Both premises were connected using OpenVPN tunnels across the Internet.
Each WDM network domain comprised ROADMs, a 40 km SSMF transmission line, an SDN controller (OLS controller), and an SDN agent. On the transmitter side, four transponders (ADVA FSP3000), operating from 193.2 THz to 193.5 THz on the 100 GHz ITU grid, were connected to the WDM-1 domain. Each transponder was equipped with a C-band tunable 100 Gb/s optical interface, and the modulation format of the optical signal was dual polarization quadrature phase shift keying (DP-QPSK). The ROADMs in the WDM domains were based on wavelength selective switches with erbium-doped fiber amplifiers from Lumentum; each had a single line input/output port and 20 add/drop ports, and supported flexible grid operation in the C-band. On the receiver side, four transponders were connected to the WDM-2 domain. All transponders were controlled by the optical SDN controller via OpenConfig in this experiment.
The SDM network domain comprised two optical fiber switches from Polatis, an 11 km SDM transmission line (i.e., a 19-core fiber [32]) with fan-in and fan-out devices, an SDN controller (OLS controller), and an SDN agent. In this experiment, the optical fiber switches were placed at the input of the fan-in device and the output of the fan-out device, so that any port of a ROADM in a WDM domain could be connected to any core of the SDM transmission line. The optical fiber switches were based on an 8 × 8 non-blocking all-optical matrix switch. Three of the eight ports of the first optical fiber switch were connected to the fan-in device in order to input optical signals into the SDM transmission line. After the SDM fiber transmission, three ports of the fan-out device were connected to the second optical fiber switch in this experiment.
The control plane is composed of an optical SDN controller, two WDM OLS controllers, and one SDM OLS controller. The optical SDN controller orchestrates the WDM/SDM OLS controllers and the transponders. It keeps the state of the whole TAPI context and enables the creation and removal of end-to-end connectivity services between WDM transponders across the multi-domain (OLS) and multi-layer (SDM/WDM) data plane. The interface between the optical SDN controller and the WDM/SDM OLS controllers is based on TAPI, as previously described. The WDM/SDM OLS controllers can send remote procedure calls (RPCs) to the SDN agents deployed in the WDM/SDM hardware for the provisioning or removal of connections. The interface between the WDM/SDM OLS controllers and the SDN agents is proprietary. Finally, the optical SDN controller can directly configure and monitor the WDM transponders using NETCONF.
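As an illustration of the TAPI interface between the optical SDN controller and the OLS controllers, the following sketch builds a connectivity-service request payload. The field names follow TAPI 2.x conventions but should be treated as illustrative (the exact structure depends on the TAPI version the controllers implement), and the RESTCONF URL in the comment is an assumption:

```python
import uuid

def build_connectivity_service(src_sip, dst_sip, layer="PHOTONIC_MEDIA"):
    """Build a TAPI 2.x-style connectivity-service payload.

    src_sip/dst_sip are service-interface-point (SIP) UUIDs exposed in
    the TAPI context of the target OLS domain.
    """
    return {
        "tapi-connectivity:connectivity-service": [{
            "uuid": str(uuid.uuid4()),
            "end-point": [
                {"local-id": "src",
                 "layer-protocol-name": layer,
                 "service-interface-point": {
                     "service-interface-point-uuid": src_sip}},
                {"local-id": "dst",
                 "layer-protocol-name": layer,
                 "service-interface-point": {
                     "service-interface-point-uuid": dst_sip}},
            ],
        }]
    }

# The optical SDN controller would POST this to an OLS controller, e.g.:
# requests.post("https://<ols>/restconf/data/tapi-common:context/"
#               "tapi-connectivity:connectivity-context",
#               json=build_connectivity_service(src_uuid, dst_uuid))
```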
To validate the feasibility of the previously presented SDN architecture over this experimental scenario, we will demonstrate an end-to-end connectivity service between two WDM transponders along the two WDM domains connected over an SDM domain. Afterwards, both connections will be removed to return to the initial state. At initialization, the optical SDN controller first retrieves the TAPI context from all OLS domains. This can be seen in Fig. 8(a), where the optical SDN controller requests and receives the TAPI context coming from the two WDM OLSs and the SDM OLS. Figures 8(b)-8(d) show the exchange of messages at the OLS controller side. After that, the SDN controller composes the internal context, as shown in Fig. 9. This internal context also includes the inter-domain links and the nodes and SIPs created from the WDM transponders, exposing their supported capabilities. In this experiment, the supported capabilities of the WDM transponders are stored in a static file, and therefore they are not requested from the WDM transponders.

Fig. 7. Experimental SDN-controlled partially disaggregated WDM over SDM scenario.
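The context-composition step at initialization can be sketched as follows. The dictionaries are simplified stand-ins for the full TAPI data model, and the structure of the static transponder-capabilities data is an assumption:

```python
def compose_internal_context(ols_contexts, inter_domain_links, tp_caps):
    """Compose the optical SDN controller's internal view.

    ols_contexts: simplified per-domain context dicts, as retrieved from
    each OLS controller at initialization; inter_domain_links:
    statically configured links stitching the domains together;
    tp_caps: transponder nodes and SIPs, loaded beforehand from the
    static capabilities file used in the experiment.
    """
    ctx = {"nodes": [], "links": [], "sips": []}
    for c in ols_contexts:
        for key in ctx:                      # merge each OLS context
            ctx[key] += c.get(key, [])
    ctx["links"] += inter_domain_links       # stitch domains together
    ctx["nodes"] += tp_caps.get("nodes", []) # add transponder nodes
    ctx["sips"] += tp_caps.get("sips", [])   # and their SIPs
    return ctx
```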
In the first test, we provision the SDM connection between the two WDM domains and create the corresponding virtual link in the TAPI context. First, we generate a connectivity service in the SDM domain to enable connectivity between the WDM domains. Then, the provisioned SDM connection is used to create a virtual link between the NEPs that are connected to the WDM domains, creating a direct route between them. Figure 10(a) shows how the SDN controller receives a request to create a connectivity service between SIPs WDM2.1Rx and WDM3.1Tx (Fig. 9). It processes the request, computes a path between the SIPs using the PCE, and demands that the SDM OLS create a connectivity service in its domain between SIPs SDM1.1Tx and SDM2.1Rx. After that, the SDN controller requests the context from the SDM OLS to reflect the changes and update the internal TAPI context, and in approximately 7 s, the connectivity service is served. Figure 10(b) shows the request to the SDM agent from the SDM OLS controller perspective. The SDM OLS receives the request from the optical SDN controller and demands that the SDM agent create a spatial connection using core #1 of the MCF. Then, the SDM OLS controller receives the response from the SDM agent and, in turn, responds to the optical SDN controller. Finally, a virtual link connecting NEP2.1 and NEP3.1 is created, and the internal TAPI context is updated (Fig. 9).

In the second test, we request a connectivity service of 100 Gb/s between two WDM transponders. Figure 11(a) shows the exchange of messages from the optical SDN controller. First, it receives a request to create a connectivity service between SIPs TP1.Tx and TP1.Rx. The optical SDN controller asks the transponder manager for the optical bandwidth needed for the required capacity. With that bandwidth, the optical SDN controller, using the PCE, computes a path and a common frequency slot along the WDM domains.
Then, it sends the WDM connectivity service requests between SIPs WDM1.1Tx and WDM2.1Rx to the WDM-1 OLS controller and between SIPs WDM3.1Tx and WDM4.1Rx to the WDM-2 OLS controller. Once the WDM connections are provisioned, the optical SDN controller requests the new OLS TAPI contexts. Figures 11(d) and 11(e) show the requests arriving from the optical SDN controller at the WDM OLS controllers, which in turn ask the WDM agents to provision the WDM connections. After approximately 40 s, the WDM connections are provisioned in the WDM domains, and the optical SDN controller requests from the WDM OLS controllers the TAPI context reflecting the changes caused by the provisioned WDM connections. Next, the SDN controller creates a NETCONF-over-SSH session, in which the WDM transponders associated with SIPs WDM1.1Tx and WDM4.1Rx are turned on with the required parameters, such as the central frequency. After 18 s, the NETCONF-over-SSH session is over, and the transponders are operational. Figure 11(c) shows that, after 59 s, the response to the connectivity service request is sent to the TAPI customer.

Figure 12 provides a breakdown of the time required for provisioning connectivity services between WDM transponders. We can identify four main processes that contribute to the overall provisioning time: (1) the initial process, composed of the connectivity service call, path computation, and segmentation into OLS domains; (2) the WDM OLS controller requests; (3) the TAPI context update; and (4) the WDM transponder configuration. We took different measurements and calculated the average time elapsed at each step. The average time for the whole workflow is 59.32 s, divided into 1.2 s for the service call, path computation, and segmentation into domains, 40.39 s for the connection requests to the WDM OLS controllers, 0.19 s for the internal context update, and 17.53 s for the WDM transponder configuration.
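The measured breakdown above can be reproduced in a few lines; the step labels are ours, and the small difference between the 59.32 s workflow average and the sum of the per-step averages is due to rounding:

```python
# Average per-step provisioning times measured in the experiment (seconds).
steps = {
    "service call + path computation + segmentation": 1.20,
    "WDM OLS connection requests": 40.39,
    "internal TAPI context update": 0.19,
    "WDM transponder configuration": 17.53,
}
total = sum(steps.values())  # ~59.31 s (the reported workflow average is 59.32 s)
for name, seconds in steps.items():
    print(f"{name}: {seconds:.2f} s ({100 * seconds / total:.1f}%)")
# The two hardware-facing steps (OLS connection requests and transponder
# configuration) account for roughly 98% of the total.
```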
Almost all of the time between the service call request and the response to the customer is consumed by hardware configuration. Figure 13 shows the measured optical spectra for different tests of provisioning connectivity services between WDM transponders, using the four available WDM transponders and three spatial channels.
In the third test, we remove the 100 Gb/s connectivity service between the two WDM transponders provisioned in the second test. Figure 14 shows a Wireshark capture with the exchange of messages for the removal of the connectivity service. To trigger the removal, a delete connectivity service request is sent to the optical SDN controller, stating the identifier of the connectivity service to remove. Figure 14(a) shows the arrival of the request at the optical SDN controller, the subsequent requests to both WDM OLS controllers to remove the connections in their domains, and the affirmative responses from them after 5 s. Figures 14(d) and 14(e) show this same process from the WDM OLS controllers' perspective. They receive the request from the optical SDN controller for the removal of the WDM connectivity service and communicate with the WDM agents to release the WDM connection. In the next step, the WDM transponder is turned off, as shown in Fig. 14(b). The optical SDN controller creates a NETCONF-over-SSH connection to send a NETCONF RPC asking to turn off the transponder, which is done after 20 s. Finally, the optical SDN controller requests from the WDM OLS controllers the new TAPI context with the changes after removing the WDM connections, updates its internal TAPI context, and replies to the connectivity service removal request from the TAPI customer, as shown in Fig. 14(c). The overall removal of the connectivity service is performed in around 26 s.
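The transponder shutdown step can be sketched as below. The YANG structure and namespace are illustrative OpenConfig-style assumptions (the exact model depends on the transponder), and the commented ncclient calls show how the NETCONF-over-SSH session would push the change:

```python
def admin_state_config(channel_index, state):
    """Build an edit-config body setting the admin-state of a logical
    channel. The namespace and element names mimic the OpenConfig
    terminal-device model but must be checked against the transponder's
    actual supported YANG modules.
    """
    assert state in ("ENABLED", "DISABLED")
    return (
        "<config>"
        '<terminal-device xmlns="http://openconfig.net/yang/terminal-device">'
        "<logical-channels><channel>"
        f"<index>{channel_index}</index>"
        f"<config><index>{channel_index}</index>"
        f"<admin-state>{state}</admin-state></config>"
        "</channel></logical-channels>"
        "</terminal-device>"
        "</config>"
    )

# With ncclient, the optical SDN controller would push this over the
# NETCONF-over-SSH session (connection details omitted here):
# from ncclient import manager
# with manager.connect(host=tp_ip, port=830, username=user,
#                      password=pwd, hostkey_verify=False) as m:
#     m.edit_config(target="running",
#                   config=admin_state_config(1, "DISABLED"))
```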
In the fourth test, we return to the initial state: the spatial connection in the SDM domain and the virtual link are removed, as shown in Fig. 15. The optical SDN controller receives the request for the removal of the connectivity service, stating its identifier. Since the service is composed of only one connection, it sends a request for the removal of the SDM connectivity service to the SDM OLS controller, as shown in Fig. 15(a). The SDM OLS controller receives the request from the optical SDN controller and communicates with the SDM agent to remove the spatial connection at the hardware level, as shown in Fig. 15. The optical SDN controller then requests from the SDM OLS controller the TAPI context reflecting the changes in the SDM domain and updates its internal TAPI context. Finally, the optical SDN controller replies to the connectivity service removal request 4 s after it was requested.
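The removal request itself reduces to a DELETE on the connectivity service's resource. A sketch, assuming a TAPI-style RESTCONF URL layout (the exact path depends on the TAPI version exposed by the OLS controller):

```python
def connectivity_service_url(base_url, service_uuid):
    """Build the RESTCONF URL for a TAPI connectivity service.

    The path follows TAPI/RESTCONF conventions and is an assumption
    about the controller's northbound API, keyed by the service UUID
    stated in the removal request.
    """
    return (f"{base_url}/restconf/data/tapi-common:context/"
            f"tapi-connectivity:connectivity-context/"
            f"connectivity-service={service_uuid}")

# The optical SDN controller would then issue, e.g.:
# requests.delete(connectivity_service_url("https://<sdm-ols>", svc_uuid))
```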

CONCLUSION
We have presented and demonstrated the first SDN control architecture for partially disaggregated optical networks composed of multiple WDM and SDM OLS domains and transponders. Experimental validation has been performed in a real testbed combining both control-plane and hardware equipment. We have demonstrated that the proposed SDN control architecture enables dynamic provisioning of spatial channels between WDM domains, either when there is no connectivity between them or to offload traffic from the WDM domains. We have also demonstrated the feasibility of this network approach to keep growing in capacity. This will allow more flexibility in deploying and upgrading optical systems, as vendor lock-in will be avoided, also fostering competition between manufacturers. In this paper, we have considered only the spatial core dimension for SDM. Next steps will consider the spatial mode dimension in the SDM domains, using MMFs or FM-MCFs to provide spatial channels between WDM domains.