Comprehensive model for technoeconomic studies of next-generation central offices for metro networks

This paper introduces a novel and simplified cost model for designing and evaluating a Central Office Rearchitected as a Datacenter (CORD). The model covers the equipment needed to realize the optical, packet-switching, and data-center parts of the node, with a special focus not only on relative costs but also on power consumption figures. The cost model is then applied to the design and comparison of a metropolitan area network (MAN), including both aggregation and metro-core nodes, following several MAN node architectures based on a CORD-like leaf-and-spine fabric. In particular, equipment disaggregation at the Central Offices, on both the packet-switching and optical components, can provide important cost savings to telco operators. On the other hand, incorporating computing/storage capabilities in the MAN for the realization of multiaccess edge computing (MEC) has a significant impact on the total network cost and, especially, on power consumption.


INTRODUCTION
Metropolitan area networks (MANs) are at a turning point: they are subject to incessant traffic growth with annual increase rates between 25% and 40%, to the technological changes driven by fiber access networks and 5G systems, and to factors such as hardware/software (HW/SW) disaggregation, cloudification, and automation. The need to rethink the network architecture, and most importantly the new internal architecture of the Central Office (CO) node, which now includes new elements (e.g., data centers moved to the edge), requires assessing not only its technical feasibility but also its economic sustainability. (Table 1 lists the acronyms used in this paper.) The next-generation MAN architecture analyzed in this work features distributed data-center equipment in the different nodes, with compute and storage resources used to host network and service application virtual functionality, focused on the support of emerging 5G services. This equipment considerably increases cost and power consumption compared to traditional MAN architectures. However, it is essential if network service providers want to support many future services, such as Industry 4.0 and other verticals, so as not to incur a loss of opportunity. Another benefit of this architecture is the reduced requirement for transmission capacity in the metro segment, which also leads to savings; these capacity benefits are not quantified in this study.
EU H2020 project METRO-HAUL [1] aims at designing next-generation MANs featuring higher performance and better cost efficiency than existing MAN architectures and, more importantly, embedding edge computing capabilities and support for 5G services, namely, URLLC, mMTC, and eMBB, along with emerging 5G-related verticals (Industry 4.0, etc.) [2]. To this end, the project, supported by the experience of its telco operator partners (Telefonica, Telecom Italia, and British Telecom), has defined a new MAN architecture with a number of key innovations in the architecture of the MAN nodes, which include the following:
• incorporating computing and storage capabilities realized as micro data centers (µDCs), following the CORD reference design [3];
• disaggregated systems leveraging SDN and open-source software like ONOS [4], seeking faster agility and flexibility in service provisioning while reducing vendor lock-in, leading to important cost savings [5][6][7][8];
• both filtered and filterless optical WDM technology enabling S-BVTs, with reduced line-system deployment cost while retaining bandwidth-allocation flexibility [9,10];
• an intelligent SDN-based control plane that allows the orchestration of both the computing and communication infrastructure resources, leveraging network telemetry and performance monitoring along with ML/AI algorithms for fast network reconfiguration and automation [11,12].
These novel technologies, which will transform MAN architectures, need to be evaluated not only from a performance point of view but also from an economic perspective. Indeed, technoeconomic studies have become a major goal in many research projects as the proposed hardware and network architectures need to be compared to existing conventional designs to justify any possible system and architecture shift.
Essentially, the target of meeting key performance indicators (KPIs), i.e., quantitative objectives measured to successfully evaluate a given research solution/architecture, has triggered the need to demonstrate that novel research proposals are more cost-effective (better performance at reduced cost) than existing ones. In other words, newly adopted technologies must be not only higher performing but also cost and energy efficient.
Several research projects and studies have in the past provided cost models for the metro network segment. Some of them provide cost values for some, but not all, of the building blocks. Thus, there is a need for a complete, yet simplified, cost model for next-generation MAN architectures with updated cost and power consumption figures for existing and emerging relevant technologies.
Furthermore, the proliferation of edge computing platforms, often colocated with traditional aggregation and transport sites, has made joint dimensioning of both the compute and communications infrastructure essential to fully optimize deployment and operation costs [13]. Given that cost models for optical transport and data-center/packet components are typically siloed, optimization frameworks find it difficult to accurately assess the tradeoffs between components in different layers, being forced to either use parametric approaches or prioritize one component over the other (e.g., minimizing the nodes with DC capabilities). As such, an integrated cost model for MAN node/network design can be highly valuable for the practical implementation of advanced optimization algorithms.
Having this target in mind, the first part of this paper focuses on providing an up-to-date cost model with a detailed description of the elements required to build next-generation MAN nodes. Such elements are categorized into optical components, packet-switching hardware, and data-center equipment, with enough variety to build a large number of MAN node architectures. The second part uses the cost model to compare different MAN node architectures following the guidelines of Trellis [14], a recent project under the CORD umbrella, defined as a "production-ready multipurpose leaf-spine fabric designed for NFV." Trellis describes the physical and functional configuration of a multisite switching infrastructure that combines transport, switching fabrics, and computing/storage servers using white-boxes organized in a leaf-and-spine topology to optimize East-West traffic in MAN scenarios. Specialized hardware would be available only for access interfaces to (residential, mobile, business) customers and for optical transport components. Using the cost model for optical, packet, and data-center elements, we show that incorporating computing and storage capabilities into the MAN nodes has a moderate impact in terms of CAPEX, whereas the power consumption of the COs increases massively to feed the servers in the data center.
Thus, the remainder of this work is organized as follows. Section 2 reviews previous technoeconomic studies conducted on optical WDM and data-center architectures. Section 3 overviews the cost model along with the three main building blocks to realize COs with data-center capabilities, namely, optical components, packet switching, and computing/storage elements. Section 4 shows a technoeconomic use-case where the cost model is applied to the design of next-generation metro networks. Finally, Section 5 completes this work with its main findings and conclusions.

PREVIOUS WORK
Technoeconomic studies have traditionally been a must for telecom operators before adopting any new technology. Essentially, the operator needs to evaluate not only the performance metrics of emerging technologies (throughput, latency, etc.) but also adoption costs, which include both CAPEX- and OPEX-related metrics like power consumption, integration, and operation and management. As a rule of thumb, a new solution is considered for potential deployment when it shows a significant performance improvement with at least 30% cost savings with respect to existing solutions [15].
In the past, several research projects have provided cost models for the metro network segment addressed here, joining the efforts and expertise of a large number of researchers and technology specialists from operators, system providers, and academia. In particular, the first projects to address cost modeling were STRONGEST [16] and IDEALIST [17], which focused on the evaluation of elastic/gridless optical technologies and their adoption in long-haul wide-area networks, and further provided a first set of tables with technoeconomic data [18]. A summary of the STRONGEST cost model was published in [19] and attracted much attention from the research community, which used this article as a starting point in subsequent cost model analyses [20][21][22]; this is the case for projects like FOX-C [23] and ACINO. Some newer 5G-related EU projects provide cost models for other network segments, namely, the radio access segment, like 5G-NORMA [27], iCIRRUS [28], and METIS [29], or the optical access part of the network, namely, PON and its flavors, like COMBO [30], thus having limited applicability in the technoeconomic analysis of MAN architectures [31][32][33][34].
There is also a broad range of earlier technoeconomic studies published in journals and conferences; however, most of them rely on the original equipment data provided by the seminal paper [19] from the STRONGEST project. Remarkable application examples include the joint optimization of multilayer network design with and without protection [35][36][37][38], the feasibility and cost-saving opportunities of elastic/gridless optical networks [39][40][41], and cloud/grid design over optical WDM networks [42,43]. Other interesting technoeconomic studies on optical access technologies and C-RAN may also be found in [20,[44][45][46][47][48][49].
On the other hand, cost models for packet-switching devices and computing/storage servers are relatively easy to find on public websites, whereas figures for specialized optical equipment are much harder to obtain. For instance, the authors of [50] study the feasibility of incorporating optical circuit switching in data-center networks, and [51,52] give an overview of the cost of deploying different data-center architectures, namely, BCube, FatTree, de Bruijn, and Leaf-and-Spine.
In the next section, a simplified yet updated cost model for the design of next-generation MAN nodes featuring MEC is presented; it includes optical transmission systems, packet-switching devices, and data-center hardware. The model is then applied to an example use case where MAN nodes with both aggregated and disaggregated packet switches are compared to a solution with legacy routers.

COST MODEL FOR MAN ARCHITECTURES
This section overviews the cost model focusing on the three main building blocks for constructing the next-generation MAN nodes, namely, optical components accessing the metro networks, packet-switching equipment both interfacing the optical infrastructure and composing the data-center architecture (leaf-and-spine based), and computing/storage nodes enabling caching, virtualization, and processing. The model has been developed drawing on the references above and on publicly available datasets, updated with the expertise and internal datasets of the major telecommunication operators and network system providers that participate in the METRO-HAUL project.
The number and heterogeneity of equipment are very large, so an effort was made to simplify the model by reducing the number of hardware options. As is typical in technoeconomic studies, the cost values of hardware components have been normalized to cost units (CUs), where 1 CU equals the cost of a 10G transponder (see Table 2). Figure 1 gives a simplified overview of next-generation MAN nodes inspired by the Trellis architecture, showing their main representative blocks and connectivity. The equipment and connections shown in the figure do not reflect a real implementation of a MAN node; they are only a high-level visualization of the model.
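As a minimal sketch of this normalization step, any raw equipment price can be converted to CUs by dividing by the price of the reference 10G transponder (the reference price below is a hypothetical placeholder, not a value from the model):

```python
# Normalize raw equipment costs to cost units (CUs), where
# 1 CU = the cost of a 10G transponder. The reference price is a
# hypothetical placeholder for illustration only.
REF_10G_TRANSPONDER_PRICE = 1000.0  # arbitrary currency units

def to_cost_units(price: float) -> float:
    """Convert a raw equipment price into CUs."""
    return price / REF_10G_TRANSPONDER_PRICE

# A device priced at 15x the reference transponder costs 15 CU.
print(to_cost_units(15_000.0))  # -> 15.0
```

All cost figures in Tables 2-4 can then be compared on this common, vendor-neutral scale.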
The optical layer is represented in the figure by a Mux/Demux (when a simple optical line terminal is used to interconnect the node with the others) or, alternatively, by a ROADM, which includes both WSSs and add/drop (A/D) block units. Depending on the node architecture, A/D blocks can be realized with a combination of WSSs, splitter/combiner modules, and amplifiers. Transponders and flexible transponders connected to the packet-switching equipment complete the optical systems, while other components like DCMs and wavelength blockers, which also form part of the model, are not shown for the sake of brevity but can be used to implement other node architectures (e.g., fixed OADMs or filterless optical nodes) and compensated links.
The packet-switching equipment shown in Fig. 1 is a single piece of equipment, a router or a carrier-grade switch, but in general the structure within a metro node may include more than one switch, possibly connected in a hierarchical manner (for instance, leaf-and-spine). The model adopts a compact equipment block characterized by a maximum capacity that can host gray pluggable transceivers, which are added to the switch until the maximum capacity is reached. A modular structure (i.e., equipment made of a shelf hosting line cards) has been avoided for two reasons. First, different vendors have different equipment modules, and as a result it is difficult to define a unifying model. Second, the trend for carrier-class switches, at least those with up to tens of Tb/s of capacity suitable for metro nodes, seems to be in the direction of a compact structure, e.g., a single block of a given maximum capacity occupying a number of slots in a rack. This makes L2/L3 carrier-grade switches similar to L2 DC-like switches, but with higher capacity and additional features.
The data-center part of the model has three main block components: compute server nodes, storage server units, and L2 switches. Compute and storage servers are connected to the switch (usually within the same rack, i.e., top-of-rack switch) by means of cables and pluggable transceivers.
In the next sections, different components for these three parts of a MAN node (optical, packet switch, and data center) are presented.
A. Optical WDM Equipment
Table 2 shows the main building blocks for the optical layer of a generic metro node. The first set of elements includes the generic optical WDM equipment necessary to build the analog WDM network elements, namely, ROADMs and OLA systems. The second and third parts of the table include classical state-of-the-art transponders for metro scenarios featuring multiple bitrates, along with a set of S-BVTs. The cost/power figures do not include the chassis housing the optical components, since these can vary significantly depending on the adopted configuration.
It is worth noticing from the table that fixed- and flex-grid WSSs have the same cost value, since according to equipment manufacturers there is no real difference in terms of hardware elements. In addition, the transponder values provided in the table span several standard and emerging technologies, including IM/DD, coherent, and PAM4 IM/DD modulation, as well as BVTs with various configurations, such as the one presented in [10].

B. Packet-Switching Equipment
Regarding the packet-switching components, the cost model defines four different sets of hardware equipment: Layer-2/Layer-3 telco-like carrier-class switches with high-performance chipsets (e.g., Broadcom Qumran or Jericho+), which can in turn be distinguished between aggregated and disaggregated switches; Layer-2 data-center-like switches with lower-performance chipsets (e.g., Broadcom Tomahawk2 or Trident), intended mainly for inter-/intra-data-center traffic; and classical Layer-3 IP routers, to cover the case where a telco operator already owns expensive IP equipment that can be reused in its solutions. These are listed in Table 3.
Concerning the L2/L3 telco-like carrier-class switches, the aggregated switches consist of HW and SW provided by the same equipment vendor, whereas in the disaggregated case, implementing the CORD concept, they are composed of white-box HW, while the SW can be developed by the telco itself (in house) or provided by a third party. Next, a set of conventional IP routers is shown for completeness, where a fixed configuration is assumed for simplicity, even if a router can also be modular, with a chassis and line cards. Finally, several pluggable transceivers are provided, although their relative cost compared to the previous hardware sets is almost negligible.
These capacity values are intended as the maximum sum of the rates of the interfaces that can be connected to the router or the switch. This is what is called bidirectional capacity in [53] (e.g., 400 Gb/s can host up to four interfaces at 100G, or two at 100G plus 20 at 10G). However, datasheets usually provide the maximum unidirectional capacity value for switches and routers, which is twice this value (i.e., 800G in the previous example), since the equipment must be able to process the bitstream in both transmission directions.
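The capacity bookkeeping above can be sketched as a simple feasibility check (the 400 Gb/s device and the interface mixes are the illustrative values from the text, not a general rule):

```python
def fits_bidirectional(interface_rates_gbps, capacity_gbps):
    """Check whether a set of interface rates fits within the
    bidirectional capacity of a switch or router."""
    return sum(interface_rates_gbps) <= capacity_gbps

CAPACITY = 400  # bidirectional capacity in Gb/s
# Datasheets would typically quote the unidirectional figure, i.e., 2x:
unidirectional = 2 * CAPACITY  # 800 Gb/s

print(fits_bidirectional([100] * 4, CAPACITY))              # four 100G -> True
print(fits_bidirectional([100] * 2 + [10] * 20, CAPACITY))  # 2x100G + 20x10G -> True
print(fits_bidirectional([100] * 5, CAPACITY))              # five 100G -> False
```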
The cost values of routers and switches in Table 3 are assumed to increase linearly with capacity, although some studies suggest that the cost of packet equipment increases less than linearly (the linear assumption is slightly pessimistic). Power consumption values, in contrast, follow a less-than-linear increase with capacity, as observed from vendor datasheets and previously analyzed in [53][54][55]. Such typical/average consumption values should therefore be considered indicative, since different vendor equipment with similar capacity and features may report different power consumption values in their datasheets.

C. Computing and Storage Equipment
Finally, Table 4 shows the computing and storage equipment elements for data-center design, along with other specialized hardware that may be required in certain scenarios. Again, the table is reduced to show only four relevant sizes for computing nodes, namely, Small, Medium, Large, and XLarge, and two to four sizes for storage hardware.

TECHNOECONOMIC USE CASE: DESIGN OF NEXT-GENERATION MAN NODES
A. Reference Topology and Node Architecture
The application scenario used for the evaluations is a reference generic metro network topology composed of 60 nodes, as shown in Fig. 2, which is considered to span a large metropolitan area providing network services to two million people, including residential, 4G/5G mobile, and business traffic. The network topology comprises six fully meshed metro-core edge nodes (MCENs), with each adjacent pair of MCENs also connected through a horseshoe formed by the two MCENs and nine access-metro edge nodes (AMENs), for a total of 60 COs (6 MCENs plus 6 × 9 = 54 AMENs) serving a total of 900,000 HHs (about 15,000 HHs per node). This MAN serves approximately two million residents and 900 mobile base stations collecting traffic from about 3.3 million mobile lines.
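The node and household counts above follow directly from the topology parameters; a short sketch reproducing the bookkeeping:

```python
# Topology parameters from the reference scenario.
N_MCEN = 6                 # fully meshed metro-core edge nodes
N_HORSESHOES = 6           # one horseshoe per adjacent MCEN pair
AMEN_PER_HORSESHOE = 9     # access-metro edge nodes per horseshoe
TOTAL_HH = 900_000         # households served by the whole MAN

n_amen = N_HORSESHOES * AMEN_PER_HORSESHOE   # 54 AMENs
n_nodes = N_MCEN + n_amen                    # 60 COs in total
hh_per_node = TOTAL_HH // n_nodes            # about 15,000 HHs per node

print(n_amen, n_nodes, hh_per_node)  # -> 54 60 15000
```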

For simplicity, only one horseshoe is shown in Fig. 2 (the one connected to the lower MCENs), while the other five identical horseshoes around the central mesh are not shown. With this topology, each MCEN has a topological degree of five (not including the connection to the backbone node, which is left out of the evaluation), while the AMEN nodes have degree 2, resulting in an average node degree of 2.1 for the whole network topology under evaluation.
Regarding traffic and equipment requirements, both intra- and inter-CO links do not come from a dimensioning study based on traffic and routing rules but are simply assigned. Concerning offered traffic per node, a total traffic on the order of 100 Gb/s per AMEN and 1 Tb/s per MCEN is assumed. All the interfaces connecting AMENs to the access networks (fixed residential OLTs/DSLAMs, mobile antennas, equipment of business customers) and to data-center servers are at 10G. Networking at the metro level relies on 100G interfaces in the horseshoes and on 400G in the central mesh.
Concerning computing and storage capabilities at the nodes, servers are assigned considering a limited local processing requirement in the AMENs and a significantly higher processing requirement in shared data-center resources in the core nodes. Four DC profiles have been defined following the experience of the METRO-HAUL project participants, as shown in Table 5. Without an exact definition of the edge VNFs in the nodes (e.g., vOLT for fixed access, DU/CU/UPF for mobile, vBNG for IP traffic, etc.) and their compute and storage requirements, which are out of the scope of this work, the assignments of CPU cores for the different DC profiles have been based on numbers discussed within the project that seemed reasonable for a first phase of deployment of a CORD-like metro network. In this sense, taking into account the required CPU cores, the number of servers should be on the order of ten or a few tens in each access-metro node, and on the order of many tens or a few hundred in each metro-core node.
Therefore, as summarized in Table 5, a basic DC resource profile (identified as Basic) requires 240 CPU cores per AMEN and 2400 cores per MCEN. Three more DC profiles are then introduced to evaluate the impact of the DC component on cost and power consumption. These three profiles are assumed to require more processing capacity than the Basic one, in particular twice for the High Balanced (High B) profile (480 cores per AMEN and 4800 cores per MCEN). The High Concentrated profile (High C) considers 240 cores on the AMEN and 6000 cores on the MCEN, and the High Distributed profile (High D) comprises 720 cores on the AMEN and 3600 cores on the MCEN.
Essentially, the Basic profile represents a minimal starting point, while the other three consider variants where more computing resources are necessary (2 to 3 times), with a different balance between centralized and distributed workloads. It can be expected that a massive deployment of VNFs at the network edge will require significantly higher compute and storage resources than those listed in Table 5; these scenarios are left for further analysis.
All High profiles are assumed to support the same processing load at the network level. Compared to the High B profile, High C needs less compute capacity at the AMENs, since these can benefit from pooled servers at the MCENs; on the other hand, the High D profile is penalized because compute resources are distributed, resulting in less server sharing at the MCENs. For the conversion of cores into the number of servers in the nodes, Large- and XLarge-type servers are assumed to be installed in the AMENs and MCENs, respectively. RAM and storage resources follow from the configuration of each server type.
Concerning optical WDM networking, this can be either aggregated or disaggregated equipment (ROADMs, transponders, OLAs, etc.). Disaggregated equipment is assumed to have similar functionality to the aggregated equipment but at reduced HW cost. In the optical WDM layer, two A/D units and one A/D unit are assumed in the MCEN and AMEN, respectively, giving an average of 1.1 A/D units per node. WDM systems of 40 fixed-grid channels with 100 GHz spacing are enough for AMENs, while 80 fixed-grid channels with 50 GHz spacing are used for the links in the core mesh.
Concerning packet switching, three architectural options are considered in the analysis:
• Option 1, named legacy router, uses traditional carrier-class IP routers for networking and a separate DC communication infrastructure. Aggregated optical WDM equipment is considered in this case.
• Option 2, named aggregated L2/L3 switch (Aggr. L2/L3 switch), uses L2/L3 switch-routers for both internode and intranode networking and assumes that the networking SW on the switch-routers is supplied as integrated SW by the same vendor that provides the HW. No assumptions are made about the SW architecture: it can be distributed, as in legacy routers, or centralized (using an SDN controller). Optical WDM equipment is provided as aggregated.
• Option 3, named disaggregated L2/L3 switch (Disaggr. L2/L3 switch), uses L2/L3 switch-routers for both internode and intranode networking. In this option the HW consists of white-boxes, while the networking SW is developed in house or by a third party. The reference model for the CO architecture is CORD, and the paradigm of the networking control plane is SDN, with the Trellis extension. Disaggregated optical WDM equipment is also considered.
Figure 3 shows the AMEN and MCEN equipment for the legacy architecture, where standard L3 routers are used. The aggregation node has a simple structure, including only one router and a micro-DC with a few servers, one storage unit, and an L2 switch. The metro node has two routers and a mini-DC with equipment redundancy, composed of many servers, some storage units, and two L2 switches. Figure 4 shows the AMEN and MCEN equipment for the architecture based on L2/L3 switches, which can be either aggregated or disaggregated. The aggregation node has a simple structure, including two L2/L3 switches with no redundant HW parts, and a micro-DC as described above. The metro node follows a leaf-and-spine architecture (two spines and three leaves) and includes a mini-DC.

B. Evaluation
The cost of the three CO architectural options described above is evaluated applying the cost tables shown in Section 3. The cost of the CO is split into three main parts: the packet layer, the optical layer, and DC costs.
For the packet layer in Option 1 (legacy router), one traditional XSmall-size (800G) router and two Medium-size (3.2T) routers are assumed for AMENs and MCENs, respectively. The choice of having only one piece of equipment in the AMEN is motivated by the fact that routers are assumed to be "carrier class" (i.e., full equipment redundancy), and only the data rate requirement is considered. The cost of all SW required for packet networking is included in the equipment cost. The DCs are connected to the routers using 100G links coming from TOR switches, which are connected to the servers using 10G links.
For Option 2 (Aggr. L2/L3 switch), the packet layer has two XSmall-size (800G) switches for COs in the aggregation sites and a fabric of five switches, namely, three Small-size (1.6T) leaves and two Medium-size (3.2T) spines for the MCENs. The reason for the adoption of two pieces of equipment in AMENs is that L2/L3 switches are only partially redundant, and a couple of switches are necessary to ensure an adequate level of reliability. The DC is directly connected to leaf switches with 10G interfaces, avoiding the use of TOR switches.
Concerning Option 3 (disaggregated), plain white-boxes without SW are considered in the analysis. Such HW is assumed to come at a 50% discount with respect to its legacy counterpart, as follows from [5,56,57]. Essentially, such a discount is justified by the fact that both the equipment SW and its integration are provided separately. In particular, the SW cost in Option 3 is evaluated by assuming a SW development of 100,000 lines of code (LOC) for the whole customized carrier-class SDN controller environment based on ONOS plus the network edge mediator (NEM) for the CO [58]. The SW cost results in 500 CU, under the assumption of a SW productivity of 4 LOC per hour and a SW engineer cost of 0.02 CU per hour, as follows from [5,56,57].
The 4 LOC per hour productivity figure comes from [59], where the authors conducted an extensive study of SW projects employing different programming methodologies and languages, finding an estimated range between 325 and 750 LOC per month (i.e., roughly 2 to 4.3 LOC/h), including the whole software life cycle, namely, requirements, design, coding, documentation, validation, operation, and support. This SW development cost is shared among all the L2/L3 equipment owned by the operator; assuming that 250 switches share the SW, the SW cost is 2 CU per switch, which totals 276 CU for the entire MAN of 60 nodes and 138 switches.
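The SW cost chain above (LOC, productivity, engineer cost, sharing) can be reproduced step by step; the switch count assumes two switches per AMEN and five per MCEN, as described for Options 2 and 3:

```python
# Disaggregated-SW cost estimate, using the figures from the text.
LOC_TOTAL = 100_000          # LOC for the SDN controller (ONOS-based) + NEM
PRODUCTIVITY_LOC_PER_H = 4   # LOC per engineer-hour
ENGINEER_COST_CU_PER_H = 0.02  # CU per engineer-hour
SHARING_SWITCHES = 250       # switches sharing the SW investment

dev_hours = LOC_TOTAL / PRODUCTIVITY_LOC_PER_H      # 25,000 engineer-hours
sw_cost_cu = dev_hours * ENGINEER_COST_CU_PER_H     # 500 CU in total
cost_per_switch = sw_cost_cu / SHARING_SWITCHES     # 2 CU per switch

# MAN of 60 nodes: 54 AMENs x 2 switches + 6 MCENs x 5 switches
n_switches = 54 * 2 + 6 * 5                          # 138 switches
man_sw_cost = n_switches * cost_per_switch           # 276 CU for the MAN

print(sw_cost_cu, cost_per_switch, n_switches, man_sw_cost)
# -> 500.0 2.0 138 276.0
```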
Concerning the DC equipment, the cost is basically the same for all architectures. From the number of cores reported in Table 5, it follows that the Basic profile requires 5 Large-size servers per AMEN and 25 XLarge-size servers per MCEN. For the other DC profiles, 5, 10, and 15 Large-size servers are required per AMEN, while 63, 50, and 38 XLarge servers are required per MCEN, for the High C, High B, and High D DC profiles, respectively. A Medium/Small-size NAS storage unit is added to each AMEN and two Medium-size NAS storage units are added to each MCEN, for every DC profile.
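The server counts can be derived by ceiling division of the per-node cores in Table 5 by the cores per server type. The per-server core counts below (48 for Large, 96 for XLarge) are not stated explicitly in the text; they are the values implied by the Basic profile (240 cores -> 5 Large servers, 2400 cores -> 25 XLarge servers):

```python
import math

# Cores per server type, as implied by the Basic profile (assumption).
CORES_LARGE, CORES_XLARGE = 48, 96

profiles = {  # (cores per AMEN, cores per MCEN) from Table 5
    "Basic":  (240, 2400),
    "High C": (240, 6000),
    "High B": (480, 4800),
    "High D": (720, 3600),
}

for name, (amen_cores, mcen_cores) in profiles.items():
    n_large = math.ceil(amen_cores / CORES_LARGE)    # Large servers per AMEN
    n_xlarge = math.ceil(mcen_cores / CORES_XLARGE)  # XLarge servers per MCEN
    print(f"{name}: {n_large} Large/AMEN, {n_xlarge} XLarge/MCEN")
```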
For the optical layer, in the aggregation network segments (the horseshoes in Fig. 2) each AMEN is equipped with two 1:4 WSS modules for interconnection to the other nodes, plus an A/D block based on one 1:4 WSS module and a Mux/Demux with two amplifiers. In the core mesh segment, each MCEN is equipped with six 1:9 WSS modules for mesh interconnection with the other MCEN nodes (four links) and with the aggregation horseshoes (two links), plus two more 1:9 WSS modules and two Mux/Demux with four amplifiers to build the two A/D blocks. The HW has the same characteristics for the three options, but in Option 3 the optical layer is assumed to be disaggregated. A discount of 20% on the disaggregated WDM HW with respect to the prices of the aggregated devices has been applied, as shown in [56,57]. However, an additional investment in SW is necessary for the disaggregated case, although medium or large operators can share this cost among a large amount of equipment. For the disaggregated WDM layer, the SW cost is calculated following the same assumptions used for the disaggregated switches, i.e., 2 CU per ROADM and 0.5 CU per transponder (100,000 LOC of SW is assumed to be shared by 250 ROADMs, and again 100,000 LOC of SW shared by 1000 transponders).
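The disaggregated WDM figures follow the same arithmetic as the switch SW: a 500 CU SW package (100,000 LOC at 4 LOC/h and 0.02 CU/h) shared across a device fleet, plus a 20% HW discount. A short sketch:

```python
# Per-device SW cost for the disaggregated WDM layer, from the text.
LOC = 100_000
SW_COST_CU = (LOC / 4) * 0.02      # 25,000 h * 0.02 CU/h = 500 CU

cu_per_roadm = SW_COST_CU / 250          # shared by 250 ROADMs -> 2 CU
cu_per_transponder = SW_COST_CU / 1000   # shared by 1000 transponders -> 0.5 CU

def disaggregated_hw_cost(aggregated_cost_cu: float) -> float:
    """Apply the 20% HW discount assumed for disaggregated WDM gear."""
    return 0.8 * aggregated_cost_cu

print(cu_per_roadm, cu_per_transponder)  # -> 2.0 0.5
```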
The cost and power analysis results are depicted in the following charts. In Fig. 5, the cost of a single node is shown for the three options and for the two types of metro nodes (i.e., AMEN and MCEN) in the case of the Basic DC profile. The total cost is broken down into the data center, packet switch, optical, and disaggregated SW components. The disaggregated SW includes the SW for both optical and packet disaggregated equipment. The cost of an AMEN represents about 12% of the cost of an MCEN. As the cost of the DC component does not change significantly across node architectures (the small difference is due to the additional L2 switches in the legacy router architecture), the cost difference between node architectures comes from the packet and optical parts.
For both the AMEN and the MCEN, the aggregated L2/L3 switch architecture shows nearly equally balanced costs among the packet, optical, and data-center components. Compared to the aggregated L2/L3 switch architecture, in the legacy router case the cost of the packet layer is remarkably higher, while in the disaggregated L2/L3 switch case the costs of both the packet and optical parts are lower than in the other two cases; also, the additional cost of the disaggregated SW does not offset the reductions in equipment costs, making this solution the cheapest option.
The percentages of cost reduction of the two cases adopting L2/L3 switches, taking the legacy router case as reference, are highlighted in the diagram. The percentage decreases obtained with the aggregated L2/L3 architecture are −21.6% and −15.4% for the MCEN and AMEN, respectively, and become even more noticeable for the disaggregated architecture, which reaches −33.2% and −21.7% reduction for the MCEN and AMEN, respectively, compared to the legacy router. Figure 6 shows the total equipment cost of the MAN, obtained by summing up six identical contributions from the MCENs and 54 from the AMENs. Results for the three node architectures are shown as an absolute cost-component breakdown (bar diagram with the scale of values on the secondary y axis) and as a value normalized to the HHs covered by the entire MAN on the primary y axis (i.e., total cost divided by 900,000 HHs). In the cost-breakdown part of the diagram, the cost components are differentiated not only in terms of the infrastructure component (DC, optical, or packet) but also between aggregated and disaggregated. Moreover, the contribution of the disaggregated SW (including SW for both packet and optical equipment) is explicitly shown for the L2/L3 disaggregated node architecture. The DC cost component is less than half of the total cost, ranging from 33% (legacy router) to 43% (L2/L3 disaggregated switch), as shown by the ratio of the yellow bar to the total in Fig. 6. Overall, the cost reduction obtained with L2/L3 switches, if used instead of routers, is −18.4%, while the disaggregation of both the packet and the optical layers increases the cost reduction to about −27.3%. The cost of the disaggregated SW is about 10% of the total cost. The HH-normalized cost ranges from 0.0075 CU per HH for the legacy router, to 0.0061 CU per HH for the L2/L3 aggregated switch, to 0.0054 CU per HH for the L2/L3 disaggregated switch.
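The MAN-level aggregation just described (6 MCENs, 54 AMENs, 900,000 HHs) can be sketched as below. The per-node cost values used in the example are hypothetical placeholders, not values from the study; only the node counts, the HH count, and the ~12% AMEN/MCEN cost ratio come from the text.

```python
# Sketch of the MAN-level cost aggregation and per-HH normalization.
# Node counts and HH coverage are from the text; the per-node costs
# in the example call are hypothetical.

N_MCEN, N_AMEN, HOUSEHOLDS = 6, 54, 900_000

def man_cost_per_hh(cost_mcen: float, cost_amen: float) -> float:
    """Total MAN equipment cost normalized to covered HHs (CU/HH)."""
    total = N_MCEN * cost_mcen + N_AMEN * cost_amen
    return total / HOUSEHOLDS

# Hypothetical MCEN cost; AMEN follows the ~12% ratio reported above.
cost_mcen = 600.0
cost_amen = 0.12 * cost_mcen
print(man_cost_per_hh(cost_mcen, cost_amen))
```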
(The percentages of reduction with respect to the legacy router are the same as reported before for the absolute total cost.) Figure 7 shows the power consumption of a single AMEN and MCEN for the Basic DC profile. The values for the L2/L3 switch architectures, aggregated and disaggregated, are the same, so only one category that applies to both is reported in the diagrams (the switch HW is the same in the two cases, the difference being only in the way the SW is provided). Noteworthy is the large percentage of power required by the DC, even with the Basic DC profile, for which the number of deployed computation cores is not large. The DC share of the total power consumed in a node ranges from 74% (for an AMEN with a legacy router) to 91% (for both an AMEN and an MCEN with an L2/L3 switch). The power reduction achievable thanks to the L2/L3 switches (all other parts consume the same) is about −5.9% in the MCEN and about −18.0% in the AMEN. Figure 8 reports the total power consumption of the entire MAN (broken into subparts, with the values on the secondary y-axis scale) and the resulting power consumption per HH (scale on the primary y axis) for the Basic DC profile. The high impact of the power consumption of the DC component is evident. The absolute consumption of the entire MAN for the reference case (legacy router) reaches about 1200 kW, with the DC component accounting for about 80% of the total; for the L2/L3 switch, the power consumption is just under 1000 kW, with the DC component increased to 90% of the total, a reduction of −13.6% in power consumption compared to the legacy router case. The large reduction of power consumption achievable in the packet layer with the adoption of switches, instead of the high-energy-consuming routers, is offset by the high power consumption of the DC components.
This remains approximately the same (as mentioned in the cost analysis, it differs only by the presence of additional top-of-rack switches and pluggables in the legacy router architecture). The total normalized power consumption of the MAN is 1.31 W per HH in the case of the legacy router and 1.15 W per HH in the case of the L2/L3 switch architecture. Figure 9 reports the cost and power per HH for the disaggregated L2/L3 switch architecture for the four specified DC profiles. As expected from the results already presented, the power consumption shows a higher sensitivity to the DC component than the cost. Going from the 27,360 computation cores of the Basic DC profile to the 60,480 cores of the High-D profile (roughly 2.2 times as many), the cost increases by +53% while the power consumption increases by +125%, reaching 2.5 W per HH. The percentage of power due to the DC component is not shown in the figure, but it grows from 84% of the total (Basic DC profile) to a value close to 94% (High-D DC profile). Figure 10 shows the partitioning between equipment cost and energy cost. To make the CAPEX due to equipment purchase comparable with the OPEX due to energy, both costs are referred to a period of 1 year. In this CAPEX-OPEX analysis, only power consumption and equipment costs are considered; other important cost sources, such as installation, maintenance, or housing and floor rental, are left out of the analysis. The yearly cost of equipment is evaluated by distributing the total equipment cost (as shown in Fig. 8, left y axis) over 8 years (i.e., the amortization period). The yearly energy cost is calculated using the power consumption values shown in Fig. 9 (right y axis) and assuming a cost of 0.00002 CU per kWh (approximately the average energy cost for industrial customers in Europe and the USA).
The diagram shows that the energy cost amounts to approximately one-fourth of the total, reaching 30% of the total for the High-D profile, which is the one that employs the largest volume of compute resources. The share of energy cost is thus significant, with a remarkable sensitivity to the amount of compute resources.
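The yearly CAPEX-OPEX split described above can be sketched as follows, using the stated 8-year amortization period and 0.00002 CU/kWh energy price. The total equipment cost in the example call is a hypothetical value, not one from the study; only the ~1200 kW MAN power figure is quoted in the text.

```python
# Sketch of the yearly CAPEX/OPEX comparison: equipment amortized over
# 8 years, energy priced at 0.00002 CU/kWh (values from the text).

AMORTIZATION_YEARS = 8
ENERGY_COST = 0.00002   # CU per kWh
HOURS_PER_YEAR = 8760

def yearly_split(total_equipment_cost: float, power_kw: float):
    """Return (yearly equipment cost, yearly energy cost) in CU."""
    capex = total_equipment_cost / AMORTIZATION_YEARS
    opex = power_kw * HOURS_PER_YEAR * ENERGY_COST
    return capex, opex

# ~1200 kW is the legacy-router MAN figure; 6750 CU is hypothetical.
capex, opex = yearly_split(6750.0, 1200.0)
print(capex, opex, opex / (capex + opex))
```

With these illustrative inputs the energy share of the yearly total comes out around one-fifth, in the same ballpark as the roughly one-fourth share reported above.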

SUMMARY AND DISCUSSION
This work presents both a cost model with the necessary building blocks to realize COs with computing/storage capabilities and, as an example of application, a technoeconomic evaluation and comparison of different MAN node architectures based on the CORD model. The building blocks of the next-generation MAN architectures, featuring multiaccess edge computing (MEC) and 5G support, have been described with updated cost and power consumption values for the optical-layer components, packet switching, and storage/computing elements.
The technoeconomic analysis shows that moving toward disaggregated CO architectures can achieve important cost savings, approximately in the range of 15% to 30%, thanks to cost savings in the optical and packet-switching components of the MAN nodes. On the other hand, introducing DC capabilities at the MAN nodes increases the total cost by approximately one-third in CAPEX; however, it has a tremendous impact on OPEX, in particular regarding the power consumption of the data-center part at the CO, which increases between 3 and 5 times with respect to the legacy CO without data-center capabilities. In this sense, future research is needed to investigate ways to reduce the power consumption of mini-DCs, for instance, dynamic on/off strategies during periods of low activity. Operators also need to investigate where the deployment of COs with data centers within the MAN makes sense and is profitable.
In conclusion, the large investment required to transform legacy COs into COs with data-center capabilities, both in terms of CAPEX and especially in terms of power consumption, will require sound business-case planning of the offered services to ensure return on investment. However, in an increasingly digitized world, the loss of opportunity (not addressed in this study) must also be included in investment decision-making.
It is worth adding that such an architecture can also bring, in general, a reduction of the transmission capacity needed in the metro network. This benefit has not been quantified in this study because the traffic assumptions and the granularity chosen for the connections (i.e., 100G) preclude further reduction of the aggregation part, resulting in a minimum of 100G between any pair of connected nodes. To capture such savings opportunities, a wider choice of connection rates (e.g., 25G and 50G), or scenarios with higher traffic volumes, should be considered. In this respect, the cost and power model presented in this article paves the way for further technoeconomic evaluations of forthcoming MANs.