Open access peer-reviewed chapter

Content Defined Optical Network

Written By

Hui Yang

Submitted: 05 September 2017 Reviewed: 13 November 2017 Published: 20 December 2017

DOI: 10.5772/intechopen.72432

From the Edited Volume

Broadband Communications Networks - Recent Advances and Lessons from Practice

Edited by Abdelfatteh Haidine and Abdelhak Aqqal

Abstract

Optical interconnection has become one of the key technologies for meeting the needs of large-scale data center networking, owing to its large capacity, high bandwidth, and high efficiency. Data center optical interconnection is characterized by resource and technology heterogeneity, and its networking and control face enormous challenges as the number of users with high-level quality of service (QoS) requirements grows. Across different scenarios, a series of key networking and control problems arise in data center optical interconnection, such as multi-layer, multi-stratum resource optimization between data centers and time-aware resource scheduling within a data center. To address these problems and challenges, this chapter studies content defined optical networking and integrated control for data centers. For networking that is vertically "multi-layer-carried" and horizontally "heterogeneous-cross-stratum", the chapter examines application scenarios covering both inter-data center optical interconnection over optical networks and intra-data center interconnection. The model architecture, implementation mechanism, and control strategy are analyzed and demonstrated on an experimental and simulation platform for data center optical interconnection. This chapter provides useful references for future diverse applications of data center optical interconnection and for software defined networking and control in practice.

Keywords

  • software defined optical network
  • content
  • data center
  • optical interconnect
  • OpenFlow

1. Introduction

With the rapid development of cloud computing and high-rate services, data center services have attracted a great deal of attention from network service providers. Given the variety and volume of applications, high-performance network-based datacenter applications feature high burstiness and wide bandwidth, particularly for super-wavelength services [1]. Flexi-grid optical networking, with its large capacity, low power density, and distance adaptation, provides a promising solution to the emerging datacenter network bottlenecks. A novel optical orthogonal frequency division multiplexing (OFDM)-based architecture with high spectral efficiency and high energy efficiency has been presented for data center networks [2]. Networking architecture, algorithms, and a control plane for inter-datacenter networks have also been addressed in flexi-grid optical networks [3]. The advantages of software defined optical networking for highly virtualized datacenters, namely lower power, improved scalability, and higher port density, are studied in [4].

Compared with optical interconnection between datacenters, optical interconnection within a datacenter is an even more pressing requirement and a distinct case for serving services in a flexible and highly efficient way [5]. Moreover, miscellaneous datacenter services have lower delay and higher availability requirements, whose quality of service (QoS) must be guaranteed at the corresponding levels [6]. Many studies focus on the architecture and equipment of datacenter interconnection [7, 8, 9]. For instance, Ref. [7] proposes an arrayed waveguide grating router (AWGR)-based interconnect architecture with a distributed all-optical control plane that achieves low latency and high throughput at high traffic load given sufficient packet transmission time. The authors in [8] achieve hitless adaptation between Ethernet and time shared optical network (TSON) by designing a novel network on-and-off chip approach for highly efficient and transparent intra-datacenter communications. The work in [9] proposes an efficient scheme for all-optically internetworking the pods, letting the datacenter offload heavy inter-pod traffic onto an optical multi-ring burst network. However, exploiting the time characteristics of applications to guarantee service delivery with various QoS levels in intra-datacenter networks has, from the service point of view, remained relatively unexplored. Recently, software defined networking (SDN) enabled by the OpenFlow protocol has, as a centralized software control architecture, become a focus of study by making network functions and protocols programmable [10, 11, 12, 13]; it provides maximum flexibility to network operators and achieves integrated optimization of services through centralized control over multi-dimensional resources [14, 15, 16, 17]. Hence, introducing SDN to centrally control the network and application resources of intra-datacenter optical interconnects is of great significance.

In our previous studies, a network architecture with cross stratum optimization (CSO) based on SDN, covering multi-stratum resources in inter-datacenter networks, was designed to partially satisfy the QoS requirements [18, 19, 20, 21]. Even at the network edge, similar CSO architectures based on SDN have been studied to improve the performance of cloud-based radio access networks, a setting similar to inter-datacenter networks [22, 23, 24]. Building on this work, this chapter proposes a Content Defined Optical Network (CDON) architecture in OpenFlow-based datacenter optical networks for service migration, in which a time-aware service scheduling (TaSS) strategy is introduced. CDON considers the time factor, so that applications with their required QoS can be arranged and accommodated to enhance responsiveness to datacenter demands. Through experimental implementation on our testbed with OpenFlow-based intra-datacenter and inter-datacenter optical networks, and through the collection of blocking probability and resource occupation rate statistics, the overall feasibility and efficiency of the proposed architecture are verified. Both intra-datacenter and inter-datacenter networks are considered in this chapter, and SDN, with its unified and flexible control, is deployed in both.

The rest of this chapter is organized as follows. In Section 2, we propose the novel datacenter-network-based CDON and build its functional models; the intra-datacenter optical interconnection architecture and the inter-datacenter optical network architecture are then designed. Section 3 describes the TaSS strategy. Finally, we describe the testbed and present the experimental results and analysis in Section 4, and Section 5 concludes the chapter.

2. Datacenter-network-based CDON

To promote the control efficiency of datacenter networks, a control architecture based on CDON is designed as shown in Figure 1. Different OpenFlow controllers are developed for different resources, including intra-datacenter computing resources and inter-datacenter communication resources; the latter are mainly flexi-grid optical network resources in this work. All resources are software defined with OpenFlow and support datacenter applications, so the OpenFlow-enabled network controller and the OpenFlow-enabled application controller can work together. Through the user-application interface (UAI), the application plane provides users with various services, and it is served by the controllers through the application-network interface (ANI). The architectures of the intra-datacenter and inter-datacenter networks are discussed in detail below.

Figure 1.

Datacenter-network-based CDON.
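
To make the control split concrete, the following minimal Python sketch (our illustration, not the chapter's implementation; all class and method names are assumptions) shows an application controller and a network controller managing separate resource stratums and cooperating across a northbound interface, in the spirit of the UAI/ANI described above.

```python
class NetworkController:
    """Abstracts optical network (e.g., flexi-grid) resources."""

    def abstract_topology(self) -> dict:
        # In a real NC this information would come from OpenFlow discovery.
        return {"nodes": ["ToR1", "AGG1", "CORE1"],
                "links": [("ToR1", "AGG1"), ("AGG1", "CORE1")]}

    def setup_lightpath(self, src: str, dst: str) -> bool:
        print(f"NC: flow mod to build lightpath {src} -> {dst}")
        return True


class ApplicationController:
    """Monitors datacenter application resources (CPU, storage)."""

    def __init__(self, nc: NetworkController):
        self.nc = nc  # ANI: the AC cooperates with the NC
        self.servers = {"dc1": 0.80, "dc2": 0.25}  # storage utilization (assumed)

    def serve_request(self, src: str) -> bool:
        # UAI: a user request enters here; the AC picks the least-loaded
        # datacenter, then asks the NC to provision the path (cross-stratum).
        dst = min(self.servers, key=self.servers.get)
        return self.nc.setup_lightpath(src, dst)


if __name__ == "__main__":
    ac = ApplicationController(NetworkController())
    ac.serve_request("ToR1")
```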

2.1. Intra-datacenter optical interconnection architecture

The CDON architecture for OpenFlow-based intra-datacenter optical interconnects is shown in Figure 2(a). Three kinds of optical switches, i.e., top-of-rack (ToR), aggregation, and core optical switches, interconnect the datacenter servers, on which the application stratum resources (e.g., CPU and storage) are deployed. The application controller (AC) and network controller (NC) centrally control their respective resource stratums, which are software defined with OpenFlow. To control the intra-datacenter network for service migration with the extended OpenFlow protocol (OFP), OpenFlow-enabled optical switches running the OFP agent software (OF-OS) are used. The CDON architecture in intra-datacenter networks has twofold motivations. First, CDON highlights the cooperation between the AC and NC to support the TaSS strategy, which schedules datacenter applications reasonably according to their different time sensitivity requirements and optimizes the efficiency of the application and network stratum resources. Second, considering traffic burstiness, fast provisioning of burst applications can be supported with unified CDON control and processing.

Figure 2.

(a) The SDN-based intra-datacenter network architecture of CDON. (b) The SDN-based switching fabric of intra-datacenter. (c) The extension of protocol for CDON.

Figure 2(b) describes the OpenFlow-based switching structure of the intra-datacenter network. The functions and interactions of the relevant modules are as follows. The AC is responsible for monitoring and maintaining the application stratum resources for CDON, while the NC supports abstraction of the network information from the physical stratum and lightpath provisioning in the intra-datacenter optical network. When a request arrives, the AC processes it with the TaSS strategy, considering the delay sensitivity requirement and achieving CSO of the computing and storage resources held in an internal database, and sends the decision to the NC. The NC handles the control of both the virtual and physical networks: the former controls the virtual network and sends abstracted network information to the AC, while the latter monitors and controls the programmable physical modules. With the extended OFP, the lightpath can be built according to the request from the AC. It is noteworthy that burst traffic can be serviced by the higher-level optical switches via the fast tunable laser (FTL) and burst mode receiver (BMR) in the ToR switch. The aggregation and core switching fabrics are built on space and wavelength circuit-switching technologies, i.e., fast optical switch (FOS) and cyclic arrayed waveguide grating (CAWG), respectively. The OFP agent software embedded in each optical module has four functions: maintaining the flow table, modeling the node information programmably, mapping the content to configurations, and controlling the hardware.
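
The four agent functions listed above can be pictured with a short, hypothetical skeleton; the class below is our sketch under stated assumptions, not the testbed's OF-OS code, and all names and fields are illustrative.

```python
class OFPAgent:
    """Hypothetical OFP agent skeleton for an optical module."""

    def __init__(self, node_id: str):
        self.node_id = node_id
        self.flow_table: list = []  # (1) maintain the optical flow table

    def node_model(self) -> dict:
        # (2) model the node information programmably for the controller
        return {"id": self.node_id, "ports": 8,
                "labels": ["channel_spacing", "lambda", "time_slot"]}

    def map_content(self, flow_mod: dict) -> dict:
        # (3) map a received flow mod message to a hardware configuration
        return {"in_port": flow_mod["in_port"],
                "out_port": flow_mod["out_port"],
                "lambda": flow_mod.get("lambda")}

    def apply(self, flow_mod: dict) -> None:
        # (4) control the hardware (e.g., FOS/CAWG/FTL) with the mapping
        self.flow_table.append(flow_mod)
        cfg = self.map_content(flow_mod)
        print(f"{self.node_id}: configuring hardware with {cfg}")
```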

In relation to the network control of the intra-datacenter, Figure 2(c) shows the extended flow entry of the OFP. The rule is extended with the main characteristics of the intra-datacenter network, including the in/out port, the intra-datacenter label (e.g., channel spacing, lambda, and time slot), and port constraints. The action is extended to five types: add, switch, drop, configure (to set up a path), and release (to release a path). Using combinations of rule and action, the control of an optical node is realized. The stats function monitors the flow properties to support service provisioning for CDON.
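
As an illustration of this extension, a hedged Python sketch of such a flow entry is given below; the field and action names follow the text, while the exact types and encodings are our assumptions rather than the wire format of the extended OFP.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional, Tuple


class Action(Enum):
    ADD = "add"              # add traffic onto a lightpath
    SWITCH = "switch"        # switch traffic through the fabric
    DROP = "drop"            # drop traffic at the destination
    CONFIGURE = "configure"  # set up a path
    RELEASE = "release"      # release a path


@dataclass
class IntraDCFlowEntry:
    in_port: int
    out_port: int
    channel_spacing_ghz: Optional[float] = None  # intra-DC label fields
    wavelength: Optional[int] = None             # lambda index
    time_slot: Optional[int] = None
    port_constraints: Tuple[int, ...] = ()
    action: Action = Action.SWITCH
    stats: dict = field(default_factory=dict)    # flow monitoring for provisioning
```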

2.2. Optical network architecture of inter-datacenter

Flexi-grid optical networks are a promising technology for inter-datacenter networks, as they can satisfy the requirement of traffic burstiness. The CDON architecture is shown in Figure 3(a). The distributed datacenters are interconnected through the flexi-grid optical networks. The network architecture mainly consists of the optical resource stratum and the application resource stratum. The network controller and application controller control their respective resource stratums, which are software defined with OpenFlow, in a unified manner. Software defined OTN (SD-OTN) is used to control the flexi-grid optical networks of the inter-datacenter network with the extended OFP; SD-OTNs are essentially OpenFlow-enabled elastic optical nodes with OFP agent software. The motivations for designing the CDON architecture over the inter-datacenter optical network are twofold. First, CDON can realize global interworking of cross stratum resources, such that the physical layer parameters (e.g., bandwidth and modulation format) can be adjusted; the cooperation between the AC and NC is thus emphasized to realize a software defined path (SDP) with application and spectrum elasticity. Second, the different time sensitivity requirements of services can be considered reasonably by scheduling datacenter services with time elasticity, further optimizing the application and network resource utilization. Based on the functional architecture described above, the TaSS scheme is implemented in the AC; it arranges the start time, transport time, and corresponding transport bandwidth of services to realize resource optimization across the application and network stratums.

Figure 3.

(a) The architecture of CDON in OpenFlow-based inter-datacenter networks. (b) OF-enabled SD-OTN functional models. (c) Flex ODU. (d) The extension of protocol for CDON in inter-datacenter.

The functional modules of the AC and NC, and the coupling between them, are as follows. The NC is responsible for the control of the physical and virtual networks: the former includes controlling the spectrum resources and modulation format, while the latter manages the virtual network and sends virtual resource information to the AC. When a request arrives, the AC runs the TaSS strategy based on the multi-stratum resource information and sends the decision to the NC through the application-transport interface (ATI). The SDP is then computed and set up with the extended OFP according to the request from the AC. It is noteworthy that the length of the SDP decides the modulation format of the service (e.g., QPSK or 16QAM). Over short distances, spectrum bandwidth, which is more precious than other resources, can be economized by using a high-level modulation format. In the SD-OTN, the OpenFlow-enabled agent software is embedded to realize the communication between the NC and the optical node. The SD-OTN maintains the optical flow table and models the node information in software. The physical hardware, which contains the flexible ROADM and ODU boards shown in Figure 3(b) and (c) respectively, is configured and controlled through content mapping. For the control of the flexi-grid optical networks of the inter-datacenter case, the flow entry of the OFP is extended as shown in Figure 3(d). The rule is extended with the main characteristics of flexi-grid optical networks, including the in/out port, the flexi-grid label (e.g., central frequency and spectrum bandwidth), and the ODU label (e.g., tributary slots). The action of an optical node mainly includes three types: add, switch, and drop. Through combining rule and action, the control of a flexi-grid node is realized. The stats function monitors the flow properties to support SDP provisioning.
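
The distance-adaptive choice of modulation format can be sketched as follows. The reach thresholds and baud rate used here are illustrative assumptions rather than measured values from this work; only the 12.5 GHz slot granularity comes from the experiments described later.

```python
FORMATS = [            # (format, bits per symbol, assumed max reach in km)
    ("16QAM", 4, 500),
    ("8QAM",  3, 1000),
    ("QPSK",  2, 2000),
]


def select_format(distance_km: float, bitrate_gbps: float, baud_ghz: float = 32.0):
    """Pick the densest format whose assumed reach covers the SDP length,
    and estimate the spectrum it needs in 12.5 GHz frequency slots."""
    for name, bits_per_symbol, reach in FORMATS:
        if distance_km <= reach:
            carriers = -(-bitrate_gbps // (bits_per_symbol * baud_ghz))  # ceil
            slots = -(-(carriers * baud_ghz) // 12.5)                    # ceil
            return name, int(slots)
    raise ValueError("SDP too long for the supported formats")


print(select_format(400, 400))   # short SDP  -> ('16QAM', 11): fewer slots
print(select_format(1800, 400))  # long SDP   -> ('QPSK', 18): more slots
```

A shorter SDP thus pays less spectrum for the same bit rate, which is exactly the economization argued above.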

3. Service scheduling strategy

For the service accommodation in datacenter optical networks, the traditional strategy allocates the optimal datacenter server application and the corresponding lightpath network resources as soon as a request arrives. This can lead to the situations illustrated in Figures 4 and 5. Assume that two datacenter servers are deployed at the candidate destination node, with storage utilizations of 80% (server #1) and 85% (server #2) when the service arrives. A relatively short time later, the services hosted by the two servers have changed, i.e., some services are released upon completion while new ones arrive, so the storage utilization of server #1 rises to 95% while that of server #2 drops to 25%. Under the traditional allocation principle, server #1 (80% at arrival time) would be chosen as the destination node. However, if the resource is allocated after a short wait (within the user's delay tolerance), better optimization and more effective resource utilization may be achieved. The other case concerns the service bandwidth. Most datacenter services specify a bandwidth and a fixed transmission time that together yield the overall data volume of the service. If the bandwidth is fixed, as in traditional methods, the service cannot be provisioned when less bandwidth is available. By compressing the bandwidth to fit the available bandwidth and extending the service time (the overall data volume being constant), the allocation becomes feasible. Similarly, by increasing the provisioned bandwidth, the service can be completed sooner when resources are relatively abundant. Both cases exploit the time factor of datacenter services; therefore, we propose the time-aware service scheduling strategy.

Figure 4.

Illustration of time-aware datacenter application resource allocation.

Figure 5.

Illustration of time-aware network resource allocation.
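
The bandwidth-time elasticity discussed above can be captured in a toy calculation; the numbers below are assumed for illustration only.

```python
# The data volume D is fixed, so a service can trade provisioned
# bandwidth against transmission time, as long as it still finishes
# within the user's delay tolerance T.

D = 400.0   # total data volume, Gbit (assumed)
T = 20.0    # tolerant delay, s (assumed)


def feasible(bandwidth_gbps: float) -> bool:
    """A bandwidth is feasible if the resulting transfer time fits in T."""
    return D / bandwidth_gbps <= T


# The nominal request asks for 40 Gbps (a 10 s transfer). If only 25 Gbps
# is free, compressing the bandwidth stretches the time to 16 s, still
# within tolerance, so the request need not be blocked.
for b in (40.0, 25.0, 15.0):
    print(f"{b} Gbps -> {D / b:.1f} s, feasible={feasible(b)}")
```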

3.1. Network modeling

Let G(V, L, F, A) denote the OpenFlow-based datacenter interconnect with optical networks, where V = {v1, v2, ..., vn} is the set of SDN-based optical switching nodes, L = {l1, l2, ..., ln} is the set of optical fiber links connecting the nodes in V, F = {ω1, ω2, ..., ωF} is the set of wavelengths per optical fiber, and A is the set of servers in the datacenters. When a source node s sends a service request, the request contains the total data volume of the service D and the storage space S. Services are classified as latency-sensitive or delay-tolerant. The former requires immediate processing, and its ith request is denoted SRi(s, D, S); the latter additionally includes the arrival time tc and the tolerant delay T, and its ith request is denoted SRi(s, D, S, tc, T). The request SRi+1 is the next request in time order after the connection demand SRi arrives. In addition, Table 1 lists the requisite notations and their definitions.

Symbol  Definition
S       The total storage capacity of the target datacenter
Sr      The residual storage space of the target datacenter
D       The total data volume of the service
Br      The available bandwidth of the lightpath
T       The tolerant delay of a delay-tolerant service
t       The lightpath duration
Ba      The average network transmission bandwidth
tG      The guard time
N       The number of network nodes
L       The number of links
F       The number of wavelengths
A       The number of datacenter nodes

Table 1.

Symbols and definitions.
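
For illustration, the two request types can be written as a small Python data structure mirroring SRi(s, D, S) and SRi(s, D, S, tc, T); this is our sketch, not code from the chapter.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ServiceRequest:
    source: str                              # source node s
    data_volume: float                       # D, total data volume (Gbit)
    storage: float                           # S, required storage space
    arrival_time: Optional[float] = None     # tc, delay-tolerant services only
    tolerant_delay: Optional[float] = None   # T, delay-tolerant services only

    @property
    def latency_sensitive(self) -> bool:
        # A latency-sensitive request carries no (tc, T) pair.
        return self.tolerant_delay is None
```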

3.2. Time-aware service schedule strategy

Building on the functional architecture, we present a novel time-aware service scheduling (TaSS) strategy, implemented in the application controller, to schedule datacenter services with time sensitivity requirements. Arriving services are classified as burst latency-sensitive or delay-tolerant, the latter specified by flow volume and tolerant latency, based on the delay sensitivity of each service. For a delay-tolerant service, we search the datacenter servers and lightpaths to judge whether the volumes of Sr, Br, and t are sufficient to satisfy the incoming service. If Sr ≥ S and Br · t ≥ D, the candidate with the minimum CSO factor [16] is chosen for provisioning. If the resources for accommodation are insufficient, the service waits until its maximum tolerant delay arrives, i.e., until t = T − D/Ba − tG, which accounts for Ba and tG; a new lightpath is then allocated with CSO to accommodate the service. For delay-sensitive requests, the available resources are determined by searching the existing candidates with the same method, and the candidate path with the minimum propagation delay is prepared for provisioning. If no available built path is found, the TaSS strategy builds a new lightpath for the service immediately. To realize a quick response for service provisioning, the service time is used to schedule the lightpath release procedure. The flowchart of the TaSS strategy is shown in Figure 6.

Figure 6.

Flowchart of TaSS strategy.
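
Complementing the flowchart, the sketch below condenses the TaSS decision logic into Python under our assumptions: it reuses the ServiceRequest sketch from Section 3.1, represents each candidate server/lightpath as a dictionary with residual storage Sr, available bandwidth Br, duration t, and propagation delay, and substitutes a placeholder for the CSO factor of Ref. [16].

```python
def cso_factor(sr: float, br: float) -> float:
    # Placeholder load-balancing metric, lower is better (an assumption;
    # the actual CSO factor is defined in Ref. [16]).
    return 1.0 / (sr * br + 1e-9)


def tass_schedule(req, candidates, B_a: float, t_G: float):
    """Return a provisioning decision for one request."""
    # Candidates that can hold the service: Sr >= S and Br * t >= D.
    fits = [(c, cso_factor(c["Sr"], c["Br"])) for c in candidates
            if c["Sr"] >= req.storage and c["Br"] * c["t"] >= req.data_volume]

    if req.latency_sensitive:
        if fits:
            # Provision on the candidate path with minimum propagation delay.
            return min(fits, key=lambda x: x[0]["delay"])[0]
        return {"new_lightpath": True}  # build a new path immediately

    if fits:
        # Delay-tolerant with enough resources: pick the minimum CSO factor.
        return min(fits, key=lambda x: x[1])[0]

    # Delay-tolerant without resources: wait until the latest feasible
    # start time t = tc + T - D/Ba - tG, then allocate a new lightpath.
    wait_until = req.arrival_time + req.tolerant_delay \
        - req.data_volume / B_a - t_G
    return {"new_lightpath": True, "start_time": wait_until}
```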

4. Experimental demonstration and results discussion

A testbed of datacenter networks consisting of intra-datacenter and inter-datacenter parts is built, and SDN is deployed across both parts. A detailed description of the testbed demonstrations follows.

To experimentally evaluate the CDON architecture, we deploy it in optical intra-datacenter networks and realize service migration on our testbed, as shown in Figure 7(a). The data plane contains four optical switches with FOS, two CAWG cards on the core side, a BMR embedded in a burst mode transceiver (BMT) card with an FTL, and the OFP agent software. The switching time of the optical switches is 25 ns and the insertion loss is below 4.5 dB. The frequency deviation of the CAWG cards is 12.5 GHz and the insertion loss is 10.5 dB. The receiving power sensitivity of the BMR is −25 dBm. The switching time of the FTL is 98 ns and its frequency deviation is 2.5 GHz according to the ITU-T standard. The OFP agent software uses PI to control the hardware through the OFP. We use VMware software to build groups of virtual machines to realize the datacenters; each virtual machine models a real node with an independent operating system, CPU, and storage resources. In the control plane, the NC is realized on three servers corresponding to the optical module control, PCE computation, and resource abstraction functions, while a database server is deployed to maintain the transmission resources and the traffic engineering database. The AC server carries the TaSS strategy and monitors the computing resources of the servers. The user plane is built on a server for running the required services.

Figure 7.

(a) Experimental testbed and demonstrator setup. (b, c) eye diagram and tuning waveform of FTL. (d) Spectrum of CAWG port.

We designed an experiment to verify lightpath provisioning in the CDON architecture for datacenter service migration. The AC runs the TaSS strategy to determine the path and schedule for service migration based on the application utilizations among datacenters and the current network resources, and then sets up the path from the source to the destination node chosen by CSO. Figure 7(b)-(d) show the eye diagram and tuning waveform of the FTL and the spectrum of a CAWG port reflected on the filter profile. The experimental results are further shown in Figure 8(a) and (b). Figure 8(a) demonstrates the OpenFlow message exchange sequence for CDON, captured in the NC with Wireshark. In Figure 8, 10.108.50.74 denotes the AC, 10.108.65.249 denotes the NC, and the OF-OS nodes are 10.108.50.21 and 10.108.51.22. The AC performs the TaSS strategy, and the NC controls all OF-OS nodes via flow mod messages to provide the allocated lightpath. The path releasing information is then sent via flow mod messages, and the application usage is kept synchronized via UDP updates. Figure 8(b) verifies the OFP extensions for CDON in intra-datacenter networks with a snapshot of the extended flow table modification message for lightpath provisioning. We also evaluate the performance of CDON in intra-datacenter networks under heavy traffic load through simulation, comparing TaSS with the traditional CSO (TCSO) strategy [16]. The migration data volumes from the datacenter nodes range randomly from 50 Gbit to 400 Gbit, with requests following a Poisson process. Figure 9(a) and (b) compare the performance of the two strategies. Compared with TCSO, TaSS reduces the blocking probability effectively, especially when the network is under heavy load. TaSS also outperforms TCSO significantly in resource occupation rate. The reason is that TaSS realizes global optimization, considering the time schedule with the various delay sensitivity requirements and adjusting the service bandwidth according to the distribution of network resources.

Figure 8.

(a) The capture of the OpenFlow message sequence and (b) the extended flow mod message for CDON.

Figure 9.

Comparisons on (a) blocking probability and (b) resource occupation rate between two strategies in heavy traffic load scenario.

To evaluate the proposed architecture experimentally, we build CDON, including both control and data planes, on the testbed, as shown in Figure 10(a). In the data plane, SDN-based flexible optical nodes are deployed, each of which includes flexible ROADM and ODU boards.

Figure 10.

(a) Experimental testbed. (b, c) Optical spectra of SDPs. (d) SDP setup/release delay. (e) Blocking probability and (f) resource occupation rate under heavy load.

The datacenters and the other nodes are also implemented as virtual machines created with VMware software on servers. As each virtual machine has an independent operating system, IP address, CPU, and memory resources, it can be regarded as a real node. With this virtualization, the experimental topology comprises 14 nodes and 21 links, similar to the US backbone. For the OpenFlow-based CDON control plane, the NC supporting the proposed architecture is deployed on three servers corresponding to elastic spectrum control, physical layer parameter adjustment, and PCE computation with resource abstraction, while the database server maintains the traffic engineering database, the management information base, the connection status, and the configuration of the database and transport resources. The AC server supports the CSO agent and monitors the application resources of the datacenter networks with the TaSS strategy. The user plane is set up on a server for running the required applications.

We designed an experiment to verify SDP provisioning in CDON over flexi-grid optical networks for datacenter service migration; the results are shown in Figure 10(b)-(d). The AC runs the TaSS strategy to determine the destination datacenter based on the application utilizations among datacenters and the current network resources, and then sets up the SDP for the service migration along the lightpath from the source to the destination node. Moreover, for different SDP distances, the spectrum bandwidth and the corresponding modulation format are tunable. Figure 10(b) and (c) show the spectra of SDPs on the flexible link between two SD-OTNs, reflected on the filter profile. The setup/release time of an end-to-end SDP, shown in Figure 10(d), is evaluated by collecting statistics on the strategy processing time, the OFP transmission delay, and the software and hardware handling times of the devices.

We test the performance of CDON by collecting statistics for TaSS, CSO, and the physical layer adjustment (PLA) strategy under heavy load. The requested spectrum bandwidths range randomly from 50 GHz to 400 GHz, and the minimal adjustable frequency slot is 12.5 GHz. All destination nodes in this network are datacenters. The requests follow a Poisson process, and statistics are collected over 1 × 10^5 requests per execution. Figure 10(e) and (f) compare the blocking probability and resource occupation rate of the three strategies. TaSS has a lower blocking probability than CSO and PLA, especially when the network is under heavy load, and it also provides a better resource occupation rate than the other strategies. That is because TaSS considers application and network resources integrally and realizes global optimization; furthermore, by choosing a high-level modulation format where possible, the spectrum resource is economized again. These results are further illustrated in Figure 11(a)-(c). Figure 11(a) shows the application interface of CDON with the status and destination node choice of the datacenter service migration and the various SDP bandwidths. Figure 11(b) illustrates the OpenFlow message exchange for CDON, captured in the NC. Figure 11(c) shows a capture of the extended flow table modification message for SDP setup, verifying the OFP extensions for CDON over flexi-grid optical networks.

Figure 11.

(a) Application graphical user interface (GUI) of the testbed. (b) The capture of the OF messages for network discovery. (c) The capture of the extended flow table message for SDP setup.
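
As a rough indication of how such a simulated load could be generated (our assumption about the setup, not the authors' simulation harness), the snippet below draws Poisson arrivals with requested bandwidths uniform in 50-400 GHz, snapped to the 12.5 GHz slot granularity stated above.

```python
import random


def generate_requests(n: int, arrival_rate: float, seed: int = 1):
    """Yield n requests with Poisson arrivals and flexi-grid bandwidths."""
    random.seed(seed)
    t = 0.0
    for _ in range(n):
        t += random.expovariate(arrival_rate)  # exponential inter-arrivals
        raw = random.uniform(50.0, 400.0)      # requested bandwidth, GHz
        slots = round(raw / 12.5)              # snap to 12.5 GHz slots
        yield {"arrival": t, "slots": slots, "bandwidth_ghz": slots * 12.5}


for r in generate_requests(3, arrival_rate=0.5):
    print(r)
```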

5. Conclusion

Datacenter server resources will become increasingly important in the future information society, and allocating these resources efficiently will be a main factor in reducing cost. Considering the requirements for high capacity and low energy consumption, optical networking is regarded as a promising technology for both intra-datacenter and inter-datacenter networks, and SDN can improve the utilization of application and network resources efficiently. In this chapter, we provide a CDON architecture in SDN-based datacenter optical networks for service migration. We further design the TaSS strategy for CDON and the extended OFP. Based on our intra-datacenter and inter-datacenter network testbeds, we verify the feasibility and efficiency of CDON. The blocking probability and resource occupation rate of our approach under heavy traffic load are collected and compared with those of the traditional strategies. The experimental results indicate that services with time sensitivity can be scheduled effectively and that cross stratum resource utilization efficiency is improved in CDON with the TaSS strategy.

Our future work on CDON is to improve TaSS performance with dynamic parameters and to consider scalability in the optical network. We will then realize network virtualization in the datacenter optical network on our OpenFlow-based testbed.

References

1. Kachris C, Tomkos I. A survey on optical interconnects for data centers. IEEE Communications Surveys and Tutorials. 2012;14:1021-1036. http://dx.doi.org/10.1109/SURV.2011.122111.00069
2. Kachris C, Tomkos I. Energy-Efficient Bandwidth Allocation in Optical OFDM-Based Data Center Networks. Los Angeles, USA: OFC 2012; Mar 2012
3. Yoo SJB, Yin Y, Wen K. Intra and Inter Datacenter Networking: The Role of Optical Packet Switching and Flexible Bandwidth Optical Networking. Colchester, England: ONDM 2012; April 2012. http://dx.doi.org/10.1109/ONDM.2012.6210261
4. Casimer D. Optical Networking in Smarter Data Centers: 2015 and Beyond. Los Angeles, USA: OFC 2012; Mar 2012
5. Yang H, Zhang J, Zhao Y, Ji Y, Han J, Lin Y, Qiu S, Lee Y. Experimental Demonstration of Time-Aware Software Defined Networking for OpenFlow-Based Optical Interconnect in Intra-Datacenter Networks. GLOBECOM 2013, WS-CCSNA; Dec 2013. pp. 1-6. http://dx.doi.org/10.1109/GLOCOMW.2013.6825027
6. Yang H, Zhao Y, Zhang J, Wang S, Gu W, Ji Y, Han J, Lin Y, Lee Y. Multi-stratum resources integration for OpenFlow-based data center interconnect [invited]. Journal of Optical Communications and Networking. Oct 2013;5(10):A240-A248. http://dx.doi.org/10.1364/JOCN.5.00A240
7. Proietti R, Nitta CJ, Yin Y, Akella V, Yoo SJB. Scalability and Performance of a Distributed AWGR-based All-Optical Token Interconnect Architecture. In: Proc. of OFC/NFOEC 2013, OW3H.1. pp. 1-3
8. Rofoee BR, Zervas G, Yan Y, Amaya N, Qin Y, Simeonidou D. Network-on-and-off-Chip architecture on demand for flexible optical intra-datacenter networks. In: Proc. ECOC 2012, Th.2.B.2. http://dx.doi.org/10.1364/ECEOC.2012.Th.2.B.2
9. Deng N, Xue Q, Li M, Gong G, Qiao C. An Optical Multi-Ring Burst Network for a Data Center. In: Proc. of OFC/NFOEC 2013, OTh1A.5. pp. 1-3
10. Liu L, Peng W, Casellas R, Tsuritani T, Morita I, Martínez R, Muñoz R, Yoo SJB. Design and performance evaluation of an OpenFlow-based control plane for software defined elastic optical networks with direct-detection optical OFDM (DDO-OFDM) transmission. Optics Express. 2014;22(1):30-40. http://dx.doi.org/10.1364/OE.22.000030
11. Yang H, Zhang J, Zhao Y, Han J, Lin Y, Lee Y. SUDOI: Software defined networking for ubiquitous data center optical interconnection. IEEE Communications Magazine. February 2016;54(2):86-95. http://dx.doi.org/10.1109/MCOM.2016.7402266
12. Yang H, Zhang J, Ji Y, Tian R, Han J, Lee Y. Performance evaluation of multi-stratum resources integration based on network function virtualization in software defined elastic data center optical interconnect. Optics Express. November 2015;23(24):31192-31205. https://doi.org/10.1364/OE.23.031192
13. Yang H, Zhang J, Ji Y, Tan Y, Lin Y, Han J, Lee Y. Performance evaluation of data center service localization based on virtual resource migration in software defined elastic optical network. Optics Express. September 2015;23(18):23059-23071. https://doi.org/10.1364/OE.23.023059
14. Channegowda M, Nejabati R, Rashidifard M, Peng S, et al. Experimental demonstration of an OpenFlow based software-defined optical network employing packet, fixed and flexible DWDM grid technologies on an international multi-domain testbed. Optics Express. Mar 2013;21:5487-5498. http://dx.doi.org/10.1364/OE.21.005487
15. Yang H, Zhang J, Zhao Y, Ji Y, Han J, Lin Y, Lee Y. CSO: Cross Stratum Optimization for Optical as a Service. IEEE Communications Magazine. August 2015;53(8):130-139. https://doi.org/10.1109/MCOM.2015.7180520
16. Yang H, Zhang J, Zhao Y, Ji Y, Wu J, Lin Y, Han J, Lee Y. Performance evaluation of multi-stratum resources integrated resilience for software defined inter-data center interconnect. Optics Express. May 2015;23(10):13384-13398. https://doi.org/10.1364/OE.23.013384
17. Yang H, Zhang J, Zhao Y, Ji Y, Li H, Lin Y, Li G, Han J, Lee Y, Ma T. Performance evaluation of time-aware enhanced software defined networking (TeSDN) for elastic data center optical interconnection. Optics Express. July 2014;22(15):17630-17643. https://doi.org/10.1364/OE.22.017630
18. Yang H, Zhao Y, Zhang J, Wang S, Gu W, Lin Y, Lee Y. Cross Stratum Optimization of Application and Network Resource based on Global Load Balancing Strategy in Dynamic Optical Networks. In: Proc. OFC/NFOEC 2012, JTh2A.38
19. Yang H, Zhao Y, Zhang J, Wang S, Gu W, Han J, Lin Y, Lee Y. Multi-Stratum Resources Integration for Data Center Application Based on Multiple OpenFlow Controllers Cooperation. In: Proc. OFC/NFOEC 2013, NTu3F.7
20. Zhang J, Yang H, Zhao Y, Ji Y, Li H, Lin Y, Li G, Han J, Lee Y, Ma T. Experimental demonstration of elastic optical networks based on enhanced software defined networking (eSDN) for data center application. Optics Express. 2013;21(22):26990-27002. http://dx.doi.org/10.1364/OE.21.026990
21. Yang H, Zhang J, Zhao Y, Li H, Huang S, Ji Y, Han J, Lin Y, Lee Y. Cross stratum resilience for OpenFlow-enabled data center interconnection with flexi-grid optical networks. Optical Switching and Networking. January 2014;11(Part A):72-82. http://dx.doi.org/10.1016/j.osn.2013.10.001
22. Yang H, Zhang J, Ji Y, He Y, Lee Y. Experimental demonstration of multi-dimensional resources integration for service provisioning in cloud radio over fiber network. Scientific Reports. July 2016;6:30678. http://dx.doi.org/10.1038/srep30678
23. Yang H, Zhang J, Ji Y, Lee Y. C-RoFN: Multi-stratum resources optimization for cloud-based radio over optical fiber networks. IEEE Communications Magazine. August 2016;54(8):118-125
24. Yang H, He Y, Zhang J, Ji Y, Bai W, Lee Y. Performance evaluation of multi-stratum resources optimization with network functions virtualization for cloud-based radio over optical fiber networks. Optics Express. May 2016;24(8):8666-8678
