Datacenter applications occasionally require moving many terabytes of data between datacenters across a wide-area network. This might be needed for better distribution of the workload, for migration of a customer from one cloud provider to another due to failures or maintenance activities, or in preparation for large-scale disasters, such as storms (disaster preparedness).
Moving such amounts of data requires very high-speed connectivity. For example, even a relatively modest migration of 4 TB completed in 30 minutes requires approximately 20 Gbps of dedicated capacity, and more realistic use cases easily require 100 Gbps for longer periods of time. These use cases are sometimes called “cloud bursts.” Since such events are fairly rare, it does not make sense for the datacenter operator (be it a customer of the network operator or part of the operator itself) to lease the full capacity 100 percent of the time. If the network operator can find a way to provide such capacity on demand and engage the required resources for other purposes when the particular customer does not need them, then the business case can be made for a lucrative high-speed bandwidth on demand (HS-BoD) service.
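As a quick sanity check on these figures, the required sustained rate follows directly from the data volume and the transfer window. The short Python sketch below (my own illustration, using decimal units where 1 TB = 8 × 10^12 bits) reproduces the 4 TB in 30 minutes example:

```python
def required_gbps(terabytes: float, minutes: float) -> float:
    """Sustained throughput (in Gbps) needed to move `terabytes` within `minutes`."""
    bits = terabytes * 8e12            # 1 TB = 8 * 10^12 bits (decimal units)
    return bits / (minutes * 60) / 1e9

# The example from the text: 4 TB moved in 30 minutes.
print(f"{required_gbps(4, 30):.1f} Gbps")   # ~17.8 Gbps, i.e. roughly 20 Gbps
```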
I will discuss the different approaches from both economic and operational perspectives.
Implementing Cloud Bursts through the WAN
At a high level, there are three approaches to HS-BoD.
- Using the IP/MPLS layer alone
- Using the Optical layer alone (OTN or DWDM)
- Using a multilayer approach
Numerous claims have been made regarding the use of either the IP/MPLS layer or the Optical layer for the job, complemented by SDN control. Not surprisingly, equipment vendors are divided on which of these approaches is more economical, depending on the type of equipment they sell. Based on the analysis below, we believe that the optimal approach is to use both layers together under multilayer orchestration. These different approaches are illustrated in the figure below.
[Figure: Three Approaches to HS-BoD]

Economic Perspective
When comparing the different approaches from an economic perspective, we have to consider how datacenters are connected to the WAN today. Since datacenters are connected to the IP/MPLS layer for their normal datacenter-to-datacenter connectivity needs and for connectivity to customers and other businesses, it is natural to use the IP/MPLS layer alone for HS-BoD. However, this implies vast overprovisioning of the IP/MPLS layer, which is clearly uneconomical.
Supporting cloud bursts in the Optical layer seems more economical at first glance, since optical capacity has a much lower cost. However, this approach implies connecting the datacenters directly to the Optical layer and dedicating optical resources solely for this purpose, which makes the solution equally uneconomical and complex to operate.
The economical solution for cloud bursts is to keep the datacenters connected to the IP/MPLS layer under all circumstances, while temporarily adding IP capacity over shared optical resources that remain available for other HS-BoD requests or for other needs of the IP/MPLS layer, such as disaster recovery.
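To make the sharing argument concrete, the back-of-envelope sketch below compares leasing the full burst capacity permanently with engaging shared resources on demand. The prices, the on-demand premium, and the burst hours are purely illustrative assumptions, not figures from any operator:

```python
HOURS_PER_MONTH = 730

def cost_dedicated(capacity_gbps: float, price_per_gbps_hour: float) -> float:
    """Leasing the full burst capacity 100 percent of the time."""
    return HOURS_PER_MONTH * capacity_gbps * price_per_gbps_hour

def cost_on_demand(capacity_gbps: float, price_per_gbps_hour: float,
                   burst_hours: float, premium: float = 2.0) -> float:
    """Engaging shared resources only while a burst is active.
    `premium` models a higher on-demand rate (an assumption)."""
    return burst_hours * capacity_gbps * price_per_gbps_hour * premium

# Hypothetical example: a 100 Gbps burst capability used 20 hours per month.
dedicated = cost_dedicated(100, 1.0)        # normalized price unit
on_demand = cost_on_demand(100, 1.0, 20)
print(f"dedicated capacity costs {dedicated / on_demand:.0f}x more per month")
```

Even with a substantial on-demand premium, rare bursts make the shared model far cheaper; where the crossover lies depends on how often bursts actually occur.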
Operational Effort
From an operational perspective, connecting the datacenter directly through the Optical layer seems to be the most complex option to operate. First, new gear is required, which adds maintenance effort. Second, a mechanism is needed to split traffic between the normal path through the IP/MPLS layer and the HS-BoD path when the latter exists.
Connecting the datacenter through the IP/MPLS layer alone adds no operational overhead, since no changes are needed. The multilayer solution also does not require additional connectivity effort; however, an appropriate Orchestration layer is necessary to automate the networking configuration process.
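As a rough illustration of what such an Orchestration layer would automate, the sketch below strings the multilayer steps together. The controller objects and method names are hypothetical placeholders, not the API of any real SDN controller:

```python
class MultilayerOrchestrator:
    """Coordinates the Optical and IP/MPLS layers for an HS-BoD request (sketch)."""

    def __init__(self, optical_controller, ip_controller):
        self.optical = optical_controller   # hypothetical Optical-layer controller
        self.ip = ip_controller             # hypothetical IP/MPLS-layer controller

    def start_burst(self, dc_a: str, dc_b: str, gbps: int):
        # 1. Draw a wavelength/ODU path from the shared optical pool.
        channel = self.optical.provision_channel(dc_a, dc_b, gbps)
        # 2. Bring up an additional IP link over that channel between the routers
        #    that already attach the two datacenters to the IP/MPLS layer.
        link = self.ip.add_link(dc_a, dc_b, channel)
        # 3. Steer the burst traffic onto the new link (e.g., via metrics or an LSP).
        self.ip.steer_traffic(dc_a, dc_b, link)
        return channel, link

    def stop_burst(self, channel, link) -> None:
        # Tear down in reverse order so the optical resources return to the shared pool.
        self.ip.remove_link(link)
        self.optical.release_channel(channel)
```

The key point is that the datacenter's physical attachment never changes; only the capacity behind it does, which is what keeps the operational overhead of the multilayer approach low.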
Summary
Three different approaches for providing short-duration, high-capacity datacenter-to-datacenter connectivity were presented. Considering the natural evolution from the current connectivity state, as well as the cost, the multilayer solution seems the most suitable for current datacenter architectures, since no additional expensive hardware is required and since the process can be automated and controlled by leveraging the emerging wave of network programmability.