
This is a short version of a tutorial paper that will be presented at the ECOC’15 conference in September. The full paper and the presentation can be found here.
We discuss the control architecture for multilayer, multivendor, and multidomain networks. In the past, coordination among nodes was assumed to be done with distributed control, using extensions to existing protocols such as GMPLS or PCEP. While partial multilayer implementations of this approach exist, they are incompatible across vendors. However, the recent push for centralized control, under the SDN umbrella, has caused a rethinking of the architecture.
We argue that without global network state awareness, some key multilayer and multidomain capabilities cannot be reliably implemented. Such global awareness is very hard to implement without central control.
Multivendor, Multidomain Optical Networks
Multivendor optical networks are hard to manage. Each optical network is a closed system (at least when it comes to DWDM networks, in which multivendor interop in the analog domain does not exist). This implies that a single-vendor subnetwork forms at least one domain. Setting up a connection across multiple domains requires understanding the constraints in each domain and, based on this information, deciding which of the domain entry and exit points should be used to set up the connection. Since optical feasibility is computed differently by every optical vendor, it is impossible for the controller of one vendor to decide what is feasible in the domain of a different vendor (unlike IP/MPLS networks, in which multidomain routing is simpler).
Therefore, collaboration between the different controllers is needed to find a feasible solution. Together with various constraints on the routing of the connection (such as low latency and avoidance of certain SRLGs), the problem becomes extremely complex to solve in a distributed fashion. It is much simpler to solve with a central multidomain controller or orchestrator that has a view of all domains and is capable of querying their respective controllers before setting up the connection.
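To make that orchestration step concrete, here is a minimal sketch in Python of how an orchestrator might query each domain controller for feasible segments and only then stitch and commit the end-to-end connection. All names here (DomainController, query_feasible_segments, commit, the Segment fields) are hypothetical; this illustrates the idea, not any vendor's actual NBI.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Segment:
    domain: str
    ingress: str        # domain entry point (border node/port)
    egress: str         # domain exit point
    latency_ms: float
    srlgs: frozenset    # SRLG identifiers traversed by this segment


class DomainController:
    """Stand-in for one vendor controller's northbound interface."""

    def __init__(self, domain: str):
        self.domain = domain

    def query_feasible_segments(self, ingress: str, egress: str) -> List[Segment]:
        # Each vendor computes optical feasibility with its own proprietary
        # model; the orchestrator only sees the resulting candidate segments.
        raise NotImplementedError

    def commit(self, segment: Segment) -> None:
        raise NotImplementedError


def stitch_connection(controllers: List[DomainController],
                      border_points: List[Tuple[str, str]],
                      max_latency_ms: float,
                      forbidden_srlgs: frozenset) -> Optional[List[Segment]]:
    """Pick one feasible segment per domain while honouring end-to-end constraints.

    border_points[i] is the (ingress, egress) pair chosen for domain i; a real
    orchestrator would also explore alternative border points.
    """
    path, total_latency = [], 0.0
    for ctrl, (ingress, egress) in zip(controllers, border_points):
        candidates = ctrl.query_feasible_segments(ingress, egress)
        # Apply the global (multidomain) constraints that no single domain
        # controller can evaluate on its own.
        viable = [s for s in candidates
                  if not (s.srlgs & forbidden_srlgs)
                  and total_latency + s.latency_ms <= max_latency_ms]
        if not viable:
            return None                  # no feasible stitching via these border points
        best = min(viable, key=lambda s: s.latency_ms)
        path.append(best)
        total_latency += best.latency_ms
    for ctrl, seg in zip(controllers, path):
        ctrl.commit(seg)                 # program the domains only after all of them agreed
    return path
```

The key point of the sketch is the ordering: the orchestrator first collects feasibility answers from every domain, evaluates the end-to-end constraints itself, and only then asks the individual controllers to provision their segments.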
Multilayer Networks
Controlling an IP/MPLS layer on top of a multivendor Optical layer adds another level of complexity that is even harder to solve in a distributed manner. It stems from the fact that a change in the IP/MPLS topology, such as adding an IP link to optimize the topology or temporarily taking down an IP link while optimizing the Optical layer, must be carefully managed based on an understanding of the entire end-to-end IP traffic in the network. Once the traffic is known, it is possible to simulate the impact of the topology change on the IP/MPLS layer both under normal conditions and under failures. This is necessary since a change in the IP topology can yield unexpected behavior, such as overload of the new link if its routing metric is too low, light load on the link if the metric is too high, or overload of another link under certain failure conditions. All of this requires global knowledge that is hard to compile and disseminate in a distributed system.
This process also requires a sophisticated network simulation engine (or online planning tool) that goes through various what-if scenarios. Such simulation is CPU- and memory-intensive, and therefore a poor fit for the embedded controllers that run the distributed control plane in the gear.
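As a rough illustration of the what-if loop described above, the sketch below checks a candidate IP link under normal conditions and under each failure scenario. The topology object (assumed to expose copy() and add_link()), the traffic-matrix representation, and the simulate_routing callable are all hypothetical placeholders for a real simulation engine.

```python
from typing import Callable, Dict, Iterable, Tuple


def evaluate_ip_link_addition(
    topology,                                      # current IP/MPLS topology (opaque here)
    traffic_matrix: Dict[Tuple[str, str], float],  # end-to-end demands, e.g. in Gb/s
    new_link,                                      # candidate IP link with its chosen IGP metric
    failure_scenarios: Iterable,                   # e.g. single-SRLG failures to test
    simulate_routing: Callable,                    # (topology, traffic, failed) -> {link: utilization}
    max_utilization: float = 0.8,
):
    """Accept the candidate link only if no link exceeds the utilization
    threshold under normal conditions and under every tested failure."""
    trial = topology.copy()
    trial.add_link(new_link)

    for failure in [None, *failure_scenarios]:     # None = no failure
        utilization = simulate_routing(trial, traffic_matrix, failed=failure)
        worst_link, worst = max(utilization.items(), key=lambda kv: kv[1])
        if worst > max_utilization:
            # A metric that is too low overloads the new link; one that is too
            # high leaves it idle while another link overloads under failure.
            return False, (failure, worst_link, worst)
    return True, None
```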
Resulting Hierarchical Architecture
So far we have discussed why central control is needed (potentially in conjunction with distributed control). The paper goes into more detail about the relative roles of centralized and distributed control systems, but I’m going to leave that to another blog post.
However, I would like to explain why I believe that the control architecture in this case will be hierarchical: single vendor/layer controllers at the bottom of the pyramid, an orchestration platform connected to their northbound interfaces (NBIs) in the middle, and multilayer apps on top, as shown in the following figure. This is because each of these controllers will be built by a different vendor to best control its gear and will not necessarily be based on the same common platform; some will be built on top of OpenDaylight, some might be built using ONOS, and some might leverage an existing network management system code base. As a result, a single monolithic controller that includes components from different vendors remains an elusive vision.
Further justifying a system built out of separate controllers is the need for each vendor to keep innovating at its own pace, resulting in unsynchronized upgrade cycles. And let’s not forget security considerations; separate controllers with well-defined APIs are less vulnerable than a single system that runs code from different vendors.
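A rough sketch of the middle of that pyramid: the orchestrator programs against one common northbound abstraction, and each underlying controller, whether built on OpenDaylight, ONOS, or an existing NMS code base, is wrapped by its own adapter. Every class and method name below is invented for illustration and does not correspond to any real controller API.

```python
from abc import ABC, abstractmethod
from typing import Dict


class ControllerNBI(ABC):
    """Common abstraction the orchestrator programs against."""

    @abstractmethod
    def get_topology(self) -> dict:
        ...

    @abstractmethod
    def setup_connection(self, src: str, dst: str, constraints: dict) -> str:
        ...


class OpenDaylightAdapter(ControllerNBI):
    """Wraps a vendor controller built on OpenDaylight."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def get_topology(self) -> dict:
        # Translate the controller's own topology model into the common one.
        raise NotImplementedError

    def setup_connection(self, src: str, dst: str, constraints: dict) -> str:
        raise NotImplementedError


class LegacyNMSAdapter(ControllerNBI):
    """Wraps an existing network-management-system code base."""

    def get_topology(self) -> dict:
        raise NotImplementedError

    def setup_connection(self, src: str, dst: str, constraints: dict) -> str:
        raise NotImplementedError


class Orchestrator:
    """Middle of the pyramid: one adapter per vendor/layer controller."""

    def __init__(self, domains: Dict[str, ControllerNBI]):
        self.domains = domains

    def global_topology(self) -> dict:
        # Multilayer apps on top see one merged view, regardless of which
        # platform each underlying controller is built on.
        return {name: ctrl.get_topology() for name, ctrl in self.domains.items()}
```

Keeping the per-vendor code behind well-defined adapters is also what lets each controller be upgraded on its own schedule and limits the attack surface exposed to the other vendors' code.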
More details on considerations leading to this architecture can be found in the recording of my OFC’15 tutorial here; however, one needs an IEEE/OSA account to access it.