Service providers spend billions to assure network availability and avoid service outages. The typical result of that spend is overprovisioning of the network. Common wisdom says that while overprovisioning is expensive, it’s necessary – just to be on the safe side. Even so, network availability often comes up short, as evidenced by numerous customer outages and SLA penalties paid out each year.
How can this be? Haven’t network planners and operators figured this out after years of experience?
The problem isn’t with the expertise of network planners. The problem lies in the network data available to them, and in the limited ability of the network layers to respond to changes automatically and in harmony with one another.
First, let’s look at the importance of network data.
Accurate Network Data is Key to Resiliency Planning
Decisions about the best way to assure service resiliency are hampered by network data that is collected manually, maintained in Excel spreadsheets, and often inaccurate or incomplete. Keeping those spreadsheets current, or bringing them up to date when they are not, takes many hours and considerable expense, including technician trips to verify layer 0-3 connectivity and service paths across numerous network sites. As a result, network planning and provisioning rules are updated only periodically. In the meantime, planners and operators must make do with data that probably does not reflect actual network inventory and service paths. This in turn leads to vulnerabilities, such as shared risk link group (SRLG) issues, and inefficiencies, such as underutilization of resources.
To assure resiliency, network planners need accurate, real-time network data. Ideally that data should feed directly into planning systems and paint a live picture of network inventory and service routes, from the IP layer down to the optical fiber layer, across all infrastructure domains and vendors. Complete, up-to-the-second network data that is easy to understand and manipulate is something few service providers have. While this data is critical today, it becomes even more important as service providers prepare to move into 5G territory.
Network Data You Can Trust for Automation
Lack of automation is another obstacle to assuring network uptime. First of all, service providers need a platform that automates the real-time network discovery process that we just described.
Once network planners have network data they can trust, they can use it to automate many of their planning and provisioning processes and enable the network to self-correct. For example, an automated system like the Sedona NetFusion Network Intelligence and Automation Platform identifies points of concern in and across the network that could potentially cause network outages or disconnections – and helps the planner address those issues. It also detects shared risks between links and services, flagging risk violations and automating alternative remedies to detected risks.
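The shared-risk check described above can be sketched in a few lines. This is an illustrative example, not the vendor's implementation: it assumes inventory data that maps each link to the set of shared risk link groups (SRLGs) it traverses – say, a common fiber duct or amplifier site – and tests whether a primary path and its supposedly diverse backup actually share any risk. All link and SRLG names below are hypothetical.

```python
def srlg_violations(primary, backup, link_srlgs):
    """Return the SRLGs shared by two supposedly diverse paths."""
    risks = lambda path: set().union(*(link_srlgs[link] for link in path))
    return risks(primary) & risks(backup)

# Inventory as real-time discovery might report it (illustrative data):
link_srlgs = {
    "A-B": {"duct-17"},
    "B-C": {"duct-17", "amp-site-4"},
    "A-D": {"duct-9"},
    "D-C": {"amp-site-4"},
}

primary = ["A-B", "B-C"]
backup = ["A-D", "D-C"]

# The "diverse" backup actually shares amp-site-4 with the primary,
# so a single failure there takes down both paths.
print(sorted(srlg_violations(primary, backup, link_srlgs)))  # -> ['amp-site-4']
```

This is exactly the kind of violation a spreadsheet rarely reveals: the two paths are node- and link-disjoint, yet a single physical failure still breaks both.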
Once provisioned, the network is constantly monitored to assure network uptime. Disruptions are detected, analyzed, and repaired by automatically adjusting existing resources to self-correct resilience deficiencies, instead of deploying additional gear.
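To make the self-correction idea concrete, here is a minimal sketch (again, a generic illustration, not the product's logic): when monitoring flags a link as down, the affected service is rerouted over surviving capacity using a shortest-path computation, rather than waiting for additional gear. The topology and weights are hypothetical.

```python
import heapq

def shortest_path(graph, src, dst, failed=frozenset()):
    """Dijkstra over surviving links; returns (cost, path) or None."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if (node, neighbor) in failed or (neighbor, node) in failed:
                continue  # skip links monitoring has flagged as down
            heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None

graph = {
    "A": {"B": 1, "D": 2},
    "B": {"A": 1, "C": 1},
    "C": {"B": 1, "D": 1},
    "D": {"A": 2, "C": 1},
}

# Normal path from A to C runs via B; when link B-C fails,
# traffic self-corrects over existing capacity via D.
print(shortest_path(graph, "A", "C"))                       # -> (2, ['A', 'B', 'C'])
print(shortest_path(graph, "A", "C", failed={("B", "C")}))  # -> (3, ['A', 'D', 'C'])
```

The point of the sketch is the workflow, not the algorithm: detection feeds a computation over accurate topology data, and the remedy uses resources already in the ground.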
It helps to visualize what we’re talking about. The rules-based assurance method in the figure below on the left starts with questionable data (manually collected) and just gets more complicated from there. Data-driven proactive assurance, shown on the right, starts with accurate and complete data (thanks to real-time network discovery), and continues with an automated process that is streamlined and efficient.
The choice is clear. Resiliency planning and automation powered by accurate network data can help service providers maximize network uptime at a much lower cost.