Data center networking was initially focused on traffic within a single data center. However, over time connections between two or more data centers have become increasingly important. This is where data center interconnect (DCI) solutions come into play. Several enterprise data center challenges and trends shed light on the capabilities and limitations of current DCI solutions. These include the need for greater bandwidth, cost efficiency and flexibility. The data center interconnect solutions of tomorrow are poised to bring advances in all of these areas.

What is data center interconnect?

Data center interconnect (DCI) technology connects multiple data centers together. Traditionally, data center interconnect solutions were almost exclusively used as part of disaster recovery and business continuity plans. This way, data could be quickly restored from another data center in the event of failure or another cause of data loss. Over time, however, data center connectivity has increasingly been used not only for replication and backup but also for optimizing workloads across locations.

DCI technology empowers enterprises to optimize their data workloads more effectively in several ways. These include load balancing as well as sharing both physical and virtual resources across multiple locations. In data center interconnect solutions, load balancing means distributing network traffic effectively across servers. This becomes more efficient when the resources of multiple data centers can be combined using data center interconnect technologies.
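As a simple illustration of the idea, the servers of two interconnected sites can be pooled and treated as one logical resource, with traffic rotated across them. The site names and addresses below are hypothetical, and a real DCI load balancer would add health checks and weighting rather than plain round-robin:

```python
from itertools import cycle

# Hypothetical server pools in two interconnected data centers.
# Names and addresses are illustrative only.
POOLS = {
    "dc-east": ["10.0.1.10", "10.0.1.11"],
    "dc-west": ["10.1.1.10", "10.1.1.11"],
}

def round_robin(pools):
    """Yield servers from all data centers in turn, treating the
    combined pools as one logical resource."""
    return cycle([server for pool in pools.values() for server in pool])

rr = round_robin(POOLS)
first_four = [next(rr) for _ in range(4)]
# Traffic rotates through all four servers across both sites before repeating.
```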

Another way DCI helps optimize workloads across data centers is that it makes it easier to apply quality of service and other policies. Given that much sensitive data, such as financial transactions and personally identifiable data, is generally stored in data centers, data security is crucial. With advanced networking solutions, in-flight encryption and robust access controls can be applied to data center interconnect operations. This is key for ensuring compliance in the context of transferring data between data centers, especially as distances increase and borders are crossed.
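In practice, DCI encryption is often applied at lower layers, for example MACsec or the optical layer itself, but as a minimal application-level sketch, Python's standard `ssl` module shows what enforcing modern in-flight encryption for a cross-site transfer can look like:

```python
import ssl

# Sketch: a client-side TLS policy for cross-data-center transfers.
# Requires TLS 1.3 (no downgrade) and full certificate verification.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED

# ctx could now wrap any socket used for replication traffic, e.g.:
# secure_sock = ctx.wrap_socket(sock, server_hostname="dr-site.example.net")
```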

Finding the sweet spot between reach and simplicity for data center interconnect solutions

The further apart data centers are, the more important high-performance data center interconnect solutions become to minimize latency. About 90% of all DCI solutions cover a range of 10-80 km, a common distance within a metro area. This means that short to medium range networking solutions have been good enough so far. However, this may not always be the case as some enterprises will increasingly look to interconnect data centers across regions and borders.
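The latency floor here is set by physics: light in single-mode fiber travels at roughly c divided by the fiber's refractive index, or about 5 µs per kilometer one way. A back-of-the-envelope sketch:

```python
C_VACUUM = 299_792_458   # speed of light in vacuum, m/s
FIBER_INDEX = 1.468      # typical refractive index of single-mode fiber

def one_way_latency_ms(distance_km):
    """Propagation delay over fiber, ignoring equipment and queuing delay."""
    speed = C_VACUUM / FIBER_INDEX            # ~2.04e8 m/s, ~5 us per km
    return distance_km * 1000 / speed * 1000  # metres / (m/s) -> s -> ms

# An 80 km metro link adds under half a millisecond one way,
# while a 1,000 km cross-region link adds around 5 ms.
```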

After DWDM proved effective in extending the reach of telecom operators, enterprises have increasingly turned to it for data center interconnect solutions as well. This is especially the case when high performance is important. However, traditional telco-grade DWDM-based solutions can be overkill in terms of cost and complexity for a data center environment. Not to mention that their chassis-based form factor isn't exactly a natural fit with the box switches and rack-mountable servers used in typical data centers.

Therefore, today's data center interconnect solutions are expected to provide the reach of telecom-grade DWDM with the form factor and simplicity of in-house networking infrastructure. For instance, transceivers embedded directly into switches, or muxponders and transponders deployed as open line systems, are simple and tailored to match existing data center infrastructure.

Increased productivity and cost-efficiency crucial for tomorrow’s data center connectivity

Manual administration of data center interconnect operations can be labor-intensive, error-prone and slow. As a result, the DCI solutions of tomorrow will increasingly focus on reduced complexity and increased automation. Open APIs serve as a key enabler for automation by allowing for increased flexibility, central management and custom scripting and applications. This paves the way for decreasing complexity, which makes provisioning, maintenance and other fundamental operations simpler.
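As a sketch of what such custom scripting against an open API might look like, the snippet below builds a provisioning request for a point-to-point service. The field names and the endpoint URL are purely illustrative assumptions, not any particular vendor's API:

```python
import json

# Hypothetical payload for provisioning a DCI wavelength service via an
# open REST API. All names here are illustrative assumptions.
def build_service_request(a_end, z_end, rate_gbps):
    """Assemble a point-to-point service request between two sites."""
    return {
        "service-type": "point-to-point",
        "endpoints": [a_end, z_end],
        "rate-gbps": rate_gbps,
    }

payload = build_service_request("dc-east", "dc-west", 100)
body = json.dumps(payload)

# The same payload could be POSTed with any HTTP client, e.g.:
# requests.post("https://nms.example.net/api/services", data=body)
```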

Lost productivity due to manual administration has a certain indirect negative effect on cost efficiency. But that’s not the only cost concern for data center interconnect setups. Cost efficiency also becomes a challenge as more and more data is transmitted between data centers via DCI. One key challenge is ensuring costs do not increase at the same rate as bandwidth. This requires optimizing performance with WDM and related technologies as well as lower power consumption and smaller footprints. All of this is crucial for ensuring the lowest possible cost per bit for the data center interconnect solutions of tomorrow.
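Cost per bit can be illustrated with simple arithmetic. All figures below are hypothetical, but they show why an upgrade only lowers cost per bit when cost grows more slowly than bandwidth:

```python
def cost_per_gbit(monthly_cost, bandwidth_gbps, utilization=0.5):
    """Rough monthly cost per gigabit transferred over a DCI link.
    Inputs are illustrative; real models include power, space and optics."""
    seconds_per_month = 30 * 24 * 3600
    gbits_transferred = bandwidth_gbps * utilization * seconds_per_month
    return monthly_cost / gbits_transferred

# Hypothetical example: quadrupling bandwidth at only 1.5x the cost
# cuts the cost per bit substantially.
base = cost_per_gbit(10_000, 100)
upgraded = cost_per_gbit(15_000, 400)
```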

Higher bandwidth requirements driving DCI upgrades today and tomorrow

Enterprise data centers are seeing rapid growth in demand for more bandwidth in data center interconnect solutions. This is being driven by the steady increase in the volume and importance of data in line with big data and related trends. An increasing need for supporting content delivery, the prevalence of cloud computing applications and higher data duplication requirements for redundancy and security are all putting greater pressure on bandwidth.

This is pushing upgrades to 100G higher on the agenda for enterprises to ensure reliable, high-capacity data center interconnect solutions. However, some enterprises are also intrigued by solutions for 400G speeds recently introduced to the market. One of the most interesting developments for DCI is the new standard, 400ZR, introduced along with 400G. It brings bandwidth improvements and more networking power in small form factors.

Designed for interoperability between systems from different vendors by the Optical Internetworking Forum (OIF), 400ZR also enables simpler and more flexible data center interconnect solutions with more open networking. All of this is well aligned with the future data center interconnect challenges referenced above involving cost efficiency, simplicity and capacity.

Make the data center interconnect upgrade path simpler with the right open line system

Instead of trying to decide whether to upgrade to 100G now or 400G later, enterprises can make the upgrade path simpler with an open line system that supports both 100G connectivity and 400ZR. This opens the door to a lower total cost of ownership for data center interconnect operations and provides excellent flexibility.

Smartoptics provides innovative optical networking solutions and devices for the new era of open networking. Our flexible and futureproof solutions based on embedded DWDM include transceivers, transponders, smart management and SDN control. Learn more about Smartoptics DCI solutions built around open line systems with support for 100G and 400ZR.


Get the 400G pluggable DWDM solution brief

Download our solution brief and learn more about the 400G DWDM solution

