The demand for data centres is growing fast, with data centre capacity expected to double by 2023 to over 1,000 MW. According to NASSCOM, the Indian data centre market could see cumulative investments of $25 billion between 2019 and 2025. Given this growing demand, the Indian government released the Data Centre Policy in 2020. The policy seeks to ensure sustainable and trusted data centre capacity in the country to meet the enormous demand. It also aims to strengthen India’s position as one of the most favourable destinations for data centres by incentivising and facilitating the establishment of state-of-the-art facilities. In addition, the policy seeks to promote domestic manufacturing of both IT and non-IT components, so as to increase domestic value addition and reduce dependence on imported data centre equipment.
A look at some of the key connectivity requirements of data centres and the growing data centre interconnectivity market…
Data centres vary widely in their design and requirements. Factors influencing data centre design include the available budget; adherence to insurance and building codes; availability of power and cooling capacity; site safety protocols to safeguard against disasters such as floods and seismic shocks; space for moving equipment; the weight-bearing capacity of the site; and the availability of uninterrupted high speed connectivity.
Of these, uninterrupted high speed connectivity is receiving increasing focus from data centre operators, and optic fibre cable is seen as the most suitable way to provide it. While traditional copper cables can provide only about 10 Gbps of bandwidth, fibre optic cables can carry over 60 Tbps. Optic fibre is also far more immune to noise, whereas copper is susceptible to EMI/RFI, crosstalk and voltage surges. Moreover, fibre cables are more secure than copper cables as they are nearly impossible to tap. In addition, fibre cables are lightweight (around 4 lbs), have a thin diameter and come with strong pulling strength, while copper cables are much heavier (over 39 lbs), thicker and subject to strict pulling specifications. Over and above this, optic fibre cables have a life cycle of 30-50 years and consume around 2 W per user, compared with a life cycle of about five years and over 10 W per user for copper.
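The fibre-versus-copper figures quoted above can be collected into a quick back-of-the-envelope comparison. This is only an illustrative sketch using the article's headline numbers, not the specifications of any particular cable:

```python
# Illustrative fibre vs copper comparison using the figures quoted in the
# article (not specifications for any particular cable product).
fibre = {
    "bandwidth_gbps": 60_000,      # over 60 Tbps
    "lifecycle_years": (30, 50),
    "energy_w_per_user": 2,
    "weight_lbs": 4,
}
copper = {
    "bandwidth_gbps": 10,          # about 10 Gbps
    "lifecycle_years": (5, 5),
    "energy_w_per_user": 10,
    "weight_lbs": 39,
}

# Ratios that make the gap concrete.
bandwidth_ratio = fibre["bandwidth_gbps"] / copper["bandwidth_gbps"]
energy_saving = copper["energy_w_per_user"] - fibre["energy_w_per_user"]

print(f"Fibre carries roughly {bandwidth_ratio:,.0f}x the bandwidth of copper")
print(f"Energy saving per user: {energy_saving} W")
```

On these numbers alone, fibre delivers several thousand times the bandwidth at a fraction of the per-user energy cost, which is why it dominates new data centre builds.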
Data centre interconnectivity market
Data centre interconnectivity (DCI) covers both intra- and inter-data centre connectivity. Intra-data centre links are short reach and highly cost sensitive, connecting servers and storage within a facility, while inter-data centre links provide long reach between facilities. The global DCI market is projected to reach around $6 billion by 2025, with 50 per cent of it expected to come from emerging economies, dominated by Asia. DCI spending is led by web-scale companies (such as FAANG) in developed markets and by telcos in developing countries such as India.
Key DCI requirements
- Bandwidth scalability: DCI demands high-bandwidth connections, with wavelength capacities ranging from 100G to 600G and support for up to 64 waves per fibre at 600G per wave, or 96 waves per fibre at 200G per wave. It also requires a rich set of client interfaces (SDH, Ethernet, Fibre Channel) with tunable and pluggable optics.
- Transmission security: This is another key requirement of DCI. Data encryption can be applied at the client layer (for instance, Ethernet) or at the frame (OTN) layer.
- Terabit-scale switching: Countrywide networks require scalable multi-terabit OTN cross-connects at metro data centres to optimally aggregate traffic, ranging from sub-Tbps to tens of Tbps, from multiple edge data centres. Such systems enable pay-as-you-grow scaling through a disaggregated leaf-and-spine architecture.
- Service agility: DCI requires software-defined networking (SDN) for service agility. SDN in the transport layer enables network automation, which speeds up configuration and provisioning tasks; network intelligence, which enables proactive responses to network events such as failures; and network programmability, which integrates third-party applications and service requests through APIs.
- Future-proof architecture: Software-defined hardware lowers total cost of ownership and lets service providers deliver custom features. For OEMs, it offers shorter time-to-market and field upgrades on the installed base, while design reuse lowers research and development costs.
New data centre build-outs and intra-data centre infrastructure upgrades to support rising cloud service adoption are driving high capacity DCI networks. Modern DCI networks must satisfy multiple requirements: scalable capacity with maximised bandwidth-reach, secure and low-latency transport, cost-effective terabit-scale switching, versatile software control and future-proof programmable hardware.
Based on a presentation by Kanwar Jit Singh, Vice-President, Tejas Networks