Wang Zhiwen | 2020-03-09
Data Center Interconnect (DCI) connects two or more isolated data centers for mutually reliable and efficient data transmission. DCI platforms share these common characteristics:
Compact footprint in 1RU or 2RU chassis;
IT equipment form factor with front-to-back airflow;
Modular design support for low-cost pluggable line interfaces;
Simple service modules, usually transponder or muxponder.
DCI platforms emerged primarily in response to the relentless growth of webscale cloud business and ICP content distribution traffic, which drives demand for high-capacity connections between cloud data centers. At the same time, a variety of industries are creating their own DCI use cases and networks.
At Europe's top IXP, DCI devices using only one pair of dark fibers replaced legacy equipment that required more than 200 dark fibers, cutting not only fiber rental costs but also laborious fiber patching and maintenance. Service provisioning time dropped from 12 weeks, or even 6 months, to only 4 hours.
In the financial industry, DCI platforms are built for disaster recovery and mission-critical services. As mobile banking and remote working became widespread, a national European bank replaced its private lines with its own DCI network to handle a 250% increase in traffic.
In the public sector, simplified and compact DCI platforms require less rack space and dark fiber, and eliminate the need for dedicated professional WDM O&M engineers.
Furthermore, self-built DCI networks are gradually spreading across National Research and Education Networks (NRENs), healthcare, and other industries.
• Data center growth and distribution drive the enormous shipment of purpose-built DCI devices:
According to a third-party report, the number of data centers will quadruple by 2025, and DC capacity is increasingly limited by real estate, power, or the physical connectivity available at chosen sites. Meanwhile, applications and content are moving ever closer to end users. All of these factors drive DCI equipment shipments, and customers will continue to compare solutions on capacity, footprint, power consumption, flexibility, and scalability.
• More investment into DCI equipment to improve fiber utilization, lowering OPEX:
DC operators need to keep breaking through fiber transmission bottlenecks without deploying or renting new fiber resources, either by widening the usable spectrum or by adopting higher-order modulation schemes.
• Low-cost pluggable modules for DCI are attracting more enterprise customers:
The increased availability of low power consumption and low-cost pluggable modules, both on the client and line side, makes modular DCI solutions more appealing in smaller DC or enterprise WAN environments. Most vendors are developing one version of their Optical Digital Signal Processors (oDSPs) aimed at low-power, low-cost applications and another for high-capacity, high-performance use cases.
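The spectrum-versus-modulation trade-off described above can be sketched with some illustrative arithmetic. The channel counts and symbol rates below are typical textbook values chosen for this example, not figures from the report:

```python
# Illustrative arithmetic only: parameters are assumed, not quoted from the report.
def fiber_capacity_gbps(spectrum_ghz, spacing_ghz, baud_gbd, bits_per_symbol):
    """Per-fiber capacity = channels x symbol rate x bits/symbol x 2 polarizations."""
    channels = spectrum_ghz // spacing_ghz
    return channels * baud_gbd * bits_per_symbol * 2

# Conventional C band (~4800 GHz) at 50 GHz spacing, 32 GBd coherent carriers:
qpsk = fiber_capacity_gbps(4800, 50, 32, 2)   # QPSK: 2 bits/symbol
qam16 = fiber_capacity_gbps(4800, 50, 32, 4)  # 16QAM: 4 bits/symbol
print(qpsk, qam16)  # 12288 24576
```

Doubling the bits per symbol doubles the per-fiber capacity on the same fiber plant, which is why programmable modulation matters as much as raw spectrum.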
What can IT directors learn from this report?
DCI networks initially used point-to-point connections with simple optical line systems, and later evolved to ring or mesh topologies with more complicated fixed optical add/drop multiplexers (FOADMs) or reconfigurable optical add/drop multiplexers (ROADMs). The features buyers care about most are capacity, simplicity, ease of deployment, and pay-as-you-grow scalability.
• Capacity Is King: Capacity is the top priority, and customers tend to over-provision for resilience, with platforms achieving up to 9.6 Tbps of throughput per chassis and 4.8 Tbps per RU.
• Line Interfaces and Transport Features: DCI platforms support a maximum per-wavelength capacity of 600G to 800G with 64QAM programmable coherent solutions, and up to 120 wavelengths in the Super C band using standard 50 GHz channel spacing. Meanwhile, Huawei's future-proof Super C+L band platform can support up to 76.8 Tbps (upgradable to 88 Tbps) on a single fiber, potentially doubling per-fiber transport capacity.
• Client Port Capacities: The 100GbE port will remain the most important client-side interface for the foreseeable future, with 400GbE growing in importance over 2020. The need for slower Ethernet interfaces is secondary, but demand for OTN interfaces and other interface types (such as Fibre Channel and TDM) is increasing in financial-industry and enterprise use cases.
• DCI Is Becoming Modular and Disaggregated: Most products in this class support varying combinations of client and line interfaces, or optical amplifiers and FOADMs. The modules usually support pluggable client interfaces, predominantly SFP+, QSFP+, or QSFP28. On the line side, high-performance MSA modules offer up to 800G per wavelength, while low-cost, standards-based CFP2 Digital Coherent Optics (DCO) pluggable modules support wavelength capacities of up to 200G. To avoid the complexity of the optical line system, some vendors have also combined the traditional amplifier, optical supervisory channel unit, fiber interface unit, and optical performance monitor into a single module.
• Automation and Easy O&M: A third-party report predicts that 30% of large enterprises will be using AI for IT and network operations by 2023. As DCI deployments increase in size and complexity, automation has become increasingly important, both standalone and integrated across the network, tied in with other network domains through open APIs. Customers demand automation to simplify provisioning and change management, and to introduce self-healing and self-optimization features. The simplified, intelligent, auto-provisioning design of Huawei's DCI product has helped one of China's leading OTT vendors reduce person-days by 90% in E2E DCI network construction, covering planning, design, site surveying, installation, provisioning, and commissioning.
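As a hedged illustration of what API-driven provisioning can look like, the sketch below assembles a point-to-point service request. The payload schema, field names, and port naming are invented for this example; a real controller (such as iMaster NCE) defines its own northbound API:

```python
import json

def build_service_request(src_port, dst_port, rate_gbps):
    """Assemble a JSON body for a hypothetical point-to-point DCI service.

    In an automated workflow, an orchestration pipeline would POST a payload
    like this to the controller's northbound REST API, instead of an engineer
    configuring each device by hand.
    """
    return json.dumps({
        "service": {
            "type": "point-to-point",
            "endpoints": [src_port, dst_port],  # hypothetical port identifiers
            "rate-gbps": rate_gbps,
            "protection": "client-1+1",
        }
    }, indent=2)

print(build_service_request("dc1/slot2/port1", "dc2/slot3/port4", 400))
```

Keeping provisioning behind a single function like this is what enables the change-management and self-healing loops described above: the same request can be generated, validated, and replayed by software.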
What are the recommendations for DCI Buyers?
• Think Beyond Pure Connectivity: The most important buying criterion for DCI platforms remains throughput capacity per unit of rack space. Buyers should also look beyond headline wavelength capacities and examine the capacity/reach results that new DCI solutions can achieve on their existing or planned fiber plant.
• Examine the Benefits of Automation: DCI buyers should now examine the benefits of DCI solution automation and integrate automation into their own optical transport networks. The next step should be end-to-end automation spanning data center networks, IP networks, and optical transport.
• Consider Market Momentum: In DCI, adoption within large webscale client infrastructures can also serve as a catalyst for price cuts, as R&D costs are amortized across large-volume platform shipments. This boosts the platform's competitiveness in the marketplace, creating a positive feedback loop.
How is Huawei OptiXtrans DC908 performing?
The GlobalData report ranked Huawei OptiXtrans DC908 as a leader in architecture, performance, modularity, line interfaces, and network management. Despite being a first-time entrant, Huawei OptiXtrans DC908 outperformed mainstream vendors.
Huawei OptiXtrans DC908 scorecard in the report
Huawei OptiXtrans DC908 supports 800 Gbps per wavelength and as many as 220 wavelengths by leveraging future-proof Super C+L band technology, achieving the highest single-fiber capacity of 88 Tbps. It also features a simplified, intelligent architecture that makes deployment easy for IT engineers, with zero-touch commissioning in just eight minutes.
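One way to decompose the 88 Tbps figure from the quoted wavelength counts is shown below. The report only states the wavelength counts and the total; the 400G-per-50 GHz-channel rate is an assumption made here so the arithmetic closes:

```python
# Decomposing the quoted 88 Tbps per-fiber figure.
# The per-channel rate is an illustrative assumption, not a report figure.
super_c_waves = 120  # Super C band wavelengths @ 50 GHz spacing
l_band_waves = 100   # L band wavelengths @ 50 GHz spacing
per_wave_gbps = 400  # assumed sustainable rate per 50 GHz channel

total_tbps = (super_c_waves + l_band_waves) * per_wave_gbps / 1000
print(total_tbps)  # 88.0
```

Note that 120 + 100 = 220 wavelengths matches the count quoted above; the headline 800 Gbps per wavelength applies to wider channels, not to the full 50 GHz grid.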
For more information, please see the following comparison table:
| Buying Criteria | Item | Huawei OptiXtrans DC908 | Competitors |
|---|---|---|---|
| Architecture, Performance, Modularity | Max Client-Side Performance | 9.6 Tbps | 4.8 Tbps |
| | Max Line-Side Performance | 9.6 Tbps | 4.8 Tbps |
| | Power Consumption per Gig | 0.13 W/Gb | 0.16 W/Gb – 1.5 W/Gb |
| | System Redundancy | 1+1 control board protection<br>1+1 power supply backup<br>2+1 fan backup | Power supply redundancy only |
| Line Interfaces | Maximum 100G/200G Interfaces | 16 | 6 |
| | Maximum 400G Interfaces | 16 | 6 |
| | Maximum 600G Interfaces | 16 | 6 |
| | Maximum 800G Interfaces | 16 | Unsupported |
| Client Port Capacities | Maximum 10G Ethernet Ports | 160 | 120 |
| | Maximum 40G Ethernet Ports | 16 | Unsupported by most vendors |
| | Maximum 100G Ethernet Ports | 48 | 36 |
| | Maximum 400G Ethernet Ports | 16 | 9 |
| | Other Interfaces | 32 × OTU4<br>48 × OTU2/OTU2e<br>48 × STM-64<br>48 × FICON8G<br>48 × FC800/FC1200/FC1600<br>32 × FC3200 | Unsupported by most vendors |
| Transport Features | Maximum Interface Wavelength Capacity | 120 C-band wavelengths @ 50 GHz<br>100 L-band wavelengths @ 50 GHz (ready) | 96 C-band wavelengths @ 50 GHz |
| | Maximum Capacity per Fiber | 88 Tbps (ready) | 76.8 Tbps |
| | Network-Level Protection | Client 1+1 protection<br>Intra-board 1+1 protection<br>Optical line protection | Optical protection switching only |
| Network Management | NMS/Control Planes | Network-level WebGUI/eSight/iMaster NCE | WebGUI unsupported by most vendors |
| | Operation & Maintenance | "5A" deployment: five automatic processes (fiber auto-discovery, fiber connection auto-verification, wavelength auto-configuration, optical-layer auto-commissioning, service auto-adaptation)<br>Built-in AI chip for proactive O&M, network resource and health prediction<br>O&M LCD information display<br>Optical Doctor and Fiber Doctor for fiber and optical quality visualization and diagnosis<br>Latency measurement<br>Optical power management<br>Simplified fiber connections | No intelligent O&M features |
| Physical Attributes | Chassis Dimensions | 2U, 8 service board slots<br>Supports 600 mm cabinets | |
| | Power Requirements | AC, DC, HVDC | AC or DC |
| | Optical Transceiver Form Factor (pluggable/discrete) | Wavelength-tunable MSA optical modules<br>Wavelength-tunable pluggable CFP/CFP2-DCO<br>Pluggable QSFP28, QSFP+, and QSFP-DD | |