Embedded AI chip, unleashing 100% of AI computing power
The convergence of computing and storage, together with the growing popularity of AI applications, presents new challenges for intelligent O&M and data center network optimization. Huawei's CloudEngine 16800 is equipped with a high-performance AI chip that enables local analysis and real-time decision-making, delivers intelligent lossless switching and intelligent data center network maintenance, and advances step by step toward an autonomous driving network with zero packet loss and zero faults.
- AI training efficiency improved by 40%
- Distributed storage IOPS improved by 30%
- Fault location time reduced from hours to minutes
- CAPEX reduced by 90%
Core of the Core: AI Chip
The high-performance AI chip runs the innovative iLossless algorithm to deliver lower latency and higher throughput on a zero-packet-loss network. By overcoming the computing-power limitations that packet loss imposes on traditional Ethernet, it raises effective AI computing power from 50% to 100% and improves data storage Input/Output Operations Per Second (IOPS) by 30%.
Industry-Leading Performance
The CloudEngine 16800 is built on an upgraded hardware switching platform. Its orthogonal, backplane-free architecture overcomes multiple technical challenges, including high-speed signal transmission, heat dissipation, and power supply. Each slot supports a high-density 48-port 400 GE line card, for a total switching capacity of 768 400 GE ports per chassis (five times the industry average), meeting the multiplied traffic demands of the AI era. In addition, power consumption per bit is reduced by 50%, ensuring greener operation.
Each year, electricity usage is reduced by 320,000 kWh and carbon emissions by over 259 tons.
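The quoted savings figures are mutually consistent under a plausible grid emission factor. A minimal sketch of the arithmetic, assuming a factor of roughly 0.81 kg CO2 per kWh (an assumed value, not stated in the text, but typical of coal-heavy grids):

```python
# Check the implied emission factor behind the stated savings.
saved_kwh = 320_000        # annual electricity savings, from the text
emission_factor = 0.81     # kg CO2 per kWh -- assumption, not from the text

co2_tons = saved_kwh * emission_factor / 1000  # kg -> metric tons
print(round(co2_tons, 1))  # → 259.2, matching "over 259 tons"
```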
- 55% space savings
- 49% electricity savings
- 50% cooling savings
- Power consumption per bit reduced by 50%