• A CloudEngine Switch Built for the AI Era

    A fully connected, intelligent world is fast approaching. As data centers evolve from the cloud era to the AI era, they become the infrastructure core for new technologies such as 5G and Artificial Intelligence (AI).

    The Huawei CloudEngine 16800 is the industry’s first data center switch built for the AI era. Huawei defines three characteristics of an AI-era data center switch: an embedded AI chip, a 48-port 400 GE line card per slot, and the capability to evolve toward autonomous driving networks. By innovatively incorporating AI technologies into data center switches, Huawei helps customers accelerate their intelligent transformation.

Embedded AI Engine, Advances Toward Autonomous Driving Networks

The convergence of computing and storage, together with the popularity of AI applications, presents new challenges for intelligent O&M and intelligent data center network optimization. Huawei’s CloudEngine 16800 is equipped with a high-performance AI chip that enables local analysis and real-time decision-making, realizes intelligent lossless switching and intelligent data center network maintenance, and gradually advances toward autonomous driving networks with zero packet loss and zero faults.

  • AI training efficiency improved by 40%
  • Distributed storage IOPS improved by 30%
  • CAPEX reduced by 90%
  • Fault location time reduced from hours to minutes

Core of the Core: AI Chip

The high-performance AI chip runs the innovative iLossless algorithm to deliver low-latency, high-throughput network performance with zero packet loss. By overcoming the computing power limitations that packet loss imposes on traditional Ethernet, it raises AI computing power utilization from 50 percent to 100 percent and improves data storage Input/Output Operations Per Second (IOPS) by 30 percent.

Industry-Leading Performance

The CloudEngine 16800 is built on an upgraded hardware switching platform whose backplane-free orthogonal architecture overcomes multiple technical challenges, including high-speed signal transmission, heat dissipation, and power supply. It provides the industry’s highest-density 48-port 400 GE line card per slot and the industry’s largest switching capacity of 768 400 GE ports (five times the industry average), meeting the multiplying traffic demands of the AI era. In addition, power consumption per bit is reduced by 50 percent, ensuring greener operation.

  • Cloud-era 100G platform switch: eighty 36 x 100G line cards
  • AI-era CloudEngine 16800: sixteen 48 x 400G line cards (5x the capacity)
  • 48-port 400 GE line card per slot
  • Up to 768 400 GE ports
  • 5x higher switching capacity: one device can replace five
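As a quick sanity check, the headline port density multiplies out from the figures quoted above (a minimal sketch; the slot count, per-card port count, and port speed are taken directly from the list, while the Tbit/s conversion is simple arithmetic):

```python
# Sanity-check the headline port figures quoted above.
slots = 16            # sixteen 48 x 400G line cards per chassis
ports_per_card = 48   # 48-port 400 GE line card per slot
port_speed_gbps = 400

total_ports = slots * ports_per_card
raw_capacity_tbps = total_ports * port_speed_gbps / 1000

print(total_ports)        # 768 ports of 400 GE
print(raw_capacity_tbps)  # 307.2 Tbit/s of raw unidirectional port capacity
```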

Each year, electricity usage is reduced by 320,000 kWh and carbon emissions by over 259 tons.
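Those two savings figures imply a grid emission factor, which a little arithmetic makes explicit (a sketch only; the roughly 0.81 kg CO2 per kWh factor is derived from the quoted numbers, not stated in the source):

```python
# Emission factor implied by the annual savings figures above.
electricity_saved_kwh = 320_000   # quoted annual electricity reduction
co2_saved_kg = 259_000            # quoted annual carbon reduction (259 tons)

implied_factor = co2_saved_kg / electricity_saved_kwh  # kg CO2 per kWh
print(round(implied_factor, 2))  # ~0.81 kg CO2 per kWh
```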

  • 55% space savings
  • 49% electricity savings
  • 50% cooling savings
  • Power consumption per bit reduced by 50%

  • SuperPower:
    Ultra-High Efficiency Power Supply

    Magnetic blow-out and large-exciter technologies enable millisecond-level switchover between a power module’s dual independent inputs. This saves significant space in the equipment room and improves power supply efficiency per unit of space by 95 percent.
  • SuperCooling:
    Powerful Heat Dissipation Medium

    A new carbon nanotube thermal pad and vapor chamber (VC) phase-change radiator improve cooling efficiency four-fold and improve the reliability of the entire system by 20 percent.
  • SuperCooling:
    Fan

    The industry’s first mixed-flow fans and magnetic-permeability motor achieve the industry’s highest whole-system cooling efficiency. Average power consumption per bit is reduced by 50 percent, and a noise-reducing deflector ring lowers noise by 6 dB.

Products & Solutions

  • CloudFabric Data Center Network
  • AI Fabric Intelligent and Lossless Data Center Network
  • Advanced Technologies

    Huawei’s patented congestion-scheduling algorithm enables smart O&M and reduces AI training time by 40 percent.

  • Mature Commercial Use

    Huawei’s CloudFabric Solution has been deployed in more than 6,400 data centers in over 120 countries.

  • Open Architectures

    Huawei works with 20 top alliance partners for device interconnection. As a result, devices and controllers from the ecosystem are open and interoperable.

  • Lossless Network Technology

    Zero packet loss on the Ethernet network.

  • Low Latency

    Efficient network performance with high throughput and ultra-low latency.

  • Low Cost

    Decreases AI training time by over 40 percent and reduces TCO by 53 percent.

Venue

China World Summit Wing Hotel, Beijing, China
