

Fast, Flexible Ethernet Network Processor

The architecture of Huawei’s Ethernet Network Processor (ENP) provides the flexibility and performance for handling new services, end-to-end Quality of Service (QoS), and simplified management. The keys are programmable Network Processors (NPs) for specific Ethernet services.

Huawei’s ENP switches are programmable and maintain high line rates. Designed for OpenFlow, ENP-based switches are integral to the future of Software-Defined Networking (SDN); in addition to handling today’s services, they will adapt to future needs with no change of hardware.

ASICs: High-Throughput, Not Extensible

Application-Specific Integrated Circuits (ASICs) have long been the heart of high-performance packet switching. By processing network packets in silicon logic (hardware) rather than software, ASICs are highly reliable, secure, and cost-effective with low power consumption.

ASICs are responsible for the performance of today’s gigabit per second speeds in addition to routing functions beyond basic Layer 2 switching. Large on-chip RAM supports Layer 3 switch functions for cloud services, video, Bring Your Own Device (BYOD), and Internet of Things (IoT) networks.

The downside is that ASICs, designed to process only pre-defined protocols, are inflexible. For example, if isolating an enterprise service within a discrete Virtual Private Network (VPN) using Multi-Protocol Label Switching (MPLS) — and the ASIC switch does not support MPLS — the enterprise would need new and different switches. In the same vein, an HD videoconferencing system may require an ASIC upgrade with a larger buffer capacity for handling bursty video streams. Even the smallest change in the forwarding process requires new silicon — which after design, fabrication, and integration, may take two years or more.

NPs Make Switches Programmable

NPs execute tasks ranging from packet processing to video segmentation. The work is performed by multiple Network Processing Unit (NPU) groups, each of which has an independent instruction space.

NPs are more flexible than ASICs but are often limited in performance, and general-purpose NPs also tend to have high power consumption. The flexibility of NP architectures is constrained by the instruction space of each NPU group: overloading an NPU group’s instruction space can prevent an additional service from being deployed on the network. Moreover, because instructions must be segmented by hand among the NPUs, this lack of automation often results in buffer overloads among the NPU groups.

For NPs to be broadly useful, they must be as fast and efficient as ASICs.

Fully Programmable ENP

Huawei set out to improve the flexibility and performance of NPs with the simple strategy of focusing on Ethernet services.

In the Huawei ENP architecture, the NPU groups differ from competing NPs in a significant way: every ENP group can access the complete instruction space, so any NPU group can execute any instruction in the service process. This flexibility eliminates the need for programmers to segment services among the groups, making Huawei’s ENP silicon more versatile than third-party NPs and simplifying the development of new services.
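The practical consequence of a shared instruction space can be modeled simply: with per-group spaces, each service’s microcode must fit entirely within a single group, and services must be packed among groups by hand; with a shared space, only total capacity matters. A minimal Python sketch, where all capacities and service sizes are hypothetical illustrations rather than actual ENP figures:

```python
# Toy model contrasting per-group vs shared instruction spaces.
# Capacities and service names below are illustrative, not ENP specs.

def can_deploy_segmented(services, group_capacity, num_groups):
    """Per-group spaces: each service's code must fit wholly in one
    group's space, packed by hand (modeled here as first-fit)."""
    free = [group_capacity] * num_groups
    for _name, size in services:
        for i in range(num_groups):
            if size <= free[i]:
                free[i] -= size
                break
        else:
            return False  # no single group can hold this service
    return True

def can_deploy_shared(services, group_capacity, num_groups):
    """Shared space: any group can fetch any instruction, so only
    the total capacity constrains deployment."""
    total = group_capacity * num_groups
    return sum(size for _name, size in services) <= total

services = [("l2_switch", 6), ("mpls_vpn", 5), ("qos", 5)]
print(can_deploy_segmented(services, group_capacity=8, num_groups=2))  # False
print(can_deploy_shared(services, group_capacity=8, num_groups=2))     # True
```

In the segmented case the third service fits in no single group even though total free space remains, which mirrors the deployment limit described above.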

The resulting ENP chips are multi-threaded to reduce the impact of I/O latency to and from external memory on NPU group execution performance, and they benefit from a power-throttling mechanism that keeps power consumption at the low levels associated with ASICs.

ENPs Boost Forwarding Performance

The ENP uses several strategies to speed up forwarding tasks, beginning with hardware-based pre-processing derived from Ethernet and IP packet forwarding methods. For example, the load on the NPU kernel can be reduced by pre-processing selected Layer 2, Layer 3, MPLS, and VPN services.
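The offload idea above can be sketched in a few lines: a hardware pre-parser extracts the header fields that common Layer 2/Layer 3/MPLS services need, so the NPU kernel receives parsed metadata instead of raw bytes. The field layout and dispatch logic below are a simplified illustration, not the ENP’s actual parser:

```python
# Sketch of hardware pre-processing offload. The pre-parser pulls out
# dst/src MAC and EtherType once, up front; the NPU kernel then only
# dispatches on pre-parsed metadata. Layout is a simplified example.

import struct

def preprocess(frame: bytes) -> dict:
    """Model of the hardware pre-parser for an Ethernet II frame."""
    dst, src = frame[:6], frame[6:12]
    ethertype = struct.unpack(">H", frame[12:14])[0]
    return {"dst_mac": dst, "src_mac": src,
            "ethertype": ethertype, "payload": frame[14:]}

def npu_kernel(meta: dict) -> str:
    # The kernel's load is reduced: it classifies, it does not parse.
    if meta["ethertype"] == 0x8847:   # MPLS unicast
        return "mpls"
    if meta["ethertype"] == 0x0800:   # IPv4
        return "ipv4"
    return "l2"

frame = b"\xaa" * 6 + b"\xbb" * 6 + b"\x88\x47" + b"payload"
assert npu_kernel(preprocess(frame)) == "mpls"
```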

The ENP supports complex instruction sets for Ethernet and IP packet forwarding that enable long instructions to complete in fewer clock cycles than is possible with other NPs. Moreover, the use of branch prediction enables the ENP to execute multiple branches of code (such as if-else sequences) simultaneously and then keep only the result from the valid branch. The chip can therefore execute multiple instruction steps in a single clock period. This capability alone provides a significant performance advantage over other NPs.
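Evaluating both sides of a branch and discarding the invalid result resembles predicated execution. A hypothetical Python sketch of the principle (software mimicry of the hardware behavior, not ENP microcode):

```python
# Sketch of predicated execution: both branch bodies are computed
# unconditionally, then a final select keeps only the valid result.
# In hardware this lets both paths of an if-else overlap instead of
# stalling on the condition.

def classify_serial(pkt):
    # Conventional branching: one path executes per packet.
    if pkt["vlan"] is not None:
        return ("tagged", pkt["vlan"])
    return ("untagged", 0)

def classify_predicated(pkt):
    # Both candidate results are produced...
    tagged_result = ("tagged", pkt["vlan"] if pkt["vlan"] is not None else 0)
    untagged_result = ("untagged", 0)
    # ...and the select keeps the one from the valid branch.
    return tagged_result if pkt["vlan"] is not None else untagged_result

pkt = {"vlan": 100}
assert classify_serial(pkt) == classify_predicated(pkt) == ("tagged", 100)
```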

Opening the Memory Access Bottleneck

Moving beyond the separate processing and memory components of traditional NP and ASIC designs, Huawei SmartMemory integrates a search engine, co-processor, and traffic manager into a single, on-silicon feature set.

The Huawei ENP SmartMemory processor makes memory access more efficient by eliminating memory access bottlenecks and the need for address latching and data-synchronization coordination between separate components. The result is a large reduction in the processing overhead needed to ensure data consistency when multiple NPU sources interact with shared memory.

In traditional IP/MAC address entry searches, devices resolve IP and MAC addresses using a two-step search operation that takes two memory I/Os: the first search returns an address entry index to the NPU groups, and a second search based on that index then retrieves the entry itself. In contrast, ENP SmartMemory returns the final search result to the NPU groups and/or forwarding logic in one step.
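The difference in memory I/Os can be made concrete with a toy lookup model. The table contents below are illustrative, and the one-step table simply models the effect of SmartMemory returning the final entry directly:

```python
# Toy comparison of a two-step IP/MAC lookup (index table, then entry
# table: two memory I/Os) against a one-step search that returns the
# final entry directly. Addresses and ports are illustrative.

index_table = {"10.0.0.1": 0, "10.0.0.2": 1}    # step 1: address -> index
entry_table = [("aa:bb:cc:00:00:01", "port1"),  # step 2: index -> entry
               ("aa:bb:cc:00:00:02", "port2")]

def lookup_two_step(ip):
    io_count = 0
    idx = index_table[ip]; io_count += 1      # first memory I/O
    entry = entry_table[idx]; io_count += 1   # second memory I/O
    return entry, io_count

# A combined table modeling the one-step SmartMemory result.
combined = {ip: entry_table[idx] for ip, idx in index_table.items()}

def lookup_one_step(ip):
    return combined[ip], 1                    # single memory I/O

assert lookup_two_step("10.0.0.1")[0] == lookup_one_step("10.0.0.1")[0]
print(lookup_two_step("10.0.0.1")[1], lookup_one_step("10.0.0.1")[1])  # 2 1
```

Halving the memory I/Os per lookup is where the per-packet saving comes from, since every forwarded packet triggers at least one such search.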

Full Support for OpenFlow

OpenFlow, managed by the Open Networking Foundation (ONF), defines a standard interface between the control and forwarding layers of a network architecture, and enables switches from multiple vendors to be managed using a single, open protocol. The Huawei ENP engine supports OpenFlow traffic forwarding with as many as 16 million flow table entries.

The fully programmable ENP supports both OpenFlow and traditional forwarding. This hybrid forwarding mode ensures user service continuity for smoothly migrating networks from traditional traffic forwarding to OpenFlow forwarding, without having to buy new hardware.
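A hybrid pipeline of this kind can be sketched as a two-stage lookup: try the OpenFlow flow table first, then fall back to the traditional forwarding table, so existing services keep working while flows migrate. The match fields and actions below are simplified assumptions, not the ENP’s actual pipeline:

```python
# Sketch of hybrid forwarding: an OpenFlow flow-table match takes
# precedence; unmatched traffic falls back to the traditional FIB.
# Match keys and actions are simplified illustrations.

flow_table = {("10.0.0.0/24", 80): "push_to_sdn_path"}  # OpenFlow entries
fib = {"10.0.0.0/24": "port3"}                          # traditional routes

def forward(dst_prefix, dst_port):
    entry = flow_table.get((dst_prefix, dst_port))
    if entry is not None:
        return ("openflow", entry)          # flow already migrated
    return ("traditional", fib[dst_prefix])  # legacy path still serves it

print(forward("10.0.0.0/24", 80))  # ('openflow', 'push_to_sdn_path')
print(forward("10.0.0.0/24", 22))  # ('traditional', 'port3')
```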

Power Consumption Throttle

The integration of packet buffer and forwarding functions onto a single ENP chip contributes to an overall reduction in power consumption. Additionally, an advanced power-saving throttle further reduces power consumption based on processing demand.

Specifically, a Huawei power monitor unit inserted between the NPU groups and the data path tracks the bits and packets per second of the ENP’s internal traffic. As data rates decrease, the ENP disables both the power supply and clocks of some NPU groups to achieve the maximum possible power reduction. As data rates increase, those NPU groups immediately resume operation.
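The control loop described above can be modeled as a simple proportional throttle: scale the number of powered NPU groups with the observed packet rate, keeping at least one group ready. The thresholds, group count, and rates below are illustrative assumptions, not actual ENP parameters:

```python
# Toy power throttle: a monitor samples the packet rate and powers
# NPU groups down or up in proportion to load. NUM_GROUPS and
# MAX_RATE_PPS are hypothetical values for illustration.

import math

NUM_GROUPS = 8
MAX_RATE_PPS = 1_000_000  # rate at which all groups are needed

def active_groups(observed_pps):
    """Number of NPU groups left powered for the observed rate.
    At least one group always stays active so traffic is never dropped."""
    needed = math.ceil(NUM_GROUPS * observed_pps / MAX_RATE_PPS)
    return max(1, min(NUM_GROUPS, needed))

print(active_groups(1_000_000))  # 8: full load, all groups on
print(active_groups(250_000))    # 2: power and clocks gated on idle groups
print(active_groups(0))          # 1: minimum kept ready
```

Gating both the clock and the power supply of idle groups, as the article describes, saves dynamic and static power; the floor of one active group models the ability to resume forwarding immediately.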

Programmable Switches Enable Tailored Services

SDN promises to transform networks to support flexible service scheduling and management. To achieve this goal, Ethernet devices will need to provide excellent performance and high levels of flexibility. ASIC-based switches have the performance but no capacity to adapt to service changes, and conventional NPs do not meet the highest performance or flexibility requirements.

Tailored for Ethernet services and designed to support the flexibility necessary for SDN, fully programmable network switches, such as Huawei’s ENP, are indispensable for supporting the full range of modern IT services.

By Lv Chao, Director, Network Products Management Department, HiSilicon, and Peng Xiaopeng, Principal Technical Marketing Engineer, Network Products Management Department, HiSilicon
