Structural Design Considerations for Data Center Networks


Published: 2012

Cloud Services Challenges

As information services enter the cloud era, determining how to quickly and accurately transmit large amounts of information across an increasing variety of terminals, servers, and storage systems has become a monumental challenge. A new network architecture that provides high bandwidth, low latency, and high reliability is needed to tackle these challenges.

Overlapping Service Model on Traditional Networks

In traditional data center networks, physical interfaces and communications protocols are designed and implemented for each device. Physical interfaces include GE/10GE, FC, and IB, and communications protocols include Ethernet, IPv4, IPv6, FCoE, and IBoE. Each protocol has its own data format and control packets. To connect these devices on a traditional network, the device hardware and software must support all of these physical interfaces and protocols.

At the data layer, the network devices must identify the data type of each received packet and forward the packet based on the corresponding forwarding rule. At the control layer, the network devices process the protocols and coordinate and control the forwarding behaviors at the data layer, as shown in Figure 1. Thus, a physical network contains multiple overlapping service logical networks. As service types and quantities increase, a large number of service logical networks are created, and hardware and software become more complex. As a result, network forwarding efficiency degrades and services are slow to launch.
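The per-protocol burden at the data layer can be sketched as a dispatch table. This is an illustrative toy (the EtherType constants are real, but the handler names and return values are invented): every new service protocol adds another parser and forwarding rule that every device in the network must implement.

```python
# Toy sketch of a traditional node's data-layer dispatch.
# Each supported protocol needs its own parser and forwarding rule.

ETHERTYPE_IPV4 = 0x0800
ETHERTYPE_IPV6 = 0x86DD
ETHERTYPE_FCOE = 0x8906

def forward_ipv4(payload):
    return "lookup IPv4 FIB"

def forward_ipv6(payload):
    return "lookup IPv6 FIB"

def forward_fcoe(payload):
    return "lookup FC forwarding table"

# Adding a new service protocol means adding an entry here --
# on every node in the network.
HANDLERS = {
    ETHERTYPE_IPV4: forward_ipv4,
    ETHERTYPE_IPV6: forward_ipv6,
    ETHERTYPE_FCOE: forward_fcoe,
}

def handle_frame(ethertype, payload):
    handler = HANDLERS.get(ethertype)
    if handler is None:
        raise ValueError(f"unsupported protocol 0x{ethertype:04x}")
    return handler(payload)
```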

In the data center, each tenant may have a service logical network, and each department of a single tenant may have an independent service logical network. Therefore, the physical network of the data center must support a large number of service logical networks. The traditional network cannot scale to support all the required services.

Figure 1. Overlapping service model on a traditional network

New Data Center Network Design

How does a physical network consisting of a large number of nodes support a large number of service logical networks that have their own addressing system and forwarding rules? The answers can be found in our daily life.

Modern postal systems use zip codes and standard mail packages. The zip code system is equivalent to an addressing system: each zip code identifies a destination address group. In the distribution and delivery stages, a postal worker only needs to check the zip code. Because a zip code maps to a group of users’ physical addresses, the number of addresses the sorting process must handle does not grow with the number of individual users. Additionally, standard mail packages enable automatic distribution.
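The aggregation idea behind zip codes can be shown in a few lines. All names below are invented for illustration: many street addresses map to one zip code, so the sorter’s table grows with the number of zip codes, not with the number of addresses.

```python
# Toy sketch of zip-code aggregation (all names invented).
# Many user street addresses map to one zip code.
ZIP_OF_ADDRESS = {
    "12 Oak St":  "10001",
    "98 Elm Ave": "10001",
    "7 Pine Rd":  "20002",
}

# The sorting stage needs only one entry per zip code,
# regardless of how many street addresses share it.
BIN_OF_ZIP = {"10001": "bin-east", "20002": "bin-west"}

def sort_mail(dst_address):
    """Distribution stage: inspect the zip code only."""
    zip_code = ZIP_OF_ADDRESS[dst_address]
    return BIN_OF_ZIP[zip_code]
```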

In a cargo transport system, standard containers and destination addresses are used to simplify the transport process and to improve work efficiency.

Similar to the postal and cargo transport systems, networks are responsible for forwarding users’ information from source addresses to destination addresses. The addressing system and information format may vary according to service types, but the network will always forward the information. An efficient network is not affected by the addressing system or information format.

Forwarded information is standardized only by the edge service nodes on a network, and the content of forwarded information is irrelevant to internal nodes. The internal information forwarding system of a network, similar to a bridge that extends in all directions, connects all edge service nodes. The source edge nodes only need to add the addresses of destination edge nodes into service packets (containers), and place the service packets into vehicles on the bridge. The vehicles carry the standard containers to the destination addresses.

Figure 2. Separated forwarding and service planes on a bridge network

The bridge network separates service functions from forwarding so that routing is not affected by increasing numbers of user addresses and the forwarding process between network nodes is simplified. Network efficiency is improved.

Implementation of a Bridge Network

  1. The service access nodes at the edge of a bridge network encapsulate various packets from users into a uniform format, add destination and source node information to the packets, and forward the packets to internal nodes
  2. The internal nodes forward packets based on destination addresses in order to decouple the addresses used for packet forwarding from the large number of users
  3. The destination edge nodes decapsulate the received packets and forward the original packets to the destination users
Compared with the traditional overlapping service model, the bridge network uses route calculation results as the basis to forward data, and uses the edge node addresses as the data forwarding addresses. It decouples data forwarding addresses from actual user addresses so that user address changes do not affect internal forwarding. Additionally, bridging nodes exchange data in a uniform format to implement high-speed and efficient transmission, support lossless Ethernet protocols such as DCB, and simplify network management.
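The three steps above can be sketched as a minimal pipeline. This is a hypothetical illustration (edge-node names, table names, and packet fields are all invented): the source edge encapsulates with edge-node addresses, internal nodes forward on those addresses alone, and the destination edge decapsulates.

```python
# Toy sketch of bridge-network forwarding (all names invented).

# Step 0 (edge knowledge): which edge node serves each user.
EDGE_OF_USER = {"user-a": "edge-1", "user-b": "edge-2"}

def encapsulate(src_user, dst_user, payload):
    """Step 1: source edge node wraps the packet in a uniform format,
    addressed to edge nodes rather than to users."""
    return {
        "src_edge": EDGE_OF_USER[src_user],
        "dst_edge": EDGE_OF_USER[dst_user],
        "inner": (src_user, dst_user, payload),
    }

def core_forward(packet, routes):
    """Step 2: internal nodes look at edge-node addresses only.
    User addresses inside 'inner' are never inspected, so user
    address changes do not affect internal forwarding."""
    return routes[packet["dst_edge"]]  # next hop toward the edge node

def decapsulate(packet):
    """Step 3: destination edge node strips the wrapper and
    delivers the original packet to the user."""
    return packet["inner"]
```

A usage sketch: `encapsulate("user-a", "user-b", b"data")` yields a packet whose core forwarding depends only on `"edge-2"`, however many users sit behind that edge.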

The bridge network provides point-to-point, point-to-multipoint, and multipoint-to-multipoint services. The entire network is equivalent to a switch, router, FC switch, or IB switch, and the edge node interfaces are equivalent to the interfaces on the switch. All data can be transmitted over the bridge network as long as it has correct outbound interfaces.

Bridge Network Technologies

Various service and data separation solutions and protocols have been developed for data centers. For example, IT solutions include GRE, NvGRE, VPLS, VxLAN, and MACinIP, and CT solutions include TRILL and SPB. The technologies and standards used for data centers must support the following requirements:
  • Flexible and dynamic point-to-point, point-to-multipoint, and multipoint-to-multipoint services
  • Shortest end-to-end paths
  • A large number of logical networks, multicast groups, and broadcast groups to meet the requirements of a large number of service logical networks
  • Compatibility with current protocols and data formats to implement seamless service connection
  • Both CT and IT solutions
Generally, the data layer determines which tasks the network can perform using a given protocol, and the control layer ensures that the network completes those tasks automatically and efficiently. Choosing appropriate protocols for the data layer is therefore critical.

Currently, data layers used in data centers are built on existing Layer 2/3 networks, for example, Ethernet + IP networks. Typical examples are GRE, NvGRE, VPLS, and VxLAN. Although many of these solutions patch defects in existing networks, the underlying problems of Layer 2/3 networks remain unsolved and potential network resources are wasted.
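The layering these IT overlays rely on can be made concrete with a VxLAN-style frame. The 8-byte VxLAN header built below follows the RFC 7348 layout (a flags word with the I bit set, then a 24-bit VNI) and UDP port 4789 is the IANA-assigned port; the outer Ethernet/IP/UDP headers are only placeholders, since the point is that the overlay still rides on the existing Layer 2/3 stack.

```python
# Sketch of VxLAN-style nesting over an existing Ethernet + IP network.
# Header layout per RFC 7348; outer headers are placeholders only.
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port

def vxlan_header(vni):
    """Build the 8-byte VxLAN header: I flag set, 24-bit VNI,
    reserved bits zero."""
    flags_word = 0x08000000            # 'I' bit: VNI field is valid
    return struct.pack("!II", flags_word, vni << 8)

def encapsulate(inner_frame, vni):
    """Nest the tenant's Ethernet frame inside the overlay.
    A real implementation would build proper outer Ethernet,
    IP, and UDP headers; placeholders stand in for them here."""
    outer = b"<outer-eth><outer-ip><outer-udp>"
    return outer + vxlan_header(vni) + inner_frame
```

Because the tenant frame is simply nested inside another Ethernet + IP/UDP envelope, the overlay inherits whatever limitations the underlying Layer 2/3 network has.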

The CT solutions, such as TRILL and SPB, aim to solve the essential problems of Layer 2 networks. To support the CT solutions, network devices must be upgraded to meet the fast-developing data center requirements. Therefore, Layer 3 issues must be considered in the design of CT solutions and the CT solutions must support all IT solutions.

These issues also exist in enterprise campuses, carriers’ LANs, and wireless Layer 2/3 networks. Therefore, resolving the problems in Layer 2/3 networks is important.

Bridging Architecture Helps Build Elastic Cloud Networks

In traditional network architectures, the edge layer and core layer have similar functions. If cloud services are deployed in this architecture, the entire network must be frequently upgraded whenever the configurations of servers and storage devices change to support new features. This makes network maintenance difficult; moreover, network performance degrades when many complicated services are deployed on the same device.

Learning from these successful practices, Huawei developed the ‘elastic cloud network architecture’ concept and deployed an advanced bridging architecture that moves complicated applications to the network edge, simplifying configuration of the core network. The architecture includes a service control layer that separates diverse services from network devices.

The ‘elastic cloud network architecture’ ensures high network performance, while meeting requirements of complicated and frequently changing services. When enterprises need to deploy new services, they can upgrade the service control layer without changing network device configurations. For example, a network can migrate to IPv6 by upgrading the IPv6 control plane, and the FCoE protocol can be upgraded from FC-BB-5 to FC-BB-6 by upgrading the FC network control plane. The core layer does not need to be changed during the upgrades.

Impact of New Technologies on a Bridge Network

Network switching is changing from ‘optical transport + electrical switching’ to ‘optical transport + optical switching.’ Due to technology limits, data carried in optical signals cannot be modified in transit; therefore, optical switches forward signals directly along end-to-end routes. The end-to-end forwarding mechanism used on the bridge network accommodates this new technology trend.