iMaster NCE-Fabric
An autonomous driving management and control system for data center networks.
• Automated E2E Network Deployment
• Network Resource Simulation and Verification
• "1-3-5" Troubleshooting (detect faults in 1 minute, locate them in 3 minutes, and rectify them in 5 minutes)
Specifications
Zero-Touch Provisioning
• Automatically identifies and manages network devices to deploy underlay networks without manual configuration.
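The discovery workflow can also be driven programmatically over the controller's northbound interface. The sketch below polls an assumed REST endpoint for auto-identified devices; the base URL, path, headers, and JSON fields are hypothetical placeholders, not the product's documented API.

```python
# Hypothetical sketch: poll the controller for auto-identified devices.
# The base URL, path, token header, and JSON fields are assumptions,
# not the documented iMaster NCE-Fabric API.
import requests

NCE = "https://nce-fabric.example.com:18002"   # assumed controller address

def list_discovered_devices(token: str):
    """Fetch devices the controller has automatically identified."""
    resp = requests.get(
        f"{NCE}/controller/devices",            # hypothetical path
        headers={"X-Auth-Token": token, "Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("devices", [])

for dev in list_discovered_devices("<token>"):
    print(dev.get("name"), dev.get("ip"), dev.get("status"))
```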
Network Service Provisioning
• Interconnects with the mainstream OpenStack cloud platform and with third-party applications from Layer 2 to Layer 7; these systems invoke standard interfaces to provision network services.
• Supports independent network service provisioning (including association with computing platforms) to automate network deployment.
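For OpenStack interconnection, the cloud platform issues calls against the standard Neutron API, which the controller's plug-in serves. A minimal sketch of creating a Layer 2 network through that API; the endpoint address and token are deployment-specific placeholders.

```python
# Minimal sketch of provisioning a network via the standard OpenStack
# Neutron API; the endpoint and Keystone token are placeholders.
import requests

NEUTRON = "http://openstack-controller:9696/v2.0"   # typical Neutron endpoint
TOKEN = "<keystone-token>"                          # obtained from Keystone

payload = {"network": {"name": "tenant-net-1", "admin_state_up": True}}
resp = requests.post(
    f"{NEUTRON}/networks",
    json=payload,
    headers={"X-Auth-Token": TOKEN},
    timeout=10,
)
resp.raise_for_status()
print("created network:", resp.json()["network"]["id"])
```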
Fabric Management
• Uses the standard VXLAN protocol, including VXLAN encapsulation, to automate network deployment. iMaster NCE-Fabric also supports VXLAN Layer 2 and Layer 3 interconnection, as well as interconnection between VXLAN and traditional networks.
• Supports various VXLAN networking scenarios and the management and control of both software and hardware network devices.
• Allows hybrid access of multiple terminal types, including physical servers, VMs, and bare metal servers, in various scenarios.
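For context, the VXLAN encapsulation being automated is defined in RFC 7348: an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI), prepended to the original Ethernet frame and transported over UDP port 4789. A short illustration of building that header:

```python
# Build the 8-byte VXLAN header defined in RFC 7348: one flags byte
# (0x08 = "VNI valid"), three reserved bytes, a 24-bit VNI, and one
# reserved byte. The encapsulated frame follows this header in UDP/4789.
import struct

def vxlan_header(vni: int) -> bytes:
    """Return the VXLAN header for a given VXLAN Network Identifier."""
    flags = 0x08                                   # I flag: VNI field is valid
    return struct.pack("!B3xI", flags, vni << 8)   # VNI in the upper 24 bits

hdr = vxlan_header(vni=5010)
assert len(hdr) == 8
print(hdr.hex())    # 0800000000139200
```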
Service Function Chain
• Supports the IETF-based SFC model and uses PBR or NSH as traffic diversion technologies to steer service traffic to different nodes for processing, implementing topology-independent SFC with graphical orchestration and automatic configuration.
• Provides value-added services (VAS), including security policy, NAT, and IPsec VPN.
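Conceptually, an SFC is an ordered list of service nodes that classified traffic is steered through, independent of the physical topology. A toy sketch of that ordering; the names, addresses, and classes are illustrative, not the controller's internal data model.

```python
# Illustrative-only model of a service function chain: classified traffic
# visits each service node in order, as PBR or NSH diversion would enforce.
from dataclasses import dataclass

@dataclass
class ServiceNode:
    name: str       # e.g., a firewall or NAT appliance
    address: str    # next hop used to divert traffic to this node

chain = [
    ServiceNode("firewall", "10.0.0.11"),
    ServiceNode("nat", "10.0.0.12"),
    ServiceNode("ipsec-gw", "10.0.0.13"),
]

def steer(flow_desc: str) -> None:
    """Print the path a matching flow takes through the chain."""
    for hop in chain:
        print(f"{flow_desc} -> {hop.name} ({hop.address})")

steer("tenant-A web flow")
```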
Network Security
• Supports microsegmentation, implementing security isolation based on fine-grained groups defined by subnet, IP address, VM name, or host name.
• Supports role-based access control to isolate tenants from one another and to manage multiple user accounts and their rights.
• Supports local password authentication as well as security authentication through RADIUS and AD.
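A minimal sketch of the microsegmentation grouping idea, matching workloads by subnet, IP address, or VM name as listed above; the group schema here is hypothetical, not the controller's configuration model.

```python
# Hypothetical microsegment groups keyed by subnet, exact IP, or VM name.
# Isolation policy would then be applied between the resulting groups.
import ipaddress

GROUPS = {
    "web":  {"subnet": ipaddress.ip_network("10.1.0.0/24")},
    "db":   {"ips": {"10.2.0.10", "10.2.0.11"}},
    "mgmt": {"vm_names": {"jumphost-01"}},
}

def classify(ip: str, vm_name: str) -> str | None:
    """Return the first group whose rule matches the workload."""
    addr = ipaddress.ip_address(ip)
    for group, rule in GROUPS.items():
        if "subnet" in rule and addr in rule["subnet"]:
            return group
        if ip in rule.get("ips", set()):
            return group
        if vm_name in rule.get("vm_names", set()):
            return group
    return None

print(classify("10.1.0.5", "web-01"))    # web
print(classify("10.2.0.10", "db-01"))    # db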
O&M and Fault Location
• Monitors physical, logical, and tenant resources.
• Visualizes the application, logical, and physical network topologies, and displays the mappings from the application topology to the logical topology and from the logical topology to the physical topology.
• Displays forwarding paths of VTEPs and VMs in VXLAN scenarios, enabling precise fault location from the logical network down to the physical network.
• Supports intelligent loop detection with one-click repair.
• Detects Layer 2 and Layer 3 network connectivity between VMs, and between VMs and external networks, through IP Ping and MAC Ping, helping administrators rectify faults efficiently.
• Supports traffic mirroring: traffic on VMs or bare metal servers can be mirrored to remote addresses through GRE tunnels.
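The IP Ping check amounts to a Layer 3 reachability probe. A plain illustration using the operating system's ping utility; the VM addresses are placeholders, and the controller's own probe runs inside the fabric rather than on an O&M host.

```python
# Simple Layer 3 reachability probe, analogous in spirit to "IP Ping":
# send ICMP echo requests and report whether the target answered.
import subprocess

def ip_ping(target: str, count: int = 3) -> bool:
    """Return True if the target answers ICMP echo requests."""
    result = subprocess.run(
        ["ping", "-c", str(count), target],   # use "-n" instead of "-c" on Windows
        capture_output=True, text=True,
    )
    return result.returncode == 0

for vm_ip in ("10.1.0.5", "10.2.0.10"):       # placeholder VM addresses
    print(vm_ip, "reachable" if ip_ping(vm_ip) else "unreachable")
```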
Reliability
• Adopts distributed cluster deployment. A single cluster supports up to 128 member nodes, and service control nodes can be dynamically expanded without service interruption.
• Supports deploying cluster members in the same Layer 2 network or across a Layer 3 network, provided that routes between the members are reachable.
• Load balances northbound cloud platform API requests and web access across controller nodes.
• Supports southbound load balancing: devices on the entire network are evenly distributed for management across controller nodes. If a controller node fails, the devices it manages are smoothly switched to healthy nodes to avoid service interruption.
• Supports active/standby mode for highly reliable remote disaster recovery.
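The southbound distribution can be pictured as a deterministic placement function over controller nodes, with re-homing on node failure. The hash-based sketch below is illustrative only; the datasheet does not describe the controller's actual placement algorithm.

```python
# Illustrative-only device placement: hash each device ID onto the set of
# live controller nodes, and recompute placement when a node fails.
import hashlib

NODES = ["node-1", "node-2", "node-3"]

def owner(device_id: str, nodes: list[str]) -> str:
    """Pick a controller node for a device deterministically."""
    digest = hashlib.sha256(device_id.encode()).digest()
    return nodes[int.from_bytes(digest[:4], "big") % len(nodes)]

devices = [f"leaf-{i:03d}" for i in range(6)]
print({d: owner(d, NODES) for d in devices})

# If node-2 fails, its devices move to the survivors. A simple mod hash
# also reshuffles some devices on healthy nodes; a consistent-hash ring
# would limit the churn to the failed node's devices.
survivors = [n for n in NODES if n != "node-2"]
print({d: owner(d, survivors) for d in devices})
```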
Openness
• Based on ONOS and compatible with the ODL architecture.
• Provides northbound interfaces from Layer 2 to Layer 7, including RESTful, RESTCONF, WebService, and Syslog, and interconnects with mainstream OpenStack platforms (standard OpenStack, Red Hat, Mirantis, and UnitedStack) through the Neutron plug-in.
• Interconnects with physical and virtual network devices through southbound protocols such as SNMP, NETCONF, OpenFlow (1.3/1.4), OVSDB, JSON-RPC, and sFlow.
• Interconnects with computing resource management systems, such as VMware vCenter and Microsoft System Center, for collaboration between network and computing resources.
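Southbound NETCONF access of the kind listed above can be exercised with the open-source ncclient library, as sketched below; the host and credentials are placeholders for a managed switch.

```python
# Open a NETCONF session to a managed device and fetch its running
# configuration; host and credentials are placeholders.
from ncclient import manager

with manager.connect(
    host="10.0.0.1", port=830,
    username="admin", password="secret",
    hostkey_verify=False,
) as m:
    reply = m.get_config(source="running")
    print(reply.xml[:500])   # first part of the <rpc-reply> XML
```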
Management Capacity and Performance
Typical configuration (three nodes):
• Physical network devices: 1800
• Physical servers: 9000
• VMs: 180,000
• VM online rate: 200 per second
Typical configuration (five nodes):
• Physical network devices: 3000
• Physical servers: 15,000
• VMs: 300,000
• VM online rate: 350 per second
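As a back-of-the-envelope check of these figures, the time to bring every managed VM online at the rated rate:

```python
# Onboarding time at the rated VM online rate, per typical configuration:
# 180,000 / 200 = 900 s (15 min); 300,000 / 350 ≈ 857 s (about 14 min).
for nodes, vms, rate in ((3, 180_000, 200), (5, 300_000, 350)):
    seconds = vms / rate
    print(f"{nodes} nodes: {vms:,} VMs at {rate}/s -> {seconds / 60:.0f} minutes")
```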