iMaster NCE-Fabric is a network automation and intelligence platform that integrates management, control, analysis, and Artificial Intelligence (AI) functions. It efficiently translates business intent into configurations and policies for the physical network. The product provides extensive capabilities, including simplified deployment and intelligent, closed-loop Operations and Maintenance (O&M) across the entire network lifecycle, redefining service provisioning and O&M for data center networks. iMaster NCE-Fabric also interconnects with container orchestration systems through plug-ins to implement on-demand deployment and unified management of container networks, simplifying O&M.
Zero Touch Provisioning (ZTP) and flexible planning-based deployment meet customers' process design requirements and significantly improve service provisioning efficiency.
iMaster NCE-Fabric supports intent-driven network deployment, automatically translating service intent into network models based on AI models and recommending preferred solutions to users.
By constructing a network knowledge graph with AI algorithms, iMaster NCE-Fabric transforms troubleshooting into the “1-3-5” model: one minute to detect a fault, three minutes to locate it, and five minutes to rectify it.
| Function | Description |
| --- | --- |
| Zero-Touch Provisioning | • Automatically identifies and manages network devices to implement automatic deployment of underlay networks. |
| Network Service Provisioning | • Supports interconnection with mainstream cloud platforms (such as OpenStack) and with third-party applications from Layer 2 to Layer 7; the cloud platform or third-party application invokes standard interfaces to provision network services.<br>• Supports independent network service provisioning (including association with computing platforms) to implement automatic network deployment. |
| Fabric Management | • Uses the standard VXLAN protocol to implement automatic network deployment, including VXLAN encapsulation (see the encapsulation sketch after this table). Also supports VXLAN Layer 2 and Layer 3 interconnection, as well as interconnection between VXLAN and traditional networks.<br>• Supports various VXLAN networking scenarios and the management and control of both software and hardware network devices.<br>• Allows hybrid access of multiple terminal types, including physical servers, VMs, and bare metal servers, in different scenarios. |
| Service Function Chain | • Supports the IETF-based SFC model and adopts PBR or NSH as the traffic diversion technology to steer service traffic to different nodes for processing. As a result, a topology-independent SFC function with graphical orchestration and automatic configuration can be implemented.<br>• Provides VAS services, including security policy, NAT, and IPsec VPN. |
| Security | • Supports microsegmentation and implements security isolation based on fine-grained groups, including subnets, IP addresses, VM names, and host names.<br>• Supports role-based access control to implement isolation between multiple tenants and management of multiple user accounts and rights.<br>• Supports password-based local authentication and security authentication, including RADIUS and AD. |
| O&M and Fault Location | • Supports monitoring of physical, logical, and tenant resources.<br>• Supports visibility of the application, logical, and physical network topologies, and displays the mappings from the application topology to the logical topology and from the logical topology to the physical topology.<br>• Displays forwarding paths of VTEPs and VMs in VXLAN scenarios, enabling precise fault location from the logical network to the physical network.<br>• Supports intelligent loop detection and provides one-click repair.<br>• Supports detection of Layer 2 or Layer 3 connectivity between VMs, as well as between VMs and external networks, through IP Ping and MAC Ping, helping administrators rectify faults efficiently.<br>• Supports traffic mirroring: traffic on VMs or bare metal servers can be mirrored to remote addresses through GRE tunnels (see the mirroring sketch after this table). |
| Reliability | • Adopts distributed cluster deployment. A single cluster supports a maximum of 128 member nodes, and the service control node supports dynamic expansion without service interruption.<br>• Supports deployment of cluster members in the same Layer 2 network or across a Layer 3 network, provided that routes between cluster members are reachable.<br>• Load balances northbound cloud platform API requests or web access across controller nodes.<br>• Supports southbound load balancing: devices on the entire network are evenly distributed for management across controller nodes. If one controller node fails, the network devices it manages are smoothly switched to other healthy nodes to avoid service interruption.<br>• Supports active/standby mode to implement highly reliable remote disaster recovery. |
| Openness | • Based on ONOS and compatible with the ODL architecture.<br>• Supports northbound interfaces from Layer 2 to Layer 7, including RESTful, RESTCONF, WebService, and Syslog, and supports interconnection with mainstream OpenStack platforms (standard OpenStack, Red Hat, Mirantis, and UnitedStack) through the Neutron plug-in (see the REST sketch after this table).<br>• Supports interconnection with physical and virtual network devices through southbound protocols such as SNMP, NETCONF, OpenFlow (1.3/1.4), OVSDB, JSON-RPC, and sFlow (see the NETCONF sketch after this table).<br>• Supports interconnection with computing resource management systems, such as VMware vCenter and Microsoft System Center, for collaboration between network and computing resources. |
| Management Capacity and Performance | • Typical configuration (three nodes): 1,800 physical network devices; 9,000 physical servers; 180,000 VMs; VM online rate of 200 per second.<br>• Typical configuration (five nodes): 3,000 physical network devices; 15,000 physical servers; 300,000 VMs; VM online rate of 350 per second. |
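
The sketches below illustrate, in generic Python, the protocol mechanics that several table rows mention; none of them is taken from the product itself. First, the VXLAN encapsulation referenced under Fabric Management can be pictured with the scapy library. All MAC and IP addresses and the VNI value are hypothetical placeholders.

```python
# A minimal sketch of VXLAN encapsulation (RFC 7348), assuming the scapy
# library is installed. All addresses and the VNI are hypothetical.
from scapy.layers.inet import IP, UDP
from scapy.layers.l2 import Ether
from scapy.layers.vxlan import VXLAN

# Inner frame: the original VM-to-VM Ethernet frame on the overlay.
inner = Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") / \
        IP(src="10.0.1.10", dst="10.0.1.20")

# Outer headers: the VTEP-to-VTEP transport across the underlay.
# VXLAN rides over UDP destination port 4789; the 24-bit VNI selects
# the overlay segment. scapy sets the mandatory I flag by default.
frame = Ether() / \
        IP(src="192.168.0.1", dst="192.168.0.2") / \
        UDP(sport=49152, dport=4789) / \
        VXLAN(vni=5010) / \
        inner

frame.show()  # print the full encapsulation, layer by layer
```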
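
The GRE-based traffic mirroring in the O&M row works on the same encapsulation principle: a copy of the monitored frame travels as the payload of a GRE tunnel toward a remote collector. The addresses below are again hypothetical.

```python
# A minimal sketch of GRE-encapsulated traffic mirroring, assuming scapy.
# Collector and VM addresses are hypothetical placeholders.
from scapy.layers.inet import GRE, IP, TCP
from scapy.layers.l2 import Ether

# A frame captured from the monitored VM (placeholder traffic).
mirrored = Ether() / IP(src="10.0.1.10", dst="10.0.1.20") / TCP(dport=80)

# Outer IP/GRE headers carry an intact copy to the remote analysis tool.
# proto 0x6558 is Transparent Ethernet Bridging, i.e. an Ethernet payload.
tunnel = IP(src="192.168.0.1", dst="203.0.113.50") / GRE(proto=0x6558) / mirrored

tunnel.show()  # inspect the mirrored copy inside the tunnel
```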
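
The Openness row notes that the northbound interfaces follow the OpenStack Neutron model. As a hedged sketch, the request below creates a Layer 2 network through the standard Neutron API using Python's requests library; the controller URL and token are placeholders, and the exact paths exposed by iMaster NCE-Fabric may differ from this generic example.

```python
# A minimal sketch of a northbound, Neutron-style REST call, assuming a
# controller reachable at a hypothetical address. The URL and token are
# placeholders; consult the product's API reference for the real paths.
import requests

CONTROLLER = "https://nce-fabric.example.com:9696"   # hypothetical address
TOKEN = "REPLACE_WITH_KEYSTONE_TOKEN"                # hypothetical token

# Standard OpenStack Neutron body for creating a Layer 2 network.
payload = {"network": {"name": "tenant-a-web", "admin_state_up": True}}

resp = requests.post(
    f"{CONTROLLER}/v2.0/networks",
    json=payload,
    headers={"X-Auth-Token": TOKEN},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["network"]["id"])  # ID of the provisioned network
```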
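
Similarly, the southbound NETCONF management mentioned above can be pictured with the ncclient library. This is only a generic NETCONF session sketch: the device address and credentials are hypothetical, and the data models a real switch exposes are vendor-specific.

```python
# A minimal sketch of a southbound NETCONF session, assuming the ncclient
# library and a NETCONF-enabled switch. Address and credentials are placeholders.
from ncclient import manager

with manager.connect(
    host="192.0.2.10",          # hypothetical switch management address
    port=830,                   # standard NETCONF-over-SSH port
    username="admin",
    password="REPLACE_ME",
    hostkey_verify=False,       # skip host key checking for this sketch only
) as m:
    # Retrieve the running configuration; a controller would instead push
    # <edit-config> payloads built from its network model.
    reply = m.get_config(source="running")
    print(reply.xml[:500])
```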