Atlas 800 AI Inference Server (Model: 3000)

Powered by the Ascend 310 chip, the Atlas 800 AI inference server (model: 3000) supports up to 8 Atlas 300 AI inference accelerator cards for powerful real-time inference. It is widely used in AI inference scenarios in data centers.
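As a rough illustration of how a host application on this server addresses the accelerator cards, the sketch below assumes Huawei's CANN toolkit and its pyACL Python bindings (the acl module) are installed; it simply enumerates the visible Ascend devices and binds to each one in turn. The exact API surface depends on the CANN version shipped with the system.

```python
# Minimal sketch, assuming the CANN toolkit's pyACL bindings ("acl") are installed.
# Each Atlas 300 card typically exposes its Ascend 310 chips as separate logical devices.
import acl

ret = acl.init()                          # initialize the ACL runtime
assert ret == 0, f"acl.init failed: {ret}"

count, ret = acl.rt.get_device_count()    # number of visible Ascend devices
assert ret == 0, f"get_device_count failed: {ret}"
print(f"Visible Ascend devices: {count}")

for dev in range(count):
    ret = acl.rt.set_device(dev)          # bind the calling process to this device
    assert ret == 0
    # ... load an offline model (.om) and run inference here ...
    ret = acl.rt.reset_device(dev)        # release the device
    assert ret == 0

ret = acl.finalize()                      # tear down the runtime
```

On a configured system, the npu-smi command-line tool (for example, npu-smi info) typically reports the same devices and their health status from the shell.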

Specifications

Parameters Atlas 800 AI Inference Server (Model: 3000)
Form Factor 2U 2-socket AI server
Processor 2 Kunpeng 920 processors
Memory 32 DDR4 DIMM slots, up to 2933 MT/s
AI Accelerator Card Up to 8 Atlas 300 AI accelerator cards for 512-channel intelligent video analytics of people, vehicles, and objects
AI Computing Power 512 TOPS INT8
NPU Memory Up to 256 GB, total bandwidth up to 1638.4 Gbit/s
Local Storage • 25 x 2.5'' SAS/SATA HDDs or SSDs
• 12 x 3.5'' SAS/SATA HDDs or SSDs
• 8 x 2.5'' SAS/SATA HDDs or SSDs + 12 x 2.5'' NVMe SSDs
RAID RAID 0, 1, 5, 6, 10, 50, or 60
PCIe Up to 9 PCIe 4.0 slots: 8 PCIe standard slots and 1 PCIe slot for the RAID controller card
Power Supply 2 hot-swappable 900 W or 2000 W AC PSUs in 1+1 redundancy mode
Fan Module 4 hot-swappable fan modules in N+1 redundancy mode
Operating Temperature 5°C to 40°C (41°F to 104°F)
Dimensions (H x W x D) 86.1 mm x 447 mm x 790 mm (3.39 in. x 17.60 in. x 31.10 in.)
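For context, the system-level figures above are consistent with simple per-card scaling. Assuming the commonly quoted ratings of the Atlas 300 AI accelerator card (model 3000) of roughly 64 TOPS INT8 and 32 GB of memory per card, a fully populated 8-card configuration yields 8 x 64 TOPS = 512 TOPS and 8 x 32 GB = 256 GB, matching the AI computing power and NPU memory rows.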
