Atlas 800 AI Inference Server (Model: 3000)

Powered by the Ascend 310 chip, the Atlas 800 AI inference server (model 3000) supports up to 8 Atlas 300 AI inference accelerator cards for powerful real-time inference. It is widely used in data center AI inference scenarios.

Specifications

Form Factor: 2U 2-socket AI server
Processor: 2 Kunpeng 920 processors
Memory: 32 DDR4 DIMM slots, up to 2933 MT/s
AI Accelerator Cards: Up to 8 Atlas 300 AI accelerator cards for 512-channel intelligent video analytics of people, vehicles, and objects
AI Computing Power: 512 TOPS INT8
NPU Memory: Up to 256 GB, total bandwidth up to 1638.4 Gbit/s
Local Storage:
• 25 x 2.5'' SAS/SATA HDDs or SSDs
• 12 x 3.5'' SAS/SATA HDDs or SSDs
• 8 x 2.5'' SAS/SATA HDDs or SSDs + 12 x 2.5'' NVMe SSDs
RAID: RAID 0, 1, 5, 6, 10, 50, or 60
PCIe: Up to 9 PCIe 4.0 slots: 8 standard PCIe slots and 1 PCIe slot for the RAID controller card
Power Supply: 2 hot-swappable 900 W or 2000 W AC PSUs in 1+1 redundancy mode
Fan Module: 4 hot-swappable fan modules in N+1 redundancy mode
Operating Temperature: 5°C to 40°C (41°F to 104°F)
Dimensions (H x W x D): 86.1 mm x 447 mm x 790 mm (3.39 in. x 17.60 in. x 31.10 in.)
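
The aggregate AI figures above scale with the number of installed accelerator cards. As a rough cross-check, the sketch below back-computes the implied per-card values from the totals in this sheet, assuming the totals scale linearly across the 8 cards; the per-card numbers are derived here for illustration, not quoted from Huawei documentation.

```python
# Back-compute implied per-card figures from the aggregate specs above,
# assuming linear scaling across the maximum of 8 Atlas 300 cards.

MAX_CARDS = 8                  # "Up to 8 Atlas 300 AI accelerator cards"
TOTAL_TOPS_INT8 = 512          # "AI Computing Power: 512 TOPS INT8"
TOTAL_NPU_MEMORY_GB = 256      # "NPU Memory: Up to 256 GB"
TOTAL_NPU_BANDWIDTH = 1638.4   # "total bandwidth up to 1638.4 Gbit/s"
TOTAL_VIDEO_CHANNELS = 512     # "512-channel intelligent video analytics"

per_card = {
    "INT8 compute (TOPS)": TOTAL_TOPS_INT8 / MAX_CARDS,        # 64.0
    "NPU memory (GB)": TOTAL_NPU_MEMORY_GB / MAX_CARDS,        # 32.0
    "Memory bandwidth (Gbit/s)": TOTAL_NPU_BANDWIDTH / MAX_CARDS,  # 204.8
    "Video analytics channels": TOTAL_VIDEO_CHANNELS / MAX_CARDS,  # 64.0
}

for name, value in per_card.items():
    print(f"{name} per card: {value}")
```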
