Powered by Ascend 310 AI processors in up to 8 Atlas 300I inference cards, the Atlas 800 inference server (Model: 3000) provides powerful real-time inference and is widely used for AI inference in data centers.
• Supports 8 Atlas 300I inference cards to meet inference requirements across multiple scenarios, delivering 640-channel real-time HD video analytics (1080p, 25 FPS); per-card scaling is sketched after the specification table
• Runs on 64-core Kunpeng 920 processors to unlock powerful computing for application acceleration
• Provides a high-efficiency, low-power AI computing platform for inference scenarios, fully leveraging the multi-core, low-power-consumption advantages of Kunpeng
• Each Atlas 300I card consumes only 67 W, giving the AI server faster computing and higher performance per watt
| Parameters | Atlas 800 Inference Server (Model: 3000) |
| --- | --- |
| Form Factor | 2U AI server |
| Processor | 2 Kunpeng 920 processors |
| Processor Memory | 32 DDR4 DIMM slots, up to 3200 MT/s |
| AI Accelerator Card | Up to 8 Atlas 300I inference cards |
| AI Computing Power | Up to 704 TOPS INT8 |
| Local Storage | 25 x 2.5" SAS/SATA drives, or 12 x 3.5" SAS/SATA drives, or 8 x 2.5" SAS/SATA + 12 x 2.5" NVMe drives |
| RAID | RAID 0, 1, 10, 5, 50, 6, or 60 |
| PCIe Expansion | Up to 9 PCIe 4.0 slots: 1 dedicated slot for a screw-in RAID controller card and 8 standard slots for plug-in PCIe cards |
| Power Supply | 2 hot-swappable 900 W or 2000 W AC PSUs, supporting 1+1 redundancy |
| Fan Module | 4 hot-swappable fan modules, supporting N+1 redundancy |
| Operating Temperature | 5°C to 40°C (41°F to 104°F) |
| Dimensions (H x W x D) | 86.1 mm x 447 mm x 790 mm (3.39 in. x 17.60 in. x 31.10 in.) |
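The headline figures above are simple multiples of the per-card capability of the Atlas 300I. The short Python sketch below makes that arithmetic explicit; the per-card values (88 TOPS INT8 and 80 video channels per card) are assumptions inferred by dividing the datasheet totals by the 8-card maximum, not figures quoted in this document.

```python
# Sketch: how the Atlas 800 (Model: 3000) headline figures scale from one
# Atlas 300I card to a fully populated chassis. Per-card compute and video
# figures are assumptions derived from the totals (704 TOPS / 8, 640 ch / 8).

CARDS_MAX = 8                  # up to 8 Atlas 300I inference cards
TOPS_INT8_PER_CARD = 88        # assumed: 704 TOPS INT8 total / 8 cards
VIDEO_CHANNELS_PER_CARD = 80   # assumed: 640 channels (1080p, 25 FPS) / 8 cards
POWER_W_PER_CARD = 67          # per the highlights list above

total_tops = CARDS_MAX * TOPS_INT8_PER_CARD           # 704 TOPS INT8
total_channels = CARDS_MAX * VIDEO_CHANNELS_PER_CARD  # 640 HD video channels
accel_power_w = CARDS_MAX * POWER_W_PER_CARD          # 536 W for the cards alone
tops_per_watt = TOPS_INT8_PER_CARD / POWER_W_PER_CARD # ~1.3 TOPS/W per card

print(f"Aggregate INT8 compute:            {total_tops} TOPS")
print(f"Aggregate 1080p@25FPS channels:    {total_channels}")
print(f"Accelerator card power budget:     {accel_power_w} W")
print(f"Per-card efficiency (assumed):     {tops_per_watt:.2f} TOPS/W")
```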