OceanStor A800
The next-gen high-performance distributed file storage for AI.
OceanStor A800 is the next-gen high-performance distributed file storage for Artificial Intelligence (AI), fulfilling the end-to-end (E2E) data processing needs of AI training and inference.
With 24 million input/output operations per second (IOPS) and 500 GB/s bandwidth per controller enclosure, OceanStor A800 handles both the loading of training sets containing massive numbers of small files and the high bandwidth needed for resumption from checkpoints. Compared with alternative systems, it delivers 8x faster training set loading and 4x faster training resumption from checkpoints.
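To put the bandwidth figure in perspective, a short back-of-envelope sketch of ideal checkpoint-read time. Only the 500 GB/s per-enclosure bandwidth comes from the text; the 4 TB checkpoint size is a hypothetical example, and real resume times also include compute and network overhead.

```python
# Back-of-envelope check: ideal (purely I/O-bound) checkpoint read time.
# 500 GB/s is the stated per-controller-enclosure bandwidth; the
# checkpoint size below is an assumed example, not a spec.

def load_time_seconds(checkpoint_gb: float, bandwidth_gb_per_s: float) -> float:
    """Ideal time to read a checkpoint at a given sustained bandwidth."""
    return checkpoint_gb / bandwidth_gb_per_s

CHECKPOINT_GB = 4_000      # assumed 4 TB model checkpoint
BANDWIDTH_GB_S = 500       # per controller enclosure (stated spec)

t = load_time_seconds(CHECKPOINT_GB, BANDWIDTH_GB_S)
print(f"Ideal read time: {t:.0f} s")  # about 8 s at full line rate
```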
An intrinsic vector knowledge repository reduces AI hallucinations and ensures millisecond-level inference responses. Long-term memory storage eliminates repeated computation, significantly reducing the load on inference computing power. This cuts the Time To First Token (TTFT) by 78% and increases the inference throughput of a single xPU by 60%.
Ultra Performance
Ultra-Large Cluster
Powerful Inference
Specifications
Model | OceanStor A800
Enclosure form factor | Dedicated hardware, 8 U per 2-node enclosure
CPUs per node | 4
Node expansion | Scale-out to 512 nodes in fully symmetric mode
Max. SSDs per controller enclosure | 64 (SSDs integrated into the controller enclosure)
Fans per node | 10
Memory per node | 1,024 GB
Network types | 25/100/200 Gbit/s Ethernet; 200 Gbit/s InfiniBand
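Combining the per-enclosure performance figures with the 512-node scale-out limit gives a rough theoretical ceiling for a full cluster. This optimistically assumes linear scaling and 2 nodes per controller enclosure (both per the specs above); real aggregate throughput depends on the network fabric and workload.

```python
# Rough theoretical ceiling for a fully scaled-out cluster, assuming
# (optimistically) linear scaling of per-enclosure figures. Real
# aggregate performance depends on the interconnect and workload.

NODES_MAX = 512                 # stated scale-out limit
NODES_PER_ENCLOSURE = 2         # 8 U / 2-node enclosure (stated spec)
IOPS_PER_ENCLOSURE = 24_000_000 # stated per-enclosure IOPS
BW_GB_S_PER_ENCLOSURE = 500     # stated per-enclosure bandwidth

enclosures = NODES_MAX // NODES_PER_ENCLOSURE
print(f"Enclosures at max scale: {enclosures}")
print(f"Ideal IOPS ceiling: {enclosures * IOPS_PER_ENCLOSURE:,}")
print(f"Ideal bandwidth ceiling: {enclosures * BW_GB_S_PER_ENCLOSURE / 1000:.0f} TB/s")
```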