[St. Louis, USA, November 18, 2021] Huawei OceanStor Pacific Storage, in a result submitted by the Huawei HPDA Lab, took second place on the IO500 list, one of the world's most recognized High Performance Computing (HPC) storage performance rankings.
The Huawei OceanStor Pacific series is a next-generation intelligent distributed storage system that has been put into large-scale commercial use worldwide, thanks to its strong support for hybrid workloads and reliable data protection, alongside other key benefits.
Working with OceanFS, a Huawei-developed parallel file system, this distributed storage system uses a series of cutting-edge technologies, including directory Distributed Hash Table (DHT) partitioning, large Input/Output (I/O) passthrough with small I/O aggregation, and multi-granularity disk space management. Together, these provide high bandwidth for large files and high Input/Output Operations Per Second (IOPS) for small files, meeting the varied needs of hybrid workloads.
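To illustrate the general idea behind large I/O passthrough and small I/O aggregation, here is a minimal sketch in Python. The threshold, buffer size, and backend interface are assumptions made for illustration and do not describe OceanFS internals.

```python
# Illustrative sketch of large-I/O passthrough vs. small-I/O aggregation.
# Threshold, buffer size, and backend API are assumptions, not OceanFS internals.

LARGE_IO_THRESHOLD = 128 * 1024   # writes at or above 128 KiB bypass buffering
AGGREGATION_BUFFER = 1024 * 1024  # small writes are batched into 1 MiB chunks

class HybridWriter:
    def __init__(self, backend_write):
        self.backend_write = backend_write  # callable that persists one buffer
        self.pending = bytearray()

    def write(self, data: bytes):
        if len(data) >= LARGE_IO_THRESHOLD:
            self.flush()                     # preserve ordering before passthrough
            self.backend_write(bytes(data))  # large I/O goes straight to the backend
        else:
            self.pending += data             # small I/O is aggregated in memory
            if len(self.pending) >= AGGREGATION_BUFFER:
                self.flush()

    def flush(self):
        if self.pending:
            self.backend_write(bytes(self.pending))  # one large sequential write
            self.pending.clear()

# Usage: many 4 KiB writes become a few large sequential backend writes.
writes = []
w = HybridWriter(writes.append)
for _ in range(512):
    w.write(b"x" * 4096)          # 512 small writes, 2 MiB in total
w.write(b"y" * (256 * 1024))      # one large write, passed through directly
w.flush()
print([len(b) for b in writes])   # prints [1048576, 1048576, 262144]
```

The point of the split is that small random writes are turned into a handful of large sequential writes (good for bandwidth and disk wear), while genuinely large writes skip the extra copy entirely.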
Using Erasure Coding (EC) technology, OceanStor Pacific implements N + M data protection, ensuring enterprise-grade data reliability and availability while offering more available capacity and higher resource utilization than multi-copy replication.
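The capacity benefit of N + M protection is simple arithmetic: N data fragments plus M parity fragments give a usable fraction of N / (N + M) while tolerating the loss of any M fragments. A quick sketch, with example layouts chosen only for illustration:

```python
# Capacity math for N+M erasure coding vs. 3-copy replication.
# The (N, M) layouts below are illustrative examples only.

def ec_utilization(n: int, m: int) -> float:
    """N data fragments + M parity fragments: usable fraction of raw capacity."""
    return n / (n + m)

for n, m in [(4, 2), (8, 2), (22, 2)]:
    print(f"EC {n}+{m}: tolerates {m} failures, "
          f"{ec_utilization(n, m):.0%} usable capacity")

print(f"3-copy replication: tolerates 2 failures, {1/3:.0%} usable capacity")
```

For the same failure tolerance of two fragments, the 8+2 layout leaves 80% of raw capacity usable versus 33% for three-way replication, which is where the higher resource utilization comes from.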
The IO500 is widely recognized by universities, national laboratories, and commercial companies. Since November 2017, the list has been released at top HPC conferences around the world, such as the Supercomputing Conference (SC) in the USA and the International Supercomputing Conference (ISC) in Germany.
I/O performance has become a crucial indicator of the efficiency of supercomputer applications. The IO500 measures storage performance through both data bandwidth (GB/s) and metadata read/write speeds (IOPS).
Storage systems are evaluated by simulating different I/O models, from light to heavy workloads, and scores are obtained from the geometric mean of the performance values across all scenarios. By limiting the benchmark to 10 client nodes, the test stresses the performance the storage system can deliver to each client. The 10-node list is a good reference for users because it closely matches the scale of a typical parallel program and better reflects the I/O performance a storage system can provide in real-world applications.
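A rough sketch of this scoring scheme follows. The phase names and performance values are invented for illustration; the real IO500 benchmark defines its own set of bandwidth and metadata phases and combines the two geometric means into a final score.

```python
# Geometric-mean scoring across benchmark phases, in the style of IO500
# rankings. Phase names and values here are invented for illustration.
import math

bandwidth_gbps = {"easy_write": 40.0, "easy_read": 55.0,
                  "hard_write": 3.0, "hard_read": 6.0}
metadata_kiops = {"mdtest_easy": 900.0, "mdtest_hard": 120.0,
                  "find": 2500.0}

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

bw_score = geometric_mean(bandwidth_gbps.values())  # GB/s
md_score = geometric_mean(metadata_kiops.values())  # kIOPS
total = math.sqrt(bw_score * md_score)              # combined IO500-style score

print(f"BW {bw_score:.2f} GB/s, MD {md_score:.2f} kIOPS, score {total:.2f}")
```

Because a geometric mean multiplies rather than adds, a system cannot mask a very poor result in one phase (for example, hard random writes) with an excellent result in another, which is why the IO500 rewards balanced performance across workloads.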