
    Huawei OceanStor Pacific: the Storage for Big Data

    Expand and manage storage and computing resources easily.



Harnessing the Power of Big Data

Data is the essential asset of the digital economy. Accordingly, enterprises are shifting away from mere data management toward data operations, with big data playing an increasingly important role. Traditional all-in-one big data solutions, however, are plagued by problems brought on by explosive data growth: high storage costs, prolonged scheduling, and data silos. This makes the decoupling of compute and storage resources more crucial than ever before.

Huawei OceanStor Pacific Storage innovatively implements native Hadoop Distributed File System (HDFS) semantics at the storage layer. With no need to install plug-ins, reconstruct applications, or migrate data, the solution smoothly decouples storage and compute for big data, providing enterprises with superior storage, flexible data mobility, and simple management through storage-compute synergy. Powered by Huawei OceanStor Pacific Storage, the Decoupled Storage-Compute Big Data Solution is already widely used in the finance, government, Internet, and telecom sectors.
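Because the storage layer speaks native HDFS semantics, a Hadoop cluster can in principle be pointed at it through standard configuration alone, with no extra plug-ins. The fragment below is a generic Hadoop `core-site.xml` sketch; the endpoint host name and port are placeholders, not actual product values.

```xml
<!-- core-site.xml: point Hadoop's default filesystem at an external
     HDFS-compatible endpoint. Host and port below are placeholders. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://oceanstor-pacific.example.com:8020</value>
  </property>
</configuration>
```

With `fs.defaultFS` set this way, existing Hadoop applications resolve paths such as `/user/data` against the external storage without code changes.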



Storage-Compute Decoupling for Lower TCO

• Storage-compute decoupling with on-demand resource expansion.
• Elastic Erasure Code (EC) replaces multi-copy storage for 1.75x higher resource utilization, lowering Total Cost of Ownership (TCO).
• High density, with 120 disks in just 5U, slashing footprint by 62.5% compared to other solutions.
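The 1.75x utilization figure is consistent with moving from three-way replication to a wide erasure-coding stripe. The arithmetic below is illustrative only; the 22+2 stripe layout is an assumption, as the page does not state the actual EC scheme.

```python
# Illustrative arithmetic only: usable-capacity ratio of 3-copy
# replication vs. a hypothetical 22+2 erasure-coding stripe.
replication_utilization = 1 / 3                      # 1 usable copy of 3 stored
ec_data, ec_parity = 22, 2                           # assumed stripe layout
ec_utilization = ec_data / (ec_data + ec_parity)     # ~91.7% usable

improvement = ec_utilization / replication_utilization - 1
print(f"EC utilization: {ec_utilization:.1%}")
print(f"Increase over 3-copy: {improvement:.2f}x")   # 1.75x
```

Any EC stripe with a data-to-total ratio near 11/12 yields the same result; narrower stripes (e.g. 4+2) trade some of this efficiency for faster rebuilds.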


Data Convergence for Higher Analytical Efficiency

• Resources on one storage device are shared through multiple protocols.
• One data copy is accessed through multiple protocols, without the need for data migration.
• Data tiering and automatic lifecycle management.
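Automatic lifecycle management typically moves data between tiers based on access recency. The product's actual policy engine is not documented here; the sketch below is a generic illustration of age-based tier selection, with thresholds chosen arbitrarily.

```python
from datetime import datetime, timedelta

# Generic age-based tiering sketch; thresholds are arbitrary examples,
# not product defaults.
TIER_RULES = [
    (timedelta(days=7), "hot"),      # accessed within a week  -> hot tier
    (timedelta(days=90), "warm"),    # within ~3 months        -> warm tier
]

def select_tier(last_access: datetime, now: datetime) -> str:
    """Pick a storage tier from how recently the object was accessed."""
    age = now - last_access
    for max_age, tier in TIER_RULES:
        if age <= max_age:
            return tier
    return "cold"                    # anything older falls to cold/archive

now = datetime(2024, 1, 1)
print(select_tier(datetime(2023, 12, 30), now))  # hot
print(select_tier(datetime(2023, 11, 1), now))   # warm
print(select_tier(datetime(2023, 1, 1), now))    # cold
```

A real lifecycle engine would run such a policy continuously and migrate data in the background, so applications keep one logical path while the physical placement changes.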


High Data Security and Reliability

• Support for multiple fault domains, keeping applications running even if four nodes fail concurrently in each domain.
• Fast data reconstruction and recovery.
• Comprehensive sub-health detection and pre-processing to avert risks before they affect services.