
OceanStor Pacific Storage for High-Performance Data Analytics

    Unlock value and intelligence from mass data.

Overview

HPDA Is Now
According to IDC, 67% of the world's high-performance computing (HPC) data centers use AI and big data technologies. The accelerated convergence of HPC, AI, and big data has led to the emergence of data-intensive high-performance data analytics (HPDA). But the explosion of data-intensive applications, such as autonomous driving, genome sequencing, and weather forecasting, raises the requirements for real-time data analytics. And as computing power shifts from homogeneous systems to heterogeneous, multi-cloud environments, workload models become more complicated, exposing an urgent need for a solution that unlocks greater value from mass data more efficiently and cost-effectively.
Data Explosion
Global data is set to reach 180 ZB by 2025. More data needs to be stored for an extended period.
HPDA use cases like genome sequencing drive data growth by dozens of times.
Complex Workload Models
Homogeneous computing power is evolving into heterogeneous computing power (GPU, ASIC, and FPGA), intensifying the challenge of diversified workloads.
Different service processing phases have different workload models, requiring one storage system to deliver high performance in both bandwidth and IOPS.
Inefficient Analytics and Mobility
Data read/write performance does not match the computing power, making storage a bottleneck in analytical efficiency.
Accessing data across diversified protocols leads to inefficient data copying and time-consuming data migration.
The Trend of Data-intensive HPC, a white paper from Hyperion Research, highlights that in the transition from computing-intensive to data-intensive workloads (typically big data, AI, machine learning, and deep learning), researchers, engineers, and data analysts want to accelerate the time to results. For this, storage is crucial but it faces challenges and must be redefined.
Download White Paper
What Are Data-Intensive HPDA Solutions?
HPDA involves data-intensive workloads that use HPC resources, like big data and AI workloads. It presents huge challenges, such as large data volumes, latency-sensitive processing, and complex algorithms. Data-intensive HPDA solutions provide a data infrastructure for supercomputing centers, research organizations, and enterprises, helping them address the challenges posed by data-intensive applications and unlock value and intelligence from their data.
Efficient Services
Data analytics with a super low latency meets the need for instant insights and shortens the R&D period. Data is seamlessly shared between development, testing, and production for efficient data mobility.
Simplified Management
A single storage system supports data mobility across protocols and provides unified resource O&M with high service agility, fulfilling the requirements for diversified computing power.
Optimized Costs
Intelligent tiering stores hot and cold data on different storage media to maximize data value and reduce footprint and energy consumption for optimal cost-effectiveness throughout the data lifecycle.
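To make the tiering idea concrete, here is a minimal sketch of an age-based hot/cold placement policy; the tier names, thresholds, and helper function are illustrative assumptions, not OceanStor Pacific's actual policy engine.

```python
from datetime import datetime, timedelta

# Illustrative thresholds (assumptions): data untouched longer than these
# windows moves to progressively colder, cheaper media.
HOT_WINDOW = timedelta(days=7)     # keep on NVMe all-flash
WARM_WINDOW = timedelta(days=90)   # keep on large-capacity HDD

def choose_tier(last_access: datetime, now: datetime | None = None) -> str:
    """Pick a storage tier for a file based on how recently it was accessed."""
    now = now or datetime.utcnow()
    age = now - last_access
    if age <= HOT_WINDOW:
        return "nvme-flash"    # latency-sensitive, frequently read data
    if age <= WARM_WINDOW:
        return "capacity-hdd"  # bulk data still referenced occasionally
    return "archive"           # cold data kept for long-term retention

# A result file last read 30 days ago lands on the capacity tier.
print(choose_tier(datetime.utcnow() - timedelta(days=30)))  # capacity-hdd
```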
Benefits of Data-Intensive HPDA Solutions
Data is now a strategic business resource, and data-intensive HPDA solutions are designed to support this. Centering on high-end storage, they provide an efficient, cost-effective platform for data-intensive HPC use cases like meteorology & oceanography, supercomputing centers, genome sequencing, and energy exploration.
Meteorology & Oceanography | More Accurate
As weather forecasting becomes ever more accurate, the data refresh frequency increases from once every 2 hours to once every few minutes, raising the parallel processing requirements on data center storage. Our solution delivers:
Stable running of critical real-time services with high-end storage
Real-time, high-precision data analytics and processing through a hybrid-workload design
Supercomputing Center | More Intelligent
The shift from traditional to intelligent computing increases the performance from petascale to exascale, resulting in larger data volumes. To adapt to the complex multi-tenant workload models of supercomputing centers, storage must support cross-protocol access, resource isolation, and quality of service (QoS). Our solution provides:
Higher processing efficiency with dedicated HPC parallel file systems
Multi-protocol support that meets the access needs of various application systems and enables data mobility across HPC, big data, and AI platforms for lower total cost of ownership (TCO)
Higher management efficiency through full-lifecycle intelligent management
Genome Sequencing | More Efficient
Efficient genome sequencing helps cure diseases and develop new medicines. In the HPDA age, a single genome sequencer generates up to 6 TB of data every day, all of which must be stored long term. This requires high storage performance, stability, and cost-effectiveness; a rough sizing illustration follows the list below. Our solution achieves:
Report output time shortened from 13 years to just 1 day thanks to higher overall storage performance, with faster genome analysis and testing made possible by stable, high bandwidth
Stronger genome analysis capabilities via quick hybrid analysis without data migration
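As a rough illustration of what the 6 TB/day figure implies for capacity planning, the back-of-the-envelope calculation below assumes a hypothetical lab with 10 sequencers and a 3-year retention requirement; only the per-sequencer daily output comes from the text above.

```python
# Back-of-the-envelope sizing for a hypothetical sequencing lab.
TB_PER_SEQUENCER_PER_DAY = 6   # per-instrument output stated above
SEQUENCERS = 10                # assumed fleet size
RETENTION_YEARS = 3            # assumed retention requirement

daily_tb = TB_PER_SEQUENCER_PER_DAY * SEQUENCERS
yearly_pb = daily_tb * 365 / 1000
retained_pb = yearly_pb * RETENTION_YEARS

print(f"Ingest: {daily_tb} TB/day, ~{yearly_pb:.1f} PB/year")
print(f"Data retained after {RETENTION_YEARS} years: ~{retained_pb:.1f} PB")
```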
Energy Exploration | More Cost-Effective
The evolution of seismic exploration from 2D to 3D and 4D results in 5- to 10-fold data growth. In seismic data processing, the aggregate bandwidth reaches 2 to 20 GB/s per PB and the per-I/O latency must stay within 200 to 400 μs, requiring excellent storage scalability, bandwidth, and latency; a back-of-the-envelope translation of these figures follows the list below. More cost-effective energy exploration leads to a lower TCO, which is why our solution ensures:
Less physical storage space usage at a lower TCO with tiered storage
Worry-free, long-term data growth with one-time deployment and on-demand expansion
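The figures above translate into cluster-level requirements in a straightforward way; the sketch below assumes a hypothetical 10 PB active seismic working set, while the bandwidth and latency ranges come from the text.

```python
# Translate the per-PB seismic processing figures into aggregate requirements.
working_set_pb = 10               # assumed size of the active seismic dataset
bw_per_pb_gbps = (2, 20)          # stated range: 2-20 GB/s per PB
latency_budget_us = (200, 400)    # stated per-I/O latency target

low = working_set_pb * bw_per_pb_gbps[0]
high = working_set_pb * bw_per_pb_gbps[1]

print(f"A {working_set_pb} PB working set needs roughly {low}-{high} GB/s "
      f"of aggregate bandwidth, with per-I/O latency of "
      f"{latency_budget_us[0]}-{latency_budget_us[1]} µs.")
```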
Why Huawei?
Huawei has 12 R&D centers with over 4,000 R&D personnel and 3,000 patents, all dedicated to storage, serving more than 15,000 customers in over 150 countries. With ultra-high density and multi-protocol interworking capabilities, the Huawei HPDA solution is designed for hybrid workloads to unlock data potential and tackle the challenges arising from data-intensive applications.
Hybrid workload-oriented
A single storage device delivers industry-leading 32 GB/s bandwidth and 400,000 IOPS per U, boosting service efficiency.
Learn More
Multi-protocol interworking
Seamless interworking between POSIX, NFS, CIFS, HDFS, S3, and MPI-IO with zero data copying is purpose-built for HPDA scenarios, simplifying data management.
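To illustrate what multi-protocol interworking looks like from the client side, the sketch below reads the same data through a POSIX/NFS mount and through the S3 interface with no intermediate copy. The mount path, endpoint, bucket, and object key are placeholders rather than documented OceanStor Pacific defaults; the S3 call uses the standard boto3 client.

```python
import boto3

# Placeholder paths/endpoints: one dataset exposed through two protocols.
NFS_PATH = "/mnt/hpda/genomics/sample001.fastq"   # POSIX/NFS view
S3_ENDPOINT = "https://storage.example.com"       # S3 view of the same namespace
BUCKET, KEY = "genomics", "sample001.fastq"

# File-based tools (aligners, solvers) read the NFS/POSIX path directly.
with open(NFS_PATH, "rb") as f:
    header = f.read(128)

# Analytics or cloud-native tools read the same bytes over S3, no copy step.
s3 = boto3.client("s3", endpoint_url=S3_ENDPOINT)
obj = s3.get_object(Bucket=BUCKET, Key=KEY)
assert obj["Body"].read(128) == header  # identical data, two access protocols
```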
Ultra-high-density design
High-density, large-capacity hardware (24 disks per U) and high-density NVMe all-flash hardware (16 disks per U), combined with intelligent data tiering, offer optimal cost-effectiveness.
Learn More
