
Challenge 2: Data Storage with Ultimate Per-Bit Cost Efficiency

2022.09.28

Explosive data growth is creating performance and capacity bottlenecks in existing storage systems. To leverage new types of non-volatile, optical, and magnetic media, research is needed into technologies that accelerate the replacement of HDDs with SSDs in primary storage, memory media and memory-class storage, new cost-effective warm/cold data storage media and systems, and cost-effective storage-compute convergence at the edge. Together, these technologies will enable next-generation storage with superb per-bit cost-effectiveness.

Challenge Direction 1: Media Application Technologies for Replacing HDDs with SSDs

1. New cost-effective NAND technology: Research page-level reprogramming within a block so that the earliest failed page does not determine the lifetime of the entire block. Explore technologies such as write performance tuning and refresh optimization that account for manufacturing process variation, so that increased wear does not reduce write performance.

2. Cost-effective system with hybrid NAND storage units under a single controller: Research an encoding algorithm that unifies intra-NAND error correction codes (like LDPC) and inter-NAND erasure codes (like RAID 4). Explore the optimal configuration ratio for a NAND combination (for example, SLC + QLC).
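
The inter-NAND half of such a scheme can be sketched with RAID-4-style XOR parity across NAND units. This is a minimal illustration only; the page sizes and function names below are assumptions for the sketch, not part of any controller API, and a real unified code would couple this layer with the intra-NAND LDPC decoder.

```python
# Sketch: RAID-4-style parity across NAND units. Each "page" is a
# fixed-size bytes object; one parity page protects N data pages.
def xor_parity(pages):
    """Compute a parity page as the byte-wise XOR of all given pages."""
    parity = bytearray(len(pages[0]))
    for page in pages:
        for i, b in enumerate(page):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(surviving_pages, parity):
    """Recover the single lost page from the survivors plus parity."""
    return xor_parity(surviving_pages + [parity])

# Three data pages on three NAND units, one parity page on a fourth.
data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
p = xor_parity(data)
# If the middle unit fails, its page is recoverable:
assert reconstruct([data[0], data[2]], p) == data[1]
```

The design question the text raises is how to size such a group: a higher data-to-parity ratio (e.g., more QLC dies per SLC/parity unit) lowers cost per bit but lengthens reconstruction and raises the burden on the intra-NAND LDPC code.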

3. High data reduction ratio: Explore model-based and AI-based prediction algorithms for general-purpose lossless compression. Explore high-precision, adaptive semantic recognition algorithms and extensible restoration algorithms for unstructured data.

Challenge Direction 2: Memory-Class Storage

1. Build a memory-centric, storage-compute decoupled architecture powered by memory media and a memory interconnect bus. Research a new distributed memory interconnect bus, a memory management mechanism based on that bus, distributed large memory, and a distributed memory semantic access model.

2. Research technologies related to new memory media and memory disks, like tiering based on new memory media and memory disks as well as collaboration between new memory media and compilers.

3. Research near-memory and in-memory computing technologies to improve metadata access performance in big data analytics, increase the bandwidth utilization rate via data reduction, and prevent operator push-down (like Shuffle) from reducing system performance.

Challenge Direction 3: New Tiered Warm/Cold Data Storage Media and Systems

1. New high-density magnetic media and systems: Focusing on large-capacity, high-performance magnetic media, research technologies for high-performance read/write drives, the fabrication of large-capacity tapes, and high-density magnetic heads.

2. New high-density optical storage media and systems: Focusing on optical or other new storage media, research technologies for high-precision servo control, high-performance read/write drives, and low-cost write drives.

3. High-performance data tiering based on new media: Research high-performance data tiering technology adapted to cold storage media such as magnetic and optical media. Address the access-latency and read/write-mode challenges of these media so that the overall cost is lower than that of HDDs.
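
A hedged sketch of one such tiering policy, assuming three tiers and purely age-based demotion. The tier names and thresholds are illustrative assumptions; a real system would also weigh access frequency, media latency, and migration cost.

```python
# Sketch: pick a storage tier from the age of the last access.
# Tiers are ordered hot -> cold, each with a maximum age before data
# is demoted to the next, cheaper tier (None = catch-all cold tier).
import time

TIERS = [
    ("ssd", 24 * 3600),          # hot: accessed within a day
    ("hdd", 30 * 24 * 3600),     # warm: accessed within a month
    ("tape_or_optical", None),   # cold: everything older
]

def select_tier(last_access_ts, now=None):
    """Return the tier name for data last accessed at last_access_ts."""
    age = (now or time.time()) - last_access_ts
    for name, max_age in TIERS:
        if max_age is None or age <= max_age:
            return name
    return TIERS[-1][0]
```

The point the text makes is that the cold tiers only pay off if the tiering layer hides their latency and sequential-write constraints well enough that the blended cost beats an all-HDD system.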

4. New media access protocols: Research access protocols suited to magnetic, optical, and other new media, with capabilities such as read/write pass-through, high concurrency, and offloading of protocol semantics to the media. Guide ecosystem collaboration across the industry to promote these new media protocols into industry standards.

Challenge Direction 4: New Micro Storage with Ultimate Cost Efficiency

1. In-depth disk-controller convergence: Integrate the over-provisioning (OP), RAID, FTL, and garbage collection (GC) capabilities of the disk and system layers and implement them in a unified manner. Develop FTL table memory compression and fast swap-in/swap-out for large-capacity scenarios to avoid duplicated space reservation at both layers, thereby maximizing system space utilization.
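
The FTL table compression idea can be sketched as extent (run-length) encoding of the logical-to-physical map: sequential writes produce contiguous LBA-to-PBA runs, so storing (start_lba, start_pba, length) triples shrinks the in-memory table. The data structures below are illustrative assumptions, not any vendor's FTL format.

```python
# Sketch: compress a flat FTL table (mapping[lba] = pba) into extents.
def compress_ftl(mapping):
    """Return a list of [start_lba, start_pba, length] extents."""
    extents = []
    for lba, pba in enumerate(mapping):
        if (extents
                and extents[-1][0] + extents[-1][2] == lba
                and extents[-1][1] + extents[-1][2] == pba):
            extents[-1][2] += 1          # extend the current run
        else:
            extents.append([lba, pba, 1])  # start a new run
    return extents

def lookup(extents, lba):
    """Translate an LBA through the compressed table."""
    for start_lba, start_pba, length in extents:
        if start_lba <= lba < start_lba + length:
            return start_pba + (lba - start_lba)
    raise KeyError(lba)

# Two sequential runs plus one stray mapping -> 3 extents, not 6 entries.
table = [100, 101, 102, 7, 8, 200]
ext = compress_ftl(table)
assert len(ext) == 3
assert lookup(ext, 2) == 102
```

In practice the compression ratio depends on workload sequentiality, which is why the text pairs compression with fast swap-in/swap-out for the fragmented portions that do not compress.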

2. High-performance encoding and layout: Based on a large-capacity disk-controller converged architecture, implement metadata indexing with a high fan-out ratio and low memory usage as well as cross-DIE high-ratio erasure coding (EC) that enables fast reconstruction.

3. Client ecosystem based on the diskless architecture: Based on NoF+, provide standard KV, block, and file APIs and SNIA-compliant computational storage APIs to build a client ecosystem that seamlessly integrates with open-source storage and application software, thus improving performance and reliability.
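
As a purely hypothetical sketch of what a standard KV client API in such an ecosystem might look like: the class and method names below are assumptions for illustration, not taken from NoF+ or any SNIA specification.

```python
# Sketch: an abstract KV client interface plus a toy in-memory backend,
# showing how applications could code against a standard API while the
# actual transport (e.g., a diskless NoF+ service) stays swappable.
from abc import ABC, abstractmethod

class KVClient(ABC):
    @abstractmethod
    def put(self, key: bytes, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: bytes) -> bytes: ...

    @abstractmethod
    def delete(self, key: bytes) -> None: ...

class InMemoryKVClient(KVClient):
    """Toy backend useful for testing code written against the API."""
    def __init__(self):
        self._store = {}
    def put(self, key, value):
        self._store[key] = value
    def get(self, key):
        return self._store[key]
    def delete(self, key):
        del self._store[key]

kv = InMemoryKVClient()
kv.put(b"object-1", b"payload")
assert kv.get(b"object-1") == b"payload"
```

Pinning applications to a stable interface like this, rather than to a concrete backend, is what lets open-source storage and application software integrate with the diskless architecture without per-vendor changes.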
