Huawei OceanStor 9000: High Data Reliability in Video Surveillance Applications

Storage capacity is vital to the operational capabilities of video surveillance systems, and requirements on this critical component continue to escalate as the industry develops at impressive speed. More storage capacity and better performance are required as the number of HD video channels increases and as video data is stored for longer periods. These factors, along with the growing importance of data, also place heightened demands on the reliability of the storage system.

Huawei OceanStor 9000 offers robust data reliability with SecureRAID, SecureVideo, and SecureData technologies for the storing, restoration, and disaster recovery (DR) of massive amounts of data.

SecureRAID enables improved data protection

Erasure coding is a popular data encoding scheme that splits data into blocks of a specified size. The scheme uses specified encoding algorithms to generate redundant parity blocks from the data blocks. If a fault occurs, the scheme can restore the complete data set from a surviving subset of the blocks. Compared with traditional RAID technology, erasure coding achieves better restoration capabilities with less overhead, making it ideal for storing massive amounts of data. Given these advantages, many well-known storage equipment manufacturers, open-source systems, and e-commerce sites now use this redundancy scheme.
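
To make the scheme concrete, the following is a minimal sketch of erasure coding using simple XOR parity for a 3+1 stripe. It is purely illustrative: OceanStor 9000 uses its own erasure coding algorithms and supports up to four parity strips, whereas XOR parity tolerates only a single loss.

```python
# Illustrative N+1 erasure coding with XOR parity (not the actual
# OceanStor 9000 algorithm). Data is split into N equal strips, one
# parity strip is computed, and any single lost strip can be rebuilt.

def split_into_strips(data: bytes, n: int) -> list[bytes]:
    """Pad data and split it into n strips of equal size."""
    strip_size = -(-len(data) // n)           # ceiling division
    data = data.ljust(strip_size * n, b"\0")  # zero-pad to a full stripe
    return [data[i * strip_size:(i + 1) * strip_size] for i in range(n)]

def xor_parity(strips: list[bytes]) -> bytes:
    """Compute a parity strip as the byte-wise XOR of all data strips."""
    parity = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            parity[i] ^= b
    return bytes(parity)

def rebuild(strips: list[bytes | None], parity: bytes) -> list[bytes]:
    """Recover one missing strip (marked as None) from the survivors."""
    missing = strips.index(None)
    recovered = bytearray(parity)
    for idx, strip in enumerate(strips):
        if idx == missing:
            continue
        for i, b in enumerate(strip):
            recovered[i] ^= b
    strips[missing] = bytes(recovered)
    return strips

# Example: a 3+1 stripe loses one strip and is rebuilt from parity.
strips = split_into_strips(b"surveillance frame data", 3)
parity = xor_parity(strips)
strips[1] = None                      # simulate a failed disk
assert b"".join(rebuild(strips, parity)).rstrip(b"\0") == b"surveillance frame data"
```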

Working principles

The SecureRAID technology in the OceanStor 9000 uses an erasure coding scheme that splits file data into strips of the same size. After parity data is computed, the file and parity data are written to different disks on different nodes. If a disk is damaged or a node goes down, the original file data can be reconstructed from the surviving data and parity strips, helping to ensure that data is not lost. Administrators can set the redundancy level according to the importance of the file, with +4 redundancy being the highest level.

SecureRAID implements N+1 to N+4 data protection schemes, where N indicates the number of data strips and the number after the plus sign indicates the number of parity strips. See Figure 1 and Figure 2.

Figure 1: Data write process

Figure 2: Data restoration process

Highlights

High efficiency

SecureRAID splits file data into strips of a specific size, and a set number of strips then forms a data stripe; N denotes the number of original data strips produced by the coding process. SecureRAID uses erasure coding algorithms to perform check operations on these data strips and generate the required number of parity data blocks, denoted M, which are the extra, redundant strips added to protect the data. The system sends the N data blocks and M parity blocks to different nodes, achieving faster write speeds because the write operations are executed simultaneously.
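
As an illustration of this parallel write path, the sketch below dispatches each strip to a separate node concurrently. The send_to_node() function and node names are hypothetical stand-ins; the real transport, placement, and scheduling logic are internal to OceanStor 9000.

```python
# Illustrative parallel dispatch of data and parity strips to storage
# nodes. send_to_node() is a stand-in for whatever transport the real
# system uses; here it only simulates a per-strip write.
from concurrent.futures import ThreadPoolExecutor
import time

def send_to_node(node: str, strip_id: int, strip: bytes) -> str:
    """Hypothetical write of one strip to one node."""
    time.sleep(0.01)                      # pretend network/disk latency
    return f"strip {strip_id} written to {node}"

def write_stripe(strips: list[bytes], nodes: list[str]) -> list[str]:
    """Write N data strips and M parity strips concurrently, one per node."""
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        futures = [
            pool.submit(send_to_node, node, i, strip)
            for i, (node, strip) in enumerate(zip(nodes, strips))
        ]
        return [f.result() for f in futures]

# Example: 4 data strips + 2 parity strips spread over 6 nodes.
results = write_stripe([b"d1", b"d2", b"d3", b"d4", b"p1", b"p2"],
                       [f"node-{i}" for i in range(6)])
print("\n".join(results))
```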

If a disk or node in the system fails, data can be restored from elsewhere in the storage system. Huawei OceanStor 9000 restores the data objects from the failed disks to different locations in the system, implementing concurrent restoration and thereby minimizing the time needed to reconstruct data.

High reliability

SecureRAID delivers a level of reliability unmatched by other available NAS offerings. Depending on the redundancy level set for a data file, the system keeps data protected even if up to four of the disks or nodes holding the file's strips fail simultaneously. Flexible configuration allows users to balance performance and protection based on the importance of the data files.

High utilization rate

SecureRAID technology ensures high reliability while improving system space utilization, which depends on the number of nodes and the redundancy level set by the user (M parity blocks for the N data strips to be stored). SecureRAID can achieve up to 94% utilization of storage space, far more than other RAID- or replication-based data protection schemes.
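
As a rough worked example of where a figure like 94% comes from (assuming usable capacity is simply N divided by N+M), a wide stripe such as 16+1 yields about 94% utilization, while triple replication yields only about 33%:

```python
# Back-of-the-envelope space-utilization arithmetic for N+M protection
# versus replication. Illustrative figures, not official product numbers.

def erasure_utilization(n: int, m: int) -> float:
    """Fraction of raw capacity available for user data with N+M coding."""
    return n / (n + m)

def replication_utilization(copies: int) -> float:
    """Fraction of raw capacity available when keeping full copies."""
    return 1 / copies

print(f"16+1 erasure coding: {erasure_utilization(16, 1):.1%}")  # ~94.1%
print(f" 8+2 erasure coding: {erasure_utilization(8, 2):.1%}")   # 80.0%
print(f"3-way replication:   {replication_utilization(3):.1%}")  # ~33.3%
```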

SecureVideo enhances data restoration capabilities

The SecureVideo technology in the OceanStor 9000 uses an erasure-coding-based data protection scheme tailored to the backup requirements of video surveillance. In the event of a disk failure, data integrity is assured because data is reconstructed directly from the redundant strips, provided the redundancy covers all the data that needs to be restored. If some of the data cannot be recovered from the redundant strips, the unreadable portion is discarded while the remaining data is retained to the greatest extent possible, avoiding the total loss of the entire group that would occur with conventional RAID approaches.

Working principles

The SecureVideo technology in the OceanStor 9000 optimizes video file reads on top of the SecureRAID data protection scheme. When a video file is stored on the OceanStor 9000, SecureRAID splits the file data into strips of the same size; after the parity computation, the file and parity data are written to different disks on different nodes. When reading video files, SecureVideo checks whether the redundancy covers the complete data. If the amount of corrupted data exceeds what the redundant strips can repair, the remaining intact data is read to the greatest extent possible; the data that cannot be read is discarded, and the readable data is returned to the upper-level application system, avoiding the total data loss and ruined playback that occur with traditional RAID groupings. SecureVideo dramatically improves the reliability of video data, maximizing continuous playback and minimizing distorted or missing frames. The difference in reliability between traditional RAID and SecureVideo is illustrated in the following figures.

Figure 3: Completely intact video sequence

Figure 4: Traditional RAID groupings lead to total data loss and inability to play back video

Figure 5: SecureVideo ensures playback with minimal frame loss
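
The read-side decision behind the behavior shown in Figures 4 and 5 can be sketched as follows. This is a toy illustration, not the actual SecureVideo implementation; it simply assumes a stripe with M parity strips can tolerate at most M missing strips, and salvages whatever remains otherwise.

```python
# Toy illustration of a SecureVideo-style read: reconstruct when the
# redundancy is sufficient, otherwise salvage the readable strips
# instead of failing the whole stripe (as a traditional RAID group would).

def reconstruct(strips: list[bytes | None]) -> bytes:
    """Placeholder: a real erasure decoder would rebuild the missing strips."""
    return b"".join(s if s is not None else b"<rebuilt>" for s in strips)

def read_stripe(strips: list[bytes | None], parity_count: int) -> tuple[bytes, bool]:
    """Return (data, complete). strips holds None for unreadable strips."""
    missing = [i for i, s in enumerate(strips) if s is None]
    if len(missing) <= parity_count:
        # Enough redundancy: the erasure decoder rebuilds the stripe.
        return reconstruct(strips), True
    # Too many losses: skip unreadable strips, keep the playable frames.
    salvaged = b"".join(s for s in strips if s is not None)
    return salvaged, False

# A stripe protected with 2 parity strips loses 3 data strips: the read
# still succeeds, returning the surviving frames and complete=False.
data, complete = read_stripe([b"f1", None, b"f3", None, None, b"f6"], parity_count=2)
print(complete, data)    # False b'f1f3f6'
```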

SecureData ensures disaster recovery for video

SecureData implements off-site backup for video storage to ensure data reliability. If the storage system at the production center fails, the video surveillance media server accesses the backup files at the DR center through the IP network, maximizing service continuity. Data backed up to other sites can also be used to restore data in the event of a disaster or system failure at the primary site, avoiding data loss.

If not all data needs to be backed up to the primary site, backup can be applied only to critical data from selected lower-level monitoring sites. When remote backup is configured, critical data is backed up remotely to the specified path. The SecureData technology in OceanStor 9000 adapts to the particulars of the video surveillance scenario and implements remote backup across different locations to achieve off-site disaster recovery.

SecureData is completely transparent to the upper-level video surveillance platform, eliminating compatibility issues. SecureData can be thought of as a remote mirroring utility for video surveillance data. If the upper-level platform needs to access the backup data, only the data storage path of the backup destination needs to be configured in the video surveillance platform, providing quick and easy access to backup data when it is most needed.

Working principles 

The SecureData DR technology for video applies to two main application scenarios:

Scenario 1: OceanStor 9000 systems are deployed at the primary and secondary sites.

Scenario 2: OceanStor 9000 is combined with third-party storage systems to form the DR solution. 

The SecureData working principles for each scenario are described separately.

In scenario 1, the asynchronous remote replication of video files is implemented between the source (home) and destination (secondary) directories with directory-based snapshots.  

When replicating data for the first time, the system creates a snapshot of the video files stored in the home directory at the time of replication, which serves as the reference snapshot for video file synchronization. For each subsequent synchronization, the system takes another snapshot of the home directory, then compares it with the previous snapshot and records the differences. Only the changes to the video files in the home directory are synchronized to the secondary directory, eliminating the need to traverse the entire directory tree and thereby improving the efficiency of incremental synchronization. Video files changed after the point-in-time snapshot was taken are not sent to the secondary directory until the next full or incremental backup, ensuring data consistency and avoiding overlap with the synchronization already in progress.

For example, a customer initiates synchronization of the video files in the home directory at 14:00, and the operation completes at 14:10. At 14:05, the user modifies video file A in the home directory. This change is not copied to the secondary directory until the next synchronization operation. Once the synchronization is complete, the system compares the video files in the secondary directory with the reference snapshot of the home directory to check for differences. Because the synchronization only considers the files captured in the snapshot taken when the process was initiated, the secondary directory will match that snapshot of the home directory.
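
A minimal way to picture the snapshot-diff mechanism is sketched below: each "snapshot" is reduced to a mapping of file paths to modification times, and only files that differ from the reference snapshot are copied. The real feature operates on file-system snapshots inside OceanStor 9000; this sketch only conveys the control flow, and the directory paths are placeholders.

```python
# Toy sketch of snapshot-based incremental replication: compare the new
# snapshot of the home directory with the reference snapshot, send only
# the changed files, then promote the new snapshot to be the reference.
import os
import shutil

def take_snapshot(directory: str) -> dict[str, float]:
    """Record each file's modification time (stand-in for a real snapshot)."""
    snapshot = {}
    for root, _, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            snapshot[os.path.relpath(path, directory)] = os.path.getmtime(path)
    return snapshot

def incremental_sync(home: str, secondary: str,
                     reference: dict[str, float]) -> dict[str, float]:
    """Copy only files added or changed since the reference snapshot."""
    current = take_snapshot(home)
    for rel_path, mtime in current.items():
        if reference.get(rel_path) != mtime:          # new or modified file
            dst = os.path.join(secondary, rel_path)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(os.path.join(home, rel_path), dst)
    return current                                     # becomes the new reference

# Usage: the first call copies everything; later calls copy only changes.
# reference = incremental_sync("/video/home", "/video/secondary", reference={})
```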

Expanded explanation  

The synchronization interval can be set to as little as 15 minutes, which means a recovery point objective (RPO) of roughly 30 minutes (approximately one synchronization interval plus the time needed to complete the synchronization) can be achieved, provided there is ample bandwidth and the settings for load balancing, access, and other features allow it.

By default, the secondary directory is in the write-protected state, meaning it can only receive data synchronized from the home directory and denies all other write requests. If the primary site fails or needs to go offline for maintenance, the data in the secondary directory is automatically rolled back to the latest snapshot point at which the home and secondary directories were consistent, after which the secondary directory is ready to take over the services of the home directory.

When the primary site recovers, SecureData switches roles between the sites. In this case, the primary site automatically rolls back the original home directory (now acting as the secondary directory) using the latest consistent snapshot, restoring the data in this directory to the state captured at the most recent consistent snapshot point.

Scenario 2 uses the open-source rsync software, enhanced with task configuration, logic control, and other functions. With this capability, OceanStor 9000 can remotely synchronize video files between the local system and a remote system (which can be a heterogeneous storage system).

The rsync-based remote replication of video files compares the source and destination directories for differences. If a video file is modified or added, it is copied to the destination directory during the next scheduled or manually initiated synchronization.

Push or Pull synchronization modes can be implemented in scenario 2.

Push mode: OceanStor 9000 replicates the video files in the local directory to the backup directory. This mode is mainly used to back up local video files.

Pull mode: OceanStor 9000 replicates the video files in the backup directory to the local directory. This mode is mainly used to restore video files on the local directory that have become corrupted.

In both Push and Pull modes, the local OceanStor 9000 is the initiating end of all synchronization operations.

In Push mode, the OceanStor 9000 acts as the NFS client and the backup system serves as the NFS server; the NFS protocol is used to back up video files to the backup system. The local directory is the source directory, and the video files and their basic attributes are synchronized to the destination directory (backup directory) in the backup system. In Pull mode, the backup directory serves as the source directory, and video files are synchronized from it to the OceanStor 9000.
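
For scenario 2, the Push and Pull operations map naturally onto rsync invocations. The sketch below shows what such calls might look like from a script, with placeholder paths; the actual task configuration, scheduling, and logic control are built into OceanStor 9000 rather than exposed as these commands.

```python
# Illustrative push/pull replication using the open-source rsync tool.
# Paths are placeholders; the OceanStor 9000 integration wraps rsync
# with its own task configuration and scheduling.
import subprocess

LOCAL_DIR = "/mnt/oceanstor/video/"      # local video directory (placeholder)
BACKUP_DIR = "/mnt/backup_nfs/video/"    # NFS-mounted backup directory (placeholder)

def push() -> None:
    """Back up local video files to the backup system (Push mode)."""
    subprocess.run(["rsync", "-av", LOCAL_DIR, BACKUP_DIR], check=True)

def pull() -> None:
    """Restore video files from the backup system to the local directory (Pull mode)."""
    subprocess.run(["rsync", "-av", BACKUP_DIR, LOCAL_DIR], check=True)

if __name__ == "__main__":
    push()   # e.g. run on a schedule to keep the backup directory current
```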

Conclusion

With the ever-increasing value of data, customers attach more and more importance to the reliability of their video assets. Huawei OceanStor 9000 provides comprehensive data reliability assurance, delivering compelling benefits to ISVs looking to enhance their core competencies.

Figure 6: Snapshot-based asynchronous remote replication

OceanStor 9000 for Video Cloud Center storage

Designed for Big Data, the Huawei OceanStor 9000 storage system uses a symmetric distributed architecture to deliver cutting-edge performance, large-scale horizontal expansion capabilities, and a super-large single file system, providing shared storage for structured data and unstructured data.  

In the video surveillance field, OceanStor 9000 serves as the storage for the video cloud center. Applications include Safe City, banking, rail transport, and other deployments requiring large-scale video surveillance systems. Huawei OceanStor 9000 consolidates storage, archiving, and analysis capabilities, making it ideal for interworking with the upper-layer service system to provide real-time access to video feeds, intelligent analysis, image detection, facial recognition, and geographic information system (GIS) services, among many other applications.

Product design

OceanStor 9000 (front and rear views)

Networking scenarios

Figure 7: Networking applications

Figure 8: 10 GE networking

Figure 9: Internal network IB

Figure 10: External network 10 GE

Highlights

Smart convergence 

  • Big data life cycle integration
  • Management integration

Outstanding performance

  • High-speed internal interworking
  • SSD-based metadata access acceleration 
  • 55 TB global cache  
  • Dynamic storage tiering
  • Intelligent load balancing

Flexible expansion

  • Linear expansion from 3 to 288 nodes  
  • 40 PB global namespace

High reliability and availability

  • RAID-based N+M data protection technology
  • Restore data at a speed of up to 1 TB/hour

Simplified management

  • Flexible space quota
  • Automatic statistics collection and analysis
  • Automatic deployment

Model: OceanStor 9000 Capacity Node

Hardware specifications

  • System architecture: Symmetric distributed architecture (4U)
  • Number of nodes: 3 to 288
  • Cache per node: Standard configuration of 48 GB, expandable to 192 GB
  • Disks per node: Standard configuration of 1 x 3.5-inch 200 GB SSD + 35 x 3.5-inch 4 TB SATA disks (the SSD/HDD configuration ratio can be adjusted based on actual performance requirements)
  • Disk types: 3.5-inch SSD, SATA, and NL-SAS
  • RAID levels: 0, 1, 3, 5, 6, 10, 50
  • Front-end network type: 10 GE or 40 Gbit/s InfiniBand
  • Internal network type: 10 GE Ethernet or 40 Gbit/s InfiniBand

Software features

  • Data protection levels: N+1, N+2, N+3, N+4
  • File system: Wushan distributed file system; supports a global namespace that can be dynamically expanded up to 40 PB
  • Value-added features: Dynamic storage tiering (InfoTier), automatic client connection load balancing (InfoEqualizer), and space quota management (InfoAllocator)
  • Thin provisioning: Configuration-free thin provisioning
  • Data self-healing: Automatic, concurrent, and fast data restoration at speeds of up to 1 TB/hour
  • System expansion: One-click online expansion; single-node expansion in 60 seconds
  • Global cache: Up to 55 TB
  • Supported operating systems: Windows, Linux, Mac OS
  • Supported protocols: NFS, CIFS, HDFS, NIS, Microsoft Active Directory, LDAP, and SNMP
  • System management: Support for users with different management rights; domain- and rights-based user management; alarm notification by email, SMS, SNMP, and Syslog
  • Maintenance convenience: Automatic faulty-disk detection and alarm notification; centralized batch replacement of faulty disks avoids immediate disk replacement and reduces manual maintenance