
Redefining Storage with a Software-Defined Solution

Software-Defined Storage (SDS) has had a significant impact in the data center and cloud realms. Already, software-defined products are unifying data centers by enabling better control and more scalable environments. But the question remains: What can it really do for storage?

SDS is a subsystem within a converged cloud architecture designed to support dynamically changing storage requirements based on real-time fluctuations in application traffic. As with networking and compute infrastructure, storage products are being engineered to separate the control and data planes, enabling resource management at the group-policy level, independent of the low-level mechanics of each device.

The core principle of SDS is that all storage services, including resource pool management, are implemented in autonomous SDS controller software that is decoupled from standard server-based storage hardware. While implementations vary from vendor to vendor, the common goal is to maximize flexibility in addressing customer concerns. SDS is designed to unlock the full capabilities of mass storage hardware platforms, and virtualization techniques enable the deployment of a unified resource management layer to organize scheduling and other high-level activities.
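
To make the decoupled control plane concrete, here is a minimal sketch in Python (with hypothetical class and method names; it does not reflect any vendor's actual SDS interface) of a controller that registers heterogeneous backend devices into a single resource pool and makes placement decisions by policy, leaving the data path to the devices themselves.

    from dataclasses import dataclass

    @dataclass
    class BackendDevice:
        """A storage device from any vendor, described only by its capabilities."""
        name: str
        vendor: str
        media: str       # e.g. "ssd" or "hdd"
        free_gb: int

    class SDSController:
        """Control plane: owns pooling and placement policy, never touches the data path."""

        def __init__(self):
            self.devices = []

        def register(self, device: BackendDevice):
            # Hardware from any vendor joins the same resource pool.
            self.devices.append(device)

        def provision_volume(self, size_gb: int, media: str) -> BackendDevice:
            # Policy decision made in software: pick the first device of the
            # requested media type with enough free capacity.
            for dev in self.devices:
                if dev.media == media and dev.free_gb >= size_gb:
                    dev.free_gb -= size_gb
                    return dev
            raise RuntimeError("no backend device satisfies the request")

    controller = SDSController()
    controller.register(BackendDevice("array-a", "VendorA", "ssd", 2000))
    controller.register(BackendDevice("array-b", "VendorB", "hdd", 20000))
    print(controller.provision_volume(500, "ssd").name)   # -> array-a

The point of the sketch is only the separation of concerns: placement logic lives in the controller software, while the registered devices remain interchangeable commodity hardware.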

Historical Perspective

At one time, costly segregation of physical infrastructure components by vendor was the norm. Data could not be shared or moved from one vendor’s storage device to another without significant labor and, usually, the purchase of a third-party product or driver.

‘Hardware creep’ set in as more servers, racks, and other equipment were acquired to keep pace with growth. Then storage virtualization came along, creating a hardware abstraction layer that spans storage devices from multiple vendors. This meant all disks from the different vendors could function as a single disk pool. Storage virtualization helped protect the existing infrastructure while allowing for future expansion.

But storage needs kept growing, and, with wider acceptance of the cloud model, storage virtualization was no longer enough. Big Data, the IoT, and other trends compounded the growth of data, necessitating a logical layer to optimize the operation of the storage, network, and server infrastructures.

A Star is Born

Thus, the software-defined age was born. The idea was simple: Create a vendor-agnostic platform from which to work and a virtual layer that directs the 1s and 0s more efficiently. This software-defined environment could include Software-Defined Networking (SDN), SDS, and hypervisors such as VMware and KVM. Using these software technologies together produces a new type of environment: hyper-converged. Because functionality is freed from the physical infrastructure, automation and management become more flexible.

SDS Benefits

  • Logical Storage Abstraction places a virtual layer between data requests and the physical storage components. By allowing manipulation of where and how data is distributed, the storage abstraction layer enables a heterogeneous storage infrastructure while the entire process is still controlled from a virtual instance. Users can present storage repositories to the SDS layer to allow that instance to control data flow.
  • Intelligent Storage Optimization unlocks efficiencies in existing storage that are not yet being utilized. Capacity pools are created, and data can then be delivered to each pool on the correct storage type while still utilizing built-in disk array features like deduplication, compression, and thin provisioning. So, for a tier-one application like SAP HANA (an in-memory, column-oriented, relational database management system), a storage pool of Solid-State Drive (SSD) disks can be built from only the disks suited to that application. The real power is the flexibility of choosing what is used for each application or environment. For SAP HANA, disks from three different storage arrays and three different vendors can be used, whether an entire array or just a shelf or two is assigned to the software-defined layer (see the sketch after this list).
  • Creating a More Powerful Storage Platform, in this case a hybrid storage model, provides a virtual control layer over the physical disks that allows both physical and virtual storage features to be used. Users can create a logical control layer that helps manage all the physical storage in the data center, regardless of vendor. The storage platform prevents vendor lock-in and permits migration of data or applications between different storage arrays.
  • Regaining Control of the Storage Infrastructure is a fundamental issue that must be addressed across the entire environment. How many disks, shelves, arrays, and controllers must be bought, and which vendor is the right fit?

Personnel must know and fully understand the I/O-intensive applications and where each data stream needs to be stored. Organizations must change how they think of storage in order to create tiers of logical disks that are independent of the physical systems.

By creating powerful virtual SDS applications that control their data infrastructures, organizations can see, control, manage, configure, and migrate data from one disk, or set of disks, to one or more others. Disks will no longer sit idle in reserve in storage arrays.

  • Expanding Cloud Storage is simplified by the ability of SDS to dynamically expand physical data centers into the cloud. Creating storage links from on-site IT systems to public cloud vendors establishes distributed data centers for replication, Disaster Recovery (DR), and load balancing. The long-term advantage is that data storage will be managed at the virtual layer, spanning heterogeneous storage components on site and in the cloud.
  • Hyper-Convergence is the result of combining SDS with other software-defined technologies. Virtual servers, virtual networks with the correct memory, user, and zoning allocations, and virtual data pools can be matched exactly to the needs of each application, in minutes instead of weeks.
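
As a rough illustration of the capacity-pool idea mentioned in the optimization bullet above (again a hypothetical Python sketch, not any product's real interface), the snippet below groups SSDs drawn from three different vendors' arrays into a single pool that a tier-one application such as SAP HANA could then be served from.

    # Hypothetical disk inventory: (array, vendor, media type, capacity in GB).
    disks = [
        ("array-1", "VendorA", "ssd", 800),
        ("array-2", "VendorB", "ssd", 800),
        ("array-3", "VendorC", "ssd", 800),
        ("array-1", "VendorA", "hdd", 4000),
        ("array-2", "VendorB", "hdd", 4000),
    ]

    # Build capacity pools by media type, regardless of array or vendor.
    pools = {}
    for array, vendor, media, capacity in disks:
        pools.setdefault(media, []).append((array, vendor, capacity))

    # A tier-one application such as SAP HANA is served from the SSD pool;
    # the pool spans three vendors' arrays but presents as one resource.
    hana_pool = pools["ssd"]
    print("SSD pool capacity:", sum(c for _, _, c in hana_pool), "GB")
    print("Arrays in pool:", sorted({a for a, _, _ in hana_pool}))

The same grouping could be driven by any attribute the software-defined layer tracks, which is what allows an entire array, or just a shelf or two, to be contributed to a pool.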

Summary

Industry is beginning to leverage the power of storage regardless of vendor, location, or technology. Organizations are looking at present-day resources and will pool and distribute them on a global scale. With the vast quantities of data being generated each year, a better way to control the flow of information is required.

As a way to avoid buying more disks or being constrained by a fixed-platform storage infrastructure, SDS technologies abstract entire storage pools with the goal of 100 percent utilization. As with every other physical component in the data center infrastructure, virtual layers can truly help optimize resource consumption. Server virtualization has helped mitigate server sprawl, application virtualization has helped create new delivery methodologies, and now virtual storage technologies are helping to control heterogeneous storage platforms.

The future cloud and data center model anticipates massive growth in the volume of data passing through the infrastructure. Moreover, software-defined technologies have become critical to the efficient delivery of next-generation content.

By Jarrett Potts

Worldwide Director of Storage Marketing, Enterprise Business Group, Huawei Technologies Co., Ltd.