Benefiting from virtualization
Recent technological developments deliver tremendous flexibility to build and architect network storage solutions. By far the most prominent trends are disk space and CPU density. We can achieve significantly larger capacities with significantly higher processing power in smaller and less power-intensive configurations than ever before.
To be effective, aggregating drive space and processing power requires virtualization capabilities that present multiple components as a single logical pool. Whether a group of disk drives presenting a single logical unit number (LUN) or a set of clustered file servers presenting a single file space, underlying virtualization tools currently deliver on this requirement.
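As a minimal sketch of the pooling idea described above, the following code concatenates several disks of varying sizes into one logical address space, the way a LUN hides its member drives behind a single capacity. The names (`LogicalPool`, `resolve`) and the simple concatenation scheme are illustrative assumptions, not any vendor's implementation.

```python
class LogicalPool:
    """Present multiple member disks as one logical block address space."""

    def __init__(self, disk_sizes):
        # disk_sizes: capacity of each member disk, in blocks
        self.disk_sizes = list(disk_sizes)

    @property
    def capacity(self):
        # The pool advertises one aggregate capacity to its consumers.
        return sum(self.disk_sizes)

    def resolve(self, logical_block):
        # Translate a logical block address into (disk index, physical block).
        if not 0 <= logical_block < self.capacity:
            raise ValueError("logical block out of range")
        for disk, size in enumerate(self.disk_sizes):
            if logical_block < size:
                return (disk, logical_block)
            logical_block -= size

pool = LogicalPool([100, 250, 150])   # three disks, 500 blocks total
print(pool.capacity)                  # 500
print(pool.resolve(120))              # (1, 20)
print(pool.resolve(420))              # (2, 70)
```

A real volume manager would stripe or mirror across members rather than simply concatenate them, but the essential trick is the same: consumers see one address space, and the translation layer hides the physical layout.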
Carrying forward, one can envision the ability to continuously add capacity or processing power to any storage solution and then divide resources as needed. In one sense, this organic growth provides a never-ending roadmap. Grid computing often refers to this as a "scale out" architecture.
If only that were the case. Looking historically at technological constraints, an ongoing cycle exists between growing bigger and growing smarter, neither of which covers market needs alone.
An interesting parallel exists in the networking market. Ethernet has defied, and continues to defy, preconceived bandwidth limits. Over a short timeframe we've seen thousand-fold improvements in bandwidth, providing virtually unlimited capacity. At the same time, we've seen a large market for bandwidth management devices and software within local and wide area environments. No matter how big we make the pipes, there remains a need to be more efficient about their use.
The same effects will play out in the storage market, particularly with large, virtualized storage architectures. It is not just a matter of adding capacity or performance, but of how effectively they can be used.
There are two primary steps to get us there. Facing an increasing physical number of resources, the software and hardware tools to aggregate, virtualize, and manage storage will remain critical. But the more important step will be distilling the thought processes required of information technology and business managers to use it. With more space, performance, and configuration possibilities, the bottleneck will shift toward the management time and attention required to optimize resources. This is the critical gap going forward for virtualization. New solutions will need to set constraints on the human capital required for deployment and find ways to minimize this part of the equation. Today the time for assessment, planning, implementation, maintenance, and optimization of storage solutions often outweighs the benefits of deploying a new solution.
Tools restricting administrative decisions to the essentials will fulfill the promise of large virtualized network storage configurations. The greatest savings and advantages will come from delivering capacity, performance, and reliability coupled with a decision framework that minimizes management oversight. Ultimately, this will allow more users to interact with more applications that access a larger and more flexible pool of centrally managed storage.
- TABLE OF CONTENTS
- Introduction: Observations on storage virtualization
- Delivering a global storage solution
- Virtualizing servers, storage, and networks
- Moving storage virtualization up the stack
- Virtualization of data
- Benefiting from virtualization
- Future directions for virtualization: A foundation for the future
This excerpt was written by Gary Orenstein, author of IP Storage: Straight to the Core, and is taken from the book Storage Virtualization: Technologies for Simplifying Data Storage and Management by Tom Clark, courtesy of Addison-Wesley. If you found this book excerpt helpful, purchase the book from Addison-Wesley.
This was first published in October 2006