With David G. Hill, principal, Mesabi Group LLC.
Question: A lot of IT executives facing a storage crunch are investigating virtualization but are concerned about interoperability, particularly in multi-vendor environments. Has the technology overcome this hurdle yet?
Hill: Interoperability, in the sense of whether one vendor's virtualization product can virtualize storage across a multi-vendor environment, is not an issue; several products do that very well. Interoperability in the sense of being able to run data management software, such as synchronous remote mirroring software, on Vendor B's hardware at the same time as virtualizing with Vendor A's software is typically non-existent. And therein lies the rub: IT executives are not likely to give up products that they have already bought and are familiar with (even though technically they could switch) just to get virtualization. So, except in select circumstances, virtualizing in a multi-vendor environment is biting off more than most IT executives are willing to chew. However, virtualizing in a homogeneous (one-vendor) environment, whether in a single array or across multiple arrays, is very doable.
Question: Do you think it could usher in an era in which competing storage mechanisms (SAN vs. NAS, iSCSI vs. Fibre Channel) become irrelevant?
Hill: No. SAN, NAS, iSCSI, and Fibre Channel relate to the physical "plumbing" (in some sense) of how servers use a storage network to connect to the storage that they use. The physical plumbing will not go away. Although networks can be virtualized, storage virtualization is about having physical storage resources, such as a set of disk drives or tape drives, appear to a server as a logical resource. For example, a server thinks that it is writing to drive E, but in reality there is no fixed drive E. An IT administrator may have a number of drives that can act as drive E, and which physical drive fills that role may change over time. Moreover, storage virtualization comes in many "flavors" — block-based, file-based, device-based, and tape-based. For example, a popular use of virtualization today is using a virtual tape library (VTL) for disk-based backup. That VTL is probably connected to a SAN.
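The "drive E" idea above can be sketched in a few lines of code. This is a hypothetical toy model, not any vendor's product: a `VirtualDrive` class (an invented name) maps logical blocks to whichever physical device currently holds them, so an administrator can migrate data without the server noticing.

```python
# Toy sketch of storage virtualization: the server sees one logical drive,
# while the data may live on (and move between) several physical devices.

class VirtualDrive:
    """Presents one logical drive; maps logical blocks to physical devices."""

    def __init__(self, physical_devices):
        # Each "device" is modeled here as a dict of block_id -> data.
        self.devices = physical_devices
        self.block_map = {}  # logical block -> (device index, physical block)

    def write(self, logical_block, data):
        # Placement policy is hidden from the server; spread blocks round-robin.
        dev = logical_block % len(self.devices)
        self.devices[dev][logical_block] = data
        self.block_map[logical_block] = (dev, logical_block)

    def read(self, logical_block):
        dev, phys = self.block_map[logical_block]
        return self.devices[dev][phys]

    def migrate(self, logical_block, new_dev):
        # The administrator moves data; the server still just sees "drive E".
        old_dev, phys = self.block_map[logical_block]
        data = self.devices[old_dev].pop(phys)
        self.devices[new_dev][phys] = data
        self.block_map[logical_block] = (new_dev, phys)


drive_e = VirtualDrive([{}, {}, {}])   # three physical devices behind one drive
drive_e.write(0, b"payroll")
drive_e.migrate(0, 2)                  # data moves to another physical device
assert drive_e.read(0) == b"payroll"   # the server's view is unchanged
```

The point of the sketch is the indirection table (`block_map`): because every server I/O goes through it, the physical backing can change at any time without the server's cooperation.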
Question: One of the chief benefits of virtualization is that it frees individual servers from relying on a fixed partition in a storage array. But wouldn't such an environment require an advanced layer of data management software as well?
Hill: No. Virtualization software operates at the "back end," and data management software, such as replication software or backup/restore software, operates at the "front end." For example, as long as backup/restore software can issue an I/O request for a file and the virtualization product can make sure that the file is delivered, the backup/restore software couldn't care less where the file physically resides. That ability of the front-end and back-end products to do the necessary handshaking may not exist natively, however. Vendor A can design its remote mirroring software to work with its own virtualization software, but it would have to get Vendor B's cooperation to make Vendor B's remote mirroring software work with Vendor A's virtualization product. And Vendor A is not likely to make the change itself, nor to give Vendor B a source-code version of its software to examine. Virtualization runs very well in a homogeneous storage array, where one vendor makes sure that the data management software is compatible.
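The front-end/back-end split described above can be illustrated with a minimal sketch. The class and method names (`VirtualizationLayer`, `BackupTool`, `fetch`) are hypothetical, chosen only for the example: the backup software asks for files by name, and the virtualization layer alone knows which array actually holds each one.

```python
# Sketch of the separation of concerns: front-end data management software
# issues I/O requests; the back-end virtualization layer resolves where the
# data physically lives.

class VirtualizationLayer:
    """Back end: resolves a file name to whatever device currently holds it."""

    def __init__(self):
        self.location = {}  # file name -> (array name, contents)

    def store(self, name, array, contents):
        self.location[name] = (array, contents)

    def fetch(self, name):
        array, contents = self.location[name]
        return contents  # the caller never learns which array answered


class BackupTool:
    """Front end: requests files by name, indifferent to physical placement."""

    def __init__(self, storage):
        self.storage = storage

    def backup(self, names):
        return {name: self.storage.fetch(name) for name in names}


layer = VirtualizationLayer()
layer.store("orders.db", array="array-B", contents=b"rows")
copies = BackupTool(layer).backup(["orders.db"])
assert copies["orders.db"] == b"rows"
```

The handshake Hill describes is simply the `fetch` interface: as long as both vendors agree on that interface, the front end works unchanged; when they do not, the two products cannot cooperate even though each works fine on its own.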
This 3 Questions originally appeared in a weekly report from IT Business Edge.