Internet Small Computer System Interface (iSCSI) is essentially the SCSI command set for performing block I/O mapped onto TCP/IP, leveraging common networking interfaces such as Ethernet. Intended for storage access, iSCSI is well suited to implementing tiered storage access from servers over Ethernet and IP-based networks as an alternative to Fibre Channel or network attached storage (NAS) file serving. Some organizations also use iSCSI for remote data access and data movement (mirroring) applications.
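Because every block of data rides inside TCP/IP packets, a useful back-of-envelope check is how much of the Ethernet wire actually carries payload. The sketch below uses standard header sizes (48-byte iSCSI basic header segment, 20-byte TCP, 20-byte IP, 18 bytes of Ethernet framing); the 8 KB data segment size is an illustrative assumption, and the model ignores digests, ACK traffic and the interframe gap.

```python
import math

def iscsi_wire_efficiency(data_segment=8192, mtu=1500):
    """Rough payload efficiency of one iSCSI data PDU over Ethernet.

    Assumes standard header sizes: 48-byte iSCSI basic header segment,
    20-byte TCP, 20-byte IP, 18-byte Ethernet framing (header + FCS).
    """
    ISCSI_BHS, TCP, IP, ETH = 48, 20, 20, 18
    tcp_payload_per_frame = mtu - IP - TCP          # 1460 with a 1500 MTU
    pdu_bytes = data_segment + ISCSI_BHS            # one PDU on the TCP stream
    frames = math.ceil(pdu_bytes / tcp_payload_per_frame)
    wire_bytes = pdu_bytes + frames * (IP + TCP + ETH)
    return data_segment / wire_bytes

# An 8 KB data segment travels at roughly 95% wire efficiency:
print(round(iscsi_wire_efficiency(), 3))  # ~0.954
```

The point of the arithmetic is that protocol overhead is modest, which is part of why iSCSI on ordinary Ethernet is a credible block-storage transport.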
Can you run multiple drives off a SATA port?
By default, basic SATA supports a point-to-point connection, for example, between a server and a disk drive. However, you can run multiple SATA disk drives off a single SATA port if port multipliers are used.
Port multipliers enable a SATA controller port to fan out to multiple SATA devices. When a port multiplier is used, for example, in a storage enclosure or disk shelf, a single SATA port and cable can access multiple SATA disk drives. With the emergence of SAS and the co-existence of SAS and SATA, you will start to hear more about SAS and SATA port multiplexers, multipliers, expanders and switches to facilitate connectivity between servers and storage arrays. These components are also found inside storage systems, between the server and the storage controller and between storage controllers and disk drives.
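The trade-off to remember is that all drives behind a port multiplier share one host link. The sketch below uses a simple fair-sharing model with illustrative numbers (a 3 Gb/s SATA link is roughly 300 MB/s after 8b/10b encoding); real multipliers behave differently depending on whether they use command-based or FIS-based switching.

```python
def per_drive_throughput(link_mbps, n_drives, drive_mbps):
    """Effective sequential throughput per drive behind a port multiplier.

    Fair-sharing model: each drive gets an equal slice of the shared host
    link, capped at what the drive itself can sustain.
    """
    if n_drives == 0:
        return 0.0
    return min(drive_mbps, link_mbps / n_drives)

# Five drives that can each stream 80 MB/s get throttled by a ~300 MB/s link:
print(per_drive_throughput(300, 5, 80))  # 60.0 MB/s each
# With only three drives, the link is no longer the bottleneck:
print(per_drive_throughput(300, 3, 80))  # 80
```

For capacity-oriented shelves this sharing is usually acceptable; for performance-sensitive workloads it is a reason to limit fan-out per port.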
Vendors to watch or learn more about if you are interested in SAS and SATA components include Broadcom, Intel, LSI Logic, PMC-Sierra and SiliconStor.
We have a 150-server dual-fabric SAN currently in place. That will grow to 350+ servers before year-end. We have 10 terabytes (TB) of HDS arrays, soon to be 20 TB, going through two Cisco directors. Some of our servers have dual HBAs, but many do not. Management is balking at spending the extra funds to go with dual HBAs for all servers, basically saying to live with single HBAs for the time being. What are the pros and cons of this approach?
A question for your management is what value they place on a specific application or server's availability and uptime. If a particular application or server can be offline and unavailable for some period of time during planned and unplanned outages, then a single HBA might be applicable. On the other hand, if the value of application availability exceeds the cost of an HBA, you need to work with your management so they understand that the price of an HBA can be less than the cost of downtime. When possible, look for ways to build in redundancy, including dual HBAs or network NICs, unless there is no business reason or benefit to do so. It is important to consider the potential consequences to your customers and application users.
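The price-versus-downtime comparison above can be made concrete with simple break-even arithmetic. The dollar figures below are purely illustrative assumptions, not quotes:

```python
def breakeven_downtime_hours(hba_cost, downtime_cost_per_hour):
    """Hours of avoided downtime at which a redundant HBA pays for itself."""
    return hba_cost / downtime_cost_per_hour

# Illustrative figures only: a $1,500 HBA weighed against $10,000/hour of
# application downtime pays for itself after nine minutes of avoided outage.
hours = breakeven_downtime_hours(1500, 10_000)
print(hours, hours * 60)  # 0.15 hours -> 9.0 minutes
```

Framing the request this way, per application tier rather than as a blanket policy, often makes the redundancy discussion with management easier.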
We are considering buying high-end storage. One vendor advocates Fibre Channel Arbitrated Loop technology and the other uses a switched-disk interface. Of course, each vendor is saying that their technology is better. We'd like to hear from an independent source and expert on this.
Traditional Fibre Channel Arbitrated Loop (FCAL) has been used on the back-end of storage systems for attachment of disk drives and disk expansion shelves to controllers. Similar to the way the host or server side attachment ports changed over to switched Fibre Channel several years ago, the new standard for storage systems is to leverage a switched or semi-switched (switched loop) connection.
The advantage of using a switched interface is simpler point-to-point connectivity and, in some cases, the potential for better performance and reliability. The lower cost of switching technology and embedded switch chips has given rise to the shift to switched bunch of disks (SBOD) for storage.
We know a Fibre Channel (FC) drive will cost roughly two times as much as a Fibre-attached technology adapted (FATA) drive. So, we could have twice as many FATA spindles working for the same price. Can you think of any reason to use FC over FATA?
If you can afford the FC disk drives and your applications can benefit from the lower latency, you could go that route. However, performance aside, and assuming that your environment and applications will be fine today and in the future with the FATA disk drives, that is a viable option. A FATA drive is essentially an FC-attached disk drive with dual porting, similar to a regular FC disk drive. The big differences are the lower price and lower performance -- putting FATA on par as an alternative to the Serial Advanced Technology Attachment (SATA) disk drives offered by other vendors. HP and other vendors, including EMC (DMX and the new CLARiiON CX3) and Xiotech, also utilize FATA-type disk drives under different marketing names in addition to, or instead of, SATA disk drives.
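One caveat to the "twice the spindles for the same price" argument is random I/O: doubling slower spindles does not automatically match the faster drives. The per-spindle IOPS figures below are rough, hypothetical rules of thumb (on the order of 180 IOPS for a 15K RPM FC drive, 80 IOPS for a 7.2K RPM FATA drive), and the model ignores cache and RAID write penalties:

```python
def aggregate_iops(spindles, iops_per_spindle):
    """Best-case random-I/O capability of a drive set (no cache, no RAID penalty)."""
    return spindles * iops_per_spindle

# Hypothetical figures: at half the price per drive, FATA buys twice the
# spindles, yet can still trail FC on random I/O for the same spend.
fc   = aggregate_iops(8, 180)    # 1440 IOPS from 8 FC spindles
fata = aggregate_iops(16, 80)    # 1280 IOPS from 16 FATA spindles
print(fc, fata)
```

For capacity-oriented or sequential workloads the FATA spindles win on dollars per gigabyte; for latency-sensitive random workloads the arithmetic can still favor FC.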
I currently work in a 150 TB, primarily Brocade/EMC environment. I have 15 years of IT experience and five years of SAN experience. What SAN-related certifications should I pursue?
There are many different options to pursue depending on where you want to go with your career, including vendor-specific and technology-neutral training from industry and private groups.
For example, if you want to leverage your experience with Brocade and EMC, then certifications from those vendors would be an option. If you are looking for broader, vendor-neutral certifications and qualifications, then check out the various SNIA certification and qualification programs.
How can I know how many I/Os each OS will perform on storage? I know there are many considerations. Can you help me get started?
Depending upon which OSs are involved, there are different tools that can be used to gather various levels of performance detail on a server-by-server basis. Likewise, depending upon which storage systems you have, there are tools that can tell you what your I/O performance is per server, per storage I/O port or on a LUN basis.
At a minimum, you should be able to use vendor-supplied tools from Microsoft, HP, IBM and Sun to monitor disk I/O performance. Also, look at third-party add-on tools. In general, look at the number of I/Os per second to the various disk drives during the boot process, and monitor performance during normal running operations. Also, take into consideration the performance impact during backup and database maintenance for a holistic performance picture.
Keep track of the average number of I/Os per second, the peak and sustained reads and writes, and the average I/O sizes to help characterize the workloads of the server. Depending upon the storage system being used, you should be able to get some information about the I/O workload activity. If this is a new install with no baseline or historical data to work from, you have more of a challenge in front of you. If you are not sure what to do with a new environment, drop me a note and we can discuss different options and strategies.
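The bookkeeping described above is straightforward once you have cumulative counters from whatever tool your platform provides. A minimal sketch, assuming you can sample (timestamp, total ops, total bytes) tuples from perfmon, iostat or an array's stats export (the sample format here is a hypothetical simplification):

```python
def characterize(samples):
    """Summarize a disk I/O workload from cumulative counter samples.

    `samples` is a chronological list of (seconds, total_ops, total_bytes)
    tuples. Returns (avg_iops, peak_iops, avg_io_size_bytes).
    """
    peak = 0.0
    for (t0, ops0, _), (t1, ops1, _) in zip(samples, samples[1:]):
        peak = max(peak, (ops1 - ops0) / (t1 - t0))
    elapsed = samples[-1][0] - samples[0][0]
    total_ops = samples[-1][1] - samples[0][1]
    total_bytes = samples[-1][2] - samples[0][2]
    return total_ops / elapsed, peak, total_bytes / total_ops

# Three one-second samples: 100 ops, then 200 ops, all 4 KB transfers.
avg, peak, size = characterize([(0, 0, 0), (1, 100, 409_600), (2, 300, 1_228_800)])
print(avg, peak, size)  # 150.0 200.0 4096.0
```

Collecting these same three numbers during boot, normal operations and backup windows gives you the baseline the answer recommends.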
Which is better: directors or switches? What are the advantages of each one?
In general, a director will have more scalability in terms of the number and types of ports, protocols and interfaces (types of blades), overall performance, redundancy and perhaps partitioning or other advanced resiliency features. Avoid the temptation to replace several switches with a single director. Instead, for high availability, deploy a pair of directors as separate fabrics and data access paths. The key is to identify your needs and requirements in terms of performance, protocols and interfaces, number of ports, topology and availability, among others, and align the applicable technology to meet those needs.