Tuesday, January 6, 2009

A look at some more AX4/iSCSI availability diagrams

Your comments on AX4 & iSCSI high availability were very informative and provided a number of ideas for improving the described availability scenario. In this post, Scott Lowe continues the availability discussion.
You guys gave me some great thoughts on my last posting, in which I discussed my AX4/iSCSI highly available architecture. In this posting, I will continue the thread and show you what the Westminster College architecture will look like in a few weeks. Some of this information is based on ideas provided in your comments. Although I’ve had the basic architectural diagram in mind for quite some time, your comments have helped to refine it.

Let’s start with a look at how VMware ESX will fit into our architecture.


This diagram is very similar to the one from the previous posting, with one change. At the bottom of the diagram, I show an ESX cluster, fully VMotion-enabled. Each ESX server has multiple connections to the iSCSI storage network as well as to the primary network that users use to connect to the ESX servers. Under this scenario, we will achieve a high level of service availability for all of the servers running on the individual ESX hosts. We’ll get to a highly available architecture for our SQL servers, as well as some other non-ESX services, through clustering, which will also entail a setup like the one above.
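As a quick sanity check on the storage connectivity, here is a minimal Python sketch of the kind of test I plan to run from each ESX host’s network (or from a VM pinned to each host): it simply confirms that both AX4 iSCSI portals answer on TCP port 3260. The portal addresses are hypothetical placeholders, not our actual addressing.

# Minimal sketch: confirm both AX4 iSCSI portals answer on TCP 3260.
# The portal IPs below are assumed placeholders, not real addresses.
import socket

ISCSI_PORT = 3260
AX4_PORTALS = ["192.168.50.10", "192.168.50.11"]  # assumed SP A / SP B iSCSI ports

def portal_reachable(ip: str, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the iSCSI portal succeeds."""
    try:
        with socket.create_connection((ip, ISCSI_PORT), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for portal in AX4_PORTALS:
        print(f"iSCSI portal {portal}: {'up' if portal_reachable(portal) else 'DOWN'}")

It’s no substitute for real multipath failover testing, but it’s a handy smoke test to run after any cabling or VLAN change.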

The next scenario expands on the scenario shown in the previous discussion.


I mentioned in that posting that, for simplicity’s sake, I wouldn’t show the connections to our core switch, an HP ProCurve 5412zl. One of the comments on the previous posting recommended that we use the HP 5412zl for our primary iSCSI VLAN rather than our Dell blade-based M6220 switches. Under this scenario, we would bond together the four uplink ports from the M6220s to the 5412zl. The only downside to this scenario is that all iSCSI traffic from our blade chassis would have to traverse both the M6220 and the 5412zl. An alternative would be to use one uplink port on each of the M6220s to connect to the AX4 and connect the other pair of iSCSI ports on the AX4 to the 5412zl. Doing this, we would have only two ports available to bond together from the M6220s to the 5412zl. We will test both scenarios, but I suspect that we will go with the alternative scenario I just described, as it provides a higher level of redundancy.
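To make that comparison a bit more concrete before we get into the lab, here is a rough Python sketch of how I’m thinking about the redundancy question: model each scenario as a set of paths from the blade chassis to the AX4, where a path is just the set of components it depends on, then see which single-component failures still leave a working path. The component names and paths are a deliberately simplified, assumed model of the cabling described above, not an exact map of our network.

# Simplified, assumed model of the two cabling scenarios described above.
# A "path" is the set of components it depends on; a scenario survives a
# single-component failure if at least one path avoids that component.

def surviving_failures(paths):
    """Return the components whose individual failure still leaves a working path."""
    components = set().union(*paths)
    return {c for c in components if any(c not in path for path in paths)}

# Scenario 1: all blade iSCSI traffic crosses an M6220 and the 5412zl to reach the AX4.
scenario_1 = [
    {"m6220-a", "5412zl", "ax4-sp-a"},
    {"m6220-b", "5412zl", "ax4-sp-b"},
]

# Scenario 2: each M6220 also has one uplink cabled straight to an AX4 iSCSI port,
# with the remaining AX4 ports connected to the 5412zl.
scenario_2 = [
    {"m6220-a", "ax4-sp-a"},
    {"m6220-b", "ax4-sp-b"},
    {"m6220-a", "5412zl", "ax4-sp-b"},
    {"m6220-b", "5412zl", "ax4-sp-a"},
]

for name, paths in [("scenario 1", scenario_1), ("scenario 2", scenario_2)]:
    print(f"{name}: tolerates the failure of {sorted(surviving_failures(paths))}")

Under this simplified model, the first scenario cannot tolerate losing the 5412zl, while the alternative still has paths through the direct AX4 uplinks, which matches my instinct that the second option provides the higher level of redundancy.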

I very much look forward to your comments and suggestions.

iSCSI is the future of storage

iSCSI is here to stay and will eventually supplant a significant portion of the installed base of Fibre Channel SANs out there. Further, as organizations make their initial forays into block-level shared storage, iSCSI will beat Fibre Channel more often than not.

This week, HP announced its $360 million acquisition of LeftHand Networks. Last year, Dell surprised the tech industry with a $1.4 billion purchase of the formerly independent EqualLogic. With these iSCSI snap-ups by true tech titans, iSCSI has officially arrived, is here to stay, and, I believe, will become the technology of choice for most organizations in the future.

This is not to say that iSCSI has been sitting in the background up to this point. On the contrary, the technology has taken the industry by storm. Both of these companies bet their entire businesses on the proposition that organizations would see the intrinsic value of iSCSI’s straightforward installation and management. To say that both companies have been successful would be an understatement.

I’m a big fan of both EqualLogic’s and LeftHand Networks’ offerings, having purchased an EqualLogic unit in a former life. At that time, I narrowed my selection down to two options: LeftHand and EqualLogic. Both solutions had their pros and cons, but both were more than viable.

It’s not all about EqualLogic and LeftHand, though. The big guns in storage have finally jumped feet first into the iSCSI fray with extremely compelling products of their own. Previously, these players, including EMC and NetApp, simply bolted iSCSI onto existing products. Lately, even the biggest Fibre Channel vendors are releasing native iSCSI arrays aimed at the mid-tier of the market. EMC’s AX4, for example, is available in both native iSCSI and native Fibre Channel versions and is priced in such a way that any organization considering EqualLogic or LeftHand should make sure to give the EMC AX4 a look. To be fair, the iSCSI-only AX4:

-Does not support SAN Copy for SAN-to-SAN replication
-Is not as easy to install or manage as the aforementioned devices, though it isn’t bad either
-Does not increase bandwidth to the array as additional space is added
-Does not include thin provisioning, although this is rumored to be rectified in a future software release
-Supports a maximum of 64 attached hosts

But the price per terabyte is simply incredible, and a comparable solution from another vendor would not have been attainable. This year, I purchased just shy of 14 TB of raw space on a pair of AX4 arrays (4.8 TB SAS and 9 TB SATA) for under $40K. For the foreseeable future, I don’t need SAN Copy, and space can be managed in ways other than thin provisioning. Over time, we’ll run about two dozen virtual machines on the AX4 along with our administrative databases and Exchange 2007 databases. By the time I need the additional features, the AX4 will be due for replacement anyway.
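For what it’s worth, here is the back-of-the-envelope math behind that claim, using the rounded figures above and treating $40K as an upper bound, since I’m not breaking out the exact purchase price:

# Rough price-per-TB math using the rounded figures from this post.
sas_tb, sata_tb = 4.8, 9.0   # raw capacity purchased, in TB
total_cost = 40000           # upper bound on the purchase price, in dollars

raw_tb = sas_tb + sata_tb
print(f"Raw capacity: {raw_tb:.1f} TB")
print(f"Price per raw TB: under ${total_cost / raw_tb:,.0f}")

Just shy of $3,000 per raw terabyte, across both the SAS and SATA tiers, is what made the AX4 so hard to pass up.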

iSCSI started out at the low end of the market, helping smaller organizations move toward shared storage and away from direct-attached solutions. As time goes on, iSCSI is moving up the food chain and, in many cases, supplanting small and mid-sized Fibre Channel arrays, particularly in organizations that have never had a SAN before. As iSCSI continues to take advantage of high-speed SAS disks and begins to use 10Gb Ethernet as a transport mechanism, I see it continuing to move higher into the market. Of course, faster, more reliable disks and faster networking will begin to close the savings gap between iSCSI and Fibre Channel. Even so, iSCSI’s reliance on Ethernet as an underlying transport brings major simplicity to the storage equation, and I doubt that iSCSI’s costs will ever surpass Fibre Channel’s, mainly because of the expensive networking hardware needed for significant Fibre Channel implementations.

Even though iSCSI will continue to make inroads further into many organizations, I don’t think that iSCSI will ever completely push Fibre Channel out of the way. Many organizations rely on the raw performance afforded by Fibre Channel and the folks behind Fibre Channel’s specifications aren’t sitting still. Every year brings advances to Fibre Channel, including faster disks and improved connection speeds.

In short, I see the iSCSI market continuing to grow very rapidly and, over time, supplanting what would have been Fibre Channel installations. Further, as organizations continue to expand their storage infrastructures, iSCSI will be a very strong contender, particularly as iSCSI solutions are updated to take advantage of improvements in networking speed and disk performance.