To test Stonefly’s i3000, I used a legacy storage system: a parallel-SCSI array from Winchester Systems housing 12 Ultra160 drives and offering two host SCSI connections.
Setting up my test IP SAN with just one storage device was easy: I connected each SCSI port on the array to one of the i3000’s SCSI ports. To connect additional arrays, you can either daisy-chain them to the first array or attach them to the i3000’s second SCSI port.
For networking, the i3000 provides two GbE ports that balance data traffic, plus a separate management port. I linked the GbE ports to my Gigabit switch and the management port to a slower hub.
My hosts were an HP ProLiant ML350 and a DL360 running Windows 2000 Advanced Server with SP4 and Windows Server 2003. Both had built-in GbE NICs, so I downloaded the Microsoft iSCSI drivers to handle virtualized storage from the Stonefly boxes. On a separate machine, I installed Red Hat Linux 9 and iSCSI drivers from the Red Hat site.
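On Red Hat Linux 9, iSCSI drivers of that era were typically pointed at the target by editing /etc/iscsi.conf. A minimal sketch, assuming the i3000’s data port answered at a hypothetical address such as 192.168.1.50:

```
# /etc/iscsi.conf -- minimal discovery entry
# (the address below is a placeholder, not the one used in this test)
DiscoveryAddress=192.168.1.50
```

After restarting the iscsi service, the initiator discovers and logs in to the volumes the i3000 exports, which then appear to the host as ordinary SCSI disks.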
Part of my testing verified that pulling physical drives and severing connections between the switches and disk arrays, while staying within the redundancy limits of the configuration, would not cripple the unit.
To simulate data traffic during the fail-over and resiliency tests, I kept Iometer scripts running on my servers to verify that ongoing applications were not affected. Performance, as measured by Iometer, was not bad: using various scripts that mimic file-server behavior, I measured combined transfer rates from three concurrent machines ranging from 18MBps to 38MBps.
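As a back-of-the-envelope check, the combined figure is simply the sum of what each host pushes concurrently. A short Python sketch, using made-up per-host numbers chosen only to bracket the measured 18MBps-to-38MBps range (they are not the actual readings from these runs):

```python
# Combined throughput seen by the i3000 under Iometer load.
# Per-host values below are illustrative placeholders; only the
# 18-38MBps combined range comes from the test runs.
per_host_mbps = {
    "light file-server mix": [5.5, 6.0, 6.5],    # hypothetical worst case
    "heavy file-server mix": [12.0, 13.5, 12.5], # hypothetical best case
}

# Aggregate throughput is the sum across the three concurrent hosts.
combined = {mix: sum(rates) for mix, rates in per_host_mbps.items()}

for mix, total in combined.items():
    print(f"{mix}: {total:.1f}MBps combined from {len(per_host_mbps[mix])} hosts")
```

The point of running the scripts concurrently is that an IP SAN head like the i3000 serializes all host traffic through its GbE ports, so the aggregate, not any single host's rate, is what stresses the box.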
I ran Replicator on one server, setting up mirrors between volumes on local drives and on the Winchester appliance. To reproduce the drag of a slow connection during those tests, I moved the server behind a 10Mbps hub and added network traffic to generate collisions. From the Replicator GUI, I clocked replication data-transfer rates of about 40MBps.