How to stress a UTM
We challenged the Astaro, SonicWall, WatchGuard, and ZyXel appliances with a maximum dose of legitimate traffic, 200 VPNs, and hundreds of Internet attacks, all at the same time.
For testing the throughput of the UTMs, we used Ixia Communications' IxLoad system to run synthetic Web, FTP, and e-mail traffic in patterns between the different interfaces. Ixia recently gave IxLoad the ability to run these same simulations through IPsec VPN tunnels, allowing us to exercise the firewalls' rules for the VPNs. One of our biggest problems over the years has been manually correlating traffic numbers between several different test tools, and dealing with the fact that TCP-based traffic such as HTTP will back off when competing traffic starts filling the pipe. With IxLoad, we could actually watch the HTTP traffic backing off as the FTP traffic ramped up.
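That backoff behavior is a consequence of TCP congestion control (additive increase, multiplicative decrease). As a rough illustration of why HTTP throughput sags as a bulk transfer ramps up, here is a toy simulation; it is our own sketch, not IxLoad's traffic model, and all the numbers are arbitrary.

```python
# Toy AIMD simulation: a TCP-like flow shares a fixed-capacity link with a
# competing flow that ramps up over time. Illustrative only -- real TCP and
# IxLoad's traffic engines are far more sophisticated.

LINK_CAPACITY = 100.0  # arbitrary units (think Mbps)

def simulate(steps=50):
    cwnd = 20.0          # sending rate of the TCP-like flow
    history = []
    for t in range(steps):
        competing = min(2.0 * t, 80.0)    # bulk flow ramping toward 80 units
        offered = cwnd + competing
        if offered > LINK_CAPACITY:
            cwnd = max(1.0, cwnd / 2.0)   # multiplicative decrease on congestion
        else:
            cwnd += 1.0                   # additive increase when there is headroom
        tcp_share = min(cwnd, max(0.0, LINK_CAPACITY - competing))
        history.append((competing, tcp_share))
    return history

hist = simulate()
early_tcp = hist[2][1]    # TCP-like flow's share while the competitor is small
late_tcp = hist[-1][1]    # its share once the competitor dominates the pipe
```

Running this, the TCP-like flow's share shrinks as the competing flow claims most of the link, which is exactly the pattern we watched in the IxLoad reports.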
Ixia loaned us the smallest chassis in its product line (the Optixia XM2), with 16 ports of gigabit throughput per blade and an embedded Linux machine behind each port. The basic architecture is that tests are loaded onto each port from a console; when the test runs, the console just collects data. This way, each dedicated port can run flat out, generating huge amounts of traffic and saving us the hassle of setting up banks of CPUs to generate the same load. Since we had multiple ports in each firewall zone (LAN, WAN, DMZ), we aggregated the ports on a trio of Extreme Networks gigabit switches, which provided more than enough bandwidth to avoid throttling the test. We also kept the traffic rates "real world": the WAN port never exceeded the rates you might find on even the sexiest commercial cable modems. When the test was done, Ixia's reporting feature generated comprehensive reports on the various test streams and correlated them in an easy-to-read format.
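The console's job, in essence, is to roll per-port counters up by firewall zone. The sketch below shows that kind of aggregation in miniature; the report fields and numbers are invented for illustration and bear no relation to Ixia's actual report format.

```python
# Hypothetical sketch of per-zone roll-up: each test port reports its own
# throughput counters, and a console sums them by firewall zone (LAN, WAN,
# DMZ). Field names and figures are invented, not Ixia's schema.

from collections import defaultdict

def aggregate_by_zone(port_reports):
    """Sum per-port throughput counters into per-zone totals (Mbps)."""
    totals = defaultdict(lambda: {"http_mbps": 0.0, "ftp_mbps": 0.0})
    for report in port_reports:
        zone = totals[report["zone"]]
        zone["http_mbps"] += report["http_mbps"]
        zone["ftp_mbps"] += report["ftp_mbps"]
    return dict(totals)

reports = [
    {"zone": "LAN", "http_mbps": 420.0, "ftp_mbps": 180.0},
    {"zone": "LAN", "http_mbps": 390.0, "ftp_mbps": 210.0},
    {"zone": "WAN", "http_mbps": 6.0,   "ftp_mbps": 2.0},   # cable-modem-class rate
    {"zone": "DMZ", "http_mbps": 250.0, "ftp_mbps": 75.0},
]
summary = aggregate_by_zone(reports)
```

Doing this once, centrally, is what spares the tester from reconciling reports from every port (or every tool) by hand.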
To run each UTM's management console, we set up a couple of modern workstations, each connected to an Avocent IP KVM and a remote power-down device from Server Technologies. This remote management setup really paid off, as we spent many a late night working out IPsec incompatibilities with firewall vendors remotely logged in to the firewall console while our staff ran the Ixia console.
The overall test goal was to create a reproducible set of tests so that each firewall vendor was tested against the exact same benchmarks, but with test structures still based upon published Internet standards for network equipment testing. Overall, we think we've succeeded, but we've learned a great deal about just how flexible the IPsec VPN standard can be and just how many variants there are in its implementation.
The future: Testing the rest
We've been on the scenario-based testing soapbox for more than a dozen years, and our hopes for this type of testing still haven't quite come true. The missing piece is the ability to start up each portion of a test, which might reside on test equipment from as many as a half-dozen vendors. We also face the challenge of correlating the results from all those tools without going blind trying to read all the reports. We're not willing to say we can see the light at the end of the proverbial tunnel, but we can see a dim glow in the stygian darkness as we've been reading up on the TesLA alliance.
To put this alliance into perspective, one needs to realize that a quiet revolution has been afoot. The use of XML for configuration and control has slowly become a de facto standard in the industry. TesLA is just one of the better-organized cooperative efforts to take advantage of this that we've seen to date.
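What XML-driven control buys you is that any tool can read any other tool's test description with a stock parser. As a toy example, here is how a script might pull the traffic streams out of an XML test definition; the schema below is entirely invented for illustration and is not TesLA's or any vendor's actual format.

```python
# Illustrative only: parsing a made-up XML test description with Python's
# standard-library ElementTree. The element and attribute names are invented.

import xml.etree.ElementTree as ET

config_xml = """
<test name="utm-throughput">
  <device vendor="ExampleVendor" role="firewall"/>
  <stream protocol="http" rate-mbps="100"/>
  <stream protocol="ftp" rate-mbps="50"/>
</test>
"""

root = ET.fromstring(config_xml)

# Collect the requested rate for each traffic stream.
streams = {s.get("protocol"): float(s.get("rate-mbps"))
           for s in root.iter("stream")}
```

Because the format is plain XML, a load generator, a report correlator, and a management console can all consume the same file, which is precisely the interoperability such cooperative efforts are chasing.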