Making Wireshark work for high-speed networks

To keep up with today’s big and complex networks, traditional packet capture tools need a little help

Capturing packets, or sniffing them from the network with relatively lightweight probes and monitoring tools, has long been one of the most common ways to uncover network issues. Many of these tools, such as Wireshark, remain free and widely supported, but they may not get to the root cause of issues as effectively as they once did.

The reason is the sheer volume of data produced by today’s complex physical and virtual network architectures. Capturing packets at high speed from 10G or 40G connections (with 100G lines now looming on the horizon), storing those packets effectively, and sifting through all of that data with any kind of fidelity pose enormous challenges. You can still find out anything that happened on the network if you have the packets. But the haystacks have never been bigger, and the needles have never been better at hiding.

Traditional packet capture tools cannot keep up unless you know exactly where to look. In today’s high-speed networks, relying only on traditional packet capture is like using a scalpel to cut down a tree. What you really need is a chain saw to get the tree down first; then, if you need more precision, you reach for the scalpel. These days, even a few seconds of packet capture can generate millions of packets, which will be meaningless unless you can get to the packets you need quickly.

The key is a funnel approach: start by monitoring the bigger picture of user experience and response times through a combination of flow data and passive monitoring on taps or SPAN (Switched Port Analyzer) ports. Find out whether specific users, links, or applications are consuming more bandwidth and crowding out others. For delays, decode the TCP/IP conversations to break the total delay into its components. Metrics such as server delay, retransmission delay, connection setup delay, and payload transfer delay will tell you where to look. As you work down the funnel, you’ll learn where you need to capture and analyze at the packet level.
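The delay breakdown described above can be sketched in a few lines. This is a minimal illustration, not any tool's actual output: the timestamps are hypothetical values you would pull from a decoded TCP conversation, and the field names are ours.

```python
def delay_breakdown(syn, syn_ack, request_sent, first_response, last_response):
    """Split a TCP conversation's total response time into the delay
    components named above. All arguments are timestamps in seconds,
    taken from a decoded conversation (hypothetical values here)."""
    return {
        "connection_setup_delay": syn_ack - syn,                   # TCP handshake RTT
        "server_delay": first_response - request_sent,             # server think time
        "payload_transfer_delay": last_response - first_response,  # data on the wire
        "total": last_response - syn,
    }

# Example: a conversation where the server takes over a second to start
# responding -- server delay, not the network, dominates here.
breakdown = delay_breakdown(syn=0.000, syn_ack=0.045,
                            request_sent=0.050, first_response=1.250,
                            last_response=1.400)
print(breakdown["server_delay"])
```

Seeing which component dominates tells you where down the funnel to aim the packet-level analysis: a large setup delay points at the network path, a large server delay at the application tier.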

To take a real-world example, think of the microbursts that arise from high-frequency transactions, such as in the wake of the release of market data to financial trading institutions. In this case, you might find that trades are not being executed as expected, yet bandwidth utilization from flows and TCP/IP conversations looks perfectly normal. This is where packet capture comes in. If you drill down into your packet capture engine and view the packets that show the microbursts at millisecond levels, you’ll be able to see the wall of saturation where trading has stopped. Then you'll know you need to upgrade your multiple 40G links to a 100G link.
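The millisecond-level view is what exposes the microburst. As a rough sketch of the idea, assuming you have already exported (timestamp, size) pairs from a capture, you can bucket packets into one-millisecond windows and flag any window whose instantaneous rate exceeds line rate; link speed and bucket size here are illustrative parameters.

```python
from collections import defaultdict

def find_microbursts(packets, link_bps, bucket_ms=1):
    """Bucket packets into bucket_ms windows and return the indices of
    windows whose byte count exceeds what the link can carry at line
    rate in that window -- i.e., microbursts."""
    buckets = defaultdict(int)            # bucket index -> bytes seen
    for ts, size in packets:              # ts in seconds, size in bytes
        buckets[int(ts * 1000 // bucket_ms)] += size
    capacity_bytes = link_bps / 8 * (bucket_ms / 1000)  # line-rate bytes per bucket
    return sorted(b for b, total in buckets.items() if total > capacity_bytes)

# Example: 900 full-size frames landing inside one millisecond is roughly
# 10.8Gbps instantaneous -- a saturating burst on a 10G link, even though
# the one-second average utilization would look tiny.
packets = [(5e-5 + i * 1e-6, 1500) for i in range(900)]
print(find_microbursts(packets, link_bps=10e9))   # -> [0]
print(find_microbursts(packets, link_bps=40e9))   # -> [] (no burst on 40G)
```

Averaged flow counters hide exactly this: a burst that saturates the link for a millisecond barely moves a per-second utilization graph.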

A smart combination of monitoring and analysis tools will give you the context needed to determine whether a packet capture is warranted, then help you get to the exact slice of data you need for analysis.

In some cases, an “on demand” approach is sufficient, such as in a branch office location or for mobile users where capturing all the packets all the time for every user might not be worth it. With an intelligent packet capture approach, you can configure alerts on slow application responses that trigger a packet capture to identify the root cause of the issue. You could have mobile users in delivery trucks or even law enforcement personnel who suddenly experience slow Web apps because someone at headquarters updated the news feed with a picture of a newborn baby or the employee of the month. The size of the JPEG image could instantly clobber the performance of apps used in the branch or on the road, but you might not catch it without doing a packet capture.
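The trigger mechanism behind this kind of on-demand capture can be sketched simply: buffer the most recent packets in a ring and snapshot the buffer only when a slow-response alert fires. This is a toy model, not a real capture engine; the threshold, buffer size, and packet records are all illustrative.

```python
from collections import deque

class TriggeredCapture:
    """Keep a rolling window of recent packets; save it only on alert."""

    def __init__(self, max_packets=10000, threshold_s=2.0):
        self.ring = deque(maxlen=max_packets)  # oldest packets drop off automatically
        self.threshold_s = threshold_s         # "slow response" alert level, seconds
        self.snapshots = []                    # captures preserved for root-cause analysis

    def on_packet(self, packet):
        self.ring.append(packet)               # cheap: just buffer, no disk I/O

    def on_response_time(self, seconds):
        # Alert path: the response was slow, so freeze the buffered packets
        # that led up to it for later inspection.
        if seconds > self.threshold_s:
            self.snapshots.append(list(self.ring))

cap = TriggeredCapture(threshold_s=2.0)
for i in range(100):
    cap.on_packet({"num": i, "len": 1500})
cap.on_response_time(0.3)   # fast response: nothing saved
cap.on_response_time(4.8)   # slow response: snapshot the last 100 packets
print(len(cap.snapshots), len(cap.snapshots[0]))  # -> 1 100
```

The point of the design is that the expensive step (persisting packets) happens only when the user-visible symptom does, which is what makes always-on monitoring of branch and mobile users affordable.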

In other cases -- such as for the mission-critical apps that run your business, where slow time is like downtime because you’re losing thousands of dollars by the second -- having a complete history of packet captures available for inspection might make more sense. The importance of keeping packet data was highlighted by the recent Heartbleed security issue, where the key to knowing if your data was exposed to hackers was having the history of packets to inspect at a later date.

You cannot diagnose and fix what you don’t know. Ultimately, the truth about your network resides in the packets. But in today’s high-speed networks, making effective use of packet analysis requires both chain saws and scalpels. By combining higher-level monitoring and analysis solutions with intelligent packet capture, you can continue to use packet analysis to unearth the causes of poor network performance. After all, slow is the new down.

Patrick T. Campbell has split his 20-plus year career equally between application and network performance management and K-12 education. He began his IT career at InfoVista as a technical trainer, followed by Raytheon Solipsys, OPNET Technologies, and now Riverbed Technology, where he is a technical marketing engineer.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to