As powerful as modern infrastructure technology has become, there's no denying it's grown more complex and interdependent. As much as these new technologies have made life in IT easier and more efficient, they have also created a new class of failures that are difficult to sort out -- some of which can sit dormant for months or years before they're detected.
In the past, a typical enterprise data center might have consisted of many servers, some top-of-rack and end-of-rack network switching gear, and a few large storage arrays. Dependencies in that sort of environment are clear. The servers rely on the availability of the network and the storage they're addressing. The network and the storage (along with its dedicated storage network) depend on little beyond themselves.
Today, the picture is quite different. There are still servers, of course, but they might be blades in a blade chassis that includes a built-in converged network fabric enabling connectivity both to the LAN and to storage. The storage then attaches directly to that fabric. Beyond that, some critical functionality of the converged network might be implemented in software running on the server blades. More complex still, if IP-based storage is used, simple access to that storage might depend on everything else working.
It's all too easy to allow a circular dependency to be built into such a system without realizing it. If you're particularly unlucky, you'll find out that you have that flaw only after a lot of other things have gone wrong. The only way to truly avoid such circular dependencies is to spend a lot of time reading documentation, charting interdependencies, and -- above all else -- testing.
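Charting those interdependencies is something you can automate. As a minimal sketch, the scenario above can be modeled as a dependency graph and checked for cycles with a depth-first search. The component names and edges here are hypothetical, chosen only to mirror the converged-fabric example: the fabric's control software runs on server blades, the blades depend on IP storage, and that storage is reached through the fabric.

```python
def find_cycle(deps):
    """Return one dependency cycle as a list of nodes, or None if acyclic.

    `deps` maps each component to the components it depends on.
    """
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / in progress / done
    color = {node: WHITE for node in deps}
    stack = []

    def visit(node):
        color[node] = GRAY
        stack.append(node)
        for dep in deps.get(node, ()):
            if color.get(dep, WHITE) == GRAY:
                # Back edge: the cycle runs from `dep` back to here.
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                found = visit(dep)
                if found:
                    return found
        stack.pop()
        color[node] = BLACK
        return None

    for node in list(deps):
        if color[node] == WHITE:
            found = visit(node)
            if found:
                return found
    return None

# Hypothetical edges modeling the converged-fabric scenario; the cycle
# only bites during a cold start, when nothing is up yet.
deps = {
    "server_blades": ["converged_fabric", "ip_storage"],
    "converged_fabric": ["fabric_control_software"],
    "fabric_control_software": ["server_blades"],
    "ip_storage": ["converged_fabric"],
}

print(find_cycle(deps))
# prints ['server_blades', 'converged_fabric', 'fabric_control_software', 'server_blades']
```

Running a check like this against an inventory of components won't replace reading the documentation, but it makes a hidden loop visible before a cold start makes it visible for you.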
A real-world example