I once worked in the IT department of a tech manufacturing company where we ran into an interesting problem: A large portion of the office network would go offline for several minutes to an hour or more at random intervals on random days.
The network administrators ran diagnostic after diagnostic, probed logs, and interrogated routers, yet couldn't pinpoint the problem. However, most of the time the network had started working again before they could find the issue.
After weeks of this random, intermittent behavior, the network eventually went down and stayed down. Finally, the network admins had a repeatable problem to locate.
They discovered that two nodes on the network had the same IP address. One was the default gateway for one of the on-site office networks. The other was a mystery.
They traced the port to the cubicle of an engineer who had two PCs at his desk. One PC was his regular office PC; the other was a PC set up for development work.
This second PC was connected to a small, unmanaged switch. These switches were fairly common, since they allowed the engineers to develop machine control code at their desks, rather than inside the clean rooms where they had to wear bulky clothing. Also connected to the switch was a small industrial camera with an Ethernet port on the back, used as part of a machine vision system that inspected the products.
The unmanaged switch had originally been set up as a local network between the camera and the development PC, but someone had later uplinked the switch to the office network.
It turned out that the camera had just happened to ship from the factory with the same IP address and netmask as the default gateway for that office network. This was odd because the IP address in question was within the range of public IP addresses owned by the company. In theory, a camera manufacturer shouldn't have been using that IP address, and who knows if it was chosen by request or was random chance, or if some other flawed logic was involved.
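The root cause, then, was two devices holding an identical address on the same subnet. As a minimal sketch (the addresses here are invented; the story never gives the real ones), Python's standard `ipaddress` module makes this kind of overlap easy to check:

```python
import ipaddress

# Hypothetical addresses for illustration only; the actual values
# from the story are unknown.
gateway = ipaddress.ip_interface("198.51.100.1/24")  # office default gateway
camera = ipaddress.ip_interface("198.51.100.1/24")   # camera's factory default

# Same address on the same subnet is a guaranteed conflict.
conflict = camera.ip == gateway.ip and camera.network == gateway.network
print(f"address conflict: {conflict}")
```

A sanity check like this against an inventory of known gateway addresses could have flagged the camera before it ever reached a desk.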
But the result was that every time this engineer powered on the camera, it would answer requests meant for the gateway, and the layer 2 switches in the area would begin forwarding gateway-bound traffic to it. The camera (surprise, surprise) had no idea how to behave as a router.
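Mechanically, what the office hosts experienced is an ARP race: whichever device most recently answered an ARP request for the gateway's IP "wins," and frames follow its MAC address until the cache entry is refreshed. A toy last-writer-wins simulation of that behavior (the MAC addresses and IP are invented for illustration):

```python
# Toy model of a host's ARP cache: maps IP -> MAC, last announcement wins.
arp_cache = {}

def receive_arp_reply(ip, mac):
    """The host overwrites its cache entry whenever it sees a reply."""
    arp_cache[ip] = mac

GATEWAY_IP = "198.51.100.1"        # the conflicting address (made up)
ROUTER_MAC = "aa:aa:aa:aa:aa:01"   # the real default gateway
CAMERA_MAC = "bb:bb:bb:bb:bb:02"   # the camera, shipped with the same IP

receive_arp_reply(GATEWAY_IP, ROUTER_MAC)
print(arp_cache[GATEWAY_IP])  # gateway-bound traffic reaches the router

receive_arp_reply(GATEWAY_IP, CAMERA_MAC)  # camera powers on and answers
print(arp_cache[GATEWAY_IP])  # now gateway-bound frames hit the camera
```

This also explains the intermittency: turning the camera off let the real gateway's replies repopulate the caches, and the network "fixed itself" before anyone could catch it in the act.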
When the engineer, who was frustrated that he couldn't get his camera to work, turned it off again, the office network returned to normal.
The network admin who finally found the problem physically cut the cable uplinking the desktop switch to the rest of the network and left both halves of the cable on the engineer's desk -- along with a note sternly warning the engineer to never connect that camera to the office network again.
After this incident, IT disabled all unused Ethernet ports in the office areas. Getting a port turned on for a second computer, networked printer, or other device now required a written request that IT reviewed before enabling the port. All IT-supplied unmanaged switches were promptly replaced with managed switches, most with their uplink ports disabled by default.
Moral of the story? Don't trust end-users (even engineers) with network equipment you don't control, and never trust the configuration of equipment from your suppliers. Also, cameras don't make good network routers.
This story, "The case of the random network problems," was originally published at InfoWorld.com.