Back in the bad old days, sloppy network-cabling practices caused intermittent network problems that were painful to solve.
It was the 90s, by the way, which explains why the entity I worked for was still doing all token-ring networking. It wasn't until much later that IBM even acknowledged the existence of Ethernet -- and our CIO wore Blue underwear.
One of our campuses had been built partly in the 1940s, with the newest portion added in the late 60s. All of the inter-building cabling was AT&T-style voice-grade stuff; no fiber existed. We started having a major problem with the network in one entire wing. It would periodically go completely bonkers: lots of lost connections, time-outs, and slowdowns. Then it would mysteriously stop. Even after we threw everything we had at it, we really had no way of knowing whether we had fixed the problem.
After checking out the trouble call from the user with the connectivity issues, I discovered that the fellow who had finished the cross-connect to her jack in the attic above a large conference room had neglected to certify the connection back to the hub with our Fluke LANMeter. The feeder carrying the signal from the hub to the attic was part of a 600-pair feed cable that AT&T had spliced in a Y to continue on to another wing! Standard practice in the 40s, I guess. So whenever the secretary at the end of the connection fired up her 3270 emulation program, which opened her token-ring connection, the reflections and echoes from the unterminated leg of the Y threw the entire building into a tizzy. Then when she shut down, the problems would go away. Once I put the Fluke meter on the connection, it immediately flagged the anomaly and let me track it down.
The solution? I had to open a 600-pair splice case, tone out which 25-pair Y splice bracket carried her connection, and snip off the other leg of the Y.
Thank God for the day they realized they'd have to start investing in structured data cabling.