The top 20 IT mistakes to avoid

InfoWorld’s CTO tells tales from the trenches, flagging the most common IT mistakes that can ruin peace of mind and even careers

When it comes to network performance, there’s no single metric by which to judge network health. Douglas Smith, president of network analysis vendor Network Instruments, points out that it’s a mistake to think network utilization can be quantified in one number. When management asks for a single network utilization report, IT is typically sent scurrying after a metric that is ultimately impossible to define.

That said, certain aspects of a network, such as port utilization, link utilization, and client utilization, can and should be measured. In any scenario, successful network analysis means taking a step back and looking at the data in the context of your enterprise.

Network utilization requires judgment calls. If two ports on a switch are 90 percent utilized and the rest sit idle, do you consider your switch utilization to be 90 percent? It might be more appropriate to ask which application is driving those particular ports to 90 percent utilization. Understanding the big picture and analyzing utilization levels in context are the keys to getting a sense of your network’s health.
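
To make the judgment call concrete, here is a minimal sketch assuming SNMP-style octet counters sampled a minute apart; the port names, speeds, and counter values are all illustrative:

```python
# Minimal sketch: per-port utilization from two samples of
# SNMP-style octet counters (names and numbers are illustrative).
INTERVAL_SECS = 60
LINK_SPEED_BPS = 100_000_000  # 100 Mbps ports

# (port, octets at t0, octets at t0 + INTERVAL_SECS)
samples = [
    ("Gi0/1", 10_000_000, 680_000_000),   # busy port
    ("Gi0/2", 20_000_000, 700_000_000),   # busy port
    ("Gi0/3", 5_000_000, 5_400_000),      # nearly idle
    ("Gi0/4", 1_000_000, 1_050_000),      # nearly idle
]

def utilization(octets_start, octets_end):
    bits = (octets_end - octets_start) * 8
    return bits / (LINK_SPEED_BPS * INTERVAL_SECS)

per_port = {port: utilization(a, b) for port, a, b in samples}
for port, util in per_port.items():
    print(f"{port}: {util:.0%}")

# Averaging hides the story: two ports near 90% and two idle ports
# "average out" to under 50%, which is why a single switch-wide
# number misleads.
avg = sum(per_port.values()) / len(per_port)
print(f"average: {avg:.0%}, peak: {max(per_port.values()):.0%}")
```

Reporting the 45 percent average alone would hide the two saturated ports, and the application saturating them.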

13. Throwing bandwidth at a network problem

One of the most common complaints addressed by IT is simple: The network is running slower than normal. The knee-jerk reaction is to add more capacity. This is the right solution in some cases but dead wrong in others. Without the proper analysis, upgrading capacity can be a costly, unwise decision. Network Instruments’ Smith likens this approach to saying, “I’m running low on closet space, and therefore I need a new house.”

Capacity aside, common root causes of slowdowns include unwanted traffic broadcasting over the network from old systems or apps, such as IPX traffic, or misconfigured or inefficient applications that spew streams of packets onto the network at inconvenient times.
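
A quick tally of broadcast frames by protocol can surface that kind of legacy chatter before anyone signs a purchase order. Here is a minimal sketch using the scapy packet library against a capture file named traffic.pcap; both the library choice and the file name are illustrative:

```python
# Minimal sketch: tally broadcast frames by EtherType from a capture,
# to spot legacy chatter such as IPX (EtherType 0x8137).
from collections import Counter
from scapy.all import rdpcap, Ether

ETHERTYPE_NAMES = {0x0800: "IPv4", 0x0806: "ARP", 0x8137: "IPX"}

broadcasts = Counter()
for pkt in rdpcap("traffic.pcap"):
    if Ether in pkt and pkt[Ether].dst == "ff:ff:ff:ff:ff:ff":
        etype = pkt[Ether].type
        broadcasts[ETHERTYPE_NAMES.get(etype, hex(etype))] += 1

for proto, count in broadcasts.most_common():
    print(f"{proto}: {count} broadcast frames")
```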

According to Smith, one of Network Instruments’ banking customers was considering upgrading its WAN links due to complaints from tellers that systems were running slow. The IT team used a network analyzer to determine that increased traffic levels were being caused by a security app that ran a daily update at 3 p.m. When the IT team reconfigured this application to make updates at 3 a.m. instead, they were able to quickly improve traffic levels without making the costly WAN upgrade.

14. Permitting weak passwords

In the Internet age, new threats such as worms and phishing tend to garner all the security attention, but the SANS Institute’s Top 20 Vulnerabilities list released in October points to a basic IT mistake: weak authentication, or bad passwords (infoworld.com/2193). The most common password vulnerabilities include weak or nonexistent passwords; user accounts with widely known or physically displayed passwords (think Post-it Notes); administrative accounts with weak or widely known passwords; and weak password-hashing algorithms or password hashes that are stored insecurely or visible to anyone. Avoiding the weak authentication mistake boils down to simple IT blocking and tackling -- a clear, detailed, and consistently enforced password policy that proactively deals with the most exploited authentication weaknesses detailed in the SANS report.
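
A policy only helps if it is enforced mechanically at provisioning time. As a minimal sketch, a gate along these lines could reject the most common weaknesses; the specific rules and the common-password list are illustrative, not the SANS recommendations verbatim:

```python
# Minimal sketch of a policy gate a provisioning script might apply;
# the thresholds and the common-password list are illustrative.
COMMON_PASSWORDS = {"password", "123456", "letmein", "qwerty", "admin"}

def violates_policy(password: str, username: str) -> list[str]:
    problems = []
    if len(password) < 12:
        problems.append("shorter than 12 characters")
    if password.lower() in COMMON_PASSWORDS:
        problems.append("on the common-password list")
    if username.lower() in password.lower():
        problems.append("contains the username")
    if password.isalpha() or password.isdigit():
        problems.append("uses only one character class")
    return problems

print(violates_policy("letmein", "jsmith"))                 # several violations
print(violates_policy("c0rrect-h0rse-battery", "jsmith"))   # passes
```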

15. Never sweating the small stuff

CTOs and CIOs like to talk about the strategic application of technology, but ignoring basic tactical issues can lead to simple but extremely costly mistakes. Missing a $30 domain name registration payment can be enough to grind your business to a halt. In one notorious example, last February a missed payment by The Washington Post knocked out employee e-mail for hours until the renewal was paid.
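
Cheap monitoring closes this particular gap. Here is a minimal sketch that flags domains lapsing within 30 days by shelling out to the system whois command; the expiry field name varies by registry, so the pattern (and the grace window) is illustrative:

```python
# Minimal sketch: flag domains whose registration lapses soon, using
# the system whois command. Field names and date formats vary by
# registry, so the regex here would need tuning per TLD.
import re
import subprocess
from datetime import datetime, timedelta, timezone

def expiry_date(domain: str) -> datetime | None:
    output = subprocess.run(
        ["whois", domain], capture_output=True, text=True
    ).stdout
    match = re.search(
        r"(?:Registry Expiry Date|Expiration Date):\s*(\S+)", output
    )
    if not match:
        return None
    return datetime.fromisoformat(match.group(1).replace("Z", "+00:00"))

for domain in ["example.com"]:
    expires = expiry_date(domain)
    if expires and expires - datetime.now(timezone.utc) < timedelta(days=30):
        print(f"RENEW NOW: {domain} lapses {expires:%Y-%m-%d}")
```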

As datacenter environments become denser, even low-level facilities issues may demand scrutiny. On his Weblog, Sun Microsystems President Jonathan Schwartz quoted a CIO who responded to a “what keeps you up at night” question with, “I can no longer supply enough power to, or exhaust heat from [our datacenter]. I feel like I’m running hot plates, not computers.” A CIO who overlooks burning -- but not necessarily obvious -- issues such as these may soon be in search of another job.

16. Clinging to prior solutions

A common mistake for IT managers moving into a new position at a new company is to try to force solutions and approaches that worked at a prior job into a new environment with different business and technology considerations.

One current vice president of operations describes a new, low-cost open source environment he had to manage after working in a more traditional shop that relied on high-end Sun hardware and Oracle and Veritas software. The new startup couldn’t afford the up-front cash required to build a rock-solid environment on commercial software, so it ran a LAMP (Linux, Apache, MySQL, PHP) architecture with an especially aggressive Linux implementation on 64-bit AMD Opteron machines. Gradually, the vice president realized that his old solutions wouldn’t work in the new environment from either a technology or a cost angle, so he changed his approach to fit the new reality, using none of the technologies from his prior job.

17. Falling behind on emerging technologies

Staying current can prevent a disaster. For instance, the emergence of inexpensive consumer wireless access points during the past few years has meant that anyone can create a wireless network -- a real problem for any reasonably structured corporate IT environment. A Network Instruments retail client, for example, was installing a WLAN to serve the needs of employees who measured warehouse inventory levels. Soon enough, management wanted access to the WLAN, and without asking for approval, some employees installed wireless access points at their desks.

Fortunately, the IT staff had implemented ways to check for rogue access points, and a WLAN channel scan with a network analyzer quickly showed more access points on the network than the administrator knew had been deployed. In this case, the IT staff recognized an emerging technology that employees might introduce stealthily and developed procedures to inventory access points, thereby containing the threat.
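
At its core, such a procedure is a set difference between what the scan sees and what inventory says should be there. A minimal sketch, with the scan results standing in for whatever a WLAN analyzer actually reports (all identifiers are illustrative):

```python
# Minimal sketch: compare BSSIDs seen in a channel scan against the
# authorized inventory. The scan_results list stands in for the
# output of a real WLAN analyzer or scanning tool.
AUTHORIZED_BSSIDS = {
    "00:0f:66:aa:bb:01",  # warehouse AP 1
    "00:0f:66:aa:bb:02",  # warehouse AP 2
}

scan_results = [
    {"bssid": "00:0f:66:aa:bb:01", "ssid": "WAREHOUSE", "channel": 1},
    {"bssid": "00:0f:66:aa:bb:02", "ssid": "WAREHOUSE", "channel": 6},
    {"bssid": "00:12:34:de:ad:01", "ssid": "linksys", "channel": 11},
]

rogues = [ap for ap in scan_results if ap["bssid"] not in AUTHORIZED_BSSIDS]
for ap in rogues:
    print(f"rogue AP: {ap['ssid']} ({ap['bssid']}) on channel {ap['channel']}")
```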

18. Underestimating PHP

IT managers who look only as far as J2EE and .Net when developing scalable Web apps are making a mistake by not taking a second look at scripting languages -- particularly PHP. This scripting language has been around for a decade now, and millions of Yahoo pages are served by PHP each day.

Discussion of PHP scalability reached a high-water mark in June, when the popular social-networking site Friendster finally beat nagging performance woes by migrating from J2EE to PHP. In a comment attached to a Weblog post about Friendster’s switch, Rasmus Lerdorf, the inventor of PHP, explained the architectural secret of PHP’s ability to scale: “Scalability is gained by using a shared-nothing architecture where you can scale horizontally infinitely.”

The stateless “shared-nothing” architecture of PHP means that each request is handled independently of all others, so horizontal scaling is as simple as adding more boxes; the remaining bottleneck is typically the shared back-end database. Languages such as PHP might not be the right solution for everyone, but pre-emptively pushing scripting languages aside despite proven scalability successes is a mistake.
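
To illustrate the idea rather than PHP itself, here is a minimal Python sketch of a shared-nothing web tier: the process remembers nothing between requests, so any box behind a load balancer can serve any request. The in-process dict merely stands in for the shared back-end database:

```python
# Minimal sketch of shared-nothing: everything needed to answer a
# request arrives with the request, and durable state lives in a
# shared store, not in the web process.
from http.server import BaseHTTPRequestHandler, HTTPServer

SHARED_STORE = {}  # stands in for the shared back-end database

class StatelessHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # No per-user state is kept in this process between requests,
        # so any identical box could have handled this call instead.
        user = self.path.lstrip("/") or "anonymous"
        visits = SHARED_STORE.get(user, 0) + 1
        SHARED_STORE[user] = visits
        body = f"{user}: visit {visits}\n".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), StatelessHandler).serve_forever()
```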

19. Violating the KISS principle

Doug Pierce, technical architect at Datavantage, says that violating the KISS (keep it simple, stupid) principle is a systemic problem for IT. Pierce says he has seen “hundreds of millions” of dollars wasted on implementing, failing to implement, or supporting solutions that are too complex for the problem at hand. According to Pierce, although complex technologies such as CORBA and EJB are right for some organizations, many of the organizations using such technologies are introducing unnecessary complexity.

This violation of the KISS principle directly contributes to many instances of project failure, high IT costs, unmaintainable systems, and bloated, low-quality, or insecure software. Pierce offers a quote from Antoine de Saint-Exupéry as a philosophical guide for rooting out complexity in IT systems: “You know you’ve achieved perfection in design, not when you have nothing more to add, but when you have nothing more to take away.”

20. Being a slave to vendor marketing strategies

When it comes to network devices, databases, servers, and many other IT products, terms such as “enterprise” and “workgroup” are bandied about to distinguish products, but often those terms mean little when it comes to performance characteristics.

Quite often a product labeled as a “workgroup” product has more than enough capacity for enterprise use. And the low cost of commodity hardware -- particularly Intel-based servers -- means that clustering arrays of cheap, workgroup-class machines into an enterprise configuration often yields better redundancy and scalability than buying more expensive enterprise servers, especially for Web apps.
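
The reasoning behind that claim fits in a few lines: redundancy comes from the pool, not from any single box. A minimal round-robin sketch with a TCP health check, using illustrative hostnames:

```python
# Minimal sketch: round-robin across cheap servers, skipping any that
# fail a TCP health check. A dead box simply drops out of rotation
# instead of taking the service down with it. Hostnames are illustrative.
import itertools
import socket

BACKENDS = [("web1.example.internal", 80),
            ("web2.example.internal", 80),
            ("web3.example.internal", 80)]

def is_healthy(host, port, timeout=0.5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def next_backend(rotation=itertools.cycle(BACKENDS)):
    # Try each backend at most once per call.
    for _ in range(len(BACKENDS)):
        host, port = next(rotation)
        if is_healthy(host, port):
            return host, port
    raise RuntimeError("no healthy backends")
```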
