When good recommendations go bad

Just because a recommendation is in a "best practices" document doesn’t mean it's right for your network

Let me ask a question: Does your job include implementing official security guidelines that tell you step-by-step which security features to enable? And let me ask a telling follow-up question: If you followed all those guidelines, would some of them irretrievably break the very assets they're applied to?

I can already tell you I’ll get dozens of letters from readers saying yes to both. If you’ve ever been involved in applying the security settings from official recommendation guides, you have, like me, come across settings that don’t work on the computers you’re applying them to -- or worse, settings that would “break” the computer.


I was recently reminded of this problem by my friend Mark Burnett, a longtime, respected security expert on many topics. Mark was venting his frustration over the Defense Information Systems Agency (DISA) recommendations for Microsoft computer hosts. He recently ran the DISA Gold Disk scanner against some of his very secure Windows systems and discovered that many of the findings were absurd. Here are his first examples:

1 accounts [sic] have the User Right: Act as part of the operating system. No accounts should have this right.
The accounts that have this right are: SYSTEM

Unauthorized accounts have the User Right: Back up files and directories.
Accounts [are]: Backup Operators

Unauthorized accounts have the User Right: Manage auditing and security log.
Accounts [are]: Administrators

Mark says, "The list goes on and on with recommendations that are really pointless."
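
If you'd like to see how your own systems stack up against those findings, you don't need a compliance scanner. Here's a minimal Python sketch -- to be clear, it is not the DISA tool, and the names and structure in it are mine -- that exports the local security policy with Windows' built-in secedit.exe and prints which accounts hold the three flagged user rights. It assumes administrative rights on a Windows host; try it on a nonproduction box first.

# A minimal sketch, not the DISA Gold Disk: dump the local security policy
# with Windows' built-in secedit.exe and list which accounts hold the three
# user rights flagged above. Assumes a Windows host and administrative rights.
import os
import subprocess
import tempfile

# Standard Windows privilege constants behind the flagged "User Rights."
FLAGGED_RIGHTS = {
    "SeTcbPrivilege": "Act as part of the operating system",
    "SeBackupPrivilege": "Back up files and directories",
    "SeSecurityPrivilege": "Manage auditing and security log",
}

def read_privilege_rights() -> dict:
    """Export the local policy via secedit and return {privilege: assignees}."""
    cfg = os.path.join(tempfile.gettempdir(), "secpol.inf")
    subprocess.run(["secedit", "/export", "/cfg", cfg, "/quiet"], check=True)
    rights = {}
    # secedit usually writes its export as UTF-16; adjust if yours differs.
    with open(cfg, encoding="utf-16") as fh:
        in_section = False
        for line in fh:
            line = line.strip()
            if line.startswith("["):
                in_section = line == "[Privilege Rights]"
            elif in_section and "=" in line:
                name, value = (part.strip() for part in line.split("=", 1))
                rights[name] = [v.strip() for v in value.split(",")]
    os.remove(cfg)
    return rights

if __name__ == "__main__":
    assigned = read_privilege_rights()
    for privilege, label in FLAGGED_RIGHTS.items():
        holders = assigned.get(privilege, [])
        # Assignees appear as account names or *-prefixed SIDs (e.g. *S-1-5-18 is SYSTEM).
        print(f"{label}: {', '.join(holders) or '(no one)'}")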

Flawed logic
Most industry-accepted computer security guidelines contain flaws. Not all -- but most of them. Sometimes the flaws are merely ignorant and cause no harm, such as the government’s frequent recommendation to turn on Directory Service auditing on Windows end-user workstations. The setting works only on Windows domain controllers, but that fact doesn’t stop it from being recommended.

Other government-mandated standards documents, if implemented, would irrevocably break the servers they are supposed to be installed on. For example, they recommend disabling NetBIOS/SMB in environments that clearly rely on it. They recommend changing permissions that are clearly not going to be supported by the vendor. They tell you to delete key vendor settings and objects that, although they could convey some additional risk, have never been used in a published exploit.

Some guideline documents just age. They include settings long ago recommended, but since disproved or no longer needed. Some include unintentional errors that, once printed, seem to live on without anyone questioning their veracity. One government list I saw recommended blocking 20 specific file extensions on incoming e-mail. Ignoring for the moment that the list really should be “deny by default, allow by exception,” the 20 file extensions included one that doesn't exist. I did months of research (in my spare time), only to learn that the file extension was mentioned in the source code for one worm that was popular for a month nearly a decade ago. The problem was that the extension never existed; it was a worm writer’s typo. But that didn’t stop it from being codified and promoted as a “best practice.”
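
To show what I mean by “deny by default, allow by exception,” here's a toy Python sketch; the extension list is purely illustrative, not an official policy. The point is that anything not explicitly approved gets blocked, so there's no need to enumerate -- or typo -- the bad extensions at all.

# A toy sketch of "deny by default, allow by exception" for e-mail attachments.
# The allowlist below is purely illustrative -- not an official or complete policy.
from pathlib import PurePosixPath

ALLOWED_EXTENSIONS = {".pdf", ".txt", ".csv", ".docx", ".xlsx", ".png", ".jpg"}

def attachment_allowed(filename: str) -> bool:
    """Permit only explicitly approved extensions; block everything else.

    A blocklist does the opposite: it silently permits any extension nobody
    thought to enumerate -- including misspelled or brand-new ones.
    """
    return PurePosixPath(filename.lower()).suffix in ALLOWED_EXTENSIONS

if __name__ == "__main__":
    for name in ("report.pdf", "invoice.exe", "notes.vbs", "data.csv"):
        print(f"{name}: {'allow' if attachment_allowed(name) else 'block'}")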

Savvy technical people might try to ignore bad advice, but more and more auditors are demanding that we follow “best practice” guidelines. I can’t blame the auditors; they're just doing what they're told. Most of them aren’t that technically savvy, but they follow guidelines as if they were a religion's holy book.

In discussing this problem with my friend Susan Bradley, she correctly pointed out that you don’t get in trouble for following the mandated guidelines, but you will have some “ 'splaining to do, Lucy” if you deviate. Great point.

Fixing guidelines
As guidelines and gold standards become more a part of our mandated life (and for many enterprises, that’s a good thing), we need to fix those that are broken. Here are my ideas:

First, every recommendation made in a guideline should be thoroughly tested and challenged before it’s put into the official document. Sounds like a no-brainer, right? But there's so much proof to the contrary -- because many recommendations don’t work.

Second, every recommendation document should discuss what the recommendation will do to your system, why it’s good, and what legitimate things it could possibly break.

Third, every guideline should include a clause that says something like, “This document is intended to be used solely as a general recommendation. There are many legitimate reasons for deviations from these official recommendations, including but not limited to the catastrophic interruption of legitimate services in some environments. All readers should test the implementation of any of the included settings before implementing in a production environment and deviate where appropriate.” The best-practice document in your hands may not be the best practice for your environment.

Last, every guideline document should include a well-documented avenue for challenging assertions. There needs to be an easy way to get rid of the bad advice. Each recommendation document should include a paragraph detailing the official process, and it should include e-mail addresses for sending challenges. Just as important, the person on the other end of that contact information must reply within a reasonable amount of time and give the sender an official response, either accepting the challenge for further research or denying the sender’s attestation with a reason for doing so.

Is it too much to ask that our official and mandated security guidelines be technically accurate?
