My problem was caused by human error. As we have learned to automate human production, some of us have also learned to automate human error.


  • Garbage In, Garbage Out

  • Garbage In, Gospel Out (gullibility about what a computer or website tells you)

  • Got It Going On

    (from AcronymFinder)

    The following may be hard to keep up with. It's probably worth it.

    It all started last summer. During one thunderstorm, lightning hit our house, rendering a television monochromatic. We called the insurance company to check on our coverage, then brought the TV to a repair shop. The repair wound up costing $80, so we just paid for it ourselves and went on our way. Little did we know that the insurance company had filed a claim for us with a $0 payout.

    This year, in August, my insurance company of 10 years ran a check on me through ChoicePoint. ChoicePoint is the TRW or Equifax of the insurance world. If you've ever bought insurance, you have a record, and if you make claims on those policies, they are noted in that record. My record had a single claim noted, for $560. That was PSNH working on the juice outside my house, cycling the power like a strobe light. The one computer not on a UPS just kinda went "poof".

    Here's where it gets interesting. Somehow, a non-claim made to another insurance company, referencing a policy that was not mine, wound up on my ChoicePoint report. According to the database, I now had two claims. A policy-pruning process run on the database at my insurance company noted this, flipped a bit in a database table, and my policies got the axe. That's right. Cancelled. One legitimate claim of $560 almost three years ago, and I'm "encouraged to seek coverage elsewhere". Astonishing.

    I called my local agent and mentioned that this was obviously a "computer mixup" regarding a listed claim with a $0 payout, and he thought a call to the underwriter would clear this up. Not so. "The system has flagged you for cancellation due to the number of claims filed. You're not a good risk."

    I promptly terminated my policies with them, and requested a ChoicePoint report on myself. Then I looked for new insurance. Seems that the rates quoted to me by other companies were very high. "Your report is causing that" is what I'm told. My mood was dark, like violets at midnight. I did find coverage that day, but it was not a pleasurable experience. On the plus side, ChoicePoint seems to have a fairly straightforward dispute process, and I'm moving in that direction.

    That was last week. On Monday, I picked up the mail and found a letter from State Farm. I figured it was a termination notice. It was actually a letter addressed to my father's name, with my address, informing me that State Farm is "Like a Good Neighbor" and can offer me a full range of insurance services, from auto to home. Needless to say, I was deeply touched by these offers. In fact, I called the underwriter that had cancelled my policy and told her all about it, much as I'm telling it here.

    I'm sure that I'll get this sorted out eventually, but it hasn't been a great way to spend significant amounts of time, and I'd rather not have had to deal with this. It's a sort of inadvertent identity theft, or even identity framing; simply a merging of a fictitious event with a real person, resulting in a semi-automated chain of events that could steamroll into a serious problem. I was technically without insurance for several days, since the letter wasn't delivered until after the coverage had been cancelled. The moral of the story is somewhat unclear to me.

    All of the above is directly related to an incorrect entry made last year by a human. The following can also be traced to a human, but isn't as direct.

    I get the feeling that problems of this nature may get worse before they get better. Mine is a simple illustration, but the issue could be much more dramatic. New passenger screening systems are being tested that will flag airline passengers with a color code representing their risk, ostensibly allowing security personnel to roust those marked as potentially dangerous. As any David Nelson can tell you, flaws in a system like that can be a big problem.

    Data on individuals is collected in thousands and thousands of databases across the country every minute. From a purchase on a credit card at Sears to a cable bill payment, a record will exist on a disk somewhere, then perhaps on a tape somewhere. Security on these databases isn't a requirement, and can be very lax, especially in extremely important government databases.


    If someone can steal data, someone can likely modify it. Imagine an MSSQL Slammer variant that infects database servers. Instead of simply attempting to replicate itself, the worm could also make minute modifications to database tables. How many databases have tables named "SSN"? Imagine the problems caused by a single day of erroneous transaction logs for a large bank. Imagine that account numbers were shuffled by a single digit, or the SSNs were modified at random. Each individual record would be relatively easy to fix, but errors on credit reports take months and months to correct, and before the problem was even discovered, plenty of damage would have been done. Should it stay undiscovered for long enough, even replaying the transaction logs or pulling from tape would be closing the barn door after the horse was gone. It's not simply a matter of restoring from backup, since the backups may contain dirty data stretching back days or weeks. Reconciling four-day-old clean backups against filtered transaction logs would be a significant, time-consuming problem. Problems like that would be highly visible and very costly to the institution.
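As a toy illustration of why this kind of quiet corruption is detectable only against a known-good snapshot, here is a minimal sketch (all table data and names below are invented): compute a per-row checksum of the clean table, flip one digit in one SSN the way such a worm might, and diff the checksums to find the tampered row.

```python
import hashlib
import random

def row_digest(row):
    """Stable per-row checksum over an account record."""
    return hashlib.sha256("|".join(row).encode()).hexdigest()

# Hypothetical "clean" table: (account_number, ssn) pairs.
clean = [(f"4000{i:08d}", f"{random.randrange(10**9):09d}") for i in range(1000)]
snapshot = {acct: row_digest((acct, ssn)) for acct, ssn in clean}

# A worm-style minute modification: change a single digit of one SSN.
dirty = [list(row) for row in clean]
victim = random.randrange(len(dirty))
ssn = dirty[victim][1]
pos = random.randrange(len(ssn))
dirty[victim][1] = ssn[:pos] + str((int(ssn[pos]) + 1) % 10) + ssn[pos + 1:]

# Audit pass: any row whose digest no longer matches the snapshot is suspect.
suspects = [acct for acct, ssn in dirty if snapshot[acct] != row_digest((acct, ssn))]
print(suspects)  # exactly the one tampered account
```

The catch, as noted above, is that this only works while the snapshot itself predates the infection; checksums taken from already-dirty backups faithfully verify the corruption right along with the data.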

    Yes, these database servers are usually secured. Many thousands aren't, however. Should a client on that network, or on an adjacent, unprotected network, get infected, the database server is fully exposed. The technical solution is core-level packet filtering. The social and procedural solutions, both the weight placed on potentially erroneous data and the coding of automated pruning processes, are tougher ones to address. It may take a few very high-profile examples for this problem to really get the attention it deserves, and I sincerely hope that I'm not one of the innocent bystanders when it does. I would not enjoy explaining to a SWAT team that they have the wrong guy.
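For the curious, "core-level packet filtering" can be as simple as dropping SQL Server's service ports at the router before traffic ever reaches a database subnet. A sketch using Linux iptables (the subnet and host addresses here are hypothetical):

```shell
# Slammer spread over UDP 1434, the SQL Server Resolution Service;
# TCP 1433 is SQL Server's default query port. Drop both at the core.
iptables -A FORWARD -p udp --dport 1434 -j DROP
iptables -A FORWARD -p tcp --dport 1433 -j DROP

# Then allow only known application servers to reach the database subnet
# (10.0.5.0/24 as a hypothetical DB subnet, 10.0.1.10 a hypothetical app host).
iptables -A FORWARD -s 10.0.1.10 -d 10.0.5.0/24 -j ACCEPT
iptables -A FORWARD -d 10.0.5.0/24 -j DROP
```

Rules like these don't patch a single vulnerable server, but they keep an infected client on one subnet from ever speaking to the database servers on another.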

    Copyright © 2003 IDG Communications, Inc.
