Database basics

A few procedural changes can help maintain high performance

It’s a common mistake: a company puts a poorly performing database on larger and larger servers, throwing more and more money at the same underlying mistakes, yet the database never gets any better. Many performance problems stem from just a few basic issues. First, hire a real DBA. If you’re going to throw money at the problem anyway, throw it in a direction where it can do you some good. If you can’t afford a DBA, the following simple but extremely effective techniques can still deliver substantial performance gains.

Put your data and log files on separate disks. These must be separate physical disks (or disk arrays), not just logical partitions. All databases have both data and log files, and no vendor has discovered a magic formula to get around the disk contention caused by putting these files on the same physical volume. To check for contention in the Windows environment, open Windows Performance Monitor and look at the PhysicalDisk\Avg. Disk Queue Length counter. The rule of thumb is no more than a sustained queue length of two per processor, so if the server has two processors, you should not sustain a disk queue higher than four. Occasional spikes are usually fine.
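If your platform happens to be SQL Server, one quick way to verify the layout is to query the sys.master_files catalog view; the sketch below simply lists which drive each data and log file sits on, and the drive-letter grouping assumes Windows-style paths.

-- Sketch (assumes SQL Server): list the drive each data and log file lives on,
-- so you can spot databases whose data and log files share the same volume.
-- Note: a shared drive letter only proves logical co-location; confirm the
-- underlying physical disks with your storage team.
SELECT
    DB_NAME(database_id)   AS database_name,
    type_desc              AS file_type,     -- ROWS = data file, LOG = log file
    LEFT(physical_name, 1) AS drive_letter,  -- assumes drive-letter paths such as 'D:\...'
    physical_name
FROM sys.master_files
ORDER BY database_name, file_type;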

Review your indexes and queries. Ninety percent of the performance issues in all databases are index/query related. Developers often write horrible queries that go completely unchecked and get pushed into production. Some of the worst performers include queries that return all columns when only a few are needed, have overly complex logic in the search criteria, or join too many tables (four is usually a sensible limit). More often than not, no thought is put into indexing in production databases. The three categories of bad indexing are: not enough indexes, too many indexes, and improper indexes. One way to tell whether you’re using effective indexes is to compare your database’s buffer cache with the procedure cache. If the procedure cache is getting steadily larger, it could be that query plans are not being reused. In that case, review your stored procedure recompiles and execution plan usage. Another metric to check is the Buffer Manager\Page life expectancy counter: the lower its average value, the more likely it is that pages are being pushed out of cache and your queries are going to disk.
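If your platform is SQL Server, the cache behavior described above can be inspected through the sys.dm_os_performance_counters and sys.dm_db_index_usage_stats views. The queries below are only a sketch; the LIKE patterns are there because counter object names carry an instance-specific prefix.

-- Sketch (assumes SQL Server): plan cache size and page life expectancy.
-- Object names are prefixed 'SQLServer:' on a default instance or
-- 'MSSQL$<instance>:' on a named one, hence the LIKE patterns.
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE (object_name LIKE '%Buffer Manager%' AND counter_name = 'Page life expectancy')
   OR (object_name LIKE '%Plan Cache%'
       AND counter_name = 'Cache Pages'
       AND instance_name = '_Total');

-- Sketch (assumes SQL Server): indexes in the current database that are
-- written to but never read (candidates for the "too many indexes" problem).
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()
  AND s.user_seeks + s.user_scans + s.user_lookups = 0
ORDER BY s.user_updates DESC;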

Stay on top of your server’s memory. Memory leaks from other processes running on your database server are also common. Monitoring your database’s target server memory counter will reveal memory being stolen from the database. In the case of a true memory leak, this shows up as a steady decrease in target server memory.
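On SQL Server, for instance, the relevant counters are Target Server Memory and Total Server Memory, both visible in sys.dm_os_performance_counters; the query below is a minimal sketch of that check.

-- Sketch (assumes SQL Server): compare how much memory the engine wants
-- (Target Server Memory) with how much it currently holds (Total Server Memory).
-- A target value that keeps shrinking over time suggests external memory
-- pressure, such as a leaking process elsewhere on the server.
SELECT counter_name, cntr_value / 1024 AS value_mb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)');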

Create baselines and benchmarks for your database servers. You need a solid picture of your database’s normal performance before you can tell that it isn’t performing at peak efficiency. Then, by comparing a benchmark to your baseline, you can uncover variances in execution plan usage, stored procedure recompiles, memory, disk and CPU queuing, and space usage. After reviewing the comparative benchmark you may need to upgrade your hardware, but never buy hardware without solid numbers to back the decision; otherwise you will spend a lot of money and may still not solve your problem.
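One lightweight way to build such a baseline, assuming SQL Server, is to snapshot a handful of counters into a table on a schedule and compare later benchmark runs against it. The table name PerfBaseline and the counter list below are illustrative only.

-- Sketch (assumes SQL Server): a simple baseline table plus a snapshot insert.
-- Schedule the INSERT (for example from a SQL Server Agent job) and compare
-- later benchmark snapshots against these rows. Per-second counters such as
-- 'Batch Requests/sec' are cumulative, so compare deltas between snapshots.
CREATE TABLE dbo.PerfBaseline (
    captured_at   datetime2     NOT NULL DEFAULT SYSUTCDATETIME(),
    object_name   nvarchar(128) NOT NULL,
    counter_name  nvarchar(128) NOT NULL,
    instance_name nvarchar(128) NULL,
    cntr_value    bigint        NOT NULL
);

INSERT INTO dbo.PerfBaseline (object_name, counter_name, instance_name, cntr_value)
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy',
                       'Target Server Memory (KB)',
                       'Total Server Memory (KB)',
                       'Batch Requests/sec',
                       'SQL Re-Compilations/sec');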
