Oracle Database 11g Advanced Compression testbed, methodology, and results

Available as a separately licensed option for Oracle Database 11g Enterprise Edition, Advanced Compression is a way of writing data to disk so that it takes up less space. It can reduce storage costs, reduce memory and bandwidth requirements, and even improve query performance. (Read my full review, "Oracle Database 11g shoots the moon.")

I tested Advanced Compression in two ways, against two separate databases. In the first case, the database I used for testing was generated from the TPC-C order entry benchmark, which simulates a complete environment for online transaction processing. Transactions include entering and delivering orders, recording payments, checking order status, and monitoring stock levels. See TPC.org for details.

I started by testing bulk loads to get some initial compression numbers on seven tables (ranging in size from 5MB to 104GB). I then tested read, write, and mixed read-write performance by using the Quest Benchmark Factory 5.5 load generator to simulate the activity of 50 users against the database, running Benchmark Factory on the same server as Oracle Database. The data block size for these tests was the default 8K.

In the second case, I tested Advanced Compression by installing and running the OLTP Table Compression Test Kit provided by Oracle. This involved importing two uncompressed tables (532MB and 1.357GB), creating compressed copies of these tables in 8K and 16K block sizes, and then executing a set of scripts to test the performance of table scans and writes against each of those tables.
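
The kit's scripts handle that setup, but for readers who want to reproduce something similar, below is a minimal sketch of how a compressed copy in a 16K block size might be built. The tablespace, datafile, and buffer-cache sizing are placeholders of my own, not values from the kit, and COMPRESS FOR ALL OPERATIONS is the 11g Release 1 syntax for OLTP table compression.

-- Sketch only: a 16K-block tablespace needs a matching buffer cache.
ALTER SYSTEM SET db_16k_cache_size = 256M;

CREATE TABLESPACE ts_16k
  DATAFILE 'ts_16k_01.dbf' SIZE 2G
  BLOCKSIZE 16K;

-- Build the compressed 16K-block copy from the uncompressed source table.
CREATE TABLE locations_16c
  TABLESPACE ts_16k
  COMPRESS FOR ALL OPERATIONS
  AS SELECT * FROM locations;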

For both tests, the database was stored on a RAID-5 array with 14 SATA spindles, directly attached to an Intel-based Windows Server 2003 Enterprise server (32-bit with Service Pack 2 installed) running Oracle Database 11g. The server was configured with four 3GHz Intel Xeon CPUs and 4GB of RAM.

TPC-C compression results. I tested several tables in the TPC-C database with randomly generated data. Although the data was generated, it uses real names and addresses and real-world order data. The product description fields in the order lines, however, do include runs of random characters. Given the ordered nature of Oracle's compressed columns, and the fact that these descriptions wouldn't typically be ordered together, the random characters shouldn't have a significant impact on overall compression results; they will, however, lower the compression ratio simply by remaining uncompressed. This is expected, and in fact it simulates real-world production environments fairly well. The compression results are below. Note that the two tables showing 0% compression (C_District and C_Item) are lookup tables; they contain no duplicate data, so they wouldn't be expected to benefit from Advanced Compression.

Table Name Original Size (MB) Compressed Size (MB) % of Original % Compressed
C_District 5 5 100 0
C_Item 11 11 100 0
C_New_Order 624 448 72 28
C_Customer 93,882 78,249 83 17
C_History 7,680 3,327 43 57
C_Order_Line 104,528 72,703 70 30
C_Stock 66,304 56,065 85 15

It's clear from the results that some columns in the database had a high level of duplicate values, and some didn't. This TPC-C data reflects the operations of a company with 4,000 warehouses that supply a total of 40,000 districts (10 per warehouse) and 120 million customers (3,000 per district). The company sells 100,000 different items, and each warehouse maintains stock for each item. The C_New_Order table I generated contained 36 million rows, and the C_Order_Line and C_Stock tables had 1,578,942,568 and 186,338,813 rows, respectively.

Despite the high numbers of unique values in these tables, which would tend to reduce the benefits of compression, compression ratios reached as high as 57%. Here, C_History benefited from duplicate order dates and repeating ID numbers for customers, districts, and warehouses. C_Order_Line (30% compression) benefited from repeating dates and ID numbers, and C_New_Order (28%) from repeating ID numbers.
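
For anyone repeating this kind of measurement, the sizes in a test like this typically come from the data dictionary rather than the file system. A query along these lines is one way to get comparable before-and-after numbers; C_HISTORY_COMP is a placeholder name for whatever you call your compressed copy.

-- Report segment sizes in megabytes for an original table and its compressed copy.
SELECT segment_name,
       ROUND(bytes / 1024 / 1024) AS size_mb
FROM   user_segments
WHERE  segment_name IN ('C_HISTORY', 'C_HISTORY_COMP');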

OLTP Table Compression Test Kit compression results. The tables Oracle provided with the OLTP Table Compression Test Kit, References and Locations, were much riper for de-duplication than the TPC-C data. Advanced Compression was able to squeeze both tables to about one-quarter of their original size. The data block size made little difference: References compressed by 75% at both the 8K and 16K block sizes, while Locations was reduced 75% using 16K blocks and 73% using 8K blocks. Here are the results:

Table Name Size (MB) % of Original % Compressed
References 1,357 100 0
References_16C 336 25 75
References_8C 344 25 75
Locations 532 100 0
Locations_16C 136 25 75
Locations_8C 144 27 73

TPC-C performance results. To test the performance of table scans and DML operations (inserts, updates, deletes) against the TPC-C data, I removed as many extraneous factors from the bulk load test as possible. The tables were completely unindexed, and I used Oracle's APPEND hint to perform direct-path inserts, which bypass the usual locking and contention-management machinery and write data blocks directly rather than going through the buffer cache. I also turned logging off on the tables to keep disk contention from redo writes out of the picture.
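
As a rough sketch of that kind of load (the compressed table name below is a placeholder, not an object from my actual test schema), a nologging, direct-path bulk load into a compressed, unindexed table looks something like this:

-- Empty, compressed, nologging copy of the source table's structure.
CREATE TABLE c_order_line_comp
  NOLOGGING
  COMPRESS FOR ALL OPERATIONS
  AS SELECT * FROM c_order_line WHERE 1 = 0;

-- Direct-path (APPEND) bulk load into the compressed table.
INSERT /*+ APPEND */ INTO c_order_line_comp
SELECT * FROM c_order_line;
COMMIT;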

I then tested the compression process under user load. I generated 50 users with Quest Benchmark Factory and split the tests into three sections: read, write, and mixed.

I performed the write tests first. I created empty tables and again turned logging off to remove the disk contention of redo writes. I had the 50 users insert random data into the uncompressed tables for 30 minutes, then repeated the test with the compressed tables. I then performed the read tests, but those required some preparation. For one thing, the tables won't reach maximum compression until they're ordered properly, so I had to reorder them the way they would be ordered in production.
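
A rebuild along these lines is one way to get that ordering; the new table name and the ORDER BY columns are illustrative, not the exact statements I ran.

-- Rebuild the table so that like values sit together in the same blocks,
-- giving block-level compression more duplicates to work with.
CREATE TABLE c_history_ordered
  COMPRESS FOR ALL OPERATIONS
  AS SELECT * FROM c_history
     ORDER BY h_c_id, h_date;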

Again, I had to perform the write tests without any indexes on the tables so that I could isolate the write activity from as many other factors as possible. I also took into consideration the fact that Advanced Compression doesn't benefit index lookups; it only helps performance for table lookups. With that in mind, I kept the indexes off the tables for the read tests as well. One other factor came into play for the read tests: I chose to use the tables that were generated in the write tests instead of the ones I generated in the initial bulk load tests. The bulk load tests generated hundreds of millions of rows in some cases, and doing table scans against that much data would prove unfruitful for a 50-user load. Had I done that, the benchmark would have spent all of its time on just a couple of queries, and I wouldn't have gotten an accurate picture of the differences between compressed and uncompressed reads. Because the write tests generated only a few thousand rows, running the read tests against those rows returned results much more quickly, and I stood a much better chance of actually burning through all the queries in the workload. After all, it's the lookup times I'm interested in, not the data itself.

With all that said, I ran the read tests the same way I ran the write tests. I chose queries that I thought would accentuate the read performance of the compressed data and ran the tests for 30 minutes each. These are the queries I used:

SELECT * FROM c_item WHERE i_id = ?

SELECT * FROM c_history WHERE h_c_id = ? AND h_date = ?

SELECT * FROM c_district WHERE d_id = ?

I figured these would be good queries to illustrate the performance benefits of Advanced Compression because the "where" clauses all reference columns with a high level of dupes, and the tables are ordered by those columns, which took full advantage of the compression.

I should also note that all of these load tests were run with the loads generated on the database server itself, to remove the network as a factor in performance. The user load generation takes very few system resources, so it wasn't a factor; and even if it had been, it would have been the same factor in all the tests, because the number of users is the same in every case, making it a constant that can be ruled out as a cause of performance differences.

One more factor came into play for the read tests. Since the write tests produced different results for each table, ending up with different row counts, I created compressed and uncompressed versions of each of the write tables to keep the test honest. If I had done the read tests against the actual tables that were loaded in the write tests, then the read test would have been skewed because the uncompressed test would be going against, say, 35,000 rows while the compressed test would be going against 27,000 rows. The difference in rows could easily account for a difference in Transactions Per Second, so tables with the same number of rows had to be used.
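
One way to do that is to build both read-test copies from a single source so the row counts match exactly. The table names below are placeholders.

-- Uncompressed and compressed copies built from the same source rows,
-- so both read tests scan an identical row count.
CREATE TABLE read_test_uncomp
  AS SELECT * FROM write_test_source;

CREATE TABLE read_test_comp
  COMPRESS FOR ALL OPERATIONS
  AS SELECT * FROM write_test_source;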

Next, I performed a mixed workload test. In this case, I did use the same tables I used for the original bulk loads. Despite what I said above about having the data ordered properly, there's simply no way to test a mixed workload like this other than reading from and writing to the same tables in this random fashion, short of adding clustered indexes to preserve the write order. It's not prudent to introduce clustered indexes into this equation, because the contention caused by page splitting during index maintenance would put undue stress on the test. Further, the PCTFREE I would have to set on the tables to prevent page splitting would negate any benefit I would get from the compression. And if I read from the ordered tables and wrote to the unordered tables, that wouldn't prove anything. Finally, though I could write to the end of the ordered tables I used in the read tests, that would only go so far, and the end of the table (the data generated during the test) would be unordered; performance would suffer as the test progressed. For all these reasons, I read from and wrote to the unordered tables.

I kept the read/write ratio at 50/50. I read from two tables and wrote to the same two tables at the same time for 30 minutes with 50 users, in order to see how these processes performed under a moderate user load. Below are the summary results (average values for all tests).

Test Total Executions Total Rows % CPU Available Memory (MB) Avg. Disk Queue Length Transactions per Second (TPS)
Read Uncompressed 54,015 18,178,763 91 2,328 1 30
Read Compressed 29,039 21,991,413 99 2,325 1 16
Write Uncompressed 89,624 113,777 5 2,367 1 50
Write Compressed 76,419 96,126 25 2,314 3 42
Mixed Uncompressed 46,898 21,181,801 96 2,322 1 28
Mixed Compressed 38,170 14,593,982 58 2,303 6 13

Comparing the Read Uncompressed and Read Compressed results, you can see that in this case compression has not benefited query performance, but resulted in roughly a 50% performance hit (a drop from 30 TPS to 16 TPS). Whether Advanced Compression will improve query performance or not has everything to do with the specific query; some queries will show a performance gain, while others will show a degradation. Naturally, there is some overhead to compression and decompression, which explains the performance hit here.

Likewise, the Write Uncompressed and Write Compressed results show that Advanced Compression exacted a 16% performance tax (i.e., compression cost 8 transactions per second, on average). Again, you can expect to see some overhead from compression operations, and while the overhead turned out to be a bit higher than I expected from my discussions with Oracle, it isn't horrendous. The impact on the disk queue is negligible, and the memory numbers are as expected; 53MB is nothing on a big system. The rise in CPU utilization is probably just a symptom of the other two counters rising slightly. Had the data contained more dupes, we might have seen better performance, simply because the database wouldn't have had to write as much data to the symbol tables in each block. But that's only a guess.

Finally, looking at the results for the mixed reads/writes, TPS falling from 28 to 13 represents a nearly 50% performance hit. Again, this seems significant, but keep in mind that this test is not a true representation of a real-world workload; it's mainly a stress test, so the spikes in CPU and memory usage and disk queue length are simply due to the extra load on the box. But clearly the compression hurt the server's ability to handle the heavy workload. It's causing the disk to work harder, which is evidenced by the higher disk queue, and that's probably throwing off the CPU and memory numbers. However, it's also notable that on this system, a disk queue length of 6 still isn't cause for worry. As a rule of thumb, an acceptable disk queue is 2 per spindle; considering there are 14 spindles in the array, I wouldn't see this as a performance issue until the disk queue reached 28. So the results are still well within acceptable parameters.

OLTP Table Compression Test Kit performance results. The performance tests for Oracle's OLTP Table Compression Test Kit included reads (table scans) and writes (inserts, updates, deletes) on the uncompressed References and Locations tables, and on compressed copies of those tables using 8K and 16K block sizes. I didn't run a mixed workload against this data, and I didn't capture system resource statistics.

Advanced Compression brought huge storage savings (roughly 4:1) to both References and Locations, and these helped deliver a substantial increase in query performance as well. Table scans of References completed roughly 25% faster after compression, and Locations showed an even bigger boost at roughly 40%. Writes (inserts, updates, and deletes), on the other hand, slowed down modestly after compression, taking between 8% and 10% longer for both tables. Here are the results:

Table Name Table Scans (seconds) % of Original % Faster
References 10.46 100 0
References_16C 7.93 75.8 26
References_8C 8.09 77.3 23
Locations 16.56 100 0
Locations_16C 9.83 59.3 41
Locations_8C 10.33 62.3 38

Table Name DML (seconds) % of Original % Slower
References 48.72 100 0
References_16C 53.63 110 10
References_8C 52.47 107.6 8
Locations 71.43 100 0
Locations_16C 77.49 108.4 8
Locations_8C 78.67 110.1 10

What you would expect from Advanced Compression is that different tables and different data will behave differently, and that is just what the two tests show. The References and Locations tables in Oracle's OLTP Table Compression Test Kit not only compressed to a quarter of their original size, but reaped big gains in query performance while giving up only 8% to 10% on write speeds. The TPC-C database tables got a less dramatic trim, ranging from 15% to 57%, and saw no benefit to query or write performance. In fact, performance took a hit.

The key point is that the results I got in my tests, and the results that Oracle gets in its tests, will not be the same results you get in your tests. Moreover, the results you get could be dramatically different from one table to another. Advanced Compression also adds a level of complexity to making changes that you have to be careful about. For example, if you have a very large table and you size your system according to the compression ratio you got in your testing, you can implement that specific scenario in production without any issues. However, if the business or reporting requirements change (as they often do) and you have to order your table differently, you could lose a good deal of your compression. In theory, if you're getting 60% compression when your table is ordered by OrderDate, and your reporting requirements change such that you have to order it by Region, reordering your table could take your compression ratio down from 60% to 30% or even 15%. Worst-case scenario, you might suddenly find that you no longer have enough disk space.
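
Before committing to a reorder like that, it's worth rebuilding the table on the side and comparing segment sizes first. A sketch of the idea, using hypothetical table and column names:

-- Hypothetical: the table was ordered by OrderDate; reporting now needs Region.
CREATE TABLE orders_by_region
  COMPRESS FOR ALL OPERATIONS
  AS SELECT * FROM orders ORDER BY region;

-- Compare the new segment's size against the old one before swapping it in.
SELECT segment_name, ROUND(bytes / 1024 / 1024) AS size_mb
FROM   user_segments
WHERE  segment_name IN ('ORDERS', 'ORDERS_BY_REGION');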

Of course, you could luck out and see the compression ratio shoot to 80%. The point is that Advanced Compression can be a huge asset for the smart DBA who manages it correctly and has the forethought to pull it off and maintain it well, but there's also a hidden cost. You need to be aware that it can increase the complexity of making changes to the base-level table. And even before you implement it, you'll need to do plenty of regression testing to ensure that your apps will work as before.

Not to mention that, at $200 per user plus $10K per CPU, this is an add-on feature that can get rather expensive. DBAs will have to measure their savings up front, and very thoroughly, before deciding to purchase. For example, if you have four CPUs in your server, Advanced Compression will cost upwards of $48K. If in this scenario you have a huge table, or even a huge database, that you want to compress, and you get an average of, say, 50% space reduction for the entire database, then you'd have to save in the neighborhood of 48TB to make Advanced Compression cost-effective versus the cost of storing that extra data on disk. That calculation assumes disk at $1/GB. Of course, if you have multiple copies of the database (for backup, off-site DR, testing, and so on), then the benefits of Advanced Compression are multiplied accordingly and the cost is much easier to justify.

Also be aware that Advanced Compression can impose a cost in system resources. When I set up the read tests for the TPC-C data, I ordered the tables to cluster all the like data together and then queried against that data specifically. I figured this would greatly improve scan performance, but that's not what happened. So unless you test thoroughly, and with your real production data, you may be surprised by the results you get.
