Oracle Database 11g shoots the moon
Oracle's enormous 11g release rumbles with an impressive array of performance and management aids, elegant application testing, standbys that earn their keep, and the promise of lower storage requirements
That said, there's nothing in the Active Data Guard setup that couldn't be done by a wizard, and in fact a wizard will be coming in Grid Control 11g, according to Oracle. Built-in monitoring and alerts would also be an improvement. As it stands, you have to set up your own scripts and alerts to monitor whether your logs are being shipped and applied.
Caveats aside, Active Data Guard is a huge leap forward, and DBAs will love what it can do. After I got the standby up and running, it performed well and did exactly what it's supposed to do. I could hit it with queries or write to it, and it would resync with the primary when I switched back. It's also easy to use: switching between modes takes just a single statement at the command line.
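The mode switches are roughly as follows on an 11g physical standby. This is a sketch, not a full procedure: it assumes the standby is mounted with flashback logging configured, and omits the broker-based (DGMGRL) alternative.

```sql
-- Stop redo apply before changing modes
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- Snapshot standby: open the standby read/write for testing...
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
-- ...then discard the test changes and resync with the primary
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

-- Active Data Guard: open read-only while redo continues to apply
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT;
```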
Real Application Testing
Real Application Testing, an option comprising Database Replay and SQL Performance Analyzer, is a new feature in Oracle Database 11g that allows you to capture a workload, replay it on the same system or a different one, and then compare the results. Database Replay will replay your workload exactly as it happened, complete with concurrency and timing, allowing you to fully test system changes against your actual production workload. Thus, you can see the true impact of changes to the database (index changes, percentage free, table partitioning, and so on) before introducing those changes into production.
In my tests, Database Replay was easy to configure, and it performed exactly as expected. You have to learn just a couple of simple concepts related to setting it up -- such as how to create the directory object in the database to capture the workload and how to start the replay from the command line -- but once you get past them, it's smooth sailing.
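The directory object is ordinary Oracle DDL; the database writes the capture files to the operating-system path it points at. A minimal sketch, with the directory name and path purely illustrative:

```sql
-- The OS path must exist and be writable by the Oracle software owner
CREATE OR REPLACE DIRECTORY rat_dir AS '/u01/app/oracle/rat';
```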
My test was a 50-user read/write mixed workload. In between tests, I deleted indexes from my tables so that I could see changes in the report numbers. As expected, dropping the indexes increased write performance while decreasing read performance. I was able to verify that the replay mechanism re-created all my threads and ran them flawlessly.
Setting up a capture is a simple four- or five-step process. You typically restart the database, set up options for the capture (which parts of the workload to include or exclude), create the directory to save the capture files, and either set the capture start time and duration or start and stop it manually. If you don't want to capture system-level activity such as background processes or indexing operations, you can easily filter it out. Or if you want to limit the capture to a specific application or a specific piece of code you were having a problem with, you can isolate it to make the deltas easier to read. The GUI makes configuring exclusion or inclusion filters easy.
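The same steps can be driven from the command line through the DBMS_WORKLOAD_CAPTURE package. The sketch below captures only one application user's activity for an hour; the filter, capture, and user names are my own illustrative choices:

```sql
BEGIN
  -- Inclusion filter: capture only sessions connected as APPUSER
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER(
    fname      => 'only_app_user',
    fattribute => 'USER',
    fvalue     => 'APPUSER');

  -- Start capturing into the directory object created earlier
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
    name           => 'peak_load',
    dir            => 'RAT_DIR',
    duration       => 3600,        -- stop automatically after one hour
    default_action => 'EXCLUDE');  -- keep only what the filters include
END;
/

-- Or stop it manually instead of waiting out the duration:
EXEC DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;
```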
You can replay the workload on the host system itself (as I did) or from one or more clients. If you capture a large workload, Database Replay has a calibration tool that will tell you how many replay clients you will need.
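Both the calibration check and the replay itself run through the wrc client that ships with the 11g install. A sketch, with paths illustrative and credentials elided:

```shell
# Ask how many replay clients the captured workload needs
$ wrc mode=calibrate replaydir=/u01/app/oracle/rat

# Then start that many clients, each pointed at the capture files
$ wrc system/<password> mode=replay replaydir=/u01/app/oracle/rat
```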
The reports produced are replete with result numbers comparing the captured workload and the replay. You can get literally hundreds of different calculations, but while all of these metrics are available, they're not very pretty. In fact, they're just one table of numbers after another. It would be nice to have some high-level graphs and such to point you in the right direction. You can export the data, however, so you should be able to do something else with it if you wish.
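If you'd rather pull the numbers yourself, the replay comparison is also reachable from SQL. A hedged sketch; the replay ID is whatever the dictionary view reports on your system:

```sql
-- Find the replay you want to report on
SELECT id, name FROM dba_workload_replays;

-- Generate the full replay report (TEXT, HTML, or XML)
SELECT DBMS_WORKLOAD_REPLAY.REPORT(replay_id => 11, format => 'HTML')
  FROM dual;
```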