Windows on multicore, redux: How I tested

Simulating multiprocess workloads using DMS Clarity Studio's ADO Stress, MAPI Stress, and WMP Stress workload objects

As in my previous round of testing Windows on multicore hardware, I tested the x64 editions of Windows XP, Windows Vista, and Windows 7 on dual-core, quad-core, and eight-core systems (see the article, "Windows 7's killer feature: Windows on multicore, redux"). On each system, I simultaneously ran 10 instances each of a database workload, a MAPI message store workflow workload, and a Windows Media Player workload. To draw performance comparisons, I measured the time each instance of the database and MAPI workloads took to complete its transaction loop; the Windows Media Player workload served only to place additional stress on the systems.
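The per-instance measurement can be sketched as follows. Here `transaction` is a hypothetical stand-in for one database or MAPI transaction, and the timing approach (wall-clock time around the whole loop) is my assumption for illustration, not a description of Clarity Studio's internals.

```python
import time

def timed_loop(transaction, iterations, delay_s=1.0):
    """Run one workload instance's transaction loop and return the
    elapsed wall-clock seconds, as reported for each database and
    MAPI instance in the tests."""
    start = time.perf_counter()
    for _ in range(iterations):
        transaction()        # one database or MAPI transaction
        time.sleep(delay_s)  # the configured delay loop
    return time.perf_counter() - start
```

Comparing these elapsed times across operating systems and core counts is what yields the performance deltas discussed in the main article.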

The test suite I used, DMS Clarity Studio, makes it easy to configure and execute packages of diverse workloads. Because Studio is designed to launch and control these workloads in parallel -- and can create multiple instances of each workload to further increase the complexity of the simulation -- it makes it easy to push today's dual-core and quad-core systems to their limits. For this project I employed three of Studio's bundled workload objects: ADO Stress, MAPI Stress, and WMP Stress.
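The launch pattern -- one package spawning a configurable number of discrete workload processes in parallel -- can be sketched with Python's standard multiprocessing module. The function names here are hypothetical; this illustrates the mechanism, not Clarity Studio's actual implementation.

```python
import multiprocessing as mp
import time

def workload_instance(instance_id, iterations, delay_s):
    # Stand-in for one workload process (ADO, MAPI, or WMP Stress):
    # a continuous work loop with a configurable delay.
    for _ in range(iterations):
        # ... one transaction or playback pass would run here ...
        time.sleep(delay_s)

def launch_package(n_instances=10, iterations=3, delay_s=1.0):
    # Each instance is a discrete OS process, as in the test package.
    procs = [mp.Process(target=workload_instance, args=(i, iterations, delay_s))
             for i in range(n_instances)]
    for p in procs:
        p.start()   # launch all instances in parallel
    for p in procs:
        p.join()    # wait for the whole package to finish
    return [p.exitcode for p in procs]
```

Because each instance is a separate OS process rather than a thread, the scheduler is free to spread the package across every available core -- the property the tests were designed to exercise.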

ADO Stress is a platform- and provider-neutral database workload object that uses the ActiveX Data Objects libraries to run transactions against any ADO/OLE-DB accessible data store. For the purposes of this project, I configured ADO Stress to access a locally installed instance of SQL Server 2008 Developer Edition. Using the various options in the ADO Stress dialog, I configured the workload to access SQL Server using the SQL Native Client driver and to use ADO transaction support if available. I set the workload to execute continuously, with a 1-second delay loop, and to create 10 concurrent instances of itself (each instance being a discrete process) when the test package launched.
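In spirit, each ADO Stress instance runs a loop like the sketch below. Here sqlite3 stands in for SQL Server 2008 and the ADO/OLE-DB stack, and the table and statements are invented for illustration; only the shape of the loop -- an explicit transaction per pass, followed by a delay -- follows the configuration described above.

```python
import sqlite3
import time

def transaction_loop(conn, iterations, delay_s=1.0):
    cur = conn.cursor()
    for i in range(iterations):
        cur.execute("BEGIN")  # explicit transaction per pass, mirroring
                              # the "use ADO transaction support" option
        cur.execute("INSERT INTO orders (qty) VALUES (?)", (i,))
        cur.execute("SELECT COUNT(*) FROM orders")
        cur.fetchone()
        cur.execute("COMMIT")
        time.sleep(delay_s)   # the configured delay loop

# isolation_level=None puts sqlite3 in autocommit mode so the
# BEGIN/COMMIT statements above manage transactions explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE orders (qty INTEGER)")
transaction_loop(conn, 3, delay_s=0.0)
```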

MAPI Stress is a platform- and provider-neutral workflow workload object that uses the Collaboration Data Objects (CDO) libraries to run transactions against any MAPI/CDO accessible message store. For the purposes of this project, I used a locally hosted copy of a Microsoft Outlook Mailbox (PST) file. I configured MAPI Stress to generate the maximum number of message objects (approximately 25MB of mixed e-mail and attachment data) per transaction and to execute continuously, with a 1-second delay loop. As with ADO Stress, I configured the workload to create 10 concurrent instances of itself at package launch.
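The payload side of such a transaction -- a fixed volume of mixed e-mail and attachment data -- can be sketched with Python's standard email library. The addresses, sizes, and function name are invented for illustration; MAPI Stress itself builds MAPI message objects in the PST store, not MIME text.

```python
from email.message import EmailMessage

def build_stress_message(body_kb=16, attachment_mb=1):
    # Generate one message of body text plus a binary attachment,
    # loosely modeled on MAPI Stress's per-transaction payload.
    msg = EmailMessage()
    msg["From"] = "loadgen@example.com"
    msg["To"] = "store@example.com"
    msg["Subject"] = "Workflow stress message"
    msg.set_content("x" * (body_kb * 1024))
    msg.add_attachment(b"\0" * (attachment_mb * 1024 * 1024),
                       maintype="application", subtype="octet-stream",
                       filename="payload.bin")
    return msg
```

Generating the data fresh on every pass, rather than replaying a canned file, keeps each transaction doing real marshaling and I/O work.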

WMP Stress is a platform- and provider-neutral multimedia playback workload object that uses the Windows Media Player interfaces to play back any media content supported by the currently configured Windows Media environment. For the purposes of this project, I selected a single media file -- the welcome2.asf file from an earlier-generation Windows Media Services platform -- and then configured the workload to play the clip continuously, with a 1-second delay loop. Here again, I configured the workload to create 10 concurrent instances of itself at package launch.

The above scenario represents a massive workload of mixed database, workflow, and media playback tasks -- 30 concurrent processes in all, generating a whopping 430 concurrent execution threads. (See the 30-way workload in action on XP.) I repeated this scenario across all three Windows operating systems, installed in a triple-boot configuration on dual-core, quad-core, and eight-core test beds: a Dell OptiPlex 745 with a Core 2 Duo E6700, 4GB of RAM, and a 10,000-rpm SATA disk; an HP EliteBook 8730w with a Core 2 Extreme QX9300, 8GB of RAM, and a 7,200-rpm SATA disk; and an HP Z800 with dual quad-core Intel Xeon 5580 CPUs, 12GB of RAM, and a 15,000-rpm SAS drive, respectively.

This story, "Windows on multicore, redux: How I tested," was originally published at InfoWorld.com. Follow the latest developments in Windows 7 and Windows at InfoWorld.com.
