SharePoint, the fastest-growing product in Microsoft's history, is used to store reams of documents, so application performance is a key component of successful SharePoint deployment and adoption. Here are 10 steps to improve the performance of your SharePoint servers.
Step 1: Separate user and database traffic
A common misconception is that servers connected to a high-speed network segment will have plenty of bandwidth to perform all required operations. But SharePoint places a tremendous amount of demand on SQL -- each request for a page can result in numerous calls to the database, not to mention service jobs, search indexing and other operations.
To reduce contention between user and database traffic, connectivity between the front-end servers and SQL Server should be isolated, either on separate physical networks or on virtual LANs. Typically this requires at least two network interface cards in each front-end Web server, with static routes configured to ensure traffic is sent over the correct interface. The same configuration may also be applied to the application and index servers.
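If the database VLAN is reachable through a second NIC, a persistent static route keeps that traffic off the user-facing interface. Here is a minimal sketch, assuming a 10.10.20.0/24 database VLAN and a gateway of 10.10.10.1 on the second interface; both addresses are placeholders for your own subnets.

```powershell
# Persistent (-p) static route that sends all traffic destined for the
# database VLAN through the gateway on the second, SQL-facing interface.
# The addresses below are placeholders; substitute your own subnets.
route -p add 10.10.20.0 mask 255.255.255.0 10.10.10.1

# Confirm the route appears in the routing table
route print
```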
Step 2: Isolate search indexing
A typical medium server farm consists of one or more Web front-end servers, a dedicated index or application server, and a separate SQL database server. By default, crawl traffic initiated by the index server is processed by the same front-end servers responsible for delivering user content. To keep crawl and user traffic from conflicting, an additional front-end server can be added to the farm and dedicated solely to servicing crawl requests (in smaller environments, the index server itself may serve this function). The farm administrator then configures the search service to perform crawls only against this dedicated server. This configuration can reduce traffic to the user-facing Web front-end servers by as much as 70 percent during index operations.
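One common, low-tech way to point crawls at the dedicated server (the search settings in Central Administration offer an equivalent option) is to override name resolution on the index server so that requests for the Web application's host name resolve to the crawl server. The host name and IP address in this sketch are assumptions.

```powershell
# Run on the index server: map the Web application's host name to the
# dedicated crawl server so crawl traffic bypasses the user-facing front ends.
$entry = "192.168.1.50`tportal.contoso.com`t# dedicated crawl target"
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value $entry

# Clear the resolver cache so the new mapping takes effect immediately
ipconfig /flushdns
```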
Step 3: Adjust SQL parameters
One quick way to avoid future headaches is to provision the major SharePoint databases on separate physical disks (or LUNs if a storage area network is involved): one set of disks for search databases, one for the temporary databases, and another for content databases. Additional consideration should be given to isolating the transaction log files (.ldf). Although these do not incur the same level of I/O as the data files, they play a primary role in backup and recovery, and they can grow to several times the size of the primary data files (.mdf).
Another technique is to proactively manage the size and growth of individual databases. By default, SQL Server grows database files in small increments, either 1MB at a time or as a fixed percentage of database size (usually 10 percent). These settings can cause SQL Server to waste cycles constantly expanding databases, and they prevent further data from being written while the expansion is in progress. A better approach is to pre-size databases up to the maximum recommended size (100GB) if space is available and to set autogrowth to a fixed increment (such as 10MB or 20MB), as in the sketch below.
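Here is a rough sketch of the pre-sizing approach using the Invoke-Sqlcmd cmdlet from the SQL Server PowerShell tools. The instance name, database name, and logical file names are assumptions; check sys.database_files for the names in your own content database.

```powershell
# Pre-size the content database data file to 100GB and switch both the data
# and log files to a fixed 20MB growth increment. All names are placeholders.
$sql = @"
ALTER DATABASE [WSS_Content]
    MODIFY FILE (NAME = N'WSS_Content', SIZE = 102400MB, FILEGROWTH = 20MB);
ALTER DATABASE [WSS_Content]
    MODIFY FILE (NAME = N'WSS_Content_log', FILEGROWTH = 20MB);
"@
Invoke-Sqlcmd -ServerInstance "SQL01\SHAREPOINT" -Query $sql
```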
Step 4: Defragment database indexes
SQL Server maintains its own set of indexes for data stored in its databases to improve the efficiency of queries and read operations. Just as with files stored on disk, these indexes can become fragmented, so it is important to plan for regular maintenance operations, which include index defragmentation. Special care should be taken when scheduling these operations: they are resource-intensive and, in many cases, can prevent data from being written to or read from the indexes while they run.
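As a hedged example of what such a maintenance job might look like, the sketch below uses Invoke-Sqlcmd to find heavily fragmented indexes in a content database and rebuild them. The instance name, database name, and dbo schema are assumptions, and the job belongs in an off-peak maintenance window.

```powershell
# Find indexes in the content database with more than 30 percent fragmentation,
# then rebuild each one. REORGANIZE is a lighter-weight alternative for
# moderately fragmented indexes.
$instance = "SQL01\SHAREPOINT"   # placeholder instance name
$database = "WSS_Content"        # placeholder content database
$query = @"
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name                     AS IndexName,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0
  AND ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;
"@
Invoke-Sqlcmd -ServerInstance $instance -Database $database -Query $query |
    ForEach-Object {
        # SharePoint tables live in the dbo schema (assumed here)
        $rebuild = "ALTER INDEX [$($_.IndexName)] ON [dbo].[$($_.TableName)] REBUILD;"
        Invoke-Sqlcmd -ServerInstance $instance -Database $database -Query $rebuild
    }
```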
Step 5: Distribute user data across multiple content databases
Most SharePoint data is stored in lists: tasks, announcements, document libraries, issues, picture libraries, and so forth. A great deal of this data is actually stored in a single table in the content database associated with the site collection. Regardless of how many sites and subsites are created within the SharePoint hierarchy, each site collection has only one associated content database. This means that a site collection with thousands of subsites is storing the bulk of the user data from every list in every site in a single table in SQL.
This can lead to delays, as SQL Server must execute queries against one potentially very large dataset. One way to reduce the workload is to manage the mapping of site collections to content databases. Administrators can use the Central Administration interface (or Windows PowerShell, as shown below) to pre-stage content databases, ensuring that large site collections each get a dedicated database or are grouped logically by size or priority. By adjusting the Maximum Number of Sites setting or taking a database offline, administrators can also control which content database is used when new site collections are created.
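In SharePoint 2010 the same pre-staging can be scripted with the content database cmdlets. In this sketch, the Web application URL, database names, and site-count limits are all placeholders.

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Cap the existing content database at its current number of site collections
# so that new site collections are created in the new database instead.
$current = Get-SPContentDatabase -Identity "WSS_Content"
Set-SPContentDatabase -Identity $current -MaxSiteCount $current.CurrentSiteCount -WarningSiteCount 0

# Pre-stage a second content database for the next group of site collections
New-SPContentDatabase -Name "WSS_Content_Projects" `
    -WebApplication "http://portal.contoso.com" `
    -MaxSiteCount 50 -WarningSiteCount 40
```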
Step 6: Minimize page size
For SharePoint users connected to the portal over a LAN, it is easy to manage content and find resources, but for users on the far end of a slower WAN link, the heavyweight nature of a typical SharePoint page can be a real performance killer.
If you have many remote users, start with a minimal master page, which, as the name implies, strips out unnecessary elements and gives designers a clean slate containing only the base functionality required for the page to render correctly.
Second, most SharePoint pages contain links to supporting files, including JavaScript and style sheets, that require additional time to retrieve and execute. Designers can alter how SharePoint pages retrieve these files using a technique called "delayed loading," which essentially loads the linked files in the background while the rest of the page is rendering.
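As a quick sketch of the first suggestion: SharePoint 2010 ships a lightweight minimal.master in the master page gallery, and a site can be pointed at it (or at your own stripped-down master page) from PowerShell. The site URL is a placeholder, and note that minimal.master deliberately omits navigation and other chrome.

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$web = Get-SPWeb "http://portal.contoso.com"                    # placeholder URL
$web.MasterUrl       = "/_catalogs/masterpage/minimal.master"   # pages that reference ~masterurl/default.master
$web.CustomMasterUrl = "/_catalogs/masterpage/minimal.master"   # publishing pages (~masterurl/custom.master)
$web.Update()
$web.Dispose()
```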
Step 7: Configure IIS compression
SharePoint content comes from two primary sources: static files in the SharePoint root directories (C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12 for 2007 and \14 for 2010) and dynamic data stored in the content databases. At runtime, SharePoint merges the page contents from both sources, then transmits them in an HTTP response to the requesting user. Internet Information Services (IIS) versions 6 and 7 both contain mechanisms for reducing the payload of HTTP responses before transmitting them across the network. Adjusting these settings can reduce the amount of data sent to the client, resulting in shorter load times and faster page rendering.
IIS compression settings can be modified from a base value of 0 (no compression) to a maximum value of 10 (full compression); the setting determines how aggressively IIS executes its compression algorithms.
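On IIS 7, the compression levels live in the httpCompression section of applicationHost.config and can be adjusted with appcmd. The values in this sketch (9 for static content, 4 for dynamic content) are illustrative rather than prescriptive; higher dynamic levels trade front-end CPU for smaller responses, so test before rolling them out to every front end.

```powershell
$appcmd = "$env:windir\system32\inetsrv\appcmd.exe"

# Raise the gzip compression levels for static and dynamic content
& $appcmd set config -section:httpCompression `
    "/[name='gzip'].staticCompressionLevel:9" `
    "/[name='gzip'].dynamicCompressionLevel:4" `
    /commit:apphost

# Dynamic compression is off by default; turn it on at the server level
# (requires the Dynamic Content Compression role service to be installed)
& $appcmd set config -section:urlCompression /doDynamicCompression:true /commit:apphost
```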
Step 8: Take advantage of caching
Much of the content requested by users can be cached in memory, including list items, documents, query results, and Web parts. Site administrators can configure their own cache profiles to meet different user needs. Anonymous users, for example, can be assigned one set of cache policies while authenticated users are assigned another, allowing content editors to get a more recent view of content changes than general readers. Cache profiles can also be configured by page type, so publishing pages and layout pages behave differently, and administrators have the option to specify caching on the server, the client, or both.
In addition, the SharePoint object cache can significantly improve execution times for resource-intensive components such as the Content Query Web Part, and large objects that are requested frequently, such as images and files, can be cached on disk for each Web application to improve page delivery times.
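Disk-based (BLOB) caching is off by default and is enabled per Web application in web.config. Here is a minimal sketch that flips it on; the virtual directory path and cache location are assumptions, and the change must be repeated on every Web front end.

```powershell
# Enable the BLOB cache for one Web application by editing its web.config.
$webConfig = "C:\inetpub\wwwroot\wss\VirtualDirectories\80\web.config"   # placeholder path
$xml = [xml](Get-Content $webConfig)

$blobCache = $xml.configuration.SharePoint.BlobCache
$blobCache.SetAttribute("location", "D:\BlobCache\14")   # keep the cache off the system drive
$blobCache.SetAttribute("maxSize", "10")                 # maximum cache size, in GB
$blobCache.SetAttribute("enabled", "true")

$xml.Save($webConfig)   # saving web.config triggers an application restart
```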
Step 9: Manage page customizations
SharePoint Designer is a useful tool for administrators and power users, but page customization can be harmful to overall performance. When a page is customized, its entire contents, including the markup and inline code, are stored in the database and must be retrieved each time the page is requested. This introduces relatively little overhead on a page-by-page basis, but in larger environments with hundreds or even thousands of customized pages, all that back-and-forth to the database can add up to significant performance degradation.
To prevent this problem, administrators should implement a policy that restricts page customizations to situations where they are absolutely necessary. Site collection and farm administrators also have the option of disabling SharePoint Designer entirely or, when necessary, using the Reset to Site Definition option to undo changes and revert to the original content.
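In SharePoint 2010 the Designer restrictions can be applied per Web application, either in Central Administration or from PowerShell. A sketch, with the Web application URL as a placeholder:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$webApp = Get-SPWebApplication "http://portal.contoso.com"   # placeholder URL
$webApp.AllowDesigner = $false             # block SharePoint Designer edits entirely
$webApp.AllowMasterPageEditing = $false    # or leave Designer on but protect master pages
$webApp.Update()
```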
Step 10: Limit navigation depth
One of the most significant design elements on any portal site is the global, drop-down, fly-out menu at the top of each page. It seems like a handy way to navigate through all the various sites and pages -- until it becomes so deep and cluttered that all ability to navigate beyond the first few levels is lost completely. Even worse, fetching all the data to populate the navigation menus can be resource-intensive on sites with deep hierarchies.
SharePoint designers can customize the depth and number of levels in each navigation menu by modifying the parameters of the navigation controls within the master page. Administrators should limit that depth to a manageable level that does not affect performance.
Eric Shupps is a SharePoint MVP who specializes in SharePoint performance, helping organizations understand where and how SharePoint performance can be improved. This article was written in partnership with Idera.
This story, "10 steps to optimize SharePoint performance," was originally published by Network World.