Cache aggressively
Caching is crucial in the cloud, and you can ensure high performance through four modes of caching: caching in the browser, caching objects, caching queries, and caching data.
Caching in the browser: All items in a page that the browser fetches can be cached. This includes images, HTML fragments, scripts, and so forth. Although many browsers are configured to perform a lot of caching by default, this is often not sufficient, because pages are returned to the browser with cache-control settings that are dictated primarily by the Web server itself. That's good news for the operations team, as it is another switch or dial that can be turned to speed things up.
Typically, cache control should be set with a maximum age of one week. This becomes the default for objects sent to the browser. They can still be expired manually in code and application logic as necessary.
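As an illustration, here is a minimal sketch of setting that header from a Python/Flask application tier; the framework choice and the one-week (604,800-second) value are assumptions, and the same header can just as easily be set in Apache or Nginx configuration:

    from flask import Flask

    app = Flask(__name__)

    # Default lifetime for files Flask serves itself (images, CSS, JavaScript): one week.
    app.config["SEND_FILE_MAX_AGE_DEFAULT"] = 604800

    @app.after_request
    def set_cache_headers(response):
        # Stamp every response with a one-week max-age unless a view already set its own;
        # individual responses can still be expired in application logic as needed.
        response.headers.setdefault("Cache-Control", "public, max-age=604800")
        return response

    @app.route("/")
    def home():
        return "Hello, cached world"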
Caching objects: Object caches are a great addition to the caching layer at the Web server tier. Application languages such as PHP and Ruby include libraries for using memcache, a popular object cache that stores key-value pairs. Whenever data is requested from the database, the object cache is checked first. If the data is there, it can be returned as much as 100 times faster -- no network round trip to the database server, no complex queries to execute.
If the data is not available in the object cache, a cache miss happens, and a request to the database will follow. The results returned will then be placed in the object cache for future use.
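A minimal cache-aside sketch of that flow in Python, assuming the pymemcache and PyMySQL client libraries and hypothetical hostnames, table, and key names; the same pattern applies to the PHP and Ruby memcache libraries mentioned above:

    import json

    import pymysql                                   # MySQL driver (assumed)
    from pymemcache.client.base import Client        # memcache client (assumed)

    cache = Client(("127.0.0.1", 11211))             # local memcached; a managed endpoint works the same way
    db = pymysql.connect(host="db-master.internal", user="app", password="secret",
                         database="shop", cursorclass=pymysql.cursors.DictCursor)

    def get_product(product_id, ttl=300):
        """Check the object cache first; on a miss, query MySQL and populate the cache."""
        key = "product:%d" % product_id
        cached = cache.get(key)
        if cached is not None:                       # cache hit: no round trip to the database
            return json.loads(cached)

        with db.cursor() as cur:                     # cache miss: fall through to the database
            cur.execute("SELECT id, name, price FROM products WHERE id = %s", (product_id,))
            row = cur.fetchone()
        if row is not None:
            cache.set(key, json.dumps(row, default=str), expire=ttl)
        return row

The TTL acts as a safety net: even if the application forgets to invalidate an entry when the underlying row changes, the stale copy ages out on its own.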
In AWS, a systemwide solution called ElastiCache is available. This technology is built on memcache but does not require individual configuration, monitoring, or tuning. Simply configure your application for this object cache -- you're off and running. Scalability is built in as a bonus.
Caching queries: Oh, those queries -- requiring heaps of attention to get them just right and as much attention to keep them working properly! That's why the query cache should be another tool in your arsenal of performance equipment. Be sure it is enabled in MySQL and configured with enough memory that useful cached entries are not evicted before their useful life has ended.
The query cache keeps the text of each query and its result set together in one place. If the database finds an exact match for a query it has already run, it sends the cached results back to the Web server without doing any real work at all.
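The sketch below reads the Qcache counters so you can see whether the cache is sized generously enough; it assumes the PyMySQL driver, placeholder credentials, and a pre-8.0 MySQL server (the query cache was removed in MySQL 8.0):

    import pymysql  # MySQL driver (assumed)

    db = pymysql.connect(host="db-master.internal", user="admin", password="secret")

    with db.cursor() as cur:
        cur.execute("SHOW VARIABLES LIKE 'query_cache%'")
        settings = dict(cur.fetchall())
        cur.execute("SHOW GLOBAL STATUS LIKE 'Qcache%'")
        status = dict(cur.fetchall())

    hits = int(status.get("Qcache_hits", 0))
    misses = int(status.get("Qcache_inserts", 0)) + int(status.get("Qcache_not_cached", 0))
    hit_ratio = hits / (hits + misses) if hits + misses else 0.0   # rough effectiveness measure

    print("query_cache_size:", settings.get("query_cache_size"))
    # A steadily climbing Qcache_lowmem_prunes counter means useful entries are being
    # evicted for lack of memory -- exactly the situation described above.
    print("lowmem_prunes:", status.get("Qcache_lowmem_prunes"))
    print("hit ratio: %.1f%%" % (100 * hit_ratio))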
Caching database data: The MySQL database contains a very important setting: the buffer cache (for InnoDB tables, the buffer pool), which caches index and table data in memory. Data that has already been cached incurs only logical reads, so you avoid a ton of physical I/O against your data files. As we've mentioned, because of the challenges of EBS virtualized storage, this is doubly important in the Amazon AWS environment.
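Here is a similar sketch -- same assumed PyMySQL driver and placeholder credentials -- that compares logical reads against physical reads from the data files, which is the quickest way to see how well the buffer cache is working:

    import pymysql  # MySQL driver (assumed)

    db = pymysql.connect(host="db-master.internal", user="admin", password="secret")

    with db.cursor() as cur:
        cur.execute("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")
        size_bytes = int(cur.fetchone()[1])
        cur.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'")
        status = dict(cur.fetchall())

    logical_reads = int(status["Innodb_buffer_pool_read_requests"])   # served from memory when possible
    physical_reads = int(status["Innodb_buffer_pool_reads"])          # had to go to the data files (EBS)
    hit_ratio = 1 - physical_reads / logical_reads if logical_reads else 0.0

    print("buffer pool size: %d MB" % (size_bytes // 2**20))
    print("read hit ratio: %.2f%%" % (100 * hit_ratio))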
Scaling the database tier
The Web server tier can be set up with autoscaling in AWS because the data on each server does not change dynamically. Sure, some files and images are uploaded or new code is deployed, but that happens only intermittently.
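For reference, a hedged sketch of wiring up that Web-tier autoscaling with the boto3 SDK; the AMI ID, names, and sizes are placeholders, and it uses the launch-configuration style of this era (newer deployments would use launch templates):

    import boto3  # AWS SDK for Python (assumed)

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # The AMI is a pre-baked image of a Web server; its contents change only when
    # new code or assets are deployed, which is what makes autoscaling safe here.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-lc-v1",
        ImageId="ami-0123456789abcdef0",        # placeholder Web tier image
        InstanceType="m5.large",
    )

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchConfigurationName="web-lc-v1",
        MinSize=2,
        MaxSize=10,
        DesiredCapacity=2,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )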
Scaling the database tier, however, typically takes a bit of finesse and careful execution, because it poses specific challenges. The most widely used method is to maintain a single master database that receives all changes -- the Insert, Update, and Delete statements from the application -- while all Select activity is sent to one or more slaves. The slaves can be reimaged regularly, so you always have an up-to-date image of a MySQL slave ready. When you need to add read capacity, simply spin up the latest image; the instance will boot, and MySQL will start and connect to the single master just as the other slaves do.
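A bare-bones sketch of that read/write split in Python, assuming the PyMySQL driver and hypothetical master and slave hostnames; real applications usually push this routing into the framework's database layer:

    import random

    import pymysql  # MySQL driver (assumed)

    MASTER = {"host": "db-master.internal", "user": "app", "password": "secret", "database": "shop"}
    SLAVES = [
        {"host": "db-slave-1.internal", "user": "app", "password": "secret", "database": "shop"},
        {"host": "db-slave-2.internal", "user": "app", "password": "secret", "database": "shop"},
    ]

    def run(sql, params=None):
        """Send Selects to a randomly chosen slave; everything else goes to the master."""
        is_read = sql.lstrip().upper().startswith("SELECT")
        target = random.choice(SLAVES) if is_read else MASTER
        conn = pymysql.connect(**target)
        try:
            with conn.cursor() as cur:
                cur.execute(sql, params)
                if is_read:
                    return cur.fetchall()
            conn.commit()                    # Insert/Update/Delete are committed on the master
        finally:
            conn.close()

Adding read capacity is then just a matter of launching another instance from the latest slave image and appending its hostname to the slave list.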
If capacity on the master database is waning, you can scale that server vertically. Set up a new, larger EC2 instance in AWS and configure it to replicate from the current master. Point all of your existing slaves at the new master, then fail the application over to the new, larger instance. Voilà, you've scaled vertically with zero downtime.
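The repointing step might look like the following sketch -- again assuming PyMySQL, placeholder hostnames and credentials, and binary log coordinates read from SHOW MASTER STATUS on the new master at cutover time (GTID-based replication simplifies this further):

    import pymysql  # MySQL driver (assumed)

    NEW_MASTER = "db-master-xlarge.internal"             # the larger replacement instance
    SLAVES = ["db-slave-1.internal", "db-slave-2.internal"]

    # Coordinates captured from SHOW MASTER STATUS on the new master once it has
    # caught up and writes have been briefly paused for the cutover.
    LOG_FILE, LOG_POS = "mysql-bin.000042", 120

    for host in SLAVES:
        conn = pymysql.connect(host=host, user="repl_admin", password="secret")
        with conn.cursor() as cur:
            cur.execute("STOP SLAVE")
            cur.execute(
                "CHANGE MASTER TO MASTER_HOST='%s', MASTER_USER='repl', "
                "MASTER_PASSWORD='repl_secret', MASTER_LOG_FILE='%s', MASTER_LOG_POS=%d"
                % (NEW_MASTER, LOG_FILE, LOG_POS)
            )
            cur.execute("START SLAVE")
        conn.close()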