Last night, my sister team in AWS launched a service I’m very excited about: Amazon ElastiCache. Historically, caching has been one of the most widely used techniques for building scalable web applications: caches store the most frequently accessed computation results, the ones that take longer (or are harder) to re-compute at the source. In-memory caches are typically placed in front of databases so that frequently accessed results can be retrieved from memory quickly (see examples of how to use MySQL and memcached together, here). However, to ensure that the in-memory cache does not become a scalability bottleneck itself, distributed cache clusters use techniques like distributed hash tables (DHTs) so that the cache cluster can be “scaled out”. As caching systems grow, managing them in a large-scale environment becomes a real challenge.
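The read-through pattern described above (often called "cache-aside") can be sketched in a few lines of Python. The `FakeCache` class and `slow_db_query` function here are made-up stand-ins for a real memcached client and a real database query, just to show the shape of the logic:

```python
import time

# A dict-backed stand-in for a memcached client; a real deployment
# would use a client library (e.g. python-memcached or pymemcache)
# pointed at the cache cluster. The names here are illustrative.
class FakeCache:
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value

# Stand-in for an expensive database query.
def slow_db_query(user_id):
    time.sleep(0.01)  # simulate query latency
    return {"id": user_id, "name": "user-{}".format(user_id)}

cache = FakeCache()

def get_user(user_id):
    key = "user:{}".format(user_id)
    user = cache.get(key)      # 1. try the in-memory cache first
    if user is None:           # 2. on a miss, fall back to the database...
        user = slow_db_query(user_id)
        cache.set(key, user)   # 3. ...and populate the cache for next time
    return user

first = get_user(42)   # cache miss: goes to the database
second = get_user(42)  # cache hit: served from memory
```

The second call returns the cached result without touching the database, which is exactly why fronting a database with an in-memory cache pays off for frequently accessed data.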
Today, AWS has made the process of running a cache cluster easier with a new managed cache offering called ElastiCache. A quote from the detail page sums it up well:
“Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. Amazon ElastiCache is protocol-compliant with Memcached, a widely adopted memory object caching system, so code, applications, and popular tools that you use today with existing Memcached environments will work seamlessly with the service. Amazon ElastiCache simplifies and offloads the management, monitoring, and operation of in-memory cache environments, enabling you to focus on the differentiating parts of your applications.”