Filtering effects in cache hierarchies

A friend of mine was building a multi-tiered application with a cache fronting a database. He was investigating why the database's memory hit ratio was lower than he expected, and how to improve it. This matters mostly when you care about tail latency, i.e., the TP99 or TP99.9 percentiles. One thing he found after deeper investigation is what he called "cherry picking" by the caching tier: the next level never saw the requests with high cache locality. A similar observation was made in Web cache research, where it is referred to as a filtering effect.

One of the important works in this area is Carey Williamson's "On Filter Effects in Web Caching Hierarchies". The paper uses trace-driven simulation to study what happens to workload locality as a request stream travels through a cache hierarchy. Important results to call out:
– Proxy caches filter out the most popular documents from a workload. The filtering effect typically changes the Zipf-like document popularity profile of the input stream into a two-part piecewise-linear document popularity profile for the output stream. The upper leftmost part of the plot is significantly flattened.
– The percentage of unique documents referenced and the percentage of one-timer documents (documents requested exactly once) tend to increase as the request stream passes through a cache.
– Among the caches in a caching hierarchy, the first-level cache has the most pronounced filtering effect on the workload.
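To make the filtering effect concrete, here is a small simulation sketch: an LRU cache is fed a Zipf-like request stream, and the unique-document and one-timer fractions of the input stream are compared against the miss stream that the cache passes downstream. The parameters (universe size, Zipf exponent, cache size) are illustrative choices of mine, not values from the paper.

```python
import random
from collections import Counter, OrderedDict

# Sketch: push a Zipf-like request stream through a small LRU cache and
# compare the input stream with the miss stream seen downstream.
# All parameters below are illustrative, not taken from the paper.

random.seed(42)

N_DOCS = 1000
ALPHA = 0.8           # Zipf-like popularity exponent
N_REQUESTS = 50_000
CACHE_SIZE = 100

weights = [1.0 / (rank ** ALPHA) for rank in range(1, N_DOCS + 1)]
stream = random.choices(range(1, N_DOCS + 1), weights=weights, k=N_REQUESTS)

cache = OrderedDict()         # LRU order: most recently used at the end
misses = []
for doc in stream:
    if doc in cache:
        cache.move_to_end(doc)           # hit: refresh recency
    else:
        misses.append(doc)               # miss is forwarded downstream
        cache[doc] = True
        if len(cache) > CACHE_SIZE:
            cache.popitem(last=False)    # evict least recently used

def unique_fraction(reqs):
    return len(set(reqs)) / len(reqs)

def one_timer_fraction(reqs):
    counts = Counter(reqs)
    return sum(1 for c in counts.values() if c == 1) / len(counts)

for name, reqs in (("input", stream), ("misses", misses)):
    print(f"{name:>6}: {len(reqs)} reqs, "
          f"{unique_fraction(reqs):.1%} unique, "
          f"{one_timer_fraction(reqs):.1%} one-timers")
```

On a run like this, the miss stream is shorter but has a noticeably higher fraction of unique documents and one-timers than the input stream, which is exactly the popularity flattening the paper describes.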

Even though this work is about a decade old, it is a highly recommended read for anyone trying to understand the effects of caching on their systems.


2 Responses to “Filtering effects in cache hierarchies”

  1. Gokul Soundar Says:

    Very interesting post. I'm surprised that I didn't know of this work sooner! I looked at a similar problem in the context of database buffer pools and storage servers, and tried to derive the secondary hit ratio (at the downstream cache) by monitoring behavior at the first level. In general this turns out to be very difficult for arbitrary replacement policies, but some approximations can be made for LRU-like policies.

    (Shameless plug)
    More details in the VLDB paper:
    "Dynamic Partitioning of the Cache Hierarchy in Shared Data Centers", VLDB '08
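The LRU-only approximation the comment alludes to can be sketched with Mattson's classic stack-distance observation: under LRU, a request hits in a cache of size C exactly when its reuse (stack) distance is at most C, so a single pass over the first-level trace can predict hit ratios at every level. The exclusive-hierarchy assumption and the toy trace below are my illustration, not the paper's method.

```python
# Sketch: Mattson stack-distance analysis for LRU. A request hits in an
# LRU cache of size C iff its stack distance is <= C, so one trace pass
# yields hit ratios for the first level and (assuming an exclusive
# hierarchy) for a downstream LRU level as well. O(n*m) for clarity.

def lru_stack_distances(trace):
    stack = []                    # most recently used at index 0
    dists = []
    for doc in trace:
        if doc in stack:
            i = stack.index(doc)
            dists.append(i + 1)   # 1-indexed depth from the top
            stack.pop(i)
        else:
            dists.append(float("inf"))   # compulsory (cold) miss
        stack.insert(0, doc)
    return dists

def lru_hit_ratio(dists, cache_size):
    return sum(1 for d in dists if d <= cache_size) / len(dists)

trace = [1, 2, 1, 3, 1, 2, 4, 1, 2, 3]   # toy trace for illustration
dists = lru_stack_distances(trace)
l1 = lru_hit_ratio(dists, 2)             # first-level cache, C1 = 2
both = lru_hit_ratio(dists, 2 + 2)       # exclusive hierarchy, C1 + C2
l2 = both - l1                           # hits absorbed by level two
```

The key design point is that the stack distances are a property of the trace alone, so the secondary hit ratio falls out as the difference of two cumulative counts; for non-LRU policies no such single summary of the trace exists, which is what makes the general problem hard.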

